What should be a test?
A good test describes something a user wants to do or achieve. A user story or user journey should generally map 1:1 to a test case.
Test examples
- Log in with correct credentials
- Log in with incorrect credentials
- Create a new task
- Edit a task’s due date
Creating a test
You can create test cases through the UI with AI-generated suggestions, or conversationally through the AI Chat Assistant.
Create Tests via UI
Click the “Add Test Case” button to open the test creation modal and configure the following fields (see the sketch after this list):
- Name: Clear, descriptive name (e.g., “Create admin user and verify access”)
- Goal: What the test should accomplish in natural language
- Expected Result (optional): What success looks like
- Dependencies: Configure test execution order (see Test Dependencies for WAIT_FOR and RESUME_FROM)
- Configurations: Add required data like credentials or test accounts
- Advanced: Agent selection and other settings
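If it helps to see these fields as data, here is a minimal sketch of how such a test definition might be shaped. The interface and field names below are illustrative assumptions, not QA.tech’s actual data model:

```typescript
// Hypothetical sketch only: field names are assumptions, not QA.tech's schema.
interface TestCaseDraft {
  name: string;                 // e.g., "Create admin user and verify access"
  goal: string;                 // natural-language objective the agent executes
  expectedResult?: string;      // optional: what success looks like
  dependencies?: Array<{
    testId: string;
    mode: "WAIT_FOR" | "RESUME_FROM"; // see Test Dependencies
  }>;
  configurations?: Record<string, string>; // e.g., credentials, test accounts
  agent?: string;               // advanced: agent selection
}

const example: TestCaseDraft = {
  name: "Create admin user and verify access",
  goal: "Create a new user with the Admin role and log in as that user",
  expectedResult: "The admin dashboard is visible after login",
};
```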
Create Tests via AI Chat Assistant
The AI Chat Assistant provides a conversational, exploratory approach to test creation:
- Natural conversation — Describe your testing needs in plain English. Ask for multiple tests, request specific coverage areas, or iterate on suggestions through back-and-forth conversation.
- Upload context — Drag and drop specification documents, design files, or requirements (PDFs, images, text files) directly into the chat. The AI uses this context to generate more accurate, relevant tests.
- Safe experimentation — Tests generated in chat aren’t committed to your project until you explicitly click “Add Selected Tests”. You can review, refine, or discard them without affecting your team. Uploaded files remain isolated to that chat conversation only.
Example prompts:
- “Generate 5 tests for the checkout flow”
- “What areas of my product should I cover with test cases first?”
- “Create a test that validates login and checks the user profile page”
Review and Refine
After creating a test, the AI agent automatically attempts to execute it and generate test steps. Click the review button to inspect the results.
Activating and Organizing Tests
After creating and reviewing your test, you’ll want to activate it and organize it within your project.
Activate Your Test
Tests are created in draft mode so you can review and refine them before they run. When you’re ready, click the “Activate” button to enable the test and make it part of your active test suite.
If your test has dependencies that are also in draft mode, QA.tech will prompt you to activate them together to ensure proper execution order.
Organize with Scenarios
On the Test Cases page, you can drag and drop tests to organize them into Scenario groups. Scenarios help you:
- Keep related tests together (e.g., “Checkout Flow”, “User Management”)
- Create logical test groupings for better organization
- Get a clear overview during test execution
- Set up test dependencies within related workflows
Writing Effective Tests
Writing a Good Goal
The goal is the main objective of the test. The agent uses this to build steps and adapt when your application changes. Focus on describing what to do, not what to validate (use the expected result for that). Good goal examples:
- Search for ‘Chair’, navigate to a product and add it to the cart
- Invite a new member with Admin role to the project
- Open the customer support chat and send a message
A good goal is:
- Action-oriented — Start with verbs like “Create”, “Search”, “Navigate”, “Add”
- Specific — Include exact details (product name, user role, button labels)
- Focused — Describe actions to take, not validation criteria
Writing a Good Expected Result
The expected result defines what the agent should verify at the end of the test. Describe what should be visible or observable when the test completes successfully. Good expected result examples:
- The page should contain a user avatar
- A success message appears and the user is redirected to the product list
- The user receives an email with a password reset link
A good expected result is:
- Observable — Focus on things that can be verified visually or through system responses
- Specific — Include exact elements, messages, or states to check
- Outcome-focused — Describe the end state, not how to get there
Keep tests to 10 steps or fewer. If you need more steps, create a new test with a dependency instead. Shorter tests are faster to execute and easier to maintain.
Test Dependencies
Tests can depend on other tests to control execution order and reuse browser state. This is essential for complex workflows where one test needs data or state from another. Learn more in Test Dependencies.
Testing with Multiple Users: If your test requires multiple users logged in simultaneously (e.g., collaboration, sharing), create separate login tests for each user. Each login test becomes the root of an independent chain with its own isolated browser session, ensuring users don’t interfere with each other (sketched below). Learn more about Multi-User Testing Scenarios.
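A minimal sketch of what two such chains might look like as data, assuming WAIT_FOR enforces ordering only, RESUME_FROM continues in the upstream test’s browser session, and a test may declare more than one dependency; the names and shapes here are illustrative, not QA.tech’s schema:

```typescript
// Hypothetical sketch of two independent dependency chains for a multi-user
// scenario; test names and the dependsOn shape are illustrative assumptions.
type Mode = "WAIT_FOR" | "RESUME_FROM";
interface ChainNode {
  dependsOn?: Array<{ test: string; mode: Mode }>;
}

const tests: Record<string, ChainNode> = {
  // Chain A: the owner logs in and gets an isolated browser session.
  "Login as project owner": {},
  "Share document with teammate": {
    dependsOn: [{ test: "Login as project owner", mode: "RESUME_FROM" }],
  },
  // Chain B: the teammate's login starts a second, independent session.
  "Login as teammate": {},
  "Open shared document": {
    dependsOn: [
      { test: "Login as teammate", mode: "RESUME_FROM" },         // reuse session B
      { test: "Share document with teammate", mode: "WAIT_FOR" }, // ordering only
    ],
  },
};
```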
Organizing Tests for Complex Projects
When working on projects with multiple products or versions, it’s often beneficial to organize tests into a single project with multiple applications and use Test Scenarios to manage different versions. This approach allows you to:
- Maintain a single source of truth for all test cases across your applications.
- Easily manage and update test cases across different product versions.
- Scale your testing efforts by reusing test cases across multiple applications.
- Reduce duplication and ensure consistency across your test suites.
The structure looks like this:
- Project: Your overall product or platform
- Application: Each distinct product, major version, or variant
- Environment: Where each application runs (dev, staging, production)
- Test Cases: Belong to specific applications
- Test Scenarios: Organize related tests within or across applications
- Test Plans: Group tests into suites for CI/CD automation (e.g., “PR Smoke Tests”, “Nightly Regression”). Test plans are how you integrate QA.tech into your deployment pipeline via the API (see the sketch after this list)
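As a rough illustration of what a pipeline trigger could look like, here is a sketch with a placeholder endpoint, payload, and auth header; the real API surface is documented in Test Plans:

```typescript
// Hypothetical sketch of triggering a test plan from CI. The URL, payload
// shape, and response are placeholders, not the documented QA.tech API.
async function triggerTestPlan(planId: string, environment: string) {
  const response = await fetch(
    `https://api.example.com/v1/test-plans/${planId}/runs`, // placeholder URL
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.QA_API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ environment }), // override where the plan runs
    },
  );
  if (!response.ok) throw new Error(`Trigger failed: ${response.status}`);
  return response.json(); // e.g. a run ID your pipeline can poll for results
}
```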
Example Structure
For a complex project with multiple versions running simultaneously:
- Each version’s tests are naturally isolated within their Application
- Share common infrastructure (configs, knowledge) at the Project level
- Run the same tests against different environments (staging → production)
- Test cross-version compatibility when needed
Using Test Plans for Release Management
Test Plans group tests you want to run together and can be reused across different environments by updating their parameters (application and environment settings). For example, run the same “Smoke Tests” plan against staging, then production—no need to duplicate the plan.
CI/CD Integration: Test plans are designed for automation workflows. Use them with the API to trigger specific test suites from your deployment pipeline, GitHub Actions, or scheduled jobs. See Test Plans for API execution, scheduling, and environment override details.
Example test plans:
- “Smoke Tests” - Critical paths across all versions/applications
- “v2.0 Full Regression” - All tests for the v2.0 application
- “Cross-Version Integration” - Tests verifying compatibility between versions
For example, to roll out a suite across environments:
- Create a Test Plan called “Critical User Flows”
- Add essential tests from all your applications
- Run it against staging environments first
- Reuse the same plan for production by updating the environment parameters (see the sketch below)
- Avoid duplicating Test Plans for each environment
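Continuing the placeholder API from the earlier sketch, promoting one plan through environments might look like this:

```typescript
// Hypothetical sketch reusing one plan across environments. Builds on the
// placeholder triggerTestPlan() above; not the documented QA.tech API.
async function promoteCriticalFlows(planId: string) {
  // Same plan, different environment parameter: no duplicated plans.
  await triggerTestPlan(planId, "staging");
  // ...gate on the staging results here, then promote:
  await triggerTestPlan(planId, "production");
}
```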
This lets you:
- Run comprehensive regression tests across multiple versions
- Maintain consistent test coverage as you promote between environments
- Schedule plans independently (e.g., smoke tests every hour, full regression nightly)