Looking to manually trigger tests from CI/CD workflows? See GitHub Actions instead.
What It Does
| Feature | Description |
|---|---|
| ✅ Intelligent Test Selection | AI semantically matches PR changes to relevant tests (typically 5-15 tests selected) |
| ✅ Gap-Only Test Generation | Creates 1-3 tests only when coverage gaps exist; most PRs create zero new tests |
| ✅ Persistent Test Suite | Auto-generated tests become permanent regression tests for future PRs |
| ✅ Preview Environment Testing | Tests against PR preview deployments |
| ✅ Approval/Rejection | Posts reviews with pass/fail verdicts |
| ❌ Manual Control | No control over which tests run (use Actions for that) |
| ❌ Custom Workflows | No YAML workflows needed or supported |
1. Install GitHub App

Go to Settings → Organization → Connections and add the GitHub App connection. Follow the OAuth flow to grant access to your repositories.
2. Select Repository

Navigate to Settings → Integrations and select the repository you want to enable PR reviews for. PR reviews are enabled automatically once you select a repository.
3. Create a Pull Request
Once enabled, the agent automatically:
- Detects code changes when PRs are opened or updated
- Determines which tests are relevant
- Creates new tests for untested functionality
- Runs all relevant tests against the PR preview
- Posts a review with approval or decline based on results
Understanding GitHub Deployments
GitHub Deployments are GitHub’s native way of tracking when code is deployed to an environment. When your CI/CD deploys a PR, it can create a deployment record that includes the environment name and URL.

Platforms with automatic GitHub deployment integration:
- Vercel - Creates GitHub deployments automatically for every PR
- Netlify - Auto-registers deployments when configured
- Render, Railway, Fly.io - Most modern platforms support this
Without GitHub deployments, QA.tech won’t know which URL to test your PR against. Environment mapping only works when your CI/CD creates these deployment records.
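If your platform does not create deployments automatically, your CI/CD can register them itself through GitHub’s REST API. Below is a minimal sketch using @octokit/rest; the owner, repo, ref, environment name, and preview URL are placeholders for your own values.

```typescript
import { Octokit } from "@octokit/rest";

// Placeholder values -- substitute your repository and the PR's head ref.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = "your-org";
const repo = "your-repo";
const ref = "refs/pull/123/head"; // or the PR's branch name
const previewUrl = "https://pr-123.preview.example.com";

async function registerPreviewDeployment(): Promise<void> {
  // 1. Create the deployment record for the preview environment.
  const deployment = await octokit.rest.repos.createDeployment({
    owner,
    repo,
    ref,
    environment: "Preview",
    auto_merge: false,
    required_contexts: [], // don't block the record on status checks
  });

  if (!("id" in deployment.data)) {
    throw new Error("Deployment record was not created");
  }

  // 2. Mark it successful and attach the preview URL so integrations
  //    (like QA.tech) know which URL belongs to this environment.
  await octokit.rest.repos.createDeploymentStatus({
    owner,
    repo,
    deployment_id: deployment.data.id,
    state: "success",
    environment_url: previewUrl,
  });
}

registerPreviewDeployment().catch(console.error);
```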
Environment Mapping
Map GitHub deployment environments to QA.tech Applications so tests run against the correct preview URLs.

When to use:
- You have multiple Applications (frontend, backend, etc.)
- Your CI/CD creates GitHub deployment environments
- You want tests to run against PR-specific URLs

How it works:
1. Your CI/CD deploys a PR and creates a GitHub environment (e.g., “Preview” or “pr-123”)
2. QA.tech detects the deployment
3. Tests run using the mapped Application’s URL from that environment
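Conceptually, the mapping is a lookup from GitHub deployment environment names to QA.tech Applications. The sketch below only illustrates that relationship; the names and structure are hypothetical, not QA.tech’s actual configuration format.

```typescript
// Hypothetical illustration of environment mapping -- not QA.tech's actual API
// or configuration format.
type EnvironmentMapping = Record<string, { application: string }>;

const mapping: EnvironmentMapping = {
  Preview: { application: "Frontend" },          // e.g. a Vercel-style environment
  "pr-backend": { application: "Backend API" },  // e.g. a custom CI environment
};

// When a GitHub deployment event arrives, the environment name selects the
// Application, and the deployment's environment_url becomes the test target.
function resolveTarget(environment: string, environmentUrl: string) {
  const entry = mapping[environment];
  if (!entry) return null; // unmapped environments are ignored
  return { application: entry.application, baseUrl: environmentUrl };
}

console.log(resolveTarget("Preview", "https://pr-123.preview.example.com"));
// -> { application: "Frontend", baseUrl: "https://pr-123.preview.example.com" }
```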
How PR Reviews Work
Agent Workflow
Understanding Test Selection & Creation
The PR Review agent uses AI to intelligently select and create tests. It matches PR changes to test goals semantically, not through keyword matching.

Example: PR changes the checkout payment flow
- AI semantically matches changed files to test goals
- Runs ALL tests it determines are relevant (no arbitrary limits)
- Only considers tests with `status='enabled'` (skips draft/error tests)
- More intelligent than a fixed CI pipeline that runs every existing test on every change
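QA.tech’s matching model is internal, but the selection step described above can be pictured as a similarity score between the PR’s changes and each enabled test’s goal. A rough sketch under that assumption; the toy embedding and the 0.6 threshold are illustrative stand-ins, not the real implementation.

```typescript
// Illustrative sketch of semantic test selection -- not QA.tech's internal code.
interface TestCase {
  name: string;
  goal: string;
  status: "enabled" | "draft" | "error";
}

// Toy embedding so the sketch is self-contained; a real system would call an
// embedding model here.
async function embed(text: string): Promise<number[]> {
  const vector = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const idx = ch.charCodeAt(0) - 97;
    if (idx >= 0 && idx < 26) vector[idx] += 1;
  }
  return vector;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

async function selectRelevantTests(prSummary: string, tests: TestCase[]): Promise<TestCase[]> {
  const prVector = await embed(prSummary);
  const scored = await Promise.all(
    tests
      .filter((t) => t.status === "enabled") // draft/error tests are skipped
      .map(async (t) => ({ test: t, score: cosine(prVector, await embed(t.goal)) }))
  );
  // Keep every test above the relevance threshold -- no fixed cap on the count.
  return scored.filter((s) => s.score > 0.6).map((s) => s.test);
}
```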
Typical Test Counts
| PR Type | Existing Tests Selected | New Tests Created | Total |
|---|---|---|---|
| Bug fix in login | 3-7 | 0 (already covered) | 3-7 |
| Small feature | 5-10 | 1-2 (fill gaps) | 6-12 |
| Major feature | 15-25 | 3-5 (new functionality) | 18-30 |
| Refactor | 10-20 | 0 (no new behavior) | 10-20 |
| Docs/infra only | 0 | 0 (untestable via UI) | 0 |
Creating Tests for Coverage Gaps
Tests are created only when coverage gaps exist - not for every PR. Example: a PR adds Apple Pay to checkout, a new flow with no existing coverage, so a test is created (a rough sketch of this gap check follows the lists below).

Tests are created for:
- New features with zero coverage
- New user flows not tested before
- New payment methods, auth methods, etc.

Tests are not created for:
- Bug fixes (existing tests already cover the flow)
- Refactors (no new user-facing behavior)
- Code improvements (same functionality)
- Documentation/infrastructure changes
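To make the gap check concrete, here is a rough sketch of the decision under the same assumptions as the selection sketch above; the similarity function and the 0.6 threshold are illustrative, not QA.tech internals.

```typescript
// Illustrative gap check -- a new test is proposed only when no enabled test's
// goal is similar to the new flow. `similarity` stands in for the semantic
// matching sketched above; the 0.6 threshold is an assumption.
async function hasCoverageGap(
  newFlow: string,
  enabledGoals: string[],
  similarity: (a: string, b: string) => Promise<number>
): Promise<boolean> {
  for (const goal of enabledGoals) {
    if ((await similarity(newFlow, goal)) >= 0.6) {
      return false; // an existing test already covers this flow
    }
  }
  return true; // nothing comes close, so 1-3 focused tests may be created
}

// Example: "Checkout with Apple Pay" scored against goals like "Complete
// checkout flow" would stay below the threshold, so a gap is reported.
```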
Auto-Generated Test Lifecycle
Tests created during PR review stay in the system labeled ‘ephemeral’ and can be enabled for future runs. This way, new functionality in your app is automatically covered by tests, and you keep full control over it.

The Journey:
- Review tests in the UI (Settings → Test Cases, filter by ‘ephemeral’)
- Enable or refine a test to remove the ‘ephemeral’ label
- Delete tests you don’t want
Preventing Test Overlap
The agent actively prevents duplicate tests through semantic understanding. Before creating any new tests, it checks all your enabled test cases, reviews existing test names, goals, and coverage areas, and evaluates functional overlap (not just name matching).

Example - Avoiding Duplicates:

| Existing Test | PR Change | Agent Decision |
|---|---|---|
| “User login with credentials” | Add OAuth | Create “User login with Google OAuth” (different method) |
| “Complete checkout flow” | Fix checkout bug | Skip creation (already covered) |
| “Test payment processing” | Add refunds | Create “Process refund” (new functionality) |
Self-Limiting Growth
As your test suite grows, the system creates fewer tests automatically: better coverage means fewer gaps to fill.

Review Format
The agent posts a review with:
- Verdict: ✅ Tests passing / ❌ Tests failing / ℹ️ Unable to verify
- What was tested: Description of coverage in prose
- Results summary: Patterns and themes from test outcomes
- Test details: Automatic table with individual test results
The review does not include:
- Code quality opinions
- Implementation suggestions
- References to other bot comments
Frequently Asked Questions
Will auto-generated tests pollute my test suite?
No. The agent only creates tests for coverage gaps - most PRs (bug fixes, refactors) create zero new tests. Even major feature PRs typically add 3-5 focused tests, not hundreds. The system is self-limiting: better coverage means fewer gaps, which means less test generation over time.

Can I control which tests run?

No - the GitHub App is fully autonomous for speed and simplicity. For manual test selection and control, use GitHub Actions instead. You can use both: GitHub App for automatic PR reviews + Actions for manual deep testing.

How do I review or delete auto-generated tests?

Go to Settings → Test Cases and filter by the ‘ephemeral’ label. You can enable (removes the label), edit, disable, or delete any test. Tests work fine as-is - review is optional.

What if the agent creates duplicate tests?

The agent actively prevents duplicates by reading all existing tests first and using semantic deduplication. If a duplicate slips through (AI isn’t perfect), simply disable or delete it in the UI. It’s better to occasionally have slight redundancy than to miss coverage gaps.

Troubleshooting
PR reviews aren’t posting
Check:
- GitHub App installed and granted repository access
- Repository integration configured in QA.tech
- PR reviews are enabled for this repository
- PR has user-facing changes (not docs/infra only)
Tests running against wrong URL
Solutions:
- Map your GitHub environments to Applications (Settings → Integrations → GitHub App → Map Environments)
- Verify your CI/CD creates GitHub deployment environments
- Check that environment names match between GitHub and QA.tech
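To confirm that your CI/CD actually creates deployment records with usable preview URLs, you can list recent deployments and their statuses with @octokit/rest; the owner and repo values below are placeholders.

```typescript
import { Octokit } from "@octokit/rest";

// Placeholder values -- substitute your repository details.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = "your-org";
const repo = "your-repo";

async function inspectDeployments(): Promise<void> {
  // List recent deployments and check the environment names you expect to map.
  const { data: deployments } = await octokit.rest.repos.listDeployments({
    owner,
    repo,
    per_page: 10,
  });

  for (const deployment of deployments) {
    const { data: statuses } = await octokit.rest.repos.listDeploymentStatuses({
      owner,
      repo,
      deployment_id: deployment.id,
    });
    // environment_url on the latest status is the URL tests would run against.
    console.log(deployment.environment, statuses[0]?.state, statuses[0]?.environment_url);
  }
}

inspectDeployments().catch(console.error);
```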
Agent created irrelevant tests
Solutions:
- Add review context with specific testing guidelines
- Update existing tests to cover the functionality better
- The agent learns from your existing test patterns
Review says “Unable to verify through end-to-end testing”
This is expected for:
- Documentation-only changes (.md, .txt, README)
- Infrastructure changes (CI/CD, Docker, deployment scripts)
- Build configuration (package.json without functional changes)
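One way to picture why such PRs are skipped: when every changed file is documentation, CI configuration, or build metadata, there is no user-facing behavior to exercise end-to-end. A rough, illustrative check along those lines; the path patterns are examples, not QA.tech’s actual rules.

```typescript
// Rough illustration of why docs/infra-only PRs are skipped -- the patterns
// below are examples, not QA.tech's actual rules.
const NON_TESTABLE_PATTERNS = [
  /\.(md|txt)$/i,            // documentation files
  /^docs\//,                 // documentation folders
  /^\.github\/workflows\//,  // CI/CD configuration
  /(^|\/)Dockerfile$/,       // deployment / infrastructure files
];

function hasUiTestableChanges(changedFiles: string[]): boolean {
  // If every changed file matches a non-testable pattern, there is no
  // user-facing behavior to verify end-to-end.
  return changedFiles.some(
    (file) => !NON_TESTABLE_PATTERNS.some((pattern) => pattern.test(file))
  );
}

console.log(hasUiTestableChanges(["README.md", ".github/workflows/ci.yml"])); // false
console.log(hasUiTestableChanges(["src/checkout/applePay.ts", "README.md"])); // true
```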
Limitations
- Fully autonomous: Cannot manually select specific tests (use GitHub Actions for control)
- Requires preview deployments: Cannot test without accessible PR environment
- UI-testable changes only: Backend-only microservices without UI access can’t be tested
- No workflow customization: Unlike GitHub Actions, there’s no YAML configuration
Need Manual Control?
If you need to:
- Choose specific test plans to run
- Trigger tests from custom CI/CD steps
- Control test execution timing
- Run deep exploratory testing on-demand
Use GitHub Actions instead, which supports manual triggers via @qatech mentions. You can use both approaches for comprehensive coverage.