Get autonomous AI-powered test coverage and reviews on every pull request. The QA.tech GitHub App analyzes code changes, creates missing tests, and posts comprehensive reviews based on test results.
Looking to manually trigger tests from CI/CD workflows? See GitHub Actions instead.

What It Does

| Feature | Description |
| --- | --- |
| Intelligent Test Selection | AI semantically matches PR changes to relevant tests (typically 5-15 tests selected) |
| Gap-Only Test Generation | Creates 1-3 tests only when coverage gaps exist; most PRs create zero new tests |
| Persistent Test Suite | Auto-generated tests become permanent regression tests for future PRs |
| Preview Environment Testing | Tests against PR preview deployments |
| Approval/Rejection | Posts reviews with pass/fail verdicts |
| Manual Control | Not available - no control over which tests run (use Actions for that) |
| Custom Workflows | No YAML workflows needed or supported |
1. Install GitHub App

Go to Settings → Organization → Connections and add the GitHub App connection. Follow the OAuth flow to grant access to your repositories.
2. Select Repository

Navigate to Settings → Integrations and select the repository you want to enable PR reviews for. PR reviews are enabled automatically once you select a repository.
3. Create a Pull Request

Once enabled, the agent automatically:
  1. Detects code changes when PRs are opened or updated
  2. Determines which tests are relevant
  3. Creates new tests for untested functionality
  4. Runs all relevant tests against the PR preview
  5. Posts a review with approval or decline based on results

Understanding GitHub Deployments

GitHub Deployments are GitHub’s native way of tracking when code is deployed to an environment. When your CI/CD deploys a PR, it can create a deployment record that includes the environment name and URL. Platforms with automatic GitHub deployment integration:
  • Vercel - Creates GitHub deployments automatically for every PR
  • Netlify - Auto-registers deployments when configured
  • Render, Railway, Fly.io - Most modern platforms support this
For other platforms: You’ll need to manually create GitHub deployment records in your CI/CD pipeline. Need help setting this up? Contact us at support@qa.tech.
Without GitHub deployments, QA.tech won’t know which URL to test your PR against. Environment mapping only works when your CI/CD creates these deployment records.
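
For pipelines that don't register deployments automatically, creating the record takes two calls to GitHub's Deployments REST API: one to create the deployment, and one to attach a status carrying the preview URL (the URL lives on the deployment *status*, not the deployment itself). A minimal Python sketch - the endpoints are GitHub's documented API, but the owner, repo, SHA, and URL values below are placeholders:

```python
import json
import urllib.request

API = "https://api.github.com"

def deployment_payload(ref, environment):
    """Body for POST /repos/{owner}/{repo}/deployments."""
    return {
        "ref": ref,                  # branch name or commit SHA that was deployed
        "environment": environment,  # e.g. "Preview" or "pr-123"
        "auto_merge": False,
        "required_contexts": [],     # don't block preview deploys on status checks
    }

def status_payload(environment_url):
    """Body for POST .../deployments/{id}/statuses - the preview URL goes here."""
    return {"state": "success", "environment_url": environment_url}

def post(path, token, body):
    """Minimal authenticated POST against the GitHub REST API."""
    req = urllib.request.Request(
        API + path,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# In a CI/CD step, after deploying the PR preview (placeholder values):
# dep = post(f"/repos/{owner}/{repo}/deployments", token,
#            deployment_payload(sha, "pr-123"))
# post(f"/repos/{owner}/{repo}/deployments/{dep['id']}/statuses", token,
#      status_payload("https://pr-123.example.com"))
```

Once the status carries `environment_url`, integrations that watch deployments can discover which URL to test the PR against.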

Environment Mapping

Map GitHub deployment environments to QA.tech Applications so tests run against the correct preview URLs. When to use:
  • You have multiple Applications (frontend, backend, etc.)
  • Your CI/CD creates GitHub deployment environments
  • You want tests to run against PR-specific URLs
How it works:
  1. Your CI/CD deploys a PR and creates a GitHub environment (e.g., “Preview” or “pr-123”)
  2. QA.tech detects the deployment
  3. Tests run using the mapped Application’s URL from that environment
Location: Settings → Integrations → GitHub App → Map Environments

How PR Reviews Work

Agent Workflow

PR opened/updated

1. Classify Changes
   → User-facing? Continue
   → Docs/infra only? Skip testing, post info comment

2. Assess Coverage
   → Find relevant existing tests
   → Identify gaps in coverage

3. Create Tests (if needed)
   → Generate tests for untested functionality
   → Configure dependencies (e.g., login tests)

4. Run Tests
   → Execute against PR preview environment
   → Wait for completion

5. Post Review
   → ✅ Approve if all tests pass
   → ❌ Decline if tests fail
   → ℹ️ Informational if untestable
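
The final verdict step boils down to a small decision function. This is an illustrative sketch of the logic described above, not QA.tech's actual implementation:

```python
def review_verdict(user_facing: bool, results: list[bool]) -> str:
    """Mirror the decision at the end of the workflow above."""
    if not user_facing:
        return "informational"   # docs/infra-only PRs get an info comment
    if not results:
        return "informational"   # nothing testable ran
    return "approve" if all(results) else "decline"
```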

Understanding Test Selection & Creation

The PR Review agent uses AI to intelligently select and create tests. It matches PR changes to test goals semantically, not through keyword matching. Example: PR changes checkout payment flow
Your project: 645 total tests

Agent analyzes:
├─ "Complete checkout with credit card" → RELEVANT ✓
├─ "Complete checkout with PayPal" → RELEVANT ✓  
├─ "Verify payment confirmation email" → RELEVANT ✓
├─ "User profile photo upload" → NOT RELEVANT ✗
└─ "Search products by category" → NOT RELEVANT ✗

Selected: 12 tests covering payment & checkout flows
Key mechanics:
  • AI semantically matches changed files to test goals
  • Runs ALL tests it determines are relevant (no arbitrary limits)
  • Only considers tests with status='enabled' (skips draft/error tests)
  • More intelligent than fixed CI that runs all existing tests every time
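
To make these mechanics concrete, here is a toy sketch of relevance selection. A real system would use an embedding model; a bag-of-words cosine similarity merely shows the shape of matching a change summary against enabled test goals (the test data and threshold are illustrative, not QA.tech's internals):

```python
import math
from collections import Counter

def embed(text):
    """Toy stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_relevant(change_summary, tests, threshold=0.2):
    """Keep enabled tests whose goal is close to the PR change summary."""
    cv = embed(change_summary)
    return [t for t in tests
            if t["status"] == "enabled"                 # skip draft/error tests
            and cosine(cv, embed(t["goal"])) >= threshold]
```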

Typical Test Counts

| PR Type | Existing Tests Selected | New Tests Created | Total |
| --- | --- | --- | --- |
| Bug fix in login | 3-7 | 0 (already covered) | 3-7 |
| Small feature | 5-10 | 1-2 (fill gaps) | 6-12 |
| Major feature | 15-25 | 3-5 (new functionality) | 18-30 |
| Refactor | 10-20 | 0 (no new behavior) | 10-20 |
| Docs/infra only | 0 | 0 (untestable via UI) | 0 |
No upper limit - the agent runs as many tests as needed for confidence, optimizing for coverage rather than speed.

Creating Tests for Coverage Gaps

Tests are created only when gaps exist - not for every PR. Example: PR adds Apple Pay to checkout
Agent assesses existing coverage:
├─ "Complete checkout with credit card" ✓ Exists
├─ "Complete checkout with PayPal" ✓ Exists
└─ Apple Pay integration ✗ Gap identified

Decision: Create 1 new test
→ "Complete checkout with Apple Pay"
When tests ARE created:
  • New features with zero coverage
  • New user flows not tested before
  • New payment methods, auth methods, etc.
When tests are NOT created (most common):
  • Bug fixes (existing tests already cover the flow)
  • Refactors (no new user-facing behavior)
  • Code improvements (same functionality)
  • Documentation/infrastructure changes
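
The gap assessment above can be sketched as a simple comparison between the user flows a PR touches and the goals of enabled tests. This toy version uses substring overlap as a crude stand-in for the agent's semantic matching, and the data shapes are hypothetical:

```python
def find_coverage_gaps(pr_flows, existing_tests):
    """Return PR user flows that no enabled test already covers.

    Substring overlap is a crude proxy for semantic matching here.
    """
    covered = [t["goal"].lower() for t in existing_tests
               if t["status"] == "enabled"]
    return [flow for flow in pr_flows
            if not any(flow.lower() in goal or goal in flow.lower()
                       for goal in covered)]
```

With the Apple Pay example: credit-card and PayPal checkout tests exist, so only the Apple Pay flow comes back as a gap and triggers one new test.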

Auto-Generated Test Lifecycle

Tests created during PR review stay in the system labeled ‘ephemeral’ and can be enabled for future runs. This way, new functionality in your app is automatically covered by tests, and you keep full control over them. The Journey:
PR #42: Add Apple Pay
├─ Agent identifies gap
├─ Creates "Complete checkout with Apple Pay"  
├─ Test runs on PR #42 ✅
├─ Test persists in your suite (labeled 'ephemeral')
└─ Available for future PRs

PR #58: Refactor checkout UI
├─ Agent finds "Complete checkout with Apple Pay" exists
├─ Runs existing test (no new test created) ✅
└─ Your suite protects against Apple Pay regressions
Managing auto-generated tests:
  • Review tests in UI (Settings → Test Cases, filter by ‘ephemeral’)
  • Enable or refine a test to remove the ‘ephemeral’ label
  • Delete tests you don’t want
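
Conceptually, managing these tests amounts to filtering by label and promoting the ones you keep. A hypothetical sketch - the real workflow is the Settings → Test Cases UI, and this data shape is purely illustrative:

```python
def ephemeral_tests(tests):
    """Filter down to auto-generated tests still carrying the 'ephemeral' label."""
    return [t for t in tests if "ephemeral" in t.get("labels", [])]

def promote(test):
    """Enabling or refining a test removes the 'ephemeral' label."""
    test["labels"] = [l for l in test.get("labels", []) if l != "ephemeral"]
    return test
```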

Preventing Test Overlap

The agent actively prevents duplicate tests through semantic understanding. Before creating any new tests, it checks all your enabled test cases, reviews existing test names, goals, and coverage areas, and evaluates functional overlap rather than relying on name matching alone. Example - Avoiding Duplicates:
| Existing Test | PR Change | Agent Decision |
| --- | --- | --- |
| “User login with credentials” | Add OAuth | Create “User login with Google OAuth” (different method) |
| “Complete checkout flow” | Fix checkout bug | Skip creation (already covered) |
| “Test payment processing” | Add refunds | Create “Process refund” (new functionality) |

Self-Limiting Growth

As your test suite grows, the system creates fewer tests automatically:
Early PRs:
├─ Small test suite = Many gaps
└─ More test generation

Later PRs:
├─ Large test suite = Better coverage
└─ Mostly reuses existing tests

Result: Test creation naturally slows as coverage improves

Review Format

The agent posts a review with:
  • Verdict: ✅ Tests passing / ❌ Tests failing / ℹ️ Unable to verify
  • What was tested: Description of coverage in prose
  • Results summary: Patterns and themes from test outcomes
  • Test details: Automatic table with individual test results
What reviews DON’T include:
  • Code quality opinions
  • Implementation suggestions
  • References to other bot comments

Frequently Asked Questions

Will auto-generated tests pollute my test suite?

No. The agent only creates tests for coverage gaps - most PRs (bug fixes, refactors) create zero new tests. Even major feature PRs typically add 3-5 focused tests, not hundreds. The system is self-limiting: better coverage means fewer gaps, which means less test generation over time.

Can I control which tests run?

No - the GitHub App is fully autonomous for speed and simplicity. For manual test selection and control, use GitHub Actions instead. You can use both: GitHub App for automatic PR reviews + Actions for manual deep testing.

How do I review or delete auto-generated tests?

Go to Settings → Test Cases and filter by ‘ephemeral’ label. You can enable (removes label), edit, disable, or delete any test. Tests work fine as-is - review is optional.

What if the agent creates duplicate tests?

The agent actively prevents duplicates by reading all existing tests first and using semantic deduplication. If a duplicate slips through (AI isn’t perfect), simply disable or delete it in the UI. Better to occasionally have slight redundancy than miss coverage gaps.

Troubleshooting

PR reviews aren’t posting

Check:
  • GitHub App installed and granted repository access
  • Repository integration configured in QA.tech
  • PR reviews are enabled for this repository
  • PR has user-facing changes (not docs/infra only)

Tests running against wrong URL

Solutions:
  • Map your GitHub environments to Applications (Settings → Integrations → GitHub App → Map Environments)
  • Verify your CI/CD creates GitHub deployment environments
  • Check that environment names match between GitHub and QA.tech

Agent created irrelevant tests

Solutions:
  • Add review context with specific testing guidelines
  • Update existing tests to cover the functionality better
  • The agent learns from your existing test patterns

Review says “Unable to verify through end-to-end testing”

This is expected for:
  • Documentation-only changes (.md, .txt, README)
  • Infrastructure changes (CI/CD, Docker, deployment scripts)
  • Build configuration (package.json without functional changes)
The agent can only test changes accessible through the UI.

Limitations

  • Fully autonomous: Cannot manually select specific tests (use GitHub Actions for control)
  • Requires preview deployments: Cannot test without accessible PR environment
  • UI-testable changes only: Backend-only microservices without UI access can’t be tested
  • No workflow customization: Unlike GitHub Actions, there’s no YAML configuration

Need Manual Control?

If you need to:
  • Choose specific test plans to run
  • Trigger tests from custom CI/CD steps
  • Control test execution timing
  • Run deep exploratory testing on-demand
See GitHub Actions for manual CI/CD integration and on-demand exploratory testing via @qatech mentions. You can use both approaches for comprehensive coverage.