QA.tech uses specialized AI agents to test your application like humans would, but faster and more thoroughly. Our system automatically routes testing tasks to the right agent based on context; no manual configuration is needed.

Our AI Agent System

| Entry Point | Purpose | How It Activates | Learn More |
| --- | --- | --- | --- |
| Chat Assistant | Interactive testing, test creation, site analysis | You start a conversation | AI Chat Assistant |
| PR Review | Autonomous testing of every pull request | Automatic on PR open/update | GitHub App |
| On-Demand PR Testing | Deep testing with custom instructions | @qatech mention in PR comments | GitHub Actions |
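The routing in the table above can be pictured as a simple dispatch on how a request arrives. This is an illustrative sketch only: the event shape (`type`, `body`) and the logic are hypothetical, not QA.tech's actual internals.

```python
# Hypothetical routing sketch; QA.tech performs this automatically.
def route(event: dict) -> str:
    """Pick an agent based on the entry point (illustrative logic only)."""
    if event.get("type") == "chat_message":
        return "Chat Assistant"
    if event.get("type") in ("pr_opened", "pr_updated"):
        return "PR Review"
    if event.get("type") == "pr_comment" and "@qatech" in event.get("body", ""):
        return "On-Demand PR Testing"
    return "unrouted"

print(route({"type": "pr_comment", "body": "@qatech test the checkout"}))
# prints "On-Demand PR Testing"
```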

How It Works in Practice

Here’s what happens when you open a PR that changes your checkout flow:
1. Automatic Detection
PR opened: "Add Apple Pay to checkout"
→ PR Review agent activates
2. Change Classification
Analyzing diff...
├─ checkout.tsx: USER-FACING ✓
├─ payment-methods.ts: USER-FACING ✓
└─ README.md: DOCS ONLY ✗

Classification: USER-FACING changes detected
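The classification step can be sketched as sorting changed paths into user-facing code versus documentation. The extension-based heuristic below is purely illustrative; the real classification is model-driven and internal to QA.tech. The file names come from the example diff above.

```python
# Hypothetical change classifier mirroring step 2 above.
DOCS_ONLY_EXTENSIONS = (".md", ".mdx", ".txt")

def classify(path: str) -> str:
    """Label a changed file (illustrative heuristic, not QA.tech's actual logic)."""
    return "DOCS ONLY" if path.endswith(DOCS_ONLY_EXTENSIONS) else "USER-FACING"

changed = ["checkout.tsx", "payment-methods.ts", "README.md"]
labels = {path: classify(path) for path in changed}
user_facing = [p for p, label in labels.items() if label == "USER-FACING"]

# Testing proceeds when at least one change is user-facing.
print(labels)
print("USER-FACING changes detected" if user_facing else "Docs only - skipping")
```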
3. Coverage Assessment
Existing tests found:
├─ "Complete checkout with credit card" ✓
├─ "Complete checkout with PayPal" ✓
└─ Apple Pay integration: NOT COVERED ⚠️
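The coverage assessment above amounts to checking whether any existing test exercises the new feature. The keyword match below is a naive stand-in for illustration; the actual assessment is performed by the agent, not by string matching.

```python
# Sketch of step 3 (coverage assessment); matching logic is illustrative only.
existing_tests = [
    "Complete checkout with credit card",
    "Complete checkout with PayPal",
]
new_feature = "Apple Pay"

covered = any(new_feature.lower() in title.lower() for title in existing_tests)
print("covered" if covered else f"{new_feature} integration: NOT COVERED")
# prints "Apple Pay integration: NOT COVERED"
```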
4. Test Generation
Creating new test:
"Complete checkout flow with Apple Pay"
├─ Add item to cart
├─ Navigate to checkout
├─ Select Apple Pay as payment method
├─ Verify payment confirmation
└─ Verify order appears in account
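Conceptually, the generated test is a named goal plus ordered natural-language steps. The dictionary below just restates the steps from the example in a structured form; the field names are illustrative, not the platform's schema.

```python
# Illustrative representation of the generated test from step 4.
generated_test = {
    "name": "Complete checkout flow with Apple Pay",
    "steps": [
        "Add item to cart",
        "Navigate to checkout",
        "Select Apple Pay as payment method",
        "Verify payment confirmation",
        "Verify order appears in account",
    ],
}

for number, step in enumerate(generated_test["steps"], start=1):
    print(f"{number}. {step}")
```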
5. Execution & Review
Running against PR preview environment...
✅ All tests passing (3/3)

Posting GitHub review:
"✅ Tests passing - Apple Pay integration verified"
This entire workflow runs autonomously; no human intervention is required.

When to Use AI Agents vs Scripts

| Scenario | AI Agents | Scripts | Manual QA |
| --- | --- | --- | --- |
| Exploratory testing | ✅ Best: discovers edge cases through varied behavior | ❌ Fixed paths only | ✅ Good, but slow and expensive |
| Regression testing | ✅ Good: handles UI changes gracefully | ✅ Best: fastest execution | ❌ Too repetitive |
| Exact same steps every time | ⚠️ Possible but overkill | ✅ Best: fully deterministic | ❌ Error-prone over time |
| Testing multiple user flows | ✅ Best: tries different approaches automatically | ⚠️ Need separate script per variant | ✅ Good, but doesn’t scale |
| Complex calculations | ❌ Use assertions in code instead | ✅ Best: precise math | ⚠️ Manual calculation prone to errors |
| Form filling with realistic data | ✅ Best: understands context | ⚠️ Need hardcoded test data | ✅ Good, but tedious |
| Testing preview environments | ✅ Best: automatic PR reviews | ⚠️ Need CI/CD integration | ❌ Requires manual coordination |
| First-time test coverage | ✅ Best: AI Chat Assistant sets up quickly | ❌ Requires upfront investment | ⚠️ Slow to establish baseline |
In practice, most teams use a combination: AI agents for exploration and edge cases, scripts for critical deterministic flows, and manual QA for subjective evaluation (design, UX, brand consistency).

How It All Connects

The system coordinates automatically based on what you’re trying to accomplish:
Creating tests via Chat
You: "Create 5 tests for the checkout flow"
→ Analyzes your application structure
→ Creates tests prioritizing revenue-critical paths
→ Shows suggestions for your approval
PR Review fills coverage gaps
PR detected with untested functionality
→ Creates tests for new features
→ Runs all relevant tests
→ Posts review with results
You don’t manage this coordination; it happens automatically.

Test Execution Model

QA.tech uses Claude Haiku 4.5 as the default AI model for test execution. This model provides the fastest test execution while maintaining high-quality results.
| Model | Speed | Best For |
| --- | --- | --- |
| Claude Haiku 4.5 (default) | Fastest | Most tests; recommended for day-to-day testing |
| Claude Sonnet 4.5 | Moderate | Complex scenarios requiring deeper reasoning |
The AI model handles all test execution decisions: navigating your application, filling forms, clicking buttons, and verifying outcomes. You write test goals in natural language, and the model figures out how to achieve them.
You can override the model per test in the Advanced settings when creating or editing a test. See Creating Tests for details.
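As a mental model, a per-test override simply takes precedence over the default when present. The keys and model identifiers below are illustrative (they mirror the table above) and are not QA.tech's actual configuration format.

```python
# Illustrative sketch of per-test model selection; not the real schema.
DEFAULT_MODEL = "claude-haiku-4.5"

test = {
    "goal": "Complete checkout flow with Apple Pay",
    "model": "claude-sonnet-4.5",  # hypothetical override set in Advanced settings
}

model_used = test.get("model", DEFAULT_MODEL)
print(model_used)
# prints "claude-sonnet-4.5"; without the override, the default would apply
```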

Get Started