QA.tech uses specialized AI agents to test your application like humans would, but faster and more thoroughly. Our system automatically routes testing tasks to the right agent based on context; no manual configuration is needed.

Our AI Agent System

| Agent | Purpose | How It Activates | Learn More |
| --- | --- | --- | --- |
| Chat Assistant | Interactive testing, test creation, project analysis | You start a conversation | AI Chat Assistant |
| PR Review | Autonomous testing of every pull request | Automatic on PR open/update | GitHub App |
| Explorative | Deep on-demand testing with custom instructions | @qatech mention in PR comments | GitHub Actions |
| Test Generation | Creates test cases from requirements | Via Chat Assistant | AI Chat Assistant |
| Onboarding | First-time setup and baseline test coverage | Via Chat Assistant | AI Chat Assistant |
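For example, the Explorative agent can be triggered directly from a pull request by mentioning it in a comment along with custom instructions (the wording below is illustrative):

@qatech test the updated checkout flow, including what happens when a user
cancels the payment sheet and returns to the cart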

How It Works in Practice

Here’s what happens when you open a PR that changes your checkout flow:
1. Automatic Detection
PR opened: "Add Apple Pay to checkout"
→ PR Review agent activates
2. Change Classification
Analyzing diff...
├─ checkout.tsx: USER-FACING ✓
├─ payment-methods.ts: USER-FACING ✓
└─ README.md: DOCS ONLY ✗

Classification: USER-FACING changes detected
3. Coverage Assessment
Existing tests found:
├─ "Complete checkout with credit card" ✓
├─ "Complete checkout with PayPal" ✓
└─ Apple Pay integration: NOT COVERED ⚠️
4. Test Generation
Creating new test:
"Complete checkout flow with Apple Pay"
├─ Add item to cart
├─ Navigate to checkout
├─ Select Apple Pay as payment method
├─ Verify payment confirmation
└─ Verify order appears in account
5. Execution & Review
Running against PR preview environment...
✅ All tests passing (3/3)

Posting GitHub review:
"✅ Tests passing - Apple Pay integration verified"
This entire workflow runs autonomously - no human intervention required.
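The change classification in step 2 can be pictured as a simple path-based filter. The sketch below is only an illustration of that idea, not QA.tech's actual classifier, and the file patterns are assumptions:

type Classification = 'USER-FACING' | 'DOCS ONLY';

// Illustration only: treat anything that is not documentation as user-facing.
const DOCS_ONLY_PATTERNS = [/\.mdx?$/i, /^docs\//, /^LICENSE$/];

function classifyFile(path: string): Classification {
  return DOCS_ONLY_PATTERNS.some((p) => p.test(path)) ? 'DOCS ONLY' : 'USER-FACING';
}

function classifyDiff(changedFiles: string[]): Classification {
  // One user-facing file is enough for the PR to need testing.
  return changedFiles.some((f) => classifyFile(f) === 'USER-FACING')
    ? 'USER-FACING'
    : 'DOCS ONLY';
}

// classifyDiff(['checkout.tsx', 'payment-methods.ts', 'README.md']) → 'USER-FACING'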

When to Use AI Agents vs Scripts

| Scenario | AI Agents | Scripts | Manual QA |
| --- | --- | --- | --- |
| Exploratory testing | ✅ Best - discovers edge cases through varied behavior | ❌ Fixed paths only | ✅ Good - but slow and expensive |
| Regression testing | ✅ Good - handles UI changes gracefully | ✅ Best - fastest execution | ❌ Too repetitive |
| Exact same steps every time | ⚠️ Possible but overkill | ✅ Best - fully deterministic | ❌ Error-prone over time |
| Testing multiple user flows | ✅ Best - tries different approaches automatically | ⚠️ Need separate script per variant | ✅ Good - but doesn’t scale |
| Complex calculations | ❌ Use assertions in code instead | ✅ Best - precise math | ⚠️ Manual calculation prone to errors |
| Form filling with realistic data | ✅ Best - understands context | ⚠️ Need hardcoded test data | ✅ Good - but tedious |
| Testing preview environments | ✅ Best - automatic PR reviews | ⚠️ Need CI/CD integration | ❌ Requires manual coordination |
| First-time test coverage | ✅ Best - onboarding agent sets up quickly | ❌ Requires upfront investment | ⚠️ Slow to establish baseline |
In practice, most teams use a combination: AI agents for exploration and edge cases, scripts for critical deterministic flows, and manual QA for subjective evaluation (design, UX, brand consistency).
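To make the contrast concrete, here is what the fully deterministic, scripted side of that table can look like. The sketch uses Playwright purely for illustration; the URL, selectors, and expected total are invented placeholders, not QA.tech output:

import { test, expect } from '@playwright/test';

test('complete checkout with credit card', async ({ page }) => {
  await page.goto('https://staging.example.com/products/basic-plan');
  await page.getByRole('button', { name: 'Add to cart' }).click();
  await page.getByRole('link', { name: 'Checkout' }).click();

  // Exact same steps and data on every run - fully deterministic.
  await page.getByLabel('Card number').fill('4242 4242 4242 4242');
  await page.getByLabel('Expiry').fill('12/30');
  await page.getByLabel('CVC').fill('123');
  await page.getByRole('button', { name: 'Pay now' }).click();

  // Precise assertion with no interpretation involved.
  await expect(page.getByTestId('order-total')).toHaveText('$49.00');
});

An AI agent is given the same journey as a natural-language goal instead, trading exact determinism for resilience when selectors or layout change.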

Agent Coordination

Agents automatically hand off tasks to specialists when needed:
Chat → Test Generation
You: "Create 5 tests for the checkout flow"
→ Chat Assistant routes to Test Generation agent
→ Analyzes your application structure
→ Creates tests prioritizing revenue-critical paths
Chat → Explorative
You: "Test this pull request"
→ Chat Assistant routes to Explorative agent
→ Fetches PR details and preview environment
→ Runs tests and posts review
PR Review → Test Generation
PR detected with untested functionality
→ PR Review agent requests new tests
→ Test Generation creates coverage
→ PR Review executes and reports
You don’t manage this coordination - it happens automatically based on what you’re trying to accomplish.
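Conceptually, the routing above behaves like a dispatch on what you ask for. The sketch below is a mental model only, not QA.tech's implementation; the matching rules are assumptions:

type Agent = 'chat' | 'explorative' | 'test-generation';

// Mental model only: pick the specialist that matches the request.
function routeRequest(request: string): Agent {
  if (/create .* tests?/i.test(request)) return 'test-generation';
  if (/test this (pull request|pr)/i.test(request)) return 'explorative';
  return 'chat'; // the Chat Assistant handles everything else interactively
}

// routeRequest('Create 5 tests for the checkout flow') → 'test-generation'
// routeRequest('Test this pull request')               → 'explorative'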

Test Execution Model

QA.tech uses Claude Haiku 4.5 as the default AI model for test execution. It is the fastest option while maintaining high-quality results.
| Model | Speed | Best For |
| --- | --- | --- |
| Claude Haiku 4.5 (default) | Fastest | Most tests - recommended for day-to-day testing |
| Claude Sonnet 4.5 | Moderate | Complex scenarios requiring deeper reasoning |
The AI model handles all test execution decisions: navigating your application, filling forms, clicking buttons, and verifying outcomes. You write test goals in natural language, and the model figures out how to achieve them.
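For example, a test goal can be as plain as the one below (the scenario is illustrative):

Log in as a returning customer, add the Pro plan to the cart, and complete
checkout with the saved credit card. Verify that the order confirmation page
shows the correct total and that the order appears in the account's order
history.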
You can override the agent per-test in the Advanced settings when creating or editing a test. See Creating Tests for details.

Get Started