What types of issues does QA.tech detect?
We continuously improve our issue detection and can export issues to Linear, Jira, and Trello. Some examples of issues we detect:
Failed tests
When a test cannot be completed, we create a failed test issue. This often indicates a UX problem or functional issue in your application that needs investigation.
Console errors
We capture JavaScript console errors that occur during test execution. These are typically filed as low-severity issues but can indicate bugs or misconfigurations.
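To give a sense of what gets captured, here is a minimal sketch, assuming Playwright as the browser driver (not QA.tech's internal implementation), of how error-level console messages and uncaught page exceptions can be collected during a session:

```typescript
import { chromium } from 'playwright';

// Illustrative only: collect console errors while visiting a page.
async function collectConsoleErrors(url: string): Promise<string[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const errors: string[] = [];

  // 'console' fires for console.* calls; keep only error-level messages.
  page.on('console', (msg) => {
    if (msg.type() === 'error') {
      errors.push(msg.text());
    }
  });

  // 'pageerror' fires for uncaught exceptions thrown in the page.
  page.on('pageerror', (err) => errors.push(err.message));

  await page.goto(url);
  await browser.close();
  return errors;
}
```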
Accessibility issues
We automatically check for WCAG 2.0, 2.1, and 2.2 violations using axe-core on every page your tests visit.
How it works
- Scans run automatically during test execution (on page load, after navigation, after clicks)
- Issues appear in the dashboard with the exact URL, HTML element, and WCAG documentation
- The same issue found across multiple test runs is grouped together
WCAG coverage
- ✅ WCAG 2.0 (Level A, AA, AAA)
- ✅ WCAG 2.1 (Level A, AA)
- ✅ WCAG 2.2 (Level AA)
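These scans happen automatically inside QA.tech. If you want to reproduce a comparable check locally, a rough sketch using the open-source axe-core engine via @axe-core/playwright might look like the following; the URL, tag list, and output handling are illustrative assumptions, not our exact configuration:

```typescript
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

// Illustrative local scan; the tag list mirrors the WCAG coverage above.
async function scanPage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag2aaa', 'wcag21a', 'wcag21aa', 'wcag22aa'])
    .analyze();

  // Each violation carries the rule id, the affected HTML elements,
  // and a link to the relevant WCAG documentation.
  for (const violation of results.violations) {
    for (const node of violation.nodes) {
      console.log(violation.id, node.target, violation.helpUrl);
    }
  }

  await browser.close();
}
```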
Limitations
- Maximum 10 issues captured per test session (see limits)
- Color contrast not checked (use browser DevTools for this)
- Content in iframes not scanned
- Certain rules disabled: `region`, `landmark-one-main` (see the configuration sketch after this list)
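To mirror these limitations in a local run of the sketch above, @axe-core/playwright's builder can disable specific rules and leave iframe content out of scope. As before, this is an illustrative configuration rather than QA.tech's exact setup:

```typescript
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

// Illustrative: skip the same rules QA.tech disables and leave iframes out of scope.
async function scanWithLimitations(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const results = await new AxeBuilder({ page })
    .disableRules(['region', 'landmark-one-main'])
    .exclude('iframe')
    .analyze();

  await browser.close();
  return results.violations;
}
```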
Accessibility scans are integrated into test execution. To schedule regular accessibility checks, create a test plan and schedule it to run daily/weekly.
What we don’t detect
Because we interact with your product as a user through a browser:
- Server-side errors: Use a monitoring tool like Sentry or BugSnag for backend exceptions (a minimal setup is sketched below)
- Network request failures: We log these for debugging but don’t create issues for them
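As a point of reference, a minimal backend error-monitoring setup with Sentry's Node SDK might look like the following; the DSN and the `handleRequest` helper are illustrative assumptions, and BugSnag or another tool would work just as well:

```typescript
import * as Sentry from '@sentry/node';

// Illustrative only: initialize Sentry early in your backend process.
// The DSN is a placeholder supplied by your own Sentry project.
Sentry.init({ dsn: process.env.SENTRY_DSN });

// Hypothetical wrapper: report exceptions that browser-level tests
// cannot observe, then rethrow so normal error handling continues.
async function handleRequest(work: () => Promise<void>) {
  try {
    await work();
  } catch (err) {
    Sentry.captureException(err);
    throw err;
  }
}
```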