Types of Knowledge
Knowledge Items
Add documentation, links, and text content that AI assistants can reference when generating tests and answering questions. Knowledge items are stored at the project level and shared across all team members.

What You Can Add:
- URLs: Link to documentation sites, help pages, API docs
- Text Content: Add custom instructions, project notes, or guidelines
Global Context
Global context is included in the AI agent’s prompt for all test executions and chat conversations in your project. Use this to specify project-specific behaviors, requirements, or guidelines that should always apply.

How It Works:
- Loaded once when a test starts executing
- Passed to the AI on every decision it makes during test execution
- Changes take effect immediately for new test runs (not mid-run)
- Applies to both test execution and chat conversations
| Use Case | Example | What It Achieves |
|---|---|---|
| Domain knowledge | “This is a railway booking system for American routes. Stations use AMTRAK codes.” | Agent understands domain terminology |
| Quality standards | “Tests should fail if obvious typos or broken layouts are detected.” | Agent enforces quality expectations |
| Data requirements | “When creating test users, always use Swedish names and addresses.” | Agent generates appropriate test data |
| UI conventions | “Red buttons indicate destructive actions - verify confirmation dialogs appear before clicking.” | Agent handles UI patterns correctly |
| Auth flows | “Login requires SMS 2FA. The code will be available in the test inbox.” | Agent knows to check inbox for 2FA |
| Timing expectations | “Page loads may take up to 10 seconds in staging. Wait for loading indicators to disappear.” | Agent waits appropriately |
| Known limitations | “The checkout flow is broken on mobile viewports - skip mobile tests for checkout.” | Agent avoids known issues |
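As a rough illustration of how statements like these reach the agent, the sketch below prepends the global context to every decision prompt during a run. This is a minimal, hypothetical Python sketch: the function names, prompt layout, and sample steps are assumptions for illustration, not the platform’s internal implementation.

```python
# Hypothetical sketch: global context is loaded once per test run and then
# included in every decision prompt. All names here are illustrative.

GLOBAL_CONTEXT = (
    "This is a railway booking system for American routes. "
    "Stations use AMTRAK codes. Login requires SMS 2FA; "
    "the code will be available in the test inbox."
)

def build_decision_prompt(global_context: str, page_state: str, step: str) -> str:
    """Compose the prompt for a single agent decision."""
    return (
        f"Project rules (always apply):\n{global_context}\n\n"
        f"Current page:\n{page_state}\n\n"
        f"Current test step:\n{step}\n"
        "Decide the next browser action."
    )

def run_test(steps: list[str]) -> None:
    # Loaded once when the run starts; edits take effect only for new runs.
    context = GLOBAL_CONTEXT
    for step in steps:
        page_state = "<snapshot of the current page>"  # placeholder
        prompt = build_decision_prompt(context, page_state, step)
        print(prompt)  # in the real agent, this prompt goes to the model

run_test(["Search for a route from NYP to WAS", "Complete SMS 2FA login"])
```

Because the same text rides along on every single decision, short and specific rules pay off, which is what the tips below emphasize.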
Tips for writing global context:
- Keep it concise but specific - the agent sees this on every action
- Focus on universal rules that apply to all tests
- Include domain-specific terminology and behaviors
- Mention any unusual UI patterns or interactions
- Update when your product behavior changes

What not to put in global context:
- Test-specific instructions - put these in test steps instead
- Credentials - use Configs instead
- URLs - use Applications and Environments instead
- Temporary workarounds - document in test steps where needed
Project Summary
An AI-generated summary of your website’s functionality and structure, created by analyzing your pages and user interactions. This summary updates automatically as your site evolves.

What’s Included:
- Overview of your platform’s purpose and main functionality
- Key features and user flows
- Important pages and their purposes
- Integration points and external services
To refresh the summary:
1. Navigate to Settings → Knowledge.
2. Find the “Project Summary” section on the page.
3. Click “Refresh” to generate an updated summary based on your latest site analysis.
The summary requires initial site analysis to be completed. If no summary
appears, ensure you’ve run some tests or site analysis first.
How Knowledge Works
Knowledge has different roles during test creation versus test execution:

| Knowledge Type | Test Creation (Chat) | Test Execution |
|---|---|---|
| Knowledge Items | Used - AI searches for relevant docs | Not used |
| Knowledge Graph | Used - provides site structure context | Not used |
| Global Context | Used - domain understanding | Used - every AI decision |
| Project Summary | Used - overall context | Not used |
| Uploaded Files | Used - in that chat session | Not used |
Uploaded files are temporary: Files you upload to chat are only available
within that specific conversation and their content is compressed after
approximately 3 message exchanges. For information you need long-term or
across multiple chats, add it to your Project
Knowledge instead, or ask
the AI Chat to generate a knowledge item based on the conversation for you.
During test creation (chat):
- AI searches knowledge items for relevant documentation and specifications
- Global context helps the AI understand your product domain
- Project summary provides overall context about your application
- Chat assistants use knowledge to answer questions and suggest tests
- The AI generates test steps based on all the above context

During test execution:
- Only the generated test steps, goal, expected result, and global context are used
- Knowledge items and knowledge graph are not accessed
- Configs provide credentials, environment URLs, and test files
- Tests execute based on the instructions that were created, not the original documentation
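To make that split concrete, here is a small hypothetical sketch of the two context payloads. The dictionary keys are descriptive labels chosen for this illustration, not real API fields of the product.

```python
# Illustrative only: what the AI can draw on at each stage.

creation_context = {
    "knowledge_items": "semantic search results from project knowledge",
    "knowledge_graph": "site structure context",
    "global_context": "project-wide rules",
    "project_summary": "AI-generated overview of the application",
    "uploaded_files": "files attached to this chat session only",
}

execution_context = {
    "test_steps": "the steps generated during creation",
    "goal_and_expected_result": "what the test should verify",
    "global_context": "project-wide rules (the only shared knowledge type)",
    "configs": "credentials, environment URLs, and test files",
}

# Knowledge items, the knowledge graph, and the project summary are absent at
# execution time, so updating them cannot change tests that already have steps.
print(set(creation_context) & set(execution_context))  # {'global_context'}
```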
This separation means you can safely update knowledge without affecting tests
that are already running or scheduled. Global context is the only knowledge
type that affects both creation and execution.
Project Knowledge vs. Chat-Local Files
Understanding the scope of your knowledge helps you organize it effectively.

Project Knowledge (Settings → Knowledge):
- Shared across all chats in the project
- Accessible to all team members
- Permanent and searchable by semantic search
- Best for: Official documentation, stable specifications, product rules

Chat-Local Files (uploaded in chat):
- Only exists in that specific chat conversation
- Other users’ chats cannot access it
- Not added to project knowledge automatically
- Best for: Experimental specs, PR descriptions, feature branch docs, exploratory work
Semantic Search
Knowledge items are automatically indexed using embeddings, allowing the AI to find relevant information based on meaning rather than exact keywords. When you ask a question or generate tests, the system:
- Analyzes your request to understand intent
- Searches knowledge items for semantically similar content
- Provides the most relevant information to the AI assistant
- Uses clear file names to improve search accuracy
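For intuition, the toy sketch below shows the retrieval idea: knowledge items and the incoming question are turned into vectors, and the items closest in meaning are ranked first. The embeddings are made-up numbers and the item titles are invented examples; a real system would call an embedding model rather than hard-code vectors.

```python
# Toy illustration of embedding-based semantic search (cosine similarity).
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Pretend embeddings for three knowledge items.
knowledge_items = {
    "Checkout API documentation": [0.9, 0.1, 0.2],
    "Brand style guide":          [0.1, 0.8, 0.3],
    "2FA login flow spec":        [0.2, 0.2, 0.9],
}
question = [0.85, 0.15, 0.25]  # e.g. "How do I create an order?"

ranked = sorted(
    knowledge_items.items(),
    key=lambda item: cosine_similarity(question, item[1]),
    reverse=True,
)
print([title for title, _ in ranked])  # most relevant item first
```

This is also why clear, descriptive titles matter: they become part of what gets embedded and matched against your question.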
Adding Knowledge Items
1. Navigate to Settings → Knowledge.
2. Click “Add Knowledge” to create a new knowledge item.
3. Choose your content type (URL or Text) based on what you want to add.
4. Fill in the title and content for your knowledge item. Use clear, descriptive titles that reflect the content - this helps both the AI search and allows you to reference specific knowledge by name in chat (e.g., “use the API documentation knowledge”).
5. Save to process and index the content for use by AI assistants.
Organization Tips
- Use Clear Titles: Make knowledge items easy to find with specific, descriptive names
- Keep Content Current: Regularly review and update documentation links
- Remove Outdated Info: Delete or update knowledge that no longer reflects your product
- Test Your Knowledge: Ask the chat assistant questions to verify it finds the right information
- Avoid Contradictions: When updating specs, remove old versions to prevent confusion
Troubleshooting
Knowledge not being found in chat:
- Ensure content has been fully processed (check status indicators)
- Try different search terms or questions
- Verify the content is relevant to your question
- Use clear, descriptive titles for better semantic search results

Global context not being applied:
- Check that you’ve saved the global context settings
- Verify the context applies to the type of test you’re running
- Review test results to see if context guidelines are being followed

Project summary missing or outdated:
- Ensure you’ve completed initial site analysis
- Try running a few tests to build the knowledge graph
- Manually refresh the summary after adding new functionality

Team members can’t see an uploaded file:
- Remember: files uploaded in chat are conversation-specific
- To share with team, add the file to Project Knowledge instead
- Other team members need to upload it to their own chats, or use project knowledge