Best Practices for Using TestGenie

Follow these guidelines to get the best results from TestGenie — from AI test case generation through execution and reporting.


Configuration: Issue Mapping

Fill in all three sections before running executions

The Defect Orchestration and Execution Planning sections must be configured before the ACTIONS column and the Export Report option appear in the Execution Dashboard. Do this once per project during initial setup.

Use dedicated issue types where possible

If your project has custom issue types (e.g., "Test Case", "Test Plan", "Test Execution"), use them in Issue Mapping. This keeps test management issues visually distinct in Jira boards and avoids confusion with general tasks.

The link types you choose in Issue Mapping appear in Jira issue views as standard links. Choose names that make sense to the whole team — e.g., "is tested by" for Req ↔ Test Case, "blocks" for Defect ↔ Execution.
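Because these are ordinary Jira issue links, they can also be created or audited outside the UI through the Jira Cloud REST API. The sketch below builds the request body for the standard POST /rest/api/3/issueLink endpoint; the issue keys and the "Tested by" link-type name are illustrative placeholders and must match what your Jira site actually defines.

```python
import json

def build_link_payload(link_type: str, inward_key: str, outward_key: str) -> dict:
    """Build the request body for Jira Cloud's POST /rest/api/3/issueLink endpoint.

    The link-type name must exactly match a link type configured in Jira.
    """
    return {
        "type": {"name": link_type},
        "inwardIssue": {"key": inward_key},
        "outwardIssue": {"key": outward_key},
    }

# Hypothetical keys: a "Tested by" link between a requirement and a test case.
payload = build_link_payload("Tested by", "REQ-101", "TC-55")
print(json.dumps(payload, indent=2))
```

Sending this body (with authentication) to the issueLink endpoint creates the same link the app's Issue Mapping configuration produces, which makes bulk audits of requirement-to-test coverage scriptable.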


AI Test Case Generation

Write clear scenario descriptions

Be specific about user roles, inputs, expected outputs, and both main and alternative flows.

  • Less helpful: "Test login page."

  • More helpful: "A registered user attempts to log in with valid credentials, invalid credentials, and a locked account. The system should respond with appropriate messages and allow only valid logins."

Clearer scenarios produce higher-quality generated test cases.
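As a rough illustration of this checklist, a hypothetical lint function can flag scenario descriptions that never mention a user role, an expected outcome, or a negative path. The keyword lists are assumptions to tune for your team's vocabulary, not part of TestGenie.

```python
import re

# Illustrative keyword patterns; adjust them to your domain's terminology.
CHECKS = {
    "user role":        r"\b(user|admin|customer|guest|registered)\b",
    "expected outcome": r"\b(should|expect|must|respond|allow|reject)\b",
    "negative path":    r"\b(invalid|locked|error|fail|expired|wrong)\b",
}

def scenario_gaps(description: str) -> list:
    """Return the checklist items a scenario description does not mention."""
    text = description.lower()
    return [name for name, pattern in CHECKS.items()
            if not re.search(pattern, text)]

print(scenario_gaps("Test login page."))
# → ['user role', 'expected outcome', 'negative path']
print(scenario_gaps("A registered user attempts to log in with valid and "
                    "invalid credentials; the system should reject invalid logins."))
# → []
```

The vague scenario fails every check, while the specific one passes, mirroring the "less helpful" and "more helpful" examples above.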

Use requirement issues with rich descriptions

Ensure Stories, Features, and Bugs have clear acceptance criteria and descriptive summaries. The Rovo agent uses this content to generate more accurate and complete test cases.

Define preconditions atomically

Make each precondition a single, reusable state — for example, "User exists in system" rather than "User exists, is logged in, and has an active subscription." Atomic preconditions are easier to reuse across test cases and generate more precise test steps.
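One way to keep preconditions atomic and reusable is to maintain them as a named catalogue that test cases reference instead of restating. A minimal sketch, where the names and structure are hypothetical rather than a TestGenie format:

```python
# Each entry is a single, reusable state — one precondition, one fact.
PRECONDITIONS = {
    "user_exists":         "User exists in the system",
    "user_logged_in":      "User is logged in",
    "subscription_active": "User has an active subscription",
}

# Test cases reference preconditions by name instead of restating them,
# so the compound "exists, logged in, and subscribed" becomes three atoms.
test_case = {
    "summary": "Renew an active subscription",
    "preconditions": ["user_exists", "user_logged_in", "subscription_active"],
}

def expand(case: dict) -> list:
    """Resolve a test case's precondition names to their full wording."""
    return [PRECONDITIONS[name] for name in case["preconditions"]]

print(expand(test_case))
```

A second test case that only needs a logged-in user reuses "user_logged_in" alone, which is exactly what a compound precondition would have prevented.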

Review planned test cases before generating drafts

Always review the planned list (the summaries) before asking the agent to generate full drafts. Check for:

  • Missing critical paths or edge cases

  • Duplicates or overly similar test cases

  • Incorrect assumptions about system behaviour

If something is wrong, adjust the scenario or preconditions and let the agent regenerate — before any Jira issues are created.


Test Plans

Create one plan per sprint or release

Organise test cases into Test Plans that map to your sprint or release cycle. This makes the Execution Dashboard's Active Runs sidebar meaningful — each run corresponds to a specific sprint or release candidate.

Assign test cases to the right people before executing

Use the assign capability in Test Plans to allocate test cases to team members before clicking Execute Plan. This avoids ambiguity about who is responsible for each test result during a run.

Keep each plan focused

Avoid adding every test case in the project to every plan. Focus each plan on the feature area or sprint being validated. Smaller, focused plans produce cleaner execution reports.


Test Execution

Mark results as you test, not at the end

Update the TEST RESULT column in real time as each test is executed. This keeps the stat cards accurate and gives the team live visibility during a run.

Link failures to defects immediately

When you mark a test as Failed, link it to the defect in Jira using the ACTIONS column immediately. Do not defer this — the defect trail is most accurate when created at the point of failure.
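The ACTIONS column handles this in the UI; teams that also script around Jira can raise the defect through the standard Jira Cloud POST /rest/api/3/issue endpoint. Below is a sketch of the request body, with the project key, summary, and test-case key as illustrative placeholders; the description uses the Atlassian Document Format that API version 3 expects.

```python
def build_defect_payload(project_key: str, summary: str, failed_test_key: str) -> dict:
    """Build the request body for Jira Cloud's POST /rest/api/3/issue endpoint:
    a Bug that records which test case failed.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": summary,
            # Jira Cloud API v3 descriptions use Atlassian Document Format.
            "description": {
                "type": "doc",
                "version": 1,
                "content": [{
                    "type": "paragraph",
                    "content": [{
                        "type": "text",
                        "text": f"Raised from failed test {failed_test_key}.",
                    }],
                }],
            },
        }
    }

# Hypothetical project and test-case keys.
defect = build_defect_payload("PROJ", "Login fails for locked account", "TC-12")
print(defect["fields"]["summary"])
```

Recording the failed test's key in the defect at creation time preserves the same point-of-failure trail the ACTIONS column creates.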

Use Active Runs to track multiple environments

If you run the same plan against multiple environments (e.g., staging and production), create a separate execution run per environment. The Active Runs sidebar lets you track them side by side.


Reporting and Export

Export the report at the end of every sprint or release

Use Export Report after each completed run to create a shareable record of test results. Attach it to the release ticket or sprint retrospective for traceability.

Combine AI generation with QA expertise

Use TestGenie to quickly generate an initial test suite and discover test ideas you might have missed. Apply human QA judgement to refine the highest-value cases and add any specialised or domain-specific scenarios the AI may not capture.

TestGenie is an accelerator for QA — not a replacement for it.