# Running Tests

Execution walkthrough
This tutorial shows the two run-creation paths that matter most in AppraiseJS: broad selection by tags and precise selection by explicit test cases.
## Before you begin

- At least one test case is saved.
- A target environment exists.
- You know whether you want broad selection by tags or precise selection by named cases.
## What you’ll finish with

A completed run with explainable scope. You will know when to choose `By Tags`, when to choose `By Test Cases`, and how to verify that AppraiseJS ran the intended scope.
## Test run flows

### Choosing a run type

| Mode | Use when | Example |
|---|---|---|
| `By Tags` | You want a reusable group of scenarios | `@smoke`, `@auth` |
| `By Test Cases` | You need specific named test cases | Login, Checkout |
> ⚠️ **Key Decision:** Your choice between `By Tags` and `By Test Cases` defines the scope of your run. Make this decision before configuring the rest of the run.
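As a mental model for the difference between the two modes, the sketch below shows how a runner could resolve each selection into a concrete set of cases. This is illustrative only, assuming a simple tagged-case shape; the types and function names are not the AppraiseJS API.

```typescript
// Illustrative model only — not the AppraiseJS API.
interface TestCase {
  name: string;
  tags: string[];
}

// By Tags: a case is in scope if it carries at least one requested tag.
function selectByTags(cases: TestCase[], tags: string[]): TestCase[] {
  return cases.filter(c => c.tags.some(t => tags.includes(t)));
}

// By Test Cases: only the explicitly named cases are in scope.
function selectByNames(cases: TestCase[], names: string[]): TestCase[] {
  return cases.filter(c => names.includes(c.name));
}

const suite: TestCase[] = [
  { name: "Login", tags: ["@smoke", "@auth"] },
  { name: "Checkout", tags: ["@smoke"] },
  { name: "Profile", tags: ["@auth"] },
];

console.log(selectByTags(suite, ["@auth"]).map(c => c.name));
// → ["Login", "Profile"]
console.log(selectByNames(suite, ["Login", "Checkout"]).map(c => c.name));
// → ["Login", "Checkout"]
```

Note how tag selection can pull in cases you did not name, while explicit selection never grows beyond the list you typed — that asymmetry is the heart of the scoping decision.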
## Running a test

1. Open `Test Runs -> Create`.
2. Decide between `By Tags` and `By Test Cases`:
   - Choose `By Tags` if you want a reusable slice such as `@smoke` or `@auth`.
   - Choose `By Test Cases` if you need to run a very specific set of saved scenarios.
3. Set a run name that will still make sense later in the reports list.
4. Select the environment, browser engine, and worker count.
5. Submit the run and open the run details page immediately.
6. Watch live status, then confirm the final scope, status, and linked report after completion.
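The choices in the steps above can be summarized as a single run definition. The shape below is a hypothetical sketch of that definition — every field name here is an assumption for illustration, not the actual AppraiseJS payload:

```typescript
// Hypothetical run definition mirroring the steps above.
// All field names are illustrative, not the AppraiseJS payload.
type RunMode =
  | { kind: "byTags"; tags: string[] }
  | { kind: "byTestCases"; names: string[] };

interface RunConfig {
  name: string;                // should still make sense later in the reports list
  mode: RunMode;               // the scope decision from step 2
  environment: string;         // target environment from step 4
  browserEngine: "chromium" | "firefox" | "webkit";
  workers: number;             // parallel worker count
}

const smokeRun: RunConfig = {
  name: "pre-release smoke (staging)",
  mode: { kind: "byTags", tags: ["@smoke"] },
  environment: "staging",
  browserEngine: "chromium",
  workers: 4,
};

console.log(`${smokeRun.name}: ${smokeRun.mode.kind}`);
```

Modeling the mode as a discriminated union makes the key decision explicit: a run is either tag-scoped or case-scoped, never an ambiguous mix.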
AppraiseJS separates these modes so you can switch between repeatable regression runs and targeted debugging without restructuring your test suite.
Choosing between `By Tags` and `By Test Cases` upfront ensures your run scope is intentional and avoids confusion when reviewing results later.
Here is a demo of running tests by tags.
Here is a demo of running tests by test cases.
## Checkpoint

- You can explain why this run used tags or explicit case selection.
- The run details page shows the environment, browser, and current result clearly.
- You can navigate from the run to the resulting report without losing context.
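One way to make the checkpoint concrete is to compare the case names a completed run reports against the scope you intended. A minimal sketch, assuming you can read the executed case names from the run details page (the helper name is invented for illustration):

```typescript
// Hypothetical checkpoint helper: did the run execute exactly the intended scope?
function scopeMatches(executed: string[], intended: string[]): boolean {
  const ran = new Set(executed); // dedupe retried cases
  return intended.length === ran.size && intended.every(name => ran.has(name));
}

console.log(scopeMatches(["Login", "Checkout"], ["Checkout", "Login"])); // → true  (order does not matter)
console.log(scopeMatches(["Login"], ["Login", "Checkout"]));            // → false (Checkout never ran)
```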
## Common mistakes

- Using `By Tags` without confirming the relevant cases actually carry the intended tags.
- Choosing `By Test Cases` for a slice that should have been a reusable regression or smoke tag.
- Interpreting a queued or running state as a failure before the process completes.
- Ignoring environment mismatch when the run result is inconsistent with the expected application state.
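The first mistake above is easy to catch before submitting: list the cases you expect a tag to cover, then check which of them actually carry it. A hedged sketch, assuming you have exported or copied the case names and tags — `missingTag` is an invented helper, not an AppraiseJS feature:

```typescript
// Hypothetical pre-flight check: which expected cases lack the tag?
interface CaseTags {
  name: string;
  tags: string[];
}

function missingTag(cases: CaseTags[], expected: string[], tag: string): string[] {
  // Names of cases that actually carry the tag.
  const tagged = new Set(cases.filter(c => c.tags.includes(tag)).map(c => c.name));
  // Expected cases that either lack the tag or do not exist at all.
  return expected.filter(name => !tagged.has(name));
}

const saved: CaseTags[] = [
  { name: "Login", tags: ["@smoke", "@auth"] },
  { name: "Checkout", tags: [] }, // forgot to tag this one
];

console.log(missingTag(saved, ["Login", "Checkout"], "@smoke"));
// → ["Checkout"] — tag it (or switch to By Test Cases) before creating the run
```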
## Next step

Continue to Viewing Reports to turn execution output into actionable debugging signals.