# Test workflows
Testing allows you to run your workflow directly from the builder to verify its behavior before publishing. You can execute the entire workflow or individual nodes, mock outputs for specific nodes, and see results in real time on the graph.
Workflow testing is part of the Workflows (Beta) feature, which is currently in closed beta with limited availability.
## Overview
When building a workflow, you often need to verify that your nodes are configured correctly, that data flows as expected between steps, and that conditions route to the right branches. Instead of publishing and triggering the workflow each time, you can run a test execution directly from the workflow builder.
A test run:
- Executes the current draft of the workflow (not the last published version).
- Creates a real workflow run with source `TEST`, visible in the run history.
- Shows live status indicators on the graph as nodes execute.
- Supports mocked outputs so you can stub upstream data and test specific parts of your flow.
Test runs currently require your workflow to have a self-service trigger. Event-triggered workflows cannot be tested using this feature yet.
## Running a test
### Execute the full workflow
To test your entire workflow from the builder:
1. Open a workflow in the visual editor.
2. Click the Execute workflow button in the top header.
3. If your self-service trigger has user inputs and no test data has been set yet, a form dialog will appear for you to provide the trigger inputs.
4. The test run starts, and you can watch execution progress in real time on the graph.
### Execute a single node
You can also test up to a specific node rather than the full workflow. This runs only the selected node and its ancestors (all nodes in the path from the trigger to the selected node):
1. Click on a node in the graph to open the test panel.
2. Click the Execute node button.
3. Port will execute the trigger and all upstream nodes leading to the selected node, then stop.
This is useful when you want to focus on a specific part of your workflow without running the entire flow.
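The execution scope described above is simply the selected node's ancestor set in the workflow graph. The following is a minimal sketch of how that set can be computed, assuming a plain `(source, destination)` edge list rather than Port's internal graph model:

```python
from collections import deque

def ancestors(edges: list[tuple[str, str]], target: str) -> set[str]:
    """Collect every upstream node of `target` in a DAG given as (source, dest) edges."""
    parents: dict[str, list[str]] = {}
    for src, dst in edges:
        parents.setdefault(dst, []).append(src)

    # Breadth-first walk backwards from the target through parent links.
    seen: set[str] = set()
    queue = deque(parents.get(target, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(parents.get(node, []))
    return seen

# Example: trigger -> create -> condition -> notify
edges = [("trigger", "create"), ("create", "condition"), ("condition", "notify")]
assert ancestors(edges, "condition") == {"trigger", "create"}
```

Executing up to "condition" in this example would therefore run the trigger and "create" first, then "condition" itself, and stop before "notify".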
## Test data (mocked outputs)
Every node supports test outputs — a JSON object that you can set to mock the node's output instead of running its actual logic. When a node has test outputs defined, the orchestrator skips the node's real execution and immediately completes it with the mocked data.
This is useful for:
- Stubbing external services — Avoid calling real APIs during testing by providing sample response data.
- Testing downstream logic — Set mock outputs on upstream nodes so you can focus on verifying the behavior of a specific downstream node.
- Iterating quickly — Skip slow or rate-limited operations and use cached data instead.
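The skip behavior can be sketched as a simple guard in the node runner. The field and function names below are illustrative, not Port's actual API:

```python
def run_node(node: dict, inputs: dict) -> dict:
    """Run a node, preferring mocked test outputs over real execution."""
    if node.get("test_outputs") is not None:
        # Mocked: the orchestrator completes the node immediately
        # with the stubbed data and never calls the real action.
        return node["test_outputs"]
    return node["action"](inputs)

# A stubbed "fetch" node never hits the external service.
fetch = {"test_outputs": {"entity": {"title": "My Service"}}}
assert run_node(fetch, {}) == {"entity": {"title": "My Service"}}

# A node without test outputs runs its real action.
real = {"test_outputs": None, "action": lambda inputs: {"ok": True}}
assert run_node(real, {}) == {"ok": True}
```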
### Setting test outputs on a node
1. Click on a node in the graph to open the test panel.
2. Navigate to the Data out tab.
3. Enter a JSON object representing the output you want the node to produce.
4. When you execute the workflow (or this node), the mocked output will be used instead of running the node's real action.
The test output must be a valid JSON object matching the structure expected by downstream nodes. For example, if a downstream node references `{{ .outputs["fetch_service"].entity.title }}`, the test output for `fetch_service` should contain:

```json
{
  "entity": {
    "title": "My Service",
    "identifier": "my-service",
    "properties": {
      "status": "running",
      "region": "us-east-1"
    }
  }
}
```
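To see why the structure matters, here is a simplified sketch of how a downstream reference walks the mocked object. The real `{{ .outputs[...] }}` template syntax is richer than this plain key-path lookup:

```python
def resolve(outputs: dict, path: list):
    """Walk a mocked outputs object along a key path, mimicking how a
    reference like {{ .outputs["fetch_service"].entity.title }} reads data.
    Raises KeyError if the mocked object is missing an expected key."""
    value = outputs
    for key in path:
        value = value[key]
    return value

outputs = {
    "fetch_service": {
        "entity": {"title": "My Service", "identifier": "my-service"}
    }
}
assert resolve(outputs, ["fetch_service", "entity", "title"]) == "My Service"
```

If the mocked object omits a key that a downstream node references, the lookup fails — which is exactly why the test output must mirror the real output's shape.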
When you fill in the trigger form during a test run, those values are saved as the trigger node's test outputs. On subsequent test runs, Port will reuse them automatically without prompting you again. You can edit or clear them from the trigger node's Data out tab.
## Understanding the test panel
When you click on a node in the graph, a test panel opens with two main sections:
### Data in
The Data in section shows the outputs from each ancestor (upstream) node that feed into the selected node. For each source node, you can see:
- The output data from the last test run, or the mocked test output if one was set.
- If no data is available yet, an option to execute previous nodes to populate the upstream context.
This helps you understand what data is available for the selected node to consume and verify that upstream outputs match your expectations.
### Data out
The Data out section has two purposes:
- View actual output — After a test run, see the real output that the node produced.
- Set test outputs — Enter mocked JSON data that will be used instead of executing the node's real logic.
If the node has both mocked test data and a real run output, the panel indicates when the mocked data differs from the actual output.
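That indication boils down to a deep equality check between the mocked and actual outputs. A rough sketch, not Port's implementation:

```python
def is_modified(mocked, actual) -> bool:
    """True when mocked test data is set and differs from the last real output.
    Python's == compares dicts and lists recursively, matching JSON semantics."""
    return mocked is not None and mocked != actual

assert is_modified({"status": 200}, {"status": 201})      # mock diverges
assert not is_modified({"status": 201}, {"status": 201})  # mock matches
assert not is_modified(None, {"status": 201})             # no mock set
```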
## Visual feedback on the graph
During and after a test run, the workflow graph provides visual indicators for each node and edge:
### Node status indicators
| Status | Visual indicator | Description |
|---|---|---|
| Default | Standard appearance | Node has not been part of a test run. |
| In progress | Loading spinner | Node is currently executing. |
| Success | Green indicator | Node completed successfully. |
| Failed | Red indicator | Node execution failed. |
| Modified | Purple indicator | Node has mocked test outputs that differ from the last actual run output. |
## Example workflow
Let's walk through testing a workflow that provisions a cloud resource:
Workflow structure:
- Self-service trigger — Collects a `resourceName` and `environment` from the user.
- Create resource — Sends a webhook to provision the resource.
- Condition — Checks if the resource was created successfully.
- Notify success — Sends a Slack notification on success.
- Notify failure — Sends an alert on failure.
Testing steps:

1. Open the workflow in the builder and click Execute workflow.
2. Fill in the trigger form with test values:
   - `resourceName`: `test-database`
   - `environment`: `staging`
3. Watch the graph as each node executes. The trigger and "Create resource" nodes turn green as they succeed.
4. To avoid hitting the real provisioning API during testing, click on the Create resource node and set test outputs in the Data out tab:

   ```json
   {
     "response": {
       "status": 201,
       "data": {
         "resourceId": "res-12345",
         "resourceUrl": "https://console.cloud.example.com/res-12345"
       }
     }
   }
   ```

5. Run the test again. The "Create resource" node completes instantly with the mocked data, and the condition node evaluates against it.
6. Click on the Condition node to verify it routes to the correct branch based on the mocked response.
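The branch check in that final step can be sketched as a predicate over the mocked payload. The field names and success criterion (`status == 201`) come from the example above, not from Port's condition syntax:

```python
def route(output: dict) -> str:
    """Pick the branch the condition node would take for a given payload."""
    if output["response"]["status"] == 201:
        return "notify_success"
    return "notify_failure"

mocked = {"response": {"status": 201, "data": {"resourceId": "res-12345"}}}
assert route(mocked) == "notify_success"
assert route({"response": {"status": 500}}) == "notify_failure"
```

Changing the mocked `status` to an error code and re-running the test lets you verify the failure branch without ever breaking the real provisioning API.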
## Limitations
The following testing capabilities are not yet available:
- Event-triggered workflows — Only self-service trigger workflows can be tested.
- Partial re-runs — You cannot restart a test from a specific node mid-run; the test always starts from the trigger.