Track DORA metrics

The four DORA (DevOps Research and Assessment) metrics are the industry standard for measuring software delivery performance: deployment frequency, lead time for changes, change failure rate, and mean time to recovery.
By tracking these metrics, you can identify areas for improvement and ensure your team is delivering high-quality software efficiently.

This guide walks you through building DORA metrics tracking in Port for deployment frequency, lead time for changes, change failure rate (CFR), and mean time to recovery (MTTR) at both the service and team level.

By the end of this guide, you will have a DORA metrics dashboard providing visibility into delivery performance across services and teams, with automatic tier classification (Elite, High, Medium, Low) based on DORA benchmarks.

Steps to build DORA metrics

  1. Validate your data model: ensure the foundational blueprints (Service, Team) and relations are in place and aligned. DORA metrics build on top of this foundation.
  2. Track deployments: create the deployment blueprint and map merged PRs to deployment entities. This is the recommended default strategy. For organizations that require a more customized approach, alternative strategies such as workflow runs, CI/CD pipelines, or releases are also supported.
  3. Track incidents (optional): connect PagerDuty (or another tool) to enable CFR and MTTR.
  4. Configure metrics: add aggregation and tier calculation properties for deployment frequency, lead time, CFR, and MTTR at the service and team level.
  5. Build the dashboard: create widgets to visualize DORA metrics across your organization.

Prerequisites

This guide assumes the following:

  • A Port account with the onboarding process completed.
  • A connected Git repository (GitHub, GitLab, or Azure Repos) linked to Port. Note: Other Git providers are supported, though this guide focuses on the three mentioned above.
  • An active Git integration: for GitHub users, ensure Port's GitHub integration or Port's GitHub Ocean integration is installed.
  • (Optional) Incident Management integration: To track change failure rate and MTTR metrics, a PagerDuty integration with the pagerdutyIncident blueprint is required.
Incident-dependent metrics

Change failure rate and MTTR require an incident management integration with incident entities linked to services. This guide uses PagerDuty as the example, but the same approach applies to other tools like OpsGenie, FireHydrant, or ServiceNow. You just need to adjust the blueprint identifier and property names to match your integration.
Without an incident integration, only deployment frequency and lead time metrics will be available.

Tracking deployments

Deployments refer to releasing new or updated code into various environments such as Production, Staging, or Testing.
Tracking deployments helps you understand how efficiently your team ships features and monitor release stability.

By default, this guide creates deployment entities from merged pull requests to the default branch, which is the simplest and most common approach. If your organization requires a more customized deployment tracking strategy (e.g., via CI/CD pipelines, workflow runs, or releases), see Alternative deployment tracking strategies below.

Deployments contribute to three key DORA metrics: deployment frequency, change failure rate, and lead time for changes.

Create the deployment blueprint

  1. Navigate to your Port Builder page.

  2. Click the + Blueprint button to create a new blueprint.

  3. Click on the {...} Edit JSON button in the top right corner.

  4. Paste the JSON for your Git provider and click Save:

    Deployment blueprint (click to expand)
    {
      "identifier": "deployment",
      "title": "Deployment",
      "icon": "Deployment",
      "description": "A production deployment created from a merged PR to the default branch",
      "schema": {
        "properties": {
          "deploymentStatus": {
            "title": "Deployment Status",
            "type": "string",
            "enum": ["Success", "Failed"],
            "enumColors": {
              "Success": "green",
              "Failed": "red"
            }
          },
          "environment": {
            "title": "Environment",
            "type": "string",
            "enum": ["Production", "Staging", "Development"],
            "enumColors": {
              "Production": "green",
              "Staging": "yellow",
              "Development": "blue"
            }
          },
          "createdAt": {
            "title": "Deployment Time",
            "type": "string",
            "format": "date-time"
          }
        },
        "required": []
      },
      "mirrorProperties": {
        "lead_time_for_changes_hours": {
          "title": "Lead Time for Changes (Hours)",
          "path": "pullRequest.cycle_time_hours"
        }
      },
      "calculationProperties": {},
      "aggregationProperties": {},
      "relations": {
        "service": {
          "target": "service",
          "title": "Service",
          "many": false,
          "required": false
        },
        "pullRequest": {
          "target": "githubPullRequest",
          "title": "Pull Request",
          "many": false,
          "required": false
        }
      }
    }

What you should see: After saving, the Deployment blueprint appears in your Builder with relations to both Service and your PR/MR blueprint, plus a lead_time_for_changes_hours mirror property.

Missing lead time

If you do not have lead time (cycle_time_hours) configured on your pull request / merge request blueprint, follow the relevant guide for your Git provider.

Alternatively, you can use the default leadTimeHours property that comes with some integrations and update the mirror property path to pullRequest.leadTimeHours.

Map deployments from merged PRs

The recommended approach is to create deployment entities automatically when pull requests are merged into the default branch. Each merged PR creates a deployment entity with the lead time calculated as the time from PR creation to merge. If your organization requires a more complex setup (e.g., workflow runs, CI/CD pipelines, releases, or custom API), see Alternative deployment tracking strategies below.

  1. Navigate to the data sources page in your Port portal.

  2. Select your Git integration.

  3. Add the mapping configuration for your provider:

    Deployment mapping from merged PRs (click to expand)
    Hardcoded values

    The deploymentStatus is hardcoded to Success and environment to Production in these examples. You can modify these values based on your requirements.

    - kind: pull-request
      selector:
        query: .base.ref == "main" and .state == "closed" and .merged_at != null
        states: ["open", "closed"]
        maxResults: 100
        since: 90
      port:
        entity:
          mappings:
            identifier: .head.repo.name + "-deploy-" + (.number | tostring)
            title: '"Deploy: " + .title'
            blueprint: '"deployment"'
            properties:
              environment: '"Production"'
              deploymentStatus: '"Success"'
              createdAt: .merged_at
            relations:
              service: .head.repo.name
              pullRequest: .head.repo.name + (.id|tostring)
    Match your pull request mapping

    Set pullRequest to the same expression as the identifier in your GitHub Ocean pull-request resource (often .head.repo.name + (.id|tostring)). Tune maxResults and since to limit how many closed pull requests sync. See GitHub Ocean examples and Migrate from the GitHub app.

    Default branch

    The mappings above filter for PRs merged to the main branch. If your repositories use a different default branch (e.g., master), update the filter accordingly.

  4. Click Save & Resync.

After resync, navigate to the Deployments page in your catalog. You should see deployment entities for each merged PR, linked to the corresponding service and pull request.
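The jq selector in the mapping above reduces to a simple predicate over the pull request payload. As a sketch (field names follow the GitHub API; main is the assumed default branch), only merged PRs targeting the default branch become deployments:

```python
def becomes_deployment(pr):
    """Mirrors the selector: .base.ref == "main" and .state == "closed" and .merged_at != null."""
    return (
        pr.get("base", {}).get("ref") == "main"
        and pr.get("state") == "closed"
        and pr.get("merged_at") is not None
    )

merged = {"base": {"ref": "main"}, "state": "closed", "merged_at": "2024-09-01T12:00:00Z"}
closed_without_merge = {"base": {"ref": "main"}, "state": "closed", "merged_at": None}
merged_to_feature = {"base": {"ref": "develop"}, "state": "closed", "merged_at": "2024-09-01T12:00:00Z"}

assert becomes_deployment(merged)
assert not becomes_deployment(closed_without_merge)   # closed but never merged
assert not becomes_deployment(merged_to_feature)      # wrong target branch
```

This is why a PR closed without merging, or merged into a feature branch, never produces a deployment entity.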

Alternative deployment tracking strategies

If PR/MR merges don't fit your workflow, Port supports several other deployment tracking methods.

Other deployment tracking strategies (click to expand)

Workflow/Job runs

Track deployments by monitoring workflow runs in your pipeline. The deployment status is set dynamically based on whether the workflow concluded successfully or failed.

- kind: workflow-run
  selector:
    query: >
      (.head_branch == "main") and
      (.name | test("deploy|CD"; "i"))
  port:
    entity:
      mappings:
        identifier: .head_repository.name + "-deploy-" + (.run_number | tostring)
        title: .head_repository.name + " Deployment via workflow"
        blueprint: '"deployment"'
        properties:
          environment: '"Production"'
          createdAt: .created_at
          # Map the workflow conclusion onto the blueprint's Success/Failed enum
          deploymentStatus: if .conclusion == "success" then "Success" else "Failed" end
        relations:
          service: .head_repository.name

CI/CD pipelines (Jenkins, CircleCI, Azure Pipelines, etc.)

CI/CD pipelines can report deployments to Port using Port's API as part of the pipeline execution. See the relevant guide for your CI/CD tool.

These integrations use search relations to map the deployment to the correct service based on the service's $title. See mapping relations using search queries for more details.

Releases/Tags (GitHub only)

- kind: release
  selector:
    query: (.target_commitish == "main") and (.name | test("Production"; "i"))
  port:
    entity:
      mappings:
        identifier: .release.name + "-" + .release.tag_name
        title: .release.name + " Deployment on release"
        blueprint: '"deployment"'
        properties:
          environment: '"Production"'
          createdAt: .release.created_at
          deploymentStatus: '"Success"'
        relations:
          service: .repo.name

Find more details about setting up GitHub integrations for releases and tags in Repositories, repository releases and tags.

Custom API

If your tool or workflow is not natively supported, you can create deployment entities directly via Port's API:

curl -X POST "https://api.port.io/v1/blueprints/deployment/entities?upsert=true&merge=true" \
  -H "Authorization: Bearer $YOUR_PORT_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "identifier": "custom-deployment-1234",
    "title": "Custom Deployment 1234",
    "properties": {
      "environment": "Production",
      "createdAt": "2024-09-01T12:00:00Z",
      "deploymentStatus": "Success"
    },
    "relations": {
      "service": "your-service-identifier"
    }
  }'

Replace $YOUR_PORT_API_TOKEN with your actual API token. See mapping relations using search queries for details.

Tracking incidents

Incidents are essential for tracking change failure rate (CFR) and mean time to recovery (MTTR). Effective incident tracking reveals how frequently deployments fail and how quickly teams resolve issues.

The steps below use PagerDuty as the example incident integration. If you use a different tool (OpsGenie, FireHydrant, ServiceNow, etc.), adapt the blueprint identifier (e.g., replace pagerdutyIncident with your incident blueprint) and property names accordingly.

Set up data model

Ensure that your PagerDuty incident blueprint is properly configured to map incidents to the correct services. Use the PagerDuty incident blueprint in the integration examples, and the default mapping configuration on the main PagerDuty page, as references when aligning your data model.

Add incident resolution time and recovery time properties:

  1. Navigate to your Port Builder page.

  2. Select the PagerDuty Incident blueprint.

  3. Click on the {...} button in the top right corner, and choose Edit JSON.

  4. Add the following properties:

    Additional PagerDuty incident properties (click to expand)
    "resolvedAt": {
    "title": "Incident Resolution Time",
    "type": "string",
    "format": "date-time",
    "description": "The timestamp when the incident was resolved"
    },
    "recoveryTime": {
    "title": "Time to Recovery",
    "type": "number",
    "description": "The time (in minutes) between the incident being triggered and resolved"
    }
  5. Click Save.

Add incident mapping config:

  1. Navigate to your Port Data Sources page.

  2. Select the PagerDuty data source.

  3. Add the following property mappings to the incident mapping section:

    Incident mapping for resolvedAt and recoveryTime (click to expand)
    resolvedAt: .resolved_at
    recoveryTime: >-
      (.created_at as $createdAt | .resolved_at as $resolvedAt |
       if $resolvedAt == null then null else
       ( ($resolvedAt | strptime("%Y-%m-%dT%H:%M:%SZ") | mktime) -
         ($createdAt | strptime("%Y-%m-%dT%H:%M:%SZ") | mktime) ) / 60 end)
  4. Click Save & Resync.
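The recoveryTime expression above is plain timestamp arithmetic. A minimal Python sketch of the same calculation (assuming PagerDuty-style UTC timestamps) shows what the jq produces:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"

def recovery_minutes(created_at, resolved_at):
    # Unresolved incidents have no recovery time, matching the jq null branch.
    if resolved_at is None:
        return None
    delta = datetime.strptime(resolved_at, FMT) - datetime.strptime(created_at, FMT)
    return delta.total_seconds() / 60

assert recovery_minutes("2024-09-01T12:00:00Z", "2024-09-01T13:30:00Z") == 90.0
assert recovery_minutes("2024-09-01T12:00:00Z", None) is None
```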

Syncing incidents

To sync incidents from PagerDuty, follow the PagerDuty guide. For other tools, follow the corresponding integration guide.

Automatic relations

The relation between the PagerDuty incident blueprint and the service blueprint is automatically created when you install the PagerDuty integration.

After resync, navigate to PagerDuty Incidents in your catalog. You should see incident entities appearing with resolvedAt and recoveryTime properties populated for resolved incidents, and each incident linked to the corresponding service.

Set up metrics

Now we'll add aggregation and calculation properties to compute DORA metrics and classify services and teams into performance tiers.

For each metric below, you'll add properties to both the Service and Team blueprints. To edit a blueprint's JSON:

Relation chain for team-level metrics

Team-level aggregation properties use a pathFilter that traverses the relation chain (e.g., deployment → service → team). For team-level metrics to populate, each service must have its team relation set to the appropriate team entity.

  1. Go to the Builder in your Port portal.

  2. Click on the blueprint you want to edit (Service or Team).

  3. Click on the {...} button in the top right corner, and choose Edit JSON.

  4. Add the properties shown below to the "aggregationProperties" and "calculationProperties" sections.

  5. Click Save.

Aggregation data availability

Aggregation properties are calculated based on data ingested after the property is created. Historical data that was ingested before you add these properties will not be included. To backfill, trigger a resync on the relevant integration after saving the aggregation properties. See aggregation properties for more details.

Deployment frequency

Deployment frequency measures how often your services deploy to production. It is calculated as the average number of successful production deployments per week. The tier calculation classifies services as Elite (≥7/week), High (≥1/week), Medium (≥0.25/week), or Low.
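As a sketch, the tier logic encoded in the jq calculation properties below behaves like this (the team-level variant first divides frequency by the number of owned services):

```python
def deploy_freq_tier(frequency, total_deployments, services_count=None):
    # No successful deployments at all -> "Low", matching the jq guard.
    if not total_deployments:
        return "Low"
    freq = frequency or 0
    if services_count:  # team level: normalize per service
        freq = freq / services_count
    if freq >= 7:
        return "Elite"
    if freq >= 1:
        return "High"
    if freq >= 0.25:
        return "Medium"
    return "Low"

assert deploy_freq_tier(10, 80) == "Elite"
assert deploy_freq_tier(2, 20) == "High"
assert deploy_freq_tier(0.5, 4) == "Medium"
assert deploy_freq_tier(None, 0) == "Low"
assert deploy_freq_tier(14, 120, services_count=2) == "Elite"  # 7 per service
```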

Service level

Add the following to the Service blueprint:

Aggregation properties (click to expand)

Add to "aggregationProperties":

"total_deployments": {
"title": "Total Deployments",
"type": "number",
"target": "deployment",
"description": "Total successful deployments to Production",
"query": {
"combinator": "and",
"rules": [
{ "property": "deploymentStatus", "operator": "=", "value": "Success" },
{ "property": "environment", "operator": "=", "value": "Production" }
]
},
"calculationSpec": { "func": "count", "calculationBy": "entities" }
},
"deployment_frequency": {
"title": "Deployment Frequency (per week)",
"type": "number",
"target": "deployment",
"description": "Average successful Production deployments per week",
"query": {
"combinator": "and",
"rules": [
{ "property": "deploymentStatus", "operator": "=", "value": "Success" },
{ "property": "environment", "operator": "=", "value": "Production" }
]
},
"calculationSpec": {
"func": "average",
"averageOf": "week",
"calculationBy": "entities",
"measureTimeBy": "createdAt"
}
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"deploy_freq_tier": {
"title": "Deployment Frequency",
"description": "DORA deployment frequency tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.total_deployments == null or .properties.total_deployments == 0) then \"Low\" else if (.properties.deployment_frequency // 0) >= 7 then \"Elite\" elif (.properties.deployment_frequency // 0) >= 1 then \"High\" elif (.properties.deployment_frequency // 0) >= 0.25 then \"Medium\" else \"Low\" end end"
}

Team level

Add the following to the Team blueprint. The team-level deployment frequency tier divides total deployment frequency by the number of services owned by the team, ensuring a fair cross-team comparison.

Aggregation properties (click to expand)

Add to "aggregationProperties":

"services_count": {
"title": "Services Count",
"type": "number",
"target": "service",
"calculationSpec": { "func": "count", "calculationBy": "entities" },
"pathFilter": [{ "fromBlueprint": "service", "path": ["team"] }]
},
"total_deployments": {
"title": "Total Deployments",
"type": "number",
"target": "deployment",
"description": "Total successful deployments across team services",
"query": {
"combinator": "and",
"rules": [
{ "property": "deploymentStatus", "operator": "=", "value": "Success" },
{ "property": "environment", "operator": "=", "value": "Production" }
]
},
"calculationSpec": { "func": "count", "calculationBy": "entities" },
"pathFilter": [{ "fromBlueprint": "deployment", "path": ["service", "team"] }]
},
"deployment_frequency": {
"title": "Deployment Frequency (per week)",
"type": "number",
"target": "deployment",
"description": "Average weekly deployments across team services",
"query": {
"combinator": "and",
"rules": [
{ "property": "deploymentStatus", "operator": "=", "value": "Success" },
{ "property": "environment", "operator": "=", "value": "Production" }
]
},
"calculationSpec": {
"func": "average",
"averageOf": "week",
"calculationBy": "entities",
"measureTimeBy": "createdAt"
},
"pathFilter": [{ "fromBlueprint": "deployment", "path": ["service", "team"] }]
}
Tier calculation properties (click to expand)

Add to "calculationProperties":

"deployment_frequency_per_service": {
"title": "Deployment Frequency (per service)",
"description": "Deployment frequency normalized by the number of services",
"type": "number",
"calculation": "if (.properties.services_count != null and .properties.services_count != 0) then ((.properties.deployment_frequency // 0) / .properties.services_count) else 0 end"
},
"deploy_freq_tier": {
"title": "Deployment Frequency",
"description": "DORA deployment frequency tier (per service)",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.total_deployments == null or .properties.total_deployments == 0) then \"Low\" else (if (.properties.services_count != null and .properties.services_count != 0) then ((.properties.deployment_frequency // 0) / .properties.services_count) else 0 end) as $dpf | if $dpf >= 7 then \"Elite\" elif $dpf >= 1 then \"High\" elif $dpf >= 0.25 then \"Medium\" else \"Low\" end end"
}

What you should see: After saving both blueprints, open any service or team entity. You should see Total Deployments, Deployment Frequency (per week), and a Deployment Frequency tier badge (Elite/High/Medium/Low). Team entities also show Services Count and Deployment Frequency (per service).

Lead time for changes

Lead time for changes measures how quickly code moves from development into production. Port supports two approaches; choose the one that best fits your workflow:

Lead time is measured from when a pull request is created to when it is merged, using the cycle_time_hours property that Port calculates automatically as part of your PR/MR mapping. No additional setup is required; the aggregation below reads this property directly.

Scoping lead time to the default branch

Aggregation queries filter on blueprint properties in Port, not raw API field names. By default, the aggregation below includes merged PRs/MRs in the last 30 days. To scope by target branch, add a string property to your pull request blueprint (for example targetBranch) and map it from your Git provider (for example GitHub .base.ref, GitLab .target_branch, or Azure DevOps .targetRefName with refs/heads/ stripped). Then add a query rule such as "property": "targetBranch", "operator": "=", "value": "main".

The tier calculation classifies services as Elite (≤24h), High (≤168h / 1 week), Medium (≤720h / 30 days), or Low.
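In Python terms, the tier thresholds used by the calculation properties below amount to (a sketch; the unit is hours throughout):

```python
def lead_time_tier(avg_hours):
    # A missing average (no merged PRs in the window) falls back to "Low",
    # matching the jq null branch.
    if avg_hours is None:
        return "Low"
    if avg_hours <= 24:
        return "Elite"
    if avg_hours <= 168:   # one week
        return "High"
    if avg_hours <= 720:   # ~30 days
        return "Medium"
    return "Low"

assert lead_time_tier(6) == "Elite"
assert lead_time_tier(96) == "High"
assert lead_time_tier(500) == "Medium"
assert lead_time_tier(None) == "Low"
```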

Using the first-commit method in the aggregation below

If you chose the first commit to merge method, replace "property": "cycle_time_hours" with "property": "first_commit_to_merge_hours" in each aggregation property below.

Service level

Add the following to the Service blueprint. Select your Git provider to get the correct target blueprint:

Aggregation property (click to expand)

Add to "aggregationProperties":

"lead_time_for_changes": {
"title": "Lead Time for Changes (Hours)",
"type": "number",
"target": "githubPullRequest",
"description": "Average time from PR creation to merge in the last 30 days",
"query": {
"combinator": "and",
"rules": [
{ "property": "mergedAt", "operator": "between", "value": { "preset": "lastMonth" } }
]
},
"calculationSpec": {
"func": "average",
"averageOf": "total",
"calculationBy": "property",
"property": "cycle_time_hours",
"measureTimeBy": "$createdAt"
}
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"lead_time_tier": {
"title": "Lead Time for Changes",
"description": "DORA lead time for changes tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.lead_time_for_changes == null) then \"Low\" elif .properties.lead_time_for_changes <= 24 then \"Elite\" elif .properties.lead_time_for_changes <= 168 then \"High\" elif .properties.lead_time_for_changes <= 720 then \"Medium\" else \"Low\" end"
}

Team level

Add the following to the Team blueprint. Select your Git provider; both the target and the fromBlueprint in pathFilter must match:

Aggregation property (click to expand)

Add to "aggregationProperties":

"lead_time_for_changes": {
"title": "Lead Time for Changes (Hours)",
"type": "number",
"target": "githubPullRequest",
"description": "Average lead time across team services in the last 30 days",
"query": {
"combinator": "and",
"rules": [
{ "property": "mergedAt", "operator": "between", "value": { "preset": "lastMonth" } }
]
},
"calculationSpec": {
"func": "average",
"averageOf": "total",
"calculationBy": "property",
"property": "cycle_time_hours",
"measureTimeBy": "$createdAt"
},
"pathFilter": [{ "fromBlueprint": "githubPullRequest", "path": ["service", "team"] }]
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"lead_time_tier": {
"title": "Lead Time for Changes",
"description": "DORA lead time for changes tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.lead_time_for_changes == null) then \"Low\" elif .properties.lead_time_for_changes <= 24 then \"Elite\" elif .properties.lead_time_for_changes <= 168 then \"High\" elif .properties.lead_time_for_changes <= 720 then \"Medium\" else \"Low\" end"
}

What you should see: After saving both blueprints, service and team entities should display Lead Time for Changes (Hours) and a Lead Time for Changes tier badge.

Change failure rate (CFR)

Requires incident integration

Change failure rate requires an incident management integration with incident entities linked to services. The examples below use PagerDuty (pagerdutyIncident); adjust the blueprint identifier and property names if you use a different tool. See the Tracking incidents section above.

Change failure rate measures the percentage of deployments that are associated with incidents. It is calculated as incidents / (deployments + incidents) × 100. The tier calculation classifies services as Elite (≤5%), High (≤20%), Medium (≤30%), or Low.

CFR calculation approach

The standard DORA definition of CFR is failed deployments / total deployments. Since most incident tools don't directly link incidents to specific deployments, this guide uses incident count as a proxy for failed deployments. This is a common and practical adaptation; if your workflow allows you to mark deployments as failed directly (e.g., via a deploymentStatus of Failed), you can adjust the formula accordingly.
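The proxy formula and its tier thresholds can be sketched as follows (mirroring the jq calculation properties used below, including the floor and the null result when there are no deployments to measure against):

```python
import math

def change_failure_rate(incidents, deployments):
    # No deployments -> nothing to measure against, matching the jq null branch.
    if not deployments:
        return None
    incidents = incidents or 0
    return math.floor(incidents / (deployments + incidents) * 100)

def cfr_tier(cfr):
    if cfr is None:
        return None
    if cfr <= 5:
        return "Elite"
    if cfr <= 20:
        return "High"
    if cfr <= 30:
        return "Medium"
    return "Low"

assert change_failure_rate(2, 38) == 5      # 2 / (38 + 2) -> 5%
assert change_failure_rate(5, 0) is None
assert cfr_tier(change_failure_rate(2, 38)) == "Elite"
assert cfr_tier(change_failure_rate(10, 40)) == "High"  # 10 / 50 -> 20%
```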

Service level

Add the following to the Service blueprint:

Aggregation property (click to expand)

Add to "aggregationProperties":

"total_incidents": {
"title": "Total Incidents",
"type": "number",
"target": "pagerdutyIncident",
"description": "Total incidents linked to this service",
"calculationSpec": {
"func": "count",
"calculationBy": "entities"
},
"pathFilter": [
{
"fromBlueprint": "pagerdutyIncident",
"path": ["service"]
}
]
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"change_failure_rate": {
"title": "Change Failure Rate (%)",
"description": "Percentage of deployments that caused incidents",
"type": "number",
"calculation": "if (.properties.total_deployments == null or .properties.total_deployments == 0) then null else ((.properties.total_incidents // 0) / (.properties.total_deployments + (.properties.total_incidents // 0)) * 100 | floor) end"
},
"cfr_tier": {
"title": "CFR",
"description": "DORA change failure rate tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": ".properties.total_incidents as $i | .properties.total_deployments as $d | if ($d == null or $d == 0) then null else ((($i // 0) / ($d + ($i // 0)) * 100) | floor) as $cfr | if $cfr <= 5 then \"Elite\" elif $cfr <= 20 then \"High\" elif $cfr <= 30 then \"Medium\" else \"Low\" end end"
}

Team level

Add the following to the Team blueprint:

Aggregation property (click to expand)

Add to "aggregationProperties":

"total_incidents": {
"title": "Total Incidents",
"type": "number",
"target": "pagerdutyIncident",
"description": "Total incidents across team services",
"calculationSpec": {
"func": "count",
"calculationBy": "entities"
},
"pathFilter": [
{
"fromBlueprint": "pagerdutyIncident",
"path": ["service", "team"]
}
]
}
Tier calculation properties (click to expand)

Add to "calculationProperties":

"change_failure_rate": {
"title": "Change Failure Rate (%)",
"description": "Percentage of deployments that caused incidents",
"type": "number",
"calculation": "if (.properties.total_deployments == null or .properties.total_deployments == 0) then null else ((.properties.total_incidents // 0) / (.properties.total_deployments + (.properties.total_incidents // 0)) * 100 | floor) end"
},
"cfr_tier": {
"title": "CFR",
"description": "DORA change failure rate tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": ".properties.total_incidents as $i | .properties.total_deployments as $d | if ($d == null or $d == 0) then null else ((($i // 0) / ($d + ($i // 0)) * 100) | floor) as $cfr | if $cfr <= 5 then \"Elite\" elif $cfr <= 20 then \"High\" elif $cfr <= 30 then \"Medium\" else \"Low\" end end"
}

What you should see: After saving both blueprints, service entities show Total Incidents and a CFR tier badge. Team entities additionally show a Change Failure Rate (%) value.

Mean time to recovery (MTTR)

MTTR measures the average time from incident trigger to resolution, reflecting how quickly teams recover from failures. The tier calculation classifies services as Elite (≤60min), High (≤1,440min / 1 day), Medium (≤43,200min / 30 days), or Low.

Requires incident integration

MTTR requires an incident management integration with incident entities that include a recovery time property. The examples below use PagerDuty (pagerdutyIncident with recoveryTime); adjust the blueprint identifier and property names if you use a different tool. See the Tracking incidents section above.
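The MTTR tier logic used below has two fallbacks worth noting: a team or service with no incidents yields no tier at all (null), while incidents without recovery data default to "Elite". As a sketch:

```python
def mttr_tier(total_incidents, mttr_minutes):
    # No incidents -> no signal (null); incidents without recovery data -> "Elite",
    # matching the jq fallbacks in the calculation properties below.
    if not total_incidents:
        return None
    if mttr_minutes is None or mttr_minutes <= 60:
        return "Elite"
    if mttr_minutes <= 1440:    # one day
        return "High"
    if mttr_minutes <= 43200:   # ~30 days
        return "Medium"
    return "Low"

assert mttr_tier(0, None) is None
assert mttr_tier(3, 45) == "Elite"
assert mttr_tier(3, 300) == "High"
assert mttr_tier(3, 10000) == "Medium"
assert mttr_tier(3, 50000) == "Low"
```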

Service level

Add the following to the Service blueprint:

Aggregation property (click to expand)

Add to "aggregationProperties":

"mean_time_to_recovery": {
"title": "MTTR (Minutes)",
"type": "number",
"target": "pagerdutyIncident",
"description": "Average time in minutes from incident trigger to resolution",
"calculationSpec": {
"func": "average",
"averageOf": "total",
"calculationBy": "property",
"property": "recoveryTime",
"measureTimeBy": "$createdAt"
},
"pathFilter": [
{
"fromBlueprint": "pagerdutyIncident",
"path": ["service"]
}
]
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"mttr_tier": {
"title": "MTTR",
"description": "DORA MTTR tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.total_incidents == null or .properties.total_incidents == 0) then null elif (.properties.mean_time_to_recovery == null) then \"Elite\" elif .properties.mean_time_to_recovery <= 60 then \"Elite\" elif .properties.mean_time_to_recovery <= 1440 then \"High\" elif .properties.mean_time_to_recovery <= 43200 then \"Medium\" else \"Low\" end"
}

Team level

Add the following to the Team blueprint:

Aggregation property (click to expand)

Add to "aggregationProperties":

"mean_time_to_recovery": {
"title": "MTTR (Minutes)",
"type": "number",
"target": "pagerdutyIncident",
"description": "Average recovery time across team services",
"calculationSpec": {
"func": "average",
"averageOf": "total",
"calculationBy": "property",
"property": "recoveryTime",
"measureTimeBy": "$createdAt"
},
"pathFilter": [
{
"fromBlueprint": "pagerdutyIncident",
"path": ["service", "team"]
}
]
}
Tier calculation property (click to expand)

Add to "calculationProperties":

"mttr_tier": {
"title": "MTTR",
"description": "DORA MTTR tier",
"type": "string",
"colorized": true,
"colors": {
"Low": "red",
"Medium": "orange",
"High": "blue",
"Elite": "lime"
},
"calculation": "if (.properties.total_incidents == null or .properties.total_incidents == 0) then null elif (.properties.mean_time_to_recovery == null) then \"Elite\" elif .properties.mean_time_to_recovery <= 60 then \"Elite\" elif .properties.mean_time_to_recovery <= 1440 then \"High\" elif .properties.mean_time_to_recovery <= 43200 then \"Medium\" else \"Low\" end"
}

What you should see: After saving both blueprints, service and team entities show MTTR (Minutes) and an MTTR tier badge.

Visualize metrics

We will create a dedicated dashboard to monitor DORA metrics using Port's customizable widgets. The dashboard covers deployment frequency, lead time, and optionally change failure rate and MTTR.

Create the dashboard

  1. Navigate to your software catalog.
  2. Click on the + button in the left sidebar.
  3. Select New folder (if you don't already have one).
  4. Name the folder Engineering Intelligence and click Create. The folder identifier will be automatically set to engineering_intelligence; this is required for the API script method to work.
  5. Inside the Engineering Intelligence folder, click the + button again.
  6. Select New dashboard.
  7. Name the dashboard DORA Metrics and click Create.

Add widgets

You can populate the dashboard using either an API script or by manually creating each widget through the UI.

The fastest way to set up the dashboard is by using Port's API to create all widgets at once.

Get your Port API token

  1. In your Port application, click on your profile picture.

  2. Select Credentials.

  3. Click Generate API token.

  4. Copy the generated token and store it as an environment variable:

    export PORT_ACCESS_TOKEN="YOUR_GENERATED_TOKEN"
EU region

If your portal is hosted in the EU region, replace api.port.io with api.port-eu.io in the dashboard creation command below.

Create the dashboard with widgets

Save the following JSON to a file named dora_dashboard.json:

Dashboard JSON payload (click to expand)
{
  "identifier": "dora_metrics",
  "title": "DORA Metrics",
  "icon": "Metric",
  "type": "dashboard",
  "parent": "engineering_intelligence",
  "widgets": [
    {
      "id": "doraDashboardWidget",
      "type": "dashboard-widget",
      "layout": [
        {
          "height": 400,
          "columns": [
            {"id": "deployFreqKpi", "size": 4},
            {"id": "deployFreqTrend", "size": 8}
          ]
        },
        {
          "height": 400,
          "columns": [
            {"id": "leadTimeKpi", "size": 4},
            {"id": "leadTimeTrend", "size": 8}
          ]
        },
        {
          "height": 400,
          "columns": [
            {"id": "cfrKpi", "size": 4},
            {"id": "mttrKpi", "size": 8}
          ]
        },
        {
          "height": 400,
          "columns": [
            {"id": "serviceDoraTable", "size": 12}
          ]
        },
        {
          "height": 400,
          "columns": [
            {"id": "teamDoraTable", "size": 12}
          ]
        }
      ],
      "widgets": [
        {
          "id": "deployFreqKpi",
          "type": "entities-number-chart",
          "title": "Avg Deployment Frequency",
          "icon": "Metric",
          "description": "Average weekly deployments per service across all teams",
          "blueprint": "_team",
          "chartType": "aggregateByProperty",
          "calculationBy": "property",
          "func": "average",
          "property": "deployment_frequency_per_service",
          "averageOf": "total",
          "displayFormatting": "custom",
          "decimalPlaces": ".00",
          "unit": "custom",
          "unitCustom": "per week",
          "dataset": {
            "combinator": "and",
            "rules": []
          }
        },
        {
          "id": "deployFreqTrend",
          "type": "line-chart",
          "title": "Deployment Frequency (Weekly Trend)",
          "icon": "LineChart",
          "description": "Production deployments per week over the last 3 months",
          "blueprint": "deployment",
          "chartType": "countEntities",
          "func": "count",
          "measureTimeBy": "createdAt",
          "timeInterval": "isoWeek",
          "timeRange": {"preset": "last3Months"},
          "xAxisTitle": "Week",
          "yAxisTitle": "Deployments",
          "dataset": {
            "combinator": "and",
            "rules": [
              {"property": "deploymentStatus", "operator": "=", "value": "Success"},
              {"property": "environment", "operator": "=", "value": "Production"}
            ]
          }
        },
        {
          "id": "leadTimeKpi",
          "type": "entities-number-chart",
          "title": "Avg Lead Time for Changes",
          "icon": "Metric",
          "description": "Average lead time for changes across the organization in the last 30 days",
          "blueprint": "deployment",
          "chartType": "aggregateByProperty",
          "calculationBy": "property",
          "func": "average",
          "property": "lead_time_for_changes_hours",
          "averageOf": "total",
          "measureTimeBy": "createdAt",
          "displayFormatting": "custom",
          "decimalPlaces": ".00",
          "unit": "custom",
          "unitCustom": "Hours",
          "dataset": {
            "combinator": "and",
            "rules": [
              {"property": "deploymentStatus", "operator": "=", "value": "Success"},
              {"property": "environment", "operator": "=", "value": "Production"},
              {"property": "createdAt", "operator": "between", "value": {"preset": "lastMonth"}}
            ]
          }
        },
        {
          "id": "leadTimeTrend",
          "type": "line-chart",
          "title": "Lead Time for Changes (Weekly Trend)",
          "icon": "LineChart",
          "description": "Average lead time from PR creation to merge, by week",
          "blueprint": "deployment",
          "chartType": "aggregatePropertiesValues",
          "func": "average",
          "properties": ["properties.lead_time_for_changes_hours"],
          "measureTimeBy": "createdAt",
          "timeInterval": "isoWeek",
          "timeRange": {"preset": "last3Months"},
          "xAxisTitle": "Week",
          "yAxisTitle": "Hours",
          "dataset": {
            "combinator": "and",
            "rules": [
              {"property": "deploymentStatus", "operator": "=", "value": "Success"},
              {"property": "environment", "operator": "=", "value": "Production"}
            ]
          }
        },
        {
          "id": "cfrKpi",
          "type": "entities-number-chart",
          "title": "Avg Change Failure Rate (CFR)",
          "icon": "Metric",
          "description": "Average change failure rate across all teams in the organization",
          "blueprint": "_team",
          "chartType": "aggregateByProperty",
          "calculationBy": "property",
          "func": "average",
          "property": "change_failure_rate",
          "averageOf": "total",
          "displayFormatting": "round",
          "unit": "custom",
          "unitCustom": "%",
          "dataset": {
            "combinator": "and",
            "rules": [
              {"property": "type", "operator": "=", "value": "team"}
            ]
          }
        },
        {
          "id": "mttrKpi",
          "type": "entities-number-chart",
          "title": "Avg Daily Mean Time to Recovery (MTTR)",
          "icon": "Metric",
          "description": "Average mean time to recovery across the organization in the last 30 days",
          "blueprint": "pagerdutyIncident",
          "chartType": "aggregateByProperty",
          "calculationBy": "property",
          "func": "average",
          "property": "recoveryTime",
          "averageOf": "total",
          "displayFormatting": "round",
          "unit": "custom",
          "unitCustom": "Minutes",
          "dataset": {
            "combinator": "and",
            "rules": [
              {"property": "resolvedAt", "operator": "between", "value": {"preset": "lastMonth"}}
            ]
          }
        },
        {
          "id": "serviceDoraTable",
          "type": "table-entities-explorer",
          "displayMode": "widget",
          "title": "Service - DORA Metrics",
          "icon": "Table",
          "description": "DORA metrics per service, scored against benchmarks: Elite / High / Medium / Low",
          "blueprint": "service",
          "dataset": {"combinator": "and", "rules": []},
          "excludedFields": [],
          "blueprintConfig": {
            "service": {
              "groupSettings": {"groupBy": ["team"]},
              "propertiesSettings": {
                "order": ["$title", "team", "deploy_freq_tier", "deployment_frequency", "lead_time_tier", "lead_time_for_changes", "cfr_tier", "mttr_tier", "total_deployments"],
                "shown": ["$title", "team", "deploy_freq_tier", "deployment_frequency", "lead_time_tier", "lead_time_for_changes", "cfr_tier", "mttr_tier", "total_deployments"]
              },
              "filterSettings": {"filterBy": {"combinator": "and", "rules": []}},
              "sortSettings": {"sortBy": [{"property": "deployment_frequency", "order": "desc"}]}
            }
          }
        },
        {
          "id": "teamDoraTable",
          "type": "table-entities-explorer",
          "displayMode": "widget",
          "title": "Team - DORA Metrics",
          "icon": "Table",
          "description": "DORA metrics per team, scored against benchmarks: Elite / High / Medium / Low",
          "blueprint": "_team",
          "dataset": {"combinator": "and", "rules": []},
          "excludedFields": [],
          "blueprintConfig": {
            "_team": {
              "groupSettings": {"groupBy": []},
              "propertiesSettings": {
                "order": ["$title", "deploy_freq_tier", "deployment_frequency_per_service", "lead_time_tier", "lead_time_for_changes", "cfr_tier", "mttr_tier"],
                "shown": ["$title", "deploy_freq_tier", "deployment_frequency_per_service", "lead_time_tier", "lead_time_for_changes", "cfr_tier", "mttr_tier"]
              },
              "filterSettings": {"filterBy": {"combinator": "and", "rules": []}},
              "sortSettings": {"sortBy": [{"property": "deployment_frequency_per_service", "order": "desc"}]}
            }
          }
        }
      ]
    }
  ]
}

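Before posting, it is worth validating the file locally: any syntax error (a stray comma, for instance) will make the API call fail. One way to check it, using only the Python standard library:

```shell
# Fail fast on malformed JSON before hitting the API.
if python3 -m json.tool dora_dashboard.json > /dev/null 2>&1; then
  echo "dora_dashboard.json is valid JSON"
else
  echo "dora_dashboard.json has a syntax error"
fi
```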
Then run the following command to create the dashboard with all widgets:

    curl -s -X POST "https://api.port.io/v1/pages" \
      -H "Authorization: Bearer $PORT_ACCESS_TOKEN" \
      -H "Content-Type: application/json" \
      -d @dora_dashboard.json | python3 -m json.tool

Engineering Intelligence folder

The script assumes an engineering_intelligence folder already exists in your catalog. If you haven't created it yet, follow steps 1-4 in the Create the dashboard section above first.
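
If you would rather script the call than use curl, the same request can be sketched with Python's standard library. The endpoint, headers, and environment variable mirror the curl command above; `build_dashboard_request` is a hypothetical helper written for this guide, not part of any Port SDK:

```python
import json
import os
import urllib.request


def build_dashboard_request(payload_path: str,
                            base_url: str = "https://api.port.io") -> urllib.request.Request:
    """Build the POST request that creates the dashboard page.

    Hypothetical helper: base_url defaults to the host used in this guide;
    swap it for the EU host if your portal is hosted there.
    """
    with open(payload_path, "rb") as f:
        body = f.read()
    json.loads(body)  # fail fast if the payload is not valid JSON
    return urllib.request.Request(
        f"{base_url}/v1/pages",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['PORT_ACCESS_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# To actually send the request:
# with urllib.request.urlopen(build_dashboard_request("dora_dashboard.json")) as resp:
#     print(json.load(resp))
```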

Next steps

Once your DORA metrics dashboard is in place, consider these additional improvements:

  • Set up DORA scorecards to automatically evaluate services and teams against DORA performance targets and track improvement over time.
  • Add incident integration (PagerDuty) to unlock change failure rate and MTTR metrics for the full four-metric DORA picture.
  • Create automations to send Slack notifications when a service's DORA tier drops below a threshold or when deployment frequency declines significantly.
  • Add an AI agent to provide natural language insights into your DORA data directly on the dashboard.