Context lake

Open Beta

This feature is currently in open beta and available to all organizations. Should you encounter any bugs or functionality issues, please let us know so we can rectify them as soon as possible. Your feedback is greatly appreciated! ⭐

To get access, please fill out this form with your organization details.

Port's Context Lake is your unified engineering knowledge layer—connecting data from across your entire toolchain into a single, semantically-rich source of truth. It's not a separate feature, but rather the powerful result of Port's core capabilities working together to provide organizational context that AI agents, developers, and workflows can understand and act upon.

What comprises the context lake

The context lake transforms scattered data across your engineering tools into unified organizational knowledge. It is built from four core components:

Software catalog - your data

The software catalog is where you define YOUR organization's data model using blueprints (services, environments, teams, deployments, incidents, etc.) and populate it with entities from all your tools. This catalog becomes your organizational semantic layer—teaching Port what "service," "deployment," or "incident" means specifically in your context, providing the schema and structure that gives meaning to your data.
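
To make the data-model idea concrete, a blueprint can be sketched as a schema plus relations. The field names below (`identifier`, `schema`, `relations`) mirror Port's blueprint JSON shape, but treat the exact structure as illustrative and check your own Port instance for the real schema:

```python
# Illustrative "service" blueprint: a schema for entities plus relations
# to other blueprints. Field names follow Port's blueprint JSON shape,
# but the exact structure here is an assumption, not a spec.
service_blueprint = {
    "identifier": "service",
    "title": "Service",
    "schema": {
        "properties": {
            "language": {"type": "string", "title": "Language"},
            "tier": {
                "type": "string",
                "title": "Business Criticality",
                "enum": ["mission-critical", "customer-facing", "internal"],
            },
        },
        "required": ["tier"],
    },
    "relations": {
        # A service belongs to one team but runs in many environments.
        "team": {"target": "team", "many": False},
        "environments": {"target": "environment", "many": True},
    },
}
```

Entities ingested from your tools are then instances of this blueprint, which is what lets Port answer questions like "which mission-critical services does this team own?"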

Business context - holistic view

Beyond technical metadata, the Context Lake enriches your software catalog with business context—the organizational, financial, and operational signals that help prioritize work, assess risk, and align engineering decisions with business objectives. This business layer transforms technical catalogs into decision-making platforms by answering: "What matters most to the business?"

Business context in Port can include:

Cost & financial context

  • Cost center attribution - Track which department or budget owns each resource via AWS Cost integration or Kubecost
  • Revenue impact - Tag services that directly generate revenue or support revenue-generating features
  • Cloud spending patterns - Understand resource costs to inform optimization and prioritization decisions

Criticality & risk context

  • Business criticality levels - Classify services (e.g., mission-critical, customer-facing, internal tooling) to drive different SLA requirements, triage workflows, and automation policies
  • Disaster recovery tier - Define RTO/RPO requirements based on business impact to inform backup strategies and incident response priorities
  • Data sensitivity - Mark resources handling PII, financial data, or regulated information to enforce compliance controls
  • Compliance scope - Tag services subject to SOC 2, GDPR, HIPAA, or PCI-DSS to ensure audit readiness

Operational context

  • SLAs & SLOs - Define service-level agreements and objectives to measure reliability, track MTTR, and ensure SLA compliance
  • On-call ownership - Integrate PagerDuty schedules to understand who's responsible right now for incident response
  • Escalation policies - Define who to notify and when for different severity levels based on business impact
  • Incident captain & responder roles - Track who's currently leading incidents or available for triage

Organizational context

  • Team affiliation - Connect services to teams via GitHub CODEOWNERS or Jira project mappings for clear ownership
  • Reporting hierarchy - Map organizational structure (team → department → division) for escalation paths
  • Business unit alignment - Associate services with product lines or business units to understand impact radius

Customer & product context

  • Customer tier - Identify which customer segments are affected (e.g., enterprise, gold-tier, freemium) to prioritize incidents and features affecting high-value customers
  • Product lifecycle stage - Tag services by maturity (closed beta, open beta, GA, deprecated) to set appropriate expectations and SLAs—a closed beta feature with 10 freemium users has different urgency than a GA feature serving enterprise customers

Why business context matters:

When AI agents and workflows understand business context, they can:

  • Prioritize vulnerabilities affecting revenue-generating production services over internal dev tools
  • Route incidents to the right on-call engineer based on service ownership and escalation policies
  • Estimate blast radius of a deployment by understanding dependent services and their business criticality
  • Automatically enforce policies like "critical services must have SLOs defined" or "PII-handling services require SOC 2 compliance checks"
  • Calculate risk scores that combine technical severity (CVSS) with business impact (criticality + revenue + SLA + customer tier)
  • Adjust incident response based on affected customer tier—a P1 incident affecting enterprise customers triggers immediate executive notification, while the same issue in closed beta may follow standard on-call procedures

Example use cases:

Scenario: A critical CVE is discovered in a library used by multiple services.

Without business context: Security team gets hundreds of alerts—no clear way to prioritize which services to patch first.

With Port's business context:

  1. Port enriches each vulnerability with:

    • Service business criticality (mission-critical vs. internal)
    • Revenue impact (directly revenue-generating or not)
    • SLA requirements (99.99% uptime vs. best-effort)
    • Data sensitivity (handles customer PII or not)
    • Compliance scope (subject to SOC 2 audit)
    • Customer tier (enterprise vs. freemium)
  2. AI agent or automation calculates risk score:

    Risk = CVE Severity × (Business Criticality + Revenue Impact + SLA Weight + Compliance Factor + Customer Tier)
  3. Results in prioritized triage queue:

    • Fix immediately: Payment service (mission-critical, revenue-generating, 99.99% SLA, PCI-DSS scope, enterprise customers)
    • Fix this sprint: Customer portal (customer-facing, revenue-supporting, 99.5% SLA, gold-tier customers)
    • Backlog: Internal dev tools (low criticality, no SLA, internal users only)
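
The risk formula above can be sketched in code. The weights and signal names below are hypothetical, not Port defaults; the point is that the same CVE yields very different scores once business context is factored in:

```python
# Hypothetical weights for each business-context signal; tune to taste.
CRITICALITY = {"mission-critical": 5, "customer-facing": 3, "internal": 1}
CUSTOMER_TIER = {"enterprise": 3, "gold-tier": 2, "freemium": 1}

def risk_score(cvss: float, criticality: str, revenue_generating: bool,
               sla_weight: float, compliance_factor: float, tier: str) -> float:
    """Risk = CVE severity x (criticality + revenue + SLA + compliance + tier)."""
    business_impact = (
        CRITICALITY[criticality]
        + (2 if revenue_generating else 0)
        + sla_weight
        + compliance_factor
        + CUSTOMER_TIER[tier]
    )
    return cvss * business_impact

# Same CVE severity, very different priorities:
payment = risk_score(9.8, "mission-critical", True, 3.0, 2.0, "enterprise")
dev_tool = risk_score(9.8, "internal", False, 0.0, 0.0, "freemium")
assert payment > dev_tool
```

Sorting vulnerabilities by such a score produces exactly the triage queue shown above: payment service first, internal dev tools last.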

Learn more: Prioritize vulnerabilities with business context

Ingesting business context into Port:

Business context comes from many sources across your toolchain: cost tools such as AWS Cost and Kubecost, incident management platforms such as PagerDuty, and ownership data from GitHub CODEOWNERS or Jira project mappings.

The Context Lake unifies all of these sources so that AI agents, workflows, and dashboards can make business-aware decisions.

Access controls - data governance

RBAC and permissions ensure that the right people and systems see the right data. Teams, roles, and policies control who can view, edit, or act on catalog data, maintaining security while enabling collaboration and providing governed access to your organizational knowledge.

Scorecards - your standards

Scorecards define and track your engineering standards, KPIs, and quality metrics. They encode organizational expectations—production readiness requirements, security compliance rules, operational best practices—as measurable criteria within the Context Lake, providing the organizational standards and quality signals that inform decisions.
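
A scorecard rule can be thought of as a predicate over an entity's catalog data. The sketch below (rule names and entity fields are hypothetical) shows how a production-readiness scorecard might be evaluated:

```python
# Hypothetical production-readiness scorecard: each rule is a predicate
# over a catalog entity's properties. Rule names and fields are illustrative.
RULES = {
    "has_owner": lambda e: bool(e.get("team")),
    "has_slo": lambda e: e.get("slo") is not None,
    "on_call_defined": lambda e: bool(e.get("on_call")),
}

def evaluate(entity: dict) -> dict:
    """Return a pass/fail result per rule for one catalog entity."""
    return {name: rule(entity) for name, rule in RULES.items()}

service = {"team": "payments", "slo": 99.99, "on_call": "primary-schedule"}
results = evaluate(service)
assert all(results.values())  # every rule passes: production-ready
```

Because results like these live in the Context Lake alongside the entities themselves, automations can act on them, for example blocking deployment of a service that fails its readiness scorecard.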

Interface layer - how you access it

Context Lake data becomes actionable through multiple interfaces: AI Interfaces, where AI agents and assistants query Port through the Port MCP Server to understand your organization; the API, for programmatic access; and the Interface Designer, whose dashboards and visualizations surface insights to your teams. Together they provide multiple ways to query, visualize, and act on your organizational context.
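
For programmatic access, a query might look like the following. The base URL, endpoint path, and bearer-token auth are assumptions based on a typical REST API; consult Port's API reference for the exact contract:

```python
import json
import urllib.request

# Assumed base URL and endpoint path; verify against Port's API reference.
PORT_API = "https://api.getport.io/v1"

def entities_url(blueprint: str) -> str:
    """Build the (assumed) endpoint for listing a blueprint's entities."""
    return f"{PORT_API}/blueprints/{blueprint}/entities"

def list_entities(blueprint: str, token: str) -> list:
    """Fetch every entity of one blueprint, e.g. every 'service'."""
    req = urllib.request.Request(
        entities_url(blueprint),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("entities", [])
```

An external workflow could call `list_entities("service", token)` and then filter or score the results locally, rather than integrating with each upstream tool directly.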

Why the context lake matters

Generic AI doesn't understand what "production-ready" means in YOUR organization, who owns which services, or how your deployment pipeline works. The Context Lake provides this semantic understanding, enabling AI agents to:

  • Answer ownership questions with definitive data (not guesses from code comments).
  • Understand dependencies and relationships between services.
  • Follow your organization's standards and guardrails when taking actions.
  • Make decisions based on real-time operational context.

Context lake in action

Developer asks: "Who owns the payments service?"

  • Without Context Lake: AI guesses based on code comments or recent contributors.
  • With Context Lake: AI queries the catalog → sees Team relation → returns the owning team with Slack channel and on-call schedule.

External agents and AI workflows

External AI agents and automation workflows can leverage Port's Context Lake to make intelligent, context-aware decisions without needing direct access to your entire toolchain. Instead of building custom integrations for each tool, external systems can query Port's unified knowledge layer to understand your organization's structure, relationships, and standards.

n8n integration

Port provides a custom n8n node that simplifies integration with Port's AI agents and Context Lake. To get started:

  1. Set up Port's n8n custom node — Install and configure the Port node in your n8n instance
  2. Build automation workflows — See an example of using Port as a context lake for vulnerability management workflows

Getting started

Building your Context Lake is a natural part of setting up Port:

  1. Define your data model - Create blueprints that represent your organization's entities.
  2. Connect your tools - Ingest data from GitHub, Kubernetes, PagerDuty, and 100+ other integrations.
  3. Set up relationships - Define how entities connect to each other.
  4. Configure access controls - Ensure proper data governance.
  5. Define standards - Create scorecards that encode your quality requirements.

As you build your catalog, you're simultaneously building your Context Lake—the unified knowledge layer that powers intelligent automation and AI-driven workflows.