Context lake

Open Beta

This feature is currently in open beta and available to all organizations. Should you encounter any bugs or functionality issues, please let us know so we can rectify them as soon as possible. Your feedback is greatly appreciated! ⭐

To get access, please fill out this form with your organization details.

Port's Context Lake is your unified engineering knowledge layer: it connects data from across your entire toolchain into a single, semantically rich source of truth. It's not a separate feature, but rather the powerful result of Port's core capabilities working together to provide organizational context that AI agents, developers, and workflows can understand and act upon.

What comprises the context lake

The context lake transforms scattered data across your engineering tools into unified organizational knowledge. It is built from four core components:

Software catalog - your data

The software catalog is where you define YOUR organization's data model using blueprints (services, environments, teams, deployments, incidents, etc.) and populate it with entities from all your tools. This catalog becomes your organizational semantic layer, teaching Port what "service," "deployment," or "incident" means specifically in your context and providing the schema and structure that give meaning to your data.
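
As a rough sketch of what this looks like in practice, the example below creates a minimal `service` blueprint through Port's REST API. It assumes client credentials from your organization's settings, a pre-existing `team` blueprint for the relation, and illustrative property names; check the API reference for the authoritative payload shape.

```python
import os
import requests

API_URL = "https://api.getport.io/v1"

# Exchange client credentials (from your Port organization settings) for an API token.
token = requests.post(
    f"{API_URL}/auth/access_token",
    json={
        "clientId": os.environ["PORT_CLIENT_ID"],
        "clientSecret": os.environ["PORT_CLIENT_SECRET"],
    },
).json()["accessToken"]
headers = {"Authorization": f"Bearer {token}"}

# A minimal "service" blueprint: the schema that gives meaning to service entities,
# plus a relation pointing each service at its owning team (hypothetical names).
service_blueprint = {
    "identifier": "service",
    "title": "Service",
    "schema": {
        "properties": {
            "language": {"type": "string", "title": "Language"},
            "lifecycle": {
                "type": "string",
                "title": "Lifecycle",
                "enum": ["Development", "Staging", "Production"],
            },
        },
        "required": [],
    },
    "relations": {
        "team": {"title": "Owning team", "target": "team", "many": False, "required": False}
    },
}

resp = requests.post(f"{API_URL}/blueprints", json=service_blueprint, headers=headers)
resp.raise_for_status()
print(resp.json())
```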

Access controls - data governance

RBAC and permissions ensure that the right people and systems see the right data. Teams, roles, and policies control who can view, edit, or act on catalog data, maintaining security while enabling collaboration and providing governed access to your organizational knowledge.
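
As a small illustration, the sketch below reads the permissions attached to a hypothetical `service` blueprint via the REST API. The endpoint path, the `PORT_API_TOKEN` environment variable, and the response shape are assumptions; consult the API reference for the exact governance model.

```python
import json
import os

import requests

API_URL = "https://api.getport.io/v1"
headers = {"Authorization": f"Bearer {os.environ['PORT_API_TOKEN']}"}

# Read which roles and teams may register, update, or delete "service" entities.
# This only inspects governance settings; changing them is a separate call.
resp = requests.get(f"{API_URL}/blueprints/service/permissions", headers=headers)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```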

Scorecards - your standards

Scorecards define and track your engineering standards, KPIs, and quality metrics. They encode organizational expectations—production readiness requirements, security compliance rules, operational best practices—as measurable criteria within the Context Lake, providing the organizational standards and quality signals that inform decisions.
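
For example, a scorecard might be attached to a `service` blueprint through the REST API, as sketched below. The scorecard identifier, rule names, levels, and referenced properties are hypothetical, and the payload shape is an assumption to verify against the API reference.

```python
import os
import requests

API_URL = "https://api.getport.io/v1"
headers = {"Authorization": f"Bearer {os.environ['PORT_API_TOKEN']}"}

# A hypothetical scorecard: each rule encodes one organizational expectation
# as a measurable query over entity properties.
production_readiness = {
    "identifier": "production_readiness",
    "title": "Production readiness",
    "rules": [
        {
            "identifier": "has_lifecycle",
            "title": "Lifecycle is defined",
            "level": "Bronze",
            "query": {
                "combinator": "and",
                "conditions": [{"operator": "isNotEmpty", "property": "lifecycle"}],
            },
        },
        {
            "identifier": "has_language",
            "title": "Language is documented",
            "level": "Silver",
            "query": {
                "combinator": "and",
                "conditions": [{"operator": "isNotEmpty", "property": "language"}],
            },
        },
    ],
}

resp = requests.post(
    f"{API_URL}/blueprints/service/scorecards", json=production_readiness, headers=headers
)
resp.raise_for_status()
print(resp.json())
```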

Interface layer - how you access it

Context Lake data becomes actionable through multiple interfaces: AI Interfaces, where AI agents and assistants query through the Port MCP Server to understand your organization; the API, for programmatic access; and the Interface Designer, whose dashboards and visualizations surface insights to your teams. Together, these provide multiple ways to query, visualize, and act on your organizational context.
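
A minimal sketch of the programmatic path: the snippet below searches the catalog for production services via the REST API. The search route, rule syntax, and the `service` blueprint with its `lifecycle` property are assumptions to adapt to your own data model; AI agents reach the same context through the MCP Server, and teams see it through Interface Designer dashboards.

```python
import os
import requests

API_URL = "https://api.getport.io/v1"
headers = {"Authorization": f"Bearer {os.environ['PORT_API_TOKEN']}"}

# Search the catalog for all production services. The same organizational
# context backs AI agents (via the MCP Server) and dashboards.
query = {
    "combinator": "and",
    "rules": [
        {"property": "$blueprint", "operator": "=", "value": "service"},
        {"property": "lifecycle", "operator": "=", "value": "Production"},
    ],
}

resp = requests.post(f"{API_URL}/entities/search", json=query, headers=headers)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity["identifier"], entity.get("title"))
```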

Why the context lake matters

Generic AI doesn't understand what "production-ready" means in YOUR organization, who owns which services, or how your deployment pipeline works. Context Lake provides this semantic understanding, enabling AI agents to:

  • Answer ownership questions with definitive data (not guesses from code comments).
  • Understand dependencies and relationships between services.
  • Follow your organization's standards and guardrails when taking actions.
  • Make decisions based on real-time operational context.

Context lake in action

Developer asks: "Who owns the payments service?"

  • Without Context Lake: AI guesses based on code comments or recent contributors.
  • With Context Lake: AI queries the catalog → sees Team relation → returns the owning team with Slack channel and on-call schedule.
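
A minimal sketch of that second path using the REST API directly (an AI agent would typically go through the MCP Server instead). It assumes a `service` blueprint with a `payments` entity, a `team` relation, and a `team` blueprint whose properties hold the Slack channel and on-call details; all of these names are hypothetical.

```python
import os
import requests

API_URL = "https://api.getport.io/v1"
headers = {"Authorization": f"Bearer {os.environ['PORT_API_TOKEN']}"}

# Fetch the payments service entity and follow its "team" relation to the owner.
resp = requests.get(f"{API_URL}/blueprints/service/entities/payments", headers=headers)
resp.raise_for_status()
service = resp.json().get("entity", {})
owning_team = service.get("relations", {}).get("team")
print(f"payments is owned by: {owning_team}")

# The team entity can then be fetched for Slack channel / on-call details,
# assuming those were modeled as properties on the "team" blueprint.
if owning_team:
    team = requests.get(
        f"{API_URL}/blueprints/team/entities/{owning_team}", headers=headers
    ).json().get("entity", {})
    print(team.get("properties", {}))
```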

Getting started

Building your Context Lake is a natural part of setting up Port:

  1. Define your data model - Create blueprints that represent your organization's entities.
  2. Connect your tools - Ingest data from GitHub, Kubernetes, PagerDuty, and 100+ other integrations.
  3. Set up relationships - Define how entities connect to each other (see the sketch after this list).
  4. Configure access controls - Ensure proper data governance.
  5. Define standards - Create scorecards that encode your quality requirements.
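
As an illustration of steps 2 and 3, the sketch below manually upserts a `payments` service entity and relates it to its owning team through the REST API; in practice, the built-in integrations handle this ingestion for you. The identifiers, properties, and `upsert` query parameter are assumptions to adapt to your own setup.

```python
import os
import requests

API_URL = "https://api.getport.io/v1"
headers = {"Authorization": f"Bearer {os.environ['PORT_API_TOKEN']}"}

# Upsert a "payments" service and connect it to its owning team via the relation
# defined on the blueprint. Integrations (GitHub, Kubernetes, PagerDuty, ...)
# normally populate these entities automatically.
entity = {
    "identifier": "payments",
    "title": "Payments Service",
    "properties": {"language": "Go", "lifecycle": "Production"},
    "relations": {"team": "payments-team"},
}

resp = requests.post(
    f"{API_URL}/blueprints/service/entities",
    params={"upsert": "true"},
    json=entity,
    headers=headers,
)
resp.raise_for_status()
print(resp.json())
```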

As you build your catalog, you're simultaneously building your Context Lake—the unified knowledge layer that powers intelligent automation and AI-driven workflows.