AI agents overview

Closed Beta Feature

This feature is currently in closed beta with limited availability. Access is provided on an application basis.

To request access, please reach out to us by filling out this form.

What are Port AI agents?

Port AI agents are customizable building blocks that enhance your developer portal with intelligent assistance.
These agents help your developers find information faster and complete tasks more efficiently across your development ecosystem.

What can AI agents do?

AI agents serve two primary functions:

  1. Answer questions about your development environment, services, and processes using natural language. Developers can ask questions and get immediate, contextual answers.

  2. Assist with actions by helping developers complete common tasks faster. Agents can suggest and pre-fill forms, guide developers through workflows, and provide relevant context for decision-making. You can decide whether they can run an action or require human approval.

Enhanced capabilities with MCP server backend

New capability

Port AI agents now support an enhanced MCP server backend mode that significantly expands their capabilities. You can enable it for any existing agent to unlock these advanced capabilities.

When using the MCP server backend mode, your AI agents gain:

  • Expanded data access: Intelligently queries your entire catalog without blueprint restrictions
  • Enhanced reasoning: Powered by Claude models for improved analysis and decision-making
  • Broader tool access: Uses all read-only tools available in the MCP server for comprehensive insights
  • Smarter action selection: Still respects your configured allowed actions while providing better context

Your existing agents can benefit from these enhancements immediately: enable the MCP server backend mode when interacting with them through widgets and API calls.

Example use cases

Questions your agents can answer:

  • "Which services are failing security checks?".
  • "When was the last successful deployment of the payment service?".
  • "Who is the owner for this component?".

Actions your agents can help with:

  • "Can you help me deploy service X to production?".
  • "Please notify the reviewers of PR #1234".

Getting started with AI agents

To start working with AI agents, follow these steps:

  1. Apply for access - Submit your application via this form.
  2. Access the feature - If accepted, you will be able to activate the AI agents in your Port organization.
  3. Build your agents - Create custom agents to meet your developers' needs.
  4. Interact with your agents - Engage with your agents by following our interaction guide.

Customization and control

Build and customize your AI agents:

  • Define which data sources your agents can access.
  • Determine what actions your agents can assist with.
  • Set permissions for who can use specific agents.
  • Configure how agents integrate with your workflows.
  • Choose between standard and MCP server backend modes when interacting with agents.
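
As an illustration of what this can look like in practice, agents live in your catalog as entities of a dedicated blueprint (see the Data Model section below), so they can also be created and updated through Port's entities API. The sketch below is hypothetical: the blueprint identifier, property names, and values are placeholders, and the actual agent schema is covered in the Build an AI agent guide.

# Hypothetical sketch - the blueprint identifier and property names below are placeholders
curl --location --request POST 'https://api.getport.io/v1/blueprints/<AI_AGENT_BLUEPRINT_IDENTIFIER>/entities' \
--header 'Authorization: Bearer <YOUR_PORT_API_TOKEN>' \
--header 'Content-Type: application/json' \
--data '{
    "identifier": "deployment_helper",
    "title": "Deployment Helper",
    "properties": {
        "<PROMPT_PROPERTY>": "You help developers deploy services to production safely.",
        "<ALLOWED_ACTIONS_PROPERTY>": ["<YOUR_DEPLOY_ACTION_IDENTIFIER>"]
    }
}'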

Security and data handling

AI agents are designed with security as a priority:

  • Agents only have access to the data you explicitly provide.
  • Your data remains within Port's secure infrastructure.
  • LLM processing happens within our cloud infrastructure.
  • Your data is not used for model training.

We store data from your interactions with AI agents for up to 30 days. We use this data to ensure agents function correctly and to identify and prevent problematic or inappropriate AI behavior. We limit this data storage strictly to these purposes. You can contact us to opt out of this data storage.

Start simple & expand as needed

Begin with focused use cases that deliver immediate value, such as helping developers find service information or streamlining incident management.
As your team builds confidence in the agents, you can expand their capabilities to cover more complex scenarios and workflows.

Access to the feature

Currently, AI agents are in closed beta, and you must be approved for the feature first. Once approved, you can enable the feature in your Port organization by registering it through Port's API.

To register, run the following cURL command in your terminal (replace the placeholder with your Port API token):

curl --location --request PATCH 'https://api.getport.io/v1/organization/ai/register' \
--header 'Authorization: Bearer <YOUR_PORT_API_TOKEN>'

Once the request succeeds, your organization has the system blueprints required for the feature to work.
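
If you do not yet have a Port API token for the Authorization header above, you can generate one from your organization's client ID and secret. A minimal sketch using Port's access token endpoint (replace the placeholders with your own credentials):

curl --location --request POST 'https://api.getport.io/v1/auth/access_token' \
--header 'Content-Type: application/json' \
--data '{
    "clientId": "<YOUR_CLIENT_ID>",
    "clientSecret": "<YOUR_CLIENT_SECRET>"
}'

The access token returned in the response is the value to place after "Bearer" in the registration command above.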

Data Model

The data model of AI agents includes two main blueprints:

  1. AI agents - The agents themselves that you can interact with. You can build new ones and customize them as you wish. Learn more in our Build an AI agent guide.

  2. AI invocations - Each interaction made with an AI agent is recorded as an invocation. This acts as a log of everything going through your AI agents so you can monitor and improve them over time. Learn more in our Interact with AI agents guide.
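
Because invocations are stored as catalog entities, you can also pull them programmatically for monitoring. A minimal sketch using the standard "get all entities of a blueprint" endpoint (the blueprint identifier here is a placeholder; use the identifier shown in your catalog):

curl --location --request GET 'https://api.getport.io/v1/blueprints/<AI_INVOCATION_BLUEPRINT_IDENTIFIER>/entities' \
--header 'Authorization: Bearer <YOUR_PORT_API_TOKEN>'

Each returned entity records one interaction with an agent, which you can use to monitor and improve your agents over time.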

Relevant guides

Explore these guides to see AI agents in action and learn how to implement them in your organization.

Frequently asked questions

What are the main use cases Port AI will support?

Port AI supports two primary interaction types:

  1. Ask Me Anything (Information Queries)
    • Natural language queries about your development ecosystem
    • Examples: "Who owns service X?", "What's the deployment frequency of team Y?"
    • Focused on surfacing information from connected data sources
  2. Run an Action (Form Generation)
    • Assist with running or pre-filling self-service actions
    • Examples: "Create a bug report", "Set up a new service"
    • Important: you can decide whether the agent can run the action automatically
How do users interact with Port AI?

Developers can interact with agents through widgets in your portal and through API calls. See our interaction guide for more details on the available interfaces.

Can customers customize the AI agents?

Yes - you can create custom AI agents within Port. Customization includes:

  • Creating new agents using Port's blueprint system.
  • Configuring agent knowledge base and access to tools.
  • Adjusting prompts and agent behaviors.
  • Setting permissions and usage boundaries.

All agents operate within Port's secure framework and governance controls.

How is customer data handled?

All data processing occurs within our cloud infrastructure, and no data is used for model training. We ensure complete logical separation between different customers' data.

We store data from your interactions with AI agents for up to 30 days. We use this data to ensure agents function correctly and to identify and prevent problematic or inappropriate AI behavior. We limit this data storage strictly to these purposes. You can contact us to opt out of this data storage.

Which LLM models are you using?

We use different models depending on the backend mode:

  • Standard backend: OpenAI's GPT models for reliable performance and broad compatibility
  • MCP server backend: Claude models for enhanced reasoning and analysis capabilities

We aim to use the best models that will yield the best results while keeping your data safe. Model selection may evolve as we continue to optimize agent performance.

How can we audit and control AI usage?

Each interaction with an agent is saved and can be viewed in the audit logs, ensuring transparency and accountability. Granular permission controls determine who can interact with and see each agent, and an admin dashboard provides usage monitoring, audit log export, and rate limiting and usage controls.