
# OpenAI compatible setup

Port's openai-compatible provider type connects to an HTTP endpoint that implements the OpenAI chat completions API. You supply a base URL, optional custom headers, and a catalog of model names your server actually exposes. This page walks through LiteLLM as a concrete example. LiteLLM is a common choice because it can route many upstream models through one OpenAI-compatible surface. See the LiteLLM docs for product-specific behavior and deployment options.

## Prerequisites

Complete any LiteLLM deployment and networking your organization requires. Your LiteLLM base URL (including path prefix, often /v1) must be reachable from Port, and you must know which model strings your LiteLLM instance accepts. Then follow store API keys in secrets in the main setup guide, and return here for the request body you send to the Create or connect an LLM provider API.
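
Before registering anything in Port, it can help to confirm the proxy is reachable and to list the model ids it exposes. A minimal sketch, assuming your LiteLLM deployment serves the standard OpenAI-compatible `GET /models` route; the base URL and key are placeholders for your own deployment:

```python
import requests

# Placeholders: substitute your own base URL and the key your proxy accepts
# (the same value you will store in Port as LITELLM_API_KEY).
BASE_URL = "https://litellm.corp.example.com/v1"
API_KEY = "sk-..."

resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

# Each "id" is a model string you can register in Port.
for model in resp.json()["data"]:
    print(model["id"])
```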

## Example values for a LiteLLM proxy

Use names that match your own deployment.

| What | Example |
| --- | --- |
| Base URL | `https://litellm.corp.example.com/v1` |
| API key in Port (secret name) | `LITELLM_API_KEY` (the secret value is the API key your LiteLLM proxy accepts) |
| Model name in Port | `my-litellm-model` (a model id you configured in LiteLLM, such as a key from `model_list` in `config.yaml`, or a router-exposed name) |

The name in Port must match the model string you pass as `model` when you trigger general-purpose AI interactions or invoke a specific agent with `provider` set to `openai-compatible`.
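
As a rough illustration, only `provider` and `model` are constrained by this guide; the rest of the payload below is hypothetical, and the actual invocation schema is in the AI interactions docs:

```python
# Hypothetical payload shape; only "provider" and "model" come from this guide.
interaction = {
    "provider": "openai-compatible",
    "model": "my-litellm-model",  # must match a model "name" registered in Port
    "prompt": "Summarize open incidents for the checkout service.",  # illustrative
}
```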

## Register LiteLLM with the Port API

Call Create or connect an LLM provider with `validate_connection=true` in the query string while testing. The body uses `provider: "openai-compatible"` and a `config` that matches your server.

Below is a minimal example you can copy and edit. The `models` array must list at least one object with a `name` of at least three characters. Optional fields such as `displayName`, `contextWindow`, and `supportedFeatures` help the Port UI and routing. Set `supportsStructuredOutputs` to align with what your stack returns when you use structured output with Port.

```json
{
  "provider": "openai-compatible",
  "enabled": true,
  "config": {
    "baseUrl": "https://litellm.corp.example.com/v1",
    "apiKeySecretName": "LITELLM_API_KEY",
    "models": [
      {
        "name": "claude-sonnet",
        "displayName": "Claude Sonnet 4.6",
        "contextWindow": 128000,
        "supportedFeatures": {
          "temperature": true
        }
      }
    ],
    "supportsStructuredOutputs": true
  }
}
```
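
To send this body, POST it with your Port bearer token and the `validate_connection=true` query parameter. A minimal sketch, assuming a hypothetical endpoint path; confirm the exact route for Create or connect an LLM provider in the API reference:

```python
import os
import requests

PORT_API = "https://api.getport.io"
TOKEN = os.environ["PORT_BEARER_TOKEN"]  # a valid Port API token

body = {
    "provider": "openai-compatible",
    "enabled": True,
    "config": {
        "baseUrl": "https://litellm.corp.example.com/v1",
        "apiKeySecretName": "LITELLM_API_KEY",
        "models": [{"name": "claude-sonnet"}],
        "supportsStructuredOutputs": True,
    },
}

resp = requests.post(
    f"{PORT_API}/v1/llm-providers",  # hypothetical path; see the API reference
    params={"validate_connection": "true"},  # validate the connection while testing
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
    timeout=30,
)
print(resp.status_code, resp.json())
```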

If you need static routing or extra auth headers, add a `customHeaders` map, as in the sketch below. Keys and values are sent on every request to the LiteLLM base URL. See the API reference for the full schema.
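
For example, a `config` with custom headers might look like this; the header names are placeholders for whatever your proxy or gateway expects:

```python
# Header names below are hypothetical; use the ones your routing layer expects.
config = {
    "baseUrl": "https://litellm.corp.example.com/v1",
    "apiKeySecretName": "LITELLM_API_KEY",
    "customHeaders": {
        "x-litellm-tags": "port",        # e.g. a static routing hint
        "x-corp-gateway-auth": "token",  # e.g. extra auth for an internal gateway
    },
    "models": [{"name": "claude-sonnet"}],
}
```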

## After registration

For the validation flow, default model selection, and common failure modes, use Setup & configuration alongside your LiteLLM access logs and metrics.