Codex
This integration was created using the custom Ocean integration builder.
Please note that:
- This integration will not be listed in the Data sources page of your Port application, and must be installed manually using the instructions on this page.
- This integration will not create components (e.g. blueprints, mappings, etc.) in your portal automatically; you will need to create them manually using the instructions on this page.
Port's Codex integration ingests foundational OpenAI usage metrics into your software catalog using the Ocean Custom Integration framework. It focuses on two reliable data sources: daily cost summaries and model-level usage statistics.
Supported resources
The Codex integration can ingest the following resources into Port:
- openai_daily_usage – Daily totals for requests, tokens, and spend from /dashboard/billing/usage.
- openai_model_usage – Model-level request and token breakdowns from /usage.
- openai_model – Available OpenAI models and their details from /models.
These resources provide visibility into your OpenAI usage, costs, and available models.
Prerequisites
To use this integration, you need:
- An OpenAI API key with access to the usage and billing endpoints.
- Network access from the Ocean integration to api.openai.com.
To create an OpenAI API key:
- Navigate to the OpenAI Platform and sign in to your account.
- Click on your profile icon in the top right corner and select API keys.
- Click Create new secret key.
- Give your key a name (e.g., "Port Integration") and click Create secret key.
- Copy the API key immediately (it starts with sk-). You won't be able to see it again after closing the dialog.
Store your API key securely and never share it. The key provides access to your OpenAI account usage and billing data.
Review the OpenAI usage docs to understand the exact response structure returned by each endpoint.
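Before installing, you may want to confirm that your key and network path work. A minimal Python sketch of the request the integration will make (build_request is a hypothetical helper, not part of Ocean; the bearer_token auth type in the integration config translates to exactly this header):

```python
import os
import urllib.request

# Build (but do not send) an authenticated request for an OpenAI endpoint.
def build_request(path: str, api_key: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"https://api.openai.com/v1{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("/models", os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
# To actually send it: urllib.request.urlopen(req)
# A 200 response confirms both the key and network access to api.openai.com.
```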
Installation
Choose one of the following installation methods to deploy the Ocean Custom Integration:
- Helm
- Docker
Prerequisites
To install the integration, you need a Kubernetes cluster to which the integration's Helm chart will be deployed.
Please make sure that you have kubectl and helm installed on your machine, and that your kubectl CLI is connected to the Kubernetes cluster where you plan to install the integration.
If you are having trouble installing this integration, please refer to these troubleshooting steps.
Installation
- Add Port's Helm repo and install the Ocean Custom Integration:
Remember to replace the placeholders for YOUR_PORT_CLIENT_ID, YOUR_PORT_CLIENT_SECRET, and YOUR_OPENAI_API_KEY.
helm repo add --force-update port-labs https://port-labs.github.io/helm-charts
helm upgrade --install my-ocean-codex-integration port-labs/port-ocean \
--set port.clientId="YOUR_PORT_CLIENT_ID" \
--set port.clientSecret="YOUR_PORT_CLIENT_SECRET" \
--set port.baseUrl="https://api.getport.io" \
--set initializePortResources=true \
--set integration.identifier="codex-integration" \
--set integration.type="custom" \
--set integration.eventListener.type="POLLING" \
--set integration.config.baseUrl="https://api.openai.com/v1" \
--set integration.config.authType="bearer_token" \
--set integration.config.apiToken="YOUR_OPENAI_API_KEY"
The port_region, port.baseUrl, portBaseUrl, port_base_url, and OCEAN__PORT__BASE_URL parameters select which instance of the Port API is used.
Port exposes two API instances, one for the EU region of Port, and one for the US region of Port.
- If you use the EU region of Port (https://app.port.io), your API URL is https://api.port.io.
- If you use the US region of Port (https://app.us.port.io), your API URL is https://api.us.port.io.
Configuration parameters
| Parameter | Description | Example | Required |
|---|---|---|---|
| port.clientId | Your Port client id. | | ✅ |
| port.clientSecret | Your Port client secret. | | ✅ |
| port.baseUrl | Your Port API URL (https://api.getport.io for EU, https://api.us.getport.io for US). | | ✅ |
| integration.config.baseUrl | Base URL for the OpenAI API. | https://api.openai.com/v1 | ✅ |
| integration.config.authType | Authentication type (use bearer_token for OpenAI). | bearer_token | ✅ |
| integration.config.apiToken | OpenAI API key (starts with sk-). | sk-abc123 | ✅ |
| integration.eventListener.type | Event listener type for the integration. | POLLING | ✅ |
| integration.type | Integration type. Must be custom. | custom | ✅ |
| integration.identifier | Unique identifier for this integration instance. | codex-integration | ✅ |
| initializePortResources | Create default blueprints and mappings on first run. | true | ❌ |
| scheduledResyncInterval | Minutes between scheduled syncs. Defaults to the event listener interval when omitted. | 120 | ❌ |
| sendRawDataExamples | Send sample payloads for easier mapping. | true | ❌ |
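The same parameters can be kept in a values file instead of repeated --set flags. A sketch (the filename codex-values.yaml is just an example; keys mirror the --set paths above):

```yaml
# codex-values.yaml — pass with: helm upgrade --install ... -f codex-values.yaml
port:
  clientId: "YOUR_PORT_CLIENT_ID"
  clientSecret: "YOUR_PORT_CLIENT_SECRET"
  baseUrl: "https://api.getport.io"
initializePortResources: true
integration:
  identifier: "codex-integration"
  type: "custom"
  eventListener:
    type: "POLLING"
  config:
    baseUrl: "https://api.openai.com/v1"
    authType: "bearer_token"
    apiToken: "YOUR_OPENAI_API_KEY"
```

Keeping secrets in a values file makes it easier to manage them with your usual secret-handling tooling instead of shell history.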
For advanced configuration such as proxies or self-signed certificates, click here.
To run the integration using Docker for a one-time sync:
Remember to replace the placeholders for YOUR_PORT_CLIENT_ID, YOUR_PORT_CLIENT_SECRET, and YOUR_OPENAI_API_KEY.
docker run -i --rm --platform=linux/amd64 \
-e OCEAN__EVENT_LISTENER='{"type":"ONCE"}' \
-e OCEAN__INITIALIZE_PORT_RESOURCES=true \
-e OCEAN__SEND_RAW_DATA_EXAMPLES=true \
-e OCEAN__INTEGRATION__CONFIG__BASE_URL="https://api.openai.com/v1" \
-e OCEAN__INTEGRATION__CONFIG__AUTH_TYPE="bearer_token" \
-e OCEAN__INTEGRATION__CONFIG__API_TOKEN="YOUR_OPENAI_API_KEY" \
-e OCEAN__PORT__CLIENT_ID="YOUR_PORT_CLIENT_ID" \
-e OCEAN__PORT__CLIENT_SECRET="YOUR_PORT_CLIENT_SECRET" \
-e OCEAN__PORT__BASE_URL="https://api.getport.io" \
ghcr.io/port-labs/port-ocean-custom:latest
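If you rerun the one-time sync often, the same environment variables can live in an env file and be passed with Docker's --env-file flag. A sketch (the filename codex.env is just an example):

```text
# codex.env — one VAR=value per line, no quotes needed
OCEAN__EVENT_LISTENER={"type":"ONCE"}
OCEAN__INITIALIZE_PORT_RESOURCES=true
OCEAN__SEND_RAW_DATA_EXAMPLES=true
OCEAN__INTEGRATION__CONFIG__BASE_URL=https://api.openai.com/v1
OCEAN__INTEGRATION__CONFIG__AUTH_TYPE=bearer_token
OCEAN__INTEGRATION__CONFIG__API_TOKEN=YOUR_OPENAI_API_KEY
OCEAN__PORT__CLIENT_ID=YOUR_PORT_CLIENT_ID
OCEAN__PORT__CLIENT_SECRET=YOUR_PORT_CLIENT_SECRET
OCEAN__PORT__BASE_URL=https://api.getport.io
```

Then run: docker run -i --rm --platform=linux/amd64 --env-file codex.env ghcr.io/port-labs/port-ocean-custom:latest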
The port_region, port.baseUrl, portBaseUrl, port_base_url, and OCEAN__PORT__BASE_URL parameters select which instance of the Port API is used.
Port exposes two API instances, one for the EU region of Port, and one for the US region of Port.
- If you use the EU region of Port (https://app.port.io), your API URL is https://api.port.io.
- If you use the US region of Port (https://app.us.port.io), your API URL is https://api.us.port.io.
For advanced configuration such as proxies or self-signed certificates, click here.
Set up data model
Before syncing data, create the blueprints that define your OpenAI entities (usage metrics and available models).
To create the blueprints:
- Go to your Builder page.
- Click the + Blueprint button.
- Copy each blueprint JSON from the sections below.
OpenAI daily usage blueprint (Click to expand)
{
"identifier": "openai_daily_usage",
"title": "OpenAI Daily Usage",
"icon": "OpenAI",
"schema": {
"properties": {
"date": {
"type": "string",
"format": "date",
"title": "Date"
},
"total_requests": {
"type": "number",
"title": "Total Requests"
},
"total_tokens": {
"type": "number",
"title": "Total Tokens"
},
"total_cost": {
"type": "number",
"title": "Total Cost (USD)"
}
},
"required": [
"date"
]
},
"mirrorProperties": {},
"calculationProperties": {},
"aggregationProperties": {},
"relations": {}
}
OpenAI model usage blueprint (Click to expand)
{
"identifier": "openai_model_usage",
"title": "OpenAI Model Usage",
"icon": "OpenAI",
"schema": {
"properties": {
"model": {
"type": "string",
"title": "Model Name"
},
"date": {
"type": "string",
"format": "date",
"title": "Date"
},
"requests": {
"type": "number",
"title": "Requests"
},
"tokens": {
"type": "number",
"title": "Tokens Used"
}
},
"required": [
"model",
"date"
]
},
"mirrorProperties": {},
"calculationProperties": {},
"aggregationProperties": {},
"relations": {}
}
OpenAI Model blueprint (Click to expand)
{
"identifier": "openai_model",
"title": "OpenAI Model",
  "icon": "OpenAI",
"schema": {
"properties": {
"modelId": {
"type": "string",
"title": "Model ID"
},
"object": {
"type": "string",
"title": "Object Type"
},
"created": {
"type": "number",
"title": "Created Timestamp"
},
"ownedBy": {
"type": "string",
"title": "Owned By"
},
"permission": {
"type": "array",
"title": "Permissions"
}
},
"required": [
"modelId"
]
},
"mirrorProperties": {},
"calculationProperties": {},
"aggregationProperties": {},
"relations": {}
}
- Click Save after each blueprint is added.
Configuration
Each resource maps an OpenAI endpoint to the Port entities defined above.
Key mapping components:
- kind – API endpoint path appended to https://api.openai.com/v1.
- selector – Request payload, pagination controls, and data selection logic.
- port.entity.mappings – JQ expressions that transform the API payload into Port entities.
Daily usage summary mapping (Click to expand)
resources:
  - kind: /dashboard/billing/usage
    selector:
      query: 'true'
      query_params:
        start_date: '((now | floor) - (86400 * 30)) | strftime("%Y-%m-%d")'
        end_date: '(now | floor) | strftime("%Y-%m-%d")'
    port:
      entity:
        mappings:
          identifier: '"daily-" + (.timestamp // .aggregation_timestamp // "unknown")'
          title: '"OpenAI Usage " + (.timestamp // .aggregation_timestamp // "unknown")'
          blueprint: '"openai_daily_usage"'
          properties:
            date: (.timestamp // .aggregation_timestamp // "" | split("T")[0])
            total_requests: .total_requests // 0
            total_tokens: .total_tokens // 0
            total_cost: (.total_usage // 0) / 100
/dashboard/billing/usage returns costs in cents. Divide by 100 to store USD.
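To sanity-check the JQ expressions above, the same transformation can be sketched in Python against a hand-made sample record (the field values are illustrative, not a real API response; JQ's // fallback operator is modeled with a chain of or):

```python
# Illustrative record shaped like one item from /dashboard/billing/usage.
record = {"aggregation_timestamp": "2024-05-01T00:00:00Z", "total_usage": 1234}

# JQ's `a // b` falls back to b when a is null/missing.
timestamp = record.get("timestamp") or record.get("aggregation_timestamp") or "unknown"
entity = {
    "identifier": f"daily-{timestamp}",
    "title": f"OpenAI Usage {timestamp}",
    "blueprint": "openai_daily_usage",
    "properties": {
        "date": timestamp.split("T")[0],
        "total_requests": record.get("total_requests") or 0,
        "total_tokens": record.get("total_tokens") or 0,
        # total_usage is in cents — divide by 100 to store USD.
        "total_cost": (record.get("total_usage") or 0) / 100,
    },
}
```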
Model usage breakdown mapping (Click to expand)
resources:
  - kind: /usage
    selector:
      query: 'true'
      query_params:
        date: '(now | floor) | strftime("%Y-%m-%d")'
    port:
      entity:
        mappings:
          identifier: .snapshot_id + "-" + ((.aggregation_timestamp // 0) | tostring)
          title: .snapshot_id + " usage"
          blueprint: '"openai_model_usage"'
          properties:
            model: .snapshot_id
            date: (.aggregation_timestamp // 0 | strftime("%Y-%m-%d"))
            requests: .n_requests // 0
            tokens: (.n_context_tokens_total // 0) + (.n_generated_tokens_total // 0)
snapshot_id typically corresponds to the model name (for example, gpt-4o). Use it for both the identifier and the model property to keep the mapping simple.
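The same kind of check works for the model-usage mapping, using Python's time.strftime in place of JQ's strftime (the sample record is illustrative, not a real API response):

```python
import time

# Illustrative record shaped like one item from /usage.
record = {
    "snapshot_id": "gpt-4o",
    "aggregation_timestamp": 1714521600,  # 2024-05-01 00:00:00 UTC
    "n_requests": 42,
    "n_context_tokens_total": 1000,
    "n_generated_tokens_total": 500,
}

entity = {
    "identifier": f"{record['snapshot_id']}-{record.get('aggregation_timestamp') or 0}",
    "title": f"{record['snapshot_id']} usage",
    "blueprint": "openai_model_usage",
    "properties": {
        "model": record["snapshot_id"],
        # JQ's strftime operates on UTC, so use gmtime rather than localtime.
        "date": time.strftime("%Y-%m-%d", time.gmtime(record.get("aggregation_timestamp") or 0)),
        "requests": record.get("n_requests") or 0,
        # Context + generated tokens, mirroring the `tokens` expression above.
        "tokens": (record.get("n_context_tokens_total") or 0)
        + (record.get("n_generated_tokens_total") or 0),
    },
}
```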
OpenAI Models mapping (Click to expand)
resources:
  - kind: /models
    selector:
      query: 'true'
      data_path: '.data'
    port:
      entity:
        mappings:
          identifier: .id
          title: .id
          blueprint: '"openai_model"'
          properties:
            modelId: .id
            object: .object
            ownedBy: .owned_by
            permission: .permission
The /models endpoint returns a list of all available OpenAI models. This is useful for cataloging which models are available in your account and tracking model availability over time.
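The data_path selector is what unwraps the list: /models returns an envelope object, and '.data' points the integration at the array inside it. A Python sketch of that extraction plus the mapping (the sample response is illustrative, not real API output):

```python
# Illustrative envelope shaped like a /models response.
response = {
    "object": "list",
    "data": [
        {"id": "gpt-4o", "object": "model", "created": 1715367049, "owned_by": "system"},
        {"id": "gpt-3.5-turbo", "object": "model", "created": 1677610602, "owned_by": "openai"},
    ],
}

items = response["data"]  # what data_path: '.data' selects
entities = [
    {
        "identifier": m["id"],
        "title": m["id"],
        "blueprint": "openai_model",
        "properties": {"modelId": m["id"], "object": m["object"], "ownedBy": m["owned_by"]},
    }
    for m in items
]
```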
- Click Save to persist the mapping.
Customization
If you want to expand beyond the starter resources, use the interactive builder to:
- Test additional OpenAI endpoints.
- Explore the response shape and detected property types.
- Generate blueprint JSON and mapping snippets automatically.
- Export installation commands with your configuration pre-filled.
Start with the daily and model usage entities above, then add more resources (such as per-organization or per-team reports) once you verify the value.