Codex

Custom Ocean integration

This integration was created using the custom Ocean integration builder.
Please note that:

  1. This integration will not be listed in the Data sources page of your Port application; it must be installed manually using the instructions on this page.
  2. This integration will not create components (e.g. blueprints, mappings) in your portal automatically; you will need to create them manually using the instructions on this page.

Port's Codex integration ingests foundational OpenAI usage metrics into your software catalog using the Ocean Custom Integration framework. It focuses on two reliable data sources: daily cost summaries and model-level usage statistics.

Supported resources

The Codex integration can ingest the following resources into Port:

  • openai_daily_usage – Daily totals for requests, tokens, and spend from /dashboard/billing/usage.
  • openai_model_usage – Model-level request and token breakdowns from /usage.
  • openai_model – Available OpenAI models and their details from /models.

These resources provide visibility into your OpenAI usage, costs, and available models.

Prerequisites

To use this integration, you need:

  • An OpenAI API key with access to the usage and billing endpoints.
  • Network access from the Ocean integration to api.openai.com.

To create an OpenAI API key:

  1. Navigate to the OpenAI Platform and sign in to your account.
  2. Click on your profile icon in the top right corner and select API keys.
  3. Click Create new secret key.
  4. Give your key a name (e.g., "Port Integration") and click Create secret key.
  5. Copy the API key immediately (it starts with sk-). You won't be able to see it again after closing the dialog.
API key security

Store your API key securely and never share it. The key provides access to your OpenAI account usage and billing data.

Review the OpenAI usage docs to understand the exact response structure returned by each endpoint.
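Before installing, you can sanity-check both prerequisites at once with a short script. This is a hedged sketch using only the Python standard library; the endpoint and bearer-token header follow OpenAI's standard authentication scheme.

```python
# Optional sanity check for the prerequisites above: can this environment reach
# api.openai.com, and does the key authenticate? Standard library only.
import urllib.request

def build_models_request(api_key: str,
                         base_url: str = "https://api.openai.com/v1") -> urllib.request.Request:
    """Build an authenticated GET request against the /models endpoint."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def check_access(api_key: str) -> int:
    """Return the HTTP status; raises urllib.error.HTTPError on a bad key (401)."""
    with urllib.request.urlopen(build_models_request(api_key), timeout=10) as resp:
        return resp.status
```

Run check_access("sk-...") from the machine (or cluster) that will host the integration; a 200 response confirms both the key and the network path.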

Installation

Choose one of the following installation methods to deploy the Ocean Custom Integration:

Prerequisites

To install the integration, you need a Kubernetes cluster to which the integration's Helm chart will be deployed.

Please make sure that you have kubectl and helm installed on your machine, and that your kubectl CLI is connected to the Kubernetes cluster where you plan to install the integration.

Troubleshooting

If you are having trouble installing this integration, please refer to these troubleshooting steps.

Installation

  1. Add Port's Helm repo and install the Ocean Custom Integration:
Replace placeholders

Remember to replace the placeholders for YOUR_PORT_CLIENT_ID, YOUR_PORT_CLIENT_SECRET, and YOUR_OPENAI_API_KEY.

helm repo add --force-update port-labs https://port-labs.github.io/helm-charts
helm upgrade --install my-ocean-codex-integration port-labs/port-ocean \
  --set port.clientId="YOUR_PORT_CLIENT_ID" \
  --set port.clientSecret="YOUR_PORT_CLIENT_SECRET" \
  --set port.baseUrl="https://api.getport.io" \
  --set initializePortResources=true \
  --set integration.identifier="codex-integration" \
  --set integration.type="custom" \
  --set integration.eventListener.type="POLLING" \
  --set integration.config.baseUrl="https://api.openai.com/v1" \
  --set integration.config.authType="bearer_token" \
  --set integration.config.apiToken="YOUR_OPENAI_API_KEY"
Selecting a Port API URL by account region

The port_region, port.baseUrl, portBaseUrl, port_base_url, and OCEAN__PORT__BASE_URL parameters all select which instance of the Port API is used.

Port exposes two API instances: one for the EU region (https://api.getport.io) and one for the US region (https://api.us.getport.io).

Configuration parameters

| Parameter | Description | Example | Required |
| --- | --- | --- | --- |
| port.clientId | Your Port client ID. | | ✅ |
| port.clientSecret | Your Port client secret. | | ✅ |
| port.baseUrl | Your Port API URL (https://api.getport.io for EU, https://api.us.getport.io for US). | | ✅ |
| integration.config.baseUrl | Base URL for the OpenAI API. | https://api.openai.com/v1 | ✅ |
| integration.config.authType | Authentication type for OpenAI (use bearer_token). | bearer_token | ✅ |
| integration.config.apiToken | OpenAI API key (starts with sk-). | sk-abc123 | ✅ |
| integration.eventListener.type | Event listener type for the integration. | POLLING | ✅ |
| integration.type | Integration type. Must be custom. | custom | ✅ |
| integration.identifier | Unique identifier for this integration instance. | codex-integration | ✅ |
| initializePortResources | Create default blueprints and mappings on first run. | true | ❌ |
| scheduledResyncInterval | Minutes between scheduled syncs. Defaults to the event listener interval when omitted. | 120 | ❌ |
| sendRawDataExamples | Send sample payloads for easier mapping. | true | ❌ |

Advanced integration configuration

For advanced configuration such as proxies or self-signed certificates, refer to the advanced integration configuration documentation.

Set up data model

Before syncing data, create the blueprints that define your OpenAI entities (usage metrics and available models).

To create the blueprints:

  1. Go to your Builder page.

  2. Click the + Blueprint button.

  3. Copy each blueprint JSON from the sections below.

    OpenAI daily usage blueprint (Click to expand)

    {
      "identifier": "openai_daily_usage",
      "title": "OpenAI Daily Usage",
      "icon": "OpenAI",
      "schema": {
        "properties": {
          "date": {
            "type": "string",
            "format": "date",
            "title": "Date"
          },
          "total_requests": {
            "type": "number",
            "title": "Total Requests"
          },
          "total_tokens": {
            "type": "number",
            "title": "Total Tokens"
          },
          "total_cost": {
            "type": "number",
            "title": "Total Cost (USD)"
          }
        },
        "required": ["date"]
      },
      "mirrorProperties": {},
      "calculationProperties": {},
      "aggregationProperties": {},
      "relations": {}
    }
    OpenAI model usage blueprint (Click to expand)

    {
      "identifier": "openai_model_usage",
      "title": "OpenAI Model Usage",
      "icon": "OpenAI",
      "schema": {
        "properties": {
          "model": {
            "type": "string",
            "title": "Model Name"
          },
          "date": {
            "type": "string",
            "format": "date",
            "title": "Date"
          },
          "requests": {
            "type": "number",
            "title": "Requests"
          },
          "tokens": {
            "type": "number",
            "title": "Tokens Used"
          }
        },
        "required": ["model", "date"]
      },
      "mirrorProperties": {},
      "calculationProperties": {},
      "aggregationProperties": {},
      "relations": {}
    }
    OpenAI Model blueprint (Click to expand)

    {
      "identifier": "openai_model",
      "title": "OpenAI Model",
      "icon": "OpenAI",
      "schema": {
        "properties": {
          "modelId": {
            "type": "string",
            "title": "Model ID"
          },
          "object": {
            "type": "string",
            "title": "Object Type"
          },
          "created": {
            "type": "number",
            "title": "Created Timestamp"
          },
          "ownedBy": {
            "type": "string",
            "title": "Owned By"
          },
          "permission": {
            "type": "array",
            "title": "Permissions"
          }
        },
        "required": ["modelId"]
      },
      "mirrorProperties": {},
      "calculationProperties": {},
      "aggregationProperties": {},
      "relations": {}
    }
  4. Click Save after each blueprint is added.
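If you prefer automation over the UI steps above, the same blueprints can be created through Port's REST API. The sketch below assumes Port's v1 endpoints (/v1/auth/access_token and /v1/blueprints); verify them against Port's API reference before relying on it.

```python
# Hypothetical alternative to the UI flow: create blueprints via Port's REST API.
# Endpoint paths are assumed from Port's v1 API; verify against the API reference.
import json
import urllib.request

PORT_API = "https://api.getport.io"  # use https://api.us.getport.io for US accounts

def token_request(client_id: str, client_secret: str) -> urllib.request.Request:
    """POST /v1/auth/access_token to exchange credentials for a bearer token."""
    return urllib.request.Request(
        f"{PORT_API}/v1/auth/access_token",
        data=json.dumps({"clientId": client_id, "clientSecret": client_secret}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def blueprint_request(token: str, blueprint: dict) -> urllib.request.Request:
    """POST /v1/blueprints with one of the blueprint JSON objects above."""
    return urllib.request.Request(
        f"{PORT_API}/v1/blueprints",
        data=json.dumps(blueprint).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

# Usage (network calls, so not run here):
# access = json.load(urllib.request.urlopen(token_request(cid, secret)))["accessToken"]
# urllib.request.urlopen(blueprint_request(access, daily_usage_blueprint))
```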

Configuration

Each resource maps an OpenAI endpoint to the Port entities defined above.

Key mapping components:

  • kind – API endpoint path appended to https://api.openai.com/v1.
  • selector – Request payload, pagination controls, and data selection logic.
  • port.entity.mappings – JQ expressions that transform the API payload into Port entities.
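Conceptually, each mapping turns one raw API record into one Port entity. The Python sketch below emulates the daily-usage JQ expressions on an invented sample record, purely for illustration:

```python
# Illustration only: a Python emulation of the daily-usage JQ mapping, applied
# to an invented sample record shaped like a /dashboard/billing/usage line item.

def map_daily_usage(record: dict) -> dict:
    ts = record.get("timestamp") or record.get("aggregation_timestamp") or "unknown"
    return {
        "identifier": f"daily-{ts}",
        "title": f"OpenAI Usage {ts}",
        "blueprint": "openai_daily_usage",
        "properties": {
            "date": str(ts).split("T")[0],
            "total_requests": record.get("total_requests", 0),
            "total_tokens": record.get("total_tokens", 0),
            "total_cost": record.get("total_usage", 0) / 100,  # API reports cents
        },
    }

entity = map_daily_usage({"timestamp": "2024-05-01T00:00:00Z",
                          "total_requests": 12, "total_tokens": 3400,
                          "total_usage": 250})
```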
Daily usage summary mapping (Click to expand)
resources:
  - kind: /dashboard/billing/usage
    selector:
      query: 'true'
      query_params:
        start_date: '((now | floor) - (86400 * 30)) | strftime("%Y-%m-%d")'
        end_date: '(now | floor) | strftime("%Y-%m-%d")'
    port:
      entity:
        mappings:
          identifier: '"daily-" + (.timestamp // .aggregation_timestamp // "unknown")'
          title: '"OpenAI Usage " + (.timestamp // .aggregation_timestamp // "unknown")'
          blueprint: '"openai_daily_usage"'
          properties:
            date: (.timestamp // .aggregation_timestamp // "" | split("T")[0])
            total_requests: .total_requests // 0
            total_tokens: .total_tokens // 0
            total_cost: (.total_usage // 0) / 100
Cost units

/dashboard/billing/usage returns costs in cents. Divide by 100 to store USD.

Model usage breakdown mapping (Click to expand)
resources:
  - kind: /usage
    selector:
      query: 'true'
      query_params:
        date: '(now | floor) | strftime("%Y-%m-%d")'
    port:
      entity:
        mappings:
          identifier: .snapshot_id + "-" + ((.aggregation_timestamp // 0) | tostring)
          title: .snapshot_id + " usage"
          blueprint: '"openai_model_usage"'
          properties:
            model: .snapshot_id
            date: (.aggregation_timestamp // 0 | strftime("%Y-%m-%d"))
            requests: .n_requests // 0
            tokens: (.n_context_tokens_total // 0) + (.n_generated_tokens_total // 0)
Snapshot identifiers

snapshot_id typically corresponds to the model name (for example, gpt-4o). Use it for both the identifier and the model property to keep the mapping simple.
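To see what the JQ above produces, here is a hedged Python equivalent run on an invented /usage record (epoch seconds are formatted as UTC dates, matching JQ's strftime):

```python
# Illustration only: Python equivalent of the model-usage mapping above,
# applied to an invented /usage record.
import datetime

def map_model_usage(rec: dict) -> dict:
    ts = rec.get("aggregation_timestamp", 0)
    return {
        "identifier": f"{rec['snapshot_id']}-{ts}",
        "title": f"{rec['snapshot_id']} usage",
        "properties": {
            "model": rec["snapshot_id"],
            "date": datetime.datetime.fromtimestamp(
                ts, datetime.timezone.utc).strftime("%Y-%m-%d"),
            "requests": rec.get("n_requests", 0),
            "tokens": rec.get("n_context_tokens_total", 0)
                      + rec.get("n_generated_tokens_total", 0),
        },
    }

entity = map_model_usage({"snapshot_id": "gpt-4o",
                          "aggregation_timestamp": 1714521600,
                          "n_requests": 3,
                          "n_context_tokens_total": 100,
                          "n_generated_tokens_total": 40})
```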

OpenAI Models mapping (Click to expand)
resources:
  - kind: /models
    selector:
      query: 'true'
      data_path: '.data'
    port:
      entity:
        mappings:
          identifier: .id
          title: .id
          blueprint: '"openai_model"'
          properties:
            modelId: .id
            object: .object
            ownedBy: .owned_by
            permission: .permission
Models endpoint

The /models endpoint returns a list of all available OpenAI models. This is useful for cataloging which models are available in your account and tracking model availability over time.
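The data_path selector is what unwraps the response envelope. A minimal Python sketch of the same step, run on a sample payload shaped like the documented /models response:

```python
# Illustration only: emulate data_path '.data' by unwrapping the /models
# envelope and projecting each model into the blueprint's fields.
def extract_models(response: dict) -> list:
    return [
        {"identifier": m["id"], "modelId": m["id"], "ownedBy": m.get("owned_by", "")}
        for m in response.get("data", [])
    ]

models = extract_models({
    "object": "list",
    "data": [{"id": "gpt-4o", "object": "model", "owned_by": "system"}],
})
```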

  1. Click Save to persist the mapping.

Customization

If you want to expand beyond the starter resources, use the interactive builder to:

  1. Test additional OpenAI endpoints.
  2. Explore the response shape and detected property types.
  3. Generate blueprint JSON and mapping snippets automatically.
  4. Export installation commands with your configuration pre-filled.

Start with the daily and model usage entities above, then add more resources (such as per-organization or per-team reports) once you verify the value.