
Setup & Configuration

This guide covers all technical details for setting up and configuring LLM providers, including permissions, changing defaults, validation flow, and troubleshooting common issues.

Permissions & Access Control

Admin access required

Managing LLM provider settings requires organization administrator permissions. Only admins can modify default providers or add new provider configurations.

Administrators can perform all LLM provider management operations:


  • Set organization-wide default providers and models
  • Configure provider-specific settings and credentials
  • Manage provider access and permissions
  • Test provider connections with validation

Prerequisites

Before configuring LLM providers, ensure you have:

  1. Access to Port AI: your organization has access to Port AI features.
  2. Provider Accounts: active accounts with the LLM providers you want to use.
  3. Admin Permissions: organization administrator role in Port.

Some providers require additional setup before you can configure them in Port. See Step 1: Configure provider policies and settings for provider-specific configuration instructions.

Step 1: Configure provider policies and settings (optional)

Some providers require additional setup before you can register them in Port.

Step 2: Store API Keys in Secrets

Before configuring providers, store your API keys in Port's secrets system. The secret names you choose are flexible; you'll reference them in your provider configuration.

  1. In your Port application, click on your profile picture.
  2. Click on Credentials.
  3. Click on the Secrets tab.
  4. Click on + Secret and add the required secrets for your chosen provider(s):

Required Secret:

  • API Key secret (e.g., openai-api-key): your provider's API key (OpenAI in this example)
Secret naming flexibility

You can choose any names for your secrets. The examples above are suggestions; use names that make sense for your organization. You'll reference these exact names in your provider configuration.

One-time view

After creating a secret, you can view its value only once. Afterwards, you can delete the secret or edit its value, but not view it.

For more details on managing secrets, see the Port Secrets documentation.
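
If you prefer to script this step, secrets can also be created through Port's REST API. A minimal sketch, assuming the organization secrets endpoint; check the Port Secrets documentation for the authoritative path and payload:

# Endpoint path and payload are assumptions; see the Port Secrets documentation.
curl -s -X POST 'https://api.port.io/v1/organization/secrets' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "secretName": "openai-api-key",
    "secretValue": "<YOUR_OPENAI_API_KEY>"
  }'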

Step 3: Configure LLM Providers

Use the Create or connect an LLM provider API to configure your providers. The interactive API reference provides detailed examples and allows you to test the configuration for each provider type (OpenAI, Anthropic, Azure OpenAI, Azure Anthropic, AWS Bedrock, OpenAI compatible).
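
As an illustration, a minimal OpenAI registration might look like the sketch below. The apiKeySecretName field references the secret created in Step 2; the exact config schema for each provider type is documented in the interactive API reference.

curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "enabled": true,
    "config": {
      "apiKeySecretName": "openai-api-key"
    }
  }'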

After configuration

Once providers are configured, you can view and select default providers and models through the UI (Builder → Organization Settings → AI tab) or continue using the API for all operations.

Model overrides

By default, Port tests all supported models when you register a provider. If your account only has access to a subset of models, the validation will fail for the ones you don't have, and the entire registration will be rejected.

To register a provider with only specific models, use the overrides field to explicitly enable the models you want and disable the rest:

Example: Bedrock with selective model enablement (click to expand)
curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "bedrock",
    "enabled": true,
    "config": {
      "roleArn": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_ROLE_NAME>",
      "region": "eu-central-1",
      "externalIdSecretName": "BEDROCK_ROLE_EXTERNAL_ID"
    },
    "overrides": {
      "models": {
        "claude-haiku-4-5-20251001": { "enabled": true },
        "claude-sonnet-4-5-20250929": { "enabled": true },
        "claude-sonnet-4-20250514": { "enabled": false },
        "claude-opus-4-5-20251101": { "enabled": false },
        "claude-opus-4-6": { "enabled": false }
      }
    }
  }'
All unsupported models must be explicitly disabled

Setting enabled: true on only the models you want is not enough. Models you don't have access to must be explicitly set to enabled: false; otherwise Port will still attempt to validate them and the registration will fail.

Step 4: Validate configuration

Test your provider configuration with connection validation using the Create or connect an LLM provider API with the validate_connection=true parameter. The interactive API reference shows how to test your configuration before saving it.
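
The validation response reports a result per model. An illustrative success response, assuming the same testedModels shape as the failure example shown in the FAQ below (the exact envelope may differ):

{
  "ok": true,
  "details": {
    "testedModels": {
      "claude-sonnet-4-5-20250929": { "isValid": true },
      "claude-haiku-4-5-20251001": { "isValid": true }
    }
  }
}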

Getting your current configuration

You can view your organization's current LLM provider defaults through the UI or API:

Using the UI:

  1. Go to Builder → Organization Settings → AI tab.
  2. View all configured providers and models.
  3. See which provider and model are currently set as defaults.

Using the API: Retrieve your organization's current LLM provider defaults using the Get default LLM provider and model API. The interactive API reference shows the response format and allows you to test the endpoint.
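
A sketch of the call; the path below is an assumption based on the create endpoint shown earlier, so confirm it against the interactive API reference:

# Path is an assumption; see the Get default LLM provider and model API reference.
curl -s -X GET 'https://api.port.io/v1/llm-providers/default' \
  -H "Authorization: Bearer $PORT_TOKEN"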

System Defaults

When no organization-specific defaults are configured, Port uses these system defaults:

  • Default Provider: port
  • Default Model: claude-sonnet-4-5-20250929

Changing Default Providers

You can change your organization's default LLM provider and model through the UI or API:

Using the UI:

  1. Go to Builder → Organization Settings → AI tab.
  2. Select your preferred Default LLM provider from the dropdown.
  3. Select your preferred Default model from the dropdown.
  4. Click Save to apply your changes.
Adding new providers

To add a new custom LLM provider, you still need to use the Create or connect an LLM provider API. Once a provider is configured, it will appear in the UI dropdown for selection.

Using the API: Update your organization's default LLM provider and model using the Change default LLM provider and model API. The interactive API reference provides the request format and response examples.
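
A hedged sketch of the update call; the method, path, and payload are assumptions, so confirm them against the interactive API reference:

# Method, path, and payload are assumptions; see the Change default LLM provider and model API reference.
curl -s -X PUT 'https://api.port.io/v1/llm-providers/default' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "bedrock",
    "model": "claude-sonnet-4-5-20250929"
  }'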

Validation Flow

The system validates provider configurations to ensure they work correctly before saving. This includes checking credentials, testing connections, and verifying model availability.

For detailed information about how validation works during API requests, see Selecting LLM Provider.

Configuration Hierarchy

LLM provider settings follow a hierarchy from organization defaults to system defaults.

For detailed information about how defaults are selected during API requests, see Selecting LLM Provider.

Frequently Asked Questions

I'm getting "LLM provider not found" - what should I do?

This error occurs when trying to use a provider that hasn't been configured:

{
  "ok": false,
  "error": {
    "name": "LLMProviderNotFoundError",
    "message": "LLM provider 'openai' not found for organization"
  }
}

Solution: Create the provider configuration first using the steps above, or contact your organization administrator.
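
To see which providers are already configured, you can list them. Assuming the collection supports GET (the create call shown earlier is a POST to the same path):

# GET on this path is an assumption; the POST create endpoint is documented above.
curl -s -X GET 'https://api.port.io/v1/llm-providers' \
  -H "Authorization: Bearer $PORT_TOKEN"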

Why is my connection test failing?

Connection test failures usually indicate credential or configuration issues:

{
  "ok": false,
  "error": "llm_provider_model_test_error",
  "message": "Connection test failed for one or more models",
  "details": {
    "testedModels": {
      "claude-sonnet-4-5-20250929": { "isValid": true },
      "claude-sonnet-4-20250514": { "isValid": false, "message": "Failed to process your request, Please contact support." }
    }
  }
}

Common causes:

  • Partial model access (most common with Bedrock): Port validates all supported models by default. If your IAM policy only covers some models, the uncovered ones will fail and block the entire registration. Use model overrides to disable models you don't need.
  • Incorrect API key or secret: Verify the key is stored correctly in Port secrets. For Bedrock with assume role, ensure the external ID secret value matches your trust policy exactly, including no trailing whitespace.
  • Missing model access: For Bedrock, confirm you've completed the one-time Anthropic usage form and that the models show "Access granted" in the AWS Console under Bedrock → Model access.
  • Insufficient quota/credits: Check your provider account's billing and usage limits.
How do I troubleshoot AWS Bedrock connection failures?

If your Bedrock provider registration fails, use these steps to isolate the issue.

1. Check your external ID secret

The secret value in Port must match the sts:ExternalId in your trust policy character-for-character. Even a trailing space will cause a silent failure. Go to your profile picture → Credentials → Secrets and recreate the secret if unsure.
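
For reference, the external ID check lives in your role's trust policy as an sts:ExternalId condition. A minimal sketch; the Port gateway role ARN is a placeholder for the value provided during your Port setup:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<PORT_GATEWAY_ROLE_ARN>" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<YOUR_EXTERNAL_ID_VALUE>" }
      }
    }
  ]
}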

2. Verify Anthropic model access

In the AWS Console, go to Bedrock → Model access and confirm the Claude models you want to use show "Access granted." If not, submit the one-time Anthropic usage form through the Bedrock console.

3. Self-test with AWS CLI

You can test the assume-role and model invocation independently, without Port, to isolate where the failure occurs.

First, temporarily add yourself to the trust policy so you can assume the role. If using SSO, add arn:aws:iam::<your_account_id>:root as a principal (SSO role ARNs have hidden path prefixes that AWS rejects). If using regular IAM, add your IAM user or role ARN directly.

Then run:

# Step 1: Assume the role
eval $(aws sts assume-role \
  --role-arn "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_ROLE_NAME>" \
  --role-session-name "test-byollm" \
  --external-id "<YOUR_EXTERNAL_ID_VALUE>" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text | awk '{print "export AWS_ACCESS_KEY_ID="$1"\nexport AWS_SECRET_ACCESS_KEY="$2"\nexport AWS_SESSION_TOKEN="$3}')

# Step 2: Test model invocation
echo '{"anthropic_version":"bedrock-2023-05-31","max_tokens":256,"messages":[{"role":"user","content":"Hello"}]}' > /tmp/body.json

aws bedrock-runtime invoke-model \
  --model-id us.anthropic.claude-sonnet-4-20250514-v1:0 \
  --region us-east-1 \
  --content-type application/json \
  --body fileb:///tmp/body.json \
  /tmp/output.json && cat /tmp/output.json

Interpreting results:

  • If assume-role fails → trust policy issue (wrong principal ARN, wrong external ID, or missing Port gateway role)
  • If invoke-model fails → IAM permissions issue or model access not granted

Remove the temporary principal from your trust policy after testing.

I'm getting "apiKeySecretName is required" error

This indicates missing required configuration parameters:

{
  "ok": false,
  "error": {
    "name": "LLMProviderInvalidConfigError",
    "message": "apiKeySecretName is required"
  }
}

Solution: Check the provider-specific configuration requirements in the setup steps above and ensure all required fields are provided.

I don't have permission to manage LLM providers
{
  "name": "llm_provider_manage_forbidden",
  "message": "You do not have permission to manage LLM providers"
}

Solution: Only organization administrators can manage LLM providers. Contact your admin to get the necessary permissions or ask them to configure the providers for you.

How can I debug provider configuration issues?

General debugging steps:

  • Test connection: Use validate_connection=true parameter when creating providers. The response shows per-model validation results.
  • Check secrets: Ensure API keys are stored correctly in Port's secrets system. Secret values can only be viewed once after creation.
  • Verify permissions: Ensure your provider credentials have the required permissions for the models you're using.
  • Check quotas: Monitor usage limits and billing status for external providers.
  • Provider status: Check if your external provider service is experiencing outages.

Bedrock-specific debugging:

  • IAM policy: Ensure your policy includes both inference-profile and foundation-model ARN entries for each model. See the IAM policy examples above and the sketch after this list.
  • Trust policy: Verify the correct Port gateway role is in your trust policy. See the trust relationship configuration section.
  • External ID: Recreate the secret in Port if you suspect a mismatch. Even trailing whitespace causes failures.
  • Self-test: Use the AWS CLI to test assume-role and model invocation independently. See the Bedrock troubleshooting FAQ above.
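
A minimal IAM policy sketch for a single model, assuming us-east-1 and the cross-region inference profile used in the CLI example above. Adjust the region, account ID, and model IDs to match your setup; cross-region profiles may need foundation-model entries for every region the profile routes to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
      "Resource": [
        "arn:aws:bedrock:us-east-1:<YOUR_ACCOUNT_ID>:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0",
        "arn:aws:bedrock:*::foundation-model/anthropic.claude-sonnet-4-20250514-v1:0"
      ]
    }
  ]
}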
What should I do if a model isn't enabled for my provider?
{
  "ok": false,
  "error": {
    "name": "LLMProviderModelNotEnabledError",
    "message": "Model 'gpt-5' is not enabled for provider 'openai'"
  }
}

Solution: This usually means the model needs to be enabled in your provider configuration. Contact your organization administrator to enable the specific model for your provider.