Setup & Configuration
This guide covers the technical details of setting up and configuring LLM providers, including permissions, changing defaults, validation flow, and troubleshooting common issues.
Permissions & Access Control
Managing LLM provider settings requires organization administrator permissions. Only admins can modify default providers or add new provider configurations.
Admin Users
Administrators can perform all LLM provider management operations:
Configuration Operations
- Get default LLM provider and model - View current default provider settings
- Change default LLM provider and model - Update organization default providers
- Create or connect an LLM provider - Create and configure new LLM provider connections
- Get a specific provider configuration - View existing provider configurations
- Delete a specific provider configuration - Delete provider configurations
Management Capabilities
- Set organization-wide default providers and models
- Configure provider-specific settings and credentials
- Manage provider access and permissions
- Test provider connections with validation
Organization Members
Organization members have read-only access to LLM provider information:
Read-Only Operations
- Get default LLM provider and model - View current default provider settings
- Get configured LLM providers - View available providers and their status
- See which models are currently configured as defaults
No Management Access
- Cannot modify provider configurations
- Cannot change default settings
- Cannot add or remove providers
Prerequisites
Before configuring LLM providers, ensure you have:
- Access to Port AI: Your organization has access to Port AI features
- Provider Accounts: Active accounts with the LLM providers you want to use
- Admin Permissions: Organization administrator role in Port
Some providers require additional setup before you can configure them in Port. See Step 1: Configure provider policies and settings for provider-specific configuration instructions.
Step 1: Configure provider policies and settings (optional)
Some providers require additional setup before you can register them in Port.
- AWS Bedrock: Requires IAM policy and authentication configuration. See the AWS Bedrock setup guide for the full walkthrough.
- OpenAI compatible: You need a reachable base URL for your server (or gateway) and the model names it exposes, plus a Port secret for the key or the values you will send in custom headers. See the OpenAI compatible setup guide (LiteLLM example), then continue with Step 2: store API keys in secrets and Step 3: configure LLM providers.
- OpenAI, Anthropic, Azure OpenAI, Azure Anthropic: No additional setup required. Skip to Step 2: store API keys in secrets.
Step 2: Store API keys in secrets
Before configuring providers, store your API keys in Port's secrets system. The secret names you choose are flexible - you'll reference them in your provider configuration.
- In your Port application, click on your profile picture.
- Click on Credentials.
- Click on the Secrets tab.
- Click on + Secret and add the required secrets for your chosen provider(s):
OpenAI
Required secret:
- API Key secret (e.g., openai-api-key) - Your OpenAI API key
Anthropic
Required secret:
- API Key secret (e.g., anthropic-api-key) - Your Anthropic API key
Azure OpenAI
Required secret:
- API Key secret (e.g., azure-openai-api-key) - Your Azure OpenAI API key
Azure Anthropic
Required secret:
- API Key secret (e.g., azure-anthropic-api-key) - Your Azure Anthropic API key
AWS Bedrock
Option 1: Using access keys (required if not using assume role)
- Access Key ID secret (e.g., aws-bedrock-access-key-id) - Your AWS access key ID
- Secret Access Key secret (e.g., aws-bedrock-secret-access-key) - Your AWS secret access key
Option 2: Using assume role (alternative to access keys)
- External ID secret (e.g., BEDROCK_ROLE_EXTERNAL_ID) - Optional external ID for the trust relationship
See the AWS Bedrock configuration section in Step 1 for configuration details.
OpenAI compatible
We document a LiteLLM-based path in the OpenAI compatible setup guide.
Typical secret:
- API key secret (e.g., LITELLM_API_KEY) - Store the API key the proxy validates, unless you use only custom headers for auth.
Before you call the API:
- Confirm the base URL (including a /v1 prefix if your gateway uses the OpenAI layout) is reachable from Port.
- Register at least one model name (three or more characters) that your server exposes, matching what you will send as model in invocations.
You can choose any names for your secrets. The examples above are suggestions - use names that make sense for your organization. You'll reference these exact names in your provider configuration.
After creating a secret, you will be able to view its value only once. Afterwards, you will be able to delete the secret or edit its value, but not to view it.
For more details on managing secrets, see the Port Secrets documentation.
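If you prefer to script secret creation instead of using the UI, you can call Port's organization secrets API. The sketch below is a hedged example: the route and field names (secretName, secretValue) are assumptions based on the Port Secrets documentation, so confirm them there before use.
# A sketch of creating a secret via the organization secrets API.
# The route and field names are assumptions; verify them in the
# Port Secrets documentation.
curl -s -X POST 'https://api.port.io/v1/organization/secrets' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "secretName": "openai-api-key",
    "secretValue": "<YOUR_OPENAI_API_KEY>"
  }'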
Step 3: Configure LLM providers
Use the Create or connect an LLM provider API to configure your providers. The interactive API reference provides detailed examples and allows you to test the configuration for each provider type (OpenAI, Anthropic, Azure OpenAI, Azure Anthropic, AWS Bedrock, OpenAI compatible).
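As a minimal sketch, registering an OpenAI provider might look like the request below. It mirrors the request shape of the Bedrock example later in this step; the apiKeySecretName value is the secret name you chose in Step 2, and the exact config fields for each provider type are listed in the API reference.
# A minimal sketch for an OpenAI provider, assuming the API key was
# stored as "openai-api-key" in Step 2. Verify the config fields for
# your provider type in the API reference.
curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "enabled": true,
    "config": {
      "apiKeySecretName": "openai-api-key"
    }
  }'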
Once providers are configured, you can view and select default providers and models through the UI (Builder → Organization Settings → AI tab) or continue using the API for all operations.
Model overrides
By default, Port tests all supported models when you register a provider. If your account only has access to a subset of models, the validation will fail for the ones you don't have, and the entire registration will be rejected.
To register a provider with only specific models, use the overrides field to explicitly enable the models you want and disable the rest:
Example: Bedrock with selective model enablement
curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "bedrock",
    "enabled": true,
    "config": {
      "roleArn": "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_ROLE_NAME>",
      "region": "eu-central-1",
      "externalIdSecretName": "BEDROCK_ROLE_EXTERNAL_ID"
    },
    "overrides": {
      "models": {
        "claude-haiku-4-5-20251001": { "enabled": true },
        "claude-sonnet-4-5-20250929": { "enabled": true },
        "claude-sonnet-4-20250514": { "enabled": false },
        "claude-opus-4-5-20251101": { "enabled": false },
        "claude-opus-4-6": { "enabled": false }
      }
    }
  }'
Setting enabled: true on only the models you want is not enough. Models you don't have access to must be explicitly set to enabled: false; otherwise Port will still attempt to validate them and the registration will fail.
Step 4: Validate configuration
Test your provider configuration with connection validation using the Create or connect an LLM provider API with the validate_connection=true parameter. The interactive API reference shows how to test your configuration before saving it.
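As a sketch, you can re-run the registration call with validation enabled and inspect the per-model results. The provider-config.json file below is a placeholder for the request body you built in Step 3; on failure, the response includes details.testedModels (see the FAQ below for the shape).
# Re-run registration with validation enabled and inspect the result.
# provider-config.json holds the same body used in Step 3; on failure
# the response lists per-model results under details.testedModels.
curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d @provider-config.json | jq '.details.testedModels // .'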
Getting your current configuration
You can view your organization's current LLM provider defaults through the UI or API:
Using the UI:
- Go to Builder → Organization Settings → AI tab.
- View all configured providers and models.
- See which provider and model are currently set as defaults.
Using the API: Retrieve your organization's current LLM provider defaults using the Get default LLM provider and model API. The interactive API reference shows the response format and allows you to test the endpoint.
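As an illustrative sketch only, fetching the defaults could look like the call below; the path is an assumption based on the /v1/llm-providers collection used elsewhere in this guide, so check the interactive API reference for the exact route.
# Illustrative only: the exact path for "Get default LLM provider and
# model" is in the interactive API reference; this sketch assumes a
# sub-route of the /v1/llm-providers collection used above.
curl -s 'https://api.port.io/v1/llm-providers/default' \
  -H "Authorization: Bearer $PORT_TOKEN" | jq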
System Defaults
When no organization-specific defaults are configured, Port uses these system defaults:
- Default Provider: port
- Default Model: claude-sonnet-4-5-20250929
Changing Default Providers
You can change your organization's default LLM provider and model through the UI or API:
Using the UI:
- Go to Builder → Organization Settings → AI tab.
- Select your preferred Default LLM provider from the dropdown.
- Select your preferred Default model from the dropdown.
- Click Save to apply your changes.
To add a new custom LLM provider, you still need to use the Create or connect an LLM provider API. Once a provider is configured, it will appear in the UI dropdown for selection.
Using the API: Update your organization's default LLM provider and model using the Change default LLM provider and model API. The interactive API reference provides the request format and response examples.
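As an illustrative sketch only, an update might look like the call below; the method, path, and body fields (provider, model) are assumptions, so take the exact request format from the interactive API reference.
# Illustrative only: the method, path, and body fields here are
# assumptions; see the "Change default LLM provider and model" API
# reference for the real request format.
curl -s -X PUT 'https://api.port.io/v1/llm-providers/default' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "anthropic",
    "model": "claude-sonnet-4-5-20250929"
  }'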
Validation Flow
The system validates provider configurations to ensure they work correctly before saving. This includes checking credentials, testing connections, and verifying model availability.
For detailed information about how validation works during API requests, see Selecting LLM Provider.
Configuration Hierarchy
LLM provider settings follow a hierarchy from organization defaults to system defaults.
For detailed information about how defaults are selected during API requests, see Selecting LLM Provider.
Frequently Asked Questions
I'm getting "LLM provider not found" - what should I do?
This error occurs when trying to use a provider that hasn't been configured:
{
  "ok": false,
  "error": {
    "name": "LLMProviderNotFoundError",
    "message": "LLM provider 'openai' not found for organization"
  }
}
Solution: Create the provider configuration first using the steps above, or contact your organization administrator.
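To see what is already configured before creating anything, a quick check like the sketch below can help; it assumes GET is supported on the same collection used for registration, so verify the route in the API reference.
# A sketch: list configured providers (assuming a GET on the same
# /v1/llm-providers collection used for registration; verify in the
# API reference).
curl -s 'https://api.port.io/v1/llm-providers' \
  -H "Authorization: Bearer $PORT_TOKEN" | jq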
Why is my connection test failing?
Connection test failures usually indicate credential or configuration issues:
{
  "ok": false,
  "error": "llm_provider_model_test_error",
  "message": "Connection test failed for one or more models",
  "details": {
    "testedModels": {
      "claude-sonnet-4-5-20250929": { "isValid": true },
      "claude-sonnet-4-20250514": { "isValid": false, "message": "Failed to process your request, Please contact support." }
    }
  }
}
Common causes:
- Partial model access (most common with Bedrock): Port validates all supported models by default. If your IAM policy only covers some models, the uncovered ones will fail and block the entire registration. Use model overrides to disable models you don't need.
- Incorrect API key or secret: Verify the key is stored correctly in Port secrets. For Bedrock with assume role, ensure the external ID secret value matches your trust policy exactly, including no trailing whitespace.
- Missing model access: For Bedrock, confirm you've completed the one-time Anthropic usage form and that the models show "Access granted" in the AWS Console under Bedrock → Model access.
- Insufficient quota/credits: Check your provider account's billing and usage limits.
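For Bedrock, you can confirm model access from your own credentials with the AWS CLI, independent of Port:
# Lists the Anthropic model IDs your account can see in the region.
# Models missing from this list will fail Port's validation until
# access is granted in the Bedrock console.
aws bedrock list-foundation-models \
  --region eu-central-1 \
  --by-provider anthropic \
  --query 'modelSummaries[].modelId'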
How do I troubleshoot AWS Bedrock connection failures?
If your Bedrock provider registration fails, use these steps to isolate the issue.
1. Check your external ID secret
The secret value in Port must match the sts:ExternalId in your trust policy character-for-character. Even a trailing space will cause a silent failure. Go to ... → Credentials → Secrets and recreate the secret if unsure.
2. Verify Anthropic model access
In the AWS Console, go to Bedrock → Model access and confirm the Claude models you want to use show "Access granted." If not, submit the one-time Anthropic usage form through the Bedrock console.
3. Self-test with AWS CLI
You can test the assume-role and model invocation independently, without Port, to isolate where the failure occurs.
First, temporarily add yourself to the trust policy so you can assume the role. If using SSO, add arn:aws:iam::<your_account_id>:root as a principal (SSO role ARNs have hidden path prefixes that AWS rejects). If using regular IAM, add your IAM user or role ARN directly.
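As a sketch, the trust policy during testing might look like the JSON below, where <PORT_GATEWAY_ROLE_ARN> stands in for the Port gateway role from the Bedrock setup guide and the second principal is the temporary one you add for self-testing:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "<PORT_GATEWAY_ROLE_ARN>",
          "arn:aws:iam::<YOUR_ACCOUNT_ID>:root"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "<YOUR_EXTERNAL_ID_VALUE>" }
      }
    }
  ]
}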
Then run:
# Step 1: Assume the role
eval $(aws sts assume-role \
  --role-arn "arn:aws:iam::<YOUR_ACCOUNT_ID>:role/<YOUR_ROLE_NAME>" \
  --role-session-name "test-byollm" \
  --external-id "<YOUR_EXTERNAL_ID_VALUE>" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text | awk '{print "export AWS_ACCESS_KEY_ID="$1"\nexport AWS_SECRET_ACCESS_KEY="$2"\nexport AWS_SESSION_TOKEN="$3}')

# Step 2: Test model invocation
echo '{"anthropic_version":"bedrock-2023-05-31","max_tokens":256,"messages":[{"role":"user","content":"Hello"}]}' > /tmp/body.json
aws bedrock-runtime invoke-model \
  --model-id us.anthropic.claude-sonnet-4-20250514-v1:0 \
  --region us-east-1 \
  --content-type application/json \
  --body fileb:///tmp/body.json \
  /tmp/output.json && cat /tmp/output.json
Interpreting results:
- If assume-role fails → trust policy issue (wrong principal ARN, wrong external ID, or missing Port gateway role)
- If invoke-model fails → IAM permissions issue or model access not granted
Remove the temporary principal from your trust policy after testing.
I'm getting "apiKeySecretName is required" error
This indicates missing required configuration parameters:
{
  "ok": false,
  "error": {
    "name": "LLMProviderInvalidConfigError",
    "message": "apiKeySecretName is required"
  }
}
Solution: Check the provider-specific configuration requirements in the setup steps above and ensure all required fields are provided.
I don't have permission to manage LLM providers
{
  "name": "llm_provider_manage_forbidden",
  "message": "You do not have permission to manage LLM providers"
}
Solution: Only organization administrators can manage LLM providers. Contact your admin to get the necessary permissions or ask them to configure the providers for you.
How can I debug provider configuration issues?
General debugging steps:
- Test connection: Use the validate_connection=true parameter when creating providers. The response shows per-model validation results.
- Check secrets: Ensure API keys are stored correctly in Port's secrets system. Secret values can only be viewed once after creation.
- Verify permissions: Ensure your provider credentials have the required permissions for the models you're using.
- Check quotas: Monitor usage limits and billing status for external providers.
- Provider status: Check if your external provider service is experiencing outages.
Bedrock-specific debugging:
- IAM policy: Ensure your policy includes both inference-profile and foundation-model ARN entries for each model. See the IAM policy examples above.
- Trust policy: Verify the correct Port gateway role is in your trust policy. See the trust relationship configuration section.
- External ID: Recreate the secret in Port if you suspect a mismatch. Even trailing whitespace causes failures.
- Self-test: Use the AWS CLI to test assume-role and model invocation independently. See the Bedrock troubleshooting FAQ above.
What should I do if a model isn't enabled for my provider?
{
  "ok": false,
  "error": {
    "name": "LLMProviderModelNotEnabledError",
    "message": "Model 'gpt-5' is not enabled for provider 'openai'"
  }
}
Solution: This usually means the model needs to be enabled in your provider configuration. Contact your organization administrator to enable the specific model for your provider.
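If you are the administrator, one way to enable the model is to re-register the provider with an overrides entry for it, following the same overrides pattern as the Bedrock example in Step 3. The sketch below assumes the same request shape; verify against the API reference whether re-registering updates an existing configuration.
# Sketch: enable a specific model via overrides when re-registering
# the provider (same overrides pattern as the Step 3 example; whether
# this updates an existing configuration is an assumption to verify).
curl -s -X POST 'https://api.port.io/v1/llm-providers?validate_connection=true' \
  -H "Authorization: Bearer $PORT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "enabled": true,
    "config": { "apiKeySecretName": "openai-api-key" },
    "overrides": {
      "models": {
        "gpt-5": { "enabled": true }
      }
    }
  }'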