Setup & Configuration
Port's AI offerings are currently in closed beta and will be gradually rolled out to users by the end of 2025.
This guide covers all technical details for setting up and configuring LLM providers, including permissions, changing defaults, validation flow, and troubleshooting common issues.
Permissions & Access Control
Managing LLM provider settings requires organization administrator permissions. Only admins can modify default providers or add new provider configurations.
Administrators can perform all LLM provider management operations:
Configuration Operations
- Get default LLM provider and model - View current default provider settings
- Change default LLM provider and model - Update organization default providers
- Create or connect an LLM provider - Create and configure new LLM provider connections
- Get a specific provider configuration - View existing provider configurations
- Delete a specific provider configuration - Delete provider configurations
Management Capabilities
- Set organization-wide default providers and models
- Configure provider-specific settings and credentials
- Manage provider access and permissions
- Test provider connections with validation
Organization members have read-only access to LLM provider information:
Read-Only Operations
- Get default LLM provider and model - View current default provider settings
- Get configured LLM providers - View available providers and their status
- See which models are currently configured as defaults
No Management Access
- Cannot modify provider configurations
- Cannot change default settings
- Cannot add or remove providers
Prerequisites
Before configuring LLM providers, ensure you have:
- Access to Port AI: Your organization has access to the Port AI features
- Provider Accounts: Active accounts with the LLM providers you want to use
- Admin Permissions: Organization administrator role in Port
Step 1: Store API Keys in Secrets
Before configuring providers, store your API keys in Port's secrets system. The secret names you choose are flexible - you'll reference them in your provider configuration.
- Click on the `...` button in the top right corner of your Port application
- Click on `Credentials`
- Click on the `Secrets` tab
- Click on `+ Secret` and add the required secrets for your chosen provider(s):
- OpenAI - Required secret: an API key secret (e.g., `openai-api-key`) containing your OpenAI API key
- Anthropic - Required secret: an API key secret (e.g., `anthropic-api-key`) containing your Anthropic API key
- Azure OpenAI - Required secret: an API key secret (e.g., `azure-openai-api-key`) containing your Azure OpenAI API key
- AWS Bedrock - Required secrets: an Access Key ID secret (e.g., `aws-bedrock-access-key-id`) containing your AWS access key ID, and a Secret Access Key secret (e.g., `aws-bedrock-secret-access-key`) containing your AWS secret access key
You can choose any names for your secrets. The examples above are suggestions - use names that make sense for your organization. You'll reference these exact names in your provider configuration.
After creating a secret, you will be able to view its value only once. Afterwards, you will be able to delete the secret or edit its value, but not to view it.
For more details on managing secrets, see the Port Secrets documentation.
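For illustration, a secret created this way is simply a name/value pair. A request body for creating one through Port's secrets API might look like the following sketch (the field names here are assumptions, not taken from this guide - check the Port Secrets documentation for the exact format):

```json
{
  "secretName": "openai-api-key",
  "secretValue": "<your-openai-api-key>"
}
```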
Step 2: Configure LLM Providers
Use the Create or connect an LLM provider API to configure your providers. The interactive API reference provides detailed examples and allows you to test the configuration for each provider type (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock).
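As a hedged sketch of what a provider configuration payload might contain - only `apiKeySecretName` is confirmed by the error messages later in this guide; the other field names are illustrative assumptions, so consult the interactive API reference for the authoritative schema:

```json
{
  "provider": "openai",
  "config": {
    "apiKeySecretName": "openai-api-key"
  }
}
```

The `apiKeySecretName` value must match the secret name you created in Step 1 exactly.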
Step 3: Validate Configuration
Test your provider configuration with connection validation using the Create or connect an LLM provider API with the `validate_connection=true` parameter. The interactive API reference shows how to test your configuration before saving it.
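Building the validation request can be sketched in a few lines. Note that the base URL and endpoint path below are assumptions for illustration - only the `validate_connection=true` query parameter comes from this guide:

```python
from urllib.parse import urlencode

# Hypothetical endpoint path - only the validate_connection parameter
# is documented in this guide; the base URL and path are assumptions.
BASE_URL = "https://api.getport.io/v1"

def build_provider_request_url(validate_connection: bool) -> str:
    """Build the URL for creating/connecting an LLM provider,
    optionally asking Port to validate the connection before saving."""
    query = urlencode({"validate_connection": str(validate_connection).lower()})
    return f"{BASE_URL}/llm-providers?{query}"

print(build_provider_request_url(True))
# → https://api.getport.io/v1/llm-providers?validate_connection=true
```

Validating before saving surfaces credential problems immediately instead of at first use.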
Getting Your Current Configuration
Retrieve your organization's current LLM provider defaults using the Get default LLM provider and model API. The interactive API reference shows the response format and allows you to test the endpoint.
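For illustration, a response might look like the following sketch - the exact shape is an assumption, but the values shown are the system defaults described below:

```json
{
  "ok": true,
  "provider": "port",
  "model": "claude-sonnet-4-20250514"
}
```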
System Defaults
When no organization-specific defaults are configured, Port uses these system defaults:
- Default Provider: `port`
- Default Model: `claude-sonnet-4-20250514`
Changing Default Providers
Update your organization's default LLM provider and model using the Change default LLM provider and model API. The interactive API reference provides the request format and response examples.
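A request body for this might be as simple as the following sketch (the field names are assumptions - consult the API reference for the exact schema; the provider and model must already be configured for your organization):

```json
{
  "provider": "openai",
  "model": "gpt-5"
}
```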
Validation Flow
The system validates provider configurations to ensure they work correctly before saving. This includes checking credentials, testing connections, and verifying model availability.
For detailed information about how validation works during API requests, see Selecting LLM Provider.
Configuration Hierarchy
LLM provider settings follow a hierarchy from organization defaults to system defaults.
For detailed information about how defaults are selected during API requests, see Selecting LLM Provider.
Frequently Asked Questions
I'm getting "LLM provider not found" - what should I do?
This error occurs when trying to use a provider that hasn't been configured:
```json
{
  "ok": false,
  "error": {
    "name": "LLMProviderNotFoundError",
    "message": "LLM provider 'openai' not found for organization"
  }
}
```
Solution: Create the provider configuration first using the steps above, or contact your organization administrator.
Why is my connection test failing?
Connection test failures usually indicate credential or configuration issues:
```json
{
  "ok": false,
  "error": {
    "name": "LLMProviderModelTestError",
    "message": "Connection test failed for provider 'openai'",
    "details": {
      "testedModels": {
        "gpt-5": { "isValid": false, "message": "Invalid API key" }
      }
    }
  }
}
```
Solution:
- Verify your API key is correct and stored properly in secrets
- Ensure the API key has the required permissions for your provider
- Check if your provider account has sufficient quota/credits
I'm getting "apiKeySecretName is required" error
This indicates missing required configuration parameters:
```json
{
  "ok": false,
  "error": {
    "name": "LLMProviderInvalidConfigError",
    "message": "apiKeySecretName is required"
  }
}
```
Solution: Check the provider-specific configuration requirements in the setup steps above and ensure all required fields are provided.
I don't have permission to manage LLM providers
```json
{
  "name": "llm_provider_manage_forbidden",
  "message": "You do not have permission to manage LLM providers"
}
```
Solution: Only organization administrators can manage LLM providers. Contact your admin to get the necessary permissions or ask them to configure the providers for you.
How can I debug provider configuration issues?
Here are useful debugging tips:
- Check Logs: Monitor AI invocation logs for detailed error messages
- Validate Secrets: Ensure API keys are stored correctly in Port's secrets system
- Test Connection: Use the `validate_connection=true` parameter when creating providers
- Verify Permissions: Ensure your provider API keys have the required permissions
- Check Quotas: Monitor usage limits and billing status for external providers
- Provider Status: Check if your external provider service is experiencing outages
What should I do if a model isn't enabled for my provider?
```json
{
  "ok": false,
  "error": {
    "name": "LLMProviderModelNotEnabledError",
    "message": "Model 'gpt-5' is not enabled for provider 'openai'"
  }
}
```
Solution: This usually means the model needs to be enabled in your provider configuration. Contact your organization administrator to enable the specific model for your provider.