Description
Is your feature request related to a problem?
Yes. Currently, there is no programmatic way to retrieve the list of available LLM models from the LLM Gateway API. Developers must manually maintain a static list of models based on the documentation, which leads to:
- Maintenance burden: Applications must be updated manually when AssemblyAI adds, deprecates, or removes models
- Sync issues: No way to detect when the local model list is outdated
- Missing metadata: No access to dynamic information like model status, pricing changes, or capabilities
Describe the solution you'd like
Add a GET /v1/models endpoint (similar to OpenAI's /v1/models) that returns the list of available LLM Gateway models with their metadata.
Proposed endpoint:
GET https://llm-gateway.assemblyai.com/v1/models
Authorization: <API_KEY>
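For illustration, a minimal sketch of how a client could call the proposed endpoint using only the standard library. The URL and `Authorization` header follow the request format above; `build_models_request` and `list_models` are hypothetical helpers, and the endpoint does not exist yet:

```python
import json
import urllib.request

# Proposed endpoint from this feature request -- not currently live.
GATEWAY_MODELS_URL = "https://llm-gateway.assemblyai.com/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build the GET request for the proposed /v1/models endpoint."""
    return urllib.request.Request(
        GATEWAY_MODELS_URL,
        headers={"Authorization": api_key},
        method="GET",
    )

def list_models(api_key: str) -> list[dict]:
    """Fetch and decode the proposed model list response (would 404 today)."""
    with urllib.request.urlopen(build_models_request(api_key)) as resp:
        return json.load(resp)["data"]
```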
Expected response:
{
  "object": "list",
  "data": [
    {
      "id": "claude-sonnet-4-5-20250929",
      "object": "model",
      "created": 1727568000,
      "owned_by": "anthropic",
      "provider": "anthropic",
      "display_name": "Claude 4.5 Sonnet",
      "description": "Claude's best model for complex agents and coding",
      "pricing": {
        "input_per_million_tokens": 3.00,
        "output_per_million_tokens": 15.00,
        "currency": "USD"
      },
      "capabilities": {
        "max_context_tokens": 200000,
        "max_output_tokens": 16000,
        "supports_tools": true,
        "supports_vision": false
      },
      "status": "active",
      "deprecated_at": null
    },
    {
      "id": "gpt-5.2",
      "object": "model",
      "created": 1735689600,
      "owned_by": "openai",
      "provider": "openai",
      "display_name": "GPT-5.2",
      "description": "OpenAI's best model for coding and agentic tasks",
      "pricing": {
        "input_per_million_tokens": 5.00,
        "output_per_million_tokens": 15.00,
        "currency": "USD"
      },
      "capabilities": {
        "max_context_tokens": 128000,
        "max_output_tokens": 16384,
        "supports_tools": true,
        "supports_vision": true
      },
      "status": "active",
      "deprecated_at": null
    }
    // ... other models
  ]
}
SDK Integration
Add a corresponding method to the Python SDK:
import assemblyai as aai

aai.settings.api_key = "your-api-key"

# List all available models
models = aai.LlmGateway.list_models()
for model in models:
    print(f"{model.id}: {model.display_name}")
    print(f"  Provider: {model.provider}")
    print(f"  Pricing: ${model.pricing.input_per_million_tokens}/M input")
    print(f"  Status: {model.status}")
Use Cases
- Dynamic model selection: Applications can present users with the current list of available models
- Automated sync: Backend systems can automatically detect new models or deprecations
- Pricing transparency: Display accurate, up-to-date pricing to users before API calls
- Capability filtering: Filter models by features (tool support, context length, etc.)
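As a concrete sketch of the capability-filtering use case, assuming the response shape proposed above (field names such as `capabilities.supports_tools` come from this proposal, not from a shipped API):

```python
def filter_models(models: list[dict], *, min_context: int = 0,
                  require_tools: bool = False,
                  require_vision: bool = False) -> list[str]:
    """Return ids of active models meeting the requested capabilities."""
    selected = []
    for m in models:
        caps = m.get("capabilities", {})
        if m.get("status") != "active":
            continue  # skip deprecated/retired models
        if caps.get("max_context_tokens", 0) < min_context:
            continue
        if require_tools and not caps.get("supports_tools", False):
            continue
        if require_vision and not caps.get("supports_vision", False):
            continue
        selected.append(m["id"])
    return selected
```

For example, `filter_models(models, require_vision=True)` would narrow the proposed sample response to the vision-capable models only.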
Alternatives Considered
Currently, we maintain a static list of models in our codebase and manually sync it with the documentation. This is error-prone and requires constant monitoring of the AssemblyAI changelog.
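To make the sync problem concrete, here is a sketch of the drift check this endpoint would enable: compare the locally maintained static list against the ids returned by the gateway (the helper name and model ids are illustrative):

```python
def diff_model_lists(local_ids: set[str], remote_ids: set[str]) -> dict[str, set[str]]:
    """Compare a locally maintained model list against the gateway's list."""
    return {
        "added": remote_ids - local_ids,    # new models we don't know about yet
        "removed": local_ids - remote_ids,  # models we still list but that are gone
    }
```

Without a `/v1/models` endpoint there is no `remote_ids` to diff against, so this check has to be done by hand against the changelog.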
Additional Context
This feature would bring the LLM Gateway API in line with other major LLM providers:
- OpenAI: GET /v1/models
- Anthropic: Model list in API responses
- Google AI: GET /v1/models
Thank you for considering this feature request!