feat: Inference Migration Using FastAPI router factory #4445
Conversation
Hi @JayDi11a! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
This pull request has merge conflicts that must be resolved before it can be merged. @JayDi11a please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
This pull request has merge conflicts that must be resolved before it can be merged. @JayDi11a please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Following PR llamastack#4191 pattern, migrate Inference API from @webmethod decorators to FastAPI router-based architecture. - Created inference package (api.py, models.py, fastapi_routes.py, __init__.py) - Registered inference router in fastapi_router_registry - Updated server.py and routes.py for inference route processing - Regenerated OpenAPI specs (49 paths, 68 operations) - Updated stainless config for /v1/inference/rerank endpoint - Fixed Chunk model: optional chunk_id for backward compatibility - Re-exported InterleavedContent in inference/__init__.py Inference endpoints (6 routes): - POST /v1/chat/completions (streaming support) - GET /v1/chat/completions (list) - GET /v1/chat/completions/{completion_id} (retrieve) - POST /v1/completions (streaming support) - POST /v1/embeddings - POST /v1/inference/rerank
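For orientation, the sketch below shows the general shape of a FastAPI router factory of the kind this migration follows. The factory name, provider protocol, and request model here are illustrative assumptions, not the PR's actual code:

```python
# Illustrative sketch only: create_inference_router, InferenceProvider, and the
# request model are assumed names, not the PR's actual implementation.
from typing import Any, Protocol

from fastapi import APIRouter
from pydantic import BaseModel


class EmbeddingsRequest(BaseModel):
    model: str
    input: list[str]


class InferenceProvider(Protocol):
    async def openai_embeddings(self, request: EmbeddingsRequest) -> dict[str, Any]: ...
    async def get_chat_completion(self, completion_id: str) -> dict[str, Any]: ...


def create_inference_router(provider: InferenceProvider) -> APIRouter:
    """Build the APIRouter that exposes inference endpoints for a provider."""
    router = APIRouter()

    @router.post("/v1/embeddings")
    async def create_embeddings(request: EmbeddingsRequest) -> dict[str, Any]:
        # Route handlers stay thin and delegate to the provider implementation.
        return await provider.openai_embeddings(request)

    @router.get("/v1/chat/completions/{completion_id}")
    async def get_chat_completion(completion_id: str) -> dict[str, Any]:
        return await provider.get_chat_completion(completion_id)

    return router
```

A factory like this would then be registered with the application (e.g. `app.include_router(create_inference_router(provider))`), which is roughly what "registered inference router in fastapi_router_registry" refers to.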
Update demo script to use the newer LlamaStackClient and Agent API instead of the manual OpenAI client approach. Changes: - Switch from OpenAI client to LlamaStackClient - Use Agent API for simplified RAG implementation - Auto-select models with preference for Ollama (no API key needed) - Reduce code complexity from ~136 to ~102 lines - Remove manual RAG implementation in favor of agentic approach This provides a cleaner, more modern example for users getting started with Llama Stack.
Simplify the Ollama model selection logic in the detailed tutorial. Changes: - Replace complex custom_metadata filtering with simple ID check - Use direct 'ollama' in model ID check instead of metadata lookup - Makes code more concise and easier to understand This aligns with the simplified approach used in the updated demo_script.py.
Update the agent examples to use the latest API methods. Changes: - Simplify model selection (already applied in previous commit) - Use response.output_text instead of response.output_message.content - Use direct print(event) instead of event.print() for streaming This aligns the tutorial with the current Agent API implementation.
Modernize the RAG agent example to use the latest Vector Stores API. Changes: - Replace deprecated VectorDB API with Vector Stores API - Use file upload and vector_stores.create() instead of rag_tool.insert() - Download files via requests and upload to Llama Stack - Update to use file_search tool type with vector_store_ids - Simplify model selection with Ollama preference - Improve logging and user feedback - Update event logging to handle both old and new API - Add note about known server routing issues This provides a more accurate example using current Llama Stack APIs.
Fix conformance test failures by explicitly defining both application/json and text/event-stream media types in the 200 responses for streaming endpoints (/chat/completions and /completions). Changes: - Updated fastapi_routes.py to include explicit response schemas for both media types - Regenerated OpenAPI specs with proper 200 responses - Regenerated Stainless config This fixes the "response-success-status-removed" conformance errors while maintaining the dynamic streaming/non-streaming behavior. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
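As a rough illustration (not the PR's exact code), declaring both media types on the 200 response in FastAPI looks something like this; the route path comes from the PR, while the handler body and response description are placeholders:

```python
# Sketch of declaring both application/json and text/event-stream on the 200
# response so the generated OpenAPI spec keeps both shapes.
from fastapi import APIRouter

router = APIRouter()


@router.post(
    "/v1/chat/completions",
    responses={
        200: {
            "description": "Chat completion (JSON) or a stream of SSE chunks.",
            "content": {
                "application/json": {},     # non-streaming responses
                "text/event-stream": {},    # streaming responses
            },
        }
    },
)
async def openai_chat_completion() -> dict:
    # The real handler would dispatch on the request's stream flag; omitted here.
    return {}
```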
BREAKING CHANGE: Chunk.chunk_id field was made optional in commit 53af013 to support backward compatibility with legacy data. This changes the OpenAPI schema but maintains API compatibility.
This pull request has merge conflicts that must be resolved before it can be merged. @JayDi11a please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Thanks a lot for your contribution, you didn't pick the easiest one; please see the test failures.
Why are we introducing so many changes to the demo_script?
I wrote it up in the forum-llama-stack-core channel. But here is the list of what needed to be done to fix the demo script:
Vector DB API Change (demo_script.py:21-23)
1. Removed client.vector_dbs.register() - this API no longer exists
2. The vector store doesn't need to be explicitly registered; it's handled automatically

RAG Tool Insert API Mismatch (demo_script.py:35-42)
Changed from client.tool_runtime.rag_tool.insert() to using client.post() directly:
1. The client library expects vector_db_id, but the server requires vector_store_id in the request body
2. Used a direct POST to work around this client/server API mismatch (which exposes a deeper issue with client and server maintenance)

Tool Configuration for Agent (demo_script.py:48-53)
Changed from builtin::rag/knowledge_search to the correct format (see the sketch after this list):
1. Used type: "file_search" with the vector_store_ids parameter
2. This matches the OpenAI-compatible API format

Event Logger Output (demo_script.py:69-73)
1. Added a check for log objects that might not have a .print() method
2. Falls back to a regular print() for string outputs

These fixes still seem to hold for client v0.4.0 as well.
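Put together, the reworked demo flow looks roughly like the sketch below. The endpoint path, payload fields, and client calls are plausible reconstructions of the changes described above, not the exact demo_script.py contents:

```python
# Rough reconstruction of the demo-script changes listed above; the POST path
# and payload fields are assumptions for illustration.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# 1. No client.vector_dbs.register(): create a vector store directly instead.
vector_store = client.vector_stores.create(name="demo_docs")

# 2. Direct POST to work around the vector_db_id / vector_store_id mismatch
#    (hypothetical path and body shape).
client.post(
    "/v1/tool-runtime/rag-tool/insert",
    body={"vector_store_id": vector_store.id, "documents": [], "chunk_size_in_tokens": 512},
    cast_to=dict,
)

# 3. OpenAI-compatible tool configuration for the agent.
tools = [{"type": "file_search", "vector_store_ids": [vector_store.id]}]

# 4. Event logging that tolerates objects without a .print() method.
def log_event(event):
    if hasattr(event, "print"):
        event.print()
    else:
        print(event)
```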
| """Request model for listing chat completions.""" | ||
|
|
||
| after: str | None = Field(default=None, description="The ID of the last chat completion to return.") | ||
| limit: int | None = Field(default=20, description="The maximum number of chat completions to return.") |
missing ge=1 constraint
class GetChatCompletionRequest(BaseModel):
    """Request model for getting a chat completion."""

    completion_id: str = Field(..., description="ID of the chat completion.")
missing min_length=1
        ...,
        description="The search query to rank items against. Can be a string, text content part, or image content part. The input must not exceed the model's max input token length.",
    )
    items: list[str | OpenAIChatCompletionContentPartTextParam | OpenAIChatCompletionContentPartImageParam] = Field(
missing min_length=1 for non-empty list
| description="List of items to rerank. Each item can be a string, text content part, or image content part. Each input must not exceed the model's max input token length.", | ||
| ) | ||
| max_num_results: int | None = Field( | ||
| default=None, description="Maximum number of results to return. Default: returns all." |
missing ge=1 when provided
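Taken together, the constraints suggested in these review comments would look roughly like this; the field names come from the snippets above, and the content-part union on items is simplified to plain strings for brevity:

```python
# Sketch consolidating the suggested validation constraints (ge=1, min_length=1).
from pydantic import BaseModel, Field


class ListChatCompletionsRequest(BaseModel):
    after: str | None = Field(default=None, description="The ID of the last chat completion to return.")
    limit: int | None = Field(default=20, ge=1, description="The maximum number of chat completions to return.")


class GetChatCompletionRequest(BaseModel):
    completion_id: str = Field(..., min_length=1, description="ID of the chat completion.")


class RerankRequest(BaseModel):
    query: str = Field(..., description="The search query to rank items against.")
    items: list[str] = Field(..., min_length=1, description="Non-empty list of items to rerank.")
    max_num_results: int | None = Field(
        default=None, ge=1, description="Maximum number of results to return. Default: returns all."
    )
```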
        :returns: RerankResponse with indices sorted by relevance score (descending).
        """
        raise NotImplementedError("Reranking is not implemented")
        return  # this is so mypy's safe-super rule will consider the method concrete
If we chose to appease mypy, we might as well do it for the methods at lines 100-119 as well (that is, add the return at the end of each method).
The new inference/models.py is 15x larger than any other API's models file. This suggests:
- It may be duplicating types that already exist elsewhere
- It should potentially be split into multiple files
- Some types might belong in llama_stack_api/common
Comparing api.py files:
| API | Lines | Classes | Pattern |
|---|---|---|---|
| batches | 54 | 1 Protocol | All ... (pure protocol) |
| models | 39 | 1 Protocol | All ... (pure protocol) |
| inference | 120 | 2 classes | Mixed: ... + NotImplementedError |
The inference/api.py:
- Has two classes (InferenceProvider + Inference) instead of one Protocol
- Uses mixed patterns - some methods use ..., others raise NotImplementedError
- The Inference class extends a Protocol (unusual pattern)
Not saying this is necessarily bad, but worth looking into again
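For reference, the two styles being compared look roughly like this (simplified, hypothetical signatures):

```python
# Simplified, hypothetical signatures illustrating the two patterns compared above.
from typing import Protocol


class Batches(Protocol):
    # Pure-protocol style: every method body is just `...`.
    async def list_batches(self) -> list[str]: ...


class Inference(Protocol):
    # Mixed style: some methods are plain protocol stubs...
    async def openai_completion(self, prompt: str) -> str: ...

    # ...while others ship a default body that raises NotImplementedError.
    async def rerank(self, query: str, items: list[str]) -> list[int]:
        raise NotImplementedError("Reranking is not implemented")
```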
…dpoints

Updated the InferenceRouter.list_chat_completions and InferenceRouter.get_chat_completion methods to accept the new request object parameters (ListChatCompletionsRequest and GetChatCompletionRequest) instead of individual parameters. This fixes a SQLite parameter binding error where Pydantic request objects were being passed directly to SQL queries. The router now unpacks request object fields when calling the underlying store.

Fixes the following error:
sqlite3.ProgrammingError: Error binding parameter 1: type 'ListChatCompletionsRequest' is not supported

🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
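A minimal sketch of the unpacking this commit describes, assuming a store with a keyword-argument interface (class and method names are illustrative):

```python
# Illustrative only: the point is that the store receives plain field values
# that SQLite can bind, not the Pydantic request object itself.
from pydantic import BaseModel, Field


class ListChatCompletionsRequest(BaseModel):
    after: str | None = Field(default=None)
    limit: int | None = Field(default=20)


class InferenceRouterSketch:
    def __init__(self, store):
        self.store = store

    async def list_chat_completions(self, request: ListChatCompletionsRequest):
        # Unpack the request object before handing values to the SQL-backed store.
        return await self.store.list_chat_completions(
            after=request.after,
            limit=request.limit,
        )
```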
Added missing exports for GetChatCompletionRequest and ListChatCompletionsRequest to llama_stack_api/__init__.py. These request types are needed by InferenceRouter and must be importable from the top-level llama_stack_api package. Fixes import error: ImportError: cannot import name 'GetChatCompletionRequest' from 'llama_stack_api' 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
FastAPI routers need to explicitly return StreamingResponse for
streaming endpoints. The old webmethod system had middleware that
automatically wrapped AsyncIterator results in SSE format, but the
new router system requires explicit handling.
Changes:
- Added _create_sse_event() helper to format data as SSE events
- Added _sse_generator() to convert async generators to SSE format
- Updated openai_chat_completion() to return StreamingResponse when stream=True
- Updated openai_completion() to return StreamingResponse when stream=True
Fixes streaming errors:
TypeError("'async_generator' object is not iterable")
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
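A rough sketch of the SSE handling this commit describes; the helper names mirror the commit message, while the bodies and signatures are assumptions:

```python
# Sketch of explicit SSE handling in the router; helper names follow the commit
# message above, bodies and exact signatures are assumptions.
import json
from collections.abc import AsyncIterator
from typing import Any

from fastapi.responses import StreamingResponse


def _create_sse_event(data: Any) -> str:
    """Format a single chunk as a Server-Sent Events message."""
    payload = data.model_dump_json() if hasattr(data, "model_dump_json") else json.dumps(data)
    return f"data: {payload}\n\n"


async def _sse_generator(stream: AsyncIterator[Any]):
    """Convert an async generator of chunks into SSE-formatted strings."""
    async for chunk in stream:
        yield _create_sse_event(chunk)
    yield "data: [DONE]\n\n"


async def openai_chat_completion(provider: Any, request: Any):
    result = await provider.openai_chat_completion(request)
    if getattr(request, "stream", False):
        # Streaming: wrap the async iterator explicitly in a StreamingResponse.
        return StreamingResponse(_sse_generator(result), media_type="text/event-stream")
    return result
```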
Thank you @leseb. I am honored to help. Then it sounds like I picked the right one. I had some silly TypeErrors handling streaming, but I think those are fixed and I am hopefully getting closer.
Upgrade llama-stack-client from >=0.3.0 to >=0.4.0 to ensure compatibility with recently added APIs (admin, vector_io parameter changes, beta namespace, and tool_runtime authorization parameter). This resolves test failures caused by client SDK being out of sync with server API changes from router migrations. 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
This pull request has merge conflicts that must be resolved before it can be merged. @JayDi11a please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
What does this PR do?
This commit builds on top of the work done in PR #4191, which has APIs register FastAPI router factories and migrates away from the legacy @webmethod decorator system. The implementation focuses on migrating the Inference API, updating the server and OpenAPI generation while maintaining the existing routing and inspection behavior.
The Inference API has been migrated to the same new API package structure as the Batches migration, i.e., protocol definitions and models live in llama_stack_api/inference. The FastAPI router implementation lives in llama_stack/core/server/routers/inference, maintaining the established separation of API contracts and server routing logic.
The nuances of migrating the Inference API include fixing model chunking (chunk_ids aren't uniform across 100+ models), adding a sync between chunk IDs and metadata, and a broader effort toward backward compatibility, including content types. Last but not least, the Stainless config needed to be updated for the /v1/inference/rerank path.
This implementation represents an incremental migration of the Inference API to the router system while maintaining full backward compatibility with the existing webmethod-based APIs.
Test Plan
Run this from the command line and the same routes should be upheld:
curl http://localhost:8321/v1/inspect/routes | jq '.data[] | select(.route | contains("inference") or contains("chat") or contains("completion") or contains("embedding"))'

Since the inference unit tests only import types (not routing logic) and the types are re-imported, unit testing didn't need modification. Therefore:
Lastly, after the completion of each migration, the server gets tested against the demo_script.py in the getting started documentation, as well as the inference.py, agent.py, and rag_agent.py examples created from the detailed_tutorial.mdx documentation.