test(llm): add model initialization scenario tests #1544
Conversation
Add extensive test coverage for all model initialization paths and edge cases. Tests cover success scenarios, error handling, exception priority, mode filtering, and E2E integration through the full initialization chain. fix
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Greptile Summary
This PR adds comprehensive test coverage for the LangChain model initialization flow and reorganizes the test directory structure.
Key Changes:
Test Coverage Validates:
Test Organization:
| Filename | Score | Overview |
|---|---|---|
| tests/llm/models/test_langchain_init_scenarios.py | 5/5 | New comprehensive test file (990 lines) with 27 test scenarios covering model initialization paths, exception priority, error recovery, and E2E integration for PR #1516 |
| tests/llm/models/test_langchain_initialization_methods.py | 5/5 | Renamed from tests/llm_providers/; tests individual initialization methods (_init_chat_completion_model, _init_community_chat_models, _init_text_completion_model) |
| tests/llm/models/test_langchain_initializer.py | 5/5 | Renamed from tests/llm_providers/; unit tests for init_langchain_model with mocked initializers to verify call order and exception handling |
| tests/llm/models/test_langchain_special_cases.py | 5/5 | Renamed from tests/llm_providers/; tests special case handlers for gpt-3.5-turbo-instruct and NVIDIA provider initialization |
| tests/llm/providers/test_langchain_nvidia_ai_endpoints_patch.py | 5/5 | Renamed from tests/llm_providers/; tests ChatNVIDIA streaming decorator, async generation, and LLMRails integration |
| tests/llm/test_langchain_integration.py | 5/5 | Renamed from tests/llm_providers/; integration tests for LangChain model initialization with OpenAI |
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test as Test Suite
    participant init as init_langchain_model()
    participant special as _handle_model_special_cases
    participant chat as _init_chat_completion_model
    participant community as _init_community_chat_models
    participant text as _init_text_completion_model
    Test->>init: init_langchain_model(model, provider, mode, kwargs)
    alt Chat Mode
        init->>special: Try special cases first
        special-->>init: None (no match) or Model
        alt Special returns None
            init->>chat: Try chat completion
            chat-->>init: Model or raises ValueError
            alt Chat returns None/error
                init->>community: Try community chat
                community-->>init: Model or raises ValueError
                alt Community returns None/error
                    init->>text: Try text completion (fallback)
                    text-->>init: Model or None
                end
            end
        end
    else Text Mode
        init->>special: Try special cases
        special-->>init: None (no match) or Model
        alt Special returns None
            Note over init,chat: Skip chat-only initializers
            init->>text: Try text completion
            text-->>init: Model or None
        end
    end
    alt All initializers fail
        init-->>Test: ModelInitializationError<br/>(ImportError prioritized, else last exception)
    else Success
        init-->>Test: Return initialized model
    end
```
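The priority rule at the end of the diagram (an ImportError is surfaced first, otherwise the last exception wins) is the behavior most of the new tests pin down. The sketch below only illustrates that rule; `try_initializers` and its `initializers` argument are hypothetical names, not the actual `langchain_initializer.py` implementation.

```python
# Illustrative sketch of the fallback/error-priority rule shown above.
# `try_initializers` and `initializers` are hypothetical names; the real logic
# lives in nemoguardrails/llm/models/langchain_initializer.py.
from typing import Any, Callable, Optional, Sequence


class ModelInitializationError(Exception):
    """Raised when every applicable initializer fails or returns None."""


def try_initializers(
    model: str,
    provider: str,
    initializers: Sequence[Callable[[str, str], Optional[Any]]],
) -> Any:
    import_error: Optional[ImportError] = None
    last_error: Optional[Exception] = None

    for init_fn in initializers:
        try:
            result = init_fn(model, provider)
            if result is not None:
                return result  # first initializer that yields a model wins
        except ImportError as exc:
            import_error = exc  # missing optional dependency: highest priority
        except Exception as exc:  # ValueError, RuntimeError, ...
            last_error = exc  # later failures overwrite earlier ones

    # Surface the most meaningful failure instead of a generic
    # "Could not find LLM provider" message.
    raise ModelInitializationError(
        f"Failed to initialize model '{model}' for provider '{provider}'"
    ) from (import_error or last_error)
```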
10 files reviewed, no comments
Description
Adds comprehensive test coverage for the LangChain model initialization flow in
nemoguardrails/llm/models/langchain_initializer.py. This test suite validates: `None` providers, empty model names, invalid modes.
Test Coverage
`MockProvider` factory pattern for clean, reusable test setup
Validation: Tests Fail Before Fix, Pass After Fix
8 tests that fail without PR #1516 (proving the fix works):
These tests verify that meaningful errors are preserved instead of being masked by "Could not find LLM provider" RuntimeErrors:

| Test | Error before fix | Error after fix |
|---|---|---|
| `TestSingleErrorScenarios::test_single_error_preserved[chat_error_preserved]` | "Could not find LLM provider '_test_err'" | "Invalid API key" |
| `TestSingleErrorScenarios::test_single_error_preserved[community_error_preserved]` | "Could not find LLM provider '_test_err'" | "Rate limit exceeded" |
| `TestMultipleErrorPriority::test_exception_priority[valueerror_last_wins]` | "Could not find LLM provider '_test_priority'" | "Error B" (last ValueError wins) |
| `TestMultipleErrorPriority::test_exception_priority[different_types_last_wins]` | "Could not find LLM provider '_test_priority'" | "Error B" (last exception wins) |
| `TestE2EIntegration::test_e2e_meaningful_error_from_config` | "Could not find LLM provider '_e2e_test'" | "Invalid API key" |
| `TestMultipleErrorScenarios::test_all_initializers_raise_valueerror_last_one_wins` | "Could not find LLM provider 'fake_provider'" | "Community chat error" |
| `TestMultipleErrorScenarios::test_chat_and_community_both_fail_community_wins` | "Could not find LLM provider '_test_chat_community_fail'" | "rate limit exceeded" |
| `TestMultipleErrorScenarios::test_runtimeerror_vs_valueerror_last_wins` | "Could not find LLM provider '_test_runtime_vs_value'" | "ValueError from community" |
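For reference, an error-preservation test in the spirit of the table above might look roughly like the following. This is a hedged sketch, not the actual contents of tests/llm/models/test_langchain_init_scenarios.py: the patch target, model name, provider name, and assertion details are assumptions based on the names in the diagram, and the real `MockProvider` factory may be structured differently.

```python
# Hypothetical sketch of one error-preservation scenario; the real tests are in
# tests/llm/models/test_langchain_init_scenarios.py and may differ in detail.
from unittest.mock import patch

import pytest

from nemoguardrails.llm.models.langchain_initializer import (
    ModelInitializationError,
    init_langchain_model,
)


def make_failing_initializer(message: str):
    """Factory for an initializer stub that fails with a specific message."""

    def _init(*args, **kwargs):
        raise ValueError(message)

    return _init


def test_chat_error_preserved():
    # Patch target, model name, and provider name are illustrative assumptions.
    with patch(
        "nemoguardrails.llm.models.langchain_initializer._init_chat_completion_model",
        make_failing_initializer("Invalid API key"),
    ), pytest.raises(ModelInitializationError) as exc_info:
        init_langchain_model("gpt-4", "_test_err", "chat", kwargs={})

    # Before PR #1516 the error was masked by "Could not find LLM provider";
    # after the fix the underlying "Invalid API key" message is preserved.
    assert "Invalid API key" in str(exc_info.value.__cause__ or exc_info.value)
```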
TestSingleErrorScenarios::test_single_error_preserved[chat_error_preserved]"Could not find LLM provider '_test_err'""Invalid API key"TestSingleErrorScenarios::test_single_error_preserved[community_error_preserved]"Could not find LLM provider '_test_err'""Rate limit exceeded"TestMultipleErrorPriority::test_exception_priority[valueerror_last_wins]"Could not find LLM provider '_test_priority'""Error B"(last ValueError wins)TestMultipleErrorPriority::test_exception_priority[different_types_last_wins]"Could not find LLM provider '_test_priority'""Error B"(last exception wins)TestE2EIntegration::test_e2e_meaningful_error_from_config"Could not find LLM provider '_e2e_test'""Invalid API key"TestMultipleErrorScenarios::test_all_initializers_raise_valueerror_last_one_wins"Could not find LLM provider 'fake_provider'""Community chat error"TestMultipleErrorScenarios::test_chat_and_community_both_fail_community_wins"Could not find LLM provider '_test_chat_community_fail'""rate limit exceeded"TestMultipleErrorScenarios::test_runtimeerror_vs_valueerror_last_wins"Could not find LLM provider '_test_runtime_vs_value'""ValueError from community"