AI-powered medical report generation and analysis system with RAG (Retrieval-Augmented Generation) capabilities.
# Clone and setup
git clone <repository-url>
cd BBB
# Start with unified Docker (single frontend)
make docker-build
make docker-up
# Access: API: http://localhost:8082, Frontend: http://localhost:5173
# OR start with separate frontends (recommended for production)
make docker-build
make docker-up-separate
# Access: API: http://localhost:8082, Patient: http://localhost:3000, Doctor: http://localhost:3001

# Setup (one-time)
make setup
# Run unified development server
make dev
# Access: API: http://localhost:8082, Frontend: http://localhost:5173
# OR run separate frontend services
make api & # Backend API (port 8082)
make ui-patient & # Patient Frontend (port 3000)
make ui-doctor & # Doctor Frontend (port 3001)

- Docker & Docker Compose
- Git
- Python 3.12+
- Node.js 18+
# Setup
make setup # One-time setup
make dev # Run API + UI together
make api # Run API only
make ui-patient # Patient Frontend (port 3000)
make ui-doctor # Doctor Frontend (port 3001)
# Docker
make docker-build # Build Docker images
make docker-up # Start unified Docker services
make docker-up-separate # Start separate Docker services
make docker-down # Stop unified Docker services
make docker-down-separate # Stop separate Docker services
make docker-logs # View logs
make docker-shell # Open container shell
# Quality
make test # Run tests (excludes trio)
make lint # Lint code
make fmt # Format code
# make type # Type checking (disabled)
make precommit # Run all checks
# Utilities
make pdf # Generate sample PDF
make clean # Clean caches
make distclean # Remove all dependencies

BBB/
├── api/                     # FastAPI Backend (Port 8082)
│   ├── core/                # Core functionality
│   ├── routers/             # API endpoints
│   ├── services/            # Business logic
│   ├── middleware/          # Request/response processing
│   └── tests/               # Backend tests
├── src/                     # Next.js Frontend
│   ├── app/                 # App router pages
│   │   ├── page.tsx         # Unified interface (Port 5173)
│   │   ├── patient/         # Patient interface (Port 3000)
│   │   └── doctor/          # Doctor interface (Port 3001)
│   ├── components/          # React components
│   └── lib/                 # Utilities
├── docker/                  # Docker configurations
│   ├── Dockerfile           # Backend API
│   ├── Dockerfile.patient   # Patient frontend
│   ├── Dockerfile.doctor    # Doctor frontend
│   └── docker-compose*.yml  # Service orchestration
├── data/                    # Sample data
├── docs/                    # Documentation
└── scripts/                 # Utility scripts
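The api/ layout above follows the usual FastAPI split between an app entry point, routers, and middleware. A minimal sketch of how the pieces could be wired together (module and route names here are illustrative assumptions, not the project's actual code):

```python
# Hypothetical sketch of wiring the BBB API together (e.g. api/main.py).
# Router module names in the comment below are assumptions for illustration.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(title="BBB Medical API")

# CORS: restrict origins to the frontend ports used in this README
app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://localhost:5173",  # unified frontend
        "http://localhost:3000",  # patient frontend
        "http://localhost:3001",  # doctor frontend
    ],
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/health")
def health() -> dict:
    """Liveness probe used by Docker health checks."""
    return {"status": "ok"}

# Feature routers would be registered under the /api/v1 prefix, e.g.:
# app.include_router(summary.router, prefix="/api/v1")
```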
- Medical Report Generation: AI-powered report creation
- Symptom Analysis: Intelligent symptom interpretation
- Code Generation: Automatic ICD-10/CPT coding
- Evidence Retrieval: RAG-based evidence search
- PDF Export: Professional report formatting
- FastAPI Backend: High-performance Python API
- Next.js Frontend: Modern TypeScript UI with App Router
- RAG Integration: FAISS + Sentence Transformers with query expansion (see the retrieval sketch after this list)
- HIPAA Compliance: PHI masking and security
- Comprehensive Logging: Structured logging with PHI protection
- Error Handling: Global exception management
- Health Checks: Application monitoring
- Docker Support: Multi-stage builds for production
- CI/CD Pipeline: GitHub Actions with automated testing
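The RAG integration noted above pairs Sentence Transformers embeddings with a FAISS index and expands queries before searching. A minimal sketch of that retrieval pattern (the model name, corpus, and expansion strategy are assumptions, not the project's exact setup):

```python
# Minimal RAG retrieval sketch: Sentence Transformers + FAISS.
# The model name and the naive query-expansion list are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Aspirin is contraindicated in patients with active GI bleeding.",
    "Type 2 diabetes is commonly managed with metformin.",
    "Chest pain with ST elevation suggests acute myocardial infarction.",
]

# Embed the corpus and build an inner-product index over normalized vectors
embeddings = model.encode(corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

def retrieve(query: str, expansions: list[str], k: int = 2) -> list[str]:
    """Search with the original query plus simple expansions, merge by best score."""
    scores: dict[int, float] = {}
    for q in [query, *expansions]:
        vec = model.encode([q], normalize_embeddings=True).astype("float32")
        sims, ids = index.search(vec, k)
        for sim, idx in zip(sims[0], ids[0]):
            scores[idx] = max(scores.get(idx, -1.0), float(sim))
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    return [corpus[i] for i in ranked]

print(retrieve("heart attack symptoms", expansions=["myocardial infarction"]))
```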
- PHI Masking: Automatic PII/PHI redaction (see the masking sketch after this list)
- Write Guards: Demo mode protection
- CORS Configuration: Secure cross-origin requests
- Input Validation: Comprehensive data validation
- Security Logging: Audit trail for sensitive operations
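The PHI masking feature above redacts identifiers before text is logged or handed to the LLM. A minimal regex-based sketch of the idea (patterns and placeholder tokens are assumptions; the project's actual rules are likely broader):

```python
# Illustrative PHI masking pass; patterns and tokens are assumptions, not the
# project's actual rule set.
import re

PHI_PATTERNS: list[tuple[str, str]] = [
    (r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]"),                    # US SSN
    (r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]"),            # email addresses
    (r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]"),  # US phone numbers
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),             # dates of birth, visits
]

def mask_phi(text: str) -> str:
    """Replace common PHI patterns with placeholder tokens before logging."""
    for pattern, token in PHI_PATTERNS:
        text = re.sub(pattern, token, text)
    return text

print(mask_phi("Patient DOB 04/12/1980, phone (555) 123-4567, SSN 123-45-6789"))
# -> "Patient DOB [DATE], phone [PHONE], SSN [SSN]"
```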
- POST /api/v1/summary - Generate medical summary
- POST /api/v1/evidence - Retrieve evidence
- POST /api/v1/codes - Generate medical codes
- POST /api/v1/reports - Create medical reports
- GET /api/v1/rag/status - RAG system status

- GET /health - Health check
- GET /api/v1/llm/health - LLM service status
- GET /api/v1/rag/health - RAG service status
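The endpoints above can be exercised from Python; a short example against the health and summary routes (the request body is an assumed shape; see the interactive docs at http://localhost:8082/docs for the authoritative schema):

```python
# Hedged example: calling the API with httpx.
# The payload fields are assumptions; inspect /docs for the real schema.
import httpx

BASE_URL = "http://localhost:8082"

with httpx.Client(base_url=BASE_URL, timeout=30.0) as client:
    # Liveness check first
    print(client.get("/health").json())

    # Generate a summary from free-text symptoms (payload shape is assumed)
    response = client.post(
        "/api/v1/summary",
        json={"symptoms": "persistent cough, mild fever for 3 days"},
    )
    response.raise_for_status()
    print(response.json())
```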
# Run all tests (excludes trio tests)
make test
# Run specific test categories
pytest api/tests/test_summary.py
pytest api/tests/test_rag.py
pytest api/tests/test_llm_cache.py
# Run hardening tests
make test-hardening
# Run LLM tests with mock data
make test-llm

# Build production image
docker build -t bbb-medical:latest .
# Run production container
docker run -p 8082:8082 \
-e OPENAI_API_KEY=your_key \
-e DEMO_ACCESS_CODE=your_code \
bbb-medical:latest

OPENAI_API_KEY=your_openai_key
DEMO_ACCESS_CODE=your_demo_code
DEMO_MODE=true
HIPAA_MODE=false
enable_rag=true
LLM_TEMPERATURE=0.1
LLM_TOP_P=0.9
LLM_SEED=42

- Response Time: < 2s for most operations
- Concurrent Users: Supports multiple simultaneous requests
- Memory Usage: Optimized for production workloads
- Database: SQLite for development, PostgreSQL for production
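The environment variables listed above (OPENAI_API_KEY, DEMO_MODE, the LLM_* knobs) would typically be read once at startup. A minimal sketch using pydantic-settings (the class and field names are illustrative assumptions, not the project's actual settings module):

```python
# Illustrative settings loader for the environment variables above.
# Field names mirror the README; the actual project module may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    openai_api_key: str
    demo_access_code: str = ""
    demo_mode: bool = True
    hipaa_mode: bool = False
    enable_rag: bool = True
    llm_temperature: float = 0.1
    llm_top_p: float = 0.9
    llm_seed: int = 42

settings = Settings()  # reads .env and process environment variables
print(settings.llm_temperature, settings.enable_rag)
```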
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests and linting
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
For questions or issues:
- Check the documentation
- Review existing issues
- Create a new issue with detailed information
- Separate Frontend Ports: Patient (3000) and Doctor (3001) interfaces
- Vercel Deployment: Separate Vercel projects for Patient and Doctor frontends
- LLM Hardening: JSON schema validation, rule engine, normalization
- RAG Quality: Query expansion, MMR diversity, metadata extraction (see the MMR sketch after this list)
- Docker Support: Multi-stage builds, production optimization
- CI/CD Pipeline: GitHub Actions with automated testing
- Test Coverage: Comprehensive test suite with mock data
- Security: Enhanced PHI masking, CORS configuration
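The MMR diversity step mentioned in the RAG Quality item re-ranks retrieved passages so the top results stay relevant without repeating each other. A standalone sketch of maximal marginal relevance over normalized embeddings (the lambda value and the toy inputs are illustrative):

```python
# Maximal Marginal Relevance (MMR) re-ranking over normalized embeddings.
# Inputs and lambda are illustrative; the project's actual parameters may differ.
import numpy as np

def mmr(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3, lam: float = 0.7) -> list[int]:
    """Return indices of k documents balancing query relevance and diversity.

    Assumes query_vec (d,) and doc_vecs (n, d) are L2-normalized, so dot
    products are cosine similarities.
    """
    relevance = doc_vecs @ query_vec  # similarity to the query, shape (n,)
    selected: list[int] = []
    candidates = list(range(len(doc_vecs)))

    while candidates and len(selected) < k:
        if not selected:
            best = candidates[int(np.argmax(relevance[candidates]))]
        else:
            chosen = doc_vecs[selected]                    # (m, d) already picked
            redundancy = doc_vecs[candidates] @ chosen.T   # (c, m) overlap with picks
            scores = lam * relevance[candidates] - (1 - lam) * redundancy.max(axis=1)
            best = candidates[int(np.argmax(scores))]
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with random normalized vectors
rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 8))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
print(mmr(docs[0], docs, k=3))
```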
- ✅ Trio test failures in CI/CD
- ✅ Docker frontend build path issues
- ✅ Python version consistency (3.12)
- ✅ Type checking and linting errors
- ✅ RAG performance optimization
- ✅ Docker Compose command compatibility
- ✅ PostCSS configuration for Tailwind CSS
- ✅ Hardcoded file paths in code generation
- ✅ Test type checking issues
- Language: Python 3.12, TypeScript
- Framework: FastAPI, Next.js 15.5.4
- Database: SQLite (dev), PostgreSQL (prod)
- AI/ML: OpenAI GPT-4, Sentence Transformers, FAISS
- Deployment: Docker, Vercel, GitHub Actions
- Security: HIPAA compliant, PHI masking
- Enhanced RAG capabilities
- Multi-language support
- Advanced analytics
- Mobile application
- Integration with EHR systems