The FLARE language provides a powerful framework for recursive AI prompting. By letting developers specify models, control response variability, and apply advanced post-processing functions, it helps them extract the most value from their AI tools. Querying multiple models yields diverse, more reliable responses, a "wisdom of the (LLM) crowd" effect, while post-processing functions that summarize, combine, or contrast those responses allow for nuanced and comprehensive outputs. This flexibility makes FLARE a versatile and valuable language for AI tool development.
- 🤖 Multi-Model Support - Query OpenAI, Mistral, and other models simultaneously
- 🧠 Intelligent Post-Processing - Vote, summarize, combine, and analyze responses
- ⚡ High Performance - Parallel model queries with automatic fallbacks
- 🏗️ Atomic Architecture - Maintainable, testable, and scalable codebase
- 📊 Real-Time Monitoring - Health checks, diagnostics, and API insights
- 🔧 Easy Integration - RESTful API with comprehensive error handling
```bash
# Clone the repository
git clone <your-repository-url>
cd FLARE

# Install dependencies
npm install

# Optional: set up environment variables (works without an API key!)
cp .env.example .env

# Start the FLARE server
npm start

# Server runs on http://localhost:8080
# API available at http://localhost:8080/api/info
# Health check at http://localhost:8080/health

# Test with curl
curl -X POST http://localhost:8080/process-flare \
  -H "Content-Type: application/json" \
  -d '{"command": "{ flare model:mistral temp:0.7 `Write a haiku about AI` }"}'

# Or use the web interface at http://localhost:8080
```

FLARE works out of the box with fallback API access. For production use:

```bash
# .env file
POLLINATIONS_API_KEY=your_api_key_here  # Optional - a fallback is provided
PORT=8080                               # Server port
```

FLARE uses an intuitive curly-brace syntax to define AI orchestration commands:
```text
{ flare model:model_name temp:temperature post_processing `your prompt here` }
```

```text
// Single model query
{ flare model:mistral `Explain quantum computing` }

// Multiple models with voting
{ flare model:openai,mistral vote `What is the best programming language?` }

// Temperature control (0.0 = deterministic, 1.0+ = creative)
{ flare model:mistral temp:0.1 `Count from 1 to 5` }
{ flare model:mistral temp:0.9 `Write a creative story opening` }
```

| Command | Description | Example |
|---|---|---|
| `sum` | Summarize multiple responses | ``{ flare model:openai,mistral sum `Explain AI` }`` |
| `vote` | Select the best response | ``{ flare model:openai,mistral vote `Rate JavaScript 1-10` }`` |
| `comb` | Combine all responses | ``{ flare model:openai,mistral comb `List AI benefits` }`` |
| `diff` | Compare responses | ``{ flare model:openai,mistral diff `React vs Vue` }`` |
| `exp` | Expand responses | ``{ flare model:mistral exp `Explain machine learning` }`` |
| `filter` | Filter quality responses | ``{ flare model:openai,mistral filter `Pros and cons` }`` |
- `model:` - Specify one or more models (see available models below)
- `temp:` - Control randomness (0.0-2.0; default: 1.0)
- Post-processing - Apply intelligent response processing. If a specific model is not given for a post-processing command (e.g., `sum:openai`), it defaults to the first model listed in the main `model:` parameter; if no models are specified there, it falls back to `openai`. An example follows below.
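For instance, the following command (an illustrative sketch based on the `sum:openai` form above, not taken from the test suite) queries two models but routes the summarization step to `openai`:

```text
{ flare model:mistral,gemini sum:openai `Explain neural networks` }
```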
FLARE integrates with Pollinations.ai and supports these anonymous-tier models:
| Model | Description | Specialization |
|---|---|---|
| `mistral` | Mistral Small 3.1 24B | General-purpose, creative writing |
| `gemini` | Gemini 2.5 Flash Lite | Fast responses, analysis |
| `nova-fast` | Amazon Nova Micro | Quick processing |
| `openai` | OpenAI GPT-5 Nano | General-purpose (note: does not support the `temp` parameter) |
| `openai-fast` | OpenAI GPT-4.1 Nano | Faster responses |
| `qwen-coder` | Qwen 2.5 Coder 32B | Code generation & debugging |
| `bidara` | NASA's BIDARA | Biomimetic design & research |
| `midijourney` | MIDIjourney | Music composition |
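For example (an illustrative command, not taken from the documentation's test output), a coding task can be routed to the code-specialized model:

```text
{ flare model:qwen-coder `Write a Python function that reverses a string` }
```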
```http
# Process FLARE commands
POST /process-flare
Content-Type: application/json

{
  "command": "{ flare model:mistral `Your prompt here` }"
}
```

```http
# Process text documents with embedded FLARE commands
POST /process-text
Content-Type: application/json

{
  "text": "Your document with { flare model:mistral `embedded commands` } inside"
}
```
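As a quick sketch (assuming the server from the quick start is running locally on port 8080), the `/process-text` endpoint can also be called from Node 18+, where `fetch` is built in; run this in an ES module or the Node REPL:

```javascript
// Illustrative call to /process-text with an embedded FLARE command
const res = await fetch('http://localhost:8080/process-text', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'AI is everywhere. { flare model:mistral `Give one concrete example` } Indeed.',
  }),
});
console.log(await res.json());
```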
```http
# Health check
GET /health

# API information
GET /api/info
```

```python
import requests

def query_flare(command):
    response = requests.post(
        'http://localhost:8080/process-flare',
        json={'command': command},
        headers={'Content-Type': 'application/json'},
    )
    return response.json()

# Example usage
result = query_flare("{ flare model:mistral temp:0.7 `Explain Python` }")
print(result['result'])
```

```javascript
async function queryFLARE(command) {
const response = await fetch('http://localhost:8080/process-flare', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ command })
});
return await response.json();
}
// Example usage
const result = await queryFLARE("{ flare model:mistral vote `Best web framework?` }");
console.log(result.result);
```

```bash
# Run all tests
npm test
# Run specific test suites
npm run test:unit # Unit tests (94.2% coverage)
npm run test:parser # Parser tests only
npm run test:integration # Integration tests
npm run test:e2e # End-to-end tests
# Development server with auto-restart
npm run dev
```

**Complete Feature Test** - Tests all post-processing functions and capabilities:
```bash
# Ensure the server is running first
npm start &

# Run the comprehensive test suite
./test-all-features.sh
```

This comprehensive script tests:
- ✅ All 6 post-processing functions (`vote`, `sum`, `comb`, `diff`, `filter`, `exp`)
- ✅ Single and multi-model queries with temperature control
- ✅ Document processing with embedded commands
- ✅ Specialized models (`qwen-coder`, `bidara`, `midijourney`)
- ✅ Error handling and graceful degradation
- ✅ Multi-model coordination with parallel processing
**Model Testing** - Quick verification of available models:

```bash
./test-models.sh   # Test mistral, gemini, openai models
./demo-flare.sh    # Interactive demo with debug information
```

**Test Results** - All scripts generate detailed markdown reports:

- `test-results.md` - Complete feature test results with actual API responses
- `output.md` - Model-specific test results showing real AI outputs
- Server logs show detailed processing pipeline execution
**Prerequisites**: The server must be running on `localhost:8080` before executing the test scripts.
FLARE v2.0 uses an Atomic File Structure where each file contains exactly one function, organized by language constructs:
```text
src/
├── server/                      # Express server components
│   ├── createExpressApp.js      # Express app creation
│   ├── setupMiddleware.js       # CORS, body parsing
│   ├── setupApiRoutes.js        # API route definitions
│   ├── startServer.js           # Server startup
│   └── exports.js               # Module exports
├── parser/                      # FLARE command parsing (atomic functions)
│   ├── parseFlareCommand.js     # Parse single FLARE command
│   ├── validateParsedCommand.js # Command validation
│   ├── extractFlareCommands.js  # Extract commands from text
│   ├── processFlareResponse.js  # Process complete response
│   └── replaceFlareCommands.js  # Replace commands with results
├── services/                    # Business logic (atomic functions)
│   ├── executeModelQuery.js     # Single model query execution
│   ├── queryMultipleModels.js   # Multi-model coordination
│   ├── applyPostProcessing.js   # Post-processing operations
│   ├── handleQueryFailure.js    # Error handling
│   └── processFlareCommand.js   # Complete command processing
├── operations/                  # Post-processing operations
│   ├── sum.js                   # Summarization
│   ├── vote.js                  # Response voting
│   ├── comb.js                  # Response combination
│   └── diff.js                  # Response comparison
└── test/                        # Comprehensive test suite
```
Atomic Architecture Principles:

- **One Function Per File** - Each `.js` file contains exactly one function with the same name
- **Maximum Modularity** - Functions are pure, testable, and composable
- **Clear Dependencies** - Import/export relationships are explicit and minimal
- **Language-Based Organization** - Structure follows code constructs, not application features
Core Processing Pipeline (sketched in code below):

1. Text Input → `extractFlareCommands()` → Extract embedded FLARE commands
2. FLARE Commands → `parseFlareCommand()` → Parse syntax and parameters
3. Parsed Commands → `queryMultipleModels()` → Execute model queries in parallel
4. Raw Responses → `applyPostProcessing()` → Apply intelligent post-processing
5. Processed Results → `replaceFlareCommands()` → Replace commands with results
6. Final Output → Seamlessly integrated natural text
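A minimal sketch of how these atomic functions might compose, assuming each file exports the function of the same name; the exact signatures and return shapes here are assumptions for illustration, not FLARE's actual API:

```javascript
const extractFlareCommands = require('./src/parser/extractFlareCommands');
const parseFlareCommand = require('./src/parser/parseFlareCommand');
const replaceFlareCommands = require('./src/parser/replaceFlareCommands');
const queryMultipleModels = require('./src/services/queryMultipleModels');
const applyPostProcessing = require('./src/services/applyPostProcessing');

// Sketch only: runs the pipeline stages named above over one document.
async function processText(text) {
  const commands = extractFlareCommands(text); // find embedded { flare ... } blocks
  const results = await Promise.all(
    commands.map(async (raw) => {
      const parsed = parseFlareCommand(raw); // models, temp, post-processing, prompt
      const responses = await queryMultipleModels(parsed); // parallel model queries
      return applyPostProcessing(parsed, responses); // vote / sum / comb / diff / ...
    })
  );
  return replaceFlareCommands(text, commands, results); // splice results back into the text
}
```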
This structure provides:
- ✅ Ultimate Maintainability - Individual functions are easy to locate, test, and modify
- ✅ Perfect Testability - Each function can be tested in isolation with clear inputs/outputs
- ✅ Infinite Scalability - New functionality adds new files without affecting existing code
- ✅ Maximum Clarity - Function names match file names for instant comprehension
FLARE's `/process-text` endpoint enables natural document processing, where AI-generated content integrates seamlessly into your text:
**Input:**

```text
Artificial intelligence is transforming our world in unprecedented ways.

{ flare model:mistral temp:0.5 `Explain in 2-3 sentences how AI is changing healthcare specifically` }

Additionally, the field of education is experiencing significant changes due to AI integration.

{ flare model:gemini temp:0.5 `Describe in 2-3 sentences how AI is revolutionizing education and learning` }

Looking toward the future, these technological advances promise even more remarkable developments.

{ flare model:mistral temp:0.7 `Predict in 2-3 sentences what AI might accomplish in the next 5-10 years` }
```
**Output:**

Artificial intelligence is transforming our world in unprecedented ways. AI is significantly transforming healthcare by enabling more accurate diagnoses through advanced image analysis and predictive algorithms, and by personalizing treatment plans based on vast amounts of patient data. Additionally, AI-driven tools are streamlining administrative tasks, enhancing patient monitoring, and facilitating the development of new drugs, ultimately improving efficiency and patient outcomes. Additionally, the field of education is experiencing significant changes due to AI integration. AI is revolutionizing education by offering personalized learning experiences tailored to individual student needs and paces, providing instant feedback and adaptive content. It's also automating administrative tasks for educators, freeing up their time for more impactful teaching and student interaction. This shift promises to make education more accessible, efficient, and effective for learners of all backgrounds. Looking toward the future, these technological advances promise even more remarkable developments. In the next 5-10 years, AI is likely to make significant strides in personalizing healthcare through advanced diagnostics and predictive analytics, potentially revolutionizing disease prevention and treatment. Additionally, AI could enhance autonomous systems, leading to more widespread use of self-driving cars and drones, and it may also play a crucial role in addressing climate change by optimizing resource management and energy efficiency.
- ✅ Perfect Integration - FLARE commands are seamlessly replaced with AI-generated content
- ✅ Context Preservation - Each AI response understands and maintains the narrative flow
- ✅ Natural Reading - The final text reads as a coherent document, not a patchwork
- ✅ Multi-Model Coordination - Different models contribute their specialized strengths
- ✅ Zero Manual Editing - No post-processing needed for natural language flow
```text
// Generate multiple perspectives on a topic
{ flare model:openai,mistral vote `Explain climate change impacts` }

// Create comprehensive summaries
{ flare model:openai,mistral sum `Benefits of renewable energy` }
```

```text
// Compare different viewpoints
{ flare model:openai,mistral diff `Pros and cons of remote work` }

// Expand on technical concepts
{ flare model:mistral exp `Explain blockchain technology` }
```

```text
// Generate creative content with controlled randomness
{ flare model:mistral temp:0.9 `Write a sci-fi story opening` }

// Combine different creative approaches
{ flare model:openai,mistral comb `Create a marketing slogan for AI tools` }
```

```text
// Filter and improve content quality
{ flare model:openai,mistral filter `Write professional email about project delays` }

// Vote for the best solution
{ flare model:openai,mistral vote `Best approach to database optimization` }
```

```bash
curl http://localhost:8080/health
```

Returns comprehensive system status:
```json
{
  "status": "healthy",
  "version": "2.0.0",
  "environment": {
    "healthy": true,
    "checks": {
      "apiKey": true,
      "networkAccess": true
    },
    "uptime": 3600.5,
    "nodeVersion": "v20.18.1"
  }
}
```

```bash
curl http://localhost:8080/api/info
```

Lists all available features and supported commands.
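As a small illustrative sketch (field names taken from the sample `/health` response above), a client could gate its startup on the health endpoint:

```javascript
// Illustrative only: check server health before sending FLARE commands
const health = await (await fetch('http://localhost:8080/health')).json();
if (health.status !== 'healthy') {
  throw new Error(`FLARE server unhealthy: ${JSON.stringify(health)}`);
}
```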
**❌ API Connection Errors**

- Check internet connectivity
- Verify your API key if using a custom configuration
- Check server logs for detailed error messages

**❌ Invalid FLARE Syntax**

- Ensure proper curly-brace structure: `{ flare ... }`
- Verify model names are correct
- Check that temperature values are between 0.0 and 2.0

**❌ Server Won't Start**

- Check that port 8080 isn't in use: `lsof -i :8080`
- Verify the Node.js version (v14+ required)
- Check that npm dependencies are installed
- 📖 Check the test files in `src/test/` for usage examples
- 🔍 Review server logs for detailed error information
- 📊 Use the health endpoint to diagnose system status
- 🧪 Run the test suite to verify functionality: `npm test`
- 🚀 Run feature tests to see real examples: `./test-all-features.sh`
- 📄 Check generated reports: `test-results.md` and `output.md`
- Parallel Processing - Multiple model queries execute simultaneously
- Automatic Fallbacks - Graceful degradation when models fail
- Retry Logic - Exponential backoff for failed requests (see the sketch after this list)
- Error Recovery - Continue processing even if some models fail
- Resource Management - Intelligent timeout and connection management
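The backoff pattern referenced above, as an illustrative sketch only (delay values and retry counts here are assumptions, not FLARE's actual internals):

```javascript
// Illustrative retry helper with exponential backoff
async function withRetry(fn, retries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn(); // succeed on the first attempt that resolves
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      const delay = baseDelayMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```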
- API Key Protection - Environment variable configuration
- Input Validation - Comprehensive FLARE command validation
- Error Handling - Secure error messages without exposing internals
- Rate Limiting - Built-in request throttling (illustrated after this list)
- CORS Support - Configurable cross-origin request handling
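As an illustration of the throttling pattern only (FLARE's actual rate-limiting configuration isn't shown here), Express middleware such as the `express-rate-limit` package implements it along these lines:

```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Hypothetical policy: at most 100 requests per 15 minutes per client IP
app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // time window in milliseconds
  max: 100,                 // requests allowed per window per IP
}));
```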
- Node.js v14 or higher
- npm v6 or higher
- Internet connection for API access
- 2GB RAM minimum (recommended: 4GB+)
- Port 8080 available (configurable via PORT environment variable)
- 🚀 Production Ready - Used in live applications with proven reliability
- ⚡ Fast & Efficient - Optimized for performance with parallel processing
- 🧠 Intelligent - AI-powered post-processing for enhanced results
- 🔧 Easy to Use - Simple syntax that anyone can learn quickly
- 🏗️ Scalable Architecture - Built to grow with your needs
- 🧪 Well Tested - 94.2% test coverage ensures reliability
- 📚 Fully Documented - Comprehensive guides and examples
Start orchestrating AI models today with FLARE v2.0! 🚀