Quickleaf Cache is a fast, lightweight, and feature-rich in-memory cache library for Rust. It combines the simplicity of a HashMap with advanced caching features like TTL (Time To Live), filtering, ordering, and event notifications.
- High Performance: O(1) access with ordered key iteration
- Advanced Optimizations: Optimized string filters and memory layout
- Performance Gains: Up to 48% faster operations compared to standard implementations
- TTL Support: Automatic expiration with lazy cleanup
- Advanced Filtering: StartWith, EndWith, and complex pattern matching with optimized algorithms
- Flexible Ordering: Ascending/descending with pagination support
- Event Notifications: Real-time cache operation events
- LRU Eviction: Automatic removal of least recently used items
- Persistent Storage: Optional SQLite-backed persistence for durability
- Type Safety: Full Rust type safety with generic value support
- Lightweight: Minimal external dependencies
- Memory Optimized: Efficient memory layout and usage patterns
Add the following to your Cargo.toml:
```toml
[dependencies]
quickleaf = "0.4"

# For persistence support (optional)
quickleaf = { version = "0.4", features = ["persist"] }
```

```rust
use quickleaf::{Quickleaf, Duration};
fn main() {
// Create a cache with capacity of 1000 items
let mut cache = Quickleaf::new(1000);
// Insert some data
cache.insert("user:123", "Alice");
cache.insert("user:456", "Bob");
// Retrieve data
println!("{:?}", cache.get("user:123")); // Some("Alice")
// Insert with TTL (expires in 60 seconds)
cache.insert_with_ttl("session:abc", "temp_data", Duration::from_secs(60));
}
```

```rust
use quickleaf::Quickleaf;
fn main() {
let mut cache = Quickleaf::new(5);
// Insert data
cache.insert("apple", 100);
cache.insert("banana", 200);
cache.insert("cherry", 300);
// Get data
println!("{:?}", cache.get("apple")); // Some(100)
// Check if key exists
assert!(cache.contains_key("banana"));
// Remove data
cache.remove("cherry").unwrap();
// Cache info
println!("Cache size: {}", cache.len());
println!("Is empty: {}", cache.is_empty());
}
```

```rust
use quickleaf::{Quickleaf, Duration};
fn main() {
// Create cache where all items expire after 5 minutes by default
let mut cache = Quickleaf::with_default_ttl(100, Duration::from_secs(300));
// This item will use the default TTL (5 minutes)
cache.insert("default_ttl", "expires in 5 min");
// This item has custom TTL (30 seconds)
cache.insert_with_ttl("custom_ttl", "expires in 30 sec", Duration::from_secs(30));
// Items expire automatically when accessed
// After 30+ seconds, custom_ttl will return None
println!("{:?}", cache.get("custom_ttl"));
}
```

```rust
use quickleaf::{Quickleaf, Duration};
use std::thread;
fn main() {
let mut cache = Quickleaf::new(10);
// Add items with short TTL for demo
cache.insert_with_ttl("temp1", "data1", Duration::from_millis(100));
cache.insert_with_ttl("temp2", "data2", Duration::from_millis(100));
cache.insert("permanent", "data3"); // No TTL
println!("Initial size: {}", cache.len()); // 3
// Wait for items to expire
thread::sleep(Duration::from_millis(150));
// Manual cleanup of expired items
let removed_count = cache.cleanup_expired();
println!("Removed {} expired items", removed_count); // 2
println!("Final size: {}", cache.len()); // 1
}
```

```rust
use quickleaf::{Quickleaf, ListProps, Order, Filter};
fn main() {
let mut cache = Quickleaf::new(10);
cache.insert("user:123", "Alice");
cache.insert("user:456", "Bob");
cache.insert("product:789", "Widget");
cache.insert("user:999", "Charlie");
// Get all users (keys starting with "user:")
let users = cache.list(
ListProps::default()
.filter(Filter::StartWith("user:".to_string()))
.order(Order::Asc)
).unwrap();
for (key, value) in users {
println!("{}: {}", key, value);
}
}
```

```rust
use quickleaf::{Quickleaf, ListProps, Filter};
fn main() {
let mut cache = Quickleaf::new(10);
cache.insert("config.json", "{}");
cache.insert("data.json", "[]");
cache.insert("readme.txt", "docs");
cache.insert("settings.json", "{}");
// Get all JSON files
let json_files = cache.list(
ListProps::default()
.filter(Filter::EndWith(".json".to_string()))
).unwrap();
println!("JSON files found: {}", json_files.len()); // 3
}
```

```rust
use quickleaf::{Quickleaf, ListProps, Filter, Order};
fn main() {
let mut cache = Quickleaf::new(10);
cache.insert("cache_user_data", "user1");
cache.insert("cache_product_info", "product1");
cache.insert("temp_user_session", "session1");
cache.insert("cache_user_preferences", "prefs1");
// Get cached user data (starts with "cache_" and ends with "_data")
let cached_user_data = cache.list(
ListProps::default()
.filter(Filter::StartAndEndWith("cache_".to_string(), "_data".to_string()))
.order(Order::Desc)
).unwrap();
for (key, value) in cached_user_data {
println!("{}: {}", key, value);
}
}
```

Quickleaf provides powerful pagination through the `limit` and `start_after_key` parameters, enabling efficient navigation through large datasets.
The `limit` parameter controls how many items are returned in a single query:

```rust
use quickleaf::{Quickleaf, ListProps, Order};
fn main() {
let mut cache = Quickleaf::new(100);
// Add test data
for i in 0..50 {
cache.insert(format!("item_{:03}", i), format!("value_{}", i));
}
// Get only the first 10 items
let page = cache.list(
ListProps::default()
.order(Order::Asc)
.limit(10) // Return maximum 10 items
).unwrap();
println!("First page ({} items):", page.len());
for (key, value) in &page {
println!(" {} = {}", key, value);
}
}
```

Use `start_after_key` to implement efficient cursor-based pagination:

```rust
use quickleaf::{Quickleaf, ListProps, Order};
fn main() {
let mut cache = Quickleaf::new(100);
// Add 30 items
for i in 0..30 {
cache.insert(format!("key_{:02}", i), i);
}
// Get first page
let page1 = cache.list(
ListProps::default()
.order(Order::Asc)
.limit(10)
).unwrap();
println!("Page 1: {} items", page1.len());
let last_key = &page1.last().unwrap().0;
println!("Last key in page 1: {}", last_key);
// Get second page using the last key from page 1
let page2 = cache.list(
ListProps::default()
.order(Order::Asc)
.start_after_key(last_key) // Continue after the last key
.limit(10)
).unwrap();
println!("Page 2: {} items starting after '{}'", page2.len(), last_key);
for (key, value) in page2.iter().take(3) {
println!(" {} = {}", key, value);
}
}
```

Here's a comprehensive example showing how to paginate through all items:

```rust
use quickleaf::{Quickleaf, ListProps, Order};
fn main() {
let mut cache = Quickleaf::new(200);
// Insert 100 items
for i in 0..100 {
cache.insert(format!("doc_{:03}", i), format!("content_{}", i));
}
// Paginate through all items
let mut all_items = Vec::new();
let mut current_key: Option<String> = None;
let page_size = 25;
let mut page_num = 1;
loop {
// Build pagination properties
let mut props = ListProps::default()
.order(Order::Asc)
.limit(page_size);
// Add start_after_key if we have a cursor
if let Some(ref key) = current_key {
props = props.start_after_key(key);
}
// Get the page
let page = cache.list(props).unwrap();
// Break if no more items
if page.is_empty() {
break;
}
println!("Page {}: {} items", page_num, page.len());
// Collect items and update cursor
for (key, value) in &page {
all_items.push((key.clone(), value.clone()));
}
current_key = Some(page.last().unwrap().0.clone());
page_num += 1;
}
println!("Total pages: {}", page_num - 1);
println!("Total items: {}", all_items.len());
}
```

Combine pagination with filtering for more complex queries:

```rust
use quickleaf::{Quickleaf, ListProps, Order, Filter};
fn main() {
let mut cache = Quickleaf::new(100);
// Add mixed data
for i in 0..30 {
cache.insert(format!("user_{:02}", i), format!("User {}", i));
cache.insert(format!("admin_{:02}", i), format!("Admin {}", i));
}
// Paginate through users only
let mut user_cursor: Option<String> = None;
let mut page = 1;
loop {
let mut props = ListProps::default()
.filter(Filter::StartWith("user_".to_string()))
.order(Order::Asc)
.limit(5);
if let Some(ref cursor) = user_cursor {
props = props.start_after_key(cursor);
}
let users = cache.list(props).unwrap();
if users.is_empty() {
break;
}
println!("User page {}: {} users", page, users.len());
for (key, value) in &users {
println!(" {} = {}", key, value);
}
user_cursor = Some(users.last().unwrap().0.clone());
page += 1;
}
}
```

`start_after_key` also works correctly with descending order:

```rust
use quickleaf::{Quickleaf, ListProps, Order};
fn main() {
let mut cache = Quickleaf::new(50);
// Add items
for i in 0..20 {
cache.insert(format!("item_{:02}", i), i);
}
// Get first page in descending order
let page1 = cache.list(
ListProps::default()
.order(Order::Desc)
.limit(5)
).unwrap();
println!("Descending page 1:");
for (key, _) in &page1 {
println!(" {}", key);
}
// Get next page (continuing in descending order)
let last_key = &page1.last().unwrap().0;
let page2 = cache.list(
ListProps::default()
.order(Order::Desc)
.start_after_key(last_key)
.limit(5)
).unwrap();
println!("\nDescending page 2 (after '{}'):", last_key);
for (key, _) in &page2 {
println!(" {}", key);
}
}
```

```rust
use quickleaf::{Quickleaf, ListProps};
fn main() {
let mut cache = Quickleaf::new(10);
cache.insert("a", 1);
cache.insert("b", 2);
cache.insert("c", 3);
// Edge case: limit of 0 returns empty list
let empty = cache.list(ListProps::default().limit(0)).unwrap();
assert_eq!(empty.len(), 0);
// Edge case: limit greater than total items returns all
let all = cache.list(ListProps::default().limit(100)).unwrap();
assert_eq!(all.len(), 3);
// Edge case: start_after_key with non-existent key returns error
let result = cache.list(
ListProps::default().start_after_key("non_existent")
);
assert!(result.is_err());
// Best practice: Always check if page is empty to detect end
let last_page = cache.list(
ListProps::default().start_after_key("c")
).unwrap();
assert!(last_page.is_empty()); // No items after "c"
}
```

- Use appropriate page sizes: balance memory usage against the number of queries (typically 10-100 items)
- Cache cursors: store the last key for efficient pagination state management (see the sketch below)
- Combine with filters: apply filters to reduce the dataset before pagination
- Handle errors gracefully: check for non-existent keys when using `start_after_key`
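The cursor-caching tip can be wrapped in a small helper. The following is a hypothetical sketch (the `PageCursor` type is not part of Quickleaf); it only builds the `ListProps` for the next page and remembers the last key seen, mirroring the builder calls used in the examples above.

```rust
use quickleaf::{ListProps, Order};

/// Hypothetical helper, not part of Quickleaf: holds pagination state.
struct PageCursor {
    last_key: Option<String>,
    page_size: usize,
}

impl PageCursor {
    fn new(page_size: usize) -> Self {
        Self { last_key: None, page_size }
    }

    /// Build the properties for the next page, continuing after the stored key.
    fn next_props(&self) -> ListProps {
        let mut props = ListProps::default().order(Order::Asc).limit(self.page_size);
        if let Some(ref key) = self.last_key {
            props = props.start_after_key(key);
        }
        props
    }

    /// Record the last key of the page that was just fetched.
    fn advance(&mut self, last_key_in_page: &str) {
        self.last_key = Some(last_key_in_page.to_string());
    }
}
```

Each loop iteration would call `cache.list(cursor.next_props())`, stop on an empty page, and otherwise `advance` with the key of the last returned item, exactly as in the full pagination example above.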
Quickleaf supports optional persistence using SQLite as a backing store. This provides durability across application restarts while maintaining the same high-performance in-memory operations.
```rust
use quickleaf::{Cache, Duration};
use std::sync::mpsc::channel;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let (tx, rx) = channel();
// Create cache with ALL features: persistence, events, and TTL
let mut cache = Cache::with_persist_and_sender_and_ttl(
"full_featured.db",
1000,
tx,
Duration::from_secs(3600) // 1 hour default TTL
)?;
// Insert data - it will:
// 1. Be persisted to SQLite
// 2. Send an event to the channel
// 3. Expire after 1 hour (default TTL)
cache.insert("session:user123", "active");
// Override default TTL for specific items
cache.insert_with_ttl(
"temp:token",
"xyz789",
Duration::from_secs(60) // 1 minute instead of 1 hour
);
// Process events
for event in rx.try_iter() {
println!("Event received: {:?}", event);
}
Ok(())
}
```

```rust
use quickleaf::Cache;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create a persistent cache backed by SQLite
let mut cache = Cache::with_persist("cache.db", 1000)?;
// Insert data - automatically persisted
cache.insert("user:123", "Alice");
cache.insert("user:456", "Bob");
// Data survives application restart
drop(cache);
// Later or after restart...
let mut cache = Cache::with_persist("cache.db", 1000)?;
// Data is still available
println!("{:?}", cache.get("user:123")); // Some("Alice")
Ok(())
}
```

```rust
use quickleaf::{Cache, Duration};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Option 1: Use with_persist and insert items with individual TTL
let mut cache = Cache::with_persist("cache.db", 1000)?;
cache.insert_with_ttl(
"session:abc",
"temp_data",
Duration::from_secs(3600)
);
// Option 2: Use with_persist_and_ttl for default TTL on all items
let mut cache_with_default = Cache::with_persist_and_ttl(
"cache_with_ttl.db",
1000,
Duration::from_secs(300) // 5 minutes default TTL
)?;
// This item will use the default TTL (5 minutes)
cache_with_default.insert("auto_expire", "data");
// You can still override with custom TTL
cache_with_default.insert_with_ttl(
"custom_expire",
"data",
Duration::from_secs(60) // 1 minute instead of default 5
);
// TTL is preserved across restarts
// Expired items are automatically cleaned up on load
Ok(())
}
```

- Automatic Persistence: All cache operations are automatically persisted
- Background Writer: Non-blocking write operations using a background thread
- Crash Recovery: Automatic recovery from unexpected shutdowns
- TTL Preservation: TTL values are preserved across restarts
- Efficient Storage: Uses SQLite with optimized indexes for performance
- Compatibility: Works seamlessly with all existing Quickleaf features
| Constructor | Description | Use Case |
|---|---|---|
| `with_persist(path, capacity)` | Basic persistent cache | Simple persistence without events |
| `with_persist_and_ttl(path, capacity, ttl)` | Persistent cache with default TTL | Session stores, temporary data with persistence |
| `with_persist_and_sender(path, capacity, sender)` | Persistent cache with events | Monitoring, logging, real-time updates |
| `with_persist_and_sender_and_ttl(path, capacity, sender, ttl)` | Full-featured persistent cache | Complete solution with all features |
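Of these, `with_persist_and_sender` is the only constructor not shown in a full example above. The sketch below is a hedged illustration of how it might be used, following the same patterns as the other persistence examples; the `events.db` path is arbitrary. The next example then shows handling events on a dedicated thread.

```rust
use quickleaf::Cache;
use std::sync::mpsc::channel;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (tx, rx) = channel();

    // Persistent cache that also emits events (see the constructor table above).
    let mut cache = Cache::with_persist_and_sender("events.db", 1000, tx)?;

    // Inserts are persisted to SQLite and announced on the channel.
    cache.insert("user:1", "Alice");

    // Drain whatever events have arrived so far.
    for event in rx.try_iter() {
        println!("Event received: {:?}", event);
    }

    Ok(())
}
```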
```rust
use quickleaf::{Quickleaf, Event};
use std::sync::mpsc::channel;
use std::thread;
fn main() {
let (tx, rx) = channel();
let mut cache = Quickleaf::with_sender(10, tx);
// Spawn a thread to handle events
let event_handler = thread::spawn(move || {
for event in rx {
match event {
Event::Insert(data) => {
println!("Inserted: {} = {}", data.key, data.value);
}
Event::Remove(data) => {
println!("Removed: {} = {}", data.key, data.value);
}
Event::Clear => {
println!("Cache cleared");
}
}
}
});
// Perform cache operations (will trigger events)
cache.insert("user:1", "Alice");
cache.insert("user:2", "Bob");
cache.remove("user:1").unwrap();
cache.clear();
// Close the sender to stop the event handler
drop(cache);
event_handler.join().unwrap();
}
```

```rust
use quickleaf::{Quickleaf, Duration, ListProps, Filter, Order};
use std::thread;
fn main() {
// Create cache with default TTL and event notifications
let (tx, _rx) = std::sync::mpsc::channel();
let mut cache = Quickleaf::with_sender_and_ttl(50, tx, Duration::from_secs(300));
// Insert user sessions with custom TTLs
cache.insert_with_ttl("session:guest", "temporary", Duration::from_secs(30));
cache.insert_with_ttl("session:user123", "authenticated", Duration::from_secs(3600));
cache.insert("config:theme", "dark"); // Uses default TTL
cache.insert("config:lang", "en"); // Uses default TTL
// Get all active sessions
let sessions = cache.list(
ListProps::default()
.filter(Filter::StartWith("session:".to_string()))
.order(Order::Asc)
).unwrap();
println!("Active sessions: {}", sessions.len());
// Simulate time passing
thread::sleep(Duration::from_secs(35));
// Guest session should be expired now
println!("Guest session: {:?}", cache.get("session:guest")); // None
println!("User session: {:?}", cache.get("session:user123")); // Some(...)
// Manual cleanup
let expired_count = cache.cleanup_expired();
println!("Cleaned up {} expired items", expired_count);
}
```

Quickleaf uses a dual-structure approach for optimal performance (a toy illustration follows the list below):
- HashMap: O(1) key-value access
- Vec: Maintains sorted key order for efficient iteration
- Lazy Cleanup: TTL items are removed when accessed, not proactively
- SQLite Backend (optional): Provides durable storage with background persistence
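The following toy sketch is illustrative only and not Quickleaf's actual internals; it just shows how pairing a hash map with a sorted key list yields O(1) lookups plus ordered iteration, which is the idea the bullets above describe.

```rust
use std::collections::HashMap;

// Toy sketch of the dual-structure idea, not Quickleaf's real implementation.
struct ToyCache {
    map: HashMap<String, i32>, // O(1) lookups
    keys: Vec<String>,         // kept sorted for ordered iteration
}

impl ToyCache {
    fn new() -> Self {
        Self { map: HashMap::new(), keys: Vec::new() }
    }

    fn insert(&mut self, key: &str, value: i32) {
        if self.map.insert(key.to_string(), value).is_none() {
            // Binary search finds the insertion point so the key list stays sorted.
            let pos = self.keys.binary_search(&key.to_string()).unwrap_or_else(|p| p);
            self.keys.insert(pos, key.to_string());
        }
    }

    fn get(&self, key: &str) -> Option<&i32> {
        self.map.get(key) // O(1), like a plain HashMap
    }
}

fn main() {
    let mut cache = ToyCache::new();
    cache.insert("banana", 2);
    cache.insert("apple", 1);

    assert_eq!(cache.get("apple"), Some(&1));

    // Ordered iteration walks the sorted key list and looks each key up in the map.
    for key in &cache.keys {
        println!("{} = {}", key, cache.map[key]);
    }
}
```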
- Lazy Cleanup: Expired items are removed during access operations (`get`, `contains_key`, `list`)
- Manual Cleanup: Use `cleanup_expired()` for proactive cleaning
- No Background Threads: Zero overhead until items are accessed (except for optional persistence)
When persistence is enabled:
- In-Memory First: All operations work on the in-memory cache for speed
- Background Writer: A separate thread handles SQLite writes asynchronously
- Event-Driven: Cache operations trigger persistence events
- Auto-Recovery: On startup, cache is automatically restored from SQLite
- Expired Cleanup: Expired items are filtered out during load
Quickleaf v0.4+ includes advanced performance optimizations that deliver significant speed improvements:
- Optimized String Filters: Fast prefix and suffix matching algorithms
- Efficient Data Structures: IndexMap for better memory layout
- TTL Optimization: Cached timestamps and lazy cleanup
Performance Gains: 5-36% improvement across all operations compared to previous versions.
These optimizations are transparent to the API - all existing code continues to work while automatically benefiting from the performance improvements.
- Smart Memory Management: Automatically pools and reuses small strings (< 64 bytes by default); see the sketch after this list
- Fragmentation Reduction: Minimizes heap fragmentation through strategic allocation reuse
- Configurable Thresholds: Adjustable pool size and string length limits
- Zero-Copy When Possible: Reuses existing allocations without additional copying
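To make the pooling idea tangible, here is a toy sketch under the stated 64-byte threshold. It is not Quickleaf's implementation (the `StringPool` type is hypothetical); it only shows the general pattern of keeping cleared allocations around for reuse.

```rust
// Illustrative only: a toy small-string pool, not Quickleaf's internal pooling.
const MAX_POOLED_LEN: usize = 64;

struct StringPool {
    free: Vec<String>, // cleared-but-allocated strings ready for reuse
}

impl StringPool {
    fn new() -> Self {
        Self { free: Vec::new() }
    }

    /// Take a reusable buffer if one is available, otherwise allocate.
    fn acquire(&mut self, contents: &str) -> String {
        let mut s = self.free.pop().unwrap_or_default();
        s.clear();
        s.push_str(contents);
        s
    }

    /// Return small strings to the pool so their allocations can be reused.
    fn release(&mut self, s: String) {
        if s.capacity() <= MAX_POOLED_LEN {
            self.free.push(s);
        } // larger strings are simply dropped
    }
}

fn main() {
    let mut pool = StringPool::new();
    let key = pool.acquire("user:123");
    // ... use `key` as a cache key, then hand the allocation back:
    pool.release(key);
    assert_eq!(pool.free.len(), 1);
}
```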
- Automatic Detection: Runtime detection of CPU capabilities with safe fallbacks (see the sketch after this list)
- Optimized Algorithms: Custom prefix/suffix matching algorithms for large text processing
- Cross-Platform: Works on x86/x86_64 with graceful degradation on ARM/other architectures
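As a concrete illustration of runtime feature detection with a graceful fallback, here is a minimal sketch. It is not Quickleaf's actual dispatch code; the vectorized path is deliberately left as a comment and every path uses the portable implementation.

```rust
// Illustrative only: runtime CPU feature detection with a portable fallback.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn simd_available() -> bool {
    // This macro only exists on x86/x86_64 targets.
    std::is_x86_feature_detected!("sse2")
}

#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn simd_available() -> bool {
    false // graceful degradation on ARM and other architectures
}

fn starts_with_fast(haystack: &str, prefix: &str) -> bool {
    if simd_available() {
        // A vectorized comparison could be dispatched here; this sketch simply
        // uses the portable implementation on every path.
        return haystack.starts_with(prefix);
    }
    haystack.starts_with(prefix)
}

fn main() {
    assert!(starts_with_fast("user:123", "user:"));
}
```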
```rust
let results = cache.list(
ListProps::default()
);
```

- Syscall Reduction: Caches `SystemTime::now()` calls to reduce kernel overhead
- Lazy Evaluation: Only checks expiration when items are actually accessed
- Batch Operations: Optimized cleanup process for multiple expired items
- High-Resolution Timing: Nanosecond precision for accurate TTL handling
```rust
// TTL optimization is transparent
cache.insert_with_ttl("session", "data", Duration::from_secs(300));
// Subsequent accesses are optimized with cached timestamps
```

- Ordered Performance: Maintains insertion order while preserving O(1) access complexity
- Memory Layout: Contiguous memory allocation improves CPU cache performance
- Iterator Efficiency: Faster traversal due to better data locality
- Hybrid Approach: Combines HashMap speed with Vec-like iteration performance (illustrated below)
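The property described above can be seen directly with the `indexmap` crate, which the notes cite as the underlying map. The snippet below is plain `indexmap` usage, not Quickleaf code, and assumes `indexmap` is added as a dependency.

```rust
use indexmap::IndexMap;

fn main() {
    let mut map = IndexMap::new();
    map.insert("banana", 2);
    map.insert("apple", 1);
    map.insert("cherry", 3);

    // O(1) lookup by key, like a HashMap.
    assert_eq!(map.get("apple"), Some(&1));

    // Iteration follows insertion order, backed by contiguous storage.
    for (key, value) in &map {
        println!("{} = {}", key, value);
    }
}
```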
- Adaptive Algorithms: Automatically chooses optimal algorithms based on data size
- Threshold-Based Switching: Uses different strategies for small vs. large datasets
- CPU Feature Detection: Runtime detection and utilization of available CPU features
- Memory-Aware Operations: Considers available memory for optimal performance
- Compile-Time Optimization: Rust's zero-cost abstractions ensure no runtime overhead
- Inlining: Critical path functions are inlined for maximum performance
- Branch Prediction: Optimized code paths for common operations
- Generic Specialization: Type-specific optimizations where beneficial
- Continuous Performance Testing: All optimizations validated through comprehensive benchmarks
- Regression Detection: Performance monitoring to prevent slowdowns
- Real-World Workloads: Benchmarks based on actual use cases and patterns
- Cross-Platform Validation: Performance testing across different architectures and systems
- Graceful Degradation: All optimizations have safe fallbacks for unsupported systems
- API Compatibility: Zero breaking changes - all optimizations are transparent
- Feature Detection: Runtime detection of CPU capabilities
- Cross-Platform: Works on Windows, Linux, macOS, and other platforms
- Architecture Support: Optimized for x86_64, with fallbacks for ARM and other architectures
These optimizations are transparent to the API - all existing code continues to work while automatically benefiting from the performance improvements.
## API Reference

```rust
// Basic cache
let cache = Quickleaf::new(capacity);
// With default TTL
let cache = Quickleaf::with_default_ttl(capacity, ttl);
// With event notifications
let cache = Quickleaf::with_sender(capacity, sender);
// With both TTL and events
let cache = Quickleaf::with_sender_and_ttl(capacity, sender, ttl);
// With persistence (requires "persist" feature)
let cache = Cache::with_persist("cache.db", capacity)?;
// With persistence and default TTL
let cache = Cache::with_persist_and_ttl("cache.db", capacity, ttl)?;
// With persistence and events
let cache = Cache::with_persist_and_sender("cache.db", capacity, sender)?;
// With persistence, events, and TTL (all features)
let cache = Cache::with_persist_and_sender_and_ttl("cache.db", capacity, sender, ttl)?;
```

```rust
// Insert operations
cache.insert(key, value);
cache.insert_with_ttl(key, value, ttl);
// Access operations
cache.get(key); // Returns Option<&Value>
cache.get_mut(key); // Returns Option<&mut Value>
cache.contains_key(key); // Returns bool
// Removal operations
cache.remove(key); // Returns Result<(), Error>
cache.clear(); // Removes all items
// TTL operations
cache.cleanup_expired(); // Returns count of removed items
cache.set_default_ttl(ttl);
cache.get_default_ttl();
```

```rust
// List operations
cache.list(props); // Returns Result<Vec<(Key, &Value)>, Error>
// Filter types
Filter::None
Filter::StartWith(prefix)
Filter::EndWith(suffix)
Filter::StartAndEndWith(prefix, suffix)
// Ordering
Order::Asc // Ascending
Order::Desc // Descending
```

Run the test suite:

```bash
# All tests
cargo test
# TTL-specific tests
cargo test ttl
# Persistence tests (requires "persist" feature)
cargo test persist
# Performance tests
cargo test --release
# With output
cargo test -- --nocapture
```

All 36 tests passing (as of August 2025):

```text
test result: ok. 36 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```
Comprehensive Test Coverage includes:
- Core Operations: Insert, get, remove, clear operations
- TTL Functionality: Expiration, cleanup, lazy evaluation
- Advanced Filtering: Prefix, suffix, complex pattern matching with optimized algorithms
- List Operations: Ordering, pagination, filtering combinations
- Event System: Real-time notifications and event handling
- LRU Eviction: Capacity management and least-recently-used removal
- Persistence: SQLite integration, crash recovery, TTL preservation
- Performance Features: Optimized filters and optimization validation
- Concurrency: Thread safety, parallel test execution
- Edge Cases: Error handling, boundary conditions, memory management
- Cross-Platform: Linux, Windows, macOS compatibility
| Category | Tests | Description |
|---|---|---|
| Core Cache | 8 tests | Basic CRUD operations |
| TTL System | 8 tests | Time-based expiration |
| Filtering | 4 tests | Pattern matching and optimized algorithms |
| Persistence | 14 tests | SQLite integration |
| Events | 2 tests | Notification system |
| Performance | 6 tests | Optimization validation |
```bash
# Run benchmarks to validate optimizations
cargo bench
# Test specific optimization features
cargo test fast_filters
```

All tests are designed to run reliably in parallel environments, with proper isolation to prevent interference between test executions.
Quickleaf v0.4+ includes advanced performance optimizations that deliver significant speed improvements:
- Optimized String Filters: Fast prefix and suffix matching algorithms
- Efficient Data Structures: IndexMap for better memory layout
- TTL Optimization: Cached timestamps and lazy cleanup
Performance Gains: 5-36% improvement across all operations compared to previous versions.
| Operation | Time Complexity | Optimized Performance | Notes |
|---|---|---|---|
| Insert | O(log n) | Up to 48% faster | Memory optimization + IndexMap |
| Get | O(1) | 25-36% faster | Optimized filters + memory optimization |
| Remove | O(n) | ~5% faster | Optimized memory layout |
| List | O(n) | 3-6% faster | Optimized filters |
| TTL Check | O(1) | Minimal overhead | Cached timestamps |
| Contains Key | O(1) | 1-6% faster | IndexMap + memory layout benefits |
Benchmark environment:

- OS: Linux (optimized build)
- CPU: Modern x86_64 architecture
- RAM: 16GB+
- Rust: 1.87.0
- Date: August 2025
| Operation | Cache Size | Time (v0.4) | Time (v0.3) | Notes |
|---|---|---|---|---|
| Get | 10 | 73.9ns | 108ns | Memory and filter optimization |
| Get | 100 | 78.4ns | 123ns | Excellent scaling with optimizations |
| Get | 1,000 | 79.7ns | 107ns | Consistent sub-80ns performance |
| Get | 10,000 | 106.7ns | 109ns | Maintains performance at scale |
| Insert | 10 | 203.4ns | 302ns | Memory optimization benefits |
| Insert | 100 | 230.6ns | 350ns | Memory optimization impact |
| Insert | 1,000 | 234.1ns | 378ns | Significant improvement |
| Insert | 10,000 | 292.3ns | 566ns | Dramatic performance gain |
| Contains Key | 10 | 33.6ns | 35ns | IndexMap benefits |
| Contains Key | 100 | 34.9ns | 37ns | Consistent improvement |
| Contains Key | 1,000 | 36.8ns | 37ns | Maintained performance |
| Contains Key | 10,000 | 47.4ns | 49ns | Scaling improvement |
| List (no filter) | 1,000 items | 28.6µs | 30.4µs | Optimized filters + memory optimization |
| List (prefix filter) | 1,000 items | 28.0µs | 29.1µs | Optimized prefix matching |
| List (suffix filter) | 1,000 items | 41.1µs | 42.2µs | Optimized suffix matching |
| LRU Eviction | 100 capacity | 609ns | 613ns | Memory layout benefits |
| Insert with TTL | Any | 97.6ns | 98ns | Timestamp caching |
| Cleanup Expired | 500 items | 339ns | 338ns | Optimized batch processing |
| Get (TTL check) | Any | 73.9ns | 71ns | Efficient TTL validation |
Real-World Impact: The optimizations deliver the most significant benefits in production workloads with:
- Large cache sizes (1,000+ items)
- Frequent insert operations
- Pattern-heavy filtering operations
- Memory-constrained environments
Approximate memory usage (a worked estimate follows below):

- Base overhead: ~48 bytes per cache instance
- Per item: ~(key_size + value_size + 48) bytes (efficient memory layout)
- TTL overhead: +24 bytes per item with TTL
- Memory efficiency: Optimized data structures reduce overhead
- IndexMap advantage: Better cache locality, 10-15% faster iterations
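As a rough worked example using the per-item figures above (illustrative only; actual usage depends on the allocator and your key and value types), 10,000 items with 16-byte keys, 64-byte values, and TTLs come to roughly 1.5 MB:

```rust
// Back-of-the-envelope estimate based on the per-item figures listed above.
fn main() {
    let items: usize = 10_000;
    let avg_key_bytes = 16;
    let avg_value_bytes = 64;
    let per_item_overhead = 48; // from the list above
    let ttl_overhead = 24;      // only for items that carry a TTL

    let total = 48 // base overhead per cache instance
        + items * (avg_key_bytes + avg_value_bytes + per_item_overhead + ttl_overhead);

    println!("~{} bytes (~{:.2} MiB)", total, total as f64 / (1024.0 * 1024.0));
}
```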
Check out the `examples/` directory for more comprehensive examples:

```bash
# Run the TTL example
cargo run --example ttl_example
# Run the persistence example
cargo run --example test_persist --features persist
# Run the interactive TUI with persistence
cargo run --example tui_interactive --features tui-example
```

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

```bash
# Clone the repository
git clone https://github.com/phlowdotdev/quickleaf.git
cd quickleaf
# Run tests
cargo test
# Run examples
cargo run --example ttl_example
# Run benchmarks to validate optimizations
cargo bench
# Check formatting
cargo fmt --check
# Run clippy
cargo clippy -- -D warnings
# Test with all features
cargo test --all-features
```

When contributing performance improvements:

```bash
# Benchmark before changes
cargo bench > before.txt
# Make your changes...
# Benchmark after changes
cargo bench > after.txt
# Compare results
# Ensure no regressions and document improvements
```

- Measure First: Always benchmark before and after changes
- Maintain Compatibility: New optimizations should not break existing APIs
- Document Benefits: Include performance impact in pull request descriptions
- Test Thoroughly: Ensure optimizations work across different platforms
- Graceful Fallbacks: Provide safe alternatives for unsupported systems
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Made with ❤️ by the phlow.dev team
Quickleaf v0.4+ features advanced performance optimizations, including optimized string filters and TTL handling, delivering up to 48% faster operations while maintaining full API compatibility.