Repositories
- kronk (Public)
  This project lets you use Go for hardware-accelerated local inference, with llama.cpp integrated directly into your applications via the yzma module. Kronk provides a high-level API that feels similar to using an OpenAI-compatible API, and it also provides a model server for running local workloads (see the sketch after this list).
- kronk_catalogs (Public)
  A catalog of models that have been verified to be compatible with Kronk. Kronk can work with any GGUF-based model capable of Text Generation, Audio/Image Text to Text, or Embedding.
- yzma (Public, forked from hybridgroup/yzma)
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.
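
As a rough illustration of the "OpenAI-compatible API" style that the kronk description refers to, the sketch below posts a chat-completion request to a locally running model server from Go. The server address, port, and model name are assumptions made for this example, not documented Kronk defaults; the `/v1/chat/completions` path and the request/response shapes follow the standard OpenAI chat-completions API rather than anything Kronk-specific.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Request/response shapes follow the standard OpenAI chat-completions API.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatResponse struct {
	Choices []struct {
		Message message `json:"message"`
	} `json:"choices"`
}

func main() {
	// Assumed local server address -- adjust to wherever your model server listens.
	const url = "http://localhost:8080/v1/chat/completions"

	reqBody, err := json.Marshal(chatRequest{
		Model: "llama-3.2-3b-instruct", // hypothetical model name, substitute one from your catalog
		Messages: []message{
			{Role: "user", Content: "Summarize what GGUF models are in one sentence."},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post(url, "application/json", bytes.NewReader(reqBody))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```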