1 parent 627811e commit 3adc8fb
README.md
@@ -68,18 +68,9 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible client.

To install the server package and get started:

-Linux/MacOS
```bash
pip install llama-cpp-python[server]
-export MODEL=./models/7B/ggml-model.bin
-python3 -m llama_cpp.server
-```
-
-Windows
-```cmd
-pip install llama-cpp-python[server]
-SET MODEL=..\models\7B\ggml-model.bin
+python3 -m llama_cpp.server --model models/7B/ggml-model.bin
```

Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
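As an aside (not part of the commit itself): once the single command above has started the server, it should answer OpenAI-style HTTP requests on port 8000. A minimal smoke test, assuming the server exposes the usual `/v1/completions` route, might look like:

```bash
# Hedged sketch, not part of this commit: query the locally running
# llama_cpp.server through its OpenAI-compatible completions endpoint
# (assumed to be /v1/completions on the default port 8000).
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Q: Name the planets in the solar system. A:", "max_tokens": 32}'
```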