
Commit 3adc8fb

Update README to use cli options for server
1 parent 627811e commit 3adc8fb

1 file changed (+1, −10 lines)


README.md

Lines changed: 1 addition & 10 deletions
@@ -68,18 +68,9 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
 
 To install the server package and get started:
 
-Linux/MacOS
 ```bash
 pip install llama-cpp-python[server]
-export MODEL=./models/7B/ggml-model.bin
-python3 -m llama_cpp.server
-```
-
-Windows
-```cmd
-pip install llama-cpp-python[server]
-SET MODEL=..\models\7B\ggml-model.bin
-python3 -m llama_cpp.server
+python3 -m llama_cpp.server --model models/7B/ggml-model.bin
 ```
 
 Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
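For context on what the updated README instructions enable, below is a minimal sketch (not part of this commit) of how a client might query the server once it has been started with the `--model` flag shown above. It assumes the server is listening on the default port 8000 and exposes an OpenAI-compatible `/v1/completions` endpoint; the exact endpoint names and request fields should be verified against the OpenAPI docs at http://localhost:8000/docs.

```python
# Minimal sketch: query a locally running llama_cpp.server instance.
# Assumes the default host/port (http://localhost:8000) and the
# OpenAI-compatible /v1/completions endpoint; check the /docs page
# for the authoritative schema.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "prompt": "Q: Name the planets in the solar system. A: ",
        "max_tokens": 64,
        "stop": ["Q:", "\n"],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```

Because the server speaks the OpenAI API shape, any OpenAI-compatible client pointed at the local base URL should work the same way as this raw HTTP call.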

0 commit comments