@@ -26,19 +26,19 @@ conda create -n llama python=3.9.16
 conda activate llama
 ```
 
-** (4) Install the LATEST llama-cpp-python.. which, as of just today, happily supports MacOS Metal GPU**
+**(4) Install the LATEST llama-cpp-python... which happily supports macOS Metal GPU as of version 0.1.62**
 *(you need Xcode installed so that pip can build/compile the C++ code)*
 ```
 pip uninstall llama-cpp-python -y
 CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
 pip install 'llama-cpp-python[server]'
 
-# you should now have llama-cpp-python v0.1.62 installed
-llama-cpp-python 0.1.62
+# you should now have llama-cpp-python v0.1.62 or higher installed
+llama-cpp-python 0.1.68
 
 ```
 
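The "v0.1.62 or higher" check above can be automated. Below is a minimal sketch of a pure-Python version comparison; the helper name `version_at_least` is mine, not part of llama-cpp-python, and it assumes simple dotted numeric versions like those shown in the diff.

```python
# Sketch: confirm an installed llama-cpp-python version meets the
# Metal-support minimum (0.1.62). Hypothetical helper, not library API.
def version_at_least(installed: str, minimum: str = "0.1.62") -> bool:
    # Compare versions as tuples of integers, e.g. "0.1.68" -> (0, 1, 68)
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)

print(version_at_least("0.1.68"))  # → True
print(version_at_least("0.1.50"))  # → False
```

In practice you would feed it the version string reported by `pip show llama-cpp-python`.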
-** (4) Download a v3 ggml model**
+**(5) Download a v3 ggml model**
 - **ggmlv3**
 - file name ends with **q4_0.bin**, indicating it is 4-bit quantized with quantisation method 0
4444
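The naming convention described above can be checked programmatically. A small sketch, assuming the convention holds (the example filename `llama-7b.ggmlv3.q4_0.bin` and the helper `parse_ggml_name` are hypothetical illustrations, not part of any library):

```python
# Sketch: extract ggml version and quantisation details from a model
# filename following the "ggmlv3 ... q4_0.bin" convention described above.
import re

def parse_ggml_name(filename: str):
    # Matches e.g. "llama-7b.ggmlv3.q4_0.bin" (hypothetical example name)
    m = re.search(r"ggmlv(\d+)\.q(\d+)_(\d+)\.bin$", filename)
    if not m:
        return None
    return {
        "ggml_version": int(m.group(1)),  # 3 -> ggmlv3
        "bits": int(m.group(2)),          # 4 -> 4-bit quantized
        "method": int(m.group(3)),        # 0 -> quantisation method 0
    }

print(parse_ggml_name("llama-7b.ggmlv3.q4_0.bin"))
# → {'ggml_version': 3, 'bits': 4, 'method': 0}
```

A quick filter like this helps confirm a downloaded file is the v3, 4-bit variant the guide expects before loading it.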