# getting started

```sh
$ llama.cpp
# ^^ default chat prompt with the OpenLLaMA model
```
If you want to run llama.cpp with your own args, specify them and chat mode will be skipped.
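For example (a sketch; the flags below come from upstream llama.cpp's CLI, and the exact names may vary between versions):

```sh
$ llama.cpp -p "Why is the sky blue?" -n 128
# ^^ one-shot completion with your own prompt; interactive chat mode is skipped
```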
If you want to use a different model, specify `--model`.
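For instance (the path is a placeholder for your own weights file):

```sh
$ llama.cpp --model path/to/your/model
```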
# converting your own models
We provide a working `convert.py` from the llama.cpp project. To use it you need to launch it via a tea pkgenv:
```sh
tea +github.com/ggerganov/llama.cpp convert.py path/to/your/model
```
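Here the `+github.com/ggerganov/llama.cpp` prefix tells tea to add the llama.cpp package (and its dependencies, including the Python environment `convert.py` needs) to the command's environment; `path/to/your/model` is a placeholder for the weights you want to convert.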