# getting started
```sh
$ llama.cpp
# ^^ default chat prompt with the OpenLLaMA model
```
If you pass your own arguments to `llama.cpp`, chat mode is skipped.
To use a different model, specify `--model`.
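For example, a run with custom arguments might look like this (the model path and prompt are illustrative placeholders, not files we ship):

```sh
# skip chat mode by supplying your own args;
# point --model at any compatible model file you have locally
$ llama.cpp --model path/to/your/model.gguf --prompt "Hello"
```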
# converting your own models
We provide a working `convert.py` from the llama.cpp project. To use it, run
it inside a tea package environment (pkgenv):
```sh
tea +github.com/ggerganov/llama.cpp convert.py path/to/your/model
```