
getting started

$ llama.cpp
# ^^ launches the default chat prompt with the OpenLLaMA model

If you want to run llama.cpp with your own args, specify them and chat mode will be skipped.
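
For example, to generate from a single prompt instead of entering chat mode (the -p and -n flags come from llama.cpp itself; the prompt text is only illustrative):

$ llama.cpp -p "The capital of France is" -n 64
# ^^ any args you pass skip chat mode and go straight to llama.cpp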

If you want to use a different model, specify --model.
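
For instance (the model path below is hypothetical; point --model at any ggml-format model file you have):

$ llama.cpp --model path/to/other-model.bin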

converting your own models

We provide a working convert.py from the llama.cpp project. To use it, you need to launch it via a tea pkgenv:

$ tea +github.com/ggerganov/llama.cpp convert.py path/to/your/model
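
Once converted, point llama.cpp at the result with --model. A minimal sketch, assuming convert.py's default f16 output filename (adjust to whatever file it actually writes):

$ llama.cpp --model path/to/your/model/ggml-model-f16.bin
# ^^ chat with the freshly converted model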