# getting started

```sh
$ pkgx +brewkit -- run llama.cpp
# ^^ default chat prompt with an appropriate hugging face model
```
If you want to run llama.cpp with your own args, `pkgx llama.cpp $ARGS` is your friend.
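For example (a sketch only — the model path below is a placeholder, and `-m`, `-p`, and `-n` are the usual llama.cpp flags for model file, prompt, and token count):

```sh
# run llama.cpp directly with custom arguments
# ~/models/llama-2-7b.Q4_K_M.gguf is a placeholder — substitute your own model file
pkgx llama.cpp \
  -m ~/models/llama-2-7b.Q4_K_M.gguf \
  -p "Write a haiku about package managers" \
  -n 64
```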
# converting your own models
We provide a working `convert.py` from the llama.cpp project. To use it you need to launch it via a pkgx pkgenv:

```sh
pkgx +llama.cpp -- convert.py path/to/your/model
# ^^ the -- is necessary since `convert.py` is not listed in the llama.cpp
# provides list
```
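Once converted, the resulting GGUF file can be passed back to `llama.cpp` with `-m`. A sketch, assuming the default output location used by upstream `convert.py` (a `ggml-model-*.gguf` file written next to the source checkpoint — your filename may differ):

```sh
# convert the checkpoint, then chat with the converted model
# path/to/your/model is a placeholder for your original checkpoint directory
pkgx +llama.cpp -- convert.py path/to/your/model
pkgx llama.cpp -m path/to/your/model/ggml-model-f16.gguf -p "Hello"
```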