# getting started
```sh
$ pkgx +brewkit -- run llama.cpp
# ^^ opens the default chat prompt with an appropriate Hugging Face model
```
If you want to run `llama.cpp` with your own args, `pkgx llama.cpp $ARGS` is
your friend.
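
For example, a sketch of passing your own flags (the model path is illustrative; `-m`, `-p`, and `-n` are standard llama.cpp options, but check `pkgx llama.cpp --help` for the full list on your version):

```sh
# run llama.cpp directly with a local model, a prompt, and a token limit
# (path/to/model.gguf is a placeholder for your own model file)
pkgx llama.cpp -m path/to/model.gguf -p "Hello" -n 64
```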
# converting your own models
We provide a working `convert.py` from the llama.cpp project. To use it you
need to launch it via a tea pkgenv:
```sh
pkgx +llama.cpp -- convert.py path/to/your/model
# ^^ the -- is necessary since `convert.py` is not listed in the llama.cpp
# provides list
```
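
Putting the two steps together, a hypothetical end-to-end sketch (the output
filename is illustrative; `convert.py` prints the actual path it writes):

```sh
# 1. convert your model into a llama.cpp-compatible file
pkgx +llama.cpp -- convert.py path/to/your/model

# 2. run the converted model (substitute the filename convert.py reported)
pkgx llama.cpp -m path/to/your/model/ggml-model-f16.gguf -p "Hello"
```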