Jacob Heider
5729b3ca1c
fix(llama.cpp)
closes #5628
closes #5630
closes #5631
closes #5632
closes #5636
closes #5637
2024-03-18 17:06:15 -04:00
Jacob Heider
f35308a401
fix(llama.cpp)
convert to `bkpyvenv`
closes #4721
closes #4722
closes #4723
2024-01-04 13:09:41 -05:00
Jacob Heider
311c04c3fc
fix(llama.cpp)
closes #4689
closes #4688
closes #4687
closes #4686
closes #4675
2024-01-02 10:20:42 -05:00
Jacob Heider
c0973b0a46
fix(llama.cpp)
closes #4606
2023-12-28 15:11:56 -05:00
Jacob Heider
3adaa94a1f
fix(llama.cpp)
closes #3915
closes #3917
closes #3919
closes #3923
closes #3924
closes #3926
closes #3927
closes #3928
closes #3929
closes #3931
2023-11-01 23:19:54 -04:00
Max Howell
a5a1bd7b12
Use recommended model
2023-10-26 14:29:00 -04:00
Max Howell
a40a0d8fc8
fix llama.cpp model download
2023-10-26 08:13:43 -04:00
James Reynolds
2b06942c62
GitHub.com/ggerganov/llama.cpp update (#3696)
* llama.cpp, github version instead of hardcoded version
* llama.cpp, check if model is specified, if yes, run it, if not, then download model
* Use entrypoint for custom llama.cpp invocation
* `llama.cpp` is just raw executable. This I think is our new pattern.
* To run chat use the entrypoint: `pkgx +brewkit -- run llama.cpp`
Co-authored-by: James Reynolds <magnsuviri@me.com>
Co-authored-by: Max Howell <mxcl@me.com>
2023-10-26 07:24:04 -04:00
Jacob Heider
587441621e
don't use python3.12 widely yet
2023-10-03 11:23:56 -04:00
Max Howell
81e7a5e16f
pkgx
2023-10-01 14:44:42 -04:00
Max Howell
9c56216f04
entrypoint for agpt.co
2023-07-30 08:01:11 -04:00
Max Howell
1554f7e49d
fix model download url
2023-07-24 16:54:33 -04:00
Max Howell
7c803208a2
update llama.cpp; use OpenLLaMA (#2655)
2023-07-24 16:43:32 -04:00
James Reynolds
9ef056e74a
Updated to 2023.04.11 8b67998
2023-04-13 07:52:27 -04:00
Max Howell
96857e732b
+llama.cpp
2023-03-28 08:48:03 -04:00
Max Howell
d8a3e7c646
+llama.cpp (#844)
llama.cpp -p "Getting paid to write open source can be accomplished in 3 simple steps:"
2023-03-24 17:53:39 -04:00