llama.cpp
https://github.com/ggerganov/llama.cpp
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
- Links to openSUSE:Factory / llamacpp
Checkout Package

```
osc -A https://api.opensuse.org checkout home:simotek:cmake4macro/llamacpp && cd $_
```
Source Files
| Filename | Size |
|---|---|
| _link | 699 Bytes |
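Since this package consists only of a `_link` file, its sources are pulled from the linked package at build time. The exact contents of this `_link` are not shown on the page, but given the link target listed above (openSUSE:Factory / llamacpp), a typical OBS `_link` file of this kind would look roughly like the following sketch (attributes here are illustrative, not the actual file):

```xml
<!-- Hypothetical _link contents: points this package at the
     llamacpp package in openSUSE:Factory so OBS merges its
     sources into this project at build time. -->
<link project="openSUSE:Factory" package="llamacpp"/>
```

A `_link` may also carry a `<patches>` section to apply local changes on top of the linked sources; a bare link like the one above simply rebuilds the target package unchanged in this project.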