llama.cpp
https://github.com/ggerganov/llama.cpp
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
Checkout Package

osc -A https://api.opensuse.org checkout home:lalala123:x86_succeed_pro/llamacpp && cd $_
Source Files
| Filename | Size |
|---|---|
| _service | 125 Bytes |
| llamacpp-6428.tar.gz | 24.5 MB |
| llamacpp.changes | 51.3 KB |
| llamacpp.spec | 6.13 KB |