llama.cpp
https://github.com/ggerganov/llama.cpp
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
- Sources inherited from project home:adrianSuSE
- Links to science:machinelearning / llamacpp
- Has a link diff
Checkout Package
osc -A https://api.opensuse.org checkout home:adrianSuSE:link/llamacpp && cd $_
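The `cd $_` at the end of the checkout line relies on a bash/zsh idiom: `$_` expands to the last argument of the previous command, so the shell changes into the directory osc just created. A minimal sketch of that idiom, using `mkdir -p` as a stand-in since checking out the real package requires osc credentials and network access:

```shell
# $_ holds the last argument of the previous command (bash/zsh),
# so "cd $_" enters the directory named on the line before.
# mkdir -p stands in for the osc checkout here (assumption: no osc access).
mkdir -p home:adrianSuSE:link/llamacpp && cd "$_"
echo "${PWD##*/}"
```

Running this prints `llamacpp`, confirming the shell ended up inside the freshly created package directory.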
Source Files (merged sources derived from linked package)
Filename | Size
---|---
0001-dl-load-path.patch | 482 Bytes
0002-build-main-cli.patch | 2.12 KB
_link | 131 Bytes
_service | 814 Bytes
_servicedata | 240 Bytes
llamacpp-4589.obscpio | 78.7 MB
llamacpp.changes | 25.2 KB
llamacpp.obsinfo | 96 Bytes
llamacpp.spec | 5.03 KB