Get up and running with Llama 2, Mistral, Gemma, and other large language models.
https://ollama.com
You can find a list of models available for use at https://ollama.com/library.
- Links to science:machinelearning / ollama
Checkout Package
osc -A https://api.opensuse.org checkout home:birdwatcher:machinelearning/ollama && cd $_
Source Files
Filename | Size
---|---
_link | 131 Bytes
_service | 802 Bytes
_servicedata | 234 Bytes
ollama-0.4.2.obscpio | 17 MB
ollama-add-install-targets.patch | 2.57 KB
ollama-lib64-runner-path.patch | 686 Bytes
ollama-pr7499.patch | 73.5 KB
ollama-use-external-cc.patch | 704 Bytes
ollama-user.conf | 158 Bytes
ollama-verbose-tests.patch | 352 Bytes
ollama.changes | 40.1 KB
ollama.obsinfo | 95 Bytes
ollama.service | 221 Bytes
ollama.spec | 4.57 KB
vendor.tar.zstd | 5.12 MB
Comments (1)
AMD users, install this one — it is the package with proper ROCm support. Thanks to birdwatcher for taking the time to make ollama and the ROCm modules fully available on Tumbleweed.
For the Radeon 780M, nothing needs to be modified to get it running. However, due to limitations imposed by ROCm (and perhaps by Ollama as well), you may be limited to 4096 MiB of VRAM: GTT reports 7000+ MiB of available memory, but ROCm only detects 4096 MiB and will crash on most 7B models, even with UMA set to 16G in the BIOS.
As a workaround, you need to set a custom GTT size as well as TTM pool and page pool sizes to use your whole available VRAM. Instructions here: https://www.reddit.com/r/ROCm/comments/1g3lnuj/rocm_apu_680m_and_gtt_memory_on_arch/
There's an open PR in Ollama's repository as well: https://github.com/ollama/ollama/pull/6282
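For reference, the kind of workaround described above is usually applied via kernel module parameters. The sketch below is an assumption based on the commonly used `amdgpu` and `ttm` module options, not taken from this package; the exact values depend on how much system RAM you want to expose (here 16 GiB, i.e. 4194304 pages of 4 KiB), so adjust them for your machine and consult the linked instructions:

```
# /etc/modprobe.d/amdgpu-gtt.conf — example values only, adjust for your RAM
# GTT size in MiB (16 GiB here)
options amdgpu gttsize=16384
# TTM limits in 4 KiB pages: 16 GiB / 4 KiB = 4194304
options ttm pages_limit=4194304
options ttm page_pool_size=4194304
```

After writing the file, regenerate the initrd (e.g. with `dracut -f` on openSUSE) and reboot for the new limits to take effect.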