
Desktop LLM apps with openSUSE support

Based on the search results, there are several options for running LLMs locally on openSUSE Linux, with Ollama being the most recently packaged option:

Ollama

Ollama is now officially packaged for openSUSE Tumbleweed as of February 27, 2024[5]. You can install it using:

sudo zypper ref && sudo zypper in ollama

Note that the packaged version currently doesn't support GPU acceleration. If you need GPU support, you should install directly from ollama.com instead[5].
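Once installed, usage follows the standard Ollama CLI workflow. A minimal sketch, assuming the package ships the usual ollama.service systemd unit; the model name is illustrative and any model from the Ollama library can be substituted:

sudo systemctl enable --now ollama   # start the Ollama server (service name assumed from the usual packaging)
ollama pull llama3.2                 # download a model; the name is illustrative
ollama run llama3.2                  # interactive chat in the terminal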

Other Local LLM Options

LM Studio

  • Provides an easy-to-use interface for running LLMs locally
  • Supports GGUF-format models such as Llama 3.1, Phi 3, Mistral, and Gemma
  • Requires an Apple Silicon (M1/M2/M3) Mac or a Windows PC with AVX2 support[4]

LocalAI

  • Open-source, self-hostable alternative to the OpenAI API
  • Can be installed using Docker Compose (a container-based sketch follows this list)
  • Provides OpenAI-compatible APIs
  • Includes a web interface[1]
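A minimal sketch of running a CPU-only LocalAI container and querying its OpenAI-compatible endpoint; the image tag, port, and model alias below are assumptions, so check the LocalAI documentation for current values:

docker run -d --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu   # image tag assumed
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'   # model alias depends on configuration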

OpenLLM

  • Allows running open-source LLMs with OpenAI-compatible APIs
  • Installed via pip: pip install openllm (a usage sketch follows this list)
  • Supports models like Llama 3.2, Qwen 2.5, and Phi 3
  • Features built-in chat UI[3]
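A rough sketch of the workflow; OpenLLM's subcommands, default port, and model identifiers have changed between releases, so treat these values as assumptions to be checked against the current OpenLLM documentation:

pip install openllm
openllm serve llama3.2:1b   # model ID is illustrative; older releases used 'openllm start'
curl http://localhost:3000/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3.2:1b", "messages": [{"role": "user", "content": "Hello"}]}'   # port assumed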

Llamafile

  • Backed by Mozilla
  • Runs on multiple platforms including Linux
  • Requires no installation (a usage sketch follows this list)
  • Uses tinyBLAS to run without requiring vendor GPU SDKs[4]
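The typical Linux workflow is to download a single llamafile, mark it executable, and run it. The file name and URL below are placeholders (real download links are listed in the llamafile project's README), and some distributions may need the binfmt_misc workaround described there:

wget https://example.com/model.llamafile   # placeholder URL; use a real link from the llamafile project
chmod +x model.llamafile
./model.llamafile                          # launches a local web UI with an OpenAI-compatible endpoint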

For openSUSE specifically, Ollama is currently the best-integrated option thanks to official package support, though the packaged version has the GPU-acceleration limitation noted above.


The above text was generated by a large language model (LLM) and its accuracy has not been validated. This page is part of 'LLMs-on-LLMs', a GitHub repository by Daniel Rosehill which explores how curious humans can use LLMs to better their understanding of LLMs and AI. The information should not be regarded as authoritative, and given the fast pace of evolution in LLM technology it will eventually become outdated. This footer was added on 16-Nov-2024.