This post reviews two LLM options in Emacs and how I set them up.

Ellama

From ellama

Ellama is a tool for interacting with large language models from Emacs. It allows you to ask questions and receive responses from the LLMs. Ellama can perform various tasks such as translation, code review, summarization, enhancing grammar/spelling or wording and more through the Emacs interface. Ellama natively supports streaming output, making it effortless to use with your preferred text editor.

The name “ellama” is derived from “Emacs Large LAnguage Model Assistant”. Previous sentence was written by Ellama itself.

Ellama offers a frontend to LLMs for diverse purposes. In a buffer it can correct grammar mistakes, enhance wording (quite useful for a non-native English speaker!), answer general questions, assist in writing or editing code, and more. Ellama can also be fed the current buffer or specific files when asking questions.
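Most of these tasks map onto dedicated interactive commands. A minimal binding sketch, assuming Ellama's standard command names (the keys are my own choice, not defaults):

;; Illustrative bindings; the commands are Ellama's standard interactive
;; entry points, the keys are arbitrary.
(global-set-key (kbd "C-c l g") #'ellama-improve-grammar) ; fix grammar in the region
(global-set-key (kbd "C-c l w") #'ellama-improve-wording) ; rephrase the region
(global-set-key (kbd "C-c l r") #'ellama-code-review)     ; review the selected code
(global-set-key (kbd "C-c l a") #'ellama-ask-about)       ; ask about the buffer/region
(global-set-key (kbd "C-c l c") #'ellama-chat)             ; free-form question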

Overall, I like Ellama a lot, relying on it for roughly 90% of my LLM needs at the time of writing. One thing I particularly appreciate is how easy it is to switch out models, which makes LLM experimentation fun.

One aspect I like less about Ellama is replicating a conversational 'chat' experience, an area where gptel is better. Continuing a conversation requires re-invoking ellama-chat, which takes its input from the echo area and is not very editing-friendly.

Initially, Ellama's support was limited to locally hosted models via the Ollama API. However, with its move to the llm Emacs package, it now also supports OpenAI-compatible APIs, a standard adopted by nearly every provider.
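For reference, a minimal provider setup might look like the following; this is a sketch with placeholder model names, using the llm package's make-llm-ollama constructor for a locally hosted model:

(require 'llm-ollama)

;; Sketch: a default Ellama provider backed by a local Ollama model.
;; The model names are placeholders; any installed model works.
(setq ellama-provider
      (make-llm-ollama :chat-model "llama3"
                       :embedding-model "nomic-embed-text"))

OpenAI-compatible endpoints work the same way through make-llm-openai-compatible, as shown further below.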

Ellama with Doom Emacs

By default Ellama saved its sessions into my .doom.d/ folder. Doom Emacs uses a variant of Org mode for documentation in there, which made Emacs hang when Ellama was used. The solution is to configure ellama-sessions-directory to point somewhere else, like:

(setq ellama-sessions-directory "~/.emacs.d/.local/cache/ellama-sessions")
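If Ellama does not create that directory for you, it is easy to make sure it exists:

;; Create the sessions directory (and any missing parents) if needed.
(make-directory ellama-sessions-directory t)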

gptel

From gptel

gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.

Unlike Ellama, which leverages the llm Emacs package behind the scenes, gptel has its own backend code for talking to multiple LLM providers. gptel's functionality is more limited than Ellama's, or at least a bit more 'manual': it just provides a chat interface to LLMs. However, it does feel more like a real chat than Ellama. gptel features a menu for changing the instructions, context, providers, and input and output parameters of requests. It works well, but I find that doing things through this menu integrates less well into my workflow.
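For reference, the usual entry points are the gptel, gptel-send and gptel-menu commands; a minimal binding sketch (the keys are arbitrary, not gptel defaults):

;; Illustrative bindings for gptel's standard commands.
(global-set-key (kbd "C-c g g") #'gptel)       ; open a dedicated chat buffer
(global-set-key (kbd "C-c g s") #'gptel-send)  ; send the buffer/region up to point
(global-set-key (kbd "C-c g m") #'gptel-menu)  ; transient menu: model, context, etc.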

I also like to change LLM providers often, and out of the box gptel's menu makes that take a few more steps.

Adding all OpenRouter and Ollama models to gptel and Ellama

As already mentioned, I like playing with multiple LLMs. OpenRouter provides access to a lot of different models, and I want access to all of them in Emacs. Neither gptel nor Ellama can access all of them by default, so I had Ellama write me some code to achieve this.

Ellama with OpenRouter models

First we need to get a list of the model names and IDs from OpenRouter. OpenRouter has an API for that at https://openrouter.ai/api/v1/models

This is the Elisp code to fetch them as pairs of name and ID:

(require 'json)
(require 'url)

(defun fetch-openrouter-models ()
  "Return the list of OpenRouter models as (NAME . ID) pairs."
  (with-current-buffer
      (url-retrieve-synchronously "https://openrouter.ai/api/v1/models")
    ;; Skip the HTTP response headers before parsing the JSON body.
    (goto-char url-http-end-of-headers)
    (let* ((json-object-type 'alist)
           (json-data (json-read))
           (models (alist-get 'data json-data)))
      (mapcar (lambda (model)
                (cons (alist-get 'name model)
                      (alist-get 'id model)))
              models))))
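Note that this queries the OpenRouter endpoint every time it is evaluated. If that bothers you, a small cache helps; a sketch (the my/ names are my own, not part of Ellama or gptel):

;; Optional: fetch the model list only once per Emacs session.
(defvar my/openrouter-models nil
  "Cached list of (NAME . ID) pairs fetched from OpenRouter.")

(defun my/openrouter-models ()
  "Return the OpenRouter model list, fetching it on first use."
  (or my/openrouter-models
      (setq my/openrouter-models (fetch-openrouter-models))))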

Adding external LLM providers looks like this (get-auth-info is a helper that reads the API key from .authinfo.gpg):

(setq ellama-providers
      '(("deepseek-chat" . (make-llm-openai-compatible
                            :key (get-auth-info
                                  :host "api.deepseek.com"
                                  :user "apikey")
                            :url "https://api.deepseek.com/"
                            :chat-model "deepseek-chat"))))
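The original get-auth-info is not shown in this post; a minimal reconstruction of such a helper, built on Emacs' auth-source and matching the keyword-argument calls above, could look like this:

(require 'auth-source)
(require 'cl-lib)

;; My reconstruction of the helper used above, not the post's original code:
;; look up a secret in auth-source (e.g. ~/.authinfo.gpg) by host and user.
(cl-defun get-auth-info (&key host user)
  "Return the secret stored for HOST and USER in auth-source."
  (let* ((entry (car (auth-source-search :host host :user user :max 1)))
         (secret (plist-get entry :secret)))
    (if (functionp secret) (funcall secret) secret)))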

Now we need to add all the models returned by that fetch-openrouter-models call.

(defmacro add-openrouter-model (name model-id)
  "Build a (NAME . PROVIDER) pair for the OpenRouter model MODEL-ID."
  `(cons ,name (make-llm-openai-compatible
                :key (get-auth-info
                      :host "openrouter.ai"
                      :user "apikey")
                :url "https://openrouter.ai/api/v1"
                :chat-model ,model-id)))

(setq ellama-providers
      (mapcar (lambda (model)
                (add-openrouter-model (car model) (cdr model)))
              (fetch-openrouter-models)))
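With ellama-providers populated like this, switching between the OpenRouter models is just a matter of calling Ellama's provider selector, which completes over the list above:

;; Pick one of the providers defined above with completion.
;; M-x ellama-provider-select, or bind it to a key (this binding is arbitrary).
(global-set-key (kbd "C-c l p") #'ellama-provider-select)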

Here is the end result.

gptel with OpenRouter models

Now the same can be achieved for gptel. gptel supports adding multiple models from the same provider in one place, as a list passed to the :models parameter.

(gptel-make-openai "OpenRouter"         ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key (gptel-api-key-from-auth-source "openrouter.ai")
  :models (mapcar #'cdr (fetch-openrouter-models)))
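Since gptel-make-openai returns the backend it registers, the same call can also be captured to make OpenRouter the default instead of selecting it from the menu; a sketch, using the first fetched model as an arbitrary default:

;; Sketch: make OpenRouter the default backend and pick the first fetched
;; model as the default; choose any other ID from fetch-openrouter-models.
(setq gptel-backend (gptel-make-openai "OpenRouter"
                      :host "openrouter.ai"
                      :endpoint "/api/v1/chat/completions"
                      :stream t
                      :key (gptel-api-key-from-auth-source "openrouter.ai")
                      :models (mapcar #'cdr (fetch-openrouter-models)))
      gptel-model (cdr (car (fetch-openrouter-models))))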

BONUS: gptel with all Ollama models

Ellama can already get all the Ollama models currently installed, but gptel needs an explicit list.

(defun get-ollama-models ()
  "Fetch the list of installed Ollama models."
  (let* ((output (shell-command-to-string "ollama list"))
         (lines (split-string output "\n" t))
         models)
    (dolist (line (cdr lines))  ; Skip the first line
      (when (string-match "^\\([^[:space:]]+\\)" line)
        (push (match-string 1 line) models)))
    (nreverse models)))

(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models (get-ollama-models))          ;List of models