Chat Applications
There are a number of applications that can be installed locally on your computer and offer a ChatGPT- or NotebookLM-inspired interface. Some also allow you to create your own local knowledge database using Retrieval-Augmented Generation (RAG). RAG typically allows you to “chat with your documents” (a minimal sketch of the idea follows the table below).
DTU has no explicit licenses for these tools and has not evaluated their risks and potential applicability.
Some tools allow you to run the entire LLM offline, i.e. on your own computer (provided you have enough memory and compute power for practical use). Note, however, that if the application uses an external online LLM through its application programming interface (API), some of your data will leave your computer and be sent to that LLM for processing.
Some relatively easy-to-use applications that can be installed locally on modern Mac, Windows or Linux systems with sufficient memory (more than 8–16 GB of RAM, depending on the model you use) are listed below. Most support a local database (RAG). Some also support adding other tools that, for example, search the web automatically for information to include with your prompt.
| Name | RAG | Free tier | Offline | Online | Comments |
|---|---|---|---|---|---|
| Jan | (Yes) | Open source | Yes | Yes | |
| OpenWebUI | Yes | Open source | Yes* | Yes | Local web service |
| MSTY | Yes | Personal use | Yes | Yes | |
| LM Studio | Yes | Personal use | Yes | No | Not single-click install |
| LibreChat | Plug-in | Open source | Yes | Yes | Not single-click install |
| AnythingLLM | Yes | Open source | Yes | Yes | |
| Klee | Yes | Open source | Yes | No | NotebookLM-like; no Linux |
| Ollama | No | Open source | Yes | N/A | LLM server and CLI only |
* Offline (local) LLM models must be installed and managed by another tool, such as Ollama.
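To make “chat with your documents” concrete, the sketch below shows the retrieve-then-generate loop these tools implement internally, here using Ollama’s local HTTP API as the backend. The model names and the tiny two-snippet “database” are assumptions for illustration only:

```python
import requests

OLLAMA = "http://localhost:11434"  # assumes a local Ollama server is running
EMBED_MODEL = "nomic-embed-text"   # assumed; install with: ollama pull nomic-embed-text
CHAT_MODEL = "llama3.2"            # assumed; install with: ollama pull llama3.2

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint turns a piece of text into a vector.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    # Similarity between two vectors: higher means more related text.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm

# 1. Index: embed each document snippet once (placeholder content).
docs = ["DTU was founded in 1829.",
        "Ollama serves local models on port 11434."]
index = [(doc, embed(doc)) for doc in docs]

# 2. Retrieve: find the snippet most similar to the question.
question = "When was DTU founded?"
q_vec = embed(question)
best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

# 3. Generate: put the retrieved snippet into the prompt ("augmentation").
prompt = f"Answer using this context:\n{best_doc}\n\nQuestion: {question}"
r = requests.post(f"{OLLAMA}/api/generate",
                  json={"model": CHAT_MODEL, "prompt": prompt, "stream": False})
print(r.json()["response"])
```

Real applications add document chunking, a proper vector store and re-ranking, but the flow is the same.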
Online LLM API
There are many options for API access to online/external LLM providers that can be used with some of the tools above, including OpenAI, Anthropic, Mistral, Groq, Grok, etc.
Most require a subscription, although Groq (not Grok) currently also provides a free tier for API access. DTU does not currently provide a license to any of these.
Note that when using an external (online) service, your data is sent to the service and could be stored or used for later training.
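As an illustration, many providers (Groq included) expose an OpenAI-compatible endpoint, so the same few lines of Python work across them. This is a minimal sketch; the base URL, model name and environment-variable name are assumptions to check against your provider’s documentation:

```python
import os
from openai import OpenAI  # pip install openai

# Many providers accept the OpenAI client if you point it at their endpoint.
# The base URL below is Groq's OpenAI-compatible endpoint (an assumption to
# verify); the API key is read from an environment variable, never hard-coded.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ["GROQ_API_KEY"],  # assumed variable name
)

# Everything in `messages` is sent to the provider's servers, so keep
# sensitive data out unless you have checked their data-handling terms.
reply = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # assumed model name; check the provider's list
    messages=[{"role": "user", "content": "Summarise RAG in one sentence."}],
)
print(reply.choices[0].message.content)
```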
Plug-Ins
Another way to access a local (offline) or external (online) LLM is through a plug-in in, for example, your browser or coding tool.
Page Assist is a browser plug-in (for Chrome, Firefox and Edge) that allows you to chat with the currently displayed page (or just chat in general) using a local (or remote) LLM.
For coding tools like VS Code and JetBrains IDEs, continue.dev can be used to interface with local language models.
Running a Local Language Model
Most of the applications mentioned above can directly manage (download and update) and run language models locally. Some can also use already installed local language models. Models are typically downloaded from either Ollama or Hugging Face.
Obviously, running very large models (>70 GB) locally is not feasible for most people, but doing so has the advantage that you can ensure no sensitive data leaves your computer. And in some cases a small model can be good enough for your application, or even better if it allows a significantly larger context (i.e. it can hold more “words” (tokens) when replying to your prompt).
Ollama is a simple way to manage and run language models locally using a command-line interface that can also be used to prompt the model(s). It can be used with applications like OpenWebUI, which do not themselves manage and run local language models, or with a suitable browser or coding-tool plug-in such as Page Assist or continue.dev.
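Besides the command line (`ollama run <model>` after an `ollama pull <model>`), Ollama exposes a local HTTP API on port 11434, which is what applications like OpenWebUI talk to. A minimal sketch in Python (the model name is an assumption; use any model you have pulled):

```python
import requests

# Assumes Ollama is installed and a model has been pulled first, e.g. in a
# terminal: ollama pull llama3.2. The Ollama server listens on port 11434.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # assumed model name; any pulled model works
        "prompt": "Explain RAG in two sentences.",
        "stream": False,      # return one JSON object instead of a stream
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```

Because the prompt only travels to localhost, nothing in this exchange leaves your computer.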