This is one among many items I will regularly tag in Pinboard as oegconnect, and automatically post tagged as #OEGConnect to Mastodon. Do you know of something else we should share like this? Just reply below and we will check it out.
My favorite resources and recommendations for open source LLMs:
Power users may enjoy ollama (command line) + ollama-webui (a web UI for ollama, similar to the ChatGPT web interface). Ollama lets you automagically run tons of LLMs while hiding away all unnecessary details. That said, it does give you the power tools if you want them: for example, you can customize the system prompt and choose exactly how the model is run (on the CPU / GPU, and so on). For those familiar with Docker, Ollama's Modelfile looks a lot like a Dockerfile. There are macOS + Linux installers available today, with a native Windows installer "coming soon" (you can install it on Windows via WSL today if you need to).
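To give a feel for the Dockerfile resemblance, here is a minimal Modelfile sketch. The base model name, parameter value, and system prompt are my own illustrative choices, not anything from Ollama's defaults:

```
# Hypothetical Modelfile: model name and prompt are examples only
FROM llama2

# Sampling parameter (syntax per Ollama's Modelfile format)
PARAMETER temperature 0.7

# Custom system prompt baked into the derived model
SYSTEM You are a concise assistant for open education questions.
```

You would then build and run it with `ollama create edu-helper -f Modelfile` followed by `ollama run edu-helper` (the name `edu-helper` is just a placeholder).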
For those who want LLMs to "just work" and never want to touch the command line, there is https://jan.ai/ (despite the name, I am not affiliated!). It runs on the desktop (Mac, Windows, Linux) and is super user friendly. As a bonus, it exposes an OpenAI-compatible API on localhost, which means you can use Jan as a back-end for other tools. Because it is a user-friendly click-to-install app that appeals to a broad audience, Jan has great potential to become very popular.
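Because the local API speaks the OpenAI chat-completions dialect, any OpenAI-style client code can point at it. Here is a small Python sketch using only the standard library; the port (1337) and model name are assumptions you should check against your own Jan settings:

```python
import json
import urllib.request

# Assumption: Jan's local API server is enabled and listening on port 1337
# (verify the port in Jan's settings) and a model is already loaded.
JAN_URL = "http://localhost:1337/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "mistral-ins-7b-q4") -> dict:
    """Build an OpenAI-style chat payload (the model name is a placeholder)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(prompt: str, model: str = "mistral-ins-7b-q4") -> str:
    """POST the payload to the local server and return the assistant's reply."""
    payload = json.dumps(build_chat_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        JAN_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With Jan's server running, `chat("Hello!")` returns the model's reply as a string; swapping `JAN_URL` for another local server's endpoint is all it takes to retarget the same code.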
Ollama can run a ton of open source LLMs: a peek at their model library gives you a quick overview of what is popular and what might work well for your use case, and therefore might be worth installing. Jan.ai has a similar built-in library.
Some YouTube channels I found useful re: AI & open source:
Ollama / Jan.ai can be used as ChatGPT alternatives running on your local computer. I mostly use the text / code generation features, but I expect the primary usage to shift towards extracting knowledge from existing documents and media, chatting with my personal knowledge base, and using agents for more complex workflows. Educators can probably transfer a lot of their existing ChatGPT use cases to local LLMs, but they need to be careful about hallucinations: compact models have a smaller knowledge base, so it's no surprise that they fill in the blanks by making things up.
Small open source LLMs are a lot less capable than the paid GPT-4, but I like their focused capabilities (some are trained for a narrow use case instead of general knowledge of everything), their programmability, and the ability to upload my own data without sending it to the internet. I also think it's helpful to gain understanding by tinkering with them locally instead of only using a finished, polished commercial product on the web. Paying only a flat fee for electricity instead of pay-per-use is also nice.
From a philosophical perspective, I believe it is healthy for society not to rely on a few gigacorps for AI but to have as much autonomy and independence as possible (sound familiar, open educators?). We will only have independence and sovereignty if we use open tools (echoing the sentiment of @moodler here).
Also, by using open tools and open source LLMs, I like to think I contribute a small bit towards their wider acceptance, which will hopefully help make them better. We can't outcompete big corps in raw hardware performance and scale, but we can outcompete them in ingenuity and efficiency. They have no moat. Today's open source LLMs are already as capable as some of the top-tier commercial models from just a few months ago. And this is just the beginning.