Tagged for OEG Connect: 13 Best Open Source ChatGPT Alternatives

What’s of interest? 13 Best Open Source ChatGPT Alternatives

Tell me more!


Looking for open-source ChatGPT alternatives? We curated some of the best ones for you to take a look at.

ChatGPT is a powerful generative AI tool developed by OpenAI. You enter text prompts in a conversational way, and it gives you a detailed response.

Sure, it is not perfect, but it can be incredibly helpful. Unfortunately, no matter what you do with it, it is not an open-source solution.

As a proprietary option, it comes with limitations. So, what are some open-source ChatGPT alternatives? Here, let me highlight the best ChatGPT alternatives that are open source in nature.

h/t Martin Dougiamas (@martin@openedtech.social) - Open EdTech

Where is it?: 13 Best Open Source ChatGPT Alternatives


This is one among many items I will regularly tag in Pinboard as oegconnect, and automatically post tagged as #OEGConnect to Mastodon. Do you know of something else we should share like this? Just reply below and we will check it out.

There are some very nice picks in there! :slightly_smiling_face:

My favorite resources and recommendations for open source LLMs:

  1. Power users may enjoy ollama (command line) + ollama-webui (a web UI for ollama, similar to the ChatGPT web interface). Ollama lets you automagically run tons of LLMs while hiding away all the unnecessary details. That said, it still gives you the power tools if you want them: you can customize the system prompt and choose exactly how the model is run, e.g. on the CPU / GPU, and so on. For those familiar with Docker, Ollama’s Modelfile looks a lot like a Dockerfile (see the sketch after this list). There are macOS + Linux installers available today, with a native Windows installer “coming soon” (you can install it on Windows via WSL today if you need to).

  2. For those who want LLMs to “just work” and never want to touch the command line, there is https://jan.ai/ (despite the name, I am not affiliated!). :wink: It runs on the desktop (Mac, Windows, Linux) and is super user friendly. As a bonus, it exposes an OpenAI-compatible API on your localhost, which means you can use jan.ai as a back-end for other tools (see the example after this list). Because it is a user-friendly, click-to-install app that appeals to a wide audience, Jan has great potential to become very popular.

  3. Ollama can run a ton of open-source LLMs: a peek at their model library gives you a quick overview of what is popular and what might work well for your use case, and therefore might be worth installing. Jan.ai has a similar built-in library.

  4. Some YouTube channels I found useful re: AI & open source:
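
To make the Dockerfile comparison in point 1 concrete, here is a minimal sketch of an Ollama Modelfile. The base model name, parameter value, and system prompt are placeholders I made up; adapt them to whichever model you have actually pulled:

```
# Hypothetical Modelfile: build a custom assistant on top of a base model
# (assumes you have already run `ollama pull llama2` or similar)
FROM llama2

# Sampling parameter: lower values make answers more predictable
PARAMETER temperature 0.7

# Custom system prompt baked into the derived model
SYSTEM "You are a patient tutor. Explain concepts step by step for first-year students."
```

You would then register and run it with `ollama create tutor -f Modelfile` and `ollama run tutor`, which is where the Dockerfile analogy comes from.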

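And to illustrate the OpenAI-compatible API from point 2, here is a small Python sketch that talks to a local server such as Jan's. The port, model id, and prompt are assumptions on my part (the app shows the real values in its local API server settings), so treat it as the shape of the request rather than copy-paste instructions:

```python
import requests

# Assumed defaults: adjust the address/port and model id to what your
# local server (e.g. Jan's API server panel) actually reports.
BASE_URL = "http://localhost:1337/v1"  # hypothetical local endpoint
MODEL_ID = "mistral-ins-7b-q4"         # hypothetical model id

def ask(prompt: str) -> str:
    """Send one chat message to the local OpenAI-compatible endpoint."""
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL_ID,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(ask("Explain what an open-source LLM is in two sentences."))
```

Any tool that already speaks the OpenAI API should work the same way once you point it at localhost, which is what makes these apps usable as a back-end.
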
Curious to hear about others’ personal faves! :innocent:

Thanks Jan, you are deeply informed and engaged here.

But more than the tools, can you share more about what one actually does with these things? Why would an educator, project, or organization take this on? That, to me, is what matters most to those beyond the technical fold.

Also, what seems compelling (not that I know first-hand) is the capability to run LLMs locally without needing the big iron. What can we do there?

Ollama / Jan.ai can be used as ChatGPT alternatives running on your local computer. I mostly use the text / code generation features, but I expect the primary usage to shift towards extracting knowledge from existing documents and media, chatting with my personal knowledge base, and using agents for more complex workflows. Educators can probably transfer a lot of their existing ChatGPT use cases to local LLMs, but they need to be careful about hallucinations: compact models have a smaller knowledge base, so it is no surprise that they fill in the blanks by making things up.

Small open-source LLMs are a lot less capable than the paid GPT-4, but I like their focused capabilities (some are trained for a narrow use case instead of general knowledge of everything), their programmability, and the ability to upload my own data without sending it over the internet. I also think it is helpful to gain more understanding by tinkering with them locally instead of only using the finished, polished commercial product on the web. Paying only a flat fee for electricity instead of pay-per-use is also nice.

From a philosophical perspective, I believe it is healthy for society not to rely on a few gigacorps for AI but to have as much autonomy and independence as possible (sound familiar, open educators?). We will only have independence and sovereignty if we use open tools (echoing the sentiment of @moodler here).

Also, by using open tools and open source LLMs, I like to think I contribute a small bit towards their wider acceptance, which will hopefully help make them better. We can’t outcompete big corps in raw hardware performance and scale but we can outcompete them in ingenuity and efficiency. They have no moat. Today’s open source LLMs are already as capable as some of the top tier commercial models from just a few months ago. And this is just the beginning.