Ollama
Ollama is a software application that runs LLMs (large language models) locally on the machine it is installed on.
Links
- GitHub
- Website
- Top 5 Open-Source LLMs for Coding: Ranked by Actual Developer Testing. 2025 article where Qwen 2.5 emerges as the winner, with DeepSeek R1 as the runner up.
Glossary
- ChatGPT
- A cloud-based LLM provided by the company OpenAI.
- Claude
- A cloud-based LLM provided by the company Anthropic.
- Gemini
- A cloud-based LLM provided by the company Google.
- GitHub Copilot
- A cloud-based AI coding assistant provided by GitHub (owned by Microsoft). It is not a single LLM, but a service that uses LLMs tuned specifically to help with programming tasks.
- LLM
- Large Language Model
- RAG
- Retrieval-Augmented Generation. A technique that lets an LLM retrieve information from an external data source and incorporate it into its answers.
Installation
Ollama can be installed via Homebrew, but for the moment I am happy with just installing the ready-made application for macOS.
When you run the Ollama application for the first time, it offers to install the CLI tool ollama. I recommend allowing that, so that you can interact with Ollama from the command line and perform low-level operations that are not exposed in the GUI. The CLI tool is installed in
/usr/local/bin
Installing a model
The application shows a drop-down selection below the prompt which lets you select the model that you want to interact with. If the model is not yet present on the machine, it has a download symbol next to it. To download the model, select it and then type a prompt - Ollama automatically starts the download and lets the model answer your prompt once the download has finished.
The location where models are installed can be configured in the application preferences. By default models are installed into the user's home directory:
/Users/<username>/.ollama/models
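The default location can also be derived programmatically. The following Python sketch (assuming the default `.ollama/models` directory name shown above, and no custom location configured in the preferences) builds the path and lists its contents if it exists:

```python
from pathlib import Path

def default_models_dir() -> Path:
    """Return Ollama's default model storage location for the current user."""
    return Path.home() / ".ollama" / "models"

if __name__ == "__main__":
    models_dir = default_models_dir()
    if models_dir.is_dir():
        # Model manifests and blobs live in subdirectories of this folder.
        for entry in sorted(models_dir.iterdir()):
            print(entry.name)
    else:
        print(f"No models directory yet at {models_dir}")
```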
Integrating into Xcode
Xcode 26 has a new feature that Apple calls "Intelligence". This allows you to connect Xcode to an external LLM.
At the time of writing, Xcode provides ready-made integrations to ChatGPT and Claude. Luckily Apple has made Xcode flexible enough so that other model providers, such as Ollama, can also be integrated.
- Navigate to Xcode Preferences > Intelligence
- Tap button "Add a Model Provider..."
- Select tab "Locally Hosted"
- Port = 11434
- Description = Ollama
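Port 11434 is the default port of Ollama's local HTTP API, which is what Xcode talks to. Before wiring up Xcode you can check that the server is reachable. The sketch below queries Ollama's /api/tags endpoint, which returns the installed models; the URL construction is split into a helper so it can be checked even without a running server:

```python
import json
import urllib.request

OLLAMA_HOST = "localhost"
OLLAMA_PORT = 11434  # Ollama's default API port, as entered in Xcode

def tags_url(host: str = OLLAMA_HOST, port: int = OLLAMA_PORT) -> str:
    """URL of the /api/tags endpoint, which lists installed models."""
    return f"http://{host}:{port}/api/tags"

def installed_models(raw_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(raw_json).get("models", [])]

if __name__ == "__main__":
    try:
        with urllib.request.urlopen(tags_url(), timeout=2) as resp:
            print(installed_models(resp.read().decode()))
    except OSError:
        print("Ollama does not appear to be running on port 11434")
```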
Command line
List the currently installed models:
dev@morannon littlego % ollama list
NAME               ID              SIZE     MODIFIED
qwen3-coder:30b    06c1097efce0    18 GB    3 hours ago
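If you want to process this output in a script, the table can be parsed by splitting on runs of two or more spaces. This is an assumption about the column layout: values such as "18 GB" or "3 hours ago" contain only single spaces, while columns are padded with several. A minimal Python sketch:

```python
import re

def parse_ollama_list(output: str) -> list[dict[str, str]]:
    """Parse the table printed by `ollama list` into one dict per model.

    Assumes columns are separated by two or more spaces, while values
    such as "18 GB" or "3 hours ago" contain only single spaces.
    """
    lines = [line for line in output.splitlines() if line.strip()]
    header = re.split(r"\s{2,}", lines[0].strip())
    return [
        dict(zip(header, re.split(r"\s{2,}", line.strip())))
        for line in lines[1:]
    ]

sample = """NAME               ID              SIZE     MODIFIED
qwen3-coder:30b    06c1097efce0    18 GB    3 hours ago"""

for model in parse_ollama_list(sample):
    print(model["NAME"], model["SIZE"])
```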
Alternatives to Ollama
Alternatives to Ollama that I have seen mentioned, but have not explored myself, are