Ollama Setup Guide

Run Zush analysis locally on your Mac with Ollama. Your files are processed by a model on your computer instead of a cloud AI provider.

What Local Mode Means

When Local (Ollama) is enabled in Zush, supported file analysis runs through your local Ollama server. Zush does not send analysis content to Zush cloud or third-party AI providers in this mode. You still control which model is installed, where Ollama stores it, and when Ollama is running.

Setup Steps

Step 1: Install Ollama

Download Ollama for macOS from the official website, install it, and open the app once so the local server can start.

Step 2: Download a vision model

Zush works best with a vision-capable model because many files are images, screenshots, PDFs, or visual previews. Start with:

ollama pull qwen2.5vl:3b
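To confirm the download worked, you can list your installed models from Terminal. These are standard Ollama CLI commands; the model tag is the one suggested above:

```shell
# Pull the suggested vision model (a ~3B-parameter tag; larger variants exist)
MODEL="qwen2.5vl:3b"
ollama pull "$MODEL"

# The model should now appear in the installed list
ollama list
```

Any vision-capable model Ollama supports should work; the 3b tag is a reasonable balance of quality and memory use for a starting point.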

Step 3: Check that Ollama is running

Ollama usually runs in the background after you open it. If Zush cannot connect, start it from Terminal:

ollama serve
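If you want to confirm the server is reachable before returning to Zush, you can query Ollama's HTTP API directly. The /api/tags endpoint and the default port 11434 are standard Ollama behavior:

```shell
# Ollama serves an HTTP API on 127.0.0.1:11434 by default;
# honor OLLAMA_HOST if the server was started with a custom address
HOST="${OLLAMA_HOST:-http://127.0.0.1:11434}"

# A JSON reply listing your installed models means the server is up
curl -s "$HOST/api/tags"
```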

Step 4: Enable Local mode in Zush

Open Zush, go to AI Setup, turn on Local (Ollama), refresh the model list, select your model, and run Test.
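As a sketch of what the Test button exercises, you can run the same kind of request yourself against Ollama's generate endpoint. The exact request Zush sends is an assumption here; this is simply a generic, non-streaming Ollama API call:

```shell
# Build a minimal non-streaming request for the model pulled earlier
MODEL="qwen2.5vl:3b"
REQUEST='{"model": "'"$MODEL"'", "prompt": "Reply with OK", "stream": false}'

# A JSON object with a "response" field means end-to-end generation works
curl -s http://127.0.0.1:11434/api/generate -d "$REQUEST"
```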

Troubleshooting

Zush does not see any models

Run ollama list in Terminal. If the list is empty, pull a model first, then click refresh in Zush.

Connection test fails

Make sure Ollama is running and that the host in Zush connection settings is set to http://127.0.0.1:11434, Ollama's default address.
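To rule out Zush itself, you can hit the server's version endpoint from Terminal first. If this returns JSON, Ollama is fine and the problem is the host setting in Zush:

```shell
# The default address Zush expects; /api/version answers even with no models installed
HOST="http://127.0.0.1:11434"
curl -s "$HOST/api/version"
```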

Processing is too slow

Use a smaller model like qwen2.5vl:3b, close memory-heavy apps, or switch back to Cloud when you need faster batch processing.
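To see why processing is slow, check what is resident in memory. ollama ps is a standard CLI command; the fallback tag below is just the small model suggested earlier:

```shell
# A smaller tag to fall back to if the loaded model is too large for your RAM
FALLBACK_MODEL="qwen2.5vl:3b"

# Shows loaded models, their size, and how long they stay in memory
ollama ps
```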

Local mode is separate from Cloud and BYOK. Cloud uses Zush credits by default, BYOK uses your provider key, and Local uses Ollama on your Mac.