Connect OCRBook to your own Ollama server.
Install Ollama, run a model, then paste your server address into OCRBook to use local AI tools (summarize / translate / Q&A).
If you connect over plain HTTP, traffic is not encrypted. Use a trusted network, a VPN, or an HTTPS reverse proxy when possible.
Setup steps
Download and install Ollama for your OS (macOS / Windows / Linux). After install, start the Ollama service/app.
Open a terminal (or PowerShell) and run: ollama pull llama3.2 (an example; any model you prefer works).
Run: ollama run llama3.2, then type a message to confirm the model responds.
By default, Ollama binds only to localhost (127.0.0.1). To reach it from another device, set OLLAMA_HOST to 0.0.0.0 and allow the default port, 11434, through your firewall.
macOS / Linux:
export OLLAMA_HOST=0.0.0.0:11434
ollama serve

Windows (PowerShell):
$env:OLLAMA_HOST="0.0.0.0:11434"
ollama serve
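Once the server is listening on 0.0.0.0, you can sanity-check that the port is reachable from another machine on the network. A minimal Python sketch (the commented-out host is an example; replace it with your server's LAN IP):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from another device on the LAN, using your server's IP:
# port_open("192.168.0.10", 11434)
```

If this returns False, re-check that OLLAMA_HOST is set and the firewall allows the port.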
On the Ollama machine, find your local IP (e.g. 192.168.0.10). Your OCRBook server URL will look like: http://192.168.0.10:11434
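For illustration, a hypothetical helper showing how a pasted address could be normalized into the base-URL form above. The default scheme (http) and port (11434) match Ollama's defaults; the function name is made up for this sketch:

```python
def normalize_server_url(address: str) -> str:
    """Turn a pasted address into scheme://host:port form.
    Hypothetical helper; http and 11434 are Ollama's defaults."""
    address = address.strip().rstrip("/")
    if "://" not in address:
        address = "http://" + address          # assume plain HTTP
    scheme, rest = address.split("://", 1)
    if ":" not in rest:
        rest += ":11434"                       # assume Ollama's default port
    return f"{scheme}://{rest}"

# normalize_server_url("192.168.0.10") -> "http://192.168.0.10:11434"
```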
In OCRBook settings, enable the Ollama integration and paste the server URL. Then try a small test prompt (summarize a short page).
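Under the hood, integrations like this typically call Ollama's standard /api/generate endpoint. A sketch of what a "summarize" request could look like — whether OCRBook sends exactly this is an assumption, and the prompt wording and model name are examples:

```python
import json
from urllib import request

def build_request(server_url: str, model: str, text: str) -> request.Request:
    """Build a non-streaming generate request for an Ollama server."""
    body = json.dumps({
        "model": model,
        "prompt": "Summarize the following page:\n\n" + text,
        "stream": False,  # ask for one complete JSON reply, not a stream
    }).encode()
    return request.Request(server_url.rstrip("/") + "/api/generate",
                           data=body,
                           headers={"Content-Type": "application/json"})

def send(req: request.Request) -> str:
    """POST the request and return the model's reply text."""
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

If a request like this succeeds from your device, the OCRBook integration should work with the same URL.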
Security (recommended)
For remote use, connect through VPN (recommended) or put Ollama behind an HTTPS reverse proxy (Caddy / Nginx).
Limit inbound access to your LAN, or allow only your devices' IP addresses. Don't expose port 11434 to the public internet without protection.
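As a sketch, an Nginx HTTPS reverse proxy in front of Ollama could look like the following. The hostname and certificate paths are placeholders; adjust them for your setup, and point OCRBook at the https:// address instead of the raw port:

```nginx
# Placeholder hostname and certificate paths - replace with your own.
server {
    listen 443 ssl;
    server_name ollama.example.com;

    ssl_certificate     /etc/ssl/certs/ollama.example.com.pem;
    ssl_certificate_key /etc/ssl/private/ollama.example.com.key;

    location / {
        # Forward to the local Ollama server
        proxy_pass http://127.0.0.1:11434;
    }
}
```

With this in place, Ollama itself can keep listening on localhost only, and the proxy handles encryption.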