Luminary uses Ollama to provide powerful, offline translation features. Ollama allows you to run large language models, like Llama 3 and Gemma, directly on your computer, ensuring your data remains private.
- Official website: https://ollama.com
Setting Up Ollama for Translation
Step 1: Install Ollama for Windows
- Download the official installer directly from the Ollama website.
- Run the installer. It requires Windows 10 or later and is a straightforward "next-next-finish" process.
Step 2: Verify the Installation
- Once installed, Ollama should start running automatically in the background.
- To verify it's working, open your web browser and go to http://localhost:11434/.
- If you see the message "Ollama is running", you're all set! If not, there may be an issue with the installation.
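The browser check above can also be scripted. Here is a minimal sketch using only the Python standard library; it assumes Ollama is listening on its default port, 11434, and simply reports whether the server answers:

```python
import urllib.request
import urllib.error

def check_ollama(url="http://localhost:11434/", timeout=3):
    """Return the server's greeting text if Ollama is reachable, else None."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")  # normally "Ollama is running"
    except (urllib.error.URLError, OSError):
        return None

status = check_ollama()
if status is not None:
    print(f"OK: {status}")
else:
    print("Ollama does not appear to be running on port 11434.")
```

If the script prints the "not running" message, try launching the Ollama app manually or reinstalling.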
Step 3: Important File Locations (Optional)
Ollama stores files in a few key locations. You can view them by typing the path into the File Explorer address bar or using the Run command (Windows Key + R).
- %LOCALAPPDATA%\Ollama: Contains logs (app.log, server.log) and downloaded updates. You can safely delete log files if you need to.
- %LOCALAPPDATA%\Programs\Ollama: Contains the application binaries. The installer automatically adds this folder to your system's PATH.
- %HOMEPATH%\.ollama: The most important folder. This is where your downloaded models and configuration are stored. You can manually delete model files under models\blobs and model folders under models\manifests\registry.ollama.ai\library.
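If you want to script a cleanup or disk-usage check, the locations above can be resolved with Python's standard library. This is only a sketch: on Windows `os.path.expandvars` substitutes the `%...%` variables, while on other systems they are left as-is.

```python
import os

# The three Ollama locations described above (Windows environment variables).
paths = [
    r"%LOCALAPPDATA%\Ollama",           # logs and downloaded updates
    r"%LOCALAPPDATA%\Programs\Ollama",  # application binaries
    r"%HOMEPATH%\.ollama",              # models and configuration
]

for p in paths:
    expanded = os.path.expandvars(p)  # %...% only expands on Windows
    print(expanded, "->", "exists" if os.path.exists(expanded) else "not found")
```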
Step 4: Understanding Available Models
The first time you use the translation feature in Luminary, you will be prompted to download a model. Here's a breakdown of the models available within the app to help you choose:
| Model | Quality | Size (GB) | RAM Needed | VRAM Needed (GPU) | Best For |
|---|---|---|---|---|---|
| gemma3:4b | Good | 3.3 | ~8 GB | ~4 GB | General use on most systems. |
| gemma3:4b-it-qat (Recommended) | Very Good | 4.0 | ~8 GB | ~5 GB | Memory-efficient quantization with little quality loss. |
| gemma3:12b-it-q4_K_M | Excellent | 8.1 | ~16 GB | ~10 GB | Demanding tasks and complex translations. |
| llama3:instruct | High | 4.7 | ~8 GB | ~6-8 GB | Detailed instructions and clear answers. |
| llama3.1:8b-instruct-q5_K_M | High | 5.7 | ~16 GB | ~8 GB | A balanced model for fast, accurate results. |
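To give a rough idea of how an application drives one of these models, here is a sketch against Ollama's /api/generate endpoint. The prompt wording and the default model name are illustrative assumptions, not Luminary's actual internals:

```python
import json
import urllib.request

def build_translate_request(model: str, text: str, target_lang: str) -> bytes:
    """Build a JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": f"Translate the following text into {target_lang}:\n\n{text}",
        "stream": False,  # ask for one complete response instead of chunks
    }
    return json.dumps(payload).encode("utf-8")

def translate(text, model="gemma3:4b-it-qat", target_lang="English",
              url="http://localhost:11434/api/generate"):
    """Send the request to a locally running Ollama server."""
    req = urllib.request.Request(
        url,
        data=build_translate_request(model, text, target_lang),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

# Example (requires Ollama running with the model already downloaded):
# print(translate("Bonjour le monde"))
```

The larger the model you pick from the table, the longer each request takes, so the generous timeout matters on CPU-only machines.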