Run AI Locally
Step-by-step setup guides for running LLMs and image generators on real hardware — with exact commands, benchmarks, and optimization tips.
Language Model Guides
Run Llama 3.1 8B on Mac Mini M4
Step-by-step guide to running Llama 3.1 8B locally on the Apple Mac Mini M4 using Ollama — no discrete GPU required.
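As a taste of the workflow the guide walks through, here is a minimal sketch using Ollama's official Python client. It assumes the Ollama app is already installed and running and that the model has been pulled; the prompt is just a placeholder.

```python
# pip install ollama
# Assumes the local Ollama server is running and llama3.1:8b has been pulled.
import ollama

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Explain unified memory in one sentence."}],
)
print(response["message"]["content"])
```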
Run Llama 3.3 70B on Mac Mini M4 Pro
Complete guide to running Llama 3.3 70B (Q4) locally on the Mac Mini M4 Pro with 24 GB unified memory using Ollama.
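One detail the guide spends time on is generation speed; the Python client can stream tokens as they arrive, which makes throughput easy to eyeball. A sketch, assuming the model is already pulled:

```python
import ollama

# stream=True yields chunks as tokens are generated instead of one final reply.
stream = ollama.chat(
    model="llama3.3:70b",
    messages=[{"role": "user", "content": "Write a haiku about local inference."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```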
Run Llama 3.1 70B on RTX 5070
How to run Llama 3.1 70B (Q4) on an RTX 5070 12 GB using Ollama — includes VRAM limits, layer offload settings, and expected speed.
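To illustrate the offload idea: Ollama exposes a `num_gpu` option that caps how many model layers are placed in VRAM, with the remainder running on the CPU. The layer count below is an illustrative guess, not the guide's tuned value for this card:

```python
import ollama

# num_gpu limits how many layers are offloaded to the GPU; the rest stay on CPU.
# 20 is a placeholder -- the right number depends on free VRAM and context size.
response = ollama.chat(
    model="llama3.1:70b",
    messages=[{"role": "user", "content": "Hello"}],
    options={"num_gpu": 20},
)
print(response["message"]["content"])
```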
Run Ollama on a Mini PC (Intel/AMD)
How to run local LLMs with Ollama on an Intel or AMD mini PC — best models for 16 GB RAM, performance expectations, and optimization.
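On a CPU-only 16 GB machine the same client works unchanged; only the model choice matters. A sketch with one of Ollama's stock small models (the tag is our pick for illustration, not necessarily the guide's recommendation):

```python
import ollama

# A 3B model leaves plenty of headroom on a 16 GB machine.
response = ollama.generate(
    model="llama3.2:3b",
    prompt="Why does memory bandwidth matter for LLM inference?",
)
print(response["response"])
```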
Run DeepSeek R1 Locally with Ollama
How to run DeepSeek R1 (8B and 70B) locally on your own hardware using Ollama — hardware requirements, speed expectations, and tips.
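The basic workflow is two steps: download the weights once, then chat against them locally. A sketch using the 8B distill tag that Ollama publishes:

```python
import ollama

# Pull the weights once; subsequent runs reuse the local copy.
ollama.pull("deepseek-r1:8b")

response = ollama.chat(
    model="deepseek-r1:8b",
    messages=[{"role": "user", "content": "What is 17 * 23? Think step by step."}],
)
print(response["message"]["content"])
```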
Image Generation Guides
Run Stable Diffusion on Mac Mini M4
How to run SDXL and FLUX on the Mac Mini M4 using Diffusers or ComfyUI — with expected generation times and optimization tips.
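The Diffusers route boils down to a few lines. A minimal sketch, assuming a standard PyTorch install with MPS support; the model ID is the official SDXL base checkpoint and the prompt is a placeholder:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load SDXL in half precision and run it on Apple's Metal (MPS) backend.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("mps")

image = pipe("a lighthouse at dawn, golden hour", num_inference_steps=30).images[0]
image.save("lighthouse.png")
```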
Run SDXL and FLUX on RTX 5070
How to run SDXL and FLUX.1 on the NVIDIA RTX 5070 with 12 GB GDDR7 — setup, benchmarks, and VRAM optimization tips.
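One VRAM trick worth previewing: Diffusers can offload pipeline submodules to the CPU and move them to the GPU only while they run, shrinking peak VRAM at a modest speed cost. A sketch for a 12 GB card, again using the SDXL base checkpoint:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
# Submodules live on the CPU and visit the GPU only when needed,
# trading some speed for a much smaller peak VRAM footprint.
pipe.enable_model_cpu_offload()

image = pipe("a red bicycle in the rain", num_inference_steps=30).images[0]
image.save("bicycle.png")
```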
Run Stable Diffusion on RX 9060 XT (ROCm)
Complete guide to running SDXL and FLUX on the AMD RX 9060 XT 16 GB using ROCm and ComfyUI on Linux.
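Before touching ComfyUI, it helps to confirm the ROCm build of PyTorch actually sees the card. ROCm builds reuse the `torch.cuda` namespace, so the check looks like this:

```python
import torch

# On a working ROCm build these report the AMD GPU, and torch.version.hip
# is set instead of torch.version.cuda.
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
print(torch.version.hip)
```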
Need the hardware first?
Find the Right Hardware for Your Use Case
Every guide above is written for specific hardware. Our buying guides match you to the right GPU or mini PC for your budget and workload.