Fine-Tune Gemma 3 270M on Apple Silicon with MLX-LM and Python
*M2 MacBook Air. Wikimedia Commons · CC BY-SA 4.0*

Your MacBook is already a fine-tuning machine. You just haven't told it yet.

If you've been staring at cloud GPU bills, waiting in Colab queues, or assuming that model fine-tuning is reserved for people with data centre access, this post will change your workflow. Google's Gemma 3 270M is a surprisingly capable small language model, and Apple's MLX framework turns your M-series Mac into a first-class local training environment. Together, they let you go from raw dataset to a domain-specialized model without leaving your desk. ...