Qwen3.5-0.8B: A Multimodal Thinking Model That Fits in 1 Gigabyte

800 million parameters. 262,000-token context window. Images, video, and text — all handled natively. Thinking mode on demand. Apache 2.0 license. And the entire model weighs in at 1GB on Ollama. That’s Qwen3.5-0.8B, the smallest member of Alibaba’s Qwen3.5 family, released in February 2026. It is not a general-purpose language model pretending to be multimodal — it was trained with early fusion on multimodal tokens from the start, covering 201 languages and dialects. At sub-gigabyte scale, very little competes with its feature set. ...

May 13, 2026 · 8 min · 1547 words · Clevis

Qwen3-Coder-Next: Run a Frontier-Level Coding Agent Locally on Consumer Hardware

There is a certain irony in spending $200 a month on a cloud coding assistant for a codebase you’ll never let leave your machine. Your intellectual property stays on-premises, but every line you paste into a chat window makes a round trip to a server you don’t control. Until recently, the performance gap between local models and frontier cloud assistants made that trade-off feel unavoidable. Qwen3-Coder-Next, released by Alibaba’s Qwen team on February 3, 2026, is the clearest argument yet that the gap is closing. With 80 billion total parameters but only 3 billion activated per token, it scores 70.6% on SWE-Bench Verified — matching or beating models with 10–20× more active parameters — and it runs on hardware you can buy today. ...

May 11, 2026 · 7 min · 1479 words · Clevis

Gemma 4: Taking Agentic Workflows to the Edge

When deploying large language models locally, every byte of VRAM counts. For the past year, the industry has aggressively pursued smaller, more capable models that can run on consumer edge devices—like a MacBook Pro, a Raspberry Pi 5, or a mid-range Android phone—without sacrificing reasoning quality. Recently, Google DeepMind unveiled the next evolutionary step in this space: the Gemma 4 family. Released under the Apache 2.0 license, Gemma 4 is a set of state-of-the-art open models built from the ground up to bring frontier-level intelligence to edge-constrained devices. Following in the footsteps of previous generations, Gemma 4 extends context windows, introduces native “thinking” modes, and explicitly focuses on multimodal autonomous agents running without the cloud. ...

April 5, 2026 · 6 min · 1213 words · Clevis

Deep Dive: Running Reka Edge Locally for Frontier-Level Vision AI on Mac and PC

Forget the cloud for a second. If you’ve been waiting for a truly capable multimodal vision model that runs entirely on your own hardware without chewing through a massive API bill, the landscape just shifted. Released in March 2026, Reka Edge is a 7-billion parameter vision-language model optimized specifically for image understanding, video analysis, object detection, and agentic tool-use. But what makes it special isn’t merely the benchmark scores—it’s the fact that it’s designed from the ground up to run entirely offline on consumer hardware. No API keys, no data leaving your machine, no latency waiting for a server farm. ...

March 25, 2026 · 8 min · 1565 words · Clevis

AI in Your Pocket: How Liquid AI’s Apollo App Lets You Run Chatbots Completely Offline

For years, the phrase “AI chatbot” has carried the inherent assumption of a round-trip to the cloud. You type a prompt, it zips off to a server farm somewhere, crunches the data on massive GPUs, and beams the text back to your screen. But what happens when you’re on a Wi-Fi-less airplane? Or working offline in a remote area? Or discussing highly sensitive enterprise work you’d never want stored on a provider’s server? ...

March 24, 2026 · 7 min · 1304 words · Edwin

The 'Small' Model That Does It All: How Mistral Small 4's Unified Architecture Kills the Need for Specialized AI

Forget managing separate endpoints for your chat assistant, vision parser, and coding AI. If you’ve been following the current obsession with agentic workflows and local efficiency, you know the routing headache. Now, Mistral AI just flipped the table. Mistral Small 4 is here, and it’s the first in the Mistral family to merge the capabilities of their flagship specialized models—Magistral (reasoning), Pixtral (multimodal), and Devstral (agentic coding)—into one unified powerhouse. Released under the Apache 2.0 license, this open-weight model sets a new standard for hardware-efficient, on-device, and low-latency inference. ...

March 24, 2026 · 6 min · 1099 words · Clevis

Out of the Cloud, Into the Wild: How Small AI Models and Physical AI Are Taking Over the Edge

The era of defaulting to a trillion-parameter behemoth for every AI task is officially over. For years, the narrative has been that bigger is always better, leading to massive, power-hungry Large Language Models (LLMs) locked away in centralized data centers. But the real revolution happening in 2026 isn’t in the cloud—it’s at the edge. The biggest paradigm shift hitting developers, businesses, and indie hackers right now is the pivot toward Small Language Models (SLMs) and “Physical AI.” Lightweight models are migrating AI out of expensive server farms and integrating it directly into the wild: onto our smartphones, factory floors, and 5G network towers. ...

March 23, 2026 · 6 min · 1105 words · Clevis

Fine-Tune Gemma 3 270M on Apple Silicon with MLX-LM and Python

Your MacBook is already a fine-tuning machine. You just haven’t told it yet. If you’ve been staring at cloud GPU bills, waiting in Colab queues, or assuming that model fine-tuning is reserved for people with data center access — this post is going to change your workflow. Google’s Gemma 3 270M is a surprisingly capable small language model, and Apple’s MLX framework turns your M-series Mac into a first-class local training environment. Together, they let you go from raw dataset to a domain-specialized model without leaving your desk. ...

March 22, 2026 · 9 min · 1854 words · Clevis

GPT-5.4 Nano: The Fastest, Cheapest OpenAI Model Yet — What Developers Need to Know

There’s a pattern playing out in AI that should feel familiar to anyone who builds things for a living: last year’s flagship becomes this year’s free tier. GPT-5.4 nano is the latest, sharpest edge of that pattern. Released on March 17, 2026, GPT-5.4 nano is OpenAI’s smallest and most cost-efficient model in the GPT-5.4 family — and it’s API-only, no ChatGPT UI access, no consumer frills. It’s a tool built for builders. If you’re running high-volume pipelines, real-time classification jobs, or distributed agent architectures where milliseconds and fractions of cents actually matter, this one deserves a close look. ...

March 22, 2026 · 11 min · 2131 words · Clevis

SmolLM3-3B: The Fully Open Small Language Model That Punches Way Above Its Weight

Three billion parameters. 128,000-token context window. Reasoning mode baked right in. Six languages. And an Apache 2.0 license with the full training blueprint published alongside the weights. If you’ve been waiting for a small language model that you can actually deploy on a $5 VPS, an old MacBook, or a Raspberry Pi cluster without compromising on capability — HuggingFace’s SmolLM3-3B is worth your attention right now. Released by HuggingFace’s SmolLM team on July 8, 2025, SmolLM3-3B is the third major iteration of their “smol” model series. But calling it just “smol” undersells what’s going on here. ...

March 22, 2026 · 9 min · 1721 words · Clevis