That M-series Mac you bought to run local LLMs? It's idle 95% of the time. And when you finally need it, one machine isn't enough.
Pool Do lets you pool idle compute with others. Earn credits when your machine works for the pool. Spend them when you need more.
You bought serious hardware. It runs inference for 20 minutes a day. The rest of the time it's an expensive desk ornament.
When you actually want to run a batch of embeddings or evaluate 500 prompts, one machine is painfully serial.
You went local for a reason — privacy, cost, control. But "local" means "alone."
Three steps. No cloud accounts. No API keys.
One binary. It detects your runtimes (Ollama, llama.cpp, MLX, LM Studio) and the models they serve, automatically.
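How might one binary find all of those? A plausible approach is probing each runtime's default local port. Here's a minimal sketch in Go; the ports are those tools' documented defaults, but the probe logic is an illustration, not Pool Do's actual detection code:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// Default local endpoints for each runtime. The ports are the
// tools' documented defaults (Ollama 11434, LM Studio 1234,
// llama.cpp's llama-server 8080); adjust if you've remapped them.
// Note mlx_lm.server also defaults to 8080, so distinguishing it
// from llama.cpp would require inspecting the response body.
var probes = map[string]string{
	"Ollama":    "http://localhost:11434/api/tags",
	"LM Studio": "http://localhost:1234/v1/models",
	"llama.cpp": "http://localhost:8080/v1/models",
}

func main() {
	client := &http.Client{Timeout: 500 * time.Millisecond}
	for name, url := range probes {
		resp, err := client.Get(url)
		if err != nil {
			continue // nothing listening: runtime not running
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("detected %s at %s\n", name, url)
		}
	}
}
```

The short timeout matters: a probe against a closed port should fail fast so startup stays instant even when nothing is running.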
Pools are groups that share compute. Your team, your friends, your open-source project.
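As a mental model, a pool is little more than a roster with balances. A hypothetical sketch; the types and field names are illustrative, not Pool Do's actual schema:

```go
package main

import "fmt"

// Hypothetical shape of a pool: a named roster where each member
// carries a running credit balance (earned minus spent). Whether
// balances may go negative is a pool policy question.
type Member struct {
	Name    string
	Credits int64
}

type Pool struct {
	Name    string
	Members []Member
}

func main() {
	team := Pool{
		Name: "acme-ml",
		Members: []Member{
			{Name: "alice", Credits: 4200}, // mostly contributes idle time
			{Name: "bob", Credits: -350},   // mostly runs batch jobs
		},
	}
	for _, m := range team.Members {
		fmt.Printf("%-6s %6d credits\n", m.Name, m.Credits)
	}
}
```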
Your idle machine earns credits. Your batch jobs spend them. Twenty idle hours earn far more than twenty busy minutes spend.
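Back-of-the-envelope, using the usage pattern above and an assumed rate of one credit per compute-second (the rate is made up; only the ratio matters):

```go
package main

import "fmt"

// Back-of-the-envelope credit flow. Assumes one credit per
// compute-second in either direction; the rate is invented, and
// the 20-minutes-a-day figure comes from the usage pattern above.
func main() {
	const creditsPerSecond = 1.0

	idleHoursServed := 20.0  // hours/day your machine works for the pool
	batchMinutesUsed := 20.0 // minutes/day you borrow from the pool

	earned := idleHoursServed * 3600 * creditsPerSecond
	spent := batchMinutesUsed * 60 * creditsPerSecond

	fmt.Printf("earned %.0f, spent %.0f, net %+.0f credits/day\n",
		earned, spent, earned-spent)
	// => earned 72000, spent 1200, net +70800 credits/day
}
```

A 60-to-1 surplus is the point: contributors who mostly idle bank enough credit to fan a batch job out across the whole pool when they need it.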
Millions of M-series Macs sit idle right now. Together they're a distributed data center that nobody built and nobody pays to run. Pool Do is the coordination layer.
And it ends the nagging feeling that your $3K inference machine is doing nothing right now.