Your GPU is bored.

That M-series Mac you bought to run local LLMs? It's idle 95% of the time. And when you finally need it, one machine isn't enough.

Pool Do lets you pool idle compute with others. Earn credits when your machine works for the pool. Spend them when you need more.

The irony: a $3,000 space heater

You bought serious hardware. It runs inference for 20 minutes a day. The rest of the time it's an expensive desk ornament.

The bottleneck: still too slow when you need it

When you actually want to run a batch of embeddings or evaluate 500 prompts, one machine is painfully serial.

The dilemma: the cloud feels wrong

You went local for a reason — privacy, cost, control. But "local" means "alone."

Solo: 5% utilization, idle 95% of the time.
Pooled: 80% utilization, actually working.

How it works

Three steps. No cloud accounts. No API keys.

1. Install the worker

One binary. It automatically detects your local model runtimes: Ollama, llama.cpp, MLX, LM Studio.
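Detection can be as simple as probing the default local ports those runtimes listen on. Here is a minimal sketch of that idea, assuming the usual defaults (Ollama on 11434, LM Studio on 1234, llama.cpp's server and mlx_lm.server commonly on 8080); the worker's actual detection logic may differ.

```python
import socket

# Default local ports for common inference backends. llama.cpp's server and
# mlx_lm.server both default to 8080, so a port hit alone is ambiguous; a
# real detector would also inspect the HTTP response to tell them apart.
DEFAULT_PORTS = {
    "ollama": 11434,
    "lm-studio": 1234,
    "llama.cpp-or-mlx": 8080,
}

def detect_backends(host: str = "127.0.0.1", timeout: float = 0.25) -> list[str]:
    """Return the names of backends with something listening on their port."""
    found = []
    for name, port in DEFAULT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(name)
        except OSError:
            pass  # nothing listening, or connection refused
    return found

if __name__ == "__main__":
    print("Detected backends:", detect_backends() or "none")
```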

2. Join or create a pool

Pools are groups that share compute. Your team, your friends, your open-source project.
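The state a pool has to track is small: a name, its members, and each member's balance. A toy sketch of that shape (the Member and Pool types and their fields are illustrative, not Pool Do's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    machine_id: str   # stable identifier for one worker machine
    pubkey: str       # used to authenticate its jobs and results
    credits: int = 0  # current balance in pool credits

@dataclass
class Pool:
    name: str
    members: dict[str, Member] = field(default_factory=dict)

    def join(self, member: Member) -> None:
        # Membership is explicit: you invite machines you trust;
        # the pool is never open to strangers.
        self.members[member.machine_id] = member

team = Pool(name="weekend-project")
team.join(Member(machine_id="mac-studio-01", pubkey="ed25519:AAAA"))
```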

3. Earn and spend credits

Your idle machine earns credits. Your batch jobs spend them. The math works out.
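The math works out because the ledger is double-entry: every credit a worker earns is matched by a debit against the machine that submitted the job, so balances across the pool always sum to zero. A toy illustration of that invariant (not Pool Do's actual accounting code, which also tracks held and settled states):

```python
from collections import defaultdict

class Ledger:
    """Minimal double-entry credit ledger: every transfer writes an equal
    and opposite pair of entries, so all balances always sum to zero."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = defaultdict(int)

    def transfer(self, spender: str, earner: str, credits: int) -> None:
        assert credits > 0, "direction is encoded by spender/earner, not sign"
        self.balances[spender] -= credits  # the job submitter spends
        self.balances[earner] += credits   # the worker earns

ledger = Ledger()
ledger.transfer(spender="my-laptop", earner="mac-studio-01", credits=42)
assert sum(ledger.balances.values()) == 0  # "the math works out"
```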

Millions of M-series Macs sit idle right now. Together they're a distributed data center that nobody built and nobody pays to run. Pool Do is the coordination layer.

"The greenest GPU is the one that's already on your desk."

Token Anxiety (n.)

The nagging feeling that your $3K inference machine is doing nothing right now.

Built for developers

Open source: inspect, modify, self-host.
API-first: standard endpoints, JSONL batch submission (sketched below).
Model-agnostic: anything your machine can run.
Double-entry accounting: credits are earned, spent, held, and settled.
Pluggable backends: Ollama, llama.cpp, MLX, LM Studio.
Trust-based pools: you choose who you share with.
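As an example of JSONL batch submission, a job could be one JSON request per line, POSTed to a pool endpoint. A hedged sketch follows: the request fields and the /v1/batches path are assumptions modeled on common batch APIs, not Pool Do's documented interface.

```python
import json
import urllib.request

# One request per line; both the JSONL fields and the endpoint are
# illustrative assumptions, not Pool Do's documented API.
batch = "\n".join(
    json.dumps({"id": f"prompt-{i}", "model": "llama3", "prompt": p})
    for i, p in enumerate(["Summarize document X", "Summarize document Y"])
)

req = urllib.request.Request(
    "http://localhost:7777/v1/batches",  # hypothetical local pool endpoint
    data=batch.encode(),
    headers={"Content-Type": "application/jsonl"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```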