We’ve open-sourced a multi-agent orchestrator that we’ve been using to handle long-running LLM tasks. We found that single LLM agents tend to stall, loop, or generate non-compiling code, so we built a harness for agents to coordinate over shared context while work is in progress.
How it works:

1. Orchestrator agent that manages task decomposition
2. Sub-agents for parallel work
3. Subscriptions to task state and progress
4. Real-time sharing of intermediate discoveries between agents
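For a rough feel of the pattern, here's a minimal sketch in Python. The names (`SharedContext`, `sub_agent`, `orchestrator`) are hypothetical stand-ins, not the skill's actual API, and the LLM calls are stubbed out with sleeps:

```python
# Hypothetical sketch of the coordination pattern; not the skill's real API.
# Requires Python 3.11+ for asyncio.TaskGroup.
import asyncio
from dataclasses import dataclass, field


@dataclass
class SharedContext:
    """Task state that the orchestrator and sub-agents read and write."""
    findings: list[str] = field(default_factory=list)
    subscribers: list[asyncio.Queue] = field(default_factory=list)

    def subscribe(self) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, finding: str) -> None:
        # Record the discovery and fan it out to every subscriber.
        self.findings.append(finding)
        for q in self.subscribers:
            await q.put(finding)


async def sub_agent(name: str, subtask: str, ctx: SharedContext) -> None:
    inbox = ctx.subscribe()
    await asyncio.sleep(0.1)  # stand-in for a real LLM call on the subtask
    await ctx.publish(f"{name}: partial result for {subtask!r}")
    # Drain whatever the other agents shared while we were working.
    while not inbox.empty():
        hint = inbox.get_nowait()
        if not hint.startswith(name):  # skip our own message
            print(f"{name} saw: {hint}")


async def orchestrator(task: str) -> SharedContext:
    ctx = SharedContext()
    # Decompose the task (an LLM call in practice, hardcoded here).
    subtasks = [f"{task} / part {i}" for i in range(3)]
    async with asyncio.TaskGroup() as tg:
        for i, st in enumerate(subtasks):
            tg.create_task(sub_agent(f"agent-{i}", st, ctx))
    return ctx


if __name__ == "__main__":
    ctx = asyncio.run(orchestrator("prove the lemma"))
    print("\n".join(ctx.findings))
```

The real skill presumably does much more (retries, progress tracking, tool use), but the core idea is the same: sub-agents publish intermediate results into shared state as they go, instead of only reporting back at the end.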
We tested this on a Putnam-level math problem, but the pattern generalizes to things like refactors, app builds, and long-running research. It's packaged as a Claude Code skill and designed to be small, readable, and modifiable.
Use it, break it, and tell me what workloads we should run next!
If your registration process is eventually going to ask me for a username, can the org name and username be the same?
I created the account from my phone and don't have access to the dev tools I'd want to paste the key into. I can deal with it, but I don't know whether I'll be able to regenerate the key if I lose it, I'd rather not store it on my phone, and I don't trust my accuracy manually typing it on my laptop while reading it off my phone, so none of the options feel great. Again, not an actual roadblock, but still something I'd encourage fixing.
Edit: Good thing I copied the key to my phone before writing this message. Jumping over to this page seems to have forced a refresh/logout on the ensure page in the other tab, so my token would (I think? maybe?) have been lost at this point if I'd done it in the other order.
Will make this clearer in the quickstart, thanks for the feedback
I'm curious to see how it feels for you when you run it. I'm happy to help however I can.
Any workloads you want to see? The best ones have a clear way to measure whether the output succeeded. I'm thinking about recreating the C compiler example Anthropic did, but doing it for less than the $20k in tokens they used.