- The Frontier by Product Hunt
AI war games
Plus, five AI tools you may have missed
WELCOME
Happy Tuesday, legends. Welcome back to another edition of The Frontier — our weekly newsletter covering the best new AI launches on Product Hunt.
TOP LAUNCHES
Perplexity built a computer
Perplexity Computer is a new workspace where you hand over a goal and it runs the whole thing in the background. It breaks work into subtasks, spins up 19 different models in parallel, talks to your files and 400+ apps, and can keep going for hours or weeks in a cloud sandbox while you watch progress from one place.
Superset is a desktop IDE built for people who run more than one coding agent at a time. You can fire up a bunch of Claude Code, Codex and friends in parallel, keep each task in its own sandboxed workspace, and watch them all from a single view instead of juggling terminals. When something finishes, you get a quick diff and an editor right there, so you can review, tweak, and ship without hopping between tools.
Alkemi is a data brain that lives inside Slack. You hook it up to things like Snowflake or BigQuery, then just @ it in a channel and ask what happened to your pipeline, revenue, or whatever you care about. It answers in plain language, drops in charts, and everyone sees the same thing without leaving the thread.
KiloClaw is a fully managed, cloud-hosted OpenClaw, so you get an agent running in under a minute instead of wrestling with Docker and SSH. You click to deploy, tap into 500+ models through Kilo Gateway, set up scheduled automations, and plug into chat platforms like Slack, Telegram, and Discord, while Kilo handles restarts, monitoring, and infra behind the scenes.
Tessl is a package manager and evaluation layer for agent skills. You send in the skills your agents depend on, run them through real tasks, see if they actually improve success rates, and version them properly instead of passing markdown around repos.
WHAT’S HOT
Claude drew a line. OpenAI took the contract.
Anthropic spent the week finding out what happens when you tell the Pentagon no. The company refused to drop safeguards that block Claude from being used for mass domestic surveillance or autonomous weapons, and the response was not subtle: the Pentagon moved to label Anthropic a supply-chain risk, while Trump directed federal agencies to stop using its tech. Anthropic says it is taking that fight to court.
Then OpenAI stepped in. Sam Altman said OpenAI reached a deal to deploy its models on the U.S. Department of War’s classified cloud networks, which is about as clear a signal as you can send about who is willing to do business here. OpenAI says its own rules still block domestic mass surveillance and keep humans responsible for the use of force, but it is also very clearly the company that got the contract while Anthropic got the headache.
That is the real story. This was not some abstract AI ethics panel or another vague policy blog post. One lab held the line, got punished for it, and another took the deal with guardrails it says are good enough. If you wanted a neat little snapshot of where AI, power, and defense are headed, this was it.