Do More Newsletter
This issue features the article "The Leaked Model and the Upgrade Treadmill – What to Know About Mythos," plus product news on Slack AI's expansion inside Salesforce's workplace stack, X1 Search v11, the Forerunner AI Assistant, Penguin Ai's Gwen, and PieBox.
Keep up to date on the latest products, workflows, apps, and models so you can excel at your work. Curated by Duet.
If you want help deploying AI in your business, email us at [email protected].

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:
Slack AI expands inside Salesforce’s workplace stack
Salesforce announced more than 30 new Slack AI features aimed at making daily work faster, including CRM updates, improved search, and desktop assistance. For the average user, the appeal is simple: it reduces time spent on follow-ups, note-taking, and digging through messages so teams can stay in flow.
X1 Search v11
X1’s newest release brings AI-powered organization and classification directly into its desktop and enterprise search product, with processing designed to stay on the user’s device or inside a corporate firewall. That makes it useful for people who want smarter search across email, files, Teams, and Slack without giving up security or data control.
Forerunner AI Assistant
Forerunner launched an AI Assistant to capture inspection data faster and more accurately, with traceable auditability built in. The broader takeaway for everyday readers is that AI is increasingly showing up in practical workflow tools, not just chatbots, and this one focuses on reducing paperwork and manual reporting time.
Penguin Ai’s Gwen
Penguin Ai introduced Gwen, a build-your-own AI platform for healthcare operations. While it’s aimed at healthcare teams, the product is part of a wider trend toward customizable AI tools that help organizations automate repetitive work, route requests, and improve internal productivity.
PieBox
PieBox officially launched as a one-stop AI development platform that helps turn ideas into AI applications. For general AI readers, the value is that it lowers the barrier to building and experimenting with AI software, which is especially attractive to creators and founders who want to move from concept to prototype quickly.

X1 Search v11 is an AI-enhanced desktop search product designed to help users organize, classify, and find information across email, documents, chats, and cloud content. The key idea is that the AI works in-place, on the device or inside a controlled enterprise environment, rather than moving sensitive data elsewhere. That makes it especially relevant for users who want smarter search without sacrificing security or compliance.
The newest feature set adds AI-powered categorization, classification, and tagging, which means the software can help sort information more intelligently instead of only returning keyword matches. For users, the benefit is less time spent hunting through scattered files and more time actually using the information they already have.
One of the most useful aspects of the release is its flexibility. Customers can deploy AI models in a way that fits their environment, including local-device processing or behind-the-firewall use. That matters for teams handling sensitive material, because it keeps AI assistance aligned with existing security controls.
For anyone who works across email, Slack, Teams, and shared drives, the practical payoff is simple: faster search, better organization, and fewer context switches. X1 is positioning this as a secure way to bring AI into everyday productivity without forcing users to compromise on where their data lives.
The Leaked Model and the Upgrade Treadmill – What to Know About “Mythos”

Last week, someone at Anthropic made a mistake.
A content management system was misconfigured, and suddenly close to 3,000 unpublished assets — including a draft blog post about the company’s next AI model — were sitting on the open internet for anyone to find. Fortune reported the exposure after independent security researchers flagged the cache. Within hours, it was everywhere.
The model is called Claude Mythos. Internally, Anthropic refers to the tier as “Capybara.” And if the leaked documents are accurate, it’s a significant leap beyond anything currently available — not just from Anthropic, but from anyone.
You probably saw the headlines. Several major cybersecurity stocks fell 5–7% on March 27. Crypto prices slid, with some reports linking the move to the leak. People on X (formerly Twitter) started arguing about whether this was the beginning of the end of something, or just the beginning of something new. The usual.
Here’s the thing: if you use AI casually — to write emails, plan trips, brainstorm ideas — none of this changes your Tuesday. But it’s worth understanding what’s happening, because the treadmill is speeding up and at some point you’ll want to know how to think about it.
What Actually Leaked
The draft blog post described Mythos as a “step change” in capability — Anthropic’s strongest language to date about one of its own models. Compared to Claude Opus 4.6 (currently their best), Mythos reportedly scores “dramatically higher” on tests of software coding, academic reasoning, and cybersecurity. But the most telling detail — if early analyses of the leaked draft are accurate — is about how it works: where current models respond to instructions one step at a time, Mythos reportedly plans and executes sequences of actions on its own — moving across systems, making decisions, and completing operations without waiting for human input at each stage. Less chatbot, more autonomous agent.
The cybersecurity angle is the reason for the stock market jitters. The leaked documents included internal warnings that Mythos could find and exploit software vulnerabilities fast enough to pose what Anthropic called “unprecedented cybersecurity risks.” According to Axios, Anthropic has been privately briefing government officials about the implications.
The irony — a company warning about cybersecurity risks while accidentally leaving 3,000 files on a public server — has not gone unnoticed. Futurism’s headline called it “the most ironic way possible.” Fair.
Then, days later, it happened again. On March 31, Anthropic accidentally exposed the source code for Claude Code — their popular coding tool — via an npm packaging error. Roughly 500,000 lines of code across about 1,900 files, out in the open. Anthropic said the incident exposed internal source code but did not involve sensitive customer data or credentials — they described it as a packaging error, not a security breach. Still: two major leaks in under a week, from the company sounding the alarm about cybersecurity.
A few things worth noting: Mythos isn’t publicly available. It’s being tested by a small group of early-access users. Anthropic says it’s extremely expensive to run and not ready for general release. (Unconfirmed rumors put it at 10 trillion parameters, which would help explain the cost — but Anthropic hasn’t confirmed a number.) The company has signaled a deliberately slow, phased rollout, driven by both compute costs and misuse concerns. Some industry analysts think the timeline may be tied to Anthropic’s anticipated IPO later this year. Either way, this isn’t something you can go try right now.
The Treadmill Problem
Here’s what I actually want to talk about. Because the specific details of Mythos matter less to most of us than the pattern it represents.
We’re now in a cycle that looks like this: every few months, a major AI lab announces a new model that’s significantly better than the last one. Headlines erupt. People debate whether this is the one that changes everything. Then it ships, everyone adjusts, and three months later the next one lands.
GPT-4 was supposed to be the inflection point. Then GPT-4o. Then Claude 3.5 Sonnet. Then Gemini 2. Then o3. Then Opus 4.6. Then GPT-5.4 — which launched just weeks ago with a million-token context window. Now Mythos.
If you’re keeping score at home, you’ve already lost count. And that’s kind of the point.
For people who use AI as a tool — which probably includes you — the upgrade treadmill creates a weird kind of anxiety. Am I using the right model? Should I switch? Is the thing I learned last month already obsolete?
The honest answer: probably not. The gap between “good enough” and “state of the art” is real, but for everyday use, it’s narrower than the headlines suggest. If you’re using Claude or ChatGPT to draft a cover letter, help your kid with homework, or summarize a long document, the difference between this quarter’s model and last quarter’s is rarely the thing that determines whether you get a good result. Your prompt matters more than your model.
When the Upgrade Does Matter
That said, there are places where the jumps are real and consequential.
Coding is the big one. Each generation of model gets meaningfully better at writing, debugging, and reasoning about software. If Mythos is as good as the leak suggests, developers will care — a lot. Software that was too complex for AI to handle reliably six months ago might suddenly be within reach.
Cybersecurity is another, and it’s a double-edged sword. A model that’s better at finding vulnerabilities is also better at exploiting them. This is the core of Anthropic’s own concern, and it’s not hypothetical hand-wringing — it’s the company that built the model saying “we need to be careful with this.”
And then there’s the category I’d call “reliability at complexity.” Each new generation tends to get better at long, multi-step tasks — the kind where older models would lose the thread halfway through. Planning a complex project, analyzing a dataset with multiple variables, maintaining context over a very long conversation. These are the tasks where you’d notice the difference.
But for the vast majority of what most people do with AI today? You’re fine. The model you’re using is not the bottleneck. You are the bottleneck — specifically, how clearly you can describe what you want and how thoughtfully you evaluate what you get back.
How to Think About the Arms Race
I’ve been following AI long enough to notice a pattern in myself: every time a big model announcement drops, I feel a little jolt of anxiety. Is this the one that makes everything I thought I knew outdated? Am I falling behind?
Then I sit down, use the tools, and realize that the fundamentals haven’t changed. AI is a force multiplier for clear thinking. If you know what you want and can articulate it, these tools are extraordinary. If you’re vague, they’re impressively vague right back at you. No model upgrade fixes that.
So here’s my framework for staying sane on the treadmill:
Ignore the benchmarks. Unless you’re a developer or researcher, the fact that Model X scores 3% higher than Model Y on graduate-level physics problems doesn’t affect your life. What matters is whether the tool helps you do the thing you’re trying to do.
Pick a tool and get good at it. Jumping between ChatGPT, Claude, and Gemini every time a new version drops is a recipe for never getting proficient at any of them. The best model is the one you know how to use well.
Pay attention to capabilities, not hype. When a new model launches, the useful question isn’t “is this the most powerful AI ever?” (the answer is always yes, briefly). The useful question is: “can it do something I couldn’t do before?” If yes, explore it. If not, keep doing what you’re doing.
Take the safety stuff seriously. The Mythos leak is a reminder that these models are getting powerful fast, and the companies building them are genuinely wrestling with the implications. Anthropic briefing government officials about cybersecurity risks isn’t marketing — it’s a company saying “this thing we built could be dangerous if mishandled.” That kind of candor deserves attention, not dismissal.
The Accidental Metaphor
There’s something almost too perfect about the way Mythos entered the public conversation. Not through a polished launch event or a carefully staged demo — through a misconfigured server. A human mistake. The most advanced AI model in the world, revealed because someone forgot to check a setting on the CMS. And then, before the news cycle even cooled, a second leak — this time their own source code, shipped in a botched npm package.
Two accidental exposures in one week. From the company telling Congress that AI-powered cyberattacks are about to get a lot worse. And the cleanup wasn’t exactly graceful either — Anthropic submitted a DMCA takedown notice that GitHub processed across the entire fork network, taking down about 8,100 repositories in one sweep. Anthropic later said the overbroad impact was accidental and retracted most of the notice.
It’s a reminder that for all the talk of superintelligence and existential risk and paradigm shifts, this stuff is still being built by people. People who misconfigure servers. People who leave draft blog posts in public caches. People who ship source maps they didn’t mean to. People whose damage control creates new damage. People who, like the rest of us, are making it up as they go along — just at a very high level.
That’s not a reason to dismiss the technology. It’s a reason to keep your feet on the ground while the hype cycle does what hype cycles do.
Mythos will eventually ship. It will be impressive. Another model will follow. The treadmill will keep spinning. And the people who get the most value from AI will be the same ones who always have: the ones who show up with a clear idea of what they need, a healthy skepticism about what they’re told, and the patience to learn the tools rather than chase the next shiny thing.
The upgrade that matters most is still the one between your ears.

Partner Spotlight: Duet Display
Duet Display turns your devices into an extended workspace and also offers remote access for working from anywhere. With fast, reliable, easy-to-use connectivity across Mac, PC, iOS, and Android, it's a practical fit for flexible work setups. Learn more or sign up at Duet Display.
Replace your first 4 hires with AI. Free workshop on April 8th.
Most early-stage founders can't afford their first four hires. Sales, marketing, dev, and support alone can run hundreds of thousands in salaries.
On April 8th, AI thought leader Heather Murray shows pre-seed and seed founders how to build all four functions using AI tools. Live, with demos, for free.
Register today and get a free AI tech stack worth $5K+ including Claude, AWS credits, Make, and 90% off HubSpot.
Stay productive, stay curious—see you next week with more AI breakthroughs!

