Do More Newsletter
This issue features the article "What Does AI Do with the Information I Share?" along with exciting product news about Airia – Interactive Apps Inside Your AI Chats; WaveSpeedAI Desktop – A Production Workspace for Power Users; Carat AI Agent App Store – Hire an AI Creator, Not Just a Tool; Nafy AI – All-in-One AI Music Creation for Everyone; and Anytime AI 2.0 – Agentic Workflows for Plaintiff Law.
Keep up to date on the latest products, workflows, apps, and models so you can excel at your work. Curated by Duet.

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:
Airia – Interactive Apps Inside Your AI Chats
Airia has introduced full support for MCP Apps, becoming the first enterprise AI platform that can render rich, interactive dashboards, forms, and visualizations directly inside an AI conversation. Instead of receiving only text responses, teams can now see live sales dashboards, configuration wizards, and compliance audit trails as embedded UIs that they can click, filter, and drill into—without leaving the chat. Because these interfaces are rendered from the underlying source systems, users see the real data rather than an AI’s interpretation, which helps reduce hallucinations in critical workflows. Airia also lets companies connect tools like Salesforce, Snowflake, Grafana, and Asana through a secure enterprise gateway, so interactive experiences stay governed under centralized security, audit logging, and access control.
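For readers curious what this looks like under the hood, here is a minimal sketch of the pattern MCP Apps builds on: an MCP tool that returns an embedded HTML resource which a compatible host can render inline in the chat. This is illustrative only, not Airia's implementation; the server name, tool name, ui:// URI, and markup are assumptions, and hosts without UI support simply fall back to the plain-text content item.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical MCP server exposing one tool whose result includes an
// embedded HTML resource alongside a plain-text fallback.
const server = new McpServer({ name: "sales-dashboard", version: "0.1.0" });

server.registerTool(
  "show_sales_dashboard",
  {
    title: "Sales dashboard",
    description: "Render a live sales dashboard for a region",
    inputSchema: { region: z.string() },
  },
  async ({ region }) => ({
    content: [
      // Fallback for hosts that only render text.
      { type: "text", text: `Sales dashboard for ${region}` },
      // Embedded UI resource; the ui:// scheme and markup are illustrative.
      {
        type: "resource",
        resource: {
          uri: `ui://sales-dashboard/${region}`,
          mimeType: "text/html",
          text: `<h1>Sales: ${region}</h1><p>(live data rendered here)</p>`,
        },
      },
    ],
  })
);

await server.connect(new StdioServerTransport());
```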
WaveSpeedAI Desktop – A Production Workspace for Power Users
WaveSpeedAI has launched Desktop, a professional-grade AI workspace designed to turn generative AI from a series of experiments into a streamlined, production-ready workflow. Desktop provides a structured local environment where creators and developers can run daily AI models efficiently, compare outputs across multiple models in a multi‑tab interface, and generate up to 16 variations at once for rapid iteration. One‑click templates make it easy to reuse complex model and parameter setups, while full LoRA support and local asset history give advanced users precise style control, versioning, and traceability over large volumes of generated content. The goal is to “make the tool disappear so creation can take center stage,” condensing what used to be a tangle of scripts, folders, and manual tracking into a single production cockpit.
Carat AI Agent App Store – Hire an AI Creator, Not Just a Tool
Carat AI has rolled out its new Agent App Store, reframing the platform from a simple content tool into a “workforce” of AI Content Creators that operate like virtual team members. Instead of manually stitching together scripts, images, and edits, users can spin up specialized Mini Agents that handle the entire creative pipeline—from planning and scriptwriting through multimodal generation, editing, and final export—via natural conversation. For small businesses, this means the ability to produce broadcast‑quality ads and marketing assets without hiring an agency, while influencers and solo creators can compress days of shooting and editing into a single working session. Carat’s Contextual Memory Core keeps track of project preferences, assets, and prior decisions so every new brief feels like working with a consistent, always‑on creative partner.
Nafy AI – All‑in‑One AI Music Creation for Everyone
Nafy AI has officially launched an all‑in‑one AI music platform that lets anyone generate royalty‑free, studio‑style tracks in seconds using plain‑language prompts. Users can describe mood, genre, tempo, and instrumentation to create fully arranged songs, or feed in their own lyrics so the Lyrics‑to‑Song feature can build synchronized instrumentation and vocals around them. The platform also includes an AI Song Cover Generator to restyle existing tracks, music extension tools to lengthen compositions while preserving structure, and utilities like an AI Music Editor, Lyrics Generator, and Vocal Remover to handle post‑production work in one place. With royalty‑free commercial usage on paid plans and support for custom voices, Nafy AI is positioned as a consumer‑friendly option for YouTubers, podcasters, indie game developers, and hobbyists who need original soundtracks without studio budgets.
Anytime AI 2.0 – Agentic Workflows for Plaintiff Law
Anytime AI released its 2.0 update, featuring "Talk to Teddy," an agentic AI workflow tool specifically tailored for the legal sector. The platform automates the tedious aspects of plaintiff law, such as medical record summarization and case discovery, by allowing attorneys to interact with case files via a conversational interface. This update represents a shift from static document analysis to active, agentic workflows that can draft legal arguments and identify critical case evidence autonomously.

Product Spotlight: Carat AI Agent App Store
Carat AI’s new Agent App Store marks a notable shift in how creators and small businesses can think about “using” AI for content: instead of a single tool that generates one asset at a time, the platform now offers an ecosystem of AI Content Creators that behave like specialized virtual employees. Each Mini Agent is built to own a complete workflow—from ideation to finished deliverables—so a user can simply describe the campaign or project they want and let the system orchestrate planning, scripting, asset generation, and editing behind the scenes. This design is aimed at people with strong ideas but limited technical skill or production bandwidth, lowering the barrier to professional‑grade media across video, imagery, audio, and more.
At a practical level, the Agent App Store bundles together a curated library of multimodal AI models and wraps them in agent behaviors tuned for specific production jobs. Instead of juggling separate tools for storyboard drafting, image generation, and sound design, a creator can hand off a high‑level brief to an agent that knows how to chain those steps in the right order with appropriate model settings. Carat’s Contextual Memory Core tracks project details (brand voice, preferred aspect ratios, prior asset choices) so later revisions—like “update this for a spring campaign” or “localize for Japanese audiences”—can be applied with minimal re‑prompting.
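To make that pattern concrete, here is a toy sketch in TypeScript of an agent that chains pipeline steps in order and consults a persistent memory object so follow-up briefs reuse prior context. Every name here is hypothetical; Carat has not published an API like this, and the sketch only illustrates the general "chained steps plus shared memory" design described above.

```typescript
// Toy illustration of an agent chaining pipeline steps over shared
// project memory. All names are hypothetical, not Carat's API.

interface ProjectMemory {
  brandVoice: string;          // e.g. "upbeat", "formal"
  aspectRatio: string;         // e.g. "9:16" for vertical video
  priorDeliverables: string[]; // earlier outputs the agent can reference
}

// Each step takes the brief plus memory and returns an artifact.
type Step = (brief: string, memory: ProjectMemory) => string;

const planStep: Step = (brief, m) =>
  `outline for "${brief}" in a ${m.brandVoice} voice`;
const scriptStep: Step = (brief, m) =>
  `script expanding the ${planStep(brief, m)}`;
const renderStep: Step = (brief, m) =>
  `${m.aspectRatio} video rendered from the ${scriptStep(brief, m)}`;

// The agent runs the steps in order and records the result, so a
// follow-up brief like "update this for a spring campaign" starts from
// what was produced before instead of re-prompting from scratch.
function runAgent(brief: string, memory: ProjectMemory): string {
  const deliverable = renderStep(brief, memory);
  memory.priorDeliverables.push(deliverable);
  return deliverable;
}

const memory: ProjectMemory = {
  brandVoice: "upbeat",
  aspectRatio: "9:16",
  priorDeliverables: [],
};
console.log(runAgent("30-second product launch teaser", memory));
```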
The immediate benefit of the new feature is time and coordination savings. A small business that might previously have needed an external agency can now produce TV‑style ads or social campaigns by working directly with agents that understand pacing, script structure, and visual composition, compressing what used to be weeks of back‑and‑forth into a single afternoon. For solo creators, the ability to spin up multiple agents—one focused on scripting, another on visual design, another on audio polish—means they can parallelize work that previously had to be done sequentially. Because the Agent App Store is available with a free tier offering daily credits, users can experiment with this “AI workforce” model before committing to higher‑volume paid plans.
There are also strategic implications in how Carat packages these capabilities. By framing agents as hireable creators rather than raw models, the platform encourages users to think in terms of roles and outcomes (“produce a launch video series” or “generate a product photo library”) instead of individual prompts and parameters. As Carat continues expanding its agent catalog and multimodal stack, creators may increasingly treat the service like a flexible studio team that scales with project demand, reserving their own time and attention for creative direction and decision‑making instead of manual production work.
What Does AI Do with the Information I Share?

You're chatting with an AI. Maybe you're asking it to proofread an email, brainstorm a business idea, or help you figure out why your code is broken. In the process, you're sharing information — sometimes very personal information — with a machine. So what is it doing with all that?
It's a fair question. Science fiction has primed us to worry about machines collecting our information until they know all our weaknesses and can strike. What AI actually does with your data is less dramatic than that, but it's worth understanding.
Two Different Things Are Happening
First, it helps to understand that there are really two separate data questions when it comes to AI chatbots like ChatGPT, Claude, Gemini, and others.
Question one: What happens to my data during our conversation? When you type something into an AI chatbot, the model processes your input to generate a response. Think of this like talking to someone — they hear what you say, respond, and (depending on the platform) may or may not remember it later. Most platforms retain your conversation history so you can go back and reference it, similar to how your text messages are stored on your phone.
Question two: Does my data get used to train future AI models? This is the big one, and it's where things get interesting — and where the major AI companies differ significantly.
The Training Question
Large language models learn from enormous datasets. The original training data for models like ChatGPT or Claude came from publicly available text across the internet — books, websites, forums, Wikipedia, academic papers, and so on. But what about the conversations you have with these tools after they're deployed? Does your chat about your messy divorce or your startup idea become training data for the next model?
The answer depends entirely on which platform you're using and what settings you've chosen.
OpenAI (ChatGPT): If you're using the free or Plus version of ChatGPT, your conversations can be used to train future models by default. OpenAI is upfront about this — they say the real-world data helps make their models more accurate and capable. However, you can opt out. There's a setting in your account under Data Controls where you can turn off "Improve the model for everyone." Important caveat: this isn't retroactive. Anything you typed before you flipped that switch may have already been used. OpenAI also offers Temporary Chat mode, which doesn't save your conversation or use it for training. For business and enterprise customers, OpenAI doesn't use any data for training by default.
Anthropic (Claude): Anthropic used to be the gold standard for default privacy — for years, Claude simply didn't use your conversations for training, period. That changed in late 2025. Anthropic now asks Free, Pro, and Max users to choose whether their chats and coding sessions can be used to train future models. The catch: the opt-in prompt features a large "Accept" button with a small toggle that defaults to on, nudging users toward sharing. If you opt in, Anthropic can retain your data for up to five years. If you opt out, the previous 30-day retention policy applies. Deleted conversations are never used for training, regardless of your setting. And importantly, business accounts — Claude for Work, the API, and enterprise plans — remain completely excluded from training data.
Google (Gemini): Google saves your Gemini conversation history to your account for 18 months by default through a feature called "Gemini Apps Activity," though you can adjust that window or turn it off entirely. For the free consumer version, your conversations may be used to improve Google's models, and a subset of chats are reviewed by human reviewers. Google's own guidance includes the warning: "Please don't enter confidential information that you wouldn't want a reviewer to see or Google to use to improve our services." As a multi-product company, Google can also potentially cross-reference your Gemini interactions with data from Search, Gmail, and other services. Enterprise Workspace and Cloud customers get stronger protections — Google pledges that enterprise data is not used for model training.
What About the People Who Can See Your Chats?
Here's something that surprises a lot of people: at most AI companies, a limited number of human employees can access your conversations. At OpenAI, authorized personnel may review chats for purposes like investigating abuse, providing customer support, handling legal matters, or improving model performance (unless you've opted out). Other companies have similar policies.
This doesn't mean someone is sitting in a cubicle reading your conversations for fun. Access is typically subject to security controls, logging, and training requirements. But it does mean your chats aren't as private as a conversation with your therapist.
A Stanford study published in late 2025 examined the privacy policies of six major AI companies and found that all six were using or could use chat data for model training, though opt-out mechanisms vary widely. The researchers also flagged a concern about children's data — most platforms aren't taking active steps to remove children's input from their training pipelines.
The Inference Problem
Even beyond the question of whether your data gets used for training, there's a subtler privacy concern that doesn't get enough attention: inference.
Modern AI models are remarkably good at connecting dots. A Northeastern University researcher pointed out in 2025 that LLMs can infer private information from seemingly harmless data. Ask for low-sugar recipes, and the model might classify you as health-vulnerable. Mention a few details about your daily routine across several conversations, and a sophisticated system could piece together a surprisingly detailed profile.
This matters because in multi-product ecosystems like Google or Microsoft, those inferences can potentially flow across services. Your Gemini conversations don't exist in a vacuum — they exist alongside your search history, email content, and location data.
As the Northeastern researcher put it, try asking an AI to search for all the information associated with your email address. You might be surprised by how much it knows.
The Security Risk
There's also the question of what happens when things go wrong. AI companies are high-value targets for hackers, and breaches have already occurred. In early 2025, a platform called OmniGPT was breached, exposing 30,000 email addresses and 34 million lines of chat messages. The data was reportedly sold on the dark web for $100. A Chinese AI startup had a similar incident when an unsecured database exposed over a million lines of log data, including chat histories and API keys.
These aren't hypothetical risks. The more data you share with any online service, the more exposure you have if that service gets compromised.
So What Should You Actually Do?
Here's the practical advice, stripped of paranoia:
Know your platform's defaults. Don't assume your conversations are private. Check your settings. If you're using ChatGPT's free tier, your data is being used for training unless you've specifically turned that off. If you're using Claude, you were asked to make a choice about training data in late 2025 — if you're not sure what you picked, go check your Privacy Settings now.
Don't share truly sensitive information. Social Security numbers, passwords, medical records, financial account details — these don't belong in an AI chat, period. Even with the best privacy policies, you're still transmitting data to servers you don't control.
Use business tiers for business data. If you're using AI for work that involves confidential information, the enterprise versions of these tools offer meaningfully stronger data protections, including contractual guarantees that your data won't be used for training.
Think about the inference chain. Even if a single piece of information seems harmless, consider what it reveals in combination with everything else you've shared. AI is very good at synthesis.
Remember that privacy policies change. What a company promises today may not be what they promise tomorrow. The AI landscape is evolving rapidly, and data practices are evolving with it.
The Bottom Line
The short answer to "what does AI do with my information?" is: it depends on which AI, which plan, and which settings you're using. The range runs from "almost nothing" to "quite a lot," and the responsibility for knowing the difference falls largely on you.
The good news is that the major AI companies have gotten better about transparency over the past year. Opt-out mechanisms exist, privacy controls are more accessible, and enterprise-grade protections are robust. The bad news is that the trend line moved in the wrong direction in late 2025 — even companies that once championed privacy-first defaults have shifted toward asking users to share their data, and the ask is often designed to nudge you toward saying yes.
So take five minutes to check your settings. It's the most impactful privacy decision you'll make all week, robot uprising or not.

Partner Spotlight: Duet Display
Duet Display turns your iPad, Android, Mac, or PC into an extra high‑performance display, giving knowledge workers and creators more space to work with applications and tools. It lets you extend or mirror your desktop over a wired or wireless connection with low latency, so you can keep AI dashboards, editing timelines, or chat agents on a secondary screen while your main display stays focused on core tasks. Duet also offers features such as touch and Apple Pencil support on compatible devices, making it especially useful for creative workflows, note‑taking, and whiteboarding alongside AI‑powered apps. Learn more at Duet Display.
Dictate prompts and tag files automatically
Stop typing bug reproductions and start vibe coding. Wispr Flow captures your spoken debugging flow and turns it into structured bug reports, acceptance tests, and PR descriptions. Say a file name or variable out loud and Flow preserves it exactly, tags the correct file, and keeps inline code readable. Use voice to create Cursor and Warp prompts, call out a variable like user_id, and get copy you can paste straight into an issue or PR. The result is faster triage and fewer context gaps between engineers and QA. Learn how developers use voice-first workflows in our Vibe Coding article at wisprflow.ai. Try Wispr Flow for engineers.
Stay productive, stay curious—see you next week with more AI breakthroughs!

