Do More Newsletter

This issue features the article "Your AI Doctor Will See You Now" plus product news about Adobe Firefly AI Assistant, Jitterbit EDI AI Assistant, Eluvio Video Intelligence Editor, Base44 Superagents, and Vylit AI Tools.

Keep up to date on the latest products, workflows, apps and models so that you can excel at your work. Curated by Duet.

If you want help deploying AI in your business, email us at [email protected].

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:

Adobe introduced a new Firefly AI Assistant that lets creators direct multi-step creative workflows in natural language across Adobe apps like Firefly, Photoshop, Premiere, Lightroom, Express, and Illustrator. For consumers and small businesses, that means less time switching tools and more time actually making marketing graphics, social posts, product visuals, and short videos.

Jitterbit announced general availability of its EDI AI Assistant inside the Harmony platform, giving users a conversational way to manage complex electronic data interchange tasks. Small businesses that rely on suppliers, invoices, or logistics partners can benefit from simpler day-to-day operations without needing to be EDI experts.

ELUV.IO unveiled new advanced AI tools for its next-generation EVIE video intelligence editor, which is designed to help users work with video libraries and live sports content more intelligently. For creators and small teams, the value is faster organization, smarter editing support, and a more AI-native workflow for turning video assets into usable content.

Base44 launched Superagents, a new AI agent experience that helps users create personalized autonomous agents using natural language. This is a strong fit for entrepreneurs and small teams who want to build practical tools, automate repetitive work, or prototype internal workflows without heavy technical overhead.

Meta’s newly announced Muse Spark is being integrated across apps like Facebook, Instagram, WhatsApp, and Messenger, bringing AI directly into tools millions already use daily. This rollout signals a major move toward “AI-first interfaces,” where users interact with intelligent assistants instead of traditional menus or workflows. For small businesses, especially those relying on social media, this could streamline content creation, customer engagement, and messaging automation without requiring new software adoption.

Adobe Firefly is Adobe’s all-in-one creative AI studio for generating and editing images, video, and design assets across the Creative Cloud ecosystem. The new Firefly AI Assistant adds a conversational layer on top of that system, so users can describe what they want in plain English and have the assistant orchestrate the steps across multiple Adobe apps.

The main benefit is speed without losing creative control. Adobe says the assistant is designed to handle complex, multi-step workflows while keeping the creator in charge of decisions, refinements, and final approvals, which makes it practical for everyday creators and small businesses that need polished content fast.

It also matters that the feature is not just a chatbot bolted onto a design tool. Adobe says the assistant can use pre-built creative skills, understand the assets being worked on, and even incorporate review feedback through Frame.io, which can shorten the path from idea to finished asset.

For consumers, that could mean easier social content, event graphics, and personal projects; for small businesses, it could mean faster production of ads, product visuals, and campaign materials without needing a full creative team. Adobe’s broader pitch is that this turns the software from a set of tools into a more guided creative partner.

Your AI Doctor Will See You Now

In January, Utah became the first state in America to approve a pilot program for AI-mediated prescription renewals. As of April 2026, the pilot is still in its first phase, and all prescription requests still require authorization by a licensed physician — but the trajectory is unmistakable.

The program is called Doctronic. It covers 192 medications for chronic conditions like high blood pressure, diabetes, and depression. Patients answer a series of questions. The AI evaluates their responses. If everything checks out, the prescription gets renewed. Under the agreement, the first 250 cases get pre-issuance physician review; the next 1,000 get retrospective physician review; and later phases call for monthly sampling of 5–10% of renewals, quarterly reviews of escalated cases, plus the ability for pharmacists and patients to trigger physician review at any time. It’s a phased leash, not an open field.
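The phased oversight rules described above can be sketched as a small decision function. This is purely an illustration of the schedule as reported, not actual Doctronic logic; the function name, parameters, and the 5% default sampling rate are assumptions for the sketch.

```python
import random

def review_requirement(case_number, escalated=False, manual_trigger=False,
                       sample_rate=0.05, rng=random.random):
    """Illustrative sketch of the phased physician-review rules.

    case_number    -- 1-based position of the renewal in the pilot
    escalated      -- case flagged for escalation (quarterly review)
    manual_trigger -- pharmacist or patient requested review
    sample_rate    -- assumed monthly sampling rate (5-10% per the pilot)
    """
    if manual_trigger or escalated:
        # Pharmacists and patients can trigger physician review at any
        # time; escalated cases get quarterly physician review.
        return "physician_review"
    if case_number <= 250:
        return "pre_issuance_review"   # first 250 cases reviewed before issuance
    if case_number <= 1250:
        return "retrospective_review"  # next 1,000 cases reviewed after the fact
    # Later phases: monthly sampling of a fraction of renewals.
    return "sampled_review" if rng() < sample_rate else "no_review"
```

Note the ordering: the manual and escalation triggers are checked first, so the human-override paths always win over the volume-based phases.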

For the millions of Americans who wait weeks for a five-minute prescription renewal appointment — sometimes rationing their last few pills in the meantime — the appeal is obvious. But the program also raises a question that’s becoming harder to avoid: how much of your healthcare should an algorithm be responsible for?

That question is no longer hypothetical. AI is already deeply embedded in medicine, and the pace is accelerating.

Where AI Is Already in the Room

AI has moved well beyond the chatbot-answering-health-questions phase, though that’s happening too — OpenAI introduced ChatGPT Health on January 7, 2026, initially rolling it out to a small group of early users. The company says more than 230 million people already ask health and wellness questions on ChatGPT each week — a figure that covers the broader platform, not just the new Health product, but one that signals how many people are already turning to AI before they turn to a doctor.

But the more consequential developments are happening inside hospitals. AI systems are reading radiology scans — X-rays, MRIs, mammograms — and in some studies catching abnormalities that human radiologists miss. Emergency departments in the US, UK, and Australia are testing AI triage systems that assess patients on arrival, analyzing symptoms and vital signs to prioritize who needs immediate attention. Behind the scenes, AI tools are flagging drug interactions, predicting hospital readmissions, and chewing through the administrative paperwork that consumes an estimated 25% of a physician’s workday.

The early results in some areas are promising. A landmark study published in NEJM AI found that AI-assisted triage improved identification of critical care patients from about 79% to 83% and reduced the time from arrival to initial care by a third. For patients in rural areas or underserved communities with limited access to specialists, the efficiency gains could be transformative.

The Accuracy Problem

But those gains come with a significant caveat: AI’s diagnostic accuracy, particularly in consumer-facing settings, is still alarmingly uneven.

A study from Mass General Brigham, published this month, found that AI chatbots failed to produce an appropriate differential diagnosis more than 80% of the time when given initial patient information. The models performed better with detailed clinical data, but most people typing symptoms into a chatbot at midnight aren’t providing comprehensive medical histories — they’re describing what hurts and hoping for an answer.

Research from Mount Sinai found similar gaps in triage specifically. AI systems handled clear-cut emergencies well — a heart attack presents as a heart attack. But for ambiguous cases where the danger isn’t immediately obvious, the AI under-triaged more than half of the cases that physicians flagged as emergencies. In one scenario involving an asthma patient, the system correctly identified early warning signs of respiratory failure in its own written analysis — and then advised the patient to wait rather than seek emergency care.

The system saw the danger, documented it, and gave the wrong recommendation anyway. It’s a failure mode that’s distinct from human error: AI can hold contradictory information without noticing.

Who Gets Left Behind

There’s also the question of who AI healthcare works for — and who it doesn’t.

Researchers tested nine AI diagnostic programs using a thousand emergency room cases and found that recommendations often shifted based on a patient’s race, gender, income, or housing status — not their actual medical condition. This isn’t a glitch awaiting a patch. It reflects the training data. Models learn from the medical system as it exists, including its historical blind spots and disparities.

The downstream effects are measurable. Peer-reviewed research has found underdiagnosis bias in chest-X-ray AI for underserved populations, and dermatology reviews continue to report reduced performance or limited validation data for patients with darker skin tones.

The irony is hard to miss: the communities that stand to gain the most from AI-augmented healthcare — those with the least access to specialists and the longest wait times — are the same communities most likely to be underserved by the technology.

A Patchwork of Rules

Legislators are starting to respond, and the early signals point in very different directions depending on where you live.

Utah’s Doctronic program represents the “let’s try it” school of thought — a regulatory sandbox with guardrails (initial human review, automatic escalation for complex cases, strict data-use restrictions) but a fundamental willingness to let AI make medical decisions. Questions are already being raised about whether the FDA should have a role.

Maine just went the other direction. Governor Mills signed LD 2082 into law on April 13, 2026, barring anyone from offering therapy or psychotherapy services to the public through AI unless those services are provided by a licensed professional. Therapists can still use AI for scheduling, billing, and record-keeping — the administrative layer — but the clinical line is now drawn in statute. Missouri is considering a narrower proposal: HB 2372, which passed the House and has been referred in the Senate, would prohibit representing AI as capable of therapy, psychotherapy, or mental health diagnosis.

Meanwhile, Nebraska’s LB525 passed final reading with AI-disclosure rules for minors and broader anti-deception provisions, and Maryland’s HB 883 — a behavioral-health AI bill — passed the House but was still working through the Senate at the time of writing. The state-level patchwork is expanding quickly, and while there’s no single comprehensive federal health-AI statute, the FDA, HHS, and FTC are all actively shaping the field through guidance, strategy documents, and enforcement actions.

What This Means for Patients

The practical reality for consumers right now is a landscape with very few guardrails and a lot of hype in both directions.

AI is excellent at explaining medical concepts in plain language — translating a diagnosis into something a patient can actually understand, helping someone prepare better questions for their doctor, or summarizing dense medical literature into actionable takeaways. Used as a research and comprehension tool, it’s genuinely valuable.

Where it gets risky is when that same tool is used as a substitute for professional judgment. The 80% differential-diagnosis failure rate on initial presentations isn’t a number to wave away. And the most dangerous failure mode isn’t AI flagging something as serious when it’s not — it’s the reverse. AI reassuring a patient that they’re fine when they aren’t.

Utah’s Doctronic pilot — whose initial 12-month term runs to late October 2026, with the possibility of extension — will likely become a bellwether. If the program demonstrates safe outcomes at scale, other states will follow. If something goes wrong, the backlash could set back AI healthcare adoption by years.

The Road Ahead

AI is going to be a significant part of medicine — the efficiency gains are too large and the access problems too persistent for it to be otherwise. But right now, the technology is a long way from practicing medicine on its own. An 80% differential-diagnosis failure rate on first contact. Triage systems that document warning signs and then ignore them. Bias patterns that hit the most vulnerable patients hardest. These aren’t edge cases waiting to be polished out in the next software update. They’re fundamental gaps between what AI can do — process information at scale — and what medicine actually requires: judgment, context, and the ability to recognize when the textbook answer is the wrong one.

The tools are real, and in the right hands they’re already making healthcare faster and more accessible. But “in the right hands” is doing a lot of work in that sentence. Until AI can reliably tell the difference between a patient who’s fine and a patient who’s about to not be, the human in the white coat isn’t optional. They’re the whole point.

For now.

Partner Spotlight: Periscope by Duet Display

Periscope - The Only VPN You Can Trust. A simple, secure way to access the apps and services you rely on, even when location-based restrictions get in the way. It helps users get around geo-blocking so they can reach their streaming services and banking applications with less friction, no matter where in the world they are, while emphasizing privacy for everyday use. Learn more at Periscope.

The Free Newsletter Fintech and Finance Execs Actually Read

If you work in fintech or finance, you already have too many tabs open and not enough time.

Fintech Takes is the free newsletter senior leaders actually read. Each week, I break down the trends, deals, and regulatory moves shaping the industry — and explain why they matter — in plain English.

No filler, no PR spin, and no “insights” you already saw on LinkedIn eight times this week. Just clear analysis and the occasional bad joke to make it go down easier.

Get context you can actually use. Subscribe free and see what’s coming before everyone else.

Stay productive, stay curious—see you next week with more AI breakthroughs!