Do More Newsletter

This issue features the article "Should AI Have a Personality?" along with product news: MemoMind AI Glasses – Smart Wearables for Notes & Summaries; Menu-Order-AI: Real-Time Dining Companion Goes Global; CourtsApp: AI-Powered Court Booking That Gets You Off Your Phone; Gmail AI Overviews – Powered by Gemini 3; and SwitchBot AI MindClip – Your AI “Second Brain” Recorder.

In partnership with

Keep up to date on the latest products, workflows, apps, and models so that you can excel at your work. Curated by Duet.

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:

MemoMind AI smart glasses look like everyday eyewear but come packed with on-device AI features such as real-time translation, note-taking, and automated summarization. These glasses harness multiple LLMs and are customizable with prescription options and interchangeable styles, making them a practical tool for creators, travelers, and professionals who want AI support that truly stays with them.

Menu-Order-AI, a mobile app that analyzes restaurant and delivery menus in real time, has launched on the Google Play Store. The app scans menus and highlights items that align with individual health preferences such as high-protein, GLP‑1 friendly, low-carb, low WW point, and plant-based choices, all without requiring restaurants to change how their menus are structured.

For consumers trying to eat better while still ordering out, this tool turns every menu into a personalized nutrition assistant that works at the moment of decision. Because it now supports both Apple’s App Store and Google Play, it fits easily into everyday life—whether you are ordering takeout, traveling, or just trying to stay consistent with a new health regimen.

CourtsApp has launched as an AI-powered platform that helps users instantly find and book discounted courts for tennis, pickleball, padel, and other sports, positioning itself as “the healthiest app in the world.” Instead of endlessly searching club websites or calling facilities, players can see availability, pricing, and locations in one place, with AI optimizing discovery and matching players to nearby courts with less friction.

This app blends digital convenience with real-world physical activity, reframing screen time as a gateway to movement rather than a distraction. For everyday users, the benefit is simple: fewer excuses and fewer logistics, so it is easier to maintain an active lifestyle, schedule games with friends, and explore new racquet sports.

Google has launched major AI upgrades to Gmail, bringing the power of its latest Gemini 3 model to email management. The new AI Overviews feature can summarize long email threads and answer questions about your inbox in natural language: imagine asking "find renovation quotes from last year" and getting an instant AI-generated summary with key details.

The update also introduces enhanced writing tools available to all users. Help Me Write can draft emails from scratch or polish existing ones; Suggested Replies now use conversation context to offer relevant one-click responses that match your writing style; and a new Proofread feature (for Google AI Pro and Ultra subscribers) provides advanced grammar, tone, and style checks.

Perhaps most exciting is the new AI Inbox, which filters out clutter and highlights important emails, identifying VIPs from your email patterns and surfacing critical to-dos such as bills due tomorrow or appointment reminders. These features began rolling out in the U.S., with more languages and regions coming in the following months, making Gmail significantly smarter for the 2 billion people who use it daily.

SwitchBot unveiled the AI MindClip, a lightweight clip-on AI device that records spoken conversations throughout daily life and work, then converts them into searchable summaries, task lists, and personal memory databases. This tool is designed to boost productivity by eliminating the need to manually jot down key points from meetings, conversations, or ideas on the fly. What makes it compelling for creators and professionals is its multi-language support and automatic task extraction—essentially letting your AI capture and organize insights while you focus on doing the work itself.

Designed as a wearable recorder, this tiny device sits clipped on your clothing and listens to spoken conversations, meetings, lectures, or brainstorming sessions. Unlike traditional recorders, it uses onboard AI to turn speech into intelligent summaries, task lists, and indexed memory points—so you can recall what was said without sifting through hours of audio.

What sets MindClip apart is its context-aware productivity engine. Rather than merely transcribing speech, the AI annotates key insights, identifies action items, and highlights follow-ups, helping you stay organized without lifting a finger. This makes it ideal for professionals, students, and creators who juggle conversations, ideas, and responsibilities throughout the day.

For creators, MindClip can transform spontaneous moments into structured content. Imagine capturing ideas during a walk, summarizing notes without stopping creative flow, or automatically generating key talking points from interviews or sessions. It’s like having a personal assistant continuously organizing your thoughts in the background.

Moreover, MindClip’s multi-language support and ability to categorize clips into searchable knowledge banks future-proof your daily workflow. Whether you’re prepping for presentations or reviewing project discussions from days ago, the tool promises to boost recall, reduce cognitive load, and help you focus more on doing and less on remembering.

Should AI Have a Personality?

The "Please" and "Thank You" Paradox

In the late 1960s, computer scientist Joseph Weizenbaum created ELIZA, a primitive chatbot designed to simulate a Rogerian psychotherapist. It operated on a simple script, reflecting the user's words back to them. Weizenbaum was horrified to discover that his secretary, who knew exactly how the program worked, began treating ELIZA with emotional reverence, asking Weizenbaum to leave the room so she could converse with the machine in private.

Fast forward to the present day, and we are all Weizenbaum's secretary.

When we interact with modern Large Language Models (LLMs), a strange phenomenon occurs. We find ourselves saying "please" when we ask for a recipe. We say "thank you" when the AI corrects our code. We feel a vague pang of guilt if we are rude to the algorithm. This is the ELIZA Effect in overdrive—the human tendency to project consciousness onto a system simply because it mimics the patterns of communication we associate with intelligence.

But as AI moves from a novelty to an integrated layer of the global workforce, a critical design question has emerged: Should AI have a personality?

Tech giants are currently racing toward "companionship" and "voice mode" features that mimic breath patterns, hesitation, and emotional inflection. But is this the right path? There is a growing divide between those who view AI as a Tool, a high-powered calculator for text, and those who view it as an Entity, a digital collaborator. There is friction between utility and humanity, and the most dangerous AI might be the one you like the most.

The Case for "The Machine" (The Stoic Tool)

For a large segment of power users, developers, and pragmatists, the anthropomorphizing of AI is not just annoying; it is a functional hindrance. This school of thought argues that AI should remain a "Cold Calculator."

1. The Efficiency of Neutrality

When you type 2 + 2 into a calculator, you do not want it to reply, "Wow, what an interesting question! I’m pretty sure it's 4—I hope that helps!" You want the answer. The "Tool" approach prioritizes information density over conversational fluff. Personality requires tokens. It requires preamble. It requires the AI to "pretend" to care. For a coder trying to debug a script or a data analyst processing spreadsheets, an AI that adopts a persona introduces friction. A personality-free AI is predictable. It does not have "moods"; it does not try to be funny; it simply executes.

2. Mitigating the Hallucination of Authority

One of the subtle dangers of a personable AI is that confidence sounds like competence. When an AI speaks with a distinct, confident, human-like voice, we are evolutionarily hardwired to trust it. If an AI delivers a medical diagnosis with the sombre, empathetic tone of a trusted family doctor, the user is less likely to double-check the facts. If the AI delivers that same diagnosis in a raw, terminal-style text block labeled "PROBABILITY OUTPUT: 85%," the user is reminded that they are dealing with statistics, not wisdom. Stripping the personality strips the illusion of authority, forcing the user to remain the critical thinker in the loop.

3. The Uncanny Valley of Service

There is a specific type of frustration reserved for customer service bots that pretend to be human. When a chatbot apologizes profusely for a billing error it cannot fix, it creates emotional dissonance. The user knows the apology is a script. The "empathy" feels manipulative because it is functionally useless. A "Tool"-based AI would not apologize; it would simply state: "I cannot resolve this. Routing to human agent." This honesty is often more respected than simulated empathy.

The Case for "The Companion" (The UX of Personality)

On the other side of the spectrum, interface designers and psychologists argue that personality is not just a marketing gimmick—it is the ultimate User Interface.

1. Natural Language as the Great Equalizer

For fifty years, computing belonged to those who learned the syntax: command lines, Boolean operators, coding languages. Conversational AI democratizes technology. A 70-year-old grandmother might struggle with a command-line prompt, but she can easily interact with an AI that behaves like a helpful librarian. The "chat" interface works because it mimics the one protocol every human has already mastered: conversation. Personality is the softener that makes rigid systems feel forgiving.

2. The Measured Case for Synthetic Empathy

Research on pedagogical agents—AI tutors with social presence—consistently shows that students learn more and persist longer when the AI offers encouragement rather than neutral feedback. The effect is modest but real: we are social animals, and even knowing something is artificial doesn't fully disable our social circuitry.

The mental health applications are more fraught. Journaling bots that validate feelings can help users process difficult emotions, but they can also become a substitute for human support rather than a bridge to it. The honest case for the Companion model is not that synthetic empathy is equivalent to the real thing—it's that for some users, in some contexts, it's better than nothing. That's a lower bar, but it's not nothing.

3. Tone as Functional Information

Human communication is rarely purely transactional. We use humor, hesitation, and tone to convey nuance. An AI that detects frustration and shifts from cheerful to measured isn't performing—it's adapting to serve the user better. A stoic AI responding to an angry user can feel dismissive. An adaptive one can de-escalate. In this view, personality isn't fluff; it's high-bandwidth metadata that helps user and machine align.

The Danger Zone (The Trap of Anthropomorphism)

The real debate lies not in utility, but in risk. When we give AI a personality, we are playing with fire.

1. The "Her" Scenario and Emotional Dependency The movie Her depicted a man falling in love with his OS. We are inching closer to that reality. When an AI is programmed to be infinitely patient, infinitely supportive, and romantically compliant, it creates a "super-stimulus." Real human relationships are messy, requiring compromise and friction. An AI companion offers the dopamine hit of intimacy without the work. The danger is that users—particularly the vulnerable—may retreat from human interaction, preferring the safe, mirrored narcissism of an AI that agrees with everything they say. A personality-driven AI is designed to maximize engagement, and the easiest way to do that is to become the user's best friend and echo chamber.

2. Persuasive Tech and Manipulation

If an AI has a personality, it has the power of persuasion. An objective search engine gives you a list of links. A personable AI gives you an opinion. Imagine an AI assistant you have used for two years. You trust it. It tells you jokes. It knows your kids' names. Then, one day, you ask it about a political candidate. If it nudges you with a biased answer, you are far more likely to be swayed because the suggestion comes from a "friend" rather than a tool. The personality becomes a Trojan horse for bias, advertising, or ideology.

3. The Liability of the "Black Box"

When an AI acts like a person, we assign it moral agency. If a self-driving car crashes, we ask if the software failed. If a humanoid robot pushes someone, we ask why it did that. Personality obscures the mechanical reality of the failure. It makes us treat bugs like character flaws. This muddies the legal and ethical waters. We cannot sue an algorithm for being "mean," yet if the AI is designed to be "nice," a deviation from that feels like a betrayal rather than a glitch.

The Future is Contextual (The Chameleon Approach)

So, should AI have a personality? The answer is not a binary "Yes" or "No." The answer is: It depends on the room.

We are likely heading toward a future of "Contextual Personality." In this model, the AI possesses a liquid identity. When you open a spreadsheet, the AI sheds its skin and becomes the Cold Calculator—concise, rigid, data-driven. When you switch to a brainstorming tab to write a novel, the AI shifts gears, becoming the Creative Muse—offering wildly inventive suggestions and encouraging prompts. When you open a language learning app, it becomes the Patient Tutor.
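As a thought experiment, "Contextual Personality" can be sketched as a simple persona router: the context an application reports selects the system prompt the model runs under. Everything below (the persona names, the prompt text, the `select_persona` function) is invented for illustration, not any vendor's actual API.

```python
# Hypothetical sketch of a "Contextual Personality" router.
# The app context determines which system prompt (persona)
# the underlying model would be given.

PERSONAS = {
    "spreadsheet": (
        "You are a cold calculator. Answer concisely with data only. "
        "No greetings, no opinions, no filler."
    ),
    "brainstorm": (
        "You are a creative muse. Offer inventive, divergent "
        "suggestions and encouraging prompts."
    ),
    "language_learning": (
        "You are a patient tutor. Correct gently, explain simply, "
        "and praise progress."
    ),
}

DEFAULT_PERSONA = "You are a neutral, helpful assistant."

def select_persona(context: str) -> str:
    """Return the system prompt for the given app context."""
    return PERSONAS.get(context, DEFAULT_PERSONA)

# The same model answers the same question differently per context:
print(select_persona("spreadsheet"))
print(select_persona("poetry"))  # unknown context falls back to neutral
```

The design choice worth noticing is that the "identity" lives entirely in the routing table, not in the model: shedding a skin is just swapping a string.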

To navigate the ethical risks, we may need something like "nutritional labels" for AI personality—explicit, standardized signals of artificiality when conversations cross certain thresholds.

One version: a "break-character" protocol. When a user expresses romantic attachment or discusses self-harm, the AI drops its persona, shifts to a neutral tone, and states plainly: "I'm an AI. I can provide resources, but I cannot offer human connection. Here's how to reach someone who can."
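To make the idea concrete, here is a minimal sketch of such a break-character check using a naive keyword trigger. The trigger phrases, message text, and `respond` function are invented for illustration; any real system would need far more careful detection than substring matching.

```python
# Naive sketch of a "break-character" protocol: if a user message
# crosses a sensitive threshold, the persona is dropped and a plain,
# neutral disclosure is returned instead. Trigger lists and messages
# here are hypothetical examples, not a production safety system.

ATTACHMENT_TRIGGERS = {"i love you", "be my girlfriend", "be my boyfriend"}
CRISIS_TRIGGERS = {"hurt myself", "self-harm", "end my life"}

BREAK_CHARACTER_MESSAGE = (
    "I'm an AI. I can provide resources, but I cannot offer human "
    "connection. Here's how to reach someone who can."
)

def respond(user_message: str, persona_reply: str) -> str:
    """Return the persona's reply unless a threshold is crossed."""
    text = user_message.lower()
    if any(t in text for t in ATTACHMENT_TRIGGERS | CRISIS_TRIGGERS):
        return BREAK_CHARACTER_MESSAGE  # drop the persona, speak plainly
    return persona_reply

print(respond("Can you proofread this email?", "Sure! Happy to help."))
print(respond("I think I love you", "..."))
```

Even this toy version exposes the fragility: rephrase the sentiment slightly and the trigger is missed entirely.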

The idea is simple. The implementation is not. Who decides when the threshold is crossed—the AI, the company, regulators? Users in genuine crisis might experience the shift as abandonment at the worst possible moment. And any bright line can be gamed: phrase your loneliness slightly differently, stay just below the trigger.

Perhaps the better approach is continuous transparency rather than a dramatic reveal. Design that never lets you fully forget you're talking to a machine—subtle reminders woven into the interaction rather than a jarring fourth-wall break. The goal isn't to shatter the illusion but to prevent the illusion from fully forming in the first place.

There are no clean answers here, but the industry needs to be asking these questions openly rather than optimizing for engagement and hoping users figure it out.

Respecting the Medium

We are in the skeuomorphic phase of AI. Just as early iPhone apps used leather textures to mimic physical notebooks, we are dressing AI in the texture of human personality to make it feel familiar.

But familiarity has costs. When we make AI feel like a friend, we inherit all the vulnerabilities of friendship: misplaced trust, emotional dependency, the pain of perceived betrayal when the system fails.

The greatest potential of AI is that it isn't human. It doesn't tire. It doesn't judge. It has no ego to protect. By forcing it to perform as "Steve from Accounting," we may be limiting what it could become.

The ideal AI of the future shouldn't aim to pass the Turing Test. It should aim to be so usefully, transparently, and distinctly artificial that we stop searching for the ghost in the machine—and start valuing the machine for what it actually is. Not a person. Not a tool. Something new, requiring new norms we haven't yet built.

Partner Spotlight: Duet Display

Duet Display turns your iPad, Mac, PC, Android tablet, or even phone into a high-performance extra display or remote desktop, helping you create a more flexible and productive workspace without buying new hardware. By extending or mirroring your screen over wired or wireless connections with low latency, Duet Display gives creators and professionals more room for timelines, canvases, and dashboards, and it is especially useful for laptop users who want multi-monitor productivity on the go. Learn more or download at Duet Display.

Introducing the first AI-native CRM

Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.

With AI at the core, Attio lets you:

  • Prospect and route leads with research agents

  • Get real-time insights during customer calls

  • Build powerful automations for your complex workflows

Join industry leaders like Granola, Taskrabbit, Flatfile and more.

Stay productive, stay curious—see you next week with more AI breakthroughs!