Do More Newsletter
Keep up to date on the latest products, workflows, apps and models so that you can excel at your work. Curated by Duet.

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:
Wrike Copilot - Wrike upgraded its AI assistant into a real-time teammate, delivering instant insights and workflow automation that reduces manual tasks and accelerates project delivery for enterprise teams.
DeepL Autonomous AI Agent - DeepL unveiled a beta of its autonomous AI agent designed to take on complex, tedious tasks across sales, marketing, finance, and localization—operating securely within digital work environments.
Sourcetable AI SuperAgents for Excel - Sourcetable launched AI SuperAgents to automate advanced data analysis and workflow orchestration directly inside Excel spreadsheets, streamlining repetitive work and empowering users with intelligent automation.
Lenovo AI-Powered Devices & Experiences - Lenovo revealed a full portfolio of AI-enabled hardware and software, including adaptive laptops and creator tablets aimed at enhancing productivity and creation workflows on the go.
Ada AI Data Analyst - Ada quickly topped Product Hunt as the first fully automated AI data analyst, transforming raw data into actionable business insights with natural language interaction and automated SQL and reporting.
Liquid Web AI Side Hustle Starter Kit - Liquid Web released a free kit that helps entrepreneurs launch AI-driven side hustles swiftly with blueprints, checklists, and essential AI tools for rapid website and content deployment.

Wrike Copilot — Your AI Teammate for Real-Time Productivity
Wrike Copilot is an AI-powered assistant designed to make project management faster, smarter, and more efficient. By combining the power of Wrike’s work management platform with intelligent automation, Copilot helps teams save time, reduce errors, and focus on what matters most.
Smarter Task Management
Wrike Copilot can automatically draft task descriptions, summarize project updates, and suggest next steps. This means less time spent writing or searching for details, and more time executing on priorities.
Enhanced Collaboration
Teams can leverage Copilot to generate meeting notes, create summaries of long discussions, or quickly highlight key action items. Everyone stays aligned without needing to wade through lengthy threads.
Faster Decision-Making
With AI-driven insights, Copilot can analyze workloads, identify bottlenecks, and provide recommendations for balancing resources. Managers gain real-time clarity to make better decisions.
Reduced Administrative Burden
Repetitive tasks like organizing updates, formatting reports, or preparing project briefs can be handled by Copilot, freeing up team members to focus on strategic and creative work.
Continuous Improvement
As Copilot learns from your workflows, it becomes more tailored to your team’s needs. Over time, it helps refine processes and drive productivity gains across projects.
In today’s fast-paced business environment, Wrike Copilot acts as both a productivity booster and a collaboration partner, helping organizations move faster, stay aligned, and achieve better outcomes.
Inside an AI Brain: How ChatGPT (and Other AI) Actually Works

You type some text, and the computer responds as if it understands exactly what you said. Only a few years ago, computers couldn’t do that. So what’s going on?
Large language models (LLMs) like ChatGPT (and Claude, Gemini, and many others) are everywhere now, and a basic understanding of how they work will help you use them better. Here’s a simplified explanation of what’s going on inside AI, without the math or the jargon.
First, what is an LLM?
Think of an LLM as autocomplete on rocket fuel. Your phone guesses the next word; ChatGPT guesses the next tiny chunk of text (called a token) again and again – so quickly and so well – that the sequence reads like a paragraph, an explanation, or a story. That’s the core trick: predict the next token given everything seen so far.
Crucially, the model doesn’t “look up” facts in a database when it replies. It generates text on the fly, using patterns it learned from reading a huge sample of public writing – books, articles, websites, code – during training.
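To make that loop concrete, here’s a minimal sketch in Python. The toy_next_token function is a made-up stand-in for the neural network; a real model predicts from learned patterns rather than a canned table, but the outer loop has the same shape: predict, append, repeat.
```python
# A toy version of the core LLM loop: predict the next token,
# append it, and feed the longer sequence back in.

def toy_next_token(tokens):
    """A made-up stand-in for the neural network's prediction."""
    canned = {"Mary": "had", "had": "a", "a": "little", "little": "lamb"}
    return canned.get(tokens[-1], "<end>")

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        nxt = toy_next_token(tokens)   # one prediction per step
        if nxt == "<end>":
            break
        tokens.append(nxt)             # the output becomes part of the input
    return " ".join(tokens)

print(generate(["Mary"]))  # -> Mary had a little lamb
```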
How it learns: the giant “next-word” game
How do you teach an AI to generate human-like text? The training process is a bit like teaching by example – on a mind-boggling scale. Developers feed the LLM tons of text (books, Wikipedia articles, forum posts, news, you name it) and have it play the “next word prediction” game. This training is called self-supervised learning, because the training data doesn’t need manual labels – the next word in each sentence is its own label.
After the base training, companies do instruction tuning and sometimes a refinement step called reinforcement learning from human feedback (RLHF): basically, people give thumbs-up/thumbs-down on draft answers, and the model learns to favor the good ones.
Here’s basically how it works:
Show the model a slice of text (e.g., “Mary had a little …”).
Guess the next token (“lamb”).
Check the real answer from the training text.
Nudge the model’s internal knobs (parameters) so future guesses are slightly better.
Repeat this on countless sentences from across the internet, and the model becomes uncannily good at continuing text in ways that sound natural. It’s not memorizing every sentence – it’s absorbing patterns: grammar, style, typical facts, how ideas connect. Over time, these patterns get baked into billions of tiny numbers inside the model.
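If you want to see the spirit of this in code, here’s self-supervised next-word learning in miniature: a “model” that just counts which word follows which in a tiny invented corpus. Real LLMs nudge billions of parameters with gradient descent instead of keeping counts, but the key idea is the same – the next word in the text is its own label.
```python
from collections import Counter, defaultdict

# Self-supervised "next-word" learning in miniature: the label for
# each word is simply the word that actually follows it in the text.
# (Real LLMs adjust billions of parameters instead of keeping counts.)

corpus = "mary had a little lamb . mary had a big idea .".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1          # the next word is its own label

def predict(word):
    """Guess the continuation seen most often during 'training'."""
    return follows[word].most_common(1)[0][0]

print(predict("mary"))  # -> had
print(predict("had"))   # -> a
```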
A helpful analogy: imagine you read libraries’ worth of material and, every time you guessed wrong about how a sentence would continue, your brain rewired itself a hair toward the right answer. After enough practice, you’d be world-class at anticipating what comes next.
Tokens: the building blocks
LLMs work in tokens. A token is a small slice of text: often a word (“cat”), sometimes part of a word (“un-” + “likely”), sometimes punctuation or a space. The model converts text into tokens, predicts the next token, then the next, and so on. That’s how long replies are built – one token at a time. Tokens are also the model’s granularity of understanding. An early party trick was to ask ChatGPT how many r’s are in the word “strawberry,” and it would consistently get it wrong. It never saw the letters – just tokens such as “straw” and “berry” (a common split), each represented internally as a number.
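You can see tokenization for yourself with a short sketch using OpenAI’s open-source tiktoken library (one tokenizer among many – exact splits vary by model and even by leading spaces):
```python
# Peek at how a GPT-style tokenizer slices text into tokens.
# Requires: pip install tiktoken  (OpenAI's open-source tokenizer)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["strawberry", "How many r's are in strawberry?"]:
    ids = enc.encode(text)                     # list of integer token IDs
    pieces = [enc.decode([i]) for i in ids]    # the text slice behind each ID
    print(f"{text!r} -> {ids} -> {pieces}")

# The model sees only the integer IDs, never individual letters -
# which is why letter-counting questions used to trip it up.
```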
How a reply is formed
Okay, so the model has been trained to predict next words. How does that create an answer to your question?
Let’s use a concrete example. Suppose you ask: “Who played Rose in the 1997 film Titanic?” The prompt the model sees might be: “The actress that played Rose in the 1997 film Titanic is named”. The model has learned that in many texts about the film Titanic, the name “Kate Winslet” often follows that prompt. So it predicts the next word is likely “Kate” (and then “Winslet” after that). By simply predicting the correct next words, the model ends up answering your question correctly – it outputs “Kate Winslet.” In training, it saw sentences about Titanic and learned the pattern that Kate Winslet is associated with Rose. So it doesn’t decide to answer the question so much as it accidentally answers it by doing what it always does – continuing the text in a likely way.
Behind the scenes, the model computes a probability distribution over the next token and samples from it. Settings like temperature (how adventurous the sampling is) affect whether responses are focused and predictable or creative and varied. With a higher temperature, it might give a more interesting answer (though possibly a wrong one).
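Here’s a small sketch of that sampling step, with invented logits for illustration: temperature rescales the model’s raw scores before the softmax, so low values concentrate probability on the top token and high values spread it out.
```python
import numpy as np

# Temperature rescales the raw scores (logits) before the softmax:
# low temperature sharpens the distribution toward the top token,
# high temperature flattens it. These logits are invented numbers.

tokens = ["Kate", "Rose", "Leonardo", "banana"]
logits = np.array([4.0, 2.0, 1.0, -2.0])

def next_token_probs(logits, temperature):
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # numerically stable softmax
    return exp / exp.sum()

for t in [0.2, 1.0, 2.0]:
    probs = next_token_probs(logits, t)
    pick = np.random.choice(tokens, p=probs)  # sample one token
    print(f"T={t}: probs={np.round(probs, 3)} -> picked {pick!r}")
```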
Why it often feels like understanding
ChatGPT often feels like it understands what you’re asking. After all, it can explain concepts, summarize articles, even crack jokes or write code. So, is it truly understanding language, or just faking it? The answer depends on how you define “understand,” but it’s safer to say ChatGPT recognizes patterns in language rather than genuinely comprehends meaning the way a human does.
Here’s an analogy: imagine a very clever parrot that has heard every conversation on Earth. It doesn’t know why something is true, but it knows what often sounds true. If you ask this parrot a question, it can respond with a sentence it has heard or a mashup of things it has heard that statistically fits. Sometimes, the answer will be exactly right because the information was common in its experience. Other times, it might be off the mark or weird because it’s just stitching together likely phrases.
However, ChatGPT doesn’t have goals, feelings, or a true awareness of what it’s saying. It doesn’t “know” facts in the sense of having them stored as declarative knowledge; it has just learned the shape of text that usually represents those facts. If you ask it a question, it doesn’t recall the answer like looking it up in an encyclopedia – it generates a plausible answer based on patterns. Most of the time, those patterns line up with reality (because a lot of text in its training was correct information), but not always. In short, ChatGPT mimics understanding. It’s astonishingly good at this mimicry – good enough that it often gives useful, correct answers – but it isn’t infallible, and it doesn’t truly grasp truth or falsehood on its own.
Context window: the AI’s short-term memory
When you chat with ChatGPT, it seems to remember what you said earlier. For example, you might tell it, “My dog’s name is Rex,” and later ask, “What is my dog’s favorite food?” and it will incorporate “Rex” into its answer. How does that work? The trick is the context window – essentially, the AI’s short-term memory. Each time it replies, the model re-reads the recent conversation (up to a fixed limit measured in tokens) and continues from there.
In practical terms, the context window acts like working memory. Humans have limited short-term memory too (we sometimes forget the start of a long sentence), but for an AI the limit is strict: anything beyond the context window is gone. Researchers are extending memory through bigger context windows and other tricks, but today ChatGPT doesn’t truly remember you or your prior chats once a session resets; its “memory” feature simply re-inserts saved facts from earlier sessions into the current context window. It also can’t recall events after its training cut-off date – its built-in knowledge is frozen at the time training finished, aside from updates via fine-tuning. In short, ChatGPT’s “brain” forms no new long-term memories: it was “born” with knowledge from training, it has a short-term memory for the current conversation, and that’s it.
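As a rough sketch of what that strict limit means, imagine packing the chat into a fixed budget before every reply; whatever doesn’t fit is invisible to the model. Counting words here is a crude stand-in for real token counting, and the budget is tiny on purpose.
```python
# A finite context window in miniature: before each reply, the chat
# is packed newest-first into a fixed budget, and anything older is
# dropped - invisible to the model. Counting words is a crude
# stand-in for real token counting; the budget is tiny on purpose.

CONTEXT_BUDGET = 12

def pack_context(messages, budget=CONTEXT_BUDGET):
    kept, used = [], 0
    for msg in reversed(messages):        # newest messages win
        cost = len(msg.split())
        if used + cost > budget:
            break                         # everything older is forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))

chat = [
    "My dog's name is Rex",
    "He loves long walks on the beach every single morning",
    "What is my dog's favorite food?",
]
print(pack_context(chat))  # with this budget, the "Rex" fact no longer fits
```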
The limits of its “thinking”
It’s tempting to think of ChatGPT as thinking or reasoning about answers, especially when it produces a detailed explanation or solves a problem. In reality, everything it does boils down to that next-word prediction, guided by patterns in data – it has no conscious thought or true reasoning process behind the scenes. Unlike a human, it doesn’t possess common sense or an understanding of the world beyond the text it was trained on. It has no beliefs, desires, or self-awareness. LLMs are essentially sophisticated pattern machines.
This means an LLM can appear intelligent in narrow contexts – it can follow logical patterns it saw during training, perform step-by-step solutions if prompted to, even imitate the process of reasoning. But it’s not reliably logical or correct. It doesn’t truly understand numbers or physics or morality, except insofar as those concepts show up in text patterns. It can easily contradict itself if you push it in the right way, or confidently assert something wrong if it strays from the familiar patterns it knows.
It’s also constrained by its training. If you ask about very recent events or niche topics it never saw, it might falter or just make something up. It cannot independently verify information (though models with web access can be instructed to check claims against live sources). And because it has no genuine understanding, it lacks intentionality: it isn’t trying to be correct or incorrect, and it has no intent to lie or tell the truth – it’s just generating plausible sentences.
Why it hallucinates (makes things up)
Hallucinations happen because the model is optimized to produce plausible continuations, not guaranteed truths. In its training data, questions were usually followed by confident answers, not “I don’t know.” So when pushed beyond its knowledge or into ambiguous territory, it may confidently guess a tidy-sounding but wrong continuation—like a citation that looks real or a biography detail that “should” be true.
Tips to reduce this on your end:
Ask for uncertainty levels (“How confident are you?”).
Request sources or quotes, then check them.
Instruct it to list assumptions before answering.
When available, pair it with retrieval or verification tools.
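As one way to put several of these tips together, here’s a hypothetical prompt wrapper – the wording is just an example, not a magic formula:
```python
# A hypothetical prompt wrapper that bakes the tips above into every
# question. The wording is illustrative, not a magic formula.

def careful_prompt(question: str) -> str:
    return (
        f"{question}\n\n"
        "Before answering:\n"
        "1. List any assumptions you are making.\n"
        "2. Cite sources or quotes where possible so I can check them.\n"
        "3. State your confidence (low / medium / high).\n"
        "If you are not sure, say so rather than guessing."
    )

print(careful_prompt("Who played Rose in the 1997 film Titanic?"))
```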
Efforts are being made to reduce hallucinations. Fine-tuning steps (like instructing the model with human feedback) try to encourage the AI to say “I’m not sure” more often or to double-check itself. But as of now, you as the user should remember: ChatGPT doesn’t actually know facts – it predicts them. If the prediction is off, it will happily present a falsehood, because it has no built-in concept of truth vs. falsehood. This is why it’s always good to verify important information from reliable sources instead of taking the AI’s word for it.
Limits to keep in mind
No grounded world model. It doesn’t see, touch, or measure the world; it only models text about the world.
No self-awareness or goals. It can’t want things or decide to deceive; it simply continues text in a style that humans rate as helpful.
Temporal limits. A model’s built-in knowledge effectively freezes at the time it was trained. Without live tools, it won’t know yesterday’s news.
Context is finite. Long chats drop early details unless you keep them in view.
Bias in, bias out. It reflects (and attempts to moderate) patterns in its training data, which include human biases and mistakes.
Despite these limits, the mechanism is astonishingly useful. Because so much of human work involves shaping language—explaining, drafting, summarizing—an engine that’s good at continuations can collaborate with you across a wide range of tasks.
A simple mental model
If you’re trying to wrap your head around exactly what an LLM like ChatGPT is, here’s an easy way to look at it:
Library: Training text supplies the patterns.
Compressor: The model squeezes those patterns into parameters.
Autocomplete: Given your prompt, it expands those parameters back into text, one token at a time.
Spotlight: Only what’s in the context window is “visible” as it writes.
Stylist/Editor: Guardrails steer the tone; prompts steer the structure.
No inner homunculus: There’s no little thinker inside – just probabilistic continuation that can look like thinking.
Wrapping up: a peek inside the machine
So, to recap: ChatGPT (and models like it) are giant text-prediction machines. They were trained by consuming an enormous amount of human-written text and learning the patterns of language. When you use them, they generate responses one token at a time by guessing what the most likely next token is, given everything so far. Through this mechanism, they can produce answers, stories, or explanations that sound incredibly knowledgeable and coherent – because they’re drawing on the knowledge encoded in all those training texts.
However, these models don’t possess true understanding or intent. They don’t know why a sentence should be one way and not another, beyond statistical likelihood. They “know” facts only in the sense that those facts were reflected in their training data patterns. They handle context by taking into account a certain window of recent conversation, but they have no long-term memory or awareness beyond that. Their seeming intelligence is both an illusion and a real artifact of the vast data they’ve absorbed: they are a mirror to our collective writings, able to reflect information and style, but with no consciousness behind the mirror.
The takeaway is both exciting and cautionary. On one hand, GPT-style AIs are incredibly powerful – they can draft emails, tutor you in math, emulate famous writers, and more, all through that core ability to predict what comes next in a sentence. On the other hand, they have clear limitations: they may sound confident and authoritative, but they can be wrong, nonsensical, or biased, all because of the data and the method that drive them. Using them effectively means holding both truths at once: there’s no little genius in the machine, just a word-smart automaton. It’s like talking to a knowledgeable alien parrot – one that has learned to speak by listening to humans and can give you a lot of information, but doesn’t truly grasp what it’s saying.
But, who knows, maybe that comes next.

Partner Spotlight: Duet Display
Duet Display remains a top choice for transforming tablets, smartphones, laptops, and desktops into wireless second screens, enhancing work flexibility and creative workflows.
Duet Studio offers pressure sensitivity and drawing features for artists and creators.
KMS offers keyboard and mouse sharing between desktop computers.
Boost productivity and creativity with Duet Display
How Canva, Perplexity and Notion turn feedback chaos into actionable customer intelligence
Support tickets, reviews, and survey responses pile up faster than you can read them.
Enterpret unifies all feedback, auto-tags themes, and ties insights to revenue, CSAT, and NPS, helping product teams find high-impact opportunities.
→ Canva: created VoC dashboards that aligned all teams on top issues.
→ Perplexity: set up an AI agent that caught revenue-impacting issues, cutting diagnosis time by hours.
→ Notion: generated monthly user insights reports 70% faster.
Stop manually tagging feedback in spreadsheets. Keep all customer interactions in one hub and turn them into clear priorities that drive roadmap, retention, and revenue.
Stay productive, stay curious—see you next week with more AI breakthroughs!

