Do More Newsletter

Keep up to date on the latest products, workflows, apps and models so that you can excel at your work. Curated by Duet.

In partnership with HubSpot

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity, helping you get more done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:

  • Google Opal: A brand-new, experimental tool from Google Labs enabling anyone in the U.S. to describe, chain, and share AI mini apps—all using natural language and visual editing. It’s designed for rapid prototyping and bringing your unique AI workflows to life without a single line of code.

  • Gemini 2.5 Flash-Lite: Google announced its fastest and most cost-efficient AI model yet, optimized for quick, low-cost tasks within AI apps.

  • Jace AI: A new personal AI assistant gaining a devoted following for its versatile features, with users claiming major boosts in daily productivity just days after its official launch.

  • VoiceNotes AI: The popular AI-powered meeting notes app just rolled out its official Windows version, further automating note-taking for hybrid teams.

What Is Google Opal?

Google Opal is an experimental no-code platform from Google Labs that empowers anyone to build and share robust AI workflows as “mini apps.” Instead of writing code, you simply describe what you want in plain language—or use a drag-and-drop visual editor. Opal instantly chains together AI models, prompts, and external tools, giving creators the freedom to:

  • Prototype new workflows for team or solo work.

  • Remix and personalize existing templates.

  • Share apps with others via simple links.

How Does It Work?

  • Describe your goal or workflow step-by-step (in plain English).

  • Visualize your process in Opal’s editor, which auto-generates logical flows between AI models, user inputs, and outputs.

  • Edit using natural language or graphical interfaces, tweaking prompts or connecting new tools as you go.

  • Share & Collaborate by distributing your Opal mini app link with anyone in the U.S. (public beta), so others can use or improve your workflow.

Who Should Try Opal?

  • Productivity hackers looking to automate unique workflows.

  • Managers & teams who want to fast-track prototypes and automations.

  • Educators building personalized learning tools.

  • Tinkerers and creators who want a playground for job-specific AI tools.

Early Impressions

Beta users praise Opal’s empowerment of non-coders, the speed of prototyping, and the joy of remixable app templates for everything from meeting notes to content generation. While still experimental and U.S.-only, Opal is already redefining what it means to do more—faster and without technical barriers.

AI Jargon Demystified

We finally have computers that can just talk to us, but the irony is they were made by eggheads who love jargon and aren’t terribly great at talking to average people themselves. That means there are a lot of buzzwords out there that aren’t very obvious in meaning but are useful to understand for anyone planning to work with AI (which is basically everyone these days). So, let’s do a quick crash course on the most important terms to know.

To start, AI (Artificial Intelligence) simply means computers doing things that normally require human intelligence (a term that is widely used, but perhaps overused). A key way they do this is via machine learning (ML) – teaching computers by example, rather than explicit programming. And when we use deep learning, it means using large neural networks (inspired by the human brain) to learn from vast amounts of data. Now, let’s dive into some other buzzwords you’re likely to hear:

Key Takeaways

  • Neural Network: The “brain” of modern AI – a network of mathematical neurons layered to learn patterns, much like a web of interconnected brain cells. Think of it as a big team project where each neuron adjusts a bit of the solution until they all get the answer right.

  • Algorithm: A step-by-step recipe or set of rules that a computer follows to solve a problem. In AI, algorithms (like training routines or search methods) tell the system how to learn or make decisions.

  • LLM (Large Language Model): An AI that has read basically the entire internet (huge text data) to predict and generate text. It’s essentially a supercharged autocomplete system – an “autocomplete on steroids” that produces human-like responses. ChatGPT and Claude are prime examples of LLMs.

  • Diffusion Model: A generative AI that creates images from noise. It starts with random speckles (like old TV static) and iteratively refines them into a clear image – as if developing a photo from a blurry mess into a vivid picture.

And here are some more details on these terms, plus a few others worth knowing:

Neural Networks: The Brain-Inspired Engines of AI

A neural network is essentially the brain of an AI system. It’s a structure composed of layers of interconnected nodes (artificial “neurons”) that process data and learn patterns. These networks are inspired by the human brain’s neuron connections, albeit in a simplified mathematical form. When data (say, an image or a sentence) is fed into a neural network, it passes through multiple layers, each layer extracting or learning some feature, and finally produces an output (like classifying the image or understanding the sentence).

Think of a neural network like a big group project. Each neuron in the network is like a team member who does a tiny part of the task and then “passes the note” to the next member. Initially, their answers might be all over the place, but through training, they adjust their “notes” (their internal parameters) until the whole team reaches the right solution. In short, neural networks enable a computer to learn from examples – they’re why AI can recognize your face in a photo or translate languages after being shown many examples.

It's still not quite the same as a human brain, though. A child can figure out what a dog is after seeing just one or two examples, but a neural network often needs thousands or millions of examples to learn the same thing.
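
If you’re curious what those mathematical “neurons” actually look like, here’s a toy sketch in Python. All the numbers are made up purely for illustration; a real network learns millions of weights from data:

    def neuron(inputs, weights, bias):
        # Multiply each input by a weight (how much it matters), add them
        # up, then pass the total through a simple activation function.
        total = sum(i * w for i, w in zip(inputs, weights)) + bias
        return max(0.0, total)  # ReLU: negative signals become 0

    # Two neurons "pass their notes" to a third, like the group project above.
    inputs = [0.5, 0.8]                      # made-up input data
    layer1 = [neuron(inputs, [0.9, -0.2], 0.1),
              neuron(inputs, [0.4, 0.7], -0.3)]
    output = neuron(layer1, [1.2, 0.5], 0.0)
    print(output)                            # one number out the other end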

Algorithms: Step-by-Step Recipes for Problem Solving

In everyday terms, an algorithm is just a set of instructions for accomplishing a task. You can think of an algorithm like a recipe: a series of step-by-step directions that take inputs (ingredients) and turn them into an output (the finished dish). In the context of computers and AI, algorithms are the coded procedures or formulas that tell the machine how to achieve a result.

For example, a sorting algorithm might describe how to alphabetically sort a list of names, much like a recipe details how to bake a cake. In AI, we often talk about learning algorithms – methods by which AI systems improve themselves. The training process of a neural network, for instance, uses a learning algorithm to adjust the network’s weights (the importance of each connection) step by step until it makes good predictions.
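
To make that concrete, here’s a deliberately tiny learning algorithm in Python: one weight, one training example, and a loop that nudges the weight until the prediction is right. Real training does this across millions of weights, but the recipe is the same:

    weight = 0.0                        # start with a bad guess
    example_input, target = 2.0, 6.0    # we want weight * 2.0 to equal 6.0

    for step in range(20):
        prediction = weight * example_input
        error = prediction - target
        weight -= 0.1 * error * example_input  # nudge the weight to shrink the error

    print(round(weight, 3))             # settles near 3.0, the right answer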

Large Language Models (LLMs): Autocomplete on Steroids

Large Language Models are the heavyweights powering today’s AI chatbots and writing assistants. An LLM is an AI model trained on vast amounts of text – basically, it has read everything from books and websites to news articles and more. Because of this training, it can generate human-like text and answer questions in a remarkably fluent way. An LLM is usually what people mean by “AI” these days when they’re talking about tools like ChatGPT.

One way to understand LLMs is to imagine the autocomplete feature on your phone, supercharged into “autocomplete on steroids.” Just like your phone suggests the next word while texting, an LLM predicts the next words in a sentence – but it does so with an enormous knowledge of grammar, facts, and context, thanks to its training. Essentially, when you prompt an LLM with a question or statement, it uses probability (learned from all that reading) to continue the text in a way that makes sense. That’s why ChatGPT can produce essays, code, or conversations that sound eerily human – it’s drawing on patterns it learned from literally billions of words.
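
Here’s that next-word idea shrunk down to a toy you can run. A real LLM uses a huge neural network rather than simple word counts, but the core move (predict the next word from learned statistics) is the same:

    from collections import Counter, defaultdict

    text = "the cat sat on the mat and the cat slept on the mat"
    words = text.split()

    # "Training": count which word tends to follow which.
    next_counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        next_counts[current][following] += 1

    # "Generation": given a word, predict the most likely next word.
    print(next_counts["the"].most_common(1))   # [('cat', 2)]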

(Bonus: What’s “GPT”?) GPT stands for Generative Pre-trained Transformer. It’s a specific type of large language model architecture introduced by OpenAI. “Generative” means it can create text; “Pre-trained” means it has been trained on a huge dataset before you ever see it; and Transformer is the neural network architecture that makes it so effective at understanding context in language. The transformer design is a big reason why models like GPT-4 can keep track of context and generate coherent paragraphs of text rather than just disjointed sentences.

Diffusion Models: Turning Noise into Art

On the image generation side of AI, diffusion models have been a game-changer. A diffusion model is a type of generative AI that creates new images (or other media) by starting with random noise and refining it step by step into something meaningful. It’s the technology behind tools like Stable Diffusion, Midjourney, and DALL·E, which can produce incredible pictures from text prompts.

Basically, a diffusion model uses an algorithm to find an image in noise. You give the computer a noisy image and say, “Actually, that’s a picture of a dog in a top hat.” It then uses its knowledge of what dogs and top hats look like to find the picture in the noise, gradually cleaning it up over multiple passes until the image is fully there. The trick is that you actually gave it pure random noise, so it’s making the dog in the top hat out of nothing (but its training data). It feels like a bit of a mean trick, but the algorithms don’t mind.
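
For a feel of the refine-it-in-passes idea, here’s a toy sketch in Python. To be clear, this is not a real diffusion model: we hand it the clean answer, whereas a real model learns its denoising steps from training data. It only mimics the shape of the process:

    import random

    target = [0.0, 0.5, 1.0, 0.5, 0.0]         # pretend this is the dog in the top hat
    image = [random.random() for _ in target]  # start from pure static

    for step in range(10):                     # several denoising passes
        image = [pixel + 0.3 * (goal - pixel)  # nudge every pixel toward the clean image
                 for pixel, goal in zip(image, target)]

    print([round(p, 2) for p in image])        # very close to the target by now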

Context Window: The AI’s Short-Term Memory

When you chat with an AI or give it a lengthy task, you might wonder: how much can it remember from the conversation or prompt? This is where the context window comes in. A context window is essentially the AI model’s working memory – the amount of information (measured in tokens, which are like pieces of words) it can handle at once. Think of it as how many pages of text the AI can “keep in mind” as it generates a response.

For a human analogy, imagine you go to the grocery store without a written list. How many items can you remember to buy using just your short-term memory? Probably only a handful before you start forgetting things. Similarly, an AI has a limit to how much it can hold in its “head” at one time. If you give it a prompt that’s too long or a conversation that’s very lengthy, it might start to forget details from earlier – not because it’s dumb, but because its context window isn’t infinite. For example, earlier versions of ChatGPT had a context window of around 4,000 tokens (roughly a few thousand words), meaning if a conversation exceeded that, the model would lose track of the earliest parts. Newer models are extending this window (some to 100k tokens or more) so they can handle longer documents or discussions without forgetting.
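
Here’s a rough sketch of why the oldest messages fall out first. The token counting is a crude stand-in (one word as one token); real systems use proper tokenizers:

    def fit_to_window(messages, max_tokens=20):
        # Keep the most recent messages that fit in the token budget.
        kept, used = [], 0
        for message in reversed(messages):     # walk from newest to oldest
            tokens = len(message.split())      # crude one-word-one-token estimate
            if used + tokens > max_tokens:
                break                          # everything older is "forgotten"
            kept.append(message)
            used += tokens
        return list(reversed(kept))

    chat = ["My name is Sam.",
            "I live in Ohio and I love hiking in the fall.",
            "I also collect vintage typewriters and old maps.",
            "What was my name again?"]
    print(fit_to_window(chat))  # "My name is Sam." no longer fits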

In practical terms, knowing about context windows helps you understand why sometimes an AI repeats itself or contradicts something you said earlier – it may have “forgotten” because that part scrolled out of its memory. The key takeaway is that the larger the context window, the more information the AI can juggle at once, akin to having a bigger scratchpad or a better short-term memory.

Prompts and Prompt Engineering: Telling the AI What You Want

A prompt is simply the input or question you give to an AI model. It could be a single question (“Explain quantum physics in simple terms”) or a whole paragraph of instructions and context for the AI to consider. In many ways, the prompt is your way of programming the AI on the fly – you’re telling it what you need, and the AI will do its best to comply based on that guidance.

Because AI responses depend heavily on how you ask your question, a new skill called prompt engineering has emerged. Prompt engineering is the art of crafting your prompt in a way that guides the AI to produce a better or more relevant answer. For example, if you just ask, “Tell me about neural networks,” you’ll get a general answer. But if you prompt, “Explain neural networks in a casual tone with a simple analogy, as if I’m five years old,” you’re likely to get a more tailored, easier-to-grasp explanation. You essentially engineer the prompt to coax the kind of response you want.

Think of it like talking to a very literal-minded person: the clearer and more precise your instructions, the closer the result will be to what you envisioned. The AI isn’t a mind reader, but it is a pattern finder – if your prompt contains the right clues, context, or format, the model will follow those patterns. The good news is you don’t need a degree to do this – just a bit of practice in phrasing questions or requests. And unlike coding, if the output isn’t what you wanted, you can iteratively refine your prompt (like rewording a question) and try again. The AI will happily oblige with a new answer each time.
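
Under the hood, prompt engineering is mostly careful string-building. Here’s a minimal sketch in Python; send_to_ai() is a placeholder name for whatever chatbot or API you actually use:

    def build_prompt(topic, audience, tone, output_format):
        return (f"Explain {topic} to {audience}. "
                f"Use a {tone} tone and format the answer as {output_format}.")

    vague = "Tell me about neural networks"
    engineered = build_prompt(
        topic="neural networks",
        audience="someone with no technical background",
        tone="casual, with one simple analogy",
        output_format="three short paragraphs",
    )

    # send_to_ai(vague)       -> a generic overview
    # send_to_ai(engineered)  -> a tailored, easier-to-grasp answer
    print(engineered)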

Hallucinations: When AI Makes Things Up

One quirky term you might hear in AI discussions is hallucination. No, the AI isn’t literally seeing things – in AI lingo, a hallucination refers to the model confidently generating information that sounds plausible but is factually incorrect or entirely fabricated. Essentially, the AI “makes stuff up.”

For instance, you might ask a chatbot for a citation or a historical detail, and it responds with a very specific answer that looks real – perhaps even including a fake quote or a reference. If that info isn’t actually in its training data or it tries too hard to fill in gaps, the AI can output a convincing-sounding lie. This happens because LLMs are trained to produce likely-sounding text; they don’t truly know truth from falsehood, they just know what words often come together. If a prompt pushes them beyond their knowledge cutoff or into ambiguous territory, they might “improvise” – like a student guessing an answer on a test – which results in a hallucination.

For general users, it’s important to be aware of this phenomenon. AI tools are incredibly helpful, but they’re not infallible. Always double-check critical facts that an AI gives you, especially if it’s something important (like medical or legal advice) or something that just seems a bit too neat and tidy in its answer. The term “hallucination” is a reminder that until AI truly understands concepts (a topic of ongoing research), it can sometimes present fiction as fact without warning.

Now You Know

AI doesn’t have to be intimidating. By understanding these key terms and concepts, you’ve effectively pulled back the curtain on the “magic” of modern AI. Now you know that a neural network is like an artificial brain learning from data, an algorithm is just a set of instructions (a recipe) running the show, an LLM like ChatGPT is a giant predictive text engine with a vast knowledge of language, and a diffusion model turns static into art. You’re aware that AI has a limited memory (context window), that how you ask a question (prompt engineering) matters, and that sometimes it might fib a little (hallucination) – so keep it honest.

Armed with this knowledge, maybe now you can get a little more out of AI. And if any new terms come up that seem important and you’re unfamiliar with them, you can just ask AI (an LLM, not a diffusion model). 

Partner Spotlight: Duet Display

Double your productivity—anywhere!

Duet Display turns your iPad, Mac, PC, or Android device into a high-performance second screen. Instantly extend or mirror your workspace for AI-powered multitasking, creative workflows, coding, and more. Trusted by professionals worldwide for smooth, lag-free performance—whether you’re remote, hybrid, or on the go.

Explore how Duet Display empowers real productivity: Visit Duet Display