Do More Newsletter
This issue features the article "The History of AI: From Early Dreams to Today's Reality," along with product news about Anthropic, Kustomer AI Assistants, Aided 1-Click, Kling AI, and TikTok's Smart Split & AI Avatars for creators.
Keep up to date on the latest products, workflows, apps and models so that you can excel at your work. Curated by Duet.

Stay ahead with the most recent breakthroughs—here’s what’s new and making waves in AI-powered productivity:
Anthropic — acquisition of Bun to accelerate AI coding tooling
Anthropic announced the acquisition of Bun, a developer runtime/packaging toolkit. This move is positioned to speed up and stabilize Anthropic’s code-generation and coding assistant products (Claude Code / developer tooling), which should translate into faster, more reliable AI-assisted coding workflows for developer teams and creator-engineers.
Kustomer AI Assistants – Workflow Automation for CX Teams
Kustomer has introduced a suite of AI Assistants aimed at letting customer‑experience teams design and evolve automated workflows without manual, low‑level configuration. The platform includes an AI workflow assistant that can effectively “build itself,” using natural language to translate business goals into automations that span channels, routing, and case handling.
For productivity, this means support leaders can move from one‑off bot scripts to continuously improving workflows that adapt as volumes, intents, and policies change. Front‑line agents benefit from fewer repetitive tasks and more context‑rich cases, while managers gain a clearer view into where automation is helping and where human judgment still adds the most value.
Aided 1‑Click Multi‑Model Content Creation
Aided has launched a new AI platform focused on giving businesses 1‑click access to multi‑model content generation workflows. Instead of forcing teams to manually switch between individual AI providers, Aided orchestrates several models behind the scenes so users can generate copy, visuals, and related assets from a single brief or prompt.
This approach is particularly useful for marketing and creative teams that care more about consistent output than about which underlying model is used. By abstracting the model layer, Aided aims to cut down experimentation time, standardize quality, and make AI‑driven campaigns easier to scale across channels and formats.
Kling AI — Video 2.6 (simultaneous audio-visual generation for creators)
Kling AI (via Kuaishou) released its Video 2.6 model, which can generate video and matching audio in a single pass (text→audio-visual and image→audio-visual), removing the traditional "silent video, then separate dubbing" workflow. The model can produce short videos with synchronized speech, sound effects, and ambient audio, and currently supports Chinese and English voice generation — a direct productivity win for creators who want one-step content generation instead of stitching separate outputs together.
TikTok Smart Split & AI Avatars for Creators
Recent updates to TikTok’s AI creation toolbox are giving creators more ways to repurpose and scale video content without extra production overhead. The Smart Split tool can automatically slice long videos into short, high‑engagement clips, while AI Avatars can present products or content in a lifelike way around the clock, effectively acting as a virtual on‑screen host.
Paired with new image‑to‑video and text‑to‑video options, these tools help creators turn written ideas or static assets into dynamic shorts optimized for the TikTok feed. This reduces editing time and makes it easier for small teams or solo creators to maintain a consistent posting cadence while still experimenting with formats and hooks.

Kling AI: Audio-Visual Generation in a Single Step
Kling AI has quickly become one of the most talked-about creative tools in short-form video production thanks to its ability to generate high-fidelity, realistic videos from text or image prompts. Built by Kuaishou’s research team, Kling focuses on lifelike motion, expressive characters, and cinematic camera behavior—features that have made it popular among creators looking to produce compelling visual content without the need for advanced editing skills or expensive equipment. By designing the system to be accessible to anyone with an idea, Kling has positioned itself as a powerful new engine for rapid-cycle creativity.
Its newest upgrade, Kling Video 2.6, introduces one of the most impactful feature additions to date: seamless, synchronized audio-visual generation. Rather than producing a silent clip that creators must later pair with separately generated speech, music, or sound effects, Video 2.6 generates visuals and audio (including character speech, background ambience, and sound cues) in a single pass. This dramatically streamlines the production process. Instead of juggling multiple tools—or spending time syncing voice tracks manually—creators now get a fully-formed video complete with matching audio timing, tone, and rhythm from the moment the clip is generated.
The benefit for creators and productivity-focused users is clear: speed, coherence, and fewer editing steps. Audio and visuals are generated with shared context, which means lip movements match dialogue, ambient audio follows the scene’s emotional pacing, and motion lines up more naturally with vocal patterns. The result is a clip that feels more polished directly out of the generator, enabling creators to test ideas faster, produce more content in a shorter window, and maintain a consistent creative style without getting bogged down in audio engineering. For those working in short-form platforms like TikTok, Shorts, and Reels—where volume and turnaround time matter—this can be a major competitive advantage.
Practically speaking, the feature is most effective when creators use concise, descriptive prompts that include both the visual style and the desired audio characteristics (for example, specifying tone of voice, mood, ambient environment, or key sound elements). Shorter clips yield the best alignment, making the feature especially well-suited for concept testing, rapid storytelling, meme creation, product teasers, and narrative experiments. While longer or more cinematic projects will still require external editing tools, Kling Video 2.6 significantly boosts early-stage production speed—freeing creators to focus more on ideation, storytelling, and iteration rather than stitching audio to video manually.
The History of AI: From Early Dreams to Today's Reality

The Dream of the Artificial Being
Long before silicon chips and neural networks, Artificial Intelligence existed in the imagination. The desire to forge a thinking being is as old as storytelling itself. In Greek mythology, there was Talos, the giant bronze automaton constructed by Hephaestus to protect Europa. In Jewish folklore, the Golem was a clay figure brought to life through mystical incantations.
These ancient myths highlight a fundamental human curiosity: Can we recreate the spark of consciousness? For centuries, this remained the domain of theology and fiction. It wasn't until the 20th century, with the convergence of formal logic and the invention of the programmable computer, that the dream began to look like a solvable engineering problem.
Part I: The Foundations and the Birth (1940–1956)
The Turing Test and the Logic Gates
The scientific groundwork for AI was laid in the 1940s. A pivotal moment occurred with the publication of Alan Turing’s seminal paper, "Computing Machinery and Intelligence" (1950). Turing, famously known for cracking the Enigma code, posed a simple but profound question: "Can machines think?"
Turing proposed the "Imitation Game" (now widely known as the Turing Test) as a measure of machine intelligence. If a machine could converse with a human without the human realizing they were speaking to a machine, it could be considered intelligent. A few years earlier, in 1943, neurophysiologist Warren McCulloch and logician Walter Pitts had proposed the first mathematical model of a neural network, suggesting that simple logic units could mimic the behavior of biological neurons.
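The McCulloch–Pitts unit is simple enough to sketch in a few lines of code. This is an illustrative sketch, not code from the 1943 paper: a unit "fires" when the weighted sum of its binary inputs reaches a threshold, which is already enough to implement basic logic gates.

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: output 1 if the weighted sum of
    binary inputs reaches the threshold, otherwise output 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, the same unit acts as a logic gate.
def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]
```

Chaining such units together is, in spirit, what modern neural networks still do — just with learned, real-valued weights instead of hand-set ones.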
The Dartmouth Conference (1956)
The official birth of AI as a research discipline occurred in the summer of 1956 at Dartmouth College. A workshop organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon brought together the brightest minds in information theory.
It was here that McCarthy coined the term "Artificial Intelligence." The proposal for the conference contained a statement that defines the field to this day: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
The attendees were optimistic. They believed that machines would be capable of doing any work a man could do within a generation.
Part II: The Golden Years and The First Winter (1956–1980)
Good Old-Fashioned AI
The years following Dartmouth (1956–1974) were defined by "Symbolic AI." Researchers believed that intelligence could be achieved by teaching computers the rules of logic and manipulating symbols.
During this era, computers solved algebra word problems, proved geometry theorems, and learned to speak English.
ELIZA (1966): Created by Joseph Weizenbaum, this was the first chatbot. It simulated a psychotherapist by matching patterns in the user's text and reflecting them back.
Shakey the Robot (1966–1972): The first general-purpose mobile robot able to reason about its own actions. It could navigate a room and move blocks.
Funding from DARPA poured in. The optimism was infectious; Minsky famously claimed in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
The First AI Winter (1974–1980)
The optimism, however, was premature. Researchers hit a wall known as "combinatorial explosion." While computers could solve problems in a controlled "micro-world," they failed miserably when applied to the chaos of the real world. They lacked common sense and context.
In 1973, the Lighthill Report in the UK offered a scathing critique of AI progress. Realizing that the promises of "human-level intelligence" were nowhere near fruition, governments in the US and the UK slashed funding. This period of reduced interest and funding became known as the first "AI Winter."
Part III: Expert Systems and the Second Winter (1980–1993)
The Rise of Expert Systems
AI roared back in the 1980s with a shift in focus. Instead of trying to create a general intelligence (a machine that thinks like a human), researchers focused on Expert Systems.
These systems were designed to solve specific problems by mimicking the decision-making ability of a human expert. They used "If-Then" rules derived from domain specialists.
R1 (XCON): Used by Digital Equipment Corporation to configure computer orders, saving the company millions.
MYCIN: Designed to diagnose blood infections and recommend antibiotics.
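To make the "If-Then" pattern concrete, here is a toy forward-chaining rule engine in the expert-system style. The rules below are invented for illustration — MYCIN's real knowledge base contained hundreds of certainty-weighted medical rules — but the mechanism is the same: keep applying rules whose conditions are satisfied until no new facts emerge.

```python
# A toy forward-chaining rule engine. Each rule pairs a set of
# required facts with a conclusion to add. Rules are illustrative,
# not taken from any real expert system.
RULES = [
    ({"fever", "stiff_neck"}, "possible_meningitis"),
    ({"possible_meningitis", "gram_negative"}, "suspect_e_coli"),
    ({"cough", "fever"}, "possible_flu"),
]

def infer(facts):
    """Apply If-Then rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"fever", "stiff_neck", "gram_negative"})))
```

The brittleness described later in this section is visible even here: a fact outside the rule vocabulary (say, a misspelled symptom) simply fires nothing, and every edge case needs a hand-written rule.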
Japan launched the massive Fifth Generation Computer Systems project, aiming to create a new class of supercomputers for AI. This terrified the West, prompting a resurgence in funding.
The Second AI Winter (1987–1993)
History, however, repeated itself. Expert systems were brittle; if an input fell slightly outside their programmed rules, they crashed or gave nonsense answers. They were also expensive to maintain and difficult to update.
Simultaneously, the market for specialized Lisp machines (computers designed for AI) collapsed as general-purpose desktop computers became more powerful. Japan’s Fifth Generation project failed to meet its ambitious goals. By the late 80s, funding dried up again. The term "Artificial Intelligence" became taboo in the scientific community; researchers began using terms like "Informatics" or "Advanced Computing" to avoid the stigma.
Part IV: The Statistical Turn and the Rise of Machine Learning (1993–2011)
The Victory of Probability
In the 1990s, a quiet revolution occurred. AI researchers began to move away from the rigid, symbolic logic of the past and embraced probability and statistics. Rather than hard-coding rules ("If it has a beak, it is a bird"), they began creating systems that learned patterns from data ("Here are 1,000 images of birds; figure out what makes them look like birds").
This was the true rise of Machine Learning (ML).
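The shift from hand-coded rules to learned patterns can be sketched with a minimal classifier. The 2-D points below are made up for illustration; the point is that no rules are written by hand — the "knowledge" is just averages computed from labeled examples.

```python
# A minimal "learn from examples" sketch: a nearest-centroid classifier.
def train(examples):
    """examples: list of ((x, y), label). Returns label -> centroid."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the nearest centroid (squared distance)."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2 +
                               (centroids[lab][1] - py) ** 2)

data = [((1.0, 1.2), "bird"), ((0.8, 1.0), "bird"),
        ((4.0, 3.8), "plane"), ((4.2, 4.1), "plane")]
model = train(data)
print(predict(model, (1.1, 0.9)))  # bird
```

More data makes the centroids better estimates — which is exactly why the big-data explosion described below mattered so much.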
Deep Blue vs. Kasparov (1997)
A major cultural milestone occurred in 1997 when IBM’s Deep Blue defeated the reigning world chess champion, Garry Kasparov. While critics argued Deep Blue wasn't "thinking" (it was using brute-force search algorithms), it proved that machines could outperform the best humans in complex, strategic domains.
The Big Data Explosion
As the internet grew in the 2000s, the availability of data exploded. This was the fuel Machine Learning needed. Algorithms that were theoretically sound but practically useless in the 80s suddenly became powerful because they finally had enough data to learn from.
Search engines (Google), recommendation systems (Amazon/Netflix), and speech recognition began to integrate these statistical AI models into daily life.
Part V: The Deep Learning Revolution (2012–2020)
The ImageNet Moment (2012)
If one moment defines the modern era of AI, it is the 2012 ImageNet competition.
For years, computers struggled to identify objects in images. In 2012, a team led by Geoffrey Hinton used a deep neural network (an architecture inspired by the human brain, with many layers of artificial neurons) called AlexNet.
AlexNet crushed the competition, reducing the error rate by a massive margin. It proved that deep learning was superior for perceptual tasks. This triggered a gold rush. Researchers realized that deep learning, powered by the parallel processing capabilities of GPUs (chips originally designed for video games), could solve problems previously thought impossible.
AlphaGo (2016)
In 2016, Google DeepMind’s AlphaGo defeated Lee Sedol, a legendary player of the ancient game Go. Go is exponentially more complex than chess; brute force is impossible. AlphaGo didn't just calculate; it "intuited" moves. Its "Move 37" in the second game surprised human experts because it was a move no human would ever play—yet it was brilliant. This demonstrated that AI could be creative.
Part VI: Generative AI and Today's Reality (2020–Present)
The Transformer Architecture
The current wave of AI is built on a specific innovation introduced by Google researchers in 2017: the Transformer. This architecture allows computers to understand the context of data (like language) better than ever before by paying "attention" to different parts of a sentence simultaneously.
This led to the creation of Large Language Models (LLMs).
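The "attention" idea at the heart of the Transformer can be sketched in plain Python. This is a stripped-down, single-head version with tiny hand-made vectors (real models use learned, high-dimensional embeddings and many heads): each query scores every key at once, and its output is a softmax-weighted mix of the values.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over small Python lists.
    Each query attends to every key simultaneously; its output is
    a softmax-weighted blend of the value vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# One query that matches the first key far more than the second,
# so its output leans toward the first value vector:
result = attention(queries=[[1.0, 0.0]],
                   keys=[[1.0, 0.0], [0.0, 1.0]],
                   values=[[10.0, 0.0], [0.0, 10.0]])
print(result)
```

Because every position attends to every other in parallel, the mechanism captures long-range context in a way that earlier sequential models struggled with.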
ChatGPT and the Mainstream
In November 2022, OpenAI released ChatGPT. For the first time, a powerful LLM was given a chat interface and released to the public. It became the fastest-growing consumer application in history.
Suddenly, AI wasn't just classifying data (is this a cat or a dog?); it was generating new data. It could write poetry, debug code, draft emails, and create photorealistic images (via diffusion models like Midjourney).
The Reality Check
Today, we stand at a crossroads. The reality of modern AI is a mix of awe-inspiring capability and significant limitations.
Capabilities: AI has solved protein folding (AlphaFold), is powering self-driving technologies, and acts as a sophisticated co-pilot for knowledge workers.
Limitations: Modern models suffer from "hallucinations"—confidently stating false information. They are "black boxes," meaning we don't always understand how they arrive at an answer. There are profound concerns regarding bias, copyright, and the energy consumption required to train these massive models.
The Tool and the Partner
From the mythical bronze giant Talos to the cloud-based neural networks of today, the history of AI is a story of human ambition. We have moved from trying to hard-code intelligence using rigid rules to creating systems that learn from the messy abundance of the real world.
We have not yet achieved "Artificial General Intelligence" (AGI)—a machine with human-like consciousness and versatility—but it is starting to seem that we are on the precipice of that feat. At the very least, we already have AI of a kind that people in the past could only dream of, and it is beginning to reshape society.

Partner Spotlight: Duet Display
Duet Display transforms an iPad, iPhone, Android device, or spare Mac into an extra display or drawing tablet, giving professionals more screen space for multitasking and creative work. The software is designed for low‑latency, high‑quality extended displays, making it especially useful for creators, remote workers, and anyone running complex workflows across multiple apps.
Teams use Duet Display to keep communication tools, dashboards, and creative canvases visible at the same time, reducing window switching and helping maintain flow during intensive tasks. To explore plans, platform support, and advanced features like wireless mode and drawing support, visit the official site at Duet Display.
200+ AI Side Hustles to Start Right Now
AI isn't just changing business—it's creating entirely new income opportunities. The Hustle's guide features 200+ ways to make money with AI, from beginner-friendly gigs to advanced ventures. Each comes with realistic income projections and resource requirements. Join 1.5M professionals getting daily insights on emerging tech and business opportunities.
Stay productive, stay curious—see you next week with more AI breakthroughs!

