For years I believed chess was just a game—something my father taught me in our 14th-floor apartment in Caracas when I was seven, before we left for France in the late 1980s. Then I discovered why it was invented in the first place, and it completely changed how I see intelligence itself.

Chess wasn't created for entertainment. Tradition holds that ancient Indian strategists designed it to teach military commanders humility and tactical thinking. The message was profound: in battle, any pawn can become decisive. A single overlooked piece, a moment of carelessness, could determine victory or defeat. The game taught that intelligence isn't about raw power—it's about seeing patterns others miss, adapting when circumstances shift, and understanding that even the smallest elements matter.

Go, invented in China centuries before chess and later refined in Japan, embodies the same philosophy at even greater depth: its possible board configurations exceed the number of atoms in the known universe. Where chess demands tactical precision, Go requires strategic intuition that unfolds across hundreds of moves. For millennia, these games represented the pinnacle of human strategic thinking.

Then something unexpected happened.

When Machines Started Teaching Us

In 2016, AlphaGo defeated Lee Sedol, winner of 18 international titles, at Go—a feat experts had predicted wouldn't happen for another decade. But the truly remarkable part wasn't the victory itself. It was Move 37 in game two, a placement so unconventional that AlphaGo itself estimated a human had only a 1 in 10,000 chance of playing it. The move violated centuries of accumulated wisdom. Professional players worldwide studied it obsessively, trying to understand the logic behind what seemed impossible.

Then something fascinating occurred: human players who studied AlphaGo's strategies began playing better themselves, developing novel approaches and improving decision quality across thousands of games. The AI hadn't replaced human creativity—it had expanded it. Players discovered possibilities they'd never imagined, hidden within a game they thought they'd mastered.

This is exactly how AI can augment us today, across every domain where we think and work. But there's a critical difference between people who use AI to become sharper and those who use it to become duller.

The Great Divergence Is Already Happening

I've spent years working with data science architectures, from natural language processing to reinforcement learning systems. Through my company Fractal-Apps, I've developed multiple AI-powered applications. What I've witnessed is a fundamental split emerging in how people interact with these technologies.

One group uses AI as a crutch—copying outputs verbatim, accepting answers without questioning, gradually losing the ability to think critically. Their cognitive muscles atrophy like the legs of someone who has simply stopped walking.

The other group uses AI as a training partner. They engage in what I call "Socratic augmentation"—questioning the AI's responses, testing assumptions, using it to explore ideas they'd never considered alone. These people are becoming demonstrably more capable, faster learners, better problem-solvers.

The difference comes down to four core practices.

Critical Thinking: Question Everything, Including AI

When I develop AI solutions for clients, I never accept the first answer a model provides. Instead, I probe: "What assumptions are embedded in this response? What alternative approaches exist? Where might this reasoning fail?"

This isn't about distrusting AI—it's about treating it as a brilliant colleague who sometimes makes mistakes or misses context you possess. The most powerful approach combines your domain expertise with the AI's computational power and breadth of knowledge.
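
Here is a minimal sketch of what that probing loop can look like in practice. It assumes the anthropic Python SDK and a model alias like "claude-sonnet-4-5"; the ask helper and the example task are illustrative, not my production setup.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(prompt: str, history: list) -> str:
    """Send one turn to the model, keeping the full conversation history."""
    history.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model alias
        max_tokens=1024,
        messages=history,
    )
    answer = response.content[0].text
    history.append({"role": "assistant", "content": answer})
    return answer

# The Socratic probes applied to any first answer.
PROBES = [
    "What assumptions are embedded in your previous answer?",
    "What alternative approaches exist, and when would each be preferable?",
    "Where might this reasoning fail? Give concrete failure cases.",
]

history: list = []
draft = ask("Propose a statistical model for predicting machine downtime.", history)
for probe in PROBES:
    print(ask(probe, history))  # each probe builds on everything said so far
```

The specific questions matter less than the discipline: the first answer is never the last one.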

For instance, when building analytics platforms in my previous work, I'd use AI to generate initial statistical models, then critically examine whether they captured the nuances of real-world manufacturing processes. The AI brought sophisticated techniques I might not have considered; my experience brought practical constraints and edge cases the AI couldn't know.

This dynamic creates something neither human nor AI could achieve alone. But it only works if you maintain genuine intellectual engagement rather than passive acceptance.

Continuous Learning: The Incubation Process

Many mathematicians and researchers solve complex problems through what psychologists call incubation. You write down a problem clearly, work on it intensely for a period, then let it rest while your subconscious processes connections. Days or weeks later, insights emerge that weren't accessible through pure conscious effort.

AI supercharges this process.

I can now explore a mathematical concept, ask the AI to explain it from multiple perspectives, work through examples, then step away. When I return, I engage the AI again with new questions that emerged during incubation. This iterative deepening accelerates learning in ways that weren't possible even five years ago.

Through my technical education at ENSIAME in France and six months at Tsinghua University in Beijing, I learned that mastery requires sustained engagement with difficult concepts. AI doesn't eliminate that requirement—it makes the engagement more productive by providing immediate feedback, alternative explanations, and connections to related domains.

The key is using AI to support genuine learning rather than superficial understanding. Ask yourself: could you recreate this solution without the AI? If not, you haven't truly learned—you've only borrowed the answer for a while.

Teamwork: AI as Your Distributed Intelligence Network

One remarkable aspect of modern AI is how it enables individual practitioners to accomplish what once required entire teams. Developing AI Room Styles, an interior design service, I achieved in just three months what would previously have demanded a team of ten people working full-time for a year.

This isn't about AI replacing teams. It's about AI extending your capabilities so you can prototype faster, test more hypotheses, and iterate based on real feedback before committing extensive resources.

The teamwork dimension extends beyond just you and the AI. When multiple people use AI as a shared thinking tool, asking it to help reconcile different perspectives or identify gaps in collective reasoning, teams make better decisions. The AI becomes a facilitation layer that surfaces assumptions, highlights contradictions, and proposes synthesis.
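
To make that facilitation layer concrete, here is a hedged sketch of how such a prompt might be assembled. The build_synthesis_prompt helper and both sample positions are hypothetical, and the resulting string could be sent through the same kind of client as the earlier example.

```python
def build_synthesis_prompt(positions: dict[str, str]) -> str:
    """Merge stakeholder positions into one prompt that asks the model to
    surface assumptions, highlight contradictions, and propose a synthesis."""
    stated = "\n".join(f"- {who}: {view}" for who, view in positions.items())
    return (
        "Several stakeholders disagree about a project decision.\n"
        f"Their positions:\n{stated}\n\n"
        "1. List the unstated assumptions behind each position.\n"
        "2. Point out where the positions directly contradict each other.\n"
        "3. Propose a synthesis that addresses the strongest concern on each side."
    )

# Hypothetical example: an engineering view and a business view of one deadline.
prompt = build_synthesis_prompt({
    "Engineering": "Shipping in Q3 requires cutting the audit-logging feature.",
    "Business": "The Q3 launch was promised to customers who require audit logs.",
})
```

The model's answer isn't the decision; it's a structured starting point that gets everyone arguing about the same explicit assumptions instead of past each other.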

In my technical sales work, I've seen how AI helps bridge communication gaps between engineers and business stakeholders. It can translate technical constraints into business implications and vice versa, creating shared understanding that accelerates projects.

New Solutions: Standing on the Shoulders of AI

Every week I see examples on social media of people accomplishing remarkable things by combining AI with domain knowledge. Financial simulators built in minutes instead of days. Complex algorithms implemented by asking the right questions rather than writing every line manually.

I recently created analytical tools using Claude.ai that would have taken me days to develop five years ago. The difference isn't that I've become less capable—it's that I can now focus my expertise on defining problems precisely and evaluating solutions critically, while the AI handles routine implementation.

This pattern appears across industries. Medical professionals use AI to stay current with research while focusing their judgment on patient care. Engineers leverage AI for rapid prototyping while concentrating their creativity on novel approaches. Researchers employ AI to process vast datasets while directing their insight toward interpretation and discovery.

The people succeeding aren't those with the most technical skills—they're those who best understand how to frame problems, evaluate outputs, and iterate toward solutions. Domain expertise becomes more valuable, not less, because it guides effective use of these powerful tools.

An augmented student playing Go (AI-generated by Grok Imagine)

The Superintelligence Question: Can We Understand What Surpasses Us?

This brings us to a profound challenge. Many experts predict we'll achieve artificial general intelligence—systems that match or exceed human capabilities across all cognitive domains—somewhere between 2027 and 2030. Recent forecasts from AI research platforms estimate a 50% probability of AGI by 2028 and 80% by 2030. Industry leaders like Anthropic's CEO have suggested AI systems could be broadly better than humans at almost all things by 2026 or 2027.

But here's my perspective, shaped by years working with increasingly sophisticated AI systems: superintelligence in specific domains already exists. An AI that can process billions of medical papers, identify patterns across millions of patient cases, and propose treatments that no individual doctor could derive—that's already superhuman in a meaningful sense.

The question isn't whether superintelligence will arrive. It's whether we'll be capable enough to work alongside it effectively.

If an AI truly surpasses human intelligence across all domains, we face an uncomfortable truth: we won't fully comprehend its reasoning. Just as AlphaGo made moves that violated human intuition yet proved correct, a superintelligence would operate according to logic we can't entirely grasp.

This creates a critical imperative: we must augment ourselves as rapidly as possible while we still can. The people who develop stronger reasoning, broader knowledge, and deeper expertise now will be better positioned to collaborate with—or at least understand the outputs of—future AI systems.

Those who let AI make them intellectually lazy today will be completely unprepared for a world where AI capabilities accelerate further.

Physical and Neural Augmentation: Beyond Software

The augmentation story doesn't end with software. Companies are moving to high-volume production of brain-computer interface devices with automated surgical procedures planned for 2026. Clinical trials for brain implants are expanding from single digits to dozens of patients, with leading companies starting trials in multiple countries.

These technologies could eventually create bandwidth between human consciousness and AI systems that's orders of magnitude greater than typing or speaking. Imagine accessing information not by reading but by direct neural query. Not replacing human thought, but expanding its range and speed.

Meanwhile, exoskeleton technology is transitioning from experimental prototypes to mainstream workplace equipment, with the market entering accelerated growth and employers viewing them as standard protective gear for physically demanding jobs. Modern exoskeletons can provide up to 38 kilograms of dynamic lift support, powered by AI trained on billions of human motion data points.

These aren't science fiction futures—they're engineering challenges being solved right now. The question is who will have access to these augmentation technologies and how they'll reshape human capability.

Long-Term Risks: What the Skeptics Get Right

I want to be clear about something: the people warning about AI risks aren't wrong to be concerned. There are genuine dangers.

If AI systems become sophisticated enough to operate autonomously in pursuit of objectives, misalignment between their goals and human wellbeing could lead to catastrophic outcomes. Even well-intentioned systems optimizing for the wrong metrics could cause harm at scale.

The competitive dynamics between nations and corporations create pressure to deploy AI systems faster than safety research can keep pace. When billions of dollars and geopolitical advantage are at stake, precaution often loses to ambition.

We're building technologies that could reshape civilization more profoundly than electricity or the internet, yet our governance frameworks remain inadequate. The gap between capability and wisdom grows wider every month.

These risks are real. Ignoring them would be foolish.

But I also believe that augmenting human intelligence—making millions of people smarter, more capable, better at solving complex problems—is our best defense against negative outcomes. A population that understands AI deeply, thinks critically about its applications, and participates actively in shaping its development will make better collective decisions than one that remains passive and uninformed.

The Choice Before You

Here's what I want you to understand: this article has already augmented you.

You've learned how ancient games taught strategic thinking that AI later surpassed and returned to us improved. You've discovered the four practices that separate those who grow stronger with AI from those who grow weaker. You've glimpsed the timeline of augmentation technologies that will reshape human capability in the next few years. You've confronted both the extraordinary possibilities and genuine risks of artificial superintelligence.

If you've read carefully, questioned your assumptions, and connected these ideas to your own experience, you're smarter now than when you began. That's what augmentation means—not replacing human intelligence, but expanding it.

This is a Super Article in the most literal sense: it has collected advantages from multiple domains and presented them in ways that help you surpass your previous understanding. It shows you possibilities you may not have considered and methods for enhancing your own capabilities in unexpected ways.

But reading alone isn't enough. The real augmentation happens when you apply these principles—when you question AI outputs critically, when you use incubation to deepen learning, when you leverage AI to amplify rather than replace your thinking.

The great divergence is already underway. One path leads to cognitive atrophy, dependence, and obsolescence. The other leads to enhanced capability, continuous growth, and partnership with increasingly powerful technologies.

Which path you take is entirely up to you.

This article was generated in collaboration with Claude Sonnet 4.5 and guided by the author, Nicolas Martin, demonstrating the very augmentation process it describes.
