Let me tell you something I have never been entirely comfortable admitting: I was, for most of my teenage years, genuinely mediocre, and I was perfectly fine with that.
Born in Venezuela and a teenager in France in the 1990s, I lived for cartoons, video games, karate, drawing, and comic books. I was a decent cartoonist. I was terrible at music. I had zero interest in developing what anyone might call "skills." Effort felt pointless. Direction felt like something other people needed, the driven ones, the ambitious ones. Not me.
I failed a year in high school. Then another in engineering school. My studies nearly collapsed three times. My father kept pushing me toward engineering, a field I had not chosen and did not love. I later learned that more than 50% of engineers worldwide did not initially choose their career out of passion. That statistic should disturb you, and it should also comfort you.
The true social inequality is in the mind.
— A sentence that took me twenty years to understand
Because here is what no one tells you when you are a struggling student with shaky self-esteem: the turning point rarely looks like a turning point when it happens.
February 2005. One Robot. Everything Changed.
ENSIAME Valenciennes, France
I was a bad-to-average engineering student. Serious, respectful of professors, genuinely trying, but not brilliant. Then came a robotics project nobody on my team wanted to do. My teammate refused. I had two options: convince him, or do it alone.
I did it alone. And here is what made it different from every other time I had forced myself through coursework: I actually wanted to do it. Programming a KUKA industrial robot, setting variables, defining limits, watching a physical machine respond to logic I had written, was genuinely, unexpectedly fun.
That was the hinge everything else swung on. Engineering stopped being a burden and became a passion. Once you feel that click, once a skill stops feeling like obligation and starts feeling like a superpower, you cannot unfeel it.
Fast-forward twenty years. I build AI-powered tools for businesses. In three months, working 90% autonomously with AI, I created a complete interior design service that would previously have required ten people for a year. I build financial simulators in minutes. What used to take days takes an afternoon.
The Problem That Most AI Users Have in 2026
Most people using AI daily are quietly getting worse at thinking
Here is the claim that most AI optimists refuse to make: regular AI use, without deliberate discipline, actively degrades the cognitive skills it appears to enhance.
The evidence is mounting. A 2023 study from MIT Sloan found that workers who relied heavily on AI writing tools scored measurably lower on unassisted analytical writing tasks after six months, even as their AI-assisted output improved. A separate study published in Computers in Human Behavior found that heavy GPS users showed significant decline in spatial navigation ability over time. The brain, given an external crutch, quietly stops developing the internal muscle.
Now extrapolate that to judgment, strategic thinking, and domain reasoning, the exact skills that define entrepreneurial success. If you outsource your analysis to AI for long enough, you do not merely become dependent on it. You become demonstrably worse without it. And when AI gives you a confidently wrong answer, which it does, regularly, you no longer have the baseline competence to catch the error.
The Real AI Risk
The real AI risk is not that machines replace your job. It is that you replace your thinking with a machine, and forget you ever had it. This is not an argument against AI. It is an argument for using it the way elite athletes use training equipment: as a tool that serves your development, not a substitute for it.
The Four Things AI Still Cannot Do Better Than You
Build these, or compete with someone who did
Small Team Collaboration
Research consistently shows teams of three to four people outperform individuals and large groups on complex problems. AI generates options. It cannot read the silence after a bad idea, feel the friction of a room, or rally people around a direction. Chemistry is irreplaceable.
Deep, Reflective Writing
Not content-mill writing, but the kind where you work out what you actually think by writing it down. This is where strategy is born. AI polishes output. It cannot do the thinking for you.
Genuine Human Trust
AI can simulate empathy. It cannot produce it. The entrepreneur who listens deeply, navigates conflict with grace, and builds real trust will never be automated. Clients don't just buy solutions. They buy confidence in another human being.
Entrepreneurial Judgment Under Uncertainty
The decision to start, pivot, stop, hire, fire, these are not optimization problems. They are judgment calls made in conditions of radical uncertainty, shaped by values, experience, and the willingness to be wrong. No model carries that weight for you.
What the Statistics Actually Say, If You Read Past the Headline
The numbers are not telling you to panic. They're telling you where to invest.
Everyone cites the McKinsey figure: generative AI could automate tasks accounting for 60 to 70% of employees' time. Goldman Sachs adds that 300 million jobs are exposed to automation. The World Economic Forum's Future of Jobs 2025 report lists analytical thinking, creative thinking, and resilience as the top skills in demand through 2030.
Most people read those numbers as a threat map. That is the wrong frame. Read them as a price signal.
When McKinsey says 60 to 70% of task time is automatable, they are describing execution, the mechanical, repeatable layer of most professional work. What both reports implicitly confirm is that the non-automatable layer, judgment, synthesis, trust, direction, becomes dramatically more valuable as everything beneath it is commoditized.
What those reports describe, taken together, is a compression of the middle. Average technical execution collapses in value. Generic prompt engineers flood the market within months. But the professional who brings genuine domain depth, clear judgment, and human relationships to an AI-enabled workflow becomes exponentially more productive, and exponentially harder to replace.
The Inequality That Modern Societies Are Still Facing
AI will not close the gap. It will widen it, unless...
I was born in Venezuela. I know exactly what low self-esteem from a developing country feels like, the quiet, pervasive belief that you are behind before you begin. That the world's systems were built by and for someone else, somewhere else.
That belief is the most powerful cage ever constructed. And it is almost entirely invisible from the inside.
The 2022 World Bank Human Capital Index found that children in low-income countries receive not just less educational access, but less belief in their own potential, a cognitive and psychological deficit that precedes and shapes the financial one. The brain adapts to its environment so efficiently that the environment defines your life if you let it.
Here is what the AI optimists miss: a young entrepreneur in Pondicherry or Caracas with a laptop and AI access can now technically compete with agencies in Paris or New York. But technically is doing a lot of work in that sentence. Because if their starting point is confusion, about what they are building, who it is for, why it matters, AI outputs confused work faster. The tools lower the cost of execution. They do not lower the cost of thinking.
The AI opportunity is real. But it is not equally distributed, and the dividing line is not geography or income. It is the depth of the skills and self-belief the person brings to the keyboard.
The Four Dimensions of the Human Who Thrives With AI
In order. The sequence matters.
The Virtuous Loop That Compounds Everything
There is one dimension the four pillars above do not fully capture on their own: what happens when they work together over time. Humans who combine genuine expertise with deliberate AI fluency do not just perform better, they enter a compounding loop. A loop that most people, focused on the next prompt or the next tool, never even notice is available to them.
The Social Amplifier: Where Individual Growth Becomes Collective
The loop compounds further when it stops being a solo journey. When practitioners share what they built with AI, the prompts that worked, the outputs that surprised them, the failures that taught them something, the individual cycle plugs into a collective intelligence layer. Peer critique sharpens prompting. Group discussion surfaces blind spots. The insights that flow back in are richer, faster, and harder to replicate alone.
This is what responsible AI use actually looks like, not prompt-copying, not outsourcing your thinking, but using AI as a training partner that makes your own intelligence more powerful with each iteration. Used this way, individually and within teams, AI becomes the engine of a shared upward spiral where every member contributes their human depth, challenges AI outputs, and builds on one another's expertise. Creativity, efficiency, and passion do not compete in this loop. They reinforce each other continuously, and no one inside it is replaceable by an algorithm.
I deliberately train the recommendation algorithms I live with, on YouTube and X, by signaling what serves my development and refusing what doesn't. When YouTube recommended content I had no reason to consume, I opted out. When ads placed me in a dependency loop feeding manufactured inadequacy, I blocked the category. That is not digital hygiene. That is self-awareness applied to technology. The same discipline applies to AI: use it to sharpen your thinking, not replace it.
One Prediction. One Action. No Hedging.
Not in spite of AI. Because of it.
In February 2005, I chose to program a robot alone because it was interesting to me, not strategic, not mandatory, not impressive. Something clicked. The work stopped feeling like work. That moment did not make me smarter. It made me directed. And direction, knowing what you are building and why, is the one thing AI cannot manufacture for you.
Here is the prediction: by 2028, the most valuable professionals in every field will not be distinguished by what AI they use, every serious player will use the same models. They will be distinguished by what they bring to the AI that it cannot generate itself. The market for generic AI-assisted work will be brutally commoditized within three years. The premium for genuine human expertise, applied with AI leverage, will be the highest it has ever been.
The gap between those two outcomes is not technical. It is personal. It is the skills you are building right now, or not building.
The AI era is not producing two kinds of professionals. It is producing two kinds of people: those who used AI to skip the hard work of becoming genuinely skilled, and those who used AI to do more with skills they had already paid for in years of effort.
What separates those two groups has a name: high agency. The capacity to set direction, take action, and own the outcome, without waiting for certainty, permission, or a better tool. Every dimension we covered feeds it directly. Social intelligence means you read situations and move rather than stall. Domain depth means you trust your own judgment instead of deferring to the model. Decision quality means you act under pressure with the information available. AI fluency means you direct the tools, the tools do not direct you. And the virtuous loop is how all of it compounds: each cycle of deeper knowledge, sharper problem solving, new opportunities spotted, and limits understood makes the next cycle faster and more powerful. High agency is not a personality trait. It is the accumulated result of building the right skills, in the right order, with the right discipline. That is exactly what everything above has been building toward.
One of those groups is about to find out exactly how much the shortcut cost them.
— Nicolas Martin
What to Do in the Next 72 Hours
Not "invest in yourself." That's too vague to act on. Here is the specific version:
1. Name one domain where you have genuine depth, not familiarity, depth. That is your moat. Find one AI tool that extends it and spend one hour this week using it on a real problem.
2. Identify one decision you have been delegating to data, to tools, or to consensus. Make it yourself this week. Document your reasoning. You are building a judgment muscle, and muscles only develop under load.
3. Pick one human relationship, a client, a collaborator, a team member, and invest in it deliberately this month. Not a message. A real conversation. That relationship will compound in ways no AI output can.
Notice what these three actions have in common: they all restore you, not a tool, not a model, not a consensus, to the role of decision-maker. That is the operating definition of high agency: acting, directing, and owning the outcome even under uncertainty, even without complete data. Each step above is a deliberate rep in that direction. Naming your domain moat is an act of self-knowledge. Making your own decision is an act of courage. Investing in a relationship is an act of trust. None of these are passive. None of them can be delegated to an AI. And none of them are optional if you want to stay relevant.
Now scale that up. One high-agency individual changes what a team can achieve. A team of high-agency people changes what an organization can achieve. When people at every level bring their own depth, challenge AI outputs rather than accept them, and take ownership of decisions rather than defer them to tools, the whole organization becomes harder to disrupt and faster to adapt. That is not just a talent strategy. It is the only durable competitive advantage in an era where every competitor has access to the same models. The question for companies is not which AI to buy. It is who they have built to use it.
For Companies: Why High-Agency Employees Are Your Most Strategic Asset
The organizational case for investing in human depth
The High-Agency Employee Advantage
If the arguments above apply to individuals, they apply to organizations with compounding force. The same AI tools are available to every company in your sector. The same models, the same APIs, the same price point. What is not equally available is a workforce that knows how to use them wisely, people who bring judgment, domain depth, and genuine trust to every interaction that technology enables.
- Error-catching capability: High-agency employees are the organization's immune system against confident AI mistakes. When a model hallucinates a regulatory figure, cites a non-existent study, or produces a legally risky clause, it is the employee with deep domain knowledge who catches it. Low-agency workers, who defer to AI output, become a liability at scale.
- Accelerated ROI on AI tools: Deloitte's 2024 data is unambiguous: organizations that invest in human capabilities alongside AI tooling outperform those that invest in tooling alone. The person who understands the domain directs the tool. Without that understanding, you are paying for a sports car and driving it in first gear.
- Resilience under disruption: The next wave of AI will automate tasks that the current wave does not yet reach. Companies staffed by people who have outsourced their thinking to current tools will be structurally fragile. Companies staffed by people who use AI to amplify their own reasoning will adapt, because the underlying cognitive asset is portable, not dependent on any specific tool.
- Client trust and relationship capital: Clients do not distinguish between AI-generated and human-generated outputs in terms of polish. They distinguish them in terms of trust. The account team that brings genuine expertise to client conversations, using AI to prepare and execute, not to think, builds relationships that survive the next disruption. The team that clearly delegates its judgment to a chatbot does not.
- Innovation at the frontier: The most commercially valuable AI applications in the next three years will not come from prompt engineering. They will come from people who understand a domain deeply enough to see the gap that AI could fill, and who have the judgment to evaluate whether the AI fills it correctly. Domain depth is not just valuable alongside AI, it is what makes AI strategically useful rather than operationally decorative.
The practical implication: the question for your talent strategy is not "how many people can use AI tools?" It is "how many people have the depth to direct them wisely?" That distinction will separate the companies that thrive from the companies that execute efficiently on the wrong things, very, very fast.
Sources & References
- [1] MIT Sloan Management Review. (2023). The Hidden Cost of AI Writing Assistance on Analytical Skill. MIT.
- [2] Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Computers in Human Behavior, 106043.
- [3] McKinsey Global Institute. (2024). The Economic Potential of Generative AI. McKinsey & Company.
- [4] Goldman Sachs. (2023). Generative AI Could Raise Global GDP by 7%. Goldman Sachs Global Investment Research.
- [5] World Economic Forum. (2025). Future of Jobs Report 2025. WEF, Geneva.
- [6] Deloitte. (2024). Global Human Capital Trends 2024: Work That Endures. Deloitte Insights.
- [7] World Bank Group. (2022). Human Capital Index 2022. World Bank, Washington, D.C.