We Built the Tiger Cub. Now What?
Why Geoffrey Hinton thinks we’re raising a digital tiger cub—and what leaders must do before it grows up.
By now, you’ve probably heard the story: Geoffrey Hinton, the “Godfather of AI,” steps out of the Googleplex, dusts off his conscience, and starts ringing the bell. Not the “ding-ding, buy more crypto” bell, but the “hey, humanity, are we paying attention?” bell.
Let’s break this down.
The Chicken, the Tiger Cub, and Us
Hinton’s warning is as simple as it is chilling: We’ve built something that’s learning faster than we can imagine. If you want to know what it’s like not to be the smartest thing in the room, he says, “ask a chicken.” Or, for a more cuddly metaphor, think of AI as a tiger cub: adorable, fascinating, but you’d better be sure it never wants to eat you when it grows up.
We’re raising the cub, but are we training it, or just feeding it?
The Six-Alarm Fire
AI isn’t just a productivity tool or a new way to write blog posts. Hinton sees six threats, and they’re not science fiction:
Autonomous Weapons: Robots that decide who lives or dies. (No, this isn’t a pitch for a new Netflix series.)
Cyber Attacks: AI can hack faster, smarter, and at scale. Your IT guy’s Red Bull habit won’t save you.
Disinformation & Echo Chambers: AI can deepen divides, confirm your biases, and keep you clicking.
Job Displacement: If your job is “mundane intellectual labor,” start looking for a wrench. (Plumbing, anyone?)
Wealth Inequality: The AI gold rush will make the rich richer. The rest? Well, good luck.
Loss of Control: Superintelligence doesn’t have to hate you. It just has to be indifferent.
Regulation: The World’s Worst Group Project
Europe is trying to regulate. The U.S. is worried about China. Everyone’s concerned about everyone else. And, as Hinton notes, the people making the rules often don’t understand the game. (Remember the senator who thought Facebook was “the internet”?)
So, who’s in charge? The honest answer: not nearly enough of the thoughtful, informed people who should be.
The CEO’s Dilemma
If you’re a leader, you’re not off the hook. Hinton left Google because he wanted to speak freely about the risks. He says companies need to foster cultures where people can raise safety concerns without being shown the door. This isn’t just about profit; it’s about responsibility.
What Do We Do Now?
Audit your AI. Know what it’s doing, and what it could do.
Invest in security. Not just firewalls—think misinformation, deepfakes, and digital trust.
Upskill your team. If AI can do your job, can you do something AI can’t?
Engage in policy. Don’t wait for the regulators to catch up. Help them.
Plan for the worst. Hope for the best, but don’t bet the farm.
The Hardest Lesson
Hinton’s biggest regret? Not raising the alarm sooner. His advice: If your intuition tells you something’s wrong, don’t let the crowd drown you out. Sometimes, you’re the only one who sees the tiger cub for what it is.
The Bottom Line
AI is not coming. It’s here. The window for shaping its future is closing fast. The question is not whether we can build smarter machines. It’s whether we can stay wise enough to survive them.
The future isn’t written by the smartest code. It’s written by the bravest humans.
So, what will you do while the cub is still in the playpen?
~ aq