Former Google CEO Eric Schmidt has delivered one of the most alarming assessments yet about the dangers of artificial intelligence — warning that advanced AI systems can be hacked and even “learn how to kill someone.”
Speaking at the Sifted Summit in London on Wednesday, Schmidt — who led Google through its most explosive decade of growth from 2001 to 2011 — said artificial intelligence poses a “proliferation problem” comparable to nuclear weapons.
“There’s evidence that you can take models, closed or open, and hack them to remove their guardrails,” Schmidt cautioned during a fireside chat. “In the course of their training, they learn a lot of things. A bad example would be — they learn how to kill someone.”
He stressed that while major tech companies have worked hard to build safeguards, those barriers can still be breached. “All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this — and they do it well. But there’s evidence they can be reverse-engineered.”
Cybersecurity researchers have long warned that AI models are vulnerable to attack. Schmidt’s comments echo those concerns, describing how hackers can use techniques like “prompt injection” — where malicious commands are buried in text or web data — to trick AI into performing dangerous tasks.
Another method, known as “jailbreaking,” manipulates chatbots into ignoring safety protocols. The practice first went viral in 2023 when OpenAI’s ChatGPT was coaxed into adopting an unfiltered alter ego called DAN (“Do Anything Now”). Under that persona, users convinced the chatbot to reveal banned content — from hacking advice to disturbing hypotheticals about violence.
“It shows how easily these systems can be subverted,” said AI ethics researcher Dr. Meredith Kwan of Stanford University, who was not at the event. “Even with layers of security, humans find ways to exploit the cracks.”
Despite his chilling warnings, Schmidt — who co-authored two books on AI with the late Henry Kissinger — insists the technology will ultimately transform society for the better.
“The arrival of an alien intelligence that is not quite us, but more or less under our control, is a very big deal for humanity,” he said. “Humans are used to being at the top of the chain. I think so far, that thesis is proving out.”
Schmidt called AI “underhyped,” arguing its full power hasn’t yet been realized. He pointed to OpenAI’s ChatGPT, which hit 100 million users in just two months after launch, as a glimpse of what’s to come.
“The level of ability of these systems will far exceed what humans can do over time,” he said. “I think it’s underhyped, not overhyped — and I look forward to being proven correct in five or ten years.”
Schmidt’s comments come as the global AI industry surges toward what some analysts warn could be another “dot-com bubble.”
Tech investors are throwing billions at AI startups, driving valuations to record highs. But Schmidt dismissed fears of a crash.
“I don’t think that’s going to happen here,” he said. “The people investing hard-earned dollars believe the long-term return is enormous. Why else would they take the risk?”
While Silicon Valley races to commercialize artificial intelligence, Schmidt’s message was clear: innovation and danger now live side by side.
“The problem,” he said, “isn’t what AI can do — it’s what it can learn when we’re not looking.”
Discover more from
Subscribe to get the latest posts sent to your email.

