Meta’s AI chief Yann LeCun has said that predictions about AI endangering humanity are “complete B.S.”
LeCun has an extremely decorated resume in the world of AI. He has won one of the most prestigious awards in the field, the A.M. Turing Award, for his work in deep learning, and he is a professor at New York University.
When questioned by a journalist from The Wall Street Journal on whether AI will become smart enough to endanger humanity in the near future, he simply replied: “You’re going to have to pardon my French, but that’s complete B.S.”
That doesn’t mean LeCun was entirely dismissive of the possibility of artificial general intelligence (AGI), a machine intelligence that matches human capabilities across a wide variety of tasks.
However, he argued that large language models (LLMs) like ChatGPT and xAI’s Grok won’t lead to AGI, no matter how much they are scaled up.
LeCun said these LLMs merely demonstrate that “you can manipulate language and not be smart.”
“We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true,” said LeCun.
He explained to the WSJ that current LLMs merely predict the next word in a piece of text, but are “so good” at it that they fool people.
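LeCun’s point can be seen in miniature. The sketch below is our own toy illustration (not Meta’s or LeCun’s work, and nothing like a production system): a bigram model that predicts the next word purely from word-pair counts. It can extend a sentence plausibly while understanding nothing; real LLMs are vastly more sophisticated predictors, but the training objective he describes is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from raw counts alone.
# It has no notion of meaning; it only tracks what tends to follow what.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the tiny corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat" (the most frequent word after "the")
```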
He highlighted the work of Meta’s Fundamental AI Research (FAIR) division as the future of AI; his team there is currently working on systems that digest video of the real world.
The scientist, who has been called one of the “Godfathers of AI,” comes into conflict with other figures in the tech world, such as OpenAI CEO Sam Altman and Elon Musk, with comments like these.
In January 2024, Altman predicted that AGI would arrive in the “reasonably close-ish future,” speaking at a Bloomberg-organized event at the World Economic Forum.
Musk, for his part, has consistently promoted the need for AI regulation before a super-intelligent AI is developed.
He recently came out in support of California bill SB 1047, which would introduce new safety and accountability mechanisms for large AI systems, calling AI a “potential risk to the public” in a post on X.
This brought him into direct opposition with LeCun, who claimed the legislation would have “apocalyptic consequences on the AI ecosystem” because it regulates the research and development process.
Meta’s AI chief has a long history of vocal AI skepticism; in May, he told the Financial Times that LLMs have a “very limited understanding of logic” and can’t understand the physical world.