
AI Threats ‘Complete BS’ Says Meta Senior Researcher, Who Thinks AI is Dumber Than a Cat

Meta senior researcher Yann LeCun (also a professor at New York University) told the Wall Street Journal that worries about AI threatening humanity are “complete B.S.”

When a departing OpenAI researcher in May talked up the need to learn how to control ultra-intelligent AI, LeCun pounced. “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat,” he replied on X. He likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today’s “frontier” AIs, including those made by Meta itself.

LeCun shared a Turing Award with Geoffrey Hinton and Yoshua Bengio (who hopes LeCun is right, but adds “I don’t think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy. That is why I think we need governments involved.”)

But LeCun still believes AI is a very powerful tool — even as Meta joins the quest for artificial general intelligence:

Throughout our interview, he cites many examples of how AI has become enormously important at Meta, and has driven its scale and revenue to the point that it’s now valued at around $1.5 trillion. AI is integral to everything from real-time translation to content moderation at Meta, which in addition to its Fundamental AI Research team, known as FAIR, has a product-focused AI group called GenAI that is pursuing ever-better versions of its large language models. “The impact on Meta has been really enormous,” he says.

At the same time, he is convinced that today’s AIs aren’t, in any meaningful sense, intelligent — and that many others in the field, especially at AI startups, are ready to extrapolate its recent development in ways that he finds ridiculous… OpenAI’s Sam Altman last month said we could have Artificial General Intelligence within “a few thousand days….” But creating an AI this capable could easily take decades, [LeCun] says — and today’s dominant approach won’t get us there…. His bet is that research on AIs that work in a fundamentally different way will set us on a path to human-level intelligence. These hypothetical future AIs could take many forms, but work being done at FAIR to digest video from the real world is among the projects that currently excite LeCun. The idea is to create models that learn in a way that’s analogous to how a baby animal does, by building a world model from the visual information it takes in.

In contrast, today’s AI models “are really just predicting the next word in a text,” he says. And because of their enormous memory capacity, “they can seem to be reasoning, when in fact they’re merely regurgitating information they’ve already been trained on.”
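For readers unfamiliar with the mechanism LeCun is describing, here is a minimal sketch of that next-word loop. It assumes the Hugging Face transformers library and the public “gpt2” checkpoint purely for illustration; neither is mentioned in the article, and the point is only that generation is repeated next-token prediction, not reasoning.

```python
# Minimal sketch of greedy next-token prediction (assumed example, not from the article).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "A house cat has a mental model of"
ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(10):                        # generate ten tokens, one at a time
    logits = model(ids).logits             # scores for every token in the vocabulary
    next_id = logits[0, -1].argmax()       # pick the single most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))            # the model only ever predicted "the next word"
```

Everything the model produces comes out of this one-token-at-a-time loop; any apparent reasoning is, on LeCun’s account, pattern recall from training data surfacing through that loop.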

Read more of this story at Slashdot.

