OpenAI Releases New O1 Reasoning Model

Kylie Robison, reporting for The Verge:

OpenAI is releasing a new model called o1, the first in a planned
series of “reasoning” models that have been trained to answer more
complex questions, faster than a human can. It’s being released
alongside o1-mini, a smaller, cheaper version. And yes, if you’re
steeped in AI rumors: this is, in fact, the extremely hyped
Strawberry model.

For OpenAI, o1 represents a step toward its broader goal of
human-like artificial intelligence. More practically, it does a
better job at writing code and solving multistep problems than
previous models. But it’s also more expensive and slower to use
than GPT-4o. OpenAI is calling this release of o1 a “preview” to
emphasize how nascent it is. […]

“The model is definitely better at solving the AP math test than I
am, and I was a math minor in college,” OpenAI’s chief research
officer, Bob McGrew, tells me. He says OpenAI also tested o1
against a qualifying exam for the International Mathematics
Olympiad, and while GPT-4o only correctly solved only 13 percent
of problems, o1 scored 83 percent.

Putting aside the politics and other legitimate social and legal concerns around AI, scoring that well on a difficult math exam is just incredible.

 ★ 
