AI Won’t Plateau — if We Give It Time To Think | Noam Brown | TED

To get smarter, traditional AI models rely on exponential increases in the scale of data and computing power. Noam Brown, a leading research scientist at OpenAI, presents a potentially transformative shift in this paradigm. He reveals his work on OpenAI’s new o1 model, which focuses on slower, more deliberate reasoning — much like how humans think — in order to solve complex problems. (Recorded at TEDAI San Francisco on October 22, 2024)

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership

Follow TED!
X: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: https://go.ted.com/noambrown

TED’s videos may be used for non-commercial purposes under a Creative Commons License, Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com

#TED #TEDTalks #ai


40 Replies to “AI Won’t Plateau — if We Give It Time To Think | Noam Brown | TED”

Luck is part of poker. If you have two players of equal ability, the one who has better luck will win. I think they'd have to play hundreds or thousands of games to see which is better. Over the short term, luck is a big factor; over the long term, ability is the biggest factor.

Model 1 will hit a wall of cost/money, energy, and infrastructure. Model 2 (improving thinking) transforms the problems of Model 1 into another domain (time, and the money charged for it), and that will most probably be another wall.

Thanks. Now test your own intelligence: 1) Buy a book written in a foreign language. 2) Study the symbols; discover and capture every sequence these strange symbols follow. 3) After you have mastered every sequence, answer the following question: do you understand the language, or can you just repeat the sequences?

Depends on what you mean by "think". Humans have a bioelectric field that connects every cell to every other cell electromagnetically. This gives us self-awareness and connects us to the Universe. AI, on the other hand, only has what we give it: a pale simulation of a subset of our cells, the neurons. No glial cells. No field compute.

I think it's going to plateau anyway; that's not the problem. I think the main problem is that AI lacks the main drive of learning across all species, which is curiosity, but that factor only comes with the addition of emotions, starting with the most basic of them in the human brain.

AI computer chips are located inside metal boxes within data centers; they don't exist inside fleshy, fragile meat sacks that have to "think" in order to survive this wretched planet. More senseless Silicon Valley stock-pumping garbage!

That is what you call marketing BS. Linus Torvalds is right: AI is 90% marketing and 10% reality. Computers don't even have the capability to think. They are on-and-off switches that rely on a set of instructions fed to the CPU to tell the computer what to do. Without an operating system, which is largely created and maintained by human developers, computers can't function or do anything.

I don't think people understand how percentages work. Costing the company 10 times more in training and costing consumers 10 cents instead of a penny is the same multiplier at different scales; it just shifts the burden to the consumer instead of the company.
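
A toy calculation makes the multiplier point concrete (all numbers below are hypothetical, chosen only to illustrate the two scales):

    # Toy numbers, purely illustrative -- not actual training or API costs.
    training_cost_old = 100_000_000              # dollars to train the old model (hypothetical)
    training_cost_new = 10 * training_cost_old   # 10x more spent on training

    query_cost_old = 1                           # cents per query (hypothetical)
    query_cost_new = 10                          # cents per query (hypothetical)

    print(training_cost_new / training_cost_old)  # 10.0 -- paid once, by the company
    print(query_cost_new / query_cost_old)        # 10.0 -- paid on every query, by the consumer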

I don't understand; isn't thinking speed in computers equivalent to computational power or scale? I mean, it seems to be about scale again. For example, a computer at 1x scale running for 20 seconds is equivalent to a computer at 2x scale running for 10 seconds, isn't it? So it's basically a shift from scaling in training systems to scaling on our PCs and phones.
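
Under the idealized assumption this comment makes (that only total compute matters), the equivalence is just rate times time. A minimal sketch, with hypothetical rates and times; in practice, inference is also limited by memory bandwidth and by sequential token generation, so the trade is not always this clean:

    # Idealized model: total work = processing rate x wall-clock time.
    def total_compute(rate_flops, seconds):
        return rate_flops * seconds

    base = 1e12                          # hypothetical 1x machine: 1 TFLOP/s
    print(total_compute(1 * base, 20))   # 1x machine for 20 s -> 2e13 FLOPs
    print(total_compute(2 * base, 10))   # 2x machine for 10 s -> 2e13 FLOPs, same total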

How can a person known for having an incredibly high IQ also be known for being incredibly stupid?
An interesting paradox, indeed! Intelligence and wisdom often walk a tightrope, yet they aren't always in lockstep. Here are a few reasons why someone with a high IQ might also be seen as "incredibly stupid":

1. Lack of Practical Skills
Intellectuals may excel in abstract thinking but falter in everyday tasks. It's not uncommon for a genius mathematician to struggle with basic chores or social cues.

2. Overconfidence
High intelligence can lead to overestimating one's abilities. This "intellectual arrogance" may result in poor decisions because they fail to consider simpler, practical solutions.

3. Emotional Intelligence
A high IQ doesn't guarantee emotional intelligence (EQ). Social interactions, empathy, and self-awareness are often linked to EQ, not IQ. A person might solve complex problems yet fail to navigate relationships effectively.

4. Tunnel Vision
Brilliant minds often focus intensely on their areas of interest. This singular focus can lead them to overlook broader, simpler solutions, causing them to miss the forest for the trees.

5. Stress and Pressure
High achievers often face immense pressure. Stress can cloud judgment, leading even the smartest individuals to make seemingly foolish choices.

I’ve been saying this for years. We saw the AI scaling laws change in 2017 to include TIME, and when you add time into the training and output of AI models, the results are staggering. However, you still need the large foundational models to underpin many of these smaller models; without those large LLMs, the smaller models are far less effective and useful. We saw this with DeepSeek: cheap to train, but in many, many areas it is way behind OpenAI o1 and o3 (for example in research and investment advice, which it was measured on, etc.).

But what does he even mean by "system 2 thinking"? This doesn't mean anything. It's like he's speaking to a fifth grader. A computer doesn't "think", it calculates. Why system 2 calculations take longer is what he should have explained. What's the code behind it?

Why does the speaker keep saying "more time to think"? Isn't the system just doing more calculations during that time? The system is not sitting idle. He should pose it as a shift from bigger models with more parameters to more iterative compute with a smaller model.
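
OpenAI has not published how o1 spends its extra thinking time, but one generic stand-in for "more iterative compute with a smaller model" is to sample several reasoning chains and keep the best-scored one. A minimal sketch, where generate() and score() are hypothetical placeholders for a model call and a verifier:

    import random

    def generate(question):
        # Hypothetical stand-in for one sampled reasoning chain from a small model.
        return f"reasoning chain for {question!r} (sample {random.random():.3f})"

    def score(chain):
        # Hypothetical stand-in for a verifier or reward model that rates a chain.
        return random.random()

    def think_longer(question, n_samples):
        # More "thinking time" = more sampled chains = more inference-time compute,
        # with no change to the model's size or parameters.
        chains = [generate(question) for _ in range(n_samples)]
        return max(chains, key=score)

    print(think_longer("Is 3777 prime?", n_samples=16))

Here the quality knob is n_samples, i.e. inference-time compute, rather than parameter count.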

An AI processor is much faster than we are. So in a way, those minutes could be like days or even years, at least eventually, as CPUs and GPUs improve.

So, moving instantly… it would be equivalent to much more time compared to human processing speed. And thinking for minutes would be exponentially longer in "AI time".

This is my own concept or opinion, but maybe others agree?
