🔖 Andrew Ng: Do we think the world is better off with more or less intelligence?

Author: www.ft.com
Book Name: andrew-ng-do-we-think-the-world-is-better-off-with-more-or-less-intelligence
Modified: Last updated January 4, 2024
Created time: Jan 4, 2024 03:40 PM

🎀 Highlights

Whatever we put more regulatory burdens on, that’s what we’ll see less of.
An open-source model is a general purpose technology: it can get used to build a healthcare app, a customer service app, a financial services app, and on and on. So if you regulate that core technology, you’re slowing everything down, and probably without making anything meaningfully safer.
from the scientific evidence I’ve seen, AI models do build models of the world.
there’s been scientific evidence showing that LLMs, when trained on a lot of data, do build a world model.
the debate on AI seems to come down to optimists like yourself, who focus on what the technology is currently capable of, and doomers, who focus on projecting what the exponential advances we’re seeing will mean for the future.
[…] model or with a large language model. And people can build systems for misinformation with a small or large language model. So the size of the language model is a very weak measure for risk.
A better measure would be: what is the nature of the application?
The problem with regulating the technology layer is that, because the technology is used for so many things, regulating it just slows down technological progress.
do we think the world is better off with more or less intelligence?
it is true that intelligence now comprises both human intelligence and artificial intelligence. And it is absolutely true that intelligence can be used for nefarious purposes.
If we look at AI extinctionism, its scenarios are so vague and fantastical that I don’t think they’re realistic. And they’re also hard to defend against.
Unfortunately, since that White House voluntary commitment, I’ve seen companies step back from watermarking text content. So I feel that the voluntary commitment approach is failing as a regulatory approach.
Unfortunately, there are massive forces, including some very large companies, that I think are overhyping the risks of AI. Big companies would frankly rather not have to compete with open-source AI.
Yann LeCun has been speaking about this as well. I think there are actually quite a few people with a very thoughtful perspective on this.
When lots of people signed [the Center for AI Safety statement] saying AI is dangerous like nuclear weapons, the media covered that.
When there have been much more sensible statements — for example, Mozilla saying that open source is a great way to ensure AI safety — almost none of the media cover that.
[Photo caption] West is best: tech investor Bill Gurley has said that Silicon Valley’s success in innovation owes a lot to its being so far from regulators in Washington, DC © David Paul Morris/Bloomberg
I see no reason to make an analogy between AI and nuclear weapons. It is an insane analogy. One brings more intelligence and helps make better decisions, and the other blows up cities. What have these two things to do with each other?
But while no one wants to see AI used to wage an unjust war, I think the price of slowing down global innovation, of letting there be less intelligence and poorer decision-making all around the world, is too high a price to pay.
who will be left behind if we slow down open source?
After reading Kai-Fu Lee’s book AI Superpowers, which came out in 2018, I was convinced that China would be the one leading the way on AI development.
data is very verticalised — data is not a single, featureless glob of things that you just want more of. For example, while Google has tons of web search data, that data by itself is not very useful for logistics, or smartphone manufacturing, or drug discovery.
Even Seattle and New York, say, have much less generative AI talent than Silicon Valley.