ai: Sean Carroll on AI Risks

Well, there has been so much written about this, and I thought Sean Carroll was particularly articulate about grading the possibilities. The good news is that he transcribes all of his answers, so here they are:

The exchange is from his August 2023 Ask Me Anything, where a listener asked:

Kevin Harrang
I learned a lot from your recent discussion about artificial intelligence with Raphaël Millière, thank you, but am still wondering where you personally come down on the question of whether current or future iterations of AI are something to be worried about (like Einstein warned about nuclear technology), either intrinsically or just in the wrong hands?


His answer is pretty eloquent, but his basic point is that we should focus on the likely near-term risks rather than just the big scary ones.


1:00:13.3 SC: Yeah, sure. You should definitely be worried. [chuckle] The question is, what is the form that that worry takes? I thought that the open letter that said, “Stop doing research on AI for six months while we figure out what to do,” was, as a practical matter, completely unrealistic for various reasons. Number one, how are you gonna get everyone in the world to agree to stop doing that? What’s to stop people from actually continuing AI research and just not telling anybody about it, especially people in other countries or whatever?
1:00:45.4 SC: Number two, what in the world makes you think that in six months you’ll figure out what to do about it? I think it’s a misunderstanding of how these things work. These things are processes that are ongoing, both the process of developing AI and the process of developing safeguards and figuring out what to do about it. The open letter did a lot of good in the sense that it got people talking about the issue, which is great. But I don’t think what it was asking for was the right thing to ask for. The right thing to ask for is very accelerated, hard thinking and policy implementation of safeguards that would protect us from the worst aspects of AI. What those aspects could be, there are obviously many choices: if you turn over important technological or industrial features to control by computers rather than humans, you open up the possibility of disastrous errors.
1:01:43.9 SC: Humans using AI to spread misinformation and fake news and so forth is an obvious big problem. So I think that it’s very, very important to put in safeguards and to think about what the safeguards are that we need. I didn’t really like the idea of just pausing research for six months. I also don’t like the idea of emphasizing the possibility of existential risk, where you kill every human being on Earth. That is absolutely possible, but it’s sufficiently low probability that I don’t think it’s the right way to think about it. I think that you will actually increase the chances of bad things happening if that’s the kind of risk you worry about, rather than the real, very, very imminent risks that we have from AI.
1:02:26.6 SC: And furthermore, if you worry about the imminent risks, you’re much more likely to also ameliorate the existential risks. So, that’s what I would be in favor of doing.

AMA August 2023
