STUART RUSSELL (Professor of Computer Science, University of California, Berkeley)

QUANTA MAGAZINE: What concerns you about the possibility of human-level AI?

STUART RUSSELL: In the first edition of my book there's a section called, "What if we do succeed?" Because it seemed to me that people in AI weren't really thinking about that very much. But it's pretty clear that success would be an enormous thing. "The biggest event in human history" might be a good way to describe it. Over the last few years, the community has gradually refined its arguments as to why there might be a problem. The basic idea of the intelligence explosion is that once machines reach a certain level of intelligence, they'll be able to work on AI just like we do and improve their own capabilities - redesign their own hardware and so on - and their intelligence will zoom off the charts. And if that's true, then we need to put a lot more thought than we are doing into what the precise shape of that event might be.