Why do we need friendly Artificial Intelligence? – A Conversation with Eliezer Yudkowsky

Eliezer Yudkowsky is co-founder of MIRI (Machine Intelligence Research Institute) and an advocate for friendly artificial intelligence. Through a past participant of our boot camp who follows Eliezer’s work, we learned that he would be spending a few weeks over the summer here in Reñaca, Chile, where Exosphere HQ is located. We had the chance to spend some time with him, comparing notes on our views about the education system, startup culture, the future of technology, and in particular artificial intelligence. This is the interesting conversation that ensued…

Hi Eliezer, thank you for accepting this interview. First, I would like to ask how you got where you are in your life. What’s the path that led you to co-found an institute focused on artificial intelligence research?

I’ve been doing this more or less my entire adult career. When I was sixteen years old I read a book, “True Names and Other Dangers”, which mentioned the idea that at the point where your model of the future predicts that technology has created smarter-than-human intelligence, your crystal ball explodes and your model can’t predict the future past that point. This would apply to artificial intelligence, neurologically enhanced humans, or whatever. Having entities around that are smarter than you are is a kind of difference in the future that’s fundamentally different from faster cars or space colonies. This struck me as obviously correct, so I decided to spend the rest of my life working on it.