Certainly an important line of thought. Have you thought about the more concrete things that we should do today in preparation for this future? Are you leaning towards the effective-altruism-related "we should work on AI alignment" path or is that view overhyped?
I'm not sure; it seems like a very hard problem to solve. Bostrom called it "philosophy on a deadline": not only do we need to solve problems philosophers have been working on for centuries, we also need to express those solutions in a formal mathematical representation.
I don't know what's changed since the book was published, but I do think solving the problem is important — possibly one of the most important things humanity will ever do.