- Mall Cop Robot
- A University for the coming singularity
- Wheat and Chessboard Problem
- The $1.3B Quest to Build a Supercomputer Replica of a Human Brain
- Obama’s Brain Project Backs Neurotechnology
- Reverse-engineering of Human Brain Likely By 2030
- Are you a Kurzweil or a Bostrom? Do you foresee a bright future with happy, augmented, super intelligent people, or a dark future where a super intelligent artificial intelligence might try to kill us all?
- Why might a super intelligent artificial intelligence be a danger to humanity? Why won’t it just help us, as we will design it to?
- Assuming we will create a super intelligent artificial intelligence, how can we lessen its danger to humanity?
- Can we ensure that a super intelligent artificial intelligence will not harm us? Should we proceed with its development if we can’t?