INQ120D
Living an Examined Life

Activity 3

The Singularity and Super-intelligence

  1. Are you a Kurzweil or a Bostrom? Do you see a bright future with happy, augmented, super-intelligent people, or a dark future where a super-intelligent artificial intelligence might try to kill us all?

  2. Why might a super-intelligent artificial intelligence be a danger to humanity? Why won't it just help us, as we will design it to?

  3. Assuming we will create a super-intelligent artificial intelligence, how can we lessen its danger to humanity?

  4. Can we ensure that a super-intelligent artificial intelligence will not harm us? Should we proceed with its development if we can't?