Some AI research may be too dangerous to share
Should development of artificial intelligence be fully open-sourced or are there ethical reasons to limit access?
A recent feature in the FT Weekend Magazine about the UK spy agency GCHQ revealed one especially surprising titbit: the agency had been inspired to create an internet surveillance system by an academic paper about an artificial chess grandmaster, published by DeepMind, the Google-owned artificial intelligence (AI) company.
According to a GCHQ official: “The people who did this at DeepMind, they published all the work, it’s out there, anybody can access it. So we should make use of it.”