Forget Sentience, the Worry Is That AI Copies Human Bias
'I want everyone to understand that I am, in fact, a person.' So claimed a Google software programme, creating a bizarre controversy over the past week in AI circles and beyond.
The programme is called LaMDA, an acronym for Language Model for Dialogue Applications, a project run by Google. The human to whom it declared itself a person was Blake Lemoine, a senior software engineer at Google. He believes that LaMDA is sentient and should be accorded the same rights and courtesies as any other sentient being. When Google rejected his claims, he published his conversations with LaMDA on his blog, at which point Google suspended him for making company secrets public, and the whole affair became an international cause célèbre.
There are many issues relating to AI about which we should worry. None of them has to do with sentience. There is, for instance, the issue of bias. Because algorithms and other forms of software are trained using data from human societies, they often replicate the biases and attitudes of those societies.
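The mechanism is straightforward to demonstrate. Below is a minimal, hypothetical sketch in Python, using entirely synthetic data and scikit-learn; it has nothing to do with LaMDA or any real system. A model trained on historical hiring decisions that favoured one group goes on to reproduce that preference, even though the two groups are identically qualified.

```python
# A minimal sketch of bias replication, on synthetic data.
# The hiring scenario, feature names and coefficients are all hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions.
group = rng.integers(0, 2, n)   # protected attribute: 0 or 1
skill = rng.normal(0, 1, n)     # equally distributed in both groups

# Historical labels: past human decisions favoured group 0,
# so the label mixes genuine skill with group membership.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5

# Train on the biased labels.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# For identical skill, the model's predicted hiring probability
# differs by group: it has learned the historical bias.
probe = np.array([[0.0, 0], [0.0, 1]])  # same skill, different group
p0, p1 = model.predict_proba(probe)[:, 1]
print(f"P(hired | average skill, group 0) = {p0:.2f}")
print(f"P(hired | average skill, group 1) = {p1:.2f}")
```

Note that simply deleting the group column would not cure this: the bias lives in the labels, not the code, and other features often act as proxies for group membership.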
Then there is the question of privacy. From the increasing use of facial recognition software to predictive policing techniques, from algorithms that track us online to 'smart' systems at home, AI is encroaching on our innermost lives.
We do not need consent from LaMDA to 'experiment' on it. But we do need to insist on greater transparency from tech corporations and state institutions about the ways in which they exploit AI for surveillance and control.