AI Ethics: Why does it matter?
In the course of the week, a short story about the resignation of Geoffrey Hinton from Google got me thinking a lot about ethics in the technology space.
A story I came across about his resignation had this headline: Godfather of AI, Geoffrey Hinton quits Google and warns over dangers of misinformation. Okay. Let me now break it down a bit further, and explain the true impact of this story, and why it made that impression on me.
First off, Hinton is not new to the technology world at all. In fact, if you have been an ardent reader of this column, you may have come across a few of his innovations mentioned in past editions.
He is often touted as the godfather of Artificial Intelligence (AI) because of his pioneering work on neural networks. It is common knowledge that Hinton, together with two of his students at the University of Toronto, built a neural network in 2012 that laid the groundwork for current systems and applications such as ChatGPT.
So, what led to his exit from Google? Both The Guardian newspaper of the United Kingdom, and the New York Times of the United States of America, seem to agree that Hinton is leaving due to concerns over the flood of misinformation, and “the possibility for AI to upend the job market, and the ‘existential risk’ posed by the creation of a true digital intelligence”.
In an interview with the New York Times, Hinton said until last year he was confident that Google had been a “proper steward” of the technology he had pioneered, but his confidence dipped once Microsoft started incorporating a chatbot into its Bing search engine, and Google became concerned about the risk to its search business.
In another interview with the BBC, Hinton stressed some of the dangers of AI chatbots, calling them “quite scary”, and bemoaned how they could be exploited by “bad actors” in the space. In fact, AI, he said, could become more intelligent than humans, and that had the potential of distorting the future of work.
“It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that,” Hinton said in the referenced interview. “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have [as humans],” he added. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
In fact, over the years I have also become concerned about the ethics of AI and Machine Learning (ML). In the April 1 edition of this column, for example, I explained how the quantum leap in the application of AI models had caused confusion among even the big adopters of the technology. This was based on an open letter signed by major AI players, including Elon Musk. In the letter, the authors opined: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control… Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The fact is, Musk’s role in the AI space has been enormous. He co-founded OpenAI. Other signatories to the open letter were Steve Wozniak, co-founder of Apple, and Emad Mostaque, who founded London-based Stability AI.
AI and ML models have helped to decipher and break down complex processes for industry, but the belief is that, before some new models are adopted or training for future systems begins, an independent review should agree on a set of limits.
Today, for example, robots are deployed to perform some of the tasks that humans used to do. In Amazon warehouses, robots sort goods; and in the car industry broadly, they play significant roles in the manufacturing process.
A story I read about robotics researcher, Joe Davidson, which I mentioned in the June 4, 2022 edition of this column, is further proof of the good and bad sides of AI. Davidson and his colleagues, according to the report, had turned to the human hand for inspiration, observing professional fruit pickers and working to replicate their skilled movements with robotic fingers.
The good side is that these innovations could help transform agricultural production, but at what extra cost?
Just this week, I read about how McDonald’s, the fast food chain, was piloting the use of robots to run one of its high street restaurants in the USA.
If successful, this restaurant would be run entirely by robots! There are also claims that some workers were prepared to take a pay cut to stay in employment if they realised that AI-powered systems could do their work better.
Frankly, I can’t really call these technological developments negative in every way. All parts of human activity involve creativity and innovation; and for this reason, a few bright young people have managed to make a good living out of the intense growth in technology this century.
Therefore, clearly, what is needed is for AI development models to be underpinned by strong ethical values. I was, therefore, excited when the authors of the open letter stressed, in no uncertain terms, that their call “does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. Hinton seems to have taken that step back to be able to voice his concerns about the dangers of AI freely.
On the issue of misinformation, the influence of AI is huge. Where there are no strong moral and ethical guidelines, people will no longer be able to discern what is true, with AI-generated photos, videos and text flooding the internet. As for news, amid the gloomy picture often painted, there are positives too.
With AI, watch out for more sensational headlines, ranging from hard facts and truths to half-truths and hyperbolic clickbait!