
Deepfake risk


Events in the past few weeks have forced me to revisit the subject of Artificial Intelligence (AI). And when it comes to AI, I always have a lot to write about, because I have followed developments in this space for a while, and I have written extensively about them in this column too.

Now, these are the two main issues that have provoked me, once again, to tackle AI head-on.
First were the AI-generated images of the American pop star Taylor Swift that surfaced at the end of January. Spread across social media, the sexually explicit images portrayed the popular singer in a very damaging light.


According to sources, the images were viewed by millions of people before they were finally removed. That episode is about three weeks old.
But the most recent AI controversy is the deepfake episode that led to a finance worker parting with about US$25 million. In fact, this case is a classic, so let me repeat for you portions of the report on this episode, which also expose the damaging potential of AI technology.

Reports on the incident said the events unfolded in Hong Kong, but authorities did not reveal the identity of the individual or of the company involved.

This is how it was reported, in part: “A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police”.

The report added that “The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations”.
“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching is reported to have told the city’s public broadcaster, RTHK.

There is more to this, but let me stop here! The first example, the headline-grabbing AI-generated sexually explicit images of the pop star, is a classic illustration of why authorities across the world are growing increasingly concerned about the sophistication of AI technology and the various dubious uses to which it could be put.

In fact, in the June 17, 2023 edition of this column I wrote about a report that had suggested that AI profiling exposed us to unacceptable forms of discrimination and stigmatisation too.



In an interview with the BBC in June last year, Margrethe Vestager, the European Union’s competition commissioner, re-emphasised the ever-growing fear of the negative impact of AI on society, saying discrimination posed a bigger risk than human extinction.
“Probably [the risk of extinction] may exist, but I think the likelihood is quite small. I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are”, according to a transcribed version of the interview by the UK Guardian news portal.

“If it’s a bank using it to decide whether I can get a mortgage or not, or if it’s social services in your municipality, then you want to make sure that you’re not being discriminated [against] because of your gender or your colour or your postal code,” she added.

And in the referenced edition, that is the June 17, 2023 edition of this column, this is what I wrote about Vestager’s observations: “The statement by Vestager is profound for many reasons.
On my part, it brings back memories of past articles in this column in which I have stressed the ethics of AI, and the other negative effects that required some monitoring to address.

But before I bring back some of the salient issues that I have raised in past editions to support the views of Vestager, in large part, first, I would like to look at why Vestager introduced ‘human extinction’ to press home the immediate dangers of AI”.
Of course, I brought another twist to it, but all that I sought to do was to put in context how serious the issue of extinction was.



In fact, in 2022, a UK data watchdog commenced investigations into whether AI systems showed racial bias when dealing with job applications.

In effect, the underlying issue is that you may lose that dream job not because you are not good enough for it, but because the data sets used to profile applicants already had you on the rejected list even before you applied!
Responsible behaviour, no doubt, is necessary within the AI ecosystem.

In the May 6, 2023 edition, for example, I explained why responsible behaviour was necessary, using the resignation of Geoffrey Hinton from Google. That resignation got me thinking a lot about ethics in the technology space.
A story I came across about his resignation carried this headline: “‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation”. Let me break it down a bit further and explain why the story made that impression on me.

First off, Hinton is not new to the technology world at all. In fact, if you have been an ardent reader of this column, you may have come across a few of his innovations mentioned in past editions. He is often touted as the godfather of AI because of his pioneering work on neural networks.

It is common knowledge that Hinton, together with two of his students at the University of Toronto, built a neural network in 2012 that paved the way for current systems and applications such as ChatGPT.
So, what led to his exit from Google? Both The Guardian newspaper of the United Kingdom and The New York Times of the United States agreed that Hinton left due to concerns over the flood of misinformation, “the possibility for AI to upend the job market, and the ‘existential risk’ posed by the creation of a true digital intelligence”.
Familiar concern, isn’t it? The bottom line is this: AI is huge, but where there are no strong moral and ethical guidelines, it will bring more negative effects than positive ones.
botabil@gmail.com
