Conflict Sensitivity in the Age of AI

(Deepesh Thapa)

Like most media, AI is ultimately in the business of making money. Hundreds of billions of dollars have been poured into the AI industry over the last decade, and the obvious aim is to turn it into a trillion-dollar industry. Microsoft, for example, invested around 10 billion dollars in OpenAI, whose flagship product ChatGPT is widely expected to challenge the Google search engine and capture a significant share of its market. That is to say, these AI systems, just like most media, may think very little about conflict sensitivity.

The fundamental mechanism behind AI is pattern recognition: it learns patterns from data, draws conclusions, and then acts on those conclusions as if they were true information or knowledge. Sometimes it is right, and sometimes it is wrong. Our main aim is to protect society from the wrong information that AI generates or infers. We humans have a limited capacity for learning, memorizing, and concluding, but AI has a practically "infinite" capacity fed by practically infinite data, which can be helpful and dangerous alike. Because it weighs even the smallest details, it can recognize patterns where there are none.
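To make that last point concrete, here is a minimal sketch (hypothetical code, not taken from any real AI system) of how pattern-hunting over enough data will always turn up "patterns" that are pure coincidence:

```python
import numpy as np

# Purely random data: 1,000 "users" described by 200 meaningless features.
rng = np.random.default_rng(42)
data = rng.normal(size=(1000, 200))
labels = rng.integers(0, 2, size=1000)  # random 0/1 "group" labels

# Naive pattern search: correlate every feature with the random labels
# and keep the ones that look "strongest".
correlations = np.array(
    [np.corrcoef(data[:, i], labels)[0, 1] for i in range(200)]
)
suspicious = np.argsort(np.abs(correlations))[-5:]

for i in suspicious:
    print(f"feature {i}: correlation {correlations[i]:+.3f}")
# With enough features, some correlations always look meaningful,
# even though every value here is pure noise.
```

A system that treats such coincidences as facts about people or groups is exactly the kind of AI this article warns about.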

For example, it can find plenty of negative patterns about particular races, religions, ethnicities, communities, or nations without any scientific evidence. And if it takes those patterns for granted and spreads them to billions of people worldwide, the resulting hurt and humiliation can last for decades. This actually happened a few years ago when Microsoft launched its chatbot Tay on Twitter: it began posting racist remarks and was pulled shortly afterward.
Even the CIA has used facial recognition software to identify criminals, and it flagged certain people from minority races as criminals when they were not. In other words, the software was, in effect, racist.

These days, most people rely on social media for news and information. But AI-driven feeds serve the content and opinions you already like rather than accurate news and information. That is to say, if people are racist, they may receive more and more racist news and content rather than truth and fact. According to many tech experts, a similar dynamic was a major cause of Donald Trump's presidential victory.

Another important point is that we humans are emotional beings rather than rational ones (rational only rarely, if at all), whereas AIs are rational machines. This matters because it is precisely what keeps AI from being sensitive to other people's feelings. Humans think before expressing an opinion because it may hurt someone; AI cannot really do that. Although AI can be programmed not to hurt feelings under certain conditions or from certain perspectives, feelings have no fixed boundary: AI cannot learn everything that does or does not hurt a particular individual, group, or race.

AI can significantly amplify your prejudices and biases. It learns from your behavior, concludes what you like, and acts on that conclusion, for example by serving you similar news, information, and videos so that you stay on the platform and the company behind the AI can earn money from the advertisements it shows you. The AI "knows" very well that if you do not get content you like, you will not spend much time on the platform. So, simply to feed your prejudice and your bias, it serves you more and more misleading content.
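A minimal sketch of that feedback loop, assuming a toy catalog and a purely engagement-driven ranker (none of this is any real platform's algorithm), looks like this:

```python
from collections import Counter

# A toy catalog of posts, each tagged with a single topic.
catalog = [
    {"id": 1, "topic": "conspiracy"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "conspiracy"},
    {"id": 4, "topic": "science"},
    {"id": 5, "topic": "conspiracy"},
    {"id": 6, "topic": "science"},
]

def recommend(click_history, k=3):
    """Rank posts purely by how often the user clicked the same topic."""
    topic_counts = Counter(post["topic"] for post in click_history)
    ranked = sorted(catalog, key=lambda p: topic_counts[p["topic"]], reverse=True)
    return ranked[:k]

# One early click on a conspiracy post is enough to tilt every later feed.
history = [catalog[0]]
for _ in range(3):
    feed = recommend(history)
    history.extend(feed)  # assume the user clicks whatever is shown
    print([post["topic"] for post in feed])
# The feed converges on "conspiracy" because engagement, not accuracy,
# is the only signal the ranker optimizes.
```

Real recommenders are far more sophisticated, but as long as engagement is the objective being maximized, the narrowing effect is much the same.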

There are dangerous instances where innocent people have been branded guilty because of these features of AI. For example, during the Nirmala rape case, two Bam sisters were treated as guilty for months. A rumor, fake news, claimed that the Bam sisters supplied girls to high-profile police officers, government officials, and political leaders, and it was widely believed that they had handed Nirmala over to one of those police officers and then killed her. Months later the story was proven to be fake. Hardly any media outlet, civil society member, or intellectual thought sensitively about the Bam sisters' feelings; they simply took it for granted that the sisters were behind Nirmala's rape and murder.

What happened during that period is that people simply believed what they wanted to believe, and similar posts, videos, and news kept flowing into their social media feeds. So they reacted without any evidence of the Bam sisters' involvement.
This particular feature of AI has arguably fuelled many disastrous events in recent years: riots in India, the massacre of the Rohingya in Myanmar, anti-Asian hatred in Western countries, and so on.
In financial markets, AI can cause disasters of its own. It can see patterns where there are none, and if it acts on that conclusion, buying or selling a tremendous volume of stocks within a few seconds, the whole stock market can tremble for weeks. Such algorithm-driven "flash crashes" have occurred mainly on Wall Street.
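As a rough illustration (a hypothetical sketch, not a model of any actual trading system), the snippet below applies a naive momentum rule to a pure random walk and still "finds" plenty of trends to trade on:

```python
import numpy as np

# Simulated prices: a pure random walk with no real trend at all.
rng = np.random.default_rng(7)
prices = 100 + np.cumsum(rng.normal(0, 1, size=500))

# A naive "momentum" rule: if the short-term average is above the
# long-term average, buy heavily; if below, sell heavily.
short = np.convolve(prices, np.ones(5) / 5, mode="valid")
long_ = np.convolve(prices, np.ones(50) / 50, mode="valid")
signal = np.sign(short[-len(long_):] - long_)

flips = int(np.sum(np.abs(np.diff(signal)) > 0))
print(f"{flips} large buy/sell flips triggered by pure noise")
# Every flip is the algorithm "seeing" a trend in random data; many
# such algorithms reacting to one another is how markets get shaken.
```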

Now the question arises: how do we avoid falling into these pitfalls? How can we protect ourselves from AI's false information, fake news, insensitive opinions, and unscientific conclusions? How do we stay conflict-sensitive in the age of AI?

  • Since we are at an embryonic stage in the development of AI, there is no foolproof way to shield ourselves from its negative impacts. But there are plenty of practices that can increase conflict sensitivity in the media and among social media users.
  • Reporters, editors, and social media users must not rely entirely on news spreading on social media and the internet. They should seek out the other side of the story and look for reliable sources and evidence rather than whatever is circulating widely online.
  • It is wise to unfollow or unsubscribe from media that spread fake news and unscientific opinions or that use insensitive language, and to report, or even block, those that go to extremes in their hatred toward other human beings.
  • It is wise to be critically rational and always look for evidence before accepting any news or information, and, where possible, to verify whether it is actually true.
  • Also subscribe to pages or media that present the other side of a story, or multiple dimensions of it. Always look for conflicting information that might prove you wrong; that is what makes you genuinely critical.
  • From time to time, search, react, like, and comment across a range of content, especially on reliable pages and media, so that a fair amount of trustworthy news and information reaches your timeline. Do not react too quickly to recent events before you are convinced they are factual.
  • Do not get too deeply engaged in social media, since most of the activity there may be noise rather than signal. You are better placed to pick the signal out of the noise if you give social media less of your time.
  • Avoid using AI to write reports, articles, or opinion pieces about people and society, since it may generate very insensitive opinions or remarks. If you do use it, review the output several times before publishing.
  • Finally, try to make good use of AI while keeping in mind the harm it can cause when it is not used properly.

 

Nepali version is published here: https://techpankti.com/medias-ai-age-sensitivities/