The Impact of AI on Our Choices and World Perception

December 15, 2023

ZDNET recently published an article in which Professor Robert Crossler addressed issues surrounding elections in a digital society. It is also worth reading another piece written by Professor Crossler himself, which measures the effect of political alignment, platforms, and fake news consumption on voter concern for election processes. While both texts focus mainly on the influence of artificial intelligence on electoral processes, it is easy to imagine AI affecting other areas of our lives as well, including cybersecurity. AI can, so to speak, support fraudsters by creating environments, opinions, and narratives that lend credibility to the propositions presented by cybercriminals.

Artificial intelligence makes it easy to create content that “feeds” search engines, and AI models themselves, with false data, with the aim of making people believe something that is not true. Yet people will be strongly convinced that it is true, because they will see highly realistic material confirming the viewpoint pushed by the criminals. Professor Crossler states this explicitly:

“Generative artificial intelligence has the potential to target communication at specific people based on easily acquired knowledge placed in the public sphere. From a cybersecurity perspective, this is a technique already observed in social engineering attacks.”

It is worth dwelling on the issue the professor raises about knowledge placed in the public sphere. We often see situations in which people hear a piece of information, type a related keyword into a search engine, click the first link, read what is written there, and treat it as certain (“after all, it’s on the internet!”).

With a little effort, it is not at all difficult to position content highly in search engines, regardless of whether that content is true. This is exactly what attackers do: they create false content, websites, and information that are later used in the social engineering phase of an attack. Services such as Google and Facebook, as well as companies such as Intel through Intel Labs, are implementing mechanisms to protect against AI-generated content being placed in advertising or informational campaigns. At this stage these mechanisms do not work perfectly, but efforts in this direction are continually evolving.
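As a purely illustrative sketch of what one small piece of such a safeguard might involve, the snippet below runs submitted text through an off-the-shelf machine-generated-text classifier before accepting it into a campaign. The model name ("openai-community/roberta-base-openai-detector"), the "Fake"/"Real" labels, and the threshold are assumptions made for this example; they do not describe the actual systems used by Google, Facebook, or Intel Labs.

# Hypothetical sketch: screen submitted copy with an AI-text classifier
# before it enters an advertising or information campaign. The model,
# labels, and threshold are illustrative assumptions, not any platform's
# real pipeline.
from transformers import pipeline

# A public GPT-2 output detector, used here only as an example of a
# "machine-generated vs. human-written" text classifier.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

def screen_submission(text: str, threshold: float = 0.9) -> bool:
    """Return True if the text looks human-written enough to accept automatically."""
    result = detector(text)[0]  # e.g. {'label': 'Fake', 'score': 0.97}
    looks_generated = result["label"].lower() == "fake" and result["score"] >= threshold
    return not looks_generated

if __name__ == "__main__":
    sample = "Officials have quietly confirmed that the vote will be postponed by a month."
    print("accept" if screen_submission(sample) else "flag for manual review")

In practice, platforms combine many signals of this kind (account reputation, provenance metadata, and deepfake detectors for audio and video such as the Intel Labs work listed in the sources below) rather than relying on a single classifier.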

So, what can we do? How do we live with this? When asked by the ZDNET editorial team how to distinguish genuine political statements from potentially AI-generated and manipulated content, Professor Crossler responded:

“I started doing two things because of the recent elections and the misinformation that circulated during that election cycle. The first thing I do is triangulate the information I receive. By this, I mean drawing on information from multiple sources that may have different biases in their reporting. The better I can triangulate information, the more confident I am in its truthfulness.

The second thing I like to do, which makes the first possible, is intentionally not forming an opinion when I first learn about a new issue. This is particularly important when the information seems to differ from what I already knew or is potentially politically significant. Waiting to form that opinion gives me enough time to gather additional information that may or may not confirm the initial report.”

Sources:

How AI will fool voters in 2024 if we don’t do something now

Real-time deepfake detection: How Intel Labs uses AI to fight misinformation

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

The 3 biggest risks from generative AI – and how to deal with them

Algorithms soon will run your life – and ruin it, if trained incorrectly

Generative AI is a developer’s delight. Now, let’s find some other use cases

Measuring the effect of political alignment, platforms, and fake news consumption on voter concern for election processes

Published by: CERT EXATEL
