ArcaMax

Algorithms that customize marketing to your phone could also influence your views on warfare

Justin Pelletier, Rochester Institute of Technology, The Conversation

Published in Political News

When a coupon suddenly appears on your phone as you approach a store, you might find it convenient and even helpful. But the same AI systems that know where you are and try to influence your purchases can be used to infer what you fear, what you trust and which stories you are likely to believe. AI-fueled marketing algorithms are becoming increasingly good at influencing human behavior.

That raises concern about what various governments might do with these tools to influence citizens’ views about warfare. A clear-eyed look at how administrations are exploiting these systems may help people and their nations navigate an uncertain future.

I am a security researcher who studies ways to explore and characterize the risk technology poses to individuals and society. The rise of AI-mediated influence has raised questions about the erosion of people’s capacity to exercise free will and, by extension, society’s ability to distinguish a just war from an unjust war.

The integration of AI with location-based services is pushing the marketing frontier. Location-based services use geographic data from indoor sensors, cellphone towers and satellites to promote goods and services that are tailored to your location, a capability called geofencing.

When marketing firms couple location data with massive amounts of information about individuals’ behaviors – including information that people voluntarily or unknowingly share through mobile device applications – the firms can group, or segment, potential customers based on what they like, what they do and what they say.

Once an AI-powered marketing system knows where a user is and can make an informed guess about that person’s likes and dislikes, it can design targeted coupons and advertisements to influence the behavior of each person in a group, and possibly the group as a whole. This combination of AI with geofencing and segmentation makes hyperpersonalized marketing content possible at an unprecedented scale.
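In outline, the mechanism is simple. Here is a minimal sketch of geofenced, segmented targeting: the store location, trigger radius, segment labels and offer text are all hypothetical, chosen only to illustrate the idea of firing a tailored message when a known user crosses a virtual boundary.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_offer(user, store, radius_m=200):
    # Geofencing: act only when the user is inside the store's virtual perimeter.
    if haversine_m(user["lat"], user["lon"], store["lat"], store["lon"]) > radius_m:
        return None
    # Segmentation: tailor the message to the user's inferred profile.
    offers = {
        "bargain_hunter": "20% off today only",
        "brand_loyal": "Members get early access",
    }
    return offers.get(user["segment"], "Welcome in!")
```

Real systems add many more signals – dwell time, purchase history, social graph – but the core loop is the same: locate, classify, tailor, deliver. The same loop works just as well when the payload is a narrative instead of a coupon.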

What might this advance have to do with warfare? The use of psychology to win battles or obviate the need for war is as old as armed conflict itself. Sun Tzu, the Chinese military general and philosopher who died in 496 B.C., wrote: “Therefore the skillful leader subdues the enemy’s troops without any fighting; he captures their cities without laying siege to them; he overthrows their kingdom without lengthy operations in the field.”

From Sun Tzu’s era until today, skilled practitioners of military strategy have sought to reduce the risk in fighting through reflexive control: getting opponents to willingly perform actions that are best for the strategist’s empire or nation.

Today’s strategists increasingly rely on paid social media advertisements, influencers, AI-generated content and even fake social media accounts to sway popular opinion toward their goals. This power, and controversy surrounding it, has been implicated in recent national elections, domestic unrest and negotiations to end the conflict in Ukraine.

Unlike propaganda during the Cold War between the U.S. and the Soviet Union, modern influencers don’t rely on a single message broadcast to the masses. Strategists test and deploy thousands of narrative variations simultaneously, monitor how different groups respond and refine their approach in near-real time. The purveyors don’t need to convince everyone. They just need to nudge enough people at the right moment to change election outcomes, pressure domestic policies or even trigger ethnic violence.
As online influence becomes more automated and personalized, it is harder to determine where persuasion ends and coercion begins. If groups of people, or even a nation’s citizenry, can be guided toward certain beliefs or behaviors without overt force, democratic societies face a new problem: how to distinguish traditional attempts at influence from manipulation – especially during conflict.

Recent studies show that Americans trust local news sources more than national ones, although trust in both local and national news media has declined across all age groups in the U.S. Ironically, this trust deficit is being exploited by unscrupulous media in various ways, such as AI-generated, pink-slime news – online news stories that only appear to be from authentic local news outlets. The stories are often technically accurate but presented with veiled political bias.

AI-driven propaganda directly challenges how people typically evaluate claims that their nation has been wronged – that it is the “good guy” standing up for what is right. Just war theory assumes that citizens can reasonably consent to war. Legitimate political authority requires an informed public that can decide violence is both necessary and proportional to the offense. However, when influence operations sway people’s views without them being aware of it, these systems threaten to undermine the moral preconditions that make war just.

The question citizens have to answer is how they will allow their information environments to evolve. Do they assume that deception is ubiquitous and therefore governments must control information and even preempt the truth by weaponizing AI-driven narratives? Or should the public accept the risk of AI-generated influence as a regrettable but necessary part of openness, pluralism and the belief that truth emerges through transparent debate and not under tight controls?

The same systems that decide which coupon reaches your phone are starting to shape which narratives reach you, your community and a nation’s entire population during a crisis. Recognizing this connection is the first step toward deciding how much influence people are willing to accept from such algorithms and the propagandists who control them.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Justin Pelletier, Rochester Institute of Technology

Read more:
Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns

‘Inoculation’ helps people spot political deepfakes, study finds

Could ChatGPT convince you to buy something? Threat of manipulation looms as AI companies gear up to sell ads

Justin Pelletier is affiliated with the United States Army Reserve. The views expressed are those of the author and do not reflect the official policy or position of the U.S. Army, Department of War, or the U.S. Government.


 
