Social Media, Artificial Intelligence and National Security
The Twelfth Workshop on the Social Implications of National Security (SINS19)
Microtargeting on social media platforms now makes it possible to reach billions of people with specially tailored messages. Microtargeted campaigns capitalize on an individual’s demographic, psychographic, geographic, and behavioral data to predict his or her buying behavior, interests, and opinions. Because it draws inferences from an amalgamation of evidence and applies a strategy consistently over time, microtargeting does not merely reflect or predict an individual’s beliefs. It can also alter them.
According to some studies, up to 15% of the US population can be swayed through this kind of strategy, much of it executed through well-known platforms like Facebook, Google, Twitter, Instagram, and Snapchat. The advent of artificial intelligence (AI) has meant that microtargeting can now be unleashed with the added stealth of machine learning and autonomous systems activity. This ability to penetrate social media networks could be considered a cybersecurity threat at the organizational or national level. As the technology moves beyond algorithmic scripts that cycle through the distribution of fake or real news, social media users will find themselves coming face to face with personalized, bot-generated messaging. How that might sway both consumer and citizen confidence in these systems remains to be determined.
This workshop will first highlight cases in which social media and AI have been used to manipulate people, and describe various influence campaigns conducted through broadcast or microtargeting strategies. Workshop participants will then consider how governments and organizations are responding to the misuse of online platforms, and evaluate various ways in which AI-based social media might be regulated internationally. The responsibility of social media platform providers will also be brought into the discussion, given that their algorithms can detect bot-generated and bot-dispersed information. Finally, strategies for preventing and countering disinformation campaigns will be considered in cases and contexts where such messaging becomes a destabilizing force in communications. Emerging areas of research, such as neuromorphic computing, will be discussed in the context of cyberwarfare and espionage.
8.40 a.m. Registration and Coffee
9.00 a.m. Keynote
10.00 a.m. Networking Break
10.15 a.m. Ethan Burger, Institute for World Politics, Russian Cyber [Hybrid] Operations: A Look at the 2016 Brexit Referendum and the U.S. Presidential Election
11.00 a.m. Proceedings of the IEEE Live Webinar Series. Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems
12.00 p.m. Lunch
1.00 p.m. Katina Michael, School for the Future of Innovation in Society, Arizona State University, Bots without borders: how anonymous accounts hijack political debate
1.30 p.m. Eusebio Scornavacca, University of Baltimore, Artificial Intelligence in Social Media
2.00 p.m. Gary Marchant, Sandra Day O’Connor College of Law, Arizona State University
2.30 p.m. Gary Retherford, How to ensure Democracy through Technology, Implant and Blockchain Proponent [Draft Title]
3.00 p.m. Hot Topics (Group Discussion – led by Patrick Scannell)
4.00 p.m. Areas of Future Research (All)
4.30 p.m. Close.