AI & Ethics: A Discussion
“Artificial Intelligence & AI & Machine Learning” by mikemacmarketing is licensed under CC BY 2.0
Artificial intelligence (AI) refers to the prospect that machines will one day imitate human behavior to complete complex tasks without human assistance. Many modern devices and appliances already strive to operate this way. AI is not limited to robots or self-driving cars; it also includes software such as Siri on the iPhone, smart home systems, social media platforms, and even many children’s toys. As AI integration grows more commonplace, scientists and modern philosophers worry about how it might affect consumers.
MIT Professor Sherry Turkle studies the interactions and relationships between humans and their devices. In her book Reclaiming Conversation: The Power of Talk in a Digital Age (2015), she examines the comfort people find in simple relationships with their devices and the effects of that comfort. Turkle argues that it leads to simpler conversations that are almost transactional in nature, lusterless exchanges that fail to foster empathy. Turkle sees the value of AI but believes that excessive exposure and attachment to one’s devices are detrimental to human socialization.
Discussion Questions
- What is the difference between AI software and hardware? How do they operate?
- What parts of social media platforms use an AI component and what are the dangers of that?
- Where else do we see AI technologies and software in our everyday lives?
- Are some AI technologies safer than others? Which, and why?
- What sorts of protections should be put into place to protect consumers from potential negative aspects of AI systems?
- When algorithms are developed for AI systems, how can fairness and good ethics be guaranteed, especially when private corporations are largely shielded from public scrutiny?
AI and Privacy
In a social-digital age, personal information is more accessible than ever. Helen Nissenbaum, well known for her studies of privacy and her concept of “contextual integrity,” aims to create a system that appropriately governs the use of personal data. With her collaborators, Nissenbaum has created a series of web plugins, including TrackMeNot, Adnostic, and AdNauseam. These are “obfuscating” plugins that interfere with various data collection and advertising services.
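The basic idea behind obfuscation can be sketched in a few lines of code. The Python sketch below is purely illustrative; the function names and decoy list are invented here, and the real plugins are considerably more sophisticated, but it captures the core strategy TrackMeNot uses: hiding a genuine search query inside a stream of randomized decoys so that a profiler watching the traffic cannot tell which queries reflect real interests.

```python
import random
import time

# Hypothetical decoy vocabulary. TrackMeNot actually draws its noise
# queries from evolving seed lists and RSS feeds.
DECOY_TERMS = [
    "weather forecast", "banana bread recipe", "local news",
    "used bicycles", "movie showtimes", "hiking trails",
]

def send_query(query: str) -> None:
    # Stand-in for issuing a real search request to a search engine.
    print(f"search -> {query!r}")

def obfuscated_search(real_query: str, decoys: int = 3) -> None:
    """Mix a user's genuine query into a batch of random decoys,
    so a profiler cannot tell which query is real."""
    batch = random.sample(DECOY_TERMS, k=decoys) + [real_query]
    random.shuffle(batch)
    for q in batch:
        send_query(q)
        time.sleep(random.uniform(0.1, 0.5))  # jitter to avoid timing patterns

obfuscated_search("symptoms of flu")
```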
Question: Why might people worry about private companies having access to their personal data and information? What should private companies be able to do with this private information? What sorts of laws should be proposed, specifically in terms of privacy?
Bias in AI
AI systems can demonstrate bias. Some of this bias is not programmed into the code intentionally but results from user interaction. Helen Nissenbaum uses Google’s behavioral advertising system as an example. If one searches two different names, one traditionally Caucasian and one traditionally African-American, the traditionally African-American name yields more advertisements for background checks. Because background-check advertisements are clicked more often when users search traditionally African-American names, Google’s system places more of those ads on searches for African-American names. Thus, racial bias is introduced into the AI system by its users.
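Importantly, no racial category is written into such a system; the skew emerges from a feedback loop between user clicks and the ad selector. The toy simulation below is entirely hypothetical (Google’s actual system is proprietary and far more complex), but it shows how a greedy selector that favors the ad with the higher observed click-through rate can, given even a small difference in users’ clicking behavior between two groups of names, end up showing background-check ads far more often for one group.

```python
import random

# Toy simulation of a click-driven ad selector. All names and numbers
# are hypothetical; note that the selection logic never sees a group label.
ADS = ("bg_check", "neutral")
clicks = {g: {ad: 1 for ad in ADS} for g in ("group_a", "group_b")}
shows = {g: {ad: 1 for ad in ADS} for g in ("group_a", "group_b")}

def pick_ad(group):
    # Greedily show whichever ad has the higher observed click-through rate.
    return max(ADS, key=lambda ad: clicks[group][ad] / shows[group][ad])

def simulate(group, extra_click_rate, rounds=10_000):
    # 'extra_click_rate' models users clicking background-check ads
    # slightly more often when searching this group's names.
    for _ in range(rounds):
        ad = pick_ad(group)
        shows[group][ad] += 1
        p = 0.10 + (extra_click_rate if ad == "bg_check" else 0.0)
        if random.random() < p:
            clicks[group][ad] += 1

simulate("group_a", extra_click_rate=0.00)  # no difference in user behavior
simulate("group_b", extra_click_rate=0.02)  # a 2-point difference in clicks

for g in ("group_a", "group_b"):
    total = sum(shows[g].values())
    print(g, {ad: round(shows[g][ad] / total, 2) for ad in ADS})
```

Running this typically shows group_b’s searches dominated by background-check ads, even though the selector was never told anyone’s race: the bias came entirely from the click data it learned from.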
Question: What sort of problems could the public face with human bias in AI programming? What kinds of safeguards should be put in place to ensure bias-free AI programming?
Predictive Policing
It only recently came to light that the New Orleans Police Department began using a predictive policing program developed by Palantir Technologies in 2012. Palantir had access to personal information including social media data, phone numbers, addresses, licenses, court filings, and more. The software used these records to predict which people were likely to become aggressors or victims. Palantir did this without the consent or knowledge of the City Council.
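Palantir has not published its models, so the sketch below is a deliberately simplistic illustration of record-based risk scoring in general; every field, weight, and value is invented here for illustration.

```python
from dataclasses import dataclass

# Hypothetical record fields and weights -- NOT Palantir's actual model.
@dataclass
class PersonRecord:
    prior_arrests: int
    known_associates_flagged: int
    court_filings: int

WEIGHTS = {
    "prior_arrests": 2.0,
    "known_associates_flagged": 1.5,
    "court_filings": 1.0,
}

def risk_score(rec: PersonRecord) -> float:
    """Weighted sum over record fields. Every choice of field and
    weight embeds a judgment about who looks 'risky'."""
    return (WEIGHTS["prior_arrests"] * rec.prior_arrests
            + WEIGHTS["known_associates_flagged"] * rec.known_associates_flagged
            + WEIGHTS["court_filings"] * rec.court_filings)

person = PersonRecord(prior_arrests=1, known_associates_flagged=3, court_filings=0)
print(risk_score(person))  # 6.5
```

Even in this toy version, the ethical stakes are visible: each field and weight is a policy judgment about who counts as a potential aggressor or victim, made in code rather than through public deliberation.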
Question: Does this action by a private company seem like a violation of the law? What are the possible implications of government organizations using AI technology to police their citizens?
This one-sheet was created for the SOPHIA of Worcester County chapter by students in the Communication Law and Ethics course at Fitchburg State University and edited by Dr. J.J. Sylvia IV and Dr. Kyle Moody. It was hosted by Strong Style Coffee and its creation was supported by SOPHIA and the Douglas and Isabelle Crocker Center for Civic Engagement. Students included Miguel Aguiar, Colin Ahearn, Andrew Allen, Ben Bursell, Olivia Grant, Rebecca Landry, Kevin Newey, Martha Melendez, Shane Muir, Edgar Mutebi, Scott Ryan, Ben Sharple.