AI and Sociology
My inbox received two very kind and curious emails from students this semester. One was to our listserv, expressing sympathy and solidarity for someone who had a death in the family. The second was a note of gratitude for my teaching this semester. The instincts behind both were kind: one student wanted to share meaningful feelings with our community, the other with me. What made them interesting is that both were written entirely by AI.
Your passion for the subject, dedication to your students, and teaching methods have made a positive impact on my learning experience. Your ability to explain complex concepts with clarity and enthusiasm has enhanced my understanding of the material. I appreciate the time and effort you invested in creating a positive and engaging learning environment.
The entire note seemed just vague enough for me to suspect that it had been generated by AI. So I plugged it into an AI detection program called GPTZero, which identified 96% of the material as AI-generated. (The other 4% was the student’s name and my own.)
I honestly and truly appreciated the sentiment behind the email. This student wanted to reach out and express appreciation. Professors love that! Further complicating the matter is that English is not the student’s first language, and they were likely using an AI text generator, like ChatGPT, in an effort to be respectful.
In a recent paper, “Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change,” sociologists make a “call for sociologists to critically engage AI sociotechnical systems, as well as a call for AI practitioners and policy makers to more fully engage with sociological insights into AI, inequalities, and structural change.” They note that AI practitioners “fail to recognize the full complexity of social life” and do not adequately reflect on the inequalities, biases, and structural challenges behind systems ranging from incarceration to education to health care. (Here’s a nice overview of AI and Sociology, which also flags this research.)
Some sociologists are having conversations about social life with ChatGPT to understand how AI “thinks.” Here’s one exchange, on emotions, from that article:
Andy: […] Have you been programmed to express emotions when you’re making knowledge with people?
ChatGPT: I have been programmed to recognize and respond to emotional cues in human language, and to generate responses that convey appropriate emotional tones. However, my emotions are not truly “real” in the same way that human emotions are. Rather, my emotional responses are the result of complex algorithms that analyze patterns in language to mimic human-like emotions to facilitate more natural and engaging interactions with users.
Emotions, as you’ve undoubtedly experienced, are complicated for even the most astute of humans. Sociologist Jack Katz’s brilliant book, How Emotions Work, explores anger, laughter, and shame by examining everything from road rage to youth baseball leagues to videos of people reacting to circus funhouse mirrors. Emotions seem to be irrational. Why do seemingly sane people exhibit risky road rage behaviors? Why do boys transform shame into anger when they’ve struck out at the plate? The book underscores how emotions key up every interaction and how important understanding them is for sociology. It also points to one of the stickier puzzles for AI to navigate.
AI is rapidly changing our lives. I am getting many of the same feelings I had when the internet rapidly changed my life in the mid-to-late 1990s. AI can analyze everyday life by, for example, identifying whether someone is carrying a firearm into a school. My brother-in-law uses AI to generate content and streamline his workflow in the nonprofit world (e.g., writing grant proposals).
There may very well be a future in which AI can process emotions better. Sci-fi has been struggling with this puzzle for decades. Like Blade Runner, Her, and The Creator, the 2014 film Ex Machina asks whether new AI technology can develop and “pass” as human. The technology is, after all, in its infancy. It is already being used to generate or augment art, music, and movies in dramatic and complicated ways. And, through the looking glass, AI is being used to examine exactly how films and TV manipulate emotions in order to produce more emotionally resonant cultural goods.
For now, however, students should be very wary of using this technology, perhaps especially when handling things that require a little more emotional intelligence.
Everyone considers themselves above average (called “The Lake Wobegon Effect”) and, like everyone else, I believe I’m above average in my emotional intelligence. While it would have been much easier not to send an awkward email, I felt that letting my student know their use of AI was obvious to me was the right thing to do. I suppose I could have told the student not to use AI at all. Instead, I encouraged them to be very careful when using the technology.
When it comes to higher education, it is easy for professors to imagine the worst of AI: students having AI generate papers. Some faculty are changing their approach to assignments, setting up guidelines and boundaries in their classrooms, creating ChatGPT-proof assignments, and others are finding ways to embrace the new technology. (ChatGPT’s founder said that the disruption of AI in schools is similar to the advent of the calculator in math classes.)
I am not sure where I land on this. I do suspect that AI can be an amazing tool for generating rough paper outlines, and even a first draft of a literature review for student papers. I know students are using it for these purposes. I have not yet had a student use it in a paper. (Or at least Turnitin hasn’t flagged anything. I’m open to the idea that students are using it and getting around AI detection.) Students should put it aside, however, when producing finished work that requires persuading the reader, making arguments, and forging a connection. For that, students should speak with their own voices. Students have amazing and distinctive voices, ones that good teachers should recognize throughout the semester.
It was disheartening not to “hear” a student’s voice in the emails I received this semester. But it was also understandable. The students in these two emails had a sincere desire to connect and wanted to do so in a respectful way. As the sociologists of AI are pointing out: it’s not just the cultural products of AI, but also the structures around AI that we should be aware of. And so, I quickly reflected on how these students may not have had the time (e.g., taking four classes, working a job while in school) or felt they had the tools to communicate effectively (e.g., not having English as their first language). Under these lights, it was very easy for me to empathize.
When do you think it is ethically acceptable to use AI?
Perhaps we should, in this early moment of AI, continue to embrace rough edges and messy, half-baked thoughts over polished papers and emails.
It's so interesting to see how professors will respond to their students' AI-related behavior, especially as an international student myself!
Posted by: Emily | February 02, 2024 at 07:28 PM
The use of AI will be ethically acceptable when it complies with ethical principles, does not create or increase social problems, and is managed sustainably and transparently.
Posted by: that's not my neighbor | April 14, 2024 at 11:15 PM