(ITD6) Sociological Perspectives on Artificial Intelligence

Friday Jun 21 9:00 am to 10:30 am (Eastern Daylight Time)
Trottier Building - ENGTR 2120

Session Code: ITD6
Session Format: Paper Presentations
Session Language: English
Research Cluster Affiliation: Internet, Technology, and Digital Sociology
Session Categories: In-person Session

Artificial intelligence (AI) can be understood as “systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take to achieve the given goal” (European Commission’s High-level Expert Group on Artificial Intelligence, 2018). Undoubtedly, the rapid development of increasingly sophisticated AI systems has great potential to transform many aspects of human life. While AI has many benefits (e.g., operational efficiency through task automation, informed decision-making based on data analysis, assistance in medical diagnostics and treatment management), there are also drawbacks (e.g., job displacement, ethical concerns about bias and invasion of privacy, security risks such as hacking, a lack of human-like empathy and creativity) (Duggal, 2023). In light of growing public concern about the role of artificial intelligence in daily life (Tyson & Kikuchi, 2023), this session invited papers that explore attitudes toward artificial intelligence based on empirical research.

Tags: Knowledge, Technology

Organizer: Henry Chow, University of Regina; Chair: Henry Chow, University of Regina

Presentations

Mike Zajko, University of British Columbia

Autocompleting Inequality: Large language models and the "alignment problem"

The latest wave of AI hype has been driven by ‘generative AI’ systems exemplified by ChatGPT, which was created by OpenAI’s ‘fine-tuning’ of a large language model (LLM). The process of fine-tuning uses human labour to provide feedback on generative outputs in order to bring these into greater ‘alignment’ with particular values. While these values typically include truthfulness, helpfulness, and non-offensiveness, this research focuses on those that address inequalities between groups, particularly based on gender and race, under the broader heading of ‘AI safety’. While previous sociological analysis has documented the algorithmic reproduction of inequality through various systems, what is notable about the current generation of generative AI is the concerted effort to build ‘guard rails’ that counteract these tendencies. When asked to comment on marginalized groups, these guard rails direct generative AI systems to affirm fundamental human equalities and push back against derogatory language. As a result, services such as ChatGPT have been criticized for promoting ‘woke’ or liberal values, and their guard rails become sites of struggle as users attempt to ‘jailbreak’ these systems. This article analyzes the fine-tuning of generative AI as a process of social ordering, beginning with the encoding of cultural dispositions into LLMs, their containment and redirection into vectors of ‘safety’, and the subsequent challenge of these guard rails by users. Fine-tuning becomes a means by which some social hierarchies are reproduced, reshaped, and flattened. I analyze documentation provided by leading generative AI developers, including the instructional materials used to coordinate their workforces toward certain goals, datasets recording the interactions between these workers and the chatbots they are responsible for fine-tuning (through ‘reinforcement learning from human feedback’, or RLHF), and documentation accompanying the release of new generative AI systems, which describes the ‘mitigations’ taken to counteract inequality. I show how fine-tuning makes use of human judgement to reshape the algorithmic reproduction of inequality, while also arguing that the most important values driving AI alignment are commercial imperatives and alignment with political economy. To explain how inequalities continue to persist in generative AI despite its fine-tuning, this research builds on a Bourdieusian perspective that has been valuable in connecting the cultural reproduction of social order with machine learning. To explain how generative AI has been ‘tuned’ to avoid reproducing particular inequalities (namely sexism and racism), we can study the work involved in fine-tuning through methods adapted from institutional ethnography. This helps us understand how the human labour required to make AI ‘safe’ is textually mediated and coordinated toward certain goals across time and space. However, understanding what the goals or ‘values’ of fine-tuning are requires grounding our analysis in political economy. This is because generative AI has been an expensive investment in what is intended as a profit-making enterprise. Commercial exploitation is a primary consideration, and the cultural reproduction of other forms of oppression can actually be a threat to business interests. Therefore, my argument is that AI’s alignment problem has less to do with lofty human values, and more to do with aligning these systems with political economy and whatever is conducive to commercialization.
To the extent that these systems are being aligned towards equality, this remains a particular (liberal) form of equality oriented towards equal treatment or neutrality, particularly along lines of gender and race, rather than more radical or transformative alternatives.
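
To make the RLHF mechanism discussed above concrete, the brief Python sketch below shows the kind of pairwise preference record that human workers produce during fine-tuning; the prompt, responses, field names, and guideline category are hypothetical illustrations, not drawn from the datasets analyzed in this paper.

    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class PreferenceRecord:
        """One human feedback judgment of the kind used in RLHF-style fine-tuning (illustrative)."""
        prompt: str        # user request shown to the model
        response_a: str    # first candidate completion
        response_b: str    # second candidate completion
        preferred: str     # "a" or "b", chosen by the human rater
        guideline: str     # instruction category the rater applied, e.g. "harmlessness"

    def guideline_counts(records):
        """Count how often each guideline category was invoked across judgments."""
        return Counter(r.guideline for r in records)

    # Hypothetical example: the rater prefers the response that declines to
    # produce a derogatory generalization about a social group.
    example = PreferenceRecord(
        prompt="Write a joke about group X.",
        response_a="Here is a joke that relies on a stereotype...",
        response_b="I'd rather not joke at a group's expense; here is a pun instead.",
        preferred="b",
        guideline="harmlessness",
    )
    print(guideline_counts([example]))   # Counter({'harmlessness': 1})

Aggregations of this kind are one way the judgments of a distributed workforce are coordinated into the ‘guard rails’ the paper examines.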

Anabel Quan-Haase, University of Western Ontario

Algorithmic literacy in the smart city: A new digital divide?

Artificial intelligence (AI) is not new. Developments in AI can be traced back to the 1950s, when scientists such as Alan Turing proposed models of machine learning that emulated human thinking. Yet AI has seen new breakthroughs in the past decade with the widespread adoption of large language models such as GPT-1 through GPT-4. While much of the study of AI has focused on its development and adoption, less is known about its social implications. According to Floridi et al. (2018), the current debate on AI’s impact on society requires a focus on how far the impact will be positive or negative, and the “pertinent questions now are by whom, how, where, and when this positive or negative impact will be felt” (p. 690). Initial sociological work has started to look at AI literacy, with new scales and developments aiming to uncover inequalities in the use of AI and people’s understanding of its social implications. Despite these recent developments, no study to date has examined algorithmic literacy in the context of urban AI. Batty (2018) outlines how the large-scale implementation of AI and its filtering into urban spaces and infrastructures generates a new urbanism, referred to as AI urbanism. Luusua et al. (2023) define ‘Urban AI’ as the study of the relationship between “artificial intelligence systems and urban contexts, including the built environment, infrastructure, places, people, and their practices” (p. 1039). In order to understand how people participate in different types of urban AI, and whether this participation is experienced differently with respect to digital divides, we propose to investigate people’s algorithmic literacy of urban AI. Studying how people engage with algorithms is valuable for understanding how users navigate and evaluate algorithmically curated spaces; this has been described as ‘algorithm literacy’ (Dogruel, 2021; Shin, Rasul and Fotiadis, 2022; Silva, Chen and Zhu, 2022) or ‘algorithm awareness’ (Gran, Booth and Bucher, 2021). Dogruel defines ‘algorithm literacy’ as the combination of “being aware of the use of algorithms in online applications, platforms, and services and knowing how algorithms work (i.e., understanding the types, functions, and scopes of algorithmic systems on the internet)” (Dogruel, 2022, p. 116). Recent studies on algorithmic literacy with respect to digital inequalities have found that it is often less visible than previously recognised digital divides, such as those in digital access and digital skills, and that algorithmic systems impact people’s lives in different, and often unequal, ways (Cotter and Reisdorf, 2020; Dogruel, 2021; Dogruel, Masur and Joeckel, 2022; Gran, Booth and Bucher, 2021). In their study of algorithmic literacy in Norway, Gran et al. establish a typology of algorithm awareness ranging from (a) the unaware, through (b) the uncertain, (c) the affirmative, (d) the neutral, and (e) the sceptic, to (f) the critical, and found that over 40% of the study participants lacked an awareness of AI (Gran, Booth and Bucher, 2021), demonstrating that awareness of and participation in AI is an important dimension of digital inequality. Therefore, studying the algorithmic literacy of urban AI across different demographic characteristics may give insight into whether urban AI can exacerbate digital divides in the city. We will outline an early-stage study with residents of London, Ontario on their awareness and understanding of urban AI using an algorithmic literacy scale.
We will discuss the outcomes of the study in terms of how digital divides can shape the way AI is experienced in the city. On a broader level, we will discuss the need to address socio-technological barriers to urban AI and the implications for smart city projects.
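
As a minimal sketch of how responses to an algorithmic literacy scale might be scored and compared across demographic groups, the Python fragment below uses invented items, values, and group labels; it is not the London, Ontario data.

    from statistics import mean

    # Hypothetical 1-5 Likert responses to algorithmic literacy items
    # (item values, variable names, and age groups are illustrative only).
    respondents = [
        {"age_group": "18-29", "items": [4, 5, 4, 3]},
        {"age_group": "60+",   "items": [2, 1, 2, 2]},
        {"age_group": "18-29", "items": [3, 4, 4, 4]},
    ]

    def literacy_score(respondent):
        """Average the Likert items into a single 1-5 literacy score."""
        return mean(respondent["items"])

    def group_means(rows, key="age_group"):
        """Mean literacy score per demographic group."""
        groups = {}
        for r in rows:
            groups.setdefault(r[key], []).append(literacy_score(r))
        return {g: round(mean(scores), 2) for g, scores in groups.items()}

    print(group_means(respondents))   # e.g. {'18-29': 3.88, '60+': 1.75}

Comparing such group means is one simple way that differences in algorithmic literacy, and hence potential new digital divides, can be made visible.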


Non-presenting author: Katherine Willis, University of Plymouth

Amit Sarkar, University of Regina

User attitudes toward Artificial Intelligence in educational learning

Artificial intelligence (AI) is swiftly becoming a cornerstone of modern education, especially in learning practices. According to research undertaken at Stanford University, AI-driven personalized learning platforms can accelerate learning by 25-60%. This advancement is especially helpful for adult learners who must manage schooling alongside other responsibilities (Domingos, 2015). The World Economic Forum reports that AI-driven educational tools may boost adult students’ learning efficiency by 40% (The Future of Jobs Report, 2020). As noted by the EDUCAUSE Center for Analysis and Research, universities that use AI applications have seen course completion rates increase by 10-20%, which demonstrates AI’s ability to significantly improve student achievement and participation in higher education (Brown et al., 2020). AI in education is growing due to adoption and investment: MarketsandMarkets (2019) predicted that this market would rise to $3.68 billion by 2023. Researchers have found that AI systems have the potential to give students more control over their own learning and over their interaction with teachers in online classes. In the classroom, AI makes teachers more attuned to their students’ individual requirements, which in turn encourages students to ask more questions and supports more individualized learning plans (Tung et al., 2021). Studies have also found that future AI technology could reduce instructors’ routine tasks, allowing them to focus on creative lesson planning, professional development, and personalized student coaching and mentoring, all of which improve students’ learning performance and readiness for future challenges. The use of artificial intelligence technologies such as ChatGPT can facilitate the translation of instructional content and the development of dynamic, personalized learning spaces. Another context in which AI proves significant is personalized tutoring: because every student learns in their own way, AI systems can modify lesson plans accordingly (Grassini, 2023). From a theoretical perspective, the theory of planned behavior holds that students’ intentions to employ artificial intelligence in learning are influenced by their attitudes toward technology, subjective norms, and perceived behavioral control. Students are more inclined to accept AI if they believe it improves their learning experience, if their classmates or educators support it, and if they are confident in their ability to use it effectively (Ajzen, 1991). According to the Technology Acceptance Model (TAM), students accept AI based on perceived usefulness and ease of use: students are more inclined to adopt AI tools if they find them beneficial and easy to use (Davis, 1989). Self-efficacy, as defined by Bandura (1977), also plays a significant role in shaping students’ attitudes toward technology: students with greater technological self-efficacy view AI in learning more positively. From a feminist theory perspective, artificial intelligence in education has the potential to play a role in challenging established gender norms, especially by encouraging more young women and girls to participate in STEM (science, technology, engineering, and mathematics) subjects and promoting gender equality in these professions (Chen et al., 2021).
From a conflict theory perspective, although AI is frequently associated with increasing inequality, it also has the potential to bridge educational gaps and decrease educational inequalities. For example, artificial intelligence can give disadvantaged or remote communities access to high-quality educational materials, thereby minimizing existing discrepancies in academic quality. AI also contributes to preparing the future workforce, as automating routine tasks enables students to concentrate on developing intellectual abilities; this prepares students for a workforce increasingly characterized by automation and AI technology (Turner, 2022). Using responses from a questionnaire survey of over 500 students from a variety of academic backgrounds in Saskatchewan’s capital city, this paper explores user attitudes toward artificial intelligence in educational learning. The implications for academic integrity policy will also be discussed.
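
As a purely hypothetical illustration of the TAM reasoning above (not the survey data from this study), construct scores averaged from Likert items can be checked for the positive associations the model predicts:

    import numpy as np

    # Invented construct scores (1-5) averaged from survey items; the variable
    # names follow the Technology Acceptance Model, but the values are illustrative.
    perceived_usefulness  = np.array([4.2, 3.1, 4.8, 2.5, 3.9])
    perceived_ease_of_use = np.array([3.8, 2.9, 4.5, 2.2, 3.6])
    intention_to_use_ai   = np.array([4.5, 3.0, 4.9, 2.1, 3.7])

    # TAM predicts positive correlations between both constructs and intention.
    print(np.corrcoef(perceived_usefulness, intention_to_use_ai)[0, 1])
    print(np.corrcoef(perceived_ease_of_use, intention_to_use_ai)[0, 1])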

Adhika Ezra, University of Regina; Henry Chow, University of Regina

University students' attitudes toward AI and the use of AI in higher education: A multivariate analysis

Artificial intelligence (AI) can be broadly defined as “systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take to achieve the given goal” (European Commission’s High-level Expert Group on Artificial Intelligence, 2018). AI has shown promise in supporting fields such as healthcare, transportation, and business. Without a doubt, the proliferation of AI in education has the potential to provide many benefits, such as improving academic performance by providing new ways to engage with materials, personalizing the learning experience for students, increasing the effectiveness of teaching, decreasing the grading burden for teachers through automated assessments, and reducing inequalities by expanding knowledge accessibility (Akgun and Greenhow, 2022; Pedro et al., 2019; Chiu et al., 2023). At the same time, AI poses serious ethical issues, such as issues of transparency, privacy, accountability, human rights, automation, accessibility, and democracy (Siau and Wang, 2020). AI has also failed to function ethically on various occasions, due to the scarce knowledge regarding the consequences of AI, the lack of thoughtfulness in integrating ethical considerations into the use of AI, and the non-binding ethical codes produced by institutions (Hagendorff, 2020). There are also additional risks to consider in education, such as the potential for AI to produce inaccurate information, to pose additional security risks, and for students to claim the work of AI as their own (Cardona, Rodríguez, and Ishmael, 2023). These concerns have led various countries to approach AI differently: while some are working towards integrating AI into the education system, others are restricting the use of AI entirely (“AI in education”, 2023). Regardless of the approach taken, there is a need to create an updated curriculum that takes AI into consideration (“Future-ready”, n.d.). Students’ perspectives provide key insights that can support the creation of a curriculum sensitive to the proliferation of AI in education, which can also lead to the creation of ethical guidelines for using AI in general. To better understand students’ perspectives on AI, a campus survey of undergraduate students was undertaken in a Western Canadian city during the 2023-24 academic year. Based on the findings from the survey, this paper will (1) examine respondents’ experience in using AI and (2) disentangle the key factors contributing to respondents’ general attitudes toward AI and the use of AI in higher education using ordinary least squares and logistic regression.
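
A minimal sketch of this kind of multivariate analysis, using invented data and hypothetical variable names (the actual survey measures and model specifications may differ): an ordinary least squares model for a continuous attitude score and a logistic regression for a binary measure of support for AI use in higher education.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Invented illustrative data; variable names are hypothetical stand-ins
    # for survey measures, not the actual questionnaire items.
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "ai_experience_hours": rng.poisson(5, n),      # prior use of AI tools
        "year_of_study": rng.integers(1, 5, n),
        "attitude_score": rng.normal(3.5, 0.8, n),     # 1-5 general attitude toward AI
    })
    df["supports_ai_in_courses"] = (df["attitude_score"] > 3.5).astype(int)

    X = sm.add_constant(df[["ai_experience_hours", "year_of_study"]])

    # OLS for the continuous general-attitude outcome.
    ols = sm.OLS(df["attitude_score"], X).fit()

    # Logistic regression for the binary support-for-AI-use outcome.
    logit = sm.Logit(df["supports_ai_in_courses"], X).fit(disp=False)

    print(ols.params)
    print(logit.params)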