Exploring Representations of Disability and Sexuality in AI-Generated Art


Melissa Miller, University of Calgary

Artificial intelligence (AI) has dramatically transformed the creative arts, particularly image generation. This evolution presents a unique opportunity to critically examine AI's role in shaping societal perceptions, especially around sensitive topics such as disability and sexuality. This exploratory qualitative study addresses two research questions: first, what types of images does AI generate when tasked with representing disability and sexuality? Second, do these AI-created images reinforce or challenge prevailing stereotypes and perceptions of disability and sexuality? AI image generation tools have a dual capacity: through their programming and training datasets, they can either perpetuate harmful stereotypes or offer new, inclusive perspectives. Our study provides a critical analysis of representations of disability and sexuality in AI-generated imagery, asking whether AI merely replicates entrenched societal biases or can contribute to a more diverse and inclusive visual narrative. Through a qualitative analysis of visual content produced by various AI models, we explore how AI technology shapes the depiction of disabled individuals as sexual beings. Using NVivo, a qualitative analysis software, we conducted an in-depth content analysis to identify and explore recurring themes within these images. Our preliminary findings reveal a concerning trend in AI-generated imagery: a significant lack of diversity and a narrow representation of the disabled experience.
Most images predominantly feature white heterosexual couples, and, notably, even when disability was explicitly mentioned in the prompts provided to AI image generators, many of the resulting images failed to display any visible signs of disability. Where disability was depicted, wheelchairs and glasses were often the sole indicators, reflecting a limited and stereotypical perspective on disability. This study underlines the potential of AI-generated content to showcase the rich and diverse experiences of disability; realizing that potential, however, hinges on training AI models with a more comprehensive and varied range of images and data. It is imperative for developers, content creators, and designers to engage in ethical practices, including diversifying training data to counteract ingrained biases, integrating accessibility features into AI-generated content, and proactively soliciting feedback from individuals with disabilities during development. Our paper casts a critical light on the current state of disability and sexuality representation in AI-generated images. The analysis we present is not an end in itself but a catalyst for further research, ethical discussion, and the responsible use of AI in visual content creation. In our presentation, we will elaborate on these preliminary findings, drawing attention to the persistent erasure of the disabled experience and the predominance of white heteronormative portrayals in AI image generation. As AI technology continues to evolve, its application must be marked by careful consideration and intentionality. AI holds the promise of representing and enhancing the lives of individuals with disabilities, but only if we commit to using it in ways that ensure all people, regardless of their abilities, are portrayed with the accuracy, dignity, and respect they deserve.


Non-presenting authors: Alan Santinele Martino, University of Calgary; Rachell Trung, University of Calgary; Eleni Moumos, University of Calgary
