By Jim Betz
I was thrown a curveball recently when I tried a brand-new AI lesson with a second-grade class. The idea was for groups of kids to devise a basic outline for a tall tale, then enter it into ChatGPT to generate a story at a second-grade reading level.
As a demonstration, I modeled developing a character for such a tale, and on a whim chose to base the character on myself: a librarian who could shelve books at lightning speed. We were pleased with the results, and we asked ChatGPT to create an illustration for the story.
The second graders, having little or no experience with AI, didn’t know what to expect, but a few expressed surprise when the illustration showed a character of a different race than my own. When several students pointed out that the illustration didn’t accurately reflect me, I explained that I had merely used my name and job as a model for the character; it was never meant to represent me directly.
Not a big deal, and the class proceeded normally afterward. But the fact that programs like ChatGPT or Google’s Gemini can create illustrations involving people does raise a few questions for teachers who wish to integrate them into daily lessons, and it may lead to discussions on topics we aren’t used to bringing up in class.
I decided to see how my teacher colleagues would handle a similar issue. In a highly informal survey, most of those I spoke to said they expected the kids to accept the illustration as the AI created it, and that they wouldn’t address the topic of race at all. However, I don’t feel we’re really serving our students when an issue such as racial representation comes up and is pushed to the side.
How should a teacher respond when students ask to regenerate an AI image with people of a particular race? Anyone familiar with using AI to create images of people knows that these images can be revised to make a character taller, older, or thinner, or to give them a different hair color.
There have been times when I’ve needed to specify the race of a character for my own work. When I create an AI image, though, I know whom I am representing and where they come from; I usually create AI images for stories or podcasts in which the main character is well developed and has a clear sense of identity.
For educators who are teaching writing or computer science, and whose students are generating images of characters dreamed up only moments before, would it be a red line for a child to specify a certain race when formulating a character?
I imagine this could be an issue for older students, the kind of issue that keeps them talking long after class has ended. When we ask students to come up with adjectives to define how a character looks, should we now be teaching them tactful ways to assign race as well? In a racially sensitive atmosphere, would a teacher attach any importance to a student specifying a character’s race in an AI-generated image? Doing so could lead to the perpetuation of stereotypes, such as specifying white police officers or African American criminals.
Moreover, a teacher who overlooks the issue may be choosing not to address a critical-thinking skill that students could apply when using AI at home. And they will use AI to create images at home. They will also encounter AI-generated images on their own time, on websites that may have racial or ethnic agendas.
Until AI has been perfected enough to be flawless, something that may never happen, the issue of race in AI-generated images will come up in the classroom. A recent New York Times article addressed Google’s Gemini AI system, which, when asked to generate images of German soldiers in World War II, produced images that included an African American man and an Asian woman.
The controversy was strong enough that Google temporarily suspended Gemini’s ability to generate images of people, but it’s doubtful that any company’s good intentions regarding diversity will infallibly be guided by historical accuracy. The students sitting in classrooms today need to learn how to evaluate the products of artificial intelligence, and how to communicate thoughtfully about the issues of race that do arise.
As students shift from seeking information to navigating a flood of content generated and curated by algorithms, the classroom must evolve just as quickly. Generative AI is not a distant, futuristic tool—it’s already here, shaping the way students think, create, and question. As school districts begin to introduce AI training, the focus should move beyond mere fascination with what AI can do, and instead should prioritize thoughtful engagement with what it means.
Teachers need support not only in understanding the tools, but also in managing the ethical, cultural, and emotional complexities AI brings to the classroom. Preparing for these moments—when a student’s curiosity brushes up against real-world sensitivities—is no longer optional. It’s essential.
Jim Betz is a Media Specialist working at a primary (K–2) school in Georgia. He previously taught art and computer science, but loves being in the library because it lets him use art and technology in creative ways with the kids.