At this week’s Game Developers Conference, Ubisoft offered a possible glimpse of an AI-filled future for gamers. The company demoed a prototype at the show that used Nvidia’s Ace microservice to produce fully voiced “smart NPCs” that players could interact with by speaking into a microphone. Despite drawing skepticism online (myself included), the demo impressed me once I went hands-on with it. I had a surprisingly cogent conversation with an environmentally conscious NPC about the ethics of eco-terrorism, a completely off-script exchange made possible through AI.
It’s one of the stronger use cases for the tech I’ve seen yet, but it has a surprising shortcoming that even Ubisoft is struggling to solve: linguistic bias.
AI’s secret bias
In the short demo, I took on the role of a spacefaring character who gets involved in a resistance group’s fight against a megacorporation. The three-part demo had me chatting with two different characters, both of whom Ubisoft gave long backstories that were fed into Nvidia’s Ace tool. I learned about the sci-fi world by chatting with an NPC and asking him about his comrades, before thinking creatively to plan a perfect heist.
After finishing my demo, I asked two Ubisoft workers involved with the project if there were any shortcomings with the tool that frustrated them. Though they were high on the tech overall, it was clear from some deep sighs that they had a laundry list of kinks to work out before the studio could fully adopt it. Their number one gripe was the inherent bias present in the English language, a human problem that AI currently inherits by default.
The demonstrators pointed to two specific examples that hadn’t even registered with me during my playthrough. At one point, I asked an NPC to tell me about their least favorite member of their team. After telling me they loved all their crew members, they threw a character named Iron under the bus for being a prickly guy. There was just one problem: Iron isn’t supposed to be a man.
According to the demonstrators, the AI tool seems to associate the word “iron” with masculinity, and thus assumes that Iron, the character, is male. A similar problem popped up when the NPC described a character named Bloom as a sweet, nurturing woman. That’s the Iron problem in reverse: “Bloom” seems to read as feminine to the machine, grafting a motherly stereotype onto a male character. Those examples are small, but they open the door to larger issues if left unchecked. For instance, how will generative AI dialogue deal with race?
This isn’t a problem specific to Nvidia’s tool, or even to AI in general; it’s reflective of a fascinating nuance of language. Large language models (LLMs) like the one used here simply mirror the way humans talk based on the mass of data on which they’re trained. That means they’re bound to pick up some nasty habits from time to time, like doling out gendered stereotypes. While Ubisoft’s NPCs have long backstories hand-crafted by writers, the models underneath are still learning from more general language datasets. AI companies like Convai, which helped power Nvidia’s Kairos demo at CES this year, admit that they don’t fully know what content those datasets contain.
That issue only becomes more complicated when considering what it means for different languages. The developers I spoke to pointed out that generative AI is currently English-centric. Other languages can be radically different, though, which creates an additional challenge for those looking to adopt tools like Ace. What’s standard for English speakers might not make sense in another language. How do you localize a machine that only really knows one?
Ubisoft and Nvidia aren’t trying to hide these shortcomings. The workers I spoke to during the demo stressed the importance of collaborating with internal DEI teams who can help identify pain points and correct the writing AI characters spit out. As impressed as I was to be holding a rousing philosophical debate with an AI NPC, it’s clear that those machines still need a lot of human help to fix the instincts we’ve accidentally instilled in them.