The last six months have witnessed the rise of terms and phrases like “ChatGPT,” “AI-powered search,” “Google Bard,” “Dall-E,” “Stable Diffusion,” “large language models (LLMs),” and “generative AI.”
As most of the world and its workplaces try to come to terms with what exactly this technology can offer them and the domains they work in, many of these AI models’ functionalities and features have been open for testing for a few months now.
Since March, I have interacted regularly with OpenAI’s ChatGPT, Microsoft’s AI-integrated Bing search, and Google’s Bard, not only to explore the results they produce but also to understand how they perceive reality. This week’s column blends a few of my observations with some estimation of how exactly AI models can aid the game-making process.
Sifting through large sets of data and information
The belief that large language models (LLMs) can provide concise, informative text that enables quick comprehension of any given subject is a prevalent one. However, the output generated by AI systems is frequently vague and overly general. This disparity is best explained by AI models’ reliance on search engine results and cached web-based information to generate their responses.
Consequently, when you seek information from an AI model, the results often reflect a compilation of common terms associated with your query rather than addressing the specific topic at hand. This also explains the fabricated references that appear in systems like ChatGPT.
AI can help game writing
This belief is a popular one, and organizations like Ubisoft have developed their own proprietary AI models for writing dialogue in games. Industry confidence in such models runs so high that many gaming organizations have begun to wonder whether they will even need scriptwriting and character development teams in the future.
I have experimented extensively with game dialogue writing in both ChatGPT and Google Bard. I asked them to create five women characters for a dating sim set in a bar on a weekend. I told the models that the game’s protagonist was an introvert and asked them to give me possible settings for how interactions between the protagonist and the women characters would proceed.
The results were alarmingly biased in terms of race and description. Both models yielded five white characters, all rooted in the US. When I asked for diversity in characters, I was given five racially and ethnically diverse characters who still bore the same North American/European names.
The more prompts I fed into the systems asking for diversity, the more chaotic and unwieldy the results became. Nearly two hours in, I had made hardly any progress, as the exercise yielded almost no usable dialogue sequences.
AI can generate game worlds and write code
This is the dimension that excites me the most, especially when I look at the digital renditions of language-based prompts on systems like Midjourney and OpenAI’s Dall-E.
The results are sometimes not only aesthetically pleasing and accurate but also quite imaginative. However, these renditions are controversial, as they use the existing work of artists across creative disciplines as foundations and benchmarks for the AI to render its own pieces. Similarly, on the coding front, people have attempted to get LLMs to write code for simple games like Pac-Man and Flappy Bird.
In most of these exercises, human guidance and oversight have been necessary. Given the tight, efficient code and the layers of polishing that go into making games and platforms responsive, I am not convinced AI models can undertake those exercises of testing and reflection on their own.
From what I can see, AI-based models are nowhere close to overhauling key processes in game making. While their ability to create worlds and visuals is a significant one, it remains to be seen whether those images and visuals can lead to playable sequences. I don’t think we are anywhere close to people losing jobs just yet.