PB Devblog #1
(PB means Project Barton)
Editor’s Note, July 14th: I’m a little embarrassed by how outdated this article is. There’s still some good info here about how I organized my first prototype for the whole ChatGPT interface, but since then I’ve learned about chat-completion-response systems and how to better integrate directing prompts into the conversation (which I’ll make a post about later). This was when I was just starting out, still testing my ideas with the ChatGPT web interface. I debated taking it down entirely, but showing the process of fucking up and finding out is the whole point of a devblog, right?
My game’s premise, and ideally its core mechanic, is that the player will be interacting with an NPC in conversation. The player has absolutely free rein over what they can say or tell the AI controlling the NPC, and it’s up to the AI to improvise.
This has been done before. Plenty of indie game devs have been incorporating OpenAI’s models into their NPCs, but they usually run into the same problems:
The AI is incredibly nice, and is very obviously a robot.
The AI likes to “yes-and” everything that the player tells them, which usually makes for a weird conversation given the context of the game.
The player says random things just to mess with the AI.
It’s pretty easy to sum these problems up by labeling the AI as “undeveloped” or blaming it for not spitting out exactly what the developer wants. As for the issue with the player, I think it just comes with the territory of an AI-based game. You give someone an AI toy to play with, they’ll play with it.
This next part is pretty text-heavy, so here’s an AI picture of a telephone with googly eyes.
If you get tired of looking at paragraphs of dialogue, just come back here!
Fixing the Issue:
As I was researching how others had created AI NPCs, I realized that they all made a similar mistake:
Before the conversation begins, the game would ask the AI to “act” as the character they want it to be.
I think this is what causes the first problem I listed above. The AI is a terrible actor, because it’s trained to be as kind and helpful as possible. It might give you the NPC’s name if you asked, but it wouldn’t be able to show emotion, and would therefore make for an uncompelling NPC. My method is this:
Before the conversation begins, give the AI the full context of the conversation. Then ask it, “How would [NPC] reply?”
The AI is a terrible actor, but a wonderful screenwriter. By putting a barrier between the AI’s identity and the NPC’s identity, we can generate some really compelling dialogue. For example, here’s an output I got from ChatGPT when playing with this method:
DON’T PANIC. I know that seems like a ton of writing. But I’ve found that the more context ChatGPT has to work with, the better the response. Just look at that dialogue! So compelling, so mischievous, and a little bit villainous.
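To give you an idea of the shape of it (this is a stripped-down stand-in with placeholder names and scene details, not the actual prompt from the screenshot), the structure is basically: all of the context up front, then the one question at the end.

```python
# A minimal sketch of the "screenwriter" prompt structure. The NPC name and
# scene description here are placeholders, not the real ones from my game.

npc_name = "The Stranger"
scene_context = (
    f"SCENE: A payphone rings in an empty parking lot at night. {npc_name} "
    "has been waiting years for this call and trusts no one. Barton, a "
    "private investigator, picks up the receiver on the other end."
)

# Note that we never tell the AI to *act* as the NPC. We hand it the full
# context and ask it to write the next line, the way a screenwriter would.
prompt = f"{scene_context}\n\nHow would {npc_name} reply?"
print(prompt)
```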
What about user input?
Well, first we have to format the user’s input so that it looks a bit more like a conversation between characters. If we just let the user input a random sentence with no context, it would break the whole co-writing-a-screenplay-with-the-AI thing we’re going for. It’s pretty easy; I just format it like this:
[Player Character]: “[User input]”
An example would be:
Barton: “Who are you?”
And that tends to keep the conversation flowing, and the AI outputting cohesive dialogue between the NPC and the PC.
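If you’d rather see that as code, the formatting step is about as simple as it sounds (the function name here is just made up for this example):

```python
def format_player_line(player_name: str, user_input: str) -> str:
    """Wrap the raw user input so it reads like a line of screenplay dialogue."""
    return f'{player_name}: "{user_input}"'

# e.g. format_player_line("Barton", "Who are you?") -> 'Barton: "Who are you?"'
# Each formatted line gets appended to the running scene text before asking
# "How would [NPC] reply?" again.
```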
Could we control what the AI says?
What happens when the player says something weird to the AI? What if I want the AI to call off the conversation, or display some sort of emotion in its answer?
I’ve found that formatting the user’s input to give a little between-the-lines guidance to the AI makes a MASSIVE difference. I’ve named these little subtextual prompts Directing Prompts, because I like to imagine it’s like a director yelling “CUT!” and having a little one-on-one with the main actor before continuing the scene.
Here’s an example:
The first prompt was my first time playing with this whole Directing Prompts concept, and it output what I expected from an AI: being overly nice, supportive, and overall ChatGPT-like.
For the second prompt, I tried something the AI probably would not have done on its own, and its output… holy sh**. That’s mean.
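Mechanically, a Directing Prompt is just an extra line slipped in underneath the player’s dialogue before the AI sees any of it. Something like this sketch (the wording of the direction is only an example of the kind of note I mean, not the exact one from the screenshots):

```python
def format_turn(player_name: str, user_input: str, direction: str = "") -> str:
    """Combine the player's line with an optional between-the-lines direction.

    The direction is the "CUT!" note to the actor: the AI sees it, the player
    never does.
    """
    turn = f'{player_name}: "{user_input}"'
    if direction:
        turn += f"\n({direction})"
    return turn

# Hypothetical example: nudge the NPC toward cutting the call short.
print(format_turn(
    "Barton",
    "I know what you did.",
    direction="The Stranger is done with this conversation and hangs up mid-sentence.",
))
```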
This is when I really started to think about how this could turn into a narrative game. All that was left was to automate the whole process, making sure that the AI sees all the added prompts while the player only ever sees the dialogue.
This is where I come clean. As I’m writing this, I’ve already done it, and it took a while. I’ll be explaining my process, and some other things I’ve found useful, in some later devlogs.
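In the meantime, here’s a bare-bones sketch of roughly how that loop fits together. The ask_ai() call is a made-up stand-in for however you actually send text to the model, and pick_direction() stands in for whatever game logic decides when a Directing Prompt is needed:

```python
def ask_ai(prompt: str) -> str:
    """Stand-in for whatever actually sends the prompt to the model."""
    raise NotImplementedError

def pick_direction(user_input: str) -> str:
    """Stand-in for the game logic that decides if a Directing Prompt is needed."""
    return ""

def play_scene(npc_name: str, player_name: str, scene_context: str) -> None:
    transcript = [scene_context]  # everything the AI sees, directions included
    while True:
        user_input = input("> ")

        turn = f'{player_name}: "{user_input}"'
        direction = pick_direction(user_input)
        if direction:
            turn += f"\n({direction})"
        transcript.append(turn)

        prompt = "\n\n".join(transcript) + f"\n\nHow would {npc_name} reply?"
        reply = ask_ai(prompt)
        transcript.append(reply)

        # The player only ever sees the dialogue, never the added prompts.
        print(reply)
```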
Thumbnail: DALLE-2, "picture of an old telephone with googly eyes, photorealistic"