Building context-based truth models for ChatGPT

Does context hold the key to a standard of truth for ChatGPT and other generative AI?
23 March 2023

Truth through context and enriched responses?

In Part 1 of this article, we sat down with Don White of Satisfi Labs, a specialist in context-rich AI-based conversational assistants (or chatbots, if you prefer), to discuss the reasons why – without a source of truth – generative AI like ChatGPT and Google Bard might not be as ready for deployment as many in both the business and civilian communities might think.

While we had Don in the chair, we asked him where sources of truth might be found for these generative AI systems – or indeed, how they might be built, so that nobody would necessarily need to know the truth or falsehood of any answers ChatGPT might provide ahead of time.

Context-thirsty work.


Let me tell you a quick story, for illustrative purposes.

I’m an IPA (India pale ale – a particular kind of beer) fan. I was talking to someone from the New York Mets recently, because they were our first client and we keep in touch. I said “Suppose you’re an IPA fan, and you’re at Citi Field (home of the Mets) and you ask ChatGPT whether you can get a hazy IPA there.”

What ChatGPT does is scrape the menu and tell you actually, no, you can’t.

And that’s all. Big let-down for your IPA fan, and they go home grumbling that you can’t get a hazy IPA at Citi Field.

What it doesn’t do is come up with the kind of thing Amazon does – “No, you can’t get a hazy IPA here, but people who like a hazy IPA tried this drink, and it usually goes down pretty well with them.”

The broader point is that the “truth” there isn’t always what you actually want, because it takes away your ability to turn a negative into a positive for your consumer, and for your own bottom line.

It gives you the truth as far as it’s able to identify it, but like I said, sometimes, neither you nor your consumer want a plain, bald truth, without accompanying context to that truth, because the context can still enrich your consumer’s experience in ways that the plain “truth” can’t.
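The pattern Don describes – answering “no, but here’s what fans of that thing usually enjoy” instead of a bare “no” – can be sketched in a few lines. Everything here (the menu, the style mappings, the function names) is invented for illustration, not Satisfi’s actual system:

```python
# Hypothetical sketch of turning a negative answer into a contextual one.
# Menu and style data are invented for illustration.

VENUE_MENU = {"lager", "stout", "pilsner", "west coast ipa"}

# Items that fans of a given style have historically enjoyed (assumed data).
SIMILAR_STYLES = {
    "hazy ipa": ["west coast ipa", "pilsner"],
}

def answer(query_item: str) -> str:
    """Return a plain yes, or a 'no, but...' answer enriched with context."""
    if query_item in VENUE_MENU:
        return f"Yes, you can get a {query_item} here."
    alternatives = [s for s in SIMILAR_STYLES.get(query_item, [])
                    if s in VENUE_MENU]
    if alternatives:
        return (f"No {query_item} here, but fans of it often enjoy: "
                + ", ".join(alternatives) + ".")
    return f"Sorry, no {query_item} here."
```

The key design point is the middle branch: the literal truth (“no”) is still delivered, but wrapped in context that keeps the consumer engaged.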

That’s essentially the fundamental – ahem – truth on which we built our business in 2016. Context can give better results than bald information. So we built a system to provide a source of truth to large language models and other NLP systems. That’s what we did, and essentially, that’s why we’re in business today. So to see something like ChatGPT come about is really kind of cool. We call what we can add the “verified answer.”

Verification achieved.


Which is exactly what ChatGPT and other generative AIs are lacking right now – that verification, that contextual understanding that goes beyond their persuasive ability to “sound” right, even if their information is actually wrong – or, as you say, doesn’t result in a satisfying experience for the user.

So Satisfi just filed a patent on its NLP (natural language processing) model? Timing is everything.


Ha. Funny story. We launched a press release all about our patent-pending NLP.

ChatGPT was launched the same week.

One of our investors called me up to say “Great job for nothing.”

I appreciate that. I didn’t know the most disruptive conversational AI technology in our history would be launched that same week!

But here’s the thing. Our patent actually focuses on something called the context response system. It means we’re using our proprietary technology to create content placeholders, so that we can educate a product like ChatGPT on what the right answer is – or what the brand’s answer is, which can sometimes be richer in context and avoid disappointment without delivering false information.

So what we’re actually patenting is how we use NLP to generate answer indexing. And that fits very well into where ChatGPT and the rest of the new generative AIs need to go.
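One way to picture a “verified answer” layer sitting in front of a generative model is a small index of brand-approved answers that is consulted before the model is. This is a minimal sketch under assumed details – the index contents, the fuzzy-matching approach, and the fallback message are all invented here, not the patented answer-indexing system being described:

```python
# A hedged sketch of a verified-answer lookup in front of a generative model.
# All questions, answers, and thresholds are invented for illustration.

from difflib import SequenceMatcher

# Brand-approved answers, keyed by a canonical question.
VERIFIED_ANSWERS = {
    "what time do gates open": "Gates open 90 minutes before first pitch.",
    "can i bring outside food": "Yes, in a clear bag no larger than 16x16x8.",
}

def best_match(question: str, threshold: float = 0.6):
    """Find the indexed question most similar to the user's question."""
    scored = [(SequenceMatcher(None, question.lower(), key).ratio(), key)
              for key in VERIFIED_ANSWERS]
    score, key = max(scored)
    return key if score >= threshold else None

def respond(question: str) -> str:
    key = best_match(question)
    if key is not None:
        return VERIFIED_ANSWERS[key]  # verified, brand-approved answer
    # No verified answer indexed: hand off to the generative model,
    # flagged so the response can be treated as unverified.
    return "[fall back to the generative model, unverified]"
```

A production system would use semantic embeddings rather than string similarity, but the shape is the same: verified answers win when they exist, and the generative model is only a fallback.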

A lot of companies are using the same models – most conversational AI companies share open-source models. There are MIT models, there’s Dialogflow from Google, you’ve got Pi from Amazon. Now, we’re connected to them, but we don’t use them as part of our core product. That’s what excites me about the future, and why our NLP, with its answer-indexing capability, still stands out in the age beyond the launch of ChatGPT.

Scaling upgraded.


We saw the press release, yes. It said the patent would allow Satisfi to scale uniquely compared to other companies in the same business sphere. Is that a claim that has survived the arrival of ChatGPT?


Honestly, ChatGPT has quadrupled my expectations of what we’re capable of, because it fills a huge gap. The largest gap in what we were doing was actually getting some of this information converted into the knowledge management systems we needed.

ChatGPT is excellent in application, and if you know how to talk to it, you can do prompt engineering. Prompt engineering is a new discipline: alongside NLP engineers, who design the phrase and word combinations, you have prompt engineers, who are trained to teach an LLM – to get it to do what you want it to do.
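The prompt-engineering idea connects directly to the verified-answer theme: one common technique is to wrap verified facts into the prompt so the model answers from them rather than improvising. This is a generic, hedged illustration – the wording and structure are invented here, and real prompts vary by model and use case:

```python
# A minimal sketch of grounding a prompt in verified facts.
# The instruction wording is an assumption, not a recommended template.

def build_grounded_prompt(question: str, verified_facts: list[str]) -> str:
    """Assemble a prompt that constrains the model to supplied facts."""
    facts = "\n".join(f"- {fact}" for fact in verified_facts)
    return (
        "Answer using ONLY the verified facts below. "
        "If the facts do not cover the question, say you don't know.\n\n"
        f"Verified facts:\n{facts}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The point is that the “source of truth” travels inside the prompt, so the model’s fluency is spent presenting verified content rather than inventing unverified content.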

That means that all the things we had thought would take years to get the content organized for, we now think we could do in two months. That’s why I love this product. I think this is tremendous, especially for us.

ChatGPT is a wonderful thing, but it needs a source of truth and context before it will do a lot of the things the market is expecting of it. We may well be able to help with that.