AI parenting – the nuts and bolts of ethical parenting

Do you have what it takes to ethically parent your AI?
6 October 2023

Let’s get to the nuts and bolts of AI parenting.

• AI parenting is the process by which you train your generative AI to do what you want.
• Ethical AI parenting depends on intention, thoroughness, and ensuring clean data wherever possible.
• Establishing trust in your AI parenting will be crucial to business success.

AI parenting is the process of training AI to behave in an ethical, inclusive way while still delivering the results and insights in which companies have invested. We’ve been talking to Dan O’Connell, chief AI and strategy officer, and Jim Palmer, VP of AI engineering, at Dialpad, a cloud communication company that has called for companies to make sure their AI parenting is thorough and effective.

AI parenting is about as easy as human parenting – it’s been done by lots of people, but that should never fool us into thinking it’s a straightforward process.

In Part 1 of this article, Dan and Jim explained exactly why AI parenting is a must-have if generative AI is to be trustworthy in the myriad applications to which it’s being put across the business world.

In Part 2, we looked at how to eradicate bias from generative AI models – at least as far as possible – through AI parenting. That led to the sobering conclusion that if generative AI is to function responsibly enough for us to trust it, it must be less biased and more inclusive than either the society in which we live and die, or the internet we’ve built as that society’s mirror.

That led us back to the fundamental question when it comes to AI parenting.

The crucial elements of AI parenting.

THQ:

If you’re looking to do AI parenting in a business context, making sure you’re as bias-free and ethical as possible, what are the actual nuts and bolts of that? Why are things like controllability and transparency so crucial to getting it right?

Because there are companies that will want to do it faster and cheaper, but will do it without some of the control and transparency, no?

JP:

In some respects, the reasons why controllability and transparency are important are already coming to the forefront of the public consciousness. The concepts of hallucination and confabulation – both of which mean that your AI is basically making things up – are increasingly well known.

If you don’t have controllability, you’re unlikely to be able to stop it doing that. And if you don’t have transparency, then when it does happen, you’re going to have to answer questions about both why the model made things up and why you didn’t tell anyone it could.

From the point of view of both perceived accuracy and the value your end user gets, that will immediately erode trust. So in a business context, accuracy is critically important for a great many tasks.

AI parenting should enforce ethics and reinforce trust.

Responsible AI – the way forward?

Yes, there are certain things you can get away with at a lower level of actual accuracy. If you’re just using your generative AI for the already classic case of “Write me an email about X,” that might not be so consequential.

But when it comes to making a business decision, we can’t make things up, and we can’t have generative AI making things up on our behalf, either. So having that accuracy and that transparency is very important.

DO’C:

Trust is the big component. There may be businesses that cut corners, but if you do that, over time the gap becomes exposed, and there’s a negative reaction to it.

We’re at this really important moment in time where these features and that trust become really critical. People want to know how we’re handling data, they want to understand how we’re testing for bias, and they want to know the answers to all the questions we spoke about in Part 2.

So the nuts and bolts amount to everything you do in the AI parenting stage so that you can answer all those questions.

We’ve explained the kind of things that we do in Parts 1 and 2 – owning our AI stack, developing diverse teams to help us weed out bias, and so on. We don’t do those things just because we think they’re the right things to do – although they are and we do – but because that’s how we answer those questions and build that trust. Those things really are the nuts and bolts of the AI parenting process.

It’s the classic process of precision and recall, false positives and false negatives. Throughout our time doing machine learning, that’s been the constant struggle. The context and the wording have changed a little – we now talk about hallucination rather than false positives and false negatives – but managing those errors has always been very, very important throughout machine learning.
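
For readers who want the arithmetic behind those terms, here’s a minimal sketch in Python of how precision and recall fall out of the raw error counts. The function and the numbers are illustrative only, not Dialpad’s code or figures.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts.

    tp: true positives  (the model said yes, and it was right)
    fp: false positives (the model said yes, but it was wrong - a 'hallucination')
    fn: false negatives (the model missed a real case)
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # how often a "yes" is correct
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # how many real cases were caught
    return precision, recall

# Example: 90 correct detections, 10 false alarms, 30 misses.
p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.90, recall=0.75
```

The trade-off between the two is the constant struggle described above: push a model to produce fewer false positives and it will usually miss more real cases, and vice versa.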

The razor of trust.

THQ:

So trust is the razor that cuts out those who don’t do the right work, because nobody’s going to want to use them? Because the reputation will build itself, for the positives and the negatives alike?

Let’s do something by way of summation. What precisely are the benefits of effective AI parenting?

Good AI parenting should engender trust.

Will your clients be able to trust your generative AI in a pinch?

JP:

We’re going to get new data every single day, and in that new data there will be new things we need to look for. We need to continuously adapt – or AI parent – to increase our accuracy, because machine learning is not perfect.

Some people claim it’s sentient, some people claim it’s at human levels of capacity. But this is all very loosely defined.

We need to keep adapting it because we’re going to have new conversations in our context. For us, that means new customers talking about new things. It all comes down to always trying to increase accuracy, continuously adding to our dataset, and being as responsible with that data as possible.
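
In practice, that kind of continuous adaptation often takes the shape of an active-learning loop: route low-confidence predictions to a human, fold the corrected labels back into the dataset, and retrain on a regular cadence. The sketch below is purely illustrative – the stub model, the confidence threshold, and the retraining trigger are assumptions, not a description of Dialpad’s pipeline.

```python
import random

REVIEW_THRESHOLD = 0.8  # below this confidence, route the case to a human
RETRAIN_BATCH = 5       # retrain once this many new labels accumulate

class StubModel:
    """Stand-in for a real classifier: predict() returns (label, confidence)."""
    def predict(self, example):
        return ("positive", random.random())

    def retrain(self, labeled_pool):
        print(f"retraining on {len(labeled_pool)} fresh examples")

def process_batch(model, new_examples, labeled_pool, human_label):
    for example in new_examples:
        prediction, confidence = model.predict(example)
        if confidence < REVIEW_THRESHOLD:
            # Uncertain case: a human supplies the ground-truth label.
            labeled_pool.append((example, human_label(example)))
        else:
            labeled_pool.append((example, prediction))
    if len(labeled_pool) >= RETRAIN_BATCH:
        model.retrain(labeled_pool)  # as the dataset grows, accuracy can improve
        labeled_pool.clear()

pool = []
process_batch(StubModel(), [f"call transcript {i}" for i in range(8)],
              pool, human_label=lambda example: "positive")
```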

Another thing that’s really important is that we take our data seriously. Some data we can’t hold onto, because of regulations like GDPR.

That’s responsible AI parenting – and it should be obvious why it’s crucial we do that. Continuous adaptation – continuous active learning, or AI parenting – is not only the ethical thing to do with your generative AI; it’s paramount to increasing the value of your product or offering.

So from the ethical, the regulatory, and the business points of view, responsible AI parenting is crucial – and ongoing.

Parenting for future AI regulation.

THQ:

We’re glad you mentioned GDPR. There will presumably, at some point, be a much stronger set of regulations around the ethical parenting of AI than currently exists. How do we parent within regulations that don’t exist yet?

JP:

That comes down to investment in an active parenting pipeline, so that when regulation does arrive, we can act. We’re also using a lot of data – not only the data we own, the data that comes through our platform, but also fully public, commercially available, fully licensed datasets that are trustworthy.

We get as much data as we legally and ethically can. That’s part of our responsibility. There are other datasets that are a little more on the fringe – Wikipedia is a massive dataset, but there’s a question of who owns that data. That’s part of the problem we have to solve before we can use a dataset responsibly.

AI parenting depends on ever-more data.

The data, Precious! We must haves it all!

The best way for us to prepare for the future is… well, to prepare for it. That means being able to get things into our datasets – or out of them if we need to – and being able to continuously retrain, so we maintain a trustworthy and bias-free dataset whatever happens.

It means making sure we have the framework and the infrastructure to do that quickly – for training data, for new model architectures, for new compute resources, and for scaling. That’s how you future-proof your AI parenting practices against upcoming regulatory or legislative changes.
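
As one concrete illustration of what such a framework needs, here’s a minimal sketch of an erasure step that removes specific records from a training dataset (say, in response to a GDPR deletion request) and reports whether a retrain should be scheduled. The JSONL layout and the ‘id’ field are hypothetical, not a description of any real pipeline.

```python
import json
from pathlib import Path

def erase_records(dataset_path: Path, ids_to_erase: set[str]) -> bool:
    """Remove specific records from a JSONL training dataset.

    Returns True if anything was removed, meaning a retrain on the
    cleaned dataset should be scheduled. The layout (one JSON object
    per line, each with an 'id' field) is an illustrative assumption.
    """
    kept_lines = []
    removed = 0
    with dataset_path.open() as f:
        for line in f:
            record = json.loads(line)
            if record["id"] in ids_to_erase:
                removed += 1  # drop the record entirely
            else:
                kept_lines.append(line)
    if removed:
        dataset_path.write_text("".join(kept_lines))
    return removed > 0  # True: schedule a retrain on the cleaned data
```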

How to parent AI now – and in the more regulated future.