Humans are the secret to smart AI
As a pair of closely interrelated, fast-emerging and rapidly evolving technologies, artificial intelligence (AI) and machine learning (ML) have rarely been out of the headlines for the last 18 months, if not longer.
But this is a market that is developing fast. As we now start to more clearly understand the point at which AI is being engineered into the devices, software applications and (now more intelligent) services that we use every day, we have a new responsibility to elevate our level of thought.
Can I have more (data) please?
Why elevate? Well, it’s simple enough. Our initial focus on AI has been centered around the provenance, production, and propagation of data and datasets. More data from more sources makes better AI… or so the general theory goes.
The AI mavericks that have driven the post-millennial age of AI have worked hard to ensure that AI ‘engines’ (the algorithm-enriched software programs that drive our devices’ AI brains) are fed with the most current, most structured, most deduplicated and most verified data at all times.
If we feed the AI brain like this, then we can build the best ‘training datasets’… and so the machine can learn more stuff at a faster pace.
This is all fine. After all, machine intelligence without data isn’t very smart.
What we must now do is start to look at how human-machine relationships will adapt to the new AI streams that are working to control our businesses with technologies that bring predictive alerts and other advisory controls.
It is, if you will, a question of choreography between work processes, AI technologies and the human beings that sit in the center of this new maelstrom of information.
Organizations implementing new AI models must realize how important it is to ensure that AI steps up from being a solely automated thing to being a technology that is more directly understood by humans.
Non-departmental UK public body the Information Commissioner’s Office (ICO) has tabled comments made by AI research fellow Reuben Binns and technology policy adviser Valeria Gallo to explore how organizations can ensure ‘meaningful’ human involvement in AI decisions.
Ability for interpretability
Binns and Gallo point to the need to ensure interpretability when building AI engines and say that, to ensure AI is doing what it should, a human reviewer should be able to predict how an AI system’s outputs will change if given different inputs.
“Some AI systems are more interpretable than others. For instance, models that use a small number of human-interpretable features (e.g. age and weight), are likely to be easier to interpret than models that use a large number of features or involve heavy ‘pre-processing’,” note the pair.
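The point Binns and Gallo make can be illustrated with a minimal sketch. The model below is hypothetical (the features and coefficients are invented for illustration, not drawn from any ICO guidance): a simple weighted sum over two human-interpretable features, age and weight. Because the model is linear, a reviewer can predict exactly how the output changes for any change in input.

```python
# Hypothetical sketch of an interpretable model: a linear risk score
# over two human-readable features. Coefficients are illustrative only.

WEIGHTS = {"age": 0.03, "weight": 0.01}
BIAS = -2.0

def risk_score(age: float, weight: float) -> float:
    """Weighted sum: each year of age adds 0.03, each kg adds 0.01."""
    return BIAS + WEIGHTS["age"] * age + WEIGHTS["weight"] * weight

base = risk_score(age=40, weight=80)
older = risk_score(age=50, weight=80)

# A reviewer can verify, by hand, that adding 10 years of age
# changes the score by exactly 10 * 0.03 = 0.3.
print(round(older - base, 6))  # 0.3
```

A model with thousands of features, or with heavy pre-processing between the raw inputs and the prediction, offers no such hand-checkable relationship, which is exactly the interpretability gap the pair describe.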
Many industry commentators and analysts agree that this kind of human-touch consideration will be necessary to eradicate AI bias that may have been consciously or even unconsciously programmed into a system at its point of ‘first build’.
The trends here point towards more, not less, human involvement in AI development. That could very arguably change over time as we build AI controls to control AI, if you will pardon the somewhat tautological notion of such a process.
“In order for the training to be effective, it is important that human reviewers have the authority to override the output generated by the AI system and that they are confident that they will not be penalized for so doing. This authority and confidence cannot be created by policies and training alone: a supportive organizational culture is also crucial,” said the ICO’s Binns and Gallo.
What happens next is a question of fine-tuning the application of AI. We need to audit and assess which machine brains are doing what, when and where, and look at their impact on business operations.
Where AI is already embedded into a [software] tool that may have been purchased from a third-party, we may need to stand back and consider the human factor in the machine intelligence we have bought into.
As suggested, this may be a transitional phase in the use of AI, but it is widely agreed to be a real-world, tangible technology concern. The robots are getting smarter, but let’s remember who still pays the electricity bill and take the upper hand where we still can.
29 March 2023