Do AI programmers need a Hippocratic oath?
Medical practitioners take the Hippocratic oath and swear to do no harm before they take their ‘trusted’ place in society. Artificial intelligence (AI) programmers are the architects of tomorrow’s society.
Like medical practitioners, AI programmers have the skill to build or destroy critical infrastructure, protect or jeopardize financial institutions, and augment or weaken a country's judicial system.
By that logic, AI programmers must take a version of the Hippocratic oath that promises to do no harm, through their creations, to the society that they’re building and supporting.
The Association for Computing Machinery (ACM) recognizes this and has recently issued the ACM Code of Ethics and Professional Conduct which “is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way”.
And while the document itself is quite interesting, the ACM also issued a note comparing the first and second drafts of its 2018 Code of Ethics, which highlights some key pointers about how the world (and AI programmers) should work:
# 1 | Follow a set of principles
The code changed its phrasing from “imperative” to “principle” throughout, reflecting the fact that the guidance provided is not a set of rules, but a set of principles that must be given due weight when making decisions.
Throughout the code, many instances of “must” or “will” have also been changed to “should.” This reflects the move from imperatives to principles, and also the broadened applicability of the Code.
# 2 | Help the masses
In the final copy of the Code, paramountcy of the public good has been substantially strengthened.
There is also a strong emphasis on prioritizing the least advantaged. The ACM, in its Code, also encourages pro-bono work and urges programmers to ensure that the public good is a central concern during all professional computing work.
The ACM believes that the public good should always be an explicit consideration when evaluating tasks associated with research, requirements analysis, design, implementation, testing, validation, deployment, maintenance, retirement, and disposal.
# 3 | Avoid obedience to authority without careful thought
The ACM clarifies that obedience to authority without careful thought is not supported by the Code.
Computing professionals should protect confidentiality unless required to do otherwise by a bona fide requirement of law or by another principle of the Code.
User data observed during the normal duties of system operation and maintenance should be treated with strict confidentiality, except in cases where it is evidence for the violation of law, of organizational regulations, or of the Code.
In these cases, the ACM recommends that the nature or contents of that information should not be disclosed except to appropriate authorities, and the computing professional should consider thoughtfully whether such disclosures are consistent with the Code.
Hippocratic oath for AI: Useful?
Well, having such an oath (or code) is definitely useful, to say the least.
Remember the recent experiment that the Massachusetts Institute of Technology (MIT) conducted on behavior and AI, and Norman, the ‘psychopath’ AI bot born out of the experiment?
Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on what can go wrong when biased data is fed to machine learning algorithms.
Well, through its experiment, MIT proved that “when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.”
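MIT's point can be illustrated with a toy sketch (this is illustrative only, not MIT's actual model or data): the same learning code, given two different training sets, ends up with very different "vocabularies" for describing the world.

```python
from collections import Counter

def train_captioner(training_captions):
    """A toy 'captioner': it simply learns which substantive words are
    most frequent in its training data and reuses them to describe input.
    Short filler words (<= 3 letters) are ignored."""
    counts = Counter(
        word
        for caption in training_captions
        for word in caption.split()
        if len(word) > 3
    )
    return [word for word, _ in counts.most_common(3)]

# Identical algorithm, two different (made-up) training sets:
neutral_data = [
    "a bird sitting on a branch",
    "a group of people standing together",
]
dark_data = [
    "a man is shot dead",
    "a man is shot in front of his wife",
]

print(train_captioner(neutral_data))  # vocabulary learned from neutral data
print(train_captioner(dark_data))     # same code, much darker vocabulary
```

The algorithm never changes between the two runs; only the data does, which is exactly the distinction the MIT experiment was making.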
With the Code in place, feeding AI algorithms ‘bad’ data can be avoided.
The ACM, by stipulating that “the consequences of data aggregation and emergent properties of systems should be carefully analyzed”, ensures programmers are always aware of the long-term effects of the bots and solutions they create.
Let’s build AI for good
“AI has huge potential to help the world – if we stigmatize and prevent its abuse,” said Future of Life Institute President Max Tegmark.
“There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI,” Tegmark added.
The fact is, we’re living in a world where AI is possible and real. Look at Sophia, the AI robot who has been awarded Saudi Arabian citizenship and has become the first non-human to be given a UN title (UNDP Innovation Champion).
She attends conferences, gives keynote speeches, and speaks to journalists every day.
Soon, there will be a fleet of robots just like her, and a million more AI-powered algorithms operating from servers and cloud platforms, running companies, and making multi-million dollar business decisions.
The future is exciting, but with AI programmers architecting that future, sticking to a code or even an oath can be very powerful.
If nothing else, every AI programmer should just commit to one thing: To teach AI to do no harm to the world around it, under any circumstances.
24 April 2019