Responsible AI: Best Practices
AI has been around in one way or another for decades. But in its current form, AI is a brand new field, with countless untapped possibilities.
Those possibilities can be either good or bad. Many have fears about AI, including how it is trained, who it affects, and what it is used for.
These concerns are justified, but given how new modern AI technology is, few industry-wide standards exist to determine how to responsibly develop and use AI.
The best approach is to adopt a handful of responsible best practices, and to use sound judgment in assessing and revising them over time. Google has its own set of recommendations, and much of what I have to say here builds on those ideas.
For responsible AI, focus on these principles:
- Fairness
- Transparency
- Privacy
- Safety & Security
Fairness

The way most modern AI systems work makes bias a nearly unavoidable problem. Essentially, an AI is trained by feeding it a massive amount of data, from which it learns to make inferences and connections that it can apply to the real world.
Unfortunately, no set of data is perfect. Data may reflect societal-level biases, such as representing one race or gender more than others. As a result, an AI system trained on this data will inherit the same biases.
This is especially worrying for AI systems with significant real-world consequences, such as medical testing or facial recognition for law enforcement.
Fairness starts with your data. While it can be difficult to clean every dataset and remove all bias, it is a best practice to try. And when you can't, pursue the next best thing: transparency.
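A simple first step in auditing your data is to measure how well each group is represented before training. This is a minimal sketch, not a complete fairness audit; the `representation_report` helper and the toy `data` records are hypothetical, invented here for illustration.

```python
from collections import Counter

def representation_report(records, field):
    """Count how often each value of a demographic field appears,
    as a first step toward spotting representation imbalance."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical toy dataset, heavily skewed toward one group.
data = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
print(representation_report(data, "group"))  # {'A': 0.8, 'B': 0.2}
```

A skewed report like this doesn't prove the resulting model will be unfair, but it flags where to look first, and where collecting more data or resampling might help.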
Transparency

AI often works like a black box: input goes in, output comes out, and the layers in the middle are trained to produce the best output for a given input.
Unfortunately, we don't always know exactly what is going on in the middle. That may even be part of why it works so well; after all, we don't really understand what happens inside our own brains either.
But this lack of transparency can be dangerous, especially when the system is responsible for important decisions. Faced with this level of ambiguity, AI developers should document every understandable part of the system they can: what the data is like, how the system is trained, in what way it is deployed, and so on.
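One lightweight way to capture that documentation is a structured record kept alongside the model, similar in spirit to a "model card". This is a sketch under assumptions: the `ModelCard` class and all of its example values are hypothetical, not a real project's metadata.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    """A structured record of the understandable parts of an AI system."""
    model_name: str
    data_description: str    # what the data is like
    training_procedure: str  # how the system is trained
    deployment_context: str  # in what way it is deployed
    known_limitations: str   # biases or gaps you already know about

# Hypothetical example values for illustration only.
card = ModelCard(
    model_name="demo-classifier",
    data_description="10k labeled support tickets, English only, 2021-2023",
    training_procedure="Fine-tuned transformer, held-out validation set",
    deployment_context="Internal triage tool; a human reviews every decision",
    known_limitations="Underrepresents non-English tickets",
)
print(json.dumps(asdict(card), indent=2))
```

Because the record is plain structured data, it can be versioned with the code and reviewed like any other artifact.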
Fields like cybersecurity have found that a certain level of public transparency is actually beneficial, since many people are more likely to spot a problem than just a few. AI may turn out the same way, given the level of complexity involved.
Privacy

Most people don't know that their data has been used to train an AI program. In fact, your data has almost certainly been used in several AI applications: at the very least, perhaps a Spotify playlist generator or an image-tagging program.
This is naturally a concern, but in many cases there is little that you can do about it, since the details of data sharing are usually hidden in that wall of legalese you click "agree" to when signing up for something.
Most people are fine with this to a degree, with Salesforce reporting that 62% of consumers are open to the use of AI to improve their experience. But they may feel differently about sensitive data, like health or financial information.
In some cases, using highly sensitive data to train AI is genuinely beneficial. Consider the use of cancer patient data to train cancer detection software.
The important thing here is to anonymize the data first, so that anyone using the system later would have no way of tracing any health data back to its original owner. Names and addresses are generally irrelevant to cancer diagnoses, for example.
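A minimal version of that idea is to strip direct identifiers from each record before it ever reaches the training pipeline. This sketch assumes a simple flat-record format; the `anonymize` helper, the `DIRECT_IDENTIFIERS` set, and the example patient record are all hypothetical. Note that dropping names and addresses alone is not full anonymization: combinations of remaining fields can sometimes still re-identify a person, so real pipelines need stronger techniques on top of this.

```python
# Hypothetical list of fields that could trace a record back to a person.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}

def anonymize(record):
    """Drop direct identifiers, keeping only the clinically
    relevant features for training."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {"name": "Jane Doe", "address": "1 Main St",
           "age": 54, "biopsy_result": "benign"}
print(anonymize(patient))  # {'age': 54, 'biopsy_result': 'benign'}
```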
Safety and Security
Protecting the safety of an AI system is difficult, because the output is by nature unpredictable. It is often possible to exploit an AI into delivering unintended output; for example, the "Grandma Trick" used to make ChatGPT say harmful things or provide dangerous advice.
The problem starts before training, actually. Protect your data at all costs: data poisoning is a real threat, and an attacker who can quietly alter your training data can quietly alter your model's behavior.
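A basic defense is to record a cryptographic fingerprint of each dataset snapshot and verify it before every training run, so silent tampering is detected. This is a minimal sketch using Python's standard `hashlib`; the `fingerprint` helper and the toy snapshot are assumptions for illustration, not a complete supply-chain defense.

```python
import hashlib

def fingerprint(dataset_bytes):
    """SHA-256 digest of a dataset snapshot. Comparing digests before
    training detects any modification since the snapshot was recorded."""
    return hashlib.sha256(dataset_bytes).hexdigest()

snapshot = b"label,text\n0,hello\n1,world\n"
recorded = fingerprint(snapshot)  # store this alongside the dataset

# Later, before training, re-check the file you are about to use:
assert fingerprint(snapshot) == recorded  # unchanged, safe to proceed
```

In practice the recorded digest should live somewhere the attacker can't also modify, such as a separate, access-controlled store.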
As a new field, AI can easily be misused. By following these best practices, you can minimize harm to anyone affected by the training or execution of your AI system.
I would add one other general principle to follow: Reflecting & Revising. Because the norms surrounding AI are not yet settled, it is important to constantly reflect on your actual practices and revise them as needed.
The last thing you should do is assume that everything you are doing is perfect, and there is no need for improvement. On the contrary, mistakes are to be expected, identified, and corrected.
At JetRockets, we are developing more and more AI-related projects for our clients. With every prototype or application delivered, we learn a little more about how to build AI responsibly, and fortunately the technology is getting easier to use all the time.