Artificial Intelligence (AI) is continuing to shape the way we live and work. From helping us complete everyday tasks to aiding in medical diagnoses, its data-driven automated decision-making improves our efficiency and accuracy, and in some instances, even performs better than human intuition or expertise in complex tasks.
As with any technology, it’s important that we don’t get clouded by the convenience and efficiency AI provides. We must constantly evaluate whether it is being used responsibly and ethically, or risk damaging the credibility you (or your organization) have built with your audience.
Here are some ways to ensure that AI is used responsibly:
1. Prioritize transparency and explainability
AI should be deployed in a way that ensures everyone understands the decisions it is making, and how they are being made.
However, the inner workings of AI models are often opaque, and their predictions can be hard to explain, even for informed users. This lack of full transparency is probably acceptable if AI is recommending lunch options, or deciding whether an image is a hot dog or not. The stakes are far higher when the AI model is in charge of a hiring process, a cancer diagnosis, or a risk assessment of individuals in a criminal justice context.
There are two approaches that can ensure transparency in your AI model:
- Interpretable AI (IAI) uses simple models, algorithms, and techniques – like decision trees and linear regressions – that make it easy to understand the reasoning behind the decisions made.
- Explainable AI (XAI) uses more complex models and algorithms that can give users insight into how the AI arrived at its decisions, but not full transparency into the inner workings of the system. For example, a highly explainable AI model that approves loans would be able to tell us that:
- Credit Score accounts for 50% of the prediction.
- Income accounts for 30% of the prediction.
- Career accounts for 15% of the prediction.
- Age accounts for 5% of the prediction.
Having this kind of clarity helps measure the impact of an AI model from both the users’ and the organizations’ perspectives.
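The loan-approval breakdown above can be sketched in code. This is a minimal, hypothetical illustration – the feature names and weights are invented, not drawn from any real model – showing how a linear model’s learned weights can be normalized into the kind of per-feature contribution percentages described above:

```python
# Hypothetical learned weights from a simple linear loan-approval model.
weights = {"credit_score": 0.50, "income": 0.30, "career": 0.15, "age": 0.05}

def contribution_percentages(weights):
    """Normalize absolute weights into per-feature contribution percentages."""
    total = sum(abs(w) for w in weights.values())
    return {name: round(100 * abs(w) / total) for name, w in weights.items()}

print(contribution_percentages(weights))
# → {'credit_score': 50, 'income': 30, 'career': 15, 'age': 5}
```

Real explainability tooling (e.g. feature-attribution libraries) is more sophisticated, but the goal is the same: a breakdown a user can actually read.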
2. Design for fairness and accountability
Fairness means having AI models that are not biased toward attributes such as race, age, and gender in their decision-making process, while accountability means that it’s possible to pinpoint and assign responsibility for a decision made by the AI models.
Bias can originate from preferences or exclusions in the training data. For example, if you collect healthcare data only from medical centers, it naturally excludes most people without health insurance. So in order to mitigate any potential bias, think critically about your data collection methods by identifying the purpose of your application, the demographics of its audience, and the ways they will be impacted by the outcome. Then engage with a diverse set of users and use-case scenarios, monitor your model’s performance with user surveys, and implement fairness indicators throughout the lifecycle.
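One of the fairness indicators mentioned above can be sketched very simply. The example below computes demographic parity difference – the gap in positive-outcome rates between two groups – on hypothetical, invented outcome data; a real audit would run this over actual model predictions across real demographic groups:

```python
def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g. 'hired' or 'approved') in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups; 0 means perfectly even."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical outcomes (1 = positive decision, 0 = negative) for two groups.
group_a = [1, 0, 1, 1, 0, 1]
group_b = [1, 0, 0, 0, 1, 0]
gap = demographic_parity_difference(group_a, group_b)
print(f"Selection-rate gap: {gap:.2f}")  # flag for review if above your threshold
```

Tracking a metric like this throughout the model’s lifecycle – not just at launch – is what turns fairness from an aspiration into a monitored property.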
The pursuit of fairness and accountability isn’t a nice-to-have; it’s becoming law. The first piece of legislation in the United States to target the use of AI in the hiring process – New York City’s Local Law 144, enforced by the Department of Consumer and Worker Protection – requires employers to conduct bias audits on automated employment decision tools and provide certain notices about those tools to job applicants.
3. Prioritize data privacy
In today’s increasingly privacy-conscious world, data protection should be at the core of any AI system design. Because AI models require enormous amounts of data to train, it’s important to ensure that data can’t be used or shared inappropriately.
Organizations need to be purposeful about what they really need for the model. Just because a service needs data doesn’t mean we need to sacrifice user privacy in the process. We can use anonymized personal data – collected by removing identifiers and confidential attributes – and still achieve good results. And more data isn’t necessarily better. Test your solution with less, and focus on finding the minimum amount of information your application needs.
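The basic idea of removing identifiers before data reaches a training pipeline can be sketched as follows. The field names here are hypothetical, and real anonymization must also account for quasi-identifiers (like precise location or birth date) that can re-identify people in combination – stripping direct identifiers is only the first step:

```python
# Hypothetical set of direct identifiers to strip before training.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "ssn"}

def anonymize(record):
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "email": "jane@example.com",
          "age_band": "30-39", "zip3": "100"}
print(anonymize(record))
# → {'age_band': '30-39', 'zip3': '100'}
```

Note that the retained fields are already coarsened (an age band rather than a birth date, a ZIP prefix rather than a full address) – coarse data is often all the model needs.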
If there’s no way to avoid using private data, be transparent with your audience about how their data will be used by providing a clear, concise consent mechanism. It’s also important to future-proof: give users a way to change their consent at any time.
Aside from data collection, data governance is also key. Be mindful of how sensitive data is stored and who can access it. Remember, your data is a gold mine to people outside your organization, so safeguard your users’ data as if it were your own.
As AI becomes more mainstream and accessible, and machines and humans increasingly influence each other, it’s important to maintain authentic relationships with your audience. We must never stop prioritizing and advocating for our users – and the machines, being the good learners they are, shall follow.
At M Booth’s Innovation Tech Team, human-centric design is our core belief. We are always building new products and solving new problems. Hit us up at email@example.com! We would love to chat.