AI Ethics: What You Need to Know Before Building a Bot

February 05, 2019

Conversational Messaging

When it comes to AI ethics, people have concerns. Start typing “will AI” into Google’s predictive search and you’ll see just how troubled people are. Many people worry that AI will replace them; some are even concerned that it will destroy humanity.

At Hubtype, our products are built around empowering people, not replacing them. We’ve seen how chatbots and humans work better together, and how bots can actually improve people’s jobs. But that doesn’t mean there isn’t a real need for AI ethics. We’ve put together a guide to AI ethics that will help you ask (and answer) the right questions.

AI Ethics Rule #1: Acknowledge the risk of prejudices

AI applications and services are developed by people. And unfortunately, we humans are flawed by nature. AI relies on data recorded and supplied by humans, so the risk of unintentional bias is very real.

Take, for example, the AI being used to predict crimes. A 2016 ProPublica investigation concluded that the data driving an AI system used by judges appeared to be biased against minorities. The algorithm was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them at almost twice the rate of white defendants.
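
This kind of disparity is measurable before a model ever ships. Below is a minimal, hypothetical audit sketch in Python: it computes the false positive rate per demographic group so that any gap surfaces early. The record fields and group labels are illustrative inventions, not ProPublica’s data or methodology.

```python
# Hypothetical bias audit: compare false positive rates across groups.
# Field names ("group", "predicted_high_risk", "reoffended") are illustrative.
from collections import defaultdict

def false_positive_rate(records):
    """FPR = false positives / all actual negatives."""
    false_pos = sum(1 for r in records if r["predicted_high_risk"] and not r["reoffended"])
    negatives = sum(1 for r in records if not r["reoffended"])
    return false_pos / negatives if negatives else 0.0

def fpr_by_group(records):
    """Split records by group and compute each group's FPR."""
    groups = defaultdict(list)
    for r in records:
        groups[r["group"]].append(r)
    return {g: false_positive_rate(rs) for g, rs in groups.items()}

# Toy example: a large gap between groups is a red flag worth investigating.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]
print(fpr_by_group(records))  # {'A': 0.5, 'B': 0.0}
```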

AI Ethics Rule #2: Make sure your data is balanced

When we talk about potential issues in AI ethics, a major risk is unbalanced datasets. This means that a certain class or group is over- or underrepresented in the training data, which skews how the model explains and predicts events. When this happens, you can end up with the type of prejudice mentioned above.

Below are some techniques you can use to make sure your data is balanced:

  • Undersampling
  • Oversampling
  • Synthetic Data Generation
  • Cost-Sensitive Learning

We won’t get into the weeds here, but this scholarly article does a good job explaining each method.
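
To make the first two techniques concrete, here’s a rough sketch in plain Python, assuming the dataset is a simple list of (features, label) pairs. In practice you’d more likely reach for a library such as imbalanced-learn.

```python
# Sketch of random undersampling and oversampling for a labeled dataset.
# `samples` is assumed to be a list of (features, label) tuples.
import random

def undersample(samples, seed=42):
    """Randomly drop majority-class samples until all classes are equal."""
    rng = random.Random(seed)
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append((features, label))
    smallest = min(len(group) for group in by_label.values())
    balanced = [s for group in by_label.values() for s in rng.sample(group, smallest)]
    rng.shuffle(balanced)
    return balanced

def oversample(samples, seed=42):
    """Randomly duplicate minority-class samples until all classes are equal."""
    rng = random.Random(seed)
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append((features, label))
    largest = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=largest - len(group)))
    rng.shuffle(balanced)
    return balanced
```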

AI Ethics Rule #3: Be transparent

People don’t like to be deceived. Don’t build a bot that pretends to be human: make it clear when your customers are talking to a person and when they’re talking to a bot. Google recently learned this the hard way when it presented Google Duplex, its new intelligent assistant. The people interacting with the bot during the demo clearly thought they were talking to a human, which raised questions about how deceptive Google’s technology could be.
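
The practical takeaway is simple: disclose what the user is talking to, both at the start of a conversation and at every human handover. A hypothetical sketch of that disclosure (the brand name and wording are placeholders):

```python
# Hypothetical disclosure messages; brand name and copy are placeholders.
def greeting(agent_is_bot: bool, brand: str = "Acme") -> str:
    """Tell the customer up front whether they're talking to a bot or a human."""
    if agent_is_bot:
        return (f"Hi! I'm the {brand} virtual assistant (a bot). "
                "Type 'agent' at any time to reach a human.")
    return f"Hi! You're now chatting with a human agent from {brand}."

print(greeting(agent_is_bot=True))
print(greeting(agent_is_bot=False))
```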

AI Ethics Rule #4: Protect your customers’ privacy

Make sure your customers know when and how you collect their data. People are surprisingly willing to give chatbots and AI highly personal information. For example, some app-based therapy companies aim to help people with depression and anxiety. These types of conversations contain sensitive and private data.

Make sure you’re diligent about how data is stored, and always get your customers’ consent before collecting data.
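
As a sketch of what “consent before collection” can look like in code, here’s a hypothetical gate that refuses to persist any message until the user has explicitly opted in. The function names and in-memory stores are illustrative stand-ins for a real database.

```python
# Hypothetical consent gate: nothing is stored without an opt-in on file.
from datetime import datetime, timezone

consent_log = {}  # user_id -> consent timestamp (stand-in for a database)

def record_consent(user_id: str) -> None:
    """Record that the user explicitly agreed to data collection."""
    consent_log[user_id] = datetime.now(timezone.utc)

def store_message(user_id: str, text: str, storage: list) -> bool:
    """Persist a message only if the user has consented; return success."""
    if user_id not in consent_log:
        return False  # no consent on file: discard rather than store
    storage.append({
        "user": user_id,
        "text": text,
        "consented_at": consent_log[user_id].isoformat(),
    })
    return True

store = []
print(store_message("user-1", "I've been feeling anxious", store))  # False
record_consent("user-1")
print(store_message("user-1", "I've been feeling anxious", store))  # True
```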

AI Ethics Rule #5: Make sure your bot knows how to handle abuse

We’d like to think everyone will be nice to AI, but that’s certainly not the case. How your bot handles rude comments is also a matter of AI ethics. Many people believe that AI should recognize abusive language and push back against it; they’d also like bots to reward users who are unusually friendly and polite.

While this might seem unnecessary, it’s particularly important for young users. Kids can pick up bad habits from barking orders at technology.
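
A production bot would typically rely on a trained toxicity classifier, but even a hypothetical keyword-based sketch shows the shape of the behavior: detect abuse, push back firmly but politely, and acknowledge courtesy. The term lists are illustrative only.

```python
# Hypothetical abuse handling; term lists are illustrative, not exhaustive.
ABUSIVE_TERMS = {"stupid", "idiot", "shut up"}
POLITE_TERMS = {"please", "thank you", "thanks"}

def respond(message: str) -> str:
    """Push back on abusive language and reward politeness."""
    text = message.lower()
    if any(term in text for term in ABUSIVE_TERMS):
        return ("I'm here to help, but I can't continue "
                "if the conversation stays abusive.")
    if any(term in text for term in POLITE_TERMS):
        return "Thanks for being so polite! How can I help?"
    return "How can I help?"

print(respond("Shut up, you stupid bot"))     # pushback
print(respond("Could you help me, please?"))  # polite acknowledgement
```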

AI Ethics Rule #6: Encourage self-scrutiny

Put guardrails in place that will set you up for success. Larger companies should establish ethics boards and write ethics charters to serve as a framework for all AI. Self-regulation is a great way to avoid the need for a third party or government body to get involved.

Microsoft is just one of the tech companies setting the standard in AI ethics: it will soon add an ethics review focused on AI issues to its standard checklist of product audits. Apple, Amazon, Microsoft, Google, IBM, and Facebook have also jointly founded a nonprofit, the Partnership on AI, to focus on ethical issues.

Ready to build your own conversational chatbot? If you’re a developer, start using our platform now, or contact us for more information.