(Part-2 of AI Ethics Series)
As the field of AI evolves, AI ethics must evolve along with it. Wherever you find a significant pro, a con tends to accompany it! The art lies in handling those cons in a systematic way, and that is what AI ethics is all about.
Before selecting data and training models, it is important to carefully consider the human needs an AI system should serve — and whether it should be built at all.
Human-centered design (HCD) is an approach to designing systems that serve people’s needs. Every team should adopt it before setting out to build an AI model. Here are a few simple steps you can consider:
1. Understand people’s needs to define the problem
Understanding people’s requirements in their current journey is the way to uncover their unaddressed needs. This can be done by conducting surveys, assembling focus groups, reading user feedback, and so on. Every team member should get involved in this step, so that each of them gains an understanding of the people they hope to serve. Your team should include and involve people with diverse perspectives and backgrounds, across race, gender, and other characteristics. Sharpen your problem definition and brainstorm creative and inclusive solutions together!
2. Ask how AI will add merit to your problem statement
Once you have clearly framed your problem, it is important to analyze how AI would create an impact in solving it. Remember: AI can improve efficiency and reduce the impact of human error, but these benefits are not automatic!
Consider these questions:
- Would people generally agree that your idea produces a good outcome?
- Would non-AI systems — such as rule-based solutions, which are easier to create and maintain — be significantly less effective than an AI system for this particular problem statement?
- Is this a task that people would find boring, repetitive, or otherwise difficult to concentrate on?
- Have AI solutions proven to be better than other solutions for similar use cases in the past?
If you answered no to any of these questions, an AI solution may not be necessary or appropriate.
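The checklist above can be sketched as a simple go/no-go gate. The question keys and the all-yes rule below are illustrative assumptions for this sketch, not a formal standard:

```python
# A minimal sketch of the go/no-go checklist above. The question names
# are hypothetical labels for the four questions in this post.

CHECKLIST = [
    "people_agree_outcome_is_good",
    "non_ai_significantly_less_effective",
    "task_hard_for_humans_to_focus_on",
    "ai_proven_better_on_similar_cases",
]

def should_consider_ai(answers: dict[str, bool]) -> bool:
    """Return True only if every checklist question was answered 'yes'.

    An unanswered question counts as 'no', per the post's rule that a
    single 'no' means AI may not be necessary or appropriate.
    """
    return all(answers.get(question, False) for question in CHECKLIST)

answers = {question: True for question in CHECKLIST}
answers["people_agree_outcome_is_good"] = False
print(should_consider_ai(answers))  # prints False: one 'no' rules AI out
```

The all-or-nothing rule mirrors the sentence above: answering no to any one question is enough to reconsider an AI solution.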
3. Consider the potential harms that AI could cause
Weigh the benefits of using AI against the potential harms at every stage, from collecting and labeling the data to developing the AI system for a particular problem statement. Consider the impact on users of the AI product. This stage should involve a lot of human intervention: judging the product carefully, trying to avoid potential harms as far as possible, and ensuring careful data selection and operation of the system. If you estimate that the harms are likely to outweigh the benefits, do not build the system.
4. Start with a non-AI solution first!
Remember that developing an AI product is costly. Before you actually start on an AI-based solution, build a non-AI prototype to see how people respond to it immediately. Such prototyping is easier, faster, and less expensive. Take the necessary feedback from people with diverse backgrounds.
5. Difficult roads always lead to beautiful destinations!
Challenge is required! The people you select should include those who have the power to judge your work and pose different challenges. Always remember: do not limit your challenges, challenge your limits!
6. Build in safety measures
Safety measures protect users against harm. They seek to limit unintended behavior and accidents by ensuring that a system reliably delivers high-quality outcomes. This can only be achieved through extensive and continuous evaluation and testing. Design processes around your AI system to continuously monitor performance, delivery of intended benefits, reduction of harms, fairness metrics, and any changes in how people are actually using it.
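As one concrete example of such monitoring, here is a minimal sketch that computes a simple fairness metric from production decision logs. The log format and the demographic-parity gap used here are assumptions for illustration; it is one simple metric among many, not the definitive way to monitor fairness:

```python
# A minimal monitoring sketch, assuming you log each model decision as a
# (group, approved) pair. The "demographic-parity gap" computed here is
# the largest difference in approval rate between any two groups.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs from production logs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

log = [("a", True), ("a", True), ("a", False),
       ("b", True), ("b", False), ("b", False)]
# group "a" is approved 2/3 of the time, group "b" only 1/3
print(round(parity_gap(log), 2))  # prints 0.33
```

In a real deployment you would run a check like this on a schedule and alert when the gap drifts past a threshold your team has agreed on.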
Hope this gives you some insight into how to proceed when building an AI system.
Is your AI model biased? If so, how can you improve fairness?
Let’s explore these questions in the next blog. Till then, goodbye!