AI can be biased — ethics must not be overlooked
The world is increasingly focused on the development of artificial intelligence and the mobilisation of new technologies. Companies have come to treat innovation as a strategic priority, and they have been investing accordingly.
With the widespread adoption of disruptive technologies that will shape the development of humanity, questions of ethics frequently arise, particularly for AI-based systems, which are often faced with decisions outside their intended scope.
Public perception is divided when it comes to incorporating AI systems into our daily lives: some argue for the benefits, whilst others are conscious of AI's inability to exercise fair judgement in certain situations.
For example, studies reveal that AI systems display bias against women and members of minority ethnic groups, largely because they reproduce patterns present in their training data. When the data used to train an AI system do not contain sufficient information to support a decision, there is a chance it will produce a biased result. Suppose a company has historically hired more men than women; in that case, a bias is likely already embedded in the database, and it will influence future hiring decisions. This, of course, raises questions about a company's ethical responsibility to society and puts more pressure on the board and senior executives to mitigate such risks.
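A minimal sketch makes the hiring example concrete. The data below are entirely hypothetical, and the "model" is deliberately naive: it scores candidates by the historical hire rate of their group, so whatever imbalance exists in the records carries straight into new decisions.

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, was_hired).
# Men were hired at a higher rate than women in the past.
history = [("m", True)] * 70 + [("m", False)] * 30 \
        + [("f", True)] * 20 + [("f", False)] * 30

def group_hire_rate(records):
    """Return the fraction of past candidates hired, per group."""
    hired, total = Counter(), Counter()
    for gender, was_hired in records:
        total[gender] += 1
        hired[gender] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = group_hire_rate(history)
# A model trained on (or scoring by) these rates simply inherits the
# historical imbalance: 70% for men vs 40% for women.
print(rates)  # {'m': 0.7, 'f': 0.4}
```

No sophisticated algorithm is needed to produce the bias; the skew in the data is enough, which is why auditing the training set matters as much as auditing the model.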
What has been done so far to address these issues? Many companies, including Google, Facebook, and Microsoft, are trying to solve these ethical challenges differently, but there are also commonalities in their approach when it comes to the guiding principles such as fairness, inclusiveness, transparency and reliability.
It is great to see tech giants pave the way for smaller players to start implementing these principles, but the difficulty lies in operationalising them. As an active investor in many start-ups developing impressive technologies, from AI and augmented reality to robotics and healthcare technology (StoreDot, General Robotics Ltd, Vocalis Health), I regard the topic of ethics as one of great importance that needs to be taken seriously.
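One way a principle like "fairness" can be operationalised is as a measurable audit check. The sketch below uses demographic parity, a standard fairness metric (the difference in positive-decision rates between groups); it is an illustrative assumption, not a description of how the companies above implement their frameworks, and the audit data are hypothetical.

```python
def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the largest gap in approval rate between any two groups."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit sample: 80% approval for group A vs 50% for group B.
sample = [("A", True)] * 8 + [("A", False)] * 2 \
       + [("B", True)] * 5 + [("B", False)] * 5

gap = demographic_parity_gap(sample)
print(round(gap, 3))  # 0.3 -> large enough to flag for human review
```

A check like this can run on every model release, turning an abstract guiding principle into a number a product team can monitor and set thresholds against.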
Reid Blackman in the Harvard Business Review laid out a useful set of guidelines for building a sustainable AI ethics program:
1. Identify existing infrastructure that a data and AI ethics program can leverage
2. Create a data and AI ethical risk framework that is tailored to your industry
3. Change how you think about ethics by taking cues from the successes in health care
4. Optimize guidance and tools for product managers
5. Build organizational awareness
6. Formally and informally incentivize employees to play a role in identifying AI ethical risks
7. Monitor impacts and engage stakeholders
Few deny the global digital transformation now taking place, and it is important for the tech industry to work towards fairness and equality by addressing the gaps and biases in the data that AI learns from. It is probably safe to say that AI can only be ethical once its driving forces want it to be.