Recent negative publicity surrounding photos of self-harm being linked to suicides in young people has highlighted some of the problems with Artificial Intelligence (AI). Platforms such as Instagram show you "more of the same": if you view and like certain photos, you will be presented with similar ones. That works well for pictures of furry animals, but far less well for images of self-harm, especially if the viewer is in a fragile state. The decision on which photos to show is driven by algorithms that use AI to learn for themselves: they look at previous user behaviour and apply the results to future decisions. The social media example shows that blanket rules for decision making don't always work. But the issue causing growing concern is that self-learning algorithms can have inherent bias built in.
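The "more of the same" behaviour can be sketched as a toy item-similarity recommender. Everything here - the item names, the feature vectors, the averaging approach - is invented purely to illustrate the idea, not a description of how Instagram actually works:

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(liked, candidates, top_n=2):
    # Average the liked items' features into a single "taste profile",
    # then rank unseen items by similarity to that profile.
    profile = [sum(col) / len(liked) for col in zip(*liked.values())]
    ranked = sorted(candidates, key=lambda c: cosine(candidates[c], profile),
                    reverse=True)
    return ranked[:top_n]

# Made-up data: a user who has only liked kitten photos.
liked = {"kitten1": [0.9, 0.1, 0.0], "kitten2": [0.8, 0.2, 0.1]}
candidates = {"puppy": [0.85, 0.15, 0.05],
              "cityscape": [0.1, 0.2, 0.9],
              "kitten3": [0.95, 0.05, 0.0]}
print(recommend(liked, candidates))  # items closest to the profile win
```

The point of the sketch is that the loop is self-reinforcing: whatever the user engages with, the profile drifts towards, and the system serves more of it - regardless of whether the content is furry animals or something harmful.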
In an example from 2015, Google's face recognition software was less accurate on dark-skinned faces - in fact it labelled some black people's faces as gorillas. Google apologised and corrected the error, which appears to have been caused by the data used to "teach" the software: the training set was predominantly white male faces. AI was also in the spotlight when Microsoft released "Tay", a "teen girl" chatbot, on Twitter. Real-world interactions that the development team hadn't anticipated turned Tay into a Hitler-loving sex robot within 24 hours. Bias is becoming a more important issue given the rise of automated decision making in everyday life. It could mean, for example, that certain groups are excluded from loan approvals because of inherent bias in the algorithms used.
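One reason this kind of bias goes unnoticed is that a single overall accuracy figure can hide it. A minimal sketch of disaggregated evaluation - breaking accuracy down per group - shows the problem; all the numbers below are made up purely to illustrate the arithmetic:

```python
def accuracy(results):
    # Fraction of evaluation samples the model got right.
    return sum(r["correct"] for r in results) / len(results)

def accuracy_by_group(results):
    # The same metric, but reported separately for each group.
    groups = {}
    for r in results:
        groups.setdefault(r["group"], []).append(r)
    return {g: accuracy(rs) for g, rs in groups.items()}

# Imagined evaluation set: 8 samples from a majority group A,
# only 2 from a minority group B - mirroring a skewed training set.
results = ([{"group": "A", "correct": True}] * 8
           + [{"group": "B", "correct": True},
              {"group": "B", "correct": False}])

print(accuracy(results))           # 0.9 overall - looks acceptable
print(accuracy_by_group(results))  # {'A': 1.0, 'B': 0.5} - reveals the gap
```

Because group B contributes so few samples, its poor results barely dent the headline number - which is exactly why per-group evaluation matters when the training data is unrepresentative.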
In a recent talk at Stoke on Tech, Allison Gardner of Women Leading in AI suggested that AI should not be used just because it can be: there should be a positive reason for using it, and steps should be taken to avoid bias and unexpected behaviour. She said that GDPR can actually have a positive effect on algorithms, but it will depend on how it's interpreted and implemented.
Up to now, companies have hidden behind algorithms to avoid accountability. Going forward, the automated-processing rules in the GDPR mean they will have to provide detailed accounts of how automated decisions are made, and carry out checks to ensure that their systems are working as expected. There is an argument that if a company can't explain its decision making, the algorithm shouldn't be used at all.
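What an explainable automated decision might look like can be sketched with a simple linear scorecard, where each factor's contribution to the outcome can be reported back to the applicant. The feature names, weights, and threshold here are all invented for illustration - real credit scoring is far more involved:

```python
# Hypothetical scorecard: weights and threshold are made up.
WEIGHTS = {"income_band": 2.0, "years_at_address": 0.5, "missed_payments": -3.0}
THRESHOLD = 4.0

def decide(applicant):
    # Each feature's contribution is computed separately, so the
    # decision can be explained factor by factor - the kind of
    # account the GDPR's automated-processing rules call for.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return {"approved": score >= THRESHOLD,
            "score": score,
            "explanation": contributions}

decision = decide({"income_band": 3, "years_at_address": 4, "missed_payments": 1})
# score = 6.0 + 2.0 - 3.0 = 5.0, so approved, with each factor itemised
```

A model this simple is auditable by construction; the harder question the article raises is what to do when a more opaque model can't produce an explanation like this at all.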
But it may be necessary to go further than that and review algorithms before they are implemented, measuring the impact on target groups, on all groups, and on society as a whole. Allison gave two examples: the NHS has produced a detailed guide on the use of AI techniques, with 10 principles that should be followed, and the Canadian Government has produced an Algorithmic Impact Assessment, a list of 57 questions that should be asked when developing AI.
As well as looking at the value of AI and being fair about its use, its limitations should be understood. If the team developing a solution is not diverse, it should ensure that alternative views are considered, either by bringing in people from more diverse backgrounds or by using ethics committees. Testing should also use real-world data: in the Twitter example above, it was real Twitter users who "taught" the chatbot, and that kind of unanticipated input needs to be planned for.
Given the growth in the use of AI, some kind of certification or quality mark may even become necessary. If you're not sure whether bias exists, try Googling images of "unprofessional hair styles in the workplace" and see what comes up.
To find out more about the use of AI in your business you can contact us.