The main criteria used to judge AI are the same ones applied to humans:
- Transparency: What’s going on behind the process?
- Responsibility: Who takes the blame?
- Incorruptibility: How can bias be prevented?
Transparency
An AI must have a chain of processes that can be traced back to identify the cause of its decisions. “AI needs to have a strong degree of traceability to ensure that if harms arise, they can be traced back to the cause,” said Adam Wisniewski, CTO and co-founder of AI Clearing (Lawton and Wigmore).
For example, what happens when a rejected applicant for a home loan brings a lawsuit against the bank, alleging that its lending decision was racially biased? If that decision rests on a complicated multi-layer deep neural network, it may prove nearly impossible to understand why, or even how, the algorithm is judging applicants on the basis of race, gender or ability.
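To make traceability concrete, here is a minimal sketch of the opposite case: a simple scoring model whose decision logic can be read off directly from its learned coefficients. The feature names and data are hypothetical, invented purely for illustration, not any real bank's model.

```python
# A minimal sketch of model traceability: a logistic regression's learned
# coefficients can be inspected directly, so a rejected applicant's score
# can be traced back to individual input features. All feature names and
# data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical
X = rng.normal(size=(200, 3))
# Synthetic labels: approval loosely driven by income and employment history.
y = (X[:, 0] + 0.5 * X[:, 2] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly a feature pushes a decision toward
# approval (positive) or rejection (negative) -- an audit trail that a
# multi-layer deep network does not offer out of the box.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A deep neural network may outperform such a model, but the trade-off is exactly the loss of this kind of audit trail.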
Responsibility
The issue of responsibility (who takes the blame?) must also be addressed. Society is still sorting out who is accountable when decisions made by AI systems have catastrophic consequences. The rules and regulations surrounding AI-based decisions need to be worked out in a process that includes representatives from many areas of society.
Incorruptibility & Bias
In datasets involving personally identifiable information, it is extremely important to ensure that there are no biases in terms of race, gender or ethnicity. In addition, personal data should not be used by AI algorithms for purposes other than those for which they were created.
The risk of misuse of personal data should be analysed at the design stage to minimise the risks to individuals, and safety measures should be introduced to reduce any adverse effects. And as companies gain productivity from AI trained on people's personal information, how will the profits gained from big data be shared fairly and equitably across wider society?
Bias in computer systems can arise from:
- preexisting bias, carried into the system from the attitudes and practices of society and its institutions
- technical bias, introduced by technical constraints or design decisions
- emergent bias, which appears when a system is used in a context its designers did not anticipate.
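Whatever its source, one symptom of bias can be checked mechanically: unequal outcomes across demographic groups (often called demographic parity). The sketch below uses invented records and group labels purely for illustration.

```python
# A toy check for unequal approval rates across groups. The records and
# group labels are invented for illustration only.
from collections import defaultdict

applications = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in applications:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]  # True counts as 1

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"group {group}: approval rate {rate:.0%}")
# A large gap between groups is a flag for further investigation,
# not proof of bias on its own.
```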
The battle against ‘fake news’
As news media and social platforms become increasingly AI driven, bad actors can target specific populations with misleading information in an attempt to influence public opinion, or even political elections. What happens when we can no longer trust our news sources and social media feeds? AI is now being used on both sides of this battle: to generate political propaganda and other forms of fake news, and to detect and defend against them.
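The defensive side is often framed as a text-classification problem. Below is a toy sketch of that idea; the tiny labelled dataset is invented, and a real system would need far more data and careful evaluation before it could be trusted.

```python
# A toy fake-news detector: a text classifier trained to flag suspect
# headlines. The headlines and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Shocking secret cure doctors don't want you to know",
    "You won't believe what this politician did next",
    "Central bank raises interest rates by a quarter point",
    "City council approves new public transport budget",
]
labels = [1, 1, 0, 0]  # 1 = suspect, 0 = legitimate (toy labels)

# TF-IDF turns headlines into word-weight vectors; the classifier then
# learns which words are associated with each label.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(headlines, labels)

print(classifier.predict(["Secret cure they don't want you to know about"]))
```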
The future of the automated workplace
Automation will replace some jobs in the future workforce and augment others. Many workers will be empowered by new AI tools that enable them to work more quickly and efficiently, but many companies will also have to account for the jobs lost to automation.
A large portion of the workforce will have to be retrained for the new jobs that automation creates. The challenge will be deciding how to retrain and redeploy employees whose jobs have been automated or augmented.
Government, employers and automation companies will need to work together as automation changes the landscape of work, and determine what fair compensation and a sustainable living mean for both current and displaced workers.