A few months back, Google DeepMind announced that its AI no longer needs humans in order to learn (this refers to the AlphaGo Zero project).
In the ecosystem that includes RPA (Robotic Process Automation), machine learning, and the varying levels of complexity, models and structures in AI, ultimately neural networks, it may eventually become a legal requirement for a computer-based program to be able to repeat its decision-making process, for example in response to a human questioning a decision they feel might be unfair, unlawful or biased. The same requirement may apply to data and the use of misinformation to affect how people vote, and this could not be more sharply illustrated than by the current situation with Facebook and Cambridge Analytica, involved not only in democratic elections but in similar instances such as misinformation distributed selectively to individuals during the Brexit campaign. Such practices strike at what democracy is really about and have the potential to destabilise an entire nation, something now seen in more than one country.
An interesting situation has been created in the US courts, where people's lives are being affected by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool. Note: COMPAS is a commercially available, computerised tool designed to assess offenders' needs and risk of recidivism, to inform decisions regarding the placement, supervision, and case management of offenders in community settings. It is being challenged on the basis that it lacks 'algorithmic transparency'. That challenge was rejected in a US court and the Supreme Court declined to review the case, so it remains a potential time bomb, something that may affect many other people in many other ways. The Supreme Court's unwillingness to review these (non-transparent) processes sets a precedent that may impact many people as our lives become more reliant on AI-based decisions, say for a mortgage application, as a more day-to-day use case.
I can see that requirements for fairness, accountability and transparency may in fact fall within the statutes of basic human rights as far as AI decision-making about humans is concerned, and over the last year there has been a movement towards this approach.
In processes where the decision is formulaic in approach, it may be possible to replicate it and hence show the steps taken in the decision-making process. Where neural networks are concerned, the sheer amount of compute power and the reliance on multiple factors that may not easily be replicated could cause problems with such requirements, especially if we were unable to dissect the decision process. This creates two challenges. The first is that, for such leading-edge technology to be future-proof, the ability to repeat processes, or the concept of bi-temporality (putting the process back in time), may need to be built into such systems.
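As a minimal sketch of what "putting the process back in time" could mean in practice, a decision service might record everything needed to replay a decision later: the exact model version, a snapshot of the inputs, and the random seed. The function and field names below are hypothetical illustrations, not taken from any particular system:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """Everything needed to replay one automated decision later."""
    model_version: str  # exact model artefact that produced the decision
    inputs: dict        # snapshot of the feature values at decision time
    seed: int           # fixes any stochastic behaviour on replay
    decision: str       # the outcome that was returned

def record_decision(model_version, inputs, seed, decision):
    """Build an audit entry with a content hash (hypothetical schema).

    The hash lets a later reviewer verify the record was not altered
    before the decision process is re-run with the same inputs and seed.
    """
    record = DecisionRecord(model_version, inputs, seed, decision)
    payload = json.dumps(asdict(record), sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return {"record": asdict(record), "sha256": digest}

# Example: logging a (hypothetical) mortgage-scoring decision.
entry = record_decision(
    model_version="mortgage-scorer-v1.3",
    inputs={"income": 42000, "loan_to_value": 0.8},
    seed=7,
    decision="declined",
)
```

The point of the sketch is only that repeatability has to be designed in: unless version, inputs and seed are captured at decision time, the process cannot be put back in time at all.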
The second is how one builds the right values into something one no longer controls. There is a growing belief that a black-box approach may no longer be tolerable. At the extreme, could AI on its own decide that humans are a danger to the planet? What are the unforeseen consequences if we relinquish control?
The development of AI without human intervention is sure to trigger the concerns of Elon Musk, Bill Gates and the late Professor Stephen Hawking, all of whom worried that AI could signal the end of humanity.
The attached article details the work of a data scientist who has looked at this challenge and charts a slightly different path from the wholesale adoption of neural networks, for the reasons mentioned above, achieving just as effective outcomes while reducing the known areas of risk.
Caruana has helped develop a generalised additive model known as GA2M that is as accurate as a neural net but allows users to see how predictions are made, allowing them to spot anomalies. “With this new kind of model, you absolutely include all the variables you are most terrified about,” he says, so biases can be identified and adjusted for. This is a better option than removing variables, as bias is likely to affect correlated data as well. He discussed his work at the Fairness, Accountability and Transparency in Machine Learning event in Halifax in Canada on 14 August. “Every complex dataset has these landmines buried in it,” says Caruana. “The most important thing, it turns out, is just knowing you have a problem.” In some applications, the problems will not matter:
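Caruana's GA2M implementation is not reproduced in the article, but the additive idea behind it can be sketched: fit one small "shape function" per feature so that each feature's contribution to a prediction can be inspected directly. The class below is an illustrative single-feature-term generalised additive model, fitted by backfitting with shallow trees; it is a sketch of the concept only, not Caruana's method (GA2M additionally includes selected pairwise interaction terms):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class TinyGAM:
    """Illustrative additive model: prediction = intercept + sum_j f_j(x_j).

    Each f_j is a shallow tree on a single feature, fitted by backfitting,
    so every feature's contribution to a prediction is directly inspectable.
    """

    def __init__(self, n_rounds=20, max_depth=2):
        self.n_rounds = n_rounds
        self.max_depth = max_depth

    def fit(self, X, y):
        n, d = X.shape
        self.intercept_ = y.mean()
        self.trees_ = [None] * d
        pred = np.full(n, self.intercept_)
        contrib = np.zeros((n, d))
        for _ in range(self.n_rounds):
            for j in range(d):
                # Residual with feature j's current contribution removed.
                residual = y - (pred - contrib[:, j])
                tree = DecisionTreeRegressor(max_depth=self.max_depth)
                tree.fit(X[:, [j]], residual)
                new = tree.predict(X[:, [j]])
                pred += new - contrib[:, j]
                contrib[:, j] = new
                self.trees_[j] = tree
        return self

    def contributions(self, X):
        """Per-feature additive contributions: the transparent part."""
        return np.column_stack(
            [t.predict(X[:, [j]]) for j, t in enumerate(self.trees_)]
        )

    def predict(self, X):
        return self.intercept_ + self.contributions(X).sum(axis=1)
```

Because the model is a plain sum of per-feature terms, each column of `contributions(X)` shows exactly how much one variable moved a given prediction. That is what makes it possible to keep "the variables you are most terrified about" in the model and then see, and adjust for, any bias they introduce, rather than removing them and letting the bias leak through correlated data.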