Oct 1, 2020

Infusing AI with Legal Regulations


The authors of Prediction Machines make many excellent points about the applications of AI in our everyday lives. One of the more critical sections of the book deals with the harm that can result from the actions of fully autonomous systems. This issue has been discussed at length by scientists, sci-fi writers, and filmmakers. Typically, the starting point is a set of ideas about ethics and socially acceptable behavior that such systems must follow. Isaac Asimov's Laws of Robotics, formulated many decades ago, are an excellent example. Inevitably, "trolley problems" that force the decision-maker to choose the lesser of two unavoidable harms will occur daily. In the current legal environment, a human driver involved in a traffic accident might defend their actions by stating that there was not enough time to predict or avoid the conditions that led to the accident.

For AIs, this defense will not be possible. The lawyers and the jury will look at the system logs, the video captured by the cameras, and the radar data. Millisecond by millisecond, they will analyze the decision-making process and ascertain compliance with existing laws. Some parts of such systems will probably be knowledge-based, i.e., the decision-making system will use pre-programmed facts that ensure adherence to traffic lights, speed limits, etc. Other parts of the system, however, will be based on machine learning. That part of the AI will "learn from experience" behaviors that are too difficult to program explicitly. This component will most likely be built on neural networks, which are very difficult to modify in order to introduce a legal rule or an ethical standard. Even though a significant amount of research into knowledge-based neural networks exists, deep learning architectures will be very tricky to augment with hard-coded rules.
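One common way to combine the two is to let the learned component propose actions and let a hand-written rule layer constrain them. The sketch below illustrates that idea in Python; all of the names (PerceptionState, learned_policy, legal_rule_layer) and the specific rules are hypothetical, and a real driving stack would be vastly more involved.

```python
# A minimal sketch of a hybrid controller: a learned policy proposes an
# action, and a hand-written rule layer enforces legal constraints on it.
# All names and thresholds here are illustrative, not from a real system.

from dataclasses import dataclass

@dataclass
class PerceptionState:
    speed_limit_kph: float          # from map data / sign recognition
    light_is_red: bool              # from the camera pipeline
    distance_to_stop_line_m: float

@dataclass
class Action:
    target_speed_kph: float

def learned_policy(state: PerceptionState) -> Action:
    # Stand-in for a neural network: proposes a driving action.
    return Action(target_speed_kph=62.0)

def legal_rule_layer(state: PerceptionState, proposed: Action) -> Action:
    """Hard-coded rules that clamp the learned proposal to legal bounds."""
    speed = min(proposed.target_speed_kph, state.speed_limit_kph)
    if state.light_is_red and state.distance_to_stop_line_m < 50.0:
        speed = 0.0  # rule: stop for a red light
    return Action(target_speed_kph=speed)

state = PerceptionState(speed_limit_kph=50.0, light_is_red=False,
                        distance_to_stop_line_m=120.0)
final = legal_rule_layer(state, learned_policy(state))
print(final)  # Action(target_speed_kph=50.0): the 62 km/h proposal was clamped
```

The appeal of this split is auditability: the rule layer is exactly the part a court could inspect line by line, while the learned proposal remains a black box.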

At some point, the political and legal communities will decide to tackle the autonomy issue. We will probably need a significant effort from the research community to convert large parts of legal statutes into an executable format. Let's not forget that this "legal knowledge base" will need to work with systems that take varied inputs, e.g., vision, sound, radar, and network data, and produce an output. This means that the environment that constitutes the input to the inference engine is enormously complex. That task alone seems to be no less complex than building the autonomous AI itself.
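What might "statutes in executable format" look like? One hedged guess is a set of rule objects, each pairing a legal citation with a predicate over a structured world state, that an inference engine can evaluate and log. The sketch below is only illustrative: the citations, the state fields, and the rule logic are all invented for the example.

```python
# A minimal sketch of executable statutes: each rule pairs a citation with
# a predicate over a structured world state. Citations and rules are invented.

from typing import Callable, NamedTuple

class WorldState(NamedTuple):
    speed_kph: float
    speed_limit_kph: float
    pedestrian_in_crosswalk: bool

class Rule(NamedTuple):
    citation: str                        # e.g., a statute reference
    holds: Callable[[WorldState], bool]  # True when the rule is satisfied

RULES = [
    Rule("Traffic Code §11-801 (speed)",
         lambda s: s.speed_kph <= s.speed_limit_kph),
    Rule("Traffic Code §11-1002 (yield to pedestrians)",
         lambda s: not s.pedestrian_in_crosswalk or s.speed_kph == 0.0),
]

def violations(state: WorldState) -> list[str]:
    """Tiny inference step: return the citations of all violated rules."""
    return [r.citation for r in RULES if not r.holds(state)]

print(violations(WorldState(speed_kph=55.0, speed_limit_kph=50.0,
                            pedestrian_in_crosswalk=True)))
# ['Traffic Code §11-801 (speed)', 'Traffic Code §11-1002 (yield to pedestrians)']
```

Even this toy version hints at where the difficulty lies: the predicates are trivial, but producing a trustworthy WorldState from raw vision, sound, and radar data is the bulk of the work, which is precisely the complexity described above.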
