Regulating AI – The Road Ahead - Data Science Central
Summary: With tongue only slightly in cheek about the road ahead, we report on the US House of Representatives' just-passed "Federal Automated Vehicle Policy," as well as similar policy just emerging in Germany. As a model for regulating emerging AI technology, we think they got this just about right.
Just today (9/6/17), the US House of Representatives released its 116-page "Federal Automated Vehicles Policy." It still has to be reconciled with and approved by the Senate, but word is that shouldn't take long. Equally interesting, just two weeks ago the German federal government published its guidelines for Highly Automated Vehicles (HAV being the new name of choice for these vehicles).
There are very few circumstances in which we would welcome government regulation of an emerging technology like AI, but this is one of those times.
Why We Welcome Regulation
First, of all the uses to which AI will be put, HAVs have the most potential to enhance or harm our lives. Industrial robots, chatbots, and robot vacuums don't concern us. Perhaps the next most critical application will be drones for delivery or even transport, but that technology isn't close enough to deployment to be a concern yet.
HAVs done right will be a boon to productivity and cost reduction. Released too soon, they may prove more safety menace than accident reducer, alienating potential future customers.
Given how heavily our conventional cars and trucks are regulated at both the state and federal level, regulation of HAVs is inevitable. The balance to be struck is a hand light enough not to slow innovation and deployment, while giving the newly chauffeured public enough confidence to begin adopting.
HAV adoption is not a slam dunk. Gartner's 'Consumer Trends in Automotive' reports that a recent survey in the US and Germany showed that 55% of the sample would not ride in a fully automated car. However, 70% said they would be willing to ride in a partially autonomous one.
Frankly, my take is that the general public is rightly discounting a lot of the press hype and needs the reassurance some regulation can provide.
The US Approach
The policy published by the NHTSA is actually a breath of fresh air in the world of regulation. At least 20 states already have HAV regulations on their books, and the impediment to innovation caused by this balkanization was on the verge of becoming overwhelming.
So the federal government has reserved for itself the setting of overall safety and performance criteria, while leaving to the states those functions that were always theirs:
Licensing (human) drivers and registering motor vehicles;
Enacting and enforcing traffic laws and regulations;
Conducting safety inspections, where States choose to do so; and
Regulating motor vehicle insurance and liability.
If anything, this leaves the states plenty of room to diverge, especially around who is to be licensed (manufacturer, owner, driver) when no licensed active driver is required aboard, and similarly who pays (liability) in the case of an accident.
What states are specifically prohibited from doing is regulating performance, which is reserved for the federal government.
On the six-point automation scale, in which level 0 is no automation and level 5 means the automated system can perform all driving tasks under all conditions, the new policy applies to level 3 and higher (though the broad standards also apply to the partial automation of levels 1 and 2). Level 3 is roughly where Tesla currently operates (or is rapidly approaching): the system performs some of the tasks some of the time, with the human on alert to take over.
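The six levels can be sketched as a simple enumeration. The level names below follow the common SAE J3016 shorthand, and `policy_applies` is a hypothetical helper for illustration, not language from any statute:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The six automation levels (0-5) referenced by the policy."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # a single automated function (e.g. cruise control)
    PARTIAL_AUTOMATION = 2      # combined functions; driver stays engaged
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over on request
    HIGH_AUTOMATION = 4         # no human fallback needed within a defined domain
    FULL_AUTOMATION = 5         # all driving tasks, under all conditions

def policy_applies(level: AutomationLevel) -> bool:
    """The new federal policy targets level 3 and above."""
    return level >= AutomationLevel.CONDITIONAL_AUTOMATION
```

On this scale, today's driver-assist systems sit at levels 1-2, while the policy's main weight falls on levels 3-5.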
The new policy document offers guidance in four areas:
Vehicle Performance Guidance for Automated Vehicles
Model State Policy
NHTSA’s Current Regulatory Tools
New Tools and Authorities
The Good News Is This
The federal government does not propose any specific performance standards, though it reserves the possibility of doing so in the future. The general guidance is that deployed HAVs driven by the public must meet or exceed current vehicle and safety standards.
Manufacturers self-certify compliance within a list of about 15 major areas, including:
Data Recording and Sharing
Human Machine Interface
Consumer Education and Training
Registration and Certification
Federal, State and Local Laws
Operational Design Domain
Object and Event Detection and Response
Fall Back (Minimal Risk Condition)
No specific performance requirements are specified beyond general safety greater than that of non-HAVs. This allows the greatest flexibility for innovation, as well as a constant stream of updates to deployed HAVs without prior government approval. Those upgrades are most likely to be delivered as software over the net, making them much more akin to getting Windows Updates than going to your local mechanic for repairs.
In return, manufacturers may each deploy 100,000 HAVs. Note that 'deployed' means driven by actual customers, not employees. The House version includes heavy trucks, which the Senate version does not, and the numerical limits differ slightly, but the balance of the policy is essentially the same in both versions.
Roughly 35 companies (including some component suppliers) are currently testing HAVs, meaning that in just a few years we could have a test bed of 3 million or more HAVs on which to perfect our AI.
The German Approach
So far the approach proposed by Germany has an even lighter hand (though that may change) and a different focus: the moral and ethical ramifications of HAV operation.
This has taken the form of a report from the Ethics Commission on Automated Driving, presented by Federal Minister Alexander Dobrindt and adopted by the Cabinet. Per Dobrindt:
The interaction between man and machine raises new ethical questions in the age of digitization and self-learning systems. Automated and networked driving is the latest innovation in which this interaction applies in full. The ethics commission at the BMVI has done truly pioneering work and has developed the world's first guidelines for automated driving. We are now implementing these guidelines, and thus remain an international pioneer in mobility 4.0.
The full report covers 20 points, of which these are the key ones:
Automated and networked driving is ethically necessary if the systems cause fewer accidents than human drivers (positive risk assessment).
Property damage takes precedence over personal injury: in the event of danger, the protection of human life always has top priority.
In the case of unavoidable accidents, any qualification of people according to personal characteristics (age, sex, physical or mental constitution) is not permitted.
In any driving situation, it must be clearly defined who is responsible for the driving task: the human or the computer.
Who is driving at any given time must be documented (e.g., to clarify possible liability questions).
In principle, drivers must be able to decide for themselves (data sovereignty) whether their vehicle data is passed on and used.
The core of this guidance is that in the event of an unavoidable accident, priority must be given to humans over animals or property. Most importantly, the HAV must not base any judgment on characteristics of those involved, including weighing the number of passengers against the number of those potentially injured. Specifically called out are age, sex, and physical or mental constitution.
Many readers will recognize this as the old Trolley Problem, in which the operator must make a last-second judgment about who and how many people will be injured versus saved. As any first-year philosophy student has discovered, there are no good answers. The Germans have taken the position that the AI must not be designed to make this decision. Unfortunately, they are silent on who will make the decision, or how, when the AI is in charge.
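To make the constraint concrete, here is a minimal sketch of what a compliant decision rule might look like. The `Obstacle` record, the harm categories, and the cost values are all invented for illustration; the only point is that personal attributes and headcounts are deliberately excluded from the decision:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Obstacle:
    category: str                 # "human", "animal", or "property"
    age: Optional[int] = None     # personal attributes may be sensed...
    sex: Optional[str] = None     # ...but must never enter the decision

# Harm hierarchy: human life always outranks animals and property.
HARM_COST = {"human": 1_000_000, "animal": 100, "property": 1}

def path_cost(obstacles):
    """Rank a candidate path by the worst category of harm it causes.

    Deliberately ignores age, sex, and every other personal
    characteristic, and does not offset counts of people against
    each other, per the German guidelines."""
    return max((HARM_COST[o.category] for o in obstacles), default=0)

def choose_path(paths):
    """Return the index of the least-harm candidate path."""
    return min(range(len(paths)), key=lambda i: path_cost(paths[i]))
```

A planner built this way will always steer toward property or animal damage over human injury, while remaining structurally incapable of ranking one human against another.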
The US Policy on the Trolley Problem
The published NHTSA policy is not blind to this problem and takes a more nuanced approach, though one that offers little more in the way of guidance. It acknowledges that there will no doubt be times when the three major goals of safety, mobility, and legality conflict.
It says only that whatever rules are built into the AI must be completely transparent and open to discussion and agreement among the government, the manufacturers, and the operators/riders. For a problem with no good solution, waiting to see how this actually unfolds may indeed be the best guidance we can currently offer.
There is one element of policy I was sorry to see missing: car-to-car networking for simultaneous localization and mapping (known as SLAM in the HAV AI community).
Manufacturers currently regard the data transmitted by the HAV back to them as proprietary. In most cases this is reasonable. However, the exception that we and others have argued for in the past is that any new knowledge of immediate road conditions gathered by one HAV should be shared with all.
This can be the sudden removal of lanes caused by construction, or the garbage can in the middle of a residential street. Since HAVs rely on both mapping and object-avoidance logic, any novel disruption seen by one HAV could be made available to all other HAVs in the immediate vicinity. Car-to-car transmission could also greatly simplify difficult merging situations such as on a roundabout.
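The kind of limited sharing described above can be sketched in a few lines. Everything here is hypothetical — the record types, the broadcast radius, and the function names are invented for illustration and do not correspond to any real V2V protocol:

```python
import math
from dataclasses import dataclass, field

@dataclass
class RoadEvent:
    kind: str        # e.g. "lane_closure", "debris"
    lat: float
    lon: float

@dataclass
class Vehicle:
    vid: str
    lat: float
    lon: float
    local_map: list = field(default_factory=list)

def _dist_km(a, b):
    # Equirectangular approximation; adequate at city scale.
    x = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    y = math.radians(b.lat - a.lat)
    return 6371 * math.hypot(x, y)

def broadcast(event, sender, fleet, radius_km=2.0):
    """Share a newly observed road event with all other vehicles in range,
    which fold it into their local maps for routing and object avoidance."""
    for v in fleet:
        if v.vid != sender.vid and _dist_km(sender, v) <= radius_km:
            v.local_map.append(event)
```

The point of the sketch is the mandate's modest scope: only transient road-condition facts cross the manufacturer boundary, while the rest of the telemetry stays proprietary.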
It would be an appropriate use of government authority to mandate this limited level of data sharing. Perhaps that will come as the performance of different AI systems is shown to be better or worse than others and the NHTSA focuses in on minimum performance standards.
For now, though, we're delighted that the federal government has stepped in with a very light hand and given the green light to much more extensive real-world experience with AI for HAVs. We hope its hand remains light. I for one am looking forward to my first ride.
About the author: Bill Vorhies is Editorial Director for Data Science Central and has practiced as a data scientist since 2001. He can be reached at: