The Big Question – Who’s To Blame for Autonomous Car Crashes?
Just recently, a Tesla SUV operating under its self-driving system collided with a highway divider in California, killing its driver, Walter Huang. Ironically, Huang had complained to Tesla that the vehicle had veered toward that same type of barrier on multiple occasions. Tesla attributed the severity of the accident to a missing section of the divider's crash barrier, and the company also indicated that human error played a role in the crash.
John Paul MacDuffie, an analyst of the automated vehicle industry, sees this mishap as highly relevant to the continued advancement of automated driving. According to MacDuffie, while the event is shocking, such occurrences will become more common in the future. In cases involving autonomous driving, where the legal boundaries are blurred, people will need the services of an experienced personal injury lawyer to untangle the complications of these matters.
How the Incident Unfolded
In the Tesla crash, Autopilot was engaged and issued warnings alerting the driver to a potential impact. However, the driver was unable to take control of the vehicle in time. A fully automated system that did not rely on a human response might well have avoided the crash.
However, when drivers are treated as a backup to these automated systems, there is always a higher level of risk involved. This kind of testing was meant to advance the move toward full automation by blending human involvement with automated control.
Human Distraction
Statistics indicate that distracted driving is now at an all-time high. American roads saw 37,461 fatalities in 2016, a 5.6% increase over 2015, according to a report from the National Highway Traffic Safety Administration. There are signs that such fatalities have risen further in more recent years, though the data is not yet available.
Automated driving could potentially help reduce the frequency of these accidents. However, any arrangement in which a human and the AI are expected to share control of the vehicle is extremely precarious and raises the chance of error.
Waymo, a driverless-vehicle firm owned by Alphabet, has opposed frameworks in which control of a vehicle is shared between a driver and the computer system. Instead, the firm has been pushing for a fully autonomous design that does not depend on any intervention from a human driver.
How Tech Companies Cut Corners in the Pursuit of Automation
Uber started testing its driverless vehicles in San Francisco in December 2016. California regulators put a stop to those tests a week later. Uber then shifted its testing to Arizona, where it faced less regulatory interference and a more business-friendly atmosphere.
The changes Uber made for these tests were troubling. It cut the number of safety drivers in each vehicle from two to one. It removed safety equipment from the vehicles while testing the software. And reportedly, the company consolidated its sensors into a single location on the vehicle, leaving fewer safeguards and a wider margin of risk.
During these tests, one of the self-driving cars struck and killed a pedestrian. While a full investigation into the incident is still underway, it looks increasingly clear that Uber's corner-cutting is to blame for the crash.
Corporate Accountability Amid a Surge of AI Innovation
As one might expect, the Uber and Tesla accidents gave their corporate rivals fuel to position themselves and promote their own self-driving technology. Waymo CEO John Krafcik stated after the Uber crash that his firm's system would have detected the pedestrian and swerved before the collision, preventing the accident.
On a similar note, the CEO of Mobileye, an automated-vehicle firm owned by Intel, wrote a blog post following the event presenting his company's technology as more reliable.
A Rising Trend in Today's Innovation Climate
Driving-related deaths could be viewed as a genuine public emergency that emerging automation could certainly help address. However, these recent incidents have made the path ahead for driverless vehicles far less clear. The tech corporations have been conducting their tests with a carelessness that is quite concerning. In addition, they have not been able to test on busy city streets, despite the fact that these vehicles are designed to serve urban populations.
It is also vital that the technology be tried in real-world conditions. Nearly all of these firms run their tests in Arizona, around Phoenix, in areas generally more conducive to driving: flat streets, a dry climate, and light congestion. These are excellent, reliable conditions for anyone driving a car. The harsher, more dangerous conditions of other cities need to be incorporated into these self-driving tests.
Response From the Government
Prior to these recent accidents, state authorities had been setting new standards for the testing of automated systems. California, which had been criticized for lax administrative oversight, now requires companies conducting the tests to report each time the AI disengages and hands control of the vehicle back to the driver.
Amid these proceedings, a bill was recently introduced in Congress that appears to exempt these firms from government oversight. It was marketed as a 'free enterprise' bill meant to foster an environment of innovation for car makers, despite the increasingly obvious risks surrounding this entire situation.
So Who’s Really Responsible Here?
Considering the grey areas in the law and the lack of clear regulatory direction, the legal implications remain unclear. But one thing seems certain: in most incidents involving a self-driving car, the manufacturer is likely to share responsibility for errors that trickle down from the automation.
Crashes involving self-driving vehicles have consistently been attributed to flaws and errors in the technology. In both the Tesla and the Uber cases, analysts placed the blame on the vehicles' governing systems. The company making the vehicles could therefore be legally culpable where evidence shows deficiencies in how the vehicles were built.