May 31, 2018

In today’s digest, we look at a couple of perspectives on the NTSB’s recent Uber AV crash report, how the AV START Act holdouts in the Senate have responded, and more.

NTSB Crash Investigation Results and the “Handoff Problem”

The NTSB released its much-anticipated findings on the Uber self-driving crash last week. Ars Technica called the report “a damning picture of Uber’s self-driving technology.”

Per AT:

“The report confirms that the sensors on the vehicle worked as expected, spotting pedestrian Elaine Herzberg about six seconds prior to impact, which should have given it enough time to stop given the car’s 43mph speed.”

“The problem was that Uber’s software became confused, according to the NTSB. ‘As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path,’ the report says.”

“Things got worse from there.”

“‘At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator,’ the NTSB report says.”

“The vehicle was a modified Volvo XC90 SUV. That vehicle comes with emergency braking capabilities, but Uber automatically disabled these capabilities while its software was active.”
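The decision flow just quoted can be pictured in a few lines of code. This is a purely illustrative sketch of the logic the NTSB describes, not Uber's actual software; the names and structure are our own assumptions:

```python
# Illustrative sketch only (not Uber's code) of the behavior the NTSB
# report describes: an emergency braking maneuver is judged necessary,
# but the maneuver is suppressed while the self-driving system is
# engaged, and no alert is raised for the human operator.
from dataclasses import dataclass


@dataclass
class VehicleState:
    computer_control: bool  # self-driving system engaged
    braking_needed: bool    # planner has flagged an imminent collision


def respond(state: VehicleState) -> str:
    """Return the action taken under the report's described logic."""
    if not state.braking_needed:
        return "continue"
    if state.computer_control:
        # Per the report: emergency maneuvers are not enabled under
        # computer control (to reduce erratic behavior); the operator
        # is relied on to intervene, but the system does not alert her.
        return "rely_on_operator_no_alert"
    return "emergency_brake"
```

The troubling branch is the middle one: the exact situation in which braking matters most is the one in which the system neither brakes nor warns.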

“The vehicle operator—who had been looking down in the seconds before the crash—finally took action less than a second before the fatal crash. She grabbed the steering wheel shortly before the car struck Herzberg, then slammed on the brakes a fraction of a second after the impact. The vehicle was traveling at 39mph at the time of the crash.”

“Dashcam footage of the driver looking down at her lap has prompted a lot of speculation that she was looking at a smartphone. But the driver told the NTSB that she was actually looking down at a touchscreen that was used to monitor the self-driving car software.”

“‘The operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review,’ the report said.”
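One hypothetical way to picture why the unstable classification quoted above matters: if a tracker keeps motion history per object class, each reclassification throws away the observations needed to predict where the object is headed. This sketch is our own assumption for illustration, not a description of Uber's perception stack:

```python
# Hypothetical illustration (not Uber's software): a per-class tracker
# only accumulates history while the classification stays stable, so
# flip-flopping labels starve the path predictor of data.


def usable_history(classifications):
    """Length of the current run of identical labels, i.e. how many
    consecutive observations a per-class tracker could predict from."""
    streak, current = 0, None
    for label in classifications:
        streak = streak + 1 if label == current else 1
        current = label
    return streak


# Six seconds of detections (one per second), reclassified as the
# vehicle and pedestrian paths converged:
labels = ["unknown", "unknown", "vehicle", "vehicle", "bicycle", "bicycle"]
```

Under this toy model, only the final streak of stable labels is available to the predictor, no matter how long the object has actually been in view.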

Citylab’s report on the findings highlights the “handoff problem” we may face with Level 3 automation, pointing to a nearly 25-year-old NTSB study of airline crashes:

“Over the period of that study, aviation had moved into the automation era, as Maria Konnikova reported for The New Yorker in 2014. Cockpit controls that once required constant vigilance now maintained themselves, only asking for human intervention on an as-needed basis. The idea was to reduce the margin of error via the precision of machines, and that was the effect, in some respects. But as planes increasingly flew themselves, pilots became more complacent. The computers had introduced a new problem: the hazardous expectation that a human operator can take control of an automated machine in the moments before disaster when their attention isn’t otherwise much required.”

Citylab goes on to note that the so-called “safety drivers” in Uber’s self-driving cab fleet often grew complacent in the same way. One safety driver is quoted as saying “When nothing crazy happens with the car for months, it’s hard not to get used to it and to stay 100-percent vigilant.”

So whatever the precise technical debate (did Uber’s software ignore too many false positives? did it rely on too few sensors of one type or another?), there is precedent here: the airline industry has been through this and learned from it. Perhaps it’s time to pay attention to how aviation has all but eliminated fatal accidents. If the burgeoning AV ride-hailing industry can’t do the same, we may be stuck waiting for Level 4-5 technology before self-driving taxis arrive.

Senators Pounce on NTSB’s Findings

As we’ve reported in these pages, the US Senate’s AV START Act has stalled over safety concerns raised by a few key holdouts. After the Uber crash, it seemed all but certain those holdouts would ramp up their opposition to the bill, demanding clearer safety protocols and better-codified oversight of test AVs, whether by the Senate itself or by some federal agency.

Senators Markey and Blumenthal wasted no time, sending a letter to the CEO of Uber that asks for written responses to ten multi-part questions about safety protocols both before and after the crash, in hopes of seeing evidence that the crash has spurred improvements.

The senators, according to Engadget, seem to be looking most of all for information on “where testing is occurring, how companies determined if the self-driving tech was safe for public roads and whether the technology relies on internal sensors or external inputs”.

It wasn’t just Uber that Markey and Blumenthal put on notice with the letter. They also sent copies to a long list of other companies. Deep breath: BMW, Daimler Trucks, Fiat Chrysler, Ford, General Motors, Honda, Hyundai, Jaguar Land Rover, Kia, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Subaru, Tesla, Toyota, Volkswagen, Volvo, Amazon, Apple, Intel, Lyft, NVIDIA Corporation, Uber and Waymo all received copies as well.

How tech companies and vehicle manufacturers reply to these concerns could go a long way toward determining what happens with AV legislation in the US Congress. The safe bet, for now anyway, is that the AV START Act won’t pass before the November midterms, and after that it may well change depending on what happens, if anything, to the Congressional balance of power. While many Democrats support AV legislation, even those willing to vote for this bill felt it gave companies too long a leash (in self-reporting their safety issues, for example) and took teeth out of existing federal code that allows the government to hold companies accountable for egregious mistakes.

Time will tell if Markey and Blumenthal will take responses to their queries at face value or if they’re merely throwing up another obstacle to the bill’s progress, but we wouldn’t make any large bets on any federal AV legislation going the distance anytime soon.

Quick Bites

  • Futurism reports on recent MIT research that says 3D printed autonomous boats aren’t far off from hitting the market! Yes, really! This could be the coolest thing we have seen in our news roundup in some time.
  • Wired has a thinkpiece this week on the NTSB’s Uber crash report, with the poetic title “False Positives: Self-Driving Cars and the Agony of What Matters.” Sounds like a book you were supposed to read and didn’t in a college literature class, but it’s actually pretty interesting.
  • USA Today brings us some nice long-form reporting on the dilemma currently being dealt with in auto advertising: namely, should we brag about our driver assist features or warn that they’re definitely, absolutely not autonomous systems? Or can we get away with doing both?

What do you think — about any of these stories or the other ongoing developments in the realms of next-gen transportation or smart cities? Contact us and let us know. If you write something really great, we might even quote you next time, so don’t be shy, join the conversation!

Do you like our digest content?

Get frequent updates about smart cities, autonomous vehicles, and transportation topics by subscribing to the GTiMA mailing list.
