25 May 2018

Mobility startups: Apply to exhibit for free as a TC Top Pick at Disrupt SF ‘18


Mobility is one of today’s most rapidly advancing technologies, and we’re searching for the rising stars among early-stage mobility startups to apply as a TC Top Pick for Disrupt San Francisco 2018 on September 5-7 at Moscone Center West. It’s a competitive application process, but if TechCrunch editors designate your company as a Top Pick, you get to exhibit for free in Startup Alley — the show floor and heartbeat of every Disrupt event. Besides, who doesn’t love free?

Mobility tech is on the cusp of a revolution, and we’re interested in startups focused on everything it entails — autonomous vehicles, sensors, drones, security — or something else altogether. Flying cars, anyone? Exhibiting in Startup Alley will expose your startup to more than 10,000 attendees, including potential investors, customers, partners and more than 400 media outlets.

Here’s how the TC Top Pick process works. First things first: apply now. Our expert team of editors will review each application and choose only five mobility startups as TC Top Picks. They will also select five startups in each of the following tech categories: AI, AR/VR, Blockchain, Biotech, Fintech, Gaming, Healthtech, Privacy/Security, Space, Retail and Robotics. A total of 60 companies will exhibit in Startup Alley as TC Top Picks.

If your mobility startup makes the cut, you receive a free Startup Alley Exhibitor Package, which includes a one-day exhibit space in Startup Alley, three founder passes good for all three days of the show, use of CrunchMatch — our investor-to-startup matching platform — and access to the event press list.

In addition to all the other potential media opportunities, TC Top Picks also get a three-minute interview on the Showcase Stage with a writer — and we’ll share the heck out of that video across our social media platforms. That’s promotional gold right there, folks.

And who knows? As a Startup Alley exhibitor, your company might even be selected as the Startup Battlefield Wildcard — if it is, you get to compete in Startup Battlefield for a shot at the $100,000 prize.

Disrupt San Francisco 2018 takes place on September 5-7. Don’t miss your opportunity to exhibit in Startup Alley for free. The TC Top Pick deadline is June 29, and we have special offers for early applicants. Does your startup have what it takes to be one of the five mobility TC Top Picks? Apply today to find out.


Read Full Article

Snips announces an ICO and its own voice assistant device


French startup Snips has been working on voice assistant technology that respects your privacy. And the company is going to use its own voice assistant for a set of consumer devices. As part of this consumer push, the company is also announcing an initial coin offering.

Yes, it sounds a bit like Snips is playing a game of buzzword bingo. Anyone can currently download the open-source Snips SDK and run it on a Raspberry Pi with a microphone and a speaker. It’s private by design; you can even make it work without any internet connection. Companies can also partner with Snips to embed a voice assistant in their own devices.

But Snips is adding a B2C element to its business. This time, the company is going to compete directly with Amazon Echo and Google Home speakers. You’ll be able to buy the Snips AIR Base and Snips AIR Satellites.

The base will be a good old smart speaker, while satellites will be tiny portable speakers that you can put in all your rooms. The company plans to launch those devices in 18 months.


By default, Snips devices will come with basic skills to control your smart home devices, get the weather, control music, timers, alarms, calendars and reminders. Unlike the Amazon Echo or Google Home, voice commands won’t be sent to Google’s or Amazon’s servers.

Developers will be able to create skills and publish them on a marketplace. That marketplace will run on a new blockchain — the AIR blockchain.

And that’s where the ICO comes along. The marketplace will accept AIR tokens to buy more skills. You’ll also be able to generate training data for voice commands using AIR tokens. To be honest, I’m not sure why good old credit card transactions weren’t enough. But I guess that’s a good way to raise money.


Read Full Article

Google’s Duo and Cisco’s Webex Teams among the VoIP apps pulled from the China App Store


Earlier this week, it came to light that Apple had removed a number of VoIP-based calling apps from the App Store, at the request of the Chinese government. The apps had been using CallKit, Apple’s new developer toolset that provides the calling interface for VoIP apps, freeing up developers to handle the backend communications. China’s government asked developers, by way of Apple, to remove CallKit from their apps sold on the China App Store, or to remove their apps entirely.

Notices Apple sent out to the developers were first spotted by 9to5Mac, which shared a snippet from one of the emails.

The email states that the Chinese Ministry of Industry and Information Technology (MIIT) “requested that CallKit be deactivated in all apps available on the China App Store,” and informed the developer they would need to comply with this regulation in order to have their app approved.

The regulation only impacts apps distributed in the China App Store.

We understand that the apps can still use CallKit and be sold in other markets outside the region.

Apple is not publicly commenting on the matter.

The pushback against CallKit is another means of discouraging people from developing or using VoIP services in China, without having to go so far as to ban the apps directly. It wouldn’t be the first time China has cracked down in this area. In November, Microsoft’s Skype was also pulled from the Apple and Android app stores.

The government also last year ordered VPN apps, which help users route around the Great Firewall, to be pulled from app stores – another order with which Apple complied.

Other social media apps, like WhatsApp and Facebook, are also disrupted at times, and newspapers’ apps like those from The NYT and WSJ are blocked, too.

According to data pulled by app store intelligence firm Sensor Tower, two dozen apps with CallKit had been removed during the week prior to the news reports.

That list, along with the date removed and publisher name, is below:

Sensor Tower notes it’s possible that other apps were removed as well, but it doesn’t have that data.

In addition, this list only includes those apps that have been downloaded enough times to rank in the top 1,500 of an app category at some point – beyond that, Sensor Tower wouldn’t pick them up. But an app that wasn’t ranked would have had so few downloads that the impact of its removal would be minimal.

Nevertheless, you can see the list includes a few well-known names, including Cisco’s Webex Teams and Google’s Duo video calling app, among those from other operators and VoIP calling providers.

The full text of Apple’s email is below:

From Apple
5. Legal: Preamble
Guideline 5.0 – Legal

Recently, the Chinese Ministry of Industry and Information Technology (MIIT) requested that CallKit functionality be deactivated in all apps available on the China App Store. During our review, we found that your app currently includes CallKit functionality and has China listed as an available territory in iTunes Connect.

Next Steps

This app cannot be approved with CallKit functionality active in China. Please make the appropriate changes and resubmit this app for review. If you have already ensured that CallKit functionality is not active in China, you may reply to this message in Resolution Center to confirm. Voice over Internet Protocol (VoIP) call functionality continues to be allowed but can no longer take advantage of CallKit’s intuitive look and feel. CallKit can continue to be used in apps outside of China.


Read Full Article

Eric Schmidt says Elon Musk is ‘exactly wrong’ about AI


When former Google CEO Eric Schmidt was asked about Elon Musk’s warnings about AI, he had a succinct answer: “I think Elon is exactly wrong.”

“He doesn’t understand the benefits that this technology will provide to making every human being smarter,” Schmidt said. “The fact of the matter is that AI and machine learning are so fundamentally good for humanity.”

He acknowledged that there are risks around how the technology might be misused, but he said they’re outweighed by the benefits: “The example I would offer is, would you not invent the telephone because of the possible misuse of the telephone by evil people? No, you would build the telephone and you would try to find a way to police the misuse of the telephone.”

Schmidt, who has pushed back in the past against AI naysaying from Musk and scientist Stephen Hawking, was interviewed on-stage today at the VivaTech conference in Paris.

While he stepped down as executive chairman of Google’s parent company Alphabet in December, Schmidt remains involved as a technical advisor, and he said today that his work is now focused on new applications of machine learning and artificial intelligence.

Elon Musk speaks onstage at Elon Musk Answers Your Questions! during SXSW at ACL Live on March 11, 2018 in Austin, Texas. (Photo by Chris Saucedo/Getty Images for SXSW)

After wryly observing that he had just given the journalists in the audience their headlines, interviewer (and former Publicis CEO) Maurice Lévy asked how AI and public policy can be developed so that some groups aren’t “left behind.” Schmidt replied that government should fund research and education around these technologies.

“As [these new solutions] emerge, they will benefit all of us, and I mean the people who think they’re in trouble, too,” he said. He added that data shows “workers who work in jobs where the job gets more complicated get higher wages — if they can be helped to do it.”

Schmidt also argued that contrary to concerns that automation and technology will eliminate jobs, “The embracement of AI is net positive for jobs.” In fact, he said there will be “too many jobs” — because as society ages, there won’t be enough people working and paying taxes to fund crucial services. So AI is “the best way to make them more productive, to make them smarter, more scalable, quicker and so forth.”

While AI and machine learning were the official topics of the interview, Lévy also asked how Google is adapting to Europe’s GDPR regulations around data and privacy, which take effect today.

“From our perspective, GDPR is the law of the land and we have complied with it,” Schmidt said.

Speaking more generally, he suggested that governments need to “find the balance” between regulation and innovation, because “the regulations tend to benefit the current incumbents.”

What about the argument that users should get some monetary benefit when companies like Google build enormous businesses that rely on users’ personal data?

“I’m perfectly happy to redistribute the money — that’s what taxes are for, that’s what regulation is for,” Schmidt said. But he argued that consumers are already benefiting from these business models because they’re getting access to free services.

“The real value is not the data but in the industrial construction of the firm which uses the data to solve a problem to make money,” he said. “That’s capitalism.”


Read Full Article

Facebook, Google face first GDPR complaints over “forced consent”


After two years coming down the pipe at tech giants, Europe’s new privacy framework, the General Data Protection Regulation (GDPR), is now being applied — and longtime Facebook privacy critic Max Schrems has wasted no time in filing four complaints relating to (certain) companies’ ‘take it or leave it’ stance when it comes to consent.

The complaints have been filed on behalf of (unnamed) individual users — with one filed against Facebook; one against Facebook-owned Instagram; one against Facebook-owned WhatsApp; and one against Google’s Android.

Schrems argues that the companies are using a strategy of “forced consent” to continue processing the individuals’ personal data — when in fact the law requires that users be given a free choice unless consent is strictly necessary for provision of the service. (And, well, Facebook claims its core product is social networking — rather than farming people’s personal data for ad targeting.)

“It’s simple: Anything strictly necessary for a service does not need consent boxes anymore. For everything else users must have a real choice to say ‘yes’ or ‘no’,” Schrems writes in a statement.

“Facebook has even blocked accounts of users who have not given consent,” he adds. “In the end users only had the choice to delete the account or hit the “agree”-button — that’s not a free choice, it more reminds of a North Korean election process.”

We’ve reached out to all the companies involved for comment and will update this story with any response.

The European privacy campaigner most recently founded a not-for-profit digital rights organization to focus on strategic litigation around the bloc’s updated privacy framework, and the complaints have been filed via this crowdfunded NGO — which is called noyb (aka ‘none of your business’).

As we pointed out in our GDPR explainer, the provision in the regulation allowing for collective enforcement of individuals’ data rights is an important one, with the potential to strengthen the implementation of the law by enabling non-profit organizations such as noyb to file complaints on behalf of individuals — thereby helping to redress the imbalance between corporate giants and consumer rights.

That said, the GDPR’s collective redress provision is a component that Member States can choose to derogate from, which helps explain why the first four complaints have been filed with data protection agencies in Austria, Belgium, France and Hamburg in Germany — regions that also have data protection agencies with a strong record defending privacy rights.

Given that the Facebook companies involved in these complaints have their European headquarters in Ireland it’s likely the Irish data protection agency will get involved too. And it’s fair to say that, within Europe, Ireland does not have a strong reputation for defending data protection rights.

But the GDPR allows for DPAs in different jurisdictions to work together in instances where they have joint concerns and where a service crosses borders — so noyb’s action looks intended to test this element of the new framework too.

Under the penalty structure of GDPR, major violations of the law can attract fines as large as 4% of a company’s global revenue which, in the case of Facebook or Google, implies they could be on the hook for more than a billion euros apiece — if they are deemed to have violated the law, as the complaints argue.
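To put that 4% ceiling in concrete terms, here is a rough back-of-the-envelope calculation in Python. The revenue figures are approximate 2017 full-year numbers used purely as assumptions for illustration; any actual fine would be set by regulators, not by this formula.

```python
# Rough sketch of GDPR's maximum-fine math: up to 4% of global annual
# turnover. The revenue figures below are approximate 2017 full-year
# numbers, included as illustrative assumptions only.
GDPR_MAX_FINE_RATE = 0.04

revenues_usd = {
    "Facebook": 40.65e9,   # ~$40.65B reported for 2017
    "Alphabet": 110.9e9,   # ~$110.9B reported for 2017
}

for company, revenue in revenues_usd.items():
    max_fine = revenue * GDPR_MAX_FINE_RATE
    print(f"{company}: theoretical maximum fine ~${max_fine / 1e9:.2f}B")
```

Even at these approximate figures, the theoretical maximum lands well above a billion dollars for each company, which is why the "more than a billion euros apiece" framing holds.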

That said, given how freshly fixed in place the rules are, some EU regulators may well tread softly on the enforcement front — at least in the first instances, to give companies some benefit of the doubt and/or a chance to make amends to come into compliance if they are deemed to be falling short of the new standards.

However, in instances where companies themselves appear to be attempting to deform the law with a willfully self-serving interpretation of the rules, regulators may feel they need to act swiftly to nip any disingenuousness in the bud.

“We probably will not immediately have billions of penalty payments, but the corporations have intentionally violated the GDPR, so we expect a corresponding penalty under GDPR,” writes Schrems.

Only yesterday, for example, Facebook founder Mark Zuckerberg — speaking in an on stage interview at the VivaTech conference in Paris — claimed his company hasn’t had to make any radical changes to comply with GDPR, and further claimed that a “vast majority” of Facebook users are willingly opting in to targeted advertising via its new consent flow.

“We’ve been rolling out the GDPR flows for a number of weeks now in order to make sure that we were doing this in a good way and that we could take into account everyone’s feedback before the May 25 deadline. And one of the things that I’ve found interesting is that the vast majority of people choose to opt in to make it so that we can use the data from other apps and websites that they’re using to make ads better. Because the reality is if you’re willing to see ads in a service you want them to be relevant and good ads,” said Zuckerberg.

He did not mention that the dominant social network does not offer people a free choice on accepting or declining targeted advertising. The new consent flow Facebook revealed ahead of GDPR only offers the ‘choice’ of quitting Facebook entirely if a person does not want to accept targeted advertising. Which, well, isn’t much of a choice given how powerful the network is. (Additionally, it’s worth pointing out that Facebook continues tracking non-users — so even deleting a Facebook account does not guarantee that Facebook will stop processing your personal data.)

Asked about how Facebook’s business model will be affected by the new rules, Zuckerberg essentially claimed nothing significant will change — “because giving people control of how their data is used has been a core principle of Facebook since the beginning”.

“The GDPR adds some new controls and then there’s some areas that we need to comply with but overall it isn’t such a massive departure from how we’ve approached this in the past,” he claimed. “I mean I don’t want to downplay it — there are strong new rules that we’ve needed to put a bunch of work into making sure that we complied with — but as a whole the philosophy behind this is not completely different from how we’ve approached things.

“In order to be able to give people the tools to connect in all the ways they want and build community, a lot of the philosophy that is encoded in a regulation like GDPR is really how we’ve thought about all this stuff for a long time. So I don’t want to understate the areas where there are new rules that we’ve had to go and implement but I also don’t want to make it seem like this is a massive departure in how we’ve thought about this stuff.”

Zuckerberg faced a range of tough questions on these points from the EU parliament earlier this week. But he avoided answering them in any meaningful detail.

So EU regulators are essentially facing a first test of their mettle — i.e. whether they are willing to step up and defend the line of the law against big tech’s attempts to reshape it in their business model’s image.

Privacy laws are nothing new in Europe but robust enforcement of them would certainly be a breath of fresh air. And now at least, thanks to GDPR, there’s a penalties structure in place to provide incentives as well as teeth, and spin up a market around strategic litigation — with Schrems and noyb in the vanguard.

Schrems also makes the point that small startups and local companies are less likely to be able to use the kind of strong-arm ‘take it or leave it’ tactics on users that big tech is able to use to extract consent on account of the reach and power of their platforms — arguing there’s a competition concern that GDPR should also help to redress.

“The fight against forced consent ensures that the corporations cannot force users to consent,” he writes. “This is especially important so that monopolies have no advantage over small businesses.”

Image credit: noyb.eu


Read Full Article

Some low-cost Android phones shipped with malware built in


Avast has found that many low-cost, non-Google-certified Android phones shipped with a strain of malware built in that could push users to download apps they didn’t intend to install. The malware, called Cosiloon, overlays advertisements on the operating system in order to promote apps or even trick users into downloading apps. Affected devices shipped from ZTE, Archos and myPhone.

The app consists of a dropper and a payload. “The dropper is a small application with no obfuscation, located on the /system partition of affected devices. The app is completely passive, only visible to the user in the list of system applications under ‘settings.’ We have seen the dropper with two different names, ‘CrashService’ and ‘ImeMess,'” wrote Avast. The dropper then connects with a website to grab the payloads that the hackers wish to install on the phone. “The XML manifest contains information about what to download, which services to start and contains a whitelist programmed to potentially exclude specific countries and devices from infection. However, we’ve never seen the country whitelist used, and just a few devices were whitelisted in early versions. Currently, no countries or devices are whitelisted. The entire Cosiloon URL is hardcoded in the APK.”
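Based on Avast's description, the dropper's decision logic can be sketched as follows. The XML tag and attribute names here are hypothetical, since Avast's write-up does not publish the exact Cosiloon schema; the sketch only illustrates how a device/country whitelist check of the kind described might behave.

```python
# Hypothetical sketch of the whitelist logic Avast describes: parse an
# XML manifest and skip infection only when the device or country is
# whitelisted. Tag and attribute names are invented for illustration;
# the real Cosiloon manifest schema is not public.
import xml.etree.ElementTree as ET

MANIFEST = """
<manifest>
  <payload url="http://example.invalid/payload.apk" service="CrashService"/>
  <whitelist>
    <device model="TEST-DEVICE-1"/>
  </whitelist>
</manifest>
"""

def should_infect(manifest_xml, device_model, country_code):
    root = ET.fromstring(manifest_xml)
    whitelist = root.find("whitelist")
    if whitelist is not None:
        for device in whitelist.findall("device"):
            if device.get("model") == device_model:
                return False  # whitelisted devices are excluded
        for country in whitelist.findall("country"):
            if country.get("code") == country_code:
                return False  # whitelisted countries are excluded
    return True

print(should_infect(MANIFEST, "TEST-DEVICE-1", "US"))  # False (whitelisted)
print(should_infect(MANIFEST, "OTHER-DEVICE", "US"))   # True (not listed)
```

Note that because Avast observed the whitelist empty in current versions, the `should_infect` check would return `True` for essentially every device, matching the indiscriminate behavior reported.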

The dropper is part of the system’s firmware and is not easily removed.

To summarize:

The dropper can install application packages defined by the manifest downloaded via an unencrypted HTTP connection without the user’s consent or knowledge.
The dropper is preinstalled somewhere in the supply chain, by the manufacturer, OEM or carrier.
The user cannot remove the dropper, because it is a system application, part of the device’s firmware.

Avast can detect and remove the payloads, and it recommends following its instructions to disable the dropper. If the dropper spots antivirus software on your phone, it will stop showing notifications, but it will still recommend downloads as you browse in your default browser — a gateway to grabbing more (and worse) malware. Engadget notes that this vector is similar to the Lenovo “Superfish” exploit, which shipped thousands of computers with malware built in.


Read Full Article

Some low-cost Android phones shipped with malware built in


Avast has found that many low-cost, non-Google-certifed Android phones shipped with a strain of malware built in that could send users to download apps they didn’t intend to access. The malware, called called Cosiloon, overlays advertisements over the operating system in order to promote apps or even trick users into downloading apps. Devices effected shipped from ZTE, Archos and myPhone.

The app consists of a dropper and a payload. “The dropper is a small application with no obfuscation, located on the /system partition of affected devices. The app is completely passive, only visible to the user in the list of system applications under ‘settings.’ We have seen the dropper with two different names, ‘CrashService’ and ‘ImeMess,'” wrote Avast. The dropper then connects with a website to grab the payloads that the hackers wish to install on the phone. “The XML manifest contains information about what to download, which services to start and contains a whitelist programmed to potentially exclude specific countries and devices from infection. However, we’ve never seen the country whitelist used, and just a few devices were whitelisted in early versions. Currently, no countries or devices are whitelisted. The entire Cosiloon URL is hardcoded in the APK.”

The dropper is part of the system’s firmware and is not easily removed.

To summarize:

The dropper can install application packages defined by the manifest downloaded via an unencrypted HTTP connection without the user’s consent or knowledge.
The dropper is preinstalled somewhere in the supply chain, by the manufacturer, OEM or carrier.
The user cannot remove the dropper, because it is a system application, part of the device’s firmware.

Avast can detect and remove the payloads and they recommend following these instructions to disable the dropper. If the dropper spots antivirus software on your phone it will actually stop notifications but it will still recommend downloads as you browse in your default browser, a gateway to grabbing more (and worse) malware. Engadget notes that this vector is similar to the Lenovo “Superfish” exploit that shipped thousands of computers with malware built in.



Uber in fatal crash detected pedestrian but had emergency braking disabled


The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, this “self-driving car” is no better than, and arguably substantially worse than, many normal cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first identified as an unknown object, then a vehicle, then a bicycle, over the next few seconds (it isn’t stated when these classifications took place exactly).
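The distances here are just time-to-impact multiplied by speed. A quick back-of-the-envelope check, assuming a roughly constant speed of 43 mph (which is what 378 feet covered in 6 seconds works out to):

```python
# Sanity check of the NTSB report's distances, assuming constant speed.
MPH_TO_FPS = 5280 / 3600            # 1 mph = ~1.467 ft/s

speed_fps = 43 * MPH_TO_FPS         # ~63 ft/s
first_detection = 6.0 * speed_fps   # distance at first lidar detection
braking_decision = 1.3 * speed_fps  # distance when braking was deemed necessary

print(f"{first_detection:.0f} ft")  # 378 ft
print(f"{braking_decision:.0f} ft") # 82 ft
```

The second number is the sobering one: at that speed, 1.3 seconds leaves only about 80 feet in which to brake or swerve — tight for a machine, and nearly hopeless for a human given no warning.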

The car following the collision.

During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.

Less than a second before impact, the driver happened to look up from whatever it was she was doing and saw Herzberg, whom the car had known about in some way for 5 long seconds by then. The car struck and killed her.

It reflects extremely poorly on Uber that it disabled the car’s ability to respond in an emergency — even though it was authorized to operate at speed at night — and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety oversight, like going on the road with a sub-par lidar system or without checking the headlights — it’s a failure of judgment by Uber, and one that cost a person’s life.

Arizona, where the crash took place, barred Uber from further autonomous testing, and Uber yesterday ended its program in the state.

Uber offered the following statement on the report:

Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.

