27 July 2019

The Great Hack tells us data corrupts 


This week professor David Carroll, whose dogged search for answers about how his personal data was misused plays a focal role in The Great Hack, Netflix’s documentary tackling the Facebook-Cambridge Analytica data scandal, quipped that perhaps a follow-up would prove more punitive for the company than the $5BN FTC fine announced the same day.

The documentary — which we previewed ahead of its general release Wednesday — does an impressive job of articulating for a mainstream audience the risks for individuals and society of unregulated surveillance capitalism, despite the complexities involved in the invisible data ‘supply chain’ that feeds the beast. Most obviously by trying to make these digital social emissions visible to the viewer — as mushrooming pop-ups overlaid on shots of smartphone users going about their everyday business, largely unaware of the pervasive tracking it enables.

Facebook is unlikely to be a fan of the treatment. In its own crisis PR around the Cambridge Analytica scandal it has sought to achieve the opposite effect; making it harder to join the data-dots embedded in its ad platform by seeking to deflect blame, bury key details and bore reporters and policymakers to death with reams of irrelevant detail — in the hope they might shift their attention elsewhere.

Data protection itself isn’t a topic that naturally lends itself to glamorous thriller treatment, of course. No amount of slick editing can transform the close and careful scrutiny of political committees into seat-of-the-pants viewing for anyone not already intimately familiar with the intricacies being picked over. And yet it’s exactly such thoughtful attention to detail that democracy demands. Without it we are all, to put it proverbially, screwed.

The Great Hack shows what happens when vital detail and context are cheaply ripped away at scale, via socially sticky content delivery platforms run by tech giants that never bothered to sweat the ethical detail of how their ad targeting tools could be repurposed by malign interests to sow social discord and/or manipulate voter opinion en masse.

Or indeed used by an official candidate for high office in a democratic society that lacks legal safeguards against data misuse.

But while the documentary packs in a lot over an almost two-hour span, retelling the story of Cambridge Analytica’s role in the 2016 Trump presidential election campaign; exploring links to the UK’s Brexit leave vote; and zooming out to show a little of the wider impact of social media disinformation campaigns on various elections around the world, the viewer is left with plenty of questions. Not least the ones Carroll repeats towards the end of the film: What information had Cambridge Analytica amassed on him? Where did they get it from? What did they use it for? He appears resigned to never knowing. The disgraced data firm chose to declare bankruptcy and fold back into its shell rather than hand over the stolen goods and its algorithmic secrets.

There’s no doubt about the answer to the other question Carroll poses early in the film: could he delete his information? The lack of control over what’s done with people’s information is the central point around which the documentary pivots. The key warning is that there’s no magical cleansing fire that can purge every digitally copied personal thing that’s put out there.

And while Carroll is shown able to tap into European data rights — purely by merit of Cambridge Analytica having processed his data in the UK — to try and get answers, the lack of control holds true in the US. Here, the absence of a legal framework to protect privacy is shown as the catalyzing fuel for the ‘great hack’ — and also shown enabling the ongoing data-free-for-all that underpins almost all ad-supported, Internet-delivered services. tl;dr: Your phone doesn’t need to listen to you if it’s tracking everything else you do with it.

The film’s other obsession is the breathtaking scale of the thing. One focal moment is when we hear another central character, Cambridge Analytica’s Brittany Kaiser, dispassionately recounting how data surpassed oil in value last year — as if that’s all the explanation needed for the terrible behavior on show.

“Data’s the most valuable asset on Earth,” she monotones. The staggering value of digital stuff is thus fingered as an irresistible, manipulative force also sucking in bright minds to work at data firms like Cambridge Analytica — even at the expense of their own claimed political allegiances, in the conflicted case of Kaiser.

The suggestion is that if knowledge is power and power corrupts, the construction can be refined further: data corrupts.

The filmmakers linger long on Kaiser, which can seem to humanize her, as they show what appear to be vulnerable or intimate moments. Yet they do this without ever entirely getting under her skin or allowing her role in the scandal to be fully resolved.

She’s often allowed to tell her narrative from behind dark glasses and a hat — which has the opposite effect on how we’re invited to perceive her. Questions about her motivations are never far away. It’s a human mystery linked to Cambridge Analytica’s money-minting algorithmic blackbox.

Nor is there any attempt by the filmmakers to mine Kaiser for answers themselves. It’s a documentary that spotlights mysteries and leaves questions hanging up there intact. From a journalist’s perspective that’s an inevitable frustration. Even as the story itself is much bigger than any one of its constituent parts.

It’s hard to imagine how Netflix could commission a straight up sequel to The Great Hack, given its central framing of Carroll’s data quest being combined with key moments of the Cambridge Analytica scandal. Large chunks of the film are made up of footage capturing scrutiny of, and reactions to, the story as it unfolded in real time.

But in displaying the ruthlessly transactional underpinnings of social platforms where the world’s smartphone users go to kill time, unwittingly trading away their agency in the process, Netflix has really just begun to open up the defining story of our time.



Gatik’s self-driving vans have started shuttling groceries for Walmart


Gatik AI, the autonomous vehicle startup that’s aiming for the sweet middle spot in the world of logistics, is officially on the road through a partnership with Walmart.

The company received approval from the Arkansas Highway Commissioner’s office to launch a commercial service with Walmart. Gatik’s autonomous vehicles (each with a human safety driver behind the wheel) are now delivering customers’ online grocery orders from Walmart’s main warehouse to its neighborhood stores in Bentonville, Arkansas.

The AVs will operate seven days a week on a two-mile route — the tiniest of slivers of Walmart’s overall business. But the goal here isn’t ubiquity just yet. Instead, Walmart is using this project to capture the kind of data that will help it learn how best to integrate autonomous vehicles into its stores and services.

Gatik uses Ford Transit vehicles outfitted with a self-driving system. Co-founder and CEO Gautam Narang has previously told TechCrunch that the company can fulfill a need in the market through a variety of use cases, including partnering with third-party logistics giants like Amazon, FedEx or even the U.S. Postal Service, as well as auto parts distributors, consumer goods companies, food and beverage distributors, and medical and pharmaceutical companies.

The company, which emerged from stealth in June, has raised $4.5 million in a seed round led by former CEO and executive chairman of Google Eric Schmidt’s Innovation Endeavors. Other investors include AngelPad, Dynamo Fund, Fontinalis Partners, Trucks Venture Capital and angel investor Lior Ron, who heads Uber Freight.

Gatik isn’t the only AV company working with Walmart. Walmart has partnerships with Waymo and Udelv. Both of these partnerships involve pilot programs in Arizona.

Udelv is testing the use of autonomous vans to deliver online grocery orders to customers. Last year, members of Waymo’s early rider program received grocery savings when they shopped from Walmart.com. The riders would then take a Waymo car to their nearby Walmart store for grocery pickup.



PurePort Is the All-in-One Cleaning Multitool for iPhones and iPads


One of the most frustrating things about modern smartphones is the fact that they don’t last as long as we’d like them to. Not everyone wants to buy a new phone every year or two. Instead, some people would prefer to have their phones last for years on end.

There’s nothing wrong with wanting to stick with a phone for the long run. A new device called PurePort aims to help iOS device owners do exactly that, by making it easy to keep their charging ports and other parts of the device clean and free of debris.

What Makes PurePort Stand Out?

There are a few things that make this multitool worth your attention. It’s quite small for the number of functions it offers: a total of seven tools designed to clean all of the important parts of an iPhone or iPad. The device is round, and you simply rotate it to access the particular tool you need.

Four different tools are included for cleaning the internals of a Lightning port, which tends to become filled with debris such as dust, pet hair, and other such nasties. PurePort includes two tools for cleaning and restoring the connectors on the Lightning cables themselves, which tend to get those black marks on them. Lastly, there’s a tool designed for cleaning the speakers.

I have two dogs in my home, and I’ve actually had issues with my iPhone charging. I took it to the Apple Store to get it fixed, and it turned out the Lightning port was just dirty. After a good cleaning, it charged like it did when it was new. With that in mind, this tool definitely seems like something I’d enjoy owning.

PurePort Availability

Kate Swinnerton, the creator of PurePort, is seeking funding for this new cleaning tool on Kickstarter. The project started with a modest funding goal, which it has far exceeded as of this writing. If everything goes according to plan, PurePort multitools will be delivered to backers in November 2019.

If you’re interested in ordering a PurePort for yourself, you can reserve one for $26.

As with all crowdfunding campaigns, there are risks involved in backing PurePort. Even though it’s already met its goal and Kickstarter is attempting to make sure creators are more honest, there are still some things you should consider before backing this or any other project.





Bias in AI: A problem recognized but still unresolved


There are those who praise the technology as the solution to some of humankind’s gravest problems, and those who demonize AI as the world’s greatest existential threat. Of course, these are two ends of the spectrum, and AI, surely, presents exciting opportunities for the future, as well as challenging problems to be overcome.

One of the issues that’s attracted much media attention in recent years has been the prospect of bias in AI. It’s a topic I wrote about in TechCrunch (Tyrant in the Code) more than two years ago. The debate rages on.

At the time, Google had come under fire when research showed that when a user searched online for “hands,” the image results were almost all white; but when searching for “black hands,” the images were far more derogatory depictions, including a white hand reaching out to offer help to a black one, or black hands working in the earth. It was a shocking discovery that led to claims that, rather than heal divisions in society, AI technology would perpetuate them.

As I asserted two years ago, it’s little wonder that such instances might occur. In 2017, at least, the vast majority of people designing AI algorithms in the U.S. were white males. And while there’s no implication that those people are prejudiced against minorities, it would make sense that they pass on their natural, unconscious bias in the AI they create.

And it’s not just Google algorithms at risk from biased AI. As the technology becomes increasingly ubiquitous across every industry, it will become more and more important to eliminate any bias in the technology.

Understanding the problem

AI was indeed important and integral in many industries and applications two years ago, but its importance has, predictably, only increased since then. AI systems are now used to help recruiters identify viable candidates, to help loan underwriters decide whether to lend money to customers, and even to help judges deliberate over whether a convicted criminal is likely to re-offend.

Of course, AI and data can certainly help humans make more informed decisions, but if that AI technology is biased, the results will be as well. If we continue to entrust the future of AI technology to a non-diverse group, then the most vulnerable members of society could be at a disadvantage in finding work, securing loans and being fairly tried by the justice system, plus much more.

AI is a revolution that will continue whether it’s wanted or not.

Fortunately, the issue around bias in AI has come to the fore in recent years, and more and more influential figures, organizations and political bodies are taking a serious look at how to deal with the problem.

The AI Now Institute is one such organization researching the social implications of AI technology. Launched in 2017 by research scientists Kate Crawford and Meredith Whittaker, AI Now focuses on the effect AI will have on human rights and labor, as well as how to safely integrate AI and how to avoid bias in the technology.

In May last year, the European Union put in place the General Data Protection Regulation (GDPR) — a set of rules that gives EU citizens more control over how their data is used online. And while it won’t do anything to directly challenge bias in AI technology, it will force European organizations (or any organization with European customers) to be more transparent in their use of algorithms. This will put extra pressure on companies to ensure they’re confident in the origins of the AI they’re using.

And while the U.S. doesn’t yet have a similar set of regulations around data use and AI, in December 2017, New York’s city council and mayor passed a bill calling for more transparency in AI, prompted by reports the technology was causing racial bias in criminal sentencing.

Despite research groups and government bodies taking an interest in the potentially damaging role biased AI could play in society, the responsibility largely falls to the businesses creating the technology, and whether they’re prepared to tackle the problem at its core. Fortunately, some of the largest tech companies, including those that have been accused of overlooking the problem of AI bias in the past, are taking steps to tackle the problem.

Microsoft, for instance, is now hiring artists, philosophers and creative writers to train AI bots in the dos and don’ts of nuanced language, such as to not use inappropriate slang or inadvertently make racist or sexist remarks. IBM is attempting to mitigate bias in its AI machines by applying independent bias ratings to determine the fairness of its AI systems. And in June last year, Google CEO Sundar Pichai published a set of AI principles that aims to ensure the company’s work or research doesn’t create or reinforce bias in its algorithms.

Demographics working in AI

Tackling bias in AI does indeed require individuals, organizations and government bodies to take a serious look at the roots of the problem. But those roots are often the people creating the AI services in the first place. As I posited in “Tyrant in the Code” two years ago, any left-handed person who’s struggled with right-handed scissors, ledgers and can-openers will know that inventions often favor their creators. The same goes for AI systems.

New data from the Bureau of Labor Statistics shows that the professionals who write AI programs are still largely white males. And a study conducted last August by Wired and Element AI found that only 12% of leading machine learning researchers are women.

This isn’t a problem completely overlooked by the technology companies creating AI systems. Intel, for instance, is taking active steps in improving gender diversity in the company’s technical roles. Recent data indicates that women make up 24% of the technical roles at Intel — far higher than the industry average. And Google is funding AI4ALL, an AI summer camp aimed at the next generation of AI leaders, to expand its outreach to young women and minorities underrepresented in the technology sector.

However, the statistics show there is still a long way to go if AI is going to reach the levels of diversity required to stamp out bias in the technology. Despite the efforts of some companies and individuals, technology companies are still overwhelmingly white and male.

Solving the problem of bias in AI

Of course, improving diversity within the major AI companies would go a long way toward solving the problem of bias in the technology. Business leaders responsible for distributing the AI systems that impact society will need to offer public transparency so that bias can be monitored, incorporate ethical standards into the technology and have a better understanding of who the algorithm is supposed to be targeting.

Governments and business leaders alike have some serious questions to ponder.

But without regulations from government bodies, these types of solutions could come about too slowly, if at all. And while the European Union has put in place GDPR that in many ways tempers bias in AI, there are no strong signs that the U.S. will follow suit any time soon.

Government, with the help of private researchers and think tanks, is moving quickly in this direction, grappling with how to regulate algorithms. Moreover, some companies, like Facebook, are claiming regulation could be beneficial. Nevertheless, high regulatory requirements for user-generated content platforms could help incumbents like Facebook by making it nearly impossible for new startups entering the market to compete.

The question is, what is the ideal level of government intervention that won’t hinder innovation?

Entrepreneurs often claim that regulation is the enemy of innovation, and with such a potentially game-changing, relatively nascent technology, any roadblocks should be avoided at all cost. However, AI is a revolution that will continue whether it’s wanted or not. It will go on to change the lives of billions of people, and so it clearly needs to be heading in an ethical, unbiased direction.

Governments and business leaders alike have some serious questions to ponder, and not much time to do it. AI is a technology that’s developing fast, and it won’t wait for indecisiveness. If the innovation is allowed to go on unchecked, with few ethical guidelines and a non-diverse group of creators, the results may lead to a deepening of divisions in the U.S. and worldwide.




Siri recordings “regularly” sent to Apple contractors for analysis, claims whistleblower


Apple has joined the dubious company of Google and Amazon in secretly sharing audio recordings of its users with contractors, confirming the practice to The Guardian after a whistleblower brought it to the outlet. The person said that Siri queries are routinely sent to human listeners for closer analysis, something not disclosed in Apple’s privacy policy.

The recordings are reportedly not associated with an Apple ID, but can be several seconds long, include content of a personal nature, and are paired with other revealing data, like location, app data, and contact details.

Like the other companies, Apple says this data is collected and analyzed by humans to improve its services, and that all analysis is done in a secure facility by workers bound by confidentiality agreements. And like the other companies, Apple failed to say that it does this until forced to.

Apple told The Guardian that less than one percent of daily queries are sent, cold comfort when the company is also constantly talking up the volume of Siri queries. Hundreds of millions of devices use the feature regularly, so even a conservative estimate of a fraction of one percent quickly climbs into the hundreds of thousands of recordings.
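The back-of-envelope arithmetic is easy to sketch. Every figure below is an illustrative assumption (Apple has not disclosed device counts or per-device query rates); the point is only how a tiny sampling fraction scales:

```python
# Back-of-envelope estimate of "less than one percent" at Siri's scale.
# All figures are illustrative assumptions, not numbers from Apple.
devices = 200_000_000            # "hundreds of millions" of Siri-capable devices
queries_per_device_per_day = 1   # assumed average daily Siri queries per device
sampled_fraction = 0.001         # a conservative tenth of one percent

sampled_per_day = devices * queries_per_device_per_day * sampled_fraction
print(f"{sampled_per_day:,.0f} recordings sampled per day")  # 200,000 recordings sampled per day
```

Even with these deliberately low assumptions, the daily haul of human-reviewed audio lands in the hundreds of thousands.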

This “small portion” of Siri requests is apparently randomly chosen, and as the whistleblower notes, it includes “countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on.”

Some of these activations of Siri will have been accidental, which is one of the things listeners are trained to listen for and identify. Accidentally recorded queries can be many seconds long and contain a great deal of personal information, even if it is not directly tied to a digital identity.

Only in the last month has it come out that Google sends clips to be analyzed in similar fashion, and that Amazon, which we knew recorded Alexa queries, retains that audio indefinitely.

Apple’s privacy policy states regarding non-personal information (which Siri queries would fall under):

We may collect and store details of how you use our services, including search queries. This information may be used to improve the relevancy of results provided by our services. Except in limited instances to ensure quality of our services over the Internet, such information will not be associated with your IP address.

It’s conceivable that the phrase “search queries” is inclusive of recordings of search queries. And it does say that it shares some data with third parties. But nowhere is it stated simply that questions you ask your phone may be recorded and shared with a stranger. Nor is there any way for users to opt out of this practice.

Given Apple’s focus on privacy and transparency, this seems like a major, and obviously a deliberate, oversight. I’ve contacted Apple for more details and will update this post when I hear back.




How Microsoft turns an obsession with detail into micron-optimized keyboards


Nestled among the many indistinguishable buildings of Microsoft’s Redmond campus, a multi-disciplinary team sharing an attention to detail that borders on fanatical is designing a keyboard… again and again and again. And one more time for good measure. Their dogged and ever-evolving dedication to “human factors” shows the amount of work that goes into making any piece of hardware truly ergonomic.

Microsoft may be known primarily for its software and services, but cast your mind back a bit and you’ll find a series of hardware advances that have redefined their respective categories:

The original Natural Keyboard was the first split-key, ergonomic keyboard, the fundamentals of which have only ever been slightly improved upon.

The Intellimouse Optical not only made the first truly popular leap away from ball-based mice, but did so in such a way that its shape and buttons still make its descendants among the best all-purpose mice on the market.


Although the Zune is remembered more for being a colossal boondoggle than a great music player, it was very much the latter, and I still use and marvel at the usability of my Zune HD. Yes, seriously. (Microsoft, open source the software!)

More recently, the Surface series of convertible notebooks have made bold and welcome changes to a form factor that had stagnated in the wake of Apple’s influential mid-2000s MacBook Pro designs.

Microsoft is still making hardware, of course, and in fact it has doubled down on its ability to do so with a revamped hardware lab filled with dedicated, extremely detail-oriented people who are given the tools they need to get as weird as they want — as long as it makes something better.

You don’t get something like this by aping the competition.

First, a disclosure: I may as well say at the outset that this piece was done essentially at the invitation (but not direction) of Microsoft, which offered the opportunity to visit their hardware labs in Building 87 and meet the team. I’d actually been there before a few times, but it had always been off-record and rather sanitized.

Knowing how interesting I’d found the place before, I decided I wanted to take part and share it at the risk of seeming promotional. They call this sort of thing “access journalism,” but the second part is kind of a stretch. I just think this stuff is really cool, and companies seldom expose their design processes in the open like this. Microsoft obviously isn’t the only company to have hardware labs and facilities like this, but it has been in the game for a long time and has an interesting, almost too detailed process it has decided to be open about.

Although I spoke with perhaps a dozen Microsoft Devices people during the tour (which was still rigidly structured), only two were permitted to be on record: Edie Adams, Chief Ergonomist, and Yi-Min Huang, Principal Design and Experience Lead. But the other folks in the labs were very obliging in answering questions and happy to talk about their work. I was genuinely surprised and pleased to find people occupying niches so suited to their specialities and inclinations.

Generally speaking, the work I got to see fell into three areas: the Human Factors Lab, focused on very exacting measurements of people themselves and how they interact with a piece of hardware; the anechoic chamber, where the sound of devices is obsessively analyzed and adjusted; and the Advanced Prototype Center, where devices and materials can go from idea to reality in minutes or hours.

The science of anthropometry

Inside the Human Factors Lab, human thumbs litter the table. No, it isn’t a torture chamber — not for humans, anyway. Here the company puts its hardware to the test by measuring how human beings use it, recording not just simple metrics like words per minute on a keyboard, but high-speed stereo footage that analyzes how the skin of the hand stretches when it reaches for a mouse button down to a fraction of a millimeter.

The trend here, as elsewhere in the design process and labs, is that you can’t count anything out as a factor that increases or decreases comfort; the little things really do make a difference, and sometimes the microscopic ones.

“Feats of engineering heroics are great,” said Adams, “but they have to meet a human need. We try to cover the physical, cognitive, and emotional interactions with our products.”

(Perhaps you take this, as I did, as — in addition to a statement of purpose — a veiled reference to a certain other company whose keyboards have been in the news for other reasons. More on this later.)

The lab is a space perhaps comparable to a medium-sized restaurant, with enough room for a dozen or so people to work in the various sub-spaces set aside for different highly specific measurements. Various models of body parts have been set out on work surfaces, I suspect for my benefit.

Among them are that set of thumbs, in little cases looking like oversized lipsticks, each with a disturbing surprise inside. These are all cast from real people, ranging from the small thumb of a child to a monster that, should it have started a war with mine, I would surrender unconditionally.

Next door is a collection of ears, not only rendered in extreme detail but with different materials simulating a variety of rigidities. Some people have soft ears, you know. And next door to those is a variety of noses, eyes, and temples, each representing a different facial structure or interpupillary distance.

This menagerie of parts represents not just a continuum of sizes but a variety of backgrounds and ages. All of them come into play when creating and testing a new piece of hardware.

“We want to make sure that we have a diverse population we can draw on when we develop our products,” said Adams. When you distribute globally it is embarrassing to find that some group or another, with wider-set eyes or smaller hands, finds your product difficult to use. Inclusivity is a many-faceted gem, indeed it has as many facets as you are willing to cut. (The Xbox Adaptive Controller, for instance, is a new and welcome one.)

In one corner stands an enormous pod that looks like Darth Vader should emerge from it. This chamber, equipped with 36 DSLR cameras, produces an unforgivingly exact reproduction of one’s head. I didn’t do it myself, but many on the team had; in fact, one eyes-and-nose combo belonged to Adams. The fellow you see pictured there also works in the lab; that was the first such 3D portrait they took with the rig.

With this they can quickly and easily scan in dozens or hundreds of heads, collecting metrics on all manner of physiognomical features and creating an enviable database of both average and outlier heads. My head is big, if you want to know, and my hand was on the upper range too. But well within a couple standard deviations.
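
The outlier math behind such a database is ordinary descriptive statistics. Here is a minimal sketch, using made-up head-breadth measurements (not Microsoft's data), of how a new scan might be checked against the collection:

```python
import statistics

# Hypothetical head-breadth samples in millimeters (illustrative only).
# A new measurement might be flagged as an outlier if it sits more than
# two standard deviations from the mean of the database.
samples_mm = [148, 152, 155, 157, 160, 161, 163, 166, 170, 175]
mean = statistics.mean(samples_mm)
stdev = statistics.stdev(samples_mm)

def z_score(measurement_mm: float) -> float:
    """How many standard deviations a measurement sits from the mean."""
    return (measurement_mm - mean) / stdev

# A biggish head, but still comfortably inside two standard deviations:
print(z_score(172))
print(abs(z_score(172)) < 2)  # True
```

The same z-score check works for any of the metrics collected — hand length, interpupillary distance, and so on — which is what makes a large scan database so useful for spotting which prototypes will pinch which populations.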

So much for static study — getting reads on the landscape of humanity, as it were. Anthropometry, they call it. But there are dynamic elements as well, some of which they collect in the lab, some elsewhere.

“When we’re evaluating keyboards, we have people come into the lab. We try to put them in the most neutral position possible,” explained Adams.

It should be explained that by neutral, she means specifically with regard to the neutral positions of the joints in the body, which have certain minima and maxima it is well to observe. How can you get a good read on how easy it is to type on a given keyboard if the chair and desk the tester is sitting at are uncomfortable?

Here as elsewhere the team strives to collect both objective data and subjective data; people will say they think a keyboard, or mouse, or headset is too this or too that, but not knowing the jargon they can’t get more specific. By listening to subjective evaluations and simultaneously looking at objective measurements, you can align the two and discover practical measures to take.

One such objective measure involves motion capture beads attached to the hand while an electromyographic bracelet tracks the activation of muscles in the arm. Imagine if you will a person whose typing appears normal and of uniform speed — but in reality they are putting more force on their middle fingers than the others because of the shape of the keys or rest. They might not be able to tell you they’re doing so, though it will lead to uneven hand fatigue; this combination of tools can reveal the fact.

“We also look at a range of locations,” added Huang. “Typing on a couch is very different from typing on a desk.”

One case, such as a wireless Surface keyboard, might require more of what Huang called “lapability,” while the other perhaps needs to accommodate a different posture and can abandon lapability altogether.

A final measurement technique that is quite new to my knowledge involves a pair of high-resolution, high-speed black and white cameras that can be focused narrowly on a region of the body. They’re on the right, below, with colors and arrows representing motion vectors.


A display showing various anthropometric measurements.

These produce a very detailed depth map by closely tracking the features of the skin; one little patch might move farther than another when a person puts on a headset, suggesting it’s stretching the skin on the temple more than it is on the forehead. The team said they can see movements as small as ten microns, or micrometers (therefore you see that my headline was only light hyperbole).

You might be thinking that this is overkill. And in a way it most certainly is. But it is also true that by looking closer they can make the small changes that cause a keyboard to be comfortable for five hours rather than four, or to reduce error rates or wrist pain by noticeable amounts — features you can’t really even put on the box, but which make a difference in the long run. The returns may diminish, but we’re not so far along the asymptote approaching perfection that there’s no point to making further improvements.

The quietest place in the world

Down the hall from the Human Factors lab is the quietest place in the world. That’s not a colloquial exaggeration — the main anechoic chamber in Building 87 at Microsoft is in the record books as the quietest place on Earth, with an official ambient noise rating of negative 20.3 decibels.
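
A negative decibel figure sounds impossible until you recall that sound pressure level is logarithmic and relative to a reference — conventionally 20 micropascals, roughly the threshold of human hearing. Any pressure below that reference comes out negative. A quick sketch of the standard formula:

```python
import math

# Sound pressure level (SPL) in decibels, relative to the conventional
# reference pressure of 20 micropascals (~threshold of human hearing):
#   SPL = 20 * log10(p / p_ref)
P_REF = 20e-6  # pascals

def spl_db(pressure_pa: float) -> float:
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0.0 dB: the threshold of hearing itself
print(spl_db(2e-6))   # about -20 dB: a tenth of the reference pressure
```

So a rating of negative 20.3 decibels simply means the chamber’s residual noise involves pressure fluctuations about a tenth of what an average human ear can detect at all.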

You enter the room through a series of heavy doors and the quietness, though a void, feels like a physical medium that you pass into. And so it is, in fact — a near-total lack of vibrations in the air that feels as solid as the nested concrete boxes inside which the chamber rests.

I’ve been in here a couple times before, and Hundraj Gopal, the jovial and highly expert proprietor of quietude here, skips the usual tales of Guinness coming to test it and so on. Instead we talk about the value of sound to the consumer, though they may not even realize they value it.

Naturally if you’re going to make a keyboard, you’re going to want to control how it sounds. But this is a surprisingly complex process, especially if, like the team at Microsoft, you’re really going to town on the details.

The sounds of consumer products are very deliberately designed, they explained. The sound your car door makes when it shuts gives a sense of security — being sealed in when you’re entering, and being securely shut out when you’re leaving it. It’s the same for a laptop — you don’t want to hear a clank when you close it, or a scraping noise when you open it. These are the kinds of things that set apart “premium” devices (and cars, and controllers, and furniture, etc) and they do not come about by accident.

Keyboards are no exception. And part of designing the sound is understanding that there’s more to it than loudness or even tone. Some sounds just sound louder, though they may not register as high in decibels. And some sounds are just more annoying, though they might be quiet. The study and understanding of this is what’s known as psychoacoustics.
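
One textbook illustration of this gap between measurement and perception — a standard psychoacoustics approximation, not anything Microsoft described — is the phon-to-sone mapping, which encodes the fact that perceived loudness roughly doubles for every 10-phon increase in loudness level:

```python
# Loudness level (phons) vs. perceived loudness (sones), per the common
# psychoacoustic approximation: 40 phons = 1 sone, and perceived loudness
# doubles with every additional 10 phons.
def phons_to_sones(phons: float) -> float:
    return 2 ** ((phons - 40) / 10)

print(phons_to_sones(40))  # 1.0 sone (baseline)
print(phons_to_sones(50))  # 2.0 sones: +10 phons sounds twice as loud
```

The nonlinearity is the point: two keyboards can register nearly identical decibel levels while one is perceived as markedly louder, or more grating, than the other.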

There are known patterns to pursue, certain combinations of sounds that are near-universally liked or disliked, but you can’t rely on that kind of thing when you’re, say, building a new keyboard from the ground up. And obviously when you create a new machine like the Surface and its family they need new keyboards, not something off the shelf. So this is a process that has to be done from scratch over and over.

As part of designing the keyboard — and keep in mind, this is in tandem with the human factors mentioned above and the rapid prototyping we’ll touch on below — the device has to come into the anechoic chamber and have a variety of tests performed.


A standard head model used to simulate how humans might hear certain sounds. The team gave it a bit of a makeover.

These tests can be painstakingly objective, like a robotic arm pressing each key one by one while a high-end microphone records the sound in perfect fidelity and analysts pore over the spectrogram. But they can also be highly subjective: They bring in trained listeners — “golden ears” — to give their expert opinions, but also have the “gen pop” everyday users try the keyboards while experiencing calibrated ambient noise recorded in coffee shops and offices. One click sound may be lost in the broad-spectrum hubbub in a crowded cafe but annoying when it’s across the desk from you.

This feedback goes both directions, to human factors and prototyping, and they iterate and bring it back for more. This progresses sometimes through multiple phases of hardware, such as the keyswitch assembly alone; the keys built into their metal enclosure; the keys in the final near-shipping product before they finalize the keytop material, and so on.

Indeed, it seems like the process really could go on forever if someone didn’t stop them from refining the design further.

“It’s amazing that we ever ship a product,” quipped Adams. They can probably thank the Advanced Prototype Center for that.

Rapid turnaround is fair play

If you’re going to be obsessive about the details of the devices you’re designing, it doesn’t make a lot of sense to have to send off a CAD file to some factory somewhere, wait a few days for it to come back, then inspect for quality, send a revised file, and so on. So Microsoft (and of course other hardware makers of any size) now use rapid prototyping to turn designs around in hours rather than days or weeks.

This wasn’t always possible even with the best equipment. 3D printing has come a long way over the last decade, and continues to advance, but not long ago there was a huge difference between a printed prototype and the hardware that a user would actually hold.

Multi-axis CNC mills have been around for longer, but they’re slower and more difficult to operate. And subtractive manufacturing (i.e. taking a block and whittling it down to a mouse) is inefficient and has certain limitations as far as the structures it can create.

Of course you could carve it yourself out of wood or soap, but that’s a bit old-fashioned.

So when Building 87 was redesigned from the ground up some years back, it was loaded with the latest and greatest of both additive and subtractive rapid manufacturing methods, and the state of the art has been continually rolling through ever since. Even as I passed through they were installing some new machines (desk-sized things that had slots for both extrusion materials and ordinary printer ink cartridges, a fact that for some reason I found hilarious).

The additive machines are in constant use as designers and engineers propose new device shapes and styles that sound great in theory but must be tested in person. Having a bunch of these things, each able to produce multiple items per print, lets you for instance test out a thumb scoop on a mouse with 16 slightly different widths. Maybe you take those over to Human Factors and see which can be eliminated for over-stressing a joint, then compare comfort on the surviving 6 and move on to a new iteration. That could all take place over a day or two.


Ever wonder what an Xbox controller feels like to a child? Just print a giant one in the lab.

Softer materials have become increasingly important as designers have found that they can be integrated into products from the start. For instance, a wrist rest for a new keyboard might have foam padding built in.

But how much foam is too much, or too little? As with the 3D printers, flat materials like foam and cloth can be customized and systematically tested as well. Using a machine called a skiver, foam can be split into thicknesses only half a millimeter apart. It doesn’t sound like much — and it isn’t — but when you’re creating an object that will be handled for hours at a time by the sensitive hands of humans, the difference can be subtle but substantial.

For more heavy-duty prototyping of things that need to be made out of metal — hinges, laptop frames, and so on — there is bank after bank of 5-axis CNC machines, lathes, and more exotic tools, like a system that performs extremely precise cuts using a charged wire.


The engineers operating these things work collaboratively with the designers and researchers, and it was important to the people I talked to that this wasn’t a “here, print this” situation. A true collaboration has input from both sides, and that is what seems to be happening here. Someone inspecting a 3D model for printability before popping it into the 5-axis might say to the designer, you know, these pieces could fit together more closely if we did so-and-so, and it would actually add strength to the assembly. (Can you tell I’m not an engineer?) Making stuff, and making stuff better, is a passion among the crew and that’s a fundamentally creative drive.

Making fresh hells for keyboards

If any keyboard has dominated the headlines for the last year or so, it’s been Apple’s ill-fated butterfly switch keyboard on the latest MacBook Pros. While being in my opinion quite unpleasant to type on, they appeared to fail at an astonishing rate judging by the proportion of users I saw personally reporting problems, and are quite expensive to replace. How, I wondered, did a company with Apple’s design resources create such a dog?


Here’s a piece of hardware you won’t break any time soon.

I mentioned the subject to the group towards the end of the tour but, predictably and understandably, it wasn’t really something they wanted to talk about. But a short time later I spoke with one of Microsoft’s reliability managers. They too demurred on the topic of Apple’s failures, opting instead to describe at length the measures Microsoft takes to ensure that its own keyboards don’t suffer a similar fate.

The philosophy is essentially to simulate everything about the expected 3-5 year life of the keyboard. There are “torture chambers” where devices are beaten on by robots (I’ve seen these personally, years ago; they’re brutal), but there’s more to it than that. Keyboards are everyday objects, and they face everyday threats; so that’s what the team tests, with things falling into three general categories:

Environmental: This includes cycling the temperature from very low to very high and exposing the keyboard to dust and UV. This differs for each product, since some will obviously be used outside more than others. Does it break? Does it discolor? Where does the dust go?

Mechanical: Every keyboard undergoes key tests to make sure that keys can withstand however many million presses without failing. But that’s not all keyboards undergo. They get dropped and things get dropped on them, of course, or they’re left upside-down, or have their keys pressed and held at weird angles. All these things are tested, and when a keyboard fails in a way they don’t yet test for, they add that test.

Chemical: I found this very interesting. The team now has more than 30 chemicals that it exposes its hardware to, including lotion, Coke, coffee, chips, mustard, ketchup, and Clorox. The team is constantly adding to the list as new chemicals enter frequent usage or new markets open up. Hospitals, for instance, need to test a variety of harsh disinfectants that an ordinary home wouldn’t have. (Note: Burt’s Bees is apparently bad news for keyboards.)

Testing is ongoing, with new batches being evaluated continuously as time allows.

To be honest it’s hard to imagine that Apple’s disappointing keyboard actually underwent this kind of testing, or if it did, that it was modified to survive it. The number and severity of problems I’ve heard of with them suggest the “feats of engineering heroics” of which Adams spoke, but directed single-mindedly in the direction of compactness. Perhaps more torture chambers are required at Apple HQ.

7 factors and the unfactorable

All the above are tools for executing a design, not for creating one to begin with. That’s a whole other kettle of fish, and one not so easily described.

Adams told me: “When computers were on every desk the same way, it was okay to only have one or two kinds of keyboard. But now that there are so many kinds of computing, it’s okay to have a choice. What kind of work do you do? Where do you do it? I mean, what do we all type on now? Phones. So it’s entirely context dependent.”


Is this the right curve? Or should it be six millimeters higher? Let’s try both.

Yet even in the great variety of all possible keyboards there are metrics that must be considered if that keyboard is to succeed in its role. The team boiled it down to seven critical points:

  • Key travel: How far a key goes until it bottoms out. Neither shallow nor deep is necessarily better; they serve different purposes.
  • Key spacing: Distance between the center of one key and the next. How far can you differ from “full-size” before it becomes uncomfortable?
  • Key pitch: On many keyboards the keys do not all “face” the same direction, but are subtly pointed towards the home row, because that’s the direction your fingers hit them from. How much is too much? How little is too little?
  • Key dish: The shape of the keytop limits your fingers’ motion, captures them when they travel or return, and provides a comfortable home — if it’s done right.
  • Key texture: Too slick and fingers will slide off. Too rough and it’ll be uncomfortable. Can it be fabric? Textured plastic? Metal?
  • Key sound: As described above, the sound indicates a number of things and has to be carefully engineered.
  • Force to fire: How much actual force does it take to drive a given key to its actuation point? Keep in mind this can and perhaps should differ from key to key.
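
As a sketch, the seven factors above could be captured in a single spec record. All field names and values here are illustrative placeholders, not actual Microsoft parameters:

```python
from dataclasses import dataclass

@dataclass
class KeySpec:
    """The seven critical keyboard metrics from the article, as one record.
    All values below are hypothetical, for illustration only."""
    travel_mm: float           # key travel: distance until bottom-out
    spacing_mm: float          # center-to-center distance between keys
    pitch_deg: float           # angle keys "face" toward the home row
    dish_depth_mm: float       # concavity of the keytop
    texture: str               # keytop material/finish
    sound_profile: str         # engineered click character
    actuation_force_gf: float  # force to fire, in gram-force

# A plausible (hypothetical) laptop-style chiclet key:
laptop_key = KeySpec(
    travel_mm=1.3, spacing_mm=19.0, pitch_deg=0.0,
    dish_depth_mm=0.3, texture="smooth plastic",
    sound_profile="muted tap", actuation_force_gf=65.0,
)
print(laptop_key)
```

Note that in a real design each key could carry its own record — the article points out that force to fire, for one, can and perhaps should differ from key to key.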

In addition to these core concepts there are many secondary ones that pop up for consideration: wobble, the amount a key moves laterally (yes, this is deliberate); snap ratio, involving the feedback from actuation; drop angle; off-axis actuation; key gap for chiclet boards… and of course the inevitable switch debate.

Keyboard switches, the actual mechanism under the key, have become a major sub-industry as many companies started making their own at the expiration of a few important patents. Hence there’s been a proliferation of new key switches with a variety of aspects, especially on the mechanical side. Microsoft does make mechanical keyboards, and scissor-switch keyboards, and membrane as well, and perhaps even some more exotic ones (though the original touch-sensitive Surface cover keyboard was a bit of a flop).

“When we look at switches, whether it’s for a mouse, QWERTY, or other keys, we think about what they’re for,” said Adams. “We’re not going to say we’re scissor switch all the time or something — we have all kinds. It’s about durability, reliability, cost, supply, and so on. And the sound and tactile experience is so important.”

As for the shape itself, there is generally the divided Natural style, the flat full style, and the flat chiclet style. But with design trends, new materials, new devices, and changes to people and desk styles (you better believe a standing desk needs a different keyboard than a sitting one), it’s a new challenge every time.


They collected a menagerie of keyboards and prototypes in various stages of experimentation. Some were obviously never meant for real use — one had the keys pitched so far that it was like a little cave for the home row. Another was an experiment in how much a design could be shrunk until it was no longer usable. A handful showed different curves a la Natural — which is the right one? Although you can theorize, the only way to be sure is to lay hands on it. So tell rapid prototyping to make variants 1-10, then send them over to Human Factors and test the stress and posture resulting from each one.

“Sure, we know the gable slope should be between 10-15 degrees and blah blah blah,” said Adams, who is actually on the patent for the original Natural Keyboard, and so is about as familiar as you can get with the design. “But what else? What is it we’re trying to do, and how are we achieving that through engineering? It’s super fun bringing all we know about the human body and bringing that into the industrial design.”

Although the comparison is rather grandiose, I was reminded of an orchestra — but not in full swing. Rather, in the minutes before a symphony begins, and all the players are tuning their instruments. It’s a cacophony in a way, but they are all tuning towards a certain key, and the din gradually makes its way to a pleasant sort of hum. So it is that a group of specialists all tending their sciences and creeping towards greater precision seem to cohere a product out of the ether that is human-centric in all its parts.



How parking app SpotHero is preparing for an era of driverless cars


On-demand parking app SpotHero wants to be ready for the day when autonomous vehicles are ubiquitous. Its strategy: target human-driven car-sharing fleets today.

The Chicago-based company, which has operations in San Francisco, New York, Washington, D.C. and Seattle, has launched a new service dubbed SpotHero for Fleets that targets shared mobility and on-demand services.

The service aims to be a one-stop shop for car-sharing and commercial fleets, handling everything that goes into ensuring access and the right number of designated parking areas on any given day within SpotHero’s network of 6,500 garages across 300 cities.

That means everything from managing the relationships between garage owners and the fleet companies to proper signage so car-sharing customers can find the vehicles, as well as flexible plans that account for seasonal demands on businesses.

Under the new service, customers are able to source and secure parking inventory in high-traffic areas across multiple cities and pay per use across multiple parking facilities on one invoice to streamline payments. 

The service also aims to solve the central problem of accessing commercial garages, Elan Mosbacher, SpotHero’s head of strategy and operations, said in a recent interview.

“How does a car get in and out of the garage when the driver driving that car isn’t necessarily the one paying for the parking?” Mosbacher asked rhetorically. The service provides access to gated parking facilities to provide more pickup and drop-off points for shared cars.

The company’s core competency — its bread and butter since launching in 2011 — has been directed at connecting everyday drivers to parking spots in thousands of garages across North America.

That focus has expanded in the past eight years, with the company adding other services as urban density has increased and on-street parking has become more jumbled and confused thanks to an increase in traffic, ride-hailing and on-demand delivery services that take up valuable curb space.

“Our platform has evolved as more trends emerge around everything from connected cars to urban mobility apps to fleets to autonomous vehicles. More and more companies are reaching out to us about how to leverage our network and our API to serve parking from their interface to their audience of drivers,” said Mosbacher.

For instance, just last month, SpotHero announced it was integrating Waze, the navigation app owned by Google, into its app to help customers find the best and most direct route to their pre-booked parking spot. The company has also partnered with Moovit as well as expanded into the corporate world with firms such as the Associated Press, Caterpillar and US Cellular.

SpotHero could continue to scale up with this consumer-focused business model. However, the company saw two overlapping opportunities that center around car-sharing fleets.

In the past year, SpotHero has been approached by a number of autonomous vehicle companies acknowledging that one day they’re going to have to solve parking, Mosbacher said. But these companies aren’t even ready to launch pilot programs.

The company realized there was a use case and an opportunity today for human-driven car-sharing fleets.

“What we’re doing now is leveraging our network of services, hardware and software to solve a number of business problems around car-sharing fleets, with the hope that the technology and infrastructure improve and accelerate to a point when autonomous vehicles are capable of parking using our network,” Mosbacher said.

That opportunity is poised to get a lot wider in the next decade. Deloitte predicts that by 2030 shared vehicles will overtake personally owned vehicles in urban areas. As car-share fleets grow, companies are increasingly tasked with solving for complex parking needs at scale, according to SpotHero.

The company has signed on car-sharing companies and other commercial fleets, although it’s not naming them yet.

The business of parking — and its potential to tap fleets of human-driven and someday even driverless vehicles — has attracted venture funds. SpotHero has raised $67.6 million to date.

And there’s good reason investors and parking app companies like SpotHero are jumping in to “solve parking.” A study by Inrix released in 2017 found that, on average, U.S. drivers spend 17 hours per year searching for parking at a cost of $345 per driver in wasted time, fuel and emissions.
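
A quick back-of-the-envelope check on those Inrix figures:

```python
# Per the Inrix study quoted above: 17 hours per year searching for
# parking, at a total cost of $345 per driver in time, fuel and emissions.
hours_per_year = 17
cost_per_driver_usd = 345

# Implied cost of each hour spent circling the block:
cost_per_hour = cost_per_driver_usd / hours_per_year
print(round(cost_per_hour, 2))  # roughly $20 per hour of searching
```

That implied rate — around $20 for every hour spent hunting for a spot — is a reasonable shorthand for why drivers (and investors) find pre-booked parking attractive.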



Old-school Doom and its sequels come to Switch, Xbox One, and PS4


Thinking about what to do this weekend? Think no more. Doom, Doom II, and Doom 3 have all just appeared on the Switch, Xbox One, and PS4, giving you no excuse not to play these classics. All the time. Over and over. Rip and tear!

The announcement was made at QuakeCon 2019, the annual gathering of slayers and gibbers where id Software usually shows off its latest wares. Or in this case, its earliest.

At $5 each, the original Doom and Doom II should provide dozens of hours of old-school fun. I’ve found in revisiting these games that the level design really is spectacular and the gameplay, while of course simple compared to your Dishonors or your Division 2s, is also elegant and carefully calibrated. It’s also amazing how scary these games can still be.

Not that you haven’t had ample opportunity to play them — and the thousands of free maps available for PC players — these last couple decades. But if your console of choice, with your surround sound system and big screen, is how you tend to play games, then perhaps it’s worth a tenner to put these enduring classics on there.

Importantly, these include 4-player split-screen deathmatch and co-op. Probably been a while since you played it that way, right?

As for Doom 3 — well, my most salient memory of the game is playing the leaked Alpha version, which scared the pants off me and almost put me off the actual game. It was a huge graphical advance at the time and due to its deliberate use of lighting still looks pretty cool, though of course highly primitive in other ways.

Is it still any good to play? $10 lets you find out.

The original two games are also officially available on iOS, and will, amazingly, run at about a dozen times the resolution they originally did back in the ’90s.

