25 March 2018

Zuck apologizes for Cambridge Analytica scandal with full-page print ad


Facebook chief Mark Zuckerberg has taken out a full-page ad in the Washington Post, the New York Times, the Wall Street Journal and six UK papers today to apologize for the Cambridge Analytica scandal, according to CNN’s Brian Stelter.

The ad starts in bold letters, saying:

“We have a responsibility to protect your information. If we can’t, we don’t deserve it.”

The ad was published on Sunday, following Zuck’s first public acknowledgement of the issue on Facebook and a subsequent media tour earlier this week.

Congress has also put Mark Zuckerberg on notice that he may be called in to speak with them, with Senator Kennedy of Louisiana encouraging Zuck to “do the common sense thing and roll up his sleeves and take a meaningful amount of time talking to [them].”

For those of you still unsure what’s going on with Facebook and Cambridge Analytica, you can see a full play-by-play here.

Here’s the full transcript from the print ad:

We have a responsibility to protect your information. If we can’t, we don’t deserve it.

You may have heard about a quiz app built by a university researcher that leaked Facebook data of millions of people in 2014. This was a breach of trust, and I’m sorry we didn’t do more at the time. We’re now taking steps to make sure this doesn’t happen again.

We’ve already stopped apps like this from getting so much information. Now we’re limiting the data apps get when you sign in using Facebook.

We’re also investigating every single app that had access to large amounts of data before we fixed this. We expect there are others. And when we find them, we will ban them and tell everyone affected.

Finally, we’ll remind you of which apps you’ve given access to your information — so you can shut off the ones you don’t want anymore.

Thank you for believing in this community. I promise to do better for you.

Mark Zuckerberg


Read Full Article

A $6 trillion wake up call for the tech industry


Earlier this year, the business community received a wake-up call issued with all of the might that $6 trillion can muster.

The call came from Laurence Fink, the founder and chief executive of the global investment firm BlackRock, and was delivered as a letter to the CEOs of the world’s largest companies.

Aptly titled “A Sense of Purpose,” the letter informed business leaders that driving record profits is no longer enough to garner BlackRock’s support. Companies must also positively contribute to society, or in Mr. Fink’s words, “Companies must benefit all of their stakeholders, including shareholders, employees, customers, and the communities in which they operate.”

I was elated when I read the letter. I’ve spent my entire career as a social entrepreneur advocating for businesses—specifically technology businesses in Silicon Valley—to use their technology, wealth, and influence for social good. After reading the letter in the New York Times and seeing the extensive coverage in major business publications, I turned to the leading Silicon Valley tech blogs to get their take on this blockbuster announcement. After all, the Bay Area is home to many of BlackRock’s largest clients.

Crickets. Fink’s letter wasn’t covered by the technology press. Well, to be accurate, I checked the first ten pages of Google results as well as all of the tech pubs in Techmeme’s top ten list. Nothing.

Guys (I hate to say it, but it’s mostly guys here in the Valley), Fink’s point is that ignoring society’s voice will lead to the loss of our “license to operate.” Putting the Valley’s collective hands over our ears and saying “we can’t hear you” only works for so long.

Instead, what if Silicon Valley embraced the letter and committed good for the betterment of society as a whole, not just the interests of the software and data industrial complex? What if Fink’s letter served as a constant reminder to build products that make the world a 10x more equitable place to live and prosper, and not just products that deliver 10x profit?

With those questions in mind, here are two interrelated and crucial ways to commit good on purpose while making sure Silicon Valley technology companies embrace “A Sense of Purpose.”

Put People Before Algorithms. The goal of algorithms must not be to replace, manipulate, or deceive in the name of profit. This is all too often the case as black-box algorithms use massive amounts of data to attract eyeballs, encourage clicks, and, in more dire circumstances, even determine if someone goes to prison.

We must always ask up front how unaccountable algorithms impact individuals and society as a whole. Instead of eyeballs, clicks, and even prison time served, algorithms should be optimized to make people better—more efficient in their jobs, more informed in their daily lives, and more connected to their communities. We must make a conscious effort to analyze and identify the risks of algorithms-gone-rogue before they result in disasters. Let’s not only ask, “How can we make more money?” but also, “What could go wrong?”
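
To make that reframing concrete, here is a toy sketch of the difference between an objective that scores content purely on predicted clicks and one that prices in the human-centered terms argued for above. Every feature name and weight is a hypothetical stand-in, not any company’s real ranking code:

```python
# Toy illustration only: all feature names and weights are hypothetical.

def engagement_only_score(item):
    # Optimizes for eyeballs and clicks alone.
    return item["predicted_clicks"]

def people_first_score(item, w_informed=0.5, w_community=0.3, w_harm=1.0):
    # Rewards predictions that the item informs and connects people,
    # and makes "What could go wrong?" an explicit penalty term.
    return (
        item["predicted_clicks"]
        + w_informed * item["predicted_informativeness"]
        + w_community * item["predicted_community_value"]
        - w_harm * item["predicted_harm"]
    )

post = {
    "predicted_clicks": 0.9,
    "predicted_informativeness": 0.1,
    "predicted_community_value": 0.2,
    "predicted_harm": 0.8,
}
print(engagement_only_score(post))  # 0.9: ranks high on clicks alone
print(people_first_score(post))     # 0.21: the harm term drags it down
```

The particular weights don’t matter; the point is that “What could go wrong?” becomes an explicit, inspectable term in the objective rather than an afterthought.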

Risk-benefit analysis already takes place around boardroom tables by those with monetary interests, but those conversations fail to include the diverse voices of the communities that will feel the decision’s impact. There will never be perfect clarity around what will unfold after a decision is made. That’s exactly why decisions that impact thousands, millions, and even billions of people must include all company stakeholders—shareholders, employees, customers, and the communities in which they operate—if we are ever to prevent a world where algorithms reign supreme in the name of profit.

Treat Diversity as Our Greatest Asset. It’s very easy to discount points of view, values, and even someone’s humanity when the voice of diversity is not present. Establishing diversity as a core company principle is a good start, but it’s not enough. Diversity must be omnipresent and it must be truly embraced across an organization as an asset, not a statistic.

Many in Silicon Valley will tell you that diversity has been a top priority for years, only to follow with reports that cite a 2% increase in women employees, 0% increase in black employees, and no data at all on the number of employees with disabilities. Let’s not conflate transparency with priority. We must increase diversity now while investing in STEM education and training to create a more diverse pipeline of workers for tomorrow’s technology jobs. By making the workforce of today and tomorrow more diverse, we make our communities more diverse. We are then one step closer to never discounting a point of view, value, or someone’s entire humanity due to a lack of voice.

It’s not too late to use Mr. Fink’s letter as a wake-up call for Silicon Valley to commit good on purpose. While the two proposals detailed in this article are aspirational, they have at their core something much more valuable than $6 trillion. These ideas are about regaining Silicon Valley’s conscience. They are about investing in a collective future that prizes diversity and equality, not a future that allows technology, data, and algorithms to further entrench the inequality that we face today in Silicon Valley and everywhere that feels our impact.


Read Full Article

Facebook was warned about app permissions in 2011


Who’s to blame for the leaking of 50 million Facebook users’ data? Facebook founder and CEO Mark Zuckerberg broke several days of silence in the face of a raging privacy storm to go on CNN this week to say he was sorry. He also admitted the company had made mistakes; said it had breached the trust of users; and said he regretted not telling Facebookers at the time their information had been misappropriated.

Meanwhile, shares in the company have been taking a battering. And Facebook is now facing multiple shareholder and user lawsuits.

Pressed on why he didn’t inform users in 2015, when Facebook says it found out about this policy breach, Zuckerberg avoided a direct answer — instead fixing on what the company did do (asked Cambridge Analytica and the developer whose app was used to suck out the data to delete it), rather than explaining the thinking behind what it did not do (tell affected Facebook users their personal information had been misappropriated).

Essentially Facebook’s line is that it believed the data had been deleted — and presumably, therefore, it calculated (wrongly) that it didn’t need to inform users because it had made the leak problem go away via its own backchannels.

Except of course it hadn’t. Because people who want to do nefarious things with data rarely play exactly by your rules just because you ask them to.

There’s an interesting parallel here with Uber’s response to a 2016 data breach of its systems. In that case, instead of informing the ~57M affected users and drivers that their personal data had been compromised, Uber’s senior management also decided to try and make the problem go away — by asking (and in their case paying) hackers to delete the data.

Aka the trigger response for both tech companies to massive data protection fuck-ups was: Cover up; don’t disclose.

Facebook denies the Cambridge Analytica instance is a data breach — because, well, its systems were so laxly designed as to actively encourage vast amounts of data to be sucked out, via API, without the check and balance of those third parties having to gain individual-level consent.

So in that sense Facebook is entirely right; technically what Cambridge Analytica did wasn’t a breach at all. It was a feature, not a bug.

Clearly that’s also the opposite of reassuring.

Yet Facebook and Uber are companies whose businesses rely entirely on users trusting them to safeguard personal data. The disconnect here is gapingly obvious.

What’s also crystal clear is that rules and systems designed to protect and control personal data, combined with active enforcement of those rules and robust security to safeguard systems, are absolutely essential to prevent people’s information being misused at scale in today’s hyperconnected era.

But before you say hindsight is 20/20 vision, the history of this epic Facebook privacy fail is even longer than the under-disclosed events of 2015 suggest — i.e. when Facebook claims it found out about the breach as a result of investigations by journalists.

What the company very clearly turned a blind eye to is the risk posed by its own system of loose app permissions that in turn enabled developers to suck out vast amounts of data without having to worry about pesky user consent. And, ultimately, for Cambridge Analytica to get its hands on the profiles of ~50M US Facebookers for dark ad political targeting purposes.

European privacy campaigner and lawyer Max Schrems — a long time critic of Facebook — was actually raising concerns about Facebook’s lax attitude to data protection and app permissions as long ago as 2011.

Indeed, in August 2011 Schrems filed a complaint with the Irish Data Protection Commission flagging exactly this app permissions data sinkhole (Ireland being the focal point for the complaint because that’s where Facebook’s European HQ is based).

“[T]his means that not the data subject but “friends” of the data subject are consenting to the use of personal data,” wrote Schrems in the 2011 complaint, fleshing out consent concerns with Facebook’s friends’ data API. “Since an average facebook user has 130 friends, it is very likely that only one of the user’s friends is installing some kind of spam or phishing application and is consenting to the use of all data of the data subject. There are many applications that do not need to access the users’ friends personal data (e.g. games, quizzes, apps that only post things on the user’s page) but Facebook Ireland does not offer a more limited level of access than “all the basic information of all friends”.”

“The data subject is not given an unambiguous consent to the processing of personal data by applications (no opt-in). Even if a data subject is aware of this entire process, the data subject cannot foresee which application of which developer will be using which personal data in the future. Any form of consent can therefore never be specific,” he added.
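
For readers who want the mechanics, here is a minimal sketch of what an app built against the pre-2014 friends data API could do with a single user’s permission grant. The endpoint shapes and friends_* permission names reflect that era of the Graph API; the token and field list are illustrative assumptions, not code from any actual app:

```python
import requests

GRAPH = "https://graph.facebook.com"
TOKEN = "USER_ACCESS_TOKEN"  # hypothetical token from ONE consenting app user

# Step 1: that single consenting user yields their entire friend list.
friends = requests.get(
    f"{GRAPH}/me/friends", params={"access_token": TOKEN}
).json().get("data", [])

# Step 2: with friends_* permissions granted by that one user, the app
# could read profile fields of friends who never installed the app and
# never consented themselves; this is the gap Schrems' complaint flagged.
for friend in friends:
    profile = requests.get(
        f"{GRAPH}/{friend['id']}",
        params={"access_token": TOKEN, "fields": "name,birthday,likes"},
    ).json()
    print(profile.get("name"))
```

At the ~130 friends per user Schrems cites, a modest number of consenting installers fans out to millions of non-consenting profiles.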

As a result of Schrems’ complaint, the Irish DPC audited and re-audited Facebook’s systems in 2011 and 2012. The result of those data audits included a recommendation that Facebook tighten app permissions on its platform, according to a spokesman for the Irish DPC, who we spoke to this week.

The spokesman said the DPC’s recommendation formed the basis of the major platform change Facebook announced in 2014 — aka shutting down the Friends data API. The change came too late to prevent Cambridge Analytica from harvesting millions of profiles’ worth of personal data via a survey app, though, because Facebook only rolled it out gradually, finally closing the door in May 2015.

“Following the re-audit… one of the recommendations we made was in the area of the ability to use friends data through social media,” the DPC spokesman told us. “And that recommendation that we made in 2012, that was implemented by Facebook in 2014 as part of a wider platform change that they made. It’s that change that they made that means that the Cambridge Analytica thing cannot happen today.

“They made the platform change in 2014, their change was for anybody new coming onto the platform from 1st May 2014 they couldn’t do this. They gave a 12 month period for existing users to migrate across to their new platform… and it was in that period that… Cambridge Analytica’s use of the information for their data emerged.

“But from 2015 — for absolutely everybody — this issue with CA cannot happen now. And that was following our recommendation that we made in 2012.”
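
For contrast, here is a sketch of the post-change behavior the spokesman describes, under the versioned API Facebook introduced with the 2014 platform change (Graph API v2.0; the token is again a hypothetical stand-in):

```python
import requests

TOKEN = "USER_ACCESS_TOKEN"  # hypothetical token under the post-2014 platform

# The same friends call now returns only friends who have ALSO authorized
# this app (via the user_friends permission); profile fields of
# non-consenting friends are no longer reachable with this token.
resp = requests.get(
    "https://graph.facebook.com/v2.0/me/friends",
    params={"access_token": TOKEN},
).json()
print([f.get("name") for f in resp.get("data", [])])
```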

Given his 2011 complaint about Facebook’s expansive and abusive historical app permissions, Schrems has this week raised an eyebrow and expressed surprise at Zuckerberg’s claim to be “outraged” by the Cambridge Analytica revelations — now snowballing into a massive privacy scandal.

In a statement reflecting on developments he writes: “Facebook has millions of times illegally distributed data of its users to various dodgy apps — without the consent of those affected. In 2011 we sent a legal complaint to the Irish Data Protection Commissioner on this. Facebook argued that this data transfer is perfectly legal and no changes were made. Now after the outrage surrounding Cambridge Analytica the Internet giant suddenly feels betrayed seven years later. Our records show: Facebook knew about this betrayal for years and previously argues that these practices are perfectly legal.”

So why did it take Facebook from September 2012 — when the DPC made its recommendations — until May 2014 and May 2015 to implement the changes and tighten app permissions?

The regulator’s spokesman told us it was “engaging” with Facebook over that period of time “to ensure that the change was made”. But he also said Facebook spent some time pushing back — questioning why changes to app permissions were necessary and dragging its feet on shuttering the friends’ data API.

“I think the reality is Facebook had questions as to whether they felt there was a need for them to make the changes that we were recommending,” said the spokesman. “And that was, I suppose, the level of engagement that we had with them. Because we were relatively strong that we felt yes we made the recommendation because we felt the change needed to be made. And that was the nature of the discussion. And as I say ultimately, ultimately the reality is that the change has been made. And it’s been made to an extent that such an issue couldn’t occur today.”

“That is a matter for Facebook themselves to answer as to why they took that period of time,” he added.

Of course we asked Facebook why it pushed back against the DPC’s recommendation in September 2012 — and whether it regrets not acting more swiftly to implement the changes to its APIs, given the crisis its business now faces after breaching user trust by failing to safeguard people’s data.

We also asked why Facebook users should trust Zuckerberg’s claim, also made in the CNN interview, that it’s now ‘open to being regulated’ — when its historical playbook is packed with examples of the polar opposite behavior, including ongoing attempts to circumvent existing EU privacy rules.

A Facebook spokeswoman acknowledged receipt of our questions this week — but the company has not responded to any of them.

The Irish DPC chief, Helen Dixon, also went on CNN this week to give her response to the Facebook-Cambridge Analytica data misuse crisis — calling for assurances from Facebook that it will properly police its own data protection policies in future.

“Even where Facebook have terms and policies in place for app developers, it doesn’t necessarily give us the assurance that those app developers are abiding by the policies Facebook have set, and that Facebook is active in terms of overseeing that there’s no leakage of personal data. And that conditions, such as the prohibition on selling on data to further third parties is being adhered to by app developers,” said Dixon.

“So I suppose what we want to see change and what we want to oversee with Facebook now and what we’re demanding answers from Facebook in relation to, is first of all what pre-clearance and what pre-authorization do they do before permitting app developers onto their platform. And secondly, once those app developers are operative and have apps collecting personal data what kind of follow up and active oversight steps does Facebook take to give us all reassurance that the type of issue that appears to have occurred in relation to Cambridge Analytica won’t happen again.”

Firefighting the raging privacy crisis, Zuckerberg has committed to conducting a historical audit of every app that had access to “a large amount” of user data around the time that Cambridge Analytica was able to harvest so much data.

So it remains to be seen what other data misuses Facebook will unearth — and have to confess to now, long after the fact.

But any other embarrassing data leaks will sit within the same unfortunate context — which is to say that Facebook could have prevented these problems if it had listened to the very valid concerns data protection experts were raising more than six years ago.

Instead, it chose to drag its feet. And the list of awkward questions for the Facebook CEO keeps getting longer.


Read Full Article