25 April 2018

We put the Nintendo Labo to the test


The Nintendo Labo is for kids, so we took Nintendo’s latest product to its target audience. Our resident toy testers Kasper, Milla, and Guthrie went hands-on with this cool STEM system. They cut, folded, and played for almost an hour before delivering a rousing review, including near-perfect scores from each of the judges. It’s great to see Nintendo thinking outside (or inside) the box.


Read Full Article

Kogan: “I don’t think Facebook has a developer policy that is valid”


A Cambridge University academic at the center of a data misuse scandal involving Facebook user data and political ad targeting faced questions from the UK parliament this morning.

The two-hour evidence session in front of the DCMS committee’s fake news inquiry raised rather more questions than it answered, with professor Aleksandr Kogan citing an NDA he said he had signed with Facebook to decline to answer some of the committee’s questions (including why and when exactly the NDA was signed).

TechCrunch understands the NDA relates to standard confidentiality provisions regarding deletion certifications and other commitments made by Kogan to Facebook not to misuse user data — after the company learned he had passed user data to SCL in contravention of its developer terms.

Asked why he had a non-disclosure agreement with Facebook, Kogan told the committee it would have to ask Facebook. He also declined to say whether any of his company co-directors (one of whom now works for Facebook) had been asked to sign an NDA. Nor would he specify whether the NDA had been signed in the US.

Asked whether he had deleted all the Facebook data and derivatives he had been able to acquire Kogan said yes “to the best of his knowledge”, though he also said he’s currently conducting a review to make sure nothing has been overlooked.

A few times during the session Kogan made a point of arguing that data audits are essentially useless for catching bad actors — claiming that anyone who wants to misuse data can simply put a copy on a hard drive and “store it under the mattress”.

(Incidentally, the UK’s data protection watchdog is conducting just such an audit of Cambridge Analytica right now, after obtaining a warrant to enter its London offices last month — as part of an ongoing, year-long investigation into social media data being used for political ad targeting.)

Your company didn’t hide any data in that way, did it? asked a committee member. “We didn’t,” Kogan rejoined.

“This has been a very painful experience because when I entered into all of this Facebook was a close ally. And I was thinking this would be helpful to my academic career. And my relationship with Facebook. It has, very clearly, done the complete opposite,” Kogan continued.  “I had no interest in becoming an enemy or being antagonized by one of the biggest companies in the world that could — even if it’s frivolous — sue me into oblivion. So we acted entirely as they requested.”

Despite apparently lamenting the breakdown in his relations with Facebook — telling the committee how he had worked with the company, in an academic capacity, prior to setting up a company to work with SCL/CA — Kogan refused to accept that he had broken Facebook’s terms of service — instead asserting: “I don’t think they have a developer policy that is valid… For you to break a policy it has to exist and really be their policy. The reality is Facebook’s policy is unlikely to be their policy.”

“I just don’t believe that’s their policy,” he repeated when pressed on whether he had broken Facebook’s ToS. “If somebody has a document that isn’t their policy you can’t break something that isn’t really your policy. I would agree my actions were inconsistent with the language of this document — but that’s slightly different from what I think you’re asking.”

“You should be a professor of semantics,” quipped the committee member who had been asking the questions.

A Facebook spokesperson told us it had no public comment to make on Kogan’s testimony. But last month CEO Mark Zuckerberg couched the academic’s actions as a “breach of trust” — describing the behavior of his app as “abusive”.

In evidence to the committee today, Kogan said he had only become aware in March 2015 of an “inconsistency” between Facebook’s developer terms of service and what his company did — when, he said, he began to suspect the veracity of the advice he had received from SCL. At that point Kogan said GSR reached out to an IP lawyer “and got some guidance”.

(More specifically he said he became suspicious because former SCL employee Chris Wylie did not honor a contract between GSR and Eunoia, a company Wylie set up after leaving SCL, to exchange data-sets; Kogan said GSR gave Wylie the full raw Facebook data-set but Wylie did not provide any data to GSR.)

“Up to that point I don’t believe I was even aware or looked at the developer policy. Because prior to that point — and I know that seems shocking and surprising… the experience of a developer in Facebook is very much like the experience of a user in Facebook. When you sign up there’s this small print that’s easy to miss,” he claimed.

“When I made my app initially I was just an academic researcher. There was no company involved yet. And then when we commercialized it — so we changed the app — it was just something I completely missed. I didn’t have any legal resources, I relied on SCL [to provide me with guidance on what was appropriate]. That was my mistake.”

“Why I think this is still not Facebook’s policy is that we were advised [by an IP lawyer] that Facebook’s terms for users and developers are inconsistent. And that it’s not actually a defensible position for Facebook that this is their policy,” Kogan continued. “This is the remarkable thing about the experience of an app developer on Facebook. You can change the name, you can change the description, you can change the terms of service — and you just save changes. There’s no obvious review process.

“We had a terms of service linked to the Facebook platform that said we could transfer and sell data for at least a year and a half — nothing was ever mentioned. It was only in the wake of the Guardian article [in December 2015] that they came knocking.”

Kogan also described the work he and his company had done for SCL Elections as essentially worthless — arguing that using psychometrically modeled Facebook data for political ad targeting in the way SCL/CA had apparently sought to do was “incompetent” because they could have used Facebook’s own ad targeting platform to achieve greater reach and with more granular targeting.

“It’s all about the use-case. I was very surprised to learn that what they wanted to do is run Facebook ads,” he said. “This was not mentioned, they just wanted a way to measure personality for many people. But if the use-case you have is Facebook ads it’s just incompetent to do it this way.

“Taking this data-set you’re going to be able to target 15% of the population. And use a very small segment of the Facebook data — page likes — to try to build personality models. Why do this when you could very easily go target 100% and use much more of the data. It just doesn’t make sense.”

Asked what, then, was the value of the project he undertook for SCL, Kogan responded: “Given what we know now, nothing. Literally nothing.”

He repeated his prior claim that he was not aware that work he was providing for SCL Elections would be used for targeting political ads, though he confirmed he knew the project was focused on the US and related to elections.

He also said he knew the work was being done for the Republican party — but claimed not to know which specific candidates were involved.

Pressed by one committee member on why he didn’t care to know which politicians he was indirectly working for, Kogan responded by saying he doesn’t have strong personal views on US politics or politicians generally — beyond believing that most US politicians are at least reasonable in their policy positions.

“My personal position on life is unless I have a lot of evidence I don’t know. Is the answer. It’s a good lesson to learn from science — where typically we just don’t know. In terms of politics in particular I rarely have a strong position on a candidate,” said Kogan, adding that therefore he “didn’t bother” to make the effort to find out who would ultimately be the beneficiary of his psychometric modeling.

Kogan told the committee his initial intention had not been to set up a business at all but to conduct not-for-profit big data research — via an institute he wanted to establish — claiming it was Wylie who had advised him to also set up the for-profit entity, GSR, through which he went on to engage with SCL Elections/CA.

“The initial plan was we collect the data, I fulfill my obligations to SCL, and then I would go and use the data for research,” he said.

And while Kogan maintained he had never drawn a salary from the work he did for SCL — saying his reward was “to keep the data”, and get to use it for academic research — he confirmed SCL did pay GSR £230,000 at one point during the project; a portion of which he also said eventually went to pay lawyers he engaged “in the wake” of Facebook becoming aware that data had been passed to SCL/CA by Kogan — when it contacted him to ask him to delete the data (and presumably also to get him to sign the NDA).

In one curious moment, Kogan claimed not to know his own company had been registered at 29 Harley Street in London — which the committee noted is “used by a lot of shell companies some of which have been used for money laundering by Russian oligarchs”.

Seeming a little flustered he said initially he had registered the company at his apartment in Cambridge, and later “I think we moved it to an innovation center in Cambridge and then later Manchester”.

“I’m actually surprised. I’m totally surprised by this,” he added.

Did you use an agent to set it up, asked one committee member. “We used Formations House,” replied Kogan, referring to a company whose website states it can locate a business’ trading address “in the heart of central London” — in exchange for a small fee.

“I’m legitimately surprised by that,” added Kogan of the Harley Street address. “I’m unfortunately not a Russian oligarch.”

Later in the session another odd moment came when he was being asked about his relationship with Saint Petersburg University in Russia — where he confirmed he had given talks and workshops, after traveling to the country with friends and proactively getting in touch with the university “to say hi” — and specifically about some Russian government-funded research being conducted by researchers there into cyberbullying.

Committee chair Collins implied to Kogan the Russian state could have had a specific malicious interest in such a piece of research, and wondered whether Kogan had thought about that in relation to the interactions he’d had with the university and the researchers.

Kogan described it as a “big leap” to connect the piece of research to Kremlin efforts to use online platforms to interfere in foreign elections — before essentially going on to repeat a Kremlin talking point by saying the US and the UK engage in much the same types of behavior.

“You can make the same argument about the UK government funding anything or the US government funding anything,” he told the committee. “Both countries are very famous for their spies.

“There’s a long history of the US interfering with foreign elections and doing the exact same thing [creating bot networks and using trolls for online intimidation].”

“Are you saying it’s equivalent?” pressed Collins. “That the work of the Russian government is equivalent to the US government and you couldn’t really distinguish between the two?”

“In general I would say the governments that are most high profile I am dubious about the moral scruples of their activities through the long history of UK, US and Russia,” responded Kogan. “Trying to equate them I think is a bit of a silly process. But I think certainly all these countries have engaged in activities that people feel uncomfortable with or are covert. And then to try to link academic work that’s basic science to that — if you’re going to go down the Russia line I think we have to go down the UK line and the US line in the same way.

“I understand Russia is a hot-button topic right now but outside of that… Most people in Russia are like most people in the UK. They’re not involved in spycraft, they’re just living lives.”

“I’m not aware of UK government agencies that have been interfering in foreign elections,” added Collins.

“Doesn’t mean it’s not happened,” replied Kogan. “Could be just better at it.”

During Wylie’s evidence to the committee last month the former SCL data scientist had implied there could have been a risk of the Facebook data falling into the hands of the Russian state as a result of Kogan’s back and forth travel to the region. But Kogan rebutted this idea — saying the data had never been in his physical possession when he traveled to Russia, pointing out it was stored in a cloud hosting service in the US.

“If you want to try to hack Amazon Web Services good luck,” he added.

He also claimed not to have read the piece of research in question, even though he said he thought the researcher had emailed the paper to him — claiming he can’t read Russian well.

Kogan seemed most comfortable during the session when he was laying into Facebook’s platform policies — perhaps unsurprisingly, given how the company has sought to paint him as a rogue actor who abused its systems by creating an app that harvested data on up to 87 million Facebook users and then handed that information off to third parties.

Asked whether he thought a prior answer given to the committee by Facebook — when it claimed it had not provided any user data to third parties — was correct, Kogan said no, given that the company provides academics with “macro level” user data (including providing him with this type of data, in 2013).

He was also asked why he thinks Facebook lets its employees collaborate with external researchers — and Kogan suggested this is “tolerated” by management as a strategy to keep employees stimulated.

Committee chair Collins asked whether he thought it was odd that Facebook now employs his former co-director at GSR, Joseph Chancellor — who works in its research division — despite Chancellor having worked for a company Facebook has said it regards as having violated its platform policies.

“Honestly I don’t think it’s odd,” said Kogan. “The reason I don’t think it’s odd is because in my view Facebook’s comments are PR crisis mode. I don’t believe they actually think these things — because I think they realize that their platform has been mined, left and right, by thousands of others.

“And I was just the unlucky person that ended up somehow linked to the Trump campaign. And we are where we are. I think they realize all this but PR is PR and they were trying to manage the crisis and it’s convenient to point the finger at a single entity and try to paint the picture this is a rogue agent.”

At another moment during the evidence session Kogan was also asked to respond to denials previously given to the committee by former CEO of Cambridge Analytica Alexander Nix — who had claimed that none of the data it used came from GSR and — even more specifically — that GSR had never supplied it with “data-sets or information”.

“Fabrication,” responded Kogan. “Total fabrication.”

“We certainly gave them [SCL/CA] data. That’s indisputable,” he added.

In written testimony to the committee he also explained that he in fact created three apps for gathering Facebook user data. The first one — called the CPW Lab app — was developed after he had begun a collaboration with Facebook in early 2013, as part of his academic studies. Kogan says Facebook provided him with user data at this time for his research — although he said these datasets were “macro-level datasets on friendship connections and emoticon usage” rather than information on individual users.

The CPW Lab app was used to gather individual level data to supplement those datasets, according to Kogan’s account. Although he specifies that data collected via this app was housed at the university; used for academic purposes only; and was “not provided to the SCL Group”.

Later, once Kogan had set up GSR and was intending to work on gathering and modeling data for SCL/Cambridge Analytica, the CPW Lab app was renamed to the GSR App and its terms were changed (with the new terms provided by Wylie).

Thousands of people were then recruited to take this survey via a third company — Qualtrics — with Kogan saying SCL directly paid ~$800,000 to it to recruit survey participants, at a cost of around $3-$4 per head (he says between 200,000 and 300,000 people took the survey as a result in the summer of 2014; NB: Facebook doesn’t appear to be able to break out separate downloads for the different apps Kogan ran on its platform — it told us about 305,000 people downloaded “the app”).

In the final part of that year, after data collection had finished for SCL, Kogan said his company revised the GSR App to become an interactive personality quiz — renaming it “thisisyourdigitallife” and leaving the commercial portions of the terms intact.

“The thisisyourdigitallife App was used by only a few hundred individuals and, like the two prior iterations of the application, collected demographic information and data about “likes” for survey participants and their friends whose Facebook privacy settings gave participants access to “likes” and demographic information. Data collected by the thisisyourdigitallife App was not provided to SCL,” he claims in the written testimony.

During the oral hearing, Kogan was pressed on misleading T&Cs in his two commercial apps. Asked by a committee member about the terms of the GSR App not specifying that the data would be used for political targeting, he said he didn’t write the terms himself but added: “If we had to do it again I think I would have insisted to Mr Wylie that we do add politics as a use-case in that doc.”

“It’s misleading,” argued the committee member. “It’s a misrepresentation.”

“I think it’s broad,” Kogan responded. “I think it’s not specific enough. So you’re asking for why didn’t we go outline specific use-cases — because the politics is a specific use-case. I would argue that the politics does fall under there but it’s a specific use-case. I think we should have.”

The committee member also noted how, “in longer, denser paragraphs” within the app’s T&Cs, the legalese does also state that “whatever that primary purpose is you can sell this data for any purposes whatsoever” — making the point that such sweeping terms are unfair.

“Yes,” responded Kogan. “In terms of speaking the truth, the reality is — as you’ve pointed out — very few if any people have read this, just like very few if any people read terms of service. I think that’s a major flaw we have right now. That people just do not read these things. And these things are written this way.”

“Look — fundamentally I made a mistake by not being critical about this. And trusting the advice of another company [SCL]. As you pointed out GSR is my company and I should have gotten better advice, and better guidance on what is and isn’t appropriate,” he added.

“Quite frankly my understanding was this was business as usual and normal practice for companies to write broad terms of service that didn’t provide specific examples,” he said after being pressed on the point again.

“I doubt in Facebook’s user policy it says that users can be advertised for political purposes — it just has broad language to provide for whatever use cases they want. I agree with you this doesn’t seem right, and those changes need to be made.”

At another point, he was asked about the Cambridge University Psychometrics Centre — which he said had initially been involved in discussions between him and SCL to be part of the project but fell out of the arrangement. According to his version of events the Centre had asked for £500,000 for their piece of proposed work, and specifically for modeling the data — which he said SCL didn’t want to pay. So SCL had asked him to take that work on too and remove the Centre from the negotiations.

As a result of that, Kogan said the Centre had complained about him to the university — and SCL had written a letter to it on his behalf defending his actions.

“The mistake the Psychometrics Centre made in the negotiation is that they believed that models are useful, rather than data,” he said. “And [they’re] actually just not the same. Data’s far more valuable than models because if you have the data it’s very easy to build models — because models use just a few well understood statistical techniques to make them. I was able to go from not doing machine learning to knowing what I need to know in one week. That’s all it took.”
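Kogan’s “few well understood statistical techniques” point is easy to illustrate: given labeled data, even ordinary least squares yields a working predictive model. A purely hypothetical sketch — the page-like counts and trait scores below are invented for illustration, not drawn from any real data-set:

```python
# Hypothetical illustration: predicting a survey-measured personality trait
# score from a count of page likes via ordinary least squares (pure Python).
def fit_ols(xs, ys):
    """Fit y = slope * x + intercept by minimizing squared error."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

likes = [2, 5, 9, 14]           # invented counts of trait-flavored page likes
scores = [1.0, 2.1, 3.9, 6.0]   # invented survey-measured trait scores
w, b = fit_ols(likes, scores)

def predict(like_count):
    return w * like_count + b
```

The point of the sketch is that the model itself is a commodity; the hard-to-obtain ingredient is the training data.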

In another exchange during the session, Kogan denied he had been in contact with Facebook in 2014. Wylie previously told the committee he thought Kogan had run into problems with the rate at which the GSR App was able to pull data off Facebook’s platform — and had contacted engineers at the company at the time (though Wylie also caveated his evidence by saying he did not know whether what he’d been told was true).

“This never happened,” said Kogan, adding that there was no dialogue between him and Facebook at that time.  “I don’t know any engineers at Facebook.”


Read Full Article

Facebook shuts down custom feed sharing prompts and 12 other APIs


Facebook is making good on Mark Zuckerberg’s promise to prioritize user safety and data privacy over its developer platform. Today Facebook and Instagram announced a slew of API shutdowns and changes designed to stop developers from being able to pull your or your friends’ data without express permission, drag in public content, or trick you into sharing. Some changes go into effect today, and others roll out on August 1st so developers have over 90 days to fix their apps. They follow the big changes announced two weeks ago.

Most notably, app developers will have to start using the standardized Facebook sharing dialog to request the ability to publish to the News Feed on a user’s behalf. They’ll no longer be able to use the publish_actions API that let them design a custom sharing prompt. A Facebook spokesperson says this change was planned for the future because the consistency helps users feel in control, but the company moved the deadline up to August 1st as part of today’s updates because it didn’t want to have to make multiple separate announcements of app-breaking changes.

 

Facebook app developers will now have to use this standard Facebook sharing prompt since the publish_actions API for creating custom prompts is shutting down
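For developers, the migration mostly means sending users to Facebook’s own hosted Share Dialog instead of publishing on their behalf. A minimal sketch of building such a dialog link — the app ID and URLs below are hypothetical placeholders, and the exact parameter set should be checked against Facebook’s current documentation:

```python
from urllib.parse import urlencode

def share_dialog_url(app_id: str, href: str, redirect_uri: str) -> str:
    """Build a link to Facebook's standard Share Dialog.

    Unlike a custom publish_actions flow, the dialog renders Facebook's
    own UI, so the app never posts to the News Feed on the user's behalf.
    """
    params = urlencode({
        "app_id": app_id,
        "href": href,                  # the URL being shared
        "redirect_uri": redirect_uri,  # where Facebook sends the user afterward
    })
    return "https://www.facebook.com/dialog/share?" + params

# Hypothetical app ID and URLs, for illustration only:
url = share_dialog_url("123456", "https://example.com/post",
                       "https://example.com/done")
```

Because the dialog is Facebook-hosted, the consistency the spokesperson mentions comes for free: every app’s share flow looks the same to the user.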

One significant Instagram Graph API change is going into effect today, which removes the ability to pull the name and bio of users who leave comments on your content, though commenters’ usernames and comment text are still available.
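Concretely, a request against a media object’s comments edge now has to limit itself to the fields that remain available. A hedged sketch of what such a request URL looks like — the media ID and token are placeholders, and the field names assume the Graph API’s standard `fields` query parameter:

```python
from urllib.parse import urlencode

GRAPH_ROOT = "https://graph.facebook.com"  # Graph API host

def comments_request(media_id: str, access_token: str) -> str:
    # Ask only for fields that remain available after today's change;
    # commenters' display name and bio can no longer be requested.
    query = urlencode({"fields": "username,text",
                       "access_token": access_token})
    return f"{GRAPH_ROOT}/{media_id}/comments?{query}"

# Placeholder media ID and token, for illustration only:
url = comments_request("MEDIA_ID", "ACCESS_TOKEN")
```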

Facebook’s willingness to put user safety over platform utility indicates a maturation of the company’s “Hacker Way” that played fast-and-loose with people’s data in order to attract developers to its platform who would in turn create functionality that soaked up more attention.

For more on Facebook’s API changes, check out our breakdown of the major updates:


Read Full Article

24 April 2018


Google’s Workshop on AI/ML Research and Practice in India




Last month, Google Bangalore hosted the Workshop on Artificial Intelligence and Machine Learning, with the goal of fostering collaboration between the academic and industry research communities in India. This forum was designed to exchange current research and industry projects in AI & ML, and included faculty and researchers from Indian Institutes of Technology (IITs) and other leading universities in India, along with industry practitioners from Amazon, Delhivery, Flipkart, LinkedIn, Myntra, Microsoft, Ola and many more. Participants spoke on the ongoing research and work being undertaken in India in deep learning, computer vision, natural language processing, systems and generative models (you can access all the presentations from the workshop here).

Google’s Jeff Dean and Prabhakar Raghavan kicked off the workshop by sharing Google’s uses of deep learning to solve challenging problems and reinventing productivity using AI. Additional keynotes were delivered by Googlers Rajen Sheth and Roberto Bayardo. We also hosted a panel discussion on the challenges and future of the AI/ML ecosystem in India, moderated by Google Bangalore’s Pankaj Gupta. Panel participants included Anirban Dasgupta (IIT Gandhinagar), Chiranjib Bhattacharyya of the Indian Institute of Science (IISc), Ashish Tendulkar and Srinivas Raaghav (Google India) and Shourya Roy (American Express Big Data Labs).
Prabhakar Raghavan’s keynote address
Sessions
The workshop agenda included five broad sessions with presentations by attendees in the areas noted above.
Pankaj Gupta moderating the panel discussion
Summary and Next Steps
As in many countries around the world, we are seeing increased dialog on various aspects of AI and ML in multiple contexts in India. This workshop hosted 80 attendees representing 9 universities and 36 companies contributing 28 excellent talks, with many opportunities for discussing challenges and opportunities for AI/ML in India. Google will continue to foster this exchange of ideas across a diverse set of folks and applications. As part of this, we also announced the upcoming research awards round (applications due June 4) to support up to seven faculty members in India on their AI/ML research, and new work on an accelerator program for Indian entrepreneurs focused primarily on AI/ML technologies. Please keep an eye out for more information about these programs.

Instagram launches “Data Download” tool to let you leave


Two weeks ago TechCrunch called on Instagram to build an equivalent to Facebook’s “Download Your Information” feature so that if you wanted to leave for another photo sharing network, you could. The next day it announced this tool would be coming and now TechCrunch has spotted it rolling out to users. Instagram’s “Data Download” feature can be accessed here or through the app’s privacy settings. It lets users export their photos, videos, Stories, profile info, comments, and messages, though it can take a few hours to days for your download to be ready.

An Instagram spokesperson now confirms to TechCrunch that “the Data Download tool is currently accessible to everyone on the web, but access via iOS and Android is still rolling out.” We’ll have more details on exactly what’s inside once my download is ready.

The tool’s launch is necessary for Instagram to comply with the data portability rule in the European Union’s GDPR privacy law that goes into effect on May 25th. But it’s also a reasonable concession. Instagram has become the dominant image sharing social network with over 800 million users. It shouldn’t need to lock up users’ data in order to keep them around.

Instagram hasn’t been afraid to attack competitors and fight dirty. Most famously, it copied Snapchat’s Stories in August 2016, which now has over 300 million daily users — eclipsing the original. But it also cut off GIF-making app Phhhoto from its Find Friends feature, then swiftly cloned its core feature to launch Instagram Boomerang. Within a few years, Phhhoto had shut down its app.

If Instagram is going to ruthlessly clone and box out its competitors, it should also let users choose which they want to use. That’s tough if all your photos and videos are trapped inside another app. The tool could create a more level playing field for competition amongst photo apps.

It could also deter users from using sketchy third-party apps to scrape all their Instagram content. Since these apps typically require you to log in with your Instagram credentials, they put users at risk of being hacked or having their images used elsewhere without their consent. Considering Facebook launched its DYI tool in 2010, six years after the site launched, the fact that it took Instagram eight years from launch to build this means it’s long overdue.

But with such a strong network effect and its willingness to clone any popular potential rival, it may still take a miracle or a massive shift to a new computing platform for any app to dethrone Instagram.


Read Full Article


Meet the quantum blockchain that works like a time machine


A new – and theoretical – system for blockchain-based data storage could ensure that hackers will not be able to crack cryptocurrencies once we all enter the quantum era. The idea, proposed by researchers at the Victoria University of Wellington in New Zealand, would secure our crypto futures for decades to come using a blockchain technology that acts like a time machine.

You can check out their findings here.

To understand what’s going on here we have to define some terms. A blockchain stores every transaction in a system on what amounts to an immutable record of events. The work necessary for maintaining and confirming this immutable record is what is commonly known as mining. But this technology – which the paper’s co-author Del Rajan claims will make up “10 percent of global GDP… by 2027” – will become insecure in an era of quantum computers.
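The classical structure the paper contrasts itself against can be sketched minimally. This is a hedged illustration of an ordinary hash-chained blockchain, not the paper’s quantum construction; the function name `make_block` is ours:

```python
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    """Build a block that commits to its predecessor via prev_hash."""
    block = {"data": data, "prev_hash": prev_hash}
    # The block's own hash covers its data and the previous hash,
    # so editing any past block invalidates every later link.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("tx: alice -> bob, 5", genesis["hash"])
assert second["prev_hash"] == genesis["hash"]
```

This chaining is what makes the record effectively immutable in the classical case; quantum computers threaten the cryptography underpinning it, which is the problem the researchers are addressing.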

Therefore, storing a blockchain securely in a quantum era requires a quantum blockchain built from a series of entangled photons. Further, Spectrum writes: “Essentially, current records in a quantum blockchain are not merely linked to a record of the past, but rather a record in the past, one that does not exist anymore.”

Yeah, it’s weird.

From the paper intro:

Our method involves encoding the blockchain into a temporal GHZ (Greenberger–Horne–Zeilinger) state of photons that do not simultaneously coexist. It is shown that the entanglement in time, as opposed to an entanglement in space, provides the crucial quantum advantage. All the subcomponents of this system have already been shown to be experimentally realized. Perhaps more shockingly, our encoding procedure can be interpreted as non-classically influencing the past; hence this decentralized quantum blockchain can be viewed as a quantum networked time machine.

In short the quantum blockchain is immutable because the photons that it contains do not exist in the current time but are still extant and readable. This means you can see the entire blockchain but you cannot “touch” it and the only entry you would be able to try to tamper with is the most recent one. In fact, the researchers write, “In this spatial entanglement case, if an attacker tries to tamper with any photon, the full blockchain would be invalidated immediately.”
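For reference, the GHZ state the temporal construction generalizes is the maximally entangled superposition below. This is the textbook three-photon form, not a formula quoted from the paper itself:

```latex
|\mathrm{GHZ}\rangle \;=\; \frac{1}{\sqrt{2}}\big(|000\rangle + |111\rangle\big)
```

The paper’s twist is entangling photons across *time* rather than space, so earlier “links” of the chain are photons that have already been absorbed, which is why only the most recent entry is even physically present to tamper with.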

Is this really possible? The researchers note that the technology already exists.

“Our novel methodology encodes a blockchain into these temporally entangled states, which can then be integrated into a quantum network for further useful operations. We will also show that entanglement in time, as opposed to entanglement in space, plays the pivotal role for the quantum benefit over a classical blockchain,” the authors write. “As discussed below, all the subsystems of this design have already been shown to be experimentally realized. Furthermore, if such a quantum blockchain were to be constructed, we will show that it could be viewed as a quantum networked time machine.”

Don’t worry about having to update your Bitcoin wallet, though. This process is still very theoretical and not at all available to mere mortals. That said, it’s nice to know someone is looking out for our quantum future, however weird it may be.


Read Full Article

Bag Week is coming and we need your recommendations


Every year your faithful friends at TechCrunch spend an entire week looking at bags. Why? Because bags – often ignored but full of our important electronics – are the outward representations of our techie styles and we put far too little thought into where we keep our most prized possessions.

To that end we need your help. Do you have a favorite bag we should check out? Do you make a bag we should check out? Is there a bag we should avoid? We’ve created this form to gather bag and bag-related information. If you’re a manufacturer, just add a link to your wares and we’ll be in touch. If you are a civilian and simply love a bag (or hate it), add as much info as you’d like and include a rating. We’ll pick a few brand new bags and some old standbys you recommend.

Considering we spend months carrying around our laptop and gear bags they deserve a closer look. That’s what Bag Week is about and we hope you can help us out with some recommendations. Our back and shoulders will thank you.


Read Full Article

DoorDash makes a big push into grocery delivery through a pilot program with Walmart


DoorDash is about to make a huge move into grocery delivery, but instead of going all out as a delivery service on its own, it’s instead going to be working behind the scenes to power delivery networks for larger companies — with Walmart as its first big partner.

While Instacart looks to control the end-to-end customer experience for grocery delivery, and Amazon is off doing Amazon-y things with its Whole Foods delivery system, DoorDash is hoping to build a network that any company needing delivery can tap into without giving up its direct relationship with its customers. DoorDash is rolling out grocery delivery with Walmart in Atlanta, the first of what may be a major move to become a back-end platform for companies like Walmart, which want a delivery button on their websites but don’t want to build the entire network themselves. That positioning offers DoorDash a potentially valuable neutral niche as grocery delivery heats up.

“You can use the term white label, but our drivers still will often wear the DoorDash shirt and have the DoorDash bag,” DoorDash COO Christopher Payne said. “But if you go to Walmart.com, and order from Walmart in Atlanta, you’ll have no idea it’s from DoorDash. We’re very supportive of that scenario, that’s the DoorDash Drive scenario. We’re excited to build a business with them and provide this capability.”

Payne said he hopes this will be the first step in a major expansion of the DoorDash Drive initiative into a tool that businesses can start tapping for local delivery. And while DoorDash may partly be giving up that direct relationship with users, it can start getting a lot more data when it comes to deliveries. That data then helps it become more and more efficient, ensuring that it can get deliveries done in the best manner and attract more customers, leading to the need for more drivers, and so on.

DoorDash also basically started the whole last-mile delivery business on hard mode with restaurant delivery, Payne said. What DoorDash loses in that direct user experience is paid back in data, Payne says, and that’s more than valuable enough.

“It turns out restaurant delivery is probably one of the hardest delivery use cases you have — you have to get a pizza somewhere in 20 or 30 minutes or it won’t be crisp, and you have to get an ice cream cone somewhere before it melts. Grocery delivery tends to be delivered earlier in the day, which is before dinner or before you go to work,” he said. “That works out perfectly for us, actually, because our drivers aren’t busy or are less busy than they would be otherwise. It’s a delivery window, as opposed to one that’s getting something to you at an exact moment and time. That’s actually much easier and less demanding than a real-time delivery.”

It’s still a significant step beyond its core competency, which is restaurant delivery. But while that has the potential to be a big business, it’s also going to top out at some point. GrubHub, for example, has a market cap of nearly $9 billion — but Amazon, the backbone of how many consumers engage with physical goods through the Internet, is a $700 billion-plus company. If DoorDash is going to continue to grow, it has to start expanding into new lines of revenue, and figuring out how to take all the data and tools it’s built and bring them to new businesses is going to be critical.

Amazon changed the calculus of last-mile grocery delivery, and it pretty much did it overnight — or at least over the span of a few months, which is the equivalent of overnight for a $700 billion company. Amazon acquired Whole Foods, and all of its locations in major metropolitan areas, for $13.7 billion and very quickly began offering two-hour Whole Foods delivery for Prime customers. On top of that, the company quickly started offering a credit card with an absurdly good reward system that’s tied directly to Prime purchases and Whole Foods (assuming you stay within the Prime ecosystem).

That’s meant that larger companies find themselves trying to figure out how to make such an agile move, and do it as soon as possible. For Walmart, getting this partnership with DoorDash allows it to just add a small segment to its typical customer flow without having to build out a full-on logistics delivery system. The opportunity to expand that to other businesses is pretty natural, and that’s the theme behind the Drive platform, and in theory offers businesses a way to quickly ramp up a delivery network without having to hand off the customer relationship to DoorDash. That may, in the end, be much more palatable for businesses.

“One of the other advantages of partnering with a company like Walmart isn’t just that they’re a leading grocer in the US,” Payne said. “They’re in a lot of other lines of businesses. As they want to expand and deliver more to their customers, they have physical assets to do that, so it provides a nice solution for us to test other items in the future. I would say grocery delivery is very much in its early days, it’s roughly equivalent to where food delivery was four years ago. We’re all going to be learning together, and it also means there’s gonna be a lot of other competition as there is in food delivery. But we believe our merchant operational excellence and quality of delivery will set us apart, and that’ll be proven in time.”


Read Full Article

Facebook reveals 25 pages of takedown rules for hate speech and more


Facebook has never before made public the guidelines its moderators use to decide whether to remove violence, spam, harassment, self-harm, terrorism, intellectual property theft, and hate speech from its social network. Until now. The company hoped to avoid making it easy to game these rules, but that worry has been overridden by the public’s constant calls for clarity and protests about its decisions. Today Facebook published 25 pages of detailed criteria and examples for what is and isn’t allowed.

Facebook is effectively shifting where it will be criticized to the underlying policy, instead of individual incidents of enforcement mistakes, like when it took down posts of the newsworthy “Napalm Girl” historical photo because it contained child nudity, before eventually restoring them. Some groups will surely find points to take issue with, but Facebook has made some significant improvements. Most notably, it no longer strips hate speech protection from minorities when an unprotected characteristic like “children” is appended to a protected characteristic like “black”.

Nothing is technically changing about Facebook’s policies. But previously, only leaks like a copy of an internal rulebook obtained by the Guardian had given the outside world a look at when Facebook actually enforces those policies. These rules will be translated into over 40 languages for the public. Facebook currently has 7,500 content reviewers, up 40% from a year ago.

Facebook also plans to expand its content removal appeals process. It already lets users request a review of a decision to remove their profile, Page, or Group. Now Facebook will notify users when their nudity, sexual activity, hate speech, or graphic violence content is removed and let them hit a button to “Request Review”, which will usually happen within 24 hours. Finally, Facebook will hold Facebook Forums: Community Standards events in Germany, France, the UK, India, Singapore, and the US to give its biggest communities a closer look at how the social network’s policy works.

Fixing the “white people are protected, black children aren’t” policy

Facebook’s VP of Global Product Management Monika Bickert who has been coordinating the release of the guidelines since September told reporters at Facebook’s Menlo Park HQ last week that “There’s been a lot of research about how when institutions put their policies out there, people change their behavior, and that’s a good thing.” She admits there’s still the concern that terrorists or hate groups will get better at developing “workarounds” to evade Facebook’s moderators, “but the benefits of being more open about what’s happening behind the scenes outweighs that.”

Content moderator jobs at various social media companies, including Facebook, have been described as hellish in many exposés about what it’s like to spend hours a day fighting the spread of child porn, beheading videos, and racism. Bickert says Facebook’s moderators get trained to deal with this and have access to counseling and 24/7 resources, including some on-site. They can request not to look at certain kinds of content they’re sensitive to. But Bickert didn’t say whether Facebook imposes a limit on how much offensive content moderators see per day, the way YouTube recently implemented a four-hour limit.

A controversial slide depicting Facebook’s now-defunct policy that disqualified subsets of protected groups from hate speech shielding. Image via ProPublica

The most useful clarification in the newly revealed guidelines explains how Facebook has ditched its poorly received policy that deemed “white people” as protected from hate speech, but not “black children”. That rule that left subsets of protected groups exposed to hate speech was blasted in a ProPublica piece in June 2017, though Facebook said it no longer applied that policy.

Now Bickert says “Black children — that would be protected. White men — that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion . . . If someone says ‘this country is evil’, that’s something that we allow. Saying ‘members of this religion are evil’ is not.” She explains that Facebook is becoming more aware of the context around who is being victimized. However, Bickert notes that if someone says “‘I’m going to kill you if you don’t come to my party’, if it’s not a credible threat we don’t want to be removing it.” 

Do community standards = editorial voice?

Being upfront about its policies might give Facebook more to point to when it’s criticized for failing to prevent abuse on its platform. Activist groups say Facebook has allowed fake news and hate speech to run rampant and lead to violence in many developing countries where Facebook hasn’t had enough native speaking moderators. The Sri Lankan government temporarily blocked Facebook in hopes of ceasing calls for violence, and those on the ground say Zuckerberg overstated Facebook improvements to the problem in Myanmar that led to hate crimes against the Rohingya people.

Revealing the guidelines could at least cut down on confusion about whether hateful content is allowed on Facebook. It isn’t. Though the guidelines also raise the question of whether the Facebook value system it codifies means the social network has an editorial voice that would define it as a media company. That could mean the loss of legal immunity for what its users post. Bickert stuck to a rehearsed line that “We are not creating content and we’re not curating content”. Still, some could certainly say all of Facebook’s content filters amount to a curatorial layer.

But whether Facebook is a media company or a tech company, it’s a highly profitable company. It needs to spend some more of the billions it earns each quarter applying the policies evenly and forcefully around the world.


Read Full Article

Unstoppable exploit in Nintendo Switch opens door to homebrew and piracy


The Nintendo Switch may soon be a haven for hackers, but not the kind that want your data — the kind that want to run SNES emulators and Linux on their handheld gaming consoles. A flaw in an Nvidia chip used by the Switch, detailed today, lets power users inject code into the system and modify it however they choose.

The exploit, known as Fusée Gelée, was first hinted at by developer Kate Temkin a few months ago. She and others at ReSwitched worked to prove and document the exploit, sending it to Nvidia and Nintendo, among others.

Although responsible disclosure is to be applauded, it won’t make much difference here: this flaw isn’t the kind that can be fixed with a patch. Millions of Switches are vulnerable, permanently, to what amounts to a total jailbreak; only new ones with code tweaked at the factory will be immune.

That’s because the flaw is baked into the read-only memory of the Nvidia Tegra X1 used in the Switch and a few other devices. It’s in the “Boot and Power Management Processor” to be specific, where a malformed packet sent during a routine USB device status check allows the connected device to send up to 65,535 bytes (just shy of 64 kibibytes) of extra data that will be executed without question. You need to get into recovery mode first, but that’s easy.
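To make the class of bug concrete: the problem described is a handler that trusts a host-supplied length field instead of its own buffer size. The Python simulation below is purely illustrative — it is not the Tegra boot ROM code (which isn’t public), and the names `respond` and `BUF_SIZE` are ours:

```python
# Simulated device memory: a 64-byte response buffer followed by
# "adjacent" memory that the handler was never meant to expose.
MEMORY = bytearray(b"S" * 64 + b"ADJACENT-CODE!")
BUF_OFFSET, BUF_SIZE = 0, 64

def respond(requested_length: int) -> bytes:
    """Vulnerable pattern: copies however many bytes the host asked for."""
    return bytes(MEMORY[BUF_OFFSET : BUF_OFFSET + requested_length])

def respond_fixed(requested_length: int) -> bytes:
    """Fixed pattern: clamps the copy to the buffer the handler owns."""
    length = min(requested_length, BUF_SIZE)
    return bytes(MEMORY[BUF_OFFSET : BUF_OFFSET + length])
```

In C, the unclamped copy would read (or, in the Switch’s case, let the attacker plant and run) whatever sits next to the buffer; the one-line clamp is the kind of check that can’t be retrofitted into a chip’s read-only boot memory, which is why the flaw is unpatchable on existing units.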

As you can imagine, getting arbitrary code to run on a device that deep in its processes is a huge, huge vulnerability. Fortunately it’s only available to someone with direct, physical access to the Switch. But that in itself makes it an extremely powerful tool for anyone who wants to modify their own console.

Modding consoles is done for many reasons, and indeed piracy is among them. But people also want to do things Nintendo won’t let them, like back up their saved games, run custom software like emulators, or extend the capabilities of the OS beyond the meager features the company has provided.

Temkin and her colleagues had planned to release the vulnerability publicly on June 15 or when someone releases the vulnerability independent of them — whichever came first. It turned out to be the latter, which apparently came as a surprise to no one in the community. The X1 exploit seems to have been something of an open secret.

The exploit was released anonymously by some hacker and Temkin accordingly published the team’s documentation of it on GitHub. If that’s too technical, there’s also some more plain-language chatter about the flaw in a FAQ posted earlier this month. I’ve asked Temkin for a few more details.

In addition to Temkin’s documentation, hacking group fail0verflow announced a small device that will short a pin in the USB connector and put the Switch into recovery mode, prepping it for exploitation. And Team-Xecuter was advertising a similar hardware attack months ago.

The answer to the most obvious question is no, you can’t just fire this up and start playing Wave Race 64 (or a pirated Zelda) on your Switch 15 minutes from now. The exploit still requires technical ability to implement, though as with many other hacks of this type, someone will likely graft it to a nice GUI that guides ordinary users through the process. (It certainly happened with the NES and SNES Classic Editions.)

Although the exploit can’t be patched away with a software update, Nintendo isn’t powerless. It’s likely that a modified Switch would be barred from the company’s online services (such as they are) and possibly the user’s account as well. So although the hacking process is, compared with the soldering required for modchips of decades past, low on risk, it isn’t a golden ticket.

That said, Fusée Gelée will almost certainly open the floodgates for developers and hackers who care little for Nintendo’s official ecosystem and would rather see what they can get this great piece of hardware to do on their own.

I’ve asked Nintendo and Nvidia for comment and will update when I hear back.


Read Full Article

Facebook’s new authorization process for political ads goes live in the US


Earlier this month — and before Facebook CEO Mark Zuckerberg testified before Congress — the company announced a series of changes to how it would handle political advertisements running on its platform in the future. It had said that people who wanted to buy a political ad — including ads about political “issues” — would have to reveal their identities and location and be verified before the ads could run. Information about the advertiser would also display to Facebook users.

Today, Facebook is announcing the authorization process for U.S. political ads is live.

Facebook had first said in October that political advertisers would have to verify their identity and location for election-related ads. But in April, it expanded that requirement to include any “issue ads” — meaning those on political topics being debated across the country, not just those tied to an election.

Facebook said it would work with third parties to identify the issues. These ads would then be labeled as “Political Ads,” and display the “paid for by” information to end users.

According to today’s announcement, Facebook will now begin to verify the identity and the residential mailing address of advertisers who want to run political ads. Those advertisers will also have to disclose who’s paying for the ads as part of this authorization process.

This verification process is currently only open in the U.S. and will require Page admins and ad account admins to submit their government-issued ID to Facebook, along with their residential mailing address.

The government ID can be either a U.S. passport or a U.S. driver’s license, a FAQ explains. Facebook will also ask for the last four digits of admins’ Social Security Numbers. The photo ID will then be approved or denied in a matter of minutes, though anyone declined based on the quality of the uploaded images can simply try again.

The address, however, will be verified by mailing a letter with a unique access code that only the admin’s Facebook account can use. The letter may take up to 10 days to arrive, Facebook notes.
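A mail-verified code flow like the one described generally boils down to issuing a one-time code bound to a single account. The sketch below is our own illustration of that pattern, not Facebook’s implementation (which is not public); `issue_code`, `redeem`, and `PENDING` are hypothetical names:

```python
import secrets

# One pending code per admin account; only that account can redeem it.
PENDING: dict[str, str] = {}

def issue_code(admin_id: str) -> str:
    """Generate the code that would be printed in the mailed letter."""
    code = secrets.token_hex(4).upper()  # e.g. an 8-hex-digit code
    PENDING[admin_id] = code
    return code

def redeem(admin_id: str, code: str) -> bool:
    """One-time use: a successful match consumes the code."""
    return PENDING.pop(admin_id, None) == code
```

Binding the code to the account (rather than accepting it from anyone) is what makes the mailed letter prove control of both the residential address and the specific Facebook account.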

Along with the verification portion, Page admins will also have to fill in who paid for the ad in the “disclaimer” section. This has to include the organization(s) or person’s name(s) who funded it.

This information will also be reviewed prior to approval, but Facebook isn’t going to fact check this field, it seems.

Instead, the company simply says: “We’ll review each disclaimer to make sure it adheres to our advertising policies. You can edit your disclaimers at any time, but after each edit, your disclaimer will need to be reviewed again, so it won’t be immediately available to use.”

The FAQ later states that disclaimers must comply with “any applicable law,” but again says that Facebook only reviews them against its ad policies.

“It’s your responsibility as the advertiser to independently assess and ensure that your ads are in compliance with all applicable election and advertising laws and regulations,” the documentation reads.

Along with the launch of the new authorization procedures, Facebook has released a Blueprint training course to guide advertisers through the steps required, and has published an FAQ to answer advertisers’ questions.

Of course, these procedures will only net the more scrupulous advertisers willing to play by the rules. That’s why Facebook had said before that it plans to use AI technology to help sniff out those advertisers who should have submitted to verification, but did not. The company is also asking people to report suspicious ads using the “Report Ad” button.

Facebook has been under heavy scrutiny because of how its platform was corrupted by Russian trolls on a mission to sway the 2016 election. The Justice Department charged 13 Russians and three companies with election interference earlier this year, and Facebook has removed hundreds of accounts associated with disinformation campaigns.

While tougher rules around ads may help, they alone won’t solve the problem.

It’s likely that those determined to skirt the rules will find their own workarounds. Plus, ads are only one of many issues in terms of those who want to use Facebook for propaganda and misinformation. On other fronts, Facebook is dealing with fake news — including everything from biased stories to those that are outright lies, intending to influence public opinion. And of course there’s the Cambridge Analytica scandal, which led to intense questioning of Facebook’s data privacy practices in the wake of revelations that millions of Facebook users had their information improperly accessed.

Facebook says the political ads authorization process is gradually rolling out, so it may not be available to all advertisers at this time. Currently, users can only set up and manage authorizations from a desktop computer from the Authorizations tab in a Facebook Page’s Settings.


Read Full Article

Google beats expectations again with $31.15B in revenue


Alphabet, Google’s parent company, reported another pretty solid beat this afternoon for its first quarter, as it has more or less continued to grow its business substantially — and is growing even faster than it was a year ago.

Google said its revenue grew 26% year-over-year to $31.16 billion in the first quarter this year. In the first quarter last year, Google said its revenue had grown 22% between Q1 of 2016 and Q1 of 2017. All this is a little convoluted, but the end result is that Google is actually growing faster than it was just a year ago despite the continued trend of a decline in its cost-per-click — a rough way of saying how valuable an ad is — as more and more web browsing shifts to mobile devices. Last year, Google said it recorded $24.75 billion in the first quarter.

Once again, Alphabet’s “other bets” — its fringe projects like autonomous vehicles and balloons — showed some additional health as that revenue grew while the losses shrank. That’s a good sign as it looks to explore options beyond search, but in the end it still represents a tiny fraction of Google’s overall business. This was also the first quarter that Google is reporting its results following a settlement with Uber, where it received a slice of the company as it ended a spat between its Waymo self-driving division and Uber.

Here’s the final scorecard:

  • Revenue: $31.16 billion, compared to $30.36 billion Wall Street estimates and up 26% year-over-year.
  • Earnings: $9.93 per share adjusted, compared to $9.28 per share from Wall Street
  • Other Revenues: $4.35 billion, up from $3.27 billion in Q1 last year
  • Other Bets: $150 million, up from $132 million in Q1 2017
  • Other Bets losses: $571 million, down from $703 million in the first quarter last year
  • TAC as a % of Revenue: 24%
  • Effective tax rate: 11%, down from 20% in Q1 2017
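The growth math in the scorecard checks out against the prior-year figure cited above; a quick sanity check:

```python
prior_q1 = 24.75    # Q1 2017 revenue, $B (from the article)
current_q1 = 31.16  # Q1 2018 revenue, $B
street_estimate = 30.36  # Wall Street's revenue estimate, $B

growth = (current_q1 - prior_q1) / prior_q1
print(f"YoY growth: {growth:.1%}")  # ~25.9%, matching the reported 26%

beat = current_q1 - street_estimate
print(f"Revenue beat: ${beat:.2f}B")
```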

In the end, it’s a beat compared to what Wall Street wanted, and it’s getting a very Google-y response. Investors were looking for earnings of $9.35 per share on $30.36 billion in revenue. Google’s stock jumped as much as 5% in extended trading, though it has since flattened out to around 2% — which for Google means adding more than $10 billion in value as it races alongside Microsoft and Amazon to chase Apple as the most valuable company in the world by market cap.

Google’s traffic acquisition cost, or TAC, appears to also remain stable as a percentage of its revenue. This is a little bit of a sticking point for observers for the company and a potential negative signal for investors as more and more web browsing shifts to mobile. It’s ticked up very slowly over the past several years, but is now sitting at around 24% of its total revenue.

Google, at its core, is an advertising company that is going to make money off its billions of users across all of its properties. But as everything goes to mobile devices, the actual value of those ads is going to drop off over time simply because mobile browsing has a different set of behaviors. Google’s business has always been to offset that cost-per-click with a growing number of impressions — and, indeed, it seems like the status quo is sticking around for this one.

While Google’s advertising business continues to chug along, that diversification of revenue streams is going to be increasingly important for the company as a hedge against any potential threats to its advertising income. Already there is some chaos when it comes to what’s happening with user data following a massive scandal where information on as many as 87 million Facebook users ended up with a political research firm, Cambridge Analytica. That backlash centered around user privacy may end up tapping Google, which dominates most of how information travels across the web with Gmail and Search among its other products.

But that still comes at a pretty significant cost. It’s made major investments into tools like Google Cloud (or GCP), but tucked into the earnings report is a line item that shows its “purchases of property and equipment” more than doubled year-over-year to around $7.3 billion, up from $2.5 billion in the first quarter last year. Of course this can encompass a ton of things, but Google still has to actually buy servers if it’s going to run a cloud platform that can compete with AWS or Microsoft’s Azure.

All that feeds into its “other income” stream, which grew from $3.2 billion in Q1 last year to $4.35 billion in the first quarter this year. Amazon’s cloud business is already more than a $10 billion business annually, and that first-mover advantage has served it well as it began a huge shift to how businesses operate on cloud servers. But it also exposed a massive business opportunity for Google, which continues to invest in that.


Read Full Article
