05 December 2018

TF-Ranking: A Scalable TensorFlow Library for Learning-to-Rank




Ranking, the process of ordering a list of items in a way that maximizes the utility of the entire list, is applicable in a wide range of domains, from search engines and recommender systems to machine translation, dialogue systems and even computational biology. In applications like these (and many others), researchers often utilize a set of supervised machine learning techniques called learning-to-rank. In many cases, these learning-to-rank techniques are applied to datasets that are prohibitively large, scenarios where the scalability of TensorFlow can be an advantage. However, there is currently no out-of-the-box support for applying learning-to-rank techniques in TensorFlow. To the best of our knowledge, there are also no other open source libraries that specialize in applying learning-to-rank techniques at scale.

Today, we are excited to share TF-Ranking, a scalable TensorFlow-based library for learning-to-rank. As described in our recent paper, TF-Ranking provides a unified framework that includes a suite of state-of-the-art learning-to-rank algorithms, and supports pairwise or listwise loss functions, multi-item scoring, ranking metric optimization, and unbiased learning-to-rank.

TF-Ranking is fast and easy to use, and creates high-quality ranking models. The unified framework gives ML researchers, practitioners and enthusiasts the ability to evaluate and choose among an array of different ranking models within a single library. Moreover, we strongly believe that a key to a useful open source library is not only providing sensible defaults, but also empowering our users to develop their own custom models. Therefore, we provide flexible APIs, within which users can define and plug in their own customized loss functions, scoring functions and metrics.

Existing Algorithms and Metrics Support
The objective of learning-to-rank algorithms is to minimize a loss function defined over a list of items, so as to optimize the utility of the list ordering for a given application. TF-Ranking supports a wide range of standard pointwise, pairwise and listwise loss functions, as described in prior work. This ensures that researchers using the TF-Ranking library are able to reproduce and extend previously published baselines, and that practitioners can make the most informed choices for their applications. Furthermore, TF-Ranking can handle sparse features (like raw text) through embeddings and scales to hundreds of millions of training instances. Thus, anyone who is interested in building real-world data-intensive ranking systems, such as web search or news recommendation, can use TF-Ranking as a robust, scalable solution.
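For intuition, here is a minimal NumPy sketch of one such objective, the pairwise logistic loss. This is an illustrative reimplementation, not TF-Ranking's actual code: every pair in which a more relevant item is scored below a less relevant one contributes a penalty.

```python
import numpy as np

def pairwise_logistic_loss(scores, labels):
    """Average of log(1 + exp(-(s_i - s_j))) over all pairs (i, j)
    where item i is more relevant (higher label) than item j."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    loss, num_pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:  # item i should rank above item j
                loss += np.log1p(np.exp(-(scores[i] - scores[j])))
                num_pairs += 1
    return loss / max(num_pairs, 1)

# A correctly ordered list incurs a lower loss than a reversed one.
good = pairwise_logistic_loss([3.0, 2.0, 1.0], [2, 1, 0])
bad = pairwise_logistic_loss([1.0, 2.0, 3.0], [2, 1, 0])
```

Minimizing this loss pushes the score of each more-relevant item above each less-relevant one, which is the basic pairwise learning-to-rank idea.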

Empirical evaluation is an important part of any machine learning or information retrieval research. To ensure compatibility with prior work, we support many of the commonly used ranking metrics, including Mean Reciprocal Rank (MRR) and Normalized Discounted Cumulative Gain (NDCG). We also make it easy to visualize these metrics at training time on TensorBoard, an open source TensorFlow visualization dashboard.
An example of the NDCG metric (Y-axis) along the training steps (X-axis) displayed in TensorBoard. It shows the overall progress of the metric during training. Different methods can be compared directly on the dashboard, and the best models can be selected based on the metric.
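Both metrics are simple to state. The following standalone Python sketch, which assumes the common textbook definitions rather than reproducing the library's implementation, computes them for a single query:

```python
import numpy as np

def mrr(scores, labels):
    """Reciprocal rank of the first relevant item (label > 0) for one query."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    for rank, idx in enumerate(order, start=1):
        if labels[idx] > 0:
            return 1.0 / rank
    return 0.0

def ndcg(scores, labels, k=None):
    """Normalized Discounted Cumulative Gain using the (2^rel - 1) gain."""
    labels = np.asarray(labels, dtype=float)
    order = np.argsort(scores)[::-1][:k]
    gains = 2.0 ** labels[order] - 1.0
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = float(np.sum(gains * discounts))
    ideal = np.sort(labels)[::-1][:k]          # best possible ordering
    idcg = float(np.sum((2.0 ** ideal - 1.0) * discounts[:len(ideal)]))
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg([3.0, 2.0, 1.0], [2, 1, 0])  # ideal ordering -> 1.0
rr = mrr([1.0, 2.0, 3.0], [1, 0, 0])        # relevant item ranked last of 3
```

NDCG rewards placing highly relevant items near the top (where the log discount is smallest), while MRR only cares about the position of the first relevant result.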
Multi-Item Scoring
TF-Ranking supports a novel scoring mechanism wherein multiple items (e.g., web pages) can be scored jointly, an extension of the traditional scoring paradigm in which single items are scored independently. One challenge in multi-item scoring is inference: items have to be grouped and scored in subgroups, and the scores are then accumulated per item and used for sorting. To make these complexities transparent to the user, TF-Ranking provides a List-In-List-Out (LILO) API that wraps all of this logic in the exported TF models.
The TF-Ranking library supports a multi-item scoring architecture, an extension of traditional single-item scoring.
As we demonstrate in recent work, multi-item scoring is competitive with state-of-the-art learning-to-rank models such as RankNet, MART, and LambdaMART on a public LETOR benchmark.
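The grouping-and-accumulation logic that the LILO API hides can be sketched in plain Python. Note this is a toy illustration with a made-up group scoring function, not the library's mechanism:

```python
from itertools import combinations
import numpy as np

def group_score(item_features):
    """Hypothetical joint scoring function: each item's score depends on
    its own features plus a shared group context (here, the group mean)."""
    feats = np.asarray(item_features, dtype=float)
    context = feats.mean(axis=0)  # information shared across the group
    return feats @ np.ones(feats.shape[1]) + context.sum()

def multi_item_scores(features, group_size=2):
    """Score every subgroup of `group_size` items jointly, then accumulate
    per-item scores by averaging over the groups each item appears in."""
    n = len(features)
    totals, counts = np.zeros(n), np.zeros(n)
    for group in combinations(range(n), group_size):
        scores = group_score([features[i] for i in group])
        for pos, i in enumerate(group):
            totals[i] += scores[pos]
            counts[i] += 1
    return totals / np.maximum(counts, 1)  # final per-item scores for sorting

s = multi_item_scores([[1.0], [2.0], [3.0]])
```

The exported model would perform this grouping, joint scoring and accumulation internally, so callers still pass in a list of items and get back a list of scores.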

Ranking Metric Optimization
An important research challenge in learning-to-rank is direct optimization of ranking metrics (such as the previously mentioned NDCG and MRR). These metrics, while better able to measure the performance of ranking systems than standard classification metrics like Area Under the Curve (AUC), have the unfortunate property of being either discontinuous or flat. Therefore, standard stochastic gradient descent optimization of these metrics is problematic.
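A tiny example makes the "flat or discontinuous" problem concrete: perturbing a score leaves a rank-based metric unchanged until the perturbation is large enough to flip the ordering, so the gradient with respect to the scores is zero almost everywhere and undefined at the flip points. (Illustrative code, not from the library.)

```python
import numpy as np

def reciprocal_rank(scores, relevant_idx):
    """Reciprocal rank of the single relevant item under the given scores."""
    order = np.argsort(scores)[::-1]
    return 1.0 / (int(np.where(order == relevant_idx)[0][0]) + 1)

base = np.array([2.0, 1.0, 0.5])  # the relevant item is index 1, ranked 2nd

m0 = reciprocal_rank(base, 1)
# A small score bump does not change the ordering -> metric is flat here.
m_small = reciprocal_rank(base + np.array([0.0, 0.4, 0.0]), 1)
# A larger bump flips the order -> metric jumps discontinuously.
m_flip = reciprocal_rank(base + np.array([0.0, 1.5, 0.0]), 1)
```

Gradient descent sees zero gradient on the flat regions and no usable gradient at the jumps, which is why surrogate losses are needed.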

In recent work, we proposed a novel method, LambdaLoss, which provides a principled probabilistic framework for ranking metric optimization. In this framework, metric-driven loss functions can be designed and optimized by an expectation-maximization procedure. The TF-Ranking library integrates the recent advances in direct metric optimization and provides an implementation of LambdaLoss. We are hopeful that this will encourage and facilitate further research advances in the important area of ranking metric optimization.
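The details are in the paper, but the flavor of metric-driven pairwise weighting can be shown with a LambdaRank-style sketch: weight each pair of items by how much the DCG would change if the two items swapped ranks. This is a simplified illustration of the general idea, not TF-Ranking's LambdaLoss implementation.

```python
import numpy as np

def dcg_gain(label, rank):
    """A single item's DCG contribution: (2^label - 1) / log2(rank + 1)."""
    return (2.0 ** label - 1.0) / np.log2(rank + 1.0)

def swap_delta(labels, scores, i, j):
    """|change in DCG| if items i and j swapped ranks in the ordering
    induced by `scores` -- a LambdaRank-style weight for the pair (i, j)."""
    order = np.argsort(scores)[::-1]
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank of each item
    before = dcg_gain(labels[i], ranks[i]) + dcg_gain(labels[j], ranks[j])
    after = dcg_gain(labels[i], ranks[j]) + dcg_gain(labels[j], ranks[i])
    return abs(after - before)

# A highly relevant item (label 2) misranked below an irrelevant one
# (label 0) yields a large pair weight, focusing the loss on that mistake.
labels = np.array([2.0, 0.0, 1.0])
scores = np.array([1.0, 3.0, 2.0])
w = swap_delta(labels, scores, 0, 1)
```

Scaling each pairwise gradient by such a metric delta is what ties the surrogate loss to the ranking metric being optimized.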

Unbiased Learning-to-Rank
Prior research has shown that, given a ranked list of items, users are much more likely to interact with the first few results, regardless of their relevance. This observation has inspired research interest in unbiased learning-to-rank, and led to the development of unbiased evaluation and several unbiased learning algorithms based on re-weighting training instances. In the TF-Ranking library, metrics are implemented to support unbiased evaluation, and losses are implemented for unbiased learning by natively supporting re-weighting to overcome the inherent biases in user interaction datasets.
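The core re-weighting idea is inverse propensity weighting: a click observed at a position that users rarely examine counts for more, because it was less likely to be seen at all. A toy sketch with made-up propensity numbers (not the library's API):

```python
import numpy as np

def ipw_weights(click_positions, propensities):
    """Inverse-propensity weights for clicked items: weight = 1 / P(examined).
    Clicks at rarely-examined (lower) positions are up-weighted."""
    propensities = np.asarray(propensities, dtype=float)
    return 1.0 / propensities[np.asarray(click_positions)]

# Hypothetical examination propensities by rank (position bias: top is
# examined most often, lower ranks much less).
propensity_by_rank = np.array([0.9, 0.5, 0.2])

# Two observed clicks: one at rank 0, one at rank 2.
w = ipw_weights([0, 2], propensity_by_rank)
```

Feeding such weights into the loss makes the training objective an unbiased estimate of the full-information objective, under the assumption that the propensities are estimated correctly.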

Getting Started with TF-Ranking
TF-Ranking implements the TensorFlow Estimator interface, which greatly simplifies machine learning programming by encapsulating training, evaluation, prediction and export for serving. TF-Ranking is well integrated with the rich TensorFlow ecosystem. As described above, you can use TensorBoard to visualize ranking metrics like NDCG and MRR, as well as to pick the best model checkpoints using these metrics. Once your model is ready, it is easy to deploy it in production using TensorFlow Serving.

If you’re interested in trying TF-Ranking for yourself, please check out our GitHub repo, and walk through the tutorial examples. TF-Ranking is an active research project, and we welcome your feedback and contributions. We are excited to see how TF-Ranking can help the information retrieval and machine learning research communities.

Acknowledgements
This project was only possible thanks to the members of the core TF-Ranking team: Rama Pasumarthi, Cheng Li, Sebastian Bruch, Nadav Golbandi, Stephan Wolf, Jan Pfeifer, Rohan Anil, Marc Najork, Patrick McGregor and Clemens Mewald‎. We thank the members of the TensorFlow team for their advice and support: Alexandre Passos, Mustafa Ispir, Karmel Allison, Martin Wicke, and others. Finally, we extend our special thanks to our collaborators, interns and early adopters: Suming Chen, Zhen Qin, Chirag Sethi, Maryam Karimzadehgan, Makoto Uchida, Yan Zhu, Qingyao Ai, Brandon Tran, Donald Metzler, Mike Colagrosso, and many others at Google who helped in evaluating and testing the early versions of TF-Ranking.

Seized cache of Facebook docs raises competition and consent questions


A UK parliamentary committee has published the cache of Facebook documents it dramatically seized last week.

The documents were obtained through a legal discovery process by a startup that’s suing the social network in a California court in a case related to Facebook changing data access permissions back in 2014/15.

The court had sealed the documents but the DCMS committee used rarely deployed parliamentary powers to obtain them from the Six4Three founder, during a business trip to London.

You can read the redacted documents here — all 250 pages of them.

In a series of tweets regarding the publication, committee chair Damian Collins says he believes there is “considerable public interest” in releasing them.

“They raise important questions about how Facebook treats users data, their policies for working with app developers, and how they exercise their dominant position in the social media market,” he writes.

“We don’t feel we have had straight answers from Facebook on these important issues, which is why we are releasing the documents. We need a more public debate about the rights of social media users and the smaller businesses who are required to work with the tech giants. I hope that our committee investigation can stand up for them.”

The committee has been investigating online disinformation and election interference for the best part of this year, and has been repeatedly frustrated in its attempts to extract answers from Facebook.

But it is protected by parliamentary privilege — hence it’s now published the Six4Three files, having waited a week in order to redact certain pieces of personal information.

Collins has included a summary of key issues, as the committee sees them after reviewing the documents, in which he draws attention to six issues.

Here is his summary of the key issues:

  1. White Lists Facebook have clearly entered into whitelisting agreements with certain companies, which meant that after the platform changes in 2014/15 they maintained full access to friends data. It is not clear that there was any user consent for this, nor how Facebook decided which companies should be whitelisted or not.
  2. Value of friends data It is clear that increasing revenues from major app developers was one of the key drivers behind the Platform 3.0 changes at Facebook. The idea of linking access to friends data to the financial value of the developers relationship with Facebook is a recurring feature of the documents.
  3. Reciprocity Data reciprocity between Facebook and app developers was a central feature in the discussions about the launch of Platform 3.0.
  4. Android Facebook knew that the changes to its policies on the Android mobile phone system, which enabled the Facebook app to collect a record of calls and texts sent by the user, would be controversial. To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.
  5. Onavo Facebook used Onavo to conduct global surveys of the usage of mobile apps by customers, and apparently without their knowledge. They used this data to assess not just how many people had downloaded apps, but how often they used them. This knowledge helped them to decide which companies to acquire, and which to treat as a threat.
  6. Targeting competitor Apps The files show evidence of Facebook taking aggressive positions against apps, with the consequence that denying them access to data led to the failure of that business.

The publication of the files comes at an awkward moment for Facebook — which remains on the back foot after a string of data and security scandals, and has just announced a major policy change — ending a long-running ban on apps copying its own platform features.

Albeit the timing of Facebook’s policy shift announcement hardly looks coincidental, given Collins said last week that the committee would publish the files this week.

The policy in question has been used by Facebook to close down competitors in the past, such as — two years ago — when it cut off style transfer app Prisma’s access to its live-streaming Live API when the startup tried to launch a livestreaming art filter (Facebook subsequently launched its own style transfer filters for Live).

So its policy reversal now looks intended to defuse regulatory scrutiny around potential antitrust concerns.

But emails in the Six4Three files suggesting that Facebook took “aggressive positions” against competing apps could spark fresh competition concerns.

In one email dated January 24, 2013, a Facebook staffer, Justin Osofsky, discusses Twitter’s launch of its short video clip app, Vine, and says Facebook’s response will be to close off its API access.

“As part of their NUX, you can find friends via FB. Unless anyone raises objections, we will shut down their friends API access today. We’ve prepared reactive PR, and I will let Jana know our decision,” he writes.

Osofsky’s email is followed by what looks like a big thumbs up from Zuckerberg, who replies: “Yup, go for it.”

Also of concern on the competition front is Facebook’s use of a VPN startup it acquired, Onavo, to gather intelligence on competing apps — either for acquisition purposes or to target as a threat to its business.

The files show various Onavo industry charts detailing reach and usage of mobile apps and social networks — with each of these graphs stamped ‘highly confidential’.

Facebook bought Onavo back in October 2013. Shortly after, it shelled out $19BN to acquire rival messaging app WhatsApp — which one Onavo chart in the cache indicates was beasting Facebook on mobile, accounting for well over double the daily message sends at that time.

The files also spotlight several issues of concern relating to privacy and data protection law, with internal documents raising fresh questions over how or even whether (in the case of Facebook’s whitelisting agreements with certain developers) it obtained consent from users to process their personal data.

The company is already facing a number of privacy complaints under the EU’s GDPR framework over its use of ‘forced consent‘, given that it does not offer users an opt-out from targeted advertising.

But the Six4Three files look set to pour fresh fuel on the consent fire.

Collins’ fourth line item — related to an Android upgrade — also speaks loudly to consent complaints.

Earlier this year Facebook was forced to deny that it collects calls and SMS data from users of its Android apps without permission. But, as we wrote at the time, it had used privacy-hostile design tricks to sneak expansive data-gobbling permissions past users. So, put simply, people clicked ‘agree’ without knowing exactly what they were agreeing to.

The Six4Three files back up the notion that Facebook was intentionally trying to mislead users.

In one email dated November 15, 2013, Matt Scutari, manager of privacy and public policy, suggests ways to prevent users from choosing to set a higher level of privacy protection, writing: “Matt is providing policy feedback on a Mark Z request that Product explore the possibility of making the Only Me audience setting unsticky. The goal of this change would be to help users avoid inadvertently posting to the Only Me audience. We are encouraging Product to explore other alternatives, such as more aggressive user education or removing stickiness for all audience settings.”

Another awkward trust issue for Facebook which the documents could stir up afresh relates to its repeat claim — including under questions from lawmakers — that it does not sell user data.

In one email from the cache — sent by Mark Zuckerberg, dated October 7, 2012 — the Facebook founder appears to be entertaining the idea of charging developers for “reading anything, including friends”.

Yet earlier this year, when he was asked by a US lawmaker how Facebook makes money, Zuckerberg replied: “Senator, we sell ads.”

He did not include a caveat that he had apparently personally entertained the idea of liberally selling access to user data.

Responding to the publication of the Six4Three documents, a Facebook spokesperson told us:

As we’ve said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context. We stand by the platform changes we made in 2015 to stop a person from sharing their friends’ data with developers. Like any business, we had many of internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we’ve never sold people’s data.

Zuckerberg has repeatedly refused to testify in person to the DCMS committee.

At its last public hearing — which was held in the form of a grand committee comprising representatives from nine international parliaments, all with burning questions for Facebook — the company sent its policy VP, Richard Allan, leaving an empty chair where Zuckerberg’s bum should be.



App Stores to pass $122B in 2019, with gaming and subscriptions driving growth


Mobile intelligence and data firm App Annie is today releasing its 2019 predictions for the worldwide app economy, including its forecast around consumer spending, gaming, the subscription market, and other highlights. Most notably, it expects the worldwide gross consumer spend in apps – meaning before the app stores take their own cut – to surpass $122 billion next year, which is double the size of the global box office market, for comparison’s sake.

According to the new forecast, the worldwide app store consumer spend will grow 5 times as fast as the overall global economy next year.

But the forecast also notes that “consumer spend” – which refers to the money consumers spend on apps and through in-app purchases – is only one metric to track the app stores’ growth and revenue potential.

Mobile spending is also expected to continue growing for both in-app advertising and commerce – that is, the transactions that take place outside of the app stores, in apps like Uber, Amazon, and Starbucks.

Specifically, mobile will account for 62 percent of global digital ad spend in 2019, representing $155 billion, up from 50 percent in 2017. In addition, 60 percent more mobile apps will monetize through in-app ads in 2019.

Mobile gaming to reach 60% market share

As in previous years, mobile gaming is contributing to the bulk of the growth in consumer spending, the report says.

Mobile gaming, which continues to be the fastest growing form of gaming, matured further this year with apps like Fortnite and PUBG, says App Annie. These games “drove multiplayer game mechanics that put them on par with real-time strategy and shooter games on PC/Mac and Consoles in a way that hadn’t been done before,” the firm said.

They also helped push forward a trend towards cross-platform gaming, and App Annie expects that to continue in 2019 with more games becoming less siloed.

However, the gaming market won’t just be growing because of experiences like PUBG and Fortnite. “Hyper-casual” games – that is, those with very simple gameplay – will also drive download growth in 2019.

Over the course of the next year, consumer spend in mobile gaming will reach 60 percent market share across all major platforms, including PC, Mac, console, handheld, and mobile.

China will remain a major contributor to overall app store consumer spend, including mobile gaming, but there may be a slight deceleration of their impact next year due to the game licensing freeze. In August, Bloomberg reported China’s regulators froze approval of game licenses amid a government shake-up. The freeze impacted the entire sector, from large players like internet giant Tencent to smaller developers.

If the freeze continues in 2019, App Annie believes Chinese firms will push towards international expansion and M&A activity could result.

App Annie is also predicting one breakout gaming hit for 2019: Niantic’s Harry Potter: Wizards Unite, which it believes will exceed $100 million in consumer spend in its first 30 days. Niantic’s Pokémon Go, by comparison, cleared $100 million in its first two weeks and became the fastest game to reach $1 billion in consumer spend.

But App Annie isn’t going so far as to predict Harry Potter will do better than Pokémon Go, which tapped into consumer nostalgia and was a first-to-market mainstream AR gaming title.

Mobile Video Streaming

Another significant trend ahead for the new year is the growth in video streaming apps, fueled by in-app subscriptions.

Today, the average person consumes over 7.5 hours of media per day, including watching, listening, reading or posting. Next year, 10 minutes of every hour spent consuming media across TV and the internet will come from streaming video on mobile, the forecast says.

The total time in video streaming apps will increase 110 percent from 2016 to 2019, with consumer spend in entertainment apps up by 520 percent over that same period. Most of those revenues will come from the growth in in-app subscriptions.

Much of the time consumers spend streaming will come from short-form video apps like YouTube and TikTok, and social apps like Instagram and Snapchat.

YouTube alone accounts for 4 out of every 5 minutes spent in the top 10 video streaming apps, today. But 2019 will see many changes, including the launch of Disney’s streaming service, Disney+, for example.

App Annie’s full report, which details ad creatives and strategies as well, is available on its blog.

 



Yandex gets in the smartphone game


“The Google of Russia” is a fairly apt description, as far as those things go. Yandex has its hand in just about everything internet in its native Russia. Recent additions include a Prime-style service, a smart speaker and, yes, self-driving cars. So it was really only a matter of time before the company unleashed a smartphone on the world.

The internet giant’s first handset is called — rather uninspiringly — the Yandex.Phone. At least you know what you’re getting. Design-wise, the handset also looks like a fairly bog-standard, middling Android handset.

The most interesting bit here is, naturally, the inclusion of Yandex’s own software. The company says it’s the first to integrate its app ecosystem out of the box. That list includes Alice, the Alexa-style smart assistant it introduced in October of last year.

The specs aren’t much to write home about. There’s a 5.65-inch display and a pair of cameras on either side. Inside, it’s got a Snapdragon 630, coupled with 64GB of storage and 4GB of RAM. Oh, and there’s a headphone jack on board, as well.

The price is certainly right, at 17,990 rubles ($270). It will be on sale tomorrow at the company’s flagship Moscow store, with wider availability starting the following day. As far as availability goes here in the States, I wouldn’t hold my breath. 



Europe dials up pressure on tech giants over election security


The European Union has announced a package of measures intended to step up efforts and pressure on tech giants to combat democracy-denting disinformation ahead of the EU parliament elections next May.

The European Commission Action Plan, which was presented at a press briefing earlier today, has four areas of focus: 1) Improving detection of disinformation; 2) Greater co-ordination across EU Member States, including by sharing alerts about threats; 3) Increased pressure on online platforms, including to increase transparency around political ads and purge fake accounts; and 4) raising awareness and critical thinking among EU citizens.

The Commission says 67% of EU citizens are worried about their personal data being used for political targeting, and 80% want improved transparency around how much political parties spend to run campaigns on social media.

And it warned today that it wants to see rapid action from online platforms to deliver on pledges they’ve already made to fight fake news and election interference.

The EC’s plan follows a voluntary Code of Practice launched two months ago, which signed up tech giants including Facebook, Google and Twitter, along with some ad industry players, to some fairly fuzzy commitments to combat the spread of so-called ‘fake news’.

They also agreed to hike transparency around political advertising. But efforts so far remain piecemeal, with — for example — no EU-wide roll out of Facebook’s political ads disclosure system.

Facebook has only launched political ad identification checks plus an archive library of ads in the US and the UK so far, leaving the rest of the world to rely on the more limited ‘view ads’ functionality that it has rolled out globally.

The EC said it will be stepping up its monitoring of platforms’ efforts to combat election interference — with the new plan including “continuous” monitoring.

This will take the form of monthly progress reports, starting with a Commission progress report in January and then monthly reports thereafter (against what it slated as “very specific targets”) to ensure signatories are actually purging and disincentivizing bad actors and inauthentic content from their platform, not just saying they’re going to.

As we reported in September the Code of Practice looked to be a pretty dilute first effort. But ongoing progress reports could at least help concentrate minds — coupled with the ongoing threat of EU-wide legislation if platforms fail to effectively self-regulate.

Digital economy and society commissioner Mariya Gabriel said the EC would have “measurable and visible results very soon”, warning platforms: “We need greater transparency, greater responsibility both on the content, as well as the political approach.”

Security union commissioner, Julian King, came in even harder on tech firms — warning that the EC wants to see “real progress” from here on in.

“We need to see the Internet platforms step up and make some real progress on their commitments. This is stuff that we believe the platforms can and need to do now,” he said, accusing them of “excuses” and “foot-dragging”.

“The risks are real. We need to see urgent improvement in how adverts are placed,” he continued. “Greater transparency around sponsored content. Fake accounts rapidly and effectively identified and deleted.”

King pointed out Facebook admits that between 3% and 4% of its entire user-base is fake.

“That is somewhere between 60M and 90M fake accounts,” he continued. “And some of those accounts are the most active accounts. A recent study found that 80% of the Twitter accounts that spread disinformation during the 2016 US election are still active today — publishing more than a million tweets a day. So we’ve got to get serious about this stuff.”

Twitter declined to comment on today’s developments but a spokesperson told us its “number one priority is improving the health of the public conversation”.

“Tackling co-ordinated disinformation campaigns is a key component of this. Disinformation is a complex, societal issue which merits a societal response,” Twitter’s statement said. “For our part, we are already working with our industry partners, Governments, academics and a range of civil society actors to develop collaborative solutions that have a meaningful impact for citizens. For example, Twitter recently announced a global partnership with UNESCO on media and information literacy to help equip citizens with the skills they need to critically analyse content they are engaging with online.”

We’ve also reached out to Facebook and Google for comment on the Commission plan.

King went on to press for “clearer rules around bots”, saying he would personally favor a ban on political content being “disseminated by machines”.

The Code of Practice does include a commitment to address both fake accounts and online bots, and “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”. And Twitter has previously said it’s considering labelling bots; albeit with the caveat “as far as we can detect them”.

But action is still lacking.

“We need rapid corrections, which are given the same prominence and circulation as the original fake news. We need more effective promotion of alternative narratives. And we need to see overall greater clarity around how the algorithms are working,” King continued, banging the drum for algorithmic accountability.

“All of this should be subject to independent oversight and audit,” he added, suggesting the self-regulation leash here will be a very short one.

He said the Commission will make a “comprehensive assessment” of how the Code is working next year, warning: “If the necessary progress is not made we will not hesitate to reconsider our options — including, eventually, regulation.”

“We need to be honest about the risks, we need to be ready to act. We can’t afford an Internet that is the wild west where anything goes, so we won’t allow it,” he concluded.

Commissioner Vera Jourova also attended the briefing and used her time at the podium to press platforms to “immediately guarantee the transparency of political advertising”.

“This is a quick fix that is necessary and urgent,” she said. “It includes properly checking and clearly indicating who is behind online advertisement and who paid for it.”

In Spain regional elections took place in Andalusia on Sunday and — as noted above — while Facebook has launched a political ad authentication process and ad archive library in the US and the UK, the company confirmed to us that such a system was not up and running in Spain in time for that regional European election.

In the vote in Andalusia a tiny Far Right party, Vox, broke pollsters’ predictions to take twelve seats in the parliament — a first since the country’s return to democracy after the death of the dictator Francisco Franco in 1975.

Zooming in on election security risks, Jourova warned that “large-scale organized disinformation campaigns” have become “extremely efficient and spread with the speed of light” online. She also warned that non-transparent ads “will be massively used to influence opinions” in the run up to the EU elections.

Hence the pressing need for a transparency guarantee.

“When we allow the machines to massively influence free decisions of democracy I think that we have appeared in a bad science fiction,” she added. “The electoral campaign should be the competition of ideas, not the competition of dirty money, dirty methods, and hidden advertising where the people are not informed and don’t have a clue that they are influenced by some hidden powers.”

Jourova urged Member States to update their election laws so existing requirements on traditional media to observe a pre-election period also apply online.

“We all have roles to play, not only Member States, also social media platforms, but also traditional political parties. [They] need to make public the information on their expenditure for online activities as well as information on any targeting criteria used,” she concluded.

A report by the UK’s DCMS committee, which has been running an enquiry into online disinformation for the best part of this year, made similar recommendations in its preliminary report this summer.

Though the committee also went further — calling for a levy on social media to defend democracy. The UK government, however, did not leap to adopt the recommended actions.

Also speaking at today’s presser, EC VP, Andrus Ansip, warned of the ongoing disinformation threat from Russia but said the EU does not intend to respond to the threat from propaganda outlets like RT, Sputnik and IRA troll farms by creating its own pro-EU propaganda machine.

Rather he said the plan is to focus efforts on accelerating collaboration and knowledge-sharing to improve detection and indeed debunking of disinformation campaigns.

“We need to work together and co-ordinate our efforts — in a European way, protecting our freedoms,” he said, adding that the plan sets out “how to fight back against the relentless propaganda and information weaponizing used against our democracies”.

Under the action plan, the budget of the European External Action Service (EEAS) — which bills itself as the EU’s diplomatic service — will more than double next year, to €5M, with the additional funds intended for strategic comms to “address disinformation and raise awareness about its adverse impact”, including beefing up headcount.

“This will help them to use new tools and technologies to fight disinformation,” Ansip suggested.

Another new measure announced today is a dedicated Rapid Alert System which the EC says will facilitate “the sharing of data and assessments of disinformation campaigns and to provide alerts on disinformation threats in real time”, with knowledge-sharing flowing between EU institutions and Member States.

The EC also says it will boost resources for national multidisciplinary teams of independent fact-checkers and researchers to detect and expose disinformation campaigns across social networks — working towards establishing a European network of fact-checkers.

“Their work is absolutely vital in order to combat disinformation,” said Gabriel, adding: “This is very much in line with our principles of pluralism of the media and freedom of expression.”

Investments will also go towards supporting media education and critical awareness, with Gabriel noting that the Commission will run a European media education week, next March, to draw attention to the issue and gather ideas.

She said the overarching aim is to “give our citizens a whole array of tools that they can use to make a free choice”.

“It’s high time we give greater visibility to this problem because we face this on a day to day basis. We want to provide solutions — so we really need a bottom up approach,” she added. “It’s not up to the Commission to say what sort of initiatives should be adopted; we need to give stakeholders and citizens their possibility to share best practices.”


Read Full Article

Middle Latitudes



AT&T announces a second Samsung 5G smartphone for 2019


Leave it to Samsung to talk up its second 5G smartphone before most companies have tipped their first. A few days after announcing its first 5G handset via Verizon, the company is already onto number two. This one comes via an AT&T press release that qualifies the handset a bit more, calling it “another standards-based 5G device.”

Shortly after Verizon was first with the original Samsung 5G news, both AT&T and Sprint announced that they would be getting the handset, as well. That device is scheduled, broadly, for some time in the first half of the year. This one, meanwhile, will likely arrive in the second half. “Likely” because of roadmaps and all of that stuff that’s ultimately subject to change.

The device will “be able to access both 5G mmWave and sub-6 GHz.” Beyond that, unsurprisingly, there’s about as much detail as we got the first time around. The rest of the release finds the carrier talking up its wireless plans, going forward, and noting that this deal brings it up to three 5G devices, including a mobile hotspot announced in late October.

It’s a bit unlike smartphone makers to tip their hands this far out, but between these handsets and the foldable prototype the company recently showed off, Samsung is clearly making an effort to demonstrate the innovation it’s got in the works. That appears to be, at least in part, due to somewhat lackluster sales in 2018. Wireless carriers, meanwhile, are clearly falling all over themselves to be the first announced partner for these devices.

Given the fairly lengthy lead time, the companies don’t risk cannibalizing holiday sales too much, especially with some deep December discounts on flagship devices.



5 Essential Quick Look Tips for Previewing Mac Files



Want to see what a file contains without opening its associated app on your Mac? All you have to do is press the Space bar. This shortcut triggers the handy Quick Look feature on your Mac.

While it’s simple to use, we’ll explore five essential tips for getting more out of it.

1. Preview Files With a Shortcut


After you have revealed the contents of a Finder file by pressing Space, you can make the preview disappear by tapping Space again. Pressing the Escape key is another option for this.

The shortcut Cmd + Y also works as a Quick Look trigger. There’s a corresponding menu option too: File > Quick Look.

Feel free to drag the edges of the preview window to scale it up and down. You can also zoom and pan within Quick Look previews as you would in any Mac app. Use a double-tap gesture or pinch with two fingers to zoom. Plus, you can use the shortcuts Cmd + Plus and Cmd + Minus to zoom in and out of previews.

To pan across a preview, swipe left and right with two fingers. Of course, if you use this gesture in a video preview, you’ll scrub through the video instead.

If you want a Quick Look preview window to expand to fill the screen, hold down the Option key while you tap Space. This toggles a full-screen preview. By the way, holding down the Option key triggers many useful actions across macOS.

Clicking on the Full Screen button next to the Close button in the preview window is another way to switch to a full-screen preview.

2. Open, Mark Up, and Share Files


Quick Look lets you preview everything from text files, PDFs, and images to spreadsheets, presentations, and videos.

You’ll find a couple of common buttons in all previews:

  • Open with [App]: Use this to open the file you’re previewing in its corresponding or default app. You won’t see this button in full-screen previews.
  • Share: This button is for sharing the file via the Share sheet that’s standard across macOS.

Depending on the type of file you’re previewing, you’ll also find a few extra options if you have upgraded to macOS Mojave.

For example, when you’re previewing a PDF, you’ll get access to one of the best new features of Mojave—markup tools embedded in Finder.

Look for the Markup button that gives you tools to annotate the PDF right from the preview window. You can also navigate to different pages in the PDF using the page thumbnails in the sidebar. Similarly, in spreadsheet previews, you can navigate between sheets.

Also, you can rotate images/videos from their previews with the Rotate Left button. Change the button to a Rotate Right button temporarily by holding down the Option key. In audio/video previews, you’ll find a Trim button. If it’s missing, you’ll have to enable it from System Preferences > Extensions > Finder.

Remember, you can find the right macOS setting faster with a few tips.

3. Preview Multiple Items


You don’t have to select Finder items one at a time to preview them. You can select multiple items and Quick Look will display their previews as a collection that you can browse through. Use the right and left arrow keys to move between file previews.

(If you hit the arrow keys when you have a single file selected, Quick Look will still walk you through the previews of the remaining items in that folder.)

It doesn’t matter if the items you select are in different formats; Quick Look will work just the same.

When you’re previewing multiple items, look for the Index Sheet button in the left section of the title bar. This button gives you a grid-based display of the selected files, making it easier for you to preview them in random order.

In full-screen previews, you’ll find the Index Sheet button in the toolbar at the bottom of the screen. Here, you’ll also find a Play/Pause button that comes in handy when you want to preview selected images as a slideshow.

4. Preview Items in Spotlight, Dock, Notes, and More


Spotlight (your Mac’s search mechanism) and folders added to the Dock also give you file previews. With these, you can browse through PDFs, play videos, switch sheets in spreadsheets, and so on. But you won’t find advanced Quick Look features like the share menu and the index sheet.

In Spotlight, you’ll automatically see the preview for a file when you select it in the search results.

In the case of folders added to the Dock, you can preview the contents of their files only when you have displayed the folder contents as a fan or as a grid. (To switch between different views for the folder contents, select the correct view from the context menu of the folder’s Dock shortcut.)

It’s quite convenient that macOS also allows you to use Quick Look in a few other Mac apps. This comes in handy when you want to, say, preview attachments in Apple Mail or Apple Notes, or preview files in Time Machine before restoring them.

5. Install Quick Look Plugins


You’ll find that you can’t preview certain types of files—such as archives and EPUBs—with Quick Look. But you can get around this restriction with third-party plugins.

QLImage is one such plugin: it displays the image dimensions and file size in image previews.

You’ll find a useful compilation of various other plugins at quicklookplugins.com. To install a QuickLook plugin, here’s what you need to do:

  1. Click on Go > Go to Folder.
  2. In the popup box that shows up, paste this location and hit the Go button:
    ~/Library/QuickLook
    
  3. Drag the plugin file (with the QLGENERATOR extension) to the folder that opens up.

If the plugin doesn’t activate quickly, you can speed it up by executing this Terminal command:

qlmanage -r
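For the curious, the install steps above can be sketched as a small Python helper. This is a hypothetical script, not an official tool; the `~/Library/QuickLook` path and the `qlmanage -r` cache refresh are the only macOS-specific parts, and the refresh is skipped when `qlmanage` isn’t present:

```python
import shutil
import subprocess
from pathlib import Path

def install_quicklook_plugin(plugin: Path,
                             ql_dir: Path = Path.home() / "Library" / "QuickLook") -> Path:
    """Copy a .qlgenerator bundle into the Quick Look folder and reload the cache."""
    ql_dir.mkdir(parents=True, exist_ok=True)
    dest = ql_dir / plugin.name
    if plugin.is_dir():
        # .qlgenerator "files" are actually bundle folders
        shutil.copytree(plugin, dest, dirs_exist_ok=True)
    else:
        shutil.copy2(plugin, dest)
    if shutil.which("qlmanage"):
        # refresh the Quick Look cache (macOS only)
        subprocess.run(["qlmanage", "-r"], check=False)
    return dest
```

Running `install_quicklook_plugin(Path("QLImage.qlgenerator"))` would then be equivalent to the drag-and-drop steps plus the Terminal command.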

Use Quick Look on Mac More Often

Quick Look is one of the most subtle features of macOS and it’s an important one on our list of Finder tips for Mac newbies. Sometimes people go years without knowing about Quick Look. Have you similarly missed out on any tiny, but useful macOS features?




What Is the Kik App and Why Do Teens Love It?

Google invests in Japanese AI and machine learning startup ABEJA


Google has made a rare investment in Japan after the company led a follow-on round for AI and machine learning startup ABEJA.

The deal amount is undisclosed but a little digging suggests that it is likely a single-digit million US dollar figure. That’s because six-year-old ABEJA did confirm that it has now raised JPY 6 billion ($53 million) from investors to date. The company has raised $45 million in disclosed capital, according to Crunchbase, which leaves around $8 million unaccounted for — although that covers both the Google investment and a previous Series A deal in 2014, which was also undisclosed.

Numbers aside, the deal is notable not only because it represents a Google deal in Japan, but because it is strategic in nature.

“Going forward, ABEJA and Google will collaborate on AI and ML solutions across various sectors, including retail and manufacturing, driving the application of AI solutions, along with further growth in the Japanese AI sector,” ABEJA said in a statement.

The startup’s core offering is a ‘platform as a service’ that uses machine learning to help over 150 companies develop business analysis and insight from their data piles. There is a specialist product for retail stores which homes in on customer and retail data — that’s used by some 100 corporate customers, according to ABEJA.

“ABEJA has strong technical capabilities and ML expertise, and is respected across the industry for its track record of collaboration and the effective deployment of its tech solutions,” said Shinichi Abe, Managing Director of Google Cloud Japan, in a pre-prepared statement. “This investment paves the way for collaboration with ABEJA in innovative solutions in the retail and manufacturing sector, as well as other verticals.”

Google has placed significant emphasis on AI and machine learning in China — where it opened a lab in Beijing one year ago — but that aside the majority of its research and focus has come from the U.S. and also Europe, where its DeepMind unit is headquartered. Google did acquire startups in India and Singapore that include AI and ML capabilities, but those deals were aimed at growing its in-house product teams which are customizing and creating services for those growing local markets.



Google contract workers demand better pay and benefits


Google contract workers, internally referred to as Temporary, Vendor and Contractors (TVCs), are seeking better, equal treatment. That entails better pay and access to benefits, as well as better access to company-wide information. In a letter to Google CEO Sundar Pichai, they allege Google “routinely denies TVCs access to information that is relevant to our jobs and our lives.”

For example, when there was a shooting at YouTube this past April, TVCs say Google only sent out updates to full-time employees. They say they were also left out of the town hall discussion the following day.

“The exclusion of TVCs from important communications and fair treatment is part of a system of institutional racism, sexism, and discrimination,” they wrote in the letter. “TVCs are disproportionately people from marginalized groups who are treated as less deserving of compensation, opportunities, workplace protections, and respect. We wear different badges from full-time employees, which reinforces this arbitrary and discriminatory separation. Even when we’re doing the same work as full-time employees, these jobs routinely fail to provide living wages and often offer minimal benefits.”

As Bloomberg reported in July, Google’s TVCs make up more than half of the company’s total staff. They handle a variety of jobs, including serving meals, testing self-driving cars and managing teams.

This comes after Google conceded to some of the employees’ demands in the aftermath of sexual harassment and assault allegations. While Google did make some changes, the company did not address all of the organizers’ demands. For example, Google failed to elevate its chief diversity officer to report directly to Pichai and also ignored the organizers’ request to add an employee representative to the board of directors.

I’ve reached out to Google and will update this story if I hear back.

 



Bose’s Audio-Based AR Future Available to Preorder for $199


When you think of augmented reality, you probably think of a screen that appears in your field of vision. You probably imagine a screen that’ll show you key information, or even one that lets you play games that involve the world around you. Your mind probably travels to thoughts of virtual reality as well, but augmented and virtual reality are actually quite different.

Bose has a different vision, as the company just put out a pair of augmented reality glasses that function through the use of sound, rather than visuals. The Verge posted about the new AR sunglasses, and it immediately caught our attention.

Introducing the Bose Frames

The Bose Frames are a $199 pair of sunglasses that use audio to relay information about the world around you. The actual lenses on the sunglasses are just that—sunglasses.

Everything works through the built-in microphone and open-ear headphones for both receiving and relaying information to the user. Both Siri and Google Assistant are integrated into the headphones, so users will be able to control music, ask questions, and so on.

Bose plans to launch a full platform of apps later in 2019, so for the time being, they’ll serve as an interesting way to interact with your favorite digital assistants.

Because they use an open-ear headphone design, privacy could be a bit of a concern. Bose promises that the devices won’t broadcast sound to everyone around you, but they won’t be nearly as discreet as a pair of closed-back headphones, so you might want to be careful broadcasting anything too personal.

Here are some other interesting things to know about the Bose Frames:

  • Battery Life: 3.5 hours of playback, 12 hours of standby
  • Recharge Time: 2 hours for a full charge
  • 9-axis head motion sensor detects the direction you’re facing
  • Uses GPS from Android or iPhone to know user’s location
  • Includes cleaning cloth and protective case
  • Headphones weigh 45 grams
  • Available in round and square frame shapes

When Are the Bose Frames Available?

Bose announced that The Frames are available for preorder right now for $199 from Bose.com. The company is planning on shipping the devices in January 2019, so anyone who decides to throw down their money on a pair won’t need to wait too long to get their hands (or eyes and ears, as it were) on a set.

If you’re interested in the world of augmented reality as it relates to your eyes, check out some of the best AR apps out there right now.




Google’s Santa Tracker Is Back for 2018


Google’s Santa Tracker is back for 2018, and there are a host of new features available in Santa’s Village this time around. Google has added new elements to Santa’s Village every year since its inception, and in 2018 there are more things to do than ever before.

While this will probably make you feel old, Santa’s Village has been an annual tradition for 15 years now. And if you need to entertain the kids through December, there are games to play, fun ways to learn simple coding and foreign languages, and animated shorts to watch.

Google Upgrades Its Santa Tracker for 2018

We know what’s new for 2018 thanks to a post on The Keyword written by a certain Mrs. Claus, the VP of Product, Santa’s Village and Santa Tracker. She draws attention to Elf Maker, a new game which lets you customize an elf from head to toe.

There are also holiday photos from Local Guides, a quiz about holiday traditions powered by Google Earth, and a Translations game (using Google Translate) designed to help people of all ages and nationalities learn holiday phrases in other languages.

The most important part of Google’s Christmas celebrations is back with a vengeance. On December 24th, kids (and infantile adults) will be able to track Santa as he wends his way around the world delivering presents to everyone who made it onto his Nice list.

As well as being able to track Santa using the dedicated Santa Tracker, you can also see where he is at any time using Google Maps. Unfortunately, this feature is locked until December 24th, which is the one day Santa actually does any work. The lazy so-and-so.

Explore Santa’s Village With Help From Google

You (or your children) can visit Santa’s Village here. You can then click on individual buildings to play games, learn something new, and/or watch something entertaining. Clicking the hamburger menu in the top-left will open up even more options.

Two of the options hidden away in the menu are the option to Call Santa and the option to hear a Holiday Story. These are just two of the new Google Assistant features launched in time for the holidays. So I think it’s fair to say Google likes Christmas.




Tumblr Bans Adult Content and Upsets Users



Tumblr is banning adult content from its platform. This comes after Apple pulled Tumblr from the iOS App Store when harmful images were found to have beaten Tumblr’s filtering system. Unfortunately, Tumblr has taken the nuclear option to avoid a repeat performance.

Tumblr’s Evolving Attitude to Adult Content

Since its inception, Tumblr has always hosted a certain amount of adult content. And while it has mostly turned a blind eye to anything legal, it has employed a Safe Mode and search filters to hide the adult content from people who aren’t interested in it.

Now, however, things are changing, and Tumblr is trying to remove adult content from the site entirely. The only exceptions are nudity related to health situations, nudity found in art, nudity related to newsworthy speech, and written erotica.

A Better, More Positive Tumblr?

Tumblr announced the changes in a post titled, “A better, more positive Tumblr”. In it, CEO Jeff D’Onofrio explains how “posts that contain adult content will no longer be allowed on Tumblr, and we’ve updated our Community Guidelines to reflect this policy change.”

He goes on to say, “There are no shortage of sites on the internet that feature adult content. We will leave it to them and focus our efforts on creating the most welcoming environment possible for our community.” Except the people this excludes, obviously.

The people who were using Tumblr to post and view adult content are upset at this decision. And understandably so. However, as D’Onofrio suggests (and everyone else already knows), there are plenty of other places online to post and view adult content.

It’s true that Tumblr had become home for a non-mainstream audience who won’t necessarily find what they’re looking for elsewhere. But forums catering to Tumblr refugees will inevitably pop up. And Tumblr itself will be just fine without the adult content.

Tumblr Is Determined to Clean Up Its Act

This move to ban adult content is the second big policy change Tumblr has announced recently. In August 2018, Tumblr cracked down on hate speech, violent imagery, and sexual harassment. Which also upset some users who value free speech above all else.




What Are Gaming Routers and Are They Worth Buying?


Hardware companies target gamers with all kinds of special gaming hardware. You can grab gaming keyboards, headsets, mice, monitors, and even gaming motherboards and sound cards. It doesn’t stop there though.

Most of the major router manufacturers now offer “gaming routers”—routers specifically designed with online gaming experiences in mind.

But what exactly do these gaming routers do differently from standard routers? Moreover, do their features really matter?

What Does a Gaming Router Do?

If manufacturers want you to buy a gaming router, what separates them from a regular router? The key difference between a gaming router and a regular router is Quality of Service (QoS) features. A QoS utility predominantly focuses on sending your data exactly where it needs to go.

Don’t all routers do that? Yes, they do. However…

A typical router doesn’t care which type of traffic is which. Your roommate using BitTorrent at maximum speed, Dropbox uploading and downloading files, web browsing, Netflix, gaming—it’s all the same to your router. All of it has equal priority when it comes to your internet connection.

Of course, if you live in a busy household with multiple people attempting to use these services simultaneously, your internet can struggle to meet the demand. And if you’re trying to smash someone online at FIFA while someone else streams 4K video and another decides to upload their entire photo collection, your online gaming experience becomes decidedly laggy.

Gaming Router Quality of Service and Other Features

QoS takes that incoming data and, understanding how important gaming is, prioritizes traffic for your game. In practice, your gaming router attempts to minimize packet loss for gaming connections while bunching the rest of the incoming and outgoing network data into a separate stream.

One of the most common quality of service tools is Qualcomm’s StreamBoost. StreamBoost (or variants based upon StreamBoost) features in a wide range of gaming routers as many are powered by a Qualcomm chipset.

Manufacturers have tweaked and developed their own versions, too. StreamBoost and similar technologies are considered adaptive QoS, in that they can automatically adjust to the shifting demands of your home network.

In many cases, adaptive QoS isn’t there to strangle the connections of other internet users in your home. (Unless you set it up like that, of course.)

Rather, the adaptive QoS attempts to balance the demands of the available incoming bandwidth for the end users. But as it is a gaming router, your gaming QoS will take precedence if that’s what you require.
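To make the core idea concrete, here’s a toy strict-priority scheduler in Python. This is a sketch of the general technique, not any vendor’s actual implementation (real adaptive QoS like StreamBoost is far more sophisticated), and the traffic class names are hypothetical:

```python
import heapq
import itertools

# Hypothetical traffic classes; a smaller number means dequeued sooner.
PRIORITY = {"gaming": 0, "streaming": 1, "web": 2, "bulk": 3}

class PriorityScheduler:
    """Toy strict-priority queue: gaming packets always leave before bulk traffic."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker: keeps FIFO order within a class

    def enqueue(self, packet: str, traffic_class: str) -> None:
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]
```

Enqueue a BitTorrent chunk, a Netflix frame, and a game update in that order, and the game update still comes out first—that, in miniature, is what the QoS engine is doing for you.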

Other Useful Gaming Router Features

Gaming routers also come with a bunch of other useful QoS and quality of life (QoL) features. There are a few gaming router features you should look out for.

  • Gigabit Ethernet. How many Gigabit Ethernet ports does the router have? The Netgear Nighthawk XR700 even has a 10 Gigabit Ethernet port for those with an ultra-fast internet connection. The answer lies in your home network. How many wired devices do you have and will you add more in the future?
  • Wireless standards. At this point, most gaming routers pack in the latest wireless standards. The most common wireless standard at the time of writing is 802.11ac. However, the IEEE is developing 802.11ax, the successor to 802.11ac, while 802.11ay is set to function in the largely untapped 60GHz spectrum (wireless internet currently uses 2.4GHz and 5GHz).
  • Multi-band Wi-Fi. High-end routers now support tri-band wireless networking, allowing them to transmit on three channels at once. Some gaming routers use tri-band to transmit on two 5GHz channels and a single 2.4GHz channel, while others broadcast on the newer but as yet largely unused 60GHz frequency.
  • Processor and RAM. Powerful modern gaming routers feature faster, more powerful CPUs and more RAM. Increased power and memory in your gaming router means it can a) handle more connections and attempt to decrease latency, and b) create and use more advanced QoS features.

Now, none of these features are exclusive to “gaming routers.” You won’t find them on low-end or ISP-provided routers, but you will find them on plenty of routers that are cheaper than the high-end gaming models.

How to Prioritize Your Gaming Router Traffic

One of the most important things to do after purchasing a gaming router is configuring your network priorities. What’s the best way to prioritize your gaming router traffic to deliver the best gaming experience?

  • Prioritize by Service. Want every device on the network to access a specific app? Set your network to prioritize by service. For instance, you could set every device on your network to have priority to a specific game or a video streaming service, like Netflix.
  • Prioritize by Network. Gaming routers allow prioritization by the network. That is, you can give your wireless connections priority over wired.
  • Prioritize by IP Address. Each device on your home network has a specific IP address behind your router. If each device has a static IP address—that’s an IP address that isn’t dynamically allocated when it joins the network—you can specify the network traffic priority for each IP address. (How do you get a static IP address, anyway?)
  • Prioritize by MAC Address. Your hardware has a unique identifier known as a MAC address. When your device connects to your gaming router, the router receives the device MAC address. You can then prioritize network traffic to devices based upon their unique identifier.

Some gaming routers let you combine prioritization methods. You could prioritize by service and IP address; funnily enough, that’s the perfect combination for streamlining gaming traffic.
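Combining prioritization methods can be pictured as a first-match rule table, sketched below in Python. The addresses, ports, and priority labels are made up for illustration—router UIs expose similar fields, but the exact options vary by vendor:

```python
# Hypothetical rules, checked top to bottom; None acts as a wildcard.
RULES = [
    {"ip": "192.168.1.50", "port": 3074, "priority": "highest"},  # Xbox Live on the console
    {"ip": "192.168.1.50", "port": None, "priority": "high"},     # anything else from the console
    {"ip": None, "port": 443, "priority": "normal"},              # HTTPS from any device
]

def classify(ip: str, port: int, default: str = "low") -> str:
    """Return the priority of the first rule matching this (ip, port) pair."""
    for rule in RULES:
        if rule["ip"] in (None, ip) and rule["port"] in (None, port):
            return rule["priority"]
    return default
```

Here the first rule combines IP address and service: game traffic from the console gets top billing, its other traffic is still favored, and everything unmatched drops to the default tier.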

Gaming Mode on D-Link Routers

D-Link routers—not just gaming routers, but many D-Link brand routers marketed to home users in general—come with a “Gaming Mode.”

This mode isn’t very self-explanatory. The D-Link router configuration interface says “If you are having difficulties playing some online games—please enable this mode.” and that “Gaming Mode should be used when you are playing games on the internet from behind the router.”


While D-Link doesn’t provide much documentation on this, it turns out that “Gaming Mode” is essentially the same thing as “Full-Cone NAT” and the router uses “Symmetric NAT” when gaming mode is disabled.

Let’s back up a bit here. Your router uses network address translation (NAT) to share your internet connection between the devices connected to it: PCs, gaming consoles, smartphones, tablets, and whatever else. Your router discards incoming traffic by default because it has no idea which device to forward it to.

Now, let’s say your Xbox console establishes an outgoing connection to the internet. When the Xbox receives a response to that connection, the router will forward the incoming traffic to the Xbox.

With the default Symmetric NAT, the router will only forward traffic to the Xbox if it’s from the same destination the Xbox opened a communication channel with.

With full-cone NAT—that’s “Gaming Mode” in D-Link router parlance—the router will forward all incoming traffic on that port over to the Xbox.

In other words, when Gaming Mode is enabled, the Xbox can establish an outgoing connection and then receive incoming connections from any other address. This is often necessary when playing games, as they may be hosted on the Xbox itself.

Many people have reported that enabling Gaming Mode on a D-Link router is necessary to use Xbox Live.
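The difference between the two NAT modes boils down to a single forwarding check, sketched here as a toy Python model (not D-Link’s actual code; real NAT tracks per-port mappings and timeouts):

```python
class Nat:
    """Toy model of the inbound-forwarding decision for one mapped port."""

    def __init__(self, full_cone: bool):
        self.full_cone = full_cone  # True = "Gaming Mode", False = symmetric NAT
        self.sessions = set()       # remote endpoints the LAN host has contacted

    def outbound(self, remote: str) -> None:
        # The LAN host (e.g. an Xbox) opens a connection, creating the mapping.
        self.sessions.add(remote)

    def accepts_inbound(self, remote: str) -> bool:
        if self.full_cone:
            # Full-cone: once the mapping exists, anyone may reach the port.
            return bool(self.sessions)
        # Symmetric: only remotes the host already talked to get through.
        return remote in self.sessions
```

With symmetric NAT, a stranger’s connection attempt is dropped; with full-cone NAT, it’s forwarded to the console—which is exactly why peer-hosted Xbox Live games want Gaming Mode on.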

Is a Gaming Router Worth the Money?

Gaming routers aren’t just a marketing ploy. They genuinely come with a host of useful features such as gaming traffic prioritization, additional Gigabit Ethernet ports, the latest wireless standards, and powerful router hardware.

However, it’s important to remember that none of these features are exclusive to gaming routers. QoS, Gigabit Ethernet, and dual-band 802.11ac Wi-Fi are common features in all higher-end routers.

Check out our guide to the best routers for gaming.




10 Proven and Tested Tips to Extend Battery Life on Android

Welcome to the stochastic age


In 1990, Kleiner Perkins rejected 99.4 percent of the proposals it received, while investing in 12 new companies a year. Those investees made Kleiner Perkins “the most successful financial institution in the history of the world,” boasting “returns of about 40 percent per year, compounded, for coming up on thirty years.”

Nowadays, the Valley’s VC poster child is Y Combinator, which invests in more like 250 companies annually. They’re famously selective, accepting something like 1.5 percent of applicants, but still noticeably less selective than Kleiner Perkins in its heyday. They invest less money (though not necessarily that much less; KP bought 25 percent of Netscape for a mere $5 million back in 1994) in more companies.

In 1995, three networks controlled essentially all American television, and produced only enough programming to fill the week; nowadays there is so much TV that you could binge-watch a new scripted series every day of the year. In 1995, the top 10 movies of the year were responsible for 14 percent of the total box office. So far in 2018, the top 10 have claimed a full 25 percent of the total gross. Something similar happened in publishing; the so-called “midlist” was largely replaced by a “bestseller or bust” attitude.

In 1995, if you were a journalist, your readership was dictated almost entirely by who published you. No matter how compelling your piece in the Halifax Daily News may have been, the same number of people would glance at your headline as at the others in that issue, and that number would be drastically smaller than that of any article, no matter how buried, in The New York Times. Now, the relative readership of any article, both between and within publications, is determined mostly by social media sharing, and inevitably follows a power-law curve, such that a surprisingly small number of pieces attract the lion’s share of readers.

What do these fields have in common? The number of “hits” has remained relatively constant, while their value has grown, and the number of “swings” has grown to the point where it is difficult for any person, or even any group, to pay close attention to them all. And the outcomes inevitably follow a power law. So it doesn’t make sense to focus on individual outcomes any more; instead you focus on cohorts, and you think stochastically.
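A toy simulation makes the cohort logic concrete: draw a batch of “swings” from a heavy-tailed (Pareto) distribution and see how much of the total the single biggest “hit” captures. The distribution and shape parameter below are purely illustrative, not calibrated to any fund or industry:

```python
import random

def simulate_portfolio(n_swings: int, seed: int = 42) -> tuple:
    """Draw n outcomes from a Pareto (power-law) distribution and return
    (total return, share of the total captured by the single biggest hit)."""
    rng = random.Random(seed)
    outcomes = [rng.paretovariate(1.2) for _ in range(n_swings)]
    total = sum(outcomes)
    return total, max(outcomes) / total

total, top_share = simulate_portfolio(250)
```

Rerun it with different seeds and the top hit’s share swings wildly—which individual swing pays off is effectively random, but the shape of the cohort’s outcome is dependable. That is what it means to think stochastically.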

“Stochastic” means “randomly determined,” and your initial inclination may be to recoil — of course producers and investors and publishers aren’t acting randomly! They put enormous amounts of analysis, effort and intelligence into what they do! Which is true. But I put it to you that as gatekeepers’ power has diminished, and the number of would-be directors, CEOs and pundits has skyrocketed, while the costs of trying have shrunk — randomness has become a more and more important factor.

It’s easy to cite anecdotes. What if Excite had bought Google when it was offered to them for $1 million? How far were we, really, from a world in which Picplz succeeded and Instagram failed? Any honest success story will include elements of luck, which is, in this context, another word for randomness. My contention is that the world’s larger trends — greater interconnectedness, faster speed, democratized access to technology — make randomness an ever-more-important factor.

This is not automatically a good thing. People talk about “stochastic terrorism,” a.k.a. “The use of mass public communication, usually against a particular individual or group, which incites or inspires acts of terrorism which are statistically probable but happen seemingly at random.” Think of killers who dedicate their attack to ISIS after the fact despite no previous communication with them, or, more generally and contentiously, political violence promoted by broadcasting hatred and extremism.

And it seems that climate change is increasingly a stochastic disaster. Warmer weather means more energy in the atmosphere, which means more volatile behavior, which means more catastrophes like droughts, wildfires, hurricanes. Does climate change cause those things? Not directly. It increases the probability of them happening. It means both more and bigger hits, if you will.

This doesn’t apply to every field of human endeavor. But it seems to apply to essentially every field driven by unusual, extreme successes or failures — to extremistan, to use Nassim Taleb’s term. Extremistan seems to be growing more extreme everywhere, and there’s no end in sight.



Facebook ends platform policy banning apps that copy its features


Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates: “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

Facebook had previously enforced that policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging, or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

The move will significantly reduce the platform risk of building on the Facebook platform. It could also cast Facebook in a better light in the eyes of regulators. Anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a fairer, more open platform where developers can build without fear of straying too close to Facebook’s history or roadmap, the change could reinvigorate its developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

“We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.”

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article, “Facebook shouldn’t block you from finding friends on competitors,” advocated for Facebook to make its social graph more portable and interoperable, so that users could decamp to competitors if they felt they weren’t treated right, in order to coerce Facebook to act better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their apps for review and, once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to that Find Friends Facebook API for treading in its path.


Below is an excerpt from our previous reporting on how Facebook enforced Platform Policy 4.1 which, before today’s change, was used to hamper competitors:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe’s Find Friends access. The app ended up selling for a paltry double-digit millions price tag to Yahoo before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its six-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.
