07 February 2019

Subscription startup Scroll acquires news aggregator Nuzzel


Tony Haile, who previously led analytics company Chartbeat, is trying to rethink the business model for news at his new startup Scroll. Now he’s adding aggregation and curation to the mix with the acquisition of Nuzzel.

Scroll is still an invite-only product, but Haile explained the idea succinctly: “We deliver this amazing, clean, ad-free experience, and we do it for a low monthly price.”

In other words, after you subscribe and download Scroll, anytime you load up one of its partner sites (including USA Today, BuzzFeed and Vox), you should get an ad-free experience, which should work regardless of whether you’re accessing the site directly from your desktop or mobile browser, or from social media. In exchange, the publishers share the subscription revenue.

Nuzzel, meanwhile, was founded by Jonathan Abrams (who previously founded Friendster), and its core product allows you to see the stories that are most-shared by the people you follow on social media.

Haile said that by acquiring Nuzzel, Scroll can also start experimenting with different models for news curation — which is particularly important because if “we have just two algorithms determining who gets traffic and who doesn’t, then that’s not a healthy web ecosystem.”

“It’s really hard to [build] a scalable business as an amazing curation service,” he added. With Nuzzel, he hopes to “start finding ways in which we can build in that value and drive a new model for our user experience services.”


That said, existing Nuzzel users shouldn’t expect any dramatic changes to either the app or the newsletters — Haile said they will continue to operate as separate products, and his team is taking the approach of “first, do no harm.”

However, Scroll does plan to remove any advertising from the newsletters, and the engineering team behind the Nuzzel Media Intelligence product will spin that out as a separate company.

The financial terms of the deal were not disclosed. According to Crunchbase, Nuzzel had raised $5.1 million from investors including Salesforce CEO Marc Benioff. Scroll, meanwhile, has raised a total of $10 million.

Haile said there won’t be anyone from the Nuzzel team joining Scroll in a full-time capacity, though some of them may remain involved as contractors. Abrams, meanwhile, told me via email that he and Nuzzel COO Kent Lindstrom are starting a new, yet-to-be-announced company.

“I think current Nuzzel users should see this as great news, since Scroll wants to make sure that Nuzzel’s services continue to operate,” Abrams said. “As you know, a lot of other news app and news aggregation startups were unfortunately shut down between 2015 and 2018, so like I said, this is good news for Nuzzel users.”



Tech platforms called to support public interest research into mental health impacts


The tech industry has been called on to share data with public sector researchers so that the mental health and psychosocial impacts of its services on vulnerable users can be better understood, and to contribute to funding the necessary independent research over the next ten years.

The UK’s chief medical officers have made the call in a document setting out advice and guidance for the government about children’s and young people’s screen use. They have also called for the industry to agree a code of conduct around the issue.

Concerns have been growing in the UK about the mental health impacts of digital technologies on minors and vulnerable young people.

Last year the government committed to legislate on social media and safety. It’s due to publish a white paper setting out the detail of its plans before the end of the winter, and there have been calls for platforms to be regulated as publishers by placing a legal duty of care on them to protect non-adult users from harm, though it’s not yet clear whether the government intends to go that far.

“The technology industry must share data they hold in an anonymised form with recognised and registered public sector researchers for ethically agreed research, in order to improve our scientific evidence base and understanding,” the chief medical officers write now.

After reviewing the existing evidence the CMOs say they were unable to establish a clear link between screen-based activities and mental health problems.

“Scientific research is currently insufficiently conclusive to support UK CMO evidence-based guidelines on optimal amounts of screen use or online activities (such as social media use),” they note, hence calling for platforms to support further academic research into public health issues.

Last week the UK parliament’s Science and Technology Committee made a similar call for high quality anonymized data to be provided to further public interest research into the impacts of social media technologies.

We asked Facebook-owned Instagram whether it will agree to provide data to public sector mental health and wellbeing researchers earlier this week. But at the time of writing we’re still waiting for a response. We’ve also reached out to Facebook for a reaction to the CMOs’ recommendations.

Update: A Facebook spokesperson said:

We want the time young people spend online to be meaningful and, above all, safe. We welcome this valuable piece of work and agree wholeheartedly with the Chief Medical Officers on the need for industry to work closely together with government and wider society to ensure young people are given the right guidance to help them make the most of the internet while staying safe.

Instagram’s boss, Adam Mosseri, is meeting with the UK health secretary today to discuss concerns about underage users being exposed to disturbing content on the social media platform.

The meeting follows public outrage over the suicide of a schoolgirl whose family said she had been exposed to Instagram accounts that shared self-harm imagery, including some accounts they said actively encouraged suicide. Ahead of the meeting Instagram announced some policy tweaks — saying it would no longer recommend self-harm content to users, and would start to screen sensitive imagery, requiring users click to view it.

In the guidance document the CMOs write that they support the government’s move to legislate “to set clear expectations of the technology industry”. They also urge the technology industry to establish a voluntary code of conduct to address how they safeguard children and young people using their platforms, in consultation with civil society and independent experts.

Areas that the CMOs flag for possible inclusion in such a code include “clear terms of use that children can understand”, as well as active enforcement of their own T&Cs — and “effective age verification” (they suggest working with the government on that).

They also suggest platforms include commitments to “remove addictive capabilities” from the UX design of their services, a criticism of so-called “persuasive” design.

They also suggest platforms commit to ensure “appropriate age specific adverts only”.

The code should ensure that “no normalisation of harmful behaviour (such as bullying and self-harming) occurs”, they suggest, as well as incorporate ongoing work on safety issues such as bullying and grooming, in their view.

In advice to parents and carers also included in the document, the CMOs encourage the setting of usage boundaries around devices — saying children should not be allowed to take devices into their bedrooms at bedtime to prevent disruption to sleep.

Parents are also encouraged to institute screen-free mealtimes to allow families to “enjoy face-to-face conversation”.

The CMOs also suggest parents and guardians talk to children about device use to encourage sensible social sharing — also pointing out adults should never assume children are happy for their photo to be shared. “When in doubt, don’t upload,” they add.




Raspberry Pi Terminal Commands: A Quick Guide for Raspberry Pi Users


Got hold of a Raspberry Pi but not entirely confident with Linux? While the main desktop is easy enough to use, at times you’ll need to rely on command line entry in the terminal. But if you’re new to the Raspbian operating system and Linux, this is easier said than done.

If you’re using a Raspberry Pi computer for a weekend project (perhaps a media center or a home server), then there is a good chance these useful Raspberry Pi command line instructions will save you some time.

Raspberry Pi Commands: You’re Using Linux

You’ve imaged your SD card, booted your Raspberry Pi, and you’re now running the Raspbian operating system, updated and configured to optimize your Raspberry Pi.

What you may not have realized is that despite the Windows-style icon-driven desktop, Raspbian is a Linux distribution. Several operating systems are available for Raspberry Pi, the vast majority of which are Linux.

This isn’t an attempt to get people using Linux by stealth! You can install Linux on a huge range of devices. Rather, the Raspberry Pi Foundation relies on Linux operating systems because of their open source origins and versatility. While you can use a Linux operating system without using the command line, this is where the real power lies.

Want total control over your Raspbian-powered Raspberry Pi? Begin by launching LX Terminal or booting to the command line.

5 Important Raspberry Pi Update Commands

We wouldn’t expect you to start using the command line without knowing how it works. Essentially, it is a method for instructing the computer to perform tasks, but without a mouse.

Look for the pi@raspberrypi $ prompt when you log in to the terminal. You can enter commands whenever this is displayed.

Probably the first thing you should learn to do from the command line is update your Raspberry Pi. If you’re using Raspbian, this is a case of using three or four commands, to update and upgrade the Pi’s sources and operating system:

  • sudo apt-get update
  • sudo apt-get upgrade
  • sudo apt-get dist-upgrade
  • sudo rpi-update

To save time, combine these into a single chained command:

  • sudo apt-get update && sudo apt-get upgrade && sudo apt-get dist-upgrade && sudo rpi-update
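If you run this sequence often, it can be worth wrapping in a shell function. A minimal sketch, assuming you add it to your shell startup file — the name update_pi is our own, not a standard command:

```shell
# A sketch: wrap the four update steps in one function, so a single
# `update_pi` call runs them in order and stops at the first failure.
# (The name `update_pi` is ours; add this to ~/.bashrc to keep it.)
update_pi() {
  sudo apt-get update &&
  sudo apt-get -y upgrade &&
  sudo apt-get -y dist-upgrade &&
  sudo rpi-update
}
```

The -y flag answers apt-get’s usual confirmation prompts automatically, which is what makes the chain hands-off.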

5 Raspberry Pi Command Line Basics

With a mouse-driven GUI, you can easily switch directories and read their contents. However, you may prefer the flexibility of text-based commands.

  • pwd shows you the current directory (print working directory).
  • ls will list the contents of the directory.
  • cd is used to change the directory. For example, cd edward will switch you to a child directory called “edward”, while cd .. returns focus to the parent directory.
  • mkdir newdir will create a new directory, where “newdir” is the directory label. You can also create a succession of new directories with mkdir -p /home/edward/newdir1/newdir2, where both newdir1 and newdir2 are created, but this will only work with the -p flag.
  • clear presents a clean new screen, useful if your previous commands are cluttering things up.

You’ll easily pick up these command line basics. It’s useful to be able to navigate via the command line as some files and folders are invisible to the mouse-driven file manager.
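Here is a short worked session tying those navigation commands together. It uses a throwaway directory so it is safe to try anywhere; the names projects and mediacenter are examples only:

```shell
# Worked example of the navigation basics above, in a scratch
# directory (directory names are examples only).
work="$(mktemp -d)"            # make a throwaway working directory
cd "$work"
mkdir -p projects/mediacenter  # -p creates the nested path in one go
cd projects/mediacenter        # descend into the new child directory
pwd                            # prints the full path you are now in
cd ..                          # back up to the parent, projects
ls                             # lists its contents: mediacenter
```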

10 Commands for Raspberry Pi Hardware Info

On a Windows PC or Mac you can easily find hardware information by looking in System Information or About This Mac. To find out about your Raspberry Pi’s hardware, enter the following:

  • cat /proc/cpuinfo

Discover information about the Raspberry Pi's CPU

This will output information about the device’s processor. For instance, where you see “BCM2708”, this indicates that Broadcom manufactured the chip.

Run these proc directory commands to uncover other hardware information.

  • cat /proc/meminfo displays details about the Raspberry Pi’s memory.
  • cat /proc/partitions reveals the size and number of partitions on your SD card or HDD.
  • cat /proc/version shows you which version of the Linux kernel your Pi is running.

Check the current Linux versions

Use these commands to assess what your Raspberry Pi might be capable of. It doesn’t end there. Find further information using the vcgencmd series of commands:

  • vcgencmd measure_temp reveals the CPU temperature (vital if you’re concerned about airflow).
  • vcgencmd get_mem arm && vcgencmd get_mem gpu will reveal the memory split between the CPU and GPU, which can be adjusted in the config screen.
  • free -o -h will display the available system memory.
  • top -d 1 checks the load on your CPU, refreshing every second and displaying details for all cores.
  • df -h is a great way to quickly check the free disk space on your Raspberry Pi.

How much free space does your Raspberry Pi's SD card have?

  • uptime is a simple command that displays the Raspberry Pi’s load average.
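Note that vcgencmd measure_temp prints its result in the form temp=48.3'C rather than as a bare number. If you want just the value, say for logging, a small parser helps. This is a hypothetical helper of our own, not part of vcgencmd:

```shell
# Hypothetical helper: strip vcgencmd's "temp=48.3'C" wrapper down
# to the bare Celsius number, e.g. for logging or threshold checks.
parse_temp() {
  echo "$1" | sed "s/^temp=//; s/'C$//"
}
parse_temp "temp=48.3'C"   # prints: 48.3
```

On the Pi itself you would feed it live output: parse_temp "$(vcgencmd measure_temp)".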

3 Commands to Check Connected Devices

Just as you can list the contents of a directory with a single command, Linux lets you list devices connected to your computer.

  • ls /dev/mmcblk* displays the SD card and its partitions. For a Raspberry Pi with a USB hard drive attached, ls /dev/sda* lists that drive and its partitions.
  • lsusb displays all attached USB devices. This is crucial for connecting a hard disk drive or other USB hardware that requires configuration.

Use lsusb to learn about USB devices connected to the Raspberry Pi

If the item is listed here, you should be able to set it up.

  • lsblk is another list command you can use. This displays information about all attached block devices (storage that reads and writes in blocks).
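Because every lsusb line follows the same Bus/Device/ID pattern, standard text tools work well on its output. Below, a hypothetical two-line sample of lsusb output is counted and filtered with grep:

```shell
# A hypothetical sample of lsusb output, used to show how grep can
# count and filter the attached-device list.
sample="Bus 001 Device 004: ID 0781:5583 SanDisk Corp. Ultra Fit
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. Hub"
echo "$sample" | grep -c '^Bus'   # count of attached devices: 2
echo "$sample" | grep 'SanDisk'   # pick out one device by name
```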

3 Commands to Shutdown and Restart Raspberry Pi

Perhaps the most important command line instruction is sudo. This single word instructs Linux-based systems that the following command is to be carried out with “super user” privilege. This is an advanced level of access like (but not the same as) administrator on Windows computers.

Raspberry Pi configuration tool

One of the most common commands for Raspbian users is sudo raspi-config. This opens the configuration screen for the operating system (there is also a desktop version found via main menu). The following three commands may prove useful:

  • startx will start the Raspberry Pi GUI (graphical user interface) and return you to the default Raspbian desktop.
  • sudo shutdown -h now will commence the shutdown process with immediate effect. Schedule a timed shutdown with the format: sudo shutdown -h 21:55
  • sudo reboot is for restarting the Raspberry Pi from the command line.
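Since a mistyped time either errors out or shuts the Pi down at the wrong moment, it can help to build the timed-shutdown command and review it before running it. A tiny hypothetical helper:

```shell
# Hypothetical helper: build the timed-shutdown command shown above
# from an HH:MM argument, printing it so you can review it before
# copy-pasting (or eval-ing) it.
shutdown_at() {
  echo "sudo shutdown -h $1"
}
shutdown_at 21:55   # prints: sudo shutdown -h 21:55
```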

Raspberry Pi Terminal Commands Are Power

For many people, command line access on any platform is intimidating.

The useful commands listed here are an attempt to give the Raspberry Pi newcomer the bare minimum to get started with the terminal, a small stepping stone to success with whichever Pi project they decide to start.

There’s an added bonus: learning these commands can set you on the road to using Linux, as the majority will work on any distribution! If you’re new to the Raspberry Pi, check out our article on Raspberry Pi basics everyone should know.



7 Alternative Superhero Movies to Watch on Netflix



Theater audiences love superhero movies. Every year critics confidently predict superhero fatigue will finally stall the success of the Marvel and DC movie juggernauts. And yet every year superhero movies make billions of dollars at the box office.

People aren’t sick of superheroes yet. However, they are looking for movies that go beyond the MCU and DCEU formula. If you are also ready to watch some alternative superhero movies, Netflix has plenty of options to choose from…

1. V for Vendetta (2005)

Dystopian Political Thriller | IMDb: 8.2 | RT: 73%

Welcome to an alternate future, where a neo-fascist regime has taken control of Britain. The only one standing in the path of the authoritarian government is the mysterious entity known simply as V.

His aim is to bring about a revolution that would overthrow the government through a series of elaborately staged terrorist acts. Starring Hugo Weaving, Natalie Portman, and Stephen Rea in lead roles, the film was directed by James McTeigue.

The screenplay is written by The Wachowskis, based on the DC/Vertigo Comics limited series of the same name. The film drew praise for its stylized action and complex socio-political themes. V for Vendetta is sumptuous fare for those looking for deeper themes in superhero films.

2. Krrish (2006)

Bollywood Superhero Sci-Fi Thriller | IMDb: 6.4 | RT: 100%

The concept of a Bollywood superhero film might leave western audiences scratching their heads. But Krrish upends expectations of what you can get from such a film. It combines Matrix-style action with old-school Hollywood romantic musical tropes.

The film is a sequel to Koi Mil Gaya (imagine E.T. crossed with Forrest Gump). It follows the story of a young man named Krishna, who must hide his powers from the world at the behest of his grandmother.

Love brings Krishna to Singapore. There he becomes involved in a sinister conspiracy involving his dead father and a scientist bent on world domination.

The movie stars Hrithik Roshan as the charismatic lead, with Quantico’s Priyanka Chopra opposite him. Krrish is a throwback to the kind of superhero movies Hollywood was making in the Christopher Reeve era.

Roshan’s commanding performance and the film’s emphasis on practical stunts instead of CGI makes Krrish a definite standout in the crowd of foreign superhero films.

3. Hellboy (2004)

Supernatural Horror Action-Adventure | IMDb: 6.9 | RT: 81%

The world is beset by threats of the paranormal demonic variety. A secret government organization has been tasked with identifying and neutralizing such threats.

The ace up their sleeves is a demonic beast-turned-superhero nicknamed Hellboy. Sarcastic, swaggering, and supremely irreverent, Hellboy makes his own rules when it comes to protecting the world from his paranormal kin.

Starring Ron Perlman in a role that seems custom-made for him, Hellboy shows us the early genius of Guillermo del Toro as he takes the director’s chair of a superhero film for the second time, after Blade II.

The movie is based on Hellboy: Seed of Destruction, a graphic novel published by Dark Horse Comics.

4. Watchmen (2009)

Neo-Noir | IMDb: 7.6 | RT: 64%

Zack Snyder’s adaptation of Watchmen is undeniably brilliant. The critically acclaimed graphic novel series by Alan Moore and Dave Gibbons on which the film is based is owned by DC. But it superbly satirizes DC’s own trademark tropes when it comes to superhero comics, as well as Marvel’s.

In 1985, the arrival of a god-like being named Dr. Manhattan shakes up the world and the superheroes populating it at the height of the Cold War. It is up to a group of retired heroes to navigate a myriad of complex moral and interpersonal dilemmas in their quest to uncover a vast and sinister conspiracy regarding the fate of the entire world.

5. Next Gen (2018)

Sci-Fi Action Comedy-Drama | IMDb: 6.6 | RT: 80%

The world has moved forward technologically. Sentient robots are commonplace. Mai Su is a lonely teenage girl who encounters a top-secret robot prototype known simply as 7723. The two unlikely allies must band together to put an end to a vicious threat.

Starring John Krasinski, Charlyne Yi, Jason Sudeikis, and Michael Peña, among others, this Canadian-American-Chinese feature was directed by Kevin R. Adams and Joe Ksander and released on Netflix in 2018.

The movie effectively puts its young protagonist at the center of the action. It is the relationship between Mai and 7723 that drives much of the narrative. This is a great watch for the whole family.

6. Psychokinesis (2018)

South Korean Superhero Adventure | IMDb: 5.9 | RT: 88%

Director Yeon Sang-ho follows up his blockbuster zombie feature Train to Busan with a film about an average Joe who gets superpowers. When bank security guard Shin Roo-mi gains telekinetic powers following contact with a mysterious meteor, his whole life turns upside down.

He finds himself in the crosshairs of an evil construction company that wants to take over his neighborhood. Unless Shin can step up to the challenge and save the day.

The movie is frequently funny and offers some sharp social commentary. Psychokinesis also features the type of practical action scenes that South Korean cinema is famous for.

More emphasis is laid on the human side of the superhero narrative instead of throwing out scene after scene of CGI spectacle.

7. Astro Boy (2009)

Hong Kong-American Computer Animation | IMDb: 6.3 | RT: 50%

Based on the popular manga series of the same name, Astro Boy follows the story of Tobio Tenma, a 13-year-old boy who was disintegrated by a dangerous weapon. His distraught father constructs a robot replica of Tobio using his memories.

The robot must come to terms with the memory of its human self while battling the very machine that took his life earlier.

With a voice cast which includes Freddie Highmore, Nicolas Cage, and Eugene Levy among others, Astro Boy is a marvelously designed piece of cartoon kinetics. The lack of originality of the plot is more than made up for by the visual thrills it offers.

Exploring Sci-Fi Beyond DC and Marvel

At the end of the day, most superhero films can basically be defined as “Science fiction movies featuring men in tights”. But the science fiction genre itself ranges far beyond conventional superhero fare. You can explore the genre further with the best modern sci-fi movies on Netflix.

And if you still feel the need for a superhero fix, these forgotten superhero games you should definitely play will put you in the shoes of your favorite superpowered characters.



Match fully acquires relationship-focused app Hinge


Last year, Match Group acquired a 51 percent stake in the relationship-focused dating app Hinge, in order to diversify its portfolio of dating apps led by Tinder. The company has now confirmed that it fully bought out Hinge in the past quarter, and today owns 100 percent of the app which has been gaining momentum both inside and outside of the U.S. following last year’s deal.

Terms of the acquisition were not disclosed.

Match believes that Hinge can offer an alternative to those who aren’t interested in using casual apps, like Tinder. As the company noted on its earnings call with investors this morning, half of all singles in the U.S. and Europe have never tried dating products. And of the 600 million internet-connected singles in the world, 400 million have never used dating apps.

That leaves room for an app like Hinge to grow, as it can attract a different type of user than Tinder and other Match-owned apps – like OKCupid or Plenty of Fish, for example – are able to reach.

As Match explained in November, it plans to double-down on marketing that focuses on Tinder’s more casual nature and use by young singles, while positioning Hinge as the alternative for those looking for serious relationships. The company said it would also increase its investment in Hinge going forward, in order to grow its user base.

Those moves appear to be working. According to Match Group CEO Mandy Ginsberg, Hinge downloads grew 4 times on a year-over-year basis in the fourth quarter of 2018, and grew by 10 times in the U.K. The app is particularly popular in New York and London, which are now its top two markets, the exec noted.

Match may also see Hinge as a means of better competing with dating app rival Bumble, which it has been unable to acquire and continues to battle in court over various disputes.

Bumble’s brand is focused on female empowerment with its “women go first” product feature, and takes a more heavy-handed approach to banning, ranging from its prohibition on photos with weapons to its stance on kicking out users who are disrespectful to others.

Match, in its earnings announcement, made a point of comparing Hinge to other dating apps, including Bumble.

“Hinge downloads are now two-and-a-half times more than the next largest app, and 40 percent of Bumble downloads,” said Ginsberg, referring to a chart (below) which positions Hinge next to competitors like Happn, The League, Coffee Meets Bagel and Bumble.

“We expect Hinge to continue to strengthen its position in this relationship-minded market,” she added. “We believe that Hinge can be a meaningful revenue contributor to Match Group beyond 2019, and we have confidence that it can carve out a solid position in the dating app landscape amongst relationship-minded millennials, and serve a complementary role in our portfolio next to Tinder.”

Match says it has big plans for Hinge in 2019: it will expand Hinge to international markets, double the size of its team, and build new product features focused on helping people get off the app and go on dates.

Hinge today claims to be the fastest-growing dating app in the U.S., U.K., Canada and Australia, and is setting up a date every four seconds. 3 out of 4 first dates on Hinge also lead to second dates, it says.

Hinge is now one of several dating apps owned by Match Group, which is best known for Tinder and its namesake, Match.com. But the company has been diversifying as of late, not only with Hinge, but also with its newest addition, Ship, which was developed in partnership with media brand Betches. But Ship could be a miss if it doesn’t even out its demographics — currently, the subscriber base is 80 percent female, Match says.

Tinder, meanwhile, still drives Match Group’s revenue, which rose to $457 million from $379 million a year ago, and exceeded analysts’ expectations for $448 million, per MarketWatch. In the quarter, Tinder added 233,000 net new subscribers, bringing its total subscriber count to 4.3 million. Combined with Match’s other apps, overall subscribers totalled 8.2 million.

 



Twitter Q4 beats on sales of $909M and EPS of $0.33, but MAUs slump to just 321M


After strong results from Facebook and Snap this quarter, all eyes were on Twitter to see if the other big, publicly listed social network could deliver a hat trick of growth. Judged on financials alone, Twitter did not disappoint: it reported Q4 revenues of $909 million (up 24 percent on a year ago) and diluted earnings per share of $0.33, with net income of $244 million. On average, analysts had been expecting revenues of $859.5 million on an EPS of $0.25.

However! Twitter’s Achilles’ heel remains user growth. It has now slumped to 321 million monthly active users, falling short even of estimates that were expecting a decline. Shares are slumping too in pre-market trading, down more than seven percent so far.

That may also not have been helped by weak guidance. The company said it expects Q1 revenues to be between just $715 million and $775 million, with operating income between $5 million and $35 million. Even with Q1 seasonal declines, this is a big drop from Q4, though still a jump up on financials from a year ago. Twitter estimated that capex for 2019 would be between $550 million and $600 million, which makes one wonder if it has some acquisitions in mind, too.

Twitter’s Q4 MAUs are a decrease of 9 million year-over-year and down 5 million on last quarter, with declines both in the US and in international markets.

Twitter said the decline was partly due to three areas: “product changes that reduced the number of email notifications sent, as well as decisions we have made to prioritize the health of the service and not move to paid SMS carrier relationships in certain markets, and, to a lesser extent, changes we made to comply with the General Data Protection Regulation (GDPR) in Europe.”

The company will stop giving MAU numbers after the next quarter, which is one way of getting the decline out of the conversation.

Advertising revenues were $791 million, accounting for 87 percent of the company’s revenues (more on these below). “Monetizable daily active users” are now at 126 million, up from 124 million in the previous quarter.

To put user growth into some context, Twitter has long-standing issues with user growth that even predate the company going public. In many quarters — such as last quarter, when it also beat estimates on revenues of $758 million and earnings per share of 21 cents; and a year ago, when it also crushed financials but fell on subscriber growth — user numbers, specifically monthly active users, have remained flat or even shrunk.

(Even analysts factor in declines to their own estimates. Analysts had been expecting 324 million monthly active users in Q4, according to a poll from Bloomberg, down from 326 million in Q3.)

Twitter’s challenge on the user-number front is that, despite its almost addictive popularity with some people, a strong showing from very high-profile figures “speaking to the people” on the platform, and its status as a go-to for the media both to source news and to broadcast it (the real-time aspect of the feed lends itself well to all of these), it has been hard for the service to find that groove with everyone.

For many later-adopting, newer users especially, Twitter has proven to be confusing or too much work to use. That’s led the company to regularly tweak the service to try to make it more user-friendly, with the latest move being a planned “beta” app for running multiple experiments simultaneously on a live audience receptive to seeing them and giving feedback.

As with Snap’s Snapchat, Twitter has worked to mitigate those numbers another way, too: by focusing and asking others to focus on daily active over monthly active users. That will become an official policy soon: after Q1 it will stop giving out MAU numbers at all.

So why do user numbers ultimately matter? The general thinking goes that, in a business based around advertising and user data, as Twitter is, the larger audience you have the more revenue you can make off them as a product — a turn that Google and Facebook have made to great effect.

So it’s interesting that despite Twitter’s issues with user growth, the company has been coming up trumps (sorry) with its business model, specifically initiatives around advertising and marketing and figuring out more clever ways of targeting those who are on there.

Its ad revenues were up 23 percent, and Twitter said that new formats around video media are in particular showing strong results. Video accounted for more than half of Twitter’s ad revenues in the quarter and for all of 2018.

For any ad-based business — and any investor or analyst of those ad-based businesses — a focus not on volume but on the quality of the audience is an interesting trend and it will be interesting to continue watching how that develops longer term, even if it’s not a trend that’s particularly benefitting Twitter’s own share price at the moment.

Less strong this quarter were the company’s various enterprise efforts. The company said that data licensing and other revenue totalled $117 million, an increase of 35 percent but still a small proportion of overall revenues. 

Facebook’s strong results this quarter came at the same time that the company has been weathering a ton of bad publicity around how its platform has been exploited (seemingly with little resistance from Facebook) to manipulate democratic processes, and how Facebook itself has been exploiting users to extract more data to help it build products.

The fact that one (financials) does not seem to be impacted by the other (bad PR) raises a lot of questions: does the public really not care about all these things, or will the commercial ramifications come down the line as a delayed reaction?

It’s not clear how it will play out, but regardless, Facebook has been taking measures to try to set things aright, both in terms of hiring more people to “fix” some of these issues, and also to reorient its whole staff to prioritise cleaning up the platform both when planning for future products, and in their daily work.

I mention all this because Twitter dedicated some time in its earnings to highlighting how it has been battling abuse. This has been one of the company’s biggest points of criticism from users, both as observers and as first-hand recipients of harassment.

Twitter noted that there has been a 16 percent year-over-year decrease in abuse reports. And it highlighted how it has improved security, updated rules for hateful conduct, and ramped up monitoring “behavior-based signals” to better manage what Tweets are viewed. 

As with Facebook, it’s really not clear yet how this effort, or the frustrating and dangerous presence of trolls, will impact the company’s bottom line longer term, but we at least have one proof point of the negative impact: it apparently affected how Twitter was once viewed as an acquisition target. More generally, unless you are a ruthless monster, there is an argument for fixing it regardless, because that is just the right thing to do. And that is what Twitter is trying to do.

It said that it will also focus on this in 2019, with a “more proactive approach to reducing abuse and its effects on Twitter, with the goal of reducing the burden on victims of abuse and, where possible, taking action before abuse is reported.” Specifically, it said it would focus on abuse that could cause severe or immediate harm, and on a better sign-up process to screen for bad actors.

In terms of guidance for the year ahead, the company is not giving any user number forecasts, but it offered the following outlook:

For Q1, we expect:

  • Total revenue to be between $715 million and $775 million
  • GAAP operating income to be between $5 million and $35 million

For FY 2019, we expect:

  • GAAP and cash operating expenses to be up approximately 20% year-over-year in 2019 as we support our existing priorities of health, conversation, revenue product and sales, and platform
  • Stock-based compensation expense to be in the range of $350 million to $400 million
  • Capital expenditures to be between $550 million and $600 million

More to come.


Read Full Article

Motorola’s G7 line arrives this spring, starting at $199


Weeks of leaks haven’t left much to the imagination. But for those waiting for the real thing, the latest iteration of Motorola’s budget G line just became officially official as of this morning — and with a few weeks to spare ahead of Mobile World Congress. Of course, the Moto G7 line isn’t really aimed at the MWC crowd.

That show tends to be far more focused on premium flagships, while, as Motorola put it to me ahead of launch, this line is for “people who say, ‘I don’t need all this phone.’” In other words, people who don’t want to spend $1,000+ for a flagship. As such, the line starts at $199, putting it in line with earlier models.

As ever, the line will be available in a somewhat convoluted set of four models. There’s the G7, the G7 Play, the G7 Power and the G7 Plus. The Plus, which brings a number of camera effects that have trickled down from the Moto Z line, won’t be available here in the States. It is, however, available today in Brazil and Mexico and will be rolling out in Europe, Australia and other parts of Latin America, packing a 16-megapixel dual camera, OIS and “auto-smile” image capture.

As for the base-level G7, it sports a 6.2-inch display, 12-megapixel dual cameras and a beefy 5,000 mAh battery, coupled with a middling Snapdragon 632. That, too, is already available in Brazil and Mexico, priced at $299. For $249 you can get the G7 Power, which has the same screen and battery but drops the dual cameras.

Cheapest of all is the $199 Moto G7 Play. That shrinks the screen down to 5.7 inches and pops a single 13-megapixel camera on back. The G7, G7 Play and G7 Power will be available in the States this spring. 



German antitrust office limits Facebook’s data-gathering


A lengthy antitrust probe into how Facebook gathers data on users has resulted in Germany’s competition watchdog banning the social network giant from combining data on users across its own suite of social platforms without their consent.

The investigation of Facebook data-gathering practices began in March 2016.

The decision by Germany’s Federal Cartel Office, announced today, also prohibits Facebook from gathering data on users from third party websites — such as via tracking pixels and social plug-ins — without their consent.

The decision does not yet have legal force, however, and Facebook has said it’s appealing.

In both cases — i.e. Facebook collecting and linking user data from its own suite of services; and from third party websites — the Bundeskartellamt says consent must be voluntary, so cannot be made a precondition of using Facebook’s service.

The company must therefore “adapt its terms of service and data processing accordingly”, it warns.

“Facebook’s terms of service and the manner and extent to which it collects and uses data are in violation of the European data protection rules to the detriment of users. The Bundeskartellamt closely cooperated with leading data protection authorities in clarifying the data protection issues involved,” it writes, couching Facebook’s conduct as “exploitative abuse”.

“Dominant companies may not use exploitative practices to the detriment of the opposite side of the market, i.e. in this case the consumers who use Facebook. This applies above all if the exploitative practice also impedes competitors that are not able to amass such a treasure trove of data,” it continues.

“This approach based on competition law is not a new one, but corresponds to the case-law of the Federal Court of Justice under which not only excessive prices, but also inappropriate contractual terms and conditions constitute exploitative abuse (so-called exploitative business terms).”

Commenting further in a statement, Andreas Mundt, president of the Bundeskartellamt, added: “In future, Facebook will no longer be allowed to force its users to agree to the practically unrestricted collection and assigning of non-Facebook data to their Facebook user accounts.

“The combination of data sources substantially contributed to the fact that Facebook was able to build a unique database for each individual user and thus to gain market power. In future, consumers can prevent Facebook from unrestrictedly collecting and using their data. The previous practice of combining all data in a Facebook user account, practically without any restriction, will now be subject to the voluntary consent given by the users.

“Voluntary consent means that the use of Facebook’s services must not be subject to the users’ consent to their data being collected and combined in this way. If users do not consent, Facebook may not exclude them from its services and must refrain from collecting and merging data from different sources.”

“With regard to Facebook’s future data processing policy, we are carrying out what can be seen as an internal divestiture of Facebook’s data,” Mundt added. 

Facebook has responded to the Bundeskartellamt’s decision with a blog post setting out why it disagrees. The company did not respond to specific questions we put to it.

One key consideration is that Facebook also tracks non-users via third party websites. Aka, the controversial issue of ‘shadow profiles’ — which both US and EU politicians questioned founder Mark Zuckerberg about last year.

Which raises the question of how it could comply with the decision on that front, if its appeal fails, given it has no obvious conduit for seeking consent from non-users to gather their data. (Facebook’s tracking of non-users has already previously been judged illegal elsewhere in Europe.)

The German watchdog says that if Facebook intends to continue collecting data from outside its own social network to combine with users’ accounts without consent it “must be substantially restricted”, suggesting a number of different criteria are feasible — such as restrictions including on the amount of data; purpose of use; type of data processing; additional control options for users; anonymization; processing only upon instruction by third party providers; and limitations on data storage periods.

Should the decision come to be legally enforced, the Bundeskartellamt says Facebook will be obliged to develop proposals for possible solutions and submit them to the authority which would then examine whether or not they fulfil its requirements.

While there’s lots to concern Facebook in this decision, it isn’t all bad for the company — or, rather, it could have been worse.

The authority makes a point of saying the social network can continue to make the use of each of its messaging platforms subject to the processing of data generated by their use, writing: “It must be generally acknowledged that the provision of a social network aiming at offering an efficient, data-based business model funded by advertising requires the processing of personal data. This is what the user expects.”

Although it also does not close the door on further scrutiny of that dynamic, either under data protection law (as indeed, there is a current challenge to so-called ‘forced consent’ under Europe’s GDPR); or indeed under competition law.

“The issue of whether these terms can still result in a violation of data protection rules and how this would have to be assessed under competition law has been left open,” it emphasizes.

It also notes that it did not investigate how Facebook subsidiaries WhatsApp and Instagram collect and use user data — leaving the door open for additional investigations of those services.

On the wider EU competition law front, in recent years the European Commission’s competition chief has voiced concerns about data monopolies — going so far as to suggest, in an interview with the BBC last December, that restricting access to data might be a more appropriate solution to addressing monopolistic platform power vs breaking companies up.

In its blog post rejecting the German Federal Cartel Office’s decision, Facebook’s Yvonne Cunnane, head of data protection for its international business, Facebook Ireland, and Nikhil Shanbhag, director and associate general counsel, make three points to counter the decision, writing that: “The Bundeskartellamt underestimates the fierce competition we face in Germany, misinterprets our compliance with GDPR and undermines the mechanisms European law provides for ensuring consistent data protection standards across the EU.”

On the competition point, Facebook claims in the blog post that “popularity is not dominance” — suggesting the Bundeskartellamt found 40 per cent of social media users in Germany don’t use Facebook. (Not that that would stop Facebook from tracking those non-users around the mainstream Internet, of course.)

Although, in its announcement of the decision today, the Federal Cartel Office emphasizes that it found Facebook to have a dominant position in the German market — with (as of December 2018) 23M daily active users and 32M monthly active users, which it said constitutes a market share of more than 95 per cent (daily active users) and more than 80 per cent (monthly active users).

It also says it views social services such as Snapchat, YouTube and Twitter, and professional networks like LinkedIn and Xing, as only offering “parts of the services of a social network” — saying it therefore excluded them from its consideration of the market.

Though it adds that “even if these services were included in the relevant market, the Facebook group with its subsidiaries Instagram and WhatsApp would still achieve very high market shares that would very likely be indicative of a monopolisation process”.

The mainstay of Facebook’s argument against the Bundeskartellamt decision appears to fix on the GDPR — with the company both seeking to claim it’s in compliance with the pan-EU data-protection framework (although its business faces multiple complaints under GDPR), while simultaneously arguing that the privacy regulation supersedes regional competition authorities.

So, as ever, Facebook is underlining that its regulator of choice is the Irish Data Protection Commission.

“The GDPR specifically empowers data protection regulators – not competition authorities – to determine whether companies are living up to their responsibilities. And data protection regulators certainly have the expertise to make those conclusions,” Facebook writes.

“The GDPR also harmonizes data protection laws across Europe, so everyone lives by the same rules of the road and regulators can consistently apply the law from country to country. In our case, that’s the Irish Data Protection Commission. The Bundeskartellamt’s order threatens to undermine this, providing different rights to people based on the size of the companies they do business with.”

The final plank of Facebook’s rebuttal focuses on pushing the notion that pooling data across services enhances the consumer experience and increases “safety and security” — the latter point being the same argument Zuckerberg used last year to defend ‘shadow profiles’ (not that he called them that) — with the company claiming now that it needs to pool user data across services to identify abusive behavior online and disable accounts linked to terrorism, child exploitation and election interference.

So the company is essentially seeking to leverage (you could say ‘legally weaponize’) a smorgasbord of antisocial problems many of which have scaled to become major societal issues in recent years, at least in part as a consequence of the size and scale of Facebook’s social empire, as arguments for defending the size and operational sprawl of its business. Go figure.



Be Positive in 2019, Check Who Viewed You on LinkedIn, and Use 2FA



The Really Useful Podcast is back for a brand new run of episodes in 2019, kicking off with a look at some productivity and personal security tips.

Need to get yourself together and be more positive in 2019? Wondering which potential hiring manager is eyeing you for a new role? And are you using two-factor authentication (and why does it have such a complicated name anyway)?

This week’s show brings you discussion of these topics, personal insights, and tips, broken down with our trademark simplicity.

After all, we’re the tech podcast for technophobes! Click play to find out more.

Really Useful Podcast Season 2 Episode 1 Shownotes

As ever, please share the podcast with anyone you know who would benefit from our straight, no-nonsense approach to using technology.

This week’s show is brought to you by Christian Cawley and Gavin Phillips. You can follow them on Twitter as @thegadgetmonkey and @gavinspavin and they’re happy to hear your thoughts and suggestions for future topics.

Look out for our other shows (featuring other MakeUseOf contributors). Subscribe to The Really Useful Podcast on iTunes and YouTube (be sure to hit the bell icon to be notified of new episodes) for more tips.

Read the full article: Be Positive in 2019, Check Who Viewed You on LinkedIn, and Use 2FA



Twitter Still Wants to Let You Edit Your Tweets


Ever since Twitter launched way back in 2006, users have been asking for the ability to edit their tweets. And yet here we are, over a decade later, and nothing. We’re still stuck with the binary choice of leaving a tweet as it is, or deleting it in its entirety.

If you’re starting to doubt whether you’ll ever gain the option of editing a tweet, don’t despair. In a recent interview with Joe Rogan, Jack Dorsey, the co-founder and CEO of Twitter, revealed it’s still something the company is at least thinking about.

Jack Dorsey Talks Twitter With Joe Rogan

Dorsey spoke about an edit function for tweets in the context of expanding the character limit. He said, “We don’t have edit Tweets right now,” prompting Rogan to request “the ability to edit […] but also the ability for people to see the original”.

Dorsey’s response was to suggest Twitter is “looking at exactly that”. He then goes on to explain why Twitter doesn’t have an edit function in the first place, which is all to do with the way Twitter started out powered by SMS text messages.

Still, Twitter has moved on since then, and Dorsey floated the idea of “a 5-second to 30-second delay”. This would give Twitter users a short window of time to edit a tweet without “taking the real-time nature and conversational flow out of it”.
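To make the idea concrete, here’s a rough sketch of how such a delayed-send window could work. Everything here — the class name, the API, the default delay — is purely illustrative, not anything Twitter has actually described:

```python
import time

class DelayedTweetBuffer:
    """Hypothetical sketch of a delayed-send window: a tweet is held
    for `delay` seconds, during which it can still be edited, before
    it becomes publicly visible."""

    def __init__(self, delay=30, clock=time.monotonic):
        self.delay = delay
        self.clock = clock       # injectable for testing
        self.pending = {}        # tweet_id -> (text, posted_at)
        self.published = {}      # tweet_id -> text

    def post(self, tweet_id, text):
        self.pending[tweet_id] = (text, self.clock())

    def edit(self, tweet_id, new_text):
        text, posted_at = self.pending[tweet_id]
        if self.clock() - posted_at >= self.delay:
            raise ValueError("edit window closed")
        self.pending[tweet_id] = (new_text, posted_at)

    def flush(self):
        """Move tweets whose edit window has expired into the public timeline."""
        now = self.clock()
        for tid, (text, posted_at) in list(self.pending.items()):
            if now - posted_at >= self.delay:
                self.published[tid] = text
                del self.pending[tid]
```

The key design point is that nothing is ever edited in public: readers only ever see the final text, which sidesteps the “people saw the original” problem Rogan raised, at the cost of the real-time feel Dorsey wants to preserve.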

If you’re an avid Twitter user, or even just a beginner looking to gain an insight, the whole conversation is worth a listen. The part about editing tweets starts at around the 1 hour 20 minute mark, and the video embedded above is set to start playing at that point.

Editing Tweets Shouldn’t Be This Difficult

This interview confirms that Twitter is still at least considering introducing an edit function. Still, seeing as in January 2017 Dorsey revealed that Twitter was “thinking a lot about” offering the option to edit your tweets, we won’t hold our breath waiting for it to happen.

Image Credit: Maryland GovPics/Flickr

Read the full article: Twitter Still Wants to Let You Edit Your Tweets



Google’s Password Checkup Keeps You Safe From Hackers


Google has launched a new Chrome extension designed to keep your online accounts secure at all times. Password Checkup does exactly what the name suggests: it checks to make sure your username and password combination is secure.

The web can be a scary place. There’s malware and phishing emails lying in wait, and hackers seem to be stealing data left, right, and center. Google is doing what it can to keep you safe, and its latest effort is a Chrome extension called Password Checkup.

How to Use Google’s Password Checkup

As detailed on the Google Security Blog, Password Checkup checks your username and password against a database of exposed login credentials — “over 4 billion credentials that Google knows to be unsafe.”

All you need to do is install Password Checkup on Google Chrome. Once installed, you’ll see the Password Checkup icon in your browser bar. Then, every time you sign into a site, Google will check your login credentials to see if they are still safe to use.

If your login credentials aren’t in the database you’ll be free to continue. However, if they match a set in Google’s database you’ll be alerted to the problem. Google will then suggest you change your password to something not already exposed.

Google is keen to emphasize how secure this process is. Your login credentials are “strongly hashed and encrypted” when sent to Google. And the company uses “blinding and private information retrieval” to search through its list of logins.
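Google hasn’t published its exact protocol beyond those terms, but the general shape of a privacy-preserving breach check can be illustrated with the simpler hash-prefix (k-anonymity) scheme popularized by Have I Been Pwned. This is a hedged sketch of that idea, not Google’s actual implementation:

```python
import hashlib

def hash_credential(username, password):
    """Hash the credential pair; only a hash derivative ever leaves the client."""
    return hashlib.sha256(f"{username}:{password}".encode()).hexdigest().upper()

def check_breached(username, password, query_server):
    """k-anonymity check: send only the first 5 hex chars of the hash,
    receive every breached suffix in that bucket, and compare locally,
    so the server never learns the full credential."""
    digest = hash_credential(username, password)
    prefix, suffix = digest[:5], digest[5:]
    return suffix in query_server(prefix)
```

The server sees only a 5-character prefix shared by thousands of unrelated hashes, so it can’t tell which credential was actually checked — the same broad goal Google’s “blinding and private information retrieval” serves, via heavier cryptography.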

Download: Password Checkup for Google Chrome

Google’s Version of Have I Been Pwned

Password Checkup is essentially Google’s version of Have I Been Pwned, but in the form of a Chrome extension. And with the monster data leak of January 2019 containing hundreds of millions of logins, this is timely. That is if you trust Google with your data.

Image Credit: Marco Verch/Flickr

Read the full article: Google’s Password Checkup Keeps You Safe From Hackers



Fabula AI is using social spread to spot ‘fake news’


UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — working in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”
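As a toy illustration of that epidemic analogy — emphatically not Fabula’s model — a simple susceptible-infected cascade on a random follower graph shows how a story’s reach grows hop by hop through a network; all the parameter names and values here are made up for illustration:

```python
import random

def simulate_spread(n_users=1000, avg_degree=8, p_share=0.15,
                    steps=10, seed=1):
    """Toy SI-style cascade: a story starts with one sharer and, each
    step, every new sharer passes it to each of their followers with
    probability p_share. Returns cumulative users reached per step."""
    rng = random.Random(seed)
    # random directed follower graph: user -> people who see their shares
    followers = {u: rng.sample(range(n_users), avg_degree)
                 for u in range(n_users)}
    reached = {0}            # patient zero
    frontier = {0}
    history = [len(reached)]
    for _ in range(steps):
        newly_reached = set()
        for u in frontier:
            for f in followers[u]:
                if f not in reached and rng.random() < p_share:
                    newly_reached.add(f)
        reached |= newly_reached
        frontier = newly_reached
        history.append(len(reached))
    return history
```

Varying `p_share` (how readily users re-share) and the graph’s connectivity produces visibly different cascade shapes — which is the kind of structural signature, vastly enriched with user and interaction features, that Fabula says its classifier learns from.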

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on Twitter data sub-sets. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of the year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
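For reference, ROC AUC can be computed directly from its rank-based (Mann-Whitney) definition: the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. This minimal sketch is the generic metric, not Fabula’s code:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank formulation: fraction of (positive, negative)
    pairs where the positive is scored higher (ties count half)."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 means the classifier is no better than chance at ranking fakes above genuine stories, while 1.0 means it ranks every fake above every real one — so 0.93 is a threshold-independent ranking quality, not a plain “93 percent of items labelled correctly”.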

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset Fabula relied on true/fake labels attached to news stories by third party fact checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much of the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model, while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook — which has some 2.3BN+ users, uploading what could be trillions of pieces of content daily — even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests, it could be used in conjunction with other approaches, such as content analysis, and thus function as another string to a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used, he says, it could even do away with the need for independent third party fact-checking organizations altogether, because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar — which led to its own platform becoming a megaphone for terrible ethnic violence — were very clearly exacerbated by the fact it did not have enough reviewers who were able to understand (the many) local languages and dialects spoken in the country.

So if Fabula’s language-agnostic propagation and user focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human brains which lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a subset of users as ‘junk spreaders’, the next obvious question is how they would treat such people.

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between, say, a lack of education and a propensity to spread disinformation, just as there can be a link between poverty and education? What then? Wouldn’t algorithmic content down-weighting risk exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding, plus roughly another $500,000 in total from European Research Council grants and academic grants from tech giants Amazon, Google and Facebook, awarded via open research competitions.

(Bronstein confirms the three companies have no active involvement in the business, though doubtless Fabula is hoping to turn them into customers for its API down the line. He says he can’t comment on any discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”
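To illustrate the kind of early-window propagation signal being described — cascade size and the reach of "key nodes" within the first few hours — here is a hypothetical Python sketch. The feature set and field names are assumptions for the example, not Fabula's actual model inputs:

```python
# Illustrative sketch (not Fabula's feature set): a few simple propagation
# features from a retweet cascade, restricted to the first N hours after a
# story is seeded onto the network.
from datetime import datetime, timedelta

def cascade_features(shares, seeded_at, window_hours=4):
    """shares: list of dicts with 'time' (datetime) and 'followers' (int)."""
    cutoff = seeded_at + timedelta(hours=window_hours)
    early = [s for s in shares if s["time"] <= cutoff]
    if not early:
        return {"size": 0, "max_followers": 0, "mean_followers": 0.0}
    followers = [s["followers"] for s in early]
    return {
        "size": len(early),               # how far the cascade has spread so far
        "max_followers": max(followers),  # biggest "key node" reached yet
        "mean_followers": sum(followers) / len(followers),
    }

seed = datetime(2019, 2, 7, 9, 0)
shares = [
    {"time": seed + timedelta(hours=1), "followers": 120},
    {"time": seed + timedelta(hours=3), "followers": 50_000},  # influential node
    {"time": seed + timedelta(hours=8), "followers": 300},     # outside window
]
print(cascade_features(shares, seed))
```

A real classifier would feed far richer graph features into a model, but the time-windowing is the crux: the earlier the window closes, the faster a flag can be raised, at the cost of seeing less of the cascade.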

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team has mostly used U.S. political news to train its initial classifier. So some cultural variation in how people spread and react to nonsense online remains at least a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles, bouncing their own opinions back at them, Fabula’s analysis of the social media landscape suggests such bubbles do exist, albeit not encasing every Internet user.
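The polarization Bronstein describes can be quantified crudely as the fraction of retweet edges that cross between communities: the lower the fraction, the stronger the echo-chamber effect. A toy Python sketch, with invented user labels and edges:

```python
# Toy sketch of the "two communities" observation: measure how often a
# retweet crosses community lines. Labels and edges below are invented.

def cross_community_fraction(edges, community):
    """edges: (retweeter, author) pairs; community: user -> group label."""
    crossing = sum(1 for u, v in edges if community[u] != community[v])
    return crossing / len(edges)

community = {"a": "real", "b": "real", "c": "fake", "d": "fake"}
edges = [("a", "b"), ("b", "a"), ("c", "d"), ("d", "c"), ("a", "c")]
print(cross_community_fraction(edges, community))  # 0.2 — mostly within-group
```

In Fabula's observed data the analogous cross-community rate would be low: users who mostly share real stories rarely retweet users who mostly share fake ones, and vice versa.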

Bronstein says the next step for the startup is to scale its prototype to handle multiple requests so it can get the API to market in 2019 — and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access maybe with some commercial partners to test the API but eventually we would like to make it useable by multiple people from different businesses,” he says. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”

Facebook’s head of comms hits the road after 8 years at the company


Facebook head of communications Caryn Marooney is leaving for greener pastures, she announced today, on Facebook of course. She joins the growing number of executives and high-level employees departing the company during and after what may be its toughest year.

“I spent a lot of time over the winter holiday reflecting, and with the New Year, and after 8 years at Facebook, I’ve decided to step down as leader of the communications group,” Marooney wrote. “I’ve decided it’s time to get back to my roots: going deep in tech and product.”

She thanked CEO Mark Zuckerberg and COO Sheryl Sandberg, with whom she worked closely. The former commented to thank Marooney “for the dedication and brilliance you have brought to Facebook over the years.”

Certainly she saw Facebook during a period of intense growth and transition, though arguably the company’s entire history has been marked by those traits. But the Facebook of 2011 was markedly smaller and less complex — operationally, ethically and legally — so going from that stage to the present must have been quite a ride.

Marooney is just the latest in what seems like a constant stream of high-profile departures over the last year.

Obviously in a large company there’s going to be turnover. But an average of one a month seems like a lot.

There’s no indication Marooney left because of any acute cause other than wanting to move on to the next thing. It’s just that a lot of people seem to be doing it at the same time.
