11 July 2018


Facebook independent research commission ‘Social Science One’ will share a petabyte of user data


Back in April, Facebook announced that it would be working with a group of academics to establish an independent research commission to look into issues of social and political significance using the company’s own extensive data collection. That commission just came out of stealth; it’s called Social Science One, and its first project will have researchers analyzing about a petabyte’s worth of sharing data.

The way the commission works is basically that a group of academics is created and given full access to the processes and datasets that Facebook could potentially provide. They identify and help design interesting sets based on their experience as researchers themselves, then document them publicly — for instance, “this dataset consists of 10 million status updates taken during the week of the Brexit vote, structured in such and such a way.”

This documentation describing the set doubles as a “request for proposals” from the research community. Other researchers interested in the data propose analyses or experiments, which are evaluated by the commission. Successful proposals are granted access to the data, funding, and other privileges according to their merit. Resulting papers will be peer reviewed with help from the Social Science Research Council, and can be published without being approved (or even seen) by Facebook.

“The data collected by private companies has vast potential to help social scientists understand and solve society’s greatest challenges. But until now that data has typically been unavailable for academic research,” said Social Science One co-founder, Harvard’s Gary King, in a blog post announcing the initiative. “Social Science One has established an ethical structure for marshaling privacy preserving industry data for the greater social good while ensuring full academic publishing freedom.”

If you’re curious about the specifics of the partnership, it’s actually been described in a paper of its own, available here.

The first dataset is a juicy one: “almost all” public URLs shared and clicked by Facebook users globally, accompanied by a host of useful metadata.

It will contain “on the order of 2 million unique URLs shared in 300 million posts, per week,” reads a document describing the set. “We estimate that the data will contain on the order of 30 billion rows, translating to an effective raw size on the order of a petabyte.”

The metadata includes country, user age, device and so on, but also dozens of other items, such as “ideological affiliation bucket,” the proportion of friends vs. non-friends who viewed a post, feed position, the number of total shares, clicks, likes, hearts, flags… there’s going to be quite a lot to sort through. Naturally all this is carefully pruned to protect user privacy — this is a proper research dataset, not a Cambridge Analytica-style catch-all siphoned from the service.

In a call accompanying the announcement, King explained that the commission had much more data coming down the pipeline, with a focus on disinformation, polarization, election integrity, political advertising, and civic engagement.

“It really does get at some of the fundamental questions of social media and democracy,” King said on the call.

The other sets are in various stages of completeness or permission: post-election survey participants in Mexico and elsewhere are being asked if their responses can be connected with their Facebook profiles; the political ad archive will be formally made available; they’re working on something with CrowdTangle; there are various partnerships with other researchers and institutions around the world.

A “continuous feed of all public posts on Facebook and Instagram” and “a large random sample of Facebook newsfeeds” are also under consideration, probably encountering serious scrutiny and caveats from the company.

Of course quality research must be paid for, and it would be irresponsible not to note that Social Science One is funded not by Facebook but by a number of foundations: the Laura and John Arnold Foundation, The Democracy Fund, The William and Flora Hewlett Foundation, The John S. and James L. Knight Foundation, The Charles Koch Foundation, Omidyar Network’s Tech and Society Solutions Lab, and The Alfred P. Sloan Foundation.

You can keep up with the organization’s work here; it really is a promising endeavor and will almost certainly produce some interesting science — though not for some time. We’ll keep an eye out for any research emerging from the partnership.




Hold for the drop: Twitter to purge locked accounts from follower metrics


Twitter is making a major change aimed at cleaning up the spammy legacy of its platform.

This week it will globally purge accounts it has previously locked (i.e. after suspecting them of being spammy) — by removing the accounts from users’ follower metrics.

Which in plain language means Twitter users with lots of followers are likely to see their follower counts take a noticeable hit in the coming days. So hold tight for the drop.

Late last month Twitter flagged smaller changes to follower counts, also as part of a series of platform-purging anti-spam measures — warning users they might see their counts fluctuate more now that counts are displayed in near real-time (a change intended to stop spambots and follow scams from artificially inflating account metrics).

But the global purge of locked accounts from user account metrics looks like it’s going to be a rather bigger deal, putting some major dents in certain high profile users’ follower counts — and some major dents in celeb egos.

Hence Twitter has blogged again. “Follower counts are a visible feature, and we want everyone to have confidence that the numbers are meaningful and accurate,” writes Twitter’s Vijaya Gadde, legal, policy and trust & safety lead, flagging the latest change.

It will certainly be interesting to see whether the change substantially dents the follower counts of high-profile users — such as Katy Perry (109,609,073 Twitter followers at the time of writing); Donald Trump (53,379,873); Taylor Swift (85,566,010); Elon Musk (22,329,075); and Beyoncé (15,303,191), to name a few of the platform’s most followed users.

Check back in a week to see how their follower counts look.

“Most people will see a change of four followers or fewer; others with larger follower counts will experience a more significant drop,” warns Gadde, adding: “We understand this may be hard for some, but we believe accuracy and transparency make Twitter a more trusted service for public conversation.”

Twitter is also warning that while “the most significant changes” will happen in the next few days, users’ follower counts “may continue to change more regularly as part of our ongoing work to proactively identify and challenge problematic accounts”.

The company says it locks accounts if it detects sudden changes in account behavior — such as tweeting “a large volume of unsolicited replies or mentions, Tweeting misleading links, or if a large number of accounts block the account after mentioning them” — which therefore may indicate an account has been hacked/taken over by a spambot.

It says it may also lock accounts if it sees email and password combinations from other services posted online and believes that information could put the security of an account at risk.

After locking an account Twitter contacts the owner to try to confirm they still have control of it. If the owner does not reply to confirm, the account stays locked — and will soon also be removed from follower counts globally.

Twitter emphasizes that locked accounts already cannot Tweet, like or Retweet, and are not served ads. But removing them from follower counts is an important additional step that it’s great to see Twitter making — albeit at long last.

Twitter also specifies that locked accounts that have not reset their password in more than one month were already excluded from its MAU and DAU counts — so today it reiterates the CFO’s recent message that this change won’t affect its own platform usage metrics.

The company has been going through what — this time — looks to be a serious house-cleaning process for some months now, after years and years of criticism for failing to tackle rampant spam and abuse on its platform.

In March, Twitter CEO Jack Dorsey also put out a call for ideas to help it capture, measure and evaluate healthy interactions on its platform and the health of public conversations generally — saying: “Ultimately we want to have a measurement of how it affects the broader society and public health, but also individual health, as well.”



5 Ways to DIY Hack Your Old Nintendo Devices Into Something New



Whether you’re a veteran gamer, or simply love Nintendo hardware, there’s a good chance you’ve got some old bits and pieces knocking around. You could give those old consoles away… or you could hack them.

Here are five ways you can hack old Nintendo hardware into something new and useful.

1. Wii Homebrew

We’ll start with the easiest option: turning an old Nintendo Wii into a media center, capable of running media files on your network, retro games, and even DVDs.

While early versions of this hack required some messing around, the more recent LetterBomb utility makes things far simpler. By applying this hack, you get access to the Wii Homebrew Channel, home to a vast library of free software. Most of this is games, either home-made or ported from other platforms (Doom, for instance).

All you need to get this hack working—and turn your Nintendo Wii into a system that meets its full potential—is a suitable SD card.

Check out our full guide to hacking your Nintendo Wii. Note that this will void the device warranty, which shouldn’t be a problem because most Wii warranties ran out years ago.

2. Wiimote

It’s not just the Nintendo Wii console you can hack. The groundbreaking Wiimote can also be reused/misused. Even if you don’t have a Wii, you can pick up these old controllers (and, preferably, the sensor) for pennies on eBay, thrift stores, yard sales, etc.

But how can you reuse a Nintendo Wiimote?

We’ve previously listed several Wiimote hacks, which include pairing one with an Arduino to steer a radio controlled car (our Arduino starter guide should help), using a Wiimote as a PC controller, as an interactive whiteboard, for finger tracking, and even desktop VR.

Note that not only does the Wiimote have an infrared transmitter and receiver, it is also Bluetooth compatible. Having the Nunchuk peripheral will make a few of these Wiimote hacks (and others) easier to complete.

3. Wii Fit Board

One of the most popular peripherals shipped with early Nintendo Wiis was the Wii Balance Board. The device connects over Bluetooth and has four pressure sensors designed to measure the user’s center of balance. Most commonly, the device was used with the Wii Fit game, and is capable of measuring body position (when paired with a Wiimote) and weight.

Despite support for over 150 games, the chances are that your Wii Fit Board is stashed under the bed. So, how can you hack it?

Option 1: Check Your Wii8 (Weight)

The first option is to reuse the Wii Balance Board in the manner it was intended, as a set of scales. Thanks to four AA batteries, the board (which again you can pick up for just a few dollars) should run for 60 hours. You should get quite a bit of use out of it!

Developed by Stavros Korokithakis, the DIY internet-enabled bathroom scale runs on Ubuntu and can record your weight. It uses the software in this GitHub repository.

Option 2: How Many Beers?

Alternatively, you might pair a Raspberry Pi with the Wii Balance Board and use it to tell you how many beers are left in your fridge.

Admittedly, this is probably not in the spirit of the device’s initial use, but hey, it could be useful during the summer months. Follow the video above from YouTube channel John’s DIY Playground for the full tutorial.

4. Nintendo SNES Classic Mini (More Games)

Admittedly not an original Nintendo SNES console, the SNES Classic Mini is designed specifically for fans of retro gaming. Whether you owned one the first time around or you have a love of old games, these devices ship with only a limited selection of games.

But what if you wanted to add some of your favorites?

The answer is covered in this video, which explains how to use the hakchi2 hack (available from GitHub) to import your ROM files into the SNES Classic Mini. Compatible ROMs are listed in this spreadsheet.

Remember to stay on the right side of the law, and only use ROM files for games that you already own.

5. Lego Dimensions Toy Pad (Wii U)

Admittedly not specifically Nintendo-built, the Lego Dimensions Toy Pad can be connected to a Nintendo Wii U. When used with the corresponding game, NFC tags on Lego figures (and other NFC-enabled toys) trigger LEDs (which also depend on in-game events). This pad is basically a USB device with a triple NFC reader and some LEDs built in, and bits of Lego molded to the case.

The Toy Pad can be used in various different ways. Here it is connected to a Lego Mindstorms EV3 computer.

Further details on this particular approach can be found at the EV3Dev website. This site is dedicated to exploring the EV3 computer, so you’re likely to find a lot of interest if you’re a big Lego fan.

Meanwhile, if you don’t own an EV3 computer, other options for hacking the Lego Dimensions Toy Pad exist. For instance, the base color changer app runs on Windows and lets you adjust the colors displayed by the LEDs.

Start Hacking Your Nintendo Hardware!

If you’re short of a project, and have some old gear you want to reuse, then old Nintendo hardware is a good place to start. You could soon end up with a speak-your-weight fridge, or an NFC-triggered LED notification system…

The possibilities are endless. If you’re not sure how to start with DIY, however, try these easy DIY activities and projects to ease you in.




Timehop admits that additional personal data was compromised in breach


Timehop is admitting that additional personal information was compromised in a data breach on July 4.

The company first acknowledged the breach on Sunday, saying that users’ names, email addresses and phone numbers had been compromised. Today it said that additional information, including date of birth and gender, was also taken.

To understand what happened, and what Timehop is doing to fix things, I spoke to CEO Matt Raoul, COO Rick Webb and the security consultant that the company hired to manage its response. (The security consultant agreed to be interviewed on-the-record on the condition that they not be named.)

To be clear, Timehop isn’t saying that there was a separate breach of its data. Instead, the team has discovered that more data was taken in the already-announced incident.

Why didn’t they figure that out sooner? In an updated version of its report (which was also emailed to customers), the company put it simply: “Because we messed up.” It goes on:

In our enthusiasm to disclose all we knew, we quite simply made our announcement before we knew everything. With the benefit of staff who had been vacationing and unavailable during the first four days of the investigation, and a new senior engineering employee, as we examined the more comprehensive audit on Monday of the actual database tables that were stolen it became clear that there was more information in the tables than we had originally disclosed. This was precisely why we had stated repeatedly that the investigation was continuing and that we would update with more information as soon as it became available.

In both the email and my interviews, the Timehop team noted that the service does not have any financial information from users, nor does it perform the kinds of detailed behavioral tracking that you might expect from an ad-supported service. The team also emphasized that users’ “memories” — namely, the older social media posts that people use Timehop to rediscover — were not compromised.

How can they be sure, particularly since some of the compromised data was overlooked in the initial announcement? Well, the breach affected one specific database, while the memories are stored separately.

“That stuff is what we cared about, that stuff was protected,” Webb said. The challenge is, “We have to make a mental note to think about everything else.”


The breach occurred when someone accessed a database in Timehop’s cloud infrastructure that was not protected by two-factor authentication, though Raoul insisted that the company was already using two-factor quite broadly — it’s just that this “fell through the cracks.”

It’s also worth noting that while 21 million accounts were affected, Timehop had varying amounts of data about different users. For example, it says that 18.6 million email addresses were compromised (down from the “up to 21 million” addresses first reported), compared to 15.5 million dates of birth. In total, the company says 3.3 million records were compromised that included names, email addresses, phone numbers and DOBs.

None of those things may seem terribly sensitive (anyone with a copy of my business card and access to Google could probably get that information about me), but the security consultant acknowledged that in the “very, very small percentage” of cases where the records included full names, email addresses, phone numbers and DOBs, “identity theft becomes more likely,” and he suggested that users take standard steps to protect themselves, including password-protecting their phones.

Meanwhile, the company says that it worked with the social media platforms to detect activity that used the compromised authorization tokens, and it has not found anything suspicious. At this point, all of the tokens have been deauthorized (requiring users to re-authorize all of their accounts), so it shouldn’t be an ongoing issue.

As for other steps Timehop is taking to prevent future breaches, the security consultant told me the company is already in the process of ensuring that two-factor authentication is adopted across the board and encrypting its databases, as well as improving the process of deploying code to address security issues.

In addition, the company has shared the IP addresses used in the attack with law enforcement, and it will be sharing its “indicators of compromise” with partners in the security community.


Everyone acknowledged that Timehop made real mistakes, both in its security and in the initial communication with customers. (As the consultant put it, “They made a schoolboy mistake by not doing two-factor authentication.”) However, they also suggested that their response was guided, in part, by the accelerated disclosure timeline required by Europe’s GDPR regulations.

The security consultant told me, “We haven’t had the time fine-toothed comb kinds of things we normally want to do,” like an in-depth forensic analysis. Those things will happen, he said — but thanks to GDPR, the company needed to make the announcement before it had all the information.

And overall, the consultant said he’s been impressed by Timehop’s response.

“I think it really says a lot to their integrity that they decided to go fully public the second they knew it was a breach,” he said. “I want to point out these guys responded within 24 hours with a full-on incident response and secured their environments. That’s better than so many companies.”



Everything You Need to Know About Python and Object-Relational Maps



You may have heard of object-relational mapping (ORM). You may have even used one, but what exactly are they? And how do you use them in Python?

Here’s everything you need to know about ORMs and Python.

What Is an ORM?

Object-relational mapping (ORM) is a programming technique for accessing a database. It exposes your database as a series of objects: instead of writing SQL commands to insert or retrieve data, you use attributes and methods attached to those objects.

It may sound complex and unnecessary, but an ORM can save you a lot of time, and help to control access to your database.

Here’s an example. Say that whenever you insert a password into your database you want to hash it, as explained in website password security. This isn’t a problem for simple use cases—you do the calculation before inserting. But what if you need to insert a record in many places in the code? What if another programmer inserts into your table, and you don’t know about it?

By using an ORM, you can write code to ensure that whenever and wherever any row or field in your database is accessed, your other, custom code is executed first.

This also acts as a “single source of truth”: if you want to change a custom calculation, you only have to change it in one place, not several. It’s possible to apply many of these principles with object-oriented programming (OOP) in Python, but ORMs work in tandem with OOP principles to control access to a database.
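The “single source of truth” idea can be sketched in plain Python with a property; SQLAlchemy offers hooks (such as attribute validators) that apply the same principle at the ORM layer. This is a minimal illustration rather than anything from the article — the User class and the SHA-256 choice are assumptions made for the example:

```python
import hashlib

class User:
    """Every write to .password funnels through one setter,
    so the hashing rule lives in exactly one place."""

    def __init__(self, name, password):
        self.name = name
        self.password = password  # routed through the setter below

    @property
    def password(self):
        return self._password_hash

    @password.setter
    def password(self, plaintext):
        # The only place hashing happens, no matter who sets the field
        self._password_hash = hashlib.sha256(plaintext.encode()).hexdigest()
```

An ORM validator works the same way, but it also guards writes made through the model from anywhere else in the codebase.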

There are certain things to watch out for when using an ORM, and there are circumstances where you may not want to use one, but they are generally considered to be a good thing to have, especially in a large codebase.

ORMs in Python Using SQLAlchemy

Like many tasks in Python, it’s quicker and easier to import a module than to write your own. Of course, it’s possible to write your own ORM, but why reinvent the wheel?

The following examples all use SQLAlchemy, a popular Python ORM, but many of the principles apply regardless of the implementation.

Setting Up Python for SQLAlchemy

Before jumping right in, you’re going to need to set up your machine for Python development with SQLAlchemy.

You’ll need Python 3.6 to follow along with these examples. While older versions will work, the code below will need some modification before it will run. Not sure about the differences? Our Python FAQ covers them all.

Before coding, you should set up a Python environment, which will prevent problems with other imported Python packages.

Make sure you have pip, the Python package manager, installed; it comes with most modern versions of Python.

Once you’re ready to go, you can begin by getting SQLAlchemy ready. From within your Python environment in the command line, install SQLAlchemy with the pip install command:

pip install SQLAlchemy==1.2.9

Here, 1.2.9 is the version number. You can leave this off to get the latest package, but it’s good practice to be specific—you don’t know when a new release may break your current code.


Now you’re ready to start coding. You may need to prepare your database to accept a Python connection, but the following examples all use an SQLite database created in memory.

Models in SQLAlchemy

One of the key components of an ORM is a model. This is a Python class which outlines what a table should look like, and how it should work. It’s the ORM version of the CREATE TABLE statement in SQL. You need a model for each table in your database.

Open up your favorite text editor or IDE, and create a new file called test.py. Enter this starter code, save the file, and run it:

from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

engine = create_engine('sqlite://') # Create the database in memory
Base.metadata.create_all(engine) # Create all the tables in the database

This code does several things. The imports are necessary so that Python understands where to find the SQLAlchemy modules it needs. Your models will use the declarative_base later on, and it configures any new models to work as expected.

The create_engine method creates a new connection to your database. If you have a database already, you’ll need to change sqlite:// to your database URI. As it is, this code will create a new database in memory only. The database is destroyed once your code finishes executing.

Finally, the create_all method creates all the tables defined in your models in your database. As you haven’t defined any models yet, nothing will happen. Go ahead and run this code to ensure you don’t have any problems or typos.

Let’s make a model. Add another import to the top of your file:

from sqlalchemy import Column, Integer, String

This imports the Column, Integer, and String modules from SQLAlchemy. They define how the database tables, fields, columns, and datatypes work.

Underneath the declarative_base, create your model class:

class Cars(Base):
  __tablename__ = 'cars'
  id = Column(Integer, primary_key=True)
  make = Column(String(50), nullable=False)
  color = Column(String(50), nullable=False)

This simple example uses cars, but your tables may contain any data.

Each class must inherit from Base. Your database table name is defined in __tablename__. This should match the class name, but that’s just a recommendation, and nothing will break if they don’t.

Finally, each column is defined as a Python variable within the class. Different data types are used, and the primary_key attribute tells SQLAlchemy to create the id column as a primary key.

Go ahead and add one last import, this time for the ForeignKey module. Add this alongside your Column import:

from sqlalchemy import Column, ForeignKey, Integer, String

Now create a second model class. This class is called CarOwners, and stores owner details of specific cars stored in the Cars table. It uses the relationship function, which needs one more import from sqlalchemy.orm:

from sqlalchemy.orm import relationship

class CarOwners(Base):
  __tablename__ = 'carowners'
  id = Column(Integer, primary_key=True)
  name = Column(String(50), nullable=False)
  age = Column(Integer, nullable=False)
  car_id = Column(Integer, ForeignKey('cars.id'))

  car = relationship(Cars)

There are several new attributes introduced here. The car_id field is defined as a foreign key, linked to the id in the cars table. Notice how the lower-case table name is used, instead of the upper-case class name.

Finally, an attribute of car is defined as a relationship. This allows your model to access the Cars table through this variable. This is demonstrated below.

If you run this code now, you’ll see that nothing happens. This is because you haven’t told it to do anything noticeable yet.

Objects in SQLAlchemy

Now that your models are created, you can start to access the objects, and read and write data. It’s a good idea to place your logic into its own class and file, but for now, it can stay alongside the models.

Writing Data

In this example, you need to insert some data into the database before you can read it. If you’re using an existing database, you may have data already. Either way, it’s still very useful to know how to insert data.

You may be used to writing INSERT statements in SQL. SQLAlchemy handles this for you. Here’s how to insert one row into the Cars model. Start with a new import for sessionmaker:

from sqlalchemy.orm import sessionmaker

This is needed to create the session and DBSession objects, which are used to read and write data:

DBSession = sessionmaker(bind=engine)
session = DBSession()

Now put this underneath your create_all statement:

car1 = Cars(
  make="Ford",
  color="silver"
)
session.add(car1)
session.commit()

Let’s break down that code. The variable car1 is defined as an object based on the Cars model. Its make and color are set as parameters. This is like saying “make me a car, but don’t write it to the database yet”. This car exists in memory but is waiting to be written.

Add the car to the session with session.add, and then write it to the database with session.commit.
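If you have several rows to stage at once, session.add_all accepts a list. The sketch below is self-contained so it can run on its own; note that it assumes SQLAlchemy 1.4 or newer, where declarative_base is importable from sqlalchemy.orm (on 1.2, use the sqlalchemy.ext.declarative import shown earlier):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Cars(Base):
    __tablename__ = 'cars'
    id = Column(Integer, primary_key=True)
    make = Column(String(50), nullable=False)
    color = Column(String(50), nullable=False)

engine = create_engine('sqlite://')   # in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# add_all stages every object in one call; nothing touches the
# database until commit flushes the pending inserts together
session.add_all([
    Cars(make="Ford", color="silver"),
    Cars(make="Fiat", color="red"),
])
session.commit()
```

Pending objects that haven’t been committed yet can also be discarded with session.rollback().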

Now let’s add an owner:

owner1 = CarOwners(
  name="Joe",
  age=99,
  car_id=(car1.id)
)
session.add(owner1)
session.commit()

This code is almost identical to the previous insert for the Cars model. The main difference is that car_id is a foreign key, so it needs a row id that exists in the other table. This is accessed through the car1.id property.

You don’t have to query the database or return any ids, as SQLAlchemy handles this for you (as long as you commit the data first).

Reading Data

Once you have written some data, you can begin to read it back. Here’s how to query the Cars and CarOwners tables:

result = session.query(Cars).all()

It is that simple. By using the query method found in the session, you specify the model, and then use the all method to retrieve all the results. If you know there will only be one result, then you can use the first method:

result = session.query(Cars).first()

Once you’ve queried the model, and stored your returned results in a variable, you can access the data through the object:

print(result[0].color)

This prints the color “silver”, as that record is the first row. You can loop over the result object if you want to.


As you defined the relationship in your model, it’s possible to access data in related tables without specifying a join:

result = session.query(CarOwners).all()
print(result[0].name)
print(result[0].car.color)


This works because your model contains details of your table structure, and the car attribute was defined as a link to the cars table.
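Queries can also be narrowed before they run: filter_by matches the model’s own columns, and join follows the foreign key so you can filter one table by another’s columns. A self-contained sketch (again assuming SQLAlchemy 1.4+ for the sqlalchemy.orm imports; the sample rows are invented for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Cars(Base):
    __tablename__ = 'cars'
    id = Column(Integer, primary_key=True)
    make = Column(String(50), nullable=False)
    color = Column(String(50), nullable=False)

class CarOwners(Base):
    __tablename__ = 'carowners'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    age = Column(Integer, nullable=False)
    car_id = Column(Integer, ForeignKey('cars.id'))
    car = relationship(Cars)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Insert some sample data to query against
silver_car = Cars(make="Ford", color="silver")
red_car = Cars(make="Fiat", color="red")
session.add_all([silver_car, red_car])
session.commit()

session.add_all([
    CarOwners(name="Joe", age=99, car_id=silver_car.id),
    CarOwners(name="Ann", age=42, car_id=red_car.id),
])
session.commit()

# join() follows the foreign key, so the filter can use a Cars column
silver_owners = (session.query(CarOwners)
                 .join(Cars)
                 .filter(Cars.color == "silver")
                 .all())
```

Here, silver_owners contains only Joe, and each returned object still exposes its related row through the car attribute.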

What’s Not to Like About ORMs?

This tutorial only covered the very basics, but once you’ve got the hang of those, you can move on to advanced topics. There are some potential downsides to ORMs:

  • You have to write your model before any queries can run.
  • It’s another new syntax to learn.
  • It may be too complex for simple needs.
  • You must have a good database design to begin with.

These issues aren’t a big problem on their own, but they are things to watch out for. If you’re working with an existing database, you may get caught out.

If you’re not convinced an ORM is the right tool for you, then make sure you read about the important SQL commands programmers should know.




Opera adds a crypto wallet to its mobile browser


The Opera Android browser will soon be able to hold your cryptocurrencies. The system, now in beta, lets you store crypto and ERC20 tokens in your browser, send and receive crypto on the fly, and secures your wallet with your phone’s biometric security or passcode.

You can sign up to try the beta here.

The feature, called Crypto Wallet, “makes Opera the first major browser to introduce a built-in crypto wallet” according to the company. The feature could allow for micropayments in the browser and paves the way for similar features in other browsers.

From the release:

We believe the web of today will be the interface to the decentralized web of tomorrow. This is why we have chosen to use our browser to bridge the gap. We think that with a built-in crypto wallet, the browser has the potential to renew and extend its important role as a tool to access information, make transactions online and manage users’ online identity in a way that gives them more control.

In addition to being able to send money from wallet to wallet and interact with Dapps, Opera now supports online payments with cryptocurrency where merchant support exists. Users who choose to pay for their order with cryptocurrency at Coinbase Commerce-enabled merchants will be presented with a payment request dialog asking for their signature. The payment will then be signed and transmitted directly from the browser.

While it’s still early days for this sort of technology, it’s interesting to see a mainstream browser entering the space. Don’t hold your breath on seeing crypto in Safari or Edge, but Chrome and other “open source” browsers could easily add these features given enough demand.


Read Full Article

Summer road trip tech essentials and extras


Editor’s note: This post was done in partnership with Wirecutter. When readers choose to buy Wirecutter’s independently chosen editorial picks, Wirecutter and TechCrunch may earn affiliate commissions.

Gearing up for a pleasant road trip entails more than picking an exciting destination. The mode of transportation, and what you’re able to do while traveling, sometimes makes or breaks hours spent on the road.

Whether you’re taking an older car on a short solo excursion, or piling in with family and friends for a cross-country drive, your road trip gear and setup can add to the experience. We’ve gathered some of our favorite picks that cover the basics.

iPad Headrest Mount: Arkon Center Extension Car Headrest Tablet Mount

If getting comfortable in a backseat and watching a movie sounds like an ideal way to pass time, do so with the help of a tablet mount. The Arkon Center Extension Car Headrest Tablet Mount securely holds iPads and most 9- to 12-inch tablets.

It attaches to the metal rods of the front seat’s headrest, and its holster sits on an extendable arm that can be positioned so one or more backseat passengers get a clear view.

Photo: Rik Paul

Car GPS: Garmin DriveSmart 51 LMT-S

Before the wheels start rolling, knowing where you’re going and how to get there is likely the first order of business. Using a standalone car GPS means your smartphone doesn’t have to be held hostage and you don’t have to rely on a live data connection.

The Garmin DriveSmart 51 LMT-S has maps and a database of points of interest built in, so navigation — even off the beaten path — is straightforward. It works via Bluetooth, can display alerts or searches from a smartphone, and its maps are updated over Wi-Fi. During testing, its voice-control system was the simplest to use and its audible directions were the most precise.

You’ll like that its 5-inch touchscreen displays easy-to-follow lanes and road signs, along with nearby stops and speed limits.

Bluetooth Kit: Anker SoundSync Drive

There’s no fun in a road trip that doesn’t include your favorite podcasts and music playlists. Older cars without built-in Bluetooth pose a problem when it comes to streaming curated entertainment from a smartphone.

Bypass installing a new stereo system and use an inexpensive Bluetooth kit instead. We recommend the Anker SoundSync Drive for cars with an aux-in port, as well as other options for different setups. The SoundSync Drive lets you listen to music and make hands-free calls.

It offers high-quality audio that’s on par with or better than the competitors we tested, and it has convenient track-control buttons. Keep it powered by plugging its USB-A charging cable into any car charger or USB power source.

Photo: Michael Hession

Car Mount: iOttie Easy One Touch 4 Air Vent Mount

For a car mount that won’t get in the way of other devices that have to be placed on a windshield or dashboard, we recommend the iOttie Easy One Touch 4 Air Vent Mount. It fits into an air vent and its grip — which is secured by long rubber-lined arms and a spring-loaded clamp — places it above similar models.

Its cradle holds firm, it can be placed on vents of all thicknesses and it’s easy to position. The Easy One Touch 4 Air Vent Mount’s build makes it easy to access, and it won’t weigh down vent slats.

Photo: Nick Guy 

USB Car Charger: RAVPower RP-VC006

The RAVPower RP-VC006 USB car charger is small but packs a punch (up to 2.4 amps) with two USB ports for powering smartphones or tablets. It isn’t difficult to insert or remove, and when it’s dark outside, its LED and white ports make it easy to locate.

We like that it’s compact and doesn’t stick out too far. The RAVPower RP-VC006 plugs into a 12-volt power jack and it’s capable of charging two devices — simultaneously and in little to no time. It comes with a lifetime warranty, and if you’re concerned about misplacing it or running out of juice, it’s cheap enough to buy a few.

This guide may have been updated by Wirecutter.



Read Full Article

You can now stream to your Sonos devices via AirPlay 2


Newer Sonos devices and “rooms” now appear as AirPlay 2-compatible devices, allowing you to stream audio to them from Apple devices. The feature is a long time coming for Sonos, which promised AirPlay 2 support last October.

You can stream to Sonos One, Sonos Beam, Playbase, and Play:5 speakers and ask Siri to play music on various speakers (“Hey Siri, play some hip-hop in the kitchen.”) The feature should roll out to current speakers today.

I tried a beta version and it worked as advertised. A set of speakers including a Beam and a Sub in my family room showed up as a single speaker and a Sonos One in the kitchen showed up as another. I was able to stream music and podcasts to either one.

Given the ease with which you can now stream to nearly every device from every device, it’s clear that whole-home audio is progressing rapidly. As we noted before, Sonos is facing tough competition, but little tricks like this one help it stay in the race.


Read Full Article

Pinterest is adding a way for users to collaborate on boards


Pinterest is trying to further tap its popularity as a place to plan events, this time adding ways for users to collaborate on boards that are baked directly into the app.

Group boards will have their own designated feed, where users can communicate with others collaborating on that board and get updates on new members or added pins. There are also the other typical social features you’d expect in an app these days, including @-mentions and liking comments. It’s another step toward getting people onto Pinterest and keeping them there as they plan events, and another way to make the platform stickier. It’s also a quality-of-life improvement Pinterest has needed for quite some time.

It’s those kinds of events — weddings, parties and others — that propelled Pinterest to become one of the larger social networks in the early 2010s. The company said late last year that it had more than 200 million monthly active users, which, while small compared to the likes of Instagram or Facebook, makes it a hub for a different kind of user behavior than you might find on those platforms. The majority of the content on Pinterest is high-resolution product imagery from businesses, which people search for or save as they plan future life events.

Pinterest has tried to position itself as one of the best ways to discover new ideas, whether that’s stumbling upon something in a primary feed or finding something through searching. Over time, it’s added more and more tools to try to get people to come back more regularly, and if it continues to improve those recommendation engines, it can continue to run that feedback loop and keep users more and more attached to the platform. Adding a sort of light social pressure from friends that are sharing ideas and looking for feedback is one way to do that, in addition to it generally being useful.

All that is good for its pitch to advertisers as well. Pinterest, in addition to trying to cater to that unique kind of user behavior, is also trying to sell itself to advertisers as a platform where they can reach potential customers through ways they wouldn’t be able to with primary advertising channels like Facebook or Google. By making the platform more sticky, it can go back to those advertisers and offer them better engagement metrics and show that users stick around and are paying closer attention to content on Pinterest, which can in turn drive that additional value to advertisers.


Read Full Article



Facebook under fresh political pressure as UK watchdog calls for “ethical pause” of ad ops


The UK’s privacy watchdog revealed yesterday that it intends to fine Facebook the maximum possible (£500k) under the country’s 1998 data protection regime for breaches related to the Cambridge Analytica data misuse scandal.

But that’s just the tip of the regulatory missiles now being directed at the platform and its ad-targeting methods — and indeed, at the wider big data economy’s corrosive undermining of individuals’ rights.

Alongside yesterday’s update on its investigation into the Facebook-Cambridge Analytica data scandal, the Information Commissioner’s Office (ICO) has published a policy report — entitled Democracy Disrupted? Personal information and political influence — in which it sets out a series of policy recommendations related to how personal information is used in modern political campaigns.

In the report it calls directly for an “ethical pause” around the use of microtargeting ad tools for political campaigning — to “allow the key players — government, parliament, regulators, political parties, online platforms and citizens — to reflect on their responsibilities in respect of the use of personal information in the era of big data before there is a greater expansion in the use of new technologies”.

The watchdog writes [emphasis ours]:

Rapid social and technological developments in the use of big data mean that there is limited knowledge of – or transparency around – the ‘behind the scenes’ data processing techniques (including algorithms, analysis, data matching and profiling) being used by organisations and businesses to micro-target individuals. What is clear is that these tools can have a significant impact on people’s privacy. It is important that there is greater and genuine transparency about the use of such techniques to ensure that people have control over their own data and that the law is upheld. When the purpose for using these techniques is related to the democratic process, the case for high standards of transparency is very strong.

Engagement with the electorate is vital to the democratic process; it is therefore understandable that political campaigns are exploring the potential of advanced data analysis tools to help win votes. The public have the right to expect that this takes place in accordance with the law as it relates to data protection and electronic marketing. Without a high level of transparency – and therefore trust amongst citizens that their data is being used appropriately – we are at risk of developing a system of voter surveillance by default. This could have a damaging long-term effect on the fabric of our democracy and political life.

It also flags a number of specific concerns attached to Facebook’s platform and its impact upon people’s rights and democratic processes — some of which are sparking fresh regulatory investigations into the company’s business practices.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” it writes. “Whilst these concerns about Facebook’s advertising model exist generally in relation to its commercial use, they are heightened when these tools are used for political campaigning. Facebook’s use of relevant interest categories for targeted advertising and its Partner Categories service are also cause for concern. Although the service has ceased in the EU, the ICO will be looking into both of these areas, and in the case of partner categories, commencing a new, broader investigation.”

The ICO says its discussions with Facebook for this report focused on “the level of transparency around how Facebook user data and third party data is being used to target users, and the controls available to users over the adverts they see”.

Among the concerns it raises about what it dubs Facebook’s “very complex” online targeting advertising model are [emphasis ours]:

Our investigation found significant fair-processing concerns both in terms of the information available to users about the sources of the data that are being used to determine what adverts they see and the nature of the profiling taking place. There were further concerns about the availability and transparency of the controls offered to users over what ads and messages they receive. The controls were difficult to find and were not intuitive to the user if they wanted to control the political advertising they received. Whilst users were informed that their data would be used for commercial advertising, it was not clear that political advertising would take place on the platform.

The ICO also found that despite a significant amount of privacy information and controls being made available, overall they did not effectively inform the users about the likely uses of their personal information. In particular, more explicit information should have been made available at the first layer of the privacy policy. The user tools available to block or remove ads were also complex and not clearly available to users from the core pages they would be accessing. The controls were also limited in relation to political advertising.

The company has been criticized for years for confusing and complex privacy controls. But during the investigation, the ICO says it was also not provided with “satisfactory information” from the company to understand the process it uses for determining what interest segments individuals are placed in for ad targeting purposes.

“Whilst Facebook confirmed that the content of users’ posts were not used to derive categories or target ads, it was difficult to understand how the different ‘signals’, as Facebook called them, built up to place individuals into categories,” it writes.

Similar complaints of foot-dragging responses to information requests related to political ads on its platform have also been directed at Facebook by a parliamentary committee that’s running an inquiry into fake news and online disinformation — and in April the chair of the committee accused Facebook of “a pattern of evasive behavior”.

So the ICO is not alone in feeling that Facebook’s responses to requests for specific information have lacked the specific information being sought. (CEO Mark Zuckerberg also annoyed the European Parliament with highly evasive responses to their highly detailed questions this Spring.)

Meanwhile, a European media investigation in May found that Facebook’s platform allows advertisers to target individuals based on interests related to sensitive categories such as political beliefs, sexuality and religion — which are categories that are marked out as sensitive information under regional data protection law, suggesting such targeting is legally problematic.

The investigation found that Facebook’s platform enables this type of ad targeting in the EU by making sensitive inferences about users — inferred interests including communism, social democrats, Hinduism and Christianity. And its defense against charges that what it’s doing breaks regional law is that inferred interests are not personal data.

However the ICO report sends a very chill wind rattling towards that fig leaf, noting “there is a concern that by placing users into categories, Facebook have been processing sensitive personal information – and, in particular, data about political opinions”.

It further writes [emphasis ours]:

Facebook made clear to the ICO that it does ‘not target advertising to EU users on the basis of sensitive personal data’… The ICO accepts that indicating a person is interested in a topic is not the same as formally placing them within a special personal information category. However, a risk clearly exists that advertisers will use core audience categories in a way that does seek to target individuals based on sensitive personal information. In the context of this investigation, the ICO is particularly concerned that such categories can be used for political advertising.

The ICO believes that this is part of a broader issue about the processing of personal information by online platforms in the use of targeted advertising; this goes beyond political advertising. It is clear from academic research conducted by the University of Madrid on this topic that a significant privacy risk can arise. For example, advertisers were using these categories to target individuals with the assumption that they are, for example, homosexual. Therefore, the effect was that individuals were being singled out and targeted on the basis of their sexuality. This is deeply concerning, and it is the ICO’s intention as a concerned authority under the GDPR to work via the one-stop-shop system with the Irish Data Protection Commission to see if there is scope to undertake a wider examination of online platforms’ use of special categories of data in their targeted advertising models.

So, essentially, the regulator is saying it will work with other EU data protection authorities to push for a wider, structural investigation of online ad targeting platforms which put users into categories based on inferred interests — and certainly where those platforms are allowing targeting against special categories of data (such as data related to racial or ethnic origin, political opinions, religious beliefs, health data, sexuality).

Another concern the ICO raises that’s specifically attached to Facebook’s business is transparency around its so-called “partner categories” service — an option for advertisers that allows them to use third party data (i.e. personal data collected by third party data brokers) to create custom audiences on its platform.

In March, ahead of a major update to the EU’s data protection framework, Facebook announced it would be “winding down” this service over the next six months.

But the ICO is going to investigate it anyway.

“A preliminary investigation of the service has raised significant concerns about transparency of use of the [partner categories] service for political advertising and wider concerns about the legal basis for the service, including Facebook’s claim that it is acting only as a processor for the third-party data providers,” it writes. “Facebook announced in March 2018 that it will be winding down this service over a six-month period, and we understand that it has already ceased in the EU. The ICO has also commenced a broader investigation into the service under the DPA 1998 (which will be concluded at a later date) as we believe it is in the public interest to do so.”

In conclusion on Facebook the regulator asserts the company has not been “sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign”.

“Individuals can opt out of particular interests, and that is likely to reduce the number of ads they receive on political issues, but it will not completely block them,” it points out. “These concerns about transparency lie at the core of our investigation. Whilst these concerns about Facebook’s advertising model exist in general terms in relation to its use in the commercial sphere, the concerns are heightened when these tools are used for political campaigning.”

The regulator also looked at political campaign use of three other online ad platforms — Google, Twitter and Snapchat — although Facebook gets the lion’s share of its attention in the report given the platform has also attracted the lion’s share of UK political parties’ digital spending. (“Figures from the Electoral Commission show that the political parties spent £3.2 million on direct Facebook advertising during the 2017 general election,” it notes. “This was up from £1.3 million during the 2015 general election. By contrast, the political parties spent £1 million on Google advertising.”)

The ICO is recommending that all online platforms which provide advertising services to political parties and campaigns should include experts within the sales support team who can provide political parties and campaigns with “specific advice on transparency and accountability in relation to how data is used to target users”.

“Social media companies have a responsibility to act as information fiduciaries, as citizens increasingly live their lives online,” it further writes.

It also says it will work with the European Data Protection Board, and the relevant lead data protection authorities in the region, to ensure that online platforms comply with the EU’s new data protection framework (GDPR) — and specifically to ensure that users “understand how personal information is processed in the targeted advertising model, and that effective controls are available”.

“This includes greater transparency in relation to the privacy settings, and the design and prominence of privacy notices,” it warns.

Facebook’s use of dark pattern design and A/B-tested social engineering to obtain user consent for processing their data, while obfuscating its intentions for that data, has been a long-standing criticism of the company — and one the ICO is here signaling is very much on the regulatory radar in the EU.

So expecting new laws — as well as lots more GDPR lawsuits — seems prudent.

The regulator is also pushing for all four online platforms to “urgently roll out planned transparency features in relation to political advertising to the UK” — in consultation with both relevant domestic oversight bodies (the ICO and the Electoral Commission).

In Facebook’s case, it has been developing policies around political ad transparency — amid a series of related data scandals in recent years, which have ramped up political pressure on the company. But self-regulation looks very unlikely to go far enough (or fast enough) to fix the real risks now being raised at the highest political levels.

“We opened this report by asking whether democracy has been disrupted by the use of data analytics and new technologies. Throughout this investigation, we have seen evidence that it is beginning to have a profound effect whereby information asymmetry between different groups of voters is beginning to emerge,” writes the ICO. “We are now at a crucial juncture where trust and confidence in the integrity of our democratic process risks being undermined if an ethical pause is not taken. The recommendations made in this report — if effectively implemented — will change the behaviour and compliance of all the actors in the political campaigning space.”

Another key policy recommendation the ICO is making is to urge the UK government to legislate “at the earliest opportunity” to introduce a statutory Code of Practice under the country’s new data protection law for the use of personal information in political campaigns.

The report also essentially calls out all the UK’s political parties for data protection failures — a universal problem that’s very evidently being supercharged by the rise of accessible and powerful online platforms which have enabled political parties to combine (and thus enrich) voter databases they are legally entitled to with all sorts of additional online intelligence that’s been harvested by the likes of Facebook and other major data brokers.

Hence the ICO’s concern about “developing a system of voter surveillance by default”. And why the information commissioner is pushing for online platforms to “act as information fiduciaries”.

Or, in other words, without exercising great responsibility around people’s information, online ad platforms like Facebook risk becoming the enabling layer that breaks democracy and shatters civic society.

Particular concerns being attached by the ICO to political parties’ activities include: The purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; and use of third party data analytics companies with insufficient checks around consent. And the regulator says it has several related investigations ongoing.

In March, the information commissioner, Elizabeth Denham, foreshadowed the conclusions in this report, telling a UK parliamentary committee she would be recommending a code of conduct for political use of personal data, and pushing for increased transparency around how and where people’s data is flowing — telling MPs: “We need information that is transparent, otherwise we will push people into little filter bubbles, where they have no idea about what other people are saying and what the other side of the campaign is saying. We want to make sure that social media is used well.”

The ICO says now that it will work closely with government to determine the scope of the Code. It also wants the government to conduct a review of regulatory gaps.

We’ve reached out to the Cabinet Office for a government response to the ICO’s recommendations. Update: A Cabinet Office spokesperson directed us to the Department for Digital, Culture, Media and Sport — and a DCMS spokesman told us the government will wait to review the full ICO report once it’s completed before setting out a formal response.

A Facebook spokesman declined to answer specific questions related to the report — instead sending us this short statement, attributed to its chief privacy officer, Erin Egan: “As we have said before, we should have done more to investigate claims about Cambridge Analytica and take action in 2015. We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We’re reviewing the report and will respond to the ICO soon.”

Here’s the ICO’s summary of its ten policy recommendations:

1) The political parties must work with the ICO, the Cabinet Office and the Electoral Commission to identify and implement a cross-party solution to improve transparency around the use of commonly held data.

2) The ICO will work with the Electoral Commission, Cabinet Office and the political parties to launch a version of its successful Your Data Matters campaign before the next General Election. The aim will be to increase transparency and build trust and confidence amongst the electorate on how their personal data is being used during political campaigns.

3) Political parties need to apply due diligence when sourcing personal information from third party organisations, including data brokers, to ensure the appropriate consent has been sought from the individuals concerned and that individuals are effectively informed in line with transparency requirements under the GDPR. This should form part of the data protection impact assessments conducted by political parties.

4) The Government should legislate at the earliest opportunity to introduce a statutory Code of Practice under the DPA2018 for the use of personal information in political campaigns. The ICO will work closely with Government to determine the scope of the Code.

5) It should be a requirement that third party audits be carried out after referendum campaigns are concluded to ensure personal data held by the campaign is deleted, or if it has been shared, the appropriate consent has been obtained.

6) The Centre for Data Ethics and Innovation should work with the ICO and the Electoral Commission to conduct an ethical debate in the form of a citizen jury to understand further the impact of new and developing technologies and the use of data analytics in political campaigns.

7) All online platforms providing advertising services to political parties and campaigns should include expertise within the sales support team who can provide political parties and campaigns with specific advice on transparency and accountability in relation to how data is used to target users.

8) The ICO will work with the European Data Protection Board (EDPB), and the relevant lead Data Protection Authorities, to ensure online platforms’ compliance with the GDPR – that users understand how personal information is processed in the targeted advertising model and that effective controls are available. This includes greater transparency in relation to the privacy settings and the design and prominence of privacy notices.

9) All of the platforms covered in this report should urgently roll out planned transparency features in relation to political advertising to the UK. This should include consultation and evaluation of these tools by the ICO and the Electoral Commission.

10) The Government should conduct a review of the regulatory gaps in relation to content and provenance and jurisdictional scope of political advertising online. This should include consideration of requirements for digital political advertising to be archived in an open data repository to enable scrutiny and analysis of the data.


Read Full Article

Netflix Launches Smart Downloads for Mobile Users


Netflix has launched a new feature designed to make it easier for binge-watchers to get their fix of flix. Smart Downloads sees Netflix managing your mobile downloads so that you don’t have to. Which means you’ll always have something to watch on your phone.

Since 2016, you have been able to download movies and TV shows from Netflix. For people who enjoy watching Netflix on the go but who don’t have unlimited data, this changed everything. And now Smart Downloads have arrived to make life even sweeter.

Netflix Now Manages Your Mobile Downloads

Smart Downloads are exactly what the name suggests. With the feature enabled, Netflix will manage your downloads for you, and hopefully in a smart manner. Which means you can focus on watching your favorite content without worrying about managing it.

The idea is that when you finish watching an episode of your favorite show, Netflix will delete that episode off your mobile device and download the next episode in its place. This should prove especially useful for binge-watchers who burn through multiple episodes.

Smart Downloads only kicks in when you’re connected over Wi-Fi. So if you’re going on a long journey you’ll still have to download lots of episodes in advance. However, as soon as you reconnect to a Wi-Fi network Smart Downloads will do its thing.
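The behavior described above boils down to a simple policy: on Wi-Fi, drop the finished episode and fetch the next one. Here's a toy Python sketch of that logic — a guess at the flow from the description, not Netflix's actual client code; all names are illustrative:

```python
# Toy model of the Smart Downloads policy described above.
# Real Netflix client logic is not public; this is purely illustrative.
def smart_download(downloaded, finished, total_episodes, on_wifi):
    """Return the updated list of downloaded episode numbers."""
    if not on_wifi:
        return downloaded  # nothing happens off Wi-Fi
    updated = [ep for ep in downloaded if ep != finished]  # delete watched episode
    nxt = finished + 1
    if nxt <= total_episodes and nxt not in updated:
        updated.append(nxt)  # download the next episode in its place
    return updated

print(smart_download([3], finished=3, total_episodes=10, on_wifi=True))   # [4]
print(smart_download([3], finished=3, total_episodes=10, on_wifi=False))  # [3]
```

The second call shows the Wi-Fi constraint: off Wi-Fi the finished episode simply stays on the device until a connection is available.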

If you’re one of those strange people who watches the same episode multiple times you’ll need to turn Smart Downloads off. To do so, tap the Menu icon, scroll down and tap on App Settings, and under Downloads toggle the Smart Downloads feature off.

Smart Downloads Is Only on Android (for Now)

Smart Downloads is now available on Android, so Android users just need to update the Netflix app to gain access to it. Unfortunately, iOS users are going to have to wait a while longer, with Netflix suggesting Smart Downloads will reach them later this year.

If you weren’t previously aware that you could watch Netflix content offline, now is as good a time as any to learn how to download movies and TV shows on Netflix. Then all that’s left to do is discover the TV shows worth watching on your commute to work.

Read the full article: Netflix Launches Smart Downloads for Mobile Users


Read Full Article

HTC’s blockchain phone is real, and it’s arriving later this year


HTC isn’t gone just yet. Granted, it’s closer than it’s ever been before, with a headcount of fewer than 5,000 employees worldwide — down from 19,000 in 2013. But in spite of “market competition, product mix, pricing, and recognized inventory write-downs,” the company is still trucking on.

And while its claim to being “the leading innovator in smart phone devices” is up for debate, the Taiwanese manufacturer has never shied away from a compelling gimmick. Announced earlier this year, the Exodus definitely fits the bill. The “world’s first major blockchain phone” is still shrouded in mystery, though the company did reveal a couple of key details this week at RISE in Hong Kong, intended to keep folks interested while it irons out the rest of the product’s hiccups.

Chief among the reveals is an admittedly nebulous release date of Q3 this year. It’s hardly specific, but it does make the phone a little bit more real — unlike the images, which are still limited to the above blueprint picture at press time.

Here’s a quote from the company’s chief crypto officer, a position that really exists.

In the new internet age people are generally more conscious about their data, and this is a perfect opportunity to empower the user to start owning their digital identity. The Exodus is a great place to start because the phone is the most personal device, and it is also the place where all your data originates from. I’m excited about the opportunity it brings to decentralize the internet and reshape it for the modern user.

Prior to the launch, the company is partnering with the popular blockchain title, CryptoKitties. The game will be available on a small selection of the company’s handsets starting with the U12+. “This is a significant first step in creating a platform and distribution channel for creatives who make unique digital goods,” the company writes in a release tied to the news. “Mobile is the most prevalent device in the history of humankind and for digital assets and dapps to reach their potential, mobile will need to be the main point of distribution. The partnership with Cryptokitties is the beginning of a non fungible, collectible marketplace and crypto gaming app store.”

In other words, HTC is attempting to reintroduce the concept of scarcity through these decentralized apps. It will also be partnering with Bitmark to help accomplish this.

If HTC is looking for the next mainstream play to right the ship, this is emphatically not it. That said, it could be compelling enough to gain some adoption among those heavily invested enough in the crypto space to pick up a handset built around the technology.

HTC promises more information on the device in “the coming months.”


Read Full Article

UK’s Information Commissioner will fine Facebook the maximum £500K over Cambridge Analytica breach


Facebook continues to face fallout over the Cambridge Analytica scandal, which revealed how user data was stealthily obtained by way of quizzes and then appropriated for other purposes, such as targeted political advertising. Today, the U.K. Information Commissioner’s Office (ICO) announced that it would be issuing the social network with its maximum fine, £500,000 ($662,000) after it concluded that it “contravened the law” — specifically the 1998 Data Protection Act — “by failing to safeguard people’s information.”

The ICO is clear that Facebook effectively broke the law by failing to keep users’ data safe, when its systems allowed Dr Aleksandr Kogan, who developed an app called “This is your digital life” on behalf of Cambridge Analytica, to scrape the data of up to 87 million Facebook users. This included accessing the data of the friends of the individual accounts that had engaged with Dr Kogan’s app.

The ICO’s inquiry first started in May 2017 in the wake of the Brexit vote and questions over how parties could have manipulated the outcome using targeted digital campaigns.

Damian Collins, the MP who chairs the Digital, Culture, Media and Sport Committee that has been undertaking the investigation, said as a result that the DCMS will now demand more information from Facebook, including which other apps might have been involved or used in a similar way by others, as well as what links all of this activity might have had to Russia. He’s also gearing up to demand a full, independent investigation of the company, rather than the internal audit Facebook has so far provided. A full statement from Collins is below.

The fine, and the follow-up questions that U.K. government officials are now asking, are a signal that Facebook — after months of grilling on both sides of the Atlantic amid a wider investigation — is not yet off the hook in the U.K. This will come as good news to those who watched the hearings (and non-hearings) in Washington, London and the European Parliament and felt that Facebook and others walked away relatively unscathed. The reverberations are also being felt in other parts of the world. In Australia, a group earlier today announced that it was mounting a class action lawsuit against Facebook for breaching data privacy as well. (Australia has also been conducting a probe into the scandal.)

The ICO also put forward three questions alongside its announcement of the fine, to which it will now seek answers from Facebook. In its own words:

  1. Who had access to the Facebook data scraped by Dr Kogan, or any data sets derived from it?
  2. Given Dr Kogan also worked on a project commissioned by the Russian Government through the University of St Petersburg, did anyone in Russia ever have access to this data or data sets derived from it?
  3. Did organisations who benefited from the scraped data fail to delete it when asked to by Facebook, and if so where is it now?

The DCMS committee has been conducting a wider investigation into disinformation and data use in political campaigns and it plans to publish an interim report on it later this month.

Collins’ full statement:

Given that the ICO is saying that Facebook broke the law, it is essential that we now know which other apps that ran on their platform may have scraped data in a similar way. This cannot be left to a secret internal investigation at Facebook. If other developers broke the law we have a right to know, and the users whose data may have been compromised in this way should be informed.

Facebook users will be rightly concerned that the company left their data far too vulnerable to being collected without their consent by developers working on behalf of companies like Cambridge Analytica. The number of Facebook users affected by this kind of data scraping may be far greater than has currently been acknowledged. Facebook should now make the results of their internal investigations known to the ICO, our committee and other relevant investigatory authorities.

Facebook state that they only knew about this data breach when it was first reported in the press in December 2015. The company has consistently failed to answer the questions from our committee as to who at Facebook was informed about it. They say that Mark Zuckerberg did not know about it until it was reported in the press this year. In which case, given that it concerns a breach of the law, they should state who was the most senior person in the company to know, why they decided people like Mark Zuckerberg didn’t need to know, and why they didn’t inform users at the time about the data breach. Facebook need to provide answers on these important points. These important issues would have remained hidden, were it not for people speaking out about them. Facebook’s response during our inquiry has been consistently slow and unsatisfactory.

The receivers of SCL Elections should comply with the law and respond to the enforcement notice issued by the ICO. It is also disturbing that AIQ have failed to comply with their enforcement notice.

Facebook has been in the crosshairs of the ICO over other data protection issues before, and has not come out well.


Read Full Article

Court victory legalizes 3D-printable gun blueprints


A multi-year legal battle over the ability to distribute computer models of gun parts and replicate them in 3D printers has ended in defeat for government authorities who sought to prevent the practice. Cody Wilson, the gunmaker and free speech advocate behind the lawsuit, now intends to expand his operations, providing printable gun blueprints to all who desire them.

The longer story of the lawsuit is well told by Andy Greenberg over at Wired, but the decision is eloquent on its own. The fundamental question is whether making 3D models of gun components available online is covered by the free speech rights granted by the First Amendment.

This is a timely but complex conflict because it touches on two themes that happen to be, for many, ethically contradictory. Arguments for tighter restrictions on firearms are, in this case, directly opposed to arguments for the unfettered exchange of information on the internet. It’s hard to advocate for both here: in this instance, restricting the distribution of these files is both a restriction on firearms and a restriction on speech.

That, at least, seems to be the conclusion of the government lawyers, who settled Wilson’s lawsuit after years of court battles. In a copy of the settlement provided to me by Wilson, the U.S. government agrees to exempt “the technical data that is the subject of the Action” from legal restriction. The modified rules should appear in the Federal Register soon.

What does this mean? It means that a 3D model that can be used to print the components of a working firearm is legal to own and legal to distribute. You can likely even print it and use the product — you just can’t sell it. There are technicalities to the law here (certain parts are restricted but can be sold in an incomplete state, etc.), but the implications as regards the files themselves seem clear.

Wilson’s original vision, which he is now pursuing free of legal obstacles, is a repository of gun models, called DEFCAD, much like any other collection of data on the web, though naturally considerably more dangerous and controversial.

“I currently have no national legal barriers to continue or expand DEFCAD,” he wrote in an email to TechCrunch. “This legal victory is the formal beginning to the era of downloadable guns. Guns are as downloadable as music. There will be streaming services for semi-automatics.”

The concepts don’t map perfectly, no doubt, but it’s hard to deny that with the success of this lawsuit, there are few legal restrictions to speak of on the digital distribution of firearms. Even before it, there were few technical restrictions: certainly, just as you could download MP3s on Napster in 2002, you can download a gun file today.

Gun control advocates will no doubt argue that greater availability of lethal weaponry is the opposite of what is needed in this country. But others will point out that in a way this is a powerful example of how liberally free speech can be defined. It’s important to note that both of these things can be true.

This court victory settles one case, but marks the beginning of many others. “I have promoted my values for years with great care and diligence,” Wilson wrote. It’s hard to disagree with that. Those whose values differ are free to pursue them in their own way; perhaps they too will be awarded victories of this scale.


Read Full Article