Machine learning (ML) for tabular data (e.g. spreadsheet data) is one of the most active areas in both ML research and business applications. Solutions to tabular data problems, such as fraud detection and inventory prediction, are critical for many business sectors, including retail, supply chain, finance, manufacturing, and marketing. Today, building a good ML model for these problems requires significant ML expertise, including manual feature engineering and hyper-parameter tuning. The scarcity of these skills limits how widely businesses can benefit from ML.
Google’s AutoML efforts aim to make ML more scalable and to accelerate both research and industry applications. Our initial neural architecture search work enabled breakthroughs in computer vision with NASNet, and evolutionary methods such as AmoebaNet and the hardware-aware mobile vision architecture MnasNet further demonstrated the benefits of these learning-to-learn methods. Recently, we applied a learning-based approach to tabular data, creating a scalable end-to-end AutoML solution that meets three key criteria:
Full automation: Data and computation resources are the only inputs, while a servable TensorFlow model is the output. The whole process requires no human intervention.
Extensive coverage: The solution is applicable to the majority of arbitrary tasks in the tabular data domain.
High quality: Models generated by AutoML have comparable quality to models manually crafted by top ML experts.
To benchmark our solution, we entered our algorithm in the KaggleDays SF Hackathon, an 8.5-hour competition with 74 teams of up to 3 members each, held as part of the KaggleDays event. This was the first time AutoML had competed against Kaggle participants. The competition involved predicting manufacturing defects given information about material properties and testing results for batches of automotive parts. Despite competing against participants at the Master level of the Kaggle progression system, including many at the Grandmaster level, our team (“Google AutoML”) led for most of the day and finished in second place by a narrow margin, as seen in the final leaderboard.
Our team’s AutoML solution was a multistage TensorFlow pipeline. The first stage is responsible for automatic feature engineering, architecture search, and hyperparameter tuning through search. The promising models from the first stage are fed into the second stage, where cross validation and bootstrap aggregating are applied for better model selection. The best models from the second stage are then combined in the final model.
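In spirit, the two-stage pipeline looks something like the sketch below. This is a toy illustration only: `cv_score` is a hypothetical stand-in for launching and cross-validating a real TensorFlow training job, and the tiny search space is invented for the example.

```python
import random
import statistics

# Hypothetical stand-in for "train and cross-validate one model config" --
# in the real system this would launch a TensorFlow training job.
def cv_score(config, folds=3, seed=0):
    rng = random.Random(seed)
    # Pretend quality peaks near lr=0.1 and depth=6; noise simulates fold variance.
    base = 1.0 - abs(config["lr"] - 0.1) - 0.02 * abs(config["depth"] - 6)
    return statistics.mean(base + rng.uniform(-0.01, 0.01) for _ in range(folds))

def stage_one(n_trials, rng):
    """Random search over hyperparameters; keep the most promising candidates."""
    trials = []
    for _ in range(n_trials):
        config = {"lr": rng.choice([0.01, 0.05, 0.1, 0.3]),
                  "depth": rng.randint(2, 10)}
        trials.append((cv_score(config), config))
    trials.sort(key=lambda t: t[0], reverse=True)
    return [cfg for _, cfg in trials[:5]]

def stage_two(candidates):
    """Re-evaluate survivors with cross-validation, then combine the best."""
    scored = sorted(candidates, key=cv_score, reverse=True)
    return scored[:3]  # the final model is an ensemble of the top survivors

rng = random.Random(42)
final_ensemble = stage_two(stage_one(50, rng))
print(len(final_ensemble))  # 3 configs combined into the final model
```

The real system searches architectures and engineered features as well as hyperparameters, and runs such trials in parallel at a far larger scale.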
The workflow for the “Google AutoML” team was quite different from that of other Kaggle competitors. While they were busy analyzing data and experimenting with various feature engineering ideas, our team spent most of its time monitoring jobs and waiting for them to finish. Our solution, which took second place on the final leaderboard, required 1 hour on 2500 CPUs to run end-to-end.
After the competition, Kaggle published a public kernel to investigate winning solutions and found that augmenting the top hand-designed models with AutoML models, such as ours, could be a useful way for ML experts to create even better performing systems. As can be seen in the plot below, AutoML has the potential to enhance the efforts of human developers and address a broad range of ML problems.
Potential model quality improvement on the final leaderboard if AutoML models were merged with other Kagglers’ models. “Erkut & Mark, Google AutoML” combines the top winner “Erkut & Mark” and the second-place “Google AutoML” models. Erkut Aykutlug and Mark Peng used XGBoost with creative feature engineering, whereas AutoML uses both neural networks and gradient boosted trees (TFBT) with automatic feature engineering and hyperparameter tuning.
Google Cloud AutoML Tables
The solution we presented at the competition is the main algorithm in Google Cloud AutoML Tables, which was recently launched in beta at Google Cloud Next ‘19. The AutoML Tables implementation regularly performs well in benchmark tests against Kaggle competitions, as shown in the plot below, demonstrating state-of-the-art performance across the industry.
Third party benchmark of AutoML Tables on multiple Kaggle competitions
We are excited about the potential application of AutoML methods across a wide range of real business problems. Customers have already been leveraging their tabular enterprise data to tackle mission-critical tasks like supply chain management and lead conversion optimization using AutoML Tables, and we are excited to be providing our state-of-the-art models to solve tabular data problems.
Acknowledgements This project was only possible thanks to Google Brain team members Ming Chen, Da Huang, Yifeng Lu, Quoc V. Le and Vishy Tirumalashetty. We also thank Dawei Jia, Chenyu Zhao and Tin-yun Ho from the Cloud AutoML Tables team for great infrastructure and product landing collaboration. Thanks to Walter Reade, Julia Elliott and Kaggle for organizing such an engaging competition.
In the same way that you can tell a lot about a person by looking at their physical desktop, you can also deduce a similar amount of information from a person’s Windows desktop.
If you’re living in a world of virtual clutter, it might be sensible to turn to a third-party desktop management app for help. The most well-known is Fences, but there are plenty of other options out there too.
Here are the best free alternatives to Fences for managing and organizing your Windows desktop.
1. Fences
Wait, how can Stardock’s Fences be an alternative to Fences? Hear me out.
These days, Fences is a paid app. You can enjoy a 30-day free trial, but thereafter you’ll need to pay $10 for the app. If you want the full app, including Object Desktop, it will cost you $50.
However, Fences hasn’t always been a paid app. Back when it was first making a name for itself, the app was free.
And the good news? You can still download that old, free version of Fences. Sure, it doesn’t have quite as many bells and whistles as the newest releases, but it still works well.
2. Nimi Places
Nimi Places is a desktop organizer that lets you arrange your desktop into customizable containers. Each container can hold files and folders from multiple locations, and each file or folder can be displayed as an icon or a thumbnail.
From an organizational standpoint, you can add colored labels and create rules for containers so specific actions will be performed at pre-defined times. Each container can use an individual theme, and you can use different size icons within each container to aid onscreen visuals. The containers also have a built-in media previewer.
The thumbnails are also worth looking at in more detail. Nimi Places doesn’t only have the ability to create thumbnails of images and videos—it can also work with Photoshop files, web page shortcuts, folder directories, and an assortment of productivity files.
3. XLaunchpad
If you’re an Apple user, you will be familiar with Launchpad on macOS. Yes, you can customize the Start Menu in Windows 10 to partially replicate it, but having all your installed apps neatly displayed with a single click is super convenient and a true time saver.
If you’re the type of person who has hundreds of app shortcuts on your desktop, give XLaunchpad a try. It brings the Mac Launchpad experience to Windows. Once installed, you’ll see a Rocket icon on your desktop. Click the icon, and you’ll see all your apps. You can finally delete all those app shortcuts from your desktop.
4. SideSlide
People who’ve used Fences for Windows will like SideSlide. It is the Windows equivalent of shoving all the clutter on your physical desktop into your office drawers. Out of sight, out of mind, right?
The program centers around a Workspace. Within the Workspace, you can add containers, shortcuts, commands, URLs, RSS news feeds, pictures, reminders, notes, and a whole lot more.
All the content in your Workspace is readily available with just a single click. Just dock the app to the side of the screen, and it stays out of sight when not in use; hover your mouse over the dock and it will instantly expand.
Customization is SideSlide’s priority. With a bit of tweaking, you can get the app working exactly the way you want. Check out the video above for a glimpse into what it’s capable of.
5. ViPad
ViPad, another desktop organizer for Windows 10, also takes a container-based approach to organizing your desktop. However, it uses just one container, with tabs along the top of the container’s window allowing you to jump between different groups of content.
Tabs can hold apps and documents, web links, social media contacts, and even music. The tabs are fully searchable (just start typing to start looking), and can be rearranged to suit your needs using drag-and-drop.
6. TAGO Fences
TAGO Fences is the most lightweight app on this list. If you just want a few core features without all the added extras you’ll probably never use, check it out.
It’s also arguably the most Fences-like experience, with the aforementioned Nimi Places coming in a close second.
The app lets you store multiple shortcuts and apps within each fence, and has a scroll bar in case the list of icons becomes too big for the container.
For each container, you can change the background and tile colors, show or hide individual icons, and drag and drop your content into your preferred order.
7. Use Virtual Desktops
Windows 10 marks the first time multiple virtual desktops have become a mainstream feature. Used correctly, they can massively reduce the amount of clutter on your desktop. For example, if your desktop is a jumble of Steam shortcuts, college assignments, and fresh memes you found on Reddit, why not give each category its own desktop space?
To create a new desktop, click on the Task View icon on the taskbar, or press Windows + Tab. On the new window, click + New Desktop in the upper left-hand corner. To cycle between desktops, press Windows + Ctrl + Left Arrow (or Right Arrow), and to close a desktop, press Windows + Ctrl + F4.
The Best Fences Alternative for Windows 10
As we briefly alluded to earlier in the article, Windows 10 is potentially on the way to making all these apps redundant. You can now use the Start Menu to group shortcuts and apps into expandable folders (just drag one icon over the top of another to get started). If you pair the Start Menu with virtual desktops, you can argue apps like Fences are reaching the end of their life cycle.
If you’re looking for the most Fences-like experience, we recommend TAGO or Nimi.
Tor is one of the most powerful tools for protecting your privacy on the internet. But, as seen in recent years, the power of Tor does come with limitations. Today, we’re going to look at how Tor works, what it does and does not do, and how to stay safe while using it.
Read on for ways you can stay safe from rogue or bad Tor exit nodes.
Tor in a Nutshell: What Is Tor?
Tor works like this: when you send a message through Tor, it is sent on a random course throughout the Tor network. It does this using a technology known as “onion routing.” Onion routing is a bit like sending a message sealed in a series of envelopes, each secured with a padlock.
Each node in the network decrypts the message by opening the outermost envelope to read the next destination, then sends the still-sealed (encrypted) inner envelopes on to the next address.
As a result, no individual Tor network node can see more than a single link in the chain, and the path of the message becomes extremely difficult to trace.
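The envelope analogy maps directly onto code. The sketch below is purely illustrative: real Tor uses layered public-key and AES cryptography with a telescoping circuit handshake, not the toy XOR “cipher” used here for brevity.

```python
# Toy illustration of onion routing -- NOT real cryptography.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # Involutive toy cipher: applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(payload: bytes, route):
    """Wrap the payload in one encryption layer per hop, innermost layer first."""
    packet = payload
    for key, next_hop in reversed(route):
        packet = xor_crypt(next_hop.encode() + b"|" + packet, key)
    return packet

def peel_layer(packet: bytes, key: bytes):
    """A relay removes only its own layer: next-hop address + inner packet."""
    plain = xor_crypt(packet, key)
    next_hop, _, inner = plain.partition(b"|")
    return next_hop.decode(), inner

# Each relay holds its own key and learns only the next address in the chain.
route = [(b"key-A", "relay-B"), (b"key-B", "exit-C"), (b"key-C", "example.com")]
onion = build_onion(b"GET /", route)

packet = onion
for key, _ in route:
    address, packet = peel_layer(packet, key)
print(address, packet)  # example.com b'GET /'
```

Note how the loop plays each relay in turn: every hop recovers only the next address, never the full route or the payload.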
Eventually, though, the message has to wind up somewhere. If it is going to a “Tor hidden service,” your data remains within the Tor network. A Tor hidden service is a server with a direct connection to the Tor network and without a connection to the regular internet (sometimes referred to as the clearnet).
But if you are using the Tor Browser and Tor network as a proxy to the clearnet, it gets a little more complicated. Your traffic must go through an “exit node.” An exit node is a special type of Tor node that passes your internet traffic back along to the clearnet.
While the majority of Tor exit nodes are fine, some present a problem: your internet traffic is vulnerable to snooping by the exit node it passes through. It is important to note, though, that malicious nodes are far from the majority. How bad is the problem? And can you avoid malicious exit nodes?
How to Catch Bad Tor Exit Nodes
A Swedish security researcher, using the name “Chloe,” developed a technique that tricks corrupt Tor exit nodes into revealing themselves [Internet Archive link; original blog is no longer active]. The technique is known as a honeypot, and here’s how it works.
First, Chloe set up a website using a legitimate-looking domain name and web design to serve as the honeypot. For the specific test, Chloe created a domain resembling a Bitcoin merchant. Then, Chloe downloaded a list of every Tor exit node active at the time, logged into Tor, and used each Tor exit node, in turn, to log into the site.
To keep the results clean, she used a unique account for each exit node in question (around 1,400 at the time of the research).
Then, Chloe sat back and waited for a month. Any exit node attempting to steal login credentials from exiting Tor traffic would see the unique login details, steal the username and password, and attempt to use them. The honeypot Bitcoin merchant site would record each such login attempt.
Because each username and password combination was unique for each exit node, Chloe quickly uncovered several malicious Tor exit nodes.
Of the 1,400 nodes, 16 attempted to steal the login credentials. That doesn’t seem like many, but even one is too many.
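The bookkeeping behind the honeypot is simple enough to sketch. The node names and log handling below are illustrative, not taken from Chloe’s actual setup:

```python
import secrets

def issue_credentials(exit_nodes):
    """Give every exit node its own throwaway, unguessable username."""
    return {f"user-{secrets.token_hex(8)}": node for node in exit_nodes}

def flag_malicious(cred_to_node, observed_logins):
    """Any later login with a planted credential implicates exactly one node."""
    return sorted({cred_to_node[user] for user in observed_logins
                   if user in cred_to_node})

nodes = [f"exit-{i}" for i in range(1400)]
creds = issue_credentials(nodes)

# Suppose the honeypot's access log later shows these planted usernames reused:
reused = [u for u, n in creds.items() if n in ("exit-7", "exit-42")]
print(flag_malicious(creds, reused))  # ['exit-42', 'exit-7']
```

Because each credential is used through exactly one exit node and never again, a reuse can only mean that node (or someone it leaked to) captured the traffic.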
Are Tor Exit Nodes Dangerous?
Chloe’s Tor exit node honeypot experiment was illuminating. It illustrated that malicious Tor exit nodes will take the opportunity to use any data they can acquire.
In this case, the honeypot research was only picking up the Tor exit nodes whose operators have an interest in quickly stealing a few Bitcoins. You have to consider that a more ambitious criminal probably wouldn’t show up in such a simple honeypot.
However, it is a concerning demonstration of the damage that a malicious Tor exit node can do, given the opportunity.
Back in 2007, security researcher Dan Egerstad ran five compromised Tor exit nodes as an experiment. Egerstad quickly found himself in possession of login details for thousands of servers across the world, including servers belonging to the Australian, Indian, Iranian, Japanese, and Russian embassies. Understandably, these accounts came with a tremendous amount of extremely sensitive information.
Egerstad estimates that 95% of the traffic running through his Tor exit nodes was unencrypted, using the standard HTTP protocol, giving him complete access to the content.
After he posted his research online, Egerstad was raided by Swedish police and taken into custody. He claims that one of the police officers told him that the arrest was due to the international pressure surrounding the leak.
6 Ways to Avoid Malicious Tor Exit Nodes
The foreign powers whose information was compromised made a basic mistake: they misunderstood how Tor works and what it is for. Tor is often assumed to be an end-to-end encryption tool. It isn’t. Tor anonymizes the origin of your browsing and messages, but not their content.
If you are using Tor to browse the regular internet, an exit node can snoop on your browsing session. That provides a powerful incentive for unscrupulous people to set up exit nodes solely for espionage, theft, or blackmail.
The good news is, there are some simple tricks you can use to protect your privacy and security while using Tor.
1. Stay on the Darkweb
The easiest way to stay safe from bad exit nodes is not to use them. If you stick to using Tor hidden services, you can keep all your communications encrypted, without ever exiting to the clearnet. This works well when possible. But it isn’t always practical.
Given that the Tor network (sometimes referred to as the “darkweb”) is thousands of times smaller than the regular internet, you won’t always find what you’re looking for. Furthermore, if you want to use any social media site (bar Facebook, which operates a Tor onion site), you will have to use an exit node.
2. Use HTTPS
Another way to make Tor more secure is to use end-to-end encryption. More sites than ever are using HTTPS to secure your communications, rather than the old, insecure HTTP standard, and HTTPS is the default in the Tor Browser for sites that support it. Note that .onion sites don’t use HTTPS as standard, because communication within the Tor network, via Tor hidden services, is encrypted by its very nature.
But if you enable HTTPS, when your traffic leaves the Tor network through an exit node, you maintain your privacy. Check out the Electronic Frontier Foundation’s Tor and HTTPS interactive guide to understand more about how HTTPS protects your internet traffic.
In any case, if you are connecting to a regular internet site using the Tor Browser, make sure the HTTPS button is green before transmitting any sensitive information.
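That last check can even be automated. The helper below is our own illustration (not a Tor Browser API): it refuses to pass credentials to any URL that isn’t HTTPS, since plain-HTTP traffic is readable by the exit node.

```python
from urllib.parse import urlparse

def safe_to_submit(url: str) -> bool:
    """Only allow sensitive data to be sent over HTTPS."""
    return urlparse(url).scheme == "https"

print(safe_to_submit("https://example.com/login"))  # True
print(safe_to_submit("http://example.com/login"))   # False
```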
3. Use Anonymous Services
The third way you can improve your Tor safety is to use websites and services that don’t report on your activities as a matter of course. That is easier said than done in this day and age, but a few small adjustments can have a significant impact.
For instance, switching from Google search to DuckDuckGo reduces your trackable data footprint. Switching to an encrypted messaging service such as Ricochet (which you can route over the Tor network) also improves your anonymity.
4. Avoid Using Personal Information
In addition to using tools that increase your anonymity, you should also refrain from sending or using any personal information on Tor. Using Tor for research is fine. But if you engage in forums or interact with other Tor hidden services, do not use any personally identifiable information.
5. Avoid Logins, Subscriptions, and Payments
You should avoid sites and services that require you to log in. Sending your login credentials through a malicious Tor exit node could have dire consequences; Chloe’s honeypot is a perfect example of this.
Furthermore, if you log in to a service using Tor, you may well start using identifiable account information. For example, if you log in to your regular Reddit account using Tor, you have to consider if you have identifying information already associated with it.
Similarly, the Facebook onion site is a security and privacy boost, but when you sign in and post using your regular account, the post isn’t hidden, and anyone can track it down (although they won’t be able to see the location you sent it from).
Tor isn’t magic. If you log in to an account, it leaves a trace.
6. Use a VPN
Finally, use a VPN. A Virtual Private Network (VPN) keeps you safe from malicious exit nodes by continuing to encrypt your data once it leaves the Tor network. If your data remains encrypted, a malicious exit node will not have a chance to intercept it and attempt to figure out who you are.
Tor, and by extension, the darkweb, don’t have to be dangerous. If you follow the safety tips in this article, your chances of exposure will drastically decrease. The key thing to remember is to move slowly!
Tinder has changed the online dating game. But even as one of the most popular dating apps, there are some mistakes that way too many users make.
From falling for fake profiles to sabotaging your chances of getting matched, here are some common Tinder mistakes you should avoid at all costs.
What Is Tinder and How Do I Use It?
Tinder is a dating app for smartphones that lets you swipe through people’s profiles in an effort to find a potential romantic partner.
You can set a search distance of up to 100 miles, and from there, you “like” or “pass” people. Liking a profile is also referred to as swiping right, while swiping left is a rejection. If you and someone else on Tinder both swipe right on each other, the app notifies you both that you’re a match. This allows you to contact each other using the app’s messaging platform.
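The mutual-like rule is easy to picture in code. A minimal sketch (the user names are illustrative):

```python
# A match happens only when both users have swiped right on each other.
likes = set()

def swipe_right(user, target):
    """Record a like; return True if the like is mutual (a match)."""
    likes.add((user, target))
    return (target, user) in likes

print(swipe_right("alice", "bob"))   # False: bob hasn't liked alice yet
print(swipe_right("bob", "alice"))   # True: both swiped right, it's a match
```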
On the surface, Tinder might appear a bit shallow. The app’s primary focus is the profile images you swipe through; few details besides a first name, job, and age are visible at first glance. Luckily, you can find out more by looking at a user’s bio.
To use Tinder, you simply need to download the iOS or Android app and set up a profile. Since 2018, users can also use the app on their PC’s internet browser by visiting the Tinder website.
Now that you know how Tinder works, you’re probably ready to get started. However, you should make sure to avoid these common Tinder mistakes…
1. Falling for Fake Tinder Profiles
Tinder can be a great place to find a romantic partner, but it’s not uncommon for users to stumble across fake profiles. Fake profiles are often used by bots and people running scams on Tinder.
Fake profiles usually have a few warning signs to look out for. These include very little to no information in the bio, with only one picture that looks like a stock image.
You can’t always tell from the profile alone whether a user is fake. But if you receive links to alternative services and games, messages that seem automated or unrelated to context, or overly forward romantic messages, the profile may be fake.
Make sure you’re up-to-date with all of the warning signs of an online dating scammer. It’s better to err on the side of caution when matching up with people on the app.
2. Sharing Too Much Personal Information on Tinder
While Tinder apparently no longer shows mutual Facebook friends, there are still other profiles that can be linked to the dating app. For example, you are able to link your Instagram account and Spotify playlists.
However, if you’re not careful about what you link, you could share too much personally identifiable information with strangers. Reverse image searches, location tags, and other common tools can be used to identify you.
While you should include some information about yourself (or risk being considered a bot), you should also make sure to protect your privacy on Tinder. Don’t share your home address, your work address, or other private information with matches.
3. Choosing the Wrong Tinder Profile Picture
Many users swipe pretty quickly on Tinder. They won’t necessarily take the time to look at multiple photos and your bio unless your first picture makes an impression. Therefore, you need to make sure that the first photo on your profile is your best.
You should also make sure it features only you, rather than a group of friends. Group photos make it unclear who the profile actually belongs to and require people to view all your photos, which is more of a deterrent than an incentive.
A few other primary profile picture mistakes you should avoid are:
Photos where your face is obscured by sunglasses or hats
Photos taken from far away (making you difficult to see)
Photos that don’t have you in it, such as memes or pictures of animals
Photos where you’re not smiling
You can include these types of images elsewhere in your profile, but if you use one as your primary image, or as your only image, you’re less likely to attract matches. Tinder’s own statistics show that certain types of photos reduce the number of right swipes users get.
4. Bypassing the Gems
If you’re looking for more than just a casual hookup on Tinder, you should take the time to check bios before swiping. Every once in a while, you might find someone who has written something clever on their profile or who has genuinely interesting photos.
So don’t miss out. It’s really easy to focus just on appearances. But if you want more meaningful connections, take some time and get to know the person behind the photo. It’s easy for profiles to come across as generic, so if something stands out to you, take a chance with a right-swipe.
5. Having Unreasonable Expectations on Tinder
Using Tinder is definitely a great way to meet new people and potential romantic partners. But you will also need to manage your expectations.
Users on Tinder are there for a variety of reasons. This means that not all people you match with will be looking for the same thing as you. Surveys by Tinder show that most users are there to find love, but a significant number say that they’re there for casual flings, meeting friends, or just validating their self-esteem.
While you may be there for love, you may very well end up matching with users looking for a fling or just a chat. It’s going to take some time and effort to find people you like on the app, so make sure you’re not expecting instant success and connections when you use the app for the first time.
6. Lying on Your Tinder Profile
While we all want to put our best foot forward on dating sites, there’s a difference between making sure your selfies are flattering and telling outright lies. If you want to have a pleasant experience on Tinder, don’t lie about yourself.
Be clear with what you’re looking for from the experience. This will enable you to find people with similar goals, such as those seeking a potential long-term partner.
Don’t use heavily edited images that don’t look at all like you. Also avoid old images that no longer reflect how you look. This causes more problems than it’s worth.
You should also make sure to use your real name on Tinder, or at least your nickname. A match will become suspicious if they find out you’re using a fake name on the app. After all, it’s something that scammers or cheaters tend to do.
7. Not Checking Your Tinder Match’s Actual Age
Tinder is limited to users who are 18 years of age or older. While the app tries to prevent underage users from accessing the service, this isn’t always possible.
Furthermore, people are able to hide their age on Tinder with a Tinder Plus account. There are also users who don’t put their real age on the app. Some users go so far as to list their age as over 100 years old.
When setting up a date, you should make sure to check your Tinder match’s actual age.
8. Swiping Right Too Much or Too Little on Tinder
The way the Tinder algorithm works is not altogether clear. However, the company itself has confirmed that it prioritizes active users on the app. This doesn’t mean swiping right on every single profile, however.
Anecdotal reports from Tinder users on Reddit claim that swiping right on too many profiles lowers your number of matches. However, Tinder also recommends on their Swipe Life blog that you should not limit likes to only one percent of the profiles you see.
Even if there’s no penalty in the algorithm, swiping right on every profile can clearly reduce the quality of matches that Tinder is able to provide. After all, Tinder’s algorithm is unable to learn your preferences if you don’t seem to have any at all.
Other Dating Apps That Aren’t Tinder
Tinder can be a great way to get back into the dating game and meet new people. But it’s not for everyone.
If Tinder’s setup and focus on appearances doesn’t appeal, you may want to use an alternative dating app. To get in the dating game using another platform, check out our list of alternative dating apps that match you differently to Tinder.
Video game ratings are attached to every video game. Like movies, video games receive ratings so that you know whether they’re appropriate for children. However, if you’re not too familiar with video games, you may find these ratings confusing.
Since most video game ratings are just a set of letters or numbers, this article offers a guide to the ESRB and PEGI systems. In it, we explain how video game ratings work, give a little background on the organizations responsible, and explain how you can use them.
North America: The ESRB
The ESRB, short for Entertainment Software Rating Board, provides video game ratings for the United States, Canada, and Mexico. It was established in 1994, and the circumstances leading up to it are quite interesting.
Prior to the ESRB, video game ratings were up to the console manufacturers. At the time, Nintendo didn’t rate games, but had a reputation for censoring games to make them family-friendly. Meanwhile, Sega had its own rating system for its consoles.
As video game graphics grew more realistic, parents and the US government became concerned. Two games became the center of controversy: the ultra-violent fighting game Mortal Kombat, and Night Trap, a game with full-motion video where you have to stop teenage girls from being abducted.
As a result of this, the US government held hearings on the effects of mature games on society. They gave the game industry an ultimatum: come up with a universal ratings system in one year, or the government would force one on them.
Thus, in 1994, the ESRB was born. It’s been the video game ratings system in North America ever since. Unlike ratings in many other countries, ESRB ratings are not legally enforced. Instead, the system is self-regulated: all console manufacturers require games to have an ESRB rating to appear on their systems, and stores won’t stock games without one.
Europe: PEGI
PEGI, which stands for Pan European Game Information, is the standard for rating video games in much of Europe. It launched in 2003 and replaced various game rating systems that individual nations had used prior. As of this writing, 39 countries use PEGI to rate games.
There’s not quite as much of a backstory with PEGI. It’s an example of standardization across the countries of the European Union, and the European Commission has expressed support for it. Some countries mandate that age labels appear on games and enforce sales restrictions, while others adopt PEGI as a de facto standard with no particular legislative backing.
Video Game Ratings in Other Countries
As you’d expect, other regions of the world have their own video game rating systems as well. We can’t cover them all here, but they mostly follow similar patterns. For example, Japan has CERO (the Computer Entertainment Rating Organization), which assigns letter ratings to games.
However, Australia is particularly noteworthy for enacting heavy censorship compared to other western nations. The Australian Classification Board didn’t support the 18+ rating for video games until 2013. Certain games never get released in Australia, while others have to undergo heavy editing.
For example, in Fallout 3, the real-world drug morphine was changed to “Med-X” worldwide to comply with Australian standards. It’s illegal to sell any games in Australia that have been refused classification.
ESRB Ratings Explained
Now that we’ve looked at the companies behind the ratings, let’s look at the actual video game ratings you’ll see on boxes in North America.
The ESRB uses seven different ratings for games. Four of them are common, while two others are fairly rare and one is a placeholder.
Early Childhood (EC) is the lowest rating. It signifies games that are intended for a preschool audience. These titles thus have no objectionable content, and are likely not enjoyable for general audiences as they’re meant for young children. This rating is not very common. Example games include Dora the Explorer: Dance to the Rescue and Bubble Guppies.
Everyone (E) is the base rating. Games with this rating have content that’s “generally suitable for all ages”. They might contain minor instances of cartoon violence or comic mischief. Before 1998, this rating was called Kids to Adults (KA). Games rated E include Mario Kart 8 Deluxe and Rocket League.
Everyone 10+ (E10+) signifies games appropriate for kids 10 years and older. Compared to a game rated E, these titles can contain some suggestive content, more crude humor, or heavier violence. Notably, this is the only rating the ESRB has added since its inception. Some games with this rating are Super Smash Bros. Ultimate and Kingdom Hearts III.
Teen (T) is the next level up. This rating is suitable for players 13 and older. Titles may have sexually suggestive content, more frequent or stronger language, and blood. You’ll find the Teen rating on games like Apex Legends and Fortnite (find out what parents should know about Fortnite).
Mature (M) is the highest normal rating. Games rated M are considered suitable only for those 17 and older. Compared to Teen titles, they may contain intense violence, strong sexual content, nudity, and pervasive strong language. Some stores don’t sell M-rated games to minors, but this is a store policy, not a legal requirement. Example titles rated M include Red Dead Redemption II and Assassin’s Creed Odyssey.
Adults Only (AO) is the ESRB’s 18+ rating. It’s issued for games with graphic sexual content or those that allow gambling with real money. However, it is in effect a lame-duck rating. None of the major console manufacturers allow AO games on their systems, and few retailers will sell AO games in their stores.
Because of this, only a handful of games have ever received this rating; most AO games receive the rating due to heavy sexual content. Publishers will make changes to their games to avoid this rating, as it’s essentially a death sentence. Games with the AO rating include Seduce Me and Ef: A Fairy Tale of the Two.
Rating Pending (RP) is a placeholder. It appears alongside advertisements for games that haven’t been rated yet.
ESRB Content Descriptors
While you’ll find a rating on the front of a game’s box, the back contains more information. The ESRB has a few dozen content descriptors, which give you info about the exact kinds of objectionable content in the game. Most of them are self-explanatory (such as Blood or Use of Drugs), but we’ll explain a few potentially confusing ones here:
Comic Mischief: Characters slip on banana peels, slap each other, etc.
Crude Humor: Generally refers to “bathroom humor” such as farting.
Lyrics: Music in the game contains strong language or otherwise suggestive content.
Simulated Gambling: The game contains gambling with virtual money.
Suggestive Themes: A lesser version of the Sexual Themes descriptor. The game usually has characters in skimpy clothing or similar.
Finally, ESRB ratings now feature information about “Interactive Elements” at the bottom of the rating. These include In-Game Purchases if the game lets you spend real money for loot boxes or similar items, and Users Interact in games where you can talk and share content with others. The ESRB does not rate the online portions of a game because it can’t predict how people will act online.
For a complete list of descriptors and information, see the ESRB’s ratings guide. You can also search for any game on the ESRB’s website to see a summary of its objectionable elements.
PEGI Ratings Explained
PEGI uses a similar setup to the ESRB with five total ratings. However, there are slight differences in the rating levels, and there’s no “useless” rating like AO.
PEGI 3 is the lowest rating and is suitable for all age groups. Unlike the EC rating, games with this rating aren’t necessarily aimed at preschoolers. These titles won’t contain anything that will scare young children or any bad language, though very mild comical violence is OK. An example of this rating is Yoshi’s Crafted World.
PEGI 7, like PEGI 3, carries a green icon and signifies games for players 7 and older. These titles may contain scenes or sounds that could frighten younger children, along with very mild implied violence.
PEGI 12 features an orange icon. These games are for players 12 and older. They can contain more realistic violence, sexual innuendo, minor instances of gambling, horrifying elements, and some bad language. One such game is Shadow of the Colossus.
PEGI 16, also orange, signifies titles for those 16 and up. Compared to PEGI 12 titles, these games can contain drug use, more intense violence, stronger sexual situations, and frequent strong language. Battlefield V falls under this rating.
PEGI 18 is the strongest rating and carries a red color. These games are only for players 18 and older. They contain extreme violence, glorification of drug use, and explicit sexual activity. Metro: Exodus is an example of a PEGI 18 game.
PEGI Content Descriptors
Like the ESRB, PEGI also supplements the main ratings with content descriptions. These appear as icons on the back of the box. While there are far fewer PEGI descriptors compared to the ESRB, they signify different levels of that content based on the rating.
For example, the Bad Language descriptor can appear on games rated 12 through 18. But while a PEGI 12 game will only contain some mild swearing, a PEGI 18 game might have pervasive sexual expletives. Additionally, the descriptions are limited to particular ratings, so you won’t see the Drugs descriptor on a PEGI 7 title, for instance.
We’ve taken a full tour of the ESRB and PEGI video game ratings systems. So now you know the background of these companies, what the ratings mean, and how to check the content descriptors for additional details on individual titles.
It’s interesting to see how ratings compare across regions. For example, the indie title Celeste received an E10+ rating in the US, but only a PEGI 7 in Europe. PEGI also doesn’t point out some of the content that the ESRB does, such as crude humor.
If you’re a parent who wants to know more about their kids’ hobby, here’s our parents’ guide to video games to help you understand a little better.
360-degree cameras allow you to capture everything around you in video or still form. Despite seeming like a gimmick at first, the technology has matured to a point where these cameras are no longer just toys.
In fact, 360-degree cameras are now so good that there are some serious benefits to carrying a camera that can see more than you can. Curious? Here are some reasons why you should buy a 360-degree camera.
1. 360-Degree Cameras Are Fun
It’s easy to lose track of why you fell in love with photography. When you’re obsessing over sensor sizes and focal lengths, you’re forgetting what drew you to the hobby in the first place. Everything in life should be fun, and 360-degree cameras are built for fun.
There are no interchangeable lenses, and most have a modest sensor size of 1/2.3” (that’s the same as a GoPro). At this point, creativity is far more important than technical proficiency.
The result is a camera that demands you have fun with it. 360-degree video is a new medium to most photographers. These cameras make ideal travel companions since they see more of your holidays than you do. This doesn’t require much effort either, since a 360-degree camera is always pointing in the right direction.
If you love taking pictures on your smartphone, you’ll probably love shooting 360-degree stills and videos. You can’t buy the creative spark, but you can stoke the fire and try to rekindle it by challenging yourself to do something new.
2. 360-Degree Cameras Are Super Easy to Use
The other reason that 360-degree cameras make such good travel companions comes down to ease of use. When your camera is pointing in every direction, you don’t need to put much thought into framing your shot. And since most modern cameras of this type come with impressive image stabilization, you don’t even need to worry about holding the camera still.
And then there’s overcapture. Coined to describe the practice of extracting regular non-360-degree footage from 360-degree video, overcapture opens up a world of possibilities. You can frame your shot in post, slow down and speed up your footage, and insert pans and zooms.
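Under the hood, overcapture is essentially a reprojection: each pixel of the flat output frame corresponds to a viewing direction, which is converted to a longitude/latitude pair and sampled from the equirectangular 360-degree frame. Here’s a minimal NumPy sketch of the idea (the function name and parameters are illustrative, not any camera vendor’s API):

```python
import numpy as np

def overcapture_frame(equi, yaw=0.0, pitch=0.0, fov_deg=90.0, out_w=640, out_h=360):
    """Sample a flat (rectilinear) view from an equirectangular frame.

    equi: H x W x 3 array covering 360 x 180 degrees.
    yaw/pitch: viewing direction in radians; fov_deg: horizontal field of view.
    """
    eh, ew = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)  # pinhole focal length

    # Direction vector for every output pixel (camera looks down +z, y points down)
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    dx, dy, dz = xs, ys, np.full_like(xs, f)

    # Rotate by pitch (around x), then by yaw (around y)
    dy, dz = (dy * np.cos(pitch) - dz * np.sin(pitch),
              dy * np.sin(pitch) + dz * np.cos(pitch))
    dx, dz = (dx * np.cos(yaw) + dz * np.sin(yaw),
              -dx * np.sin(yaw) + dz * np.cos(yaw))

    # Convert directions to longitude/latitude, then to source pixel coordinates
    lon = np.arctan2(dx, dz)                                 # -pi .. pi
    lat = np.arcsin(dy / np.sqrt(dx**2 + dy**2 + dz**2))     # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * ew).astype(int) % ew
    v = np.clip(((lat / np.pi + 0.5) * eh).astype(int), 0, eh - 1)
    return equi[v, u]
```

Animating `yaw`, `pitch`, and `fov_deg` from frame to frame is what produces the virtual pans and zooms described above.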
When you’re done you can export your video for sharing on YouTube, Facebook, Instagram, and more.
Most 360-degree cameras support overcapture through the use of a mobile app. The Insta360 One X (our Insta360 One X review) is one such camera that makes excellent use of overcapture to generate fluid hyperlapses. Creating a hyperlapse previously required hours of manual work, a tripod, and patience. Now you can do it by walking down the road while hand-holding your camera.
3. The Technology Has Caught Up With the Ambition
A lot has changed since the first consumer-oriented 360-degree cameras hit the market. The early models, like Samsung’s Gear 360 (our Samsung Gear 360 review), suffered from poor image quality, wobbly video, and sub-par optics. These are the areas that have improved the most on today’s models.
Cameras like the Rylo are full-blown action cameras, with impressive image stabilization capabilities. The Insta360 One X and EVO cameras can shoot flat color profile LOG video and RAW still images, complete with manual camera controls if you want them. Then there’s the Ricoh Theta Z1, a 360-degree camera with a 1-inch sensor for superior low light performance and color rendition.
Most cameras use a companion smartphone app for controlling, viewing, and editing purposes. These allow you to put together slick videos which can be shared in an instant, direct from your device. Not only has the hardware improved, the software and focus on convenient mobile apps has come on in leaps and bounds too.
4. Things Look Even Better in VR
While overcapture is nice, nothing quite beats seeing your 360-degree footage in 360-degrees. Thanks to the increasing prevalence of virtual reality headsets, you can now don a headset and move your head around for a better view.
You don’t even need to shell out for an expensive HTC Vive or Oculus Go. Cheap VR solutions like Google Cardboard and Samsung’s Gear VR allow you to immerse yourself in your footage without breaking the bank, and there are some great VR apps for Google Cardboard.
If the idea of shooting VR videos excites you, take a look at the Insta360 EVO (above). The camera features two lenses on a hinge, allowing you to shoot full 360-degree video and 180-degree video in 3D. VR headsets offer some of the most impressive 3D experiences on the market, and the EVO doesn’t disappoint.
5. Shoot Drone-Like Footage Without a Drone
With the right accessories, you can get drone-like footage from your 360-degree action camera. Don’t believe me? Check out the video below:
Not all 360-degree cameras offer this functionality, and it doesn’t always work as well as Insta360’s implementation (above). With the right camera and stick, you can simulate a pretty impressive floating camera effect.
Combine the floating camera with overcapture and you can pull off some impressive-looking drone-like shots. Not only is it fun, it’s perfect for use in areas where flying a drone is impractical or illegal.
6. The Possibilities Are Endless
When action cameras became affordable, people started mounting GoPros on everything. Imagine how much better some of that footage would have been with the ability to re-frame the shot using overcapture. You no longer have to imagine, since even GoPro has a 360-degree camera in the form of the GoPro Fusion.
A 360-degree camera lets you further explore these possibilities. Mount it on your car and go for a scenic drive. Put it on your mountain bike and hit some gnarly jumps. Attach it to your pushchair and go for a walk through the park. Stick it on your motorcycle helmet and use it like a dashcam.
This technology can push the boundaries of your creative abilities. Master the art of “shoot first, frame later”. Just make sure you’ve got a spare battery or two before you embark on your next adventure.
Picking the Right 360-Degree Camera for You
Before you settle on a 360-degree camera, you have to ask yourself where your priorities lie. Form factors, still image capabilities, image stabilization, mobile app quality, and the ability to shoot LOG and RAW media should all influence your decision. And then there’s the price, with most current cameras hovering around the $500 mark.
Instagram isn’t just pretty pictures. It now also harbors bullying, misinformation, and controversial self-expression content. So today Instagram is announcing a bevy of safety updates to protect users and give them more of a voice. Most significantly, Instagram will now let users appeal the company’s decision to take down one of their posts. A new in-app interface rolling out starting today over the next few months will let users “get a second opinion on the post,” says Instagram’s head of policy Karina Newton. A different Facebook moderator will review the post, restore its visibility if it was wrongly removed, and inform users of the conclusion either way. Instagram has always let users appeal account suspensions, but now someone can also appeal if, say, their post was mistakenly removed for nudity when they weren’t nude, or flagged as hate speech that was actually friendly joshing.
Blocking Vaccine Misinfo Hashtags
On the misinformation front, Instagram will begin blocking vaccine-related hashtag pages when the content surfaced on a hashtag page features a large proportion of verifiably false content about vaccines. If there is some violating content but under that threshold, Instagram will lock the hashtag into a “Top-only” mode where Recent posts won’t show up, decreasing the visibility of problematic content. Instagram says that it will test this approach and expand it to other problematic content genres if it works. Instagram will also surface educational information via a pop-up to people who search for vaccine content, similar to what it’s used in the past for self-harm and opioid content.
Instagram says that now that health agencies like the Centers for Disease Control and the World Health Organization have confirmed that vaccines do not cause autism, it’s comfortable declaring information contradicting that to be verifiably false, meaning it can be aggressively demoted on the platform.
Automated systems scan and score every post uploaded to Instagram, checking them against classifiers of prohibited content and what the company calls “text-matching banks”. These are collections of fingerprinted content Instagram has already banned: their text is indexed, with words pulled out of imagery through optical character recognition, so Instagram can find posts containing the same words later. It’s working on extending this technology to videos. The systems are trained to spot obvious issues like threats, unwanted contact, and insults, but also subtler ones like intentionally causing “fear of missing out”, taunting, shaming, and betrayals.
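Instagram hasn’t published how its text-matching banks actually work, but the general idea can be sketched as fingerprinting: normalize the text of banned posts, hash overlapping word shingles, and flag new posts that share enough shingles with a banned entry. A toy Python illustration (the names, shingle size, and threshold are all assumptions, not Instagram’s implementation):

```python
import hashlib

def normalize(text):
    """Lowercase and collapse whitespace so trivial edits don't dodge the match."""
    return " ".join(text.lower().split())

def fingerprint(text, n=5):
    """Hash every n-word shingle of the normalized text."""
    words = normalize(text).split()
    return {hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
            for i in range(max(1, len(words) - n + 1))}

class TextMatchingBank:
    """Toy 'text-matching bank': store fingerprints of banned posts and
    flag new text that shares enough shingles with any banned entry."""
    def __init__(self, threshold=0.5):
        self.banned = []          # one fingerprint set per banned post
        self.threshold = threshold

    def ban(self, text):
        self.banned.append(fingerprint(text))

    def matches(self, text):
        fp = fingerprint(text)
        return any(len(fp & b) / max(len(b), 1) >= self.threshold
                   for b in self.banned)
```

In a real system, the text fed into `ban` could come from OCR on images as well as captions, which is how word-for-word reposts of banned memes get caught.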
If the AI is confident a post violates policies, it’s taken down and counted as a strike against any hashtag it includes. If a hashtag accumulates too many strikes, it will be blocked; those with fewer strikes get locked into Top-Only mode. The change comes after stern criticism from CNN and others about how hashtag pages like #VaccinesKill still featured tons of dangerous misinformation as recently as yesterday.
Tally-Based Suspensions
One other new change announced this week is that Instagram will no longer determine whether to suspend an account based on the percentage of its content that violates policies, but on a tally of total violations within a certain period of time. Otherwise, Newton says, “It would disproportionately benefit those that have a large amount of posts,” because even a large number of violations would be a smaller percentage for a prolific poster than a rare violation would be for someone who doesn’t post often. Instagram won’t disclose the exact time frame or number of violations that trigger suspensions, to prevent bad actors from gaming the system.
Instagram recently announced several new tests on the safety front at F8, including a “nudge” not to post a potentially hateful comment a user has typed, “away mode” for taking a break from Instagram without deleting your account, and a way to “manage interactions” so you can ban people from taking certain actions like commenting on your content or DMing you without blocking them entirely.
The announcements come as Instagram has solidified its central place in youth culture. That means it has intense responsibility to protect its user base from bullying, hate speech, graphic content, drugs, misinformation, and extremism. “We work really closely with subject matter experts, raise issues that might be playing out differently on Instagram than Facebook, and we identify gaps where we need to change how our policies are operationalized or our policies are changed” says Instagram’s head of public policy Karina Newton.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.
Specifically, Google says it will start giving more weight to reviews that discuss the most recent releases of an app. When this rolls out for consumers in August, it should ideally ensure that ratings better reflect the latest fixes and changes, positive or negative.
“You told us you wanted a rating based on what your app is today, not what it was years ago, and we agree,” said Google’s Milena Nikolic.
In January, the Consumer Technology Association nullified the award it had granted Lora DiCarlo (that’s a startup, not a person), which is building a hands-free device that uses biomimicry and robotics to help people achieve a “blended” orgasm. Yesterday, the CTA reversed that decision and apologized.
Facebook still won’t let you advertise for ICOs or binaries, and ads for cryptocurrencies and exchanges need prior approval. But a year after banning all blockchain-related ads, it’s reopening to “blockchain technology, industry news, education or events related to cryptocurrency.”
Consumer giants are taking note of the direct-to-consumer trend, with this deal following Unilever’s acquisition of Dollar Shave Club and Procter & Gamble’s acquisition of Walker & Company.
Traditionally, building these headphones involved building a lot of the hardware and software stack — something top-tier manufacturers could afford to do, but it kept second- or third-tier headphone developers from adding voice assistant capabilities to their devices.
Nike Fit uses a proprietary combination of computer vision, data science, machine learning, artificial intelligence and recommendation algorithms to find your right fit.
Detailed.com founder Glen Allsopp highlights some of the most overlooked ideas and sources of data to find words and phrases relevant to your business that are high in intent but lacking in competition. (Extra Crunch membership required.)
The latest call to break up Facebook looks to be the most uncomfortably close to home yet for supreme leader, Mark Zuckerberg.
“Mark’s power is unprecedented and un-American,” writes Chris Hughes, in an explosive op-ed published in the New York Times. “It is time to break up Facebook.”
It’s a long read but worth indulging for a well-articulated argument against the market-denting power of monopolies, shot through with a smattering of personal anecdotes about Hughes’ experience of Zuckerberg — whom he at one point almost paints as ‘only human’, before shoulder-dropping into a straight thumbs-down that “it’s his very humanity that makes his unchecked power so problematic.”
The tl;dr of Hughes’ argument against Facebook/Zuckerberg being allowed to continue its/his reign of the Internet knits together different strands of the techlash zeitgeist, linking Zuckerberg’s absolute influence over Facebook — and therefore over the unprecedented billions of people he can reach and behaviourally reprogram via content-sorting algorithms — to the crushing of innovation and startup competition; the crushing of consumer attention, choice and privacy, all hostage to relentless growth targets and an eyeball-demanding ad business model; to the crushing control of speech that Zuckerberg — as Facebook’s absolute monarch — personally commands, with Hughes worrying it’s a power too potent for any one human to wield.
“Mark may never have a boss, but he needs to have some check on his power,” he writes. “The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.”
His proposed solution is not just a break up of Facebook’s monopoly of online attention by re-separating Facebook, Instagram and WhatsApp — to try to reinvigorate a social arena it now inescapably owns — he also calls for US policymakers to step up to the plate and regulate, suggesting an oversight agency is also essential to hold Internet companies to account, and pointing to Europe’s recently toughened privacy framework, GDPR, as a start.
“Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy,” he writes. “A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time. The agency should also be charged with guaranteeing basic interoperability across platforms.”
Once an equally fresh-faced co-founder of Facebook alongside his Harvard roommate, Hughes left Facebook in 2007, walking away with what would become eye-watering wealth — writing later that he made half a billion dollars for three years’ work, off the back of Facebook’s 2012 IPO.
It’s harder to put a value on the relief Hughes must also feel, having exited the scandal-hit behemoth so early on — getting out before early missteps hardened into a cynical parade of privacy, security and trust failures that slowly, gradually yet inexorably snowballed into world-wide scandal — with the 2016 revelations about the extent of Kremlin-backed political disinformation lighting up the dark underbelly of Facebook ads.
Soon after, the Cambridge Analytica data misuse scandal shone an equally dim light into similarly murky goings-on on Facebook’s developer platform, some of which appeared to hit even closer to home. (Facebook had its own staff helping to target those political ads, and hired the co-founder of the company that had silently sucked out user data in order to sell manipulative political propaganda services to Cambridge Analytica.)
It’s clear now that Facebook’s privacy, security and trust failures are no accident, but rather chain-linked to Zuckerberg’s leadership: to his strategy of a never-ending sprint for relentless, bottomless growth — via what was once literally a stated policy of “domination”.
Hughes, meanwhile, dropped out — coming away from Facebook a very rich man and, if not entirely guilt-free given his own founding role in the saga, certainly lacking Zuckerberg-levels of indelible taint.
Though we can still wonder where his well-articulated concern, about how Facebook’s monopoly grip on markets and attention is massively and horribly denting the human universe, has been channelled prior to publishing this NYT op-ed — i.e. before rising alarm over Facebook’s impact on societies, democracies, human rights and people’s mental health scaled so disfiguringly into mainstream view.
Does he, perhaps, regret not penning a critical op-ed before Roger McNamee, an early Zuckerberg advisor with a far less substantial role in the whole drama, got his two cents in earlier this year — publishing a critical book, Zucked, which recounts his experience of trying and failing to get Zuckerberg to turn the tanker and chart a less collaterally damaging course?
It’s certainly curious it’s taken Hughes so long to come out of the woodwork and join the big techlash.
The NYT review of Zucked headlined it as an “anti-Facebook manifesto” — a descriptor that could apply equally to Hughes’ op-ed. And in an interview with TC back in February, McNamee — whose more limited connection to Zuckerberg Facebook has sought to dismiss — said of speaking out: “I may be the wrong messenger, but I don’t see a lot of other volunteers at the moment.”
Facebook certainly won’t be able to be so dismissive of Hughes’ critique, as a fellow co-founder. This is one Zuckerberg gut-punch that will both hurt and be harder to dodge. (We’ve asked Facebook if it has a response and will update if so.)
At the same time, hating on Facebook and Zuckerberg is almost fashionable these days — as the company’s consumer- and market-bending power has flipped its fortunes from winning friends and influencing people to turning frenemies into out-and-out haters and politically charged enemies.
Whether it’s former mentors, former colleagues — and now of course politicians and policymakers leading the charge and calling for the company to be broken up.
Seen from that angle, it’s a shame Hughes waited so long to add his two cents. It does risk him being labelled an opportunist — or, dare we say it, a techlash populist. (Some of us have been banging on about Facebook’s intrusive influence for years, so, er, welcome to the club Chris!)
Though, equally, he may have been trying to protect his historical friendship with Zuckerberg. (The op-ed begins with Hughes talking about the last time he saw Zuckerberg, in summer 2017, which it’s hard not to read as him tacitly acknowledging there likely won’t be any more personal visits after this bombshell.)
Hughes is also not alone in feeling he needs to bide his time to come out against Zuckerberg.
The WhatsApp founders, who jumped the Facebook mothership last year, kept their heads down and their mouths shut for years, despite a product philosophy that boiled down to ‘fuck ads’ — only finally making their lack of love for their former employer’s ad-fuelled privacy incursions into WhatsApp clear post-exit from the belly of the beast — in their own subtle and not so subtle ways.
In their case they appear to have been mostly waiting for enough shares to vest. (Brian Acton did leave a bunch on the table.) But Hughes has been sitting on his money mountain for years.
Still, at least we finally have his critical — and rarer — account to add to the pile: a Facebook co-founder, who had remained close to Zuckerberg’s orbit, finally reaching for the unfriend button.
Samsung has been understandably silent about the Galaxy Fold for the last couple of weeks. The company’s been reassessing issues with the foldable’s display after initially chalking problems with review units up to small sample sizes and user error. It’s tough to say how difficult and expensive a fix will be, but this surely isn’t the sort of press it was hoping for with its first-to-market device.
CEO DJ Koh is finally ready to talk about the Fold — or at least offer news that there will soon be news. The exec told The Korea Herald that Samsung, “has reviewed the defect caused from substances (that entered the device), and we will reach a conclusion in a couple of days (on the launch).”
What Koh appears to be referring to specifically are the gaps in the fold mechanism that allowed material to get behind the display, damaging it when pressure was applied to the touchscreen.
From the sound of things, Samsung is hoping to have an update on timing at some point this week or early next, at the latest. Koh added, “We will not be too late,” which the paper took to be a suggestion that the Fold will begin shipping earlier than expected.
Samsung no doubt is hoping to have it out sooner than later, but the Note debacle’s two recalls should serve as a reminder that these things ought not be rushed.
A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker.
The complaint against Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the ecommerce giant is violating the Children’s Online Privacy Protection Act (Coppa) — including by failing to obtain proper consents for the use of kids’ data.
As with Amazon’s other Echo smart speakers, the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is that it’s intended for children to use — which makes it subject to US privacy regulation intended to protect kids from commercial exploitation online.
The complaint, which can be read in full via the group’s complaint website, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.
They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under Coppa to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility).
A number of other concerns are also being raised about Amazon’s device with the FTC.
Amazon released the Echo Dot Kids a year ago — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware.
There are differences in the software, though. In parallel Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.
Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality).
A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family, and premium skills and stories from the likes of Disney, National Geographic and Nickelodeon.
At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes).
But while the company was ploughing resource into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the Coppa complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.
Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.
More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third party services and skills.
Instead the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.
They are also objecting to how Amazon is obtaining parental consent — arguing its system for doing so is inadequate because it merely asks that a credit card, debit card or debit gift card number be inputted.
“It does not verify that the person “consenting” is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”
Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite Coppa requiring that children’s data be held for no longer than is reasonably necessary.
They found that additional data, such as transcripts of audio recordings, was still retained even after the audio recordings themselves had been deleted. To remove that residue, a parent must contact Amazon customer service and explicitly request deletion of their child’s entire profile, which also removes the parent’s access to parental controls and the child’s access to content provided via FreeTime. The complaint therefore argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too.
Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen/read every recording of their child to understand what has been stored.
They further highlight that audio recordings captured by the Echo Dot Kids can of course include sensitive personal details, such as when a child uses Alexa’s ‘remember’ feature to ask the AI to store personal data like their address and contact details, or personal health information like a food allergy.
The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a playdate and they end up playing with the Echo together.
Responding to the complaint, Amazon has denied it is in breach of Coppa. In a statement a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice [amazon.com].”
An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information.
At the time of writing the FTC had not responded to a request for comment on the complaint.
Over in Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied”, and suggested profiling at such an early age could lead to a data-disadvantaged generation.
Responding to rising concerns the UK privacy regulator launched a consultation on a draft Code of Practice for age appropriate design last month, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults.