08 May 2018

Android P leverages DeepMind for new Adaptive Battery feature


No surprise here: Android P was the highlight of today’s Google I/O keynote. The new version of the company’s mobile operating system still doesn’t have a name (at least not as of this writing), but the company has already highlighted a number of key new features, including, notably, Adaptive Battery.

Aimed at addressing one of the biggest complaints about basically every handset, the new feature is designed to make more efficient use of on-board hardware. Google’s own DeepMind is doing much of the heavy lifting here, relying on user habits to determine which apps people use and when, and allocating power accordingly.
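
It is easy to picture a toy version of the idea, even though the production system is a learned model. The Python sketch below scores each app by how recently and how often it has been opened, sorts apps into usage tiers, and throttles background wakeups for the ones you rarely touch. The tier names, thresholds and scoring here are invented for illustration and are not Google’s implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AppStats:
    launches: list = field(default_factory=list)  # unix timestamps of app launches

    def score(self, now: float) -> float:
        # Simple habit score: launch frequency over the last week, nudged toward recency.
        week = 7 * 24 * 3600
        recent = [t for t in self.launches if now - t < week]
        if not recent:
            return 0.0
        recency_bonus = 1.0 / (1.0 + (now - max(recent)) / 3600)  # decays per hour
        return len(recent) + recency_bonus

def assign_tier(score: float) -> str:
    # Invented tiers: apps in lower tiers get fewer background wakeups.
    if score >= 20:
        return "active"
    if score >= 5:
        return "frequent"
    if score > 0:
        return "rare"
    return "never"

def wakeups_per_hour(tier: str) -> int:
    # Hypothetical per-tier wakeup budget.
    return {"active": 60, "frequent": 10, "rare": 1, "never": 0}[tier]

now = time.time()
usage = {
    "messaging": AppStats(launches=[now - i * 1800 for i in range(40)]),
    "old_game": AppStats(launches=[now - 6 * 24 * 3600]),
}
for app, stats in usage.items():
    tier = assign_tier(stats.score(now))
    print(f"{app}: tier={tier}, wakeup budget={wakeups_per_hour(tier)}/hour")
```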

According to the company, the new feature is capable of “anticipating actions,” resulting in 30-percent fewer CPU wakeups. Google has promised more information on the feature in the upcoming developer keynote. Combined with larger on-board batteries and faster charging in recent handsets, the new tech could go a long way toward changing the way users interact with their devices, shifting from the all-night charge model to quick charging bursts — meaning, for better or worse, you can sleep with your handset nearby without having to worry about keeping it plugged in.


Read Full Article

With App Actions and Slices, Google introduces more ways for users to interact with the apps on their phones


With Instant Apps, Google offers a feature for Android users that allows them to load a small part of an app right from the search results and get a native app experience. With Slices, Google is launching a new feature today that may look somewhat similar at first glance, but which solves a very different problem. While Instant Apps focus on providing a full app experience and are a great way to get users to install the full app, Slices are about solving a small, well-defined problem — and they work with apps that are already installed on your device.

In addition to Slices, Google also today announced App Actions, a new feature in Android P. Actions allow developers to bring their content directly to Android surfaces like Search, the Google Assistant and the Google Launcher when and where the user needs it. The idea here is to surface not just the right content, which is something Google has long done, but also the right action. Google says these Actions will appear based on usage and relevance. Some of the details here remain a bit unclear, but Google says this feature is modeled after the Conversational Actions for the Google Assistant and that developers will soon be able to give them a spin themselves by signing up for early access here.

Slices is also meant to get users to interact more with the apps they have already installed, but the overall premise is a bit different from App Actions. Slices essentially provide users with a mini snippet of an app and they can appear in Google Search and the Google Assistant. From the developer’s perspective, they are all about driving the usage of their apps, but from the user’s perspective, they look like an easy way to get something done quickly.

“A slice is designed to solve a problem: I’m a user and want to get something quickly done on my device,” Google’s PM director for Android Stephanie Saad Cuthbertson told me ahead of today’s announcement. Maybe that’s calling a Lyft or booking a hotel room, for example. To surface those slices, all you have to do is type “I want to book a ride” in the search box on Android and you’ll see that mini version of the app right there without having to go into the main app.

“This radically changes how users interact with the app,” Cuthbertson said. She also noted that developers obviously want people to use their app, so every additional spot where users can interact with it is a win for them.

Slices will launch with Android P, but will be broadly available across Android versions.

To make things easier for developers, Google modeled the development pattern after Android Notifications. All Android developers are pretty familiar with that, so getting started with Slices should be pretty straightforward. Google is also providing developers with templates that make it easier to build the user interface for these and ensure that the interface will be consistent across Slices. Developers who want to branch out, though, have the flexibility to build their experience from the ground up, too.
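
The real Slices API lives in Android’s support libraries and is written against Java and Kotlin, so the snippet below is not that API. It is only a language-agnostic sketch, in Python, of the template idea described above: a small structured payload (title, subtitle, one primary action) that a host surface such as Search can render the same way for every app. All of the names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SliceTemplate:
    """Hypothetical stand-in for a templated mini app surface."""
    title: str
    subtitle: Optional[str]
    primary_action: Callable[[], None]  # what happens when the user taps the slice

def render(slice_: SliceTemplate) -> str:
    # A host surface (search results, for instance) renders every slice the same way,
    # which is how templates keep the experience consistent across apps.
    sub = f" · {slice_.subtitle}" if slice_.subtitle else ""
    return f"[{slice_.title}{sub}]  (tap to act)"

ride_slice = SliceTemplate(
    title="Book a ride",
    subtitle="Pickup in 4 min",
    primary_action=lambda: print("opening the ride-hailing flow..."),
)
print(render(ride_slice))
ride_slice.primary_action()
```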


Read Full Article

Google Assistant is coming to Google Maps


Google wants to bundle its voice assistant into every device and app, and integrating Google Assistant into Google Maps certainly makes sense. It’ll be available on iOS and Android this summer.

At Google I/O, director of Google Assistant Lilian Rincon showed a demo of Google Maps with Google Assistant. Let’s say you’re driving and you’re using Google Maps for directions. You can ask Google Assistant to share your ETA without touching your phone.

You can also control music with your voice, for instance. Rincon even played music on YouTube, but without the video element, of course. It lets you access YouTube’s extensive music library while driving.

If you’re using a newer car with Android Auto or Apple CarPlay, you’ve already been using voice assistants in your car. But many users rely exclusively on their phone. That’s why it makes sense to integrate Google Assistant in Google Maps directly.

It’s also a great way to promote Google Assistant to users who are not familiar with it yet. That could be an issue as Google Assistant asks for a ton of data when you first set it up. It forces you to share your location history, web history and app activity. Basically you let Google access everything you do with your phone.


Read Full Article

iOS will soon disable USB connection if left locked for a week


In a move seemingly designed specifically to frustrate law enforcement, Apple is adding a security feature to iOS that totally disables data being sent over USB if the device isn’t unlocked for a period of 7 days. This spoils many methods for exploiting that connection to coax information out of the device without the user’s consent.

The feature, called USB Restricted Mode, was first noticed by Elcomsoft researchers looking through the iOS 11.4 code. It disables USB data (it will still charge) if the phone is left locked for a week, re-enabling it if it’s unlocked normally.
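
The behavior described is simple to model: USB data transfer is allowed only if the device has been unlocked within the past seven days, while charging always works. Below is a minimal Python sketch of that policy; the class and method names are invented for illustration, and Apple’s actual implementation is of course not public.

```python
from datetime import datetime, timedelta

USB_DATA_WINDOW = timedelta(days=7)  # window described for USB Restricted Mode

class Device:
    def __init__(self):
        self.last_unlock = datetime.now()

    def unlock(self):
        # A normal unlock (PIN, Touch ID, Face ID) resets the window.
        self.last_unlock = datetime.now()

    def usb_data_allowed(self, now=None) -> bool:
        now = now or datetime.now()
        return now - self.last_unlock < USB_DATA_WINDOW

    def usb_charging_allowed(self) -> bool:
        return True  # charging keeps working regardless of lock state

phone = Device()
a_week_later = datetime.now() + timedelta(days=8)
print(phone.usb_data_allowed(a_week_later))  # False: locked for over a week
phone.unlock()
print(phone.usb_data_allowed())              # True again after a normal unlock
```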

Normally when an iPhone is plugged into another device, whether it’s the owner’s computer or another, there is an interchange of data where the phone and computer figure out if they recognize each other, if they’re authorized to send or back up data, and so on. This connection can be taken advantage of if the computer being connected to is attempting to break into the phone.

USB Restricted Mode is likely a response to the fact that iPhones seized by law enforcement or by malicious actors like thieves will essentially sit and wait patiently for this kind of software exploit to be applied to them. If an officer collects a phone during a case, but there are no known ways to force open the version of iOS it’s running, no problem: just stick it in evidence and wait until some security contractor sells the department a 0-day.

But what if, a week after that phone was taken, it shut down its own Lightning port’s ability to send or receive data, or even to recognize that it’s connected to a computer? That would prevent the law from ever having the opportunity to attempt to break into the device unless they move quickly.

On the other hand, had its owner simply left the phone at home while on vacation, they could pick it up, put in their PIN, and it’s like nothing ever happened. Like the very best security measures, adversaries will curse its name while users may not even know it exists. Really, this is one of those security features that seems obvious in retrospect and I would not be surprised if other phone makers copy it in short order.

Had this feature been in place a couple of years ago, it would have prevented that entire drama with the FBI, which milked its ongoing inability to access a target phone for months, reportedly concealing its own capabilities all the while, likely to make it a political issue and pressure lawmakers into compelling Apple to help. That kind of grandstanding doesn’t work so well on a 7-day deadline.

It’s not a perfect solution, of course, but there are no perfect solutions in security. This may simply force all iPhone-related investigations to get high priority in courts, so that existing exploits can be applied legally within the 7-day limit (and, presumably, every few days thereafter). All the same, it should be a powerful barrier against the kind of eventual, potential access through undocumented exploits from third parties that seems to threaten even the latest models and OS versions.


Read Full Article

Google Duplex: An AI System for Accomplishing Real World Tasks Over the Phone

The Google Assistant will soon be able to call restaurants and make reservations for you


Google just showed a crazy (and terrifying) new feature for the Google Assistant at its I/O developer conference. The Assistant will soon be able to make calls for you to make a reservation — maybe for a salon appointment or to reserve a table at a restaurant that doesn’t take online bookings. For now, this was only a demo, but the company plans to bring this feature to the Assistant in the future.

In the demo, Google showed how you can tell the Assistant that you want to make a haircut appointment. The Assistant can then make that call, talk to whoever answers and make the request. In the demo, the Assistant even handled complicated conversations, adding little hints that make it sound natural. Even for calls that don’t quite go as expected, the Assistant can handle these interactions quite gracefully — though Google obviously only demoed two examples that worked out quite well.

Google calls this feature “Duplex” and it’ll roll out at some point in the future.

The crazy thing here is that the Assistant in the demos was able to sound quite human, adding little pauses to the voice queries and responses for example. I’m sure that restaurant workers will soon figure out which voice signifies a call from the Assistant and have some fun with it.

There’s always a chance that Google fudged this demo a bit, so we’ll have to wait and see what it’ll actually sound like when it goes live.


Read Full Article

Google brings its visual assistant to Android devices with Google Assistant


Google said it is rolling out its visual assistant, which brings up information as well as ways to interact with apps via a Google Assistant voice request in a full-screen experience, to Android phones this summer.

When an Android user makes a query through Google Assistant, Google will provide a more interactive visual experience on the phone. That includes ways to interact with smart home products, like thermostats, or directly with apps like the Starbucks app. The visual assistant is coming to iOS devices this year as well. You can make a voice query such as “what is the temperature right now,” and a display shows up with a way to change the temperature.

Users will also be able to swipe up to get access to a visual snapshot of what’s happening that day, including navigation to work, reminders, and other services like that. All this aims to provide users a way to quickly get access to their daily activities without having to string together a series of sentences or taps in order to get there.

Google’s visual assistant on phones is more of an evolution of how users can interact with Google’s services and integrations in a more seamless way. Voice interfaces have become increasingly robust with the emergence of Google Home and Alexa, allowing users to interact with devices and other services by just saying things to their phones or devices at home. But sometimes there are more complex interactions, such as tweaking the temperature slightly, and having a visual display makes more sense.

Each new touch point developers get — such as a full-screen display after a voice query — gives companies more and more ways to keep the attention of potential customers and users. While Alexa offers developers a way to get a voice assistant into their tools, as does SoundHound with its Houndify platform, Google is trying to figure out what the next step is for a user after asking Google a question. That makes the most sense on the phone, where users can quickly get a new interface for interactions that call for some kind of visual element.


Read Full Article

A Google Assistant update will teach kids to say ‘please’


No more rudely yelling at your Google Home smart speaker, kids. Google today announced at its I/O developer conference a new Google Assistant setting for families called “Pretty Please.” The feature will teach children to use polite language when interacting with the Google Assistant, and kids who do will receive thanks from the virtual assistant in response.

For example, when children say “please,” the Assistant will respond with some sort of positive reinforcement while performing the requested task.

During a brief demo, the Assistant was shown interacting with kids, and saying things like “thanks for saying please,” “thanks for asking so nicely,” or “you’re very polite.”

The feature arrives at a time when parents are growing concerned that kids are learning to treat the virtual assistants in smart speakers rudely, and that this behavior will carry over into their interactions with people.

Amazon recently addressed this problem with an Alexa update called Magic Word, which is just now rolling out.

Google says its Pretty Please feature will launch later this year.


Read Full Article

Google makes talking to the Assistant feel more natural


Google today announced a major update to the Google Assistant at its I/O developer conference. The main idea here is to allow you to have more natural conversations with the Google Assistant. Instead of having to say “Hey Google” or “Ok Google” every time you want to say a command, you’ll only have to do this the first time and then you can have a conversation with the Assistant.

Google calls this feature ‘continued conversation’ and it’ll roll out in the coming weeks.

The company is also adding a new feature that allows you to ask multiple questions within the same request. Google’s Scott Huffman noted that it may seem like a simple feature — just listen for the ‘and’ — but it’s actually quite difficult. Thanks to this new feature, you can now ask about the recent scores from a game and then how well a specific player did in it, all within one query. No second “Ok Google” needed.
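
Huffman’s point about why ‘just listen for the and’ is harder than it sounds is easy to demonstrate. The hypothetical Python sketch below shows the failure mode of a naive splitter rather than anything about Google’s actual approach: some “and”s separate two questions, while others sit inside a single request.

```python
def naive_split(query: str) -> list:
    # The obviously-wrong approach: treat every "and" as a question boundary.
    return [part.strip() for part in query.split(" and ")]

# Two genuinely separate questions: splitting happens to work.
print(naive_split("who won the Warriors game and how many points did Curry score"))
# -> ['who won the Warriors game', 'how many points did Curry score']

# One question that contains "and": splitting wrecks it.
print(naive_split("show me flights to Portland and Seattle next weekend"))
# -> ['show me flights to Portland', 'Seattle next weekend']  (not two questions!)
```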

All of this will work everywhere the Google Assistant works, including the car, where Google is introducing the Google Assistant to Google Maps.

All of this will introduce a far more natural way to interact with the Google Assistant. Huffman admitted how annoying the constant “Hey Google” requests are, and if you have a Google Home, that will definitely sound familiar to you.


Read Full Article

Google adds Morse code input to Gboard


Google is adding Morse code input to its mobile keyboard. It’ll be available as a beta on iOS and Android later today. The company announced the new feature at Google I/O after showing a video of Tania Finlayson.

Finlayson has been having a hard time communicating with other people due to her condition. She found a great way to write sentences and talk with people using Morse code.

Her husband developed a custom device that analyzes her head movements and transcodes them into Morse code. When she triggers the left button, it adds a short signal, while the right button triggers a long signal. Her device then converts the text into speech.

Google’s implementation will replace the keyboard with two areas for short and long signals. There are multiple word suggestions above the keyboard just like on the normal keyboard. The company has also created a Morse poster so that you can learn Morse code more easily.
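
Turning those two signals back into text is the classic Morse translation problem. Here is a small, self-contained Python sketch (not Gboard’s implementation) that maps dot/dash sequences to letters, with letters separated by a space and words separated by ' / '.

```python
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

def decode(morse: str) -> str:
    """Decode space-separated letters and ' / '-separated words."""
    words = []
    for word in morse.split(" / "):
        letters = [MORSE_TO_CHAR.get(code, "?") for code in word.split()]
        words.append("".join(letters))
    return " ".join(words)

print(decode(".... .. / - .... . .-. ."))  # -> "HI THERE"
```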

As with all accessibility features, the more input methods the better. Everything that makes technology more accessible is a good thing.

Of course, Google used its gigantic I/O conference to introduce this feature to make the company look good too. But it’s a fine trade-off, a win-win for both Google and users who can’t use a traditional keyboard.


Read Full Article

Google’s new ‘smart compose’ will help you write your emails


Following the big Gmail revamp, Google today announced another new feature for Gmail called “Smart Compose,” which will actually help you write your emails using machine learning technology.

As you type, Smart Compose pops up suggestions about what you might want to write next – similar to Google autocomplete.

Google CEO Sundar Pichai briefly demoed the technology on stage at the Google I/O developer conference this morning, showing how Smart Compose could finish sentences by suggesting text, including common phrases or even addresses.

“As the name suggests, we use machine learning to start suggesting phrases for you,” Pichai explained. “All you have to do is hit tab to keep auto-completing. In this case, it understands the subject is ‘taco Tuesday.’ It takes care of mundane things like addresses so you can focus on what you want to type,” he continued.
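
Under the hood this is a learned sequence model, but the interaction loop itself is simple: match what the user has typed so far against likely continuations and offer the remainder, which a tab press accepts. The Python sketch below fakes that loop with a hard-coded phrase list; everything in it is illustrative and is not how Gmail actually generates suggestions.

```python
COMMON_PHRASES = [
    "looking forward to seeing you",
    "let me know if that works for you",
    "thanks for the quick reply",
    "taco tuesday at my place",
]

def suggest(typed: str) -> str:
    """Return the remainder of the best matching phrase, or '' if none."""
    typed_lower = typed.lower()
    for phrase in COMMON_PHRASES:
        if len(typed_lower) >= 3 and phrase.startswith(typed_lower):
            return phrase[len(typed_lower):]
    return ""

draft = "looking forw"
completion = suggest(draft)
print(draft + "[" + completion + "]")  # the bracketed part is what hitting tab would accept
# -> looking forw[ard to seeing you]
```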

“I’ve been sending a lot more emails to the company. Not sure what they think of it, but it’s been great,” Pichai said.

The feature is rolling out to all users this month.


Read Full Article

Google announces a new generation for its TPU machine learning hardware


As the war for creating customized AI hardware heats up, Google is now rolling out its third generation of silicon, the Tensor Processing Unit 3.0.

Google says the new TPU is eight times more powerful than last year’s version, with up to 100 petaflops in performance. Google also said this is the first time the company has had to include liquid cooling in its data centers. Google joins pretty much every other major company in looking to create custom silicon in order to handle its machine learning operations. And while multiple frameworks for developing machine learning tools have emerged, including PyTorch and Caffe2, this one is optimized for Google’s TensorFlow. Google is looking to make Google Cloud an omnipresent platform at the scale of Amazon, and offering better machine learning tools is quickly becoming table stakes.
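
For developers, “optimized for Google’s TensorFlow” mostly means the TPU shows up as just another distribution target. The sketch below uses present-day TF 2.x names, which postdate this announcement; the TPU address is a placeholder and the code needs real Cloud TPU hardware behind it, so treat it as a rough illustration rather than the 2018 workflow.

```python
import tensorflow as tf

# Placeholder address; on Cloud TPU VMs the resolver is typically created with tpu="local".
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Any Keras model built inside this scope is replicated across the TPU cores.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
```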

Amazon and Facebook are both working on their own kind of custom silicon. Facebook’s hardware is optimized for its Caffe2 framework, which is designed to handle the massive information graphs it has on its users. You can think about it as taking everything Facebook knows about you — your birthday, your friend graph, and everything that goes into the news feed algorithm — fed into a complex machine learning framework that works best for its own operations. That, in the end, may have ended up requiring a customized approach to hardware. We know less about Amazon’s goals here, but it also wants to own the cloud infrastructure ecosystem with AWS. 

All this has also spun up an increasingly large and well-funded startup ecosystem looking to create customized hardware targeted toward machine learning. There are startups like Cerebras Systems, SambaNova Systems, and Mythic, with a half dozen or so beyond that as well (not even including the activity in China). Each is looking to exploit a similar niche: find a way to outmaneuver Nvidia on price or performance for machine learning tasks. Most of those startups have raised more than $30 million.

Google unveiled its second-generation TPU processor at I/O last year, so it wasn’t a huge surprise that we’d see another one this year. We’d heard from sources for weeks that it was coming, and that the company is already hard at work figuring out what comes next. Google at the time touted performance, though the point of all these tools is to make it a little easier and more palatable in the first place. 

There are a lot of questions around building custom silicon, however. It may be that developers don’t need a super-efficient piece of silicon when an Nvidia card that’s a few years old can do the trick. But data sets are getting increasingly larger, and having the biggest and best data set is what creates defensibility for any company these days. Just the prospect of making it easier and cheaper as companies scale may be enough to get them to adopt something like GCP.

Intel, too, is looking to get in here with its own products. Intel has been beating the drum on FPGAs as well, which are designed to be more modular and flexible as the needs of machine learning change over time. But again, the knock there is price and difficulty, as programming FPGAs can be a hard problem in which not many engineers have expertise. Microsoft is also betting on FPGAs, and unveiled what it’s calling Brainwave just yesterday at its BUILD conference for its Azure cloud platform — which is increasingly a significant portion of its future potential.

Google more or less seems to want to own the entire stack of how we operate on the internet. It starts at the TPU, with TensorFlow layered on top of that. If it manages to succeed there, it gets more data, makes its tools and services faster and faster, and eventually reaches a point where its AI tools are too far ahead and locks developers and users into its ecosystem. Google is at its heart an advertising business, but it’s gradually expanding into new business segments that all require robust data sets and operations to learn human behavior. 

Now the challenge will be having the best pitch for developers to not only get them into GCP and other services, but also keep them locked into TensorFlow. But as Facebook increasingly looks to challenge that with alternate frameworks like PyTorch, there may be more difficulty than originally thought. Facebook unveiled a new version of PyTorch at its main annual conference, F8, just last month. We’ll have to see if Google is able to respond adequately to stay ahead, and that starts with a new generation of hardware.


Read Full Article

Google Photos will add more AI-powered fixes, including colorization of black-and-white photos


Google Photos already makes it easy for users to correct their photos with built-in editing tools and clever, A.I.-powered features for automatically creating collages, animations, movies, stylized photos, and more. Now the company is making it even easier to fix photos with a new version of the Google Photos app that will suggest quick fixes and other tweaks – like rotations, brightness corrections, or adding pops of color, for example – right below the photo you’re viewing.

The changes, which are being introduced on stage at the Google I/O developer conference today, are yet another example of this year’s theme of bringing A.I. technology closer to the end user.

In Google Photos’ case, that means no longer just hiding the A.I. away within the “Assistant” tab, but putting it directly in the main interface.

The company says that the idea to add the fix suggestions to the photos themselves came about because they realized this is where the app sees the most activity.

“One of the insights we’ve had as Google Photos has grown significantly over the years is that people spend a lot of time looking at photos inside of Google Photos,” explained Google Photos Product Lead, Dave Lieb, in an earlier interview with TechCrunch about the update. “They do it to the tune of about 5 billion photo views per day,” he added.

That got the team thinking that they should focus on solving some of the problems people see when they’re looking at their photos.

Google Photos will begin to do just that with the changes that begin rolling out this week.

For example, if you come to a photo that’s too dark, there will be a little button you can tap underneath the photo to fix the brightness. If the photo is on its side, you can press a button to rotate it. These are things you could have done before, of course, by accessing the editing tools manually. But the updated app will now just make it a one-tap fix.

Also new are tools inspired by Google’s Photoscan technology that will fix photos of documents and paperwork, by zooming in, cropping and rectifying the photo.
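
Rectifying a photographed document is, at its core, a perspective warp from the four detected page corners to a flat rectangle. The OpenCV sketch below shows only that final step and assumes the corners have already been found; detecting them automatically, as Photoscan does, is the hard part, and the coordinates and file names here are placeholders.

```python
import cv2
import numpy as np

image = cv2.imread("receipt_photo.jpg")  # placeholder path

# Four corners of the document in the photo (top-left, top-right, bottom-right, bottom-left).
# In a real pipeline these come from a corner/edge detector; here they are hard-coded.
src = np.float32([[120, 80], [980, 60], [1010, 1400], [90, 1420]])

width, height = 800, 1100  # output size of the flattened document
dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])

matrix = cv2.getPerspectiveTransform(src, dst)
flattened = cv2.warpPerspective(image, matrix, (width, height))
cv2.imwrite("receipt_flat.jpg", flattened)
```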

One tool will analyze who’s in the photo and prompt you to share it with them, similar to the previously launched sharing suggestions feature. Another prompts you to archive photos of old receipts. And one, called “Color Pop,” will identify when it could pop out whoever’s in the foreground by turning the background to black and white.
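
The Color Pop effect is conceptually a masked desaturation: keep the subject’s pixels as they are and convert everything else to grayscale. Here is a minimal Python sketch using NumPy and Pillow that assumes a subject mask is already available; in Google Photos that mask comes from the company’s own segmentation models, and the file names below are placeholders.

```python
import numpy as np
from PIL import Image

photo = Image.open("portrait.jpg").convert("RGB")       # placeholder path
mask = Image.open("subject_mask.png").convert("L")      # white = subject, black = background
mask = mask.resize(photo.size)

rgb = np.asarray(photo, dtype=np.float32)
gray = rgb.mean(axis=2, keepdims=True).repeat(3, axis=2)         # desaturated copy
alpha = (np.asarray(mask, dtype=np.float32) / 255.0)[..., None]  # 1.0 on the subject

popped = alpha * rgb + (1.0 - alpha) * gray  # subject stays in color, background goes B&W
Image.fromarray(popped.astype(np.uint8)).save("color_pop.jpg")
```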

In addition to the new tools arriving this week, Google is also prepping a “Colorize” tool that will turn black-and-white photos into colorized images. This tool, too, was inspired by Photoscan, as Google found people were scanning in old family photos, including the black-and-white ones.

“Our team thought, what if we applied computer vision and A.I. to black-and-white photos? Could we re-create a color version of those photos?” said Lieb. They wanted to see if technology could be trained to re-colorize images so you could really see what it was like back then, or at least a close approximation. That’s how Colorize will work, when it’s ready.

A neural network will try to infer the colors that likely work best in the photo – like turning the grass green, for example. Getting other things right – like skin tone, perhaps – could be more tricky; so the team isn’t launching the feature until it’s “really right,” they said.

The new Google Photos features were announced along with news of a developer preview version of the Google Photos Library API, which allows third-party developers to take advantage of Google Photos’ storage, infrastructure and machine intelligence. More on that is here.


Read Full Article