24 July 2018

5 Times Your Data Was Shockingly Handed Over to the NSA



It’s hard to know which companies you can trust these days.

Apple deserves commendation for refusing to bow to FBI demands for “backdoor access,” but plenty of others will hand over information to the NSA without a second thought.

Here’s a look at organizations which we know have acquiesced and given the NSA access to user data. Use these services at your own peril.

1. Yahoo

Possibly the worst offender for handing over data to the NSA.

In truth, it’s already remarkable that people continue to use any of Yahoo’s services given its disastrous track record of data breaches. Hackers compromised three billion accounts in August 2013, a further 500 million in late 2014, and yet another 200 million in late 2015.

However, that’s nothing compared to the company’s NSA collusion.

In 2016, it was revealed that Yahoo had specifically created a secret email filter that would monitor its users’ inboxes and automatically send anything that was flagged to the NSA; the NSA didn’t even need to ask for the data anymore.

Then-CEO Marissa Mayer made the decision to create the filter. She did so without the knowledge of the company’s top security officer, Alex Stamos. When he discovered the program, he quit on the spot.

Later, when other employees also found the filter, it was so invasive that they assumed it was the work of a malicious hacker.

2. Amazon

The initial furor over PRISM and the NSA has died down a bit, but don’t let that lull you into a false sense of security.

Amazon wasn’t even one of the companies listed in the original leaked NSA slides, yet it provides the NSA with a vast amount of users’ data.

In its most recent transparency report [PDF], published at the end of December 2017, Amazon disclosed that it received:

  • 1,618 subpoenas. The company fully complied with 42 percent and partially complied with 31 percent.
  • 229 search warrants. The company fully complied with 44 percent and partially complied with 37 percent.
  • 89 other court orders. The company fully complied with 52 percent and partially complied with 32 percent.

All the requests except one came from within the United States, and the figures mark an increase of nearly 15 percent compared to the previous six months.

Additionally, Amazon is not allowed to say how many national security letters it has received. The company can, however, say if it did not receive any. Amazon chose to declare that it had received between zero and 249.
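To put those percentages in absolute terms, here is a quick back-of-envelope sketch using only the figures quoted above (counts are rounded, so treat them as approximations):

```python
# Rough compliance counts implied by the percentages in Amazon's report.
# All figures come from the transparency report cited above.
requests = {
    "subpoenas": (1618, 0.42, 0.31),
    "search warrants": (229, 0.44, 0.37),
    "other court orders": (89, 0.52, 0.32),
}

for kind, (total, full, partial) in requests.items():
    print(f"{kind}: ~{round(total * full)} full, ~{round(total * partial)} partial")
```

Run against the subpoena figures, for instance, this suggests roughly 680 full and 502 partial compliances out of 1,618 requests.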

3. Verizon

In June 2013, the UK’s Guardian newspaper obtained a leaked document which showed the NSA collected phone records from millions of Verizon customers every day.

Thanks to a top-secret court order from April of the same year, Verizon was required to give the NSA details on every single phone call in its systems, including both domestic and international calls.

The court order was valid for three months and expired in July 2013.

The order said Verizon had to hand over the numbers of both parties, location data, any unique identifiers, and the time and duration of all calls. That metadata can reveal a lot about the people behind the calls.

Worse still, the ruling forbade Verizon from telling the public about the court order or the NSA’s request.

The program was codenamed Ragtime. Further leaked documents in late-2017 showed the Ragtime project was not only alive and well, but was also much broader in scope than first imagined.

Ragtime-P, which was the part of the program that the Verizon court order fell under, was still active. But the leak revealed there are 10 further variants. From a US citizen standpoint, the most troubling is Ragtime-USP (US person).

In theory, American citizens and permanent residents are protected from the collection of phone records after a 2015 ruling; however, the presence of Ragtime-USP calls that into question.

Unfortunately, we don’t know which companies are colluding with Ragtime-USP.

4. Facebook

Facebook has always been at the forefront of the NSA surveillance debate. Like Amazon, it publishes the details on the number of information requests it receives. And today, the NSA is requesting more information than ever before.

The most recent figures available are for the first six months of 2017. The data shows the NSA’s number of requests rose by 26 percent in that period compared to the previous six-month period.

It’s part of the same long-term upwards trend that’s seen the number of requests go from approximately 10,000 in the first six months of 2013 to more than 33,000 in the first six months of 2017.

And, at the same time that Facebook is receiving more requests, the company is also agreeing to more requests. In the first six months of 2013, it agreed to 79 percent of NSA requests. In the first six months of 2017, that had risen to 85 percent.

It goes on. A considerable 57 percent of the NSA requests that Facebook received included a non-disclosure clause. It means Facebook cannot tell the user that the NSA requested their data. If you think the clause has the potential to be abused, you’d be right; the number of non-disclosure orders in the first six months of 2017 rose by a staggering 50 percent compared to the last six months of 2016.

5. AT&T

Yahoo’s customized email filter might be the most brazen spying apparatus on this list, but AT&T arguably wins the battle for being the most complicit company.

In 2015, a batch of leaked NSA documents laid bare the relationship between the telecoms giant and the government agency.

One document revealed the association between the two was “unique and especially productive.” Another said AT&T was “highly collaborative” and praised the company for its “extreme willingness to help.”

A third document showed NSA employees were repeatedly reminded to be polite when visiting AT&T premises, saying “This is a partnership, not a contractual relationship.”

The collaboration program is called Fairview. It began way back in 1985 after the breakup of Ma Bell but ramped up to its current levels in the aftermath of 9/11. AT&T began giving the NSA access to its information within days of the attack; within the first month of operation, it gave the NSA more than 400 billion internet metadata records.

In 2011, the Fairview program went up another notch. The documents show that AT&T started providing the NSA with 1.1 billion domestic cell phone calling records every day.

And if you think you’re in the clear because you’re not an AT&T customer, think again. One of the leaked documents says the relationship with AT&T “provides unique accesses to other telecoms and ISPs.”

The cross-ISP surveillance is possible because of the way email data collection occurs. To tap a single email, parts of several other emails also need to be collected. For American-to-American communications, the law means the NSA will (theoretically) discard those emails immediately.

However, for foreigner-to-American and foreigner-to-foreigner mails, the law does not apply. As such, the NSA can engage in bulk collection without a warrant. Given so much of the web’s data flows through American cables, this loophole is especially lucrative to the agency.

Protecting Yourself Against Internet Surveillance

We’ve only discussed five of the most worrisome and high-profile cases of the NSA forcefully grabbing data from companies.

However, there are undoubtedly near-endless cases of smaller companies also handing over data—either willingly or by legal force. Sadly, those cases are not in the public domain and probably never will be.

Despite all the collection, there are still some steps you can take to protect yourself from excessive internet surveillance.


Snap Kills Snapcash, Its Peer-to-Peer Payment Service



Snap is shutting down Snapcash, its peer-to-peer payment service. It looks as though Snapcash has never really taken off in the way Snap originally hoped, but its disappearance will still be a loss for those who use it on a regular basis.

In 2014, Snap launched Snapcash as an alternative to Venmo. Snapcash, made possible by a partnership with Square, let you easily send money to friends. However, four years on, Snap has decided Snapcash has outlived its usefulness.

Snap Shutters Snapcash

TechCrunch broke the story after uncovering code in the Snapchat app that included a “Snapcash deprecation message” reading, “Snapcash will no longer be available after %s [date]”. The date has since been confirmed as August 30, 2018.

Snap hasn’t officially announced the news, but did say, “Snapcash was our first product created in partnership with another company—Square. We’re thankful for all the Snapchatters who used Snapcash for the last four years and for Square’s partnership!”

There is speculation that Snap has decided to can Snapcash after discovering people were using it as a means to pay for erotic content. A quick Twitter search reveals people hawking explicit photos of themselves for Snapcash payments.

Snapcash was intended to let friends split restaurant bills or Uber fares. So seeing it used to pay “adult entertainers” will surely have rankled Snap, especially as the service was never a big name in the world of peer-to-peer payments.

Alternatives to Snapcash

We suspect most people won’t have even heard of Snapcash, which is the most likely reason Snap is getting rid of it. In the end, people will use the payment service that makes most sense to them, which clearly wasn’t ever going to be Snapcash.

Still, not to worry, as there are plenty of other ways of sending money to people. Options include the aforementioned Venmo, as well as PayPal, Zelle, Google Pay, and Apple Pay Cash. But let’s not forget that cold, hard cash still exists.


Google is bringing voice commands to Hangouts Meet hardware


Today at the Google Next conference in San Francisco, the company announced it would soon be enhancing Google meeting hardware to allow voice commands.

For many people, setting up meetings remains a major pain point. The company wants to bring the same voice-enabled artificial intelligence it uses for tools like Google Assistant to meeting hardware. To that end, it introduced Voice Commands for Meet today.

This will allow users to say, “Hey Google, start the meeting.” And this is just a starting point: Google promises to add more commands over time, with the functionality rolling out later this year.

Just last fall, the company launched the Hangouts Meet hardware program, which gave Meet customers a way to launch meetings using Google hardware or the traditional Cisco or Polycom hardware found in many conference rooms. Google reports that customers have set up thousands of these Hangouts Meet-enabled meeting rooms.

Providing simple voice commands to set up a meeting, invite participants, join a meeting, and so forth can greatly simplify meeting administration, which even after all these years often seems unnecessarily complicated and frustrating.

Users are certainly getting used to interacting with devices thanks to Google Home, the Amazon Echo and similar devices.

It’s worth noting that Google is not alone in trying to bring voice-enabled hardware into the meeting room. Last November, Cisco announced Cisco Spark Assistant to bring voice commands specifically to Cisco meeting room hardware. The underlying voice recognition technology comes from MindMeld, a conversational AI startup that Cisco bought in May 2017 for $125 million.



Google announces Cloud Build, its new continuous integration/continuous delivery platform


It used to be that developers built applications with long lead times and development cycles, leaving plenty of time to prepare. In today’s continuous integration/continuous delivery (CI/CD) world, new versions can go out every day. That requires CI/CD tooling, and today at Google Next in San Francisco, the company announced Cloud Build, its new CI/CD platform.

As Google describes it, Cloud Build is the company’s “fully-managed Continuous Integration/Continuous Delivery (CI/CD) platform that lets you build, test, and deploy software quickly, at scale.”

Cloud Build works across a variety of environments, including VMs, serverless, Kubernetes, and Firebase. What’s more, it supports Docker containers and gives developers and operations teams the flexibility to build, test, and deploy in an increasingly automated fashion.

Google will allow you to use triggers to deploy, so that when certain conditions are met, the update will launch automatically. You can identify vulnerabilities in your packages before you deploy and you can build locally and deploy in the cloud if you so choose.
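For a sense of what this looks like in practice, a build is described in a short YAML config that a trigger can run automatically on each push. A rough, illustrative sketch (the step images follow the gcr.io/cloud-builders convention from Google’s docs, but the “my-app” name is hypothetical):

```yaml
# cloudbuild.yaml -- illustrative sketch; "my-app" is a hypothetical image
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app']
images:
  - 'gcr.io/$PROJECT_ID/my-app'
```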

If there are problems, Cloud Build provides analytics and insights to let you debug via build errors and warnings and filter those warnings to easily identify slow builds or those with other issues you want to see before deploying live.

Google is offering a free version of Cloud Build with up to 120 build minutes per day at no cost. Additional build minutes will be billed at $0.0034 per minute.
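The pricing model is simple to sketch: minutes beyond the daily free allotment are billed at the per-minute rate. A quick illustration using only the numbers above:

```python
# Back-of-envelope daily Cloud Build cost, using the article's figures:
# 120 free build minutes per day, $0.0034 per additional minute.
FREE_MINUTES_PER_DAY = 120
RATE_PER_MINUTE = 0.0034

def daily_cost(build_minutes: int) -> float:
    billable = max(0, build_minutes - FREE_MINUTES_PER_DAY)
    return billable * RATE_PER_MINUTE

print(daily_cost(100))  # fully inside the free tier
print(daily_cost(500))  # 380 billable minutes
```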



Google announces a suite of updates to its contact center tools


As Google pushes further and further into enterprise services, it’s looking to leverage what it’s known for — a strong expertise in machine learning — to power some of the most common enterprise functions, including contact centers.

Now Google is applying many of those learnings in a batch of new updates to its contact center tools, leaning on a key focus of the company: using machine learning for natural language and image recognition. Those capabilities have natural applications in the enterprise, especially for organizations looking to offer the kind of sophisticated customer service that larger companies provide for complex requests. Today’s updates, announced at the Google Cloud Next conference, include a suite of AI tools for its Google Cloud Contact Center.

Today the company said it is releasing a couple of updates to its Dialogflow tools, including a new one called phone gateway, which helps companies automatically assign a working phone number to a virtual agent. The company says you can begin taking those calls in “less than a minute” without infrastructure, with the rest of the machine learning-powered functions like speech recognition and natural language understanding managed by Google.

Google is adding AI-powered agent assist tools to the contact center, which can quickly surface relevant information, like suggested articles. It is also updating its analytics tools, letting companies sift through historical audio data to surface trends, like common calls and complaints. One application would be spotting a confusing update or a broken tool from a high volume of complaints, helping companies get a handle on what’s happening without a ton of overhead.

Other new tools include sentiment analysis, spelling correction, and the ability to understand unstructured documents within a company, such as knowledge base articles, and stream that content into Dialogflow. Dialogflow is also getting native audio response.



Linux Without systemd: Why You Should Use Devuan, the Debian Fork



You may be surprised what constitutes a crisis in the Linux community. Several years ago, the creation of the systemd init system aggravated a number of developers and users. Most Linux-based operating systems adopted systemd, but there are a few that have chosen to chart a different course.

For all the contention, can you even tell the difference between a version of Linux that embraces systemd and one that doesn’t?

Devuan uses Xfce desktop environment

As a clear test case, let’s consider Debian and a variant called Devuan. Debian is one of the oldest and largest Linux-based OSes. In 2014, a group called Veteran UNIX Admins started Devuan, a fork of Debian without systemd. Should you give it a shot?

What’s an init System, Anyway?

Init is short for initialization. The init process is the first process the operating system starts as your Linux-powered computer boots up. It runs in the background for as long as your computer is on, continuing until the computer shuts down.

The init system manages other processes, so that your computer boots, runs, and shuts down smoothly. So while the init system may be largely invisible, it’s also essential.

What’s “Wrong” With systemd?

Systemd is more than an init system. It includes other software, such as networkd and logind, which manage other aspects of your computer. Systemd is a suite of software that serves as the bridge between applications and the underlying Linux kernel, handling tasks as diverse as managing user logins and hotplugging devices.

Traditionally, on Unix-based and Unix-like operating systems (Linux is the latter), developers design software to do one task and to do it well. There have always been exceptions, but with systemd, a core component has diverged from this way of doing things.

As you may expect, there are reasons developers felt the need for a change. For starters, the older init system boots in a linear fashion, loading various scripts one after another in a fixed order. That sequential approach makes it harder to boot a computer quickly and manage core functions (such as connecting to a network) in the smooth manner expected on today’s machines.

Combining many of these tasks into a single project enables Linux-based operating systems to provide a faster bootup experience.
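For concreteness, systemd describes services declaratively in unit files rather than shell scripts, which is part of what lets it start services in parallel. A minimal, hypothetical example (the service name and path are illustrative):

```ini
# /etc/systemd/system/mydaemon.service -- illustrative unit file
[Unit]
Description=Example daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```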

How Devuan Is Different From Debian

Debian 8 was the first version to adopt systemd. The Devuan project began at that time, but the first stable release didn’t land until 2017, alongside the release of Debian 9.

Devuan uses the same APT package manager as Debian, but it maintains its own package repositories. Those are the servers that store the software you download using APT.

Devuan’s repositories contain the same software as Debian, only with patches that enable programs to run without systemd. This mainly affects backend components such as PolicyKit, which controls which users can access or modify certain parts of your PC.

What Is It Like to Use Devuan?

Just like with Debian, there are multiple ways to install Devuan. The “minimal” download provides you with the essential tools you need to get Devuan up and running on your machine. The “live” download provides you with a working desktop that you can test out before installing Devuan onto your computer.

Devuan uses the Xfce desktop environment by default. This is a traditional computing environment akin to how PC interfaces looked several decades ago. Functionally, Xfce is still able to handle most tasks people have come to expect from computers today.

The live version of Devuan comes with plenty of software to cover general expectations. Mozilla Firefox is available for browsing the web. LibreOffice is there for opening and editing documents. GIMP can alter photos and other images. These apps all function as you would expect, with no concern for which init system you’re running.

LibreOffice on Devuan

While Devuan mirrors Debian’s package repositories, the two are not interchangeable. Adding a repository intended for Debian runs the risk of wrecking your installation. You can edit your software sources via the terminal or inside the Synaptic Package Manager, which comes included.
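For illustration only, a Devuan sources entry looks roughly like the following (the `ascii` suite name matches Devuan’s 2017 stable release; check Devuan’s own documentation for the current suite names before editing anything):

```
# /etc/apt/sources.list on a hypothetical Devuan ASCII install
deb http://deb.devuan.org/merged ascii main
deb http://deb.devuan.org/merged ascii-security main
deb http://deb.devuan.org/merged ascii-updates main
```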

Find new software on Devuan

Devuan connects to Wi-Fi and Ethernet networks just fine. You can also expect it to recognize the flash drives and hard drives you plug in. There’s a decent chance you won’t even notice a difference. Systemd is only one way of doing things, not the only way.

What init System Does Devuan Use?

At the end of the day, this question gets to the core of what Devuan is all about.

Devuan defaults to the sysvinit system, which is similar to the System V initialization process used in Unix. Sysvinit was the de facto standard that many versions of Linux, including Debian, used before systemd.
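A sysvinit service is, at heart, just a shell script that responds to start and stop arguments. A minimal, hypothetical sketch (the daemon name and echo-only actions are illustrative; real scripts live in /etc/init.d/, add LSB headers, and call start-stop-daemon):

```shell
#!/bin/sh
# Hypothetical SysV-style init script skeleton; "mydaemon" is illustrative.

do_start() {
    echo "Starting mydaemon"
    # a real script would run: start-stop-daemon --start --exec /usr/sbin/mydaemon
}

do_stop() {
    echo "Stopping mydaemon"
    # a real script would run: start-stop-daemon --stop --exec /usr/sbin/mydaemon
}

case "${1:-}" in
    start)   do_start ;;
    stop)    do_stop ;;
    restart) do_stop; do_start ;;
    "")      : ;;  # no-op when run without an argument
    *)       echo "Usage: $0 {start|stop|restart}"; exit 1 ;;
esac
```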

Devuan also offers numerous alternatives. You can download OpenRC, runit, and others to replace the provided init system.

Do Other Linux-Based OSes Avoid systemd?

Gentoo, the build-your-operating-system-from-scratch Linux distribution, defaults to OpenRC. It’s one of the oldest and most well-known versions of Linux to avoid systemd. Slackware, another ancient Linux-based OS, has opted to stick with sysvinit. PCLinuxOS is a younger option that has also chosen not to switch to systemd.

There are also several Linux distributions based on Devuan, though their number pales in comparison to Debian’s, which serves as a base for many prominent Linux-based OSes such as Ubuntu.

Should You Switch to Devuan?

Are you a sysadmin? Do you build your operating system from scratch or regularly interact with startup daemons and services? If so, are you more comfortable with the way you’ve traditionally managed your system? If your answer is yes, you may prefer Devuan. It’s more a continuation of the way things were than something altogether new.

For the rest of us, the question is more philosophical than pragmatic. Do you like the traditional Unix approach of doing one job and doing it well? Do you take issue with consolidating many tasks into a single project? If so, using Devuan is an expression of your belief in that ideal.

Pragmatically speaking, use Devuan if you want Debian without systemd. If you want systemd, stick with Debian. There isn’t much more to it than that.


Google’s Cloud Functions serverless platform is now generally available


Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. It also always seemed as if Google wasn’t putting quite the same resources behind its serverless play as its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. And there are plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA, and the company also announced today that the service now runs in more regions in the U.S. and Europe.

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.




G Suite now lets businesses choose whether their data is stored in the US or Europe


Data sovereignty is a major issue for many major companies, especially in Europe. So far, Google’s G Suite, which includes products like Gmail, Google Docs and Sheets, didn’t give users any control over where their data was stored at rest, but that’s changing today. As the company announced at its Cloud Next conference in San Francisco, G Suite users can now choose where the primary data for select G Suite apps is stored: in the U.S. or in Europe.

These new data regions are now available to all G Suite Business and Enterprise customers at no additional cost.

“What this means is that for organizations with data- or geo-control requirements, G Suite will now let them choose where a copy of their data for G Suite apps like Gmail should be stored at rest,” said G Suite VP of product management David Thacker.

Google is also adding a tool that makes it easy to move data to another region as employees move between jobs and organizations.

“Given PwC is a global network with operations in 158 countries, I am very happy to see Google investing in data regions for G Suite and thrilled by how easy and intuitive it will be to set up and manage multi-region policies for our domain,” said Rob Tollerton, director of IT at PricewaterhouseCoopers International Limited, in a canned statement about this new feature.



Google brings support for custom translations and text categorization to AutoML


Pre-trained machine learning models are good enough for many use cases, but to get the most out of this technology, you need custom models. Given that it’s not exactly easy to get started with machine learning, Google, like others, has opted for a hybrid approach that allows users to upload their own data to customize existing models. Google’s version of this is AutoML, which until now only offered this capability for machine vision tasks, under the AutoML Vision moniker.

Starting today, the company is adding two new capabilities to AutoML: AutoML Natural Language for predicting text categories and AutoML Translation, which allows users to upload their own language pairs to achieve better translations for texts in highly specialized fields, for example. In addition, Google is launching AutoML Vision out of preview and into its private beta.

Rajen Sheth, the director of product management for Google Cloud AI, said that this extension of AutoML is yet another step toward the company’s vision of democratizing AI. “What we are trying to do with Cloud AI is to make it possible for everyone in the world to use AI and build models for their purposes,” he said. For most of its customers, though, pre-trained models aren’t good enough, yet for most businesses, it’s hard to find the machine learning experts that would allow them to build their own custom models. Given this demand, it’s maybe no surprise that about 18,000 users have signed up for the preview of AutoML Vision so far.

“Natural language is something that is really the next frontier of this,” Sheth noted when he discussed the new Natural Language API. “It’s something that’s very useful to the customers. Because more than 90 percent of our customers’ information within their enterprise is unstructured and free information. And a lot of this is textual documents or emails or whatever it may be. Many customers are trying to find ways to get meaning and information out of those documents.”

As for AutoML Translation, the benefits of this kind of customization are pretty obvious, given that translating highly specialized texts remains the domain of experts. As an example, Sheth noted that “driver” in a technical document could be about a device driver for Windows 10, for example, while in another text it could simply be about somebody who is driving a car (until computers take over that task, too).



Google makes it easier for G Suite admins to investigate security breaches


Google is announcing a fair number of updates to G Suite at its Next conference today, most of which focus on the user experience. In addition to those, though, the company also launched a new security investigation tool for admins that augments the existing tools for preventing and detecting potential security issues. The new tool builds on those and adds remediation features to the G Suite security center.

“The overall goal of the security center in G Suite is to provide administrators with the visibility and control they need to prevent, detect and remediate security issues,” said David Thacker, Google’s VP of product management for G Suite. “Earlier this year, we launched the first major components of this security center that help admins prevent and detect issues.”

Now with this third set of tools in line, G Suite admins can get a better understanding of the threats they are facing and how to remediate them. To do this, Thacker said, analysts and admins will be able to run really advanced queries over many different data sources to identify the users who have been impacted by a breach and then investigate what exactly happened. The tool also makes it easy for admins to remove access to certain files or to delete malicious emails “without having to worry about analyzing logs, which can be time-consuming or require complex scripting,” as Thacker noted.

This new security tool is now available as an Early Adopter Program for G Suite Enterprise customers.



Google’s Smart Compose is now ready to write emails for G Suite users


At its Cloud Next conference, Google today announced that Smart Compose, a new feature in Gmail that essentially autocompletes sentences for you, will become available to all G Suite users in the coming weeks.

Smart Compose is part of the new Gmail, where it has been available for the last few months as an experimental feature for those who opt in to using it. In my experience, it can occasionally save you a few keystrokes, though don’t think that it’ll automatically write your emails for you. It’s mostly useful for greetings, addresses and finishing relatively standard phrases for you. To be fair, that’s what most emails consist of, and when it works, it works really well.

Over time, the system trains itself to learn more about how you write and what you write about. “It gets smarter over time by learning your colleagues’ names, your favorite phrases and specific jargon,” Google’s VP for product management for G Suite David Thacker explained during a press briefing.

To use Smart Compose, you simply type your email as usual; when the feature thinks it can complete the sentence, it writes the next few words for you, and you can hit Tab to accept them.

It’s worth noting that the launch of Smart Compose goes against one of Google’s most cherished traditions: announcing features at I/O that won’t launch for another 10 months. It’s only been two months or so since Google first announced this feature.



Google Docs gets an AI grammar checker


You probably don’t want to make grammar errors in your emails (or blog posts), but every now and then, they do slip in. Your standard spell-checking tool won’t catch them unless you use an extension like Grammarly. Well, Grammarly is getting some competition today in the form of a new machine learning-based grammar checker from Google that’s soon going live in Google Docs.

These new grammar suggestions in Docs, which are now available through Google’s Early Adopter Program, are powered by what is essentially a machine translation algorithm that can recognize errors and suggest corrections as you type. Google says it can catch anything from wrongly used articles (“an” instead of “a”) to more complicated issues like incorrectly used subordinate clauses.

“We’ve adopted a highly effective approach to grammar correction that is machine translation-based,” Google’s VP for G Suite product management David Thacker said in a press briefing ahead of the announcement. “For example, in language translation, you take a language like French and translate it into English. Our approach to grammar is similar. We take improper English and use our technology to correct or translate it into proper English. What’s nice about this is that language translation is a technology that we have a long history of doing well.”
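To make the “translation” framing concrete, here is a deliberately tiny, hand-rolled sketch that “translates” one class of improper English, the article/article mix-up Google itself cites (“an” instead of “a”), into proper English. This is a toy heuristic for illustration only, not how Google’s machine translation-based model works:

```python
import re

def fix_articles(text: str) -> str:
    # "a" before a vowel letter -> "an"; "an" before a consonant -> "a".
    # Crude first-letter check only: sound-based cases like "an hour" are missed.
    text = re.sub(r"\ba (?=[aeiouAEIOU])", "an ", text)
    text = re.sub(r"\ban (?=[^aeiouAEIOU\W])", "a ", text)
    return text

print(fix_articles("She wrote an report about a error."))
# -> She wrote a report about an error.
```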

Because we haven’t seen this new tool in person, it’s impossible to know how well it will do in the real world, of course. It’s not clear to me whether Google’s service will find issues with punctuation or odd word choices, something that tools like Grammarly can check for.

It’s interesting that Google is opting for this translation-based approach, though, which once again shows the company’s bets on artificial intelligence and how it plans to bring these techniques to virtually all of its products over time.

It’d be nice if Google made this new grammar checker available as an API for other developers, though it doesn’t seem to have any plans to do so for the time being.


Read Full Article

Google Cloud goes all-in on hybrid with its new Cloud Services Platform


The cloud isn’t right for every business, whether because of latency constraints at the edge, regulatory requirements, or because it’s simply cheaper for some companies to own and operate their own data centers for their specific workloads. Given this, it’s maybe no surprise that the vast majority of enterprises today use both public and private clouds in parallel. That’s something Microsoft has long been betting on as part of its strategy for its Azure cloud, and Google, too, is now taking a number of steps in this direction.

With the open-source Kubernetes project, Google launched one of the fundamental building blocks that make running and managing applications in hybrid environments easier for large enterprises. What Google hadn’t done until today, though, is launch a comprehensive solution that includes all of the necessary parts for this kind of deployment. With its new Cloud Services Platform, though, the company is now offering businesses an integrated set of cloud services that can be deployed on both the Google Cloud Platform and in on-premise environments.

As Google Cloud engineering director Chen Goldberg noted in a press briefing ahead of today’s announcement, many businesses also simply want to be able to manage their own workloads on-premise but still be able to access new machine learning tools in the cloud, for example. “Today, to achieve this, use cases involve a compromise between cost, consistency, control and flexibility,” she said. “And this all negatively impacts the desired result.”

Goldberg stressed that the idea behind the Cloud Services Platform is to meet businesses where they are and then allow them to modernize their stack at their own pace. But she also noted that businesses want more than just the ability to move workloads between environments. “Portability isn’t enough,” she said. “Users want consistent experiences so that they can train their team once and run anywhere — and have a single playbook for all environments.”

The two services at the core of this new offering are the Kubernetes container orchestration tool and Istio, a relatively new but quickly growing tool for connecting, managing and securing microservices. Istio is about to hit its 1.0 release.

We’re not simply talking about a collection of open-source tools here. The core of the Cloud Services Platform, Goldberg noted, is “custom configured and battle-tested for enterprises by Google.” In addition, it is deeply integrated with other services in the Google Cloud, including the company’s machine learning tools.

GKE On-Prem

Among these new custom-configured tools are a number of new offerings, which are all part of the larger platform. Maybe the most interesting of these is GKE On-Prem. GKE, the Google Kubernetes Engine, is the core Google Cloud service for managing containers in the cloud. And now Google is essentially bringing this service to the enterprise data center, too.

The service includes access to all of the usual features of GKE in the cloud, including the ability to register and manage clusters and monitor them with Stackdriver, as well as identity and access management. It also includes a direct line to the GCP Marketplace, which recently launched support for Kubernetes-based applications.

Using the GCP Console, enterprises can manage both their on-premise and GKE clusters without having to switch between different environments. GKE On-Prem connects seamlessly to a Google Cloud Platform environment and looks and behaves exactly like the cloud version.

Enterprise users also can get access to professional services and enterprise-grade support for help with managing the service.

“Google Cloud is the first and only major cloud vendor to deliver managed Kubernetes on-prem,” Goldberg argued.

GKE Policy Management

Related to this, Google also today announced GKE Policy Management, which is meant to provide Kubernetes administrators with a single tool for managing all of their security policies across clusters. It’s agnostic as to where the Kubernetes cluster is running, but you can use it to port your existing Google Cloud identity-based policies to these clusters. This new feature will soon launch in alpha.

Managed Istio

The other major new service Google is launching is Managed Istio (together with Apigee API Management for Istio) to help businesses manage and secure their microservices. The open source Istio service mesh gives admins and operators the tools to manage these services and, with this new managed offering, Google is taking the core of Istio and making it available as a managed service for GKE users.

With this, users get access to Istio’s service discovery mechanisms and its traffic management tools for load balancing and routing traffic to containers and VMs, as well as its tools for getting telemetry back from the workloads that run on these clusters.
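As a rough illustration of the kind of traffic-management rule Istio supports, here is a minimal VirtualService that splits traffic between two versions of a service. The service name and subsets are illustrative, not part of Google’s announcement:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews            # illustrative service name
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90           # keep 90% of traffic on the stable version
    - destination:
        host: reviews
        subset: v2
      weight: 10           # send a 10% canary to the new version
```

Rules like this are what let operators do gradual rollouts and A/B tests without touching application code.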

In addition to these three main new services, Google is also launching a couple of auxiliary tools around GKE and the serverless computing paradigm today. The first of these is the GKE serverless add-on, which makes it easy to run serverless workloads on GKE with a single-step deploy process. This, Google says, will allow developers to go from source code to container “instantaneously.” This tool is currently available as a preview, and Google is making parts of this technology available under the umbrella of its new Knative open-source project. These are the same components that make the serverless add-on possible.

And to wrap it all up, Google also today mentioned a new fully managed continuous integration and delivery service, Google Cloud Build, though the details around this service remain under wraps.

So there you have it. By themselves, all of those announcements may seem a bit esoteric. As a whole, though, they show how Google’s bet on Kubernetes is starting to pay off. As businesses opt for containers to deploy and run their new workloads (and maybe even bring older applications into the cloud), GKE has put Google Cloud on the map to run them in a hosted environment. Now, it makes sense for Google to extend this to its users’ data centers, too. With managed Kubernetes offerings from companies large and small, such as SUSE and Platform9, this is starting to become a big business. It’s no surprise the company that started it all wants to get a piece of this pie, too.


Read Full Article

Watchmaker Doxa resurrects its most famous dive watch


Doxa is a storied dive watch company, and its most popular watch, the Sub, has just received a 2018 overhaul. The watches were made famous by writer Clive Cussler, whose character, Dirk Pitt, consulted his beefy Doxa on multiple occasions.

This new model is made in collaboration with gear manufacturer Aqua Lung and features a 42mm steel case with 300 meters of water resistance, a Swiss ETA movement, and a unidirectional diving bezel. It will cost $2,190 when it ships in August.

The SUB 300 ‘Silver Lung’ continues the yearlong 50th anniversary celebration for DOXA Watches, whose pioneering SUB would first plumb the ocean depths in 1967 as the first purpose-built dive watch for the emerging recreational scuba diving market. Lauded for its bright orange dial and professional-grade build quality and dependability, the SUB quickly became the benchmark against which all other dive watches were measured, and ultimately won the approval of the pioneers of modern diving. This included those at Aqua Lung, who would soon distribute the watches under the US Divers name before consolidating into the singular name Aqua Lung in 1998.

Why is this important? First, it’s a cool-looking watch, priced low enough for a Swiss movement and case to be interesting. Further, it has real history and provenance, and comes from a little-known brand. If you’re a diver, or just want to pretend to be one, you could do worse than this beefy and very legible piece.


Read Full Article

MacBook Pro 2018 vs. 2017: The Good, Bad, and Ugly


Apple has released the 2018 version of the MacBook Pro with Touch Bar (13-inch and 15-inch models). This is the third generation of the redesign we first saw in 2016.

From the outside, it doesn’t look a whole lot different. And while it doesn’t solve all the major issues in this iteration of the MacBook Pro, it’s still a major update with new internals.

Here’s what you need to know about the 2018 MacBook Pro models.

1. Only Touch Bar MacBooks Get the Update

2018 MacBook Pro 13 inch and 15 inch

Apple has only updated the MacBook Pro models that include the Touch Bar. Other MacBook models (13-inch MacBook Pro with function key row, 12-inch MacBook, and MacBook Air) remain untouched. Apple also took this opportunity to discontinue the 15-inch MacBook Pro from 2015.

For better or for worse, the Touch Bar remains the same, though its OLED screen now gains True Tone support.

2. It Might Fix the Keyboard Issue

The current MacBook Pro generation is plagued with keyboard reliability issues. Officially, Apple has updated the keyboard to address the excessive noise. The keys are softer to press and quieter in day-to-day use.

But unofficially (after several class action lawsuits), Apple seems to have updated the keyboard to fix the reliability issue. During the teardown process, iFixit found a “thin, silicone barrier” right below the keycaps. According to iFixit, “This flexible enclosure is quite obviously an ingress-proofing measure to cover up the mechanism from the daily onslaught of microscopic dust.”

Will this unofficial change solve the keyboard jamming problems? It’s too early to tell, but it should hopefully prevent small dust particles from jamming up the keys.

3. More CPU Cores

MacBook Pro 13 inch Multi core geekbench

This is the first time since 2011 that a 13-inch MacBook Pro has gained more cores. The 13-inch MacBook Pro with Touch Bar now comes with a quad-core CPU (that’s double the cores compared to the 2017 model). The $1,799 base model starts with a 2.3GHz Core i5 quad-core CPU; you can configure it with a 2.7GHz Core i7 quad-core CPU as well.

The $2,399 15-inch MacBook Pro gets a 2.2GHz 6-core Intel Core i7, available to upgrade to a 2.9GHz 6-core Intel Core i9 CPU if you like.

This simple change makes the new MacBook Pro a lot more powerful than the 2017 version, especially when it comes to multi-threading. If you use your MacBook Pro for photo editing or video processing, these new cores will come in handy.

MacBook Pro 2018 15 inch multi core geekbench

The top-of-the-line 15-inch MacBook Pro with a 2.9GHz 6-core Intel Core i9 processor has a multi-core score of 22,439. That’s a 44.3 percent increase over the 2017 model with a 3.1GHz quad-core Core i7 and Turbo Boost up to 4.1GHz.

Meanwhile, the 13-inch MacBook Pro with a 2.7GHz quad-core Intel Core i7 processor gets a multi-core score of 17,557. This is a whopping 83.8 percent increase compared to the premium 2017 model. The base models see similar gains as well.

4. Better GPUs

2018 MacBook Pro running graphics application

The 13-inch MacBook Pro gets Intel Iris Plus 655 integrated graphics with 128MB of eDRAM. The 15-inch model has Radeon Pro discrete graphics with 4GB of video memory on every configuration.

That gives the base 15-inch MacBook Pro some amazing firepower. While it’s not going to be your next great gaming PC, the 4GB of Radeon Pro graphics means you’ll breeze through rendering sessions in Final Cut Pro X.

And even with all the upgrades, Apple has managed to keep the battery life the same (by increasing the battery size by 10%).

5. For the Pros: 32 GB of RAM and 4TB SSD

The 15-inch MacBook Pro now has the power that professional videographers need. While the 2017 MacBook Pros maxed out at 16 GB of LPDDR3 RAM, the 2018 MacBook Pro comes with DDR4 RAM that you can max out to 32 GB.

And if you want, you can even upgrade to a 4TB SSD for $3,200. The new SSDs in the 2018 MacBook Pros are ridiculously fast. You can expect read speeds of up to 3.2GB/s!

6. True Tone Display

2018 MacBook Pro Running Photo Editing App

One of the best display technologies from the iPhone and iPad Pro has arrived on Mac. True Tone technology automatically changes the color temperature of the screen based on your surroundings.

If you’re indoors, the screen turns warmer and takes on a slight yellow tint. When you’re out in bright light, it shifts to a cooler, bluer tone. While it’s not the most exciting update, even the OLED Touch Bar screen gets True Tone support.

7. The T2 Chip Brings “Hey Siri” to the Mac

The T1 chip in the 2017 MacBook Pros brought support for Apple Pay, Touch ID, and Secure Enclave. Now, the T2 chip in the 2018 models adds always-on “Hey Siri” support. Just like your iPhone or iPad, you can call up Siri on Mac to help you create reminders, look for files, and even open websites.

macOS lets you create a keyboard shortcut for bringing up Siri, but calling for her by voice is much more convenient.

Are the 2018 MacBook Pro Models a Worthy Upgrade?

If you were looking for a complete redesign of the MacBook Pro, you’re probably disappointed with this upgrade. If you didn’t like the keyboard or design from previous generations, or thought the Touch Bar was useless, you still won’t like the 2018 version.

But if you can get used to the keyboard, and you’re looking to upgrade from a MacBook that’s several years old, you’ll see a huge upgrade in performance and usability. This is especially true with the base 13-inch model. A one-and-a-half times performance boost for MacBooks is unheard of these days.

If you’re in the market for a new Mac, we recommend comparing the MacBook and iMac. The 5K iMac packs an amazing screen and some serious firepower, so you may want to go for that if you don’t need portability.

Read the full article: MacBook Pro 2018 vs. 2017: The Good, Bad, and Ugly


Read Full Article

How to Fix the System Service Exception Stop Code in Windows 10


system-service-exception-stop-code-fix

The Blue Screen of Death (BSOD) isn’t as rare as it once was, but it still happens. While Windows 10 still has quirks and annoyances, one huge improvement is that the BSOD now displays useful information regarding your system crash.

In this article, we’ll examine the SYSTEM_SERVICE_EXCEPTION error, why it happens, what you can do to fix it, and how to stop it from happening again.

What Is a System Service Exception Error?

A SYSTEM_SERVICE_EXCEPTION error happens for a few reasons: graphical user interface errors, corrupted system files, and issues with outdated or corrupt drivers, among others.

Given that there is such a range of potential SYSTEM_SERVICE_EXCEPTION causes, there are also several methods for fixing the issue. Some might fix your Windows system errors, while others won’t.

The main course of action is updating drivers and checking your Windows 10 file system for errors, but you should work through the list below until the SYSTEM_SERVICE_EXCEPTION error disappears for good.

How to Fix a System Service Exception Error

1. Update Windows 10

The first thing to do is check that Windows 10 is completely up to date. Outdated system files can cause unexpected errors. Checking for a pending update is a quick and easy way to figure out if that is what is causing your issue.

Hit Windows key + I to open the Settings panel. Now, head to Update & Security, then check under Windows Update for any pending updates. If there is an update, save any important files, then press Restart now. Your system will reboot during the process.

2. Update System Drivers

Windows Update keeps your system drivers up to date. Automating the process means your system drivers are less likely to fall behind their recommended version—but that doesn’t mean some won’t slip through the cracks. At other times, Windows doesn’t use the correct driver version.

To check your latest automatic driver updates, head to Update & Security > Windows Update > View update history. Recent driver updates appear here. Now, type device manager in the Start menu search bar and select the best match. Head down the list and check for an error symbol. If there is nothing, your driver status is likely not the source of the issue.

If there is a yellow “alert” symbol, open the section using the dropdown arrow, then right-click the problem driver and select Update driver. Select Search automatically for updated driver software to let Windows automate the update process for you.

how to update drive in windows 10

Otherwise, you can use a third-party tool to update all your system drivers simultaneously. Check out this list of free tools you can use to fix a majority of Windows problems. The first two options—IOBit’s Driver Booster and Snappy Driver Installer—do exactly this.

3. Run CHKDSK

Next up, try running Windows Check Disk from the Command Prompt. CHKDSK is a Windows system tool that verifies the file system and, with certain settings, fixes issues as it runs.

Type command prompt in your Start menu search bar, then right-click the best match and select Run as administrator. (Alternatively, press Windows key + X, then select Command Prompt (Admin) from the menu.)

Next, type chkdsk /r and press Enter. The command will scan your system for errors and fix any issues along the way.

4. Run SFC

System File Check is another Windows system tool that checks for missing and corrupt Windows system files. Sounds like CHKDSK, right? Well, SFC checks for Windows system files specifically, while CHKDSK scans your entire drive for errors.

But before running the SFC command, it is best to double-check that the tool itself is fully functional.

DISM stands for Deployment Image Servicing and Management. DISM is an integrated Windows utility with a vast range of functions. In this case, the DISM Restorehealth command ensures that our next fix will work properly. Work through the following steps.

  1. Type command prompt in the Start menu search bar, then right-click the best match and select Run as administrator to open an elevated Command Prompt.
  2. Type the following command and press Enter: DISM /online /cleanup-image /restorehealth
  3. Wait for the command to complete. The process can take up to 20 minutes depending on your system health. It may seem stuck at times, but wait for it to finish.
  4. When the process completes, type sfc /scannow and press Enter.
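Run from the same elevated Command Prompt, the full repair pass covered in sections 3 and 4 boils down to three commands:

```
chkdsk /r
DISM /online /cleanup-image /restorehealth
sfc /scannow
```

Running DISM before SFC matters: it ensures the component store that SFC repairs from is itself healthy.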

5. Install the Official Windows Hotfix

There is an official Windows hotfix for the SYSTEM_SERVICE_EXCEPTION error. However, the hotfix relates to a stop code that defines a specific SYSTEM_SERVICE_EXCEPTION issue. The stop code is 0x0000003B, and it relates to IEEE 1394 devices; in other words, FireWire and similar branded versions of the interface standard.

Head to the Microsoft hotfix page and select the Hotfix Download Available link. Follow the onscreen instructions (you’ll need to provide an email address to receive the hotfix). When the hotfix arrives in your email account (it is instantaneous), use the link at the bottom of the page to download the file.

Once downloaded, double-click the file. The auto-extraction wizard suggests C:\ as the default location. However, I would add “hotfix” to the file path (e.g., C:\hotfix) to make it easier to find the unpacked file. Next up, navigate to the extracted file, then right-click it and select Run as administrator to complete the process.

6. Last Resort: Reset Windows 10

If nothing else works, you can use Windows 10’s Reset function to refresh your system files. Windows 10 Reset replaces your system files with a completely fresh set of files and theoretically clears lingering issues relating to your SYSTEM_SERVICE_EXCEPTION error while keeping the majority of your important files intact.

Head to Settings > Update and Security > Recovery, then under Reset this PC select Get started. Your system restarts as soon as you hit the button, so make sure you back up any important files beforehand. Your system will restart, then you may select Keep my files or Remove everything.

System Service Exception Error: Fixed and Eradicated!

One of these fixes, or a combination of them, should resolve your SYSTEM_SERVICE_EXCEPTION error, leaving your system BSOD-free.

If not, there is another short solution you can try: Work your way through your recently installed programs, uninstalling each one until the issue resolves. Some programs have an unwelcome habit of causing certain system processes to crash.

Another handy bluescreen error code tool is Nirsoft’s BlueScreenView. It helps you better understand the error codes so you can isolate issues much faster!

Read the full article: How to Fix the System Service Exception Stop Code in Windows 10


Read Full Article

The Best YouTube Originals to Watch on YouTube Premium


youtube-premium-originals

In 2018, Google split its YouTube Red subscription service into two standalone offerings: YouTube Music and YouTube Premium.

The YouTube Premium service offers ad-free videos, offline viewing, background playback, all of the YouTube Music features, and access to YouTube Originals.

What Are YouTube Originals?

Much like Netflix and Amazon, YouTube has started creating its own exclusive content. All of this content is available under the YouTube Originals moniker.

The shows and movies span a wide range of genres and feature a mix of well-known actors and famous YouTubers. All of the shows are available in any market in which YouTube Premium is available, and they are viewable on all your devices.

To help you get started sorting the wheat from the chaff, here are the best YouTube Originals that you can watch right now.

1. Cobra Kai

Cobra Kai is perhaps the most well-known YouTube Original. The storyline uses the four Karate Kid movies as inspiration.

Taking place 34 years after the first film, the 10-part series follows the reopening of the Cobra Kai karate dojo and the rekindling of the rivalry between Daniel LaRusso and Johnny Lawrence.

Actors Ralph Macchio (LaRusso) and William Zabka (Lawrence) both return to reprise their original roles.

A second season will be available in 2019.

2. Step Up: High Water

Step Up: High Water is another YouTube Original that draws heavily on a movie franchise.

For those who aren’t aware, there were five Step Up films released between 2006 and 2014. They fall into the dance drama genre.

The YouTube show tells the story of twins Tal (Petrice Jones) and Janelle (Lauryn McClain) as they are forced to relocate to Atlanta due to a family crisis. Once they arrive in their new home, they join a free performing arts school run by Sage Odom (Ne-Yo). It’s described as the most cutthroat school in the city.

You can expect lots of music, lots of comedy, and lots of dark moments across the 10 episodes.

The film franchise’s breakout star, Channing Tatum, serves as an executive producer on the YouTube show.

3. Kedi

What do you get if you place a group of documentary makers in a city with millions of stray cats? The answer is Kedi. It’s the first movie on our list.

It tracks the lives of seven strays in Istanbul. With a healthy dose of melancholic music and a voiceover that’s sure to pull on your heartstrings, it’s a must-watch for cat lovers.

4. Rhett and Link’s Buddy System

Created by comedy duo Rhett McLaughlin and Link Neal, this absurdist comedy show was arguably the first YouTube Original that became popular with a mainstream audience.

When it launched in early 2017, demand for the show was higher than demand for several big-hitting series from other streaming providers, including Orange is the New Black, Santa Clarita Diet, The Grand Tour, and The Man In the High Castle.

The story follows the lives of Rhett and Link (playing themselves) as they seek to regain control of their morning chat show from a mutual ex-girlfriend.

5. Scare PewDiePie

If you like Felix Kjellberg’s style of content, you’ll enjoy Scare PewDiePie. The series sees the YouTube star put himself into reality-based horror setups.

We reviewed Scare PewDiePie when YouTube Red first launched, giving it a score of 7/10. Our biggest complaint was the seemingly scripted reactions, but we still found it largely enjoyable.

There’s only one season of 10 episodes. The second season was canceled before its release in 2017 after YouTube decided Kjellberg was too divisive a figure to promote in such a way. However, he’s still the most popular YouTuber going by number of subscribers.

6. Youth and Consequences

Youth and Consequences ticks all the usual boxes for a teenage high school drama.

There’s the school queen struggling to hold on to her status, the ensemble of bullies making crude remarks about everything from sexuality to social class, the ongoing web of love interests, the cliques, and every other high school stereotype you can think of.

It might not be everyone’s cup of tea, but after its release in March 2018 it quickly shot up to the top of YouTube’s most popular shows.

The first season contains eight episodes. There’s been no official word on a second season.

7. The Keys of Christmas

The Keys of Christmas is the second YouTube Original movie on the list. It offers a modern take on the classic Charles Dickens novel, A Christmas Carol.

The film enjoys an all-star cast, with Mariah Carey, DJ Khaled, Ciara, Fifth Harmony, Rudy Mancuso, Melanie Iglesias, and Mike Tyson all playing significant roles.

DJ Khaled and Mariah Carey are the do-gooders; they transport YouTube star Mancuso into a winter wonderland in an attempt to teach him about the true meaning of Christmas.

It’s certainly not a classic film, but it’s a fun and welcome addition to the list of Christmas movies worth streaming.

8. Lifeline

Lifeline takes us into the world of science fiction. The two main protagonists, Conner (Zach Gilford) and Haley (Amanda Crew) work for an insurance firm that has the ability to time travel.

Using their powers, Conner and Haley try to predict their clients’ deaths by traveling 33 days into the future. They then return to the present day and try to prevent the deaths from occurring.

In some respects, the show feels like Black Mirror. Some of the sub-plots, such as the duo’s slower aging process due to their time travel, are explored in more detail in later episodes.

The series runs for eight episodes. Unfortunately, YouTube has already confirmed that there will not be a season two.

9. Mind Field

Fans of Michael Stevens’ Vsauce channel will be instantly familiar with the premise of Mind Field. The episodes use a documentary-style format, with Stevens discussing various aspects of human behavior.

Each episode contains an experiment which links to the theme of the video. Sometimes Stevens performs the experiment on himself. On other occasions, he uses guests or test subjects.

Experiments have included three days of solitary confinement to test the effect on the brain, making people angry and giving them things to smash up, and asking a group of people to incorrectly answer a question in front of one unwitting participant to test the power of conformity.

10. Single by 30

Single by 30 is a romantic comedy series. Set in Los Angeles, the show starts with two high school students who make a pact to go to the senior dance together if they cannot find a date.

In the aftermath of the dance, they make another pact: They will marry each other if they are both still single by the time they hit 30.

Predictably, they lose touch with each other. But as they approach their 30th birthdays, they reunite after the breakdown of their long-term relationships. They choose to reinstate the old pact, with the series focusing on their feelings for each other as they re-enter the dating world.

Is YouTube Premium Worth the Money?

All these shows are only available to YouTube Premium subscribers. However, some people still might not see the value in subscribing to yet another streaming service.

To help you decide, see our coverage of whether YouTube Premium is worth it and what to know about it. (YouTube Music subscribers can upgrade to YouTube Premium for just $2/month.)

Read the full article: The Best YouTube Originals to Watch on YouTube Premium


Read Full Article

Google Cloud CEO Diane Greene: “We’re playing the long game here”


Google is hosting its annual Cloud Next conference in San Francisco this week. With 20,000 developers in attendance, Cloud Next has become the cloud-centric counterpart to Google I/O. A few years ago, when the event only had about 2,000 attendees and Google still hosted it on a rickety pier, Diane Greene had just taken over as the CEO of Google’s cloud businesses and Google had fallen a bit behind in this space, just as Amazon and Microsoft were charging forward. Since then, Google has squarely focused on bringing business users to its cloud, both to its cloud computing services and to G Suite.

Ahead of this year’s Cloud Next, I sat down with Diane Greene to talk about the current state of Google Cloud and what to expect in the near future. As Greene noted, a lot of businesses first approached cloud computing as an infrastructure play — as a way to get some cost savings and access to elastic resources. “Now, it’s just becoming so much more. People realize it’s a more secure place to be, but really, I feel like in its essence it’s all about super-charging your information to make your company much more successful.” It’s the cloud, after all, where enterprises get access to globally distributed databases like Cloud Spanner and machine learning tools like AutoML (and their equivalent tools from other vendors).

When she moved to Google Cloud, Greene argued, Google was missing many of the table stakes that large enterprises needed. “We didn’t have all the audit logs. We didn’t have all the fine-grained security controls. We didn’t have the peer-to-peer networking. We didn’t have all the compliance and certification,” she told me.

People told her it would take Google 10 years to be ready for enterprise customers. “That’s how long it took Microsoft. And I was like, no, it’s not 10 years.” The team took that as a challenge and now, two years later, Greene argues that Google Cloud is definitely ready for the enterprise (and she’s tired of people calling it a ‘distant third’ to AWS and Azure).

Today, when she thinks about her organization’s mission, she sees it as a variation on Google’s own motto. “Google’s mission is to organize the world’s information,” she said. “Google Cloud’s mission then is to supercharge our customers’ information.”

When it comes to convincing large enterprises to bet on a given vendor, though, technology is one thing, but a few years ago, Google also didn’t have the sales teams in place to sell to these companies. That had to change, too, and Greene argues that the company’s new approach is working as well. And Google needed the right partners, too, which it has now found with companies like SAP, which has certified Google’s Cloud for its HANA in-memory database, and the likes of Cisco.

A few months ago, Greene told CNBC she thought that people were underestimating the scale of Google’s cloud businesses. And she thinks that’s still the case today, too. “They definitely are underestimating us. And to some extent, maybe that hurt us. But we love our pipeline and all our engagements that we have going on,” she told me.

Getting large businesses on board is one thing, but Greene also argued that today is probably the best time ever to be an enterprise developer. “I’ve never seen companies so aggressively pursuing the latest technology and willing to adopt this disruptive technology because they see the advantage that can give them and they see that they won’t be competitive if the people they compete with adopt it first,” Greene told me. “And because of this, I think innovation in the enterprise is happening right now, even faster than it is in consumer, which is somewhat of a reversal.”

As for the companies that are choosing Google Cloud today, Greene sees three distinct categories. There are those that were born in the cloud. Think Twitter, Spotify and Snap, which are all placing significant bets on Google Cloud. Never shy about comparing Google’s technology prowess to its competitors’, Greene noted that “they are with Google Cloud because they know that we’re the best cloud from a technology standpoint.”

But these days, a lot of large companies that preceded the internet but were still pretty data-centric are also moving to the cloud. Examples there, as far as Google Cloud customers go, include Schlumberger, HSBC and Disney. And it’s those companies that Google is really going after at this year’s Next with the launch of the Cloud Services Platform for businesses that want or need to take a hybrid approach to their cloud adoption plans. “They see that the future is in the cloud. They see that’s where the best technology is going to be. They see that through using the technology of the cloud they can redeploy their people to be more focused on their business needs,” Greene explained.

Throughout our conversation, Greene stressed that a lot of these companies are coming to Google because of its machine learning tools and its support for Kubernetes. “We’re bringing the cloud to them,” Greene said about these companies that want to go hybrid. “We are taking Kubernetes and Istio, the monitoring and securing of the container workflows and we’re making it work on-prem and within all the different clouds and supporting it across all that. And that way, you can stay in your data center and have this Kubernetes environment and then you can spill over into the cloud and there’s no lock-in.”

But there’s also a third category, the old brick-and-mortar businesses like Home Depot that often don’t have any existing large centralized systems but that now have to go through their own digital transformation, too, to remain competitive.

While it’s fun to talk about up-and-coming technologies like Kubernetes and containers, Greene noted that the vast majority of users still come to Google Cloud for its compute services and its data management and analytics tools like BigQuery. Of course, there’s a lot of momentum behind the Google Kubernetes Engine, too, as well as the company’s machine learning tools, but enterprises are only now starting to think about these tools.

But Greene also stressed that a lot of customers are looking for security, not just on the cloud computing side of Google Cloud but also when it comes to choosing its G Suite productivity tools.

“Companies are getting hacked and Google, knock on wood, is not getting hacked,” she noted. “We are so much more secure than any company could ever contemplate.”

But while that’s definitely true, Google has also faced an interesting challenge here because of its consumer businesses. Greene noted that it sometimes takes people a while to understand that what Google does with consumer data is vastly different from what it does with data that sits in Google Cloud. Google, after all, does mine a good amount of its free users’ data to serve them more relevant ads.

“We’ve been keeping billions of people’s data private for almost 20 years and that’s a lot of hard work, but a cloud customer’s data is completely private to them and we do have to continually educate people about that.”

So while Google got a bit of a late start in getting enterprises to adopt its Cloud, Greene now believes that it’s on the right track. “And the other thing is, we’re playing the long game,” she noted. “This thing is early. Some people estimate that only 10 percent of workloads are in the big public clouds. And if it’s not in a public cloud, it is going to be in a public cloud.”


Read Full Article

EU fines Asus, Denon & Marantz, Philips and Pioneer $130M for online price fixing


The European Union’s antitrust authorities have issued a series of penalties, fining consumer electronics companies Asus, Denon & Marantz, Philips and Pioneer more than €110 million (~$130M) in four separate decisions for imposing fixed or minimum resale prices on their online retailers in breach of EU competition rules.

The Commission says the four companies engaged in so-called “fixed or minimum resale price maintenance” (RPM) by restricting the ability of their online retailers to set their own retail prices for widely used consumer electronics products — such as kitchen appliances, notebooks and hi-fi products.

Asus has been hit with the largest fine (€63.5 million), followed by Philips (€29.8 million). The other two fines were €10.1 million for Pioneer and €7.7 million for Denon & Marantz.

The Commission found the manufacturers put pressure on ecommerce outlets that offered their products at low prices, writing: “If those retailers did not follow the prices requested by manufacturers, they faced threats or sanctions such as blocking of supplies. Many, including the biggest online retailers, use pricing algorithms which automatically adapt retail prices to those of competitors. In this way, the pricing restrictions imposed on low pricing online retailers typically had a broader impact on overall online prices for the respective consumer electronics products.”
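The Commission's reasoning — that forcing a price floor on the cheapest retailer raises prices market-wide once other retailers run price-matching algorithms — can be illustrated with a minimal simulation. This is only a sketch of the mechanism the Commission describes; the retailer names, prices and update rule are hypothetical, not taken from any of the four cases.

```python
# Hypothetical illustration of the Commission's argument: when retailers'
# pricing algorithms match the cheapest rival, a minimum resale price
# imposed on the low-price retailer lifts prices across the market.

def equilibrium(list_prices, floors, rounds=10):
    """Iterate simultaneous price updates: each retailer charges the lower
    of its own list price and its cheapest rival's current price, but never
    less than any resale-price floor imposed on it."""
    prices = {r: max(p, floors.get(r, 0.0)) for r, p in list_prices.items()}
    for _ in range(rounds):
        prices = {
            r: max(
                floors.get(r, 0.0),
                min(list_prices[r],
                    min(p for s, p in prices.items() if s != r)),
            )
            for r in prices
        }
    return prices

# One aggressive discounter, two price-matching rivals (all values made up).
list_prices = {"discounter": 79.0, "shop_a": 95.0, "shop_b": 105.0}

# Without RPM, everyone converges on the discounter's low price.
print(equilibrium(list_prices, floors={}))

# With a 99.0 floor imposed on the discounter, the whole market settles higher.
print(equilibrium(list_prices, floors={"discounter": 99.0}))
```

In the second run, the rivals no longer have a 79.0 price to match, so even retailers facing no floor of their own end up charging more — which is exactly the "broader impact on overall online prices" the Commission points to.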

It also notes that use of “sophisticated monitoring tools” by the manufacturers allowed them to “effectively track resale price setting in the distribution network and to intervene swiftly in case of price decreases”.

“The price interventions limited effective price competition between retailers and led to higher prices with an immediate effect on consumers,” it added.

In particular, Asus was found to have monitored the resale prices of retailers for certain computer hardware and electronics products, such as notebooks and displays, in two EU Member States (Germany and France) between 2011 and 2014.

Denon & Marantz, meanwhile, was found to have engaged in “resale price maintenance” with respect to audio and video consumer products, such as headphones and speakers of the Denon, Marantz and Boston Acoustics brands, in Germany and the Netherlands between 2011 and 2015.

Philips was found to have done the same in France between the end of 2011 and 2013, but for a range of consumer electronics products, including kitchen appliances, coffee machines, vacuum cleaners, home cinema and home video systems, electric toothbrushes, and hair dryers and trimmers.

In Pioneer’s case, the resale price maintenance covered products including home theatre devices, iPod speakers, speaker sets and hi-fi products.

The Commission said the company also limited the ability of its retailers to sell cross-border to EU consumers in other Member States in order to sustain different resale prices across Member States, for example by blocking the orders of retailers who sold cross-border. Its conduct lasted from the beginning of 2011 to the end of 2013 and concerned 12 countries (Germany, France, Italy, the United Kingdom, Spain, Portugal, Sweden, Finland, Denmark, Belgium, the Netherlands and Norway).

In all four cases, the Commission said the fines were reduced — by 50 percent in Pioneer’s case and by 40 percent for each of the others — due to the companies’ cooperation with its investigations, specifying that they had provided evidence with “significant added value” and had “expressly acknowledg[ed] the facts and the infringements of EU antitrust rules”.

Commenting in a statement, commissioner Margrethe Vestager, who heads up the bloc’s competition policy, said: “The online commerce market is growing rapidly and is now worth over 500 billion euros in Europe every year. More than half of Europeans now shop online. As a result of the actions taken by these four companies, millions of European consumers faced higher prices for kitchen appliances, hair dryers, notebook computers, headphones and many other products. This is illegal under EU antitrust rules. Our decisions today show that EU competition rules serve to protect consumers where companies stand in the way of more price competition and better choice.”

We’ve reached out to all the companies for comment.

The fines follow the Commission’s ecommerce sector inquiry, which reported in May 2017 and showed that resale price-related restrictions are by far the most widespread restrictions of competition in ecommerce markets, making competition enforcement in this area a priority — as part of the EC’s wider Digital Single Market strategy.

The Commission further notes that the sector inquiry shed light on the increased use of automatic software applied by retailers for price monitoring and price setting.

Separate investigations were launched in February 2017 and June 2017 to assess if certain online sales practices are preventing, in breach of EU antitrust rules, consumers from enjoying cross-border choice and from being able to buy products and services online at competitive prices. The Commission adds that those investigations are ongoing.

Commenting on today’s EC decision, a spokesman for Philips told us: “Since the start of the EC investigation in late 2013, which Philips reported in its Annual Reports, the company has fully cooperated with the EC. Philips initiated an internal investigation and addressed the matter in 2014.”

“It is good that we can now leave this case behind us, and focus on the positive impact that our products and solutions can have on people,” he added. “Let me please stress that Philips attaches prime importance to full compliance with all applicable laws, rules and regulations. Being a responsible company, everyone in Philips is expected to always act with integrity. Philips rigorously enforces compliance of its General Business Principles throughout the company. Philips has a zero tolerance policy towards non-compliance in relation to breaches of its General Business Principles.”

Anticipating the decision of the EC, he said the company had already recognized a €30 million provision in its Q2 2018 results.


Read Full Article