22 April 2019

Why it’s so hard to know who owns Huawei


It’s one of the greatest technology “startup” success stories of the personal computer and smartphone eras. Yet, despite selling 59 million smartphones and netting $27 billion in revenue last quarter in its first-ever public earnings report this morning, a strange and tantalizing question shrouds the world’s number two handset manufacturer behind Samsung.

Who owns Huawei?

To hear the company tell it, it’s 100% employee-owned. In a statement circulated last week, it said that “Huawei is a private company wholly owned by its employees. No government agency or outside organization holds shares in Huawei or has any control over Huawei.”

That’s a simple statement, but oh is it so much more complicated.

As with all things related to Huawei, which outside of its 5G archrival Qualcomm is probably the tech company most entrenched in geopolitics today, the story is never as simple as it appears at first glance.


Read Full Article

Daily Crunch: Samsung delays the Galaxy Fold


The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Samsung reportedly pushes back Galaxy Fold release

Four days out from the Galaxy Fold’s official release date, Samsung is pushing things back a bit, according to a report from The Wall Street Journal. There’s no firm time frame for the launch, though the phone is still expected “in the coming weeks.”

TechCrunch’s reviewer Brian Heater says he hasn’t experienced any issues with his device, but a number of others reported malfunctioning displays.

2. Tencent’s latest investment is an app that teaches grannies in China to dance

Called Tangdou, or “sugar beans” in Chinese, the app announced that it has raised a Series C funding round led by Tencent.

3. SiriusXM’s new streaming-only ‘Essential’ plan targets smart speaker owners

The company has launched a new plan called SiriusXM Essential, targeting those who listen in-home and on mobile devices. The streaming-only plan is also more affordable — $8 per month, versus the $15.99 per month (and up) plans for SiriusXM’s satellite radio service for cars.

4. Confirmed: Pax Labs raises $420M at a valuation of $1.7B

That’s right, $420 million for a vape maker. CEO Bharat Vasan said, “This financing round allows us to invest in new products and new markets, including international growth in markets like Canada and exploring opportunities in hemp-based CBD extracts.”

5. Sony launches a taxi-hailing app to rival Uber in Tokyo

The service is a joint venture between Sony, its payment services subsidiary and five licensed taxi companies. Because ride-hailing with civilian cars is illegal in Japan, the service will focus on connecting licensed taxis with passengers.

6. The Exit: an AI startup’s McPivot

An in-depth interview with investor Adam Fisher about the recent McDonald’s acquisition of Dynamic Yield. (Extra Crunch membership required.)

7. This week’s TechCrunch podcasts

This week’s episode of Equity addresses the aforementioned cannabis vaping round, followed up by an Equity Shot about the Fastly S-1. Meanwhile, on Original Content we reviewed Donald Glover’s “Guava Island” and discussed the new season of “Game of Thrones.”


Read Full Article

SpecAugment: A New Data Augmentation Method for Automatic Speech Recognition




Automatic Speech Recognition (ASR), the process of taking an audio input and transcribing it to text, has benefited greatly from the ongoing development of deep neural networks. As a result, ASR has become ubiquitous in many modern devices and products, such as Google Assistant, Google Home and YouTube. Nevertheless, there remain many important challenges in developing deep learning-based ASR systems. One such challenge is that ASR models, which have many parameters, tend to overfit the training data and have a hard time generalizing to unseen data when the training set is not extensive enough.

In the absence of an adequate volume of training data, it is possible to increase the effective size of existing data through the process of data augmentation, which has contributed to significantly improving the performance of deep networks in the domain of image classification. In the case of speech recognition, augmentation traditionally involves deforming the audio waveform used for training in some fashion (e.g., by speeding it up or slowing it down), or adding background noise. This has the effect of making the dataset effectively larger, as multiple augmented versions of a single input are fed into the network over the course of training, and also helps the network become robust by forcing it to learn relevant features. However, existing conventional methods of augmenting audio input introduce additional computational cost and sometimes require additional data.

In our recent paper, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition”, we take a new approach to augmenting audio data, treating it as a visual problem rather than an audio one. Instead of augmenting the input audio waveform as is traditionally done, SpecAugment applies an augmentation policy directly to the audio spectrogram (i.e., an image representation of the waveform). This method is simple, computationally cheap to apply, and does not require additional data. It is also surprisingly effective in improving the performance of ASR networks, demonstrating state-of-the-art performance on the ASR tasks LibriSpeech 960h and Switchboard 300h.

SpecAugment
In traditional ASR, the audio waveform is typically encoded as a visual representation, such as a spectrogram, before being input as training data for the network. Augmentation of training data is normally applied to the waveform audio before it is converted into the spectrogram, such that after every iteration, new spectrograms must be generated. In our approach, we instead augment the spectrogram itself, rather than the waveform data. Since the augmentation is applied directly to the input features of the network, it can be run online during training without significantly impacting training speed.
A waveform is typically converted into a visual representation (in our case, a log mel spectrogram; steps 1 through 3 of this article) before being fed into a network.
SpecAugment modifies the spectrogram by warping it in the time direction, masking blocks of consecutive frequency channels, and masking blocks of utterances in time. These augmentations were chosen to help the network become robust against deformations in the time direction, partial loss of frequency information, and partial loss of small segments of speech. An example of such an augmentation policy is displayed below.
The log mel spectrogram is augmented by warping in the time direction, and masking (multiple) blocks of consecutive time steps (vertical masks) and mel frequency channels (horizontal masks). The masked portion of the spectrogram is displayed in purple for emphasis.
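The two masking operations can be sketched in a few lines. This is a minimal illustration assuming the spectrogram is stored as a 2D array of numbers (time steps by mel channels); the function and parameter names are ours, not taken from the paper's implementation, and the time-warping step is omitted:

```javascript
// Illustrative SpecAugment-style masking on a spectrogram stored as a
// 2D array: spec[t][f] is the log mel energy at time step t, mel channel f.
// maskSpectrogram and its parameters are hypothetical names for this sketch.
function maskSpectrogram(spec, { freqMaskWidth = 2, timeMaskWidth = 2, f0 = 0, t0 = 0 } = {}) {
  const numTimeSteps = spec.length;
  const numChannels = spec[0].length;
  const out = spec.map((row) => row.slice()); // copy; leave the input untouched

  // Frequency mask: zero out freqMaskWidth consecutive mel channels starting at f0.
  for (let t = 0; t < numTimeSteps; t++) {
    for (let f = f0; f < Math.min(f0 + freqMaskWidth, numChannels); f++) {
      out[t][f] = 0;
    }
  }

  // Time mask: zero out timeMaskWidth consecutive time steps starting at t0.
  for (let t = t0; t < Math.min(t0 + timeMaskWidth, numTimeSteps); t++) {
    out[t].fill(0);
  }
  return out;
}
```

During training, the mask positions and widths would be drawn at random for each example, and several masks can be applied to the same spectrogram.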
To test SpecAugment, we performed some experiments with the LibriSpeech dataset, where we took three Listen Attend and Spell (LAS) networks, end-to-end networks commonly used for speech recognition, and compared the test performance between networks trained with and without augmentation. The performance of an ASR network is measured by the Word Error Rate (WER) of the transcript produced by the network against the target transcript. Here, all hyperparameters were kept the same, and only the data fed into the network was altered. We found that SpecAugment improves network performance without any additional adjustments to the network or training parameters.
Performance of networks on the test sets of LibriSpeech with and without augmentation. The LibriSpeech test set is divided into two portions, test-clean and test-other, the latter of which contains noisier audio data.
More importantly, SpecAugment prevents the network from over-fitting by giving it deliberately corrupted data. As an example of this, below we show how the WER for the training set and the development (or dev) set evolves through training with and without augmentation. We see that without augmentation, the network achieves near-perfect performance on the training set, while grossly under-performing on both the clean and noisy dev set. On the other hand, with augmentation, the network struggles to perform as well on the training set, but actually shows better performance on the clean dev set, and shows comparable performance on the noisy dev set. This suggests that the network is no longer over-fitting the training data, and that improving training performance would lead to better test performance.
Training, clean (dev-clean) and noisy (dev-other) development set performance with and without augmentation.
State-of-the-Art Results
We can now focus on improving training performance, which can be done by adding more capacity to the networks by making them larger. By doing this in conjunction with increasing training time, we were able to get state-of-the-art (SOTA) results on the tasks LibriSpeech 960h and Switchboard 300h.
Word error rates (%) for state-of-the-art results for the tasks LibriSpeech 960h and Switchboard 300h. The test sets for both tasks have a clean (clean/Switchboard) and a noisy (other/CallHome) subset. Previous SOTA results taken from Li et al. (2019), Yang et al. (2018) and Zeyer et al. (2018).
The simple augmentation scheme we have used is surprisingly powerful—we are able to improve the performance of the end-to-end LAS networks so much that it surpasses those of classical ASR models, which traditionally did much better on smaller academic datasets such as LibriSpeech or Switchboard.
Performance of various classes of networks on LibriSpeech and Switchboard tasks. The performance of LAS models is compared to classical (e.g., HMM) and other end-to-end models (e.g., CTC/ASG) over time.
Language Models
Language models (LMs), which are trained on a bigger corpus of text-only data, have played a significant role in improving the performance of ASR networks by leveraging information learned from text. However, LMs typically need to be trained separately from the ASR network and can take up a lot of memory, making them hard to fit on a small device, such as a phone. An unexpected outcome of our research was that models trained with SpecAugment out-performed all prior methods even without the aid of a language model. While our networks still benefit from adding an LM, our results are encouraging in that they suggest the possibility of training networks that can be used for practical purposes without one.
Word error rates for LibriSpeech and Switchboard tasks with and without LMs. SpecAugment outperforms previous state-of-the-art even before the inclusion of a language model.
Most of the work on ASR in the past has been focused on looking for better networks to train. Our work demonstrates that looking for better ways to train networks is a promising alternative direction of research.

Acknowledgements
We would like to thank the co-authors of our paper Chung-Cheng Chiu, Ekin Dogus Cubuk, Quoc Le, Yu Zhang and Barret Zoph. We also thank Yuan Cao, Ciprian Chelba, Kazuki Irie, Ye Jia, Anjuli Kannan, Patrick Nguyen, Vijay Peddinti, Rohit Prabhavalkar, Yonghui Wu and Shuyuan Zhang for useful discussions.

Reinvent the Wheel



Samsung confirms Galaxy Fold delay, shares ‘initial findings’ on faulty units


Samsung has just confirmed that it will delay the release of the Galaxy Fold. Confirming this morning’s report, the company sent TechCrunch a statement noting that the foldable will not make its previously announced Friday ship date.

Once again, no details on availability are forthcoming — which is honestly probably for the best, as the company assesses the situation. The news follows reports of malfunctioning displays from multiple reviewers. They were in the minority — ours is still working just fine — but three or four in such a small sample size is enough to raise concern.

The company says it will “announce the release date in the coming weeks.”

The statement is understandably still a bit defensive, but this time out, Samsung actually has “initial findings” to share from those faulty units. According to the company,

Initial findings from the inspection of reported issues on the display showed that they could be associated with impact on the top and bottom exposed areas of the hinge. There was also an instance where substances found inside the device affected the display performance.

It’s bad news for the device that’s being positioned as the future of both Samsung and the mobile space in general, but the company’s been through worse PR and come out largely unscathed. The Galaxy Note 7 ultimately did little to damage Samsung’s bottom line, thanks to a booming component business. And that product was already shipping — resulting in two separate recalls.

At least here the company was able to delay the device before it started shipping. It’s hard to say precisely how widespread these issues are — and preproduction units are notorious for having issues. But the statement does appear to be a cautious admission that there’s more going on here than just reviewers accidentally peeling back the protective layer.

 


Read Full Article

Facebook makes its first browser API contribution


Facebook today announced that it has made its first major API contribution to Google’s Chrome browser. Together with Google, Facebook’s team created an API proposal to contribute code to the browser, which is a first for the company. The code, like so much of Facebook’s work on web tools and standards, focuses on making the user experience a bit smoother and faster. In this case, that means shortening the time between a click or keystroke and the browser’s reaction to it.

The first trial for this new system will launch with Chrome 74.

Typically, a browser’s JavaScript engine handles how code is executed and when it will halt for a moment to see if there are any pending input events to which it needs to react. Because even modern JavaScript engines that run on multi-core machines are still essentially single-threaded, the engine can only really do one thing at a time, so the trick is to figure out how to best combine code execution with checking for input events.

“Like many other sites, we deal with this issue by breaking the JavaScript up into smaller blocks. While the page is loading, we run a bit of JavaScript, and then we yield and pass control back to the browser,” the Facebook team explains in today’s announcement. “The browser can then check its input event queue and see whether there is anything it needs to tell the page about. Then the browser can go back to running the JavaScript blocks as they get added.”

Every time the browser goes through that cycle of checking for new events and processing them, though, a bit of extra time passes. Do this too many times and loading the page slows down. But if you only check for inputs at slower intervals, the user experience degrades as the browser takes longer to react.

To fix this, Facebook’s engineers created the isInputPending API, which eliminates this trade-off. The API, which Facebook also brought to the W3C Web Performance Working Group, allows developers to check whether there are any inputs pending while their code is executing.

With this, the code simply checks if there’s something to react to, without having to fully yield control back to the browser and then passing it back to the JavaScript engine.
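The pattern described above can be sketched as follows. In a real page the predicate would be `() => navigator.scheduling.isInputPending()`; here it is passed in as an argument so the sketch is self-contained, and `runTasks` is an illustrative name rather than part of the API:

```javascript
// Run queued blocks of work until input is pending, then return the rest so
// the caller can yield to the browser and resume later. In the browser,
// isInputPending would be () => navigator.scheduling.isInputPending().
function runTasks(tasks, isInputPending) {
  const remaining = tasks.slice();
  while (remaining.length > 0 && !isInputPending()) {
    const task = remaining.shift();
    task(); // execute one block of JavaScript work
  }
  return remaining; // tasks left to run after yielding
}
```

A page would then yield between batches, e.g. with `setTimeout(() => runTasks(rest, pending), 0)`, giving the browser a chance to handle the input that was detected.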

For now this is just a trial — and because developers must integrate this into their code, it’s not something that will automatically speed up your browser once Chrome 74 launches. If the trial is successful, though, chances are developers will make use of it (and Facebook surely will do so itself) and that other browser vendors will integrate it into their own engines, too.

“The process of bringing isInputPending to Chrome represents a new method of developing web standards at Facebook,” the team says. “We hope to continue driving new APIs and to ramp up our contributions to open source web browsers. Down the road, we could potentially build this API directly into React’s concurrent mode so developers would get the API benefits out of the box. In addition, isInputPending is now part of a larger effort to build scheduling primitives into the web.”


Read Full Article


Down To Shop is a tongue-in-cheek mobile shopping network


Cyrus Summerlin and Max Hellerstein, who previously created the Push for Pizza app (which allowed users to order a pizza with the push of a button), are officially launching their new startup today, Down to Shop.

The app bills itself as both a modern reinvention of QVC and “the funnest way to shop.” It allows users to watch funny videos featuring products that can be purchased directly from the app.

In an email, Hellerstein said the pair created Down to Shop out of dissatisfaction with existing advertising and e-commerce. Summerlin described it as “a hypermedia commerce platform.”

“We’ve created a self aware, fun and entertaining, interactive environment that gets customers to engage with brands like never before — because they want to,” Summerlin said. “What a concept!”

To do this, Down to Shop says it has recruited a creative team of Upright Citizens Brigade alums and Instagram influencers to star in its shows, which are written, filmed and edited in the startup’s Los Angeles studios. (Walid Mohammad oversees the creative side.) The content is built around four-week seasons, with daily episodes across five shows each season.

Down to Shop

You can actually download the iOS app now, then swipe through different videos and games. Judging from the videos available at launch, the app is holding true to its promise of “content first, advertising second,” with laidback, tongue-in-cheek shows that also happen to feature promoted products.

By playing games and watching videos, you also earn Clout, the in-app currency that can be used to make purchases. As for the products available to purchase, the company says it’s already working with more than 60 brands, including Sustain Condoms, Dirty Lemon (water) and Pretty Litter (cat litter).

Down to Shop’s investors include Greycroft, Lerer Hippeau and Firstmark. The startup isn’t disclosing the size of its funding, but according to a regulatory filing, it raised $5.9 million last fall.

 


Read Full Article

Samsung reportedly pushes back Galaxy Fold release


Can’t say we didn’t see this coming. Four days out from the Galaxy Fold’s official release date, Samsung is pushing things back a bit, according to a report from The Wall Street Journal that cites “people familiar with the matter.”

There’s no firm timeframe for the launch, though the phone is still expected “in the coming weeks,” at some point in May. We’ve reached out to Samsung for comment and will update accordingly. When a number of reviewers reported malfunctioning displays among an extremely small sample size, that no doubt gave the company pause.

I’ve not experienced any issues with my own device yet, but this sort of thing can’t be ignored. Samsung’s initial response seemed aimed at mitigating pushback, writing, “A limited number of early Galaxy Fold samples were provided to media for review. We have received a few reports regarding the main display on the samples provided. We will thoroughly inspect these units in person to determine the cause of the matter.”

It also went on to note that the problems may have stemmed from users attempting to peel back a “protective layer.” Things took a turn to the more cautious over the weekend, however, when it was reported that the phone’s launch events in parts of Asia would be delayed (we reached out about that, as well, but haven’t heard back). Since then, a larger delay has seemed all but inevitable.


Read Full Article

LibreOffice vs. OpenOffice: Which One Should You Use?



If you need to edit documents, spreadsheets, or presentations without Microsoft Office, your options are growing. Chances are if you’ve spent time searching for Office alternatives, you’ve encountered either LibreOffice or OpenOffice. Maybe you’ve encountered both.

They both offer similar feature sets, so you might wonder which one wins in the LibreOffice vs. OpenOffice face-off. You’re not the only one. It can be tough to pick between the two, or even tell them apart. Here’s what you need to know.

The Origins of OpenOffice and LibreOffice

Though the entire OpenOffice vs. LibreOffice debacle seems relatively recent, its roots date back to 1985. That was the year that StarOffice was born. At the time, it went by the name StarWriter. The company was bought by Sun Microsystems in 1999.

StarWriter for DOS

In 2000, Sun announced an open-source version of StarOffice, known as OpenOffice.org. This quickly became the default office suite on Linux distributions. Everything stayed relatively stable until 2010 when Oracle acquired Sun and became the de facto leader of OpenOffice.org development.

OpenOffice.org community members weren’t thrilled with Oracle’s past behavior in the open-source space and began discussing a fork. Later in 2010, LibreOffice was forked from OpenOffice.org, with the Document Foundation as the host. Oracle was invited to join the Document Foundation but declined.

LibreOffice About screen

Oracle renamed StarOffice as Oracle Open Office, which led to confusion. Many OpenOffice.org developers also began leaving the project. Development on both OpenOffice.org and Oracle Open Office came to a halt not long after.

Why Are There Two Similar Office Suites?

When Oracle stopped active development on OpenOffice.org, it gave the trademarks and code to the Apache Foundation. For years nothing happened, but since 2014, Apache OpenOffice has seen regular releases and updates. This has renewed the LibreOffice vs. OpenOffice debate.

OpenOffice About screen

LibreOffice is the more actively developed of the two. It routinely adds new features and bugs are fixed more quickly. It also remains the more popular of the two.

OpenOffice adds features at a much slower rate, but this does have the side effect of introducing fewer bugs. Some still worry about the future of OpenOffice, as it appears to have been in danger of shutting down as recently as 2016.

License Differences Mean Feature Differences

License differences usually only matter to two camps: businesses, and people who care about the nature of “free as in beer” and “free as in libre” software. In the case of LibreOffice vs. OpenOffice, though, the licenses used by each actually do affect the feature sets of both.

OpenOffice uses the Apache License, while LibreOffice uses a dual LGPLv3 / MPL license.

You don’t need to know the details, except for one aspect: LibreOffice can freely incorporate code and features from OpenOffice, but OpenOffice can’t incorporate anything from LibreOffice.

This lets LibreOffice add features at an even faster rate, as it can adopt new features from OpenOffice as they are added.

User Interface (UI) Differences

There are slight differences, but for the most part, both LibreOffice and OpenOffice look similar. Of course, they look as similar to each other as LibreOffice Writer or OpenOffice Writer look to an older version of Word. Word processors tend to look like word processors.

LibreOffice tends toward a cleaner interface, as is the modern trend. OpenOffice, on the other hand, crams more features in by default. Launch each office suite’s word processor and you will immediately spot that OpenOffice has a sidebar that is missing from LibreOffice.

Sidebar in LibreOffice

Of course, it isn’t actually missing. It just isn’t shown in LibreOffice by default. Instead, you need to click a subtle arrow on the right of the screen. This is an example of the differences between the two when it comes to the user interface.

File Format Differences

Both LibreOffice and OpenOffice support opening a ton of different formats. You can open any Microsoft Office filetype in either suite. The big difference is apparent when it’s time to save your work.

When you go to save in either, you’ll see a lot of the same options. Both default to the Open Document Format, which uses the .odt extension.

  • OpenOffice can save Word documents, but only the older .doc format.
  • LibreOffice can save both the older .doc and newer .docx formats.

Saving a modern Office document in LibreOffice

If you need interoperability with modern Microsoft Office installations, neither of these will be 100 percent perfect. Still, the ability to save more modern formats gives LibreOffice the win here.

Software Size Differences

LibreOffice is a larger download than OpenOffice, though unless you’re bandwidth limited, this shouldn’t matter much. The macOS download of LibreOffice 6.2 is just over 250 MB, while the OpenOffice 4.1 installer is around 185 MB.

The installation size is larger as well. OpenOffice requires between 400 MB and 650 MB disk space depending on the platform. LibreOffice, on the other hand, requires between 800 MB and 1.55 GB disk space.

Both require between 256 MB and 512 MB of RAM depending on the platform, so that’s one area where the requirements are similar.

Mobile Apps

Both LibreOffice and OpenOffice have compatible mobile apps. With OpenOffice, you get the ported AndrOpen Office app, which is available for Android. This lets you open and edit files from OpenOffice Writer, Calc, Impress, Draw, and Math. You can also use the app to edit older Office documents.

LibreOffice has the LibreOffice Viewer. This can edit documents, spreadsheets, and presentations, both in Open Document Format and Office formats. The difference is that this supports newer Office formats, which AndrOpen Office does not.

One more option for LibreOffice is Impress Remote. This lets you control Impress presentations from both iOS and Android devices. If this is important to you, it gives one more point to LibreOffice when you have to make a choice.

Download: AndrOpen Office for Android (Free)

Download: LibreOffice Viewer for Android (Free)

Download: Impress Remote for Android | iOS (Free)

LibreOffice vs. OpenOffice: Which One?

It’s probably apparent, but if you want new features faster, pick LibreOffice. The rate at which it adds them is dramatically faster than OpenOffice. Not everyone will consider this a good thing though.

OpenOffice toolbar

Because it adds features more slowly, OpenOffice is less likely to change. If you want to keep your software up to date with bug fixes and security updates, this can be a good thing. You don’t have to worry about updating only to find the entire interface has changed and you need to re-learn it.

Do You Just Need Microsoft Office for Cheap?

Do you really need either LibreOffice or OpenOffice? Or are you just looking to edit Office documents without paying the hefty fee for Microsoft Office?

If you’re a student or work for a non-profit organization, you can get Microsoft Office 365 for free. If neither of these applies to you, there are still a few ways you can get by. Take a look at our guide to the various options for getting Microsoft Office without paying for it.

Read the full article: LibreOffice vs. OpenOffice: Which One Should You Use?


Read Full Article

How to Use the Blending Mode in Photoshop



The blending mode in Photoshop is one of the most creative and exciting tools in your workspace. By utilizing a series of layers with different properties you can create all sorts of visual tricks.

Because of its expansive nature, Photoshop’s blending mode can be a little daunting. So, to help beginners get to grips with it, let’s explore the basics of the blending mode in Photoshop together.

Step 1: Set Up Your File

Using Blending Mode in Photoshop New Document

As we covered in our tutorial on how to create Photoshop textures, the first thing you’ll need to do is set up your file.

For this tutorial you don’t need specific dimensions; in cases like this, we recommend going with Adobe’s Default Photoshop Size.

Using Blending Mode in Photoshop Your Workspace

When you create your new file, you’ll see something similar to this. In the center of your workspace is a big white square.

If you look towards the bottom right-hand corner of your screen, you’ll see another smaller white square.

This is how your image shows up in the Layers panel as a preview.

Using Blending Mode in Photoshop Layers Panel Close Up

The Layers panel is what we’ll be focusing on for the remainder of this tutorial.

If you zoom in, you can see your image is on a locked layer—indicated by the little padlock icon beside it. At the top of the Layers panel you’ll see three tabs: Layers, Channels, and Paths.

Layers is the primary tab we’ll be using. You can also use the Channels tab to check your blending properties.

Let’s explore it.

Step 2: Exploring Channels

First, add some color to your image. For this tutorial we’re going to apply a simple gradient to see how the blending mode will affect a layer across a blue-to-red spectrum.

If you’re unsure about this step, check out our tutorial on how to create a custom gradient in Photoshop.

Next, we’re going to go to our Layers panel and click on Channels.

This is where Photoshop stores all the color information about your image. By controlling the visibility of these colors—by toggling the eye icon next to the individual channel—you can see how each color interacts within a layer.

For example, if I turn off Red in my Channels, everything in the image turns blue. That’s because I’ve turned off the visibility on anything that may have a reddish tint.

To turn the red back on, click on the empty box next to Red, so your eye icon returns.

Note: Turning off the visibility on a color channel does not mean that the color will be stripped from your image when you save it.
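To see why hiding a channel is non-destructive, here’s a tiny Python sketch. This isn’t how Photoshop works internally, just an illustration: the preview gets a copy with one channel zeroed, while the stored pixel keeps all of its values.

```python
def hide_channel(pixel, index):
    # Return a display copy with one channel zeroed out.
    # The original pixel is untouched, so no color data is lost.
    return tuple(0.0 if i == index else v for i, v in enumerate(pixel))

pixel = (0.9, 0.4, 0.2)           # stored R, G, B values (0.0-1.0)
preview = hide_channel(pixel, 0)  # red hidden for display
print(preview)  # (0.0, 0.4, 0.2)
print(pixel)    # (0.9, 0.4, 0.2) -- the data survives a save
```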

Step 3: Add a Blending Element

Next, we’re going to add another element in a second layer to see how those two layers blend together.

To keep things simple, create a new layer in your Layers panel. Make sure the layer sits above your gradient. Add a dash of color with a paintbrush.

To add a color, click on your Brush tool, found on the left-hand toolbar.

Next, click on the Brush preset icon found in the top left-hand corner of your workspace.

To pick a brush, scroll through the presets until you find a subfolder called General Brushes. Open it.

For this tutorial we’re going to use a Hard Round brush and blow up the size. This will allow you to create a large circle without using the Ellipse tool.

After you drop your color on this new layer, give it a meaningful name to remember what you’re doing with it. For this tutorial I’m going to call mine “Blending Layer”.

Step 4: Experiment With Blending Mode

Now that you have your blending layer set up, it’s time to experiment with the blending mode. The dropdown menu you’ll be working with sits at the top of the Layers panel, next to the Opacity control.

As you can see, the blending mode is currently set to Normal, which means the orange circle sits on top of the gradient and doesn’t interact with it.

Click on your “Blending Layer” to make sure it’s active, then click on the dropdown menu to start playing around with the effects.

There are a lot of different blending modes in the dropdown menu.

A cool thing about Photoshop is that instead of having to click each individual option to see what it does, Adobe automatically previews the mode as you mouse over it.

You’ll notice there are soft grey lines between some of the blending modes. This is because Adobe groups those modes based upon the type of effect they will create.

If you scroll down and click on something like Multiply, your circle will become darker. Not only does it get darker, but it picks up the dark-to-light values of the gradient as well.
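The darkening is no accident: Multiply literally multiplies each channel of the two layers together. Here’s a minimal sketch of the math, assuming channel values normalized to the 0.0–1.0 range (Photoshop works on 8-bit values, but the arithmetic is the same):

```python
def multiply(base, blend):
    # Multiply blend: per-channel product. Since both inputs are at
    # most 1.0, the result is always as dark or darker than either.
    return tuple(b * t for b, t in zip(base, blend))

orange = (1.0, 0.5, 0.0)
mid_blue = (0.2, 0.3, 0.8)
print(multiply(orange, mid_blue))  # (0.2, 0.15, 0.0) -- darker overall
```

Multiplying by white (all 1.0) changes nothing, which is why Multiply is handy for dropping white backgrounds out of scans.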

If you’re looking to make your circle lighter, go down to the next section and click on blending modes like Screen or Lighten.
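These two are Multiply’s mirror image. A sketch under the same normalized-channel assumption: Screen inverts both layers, multiplies them, and inverts the result, while Lighten just keeps the lighter value per channel.

```python
def screen(base, blend):
    # Screen: invert, multiply, invert back. The output is always
    # as light or lighter than either input.
    return tuple(1 - (1 - b) * (1 - t) for b, t in zip(base, blend))

def lighten(base, blend):
    # Lighten simply keeps the lighter of the two values per channel.
    return tuple(max(b, t) for b, t in zip(base, blend))

print(screen((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)))   # (0.75, 0.75, 0.75)
print(lighten((0.2, 0.8, 0.5), (0.5, 0.3, 0.5)))  # (0.5, 0.8, 0.5)
```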

You can also try the Overlay section. The effects in this section vary a lot, but essentially they take dark and light values from both layers, plus the colors, and combine all three to create a new effect.
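That combination can be written down. Overlay, the namesake of the group, multiplies where the base layer is dark and screens where it is light, which pushes dark areas darker and light areas lighter for a contrast boost. A sketch, again assuming 0.0–1.0 channels:

```python
def overlay(base, blend):
    # Dark base (< 0.5): doubled multiply. Light base: doubled screen.
    # Dark areas get darker, light areas lighter -- more contrast.
    return tuple(
        2 * b * t if b < 0.5 else 1 - 2 * (1 - b) * (1 - t)
        for b, t in zip(base, blend)
    )

# A 50% gray blend layer leaves the base unchanged, which is why
# gray is the "neutral" color for Overlay dodging and burning.
print(overlay((0.25, 0.75, 0.5), (0.5, 0.5, 0.5)))  # (0.25, 0.75, 0.5)
```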

Towards the bottom you’ll find a group of blending modes with options for Difference, Exclusion, Subtract, and Divide.
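These comparative modes are arithmetic on the two layers. A brief sketch of two of them, on the same normalized values (the clamping behavior is our simplification; Photoshop handles the edge cases internally):

```python
def difference(base, blend):
    # Difference: absolute gap between the layers, per channel.
    # Identical layers therefore produce pure black.
    return tuple(abs(b - t) for b, t in zip(base, blend))

def divide(base, blend):
    # Divide brightens: base / blend, clamped into the 0-1 range.
    return tuple(min(1.0, b / t) if t > 0 else 1.0
                 for b, t in zip(base, blend))
```

Difference against an identical copy of a layer turning pure black makes it a quick way to spot whether two versions of an image actually differ.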

Lastly, you’ll get to a section where you can see options for Hue, Saturation, Color, and Luminosity.
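These component modes swap parts of the color description rather than doing per-channel arithmetic. As a rough illustration of the idea (Photoshop uses its own luminosity model, so this HLS version built on Python’s `colorsys` module is only an approximation, not Photoshop’s exact math), Color mode keeps the base layer’s lightness but takes hue and saturation from the blend layer:

```python
import colorsys

def color_blend(base, blend):
    # Keep the base layer's lightness; take hue and saturation from
    # the blend layer. HLS approximation of Photoshop's "Color" mode.
    _, l_base, _ = colorsys.rgb_to_hls(*base)
    h_blend, _, s_blend = colorsys.rgb_to_hls(*blend)
    return colorsys.hls_to_rgb(h_blend, l_base, s_blend)

# Painting pure red in Color mode over mid-gray keeps the gray's
# lightness but adopts the red hue -- handy for recoloring.
print(color_blend((0.5, 0.5, 0.5), (1.0, 0.0, 0.0)))
```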

Step 5: Change Your Opacity

We’re almost done with this tutorial, but there are a few more things to cover before we wrap up.

On your Layers panel, beside the blending mode dropdown, you can also change the Opacity of your layer.

By sliding the arrow left or right along the opacity slider, you can create additional, unique effects.
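Under the hood, opacity is a plain linear mix between the base and the blended result. A sketch, again assuming 0.0–1.0 channels, where `blended` stands for the output of whatever blend mode is active (the names here are ours, not Photoshop’s):

```python
def apply_opacity(base, blended, opacity):
    # opacity = 1.0 shows the full blend result; 0.0 shows only the
    # base layer; anything between is a straight linear mix.
    return tuple(b * (1 - opacity) + r * opacity
                 for b, r in zip(base, blended))

print(apply_opacity((0.0, 0.0, 0.0), (1.0, 0.5, 0.0), 0.5))  # (0.5, 0.25, 0.0)
```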

Step 6: Access Blending Options

Additionally, you can create blending effects by clicking the fx icon at the bottom of your Layers panel. Once you do, click Blending Options.

A new box called Layer Style will pop up. Here you can cycle through an incredible array of choices to apply to your image.

We recommend going through each one and trying them out to see what you can do.

Step 7: Lock Your Layer

Lastly, you might decide that you’re done with this layer and don’t want to make any more changes. To prevent further changes from happening:

  1. Go to your Layers panel.
  2. Click on the layer you want to lock.
  3. Either click the checkerboard icon or the padlock icon.

The checkerboard icon will lock the layer’s transparent pixels. This means that you can draw inside the circle you created, but not outside it.

The padlock icon will lock all pixels—meaning that nothing can be edited or moved around, including your circle.

Once you’re done, click File > Save As to save your image.

Delving Deeper Into Photoshop

Photoshop’s blending mode is a wonderful tool, and by learning the basics you’ll be well on your way to creating unique and compelling images.

If there are other parts of Photoshop you want to explore, why not start with our tutorial detailing how to create custom brushes in Photoshop?

Read the full article: How to Use the Blending Mode in Photoshop

