15 March 2018

Facebook launches Express Wi-Fi app for its local-operated hotspots


Facebook wants you to pay it for internet. This week TechCrunch was tipped off that Facebook had quietly launched an Express Wi-Fi Android app in the Google Play store that lets users buy data packs and find nearby hotspots as part of Facebook’s distributed Wi-Fi network. The company’s Express Wi-Fi program is live in five developing countries, where local business owners operate Wi-Fi hotspots and people can pay to access higher-speed bandwidth via local telecoms instead of paying steep prices for slow cellular data connections.

Previously, Express Wi-Fi users had to dig out a mobile website, or directly download an app from a telecom that required reconfiguring a phone’s settings. There wasn’t any way to look up where hotspots were located. The new Google Play app can be downloaded the normal way. It’s now live in Indonesia with bandwidth from telecom partner D-Net, and in Kenya through Surf. The app can also tell if a user’s Wi-Fi is turned on to help with setup, and users can file reports to Facebook about connectivity or retailer issues.

The launch signals Facebook expanding its pursuit of developing world audiences who first need Internet access before they can become lucrative Facebook users. Unlike its much-criticized zero-rating program called Free Basics (formerly Internet.org), Express Wi-Fi offers a full, unrestricted version of the web for a price instead of only low-bandwidth services approved by Facebook. This strategy could help it achieve its mission of getting more disconnected people in the developing world online without the net neutrality concerns. Making Express Wi-Fi an actual business might save Facebook from backlash about it masking a user growth driver inside a philanthropic initiative.

Facebook confirmed the launch to TechCrunch, with a spokesperson telling us “Facebook is releasing the Express Wi-Fi app in the Google Play store to give people another simple and secure way to access fast, affordable internet through their local Express Wi-Fi hotspots.” Sensor Tower first tipped us off to the app.

Weak or expensive connectivity is a huge barrier to Facebook deepening its popularity in the developing world at a time when it’s reaching saturation or even shrinking in some developed nations. Facebook saw its first user loss ever in the U.S. and Canada region in Q4, with daily active users decreasing by 700,000 in part because of News Feed changes that reduced the presence of engagement-drawing viral videos.

Facebook needs user growth more than ever, and the developing world is where it can find it. That’s why it’s developing advanced technologies like the Aquila solar drone and satellites that can beam down connectivity. It’s also working with telecoms that use microwave towers to beam backhaul bandwidth to its Express Wi-Fi units.

Monetizing the international market has been a big focus for the company. It’s launched new region-specific and low-bandwidth ad units like click-to-missed-call and slideshows. It’s paid off. From 2012 to 2016, average revenue per user grew 4X in the Rest Of World region. And that revenue grows even faster when people can load Facebook quickly and cheaply thanks to strong Wi-Fi access. The more accessible Facebook makes this program, the more it could see those Internet users turn into social networkers.


Read Full Article



Using Evolutionary AutoML to Discover Neural Network Architectures




The brain has evolved over a long time, from very simple worm brains 500 million years ago to a diversity of modern structures today. The human brain, for example, can accomplish a wide variety of activities, many of them effortlessly — telling whether a visual scene contains animals or buildings feels trivial to us, for example. To perform activities like these, artificial neural networks require careful design by experts over years of difficult research, and typically address one specific task, such as to find what's in a photograph, to call a genetic variant, or to help diagnose a disease. Ideally, one would want to have an automated method to generate the right architecture for any given task.

One approach to generate these architectures is through the use of evolutionary algorithms. Traditional research into neuro-evolution of topologies (e.g. Stanley and Miikkulainen 2002) has laid the foundations that allow us to apply these algorithms at scale today, and many groups are working on the subject, including OpenAI, Uber Labs, Sentient Labs and DeepMind. Of course, the Google Brain team has been thinking about AutoML too. In addition to learning-based approaches (e.g., reinforcement learning), we wondered if we could use our computational resources to programmatically evolve image classifiers at unprecedented scale. Can we achieve solutions with minimal expert participation? How good can today's artificially-evolved neural networks be? We address these questions through two papers.

In “Large-Scale Evolution of Image Classifiers,” presented at ICML 2017, we set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to "sit back" and let evolution at scale do the work of constructing the architecture. Starting from very simple networks, the process found classifiers comparable to hand-designed models at the time. This was encouraging because many applications may require little user participation. For example, some users may need a better model but may not have the time to become machine learning experts. A natural question to consider next was whether a combination of hand-design and evolution could do better than either approach alone. Thus, in our more recent paper, “Regularized Evolution for Image Classifier Architecture Search” (2018), we participated in the process by providing sophisticated building blocks and good initial conditions (discussed below). Moreover, we scaled up computation using Google's new TPUv2 chips. This combination of modern hardware, expert knowledge, and evolution worked together to produce state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.

A Simple Approach
The following is an example of an experiment from our first paper. In the figure below, each dot is a neural network trained on the CIFAR-10 dataset, which is commonly used to train image classifiers. Initially, the population consists of one thousand identical simple seed models (no hidden layers). Starting from simple seed models is important — if we had started from a high-quality model with initial conditions containing expert knowledge, it would have been easier to get a high-quality model in the end. Once seeded with the simple models, the process advances in steps. At each step, a pair of neural networks is chosen at random. The network with higher accuracy is selected as a parent and is copied and mutated to generate a child that is then added to the population, while the other neural network dies out. All other networks remain unchanged during the step. With the application of many such steps in succession, the population evolves.
Progress of an evolution experiment. Each dot represents an individual in the population. The four diagrams show examples of discovered architectures. These correspond to the best individual (rightmost; selected by validation accuracy) and three of its ancestors.
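The selection loop described above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: `train_and_eval`, the mutation names, and the population sizes are all stand-ins for real network training at scale.

```python
import random

# Toy stand-ins: an "architecture" is a list of layer descriptors, and
# train_and_eval returns its validation accuracy. All names here are
# illustrative, not the paper's actual operators.
MUTATIONS = ["add_skip_connection", "remove_convolution", "change_learning_rate"]

def mutate(arch):
    """Copy the parent and apply one simple mutation chosen at random."""
    return list(arch) + [random.choice(MUTATIONS)]

def evolve(train_and_eval, population_size=1000, steps=10000):
    # Seed with identical trivial models (no hidden layers).
    population = [([], train_and_eval([])) for _ in range(population_size)]
    for _ in range(steps):
        # Pick a pair at random; the one with higher accuracy becomes the parent.
        i, j = random.sample(range(population_size), 2)
        parent_idx, loser_idx = (i, j) if population[i][1] >= population[j][1] else (j, i)
        child = mutate(population[parent_idx][0])
        # The child replaces the loser; the rest of the population is untouched.
        population[loser_idx] = (child, train_and_eval(child))
    return max(population, key=lambda model: model[1])
```

With many such steps, fitter lineages accumulate mutations while weaker ones die out, which is exactly the dynamic the figure above traces.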
The mutations in our first paper are purposefully simple: remove a convolution at random, add a skip connection between arbitrary layers, or change the learning rate, to name a few. This way, the results show the potential of the evolutionary algorithm, as opposed to the quality of the search space. For example, if we had used a single mutation that transforms one of the seed networks into an Inception-ResNet classifier in one step, we would be incorrectly concluding that the algorithm found a good answer. Yet, in that case, all we would have done is hard-coded the final answer into a complex mutation, rigging the outcome. If instead we stick with simple mutations, this cannot happen and evolution is truly doing the job. In the experiment in the figure, simple mutations and the selection process cause the networks to improve over time and reach high test accuracies, even though the test set had never been seen during the process. In this paper, the networks can also inherit their parent's weights. Thus, in addition to evolving the architecture, the population trains its networks while exploring the search space of initial conditions and learning-rate schedules. As a result, the process yields fully trained models with optimized hyperparameters. No expert input is needed after the experiment starts.

In all the above, even though we were minimizing the researcher's participation by having simple initial architectures and intuitive mutations, a good amount of expert knowledge went into the building blocks those architectures were made of. These included important inventions such as convolutions, ReLUs and batch-normalization layers. We were evolving an architecture made up of these components. The term "architecture" is not accidental: this is analogous to constructing a house with high-quality bricks.

Combining Evolution and Hand Design
After our first paper, we wanted to reduce the search space to something more manageable by giving the algorithm fewer choices to explore. Using our architectural analogy, we removed all the possible ways of making large-scale errors, such as putting the wall above the roof, from the search space. Similarly with neural network architecture searches, by fixing the large-scale structure of the network, we can help the algorithm out. So how to do this? The inception-like modules introduced in Zoph et al. (2017) for the purpose of architecture search proved very powerful. Their idea is to have a deep stack of repeated modules called cells. The stack is fixed but the architecture of the individual modules can change.
The building blocks introduced in Zoph et al. (2017). The diagram on the left is the outer structure of the full neural network, which parses the input data from bottom to top through a stack of repeated cells. The diagram on the right is the inside structure of a cell. The goal is to find a cell that yields an accurate network.
In our second paper, “Regularized Evolution for Image Classifier Architecture Search” (2018), we presented the results of applying evolutionary algorithms to the search space described above. The mutations modify the cell by randomly reconnecting the inputs (the arrows on the right diagram in the figure) or randomly replacing the operations (for example, they can replace the "max 3x3" in the figure, a max-pool operation, with an arbitrary alternative). These mutations are still relatively simple, but the initial conditions are not: the population is now initialized with models that must conform to the outer stack of cells, which was designed by an expert. Even though the cells in these seed models are random, we are no longer starting from simple models, which makes it easier to get to high-quality models in the end. If the evolutionary algorithm is contributing meaningfully, the final networks should be significantly better than the networks we already know can be constructed within this search space. Our paper shows that evolution can indeed find state-of-the-art models that either match or outperform hand-designs.
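Sketched concretely, a cell can be encoded as a small graph of operations, and the two mutation types then amount to rewiring one input or swapping one operation. The encoding and operation names below are hypothetical, chosen only to mirror the description above, not the paper's exact search space.

```python
import random

# Hypothetical encoding: each node in a cell is (input_a, input_b, op_name),
# where inputs index earlier states (0 and 1 are the cell's external inputs,
# and node i's own output becomes state i + 2).
OPS = ["max_pool_3x3", "avg_pool_3x3", "sep_conv_3x3", "sep_conv_5x5", "identity"]

def random_cell(num_nodes=5):
    """A random seed cell conforming to the fixed outer stack structure."""
    return [(random.randrange(i + 2), random.randrange(i + 2), random.choice(OPS))
            for i in range(num_nodes)]

def mutate_cell(cell):
    """Return a copy of the cell with one random rewiring or op replacement."""
    nodes = [list(node) for node in cell]
    idx = random.randrange(len(nodes))
    if random.random() < 0.5:
        # Hidden-state mutation: reconnect one input to any earlier state.
        which = random.randrange(2)
        nodes[idx][which] = random.randrange(idx + 2)
    else:
        # Op mutation: replace the operation with a random alternative.
        nodes[idx][2] = random.choice(OPS)
    return [tuple(node) for node in nodes]
```

Because the outer stack is fixed, every mutated cell still yields a valid network, which is what keeps this reduced search space manageable.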

A Controlled Comparison
Even though the mutation/selection evolutionary process is not complicated, maybe an even more straightforward approach (like random search) could have done the same. Other alternatives, though not simpler, also exist in the literature (like reinforcement learning). Because of this, the main purpose of our second paper was to provide a controlled comparison between techniques.
Comparison between evolution, reinforcement learning, and random search for the purposes of architecture search. These experiments were done on the CIFAR-10 dataset, under the same conditions as Zoph et al. (2017), where the search space was originally used with reinforcement learning.
The figure above compares evolution, reinforcement learning, and random search. On the left, each curve represents the progress of an experiment, showing that evolution is faster than reinforcement learning in the earlier stages of the search. This is significant because with less compute power available, the experiments may have to stop early. Moreover, evolution is quite robust to changes in the dataset or search space. Overall, the goal of this controlled comparison is to provide the research community with the results of a computationally expensive experiment. In doing so, it is our hope to facilitate architecture searches for everyone by providing a case study of the relationship between the different search algorithms. Note, for example, that the figure above shows that the final models obtained with evolution can reach very high accuracy while using fewer floating-point operations.

One important feature of the evolutionary algorithm we used in our second paper is a form of regularization: instead of letting the worst neural networks die, we remove the oldest ones — regardless of how good they are. This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end. One reason for this may be that since we didn't allow weight inheritance, all networks must train from scratch. Therefore, this form of regularization selects for networks that remain good when they are re-trained. In other words, because a model can be more accurate just by chance — noise in the training process means even identical architectures may get different accuracy values — only architectures that remain accurate through the generations will survive in the long run, leading to the selection of networks that retrain well. More details of this conjecture can be found in the paper.
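As a sketch, age-based removal changes only which model each cycle discards. The code below is a toy version of that idea: `train_and_eval`, `mutate`, and the parameter values are placeholders (the real experiments train networks on TPUs), but the queue structure shows the regularization plainly.

```python
import collections
import random

def regularized_evolution(train_and_eval, mutate, seed_arch,
                          population_size=100, sample_size=25, cycles=1000):
    """Aging evolution: each cycle discards the OLDEST model, not the worst."""
    population = collections.deque()  # Ordered oldest-first.
    history = []
    # Fill the initial population with mutated copies of a seed that conforms
    # to the fixed outer stack of cells.
    while len(population) < population_size:
        arch = mutate(seed_arch)
        model = (arch, train_and_eval(arch))
        population.append(model)
        history.append(model)
    for _ in range(cycles):
        # Tournament selection: sample a few models, breed the most accurate.
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda m: m[1])
        child_arch = mutate(parent[0])
        child = (child_arch, train_and_eval(child_arch))
        population.append(child)
        history.append(child)
        # Regularization: remove the oldest model, regardless of its accuracy.
        population.popleft()
    return max(history, key=lambda m: m[1])
```

Since every model eventually ages out, an architecture can only persist by repeatedly producing accurate descendants that are retrained from scratch, which is the selection pressure toward networks that retrain well.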

The state-of-the-art models we evolved are nicknamed AmoebaNets, and are one of the latest results from our AutoML efforts. All these experiments took a lot of computation — we used hundreds of GPUs/TPUs for days. Much like a single modern computer can outperform thousands of decades-old machines, we hope that in the future these experiments will become commonplace. Here we aimed to provide a glimpse into that future.

Acknowledgements
We would like to thank Alok Aggarwal, Yanping Huang, Andrew Selle, Sherry Moore, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Alex Kurakin, Quoc Le, Barret Zoph, Jon Shlens, Vijay Vasudevan, Vincent Vanhoucke, Megan Kacholia, Jeff Dean, and the rest of the Google Brain team for the collaborations that made this work possible.

Here’s why Spotify will go public via direct listing on April 3rd


Today, during its Investor Day presentation, Spotify explained why it’s ditching the traditional IPO for a direct listing on the NYSE on April 3rd.

Spotify described the rationale for using a direct listing to go public with five points:

  • List Without Selling Shares – Spotify has plenty of money, with $1.3 billion in cash and securities, no debt, and positive free cash flow
  • Liquidity – Investors and employees can sell on public market and sell at time of their choosing, while new investors can join in
  • Equal Access – Bankers won’t get preferred access. Instead, the whole world will get access at the same time
  • Transparency – Spotify wants to show the facts about its business to everyone, rather than giving more info to bankers
  • Market-Driven Price Discovery – Rather than setting a specific price with bankers, Spotify will let the public decide what it’s worth

Spotify won’t wait for the direct listing, and on March 26th will announce first quarter and 2018 guidance before markets open.

It’s unclear exactly what Spotify will be valued at on April 3rd, but during 2018 its shares have traded on the private markets for between $90 and $132.50, valuing the company at $23.4 billion at the top of the range. The music streaming service now has 159 million monthly active users (up 29 percent in 2017) and 71 million paying subscribers (up 46 percent in 2017).

During CEO Daniel Ek’s presentation, he explained that Spotify emerged as a convenient alternative to piracy, making paid or ad-supported access easier than stealing. Now he sees the company as the sole leading music streaming service that’s a dedicated music company, subtly throwing shade at Apple, Google, and Amazon. Ek also discussed the flywheel that drives Spotify’s business: the more people discover music, the more they listen; and the more artists become successful on the platform, the more artists will embrace it and bring their fans.

You can follow along with the presentation here.




PUBG soft-launches on mobile in Canada with Android release


PlayerUnknown’s Battlegrounds, the ‘battle royale’ style game where everyone tries to be the last player standing while scrounging for supplies to stay alive, has launched on Android in Canada, MobileSyrup reports, which could presage a future release in the U.S.

The arrival of the mobile version of the game more generally known as PUBG coincides with it reaching the 5 million player milestone on Xbox, where it’s been available since late last year after debuting on the PC in early access earlier in 2017. It’s not cross-play compatible, unlike Fortnite, however, so if you’re playing the Android version you’ll be matched up against others with the app, which is published by Chinese Internet giant Tencent.

This Android port wasn’t developed by original PUBG studio Bluehole, but the studio says it oversaw the creation of this mobile version. Based on early testing with a Pixel 2 XL, it looks and feels a lot like the original.

PUBG doesn’t have quite the hype of Fortnite right now, since Fortnite has begun a cross-platform play mobile beta and Drake just played a session with one of the most popular professional esports players in the world. But a mobile version close at hand (and available now, if you’re Canadian) is reason to get excited.




Facebook opens Instant Games to all developers


Facebook’s Instant Games are now open to all developers, Facebook announced this week in advance of the Game Developers Conference. First launched in 2016, the platform lets developers build mobile-friendly games using HTML5 that work on both Facebook and Messenger, instead of requiring users to download native apps from Apple or Google’s app stores.

The Instant Games platform kicked off its launch a couple of years ago with 17 games from developers like Bandai Namco, Konami, Taito, Zynga and King, who offered popular titles like Pac-Man, Space Invaders, and Words with Friends. The following year, the platform had grown to 50 titles and became globally available. But it wasn’t open to all – only select partners.

In addition to getting users to spend more time on Facebook’s platform, Instant Games provides Facebook with the potential for new revenue streams now that Facebook is moving into game monetization.

In October, Facebook said it would begin to test interstitial and rewarded video ads, as well as in-app purchases. The tools were only available to select developers on what was then an otherwise closed platform for Facebook’s gaming partners.

Now, says Facebook, all developers can build Instant Games as the platform exits its beta testing period.

Alongside this week’s public launch, Facebook introduced a handful of new features to help developers grow, measure and monetize their games.

This includes the launch of the ads API, which was also previously in beta.

In-app purchases, however, are continuing to be tested.

Developers will also have access to Facebook’s Monetization Manager, where they can manage ads and track how well ad placements are performing; as well as a Game Switch API for cross-promoting games across the platform, or creating deep links that work outside Facebook and Messenger.

Facebook says it also updated how its ranking algorithm surfaces games based on users’ recent play and interests, and updated its in-game leaderboards, among other things.

Soon, Instant Game developers will be able to build ad campaigns in order to acquire new players from Facebook. These new ad units, when clicked, will take players directly into the game where they can begin playing. 

Since last year, Facebook Instant Games have grown to nearly 200 titles, but the company isn’t talking in-depth about their performance from a revenue perspective.

It did offer one example of a well-performing title, Basketball FRVR, which is on track to make seven figures in ad revenue annually, and has been played over 4.2 billion times.

With the public launch, Facebook is offering an Instant Games developer documentation page and a list of recommended HTML5 game engines to help developers get started. Developers can then build and submit games via Facebook’s App page.



Android Wear is becoming ‘Wear OS by Google’


Android Wear hasn’t exactly been the rocket ship of success Google was no doubt banking on when it was launched four years ago this week.

After a slow start, the company issued a 2.0 refresh of the wearable operating system in early 2017 — but the update was fairly minimal and didn’t appear to move the needle. A few months after the new version was announced, Tizen overtook Wear in global market share, courtesy of Samsung’s adoption of the open operating system.

Perhaps Wear needs a new coat of paint — or at the very least, a new name. It’s getting the latter today. Google announced via blog post that Android Wear is now Wear OS. Or, more accurately, Wear OS by Google.

“We’re announcing a new name that better reflects our technology, vision, and most important of all—the people who wear our watches,” Wear OS Director of Product, Dennis Troper said in the post. “We’re now Wear OS by Google, a wearables operating system for everyone.”

Watchmaking conglomerate Fossil Group appears to be behind the spirit of the rebrand.

“In 2017, Fossil Group nearly doubled its wearables business to more than $300 million, including 20 percent of watch sales in Q4,” the company’s Chief Strategy and Digital Officer, Greg McKelvey said in a statement provided to TechCrunch. “And we expect to see continued growth in the category. Many of our smartwatch customers are iOS users, so we are confident in and eager to see the added benefits that both Android and iOS phone users globally will experience as Wear OS by Google rolls out in 2018.”

The news comes ahead of BaselWorld, next week’s big watch and jewelry show in Switzerland. For now, the change doesn’t appear to involve much more than the rename, though the company may be saving additional details for the big show. Android Wear hasn’t been much of a focus at Google in recent years. The company added iOS compatibility back in 2015, casting the net wider than Apple’s offering. And while more than 50 watches have been released for the operating system, it has yet to take the wearables world by storm.

Perhaps the rebrand marks a new-found focus at the company, as it looks toward smartwatches as a rare bright light in the stagnating wearables category. Then again, perhaps a name change is just a name change. The Wear OS name will be rolling out to the app and watch over the course of the next few weeks, according to the company.



SwiftKey gets stickers


Back in 2016, Microsoft bought the popular SwiftKey keyboard for Android and iOS for $250 million. It’s still one of the most popular third-party keyboards on both platforms, and today the company is launching one of its biggest updates since the acquisition. With SwiftKey 7.0, which is out now, the company is adding stickers — because who doesn’t like stickers?

Going forward, the service will offer a number of sticker packs, including some that can be edited and some that are exclusive to Microsoft, too.

That by itself wouldn’t be all that interesting, of course (and I can already see you rolling your eyes), but the real change here is under the hood, and it sets SwiftKey up for adding more interesting features soon. That’s because the stickers will live in the new SwiftKey toolbar, which will replace the current ‘hub,’ the menu where you can change your keyboard’s layout, size and so on. For now, the toolbar holds stickers and collections, that is, a library of stickers, images and other media you like to torture your friends with.

In the near future, SwiftKey will use this toolbar to enable a number of other new features like location sharing (though only in the U.S. and India for now) and calendar sharing.

“We remain committed to making regular typing as fast and easy as possible,” writes Chris Wolfe, Principal Product Manager at SwiftKey, in today’s announcement. “Today’s release of Toolbar, Stickers and Collections, as well as the announcement of Location and Calendar, also shows our ambition to improve users’ experience of rich media. With the support of Microsoft, you can expect to see more innovations in both regular and rich media typing coming soon.”





Hootsuite nabs $50M in growth capital for its social media management platform, passes 16M customers


Over the last several years, social media has become a critical and central way for businesses to communicate, and market to, their customers. Now, one of the startups that helped spearhead this trend has raised a round of growth funding to expand its horizons. Hootsuite, the Vancouver-based social media management company that counts some 16 million businesses as customers, said today that it has raised $50 million in growth capital — specifically through a credit financing agreement — from CIBC Innovation Banking.

We asked Ryan Holmes, the co-founder and CEO, for details about the company’s valuation and funding; he said the money will be used for more acquisitions in the near future, and that the valuation is unchanged.

“We opted to go with non-dilutive credit at this point and found a great partner and terms in CIBC,” he wrote in an email. “The company is cash flow positive and the facility will primarily be reserved for M&A purposes. There is no associated valuation, however our latest 409a is up from last year and growth is very strong.”

Notably, the last time Hootsuite raised money — way back in 2014 — the company was already valued at $1 billion. For some context, at the time it had 10 million businesses as customers, and today it has 16 million including what it says is 80 percent of the Fortune 1000, so it’s likely that its valuation has grown as well.

“This financing is a testament to the strong fundamentals behind Hootsuite and our ongoing commitment to innovation and growth as the clear leader in social media management,” said Greg Twinney, CFO of Hootsuite, in a statement. “The additional capital will help us scale even faster to bring the most innovative products and partnerships to market globally and help our customers strategically build their brands, businesses and customer relationships with social.”

The funding, according to the release, will also be used to expand its business in Asia Pacific, Europe and Latin America. It also plans to add in more tools to serve the needs of specific verticals like financial services, government and healthcare.

You may not know the name Hootsuite, but you might recognize its mascot — an owl — and, more specifically, its link shortener, ow.ly, which is used a lot on Twitter, the social network that gave Hootsuite its first customers and ubiquity.

Things have moved along quite a bit since those early days, when Hootsuite first started as a side project for Holmes, who was running a marketing and advertising agency at the time.

Social media is now the fastest-growing category for marketing spend — partly because of the popularity of social networking services like Facebook, Snapchat and Twitter; and partly because “eyeballs” can be better tracked and quantified on these networks than on legacy channels like print and outdoor ads. At the same time, presenting yourself as a business on a social network is getting harder and harder. Sites like Facebook are focused on trying to improve engagement, and that is leading them to rethink how they share and emphasize posts that are not organically created by normal people. On the other side, we’re seeing a new wave of privacy and data protection regulation come in that will change how data can be used across and within these sites.

All of this means that Hootsuite, and others that it competes with, need to get a lot smarter about what they offer to their customers, and how they offer it.

Starting as a modest tool that plugged into Twitter, Hootsuite itself now integrates with just about all of the major social platforms, most recently finally adding Instagram earlier this month. Its customers use a dashboard to both monitor a variety of social media platforms to track how their companies are being discussed, and also to send out messages to the world. And they now use that dashboard and Hootsuite for a growing array of other purposes, from placing ads to content marketing to analytics across an increasing number of platforms — a range of services that Hootsuite has developed both in-house and by way of acquisition.

One challenge that Hootsuite has had over the years has been the company’s focus on the freemium model, and how to convert its initially non-paying users into paying tiers with more premium offerings. Some of that expansion into new services appears to have helped tip the balance.

“In the past year, Hootsuite has seen tremendous growth from acquisitions like AdEspresso, to strategic partnerships with market leaders such as Adobe, to recognitions such as being named a leader in the Forrester Wave and G2 Crowd,” said Holmes in a statement. “This financing allows Hootsuite to continue creating strong value for customers looking to unlock the power of social.”

Another challenge has been the fundamental fact that Hootsuite relies on third parties to essentially “complete” its offering: Hootsuite offers analytics and tools for marketing, but still needs to connect into social networks and their data pools in order to do that.

This makes the company somewhat dependent on the whims of those third parties. So, for example, if Twitter decides to either increase the fees it charges to Hootsuite, or tries to offer its own analytics and thereby cuts off some of Hootsuite’s access, this impacts the company.

One solution to this is to continue to integrate as many other platforms as possible, to create a position where it’s stronger because of the sum of its parts. Unsurprisingly, Hootsuite also says that some of the funding will be used to increase its partnerships and integrations.

More generally, we are seeing a trend of consolidation in the area of social media management, as several smaller, more focused solutions are brought together under one umbrella to improve economies of scale, and also to build out that “hub” strategy, becoming more indispensable by virtue of providing so much utility in one place.

As part of that trend, we’ve seen two of Hootsuite’s rivals, Sprinklr and Falcon.io (not an owl but another bird of prey), also grow by way of a spate of acquisitions.



Bookmark This: All Your HTML Questions Answered


HTML has been around for a long time now, so it’s about time you learned the basics: what it is, how it works, and how to write some common elements in HTML.

Before starting, make sure you read our guide to free online HTML editors and the best websites for quality HTML examples.

What Is HTML?

HTML is the language used to construct web pages. HTML stands for Hypertext Markup Language, and is simply a set of instructions for your web browser. Using these instructions, your browser works out what a web page should look like and displays it.

It’s important to understand that it’s a markup language, not a programming language. Programming languages let you solve problems: evaluating math equations, manipulating data, or moving a video game character.

You’re unable to write any logic in HTML. It is only concerned with layout.

What Does HTML Look Like?

HTML consists of several elements known as “tags”. Tags are instructions for styling a specific part of your web page. Returning to the construction analogy: HTML is the plans, and tags are specific features such as windows or doors.

Here’s what a very basic web page looks like in HTML:

<!DOCTYPE html>
<html>
  <head>
    <title>MUO Website</title>
  </head>
  <body>

  </body>
</html>

Tags in HTML are pre-defined, and specify common features like images, links to other webpages, buttons, and more.

The vast majority of tags have to be opened and closed. Opening a tag starts defining some feature, with text, images, or other tags inside it; closing the tag ends the definition. Thinking back to houses, opening the tag is like saying “start the window here”, and closing the tag is like saying “here’s where the window ends”.
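For example, a paragraph is marked up with the p tag, opened with <p> and closed with </p>:

```html
<p>Everything between the opening and closing tag is part of this paragraph.</p>
```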

HTML tags won’t actually show up on your website. Your browser follows the instructions but never shows them to visitors. They’re not secret, however. Anyone can look at your HTML once you publish your web pages.

While there is a large number of different HTML tags, you don’t have to learn them all before you can code a website. Today you’ll learn how to write some common tags, and what they can be used for.

What Are HTML Tag Attributes?

One last thing to know about tags is attributes. Attributes define special features of tags. If tags are windows and doors, then attributes are the specific building details: the width and height of the frame, whether the window opens, or if the door has a lock.

Attributes are included inside the opening tag, like this:

<img src="house.jpg" width="123" height="567">

You can’t just make up your own tags or attributes. Attributes and tags are pre-defined by the World Wide Web Consortium (W3C).

What Is HTML5?

HTML5 is the latest version of HTML. It contains several new tags, attributes, and features.

As HTML is a set of instructions, different web browsers sometimes interpret it differently. One browser might decide that windows and doors should be painted black unless you say otherwise.

While browsers have finally started to become quite consistent with each other, you can still get caught out sometimes with very new features. Perhaps Google Chrome has implemented a new tag, but Microsoft’s Internet Explorer has not yet.

For the most part, your web pages will look the same across all the major browsers, but it’s still worth having a quick test before you publish anything, especially if you’re using newer tags, which may not be supported by all browsers yet.

If you’d like to know more about HTML5, then take a look at our HTML5 getting started guide.

How to Comment Out HTML

Like many other languages, whether markup or programming, HTML gives you the ability to “comment out” blocks of markup. A comment is something that is ignored by the browser, such as a note to remind yourself what a particular piece of your website does.

By commenting out markup, you are instructing the browser to ignore one or more tags. This may be useful to remove functionality, or to hide a piece of your website without deleting the code.

When a web browser sees a comment, it simply skips over everything inside it. Comments consist of an “opening” comment, and a “closing” comment—just like tags.

Here’s an example:

<!-- Don't forget to add the XYZ here! -->
<img src="house.jpg" width="123" height="567">

Commenting out code is done exactly the same way:

<!-- <img src="house.jpg" width="123" height="567"> -->

Rather than a message, put your markup between the comment tags.

How to Insert Images in HTML

Inserting images into your HTML is done with the image tag:

<img src="MUO_logo.jpg" alt="MakeUseOf Logo">

Notice how the tag is named img, and there are two attributes. The src attribute specifies where to find the image, and the alt attribute provides an alternative text description in case the image cannot be loaded for any reason.

The image tag does not need closing, unlike most other tags.

How to Change Font in HTML

Fonts can be changed using the font tag and the face attribute. (Note that the font tag is deprecated in HTML5, where CSS is the preferred way to style text, but browsers still support it.)

<font face="arial">MUO Arial Text</font>

Font size can be changed using the size attribute, which accepts values from 1 (smallest) to 7 (largest):

<font size="7">MUO Big Text</font>

If you’d like to change the font color, this can be easily done with the color attribute:

<font color="red">MUO Red Text</font>

These attributes are unique to the font tag. If you wish to use another tag, you can nest tags by placing one inside the other:

<p><font color="red">MUO Red Text</font></p>

How to Add a Link in HTML

Links can be added using the a tag:

<a href="//www.makeuseof.com">MakeUseOf.com</a>

The href attribute is the destination of your link.
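Because tags can be nested, you can also turn an image into a link by placing an img tag inside the a tag; clicking the image then takes the visitor to the destination:

```html
<a href="//www.makeuseof.com">
  <img src="MUO_logo.jpg" alt="MakeUseOf Logo">
</a>
```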

How to Make a Table in HTML

HTML tables involve nesting several different tags. You’ll need to start with a table tag:

<table>
  
</table>

Now add some rows using the tr tag:

<table>
  <tr>
    
  </tr>
  <tr>
    
  </tr>
  <tr>
    
  </tr>
</table>

Finally, use the td tag to create your table cells, which will also create the columns:

<table>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</table>

It’s possible to go quite wild with your table layout, but it’s usually best to keep things simple. In the past, tables were used to structure entire web pages, but this practice is dated and looks terrible. Keep tables strictly for presenting data to the reader.
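If your table needs labeled columns, swap the td tags in the first row for th tags, which browsers render as bold, centered header cells. For example (the names and numbers here are just placeholder data):

```html
<table>
  <tr>
    <th>Name</th>
    <th>Score</th>
  </tr>
  <tr>
    <td>Alice</td>
    <td>42</td>
  </tr>
</table>
```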

Using CSS With HTML

These examples have covered the basics, but if you want to get really creative, you’ll need to use CSS. Cascading Style Sheets give you much greater control over your website’s design, and let you reuse a lot of code between different parts of your website.

While we have tutorials on learning CSS and quick CSS examples, there’s still some setup you can do in HTML.

If you’d like to write CSS alongside your HTML, you can use the style attribute. This attribute simply applies the CSS to the tag it’s used on:

<p style="color: red;">MUO Red Text</p>

While this works well, you’ll find it hard to maintain if you have a lot of markup that requires similar styling.

The better way is to use the style tag, placed inside the head tag. Here you can define CSS for your whole page:

<html>
  <head>
    <style type="text/css">
      p { color: red; } /* rules here apply to the whole page */
    </style>
  </head>
</html>

The style tag has a type attribute set to text/css. This is required to let your browser know what kind of style information to expect inside the tag.

The third and final way of using CSS is through an external file, using the link tag. This links your HTML to CSS stored in its own file, which is great if you have a large amount of it:

<link rel="stylesheet" type="text/css" href="muostyle.css">

There are several attributes in use here. The rel attribute declares your link as a stylesheet. The type of “text/css” is once again defined in the type attribute, and the href attribute is where to find the external file.
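Putting it all together, here’s a minimal page that pulls in the external stylesheet (assuming the CSS lives in a file named muostyle.css next to your HTML file):

```html
<html>
  <head>
    <title>MUO Website</title>
    <link rel="stylesheet" type="text/css" href="muostyle.css">
  </head>
  <body>
    <p>This paragraph is styled by the rules in muostyle.css.</p>
  </body>
</html>
```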

How Do You Make a Website With HTML?

As you’ve seen, HTML really isn’t that bad, is it? Using a few simple tags and attributes, you can quickly assemble a web page, even if you’ve never written HTML before!

If you’re looking to write a complete website, then make sure you take a look at our beginner’s guide to making a website.



The red-hot AI hardware space gets even hotter with $56M for a startup called SambaNova Systems


Another massive financing round for an AI chip company is coming in today, this time for SambaNova Systems — a startup founded by a pair of Stanford professors and a longtime chip company executive — to build out the next generation of hardware to supercharge AI-centric operations.

SambaNova joins an already quite large class of startups looking to attack the problem of making AI operations much more efficient and faster by rethinking the actual substrate where the computations happen. While the GPU has become increasingly popular among developers for its ability to handle, at very high speed, the kinds of lightweight mathematics necessary for AI operations, startups like SambaNova look to create a new platform from scratch, all the way down to the hardware, that is optimized exactly for those operations. The hope is that by doing so, it will be able to outclass a GPU in terms of speed, power usage, and even potentially the actual size of the chip. SambaNova today said it has raised a massive $56 million Series A financing round led by GV, with participation from Redline Capital and Atlantic Bridge Ventures.

SambaNova is the product of technology from Kunle Olukotun and Chris Ré, two professors at Stanford, and is led by former SVP of development Rodrigo Liang, who was also a VP at Sun for almost 8 years. When looking at the landscape, the team at SambaNova worked backwards: first identifying what operations need to happen more efficiently, then figuring out what kind of hardware needs to be in place to make that happen. That boils down to a lot of calculations from a field of mathematics called linear algebra, done very, very quickly, but it’s something that existing CPUs aren’t exactly tuned to do. And a common criticism from most of the founders in this space is that Nvidia GPUs, while much more powerful than CPUs when it comes to these operations, are still ripe for disruption.

“You’ve got these huge [computational] demands, but you have the slowing down of Moore’s law,” Olukotun said. “The question is, how do you meet these demands while Moore’s law slows. Fundamentally you have to develop computing that’s more efficient. If you look at the current approaches to improve these applications based on multiple big cores or many small, or even FPGA or GPU, we fundamentally don’t think you can get to the efficiencies you need. You need an approach that’s different in the algorithms you use and the underlying hardware that’s also required. You need a combination of the two in order to achieve the performance and flexibility levels you need in order to move forward.”

While a $56 million funding round for a Series A might sound massive, it’s becoming a pretty standard number for startups looking to attack this space, which has an opportunity to beat massive chipmakers and create a new generation of hardware that will be omnipresent in any device built around artificial intelligence — whether that’s a chip sitting on an autonomous vehicle doing rapid image processing or even a server within a healthcare organization training models for complex medical problems. Graphcore, another chip startup, got $50 million in funding from Sequoia Capital, while Cerebras Systems also received significant funding from Benchmark Capital.

Olukotun and Liang wouldn’t go into the specifics of the architecture, but they are looking to redo the operational hardware to optimize for the AI-centric frameworks that have become increasingly popular in fields like image and speech recognition. At its core, that involves a lot of rethinking of how interaction with memory occurs and what happens with heat dissipation for the hardware, among other complex problems. Apple, Google with its TPU, and reportedly Amazon have taken an intense interest in this space to design their own hardware that’s optimized for products like Siri or Alexa, which makes sense because dropping that latency to as close to zero as possible with as much accuracy in the end improves the user experience. A great user experience leads to more lock-in for those platforms, and while the larger players may end up making their own hardware, GV’s Dave Munichiello — who is joining the company’s board — says this is basically a validation that everyone else is going to need the technology soon enough.

“Large companies see a need for specialized hardware and infrastructure,” he said. “AI and large-scale data analytics are so essential to providing services the largest companies provide that they’re willing to invest in their own infrastructure, and that tells us more investment is coming. What Amazon and Google and Microsoft and Apple are doing today will be what the rest of the Fortune 100 are investing in in 5 years. I think it just creates a really interesting market and an opportunity to sell a unique product. It just means the market is really large, if you believe in your company’s technical differentiation, you welcome competition.”

There is certainly going to be a lot of competition in this area, and not just from those startups. While SambaNova wants to create a true platform, there are a lot of different interpretations of where it should go — such as whether it should be two separate pieces of hardware that handle either inference or machine training. Intel, too, is betting on an array of products, as well as a technology called Field Programmable Gate Arrays (or FPGAs), which allow for a more modular approach in building hardware specialized for AI and are designed to be flexible and change over time. Both Munichiello’s and Olukotun’s arguments are that these require developers with special expertise in FPGAs, which is a sort of niche-within-a-niche that most organizations will probably not have readily available.

Nvidia has been a massive beneficiary of the explosion of AI systems, but that explosion has clearly exposed a ton of interest in investing in a new breed of silicon. There’s certainly an argument for developer lock-in on Nvidia’s platforms like CUDA. But there are a lot of new frameworks, like TensorFlow, that create a layer of abstraction and are increasingly popular with developers. That, too, represents an opportunity for both SambaNova and other startups, which can just work to plug into those popular frameworks, Olukotun said. Cerebras Systems CEO Andrew Feldman actually addressed some of this on stage at the Goldman Sachs Technology and Internet Conference last month.

“Nvidia has spent a long time building an ecosystem around their GPUs, and for the most part, with the combination of TensorFlow, Google has killed most of its value,” Feldman said at the conference. “What TensorFlow does is, it says to researchers and AI professionals, you don’t have to get into the guts of the hardware. You can write at the upper layers and you can write in Python, you can use scripts, you don’t have to worry about what’s happening underneath. Then you can compile it very simply and directly to a CPU, TPU, GPU, to many different hardwares, including ours. If in order to do work you have to be the type of engineer that can do hand-tuned assembly or can live deep in the guts of hardware there will be no adoption… We’ll just take in their TensorFlow, we don’t have to worry about anything else.”

(As an aside, I was once told that CUDA and those other lower-level platforms are really used by AI wonks like Yann LeCun building weird AI stuff in the corners of the Internet.)

There are, also, two big question marks for SambaNova: first, it’s very new, having started just in November, while many of these efforts, at both startups and larger companies, have been years in the making. Munichiello’s answer to this is that the development of those technologies did, indeed, begin a while ago — and that’s not a terrible thing, as SambaNova is just getting started in the current generation of AI needs. The second, among some in the valley, is that most of the industry just might not need hardware that does these operations in a blazing fast manner. The latter, you might argue, could be alleviated by the fact that so many of these companies are getting so much funding, with some already reaching close to billion-dollar valuations.

But, in the end, you can now add SambaNova to the list of AI startups that have raised enormous rounds of funding — one that stretches out to include a myriad of companies around the world like Graphcore and Cerebras Systems, as well as a lot of reported activity out of China with companies like Cambricon Technology and Horizon Robotics. This effort does, indeed, require significant investment, not only because it’s hardware at its base, but also because the company has to convince customers to deploy that hardware and start tapping the platforms it creates, which supporting existing frameworks hopefully makes easier.

“The challenge you see is that the industry, over the last ten years, has underinvested in semiconductor design,” Liang said. “If you look at the innovations at the startup level all the way through big companies, we really haven’t pushed the envelope on semiconductor design. It was very expensive and the returns were not quite as good. Here we are, suddenly you have a need for semiconductor design, and to do low-power design requires a different skillset. If you look at this transition to intelligent software, it’s one of the biggest transitions we’ve seen in this industry in a long time. You’re not accelerating old software, you want to create that platform that’s flexible enough [to optimize these operations] — and you want to think about all the pieces. It’s not just about machine learning.”



Don’t foul this free-throwing Toyota basketball robot


Because if it gets to the free-throw line, it sinks the shot – every. single. time. This robot (via The Verge) is the project of a group of Toyota engineers who used their spare time to build a robot inspired by the manga Slam Dunk, which is about a Japanese high school basketball team.

The engineers brought their robot out to face off against humans (pro players, though from a B-league in Japan, not the NBA), but the robot nailed it every time. Still, it’s a free-throw competition – humans still have a gigantic lead on most other aspects of basketball. Don’t get me started on the dunk competition.



Uber vs. Lyft: Which One Should You Use?


Secret location tracking, toxic company cultures, congestion problems, arguments with city mayors, and a near-endless list of business malpractices. Ridesharing apps haven’t exactly covered themselves in glory in recent years.

But that’s not stopped people from using the services that the companies provide. Uber has now completed more than five billion trips and is clocking up a barely-believable 5.5 million trips per day.

However, Uber is no longer the only show in town. The company’s business model has been replicated around the world. As a result, Uber’s market share in the United States declined by more than 10 percent in 2017. Today, its most significant competitor in North America is Lyft.

But which service should you use: Uber or Lyft? In this article, we’re going to put the two apps head-to-head and compare five criteria.

1. Availability

A ridesharing service isn’t very useful if it’s not available when you need it, so let’s look at how Uber and Lyft compare in terms of availability.

Uber operates in 633 cities worldwide. That number covers a vast number of American cities, as well as globally renowned locations across Europe, Latin America, Africa, Asia, and Oceania.

In contrast, Lyft has traditionally only been available in the United States. It expanded into Canada in December 2017, but that’s the only other country it operates in.

Even in the domestic American market where Lyft specializes, Uber comfortably wins. It counts more than 250 markets, whereas Lyft operates in barely 100.

Score: Uber 1-0 Lyft

2. Services

Sure, Uber started life as a glorified taxi service, but today the company has expanded its operations considerably.

Although not all the services are available in every city, Uber offers 17 standalone products. They range from the well-known offerings such as UberEATS through to the obscure.

For example, have you ever heard of UberAUTO (a rickshaw service in Pakistan), UberBOAT (a water taxi service in Istanbul), or UberAIR (an aircraft service launching in 2020)?

Lyft’s services are far more limited.

Yes, the company has been experimenting with self-driving cars, and it offers services like ridesharing and six-person vehicles, but it lacks the diversification of Uber. At one time, Lyft was developing a rival to UberEATS, but the plan met with resistance inside the company and was abandoned.

Score: Uber 2-0 Lyft

3. Price

Many people don’t care about riding in a top-of-the-range luxury vehicle or whether a driver offers you a bottle of water. They just want to get from A to B for as little money as possible.

Sadly, it’s not easy to make a like-for-like price comparison between Uber and Lyft. Factors like surge pricing, time of day, and even the city you’re in all affect how much you will pay, making it difficult to tell whether Uber or Lyft is cheaper. Things quickly become confusing.

However, there are some broad similarities. When using either company, you can expect to pay about $1 on average to start a ride, an average of $1.50 per mile, and an average of 25 cents per minute. All combined, you can expect to pay about $2 per mile with each company.

We’ll give the point to Lyft, merely due to its gentler surge pricing. Lyft will often top out at 2x, whereas 8x is not uncommon on Uber.

Score: Uber 2-1 Lyft

4. Ease of Use

If you’ve used Uber and Lyft, you will know the two apps function in a similar way. You tell the app where you want to go, and you’ll see nearby cars and a price estimate. Both apps also allow you to tip your driver and add multiple stops to your journey.

However, as Uber has grown its ancillary services like UberEATS, the main Uber app has become one of the company’s primary advertising tools. What used to be a slick and professional app has become increasingly bloated and hard to navigate.

Lyft used to trail Uber in the professionalism stakes; it appealed to a “hippy” crowd. But after retiring its famous mustaches a couple of years ago, it’s grown up and become more serious.

Again, we’re going to give the point to Lyft. Today, the company’s app feels like the Uber app did three years ago when it was in its pomp.

Score: Uber 2-2 Lyft

5. Controversies and Culture

Critics have accused both Uber and Lyft of destroying the traditional taxi industry with their ruthless approach. Typically, the companies’ modus operandi sees them enter a new market without the necessary permits, then use lobbyists to create a political storm if the authorities don’t grant a business license.

But regardless of how you feel about the tactic, it’s not technically breaching any laws. There are far more serious controversies; Uber has a charge sheet as long as your arm. Let’s look at some of the company’s more notorious incidents:

  • Throughout 2014: The company used an internal tool called Greyball to avoid giving rides to law enforcement officers in areas where its service is illegal. In May 2017, the United States Department of Justice opened a criminal investigation into the issue.
  • February 2016: Uber driver Jason Dalton shot six people in Michigan while working. He continued driving and accepting fares while the police undertook a seven-hour manhunt.
  • February 2016: A former Uber employee who reported sexual harassment to her superiors was threatened with the sack if she didn’t drop her claim.
  • Mid-2016: Uber suffered a data breach. 600,000 drivers and 57 million customers were affected. Instead of reporting the loss, Uber paid a $100,000 ransom to the hackers.
  • January 2017: Uber was forced to pay $20 million to the US government after lying to drivers about potential earnings.
  • November 2017: Paradise Papers revealed Uber used offshore accounts to minimize taxes.

The points we’ve listed above don’t even scratch the surface of Uber’s issues. Of course, Lyft isn’t perfect, but it prides itself on being a lot less controversial than Uber.

Here’s how Lyft’s President John Zimmer explained the difference between the two companies to Time Magazine:

“We’re not the nice guys; we’re a better boyfriend. […] In our minds, there’s been a contrast in the values, there’s been a contrast in the type of business we’re building. […] We’re woke. Our community is woke, and the US population is woke. Our choice matters, the seat we take matters.”

Score: Uber 2-3 Lyft

Uber vs. Lyft: Which Should You Use?

As you can see from our running score, we’ve given the victory to Lyft. But in truth, Lyft only gets the nod because of Uber’s poisonous corporate image.

From a rider’s standpoint, neither service stands out very much in the Uber vs. Lyft battle. Certainly, the difference is much less noticeable than it was a few years ago.

Some users won’t care about Uber’s corporate indiscretions. If you do, you should choose Lyft every time. But if you don’t, you’ll need to decide what’s more important to you: availability and services or cost and ease-of-use.


Google is launching playable in-game ads


Maybe you’ve seen this kind of ad in a game you have played: your character dies and then the game asks you to watch a short video ad in return for an extra life. That’s actually a feature of Google’s AdMob advertising service, and today Google is extending it with playable ads, a new type of ad that fits far better into a game.

Google calls these types of ads “rewarded ads” and with the Game Developers Conference coming up next week, this is the perfect time to launch these new playable ads. More than 45 percent of AdMob’s top 1,000 gaming partners already use rewarded ads to monetize their apps, and the new playable ads will work just like rewarded video ads. In return for playing a mini-game — and either doing well in that or potentially installing the full game — players can get an extra life or maybe some fresh loot.

Playable ads are actually one of two new rewarded ad types the company is launching today. The other type is multiple-option video ads. Those give you the option to choose which video ad you want to watch in return for game goodies.

At first glance, these new ad types look like they would take a player out of the game, but at least the playable ads fit the mode and state of mind a player is already in. As for video ads… yeah… those are always annoying, but at least you’ll get an extra game life out of them.

In addition to these new ad types, Google also is now giving developers a couple of new ways to attract new players. The first of these is the beta launch of video ads in the Google Play Store. These are an extension of the videos developers could already highlight on their Play Store pages.

Another new ad feature for developers is an extension of the existing Universal Ad Campaigns tool. Starting in May (as a beta for select advertisers), developers will be able to target people with similar interests to their best customers. Like so many current Google products, this feature will use Google’s machine learning tools to mine the billions of in-app events it now tracks to advertise a game to the right potential players.

Google is launching a new feature for ad bidding, too. With Open Bidding, ad networks like Smaato, Index Exchange and OpenX can bid to serve ads in an app simultaneously in a single unified auction. While that may sound like something only those in the arcane underworld of ad network developers would care about, it actually makes the bidding process faster and easier.
