12 November 2020

Udemy and altMBA co-founders return to edtech with a new, stealthy business


In 2009, Udemy co-founder Gagan Biyani tried to convince people to learn online through live classes. But what he discovered instead was that everyone wanted an online repository of content that let them learn at their own pace, whenever and wherever. So he shelved the idea, and Udemy became what is now called a massive open online course, or MOOC, provider.

In the years since, Biyani was let go from Udemy, started a 200-person food company, shut that down, took a sabbatical, and is now returning to the seedling he left behind in 2009: live, online courses.

Today, Biyani tells TechCrunch that he is teaming up with Wes Kao, the co-founder of altMBA, an online cohort-based leadership program, to start an edtech company that combines both of their experiences into one focus: live, cohort-based learning. The duo grew up as friends in the same hometown, but only recently reconnected over education once Biyani returned from sabbatical. Kao’s experience building an online course from scratch, with a completion rate of over 95%, was validation that the format worked. And soon enough, they incorporated a company together.

The company will focus on cohort-based learning, mixing live and asynchronous components. As it’s still in early stealth, the founders said it doesn’t have a name yet. Instead of a company site, they have a Notion landing page.

Despite those missing details, Biyani did say that the startup’s main focus is creating a community where anyone can start their own course. Kao says that running a course requires over a dozen people behind the scenes, from teaching assistants to community moderators; the process is essentially “an entire production.” With the startup, she wants to democratize that operation.

“I see it as a way to help more creators and experts be able to share their knowledge,” she said. “And take away the question marks on how to build community.”

From the start, the company will focus on the back-end production work of helping teachers, but it eventually plans to create a marketplace where students can browse a directory of classes.

“It should be as easy as building a Substack,” Biyani said, referring to the popular newsletter service. Similar to Substack, the company will only make money if the instructor, or creator, does. It takes a chunk of each student’s subscription cost as revenue.

The company is entering a crowded space. Yesterday, CampusWire announced that it has pivoted to start offering build-your-own courses to experienced professors. MasterClass allows celebrities to teach classes, Teachable allows anyone to create their own course, and the list continues.

But Biyani views their biggest competitor as teachers who have already built courses without a third-party service. The company plans to bring those creators onto its platform by offering them ways to manage their customer base.

Ultimately, the market will only be won over by the startup that has the best strategy, product, and teacher pool. On the strength of that still-stealthy vision, the duo has raised $4.3 million in a round led by First Round Capital. Other investors include Naval Ravikant, Sahil Lavingia, Li Jin, Arlan Hamilton and co-founders from Lambda School, Outschool, Superhuman, and Udemy.

It’s a stacked term-sheet for a company in the early stages, suggesting that edtech’s boom is still very much upon us. Lavingia says that he committed right away even though he hadn’t used the product.

“Gagan’s name was enough for me,” he said. “I think I followed him on Twitter a year or two ago and I’d back anything he does just based on what he shares.”

Backstage Capital’s Hamilton said that Kao has been within the Backstage mentor network for a while, and added that “there’s a perfect storm for Wes and Gagan to execute within.”


Read Full Article

Python creator Guido van Rossum joins Microsoft


Guido van Rossum, the creator of the Python programming language, today announced that he has unretired and joined Microsoft’s Developer Division.

Van Rossum, who was last employed by Dropbox, retired last October after six and a half years at the company. Clearly, that retirement wasn’t meant to last. At Microsoft, van Rossum says, he’ll work to “make using Python better for sure (and not just on Windows).”

A Microsoft spokesperson told us that the company doesn’t have any additional details to share but confirmed that van Rossum has indeed joined Microsoft. “We’re excited to have him as part of the Developer Division. Microsoft is committed to contributing to and growing with the Python community, and Guido’s on-boarding is a reflection of that commitment,” the spokesperson said.

The Dutch programmer started working on what would become Python back in 1989. He continued to actively work on the language during his time at the U.S. National Institute of Standards and Technology in the mid-90s and at various companies afterward, including as Director of PythonLabs at BeOpen and Zope and at Elemental Security. Before going to Dropbox, he worked for Google from 2005 to 2012. There, he developed the internal code review tool Mondrian and worked on App Engine.

Today, Python is among the most popular programming languages and the de facto standard for AI researchers, for example.

Only a few years ago, van Rossum joining Microsoft would’ve been unthinkable, given the company’s infamous approach to open source. That has clearly changed now and today’s Microsoft is one of the most active corporate open-source contributors among its peers — and now the owner of GitHub. It’s not clear what exactly van Rossum will do at Microsoft, but he notes that there’s “too many options to say” and that “there’s lots of open source here.”



Act now before Google kills us, 135-strong coalition of startups warns EU antitrust chief


A coalition of 135 startups and tech companies with services in verticals including travel, accommodation and jobs has written to the European Commission to urge antitrust action against Google — warning that swift enforcement is needed or some of their businesses may not survive.

They also argue the Commission needs to act now or it risks undermining its in-train reform of digital regulations — which is due to be laid out in draft form early next month.

The letter has been inked by veteran Internet players such as Booking.com, Expedia, Kayak, Opentable, Tripadvisor and Yelp, co-signing along with a raft of (mostly) smaller European startups across all three verticals.

A further 30 co-signatories are business associations and organizations in related and other areas such as media/publishing — making for a total of 165 entities calling for Google to face swift antitrust banhammers.

A European Commission spokesperson confirmed to TechCrunch it’s received the Google critics’ letter — saying it will reply “in due course”.

‘Not competing on the merits’

While there have been complaints on this front before — the Commission has said it’s been hearing rumblings of discontent in the travel segment for years at this point — a growing coalition of businesses (including some based in the US) is banding together to pressure the EU antitrust chief to clip Google’s wings — with, for example, jobs-related businesses joining the travel startups whose complaints we reported on recently.

Reuters, which obtained the letter earlier, reports that the coalition is the largest ever to complain in concert to the EU’s competition division.

In the letter, which TechCrunch has reviewed, the group argues that Google is violating a 2017 EU competition enforcement decision over Google Shopping that barred the tech giant from self-preferencing and unfairly demoting rivals.

The group argues Google is unfairly leveraging its dominant position in Internet search to grab marketshare in the verticals where they operate — pointing to a feature Google displays at the top of search results (called ‘OneBoxes’) where it points Internet users to its own services, simultaneously steering them away from rival services.

The Commission is considering limiting such self-preferencing in forthcoming legislative proposals that it wants to apply to dominant ‘gatekeeper’ Internet platforms — which Google would presumably be classified as.

For now, though, no such ex ante regulation exists — and the coalition argues the Commission needs to pull its finger out and flex its existing antitrust powers to stop Google’s market abuse before it’s too late for their businesses.

“Google’s technical integration of its own specialised search services into its near monopoly general search service continues to constitute a clear abuse of dominance,” they argue in the letter to Vestager.

“Like no service before, Google has amassed data and content relevant for competition on such markets at the expense of others – us,” they go on. “Google did not achieve its position on any such market by competing on the merits. Rather, there is now global consensus that Google gained unjustified advantages through preferentially treating its own services within its general search results pages by displaying various forms of grouped specialised search results.”

A similar complaint about Google unfairly pushing its own services at the expense of rivals’ can be found in the US Department of Justice’s antitrust lawsuit against it, filed just last month — which is doubtless giving succour to Google complainants to redouble their efforts in Europe.

Back in 2017, the Commission found Google to be a dominant company in Internet search. Under EU law this means it has a responsibility not to apply the same types of infringing behavior identified in the Google Shopping case in any other business vertical, regardless of its marketshare.

Antitrust chief Margrethe Vestager has gained a reputation for taking on big tech during her first (and now second) term as the Commission’s competition chief — a role now combined with an EVP post shaping digital strategy for the bloc.

But while, on her watch, Google has faced enforcement over its Shopping search (2017), Android mobile OS (2018) and AdSense search ad brokering business (2019), antitrust complainants say the regulatory action has done nothing to dislodge the tech giant’s dominance and restore competition to those specific markets or elsewhere.

“The Commission’s Google Search (Shopping) decision of 27 June 2017 (was supposed to) set a precedent that Google is not permitted to promote its own services within the search results pages of its dominant general search service. However, as of today, the decision did not lead to Google changing anything meaningful,” the coalition argues in the letter dated November 12, 2020.

The Commission contends its Shopping decision has led to a significant increase in the rate at which offers from Google’s competitors are displayed in its Shopping units (up 73.5%), also pointing to near parity between clicks on Google’s own offers in Shopping units and clicks on rivals’ offers. However, if Google is compensating for losing (some) marketshare in Shopping searches by dialling up its marketshare in other verticals (such as travel and jobs), that’s hardly going to add up to a balanced and effective antitrust remedy.

It’s also interesting to note that the signatures on the latest letter include the Foundem CEO: aka the original shopping comparison engine complainant in the Google Shopping case.

In further remarks today, the Commission spokesperson told us: “We continue to carefully monitor the market with a view to assessing the effectiveness of the remedies,” adding: “Shopping is just one of the specialised search services that Google offers. The decision we took in June 2017 gives us a framework to look also at other specialised search services, such as Google jobs and local search. Our preliminary investigation on this is ongoing.”

On the Commission’s forthcoming Digital Services Act and Digital Markets Act package, the coalition suggests a lack of action to rein in abusive behavior by Google now risks making it impossible for those future regulations to correct such practices.

“If, in the pending competition investigations, the Commission accepts Google’s current conduct as ‘equal treatment’, this creates the risk of pre-defining and hence devaluing the meaning of any future legislative ban on self-preferencing,” they warn, adding that: “Competition and innovation will continue to be stifled, simply because the necessary measures to counter the further anti-competitive expansion are not taken right now.”

Additionally, they argue that a legislative process is simply too slow to be used as an antitrust corrective measure — leaving their businesses at risk of not surviving Google in the meanwhile.

“While a targeted regulation of digital gatekeepers may help in the long run, the Commission should first use its existing tools to enforce the Shopping precedent and ensure equal treatment within Google’s general search results pages,” they urge, adding that they generally welcome the Commission plan to regulate “dominant general search engines” but emphasize speed is of the essence.

“We face the imminent risk of being disintermediated by Google. Many of us may not have the strength and resources to wait until such regulation really takes effect,” they add. “Action is required now. If Google were allowed to continue the anti-competitive favouring of its own specialised search services until any meaningful regulation takes effect, our services will continue to lack traffic, data and the opportunity to innovate on the merits. Until then, our businesses continue to be trapped in a vicious cycle – providing benefits to Google’s competing services while rendering our own services obsolete in the long run.”

Asked for its response to the group’s criticism of its business practices, a Google spokesperson sent this statement: “People expect Google to give them the most relevant, high quality search results that they can trust. They do not expect us to preference specific companies or commercial rivals over others, or to stop launching helpful services which create more choice and competition for Europeans.”



The Machine Learning Behind Hum to Search


Melodies stuck in your head, often referred to as “earworms,” are a well-known and sometimes irritating phenomenon — once that earworm is there, it can be tough to get rid of it. Research has found that engaging with the original song, whether that’s listening to or singing it, will drive the earworm away. But what if you can’t quite recall the name of the song, and can only hum the melody?

Existing methods to match a hummed melody to its original polyphonic studio recording face several challenges. With lyrics, background vocals and instruments, the audio of a musical or studio recording can be quite different from a hummed tune. By mistake or design, when someone hums their interpretation of a song, often the pitch, key, tempo or rhythm may vary slightly or even significantly. That’s why so many existing approaches to query by humming match the hummed tune against a database of pre-existing melody-only or hummed versions of a song, instead of identifying the song directly. However, this type of approach often relies on a limited database that requires manual updates.

Launched in October, Hum to Search is a new fully machine-learned system within Google Search that allows a person to find a song using only a hummed rendition of it. In contrast to existing methods, this approach produces an embedding of a melody from a spectrogram of a song without generating an intermediate representation. This enables the model to match a hummed melody directly to the original (polyphonic) recordings without the need for a hummed or MIDI version of each track or for other complex hand-engineered logic to extract the melody. This approach greatly simplifies the database for Hum to Search, allowing it to constantly be refreshed with embeddings of original recordings from across the world — even the latest releases.

Background
Many existing music recognition systems convert an audio sample into a spectrogram before processing it, in order to find a good match. However, one challenge in recognizing a hummed melody is that a hummed tune often contains relatively little information, as illustrated by this hummed example of Bella Ciao. The difference between the hummed version and the same segment from the corresponding studio recording can be visualized using spectrograms, seen below:

Visualization of a hummed clip and a matching studio recording.

Given the image on the left, a model needs to locate the audio corresponding to the right-hand image from a collection of over 50M similar-looking images (corresponding to segments of studio recordings of other songs). To achieve this, the model has to learn to focus on the dominant melody, and ignore background vocals, instruments, and voice timbre, as well as differences stemming from background noise or room reverberations. To find by eye the dominant melody that might be used to match these two spectrograms, a person might look for similarities in the lines near the bottom of the above images.
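To make the comparison above concrete, here is a minimal sketch of how a magnitude spectrogram can be computed from raw audio. This is an illustrative stand-in using plain NumPy, not the actual preprocessing used in Hum to Search; the function name, frame length, and hop size are arbitrary choices.

```python
import numpy as np

def magnitude_spectrogram(audio, frame_len=1024, hop=256):
    """Window the signal into overlapping frames and take the
    magnitude of each frame's FFT (non-negative frequencies only)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([
        audio[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# A pure 440 Hz tone sampled at 16 kHz should concentrate its
# energy in the frequency bin nearest 440 Hz.
sr = 16000
t = np.arange(sr) / sr
spec = magnitude_spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 1024  # close to 440
```

A real pipeline would typically add a mel-frequency warp and log compression on top of this, but the frame-and-FFT core is the same.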

Prior efforts to enable discovery of music, in particular in the context of recognizing recorded music being played in an environment such as a cafe or a club, demonstrated how machine learning might be applied to this problem. Now Playing, released to Pixel phones in 2017, uses an on-device deep neural network to recognize songs without the need for a server connection, and Sound Search further developed this technology to provide a server-based recognition service for faster and more accurate searching of over 100 million songs. The next challenge then was to leverage what was learned from these releases to recognize hummed or sung music from a similarly large library of songs.

Machine Learning Setup
The first step in developing Hum to Search was to modify the music-recognition models used in Now Playing and Sound Search to work with hummed recordings. In principle, many such retrieval systems (e.g., image recognition) work in a similar way. A neural network is trained with pairs of input (here pairs of hummed or sung audio with recorded audio) to produce embeddings for each input, which will later be used for matching to a hummed melody.

Training setup for the neural network

To enable humming recognition, the network should produce embeddings for which pairs of audio containing the same melody are close to each other, even if they have different instrumental accompaniment and singing voices. Pairs of audio containing different melodies should be far apart. In training, the network is provided such pairs of audio until it learns to produce embeddings with this property.

The trained model can then generate an embedding for a tune that is similar to the embedding of the song’s reference recording. Finding the correct song is then only a matter of searching for similar embeddings from a database of reference recordings computed from audio of popular music.
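The retrieval step can be sketched as a nearest-neighbor search over embedding vectors. This is a toy illustration under the assumption that embeddings are compared by cosine similarity; at the scale of millions of recordings a production system would use an approximate nearest-neighbor index rather than the brute-force scan shown here.

```python
import numpy as np

def best_match(query_emb, reference_embs):
    """Return the index of the reference embedding closest to the
    query under cosine similarity (dot product of unit vectors)."""
    q = query_emb / np.linalg.norm(query_emb)
    refs = reference_embs / np.linalg.norm(reference_embs, axis=1, keepdims=True)
    return int(np.argmax(refs @ q))

# Toy database of three "song" embeddings; the query is a slightly
# noisy copy of the second one, so index 1 should win.
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 128))
query = db[1] + 0.1 * rng.normal(size=128)
print(best_match(query, db))  # → 1
```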

Training Data
Because training the model required song pairs (recorded and sung), the first challenge was to obtain enough training data. Our initial dataset consisted mostly of sung music segments (very few of these contained humming). To make the model more robust, we augmented the audio during training, for example by randomly varying the pitch or tempo of the sung input. The resulting model worked well enough for people singing, but not for people humming or whistling.
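A crude version of the pitch-and-tempo jitter described above can be sketched by resampling the waveform at a random rate. Note that this simple trick shifts pitch and tempo together, whereas a real augmentation pipeline would typically vary them independently; the function below is a hypothetical illustration, not Google's training code.

```python
import numpy as np

def random_rate_change(audio, rng, max_semitones=2.0):
    """Resample the signal at a random rate. A rate of 2**(s/12)
    shifts pitch by s semitones while also changing the duration."""
    semitones = rng.uniform(-max_semitones, max_semitones)
    rate = 2.0 ** (semitones / 12.0)
    old_idx = np.arange(len(audio))
    new_idx = np.arange(0, len(audio) - 1, rate)
    # Linear interpolation of the waveform at the new sample points.
    return np.interp(new_idx, old_idx, audio)

rng = np.random.default_rng(7)
clip = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
augmented = random_rate_change(clip, rng)
# Speeding up shortens the clip; slowing down lengthens it.
print(len(clip), len(augmented))
```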

To improve the model’s performance on hummed melodies we generated additional training data of simulated “hummed” melodies from the existing audio dataset using SPICE, a pitch extraction model developed by our wider team as part of the FreddieMeter project. SPICE extracts the pitch values from given audio, which we then use to generate a melody consisting of discrete audio tones. The very first version of this system transformed this original clip into these tones.

Generating hummed audio from sung audio
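The tone-generation step above, turning per-frame pitch estimates into a melody of discrete tones, can be sketched as follows. The frame duration and the handling of unvoiced frames are assumptions for illustration, not the actual SPICE-based pipeline.

```python
import numpy as np

def tones_from_pitch(pitch_hz, frame_dur=0.032, sr=16000):
    """Turn a sequence of per-frame pitch estimates (in Hz, with
    0 meaning unvoiced) into a melody of discrete sine tones,
    one tone per frame, rendered as silence where unvoiced."""
    n = int(frame_dur * sr)
    t = np.arange(n) / sr
    frames = [
        np.sin(2 * np.pi * f * t) if f > 0 else np.zeros(n)
        for f in pitch_hz
    ]
    return np.concatenate(frames)

# Three voiced frames around an unvoiced gap.
melody = tones_from_pitch([220.0, 0.0, 330.0, 440.0])
print(melody.shape)  # → (2048,)
```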

We later refined this approach by replacing the simple tone generator with a neural network that generates audio resembling an actual hummed or whistled tune. For example, the network generates this humming example or whistling example from the above sung clip.

As a final step, we augmented the training data by mixing and matching audio samples. For example, if we had similar clips from two different singers, we’d align those two clips with our preliminary models, and were therefore able to show the model an additional pair of audio clips representing the same melody.

Machine Learning Improvements
When training the Hum to Search model, we started with a triplet loss function. This loss has been shown to perform well across a variety of classification tasks, from images to recorded music. Given a pair of audio clips corresponding to the same melody (points R and P in the embedding space shown below), triplet loss ignores certain parts of the training data derived from different melodies. This helps the model learn more efficiently: it ignores a different melody that is too ‘easy’, in that it is already far away from R and P (see point E), as well as one that is too hard, in that, given the model’s current state of learning, its audio ends up too close to R — even though according to our data it represents a different melody (see point H).

Example audio segments visualized as points in embedding space

We’ve found that we could improve the accuracy of the model by taking these additional training data (points H and E) into account, namely by formulating a general notion of model confidence across a batch of examples: How sure is the model that all the data it has seen can be classified correctly, or has it seen examples that do not fit its current understanding? Based on this notion of confidence, we added a loss that drives model confidence towards 100% across all areas of the embedding space, which led to improvements in our model’s precision and recall.
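The filtering behavior described above is commonly implemented as “semi-hard” negative mining on top of a margin-based triplet loss. The sketch below is a simplified NumPy illustration of that idea, not the production training objective.

```python
import numpy as np

def triplet_loss(anchor, positive, negatives, margin=1.0):
    """Margin-based triplet loss with semi-hard negative mining:
    negatives that are already far away ('easy') or confusingly
    close ('hard') contribute nothing; only semi-hard negatives,
    farther than the positive but within the margin, drive learning."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(negatives - anchor, axis=1)
    semi_hard = (d_neg > d_pos) & (d_neg < d_pos + margin)
    if not semi_hard.any():
        return 0.0
    return float(np.mean(np.maximum(0.0, d_pos - d_neg[semi_hard] + margin)))

R = np.array([0.0, 0.0])   # anchor (hummed clip)
P = np.array([0.3, 0.0])   # matching studio recording
E = np.array([5.0, 0.0])   # 'easy' negative: already far away, ignored
H = np.array([0.1, 0.1])   # 'hard' negative: closer than P, ignored
N = np.array([0.8, 0.0])   # semi-hard negative: the useful one
loss = triplet_loss(R, P, np.stack([E, H, N]))
print(loss)  # → 0.5, driven only by N
```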

The above changes, and in particular our variations, augmentations and superpositions of the training data, enabled the neural network model deployed in Google Search to recognize sung or hummed melodies. The current system reaches a high level of accuracy on a song database that contains over half a million songs that we are continually updating. This song corpus still has room to grow to include more of the world’s many melodies.

Hum to Search in the Google App

To try the feature, open the latest version of the Google app, tap the mic icon and say “what's this song?” or tap the “Search a song” button, after which you can hum, sing, or whistle away! We hope that Hum to Search can help with that earworm of yours, or maybe just help you find and play back a song without having to type its name.

Acknowledgements
The work described here was authored by Alex Tudor, Duc Dung Nguyen, Matej Kastelic‎, Mihajlo Velimirović‎, Stefan Christoph, Mauricio Zuluaga, Christian Frank, Dominik Roblek, and Matt Sharifi. We would like to deeply thank Krishna Kumar, Satyajeet Salgar and Blaise Aguera y Arcas for their ongoing support, as well as all the Google teams we've collaborated with to build the full Hum to Search product.

We would also like to thank all our colleagues at Google who donated clips of themselves singing or humming and therefore laid a foundation for this work, as well as Nick Moukhine‎ for building the Google-internal singing donation app. Finally, special thanks to Meghan Danks and Krishna Kumar for their feedback on earlier versions of this post.


How to be fearless in the face of authoritarianism | Sviatlana Tsikhanouskaya


How do you stand up to authoritarianism? And what does it mean to be "fearless"? In this powerful talk, housewife-turned-politician Sviatlana Tsikhanouskaya describes her unlikely bid to defeat Belarus's long-time autocratic leader in the nation's 2020 presidential election. Painting a vivid picture of how small acts of defiance flourished into massive, peaceful demonstrations, she shares a beautiful meditation on the link between fearlessness and freedom, reminding us that we all have what it takes to stand up to injustice -- we just need to do it together.

https://ift.tt/3niomEm


Instagram redesign puts Reels and Shop tabs on the home screen


Instagram is putting its TikTok competitor Reels front-and-center in a redesigned version of its app by giving it the center position on its new navigation bar. The update, arriving today, also replaces the Activity tab (heart icon) with the Shop tab, following a test that had changed this aspect of the app’s home screen earlier this summer.

In the redesigned app, both the Compose button and the Activity tab have been relocated to the top-right of the home screen, while the center button on the navigation bar now belongs to Reels.

Before, Reels videos were mixed in with other photo and video content on the Instagram Explore page, though Instagram this fall began to experiment with different layouts (see below).

This led to some early complaints from users looking for Reels in the app, who said it was harder to find, the company says.

The redesign, which makes Reels the main button in the app, is an aggressive attempt on Instagram’s part to direct users to its short-form video feed, which has so far seen only a lukewarm reception from reviewers. Critics have said Reels lacks competitive features, contributes to Instagram’s bloat, feels stale and features a lot of recycled TikTok content. At best, it’s been deemed a shameless clone.

Instagram, on the other hand, would argue that it’s still early days for Reels, its short-form video feature. And the change could encourage more creators to share their Reels, given the product’s now high-profile position.

That said, it cannot be overstated how significant it is to relocate a Compose button in an app that relies on user-generated content. That Instagram would minimize the button’s importance in this way is a testament to how much of its future relies on making Reels work.

“The way we think about this update is that we’re trying to make it really easy to use an expanded suite of products now available on Instagram, while maintaining a simplicity,” explains Instagram’s director of Product Management, Robby Stein.

Simplicity, given the wide range of products Instagram now offers, could become a challenge.

When tapped, the relocated Compose button will now take users to a redesigned Camera experience, too. Here, you can either pick photos or videos to post to your Feed, or scroll over to choose to post to your Story, Reels, or go Live. While this doesn’t replace the swipe gesture to get to the Camera, it does give all the different post formats a more equal footing.

Image Credits: Instagram

Next to the new Compose button is the relocated Activity button (the heart icon) and a redesigned messaging button that takes you to your Instagram DMs — which are now connected to Facebook Messenger’s universe. The messages button itself has been changed to look like the Facebook Messenger icon (for those who opted in to the new experience), and not the paper airplane icon that was previously associated with the Instagram inbox.

Another major change sees the Instagram Shop winning a home screen placement.

The company began testing the Shop tab in place of the Activity tab in July, where it would send users to an updated version of the Instagram Shop. Here, users could filter by brands they followed on Instagram or by product category. And, in many cases, users could pay for their purchase using Instagram’s own Checkout feature, which involves a selling fee.

Instagram’s push to make its app more of an online shopping destination through this and other changes comes at a critical time for the e-commerce market. The coronavirus pandemic accelerated the shift to e-commerce by at least five years, according to some analysts. That means any plans Instagram had to become a major player in online commerce were also just expedited.

Image Credits: Instagram

Combined, both moves signal a company that’s worried about the impact TikTok may have on the long-term future of its business.

The Chinese-owned rival video app has been surging in popularity around the world, and particularly with the Gen Z demographic. TikTok is now projected to top 1.2 billion monthly active users in 2021, according to a recent forecast. However, the app’s U.S. fate is still unknown, given the Trump administration’s waning attention to its TikTok ban, as well as uncertainty as to how the incoming Biden administration will proceed to enforce it.

Today’s TikTok captures users’ attention with its short-form content, personalized “For You” feed, sizable music catalog and special effects.

Image Credits: Instagram

But there’s also potential for the app to expand beyond being just an entertainment platform, as its recent partnership with Shopify on social commerce indicates. TikTok’s video format makes for an ideal medium to showcase a brand’s products — which is why Walmart angled in on the would-be TikTok acquisition for its U.S. operations, driven by Trump’s TikTok ban.

If and when TikTok scales this side of its business in the U.S., it could win social commerce market share from both Facebook and Instagram. And its appeal on the entertainment front could make it more difficult for Reels, or anyone else, to compete.

But Instagram has one big advantage in this battle: user data. It can inform its own personalization algorithms for Reels based on what users are doing elsewhere in its app, and even on Facebook if the user connected their account.

However, Stein says the main signals Reels personalization algorithms use are based on data coming from engagement within Reels, like whether you liked a video, for example.

Though Instagram users may not appreciate the buttons being relocated, Stein says that, in tests, people came to adapt to the changes. And in the end, it was necessary.

“We try to maintain simplicity by making sure that it’s clear why everything is where it is. But also, each tab has a really clear purpose to you,” says Stein. “So there’s now one clear place to go to start watching video and be entertained and, hopefully, have some fun,” he says. “There’s one really clear place to go now, when you want to post. And there’s one really clear place now you want to shop, which is really important to us.”

The changes will roll out to all markets where Reels and Shop are live, including the U.S., over the next few days.

Correction, 11/12/20 9:20 am et: We initially misspelled Robby Stein’s name. It’s spelled Robby Stein, not Robbie Stein. This has been corrected. Apologies for the error. 



Set in the Present



What Are the Technologies Involved in Gaming?


The technology in gaming is continuously evolving to provide more detailed, immersive, and better experiences for gamers. Similar to playing casino games like roulette, taking part in video games may ease depression, improve decision-making skills, reduce stress, and even improve vision. Because of the competitive nature of different forms, every developer, publisher, and platform […]

The post What Are the Technologies Involved in Gaming? appeared first on ALL TECH BUZZ.


The Best Selling Gaming Computers For 2021 Predictions


If you’re into gaming on any level that requires a bit more power than simply playing the occasional game of Solitaire, The Sims, playing slots, or checking the latest odds at sites such as Unibet New Jersey, for example, then you’ll understand that you need a solid gaming PC. Purpose-made gaming PCs are more […]

The post The Best Selling Gaming Computers For 2021 Predictions appeared first on ALL TECH BUZZ.


Twitter brings its Stories feature, Fleets, to Japan


Twitter’s own version of Stories, which it calls “Fleets,” has arrived in Japan. The feature allows users to post ephemeral content that automatically disappears after 24 hours. Though Fleets previously launched in Brazil, India, Italy and South Korea, Japan is notably Twitter’s second-largest market, with an estimated 51.9 million users.

It’s also second in terms of revenues, led by advertising. In Q3 2020, Japan generated $132.4 million in revenue, coming in second behind the U.S.’s $512.6 million.

Twitter can be experimental when it comes to new features — it even once developed a new way to manage threads with a public prototype, coded alongside user feedback. But not all the features it dabbles with make it to launch.

However, the further expansion of Fleets to Japan signals Twitter’s interest in the product hasn’t diminished over time. It seems it’s now only a matter of time before Fleets arrive in Twitter’s largest market, the U.S.

That said, the U.S. may be the hardest market for Fleets to crack, as here, many users are concerned about how all social media apps are starting to look alike.

Whatever feature becomes a breakout success on one platform soon finds its way to all the others. In the early days, we saw this trend with the “feed” format, modeled after Facebook’s News Feed. The Stories format, popularized by Snapchat, came next. And now apps like Instagram and Snapchat are ripping off TikTok with their own short-form video features.

The result is that apps are losing focus on what makes them unique.

Twitter, for what it’s worth, has historically been slow to copy from other social networks. In fact, it’s one of the last to embrace Stories — a feature that’s now even on LinkedIn, of all places.

Plus, in Twitter’s case, the Stories feature may end up serving a different purpose than on other networks.

Instead of offering users a way to post content of lesser quality — posts that didn’t deserve a more prominent spot in the feed, that is — Fleets may encourage users who haven’t felt comfortable with the platform’s more public nature to begin posting for the first time. Or, at least, it could push users to increase their content output and engagement.

Twitter’s Fleets work much like Stories on other platforms. With a tap on the “+” (plus) button, users can post text, photos, GIFs or videos. Meanwhile, viewers use gestures to navigate the Fleets posted by others. The Stories sit at the top of the app’s home screen, also like on other platforms.

Twitter tells TechCrunch all users in Japan should have Fleets available on their accounts soon, but couldn’t share a time frame for a U.S. launch.



Daily Crunch: Google Photos will end free, unlimited storage


Google changes its storage policy, Facebook extends its political ad ban and Ring doorbells are recalled. This is your Daily Crunch for November 11, 2020.

The big story: Google Photos will end free, unlimited storage

Google is changing its storage policies for free accounts in a way that could have a big impact on anyone regularly using Google Photos.

Currently, Google Photos allows users to store unlimited images (and HD video) as long as they’re under 16 megapixels. Starting on June 1, 2021, new photos and videos will all count toward the 15 gigabytes of free storage that the company offers to anyone with a free Google account.

Google says it will take the average user three years to reach 15 gigabytes — at which point they’ll either need to delete some photos or pay for a Google One account. Also on June 1: Docs, Sheets, Slides, Drawings, Forms and Jamboard files will start counting toward your storage total as well.

The tech giants

Facebook extends its temporary ban on political ads for another month — The company says the temporary ban will continue for at least another month.

ByteDance asks federal appeals court to vacate US order forcing it to sell TikTok — TikTok’s parent company says it remains committed to a negotiated solution and will only try to stop the government from forcing a sale “if discussions reach an impasse.”

Ring doorbells recalled over fire threat — The recall comes in the wake of 23 reports of fire and eight reports of minor burns.

Startups, funding and venture capital

SentinelOne, an AI-based endpoint security firm, confirms $267M raise on a $3.1B valuation — SentinelOne’s Singularity monitors and secures laptops, phones and other network-connected devices and services.

E-commerce startup Heroes raises $65M in equity and debt to become the Thrasio of Europe — The company has a strategy of acquiring and scaling high-performing Amazon businesses.

Seedcamp raises £78M for its fifth fund — This new fund increases the amount of capital the firm will invest in pre-seed and seed-stage companies.

Advice and analysis from Extra Crunch

Dear Sophie: What does Biden’s win mean for tech immigration? — Attorney Sophie Alcorn looks at the presidential election’s impact on U.S. immigration and immigration reform.

Greylock’s Asheem Chandna on ‘shifting left’ in cybersecurity and the future of enterprise startups — Enterprise software is changing faster this year than it has in a decade.

Square and PayPal earnings bring good (and bad) news for fintech startups — Square’s earnings give us a window into consumer payment activity, card usage, stock purchases and more.

(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

Honda to mass-produce Level 3 autonomous cars by March — Honda claims it will be the first automaker to mass-produce vehicles with autonomous capabilities that meet SAE Level 3 standards.

Data audit of UK political parties finds laundry list of failings — The audit claims parties are failing to come clean with voters about how they’re being invisibly profiled and targeted.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.



Improving On-Device Speech Recognition with VoiceFilter-Lite


Voice assistive technologies, which enable users to employ voice commands to interact with their devices, rely on accurate speech recognition to ensure responsiveness to a specific user. But in many real-world use cases, the input to such technologies often consists of overlapping speech, which poses great challenges to many speech recognition algorithms. In 2018, we published a VoiceFilter system, which leverages Google’s Voice Match to personalize interaction with assistive technology by allowing people to enroll their voices.


While the VoiceFilter approach is highly successful, achieving a better source-to-distortion ratio (SDR) than conventional approaches, efficient on-device streaming speech recognition requires addressing restrictions such as model size, CPU and memory limitations, battery usage and latency minimization.

In “VoiceFilter-Lite: Streaming Targeted Voice Separation for On-Device Speech Recognition”, we present an update to VoiceFilter for on-device use that can significantly improve speech recognition in overlapping speech by leveraging the enrolled voice of a selected speaker. Importantly, this model can be easily integrated with existing on-device speech recognition applications, allowing the user to access voice assistive features under extremely noisy conditions even if an internet connection is unavailable. Our experiments show that a 2.2MB VoiceFilter-Lite model provides a 25.1% improvement to the word error rate (WER) on overlapping speech.


Improving On-Device Speech Recognition
While the original VoiceFilter system was very successful at separating a target speaker's speech signal from other overlapping sources, its model size, computational cost and latency are not feasible for speech recognition on mobile devices.

The new VoiceFilter-Lite system has been carefully designed to fit on-device applications. Instead of processing audio waveforms, VoiceFilter-Lite takes exactly the same input features as the speech recognition model (stacked log Mel-filterbanks), and directly enhances these features by filtering out components not belonging to the target speaker in real time. Together with several optimizations on network topologies, the number of runtime operations is drastically reduced. After quantizing the neural network with the TensorFlow Lite library, the model size is only 2.2 MB, which fits most on-device applications.

To train the VoiceFilter-Lite model, the filterbanks of the noisy speech are fed into the network together with an embedding vector that represents the identity of the target speaker (i.e., a d-vector). The network predicts a mask that is element-wise multiplied with the input to produce enhanced filterbanks. During training, a loss function minimizes the difference between the enhanced filterbanks and the filterbanks of the clean speech.
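The shape of that computation can be sketched in a few lines of NumPy. This is an illustrative toy, not Google’s implementation: the single sigmoid layer, the dimensions and the plain L2 loss are assumptions standing in for the actual trained network and training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

T, F, D = 100, 128, 256  # frames, filterbank bins, d-vector size (illustrative)
noisy = rng.random((T, F))   # stacked log Mel-filterbanks of the noisy speech
clean = rng.random((T, F))   # filterbanks of the clean target speech
d_vector = rng.random(D)     # embedding representing the target speaker's identity

def predict_mask(features, d_vec, W, b):
    """Toy stand-in for the network: concatenate the d-vector to every
    frame and apply one sigmoid layer to produce a per-bin mask in (0, 1)."""
    tiled = np.tile(d_vec, (features.shape[0], 1))
    x = np.concatenate([features, tiled], axis=1)
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

W = rng.standard_normal((F + D, F)) * 0.01
b = np.zeros(F)

mask = predict_mask(noisy, d_vector, W, b)
enhanced = mask * noisy                      # element-wise multiplication with the input
loss = float(np.mean((enhanced - clean) ** 2))  # distance to the clean filterbanks
```

In the real system the mask comes from a trained neural network and the enhanced filterbanks feed directly into the speech recognizer; the point here is only the flow: features plus d-vector in, per-bin mask out, element-wise product, loss against the clean features.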

Model architecture of the VoiceFilter-Lite system.

VoiceFilter-Lite is a plug-and-play model, which allows the application in which it’s implemented to easily bypass it if the speaker did not enroll their voice. This also means that the speech recognition model and the VoiceFilter-Lite model can be separately trained and updated, which largely reduces engineering complexity in the deployment process.

As a plug-and-play model, VoiceFilter-Lite can be easily bypassed if the speaker did not enroll their voice.

Addressing the Challenge of Over-Suppression
When speech separation models are used for improving speech recognition, two types of error could occur: under-suppression, when the model fails to filter out noisy components from the signal; and over-suppression, when the model fails to preserve useful signal, resulting in some words being dropped from the recognized text. Over-suppression is especially problematic since modern speech recognition models are usually already trained with extensively augmented data (such as room simulation and SpecAugment), and thus are more robust to under-suppression.

VoiceFilter-Lite addresses the over-suppression issue with two novel approaches. First, it uses an asymmetric loss during the training process, such that the model is less tolerant to over-suppression than under-suppression. Second, it predicts the type of noise at runtime, and adaptively adjusts the suppression strength according to this prediction.
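The asymmetric-loss idea can be illustrated with a weighted L2 penalty. The exact formulation in the paper differs; the `alpha` weighting below is a hypothetical choice, chosen only to show the direction of the asymmetry.

```python
import numpy as np

def asymmetric_l2_loss(enhanced, clean, alpha=10.0):
    """Weighted L2: penalize over-suppression (useful signal removed,
    enhanced < clean) alpha times more heavily than under-suppression
    (residual noise left in, enhanced > clean)."""
    diff = enhanced - clean
    weights = np.where(diff < 0, alpha, 1.0)
    return float(np.mean(weights * diff ** 2))

clean = np.array([1.0, 2.0, 3.0])
under = clean + 0.5   # noise left in: mild penalty
over = clean - 0.5    # signal dropped: heavy penalty
```

With the same 0.5 deviation in each direction, the over-suppressed output is penalized ten times more, which is exactly the bias the training needs: better to leave a little noise in than to drop words.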

VoiceFilter-Lite adaptively applies stronger suppression strength when overlapping speech is detected.

With these two solutions, the VoiceFilter-Lite model retains strong performance on streaming speech recognition in other scenarios, such as single-speaker speech under quiet or noisy conditions, while still providing a significant improvement on overlapping speech. In our experiments, we observed a 25.1% improvement in word error rate after the 2.2MB VoiceFilter-Lite model was applied to additive overlapping speech. For reverberant overlapping speech, a more challenging condition that simulates far-field devices such as smart home speakers, we also observed a 14.7% improvement in word error rate with VoiceFilter-Lite.

Future Work
While VoiceFilter-Lite has shown great promise for various on-device speech applications, we are also exploring several other directions to make VoiceFilter-Lite more useful. First, our current model is trained and evaluated with English speech only. We are excited about adopting the same technology to improve speech recognition for more languages. Second, we would like to directly optimize the speech recognition loss during the training of VoiceFilter-Lite, which can potentially further improve speech recognition beyond overlapping speech.

Acknowledgements
The research described in this post represents joint efforts from multiple teams within Google. Contributors include Quan Wang, Ignacio Lopez Moreno, Mert Saglam, Kevin Wilson, Alan Chiao, Renjie Liu, Yanzhang He, Wei Li, Jason Pelecanos, Philip Chao, Sinan Akay, John Han, Stephen Wu, Hannah Muckenhirn, Ye Jia, Zelin Wu, Yiteng Huang, Marily Nika, Jaclyn Konzelmann, Nino Tasca, and Alexander Gruenstein.


RIP Google Music, one of the company’s last examples of generosity


Google Music is dead, and with it one of the few remaining connections I have to the company that doesn’t feel like a gun to my head. The service, now merged haphazardly with YouTube Music, recalled the early days of Google, when they sometimes just made cool internet things. It made it nearly a decade, though — pretty impressive for one of their products.

I’ll just say it up front: I’m a lifelong music pirate. Oh yes, I’ve reformed in recent years, but I’ve got a huge library of tracks that I’ve cultivated for decades and don’t plan to abandon any time soon (likewise you can pry Winamp from my cold, dead hands). So when Google announced back in 2011 I could stream it all to myself for free, it sounded too good to be true.

And indeed it was a relic of the old Google, which was quite simply all about taking things that are difficult to do yourself (find things online, set up a new email address, collaborate on a spreadsheet) and making them easier.

Google Music — as we’ll call it despite it having gone through several branding changes before the final indignity of being merged into another, worse service as a presumably short-lived tab — was not first to the music-streaming or downloading world by a long shot, but its promise of being able to upload your old music files and access them anywhere as if they were emails or documents was a surprisingly generous one.

Generous not just in that it was providing server space for 20,000 songs (!) for free and the infrastructure for serving those songs where you went, but in its acknowledgement of other models of owning media. It didn’t judge you for having 20,000 MP3s — they weren’t subjected to some kind of legitimacy check, and they didn’t report you to the RIAA for having them, though they certainly could have.

No, Google Music’s free media locker was the company, or at least a quorum of the product team, announcing that they get it: not everyone does everything the same way, and not everyone is ready to embrace whatever business model tech companies decide makes sense. (Notably it has shifted several times more since then.)

Though my perennial work frenemy at the time, MG Siegler, was not impressed with the beta, I vigorously defended it, noting that Google was starting simple and looking forward rather than trying to beat Apple at their own game. Plus, secretly, I was feverishly uploading a hundred gigs of music I’d gotten from Audiogalaxy, Napster, and SoulSeek. Here, I thought, was a bridge between my antiquarian habits and the cutting edge of tech.

Since then, like the resentfully loving owner of a junker, over the years I’ve been frustrated by Google Music in the ways that only one who truly relies on something can be. The app became essential to me even as its ever-changing and confusing interface confounded me. As Google’s media strategy and offerings fluctuated and blurred, my uploaded music sat there quietly, doing the same thing it did at launch: hosting my music files. Whatever it did in addition to that, it still let me access the glitchy, 128kbps version of The Bends I downloaded in 2001. I also had the security of knowing if my many drives died in a fire, I could at least recover my precious MP3s.

Whether I’d ripped it myself, pirated it in college, bought it on Bandcamp, or got it from a code inside the vinyl I bought at a show, it worked on Google Music. It integrated all my music in a truly all-accepting cloud player, and for that reason, I loved it in spite of its flaws and total lack of hipness.

Now, in deference to the explosion of YouTube’s popularity as a music platform — which more than anything else really is due to a new type of laziness and platform agnosticism peculiar to the next generation — Google Music exists as a sort of ghost of itself within the YouTube Music app, itself an evolution of a couple other failed music strategies.

Perhaps Google felt that the optics of obsoleting a service and cutting off millions of users from something useful and beloved were not worth risking — after doing exactly that with Reader (RIP).

So (after culling the users who forgot they had accounts) they settled on the next best thing, which was making Google Music suck. Buried inside the new app, the music I uploaded has undergone a regression: intermingled, poorly organized, unsearchable, and at every turn presented as the worse option, the uploaded library function seems to have been hidden away and hobbled.

The ugly and reliable Music Manager, which has run in the background on my Windows PCs for years, is dead, and adding new music is done by manually dragging the files onto the YouTube Music tab. Complaining about having to move my fingers a few inches when I get a new album seems a bit pampered, so I’ll just say that it’s telling that Google chose to make the user do the work when the whole service was built around preventing exactly that kind of work from having to be done.

I suppose I’m an exception to the usual Google and YouTube user, and as I’ve been careful to show the company for the last 20 years or so, there’s no money to be made from me. Yet as soon as I understood that Google was going to make it hard for me to do what I had been doing with them for a decade, I decided I was willing to pay for it. Now I pay Plex for a service Google decided was below them, and incidentally it’s way better. (Come to think of it, I started paying for Feedly after Google killed Reader, too.)

In a way I’m thankful. The idea of divorcing myself entirely from Google’s ecosystem isn’t a realistic one for me, though I do it where I can (though having moved to iOS, the cure sometimes seems worse than the disease). One of the tattered bindings holding me to Google was the music thing. And while I do plan to take up a hundred gigabytes on one of their databases somewhere for as long as I possibly can, I’m glad the company admitted that what they were giving me didn’t make sense for them any more. It means one less reason that what Google has to give makes sense for me.

Every service from Google now, especially with those new, bad logos, feels less like it’s offering a solution to a problem and more like it’s just another form of leverage for the company. We were spoiled by the old, weird Google that did things like Books because they could, throwing it in the teeth of the publishers, or Wave, an experiment in interactivity that in many ways is still ahead of its time. They did things because they hadn’t been done, and now they do things because they can’t let you leave.

So, RIP Google Music. You were good while you lasted, but ultimately what you did best was show me that we deserved better, and we weren’t going to get it by waiting around for Google to return to its roots.



Amazon’s new ‘Care Hub’ lets Alexa owners keep tabs on aging family members


Amazon today announced a set of new features aimed at making its Alexa devices more useful to aging adults. With the launch of “Care Hub,” an added option in the Alexa mobile app, family members can keep an eye on older parents and loved ones, with their permission, in order to receive general information about their activities and to be alerted if the loved one has called out for help.

The idea behind Care Hub, the company explains, is to offer reassurance to family members concerned about an elderly family member’s well-being, while also allowing those family members to maintain some independence.

This is not a novel use case for Alexa devices. Already, the devices are being used in senior living centers and other care facilities, by way of third-party providers.

Amazon stresses that while family members will be able to keep an eye on their loved ones’ Alexa use, it will respect their privacy by not offering specific information. For example, while a family member may be able to see that their parent had played music, it won’t say which song was played. Instead, all activity is displayed by category.

In addition, users will be able to configure alerts if there’s no activity or when the first interaction with the device occurs on a daily basis.

And if the loved one calls for help, the family member designated as the emergency contact can drop in on them through the Care Hub or contact emergency services.

Image Credits: Amazon

These new features are double opt-in, meaning that both the family member and their loved one need to first establish a connection between their Alexa accounts through an invitation process. This begins in the new Care Hub feature in the Alexa app and is confirmed via text message or email.

That may seem like a reasonable amount of privacy protection, but in reality, many older adults either struggle with or tend to avoid technology. Even things seemingly simple — like using a smartphone, email or texting — can sometimes be a challenge.

That means there are scenarios where a family member could set up the Care Hub system by accessing the other person’s accounts without their knowledge or by inventing an email that becomes “the parent’s email” just for this purpose.

Alternatively, they could just mislead mom or dad by saying they are helping them set up the new Alexa device, and — oh, can I borrow your phone to confirm something for the setup? (Or some other such deception.)

A more appropriate option to protect user privacy would be to have Alexa periodically ask the loved one if they were still okay with the Care Hub monitoring option being enabled, and to alert the loved one via the Alexa mobile app that a monitoring option was still turned on.

Of course, there may well be older adults who appreciate the ability to be connected to family in this way, especially if they live far from their relatives or are feeling isolated due to the coronavirus pandemic and the social distancing requirements that are keeping family members from visiting.

Amazon says Care Hub is rolling out in the U.S. The company notes it will learn from customer feedback to expand the feature set over time.



10 Free Academic Databases for Students


Thanks to the internet and modern technology, doing research is much easier for modern students. Of course, the college library is great, but having access to peer-reviewed sources from all over the world on your laptop is on another level. Both undergraduate and graduate students benefit from academic sources online. The only problem is that […]

The post 10 Free Academic Databases for Students appeared first on ALL TECH BUZZ.


Come June 1, 2021, all of your new photos will count against your free Google storage


Come June 1, 2021, Google will change its storage policies for free accounts — and not for the better. Basically, if you’re on a free account and a semi-regular Google Photos user, get ready to pay up next year and subscribe to Google One.

Currently, every free Google Account comes with 15 GB of online storage for all your Gmail, Drive and Photos needs. Email and the files you store in Drive already count against those 15 GB, but come June 1, all Docs, Sheets, Slides, Drawings, Forms and Jamboard files will count against that free storage as well. Those tend to be small files, but what’s maybe most important here: virtually all of your Photos uploads will now count against those 15 GB, too.

That’s a big deal because today, Google Photos lets you store unlimited images (and unlimited video, if it’s in HD) for free, as long as they are under 16MP in resolution or you opt to have Google degrade the quality. Come June of 2021, any new photo or video uploaded in high quality, which currently wouldn’t count against your allocation, will count against those free 15 GB.

Image Credits: Google

As people take more photos every year, that free allotment won’t last very long. Google argues that 80 percent of its users will have at least three years before reaching those 15 GB. Given that you’re reading TechCrunch, though, chances are you’re in the 20 percent that will run out of space much faster (or you’re already on a Google One plan).
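As a quick back-of-the-envelope check on that “three years” figure — the per-photo size and yearly photo count below are illustrative assumptions, not Google’s numbers:

```python
# Rough sanity check of the "three years for the average user" claim.
free_gb = 15
avg_photo_mb = 2.5        # typical "high quality" compressed photo (assumption)
photos_per_year = 2000    # a fairly active phone photographer (assumption)

gb_per_year = avg_photo_mb * photos_per_year / 1024
years_to_fill = free_gb / gb_per_year
print(f"~{gb_per_year:.1f} GB/year -> free tier lasts ~{years_to_fill:.1f} years")
```

Under those assumptions you burn roughly 5 GB a year and the free tier lasts about three years, which lines up with Google’s claim; shoot 4K video and the math gets much worse, much faster.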

Some good news: to make this transition a bit easier, photos and videos uploaded in high quality before June 1, 2021 will not count toward the 15 GB of free storage. Original-quality images will continue to count against it, as they always have. And if you own a Pixel device, you can still upload an unlimited number of high-quality images from it, even after June 1.

To let you see how long your current storage will last, Google will now show you personalized estimates, too, and come next June, the company will release a new free tool for Photos that lets you more easily manage your storage. It’ll also show you dark and blurry photos you may want to delete — but then, for a long time Google’s promise was you didn’t have to worry about storage (remember Google’s old Gmail motto? ‘Archive, don’t delete!’)

In addition to these storage updates, there are a few additional changes worth knowing about. If your account is inactive in Gmail, Drive or Photos for more than two years, Google ‘may’ delete the content in that product. So if you use Gmail but don’t use Photos for two years because you use another service, Google may delete any old photos you had stored there. And if you stay over your storage limit for two years, Google “may delete your content across Gmail, Drive and Photos.”

Cutting back a free and (in some cases) unlimited service is never a great move. Google argues that it needs to make these changes to “continue to provide everyone with a great storage experience and to keep pace with the growing demand.”

People now upload more than 4.3 million GB to Gmail, Drive and Photos every day. That’s not cheap, I’m sure, but Google also controls every aspect of this and must have had some internal projections of how this would evolve when it first set those policies.

To some degree, though, this was maybe to be expected. This isn’t the freewheeling Google of 2010 anymore, after all. We’ve already seen some indications that Google may reserve some advanced features for Google One subscribers in Photos, for example. This new move will obviously push more people to pay for Google One and more money from Google One means a little bit less dependence on advertising for the company.

