19 August 2019

Ally raises $8M Series A for its OKR solution


OKRs, or Objectives and Key Results, are a popular planning method in Silicon Valley. Like most of those methods that make you fill in some form once every quarter, I’m pretty sure employees find them rather annoying and a waste of their time. Ally wants to change that and make the process more useful. The company today announced that it has raised an $8 million Series A round led by Accel Partners, with participation from Vulcan Capital, Founders Co-op and Lee Fixel. The company, which launched in 2018, previously raised a $3 million seed round.

Ally founder and CEO Vetri Vellore tells me that he learned his management lessons and the value of OKR at his last startup, Chronus. After years of managing large teams at enterprises like Microsoft, he found himself challenged to manage a small team at a startup. “I went and looked for new models of running a business execution. And OKRs were one of those things I stumbled upon. And it worked phenomenally well for us,” Vellore said. That’s where the idea of Ally was born, which Vellore pursued after selling his last startup.

Most companies that adopt this methodology, though, tend to work with spreadsheets and Google Docs. Over time, that simply doesn’t work, especially as companies get larger. Ally, then, is meant to replace these other tools. The service is currently in use at “hundreds” of companies in more than 70 countries, Vellore tells me.

One of its early adopters was Remitly. “We began by using shared documents to align around OKRs at Remitly. When it came time to roll out OKRs to everyone in the company, Ally was by far the best tool we evaluated. OKRs deployed using Ally have helped our teams align around the right goals and have ultimately driven growth,” said Josh Hug, COO of Remitly.


Vellore tells me that he has seen teams go from annual or bi-annual OKRs to more frequently updated goals, too, which is something that’s easier to do when you have a more accessible tool for it. Nobody wants to use yet another tool, though, so Ally features deep integrations into Slack, with other integrations in the works (something Ally will use this new funding for).

Since adopting OKRs isn’t always easy for companies that previously used other methodologies (or nothing at all), Ally also offers training and consulting services with online and on-site coaching.

Pricing for Ally starts at $7 per month per user for a basic plan, but the company also offers a flat $29 per month plan for teams with up to 10 users, as well as an enterprise plan, which includes some more advanced features and single sign-on integrations.



5 Ways to Make Your Computer Read Documents to You



Want to know how to get your computer to read to you? Several different approaches are available. Both Windows and Mac have native tools that can read documents and MS Word files aloud, and there is also a bevy of third-party apps.

Want to know more? Keep reading to learn how to get your computer to read documents out loud.

Can Microsoft Word Read to You?

For many people, the main reason for getting their computer to read to them is so they can listen to an audio output of a Microsoft Word file.

It helps give your eyes a break if you’re reading something that’s dozens of pages long. And it’s also a great way to spot typos and other grammatical errors in your work.

But can Microsoft Word read to you directly? The answer is yes.

The app has its own built-in document reader called Speak; you don’t need to use your operating system’s native narrator. Better yet, you can get Word to read to you on both Windows and Mac versions of the software, leading to a seamless experience across both platforms.

How to Make Word Read to You on Windows


To make Word read to you on your Windows computer, follow the step-by-step instructions below:

  1. Open the document you want Word to read.
  2. Place the cursor where you want the Word reader to begin.
  3. Go to Review > Speech > Read Aloud.

The narration should start immediately. If not, click the Play button in the upper right corner of the window. You can also use the Speak panel to edit the speech output; both the reading speed and the voice used are customizable.

The voices available are determined by the language setting you are using in the document. To change the language of the text, use the button in the Status Bar at the bottom of the page.

How to Make Word Read to You on Mac

To get a Mac to read text from a Word file, you can use the same process as Windows:

  1. Open the document you want Word to read.
  2. Place the cursor where you want the reading to begin.
  3. Go to Review > Speech > Read Aloud.

On Mac, the playback controls and settings button appear in a floating on-screen widget that you can drag around.

How to Make Your Computer Read to You

We’ve looked at how to make Microsoft Word read aloud, but what about the rest of the Windows or Mac operating system?

Both operating systems have built-in tools, but there are also some third-party apps available.

How to Get Windows to Read to You


In Windows, the native screen reader is called Narrator. It’s one of the Ease of Access tools. You can find it in the Start menu or by using a Cortana search.

When you use Narrator for the first time, Windows will prompt you to work through a 13-stage setup process. You can customize many aspects of the way Narrator works, including startup settings, voice settings, and custom commands. All the settings are available in Narrator’s app window.

When Narrator is running, you can toggle it on and off by pressing Ctrl + Windows + Enter.

How to Get Your Mac to Read to You


A Mac can also read any on-screen text. The Speech tool is available among the Accessibility tools. To start it, head to Apple > System Preferences > Accessibility > Speech.

At the top of the window, you can choose from various speaking voices. The options available are connected to the language packs installed on your Mac’s operating system. There are also settings for speaking speed, system/app announcements, and an option to have your Mac speak selected text when you press a keyboard shortcut.

Third-Party Apps to Make Your Computer Read Documents to You

If you need an app that’s capable of reading all the text within an operating system, the native tools are your best bet.

However, if you just want another document reader, a PDF audio reader, or a similar text-to-speech tool, there are plenty of third-party options available.

1. Balabolka

Balabolka is probably the best third-party document reader thanks to its impressive list of features. However, that also means the app is one of the least accessible for beginners.

The app supports a wide list of document types, including DOC, TXT, PDF, EPUB, and HTML. It even lets you save the audio output in various formats (including WAV and MP3), so you can share the files with other people.

Finally, there’s a bookmark feature. This is handy if you’re listening to the narration of a long document and don’t want to lose your place.

The app is only available on Windows.

Download: Balabolka (Free)

2. Natural Reader


Another widely used text-to-speech app is Natural Reader. It has both a free and a premium version.

The free app has unlimited use, a scanner bar that lets you read any text on the screen, a built-in browser that lets you access the web and read websites aloud in a single interface, and support for DOC, PDF, TXT, and EPUB files.

If you want something a bit more powerful, you can buy the full app for $99.50. It includes two natural voices and downloadable audio files. For $199, you get unlimited OCR to read aloud from images and scanned PDFs.

Natural Reader is available on both Windows and Mac.

Download: Natural Reader (Free, premium version available)

3. eSpeak

eSpeak is an open source document reader that’s available for Windows and Linux computers.

The output is synthesized, unlike that of many larger, big-budget apps, which now use human voice recordings to sound more realistic. But on the positive side, the app is tiny—less than 2MB, including all the language data. All major world languages are available, though some are still a work in progress.

Download: eSpeak (Free)
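
If you’re comfortable with a little scripting, you can also build a bare-bones document reader yourself. Below is a minimal sketch in Python using the pyttsx3 library, which drives SAPI5 on Windows, NSSpeechSynthesizer on macOS, and eSpeak on Linux. The file name is a placeholder, and the speaking rate is an arbitrary choice.

# A minimal text-to-speech script built on the pyttsx3 library
# (install it first with: pip install pyttsx3).
import pyttsx3

def read_file_aloud(path, rate=175):
    """Read the contents of a plain-text file out loud."""
    with open(path, encoding="utf-8") as f:
        text = f.read()

    engine = pyttsx3.init()
    engine.setProperty("rate", rate)  # speaking speed in words per minute
    engine.say(text)
    engine.runAndWait()  # blocks until the speech has finished

if __name__ == "__main__":
    # "document.txt" is a placeholder; point this at any text file.
    read_file_aloud("document.txt")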

Other Ways to Read Text Out Loud

The tools we’ve discussed in this piece should be suitable for the vast majority of users. Make sure you let us know about your favorite document readers in the comments below.

If you would like to learn more about document readers and accessibility tools in general, read our articles on the best accessibility tools in Office and the best text-to-speech software for Windows.




Kodi Remote: The 10 Best Ways to Control Kodi From Your Couch

3 Ways to Schedule SMS Text Messages on Android

The 6 Best Wireless Mouse and Keyboard Combos for All Budgets

Flathub vs. Snap Store: The Best Sites for Downloading Linux Apps



Downloading apps for Linux is no longer the challenge that it once was. Gone are the days when you had to know how to build from source files for any program that wasn’t available in your Linux distribution’s app store or package manager.

Thanks to Flathub and the Snap Store, such apps are now easy to both find and install. But how do these sites compare?

What Are Flatpak and Snap Files?

Flathub and the Snap Store are two websites that have grown around two separate universal package formats for Linux: Flatpaks and snaps.

The idea behind both formats is to provide a way to distribute apps on Linux that works regardless of which distribution you use. These formats also offer security enhancements. Both can isolate apps from one another, so that a rogue piece of software can’t access the pictures or passwords you have open elsewhere on your desktop.

Flatpak is heavily integrated into the GNOME desktop environment, but it still works with others. More Linux distributions have embraced Flatpak as their preferred universal package format. Flatpak is a community project, though the private companies Red Hat and Endless have funded much of its development.

Snap is a file format that comes from Canonical, the company behind the Ubuntu Linux distribution. Unlike Flatpaks, snaps were originally intended for servers. While snaps work on various Linux distributions, they are overwhelmingly Canonical’s baby. Yet with so many people using Ubuntu compared to other distros, the Snap Store is not short on apps. The format may ultimately see greater adoption based on the sheer popularity of Ubuntu alone.

How Do Flathub and the Snap Store Compare?

The Flathub site's homepage

Taken together, Flathub and the Snap Store provide a way to get many of the major desktop apps you might want for Linux. If you use a distro that supports both Flatpak and snap files (which most common distros do), you’re able to enjoy the best of both worlds.

Flathub has more of a free and open source vibe. You get the essentials and little else. In contrast, the Snap Store feels like a more commercial experience. Canonical’s creations look and feel a lot more enterprise-oriented than they did in Ubuntu’s early days.

But the visual differences are mainly cosmetic. You navigate both online app centers in essentially the same way, and each lets you begin installing an app by clicking a button in your browser.

The Nextcloud app open in the Snap Store

While it’s easy to think of Flathub and the Snap Store as app stores, neither contains any paid software. Whether you’re downloading open source or proprietary software, you won’t have to pay anyone for the privilege.

Now let’s dive into these two sites and expand on how they differ.

1. Layout

App categories on the Flathub homepage

Flathub offers a clean and minimal experience. Its interface feels like a web version of GNOME Software. Flathub arranges apps in a grid and sorts them into roughly the same categories you see in Linux app launchers.

The Snap Store’s layout is functionally similar, but the experience feels more corporate. There’s more clutter across the top where Canonical has placed links to developer resources, making the site initially feel geared more toward app makers. You also see a little more sales speak as Canonical hypes up the number of snaps, its user count, and the number of supported distros.

The Snap Store's homepage

Both Flathub and the Snap Store display apps in groups. Flathub contains a few categories on its homepage, whereas the Snap Store provides many for you to scroll through before you dive deeper into the site.

2. Discovering Apps

The Snap Store displaying social media apps

The Snap Store’s app categories are curated, making it easier to browse and discover new software. Notably, the categories go beyond what a developer may put in an app’s metadata. You’ll find sections such as Social, Server and cloud, Security, Devices and IoT, and Art and design. Canonical’s app curation makes it easier to find the apps that are available.

The Snap Store also delivers better search results. Typing “photo” into the search bar in the Snap Store yields around 40 apps. Doing the same on Flathub brings up under 10. Yet that isn’t representative of the apps that are available. The Darktable RAW image editor is available in both stores, but while it appeared in the Snap Store’s search, it did not appear in Flathub’s.

3. App Availability

A search for email apps in the Snap Store

The Snap Store appears to have a larger selection of apps. Canonical claims to have thousands. Flathub, by comparison, lists a little over 600 (though it’s worth pointing out that Flathub is not the only source of Flatpaks, in contrast to snaps).

Whether the Snap Store has more apps that you want depends on what you’re after. Canonical’s store has greater support from companies willing to bring proprietary software to Linux. Flathub has more adoption in the free and open source community.

If you’re looking for an ebook reader for GNOME, you can find both GNOME Books and Foliate on Flathub, but neither appears in the Snap Store at the time of writing. The same is true of the Bookworm app made for elementary OS. Meanwhile, the Snap Store has the proprietary Hiri and Mailspring email clients, plus the Flock team communication app. None of these three is on Flathub.

4. Distro Support

Linux distros with Flathub support

Flathub currently supports 21 distros. The Snap Store supports 41. But the issue of support is more nuanced than whether you can install Flatpaks or snaps on your Linux distro. A potentially more telling question is which format your distro actively embraces. Ubuntu, obviously, is all about Snaps.

Fedora is the distro throwing the most weight behind Flatpaks, but it’s not alone. elementary OS has selected Flatpak as the format it will distribute in AppCenter. Purism, the company behind PureOS, uses Flatpaks on its Librem 5 phone. This influences whether apps made for those distros are more likely to appear on Flathub or in the Snap Store.

Distros are able to host their own Flatpak repositories, which is a big reason why certain distros have chosen to back the format. In contrast, Snaps are hard-coded to come from Canonical servers. This kind of centralization leaves many free software developers feeling uncomfortable. Yes, Canonical is hosting the service out of its own wallet, but if it decides to close down the site, Snaps will go with it. Given Canonical’s history, such a possibility is not unlikely.

Which Linux App Store Should You Use?

Honestly, there’s little reason not to use both. Unlike the DEB and RPM formats, you can easily install Flatpaks and snap packages on the same desktop. While it would be nice to have one universal package format for free and open source desktops, it isn’t necessary. If there are a couple of formats that are both likely to work on your PC, that’s a much better situation than Linux software management has offered in the past.

But if I had to pick, I personally prefer Flathub. I stick to libre software, and while both stores mark whether an app has a free or a proprietary license, Canonical has made more of an effort to reach out to proprietary app developers. That definitely helps people migrate over from Windows or macOS, but I transitioned years ago, and I’ve long since acclimated to free alternatives. You can do the same by checking out the best free and open source apps for Linux.




The 6 Best Weather Stations for Accurate Forecasts

The Best Way to Sync an Outlook Calendar With Your iPhone

The Best Roku Web Browsers to Use



Can you browse the internet on a Roku? Yes! Contrary to popular belief, it’s possible to install an internet browser on your Roku. On the downside, the number of Roku web browser options is very limited, and the browsers that do exist are lacking in features.

Nonetheless, if you want to learn about the best Roku web browser options, along with a workaround that offers a better web browsing experience, keep reading.

Does Roku Have an Internet Browser?

If you want to use your streaming device as a web browser, Roku sticks and set-top boxes are definitely not the best option.

Despite being around for several years (and despite several of Roku’s main competitors offering internet browsers on their own streaming devices), there are only two browsers in the Roku Channel Store. Neither of them was developed by Roku itself.

The Best Roku Web Browser in the Channel Store

The two Roku web browsers that are available in the official Roku Channel Store are Web Browser X and POPRISM Web Browser.

1. Web Browser X


The best Roku web browser is Web Browser X. We use the word “best” somewhat loosely. If you’re expecting a slick and modern interface, you’re going to be disappointed. Web Browser X looks like it was designed in the early 1990s; the fonts and the interface are shockingly old fashioned.

That said, it does work—though it will struggle to render and format highly complex pages. There are some pre-saved favorites (such as Google News, CNN, and ABC News), but you can visit any site by entering the URL. You can also add your own frequently visited sites to your list of favorites.

To navigate a web page, use the left and right buttons on your remote to cycle through the links on a page, and use the up and down arrows to scroll through the text.

On the downside, the browser cannot play videos (so stay clear of YouTube et al.), and it cannot fill in web forms, username fields, and password fields.

And another word of warning. During the research for this piece, I downloaded the app from the Mexican version of the Channel Store. It told me the price was $0.00, but then generated a monthly invoice of $4.99 against my account. The US version of the store does indeed list the price at $4.99/month, so there is a discrepancy between the different national stores. Make sure you’re not caught out.

2. POPRISM Web Browser


The only other Roku web browser in the Channel Store is POPRISM Web Browser. Frankly, it’s many levels worse than Web Browser X.

That’s because it can only read text—there are no images, no GUIs, no CSS, no JavaScript, and so on. Whichever site you visit, you’ll just see a mass of unformatted text.

Needless to say, therefore, the browser is utterly useless for the vast majority of sites. It’s just about passable for text forums, RSS feeds, and other content that’s extremely text-heavy. Basic Google search results are also readable.

On the positive side, the POPRISM Roku browser didn’t try to scam me out of $4.99. You’ve got to look for the bright spots.

Use Screen Mirroring to Browse the Internet on Roku

As we’ve now established, it is possible to install a Roku web browser, but the solutions available are far from ideal.

Therefore, the best approach is to use screen mirroring and cast a browser from your phone or computer directly to your Roku device.

How to Cast a Web Browser to Roku from Windows


As covered above, besides the Roku internet browsers available in the Channel Store, the only other option is to cast a browser from your phone or computer to your Roku using screen mirroring.

To cast a Windows web browser to Roku, follow the step-by-step instructions below:

  1. Check your Roku is running at least version 7.7 of the operating system by heading to Settings > System > About. If it’s not, navigate to Settings > System > System Update > Check Now and let the process complete.
  2. On Windows, open the Action Center by clicking on the appropriate link in the lower right-hand corner of your screen.
  3. Click on the Connect tile. If you cannot see it straight away, you may need to click on Expand.
  4. Allow Windows to scan for your Roku. The process could take up to 30 seconds.
  5. Click on the Roku’s name in the list of devices. The connection will then occur automatically.
  6. Open your web browser of choice and start surfing.

To disable casting, select Stop Video on your TV screen or hit Disconnect on Windows.

How to Cast a Web Browser to Roku from Android

If you’d prefer to browse the internet on Roku from your Android phone or tablet, follow these instructions instead:

  1. Open your Android’s Settings app.
  2. Go to Connected Devices > Pair New Devices.
  3. Wait for Android to find your Roku streaming stick or set-top box.
  4. Tap on the name of your Roku and wait for the connection to initialize.
  5. Open the web browser you want to use on your Roku.

(Note: Not all Android devices support Miracast. For more information, consult the manufacturer’s literature.)

Even Screen Mirroring Has Its Downsides

Unfortunately, even casting a browser to Roku has some drawbacks.

Firstly, Roku screen mirroring relies on Miracast technology. That means only Windows and Android devices can cast their screens natively. Neither iOS nor macOS has support for Miracast, meaning you’ll need to use a third-party app to achieve the same result. The best third-party app to cast an iPhone or Mac screen to a Roku is arguably AirBeamTV.

Secondly, Miracast is not a particularly reliable protocol. It’s prone to lagging, connection dropouts, failed pairings, and other issues.

Finally, screen mirroring means you need to a) leave the screen running on the casting device (which can quickly drain your battery), and b) use the casting device to control the web browser.

Using the casting device to control the browser might not be an issue if you’re watching a video. But for active browsing, it’s hard to see what benefits casting would have over simply using the main device—especially considering you’d need to look at the device’s screen to see what you’re doing.

Despite the drawbacks, however, if you absolutely need to have a full-featured web browser on your Roku device, screen mirroring is the best option.

Learn More About Using a Roku

Although support for Roku web browsers might be somewhat lacking, that doesn’t mean that Roku devices aren’t still excellent devices to have around your home.

To learn more about using a Roku, check out our articles explaining how to get Google on your Roku and the best free Roku channels to install today.




On-Device, Real-Time Hand Tracking with MediaPipe




The ability to perceive the shape and motion of hands can be a vital component in improving the user experience across a variety of technological domains and platforms. For example, it can form the basis for sign language understanding and hand gesture control, and can also enable the overlay of digital content and information on top of the physical world in augmented reality. While coming naturally to people, robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and handshakes) and lack high contrast patterns.

Today we are announcing the release of a new approach to hand perception, which we previewed at CVPR 2019 in June, implemented in MediaPipe—an open-source, cross-platform framework for building pipelines to process perceptual data of different modalities, such as video and audio. This approach provides high-fidelity hand and finger tracking by employing machine learning (ML) to infer 21 3D keypoints of a hand from just a single frame. Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands. We hope that providing this hand perception functionality to the wider research and development community will result in an emergence of creative use cases, stimulating new applications and new research avenues.
3D hand perception in real-time on a mobile phone via MediaPipe. Our solution uses machine learning to compute 21 3D keypoints of a hand from a video frame. Depth is indicated in grayscale.
An ML Pipeline for Hand Tracking and Gesture Recognition
Our hand tracking solution utilizes an ML pipeline consisting of several models working together:
  • A palm detector model (called BlazePalm) that operates on the full image and returns an oriented hand bounding box.
  • A hand landmark model that operates on the cropped image region defined by the palm detector and returns high fidelity 3D hand keypoints.
  • A gesture recognizer that classifies the previously computed keypoint configuration into a discrete set of gestures.
This architecture is similar to that employed by our recently published face mesh ML pipeline and that others have used for pose estimation. Providing the accurately cropped palm image to the hand landmark model drastically reduces the need for data augmentation (e.g. rotations, translation and scale) and instead allows the network to dedicate most of its capacity towards coordinate prediction accuracy.
Hand perception pipeline overview.
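For readers who want to experiment with the pipeline from code: later releases of MediaPipe ship a Python wrapper around this hand stack. The sketch below shows roughly how it can be driven from a webcam; it assumes the mediapipe and opencv-python packages are installed, and the module names belong to that wrapper rather than to the graph files described in this post.

# Rough sketch: driving the hand tracking pipeline via MediaPipe's
# Python wrapper (shipped in later MediaPipe releases).
# Assumes: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 landmarks per hand: normalized x, y plus relative depth z.
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()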
BlazePalm: Realtime Hand/Palm Detection
To detect initial hand locations, we employ a single-shot detector model called BlazePalm, optimized for mobile real-time uses in a manner similar to BlazeFace, which is also available in MediaPipe. Detecting hands is a decidedly complex task: our model has to work across a variety of hand sizes with a large scale span (~20x) relative to the image frame and be able to detect occluded and self-occluded hands. Whereas faces have high contrast patterns, e.g., in the eye and mouth region, the lack of such features in hands makes it comparatively difficult to detect them reliably from their visual features alone. Instead, providing additional context, like arm, body, or person features, aids accurate hand localization.

Our solution addresses the above challenges using different strategies. First, we train a palm detector instead of a hand detector, since estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers. In addition, as palms are smaller objects, the non-maximum suppression algorithm works well even for two-hand self-occlusion cases, like handshakes. Moreover, palms can be modelled using square bounding boxes (anchors in ML terminology) ignoring other aspect ratios, therefore reducing the number of anchors by a factor of 3-5. Second, an encoder-decoder feature extractor is used for bigger scene context awareness even for small objects (similar to the RetinaNet approach). Lastly, we minimize the focal loss during training to support a large number of anchors resulting from the high scale variance.

With the above techniques, we achieve an average precision of 95.7% in palm detection. Using a regular cross entropy loss and no decoder gives a baseline of just 86.22%.

Hand Landmark Model
After palm detection over the whole image, our subsequent hand landmark model performs precise keypoint localization of 21 3D hand-knuckle coordinates inside the detected hand regions via regression, that is, direct coordinate prediction. The model learns a consistent internal hand pose representation and is robust even to partially visible hands and self-occlusions.

To obtain ground truth data, we have manually annotated ~30K real-world images with 21 3D coordinates, as shown below (we take the Z-value from the image depth map, if one exists for the corresponding coordinate). To better cover the possible hand poses and provide additional supervision on the nature of hand geometry, we also render a high-quality synthetic hand model over various backgrounds and map it to the corresponding 3D coordinates.
Top: Aligned hand crops passed to the tracking network with ground truth annotation. Bottom: Rendered synthetic hand images with ground truth annotation
However, purely synthetic data poorly generalizes to the in-the-wild domain. To overcome this problem, we utilize a mixed training schema. A high-level model training diagram is presented in the following figure.
Mixed training schema for hand tracking network. Cropped real-world photos and rendered synthetic images are used as input to predict 21 3D keypoints.
The table below summarizes regression accuracy depending on the nature of the training data. Using both synthetic and real world data results in a significant performance boost.

Dataset | Mean regression error normalized by palm size
Only real-world | 16.1%
Only rendered synthetic | 25.7%
Mixed real-world + synthetic | 13.4%

Gesture Recognition
On top of the predicted hand skeleton, we apply a simple algorithm to derive the gestures. First, the state of each finger, e.g. bent or straight, is determined by the accumulated angles of joints. Then we map the set of finger states to a set of pre-defined gestures. This straightforward yet effective technique allows us to estimate basic static gestures with reasonable quality. The existing pipeline supports counting gestures from multiple cultures, e.g. American, European, and Chinese, and various hand signs including “Thumb up”, closed fist, “OK”, “Rock”, and “Spiderman”.
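
The finger-state heuristic is simple enough to sketch in a few lines. The Python snippet below is an illustrative reimplementation rather than the production code: it sums each finger’s deviation from a straight line across its three joints, thresholds that into bent/straight, and looks the resulting state vector up in a small gesture table. The bend threshold and the gesture table entries are assumptions chosen for illustration.

import numpy as np

# 21-keypoint layout: wrist = 0, then 4 landmarks per finger
# (thumb 1-4, index 5-8, middle 9-12, ring 13-16, pinky 17-20).
FINGERS = {
    "thumb": [1, 2, 3, 4],
    "index": [5, 6, 7, 8],
    "middle": [9, 10, 11, 12],
    "ring": [13, 14, 15, 16],
    "pinky": [17, 18, 19, 20],
}

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def finger_states(keypoints, bend_threshold=0.65):
    """Map a 21x3 keypoint array to {finger: 'straight' | 'bent'} using
    the angles accumulated along each finger's joints."""
    states = {}
    for name, idx in FINGERS.items():
        chain = [keypoints[0]] + [keypoints[i] for i in idx]
        # Total deviation from a straight line (pi) at the 3 interior joints.
        bend = sum(np.pi - joint_angle(chain[i - 1], chain[i], chain[i + 1])
                   for i in range(1, 4))
        states[name] = "bent" if bend > bend_threshold else "straight"
    return states

# Illustrative gesture table: maps a tuple of finger states to a label.
GESTURES = {
    ("straight", "bent", "bent", "bent", "bent"): "Thumb up",
    ("bent", "bent", "bent", "bent", "bent"): "Closed fist",
    ("straight", "straight", "bent", "bent", "straight"): "Spiderman",
}

def classify(keypoints):
    s = finger_states(np.asarray(keypoints, dtype=float))
    key = tuple(s[f] for f in ("thumb", "index", "middle", "ring", "pinky"))
    return GESTURES.get(key, "unknown")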

Implementation via MediaPipe
With MediaPipe, this perception pipeline can be built as a directed graph of modular components, called Calculators. MediaPipe comes with an extendable set of Calculators to solve tasks like model inference, media processing, and data transformations across a wide variety of devices and platforms. Individual calculators like cropping, rendering and neural network computations can be performed exclusively on the GPU. For example, we employ TFLite GPU inference on most modern phones.

Our MediaPipe graph for hand tracking is shown below. The graph consists of two subgraphs—one for hand detection and one for hand keypoints (i.e., landmark) computation. One key optimization MediaPipe provides is that the palm detector is only run as necessary (fairly infrequently), saving significant computation time. We achieve this by inferring the hand location in the subsequent video frames from the computed hand key points in the current frame, eliminating the need to run the palm detector over each frame. For robustness, the hand tracker model outputs an additional scalar capturing the confidence that a hand is present and reasonably aligned in the input crop. Only when the confidence falls below a certain threshold is the hand detection model reapplied to the whole frame.
The hand landmark model’s output (REJECT_HAND_FLAG) controls when the hand detection model is triggered. This behavior is achieved by MediaPipe’s powerful synchronization building blocks, resulting in high performance and optimal throughput of the ML pipeline.
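In pseudocode terms, that scheduling logic reduces to a loop like the following Python sketch. The detector and landmark models are passed in as opaque callables standing in for the corresponding MediaPipe calculators; the confidence threshold and the crop-expansion margin are illustrative values, not the ones used in the released graph.

CONFIDENCE_THRESHOLD = 0.5  # illustrative value

def track_hands(frames, palm_detector, landmark_model):
    """Run the palm detector only when tracking is lost; otherwise derive
    the next frame's crop from the current frame's landmarks."""
    crop = None
    for frame in frames:
        if crop is None:
            # Expensive path: full-frame palm detection.
            crop = palm_detector(frame)
            if crop is None:
                continue  # no hand in this frame
        landmarks, hand_present_score = landmark_model(frame, crop)
        if hand_present_score < CONFIDENCE_THRESHOLD:
            # Tracking lost: force re-detection on the next frame.
            crop = None
            continue
        yield landmarks
        # Cheap path: predict the next crop from the current landmarks.
        crop = bounding_box_from_landmarks(landmarks)

def bounding_box_from_landmarks(landmarks, margin=0.25):
    """Placeholder: expand the landmarks' bounding box by a margin to
    obtain the crop for the next frame."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return (min(xs) - margin * w, min(ys) - margin * h,
            max(xs) + margin * w, max(ys) + margin * h)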
A highly efficient ML solution that runs in real-time and across a variety of different platforms and form factors involves significantly more complexities than what the above simplified description captures. To this end, we are open sourcing the above hand tracking and gesture recognition pipeline in the MediaPipe framework, accompanied by the relevant end-to-end usage scenario and source code, here. This provides researchers and developers with a complete stack for experimentation and prototyping of novel ideas based on our model.

Future Directions
We plan to extend this technology with more robust and stable tracking, enlarge the amount of gestures we can reliably detect, and support dynamic gestures unfolding in time. We believe that publishing this technology can give an impulse to new creative ideas and applications by the members of the research and developer community at large. We are excited to see what you can build with it!
Acknowledgements
Special thanks to all our team members who worked on the tech with us: Andrey Vakunov, Andrei Tkachenka, Yury Kartynnik, Artsiom Ablavatski, Ivan Grishchenko, Kanstantsin Sokal‎, Mogan Shieh, Ming Guang Yong, Anastasia Tkach, Jonathan Taylor, Sean Fanello, Sofien Bouaziz, Juhyun Lee‎, Chris McClanahan, Jiuqiang Tang‎, Esha Uboweja‎, Hadon Nash‎, Camillo Lugaresi, Michael Hays, Chuo-Ling Chang, Matsvei Zhdanovich and Matthias Grundmann.

‘Breaking Into Startups’: Torch CEO and Well Clinic founder Cameron Yarbrough on mental health & coaching

The five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF


The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

#1 Artificial Intelligence.
AI is at once the most consequential and most hyped technology, and no one doubts that it will change business software and increase productivity like few, if any, technologies before it. To peek ahead into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein).

#2 Data, the Cloud and Kubernetes.
If AI is at the dawn of tomorrow, cloud transformation is the high noon of today. 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders in the open-source Kubernetes revolution, Joe Beda (VMware) and Aparna Sinha (Google), among others, will dig into what Kubernetes means to companies making the move to the cloud. And last, there is the question of how to find signal in all the data, which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

#3 Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics Chief Experience Officer Julie Larson-Green. We have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort, Jim Clarke, the director of quantum hardware at Intel Labs, and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

#4 Network and get your questions answered.
There will be two Q&A breakout sessions with top enterprise investors for founders (and anyone else) to query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it really easy to set up meetings with the other attendees, an incredible array of folks, plus the 20 early-stage startups exhibiting on the expo floor.

#5 SAP
Enterprise giant SAP is our sponsor for the show, and it is not only bringing a squad of top executives, it is producing four parallel track sessions featuring SAP Chief Innovation Officer Max Wessel, SAP Chief Designer and Futurist Martin Wezowski and SAP.iO managing director Ram Jambunathan, covering topics including how to scale up an enterprise startup, how startups win large enterprise customers, and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait till the day of to book because prices go up at the door!

We still have 2 Startup Demo Tables left. Each table comes with 4 tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!



When do kids start to care about other people's opinions? | Sara Valencia Botto


Drawing on her research into early childhood development, psychologist Sara Valencia Botto investigates when (and how) children begin to change their behaviors in the presence of others -- and explores what it means for the values we communicate in daily interactions. (Watch for cute footage of sneaky toddlers.)


YouTube Originals become ad-supported and free after September 24th


In an email distributed to YouTube Premium subscribers, the company confirmed that access to YouTube’s original programming will no longer be exclusive to Premium customers after September 24th, 2019. Instead, many of YouTube’s Original series, movies, and live events will be offered to all YouTube viewers for free, supported by ads. Premium members, however, can watch the content ad-free.

In addition, Premium subscribers will have access to all the available episodes in a series right when they premiere, says YouTube, and they’ll be able to download them for offline viewing.

There will also continue to be some exclusive subscriber-only content, in the form of things like director’s cuts and extra scenes from YouTube Originals.

YouTube had previously announced its plans to make its original programming available for free back in May, following a larger shift in strategy for the video platform. According to a Deadline report from last November, YouTube had been reassessing its scripted development plans with a goal of refocusing on unscripted shows and specials. It had also stopped taking new scripted pitches.

The company had found some success with scripted content, the report noted — like Cobra Kai, which at the time had 100 million views and a 100% Rotten Tomatoes score. But the company was also finding success with celebrity content, like Katy Perry: Will You Be My Witness and Will Smith’s Grand Canyon bungee stunt, for example.

This is the direction YouTube may be aiming to pursue next, Deadline had said.

Perhaps not coincidentally, Variety recently reported on a new crowdfunding service for YouTube creators, Fundo, which allows stars to invite fans to virtual meet & greet sessions and other paid online events. However, this project is not from YouTube or Google itself, but rather from Google’s in-house incubator Area 120, which operates more independently. That said, it reflects YouTube’s larger interest in the creation of new revenue streams for creators beyond ads and subscriptions.

Along with the news of the changes to YouTube Originals, the email to Premium subscribers also alerted them to the addition of a “Recommended Downloads” feature on the Library tab, which lets them browse and download videos from YouTube’s algorithmic suggestions. And it noted YouTube Music changes, like the ability to switch between video and audio and the launch of “smart downloads” which automatically download up to 500 songs from Liked Songs and other favorite playlists and albums.



YC’s Earth AI closes funding for its platform to make mining less wasteful


Discovering and drilling for the minerals used by industry and the technology sector remains incredibly important, as existing mines are becoming depleted. If the mining industry can’t become more efficient at finding these deposits, then more unnecessary, harmful drilling and exploration takes place. Applying AI to this problem would seem like a no-brainer for the environment.

Andreessen Horowitz knows this, as they invested in KoBold Metals. GoldSpot Discoveries is a competitor.

Now joining this field is Earth AI, a mineral-targeting startup using AI to predict the location of new ore bodies far more cheaply, faster, and with more precision (it claims) than previous methods.

It has now closed a funding round of ‘up to’ $2.5 million from Gagarin Capital, a VC firm specializing in AI, and Y Combinator, as part of the latter’s latest cohort announced this week. Previously, Earth AI had raised $1.7 million across two seed rounds from the Australian VCs AirTree Ventures and Blackbird Ventures, as well as angel investors.

The startup uses machine learning techniques on global data, including remote sensing, radiometry, geophysical and geochemical datasets, to learn the data signatures related to industrial metal deposits (from gold, copper, and lead to rare earth elements), train a neural network, and predict where high-value mineral prospects will be.

In particular, the platform was used to discover a deposit of vanadium, which is used to build vanadium redox batteries for large industrial applications. Finding such deposits faster using AI means the planet can benefit from battery technology sooner.

In 2018, Earth AI field-tested remote unexplored areas and claims to have generated a 50X better success rate than traditional exploration methods, while spending on average $11,000 per prospect discovery. In Australia, for instance, companies often spend several million dollars to arrive at the same result.

Jared Friedman, a Y Combinator partner, commented in a statement: “The possibility of discovering new mineral deposits with AI is a fascinating and thought-provoking idea. Earth AI has the potential not just to become an incredibly profitable company, but to reduce the cost of the metals we need to build our civilization, and that has huge implications for the world.”

“Earth AI is taking a novel approach to a large and important industry — and that approach is already showing tremendous promise,” said Mikhail Taver, partner at Gagarin Capital.

Earth AI was founded by Roman Tesyluk, a geoscientist with eight years of mineral exploration and academic experience. Prior to starting Earth AI, he was a PhD Candidate at The University of Sydney, Australia and obtained a Master’s degree in Geology from Ivan Franko University, Ukraine. “EARTH AI has huge ambitions, and this funding round will supercharge us towards reaching our milestones,” he said.

This latest investment from Gagarin Capital joins a line of other AI-based products and services and investments it’s made into YC companies, such as Wallarm, Gosu.AI and CureSkin. Gagarin’s exits include MSQRD (acquired by Facebook), and AIMatter (acquired by Google).



Disney+ comes to Canada and the Netherlands on Nov. 12, will support nearly all major platforms at launch


Disney+ will have an international launch that begins at the same time as its rollout in the U.S., Disney revealed. The company will launch its digital streaming service in Canada and the Netherlands on November 12, and will bring it to Australia and New Zealand the following week. The streaming service will also support virtually every device and operating system from day one.

Disney+ will be available on iOS, Apple TV, Google Chromecast, Android, Android TV, PlayStation 4, Roku, and Xbox One at launch, which is pretty much an exhaustive list of everywhere someone might want to watch it, leaving aside some smaller proprietary smart TV systems. That, combined with the day-and-date global markets, should be a clear indicator that Disney wants its service to be available to as many customers as possible, as quickly as possible.

Through Apple’s iPhone, iPad and Apple TV devices, customers will be able to subscribe via in-app purchase. Disney+ will also be fully integrated with Apple’s TV app, which is getting an update in iOS 13 in hopes of becoming even more useful as a central hub for all a user’s video content. The one notable exception on the list of supported devices and platforms is Amazon’s Fire TV, which could change closer to launch depending on negotiations.

In terms of pricing, the service will run $8.99 per month or $89.99 per year in Canada, and €6.99 per month (or €69.99 per year) in the Netherlands. In Australia, it’ll be $8.99 per month or $89.99 per year, and in New Zealand, it’ll be $9.99 per month or $99.99 per year. All prices are in local currency.

That compares pretty well with the $6.99 per month (or $69.99 yearly) asking price in the U.S., and undercuts the Netflix pricing in those markets, too. This is just the Disney+ service on its own, however, not the combined bundle that includes ESPN Plus and Hulu for $12.99 per month, which is probably more comparable to Netflix in terms of breadth of content offering.




Sonos Bluetooth-enabled, battery-powered speaker leaks ahead of official launch


Sonos has an event coming up at the end of the month to reveal something new, but leaks have pretty much given away what’s likely to be the highlight announcement at the event: A new, Bluetooth-enabled speaker that has a built-in battery for portable power.

The speaker originally leaked earlier this month, with Dave Zatz showing off a very official-looking image, and The Verge reporting some additional details, including a toggle switch for moving between Bluetooth and Wi-Fi modes, a USB-C port for charging, and rough dimensions that peg it as a little bit bigger than the existing Sonos One.

Now, another leak from Win Future has revealed yet more official-looking images, including a photo of the device with its apparent dock, which provides contact charging. The site also says the new speaker will be called the ‘Sonos Move,’ which makes a lot of sense, given it’ll be the only one that can actually move around and still maintain functionality while portable.


Here’s a TL;DR of what we know so far, across all the existing leaks:

  • Can stream via Wi-Fi (works with your Sonos network like other Sonos speakers) and Bluetooth (direct pairing with devices), with Bluetooth LE included for easier setup
  • USB-C port for power and Ethernet port for connectivity
  • Similar design to Sonos One, with more rounded corners, but wider and taller (likely to allow room for integrated battery)
  • Built-in handle in the back for easier carrying
  • Contacts on bottom for docked charging (as alternative to USB-C)
  • Supports Alexa and Google Assistant and has integrated mic (neither available via Bluetooth mode, however)
  • Supports AirPlay 2
  • Offers ‘Auto Trueplay,’ which automatically tunes the speaker’s sound to your space using the onboard mic

No word yet on official availability or pricing, but it’s reasonable to expect that it’ll arrive sometime this fall, following that late August announcement.




Minecraft to get big lighting, shadow and color upgrades through Nvidia ray tracing


Minecraft is getting a free update that brings much-improved lighting and color to the game’s blocky graphics using real-time ray tracing running on Nvidia GeForce RTX graphics hardware. The new look is a dramatic change in the atmospherics of the game, and manages to be eerily realistic while retaining Minecraft’s pixelated charm.

The ray tracing tech will be available via a free update to the game on Windows 10 PCs, but it’ll only be accessible to players using an Nvidia GeForce RTX GPU, since that’s the only graphics hardware on the market that currently supports playing games with real-time ray tracing active.

It sounds like it’ll be an excellent addition to the experience for players who are equipped with the right hardware, however – including lighting effects not only from the sun, but also from in-game materials like glowstone and lava; both hard and soft shadows, depending on the transparency of the material and the angle of light refraction; and accurate reflections in surfaces that are supposed to be reflective (gold blocks, for instance).

This is welcome news after Minecraft developer Mojang announced last week that it cancelled plans to release its Super Duper Graphics Pack, which was going to add a bunch of improved visuals to the game, because it wouldn’t work well across platforms. At the time, Mojang said it would be sharing news about graphics optimization for some platforms “very soon,” and it looks like this is what they had in mind.

Nvidia, meanwhile, is showing off a range of 2019 games with real-time ray tracing enabled at Gamescom 2019 in Cologne, Germany, including Dying Light 2, Cyberpunk 2077, Call of Duty: Modern Warfare and Watch Dogs: Legion.




Conference Question

