12 November 2019

Google’s CallJoy phone agent for small businesses gets smarter, more conversational


Earlier this year, Google’s in-house incubator launched CallJoy, a virtual customer service phone agent for small businesses that could block spammers, answer calls, provide callers with basic business information, and redirect other requests like appointment booking or to-go orders to SMS. Today, CallJoy is rolling out its first major update, which enables the phone agent to hold more of a conversation with the customer, asking questions and providing more information, among other improvements.

Originally, CallJoy could provide customers with information like the business hours or the address, or could ask the customer for permission to send them a link over text message to help them with their request. With the update, CallJoy’s phone agent can answer questions more intelligently. 

This begins with CallJoy asking the customer, “Can I help you?” The customer then responds as they usually would, and their answer allows CallJoy to offer more information than before, based on what the caller said.

For example, if a caller asked a restaurant if they had any vegetarian options, the phone agent might respond: “Yes! Our menu has vegetarian and vegan-friendly choices. Can I text you the link to our online menu?”

This isn’t all done through some magical A.I., however. Instead, the business owner has to program in the sorts of customer inquiries they want CallJoy to be able to respond to and handle. While some, like vegetarian options, may be common inquiries, it can be hard to remember everything that customers ask. That’s where CallJoy’s analytics could help.

The service already gathers call data — like phone numbers, audio and call transcripts — into an online dashboard for further analysis. Business owners can tag calls and run reports to get a better understanding of their call volume, peak call times and what people wanted to know. This information can be used to better staff their phone lines during busy times or to update their website or business listings, for example. And now it can help the business owner understand what sorts of inquiries they should train the CallJoy phone agent on, too.

Once trained, the agent can speak an answer, send a link to the customer’s phone with the information, or offer to connect the caller to the business’s phone number to reach a real person. (CallJoy offers a virtual phone number, like Google Voice, but it can ring a “real” phone line as needed to get a person on the line.)
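CallJoy hasn’t published how this matching works internally, but conceptually the owner-configured inquiries act like a lookup from expected keywords to canned responses. A purely illustrative sketch, with hypothetical keywords and answers:

```python
# Purely illustrative: CallJoy has not disclosed its implementation.
INTENTS = {
    ("vegetarian", "vegan"): (
        "Yes! Our menu has vegetarian and vegan-friendly choices. "
        "Can I text you the link to our online menu?"
    ),
    ("hours", "open", "close"): "We're open daily from 11 a.m. to 10 p.m.",
}

def respond(caller_utterance: str) -> str:
    """Return the first configured answer whose keywords appear in the call."""
    text = caller_utterance.lower()
    for keywords, answer in INTENTS.items():
        if any(word in text for word in keywords):
            return answer
    # No configured match: offer to connect the caller to a real person.
    return "Let me connect you with someone who can help."

print(respond("Do you have any vegetarian options?"))
```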

Another feature launching today will allow business owners to implement CallJoy as they see fit.

Some business owners may prefer to answer the phone themselves and speak to their customers directly, for example. But they could still take advantage of a service like this at other times — like after hours or when they’re too busy to answer. The updated version now allows them to program when CallJoy will answer, including by times of day, or after the phone rings a certain number of times, for example.

The business owner will also receive a daily email recap of everything CallJoy did, so they know how and when it was put to use.

The product to date has been aimed at small business owners, who can’t afford the more expensive customer service phone agent systems. Instead, it’s priced at a flat $39 per month.

A spokesperson for CallJoy says the service has signed up “thousands” of small businesses since its launch in May 2019, which was initially invite-only.

Google’s Area 120 incubator is a place for Google employees to try out new ideas while still operating inside Google instead of leaving for a startup. It’s considered a separate entity — some of the apps produced by Area 120 don’t even mention their Google affiliation in their App Store descriptions, for instance. CallJoy, however, has received more of a spotlight than some. It’s even being featured on Google’s main corporate blog, The Keyword, today. And if CallJoy makes the leap to Google — something that hasn’t been decided yet — it wouldn’t be the first Area 120 project to do so.

Area 120’s Touring Bird recently landed inside Google as did learn-to-code app Grasshopper and others.

We understand that joining Google is something that’s still on the table for CallJoy, but it’s not at the point of making that switch just yet.


Read Full Article

Facebook pilloried over iPhone ‘secret camera access’ bug


Facebook has faced a barrage of concern over an apparent bug that resulted in the social media giant’s iPhone app exposing the camera as users scroll through their feed.

The issue blew up over the weekend after Joshua Maddux tweeted a screen recording of the Facebook app on his iPhone. He noticed that the camera would appear behind the Facebook app as he scrolled through his social media feed.

Several users had already spotted the bug earlier in the month. One person called it “a little worrying”.

Some immediately assumed the worst — as you might expect given the long history of security vulnerabilities, data breaches and inadvertent exposures at Facebook over the past year. Just last week, the company confirmed that some developers had improperly retained access to some Facebook user data for more than a year.

Will Strafach, chief executive at Guardian Firewall, said it looked like a “harmless but creepy looking bug.”

The bug appears to only affect iPhone users running the latest iOS 13 software, and those who have already granted the app access to the camera and microphone. It’s believed the bug relates to the “story” view in the app, which opens the camera for users to take photos.

One workaround is to simply revoke the Facebook app’s camera and microphone access in iOS settings.

Despite the apparent widespread concern from users on social media, Facebook did not respond to repeated requests for comment from TechCrunch. That said, Facebook vice president of integrity Guy Rosen tweeted this morning that it “sounds like a bug” and the company was investigating.

“I guess it does say something when Facebook trust has eroded so badly that it will not get the benefit of the doubt when people see such a bug,” said Strafach.


Read Full Article

The case against Grace Hopper Celebration


We’ve heard the criticisms that there were fewer black women speakers than white men at Grace Hopper Celebration in the past, but event organizers heard our complaints and created an entire conference pathway and new grants for “women of color from underrepresented groups and women from untapped pathways.”

We feel better now that our panels include hijabi and transgender women. The work done by women of color and others to broaden our understanding of diversity and inclusion in these spaces cannot go without recognition.

But at the end of it all, my question after a long day of panels and handshakes is, why? What are we really doing here? What ideas are we planting and fostering behind our massive paywall? Are we breaking down barriers for future generations, or simply congratulating ourselves for reaching the upper echelons of women who have vaulted them? Are we pushing to change toxic systems, or asking women to change themselves to navigate them?

Who are we benefiting and elevating with our efforts?

What we can say about the majority of corporate women is that we are currently wealthy and educated. What we can say about many corporate women in the American tech sector is that we are white or Asian-American, heterosexual, abled and privileged along a plethora of other dimensions. Through most of our women in tech events, we self-select into a space where others are educated like us, or aspire to be educated like us, and erect barriers to the tune of thousands of dollars and up to a week off from work/school. Conferences tout scholarships to offset the cost of attendance for the up-and-coming generation of tech women, but oftentimes those students are required to show existing proclivities toward STEM.

Extending resources to students who already have exposure to STEM biases our outreach toward those who already have privilege; low-income schools in California are four times less likely to offer AP Computer Science A courses than high-income schools, according to an independent study done by the Kapor Center. Unfortunately, it’s hard to make a case to allocate resources any other way when these events rely on corporate sponsorship and attendance, and a business case must be made for return on investment (re: the tech talent pipeline).

The following is a (non-comprehensive) list of recommendations for improving the way we build power as women in tech:

1. Increase economic accessibility by supporting smaller conferences

Attending a conference costs more than its ticket price, so increasing accessibility must be more comprehensive than offering scholarships. Some examples of questions to ask ourselves as organizers: will attendees with mobility needs spend more than others for their travel and lodging? Are students who receive financial aid more fearful about taking days off?

At first glance, these questions seem like they can be addressed by throwing money at the problem — more scholarships for disabled and lower-income attendees, easy! But trying to level the playing field in this manner is an exercise in futility; bringing a few lucky underprivileged people into our space does little to address the underlying hierarchy. A better way to look at it is to ask how we can make the benefits available to those of us with privilege equally accessible to those with less.

Smaller, regional events usually cost less to host and attend and spread value more widely. New speakers can practice leadership, attendees can network with professionals in their local area, and students can receive more attention and mentorship. Resources move into local communities and nonprofits instead of into recruiting pipelines for tech giants. Some examples of regional conferences targeting minorities but with more granular goals are CodeNewbies, AfroTech and Take Back Tech. These are the efforts we need to support if we want to effectively grow power in our communities that don’t already have it.

2. Focus on systemic change

If every takeaway from your event is how women can change their actions, then it might be a shallow event. Women and others are not held down because we cry at work, or because we take maternity leave, but because of how those around us perceive those things. Challenging ourselves to change our perceptions is more difficult but ultimately more valuable than stifling our authentic choices and personality to be more convenient.

It’s important to ask ourselves why we, a group of traditionally mistreated professionals, are gathering. Why are we sharing our stories of vulnerability and to what end are we building our collective strength? Marginalized people coming together helps consolidate our power so that we can change the system we’re in. It’s a form of collective action — when dozens of women want maternity leave, their employer is more inclined to provide it than when one woman asks alone. When multiple women talk to each other and realize they’ve been harassed by the same co-worker, they feel empowered to do something about it. We organize and gather so we can change injustices.

Conversations where the whole room may not agree with you can be more impactful than the ones that earn you the most laughs and nods. Challenge your audience; discomfort is where we grow. If you’re holding an event for allies, make them earn the title of ally. Catch yourself when you fall to the instinct of making everyone feel good when your goal is to make a difference.

3. Support grassroots-led change instead of corporate-led change

Let’s not forget who the greatest winner is after a Women @ Qualcomm weekend, a Microsoft Women in Technology Event or Grace Hopper Celebration — the event organizer.

They recruit from the highly qualified pool of attendees while cultivating positive PR for valuing diversity, gaining much more overall than any one individual, though a single person may stand to gain from the opportunity. Companies have made a major push for students and employees from underrepresented groups to stay in the “tech talent pipeline.” As with any affirmative action, there are positive outcomes from that, but there are also studies that find the pipeline has not addressed deeper issues with workplace cultures, power asymmetries and harassment.

Put another way, companies often recruit diversity in ways that bring value to themselves without taking responsibility for the quality of life of those within the pipeline. It’s important to remind ourselves that these are not purely philanthropic goals for corporations and that recruitment and retention are to their benefit. At the very least, we’re entitled to substantive policy change in exchange for our labor.

Grassroots and community-led change is better than corporate-led change if our goal is to empower and further the opportunities for women. We must create opportunities for leadership and support efforts that truly build our strength. We should be fearless in asking for real change. By all means, do the work within the companies and within the mainstream conferences if that empowers you, but be wary of the ways that you might be keeping power in already powerful communities and keep your goals in sight. Don’t be afraid to ask why, even for things that seem to have the best of intentions. Even well-meaning systems can perpetuate harmful power dynamics if those of us within them aren’t constantly questioning and pushing back.


Read Full Article

Highlights from the 2019 Google AI Residency Program




This fall marks the successful conclusion to the fourth year of the Google AI Residency Program. Started in 2016 with 27 individuals in Mountain View, CA, the 12-month program has grown to nearly 100 residents from nine locations across the globe. Program participants have gone on to great success in PhD programs, academia, non-profits, and industry. Many have also become full-time Google researchers.

The program’s latest installment was our most successful yet, as residents advanced progress in a broad range of research fields, such as machine perception, algorithms and optimization, language understanding, healthcare and many more. Below are a handful of innovative projects from some of this year’s alumni.
  • A large-scale study on cross-lingual transfer in massive multilingual neural machine translation models (recently highlighted as part of this post), trained on billions of sentence pairs from more than 100 languages in order to significantly improve translation for both low- and high-resource languages.
    [Figure: Visualization of the clustering of encoder representations of all modeled languages, based on representational similarity. Encoder representations of different languages cluster according to linguistic similarity; languages are color-coded by their linguistic family.]
  • A generative model for Scalable Vector Graphics (SVGs), which can be used to aid designers in generating fonts.
    [Figure: Top: Unlike pixel representations of icons (right), in this case a "6", SVGs (left; middle) are scale-invariant representations. Bottom: By modelling SVGs directly, we can aid artists in quickly and intuitively iterating over typography designs.]
  • A method to learn GANs using discrepancy divergence, a measure that accounts for both the loss function and hypothesis set to provide theoretical learning guarantees.
    [Figure: As more generators are added to the DGAN ensemble, more modes of the real distribution are covered. From left to right: 1 generator, 5 generators and 10 generators.]
  • A likelihood ratio method for deep generative models that effectively corrects for confounding background statistics to improve out-of-distribution (OOD) detection, and a new benchmark dataset for OOD detection in genomics.
    [Figure: Log-likelihood (left) and log likelihood-ratio (right) of each pixel for Fashion-MNIST. The likelihood is dominated by the “background” pixels, whereas the likelihood ratio focuses on the “semantic” pixels and is thus better for OOD detection.]
  • A study showing when label smoothing helps, focusing on its impact on the calibration of predictions, the representations learned by the penultimate layer and the effectiveness of knowledge distillation (a minimal sketch of label smoothing follows this list).
    [Figure: 2D projection of representations of three CIFAR-100 classes. Without label smoothing, examples are spread out; with label smoothing, each example is encouraged to be equally distant from the clusters of the other classes, attenuating intra-class variation and inter-class similarity structure.]
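Label smoothing itself amounts to a one-line change to the training targets. A minimal sketch in NumPy, with ε = 0.1 as an illustrative default rather than a value from the study:

```python
import numpy as np

def smooth_labels(one_hot: np.ndarray, epsilon: float = 0.1) -> np.ndarray:
    """Mix one-hot targets with the uniform distribution over K classes,
    so the true class gets 1 - epsilon + epsilon/K and every other class
    gets epsilon/K instead of zero."""
    k = one_hot.shape[-1]
    return one_hot * (1.0 - epsilon) + epsilon / k

# A 3-class one-hot target [0, 1, 0] becomes roughly [0.033, 0.933, 0.033].
print(smooth_labels(np.array([0.0, 1.0, 0.0])))
```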
The successes of our AI residents go beyond academic publishing. Their achievements include:
  • Organizing a workshop, bringing together experts in theoretical physics and deep learning, to explore how tools from physics can shed light on the theory of deep learning.
  • Founding Queer in AI, an organization for fostering a community of queer researchers and raising awareness of queer issues in AI/ML.
  • Organizing a hands-on TensorFlow tutorial on using Deep Learning for Natural Language Processing.
  • Automatically learning neural net architectures with AdaNet, an open-source, TensorFlow-based framework.
  • Developing Coconet, the model behind the first AI-powered Doodle (created to celebrate renowned German composer and musician Johann Sebastian Bach).
Also, beginning with the next program cycle, residents will be hosted for a duration of 12 months, with the option of extending up to 18 months! This exciting shift comes as part of our effort to improve the overall program experience and outcomes for residents as the program continues to grow and scale.

If you are interested in joining our fifth cohort, applications for the 2020 Google AI Residency program are now open! Visit g.co/airesidency/apply for more information on how to apply. Please submit your application as soon as possible, as we will be considering candidates on a rolling basis. Please see g.co/airesidency for more resident profiles, past resident publications, blog posts and stories. We can’t wait to see where the next year will take us, and hope you’ll consider joining our research teams across the world!

The Visual Task Adaptation Benchmark




Deep learning has revolutionized computer vision, with state-of-the-art deep networks learning useful representations directly from raw pixels, leading to unprecedented performance on many vision tasks. However, learning these representations from scratch typically requires hundreds of thousands of training examples. This burden can be reduced by using pre-trained representations, which have become widely available through services such as TensorFlow Hub (TF Hub) and PyTorch Hub. But their ubiquity can itself be a hindrance. For example, for the task of extracting features from images, there can be over 100 models from which to choose. It is hard to know which methods provide the best representations, since different sub-fields use different evaluation protocols, which do not always reflect the final performance on new tasks.

The overarching goal of representation research is to learn representations a single time on large amounts of generic data without the need to train them from scratch for each task, thus reducing data requirements across all vision tasks. But in order to reach that goal, the research community must have a uniform benchmark against which existing and future methods can be evaluated.

To address this problem, we are releasing "The Visual Task Adaptation Benchmark" (VTAB, available on GitHub), a diverse, realistic, and challenging representation benchmark based on one principle — a better representation is one that yields better performance on unseen tasks, with limited in-domain data. Inspired by benchmarks that have driven progress in other fields of machine learning (ML), such as ImageNet for natural image classification, GLUE for Natural Language Processing, and Atari for reinforcement learning, VTAB follows similar guidelines: (i) minimal constraints on solutions to encourage creativity; (ii) a focus on practical considerations; and (iii) challenging tasks for evaluation.

The Benchmark
VTAB is an evaluation protocol designed to measure progress towards general and useful visual representations, and consists of a suite of evaluation vision tasks that a learning algorithm must solve. These algorithms may use pre-trained visual representations to assist them and must satisfy only two requirements:
    i) They must not be pre-trained on any of the data (labels or input images) used in the downstream evaluation tasks.
    ii) They must not contain hardcoded, task-specific logic. Alternatively put, the evaluation tasks must be treated like a test set — unseen.
These constraints ensure that solutions that are successful when applied to VTAB will be able to generalize to future tasks.

The VTAB protocol begins with the application of an algorithm (A) to a number of independent tasks, drawn from a broad distribution of vision problems. The algorithm may be pre-trained on upstream data to yield a model that contains visual representations, but it must also define an adaptation strategy that consumes a small training set for each downstream task and returns a model that makes task-specific predictions. The algorithm’s final score is its average test score across tasks.
[Figure: The VTAB protocol. Algorithm A is applied to many tasks T, drawn from a broad distribution of vision problems P_T. In the example, pet classification, remote sensing and maze localization are shown.]
VTAB includes 19 evaluation tasks that span a variety of domains, divided into three groups — natural, specialized, and structured. Natural image tasks include images of the natural world captured through standard cameras, representing generic objects, fine-grained classes, or abstract concepts. Specialized tasks utilize images captured using specialist equipment, such as medical images or remote sensing. The structured tasks often derive from artificial environments that target understanding of specific changes between images, such as predicting the distance to an object in a 3D scene (e.g., DeepMind Lab), counting objects (e.g., CLEVR), or detecting orientation (e.g., dSprites for disentangled representations).

While highly diverse, all of the tasks in VTAB share one common feature — people can solve them relatively easily after training on just a few examples. To assess algorithmic generalization to new tasks with limited data, performance is evaluated using only 1000 examples per task. Evaluation using the full dataset can be performed for comparison with previous publications.
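Concretely, the protocol boils down to an adapt-then-evaluate loop averaged over tasks. Below is a minimal sketch; the algorithm/task interface here is hypothetical, and the actual harness on GitHub differs in detail:

```python
def vtab_score(algorithm, tasks, budget=1000):
    """Sketch of the VTAB protocol: adapt on a small per-task training set,
    then report the average test score across all tasks."""
    scores = []
    for task in tasks:                          # the 19 downstream tasks
        train_set = task.sample_train(budget)   # 1000 labeled examples each
        model = algorithm.adapt(train_set)      # may use pre-trained weights
        scores.append(model.evaluate(task.test_set))
    return sum(scores) / len(scores)
```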

Findings Using VTAB
We performed a large-scale study testing a number of popular visual representation learning algorithms against VTAB. The study included generative models (GANs and VAEs), self-supervised models, semi-supervised models and supervised models. All of the algorithms were pre-trained on the ImageNet dataset. We also compared each of these approaches using no pre-trained representations, i.e., training “from-scratch”. The figure below summarizes the main pattern of results.
[Figure: Performance of different classes of representation learning algorithms across different task groups: natural, specialized and structured. Each bar shows the average performance of all methods in that class across all tasks in the group.]
Overall we find that generative models do not perform as well as the other methods, even worse than from-scratch training. However, self-supervised models perform much better, significantly outperforming from-scratch training. Better still is supervised learning using the ImageNet labels. Interestingly, while supervised learning is significantly better on the Natural group of tasks, self-supervised learning is close on the other two groups whose domains are more dissimilar to ImageNet.

The best performing representation learning algorithm, of those we tested, is S4L, which combines both supervised and self-supervised pre-training losses. The figure below contrasts S4L with standard supervised ImageNet pre-training. S4L appears to improve performance particularly on the Structured tasks. However, on groups other than the Natural tasks, representation learning yields a much smaller benefit over from-scratch training, indicating that much progress is still required to attain a universal visual representation.
[Figure: Top: Performance of S4L versus from-scratch training; each bar corresponds to a task, and positive values indicate tasks where S4L outperforms from-scratch training. Bottom: S4L versus supervised training on ImageNet; positive values indicate that S4L performs better. Bar colour indicates the task group: red = Natural, green = Specialized, blue = Structured. Additional self-supervision tends to help on structured tasks beyond just using ImageNet labels.]
Summary
The code to run VTAB is available on GitHub, including the 19 evaluation datasets and exact data splits. Having a publicly available set of benchmarks ensures the reproducibility of results. Progress is tracked with the public leaderboard, and the models evaluated are uploaded to TF Hub for public use and reproduction. A shell script is provided to perform adaptation and evaluation on all the tasks, with a standardized evaluation protocol making VTAB readily accessible across the industry. Since VTAB can be executed on both TPU and GPU, it is highly efficient. One can obtain comparable results with a single NVIDIA Tesla P100 accelerator in a few hours.

The Visual Task Adaptation Benchmark has helped us better understand which visual representations generalize to a broad spectrum of vision tasks, and provides direction for future research. We hope these resources are useful in driving progress toward general and practical visual representations and, as a result, bring deep learning to the long tail of vision problems with limited labelled data.

Acknowledgements
The core team behind this work includes Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, and Sylvain Gelly.

Small rockets are the next space revolution | Peter Beck


We're in the dawn of a new space revolution, says engineer Peter Beck: the revolution of the small. In a talk packed with insights into the state of the space industry, Beck shares his work building rockets capable of delivering small payloads to space rapidly and reliably -- helping us search for extraterrestrial life, learn more about the solar system and create a global internet network.


Mozilla partners with Intel, Red Hat and Fastly to take WebAssembly beyond the browser


Mozilla, Intel, Red Hat and Fastly today announced the launch of the Bytecode Alliance, a new open-source group that focuses on “creating new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI).”

Mozilla has long championed WebAssembly, the open standard that allows browsers to execute compiled programs in the browser. This allows developers to write their applications in languages like C, C++ and Rust and have those programs execute at native speed, all without having to rely on JavaScript, which would take much longer to parse and execute, especially on mobile devices.

Today, support for WebAssembly is part of all of the major browser engines. Companies like Figma and Autodesk have experimented with it or are using it in production. I do not get the sense that mass adoption of the technology is near, though, and the barrier to entry is high for most developers. Indeed, today’s announcement probably marks the first time I’ve heard about WebAssembly this year.

The mission of this new group goes beyond the browser, though. It wants to establish “a capable, secure platform that allows application developers and service providers to confidently run untrusted code, on any infrastructure, for any operating system or device, leveraging decades of experience doing so inside web browsers.” The argument here is that there is plenty of potential for WebAssembly outside of the browser because it allows untrusted code components to interact with trusted code inside of a sandboxed environment. Indeed, a Mozilla spokesperson noted that WebAssembly has generated more interest from businesses drawn to this use case than from traditional application developers and web technologists. Hence this new alliance.

When Mozilla and others launched the WebAssembly format, Microsoft and Google were also part of that group. They are not members of the new Bytecode Alliance, though.

Some of the code that the various members are contributing to the Alliance includes Wasmtime, a runtime for WebAssembly and WASI, as well as Fastly’s Lucet, Intel’s WebAssembly Micro Runtime and the code generator Cranelift.
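As a concrete taste of WebAssembly outside the browser, here is a minimal sketch using Wasmtime’s Python bindings (the wasmtime package on PyPI); the toy module and host code are illustrative assumptions, not code from the Alliance’s repositories:

```python
# Minimal sketch: running a WebAssembly module in a host program with
# Wasmtime's Python bindings (pip install wasmtime).
from wasmtime import Store, Module, Instance

store = Store()

# A toy module in WebAssembly text format that exports add(a, b) = a + b.
module = Module(store.engine, """
(module
  (func (export "add") (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
""")

instance = Instance(store, module, [])  # no imports needed
add = instance.exports(store)["add"]
print(add(store, 2, 3))  # prints 5
```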

“WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers,” explained Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly. “This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way.”

 


Read Full Article

S’More is a new dating app that looks to suspend physical attraction for something more


According to former Chappy Managing Director Adam Cohen Aslatei, “something more” is one of the most common pieces of feedback that dating apps get from their users. That’s where S’More comes in.

S’More was founded by Cohen Aslatei to provide users with a dating app that goes beyond superficial looks.

Here’s how it works:

Rather than scrolling through a feed and swiping left and right, users are served five suggested profiles each day. Unlike other dating apps, user profiles on S’More consist of icons, rather than pictures and text, which reveal characteristics about the profile’s owner. For example, a user might put that they’re seeking romance, interested in hiking, and got an education from this or that university, all in the form of little tile icons.

When a user interacts with those icons — S’More calls this a ‘wink’ — more visual pieces of the profile start to unblur and unlock, revealing a profile photo and unlocking the person’s social media feeds, etc.

These interactions also unlock the ability to have a conversation, if they’re reciprocated, which creates a match.

As users continue to interact with others on the platform, S’More learns about what they’re looking for in a relationship and optimizes for those factors when suggesting other profiles.

“The greatest challenge is resetting expectations for consumers,” said Cohen Aslatei. “We know that the swiping mechanism largely doesn’t work, but we’re providing another option which is, if you truly want to get to know someone, suspend physical judgement before you decide if you like them.”

The company plans to generate revenue through a freemium model, charging users extra to access a Discover page on the app, allowing them to interact with and save more profiles than the allotted five per day.

Moreover, S’More asks all users to rank one another, not as prospective mates but as users of the platform. The hope is that the public-facing user rating promotes a healthy, safe environment for all users to meet and connect without the abuse that’s so common on dating apps. Ratings are also determined by a user’s activity on the platform and how complete their profile is.

The company also requires that users who register take a selfie for ID verification right at the point of signing up.

S’More is launching in beta in Boston and the D.C. area, with plans to launch in New York soon.


Read Full Article

No one knows how effective digital therapies are, but a new tool from Elektra Labs aims to change that


Depending on which study you believe, the wearable and digital health market could be worth anywhere from $30 billion to nearly $90 billion in the next six years.

If the numbers around the size of the market are a moving target, just think about how to gauge the validity and efficacy of the products that are behind all of those billions of dollars in spending.

Andy Coravos, the co-founder of Elektra Labs, certainly has.

Coravos, whose parents were a dentist and a nurse practitioner, has been thinking about healthcare for a long time. After a stint in private equity and consulting, she took a coding bootcamp and returned to the world she was raised in by taking an internship with the digital therapeutics company, Akili Interactive.

Coravos always thought she wanted to be in healthcare, but there was one thing holding her back, she says. “I’m really bad with blood.”

That’s why digital therapeutics made sense. The stint at Akili led to a position at the U.S. Food and Drug Administration as an entrepreneur in residence, which led to the creation of Elektra Labs roughly two years ago.

Now the company is launching Atlas, which aims to catalog the biometric monitoring technologies that are flooding the consumer health market.

These monitoring technologies, and the applications layered on top of them, have profound implications for consumer health, but there’s been no single place to gauge how effective they are, or whether the suggestions they’re making about how their tools can be used are even valid. Atlas and Elektra are out to change that. 

The FDA has been accelerating its clearances for software-driven products like the atrial fibrillation detection algorithm on the Apple Watch and the ActiGraph activity monitors. And big pharma companies like Roche, Pfizer and Novartis have been investing in these technologies to collect digital biomarker data and improve clinical trials.

Connected technologies could provide better care, but the technologies aren’t without risks. Specifically, the accuracy of the data and the potential for bias in algorithms created using flawed datasets mean that there’s a lot of oversight that still needs to be done, and consumers and pharmaceutical companies need a source of easily accessible data about the industry.

“The increase in FDA clearances for digital health products coupled with heavy investment in technology has led to accelerated adoption of connected tools in both clinical trials and routine care. However, this adoption has not come without controversy,” said Coravos, co-founder and CEO of Elektra Labs, in a statement. “During my time as an Entrepreneur in Residence in the FDA’s Digital Health Unit, it became clear to me that, like pharmacies which review, prepare, and dispense drug components, our healthcare system needs infrastructure to review, prepare, and dispense connected technology components.”

The analogy to a pharmacy isn’t an exact fit, because Elektra Labs currently doesn’t prepare or dispense any of the treatments that it reviews. But Atlas is clearly the first pillar that the digital therapeutics industry needs as it looks to supplant pharmaceuticals as treatments for some of the largest and most expensive chronic conditions (like diabetes).

Coravos and her team interviewed more than 300 professionals as they built the Atlas toolkit for pharmaceutical companies and other healthcare stakeholders seeking a one-stop shop for all of their digital healthcare data needs. Like a drug or nutrition label, Atlas publishes labels that highlight issues around the usability, validation, utility, security and data governance of a product.

In an article in Quartz earlier this year, Coravos made her pitch for Elektra Labs and the types of things it would monitor for the nascent digital therapeutics industry. It includes the ability to handle adverse events involving digital therapies by providing a single source where problems could be reported; a basic description for consumers of how the products work; an assessment of who should actually receive digital therapies, based on the assessment of how well certain digital products perform with certain users; a description of a digital therapy’s provenance and how it was developed; a database of the potential risks associated with the product; and a record of the product’s security and privacy features.

As the projections on market size show, the problem isn’t going to get any smaller. As Google’s recent acquisition bid for Fitbit and the company’s reported partnership with Ascension on “Project Nightingale” to collect and digitize more patient data show, the intersection of technology and healthcare is a huge opportunity for technology companies.

“Google is investing more. Apple is investing more… More and more of these devices are getting FDA cleared and they’re becoming not just wellness tools but healthcare tools,” says Coravos of the explosion of digital devices pitching potential health and wellness benefits.

Elektra Labs is already working with undisclosed pharmaceutical companies to map out the digital therapeutic environment and identify companies that might be appropriate partners for clinical trials or acquisition targets in the digital market.

“The FDA is thinking about these digital technologies, but there were a lot of gaps,” says Coravos. And those gaps are what Elektra Labs is designed to fill. 

At its core, the company is developing a catalog of the digital biomarkers that modern sensing technologies can track and how effective different products are at providing those measurements. The company is also on the lookout for peer-reviewed published research or any clinical trial data about how effective various digital products are.

Backing Coravos and her vision for the digital pharmacy of the future are venture capital investors including Maverick Ventures, Arkitekt Ventures, Boost VC, Founder Collective, Lux Capital, SV Angel, and Village Global.

Alongside several angel investors, including founders and chief executives from companies such as PillPack, Flatiron Health, National Vision, Shippo, Revel and Verge Genomics, the venture investors pitched in for a total of $2.9 million in seed funding for Coravos’ latest venture.

“Timing seems right for what Elektra is building,” wrote Brandon Reeves, an investor at Lux Capital, which was one of the first institutional investors in the company. “We have seen the zeitgeist around privacy data in applications on mobile phones and now starting to have the convo in the public domain about our most sensitive data (health).”

If the validation of efficacy is one key tenet of the Atlas platform, then security is the other big emphasis of the company’s digital therapeutic assessment.  Indeed, Coravos believes that the two go hand-in-hand. As privacy issues proliferate across the internet, Coravos believes that the same troubles are exponentially compounded by internet-connected devices that are monitoring the most sensitive information that a person has — their own health records.

In an article for Wired, Coravos wrote:

Our healthcare system has strong protections for patients’ biospecimens, like blood or genomic data, but what about our digital specimens? Due to an increase in biometric surveillance from digital tools—which can recognize our face, gait, speech, and behavioral patterns—data rights and governance become critical. Terms of service that gain user consent one time, upon sign-up, are no longer sufficient. We need better social contracts that have informed consent baked into the products themselves and can be adjusted as user preferences change over time.

We need to ensure that the industry has strong ethical underpinning as it brings these monitoring and surveillance tools into the mainstream. Inspired by the Hippocratic Oath—a symbolic promise to provide care in the best interest of patients—a number of security researchers have drafted a new version for Connected Medical Devices.

With more effective regulations, increased commercial activity, and strong governance, software-driven medical products are poised to change healthcare delivery. At this rate, apps and algorithms have the opportunity to augment doctors and complement—or even replace—drugs sooner than we think.


Read Full Article

Dutch court orders Facebook to ban celebrity crypto scam ads after another lawsuit


A Dutch court has ruled that Facebook can be required to use filter technologies to identify and pre-emptively take down fake ads linked to cryptocurrency scams that carry the image of media personality John de Mol and other well-known celebrities.

The Dutch celebrity filed a lawsuit against Facebook in April over the misappropriation of his and other celebrities’ likenesses to shill Bitcoin scams via fake ads run on its platform.

In an immediately enforceable preliminary judgement today the court has ordered Facebook to remove all offending ads within five days, and provide data on the accounts running them within a week.

Per the judgement, victims of the crypto scams had reported a total of €1.7 million (~$1.8M) in damages to the Dutch government at the time of the court summons.

The case is similar to a legal action instigated by UK consumer advice personality Martin Lewis last year, when he announced defamation proceedings against Facebook — also for misuse of his image in fake ads for crypto scams. Lewis withdrew the suit at the start of this year after Facebook agreed to apply new measures to tackle the problem: namely, a scam ads report button. It also agreed to provide funding to a UK consumer advice organization to set up a scam advice service.

In the de Mol case the lawsuit was allowed to run its course — resulting in today’s preliminary judgement against Facebook.

It’s not yet clear whether the company will appeal but in the wake of the ruling Facebook has said it will bring the scam ads report button to the Dutch market early next month.

In court, the platform giant sought to argue that it could not more proactively remove the Bitcoin scam ads containing celebrities’ images on the grounds that doing so would breach EU law against general monitoring conditions being placed on Internet platforms.

However the court rejected that argument, citing a recent ruling by Europe’s top court related to platform obligations to remove hate speech, also concluding that the specificity of the requested measures could not be classified as ‘general obligations of supervision’.

It also rejected arguments by Facebook’s lawyers that restricting the fake scam ads would be restricting the freedom of expression of a natural person, or the right to be freely informed — pointing out that the ‘expressions’ involved are aimed at commercial gain, as well as including fraudulent practices.

Facebook also sought to argue it is already doing all it can to identify and take down the fake scam ads — saying too that its screening processes are not perfect. But the court said there’s no requirement for 100% effectiveness for additional proactive measures to be ordered.

Its ruling further notes a striking reduction in fake scam ads using de Mol’s image since the lawsuit was announced.

Facebook’s argument that it’s just a neutral platform was also rejected, with the court pointing out that its core business is advertising. It also took the view that requiring Facebook to apply technically complicated measures and extra effort, including in terms of manpower and costs, to more effectively remove offending scam ads is not unreasonable in this context.

The judgement orders Facebook to remove fake scam ads containing celebrity likenesses from Facebook and Instagram within five days of the order — with a penalty of €10k for each day that Facebook fails to comply, up to a maximum of €1M (~$1.1M).

The court order also requires that Facebook provide data to the affected celebrity on the accounts that had been misusing their likeness within seven days of the judgement, with a further penalty of €1k per day for failure to comply, up to a maximum of €100k.

Facebook has also been ordered to pay the case costs.

Responding to the judgement in a statement, a Facebook spokesperson told us:

We have just received the ruling and will now look at its implications. We will consider all legal actions, including appeal. Importantly, this ruling does not change our commitment to fighting these types of ads. We cannot stress enough that these types of ads have absolutely no place on Facebook and we remove them when we find them. We take this very seriously and will therefore make our scam ads reporting form available in the Netherlands in early December. This is an additional way to get feedback from people, which in turn helps train our machine learning models. It is in our interest to protect our users from fraudsters and when we find violators we will take action to stop their activity, up to and including taking legal action against them in court.

One legal expert describes the judgement as “pivotal”. Law professor Mireille Hildebrandt told us that it provides an alternative legal route for Facebook users to litigate and pursue collective enforcement of European personal data rights, rather than suing for damages, which entails a high burden of proof.

Injunctions are faster and more effective, Hildebrandt added.

The judgement also raises questions around the burden of proof for demonstrating Facebook has removed scam ads with sufficient (increased) accuracy; and what specific additional measures it might deploy to improve its takedown rate.

The introduction of the ‘report scam ad button’, though, does provide one clear avenue for measuring takedown performance.

The button was finally rolled out to the UK market in July. And while Facebook has talked since the start of this year about ‘envisaging’ introducing it in other markets, it hasn’t exactly been proactive in doing so, until now, with this court order.


Read Full Article

Instagram Stories launches TikTok clone Reels in Brazil


Instagram is launching a video-music remix feature to finally fight back against Chinese social rival TikTok. Instagram Reels lets you make 15-second video clips set to music and share them as Stories, with the potential to go viral on a new Top Reels section of Explore. Just like TikTok, users can soundtrack their Reels with a huge catalog of music, or borrow the audio from anyone else’s video to create a remix of their meme or joke.

Reels is launching today on iOS and Android, but only in Brazil, where it’s called Cenas. Reels leverages all of Instagram’s most popular features to frankenstein together a remarkably coherent competitor to TikTok’s rich features and its community of 1.5 billion monthly users, including 122 million in the US, according to Sensor Tower. Instead of trying to start from scratch like Facebook’s Lasso, Instagram could cross-promote Reels heavily to its own billion users.

But Instagram’s challenge will be retraining its populace to make premeditated, storyboarded social entertainment instead of just spontaneous, autobiographical social media like with Stories and feed posts.

“I think Musically before TikTok, and TikTok, deserve a ton of credit for popularizing this format,” admits Instagram director of product management Robby Stein. That’s nearly verbatim what Instagram founder Kevin Systrom told me about Snapchat when Instagram launched Stories. “They deserve all the credit,” he said before copying Snapchat so ruthlessly that it stopped growing for three years.

Chinese startups were always criticized for copying American companies, but Reels’ launch signals the grand shift to cloning in the opposite direction.

Yet Stein insists, “No two products are exactly the same, and at the end of the day sharing video with music is a pretty universal idea we think everyone might be interested in using. The focus has been on how to make this a unique format for us.” The key to that divergence? “Your friends are already all on Instagram. I think that’s only true of Instagram.”

Throwing Instagram’s Weight Around

Starting in Brazil before potentially rolling out elsewhere could help Instagram nail down its customization and onboarding strategy. Luckily, Brazil has a big Instagram population, a deeply musical culture, and a thriving creator community, says Stein.

It also isn’t completely obsessed with TikTok yet like fellow developing market India. As Facebook CEO Mark Zuckerberg said about trying to grow Lasso, “We’re trying to first see if we can get it to work in countries where TikTok is not already big.” Instagram used this internationalization strategy to make Stories a hit where Snapchat hadn’t expanded yet, and it worked surprisingly well.

Instagram also has the US government on its side for a change. While its parent company Facebook is being investigated for anti-trust and privacy violations, TikTok is also under scrutiny.

Chinese tech giant ByteDance’s $1 billion 2017 acquisition of Musical.ly, another Chinese app similar to TikTok but with traction in the US, is under review by the Committee on Foreign Investment in the United States. ByteDance turned Musical.ly into TikTok, but it could have to unwind the acquisition or make other concessions to US regulators to protect the country’s national security. Several Senators have also railed against TikTok injecting Chinese social values via censorship into the American discourse.

Perhaps Instagram’s best shot at differentiation is through its social graph. While TikTok is primarily a feed broadcasting app, Instagram can work Reels into its Close Friends and Direct messaging features, potentially opening a new class of creators — shy ones who only want to share with people they trust not to make fun of them. A lot of this lipsyncing / dancing / humor skit content can be kinda cringey when people don’t get it just right.

How Instagram Reels Works

Users will find it in the Instagram Stories shutter modes tray next to Boomerang and Super-Zoom. They can either record with silence, borrow the audio of another video they find through hashtag search or Explore, or search a popular or trending song. Some audio snippets will even get their own pages showing off top videos made with them. Teaching users to poach audio for their remixes will be essential to getting Reels off the ground.

Facebook’s enormous music collection, secured from all the major labels and many indie publishers, powers Reels. Users pick the chunk of the song they want, and can then record or upload multiple video clips to fill out their Reel. Instagram has been building towards this moment since June 2018, when it first launched its Music stickers.

Instagram is adding some much-needed editing tools for Reels like timed captions so words appear in certain scenes, and a ghost overlay option for lining up transitions so they look fluid. Still, Reels lacks some of the video filters and special effects that TikTok has purposefully built to power certain gags and cuts between scenes. Stein says those are coming though.

Once users are satisfied with their editing job, they can post their Reel to Stories, Close Friends, or message it to people. If shared publicly, it will also be eligible to appear in the Top Reels section of the Explore tab. Most cleverly, Instagram works around its own ephemerality by letting users add their Reels to their profile’s non-disappearing Highlights for a shot to show up on Explore even after their 24-hour story expires.

Instead of having to monetize later somehow, Instagram can immediately start making money from Reels since it already shows ads in Stories and the Explore tab. The feature is sure to get plenty of exposure since 500 million of Instagram’s users already open Stories and Explore each month. Still, Reels’ composer and feed will be buried a few extra taps away from the homescreen compared to TikTok.


Cloning TikTok isn’t just about the features, though Reels does a good job of copying the core ones. Creating scripted content is totally new for most Instagram users, and could feel too showy or goofy for an app known for its seriousness.

TikTok is 100% about acting ridiculous just to make people smile, your personal image be damned. That’s the opposite of the carefully manicured image of glamor and glory most Instagram users try to project. It could feel counterintuitively more awkward to perform comedy in front of your real friends and fans than it does on a dedicated world stage.

Instagram, and Instagrammers, may have to lose their artful, cool aesthetic to embrace the silliness of tomorrow’s social entertainment. But if Reels can change Instagram’s culture to one where we’re comfortable looking stupid, it could beat TikTok’s talent competition by opening a million private karaoke rooms for goofing off just with friends.


Read Full Article
