09 March 2020

Silver Lake makes $1B investment into Twitter; Twitter, Elliott call truce as Dorsey remains CEO


After a week in which it looked like activist investor Elliott Management might try to force out Jack Dorsey as CEO of Twitter to help boost the social platform’s flagging growth, it looks like we have a truce of sorts between the two.

Today, Twitter announced it has received a $1 billion investment from Silver Lake, one of the tech industry’s most prolific investors. As part of that deal, it has also entered into an agreement with Elliott — which owns about 4% of Twitter — to use that investment, plus cash on hand, toward a $2 billion share repurchase program. Along with that, two new people are joining Twitter’s board of directors, Silver Lake’s Egon Durban and Elliott’s Jesse Cohn; the company said it is also seeking a third person to join as part of the agreement. It also committed itself to a plan to grow its mDAU (a special metric Twitter uses for “monetizable” daily active users) by 20% or more, with revenue growth accelerating on a year-over-year basis.

Twitter’s stock is down nearly 6% in pre-market trading, which might be in response to this news, but more likely reflects what is an overall very tough morning for publicly traded stocks in the wake of uncertainty over COVID-19.

In any event, the news of the investment and subsequent agreement puts an end — at least for now — to the possibility of Twitter co-founder Jack Dorsey being removed as CEO of the company.

The threat was real enough that Dorsey took to the stage (and Twitter) at an investor conference to directly address some of the criticisms that he faced from investors and the public over how he manages the company. (Not all were convinced.) There were rumors that supporters were closing ranks in defense of Dorsey, and it seems that this is an attempt at mitigating a full-scale change (at least for now).

“Twitter serves the public conversation, and our purpose has never been more important,” Dorsey said today in a statement. “Silver Lake’s investment in Twitter is a strong vote of confidence in our work and our path forward. They are one of the most respected voices in technology and finance and we are fortunate to have them as our new partner and as a member of our Board. We welcome the support of Egon and Jesse, and look forward to their positive contributions as we continue to build a service that delivers for customers, and drives value for stakeholders.”

Twitter was careful to state that the agreement between Silver Lake, Elliott and the company will leave the new directors unable to influence Twitter’s policies, rules and enforcement decisions. This is likely in response to the implication that Elliott — founded by Paul Singer, who is often described as a “mega donor” to Republicans and Trump — might try to use its investment to steer Twitter toward the right side of the political spectrum, or at least make it more sympathetic to it.

Patrick Pichette, the lead independent director of the Twitter board, will chair the committee to find another board member. “Twitter has undergone remarkable change over the last several years,” he said in a statement.

“We are deeply proud of our accomplishments and confident we are on the right path with Jack’s leadership and the executive team. As a Board, we regularly review and evaluate how Twitter is run, and while our CEO structure is unique, so is Jack and so is this Company. To continue to ensure strong governance, we are pleased to create a temporary Board committee that will build on our regular evaluation of Twitter’s leadership structure. This committee, which I will chair, will provide a fresh look at our various structures, and report the findings to our Board on an ongoing basis. In an environment where certainty is scarce, I can say with certainty that today we have taken steps to meaningfully strengthen what is already a world-class Board.”

Whether or not Pichette is one of the tacit supporters of the current leadership status quo, it seems that for now investors are willing to give Dorsey a shot, provided there is more active input from board members who are investors.

“Twitter’s revolutionary platform is a cornerstone of the public discourse,” said Durban, who is the co-CEO and managing partner of Silver Lake. “We are impressed by Jack’s tireless work over the last few years to solidify the leadership team, improve the product and strengthen the Company. We are excited to partner with Twitter as an investor and a member of the Board. Jack is a visionary leader, and a critical force behind Twitter’s ongoing evolution and growth. I look forward to working alongside the entire Board and the executive team to drive Twitter’s long-term innovation and success.”

Jesse Cohn, partner at Elliott Management, added: “Twitter is one of the most important platforms in the global dialogue, and one of the most innovative and unique technology companies in the world. We are pleased to have worked collaboratively with Twitter on this constructive engagement. We invested in Twitter because we see a significant opportunity for value creation at the Company. I am looking forward to working with Jack and the Board to help contribute to realizing Twitter’s full potential.”


Read Full Article

Google’s Vint Cerf voices support for common criteria for political ad targeting


Google VP Vint Cerf has voiced support for a single set of standards for Internet platforms to apply around political advertising.

Speaking to the UK parliament’s Democracy and Digital Technologies Committee today, the longtime Googler — who has been chief Internet evangelist at the tech giant since 2005 — was asked about the targeting criteria Google allows for political ads and whether he thinks there should be a common definition for all platforms to apply.

“Your idea that there might be common criteria for political advertising I think has a certain merit to it,” he told the committee. “Because then we would see consistency of treatment — and that’s important because there are so many different platforms available for purposes of — not just advertising but political speech.”

“In the US we’ve already experienced the serious side effects of some of the abuse of these platforms and the ability to target specific audiences for purposes of inciting disagreement,” he added. “We should make it difficult for our platforms to be abused in that way.”

The committee had raised the point that Google and Facebook currently apply different criteria around political ads — also asking whether advertisers could use Google’s tools to target political issue ads at a particular geographical region, such as South Bend in Northern Indiana.

“I don’t think that criterion is allowed in our advertising system,” Cerf responded on that specific example. “I don’t think that we’re that refined, particularly in the political space… We have a small number of criteria that are permitted for targeting political ads.”

Last November Google announced limits on political microtargeting — saying it would limit the ability for advertisers to target political demographics, and also committing itself to take action against “demonstrably false claims.”

The move remains in stark contrast to Facebook which dug in at the start of this year — refusing to limit targeting criteria for political ads. Instead it trumpeted a few settings tweaks that it claimed would afford users more controls over ads. As we (and many others) warned at the time, such tweaks offer no meaningful way for Facebook users to prevent the company’s pervasive background profiling of their Internet activity from being repurposed as an attack surface to erode democracy.

Last year some of Facebook’s own staff also criticized its decision not to restrict politicians from lying in ads and called for it to limit the use of Custom Audiences — arguing microtargeting works against the public scrutiny that Facebook claims keeps politicians honest. However, the company has held the line on refusing to apply limits to political ads — with the occasional exception.

The committee also asked Cerf if he has any concerns about online misinformation and disinformation emerging on platforms related to the novel coronavirus outbreak.

Cerf responded by saying he’s “very concerned about the abuse of the system and looking for ways to counter that”.

“I use our tools every single day. I don’t think I would survive without having the ability to search through the world wide web — get information — get answers. I exercise critical thinking as much as I can about the sources and the content. I am a very optimistic person with regard to the value of what’s been done so far. I am very concerned about the abuse of the system and looking for ways to counter that — and those ways may be mechanical but they also involve the ‘wet ware’ up here,” he said, gesturing at his head.

“So my position is this is all positive stuff but how do we preserve the value of what we defend against the abuse? … We’re human beings and we should try very hard to make our tools serve us and our society in a positive way.”



Announcing TensorFlow Quantum: An Open Source Library for Quantum Machine Learning




“Nature isn’t classical, dammit, so if you want to make a simulation of nature, you’d better make it quantum mechanical.” — Physicist Richard Feynman

Machine learning (ML), while it doesn’t exactly simulate systems in nature, has the ability to learn a model of a system and predict the system’s behavior. Over the past few years, classical ML models have shown promise in tackling challenging scientific issues, leading to advancements in image processing for cancer detection, forecasting earthquake aftershocks, predicting extreme weather patterns, and detecting new exoplanets. With the recent progress in the development of quantum computing, the development of new quantum ML models could have a profound impact on the world’s biggest problems, leading to breakthroughs in the areas of medicine, materials, sensing, and communications. However, to date there has been a lack of research tools to discover useful quantum ML models that can process quantum data and execute on quantum computers available today.

Today, in collaboration with the University of Waterloo, X, and Volkswagen, we announce the release of TensorFlow Quantum (TFQ), an open-source library for the rapid prototyping of quantum ML models. TFQ provides the tools necessary for bringing the quantum computing and machine learning research communities together to control and model natural or artificial quantum systems, e.g. Noisy Intermediate Scale Quantum (NISQ) processors with ~50-100 qubits.

Under the hood, TFQ integrates Cirq with TensorFlow, and offers high-level abstractions for the design and implementation of both discriminative and generative quantum-classical models by providing quantum computing primitives compatible with existing TensorFlow APIs, along with high-performance quantum circuit simulators.

What is a Quantum ML Model?
A quantum model has the ability to represent and generalize data with a quantum mechanical origin. However, to understand quantum models, two concepts must be introduced: quantum data and hybrid quantum-classical models.

Quantum data exhibits superposition and entanglement, leading to joint probability distributions that could require an exponential amount of classical computational resources to represent or store. Quantum data, which can be generated or simulated on quantum processors, sensors and networks, includes the simulation of chemicals and quantum matter, quantum control, quantum communication networks, quantum metrology, and much more.
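To make the exponential representation cost concrete, here is a minimal sketch in plain Python (illustrative only; these helper functions are not TFQ or Cirq APIs): an n-qubit state vector needs 2**n complex amplitudes, and a two-qubit Bell state already yields a joint measurement distribution that cannot be factored into independent single-qubit distributions.

```python
# Illustrative sketch in plain Python (not TFQ or Cirq): an n-qubit
# state vector needs 2**n complex amplitudes, which is why quantum
# data can be exponentially expensive to represent classically.
from math import sqrt

def apply_1q(gate, state, qubit, n):
    """Apply a 2x2 single-qubit gate to `qubit` of an n-qubit state vector."""
    out = [0j] * (1 << n)
    for i, amp in enumerate(state):
        b = (i >> qubit) & 1                       # current value of that bit
        for nb in (0, 1):                          # new value of that bit
            j = (i & ~(1 << qubit)) | (nb << qubit)
            out[j] += gate[nb][b] * amp
    return out

def apply_cnot(state, ctrl, tgt, n):
    """Flip the target bit wherever the control bit is 1."""
    out = [0j] * (1 << n)
    for i, amp in enumerate(state):
        j = i ^ (1 << tgt) if (i >> ctrl) & 1 else i
        out[j] += amp
    return out

H = [[1 / sqrt(2), 1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]                  # Hadamard gate

n = 2
state = [0j] * (1 << n)
state[0] = 1 + 0j                                  # start in |00>
state = apply_1q(H, state, 0, n)                   # superposition on qubit 0
state = apply_cnot(state, 0, 1, n)                 # entangle -> Bell state

probs = [abs(a) ** 2 for a in state]               # joint distribution over 00..11
# probs is approximately [0.5, 0, 0, 0.5]: only |00> and |11> occur, and
# this joint distribution cannot be factored into two independent
# single-qubit distributions.
```

Storing the state of just 50 such qubits this way would take 2**50 amplitudes, which is the regime where NISQ processors become interesting.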

A technical, but key insight is that quantum data generated by NISQ processors are noisy and are typically entangled just before the measurement occurs. However, applying quantum machine learning to noisy entangled quantum data can maximize extraction of useful classical information. Inspired by these techniques, the TFQ library provides primitives for the development of models that disentangle and generalize correlations in quantum data, opening up opportunities to improve existing quantum algorithms or discover new quantum algorithms.

The second concept to introduce is hybrid quantum-classical models. Because near-term quantum processors are still fairly small and noisy, quantum models cannot use quantum processors alone; NISQ processors will need to work in concert with classical processors to become effective. As TensorFlow already supports heterogeneous computing across CPUs, GPUs, and TPUs, it is a natural platform for experimenting with hybrid quantum-classical algorithms.

TFQ contains the basic structures, such as qubits, gates, circuits, and measurement operators that are required for specifying quantum computations. User-specified quantum computations can then be executed in simulation or on real hardware. Cirq also contains substantial machinery that helps users design efficient algorithms for NISQ machines, such as compilers and schedulers, and enables the implementation of hybrid quantum-classical algorithms to run on quantum circuit simulators, and eventually on quantum processors.

We’ve used TensorFlow Quantum for hybrid quantum-classical convolutional neural networks, machine learning for quantum control, layer-wise learning for quantum neural networks, quantum dynamics learning, generative modeling of mixed quantum states, and learning to learn with quantum neural networks via classical recurrent neural networks. We provide a review of these quantum applications in the TFQ white paper; each example can be run in-browser via Colab from our research repository.

How TFQ works
TFQ allows researchers to construct quantum datasets, quantum models, and classical control parameters as tensors in a single computational graph. The outcome of quantum measurements, leading to classical probabilistic events, is obtained by TensorFlow Ops. Training can be done using standard Keras functions.

To provide some intuition on how to use quantum data, one may consider a supervised classification of quantum states using a quantum neural network. Just like classical ML, a key challenge of quantum ML is to classify “noisy data”. To build and train such a model, the researcher can do the following:
  1. Prepare a quantum dataset - Quantum data is loaded as tensors (a multi-dimensional array of numbers). Each quantum data tensor is specified as a quantum circuit written in Cirq that generates quantum data on the fly. The tensor is executed by TensorFlow on the quantum computer to generate a quantum dataset.
  2. Evaluate a quantum neural network model - The researcher can prototype a quantum neural network using Cirq that they will later embed inside of a TensorFlow compute graph. Parameterized quantum models can be selected from several broad categories based on knowledge of the quantum data's structure. The goal of the model is to perform quantum processing in order to extract information hidden in a typically entangled state. In other words, the quantum model essentially disentangles the input quantum data, leaving the hidden information encoded in classical correlations, thus making it accessible to local measurements and classical post-processing.
  3. Sample or Average - Measurement of quantum states extracts classical information in the form of samples from a classical random variable. The distribution of values from this random variable generally depends on the quantum state itself and on the measured observable. As many variational algorithms depend on mean values of measurements, also known as expectation values, TFQ provides methods for averaging over several runs involving steps (1) and (2).
  4. Evaluate a classical neural network model - Once classical information has been extracted, it is in a format amenable to further classical post-processing. As the extracted information may still be encoded in classical correlations between measured expectations, classical deep neural networks can be applied to distill such correlations.
  5. Evaluate Cost Function - Given the results of classical post-processing, a cost function is evaluated. This could be based on how accurately the model performs the classification task if the quantum data was labeled, or other criteria if the task is unsupervised.
  6. Evaluate Gradients & Update Parameters - After evaluating the cost function, the free parameters in the pipeline should be updated in a direction expected to decrease the cost. This is most commonly performed via gradient descent.
A high-level abstract overview of the computational steps involved in the end-to-end pipeline for inference and training of a hybrid quantum-classical discriminative model for quantum data in TFQ. To see the code for an end-to-end example, please check the “Hello Many-Worlds” example, the quantum convolutional neural networks tutorial, and our guide.
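The six numbered steps can also be sketched end to end with a toy model in plain Python (a stand-in for TFQ, not its actual API): a one-qubit circuit Rx(theta) applied to |0> has Z expectation value cos(theta), and we train theta against a target label using the parameter-shift rule, which estimates gradients by re-executing the circuit at shifted parameter values rather than differentiating through it.

```python
# A toy end-to-end version of steps 1-6 in plain Python (a stand-in
# for TFQ, not its actual API). The "circuit" is Rx(theta) on |0>,
# whose Z expectation value is exactly cos(theta).
from math import cos, pi

def expectation_z(theta):
    """Steps 1-3: prepare Rx(theta)|0> and average the Z measurement.
    For this one-qubit circuit the exact expectation is cos(theta)."""
    return cos(theta)

def cost(theta, target):
    """Step 5: squared error between the measured expectation and the label."""
    return (expectation_z(theta) - target) ** 2

def grad(theta, target):
    """Step 6 via the parameter-shift rule: estimate d<Z>/dtheta by
    re-running the circuit at theta +/- pi/2 instead of differentiating."""
    dexp = (expectation_z(theta + pi / 2) - expectation_z(theta - pi / 2)) / 2
    return 2 * (expectation_z(theta) - target) * dexp

theta, target, lr = 0.5, -1.0, 0.2   # target -1.0 means "train towards |1>"
for _ in range(200):
    theta -= lr * grad(theta, target)  # step 6: gradient descent

# theta ends up near pi, where <Z> = cos(pi) = -1 matches the label
```

In TFQ proper, loops like this run over batches of real circuits, with the measured expectation values flowing through the TensorFlow computational graph as ordinary tensors, as described above.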
A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved by TensorFlow’s ability to parallelize computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers. To achieve the latter, we are also announcing the release of qsim, a new high-performance open-source quantum circuit simulator, which has demonstrated the ability to simulate a 32-qubit quantum circuit with a gate depth of 14 in 111 seconds on a single Google Cloud node (n1-ultramem-160) (see this paper for details). The simulator is particularly optimized for multi-core Intel processors. Combined with TFQ, we have demonstrated 1 million circuit simulations for a 20-qubit quantum circuit at a gate depth of 20 in 60 minutes on a Google Cloud node (n2-highcpu-80). See Section II E of the TFQ white paper, on quantum circuit simulation with qsim, for more information.

Looking Forward
Today, TensorFlow Quantum is primarily geared towards executing quantum circuits on classical quantum circuit simulators. In the future, TFQ will be able to execute quantum circuits on actual quantum processors that are supported by Cirq, including Google’s own processor Sycamore.

To learn more about TFQ, please read our white paper and visit the TensorFlow Quantum website. We believe that bridging the ML and Quantum communities will lead to exciting new discoveries across the board and accelerate the discovery of new quantum algorithms to solve the world’s most challenging problems.

Acknowledgements
This open source project is led by the Google AI Quantum team, and was co-developed by the University of Waterloo, Alphabet’s X, and Volkswagen. A special thanks to the University of Waterloo, whose students made major contributions to this open source software through multiple internship projects at the Google AI Quantum lab.

The legacy of gender equality and fluidity in the Philippines | France Villarta


In much of the world, gender is viewed as binary: man or woman, each assigned characteristics and traits designated by biological sex. But that's not the case everywhere, says France Villarta. In a talk that's part cultural love letter, part history lesson, he details the legacy of gender fluidity and inclusivity in his native Philippines -- and emphasizes the universal beauty of all people, regardless of society's labels.


Spotify rolls out a more personalized home screen to users worldwide


Spotify has been slowly rolling out a redesigned mobile app in small sections — first with an update to podcast pages, then to other parts of the experience. Today, the company is revamping the most critical part of the Spotify app: the Home screen. Now, when Spotify users launch the app, the new home screen greets them according to the time of day — with a “Good Morning,” “Good Afternoon,” or “Good Evening,” for example. The screen’s content and recommendations will also change with the time of day, Spotify says, and the content has been better organized so you can more easily jump back in or browse recommendations from the main page.

Before, Spotify’s home screen emphasized your listening history by putting things like your “Recently Played,” “Your Top Podcasts,” and “Your Heavy Rotation” at the top of the page.

Effectively, the update breaks up the app’s home screen into two main parts: familiar content on top and new or recommended content on the bottom half.

Now, the home screen reserves six spots underneath the daily greeting where you can pick back up with things like the podcast you stream every morning, your workout playlist, or the album you’ve been listening to on heavy rotation this week. This content will update as your day progresses to better match your activities and interests, based on prior behavior.

Beneath these six spots, the home page will display other things like your top podcasts, “made for you” playlists, recommendations for new discoveries based on your listening, and more.

The concept for the new home screen is similar to the personalized “For You” tab Pandora rolled out late last year. Like Spotify, Pandora’s tab also customizes the content displayed based on the time of day, in addition to the day of the week and other predictions it can make about a customer’s mood or potential activity, based on prior listening data.

Pandora’s revamp led to double the number of users engaging with the personalized page, compared with the old Browse experience, it says. Spotify, too, is likely hoping to see a similar bump in usage and engagement as users won’t have to dart around the app as much to find their favorite content or recommendations. That way, they’ll be able to start streaming more quickly after the app is launched, potentially leading to longer sessions and more discovery of new content.

Spotify to date has defined itself by its advanced personalization and recommendation technology, but its app hasn’t always been the easiest to use and navigate — especially in comparison to its top U.S. rival, Apple Music, which favors a simpler and cleaner look-and-feel. Its recent changes have tried to address this problem by making its various parts and pages easier to use.

Spotify says the updated home screen will roll out starting today to all global users with at least 30 days of listening history.



Australia sues Facebook over Cambridge Analytica, fine could scale to $529BN


Australia’s privacy watchdog is suing Facebook over the Cambridge Analytica data breach — which, back in 2018, became a global scandal that wiped billions off the tech giant’s share price yet only led to Facebook picking up a $5BN FTC fine.

Should Australia prevail in its suit against the tech giant, the monetary penalty could be orders of magnitude larger.

Australia’s Privacy Act sets out a provision for a civil penalty of up to $1,700,000 to be levied per contravention — and the national watchdog believes there were 311,074 local Facebook users in the cache of ~86M profiles lifted by Cambridge Analytica. So the potential fine here is circa $529BN. (A very far cry from the £500k Facebook paid in the UK over the same data misuse scandal.)
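As a sanity check on the headline number, the arithmetic is simply the per-contravention maximum multiplied by the number of affected users:

```python
# Maximum penalty under Australia's Privacy Act, per contravention,
# multiplied by the OAIC's count of affected local users.
penalty_per_contravention = 1_700_000   # $1.7M
affected_users = 311_074

max_fine = penalty_per_contravention * affected_users
print(f"${max_fine / 1e9:.1f}BN")       # prints $528.8BN, i.e. circa $529BN
```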

In a statement published on its website today, the Office of the Australian Information Commissioner (OAIC) says it has lodged proceedings against Facebook in the Federal Court, alleging the company committed serious and/or repeated interferences with privacy.

The suit alleges the personal data of Australian Facebook users was disclosed to the This is Your Digital Life app for a purpose other than that for which it was collected — thereby breaching Australia’s Privacy Act 1988. It further claims the data was exposed to the risk of being disclosed to Cambridge Analytica and used for political profiling purposes, and passed to other third parties.

This is Your Digital Life was an app built by an app developer called GSR that was hired by Cambridge Analytica to obtain and process Facebook users’ data for political ad targeting purposes.

The events from which the suit stems took place on Facebook’s platform between March 2014 and May 2015, when user data was being siphoned off by GSR under contract with Cambridge Analytica — which worked with US political campaigns, including Ted Cruz’s presidential campaign and, later, that of Donald Trump, now president.

GSR was co-founded by two psychology researchers, Aleksandr Kogan and Joseph Chancellor. And in a still unexplained twist in the saga, Facebook hired Chancellor, in about November 2015, which was soon after some of its own staffers had warned internally about the “sketchy” business Cambridge Analytica was conducting on its ad platform. Chancellor has never spoken to the press and subsequently departed Facebook as quietly and serendipitously as he arrived.

In a concise statement summing up its legal action against Facebook, the OAIC writes:

Facebook disclosed personal information of the Affected Australian Individuals. Most of those individuals did not install the “This is Your Digital Life” App; their Facebook friends did. Unless those individuals undertook a complex process of modifying their settings on Facebook, their personal information was disclosed by Facebook to the “This is Your Digital Life” App by default. Facebook did not adequately inform the Affected Australian Individuals of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.

Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure. Facebook did not know the precise nature or extent of the personal information it disclosed to the “This is Your Digital Life” App. Nor did it prevent the app from disclosing to third parties the personal information obtained. The full extent of the information disclosed, and to whom it was disclosed, accordingly cannot be known. What is known, is that Facebook disclosed the Affected Australian Individuals’ personal information to the “This is Your Digital Life” App, whose developers sold personal information obtained using the app to the political consulting firm Cambridge Analytica, in breach of Facebook’s policies.

As a result, the Affected Australian Individuals’ personal information was exposed to the risk of disclosure, monetisation and use for political profiling purposes.

Commenting in a statement, Australia’s information commissioner and privacy commissioner, Angelene Falk, added: “All entities operating in Australia must be transparent and accountable in the way they handle personal information, in accordance with their obligations under Australian privacy law. We consider the design of the Facebook platform meant that users were unable to exercise reasonable choice and control about how their personal information was disclosed.

“Facebook’s default settings facilitated the disclosure of personal information, including sensitive information, at the expense of privacy. We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users’ expectations.”

Reached for comment, a Facebook spokesperson sent this statement:

We’ve actively engaged with the OAIC over the past two years as part of their investigation. We’ve made major changes to our platforms, in consultation with international regulators, to restrict the information available to app developers, implement new governance protocols and build industry-leading controls to help people protect and manage their data. We’re unable to comment further as this is now before the Federal Court.



Box is now letting all staff work from home to reduce coronavirus risk


Box has joined a number of tech companies supporting employees to work remotely from home in response to the outbreak of the novel coronavirus, known as COVID-19.

It’s applying the policy to all staff, regardless of location.

Late yesterday Box co-founder Aaron Levie tweeted a statement detailing the cloud computing company’s response to COVID-19 — to, as he put it, “ensure the availability of our service and safety of our employees”.

In recent days Twitter has similarly encouraged all staff members to work from home, while companies including Amazon, Google, LinkedIn and Microsoft have also advised some staff to work remotely to reduce the risk of exposure to the virus.

In its response statement Box writes that it’s enacted its business continuity plans “to ensure core business functions and technology are operational in the event of any potential disruption”.

“We have long recognized the potential risks associated with service interruptions due to adverse events, such as an earthquake, power outage or a public health crisis like COVID-19, affecting our strategic, operational, stakeholder and customer obligations. This is why we have had a Business Continuity program in place to provide the policies and plans necessary for protecting Box’s operations and critical business functions,” the company writes.

In a section on “workforce resilience and business continuity” it notes that work from home practices are a normal part of its business operations but says it’s now extending the option to all its staff, regardless of the office or location they normally work out of — saying it’s doing so “out of an abundance of caution during COVID-19”.

Other measures the company says it’s taken to further reduce risk include suspending all international travel and limiting non-essential domestic travel; reducing large customer events and gatherings; and emphasizing health and hygiene across all office locations — “by maintaining sanitation supplies and encouraging an ‘if you are sick, stay home’ mindset”.

It also says it’s conducting all new hire orientation and candidate interviews virtually.

Box names a number of tools it says it routinely uses to support mobility and remote working, including its own service for secure content collaboration; Zoom’s video communication tool; the Slack messaging app; Okta for secure ID; plus additional unnamed “critical cloud tools” for ensuring “uninterrupted remote work for all employees”.

Clearly spying the opportunity to onboard new users, as more companies switch on remote working as a result of COVID-19 concerns, Box’s post also links to free training resources for its own cloud computing tools.


Read Full Article

Crypto wallet app ZenGo launches savings mode


ZenGo is expanding beyond the basic features of a cryptocurrency wallet — letting you hold, send and receive crypto assets. You can now set aside some of your crypto assets to earn interest. In other words, ZenGo now also acts like a savings account.

The company has partnered with two DeFi projects for the new feature. DeFi means “decentralized finance”, and it has been a hot trend in the cryptocurrency space. DeFi projects are the blockchain equivalent of traditional financial products. For instance, you can lend and borrow money, invest in derivative assets and more.

If you want to learn more about DeFi, here’s an article I wrote on the subject.

But let’s come back to ZenGo. When you have crypto assets in your ZenGo wallet, you can now open the savings tab, pick an asset, such as Dai, and select what percentage of your holdings you want to set aside.

After that, all you have to do is wait. You get an overview of your savings “accounts” at any time. This way, you can see your total interest earned. Interest is automatically reinvested over time. You can move your money from those DeFi projects back to your wallet whenever you want.

Behind the scenes, ZenGo uses the Compound protocol, a lending DeFi project. It works a bit like LendingClub, but on the blockchain. Some users send money to Compound to contribute to liquidity pools. Other users borrow money from those pools.

Interest rates go up and down depending on supply and demand. That’s why you currently earn more interest when you supply Dai or USD Coin to Compound. But that could change over time.
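To make the supply-and-demand mechanic concrete, here is a simplified sketch of a utilization-based rate model, similar in spirit to the one Compound uses: the larger the share of a pool that is borrowed, the higher the rate lenders earn. All function names and parameters here are made up for illustration and are not Compound’s actual model or values.

```python
def borrow_rate(utilization: float, base: float = 0.02, slope: float = 0.20) -> float:
    """Annual borrow rate as a linear function of pool utilization (0.0 to 1.0).

    Borrowers pay more as a larger fraction of the pool is lent out.
    """
    return base + slope * utilization


def supply_rate(utilization: float, reserve_factor: float = 0.10) -> float:
    """Annual rate lenders earn: borrowers' interest, scaled by utilization,
    minus a cut the protocol keeps in reserve."""
    return borrow_rate(utilization) * utilization * (1 - reserve_factor)


# A pool that is 50% borrowed pays lenders less than one that is 90% borrowed.
low = supply_rate(0.5)
high = supply_rate(0.9)
assert high > low
```

The point is only that lender yields track borrowing demand: when demand for a given asset rises, its utilization climbs and so does the rate paid to people supplying that asset.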

ZenGo also uses Figment in order to stake Tezos. This time, it isn’t a lending marketplace. When you lock some money in a staking project, you’re supporting the operations of a particular blockchain. Only blockchains based on proof-of-stake support staking, so few of them do.

For the end user, it looks like a savings account whether you’re relying on Compound or Figment. There are other wallet apps that let you access DeFi projects, such as Coinbase Wallet and Argent. But ZenGo thinks they’re still too complicated for regular users.


Read Full Article