23 October 2020

Rethinking Attention with Performers


Transformer models have achieved state-of-the-art results across a diverse range of domains, including natural language, conversation, images, and even music. The core block of every Transformer architecture is the attention module, which computes similarity scores for all pairs of positions in an input sequence. This, however, scales poorly with the length of the input sequence, requiring quadratic computation time to produce all similarity scores and quadratic memory to store the matrix of those scores.
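For reference, here is a minimal NumPy sketch of standard softmax attention for a single head; the sequence length and dimensions are illustrative, not taken from this post. The L × L score matrix it materializes is exactly the quadratic bottleneck described above.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Q, K, V: (L, d) arrays for a single attention head.
    L, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)            # (L, L): quadratic in L
    scores -= scores.max(axis=1, keepdims=True)
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)        # row-wise softmax over all positions
    return A @ V                             # (L, d)

rng = np.random.default_rng(0)
L, d = 1024, 64
Q, K, V = (rng.normal(size=(L, d)) for _ in range(3))
out = softmax_attention(Q, K, V)             # materializes a 1024 x 1024 matrix
```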

For applications where long-range attention is needed, several fast and more space-efficient proxies have been proposed such as memory caching techniques, but a far more common way is to rely on sparse attention. Sparse attention reduces computation time and the memory requirements of the attention mechanism by computing a limited selection of similarity scores from a sequence rather than all possible pairs, resulting in a sparse matrix rather than a full matrix. These sparse entries may be manually proposed, found via optimization methods, learned, or even randomized, as demonstrated by such methods as Sparse Transformers, Longformers, Routing Transformers, Reformers, and Big Bird. Since sparse matrices can also be represented by graphs and edges, sparsification methods are also motivated by the graph neural network literature, with specific relationships to attention outlined in Graph Attention Networks. Such sparsity-based architectures usually require additional layers to implicitly produce a full attention mechanism.

Standard sparsification techniques. Left: Example of a sparsity pattern, where tokens attend only to other nearby tokens. Right: In Graph Attention Networks, tokens attend only to their neighbors in the graph, which should have higher relevance than other nodes. See Efficient Transformers: A Survey for a comprehensive categorization of various methods.

Unfortunately, sparse attention methods can still suffer from a number of limitations. (1) They require efficient sparse-matrix multiplication operations, which are not available on all accelerators; (2) they usually do not provide rigorous theoretical guarantees for their representation power; (3) they are optimized primarily for Transformer models and generative pre-training; and (4) they usually stack more attention layers to compensate for sparse representations, making them difficult to use with other pre-trained models, thus requiring retraining and significant energy consumption. In addition to these shortcomings, sparse attention mechanisms are often still not sufficient to address the full range of problems to which regular attention methods are applied, such as Pointer Networks. There are also some operations that cannot be sparsified, such as the commonly used softmax operation, which normalizes similarity scores in the attention mechanism and is used heavily in industry-scale recommender systems.

To resolve these issues, we introduce the Performer, a Transformer architecture with attention mechanisms that scale linearly, thus enabling faster training while allowing the model to process longer lengths, as required for certain image datasets such as ImageNet64 and text datasets such as PG-19. The Performer uses an efficient (linear) generalized attention framework, which allows a broad class of attention mechanisms based on different similarity measures (kernels). The framework is implemented by our novel Fast Attention Via Positive Orthogonal Random Features (FAVOR+) algorithm, which provides scalable low-variance and unbiased estimation of attention mechanisms that can be expressed by random feature map decompositions (in particular, regular softmax-attention). We obtain strong accuracy guarantees for this method while preserving linear space and time complexity, which can also be applied to standalone softmax operations.

Generalized Attention
In the original attention mechanism, the query and key inputs, corresponding respectively to rows and columns of a matrix, are multiplied together and passed through a softmax operation to form an attention matrix, which stores the similarity scores. Note that in this method, one cannot decompose the query-key product back into its original query and key components after passing it into the nonlinear softmax operation. However, it is possible to decompose the attention matrix back to a product of random nonlinear functions of the original queries and keys, otherwise known as random features, which allows one to encode the similarity information in a more efficient manner.

LHS: The standard attention matrix, which contains all similarity scores for every pair of entries, formed by a softmax operation on the query and keys, denoted by q and k. RHS: The standard attention matrix can be approximated via lower-rank randomized matrices Q′ and K′ with rows encoding potentially randomized nonlinear functions of the original queries/keys. For the regular softmax-attention, the transformation is very compact and involves an exponential function as well as random Gaussian projections.

Regular softmax-attention can be seen as a special case with these nonlinear functions defined by exponential functions and Gaussian projections. Note that we can also reason inversely, by implementing more general nonlinear functions first, implicitly defining other types of similarity measures, or kernels, on the query-key product. We frame this as generalized attention, based on earlier work in kernel methods. Although for most kernels, closed-form formulae do not exist, our mechanism can still be applied since it does not rely on them.
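To make the generalized framing concrete, below is a minimal sketch of how a feature map implicitly defines a similarity measure. The ReLU-of-random-projections map is one illustrative non-softmax choice and is not necessarily the exact parameterization used by Performer-ReLU.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, m = 256, 64, 128
Q = rng.normal(size=(L, d))
K = rng.normal(size=(L, d))

# A feature map phi implicitly defines the similarity measure (kernel):
# similarity(q, k) ~= phi(q) . phi(k). Here: ReLU of random Gaussian
# projections, one illustrative non-softmax choice.
W = rng.normal(size=(d, m)) / np.sqrt(m)
phi = lambda X: np.maximum(X @ W, 0.0)     # (L, d) -> (L, m)

Q_prime, K_prime = phi(Q), phi(K)          # the Q' and K' of the figure above
A_approx = Q_prime @ K_prime.T             # implicit attention matrix, rank <= m
```

The decomposition itself is the point: as shown below, Q′ and K′ never actually need to be multiplied together.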

To the best of our knowledge, we are the first to show that any attention matrix can be effectively approximated in downstream Transformer applications using random features. The novel mechanism enabling this is the use of positive random features, i.e., positive-valued nonlinear functions of the original queries and keys, which prove to be crucial for avoiding instabilities during training and provide a more accurate approximation of the regular softmax attention mechanism.
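A minimal sketch of such a positive random feature map for the softmax kernel follows, assuming plain i.i.d. Gaussian projections; the actual FAVOR+ mechanism additionally orthogonalizes the projections to reduce the variance of the estimate.

```python
import numpy as np

def positive_features(X, W):
    # Positive random features for the softmax kernel exp(q . k):
    # phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m), with rows of W ~ N(0, I_d).
    # E[phi(q) . phi(k)] = exp(q . k), and every feature is strictly positive.
    m = W.shape[0]
    return np.exp(X @ W.T - 0.5 * np.sum(X**2, axis=1, keepdims=True)) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 16, 4096
q = 0.3 * rng.normal(size=(1, d))
k = 0.3 * rng.normal(size=(1, d))
W = rng.normal(size=(m, d))          # i.i.d. here; FAVOR+ orthogonalizes W
                                     # to further reduce estimator variance
exact = np.exp(q @ k.T)
approx = positive_features(q, W) @ positive_features(k, W).T
print(float(exact[0, 0]), float(approx[0, 0]))   # close for large m
```

Because every feature is positive, the estimated similarity scores can never flip sign, which is what avoids the training instabilities that arise when an approximation of a near-zero softmax score goes negative.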

Towards FAVOR: Fast Attention via Matrix Associativity
The decomposition described above allows one to store the implicit attention matrix with linear, rather than quadratic, memory complexity. One can also obtain a linear time attention mechanism using this decomposition. While the original attention mechanism multiplies the stored attention matrix with the value input to obtain the final result, after decomposing the attention matrix, one can rearrange matrix multiplications to approximate the result of the regular attention mechanism, without explicitly constructing the quadratic-sized attention matrix. This ultimately leads to FAVOR+.
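A minimal sketch of the reordering, assuming the feature maps Q′ and K′ have already been computed (for example with the positive feature map above):

```python
import numpy as np

def favor_bidirectional(Q_prime, K_prime, V):
    # Q_prime, K_prime: (L, m) feature maps of queries and keys; V: (L, d).
    # (Q' @ K'.T) @ V == Q' @ (K'.T @ V), but the right-hand grouping never
    # builds the L x L matrix: O(L*m*d) time and O(L*(m+d)) memory.
    numer = Q_prime @ (K_prime.T @ V)        # (L, d)
    denom = Q_prime @ K_prime.sum(axis=0)    # row sums of the implicit matrix
    return numer / denom[:, None]

rng = np.random.default_rng(0)
L, m, d = 4096, 256, 64
Q_prime = rng.random((L, m))                 # positive features, as in FAVOR+
K_prime = rng.random((L, m))
V = rng.normal(size=(L, d))
out = favor_bidirectional(Q_prime, K_prime, V)   # never forms a 4096 x 4096 matrix
```

For small L, one can check that explicitly forming A = Q′K′ᵀ, row-normalizing it and multiplying by V gives the same result; only the cost of the two groupings differs.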

Left: Standard attention module computation, where the final desired result is computed by performing a matrix multiplication with the attention matrix A and value tensor V. Right: By decoupling matrices Q′ and K′ used in lower rank decomposition of A and conducting matrix multiplications in the order indicated by dashed-boxes, we obtain a linear attention mechanism, never explicitly constructing A or its approximation.

The above analysis is relevant for so-called bidirectional attention, i.e., non-causal attention where there is no notion of past and future. For unidirectional (causal) attention, where tokens do not attend to other tokens appearing later in the input sequence, we slightly modify the approach to use prefix-sum computations, which only store running totals of matrix computations rather than storing an explicit lower-triangular regular attention matrix.

Left: Standard unidirectional attention requires masking the attention matrix to obtain its lower-triangular part. Right: An unbiased approximation of the LHS can be obtained via a prefix-sum mechanism, where the prefix-sum of the outer-products of random feature maps for keys and value vectors is built on the fly and left-multiplied by the query random feature vector to obtain the new row in the resulting matrix.
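A sequential sketch of this prefix-sum mechanism is below; a practical implementation would vectorize or parallelize the scan rather than loop in Python.

```python
import numpy as np

def favor_causal(Q_prime, K_prime, V):
    # Unidirectional variant: keep running prefix sums of the outer products
    # K'[i] (x) V[i] (and of K'[i] for the normalizer) instead of materializing
    # a masked, lower-triangular L x L attention matrix.
    L, m = Q_prime.shape
    d = V.shape[1]
    S = np.zeros((m, d))                     # prefix sum of outer products
    z = np.zeros(m)                          # prefix sum of key features
    out = np.empty((L, d))
    for i in range(L):
        S += np.outer(K_prime[i], V[i])
        z += K_prime[i]
        out[i] = (Q_prime[i] @ S) / (Q_prime[i] @ z)   # sees positions <= i only
    return out

rng = np.random.default_rng(0)
L, m, d = 1024, 128, 64
Q_prime, K_prime = rng.random((L, m)), rng.random((L, m))
V = rng.normal(size=(L, d))
out = favor_causal(Q_prime, K_prime, V)
```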

Properties
We first benchmark the space- and time-complexity of the Performer and show that the attention speedups and memory reductions are empirically nearly optimal, i.e., very close to simply not using an attention mechanism at all in the model.

Bidirectional timing for the regular Transformer model in log-log plot with time (T) and length (L). Lines end at the limit of GPU memory. The black line (X) denotes the maximum possible memory compression and speedups when using a “dummy” attention block, which essentially bypasses attention calculations and demonstrates the maximum possible efficiency of the model. The Performer model is nearly able to reach this optimal performance in the attention component.

We further show that the Performer, using our unbiased softmax approximation, is backwards compatible with pretrained Transformer models after a bit of fine-tuning, which could potentially lower energy costs by improving inference speed, without having to fully retrain pre-existing models.

Using the One Billion Word Benchmark (LM1B) dataset, we transferred the original pre-trained Transformer weights to the Performer model, which produces an initial non-zero 0.07 accuracy (dotted orange line). Once fine-tuned, however, the Performer quickly recovers accuracy in a small fraction of the original number of gradient steps.

Example Application: Protein Modeling
Proteins are large molecules with complex 3D structures and specific functions that are essential to life. Like words, proteins are specified as linear sequences where each character is one of 20 amino acid building blocks. Applying Transformers to large unlabeled corpora of protein sequences (e.g. UniRef) yields models that can be used to make accurate predictions about the folded, functional macromolecule. Performer-ReLU (which uses ReLU-based attention, an instance of generalized attention that is different from softmax) performs strongly at modeling protein sequence data, while Performer-Softmax matches the performance of the Transformer, as predicted by our theoretical results.

Performance at modeling protein sequences. Train = Dashed, Validation = Solid, Unidirectional = (U), Bidirectional = (B). We use the 36-layer model parameters from ProGen (2019) for all runs, each using a 16x16 TPU-v2. Batch sizes were maximized for each run, given the corresponding compute constraints.

Below we visualize a protein Performer model, trained using the ReLU-based approximate attention mechanism. Using the Performer to estimate similarity between amino acids recovers similar structure to well-known substitution matrices obtained by analyzing evolutionary substitution patterns across carefully curated sequence alignments. More generally, we find local and global attention patterns consistent with Transformer models trained on protein data. The dense attention approximation of the Performer has the potential to capture global interactions across multiple protein sequences. As a proof of concept, we train models on long concatenated protein sequences, which overloads the memory of a regular Transformer model, but not the Performer due to its space efficiency.

Left: Amino acid similarity matrix estimated from attention weights. The model recognizes highly similar amino acid pairs such as (D,E) and (F,Y), despite only having access to protein sequences without prior information about biochemistry. Center: Attention matrices from 4 layers (rows) and 3 selected heads (columns) for the BPT1_BOVIN protein, showing local and global attention patterns.
Performance on sequences up to length 8192 obtained by concatenating individual protein sequences. To fit into TPU memory, the Transformer’s size (number of layers and embedding dimensions) was reduced.

Conclusion
Our work contributes to the recent efforts on non-sparsity based methods and kernel-based interpretations of Transformers. Our method is interoperable with other techniques like reversible layers and we have even integrated FAVOR with the Reformer's code. We provide the links for the paper, Performer code, and the Protein Language Modeling code. We believe that our research opens up a brand new way of thinking about attention, Transformer architectures, and even kernel methods.

Acknowledgements
This work was performed by the core Performers designers Krzysztof Choromanski (Google Brain Team, Tech and Research Lead), Valerii Likhosherstov (University of Cambridge) and Xingyou Song (Google Brain Team), with contributions from David Dohan, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, and Adrian Weller. We give special thanks to the Applied Science Team for jointly leading the research effort on applying efficient Transformer architectures to protein sequence data.

We additionally wish to thank Joshua Meier, John Platt, and Tom Weingarten for many fruitful discussions on biological data and useful comments on this draft, along with Yi Tay and Mostafa Dehghani for discussions on comparing baselines. We further thank Nikita Kitaev and Wojciech Gajewski for multiple discussions on the Reformer, and Aurko Roy and Ashish Vaswani for multiple discussions on the Routing Transformer.


Google removes 3 Android apps for children, with 20M+ downloads between them, over data collection violations


When it comes to apps, Android leads the pack with nearly 3 million apps in its official Google Play store. The sheer volume also means that sometimes iffy apps slip through the cracks.

Researchers at the International Digital Accountability Council (IDAC), a non-profit watchdog based in Boston, found that a trio of popular, seemingly innocent-looking apps aimed at younger users was violating Google’s data collection policies, potentially accessing users’ Android ID and AAID (Android Advertising ID) numbers. The data leakage appears to be connected to the apps being built with SDKs from Unity, Umeng, and Appodeal.

Collectively, the apps had more than 20 million downloads between them.

The three apps in question — Princess Salon, Number Coloring and Cats & Cosplay — have now been removed from the Google Play app store. Google confirmed to us that it removed the apps after IDAC brought the violations to its attention.

“We can confirm that the apps referenced in the report were removed,” said a Google spokesperson. “Whenever we find an app that violates our policies, we take action.”

The violations point to a wider concern with the three publishers’ approach to adhering to data protection policies. “The practices we observed in our research raised serious concerns about data practices within these apps,” said IDAC president Quentin Palfrey.

The incident is being highlighted at a time when a lot of attention is focused on Google and the size of its operation. Earlier this week, the US Department of Justice and 11 states sued the company, accusing it of monopolistic and anticompetitive behavior in search and search advertising.

To be clear, the app violations here are not related to search, but they underscore the scale of Google’s operation, and how even small oversights can lead to tens of millions of users being affected. They also serve as a reminder of the challenges of proactively policing individual violations on such a scale, and that those challenges can land in a particularly risky area: how minors use apps.

In the case of at least two of the publishers, Creative APPS and Libii Tech (whose apps are built around the cast of characters illustrated at the top of this story), other apps are still live. Versions of the apps also appear to still be downloadable through third-party APK sites, and there are versions on iOS, but Palfrey said IDAC had not assessed the iOS versions, so it’s not clear whether they are similarly leaking data.

The violation in this case is complex but is an example of one of the ways that users can unknowingly be tracked through apps.

Pointing to the behind-the-scenes activity and data processing that gets loaded into innocent-looking apps, IDAC highlighted three SDKs in particular used by the app developers — the Unity 3D game engine, Umeng (an Alibaba-owned analytics provider known as the “Flurry of China,” which some have also described as an adware provider) and Appodeal (another app monetization and analytics provider) — as the source of the issues.

Palfrey explained that the problem lies in how the data that the apps were able to access by way of the SDKs could be linked up with other kinds of data, such as geolocation information. “If AAID information is transmitted in tandem with a persistent identifier [such as Android ID] it’s possible for the protection measures that Google puts in place for privacy protection to be bridged,” he said.

IDAC did not specify the violations in all of the SDKs, but noted in one example that certain versions of Unity’s SDK were collecting both the user’s AAID and Android ID simultaneously, and that could have allowed developers “to bypass privacy controls and track users over time and across devices.”

IDAC describes the AAID as “the passport for aggregating all of the data about a user in one place.” It lets advertisers target ads to users based on signals for preferences that a user might have. The AAID can be reset by users. However, if an SDK also provides a link to a user’s Android ID, which is a static number, it starts to create a “bridge” to identify and track a user.

Palfrey would not get too specific on whether it could determine how much data was actually drawn as a result of the violations that it identified, but Google said that it was continuing to work on partnerships and procedures to catch similar (intentional or otherwise) bad actors.

“One example of the work we are doing here is the Families ad certification program, which we announced in 2019,” said the spokesperson. “For apps that wish to serve ads in kids and families apps, we ask them to use only ad SDKs that have self-certified compliance with kids/families policies. We also require that apps that solely target children not contain any APIs or SDKs that are not approved for use in child-directed services.”

IDAC, which was launched in April 2020 as a spinoff of the Future of Privacy Forum, has also carried out investigations into data privacy violations on fertility apps and Covid-19 trackers, and earlier this week it also published findings on data leakage from an older version of Twitter’s MoPub SDK affecting millions of users.



Leverage public data to improve content marketing outcomes


Recently I’ve seen people mention the difficulty of generating content that can garner massive attention and links. They suggest that maybe it’s better to focus on content without such potential that can earn just a few links but do it more consistently and at higher volumes.

In some cases, this can be good advice. But I’d like to argue that it is very possible to create content that can consistently generate high volumes of high-authority links. I’ve found in practice there is one truly scalable way to build high-authority links, and it’s predicated on two tactics coming together:

  1. Creating newsworthy content that’s of interest to major online publishers (newspapers, major blogs or large niche publishers).
  2. Pitching publishers in a way that breaks through the noise of their inbox so that they see your content.

How can you use new techniques to generate consistent and predictable content marketing wins?

The key is data.

Techniques for generating press with data-focused stories

It’s my strong opinion that there’s no shortcut to earning press mentions and that only truly new, newsworthy and interesting content can be successful. Hands down, the simplest way to predictably achieve this is through a data journalism approach.

One of the best ways you can create press-earning, data-focused content is by using existing data sets to tell a story.

There are tens of thousands — perhaps hundreds of thousands — of existing public datasets that anyone can leverage for telling new and impactful data-focused stories that can easily garner massive press and high levels of authoritative links.

The last five years or so have seen huge transparency initiatives from the government, NGOs and public companies making their data more available and accessible.

Additionally, FOIA requests are very commonplace, freeing even more data and making it publicly available for journalistic investigation and storytelling.

Because this data usually comes from the government or another authoritative source, pitching these stories to publishers is often easier because you don’t face the same hurdles regarding proving accuracy and authoritativeness.

Potential roadblocks

The accessibility of government data especially can vary. There are few data standards in place, and each federal and local government office has varying resources for making the data it does have easy for outside parties to consume.

The result is that each dataset often has its own issues and complexities. Some are very straightforward and available in clean and well-documented CSVs or other standard formats.

Unfortunately, others are difficult to decode, clean, validate or even download, sometimes trapped inside difficult-to-parse PDFs, fragmented reports or antiquated querying tools that spit out awkward tables.

Deeper knowledge of web scraping and programmatic data cleaning and reformatting is often required to accurately acquire and utilize many datasets.
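As a hedged illustration of this kind of cleanup, here is a minimal pandas sketch; the filename, column names and placeholder values are invented for the example, not drawn from any real dataset.

```python
import pandas as pd

# Hypothetical example: a downloaded agency report with junk header rows,
# thousands separators and placeholder strings for missing values.
df = pd.read_csv("agency_report_2019.csv", skiprows=2, thousands=",")
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
df = df.replace({"N/A": pd.NA, "--": pd.NA})
df["total_spend"] = pd.to_numeric(df["total_spend"], errors="coerce")
df.dropna(subset=["total_spend"]).to_csv("agency_report_2019_clean.csv", index=False)
```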

Tools to use

TunesKit Spotify Music Converter: Best Spotify Music Downloader


If you were a Napster user back in the 2000s, you probably loved sharing music files with your friends, uploading and downloading songs on the service. Nothing is better than owning the actual file of a song and being able to play it anywhere. But as online streaming keeps growing […]



Freelancer banking startup Lili raises $15M


It’s only been a few months since Lili announced its $10 million seed round, and it’s already raised more funding — namely, a $15 million Series A.

The startup, founded by CEO Lilac Bar David and CTO Liran Zelkha, is creating a bank account and associated products designed for freelancers, with features like early access to direct deposit payments and the ability to set aside a percentage of income for taxes.

The account (and associated Visa debit card) is free of overdraft fees or minimum balance requirements; Bar David said the company only makes money from card processing fees.

She also said that the platform has seen rapid growth this year, with transactions up 700% since the beginning of the pandemic and nearly 100,000 accounts opened since the launch in 2019.

Bar David suggested that the economic turmoil caused by COVID-19 has prompted (or forced) more skilled workers — such as programmers and digital marketers — to turn to freelancing. Meanwhile, she’s also seen “a big shift from part-time freelance to full-time freelance.”

Lili CEO Lilac Bar David

Bar David predicted that the recent growth of the freelance economy won’t simply disappear once the pandemic is over, because workers are discovering the benefits of freelancing.

“If you have a 9-to-5 job, you’re dependent on one employer,” she said. “If something happens you’re out of a job … If you’ve got a diversified customer base, you’re not dependent on just one source of income.”

In recent months, Lili has added new features like automatically generated quarterly income and expense reports, a digital debit card (which customers can use before the physical card arrives in the mail) and the ability to send and receive money via Google Pay (Lili already supported Cash App and Venmo).

Bar David said the startup decided to raise more funding to expand its engineering team and further accelerate its growth. Apparently she was preparing for a traditional Series A fundraising process (albeit one that was conducted in the middle of a pandemic), but “our current investors were so tremendously impressed by the product-market fit and the growth” that they were willing to fund almost all of the new round.

So the Series A was led by previous investor Group 11, with participation from Foundation Capital, AltaIR Capital, Primary Venture Partners and Torch Capital — along with new backer Zeev Ventures.

“As the global workforce evolves at a rapid pace, we are excited to lead another round of funding to help Lili capitalize on unprecedented demand and offer an entirely new solution to help freelancers seamlessly save time and money,” said Group 11’s Dovi Frances in a statement.



Acapela, from the founder of Dubsmash, hopes ‘asynchronous meetings’ can end Zoom fatigue


Acapela, a new startup co-founded by Dubsmash founder Roland Grenke, is breaking cover today in a bid to re-imagine online meetings for remote teams.

Hoping to put an end to video meeting fatigue, the product is described as an “asynchronous meeting platform,” which Grenke and Acapela’s other co-founder, ex-Googler Heiki Riesenkampf (who has a deep learning computer science background), believe could be the key to unlocking better and more efficient collaboration. In some ways, the product can be thought of as the antithesis of Zoom’s and Slack’s real-time, attention-hogging dynamics.

To launch, the Berlin-based and “remote friendly” company has raised €2.5 million in funding. The round is led by Visionaries Club with participation from various angel investors, including Christian Reber (founder of Pitch and Wunderlist) and Taavet Hinrikus (founder of TransferWise). I also understand Entrepreneur First is a backer and has assigned EF venture partner Benedict Evans to work on the problem. If you’ve seen the ex-Andreessen Horowitz analyst writing about a post-Zoom world lately, now you know why.

Specifically, Acapela says it will use the injection of cash to expand the core team, focusing on product, design and engineering as it continues to build out its offering.

“Our mission is to make remote teams work together more effectively by having fewer but better meetings,” Grenke tells me. “With Acapela, we aim to define a new category of team collaboration that provides more structure and personality than written messages (Slack or email) and more flexibility than video conferencing (Zoom or Google Meet)”.

Grenke believes some form of asynchronous meetings is the answer, where participants don’t have to interact in real-time but the meeting still has an agenda, goals, a deadline and — if successfully run — actionable outcomes.

“Instead of sitting through hours of video calls on a daily basis, users can connect their calendars and select meetings they would like to discuss asynchronously,” he says. “So, as an alternative to everyone being in the same call at the same time, team members contribute to conversations more flexibly over time. Like communication apps in the consumer space, Acapela allows rich media formats to be used to express your opinion with voice or video messages while integrating deeply with existing productivity tools (like GSuite, Atlassian, Asana, Trello, Notion, etc.)”.

In addition, Acapela will utilise what Grenke says are the latest machine learning techniques to help automate repetitive meeting tasks as well as to summarise the contents of a meeting and any decisions taken. If made to work, that in itself could be significant.

“Initially, we are targeting high-growth tech companies which have a high willingness to try out new tools while having an increasing need for better processes as their teams grow,” adds the Acapela founder. “In addition to that, they tend to have a technical global workforce across multiple time zones which makes synchronous communication much more costly. In the long run we see a great potential tapping into the space of SMEs and larger enterprises, since COVID has been a significant driver of the decentralization of work also in the more traditional industrial sectors. Those companies make up more than 90% of our European market and many of them have not switched to new communication tools yet”.



Common PostgreSQL Challenges and How to Overcome Them


PostgreSQL is perhaps the most popular open-source database in the world at the moment, and its surge over the past decade has been nothing short of remarkable. Its comprehensiveness and reliability grab the attention of large, established organizations. Additionally, it is a free, open-source database management system. This serves as its main selling point among […]



How Can You Make The Most Out Of Bets Online


The online sports betting industry has helped a number of people across the world earn a significant amount of money. However, this only becomes possible when you are proactive about learning different gaming strategies. When it comes to casinos, you can securely place bets from the comfort of your home. Read ahead to […]



Daily Crunch: Facebook Dating comes to Europe


Facebook’s dating feature expands after a regulatory delay, we review the new Amazon Echo and President Donald Trump has an on-the-nose Twitter password. This is your Daily Crunch for October 22, 2020.

The big story: Facebook Dating comes to Europe

Back in February, Facebook had to call off the European launch of its dating service after failing to provide the Irish Data Protection Commission with enough advance notice of the launch. Now it seems the regulator has given Facebook the go-ahead.

Facebook Dating (which launched in the U.S. last year) allows users to create a separate dating profile, identify secret crushes and go on video dates.

As for any privacy and regulatory concerns, the commission told us, “Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature … We will continue to monitor the product as it launches across the EU this week.”

The tech giants

Amazon Echo review: Well-rounded sound — This year’s redesign centers on another audio upgrade.

Facebook adds hosting, shopping features and pricing tiers to WhatsApp Business — Facebook is launching a way to shop for and pay for goods and services in WhatsApp chats, and it said it will finally start to charge companies using WhatsApp for Business.

Spotify takes on radio with its own daily morning show — The new program will combine news, pop culture, entertainment and music personalized to the listener.

Startups, funding and venture capital

Chinese live tutoring app Yuanfudao is now worth $15.5 billion — The homework tutoring app founded in 2012 has surpassed Byju’s as the most valuable edtech company in the world.

E-bike subscription service Dance closes $17.7M Series A, led by HV Holtzbrinck Ventures — The founders of SoundCloud launched their e-bike service three months ago.

Freelancer banking startup Lili raises $15M — It’s only been a few months since Lili announced its $10 million seed round, and it’s already raised more funding.

Advice and analysis from Extra Crunch

How unicorns helped venture capital get later, and bigger — Q3 2020 was a standout period for how high late-stage money stacked up compared to cash available to younger startups.

Ten Zurich-area investors on Switzerland’s 2020 startup outlook — According to official estimates, the number of new Swiss startups has skyrocketed by 700% since 1996.

Four quick bites and obituaries on Quibi (RIP 2020-2020) — What we can learn from Quibi’s amazing, instantaneous, billions-of-dollars failure.

(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

President Trump’s Twitter accessed by security expert who guessed password “maga2020!” — After logging into President Trump’s account, the researcher said he alerted Homeland Security and the password was changed.

For the theremin’s 100th anniversary, Moog unveils the gorgeous Claravox Centennial — With a walnut cabinet, brass antennas and a plethora of wonderful knobs and dials, the Claravox looks like it emerged from a prewar recording studio.

Announcing the Agenda for TC Sessions: Space 2020 — Our first-ever dedicated space event is happening on December 16 and 17.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.



Is it Safe to Use a VPN on Android?


VPNs are Safe on Android Using a VPN (a virtual private network) on an Android device is safe. However, you need to choose a trustworthy app, like Surfshark VPN for Android. Since a premium VPN for Android has the same architecture and safeguards as its desktop version, the VPN has the same benefits in maintaining […]



How to Give Back When You Have an Online Business


Do you have an online business or e-commerce store and would like to give back to your favorite charities? Helping good causes can give you a lot of satisfaction, as well as helping to enhance your brand. Whether you want to help the environment or promote children’s charities, there are some simple ways to do […]



President Trump’s Twitter accessed by security expert who guessed password ‘maga2020!’


A Dutch security researcher says he accessed President Trump’s @realDonaldTrump Twitter account last week by guessing his password: “maga2020!”.

Victor Gevers, a security researcher at the GDI Foundation and chair of the Dutch Institute for Vulnerability Disclosure, which finds and reports security vulnerabilities, told TechCrunch he guessed the president’s account password and was successful on the fifth attempt.

The account was not protected by two-factor authentication, granting Gevers access to the president’s account.

After logging in, he emailed US-CERT, a division of Homeland Security’s cyber unit Cybersecurity and Infrastructure Security Agency (CISA), to disclose the security lapse, which TechCrunch has seen. Gevers said the president’s Twitter password was changed shortly after.

A screenshot from inside Trump’s Twitter account. (Image: Victor Gevers)

It’s the second time Gevers has gained access to Trump’s Twitter account.

The first time was in 2016, when Gevers and two others extracted and cracked Trump’s password from the 2012 LinkedIn breach. His password — “yourefired,” his catchphrase from the television show “The Apprentice” — let them into his Twitter account. Gevers reported the breach to local authorities in the Netherlands, with suggestions on how Trump could improve his password security. One of the passwords he suggested at the time was “maga2020!”, he said, and he “did not expect” the password to work years later.

Dutch news outlet Vrij Nederland first reported the story.

In a statement, Twitter spokesperson Ian Plunkett said: “We’ve seen no evidence to corroborate this claim, including from the article published in the Netherlands today. We proactively implemented account security measures for a designated group of high-profile, election-related Twitter accounts in the United States, including federal branches of government.”

Twitter said last month that it would tighten the security on the accounts of political candidates and government accounts, including encouraging but not mandating the use of two-factor authentication.

Trump’s account is said to be locked down with extra protections after he became president, though Twitter has not said publicly what those protections entail. His account was untouched by hackers who broke into Twitter’s network in July in order to abuse an “admin tool” to hijack high-profile accounts and spread a cryptocurrency scam.

A spokesperson for the White House and the Trump campaign did not immediately comment, but White House deputy press secretary Judd Deere reportedly said the story is “absolutely not true,” but declined to comment on the president’s social media security. A spokesperson for CISA did not immediately confirm the report.

“It’s unbelievable that a man that can cause international incidence and crash stock markets with his Tweets has such a simple password and no two-factor authentication,” said Alan Woodward, a professor at the University of Surrey. “Bearing in mind his account was hacked in 2016 and he was saying only a couple of days ago that no one is hacked the irony is vintage 2020.”

Gevers has previously reported security incidents involving a facial recognition database used to track Uyghur Muslims and a vulnerability in Oman’s stock exchange.

Updated with Twitter comment, and corrected the name of publication which first published the news.



Sexual assault, shame and teaching kids to ask for help | Kristin Jones


Sexual assault is never the victim's fault, says advocate Kristin Jones. In this courageous talk, she tells her story of overcoming the shame that followed sexual abuse as a teenager -- and shares how parents can foster an open conversation about abuse to empower kids and encourage them to ask for help. (This talk contains mature content)

https://ift.tt/35ny5ls


3 reforms social media platforms should make in light of ‘The Social Dilemma’


“The Social Dilemma” is opening eyes and changing digital lives for Netflix bingers across the globe. The filmmakers explore social media and its effects on society, raising some crucial points about impacts on mental health, politics and the myriad ways firms leverage user data. It interweaves interviews from industry executives and developers who discuss how social sites can manipulate human psychology to drive deeper engagement and time spent within the platforms.

Despite the glaring issues present with social media platforms, people still crave digital attention, especially during a pandemic, where in-person connections are strained if not impossible.

So, how can the industry change for the better? Here are three ways social media should adapt to create happier and healthier interpersonal connections and news consumption.

Stop censoring

On most platforms, like Facebook and Instagram, the company determines some of the information presented to users. This opens the platform to manipulation by bad actors and raises questions about who exactly is dictating what information is seen and what is not. What are the motivations behind those decisions? And some of the platforms dispute their role in this process, with Mark Zuckerberg saying in 2019, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.”

Censorship concerns can be addressed with a restructured type of social platform. For example, consider a platform that does not rely on advertiser dollars. If a social platform is free for basic users but monetized by a subscription model, there is no need to use an information-gathering algorithm to determine which news and content are served to users.

This type of platform is not a ripe target for manipulation because users only see information from people they know and trust, not advertisers or random third parties. Manipulation on major social channels happens frequently when people create zombie accounts to flood content with fake “likes” and “views” to affect the viewed content. It’s commonly exposed as a tactic for election meddling, where agents use social media to promote false statements. This type of action is a fundamental flaw of social algorithms that use AI to make decisions about when and what to censor as well as what it should promote.

Don’t treat users like products

The issues raised by “The Social Dilemma” should reinforce the need for social platforms to self-regulate their content and user dynamics and operate ethically. They should review their most manipulative technologies that cause isolation, depression and other issues and instead find ways to promote community, progressive action and other positive attributes.

A major change required to bring this about is to eliminate or reduce in-platform advertising. An ad-free model means the platform does not need to aggressively push unsolicited content from unsolicited sources. When ads are the main driver for a platform, then the social company has a vested interest in using every psychological and algorithm-based trick to keep the user on the platform. It’s a numbers game that puts profit over users.

More people multiplied by more time on the site equals ad exposure and ad engagement and that means revenue. An ad-free model frees a platform from trying to elicit emotional responses based on a user’s past actions, all to keep them trapped on the site, perhaps to an addictive degree.

Encourage connections without clickbait

A common form of clickbait is found on the typical social search page. A user clicks on an image or preview video that suggests a certain type of content, but upon clicking they are brought to unrelated content. It’s a technique that can be used to spread misinformation, which is especially dangerous for viewers who rely on social platforms for their news consumption, instead of traditional outlets. According to the Pew Research Center, 55% of adults get their news from social media “often” or “sometimes.” This causes a significant problem when clickbait articles make it easier to offer distorted “fake news” stories.

Unfortunately, when users engage with clickbait content, they are effectively “voting” for that information. That seemingly innocuous action creates a financial incentive for others to create and disseminate further clickbait. Social media platforms should aggressively ban or limit clickbait. Management at Facebook and other firms often counter with a “free speech” argument when it comes to stopping clickbait. However, the intent here is not to act as censors stopping controversial topics but to protect users from false content. It’s about cultivating trust and information sharing, which is much easier to accomplish when post content is backed by facts.

“The Social Dilemma” is rightfully an important film that encourages a vital dialogue about the role social media and social platforms play in everyday life. The industry needs to change to create more engaged and genuine spaces for people to connect without preying on human psychology.

A tall order, but one that should benefit both users and platforms in the long term. Social media still creates important digital connections and functions as a catalyst for positive change and discussion. It’s time for platforms to take note and take responsibility for these needed changes, and opportunities will arise for smaller, emerging platforms taking a different, less-manipulative approach.

