03 May 2019

Daily Crunch: Facebook bans far-right figures


The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Facebook bans a fresh batch of mostly far-right figures

The banned figures include Milo Yiannopoulos, Paul Joseph Watson, Laura Loomer, Paul Nehlen and Louis Farrakhan — plus, Facebook doubled down on banning Alex Jones.

The company said this is part of its policy to ban “individuals or organizations who promote or engage in violence and hate, regardless of ideology.”

2. Microsoft makes a push to simplify machine learning

Ahead of its Build conference, Microsoft today released a slew of new machine learning products and tweaks to some of its existing services. These range from no-code tools to hosted notebooks, with a number of new APIs and other services in-between.

3. YouTube confirms plans to make Originals available for free

Since last fall, YouTube has acknowledged that it’s moving toward an ad-supported model for its Originals. Last night, its chief business officer said all original programming moving forward will have a free window.

4. Why you don’t want Tumblr sold to exploitative Pornhub

The Wall Street Journal reports that TechCrunch parent company Verizon is considering selling Tumblr, and Pornhub VP Corey Price told BuzzFeed, “We’re extremely interested in acquiring the platform.”

5. Spotify spotted testing ‘Your Daily Drive,’ a personalized playlist that includes podcasts

This is the first Spotify playlist to mix music and podcasts, customized to users’ tastes.

6. Sonic the Hedgehog director says character is getting makeover after backlash

After the release of the film’s trailer, director Jeff Fowler tweeted, “The message is loud and clear… you aren’t happy with the design & you want changes. It’s going to happen.”

7. 3 key secrets to building extraordinary teams

For one thing, hire people before skills, because scrappiness and cultural fit matter more than intelligence and experience. (Extra Crunch membership required.)


Read Full Article

Announcing Google-Landmarks-v2: An Improved Dataset for Landmark Recognition & Retrieval




Last year we released Google-Landmarks, the largest world-wide landmark recognition dataset available at that time. In order to foster advancements in research on instance-level recognition (recognizing specific instances of objects, e.g. distinguishing Niagara Falls from just any waterfall) and image retrieval (matching a specific object in an input image to all other instances of that object in a catalog of reference images), we also hosted two Kaggle challenges, Landmark Recognition 2018 and Landmark Retrieval 2018, in which more than 500 teams of researchers and machine learning (ML) enthusiasts participated. However, both instance recognition and image retrieval methods require ever larger datasets in both the number of images and the variety of landmarks in order to train better and more robust systems.
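For readers less familiar with the retrieval side: a common baseline is to embed every image into a vector and rank the reference catalog by similarity to the query's embedding. Here is a minimal, illustrative sketch of that idea (our own toy example with random vectors, not Google's pipeline):

```python
import numpy as np

def retrieve(query_emb, index_embs, top_k=5):
    """Rank reference images by cosine similarity to a query embedding."""
    # Normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    idx = index_embs / np.linalg.norm(index_embs, axis=1, keepdims=True)
    scores = idx @ q
    # Highest-scoring reference images first.
    return np.argsort(-scores)[:top_k]

# Toy example: 4 reference embeddings, one near-duplicate query.
rng = np.random.default_rng(0)
refs = rng.normal(size=(4, 8))
query = refs[2] + 0.01 * rng.normal(size=8)  # slightly perturbed copy of image 2
print(retrieve(query, refs, top_k=2))
```

In a real system the embeddings would come from a trained model and the ranking would use an approximate nearest-neighbor index rather than a brute-force scan, but the matching logic is the same.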

In support of this goal, this year we are releasing Google-Landmarks-v2, a completely new, even larger landmark recognition dataset that includes over 5 million images (2x that of the first release) of more than 200 thousand different landmarks (an increase of 7x). Due to the difference in scale, this dataset is much more diverse and creates even greater challenges for state-of-the-art instance recognition approaches. Based on this new dataset, we are also announcing two new Kaggle challenges—Landmark Recognition 2019 and Landmark Retrieval 2019—and releasing the source code and model for Detect-to-Retrieve, a novel image representation suitable for retrieval of specific object instances.
Heatmap of the landmark locations in Google-Landmarks-v2, which demonstrates the increase in the scale of the dataset and the improved geographic coverage compared to last year’s dataset.
Creating the Dataset
A particular problem in preparing Google-Landmarks-v2 was the generation of instance labels for the landmarks represented, since it is virtually impossible for annotators to recognize all of the hundreds of thousands of landmarks that could potentially be present in a given photo. Our solution to this problem was to crowdsource the landmark labeling through the efforts of a world-spanning community of hobby photographers, each familiar with the landmarks in their region.
Selection of images from Google-Landmarks-v2. Landmarks include (left to right, top to bottom) Neuschwanstein Castle, Golden Gate Bridge, Kiyomizu-dera, Burj Khalifa, Great Sphinx of Giza, and Machu Picchu.
Another issue for research datasets is the requirement that images be shared freely and stored indefinitely, so that the dataset can be used to track the progress of research over a long period of time. As such, we sourced the Google-Landmarks-v2 images through Wikimedia Commons, capturing both world-famous and lesser-known, local landmarks while ensuring broad geographic coverage (thanks in part to Wiki Loves Monuments) and photos sourced from public institutions, including historical photographs that are valuable to test instance recognition over time.

The Kaggle Challenges
The goal of the Landmark Recognition 2019 challenge is to recognize a landmark presented in a query image, while the goal of Landmark Retrieval 2019 is to find all images showing that landmark. The challenges include cash prizes totaling $50,000 and the winning teams will be invited to present their methods at the Second Landmark Recognition Workshop at CVPR 2019.

Open Sourcing our Model
To foster research reproducibility and help push the field of instance recognition forward, we are also releasing open-source code for our new technique, called Detect-to-Retrieve (which will be presented as a paper in CVPR 2019). This new method leverages bounding boxes from an object detection model to give extra weight to image regions containing the class of interest, which significantly improves accuracy. The model we are releasing is trained on a subset of 86k images from the original Google-Landmarks dataset that were annotated with landmark bounding boxes. We are making these annotations available along with the original dataset here.
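To make the region-weighting intuition concrete, here is a toy sketch: local features whose positions fall inside a detected landmark bounding box get up-weighted before being pooled into a global descriptor. This is our own simplification for illustration, not the released Detect-to-Retrieve code:

```python
import numpy as np

def weighted_descriptor(features, centers, boxes, boost=2.0):
    """Pool local features into one global descriptor, up-weighting
    features whose centers fall inside a detected landmark box.
    features: (N, D) local descriptors; centers: (N, 2) x,y positions;
    boxes: list of (x1, y1, x2, y2) detections."""
    weights = np.ones(len(features))
    for x1, y1, x2, y2 in boxes:
        inside = ((centers[:, 0] >= x1) & (centers[:, 0] <= x2) &
                  (centers[:, 1] >= y1) & (centers[:, 1] <= y2))
        # Features inside the detected region count more toward the pool.
        weights[inside] *= boost
    desc = (weights[:, None] * features).sum(axis=0)
    return desc / np.linalg.norm(desc)  # L2-normalize for cosine matching
```

The published method uses a learned detector and a more sophisticated aggregation, but the core design choice is the same: let the detector tell the descriptor which pixels matter.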

We invite researchers and ML enthusiasts to participate in the Landmark Recognition 2019 and Landmark Retrieval 2019 Kaggle challenges and to join the Second Landmark Recognition Workshop at CVPR 2019. We hope that this dataset will help advance the state-of-the-art in instance recognition and image retrieval. The data is being made available via the Common Visual Data Foundation.

Acknowledgments
The core contributors to this project are Andre Araujo, Bingyi Cao, Jack Sim and Tobias Weyand. We would like to thank our team members Daniel Kim, Emily Manoogian, Nicole Maffeo, and Hartwig Adam for their kind help. Thanks also to Marvin Teichmann and Menglong Zhu for their contribution to collecting the landmark bounding boxes and developing the Detect-to-Retrieve technique. We would like to thank Will Cukierski and Maggie Demkin for their help organizing the Kaggle challenge, Elan Hourticolon-Retzler, Yuan Gao, Qin Guo, Gang Huang, Yan Wang, Zhicheng Zheng for their help with data collection, Tsung-Yi Lin for his support with CVDF hosting, as well as our CVPR workshop co-organizers Bohyung Han, Shih-Fu Chang, Ondrej Chum, Torsten Sattler, Giorgos Tolias, and Xu Zhang. We have great appreciation for the Wikimedia Commons Community and their volunteer contributions to an invaluable photographic archive of the world’s cultural heritage. And finally, we’d like to thank the Common Visual Data Foundation for hosting the dataset.

When it comes to elections, Facebook moves slow, may still break things


This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among those invited to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports, which Facebook will be hoping contain its “five pillars of countering disinformation” talking points, we’ve compiled a run down of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform — even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up on total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add; its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step, Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making life hard for civil society groups and public interest researchers to study the flow of political messaging on its platform to try to quantify democratic impacts, and offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label, for example.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per ad breakdown of impressions; genders, ages, and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…
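To illustrate the sort of aggregate count the Ad Library itself doesn't surface, here is a sketch of the tally a researcher would have to assemble by hand; the record fields below are hypothetical, not Facebook's actual schema:

```python
from collections import defaultdict

# Hypothetical records, as a researcher might compile them manually
# from the Ad Library UI (illustrative field names only).
ads = [
    {"page": "Vox", "status": "inactive", "impressions": 12000},
    {"page": "Vox", "status": "inactive", "impressions": 4000},
    {"page": "Other", "status": "active", "impressions": 900},
]

# Total impressions racked up by ads that were later pulled, per page.
pulled = defaultdict(int)
for ad in ads:
    if ad["status"] == "inactive":
        pulled[ad["page"]] += ad["impressions"]

print(dict(pulled))
```

A few lines of code, in other words — if the underlying data were exported in bulk rather than locked behind per-ad click-throughs.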

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had still been able to rack up thousands — and even tens of thousands — of impressions in the time they were still active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where ads are being delivered if they relate to the European Parliamentary elections, as a step to try and prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround is the question of what is a political ad? And what is an issue ad?

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how is Facebook defining political and issue ads across Europe?

While political ads might seem fairly easy to categorize — assuming they’re attached to registered political parties and candidates — issues are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with Eurobarometer, YouGov and other third parties to develop an initial list of key issues — examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics — that it will “refine… over time.”

Again specifics on when and how that will be refined are not clear. Yet ads that Facebook does not deem political/issue ads will slip right under its radar. They won’t be included in the Ad Library; they won’t be searchable; but they will be able to influence Facebook users under the perfect cover of its commercial ad platform — as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules — i.e. that require those campaigning via Facebook ads to have a registered office in the country where the ad is running — run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide — leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed it “social media’s dark web” — claiming the app had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions — which is of course just a tiny, tiny fraction of the stuff being shared in WhatsApp groups and chats. Which makes this app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, with some 384 million MAUs in Europe. That’s more users than in the US (243M MAUs), though Europe is only Facebook’s second-biggest market by revenue, after the US. Last quarter, it pulled in $3.65BN in sales for Facebook (versus $7.3BN for the US) out of $15BN overall.

Apart from any kind of moral or legal pressure that Facebook might have for running a more responsible platform when it comes to supporting democratic processes, these numbers underscore the business imperative that it has to get this sorted out in Europe in a better way.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.



Google’s budget Pixel 3a XL pops up at an Ohio Best Buy


The Pixel 3a is arriving next week at Google I/O. That statement felt like all but a given before, and now that the handset is showing up at Ohio-area Best Buys, well, you can pretty much bank on it at this point.

Google’s budget take on its Pixel flagship is expected to take the stage during the May 7 keynote at Mountain View. Meantime, we’ve got another pretty good look at the thing courtesy of an Android Police reader who spotted boxes at a Springfield store.

The shots confirm Google’s strict adherence to silly color naming conventions, with the appearance of “Purple-ish” alongside “Just Black.” The former is a new color and looks to be about as subtle as you can get with a purple piece of electronics. Other side-of-the-box specs confirm what we’ve seen so far, including a 6-inch display on the XL version, coupled with 64GB of storage.

The handsets arrive just six or so months after the release of the Pixel 3. The company addressed the flagship device’s poor sales on this week’s earnings call, noting, among other things, that it had some hardware planned for I/O, marking a break from past years. It will be interesting to see how Google positions the product, as it continues to make software, AI and ML the focus of upgrades over hardware specs.

More info on what to expect next week in Mountain View can be found here.




6 Unwritten Twitter Rules You’re Probably Breaking



Twitter is a fast-paced site with its own set of rules. The problem is some of these Twitter rules aren’t in the official documentation. Think of them like an unwritten code of conduct implemented by the users.

Like most social codes, everyone in the “in-group” is expected to observe them. But do you know what these rules are, and if so, are you following them? Here are the unwritten Twitter rules you’re probably breaking.

1. Snitch Tagging

If you don’t know how to use Twitter yet, it’s worth learning the basics first. If you do know how to use it, you’ll know the social media platform lives on drama. In fact, Twitter thrives on it.

This drama takes on the form of vaguely worded “subtweets”. Subtweets are when someone talks about a subject in enough detail for their immediate in-crowd to get it, but they avoid keywords so the tweet isn’t searchable.

This avoidance of keywords is due to the public nature of the platform. Social media mobs can pile on pretty quick, and it’s a constant fear in the back of most users’ minds.

However, subtweeting isn’t always a bad thing. Sometimes—when you see it—you shouldn’t let the subjects of those subtweets know.

Let’s say you see someone subtweeting about your favorite author. This person is talking about how much they hate their book, but they haven’t tagged the author, haven’t used their name, or named the book itself. It’s practically unsearchable unless you know the specific details of the story.

Even still, you get mad at this subtweet. You might reply to it and @ the author’s handle to say “Look at this! How dare they!”

It might even feel like you’re doing a good deed. Unfortunately, this act of @’ing the author over a subtweet is called “snitch tagging”—you’re exposing a subtweeter in order to get them in trouble.

Why It’s Bad

In this particular instance, snitch tagging is bad for both parties.

The author may not want to see someone trashing their book, and the person who was subtweeting may have done so for safety reasons. The author’s fans could be very aggressive in protecting the book they love. If they saw this person talking about their beloved book openly, they may have targeted them.

In general, don’t snitch tag on a subtweet unless there’s a good reason to do so, like reporting a credible threat.

Otherwise you might end up getting blocked by both parties for causing unnecessary drama.

2. Jumping Into Someone’s DMs

People are picky about DMs—or “direct messages”—on Twitter. Not everyone is a fan, especially since Twitter is predicated on openness.

If you’re not mutuals with the person you’re DM’ing, they have no reason to trust you. “Mutuals” means that you and the person you’re DM’ing follow each other.

If they don’t follow you, and you message them privately, it can be seen as an invasion of personal space. This is especially true if neither of you follow each other, or if you haven’t chatted publicly before.

To avoid this misconception, just make sure you’re respectful of the other person’s personal space on Twitter—the same as you would in real life. Chat publicly, and when you do DM, don’t spam them with multiple messages until they reply.

Remember, always look for social cues that tell you when and where you need to back off. Sometimes a conversation just isn’t meant to be, and you shouldn’t harbor a grudge over it.

3. Swapping a Follow for a Follow

Maybe you’re one of those people who came to Twitter to try to gain a big following. When you first started out you thought the best way to do this was through “follow for follow”.

Follow for follow is when you follow potential readers in quick succession in the hopes they follow you back. It’s done with the goal of boosting a person’s visibility.

There’s nothing wrong with following potential readers, in theory. They might be really cool people and think you’re cool in turn. However, the trick to getting long-term followers is engagement, not raw numbers.

If you follow and unfollow these people when they don’t follow you back, they’ll quickly notice what you’re all about and be rightfully annoyed. No one likes to be thought of as an accessory to someone else’s popularity.

Additionally, there’s a general perception on Twitter that if your following count closely matches your follower count, you’re desperate for eyeballs. Which leads us to our next point.

4. Maintaining an Even Following vs. Follower Ratio


Public perception on Twitter is a huge thing, not unlike Instagram. Both platforms are open and revolve around maintaining an online persona.

Unless a person has set their profile to private, you can read all their tweets. You can also see the number of people that are following them and how many people they follow in turn.

There’s no hard-and-fast rule on how many people you should follow versus how many people follow you. However, when those numbers are almost identical—especially when you’re following a ridiculous number of people—other users will start to wonder what you’re all about.

Additionally, this high following count might create a misconception that you’re a bot who is automatically searching for new accounts to latch onto.

To avoid this, it’s best to curate your following list to keep those numbers realistic. Follow the accounts that bring you genuine fun and engagement.

Remember, your fellow Twitter users are people, not numbers.

5. Using Too Many Hashtags

What if you’re using Twitter to draw attention to a product or service? What if you thought the best way to do this was through hashtags, and you used a bunch?

While one or two targeted hashtags are a good idea and generally work well, too many of them tend to make people’s eyes glaze over.

Once again, public perception plays into this. By casting your net as wide as possible, you signal that you’re desperate for views. On top of that, if your hashtags are too general (the color “red,” for example) no one will search for them.

In both cases, people will roll their eyes and decide that you don’t know what you’re doing. Your shot at putting your best foot forward could be ruined forever.

6. Not Untagging the Original Tweeter

Sometimes you’ll see someone retweet something, and even if they’re not the original poster you want to reply to the tweet. Let’s say you do.

When the original poster replies back to you, you respond, and the two of you get into a long, involved discussion over the content. While this is going on, the person who retweeted the tweet doesn’t contribute.

Unfortunately, that doesn’t mean they’re not part of the conversation.

When you respond to something that a person has retweeted, both their handle and the handle of the original poster are included in your reply. A back-and-forth convo can quickly clog a person’s notifications, and if they’re not participating in the conversation the notifications can be unwanted.

Forgetting to “untag” a person is seen as bad form.

To fix this, simply click on the Replying to link above your new tweet to show all the names that are included in that reply. Uncheck those names from the people who are not involved in the conversation. It will stop them from being notified constantly.

Try Not to Break These Twitter Rules

Now that we’ve gone over Twitter’s social do’s and don’ts, you can hopefully avoid making most of these mistakes. But if you do break one of the unwritten Twitter rules, you can be sure another Twitter user will let you know.

Are you looking for new communities to get involved with? Then check out our list of thriving Twitter communities for geeks.

Read the full article: 6 Unwritten Twitter Rules You’re Probably Breaking


Read Full Article

How to Add Top Features From Other Text Editors to Vim



If you’re like many people, you know Vim as that editor you open to tweak a config file then can’t manage to exit. On the other hand, if you frequently use Vim, you know how powerful its modal editing features are. If you run Linux or any other Unix flavor, Vim is worth learning.

That said, Vim shows its age pretty easily. By default, it lacks many of the features we’ve come to rely on in modern text editors. But install a few packages, and Vim can hold its own against Visual Studio Code, Sublime Text, and more.

Plugin Management: Vim-Plug

Installing plugins in Vim-Plug

One key feature in modern text editors is the ability to extend them with plugins. While Vim added native package management in version 8.0, many find it cumbersome compared to third-party package managers. One of the most popular package managers is Vim-Plug.

Before you can start using Vim-Plug, you’ll need to install it. On a Unix system like Linux or macOS, run the following in a terminal to download and install vim-plug.

curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
 https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

If you’re using Vim in Windows, you can install Vim-Plug by pasting the following into PowerShell.

md ~\vimfiles\autoload
$uri = 'https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'
(New-Object Net.WebClient).DownloadFile(
 $uri,
 $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath(
 "~\vimfiles\autoload\plug.vim"
 )
)

Now, you’ll be able to install plugins by adding them to your ~/.vimrc file. You’ll need to add two new lines to the file:

call plug#begin('~/.vim/plugged')
call plug#end()

To install a plugin, add Plug followed by the plugin’s GitHub path (the user/repository part of its URL) in single quotes. For example, to install the Solarized color scheme, your config file would contain the following:

call plug#begin('~/.vim/plugged')

Plug 'altercation/vim-colors-solarized'

call plug#end()
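Declaring a plugin in your config doesn’t download it. After saving the file, reload your config and run vim-plug’s install command (standard vim-plug usage):

```vim
" Inside Vim, after saving your ~/.vimrc:
" Reload the config so Vim picks up the new Plug lines
:source ~/.vimrc

" Fetch and install every plugin declared between plug#begin and plug#end
:PlugInstall
```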

For more information on how to install the package manager, see the Vim-Plug GitHub page.

Error Checking: Syntastic

Syntastic for Vim

Another feature many have come to rely on is your editor of choice telling you when the code you’ve written is invalid. This is often known as “linting.” It won’t keep you from writing code that won’t run, but it will catch basic syntax errors you may not have noticed.

As the name hints at, Syntastic is a syntax checking plugin for Vim. It doesn’t actually do much by itself for many languages. Instead, you’ll need to install a linter or syntax checker for the language or languages of your choice. Syntastic will then integrate the checker into Vim, checking your code every time you save the file.
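Once a checker is installed, Syntastic works with little setup, but its README suggests a starter configuration for your ~/.vimrc. A sketch of those recommended settings:

```vim
" Show Syntastic's status flag in the statusline
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*

" Populate and open the error list automatically, check when a file
" is opened, and skip the check when quitting with :wq
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
```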

Syntastic supports more languages than we can list here, so it’s highly likely that the language you’re using is supported. For instructions on how to configure the plugin, see the Syntastic GitHub page.

Code Completion: YouCompleteMe

Autocompletion in YouCompleteMe

Syntax checking is nice, but if you come from Visual Studio Code or a similarly feature-packed editor, you’re probably missing something else: code completion, also known as IntelliSense in the Visual Studio world. If you’re using Vim for more than editing config files, it will make your life a lot easier.

Code completion makes writing code easier by popping up suggestions as you type. This is especially handy for deeply nested methods or long names, since you don’t have to remember or type the entire string.

YouCompleteMe is a code completion engine for Vim, and it’s one of the more powerful plugins you can install. It’s also somewhat trickier to install than other plugins. You can install the basics with a package manager like Vim-Plug, but you’ll need to compile it.

The easiest way to compile the plugin is to use the included install.py script. To do this on macOS or Linux, enter the following:

cd ~/.vim/plugged/YouCompleteMe
./install.py --clang-completer

Note that on Linux you’ll have to install development tools, CMake, and the required headers before you can compile YouCompleteMe.
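If you manage plugins with Vim-Plug, a post-update hook can run the build step for you automatically. A sketch, assuming the plugin’s repository path at the time of writing:

```vim
" vim-plug's 'do' hook runs the given command after the plugin
" is installed or updated, so the compile step happens automatically
Plug 'Valloric/YouCompleteMe', { 'do': './install.py --clang-completer' }
```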

For instructions on installing and compiling YouCompleteMe on other systems or for more information, see the YouCompleteMe GitHub page.

Fuzzy Search: CtrlP

CtrlP for Vim

If you’re working on a project with many different files, Vim’s method of opening files might frustrate you. The :e command has basic autocomplete, but you’ll still need to know where your file is located. You could drop to the command line to find it, but wouldn’t it be better if you could do this right from Vim?

Fortunately, you can. The CtrlP plugin can search files, but it can also do much more. The CtrlP GitHub page describes it as a “full path fuzzy file, buffer, mru, tag, … finder for Vim.” The plugin is similar to Sublime Text’s “Goto Anything” command which, surprise surprise, has the keyboard shortcut of Ctrl + P or Command + P.

This feature or an equivalent can be found in most modern text editors, and if you find yourself missing it, it’s nice to have in Vim.
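Installation follows the same pattern as the other plugins here. A minimal sketch using Vim-Plug (the ctrlpvim/ctrlp.vim repository is the actively maintained fork of the original plugin):

```vim
Plug 'ctrlpvim/ctrlp.vim'

" Optional tweaks in ~/.vimrc:
let g:ctrlp_map = '<c-p>'               " invoke the finder with Ctrl+P
let g:ctrlp_working_path_mode = 'ra'    " search from the nearest project root
```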

File Browsing: NERDTree

NERDTree running in Vim

You might prefer a more traditional type of file browsing. If you miss the left-hand panel display of files found in many editors, you’ll be glad to know it’s available in Vim. This is thanks to the NERDTree plugin.

Unlike the left menu in Sublime Text, Visual Studio Code, and others, NERDTree is a full file system explorer. Instead of displaying just your project directory, you can navigate anywhere on your computer. If you’re working with files across multiple projects, this can be a very handy feature to have.

To open NERDTree inside Vim, just use the :NERDTree command. If you’d rather bind it to a keyboard shortcut, you can do so with a ~/.vimrc mapping like the following:

map <C-n> :NERDTreeToggle<CR>

This would let you simply hit Ctrl + N to open and close the NERDTree panel.

Git Integration: fugitive.vim

Add Fugitive to Vim

Git integration has become a must-have feature in modern text editors, so it’s good to know that it’s available in Vim too. The project GitHub page describes fugitive.vim as “a Git wrapper so awesome, it should be illegal.”

Running :Gstatus will bring up something similar to what you’d see with the git status command. If you’ve finished your work on a file and are ready to commit it, run :Gcommit %. This will let you edit the commit message inside the currently running Vim window.

There are too many commands to list here, plus you can run any standard Git command by running :Git. For more information, including screencasts, see the fugitive.vim GitHub page.
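A quick sketch of installing it with Vim-Plug, plus a few of the commands mentioned above:

```vim
Plug 'tpope/vim-fugitive'

" Once installed:
" :Gstatus     shows an interactive version of git status
" :Gcommit %   commits the current file
" :Git <args>  passes any other command straight through to Git
```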

Are You Looking for Even More Vim Tips?

The above tips will help to modernize Vim, but they’re far from the only things you can do to customize the editor to your liking. As you may have already guessed from reading this article, Vim is an extremely tweakable editor.

If you’re ready for more, take a look at our list of Vim customizations to make it even better.

Read the full article: How to Add Top Features From Other Text Editors to Vim



10 Tips to Keep Your Online Bank Account Secure

The 6 Best Digital Photo Frames

Why you don’t want Tumblr sold to exploitative Pornhub


Tumblr has been squandered ever since it was bought for $1.1 billion in 2013 by Yahoo, now part of Verizon Media Group. Without proper strategy or talent, the blogging tool and early meme-sharing network fell into decline while Medium and Instagram soared. Yahoo wrote down Tumblr’s value by $230 million in 2016. Then last year, Verizon evicted Tumblr’s huge and loyal base of porn bloggers, leaving no viable platform for independent adult content creators and curators.

Now the Wall Street Journal reports that TechCrunch parent company Verizon is considering selling Tumblr.

Many immediately hoped it’d change hands to an owner who’d embrace pornography, such as social media darling Pornhub. BuzzFeed quickly reported that Pornhub VP Corey Price told it “We’re extremely interested in acquiring the platform and are very much looking forward to one day restoring it to its former glory with NSFW content.”

But given Pornhub parent company MindGeek’s record of exploiting adult performers, that could be disastrous for the world of kink.

Outside of Pornhub, MindGeek owns many of the top porn streaming sites, including YouPorn, RedTube, and GayTube. Widespread piracy of porn films on those sites has made it tough for performers to earn a living. Many smaller studios and performers don’t have the legal or financial resources to file constant copyright infringement takedown notices, and MindGeek’s sites have been accused of allowing re-uploads of videos just days after taking them down.

The truly insidious part is that MindGeek has also bought up a bunch of the top porn production studios including Brazzers, Babes.com, and Digital Playground. MindGeek has been accused of allowing those studios’ films to be pirated by its own streaming sites. That lets MindGeek earn and keep streaming ad revenue without giving performers a proper cut.

The result has been a massive decline in the wages of porn performers and the number of films being made. This in turn pushes performers into rougher and more extreme porn genres they’re not comfortable with, or into other sex work like prostitution that can be dangerous. We reached out to Verizon Media Group, which told us “we don’t comment on rumors”, and we’re awaiting comment on the piracy issues from MindGeek.

If Pornhub and MindGeek succeed in acquiring Tumblr to strengthen their near monopoly, they could end up exploiting porn bloggers and the performers they post about too. You could imagine the photos and GIFs in diverse porn genres that populated Tumblr getting scraped and shared across MindGeek’s network of sites beyond the bloggers’ or performers’ control. Or Tumblr’s porn blogs could be used to funnel traffic towards MindGeek’s crooked streaming sites, exacerbating the piracy problem. A more optimistic view would be that Pornhub’s newer features that let performers set up their own paywalls could help Tumblr curators earn money for themselves…and MindGeek. If Pornhub managed to turn Tumblr around, it would teach a stern lesson to platforms that were quick to ban adult content.

Since many of the puritanical US government’s elected officials likely see porn performers as godless heathens undeserving of protection, they’re unlikely to try to safeguard the profession with anti-trust or fair-payout regulation. The SESTA-FOSTA law that went into effect last year, intended to stop sex trafficking, ended up pushing sites like Tumblr, Facebook, and Patreon towards tougher crackdowns on porn, nudity, and even innocent discussions about sex within support communities for LGBTQ people and other underprivileged minorities.

Unfortunately, MindGeek’s massive footprint means it might be willing to bid the highest price for Tumblr. If Verizon does sell Tumblr, it should seek a buyer with an upstanding record for how it treats creators. But Verizon could also modernize Tumblr to emphasize what’s differentiated about it in today’s tech landscape versus when it was founded in 2007. Obviously, it could reopen to porn. But there are also family friendly opportunities.

Tumblr was one of the first big meme-sharing communities, even spawning its own format of screenshots of progressively crazier replies to a short text post. Yet in 2019, the top meme networks like Instagram, Reddit, and Imgur aren’t actually built for distributing massive ‘dumps’ of memes. They don’t track which memes you’ve already seen to prevent showing re-runs, or understand how remixes of an original meme all relate and should be linked. Tumblr could build meme-specific features that give users more curational power than Reddit and Imgur, but more freedom of expression under less pressure than Instagram.

Tumblr could also be repurposed into a “your Internet homepage” platform. Most social networks are so desperate to keep users on their apps that they restrict or deemphasize the ability to promote your other web presences. They also often focus on a narrow set of content types, like photos and videos on Instagram. This leaves users who don’t have their own dedicated websites without a central hub where they can freely express their identity and link to their profiles elsewhere. This is a huge opportunity for Tumblr, which has already established itself as an open-ended self-expression platform open to a variety of content formats.

AOL, which was combined with Yahoo to form the Verizon Media Group, previously owned a web profile platform called About.me, but sold it back to its creator Tony Conrad in 2013. Tumblr could assume much of About.me’s functionality as a directory of someone’s presences on other apps, and add that to its blogging platform. Instead of being locked into Instagram and Pinterest’s grids and standardized designs, Tumblr could let people create a homepage collage representing their prismatic identities.

Tumblr’s already been waning in popularity for years, so Verizon might not have a lot to lose by giving Tumblr a year to execute on this strategy before selling it for surely much less than it bought it for in 2013. Tumblr’s remaining users deserve better than the platform fading into nothing or being sold to the unscrupulous.

If any pornography industry professionals want to weigh in, please contact this article’s author Josh Constine via phone/text or Signal encrypted messenger at (585)750-5674 or joshc ‘at’ techcrunch dot com.



Netflix’s High-Quality Audio Makes Streams Sound Better


The next time you watch something on Netflix you may notice it sounds better. This is thanks to Netflix’s new “high-quality audio”, which is designed to bring studio-quality sound to Netflix streams, appeasing audiophiles everywhere.

Netflix Introduces High-Quality Audio

On The Netflix Blog, the company explains how high-quality audio was born. While reviewing Stranger Things 2 in a living room environment, Netflix realized that the sound wasn’t quite up to par. The engineers fixed it by delivering the sound at a higher bitrate.

The problem is that higher bitrates require more bandwidth. So rather than trying to deliver the master recordings, which are 24-bit 48 kHz with a bitrate of around 1 Mbps per channel, Netflix figured out the optimum bitrates to balance quality against bandwidth requirements.

Netflix’s high-quality audio isn’t lossless, but it is “perceptually transparent”, which makes it “indistinguishable from the original source.” Beyond that threshold, delivering audio at a higher bitrate would “take up more bandwidth without bringing any additional value”.

In terms of the numbers, the high-quality audio bitrate you’ll receive ranges from 192 kbps to 640 kbps (for 5.1), and from 448 kbps to 768 kbps (for Dolby Atmos). Netflix considers even the lowest bitrate of 192 kbps to represent “good audio”.

High Quality Audio Adapts to Your Network

As well as delivering better quality soundscapes, high-quality audio is adaptive. Until now, the quality of the audio you hear on Netflix has been set from when you start streaming. However, from now on, the sound will adapt to your network conditions.

This means that if you start streaming while you’re suffering from low internet speeds you won’t be stuck with that poor audio for the duration of the stream. Instead, if and when your speeds improve, the audio will improve with it, just as the video already does.

The overall result will be better potential sound quality that adapts to your network conditions. And if you want to delve deeper into the technology behind Netflix’s new high-quality audio and how adaptive streaming works, check out The Netflix Tech Blog.

Should You Pay More for Netflix?

While Netflix has annoyed users by upping the price regularly, high-quality audio makes it clear the company is investing heavily behind the scenes. And that’s on top of all the original content. This is why we think you should be happy to pay more for Netflix.

Image Credit: Lauri Rantala/Flickr

Read the full article: Netflix’s High-Quality Audio Makes Streams Sound Better



17 Simple HTML Code Examples You Can Learn in 10 Minutes



Even though modern websites are generally built with user-friendly interfaces, it’s useful to know some basic HTML. If you know the following 17 tags (and a few extras), you’ll be able to create a basic webpage from scratch or tweak the code created by an app like WordPress.

We’ve provided HTML code examples with output for most of the tags. If you want to see them in action, download the sample HTML file at the end of the article. You can play with it in a text editor and load it up in a browser to see what your changes do.

1. <!DOCTYPE html>

You’ll need this tag at the beginning of every HTML document you create. It ensures that a browser knows that it’s reading HTML, and that it expects HTML5, the latest version.

Even though this isn’t actually an HTML tag, it’s still a good one to know.

2. <html>

This is another tag that tells a browser that it’s reading HTML. The <html> tag goes straight after the DOCTYPE tag, and you close it with a </html> tag right at the end of your file. Everything else in your document goes between these tags.

3. <head>

The <head> tag starts the header section of your file. The stuff that goes in here doesn’t appear on your webpage. Instead, it contains metadata for search engines, and info for your browser.

For basic pages, the <head> tag will contain your title, and that’s about it. But there are a few other things that you can include, which we’ll go over in a moment.

4. <title>

html title tag

This tag sets the title of your page. All you need to do is put your title in the tag and close it, like this (I’ve included the header tags, as well):

<head>
<title>My Website</title>
</head>

That’s the name that will be displayed as the tab title when it’s opened in a browser.

5. <meta>

Like the title tag, metadata is put in the header area of your page. Metadata is primarily used by search engines, and is information about what’s on your page. There are a number of different meta fields, but these are some of the most commonly used:

  • description—A basic description of your page.
  • keywords—A selection of keywords applicable to your page.
  • author—The author of your page.
  • viewport—A tag for ensuring that your page looks good on all devices.

Here’s an example that might apply to this page:

<meta name="description" content="A basic HTML tutorial">
<meta name="keywords" content="HTML,code,tags">
<meta name="author" content="MakeUseOf">
<meta name="viewport" content="width=device-width, initial-scale=1.0">

The “viewport” tag should always have “width=device-width, initial-scale=1.0” as the content to make sure your page displays well on mobile and desktop devices.

6. <body>

After you close the header section, you get to the body. You open this with the <body> tag, and close it with the </body> tag. That goes right at the end of your file, just before the </html> tag.

All of the content of your webpage goes in between these tags. It’s as simple as it sounds:

<body>
Everything you want displayed on your page.
</body>

7. <h1>

The <h1> tag defines a level-one header on your page. This will usually be the title, and there will ideally only be one on each page.

<h2> defines level-two headers such as section headers, <h3> level-three sub-headers, and so on, down to <h6>. As an example, the names of the tags in this article are level-two headers.

<h1>Big and Important Header</h1>
<h2>Slightly Less Big Header</h2>
<h3>Sub-Header</h3>

Result:

html header tags

As you can see, they get smaller at each level.

8. <p>

The paragraph tag starts a new paragraph. This usually inserts two line breaks.

Look, for example, at the break between the previous line and this one. That’s what a <p> tag will do.

<p>Your first paragraph.</p>
<p>Your second paragraph.</p>

Result:

Your first paragraph.

Your second paragraph.

You can also use CSS styles in your paragraph tags, like this one which changes the text size:

<p style="font-size: 120%;">20% larger text</p>

Result:

20% larger text

To learn how to use CSS to style your text, check out these HTML and CSS tutorials.

9. <br>

The line break tag inserts a single line break:

<p>The first line.<br>
The second line (close to the first one).</p>

Result:

The first line.
The second line (close to the first one).

Working in a similar way is the <hr> tag. This draws a horizontal line on your page and is good for separating sections of text.
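For example, to draw a rule between two paragraphs:

```html
<p>The end of one section.</p>
<hr>
<p>The start of the next section.</p>
```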

10. <strong>

This tag defines important text. In general, that means it will be bold. However, it’s possible to use CSS to make <strong> text display differently.

In practice, you can safely use <strong> to bold text.

<strong>Very important things you want to say.</strong>

Result:

Very important things you want to say.

If you’re familiar with the <b> tag for bolding text, you can still use it. There’s no guarantee it will continue to work in future versions of HTML, but for now, it works.

11. <em>

Like <b> and <strong>, <em> and <i> are related. The <em> tag identifies emphasized text, which generally means it will get italicized. Again, there’s the possibility that CSS will make emphasized text display differently.

<em>An emphasized line.</em>

Result:

An emphasized line.

The <i> tag still works, but again, it’s possible that it will be deprecated in future versions of HTML.

12. <a>

The <a>, or anchor, tag lets you create links. A simple link looks like this:

<a href="//www.makeuseof.com/">Go to MakeUseOf</a>

Result:

Go to MakeUseOf

The “href” attribute identifies the destination of the link. In many cases, this will be another website. It could also be a file, like an image or a PDF.

Other useful attributes include “target” and “title.” The target attribute is almost exclusively used to open a link in a new tab or window, like this:

<a href="//www.makeuseof.com/" target="_blank">Go to MakeUseOf in a new tab</a>

Result:

Go to MakeUseOf in a new tab

The “title” attribute creates a tooltip. Hover over the link below to see how it works:

<a href="//www.makeuseof.com/" title="This is a tool tip">Hover over this to see the tool tip</a>

Result:

Hover over this to see the tool tip

13. <img>

If you want to embed an image in your page, you’ll need to use the image tag. You’ll normally use it in conjunction with the “src” attribute. This specifies the source of the image, like this:

<img src="wp-content/uploads/2019/04/sunlit-birds.jpg">

Result:

Sunlit birds image using img src tags

Other attributes are available, such as “height,” “width,” and “alt.” Here’s how that might look:

<img src="wp-content/uploads/2019/04/sunlit-birds.jpg" alt="the name of your image">

As you might expect, the “height” and “width” attributes set the height and width of the image. In general, it’s a good idea to only set one of them so the image scales correctly. If you use both, you could end up with a stretched or squished image.

The “alt” attribute tells the browser what text to display if the image can’t be shown, and it’s a good idea to include it with any image. If someone has an especially slow connection or an old browser, they can still get an idea of what should be on your page.

14. <ol>

The ordered list tag lets you create an ordered list. In general, that means you’ll get a numbered list. Each item in the list needs a list item tag (<li>), so your list will look like this:

<ol>
<li>First thing</li>
<li>Second thing</li>
<li>Third thing</li>
</ol>

Result:

  1. First thing
  2. Second thing
  3. Third thing

In HTML5, you can use <ol reversed> to reverse the order of the numbers. And you can set the starting value with the start attribute.

The “type” attribute lets you tell the browser which type of symbol to use for the list items. It can be set to “1,” “A,” “a,” “I,” or “i,” setting the list to display with the indicated symbol like this:

<ol type="A">

15. <ul>

The unordered list is much simpler than its ordered counterpart. It’s simply a bulleted list.

<ul>
<li>First item</li>
<li>Second item</li>
<li>Third item</li>
</ul>

Result:

  • First item
  • Second item
  • Third item

Unordered lists also have a “type” attribute, which you can set to “disc,” “circle,” or “square.”

16. <table>

While using tables for formatting is frowned upon, there are plenty of times when you’ll want to use rows and columns to segment information on your page. Several tags are needed to get a table to work. Here’s the sample HTML code:

<table>
<tbody>
<tr>
<th>1st column</th>
<th>2nd column</th>
</tr>
<tr>
<td>Row 1, column 1</td>
<td>Row 1, column 2</td>
</tr>
<tr>
<td>Row 2, column 1</td>
<td>Row 2, column 2</td>
</tr>
</tbody>
</table>

The <table> and </table> tags specify the start and end of the table. The <tbody> tag contains all the table content.

Each row of the table is enclosed in a <tr> tag. Each cell within each row is wrapped in either <th> tags for column headers, or <td> tags for column data. You need one of these for each column on each row.

Result:

1st column 2nd column
Row 1, column 1 Row 1, column 2
Row 2, column 1 Row 2, column 2

17. <blockquote>

When you’re quoting another website or person and you want to set the quote apart from the rest of your document, use the blockquote tag. All you need to do is enclose the quote in opening and closing blockquote tags:

<blockquote>The Web as I envisaged it, we have not seen it yet. The future is still so much bigger than the past.</blockquote>

Result:

The Web as I envisaged it, we have not seen it yet. The future is still so much bigger than the past.

The exact formatting that’s used may depend on the browser you’re using or the CSS of your site. But the tag remains the same.

Go Forth and HTML

With these 17 HTML tags (and a few extras) you should be able to create a simple webpage. To see how to put them all together, you can download our sample HTML page. Open it in your browser to see how it all comes together, or in a text editor to see exactly how the code works.

For more bite-sized lessons in HTML, try these microlearning apps for coding.

Read the full article: 17 Simple HTML Code Examples You Can Learn in 10 Minutes



How to Use Android Without Google: Everything You Need to Know

Life-size robo-dinosaur and ostrich backpack hint at how first birds got off the ground


Everyone knows birds descended from dinosaurs, but exactly how that happened is the subject of much study and debate. To help clear things up, these researchers went all out and just straight up built a robotic dinosaur to test their theory: that these proto-birds flapped their “wings” well before they ever flew.

Now, this isn’t some hyper-controversial position or anything. It’s pretty reasonable when you think about it: natural selection tends to emphasize existing features rather than invent them from scratch. If these critters had, say, moved from being quadrupedal to being bipedal and had some extra limbs up front, it would make sense that over a few million years those limbs would evolve into something useful.

But when did it start, and how? To investigate, Jing-Shan Zhao of Tsinghua University in Beijing looked into Caudipteryx, a ground-dwelling dinosaur with feathered forelimbs that could be considered “proto-wings.”

Based on the well-preserved fossil record of this bird-dino crossover, the researchers estimated a number of physiological metrics, such as the creature’s top speed and the rhythm with which it would run. From this they could estimate forces on other parts of the body — just as someone studying a human jogger would be able to say that such and such a joint is under this or that amount of stress.

What they found was that, in theory, these “natural frequencies” and biophysics of the Caudipteryx’s body would cause its little baby wings to flap up and down in a way suggestive of actual flight. Of course they wouldn’t provide any lift, but this natural rhythm and movement may have been the seed which grew over generations into something greater.

To give this theory a bit of practical punch, the researchers then constructed a pair of unusual mechanical items: a pair of replica Caudipteryx wings for a juvenile ostrich to wear, and a robotic dinosaur that imitated the original’s gait. A bit fanciful, sure — but why shouldn’t science get a little crazy now and then?

In the case of the ostrich backpack, they literally just built a replica of the dino-wings and attached it to the bird, then had the bird run. Sensors on board the device verified what the researchers observed: that the wings flapped naturally as a result of the body’s motion and vibrations from the feet impacting the ground.

The robot is a life-size reconstruction based on a complete fossil of the animal, made of 3D-printed parts, to which the ostrich’s fantasy wings could also be affixed. The researchers’ theoretical model predicted that the flapping would be most pronounced as the speed of the bird approached 2.31 meters per second — and that’s just what they observed in the stationary model imitating gaits corresponding to various running speeds.

You can see another gif over at the Nature blog. As the researchers summarize:

These analyses suggest that the impetus of the evolution of powered flight in the theropod lineage that led to Aves may have been an entirely natural phenomenon produced by bipedal motion in the presence of feathered forelimbs.

Just how legit is this? Well, I’m not a paleontologist. And an ostrich isn’t a Caudipteryx. And the robot isn’t exactly convincing to look at. We’ll let the scholarly community pass judgment on this paper and its evidence (don’t worry, it’s been peer reviewed), but I think it’s fantastic that the researchers took this route to test their theory. A few years ago this kind of thing would be far more difficult to do, and although it seems a little silly when you watch it (especially in gif form), there’s a lot to be said for this kind of real-life tinkering when so much of science is occurring in computer simulations.

The paper was published today in the journal PLOS Computational Biology.



Once a major name in smartphones, LG Mobile is now irrelevant — and still losing money


LG was once a stalwart of the smartphone industry — remember its collaboration with Facebook back in the day? — but today the company is swiftly descending into irrelevance.

The latest proof is LG’s Q1 financials, released this week, which show that its mobile division grossed just 1.51 trillion KRW ($1.34 billion) in sales for the quarter. That’s down 30% year-on-year and the lowest quarterly revenue for LG Mobile in at least eight years. We searched back to Q1 2011 — before that, LG was hit-and-miss about releasing specific financial figures for its divisions.

To give an indication of the decline: LG shipped more than 15 million phones in Q4 2015, when its revenue was 3.78 trillion KRW ($3.26 billion) — 2.5 times the revenue of this most recent quarter.

Regular readers will be aware that LG Mobile is a loss-making division. That’s why its activities — and consequently its sales — have scaled down in recent years. But the losses are still coming.

LG put Brian Kwon, who leads its lucrative Home Entertainment business, in charge of the mobile division last November, and his turnaround effort is evidently still a work in progress.

LG Mobile recorded a loss of 203.5 billion KRW ($181.05 million) for Q1, which the company described as “narrowed.”

It is true that LG Mobile’s Q1 loss is smaller than the 322.3 billion KRW ($289.8 million) loss it posted in the previous quarter, but it is wider than a year earlier: the mobile division lost 136.1 billion KRW ($126.85 million) in Q1 2018.
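The “narrowed but still worse year-on-year” picture is easy to verify from the figures above. As a back-of-the-envelope sketch using only the reported KRW amounts (not LG’s own accounting):

```python
# LG Mobile quarterly operating results from the figures above,
# in billions of KRW (negative = loss).
losses_krw_bn = {
    "Q1 2018": -136.1,
    "Q4 2018": -322.3,
    "Q1 2019": -203.5,
}

def change(period_a, period_b):
    """Change in the result from period_a to period_b.

    Positive = improvement (smaller loss), negative = deterioration.
    """
    return losses_krw_bn[period_b] - losses_krw_bn[period_a]

qoq = change("Q4 2018", "Q1 2019")  # quarter-on-quarter: loss narrowed
yoy = change("Q1 2018", "Q1 2019")  # year-on-year: loss widened
```

The loss narrowed by 118.8 billion KRW against the previous quarter, but deepened by 67.4 billion KRW against the same quarter a year earlier — both statements are true at once, which is why the “narrowed” framing only tells half the story.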

LG said Mr. Kwon is presiding over “a revised smartphone launch strategy,” which is why the numbers are changing so drastically. Going forward, it said that the launch of its G7 ThinQ flagship phone and a new upgrade center — first announced last year — are in the immediate pipeline, but it is hard to see how any of this will reverse the downward trend.

LG Mobile is an increasingly awkward problem because the parent company is finding success in other areas, only to see those gains countered by the poor-performing smartphone business. Last quarter, for example, mobile dragged LG to its first quarterly loss in two years.

Looking just at the Q1 numbers, LG’s overall profit was 900.6 billion KRW ($801.25 million), thanks to its home appliance business ($647.3 million profit) and that home entertainment business ($308.27 million profit). Its automotive business — which is focused on EVs, among other things — did bite into the profits, but that, at least, is a business that is going places.

