21 June 2018

Google adds a search feature to account settings to ease use


Google has announced a refresh of the Google Accounts user interface. The changes are intended to make it easier for users to navigate settings and review data the company has associated with an account — including information relating to devices, payment methods, purchases, subscriptions, reservations, contacts and other personal info.

The update also makes security and privacy options more prominent, according to Google.

“To help you better understand and take control of your Google Account, we’ve made all your privacy options easy to review with our new intuitive, user-tested design,” it writes. “You can now more easily find your Activity controls in the Data & Personalization tab and choose what types of activity data are saved in your account to make Google work better for you.

“There, you’ll also find the recently updated Privacy Checkup that helps you review your privacy settings and explains how they shape your experience across Google services.”

Android users will get the refreshed Google Account interface first, with iOS and web coming later this year.

Last September the company also refreshed Google Dashboard — to make it easier to use and better integrate it into other privacy controls.

In October it outed a revamped Security Checkup feature, offering an overview of account security that includes personalized recommendations. The same month it also launched a free, opt-in program aimed at users who believe their accounts to be at particularly high risk of targeted online attacks.

And in January it announced new ad settings controls, also billed as boosting transparency and control. So settings-related updates have been coming pretty thick and fast from the ad-targeting tech giant.

The latest refresh comes at a time when many companies have been rethinking their approach to security and privacy as a result of a major update to the European Union’s data protection framework, the GDPR, which applies to entities processing EU people’s data regardless of where that data is being crunched.

Google also announced a raft of changes to its privacy policy as a direct compliance response to the GDPR back in May — saying it was making the policy clearer and easier to navigate, and adding more detail and explanations. It also updated user controls at that time, simplifying on/off switches for things like location data collection and web and app activity.

So that legal imperative to increase visibility and user controls at the core of digital empires looks to be generating uplift that’s helping to raise the settings bar across entire product suites. Which is good news for users.

As well as rethinking how Google Account settings are laid out, the updated “experience” adds some new functions intended to make it easier for people to find the settings they’re looking for too.

Notably a new search functionality for locating settings or specific info within an account — such as how to change a password. Which sounds like a really handy addition. There’s also a new dedicated support section offering help with common tasks, and answers from community experts.

And while it’s certainly welcome to see a search expert like Google adding a search feature to help people gain more control over their personal data, you do have to wonder what took it so long to come up with that idea.

Controls are only as useful as they are easy to use, of course. And offering impenetrable and/or bafflingly complex settings has, shamefully, been the historical playbook of the tech industry — as a socially engineered pathway to maximize data gathering via obfuscation (and obtain consent by confusion).

Again, the GDPR makes egregious personal data heists untenable over the long term — at least where the regulation has jurisdiction.

And while built-in opacity around technology system operation is something regulators are really only beginning to get to grips with — and much important work remains to be done to put vital guardrails in place, such as around the use of personal data for political ad targeting, for instance, or to ensure AI blackboxes can’t bake in bias — several major privacy scandals have knocked the sheen off big tech’s algorithmic Pandora’s boxes in recent years. And politicians are leaning into the techlash.

So, much like all these freshly redesigned settings menus, the direction of regulatory travel looks pretty clear — even if the pace of progress is never as disruptive as the technologies themselves.


Read Full Article

What Being a Freelance Programmer Is REALLY Like: The Pros and Cons

Fb Messenger auto-translation chips at US/Mexico language wall


Facebook’s been criticized for tearing America apart, but now it will try to help us forge bonds with our neighbors to the south. Facebook Messenger will now offer optional auto-translation of English to Spanish and vice-versa for all users in the United States and Mexico. It’s a timely launch given the family separation troubles at the nations’ border.

The feature could facilitate cross-border and cross-language friendships, business, and discussion that might show people in the two countries that deep down we’re all just human. It could be especially powerful for US companies looking to use Messenger for conversational commerce without having to self-translate everything.

Facebook tells me “we were pleased with the results” following a test using AI to translate the language pair in Messenger for US Facebook Marketplace users in April.

Now when users receive a message in a language different from their default language, Messenger’s AI assistant M will ask if they want it translated. All future messages in that thread will be auto-translated unless a user turns the feature off. Facebook plans to bring the feature to more language pairs and countries soon.

A Facebook spokesperson tells me “The goal with this launch is really to enable people to communicate with people they wouldn’t have been able to otherwise, in a way that is natural and seamless.”

Starting in 2011, Facebook began offering translation technology for News Feed posts and comments. For years it relied on Microsoft Bing’s translation technology, but Facebook switched to its own stack in mid-2016. By then it was translating 2 billion pieces of text a day for 800 million users.

Conversational translation is a lot tougher than translating social media posts, though. When we chat with friends, the language is more colloquial and full of slang. We’re also usually typing in more of a hurry and can be less accurate. But if Facebook can reliably figure out what we’re saying, Messenger could become the modern-day Babel Fish. At 2016’s F8, Facebook CEO Mark Zuckerberg threw shade at Donald Trump, saying “instead of building walls, we can build bridges.” Trump still doesn’t have that wall, and now Zuck is building a bridge with technology.


Read Full Article

The iPhone Camera Roll: 8 Tips and Fixes for Common Issues

How Can Neural Network Similarity Help Us Understand Training and Generalization?



In order to solve tasks, deep neural networks (DNNs) progressively transform input data into a sequence of complex representations (i.e., patterns of activations across individual neurons). Understanding these representations is critically important, not only for interpretability, but also so that we can more intelligently design machine learning systems. However, understanding these representations has proven quite difficult, especially when comparing representations across networks. In a previous post, we outlined the benefits of Canonical Correlation Analysis (CCA) as a tool for understanding and comparing the representations of convolutional neural networks (CNNs), showing that they converge in a bottom-up pattern, with early layers converging to their final representations before later layers over the course of training.

In “Insights on Representational Similarity in Neural Networks with Canonical Correlation” we develop this work further to provide new insights into the representational similarity of CNNs, including differences between networks which memorize (e.g., networks which can only classify images they have seen before) and those which generalize (e.g., networks which can correctly classify previously unseen images). Importantly, we also extend this method to provide insights into the dynamics of recurrent neural networks (RNNs), a class of models that are particularly useful for sequential data, such as language. Comparing RNNs is difficult in many of the same ways as CNNs, but RNNs present the additional challenge that their representations change over the course of a sequence. This makes CCA, with its helpful invariances, an ideal tool for studying RNNs in addition to CNNs. As such, we have additionally open sourced the code used for applying CCA on neural networks with the hope that it will help the research community better understand network dynamics.
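For readers who want to experiment, here is a minimal, unweighted sketch of a CCA-based distance between two layers’ activations using NumPy. It is not the weighted variant introduced in the paper (the open-sourced code covers that), and the array shapes and random inputs are assumptions for illustration only.

```python
import numpy as np

def cca_distance(acts_a, acts_b):
    """Unweighted CCA distance between two activation matrices of shape
    (num_examples, num_neurons); lower values mean more similar representations."""
    # Center each representation across examples
    acts_a = acts_a - acts_a.mean(axis=0, keepdims=True)
    acts_b = acts_b - acts_b.mean(axis=0, keepdims=True)
    # Orthonormal bases for each representation's column space
    qa, _ = np.linalg.qr(acts_a)
    qb, _ = np.linalg.qr(acts_b)
    # Singular values of qa^T qb are the canonical correlations
    corrs = np.linalg.svd(qa.T @ qb, compute_uv=False)
    corrs = np.clip(corrs, 0.0, 1.0)
    return 1.0 - corrs.mean()

# Toy usage with random "activations": 512 examples, 64 and 48 neurons
rng = np.random.default_rng(0)
print(cca_distance(rng.normal(size=(512, 64)), rng.normal(size=(512, 48))))
```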

Representational Similarity of Memorizing and Generalizing CNNs
Ultimately, a machine learning system is only useful if it can generalize to new situations it has never seen before. Understanding the factors which differentiate between networks that generalize and those that don’t is therefore essential, and may lead to new methods to improve generalization performance. To investigate whether representational similarity is predictive of generalization, we studied two types of CNNs:
  • generalizing networks: CNNs trained on data with unmodified, accurate labels and which learn solutions which generalize to novel data.
  • memorizing networks: CNNs trained on datasets with randomized labels such that they must memorize the training data and cannot, by definition, generalize (as in Zhang et al., 2017).
We trained multiple instances of each network, differing only in the initial randomized values of the network weights and the order of the training data, and used a new weighted approach to calculate the CCA distance measure (see our paper for details) to compare the representations within each group of networks and between memorizing and generalizing networks.
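As a concrete illustration of the memorizing setup described above (following Zhang et al., 2017), the training labels can simply be replaced with uniformly random classes before training. This is a minimal sketch; the sample labels are placeholders.

```python
import numpy as np

def randomize_labels(labels, num_classes=10, seed=0):
    """Replace every label with a uniformly random class, so a network trained on
    them must memorize the training set rather than learn a generalizable rule."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_classes, size=len(labels))

# e.g., CIFAR-10-style labels for the "memorizing" networks
true_labels = np.array([3, 8, 8, 0, 6, 6, 1, 6, 3, 1])
memorizing_labels = randomize_labels(true_labels, num_classes=10)
```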

We found that groups of different generalizing networks consistently converged to more similar representations (especially in later layers) than groups of memorizing networks (see figure below). At the softmax, which denotes the network’s ultimate prediction, the CCA distance for each group of generalizing and memorizing networks decreases substantially, as the networks in each separate group make similar predictions.
Groups of generalizing networks (blue) converge to more similar solutions than groups of memorizing networks (red). CCA distance was calculated between groups of networks trained on real CIFAR-10 labels (“Generalizing”) or randomized CIFAR-10 labels (“Memorizing”) and between pairs of memorizing and generalizing networks (“Inter”).
Perhaps most surprisingly, in later hidden layers, the representational distance between any given pair of memorizing networks was about the same as the representational distance between a memorizing and generalizing network (“Inter” in the plot above), despite the fact that these networks were trained on data with entirely different labels. Intuitively, this result suggests that while there are many different ways to memorize the training data (resulting in greater CCA distances), there are fewer ways to learn generalizable solutions. In future work, we plan to explore whether this insight can be used to regularize networks to learn more generalizable solutions.

Understanding the Training Dynamics of Recurrent Neural Networks
So far, we have only applied CCA to CNNs trained on image data. However, CCA can also be applied to calculate representational similarity in RNNs, both over the course of training and over the course of a sequence. Applying CCA to RNNs, we first asked whether the RNNs exhibit the same bottom-up convergence pattern we observed in our previous work for CNNs. To test this, we measured the CCA distance between the representation at each layer of the RNN over the course of training with its final representation at the end of training. We found that the CCA distance for layers closer to the input dropped earlier in training than for deeper layers, demonstrating that, like CNNs, RNNs also converge in a bottom-up pattern (see figure below).
Convergence dynamics for RNNs over the course of training exhibit bottom-up convergence, as layers closer to the input converge to their final representations earlier in training than later layers. For example, layer 1 converges to its final representation earlier in training than layer 2, which in turn converges earlier than layer 3, and so on. Epoch designates the number of times the model has seen the entire training set, while different colors represent the convergence dynamics of different layers.
Additional findings in our paper show that wider networks (e.g., networks with more neurons at each layer) converge to more similar solutions than narrow networks. We also found that trained networks with identical structures but different learning rates converge to distinct clusters with similar performance, but highly dissimilar representations. We also apply CCA to RNN dynamics over the course of a single sequence, rather than simply over the course of training, providing some initial insights into the various factors which influence RNN representations over time.

Conclusions
These findings reinforce the utility of analyzing and comparing DNN representations in order to provide insights into network function, generalization, and convergence. However, there are still many open questions: in future work, we hope to uncover which aspects of the representation are conserved across networks, both in CNNs and RNNs, and whether these insights can be used to improve network performance. We encourage others to try out the code used for the paper to investigate what CCA can tell us about other neural networks!

Acknowledgements
Special thanks to Samy Bengio, who is a co-author on this work. We also thank Martin Wattenberg, Jascha Sohl-Dickstein and Jon Kleinberg for helpful comments.


Bag Week 2018: The Nomadic NF-02 keeps everything in its right place


Nomadic, a Japanese brand sold by JetPens in the US, makes some of my favorite bags and backpacks. The Wise Walker Toto was an amazing little bag and I’ve always enjoyed the size, materials, and design. The $89 Nomadic NF-02 is no different.

The best thing about this 15×7 inch backpack is the compact size and internal pouches. The Nomadic can hold multiple pens, notebooks, and accessories, all stuck in their own little cubbies, and you can fit a laptop and a few books in the main compartment. This is, to be clear, not a “school” backpack. It’s quite compact and I doubt it would be very comfortable with much more than a pair of textbooks and a heavier laptop. It’s definitely a great travel sack, however, and excellent for the trip from home to the office.

The bag comes in a few colors including turquoise and navy and there is a small hidden pouch for important papers and passports. There is a reflective strip on the body and it is water repellent so it will keep your gear dry.

Again, my favorite part of this bag is the multiple little pockets and spaces. It’s an organizer’s dream and features so many little spots to hide pens and other gear that it could also make an excellent tourist pack. It is small enough for easy transport but holds almost anything you can throw at it.

The Nomadic NF-02 is a solid backpack. It’s small, light, and still holds up to abuse. I’m a big fan of the entire Nomadic line and it’s great to see this piece available in the US. It’s well worth a look if you’re looking for a compact carrier for your laptop, accessories, and notebooks.

bag week 2018


Read Full Article

4 Tools to Back Up Your Android Device to Your PC

Bag Week 2018: WP Standard’s Rucksack goes the distance


WP Standard – formerly called Whipping Post Leather – makes rugged leather bags, totes, and briefcases, and their Rucksack is one of my favorites. Designed to look like something a Pony Express rider would slip on for a visit to town, this $275 bag is sturdy, handsome, and ages surprisingly well.

There are some trade-offs, however. Except for two small front pouches there are no hidden nooks and crannies in this spare 15×15 inch sack. The main compartment can fit a laptop and a few notebooks and the front pouches can hold accessories like mice or a little collection of plugs. There is no fancy nylon mesh or gear organizers here, just a brown expanse of full grain leather.

I wore this backpack for a few months before writing this and found it surprisingly comfortable and great for travel. Because it is so simple I forced myself to pare down my gear slightly and I was able to consolidate my cables and other accessories into separate pouches. I could fit a laptop, iPad Pro, and a paperback alongside multiple notebooks and planners, and I could even overstuff the thing on long flights. As long as I was able to buckle the front strap nothing fell out or was lost.

This bag assumes that you’re OK with thick, heavy leather and that you’re willing to forgo a lot of the bells and whistles you get with more modern styles. That said, it has a great classic look and it’s very usable. I suspect this bag would last decades longer than anything you could buy at Office Depot and it would look good doing it. At $275 it’s a bit steep but you’re paying for years – if not decades – of regular use and abuse. It’s worth the investment.

bag week 2018


Read Full Article

Google Assistant’s ‘Continued Conversation’ feature is now live


Google I/O was awash with Assistant news, but Duplex mystery aside, Continued Conversation was easily one of the most compelling announcements of the bunch. The feature is an attempt to bring more naturalized conversation to the AI — a kind of holy grail with these sorts of smart assistants.

Continued Conversation is rolling out to Assistant today for users in the U.S. with a Home, Home Mini and Home Max. The optional setting is designed to offer a more natural dialogue, so users don’t have to “Hey Google” Assistant every time they have a request. Google offers the following example in a blog post that just went up:

So next time you wake up and the skies are grey, just ask “Hey Google, what’s the weather today?”… “And what about tomorrow?”… “Can you add a rain jacket to my shopping list”… “And remind me to bring an umbrella tomorrow morning”…“Thank you!”

You’ll need to access the Assistant settings on an associated device in order to activate the feature. And that initial “Ok Google” or “Hey Google” will still have to be spoken to trigger the Assistant. From there, it will stay listening for up to eight seconds without detecting any speech. It’s not exactly a dialogue, so much as a way of easing the awkward interaction of having to repeat the same command over and over again. 

Given all of the recent privacy concerns that have arisen as smart speakers and the like have exploded in popularity, it’s easy to see why Google has put all of these safeguards in place to assure users that the devices aren’t listening for anything beyond a wake word.

An extra eight seconds isn’t much, but those who are already skeptical about product privacy might want to keep it off, for good measure.


Read Full Article

Facebook expands fact-checking program, adopts new technology for fighting fake news


Facebook this morning announced an expansion of its fact-checking program and other actions it’s taking to combat the scourge of fake news on its social network. The company, which was found to be compromised by Russian trolls whose disinformation campaigns around the November 2016 presidential election reached 150 million Americans, has been increasing its efforts at fact-checking news through a combination of technology and human review in the months since.

The company began fact-checking news on its site last spring, with help from independent third-party fact-checkers certified through the non-partisan International Fact-Checking Network. These fact-checkers rate the accuracy of stories, allowing Facebook to take action on those rated false by lowering them in the News Feed and reducing the distribution of Pages that are repeat offenders.

Today, Facebook says it has since expanded this program to 14 countries around the world, and plans to roll it out to more countries by year-end. It also claims fact-checking has reduced the distribution of fake news by an average of 80 percent.

The company also announced the expansion of its program for fact-checking photos and video. First unveiled this spring, the program targets things like manipulated videos or misused photos where images are taken out of context in order to push a political agenda. This is a huge issue, because memes have become a popular way of rallying people around a cause on the internet, but they often do so by completely misrepresenting the facts, using images from different events, places, and times.

One current example of this is the photo used by Drudge Report showing young boys holding guns in a story about the U.S.-Mexico border battle. The photo was actually taken nowhere near the border, but rather was snapped in Syria in 2012 and was captioned: “Four young Syrian boys with toy guns are posing in front of my camera during my visit to Azaz, Syria. Most people I met were giving the peace sign. This little city was taken by the Free Syrian Army in the summer of 2012 during the Battle of Azaz.”

Using fake or misleading images to stoke fear, disgust, or hatred of another group of people is a common way photos and videos are misused online.

Facebook also says it’s taking advantage of new machine learning technology to help it find duplicates of already debunked stories, and will work with fact-checking partners to use Schema.org’s Claim Review, an open-source framework that will allow fact-checkers to share ratings with Facebook so the company can act more quickly, especially in times of crisis.
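For context, Claim Review markup is a small set of structured fields a fact-checker publishes alongside a review. The sketch below shows roughly what such a record can look like, written here as a Python dictionary; the claim, URL, organization and rating values are made up for illustration and are not drawn from Facebook or any real fact-checker.

```python
# Illustrative ClaimReview-style record (see schema.org/ClaimReview);
# every value below is a placeholder, not real fact-checker or Facebook data.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/12345",  # where the fact check is published
    "claimReviewed": "Photo shows armed boys at the US-Mexico border",
    "itemReviewed": {"@type": "Claim", "datePublished": "2018-06-18"},
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,        # on the scale defined by worstRating/bestRating
        "worstRating": 1,
        "bestRating": 5,
        "alternateName": "False",
    },
}
```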

And the company says it will expand its efforts in downranking fake news by using machine learning to demote foreign Pages that are spreading financially-motivated hoaxes to people in other countries.

In the weeks ahead, an elections research commission working in partnership with Facebook to measure the volume and effect of misinformation on the social network will launch its website and its first request for proposals.

The company had already announced its plans to further investigate the role social media plays in elections and in democracy. The commission will receive access to privacy-protected data sets with a sample of links that people engaged with on Facebook, which will allow it to understand what sort of content is being shared. Facebook says the research will “help keep us accountable and track our progress.”


Read Full Article

MacBook vs. iMac: A Comparison Guide to Help You Decide



If you need a portable Mac, you buy a MacBook. If you want the most powerful Mac experience, you buy an iMac—right?

Deciding between a desktop and a laptop isn’t quite as simple as you might think. We have to balance our expectations, real-world requirements, and a realistic budget before taking the plunge.

So we’ve done the agonizing for you. Here’s how two of Apple’s flagship machines stack up and a guide to deciding whether a MacBook or an iMac would be better suited to your needs.

Comparing MacBook vs. iMac

For the purpose of this comparison, we’ll look at the top-end 27-inch iMac model and its closest competitor, the fastest 15-inch MacBook Pro. You’ll likely have your own wishlist, but this comparison is fairly representative of the differences between the models whatever your budget.

iMac 27

Even at this stage, it’s worth thinking about the lifespan of the product. Of the many reasons why people buy Macs, hardware reliability and longevity is perhaps the most important. Make sure whatever you choose will fit the bill for a few years. This is especially true when it comes to storage capacity, since Apple’s machines are less upgradeable than ever before.

MacBook Pro with Touch Bar 15-inch

Now let’s take a look at each aspect of Apple’s computers by directly comparing hardware, and ultimately value for money.

MacBook vs. iMac: CPU and RAM

There once was a time when the desktop variants would run away with the show here. But thanks to the ever-shrinking silicon chip, it’s far less clear-cut than it once was. Mobile chips need to be efficient, which means you’re unlikely to see comparable clock speeds. This doesn’t necessarily translate into a black and white performance deficit, though.

iMac options

The top-tier 27-inch iMac comes with a 3.8GHz Intel Core i5 processor. You can upgrade this to an i7 4.2GHz processor for an extra $200. The MacBook Pro has an Intel Core i7 processor that tops out at 2.9GHz, with an upgrade to the 3.1GHz model available for another $200.

In terms of processing power, while the iMac has the advantage due to higher clock speeds, you’re unlikely to notice the difference in daily use. When it comes to RAM, it’s a similar state of affairs.

The top-tier MacBook Pro comes with 16GB of RAM onboard, compared to the iMac’s 8GB. You can upgrade the iMac to 16GB ($200), 32GB ($400), or 64GB ($1,200). However, you can’t upgrade a MacBook Pro beyond 16GB.

MacBook Pro options

But the iMac has another trick up its sleeve: a slot at the back of the unit which allows you to upgrade the RAM yourself. This is not possible on the MacBook Pro, but it’s a nice option for iMac users who want to save some money today and upgrade in the future.

Conclusion: Processing power is comparable, though the iMac just edges it, which makes the MacBook Pro’s showing all the more impressive. User-expandable memory and more options at checkout further give the iMac the edge here.

MacBook vs. iMac: GPU and Display

Both the MacBook Pro and iMac have comparable displays. Each is Retina quality, which means the pixel density is high enough that you can’t make out individual pixels. Both have a brightness of 500 nits, and both use the P3 wide color gamut offering 25 percent more colors compared to standard RGB.

P3 vs RGB

The most obvious difference is size, with a top-end iMac coming in at 27-inch compared to the MacBook Pro at 15-inch. And while the MacBook Pro manages a native resolution of 2880×1800, the iMac has a native 5K display at a jaw-dropping resolution of 5120×2880.

Both will make your videos and photos pop and the hours you spend staring at your screen more pleasant. There really is something to be said for the iMac’s massive 5K screen, though you’ll need to sacrifice portability for the privilege.

Powering those displays is no small feat, which is why Apple opted for dedicated Radeon Pro graphics chips from AMD for both models. The MacBook Pro puts up a good fight with its Radeon Pro 560 and 4GB of dedicated VRAM, but it comes up short against the Radeon Pro 580 and its 8GB of VRAM.

iMac 27 Display

You’re certainly not going to see double the performance on the iMac, but there’s no mistaking the fact that the best visual performance is found on the desktop. This is further compounded by the heat generated by GPUs under load, which is far more noticeable on a laptop than it is on a desktop.

That added heat might limit your use of the MacBook under extreme load. If you’re going to stress the GPU regularly with lengthy video editing or gaming sessions, the iMac will provide a more pleasant base of operation. You’ll also have a lot more screen real estate at your disposal.

Conclusion: The MacBook Pro’s top-tier discrete graphics chips are a force to be reckoned with, but the iMac is still faster (and cooler).

MacBook vs. iMac: Storage, SSDs, and Fusion Drive

Here’s where the comparison starts to get really interesting, since the MacBook range led the SSD revolution many years ago with the arrival of the MacBook Air. SSDs (solid-state drives) are storage devices that use memory chips rather than moving parts to store data. This results in much faster read and write times, and they’re a lot tougher.

MacBook Pro storage

Every MacBook, MacBook Air, and MacBook Pro comes with an SSD. Most start at 256GB, but you can still find the odd 128GB option around. By comparison, all iMac models come with a Fusion Drive.

Apple’s Fusion Drive is two drives—an SSD and a standard spinning HDD—that appear as a single volume. Core system files and often-used resources reside on the SSD for speed, while documents, media, and long-term storage default to the slower HDD.

The SSD is faster than the Fusion Drive, but SSDs are also more limited in space. This is why the top-tier MacBook Pro comes with 512GB, and the top-tier iMac comes with 2TB. You can upgrade that MacBook to a 1TB SSD for an additional $400, and you can make the same swap in your iMac for $600.

iMac storage

Conclusion: You’ll get more space for your money in an iMac, but it won’t be as fast as the MacBook’s all-SSD approach. If money is no object, you can upgrade both models to a 2TB SSD and laugh all the way home.

It comes down to performance, and the tradeoff you make between convenience and speed. One word of advice: always buy more storage than you think you need.

MacBook vs. iMac: Ports and Portability

If you’ve followed Apple’s hardware decisions of late, you’ll know that the current generation MacBook has fewer ports than any that came before it. Apple decided to strip all but a stereo output and four USB-C ports (capable of USB 3.1 gen 2 and Thunderbolt 3) from the MacBook Pro.

MacBook Pro connections and ports

This means you’ll need to rely on adapters and docks if you want to use regular USB type-A connectors, drive an HDMI monitor, plug in a memory card, or connect to a wired network. The new MacBook Pro is even powered via USB-C, with an 87W USB-C power adapter included in the box.

Conversely, the iMac has a port for almost anything. You’ll get two of those fancy USB-C ports that can handle USB 3.1 gen 2 and Thunderbolt 3. You’ll also get four regular USB 3.0 type-A connectors, for all your old hard drives and peripherals.

Then there’s an SDXC card slot on the back, for connecting SD, SDHC, SDXC and microSD (via adapters) directly to your Mac. The iMac even delivers a gigabit Ethernet port, something the MacBook range dropped years ago.

iMac connection ports

The iMac is also compatible with the same adapters and docks, enabling HDMI and DVI out, or connections to Mini DisplayPort and Thunderbolt 2 devices. You won’t have to carry those adapters with you either, since your iMac lives on a desk.

Conclusion: The MacBook drops the ball in this department, with its stubborn USB-C approach. As for the iMac, we’re shocked Apple still builds a computer with an Ethernet port!

MacBook vs. iMac: Everything Else

There are a few other areas you might not consider when shopping, and though they’re not deal-breakers (to us), they’re still worth highlighting.

Keyboard

While the MacBook Pro has a built-in keyboard, the iMac comes with Apple’s Magic Keyboard. You can also opt to ditch these and plug in any keyboard you want, something that makes more sense on the iMac.

Apple Magic Keyboard

Some users have reported issues with Apple’s “butterfly” key mechanism on the latest MacBook models. There have been reports of broken keys that have prompted several class-action lawsuits, as well as the keyboard having a different “feel” to previous Apple keyboards.

You’ll probably want to try out the MacBook before you buy if you plan to do a lot of typing (and even if you don’t, since a dud key compromises the entire laptop’s purpose).

Mice, Trackpads, and Touch Gestures

Apple has designed macOS with a number of touch-based gestures in mind. These include two-finger scrolling, swipes from left to right to change between desktop spaces, and quick reveal gestures for running apps and the desktop. macOS is better with a trackpad than it is with a mouse.

Magic TrackPad 2

The MacBook Pro has a giant trackpad front and center. Force Touch means you can press harder to access a third context-dependent input, just like 3D Touch on the iPhone.

The iMac comes with a Magic Mouse 2, probably because Apple has a big dusty warehouse full of them. If you want the best macOS experience, you’ll need to upgrade to a Magic Trackpad 2 for $50 at checkout.

Touch Bar and Touch ID

The Touch Bar and Touch ID fingerprint scanner are both present (and non-negotiable) on the top-tier MacBook Pro models. The Touch Bar replaces the function keys with a touch-sensitive OLED panel that adapts to whatever you’re doing and shows you relevant app controls, emoji, and traditional media key functions.

MacBook Pro TouchBar

Touch ID is a fingerprint scanner that works just like Touch ID on iOS. You can use your fingerprint to store login credentials, unlock your Mac, and generally speed up daily authorization events. It’s a great convenience, but probably won’t tip your decision either way.

Some users have complained that the Touch Bar is a gimmick that doesn’t really solve any problems. If you feel the same you can disable the Touch Bar entirely, though you’ll have to live with touch-based function keys.

MacBook vs. iMac: Which One Should You Get?

A top-of-the-line iMac is cheaper than a comparable MacBook Pro. It packs a marginally faster processor, better graphics capabilities, a bigger screen, more storage space, and an array of ports a MacBook owner could only dream of. It ships with 8GB of RAM rather than the MacBook Pro’s 16GB, but there’s a slot you can use to upgrade it yourself.

But the top-end MacBook Pro isn’t a weak option. You’ve got a strong Core i7 processor, a powerful GPU that can handle 4K video editing, a blisteringly fast SSD on every model, and that all-important portable form factor. Ultimately though, you’ll pay more for a less capable machine compared to the iMac.

For pricing, Apple’s best base MacBook Pro (without any upgrades) costs $2,799 compared to $2,299 for a top-end base iMac. When you’re paying $500 more for a less capable machine, you might want to ask yourself: do you really need all that power in a portable machine? Or is portability worth the premium to you?

iMac and MacBook Pro

If you need as much power in the field as possible, then the MacBook Pro is your best bet at this stage. Just make sure you opt for a large enough SSD to see you through to your next upgrade.

But if, like me, you’re replacing an old MacBook, you might want to opt for the iMac. You can make your old Mac feel like new, then use it as a light mobile office of sorts. Offload your resource-intensive tasks to the iMac at home, and you’ve got the best of both worlds.

Read the full article: MacBook vs. iMac: A Comparison Guide to Help You Decide


Read Full Article

Inside Atari’s rise and fall

AT&T launches a low-cost live TV streaming service, WatchTV


AT&T this morning announced the launch of a second TV streaming service, called WatchTV, days after its merger with Time Warner. The lower-cost alternative to AT&T’s DirecTV Now will offer anyone the ability to join WatchTV for only $15 per month, but the service will also be bundled into AT&T wireless plans. This $15 per month price point undercuts newcomer Philo, which in November had introduced the cheapest over-the-top TV service at just $16 per month.

With WatchTV, customers gain access to over 30 live TV channels from top cable networks including A&E, AMC, Animal Planet, CNN, Discovery, Food Network, Hallmark, HGTV, History, IFC, Lifetime, Sundance TV, TBS, TLC, TNT, VICELAND, and several others. (Full list below).

Shortly after launch, it will add BET, Comedy Central, MTV2, Nicktoons, Teen Nick, and VH1.

There are also over 15,000 TV shows and movies on demand, along with premium channels and music streaming options as add-ons.

While the new WatchTV service is open to anyone, AT&T is also bundling it into two new unlimited plans for no additional cost.

These plans are the AT&T Unlimited & More Premium plan and the AT&T Unlimited & More plan.

Premium plan customers will have all the same features of the existing AT&T Unlimited Plus Enhanced plan, including 15 GB of high-speed tethering, high-quality video and a $15 monthly credit towards DirecTV, U-verse TV, or AT&T’s other streaming service, DirecTV Now. They can also choose to add on other options, like HBO, Showtime, Starz, Amazon Music Unlimited, Pandora Premium, VRV and more, for an additional fee. Add-ons can only be swapped out once per year.

The regular plan (AT&T Unlimited & More) only offers SD video streams when on AT&T’s network, including when customers are viewing WatchTV. It also includes the $15 monthly credit towards other AT&T video services and up to 4G LTE unlimited data.

The Premium plan costs $80 for a single line after the AutoPay billing credit, or $190 for 4 lines. The regular plan is $70 with the AutoPay billing credit and paperless billing. It’s $5 more per line per month than the current Unlimited Choice Enhanced plan, but when you go up to 4 lines, it works out to the same price as before, $40 per line per month.

AT&T CEO Randall Stephenson had previously revealed the carrier’s plans for the new low-cost streaming TV service while in court defending the Time Warner merger against anti-trust claims. He used its launch as a point of rebuttal against comments about the ever-higher prices for AT&T’s DirecTV satellite service.

The Justice Department was concerned that following the merger, AT&T would raise prices on Time Warner’s HBO and Turner networks, like TNT, TBS and CNN, in order to prop up its own offerings. For now, it seems AT&T will just come up with a million different ways to generate revenue from its networks, by offering different bundles and packages to AT&T customers and other consumers.

The company also touted the merger when announcing today’s news:

Our merger brings together the elements to fulfill our vision for the future of media and entertainment. We’ll bring a fresh approach to how media and entertainment works for you—including new offerings that integrate content and connectivity.


Read Full Article

5 Common Encryption Types and Why You Shouldn’t Make Your Own



Encryption is frequently talked about in the news, but it’s usually on the receiving end of misinformed government policy or taking part of the blame for terrorist atrocities.

That ignores just how vital encryption is. The vast majority of internet services use encryption to keep your information safe.

Encryption, however, is somewhat difficult to understand. There are numerous types, and they have different uses. How do you know what the “best” type of encryption is, then?

Let’s take a look at how some of the major encryption types work, as well as why rolling your own encryption isn’t a great idea.

Encryption Types vs. Encryption Strength

One of the biggest sources of confusion in encryption terminology comes from the differences between types of encryption, encryption algorithms, and their respective strengths. Let’s break it down:

  • Encryption type: The encryption type concerns how the encryption is completed. For instance, asymmetric cryptography is one of the most common encryption types on the internet.
  • Encryption algorithm: When we discuss the strength of encryption, we’re talking about a specific encryption algorithm. The algorithms are where the interesting names come from, like Triple DES, RSA, or AES. Encryption algorithm names are often accompanied by a numerical value, like AES-128. The number refers to the encryption key size and further defines the strength of the algorithm.

There are a few more encryption terms you should familiarize yourself with that will make the rest of this discussion easier to understand.

The 5 Most Common Encryption Algorithms

The types of encryption form the foundation for the encryption algorithm, while the encryption algorithm is responsible for the strength of encryption. We talk about encryption strength in bits.

Moreover, you probably know more encryption algorithms than you realize. Here are some of the most common encryption algorithms, with a little information about how they work.

1. Data Encryption Standard (DES)

The Data Encryption Standard is an original US Government encryption standard. It was originally thought to be unbreakable, but the increase in computing power and the decrease in the cost of hardware have rendered 56-bit encryption essentially obsolete. This is especially true regarding sensitive data.

John Gilmore, the EFF co-founder who headed the Deep Crack project, said “When designing secure systems and infrastructure for society, listen to cryptographers, not to politicians.” He cautioned that the record time to crack DES should send “a wake-up call” to anyone who relies on DES to keep data private.

Nonetheless, you’ll still find DES in many products. The low-level encryption is easy to implement without requiring a huge amount of computational power. As such, it is a common feature of smart cards and limited-resource appliances.

2. TripleDES

TripleDES (sometimes written 3DES or TDES) is the newer, more secure version of DES. When DES was cracked in under 23 hours, the government realized there was a significant issue coming its way. Thus, TripleDES was born. TripleDES bulks up the encryption procedure by running DES three times.

The data is encrypted, decrypted, and then encrypted again, giving an effective key length of 168 bits. This is strong enough for most sensitive data. However, while TripleDES is stronger than standard DES, it has its own flaws.

TripleDES has three keying options:

  • Keying Option 1: All three keys are independent. This method offers the strongest key strength: 168-bit.
  • Keying Option 2: Key 1 and Key 2 are independent, while Key 3 is the same as Key 1. This method offers an effective key strength of 112 bits (2×56=112).
  • Keying Option 3: All three keys are the same. This method offers a 56-bit key.

Keying option 1 is the strongest. Keying option 2 isn’t as strong, but still offers more protection than simply encrypting twice with DES. TripleDES is a block cipher, meaning data is encrypted in one fixed-block size after another. Unfortunately, the TripleDES block size is small at 64 bits, making it somewhat susceptible to certain attacks (like block collision).

3. RSA

RSA (named after its creators Ron Rivest, Adi Shamir, and Leonard Adleman) is one of the first public key cryptographic algorithms. It uses the one-way asymmetric encryption function found in the previously linked article.

Many facets of the internet use the RSA algorithm extensively. It is a primary feature of many protocols, including SSH, OpenPGP, S/MIME, and SSL/TLS. Furthermore, browsers use RSA to establish secure communications over insecure networks.

RSA remains incredibly popular due to its key length. An RSA key is typically 1024 or 2048 bits long. However, security experts believe that it will not be long before 1024-bit RSA is cracked, prompting numerous government and business organizations to migrate to the stronger 2048-bit key.
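As a rough illustration of asymmetric encryption with a 2048-bit key, here is a short sketch using the third-party Python cryptography package with OAEP padding. The library choice and sample message are assumptions for illustration, not something the article prescribes.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generate a 2048-bit key pair (the size organizations are migrating to)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# OAEP padding is standard practice for RSA encryption
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone holding the public key can encrypt; only the private key can decrypt
ciphertext = public_key.encrypt(b"a short secret", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"a short secret"
```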

4. Advanced Encryption Standard (AES)

The Advanced Encryption Standard (AES) is now the trusted US Government encryption standard.

It is based on the Rijndael algorithm developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen. The Belgian cryptographers submitted their algorithm to the National Institute of Standards and Technology (NIST), alongside 14 others competing to become the official DES successor. Rijndael “won” and was selected as the proposed AES algorithm in October 2000.

AES is a symmetric key algorithm and uses a symmetric block cipher. It comprises three key sizes: 128, 192, or 256 bits. Furthermore, there are different rounds of encryption for each key size.

A round is one pass of the substitution and permutation steps that turn plaintext into ciphertext. For 128-bit keys there are 10 rounds, 192-bit has 12 rounds, and 256-bit has 14 rounds.
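To make the symmetric workflow concrete, here is a brief sketch of AES-128 in an authenticated mode (GCM), again using the third-party Python cryptography package. The library choice and sample message are illustrative assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # a 128-bit key means 10 rounds internally
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # GCM needs a unique 96-bit nonce per message
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"
```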

There are theoretical attacks against the AES algorithm, but all require a level of computing power and data storage simply unfeasible in the current era. For instance, one attack requires around 38 trillion terabytes of data—more than all the data stored on all the computers in the world in 2016. Other estimates put the total amount of time required to brute-force an AES-128 key in the billions of years.

As such, encryption guru Bruce Schneier does not “believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic,” outside theoretical academic encryption breaks. Schneier’s Twofish encryption algorithm (discussed below) was a direct Rijndael challenger during the competition to select the new national security algorithm.

5. Twofish

Twofish was a National Institute of Standards and Technology Advanced Encryption Standard contest finalist—but it lost out to Rijndael. The Twofish algorithm works with key sizes of 128, 192, and 256 bits, and features a complex key structure that makes it difficult to crack.

Security experts regard Twofish as one of the fastest encryption algorithms, and it is an excellent choice for both hardware and software. Furthermore, the Twofish cipher is free for use by anyone.

It appears in some of the best free encryption software, such as VeraCrypt (drive encryption), PeaZip (file archives), and KeePass (open source password management), as well as the OpenPGP standard.

Why Not Make Your Own Encryption Algorithm?

You have seen some of the best (and now-defunct) encryption algorithms available. These algorithms are the best because they are essentially impossible to break (for the time being, at least).

But what about creating a homebrew encryption algorithm? Does creating a secure private system keep your data safe? In short, no! Or perhaps it’s better to say no, but…

The best encryption algorithms are mathematically secure, tested with a combination of the most powerful computers in conjunction with the smartest minds. New encryption algorithms go through a rigorous series of tests known to break other algorithms, as well as attacks specific to the new algorithm.

Take the AES algorithm, for instance:

  • NIST made the call for new encryption algorithms in September 1997.
  • NIST received 15 potential AES algorithms by August 1998.
  • At a conference in April 1999, NIST selected the five finalist algorithms: MARS, RC6, Rijndael, Serpent, and Twofish.
  • NIST continued to test and receive comments and instructions from the cryptographic community until May 2000.
  • In October 2000, NIST confirmed Rijndael as the prospective AES, after which another consultation period began.
  • Rijndael, as the AES, was published as a Federal Information Processing Standard in November 2001. The confirmation started validation testing under the Cryptographic Algorithm Validation Program.
  • AES became the official federal government encryption standard in May 2002.

You Don’t Have the Resources to Create a Strong Algorithm

So you see, the production of a truly secure, long-lasting, and powerful encryption algorithm takes time and in-depth analysis from some of the most powerful security organizations on the planet. Or as Bruce Schneier says:

“Anyone can invent an encryption algorithm they themselves can’t break; it’s much harder to invent one that no one else can break.”

And that is where the but comes in. Of course, you can write a program that takes your text, multiplies the alphabet value of each letter by 13, adds 61, and then sends it to a recipient.
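To show just how little is involved, here is a toy sketch of that exact scheme. It “works,” in the sense that the recipient can reverse it, which is also exactly why anyone else who sees a few messages can reverse it too.

```python
def toy_encrypt(text):
    """The homebrew scheme above: alphabet value of each letter * 13 + 61."""
    return [(ord(c) - ord("a")) * 13 + 61 for c in text.lower() if c.isalpha()]

def toy_decrypt(numbers):
    return "".join(chr((n - 61) // 13 + ord("a")) for n in numbers)

codes = toy_encrypt("hello")
print(codes)               # [152, 113, 204, 204, 243]
print(toy_decrypt(codes))  # hello
```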

The output is a mess, but if your recipient knows how to decrypt it, the system is functional. However, if you use your homebrew encryption in the wild, to send private or sensitive information, you’re going to have a bad time.

There’s a further if, too. If you want to learn about encryption and cryptography, experimenting with developing (and breaking) your own encryption algorithm is highly recommended. Just don’t ask anyone to use it!

Embrace Encryption and Don’t Reinvent the Wheel

Encryption is important. Understanding how it works is useful, but not imperative to use it. There are plenty of ways to encrypt your daily life with little effort.

What is imperative is realizing that our hyper-networked global community needs encryption to remain secure. There are, unfortunately, a large number of governments and government agencies that want weaker encryption standards. That must never happen.

Read the full article: 5 Common Encryption Types and Why You Shouldn’t Make Your Own


Read Full Article

Twitter acquires anti-abuse technology provider Smyte


Twitter this morning announced it has agreed to buy San Francisco-based technology company Smyte, which describes itself as “trust and safety as a service.” Founded in 2014 by former Google and Instagram engineers, Smyte offers tools to stop online abuse, harassment, and spam, and protect user accounts.

Terms of the deal were not disclosed, but this is Twitter’s first acquisition since buying consumer mobile startup Yes, Inc. back in December 2016.

Online harassment has been of particular concern to Twitter in recent months, as the level of online discourse across the web has become increasingly hate-filled and abusive. The company has attempted to combat this problem with new policies focused on the reduction of hate speech, violent threats, and harassment on its platform, but it’s fair to say that problem is nowhere near solved.

As anyone who uses Twitter will tell you, the site continues to be filled with trolls, abusers, bots, and scams – and especially crypto scams, as of late.

This is where Smyte’s technology – and its team – could help.

The company was founded by engineers with backgrounds in spam, fraud and security.

Smyte CEO Pete Hunt previously led Instagram’s web team, built Instagram’s business analytics products, and helped to open source Facebook’s React.js; co-founder Julian Tempelsman worked on Gmail’s spam and abuse team, and before that Google Wallet’s anti-fraud team and the Google Drive anti-abuse team; and co-founder Josh Yudaken was a member of Instagram’s core infrastructure team.

The startup launched out of Y Combinator in 2015, with a focus on preventing online fraud.

Today, its solutions are capable of stopping all sorts of unwanted online behavior, including phishing, spam, fake accounts, cyberbullying, hate speech and trolling, the company’s website claims.

Smyte offers customers access to its technology via a REST API, or it can pull data directly from a customer’s app or data warehouse to analyze. Smyte then imports the customer’s existing rules, and uses machine learning to create new rules and other machine learning models suited to the business’s specific needs.

Customers’ data scientists can also use Smyte to deploy (but not train) their own custom machine learning models.

Smyte’s system includes a dashboard where analysts can surface emerging trends in real-time, as well as conduct manual reviews of individual entities or clusters of related entities and take bulk actions.

Non-technical analysts could use Smyte to create custom rules tested on historical data, then roll them out to production and watch how they perform in real-time.

For Twitter, the use case for Smyte is obvious – its technology will be integrated with Twitter itself and its backend systems for monitoring and managing reports of abuse, while also taking aim at bots, scammers and a number of other threats today’s social networks typically face.

Of course, combatting abuse and bullying will remain Twitter’s most pressing area of concern – especially as it’s the place where President Trump tweets, and the daily news is reported and discussed (and angrily fought about).

But Twitter could use some help with its troll and bot problem, too. The company, along with Facebook, was home to Russian propaganda during the 2016 U.S. presidential election. In January, Twitter notified at least 1.4 million users that they had seen content created by Russian trolls; it was also found to have hosted roughly 50,000 Russian bots tweeting election-related content in November 2016.

Presumably, Smyte’s technology could help weed out some of these bad actors, if it works as well as described.

Twitter didn’t provide much detail as to how, specifically, it plans to put Smyte’s technology to use.

Instead, the company largely touted the team’s expertise and the “proactive” nature of Smyte’s anti-abuse systems, in today’s announcement:

From ensuring safety and security at some of the world’s largest companies to specialized domain expertise, Smyte’s years of experience with these issues brings valuable insight to our team. The Smyte team has dealt with many unique issues facing online safety and believes in the same proactive approach that we’re taking for Twitter: stopping abusive behavior before it impacts anyone’s experience. We can’t wait until they join our team to help us make changes that will further improve the health of the public conversation.

According to Smyte’s website, the company has a number of high-profile clients, including Indiegogo, GoFundMe, npm, Musical.ly, TaskRabbit, Meetup, OLX, ThredUp, YouNow, 99 Designs, Carousell, and Zendesk.

Twitter tells us that Smyte will wind down its operations with those customers – it didn’t acquire Smyte for its revenue-generation potential, but rather for its talent and IP.

 

LinkedIn reports there are only a couple dozen employees at Smyte today, including the founders, but Smyte’s own website lists just nineteen. Twitter wouldn’t confirm Smyte’s current headcount but says it’s working to find positions for all.

Terms of the deal were not disclosed, but Smyte had raised $6.3 million in funding from Y Combinator, Baseline Ventures, Founder Collective, Upside Partnership, Avalon Ventures, and Harrison Metal, according to Crunchbase.


Read Full Article