17 October 2019

Google Maps adds more Waze-like features, including driving incident reports


Google Maps is starting to look a lot more like Waze. Google today announced a series of new features that will allow drivers using the Maps app on iOS to report accidents, speed traps, and traffic jams. And on both iOS and Android, users will be able to report other driving hazards and incidents, like road construction, lane closures, disabled vehicles, and objects in the road — like debris. These are all core Waze features and among the primary reasons why many users opt for Waze over Google Maps.

Google had already offered accident, speed trap, and traffic slowdown reporting on Android before today.

The new updates follow a steady launch of Waze-like additions to the Google Maps app.

For example, Google launched speed limits and speed trap alerts in over 40 countries in Google Maps back in May. And it had been testing various driving hazard alerts before now. Google Maps had also previously adopted other Waze features, like the ability to add a stop to your route while in navigation mode, or the ability to view nearby gas prices.


When you’re navigating your route in Google Maps, you can tap to add a report, then choose from a long list that now includes: Crash, Speed Trap, Slowdown, Construction, Lane Closure, Disabled Vehicle, and Object on Road.

With the additions, Google is chipping away at the many reasons why people still turn to Waze.

However, Waze is still better for planning a trip by connecting to your personal calendar or Facebook events, while Google Maps has instead focused more on helping users plan their commutes. Waze is also more social and includes a carpooling service.

The benefit of more users switching to Maps means more aggregate data to help power Google’s other products. Data collection from Google Maps is behind features like those that show the wait times, popular times and visit duration at local businesses, for example. Plus, Google Maps is a jumping off point for Google’s My Business platform, which has more recently been challenging Facebook Pages by allowing Maps users to follow their favorite businesses to track promotions and events, and even message the businesses directly.

Google says the new Google Maps features start rolling out globally on Android and iOS this week.

 


Read Full Article

Video Architecture Search




Video understanding is a challenging problem. Because a video contains spatio-temporal data, its feature representation must abstract both appearance and motion information. This is not only essential for automated understanding of the semantic content of videos, such as web-video classification or sport activity recognition, but is also crucial for robot perception and learning. Just as for humans, the input from a robot’s camera is seldom a static snapshot of the world; it takes the form of a continuous video.

The abilities of today’s deep learning models are greatly dependent on their neural architectures. Convolutional neural networks (CNNs) for videos are normally built by manually extending known 2D architectures such as Inception and ResNet to 3D, or by carefully designing two-stream CNN architectures that fuse together both appearance and motion information. However, designing an optimal video architecture to best take advantage of spatio-temporal information in videos still remains an open problem. Although neural architecture search (e.g., Zoph et al., Real et al.) to discover good architectures has been widely explored for images, machine-optimized neural architectures for videos have not yet been developed. Video CNNs are typically computation- and memory-intensive, and designing an approach to efficiently search for them while capturing their unique properties has been difficult.

In response to these challenges, we have conducted a series of studies into automatic searches for more optimal network architectures for video understanding. We showcase three different neural architecture evolution algorithms: learning layers and their module configuration (EvaNet); learning multi-stream connectivity (AssembleNet); and building computationally efficient and compact networks (TinyVideoNet). The video architectures we developed outperform existing hand-made models on multiple public datasets by a significant margin, and demonstrate a 10x~100x improvement in network runtime.

EvaNet: The first evolved video architectures
EvaNet, which we introduce in “Evolving Space-Time Neural Architectures for Videos” at ICCV 2019, is the very first attempt to design neural architecture search for video architectures. EvaNet is a module-level architecture search that focuses on finding types of spatio-temporal convolutional layers as well as their optimal sequential or parallel configurations. An evolutionary algorithm with mutation operators is used for the search, iteratively updating a population of architectures. This allows for parallel and more efficient exploration of the search space, which is necessary for video architecture search to consider diverse spatio-temporal layers and their combinations. EvaNet evolves multiple modules (at different locations within the network) to generate different architectures.
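To make the idea of mutation-driven search concrete, here is a minimal sketch of an evolutionary loop over module configurations. It is an illustration under our own simplifying assumptions, not EvaNet’s actual code: the layer types echo those in the figure below, but the mutation operators, tournament selection, and toy fitness function stand in for training and evaluating each candidate video CNN.

```python
import random

# Layer types are illustrative, echoing the figure below; a real search would
# also evolve filter sizes, module placement, and sequential/parallel wiring.
LAYER_TYPES = ["3d_conv", "2+1d_conv", "itgm", "max_pool"]

def random_layer():
    return {"type": random.choice(LAYER_TYPES),
            "temporal_size": random.choice([1, 3, 5, 7, 9, 11])}

def random_module(max_parallel=4):
    # A module is a set of layers applied in parallel and then combined.
    return [random_layer() for _ in range(random.randint(1, max_parallel))]

def mutate(module):
    """One mutation operator: add, remove, or alter a parallel layer."""
    module = [dict(layer) for layer in module]
    op = random.choice(["add", "remove", "alter"])
    if op == "add":
        module.append(random_layer())
    elif op == "remove" and len(module) > 1:
        module.pop(random.randrange(len(module)))
    else:
        module[random.randrange(len(module))] = random_layer()
    return module

def evolve(fitness, population_size=20, rounds=200):
    """Iteratively update a population of candidate modules, keeping the fittest."""
    population = [random_module() for _ in range(population_size)]
    for _ in range(rounds):
        parent = max(random.sample(population, 3), key=fitness)  # tournament
        child = mutate(parent)
        worst = min(range(len(population)), key=lambda i: fitness(population[i]))
        population[worst] = child  # replace the weakest member
    return max(population, key=fitness)

# Stand-in fitness; the real objective is validation accuracy of the trained
# video CNN built from the evolved modules.
toy_fitness = lambda m: len({layer["type"] for layer in m}) + 0.1 * len(m)
print(evolve(toy_fitness))
```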

Our experimental results confirm the benefits of such video CNN architectures obtained by evolving heterogeneous modules. The approach often finds that non-trivial modules composed of multiple parallel layers are most effective as they are faster and exhibit superior performance to hand-designed modules. Another interesting aspect is that we obtain a number of similarly well-performing, but diverse architectures as a result of the evolution, without extra computation. Forming an ensemble with them further improves performance. Due to their parallel nature, even an ensemble of models is computationally more efficient than the other standard video networks, such as (2+1)D ResNet. We have open sourced the code.
Examples of various EvaNet architectures. Each colored box (large or small) represents a layer with the color of the box indicating its type: 3D conv. (blue), (2+1)D conv. (orange), iTGM (green), max pooling (grey), averaging (purple), and 1x1 conv. (pink). Layers are often grouped to form modules (large boxes). Digits within each box indicate the filter size.
AssembleNet: Building stronger and better (multi-stream) models
In “AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures”, we look into a new method of fusing different sub-networks with different input modalities (e.g., RGB and optical flow) and temporal resolutions. AssembleNet is a “family” of learnable architectures that provide a generic approach to learn the “connectivity” among feature representations across input modalities, while being optimized for the target task. We introduce a general formulation that allows representation of various forms of multi-stream CNNs as directed graphs, coupled with an efficient evolutionary algorithm to explore the high-level network connectivity. The objective is to learn better feature representations across appearance and motion visual cues in videos. Unlike previous hand-designed two-stream models that use late fusion or fixed intermediate fusion, AssembleNet evolves a population of overly-connected, multi-stream, multi-resolution architectures while guiding their mutations by connection weight learning. We are looking at four-stream architectures with various intermediate connections for the first time — two streams each for RGB and optical flow, each at a different temporal resolution.
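To make the notion of learnable “connectivity” concrete, the sketch below shows one way a node in such a directed graph could fuse its incoming streams through learnable connection weights. This is a hedged PyTorch illustration under our own naming, not the AssembleNet implementation.

```python
import torch
import torch.nn as nn

class WeightedFusionNode(nn.Module):
    """One node in a multi-stream graph: fuses its incoming edges with
    learnable connection weights, then applies a small spatio-temporal block.
    Illustrative sketch only; names and block contents are our assumptions."""
    def __init__(self, num_inputs, channels):
        super().__init__()
        # One learnable scalar per incoming edge; a sigmoid keeps it in (0, 1).
        self.edge_weights = nn.Parameter(torch.zeros(num_inputs))
        self.block = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, inputs):
        # inputs: list of tensors (one per incoming edge), same shape/channels.
        weights = torch.sigmoid(self.edge_weights)
        fused = sum(w * x for w, x in zip(weights, inputs))
        return self.block(fused)
```

In this spirit, edges whose learned weights stay near zero are natural candidates for pruning or rewiring in the next round of mutations, which is how connection weight learning can guide the evolution.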

The figure below shows an example of an AssembleNet architecture, found by evolving a pool of random initial multi-stream architectures over 50~150 rounds. We tested AssembleNet on two very popular video recognition datasets: Charades and Moments-in-Time (MiT). Its performance on MiT is the first above 34%. The performance on Charades is even more impressive at 58.6% mean Average Precision (mAP), whereas the previous best known results are 42.5 and 45.2.
The representative AssembleNet model evolved using the Moments-in-Time dataset. A node corresponds to a block of spatio-temporal convolutional layers, and each edge specifies their connectivity. Darker edges mean stronger connections. AssembleNet is a family of learnable multi-stream architectures, optimized for the target task.
A figure comparing AssembleNet with state-of-the-art, hand-designed models on Charades (left) and Moments-in-Time (right) datasets. AssembleNet-50 or AssembleNet-101 has an equivalent number of parameters to a two-stream ResNet-50 or ResNet-101.
Tiny Video Networks: The fastest video understanding networks
In order for a video CNN model to be useful for devices operating in a real-world environment, such as those needed by robots, real-time, efficient computation is necessary. However, achieving state-of-the-art results on video recognition tasks currently requires extremely large networks, often with tens to hundreds of convolutional layers, that are applied to many input frames. As a result, these networks often suffer from very slow runtimes, requiring 500+ ms per 1-second video snippet on a contemporary GPU and 2000+ ms on a CPU. In Tiny Video Networks, we address this by automatically designing networks that provide comparable performance at a fraction of the computational cost. Our Tiny Video Networks (TinyVideoNets) achieve competitive accuracy and run efficiently, at real-time or better speeds, within 37 to 100 ms on a CPU and 10 ms on a GPU per ~1 second video clip, running hundreds of times faster than other contemporary human-designed models.

These performance gains are achieved by explicitly considering the model runtime during the architecture evolution and forcing the algorithm to explore the search space while including spatial or temporal resolution and channel size to reduce computation. The figure below illustrates two simple but very effective architectures found by TinyVideoNet. Interestingly, the learned model architectures have fewer convolutional layers than typical video architectures: Tiny Video Networks prefer lightweight elements such as 2D pooling, gating layers, and squeeze-and-excitation layers. Further, TinyVideoNet is able to jointly optimize parameters and runtime to provide efficient networks that can be used by future network exploration.
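One way to fold runtime into such a search, sketched here under our own assumptions rather than from the paper’s code, is to time a candidate’s forward pass and penalize or reject architectures that exceed a budget:

```python
import time
import torch

def measure_runtime_ms(model, clip, repeats=10):
    """Average CPU forward-pass time for one video clip, in milliseconds."""
    model.eval()
    with torch.no_grad():
        model(clip)  # warm-up
        start = time.perf_counter()
        for _ in range(repeats):
            model(clip)
    return (time.perf_counter() - start) / repeats * 1000.0

def runtime_constrained_fitness(model, clip, accuracy, budget_ms=100.0):
    """Reward accuracy, but push the search away from slow architectures.
    The budget and penalty weight are illustrative, not the paper's values."""
    runtime = measure_runtime_ms(model, clip)
    if runtime > budget_ms:
        return accuracy - 0.01 * (runtime - budget_ms)  # or reject outright
    return accuracy
```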
TinyVideoNet (TVN) architectures evolved to maximize the recognition performance while keeping its computation time within the desired limit. For instance, TVN-1 (top) runs at 37 ms on a CPU and 10ms on a GPU. TVN-2 (bottom) runs at 65ms on a CPU and 13ms on a GPU.
CPU runtime of TinyVideoNet models compared to prior models (left) and runtime vs. model accuracy of TinyVideoNets compared to (2+1)D ResNet models (right). Note that TinyVideoNets take a part of this time-accuracy space where no other models exist, i.e., extremely fast but still accurate.
Conclusion
To our knowledge, this is the very first work on neural architecture search for video understanding. The video architectures we generate with our new evolutionary algorithms outperform the best known hand-designed CNN architectures on public datasets, by a significant margin. We also show that learning computationally efficient video models, TinyVideoNets, is possible with architecture evolution. This research opens new directions and demonstrates the promise of machine-evolved CNNs for video understanding.

Acknowledgements
This research was conducted by Michael S. Ryoo, AJ Piergiovanni, and Anelia Angelova. Alex Toshev and Mingxing Tan also contributed to this work. We thank Vincent Vanhoucke, Juhana Kangaspunta, Esteban Real, Ping Yu, Sarah Sirajuddin, and the Robotics at Google team for discussion and support.

The Information will launch Ticker, a tech news app that costs $29 per year


Since it was founded by journalist Jessica Lessin in 2013, The Information has stood out in the tech news landscape for its focus on an ad-free, subscription-driven business model (a focus that seems increasingly prescient).

Now, the upcoming launch of an app called Ticker suggests that the company is looking to expand its audience while maintaining that subscription model.

The Information describes Ticker as its first consumer app. The assumption is that anyone who’s currently paying the $399 annual fee for an Information subscription needs it for their job — whether they’re an investor, entrepreneur or some other professional in the tech industry.

The new app, meanwhile, is designed for anyone who might be interested in keeping up-to-date with the latest tech news, and it’s priced much more affordably, at $29 per year. (Information subscribers will get access as well.)

The Information ticker app

Apparently the app was inspired by the Briefing section of The Information website, which offers quick summaries (usually drawn from reporting by other publications) of major tech news.

Ticker, meanwhile, will include a section called Today with summaries of the day’s tech headlines — similar to Briefing, but written for a consumer audience. It will also include a calendar highlighting upcoming IPOs, conferences and other events that readers might want to know about. (Not included: The Information’s full articles and original reporting.)

“More and more, we’ve been hearing from readers who don’t have a business reason to follow tech but are finding it more and more central to their lives,” Lessin said in a statement. “We are launching Ticker for them — giving them access to the best summaries of the most significant news, written by our team at The Information.”

The company plans to launch Ticker later this fall. In the meantime, you can sign up here.


Read Full Article

Logitech’s MX Master 3 mouse and MX Keys keyboard should be your setup of choice


Logitech recently introduced a new mouse and keyboard, the MX Master 3 ($99.99) and MX Keys ($99.99) respectively. Both devices borrow a lot from other, older hardware in Logitech’s lineup – but they build on what the company has gotten really right with input devices, and add some great new features to make these easily the best option out there when it comes to this category of peripherals.

Logitech MX Keys

This new keyboard from Logitech inherits a lot from the company’s previous top-of-the-line keyboard aimed at creatives, the Logitech Craft keyboard. It looks and feels a lot like the premium Craft – minus the dial that Logitech placed at the top of that keyboard, which worked with companion software to offer a variety of different controls for a number of different applications.

The Craft’s dial was always a bit of a curiosity, and while probably extremely useful for certain creative workflows where having a tactile dial control makes a lot of sense (for scrubbing a video timeline during editing, for instance), in general the average user probably isn’t going to need or use it much.

The MX Keys doesn’t have the Craft’s dial, and it takes up less space on your desk as a result. It also costs $70 less than the Craft, which is probably something most people would rather have than the unique controller. The MX Keys still has excellent key travel and typing feel, like its bigger sibling, and it also has smart backlighting that turns on automatically when your hand approaches the keys – and which you can adjust or turn off to suit your preference and extend battery life.

MX Keys has a built-in battery that charges via USB-C, and provides up to 10 days of use on a full charge when using the backlight, or up to five months if you disable the backlight entirely. For connectivity, you get both Bluetooth and Logitech’s USB receiver, which can also connect to other Logitech devices like the MX Master series of mice.

The keyboard can connect to up to three devices at once, with dedicated buttons to switch between them. It supports Windows, Mac, Linux, Android and iOS out of the box, and has multi-marked keys to make it easier to transition between operating systems. Plus, when you’re using the MX Keys in tandem with the MX Master 3 or other Logitech mice that support its Flow software, you can transition seamlessly between computers and even operating systems, for doing things like copying and pasting files.

At $99.99, the MX Keys feels like an incredible value, since it offers very premium-feeling hardware in an attractive package, with a suite of features that’s hard to match in a keyboard from anyone else – including first-party peripherals from Microsoft and Apple.

Logitech MX Master 3

When it comes to mice, there are few companies that can match Logitech’s reputation or record. The MX Master series in particular has won plenty of fans – and for good reason.

The MX Master 3 doesn’t re-invent the wheel – except that it literally does, in the case of the scroll wheel. Logitech has introduced a new scroll wheel with ‘MagSpeed’ technology that switches automatically between fluid scrolling and more fine-grained, pixel-precise control. The company claims the new design is 90 percent faster and 87 percent more precise than its previous scroll wheel, which is pretty much an impossible claim to verify through standard use. That said, it does feel like a better overall scrolling experience, and the claim that it’s now ‘ultra quiet’ is easy to confirm.

Logitech has also tweaked the shape of the mouse, with a new silhouette it says is better suited to matching the shape of your palm. That new shape is complemented by a new thumb scroll wheel, which has always been a stellar feature of the Master series and which, again, does feel better in actual use, though it’s difficult to put your finger on exactly why. Regardless, it feels better than the Master 2S, and that’s all that really matters.

In terms of tracking, Logitech’s Darkfield technology is here to provide effective tracking on virtually all surfaces. It tracks at 4,000 DPI, which is industry-leading for accuracy, and you can adjust sensitivity, scroll direction and other features in Logitech’s desktop software. The MX Master 3 also supports up to three devices at once, and works with Flow to copy and paste between different operating systems.

One of the most noteworthy changes on the MX Master 3 is that it gains USB-C for charging, replacing Micro USB, which is fantastic news for owners of modern Macs who want to simplify their cable lives and just stick with one standard where possible. Since that matches up with the USB-C used on the MX Keys, that means you can just use one cable for charging both when needed. The MX Master 3 gets up to 70 days on a full charge, and you can gain 3 hours of use from a fully exhausted battery with just one minute of charging.

Bottom line

Logitech has long been a leader in keyboards and mice for very good reason, and the company’s ability to iterate on its existing successes with smart, sensible improvements is impressive. The MX Keys is probably the best keyboard within its price range that you can get right now – and better than a lot of more premium-priced hardware. The MX Master 3 is without a doubt the only mouse I’d recommend for most people, especially now that it offers USB-C charging alongside its terrific feature set. Combined, they’re a powerful desktop pair for work, creative and general use.


Read Full Article

How a handful of fishing villages sparked a marine conservation revolution | Alasdair Harris


We need a radically new approach to ocean conservation, says marine biologist Alasdair Harris. In a visionary talk, he lays out a surprising solution to the problem of overfishing that could both revive marine life and rebuild local fisheries — all by taking less from the ocean. "When we design it right, marine conservation reaps dividends that go far beyond protecting nature," he says.

Click the above link to download the TED talk.

Farewell, Google Clips


Amid a slew of updated hardware, Clips has gone missing from Google’s online store. Odds are you don’t remember what Clips is. If you do, odds are you’re not surprised by this turn of events.

We’ve reached out to the company to confirm whether this is, indeed, definitively the end for the niche device. All I can say for now is that the future doesn’t look bright for a product that neither reviewers, consumers nor Google itself ever quite figured out. One thing the company knew for sure was that the Clips was unequivocally not a life-logging camera. The answer to what it was, however, was a far more difficult one.

The device was a kind of showcase for the company’s AI technologies, designed to capture candid life moments, so users weren’t stuck behind their cameras. I reviewed it, and if nothing else got this fun GIF of my rabbit, Lucy:

[GIF: Lucy the rabbit, captured by Google Clips]

So not a total loss, I guess. Certainly not enough to justify paying $249, however. One colleague jokingly asked me ahead of this week’s Pixel event whether a Clips 2 was on the way. I suppose we know the answer now.

The discovery follows news that the company has discontinued its Daydream View VR headset. Such is the Google circle of life. The lukewarmly reviewed first-gen Pixel Buds have been pulled from the store, as well. That line, at least, still has a future.


Read Full Article

Sentons launches SurfaceWave, a processor and tech to create software-defined surfaces that supercharge touch and gesture


Handset makers continue to work on ways of making smartphones more streamlined and sleek, while at the same time introducing new features that will get people buying more devices. Now a startup that is pioneering something called “software-defined” surfaces — essentially, using ultrasound and AI to turn any kind of material, and any kind of surface, into one that will respond to gestures, touch and other forces — is setting out its stall to help them and other hardware makers change up the game.

Sentons, the startup out of Silicon Valley that is building software-defined surface technology, is today announcing the launch of SurfaceWave, a processor and accompanying gesture engine that can be used in smartphones and other hardware to create virtual wheels and buttons to control and navigate apps and features on the devices themselves. The SurfaceWave processor and engine are available to “any mobile manufacturer.”

Before this, Sentons had already inked direct deals to test out market interest in its technology. Three smartphones using it have already been released: two were sold only in Asia (models and customer names undisclosed by Sentons), and one is the Republic of Gamers phone made by Asus in partnership with Tencent (its Air Triggers are powered by Sentons). Jess Lee, the company’s CEO, told me in an interview that there are another 10-12 devices “in process” right now, due to be released in coming cycles. He would not comment on whether his former employer is one of them.

Sentons has actually been around since 2011 but very much under the radar until this year, when it announced that Lee — who had been at Apple, after his previous company, the cutting-edge imaging startup InVisage, was acquired by the iPhone maker — was coming on as CEO.

The company has quietly raised about $35 million from two investors, including NEA, and Lee confirmed to me that it’s currently raising another, probably larger, round. (Given the company’s partnership with Tencent and Asus, those are two companies I would think are candidates as strategic investors.)

The sound of silence

Sentons’ core idea is focused around sound — specifically, ultrasound.

Its system is based around a processor that emits ultrasonic “pings” (similar to a sonar array, the company says, like those used on submarines to navigate and communicate) to detect physical movement and force on the surface of an object. The company says that this technique is much more sophisticated than the capacitive touch used on smartphones up to now, since combined with Sentons’ algorithms it can measure force and intent as well as touch.

In addition to the processor that emits the pings and houses the gesture engine, Sentons uses “sensor modules” around the perimeter of a device to detect when those pings are interrupted. The system trains itself and can adjust both to temporary “buttons” and to unintended changes, like when a screen cracks and your gestures move over to a different area of the phone.

Gaming — the main use case for Asus’s ROG phone — is an obvious category ripe for software-defined surfaces. The medium always strives for more immersive experiences, and as more games are either natively made for phones, or ported there because of the popularity of mobile gaming, handset makers and publishers are always trying to come up with ways to enhance what is, ultimately, very limited real estate (even with larger screens). Using any and all parts of a device to experience motion and other physical responses, and to control the game, is a natural fit for what Sentons has built.

But the bigger picture and longer term goal is to apply Sentons’ technology for other uses on devices — photography and building enhanced camera tools is one obvious example — and on other “hardware,” like connected cars, clothes and even the human body, since Sentons’ technology can also work on and through human tissue.

“Every surface is an opportunity,” Lee said, noting that conversations around health and medical technology are still very early, while other areas like wearables and automotive are seeing “engagement” already. “In the cabin of a vehicle, you have a wealth of tactile materials, whether it’s leather dashboards or metal buttons, and all of those are extremely interesting to us,” he added.

At the same time, the more immediate opportunity for Sentons is the mobile industry.

Smartphone sales have slowed down, and for some vendors declined, in recent years; and while some of that might have to do with premium device prices continuing to climb, and much higher smartphone penetration globally, some have laid the blame in part on a lack of innovation. Specifically, newer phones are just not providing enough “must have” new features to merit making a purchase of a new device if you already have one.

You could argue that making a technology like this widely available and open to all comers might make those who are trying to make their devices stand out with special features less inclined to jump on the bandwagon.

“Yes, you could say there is more value in scarcity, an approach we took in the last company,” Lee said, referring to InVisage and how very under the radar it was before being snapped up by Apple.

However, he thinks a different approach is needed here. “Whether we launched this platform to everyone or not, the gates have opened, the piñata has broken, and we see a lot more opportunities and want to go for them,” he said.

“You can call it a multi-pronged approach,” he continued, “but ensuring the adoption of software-defined interactions [by trying to work with as many companies as possible] gets the technology or use out there quickly.” He noted that when a new gesture is introduced on devices, it can take time for the world to absorb it, “and we are positive there will be followers, perhaps with different technology, that will compete with us, so a broad launch is what we are going for.”


Read Full Article

Samsung confirms glaring S10 fingerprint reader flaw, promises fix


Galaxy S10 users should turn on some alternative security features as Samsung works to address a major flaw with the device’s in-screen fingerprint sensor. The consumer electronics giant noted the issue today after a British user reported the ability to unlock her device with unregistered fingerprints.

The flaw was discovered after placing a $3.50 screen protector on the device, confirming earlier reports that adding one could introduce an air gap that interfered with the ultrasonic scanner. The company noted the issue in a statement, telling the press that it was, “aware of the case of S10’s malfunctioning fingerprint recognition and will soon issue a software patch.”

Third party companies including Korean bank KaKaoBank have suggested users turn off the reader until the issue is addressed. That certainly appears to be the most logical course of action until the next software update.

When the phone hit the market back in March, the company touted the technology as one of the industry’s most secure biometric features, noting that it was “engineered to be more secure than a traditional 2D optical scanner, the industry-first Ultrasonic Fingerprint ID, with sensors embedded in the display, reads the 3D contours of your physical fingerprint to keep your phone and data safe. This advanced biometric security technology earned the Galaxy S10 the world’s first FIDO Alliance Biometric Component certification.”

Samsung has previously warned against the use of screen protectors, but the ability to fool the sensor with a cheap, off-the-shelf mobile accessory clearly presents a major and unexpected security concern for Galaxy users. We’ve reached out to Samsung for further comment.


Read Full Article

Snapchat goes after retailers and DTC brands with new Dynamic Ads


Snap today is announcing a new kind of advertising product, Dynamic Ads, that will help it to better attract ad dollars from retail, e-commerce, and other direct-to-consumer brands — a group that today thrives on Instagram. With Dynamic Ads, advertisers can now automatically create ads in real-time based on extensive product catalogs that may contain hundreds of thousands of products. These ads are then served to Snapchat users based on their interests using a variety of templates provided by Snap.

These templates have been designed for mobile, Snap says, and will help the advertiser save time as they won’t have to manually create their ads. Instead, they just sync their product catalog and allow Snap’s system to build the ad in real-time. As product availability or prices change, the ads will also adjust.

The move to better serve advertisers in the retail and direct-to-consumer (DTC) space comes at a time when many DTC brands have been increasingly turning to Snapchat as Instagram has grown too crowded. Advertisers have complained about saturation and higher ad prices there. Snap, meanwhile, has targeted this category of advertisers with a growing number of tools. The result, according to some DTC brands, was ads that were eight times cheaper than on Instagram.


The Dynamic Ads are the latest in a long line of new ad products and tools. Since Snap launched its Ads Manager two years ago, it has rolled out new ad types, integrations, buying types, and more, including Snap Pixel, Product Ads, advanced optimization, reach & frequency buying, quick Instant Create ads, Shopify integrations, and others aimed at video marketers, like the premium Snap Select program and the non-skip, six-second video Commercials.

More recently, it’s been focused on making ad creation easier. In July, Snap launched an “instant” tool called Instant Create that would help advertisers who were not used to creating ads for the smartphone-friendly vertical format. This ad tool would generate an ad from a brand’s existing assets, like an e-commerce storefront, in just three steps.


The new Dynamic Ads will be even simpler, in a way, as advertisers will be able to build “always-on” campaigns that don’t need constant updating.

That being said, the ads risk being a little more generic. Once these templated ads spread across Snapchat, it may be harder for the products being sold to stand out from others. After all, Instagram DTC ads often succeed because of the creative ad collateral involved, or the storytelling, which goes beyond just showcasing product photos. Instagram also allows brands to connect with a wide variety of influencers to promote the products.

Snap has clearly thought about this issue, though, as its Dynamic Ads can use the same product image across five different template styles.


Snapchat believes it can do well in this space because it can better deliver the millennial audience. The company claims that 38% of Snapchat users 16 and up can’t be reached on Instagram on a daily basis, and 49% can’t be reached daily on Facebook. Snapchat, meanwhile, reaches over 90% of 13- to 24-year-olds in the U.S. And its user base is highly engaged with the app, which gives advertisers more opportunity to reach them.


“Snapchat has become a go-to destination to reach the largest and most economically influential generations in history, Millennials and Gen Z. Snapchat Dynamic Ads now allow brands to create real-time optimized mobile ads quickly and at scale, with products showcased in visually-appealing templates that feel native to the app,” said Snap’s Kathleen Gambarelli, Group Product Marketing Manager, Direct Response, in a statement.

“More than 75% of the 13-34-year-old U.S. population is active on Snapchat, and daily Snapchat users open the app over 20 times each day, offering brands major opportunities to reach the right person with the right message at the right time,” she added.

Interested advertisers will be able to start setting up their campaigns today in an open beta test, and these will begin running in one or two weeks’ time. Dynamic Ads will be available worldwide for all Snapchat advertisers, but campaigns will only reach U.S. users to start. Snap says it will expand to global markets in the coming months.



Read Full Article

Microsoft accessibility grants go out to companies aiming to improve tech for the disabled


The tech world has a lot to offer those with disabilities, but it can be hard to get investors excited about the accessibility space. That’s why Microsoft’s AI for Accessibility grants are so welcome: equity-free Azure credits and cash for companies looking to adapt AI to the needs of those with disabilities. The company just announced ten more, including ObjectiveEd, a startup building education tools for the blind.

The grant program was started a while back with a $5 million, 5-year mission to pump a little money into deserving startups and projects — and get them familiar with Microsoft’s cloud infrastructure, of course.

Applications are perennially accepted, and “anybody who wants to explore the value of AI and machine learning for people with disabilities is welcome to apply,” said Microsoft’s Mary Bellard. As long as they have “great ideas and roots in the disability community.”

Among the grantees this time around is ObjectiveEd, which I wrote about earlier this year. The company is working on an iPad-based elementary school curriculum for blind and low-vision students that’s also accessible to sighted kids and easy for teachers to deploy.

Part of that, as you might guess, is braille. But there aren’t nearly enough teachers capable of teaching braille for all the students who need to learn it, and the most common technique is very hands-on: a student reads braille (on a hardware braille display) out loud and a teacher corrects them. Depending on whether a student has access to the expensive braille display and a suitable tutor at home, that can mean as little as an hour a week dedicated to these crucial lessons.


A refreshable braille display for use with apps like ObjectiveEd’s.

“We thought, wouldn’t it be cool if we could send a sentence to the braille display, have the student speak the words out loud, then have Microsoft’s Azure Services translate that to text and compare that to the braille display, then correct the student if necessary and move on. All within the context of a game, to make it fun,” said ObjectiveEd founder Marty Schultz.

And that’s just what the company’s next app does. Speech-to-text accuracy is high enough now that it can be used for a variety of educational and accessibility purposes, so all it will take for a student to get some extra time in on their braille lessons is an iPad and braille display — admittedly more than a thousand dollars’ worth of hardware, but no one ever said being blind was cheap.
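As a rough sketch of that practice loop (not ObjectiveEd’s actual code), the snippet below uses Azure’s standard Speech SDK for Python to transcribe what a student reads aloud and compare it against the sentence sent to the braille display. The helper name and the word-level comparison are our own illustrative choices.

```python
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

def check_reading(expected_sentence, speech_key, region):
    """Listen once on the default microphone and compare the transcription
    to the sentence the student was asked to read from the braille display."""
    config = speechsdk.SpeechConfig(subscription=speech_key, region=region)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)

    result = recognizer.recognize_once()
    if result.reason != speechsdk.ResultReason.RecognizedSpeech:
        return False, ""

    # Normalize case and punctuation before comparing word by word.
    normalize = lambda s: [w.strip(".,!?").lower() for w in s.split()]
    return normalize(result.text) == normalize(expected_sentence), result.text
```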

Braille literacy is dropping, and, I suggested, no surprise there: With pervasive and effective audio interfaces, audio books, and screen readers, there are fewer times when blind and low-vision people truly need braille. But as Schultz and Bellard both pointed out, while it’s great to be able to rely on audio for media consumption, for serious engagement with the written word and many educational purposes braille is either necessary or a very useful alternative to speech.

Both Schultz and Bellard noted that they are not trying to replace teachers at all — “Teachers teach, we help kids practice,” Schultz said. “We’re not experts in teaching, but we can follow their advice to make these tools useful to students.”

There are ten other grantees in this round of Microsoft’s program, covering a wide variety of approaches and technologies. I like the SmartEar, for instance, which listens for things like doorbells or alarms and alerts deaf people of them via their smartphone.

And City University of London has a great idea in personalizing object recognition. It’s pretty straightforward for a computer vision system to recognize a mug or keychain on a table. But for a blind person it’s more useful if a system can identify their mug or keychain, and then perhaps say, it’s on the brown table left of the door, or what have you.

Here are the ten grantees besides ObjectiveEd (descriptions provided by Microsoft, as I wasn’t able to investigate each one, but may in the future):

  • AbiliTrek: A platform for the disability community to rate and review the accessibility of any establishment, with the ability to tailor search results to the specific needs of any individual.
  • Azur Tech Concept – SmartEar: A service that actively listens for environmental sounds (e.g., doorbell, fire alarm, phone call) and retransmits them as colored flashes on small portable boxes or a smartphone to support the deaf community.
  • Balance for Autism – Financial Accessibility: An interactive program which provides information and activities designed to better match people with programs and services.
  • City University of London – The ORBIT: Developing a data set to train AI systems for personalizing object recognition, which is becoming increasingly important for tools used by the blind community.
  • Communote – BeatCaps: A new form of transcription that uses beat tracking to generate subtitles that visualize the rhythm of music. These visualizations allow the hard of hearing to experience music.
  • Filmgsindl GmbH – EVE: A system that recognizes speech and generates automatic live subtitles for people with a hearing disability.
  • Humanistic Co-Design: A cooperative of individuals, organizations and institutions working together to increase awareness about how designers, makers, and engineers can apply their skills in collaboration with people who have disabilities.
  • iMerciv – MapinHood: A Toronto-based startup developing a navigation app for pedestrians who are blind or have low vision and want to choose the routes they take if they’re walking to work, or to any other destination.
  • inABLE and I-Stem – I-Assistant: A service that uses text-to-speech, speech recognition, and AI to give students a more interactive and conversational alternative to in-person testing in the classroom.
  • Open University – ADMINS: A chatbot that provides administrative support for people with disabilities who have difficulty filling out online academic forms.

The grants will take the form of Azure credits and/or cash for immediate needs like user studies and keeping the lights on. If you’re working on something you think might be a good match for this program, you can apply for it right here.


Read Full Article

How Artificial Intelligence Will Shape the Future of Malware



As we move into the future, the prospect of AI-driven systems becomes more appealing. Artificial Intelligence will help us make decisions, power our smart cities, and—unfortunately—infect our computers with nasty strains of malware.

Let’s explore what the future of AI means for malware.

What Is AI in Malware?

When we use the term “AI-driven malware,” it’s easy to imagine a Terminator-style case of an AI “gone rogue” and causing havoc. In reality, a malicious AI-controlled program wouldn’t be sending robots back through time; it would be sneakier than that.

AI-driven malware is conventional malware altered via Artificial Intelligence to make it more effective. AI-driven malware can use its intelligence to infect computers faster or make attacks more efficient. Instead of being a “dumb” program that follows pre-set code, AI-driven malware can think for itself—to an extent.

How Does AI Enhance Malware?

There are several ways that Artificial Intelligence can enhance malware. Some of these methods are still theoretical, while others have already been demonstrated in the real world.

Targeted Ransomware Demonstrated by DeepLocker

One of the scariest AI-driven malware examples is DeepLocker. Thankfully, IBM Research developed the malware as a proof-of-concept, so you won’t find it in the wild.

The concept of DeepLocker was to demonstrate how AI can smuggle ransomware into a target device. Malware developers can do a “shotgun spread blast” against a company with ransomware, but there’s a high chance they won’t manage to infect the essential computers. As such, the alert may go up too soon for the malware to reach the most prominent targets.

DeepLocker was teleconferencing software that smuggled in a unique strain of WannaCry. It didn’t activate the payload, though; instead, it would merely perform its duties as a teleconferencing program.

As it did its job, it would scan the faces of the people that used it. Its goal was to infect a specific person’s computer, so it monitored everyone as they used the software. When it detected the target’s face, it would activate the payload and cause the PC to be locked down by WannaCry.

Adaptive Worms That Learn From Detection

One theoretical use of AI in malware is a worm that “remembers” every time an antivirus detects it. Once it knows what actions cause an antivirus to spot it, it then stops performing that action and finds another way to infect the PC.

This is particularly dangerous, as modern-day antivirus tends to run off strict rules and definitions. That means all a worm needs to do is find a way in that doesn’t trip the alarm. Once it does, it can inform the other strains about the hole in the defense, so they can infect other PCs more easily.

Independence From the Developer

Modern-day malware is quite “dumb;” it can’t think by itself or make decisions. It performs a series of tasks that the developer gave it before the infection happened. If the developer wants the software to do something new, they have to broadcast the next list of instructions to their malware.

This center of communication is called a “command and control” (C&C) server, and it has to be hidden very well. If the server is discovered, it could lead back to the hacker, often ending with arrests.

If the malware can think for itself, however, there is no need for a C&C server. The developer unleashes the malware and sits back as the malware does all the work. This means the developer doesn’t need to risk outing themselves while giving commands; they can just “set and forget” their malware.

Monitoring User Voices for Sensitive Information

If AI-driven malware gains control of a target’s microphone, it can listen in and record what people are saying nearby. The AI then sifts through what it heard, transcribes it into text, and sends the text back to the developer. This makes life easier for the developer, who doesn’t have to sit through hours of audio recordings to find trade secrets.

How Can a Computer “Learn?”

Malware can learn from its actions through what’s called “machine learning.” This is a specific area of AI, related to how computers can learn from their efforts. Machine learning is useful for AI developers because they don’t need to code for every scenario. They let the AI know what’s right and what’s not, then let it learn through trial and error.

When an AI trained by machine learning faces an obstacle, it tries different methods to overcome it. At first, it will do a poor job of passing the challenge, but the computer will note what went wrong and what can be improved. Over the course of many iterations of trying and learning, it eventually gets a good idea of what the “correct” answer is.
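As a toy, entirely benign illustration of that trial-and-error loop, the sketch below shows a learner that tries actions, scores the outcomes, and gradually favors whatever worked best; the action names and success rates are made up for the example.

```python
import random

ACTIONS = ["a", "b", "c"]
TRUE_SUCCESS_RATE = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden from the learner

value = {a: 0.0 for a in ACTIONS}  # estimated success rate per action
count = {a: 0 for a in ACTIONS}

for _ in range(2000):
    # Mostly exploit the best-known action, sometimes explore a random one.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value[a])
    reward = 1.0 if random.random() < TRUE_SUCCESS_RATE[action] else 0.0
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running mean

print(value)  # after enough trials, "c" ends up with the highest estimate
```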

You can see an example of this progress in the video above. The video shows an AI learning how to make different creatures walk properly. The first few generations walk as if they are drunk, but the later ones hold their posture. This is because the AI learned from the previous failures and did a better job on the later models.

Malware developers use this power of machine learning to figure out how to correctly attack a system. If something goes wrong, the system logs this error and notes what they did that caused that problem. In the future, the malware will adapt its attack patterns for better results.

How Can We Defend Against Malware-Driven AI?

The big problem with machine-learning malware is that it exploits the way current antivirus software works. An antivirus likes to work via straightforward rules; if a program fits a specific niche that the antivirus knows is malicious, it blocks it.

AI-driven malware, however, won’t work via hard-and-fast rules. It will continuously prod at the defenses, trying to find a way through. Once it has made its way in, it can perform its job without hindrance until the antivirus receives updates specific to the threat.

So, what’s the best way to fight off this “smart” malware? Sometimes you need to fight fire with fire, and the best way to do that is to introduce AI-driven antivirus programs. These don’t use static rules to catch malware like our current models do. Instead, they analyze what a program is doing and stop it if the behavior looks malicious.
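As a hedged sketch of what behavior-based detection can look like (illustrative features and data, not any vendor’s actual engine), an anomaly detector can be trained on summaries of normal program behavior and then asked whether a new program’s behavior fits that distribution:

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend telemetry: each row summarizes one program's behavior, with columns
# like files modified, registry writes, outbound connections, files encrypted.
normal_behavior = rng.poisson(lam=[20, 5, 3, 0], size=(500, 4)).astype(float)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_behavior)

suspicious = np.array([[400, 80, 50, 300]], dtype=float)  # mass file encryption
benign = np.array([[18, 4, 2, 0]], dtype=float)

print(detector.predict(suspicious))  # [-1]: flagged as anomalous
print(detector.predict(benign))      # [ 1]: consistent with normal behavior
```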

A Future Defined by Artificial Intelligence

Basic rules and simple instructions won’t define malware attacks in the future. Instead, they’ll use machine learning to adapt and shape themselves to counter whatever security they meet. It may not be as exciting as how Hollywood depicts malicious AI, but the threat is very much real.

If you’d like to see some less-scary examples of Artificial Intelligence, check these AI-powered websites.

Image Credit: sdecoret/Depositphotos

Read the full article: How Artificial Intelligence Will Shape the Future of Malware


Read Full Article

How to Politely Ignore Someone on Facebook



Facebook is a good way to connect with family and friends. But sometimes you might want to disconnect from someone because their posts have become downright annoying, or because you want to hide your activities from them.

Whatever reason you may have for unfriending someone, it’s not really easy, especially if they are your family or close friends.

Instead of unfriending, which might be considered rude, there are many tactful ways to disconnect from others on Facebook.

Turn On Tag Review

Turning on the Tag Review feature stops your family members or relatives from tagging you in family photos without your approval. Thus you can keep away from unwanted inclusion in the family scene on social media.

To turn it on, follow these steps:

Click on the down arrow on the top right corner of Facebook.

facebook dropdown settings

Select Settings. From the left panel, click on Timeline and Tagging.

facebook timeline settings

On the right panel, you will see the Review section. Go to “Review posts you’re tagged in before the posts appear on your timeline” and click on Edit. Then select Enabled.

facebook enable tag review

Do the same for “Review tags that people add to your posts before the tags appear on your Facebook.”

Now your friends will see your tagged photos only if you approve the tag.

Unfollow Them

If you want a relative to vanish from your Facebook feed, you can use the Unfollow feature. You will still be friends on Facebook, but you will never see their posts again. This is a polite way to “delete” someone from your Facebook.

To unfollow them, follow these steps:

Visit their profile and click on the arrow next to the button that says Following.

facebook unfollow

Click on the Unfollow option and you won’t get any updates from them.

Put Them in the Restricted List

If you put them on the restricted list, they won’t be able to see the posts that are visible to your other friends, so they are practically “unfriended.” Keep in mind that if you make a post public, they’ll be able to see it.

To put them in the restricted list:

Visit their profile and click on the Friends button. It will open a drop down list.

facebook restricted list setting

Click on Add to another list and then click on Restricted from the list.

If they confront you over why they are not able to see your posts but a common friend can, you can always blame Facebook glitches.

Change the Visibility Setting of Each Post

If you don’t want them to see specific posts, instead of unfriending them, you can change the visibility settings of each individual post.

When you post anything on Facebook, you see the option of posting it to either the News Feed or Your Story. It also shows the visibility of the post (Public, Friends, Only Me, etc).

To change the visibility settings of a post:

Click on the visibility icon (lock icon in this case) and you’ll see a menu.

facebook post privacy setting

Click on Friends Except. It will show you a list of your friends.

facebook-friends except privacy setting

Select the friend from whom you want to hide your posts and click on Save Changes.

Now, that post and all future posts will be hidden from that friend unless you change the settings again.

Add Them to the Acquaintance List

If you want to hide your posts from multiple people, you can put them in a separate group so you don’t have to select them individually for each post.

This can be done by adding them to the Acquaintance list. Now whenever you want to post something, change the visibility of the post to Friends except Acquaintances. This will hide that post from everyone you added to the Acquaintance list.

To add someone to the Acquaintance list:

Visit their profile.

Click on Friends.

facebook acquaintance list

On the drop down menu, click on Acquaintances.

Make a New Facebook Profile

While having multiple personal accounts is against Facebook Community Standards, many people create an alternate account with their nickname. This can help you share some posts with close friends and other posts with work friends and family members.

When you want to post photos from the booze fest last night, you can use the new account and keep the original one for more appropriate and family-friendly posts and photos.

Unfriend Them

Unfriending them should be the last resort. Since they are family, there’s no easy way to do it. Unfriending one specific person will make it look like you had something personal against them. You can delete them all and send them a personal group message that you had to unfriend everyone due to professional reasons.

Tell them you had to turn your profile professional and you’ll stay in touch with them using messages, emails, and phone calls. Mention that you’re sure they’ll understand and you wish all the best for them.

To unfriend, visit their profile, click on the Friends button, and then click Unfriend.

Go Dead on Facebook

If you don’t want to try any options mentioned above, you can take a break from Facebook. If their posts or comments disturb you, this can be a respite. And once you cool down, you can decide if you want to continue being their friend or not.

Take a Deep Breath and Scroll Past

Are you planning to unfriend Aunt Peggy because she commented “It was undercooked and smelled bad” on the beef and vegetable casserole pics you posted on Facebook? There could be many reasons you might want to delete Facebook friends.

While sometimes friends and family can come across as rude, it’s best to cool off and not let it affect you so much. If you often meet them and they are generally helpful, it’s totally okay to let go of small things. Keep scrolling if you see an objectionable post or comment on Facebook.

Have you been unfriended by someone? It can be hurtful. Here’s what you can do if you discover someone has deleted you on Facebook.

Read the full article: How to Politely Ignore Someone on Facebook


Read Full Article