21 May 2019

Select Bose smart speakers get Google Assistant


A week after Sonos added long-promised Google Assistant integration to a pair of speakers, Bose is following suit. The company’s bringing the popular smart home AI to a trio of existing models: the Home Speaker 500 and the Soundbar 500 and 700. The forthcoming, pint-sized Home Speaker 300 will hit the market with the feature built in.

As with Sonos, you’ll get the standard array of Assistant queries, including music playback, Chromecast TV control and the ability to control connected home features like smart lighting. All of that will be accessible through the built-in speaker array. And like the Sonos speakers, the aforementioned models are also compatible with Alexa.

It’s clearly in the best interest of these third-party manufacturers not to have to pick sides. For Google and Amazon, it means bringing their respective smart home ecosystems to a pair of well-regarded brands. Also like Sonos, setup happens in the company’s music app, which means, unfortunately, that you won’t have the full suite of setup options you get with Google’s own Home speakers.

The upgrade is available starting today. Additional features, including news and podcasts, are coming this summer. Ditto for the Home Speaker 300, which arrives around the same time.


Read Full Article

Peak Design’s Travel Tripod


The camera clip and bag company has made a portable, packable, easy-to-setup professional travel tripod.

 

Video Producers: Yashad Kulkarni, Gregory S. Manalo
Shooter / Editor: Gregory S. Manalo


Read Full Article

Microsoft makes a push for service mesh interoperability


Service meshes. They are the hot new thing in the cloud-native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also noted that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
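To make that concrete, here’s a rough sketch of what creating one such SMI object might look like with the stock Kubernetes Python client: a TrafficSplit that shifts a small share of traffic to a new backend. The group, version and field names follow the early SMI draft spec and may change as it evolves, and the service names are hypothetical.

    from kubernetes import client, config

    config.load_kube_config()  # assumes a working kubeconfig

    # Hypothetical canary rollout: send 10% of traffic to checkout-v2.
    traffic_split = {
        "apiVersion": "split.smi-spec.io/v1alpha1",
        "kind": "TrafficSplit",
        "metadata": {"name": "checkout-rollout", "namespace": "default"},
        "spec": {
            "service": "checkout",  # the root service clients address
            "backends": [
                {"service": "checkout-v1", "weight": "900m"},
                {"service": "checkout-v2", "weight": "100m"},
            ],
        },
    }

    # SMI resources are plain Kubernetes custom resources, so the generic
    # custom objects API is all that's needed to create one.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="split.smi-spec.io",
        version="v1alpha1",
        namespace="default",
        plural="trafficsplits",
        body=traffic_split,
    )

Whichever mesh implements the spec is then responsible for actually shifting the traffic.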

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

 


Read Full Article

Facebook still a great place to amplify pre-election junk news, EU study finds


A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of regional elections this month has found that individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as four times the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to interactions with news stories produced by the most popular professional news sources, to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. So far, though, these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long, given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age-old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

The Polish language sphere was an exception, though, with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: the researchers note that many more users interact with mainstream content overall via its platform, since mainstream publishers have a higher following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall”. Yet they also point out that junk news still packs a greater per-story punch, likely owing to tactics such as clickbait, emotive language, and outragemongering in headlines, which continue to be shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can get out ahead of news events also as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders, a technique they say successfully labeled nearly 91% of all links shared during the study period.

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also a function of the lower public profile of the individuals involved in EU institutions and politics, plus the multi-national nature of the pan-EU project, which inevitably bakes in far greater diversity. (We can posit that just as it aids robustness in biological life, diversity appears to bolster democratic resilience vs political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc does require greater co-ordination and resource vs trying to meddle in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. While any increase in engagement is a win for Facebook’s ad business, so er…


Read Full Article

ByteDance, TikTok’s parent company, plans to launch a free music streaming app


Does the overcrowded and cut-throat music streaming business have room for an additional player? The world’s most valuable startup certainly thinks so.

Chinese conglomerate ByteDance, valued at over $75 billion, is working on a music streaming service, two sources familiar with the matter told TechCrunch. The company, which operates the popular app TikTok, has held discussions with music labels in recent months to launch the app as soon as the end of this quarter, one of the sources said.

The app will offer both a premium and an ad-supported free tier, one of the sources said. Bloomberg, which first wrote about the premium app, reported that ByteDance is targeting emerging markets with its new music app. A ByteDance spokesperson declined to comment.

For ByteDance, interest in a music app does not come as a surprise. Snippets of pop songs from movies and albums, intertwined with videos shot by its humongous user base, are part of the service’s charm. The company already works with music labels worldwide to license usage of their tracks on its platform. In China, where ByteDance claims to have tie-ups with over 800 labels, it has been aggressively expanding efforts to find musical talent and urge artists to make their own tracks.

ByteDance has also been expanding its app portfolio in recent months. Earlier this year, the company released Duoshan, a video chat app that appears to be a mix of TikTok and Snapchat. This week, it launched Feiliao, another chat app, one largely focused on text-driven conversations. At some point, the company may have realized the need for a standalone music consumption app.

When asked about TikTok’s partnership with music labels last month, Todd Schefflin, TikTok’s head of global music business development, told WSJ that music is part of the app’s “creative DNA” but it is “ultimately for short video creation and viewing, not a product for music consumption.”

The private Chinese company is likely eyeing India as a key market for its music app. It has been in discussions with local music labels T-Series and Times Music for rights. Moreover, its apps are estimated to have over 300 million monthly active users in the nation, though there could be significant overlap among them.

India may have also inspired ByteDance to consider a free, ad-supported version of its music app. Even as more than 150 million users in India listen to music online, only a tiny portion of this user base is willing to pay for it.

This has made India a unique battleground for local and international music giants, most of which offer an ad-supported, free version of their apps in the market. Even premium offerings from Apple and Spotify cost under $1.20 a month. India is the only market where Spotify offers a free version of its app with on-demand access to the entire catalog.

The launch of the app could put the spotlight again on ByteDance in India, where its TikTok app recently landed in hot water. An Indian court banned the app for roughly a week after expressing concerns over questionable content on the platform. Ever since the nation lifted the ban on TikTok, the company has become visibly cautious about its moves there.


Read Full Article

Master Your Raspberry Pi and Build Alexa Apps With This $29 Training Bundle


For anyone interested in the Internet of Things and smart home automation, the Raspberry Pi is a brilliant educational tool. This microcomputer makes it easy to create amazing setups using your own code and hardware. If you’re new to this game, the Complete Raspberry Pi & Alexa A-Z Bundle can help you get started. The bundle includes four courses and 10 hours of video tutorials — and it’s now just $29 at MakeUseOf Deals.

Raspberry-Flavored Alexa

Although the Raspberry Pi is a very capable Linux computer, it was actually designed to help people get started with code and electronics. This training bundle helps you unleash the full potential of your tiny machine, with hours of hands-on training.

Through concise video lessons, you discover how to connect LEDs, switches, sensors, and other components to your Pi. At the same time, you learn how to write your first lines of Python code. With the basics in place, you start work on a range of fun projects — from a simple web server to a tiny games console.
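For a taste of what those first lines of Python tend to look like (a typical starter exercise, not material from the bundle itself), here’s the classic blinking LED written against the gpiozero library that comes preinstalled on Raspbian:

    from time import sleep

    from gpiozero import LED

    led = LED(17)  # assumes an LED (plus resistor) wired to GPIO pin 17

    while True:  # blink forever; stop with Ctrl+C
        led.on()
        sleep(1)
        led.off()
        sleep(1)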

The training also looks at Alexa, and how you can use Amazon’s voice assistant in your own future projects. You actually end up building an Amazon Echo clone and coding your own Alexa skills from scratch.

10 Hours of Training for $29

Order now for just $29 to get your hands on this bundle, worth over $300.

Read the full article: Master Your Raspberry Pi and Build Alexa Apps With This $29 Training Bundle


Read Full Article

5 Simple Ways to Set a Budget and Avoid Overspending

Oculus Quest and Rift S now shipping


Facebook-owned Oculus is shipping its latest VR headgear from today. Preorders for the PC-free Oculus Quest and the higher-end Oculus Rift S opened up three weeks ago.

In a launch blog Oculus touts the new hardware’s “all-in-one, fully immersive 6DOF VR” — writing: “We’re bringing the magic of presence to more people than ever before — and we’re doing it with the freedom of fully untethered movement”.

For a less varnished view on what it’s like to stick a face-computer on your head you can check out our reviews by clicking on the links below…

Oculus Quest

TC: “The headset may not be the most powerful, but it is doubtlessly the new flagship VR product from Facebook”

Oculus Rift S

TC: “It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable”

The Oculus blog post contains no details on preorder sales for the headsets — beyond a few fine-sounding words.

Meanwhile Facebook has, for months, been running native ads for Oculus via its eponymous and omnipresent social network — although there’s no explicit mention of the Oculus brand unless you click through to “learn more”.

Instead it’s pushing the generic notion of “all-in-one VR”, shrinking the Oculus brand stamp on the headset to an indecipherable micro-scribble.

Here’s one of the Facebook ads that targeted me in Europe back in March, for example:

For those wanting to partake of Facebook-flavored face gaming (and/or immersive movie watching), the Oculus Quest and Rift S are available to buy via oculus.com and retail partners including Amazon, Best Buy, Newegg, Walmart, and GameStop in the US; Currys PC World, FNAC, MediaMarkt, and more in the EU and UK; and Amazon in Japan.

Just remember to keep your mouth shut.


Read Full Article

Google brings release channels and Windows Container support to its Kubernetes Engine


At KubeCon + CloudNativeCon, the bi-annual gathering of cloud-native computing boffins, Google today announced that it now offers three release channels for its Google Kubernetes Engine (GKE): Rapid, Regular and Stable. With these, Google Cloud users can decide whether they want the freshest release or the most stable one — or easily evaluate the latest updates in a development environment. This new feature is now in alpha testing.

“Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements,” Google explains in today’s release.

The company is launching this new feature into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes.

With this release into the Rapid channel, Google is also bringing early support for Windows Containers to GKE. Over the course of the last few releases, the Kubernetes community has brought improved Windows support to the platform, and Google will offer support for Windows Server Containers in GKE starting in June.

In addition to these features, the company is also releasing Stackdriver Kubernetes Monitoring into general availability. This tool can be used to monitor and log data from GKE, as well as Kubernetes deployments in other clouds and on-premises infrastructure.


Read Full Article

Stanford’s Doggo is a petite robotic quadruped you can (maybe) build yourself


Got a few thousand bucks and a good deal of engineering expertise? You’re in luck: Stanford students have created a quadrupedal robot platform called Doggo that you can build with off-the-shelf parts and a considerable amount of elbow grease. That’s better than the alternatives, which generally require a hundred grand and a government-sponsored lab.

Due to be presented (paper on arXiv here) at the IEEE International Conference on Robotics and Automation, Doggo is the result of research by the Stanford Robotics Club, specifically its Extreme Mobility team. The idea was to make a modern quadrupedal platform that others could build and test on, while keeping costs and custom parts to a minimum.

The result is a cute little bot with rigid-looking but surprisingly compliant polygonal legs that has a jaunty, bouncy little walk and can leap more than three feet in the air. There are no physical springs or shocks involved; instead, by sampling the forces on the legs 8,000 times per second and responding just as fast, the motors can act like virtual springs.
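In rough Python, that virtual spring idea boils down to a very fast feedback loop: measure the leg, compute a spring-damper torque, command the motor, repeat. This is an illustrative sketch rather than the team’s code; the gains and hardware functions are invented, and a real controller would use a hard real-time loop rather than sleep().

    import time

    K_SPRING = 40.0   # virtual stiffness (hypothetical value)
    K_DAMPER = 0.5    # virtual damping (hypothetical value)
    LOOP_HZ = 8000    # Doggo samples leg forces ~8,000 times per second

    def virtual_spring_torque(angle, velocity, target_angle):
        """Spring-damper law: pull toward the target, resist motion."""
        return -K_SPRING * (angle - target_angle) - K_DAMPER * velocity

    def control_loop(read_joint_state, set_motor_torque, target_angle=0.0):
        # read_joint_state() and set_motor_torque() are placeholders for
        # whatever interface the real motor drivers expose.
        period = 1.0 / LOOP_HZ
        while True:
            angle, velocity = read_joint_state()
            set_motor_torque(virtual_spring_torque(angle, velocity, target_angle))
            time.sleep(period)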

It’s limited in its autonomy, but that’s because it’s built to move, not to see and understand the world around it. That is, however, something you, dear reader, could work on. Because it’s relatively cheap and doesn’t involve some exotic motor or proprietary parts, it could be a good basis for research at other robotics departments. You can see the designs and parts necessary to build your own Doggo right here.

“We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects,” said Doggo lead Nathan Kau in a Stanford news post. “We wanted Stanford Doggo to be this open source robot that you could build yourself on a relatively small budget.”

In the meantime, the Extreme Mobility team will be improving on Doggo’s capabilities in collaboration with the university’s Robotic Exploration Lab, while also working on a similar robot at twice the size: Woofer.


Read Full Article

This New Google App Helps Kids Learn How to Read


Google has launched a new app designed to help kids learn to read. Called Rivet, the free reading app contains more than 2,000 books suitable for all ages, definitions and translations for words, and gamification elements designed to encourage children to learn.

The Importance of Learning How to Read

Learning to read is an absolute must for anyone looking to get on in life. And the earlier kids learn to read the better. The problem is that not all kids take to reading, especially if they don’t dedicate at least some time to practising their reading skills.

Citing the fact that “64 percent of fourth grade students in the United States perform below the proficient level in reading,” Google has launched Rivet. This is Google’s attempt to improve those statistics by giving children a fun, free way to learn how to read.

How Rivet Helps Kids Learn How to Read

Rivet is a free reading app from Area 120, Google’s experimental workshop. It boasts 2,000+ free books suitable for children, covering a range of ages, categories, and difficulty levels. The idea is that there will be something of interest to everyone.

Kids can either read a book by themselves or have the app read to them and follow along. Either way, if they encounter a word they don’t recognize they can tap on it to ask for help. They will then see a definition for the word and be able to practice saying it aloud.

The app will also offer kids feedback on their reading, allowing them to practice without needing to ask their parents for help. The app can also translate words into any of 25 different languages. This is all made possible thanks to advanced speech technology.

Last but not least, Rivet offers motivation and encouragement via gamification. Kids will earn points and badges for practising their reading, turning the scholarly pursuit of learning how to read into a game. There are also mini-games buried within the app itself.

We Should All Endeavor to Read More

Rivet has been in beta since 2018, but is finally ready for a wider rollout. Rivet is now available for free on Android and iOS in 11 countries worldwide, including the US, Canada, Australia, New Zealand, South Africa, Brazil, India, Bangladesh, and Pakistan.

Download: Rivet: Better Reading Practice on Android | iOS

While Rivet is all about helping kids learn how to read at a young age, we should all endeavor to read more even as adults. So if you’re a parent reading about this new Google app and feeling a little left out, here’s how to read 50+ books this year.

Read the full article: This New Google App Helps Kids Learn How to Read


Read Full Article

7 Warning Signs Your Computer Is Going to Crash (And What to Do)

What Is AppleScript? Writing Your First Mac Automation Script


If you feel comfortable in the world of scripting and you work on a Mac, AppleScript might be the automation solution for you. AppleScript is a powerful language that lets you control any app, as long as it provides an AppleScript library.

Use it for such mundane tasks as resizing Photoshop photos automatically, renaming folders, and locking files with a password. We’ll show you how to start using it.

What Is AppleScript?

Like bash, AppleScript is a scripting language. And similar to Automator, it interacts primarily with apps and Finder to automate tasks for you. It was released as part of Mac OS System 7, all the way back in 1993. It’s stuck around since then, nestled in the Utilities folder.

AppleScript increased in power with the debut of Mac OS X. The Cocoa framework made it much easier for app developers to include AppleScript compatibility. That increased flexibility, combined with AppleScript’s ability to talk directly to the command line, makes AppleScript one of the best tools for tinkerers. It also gives macOS the edge over iOS when it comes to automation.

Overview of Pre-Installed AppleScripts

Before we get into breaking down exactly what an AppleScript says, let’s take a look at the scripts that come pre-installed with Script Editor and how you can use them.

The preinstalled scripts live in Macintosh HD > Library > Scripts. You can also access them by opening the Script Editor (search for it with Spotlight), going to Preferences > General > Show Script menu in menu bar, and then clicking the script icon that appears in the menu bar.

The menu bar item for Script Editor

You can simply run one of these scripts from the menu bar.

Let’s take a look at Folder Actions. A Folder Action is an AppleScript that’s attached to a folder. When enabled, the script will run on any file that is added to that folder.

If you go to Folder Actions > Attach Scripts to a Folder, a window popup will ask what kind of script you want to add to a folder. You can flip photos horizontally or vertically, duplicate them as JPEG or PNG, rotate them, or prompt an alert when a new item is added.

Pre-installed AppleScripts

Once you’ve selected your script and the folder you want to attach it to, right-click on the folder itself. Go down to Services > Folder Action Setup, and make sure that Enable Folder Actions is checked. Then drag a file on top of the folder to see your AppleScript run.

Play around with the Scripts menu bar to get a sense of what else AppleScript can do for you. To take a look at what’s going on under the hood, go to the Scripts folder, right-click on any script, and open it with Script Editor.

Understanding the Tell Statement

New item alert AppleScript

AppleScript uses a human-readable syntax. This means that, compared with many other programming languages, it’s written in an understandable format. Because it uses full words and sentences to send commands, it’s easy to understand and straightforward to learn.

Let’s look at the beginning syntax of the add – new item alert.scpt in Folder Actions. This will give an idea of the most fundamental statement in AppleScript: the tell statement.

 on adding folder items to this_folder after receiving added_items
        try
                tell application "Finder"
                        --get the name of the folder
                        set the folder_name to the name of this_folder
                end tell

A “tell statement” is composed of three parts:

  1. The word “tell”
  2. The object to reference (in this case, the application “Finder”)
  3. The action to perform (here, “set the folder_name to the name of this_folder”).

In layman’s terms, the tell statement above is saying “Tell Finder to use the name of the folder this script is attached to whenever the script asks for ‘this_folder’.”

The purpose of AppleScript is to automate tasks for you by telling apps to perform tasks you don’t feel like doing yourself. Therefore, the “tell” command is essential. You can get far in the AppleScript world with “tell” alone.

Also note: the line that says --get the name of the folder is actually just a comment, telling the user what the script is doing at that moment. Comments are essential—not just for telling other people what your script did, but for reminding yourself.

Writing Your First AppleScript

Hello World dialog box

If you have some programming experience and are familiar with concepts like variables, do-while loops, and conditionals, you can get a lot out of AppleScript beyond the scope of this introduction. For now, we’re just going to show you how to create, write, run, and save a basic script:

  1. Create the script: Open the Script Editor and go to File > New.
  2. Write your script: The Script Editor window is divided into two halves. The top half is for entering your script; the bottom half will show you the output when you run it. Type: tell application "System Events" to display dialog "Hello world!". Then hit the hammer button in the toolbar right above the script to compile it. This will run through your script to check for syntax errors. If you receive no error dialog, and your script changes formatting and font, then it compiled successfully.
  3. Run your script: Next to the hammer button is a Play button. Hit that, and see what happens.
  4. Save your script: Now that you have a basic script, you can save it as a clickable application. Go to File > Save, and under File Format, choose Application. Now, instead of opening the Script Editor and hitting Play, you can simply double-click your script to run it. If you like to script in bash, you can use AppleScript to turn your bash scripts into clickable applications.

Save Script as an Application

With this simple syntax down, you can tell nearly any Mac app to do pretty much anything. To review the available commands for a given app, go to File > Open Dictionary and choose the application. From there, you can see all the available AppleScript commands.

AppleScript commands dictionary for iPhoto

For Simpler Mac Automation, Use Automator

If programming gives you a headache, there are simpler ways to automate your tasks. Automator uses a friendly GUI and a simple interface to turn mind-numbing routines into one-click set-and-forget tasks.

While Automator is not as customizable or intricate as AppleScript, it is simpler and much harder to break. If you’re interested, take a look at some Automator workflows that will save you time.

Read the full article: What Is AppleScript? Writing Your First Mac Automation Script


Read Full Article

The 7 Best MacBook Keyboard Covers

SmartSHOW 3D: Cool Photo Slideshow Software for Your PC



Want to create a slideshow? You can do so with various free tools, from Android apps to the built-in slideshow player in Windows 10. But what if you want something a bit more polished?

SmartSHOW 3D is professional-quality photo slideshow software for Windows. Using this program, you can create polished, visually-striking slideshows with ease.

Anyone Can Use Professional Slideshow Software

Along with a feature-packed collection of transitions, animation effects, and support for audio, SmartSHOW 3D’s main draw is its ease of use. The software literally guides you through the creation of a new slideshow production.

Your end product might be family photos and video set to music, a school project, a motion comic, or even a hugely popular YouTube video. The only limit is your imagination and the quality of media you have at your disposal.

SmartSHOW 3D Screenshot 1

Before SmartSHOW 3D came along, creating impressive slideshows required expensive tools. With SmartSHOW 3D slideshow software for PC (it runs on all versions from Windows XP through Windows 10), you can create a picture slideshow as easily as you might organize photos in an album.

Easily Create a Slideshow With SmartSHOW 3D

It takes just five minutes to create a spectacular slideshow using SmartSHOW 3D. Simply run the software and jump right in, or refer to the SmartSHOW 3D website for tutorials. There’s even a “quick start” tutorial video provided to help get your slideshow off the ground.

You can use a template, or start from scratch, adding your own choice of transitions and animations. We’d recommend starting with a template; you can then make the necessary tweaks once the images, optional video, and music are in place.

For example, you can adjust the caption text, audio volume, and duration of transitions and animations. The software also allows you to tweak backgrounds, and change the position of slideshow images. It doesn’t take long to make the necessary changes to a template.

SmartSHOW 3D Screenshot 2

Unless you have a lot of changes to make, you’ll be done with your slideshow in the time it takes to brew a cup of coffee. In many cases, it will take longer to watch the slideshow than it took to create it!

Ultimately, SmartSHOW 3D gives you the tools to create almost any kind of slideshow you can imagine. And if you run into problems, simply visit the Slideshow Forum community for help.

SmartSHOW 3D Features

So what can you do with SmartSHOW 3D? Among its features, you’ll find:

  • A wizard to create a slideshow in under five minutes
  • Hundreds of templates, 150+ transitions, and animation effects
  • Support for multiple audio tracks
  • A linear timeline for editing
  • Captions with a range of text styles, including 3D text
  • Support for animated slides, videos, and photo/video collages
  • Ability to edit animations for a truly unique slideshow
  • Animation effects such as fireworks, sparklers, falling leaves, and bubbles
  • Ability to export the completed slideshow to a range of platforms in various formats.

SmartSHOW 3D is available in two versions: the $39.90 Standard version, and the $59.50 Deluxe release. Deluxe adds the ability to write to DVD, over 350 transitions and animation effects, and support for video clips within the slideshow.

SmartSHOW 3D: Top-Rated Photo and Video Slideshow Software

So it’s easy to use, includes a considerable selection of transitions and animations, and even lets you export to HD video, write to optical disc, stream to your TV, play on portable devices, and upload to websites. What’s not to like about SmartSHOW 3D?

If you’re looking for software to create eye-candy photo and video slideshows, then SmartSHOW 3D is your best option. Grab your copy of SmartSHOW 3D today from Smartshow Software.

Read the full article: SmartSHOW 3D: Cool Photo Slideshow Software for Your PC


Read Full Article

What Do “Dual Core” and “Quad Core” Mean?



When you are purchasing a new laptop or building a computer, choosing the processor is the most important decision. But there’s a lot of jargon, especially around cores. Do you need a dual core, a quad core, a hexa core, an octa core…

Let’s cut the jargon and understand what it all really means.

Dual Core vs. Quad Core, Explained

Here’s everything you need to know:

  • There is always only one processor chip. That chip can have one, two, four, six, or eight cores.
  • Currently, an 18-core processor is the best you can get in consumer PCs.
  • Each “core” is the part of the chip that does the processing work. Essentially, each core is a central processing unit (CPU).

This article deals with dual core vs. quad core processors for computers, not for smartphones. We have a separate post on understanding smartphone cores.

How Speed Is Affected by Dual- and Quad-Core CPUs

You might think more cores will make your processor faster overall, but that’s not always the case. It’s a little more complicated than that.

More cores are faster only if a program can split its tasks between the cores. Not all programs are developed to split tasks between cores. More on this later.

The clock speed of each core is also a crucial factor in speed, as is the architecture. A newer dual core CPU with a higher clock speed will often outperform an older quad core CPU with a lower clock speed.

Power Consumption

More cores also lead to higher power consumption by the processor. When the processor is switched on, it supplies power to all the cores, not just one at a time.

Chip makers have been trying to reduce power consumption and make processors more energy efficient. But as a general rule of thumb, a quad core processor will draw more power from your laptop (and thus make it run out of battery faster).

More Cores Equal More Heat

Factors beyond the core count affect the heat generated by a processor. But again, as a general rule, more cores mean more heat.

Due to this additional heat, manufacturers need to add better heat sinks or other cooling solutions.

Are Quad Core CPUs More Expensive Than Dual Core?

More cores don’t always mean a higher price. As we said earlier, clock speed, architecture versions, and other considerations come into play.

But if all other factors are the same, then more cores will fetch a higher price.

It’s All About the Software

Here’s the dirty little secret that chip manufacturers don’t want you to know. It’s not about how many cores you’re running; it’s about what software you’re running on them.

Programs have to be specifically developed to take advantage of multiple processors. Such “multi-threaded software” isn’t as common as you might think.

Importantly, even if it’s a multi-threaded program, it’s also about what that program is used for. For example, the Google Chrome web browser supports multiple processes, as does the video editing software Adobe Premiere Pro.

Adobe Premiere Pro instructs different cores to work on different aspects of your edit. Considering the many layers involved in video editing, this makes sense, as each core can work on a separate task.

Similarly, Google Chrome instructs different cores to work on different tabs. But herein lies the problem: once you open a web page in a tab, it is usually static after that. There is no further processing work needed; the rest of the work is about keeping the page in RAM. That means even though a core could be used for a background tab, there is often no need for it.

This Google Chrome example is an illustration of how even multi-threaded software might not give you much of a real-world performance boost.

Double the Cores Is Not Double the Speed


So let’s say you have the right software and all your other hardware is the same. Would a quad core processor then be twice as fast as a dual core processor? Nope.

Increasing cores does not address the software problem of scaling. Scaling to cores is the theoretical ability of any software to assign the right tasks to the right cores, so each core is computing at its optimal speed. That’s not what happens in reality. In reality, tasks are split sequentially (which most multi-threaded software does) or randomly.

For example, let’s say you have a quad-core processor (Core1, Core2, Core3, Core4). You need to accomplish three tasks (T1, T2, T3) to finish an action, and you have five actions (A1, A2, A3, A4, A5) to get through.

Here’s how the software will divide tasks:

  • Core1 = A1T1
  • Core2 = A1T2
  • Core3 = A1T3
  • Core4 = A2T1

The software is not that smart, though. If A1T3 is the hardest and longest task, the software should have split it between Core3 and Core4. Instead, even after Core1 and Core2 finish their tasks, the action has to wait for Core3’s slower task to complete.
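You can see the effect in a toy Python experiment: run an action’s three tasks on separate cores, make one task much longer than the others, and the action still takes as long as its slowest task. (The task durations here are invented for illustration.)

    import time
    from concurrent.futures import ProcessPoolExecutor

    def task(name, seconds):
        time.sleep(seconds)  # stand-in for `seconds` worth of real work
        return name

    if __name__ == "__main__":
        start = time.time()
        with ProcessPoolExecutor(max_workers=4) as pool:
            # A1T1 and A1T2 are quick; A1T3 is the hard, long task.
            futures = [pool.submit(task, "A1T1", 1),
                       pool.submit(task, "A1T2", 1),
                       pool.submit(task, "A1T3", 4)]
            results = [f.result() for f in futures]
        # Prints roughly 4 seconds: bounded by the straggler, not core count.
        print(results, round(time.time() - start, 1), "seconds")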

All of this is a roundabout way of saying that software, as it stands today, isn’t optimized to take full advantage of multiple cores. And doubling the cores does not equal doubling the speeds.

Where Do More Cores Really Help?

Now that you know what cores do and their restrictions in boosting performance, you must be asking yourself, “Do I need more cores?” Well, it depends on what you plan to do with them.

Dual Core and Quad Core in Gaming

If you fancy yourself a gamer, then get more cores in your gaming PC. The vast majority of new AAA titles (i.e. popular games from big studios) support multi-threaded architectures. Video games still depend largely on the graphics card to look good, but a multi-core processor helps too.

Editing Videos or Audio

For any professional who works with video or audio programs, more cores will be beneficial. Most of the popular audio and video editing tools take advantage of multi-threaded processing.

Photoshop and Design

If you’re a designer, then a higher clock speed and more processor cache will increase speeds better than more cores. Even the most popular design software, Adobe Photoshop, largely relies on single-threaded or lightly threaded processes. Multiple cores aren’t going to provide a significant boost here.

Should You Get More Cores?

Overall, a quad core processor is going to perform faster than a dual core processor for general computing. Each program you open can work on its own core, so if the tasks are shared, the speeds are better. If you use a lot of programs simultaneously, switch between them often, and assign them their own tasks, then get a processor with more cores.

Just know this: overall system performance is one area where far too many factors come into play. Don’t expect a magical boost by changing one component like the processor. Choose wisely and buy the right processor for your needs.

Read the full article: What Do “Dual Core” and “Quad Core” Mean?


Read Full Article

Snap appoints new execs as it aims to keep 2019 momentum


Snap has another appointment in the apt saga of its ephemeral CFOs.

Four months after losing its CFO Tim Stone following a reported “personality clash” between Stone and CEO Evan Spiegel, Snap has promoted its VP of Finance Derek Andersen to the role, the company said Monday. Andersen is the company’s third CFO since March of 2017, when it went public.

Lara Sweet, who was serving as the company’s interim CFO as well as the chief accounting officer, will be stepping into a new role as chief people officer.

Snap has had a less cataclysmic 2019 in the public markets compared to its two previous calendar years. The company has nearly doubled its share price since the year’s start, though the stock still sits just above where it was one year ago.


Read Full Article

Maisie Williams’ talent discovery startup Daisie raises $2.5 million, hits 100K members


Maisie Williams’ time on Game of Thrones may have come to an end, but her talent discovery app Daisie is just getting started. Co-founded by film producer Dom Santry, Daisie aims to make it easier for creators to showcase their work, discover projects and collaborate with one another through a social networking-style platform. Only 11 days after Daisie officially launched to the public, the app hit an early milestone of 100,000 members. It also recently closed on $2.5 million in seed funding, the company tells TechCrunch.

The round was led by Founders Fund, which contributed $1.5 million. Other investors included 8VC, Kleiner Perkins, and newer VC firm Shrug Capital, from AngelList’s former head of marketing Niv Dror, who also separately invested. To date — including friends and family money and the founders’ own investment — Daisie has raised roughly $3 million.

It will later move toward raising a larger Series A, Santry says.

On Daisie, creators establish a profile much as they would on a social network, find and follow other users, then seek out projects based on location, activity, or other factors.

“Whether it’s film, music, photography, art — everything is optimized around looking for collaborators,” explains Santry. “So the projects that are actively open and looking for people to get involved, are the ones we’re really pushing for people to discover and hopefully get involved with,” he says.

The company’s goal to offer an alternative path to talent discovery is a timely one. Today, the creative industry is waking up — as are many others — to the ramifications of the #MeToo and #TimesUp movements. As power-hungry abusers lose their jobs, new ways of working, networking and sourcing talent are taking hold.

As Williams said when she first introduced the app last year, Daisie’s focus is on giving the power back to the creator.

“Instead of [creators] having to market themselves to fit someone else’s idea of what their job would be, they can let their art speak for themselves,” she said at the time.

The app was launched into an invite-only beta on iOS last summer, and quickly saw a surge of users. After 37,000 downloads in week one, it crashed.

“We realized that the community was a lot larger than the product we had built, and that scale was something we needed to do properly,” Santry tells TechCrunch.

The team realized there was another problem, too: once collaborators found each other in Daisie, there wasn’t a clear-cut way for them to get in touch with one another, as the app had no communication tools or file-sharing features built in.

“That journey from concept to production was pretty muddy and quite muddled…so we realized, if we were bringing teams together, we actually wanted to give them a place to work — give them this creative hub…and take their project from concept all the way to production on Daisie,” Santry notes.

With this broader concept in mind, Daisie began fundraising in San Francisco shortly after the beta launch. The round initially closed in October 2018, but was more recently reopened to allow Dror’s investment.

With the additional funding in tow, Daisie has been able to grow its team from five to eighteen, including new hires from Monzo, Deliveroo, the BBC, Microsoft, and others — specifically engineers who were familiar with designing apps for scale. Tasked with developing better infrastructure and a more expansive feature set, the team set to work on bringing Daisie to the web.

Nine months later, the new version launched to the public and is stable enough to handle the load. Today, it topped 100,000 users — most of whom are in London. However, Daisie plans to take its app to other cities, including Berlin, New York, and L.A., going forward.

The company has monetization ideas in mind, but the app does not currently generate revenue. However, it’s already fielding inquiries from companies who want Daisie to find them the right talent for their projects.

“We want the best for the creators on the platform, so if that means bringing clients on — and hopefully giving those connectivity opportunities — then we’ll absolutely [go] down those roads,” Santry says.

The app may also serve as a talent pipeline for Maisie Williams’ own Daisy Chain Productions. In fact, Daisie recently ran a campaign called London Creates which connected young, emerging creators with project teams, two of which were headed by Santry’s Daisy Chain Productions co-founders, Williams and Bill Milner.

Now Daisy Chain Productions is going to produce a film from the Daisie collaboration as a result.

While celebs sometimes do little more than lend their name to projects, Williams was hands-on in terms of getting Daisie off the ground, Santry says. During the first quarter of 2019, she worked on Daisie 9-to-5, he notes. But she has since started another film project and plans to continue to work as an actress, which will limit her day-to-day involvement. Her role now and in the future may be more high-level.

“I think her role is going to become one of, culturally, like: where does Daisie stand? What do we stand for? Who do we work with? What do we represent?” he says. “How do we help creators everywhere? That’s mainly what Maisie wants to make sure Daisie does.”


Read Full Article

Why is Facebook doing robotics research?


It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often carry over to the other, or open new areas of inquiry there. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen plenty of interesting papers in this area.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
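A minimal sketch of that kind of reward shaping in Python, following the general recipe used in legged-locomotion research rather than Facebook’s actual code (the penalty weights are invented):

    def locomotion_reward(prev_x, curr_x, energy_used, fell_over):
        """Reward forward progress; lightly penalize effort and falling."""
        forward_progress = curr_x - prev_x   # distance gained this step
        effort_penalty = 0.01 * energy_used  # discourage wasteful flailing
        fall_penalty = 10.0 if fell_over else 0.0
        return forward_progress - effort_penalty - fall_penalty

With nothing else to go on, the robot has to discover for itself which leg movements make that number go up.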

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
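One common way to formalize that notion of “curiosity” is to keep a forward model that predicts the outcome of each action, and to pay the agent a small bonus proportional to how wrong its prediction was. This is a generic sketch, not Facebook’s implementation; the names and the beta weighting are invented:

    import numpy as np

    def intrinsic_reward(forward_model, state, action, next_state, beta=0.1):
        """Curiosity bonus: reward actions whose outcomes were hard to predict."""
        predicted = forward_model(state, action)           # expected next state
        surprise = np.linalg.norm(next_state - predicted)  # prediction error
        return beta * surprise  # added to the task reward during training

As the forward model improves, the bonus for well-understood actions shrinks, nudging the agent toward the parts of the task it is still uncertain about.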

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
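A small illustration of that point: the exact same hand-rolled convolution extracts spatial patterns from a camera frame and from a pressure map alike (shapes and data here are invented):

    import numpy as np

    # A tactile sensor's pressure readings form a 2D grid, so the same
    # pattern-finding operation (here, a single edge filter) applies
    # unchanged to touch and to vision.

    def conv2d(img, kernel):
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = float((img[i:i+kh, j:j+kw] * kernel).sum())
        return out

    edge = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge detector

    camera_frame = np.random.rand(32, 32)     # grayscale pixels
    pressure_map = np.random.rand(32, 32)     # tactile "taxels"

    # Identical machinery, different sense:
    visual_features = conv2d(camera_frame, edge)
    tactile_features = conv2d(pressure_map, edge)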

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.
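Here is a minimal sketch of that idea, assuming per-sense encoders that all map into one shared feature space. Everything below, including the embed and grip_force helpers, is a placeholder rather than Facebook's architecture:

    import numpy as np

    # Cross-modal transfer via a shared representation: each sense gets
    # its own encoder, but all encoders land in one latent space, so a
    # downstream skill works no matter which sense fed it.

    rng = np.random.default_rng(1)
    LATENT = 16

    W_vision = rng.standard_normal((LATENT, 1024))  # flattened 32x32 image
    W_touch  = rng.standard_normal((LATENT, 256))   # 16x16 pressure map
    W_audio  = rng.standard_normal((LATENT, 128))   # spectrogram slice

    def embed(weights, signal):
        return np.tanh(weights @ signal)            # into the shared space

    w_grip = rng.standard_normal(LATENT)

    def grip_force(latent):
        # A downstream skill reads only the shared representation.
        return float(np.clip(w_grip @ latent, 0.0, 1.0))

    # The same grip policy consumes touch or vision interchangeably:
    print(grip_force(embed(W_touch, rng.standard_normal(256))))
    print(grip_force(embed(W_vision, rng.standard_normal(1024))))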

So while this research is interesting in its own right, and can certainly be appreciated on that simpler level, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.


Read Full Article

Instagram’s IGTV copies TikTok’s AI, Snapchat’s design


Instagram conquered Stories, but it’s losing the battle for the next video formats. TikTok is blowing up with an algorithmically suggested vertical one-at-a-time feed featuring videos of users remixing each other’s clips. Snapchat Discover’s 2 x infinity grid has grown into a canvas for multi-media magazines, themed video collections, and premium mobile TV shows.

Instagram’s IGTV…feels like a flop in comparison. Launched a year ago, it’s full of crudely cropped & imported viral trash from around the web. The long-form video hub that lives inside both a homescreen button in Instagram as well as a standalone app has failed to host lengthier must-see original vertical content. Sensor Tower estimates that the IGTV app has just 4.2 million installs worldwide and only 7,700 new ones per day — implying less than half a percent of Instagram’s billion-plus users have downloaded it. IGTV doesn’t rank on the overall charts and hangs low at #191 on the US – Photo & Video app charts, according to App Annie.

Now Instagram has quietly overhauled the design of IGTV’s space inside its main app to crib what’s working from its two top competitors. The new design showed up in last week’s announcements for Instagram Explore’s new Shopping and IGTV discovery experiences, but the company declined to answer questions about it.

IGTV has ditched its category-based navigation system’s tabs like “For You”, “Following”, “Popular”, and “Continue Watching” for just one central feed of algorithmically suggested videos — much like TikTok. This affords a more lean-back, ‘just show me something fun’ experience that relies on Instagram’s AI to analyze your behavior and recommend content instead of putting the burden of choice on the viewer.

IGTV has also ditched its awkward horizontal scrolling design that always kept a clip playing in the top half of the screen. Now you’ll scroll vertically through a 2 x infinity grid of recommended clips in what looks just like a Snapchat Discover feed. Once you get past a first video that auto-plays up top, you’ll find a full-screen grid of things to watch. You’ll only see the horizontal scroller in the standalone IGTV app, or if you tap into an IGTV video and then tap the Browse button to find the next clip while the last one plays up top.

Instagram seems to be trying to straddle the designs of its two competitors. The problem is that TikTok’s one-at-a-time feed works great for punchy, short videos that get right to the point. If you’re bored after 5 seconds, you swipe to the next. IGTV’s focus on long-form means its videos might start too slowly to grab your attention if they were auto-played full-screen in the feed rather than being chosen by a viewer. But Snapchat makes the most of the two-previews-per-row design IGTV has adopted because professional publishers take the time to make compelling cover thumbnail images promoting their content. IGTV’s focus on independent creators means fewer have labored to make great cover images, so viewers have to rely on a screenshot and caption.

Instagram is prototyping a number of other features to boost engagement across its app, as discovered by reverse engineering specialist and frequent TechCrunch tipster Jane Manchun Wong. Those include options to blast a direct message to all your Close Friends at once but in individual message threads, see a divider between notifications and likes you have or haven’t seen, or post a Chat sticker to Stories that lets friends join a group message thread about that content. And to better compete with TikTok, it may let you add lyrics stickers to Stories that appear word-by-word in sync with Instagram’s licensed music soundtrack feature, and share Music Stories to Facebook.

When I spoke with Instagram co-founder and ex-CEO Kevin Systrom last year a few months after IGTV’s launch, he told me “It’s a new format. It’s different. We have to wait for people to adopt it and that takes time . . . Everything that is great starts small.”

But to grow large, IGTV needs to demonstrate how long-form portrait mode video can give us a deeper look at the nuances of the influencers and topics we care about. The company has rightfully prioritized other initiatives, like safety and well-being, with features that hide bullies and deter overuse. But my advice from August still stands despite all the ground Instagram has lost in the meantime. “Concentrate on teaching creators how to find what works on the format and incentivizing them with cash and traffic. Develop some must-see IGTV and stoke a viral blockbuster. Prove the gravity of extended, personality-driven vertical video.” Until the content is right, it won’t matter how IGTV surfaces it.


Read Full Article