29 July 2019

Y Combinator-backed Vahan is helping low-skilled workers in India find jobs on WhatsApp


The emergence of online hyperlocal services and e-commerce firms in India has led to the creation of about 200,000 jobs for blue-collar workers who deliver items to customers, according to industry estimates.

But it is also the kind of job that continues to see a high attrition rate. This means that companies like Zomato, Swiggy, Dunzo, Amazon India, and Flipkart have to replace a significant portion of their delivery workforce every three to four months.

“A small portion of these workers either switch jobs to go to a different delivery company, or they take up a different job,” said Madhav Krishna. “And a large chunk of them end up going back to their villages to work on their farms.”

“There is a cyclical migration phenomenon in India wherein a very large population migrates from villages to cities looking for a job. They work in cities for a few months and then return to their hometowns in time for the next crop harvesting season,” he said.

The attrition rate is so high that it has become a major challenge for companies to keep hiring new people, Krishna said. Additionally, with e-commerce and on-demand delivery space projected to grow four to five times in India by 2025, efficient supply acquisition is a major requirement for growth.


Three years ago, Krishna, who earned his master’s in machine learning from Columbia University before moving to Bangalore, founded Vahan, a startup that is attempting to help these companies find potential blue-collar workers at scale.

Vahan operates a WhatsApp Business account through which it informs potential candidates of the available jobs in the industry. Interested candidates are presented with a series of qualifying questions, then screened and authenticated by Vahan.

Much of this process, which takes merely minutes, is automated via an AI-driven chatbot, and Vahan (Hindi for “vehicle”) directs the shortlisted candidates to its clients for a walk-in interview and on-boarding. Its clients today include food delivery firms such as Zomato and Swiggy, hyperlocal concierge service Dunzo, and logistics company Lalamove.
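
For illustration, here is a minimal Python sketch of the kind of qualifying-question flow described above. The questions, field names and screening rule are invented assumptions, not Vahan’s actual system, and the WhatsApp Business API plumbing is deliberately left out.

# Hypothetical qualifying-question chat flow. The questions and the
# screening rule are invented for illustration; this is not Vahan's
# actual logic, and real deployments would sit behind WhatsApp webhooks.
QUESTIONS = [
    ("city", "Which city are you in?"),
    ("has_two_wheeler", "Do you have a two-wheeler? (yes/no)"),
    ("has_licence", "Do you have a driving licence? (yes/no)"),
]

def qualifies(answers):
    # Toy screening rule: delivery roles need a vehicle and a licence.
    return (answers.get("has_two_wheeler") == "yes"
            and answers.get("has_licence") == "yes")

def run_screening(ask):
    # `ask` sends a prompt to the candidate and returns their reply.
    answers = {field: ask(prompt).strip().lower() for field, prompt in QUESTIONS}
    if qualifies(answers):
        return "You qualify! We'll share a walk-in interview slot shortly."
    return "Sorry, this role isn't a match right now."

if __name__ == "__main__":
    # Simulate a candidate answering on the console.
    print(run_screening(lambda prompt: input(prompt + " ")))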

“There are three things that need to come together: What do people want? What are their capabilities? And the third is, what is available in the market?” Krishna said. “It’s really a matching problem that we’re trying to solve. We are using data and machine learning to solve a complex matching problem.”

Y Combinator (YC) recently selected Vahan to participate in its Summer 2019 batch. In a statement, Adora Cheung, a partner at YC, said, “High mobile penetration coupled with massive growth in data consumption has made it possible for companies such as Vahan to reach millions of Indians via digital channels.”

“Vahan is addressing a space that is severely underserved and is poised for disruption via tech. Their use of WhatsApp is a great fit for reaching the blue-collar audience and their traction proves it. We are excited to back them and see them grow!”

Vahan makes revenue by taking a cut each time one of its suggested candidates becomes part of the corporate client’s workforce. For one of the aforementioned clients, the referral cut is about 7.5%, two sources at that company said. The amount also varies based on how long those candidates stay in the job, they added.

In the last year, Vahan has amassed over a million users and has helped over 20,000 people secure a job. Each day, around 5,000 users check Vahan to look for a job. “This is all through zero-marketing spend. Job seekers are finding us through their friends,” Krishna said.

Vahan team

This is why Vahan operates on WhatsApp, too. “Most of the people we’re trying to help are not active on any online recruitment platform. WhatsApp is one of the few apps they use heavily,” Krishna said. “We send over 50,000 messages a day on WhatsApp and 95% of them get read. In fact, 15% get read in under 10 seconds,” he added.

WhatsApp, with more than 400 million users in India, has become a daily habit for much of the internet-connected population of the country. In recent years, many companies have built their businesses on top of the platform to use it as an effective distribution channel for their businesses.

For instance, Meesho, a social commerce app, helps millions of people in the country buy and sell products on WhatsApp. It recently received an investment from Facebook, the first of its kind by the social juggernaut in the country. Dunzo, which has been backed by Google, and Sharechat, which counts Twitter as one of its investors, also started on Facebook’s instant messaging app.

As for Vahan, it plans to soon offer skilling courses to its users as they navigate more job opportunities in the future. It is also working with many companies on deeper system-level integrations.

As the platform gains traction, Vahan also wants to expand its offering beyond delivery jobs.



Robust Neural Machine Translation




In recent years, neural machine translation (NMT) using Transformer models has experienced tremendous success. Based on deep neural networks, NMT models are usually trained end-to-end on very large parallel corpora (input/output text pairs) in an entirely data-driven fashion and without the need to impose explicit rules of language.

Despite this huge success, NMT models can be sensitive to minor perturbations of the input, which can manifest as a variety of different errors, such as under-translation, over-translation or mistranslation. For example, given a German sentence, the state-of-the-art NMT model, Transformer, will yield a correct translation.

“Der Sprecher des Untersuchungsausschusses hat angekündigt, vor Gericht zu ziehen, falls sich die geladenen Zeugen weiterhin weigern sollten, eine Aussage zu machen.”

(Machine translation to English: “The spokesman of the Committee of Inquiry has announced that if the witnesses summoned continue to refuse to testify, he will be brought to court.”)

But when we apply a subtle change to the input sentence, say from geladenen to the synonym vorgeladenen, the translation becomes very different (and in this case, incorrect):

“Der Sprecher des Untersuchungsausschusses hat angekündigt, vor Gericht zu ziehen, falls sich die vorgeladenen Zeugen weiterhin weigern sollten, eine Aussage zu machen.”

(Machine translation to English: “The investigative committee has announced that he will be brought to justice if the witnesses who have been invited continue to refuse to testify.”).

This lack of robustness in NMT models prevents many commercial systems from being applicable to tasks that cannot tolerate this level of instability. Therefore, learning robust translation models is not just desirable, but is often required in many scenarios. Yet, while the robustness of neural networks has been extensively studied in the computer vision community, only a few prior studies on learning robust NMT models can be found in the literature.

In “Robust Neural Machine Translation with Doubly Adversarial Inputs” (to appear at ACL 2019), we propose an approach that uses generated adversarial examples to improve the stability of machine translation models against small perturbations in the input. We learn a robust NMT model to directly overcome adversarial examples generated with knowledge of the model and with the intent of distorting the model predictions. We show that this approach improves the performance of the NMT model on standard benchmarks.

Training a Model with AdvGen
An ideal NMT model would generate similar translations for separate inputs that exhibit small differences. The idea behind our approach is to perturb a translation model with adversarial inputs in the hope of improving the model’s robustness. It does this using an algorithm called Adversarial Generation (AdvGen), which generates plausible adversarial examples for perturbing the model and then feeds them back into the model for defensive training. While this method is inspired by the idea of generative adversarial networks (GANs), it does not rely on a discriminator network, but simply applies the adversarial example in training, effectively diversifying and extending the training set.

The first step is to perturb the model using AdvGen. We start by using Transformer to calculate the translation loss based on a source input sentence, a target input sentence and a target output sentence. Then AdvGen randomly selects some words in the source sentence, assuming a uniform distribution. Each word has an associated list of similar words, i.e., candidates that can be used for substitution, from which AdvGen selects the word that is most likely to introduce errors in Transformer output. Then, this generated adversarial sentence is fed back into Transformer, initiating the defense stage.
First, the Transformer model is applied to an input sentence (lower left) and, in conjunction with the target output sentence (above right) and target input sentence (middle right; beginning with the placeholder “<sos>”), the translation loss is calculated. The AdvGen function then takes the source sentence, word selection distribution, word candidates, and the translation loss as inputs to construct an adversarial source example.
During the defense stage, the adversarial sentence is fed back into the Transformer model. Again the translation loss is calculated, but this time using the adversarial source input. Using the same method as above, AdvGen uses the target input sentence, word replacement candidates, the word selection distribution calculated by the attention matrix, and the translation loss to construct an adversarial target example.
In the defense stage, the adversarial source example serves as input to the Transformer model, and the translation loss is calculated. AdvGen then uses the same method as above to generate an adversarial target example from the target input.
Finally, the adversarial sentence is fed back into Transformer and the robustness loss using the adversarial source example, the adversarial target input example and the target sentence is calculated. If the perturbation led to a significant loss, the loss is minimized so that when the model is confronted with similar perturbations, it will not repeat the same mistake. On the other hand, if the perturbation leads to a low loss, nothing happens, indicating that the model can already handle this perturbation.
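
To make the attack half of this procedure concrete, below is a rough Python sketch of an AdvGen-style source perturbation. Two assumptions to note: the paper selects substitutions using gradient information from the translation loss, whereas this sketch simply re-scores each candidate with a black-box loss function, and the `loss_fn`, candidate lists and perturbation ratio are all illustrative stand-ins rather than the authors’ implementation.

import random

def advgen_source_attack(src_tokens, candidates, loss_fn, ratio=0.25, seed=0):
    # src_tokens:  tokenized source sentence.
    # candidates:  maps a token to a list of similar tokens that may replace it.
    # loss_fn:     callable(tokens) -> the model's translation loss for this
    #              source, with the target input/output pair held fixed.
    rng = random.Random(seed)
    tokens = list(src_tokens)
    # Sample positions uniformly, as AdvGen does for the source sentence.
    positions = rng.sample(range(len(tokens)), max(1, int(len(tokens) * ratio)))
    for pos in positions:
        best, best_loss = tokens[pos], loss_fn(tokens)
        for cand in candidates.get(tokens[pos], []):
            trial = tokens[:pos] + [cand] + tokens[pos + 1:]
            trial_loss = loss_fn(trial)
            # Keep the substitution that most increases the loss,
            # i.e. the most damaging plausible word swap.
            if trial_loss > best_loss:
                best, best_loss = cand, trial_loss
        tokens[pos] = best
    return tokens

# Defense stage: the adversarial sentence is fed back into the model and
# the robustness loss on it is minimized, so similar perturbations stop
# hurting the model.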

Model Performance
We demonstrate the effectiveness of our approach by applying it to the standard Chinese-English and English-German translation benchmarks. We observed a notable improvement of 2.8 and 1.6 BLEU points, respectively, compared to the competitive Transformer model, achieving a new state-of-the-art performance.
Comparison of Transformer model (Vaswani et al., 2017) on standard benchmarks.
We then evaluate our model on a noisy dataset, generated using a procedure similar to that described for AdvGen. We take a clean input dataset, such as one used for standard translation benchmarks, and randomly select words for similar-word substitution. We find that our model exhibits improved robustness compared to other recent models.
Comparison of Transformer, Miyato et al. and Cheng et al. on artificial noisy inputs.
These results show that our method is able to overcome small perturbations in the input sentence and improve the generalization performance. It outperforms competitive translation models and achieves state-of-the-art translation performance on standard benchmarks. We hope our translation model will serve as a robust building block for improving many downstream tasks, especially when those are sensitive or intolerant to imperfect translation input.

Acknowledgements
This research was conducted by Yong Cheng, Lu Jiang and Wolfgang Macherey. Additional thanks go to our leadership Andrew Moore and Julia (Wenli) Zhu‎.

Google’s Pixel 4 smartphone will have motion control and face unlock


Google’s Pixel 4 is coming out later this year, and it’s getting the long reveal treatment thanks to a decision this year from Google to go ahead and spill some of the beans early, rather than saving everything for one big final unveiling closer to availability. A new video posted by Google today about the forthcoming Pixel 4 (which likely won’t actually be available until fall) shows off some features new to this generation: Motion control and face unlock.

The new “Motion Sense” feature in the Pixel 4 will detect waves of your hand and translate them into software controls, including skipping songs, snoozing alarms and quieting incoming phone call alerts, with more planned features to come, according to Google. It’s based on Soli, a radar-based fine motion detection technology that Google first revealed at its annual I/O developer conference in 2015. Soli can detect very fine movements, including fingers pinched together to mimic a watch-winding motion, and it got approval from the FCC in January, hinting it would finally be arriving in production devices this year.

Pixel 4 is the first shipping device to include Soli, and Google says it’ll be available in “select Pixel countries” at launch (probably due to similar approvals requirements wherever it rolls out to consumers).

Google also teased “Face unlock,” something it has supported in Android previously – but with the Pixel 4, Google is doing it very differently than it has been handled on Android in the past. Once again, Soli is part of its implementation, turning on the face unlock sensors in the device as it detects your hand reaching to pick up the device. Google says this should mean that the phone will be unlocked by the time you’re ready to use it, since it does this all on the fly, and works from pretty much any orientation.

Face unlock will be supported for authorizing payments and logging into Android apps, as well, and all of the facial recognition processing done for face unlock will occur on the device – a privacy-oriented feature that’s similar to how Apple handles its own Face ID. In fact, Google will also be storing all the facial recognition data securely in its own dedicated on-device Titan M security chip, another move similar to Apple’s own approach.

Google made the Pixel 4 official and tweeted photos (or maybe photorealistic renders) of the new smartphone back in June, bucking the trend of keeping things unconfirmed until an official reveal closer to release. Based on this update, it seems likely we can expect to learn more about the new smartphone ahead of its availability, which is probably going to happen sometime around October based on past behavior.




When I'm Back at a Keyboard



Why governments should prioritize well-being | Nicola Sturgeon


In 2018, Scotland, Iceland and New Zealand established the Wellbeing Economy Governments network to challenge the acceptance of GDP as the ultimate measure of a country's success. In this visionary talk, First Minister of Scotland Nicola Sturgeon explains the far-reaching implications of a "well-being economy" -- which places factors like equal pay, childcare, mental health and access to green space at its heart -- and shows how this new focus could help build resolve to confront global challenges.


Huawei’s first 5G phone goes on sale in China next month


Huawei on Friday announced the upcoming release of its first 5G handset in its home market. Following on the heels of its UK debut, the Mate 20 X is currently up for pre-order, with an expected China arrival of August 16.

The handset beats the foldable Mate X to market, in spite of that handset having made its debut way back at Mobile World Congress in February. Of course, companies are understandably cautious about foldables in the wake of the mess with the Samsung Galaxy Fold, which finally got an approximate release date last week.

China Mobile flipped the switch on its Huawei-powered 5G transport network late last month, with commercial rollout expected to begin in October. In June, China Telecom and China Unicom were also granted licenses to operate commercial 5G networks, after some delay. Last week, ZTE’s Axon 10 Pro 5G went up for presale in its native China, as well.

Until rollout begins, those purchasing 5G handsets will have to rely on older networks like the rest of us, putting the U.S. and China in similar boats on that front. Of course, security concerns have put both Huawei and ZTE in the crosshairs internationally, particularly in North America.

Huawei has reportedly been looking to build much of its own hardware and software in-house, particularly in the wake of a ban on its use of offerings from U.S. companies. Notably, it also announced a $436 million investment in building out an ecosystem around its Arm-based Kunpeng server chip.



Europe’s top court sharpens guidance for sites using leaky social plug-ins


Europe’s top court has made a ruling that could affect scores of websites that embed the Facebook ‘Like’ button and receive visitors from the region.

The ruling by the Court of Justice of the EU states such sites are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

The ruling is significant because, as currently seems to be the case, Facebook’s Like buttons transfer personal data automatically when a webpage loads — without the user even needing to interact with the plug-in. That means websites relying on visitors ‘consenting’ to their data being shared with Facebook will likely need to change how the plug-in functions, ensuring no data is sent to Facebook before visitors are asked whether they want their browsing tracked by the adtech giant.
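
To sketch what such a change could look like on the website side, here is a small, hypothetical Flask example that only renders the Like button markup after the visitor opts in, so the browser never contacts Facebook’s servers on a plain page load. The opt-in mechanism and markup are simplified assumptions, not Facebook’s official integration guidance.

from flask import Flask, request, render_template_string

app = Flask(__name__)

# The Facebook SDK script tag is only emitted once the visitor has
# opted in, so no request reaches facebook.com before consent.
PAGE = """
<p>Product page content...</p>
{% if consented %}
  <div class="fb-like" data-href="https://example.com/product"></div>
  <script async defer src="https://connect.facebook.net/en_US/sdk.js"></script>
{% else %}
  <a href="?consent=1">Show the Like button (this loads Facebook code)</a>
{% endif %}
"""

@app.route("/product")
def product():
    # A real site would persist consent (e.g. in a cookie) rather than
    # use a bare query parameter; this keeps the sketch self-contained.
    consented = request.args.get("consent") == "1"
    return render_template_string(PAGE, consented=consented)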

The background to the case is a complaint against online clothes retailer, Fashion ID, by a German consumer protection association, Verbraucherzentrale NRW — which took legal action in 2015 seeking an injunction against Fashion ID’s use of the plug-in which it claimed breached European data protection law.

Like ’em or loathe ’em, Facebook’s ‘Like’ buttons are an impossible-to-miss component of the mainstream web. Though most Internet users are likely unaware that the social plug-ins are used by Facebook to track what other websites they’re visiting for ad-targeting purposes.

Last year the company told the UK parliament that between April 9 and April 16 the button had appeared on 8.4M websites, while its Share button social plug-in appeared on 931K sites. (Facebook also admitted to 2.2M instances of another tracking tool it uses to harvest non-Facebook browsing activity — called a Facebook Pixel — being invisibly embedded on third party websites.)

The Fashion ID case predates the introduction of the EU’s updated privacy framework, GDPR, which further toughens the rules around obtaining consent — meaning it must be purpose specific, informed and freely given.

Today’s CJEU decision also follows another ruling a year ago, in a case related to Facebook fan pages, when the court took a broad view of privacy responsibilities around platforms — saying both fan page administrators and host platforms could be data controllers. Though it also said joint controllership does not necessarily imply equal responsibility for each party.

In the latest decision the CJEU has sought to draw some limits on the scope of joint responsibility, finding that a website where the Facebook Like button is embedded cannot be considered a data controller for any subsequent processing, i.e. after the data has been transmitted to Facebook Ireland (the data controller for Facebook’s European users).

The joint responsibility specifically covers the collection and transmission of Facebook Like data to Facebook Ireland.

“It seems, at the outset, impossible that Fashion ID determines the purposes and means of those operations,” the court writes in a press release announcing the decision.

“By contrast, Fashion ID can be considered to be a controller jointly with Facebook Ireland in respect of the operations involving the collection and disclosure by transmission to Facebook Ireland of the data at issue, since it can be concluded (subject to the investigations that it is for the Oberlandesgericht Düsseldorf [German regional court] to carry out) that Fashion ID and Facebook Ireland determine jointly the means and purposes of those operations.”

Responding to the judgement in a statement attributed to its associate general counsel, Jack Gilbert, Facebook told us:

Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools. We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.

The company said it may make changes to the Like button to ensure websites that use it are able to comply with Europe’s GDPR.

It’s not yet clear what those specific changes could be — for example, whether Facebook will change the code of its social plug-ins to ensure no data is transferred at the point a page loads. (We’ve asked Facebook and will update this report with any response.)

Facebook also points out that other tech giants, such as Twitter and LinkedIn, deploy similar social plug-ins — suggesting the CJEU ruling will apply to other social platforms, as well as to thousands of websites across the EU where these sorts of plug-ins crop up.

“Sites with the button should make sure that they are sufficiently transparent to site visitors, and must make sure that they have a lawful basis for the transfer of the user’s personal data (e.g. if just the user’s IP address and other data stored on the user’s device by Facebook cookies) to Facebook,” Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, told TechCrunch.

“If their lawful basis is consent, then they’ll need to get consent before deploying the button for it to be valid — otherwise, they’ll have done the transfer before the visitor has consented.”

“If relying on legitimate interests — which might scrape by — then they’ll need to have done a legitimate interests assessment, and kept it on file (against the (admittedly unlikely) day that a regulator asks to see it), and they’ll need to have a mechanism by which a site visitor can object to the transfer.”

“Basically, if organisations are taking on board the recent guidance from the ICO and CNIL on cookie compliance, wrapping in Facebook ‘Like’ and other similar things in with that work would be sensible,” Brown added.

Also commenting on the judgement, Michael Veale, a UK-based researcher in tech and privacy law/policy, said it raises questions about how Facebook will comply with Europe’s data protection framework for any further processing it carries out of the social plug-in data.

“The whole judgement to me leaves open the question ‘on what grounds can Facebook justify further processing of data from their web tracking code?'” he told us. “If they have to provide transparency for this further processing, which would take them out of joint controllership into sole controllership, to whom and when is it provided?

“If they have to demonstrate they would win a legitimate interests test, how will that be affected by the difficulty in delivering that transparency to data subjects?

“Can Facebook do a backflip and say that for users of their service, their terms of service on their platform justifies the further use of data for which individuals must have separately been made aware of by the website where it was collected?

“The question then quite clearly boils down to non-users, or to users who are effectively non-users to Facebook through effective use of technologies such as Mozilla’s browser tab isolation.”

How far a tracking pixel could be considered a ‘similar device’ to a cookie is another question to consider, he said.

The tracking of non-Facebook users via social plug-ins certainly continues to be a hot-button legal issue for Facebook in Europe — where the company has twice lost in court to Belgium’s privacy watchdog on this issue. (Facebook has continued to appeal.)

Facebook founder Mark Zuckerberg also faced questions about tracking non-users last year from MEPs in the European Parliament — who pressed him on whether Facebook uses data on non-users for any purposes other than the security purpose of “keeping bad content out,” which he claimed requires Facebook to track everyone on the mainstream Internet.

MEPs also wanted to know how non-users can stop their data being transferred to Facebook. Zuckerberg gave no answer, likely because there’s currently no way for non-users to stop their data being sucked up by Facebook’s servers — short of staying off the mainstream Internet.



Syncing Problems With OneDrive on Windows 10? Here Are 10 Easy Fixes



OneDrive is Microsoft’s cloud storage offering that is built into Windows 10. The fact that it’s free and has Office integration makes it a popular and easy choice for many.

However, it can sometimes have trouble syncing your files. If some or all of your Microsoft OneDrive files aren’t syncing, we’ve put together easy solutions to help fix the problem.

1. Try to Access OneDrive Online

The first thing to check is that the problem is with your system, rather than the OneDrive service itself.

To do so, right click the OneDrive icon in your notification area and click View online. This should open your OneDrive files within your browser. If they don’t load, or you get an error (and not a general network error, which signals that your internet is down), it’s likely a problem at Microsoft’s end.

Microsoft 365 service status page

You can double check this by visiting the Microsoft 365 Service health page. This tells you if OneDrive is up and running—if you see a green tick, everything is fine.

If the problem is with OneDrive itself, all you can do is wait until it’s resolved.

2. Restart OneDrive

Have you tried turning it off and on again? Often simply closing and opening a program can fix it.

Right click the OneDrive icon in your notification area and click Close OneDrive. Then open Start, search for OneDrive, and open it.

Close OneDrive option

3. Ensure You Have Enough Storage Space

OneDrive offers 5 GB of storage for free, although you might have 50 GB, 1 TB or 5 TB if you have upgraded or have an Office 365 subscription.

Though it might seem like ample space, you’d be surprised how quickly it fills up. As such, you should check that your OneDrive account isn’t at capacity.

To do so, right click the OneDrive icon in your notification area and click Settings. Switch to the Account tab and see how much storage space you have used.

OneDrive capacity

If you have no free space left at all, or are close to the limit, remove some files from OneDrive or upgrade your capacity.

4. Check for Incompatible Files

First: Ensure that no individual file you want to sync is larger than 20 GB or larger than your remaining OneDrive space. If it is, try compressing the file first. Check out our list of free compression tools if you need a hand.

Second: The entire file path (including the file name) cannot exceed 400 characters. This can happen if you have lots of nested folders or really long folder or file names. To fix this, rename the files or move them to a top-level folder.

Third: File and folder names cannot contain these characters:

" * : < > ? / \ |

For more information on file name restrictions, refer to Microsoft’s support article.
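
If you have a lot of files, a short script can pre-check them for you. The sketch below is a hypothetical helper, not a Microsoft tool: it applies the rules above (the character blacklist and the 400-character path limit) to everything under a folder, and the OneDrive path in the example is a placeholder to replace with your own.

from pathlib import Path

INVALID_CHARS = set('"*:<>?/\\|')
MAX_PATH_LEN = 400  # full path, including the file name

def find_problem_files(root):
    # Walk everything under `root` and report names OneDrive may refuse.
    problems = []
    for path in Path(root).rglob("*"):
        if any(ch in INVALID_CHARS for ch in path.name):
            problems.append((path, "invalid character in name"))
        if len(str(path)) > MAX_PATH_LEN:
            problems.append((path, "path exceeds 400 characters"))
    return problems

if __name__ == "__main__":
    # Placeholder path; point this at your own OneDrive folder.
    for path, reason in find_problem_files(r"C:\Users\you\OneDrive"):
        print(f"{path}: {reason}")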

5. Update Windows and OneDrive

You should always keep both Windows and OneDrive up to date to benefit from the latest features and bug fixes. Both should update automatically, but you can do it manually too.

To update Windows, press Windows key + I to open Settings. Click Update & Security, then Check for updates. You will only be served updates as they become compatible with your system.

Check for updates in Windows 10

For OneDrive, you can grab the latest version from the OneDrive website. Look for the “Need to reinstall?” message, run the installer, follow the wizard to completion, and then sign back into your Microsoft account on OneDrive.

6. Relink OneDrive

There might be a problem with the connection to your OneDrive account. It’s worth unlinking OneDrive from your computer and then reconnecting it to see if that solves the issue.

Don’t worry, this won’t delete anything from your OneDrive account. Your data is safe.

Right click the OneDrive icon in your notification area and click Settings. On the Account tab, click Unlink this PC, then Unlink account.

Unlink OneDrive account from PC

You will then see the Set up OneDrive wizard. Enter your email address and follow this through in order to relink your OneDrive account.

7. Temporarily Turn Off Protection

Your Windows firewall or antivirus software might be conflicting with OneDrive. You can temporarily disable them to find out if that’s true.

To turn Windows Defender Firewall off, press Windows key + I to open Settings and go to Update & Security > Windows Security > Firewall & network protection. Select the network labeled as active, and slide Windows Defender Firewall to Off.

virus and threat protection in Windows Defender

Then, to disable Windows Defender antivirus, keep the same window open and click Virus & threat protection from the left-hand navigation. Click Virus & threat protection settings and slide Real-time protection to Off.

If you’re using a third-party firewall or antivirus (though we’ve compared the best antivirus software and rate Windows Defender highly), refer to that program’s support documentation to find out how to disable them.

Remember to turn both your firewall and antivirus back on after seeing if it fixes your OneDrive sync issue.

8. Move Stuck Files Out of OneDrive

Whether you know which files are causing the sync problems or not, moving some files out of a OneDrive sync folder can help.

First, right click the OneDrive icon in your notification area and click Pause syncing > 2 hours.

Next, go to one of the folders you are trying to sync and move a file to a location on your PC that you aren’t syncing. Right click OneDrive again and click Resume syncing. When the sync is done, move the file back.

9. Disable Office Upload

If your sync problem is with Microsoft Office files specifically, the Office upload cache may be interfering with OneDrive. You can disable the setting in OneDrive to see if it fixes the problem.

Right click the OneDrive icon in your notification area and click Settings. Go to the Office tab and uncheck Use Office 2016 to sync Office files that I open and click OK.

Office sync setting in OneDrive

Disabling this will mean that any simultaneous changes to Office files in your OneDrive won’t merge automatically. Of course, if it doesn’t resolve the sync problem, simply enable the setting again.

10. Fully Reset OneDrive

Resetting OneDrive will put all your settings back to default, including the folders you have selected to sync, but it can resolve sync problems. Also, it won’t remove any of your files, so don’t worry.

To begin, press Windows key + R to open Run. Input the following and click OK:

%localappdata%\Microsoft\OneDrive\onedrive.exe /reset

You may see a Command Prompt window appear. If you do, wait for it to vanish.

Next, open Start, search for OneDrive and open it. Follow the wizard through to set up your account settings. Remember to configure your settings again, like selecting which folders to sync.

Is OneDrive Right for You?

Hopefully one of these tips has helped resolve your OneDrive sync issues and your files are now flowing with ease.

If these problems have made you reconsider your use of OneDrive, you might want to take a look at our comparison of the three big cloud storage providers to help decide on an alternative.




How to Sync Google Calendar With Your iPhone

The World’s First Bluetooth 5.0 Cassette Tape Player Is Coming Soon


What’s old is new again! Everyone loves a nice throwback, and that’s exactly what IT’S OK is. The new device is a Bluetooth-compatible cassette player. If you miss the days of Sony’s Walkman, then you’re going to love this thing!

Is it necessary to blend an old and dated technology with the latest wireless standard? Absolutely not, but it most definitely is cool and interesting!

While people might not look back at cassette tapes through the same rose-colored glasses as they do vinyl records, there are definitely people out there who are nostalgic for the sound of tapes, and this device aims to fill that niche.

IT’S OK Bluetooth Cassette Tape Player Features

The wireless tape player features Bluetooth 5.0 technology, so while it might be using an old-school form of media, it features the latest Bluetooth standard and all of the features and benefits that it offers. It can connect to a pair of Bluetooth headphones or a speaker, making it a pretty versatile device (assuming you still have some tapes sitting around to listen to).

In addition to playing music, IT’S OK actually has the ability to record audio. You won’t need to worry about finding a store that sells blank tapes, since the creators of the Bluetooth player include a tape with 60 minutes of recording capacity with the device.

If you really want to go old school, there’s also a 3.5mm port on IT’S OK, so you can actually connect it to a pair of wired headphones or an older stereo system.

Keeping with the old-school theme, the device is powered by two AA batteries. That’s an interesting design choice considering most devices nowadays elect to go with some sort of rechargeable battery. At least it’s one less thing you need to charge.

The tape player is available in three different colors—Cloud (white), Sakura (pink), and Evening (blue). The actual portion that sits over the tape is transparent, so you can watch the gears turn while you listen to your old-school music.

As far as the size of the device goes, it’s 118mm x 84mm x 33.5mm and 152g without the aforementioned AA batteries. It would be difficult for the creators to make it much smaller than that, as it needs to be able to fit the tape and physical parts that make a tape work.

IT’S OK Price and Availability

NIMN Lab is seeking funding for IT’S OK on Kickstarter. It already exceeded its goal, which was quite modest at only $12,789. If you’re interested in ordering a device for yourself (though there’s no guarantee that you’ll receive one, as there are risks involved in backing a crowdfunding project), you can do so for around $75 USD.




Google at ACL 2019




This week, Florence, Italy hosts the 2019 Annual Meeting of the Association for Computational Linguistics (ACL 2019), the premier conference in the field of natural language understanding, covering a broad spectrum of research areas that are concerned with computational approaches to natural language.

As a leader in natural language processing and understanding, and a Diamond Level sponsor of ACL 2019, Google will be on hand to showcase the latest research on syntax, semantics, discourse, conversation, multilingual modeling, sentiment analysis, question answering, summarization, and generally building better systems using labeled and unlabeled data.

If you’re attending ACL 2019, we hope that you’ll stop by the Google booth to meet our researchers and discuss projects and opportunities at Google that go into solving interesting problems for billions of people. Our researchers will also be on hand to demo the Natural Questions corpus, the Multilingual Universal Sentence Encoder and more. You can also learn more about the Google research being presented at ACL 2019 below (Google affiliations in blue).

Organizing Committee includes:
Enrique Alfonseca

Accepted Publications
A Joint Named-Entity Recognizer for Heterogeneous Tag-sets Using a Tag Hierarchy
Genady Beryozkin, Yoel Drori, Oren Gilon, Tzvika Hartman, Idan Szpektor

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study
Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, Yoshua Bengio

Generating Logical Forms from Graph Representations of Text and Entities
Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, Yasemin Altun

Extracting Symptoms and their Status from Clinical Conversations
Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, Izhak Shafran

Stay on the Path: Instruction Fidelity in Vision-and-Language Navigation
Vihan Jain, Gabriel Magalhaes, Alexander Ku, Ashish Vaswani, Eugene Ie, Jason Baldridge

Meaning to Form: Measuring Systematicity as Information
Tiago Pimentel, Arya D. McCarthy, Damian Blasi, Brian Roark, Ryan Cotterell

Matching the Blanks: Distributional Similarity for Relation Learning
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, Ruslan Salakhutdinov

HighRES: Highlight-based Reference-less Evaluation of Summarization
Hardy Hardy, Shashi Narayan, Andreas Vlachos

Zero-Shot Entity Linking by Reading Entity Descriptions
Lajanugen Logeswaran, Ming-Wei Chang, Kristina Toutanova, Kenton Lee, Jacob Devlin, Honglak Lee

Robust Neural Machine Translation with Doubly Adversarial Inputs
Yong Cheng, Lu Jiang, Wolfgang Macherey

Natural Questions: a Benchmark for Question Answering Research
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, Slav Petrov

Like a Baby: Visually Situated Neural Language Acquisition
Alexander Ororbia, Ankur Mali, Matthew Kelly, David Reitter

What Kind of Language Is Hard to Language-Model?
Sebastian J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, Jason Eisner

How Multilingual is Multilingual BERT?
Telmo Pires, Eva Schlinger, Dan Garrette

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation
Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, William Cohen

BAM! Born-Again Multi-Task Networks for Natural Language Understanding
Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le

Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation
Wei Wang, Isaac Caswell, Ciprian Chelba

Monotonic Infinite Lookback Attention for Simultaneous Machine Translation
Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, Colin Raffel

On the Robustness of Self-Attentive Models
Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh

Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B
Jiaming Luo, Yuan Cao, Regina Barzilay

How Large Are Lions? Inducing Distributions over Quantitative Attributes
Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, Dan Roth

BERT Rediscovers the Classical NLP Pipeline
Ian Tenney, Dipanjan Das, Ellie Pavlick

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling
Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pappagari, R. Thomas Mccoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, Samuel R. Bowman

Robust Zero-Shot Cross-Domain Slot Filling with Example Values
Darsh Shah, Raghav Gupta, Amir Fayazi, Dilek Hakkani-Tur

Latent Retrieval for Weakly Supervised Open Domain Question Answering
Kenton Lee, Ming-Wei Chang, Kristina Toutanova

On-device Structured and Context Partitioned Projection Networks
Sujith Ravi, Zornitsa Kozareva

Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci

Informative Image Captioning with External Sources of Information
Sanqiang Zhao, Piyush Sharma, Tomer Levinboim, Radu Soricut

Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach
Zonghan Yang, Yong Cheng, Yang Liu, Maosong Sun

Synthetic QA Corpora Generation with Roundtrip Consistency
Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, Michael Collins

Unsupervised Paraphrasing without Translation
Aurko Roy, David Grangier

Workshops
Widening NLP 2019
Organizers include: Diyi Yang

NLP for Conversational AI
Organizers include: Minh-Thang Luong, Tania Bedrax-Weiss

The Fourth Arabic Natural Language Processing Workshop
Organizers include: Imed Zitouni

The Third Workshop on Abusive Language Online
Organizers include: Zeerak Waseem

TyP-NLP, Typology for Polyglot NLP
Organizers include: Manaal Faruqui

Gender Bias in Natural Language Processing
Organizers include: Kellie Webster

Tutorials
Wikipedia as a Resource for Text Analysis and Retrieval
Organizer: Marius Pasca


Last-mile training and the future of work in an expanding gig economy


The future of work is so uncertain that perhaps the only possible job security exists for the person who can credibly claim to be an expert on the future of work.

Nevertheless, there are two trends purported experts are reasonably certain about: (1) continued growth in the number of jobs requiring substantive and sustained interaction with technology; and (2) continued rapid expansion of the gig economy.

This first future of work trend is evident today in America’s skills gap with 7 million unfilled jobs — many of them mid- or high-skill positions requiring a range of digital and technology capabilities.

Amazon’s recent announcement that it will spend $700 million over the next six years to upskill 100,000 of its low-wage fulfillment center employees for better digital jobs within Amazon and elsewhere demonstrates an understanding that the private sector must take some responsibility for the requisite upskilling and retraining, as well as the importance of establishing pathways to these jobs that are faster and cheaper than the ones currently on offer from colleges and universities.

These pathways typically involve “last-mile training”, a combination of digital skills, specific industry or enterprise knowledge, and soft skills to make candidates job-ready from day one.

The second trend isn’t new; the gig economy has existed since the advent of the “Help Wanted” sign. But what’s powered the gig revolution is the shift from signs and classified ads to digital platforms and marketplaces that facilitate continued and repeated matching of gig and gig worker. These talent platforms have made it possible for companies and organizations to conceptualize and compartmentalize work as projects rather than full-time jobs, and for workers to earn a living by piecing together gigs.

Critics of the gig economy decry the lack of job security, healthcare and benefits, and rightly so. If it’s hard to make ends meet as a full-time employee making a near-minimum wage, it’s impossible to do so via a gig platform at a comparable low wage. But rather than fighting the onset of the gig economy, critics might achieve more by focusing on upskilling gig workers.

To date, conversations about pathways and upskilling have focused on full-time employment. In the workforce or skills gap vernacular, upskilled Amazon workers might leave the fulfillment center for a tech support job with Amazon or another company, but it’s always a full-time job. But how do these important concepts intersect with the rising gig economy?


Just as there are low-skill and high-skill jobs, there are gig platforms that require limited or low skills, and platforms that require a breadth of advanced skills. Gig platforms that can be classified as low-skill include Amazon’s Mechanical Turk, TaskRabbit, Uber and Lyft, and Instawork (hospitality). There are also mid-tier platforms like Upwork that span a wide range of gigs. And then there are platforms like Gigster (app development), and Business Talent Group (consulting and entire management functions) that require the same skillset as the most lucrative, in-demand, full-time positions.

So just as Amazon is focused on last-mile training programs to upskill workers and create new pathways to better jobs, in the gig economy context, our focus should be on strategies and platforms that allow gig workers to move from lower-skill to higher-skill platforms, i.e., pathways for Uber drivers to become Business Talent Group executives.

One high-skill gig platform has developed an innovative strategy to do exactly this. CreatorUp is a gig platform for digital video production that has built in a last-mile training on-ramp. CreatorUp offers low-cost or free last-mile training programs on its own and in conjunction with clients like YouTube and Google to upskill gig workers so they can be effective digital video producers on the CreatorUp platform.

CreatorUp’s programs are driven by client demand; because the company saw significant demand from clients for AR/VR video production, it launched a new AR/VR training track. Graduates of CreatorUp’s programs join the platform and are staffed on a wide range of productions that clients require to engage customers, suppliers, employees and/or to build their brands.


The good news for CreatorUp and other high-skill gig platforms that begin to incorporate last-mile training is that investing in these pathways can start the flywheel that every successful talent marketplace requires. Clients only patronize talent marketplaces once there’s a critical mass of talent on the platform. So how do platforms attract talent? One way is to be first-to-market in a category. A second is to attract billions in venture capital. But a third might be to use last-mile training to create new talent.

CreatorUp believes its last-mile training programs have allowed it to grow a network that serves diverse client needs better than any other video production platform. For not only has last-mile training allowed CreatorUp to understand and certify the skills of talent on the platform, and therefore to meet the needs of more clients, it has also allowed CreatorUp to bid more competitively because newly trained talent is often willing to work for less.

Last-mile training has the potential to be a win-win for the gig economy. It’s a strategy that may allow gig platforms to scale, matching more talent with more clients. Meanwhile, by allowing workers to upskill from lower-tier gig platforms to higher skill platforms, it’s also the first gig economy solution for social mobility.



Report claims all three new iPhones planned for 2020 will support 5G


Apple analyst Ming-Chi Kuo — sometimes described as “the most accurate Apple analyst in the world” — has written a new note to investors saying that the three iPhones expected to launch in 2020 will all feature support for 5G. Previous Kuo reports have said the 2020 iPhones could come in new sizes: 5.4-inch and 6.7-inch high-end iPhones with OLED displays, along with a 6.1-inch model, also with an OLED display.

Previously, he predicted that only two of the three new iPhones slated for 2020 would support 5G. But with well-spec’d Androids flooding the market, he says it looks like Apple will offer 5G in all models in order to better compete. He’s also confirmed the view that Apple will be able to throw more resources into developing the 5G iPhone now that it has acquired Intel’s smartphone modem chip business.

The report, leaked to MacRumors, contains this quote:

We now believe that all three new 2H20 iPhone models will support 5G for the following reasons. (1) Apple has more resource for developing the 5G iPhone after the acquisition of Intel baseband business. (2) We expect that the prices of 5G Android smartphones will decline to $249-349 USD in 2H20. We believe that 5G Android smartphones, which will be sold at $249-349 USD, will only support Sub-6GHz. But the key is that consumers will think that 5G is the necessary function in 2H20. Therefore, iPhone models which will be sold at higher prices have to support 5G for winning more subsidies from mobile operators and consumers’ purchase intention. (3) Boosting 5G developments could benefit Apple’s AR ecosystem.

The report expects all three 2020 iPhone models to support both mmWave and Sub-6GHz spectrum (two different kinds of 5G) for the US market. Whether Apple will launch a 5G iPhone that only supports Sub-6GHz, allowing for a lower price and thus making it suitable for the Chinese market, remains unclear.

mmWave is the ‘fastest 5G’ that’s most often referred to, but as it is suited to denser, urban areas, it will not be used as much in rural or suburban areas, where mid-bands and low-bands, called sub-6GHz 5G, will be employed. All bands are faster than 4G, with mmWave the fastest.

Apple will use modem chips from Qualcomm in its 2020 5G iPhones, while it works on its own modem chips, due in 2021.


