26 November 2019

Astrophotography with Night Sight on Pixel Phones




Taking pictures of outdoor scenes at night has so far been the domain of large cameras, such as DSLRs, which are able to achieve excellent image quality, provided photographers are willing to put up with bulky equipment and sometimes tricky postprocessing. A few years ago, experiments with phone camera nighttime photography produced pleasing results, but the methods employed were impractical for all but the most dedicated users.

Night Sight, introduced last year as part of the Google Camera App for the Pixel 3, allows phone photographers to take good-looking handheld shots in environments so dark that the normal camera mode would produce grainy, severely underexposed images. In a previous blog post our team described how Night Sight is able to do this, with a technical discussion presented at SIGGRAPH Asia 2019.

This year’s version of Night Sight pushes the boundaries of low-light photography with phone cameras. By allowing exposures up to 4 minutes on Pixel 4, and 1 minute on Pixel 3 and 3a, the latest version makes it possible to take sharp and clear pictures of the stars in the night sky or of nighttime landscapes without any artificial light.
The Milky Way as seen from the summit of Haleakala volcano on a cloudless and moonless September night, captured using the Google Camera App running on a Pixel 4 XL phone. The image has not been retouched or post-processed in any way. It shows significantly more detail than a person can see with the unaided eye on a night this dark. The dust clouds along the Milky Way are clearly visible, the sky is covered with thousands of stars, and unlike human night vision, the picture is colorful.
A Brief Overview of Night Sight
The amount of light detected by the camera’s image sensor inherently has some uncertainty, called “shot noise,” which causes images to look grainy. The visibility of shot noise decreases as the amount of light increases; therefore, it is best for the camera to gather as much light as possible to produce a high-quality image.

How much light reaches the image sensor in a given amount of time is limited by the aperture of the camera lens. Extending the exposure time for a photo increases the total amount of light captured, but if the exposure is long, motion in the scene being photographed and unsteadiness of the handheld camera can cause blur. To overcome this, Night Sight splits the exposure into a sequence of multiple frames with shorter exposure times and correspondingly less motion blur. The frames are first aligned, compensating for both camera shake and in-scene motion, and then averaged, with careful treatment of cases where perfect alignment is not possible. While individual frames may be fairly grainy, the combined, averaged image looks much cleaner.
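The noise benefit of aligning and averaging frames can be sketched numerically. Here is a minimal simulation, assuming perfectly aligned frames and pure shot (Poisson) noise; the frame count and signal level are illustrative, not Night Sight's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat "true" scene patch, captured repeatedly with shot noise.
true_signal = 100.0  # mean photon count per pixel per frame
num_frames = 15

# Shot noise is Poisson-distributed; its std is sqrt(signal).
frames = rng.poisson(true_signal, size=(num_frames, 64, 64)).astype(float)

single_noise = frames[0].std()          # graininess of one frame
averaged = frames.mean(axis=0)          # align-and-average (alignment is trivial here)
avg_noise = averaged.std()              # graininess after averaging

# Averaging N frames reduces noise by roughly sqrt(N).
print(f"single-frame noise: {single_noise:.2f}")
print(f"averaged noise:     {avg_noise:.2f} "
      f"(~{single_noise / avg_noise:.1f}x lower)")
```

With 15 frames, the noise drops by roughly sqrt(15), about a factor of 3.9, which is why the combined image looks so much cleaner than any individual frame.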

Experimenting with Exposure Time
Soon after the original Night Sight was released, we started to investigate taking photos in very dark outdoor environments with the goal of capturing the stars. We realized that, just as with our previous experiments, high quality pictures would require exposure times of several minutes. Clearly, this cannot work with a handheld camera; the phone would have to be placed on a tripod, a rock, or whatever else might be available to hold the camera steady.

Just as with handheld Night Sight photos, nighttime landscape shots must take motion in the scene into account — trees sway in the wind, clouds drift across the sky, and the moon and the stars rise in the east and set in the west. Viewers will tolerate motion-blurred clouds and tree branches in a photo that is otherwise sharp, but motion-blurred stars that look like short line segments look wrong. To mitigate this, we split the exposure into frames with exposure times short enough to make the stars look like points of light. Taking pictures of real night skies we found that the per-frame exposure time should not exceed 16 seconds.
Motion-blurred stars in a single-frame two-minute exposure.
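The 16-second ceiling can be sanity-checked with back-of-the-envelope arithmetic: stars drift across the sky at the sidereal rate as Earth rotates. A quick sketch, where the pixel scale is a hypothetical value chosen only for illustration (the real threshold depends on the lens and sensor):

```python
# Stars appear to move as Earth completes one rotation per sidereal day.
sidereal_day_s = 86164.0
rate_arcsec_per_s = 360.0 * 3600.0 / sidereal_day_s  # ~15 arcsec per second

exposure_s = 16.0
motion_arcsec = rate_arcsec_per_s * exposure_s       # ~240 arcsec of drift

# Hypothetical pixel scale for a phone camera (illustrative assumption).
pixel_scale_arcsec = 80.0
blur_pixels = motion_arcsec / pixel_scale_arcsec

print(f"star motion in {exposure_s:.0f}s: {motion_arcsec:.0f} arcsec "
      f"(~{blur_pixels:.1f} px at {pixel_scale_arcsec:.0f} arcsec/px)")
```

Longer per-frame exposures multiply that drift, stretching point-like stars into the short line segments visible in the two-minute exposure above.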
While the number of frames we can capture for a single photo, and therefore the total exposure time, is limited by technical considerations, we found that it is more tightly constrained by the photographer’s patience. Few are willing to wait more than four minutes for a picture, so we limited a single Night Sight image to at most 15 frames with up to 16 seconds per frame.
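The exposure budget implied by those limits is simple arithmetic:

```python
max_frames = 15
max_per_frame_s = 16

total_s = max_frames * max_per_frame_s
print(f"maximum capture time: {total_s} s = {total_s / 60:.0f} min")
```

Fifteen frames at sixteen seconds each gives the four-minute cap mentioned above.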

Sixteen-second exposures allow us to capture enough light to produce recognizable images, but a usable camera app capable of taking pictures that look great must deal with additional issues that are unique to low-light photography.

Dark Current and Hot Pixels
Dark current causes CMOS image sensors to record a spurious signal, as if the pixels were exposed to a small amount of light, even when no actual light is present. The effect is negligible when exposure times are short, but it becomes significant with multi-second captures. Due to unavoidable imperfections in the sensor’s silicon substrate, some pixels exhibit higher dark current than their neighbors. In a recorded frame these “warm pixels,” as well as defective “hot pixels,” are visible as tiny bright dots.

Warm and hot pixels can be identified by comparing the values of neighboring pixels within the same frame and across the sequence of frames recorded for a photo, and looking for outliers. Once an outlier has been detected, it is concealed by replacing its value with the average of its neighbors. Since the original pixel value is discarded, there is a loss of image information, but in practice this does not noticeably affect image quality.
Left: A small region of a long-exposure image with hot pixels, and warm pixels caused by dark current nonuniformity. Right: The same image after outliers have been removed. Fine details in the landscape, including small points of light, are preserved.
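The outlier test described above can be sketched as a neighbor comparison within a single frame. This is a simplified, hypothetical version using a median of the 8 surrounding pixels and a fixed robust threshold; the production pipeline also compares values across the frame sequence:

```python
import numpy as np

def suppress_hot_pixels(img: np.ndarray, thresh: float = 5.0) -> np.ndarray:
    """Replace pixels that stand far above the median of their 8 neighbors."""
    padded = np.pad(img, 1, mode="reflect")
    # Build the 8 shifted views so neighbors[k] holds one neighbor per pixel.
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    neighbors = np.stack([
        padded[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
        for dy, dx in shifts
    ])
    med = np.median(neighbors, axis=0)
    # Median absolute deviation: a robust estimate of local noise.
    mad = np.median(np.abs(neighbors - med), axis=0) + 1e-6
    outliers = (img - med) > thresh * mad
    out = img.copy()
    out[outliers] = med[outliers]  # conceal outliers with the neighbor median
    return out

# A flat noisy patch with two artificially injected hot pixels.
rng = np.random.default_rng(1)
frame = rng.normal(50.0, 2.0, size=(32, 32))
frame[5, 5] = 255.0
frame[20, 10] = 200.0
cleaned = suppress_hot_pixels(frame)
```

After the pass, the injected bright dots are pulled back to the local background level while ordinary pixels are left alone, mirroring the before/after comparison in the figure.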
Scene Composition
Mobile phones use their screens as electronic viewfinders — the camera captures a continuous stream of frames that is displayed as a live video in order to aid with shot composition. The frames are simultaneously used by the camera’s autofocus, auto exposure, and auto white balance systems.

To feel responsive to the photographer, the viewfinder is updated at least 15 times per second, which limits the viewfinder frame exposure time to 66 milliseconds. This makes it challenging to display a detailed image in low-light environments. At light levels below the rough equivalent of a full moon, the viewfinder becomes mostly gray, perhaps showing a few bright stars but none of the landscape, and composing a shot becomes difficult.

To assist in framing the scene in extremely low light, Night Sight displays a “post-shutter viewfinder”. After the shutter button has been pressed, each long-exposure frame is displayed on the screen as soon as it has been captured. With exposure times up to 16 seconds, these frames have collected almost 250 times more light than the regular viewfinder frames, allowing the photographer to easily see image details as soon as the first frame has been captured. The composition can then be adjusted by moving the phone while the exposure continues. Once the composition is correct, the initial shot can be stopped, and a second shot can be captured where all frames have the desired composition.
Left: The live Night Sight viewfinder in a very dark outdoor environment. Except for a few points of light from distant buildings, the landscape and the sky are largely invisible. Right: The post-shutter viewfinder during a long exposure shot. The image is much clearer; it updates after every long-exposure frame.
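The "almost 250 times" figure follows directly from the exposure times, assuming the 15-updates-per-second viewfinder cadence mentioned above:

```python
viewfinder_exposure_s = 1.0 / 15.0  # ~66 ms cap at 15 updates per second
long_frame_exposure_s = 16.0

ratio = long_frame_exposure_s / viewfinder_exposure_s
print(f"light gathered per frame: {ratio:.0f}x the viewfinder frame")
```

A single 16-second frame collects 240 times the light of a 66-millisecond viewfinder frame, which is why the post-shutter viewfinder shows detail the live viewfinder cannot.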
Autofocus
Autofocus ensures that the image captured by the camera is sharp. In normal operation, the incoming viewfinder frames are analyzed to determine how far the lens must be from the sensor to produce an in-focus image, but in very low light the viewfinder frames can be so dark and grainy that autofocus fails due to lack of detectable image detail. When this happens, Night Sight on Pixel 4 switches to “post-shutter autofocus.” After the user presses the shutter button, the camera captures two autofocus frames with exposure times up to one second, long enough to detect image details even in low light. These frames are used only to focus the lens and do not contribute directly to the final image.

Even though using long-exposure frames for autofocus leads to consistently sharp images at light levels low enough that the human visual system cannot clearly distinguish objects, sometimes it gets too dark even for post-shutter autofocus. In this case the camera instead focuses at infinity. In addition, Night Sight includes manual focus buttons, allowing the user to focus on nearby objects in very dark conditions.

Sky Processing
When images of very dark environments are viewed on a screen, they are displayed much brighter than the original scenes. This can change the viewer’s perception of the time of day when the photos were captured. At night we expect the sky to be dark. If a picture taken at night shows a bright sky, then we see it as a daytime scene, perhaps with slightly unusual lighting.

This effect is countered in Night Sight by selectively darkening the sky in photos of low-light scenes. To do this, we use machine learning to detect which regions of an image represent sky. An on-device convolutional neural network, trained on over 100,000 images that were manually labeled by tracing the outlines of sky regions, identifies each pixel in a photograph as “sky” or “not sky.”
A landscape picture taken on a bright full-moon night, without sky processing (left half), and with sky darkening (right half). Note that the landscape is not darkened.
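Given such a per-pixel sky mask, the selective darkening itself is conceptually simple. A toy sketch with a hand-made mask and an assumed constant darkening factor (the real network's mask and the tone adjustment are, of course, far more sophisticated):

```python
import numpy as np

def darken_sky(img: np.ndarray, sky_mask: np.ndarray,
               factor: float = 0.5) -> np.ndarray:
    """Scale down only the pixels the segmentation labeled as sky."""
    out = img.astype(float).copy()
    out[sky_mask] *= factor
    return out

# Toy 4x4 "image": top two rows are sky, bottom two are landscape.
img = np.full((4, 4), 100.0)
sky_mask = np.zeros((4, 4), dtype=bool)
sky_mask[:2, :] = True

result = darken_sky(img, sky_mask)
# Sky rows are darkened; landscape rows are untouched, as in the figure.
```

Because the adjustment is gated by the mask, the landscape keeps its brightness while the sky is pushed back toward the dark appearance viewers expect at night.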
Sky detection also makes it possible to perform sky-specific noise reduction, and to selectively increase contrast to make features like clouds, color gradients, or the Milky Way more prominent.

Results
With the phone on a tripod, Night Sight produces sharp pictures of star-filled skies, and as long as there is at least a small amount of moonlight, landscapes will be clear and colorful.

Of course, the phone’s capabilities are not limitless, and there is always room for improvement. Although nighttime scenes are dark overall, they often contain bright light sources such as the moon, distant street lamps, or prominent stars. While we can capture a moonlit landscape, or details on the surface of the moon, the extremely large brightness range, which can exceed 500,000:1, so far prevents us from capturing both in the same image. Also, when the stars are the only source of illumination, we can take clear pictures of the sky, but the landscape is only visible as a silhouette.
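The scale of that 500,000:1 brightness range is easier to appreciate when converted to photographic stops:

```python
import math

contrast_ratio = 500_000
stops = math.log2(contrast_ratio)  # each stop is a doubling of light
print(f"{contrast_ratio}:1 is about {stops:.1f} stops of dynamic range")
```

Roughly 19 stops is well beyond what a single capture can record, which is why the moon and a moonlit landscape cannot yet both be rendered faithfully in one image.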

For Pixel 4 we have been using the brightest part of the Milky Way, near the constellation Sagittarius, as a benchmark for the quality of images of a moonless sky. By that standard Night Sight is doing very well. Although Milky Way photos exhibit some residual noise, they are pleasing to look at, showing more stars and more detail than a person can see looking at the real night sky.
Examples of photos taken with the Google Camera App on Pixel 4. An album with more pictures can be found here.
Tips and Tricks
In the course of developing and testing Night Sight astrophotography we gained some experience taking outdoor nighttime pictures with Pixel phones, and we’d like to share a list of tips and tricks that have worked for us. You can find it here.

Acknowledgements
Night Sight is an ongoing collaboration between several teams at Google. Key contributors to the project include: from the Gcam team, Orly Liba, Nikhil Karnad, Charles He, Manfred Ernst, Michael Milne, Andrew Radin, Navin Sarma, Jon Barron, Yun-Ta Tsai, Tianfan Xue, Jiawen Chen, Dillon Sharlet, Ryan Geiss, Sam Hasinoff, Alex Schiffhauer, Yael Pritch Knaan and Marc Levoy; from the Super Res Zoom team, Bart Wronski, Peyman Milanfar, and Ignacio Garcia Dorado; from the Google camera app team, Emily To, Gabriel Nava, Sushil Nath, Isaac Reynolds, and Michelle Chen; from the Android platform team, Ryan Chan, Ying Chen Lou, and Bob Hung; from the Mobile Vision team, Longqi (Rocky) Cai, Huizhong Chen, Emily Manoogian, Nicole Maffeo, and Tomer Meron; from Machine Perception, Elad Eban and Yair Movshovitz-Attias.

Android’s Ambient Mode will soon come to ‘select devices’


You’ve probably heard murmurs about Google’s forthcoming Ambient Mode for Android. The company first announced this feature, which essentially turns an Android device into a smart display while it’s charging, in September. Now, in a Twitter post, Google confirmed that it will launch soon, starting with a number of select devices that run Android 8.0 or later.

At the time, Google said Ambient Mode was coming to the Lenovo Smart Tab M8 HD and Smart Tab tablets, as well as the Nokia 7.2 and 6.2 phones. According to The Verge, it’ll also come to Sony, Nokia, Transsion and Xiaomi phones, though Google’s own Pixels aren’t on the company’s list yet.

“The ultimate goal for proactive Assistant is to help you get things done faster, anticipate your needs and accomplish your tasks as quickly and as easily as possible,” said Google Assistant product manager Arvind Chandrababu in the announcement. “It’s fundamentally about moving from an app-based way of doing things to an intent-based way of doing things. Right now, users can do most things with their smartphones, but it requires quite a bit of mental bandwidth to figure out, hey, I need to accomplish this task, so let me backtrack and figure out all the steps that I need to do in order to get there.”

Those are pretty lofty goals. In practice, what this means, for now, is that you will be able to set an alarm with just a few taps from the ambient screen, see your upcoming appointments, turn off your connected lights and see a slideshow of your images in the background. I don’t think that any of those tasks really consumed a lot of mental bandwidth in the first place, but Google says it has more proactive experiences planned for the future.

 




Instagram founders join $30M raise for Loom work video messenger


Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.

That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.

“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.

Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with” Thomas notes.

A smart product, swift traction, and a shot at riding the consumerization of enterprise trend has secured Loom a $30 million Series B. The round that’s being announced later today was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.

“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking” Krieger says. “Since then I feel like some products have managed to try to capture some of that but few as much as Loom did. When I first used Loom I turned to Kevin who was my Instagram co-founder and said, ‘oh my god, how did they do that? This feels impossibly fast.'”

Systrom concurs about the similarities, saying “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double-down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.

Loom is also prepping some of its most requested features. The startup is launching an iOS app next month with Android coming the first half of 2020, improving its video editor with blurring for hiding your bad hair day and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.

But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.

Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.

To build out these features Loom has already grown to a headcount of 45. It’s also hired away former head of growth at Dropbox Nicole Obst, head of design for Slack Joshua Goldenberg, and VP of commercial product strategy for Intercom Matt Hodges.

Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete.

Loom co-founder and CEO Joe Thomas

Hodges, Loom’s head of marketing, tells me “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easily integratable so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”

The tea leaves point to video invading more and more of our communication, so I expect rival startups and Loom-like features to crop up. Vidyard and Wistia’s Soapbox are already pushing into the space. As long as it has the head start, Loom needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical,” Thomas tells me.

One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”




How you can use impostor syndrome to your benefit | Mike Cannon-Brookes


Have you ever doubted your abilities, feared you were going to be discovered as a "fraud"? That's called "impostor syndrome," and you're definitely not alone in feeling it, says entrepreneur and CEO Mike Cannon-Brookes. In this funny, relatable talk, he shares how his own experiences of impostor syndrome helped pave the way to his success -- and shows how you can use it to your advantage, too.


5 Ways to Get In-Car Wi-Fi for Internet Access on the Go



New cars come connected, with some even offering Wi-Fi to passengers. But what about those of us who cannot afford a connected car? What options are available for getting Wi-Fi in your vehicle?

Whether you want live updates from Google Maps as you drive, listen to Spotify, or simply keep your kids entertained, here’s how you can get wireless internet in your car.

1. Simple Option: Use 4G Phones and Tablets

As most older vehicles shipped without any built-in networking, it makes sense to use 4G (or 5G, where available).

After all, if your passengers have mobile internet, why would you need to provide them with connectivity? Well, a couple of reasons spring to mind:

  1. It’s a long journey and their mobile data might be capped
  2. You’re driving beyond their carrier’s mobile internet range

In either scenario, it’s smart to use an in-car solution. But for city and suburban travel, there’s no reason why passengers shouldn’t use their own internet.

2. Set Your Mobile as a Hotspot for In-Car Internet

Okay, so you’re on the road with a car full of passengers, all wanting music, video, even audiobooks. Maybe some online gaming. What do you do?

Well, if you’re basically unprepared, but have your smartphone with you, it makes sense to just share the connection. Mobile internet can be shared by setting your phone up as a hotspot. How you do this depends on what type of phone you use.

If you use an iPhone, it’s easy to set up the hotspot feature. No iPhone? It’s also simple to enable wireless tethering on Android and create a hotspot.

Set a password, share it with your passengers, and everyone in the car can benefit from your mobile internet connection.

3. Use a Universal Portable Hotspot

If you’re likely to regularly require mobile internet for your family or yourself, a portable hotspot seems smart.

These devices basically replicate the hotspot function of a mobile phone. Like your home router, they connect to the internet and share access via a secure password. The difference is, like a phone, portable hotspots use mobile internet. As such, a subscription or pre-payment is required to use them, on top of the initial purchase price.

Various manufacturers produce portable hotspots. This Netgear 4G LTE Wi-Fi hotspot has 10 hours of battery and up to 10 days of standby charge.

To use one, you’ll need to speak to a mobile network to order a SIM card. Be sure to choose a plan that fits your needs and provides the right amount of data. Otherwise, you could end up running out of data and having to fall back on your smartphone!

4. Purchase an In-Car Hotspot

While you can easily use a portable hotspot in your car, some devices are specifically designed for in-car use.

One example is the Sprint Drive, a car tracking module that connects to your vehicle’s OBD-II port. This means the device can share data about your vehicle’s performance: trip history, fuel efficiency, vehicle health alerts, and diagnostics.

No idea what OBD-II is? Don’t worry, it’s unfamiliar to most car owners. In short, it’s a diagnostic port, usually tucked away at the front of your car. Auto repair garages use the OBD-II port to run car diagnostics, but you can access it too.

As a benefit for passengers, up to eight devices can be connected to the device’s Wi-Fi hotspot. Sprint Drive supports 5G, 4G, and 4G LTE. It will set you back around $120.00, with a 2GB plan for $10.00 a month or unlimited at $25.00/month.
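To see how those plans compare over a year, here’s a quick back-of-the-envelope sketch using the figures quoted above (check current pricing before you buy):

```python
# First-year cost of the Sprint Drive at the prices quoted above:
# $120 for the device, then either $10/month (2GB) or $25/month (unlimited).
DEVICE_PRICE = 120.00

def first_year_cost(monthly_fee, device_price=DEVICE_PRICE):
    """Total spend for the device plus 12 months of service."""
    return device_price + 12 * monthly_fee

capped = first_year_cost(10.00)     # 2GB plan
unlimited = first_year_cost(25.00)  # unlimited plan

print(f"2GB plan, first year:       ${capped:.2f}")     # $240.00
print(f"Unlimited plan, first year: ${unlimited:.2f}")  # $420.00
```

In other words, the unlimited plan costs $180 more in the first year, so the 2GB plan makes sense if you only need occasional navigation and music.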

Want to try a hotspot intended for a car? This Huawei device provides 150Mbps download speeds.

5. Use Old Hardware as an In-Car Wi-Fi Hotspot


While portable hotspots and dedicated in-car 4G routers are expensive, you have cheaper options available.

For example, if you have an old phone, you could use it instead of your main device. Just leave it connected to your car’s charge port, perhaps secured with tape or Velcro, and keep it out of sight. Only share its existence with your passengers. Plus, you’ll have a spare phone to use in an emergency.

Alternatively, you could rely on a mobile dongle. While such devices are rare these days, they can be found on eBay, or at the back of a drawer.

Importantly, mobile dongles only require a USB power source. So, you could take a few minutes to set one up manually with your laptop, then connect it to your car’s USB port. Whenever your car is running, the USB port will be powered, and the hotspot active.

It’s not a perfect solution, but it works.

A Word About Safe Driving

Before we finish, it’s worth taking a moment to consider the importance of adhering to road safety and the law.

In short, driving a vehicle while interacting with a mobile device is almost certainly an offense where you live. It’s dangerous, forcing a lapse in concentration that can cause an accident. Your vehicle’s occupants, pedestrians, and other road users are all at risk.

So, if you need to set up a digital device on the road, either pull over first or rely on a passenger.

Safety goes further, however. Getting internet in your car means staying secure, as well as safe. So, ensure a strong password is set for your wireless hotspot, and if possible hide the SSID (network name). This will keep the network hidden—just tell your passengers what to look for when they’re trying to connect.
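If you struggle to invent a strong hotspot password, a short script can generate one for you (a minimal sketch using Python’s standard secrets module; a password manager will do the same job):

```python
import secrets
import string

def hotspot_password(length=16):
    """Generate a random, WPA2-friendly password from letters and digits."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Prints a fresh random password each run, e.g. 'q7Rk2VZp9tLxW3aB'
print(hotspot_password())
```

Sixteen random characters is far harder to guess than a pet’s name, and your passengers only need to type it once per device.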

Get Connected While You Drive

Long journeys can be tough. Connected devices can help travelers relax by streaming music, games, and more to phones and tablets.

At this stage, you should know exactly how to keep everyone in your car happy with an internet connection. To get in-car Wi-Fi you can:

  • Suggest everyone use their own mobile internet
  • Set a mobile as a hotspot
  • Use a portable Wi-Fi hotspot
  • Use a dedicated in-car hotspot
  • Consider a DIY in-car wireless hotspot

Alternatively, you might want to simply connect your phone to your car. Here’s how to stream music from your phone to your car audio system.

Read the full article: 5 Ways to Get In-Car Wi-Fi for Internet Access on the Go



How to Block Someone on LinkedIn



Don’t assume that everyone on LinkedIn is as professional as you; the site has its fair share of self-promoters, scammers, and bullies. Thankfully, all you need to do is block them, so in this article we’ll show you how to block someone on LinkedIn.

Why You Should Block Someone on LinkedIn

There are always going to be LinkedIn connections you don’t want. Perhaps you’ve accepted LinkedIn invites from strangers only to find that they’re marketers in disguise. And while it’s possible to find your next mentor on LinkedIn, there are also plenty of bad actors you need to avoid at all costs.

The good news is that LinkedIn allows you to block people and remove connections that serve no purpose. Here are some of the behaviors that should automatically make you block the profiles behind them:

  • Someone who refuses to take “no” for an answer.
  • A member (or acquaintance) who keeps asking for money.
  • Members who use bad language, or make misogynist or racist attacks.
  • Someone you accidentally invited from your email contact list with a click on the wrong button.
  • Anyone looking for false endorsements and testimonials from you.
  • People who try to use LinkedIn as a dating site.
  • An individual who exhibits creepy behavior.

How to Block Someone on LinkedIn

To block someone on LinkedIn, you’ll have to visit their LinkedIn profile. Normally, a fellow LinkedIn member can see that you have visited their profile, but when you block someone, your entry will disappear from their Who’s Viewed Your Profile list.

So, don’t hesitate to take this step as the blocked member won’t get any notification of this action.

1. Search for the person from the LinkedIn search bar or go to the list of your connections (My Network > Connections).

To block someone on LinkedIn, go to the list of your connections

2. Go to the profile of the person you want to block and follow the steps below:

3. Click the More… button below the member’s profile picture. Select the Report/Block option from the list.

LinkedIn option to block someone

4. A box titled What do you want to do? appears and offers three follow-up actions. Click on Block [Member Name].

LinkedIn asks you what do you want to do

5. A follow-up box asks, “Are you sure you want to block [Name]?”. Read the fine print, and if you are OK with the result, click on the blue Block button. If not, click Go Back.

That’s it. There’s no other notification, and the member you have blocked won’t receive an alert either. LinkedIn says that you can block up to 1,000 members.

How to Unblock Someone on LinkedIn

If you want to unblock someone on LinkedIn, head to your LinkedIn Blocked List. This can also help you confirm you’ve successfully blocked someone. The Blocked list tells you how many contacts you’ve blocked and how long ago you blocked them.

  1. Click the Me icon at the top of your LinkedIn homepage.
  2. From the dropdown menu, select Settings & Privacy.
  3. The Privacy tab opens in your personal settings page. Click Blocking and hiding on the left sidebar.
  4. Click Change next to Blocking on the list.
  5. From your blocked list, find the person’s name and click Unblock. Enter your LinkedIn log-in password in the box that pops up and click the blue Unblock member button.

Unblock a member from the LinkedIn Blocked List

LinkedIn says that after unblocking someone you’ll have to wait 48 hours before you can block them again.

How to Block Someone on the LinkedIn Mobile Apps

If you’re using the LinkedIn mobile apps, the steps to block people are similar. Here’s how to block someone on LinkedIn using the iOS app…

  1. Search for and go to the profile of the person you’d like to block.
  2. Tap the More… button below your contact’s profile photo.
  3. Tap the Report or block option on the screen.
  4. Tap Block [Name] in the dialog that says What do you want to do?
  5. Confirm it again in the next pop-up window with a tap on Block.

What Happens When You Block Someone on LinkedIn?

There are some important caveats worth noting when deciding whether to block someone on LinkedIn:

  • There won’t be a record of visits under Who’s Viewed Your Profile.
  • You won’t be able to access the blocked member’s profile on LinkedIn either.
  • Endorsements and recommendations from that member will be removed from your profile.
  • LinkedIn won’t suggest them under People You May Know or People Also Viewed.
  • LinkedIn Event notifications from either member will cease.
  • All shared content between you and the member will be erased.
  • Messaging between both members will be disabled.

The above limitations won’t matter if the connection is undesirable in the first place. But for any other relationship, you will have to weigh the pros and cons before you block someone.

If both of you belong to the same LinkedIn Group, the status of each individual member in the group depends on the Group Manager. That’s because they have the power to remove or block any member. You can choose to report a group member if they go against the group’s intent. Until then, both of you will be part of any group conversations.

Can Someone Still View Your Profile If You Have Blocked Them?

Yes, anyone can access your public profile. And it’s a no-brainer that someone who really wants to view your LinkedIn profile can just ask someone else to show it to them.

So, as well as following our LinkedIn profile tips to guarantee success, be sure to configure your privacy settings so that everyone can only see what you want them to see.

More LinkedIn Features You Should Be Using

There are many reasons to block someone on LinkedIn, and only you will know whether it’s the right thing to do. And while blocking someone isn’t the perfect solution to harassment and bullying, it’s a good option we’re glad exists.

The option to block someone on LinkedIn should help keep LinkedIn the professional networking platform it was designed to be. And there are plenty of other LinkedIn features you should be using.


How to Sleep Better With a Smart Home



Insomnia is a surprisingly widespread epidemic: around 60 million Americans suffer from sleeplessness, and its effects are wide-reaching. Insomnia is linked to depression, overdoses, and accidents on the road.

While there’s no cure-all for the condition, smart devices can help you get more sleep each night. You can sleep better by making a few simple changes around your house.

White Noise Generators

Hearing is different to listening. It’s why you might be used to traffic noise, but a dog barking keeps you on high alert. You can “tune out” of one but be alarmed by the other. Seth Horowitz writes, in The Universal Sense: How Hearing Shapes the Mind:

“There is no such thing as silence. We are constantly immersed in and affected by sound and vibration… And the reasons the constant thrumming doesn’t drive us all insane are the same reasons we get distracted by radio jingles and can’t read when the TV is on: we are good at choosing what we hear.”

This is where white noise comes in. It’s a constant frequency that acts as background noise. Your brain gets used to this and blends other sounds into the tone you’re already used to.

SNOOZ White Noise Sound Machine

SNOOZ White Noise Sound Machine (Buy Now On Amazon, $79.99)

The SNOOZ White Noise Sound Machine is a fan-based unit—but don’t worry, you can still use it in winter as it doesn’t generate cold air. Its timer and variable tones allow you to customize the SNOOZ for adults, kids, and pets. Control it remotely through the accompanying app.

The app includes a timer and a “Nursery Calibration” mode, ideal for protecting babies and toddlers from excessive noise.

It looks like an Echo Dot or Google Nest Mini, so it’s light and compact. Once you’re accustomed to white noise, you can take it wherever you go.

Every Moment Counts White Noise Machine

Every Moment Counts White Noise Machine (Buy Now On Amazon, $29.86)

Every Moment Counts’ White Noise Machine produces white noise but is also a natural sound generator. Natural sounds are perfect if you’re not keen on listening to something artificial.

This unit can generate 38 sounds, including birdsong, a train ride, pink noise (a deeper resonance than white), waves, a clock, and lullabies. Some—like the heartbeat or a dog barking—are superfluous or unsettling. The dripping water function might make you need the toilet. But this nonetheless offers amazing range and personalization.

Breathing Devices

Ever hear someone say they’ll sleep well after a long walk in the countryside? Natalie Dautovich, PhD, of the National Sleep Foundation, explains:

“Exercise helps, of course, but fresh air is a big factor. A decrease in temperature can lead to tiredness, and fresh air simulates a cooler environment… [T]here are a lot of positive associations between fresh air and relaxation, and when we feel relaxed and comfortable in our environment, we’re more likely to feel sleepy.”

You could also be a victim of allergies, especially if you suffer from congestion during the night. Your breathing could be affected by dust particles, pollen, and mold, all of which may stop you relaxing in the evening.

AeraMax 300 Large Room Air Purifier

AeraMax 300 Large Room Air Purifier (Buy Now On Amazon, $248.90)

The AeraMax 300 Large Room Air Purifier uses a High-Efficiency Particulate Air (HEPA) Filter to remove allergens, dust mites, and mold spores.

It also comes with an intensive mode to filter germs in flu seasons and pollen in the spring. This makes it ideal for anyone with breathing difficulties, as certified by the Asthma and Allergy Foundation of America.

LONOVE Dehumidifier

LONOVE Dehumidifier (Buy Now On Amazon, $45.99)

The small LONOVE Dehumidifier is great for small bedrooms, bathrooms, or basements.

By keeping air moisture below 50 percent, it reduces the chance of mold, making a healthier environment for sleeping.

Yes, it’s tiny, but this makes it easily portable and quiet. Unlike some dehumidifiers, it emits less than 30dB, so it can be used in children’s bedrooms. You can even put one in your wardrobe, where it’ll gradually eliminate musty smells and dry your clothes.

Temperature Control

You already know that it’s difficult to sleep if it’s too hot or too cold. Bill Bryson, in The Body: A Guide for Occupants, writes:

“Although our body temperature varies slightly throughout the day (it is lowest in the morning, highest in the late afternoon or evening), it stays within a decidedly narrow compass of 36 to 38 degrees Celsius. To avoid catastrophe, the brain has its trusty control centre, the hypothalamus, which tells the body to cool itself by sweating or to warm itself by shivering and diverting blood flow away from the skin and into the more vulnerable organs.”

Our body temperature reacts to the environment around us. Aim for your bedroom to be 16 to 18 degrees Celsius (that’s 60 to 65 degrees Fahrenheit).
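If your thermostat only displays one scale, the conversion between the two is simple arithmetic (a quick sketch of the standard formula, rounded to one decimal place):

```python
def celsius_to_fahrenheit(c):
    """Standard conversion: F = C * 9/5 + 32, rounded to one decimal place."""
    return round(c * 9 / 5 + 32, 1)

# The recommended 16 to 18 degrees Celsius band in Fahrenheit:
print(celsius_to_fahrenheit(16))  # 60.8
print(celsius_to_fahrenheit(18))  # 64.4
```

So a thermostat set anywhere between roughly 61°F and 64°F keeps you inside the recommended sleeping range.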

Google Nest Thermostat

Google Nest Thermostat (Buy Now On Amazon)

It’s one of the best-known smart home devices. The Google Nest Thermostat is a self-learning unit which creates a heating schedule based on what temperatures you like and when you’re typically home.

The main selling point of a Nest is to save on energy bills. But it can also help you sleep by automatically regulating ambient room temperature. Of course, you can also alter settings remotely through the Nest app.

SwitchBot Hub Plus

SwitchBot Hub Plus (Buy Now On Amazon, $49.00)

The SwitchBot Hub Plus isn’t designed solely for temperature control—pair it with any smart home appliance. That means you can use the SwitchBot app to control the lights, turn the TV on and off, and regulate the air conditioner.

Plan a schedule using programs like IFTTT, then decide which devices should be automated in different seasons. It means you can cool the bedroom in the summer while you’re binge-watching Netflix in the living room.

Plus the light-up cloud design looks fantastic.

Smart Home Lighting and Light Switches

Our bodies react to light. You get tired as the gloomier evenings draw in; you wake up earlier than usual as the sun shines through your blinds. It’s part of your internal clock known as the circadian rhythm, also controlled by the hypothalamus but affected by light exposure. The Sleep Council says:

“When we see light, our bodies assume it’s time to wake up. When it’s dark, we release melatonin which relaxes the body and helps us to drift off.”

Smart lighting affects your health. By attuning to your circadian rhythm, it helps regulate your sleep/wake cycle, so you’ll feel more energized during the day.

Philips Hue Starter Kit

Philips Hue Starter Kit (Buy Now On Amazon, $112.86)

The biggest brand name in smart lighting can be intimidating. How do you even use it?

Just screw in one of the smart bulbs, download the free app, and connect the Hub to your router using an Ethernet cable or smart plug. You can use your phone to remotely turn your lights on, even if you’re not at home. You can dim them, set timers, and (depending on the bulb) change the color.

Each bulb is designed to last over 20 years.

If you or a youngster needs a night light, this is a good option as the Philips Hue Starter Kit bulbs can emit red light. A 2017 study by the University of Haifa found that “exposure to red light showed a very similar level of melatonin production” to when sleeping in total darkness.

Treatlife Smart Light Switch

Treatlife Smart Light Switch (Buy Now On Amazon, $29.63)

The Treatlife Smart Light Switch lets you control your lights from the comfort of your bed. Use the app on your smartphone or connect it to a voice assistant. You can also set a schedule so your house automatically lights up at the time you’re normally home from work.

There’s also an Away Mode, which runs your usual lighting routine while you’re on vacation so no one can tell you’re away.

The most useful addition is Group Control. Use these smart home light switches in every room then sync them. When you get into bed, Group Control lets you turn off every light in the house with one command. Just learn the distinction between a smart home light switch and bulb, so you make sure you’re opting for the right appliance.

How to Set a Sleep Timer Using Alexa

Can voice assistants help you sleep better? Many smart devices connect to voice assistants like Alexa. You don’t have to get up to adjust the Google Nest, for instance: just instruct Alexa to do it for you.

Voice assistants can also be used to relax in the evening by playing your favorite soothing music. Worried that it’ll continue playing after you’ve drifted off? It’s easy to set a Sleep Timer on Alexa. Simply say “Alexa, set a sleep timer for 60 minutes”, or however long you want it to continue.

After an hour has elapsed, it’ll switch off the music, audiobook, or podcast you’ve been listening to.

You can extend this afterwards if you’re still awake, of course. But if you want to stop the timer, say “Alexa, cancel sleep timer”.

How to Get a Better Night’s Sleep

Technology is often linked to insomnia. But gadgets aren’t always damaging:

  • White noise and natural sound generators help you ignore background noises.
  • Breathing devices purify the air in your bedroom, meaning you suffer less from allergens.
  • Temperature controllers help you regulate your body’s heat.
  • Smart home lighting aids your natural circadian rhythm.

In fact, if you’re suffering from sleeplessness, sleep-tracking apps can log your nights and help you find out why you can’t drift off.
