Google’s Pixel 2 phones have a clever trick up their sleeve when recording video: they can use both electronic and optical image stabilization, delivering largely jitter-free clips even if you’re walking down the street. But how does it meld those two technologies, exactly? Google is happy to explain: it just posted an in-depth exploration of how this stabilization works. As you might guess, Google relies on some of its machine learning know-how to combine both anti-shake technologies where many phones can use only one or the other.
The system starts off by collecting motion info from both OIS and the phone’s gyroscope, making sure it’s in “perfect” sync with the image. But it’s what happens next that matters most: Google uses a “lookahead” filtering algorithm that pushes image frames into a deferred queue, then draws on machine learning to predict where you’re likely to move the phone next. This corrects for a wider range of movement than OIS alone, and can counteract common video quirks like wobbling, rolling shutter (the distortion effect where parts of the frame appear to lag behind) and focus hunting. The algorithmic method even introduces virtual motion to mask wild variations in sharpness when you move the phone quickly.
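To make the “lookahead” idea concrete, here is a minimal, hypothetical sketch of deferred-queue stabilization: frames are buffered so the filter can see motion samples from the near future before committing to a correction for each frame. The class names, the single-axis motion model, and the simple averaging used as a smoother are all illustrative assumptions, not Google’s actual implementation (which fits a smooth virtual-camera path and uses a learned motion predictor).

```python
import collections
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float
    pixels: object          # stand-in for the image data
    measured_angle: float   # camera rotation from gyro + OIS (1-D toy model)

class LookaheadStabilizer:
    """Toy deferred-queue ("lookahead") stabilizer, for illustration only."""

    def __init__(self, lookahead=5):
        self.lookahead = lookahead
        self.queue = collections.deque()   # deferred frames awaiting correction

    def push(self, frame):
        """Add a new frame; emit a corrected frame once enough
        future motion samples are buffered to smooth its trajectory."""
        self.queue.append(frame)
        if len(self.queue) <= self.lookahead:
            return None                    # still filling the lookahead window
        return self._correct_oldest()

    def _correct_oldest(self):
        frame = self.queue.popleft()
        # Smoothing target: average of this frame's motion and the buffered
        # future motion. A real system would fit a smooth virtual-camera path,
        # possibly guided by a learned predictor of where the phone moves next.
        future = [f.measured_angle for f in self.queue]
        target = (frame.measured_angle + sum(future)) / (1 + len(future))
        correction = target - frame.measured_angle
        # The correction would be applied by warping the frame so the virtual
        # camera follows the smoothed path instead of the measured, shaky one.
        return frame, correction

    def flush(self):
        """Drain the remaining queued frames at the end of capture."""
        out = []
        while self.queue:
            out.append(self._correct_oldest())
        return out

if __name__ == "__main__":
    stab = LookaheadStabilizer(lookahead=3)
    shaky = [Frame(i * 0.033, None, angle) for i, angle in
             enumerate([0.0, 0.4, -0.3, 0.5, -0.2, 0.3, 0.0])]
    for f in shaky:
        result = stab.push(f)
        if result:
            frame, corr = result
            print(f"t={frame.timestamp:.3f}s correction={corr:+.3f}")
    for frame, corr in stab.flush():
        print(f"t={frame.timestamp:.3f}s correction={corr:+.3f}")
```

The key design point the sketch captures is the trade-off Google describes: by delaying output a handful of frames, the stabilizer can correct each frame against where the camera is about to move, not just where it has been.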
This isn’t to say that Google’s approach is flawless. As others have noted, the Pixel 2 can crop the frame in unexpected ways and blur low-light footage more than it should. On balance, though, this shows just how much AI-related technology can help with video. It can erase typical errors that EIS or OIS might not catch by themselves, and produces footage so smooth it can look like it was captured with the help of a gimbal.
Source: Google Research Blog