Google is opening up the Pixel 2’s Google-designed machine learning SoC, the Pixel Visual Core, to third-party apps. The first apps to take advantage of the chip are Snapchat and Facebook’s pile of social media apps: Facebook, Instagram, and WhatsApp. With the February Android security update for the Pixel 2, each of these apps will be able to use Google’s HDR+ photo processing for its own pictures.
With the launch of Android 8.1 Oreo, Google enabled the Pixel Visual Core in the Pixel 2 and Pixel 2 XL and added a “Neural Networks API” to Android. The new API allows apps to tap into any machine-learning hardware acceleration chips present in the device, of which the Pixel Visual Core is one of the first examples. Google’s HDR+ photo algorithm is one of the first pieces of software written for the Pixel Visual Core, and now it’s open to more apps than just the Google camera app.
Google’s HDR+ algorithm takes a burst of photos with short exposure times, aligns them to account for any movement, and averages them together. The result is a noticeably better image, with less noise and higher dynamic range. The images are also upsampled to provide more detail than you would otherwise get from a single 12MP shot. HDR+ is so good that the Android modding community has taken to porting the Pixel-exclusive Google Camera app to other devices, where using HDR+ instantly improves the camera’s output.
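The align-and-average step is the heart of the technique. A toy simulation (a hypothetical sketch with NumPy, not Google’s actual implementation — real HDR+ estimates alignment per tile, while here the frame shifts are known in advance) shows why averaging a burst beats a single exposure: independent noise shrinks roughly with the square root of the number of frames.

```python
import numpy as np

def merge_burst(frames, shifts):
    """Undo each frame's known (dy, dx) shift, then average the stack.
    Illustrative only: real HDR+ must estimate alignment from the frames."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(64, 64))  # stand-in "true" image

# Simulate a 4-frame burst: each frame is the scene shifted by a small
# camera-shake offset, plus independent short-exposure sensor noise.
shifts = [(0, 0), (1, 2), (-2, 1), (3, -1)]
frames = [np.roll(scene, (dy, dx), axis=(0, 1)) + rng.normal(0, 0.1, scene.shape)
          for dy, dx in shifts]

merged = merge_burst(frames, shifts)

# Compare error of one aligned frame vs. the merged result.
single_err = np.abs(np.roll(frames[1], (-1, -2), axis=(0, 1)) - scene).mean()
merged_err = np.abs(merged - scene).mean()
print(merged_err < single_err)  # merging the burst reduces the noise
```

With four frames of independent noise, the merged image’s noise level drops to about half that of any single frame, which is the effect the article describes as "less noise and higher dynamic range."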
Source: Ars Technica – Gadgets