A remarkable bit of new research promises the power of After Effects with automated in-camera technology.
The researchers' technique uses data from still photos to improve the quality of video footage: detail, dynamic range, exposure and lighting. Furthermore, it can perform retouching, remove unwanted objects and change surface textures by automatically propagating changes made to a single frame.
Their video demonstrates it in glorious, trippy detail:
The technique seems to work in these stages:
- Video and photos are captured of a scene, near-simultaneously.
- A 3D point-cloud model of the scene is built by analysing the motion of the video footage from multiple angles – more than just stereo.
- The high quality photo data is mapped onto this 3D model.
- The original camera motion is recreated within the 3D model, resulting in a completely algorithm-generated video.
- Glitches and artefacts due to mapping errors or incomplete data are eliminated with their “Spacetime Fusion” technique.
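The core of the middle stages can be sketched in miniature. This is a toy model, not the paper's actual pipeline: it assumes an idealised pinhole camera with no rotation, and a hand-made point cloud standing in for the reconstructed scene. All names and data are hypothetical.

```python
def project(point, cam_pos, focal=1.0):
    """Pinhole projection of a 3D point into a camera at cam_pos
    looking down the +z axis (toy model, no rotation)."""
    x, y, z = (point[i] - cam_pos[i] for i in range(3))
    return (focal * x / z, focal * y / z)

# The reconstructed scene: 3D points carrying colours sampled
# from the high-quality photographs (made-up data).
point_cloud = {
    (0.0, 0.0, 5.0): (200, 120, 80),
    (1.0, 0.5, 6.0): (90, 180, 220),
}

def render_frame(cam_pos):
    """Regenerate a video frame: project every photo-textured
    3D point into the moving video camera's image plane."""
    return {project(p, cam_pos): colour for p, colour in point_cloud.items()}

# Two positions along the original camera path: the same photo
# colours reappear at motion-consistent pixel locations.
frame_a = render_frame((0.0, 0.0, 0.0))
frame_b = render_frame((0.5, 0.0, 0.0))
```

The real system has to solve the much harder problems this sketch skips: recovering the camera path and point cloud from the footage in the first place, and handling occlusion and missing data.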
If the photos are edited prior to the start of processing – for example, if an object is painted out from one of them – these changes will be mapped into the enhanced video automatically.
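This is also why edits propagate for free: if each generated frame is rendered from a shared 3D model, repainting the model once repaints every frame. A minimal sketch of the idea, with entirely hypothetical names and data:

```python
# Shared scene model: each 3D point carries a colour sampled from
# the photographs; every generated video frame just references it.
scene = {"wall": (128, 128, 128), "sign": (255, 0, 0)}

# Which scene points are visible in each frame (toy data).
frames = [["wall", "sign"], ["wall", "sign"], ["wall"]]

def paint_out(point_id, background_id):
    """Editing one photo: replace an unwanted object's colour with
    the surrounding background. The change lives in the 3D model."""
    scene[point_id] = scene[background_id]

def render(frame):
    return [scene[pid] for pid in frame]

paint_out("sign", "wall")             # edit made once
edited = [render(f) for f in frames]  # every frame inherits it
```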
Some digital cameras can already shoot stills and video at the same time, so in principle it should be possible to generate full HDR video at tens of megapixels entirely automatically, non-destructively and in-camera.
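For the HDR side, the standard recipe (Debevec-style radiance merging) is to combine bracketed stills by dividing out exposure time and down-weighting clipped or noisy pixels. A toy single-pixel version, with a made-up triangle weighting:

```python
def merge_hdr(exposures):
    """Toy HDR merge: each entry is (pixel_value_0_to_255, exposure_time).
    Divide out exposure to estimate scene radiance, weighting mid-range
    values more heavily so clipped and noisy pixels count less."""
    num = den = 0.0
    for value, t in exposures:
        w = min(value, 255 - value) + 1  # triangle weight, never zero
        num += w * (value / t)
        den += w
    return num / den

# Same scene point captured at three bracketed exposure times;
# all three readings imply the same underlying radiance.
radiance = merge_hdr([(40, 1 / 60), (80, 1 / 30), (160, 1 / 15)])
```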
It’s not clear how many photos it requires to get a good result, or what kind of real-world situations would cause problems for the process.
The researchers also make it very clear that the technique only works when the subject is static – the camera can move, but the subject it is filming cannot.
But regardless, I’m pretty excited to see what happens next and I hope that some camera manufacturers and software developers are taking notice. It seems like research is fast, but commerce is slow.