Creating 3D models of people from 2D images

Continuing humanity’s race towards potential deepfake hell, researchers have developed a way of creating 3D models of people from single 2D images using neural networks. The full title of the project is PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization, and here’s some gibberish from the researchers:

Recent advances in image-based 3D human shape estimation have been driven by the significant improvement in representation power afforded by deep neural networks. Although current approaches have demonstrated the potential in real world settings, they still fail to produce reconstructions with the level of detail often present in the input images. We argue that this limitation stems primarily from two conflicting requirements; accurate predictions require large context, but precise predictions require high resolution. Due to memory limitations in current hardware, previous approaches tend to take low resolution images as input to cover large spatial context, and produce less precise (or low resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable. A coarse level observes the whole image at lower resolution and focuses on holistic reasoning. This provides context to a fine level which estimates highly detailed geometry by observing higher-resolution images. We demonstrate that our approach significantly outperforms existing state-of-the-art techniques on single image human shape reconstruction by fully leveraging 1k-resolution input images.
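To unpack the gibberish a bit: a pixel-aligned implicit function asks, for any 3D point, "is this point inside the person?" by projecting the point into the image, grabbing the image feature at that pixel, and feeding it (plus depth) through a small network. The coarse level does this on a low-resolution feature map and hands its intermediate output to a fine level working on high-resolution features. Here's a toy NumPy sketch of that data flow only; the real PIFuHD uses trained CNN encoders, bilinear sampling, and end-to-end learning, whereas everything below (feature maps, layer sizes, the `occupancy` helper) is made up with random weights purely to illustrate the coarse-to-fine structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pixel_feature(feat_map, xy):
    """Nearest-neighbour lookup of a per-pixel feature at normalized coords xy in [0,1]^2."""
    h, w, _ = feat_map.shape
    px = min(int(xy[0] * (w - 1)), w - 1)
    py = min(int(xy[1] * (h - 1)), h - 1)
    return feat_map[py, px]

def mlp(x, layers):
    """Tiny MLP: linear layers with ReLU between them, sigmoid on the output."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return 1.0 / (1.0 + np.exp(-x))

def make_mlp(sizes):
    """Random, untrained weights -- a stand-in for a learned network."""
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

# Coarse branch: low-res feature map covering the whole image (holistic context).
coarse_feats = rng.standard_normal((32, 32, 8))
# Fine branch: high-res feature map carrying local detail.
fine_feats = rng.standard_normal((256, 256, 8))

coarse_mlp = make_mlp([8 + 1, 16, 4])  # pixel feature + depth -> intermediate code
fine_mlp = make_mlp([8 + 4, 16, 1])    # fine feature + coarse code -> occupancy

def occupancy(point):
    """Occupancy for a 3D point (x, y in [0,1] image coords, z = depth along the ray)."""
    xy, z = point[:2], point[2]
    # Coarse level: whole-image context, outputs an intermediate feature, not geometry.
    code = mlp(np.concatenate([sample_pixel_feature(coarse_feats, xy), [z]]), coarse_mlp)
    # Fine level: high-res local feature, conditioned on the coarse code.
    f_in = np.concatenate([sample_pixel_feature(fine_feats, xy), code])
    return float(mlp(f_in, fine_mlp)[0])

occ = occupancy(np.array([0.5, 0.5, 0.1]))
print(0.0 < occ < 1.0)
```

Evaluating `occupancy` over a dense 3D grid and extracting the 0.5 level set (e.g. with marching cubes) is what turns these per-point predictions into an actual mesh.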

I’m all for the progression of science and technology, but I honestly have no idea where all this research is heading. If things keep progressing the way they are, eventually researchers will be able to recreate the entire Universe using somebody’s Instagram profile picture.

Keep going for a video detailing the project and showing the 3D models it reconstructs from 2D video.
