- September 4, 2023
- IDS Seminar / Guest Lecture
Venue: P307, HKU IDS Office, Graduate House
Abstract
We live in a 3D world. When we interact with the environment, objects, and humans—such as walking on a road, grasping a cup, or shaking hands—we are aware of the geometric shape of the scenes, objects, and people involved. Furthermore, we actively shape our environment through architectural design, interior decoration, and the creation of novel objects. These capabilities correspond to the tasks of 3D reconstruction and generation in the field of computer vision.
Conventional point-matching-based 3D reconstruction methods often stumble on textureless or repetitively textured surfaces, yielding point clouds too sparse to effectively serve downstream applications. Additionally, single-image 3D reconstruction is an ill-posed problem in the conventional 3D reconstruction framework. In response to these challenges, we propose integrating geometric priors about scenes and objects into 3D reconstruction, such as representing a scene as piecewise planar surfaces. We then merge these geometric priors with a signed distance function-based implicit neural representation for 3D reconstruction. Furthermore, we delve into 3D image generation conditioned on images or text, which offers a potential solution to single-image 3D reconstruction. Recognizing that an image is a projection of the 3D world, we contend that 3D shape priors play an important role in ensuring multi-view consistency and geometric accuracy in 2D image generation. As an example, we will showcase our efforts in tackling human motion imitation, novel view synthesis, and appearance transfer (virtual try-on) within a unified framework by leveraging a 3D human representation.
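As background for the signed distance function-based implicit representation mentioned above: an SDF describes a surface implicitly as the zero level set of a function that returns each point's signed distance to the surface. The sketch below uses an analytic plane SDF (a textbook example assuming NumPy, not the speaker's method); in a learned implicit representation, a neural network would take the place of this function.

```python
# A signed distance function (SDF) represents a surface as the zero level
# set f(x) = 0: positive outside, negative inside, zero on the surface.
import numpy as np

def plane_sdf(points, normal, offset):
    """Signed distance from each point to the plane n.x + d = 0 (n normalized here)."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    return np.asarray(points, dtype=float) @ normal + offset

# Query three points against the plane z = 0 (normal (0, 0, 1), offset 0):
pts = np.array([[0.0, 0.0, 1.0],    # above the plane  -> positive distance
                [0.0, 0.0, 0.0],    # on the plane     -> zero
                [0.0, 0.0, -2.0]])  # below the plane  -> negative distance
print(plane_sdf(pts, normal=(0, 0, 1), offset=0.0))  # [ 1.  0. -2.]
```

A piecewise planar scene prior, as described in the abstract, can be viewed as constraining such a function to be locally affine over each planar region.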
Speaker
For information, please contact:
Email: datascience@hku.hk