Zoom: https://oregonstate.zoom.us/j/96491555190?pwd=azJHSXZ0TFQwTFFJdkZCWFhnT…
3D reconstruction and generation are essential for a variety of vision and graphics applications, including 3D content creation, augmented reality, virtual reality, gaming, and robotics. Our research aims to address the core challenges in these areas by reinventing neural 3D representations and architectural designs, leveraging advanced AI technologies. This talk will cover recent progress in these fields and showcase our latest work. We will begin by discussing recent advancements in neural field representations, which have enabled compact and realistic 3D modeling, facilitating both reconstruction and generation tasks. We will then examine the efficient estimation of neural fields with deep networks in a feed-forward manner, enabling fast and generalizable 3D reconstruction. The talk will also demonstrate the strong inherent connection between 3D reconstruction and generation, exploring how reconstruction techniques can serve as a foundation for powerful generative models. Our recent research introduces transformer-based 3D large reconstruction models that achieve fast and realistic 3D reconstruction and generation across a range of tasks, from multi-view and single-view reconstruction to text-to-3D generation.
Zexiang Xu is a research scientist at Adobe Research. His research interests lie at the intersection of computer vision, computer graphics, and machine learning, with a primary focus on enabling efficient and realistic 3D reconstruction and rendering. His work covers key areas in 3D vision and graphics, including 3D reconstruction, 3D generation, neural representations, view synthesis, relighting, rendering, and appearance acquisition. He earned his Ph.D. from UC San Diego in 2020, where he worked with Prof. Ravi Ramamoorthi.