Abhimitra (Abhi) Meka

Email · LinkedIn

I am a Research Scientist in the Augmented Reality Perception group at Google, where I work with Thabo Beeler, Christoph Rhemann and many other exceptional researchers, engineers and artists. My work lies at the intersection of computer graphics, computer vision and machine learning. I am particularly interested in acquiring, understanding and modifying the visual appearance of people and objects in images and videos to enable augmented reality.

I encourage you to talk to me about inverse rendering, view synthesis and relighting for augmented reality applications, or simply to reach out if any of this sounds familiar!

Research Projects

LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces

Sarkar, Bühler, Li, Wang, Vicini, Riviere, Zhang, Orts-Escolano, Gotardo, Beeler, Meka

ACM SIGGRAPH Asia 2023 Conference Proceedings

A volumetric formulation for ultra high-quality view synthesis and relighting of human heads captured in sparse multi-view, multi-light rigs (see the sketch below)

Project Page Paper
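
To give a flavor of the core idea, here is a minimal NumPy sketch of intrinsic radiance decomposition inside a standard volume renderer: per-sample radiance is factored into an albedo-like and a shading-like term before alpha compositing, so the same compositing weights also produce separate albedo and shading layers that can be used for relighting. The function and variable names are illustrative stand-ins, not the paper's actual networks or parameterization.

    import numpy as np

    def composite_intrinsic(densities, albedo, shading, deltas):
        """Alpha-composite one ray whose per-sample radiance is factored as
        albedo * shading (an intrinsic-style decomposition).

        densities: (S,)   volume density at each of S samples along the ray
        albedo:    (S, 3) view-independent, reflectance-like color per sample
        shading:   (S, 3) illumination-dependent term per sample
        deltas:    (S,)   distance between consecutive samples
        """
        alpha = 1.0 - np.exp(-densities * deltas)                      # per-sample opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]  # transmittance
        weights = alpha * trans                                        # compositing weights
        radiance = albedo * shading                                    # factored emission
        rgb = (weights[:, None] * radiance).sum(axis=0)
        # The same weights composite albedo and shading into separate 2D layers,
        # which is what makes the decomposition usable for relighting.
        albedo_layer = (weights[:, None] * albedo).sum(axis=0)
        shading_layer = (weights[:, None] * shading).sum(axis=0)
        return rgb, albedo_layer, shading_layer

    # Toy usage with random samples along a single ray.
    S = 64
    rgb, A, Sh = composite_intrinsic(
        densities=np.random.rand(S) * 5.0,
        albedo=np.random.rand(S, 3),
        shading=np.random.rand(S, 3),
        deltas=np.full(S, 1.0 / S),
    )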

Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis

Bühler, Sarkar, Shah, Li, Wang, Helminger, Orts-Escolano, Lagun, Hilliges, Beeler, Meka

International Conference on Computer Vision (ICCV) 2023

A data-driven volumetric prior that enables high-quality synthesis of ultra high-resolution novel views of human faces from very sparse input images

Project Page Paper Extended Webpage (Download)

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

Pan, Tewari, Leimkühler, Liu, Meka, Theobalt

ACM SIGGRAPH 2023 Conference Proceedings

An interactive point-and-drag image manipulation technique that optimizes generative image features (see the sketch below)

Project Page Paper Code
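
A minimal NumPy sketch of the motion-supervision idea behind point-based dragging: features sampled around each handle point are pulled one unit step toward the corresponding target point, and in the full method this loss is minimized with respect to the generator's latent code by gradient descent, with the reference features treated as constants. The sampling radius, feature map and helper names here are illustrative assumptions, not the released implementation.

    import numpy as np

    def bilinear_sample(feat, y, x):
        """Bilinearly sample an (H, W, C) feature map at a real-valued (y, x)."""
        H, W, _ = feat.shape
        y, x = min(max(y, 0.0), H - 1.0), min(max(x, 0.0), W - 1.0)
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
        wy, wx = y - y0, x - x0
        return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
                + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

    def motion_supervision_loss(feat, handles, targets, radius=3):
        """Pull generator features around each handle point one unit step toward
        its target point. In the full method this scalar is minimized with respect
        to the latent code by gradient descent (the reference features are treated
        as constants); only the loss itself is sketched here."""
        loss = 0.0
        for (hy, hx), (ty, tx) in zip(handles, targets):
            d = np.array([ty - hy, tx - hx], dtype=float)
            d /= np.linalg.norm(d) + 1e-8                   # unit step toward the target
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dy * dy + dx * dx > radius * radius:
                        continue                            # stay inside a disc of points
                    qy, qx = hy + dy, hx + dx
                    ref = bilinear_sample(feat, qy, qx)               # current feature
                    moved = bilinear_sample(feat, qy + d[0], qx + d[1])
                    loss += np.abs(moved - ref).sum()                 # L1 feature difference
        return loss

    # Toy usage: one handle at (32, 32) dragged toward (40, 48) on a random feature map.
    feat = np.random.rand(64, 64, 128)
    print(motion_supervision_loss(feat, handles=[(32.0, 32.0)], targets=[(40.0, 48.0)]))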

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting

Tan, Fanello, Meka, Orts-Escolano, Tang, Pandey, Taylor, Tan, Zhang

ACM SIGGRAPH 2022 Conference Proceedings

A generative model that synthesizes novel volumetric 3D human heads that can be photorealistically relit under desired environments

Project Page Paper Supplementary Code

VariTex: Variational Neural Face Textures

Bühler, Meka, Li, Beeler, Hilliges

International Conference on Computer Vision (ICCV) 2021

A generative model that synthesizes novel 3D human faces with fine-grained explicit control over extreme poses and expressions

Project Page Paper Code Presentation Demo Video Blog

Real-time Global Illumination Decomposition of Videos

Meka*, Shafiei*, Zollhoefer, Richardt, Theobalt

ACM Transactions on Graphics 2021 (Presented at SIGGRAPH 2021)

An optimization-based technique to decompose videos into per-frame reflectance and global illumination layers in real time

Project Page Paper Supplementary Video

Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

Meka*, Pandey*, Haene, Orts-Escolano, Barnum, Davidson, Erickson, Zhang, Taylor, Bouaziz, Legendre, Ma, Overbeck, Beeler, Debevec, Izadi, Theobalt, Rhemann, Fanello

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2020 

First technique to perform high-quality view synthesis and relighting of dynamic full-body human performances in a Lightstage

Project Page Paper Video

Self-supervised Outdoor Scene Relighting

Yu, Meka, Elgharib, Seidel, Theobalt, Smith

European Conference on Computer Vision (ECCV) 2020

A neural rendering technique to relight outdoor scenes under desired lighting from a single image

Project Page Paper Dataset Code & Models Video

Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference From Color Gradient Illumination

Meka, Haene, Pandey, Zollhoefer, Fanello, Fyffe, Kowdle, Yu, Busch, Dourgarian, Denny, Bouaziz, Lincoln, Whalen, Harvey, Taylor, Izadi, Debevec, Theobalt, Valentin, Rhemann

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2019

A neural rendering technique to capture fully relightable high-resolution dynamic facial performances in a Lightstage (see the sketch below)

Project Page Paper Presentation Video
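
A minimal NumPy sketch of why an inferred reflectance field is useful: given per-pixel OLAT images (the subject lit by one light at a time) and per-light RGB weights obtained by sampling a target environment map at the corresponding light directions, relighting reduces to a weighted sum because light transport is linear. The learned part of the paper, inferring the OLAT images from two color-gradient frames, is not shown, and all sizes below are toy values.

    import numpy as np

    def relight_from_olat(olat_images, light_colors):
        """Relight a subject from its reflectance field.

        olat_images:  (L, H, W, 3) the subject lit by each of L lights alone (OLAT)
        light_colors: (L, 3)       RGB weight per light, e.g. a target HDRI environment
                                   map sampled at the corresponding light directions
        Returns the (H, W, 3) image of the subject under that environment.
        """
        # Light transport is linear, so relighting is a weighted sum of OLAT images.
        return np.einsum('lhwc,lc->hwc', olat_images, light_colors)

    # Toy usage with a small random reflectance field and random environment weights.
    olat = np.random.rand(64, 8, 8, 3)   # 64 lights, 8x8 image (toy sizes)
    env = np.random.rand(64, 3)
    relit = relight_from_olat(olat, env)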

LIME: Live Intrinsic Material Estimation

Meka, Maximov, Zollhöfer, Chatterjee, Seidel, Richardt, Theobalt

IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 - Spotlight

A machine learning technique to estimate the high-frequency material of an object of arbitrary shape from a single image, and scene lighting from depth+video input (see the sketch below)

Project Page Paper Supplementary Presentation Poster Code & Models Dataset (34 GB) Video
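
A minimal NumPy sketch of the kind of compact material model (diffuse color, specular intensity, glossiness) that single-image material estimation methods predict, rendered here with a Blinn-Phong shading step. The specific BRDF form and parameter names are illustrative assumptions rather than the paper's exact model.

    import numpy as np

    def blinn_phong_shade(normal, view_dir, light_dir, light_rgb,
                          diffuse_rgb, specular, shininess):
        """Shade one surface point under a single light with a compact
        (diffuse color, specular intensity, glossiness) material."""
        n = normal / np.linalg.norm(normal)
        v = view_dir / np.linalg.norm(view_dir)
        l = light_dir / np.linalg.norm(light_dir)
        h = (v + l) / np.linalg.norm(v + l)        # half vector
        ndotl = max(float(n @ l), 0.0)
        ndoth = max(float(n @ h), 0.0)
        diffuse = diffuse_rgb * ndotl
        spec = specular * (ndoth ** shininess) * (ndotl > 0.0)
        return light_rgb * (diffuse + spec)

    # Toy usage: an upward-tilted surface lit from the front right.
    rgb = blinn_phong_shade(
        normal=np.array([0.0, 0.3, 1.0]), view_dir=np.array([0.0, 0.0, 1.0]),
        light_dir=np.array([0.2, 0.5, 1.0]), light_rgb=np.ones(3),
        diffuse_rgb=np.array([0.6, 0.3, 0.2]), specular=0.4, shininess=32.0,
    )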

Live User-Guided Intrinsic Video For Static Scenes

Meka*, Fox*, Zollhöfer, Richardt, Theobalt

IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017

Presented at International Symposium on Mixed and Augmented Reality (ISMAR) 2017

An interactive technique guided by 3D user strokes to perform geometry reconstruction and intrinsic decomposition of static scenes

Project Page Paper Presentation Poster Dataset Video

Live Intrinsic Video

Meka, Zollhöfer, Richardt, Theobalt

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2016

The first technique to perform intrinsic decomposition of live video streams using fast non-linear GPU optimization (see the sketch below)

Project Page Paper Presentation Dataset Video
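
A minimal NumPy sketch of the image-formation model behind intrinsic decomposition, I = R * S (reflectance times shading), paired with a crude Retinex-style baseline in which shading is approximated by blurred luminance. The paper itself solves a fast non-linear GPU optimization with much stronger priors; this baseline is only meant to make the decomposition concrete.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def naive_intrinsic_decomposition(image, sigma=8.0, eps=1e-4):
        """Split an (H, W, 3) image into reflectance and shading under I = R * S.

        This is only a crude Retinex-style baseline: shading is approximated by a
        heavily blurred luminance image and reflectance is recovered by division.
        Real intrinsic video methods instead solve a non-linear optimization with
        chromaticity, sparsity and temporal priors.
        """
        luminance = image.mean(axis=2)                      # (H, W)
        shading = gaussian_filter(luminance, sigma) + eps   # smooth illumination guess
        reflectance = image / shading[:, :, None]           # I = R * S  =>  R = I / S
        return reflectance, shading

    # Toy usage on a random "image" in [0, 1].
    img = np.random.rand(120, 160, 3)
    R, S = naive_intrinsic_decomposition(img)
    reconstruction = R * S[:, :, None]                      # equals img by construction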