Abhimitra (Abhi) Meka
I am a Research Scientist in the Augmented Reality Perception group at Google, where I work with Thabo Beeler, Christoph Rhemann and many other exceptional researchers, engineers and artists. My work lies at the intersection of computer graphics, computer vision and machine learning. I am particularly interested in acquiring, understanding and modifying the visual appearance of people and objects in images and videos to enable augmented reality.
I encourage you to talk to me about inverse rendering, view synthesis and relighting for Augmented Reality applications. Feel free to reach out if these topics are familiar to you!
Research Interests
Inverse Rendering and Relighting
Novel-view Synthesis
Digital Humans
Augmented Reality Rendering
Research Projects
Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures
Bühler, Li, Wood, Helminger, Chen, Shah, Wang, Garbin, Orts-Escolano, Hilliges, Lagun, Riviere, Gotardo, Beeler, Meka, Sarkar
ACM SIGGRAPH Asia 2024 Conference Proceedings
A volumetric face prior learnt from synthetic data for high-fidelity expressive face modeling from casual in-the-wild captures
Project Page · Paper · Video · Supplementary · Extended Results · Dataset
FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces
Medin, Li, Du, Garbin, Davidson, Wornell, Beeler, Meka
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2024
A novel representation combining a single multi-layer mesh with a video texture, enabling highly efficient rendering of volumetric dynamic face sequences on graphics platforms such as game engines, without requiring ML integration
GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures
Gruber, Collins, Meka, Müller, Sarkar, Orts-Escolano, Prasso, Busch, Gross, Beeler
Computer Graphics Forum (Proceedings of Eurographics) 2024
A generative model to synthesize ultra-high-resolution (6k × 4k) multi-modal face appearance maps for novel identities, trained from very sparse data (~100 identities)
ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region
Li, Sarkar, Meka, Bühler, Müller, Gotardo, Hilliges, Beeler
Computer Graphics Forum (Proceedings of Eurographics) 2024
A novel discretized volumetric representation for animation and synthesis of the eye and periocular region using concentric surfaces around a 3DMM face mesh
One2Avatar: Generative Implicit Head Avatar for Few-shot User Adaptation
Yu, Bai, Meka, Tan, Xu, Pandey, Fanello, Park, Zhang
A novel approach to generate an animatable photo-realistic avatar from only one or a few images of the target person, using a 3D generative model learned from multi-view multi-expression data
LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces
Sarkar, Bühler, Li, Wang, Vicini, Riviere, Zhang, Orts-Escolano, Gotardo, Beeler, Meka
ACM SIGGRAPH Asia 2023 Conference Proceedings
A volumetric formulation to achieve ultra high-quality view-synthesis and relighting of human heads in sparse multi-view multi-light capture rigs
Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis
Bühler, Sarkar, Shah, Li, Wang, Helminger, Orts-Escolano, Lagun, Hilliges, Beeler, Meka
International Conference on Computer Vision (ICCV) 2023
A novel data-driven volumetric human face prior that enables high-quality synthesis of ultra high-resolution novel views of human faces from very sparse input images
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
Pan, Tewari, Leimkühler, Liu, Meka, Theobalt
ACM SIGGRAPH 2023 Conference Proceedings
An interactive point-and-drag based image manipulation technique through optimization of generative image features
MonoAvatar: Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos
Bai, Tan, Huang, Sarkar, Tang, Qiu, Meka, Du, Dou, Orts-Escolano, Pandey, Tan, Beeler, Fanello, Zhang
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
A volumetric representation anchored on a 3D morphable model to generate photorealistic 3D avatars from monocular video
VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting
Tan, Fanello, Meka, Orts-Escolano, Tang, Pandey, Taylor, Tan, Zhang
ACM SIGGRAPH 2022 Conference Proceedings
A generative model that synthesizes novel volumetric 3D human heads that can be photorealistically relit under desired environments
VariTex: Variational Neural Face Textures
Bühler, Meka, Li, Beeler, Hilliges
International Conference on Computer Vision (ICCV) 2021
A generative model that synthesizes novel 3D human faces with fine-grained explicit control over extreme poses and expressions
Real-time Global Illumination Decomposition of Videos
Meka*, Shafiei*, Zollhoefer, Richardt, Theobalt
ACM Transactions on Graphics 2021 (Presented at SIGGRAPH 2021)
An optimization based technique to decompose videos into per-frame reflectance and global illumination layers in real-time
Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering
Meka*, Pandey*, Haene, Orts-Escolano, Barnum, Davidson, Erickson, Zhang, Taylor, Bouaziz, Legendre, Ma, Overbeck, Beeler, Debevec, Izadi, Theobalt, Rhemann, Fanello
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2020
First technique to perform high-quality view synthesis and relighting of dynamic full-body human performances in a Lightstage
Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference from Color Gradient Illumination
Meka, Haene, Pandey, Zollhoefer, Fanello, Fyffe, Kowdle, Yu, Busch, Dourgarian, Denny, Bouaziz, Lincoln, Whalen, Harvey, Taylor, Izadi, Debevec, Theobalt, Valentin, Rhemann
ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2019
A neural rendering technique to capture fully relightable high-resolution dynamic facial performances in a Lightstage
LIME: Live Intrinsic Material Estimation
Meka, Maximov, Zollhöfer, Chatterjee, Seidel, Richardt, Theobalt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 - Spotlight
A machine learning technique to estimate the high-frequency material appearance of an object of arbitrary shape from a single image, as well as lighting from depth+video input
Project Page · Paper · Supplementary · Presentation · Poster · Code & Models · Dataset (34 GB) · Video
Live User-Guided Intrinsic Video For Static Scenes
Meka*, Fox*, Zollhöfer, Richardt, Theobalt
IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017
Presented at International Symposium on Mixed and Augmented Reality (ISMAR) 2017
An interactive technique guided by 3D user strokes to perform geometry reconstruction and intrinsic decomposition of static scenes
Dissertations
Doctoral Dissertation: Live Inverse Rendering
Master's Thesis: A Technique for Simultaneous Fusion and Segmentation of Hyperspectral Images