Yunfan Zeng 曾云帆

Second-year Doctoral Student
Yunfan Zeng - portrait

About Me

I am a second-year doctoral student in the Department of Computer Science and Engineering (CSE) at The Hong Kong University of Science and Technology (HKUST), advised by Prof. Pedro V. Sander. My research interests include differentiable rendering, 3D reconstruction, and physically based rendering.

Email | GitHub

Page last updated: 2026/03/04

Education

  • Aug. 2024 - present: Doctoral student in the Department of Computer Science and Engineering (CSE), The Hong Kong University of Science and Technology (HKUST) | Hong Kong SAR, China
    Advisor: Prof. Pedro V. Sander

  • Sep. 2020 - Jul. 2024: Bachelor of Engineering in Computer Science and Technology, Tsinghua University | Beijing, China

Publications

  1. GSWT: Gaussian Splatting Wang Tiles
    Yunfan Zeng, Li Ma, Pedro V. Sander
    SIGGRAPH Asia 2025 (Conference Paper)
    Project | GitHub | Paper

  2. A Survey on Physics-based Differentiable Rendering
    Yunfan Zeng, Guangyan Cai, Shuang Zhao
    arXiv, 2025
    Paper

  3. Fast Learning Radiance Fields by Shooting Much Fewer Rays
    Wenyuan Zhang, Ruofan Xing, Yunfan Zeng, Yu-Shen Liu, Kanle Shi, Zhizhong Han
    IEEE Transactions on Image Processing, 2023
    GitHub | Paper

Intern Experience

  • Meituan | Autonomous Driving Department, Research Intern (Jun 2023 - Aug 2023)
    Advisor: Xiaofei Wang

    Reproduced the CVPR 2023 Best Paper, Planning-oriented Autonomous Driving (UniAD), and improved its MapFormer module to generate higher-quality road maps.

Research Experience

  • Gaussian Splatting Wang Tiles (Aug 2024 - present)
    Advisor: Prof. Pedro V. Sander, HKUST

    Combined Gaussian Splatting with Wang Tiles to achieve a high-performance, real-time, expandable 3D scene representation and rendering method. Leveraged the seamless tiling and non-periodic properties of Wang Tiles to split a given 3D Gaussian scene into reusable local tiles; new tiles can be added at runtime for unbounded scene expansion during rendering. First-author paper published at SIGGRAPH Asia 2025 [1].

  • Mesh-based Differentiable Rendering (Jun 2023 - Apr 2025)
    Advisor: Shuang Zhao, University of California, Irvine

    Proposed a unified benchmark framework for mesh-based differentiable renderers (including both ray-tracing and rasterization engines) to evaluate and compare their capability to solve inverse rendering problems (reconstructing 3D meshes from 2D images). Developed an inverse rendering toolkit as a universal platform for different differentiable renderers. First-author survey paper published on arXiv [2].

  • LUISA: A High-Performance Rendering Framework (Jun 2022 - Sep 2022, Sep 2023 - Dec 2023)
    Advisor: Kun Xu, Tsinghua University

    Studied and applied Luisa, a high-performance rendering framework, learning its core architecture: a kernel-programming DSL, a unified runtime, and multiple optimized backends. Implemented a simple ray-tracing renderer, "Nori", and learned GPU programming with CUDA for parallel rendering. Developed a basic path-tracing renderer on Luisa, achieving a 50×–100× speedup over a CPU-based implementation. Implemented a basic differentiable ray-tracing renderer on Luisa via edge sampling.

  • Fast 3D Reconstruction with NeRF (Nov 2021 - Jul 2022)
    Advisor: Yu-Shen Liu, Tsinghua University

    Contributed to a method that accelerates the Neural Radiance Field (NeRF) algorithm for 3D reconstruction from sparse views. The key idea is to reduce redundancy by shooting far fewer rays in the multi-view volume rendering procedure, which underlies almost all radiance-field-based methods. Tested and tuned the algorithm across multiple datasets, improving efficiency by 20%–50%. Third-author paper published in IEEE TIP [3].

Personal Projects

  • WebGPU Ray Tracing Renderer, Course Project (Sep 2025 - Dec 2025)
    GitHub

    Developed a GPU ray-tracing renderer deployable on the web using Rust and WebGPU. Based on the "Ray Tracing in One Weekend" series, it supports various primitives, materials, and textures, and uses a Bounding Volume Hierarchy (BVH) to accelerate ray intersection. Implemented both mega-kernel and wavefront path tracing, as well as Multiple Importance Sampling (MIS).

  • CPU Ray Tracing Renderer, Course Project (Mar 2022 - Jul 2022)
    GitHub

    Developed a multi-threaded CPU ray-tracing renderer in C++ featuring Stochastic Progressive Photon Mapping (SPPM). Supports mesh rendering, texture mapping, curves and surfaces of revolution, and normal interpolation.