InstDrive: Instance-Aware 3D Gaussian Splatting for Driving Scenes

Hongyuan Liu, Haochen Yu, Bochao Zou, Jianfei Jiang, Qiankun Liu, Jiansheng Chen, Huimin Ma
University of Science and Technology Beijing

Abstract

Reconstructing dynamic driving scenes from dashcam videos has attracted increasing attention due to its significance in autonomous driving and scene understanding. While recent advances have made impressive progress, most methods still unify all background elements into a single representation, hindering both instance-level understanding and flexible scene editing. Some approaches attempt to lift 2D segmentation into 3D space, but they often rely on pre-processed instance IDs or complex pipelines to map continuous features to discrete identities. Moreover, these methods are typically designed for indoor scenes with rich viewpoints, making them less applicable to outdoor driving scenarios. In this paper, we present InstDrive, an instance-aware 3D Gaussian Splatting framework tailored for the interactive reconstruction of dynamic driving scenes. We use masks generated by SAM as pseudo ground-truth to guide 2D feature learning via a contrastive loss and pseudo-supervised objectives. At the 3D level, we introduce regularization that implicitly encodes instance identities and enforces consistency through a voxel-based loss. A lightweight static codebook further bridges continuous features and discrete identities without requiring data pre-processing or complex optimization. Quantitative and qualitative experiments demonstrate the effectiveness of InstDrive; to the best of our knowledge, it is the first framework to achieve 3D instance segmentation in dynamic, open-world driving scenes.

Method

InstDrive framework overview showing the pipeline of our method

Framework Overview. We extend Gaussian attributes with an instance feature dimension and train the scene using multi-view images and LiDAR points. A contrastive loss and a voting-based pseudo-supervision loss guide 2D feature learning, while a voxel-based consistency loss enforces 3D coherence by aligning nearby Gaussians. Both 2D and 3D features are mapped to discrete instance IDs via a binarized static codebook.
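The 2D contrastive objective above can be illustrated with a minimal sketch: using SAM masks as pseudo ground-truth, per-pixel rendered instance features are pulled toward their own mask's mean and the means of different masks are pushed apart by a margin. This is a simplified NumPy illustration of the idea, not the paper's exact loss; the function name, margin hinge, and normalization are assumptions.

```python
import numpy as np

def contrastive_mask_loss(features, mask_ids, margin=1.0):
    """Sketch of a mask-guided contrastive loss (hypothetical form).

    features: (H, W, D) rendered per-pixel instance features.
    mask_ids: (H, W) integer SAM mask IDs used as pseudo ground-truth.
    Pull: features move toward their own mask mean.
    Push: means of different masks are kept at least `margin` apart.
    """
    ids = np.unique(mask_ids)
    means = np.stack([features[mask_ids == i].mean(axis=0) for i in ids])

    # Pull term: squared deviation of each pixel feature from its mask mean.
    pull = 0.0
    for k, i in enumerate(ids):
        pull += np.sum((features[mask_ids == i] - means[k]) ** 2)
    pull /= features.shape[0] * features.shape[1]

    # Push term: hinge penalty on pairwise distances between mask means.
    push = 0.0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = np.linalg.norm(means[a] - means[b])
            push += max(0.0, margin - d) ** 2
    return pull + push
```

With well-separated mask means the push term vanishes, and with tight within-mask features the pull term vanishes, so the loss rewards exactly the 2D instance discriminability the framework needs before features are discretized by the codebook.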



Existing reconstruction methods fail to achieve structured 3D reconstruction with instance-level editability in dynamic driving scenes. To address this, we propose InstDrive, which directly supervises training using SAM-processed video frames without requiring instance matching. We employ a shared 2D–3D color map to establish a bijection between instance IDs and colors. During real-time rendering, trained Gaussians are assigned full opacity and colored according to their instance IDs. By capturing a click event in pixel space and retrieving the corresponding color, we map it back to the instance ID through the color map and select all Gaussians associated with that ID, enabling real-time, interactive selection and manipulation of 3D Gaussian instances.
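The click-to-instance mechanism described above amounts to inverting the ID-to-color bijection at the clicked pixel and gathering all Gaussians with that ID. The sketch below is a hypothetical implementation of that lookup; the function names and the random color assignment are assumptions, not the paper's actual code.

```python
import numpy as np

def make_color_map(num_ids, seed=0):
    """Assign each instance ID a unique RGB color (bijective by construction)."""
    rng = np.random.default_rng(seed)
    colors = rng.integers(0, 256, size=(num_ids, 3))
    while len({tuple(c) for c in colors}) < num_ids:  # retry on rare collision
        colors = rng.integers(0, 256, size=(num_ids, 3))
    return {i: tuple(int(v) for v in colors[i]) for i in range(num_ids)}

def select_instance(click_xy, id_color_image, color_map, gaussian_ids):
    """Map a clicked pixel back to an instance ID and select its Gaussians.

    click_xy: (x, y) pixel coordinates of the click.
    id_color_image: (H, W, 3) render where each Gaussian uses its ID color.
    gaussian_ids: (N,) instance ID per Gaussian.
    Returns (indices of selected Gaussians, instance ID).
    """
    color_to_id = {c: i for i, c in color_map.items()}  # invert the bijection
    x, y = click_xy
    clicked_color = tuple(int(v) for v in id_color_image[y, x])
    inst_id = color_to_id[clicked_color]
    return np.nonzero(gaussian_ids == inst_id)[0], inst_id
```

Because the lookup is a dictionary inversion plus a boolean mask over Gaussian IDs, selection runs in real time even for large scenes.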

Real-time Interactive Instance Selection

Our system enables real-time interactive instance selection through simple mouse clicks. Users can click on any object in the rendered view to instantly select all Gaussians belonging to that instance, enabling intuitive manipulation of individual objects in the scene. This interactive selection mechanism provides a seamless and efficient way to work with complex 3D scenes.

Point-level Instance Segmentation

Our approach achieves instance-level reconstruction of driving scenes, producing noise-free results and demonstrating superior spatial consistency compared to baseline methods.
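The spatial consistency claimed above is encouraged by the voxel-based 3D loss: Gaussians falling in the same voxel are penalized for carrying divergent instance features. A minimal NumPy sketch of such a loss follows; the voxel size and normalization are illustrative assumptions.

```python
import numpy as np

def voxel_consistency_loss(positions, features, voxel_size=0.5):
    """Sketch of a voxel-based consistency loss (assumed formulation).

    positions: (N, 3) Gaussian centers; features: (N, D) instance features.
    Gaussians hashed to the same voxel are pushed toward a shared feature,
    which implicitly encodes a common instance identity in 3D.
    """
    keys = np.floor(positions / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    loss, count = 0.0, 0
    for v in np.unique(inverse):
        group = features[inverse == v]
        if len(group) > 1:  # singleton voxels contribute nothing
            loss += np.sum((group - group.mean(axis=0)) ** 2)
            count += len(group)
    return loss / max(count, 1)
```

Aligning nearby Gaussians this way suppresses the floating-feature noise that purely 2D supervision leaves behind, which is what yields the noise-free point-level segmentation shown here.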


Qualitative comparison on point-level segmentation (columns): Camera Data, GSGroup, OpenGS, Ours.

2D Segmentation Results

Our method relies solely on untracked 2D segmentation to achieve panoramic, instance-level scene reconstruction with multi-view consistency. The spatiotemporal coherence of our reconstructions markedly outperforms that of the baseline approaches.

Qualitative comparison on 2D segmentation (columns): Camera Data, SAM Segmentation, GSGroup, OpenGS, Ours.