StochasticSplats:
Stochastic Rasterization for Sorting-Free 3D Gaussian Splatting
1Google DeepMind 2University of British Columbia 3Google 4Runway ML
5Simon Fraser University 6University of Toronto
*equal contribution, work done at Google





Abstract

3D Gaussian splatting (3DGS) is a popular radiance field method, with many application-specific extensions. Most variants rely on the same core algorithm: depth-sorting of Gaussian splats, then rasterizing them in primitive order. This ensures correct alpha compositing, but can cause rendering artifacts due to built-in approximations. Moreover, for a fixed representation, sorted rendering offers little control over render cost and visual fidelity. For example, and counter-intuitively, rendering a lower-resolution image is not necessarily faster. In this work, we address the above limitations by combining 3D Gaussian splatting with stochastic rasterization. Concretely, we leverage an unbiased Monte Carlo estimator of the volume rendering equation. This removes the need for sorting and allows for accurate 3D blending of overlapping Gaussians. The number of Monte Carlo samples further imbues 3DGS with a way to trade off computation time and quality. We implement our method using OpenGL shaders, enabling efficient rendering on modern GPU hardware. At a reasonable visual quality, our method renders more than four times faster than sorted rasterization.




Method

Sort-free rendering. The core idea of our method is to replace the sorted alpha blending used by 3DGS with a stochastic estimator. Stochastic transparency [Enderton et al. 2010] estimates the result of alpha blending using a Monte Carlo sum. A rasterized fragment is kept with a probability that matches its alpha value, and discarded otherwise. Centrally, each individual fragment is opaque and can be rendered using standard, unsorted rasterization with z-buffer. We apply this method to replace the sorted alpha blending of 3DGS. The following figure and pseudocode illustrate the basic stochastic transparency estimator:

[Figure: alpha blending vs. stochastic transparency (4 SPP)]
Blending vs. stochastic - We visualize a 1D example (top) and a 3D example (bottom) of the different ways to compute transparency. Two foreground Gaussians are composited on a white background.
[Figure: pseudocode for stochastic transparency]
Pseudocode for stochastic transparency - We denote \(\alpha_i\) the opacity, \(z_i\) the depth, and \(c_i\) the color of Gaussian \(i\). In practice, this algorithm is compatible with the hardware-rasterization pipeline and can be implemented in the pixel shader.
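As a concrete illustration of the estimator above, the following NumPy sketch composites one pixel's fragments both ways and checks that the stochastic estimate converges to sorted alpha blending. All names and values are illustrative, not taken from the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# One pixel covered by three overlapping fragments (opacity, depth, RGB),
# composited over a white background. Values are made up for illustration.
alphas = np.array([0.6, 0.5, 0.3])
depths = np.array([1.0, 2.0, 3.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
background = np.array([1.0, 1.0, 1.0])

def alpha_blend(alphas, depths, colors, background):
    """Reference: sorted, front-to-back alpha compositing (as in 3DGS)."""
    out, transmittance = np.zeros(3), 1.0
    for i in np.argsort(depths):
        out += transmittance * alphas[i] * colors[i]
        transmittance *= 1.0 - alphas[i]
    return out + transmittance * background

def stochastic_sample(alphas, depths, colors, background, rng):
    """One stochastic-transparency sample: keep each fragment with
    probability alpha_i; among kept fragments, the nearest one wins the
    z-test, so no global sort over fragments is needed."""
    kept = rng.random(len(alphas)) < alphas
    if not kept.any():
        return background
    return colors[np.argmin(np.where(kept, depths, np.inf))]

ref = alpha_blend(alphas, depths, colors, background)
est = np.mean([stochastic_sample(alphas, depths, colors, background, rng)
               for _ in range(100_000)], axis=0)
```

Averaging more samples per pixel lowers the variance of `est`, which is exactly the computation/quality knob mentioned in the abstract.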

Gradient estimator. Following recent work on differentiable rendering, we also formulate an unbiased gradient estimator, which can estimate reverse-mode gradients without storing a heavy automatic differentiation graph. The following figure validates the gradients computed using that method in forward-mode, i.e., by rendering a gradient image due to a global translation along x:

[Figure: alpha blending vs. ours (128 SPP) vs. ours (512 SPP)]
Stochastic gradients - We compare alpha-blending gradients to our stochastic estimator with 128 and 512 SPP. Red and blue encode positive and negative values, respectively. Our estimator accurately approximates the ground-truth alpha-blending gradients.

Pop-free rendering. Stochastic transparency further enables various versions of pop-free rendering. Similar to StopThePop [Radl et al. 2024], we approximate the depth of a Gaussian as a (view-dependent) surface. We propose to use a simple plane-approximation, which is a good fit for the hardware rasterization pipeline (unlike StopThePop's curved surface). Moreover, stochastic transparency allows full 3D intermixing of 3D Gaussians, which avoids the discontinuities inherent to other "pop-free" techniques. We showcase both the plane-approximation and full intermixing in the following figures:

[Figure: pop-free rendering by plane approximation; 3DGS (left), StopThePop vs. ours (right)]
Popping in 3DGS – (left) As sorting in 3DGS is done with respect to the \(z\) distance of the Gaussian mean from the camera, a small camera rotation can cause visible "popping" artifacts (i.e., sudden pixel color changes). (right) StopThePop [Radl et al. 2024] corrects for this behavior by associating a surface with each Gaussian, and determining the depth \(z\) per-fragment (dashed line), rather than per-Gaussian. Our solution leads to visually comparable results, but approximates this surface linearly (solid line), so that per-fragment depth \(z\) can be computed efficiently in hardware.
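To make the plane approximation concrete, the sketch below (with a made-up Gaussian in camera space) computes the exact depth of a Gaussian's maximum response along each camera ray, \(t^* = (d^\top \Sigma^{-1} \mu) / (d^\top \Sigma^{-1} d)\) for a camera at the origin, and compares it to a first-order (planar) expansion around the central pixel, which a hardware rasterizer reproduces by linear depth interpolation:

```python
import numpy as np

# Made-up 3D Gaussian in camera space: mean and a diagonal covariance.
mu = np.array([0.3, -0.2, 5.0])
cov_inv = np.linalg.inv(np.diag([1.0, 0.5, 0.25]))

def ray_depth(px, py):
    """Exact depth of the Gaussian's maximum response along the ray
    through pixel (px, py) of a pinhole camera at the origin:
    t* = (d^T S^-1 mu) / (d^T S^-1 d)."""
    d = np.array([px, py, 1.0])  # unnormalized ray direction
    return (d @ cov_inv @ mu) / (d @ cov_inv @ d)

# Planar approximation: first-order expansion of t* around the central
# pixel; hardware rasterization gets this for free by linearly
# interpolating per-vertex depth across the splat's triangles.
eps = 1e-4
t0 = ray_depth(0.0, 0.0)
gx = (ray_depth(eps, 0.0) - t0) / eps
gy = (ray_depth(0.0, eps) - t0) / eps

def plane_depth(px, py):
    return t0 + gx * px + gy * py

# Near the splat center, the plane tracks the exact per-ray depth.
errors = [abs(ray_depth(x, y) - plane_depth(x, y))
          for x in (-0.03, 0.0, 0.03) for y in (-0.03, 0.0, 0.03)]
```

Unlike a per-Gaussian constant depth, this per-fragment depth varies smoothly with the viewpoint, which is what suppresses popping.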
[Figure: alpha blending (left), pop-free (middle), our full intermixing (right)]
Fully volumetric intermixing – While alpha blending (left) suffers from popping and "pop-free" variants (middle) show discontinuities, our full volumetric intermixing (right) accurately composes overlapping Gaussians.

Pop-free rendering

This comparison showcases a rendered trajectory in the Garden scene from the Mip-NeRF 360 dataset using both StochasticSplats and standard, sorted 3DGS. Use the slider to see how our fine-tuned StochasticSplats effectively removes distracting popping artifacts (for example, on the metal surface in the middle of the table):


Results with temporal anti-aliasing (TAA)

The following Mip-NeRF 360 scenes are rendered using our OpenGL implementation and 1 sample per pixel. We average samples over multiple frames using a simple temporal anti-aliasing (TAA) implementation. We achieve smooth rendering on a low-end laptop with an NVIDIA Quadro T1000 Max-Q GPU, where we recorded the screen using the NVIDIA app.
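A minimal sketch of such a TAA accumulation for a static view, modeling the 1-SPP stochastic renderer as a noisy pixel estimate (the pixel value, noise level, and blend weight are all made-up stand-ins, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the 1-SPP stochastic renderer: the true pixel value
# plus zero-mean noise (both numbers are illustrative).
true_pixel = 0.4

def render_1spp(rng):
    return true_pixel + rng.normal(0.0, 0.2)

# Simple TAA: blend each new 1-SPP frame into a running history
# buffer with an exponential moving average.
blend = 0.1  # per-frame blend weight (an assumed value)
history = render_1spp(rng)
for _ in range(500):
    history = (1.0 - blend) * history + blend * render_1spp(rng)
```

With a moving camera, practical TAA implementations additionally reproject the history buffer before blending, which this sketch omits.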

Bonsai
Kitchen
Bicycle
Stump

Results without temporal anti-aliasing (TAA)

The following scene is rendered using 1 sample per pixel without temporal anti-aliasing, showcasing the variance of the unprocessed estimator:

Room

BibTeX

@misc{KheradmandVicini2025stochasticsplats,
  title         = {StochasticSplats: Stochastic Rasterization for Sorting-Free 3D Gaussian Splatting},
  author        = {Shakiba Kheradmand and Delio Vicini and George Kopanas and Dmitry Lagun
                    and Kwang Moo Yi and Mark Matthews and Andrea Tagliasacchi},
  year          = {2025},
  url           = {https://arxiv.org/abs/2503.24366},
  eprint        = {2503.24366},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CV}
}