Adaptive Progressive Photon Mapping
Anton S. Kaplanyan
Karlsruhe Institute of Technology, Germany
[Title slide: side-by-side comparison of original PPM vs. adaptive PPM]
Progressive Photon Mapping in Essence

Pixel estimate using eye and light subpaths:

I ≈ Σ_{N,i} W_p k(x_N − y_i) γ_i

Eye subpath carries the importance W_p; the photon carries the radiance γ_i

Generate a full path by joining subpaths:
kernel-regularized connection of the subpaths
[Diagram: eye subpath with importance W_p joined to light subpath with photon contributions γ_i, γ_{i+1}]
Reformulation of Photon Mapping

PPM = recursive (online) estimator [Yamato71]:

I_N = ((N − 1) I_{N−1} + W_p Σ_i k(x − x_i) γ_i) / N

Rearrange the sum to see that

I_N = ((N − 1) I_{N−1} + Σ_i k(x − x_i) [W_p γ_i]) / N

with k(x − x_i) the kernel estimation and [W_p γ_i] the path contribution
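The recursive update above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the kernel choice (a normalized 2D Epanechnikov kernel) and the photon arrays are assumptions for the sake of a runnable example.

```python
import numpy as np

def epanechnikov(d2, r):
    """Normalized 2D Epanechnikov kernel, evaluated at squared distance d2 with radius r."""
    return np.where(d2 < r * r, 2.0 / (np.pi * r * r) * (1.0 - d2 / (r * r)), 0.0)

def update_pixel_estimate(I_prev, N, x, photons, contribs, W_p, r):
    """One PPM iteration: blend the previous estimate I_{N-1} with the
    kernel estimate from the N-th photon map:
        I_N = ((N - 1) * I_{N-1} + sum_i k(x - x_i) * W_p * gamma_i) / N
    """
    d2 = np.sum((photons - x) ** 2, axis=1)            # squared distances to the photons
    kde = np.sum(epanechnikov(d2, r) * W_p * contribs)  # kernel-weighted path contributions
    return ((N - 1) * I_prev + kde) / N
```

Because the update is an online average, the full photon history never needs to be stored; each new photon map only touches the running estimate.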
Radius Shrinkage

Shrink the radius (bandwidth) for the N-th photon map:

r_N² = r_0² N^(α−1),  α ∈ (0; 1)

User-defined parameters r_0 and α

Problem:
The optimal values of r_0 and α are unknown
Usually globally constant / k-NN defined
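The shrinkage schedule is a one-liner; a small helper makes the decay explicit (function name and parameter defaults are illustrative, not from the paper):

```python
def shrunk_radius(r0, alpha, N):
    """Kernel bandwidth for the N-th photon map, from r_N^2 = r_0^2 * N^(alpha - 1).

    With alpha in (0, 1) the radius decreases monotonically in N,
    so the estimator's bias shrinks as more photon maps arrive.
    """
    return r0 * N ** ((alpha - 1.0) / 2.0)
```

For example, with r_0 = 0.1 and α = 2/3, the squared radius after 1000 photon maps is r_0² · 1000^(−1/3) = 0.001.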
User Parameters Example

[Box scene (reference), with difference images for a larger r_0 and a larger α]
Radius Shrinkage Parameters

[Diagram: family of estimators over the parameters r_0 and α]
Optimal Convergence of Progressive Photon Mapping

Optimal Asymptotic Convergence Rate
[Diagram: family of estimators over the parameters r_0 and α]
Optimal Convergence Rate

Variance and bias depend on α [KZ11]:

Var_Meas(α) ~ N^(−α)
Bias²_Kernel(α) ~ N^(2(α−1))

Balancing variance against squared bias gives α_opt = 2/3;
the optimal asymptotic rate is MSE ∝ N^(−2/3)

Unbiased Monte Carlo converges faster: MSE ∝ N^(−1)
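The optimal α can be read off by equating the exponents of the two error terms above:

```latex
\mathrm{MSE}(\alpha) \;\sim\; \underbrace{N^{-\alpha}}_{\text{variance}}
\;+\; \underbrace{N^{2(\alpha-1)}}_{\text{squared bias}}
\quad\Longrightarrow\quad
-\alpha = 2(\alpha - 1)
\;\Longrightarrow\;
\alpha_{\mathrm{opt}} = \tfrac{2}{3},
\qquad
\mathrm{MSE} \;\propto\; N^{-2/3}.
```

A smaller α would leave the squared bias dominant, a larger α the variance; the rate is best exactly where the two decay equally fast.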
Convergence Rate of Kernel Estimation

Convergence rate for d dimensions:

MSE ∝ N^(−4/(d+4))

Suffers from the curse of dimensionality:
adding a dimension reduces the rate!

Shutter-time kernel estimation: not recommended
Wavelength kernel estimation: not recommended
Volumetric photon mapping: MSE ∝ N^(−4/7)
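A tiny helper makes the dimension dependence concrete (the function name is illustrative); note that d = 2 recovers the surface rate N^(−2/3) from the previous slide, and d = 3 the volumetric rate N^(−4/7):

```python
def mse_exponent(d):
    """Asymptotic MSE exponent of kernel estimation in d dimensions: MSE ~ N^(-4/(d+4))."""
    return -4.0 / (d + 4.0)
```

Each extra kernel dimension (time, wavelength, ...) pushes the exponent toward 0, i.e. toward ever slower convergence.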
Adaptive Bandwidth Selection

[Recap diagram: family of estimators over the parameters r_0 and α]
Adaptive Bandwidth Selection

α_opt might not yield the minimal MSE
Minimize the MSE with respect to r_0
Achieve a variance ↔ bias tradeoff
Select the optimal r using past samples
Estimation Error

Mean Squared Error [Hachisuka et al. 2010]:

MSE = Var_Est + Bias²_Kernel

The variance is two-fold, path measurement contribution and kernel estimation:

MSE = Var_Meas + Var_Kernel + Bias²_Kernel

The measurement variance is much higher, Var_Meas ≫ Var_Kernel, so

MSE ≈ Var_Meas + Bias²_Kernel

i.e., the MSE consists of noise (path variance) and bias.
Adaptive Bandwidth Selection

Both variance and bias depend on r:

Var_Meas(r) ~ r^(−2)
Bias_Kernel(r) ~ ΔI r²

where ΔI = Δ(W_p γ_i) is a pixel Laplacian
The Laplacian ΔI is unknown
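Under the model above, MSE(r) ≈ c_v r^(−2) + c_b (ΔI)² r⁴ has a closed-form minimizer; a sketch, where the constants c_v and c_b stand in for the variance and bias factors that the method estimates from past samples:

```python
def optimal_bandwidth(c_var, c_bias, laplacian):
    """Minimizer of MSE(r) = c_var / r^2 + c_bias * laplacian^2 * r^4.

    Setting d(MSE)/dr = -2*c_var/r^3 + 4*c_bias*laplacian^2*r^3 = 0
    gives r^6 = c_var / (2 * c_bias * laplacian^2).
    """
    return (c_var / (2.0 * c_bias * laplacian ** 2)) ** (1.0 / 6.0)
```

A larger Laplacian (sharper illumination detail) shrinks the optimal radius, and a larger path variance widens it, which is exactly the variance ↔ bias tradeoff stated above.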
Estimating the Pixel Laplacian

ΔI consists of the Laplacians at all shading points:
weighted per-vertex Laplacians Δγ_i = ΔL
Estimating the Per-Vertex Laplacian

Estimate the per-vertex Laplacian at a point:
recursive finite differences [Ngen11]
Yet another recursive estimator
Another shrinking bandwidth h
Robust estimation on discontinuities

Δ_u L = (L(x + uh) + L(x − uh) − 2 L(x)) / h²

[Diagram: sample points x − uh, x, x + uh along direction u]
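The single central-difference step above is easy to sketch. This shows only one evaluation along a 1D direction; the paper's estimator is recursive, shrinks h over time, and handles discontinuities robustly, none of which is reproduced here:

```python
def second_difference(L, x, u, h):
    """Central second difference of L along direction u with bandwidth h:
    (L(x + u*h) + L(x - u*h) - 2*L(x)) / h^2.
    Exact for quadratics; O(h^2) accurate for smooth L.
    """
    return (L(x + u * h) + L(x - u * h) - 2.0 * L(x)) / (h * h)
```

For a quadratic such as L(x) = x², the stencil returns the exact second derivative 2 regardless of h.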
Adaptive Bandwidth Selection

Estimate all unknowns:
path variance
pixel Laplacian
Minimize the MSE as a function of r: MSE(r)
Lower initial error
Keeps the noise-bias balance
Data-driven bandwidth selector
Results

[Comparison: Progressive Photon Mapping vs. Adaptive PPM after 20 seconds]
[Comparison: Progressive Photon Mapping vs. Adaptive PPM after 3 seconds]
Conclusion

Optimal asymptotic convergence rate:
asymptotically slower than unbiased methods,
not always optimal in finite time

Adaptive bandwidth selection:
based on previous samples,
balances variance and bias,
speeds up convergence,
attractive for interactive preview
Thank you for your attention.