QMC: THE THEORY

Variation: an intuitive measure of smoothness (or "wiggliness"):

    V(f) = \int_0^1 |f'(x)| \, dx

• A less intuitive multivariate generalization, the Hardy-Krause variation (written here in the form valid for sufficiently smooth f):

    V_{HK}(f) = \sum_{k=1}^{d} \sum_{1 \le i_1 < \cdots < i_k \le d} \int_0^1 \cdots \int_0^1 \left| \frac{\partial^k f}{\partial x_{i_1} \cdots \partial x_{i_k}} \right| dx_{i_1} \cdots dx_{i_k},

  where in each term the coordinates other than x_{i_1}, \ldots, x_{i_k} are fixed at 1.

65

QMC: THE THEORY

Koksma-Hlawka theorem. Suppose f \colon [0,1]^d \to \mathbb{R} is of bounded HK-variation. Then

    \left| \int_{[0,1]^d} f(x) \, dx - \frac{1}{n} \sum_{i=1}^{n} f(x_i) \right| \le D^*(x_1, \ldots, x_n) \, V_{HK}(f)

• Complicated ingredients aside, the final result is quite intuitive.
• The error bound is proportional to the non-uniformity of the points and also to the wiggliness of the function.
• We don't know V_{HK}(f), but it's a constant, which gives us the asymptotics of the error (as a function of discrepancy).

66

MC VS QMC: THE THEORY

Asymptotics:
• Koksma-Hlawka: for a given function, the QMC integration error is proportional to the star-discrepancy of the QMC sequence.
• Commonly cited result: the star-discrepancy of the d-dimensional Halton set is \Theta\left((\log n)^d / n\right).
• Q: is this a positive or negative result? (Also when compared to MC?)

67

MC VS QMC: THE THEORY

Asymptotics:
• The QMC integration error is proportional to the star-discrepancy of the QMC sequence.
• Commonly cited result: the star-discrepancy of the d-dimensional Halton set is \Theta\left((\log n)^d / n\right). With 2 parameters, this notation is shady!
• What is meant here is that as a function of n (with d fixed!), the discrepancy is proportional to (\log n)^d / n. But the proportionality constant depends (horribly!) on d.

68

QMC IN HIGH DIMENSIONS

The Halton sequence (as most other QMC sequences) becomes visibly less impressive in high dimensions:

[Figure: scatter plots of two coordinates of the Halton sequence for d = 5, 10, 20, and 30, each on [0,1]^2; the higher-dimensional panels show pronounced visible structure rather than uniform coverage.]

69

QMC IN HIGH DIMENSIONS

• (\log n)^d / n tends to zero a lot faster than 1/\sqrt{n}.
• In low dimensions, QMC is great!
• In high dimensions the dimension-free scaling of MC catches up.
• A rule of thumb: QMC stops working around 15-20 dimensions.
• Even without the hidden dimension-dependent constant, compare the values of (\log n)^d / n and 1/\sqrt{n} for small d and small/moderate n.

70

STATISTICAL BOUNDS IN QUASI MONTE-CARLO

We can also get statistical bounds similar to those from MC:
• Shift every element of the QMC sequence by the same uniform random vector in [0,1]^d (componentwise, modulo 1).
• The discrepancy of the shifted set is approximately the same as that of the original. (This is fairly intuitive.)
• The estimates of the integral arising from different shifted sets are independent.
• The same method (as we saw in MC) for computing the endpoints of a confidence interval applies.

71

MATLAB

Sauer has a simple implementation of the Halton set.

Built-in, with more features: haltonset
• H = haltonset(d) initializes a d-dimensional point set
• H(1:n, :) produces an n-by-d array with the first n points
• Skip, Scramble, ...
• It doesn't implement the random shift, but it's easy to do manually. (HW 1.)

72

QMC/INTEGRATION OUTLOOK

There are many other (better) QMC sequences and grids than just Halton. Popular ones include:
1. Sobol' sequence. Complicated, won't discuss. Available in Matlab: sobolset.
2. Integration lattices. Easy to explain, surprisingly complicated theory.

    L_d = \left\{ v = \sum_{j=1}^{d} a_j v_j \text{ such that } a_j \in \mathbb{Z} \text{ for all } j = 1, \ldots, d \right\}

• The v_j are linearly independent vectors (generators) chosen to satisfy \mathbb{Z}^d \subseteq L_d.

Not available in Matlab, but there is a good open source implementation: Lattice Builder (https://github.com/umontreal-simul/latbuilder).

73

QMC/INTEGRATION OUTLOOK

What happened to Gaussian quadrature, polynomial exactness and all that cool stuff we learned in 427?
3. Gaussian quadrature: some theory exists. Very difficult to characterize the domains for which Gaussian quadrature formulas exist. (Remember, in the univariate setting there was a unique formula for each degree.)
4. Sparse grids. ...

74

SPARSE GRIDS

Sparse grids.
• Exact for multivariate polynomials up to a chosen total degree.
  (Full grids are exact for every polynomial of a given degree in each variable.)
• A combination of lower-order grids.
• As a result, they use way fewer points.
• But they also have some negative weights!

75
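The MC-vs-QMC rate comparison from earlier can be checked numerically. A minimal Python sketch (the function names are mine, and the hidden dimension-dependent constants are deliberately ignored) tabulating (log n)^d / n against 1/sqrt(n):

```python
import math

def qmc_rate(n, d):
    # Nominal Halton star-discrepancy rate (log n)^d / n, hidden constant ignored.
    return math.log(n) ** d / n

def mc_rate(n):
    # Monte Carlo root-mean-square error rate 1/sqrt(n), constant ignored.
    return 1.0 / math.sqrt(n)

for d in (2, 5, 10, 20):
    for n in (10**3, 10**6):
        print(f"d={d:2d}  n={n:>7}  (log n)^d/n = {qmc_rate(n, d):9.2e}  "
              f"1/sqrt(n) = {mc_rate(n):.2e}")
```

Already at d = 10 and n = 10^6 the nominal QMC rate is roughly 2.5e5, far above 1/sqrt(n) = 1e-3: at practical n the logarithmic factor dominates, which is one way to see the 15-20 dimension rule of thumb.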
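The random-shift construction described in the statistical-bounds slide can be sketched end to end. The Python sketch below builds Halton points from the standard radical-inverse formula (standing in for Sauer's or Matlab's implementation); the helper names and parameter choices are mine, not from the course materials:

```python
import math
import random

PRIMES = [2, 3, 5, 7, 11, 13]  # bases for the first few Halton coordinates

def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i about the radix point."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def halton(n, d):
    """First n points of the d-dimensional Halton sequence (d <= 6 with PRIMES above)."""
    return [[radical_inverse(i + 1, PRIMES[j]) for j in range(d)] for i in range(n)]

def shifted_estimate(f, pts, shift):
    """QMC estimate of the integral of f over [0,1]^d after a componentwise mod-1 shift."""
    return sum(f([(xj + sj) % 1.0 for xj, sj in zip(x, shift)]) for x in pts) / len(pts)

def shifted_qmc_ci(f, n, d, m=20, z=1.96):
    """Approximate 95% confidence interval from m independently shifted QMC estimates."""
    pts = halton(n, d)
    ests = [shifted_estimate(f, pts, [random.random() for _ in range(d)]) for _ in range(m)]
    mean = sum(ests) / m
    se = math.sqrt(sum((e - mean) ** 2 for e in ests) / (m - 1)) / math.sqrt(m)
    return mean - z * se, mean + z * se

# Example: integrate f(x) = x_1 + x_2 + x_3 over [0,1]^3 (exact value 1.5).
lo, hi = shifted_qmc_ci(lambda x: sum(x), n=2000, d=3)
```

Each shifted point set has roughly the discrepancy of the original, the m estimates are i.i.d., and the usual normal-theory interval machinery applies exactly as in the MC case.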
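For the integration lattices mentioned in the outlook, the most common concrete construction is the rank-1 lattice rule, whose node set is {(i z / n) mod 1 : i = 0, ..., n-1} for a single generating vector z. A minimal sketch; the z used below is an arbitrary illustration, not a vector optimized by a search tool such as Lattice Builder:

```python
def rank1_lattice(n, z):
    """Node set {(i * z / n) mod 1 : i = 0, ..., n-1} of a rank-1 lattice rule."""
    return [[(i * zj / n) % 1.0 for zj in z] for i in range(n)]

# 1024 points in 3 dimensions; z = [1, 433, 229] is illustrative only.
pts = rank1_lattice(1024, [1, 433, 229])

# Lattice-rule estimate of the integral of x_1 * x_2 * x_3 over [0,1]^3 (exact value: 1/8).
est = sum(p[0] * p[1] * p[2] for p in pts) / len(pts)
```

In practice both n and z come from a computerized search (e.g. Lattice Builder) minimizing a discrepancy-like figure of merit; the generation itself, as shown, is trivial.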