Evaluation of REMSADBRAVO Simulations Using Tracer Data and Synthesized Modeling Michael Barna Cooperative Institute for Research in the Atmosphere Colorado State University, Fort Collins, CO Bret Schichtel, Kristi Gebhart and William Malm Air Resources Division National Park Service, Fort Collins, CO PM Model Performance Workshop RTP, NC 10-11 February 2004 Acknowledgements • Assistance for the REMSAD simulations conducted at CIRA/CSU – Betty Pun, Shiang-Yuh Wu and Christian Seigneur (AER): initial assistance with REMSAD and met data processing – Hampden Kuhns (DRI) and Jeff Vukovich (MCNC): emissions inventory – Eladio Knipping and Naresh Kumar (EPRI): sulfur concentrations from GOCART – Nelson Seaman (PSU): MM5 simulations – Sharon Douglas, Tom Myers (ICF) and Tom Braverman (EPA): useful discussions on model evaluation BRAVO: a study designed to understand haze at Big Bend National Park • Big Bend NP is located in remote southwestern Texas, along the Texas/Mexico border • Haze has increased in recent years – a rarity for a western park • BRAVO (Big Bend Regional Aerosol and Visibility Observational Study) investigates the pollution sources that are contributing to this haze – Field program: July-October 1999 – Many participants: EPA NPS NOAA EPRI CSU DRI TCEQ AER Et al. Flight Over BBNP Area (5 November 2003) Who is contributing sulfate to BBNP? 1/Mm • Sulfate is the main constituent of visibilityimpairing PM at BBNP Big Bend, Bext Budget, BRAVO 100 90 80 70 60 50 40 30 20 10 0 7/1 Rayleigh 7/15 7/29 Sulfate 8/12 Nitrate 8/26 9/9 Organics 9/23 LAC 10/7 Fine Soil 10/21 Coarse • Who is contributing? – the Carbon I/II power plant just over the border? – sources in eastern Texas? – sources in the eastern US? – how large is the influence of the boundary concentrations? BRAVO’s “weight of evidence” approach to determine sulfate attributions • Don’t rely on one analytical method or model; rather, use “weight of evidence” approach: Source-oriented models: Receptor-oriented models: Hybrid models: REMSAD TrMB “Synthesized REMSAD” CMAQ FMB “Synthesized CMAQ” This talk will look at three ways to evaluate the BRAVO air quality simulations • Simulation of conserved tracers – Important but somewhat dull (Barna) • Simulation of sulfate with base emissions – Important but somewhat dull (Barna) • Identifying model biases using “synthesis inversion analysis” – Exciting! (Schichtel) Evaluating the REMSAD BRAVO sims • Simulation of conserved tracer – examine transport and dispersion of conservative tracers – if model can’t simulate transport and dispersion there’s not point in continuing • Simulation of sulfate with base emissions – time series analysis of predicted sulfate against BRAVO and CASTNET monitors – evaluate different periods to identify potential temporal biases – evaluate different monitors to identify potential spatial biases – evaluate at spatial patterns of interpolated observations and predictions – do the match? Evaluating the REMSAD BRAVO sims (cont’d) • Use “synthesized inversion modeling” to identify biases with respect to different source regions – A hybrid approach that starts with attribution results from REMSAD (or CMAQ or any model) – Use a statistical approach to identify multiplicative terms for each source region that would result in a best fit to the measurement data – If REMSAD attributions for that source region are • perfect: scaling coef = 1 • underestimated: scaling coef > 1 (i.e., need to increase) • overestimated: scaling coef < 1 (i.e., need to decrease) Simulation of conserved tracers Predicting transport is the most important aspect of air quality modeling • No other modeled process, e.g., emissions, deposition, chemical transformation, has as big an impact on model results as transport • transport = advection + turbulent diffusion • A tracer experiment is the most robust method for evaluating transport – Halocarbon tracer is conserved – negligible transformation and deposition – Detectable at very low concentrations – We know release rates – can check skill of receptor models for determining attribution – expensive BRAVO tracer source and receptor sites Tracer release sites: •Eagle Pass •San Antonio •Big Brown PP •Parish PP Tracer receptors at BBNP: •Persimmon Gap •K-Bar •San Vicente Example tracer plumes from REMSAD: Observed and predicted tracer time series Eagle Pass Tracer NE Texas Tracer REMSAD Tracer Prediction for BRAVO Obs BBNP3 PDCH (ppqV) REMSAD Tracer Prediction for BRAVO Obs BBNP3 PPCH (ppqV) TRN.003 BBNP3 SOA-PDCH (ppqV) 0.4 mixing ratio (ppqV) 1.5 mixing ratio (ppqV) TRN.003 BBNP3 POA-PPCH (ppqV) 0.5 2.0 1.0 0.5 0.0 0.3 0.2 0.1 10/28 10/21 10/14 10/7 9/30 9/23 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 7/8 7/1 10/28 10/21 10/14 10/7 9/30 9/23 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 7/8 7/1 0.0 -0.5 -0.1 date 1999 date 1999 San Antonio Tracer Houston Tracer REMSAD Tracer Prediction for BRAVO Obs BBNP3 PDCB (ppqV) REMSAD Tracer Prediction for BRAVO Obs BBNP3 PTCH (ppqV) TRN.003 BBNP3 PMF-PDCB (ppqV) 5 0.5 4 0.4 mixing ratio (ppqV) mixing ratio (ppqV) observed predicted 3 2 1 TRN.003 BBNP3 PEC-PTCH (ppqV) 0.3 0.2 0.1 0.0 10/29 date 1999 10/22 -0.1 date 1999 10/15 10/8 10/1 9/24 10/29 10/22 10/15 10/8 10/1 9/24 9/17 -1 9/17 0 Performance (or lack thereof?) statistics Eagle Pass NE Texas Houston San Antonio Average Observed (ppqV) 0.21 0.00 0.06 0.52 Average Predicted (ppqV) 0.39 0.02 0.03 0.33 R: 0.47 0.34 0.31 0.52 Normalized Gross Error: 412% 130% 74% 70% Normalized Bias: 380% 65% -71% -24% • What do we expect for “good performance”? Expecting perfection is naïve…. – Grid models aren’t ideal for simulating plumes – the “real” plumes likely have very strong concentration gradients that won’t be represented by model – Complex terrain is complex…and will not be resolved at 36 km Problems with this time series analysis • Tracer concentrations at two of the four sites are too low for meaningful time series analysis (negative concentrations!), but there is still useful information here • Looking at the preceding time series, your eye tells you that the model clearly has some skill (e.g., timing of Eagle Pass tracer), but this is not reflected in the bias or error statistics Comparing interpolated spatial patterns • Need to move beyond simple time series analysis to something more comprehensivie – How to assess patterns? – Magnitude – Concentration gradients – Spatial shifts (e.g., tomorrow’s predicted pattern matches today’s observed pattern) Observed sulfate spatial patterns: Predicted sulfate spatial patterns: Simulation of sulfate using the base emissions inventory REMSAD SO2 and SO4 plumes • Before using REMSAD to assign sulfate source attributions, need to evaluate the “base case” Predicted SO2 Predicted SO4 Observed Sulfate Predicted Sulfate Wichita Mtns Hagerman concentration (ug/m3) Predicted Sulfate 10/28 10/21 10/14 10/7 9/30 10/28 10/21 10/14 10/7 9/30 9/23 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 7/8 7/1 date 1999 9/23 date 1999 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 concentration (ug/m3) Predicted Sulfate Observed Sulfate 7/8 concentration (ug/m3) 10/28 10/21 10/14 10/7 Laguna Atascosa 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 7/1 10/28 10/21 10/14 10/7 9/30 9/23 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 7/8 7/1 date 1999 Observed Sulfate 9/30 San Bernard Predicted Sulfate 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 9/23 Observed Sulfate 9/16 Falcon Dam 9/9 Big Thicket Everton Ranch Aransas Lake Corpus Christi Padre Island Laredo 9/2 date 1999 Somerville Brackettville Pleasanton 8/26 Center Stillhouse Eagle Pass 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 8/19 Amistad Rio Grande Predicted Sulfate 8/12 Big Bend K-Bar 8/5 Presidio 7/29 10/28 10/21 10/14 10/7 9/30 9/23 9/16 9/9 9/2 8/26 8/19 8/12 8/5 7/29 7/22 7/15 7/8 7/1 date 1999 Marathon Persimmon Gap Ft McKavett Ft Stockton Ft Lancaster Sanderson Langtry LBJ 7/22 Monahans McDonald Esperanza Wright Patman Purtis Creek 7/15 Lake Colorado City Stephenville Observed Sulfate 7/1 Guadalupe Mtns 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 7/8 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 concentration (ug/m3) concentration (ug/m3) How much skill does REMSAD have in predicting sulfate? (BRAVO sites) How much skill does REMSAD have in predicting sulfate? (BRAVO sites) y = 0.28 x + 0.55 R = 0.40 18 16 14 12 10 8 6 4 14 12 10 8 6 4 0 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 y = 0.76 x + 1.17 R = 0.63 18 20 1:1 Predicted SO4 (ug/m3) 16 0 Oct 1999: 14 12 10 8 6 4 2 1:1 16 0 2 y = 0.84 x - 0.16 R = 0.75 18 2 20 Predicted SO4 (ug/m3) 20 2 0 Sept 1999: 1:1 Aug 1999: Predicted SO4 (ug/m3) 20 Predicted SO4 (ug/m3) July 1999: 2 4 20 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 y = 1.07 x + 1.51 R = 0.60 18 20 1:1 16 14 12 10 8 6 4 2 0 0 0 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 0 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 Performance statistics: 37 BRAVO sites Overall Jul-99 Aug-99 Sep-99 Oct-99 Observed Average (ug/m3) 3.1 2.1 3.5 3.5 2.8 Predicted Average (ug/m3) 3.3 1.1 2.8 3.8 4.6 R 0.61 0.40 0.75 0.63 0.60 Normalized Error 62% 51% 53% 43% 98% Normalized Bias 1% -41% -43% 2% 78% Data Completeness 98% 88% 100% 100% 100% How much skill does REMSAD have in predicting sulfate? (CASTNET sites) 20 1:1 16 14 12 Aug 1999: 10 8 6 4 2 16 14 12 10 8 6 4 0 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 20 y = 0.98 x + 0.38 R = 0.88 18 1:1 16 14 12 10 8 6 4 2 0 Oct 1999: 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 20 y = 1.21 x + 0.49 R = 0.87 18 Predicted SO4 (ug/m3) 0 Predicted SO4 (ug/m3) 1:1 2 0 Sept 1999: y = 1.03 x + 0.46 R = 0.91 18 Predicted SO4 (ug/m3) 18 Predicted SO4 (ug/m3) July 1999: 20 y = 0.87 x + 0.53 R = 0.92 1:1 16 14 12 10 8 6 4 2 0 0 0 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 0 2 4 6 8 10 12 14 16 Observed SO4 (ug/m3) 18 20 Performance statistics: 67 CASTNET sites Overall Jul-99 Aug-99 Sep-99 Oct-99 Observed Average (ug/m3) 4.5 5.8 5.6 4.1 2.6 Predicted Average (ug/m3) 5.0 5.6 6.2 4.5 3.6 R 0.90 0.92 0.91 0.88 0.87 Normalized Error 45% 36% 36% 43% 65% Normalized Bias 21% 3% 12% 21% 50% Data Completeness 97% 99% 97% 96% 97% Monthly spatial patterns of bias Observed and predicted spatial patterns Observed sulfate Predicted sulfate • Need to develop a quantitative metric that describes the agreement between two spatial patterns! Synthesized inversion modeling Using models for sulfate source apportionment in BRAVO • Models can be used for “source attributions”, i.e., “who is causing the pollution at a receptor” BBNP • How this was done for BRAVO: remove SO2 from a source region and re-run REMSAD Example: remove SO2 emissions from Texas and re-run the model. How do sulfate concentrations at a receptor site change. Sulfate contributions for each region from REMSAD – “unscramble the sulfate egg” Base Case Sulfate + Mexico Sulfate = + Texas Sulfate W. US Sulfate + + E. US Sulfate Boundary Sulfate REMSAD daily attributions for sulfate at Big Bend NP for the major source regions need to add mass here…. and reduce mass here…. ...but which sources need to increased or decreased? Use synthesis inversion modeling to address biases when determining attributions • Synthesis inversion modeling – a technique for identifying model biases by combining observations with model results ci Gij s j i mi i j ci Gij = = sj mi = = vector of sulfate observations matrix of the source attribution from each source region/time pair to each observation source attribution scaling coefficients modeled concentration values i = errors in ci 8 7 5 4 3 2 1 0 July 9 August 9 September 9 October 9 g/m 3) 100% 8 7 80% Sulfate Source Attribution ( 6 5 60% 4 40% 3 2 20% 1 0% 0 July 9 Texas August 9 Mexico Eastern US September 9 Western US October 9 Other Observed S * 3 g/m 3) Sulfate Source Attribution ( 6 Other Western US Eastern US Mexico Texas Observed S * 3 Observed Sulfate ( g/m 3 ) Apply scaling factors to original predictions to get “synthesized REMSAD” New sulfate attributions at Big Bend NP for the BRAVO period Average Sulfate Attribution at Big Bend Carbon: 23% Mexico (14%) E. TX: 14% Texas 39% (23%) 16% (16%) (14%) 32% (42%) Eastern US Western US 6% (9%) Bndy Cond. 7% (7%) 0 0.2 0.4 0.6 Source Attribution (mg/m3) 0.8 1 Conclusions • REMSAD is one tool among many used in BRAVO for developing sulfate source attributions….but we need to try and understand model errors and biases • Unfortunately, model evaluation is often ambiguous, difficult and incomplete • We often can’t determine why certain model results arise – it is too hard to analyze the individual processes that drive the results – “Cloud processing” of SO2 - are clouds in the right place? Rainout? – Are Mexican emission rates known? – Are predicted oxidant concentrations correct? – And lots more conjecture…. Conclusions (cont’d) • Tracer experiments provide the minimum bar that the model should get over – if transport can’t be simulated then everything else is suspect • Longer simulations (months) are necessary to elucidate temporal biases • Larger domains (continental) are necessary to elucidate spatial biases • We need better tools than the “standard issue” time series analyses – Synthesized inversion to merge observations with model predictions to identify – Develop a metric that describes the agreement between spatial patterns Conclusions (cont’d) Sulfat Source Attribution (%) Original Source Attributon of Big Bend's Sulfate 60 50 40 30 20 10 0 Carbon CMAQ Mexico REMSAD Texas Eastern Western US US FMBR - MM5 TRMB - MM5 B.C. Sulfat Source Attribution (%) • Don’t trust one model; rather, examine results from both receptor models and regional models 50 45 40 35 30 25 20 15 10 5 0 Source Attributon of Big Bend's Sulfate Synthesized CMAQ Synthesized REMSAD Scaled FMBR Scaled TrMB Carbon Mexico Texas E. US W. US • Questions: – [email protected] – [email protected] (synthesis inversion) Other
© Copyright 2026 Paperzz