

An analogue method is presented to detect the occurrence of heavy precipitation events without relying on modeled precipitation. The approach is based on using composites to identify distinct large-scale atmospheric conditions associated with widespread heavy precipitation events across local scales. These composites, exemplified in the south-central, midwestern, and western United States, are derived through the analysis of 27-yr (1979–2005) Climate Prediction Center (CPC) gridded station data and the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA). Circulation features and moisture plumes associated with heavy precipitation events are examined. The analogues are evaluated against the relevant daily meteorological fields from the MERRA reanalysis and achieve a success rate of around 80% in detecting observed heavy events within one or two days. The method also captures the observed interannual variations of seasonal heavy events with higher correlation and smaller RMSE than MERRA precipitation. When applied to the same 27-yr twentieth-century climate model simulations from Phase 5 of the Coupled Model Intercomparison Project (CMIP5), the analogue method produces a seasonal count of heavy precipitation events that is more consistent with observations, and less uncertain, than counts based on model-simulated precipitation. The analogue method also performs better than model-based precipitation in characterizing the statistics (minimum, lower and upper quartile, median, and maximum) of year-to-year seasonal heavy precipitation days. These results indicate the capability of CMIP5 models to realistically simulate large-scale atmospheric conditions associated with widespread local-scale heavy precipitation events with a credible frequency. Overall, the presented analyses highlight the improved diagnoses of the analogue method relative to an evaluation that considers modeled precipitation alone to assess heavy precipitation frequency.
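The two steps of the analogue method, compositing the circulation field over known heavy-precipitation days and then flagging new days whose circulation resembles the composite, can be sketched on synthetic data. Everything below (the idealized "trough" anomaly, the noise level, and the 0.5 pattern-correlation cutoff) is an invented illustration, not the study's actual fields or thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily circulation-anomaly fields (day x lat x lon); an idealized
# "trough" pattern is embedded on the known heavy-precipitation days.
ndays, ny, nx = 500, 20, 30
y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
trough = -2.0 * np.exp(-(x**2 + y**2) / 0.3)
heavy_days = rng.choice(ndays, size=60, replace=False)
fields = rng.normal(0.0, 0.5, (ndays, ny, nx))
fields[heavy_days] += trough

# Step 1: composite the circulation field over the heavy-precipitation days.
composite = fields[heavy_days].mean(axis=0)

# Step 2: flag analogue days by pattern correlation with the composite.
def pattern_corr(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = np.array([pattern_corr(f, composite) for f in fields])
detected = np.flatnonzero(scores > 0.5)  # the cutoff is a tunable assumption

hits = np.intersect1d(detected, heavy_days).size
print(f"detected {hits} of {heavy_days.size} embedded heavy days")
```

In this toy setting the embedded pattern is strong enough that nearly all heavy days are recovered; with real reanalysis fields the skill depends on how distinct the composite is from day-to-day variability, which is what the reported ~80% hit rate measures.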

An international emissions trading system is a featured instrument in the Kyoto Protocol to the Framework Convention on Climate Change, designed to reduce emissions of greenhouse gases among major industrial countries. The US was the leading proponent of emissions trading in the negotiations leading up to the Protocol, with the European Union initially reluctant to embrace the idea. However, the US withdrawal from the Protocol has greatly changed the nature of the agreement. One result is that the EU has moved rapidly ahead, establishing in 2003 the Emission Trading Scheme (ETS) for the period 2005-2007. This Scheme was intended as a test designed to help its member states transition to a system that would lead to compliance with their Kyoto Protocol commitments, which cover the period 2008-2012. The ETS covers CO2 emissions from large industrial sources in the electricity, heat, and energy-intensive sectors. The system is itself still evolving: allocations, rules, and registries were being finalized in some member states late into 2005, even though trading started in January of that year. We analyze the ETS using the MIT Emissions Prediction and Policy Analysis (EPPA) model. We find that a competitive carbon market clears at a carbon price of about 0.6 to 0.9 €/tCO2 (~2 to 3 €/tC) for the 2005-2007 period in a base run of our model, in line with the expectations of many observers, who saw the cuts required under the system as very mild, but in sharp contrast to the actual history of trading prices, which had settled in the range of 20 to 25 €/tCO2 (~70 to 90 €/tC) by the middle of 2005. In various comparison exercises the EPPA model's estimates of carbon prices have been similar to those of other models, and so the contrast between projection and reality in the ETS raises questions regarding the potential real cost of emissions reductions vis-à-vis expectations previously formed on the basis of results from the modeling community.
While it is beyond the scope of this paper to reach firm conclusions on reasons for this difference, what happens over the next few years will have important implications for greenhouse gas emissions trading and so further analysis of the emerging European trading system will be crucial.

We develop a forward-looking version of the recursive dynamic MIT Emissions Prediction and Policy Analysis (EPPA) model and apply it to examine the economic implications of proposals in the US Congress to limit greenhouse gas (GHG) emissions. We find that shocks in the consumption path are smoothed out in the forward-looking model and that the lifetime welfare cost of GHG policy is lower than in the recursive model, since the forward-looking model can fully optimize over time. The forward-looking model also allows us to explore issues for which it is uniquely well suited, including revenue recycling and early-action crediting. We find that capital tax recycling reduces welfare costs more than labor tax recycling because of its long-term effect on economic growth. There are also substantial incentives for early-action credits; however, when spread over the full horizon of the policy they do not have a substantial effect on lifetime welfare costs.

© 2011 Cambridge University Press

Analyses of global climate policy as a sequential decision under uncertainty have been severely restricted by dimensionality and computational burdens, and have therefore limited the number of decision stages, discrete actions, or number and type of uncertainties considered. In particular, two common simplifications are the use of two-stage models to approximate a multi-stage problem and exogenous formulations for inherently endogenous or decision-dependent uncertainties (in which the shock at time t+1 depends on the decision made at time t). In this paper, we present a stochastic dynamic programming formulation of the Dynamic Integrated Model of Climate and the Economy (DICE) and apply approximate dynamic programming techniques to numerically solve for the optimal policy under uncertain and decision-dependent technological change in a multi-stage setting. We compare numerical results using two alternative value function approximation approaches, one parametric and one non-parametric. We show that increasing the variance of a symmetric mean-preserving uncertainty in abatement costs leads to higher optimal first-stage emission controls, but the effect is negligible when the uncertainty is exogenous. In contrast, the impact of decision-dependent cost uncertainty, a crude approximation of technology R&D, on the optimal control is much larger, leading to higher control rates (lower emissions). Further, we demonstrate that the magnitude of this effect grows with the number of decision stages represented, suggesting that for decision-dependent phenomena the conventional two-stage approximation will lead to an underestimate of the effect of uncertainty.

© 2012 Springer-Verlag
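The qualitative mechanism, that decision-dependent cost uncertainty raises optimal first-stage abatement relative to an exogenous formulation, can be reproduced in a stylized two-stage problem solved by enumeration. The cost and damage functions, the two cost states, and the learning response `p_lo(a1)` below are toy assumptions for illustration, not DICE calibrations:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 101)      # candidate abatement fractions per stage

C_LO, C_HI = 0.5, 2.0                  # cheap vs expensive stage-2 cost states
def damage(e):                          # convex damages in residual emissions
    return 0.5 * e**2

def stage2_cost(a1, c2):
    """Optimal stage-2 cost given stage-1 abatement and the realized cost state."""
    vals = c2 * grid**2 + damage((1 - a1) + (1 - grid))
    return vals.min()

def expected_cost(a1, p_lo):
    ev = p_lo * stage2_cost(a1, C_LO) + (1 - p_lo) * stage2_cost(a1, C_HI)
    return a1**2 + ev                  # stage-1 cost plus expected future cost

# Exogenous uncertainty: the chance of the cheap-technology state is fixed.
a1_exo = min(grid, key=lambda a: expected_cost(a, 0.5))
# Decision-dependent uncertainty: abating more now raises that chance
# (a crude stand-in for induced technological change / R&D).
a1_endo = min(grid, key=lambda a: expected_cost(a, 0.2 + 0.6 * a))

print(a1_exo, a1_endo)  # decision-dependence pushes first-stage control upward
```

Because extra first-stage abatement shifts probability toward the cheaper future state, it carries an additional marginal benefit, so the endogenous optimum sits at a higher control rate, mirroring the multi-stage result described above in miniature.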

We describe a coupled climate model of intermediate complexity designed for use in global warming experiments. The atmospheric component is a two-dimensional (zonally averaged) statistical-dynamical model based on the atmospheric general circulation model (GCM) of the Goddard Institute for Space Studies. In contrast with the energy-balance models used in some climate models of intermediate complexity, this model includes a full representation of the hydrological and momentum cycles. It also has parameterizations of the main physical processes, including a sophisticated radiation code. The ocean component is a coarse-resolution ocean GCM with simplified global geometry based on the Geophysical Fluid Dynamics Laboratory modular ocean model. Because of the simplified geometry, the resolution in the western boundary layers can be readily increased compared to conventional coarse-resolution models without significantly increasing the model's computational requirements. The ocean model's efficiency is also greatly increased by using a moderate degree of asynchronous coupling between the oceanic momentum and tracer fields. We demonstrate that this still allows an accurate simulation of transient behavior, including the seasonal cycle. A 100-year simulation with the model requires less than 8 hours on a state-of-the-art workstation. The main novelty of the model is therefore its combination of computational efficiency, a statistical-dynamical atmosphere, and a 3D ocean. Long-term present-day climate simulations are carried out using the coupled model with and without flux adjustments, and with either the Gent-McWilliams (GM) parameterization scheme or horizontal diffusion (HD) in the ocean. Deep ocean temperatures systematically decrease in the runs without flux adjustment. We demonstrate that the mismatch between heat transports in the uncoupled states of the two models is the main cause of this systematic drift.
In addition, changes in the circulation and in sea-ice formation also contribute to the drift. Flux adjustments in the freshwater fluxes are shown to have a stabilizing effect on the thermohaline circulation in the model, whereas the adjustments in the heat fluxes tend to weaken the global "conveyor". To evaluate the model's response to transient external forcing, global warming simulations are also carried out with the flux-adjusted version of the coupled model. The coupled model reproduces reasonably well the behavior of more sophisticated coupled GCMs for both the current climate and the global warming scenarios.

© Springer

A new method for parametric uncertainty analysis of numerical geophysical models is presented. It approximates model response surfaces, which are functions of the model input parameters, using orthogonal polynomials whose weighting functions are the probability density functions (PDFs) of the uncertain input parameters. The approach has been applied to the uncertainty analysis of an analytical model of the direct radiative forcing by anthropogenic sulfate aerosols, which has nine uncertain parameters. The method is shown to generate PDFs of the radiative forcing that are very similar to the exact analytical PDF. Compared with the Monte Carlo method for this problem, the new method is a factor of 25 to 60 times faster, depending on the error tolerance, and exhibits an exponential decrease of error with increasing order of the approximation.

© 1997 American Geophysical Union
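The idea of expanding a response surface in polynomials orthogonal under the input PDF is the same one behind polynomial-chaos expansions. A one-parameter sketch with a Gaussian input and probabilists' Hermite polynomials follows; the toy response `exp(0.3x)` and the truncation order are assumptions for illustration, not the nine-parameter aerosol model:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi, exp

# Stand-in for an expensive model: response to one uncertain input x ~ N(0,1).
model = lambda x: np.exp(0.3 * x)

order = 6
nodes, weights = He.hermegauss(20)        # Gauss-Hermite rule, weight e^{-x^2/2}

# Project the response onto Hermite polynomials orthogonal under the N(0,1)
# weight: c_k = E[model(X) He_k(X)] / k!, evaluated by quadrature.
coeffs = []
for k in range(order + 1):
    basis = He.hermeval(nodes, [0.0] * k + [1.0])   # He_k at the nodes
    ck = (weights * model(nodes) * basis).sum() / (sqrt(2 * pi) * factorial(k))
    coeffs.append(ck)

# Cheap surrogate: evaluate the polynomial series instead of the model.
surrogate = lambda x: He.hermeval(x, coeffs)

rng = np.random.default_rng(1)
xs = rng.standard_normal(100_000)
mc_mean = model(xs).mean()                # brute-force Monte Carlo on the model
pc_mean = coeffs[0]                       # the mean falls out of the expansion
print(mc_mean, pc_mean, exp(0.045))       # exact mean of exp(0.3X) is e^{0.045}
```

Once the handful of quadrature evaluations is paid for, the surrogate can be sampled essentially for free, which is the source of the speedup over direct Monte Carlo, and the truncation error shrinks rapidly with the expansion order.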

Aggregate energy intensity in the United States has been declining steadily since the mid-1970s and the first oil shock. Energy intensity can be reduced by improving efficiency in the use of energy or by moving away from energy-intensive activities. At the national level, I show that roughly three-quarters of the improvement in U.S. energy intensity since 1970 results from efficiency improvements. This should reduce concerns that the United States is off-shoring its carbon emissions.

A state-level analysis shows that rising per capita income and higher energy prices have played an important part in lowering energy intensity. Price and income predominantly influence intensity through changes in energy efficiency rather than through changes in economic activity. In addition, the empirical analysis suggests that little policy intervention will be needed to achieve the Bush Administration goal of an 18 percent reduction in carbon intensity by the end of this decade.

© 2008 International Association for Energy Economics
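One standard way to make the efficiency-versus-activity split concrete is an index decomposition of aggregate intensity. The sketch below uses the additive log-mean Divisia (LMDI) method on invented two-sector numbers; the paper's own index formula and data may differ:

```python
import math

def logmean(a, b):
    """Logarithmic mean, the exact LMDI weight."""
    return a if a == b else (a - b) / math.log(a / b)

# Toy two-sector economy (illustrative numbers only):
# activity shares S and sector energy intensities e in a base and an end year.
S0, e0 = [0.6, 0.4], [10.0, 2.0]   # base year
S1, e1 = [0.5, 0.5], [8.0, 1.8]    # end year: activity shifted, efficiency up

# Aggregate intensity is the share-weighted sum of sector intensities.
I0 = sum(s * e for s, e in zip(S0, e0))
I1 = sum(s * e for s, e in zip(S1, e1))

# LMDI splits the intensity change exactly into an efficiency effect
# (within-sector intensity change) and a structural/activity effect.
eff = sum(logmean(s1 * x1, s0 * x0) * math.log(x1 / x0)
          for s0, x0, s1, x1 in zip(S0, e0, S1, e1))
struct = sum(logmean(s1 * x1, s0 * x0) * math.log(s1 / s0)
             for s0, x0, s1, x1 in zip(S0, e0, S1, e1))

print(I1 - I0, eff + struct)       # the two effects sum exactly to the change
print(eff / (I1 - I0))             # fraction of the decline due to efficiency
```

Running the same decomposition on sector-level energy and output data is how a statement like "roughly three-quarters of the intensity decline comes from efficiency" can be quantified.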

A number of observational studies indicate that carbon uptake by terrestrial ecosystems, and its response to changes in climate conditions, depends on the interaction between carbon and nitrogen dynamics. However, many of the terrestrial ecosystem models used in climate change studies do not take this effect into account. We study the global impact of carbon/nitrogen interaction on the feedback between climate and the terrestrial carbon cycle by means of numerical simulations with an Earth system model of intermediate complexity. Two versions of the Terrestrial Ecosystem Model (TEM), with (the standard version) and without the interaction between carbon and nitrogen cycles, were used in this study. The feedback between climate and the terrestrial carbon cycle is examined by comparing results of the Earth system model with various climate sensitivities to an increase in atmospheric CO2 concentration. Our results show that the interaction between terrestrial carbon and nitrogen changes both the sign and the magnitude of the feedback. In the simulations with the carbon-only version of TEM, surface warming significantly reduces carbon uptake by both soil and vegetation, leading to a positive carbon cycle-climate feedback. In contrast, if gross primary productivity is limited by nitrogen availability, climate-change-related increases in carbon uptake by vegetation exceed the increase in soil carbon decomposition. As a result, the feedback between climate and terrestrial carbon uptake becomes negative, except under very strong surface warming (in conjunction with high climate sensitivity), when the terrestrial ecosystem becomes a carbon source. Nevertheless, for small or moderate increases in surface temperature the standard version of TEM takes up less carbon than the carbon-only version, resulting in a larger increase in atmospheric CO2 concentration for a given global carbon emission.

Emissions of greenhouse gases and conventional pollutants are closely linked through shared generation processes, so policies directed toward long-lived greenhouse gases affect emissions of conventional pollutants and, similarly, policies directed toward conventional pollutants affect emissions of greenhouse gases. Some conventional pollutants, such as aerosols, also have direct radiative effects. NOx and VOCs are precursors of ozone, another substance with both radiative and health impacts, and these ozone precursors also interact with the chemistry of the hydroxyl radical, the major methane sink. Realistic scenarios of future emissions and concentrations must therefore account for both air pollution and greenhouse gas policies and for how they interact economically as well as atmospherically, including the regional pattern of emissions and regulation. We have modified a 16-region computable general equilibrium economic model (the MIT Emissions Prediction and Policy Analysis model) by including elasticities of substitution for ozone precursors and aerosols in order to examine these interactions between climate policy and air pollution policy on a global scale. Urban emissions are distributed based on population density and aged using a reduced-form urban model before release into an atmospheric chemistry/climate model (the earth systems component of the MIT Integrated Global System Model). This integrated approach enables examination of the direct impacts of air pollution on climate, the ancillary and complementary interactions between air pollution and climate policies, and the impact of different population distribution algorithms or urban emission aging schemes on global-scale properties. This modeling exercise shows that while ozone levels are reduced by NOx and VOC reductions, these reductions lead to an increase in methane concentrations that eliminates the temperature effects of the ozone reductions.
However, black carbon reductions do have significant direct effects on global mean temperatures, as do ancillary reductions of greenhouse gases due to the pollution constraints imposed in the economic model. Finally, we show that the economic benefits of coordinating air pollution and climate policies, rather than implementing them separately, are on the order of 20% of the total policy cost.

