
Studying the uncertainty in computationally expensive models has required the development of specialized methods, including alternative sampling techniques and response surface approaches. However, existing techniques for response surface development break down when the model being studied exhibits discontinuities or bifurcations. One uncertain variable that exhibits this behavior is the thermohaline circulation (THC) as modeled in three-dimensional general circulation models. This is a critical uncertainty for climate change policy studies. We investigate the development of a response surface for studying uncertainty in THC using the Deterministic Equivalent Modeling Method, a stochastic technique using expansions in orthogonal polynomials. We show that this approach is unable to reasonably approximate the model response. We demonstrate an alternative representation that accurately simulates the model's response, using a basis function with properties similar to the model's response over the uncertain parameter space. This indicates useful directions for future methodological improvements.
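The failure mode described here can be reproduced in a few lines: a global orthogonal-polynomial expansion (the core of a DEMM-style surrogate) fitted to a step-like response, a toy stand-in for a THC collapse, retains large errors near the discontinuity no matter the degree. The response function, threshold, and polynomial degree below are invented for illustration and are not the paper's actual model or method.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical step-like model response: circulation "on" below a threshold
# of the uncertain parameter, collapsed above it.
def model_response(x):
    return np.where(x < 0.3, 1.0, 0.0)

x = np.linspace(-1.0, 1.0, 2001)
y = model_response(x)

# Global orthogonal-polynomial (Legendre) least-squares expansion,
# a DEMM-style surrogate over the uncertain parameter space.
coeffs = legendre.legfit(x, y, deg=10)
y_hat = legendre.legval(x, coeffs)

max_err = np.max(np.abs(y - y_hat))
print(f"max pointwise error of degree-10 expansion: {max_err:.3f}")
# Gibbs-type oscillations pin the error near the jump at roughly half the
# step height; adding more polynomial terms does not remove them.
```

A basis function that itself contains a jump or sigmoidal transition, as the abstract suggests, avoids this obstruction entirely.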

© 2006 Baywood Publishing Co., Inc.

To improve the estimate of economic costs of future sea-level rise associated with global climate change, this report generalizes the sea-level rise cost function originally proposed by Fankhauser, and applies it to a new database on coastal vulnerability developed as part of the Dynamic Interactive Vulnerability Assessment (DIVA) tool.
An analytic expression for the generalized sea-level rise cost function is obtained to explore the effect of various spatial distributions of capital and nonlinear sea-level rise scenarios. With its high spatial resolution, the DIVA database shows that capital is usually highly spatially concentrated along a nation's coastline, and that previous studies, which assumed linear marginal capital loss for lack of this information, probably overestimated the fraction of a nation's coastline to be protected and hence protection cost. In addition, the new function can treat a sea-level rise scenario that is nonlinear in time. As a nonlinear sea-level rise scenario causes more costs in the future than an equivalent linear sea-level rise scenario, using the new equation with a nonlinear scenario also reduces the estimated damage and protection fraction through discounting of the costs in later periods.
Numerical calculations are performed, applying the cost function to the DIVA database and socioeconomic scenarios from the MIT Emissions Prediction and Policy Analysis (EPPA) model. The effect of capital concentration substantially decreases protection cost and capital loss compared with previous studies, but not wetland loss. The use of a nonlinear sea-level rise scenario further reduces the total cost because the cost is postponed into the future.
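The discounting argument can be illustrated with a toy calculation: a sea-level rise scenario that is quadratic in time but reaches the same total rise as a linear one incurs its costs later, so its present value is lower. The cost model, discount rate, and magnitudes below are assumptions for illustration, not the DIVA or EPPA numbers.

```python
# Hypothetical illustration: cost in each year is proportional to that
# year's increment of sea-level rise, discounted back to the present.
T = 100             # planning horizon in years (assumed)
total_rise = 0.5    # metres of rise by year T (assumed)
r = 0.03            # annual discount rate (assumed)
unit_cost = 1000.0  # cost per metre of incremental rise (assumed units)

def present_value(rise_path):
    """Discounted cost of the year-by-year increments of a rise path."""
    pv, prev = 0.0, 0.0
    for t, s in enumerate(rise_path, start=1):
        pv += unit_cost * (s - prev) / (1 + r) ** t
        prev = s
    return pv

linear = [total_rise * t / T for t in range(1, T + 1)]
quadratic = [total_rise * (t / T) ** 2 for t in range(1, T + 1)]

pv_lin = present_value(linear)
pv_quad = present_value(quadratic)
print(f"PV of linear scenario:    {pv_lin:.1f}")
print(f"PV of quadratic scenario: {pv_quad:.1f}")
```

Both paths deliver the same rise by year T, yet the back-loaded quadratic scenario has the smaller present value, matching the direction of the effect reported above.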

Methane (CH4) and carbon dioxide (CO2) are the two most radiatively important greenhouse gases attributable to human activity. Large uncertainties in their source and sink magnitudes currently exist. We estimate global methane surface emissions between 1996 and 2001, using a top-down approach that combines observed and simulated atmospheric CH4 concentrations. As a secondary study, we describe our participation in a CO2 inverse-modeling intercomparison.

The available methane time-series data used in this work include observations from 13 high-frequency (in situ) stations and 74 low-frequency (flask) sites. We also construct an annually repeating reference emissions field from pre-existing datasets of individual methane processes. For our forward simulations, we use the 3-D global chemical transport model MATCH driven by NCEP meteorology. A prescribed, annually repeating OH field, scaled to fit methyl chloroform observations, serves as the methane sink. A total methane source of approximately 600 Tg/yr best reproduces the methane growth rate between 1993 and 2001. Using the reference emissions, MATCH can reproduce the observed methane variations at many sites. Interannual variations in transport, including those associated with ENSO and the NAO, are found to be important at certain locations.

We adapt the Kalman filter to estimate methane flux magnitudes and uncertainties between 1996 and 2001. Seven seasonal processes (3 wetland, rice, and 3 biomass burning) are optimized for each month, while three aseasonal processes (animals/waste, coal, and gas) are optimized as constant emissions. These optimized emissions represent adjustments to the reference emissions. For the entire period, the inversion reduces coal and gas emissions and increases rice and biomass burning emissions. The optimized seasonal emission has a strong peak in July, largely due to increased emissions from rice-producing regions. The inversion also attributes the large 1998 increase in atmospheric CH4 to global wetland emissions, consistent with a bottom-up study based on a wetland process model. The current observational network can significantly constrain northern emitting regions, but is less effective at constraining tropical emitting regions because of limited observations. We further assess the sensitivity of the inversion to different observing sites and model sampling strategies. Better estimates of global OH fluctuations are also necessary to fully describe the interannual behavior of methane observations. Carbon dioxide inversions are conducted as part of the TransCom 3 (Level 1) modeling intercomparison, and we explore the sensitivity of our CO2 inversion results to different parameters.
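The flux-estimation step can be sketched as a standard Kalman measurement update: the state is a vector of emission adjustments, a sensitivity matrix maps fluxes to concentrations, and each update shrinks the flux uncertainty. The matrix, covariances, and observations below are invented for illustration; this is the generic technique, not the MATCH-based inversion itself.

```python
import numpy as np

def kalman_update(x, P, y, H, R):
    """One measurement update: prior state x with covariance P, observations
    y with error covariance R, and linear observation operator H."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (y - H @ x)           # updated flux adjustments
    P_new = (np.eye(len(x)) - K @ H) @ P  # reduced uncertainty
    return x_new, P_new

# Two hypothetical flux processes observed at three sites.
x = np.zeros(2)                # prior adjustment to reference emissions
P = np.diag([4.0, 4.0])        # prior variance, (Tg/yr)^2 (assumed)
H = np.array([[0.8, 0.1],
              [0.3, 0.5],
              [0.1, 0.9]])     # assumed flux-to-concentration sensitivities
R = 0.25 * np.eye(3)           # observation-error covariance (assumed)
y = np.array([1.2, 0.9, 1.5])  # observed-minus-reference concentrations

x, P = kalman_update(x, P, y, H, R)
print("posterior flux adjustments:", x)
print("posterior variances:", np.diag(P))
```

The shrinkage of the posterior variances is exactly the sense in which a network "constrains" a region: where sensitivities are weak, as for the tropics in the abstract, the variance reduction is small.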

An international emissions trading system is a featured instrument in the Kyoto Protocol to the Framework Convention on Climate Change, designed to reduce emissions of greenhouse gases among major industrial countries. The US was the leading proponent of emissions trading in the negotiations leading up to the Protocol, with the European Union initially reluctant to embrace the idea. However, the US withdrawal from the Protocol has greatly changed the nature of the agreement. One result is that the EU has moved rapidly ahead, establishing in 2003 the Emission Trading Scheme (ETS) for the period 2005-2007. This Scheme was intended as a test to help its member states transition to a system that would lead to compliance with their Kyoto Protocol commitments, which cover the period 2008-2012. The ETS covers CO2 emissions from large industrial entities in the electricity, heat, and energy-intensive sectors. The system is itself still evolving: allocations, rules, and registries were still being finalized in some member states late into 2005, even though the system started in January of that year. We analyze the ETS using the MIT Emissions Prediction and Policy Analysis (EPPA) model. In a base run of our model, a competitive carbon market clears at a carbon price of about 0.6 to 0.9 €/tCO2 (~2 to 3 €/tC) for the 2005-2007 period, in line with the expectations of many observers who saw the cuts required under the system as very mild, but in sharp contrast to the actual history of trading prices, which had settled in the range of 20 to 25 €/tCO2 (~70 to 90 €/tC) by the middle of 2005. In various comparison exercises the EPPA model's estimates of carbon prices have been similar to those of other models, and so the contrast between projection and reality in the ETS raises questions about the potential real cost of emissions reductions vis-à-vis expectations previously formed from the results of the modeling community.
While it is beyond the scope of this paper to reach firm conclusions on the reasons for this difference, what happens over the next few years will have important implications for greenhouse gas emissions trading, and further analysis of the emerging European trading system will be crucial.
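The notion of a competitive permit market "clearing" can be sketched in a few lines: with linear marginal abatement cost (MAC) curves MAC_i(a) = b_i * a, each sector abates up to the point where its MAC equals the permit price, so the price that delivers a required total abatement A is p* = A / sum(1/b_i). The sectors, slopes, and abatement requirement below are invented for illustration and are not EPPA outputs.

```python
# Stylized competitive permit-market clearing with linear MAC curves.
# All numbers are assumptions for illustration only.
mac_slopes = {"electricity": 0.8,   # €/tCO2 per MtCO2 abated (assumed)
              "heat": 1.5,
              "industry": 2.5}
required_abatement = 30.0           # MtCO2 cap shortfall (assumed)

# At price p each sector abates a_i = p / b_i; summing and solving for p:
inv_slope_sum = sum(1.0 / b for b in mac_slopes.values())
price = required_abatement / inv_slope_sum
abatement = {s: price / b for s, b in mac_slopes.items()}

print(f"clearing price: {price:.2f} €/tCO2")
for sector, a in abatement.items():
    print(f"  {sector}: abates {a:.1f} MtCO2")
```

The same logic explains why a mild cap implies a low model price: a small required abatement, divided by the pooled flexibility of all sectors, yields a small p*.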

Copyright Carlos de Miguel, Xavier Labandeira and Baltasar Manzano 2006

The study of uncertainties in future climate projections requires large ensembles of simulations with different values of the model characteristics that define its response to external forcing. These characteristics include climate sensitivity, the strength of aerosol forcing, and the rate of ocean heat uptake. The latter can be easily varied over a wide range in an anomaly-diffusing ocean model (ADOM). The rate of heat uptake in a three-dimensional ocean general circulation model (OGCM) is, however, determined by a large number of factors and is far more difficult to vary. The range of oceanic heat uptake rates produced by existing Atmosphere-Ocean General Circulation Models (AOGCMs) is narrower than the range suggested by available observations. As a result, simpler models such as an ADOM are useful in probabilistic climate forecasting studies because they can take into account the full uncertainty in ocean heat uptake.
To evaluate the performance of the ADOM on different time scales, we compare results of simulations with two versions of the MIT Integrated Global System Model (IGSM): one with an ADOM and the second with a full three-dimensional OGCM. Our results show that, in spite of its inability to depict feedbacks associated with changes in the ocean circulation and its very simple parameterization of the ocean carbon cycle, the version of the IGSM with the ADOM reproduces important aspects of the climate response simulated by the version with the OGCM through the 20th and 21st centuries, and can be used to obtain probability distributions of changes in many important climate variables, such as surface air temperature and sea level, through the end of the 21st century. On the other hand, the ADOM cannot reproduce results for longer-term climate change, specifically those concerning details of the feedbacks on heat and carbon storage. Such studies will require the use of the OGCM, and the uncertainty analysis possible for those results will be correspondingly limited.
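The essential mechanism of an ADOM, vertical diffusion of a surface temperature anomaly into the ocean interior, can be sketched with a one-dimensional finite-difference integration; the single diffusivity parameter directly sets the rate of heat uptake, which is why it is so easy to vary. The grid, forcing, and diffusivity values below are illustrative and are not the IGSM configuration.

```python
import numpy as np

def heat_uptake(kv, depth=4000.0, nz=80, years=100):
    """Column-integrated anomaly (m·K) after `years` of a fixed 1 K surface
    anomaly, diffused downward with diffusivity kv (m^2/yr, assumed units)."""
    dz = depth / nz
    dt = 0.25 * dz * dz / kv              # explicit-scheme stability limit
    T = np.zeros(nz)                      # anomaly profile, K
    for _ in range(int(years / dt)):
        T_up = np.concatenate(([1.0], T[:-1]))   # fixed 1 K surface anomaly
        T_dn = np.concatenate((T[1:], [T[-1]]))  # no-flux bottom boundary
        T = T + kv * dt / dz**2 * (T_up - 2 * T + T_dn)
    return float(np.sum(T) * dz)

# Varying the uptake rate is a one-parameter change (values assumed):
low = heat_uptake(kv=1500.0)
high = heat_uptake(kv=6000.0)
print(f"century-scale uptake, low kv:  {low:.0f} m·K")
print(f"century-scale uptake, high kv: {high:.0f} m·K")
```

In a full OGCM the same quantity emerges from circulation, mixing schemes, and resolution together, which is why its range cannot be swept so freely.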

The experiment reported here tests the case of so-called exclusionary manipulation of emission permit markets, i.e., when a dominant firm — here a monopolist — increases its holding of permits in order to raise its rivals' costs and thereby gain more on a product market. Earlier studies have claimed that this type of market manipulation is likely to substantially reduce the social gains of permit trading and even result in negative gains. The experiment designed here parallels institutional and informational conditions likely to hold in real trade with carbon permits among electricity producers. Although the dominant firm withheld supply from the electricity market, the outcome seems to reject the theory of exclusionary manipulation. In later trading periods, closing prices on both markets, permit holdings and total electricity production are near competitive levels. Social gains of emissions trading are higher than in earlier studies.

Recent events have revived interest in explaining the long-run changes in the energy intensity of the U.S. economy. We use a KLEM dataset covering 35 industries over 39 years to decompose changes in the aggregate energy-GDP ratio into shifts in sectoral composition (structural change) and adjustments in the energy demand of individual industries (intensity change). We find that although structural change offsets a rise in sectoral energy intensities from 1960 until the mid-1970s, after 1980 the change in the industrial mix has little impact and the average sectoral energy intensity declines. We then use these data to econometrically estimate the influence on within-industry changes in energy intensity of price-induced substitution of variable inputs, shifts in the composition of capital, and embodied and disembodied technical progress. Our results suggest that innovations embodied in information technology and electrical equipment capital stocks played a key role in the long-run decline of energy intensity.
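A standard way to carry out such a structure-versus-intensity decomposition is a log-mean Divisia (LMDI) index, sketched below on invented two-sector data. This illustrates the generic technique and is not necessarily the paper's exact method, and the numbers are not the KLEM data.

```python
from math import log

def logmean(a, b):
    """Logarithmic mean; satisfies logmean(a, b) * log(a / b) == a - b."""
    return (a - b) / (log(a) - log(b)) if a != b else a

# Toy two-sector economy: (output share, energy intensity) in base/end year.
base = {"manufacturing": (0.4, 10.0), "services": (0.6, 2.0)}
end  = {"manufacturing": (0.3, 8.0),  "services": (0.7, 1.8)}

structure_effect = 0.0
intensity_effect = 0.0
for sector in base:
    s0, e0 = base[sector]
    sT, eT = end[sector]
    w = logmean(sT * eT, s0 * e0)         # log-mean weight (additive LMDI)
    structure_effect += w * log(sT / s0)  # shift in sectoral composition
    intensity_effect += w * log(eT / e0)  # within-sector efficiency change

agg0 = sum(s * e for s, e in base.values())
aggT = sum(s * e for s, e in end.values())
print(f"change in aggregate intensity: {aggT - agg0:+.3f}")
print(f"  structural change: {structure_effect:+.3f}")
print(f"  intensity change:  {intensity_effect:+.3f}")
```

The log-mean weights make the two effects sum exactly to the total change in aggregate intensity, with no residual term, which is the main reason this index form is popular for such decompositions.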

This paper reconciles conflicting explanations for the decline in U.S. energy intensity over the last 40 years of the 20th century. Decomposing changes in the energy-GDP ratio into shifts in sectoral composition and adjustments in the efficiency of energy use within individual industries reveals that while inter-industry structural change was the principal driver of the observed decline in aggregate energy intensity, intra-industry efficiency improvements played a more important role in the post-1980 period. Econometric results attribute this phenomenon to adjustments in quasi-fixed inputs (particularly vehicle stocks) and to disembodied autonomous technological progress, and show that price-induced substitution of variable inputs generated transitory energy savings, while innovation induced by energy prices had only a minor impact.
© 2007 Elsevier B.V.
