
As in Kyoto three years earlier, the climate negotiations at The Hague in November 2000 culminated, after weeks of acrimonious debate, in the need for a compromise between Europe and a U.S.-led coalition in the wee hours of the final day. This time, however, no deal could be brokered. The Kyoto Protocol, negotiated in December 1997, had papered over fundamental disagreements among and within the negotiating parties. In Kyoto, nations had specified only the overall national emissions-reduction targets, leaving the definitions vague and the mechanisms available to reach those targets unstipulated. In so doing, nations had not so much come to agreement as postponed disagreement. It was this vagueness that ultimately doomed the Sixth Conference of the Parties (COP-6) at The Hague.

© 2001 Helen Dwight Reid Educational Foundation

The climate system is a nonlinear system of very high dimensionality, so understanding and quantifying its predictability requires simplification of the system, for both conceptual and practical reasons. This has led the climate research community to use a range of models, from simple to complex (1-D energy balance models to 3-D general circulation models), to explore behavior ranging from global-mean to regional temperatures. Here, we employ two methods of "intermediate complexity," in both modeling and diagnostics, to assign quantitative answers, in terms of probabilities, to questions regarding uncertainty in climate predictions that are themselves conditional on uncertainties in the assumed future forcings. We employ the MIT 2D climate model (Sokolov and Stone, 1998, Clim. Dyn., 14, 291-303), which allows us to address uncertainties in three key properties of the system: the equilibrium climate sensitivity to a doubling of CO2, the rate of heat uptake by the deep ocean, and the net aerosol forcing. The combination of these three properties largely determines the modeled climate system's global-mean response to prescribed changes in greenhouse gas concentrations and aerosol loadings. To assess probabilistic ranges of these properties, and thus explore uncertainty in future climate change, we force the climate model with anthropogenic forcings (changes in greenhouse gas, aerosol, and ozone concentrations) and compare modeled temperature changes with observations to identify the regions of the model parameter space for which the simulations and observations are consistent (see Forest et al., 2000, Geophys. Res. Lett., 27(4), 569-572). To reduce the dimensionality of the data comparison, data are compared using optimal climate-change detection diagnostics, and the resulting goodness-of-fit statistics provide constraints on model properties. Here, we present two aspects of this problem that reflect its nonlinear nature.
First, we explore the relation between model forcings and response, and how it affects the interpretation of climate-change attribution results. Second, we present the results as an example of how model uncertainty translates into the uncertainty of future climate projections.
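The parameter-space comparison described above can be sketched as follows. This is a minimal illustration only: the toy response function, the observed trend, and its uncertainty are hypothetical placeholders standing in for the MIT 2D model runs and the optimal-detection statistics of the actual study.

```python
import numpy as np

# Hypothetical stand-in for the model's global-mean response as a function
# of two uncertain properties: climate sensitivity S (K) and an effective
# ocean heat-uptake coefficient Kv. A real study runs the climate model;
# a toy linear response keeps this sketch self-contained.
def model_response(S, Kv):
    return 0.3 * S - 0.1 * Kv  # toy warming trend (K/decade)

obs_trend = 0.15   # hypothetical observed trend (K/decade)
obs_sigma = 0.05   # hypothetical observational uncertainty

# Scan a grid of parameter values and keep the points whose simulated
# trend is consistent with the observation at roughly the 90% level
# (|z| < 1.645 for a single statistic).
consistent = []
for S in np.linspace(1.5, 6.0, 19):
    for Kv in np.linspace(0.0, 10.0, 21):
        z = (model_response(S, Kv) - obs_trend) / obs_sigma
        if abs(z) < 1.645:
            consistent.append((S, Kv))

print(f"{len(consistent)} of {19 * 21} grid points are consistent with the data")
```

The surviving region of parameter space, rather than any single best-fit point, is what carries the probabilistic information about the three climate-system properties.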

Limiting anthropogenic climate change over the next century will require controlling multiple substances. The Kyoto Protocol structure constrains the major greenhouse gases and allows trading among them, but other possible regime architectures may be more efficient. Tradeoffs between the market efficiency of all-inclusive policies and the benefits of policies targeted to the unique characteristics of each substance are investigated with an integrated assessment approach, combining the MIT Emissions Prediction and Policy Analysis model, the Integrated Global System Model, and political analysis methods.

The thesis explores three cases. The first case addresses stabilization, the ultimate objective of Article 2 of the UN Framework Convention on Climate Change. We highlight the implications of imprecision in the definition of stabilization, the importance of non-CO2 substances, and the problems of excessive focus on long-term targets. The results of the stabilization analysis suggest that methane reduction will be especially valuable because of its importance in low-cost mitigation policies that are effective on timescales up to three centuries. Therefore, in the second case we examine methane, demonstrating that methane constraints alone can account for a 15% reduction in temperature rise over the 21st century. In contrast to conventional wisdom, we show that Global Warming Potential-based trading between methane reductions and fossil CO2 reductions is flawed because of the differences in their atmospheric characteristics, the uncertainty in methane inventories, the negative interactions of CO2 constraints with underlying taxes, and the higher political barriers to constraining CO2. The third case examines the benefits of increased policy coordination between air pollution constraints and climate policies. We calculate the direct effects of air pollution constraints to be less than 8% of temperature rise over the century, but ancillary reductions of GHGs lead to an additional 17% decrease. Furthermore, current policies have not succeeded in coordinating air pollution constraints and CO2 constraints, potentially leading to a 20% welfare cost penalty resulting from separate implementation. Our results lead us to recommend enacting near-term multinational CH4 constraints independently from CO2 policies, as well as supporting air pollution policies in developing nations that include an emphasis on climate-friendly projects.
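The Global Warming Potential accounting that the thesis critiques can be made concrete with a small worked example. The GWP-100 value of 21 for CH4 is the IPCC Second Assessment Report figure used under the Kyoto Protocol; the reduction quantity below is hypothetical.

```python
# Under Kyoto-style accounting, one tonne of CH4 trades as its 100-year
# Global Warming Potential in tonnes of CO2-equivalent (GWP-100 = 21 for
# CH4 in the IPCC Second Assessment Report values used by the Protocol).
GWP100_CH4 = 21

def co2_equivalent(ch4_tonnes):
    """Tonnes of CO2-equivalent credited for a CH4 reduction."""
    return ch4_tonnes * GWP100_CH4

# A hypothetical 1000-tonne CH4 reduction earns the same credit as a
# 21,000-tonne CO2 reduction, even though CH4's short atmospheric
# lifetime (roughly a decade) and CO2's (centuries) make their
# temperature effects diverge over time -- the core of the critique of
# GWP-based trading above.
print(co2_equivalent(1000))  # 21000
```

The single fixed exchange rate is exactly what collapses the two gases' very different decay timescales into one number, which is why the thesis argues the resulting trades are flawed.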

We identify three major areas of ignorance that limit predictability in current ocean GCMs. One is the very crude representation of subgrid-scale mixing processes: these are parameterized with coefficients whose values and variations in space and time are poorly known. A second problem derives from the fact that ocean models generally contain multiple equilibria and bifurcations, with no agreement as to where the current ocean sits with respect to those bifurcations. A third problem arises because ocean circulations are highly nonlinear but only weakly dissipative, and therefore potentially chaotic. The few studies that have looked at this kind of behavior have not answered fundamental questions, such as what the major sources of error growth in model projections are, and how large the chaotic behavior is relative to realistic changes in climate forcings. Advances in computers will help alleviate some of these problems, for example by making it more practical to explore to what extent the evolution of the oceans is chaotic. However, models will have to rely on parameterizations of key small-scale processes such as diapycnal mixing for a long time. Making more immediate progress here requires the development of physically based prognostic parameterizations and coupling the mixing to its energy sources. Another possibly fruitful area of investigation is the use of paleoclimate data on changes in the ocean circulation to constrain more tightly its stability characteristics.

(© 2004 International Union of Geodesy and Geophysics and the American Geophysical Union)

Uncertainty in future climate arises from uncertainty in future emissions of greenhouse gases and from uncertainty in the characteristics of the climate system that define its response to external forcing. These characteristics include the climate sensitivity, the strength of aerosol forcing, and the rate of ocean heat uptake. Evaluating uncertainty in future climate projections requires large ensembles of simulations. At present, Earth system models of intermediate complexity seem to be the best tool for studies of this kind, owing to their computational efficiency and their ability to vary the above-mentioned characteristics over wide ranges.

Here we evaluate the uncertainty in climate response to prescribed changes in greenhouse gas concentrations using the MIT Integrated Global System Model (IGSM). We carried out three 250-member ensembles for SRES scenarios B1, A1B, and A2. Probability distributions for the climate sensitivity, the strength of aerosol forcing, and the rate of ocean heat uptake were obtained by comparing the 20th-century climate simulated by the IGSM with available observations.
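The ensemble procedure can be sketched as a Monte Carlo draw over the three uncertain properties. The distributions and the toy warming response below are illustrative placeholders, not the IGSM's actual posterior distributions or model physics.

```python
import random

# Sketch of a 250-member Monte Carlo ensemble over the three uncertain
# climate-system properties. All distributions and the response function
# are hypothetical stand-ins.
random.seed(1)

def sample_member():
    S = random.lognormvariate(1.0, 0.3)   # climate sensitivity (K)
    Kv = random.uniform(0.5, 8.0)         # ocean heat-uptake coefficient
    Fa = random.uniform(-1.5, -0.1)       # net aerosol forcing (W/m^2)
    # Toy 21st-century warming: rises with sensitivity, damped by ocean
    # heat uptake, offset by (negative) aerosol forcing.
    return 0.8 * S - 0.05 * Kv + 0.5 * Fa

ensemble = sorted(sample_member() for _ in range(250))

# The spread of the sorted ensemble (e.g. its 5th-95th percentile range)
# is what a probabilistic projection reports for a given scenario.
lo, hi = ensemble[12], ensemble[237]
print(f"5-95% range of projected warming: {lo:.2f} to {hi:.2f} K")
```

In the actual study the parameter triples are weighted by their consistency with 20th-century observations, so the ensemble spread reflects observationally constrained, rather than arbitrary, parameter uncertainty.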

With global concern over climate change impacts, developing countries receive special attention because of their susceptibility. In this paper, change and variability in climate and land use, and farmers' perception of, adaptation to, and response to change, are examined in the Danangou watershed on the Chinese Loess Plateau. The first focus is on how climate data recorded at meteorological stations have evolved in recent decades, and how farmers perceived these changes. Further, we examine how farmers respond and adapt to climate variability and what the resulting impact on land use is. Finally, other factors causing change in land use are considered. Local instrumental precipitation and temperature data and interview data from farmers were used. The instrumental data show that the climate is getting warmer and drier, the latter despite large interannual variability; the trend is seen at both the local and the regional level. Farmers' perception of climatic variability corresponds well with the data record. During the last 20 years, farmers have become less dependent on agriculture by adopting more diversified livelihoods, an adaptation that makes them less vulnerable to climate variability. Government policies and reforms were found to have a stronger influence on land use than climate variability. Small-scale farmers should therefore be considered adaptive to changing situations, whether planned or not consciously planned.

© 2005 Springer Netherlands 

The role of undisturbed tropical land ecosystems in the global carbon budget is not well understood. It has been suggested that interannual climate variability can affect the capacity of these ecosystems to store carbon in the short term. In this paper, we use a transient version of the Terrestrial Ecosystem Model (TEM) to estimate annual carbon storage in undisturbed Amazonian ecosystems during the period 1980-94, and to understand the underlying causes of the year-to-year variations in net carbon storage for this region.

We estimate that the total carbon storage in the undisturbed ecosystems of the Amazon Basin in 1980 was 127.6 Pg C, with about 94.3 Pg C in vegetation and 33.3 Pg C in the reactive pool of soil organic carbon. About 83% of the total carbon storage occurred in tropical evergreen forests. Based on our model's results, we estimate that, over the past 15 years, the total carbon storage has increased by 3.1 Pg C (+2%), with a 1.9-Pg C (+2%) increase in vegetation carbon and a 1.2-Pg C (+4%) increase in reactive soil organic carbon. The modelled results indicate that the largest relative changes in net carbon storage have occurred in tropical deciduous forests, but that the largest absolute changes in net carbon storage have occurred in the moist and wet forests of the Basin.
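The budget figures quoted above are internally consistent; a quick arithmetic check, using only the stock and increment values reported in the text:

```python
# Reported 1980 carbon stocks (Pg C) and 15-year increases from the
# TEM results quoted in the text.
veg_1980, soil_1980 = 94.3, 33.3
veg_gain, soil_gain = 1.9, 1.2

total_1980 = veg_1980 + soil_1980   # should recover 127.6 Pg C
total_gain = veg_gain + soil_gain   # should recover 3.1 Pg C

# Percentage increases, matching the rounded values in the text.
print(round(total_1980, 1))                  # 127.6
print(round(100 * veg_gain / veg_1980))      # 2  (% vegetation increase)
print(round(100 * soil_gain / soil_1980))    # 4  (% soil increase)
print(round(100 * total_gain / total_1980))  # 2  (% total increase)
```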

Our results show that the strength of interannual variations in net carbon storage of undisturbed ecosystems in the Amazon Basin varies from a carbon source of 0.2 Pg C/year to a carbon sink of 0.7 Pg C/year. Precipitation, especially the amount received during the drier months, appears to be a major controller of annual net carbon storage in the Amazon Basin. Our analysis indicates further that changes in precipitation combine with changes in temperature to affect net carbon storage through influencing soil moisture and nutrient availability. 

On average, our results suggest that the undisturbed Amazonian ecosystems accumulated 0.2 Pg C/year as a result of climate variability and increasing atmospheric CO2 over the study period. This amount is large enough to have compensated for most of the carbon losses associated with tropical deforestation in the Amazon during the same period.

Comparisons with empirical data indicate that climate variability and CO2 fertilization explain most of the variation in net carbon storage for the undisturbed ecosystems. Our analyses suggest that assessment of the regional carbon budget in the tropics should be made over at least one cycle of El Niño-Southern Oscillation because of interannual climate variability. Our analyses also suggest that proper scaling of the site-specific and subannual measurements of carbon fluxes to produce Basin-wide flux estimates must take into account seasonal and spatial variations in net carbon storage.

Copyright Blackwell Publishing

A thorough analysis of ozone transport was carried out using the Transformed Eulerian Mean (TEM) tracer transport equation and the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA-40). In this budget analysis, the chemical net production term, calculated as the residual of the other terms, displays the correct features of a chemical source and sink term, including location and seasonality, and agrees well in magnitude with other methods of calculating ozone loss rates. This study provides further insight into the role of eddy ozone transport and underlines its fundamental role in the recovery of the ozone hole during spring. The trend analysis reveals that the intensification of the ozone hole over the 1980-2001 period is not directly related to the trend in chemical losses, but rather to the balance between the trends in chemical losses and in transport: in the Southern Hemisphere from October to December, the large increase in the chemical destruction of ozone is balanced by an equally large trend in the eddy transport, together with a small increase in the mean transport. The increase in the eddy transport is characterized by more poleward ozone eddy flux by transient waves in the midlatitudes and by stationary waves in the polar region, owing primarily to the storm tracks in the midlatitudes and to the asymmetric Antarctic topography and ice-sea heating contrasts near the pole. Overall, this study makes clear that without an increase in the eddy ozone transport over the 1980-2001 period, the ozone hole over Antarctica would be drastically more severe. This underlines the need for careful diagnostics of the eddy ozone transport in modeling studies of long-term changes in stratospheric ozone.
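The residual method at the heart of the budget analysis can be sketched in a few lines. The arrays below are synthetic placeholders for zonally averaged monthly budget terms, not ERA-40 quantities; the point is only the bookkeeping by which chemistry is recovered as the budget residual.

```python
import numpy as np

# In a TEM budget analysis, the ozone tendency decomposes into advection
# by the mean (residual) circulation, eddy flux divergence, and chemical
# net production. When the transport terms are computed from reanalysis
# winds, chemistry is recovered as the residual of the budget.
rng = np.random.default_rng(0)
tendency = rng.normal(size=12)        # d(ozone)/dt, one value per month
mean_transport = rng.normal(size=12)  # mean-circulation advection term
eddy_transport = rng.normal(size=12)  # eddy flux divergence term

# Chemical net production as the budget residual:
chemistry = tendency - mean_transport - eddy_transport

# The reconstructed budget closes by construction:
closure = tendency - (mean_transport + eddy_transport + chemistry)
print(np.allclose(closure, 0.0))  # True
```

Because the residual absorbs every error in the explicitly computed terms, the abstract's validation step, checking that the residual shows the location, seasonality, and magnitude expected of ozone chemistry, is what gives the method its credibility.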

