
How much will people travel in the future? Which modes of transport will they use? Where will traffic be most intense? The answers are critical for planning infrastructures and for assessing the consequences of mobility. They will help societies anticipate environmental problems such as regional acid rain and global warming, which are partially caused by transport emissions. These questions also lie at the center of efforts to estimate the future size of markets for transportation hardware: aircraft, automobiles, buses and trains.

In our research, we have tried to answer these questions for 11 geographic regions specifically and more generally for the world. One of us (Schafer) compiled historical statistics for all four of the principal motorized modes of transportation: trains, buses, automobiles and high-speed transport (aircraft and high-speed trains, which we place in a single category because both could eventually offer mobility at comparable quality and speed). Together we used that unique database to compose a scenario for the future volume of passenger travel, as well as the relative prevalence of different forms of transportation, through the year 2050. Our perspective was both long term and large scale because transport infrastructures evolve slowly, and the effects of mobility are increasingly global. The answers to those fundamental questions, we found, depend largely on only a few factors.

 

Anthropogenic emissions of greenhouse gases are very likely to have already changed the Earth’s climate, and will continue to change it for centuries if no action is taken. Nuclear power, a nearly carbon-free source of electricity, could contribute significantly to climate change mitigation by replacing conventional fossil-fueled electricity generation technologies. To examine the potential role of nuclear power, an advanced nuclear technology representing Generation III reactors is introduced into the Emissions Prediction and Policy Analysis (EPPA) model, an economic model that projects greenhouse gas and other air pollutant emissions as well as climate policy costs. The model is then used to study how the cost and availability of nuclear power affect the economy and the environment at the global scale.

A literature review shows that estimates of nuclear power costs vary widely, because of differences in both calculation methods and cost parameters. Based on a sensitivity analysis, the most important parameters are the discount rate, the overnight cost, the capacity factor and the economic lifetime. The methodological differences affect not only the absolute power costs, but also the relative costs among electricity generation technologies. Acknowledging this uncertainty, a levelized cost model leads to bus-bar cost scenarios ranging from $35/MWh to $60/MWh.
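A minimal sketch of how these four parameters combine into a levelized bus-bar cost is given below. All input values are illustrative assumptions for a generic nuclear plant, not the cost data used in the study.

```python
# Minimal levelized bus-bar cost sketch. Parameter values are illustrative
# assumptions, not the thesis's inputs.

def levelized_cost(overnight_cost, fixed_om, variable_om, fuel_cost,
                   capacity_factor, discount_rate, lifetime_years):
    """Return a levelized cost of electricity in $/MWh.

    overnight_cost  : capital cost, $/kW
    fixed_om        : fixed O&M, $/kW-yr
    variable_om     : variable O&M, $/MWh
    fuel_cost       : fuel cost, $/MWh
    capacity_factor : fraction of the year at full power
    discount_rate   : real discount rate (fraction)
    lifetime_years  : economic lifetime over which capital is recovered
    """
    r, n = discount_rate, lifetime_years
    # Capital recovery factor spreads the overnight cost over the lifetime.
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)
    annual_mwh_per_kw = 8760 * capacity_factor / 1000.0  # MWh generated per kW of capacity
    capital = overnight_cost * crf / annual_mwh_per_kw
    fixed = fixed_om / annual_mwh_per_kw
    return capital + fixed + variable_om + fuel_cost

# Illustrative assumptions: $2000/kW overnight, 85% capacity factor,
# 8% discount rate, 40-year economic lifetime.
print(levelized_cost(overnight_cost=2000, fixed_om=60, variable_om=0.5,
                     fuel_cost=5.0, capacity_factor=0.85,
                     discount_rate=0.08, lifetime_years=40))  # ~36 $/MWh
```

With these assumed inputs the result lands near the low end of the $35/MWh to $60/MWh range quoted above; raising the discount rate or overnight cost, or lowering the capacity factor or lifetime, pushes it toward the high end.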

Cap-and-trade climate policies strengthen the development of nuclear power in the high nuclear cost scenarios. In low-cost cases, nuclear power grows significantly even without climate policies, which have little further influence on the market share of nuclear power. Lower costs of nuclear power decrease the costs of climate policies: the consumption NPV loss due to a 550 ppm climate policy is reduced by 36% if nuclear costs are reduced from the highest to the lowest scenario. Nuclear power development at the largest scale projected would involve the depletion of currently known conventional and phosphate uranium deposits.

Environmental benefits of the development of competitive nuclear power include a reduction in greenhouse gas emissions, even if no climate policy is implemented. For example, CO2 emissions decrease by 32% in 2050 in the lowest nuclear cost scenario. Conventional pollutant emissions are also reduced: NOx and SO2 emissions decrease by 14% and 24%, respectively, in 2050.

The economic value of the political decision to keep the nuclear option open is estimated to range between $1,300 billion and $17,600 billion in terms of avoided consumption NPV loss, depending on the climate policy regime. These benefits should eventually be weighed against the proliferation, waste and safety issues associated with further development of nuclear power.

The role of sinks in climate policy has been controversial and confused. The major supporters of including sinks in an international climate policy under the Kyoto Protocol were the Umbrella Group of countries, led by the USA and including Australia, Canada, Japan and Russia. This group also pushed strongly for international emissions trading, envisioning that countries would distribute emissions allowances to private-sector emitters, who would then be required to hold an allowance for each tonne of greenhouse gas (GHG) they emitted. With emissions trading, emitters who found they could cheaply reduce their emissions might have allowances to sell, while those who could not easily reduce their emissions could purchase allowances to cover them. With international trading, these allowances could be exchanged among holders anywhere among the parties subject to an emissions cap.
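The cost logic of such trading can be shown with a toy two-emitter example; all figures are hypothetical.

```python
# Toy two-emitter illustration of allowance trading. Suppose each emitter must
# cover 10 t of excess emissions; A can abate at $5/t, B at $50/t. Hypothetical numbers.

required = {"A": 10, "B": 10}          # tonnes each emitter must cover
abatement_cost = {"A": 5, "B": 50}     # $/tonne abated in-house

# Without trading: each emitter abates its own excess.
cost_no_trade = sum(required[e] * abatement_cost[e] for e in required)

# With trading: the cheap abater (A) abates the full 20 t and sells 10 allowances
# to B at any price between $5 and $50; total abatement is unchanged, total cost falls.
total_required = sum(required.values())
cost_with_trade = total_required * min(abatement_cost.values())

print(cost_no_trade, cost_with_trade)  # 550 vs. 100
```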

In principle, accounting and crediting sinks under a cap-and-trade system should be straightforward: (i) measure the stock of carbon in an initial year; (ii) measure the stock of carbon in subsequent years; (iii) if the carbon stock rises from one period to the next, the increased sequestration is added to the allowances or cap on emissions of the country or entity, and if the stock declines, the net release to the atmosphere is subtracted from the allowances or cap. This simplicity has eluded designers of carbon policy. For various reasons, a desire has developed to identify specific types of sink-enhancement actions that may or may not be included under agreed caps, as well as an unwillingness to bring the entire terrestrial biosphere carbon stock within a policy target. The result has been thousands of pages of attempts to define a forest, the difference between afforestation and reforestation, what constitutes 'management', whether a change in carbon stocks is due to human action, and spatial and temporal leakage. Most of this would be irrelevant if a simple accounting framework and broad coverage of land-use emissions and uptake were adopted in the design of carbon policy. How and why did we get from a simple and straightforward idea to the complex design and controversial issues now discussed as part of sinks policy? Are there good reasons why the problem is not as simple as it at first seems? Is it possible (or desirable) to now work towards fairly simple mechanisms for sinks in a carbon policy? These are the questions we hope to address in this chapter.
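The stock-change accounting in steps (i) to (iii) above amounts to nothing more than crediting or debiting a cap by the measured change in carbon stock, as in this minimal sketch; the country and numbers are hypothetical.

```python
# Minimal sketch of the simple stock-change accounting described above:
# credit or debit a country's allowance cap by the change in its measured
# terrestrial carbon stock between reporting periods. Names and numbers
# are hypothetical.

def adjusted_cap(base_cap_mtc, stock_initial_mtc, stock_final_mtc):
    """Return the emissions cap after crediting/debiting the stock change.

    base_cap_mtc      : agreed emissions cap, MtC per period
    stock_initial_mtc : measured terrestrial carbon stock at the start, MtC
    stock_final_mtc   : measured terrestrial carbon stock at the end, MtC
    """
    stock_change = stock_final_mtc - stock_initial_mtc
    # A rising stock (net sequestration) adds allowances; a falling stock
    # (net release to the atmosphere) subtracts them.
    return base_cap_mtc + stock_change

# Hypothetical country: cap of 100 MtC, forests gain 5 MtC over the period.
print(adjusted_cap(100.0, 1500.0, 1505.0))  # -> 105.0
```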

We first review existing policies, with attention to the issues that arise with regard to terrestrial sinks and how sinks are to be included. We then show some important aspects of managing sinks that arise because sinks depend on environmental conditions largely outside the control of the landowner. Next we work through a very simple example of two hypothetical countries and show the effects of including sinks. Finally, we address several issues that have arisen as countries have negotiated the inclusion of sinks in GHG mitigation policies. Some of these are important and real issues that must be addressed if climate policy design is to create incentives for efficiently managing carbon in the terrestrial biosphere. Many of them, however, arise from, or in response to, the tangled policy approaches we have designed for sinks enhancement, and attempts to straighten them out seem only to tangle the issue further.

© CAB International 2007

We investigate the economics of coal-to-liquid (CTL) conversion, a polygeneration technology that produces liquid fuels, chemicals, and electricity via coal gasification and the Fischer–Tropsch process. CTL is more expensive than extant technologies when producing the same bundle of output. In addition, the significant carbon footprint of CTL may raise environmental concerns. However, as petroleum prices rise, this technology becomes more attractive, especially in coal-abundant countries such as the U.S. and China. Furthermore, including a carbon capture and storage (CCS) option could greatly reduce its CO2 emissions, at an added cost. To assess the prospects for CTL, we incorporate engineering data for CTL from the U.S. Department of Energy (DOE) into the MIT Emissions Prediction and Policy Analysis (EPPA) model, a computable general equilibrium model of the global economy. Based on DOE's plant design, which focuses mainly on liquid fuels production, we find that without climate policy, CTL has the potential to account for up to a third of the global liquid fuels supply by 2050 and at that level would supply about 4.6% of global electricity demand. A tight global climate policy, on the other hand, severely limits the potential role of CTL even with the CCS option, especially if low-carbon biofuels are available. Under such a policy, world demand for petroleum products is greatly reduced, depletion of conventional petroleum is slowed, and so the price increase in crude oil is smaller, making CTL much less competitive.

© Elsevier Ltd.


The Clean Development Mechanism (CDM) has evolved at a surprising speed since 2003 and is considered to have made positive contributions to the development of greenhouse-gas-reducing projects in developing countries. Given its historical significance as the first effort of its kind and its current success, a thorough evaluation of the CDM system and its effectiveness is of critical importance. Against this backdrop, this study closely investigates each stage of the CDM project cycle, from development and registration of projects to issuance of certified emission reductions, and identifies influential factors for successful CDM implementation. For the analysis, we performed an extensive quantitative analysis, augmented by a descriptive study, based on information on approximately 5,000 CDM projects.

Our findings suggest that the development of CDM projects is stimulated by favorable economic, social and technical environments in host countries as well as by supportive CDM administration. This explains why projects are currently concentrated in certain countries such as China and India. Once projects are developed and submitted for validation, their success at the subsequent stages of the project cycle, registration and Certified Emission Reduction (CER) issuance, is influenced by their type and by the choice of Designated Operational Entities and project consultants. In particular, significant differences in registration success exist across project types, which calls for special attention from both the CDM authority and project participants to high-risk project types such as energy efficiency, fossil fuel switching and biomass projects. Lastly, we found that the performance of projects is affected by highly project-specific conditions. For many of the most poorly performing projects, failure is attributable to technical and operational problems at the initial stage of project implementation, which highlights the importance of well-prepared Project Design Documents (PDDs). Based on these findings, the thesis concludes with policy recommendations to enhance the capacities and improve the performance of the major players under the CDM.
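As an illustration of the kind of test behind the project-type finding, a chi-square test of independence between project type and registration outcome could look like the sketch below; the counts are invented for illustration, not the thesis's data.

```python
# Hypothetical chi-square test of independence between project type and
# registration outcome. The counts below are invented, not the thesis's data.

registered     = {"hydro": 900, "wind": 700, "energy efficiency": 120, "biomass": 300}
not_registered = {"hydro": 100, "wind": 100, "energy efficiency": 110, "biomass": 150}

types = list(registered)
row_reg = sum(registered.values())
row_not = sum(not_registered.values())
total = row_reg + row_not

chi2 = 0.0
for t in types:
    col = registered[t] + not_registered[t]
    for observed, row_total in ((registered[t], row_reg), (not_registered[t], row_not)):
        expected = row_total * col / total          # expected count under independence
        chi2 += (observed - expected) ** 2 / expected

dof = len(types) - 1                                # (2 - 1) * (k - 1) for a 2-by-k table
print(chi2, dof)  # compare chi2 against the critical value for `dof` degrees of freedom
```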

This thesis addresses the role of aerosols in the troposphere from three perspectives: (1) the radiative forcing by aerosols; (2) responses of meteorological and chemical fields to the aerosol radiative forcing; (3) uncertainty analysis of the radiative forcing by anthropogenic sulfate aerosols.

The sensitivity of the direct radiative forcing by anthropogenic sulfate aerosols to their optical properties, concentrations and the ambient humidity has been investigated with an explicit radiative transfer model and available aerosol and meteorological data. Results indicate that aerosol concentrations and optical properties contribute about equally to the factor-of-three difference among estimates of this forcing in the literature. The use of constant humidity scaling factors for aerosol optical properties is a good approximation, provided that the factors at visible wavelengths are kept the same as the observed ones. Neglecting the humidity effect on the aerosol single-scattering albedo and asymmetry factor leads to only about a 10% overestimate of the result.

The global distribution of radiative flux changes at the top of the atmosphere and at the surface due to the climatological aerosols of d'Almeida et al. [1991] is calculated with the radiative transfer scheme of Fu and Liou [1992]. At the top of the atmosphere, aerosols decrease the net downward short-wave radiation over most of the globe. Increases occur mainly in the Saharan desert region, the downwind equatorial east Atlantic, and part of Australia. At the surface the decrease of the net downward short-wave radiation is more than a factor of 2 larger than that at the top of the atmosphere. Decreases of the net outgoing long-wave radiation are about an order of magnitude smaller than the changes in short-wave radiation. Changes in short- and long-wave radiation are characterized by large spatial variations and gradients. The annual global mean radiative forcing owing to aerosols increases by about a factor of 2 when the humidity effect on aerosol optical properties is included. Replacing the boundary layer heights prescribed in the aerosol data set with the actual values decreases the result by a similar factor.

The mechanism and magnitude of meteorological and chemical responses to aerosol radiative forcing are studied in three different models: a one-dimensional radiative-convective equilibrium model, a mesoscale meteorological model, and a photochemical air quality model. The simulations in the one-dimensional radiative-convective model show that the prescribed time-invariant short-wave heating in the lower troposphere causes an increase of the long-wave heating in the upper troposphere and a decrease of the convective heating throughout the troposphere. As a result, the model atmosphere at the new equilibrium is cooler and drier. Analysis of the surface energy balance in the model shows that the equilibrium temperature change depends not only on the external forcing but also on the internal feedbacks. A negative feedback between the surface temperature and the sensible and latent heat fluxes and a positive feedback between the surface temperature and atmospheric water vapor are evident. The negative feedback coefficient increases by a factor of 4 to 5 by the time equilibrium is reached. Meanwhile, the positive feedback coefficient also increases to its equilibrium value, which is comparable in magnitude to the negative feedback. As a result, the initial sensitivity of the surface temperature change to aerosol radiative forcing is about a factor of 4 to 5 smaller than the equilibrium sensitivity. The transient surface temperature change approaches the equilibrium value with a characteristic time scale that depends on both the feedback factors and surface properties.
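The dependence of the transient response on the feedbacks and on surface properties can be illustrated with a zero-dimensional energy-balance sketch, which is not one of the three models used here: with an effective surface heat capacity C and a net stabilizing feedback parameter lambda, the temperature change relaxes toward F/lambda with characteristic time tau = C/lambda. All parameter values below are illustrative assumptions.

```python
# Zero-dimensional sketch of the transient behaviour described above: a surface
# layer with heat capacity C relaxes toward a new equilibrium under a constant
# forcing F, opposed by a net feedback lambda (the stabilizing feedback net of
# the amplifying water-vapor feedback). Values are illustrative assumptions.

import math

C = 2.0e8     # effective surface heat capacity, J m-2 K-1 (~50 m ocean mixed layer)
F = -3.0      # imposed surface forcing, W m-2 (negative = aerosol cooling)
lam = 1.5     # net feedback parameter, W m-2 K-1

tau = C / lam         # characteristic relaxation time scale, seconds
dT_eq = F / lam       # equilibrium surface temperature change, K

def dT(t_seconds):
    """Transient temperature change: exponential approach to equilibrium."""
    return dT_eq * (1.0 - math.exp(-t_seconds / tau))

year = 365.25 * 24 * 3600
print(tau / year, dT_eq, dT(1 * year))  # ~4.2 yr, -2.0 K, ~-0.42 K after one year
```

The sketch makes the qualitative point above explicit: early in the transient the response per unit forcing is much smaller than the equilibrium sensitivity F/lambda, and the gap closes on the time scale tau set jointly by the feedbacks and the surface heat capacity.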

The mesoscale model is set up over the Southern California Air Quality Study (SCAQS) region, and model predictions without aerosols are validated against the SCAQS measurements. Four aerosol types, with low- and high-level concentrations from the climatological aerosol data set of d'Almeida et al. [1991], are introduced into the mesoscale model. The single-column simulations show that, in response to large aerosol-induced decreases in incoming solar radiation, surface temperature changes are smaller than those in the one-dimensional radiative-convective model. This is attributable to the difference between the initial and equilibrium sensitivities of the surface temperature change to aerosol radiative forcing. The three-dimensional mesoscale simulations indicate that the domain-averaged relative changes are -30 to 10% for boundary layer height, -30 to 40% for wind speed, and -20 to 0% for net downward short-wave radiation. Temperature changes vary from -1.5 to 0.5 °C, and wind direction changes vary within 50 degrees.

Simulations with the photochemical air quality model, using either uniformly perturbed or aerosol-induced meteorological fields, show that the chemical fields are most sensitive to changes in temperature, wind, and boundary layer height. The domain-averaged relative changes of ground-level concentrations of chemical species reach up to 10% for the prescribed low-level aerosol loading. Shifting the aerosol loading from the low level to the high level appears to roughly double the daytime concentration response. Changes at specific sites can be much larger than the domain-averaged changes. The spatial pattern of the response of the chemical fields bears little or no resemblance to that of any single meteorological perturbation.

The uncertainty of the direct and indirect radiative forcing by anthropogenic sulfate aerosols has been addressed with a new uncertainty analysis technique. The probability density function of the direct radiative forcing under the influence of 9 uncertain parameters is calculated for four different models. The mean value of the result varies from 9.3 to 1.3 W/m2 with a 95% confidence range of 0.1 to 4.2 W/m2. Variance analysis identifies the sulfate yield and lifetime as the two primary uncertain parameters. The probability density function of the indirect radiative forcing has been evaluated in 5 different models with respect to 20 uncertain parameters. The mean value of the indirect forcing varies from 1.2 to 1.7 W/m2 with a 95% confidence range of 0.1 to 5.2 W/m2. Variance analysis ranks aerosol size distribution as the leading contributor to the model uncertainty.
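The general idea of such an analysis can be illustrated with a simple Monte Carlo sketch that propagates a few uncertain parameters through a toy forcing model to obtain a probability density and confidence range. The thesis applies its own, more efficient uncertainty analysis technique to far more detailed models; the stand-in forcing model, parameter ranges and distributions below are assumptions for illustration only (the sulfate yield and lifetime appear because the text identifies them as leading uncertain parameters).

```python
# Illustrative Monte Carlo propagation of parameter uncertainty to a forcing PDF.
# The toy model, distributions and ranges are assumptions, not the thesis's inputs.

import random

random.seed(0)

def toy_forcing(sulfate_yield, lifetime_days, scattering_eff):
    """Stand-in forcing model: forcing scales with sulfate burden and optical efficiency."""
    burden = sulfate_yield * lifetime_days       # arbitrary units
    return -0.001 * burden * scattering_eff      # W m-2 (negative = cooling)

samples = []
for _ in range(10_000):
    y = random.lognormvariate(0.0, 0.3)          # relative sulfate yield from SO2
    tau = random.uniform(3.0, 7.0)               # sulfate lifetime, days
    k = random.uniform(200.0, 400.0)             # relative scattering efficiency
    samples.append(toy_forcing(y, tau, k))

samples.sort()
mean = sum(samples) / len(samples)
lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
print(round(mean, 2), (round(lo, 2), round(hi, 2)))  # mean and 95% range of the forcing PDF
```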

International trade is important for agriculture, and agriculture is changing rapidly. Governmental interventions in domestic agriculture around the world distort trade in significant ways; however, they fail to insulate regions from world market forces. The phenomenon of human-induced climate change is global. As research moves from understanding the potential impacts of climate and environmental change on agriculture to identifying adaptation strategies, successful strategies will need to encompass such forces as economic and agricultural market change. Issues of trade, environmental change and growth, and the adaptation of agriculture to these forces are the focus of this chapter.

(Chapter available by request)
 

First steps toward a broad climate agreement, such as the Kyoto Protocol, have focused on less than global geographic coverage. We consider instead a policy that is less comprehensive in terms of greenhouse gases (GHGs), including only the non-CO2 GHGs, but is geographically comprehensive. Abating non-CO2 GHGs may be seen as less of a threat to economic development, and therefore it may be possible to involve developing countries in such a policy even though they have resisted limits on CO2 emissions. The policy we consider involves a GHG price of about $15 per ton carbon-equivalent (tce) levied only on the non-CO2 GHGs and held at that level through the century. We estimate that such a policy would reduce the global mean surface temperature in 2100 by about 0.55 °C; if only methane were covered, that alone would achieve a reduction of 0.3 to 0.4 °C. We estimate that the Kyoto Protocol in its current form would achieve a 0.25 °C reduction in 2100 if Parties to it maintained it as is through the century. Furthermore, we estimate the costs of the non-CO2 policies to be a small fraction of those of the Kyoto policy. Whether as a next step to expand the Kyoto Protocol, or as a separate initiative running parallel to it, the world could well make substantial progress on limiting climate change by pursuing an agreement to abate the low-cost non-CO2 GHGs. The results suggest that it would be useful to proceed with global abatement of non-CO2 GHGs so that lack of progress in negotiations to limit CO2 does not allow these abatement opportunities to slip away.
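For orientation, a price quoted per ton of carbon-equivalent maps onto a price per tonne of each non-CO2 gas through its global warming potential (GWP). The sketch below shows the arithmetic; the 100-year GWP values are illustrative assumptions, not necessarily those used in the analysis.

```python
# Convert a $/tce (ton carbon-equivalent) GHG price into a price per tonne of a
# non-CO2 gas via its 100-year global warming potential. GWP values below are
# illustrative assumptions, not necessarily those used in the study.

PRICE_PER_TCE = 15.0          # $ per ton carbon-equivalent
C_PER_CO2 = 12.0 / 44.0       # tons of carbon per ton of CO2

GWP = {"CH4": 21.0, "N2O": 310.0}   # tonnes CO2-equivalent per tonne of gas (assumed)

for gas, gwp in GWP.items():
    price_per_tonne = PRICE_PER_TCE * gwp * C_PER_CO2
    print(gas, round(price_per_tonne, 1))   # e.g. CH4 -> ~85.9 $/t under these assumptions
```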

A CGE model is used in an integrated modeling framework to examine the economic and climate impacts of various low-cost ($15/ton carbon-equivalent) non-CO2 GHG policies. We estimate that global mean surface temperature in 2100 could be decreased by 0.57 °C with a non-CO2 policy, with more than half of the reduction due to methane alone. In comparison, Kyoto maintained in its current form for the remainder of the century would yield only a 0.30 °C temperature reduction, at a significantly higher cost (as measured by the net present value of consumption over the century). A further benefit of methane reduction is a 5% decrease in global mean tropospheric ozone concentrations.
