Extreme precipitation events pose a significant threat to public safety, natural and managed resources, and the functioning of society. Changes in such high-impact, low-probability events have profound implications for decision-making and for the preparation and costs of mitigation and adaptation efforts. Understanding how extreme precipitation events will change in the future, and enabling consistent and robust projections, is therefore important for the public and policymakers as we prepare for the consequences of climate change.

Projecting extreme precipitation events, however, particularly at the local scale, presents a critical challenge: the climate model-based simulations of precipitation that we currently rely on for such projections, from general circulation models (GCMs), are not very realistic, mainly due to the models' coarse spatial resolution. This coarse resolution precludes adequate representation of highly influential, small-scale features such as moist convection and topography. Regional climate models (RCMs) provide much higher resolution and better representation of such features, and are thus often perceived as an optimal approach to producing more accurate heavy precipitation statistics than GCMs. However, they are much more computationally intensive, time-consuming and expensive to run.

In a previous paper, the researchers developed an algorithm that detects the occurrence of heavy precipitation events from climate models' well-resolved, large-scale atmospheric circulation conditions associated with those events, rather than relying on the models' simulated precipitation. The algorithm's results corresponded to observations far more closely than the model-simulated precipitation did.

In this paper, the researchers show that the performance of the new algorithm in detecting heavy precipitation events is not dependent on model resolution, and even exceeds that of precipitation simulated by RCMs. The algorithm thus presents a robust and economical way to assess extreme precipitation frequency across a broad range of GCMs and multiple climate change scenarios with minimal computational requirements.
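The paper does not publish code, but the core idea lends itself to a compact illustration: flag days whose simulated large-scale circulation resembles a composite pattern built from observed heavy-precipitation events. In the sketch below, the composite pattern, the similarity metric (spatial correlation) and the detection threshold are all hypothetical stand-ins for the paper's actual trained discriminators.

```python
# Minimal sketch of circulation-based detection of heavy-precipitation days.
# Hypothetical illustration only: the composite pattern, similarity metric
# and threshold are stand-ins for the paper's actual method.
import numpy as np

def detect_heavy_precip_days(circulation, composite, threshold=0.5):
    """Flag days whose large-scale circulation resembles the composite
    pattern associated with observed heavy-precipitation events.

    circulation : array (n_days, ny, nx), e.g. daily 500-hPa geopotential
                  height anomalies from a GCM
    composite   : array (ny, nx), mean anomaly over observed event days
    threshold   : minimum spatial correlation to flag a day (assumed)
    """
    flat = circulation.reshape(circulation.shape[0], -1)
    ref = composite.ravel()
    # Pearson spatial correlation of each day's pattern with the composite
    flat_a = flat - flat.mean(axis=1, keepdims=True)
    ref_a = ref - ref.mean()
    corr = flat_a @ ref_a / (
        np.linalg.norm(flat_a, axis=1) * np.linalg.norm(ref_a)
    )
    return corr >= threshold  # boolean mask of detected event days

# Toy usage with random fields
rng = np.random.default_rng(0)
fields = rng.standard_normal((365, 20, 30))
pattern = rng.standard_normal((20, 30))
events = detect_heavy_precip_days(fields, pattern)
print(int(events.sum()), "days flagged")
```

Because the detection relies only on large-scale fields that GCMs resolve well, the same procedure applies unchanged across models and resolutions, which is what makes the approach cheap to run over many scenarios.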

David L. Chandler | MIT News Office 
April 6, 2018

Putting a price on carbon, in the form of a fee or tax on the use of fossil fuels, coupled with returning the generated revenue to the public in one form or another, can be an effective way to curb emissions of greenhouse gases. That’s one of the conclusions of an extensive analysis of several versions of such proposals, carried out by researchers at MIT and the National Renewable Energy Laboratory (NREL).
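As a rough illustration of the fee-and-dividend mechanism the proposals share, the back-of-the-envelope sketch below computes the revenue raised by a hypothetical fee and the resulting per-capita dividend; none of these numbers come from the MIT/NREL analysis.

```python
# Back-of-the-envelope illustration of a carbon fee with per-capita
# revenue recycling ("fee and dividend"). All numbers are hypothetical
# and are not taken from the MIT/NREL analysis.
fee_per_tonne = 50.0          # USD per tonne CO2 (assumed)
annual_emissions = 5.0e9      # tonnes CO2 covered by the fee (assumed)
population = 330e6            # people receiving the dividend (assumed)

revenue = fee_per_tonne * annual_emissions
dividend = revenue / population
print(f"Revenue: ${revenue/1e9:.0f} billion/yr; "
      f"dividend: ${dividend:,.0f} per person/yr")
# Revenue: $250 billion/yr; dividend: $758 per person/yr
```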

Particulate pollution-driven severe haze events in Southeast Asia have become more intense and frequent in recent years, degrading air quality and threatening human health. While widespread biomass burning is a major source of these events, particulate pollutants from other human activities also play a key role in degrading the region's air quality. In this study, MIT Joint Program and collaborating researchers conducted numerical simulations to examine the contributions of aerosols emitted from fire (i.e., biomass burning) vs. non-fire sources (including fossil fuel combustion, road and industrial dust, and land use and land-use change) to the degradation of air quality and visibility over Southeast Asia. Covering 2002–2008, these simulations were driven by emissions from (a) fossil fuel burning only, (b) biomass burning only, and (c) both (a) and (b).

Across 50 ASEAN cities, the model results reveal that 39% of observed low visibility days (LVDs) can be explained by either fossil fuel burning or biomass burning emissions alone (each source by itself suffices), a further 20% only by fossil fuel burning, a further 8% only by biomass burning, and a further 5% only by the combination of fossil fuel and biomass burning. The remaining 28% of observed LVDs are unexplained, likely due to emissions sources not yet identified.
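The attribution logic behind these percentages can be sketched as follows: for each observed LVD, check which simulation case(s) also fall below the visibility threshold. The threshold value and input arrays below are placeholders, not the study's actual data.

```python
# Sketch of the attribution logic behind the LVD percentages: for each
# observed low-visibility day, check which simulation case(s) also fall
# below the visibility threshold. Threshold and data are placeholders.
import numpy as np

def attribute_lvds(obs_vis, vis_ff, vis_bb, vis_both, threshold=7.0):
    """Classify observed low-visibility days (visibility in km).

    vis_ff   : simulated visibility, fossil-fuel emissions only (case a)
    vis_bb   : simulated visibility, biomass-burning emissions only (case b)
    vis_both : simulated visibility, both sources (case c)
    """
    lvd = obs_vis < threshold
    ff = vis_ff < threshold
    bb = vis_bb < threshold
    both = vis_both < threshold
    counts = {
        "either source alone": np.sum(lvd & ff & bb),
        "fossil fuel only":    np.sum(lvd & ff & ~bb),
        "biomass only":        np.sum(lvd & ~ff & bb),
        "combination only":    np.sum(lvd & ~ff & ~bb & both),
        "unexplained":         np.sum(lvd & ~ff & ~bb & ~both),
    }
    total = lvd.sum()
    return {k: 100.0 * v / total for k, v in counts.items()}
```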

Further analysis of the 24-hour PM2.5 Air Quality Index (AQI) indicates that, compared with the simulation driven by non-fire emissions alone, the case with coexisting fire and non-fire PM2.5 substantially increases the chance of the AQI reaching the moderate or unhealthy pollution levels, from 23% to 34%. The premature mortality in major Southeast Asian cities due to the degradation of air quality by particulate pollutants is estimated to have increased from ~4110 per year in 2002 to ~6540 per year in 2008.
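For reference, the 24-hour PM2.5 AQI is computed by linear interpolation within concentration breakpoints; a minimal implementation using the standard (pre-2024) US EPA breakpoint table is sketched below. The study may use a slightly different table.

```python
# Standard US EPA piecewise-linear AQI calculation for 24-hour PM2.5,
# the index referenced in the study. Breakpoints are the pre-2024 EPA
# values; the paper may use a slightly different table.
PM25_BREAKPOINTS = [
    # (C_lo, C_hi, I_lo, I_hi, category)
    (0.0,    12.0,    0,  50, "Good"),
    (12.1,   35.4,   51, 100, "Moderate"),
    (35.5,   55.4,  101, 150, "Unhealthy for Sensitive Groups"),
    (55.5,  150.4,  151, 200, "Unhealthy"),
    (150.5, 250.4,  201, 300, "Very Unhealthy"),
    (250.5, 350.4,  301, 400, "Hazardous"),
    (350.5, 500.4,  401, 500, "Hazardous"),
]

def pm25_aqi(conc):
    """Convert a 24-h mean PM2.5 concentration (ug/m3) to an AQI value."""
    c = round(conc, 1)
    for c_lo, c_hi, i_lo, i_hi, label in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo
            return round(aqi), label
    return 500, "Hazardous (off scale)"

print(pm25_aqi(40.0))  # (112, 'Unhealthy for Sensitive Groups')
```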

Finally, the study includes an exploratory experiment using machine learning algorithms to forecast the occurrence of haze events in Singapore. All results suggest that, besides minimizing biomass burning activities, an effective air pollution mitigation policy for Southeast Asia must also consider controlling emissions from non-fire anthropogenic sources.
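The study does not detail its machine learning setup, but a minimal sketch of this kind of haze-forecast experiment might look like the following, where the feature set, model choice (a random forest) and synthetic data are all assumptions.

```python
# Hedged sketch of an exploratory haze-forecast experiment like the one
# the study describes for Singapore. The feature set, model choice and
# data below are assumptions, not the authors' actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000
# Placeholder predictors, e.g. regional fire counts, wind direction and
# speed, rainfall, and the previous day's local PM2.5.
X = rng.standard_normal((n, 5))
# Placeholder label: 1 = haze day, 0 = clear day (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```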

Restructuring an electricity sector entails a complex realignment of political and economic institutions, which may both delay and distort the achievement of reasonably competitive market conditions. In research and planning for policy interventions in power systems under these varied regulatory environments, typical operational models may neglect important areas in which engineering constraints and political realities combine to substantially change outcomes, leading to poor understanding of the underlying causes of inefficiency and to inappropriate recommendations. We develop tractable formulations of a common power systems model used on a daily basis, the unit commitment optimization, that consider important political factors in the Northeast grid region of China. We demonstrate the importance of these interactions for operations and provide a set of options for researchers to explore further pathways for China's ongoing power system reforms. For example, wind integration, a key policy priority, is inhibited by the interaction of institutions limiting short- and long-term sources of flexibility in inter-provincial trade.
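A minimal unit commitment formulation, the model the authors build on, can be sketched in a few lines of Python with PuLP: binary on/off decisions, minimum and maximum output limits, and an hourly demand balance. The paper's contribution is to layer China-specific institutional constraints (such as provincial generation quotas and limited inter-provincial trade) onto this core; the illustrative numbers below omit those and are not from the paper.

```python
# Minimal unit-commitment sketch in PuLP: two generators, four hours,
# binary on/off decisions with min/max output and a demand balance.
# All numbers are illustrative.
import pulp

hours = range(4)
demand = [120, 180, 220, 160]                     # MW per hour (assumed)
gens = {"coal": dict(pmin=50, pmax=200, cost=30, fixed=500),
        "gas":  dict(pmin=20, pmax=100, cost=60, fixed=100)}

prob = pulp.LpProblem("unit_commitment", pulp.LpMinimize)
u = pulp.LpVariable.dicts("on", (gens, hours), cat="Binary")  # commitment
p = pulp.LpVariable.dicts("p", (gens, hours), lowBound=0)     # output, MW

# Objective: fixed (no-load) cost while on, plus variable fuel cost
prob += pulp.lpSum(gens[g]["fixed"] * u[g][t] + gens[g]["cost"] * p[g][t]
                   for g in gens for t in hours)

for t in hours:
    prob += pulp.lpSum(p[g][t] for g in gens) == demand[t]    # balance
    for g in gens:
        prob += p[g][t] >= gens[g]["pmin"] * u[g][t]          # min output
        prob += p[g][t] <= gens[g]["pmax"] * u[g][t]          # max output

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in hours:
    print(t, {g: pulp.value(p[g][t]) for g in gens})
```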

This article provides a proof of concept for using a biogeochemical/ecosystem/optical model with a radiative transfer component as a laboratory to explore aspects of ocean colour. We focus here on the satellite ocean colour chlorophyll a (Chl a) product provided by the often-used blue/green reflectance ratio algorithm. The model produces output that can be compared directly to real-world ocean colour remotely sensed reflectance. This model output can then be used to produce a satellite-like ocean colour Chl a product using an algorithm linking blue and green reflectance similar to that used for the real world. Given that the model includes complete knowledge of the (model) water constituents, optics and reflectance, we can explore uncertainties and their causes in this proxy for Chl a (called derived Chl a in this paper). We compare the derived Chl a to the actual model Chl a field. In the model we find that the mean absolute bias between derived and actual Chl a due to the algorithm is 22 %. The real-world algorithm is found using concurrent in situ measurements of Chl a and radiometry. We ask whether additional in situ measurements to train the algorithm would improve it, and find a mixed result: there is an overall global improvement, but at the expense of some regions, especially in lower latitudes, where the biases increase. Not surprisingly, we find that region-specific algorithms provide a significant improvement, at least in the annual mean. However, in the model, we find that no matter how the algorithm coefficients are found, there can be a temporal mismatch between the derived Chl a and the actual Chl a. These mismatches stem from temporal decoupling between Chl a and other optically important water constituents (such as coloured dissolved organic matter and detrital matter). The degree of decoupling differs regionally and over time. For example, in many highly seasonal regions, the timing of the initiation and peak of the spring bloom in the derived Chl a lags the actual Chl a by days and sometimes weeks. These results indicate that care should also be taken when studying phenology through satellite-derived Chl a products. This study also re-emphasizes that ocean-colour-derived Chl a is not the same as the real in situ Chl a. In fact, the model's derived Chl a compares better to real-world satellite-derived Chl a than the model's actual Chl a does. Modellers should keep this in mind when evaluating model output with ocean colour Chl a, and in particular when assimilating this product. Our goal is to illustrate the use of a numerical laboratory that (a) helps users of ocean colour, particularly modellers, gain further understanding of the products they use and (b) helps the ocean colour community to explore other ocean colour products, their biases and uncertainties, as well as to aid in future algorithm development.
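For readers unfamiliar with the blue/green reflectance ratio approach, a band-ratio Chl a algorithm of the OCx family (an O'Reilly et al.-style polynomial in the log of the blue-to-green reflectance ratio) can be sketched as below. The polynomial coefficients shown are illustrative placeholders, not the operational NASA values or the model-tuned fits from the paper.

```python
# Sketch of a blue/green maximum-band-ratio Chl a algorithm of the OCx
# family. The coefficients below are illustrative placeholders, not the
# operational NASA values or the paper's model-specific fits.
import numpy as np

A = [0.24, -2.74, 1.80, 0.00, -1.23]  # placeholder polynomial coefficients

def band_ratio_chl(rrs_blue, rrs_green):
    """Derive Chl a (mg m-3) from remote-sensing reflectances.

    rrs_blue  : max of the blue-band Rrs values (e.g., 443/490 nm), sr-1
    rrs_green : green-band Rrs (e.g., 555 nm), sr-1
    """
    r = np.log10(rrs_blue / rrs_green)
    log_chl = sum(a * r**i for i, a in enumerate(A))
    return 10.0 ** log_chl

print(band_ratio_chl(0.008, 0.004))  # ~0.37 mg m-3 for a 2:1 ratio
```

The paper's key point follows directly from this structure: the derived Chl a depends only on reflectance, which is shaped by all optically active constituents, so it can diverge from the actual Chl a whenever those constituents decouple from chlorophyll.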

The Paris Agreement makes long-term energy and climate projections particularly important because it sets a goal that likely requires an energy system based on a radically different fuel mix from the one currently in use. This presents a challenge for energy companies as they try to anticipate the types of energy and fuels that will be required to stay competitive while meeting environmental requirements. A new scenario (called Sky) developed by Shell International examines the challenge of moving to an energy system with net-zero CO2 emissions while gradually eliminating emissions from deforestation by midway through the second half of the century (specifically, by 2070). Using the MIT Integrated Global System Modeling (IGSM) framework, we simulate a 400-member ensemble, reflecting uncertainty in the Earth-system response, of the global temperature change associated with the Sky scenario by 2100. We find that for the median climate parameters the global surface temperature increase by 2100 is 1.75°C above pre-industrial levels, with an 85% probability of remaining below 2°C. The geographic distribution of the temperature change shows stronger warming in the polar regions. If, in addition, there is a significant effort directed toward global reforestation, then with median climate parameters the temperature increase by 2100 is near 1.5°C above pre-industrial levels.
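The headline ensemble statistics, median warming and the probability of staying below 2°C, are straightforward to compute once the ensemble exists. The sketch below uses a synthetic 400-member distribution purely for illustration; the actual IGSM ensemble samples Earth-system parameter uncertainty rather than a simple normal distribution.

```python
# Sketch of the headline ensemble statistics: median warming and the
# probability of staying below 2 °C from 400 simulated 2100 outcomes.
# The synthetic distribution below is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
# Placeholder: 400 warming values in °C above pre-industrial (assumed)
dT_2100 = rng.normal(loc=1.75, scale=0.24, size=400)

print(f"median warming: {np.median(dT_2100):.2f} °C")
print(f"P(warming < 2 °C): {np.mean(dT_2100 < 2.0):.0%}")
```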

Projections of pathways that reduce carbon emissions to levels consistent with limiting global average temperature increases to 1.5°C or 2°C above pre-industrial levels often require negative emission technologies such as bioelectricity with carbon capture and storage (BECCS). We review the global energy production potential and the ranges of costs for the BECCS technology. We then represent a version of the technology in the MIT Economic Projection and Policy Analysis (EPPA) model to see how it competes with other low-carbon options under stabilization scenarios. We find that, with a global price on carbon designed to achieve climate stabilization goals, the technology could make a substantial contribution to energy supply and emissions reduction in the second half of the 21st century. The main uncertainties weighing on BECCS are biomass availability at large scale, the pace of improvements in carbon capture technologies, the availability and cost of CO2 storage, and social acceptance. Commercial viability would appear to depend strongly on a policy environment, such as carbon pricing, that advantages the technology, given the costs we assume. Compared to previous studies, we provide a consistent approach to evaluating all of the components of the technology, from growing biomass to CO2 storage assessment. Our results show that global economic costs and the carbon prices needed to hit the stabilization target are substantially lower when the technology is available at reasonable costs.
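The economic intuition, that a carbon price turns BECCS's negative emissions into a revenue stream that can offset its high generation cost, can be illustrated with a back-of-the-envelope comparison; all numbers below are assumptions for illustration, not EPPA values.

```python
# Back-of-the-envelope sketch of why a carbon price makes BECCS viable:
# negative emissions earn a credit that offsets the high generation cost.
# All numbers are assumptions for illustration, not EPPA values.
lcoe_beccs = 180.0   # USD/MWh, assumed BECCS generation cost
lcoe_gas = 60.0      # USD/MWh, assumed fossil alternative
capture_rate = 1.0   # tonnes CO2 removed per MWh of BECCS (assumed)
gas_emissions = 0.4  # tonnes CO2 emitted per MWh of gas (assumed)

for carbon_price in (0, 50, 100, 150):  # USD per tonne CO2
    beccs_net = lcoe_beccs - carbon_price * capture_rate  # earns credit
    gas_net = lcoe_gas + carbon_price * gas_emissions     # pays the price
    winner = "BECCS" if beccs_net < gas_net else "gas"
    print(f"${carbon_price}/t: BECCS {beccs_net:6.1f}  "
          f"gas {gas_net:6.1f}  -> {winner} cheaper")
```

In this illustration the crossover falls between $50 and $100 per tonne, showing why the technology's contribution in the model is concentrated in the high-carbon-price second half of the century.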
