Tuesday, May 29, 2012

In Search of Hurricane History

Friday marks the beginning of the 2012 Atlantic hurricane season. Cue the groans, the crossed fingers and the hope that mad rushes for plywood and batteries will wait for another year. Many of you are probably wondering what the chances are that you will get hit this year.

As residents of Eastern North Carolina know well, hurricanes are not idle threats. According to the National Climatic Data Center, tropical storm and hurricane strikes are the single most common causes of billion-dollar natural disasters in the United States, accounting for nearly $260 billion in damages between 1980 and 2005, or more than half of the combined losses from all U.S. natural disasters. And since 1851, 18 percent of all hurricane strikes on the United States occurred in North Carolina.

Part of the frustration with hurricanes—and one reason why they are so destructive—is that hurricane strikes are anything but predictable. Along the North Carolina coast, the total number of storms that make landfall varies enormously from year to year. For instance, between 1986 and 1995, the N.C. coast was directly struck by only one hurricane (Charley in 1986) and brushed by another (Emily in 1993). However, six hurricanes made landfall along the coast over the next 10 years (Bertha and Fran in 1996, Bonnie in 1998, Dennis and Floyd in 1999, and Isabel in 2003). Back in 1955, three hurricanes—Connie, Diane, and Ione—struck the N.C. coast within a six-week span.

Such great variability in the number of hurricane landfalls demands an explanation. Intuitively, the number of landfalls reflects the total number of hurricanes: the more hurricanes in a season, the greater the chance that any area could receive a direct hit. For North Carolina, the number of hurricane impacts per decade tracks the average number of yearly storms, as the figure below shows:


Average number of yearly Atlantic hurricanes per decade (blue line), and number of hurricanes striking within 150 miles of North Carolina (red bars). Atlantic hurricane counts from Weather Underground, North Carolina Hurricane statistics from State Climate Office of NC.


One way to estimate the chance of a hurricane strike to a certain area is to estimate the total number of storms during a hurricane season. For such estimates, it is crucial to have accurate records of past hurricane strikes. Unfortunately, historical data are limited in length, only going back to the mid-19th century, and their accuracy and coverage are questionable at best.
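If you like numbers, an average strike rate can be converted into a yearly strike probability with a simple Poisson model, assuming strikes arrive independently from year to year. Here is a minimal sketch in Python; the 0.6-strikes-per-year rate is a made-up, illustrative number, not an official statistic:

```python
import math

def strike_probability(annual_rate):
    """P(at least one strike) in a season, assuming strikes follow
    a Poisson process with the given average annual rate."""
    return 1.0 - math.exp(-annual_rate)

# Hypothetical rate: ~6 strikes per decade -> 0.6 per year
print(round(strike_probability(0.6), 3))  # about a 45% chance each season
```

Of course, as the decade-to-decade swings above show, a single long-term average hides a lot, which is exactly why longer records matter.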

For Jeff Donnelly, an associate scientist at Woods Hole Oceanographic Institution in Massachusetts, extending the historical record of hurricane strikes is a matter of digging deeper. Just not into library archives or journals; instead, Donnelly searches for evidence of hurricanes in salt marshes on the landward side of barrier islands.

Over the last two decades, scientists like Dr. Donnelly have increasingly turned to clues from the earth for records of past hurricane strikes. This approach, termed paleotempestology (paleo-, past; tempest, storm; -logy, the study of), relies on a simple principle: Hurricanes tend to move things to where they normally would not be found.

In salt marshes, the material moved is sand. When a hurricane approaches the coast, wave action and storm surge erode sand from the beach and carry it inland, depositing the sand on top of the muddy sediment in the salt marsh. After the storm surge recedes, the marsh recovers, and muddy sediment is again deposited on top of the sandy layer. The end result, according to Jon Woodruff, assistant professor at the University of Massachusetts-Amherst and a former student of Donnelly, is “the perfect dirt layer cake”:


A sediment sample collected from the Florida Panhandle. The dark sediment is mud that is normally deposited in a marsh; the lighter bands of sediment represent layers of sand that are washed into the marsh during hurricanes. Photo credit: Jon Woodruff


Donnelly and Woodruff take cores in the landward salt marshes, and reconstruct past hurricane strikes from the sand layers in these tubes of sediment. Previous work published by Donnelly and colleagues has recovered evidence for major hurricane strikes in the 18th, 19th and early 20th centuries in the salt marshes of New Jersey, Long Island and Rhode Island that coincided with documented hurricane strikes.

But the potential doesn’t stop there. With longer sediment cores and the right location, hurricane strikes can be inferred from times long before the historical record. “Cores help us take a look at history over 5,000 years, and that’s a powerful tool,” Donnelly told Oceanus magazine in 2009. A recent hurricane strike reconstruction from Puerto Rico, published in Nature, suggests that increased hurricane activity in the North Atlantic over the last 5,000 years generally corresponds to weaker El Niño events and a stronger West African monsoon.

Unfortunately, similar hurricane reconstructions have yet to be generated from eastern North Carolina. A 2006 study by Steven Culver and colleagues from East Carolina University examined salt marsh cores from Pea Island. The authors showed that sand deposits in these salt marshes were far too variable and widespread to be explained by hurricane activity alone and were more likely related to changes in inlet positions along Pea Island. It appears that the North Carolina coast is simply too energetic to preserve clean records of past hurricane strikes.

Hurricanes will never cease to be a risk to the residents and economy of eastern North Carolina. But with methods like paleotempestology, scientists are gaining a more complete picture of past hurricane variability and of what factors may contribute to future hurricane variability. While scientists may never be able to predict the chance of a hurricane strike in one location in any given year with certainty, research into past hurricane strikes has certainly underscored the dynamic history of the coasts we call home.

Author's note: Cross posted from Coastal Review Online, a publication of the North Carolina Coastal Federation.

Wednesday, February 1, 2012

It's Getting Hot in Here- Explanations for a Warm Winter

I got the idea to write this piece today while sitting outside in a t-shirt. It was 65 degrees and sunny, a beautiful mid-April day for New York City. The only problem is, it’s not April; it’s only February 1st.

As I’m sure a lot of you on the East Coast have noticed, it’s been an unusually mild winter, especially when compared to the Snowmageddons of 2010 and 2011. Just how different has it been? Consider this: in January 2011, it snowed 36” at Central Park. This year? The official tally was less than one-tenth as much, at 3.23”. Obviously it must be global warming, right?

No, it’s not that easy- one warm winter does not make global warming. But thanks to research spurred in part by the massive snowstorms of the past few winters, climate scientists have begun to understand what drives this winter weather weirdness. One mechanism that bears much of the blame for these events is something called the “Arctic Oscillation”.

Figure 1. Characteristics of a negative (top) and positive (bottom) Arctic Oscillation. Gridded data are temperature anomalies (current temperature minus average temperature for the period 1951-2000). Data from the NASA Goddard Institute for Space Studies (GISS) Surface Temperature Analysis.


Before you get lost in fancy terms, let’s take a step back. Climatologists define indices (like El Niño or the Arctic Oscillation) that describe recurring patterns of atmospheric pressure. You might have noticed that when our area is under high pressure, the weather tends to be nice; when it is under low pressure, the weather tends to be stormy.


When you take a shower, water comes out of your showerhead because the water inside your shower pipes is at higher pressure than outside. Air acts the same way; air moves from areas of high pressure to areas of low pressure. And much like you couldn’t force water to go back inside your showerhead, you can’t force air from an area of low pressure to go to an area of high pressure.


Why does this matter? In simple terms, the Arctic Oscillation dictates where cold air will end up in the Northern Hemisphere. When there is a negative Arctic Oscillation, atmospheric pressures are high over the Arctic and low outside the Arctic. The end result is that the high pressure forces cold air out of the Arctic, toward lower latitudes, like New York. This is exactly what happened in the winters of 2010 and 2011, leading to the massive snow events in DC and NYC. Consequently, the Arctic ends up warmer than usual (Figure 1).


When there is a positive Arctic Oscillation, the opposite is true: atmospheric pressures are low over the Arctic and high outside the Arctic. The high pressures outside the Arctic trap cold air, leading to warmer temperatures in areas like New York. As you can probably guess, the Arctic Oscillation has been strongly positive this winter, and temperatures have accordingly been quite warm (Figure 2).

Figure 2. Average January high temperatures from Central Park, NY (top, blue) and Reagan National Airport, Washington DC (bottom, cyan) against the monthly Arctic Oscillation index over the last 5 years. Colder January temperatures are associated with negative Arctic Oscillation indices and warmer January temperatures with positive ones. Note that the average high temperature this January is nearly 10 degrees warmer than last January in both DC and NY! Arctic Oscillation data from Climate Prediction Center; temperature data from National Weather Service.


It is important to note that you can’t just get rid of cold or warm air masses; you can only move them around. That’s exactly what the Arctic Oscillation does- it determines whether cold air ends up far up in the North Pole or in our backyards. And just because it has been warm here doesn’t mean it has been warm everywhere- Sarah Palin has been freezing her a$$ off in Alaska, which has seen some of its coldest temperatures and largest snowfalls on record. Europe and Asia have also seen above-average snowfall this winter (see below).


For those of you who are missing the snow this winter, I have some bad news: unfortunately, there is a connection between global warming and the Arctic Oscillation. Continued melting of Arctic sea ice, driven by warming at high latitudes, leads to a generally warmer Arctic Ocean. A warmer Arctic Ocean in turn reduces atmospheric pressures over the Arctic, leading to a positive Arctic Oscillation mode. This would tend to favor less and less snowfall in the US. Long story short- if you enjoy snow, you might need to head farther and farther north in the coming decades.

A final good link on the Arctic Oscillation: NOAA ClimateWatch


(Note from above- The North Atlantic Oscillation, a similar feature to the Arctic Oscillation, plays a role in determining how warm the U.S. is relative to Europe. More on this in a later entry)

Tuesday, August 23, 2011

East Coast Earthquake 8/23/2011: Rapid Reaction

At around 1:51pm, a magnitude 5.8 earthquake struck about 85 miles southwest of Washington, D.C. Judging from social media postings and conversations with friends, approximately 20 to 30 seconds of noticeable shaking was felt in the D.C. metro area within a minute of the earthquake. Seismic waves propagated up and down the East Coast; shaking was felt in Philadelphia by 1:52pm, New York City by 1:53pm, and eventually up to Boston, Cape Cod, and even Toronto.

Question 1: Is this the Apocalypse?

No, it is not. This earthquake occurred in a region known as the Central Virginia Seismic Zone, where small earthquake activity has been commonly noticed since at least 1774. The largest previous earthquake in the Central Virginia Seismic Zone was a magnitude 4.8 in 1875. More recently, a magnitude 4.5 earthquake struck the Central Virginia Seismic Zone on December 9th, 2003. Today's event is undoubtedly the largest recorded earthquake within the Central Virginia Seismic Zone; however, if its current designation of magnitude 5.8 holds, it will not be the strongest earthquake in the history of Virginia. That distinction belongs to the Giles County earthquake of 1897, a magnitude 5.9 centered near the West Virginia border that was felt as far away as Cleveland, Indianapolis, and Atlanta.
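For a sense of scale: earthquake magnitude is logarithmic, and released seismic energy scales roughly as 10^(1.5 × magnitude), so each whole magnitude step means about 32 times more energy. A quick back-of-the-envelope comparison in Python:

```python
def energy_ratio(m1, m2):
    # Seismic energy scales roughly as 10^(1.5 * magnitude),
    # so one full magnitude unit is a factor of ~32 in energy.
    return 10 ** (1.5 * (m1 - m2))

# Today's quake (M5.8) vs. the zone's previous record (M4.8):
print(round(energy_ratio(5.8, 4.8)))  # about 32 times the energy

# ...and vs. the 1897 Giles County quake (M5.9):
print(round(energy_ratio(5.9, 5.8), 2))  # only ~1.4x apart
```

In other words, today's quake released roughly thirty times the energy of anything previously recorded in the zone, but sits just a hair below Virginia's all-time record.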

Question 2: How worried should I be about aftershocks?

Probably not too worried, but be cautious. From a cursory look through USGS earthquake records, previous Central Virginia Seismic Zone earthquakes (the 1875 and 2003 earthquakes) show little record of significant aftershocks. At the time of my writing, there have already been two aftershocks, a magnitude 2.8 at 2:46pm Eastern Time and a magnitude 2.2 at 3:20pm Eastern Time (Figure 1). An aftershock would likely need to be at least magnitude 3 to be felt in the greater D.C. region. For comparison, the Giles County earthquake of 1897 produced aftershocks for a week afterward, although every earthquake is different and we can't be sure which features translate between earthquakes.

Figure 1. Seismic Activity in the Central Virginia Seismic Zone as of 4:14pm, 8/23/2011.

Question 3. Why did this happen? I thought earthquakes only occurred in California.

Earthquakes can occur anywhere there are faults, regardless of the age of the faults. Many faults are concentrated today at plate boundaries, where large portions of the Earth's crust join and grind against each other. California is the first and foremost example of this; the Pacific plate meets the North American plate at California, and the geologic happenings associated with this juncture lead to a lot of seismic activity. The Eastern Seaboard of the US is a geologically calm area today, but that was not always the case; when the Appalachian Mountains formed over 200 million years ago, there was an active plate boundary just off the East Coast. Faults existed all along the East Coast then, and many of those faults are still preserved in the bedrock today. We don't know what triggers them to reactivate, but occasionally they do. And because they happen so much less frequently on the East Coast than in California, we are always in for a surprise when they do occur.

The obligatory hands-on-the-face, "what-the-heck-is-going-on" East Coaster in an Earthquake look. Courtesy of New York Times.

Thursday, June 24, 2010

Uncertainly Certain: The fossil fuel(less) future of human society

Today, I drove fifteen miles to work in my car, packed my lunch with plastic bags, used my plastic ID to get into my office, where I immediately sat down and turned on my (you guessed it) plastic-cased computer powered by an always-faithful supply of electricity. The common thread? Fossil fuels. Gasoline powers the internal combustion engine in my car, refined oil is used to create the omnipresent plastics on which we so rely, and the burning of fossil fuels is the dominant source of electricity for the world. More than any other factor, it is this preponderance of fossil fuel applications that epitomizes the modern industrial era, an era that has—despite the invention of computers and the internet—remained fundamentally knotted to fossil fuels for over a century now.

My generation sits at an uncomfortable tipping point in this equation. Common sense dictates that the supply of fossil fuels is not limitless, and the extraction of fossil fuels—as the Deepwater Horizon event so (non)neatly demonstrates—is far from harmless. Legacies of environmental imperilment and callous mismanagement by drilling companies aside, there will come a time when the impetus to separate human society from fossil fuel use derives not from issues of sustainability, but from issues of necessity. The future of fossil fuels in human society is what I term “uncertainly certain”: certain in the sense that we will run out, uncertain in the sense of timing.

When will we run out? Well, there are quite a few different fossil fuels, and the answer is more suited to a book than a blog. But let’s focus on one in particular: oil. Oil is a vitally important substance used predominantly for transportation, industrial, residential, and utility applications. In 2008, nearly 71% of oil consumed in the United States—the world’s greatest consumer of oil, no less—went toward the transportation sector. If we were to run out of oil tomorrow, our transportation system would shut down, 8.1 million households in the US would freeze come winter, and a full 35% of the world's energy supply would disappear. The magnitude of such an event was not lost on the scientific community; studies of oil reserves and expectations for future supply have existed for quite some time. Here I want to introduce the most widely circulated theory of our future oil supply—“peak oil”—analyze its validity in the face of strong criticisms, and perhaps provide some idea as to what we can expect for industry 50 years from now. If you can’t read any further, here’s a short summary: if the world stays the same, we’re probably screwed.

Peak Oil

Marion King Hubbert was a research scientist at Shell Oil Company, professor at Columbia University, and vaguely looked like Burt Reynolds (at least to me). In 1956 he postulated what is now known as the “peak oil” theory: for any well, oil field, or even nation, the production of oil follows a bell-shaped curve. Great, who cares? No one did at the time, until Hubbert predicted that U.S. oil production would peak between 1965 and 1970…and he nailed it.

Source: S. Foucher

The important concept behind the “bell curve” shape for oil production is that there is a single point where oil production is at its maximum before it declines; this point is referred to as “peak oil”. This is significant due to issues of supply and demand. Demand for oil has almost constantly risen in modern times: Between 1965 and 2009, global oil consumption increased by 270% and shows no sign of slacking in the near future. As peak oil represents the time when the supply of oil is at its highest, it represents the timing of a fundamental shift between the supply and demand of oil. After peak oil is reached, oil supply can no longer grow to meet demand. As basic economics shows us, this means oil prices will skyrocket.
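For the mathematically inclined, Hubbert's bell curve is commonly modeled as the derivative of a logistic function: cumulative production levels off at the ultimately recoverable total, so annual production rises, peaks, and declines symmetrically. Here's a minimal sketch; the parameters are made up for illustration and are not real-world estimates:

```python
import math

def hubbert_production(t, q_total, k, t_peak):
    """Annual production under Hubbert's logistic model: the
    derivative of a logistic curve, peaking at t_peak."""
    e = math.exp(-k * (t - t_peak))
    return q_total * k * e / (1.0 + e) ** 2

# Illustrative, made-up parameters: 200 units ultimately
# recoverable, steepness 0.08/yr, production peaking in 1970.
print(hubbert_production(1970, 200, 0.08, 1970))  # peak = q_total*k/4 = 4.0
print(hubbert_production(1940, 200, 0.08, 1970))  # low on the way up
print(hubbert_production(2000, 200, 0.08, 1970))  # equally low on the way down
```

Note the symmetry baked into the model: production 30 years after the peak equals production 30 years before it, which is exactly why the peak marks the moment supply stops growing.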

Show me the money

Peak oil theory has successfully predicted oil production from individual wells, oil fields, and nation-states, but has been criticized when applied at the regional or global level. One of the strongest counter-arguments states that peak oil theory is misleading because it is fundamentally insensitive to price. Quite simply, the amount of oil that a company can recover at a profit depends on the price at which they can sell the oil. Rising oil prices make the recovery of oil at more expensive/risky sites (hint hint Deepwater Horizon) economically viable, allowing for an increase in global production during higher prices.

Fortunately, whether oil price has any relation to oil production is easily testable. Figure 1 below plots inflation-adjusted crude oil price and total world oil production for the past few decades.

Figure 1

While both show generally increasing trends, especially after the early 1980s, the two series do not track each other closely or consistently. A scatter plot, Figure 2, reinforces this conclusion.


Figure 2

Remember I warned in a previous entry that correlation does not imply causation. But causation does imply correlation, so a lack of correlation does imply a lack of causation. If one variable forced another, we would expect them to show at least a meaningful correlation (e.g. not an r-squared of 0.05). Therefore, because oil price and oil production do not correlate, oil production is in actuality price insensitive, at least between 1970 and the present. This is an especially surprising result, as you would think oil companies would ramp up production during high oil prices to maximize profits. It is likely they are unable to; I dare say increasing oil production is not as simple as flicking a switch.
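The check itself takes only a few lines. The sketch below computes r-squared from scratch on invented series- the numbers are made up for illustration and are not the actual price and production data:

```python
def r_squared(x, y):
    """Coefficient of determination (squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Invented data for illustration only:
price = [20, 25, 30, 60, 90, 40, 30]
coupled = [2 * p + 1 for p in price]        # perfectly linear in price
unrelated = [57, 55, 58, 54, 56, 59, 55]    # wiggles on its own

print(round(r_squared(price, coupled), 2))    # 1.0: price "explains" everything
print(round(r_squared(price, unrelated), 2))  # near 0: price explains ~nothing
```

An r-squared near zero, as in the second pair, is what the real price-vs-production scatter looks like: whatever drives production, it isn't price.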


Have we reached peak oil?

Several more surprising results follow from this. First, looking back at Figure 1, it appears as if the rate of increase in oil production has slowed significantly in the last several years.

Uh oh:

The previous image shows the results of a compilation of oil production models (yellow path) indicating that we either have already reached peak oil or will reach it within the next couple of years. Also noteworthy is that the International Energy Agency, or IEA, has significantly reduced its expectations for future oil production between its 2006 and 2008 outlooks, and now suggests that we will reach peak oil by around 2030. Note also that its forecast now underestimates the population model, which assumes that oil production is tied to the world population. Either way, it appears that in the coming decade, for the first time in recent history, population growth will outstrip growth in global oil production.

There are other indications we may be approaching peak oil, as well. Figure 3 shows that, since 2000, both the average depth of “exploratory” wells drilled in search of untapped oil reserves and the cost of recovery of a barrel of oil have increased dramatically in the U.S.


Figure 3

Combined, these indicate that we are searching deeper than ever before for more oil, and at greater cost. Coincident with this greater cost is greater risk, as the Deepwater Horizon incident has surely made clear. In my last post, I argued that sealing off the Deepwater Horizon wellhead would be a nearly impossible feat due to the pressure at which the oil and gas mixture is escaping. Remember that this pressure relates, to first order, to the depth at which the well was drilled. If we keep drilling deeper and deeper in search of more fossil fuels, we will be attempting to tap into fossil fuel deposits at higher and higher pressures, and as such we should expect greater risk and a greater chance of major, nearly uncontrollable environmental disasters like the Deepwater Horizon incident.

What about the future of fossil fuels in human society? It is safe, for now. Although oil production may be price insensitive, high oil prices have made other methods of extraction feasible (e.g. oil sands and oil shales), unlocking reservoirs that rival or dwarf the largest current oil reserves. Further, even if we reach peak oil within the next decade, it will still be some time until oil reserves become depleted enough for an “oil-free future” to become a reality.

Nevertheless, there is something we should be very concerned about: while oil supply may be close to leveling off, oil demand is far from level. Rising industrial nations like China and India will force a major rebalancing of world oil supplies; between 1995 and 2005, U.S. oil consumption increased 17%, while China’s more than doubled. Global energy consumption is projected to increase by nearly 50% by 2035, with oil use alone increasing by 22%. Such increases in demand without meaningful increases in oil supply will surely pressure oil prices upward. In closing, although we should not fear running out of oil, our society is so tied to oil that even the threat of a shortfall could wreak havoc. Although the environmental argument for an oil-free future is a strong one, there is a much better argument lurking beneath the surface: We’re going to have to face it someday, so we might as well start getting prepared now.


Agree? Disagree? Think I’m full of it? Post your comments below and I’ll try to answer them. Also, I’ve included some links I found very useful if this topic interests you:

The 2005 Hirsch Report: Peaking of World Oil Production- Impacts, Mitigation & Risk Management (U.S. Department of Energy).

BP’s 2010 Statistical Review of World Energy (yeah, I know they f’ed up in the Gulf, but they’re still one of the most knowledgeable energy companies around).

Finally, I don’t endorse Wikipedia much, but its article on peak oil is quite well-researched.