Monday, April 30, 2007

Apparent Reserve Growth

In several prior posts, I have tried to explain the enigma of reserve growth via some physical mechanisms, concentrating primarily on diffusion. The process of diffusion makes some physical sense, particularly if you consider that oil actually can diffuse through porous material, and that mathematical relationships such as Fick's first law map fairly well to a reservoir depth acting as a permeable membrane.

However, we should also consider the psycho-business aspects of making reserve growth predictions. These often have little to do with the physical processes, but have much more to do with following conservative rules which guard against overly speculative forecasts. Fortunately, we can just as easily make a model of econometric predictions as we can of a real physical process. Watch this unfold, as the math points to an almost counter-intuitive result.

First, consider that any kind of prediction has to derive from a probability model. To keep things simple, say that a reservoir has a finite depth L0, which corresponds to a given extractable oil volume. Importantly, we do not know the exact value of L0, but we can make educated guesses based on a depth that we do have confidence in. We call this the "depth of confidence" and assign it to the random variable Lambda. This has the property that as we go beyond this depth, our confidence in our prediction becomes less and less concrete, or alternatively, more and more fuzzy. With this simple premise providing a foundation, we can use basic probability arguments to estimate a value for the unknown L0, which we call L-bar.

A line by line analysis of the derivation, which I reconstruct symbolically just after the list:
  1. Probability density function for the depth of confidence.
  2. Estimator for reservoir depth, a mean value.
  3. Estimator shows a piecewise integration; the first part integrating to the actual depth, and the second part adding in a higher confidence factor as we probe below the actual depth.
  4. Solution to the integration, giving the enigmatic reserve growth dynamics as Lambda increases with time.
  5. If Lambda increases linearly with time
  6. If Lambda increases with a parabolic dependence, which matches a diffusional process.
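Since the derivation originally appeared as an image, here is a line-for-line reconstruction, under my assumption (consistent with the fuzziness property above) that the depth of confidence follows an exponential density:

\[ 1. \quad p(l) = \frac{1}{\lambda} e^{-l/\lambda} \]
\[ 2. \quad \bar{L} = \int_0^{\infty} \min(l, L_0) \, p(l) \, dl \]
\[ 3. \quad \bar{L} = \int_0^{L_0} l \, p(l) \, dl + L_0 \int_{L_0}^{\infty} p(l) \, dl \]
\[ 4. \quad \bar{L} = \lambda \left( 1 - e^{-L_0/\lambda} \right) \]
\[ 5. \quad \lambda = k t \]
\[ 6. \quad \lambda = \sqrt{k t} \]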
On the face of it, the estimator for reservoir depth looks like a classical exponential damping formula. But if you look closely, you see that the random variable lands in the denominator of the exponent, leading to a sharper initial rise but much more gradual dampening than the traditional form. The conservative nature of the estimation comes about because Lambda rises only gradually with time.

This function does reach a clear asymptote, given by L0, however much it superficially looks as if it may grow indefinitely. I consider this analysis a huge breakthrough in understanding the "enigma" of reserve growth. The fundamental form basically provides a simple explanation to what we observe, and dissipates the smoke and mirrors that the peak oil deniers trot out when they talk about reserve growth as some magical phenomenon. Clearly, we can have reserve growth as a pure book-keeping exercise, which basically deflates the preposterous Lynch and Attanasi & Root arguments.

As a bottom-line, I suggest that we start using the "apparent reserve growth" equation on observed profiles. A single variable essentially guides the growth, from which we can infer asymptotic behavior simply and quickly.
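As a toy demonstration of how quickly that works, the following sketch (all constants picked arbitrarily for illustration) traces out the apparent growth for a Lambda that increases linearly with time:

with Ada.Text_IO; use Ada.Text_IO;
with Ada.Numerics.Elementary_Functions; use Ada.Numerics.Elementary_Functions;

procedure Apparent_Growth is
   -- Evaluate Lbar = Lambda*(1 - exp(-L0/Lambda)) with Lambda = K*T,
   -- showing the apparent reserve growth of a fixed reservoir depth L0.
   L0     : constant Float := 100.0; -- True (unknown) extractable depth, arbitrary units
   K      : constant Float := 5.0;   -- Assumed growth rate of the depth of confidence
   Lambda : Float;
begin
   for T in 1 .. 30 loop
      Lambda := K * Float (T);
      Put_Line (Integer'Image (T) &
                Float'Image (Lambda * (1.0 - Exp (-L0 / Lambda))));
   end loop;
end Apparent_Growth;

The dumped profile rises sharply in the early years and then creeps toward the L0 asymptote, exactly the "enigmatic" growth signature.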

Thank you for participating in Part 72 of the series, "Why big oil refuses to admit to the principles of basic math."



As I finish this post up, I hear the voice of Jerome a Paris from DKos as he makes an appearance on the Thom Hartmann radio show.

Sunday, April 22, 2007

Cubic Discovery Model Feedback

Although I stand by the premise of the cubic discovery growth model, the negative forcing feedback seems a bit of an introduced artificiality. So instead of subtracting an accumulating factor from the differential equation (see the integral term below), I decided to take a different tack -- in particular, one with a better physical basis.

The idea of a sampling volume which increases cubically with time still forms the basis of the model, but I decided to add a probabilistic suppression function to the growth term. I came up with a perturbation based on some elementary collision and coalescence considerations. Staring at what the growth term does, it reminds me an awful lot of a real physical growth process -- akin to what happens during crystal nucleation. Although the sampling volumes occur randomly, like nucleating sites on a crystal, interactions occur between sampled regions. In particular, the volumes only grow until they meet another volume, which we cannot sample again without double-counting previously encountered volumes. This provides a forcing-feedback analogy for further exploration in an ever growing volumetric region.

A nucleating region, shown in red, grows until it hits the cubic growth limit (i.e. width) or it reaches a constraint set by a previously grown region. Eventually, the growth rate becomes limited as the total coverage approaches unity and collisions with other volumes occur more frequently.


The strength of the forcing feedback relates to the probability of meeting up with another previously sampled volume. Here, the idea of sampled coverage and a Markovian transition rate plays an important role. In general, the probability of encountering another sampled volume follows an exponential dependence which depends on the coverage. In a Markov approximation, the probability of collision remains constant with distance but tracks as the reciprocal of coverage as time increases. To establish the analogy, the coverage derives directly from the accumulated discoveries. So we have a dependence that basically follows this law:
DiscoveryRate = a*t^3*exp(-k*t^4)
This formulation has an interesting property in that the cumulative discovery becomes even more concise:
CumulativeDiscoveries = Integral(DiscoveryRate) = Dtotal*(1-exp(-k*t^4))
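As a quick sanity check on that closed form, integrating the rate law gives

\[ \int_0^{T} a \, t^3 \, e^{-k t^4} \, dt = \frac{a}{4k} \left( 1 - e^{-k T^4} \right) \]

which identifies the asymptotic cumulative discovery as Dtotal = a/(4*k).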
As a check, the following code duplicates the behavior in terms of a set of Monte Carlo simulation runs:
with Ada.Numerics.Discrete_Random;
with Text_IO;

procedure Ran_Fill is
   -- Filling a pool by randomly sampling cells according to a cubic growth law.
   -- As the pool fills, regions get truncated by collisions with previous samples.
   subtype Pool is Natural range Natural'First .. 1_000_000;
   package G_Natural is new Ada.Numerics.Discrete_Random (Pool);
   G            : G_Natural.Generator;
   Checked      : array (Pool) of Boolean := (others => False);
   Acceleration : constant Integer := 1;
   Draw         : Pool;
   Discoveries, Width : Integer;
   Cumulative   : Integer := 0;
   subtype Years is Integer range 1 .. 1000;
   Values       : array (Years) of Integer := (others => 0);
begin
   G_Natural.Reset (G, 1);           -- Use the same starting seed each time
   for Samples in 1 .. 1000 loop     -- Run a bunch of Monte Carlo simulations
      Cumulative := 0;
      Checked := (others => False);  -- Set the pool to all undiscovered
      for Year in Years loop
         Width := Acceleration * Year * Year * Year; -- Cubic acceleration of sample volume
         Discoveries := 0;           -- Yearly discoveries proportional to sampled volume
         loop
            Draw := G_Natural.Random (G);
            if not Checked (Draw) then
               declare
                  I : Pool := Draw;
               begin
                  loop               -- Go in the forward direction looking for collisions
                     Checked (I) := True;
                     Discoveries := Discoveries + 1;
                     exit when Width = Discoveries or I = Pool'Last;
                     I := I + 1;     -- Go to the neighboring volume
                     exit when Checked (I); -- Collision with another sampled element
                  end loop;
                  I := Draw;
                  loop               -- Go in the reverse direction looking for collisions
                     exit when Width = Discoveries or I = Pool'Last or I = Pool'First;
                     I := I - 1;     -- Go to the neighboring volume
                     exit when Checked (I); -- Collision with another sampled element
                     Checked (I) := True;
                     Discoveries := Discoveries + 1;
                  end loop;
               end;
               exit;                 -- Either have collided or reached the cubic growth limit
            end if;
         end loop;
         Cumulative := Cumulative + Discoveries;
         exit when Cumulative > Pool'Last; -- Checked every last region
         Values (Year) := Values (Year) + Discoveries; -- Save histogram
      end loop;
   end loop;
   for Year in Years loop            -- Dump accumulated Monte Carlo runs
      Text_IO.Put_Line (Year'Img & Values (Year)'Img);
      exit when Values (Year) = 0;
   end loop;
end Ran_Fill;
A plot of the results approaches that of the DiscoveryRate formula shown earlier, apart from any truncation effects from using a finite sample pool.

Cubic growth model and associated Monte Carlo run. Also shown, world discoveries and a moving average (M.A.) of the discoveries at peak to smooth fluctuations.


Again, what I like about this model: (1) it has a constraint of initial discovery in the year 1858, which means that the curve must start at that point, (2) it has one parameter that basically sets the peak position along the horizontal axis, and (3) it has one parameter that scales the curve on the vertical axis. This becomes a universal curve that behaviorally describes how we randomly sample a volume until we have explored it completely. The fluctuations of the real discovery data around the smoothed cubic curve come about because not every sampled volume leads to a discovery.
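Regarding point (2), the peak position follows directly from the rate law: setting the derivative of a*t^3*exp(-k*t^4) to zero gives

\[ t_{peak} = \left( \frac{3}{4k} \right)^{1/4} \]

so the single constant k pins the peak along the time axis, while a handles the vertical scaling of point (3).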

To smooth the discovery data, we can do something like what Laherrere did with cumulative discoveries in the following chart and lay the cubic model right on top. TBD, for now. (Note that this chart has discoveries earlier than his other more well-known discovery plots.)






So back to that Michael Lynch quote in my previous post:
Lynch thinks that the oil peak lies farther into the future, partially because there's likely to be a lot of oil in as-yet undiscovered smaller fields.

"You don't go looking for them until you run out of the giant fields," Lynch said in a telephone interview.
If we read between the lines, we find that peak oil denier Lynch has admitted that we have essentially finished sampling the earth's volume for oil, and that we have only the remaining voids and table scraps to pick at. But nobody in their right mind should believe that "a lot of oil" exists in those undiscovered areas. Just look at the model.

Saturday, April 21, 2007

Bean Counters

Step Back rightly pointed out that I didn't pull any punches with my last post. Well, if you had any doubt what the wankers ... I mean the leaders of the industry that will eventually pull us under ... think, read this passage from Duncan Clarke's book:
"Here it is pertinent to note that peak oil forecasters do not enjoy an undiluted view of the state or corporate portfolios that contain these internal and hidden assessments which their models logically require."
Screw you, Clarke. These wankers don't realize that you barely need to know how to count beans to figure out these "internal and hidden assessments". Via Groovy Green, secondary wanker Michael Lynch lets the cat out of the bag:
Lynch thinks that the oil peak lies farther into the future, partially because there's likely to be a lot of oil in as-yet undiscovered smaller fields.

"You don't go looking for them until you run out of the giant fields," Lynch said in a telephone interview.
Stay tuned as I debark Lynch, and expose the worthless pulp these puppets promulgate.

But the obfuscation doesn't stop at the oil industry. It also permeates down to the wankers at the USGS, who as of late presumably work under the direction of the corrupt Bush administration, which prides itself on rewriting scientific reports. Commenter Murray described at TOD how he petitioned the USGS to retract its 2000 report, criticizing the report's inflated reserve growth rates.
The petition was sent in July 2002 under Public Law 106-554, H. R. 5658 sec. 515, covering integrity of information from Federal Agencies. It was rejected with a nonsense and non-responsive answer, and I have had better things (for me) to do with my time and money than hire a lawyer to sue. My demand was to get the USGS 2000 report withdrawn, with an apology to Congress. Clearly the USGS has totally failed to comply with the provisions of this law in almost all respects. So much for government respect for our laws.
That report came out in 2000. No telling what has happened in the 6+ years of BushCo since.

Wednesday, April 18, 2007

Ig Nore Ad Vice

After Khebab posted Part II of his very extensive Oil Shock Model analysis, this comment by Engineer deserves a bit of reflection:
You can make a (simplified) elevator speech about how it works, and most people will get it. Don't mention the math.
This seems like an intuitive bit of advice, but then tristero from Digby's blog offers a swift counterpoint. Tristero's opinion centers on what happens when we frame the scientific dialog by dumbing it down and making the results more exciting. So you end up getting something like this:
Not only is it pointless to try when that is the case, it is counterproductive as it comes off as phony pandering and a waste of time ("The Higgs Boson: What's In It For Me?").
PZ agrees on the ultimate futility of this attitude, and extends it to science writer Chris Mooney's approach of injecting a religious frame into certain debates. That frame apparently has the effect of drawing in people who would normally never engage in a scientific discussion. According to people witnessing Mooney's experiment in action, it appears not to work.

So, no pandering from me.

I can make a second rhetorical observation from Khebab's post. Why, and how, with the billions of dollars invested in the fossil fuel industry, does it take amateurs to decipher our reality-based future? I bet that the oil industry actually doesn't want to know, and even if they did know, they never seriously wanted to teach it to the engineers and scientists studying the discipline. Considering how fundamental the mathematically-based rate theories that Khebab and I delve into are, it borders on the criminal that people more "in the know" completely ignore this train of thought.

I can come up with any number of analogies to hypothetically missing theories or laws in other engineering disciplines, but they would seem preposterous to imagine actually happening. Like what would happen if we built up our electronics industry without actually understanding Faraday's Law.

So Engineer has a point, but I can't imagine making an elevator speech centered on how an inductor actually works -- and having that person step off the elevator retaining anything I said. It hurts my brain to even think about it.

So we have to do the math. And I suggest we leave the framing arguments to subjective opinions like my second rhetorical observation. It gets the people "in the know" upset enough that they will send flunkies to look up the math and report back on what the heathen have figured out. Only then will we make progress.

Thursday, April 12, 2007

Firebombing

My first content on this blog included a link to a Kurt Vonnegut article called Cold Turkey.

Yesterday, Vonnegut died.

Only after reading a bunch of his most famous novels several years after they came out did I learn that my dad and uncle actually suffered through the Dresden fire-bombing as young D.P.'s during the tail end of the war. Hearing stories of them goofing around with potentially live ordnance makes the surrealism of "Slaughterhouse-Five" hit home. I have the urge to revisit that book.

I get irked by those who consider Vonnegut a "young person's" writer. The fact that many high school and college-age students ate up his work does not diminish his standing in my mind. On the other hand, I keep on hearing that Vonnegut's style wears thin as one sufficiently "matures". In fact, I heard that idiocy repeated by the talentless hack blogger/columnist, James Lileks, on the Halfwit radio program today. So Lileks admitted to reading some of his stuff in college, but didn't think much of Vonnegut and contended that no one ever reread a Vonnegut novel. Well, like I said, I read lots of Vonnegut, as well as Woody Allen growing up. And I read Lileks while he wrote for my college paper, where I clearly saw a style as rote as Vonnegut proved creative. I kind of enjoyed Lileks back then because he did the Woodman shtick pretty good and it worked as a substitute since Woody never turned out the written product with the regularity of his movies. I'd get a chuckle as Lileks frequently crafted thinly veiled Allen aphorisms in the Daily's opinion or humor section. Fast forward two decades and we find that Lileks continues to churn out unctuous, cloying, goop-like StarTribune opinions that constantly reference safe targets like Target(R) along with "mature" vulgarities directed toward reporters like Robert Fisk who actually amounted to something.

In the end, Lileks will never understand that you don't have to reread a novel to make it have an impact on your life. I contend that reading something as creative and thought-provoking as Vonnegut growing up has had a huge effect on many a mind. The idea that "War is Hell" ranks at the top of the list of mind-altering moments, and the fact that Lileks turned into a putz sits at the bottom.

Update: said it better



It hit Mike Malloy hard too; he affectionately called Vonnegut "Mark Twain on acid" as he quoted several timely passages during his nightly radio show. Tomorrow also marks the official end of "The Funny" Air America Radio era as Sam Seder gets displaced to the weekends. Rumor has it that Marc Maron will make an appearance, and if the sharp TV & radio minds get a clue, they can hire and put either SS or MM in old putz Imus's recently vacated slot.

Saturday, April 7, 2007

Compartmental Models

I have mentioned the concept of compartmental models in past comments and during a discussion at peakoil.com with the commenter named EnergySpin, primarily with respect to the data flow construction of the Oil Shock model. I had not pursued the method too deeply because it essentially reinforces what the shock model tries to achieve in the first place, and the on-line literature has proven a bit spotty and discipline-specific, hidden behind publishing house firewalls. For example, this link on Stochastic Compartmental Models likely relates to drug delivery models in the field of pharmacokinetics. As I recall, EnergySpin happened to have an MD and perhaps an engineering or math degree, and had expertise in such matters.

Let me take a moment and look back at what EnergySpin said in the PeakOil.com thread from 2 years ago:
I was pleasantly surprised to notice that you are using a form of compartmental analysis without knowing it in your models (Smile)
In the following post at your blog:
http://mobjectivist.blogspot.com/2005/06/part-i-micro-peak-oil-model.html
you actually make some really good points about the shape of the depletion curve (we seem to agree that the mathematics constrain this curve to have at least one point where its derivative goes to zero i.e. a maximum and nothing else!), but I'm more interested in the following statement:
Quote: --This is me
I use as an implicit assumption that any rate of extraction or flow is proportional to the amount available and nothing more; past and future history do not apply.
This is basically the equation for the output compartment in a multi-compartment system communicating with the real world. In actuality all the steps that you mention in your post try to model reality as fluxes of material between distinct states, with rates proportional to the amount of material in different stages of the material flow graph.
For example if one has N compartments linked in a unidirectional chain graph, the output can be shown to be given as the weighted sum of N exponential decay terms, or one can go directly to ODE models that relate the constants to the physical properties of the systems (e.g. volumes of quantities, flux constants etc.), because under certain conditions of connectivity the two formulations are equivalent
I bring this up again because we might have another connection that may prove more worthwhile to pursue, if for nothing else that it further substantiates the assumptions that I have made in building up the foundation of the Oil Shock model. Based on some discussions of the concept of "fractional flow" at TOD, with a petroleum engineer poster named Fractional_Flow, I googled this keyword phrase: "fractional flow" "compartmental model". Lo and behold, we come up with a bunch of links to the same pharmacology disciplines that came up in the earlier EnergySpin discussion. So evidently the general mathematical concept known as fractional flow applies to compartmental models. This probably does not come as a great revelation to those petroleum engineers studying fractional flow theory during their university coursework, but it does help build some confidence in the intuition behind the oil shock model.
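To make that connection concrete, here is a minimal sketch (my own construction, not EnergySpin's) of a unidirectional three-compartment chain integrated with a forward-Euler step. Every flux stays proportional to the amount in its upstream compartment, which mirrors the proportional-extraction assumption of the Oil Shock model; the outflow of the last stage traces out the weighted sum of exponential decays that the quote describes:

with Ada.Text_IO; use Ada.Text_IO;

procedure Chain is
   -- A unidirectional 3-compartment chain, e.g. discovery -> development -> extraction.
   Dt     : constant Float := 0.1; -- Euler time step (years)
   K      : constant array (1 .. 3) of Float := (0.20, 0.15, 0.10); -- Rate constants (assumed)
   Amount : array (1 .. 3) of Float := (1000.0, 0.0, 0.0); -- Initial charge sits in stage 1
   Flux   : array (1 .. 3) of Float;
begin
   for Step in 1 .. 1000 loop
      for I in 1 .. 3 loop
         Flux (I) := K (I) * Amount (I); -- Outflow proportional to upstream contents
      end loop;
      Amount (1) := Amount (1) - Dt * Flux (1);
      Amount (2) := Amount (2) + Dt * (Flux (1) - Flux (2));
      Amount (3) := Amount (3) + Dt * (Flux (2) - Flux (3));
      if Step mod 50 = 0 then -- Flux (3) acts as the production output
         Put_Line (Float'Image (Float (Step) * Dt) & Float'Image (Flux (3)));
      end if;
   end loop;
end Chain;

Run it and the final-stage output rises, peaks, and decays, just as a chain of proportional-flow stages should.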

Simply put, if fractional flow places diffusion-like properties on top of the oil extraction process, the original premise behind the Oil Shock model still stands:
"I use as an implicit assumption that any rate of extraction or flow is proportional to the amount available and nothing more"
To first-order, diffusion properties derive from concentration differentials and the flow stays proportional to the magnitude on the high side. I would suspect the second-order nature of oil extraction may kick in as a reservoir starts depleting and we start hitting the reserve growth components. At that point a more complicated diffusion law would probably prove more effective. I suspect fractional flow and my musings concerning self-limiting parabolic growth have something in common. I just don't see the need (nor do I have the time dedication) to get into the arcanities of 3-D flow to achieve a proper fractional flow analysis when a statistical macro view of the behavior would suffice.
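In equation form, that implicit assumption reads (with Q the amount still available and k a fixed fractional rate):

\[ \frac{dQ}{dt} = -k \, Q \quad \Longrightarrow \quad Q(t) = Q_0 \, e^{-k t} \]

and to first order the same proportionality captures a diffusion-driven flow, since the flux tracks the concentration on the high side of the differential.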

BTW, Khebab has done yours truly a great service and has started to review the Oil Shock model in great detail, both at his blog and at TOD. He has some very intriguing interpretations and posters have left some good comments, including Memmel (who triggered me to review the compartment model). A commenter named Davet left this appraisal of the assumptions behind the Oil Shock model:
There's a principle which I think you call convenience and I call doability: certain approximations are routinely made in order to keep the problem tractable. Gaussian distributions using central limit considerations even when the statistical basis set is small, Markovian conditional probabilities, stationarity, etc, simply because the idea of non-Markovian or worse yet nonstationary dynamics is just too exhausting to entertain; and besides, often our precision isn't good enough to warrant the extra effort -- that is, the simpler assumptions are "good enough for government work" and at least do not lull us into thinking we have the correct mechanism. (Disclosure: I do it all the time). And yet ...

The underlying principle determining Markovian behavior is that the individual events contributing to the behavior of the statistical ensemble have very short timescales relative to the ensemble timescale and have small amplitude. So the decay of a U235 atom is essentially instantaneous but the decay of a kg of U235 takes millennia; the timescale of the Chicxulub event was minutes but the amplitude so large that the future even now is affected by that event 65 MYA. Radioactive decay follows an exponential beautifully; the history of life does not. I would very much like to know what the h functions are and how they satisfy this.

The reason for picking on this (and why I keep asking about the h functions) is that it is important to get the mechanism right or at least realistic. A model with the wrong mechanism can have good descriptive ability but have poor predictive ability, as you know. As we should all know from arguing about HL. Correlation is not causation.
That comment brilliantly displays the scientific counterpart to the political "concern trolls" who routinely show up on popular blogs. In this variation, we have the concerned scientist who probably has our best interests in mind, but tries to repudiate the direction of the analysis because it doesn't prove "deep" enough. Well, screw that. I contend that the lack of simplified, yet realistic, models remains the big barrier to making progress in the oil depletion analysis field. You just don't dismiss and then abandon a promising avenue of research because of some perceived impediment that gets in the path. That gets us into a state of stasis, hoping that somebody will come along to do what we affectionately refer to as an "end-to-end" model. Well, we know that will never happen. Instead, you basically have to make sound engineering judgments at every turn, knowing when to dismiss something if the math proves too unwieldy, or when to make some grand simplifying assumption if your intuition tells you to. I replied to Davet's concern with this comment:
The thing that you may find confusing is that Khebab and I use the Exponential function for 2 things: estimating latencies and also for the final extraction. The Uranium decay you speak of refers more to the latter type of exponential decay. I have the amplitude taken care of. Bigger wells proportionately generate greater output, but the decay is the same. Oil extraction is like pigs feeding at the trough -- the bigger the trough, the more pigs can feed (or the more holes you punch in a reservoir/trough), but the pigs can only extract the slop at a relatively fixed rate. Heck, we're not talking about orders of magnitude differences like U decay!

I can sum up the history of life, greed=pigs=humans, so if you take the perspective that humankind is greedy, then we will extract any reservoir, big or small, with the same relative voracious appetite, plus or minus some unimportant differences in the greater scheme of things.
That about summarizes my most elementary level of intuition on the matter; I equate oil extraction to the diffusivity of pig slop.

Tuesday, April 3, 2007

How to generate convenience

To add some context to the preceding post on matching the dynamics of a Logistic curve, I will step-by-step deconstruct the math behind the governing Verhulst equation. Although deceptively simple in both form and final result, you will quickly see how a "fudge" factor gets added to a well-understood and arguably acceptable 2nd-order differential equation just so we end up with a convenient closed-form expression. This convenience of result essentially robs the Peak Oil community of a deeper understanding of the fundamental oil depletion dynamics. I consider that a shame, as not only have we wasted many hours fitting to an empirical curve, we also never gave the alternatives a chance -- something that in this age of computing we should never condone.

Premise
If we consider the discovery rate dynamics in terms of a proportional growth model, we can easily derive a 2nd-order differential equation whereby the damping term gets supplied by an accumulated discovery term. The latter term signifies a maximum discovery (or carrying) capacity that serves to eventually limit growth.
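In symbols, and reconstructing what originally appeared as a figure (the notation matches the post below this one):

\[ \frac{dD}{dt} = b \, D(t) - c \int_0^t D(\tau) \, d\tau \quad \Longrightarrow \quad D'' - b \, D' + c \, D = 0 \]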

Now, if we refactor the Logistic/Verhulst equation to mimic the 2nd-order differential equation in appearance, it appears very similar apart from a conspicuous non-linear damping term, reconstructed below.
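Writing Q for the cumulative quantity that the Logistic tracks, differentiating the Verhulst equation gives (my reconstruction of the missing figure):

\[ \frac{dQ}{dt} = b \, Q \left( 1 - \frac{Q}{K} \right) \quad \Longrightarrow \quad Q'' - b \, Q' + \frac{2b}{K} \, Q \, Q' = 0 \]

The linear damping term of the premise gets replaced by the non-linear (2b/K)*Q*Q' product in the lower right.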

That non-linear term in any realistic setting makes absolutely no sense. The counter-argument rests on a "you can't have your cake and eat it too" finding. On the one hand, we assume an exponential growth rate based on the amount of instantaneous discoveries made. But on the other hand, believers in the Logistic model immediately want to turn around and modulate the proportional growth characteristics with what amounts to a non-linear "fudge" factor. This happens to just slow the contrived exponential growth with another contrived feedback term. Given the potential chaotic nature of most non-linear phenomena, we should feel lucky that we have a concise result. And to top it off, the fudge factor leads to a shape that becomes symmetric on both sides of the peak, since it modulates the proportional growth equally around dD/dt=0, with an equal and opposite sign. As the Church Lady would say, "How convenient!" Yet we all know that the downside regime has to have a different characteristic than the upside (see the post on cubic growth for the explanation, and why the exponential growth law may not prove adequate in the first place).

Unfortunately, this "deep" analysis gets completely lost on the users of the Logistic curve. They simply like the fact that the easily solvable final result looks simple and gives them some convenience. Even though they have no idea of the underlying fundamentals, they remain happy as clams -- while people like R2, monkeygrinder, and I stay angry as squids.

Monday, April 2, 2007

The real Logistic model derivation

Classical oil depletion modelers occupy a no-man's land situated between islands of poor data and a huge immovable object known as the Logistic curve. I always considered the Logistic formulation an empirical fit, widely used only because it generated a convenient, concise closed-form solution.

Wishing to get rid of empiricism at every turn, I think I came up with an analytical model that behaves much like the Logistic, but actually stems from much more understandable first-principles logic. It essentially branches off from the premise of the quadratic and cubic discovery models. Keeping it simple, I switch the power-law dependence of discovery growth to an exponential law:
dDiscovery(t)/dt = b*Discovery(t) - c*Integral(Discovery(t))
This has the property that the rate of increase in discoveries tracks the current instantaneous rate of discoveries. Although the validity of its premise remains arguable, it has a basis in human nature: nothing attracts success like success, which translates into a "gold-rush" mentality for the growth in discoveries. The decline comes about as a finite supply of discoveries accumulates and provides the negative feedback in the integral term. This turns into a classic 2nd-order differential equation.
D'' - bD' + cD = 0
I used an online differential equation solver to seek out the regime which corresponds to the classic growth and decline in discoveries:

This appears close in shape to the cubic growth model, but showing meatier tails and a sharper profile. It also needs an initial condition to kick in, as the solution degenerates to Discovery(t) = 0 without an initial discovery stimulus. D(0) and D'(0) provide the initial "gold rush" stimulus.
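For the record (a standard result, so just a sketch): when b^2 < 4c the characteristic roots go complex, and the solution takes the form

\[ D(t) = e^{\frac{b}{2} t} \left( A \cos \omega t + B \sin \omega t \right), \qquad \omega = \sqrt{c - \frac{b^2}{4}} \]

with A and B fixed by D(0) and D'(0). The growth-and-decline regime corresponds to the first half-cycle, before the expression would swing negative.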

Similar to the cubic model, the backside part of the curve probably needs modification -- reflected by what I consider a different growth regime governed by a change in human dynamics:
dDiscovery(t)/dt = b0 - c*Integral(Discovery(t))
I would justify this by suggesting that once a permanent decline kicks in, driven by the relentlessly diminishing resources available for discovery, the incentive to discover turns into a constant (i.e. no more bandwagon jumpers), giving a damped exponential beyond the sharp decline (see the cubic example of this behavior below).

In general the shape of this curve mimics the shape of the Logistic curve: an exponentially ramped up-slope and an exponentially damped down-slope. Of course, the solution does not match the simplicity of the Logistic curve, but we never intended to generate a concise solution; in my mind, latching onto a concise thought process remains the ultimate goal.

RMS of Cluelessness

A while ago, I reviewed a book on debunking global warming by Essex and McKitrick. As I pointed out, as did Tim Lambert, the duo had no clue what the fundamental concept of temperature meant. Well, apparently the two thrive in an embarrassment-rich environment and have decided to take another beating some three years later. Read the RealClimate.org account of their antics, and also the skewering supplied by Eli Rabett. Even as I try to comprehend their misuse of and toying with elementary statistics (taking the RMS of a temperature series? huh?), I can't imagine what they want to gain out of it other than to mock the idea of peer-reviewed science.

So in turn I seek to mock these same deniers. Several days ago, a fellow named Dennis Avery appeared on a local radio show pushing his book "Unstoppable Global Warming". (Note how the title tries to belittle climate change science, but it only reminds me of how "My New Filing Technique is Unstoppable" succeeded in mocking mindless bureaucracy.) The cretins running the radio show included one of the Powerwhiner bloggers and a dude named Chad ("The Elder") Doughty, who recently tried to raise a stink about a local community center that wanted to show the Al Gore documentary. In a funny local news story, Doughty embarrassed himself by admitting to TV reporter Tim Sherno that he never actually watched the film -- view the clip here. In any case, once Avery started talking about the solar wind/cosmic ray theory of cloud formation, I decided to call up the station and subtly mock him. Summoning up a nerdy-sounding voice, I suggested that yes indeed the "hot" solar wind coming from the sun had caused the Earth's warming, much like a hair dryer would heat up your head. And Avery, showing either gross stupidity or an insatiable craving for acceptance, didn't completely disabuse me of my theory! Why should he, when my clueless denier alter-ego probably makes up the bulk of his audience? Listen to the short exchange between myself and Avery and marvel at the total joke I can make out of his premise.

Avery: "It's not that direct, but you're close"

What a pair, Avery and Doughty. A freakin' knob and a doorstop. The collective brains of a mailbox.