A few weeks ago I wrote about how to derive wind speed characteristics from a straightforward maximum entropy analysis: Wind Dispersion and the Renewable Hubbert Curve. This assumed only a known mean of wind energy levels (measured as power integrated over a fixed time period).
From this simple formulation, one can get a wind speed probability graph. Knowing the probability of wind speed, you can perform all kinds of interesting extrapolations -- for example, how long it would take to accumulate a certain level of energy.
I received a few comments on the post; one from BDog pointed out how the wind flow affects the rate of energy transfer, i.e. the load of kinetic energy enclosed in a volume of air gets pushed along at a rate proportional to its speed. I incorporated that modification in a separate calculation and did indeed notice a dispersive effect on the output. Since I didn't pick up on this at first, I edited the post to include BDog's correction.
As a fortunate coincidence, Jerome posted a wind-themed article at TheOilDrum and in the comment section LenGould volunteered a spreadsheet of Ontario wind speed data (thanks Len).
I used LenGould's data set to try to verify the entropic dispersion model. Len's comment on that thread sets the context:

"In the past 12 months, the max output was 1017 MW, so there's at least that much online, quite widely distributed across the 500 mile width of the southern part of the province near the Great Lakes (purportedly excellent wind resource territory).

On April 20th from 8:00 to 10:00 AM, the output averaged 3.5 MW (0.34%).
On Mar 16th from 11:00 AM to 1:00 PM, the output averaged 4.0 MW (0.39%).
On Mar 9th from 10:00 AM to 6:00 PM, the output averaged 6.7 MW (0.66%).

That's just a few random picks I made in peak demand hours. I've done thorough analysis of this before and found the data to completely contradict your statement. These wind generators aren't anywhere NEAR to baseload, and look like they never will be, since winds from here to North Dakota all travel in the same weather patterns."
The data file consisted of about 36,000 sequential hourly measurements of energy (kilowatt-hours). The following chart shows the cumulative probability distribution function of the energy values. This shows the classic damped exponential, which derives from either the Maximum Entropy Principle (probability) or the Gibbs-Boltzmann distribution (statistics). It also shows a knee in the curve at about 750 kWh, which I assume comes from a regulating governor of some sort designed to prevent the wind turbine from damaging itself in high winds.
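For anyone who wants to reproduce that chart, here is a minimal sketch of the exponential fit. The file name and the assumption that the hourly energies load as a single column are mine, since Len's spreadsheet layout will differ; the maximum entropy estimate itself needs nothing more than the sample mean.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file name/layout -- adapt to the actual spreadsheet columns.
energy = np.loadtxt("ontario_hourly_energy.csv", delimiter=",")  # hourly energy values

# Maximum entropy with only the mean constrained gives an exponential,
# so the single parameter E0 is just the sample mean.
E0 = energy.mean()

# Empirical complementary cumulative distribution: P(E > e).
e_sorted = np.sort(energy)
ccdf = 1.0 - np.arange(1, len(e_sorted) + 1) / len(e_sorted)

plt.semilogy(e_sorted, ccdf, ".", label="Ontario hourly data")
plt.semilogy(e_sorted, np.exp(-e_sorted / E0), "-", label="exp(-E/E0) fit")
plt.xlabel("hourly energy")
plt.ylabel("P(E > e)")
plt.legend()
plt.show()
```

On the semi-log axes the damped exponential plots as a straight line, and the governor-induced knee shows up as a departure at the high-energy end.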
I also charted the region around zero energy to look for any effect of the air-flow transfer regime (which should be strongest near zero). In that regime the probability would go as sqrt(E)*exp(-E/E0) instead of exp(-E/E0). Yet only a linear trend appears near zero, which is exactly what the Taylor series expansion of the plain exponential around E=0 predicts.
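To make that near-zero distinction concrete, here is a small sketch contrasting the two candidate forms. The sqrt(E)-weighted density is just a Gamma distribution with shape 3/2, and the energy scale below is only illustrative.

```python
import numpy as np
from scipy import stats

E0 = 178.0  # illustrative energy scale; substitute the fitted mean from the data

# Plain maximum entropy result: p(E) ~ exp(-E/E0)
plain = stats.expon(scale=E0)
# Velocity-flow modification: p(E) ~ sqrt(E)*exp(-E/E0), i.e. a Gamma with shape 3/2
flow = stats.gamma(a=1.5, scale=E0)

# Near E = 0 the plain exponential's CCDF falls linearly (1 - E/E0 from the
# Taylor expansion), while the sqrt-weighted form falls off more slowly, as E^1.5.
for e in np.linspace(0.0, 0.2 * E0, 5):
    print(f"E = {e:6.1f}   exp CCDF = {plain.sf(e):.4f}   sqrt-weighted CCDF = {flow.sf(e):.4f}")
```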
Remember that this data consists of a large set of independent turbines. You might think that, because of the law of large numbers, the distribution would narrow or show a peak. Instead, the mixture of these turbines over a wide variation in wind speed provides a sufficiently disordered path that we can apply the maximum entropy principle.
With gained confidence in the entropic dispersion model, we can address the obvious nagging question behind wind energy -- how long do we have to wait until we accumulate a desired level of energy?
I generated a resampled set of the data (resampled only in the sense that I used a wraparound at the four-year length of the record to create a set free from any boundary effects). The resampling essentially generated a histogram of the time it would take to reach a given energy level. I chose two levels, E(T) = 1000 MWh and E(T) = 200 MWh, and plotted the results below with the predetermined model fit next to the data.
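A sketch of that wraparound resampling follows. The target levels are expressed in the same units as the hourly data, the loading step is assumed as before, and with ~36,000 starting points the pure Python loop runs slowly but stays faithful to the procedure described.

```python
import numpy as np

def hours_to_accumulate(energy, target, start):
    """Count the hours needed, starting at index `start`, for the running sum
    of hourly energy to first reach `target`, wrapping around the end of the
    record so that no starting point suffers a boundary effect."""
    total, hours, n = 0.0, 0, len(energy)
    while total < target:
        total += energy[(start + hours) % n]
        hours += 1
    return hours

energy = np.loadtxt("ontario_hourly_energy.csv", delimiter=",")  # assumed layout, as above

# Start the accumulation at every hour in the record to build up the
# distribution of waiting times for each target level.
for target in (1000.0, 200.0):  # E(T) levels, in the same units as `energy`
    waits = np.array([hours_to_accumulate(energy, target, s) for s in range(len(energy))])
    hist, edges = np.histogram(waits, bins=np.arange(1, waits.max() + 2))
    print(f"E(T) = {target}: median wait = {np.median(waits):.1f} hours, max = {waits.max()} hours")
```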
The model from the previous post predicts the behavior used in the two fits:
p(t | E > E(T)) = T*exp(-T/t)/t^2
where T is the average time it will take to reach E(T). From the exponential fit in the first figure, this gives T = 200/178 and T = 1000/178, respectively, for the two charts. As expected, we get the fat tails that fall off as 1/t^2 (not 1/t^1.5, as the velocity flow argument would suggest).
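As a quick numerical check on that expression, the sketch below evaluates the density for the two T values quoted above and confirms the 1/t^2 tail; the 178 figure is simply the fitted mean rate from the first chart, in the text's units.

```python
import numpy as np

def model_pdf(t, T):
    """Entropic dispersion prediction: p(t | E > E(T)) = T*exp(-T/t)/t^2."""
    return T * np.exp(-T / t) / t ** 2

for target in (200.0, 1000.0):
    T = target / 178.0  # characteristic time = target energy / fitted mean rate
    # Far out in the tail the exponential factor approaches 1, so p(t) ~ T/t^2
    # and the density drops by a factor of ~100 per decade in t.
    ratio = model_pdf(100.0, T) / model_pdf(1000.0, T)
    print(f"T = {T:.2f} hr: p(100)/p(1000) = {ratio:.1f}  (a pure 1/t^2 tail gives 100)")
```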
The models do not work especially well at the boundary conditions, simply because the turbines' limiting governors cap the rate at which energy can accumulate toward the 1000 MWh level; reaching that level happens either in a short amount of time or, at long times, as a Poisson process of multiple gusts of lower energy. That said, any real deviations likely arise from short-duration correlations between wind energy measurements spaced close together. We do see this, as the E(200) case shows more correlation curvature at its lower limit than E(1000) does. Wind speeds do change gradually, so these correlations will occur; yet they seem like minor perturbations on the fundamental entropic dispersion model, which works quite well under these conditions.
As a bottom line, this analysis tells us what we already intuited. Because of intermittency in wind speed, it often takes a long time to accumulate a specific level of energy. Everyone knows this from their day-to-day experience dealing with the elements. However, the principle of maximum entropy lets us draw on some rather simple probability formulas to make excellent estimates for long-term use.
The derivation essentially becomes the equivalent of a permanent weather forecast. Weathermen perform a useless function in this regard. Only something on the scale of massive global warming will likely affect the stationary results.