Friday, April 30, 2010

Rayleigh Fading, Wireless Gadgets, and a Global Context

The intermittent nature of wind power that I recently posted on has a fundamental explanation based on entropy arguments. It turns out that this same entropy-based approach explains some other related noisy and intermittent phenomena that we deal with all the time. The obvious cases involve the use of mobile wireless gadgets such as WiFi devices, cell phones, and GPS navigation aids in an imperfect (i.e. intermittent) situation. The GPS behavior has the most interesting implications which I will get to in a moment.

First of all, consider that we often use these wireless devices in cluttered environments where the supposedly constant transmitted power results in frustrating fade-outs that we have all learned to live with. An example of Rayleigh fading appears to the right. You can find signal-interference-based explanations for why this happens; the mechanism amounts to the same phase cancellation that noise-cancelling headphones exploit intentionally. In the headphones, the electronics flip the phase so that all the interference turns destructive, but for wireless devices the interference arrives with random phase, some constructive and some destructive, so the result is the random signal shown.

In the limit of a highly interfering environment, the amplitude distribution of the signal follows a Rayleigh distribution, the same distribution observed for wind speed:
p(r) = 2kr * exp(-k*r^2)



Because all we know about our signal is an average power, it occurred to me that one can use Maximum Entropy Principles to estimate the amplitude from the energy stored in the signal, just like one can derive it for wind speed. So, as a starting premise, if we know the average power alone, then we can derive the Rayleigh distribution.

The following figure (taken from here) shows the probability density function of the correlated power measured from a GPS signal. Since power in an electromagnetic signal is just energy flow per unit time, we would expect the energy or power distribution to look like a damped exponential, in line with the maximum entropy interpretation. And sure enough, it does match a damped exponential exactly (note that the standard deviation equals the mean, a dead giveaway for an exponential distribution).
p(E) = k*exp(-kE)

Since power (E) is proportional to amplitude squared (r^2), we can derive the amplitude probability density function by invoking the chain rule.
p(r) = p(E) * dE/dr = k*exp(-k*r^2) * d(r^2)/dr = 2kr * exp(-k*r^2)
which precisely matches the Rayleigh distribution, implying that Rayleigh fits the bill as a Maximum Entropy (MaxEnt) distribution. So too does the uniformly random phase in the destructive interference process qualify as a MaxEnt distribution, ranging from 0 to 360 degrees (which gives an alternative derivation of Rayleigh). So all three of these distributions, Exponential, Rayleigh, and Uniform, act together to give a rather parsimonious application of the maximum entropy principle.
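
As a quick numerical sanity check on this change of variables, here is a minimal sketch (the constant k and the sample size are arbitrary choices of mine, not anything from the GPS data): draw power values from the damped exponential, take the square root to get the amplitude, and compare the histogram against 2kr*exp(-k*r^2).

```python
import numpy as np

# Minimal sketch: k and the sample size are arbitrary illustration choices.
k = 2.0
n = 200_000
rng = np.random.default_rng(0)

# MaxEnt (damped exponential) density for the power: p(E) = k*exp(-k*E)
E = rng.exponential(scale=1.0 / k, size=n)

# Power is amplitude squared, E = r^2, so the amplitude is r = sqrt(E)
r = np.sqrt(E)

# Compare the empirical amplitude histogram against the Rayleigh form 2*k*r*exp(-k*r^2)
hist, edges = np.histogram(r, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rayleigh = 2 * k * centers * np.exp(-k * centers**2)
print("max |histogram - Rayleigh| =", np.abs(hist - rayleigh).max())
```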

The most interesting implication of an entropic signal strength environment relates to how we deal with this power variation in our electronic devices. If you own a GPS, you know this from trying to acquire a GPS signal from a cold start. The amount of time it takes to acquire GPS satellites can range from seconds to minutes, and sometimes we don't get a signal at all, especially under tree cover with branches swaying in the wind.

Explaining the variable delay in GPS comes out quite cleanly as a fat-tail statistic if you understand how the GPS locks into the set of satellite signals. The solution assumes entropic variation in the signal strength and integrates this against the search space that the receiver needs to cover to lock in to the GPS satellites.

Since the search space involves time on one axis and frequency on the other, it takes in the limit ~N^2 steps to decode a solution that identifies a particular satellite signal sequence for your particular unknown starting position [1]. On average the lock occurs partway through the search, which reduces that number somewhat. We can use dynamic programming matrix methods and parallel processing (perhaps using an FFT) to get this to order N, so for a given rate the search coverage grows as t^2. The acquisition therefore takes a stochastic amount of time; according to MaxEnt, the probability that the receiver has not yet locked on by time t at an effective rate R is:
P(t | R) = exp(-c*R*t^2)
However, because of the Rayleigh fading problem we don't know the effective rate R at which we can integrate our signal. This rate has a density function proportional to the power-level distribution:
p(R) = k*exp(-k*R)
Marginalizing the conditional over all rates then gives the probability that the receiver has still not acquired a signal after time T:
P(t) = integral of P(t | R) * p(R) over all R
and this leads to the entropic dispersion result of:
P(t > T) = 1/(1 + (T/a)^2)
where a = sqrt(k/c), a number determined empirically from the data. I wouldn't consider this an extremely fat tail because the quadratic acceleration of the search tends to mitigate very long times.
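
Here is a minimal sketch of that marginalization (the constants c and k are hypothetical placeholders I chose so that a lands near the 62-second value fitted to the FireBug data below; only the ratio k/c matters): it Monte-Carlo averages the conditional over the MaxEnt-distributed rate R and compares against the closed form.

```python
import numpy as np

# Hypothetical constants for illustration only; in the post, a (~62 s) is fit to data.
c = 1.0e-3          # search-acceleration constant
k = 4.0             # MaxEnt rate parameter for the faded power level R
a = np.sqrt(k / c)  # scale parameter that falls out of the integral over R

rng = np.random.default_rng(1)
R = rng.exponential(scale=1.0 / k, size=500_000)   # p(R) = k*exp(-k*R)

for T in (10.0, 60.0, 200.0):
    # Marginalize the conditional survival P(t > T | R) = exp(-c*R*T^2) over R
    monte_carlo = np.exp(-c * R * T**2).mean()
    closed_form = 1.0 / (1.0 + (T / a) ** 2)
    print(f"T = {T:5.0f} s   Monte Carlo {monte_carlo:.4f}   1/(1+(T/a)^2) {closed_form:.4f}")
```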

I grabbed some data from a GPS project whose goal is to speed up wildfire response times by cleverly using remote transponders: the FireBug project. They published a good chunk of data for cold-start times, shown in the histogram below. Note that the data shows many times that approach 1000 seconds. The single-parameter entropic dispersion fit (a = 62 seconds) appears as the blue curve, and it fits the data quite well:



Interesting how we can sharpen the tail in a naturally entropic environment by applying an accelerating technology (also see oil discovery). Put this in the context of a diametrically opposite situation where the diffusion limitations of CO2 slow down the impulse response times in the atmosphere, creating huge fat-tails which will inevitably lead to global warming.

If we can think of some way to accelerate the CO2 removal, we can shorten the response time, just like we can speed up GPS acquisition times or speed up oil extraction. Or should we have just slowed down oil extraction to begin with?

How's that for some globally relevant context?




Notes:

[1] If you already know your position and have that stored in your GPS, the search time shrinks enormously. This is the warm- or hot-start mode currently used by most manufacturers. The cold start still happens if you transport a "cold" GPS to a completely different location and have to re-acquire the position based on unknown starting coordinates.

Correlaciones numéricas PVT


Often, however, no experimental information is available, because representative samples of the fluids cannot be obtained or because the producing horizon does not justify the expense of performing a P.V.T. analysis of the reservoir fluids. In these cases, the physical properties of the fluids must be determined by analogy or through the use of empirical correlations.

In the past, P.V.T. correlations were presented in tabular and/or graphical form; however, with the advent of programmable handheld calculators and personal computers, such correlations have been reduced to simple numerical equations or analytical expressions so they can be used in computer programs.


DETAILS
Name: Correlaciones numéricas PVT
Author: Carlos Banzer S.
Pages: 150
Publisher: INPERUZ (Universidad del Zulia)
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Computational Methods for Multiphase Flows in Porous Media


This book offers a fundamental and practical introduction to the use of computational methods, particularly finite element methods, in the simulation of fluid flows in porous media. It is the first book to cover a wide variety of flows, including single-phase, two-phase, black oil, volatile, compositional, nonisothermal, and chemical compositional flows in both ordinary porous and fractured porous media. In addition, a range of computational methods are used, and benchmark problems of nine comparative solution projects organized by the Society of Petroleum Engineers are presented for the first time in book form. Computational Methods for Multiphase Flows in Porous Media reviews multiphase flow equations and computational methods to introduce basic terminologies and notation. A thorough discussion of practical aspects of the subject is presented in a consistent manner, and the level of treatment is rigorous without being unnecessarily abstract. Each chapter ends with bibliographic information and exercises.

DETAILS
Name: Computational Methods for Multiphase Flows in Porous Media
Author: Zhangxin Chen, Guanren Huan, Yuanle Ma
Pages: 531
Publisher: Society for Industrial and Applied Mathematics (March 30, 2006)
Language: English
Hosting: Uploaded | Depositfiles
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Tuesday, April 27, 2010

The Fat-Tail in CO2 Persistence



I constantly read about different estimates for the "CO2 Half-Life" of the atmosphere. I have heard numbers as short as 6 years and others as long as 100 years or more.
ClimateProgress.org -- Strictly speaking, excess atmospheric CO2 does not have a half-life. The distribution has a very long tail, much longer than a decaying exponential.

As an approximation, use 300-400 years with about 25% ‘forever’.
....
ClimateProgress.org -- David is correct. Half-life is an inappropriate way to measure CO2 in the atmosphere. The IPCC uses the Bern Carbon Cycle Model. See Chapter 10 of the WG I report (Physical Basis) or http://www.climate.unibe.ch/~joos/OUTGOING/publications/hooss01cd.pdf

This issue has importance because CO2 latency and possibly slow retention have grave implications for rebounding from a growing man-made contribution of CO2 to the atmosphere. A typical climate sceptic response will make the claim for a short CO2 lifetime:
Endangerment Finding Proposal
Lastly; numerous measurements of atmospheric CO2 resident lifetime, using many different methods, show that the atmospheric CO2 lifetime is near 5-6 years, not 100 year life as stated by Administrator (FN 18, P 18895), which would be required for anthropogenic CO2 to be accumulated in the earth's atmosphere under the IPCC and CCSP models. Hence, the Administrator is scientifically incorrect replying upon IPCC and CCSP -- the measured lifetimes of atmospheric CO2 prove that the rise in atmospheric CO2 cannot be the unambiguous result of human emissions.
Not knowing a lot about the specific chemistry involved, but understanding that CO2 reaction kinetics has much to do with the availability of reactants, I can imagine the number might swing all over the map, particularly as a function of altitude. CO2 at higher altitudes would have fewer reactants to interact with.

So what happens if we have a dispersed rate for the CO2 reaction?

Say the CO2 mean reaction rate is R=0.1/year (or a 10 year half-life). Since we only know this as a mean, the standard deviation is also 0.1. Placing this in practical mathematical terms, and according to the Maximum Entropy Principle, the probability density function for a dispersed rate r is:
p(r) = (1/R) * exp(-r/R)
One can't really argue about this assumption, as it works as a totally unbiased estimator, given that we only know the global mean reaction rate.

So what does the tail of the reaction kinetics look like for this dispersed range of half-lives?

Assuming the individual half-life kinetics act as exponential declines, the dispersed calculation derives as follows:
P(t) = integral of p(r)*exp(-rt) over all r
This expression when integrated gives the following simple expression:
P(t) = 1/(1+Rt)
which definitely gives a fat-tail as the following figure shows (note the scale in 100's of years). I can also invoke a more general argument in terms of a mass-action law and drift of materials; this worked well for oil reservoir sizing. Either way, we get the same characteristic entroplet shape.
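
A minimal sketch of this calculation, using the two mean rates discussed below, checks the closed form 1/(1+Rt) against direct numerical integration of the dispersed exponential declines, with the plain (non-dispersed) exponential shown for contrast:

```python
import numpy as np
from scipy.integrate import quad

def dispersed_remaining(t, R):
    """Fraction remaining at time t when exponential declines with rate r are
    dispersed according to the MaxEnt density p(r) = (1/R)*exp(-r/R)."""
    integrand = lambda r: (1.0 / R) * np.exp(-r / R) * np.exp(-r * t)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

t = 500.0                        # years
for R in (0.1, 0.01):            # the two mean rates discussed in the text (1/year)
    numeric = dispersed_remaining(t, R)
    closed = 1.0 / (1.0 + R * t) # P(t) = 1/(1 + R*t)
    plain = np.exp(-R * t)       # non-dispersed exponential decline, for comparison
    print(f"R = {R:4.2f}/yr  dispersed {numeric:.4f} vs {closed:.4f}   plain exponential {plain:.2e}")
```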

Figure 1: Drift (constant rate) Entropic Dispersion

For the plot above, at 500 years, for R=0.1, about 2% of the original CO2 remains. In comparison, for a non-dispersed rate the amount remaining would drop to exp(-50), or ~2*10^-20 %!

Now say that R is closer to a dispersed mean of 0.01, or a nominal 100-year half-life. Then the amount left at 500 years sits at 1/(1+0.01*500) = 1/6 ~ 17%.

In comparison, the exponential would drop to exp(-500/100) = 0.0067 ~ 0.7%

Also, 0.7% of the rates will generate a half-life of 20 years or shorter (rates of 0.05/year or faster, which occur with probability exp(-0.05/0.01) = exp(-5) ~ 0.7%). Rates this fast could conceivably come from the volumes of the atmosphere close to the ocean.

Now it gets interesting ...

Climatologists refer to the impulse response of the atmosphere to a sudden injection of carbon as a key indicator of climate stability. Having this kind of response data allows one to infer the steady state distribution. The IPCC used this information in their 2007 report.
Current Greenhouse Gas Concentrations
The atmospheric lifetime is used to characterize the decay of an instantaneous pulse input to the atmosphere, and can be likened to the time it takes that pulse input to decay to 0.368 (1/e) of its original value. The analogy would be strictly correct if every gas decayed according to a simple exponential curve, which is seldom the case.
...
For CO2 the specification of an atmospheric lifetime is complicated by the numerous removal processes involved, which necessitate complex modeling of the decay curve. Because the decay curve depends on the model used and the assumptions incorporated therein, it is difficult to specify an exact atmospheric lifetime for CO2. Accepted values range around 100 years. Amounts of an instantaneous injection of CO2 remaining after 20, 100, and 500 years, used in the calculation of the GWPs in IPCC (2007), may be calculated from the formula given in footnote a on page 213 of that document. The above-described processes are all accounted for in the derivation of the atmospheric lifetimes in the above table, taken from IPCC (2007).

Click on the following captured screenshot for the explanation of the footnote.

The following graph shows impulse responses from several sets of parameters using the referenced Bern IPCC model (found in Parameters for tuning a simple carbon cycle model). What I find bizarre about this result is that it shows an asymptotic trend to a constant baseline, and the model parameters reflect this. For a system at equilibrium, the impulse response decay should go to zero. I believe that it physically does, but that this model completely misses the fact that it eventually should decay completely. In any case, the tail shows a huge amount of "fatness", easily stretching beyond 100 years, and something else must explain this fact.

Figure 2: IPCC Model for Impulse Response

If you think of what happens in the atmosphere, the migration of CO2 from low-reaction rate regions to high-reaction rate regions can only occur via the process of diffusion. We can write a simple relationship for Fick's Law diffusion as follows:
dG(t)/dt = D (C(0)-C(x))/G(t)
This states that the growth rate dG(t)/dt remains proportional to the gradient in concentration it faces. As a volume gets swept clean of reactants, G(t) gets larger and it takes progressively longer for the material to "diffuse" to the side where it can react. This basically describes oxide growth as well.
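
As a minimal sketch (D and the concentration difference are placeholder values of mine), one can integrate this growth law numerically and confirm the square-root-of-time behavior derived next:

```python
import numpy as np

# Hypothetical constants for illustration: D a diffusivity, dC a fixed concentration difference.
D, dC = 1.0, 1.0
dt, n_steps = 1e-3, 200_000   # integrate out to t = 200 (arbitrary units)

# Euler-integrate the Fick's-law-limited growth dG/dt = D*dC/G
G, t = 0.5, 0.0               # modest seed value for G; its influence washes out at large t
for _ in range(n_steps):
    G += dt * D * dC / G
    t += dt

# Separating variables gives G(t) = sqrt(G0^2 + 2*D*dC*t), i.e. square-root-of-time growth
print("numerical G(t) :", G)
print("sqrt(2*D*dC*t) :", np.sqrt(2 * D * dC * t))
```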

The outcome of Fick's Law generates a growth law that goes as the square root of time, t^(1/2). According to the dispersion formulation for cumulative growth, we simply have to replace the previous linear drift growth rate shown in Figure 1 with the diffusion-limited growth rate:
P(t) = 1/(1 + R*t^(1/2))
or in an alternate form where we replace the probability P(t) with a normalized response function R(t):
R(t) = a/(a + t^(1/2))
At small time scales, diffusion can show an infinite growth slope, so using a finite width unit pulse instead of a delta impulse will create a reasonable picture of the dispersion/diffusion dynamics.

Remarkably, this simple model reproduces the IPCC-SAR model almost exactly, with the appropriate choice of a and a unit pulse input of 2 years. The IPCC-TAR fit uses a delta impulse function. The analytically calculated points lie right on top of the lines of Figure 2, which actually makes it hard to see the excellent agreement. The window of low to high reaction rates generates a range of a from 1.75 to 3.4, or approximately a 50% variation about the nominal. I find it very useful that the model essentially boils down to a single parameter of entropic rate origin (while both diffusion and dispersion generate the shape).

Figure 3: Entropic Dispersion with diffusional growth kinetics describes the CO2 impulse response function with a single parameter a. The square of this number describes a characteristic time for the CO2 concentration lifetime.

You don't see it on this scale, but the tail will eventually reach zero, falling off asymptotically as the inverse square root of time. In 10,000 years, it will reach approximately the 2% level (i.e. ~2/sqrt(10000)).
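
Here is a minimal sketch of that response (my own illustration, not the Bern code): it evaluates a/(a + sqrt(t)) smeared over the 2-year unit pulse, across the a = 1.75 to 3.4 window quoted above, at a few of the horizons discussed in this post.

```python
import numpy as np
from scipy.integrate import quad

def impulse_response(t, a):
    """Entropic dispersion with diffusion-limited (square-root-of-time) growth."""
    return a / (a + np.sqrt(t))

def pulse_response(t, a, width=2.0):
    """Impulse response convolved with a unit rectangular injection of the given
    width (the 2-year unit pulse mentioned above), i.e. a running integral."""
    lo = max(0.0, t - width)
    value, _ = quad(impulse_response, lo, t, args=(a,))
    return value / width

# a spanning the 1.75-3.4 window of low to high reaction rates
for a in (1.75, 3.4):
    fractions = [pulse_response(t, a) for t in (20.0, 100.0, 500.0, 10_000.0)]
    print(f"a = {a}: " + ", ".join(f"{t:g} yr -> {f:.1%}"
                                   for t, f in zip((20, 100, 500, 10_000), fractions)))
```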

Two other interesting observations grow out of this most parsimonious agreement.

First of all, why did the original IPCC modelers from Bern not use an expression as simple as the entropic dispersion formulation? Instead of using a three-line derivation with a resultant single parameter to model with, they chose an empirical set of 5 exponential functions with a total of 10 parameters and then a baseline offset. That makes no sense unless their model essentially grows out of some heuristic fit to measurements from a real-life carbon impulse (perhaps data from paleoclimatology investigation of an ancient volcanic eruption; I haven't tracked this down yet). I can only infer that they never made the connection to the real statistical physics.

Secondly, the simple model really helps explain the huge discrepancy between the short lifetimes quoted by climate sceptics and the long lifetimes stated by the climate scientists. These differ by more than an order of magnitude. Yet, just by looking at the impulse response in Figure 3, you can see the fast decline that takes place in less than a decade and distinguish this from the longer decline that occurs over the course of a century. This results as a consequence of the entropy within the atmosphere, which leads to a large dispersion in reaction rates, with the rates limited by diffusion kinetics as the CO2 migrates to more conducive volumes. The fast slope evolving gradually into a slow slope has all the "law of diminishing returns" character of diffusion, with the precise fit occurring because I included dispersion correctly and according to maximum entropy principles. (Note that I just finished a post on cloud ice crystal formation kinetics which shows this same parsimonious agreement.)

Think of it this way: if this simple model didn't work, one would have to reason why it failed. I contend that entropy and disorder in physical processes play such a large role that they end up controlling a host of observations. Unfortunately, most scientists don't think in these terms; they still routinely rely on deterministic arguments alone. That gets them in the habit of using heuristics instead of the logically appropriate stochastic solution.

This leads me to realize that the first two observations have the unfortunate effect of complicating the climate change discussion. I don't really know, but might not climate change deniers twist facts that have just a kernel of truth? Yes, "some" of the CO2 concentrations may have a half-life of 10 years, but that completely misses the point that variations can and do occur. I am almost certain that sceptics who hang around at sites like ClimateAudit.org see that initial steep slope on the impulse response, convince themselves that a 10-year half-life must hold, and then decide to use that to challenge climate change science. Heuristics give the skilled debater ammo to argue their point any way they want.

I can imagine that just having the ability to argue in the context of a simple entropic disorder can only help the discussion along, and relying on a few logically sound first-principles models provides great counter-ammo against the sceptics.

One more thing ...

So we see how a huge fat tail can occur in the CO2 impulse response. What kind of implication does this have for the long term?

Disconcerting, and that brings us to the point that climate scientists have made all along. One can demonstrate that a fat tail in CO2 latency will cause the responses to forcing functions to continue to get worse over time.

As this paper notes and I have modeled, applying a stimulus generates a non-linear impulse response which will look close to Figure 3. Not surprisingly but still quite disturbing, applying multiple forcing functions as a function of time will not allow the tails to damp out quickly enough, and the tails will gradually accumulate to a larger and larger fraction of the total. Mathematically you can work this out as a convolution and use some neat techniques in terms of Laplace or Fourier transforms to prove this analytically or numerically.
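
As a minimal sketch of that convolution argument (the constant annual emission and the 10-year exponential comparison are hypothetical illustration choices), one can convolve a steady forcing with the fat-tailed response and watch the accumulation keep growing while a thin-tailed response saturates:

```python
import numpy as np

# Minimal sketch with a hypothetical forcing: one unit of emissions injected every year,
# convolved with the fat-tailed response a/(a + sqrt(t)) and, for contrast, with a
# thin-tailed exponential response exp(-t/10).
a = 2.0
years = np.arange(0.0, 500.0)
fat_tail = a / (a + np.sqrt(years))
thin_tail = np.exp(-years / 10.0)
emissions = np.ones_like(years)

fat_accum = np.convolve(emissions, fat_tail)[: len(years)]
thin_accum = np.convolve(emissions, thin_tail)[: len(years)]

for t in (50, 100, 200, 400):
    print(f"year {t:3d}: fat-tail accumulation {fat_accum[t]:7.1f}   thin-tail {thin_accum[t]:6.1f}")
```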

This essentially explains the "25% forever" in the ClimateProgress comment. Dispersion of rates essentially prevents the concentrations from reaching a comfortable equilibrium. The man-made forcing functions keep coming, and we have no outlet to let them dissipate quickly enough.

I realize that we also need to consider the CO2 saturation level in the atmosphere. We may asymptotically reach this level and therefore stifle the forcing function build-up, but I imagine that no one really knows how this could play out.

As to one remaining question, do we believe that this dispersion actually exists? Applying Bayes Theorem to the uncertainty in the numbers that people have given, I would think it likely. Uncertainty in people's opinions usually results in uncertainty (i.e. dispersion) in reality.

This paper addresses many of the uncertainties underlying climate change: The shape of things to come: why is climate change so predictable?
The framework of feedback analysis is used to explore the controls on the shape of the probability distribution of global mean surface temperature response to climate forcing. It is shown that ocean heat uptake, which delays and damps the temperature rise, can be represented as a transient negative feedback. This transient negative feedback causes the transient climate change to have a narrower probability distribution than that of the equilibrium climate response (the climate sensitivity). In this sense, climate change is much more predictable than climate sensitivity. The width of the distribution grows gradually over time, a consequence of which is that the larger the climate change being contemplated, the greater the uncertainty is about when that change will be realized. Another consequence of this slow growth is that further efforts to constrain climate sensitivity will be of very limited value for climate projections on societally-relevant time scales. Finally, it is demonstrated that the effect on climate predictability of reducing uncertainty in the atmospheric feedbacks is greater than the effect of reducing uncertainty in ocean feedbacks by the same proportion. However, at least at the global scale, the total impact of uncertainty in climate feedbacks is dwarfed by the impact of uncertainty in climate forcing, which in turn is contingent on choices made about future anthropogenic emissions.
In some sense, the fat-tails may work to increase our certainty in the eventual effects -- we only have uncertainty in when it will occur. People always think that fat-tails only expose the rare events. In this case, they can reveal the inevitable.


Added Info:

Segalstad pulled together all the experimentally estimated residence times for CO2 that he could find, and I reproduced them below. By collecting the statistics for the equivalent rates, it turns out that the standard deviation approximately equals the mean (0.17/year) -- this supports the idea that the uncertainty in rates found by measurement matches the uncertainty found in nature, thus giving the entropic fat tail. These still don't appear to consider diffusion, which fattens the tail even more.



Authors [publication year]: Residence time (years)

Based on natural carbon-14:
Craig [1957]: 7 +/- 3
Revelle & Suess [1957]: 7
Arnold & Anderson [1957], including living and dead biosphere: 10
(Siegenthaler, 1989): 4-9
Craig [1958]: 7 +/- 5
Bolin & Eriksson [1959]: 5
Broecker [1963], recalc. by Broecker & Peng [1974]: 8
Craig [1963]: 5-15
Keeling [1973b]: 7
Broecker [1974]: 9.2
Oeschger et al. [1975]: 6-9
Keeling [1979]: 7.53
Peng et al. [1979]: 7.6 (5.5-9.4)
Siegenthaler et al. [1980]: 7.5
Lal & Suess [1983]: 3-25
Siegenthaler [1983]: 7.9-10.6
Kratz et al. [1983]: 6.7

Based on Suess Effect:
Ferguson [1958]: 2 (1-8)
Bacastow & Keeling [1973]: 6.3-7.0

Based on bomb carbon-14:
Bien & Suess [1967]: >10
Münnich & Roether [1967]: 5.4
Nydal [1968]: 5-10
Young & Fairhall [1968]: 4-6
Rafter & O'Brian [1970]: 12
Machta (1972): 2
Broecker et al. [1980a]: 6.2-8.8
Stuiver [1980]: 6.8
Quay & Stuiver [1980]: 7.5
Delibrias [1980]: 6
Druffel & Suess [1983]: 12.5
Siegenthaler [1983]: 6.99-7.54

Based on radon-222:
Broecker & Peng [1974]: 8
Peng et al. [1979]: 7.8-13.2
Peng et al. [1983]: 8.4

Based on solubility data:
Murray (1992): 5.4

Based on carbon-13/carbon-12 mass balance:
Segalstad (1992): 5.4

Offshore Blowouts: Causes and Control


Exploration and development of offshore oil and gas fields involve a number of risks related to loss of human lives, pollution, and loss of material assets. All those involved in the offshore industry are aware of the hazards. The potential for major accidents will always be present, but it is important to keep the risks within acceptable levels, and as low as reasonably practicable.

A main contributor to the total risk is uncontrolled release of pressurized hydrocarbons, i.e., gas leakages and blowouts. It should, however, not be forgotten that other aspects such as vessel stability, helicopter transport, and occupational accidents are also significant contributors to the total risk.


History shows that uncontrolled releases of hydrocarbons have caused several major accidents. The Bravo blowout on the Ekofisk field in 1977, the West Vanguard blowout in 1985, the Piper Alpha gas leak in 1988, and the Ocean Odyssey blowout in 1988 are all well-known accidents that occurred in the North Sea. In addition, several less severe accidents involving uncontrolled releases have occurred in the North Sea.


DETAILS
Name: Offshore Blowouts: Causes and Control
Author: Per Holand
Pages: 177
Publisher: Gulf Publishing Company
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Saudi Aramco well control


The single most important step to blowout prevention is closing the blowout preventers when the well kicks. The decision to do so may well be the most important of your working life. It ranks with keeping the hole full of fluid as a matter of extreme importance in drilling operations. The successful detection and handling of threatened blowouts (‘kicks’) is a matter of maximum importance to our company. Considerable study and experience has enabled the industry to develop simple and easily understood procedures for detecting and controlling threatened blowouts. It is extremely important that supervisory personnel have a thorough understanding of these procedures as they apply to Saudi Aramco operated drilling rigs.

DETAILS
Name: Saudi Aramco well control
Author: N/A
Pages: 422
Publisher: Saudi Aramco
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Blowout and well control handbook


As with his 1994 book, Advanced Blowout and Well Control, Grace offers a book that presents tested practices and procedures for well control, all based on solid engineering principles and his own more than 25 years of hands-on field experience. Specific situations are reviewed along with detailed procedures to analyze alternatives and tackle problems. The use of fluid dynamics in well control, which the author pioneered, is given careful treatment, along with many other topics such as relief well operations, underground blowouts, slim hole drilling problems, and special services such as fire fighting, capping, and snubbing. In addition, case histories are presented, analyzed, and discussed.


Contents

1. Equipment in Well Control 2. Classic Pressure Control Procedures While Drilling 3. Pressure Control Procedures While Tripping 4. Special Conditions, Problems, and Procedures in Well Control 5. Fluid Dynamics in Well Control 6. Special Services in Well Control 7. Relief Well Design and Operations 8. The Underground Blowout 9. The Al-Awda Project: The Oil Fires of Kuwait 10. Index.


DETAILS
Name: Blowout and well control handbook
Author: Robert D. Grace
Pages: 469
Publisher: Gulf Professional Publishing
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Advanced blowout & well control

In petroleum industry drilling operations even the most simple blowout can kill people and cost millions of dollars in equipment losses - and millions more in environmental damage and ensuing litigation. Government environmental and safety requirements demand that operating companies and drilling contractor personnel be rigorously trained in well control procedures and in responding to blowouts when they occur.


The book reviews classical pressure control procedures. In addition, specific situations are presented along with detailed procedure to analyze alternatives and tackle problems. The use of fluid dynamics in well control, which the author pioneered, is given careful treatment, along with many other topics such as relief well operations, underground blowouts, slim hole drilling problems, and special services such as fire fighting, capping, and snubbing. Case histories are presented, analyzed, and discussed.



DETAILS
Name: Advanced blowout & well control
Author: Robert D. Grace
Pages: 396
Publisher: Gulf Professional Publishing; illustrated edition (November 25, 1994)
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!

Well Control Training Manual


Tested to prove their competence. Unquestionably, training plays an important role in successful well control, for a drilling crew that knows and understands the principles and technical procedures of well control is a crew that is less likely to experience a well blowing out of control.


This manual is intended to be a training aid for all personnel who are concerned with well control—rotary helpers, drillers, toolpushers, company representatives, or anyone whose job takes him or her directly onto a rig location. The book presents a practical approach to well control in that it emphasizes the things a rig crew should know and be able to do to control a well. It is also used as a basic textbook by those who attend well-control courses conducted by many training organizations.


DETAILS
Name: Well Control Training Manual
Author: Aberdeen Drilling Schools
Pages: 390
Publisher: Class III electronic document
Language: English
Password: www.casapetrolera.blogspot.com

After clicking the download link, an intermediate page will appear; just click SKIP AD and you will be taken to where the file is hosted... enjoy it!