The idea behind hyperbolic discounting makes a lot of sense, and it uses the same argument that I started with in estimating reserve growth. The first level of uncertainty involves a Bayesian or Maximum Entropy estimate of the probability distribution of the rate of advance through a region. This becomes an exponential PDF characterized by a mean value. But since we don't know the mean value either (it depends on the size of the sweet spot explored), we have to go through another level of distribution smearing to further qualify our uncertainty. This reduces the certainty of our initial estimate and increases the entropy, essentially doubling the Shannon value. This may be the maximum entropy formulation where we only know the mode of the distribution function (since the mean or expected value no longer integrates to a finite value).
The wiki page on hyperbolic discounting has a neat derivation that explains the concept by invoking a double stochastic integration, leading to the result:
Discount = 1/(1 + kt)
where t is the time delay of the reward and k describes the uncertainty.
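For reference, here is a sketch of how that double integration works out, under the assumption (mine, consistent with the description above) that the underlying discount is exponential, e^(-rt), with the rate r itself smeared by a maximum entropy (exponential) prior of mean k:

```latex
% Averaging the exponential discount e^{-rt} over an exponential
% (maximum entropy) prior on the rate r, with mean value k:
\[
D(t) \;=\; \int_0^{\infty} e^{-rt}\,\frac{1}{k}\,e^{-r/k}\,dr
      \;=\; \frac{1}{k}\cdot\frac{1}{t + 1/k}
      \;=\; \frac{1}{1 + kt}
\]
```

Note that D(t) decays so slowly that the expected delay no longer integrates to a finite value, which is exactly the extra level of uncertainty described above.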
And of course (since everything fits together like a jigsaw puzzle) this looks a lot like the odds function I discussed recently.
Probability = 1/(1 + Odds)
So the kt factor serves as an odds function that the human brain subliminally processes when trying to make a decision based on a deferred reward.
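Spelling out that correspondence (my gloss, not from the wiki derivation), with Odds meaning the odds against collecting, the two forms line up directly:

```latex
\[
P \;=\; \frac{1}{1 + \mathrm{Odds}}, \qquad
D(t) \;=\; \frac{1}{1 + kt}
\;\;\Longrightarrow\;\; \mathrm{Odds} = kt
\]
```

In other words, the perceived odds against ever collecting the reward grow linearly with the delay t.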
Studies of discounting have found that when a subject gets offered the choice "I'll either give you 100 dollars, or we can flip a fair coin and I'll give you 300 dollars if it's heads, nothing otherwise", people tend to choose not to risk losing the hypothetical 100 dollars, even though the gamble carries an expected value of 150 dollars. The uncertain future reward does not outweigh the immediate payoff. This also shows up in studies of drug addiction.
This essentially explains the heads/tails decision making: the person remains uncertain about whether they will actually receive the reward in the future. In this case the odds favor the subject, but the eventual payoff may never occur (i.e. effectively t = infinity). Obviously the professional gambler expects that he can play the game several times and eventually beat the odds. But playing just the ONE time, the person offered the reward takes the easy route.
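As a toy illustration (my own sketch, not drawn from the studies themselves), a few lines of Python show why the one-shot and repeated versions of the coin-flip game feel so different even though the expected value is 150 dollars either way:

```python
import random

def coin_flip_game(n_plays, payoff=300.0, p_win=0.5, seed=1):
    """Average payout per play of the 'heads wins 300 dollars' gamble."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_plays) if rng.random() < p_win)
    return wins * payoff / n_plays

# One shot: the outcome is all-or-nothing, 0 or 300 dollars,
# a huge spread around the nominal expected value of 150.
print(coin_flip_game(1))

# The professional gambler's view: many plays converge on the
# expectation, comfortably beating the sure 100 dollars.
print(coin_flip_game(10_000))
```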
That explains why hyperbolic discounting has such a wide probability spread: it errs on the conservative side, and k gets established from Bayesian observations of human behavior rather than from the true odds.
Which brings me to the reality of probabilities.
I came across a lengthy comment in a recent issue of the American Journal of Physics titled "Econophysics and economics: Sister disciplines?" [1].
Are econophysics and economics complementary fields or totally separated disciplines? In this paper I argue that econophysics is not a subfield of economics, and these two fields are separate disciplines.
That section essentially explains my approach. I have no economic model to push; instead I try to explain the data based on the logic of science: probability theory.
There are two kinds of gaps between economics and econophysics. The methodological gap refers to a way of doing science. Although economists base their work on a priori methodology, econophysicists use a data-driven methodology. The other gap concerns the way they think about reality. Econophysicists and economists do not see the world in the same way.
In contrast to econophysics, economics is not an empirical discipline. Even if there are debates about the empirical dimension of economics, the empirical dimension in economics is exaggerated. According to econophysicists, complexity studies need an empirical basis. “The real empirical data are certainly at the core of this whole enterprise [econophysics] and the models are built around it, rather than some non-existent, ideal market as in economics.” This empirical dimension is frequently mentioned in econophysical research and is often presented as the main difference with economics.
This difference between economics and econophysics can be illustrated by considering fat tails or financial crashes. Economists assume that price changes obey a lognormal probability distribution with a near zero kurtosis (a mesokurtic distribution). This a priori perspective implies that massive fluctuations have a very small probability. However, real data show a positive kurtosis and a leptokurtic distribution in which extreme events have a higher probability of occurring. By beginning with observed data, econophysicists develop models in which some extreme events such as a financial crash can occur. This a priori thinking leads economists to underestimate the occurrence of financial crashes. “The standard theory, as taught in business schools around the world, would estimate the odds of that final, August 31 [1998] collapse at one in 20 million—an event that, if you traded daily for nearly 100,000 years, you would not expect to see even once.” However, several financial crises were observed during the past century, and therefore economic theory seems unable to describe this kind of phenomenon.
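That mesokurtic/leptokurtic distinction is easy to check numerically. A minimal sketch (mine, using simulated draws; the paper of course refers to real market data):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

rng = np.random.default_rng(0)
n = 1_000_000
gaussian = rng.normal(size=n)              # the a priori mesokurtic assumption
fat_tailed = rng.standard_t(df=5, size=n)  # a leptokurtic stand-in for real returns

print(excess_kurtosis(gaussian))    # ~0: extreme moves essentially never happen
print(excess_kurtosis(fat_tailed))  # ~6: extreme events carry real probability
```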
Economists tend to forget that their probabilistic approach to uncertainty is an incomplete representation of reality, and they substitute their models for uncertainty.
The author states that economists invariably reduce uncertainty to risk because, after all, economic models exist to support prediction of financial reward and failure (what else?). Econophysics becomes a more uncertainty-oriented discipline than economics because its practitioners describe the behaviors operationally, using more rigorous physical explanations such as maximum entropy.
Economists invent economic reality, while econophysicists try to describe it.
An amazing commentary, and please read the whole thing. My last post on labor productivity stated in so many words that I couldn't care less about the individual agent psychology; all that matters is the statistical ensemble behavior. Which then helps to explain this:
Despite this diversity, complex economic systems seem to obey a kind of invariance that can be characterized by power law distributions of the general form p(x) ~ x^(-a), where p(x) is the probability of an event of magnitude x and the scaling exponent a can be determined either by empirically observed behavior of the system or by a theory or simulation.
I got that covered all right.
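And the scaling exponent a really can be pulled straight from observed magnitudes. A minimal sketch of the standard maximum likelihood (Hill-type) estimator, tested here on synthetic power law draws (my own example, not from the paper):

```python
import numpy as np

def mle_exponent(x, xmin):
    """Continuous power law MLE: a = 1 + n / sum(ln(x/xmin)), for x >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.log(tail / xmin).sum()

rng = np.random.default_rng(0)
xmin, a_true = 1.0, 2.5
# Inverse-transform sampling from p(x) ~ x^(-a) for x >= xmin
u = rng.random(100_000)
samples = xmin * (1.0 - u) ** (-1.0 / (a_true - 1.0))

print(mle_exponent(samples, xmin))  # ~2.5, recovered from the data alone
```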
[1] C. Schinckus, "Econophysics and economics: Sister disciplines?", Am. J. Phys. 78, 325 (2010).