Quantifying the Value of Bitcoin

This post was updated on 3-13-14 to include discussion on the fundamental eigenvalue and the concept of criticality. If you have read the original post the additional discussion is at the very end. It starts off with good math and then gets a little ranty and comes back to some chart porn.
This update will be my last, and this post will be my last for a while. If you want to participate and think you have something worthwhile to contribute, contact me and show me what you have. If it's honest work, I'll post it. Otherwise, please try again.
If I didn't already mention it, the early stages of the bitcoin transaction history are not well suited to the model I made. While quantities like entropy are defined, approximating the transaction distribution as continuous is a poor assumption there; a proper treatment requires a quantized approach. For practical purposes today, the continuous approach is simple and effective, yielding powerful results.

Bitcoin is a tremendous experiment. Here we have a ledger that is open for everyone to inspect and use. Because every transaction is recorded, the potential for data mining is nearly unlimited. Today's post will do some data mining, but on a very limited scale. This will be a relatively long post, heavy in theory and math. Of course, you can skip all the "blahh, blahh blahh, math, blah, blah" and just look at the pretty pictures. Included in today's post are the ~1-hour (6 blocks) averaged data files and the Matlab source code. To use the Matlab source, you will need a parsed blockchain database; I used bitcoin-abe v0.9 compiled under Mac OS 10.9.2 to create a MySQL database. I can't make the database available because my ISP would hate me, and there is not enough space on my WordPress account.

I develop a measure of the marginal utility of bitcoin from the time needed to find each block. This is simply the estimated network hash rate divided by the total coinbase for that interval; the difficulty is in arriving at this relationship. To do this, I used the complementary relationship of time and the Hamiltonian (utility). Armed with an understanding of the relationship of time to utility, we use the time to discover a new block as the measure of the utility gained by the discovery. The effort of the miners to discover the coinbase is the marginal power [TH/BTC-s] needed to maintain the miners' operations. We will later expand on the relationship between a TH/s and the MW. We could choose to leave the network power as a nondimensional quantity, using difficulty in lieu of hash rate, but for the sake of making clearer analogies we will use the dimensional form.
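As a concrete sketch of this calculation (the post's own code is Matlab; this is an equivalent Python fragment with a function name of my choosing), the pressure for one 6-block interval can be computed from the interval's difficulty and total coinbase, using the standard relation that finding one block takes on average difficulty × 2³² hashes:

```python
def pressure_th_per_btc(difficulty, coinbase_btc, blocks=6):
    """Bitcoin 'pressure' (marginal utility) for one interval, in TH/BTC.

    The expected work to find one block is difficulty * 2**32 hashes,
    so the interval's total expected work divided by its total coinbase
    gives terahashes expended per bitcoin created.
    """
    expected_hashes = blocks * difficulty * 2**32
    return expected_hashes / 1e12 / coinbase_btc
```

Dividing the total expected work by the coinbase issued gives the TH/BTC figure plotted in Figure 1; dividing instead by the interval length in seconds recovers the TH/BTC-s marginal power mentioned above.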

The Uncertainty Principle

Werner Heisenberg developed the relationship between our knowledge of the complementary measures of a system. His work focused on a particle's position and momentum. While we are not dealing with particles, or with position or momentum in real space, we are dealing with a measurable space. The blockchain contains a record of every transaction. We can, in theory and in practice, count these transactions, allowing us to apply the principles of statistical inference to the data at hand. I began in Elementary Principles of Statistical Economics by showing how to use a Hamiltonian to evolve a probability density of state in time. The Hamiltonian is merely an operator, and if it exists for a space, then under the uncertainty principle there exists a complementary measure: time. John Baez has an interesting post that explores the relationship between time and the Hamiltonian.

The time stamp of the block creation time is not a precise measure. There is a great deal of uncertainty behind the measure due to the constraints in the code for permissible time stamps. Because of this uncertainty, we cannot precisely know the time t; we measure time t^*. We describe the uncertainty principle as,

\left(\Delta t^*\right)\left(\Delta H\right)\geq C                           (1)

At the classical limit, we find that the fluctuation of the Hamiltonian, \Delta H, is inversely proportional to the fluctuation in measured time, \Delta t^*. Using this limit, we take the inverse of the change in measured time as our measure of the fluctuation of the Hamiltonian.

\Delta H\propto\frac{1}{\Delta t^*}                             (2)

Quantifying Intensive Measures

Our attention now turns to measuring the intensive parameters of the system defined by the blockchain. We develop measures first for pressure and then for temperature.

bitcoin pressure

When we include the coinbase of each block we find the measure of the marginal utility, pressure, of bitcoin. The marginal utility of bitcoin is defined as,

\lambda = \frac{\partial U}{\partial M}                                (3)

I apologize for the variable confusion. When we typically talk about utility we use U; to this point I have followed the physics convention of referring to the internal energy/utility U, in quantum contexts, as the operator H. Under the constraints of Bitcoin, miners work for a specific payout of bitcoins, the coinbase; how hard they work is measured by the time taken to discover the next block. In this context, mining acts to determine the marginal utility of finding the next bitcoin. Figure 1 shows the measured marginal utility of bitcoin, using the aggregate information every 6 blocks, approximately hourly. The study period is from the genesis block through block 289207. We express the units of bitcoin pressure as TH/BTC, terahash per bitcoin.

Figure 1: Marginal utility of bitcoin

bitcoin specific internal utility

The internal utility of the Bitcoin network is not an intensive/endogenous measure; it is an extensive/exogenous measure. Later, I will show how it is proportional to the temperature (an intensive measure). This requires assuming a particular utility function, which at this point limits the generality of the approach. We will therefore avoid such considerations until we have developed the theoretical measures.

We note that the distribution of bitcoin transactions is most closely represented by the log-normal distribution. I looked at both \mathtt{tx_{out}} and \mathtt{tx_{in}}; \mathtt{tx_{out}} provided the best representation, as there can be only one change address when using a wallet based on the reference implementation. While transactions occur discretely and are integer numbers of satoshi, there is enough divisibility, and especially later enough transactions (>30 per 6 blocks), to approximate the system classically as a continuous distribution. Only in the first year, when there aren't enough transactions, does this assumption break down. In the recent years, with which we are concerned, the assumption holds relatively well.

The temperature of a log-normal distribution is the expectation of \mathtt{tx_{out}} over each integration period. I showed this in Various Properties of the Log-Normal Distribution. We determine this by:

E\left[\mathtt{tx_{out}}\right]=e^{\mu +\frac{\sigma^2}{2}}                                    (4)

We compute \mu and \sigma using the MLE estimates for a log-normally distributed variable.
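For reference, the MLE estimates are just the sample mean and (1/n) variance of the logged transactions. A minimal Python sketch (function names are mine, not from the original Matlab source):

```python
import math

def lognormal_mle(samples):
    """MLE of (mu, sigma) for log-normally distributed samples:
    mu is the mean of the logs, sigma^2 the (1/n) variance of the logs."""
    logs = [math.log(x) for x in samples]
    mu = sum(logs) / len(logs)
    var = sum((v - mu) ** 2 for v in logs) / len(logs)  # MLE uses 1/n, not 1/(n-1)
    return mu, math.sqrt(var)

def expected_txout(mu, sigma):
    """Equation (4): E[tx_out] = exp(mu + sigma^2 / 2)."""
    return math.exp(mu + sigma ** 2 / 2)
```

Applied to each 6-block window of \mathtt{tx_{out}} values, these two functions give the per-interval \mu, \sigma, and expectation used throughout the rest of the post.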

This expectation is related to the specific internal utility, u. The units of the expectation are measured in BTC, not in units of utility. To convert to utility we let:

u = \lambda E\left[\mathtt{tx_{out}}\right]

Figure 2 shows the specific internal utility with units of TH.

Figure 2: Bitcoin specific internal utility

bitcoin participant potential

The \sigma statistic of the log-normal distribution has a component of variance that is independent of location, \mu. When we rearrange the entropy in terms of E\left[\mathtt{tx_{out}}\right], \sigma, and \lambda we have,

S=\frac{1}{2}\mathrm{ln}\left[2\pi e\sigma^2e^{-\sigma^2}\right]+\mathrm{ln}\left[\lambda E\left[\mathtt{tx_{out}}\right]\right]

We define a new parameter z=\sigma^2e^{-\sigma^2}, whose inversion involves the Lambert W function. For \sigma^2\geq0 , z\in[0,\frac{1}{e}]. \sigma describes the distance of the system from the maximum entropic carrying capacity. Systems of relatively low density have \sigma>1; systems of relatively high density have \sigma<1. Figure 3 shows three distinct phases of the bitcoin network: inception, expansion, and our current maturation. Bitcoin will likely reach an equilibrium of \sigma=1, which maximizes the network's entropy for any given utility and total satoshis. With bitcoin transactions being log-normally distributed, the z potential is \mu_z=-\frac{T}{2z}. At the point of maximum entropy, \mu_z=-\frac{e}{2}T.
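A small Python sketch of these two quantities (names are mine); note that z is maximized at \sigma=1, where it equals 1/e, consistent with the maximum-entropy claim above:

```python
import math

def z_param(sigma):
    """Structural parameter z = sigma^2 * exp(-sigma^2);
    maximized (z = 1/e) at sigma = 1."""
    return sigma ** 2 * math.exp(-sigma ** 2)

def entropy(sigma, lam, e_txout):
    """S = (1/2) ln[2 pi e z] + ln[lambda * E[tx_out]]."""
    return 0.5 * math.log(2 * math.pi * math.e * z_param(sigma)) \
        + math.log(lam * e_txout)
```

Evaluating z_param on the per-interval \sigma estimates reproduces the three phases visible in Figure 3: z climbs toward 1/e as the network matures from either the low-density (\sigma>1) or high-density (\sigma<1) side.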

Figure 3: z-parameter

The phases of bitcoin remind me of evolutionary history. When life first appeared there were very few forms, early adopters if you will, that existed in what can best be thought of as quantum states of very low biodiversity. The expansion phase is akin to the Cambrian Explosion, where life tried every combination it could think of, causing a very low density of states, \sigma>1. Life on the planet then went through a consolidation phase where many of the life forms that were tried out became extinct, as they couldn't compete with the more capable forms. Life then came back to an equilibrium where \sigma=1. The density of life then appears to follow the constraints of the planet, including several ice ages and 5 major extinction events. Species diversity today appears to follow a canonical log-normal distribution where \sigma\approx1. Bitcoin appears to be well within its adolescence, and is not yet fully mature.

Maturity has several equivalent definitions. First is \sigma\approx1. Another is when the maximum potential of participants is reached, |\left(\mu_z\right)_T|\approx\mathrm{maximum}=\frac{e}{2}T. Just like what appears to be the case with life on the planet, the maximum entropic carrying capacity is not fixed.

Figure 4: Phanerozoic biodiversity as shown by the fossil record. (Source: Wikipedia)

Quantifying Extensive Measures

We now turn our focus to the logically independent extensive parameters: money supply M, utility U, and number of unique participants N. Please note that z is a function of N. We begin with the simplest to measure, M.

bitcoin money supply

The total number of satoshi is programmed into the source code and is solely a function of the block number (time). I used the average of each six block interval.
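For readers without a parsed blockchain, the supply schedule itself is easy to reproduce: the subsidy starts at 50 BTC and halves every 210,000 blocks. A Python sketch (function name mine; this ignores unclaimed rewards, which slightly reduce the real total):

```python
def total_supply_btc(height):
    """Cumulative coinbase after `height` blocks, in BTC.
    The subsidy starts at 50 BTC and halves every 210,000 blocks;
    integer satoshi arithmetic avoids float drift."""
    subsidy = 50 * 100_000_000          # satoshi per block
    total, remaining = 0, height
    while remaining > 0 and subsidy > 0:
        n = min(remaining, 210_000)     # blocks left in this halving era
        total += n * subsidy
        remaining -= n
        subsidy //= 2
    return total / 100_000_000          # back to BTC
```

The halving structure is why the supply curve in Figure 5 is piecewise linear with a kink at block 210,000.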

Figure 5: bitcoin money supply

bitcoin temperature

The bitcoin temperature is computed by differentiating the specific entropy with respect to the specific internal utility. Since s=\frac{1}{2}\mathrm{ln}\left[2\pi e z\right]+\mathrm{ln}\left[u\right], this leaves:

T=\left(\frac{\partial s}{\partial u}\right)^{-1}=a\,u=a\,\lambda E\left[\mathtt{tx_{out}}\right]

where a is a positive constant of proportionality

bitcoin participants

The number of bitcoin participants was calculated up to a constant of proportionality by assuming that

\lambda M = a N T

Later, I will show how this assumption is justified, as our model to this point does not incorporate M into the entropic equation of state. Based on the definition T = \lambda E\left[\mathtt{tx_{out}}\right], the number of participants reduces to,

N =\frac{M}{a E\left[\mathtt{tx_{out}}\right]}

When coupled with the variance relation of the log-normal distribution,

\frac{\mathrm{Var}\left[\mathtt{tx_{out}}\right]}{E\left[\mathtt{tx_{out}}\right]^2}=e^{\sigma^2}-1

we can compute the carrying capacity of the bitcoin network as:

c_k=\frac{M}{a E\left[\mathtt{tx_{out}}\right]}\left(e^{\sigma^2}-1\right)
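A Python sketch of both measures (function names are mine; the constant a is unknown, so both quantities are determined only up to proportionality, which is why Figure 6 plots them to scale rather than as absolute counts):

```python
import math

def participants(m_supply, e_txout, a=1.0):
    """N = M / (a * E[tx_out]); a is an unknown positive constant,
    so N is known only up to proportionality."""
    return m_supply / (a * e_txout)

def carrying_capacity(m_supply, e_txout, sigma, a=1.0):
    """c_k = M / (a * E[tx_out]) * (exp(sigma^2) - 1)."""
    return participants(m_supply, e_txout, a) * (math.exp(sigma ** 2) - 1)
```

Note that when \sigma^2=\mathrm{ln}\,2 the factor e^{\sigma^2}-1 equals 1 and the capacity coincides with the participant count.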

Figure 6 shows the maximum entropic carrying capacity and the estimated number of bitcoin participants.

Figure 6: maximum entropic carrying capacity and number of network participants

bitcoin entropy

The entropy is computed in one of two ways: from the data directly, or from the entropy derived from the model distribution. I estimate the former by dividing the transactions into 3 dB wide bins; based on the range of transactions this is typically 40 bins. I then count the occupancy of each bin and divide by the total number of transactions to compute the probability of occupancy in each bin. I do the same with the CDF of the log-normal distribution to compute its bin occupancy probability.
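The binning step can be sketched in Python as follows (names are mine; note that a 3 dB width means each bin spans roughly a factor of two in transaction size):

```python
import math

def bin_probabilities(txs, bin_db=3.0):
    """Occupancy probabilities for logarithmic bins `bin_db` decibels wide.
    A transaction of x BTC lands in bin floor(10 * log10(x) / bin_db)."""
    counts = {}
    for x in txs:
        i = math.floor(10 * math.log10(x) / bin_db)
        counts[i] = counts.get(i, 0) + 1
    n = len(txs)
    return {i: c / n for i, c in sorted(counts.items())}
```

The resulting occupancy probabilities are the p_i that appear in the entropy sums below.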

Figure 7: Bitcoin network entropy

Please note that the negative entropy is due to measurement error and, based upon its information-theoretic definition, is physically impossible. I think the negative entropy here arises because the distribution of the marginal utility of transactions in those periods is not constant, as I assumed. I compute the differential entropy of the data distribution, figure 7, by:

s=-\sum_i p_i\mathrm{ln}\left[p_i\right]+E\left[ \mathrm{ln}\left[\lambda\mathtt{tx_{out}}\right]\right]

The differential entropy of the log-normal distribution is:

s=\frac{1}{2}\mathrm{ln}\left[2\pi e\sigma^2\right]+\mu

Using the bin data, I compute the relative entropy between the data and the model as,

s_{rel}=\sum_i p_i\mathrm{ln}\left[\frac{p_i}{p_{\mathrm{ln}\mathcal{N},i}}\right]
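A Python sketch of this computation, using the normal CDF of the logged bin edges for the model probabilities (function names are mine):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def lognormal_bin_prob(lo, hi, mu, sigma):
    """P(lo < X <= hi) for X ~ LogNormal(mu, sigma)."""
    return norm_cdf((math.log(hi) - mu) / sigma) \
        - norm_cdf((math.log(lo) - mu) / sigma)

def relative_entropy(p_data, p_model):
    """s_rel = sum_i p_i * ln(p_i / q_i), skipping empty data bins."""
    return sum(p * math.log(p / q)
               for p, q in zip(p_data, p_model) if p > 0)
```

relative_entropy is non-negative and equals zero only when the binned data matches the model exactly, so small values indicate a good log-normal fit.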

Figure 8 shows the relative entropy between the model and the data. |s_{rel}|<1 is a significant finding: we see in figure 8 that for the recent blocks |s_{rel}|<1. This means the simple log-normal model represents the data well; there are other, unknown factors that influence the data beyond the model, but they are not statistically significant.

Figure 8: Relative entropy of Log-Normal model to observed entropy

We see that the entropy is mostly a function of the temperature and of the relationship between the number of network participants and the structural capacity, z. While the money supply has an impact, a different approach is needed to formally develop this understanding.

Empirical Modeling

We turn our attention to modeling the observed data and developing relationships for the extensive and intensive parameters. In the blockchain, we identified 4 logically independent variables: specific entropy, specific internal utility, marginal utility, and the z-parameter. We begin by assuming a simple functional relationship between these parameters.


Using an MLE linear regression of the data since December 24, 2010 at 13:01:45, we find,


with residuals that follow p\sim\mathrm{StudentT}\left[0.097,0.515,3.08\right] in figure 9.

Figure 9: Residuals of fitted bitcoin utility function.

We find that our simple model explains the observed data quite well and is adequate for us to assess the as yet unquantified intensive parameters. This also allows us to derive the Maxwell relationships for bitcoin.


The work here provides additional insight into bitcoin. It presents a fairly complete picture. Some key observations are:

  1. The development of a measure of network participants
  2. The identification of a measure of marginal utility for bitcoin
  3. Understanding that it is possible to have deflation (falling prices), \frac{\mathrm{d}S}{\mathrm{d}t}>0, while maintaining a stable network.
  4. We can see instances where the network became unstable (notably in 2011 after the first Mt Gox crash), the impact of the Cyprus "bail-ins", and the lack of network impact from the final crash of Mt Gox earlier this year.
  5. The methods developed here are easily adapted into SQL code and can be incorporated into existing blockchain APIs.

Data Files

All the data files can be found in this DropBox folder. If you have difficulty accessing this information or have other comments/questions, please post a comment on this page or contact me on twitter. Enjoy!

Eigenvalues and Other Exotic Beasts

This next part is a hypothesis. I do not have the knowledge or mathematical ability to prove or disprove it, so I'll call it Cal's conjecture. When I was playing around with the variable relationships, I noticed that the characteristics of the variance of the log-normal distribution had some interesting properties that were eerily similar to the inverse of the fundamental eigenvalue. The discussion I am about to go into concerns the Boltzmann transport equation in a multiplying medium. The Boltzmann formalism and the formalism of Gibbs are used to describe and develop the same branch of science, statistical mechanics. In deriving and developing everything here, I used Gibbs' formalism. It is not a branch of physics that I am familiar with, so it took me a while to wrap my head around it. By training I am a nuclear engineer; we use the transport equation in everything that we do. It is the starting point of the entire set of mathematics and numerical methods we use to describe a reactor. So I may be a little eigenvalue happy, but those things do have a tendency to pop up from time to time.

When we talk about neutron flux, we typically separate the time evolution of the flux into a spatial distribution, which we assume is time invariant to allow separability (I'm going to make the same mathematical abuse here), and a superposition of spatial modes called eigenfunctions (Duderstadt and Hamilton 1976):

\phi\left(\mathbf{r},t\right)=\sum_j A_j e^{-\lambda_j t}\psi_j\left(\mathbf{r}\right)
We tend to concern ourselves with the fundamental, asymptotic form, as the eigenvalues are ordered -\lambda_1>-\lambda_2>\dots and the fundamental eigenvalue dominates the time evolution of the system (Duderstadt and Hamilton 1976). This leaves us with,

\phi\left(\mathbf{r},t\right)=v n\left( t\right)\psi_1\left(\mathbf{r}\right)

When we substitute this into the diffusion model, we have:

\frac{\mathrm{d}n}{\mathrm{d}t}=\left(\frac{k-1}{l}\right)n\left(t\right)
Thinking of the variance of the log-normal distribution, \sigma, as the inverse of -\lambda_1, we see much of the importance of the variance. In nuclear engineering we call the inverse of the fundamental eigenvalue k. If k>1 the system is supercritical, meaning that it is creating its own growth. If k=1, the system is said to be critical, or self-sustaining. The critical configuration is very special: not only is it where we operate reactors at power, it is a maximum entropy configuration that is invariant to the power (the neutron density of the reactor). The other variable in the above equation, l, is the average neutron lifetime: how long a neutron spends bouncing around in the reactor from birth to loss (absorption or leakage). We can use a similar analogy here for the bitcoin "flux"; I don't have a really good physical interpretation other than to call it the "bitcoin lifetime". We let k=\sigma and n=N

Testing to see if this theory is even worth looking into, we can discretize the above equation as

\frac{n_{i+1}-n_i}{\Delta t_i}=\left(\frac{k_i-1}{l_i}\right)n_i

which gives

l_i=\frac{\left(k_i-1\right)n_i\,\Delta t_i}{n_{i+1}-n_i}
Solving for l_i results in figure 10, where \left<l\right>=2.75\times10^{6}\,[\mathrm{s}] for all recent (last 4 years of) transactions.
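Assuming the forward-difference discretization (n_{i+1} - n_i)/Δt = ((k - 1)/l) n_i described above, solving for the lifetime is one line; a Python sketch (function name mine):

```python
def lifetime(n_now, n_next, k, dt):
    """Solve (n_next - n_now)/dt = ((k - 1)/l) * n_now for the
    'bitcoin lifetime' l, in the same time units as dt."""
    return (k - 1.0) * n_now * dt / (n_next - n_now)
```

Applied interval by interval to the participant estimates N_i, this yields the lifetime series plotted in Figure 10; note the sign works out positive whether the system is growing (k>1) or shrinking (k<1).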

Figure 10: Average bitcoin lifetime in seconds

So maybe my harebrained idea isn't so out to lunch after all. Right now this is just a convenient observation that happens to give a profound result. Proving the deeper mathematical meaning is far beyond my skill set; I leave that to another enterprising soul.

Looking at the residuals of figure 10, we have figure 11, which shows the lifetime as being log-normally distributed, with \mu=13.42 and \sigma=1.674.

Figure 11: Fit of the residuals of the distribution of the average bitcoin lifetime.

We can talk about the period T\equiv\frac{l}{k-1}, the economic period, but we are going to avoid this. If you are interested in what all this kinetic junk means, get a hold of a good reactor analysis textbook; I recommend Duderstadt and Hamilton's 1976 classic "Nuclear Reactor Analysis". Almost all of the above discussion is from their text, pp. 235 and 236. Their chapter on reactor kinetics is as easy to understand as any I've ever read. I'm a subject matter expert on reactor kinetics; my PhD thesis is on reactor kinetics. If you're interested, here's my Proposal.

Implications of the Exotic Beasts

Ok, my head is hurting. I'm sure yours is too. I am almost done, I promise. We need to discuss the implications of k on the growth and development of economies. This is very, very important. Because after listening to economics podcasts for the last three years, taking a graduate course in microeconomics, and just reading the news and watching the Fed, I've concluded that economists have no $%#@! idea what the hell they are talking about. I mean seriously, these jokers think that they can apply the powder of sympathy and fix the economy. 'Come on! How full of $%&! can you be?

Economics can be a science. As it is practiced today, it most definitely is not. The Austrians call it a pseudoscience, or scientism. And the Austrians aren't above this either, because they have all this great logic and then say the problem is with counting: you can't count, and if you count you're one of them. Seriously?! Heaven forbid we actually use our brains and count stuff. By the way, the theory that developed all of the above analysis is built off of counting. Science is about counting. More specifically, I define science as the study of reproducible events. No more and no less. This is also why I think that math is the language of science. Austrian economics is like Fight Club: "The first rule of Fight Club is that you don't talk about Fight Club." Golly, guys, can we grow up already?

My definition of science leaves plenty of room for what we know and what we don't know. Science explains the stuff we observe and can reproduce. Religion explains everything else. They are not at odds with each other; they are complementary and have distinct boundaries. I think people blur these boundaries to try and control those who can't or don't want to think. This goes for both sides of the argument, too: the creationist lunatics say ignore what you can count, while the atheists say only the rational mind is important and there is nothing else. Atheism is as much a religion as Catholicism; don't let them fool you. Everything has boundaries, and when we apply our assumptions outside of those boundaries, we cause problems.

We Are in Trouble, but bitcoin’s good and so is the World.

Growth. We are focused on Growth. Must. Have. Growth. I can hear the Keynesians chanting this now: "supply side economics", "demand side economics". Let's print money, let's give it away for shovel-ready jobs. Let's give everyone healthcare, it is a right. Raise the minimum wage. Income equality for all, now! Government has a need to spy on its citizens. War is good for the economy. And on and on…

It is hard not to hear this silly prattle. It's ubiquitous. We think that by restricting our action, by invading privacy and regulating specific outcomes, we are doing what is best for society. We are not. That by redistributing wealth we make everyone better off. We do not. Figure 12 shows the Gini coefficient of several developed and formerly developed countries; see if you spot the similarities.

Figure 12: Gini Coefficient for various western countries (Source: Wikipedia)

Income distributions are log-normal. There is a Lorenz symmetry associated with a log-normal distribution too, but we will ignore that. A log-normal distribution that is critical, k=\sigma^2=1, has a Gini coefficient of 0.521. If the Gini is less than that, k<1, meaning the economy is sub-critical. If the Gini is greater than 0.521, the economy is supercritical. Redistributing wealth makes the economy more sub-critical. So how do we have economic growth in a sub-critical economy? Simple: we borrow from the future to pay for today. It's called debt.
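The quoted 0.521 is easy to verify: for a log-normal distribution the Gini coefficient is G = 2Φ(σ/√2) - 1, which simplifies to erf(σ/2). A Python sketch (function name mine):

```python
import math

def gini_lognormal(sigma):
    """Gini coefficient of a log-normal distribution.
    G = 2*Phi(sigma/sqrt(2)) - 1 = erf(sigma/2)."""
    return math.erf(sigma / 2.0)
```

At the critical point sigma = 1 this gives gini_lognormal(1.0) ≈ 0.5205, the threshold used in the discussion above; smaller Gini values correspond to sub-critical distributions and larger ones to supercritical distributions.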

A sub-critical economy can only grow when there is an outside source term of capital pumping money in. The farther the economy is from criticality, the larger the source has to be to create growth. The relationship here is quite simple. Using another definition of k,

k\equiv\frac{\mathrm{Utility of one generation}}{\mathrm{Utility of preceding generation}}

Think of it as economic efficiency. We have a GDP in 2014 of x dollars; multiply it by k and see what the "growth" would be if we didn't have our wise and prescient government intervening by making our kids pay for our lifestyles today. Let's take Greece as an example. In Greece, the Gini of 0.32 corresponds to k\approx0.6. This means Greek policy is so effective that it takes 40% of the previous year's GDP and throws it away. To get GDP growth to look "normal" and not cause a default on their loans, they need to borrow 42% of GDP to get a nominal growth rate. Please note, when I refer to GDP I am talking about the REAL GDP: not fiat, inflated, value-destroying currency, but no-joke UTILITY, not monopoly money.

Speaking of monopoly money, the central banks like making it look like they are creating growth in nominal terms. That ain't growth; it's theft. They are taking value away from savers to give it to the financial sector. This is why, whenever measuring utility, you should always use a fixed reference point. I like using the price of primary energy. It's a hard number to fudge, and the governments haven't figured out that they need to fudge or suppress it to hide what they are doing, yet.

Figure 13: Energy Price Index for the United States

Let's look at the US: a Gini of 0.38 after tax and redistribution gives k=0.74. To get the historic 3% growth we need to borrow 37% of the previous year's GDP. That does not sound like a sustainable economic model to me. The scale of the problem for the US is much different: our economy is $16 trillion compared to Greece's $250 billion. This is roughly $6 trillion/year ($19,000/person-year) compared to $105 billion/year ($9,300/person-year). Which is more sustainable? Fortunately for us, the actual burden is reduced because we are destroying our currency. This way we can appear to have historic growth while exerting much less work. Phew! Central banks make it so easy for us to grow. I mean really! This money printing makes us all better off… No, not really. It actually reduces our action.

So how do we achieve criticality? How do we make our economy grow without having to borrow money? Simple: make less restrictive policy. This is not likely to happen, because politicians are incentivized to maintain political rents so they can get reelected. Political rents do not contribute to our overall economy. Sorry, Virginia, Santa ain't real.

Bitcoin actually creates value from value. The rate is starting to taper off as feedback effects kick in while the bitcoin economy scales. But the fact that Satoshi Nakamoto created a place without government, and that it has had k\gg1 for the last 4 years, is amazing and shows the potential this protocol and currency have.

So what about the rest of the world; why is there hope? Simple: there are still places in the world where policy is not so restrictive as to kill economic growth. The world has k=1.4. You can pick off the numbers from this graphic I used from the Bill and Melinda Gates Foundation 2014 letter.

Figure 14: The poverty curve from two humps to one. (via Gates Foundation)


I really am done here. I hope you didn't get too upset at my destruction of the entire sum of economic growth theory, or by my libertarian rants. If you have an issue with what I said or with my analysis, everything I used to develop it is here on this website. I was as careful as I could be, and I am sure I made some mistakes. However, when I applied my theory to actual data, the models just fell into place with the data. My proofs may be messy and incomplete, but their results stand; to dispute them, you need to come up with a simpler model that explains everything I did, better than I did, and more. You can also choose to ignore me. Fine, go ahead. But if you think the bitcoin price volatility is an indication of insecurity, you are missing out on the biggest thing in a very, very long time. I may be wrong, but you will be hard pressed to prove it.

I wish you good luck. I’m signing off of blogging for an indefinite period of time. I have other more important work (more important to me) that I need to focus my attention upon. This has been a fun project, I learned a lot. When I started, I still was a bit of a socialist. Now, any semblance of progressiveness is gone. I learned new math and taught myself economics. I see how complex systems can be easily understood in very broad terms. It has been wild. I see the world around me with the veil lifted. It is not pretty and it is hopeful. It is complex and beautiful.

I turn these ideas over to you. Whoever you are, I stand relieved. I have given you the tools and the framework to understand as I do, in its depth, richness, and humility; our knowledge is ultimately constrained, yet more powerful than most anything out there. There is no paradox between science and religion. That is like saying there is a paradox in a coin having two sides. Each side is part of the coin, just as the known and the unknown are parts of our lives.

Caveat Emptor: Lessons from Mt Gox Collapse

Mt Gox presents an interesting view into the world of Bitcoin. Here is a company that helped raise Bitcoin to where it is today; the CEO was on the board of directors of the Bitcoin Foundation, and Mt Gox is a "Gold" level contributor to the foundation. Why then was it that, of all organizations, Mt Gox blamed its woes on transaction malleability, when it was likely informed of the issue back in 2011? I think to some extent they didn't think of it as that big of a problem, or were not impacted by it. But this is not the issue I want to raise. Instead, I want to talk about venture capital investment in bitcoin.

There is some concern that the capital being invested in bitcoin thinks of Bitcoin as a polished and finished project, with the sort of support of a well-paid development team. Mt Gox perhaps felt this way too, based on their reaction to a long-known problem. One of my favorite Latin phrases is caveat emptor: buyer beware. By all indications, Mt Gox contributed a non-trivial cash flow to the development team; they just didn't heed the warnings the development team gave them. Oh well, they won't do that again. But what about the influx of capital into the Bitcoin ecology? I am not confident that they see funding development as important. It's called the free rider problem.

There are consequences for not participating: developers are not cheap. They have many opportunities available to them; what incentive do they have to put their creative effort into something without being paid? What level of support will a venture capital startup get if it doesn't pay for the service and assumes somebody else will foot the bill for development? I assume that not listening to advice given is equivalent to not getting advice, period. Information not acted upon is information not at hand. Let's look at Mt Gox.

Mt Gox Price Decline

If I were a VC, and I saw my capital having the potential half-life of a week, I would take great pause. Burning the cash would be more effective. So what can I do to be comfortable with my investment? Well, I could pay someone for their advice. But who is an expert on Bitcoin? There is this foundation, but they just develop the code.

Let's pretend for a moment that the Bitcoin Foundation does a little more than develop code for free. We'll imagine that they take a fee. That fee is structured so that it provides a service, software support, and some surplus. Since the foundation is a non-profit, they have to donate a significant portion of their revenue in lieu of payments to shareholders. Their payment can be a payment in kind: funding their developers' time in maintaining the source code.

Some might ask, “Hey, Cal, aren’t you suggesting that we hand over Bitcoin to corporations? We want the foundation to be free of corporate influence to benefit us all.”

Not at all. Constraints matter. Bitcoin is an open-source project; because of this, access to the network is non-rivalrous. This means that anyone can provide the support and development service, even as a for-profit enterprise. If someone forked the development and forked the block chain, they would lose access to the rest of the ecosystem. And if somebody provided a better service, there is no coercive force saying anyone must use the Bitcoin Foundation or any other entity.

Software support is a service and an insurance policy against service interruption. Let’s treat it as such. And, by the way, it can also be used as a means of aiding code development.

Power for the People: A Cryptographic and Nuclear Renaissance

I am grateful to see the Bill and Melinda Gates Foundation identify three myths that affect the global poor. I will clarify the first myth, providing additional insights. If our goal is to end global poverty, there are two things we must do; unfortunately, neither is included in the Gates letter. We must increase global energy consumption and we must increase individual liberty. I provide a technical justification for cryptography and nuclear energy as paired solutions to end global poverty. I also show how these are fundamental drivers for human progress.

We begin with the global income distribution.

Figure 1: The poverty curve from two humps to one (via the Gates Foundation).

Examining the chart, we see that the income distribution in 2000 is p\left(I\right)\sim\mathrm{ln}\mathcal{N}\left(\mu,\sigma\right) with \mu=4.6 and \sigma=1.4. When we look at the transition of the income distribution from a mixture to a non-mixture, we should take great solace: globalization and global communication united the world. As Hans Rosling says, there is no us and them; there is only we. In a previous post, I hypothesized that the world reached an equilibrium state. This figure, considered together with the entropy of the United States, shows our world clearly in equilibrium, \frac{\mathrm{d}S}{\mathrm{d}t}\approx0.
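For readers who want to poke at the numbers, here is a short sketch (my own illustration, not part of the original analysis) of the fitted log-normal with \mu=4.6 and \sigma=1.4. It checks the closed-form differential entropy S=\mu+\frac{1}{2}\mathrm{ln}\left[2\pi e\sigma^2\right] against a Monte Carlo estimate:

```python
import numpy as np

# Sketch (my illustration, not the post's code): verify the closed-form
# differential entropy of lnN(mu, sigma) against a Monte Carlo estimate.
rng = np.random.default_rng(0)
mu, sigma = 4.6, 1.4                     # fit quoted in the text for year 2000

# Sample incomes I = exp(mu + sigma*Z), Z ~ N(0, 1)
I = np.exp(mu + sigma * rng.standard_normal(1_000_000))

# Log of the log-normal density evaluated at the samples
log_p = (-np.log(I) - np.log(sigma * np.sqrt(2 * np.pi))
         - (np.log(I) - mu) ** 2 / (2 * sigma ** 2))

S_mc = -log_p.mean()                                      # entropy = E[-ln p]
S_closed = mu + 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
print(S_mc, S_closed)                                     # both ~ 6.36
```

The agreement between the sample estimate and the closed form is what lets the rest of the discussion work with the analytic entropy directly.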

Figure 2: United States entropy from 1990–2012 using various deflators.

Getting back to the global income distribution, we need to consider Adam Smith’s statement that “[w]herever there is a great property, there is great inequality.” We see that globally we have a great property and, as a result, inequality. I am of Smith’s mind: inequality simply is; it does not in and of itself carry a value judgement. If we apply the normative value that everyone should have the same income, we see that we destroy wealth, property, and freedom. I derive these consequences in Various Properties of the Log-Normal Distribution. Let me say this again just to make sure that you heard correctly.

When we adopt Rawls’s position of distributive justice, we demand the destruction of wealth, private property, and freedom. I believe that humanity is fundamentally a good in the world. I am a humanist, and as such I adopt the normative principle that humanity should be preserved and not destroyed. We can avoid such normative judgements and accept a positive one: that dynamical systems tend to the point of maximum entropy. My basis for this is well founded; it is also the “most honest” assumption, in that it assumes nothing beyond the data given. [Jaynes 2006]

The entropy of a log-normally distributed variable is given by:

S=\frac{1}{2}\mathrm{ln}\left[2\pi e\sigma^2 e^{-\sigma^2}\right]+\mathrm{ln}\left[E\left(I\right)\right]

We see that for any given E\left(I\right), entropy is maximized at \sigma=1. Our global \sigma was 1.4 in 2000. This suggests some stark realities that we face. First, our global carrying capacity is structurally suppressed; based on observed trends in society, I think this is best explained by reduced economic freedom globally. There are various explanations as to why, but it all comes back to government intervention in our lives. Government is an entity logically independent from the governed; formally, government is orthogonal to the people around the globe. Thus government can act on society, exerting force (literal and figurative) on the citizens. Such forces can only exist outside of the system being observed, which is why I reach this conclusion. Said differently, people will act to maximize their freedom given the constraints they face. Government, by definition, is a constraint on society.
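Since \mathrm{ln}\,E\left(I\right)=\mu+\frac{\sigma^2}{2}, holding E\left(I\right) fixed leaves \frac{1}{2}\mathrm{ln}\,\sigma^2-\frac{\sigma^2}{2} as the only \sigma-dependent part of the entropy, and its derivative \frac{1}{\sigma}-\sigma vanishes at \sigma=1. A quick grid search (my own sketch, not part of the original analysis) confirms the maximum:

```python
import numpy as np

# Entropy of lnN at fixed mean E(I):
#   S(sigma) = 0.5*ln(2*pi*e*sigma^2*exp(-sigma^2)) + ln(E)
E = 100.0                              # fixed mean income (arbitrary units)
sigma = np.linspace(0.5, 2.0, 1501)    # grid with step 0.001, includes 1.0
S = 0.5 * np.log(2 * np.pi * np.e * sigma**2 * np.exp(-sigma**2)) + np.log(E)

best = sigma[np.argmax(S)]
print(best)                            # maximum sits at sigma = 1
```

The location of the maximum is independent of E, which is why the text can speak of \sigma=1 without reference to the income level.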

There is a reason why liberty is now a global issue and why we are seeing revolutions and rebellion around the world. The world’s population wants \sigma=1; government wants \sigma\approx \mathrm{Constant}, as it can use the entropy difference to extract wealth from society. It is, as it always was, about control.

Rawls’s philosophy formally justifies \sigma<1. While this moves us in the direction of \sigma\to1, it does so at the expense of our liberty. Rawls’s philosophy (progressivism) and that of our current governments (conservatism) are fundamentally limited, both resulting in lower overall societal entropy.

After we acknowledge the canonical trend of \sigma\to1, we are left with one alternative for increasing entropy: increasing average global income. Governments around the world recognize this, which is why they print so much money. Don’t be fooled: increasing the money supply without actually increasing individual productivity is a net harm to society, cooling individual action. It is theft at the grandest scale.

Armed with that understanding, we need to answer: how do we increase individual productivity? First we need to understand the natural constraints that we face. Our world is governed by the laws of thermodynamics. If we want to increase the amount of work we do, we have no option other than to increase the energy consumed by society. This is a first-law statement. We can make second-law (efficiency) improvements, but these are limited by our knowledge, materials, and capital, yielding only marginal gains.

Let’s provide a lower bound for the amount of energy needed to raise average global income from $5.6/day (2000 dollars) to, say, $56/day (2000 dollars). Because of the second law, we know that we have to supply more energy (heat) than the increase in useful work output. Part of this is necessary to overcome hysteresis (stable dynamical systems tend to resist change). We take as a lower bound that increasing global wealth by a factor of 10 requires a factor of 10 more energy than was consumed in 2000.

In 2000, the 6 billion people on the planet consumed 120 PW-hr, or 20 MW-hr/person. For a global population of, say, 10 billion with an average daily income of $56/day (2000 dollars), the world would have to consume 2,000 PW-hr every year! To put this number in context, in 2008 the world consumed 117 PW-hr of fossil fuels, just off the world’s peak fossil-fuel consumption. 117 PW-hr/year is a daily consumption of 190 million barrels of oil equivalent (MMbbloe/day), or 68 billion bbloe every year.

Scaling fossil fuels to 82% of a global energy consumption of 2,000 PW-hr/yr gives one trillion bbloe/yr. This scale of energy consumption is not possible for fossil fuels, and if it is not possible for fossil fuels then it is certainly not possible for renewable energy. We have one option left: nuclear energy.
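The arithmetic above is easy to reproduce. A sketch with the post’s round numbers (the conversion factor of roughly 1.7 MW-hr per barrel of oil equivalent is my assumption, not stated in the text):

```python
# Reproduce the post's energy arithmetic. Round numbers are from the text;
# the 1.7 MWh-per-bbloe conversion (~5.8 MMBtu/bbl) is my assumption.
per_capita_mwh = 120e9 / 6e9                   # 120 PW-hr over 6e9 people
print(per_capita_mwh)                          # 20 MWh/person

future_pwh = per_capita_mwh * 10 * 10e9 / 1e9  # 10x income, 10e9 people
print(future_pwh)                              # 2000.0 PW-hr/yr

mwh_per_bbloe = 1.7
mmbbloe_per_day = 117e9 / mwh_per_bbloe / 365 / 1e6   # 2008 fossil burn rate
print(round(mmbbloe_per_day))                  # ~190 MMbbloe/day

fossil_bbloe_per_yr = future_pwh * 0.82 * 1e9 / mwh_per_bbloe
print(fossil_bbloe_per_yr / 1e12)              # ~1 trillion bbloe/yr
```

With these assumptions all four headline figures in the text come straight out of the per-capita scaling.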

Assume for the sake of argument that the entirety of the 2,000 PW-hr/yr comes from nuclear. How much uranium is needed? Not as much as you might think: 300,000 metric tonnes of heavy metal (MTHM). Known reserves of uranium total 5 million MTHM at <$130/kg. This is terrestrial energy with conventional mining in existing mines. Once the price rises far enough, there are billions of tonnes of uranium in the oceans.
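The 300,000 MTHM figure is stated without its underlying assumptions. A hedged estimate: it is consistent with near-complete fission of the heavy metal (breeder-style fuel utilization) at roughly one-third thermal-to-electric efficiency. Both of those numbers are my assumptions, not the post’s:

```python
# Hedged estimate of the uranium requirement. The fission energy density
# (~24 MWh of heat per gram fissioned, i.e. ~1 MW-day/g) and the 1/3
# conversion efficiency are my assumptions; the post gives only the total.
mwh_th_per_gram = 24.0
eta = 1.0 / 3.0
twh_e_per_mthm = mwh_th_per_gram * 1e6 * eta / 1e6   # 1 MTHM = 1e6 g; MWh->TWh

demand_twh_e = 2_000_000.0            # 2,000 PW-hr/yr expressed in TWh
mthm_per_year = demand_twh_e / twh_e_per_mthm
print(mthm_per_year)                  # ~250,000 MTHM/yr, same order as the post
```

Note that a once-through light-water-reactor fuel cycle fissions only a few percent of the heavy metal, so the figure implicitly assumes breeding.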

We have thousands of years of uranium available and as much if not more thorium. Accessing the fuel needed to sustain humanity, support global population, and increase human wealth is not a problem. It never was a problem, nor will it be a problem for any foreseeable future.

If we want to make the poorest amongst us (globally) as rich as the global mean, we need as much power every year as humanity has ever consumed in its entire history. We have the capability to access that power today!

I think the term “Nuclear Renaissance” is quite fitting. I have talked about humanity being at the cusp of a second social Renaissance due to recent cryptographic advances. As I understand the world, cryptography and nuclear power are going to be the fundamental drivers of humanity. They will end this neo-feudal fossil era. We live in interesting times.

Progressives Beware: Why what you think you know about economics just ain’t so

I had a random idea a while back and decided to chase it down. I saw that Gibbs’ method of statistical mechanics relies on operations over certain distribution functions, like the Gamma distribution or the Normal distribution. In my work I kept coming across the Log-Normal distribution. It is everywhere. So I decided to have a look and see what I could derive. While I did this for the Log-Normal and Gibbs did it for the Gamma and Normal distributions, the approach is applicable to any other distribution.

Theory is no good unless you can test it and see it in the data. So I tested it on income distributions, reported in the attached paper; bitcoin transactions (work in progress); and radiation exposure to cells (unreported work).

In my work, I derived the Cobb-Douglas function as a property of the Log-Normal distribution. Yesterday a friend tweeted a link to the blog post The entropy of nations. If you have not read it, I suggest you do. The author identifies maximum entropy as Adam Smith’s “invisible hand,” and I emphatically agree with the analogy. The tendency towards maximum entropy is seen in the process of price discovery; it is the invisible hand that drives arbitrage.

His work motivated me to finish mine, with a boring and nondescriptive title to boot: Various Properties of the LogNormal Distribution. Can you say math fun? The policy stuff is in the last section if you are so inclined, and the mathy stuff is in the front two sections. I found that income distributions obey the second law of thermodynamics. This is great news! The bad news, and what inspired my evocative title, is that while we can propose policy and develop theories saying it will work, no quantity of ink or PhDs is going to change the second law. If you are not aware that it applies, there is a chance you got the theory right, but it’s just not very likely. I had a Commanding Officer tell me, “Cal, hope is not a plan. A plan is a plan.” Wise words. He left out that my plan had actually better be possible. Hope as a policy is wishing for unicorns. That is my best description of any progressive policy: wishing for unicorns. It explains why what we get isn’t what we were sold. Somewhere along the line, reality hit.

So, to any progressives who follow me or stumble across this blog: there is my gauntlet, laid at your feet. I invite your responses.

Theft at the Grandest Scale

Theft is a strong word. It requires two things: desire and a lack of consent. One party desires something from another and takes it without their consent. Simple. My current project is developing an understanding of macroeconomics based on a formal aggregation of microeconomics. I did not anticipate the understanding that I found. This last weekend I worked on understanding claims against wealth inequality. I began by looking at the distribution of income derived from the Average Wage Index (AWI) from 1990–2012. Plotted on a log-log scale, the distribution appeared to follow a canonical distribution (Figure 1).

Figure 1: Distribution of wealth (# people/$ vs. income) in the United States (2012).


Thermodynamic Wind Energy Analysis: Bonneville Power Administration

A thermodynamic paper on a renewable energy source on an economics blog? Seriously?  Yes seriously.

The paper is my first attempt at applying the principles of statistical economics to other fields. I model the consumption and production of various sources of electricity. Electricity is a pure human creation for human use and is the defining characteristic of modern life. So yes, it is all connected.

The results shocked me when I saw them. The power of the method is readily apparent and starts to lay bare our understanding, or lack thereof, of energy production and consumption.

Thermodynamic Wind Energy Analysis: Bonneville Power Administration

I look forward to your comments.

Natural Gas Price “Spike”

Recent news reports heralding natural gas prices at 19-month highs (Bloomberg) due to cold weather (Reuters) and diminished inventories (even reports of the shale boom ending, CSM) piqued my interest. There is hope that natural gas futures are bright (Fort Worth Star-Telegram). I wanted to check these narratives against some of my models.

Estimating the marginal utility of money plagues my understanding of economics and greatly affects my ability to model it. However, I developed an alternative deflator called the Energy Price Index (EPI). I took historical Energy Information Administration (EIA) data, performed a regression of the data against the price of oil and natural gas for each fuel source, and then aggregated the results based on each fuel’s fraction of total primary energy consumption to get the average price of primary energy delivered to the economy. I then tied this model to daily WTI crude oil prices and the Henry Hub spot market. Here is an early attempt at describing this: Economics for Engineers. This model has several problems: first, it ignores technological change in the conversion of energy to useful work; second, it assumes that the distribution between energy and non-energy feedstocks is fixed; and third, it uses an adiabatic model of the US economy. The last assumption only affects the estimation of wealth; I perform a more rigorous derivation of the price and money relationships in The Effect of Price in Macroeconomics and have not had time to rebuild my models based on that work. The other two assumptions are a function of my own ignorance. Ayres and Warr estimate the information necessary to fix this error.
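The aggregation step can be sketched as a consumption-weighted average of per-fuel price models. Everything below (fuel names, shares, regression coefficients, benchmark prices) is an illustrative placeholder, not my actual EPI fit:

```python
# Sketch of the EPI aggregation: regress each fuel's delivered price on the
# oil and gas benchmarks, then weight by each fuel's share of total primary
# energy consumption. All numbers here are illustrative placeholders.
oil_usd_bbl, gas_usd_mmbtu = 95.0, 4.5    # WTI and Henry Hub (illustrative)

# Per-fuel delivered-price model: price_i = a_i + b_i*oil + c_i*gas ($/MMBtu)
coeffs = {"petroleum":   (1.0, 0.17, 0.00),
          "natural_gas": (0.5, 0.00, 1.10),
          "coal":        (1.2, 0.01, 0.05)}
# Share of total primary energy consumption supplied by each fuel
shares = {"petroleum": 0.40, "natural_gas": 0.25, "coal": 0.35}

epi = sum(shares[f] * (a + b * oil_usd_bbl + c * gas_usd_mmbtu)
          for f, (a, b, c) in coeffs.items())
print(round(epi, 2))   # average delivered price of primary energy, $/MMBtu
```

Tying this to daily WTI and Henry Hub quotes then gives a daily-resolution deflator, which is the role the EPI plays in the models above.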

The Second Law: The limited potential of wind energy

Cal Abel
23 March 2013

After watching An Inconvenient Truth and becoming aware of the push for renewable energy, I questioned the efficacy of renewable energy sources in meeting global energy needs. I thought thermodynamics held the key to understanding this. Thus my quest began in January 2007. Today, I can report meaningful progress on this subject.

To build the appropriate model, I started with some publicly available fine-grained data from the Bonneville Power Administration. I used data from January 1, 2007 00:00 to February 28, 2011 12:05 PST. The data is segregated into 5-minute blocks of the average power within each 5-minute period. Here is the Excel file of the BPA wind power/capacity and grid load. You can verify this data by comparing the previous links. The date format is Mathematica’s “AbsoluteTime”: each full integer is 1 second. As a reference, 3376598400 is January 1, 2007 00:00:00 PST. The data is posted here in a parsed format only for your convenience and to aid in your analysis, as the entirety of the modeling can readily be done in Excel if so desired.
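Mathematica’s AbsoluteTime counts seconds from January 1, 1900 00:00:00, so converting the timestamps in the file is a fixed offset. A minimal sketch (my own, for readers working outside Mathematica) that reproduces the reference value quoted above:

```python
from datetime import datetime, timedelta

# Mathematica AbsoluteTime = seconds since 1900-01-01 00:00:00, read here
# against the data's local clock (PST), matching the post's reference value.
MMA_EPOCH = datetime(1900, 1, 1)

def from_absolute_time(seconds):
    """Convert a Mathematica AbsoluteTime value (seconds) to a datetime."""
    return MMA_EPOCH + timedelta(seconds=seconds)

print(from_absolute_time(3376598400))   # 2007-01-01 00:00:00, as in the text
```

The same offset can be applied directly in Excel by adding the seconds, divided by 86,400, to the serial date for 1900-01-01.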