- Digital Earth and the Energy Internet
Mathematical and computational models of physical phenomena are necessarily incomplete by construction, because they must be falsifiable. They are falsifiable in the same way a scientific theory must be falsifiable, or else we drift into the realm of metaphysics or end up making dogmatic claims.
Intuitively speaking, there may be a deep parallel here: scientific models and theories are both necessarily falsifiable, much as types and proofs correspond to one another (see the Curry–Howard isomorphism).
Scientific theories are founded upon mathematical and epistemological proofs. Scientific models are constructed much like computer programs: by interconnecting dependent types. Real-world software is actually a bit hairier, but let's assume for a second that humans can code formally verified software. That kind of hypothetical software sits at roughly the same level as a mini scientific theory or framework.
Though it's common to think of a physical model as a closed black box that you feed inputs into and observe outputs from, the data going into and out of the model is less important than how it is framed within interrelated physical types. The inputs have types and the outputs have types, and the way these types are connected must make scientific sense.
For example, imagine a software system for climatology that simulates and predicts weather patterns. Though the data is the true indicator of falsifiability at the end of the day, the first step is to ensure that the types and interfaces used to design the software (to connect the black boxes) make sense physically. That is, the model must make sense before you can hope the data makes sense.
If improper physical units of measure are used in some function somewhere, then you have a fallacy in your proof, or a bug in your computational model. We are then doing worse than fudging the numbers, and we unleash the wrath of climate deniers. We end up being wrong without knowing it, and must psychologically prepare for the effects of the bullshit asymmetry principle (Brandolini's law).
That's all very theoretical and remains to be proven sound. Practically speaking, we've built many of these kinds of physical models and simulators before: climatology, geology, oceanography systems, etc. Some were written in Fortran in the 80s by our fathers, while more modern ones are used to upset the expectations of the everyday news consumer naively awaiting a sunny day tomorrow.
I'd imagine it's possible to create a software system that lets people connect their systems of physical models. It certainly wouldn't be written in Fortran. Perhaps it would be more like Idris or dependently typed Haskell. I personally really like this Idris library, as it's quite beautiful: https://github.com/timjb/quantities
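To make the units idea concrete, here is a minimal runtime sketch in Python. A dependently typed language like Idris would enforce this at compile time; Python can only check at runtime, but the idea is the same, and every name here is hypothetical:

```python
# A value carries its physical dimensions; mismatched dimensions are rejected.
class Quantity:
    def __init__(self, value, dims):
        self.value = value
        self.dims = dims  # e.g. {"m": 1, "s": -2} for acceleration

    def __add__(self, other):
        # Adding meters to seconds is a type error in the physical sense.
        if self.dims != other.dims:
            raise TypeError(f"cannot add {self.dims} to {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication adds dimension exponents: m * m -> m^2.
        dims = dict(self.dims)
        for unit, power in other.dims.items():
            dims[unit] = dims.get(unit, 0) + power
        return Quantity(self.value * other.value,
                        {u: p for u, p in dims.items() if p})

meters = Quantity(3.0, {"m": 1})
seconds = Quantity(2.0, {"s": 1})
area = meters * meters       # 9.0 with dims {"m": 2}
# meters + seconds  -> raises TypeError: the model is rejected before any data flows
```

The point is that the interface itself refuses physically nonsensical compositions, which is exactly the property we want when plugging simulators together.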
The point is that interfaces or types could be designed that would allow implementations of the models to be plugged in together, interconnecting these physical simulators. What's so great about that?
Well, I'm going to point out a specific concept to map these ideas to: data can now be tokenized. I'm referring to tokens in distributed ledger technologies, or blockchains. Yes, it's true that with simulators like this the amount of data is enormous, but bear with me: off-chain systems are emerging, and I'm just trying to portray an idea for a system.
Why are tokens so important? Well, a token is like a unit of something that flows, much like data flows through a model or a black box. Tokens flow in one direction and can't be 'double spent' or split as they flow. That is, between untrusting parties using software, or even autonomous pieces of code (see smart contracts), the tokens will flow in exactly the same way from the frame of reference of every participant in the network. If you have trouble picturing what I'm referring to, familiarize yourself with dataflow programming or go play with LabVIEW. It's basically like visualizing water flowing through a pipe, except it's not only water but other types as well.
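That flow-and-conservation property can be sketched as a toy ledger in Python. All names here are hypothetical, and a real ledger adds cryptographic signatures and consensus on top; the sketch only shows the invariants the text describes:

```python
# Tokens move one way between accounts, total supply is conserved,
# and an account cannot spend tokens it does not hold (no double spending).
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance: double spend rejected")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

ledger = Ledger({"alice": 10, "bob": 0})
ledger.transfer("alice", "bob", 4)
# Every participant replaying the same transfers computes the same state:
# alice holds 6, bob holds 4, and the total supply is still 10.
```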
Basically, to sum up the first critical point I want to make: blockchain tokens can have different types, perhaps different physical quantities with units of measure like energy, mass, etc. This is a really important point, and there is a synergy between these digital blockchain tokens, which can represent value of some sort, and the energy flows in physical processes.
Now let's take a step back and try to frame what we mean by value.
In an abstract sense I look at the Earth as a thermodynamically closed system that receives energy input from the Sun and dissipates some energy back out to the solar system. A good visual is the kind of diagram seen in ecological economics, showing the economy embedded within the biosphere.
Now, given this diagram, and the concept outlined earlier of a token that can be used in physical models, can you begin to formulate a visual for how these tokens could flow? As can be seen in the diagram, the economic system is a closed black box in which people create value with relevance to the sphere of human activity (see Noosphere).
This type of value doesn't have any direct correspondence to physical value. However, the boundary or interface around the economic system is the biosphere, or ecology. This interface is what I think needs to be built with tokens that represent physical quantities that have value. The economic systems that interface with these natural resources, whether through consumption or waste, must understand what debt they are taking on from the natural resource stocks and ecosystem services.
If such tokens were constructed to represent these physical matter and energy flows, they would have to have a dynamic, changing supply, as well as interdependence among each other.
In modeling ecological systems, there is a stock of natural resources that regenerates at a certain rate; entropy says it doesn't fully regenerate, as there is some cost, but we can simplify that here.
Typically, ecological models use differential equations (see the Lotka–Volterra equations) and stock-and-flow diagrams to model these supply dynamics. If we allow scientists to construct these stock-and-flow models with the right data types, basically a type-based modeling framework for tokens, then we can connect these systems.
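As a sketch of the kind of stock-and-flow dynamics involved, here are the Lotka–Volterra predator–prey equations advanced with a simple Euler step in Python. The parameter values are illustrative, not calibrated to any real ecosystem:

```python
# dx/dt = a*x - b*x*y   (prey grows, is eaten)
# dy/dt = d*x*y - c*y   (predator grows by eating, dies off)
def lotka_volterra_step(prey, predator, dt, a=1.0, b=0.1, c=1.5, d=0.075):
    """One Euler integration step of the predator-prey equations."""
    dprey = (a * prey - b * prey * predator) * dt
    dpredator = (d * prey * predator - c * predator) * dt
    return prey + dprey, predator + dpredator

prey, predator = 10.0, 5.0
for _ in range(1000):  # simulate 10 time units at dt = 0.01
    prey, predator = lotka_volterra_step(prey, predator, dt=0.01)
# Both populations oscillate around the equilibrium and stay positive.
```

In the framing above, the "prey" stock is the natural resource whose regeneration and depletion would drive a token's supply.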
So, as a practical example, imagine the stock of, say, biochar in the Amazon rainforest. This might be estimated through satellite imagery sampled at a certain rate, or through a set of different sensor networks. This data can be fed into a model (which runs one of these equations to adjust the token supply), and in between samples the model can predict the level of the stock, much as a Kalman filter is used to reduce noise in a sampling system.
Basically, by using a sensor network to receive data about the state of natural resources, and models of that data to predict changes of state between samples, we can create smart tokens whose supply is minted based on the models. When tokens are consumed or transformed by human economic processes, the stock goes down. Also, since these tokens can be interconnected, forming an interdependent chain of value among themselves, externalities can be calculated before consumption, and value (or potential debt) can be estimated beforehand to comply with carrying capacity.
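The sample-predict-consume loop described above can be sketched in Python. The fixed blending gain is a crude stand-in for a proper Kalman filter, and all names and numbers here are hypothetical:

```python
# Between sensor samples the stock is advanced by a regeneration model;
# when a noisy measurement arrives it is blended in; consumption burns tokens.
class ResourceToken:
    def __init__(self, stock, regen_rate, gain=0.5):
        self.stock = stock            # current estimated stock (token supply)
        self.regen_rate = regen_rate  # fractional regeneration per step
        self.gain = gain              # how much to trust a new measurement

    def predict(self):
        """Advance the stock by the regeneration model (no sensor data)."""
        self.stock += self.regen_rate * self.stock

    def correct(self, measurement):
        """Blend in a noisy sensor reading of the true stock."""
        self.stock += self.gain * (measurement - self.stock)

    def consume(self, amount):
        """Economic activity draws down the stock; overshoot is rejected."""
        if amount > self.stock:
            raise ValueError("consumption exceeds carrying capacity")
        self.stock -= amount

forest = ResourceToken(stock=100.0, regen_rate=0.02)
forest.predict()       # model says the stock regrew to 102.0
forest.correct(98.0)   # sensor reads lower; blended estimate is 100.0
forest.consume(30.0)   # minted supply drawn down to 70.0
```

The `consume` guard is where the carrying-capacity check from the text would live: a transaction that would overdraw the natural stock simply fails.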
Given that data and models can drive the natural capital stocks and ecosystem services of the Earth in tokenized form, we might be able to create a resource economy that works. If we enter a post-growth era and end up focusing more on using regenerative systems to fuel economic development, I think such a system is a necessary outcome for the survival of human beings. I also think the reason people are so obsessed with going to Mars is that they don't think such a system can be created. It can, and it must, once the technology is ready.
- Date of publication:
- Wed, 02/14/2018 - 00:12