Around six years ago I was experimenting with, and pushing, the practical boundaries of what the JVM could handle with regard to heap size.
We had Java 8 at the time, and allocating hundreds of GB of heap space was risky. A lot of time and effort was spent by IT teams around the world just trying to keep the GC happy with such a setup.
Some, myself often included, would recommend aiming for an architecture with several smaller JVMs, each dealing with a more manageable heap size. But such a setup can bring a lot of complexity with it. While a web application might easily scale horizontally like that, not all applications have that luxury.
At the time we didn’t have much choice. One popular approach was going off-heap, that is, managing memory yourself. You could do that with native code, but then you’d miss out on all the other Java goodness. So, more often, you’d do it through serialization and off-heap storage, like I did in the Binary off-heap hashmap, which I also blogged about in my “Only the Good Die Young (or Move Off-heap?)” post.
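To make the idea concrete, here is a minimal sketch of the off-heap pattern, not the actual Binary off-heap hashmap: values live in a direct `ByteBuffer` outside the GC-managed heap, and every read or write across the boundary pays a serialization step. The class and method names are mine, chosen for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch: values live outside the GC-managed heap in a direct
// ByteBuffer, so the GC never scans them. Crossing the boundary costs a
// serialization/deserialization step each way.
public class OffHeapStore {
    private final ByteBuffer buffer;

    public OffHeapStore(int capacityBytes) {
        // Direct buffers are allocated outside the Java heap.
        buffer = ByteBuffer.allocateDirect(capacityBytes);
    }

    // Serialize a string into off-heap memory; return its offset.
    public int put(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = buffer.position();
        buffer.putInt(bytes.length); // length prefix
        buffer.put(bytes);
        return offset;
    }

    // Deserialize a string back onto the heap.
    public String get(int offset) {
        int length = buffer.getInt(offset);
        byte[] bytes = new byte[length];
        ByteBuffer view = buffer.duplicate();
        view.position(offset + Integer.BYTES);
        view.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        OffHeapStore store = new OffHeapStore(1024);
        int offset = store.put("hello off-heap");
        System.out.println(store.get(offset)); // prints "hello off-heap"
    }
}
```

A real implementation would add hashing, free-space management and so on, but even this toy version shows the cost structure: the data itself is invisible to the GC, at the price of copying bytes on every access.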
So what’s the story these days? Do we still have these types of issues? It’s no longer uncommon to have 10, 20, 50, or even 100 GB or more available as heap to the JVM. And with big data analysis we’d often like to be able to use all this space easily.
While there are some differences between the two, both allow us to allocate massive heaps of many hundreds of GB, with hard guarantees about maximum pause times. These hard guarantees not only make it possible to run big data analysis with large heaps; they also lend themselves well to real-time applications.
Would I, with these, still recommend going off-heap? Probably not. Going off-heap adds complexity, as well as the CPU cost of serializing and deserializing each time you cross that boundary. And that’s a cost you’d like to avoid whenever possible.
Consider Amazon, the e-commerce bits of the company. While it might have started out as an online bookshop, it evolved into a generalized online marketplace.
So what was critical in the success of Amazon?
The internet would probably be a good answer to that question, among many other things. But how much of the overall software and business development effort at Amazon do you think relates to core internet activities? Sure, it’s got a big online presence, which requires a lot of internet hardware, internet people and teams, ops, and so on.
But in the grand scheme of things, how much of it all is directly related to core internet connectivity? Probably not that much.
What I’m getting at is that, while the internet is critical infrastructure in both of the above cases, it plays a very small, almost insignificant part in the overall work that goes into creating what’s built on top. The time is spent elsewhere, on the stuff that sits on top. The only exception is when it stops working.
The same will apply to the blockchain.
Now, don’t get me wrong. I’m not saying the internet runs itself. I know there are many competent and dedicated people involved, making sure it all runs as well as it does, 24/7, every day of the year. This stuff doesn’t happen through magic.
And neither will blockchain, with all of its infrastructure. Infrastructure we’re still defining, developing, building and deploying. But just like the internet, blockchain will become critical infrastructure we don’t spend much time thinking about. It will become 5-10% of your software architecture: absolutely critical, but managed by people you only sometimes interact with, within the otherwise big but standard machinery of usual software development.
And of course, just like with the internet, if your software doesn’t have that 5-10% of blockchain in it, it quickly becomes obsolete.
Many years ago, in my late teens, I used to attend The Gathering. It’s a fantastic combination of people, passion, dedication and fun mixed together for a few days each Easter.
If you’re not familiar with computer culture this whole thing might seem very alien and, to many, somewhat of a waste of time. But for those of us who are familiar with this domain, and especially those of us with some experience of the demoscene, it brings back a feeling of accomplishment.
This accomplishment is tied to creating multimedia, before it was even called multimedia: mixing graphics, animations, music, and computer code. But it wasn’t like it is today, where your computer can easily spit out 4K video with graphics and surround sound without much issue. No, back then it was difficult enough to make it do any one of those things, let alone all of them together.
The embedded video above is an example of such an accomplishment. It’s what’s called an intro, created by Conspiracy, a small Hungarian group. It took 1st prize at st(ART) 2004. Everything you see in that video is produced by a computer, in real time, and all the code and content needed to make it happen fits within a limit of 64 thousand bytes. In terms of raw storage, 64 kB, as it’s also labeled, is comparable to a blog post of roughly 10 thousand words, at an average of six to seven characters per word.
What about blockchain?
Blockchain is at a similar point in its technical capabilities right now. It’s costly, slow, or simply impossible to store or process much data on it. That will improve over time, and it needs to if blockchain is to take on many of the typical use cases we talk about. But right now it’s a challenge to develop even the simplest of things on it.
And this is a challenge I welcome, which is why I brought up the demoscene above. Because even with the current limitations of blockchain, people are building fantastic stuff on top of it. Everything from decentralized representations of value, to fully functional marketplaces without any central point of failure, to games, supply chains and beyond. It’s all happening.
For some time I’ve wanted to see how I could incorporate something related to art on the blockchain, in a form where the actual art is stored on-chain.
There are other examples where the blockchain itself only serves as a secondary element to the art, for example by proving ownership. Due to the cost, lack of speed, and limited storage capabilities, this approach makes sense for many. But it’s not the type of “demoscene challenge” I’m referring to.
The Million DAI Website plays on the current state of blockchain, using DeFi as a tool to allow people to buy and sell tiles. A tile is a 10 x 10 collection of pixels, with a minimum price of 100 DAI. And owners of a tile can draw on these pixels. These then become part of history, as they are stored on-chain.
It’s important to understand that this is not an attempt by me to amass 1 million DAI and become rich overnight. That would be a bit crude, and the site doesn’t work like that: tile owners can always sell their tile back to the smart contract, and new owners can outbid existing owners, which helps keep the site active. While pixel art and its limitations are one pillar of The Million DAI Website, another is precisely the fact that the site does not represent a one-way transfer of money.
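The mechanics described above can be sketched as a small model. The real site is a Solidity smart contract; the plain-Java class below is a hypothetical illustration of the economics only (minimum price, outbidding, sell-back from escrow), with names and structure that are mine, not the contract’s.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the tile mechanics: buy at or above the minimum,
// outbid the current owner, or sell back and recover the escrowed DAI.
// The real implementation is a Solidity smart contract; this is a sketch.
public class TileMarket {
    public static final int MIN_PRICE = 100; // minimum tile price, in DAI

    private final Map<Integer, Integer> escrow = new HashMap<>(); // tileId -> DAI held
    private final Map<Integer, String> owner = new HashMap<>();   // tileId -> owner

    // Buy a tile: the offer must meet the minimum and outbid any current owner.
    public void buy(int tileId, String buyer, int offerDai) {
        Integer current = escrow.get(tileId);
        if (offerDai < MIN_PRICE || (current != null && offerDai <= current)) {
            throw new IllegalArgumentException("offer too low");
        }
        // In the real contract, a previous owner would be refunded here.
        escrow.put(tileId, offerDai);
        owner.put(tileId, buyer);
    }

    // Sell back: ownership returns to the contract and the escrowed DAI
    // is returned, so buying a tile is not a one-way transfer of money.
    public int sellBack(int tileId, String seller) {
        if (!seller.equals(owner.get(tileId))) {
            throw new IllegalStateException("not the owner");
        }
        owner.remove(tileId);
        return escrow.remove(tileId);
    }

    public String ownerOf(int tileId) {
        return owner.get(tileId);
    }
}
```

Note what the sell-back path implies: the DAI sits idle in escrow between buy and sell-back, which is exactly the window in which the contract can earn interest on it.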
Blockchain and DeFi make the world a more efficient place, one where third parties that don’t add value are removed from the value chain, simply because it’s easier to remove them than in a traditional setting. And because it’s so easy to transfer value on-chain, whatever that value is, it also makes perfect sense that you can instantly sell your tile back and receive your original amount of DAI.
It also makes perfect sense to let the smart contract receive the interest earned on this DAI while it’s kept in escrow, which plays on a third pillar: what’s often labeled “balance sheet as a business model”.
Anyway, looking ahead, I will spend some time experimenting with pixel art on the blockchain, using The Million DAI Website as a platform. And because it’s an open playground, I hope others will take on the challenge as well.
Sometimes, while coding up some Solidity contracts, all you want is a simple key/value mapping to store some details in.
You want to keep it simple so the gas costs stay low, but what about everything else? Would you still like to use smart contract proxies, so you can upgrade your code? And maybe you’d like to keep this key/value mapping separate from your code, keeping upgrades manageable?
Previous versions of AKAP-utils gave you a LinkedHashMap implementation. That’s still there, and it gives you a lot of features, from putting, getting, removing and updating entries, to iterating over keys in insertion order. Basically a full-featured linked hash map, in Solidity. But you do pay a bit more in gas for these features.
If all you’re looking for is a simple key/value mapping, and don’t want to give up on upgradability, managed write access or any of the other features you get with AKAP, then we’ve got good news for you.
As of AKAP-utils v0.9.4, released earlier today, you can now use the SimpleMap for all your simple key/value mapping needs.
With two beta releases of AKAP utils already on NPM, things are moving forward quickly, as is often the case early on. While they are 0.9.x (beta) releases, all the stuff in there is well tested. You’re very much encouraged to take it for a spin. And of course, feedback is always welcome!
The documentation is an important part of being able to “take software for a spin”, and much of the docs focus on patterns: software patterns showing how you can apply the AKA protocol, using AKAP utils to make that easier. Like AKAP utils itself, the docs are a work in progress, with further patterns to be added soon.
The fun thing about thinking through patterns when writing the docs is that you start thinking of additional patterns. It becomes a very fruitful exercise, and I’m looking forward to filling it all in.