Distributed computing is less binary, more probability

Since late 2011 I’ve been working on distributed computing platforms, where software runs across more than one machine, operating together as what we’d call a cluster.

Now, as I focus more on blockchain and DLT, I find that a lot of the approaches to computing are similar.

A lightweight version of the challenges of working on distributed systems shows up when you write software that uses multiple threads for concurrent computing. Things can quickly get very ugly if you don’t know how to approach them.
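To make that concrete, here’s a minimal Go sketch (my own illustration, not taken from any particular codebase) of the classic shared-counter problem: several goroutines update one variable, and only the mutex keeps the result deterministic.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var mu sync.Mutex
	var wg sync.WaitGroup

	// 1000 goroutines each increment the shared counter.
	// Without the mutex, the read-modify-write would race and
	// the final value would vary from run to run.
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // always 1000 with the lock
}
```

Remove the lock and run it a few times with the race detector (`go run -race`) and you’ll see exactly the kind of non-determinism I mean.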

On simple computer systems that aren’t distributed or multi-threaded, things are easier to reason about. It’s more binary: the program is either in this state or the next. Sure, bad software practices can make even that a challenge, but it’s easier to get away with crap code when the environment is simpler.

But when you have multiple things happening on multiple CPUs across a network, you need to think about potential outcomes differently. You catch yourself in design discussions with your colleagues about the likelihood of certain outcomes and how to handle them. Things start being about probabilities, and concepts like eventual consistency and the CAP theorem enter the picture.
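As a rough illustration of what “eventual consistency” can mean in practice, here’s a tiny last-write-wins sketch in Go (the types and names are mine, purely for illustration): two replicas accept conflicting writes independently, then converge once they exchange state.

```go
package main

import "fmt"

// A toy replica: each key carries a value and the logical timestamp
// of its latest update. The structure is illustrative only.
type entry struct {
	value     string
	timestamp int64
}

type replica map[string]entry

// merge applies last-write-wins: for each key, keep whichever side
// saw the more recent update. Both replicas converge to the same
// state once they have exchanged entries in both directions.
func merge(local, remote replica) {
	for k, e := range remote {
		if cur, ok := local[k]; !ok || e.timestamp > cur.timestamp {
			local[k] = e
		}
	}
}

func main() {
	a := replica{"user:1": {"alice", 1}}
	b := replica{"user:1": {"alicia", 2}} // a later, conflicting write

	// Simulate anti-entropy: replicas exchange state both ways.
	merge(a, b)
	merge(b, a)

	fmt.Println(a["user:1"].value, b["user:1"].value) // alicia alicia
}
```

The point isn’t the merge rule itself (real systems use vector clocks, CRDTs, or consensus), it’s that correctness becomes a statement about what the system converges to over time, not about a single global state you can inspect right now.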

As a software engineer, if you ever get the chance to work on a distributed system, you should take it. You might not like it, you might find it frustrating, but it’s a golden opportunity to learn something new.

If you’re into blockchain and DLTs, check out my post on this topic over at the LEDGERPATH blog, titled “distributed systems”.