Kubernetes: brave new world, same old problems

So I was tearing my hair out wondering why the code I was building for our product in Kubernetes wasn't working.

And then I had an epiphany, and the rest is what you read here...

Next-Gen app building with Kubernetes

Picture me. Merrily building cool things for our new Next-Gen ADC products in Kubernetes clusters. Deep in the zone, listening to the Blade Runner soundtrack, smashing out code.

It's no surprise people love Kubernetes. As we explained in our earlier blog, 'What is Kubernetes, and why are containers so popular?', it's great if you want:

  • An automated containerized environment
  • A proven open source solution
  • Quick scale-up and scale-down functionality
  • A true cloud-native tool with full portability

It makes it easy to manage your containers, is scalable, flexible, and efficient, has a great user base, and is the industry standard for container orchestration.

My team and I love it because it provides easy portability and enables us to use a single Helm chart to deploy our services across multiple environments. And going cloud-native with Kubernetes means we can build and run scalable applications in dynamic environments such as public, hybrid, and private clouds.
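
For readers who haven't used Helm this way: the idea is that the chart itself never changes between environments; only a small values file does. Here's a minimal, hypothetical sketch of such an override file (the chart name, image path, and numbers are made up for illustration, not our actual chart):

```yaml
# values-prod.yaml — hypothetical per-environment overrides for one shared chart.
# The same chart is deployed everywhere; only this file changes per environment, e.g.:
#   helm upgrade --install adc ./charts/adc -f values-prod.yaml
replicaCount: 3
image:
  repository: registry.example.com/adc    # hypothetical image path
  tag: "1.4.2"
resources:
  requests:
    cpu: 250m
    memory: 256Mi
```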

Anyway... so there I was, in full programming mode, creating my latest masterpiece and extolling the joys of cloud-native microservices environments. But then, when I tried to deploy my latest service in Kubernetes, it just didn't work! No matter what I tried, nothing would fix it... and (to make matters worse) the entire application was now segfaulting.

So there I was, frustrated that I had inadvertently caused an error, and confused as to why my deployment was segfaulting and bringing down our services, despite having just sailed through its Jenkins test suite.

After all, one of the main advantages of a microservices approach is supposed to be better fault isolation and more robust application resilience, right? And sure enough, Kubernetes kept trying to do its job, continuously attempting to self-heal and bring things back online. Yet it still wasn't working. It just made no sense.
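
For anyone less familiar with that self-healing behaviour: in a Deployment, a container that crashes is restarted automatically (ending up in CrashLoopBackOff if it keeps failing). A minimal sketch of the idea, with hypothetical names and image rather than our real manifest:

```yaml
# Minimal illustration of Kubernetes self-healing (hypothetical names, not our real service).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          image: registry.example.com/example-service:1.0.0   # hypothetical image
          livenessProbe:                 # restart the container if it stops answering
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
      # restartPolicy defaults to Always, so a segfaulting container is restarted
      # over and over, which is why we saw Kubernetes endlessly trying to bring
      # things back online.
```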

Because each container runs independently of all the others, deploying one faulty service should not have dragged the rest down with it. And even though the services all share the same underlying core, how could I possibly have changed that shared core in a way that was now causing everything to crash?!

Consequently, I set about carefully unpicking all the updates I had made over the last few hours, and eventually over the last few days. But no matter what I did, the application continued to crash, despite me rewinding things to well before I'd had any issues. And it wasn't until four hours later that I finally figured out what the problem was...

New world, same old problems

Ultimately, I worked out that the day before, my team had installed some distributed tracing software for our microservices: an application monitoring solution that gives us visibility of program errors, as well as network infrastructure errors, before the customer spots them. Rather than relying on middleware or a DIY monitoring setup, the trace API does all of that automatically, attaching itself to our programs to provide that feedback. And it was THAT software causing all the issues, not my shiny new code. That had nothing to do with it!
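
The post doesn't name the tracing product we had installed, so purely as an illustrative stand-in, here is how that "attach itself automatically" pattern typically looks with the OpenTelemetry Operator: an annotation on the pod asks the operator's admission webhook to inject an instrumentation agent at startup (the names and image below are hypothetical):

```yaml
# Illustrative only: the OpenTelemetry Operator pattern as a stand-in for the
# unnamed tracing product in this story. The annotation asks the operator's
# webhook to inject its Java auto-instrumentation agent into the pod.
apiVersion: v1
kind: Pod
metadata:
  name: example-service                                  # hypothetical name
  annotations:
    instrumentation.opentelemetry.io/inject-java: "true"
spec:
  containers:
    - name: example-service
      image: registry.example.com/example-service:1.0.0  # hypothetical image
```

Whatever the specific tool, the point is the same: something that hooks itself into every workload automatically can cause failures that look completely unrelated to the code you're actually working on.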

So the moral of this (albeit short!) story is... for all the hype, no matter how much we embrace cool new tech, and even when developing a new application from scratch, we still have the same old legacy-style issues to deal with. It all came down to that age-old dev problem: the smallest issues and tiniest errors cause the biggest headaches, scupper the whole project, and unpicking it all becomes the biggest drain on our time!

The reality of course is that many of us are not working with a blank canvas and building from scratch. We're still dragging the old spaghetti of monolithic applications along with us.

And, in fact, that's what a big percentage of our customers still need help with. So whether you are struggling with overgrown monolithic giants, nimbly building yourself a microservices paradise, or more likely stuck somewhere between those two worlds, we at Loadbalancer.org can give you a helping hand.

So to conclude...

Even when starting with a blank canvas, using the latest tools and deployment methodologies, we can still be blindsided. In fact, there is a school of thought that says it is easier to break an existing monolithic application up into microservices than to build a containerized application from scratch. Whatever your approach, there's no getting away from having to deal with legacy headaches! And (while it may not be as exciting as shouting about the latest new 'big thing') it is still the reality of daily life for most of us.

Need help with legacy issues?

Our experts are always here