At Lunar Way we value openness and knowledge sharing. We are active in the meetup community in Aarhus as co-founders of the Cloud Native Aarhus group and have shared our experiences in other groups as well.
This past week, we changed gears and attended both CloudNative London and GOTO Copenhagen. If you are not familiar with these, GOTO Copenhagen is probably the biggest software conference in Denmark, with over 1,000 attendees, and CloudNative London was the first conference of its kind.
CloudNative London 2017
The focus of these presentations has been to share our experiences in transforming our architecture, organization, and teams to adopt a Cloud Native / DevOps approach. In this blog post, I will try to answer some of the questions asked at these conferences and give those who did not attend an understanding of what it is we are trying to do.
So, the title of the talk is: “Lunar Way’s journey towards Cloud Native Utopia”. Whatever does that mean? According to the Oxford Dictionary, Utopia means:
“An imagined place or state of things in which everything is perfect”
That sounds like something to strive for, right? But what about Cloud Native? What does that mean? There are many definitions, but we tend to use the one stated by the Cloud Native Computing Foundation.
By that definition, it simply means that you build your software as small, decoupled services, package these services in containers, and deploy these containers on a dynamically orchestrated infrastructure.
Is that all it takes to be Cloud Native? We believe there’s more to it than that. This becomes clear if you take a closer look at the definition of microservices, which also focuses heavily on how your company is organized (cf. Conway’s law) and much more. If you are interested, have a closer look at Martin Fowler and James Lewis’ great article from 2014.
Another buzzword that comes to mind when talking about Cloud Native is DevOps. The short definition of DevOps is: the practice of unifying development and operations through practices such as automation, fast feedback, and much more.
So where are we going with this? We believe this “new” paradigm is the key to unlocking velocity, and that is one of the main reasons we want to go there. To sum it all up, I will highlight the definition of Cloud Native from Joe Beda, CTO at Heptio, because it includes all of these aspects.
“Cloud Native is structuring teams, culture, and technology to utilize automation and architectures to manage complexity and unlock velocity.”
Great, now we know what Cloud Native is, but what will we gain from it?
These are just some of the reasons, as seen from a business perspective. We will cover the arguments from a software architecture perspective in a series of future blog posts.
The rest of my talk focuses on where we started and how we are in the process of rebuilding our entire infrastructure to enable the benefits mentioned above. Instead of writing everything in this post, I encourage the reader to see the presentation from CloudNative London, which can be found here:
We will cover these topics in more detail in future blog posts as well, so stay tuned.
Instead, I will try to answer some of the questions I received from the audience at GOTO Copenhagen.
GOTO Copenhagen 2017
All our services contain a number of unit tests to ensure all code paths are tested. To test the interactions between services, we spin up small environments.
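As an illustration, such a small environment could be described with a Docker Compose file along these lines. Note that the service names, images, and the use of Docker Compose are assumptions for the sake of the example; the talk does not specify the exact tooling:

```yaml
# Hypothetical docker-compose sketch of a small test environment:
# the service under test plus the collaborator and message broker it
# talks to. All names and images are illustrative, not Lunar Way's actual setup.
version: "3"
services:
  account-service:          # service under test, built locally
    image: lunarway/account-service:local
    environment:
      - RABBITMQ_HOST=rabbitmq
    depends_on:
      - rabbitmq
      - user-service
  user-service:             # collaborator the interaction tests exercise
    image: lunarway/user-service:latest
  rabbitmq:                 # broker carrying the inter-service messages
    image: rabbitmq:3
```

A test runner can then start the environment with `docker-compose up`, run the interaction tests against `account-service`, and tear everything down again, keeping the whole cycle scriptable in CI.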
Our organization is structured in squads, inspired by Spotify. A squad usually contains an Android, iOS, and backend developer, along with a designer, a business developer, and a person from marketing. We have three of these ‘feature’ squads that are 100% focused on building features. Our core squad is responsible for driving technological innovation: it sets the architectural vision and provides the feature squads with the tools and services they need to be as fast as possible. The goal is to give our feature squads a more or less self-service platform.
We have one repo per service instead of what’s called a mono-repo. We think decoupling at this level is important as well: it gives each team a completely decoupled view of everything from their source code to their deployment specification to the specification that tells our production system how to run the service.
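To make this concrete, the per-repo specification that tells the production system how to run a service could be a Kubernetes Deployment manifest roughly like the sketch below. The service name, image, replica count, and port are hypothetical placeholders, not our actual configuration:

```yaml
# Hypothetical Kubernetes Deployment kept in the service's own repo,
# versioned together with the source code it deploys.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: account-service
spec:
  replicas: 2                     # how many pods the scheduler should keep running
  selector:
    matchLabels:
      app: account-service
  template:
    metadata:
      labels:
        app: account-service
    spec:
      containers:
        - name: account-service
          image: lunarway/account-service:1.0.0   # image built from this repo
          ports:
            - containerPort: 8080                 # port the service listens on
```

Because the manifest lives next to the code, a squad can change how its service runs in production in the same pull request that changes the service itself.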
To some extent, yes. All our network infrastructure is specific to AWS. However, we spend a lot of time choosing technologies that will not lock us to the platform more than necessary. One of the reasons for choosing Kubernetes as our dynamically scheduled infrastructure is that it allows for a much easier migration than a closed-source service such as AWS ECS: we can fairly easily spin up a Kubernetes cluster at another cloud provider and migrate our services. An example of reasonable lock-in is AWS’s managed Elasticsearch service: it is a managed version of an open-source tool, and it allows for easy configuration and management.
I hope this answers your questions.
You are very welcome to hit me up @phennex on Twitter if you have any further questions.
I would like to say a big thank you to all participants and especially all of you who commented and asked questions.
We have not yet reached Cloud Native utopia, but we are definitely on the right path. The decomposition of our system allows our feature squads to be autonomous, self-driving, and fast. But be careful: remember that you are decomposing into services communicating over a non-deterministic network. Microservices are complex, but we are strong believers that the benefits outweigh the complexities of such an architecture.
In the coming months we will publish more tech-focused blog posts along with deeper dives into the technology we use at Lunar Way and how we architect our systems to “Make Your Money Matter”. Stay tuned.