Wednesday, February 12, 2020

Cloud vs. Cloud Native

Introduction

These days everyone is moving to the cloud. With so many cloud vendors making lucrative offers of TCO reduction, does deploying your application in the cloud make you cloud-ready?

“Cloud Native” is more than just moving part or all of your applications to a cloud vendor. It requires a completely different approach to application architecture, application development, team alignment, infrastructure and more. Let’s take a deep dive to understand what “Cloud Native Architecture” actually means and why it is important.

By definition, “Cloud Native” is an approach to the architecture, development and deployment of applications that takes the characteristics and nature of the cloud into account, resulting in processes and workflows that fully take advantage of the platform.

So what does fully taking advantage of the cloud mean?


Cloud Brings Microservices

Traditional on-premise infrastructure used to be a centralized system; to all intents and purposes, everything was in one place. In the cloud, however, servers, databases and other resources are distributed. In this context, if you simply migrate your existing application from a traditional on-premise server to the cloud, it won’t give you much ROI from the migration. It is roughly equivalent to changing your hardware on-premise (or maybe worse, in the sense that the pieces are no longer one centralized system any more). This is because applications hosted on on-premise servers were built as monoliths: every feature and service was baked into a single, big lump of code. Today, with microservice architecture, applications are built as a collection of distributed services, which complements the distributed nature of the cloud perfectly. This approach allows individual microservices to be automated in every possible way, maximizing efficiency and minimizing cost and effort:

·        Auto Provisioning: Automatically provisioning environments as code
·        Auto Scaling: Tracking utilization of the various modules/services within your application and scaling resources up or down automatically whenever and wherever required
·        Auto Redundancy: Cloud Native applications are resilient to failure, by nature. In the event of a service degradation, application instances move to another server or data center automatically and seamlessly.

So, cloud with microservices provides a more granular means of deploying the resources required to maintain performance and reliability (SLA). While it’s possible to just migrate your existing monolith application with its legacy codebase to a cloud platform, you won’t get any of the benefits of being truly “Cloud Native”.
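To make this concrete, here is a minimal sketch of what one such independently deployable (micro) service could look like, written in plain Java with a hypothetical “pricing” capability (the endpoint and payload are invented for illustration). A real service would add persistence, security and so on; the point is the shape: one narrowly scoped capability, self-contained and runnable wherever the platform schedules it.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A minimal, self-contained "pricing" microservice (hypothetical example).
// It owns exactly one capability and can be deployed, scaled and replaced
// independently of every other service in the application.
public class PricingService {
    public static void main(String[] args) throws Exception {
        // The port comes from the environment, so the platform decides
        // where and how each instance runs (friendly to auto provisioning
        // and auto scaling).
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/price", exchange -> {
            byte[] body = "{\"sku\":\"demo\",\"price\":42.0}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("pricing-service listening on port " + port);
    }
}
```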


Microservices Bring Containers

The main driving factor for the rise of the underlying container ecosystem (e.g. Docker, CoreOS rkt, Mesos, LXC etc.) is the microservices architecture. Managing your applications as a collection of individual (micro) services has implications for infrastructure. Since every service in a microservices application needs to be a self-contained unit, each needs its own allocation of computing, memory and network resources. From a cost and management point of view, it is not feasible to multiply the number of VMs by 10 or 100 to host the individual (micro) services of your application as you move to the cloud. This is where containers come in. Containers are lightweight, standalone, executable packages of software that include everything (in the right amount) needed to run an application: code, runtime, system tools, system libraries and settings. They make a great alternative to VMs for packaging microservices, enabling the benefits mentioned above.


Containers Bring Service Mesh Capabilities

With the introduction of microservices and containers, services have become more agile and location transparent. So “Cloud Native” architecture demands more mesh, or collaboration, capabilities from the services.

Let’s imagine that you are running an application that invokes other services, or that within your Cloud Native application different modules / (micro) services interact with each other over a REST or Thrift API. In order to make a request, your consumer needs to know the location of the provider service instance. In a traditional application running on a few hardware servers in a data center, the service locations are relatively static: for example, your code can read the network locations from a configuration file that is occasionally updated. In a modern, “Cloud Native”, microservice-driven application, where the main ideology is to run anywhere transparently, this is a much more difficult problem to solve, because service instances have dynamically assigned network locations. Moreover, the set of service instances changes dynamically due to auto scaling, failures and upgrades. Consequently, your client code needs to use a more elaborate service discovery mechanism.

This brings in considerations such as dynamic discovery patterns (e.g. the Client-Side Discovery Pattern, the Server-Side Discovery Pattern etc.), the Service Registry, Circuit Breakers, the API Gateway etc.
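As an illustration, here is a minimal sketch of the Client-Side Discovery Pattern in plain Java. The ServiceRegistry interface and the “pricing-service” name are hypothetical placeholders; in a real system the registry role would be played by something like Eureka, and a library would usually handle the load balancing.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical registry abstraction: service instances register themselves
// here on startup and are removed when they scale down, fail or upgrade.
interface ServiceRegistry {
    List<URI> lookup(String serviceName); // currently healthy instances
}

public class DiscoveringClient {
    private final ServiceRegistry registry;
    private final HttpClient http = HttpClient.newHttpClient();

    public DiscoveringClient(ServiceRegistry registry) {
        this.registry = registry;
    }

    public String callPricingService() throws Exception {
        // 1. Ask the registry where the service currently lives, because
        //    instance locations are assigned dynamically.
        List<URI> instances = registry.lookup("pricing-service");
        if (instances.isEmpty()) {
            throw new IllegalStateException("no healthy pricing-service instances");
        }

        // 2. Client-side load balancing: pick one instance at random.
        URI target = instances.get(ThreadLocalRandom.current().nextInt(instances.size()));

        // 3. Call the chosen instance over REST.
        HttpRequest request = HttpRequest.newBuilder(target.resolve("/price")).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

The Server-Side Discovery Pattern moves steps 1 and 2 out of the client and into a router or load balancer sitting in front of the services.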

There are a bunch of open source projects that help design better mesh or collaboration capabilities for services, e.g. Zuul, Eureka, Hystrix, Ribbon, Gravitee etc.
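To give a flavour of what a library like Hystrix provides, below is a deliberately simplified, hand-rolled circuit breaker (this is not the Hystrix API, just the underlying idea): after a configurable number of consecutive failures it “opens” and fails fast to a fallback for a cool-down period, instead of hammering a struggling downstream service.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// A deliberately simplified circuit breaker, for illustration only.
// Real libraries add half-open probing, sliding windows, metrics and
// thread isolation on top of this basic idea.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final Duration openDuration;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    public synchronized <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        // While the circuit is open, fail fast with the fallback.
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(openDuration))) {
            return fallback.get();
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;      // success: close the circuit again
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // too many failures: open the circuit
            }
            return fallback.get();
        }
    }
}
```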


Cloud, Microservices, Containers and Service Mesh together Bring “Cloud Native” Tools

The fundamental change in infrastructure that comes with the cloud demands a change in tools as well. Legacy tools that were built to manage tasks across a few hardware servers in a data center can no longer sustain the complexity of microservices running in containers across the cloud. When cutting across a distributed, containerized, microservice-based application, seemingly simple things like latency optimization, root cause analysis (RCA) of the backend, end-to-end (E2E) monitoring etc. can be complex. The resource consumption of each microservice needs to be metered to ensure that a degrading SLA for one service doesn’t affect the others.

There are a bunch of open source projects that help run microservice applications, e.g. Kubernetes, Containerd, Fluentd, Prometheus, Envoyproxy, CoreDNS, Jaegertracing, Vitess etc.
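For example, each service can expose its own usage counters for Prometheus to scrape. The sketch below uses plain Java and the Prometheus text exposition format directly, with an invented counter name; in practice you would normally use a client library (such as the official Prometheus Java client or Micrometer) rather than writing the format by hand.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

// Exposes a single counter in the Prometheus text exposition format so the
// traffic of this microservice can be metered and monitored.
public class MetricsEndpoint {
    private static final AtomicLong REQUESTS = new AtomicLong();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);

        // Business endpoint: every call increments the counter.
        server.createContext("/work", exchange -> {
            REQUESTS.incrementAndGet();
            exchange.sendResponseHeaders(204, -1); // no response body
            exchange.close();
        });

        // Metrics endpoint, scraped periodically by Prometheus.
        server.createContext("/metrics", exchange -> {
            String body = "# TYPE app_requests_total counter\n"
                        + "app_requests_total " + REQUESTS.get() + "\n";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(bytes);
            }
        });
        server.start();
    }
}
```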


Cloud, Microservices & Containers together Bring DevOps

Traditional applications require different teams for each stage of the application’s lifecycle, e.g. Development, DBA, QA, System Admin, RM, PM etc., with each team having its own goals and priorities. Teams’ priorities often clash, and this takes a toll on overall application delivery and maintenance.

With the introduction of cloud, microservices and containers, team structures can be realigned to reflect the change in application development, deployment and maintenance architecture.

Modern software delivery teams are small and cross-functional, fitting perfectly into the “two-pizza rule”. Teams are structured and organized by the services they support. This keeps teams agile, improving collaboration and decision-making. It also eliminates accountability gaps, whereby teams can renounce responsibility for a feature after it moves on from their team but before it is released.


Cloud Native Release

With the right architecture to build applications, the right tools to manage them, and the right team structure to bring it all together, the combined effect is that Cloud Native applications have become more agile and frequent in terms of their releases. In essence, there are no longer any major releases, since every release affects a single (micro) service, or a couple of (micro) services at most. Due to the limited scope of every release, finding and fixing issues is much easier. With automatic rollback tooling, it is also much easier to revert to the previous stable release when a new release fails.

Autonomous, independent teams can ship updates to the services they own without conflicting with other teams. With multiple teams frequently releasing updates to their services in parallel, the pace of deployment is sky high; Netflix engineers, for example, deploy code thousands of times per day.

The biggest advantage of a “Cloud Native” architecture is that it takes a concept from idea to rollout (time to market) in the quickest possible time.


Conclusion

“Cloud Native” architecture is more than just migrating your existing application to a cloud instance. It goes much further, changing the way the application is architected, planned and provisioned onto the infrastructure, and how it is effectively monitored and maintained.

You can start your modernization journey (if you haven’t already) with an application modernization assessment and then a single pilot project, allowing you to become familiar with this approach before growing your cloud adoption further over time.

Next time you hear someone talking about the cloud, stop and think whether they really mean “Cloud Native”.
