AWS Lambda has changed the way we deploy and run software, but the serverless paradigm puts new twists on old problems: How do you test a cloud-hosted function locally? How do you monitor it? What about logging and configuration management? And how do you start migrating from an existing architecture?
Yan Cui shares solutions to these challenges, drawing on his experience running Lambda in production and migrating from an existing monolithic architecture.
What do you do after you learn Kubernetes, deploy your applications to a production cluster, and fully automate the continuous deployment pipeline? You work on making your cluster self-sufficient by adding monitoring, alerting, logging, and auto-scaling.
The fact that we can run (almost) anything in Kubernetes, and that it will do its best to make our workloads fault-tolerant and highly available, does not mean that our applications and clusters are bulletproof. We need to monitor the cluster, and we need alerts that notify us of potential issues. When we do discover a problem, we need to be able to query the metrics and logs of the whole system, because we can fix an issue only once we know its root cause. In a highly dynamic distributed system like Kubernetes, that is not as easy as it sounds.
Furthermore, we need to learn how to scale (and de-scale) everything. The number of Pods of an application should change over time to accommodate fluctuations in traffic and demand, and nodes should scale as well to fulfill the needs of our applications.
Kubernetes already has tools that provide metrics and visibility into logs, and it allows us to create auto-scaling rules. Yet we might discover that Kubernetes alone is not enough and that we need to extend our system with additional processes and tools. We'll discuss how to make your clusters and applications truly dynamic and resilient, so that they require minimal manual involvement. We'll try to make our system self-adaptive.
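As a concrete illustration of the built-in auto-scaling rules mentioned above, here is a minimal HorizontalPodAutoscaler manifest; the Deployment name `my-app` and the thresholds are hypothetical, chosen only for the sketch:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment to scale
  minReplicas: 2          # never de-scale below two Pods
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add Pods when average CPU exceeds 80%
```

This handles Pod scaling only; scaling nodes to match is typically delegated to a separate component such as the Cluster Autoscaler.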
Dropping the monolith is always difficult, since it almost always involves changing tools, processes, and, most importantly, mindsets. Changing everything at once is disruptive and likely to fail, and while allowing each team to come up with their own solutions might spark creativity, putting those solutions together is a challenge of its own.
Daniel will walk you through Mambu’s experience on our road to container-based microservices, an offering we release internally as a Platform-as-a-Service (PaaS) solution, all built around a single clear principle: GitOps, or, in other words, Git as the source of truth.
You’ll learn how every change in our system is made through a Git commit and/or merge request, how we scale our deployments securely across any number of clusters through our pull-based (rather than push-based) model, how we configure and keep infrastructure up to date as code, and how we ensure consistency between our code and the real cluster state.
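To make the pull model concrete: in a GitOps setup, an agent running inside each cluster watches a Git repository and reconciles the cluster toward whatever the repository declares, instead of an external pipeline pushing changes in. This sketch uses Flux's resources; Mambu's actual tooling is not named in the abstract, and the repository URL and paths are hypothetical:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform-config
  namespace: flux-system
spec:
  interval: 1m                                        # poll Git for new commits
  url: https://github.com/example-org/platform-config  # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: platform-apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: platform-config
  path: ./clusters/production
  prune: true   # delete cluster resources removed from Git, keeping cluster and code consistent
```

Because every cluster pulls from the same repository, adding a cluster means installing the agent and pointing it at the right path, with no push credentials leaving the cluster.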
Istio is a service mesh for Kubernetes that offers advanced networking features. It provides intelligent routing, resiliency, and security features, so that service authors don’t have to keep re-implementing them.
Istio is rapidly taking off, and there are great introductory talks everywhere. In this session, however, we will explore precisely how it does what it does, following one brave little packet in from the internet and back out again.
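By way of example, the intelligent routing our packet encounters is typically expressed through Istio's VirtualService resource. The service name `reviews` and its subsets below are hypothetical, in the style of Istio's sample applications:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2   # canary: send 10% of traffic to the new version
      weight: 10
```

The service author changes nothing; the sidecar proxies enforce the split on every request that passes through the mesh.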
This talk will give you great insight into Istio’s full power and its fascinating architecture.