Antrea: The Ubiquitous CNI
About six months ago, a new container network interface (CNI) was released to the world, this one coming out of VMware and called Antrea. I had heard positive things about this new CNI, and although I thought "great, yet another CNI," I decided to dive in, take a look, and see why the community needed a new one. It turns out that we do, so in this article I'm going to discuss why Antrea is particularly valuable (and needed), show the installation of Antrea with some non-default configuration options, and look at monitoring.
Complete CI/CD for Hugo on AWS with GitHub Actions
For about five months now, I've been using Hugo to build my personal blog here at neonmirrors.net. It has been a great way for me to get more familiar with modern development practices and learn more about AWS in the process. There was a bit of a learning curve, since I had never run a blog previously, but now that I'm using it I would never move to WordPress or any other system which uses databases.
Getting Started with Tanzu Kubernetes Grid
After VMworld US 2019, when the Tanzu portfolio was announced, there was naturally a lot of excitement for what VMware was doing around modern application development. Fast forward a couple of months, when more things were announced, names were changed, and Pivotal became part of VMware, and what Tanzu actually is and what its product sets are became wildly confusing to most people. One of those offerings, Tanzu Kubernetes Grid (TKG), has now been released, so in this article I want to explain what this offering is, what it promises, and, lastly, how to get started with the CLI tool by putting rubber to the road.
Behind the Scenes with Cluster API Provider vSphere
The verdict is in: Kubernetes has proven a great success in its ability to deploy and manage containerized applications, and in that regard it is now seen as the de facto platform for running cloud-native applications. A somewhat more recent project, called Cluster API (CAPI), seeks to apply that same system of desired-state reconciliation to Kubernetes clusters themselves. Cluster API has already been written about numerous times (this blog is a good overview; so is this for more of a technical primer), so I won't rehash its merits here.
Buy Don't Build
As long as I’ve been involved in technology, I’ve noticed a central theme which reappears countless times and is a powerful influence on every decision, hire, and discussion. This theme invisibly steers businesses in meaningful ways yet isn’t given enough serious thought by developers and technical folks. That theme, or question rather, is whether to build or buy. In other words, “do we build this thing ourselves or do we buy a thing from someone else?”
Why Kubernetes on Virtual Machines?
One of the arguments or debates (depending on how generous you are) that continues to rage in the world of Kubernetes is the old “bare metal versus virtual machines” subject. Some people seem to be all in on bare metal, while staunch opponents (as well as some hilarious memes) advise not even attempting it.
The reality, of course, is an “it depends” moment, with pros and cons on each side.
Authentication and Authorization in Kubernetes
Kubernetes is designed to be secure by default, and many of the built-in technologies and concepts are designed to ensure that is the case. Oftentimes, the first exposure users will have to Kubernetes security is authentication and authorization: simply getting inside the cluster and being permitted to do something. In this blog post, I want to focus on human user/operator access to Kubernetes and the mechanisms at your disposal to ensure you get it right on the first try.
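As a taste of the authorization side, RBAC is the standard mechanism for granting users permissions. Here is a minimal sketch, assuming a hypothetical user "jane" and a "dev" namespace (all names here are illustrative, not from the post): a Role granting read-only access to pods, bound to the user with a RoleBinding.

```yaml
# Hypothetical example: grant user "jane" read-only access to pods
# in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]        # "" refers to the core API group (pods live there)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

You can then check the result from the cluster's perspective with `kubectl auth can-i list pods --namespace dev --as jane`.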
pks-rancher-reg: Automated PKS Cluster Registration For Rancher
My previous blog showed a method that can be used to stand up Rancher Server in HA on VMware PKS, and I’m going to build on it in this one. Having a Rancher Server environment is great for managing all sorts of Kubernetes clusters, from the edge to those hosted in cloud providers, but we can also use it to manage VMware PKS clusters as well. That’s really no secret. However, since PKS focuses on a high degree of automation to produce ready-to-run Kubernetes clusters at the end of the day, wouldn’t it be nice to extend that to making them ready-to-manage as well?
Rancher HA on Enterprise PKS
Rancher is a container orchestration and management tool that has been around for several years at this point and performs a variety of different functions. More recently, it has been refactored to completely adopt Kubernetes. In this blog, I am going to focus on how to build an enterprise-grade, highly available, and secure installation of Rancher Server on top of VMware Enterprise PKS. I’ll also respond to the burning question of “why Rancher on PKS in the first place?”
HTTPS Ingress with Enterprise PKS
Kubernetes is an awesome technology, in my humble opinion, and one of the best ways to adopt it and begin to use it in a production-worthy manner is with VMware’s Enterprise PKS. By using Enterprise PKS, you get some truly great additional value by leveraging NSX-T for the networking and security components. One feature that comes in especially handy is NSX-T’s built-in ingress controller capability, which means you don’t need to go roll your own.
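With NSX-T serving as the ingress controller, exposing an app over HTTPS comes down to a standard Ingress object with a `tls` section. A minimal sketch, where the hostname, Secret, and Service names are all hypothetical, and assuming a TLS Secret has already been created in the namespace (note: this uses the current `networking.k8s.io/v1` schema; older clusters used `extensions/v1beta1`):

```yaml
# Hypothetical example: terminate TLS for "app.example.com" using the
# pre-created Secret "app-tls" and route traffic to the Service "web".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-https
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls      # kubernetes.io/tls Secret holding cert + key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```

The TLS Secret itself can be created beforehand with `kubectl create secret tls app-tls --cert=tls.crt --key=tls.key`.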