Exploring Kyverno: Part 3, Generation
Welcome back to my Exploring Kyverno series. In part three, I'm going to cover Kyverno's generate functionality: the ability to create new and update existing resources based upon the creation of a triggering resource. If you're new to Kyverno and not sure what it is, I highly recommend starting with the introduction.
In the last two articles, I looked at the validation and mutation abilities of Kyverno. You saw that those abilities were controlled in similar ways, with rules that declared the desired behavior. Those rules were organized into policies, operating at either the Namespace level or across the entire cluster. And all rules and policies were authored in the same declarative style, with no coding required, just as you've come to expect with Kubernetes. With the generation ability, we do exactly the same thing.
A `generate` rule is triggered off of a CREATE operation somewhere in Kubernetes. This is typically tied to the creation of another resource, either by `kind` (often a Namespace) or by something else in the AdmissionReview payload like metadata. It can even be a combination of things based on what you provide in a `match` statement.
`generate` rules work in two ways: either you're copying an existing resource from one place to another, or you're creating a new resource defined in the rule itself. In the first case, when the trigger fires, a source resource is located and copied to a destination, so obviously the source resource has to exist at the time the rule is triggered. In the second case, the rule itself contains the entire resource definition of the object being created.
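To make the distinction concrete, here are skeletons of the two forms side by side (the names are placeholders; complete, working policies follow later in this post):

```yaml
# Form 1: "data" -- the rule itself carries the full definition of the new resource.
generate:
  kind: ConfigMap
  name: my-config                                  # placeholder name
  namespace: "{{request.object.metadata.name}}"    # the Namespace that triggered the rule
  data:
    data:
      key: value
---
# Form 2: "clone" -- copy an existing source resource into the destination.
generate:
  kind: ConfigMap
  name: my-config
  namespace: "{{request.object.metadata.name}}"
  clone:
    namespace: default    # where the source resource lives
    name: my-config
```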
Now, in addition, Kyverno can keep these generated resources in sync. If you cloned an existing resource and synchronization is enabled, any changes to that source resource will get propagated downstream. Kyverno will also protect those downstream resources, so if they're deleted or changed, they'll be put right back in place.
This ability is unique to Kyverno; no other policy engine can do it. It also makes Kyverno almost like a Swiss Army knife of policy and automation, because you don't need a separate copier/syncer utility. Simply let your policy engine do that.
This stuff is best shown through examples and real-world use cases, so let's illustrate these abilities with actual policies.
The most common use case for `generate` rules seems to be as an automation "helper" for Namespace creation and management. When creating a Namespace in Kubernetes, there are typically several things you need right off the bat in order to start using it, such as:
- Network policy
- Role bindings
- Quotas
- ConfigMaps
There could obviously be many more, but these are some of the basics. So let's start with the first one and see a basic `generate` rule which gives us a NetworkPolicy resource whenever a new Namespace is created.
Check out the following sample ClusterPolicy, which is scoped to the whole cluster. It watches for new Namespaces in the `match` block, excludes some existing ones in the `exclude` block, and then specifies the type of rule as a `generate` rule. Next, we set `synchronize: true` because we want Kyverno to keep this resource in sync and protect it. Under the `data` key, we're telling Kyverno that we want to create a new resource and that the rule will provide the spec. In this case, what follows under `spec` matches the `kind`, which is a NetworkPolicy. And, as you can see from that spec, we're denying all ingress to the new Namespace for all Pods.
```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-policy
spec:
  rules:
  - name: create-netpol
    match:
      resources:
        kinds:
        - Namespace
    exclude:
      resources:
        namespaces:
        - kube-system
        - default
        - kube-public
        - kyverno
    generate:
      synchronize: true
      kind: NetworkPolicy
      name: deny-ingress
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          podSelector: {}
          policyTypes:
          - Ingress
```
Let's create this policy and then create a new Namespace to test it.
```shell
$ k create -f generate.yaml
clusterpolicy.kyverno.io/generate-policy created
```

```shell
$ k create ns arial-qa
namespace/arial-qa created
```
Let's check what NetworkPolicies now exist.
```shell
$ k get netpol -A
NAMESPACE   NAME           POD-SELECTOR   AGE
arial-qa    deny-ingress   <none>         3s
```
There we go, a new NetworkPolicy object has been created. If you `get` or `describe` said object, it should conform to the definition in our rule.
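For instance, a quick way to eyeball it:

```shell
$ k -n arial-qa get netpol deny-ingress -o yaml
```

The spec in the output should show the empty `podSelector` and the `Ingress` policy type exactly as declared in the rule.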
Let's now build upon this example and add more functionality.
Something else that's commonly requested (again, for which there's not a native answer in Kubernetes) and which is extremely valuable is the ability to generate ConfigMap or Secret resources. ConfigMaps and Secrets are sources of data which need to be consumed in a variety of ways, from Pods to Ingress controllers. And they're namespaced resources, too, so this presents a challenge for managing them.
Let's say, as an example in this scenario, you have apps that get deployed into every Namespace which need to establish trust in some way. In order to do that, they need to know about your company's internal root certificate authority (CA) certificate, since that's what all other resources are signed with. Using Kyverno, you can make it so this CA cert (in the form of a ConfigMap) need only exist once in a "system"-level Namespace, and any new Namespace automatically gets a copy of it when being created.
Go ahead now and create a ConfigMap in a Namespace which contains your cert. Here's what I'll use with a bogus cert I just created.
The source can be any Namespace, and you probably don't want to use `default` for that.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: corp-ca-cert
  namespace: default
data:
  ca: |-
    -----BEGIN CERTIFICATE-----
    MIID5zCCAs+gAwIBAgIUCl6BKlpe2QiS5IQby6QOW7vexMwwDQYJKoZIhvcNAQEL
    BQAwgYIxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTENMAsGA1UEBwwEVG93bjEQ
    MA4GA1UECgwHQ29tcGFueTENMAsGA1UECwwEQ29ycDEYMBYGA1UEAwwPY29ycC5k
    b21haW4uY29tMRwwGgYJKoZIhvcNAQkBFg1jb3JwQGNvcnAuY29tMB4XDTIwMTIx
    NTE4MjkxOVoXDTIzMDkxMTE4MjkxOVowgYIxCzAJBgNVBAYTAlVTMQswCQYDVQQI
    DAJDQTENMAsGA1UEBwwEVG93bjEQMA4GA1UECgwHQ29tcGFueTENMAsGA1UECwwE
    Q29ycDEYMBYGA1UEAwwPY29ycC5kb21haW4uY29tMRwwGgYJKoZIhvcNAQkBFg1j
    b3JwQGNvcnAuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA514S
    SZ97kOwLjDj2bpVcUXVqomkx9817GRvjlrBNdFr0oY4zoZxR+q5Eic3ZnPxf46th
    BEINWGAgvvlU7370ySQux5y4pmh4XMnK0GnbZ9zvxNMOYNl+DUsztMUakP+jG7Rp
    f1OMfUoq4oM1hzqcBDC6V5/801avqUzHGeyVWamGAMS4G5A33h/DfYosCyI3blEk
    7nDjVnex6bc2k5OmGTVIvFJP0OI8S08EjDmna33iAWORg6QfMrk0j43sqSQ4QQ0Z
    BOLVQKHhYXxmcenOsgGB+GZJzgWJI3x/3//znY28i7gki//aK5dA9z+uwus0NBtB
    q+6l24E4oL6uTpWMrQIDAQABo1MwUTAdBgNVHQ4EFgQUThJpVGB8LEh8rSzVMVac
    S2kvRaIwHwYDVR0jBBgwFoAUThJpVGB8LEh8rSzVMVacS2kvRaIwDwYDVR0TAQH/
    BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAlPOCZHNzFrHggeQTzuZn+S0rcH7p
    7fZmqotL8G9mhxmzrVAbEtWtacQ71owFFu8RmnYTHykMtoml/wz2OBy1gJ6BaqhC
    YoHx4Ky8J7OxC35cm3JVKXQ4ocC79mhw3ixI2P9UZQrJKnmGr42V1GxcBG7vl86l
    ifSkp2j65Z1exbwnr0lcgiF2/R921FX7LCaXslug8VTUHSTc67/77RuKVxoJ6Gx4
    JszU2icBatjLwGQDfMxKfysG2GJzOFE4TgZaInct1VCB12ij43YZlT6eATvKr9Pi
    Rx0PftMOuWpaV0UtodJkOjXfE+hMeHXbkunUZVkIB8N9VJfouFQyBAurmw==
    -----END CERTIFICATE-----
```
Create the ConfigMap and ensure you can reference `corp-ca-cert` somehow. (JSONPath is great for this.)

```shell
k -n default get cm corp-ca-cert -o jsonpath='{.data.ca}'
```
Now we'll build upon the last policy and add a new rule which clones this `corp-ca-cert` ConfigMap into a new Namespace. But we'll also change it up a bit and apply an additional `match` statement so we have more control over which new Namespaces are actually eligible to receive said cert. There may be some Namespaces which don't need it, so we want to skip those. Let's let the label `app-type=corp` be the trigger we use: any new Namespace with this label will get a copy of the ConfigMap. Here's the new rule we'll add.
```yaml
- name: copy-corp-ca-cert
  match:
    resources:
      kinds:
      - Namespace
      selector:
        matchLabels:
          app-type: corp
  exclude:
    resources:
      namespaces:
      - kube-system
      - default
      - kube-public
      - kyverno
  generate:
    kind: ConfigMap
    name: corp-ca-cert
    namespace: "{{request.object.metadata.name}}"
    synchronize: true
    clone:
      namespace: default
      name: corp-ca-cert
```
Update your ClusterPolicy with this rule, and replace it with `kubectl`.
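For example, assuming your policy manifest is still saved as `generate.yaml`:

```shell
$ k replace -f generate.yaml
clusterpolicy.kyverno.io/generate-policy replaced
```

Now, let's test it by creating a new Namespace which matches these criteria.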
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: billet-qa
  labels:
    app-type: corp
```
Check and see if our NetworkPolicy AND our new ConfigMap are there.
```shell
$ k -n billet-qa get cm,netpol
NAME                     DATA   AGE
configmap/corp-ca-cert   1      69s

NAME                                           POD-SELECTOR   AGE
networkpolicy.networking.k8s.io/deny-ingress   <none>         70s
```
And there we go! We got both resources from both rules: a NetworkPolicy that was generated from data stored in the rule, and a ConfigMap which was cloned from an existing one.
"Oh no," you might say, "we need to renew our root certificate and update them everywhere!" No need to fret, you're not doomed. Remember that synchronize: true
statement in the rules? This will allow us to have Kyverno detect and propagate those changes wherever it was triggered.
Let's edit (or replace) our ConfigMap, swapping the old CA certificate for the new corporate one given to us by our security team (another junk cert incoming).
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: corp-ca-cert
  namespace: default
data:
  ca: |-
    -----BEGIN CERTIFICATE-----
    MIIDVjCCAj6gAwIBAgIUBgTX1E6hLD+kMJ6tDKTfVWj308UwDQYJKoZIhvcNAQEL
    BQAwGTEXMBUGA1UEAwwOQXJnb0NEIFRlc3QgQ0EwIBcNMTkwNzIwMTUzNjEzWhgP
    MjExOTA2MjYxNTM2MTNaMBkxFzAVBgNVBAMMDkFyZ29DRCBUZXN0IENBMIIBIjAN
    BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAw2OHfSVf/YYm9aID39PodB3BSqDG
    SIRYReBirSk6c9fq7sLVGn6kLFZQbxmkxkDhela+JdhTquQFLj0XBI6FYL3gN/64
    uQZx7A1gdBIACrkTjGZTJQ5ifufGJZPM8x1SFMU41NOPJxBzy3F0SWV4CG+DwPTc
    i31vtje340sCNlBP+GdlXvUs0tVnhKuhKeBmsi4Z0sECehEKoO3l3iNWHDEh5sa6
    sS+oRVT2YwnzX/nqQYTjHxbUZZ7mGbfzXkyLH+BDdwO96hc9Qm3tukTJkP5ArPAa
    R2lKi+YziORdSlcYbK0TYW5sY2DJQM7bmcz+iFWuYBDe+zQBry/Ib2VnbwIDAQAB
    o4GTMIGQMB0GA1UdDgQWBBQesUBqH6vVPaffB66rXDfDyxiWrDBUBgNVHSMETTBL
    gBQesUBqH6vVPaffB66rXDfDyxiWrKEdpBswGTEXMBUGA1UEAwwOQXJnb0NEIFRl
    c3QgQ0GCFAYE19ROoSw/pDCerQyk31Vo99PFMAwGA1UdEwQFMAMBAf8wCwYDVR0P
    BAQDAgEGMA0GCSqGSIb3DQEBCwUAA4IBAQBgX1lyH/TqR1Fp1w4EcdBBsGyUKFI4
    COW0pGzCXErlhz7r+Z9kJ75m8X0tDI1cYgUBHfzCliiIajuJcJ28HBPRgGgRujQI
    INweSelauIZB4uVpnsCeomNj2xtpYV4j/dJ508HF1LEcsyKQvICSrzwCxIsnG1tG
    o8QicVkGwCZDOPtKrHTf9IYgluh1KXX/by2LCxZ2S5BF7rlmA/eQOhXuvfgbmWEZ
    hxBqiTtk2CEUqiEtwg1+0el8ds4dkDbmTnVwEABKAFMn/f3WBWcUN7zcdMN9taol
    jJAI9NnYM28zg6jDCRdvX8IgT9Bc6k/n9mniFFthm0lN/vw17cewsYxb
    -----END CERTIFICATE-----
```
Apply the new CA into the existing ConfigMap and check to ensure you can retrieve it. After a bit of time, Kyverno should sync that to existing Namespaces which received a copy of the source ConfigMap. And you're done!
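A sketch of that check, assuming the updated manifest is saved as `corp-ca-cert.yaml` (a hypothetical file name) and reusing the JSONPath trick from earlier against the `billet-qa` Namespace we created:

```shell
# Replace the source ConfigMap with the one holding the new cert.
$ k replace -f corp-ca-cert.yaml

# Once Kyverno syncs, the downstream copy should show the new cert, too.
$ k -n billet-qa get cm corp-ca-cert -o jsonpath='{.data.ca}'
```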
Right now, it may take up to 15 minutes for Kyverno to sync the changes. There's an active issue to make changes propagate immediately, so this should be improved soon.
Hopefully you're beginning to see how incredibly useful this generation ability is and how you can really use it as a kind of Namespace bootstrapping tool.
Now, as awesome as what I've shown is, you probably have existing Kubernetes clusters that are already in use. What I've shown up to this point is great if you're creating new Namespaces, but wouldn't it be awesome if you could leverage Kyverno to do some of this for your existing Namespaces? The answer is "absolutely yes, you can," and let me show you how.
In the final scenario, we're going to simulate a brownfield environment and use a `generate` rule's synchronization ability to selectively control the roll-out of the generated resources.
Let's say you're using a private image registry of some sort, or maybe a public one such as Docker Hub which requires authentication (perhaps to get around the recent rate limiting). In order to pull from it, you need a Secret of type `docker-registry` that stores, in base64-encoded format, the credentials for your registry. If you want to follow along, go ahead and create those credentials now, or just use a generic Secret.
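If you need to create one, `kubectl` can build a `docker-registry` Secret for you; the server and credential values below are placeholders to swap for your own:

```shell
$ k -n default create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<username> \
    --docker-password=<password> \
    --docker-email=<email>
```

I've got one that's called `regcred` in my `default` Namespace.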
```shell
$ k -n default get secret
NAME                        TYPE                                  DATA   AGE
cluster-build-token-4q7lt   kubernetes.io/service-account-token   3      14d
default-token-9g4qp         kubernetes.io/service-account-token   3      14d
regcred                     kubernetes.io/dockerconfigjson        1      16h
```
We have some Namespaces that should not get this Secret and some that should. By using Kyverno's `match` abilities, we can roll out our Secret to any Namespace we want based upon the assignment of some metadata. I've chosen to use the label `secret-sync=yes` for that purpose: any Namespace which has that label set will receive the Secret.
Let's write a new rule for this ability and tack it onto our ClusterPolicy.
```yaml
- name: sync-image-pull-secret
  match:
    resources:
      kinds:
      - Namespace
      selector:
        matchLabels:
          secret-sync: "yes"
  exclude:
    resources:
      namespaces:
      - kube-system
      - default
      - kube-public
      - kyverno
  generate:
    kind: Secret
    name: regcred
    namespace: "{{request.object.metadata.name}}"
    synchronize: true
    clone:
      namespace: default
      name: regcred
```
Replace your ClusterPolicy with your newly-edited manifest, and let's test this ability.
I've got a Namespace called `bar` which needs this Secret. Let's assign this label and see what it does.
Kyverno needs to inspect the AdmissionReview data through a webhook, so the matching label cannot already be assigned for this to work.
```shell
$ k label ns bar secret-sync=yes
namespace/bar labeled
```
Did we get a new Secret?
```shell
$ k -n bar get secret
NAME                  TYPE                                  DATA   AGE
default-token-2jszc   kubernetes.io/service-account-token   3      83s
regcred               kubernetes.io/dockerconfigjson        1      30s
```
Yep, we did! And, just like with the earlier ConfigMap example, if we need to update our Secret with new credentials in the future, Kyverno will keep those downstream copies in sync thanks to the `synchronize: true` parameter. You can now bolt onto this and do things like use the resulting Secret in a `mutate` rule to, say, mutate requests to add it as an `imagePullSecret`, and/or `validate` incoming requests so that they have it specified if maybe you know only certain apps require it. Or combine all three!
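For instance, a minimal sketch of that `mutate` idea, in the same declarative style covered in part two (illustrative only, and assuming the `patchStrategicMerge` mutation method):

```yaml
- name: add-imagepullsecret
  match:
    resources:
      kinds:
      - Pod
  mutate:
    patchStrategicMerge:
      spec:
        imagePullSecrets:
        - name: regcred    # the Secret generated into each matching Namespace
```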
Well, I think that's about it. You can see how using Kyverno's generation ability allows us to do things we simply couldn't do before with Kubernetes, and simultaneously eliminates the need for additional, specialized tools. In a broader sense, now that we've explored all three of Kyverno's major abilities, I really hope this series has helped you imagine all the ways Kyverno can make your life easier. From strengthening your security posture, to eliminating hassle for developers, to automating your processes, there's a plethora of benefits you can realize by adding this tool to your environment. And the best part? You get all of this with no code required, which means you don't need to add any more technical debt to your existing pile.
You're now fully empowered to go forward and make this work for you. Hit up the docs to get more information including a quick start guide. Or head over to the GitHub repository and start contributing to this CNCF sandbox project.
With that, thanks for reading and, above all, I hope this was informative. If this helped you, I'm always glad to hear about it. And if you hated it, I'm even more glad to hear about why. Grab me on the Tweeters or LinkedIn and let me know which it was.