K8s Source Code Reading 6: Solutions for Writing Operators
Following sample-controller
Bootstrapping
Business Logic
We are now in a position to implement the business logic of the custom controller: the state transitions between the three phases, from PhasePending to PhaseRunning to PhaseDone, in controller.go.
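The phases can be modeled as plain string constants on the custom resource's status; a minimal sketch (only the constant names appear in the text, the string values are assumptions):

const (
	// Phase constants for the status of the custom resource.
	PhasePending = "PENDING"
	PhaseRunning = "RUNNING"
	PhaseDone    = "DONE"
)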
processNextWorkItem()
processNextWorkItem() bool reads a single work item off the workqueue and attempts to process it by calling the syncHandler.
func (c *Controller) processNextWorkItem() bool {
	obj, shutdown := c.workqueue.Get()

	if shutdown {
		return false
	}

	// We wrap this block in a func so we can defer c.workqueue.Done.
	err := func(obj interface{}) error {
		// We call Done here so the workqueue knows we have finished processing this item.
		// We also must remember to call Forget if we do not want this work item being
		// re-queued. For example, we do not call Forget if a transient error occurs; instead
		// the item is put back on the workqueue and attempted again after a back-off period.
		defer c.workqueue.Done(obj)
		var key string
		var ok bool
		// We expect strings to come off the workqueue. These are of the form namespace/name.
		// We do this as the delayed nature of the workqueue means the items in the informer
		// cache may actually be more up to date than when the item was initially put onto
		// the workqueue.
		if key, ok = obj.(string); !ok {
			// As the item in the workqueue is actually invalid, we call Forget here; otherwise
			// we'd go into a loop of attempting to process a work item that is invalid.
			c.workqueue.Forget(obj)
			// %#v prints the struct together with its content.
			utilruntime.HandleError(fmt.Errorf("expected string in workqueue but got %#v", obj))
			return nil
		}
		// Run the syncHandler, passing it the namespace/name string of the Foo resource to
		// be synced.
		if err := c.syncHandler(key); err != nil {
			// Put the item back on the workqueue to handle any transient errors.
			c.workqueue.AddRateLimited(key)
			return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
		}
		// Finally, if no error occurs we Forget this item so it does not get queued again
		// until another change happens.
		c.workqueue.Forget(obj)
		klog.Infof("Successfully synced '%s'", key)
		return nil
	}(obj)

	if err != nil {
		utilruntime.HandleError(err)
		return true
	}
	return true
}
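processNextWorkItem is driven by a small worker loop started from the controller's Run method; a minimal sketch following the sample-controller convention:

// runWorker is a long-running function that continuously processes
// items from the workqueue until the queue is shut down.
func (c *Controller) runWorker() {
	for c.processNextWorkItem() {
	}
}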
syncHandler()
syncHandler(key string) compares the actual state with the desired state and attempts to converge the two. It then updates the Status block of the Foo resource with the current status of the resource.
func (c *Controller) syncHandler(key string) error {
	// Convert the namespace/name string into a distinct namespace and name.
	namespace, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("invalid resource key: %s", key))
		return nil
	}

	// Get the Foo resource with this namespace/name.
	foo, err := c.foosLister.Foos(namespace).Get(name)
	if err != nil {
		// The Foo resource may no longer exist, in which case we stop processing.
		if errors.IsNotFound(err) {
			utilruntime.HandleError(fmt.Errorf("foo '%s' in work queue no longer exists", key))
			return nil
		}
		return err
	}

	deploymentName := foo.Spec.DeploymentName
	if deploymentName == "" {
		// We choose to absorb the error here as the worker would requeue
		// the resource otherwise. Instead, the next time the resource is
		// updated it will be queued again.
		utilruntime.HandleError(fmt.Errorf("%s: deployment name must be specified", key))
		return nil
	}

	// Get the deployment with the name specified in Foo.spec.
	deployment, err := c.deploymentsLister.Deployments(foo.Namespace).Get(deploymentName)
	// If the resource doesn't exist, we'll create it.
	if errors.IsNotFound(err) {
		deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).
			Create(context.TODO(), newDeployment(foo), metav1.CreateOptions{})
	}
	// If an error occurred during Get or Create, return it so the item is requeued.
	if err != nil {
		return err
	}

	// If the Deployment is not controlled by this Foo resource, we should log
	// a warning to the event recorder and return an error.
	if !metav1.IsControlledBy(deployment, foo) {
		msg := fmt.Sprintf(MessageResourceExists, deployment.Name)
		c.recorder.Event(foo, corev1.EventTypeWarning, ErrResourceExists, msg)
		return fmt.Errorf("%s", msg)
	}

	// If the number of replicas on the Foo resource is specified, and the number
	// does not equal the current desired replicas on the Deployment, we should
	// update the Deployment resource.
	if foo.Spec.Replicas != nil && *foo.Spec.Replicas != *deployment.Spec.Replicas {
		deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).
			Update(context.TODO(), newDeployment(foo), metav1.UpdateOptions{})
	}
	if err != nil {
		return err
	}

	// Finally, update the Status block of the Foo resource to reflect the
	// current state of the world.
	err = c.updateFooStatus(foo, deployment)
	if err != nil {
		return err
	}

	c.recorder.Event(foo, corev1.EventTypeNormal, SuccessSynced, MessageResourceSynced)
	return nil
}
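syncHandler finishes by calling updateFooStatus, which is not shown above; a sketch along the lines of sample-controller (the clientset and status field names follow that project and are assumptions here):

func (c *Controller) updateFooStatus(foo *samplev1alpha1.Foo, deployment *appsv1.Deployment) error {
	// NEVER modify objects obtained from the lister's store: it is a shared,
	// read-only local cache. DeepCopy the object and modify the copy instead.
	fooCopy := foo.DeepCopy()
	fooCopy.Status.AvailableReplicas = deployment.Status.AvailableReplicas
	// UpdateStatus only persists changes to the status subresource, which
	// guarantees nothing other than the resource status has been updated.
	_, err := c.sampleclientset.SamplecontrollerV1alpha1().Foos(foo.Namespace).
		UpdateStatus(context.TODO(), fooCopy, metav1.UpdateOptions{})
	return err
}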
Kubebuilder
Create a Project
source <(kubebuilder completion zsh)
# --repo sets the Go module name
kubebuilder init --domain bytegopher.com --license apache2 --owner "ByteGopher" --repo www.github.com/airren/cnat-kubebuilder

kubebuilder create api --group cnat --version v1alpha1 --kind CronJob

# After editing the API definitions, generate the manifests such as
# Custom Resources (CRs) or Custom Resource Definitions (CRDs) using
make manifests
Architecture Concept Diagram
The following breakdown of the components (following the Kubebuilder architecture concept diagram) will help you better understand the Kubernetes concepts and architecture.
Process main.go
One of these per cluster, or several if using HA.
Manager sigs.k8s.io/controller-runtime/pkg/manager
One of these per process.
Handles HA (leader election), exports metrics, handles webhook certs, caches events, holds clients, broadcasts events to Controllers, and handles signals and shutdown.
Client…
Communicates with API server, handling authentication and protocols.
Cache…
Holds recently watched or GET'ed objects. Used by Controllers and Webhooks. Uses clients.
Controller sigs.k8s.io/controller-runtime/pkg/controller
One of these per Kind that is reconciled (i.e. one per CRD).
Owns resources created by it.
Uses Caches and Clients, and gets events via Filters.
The controller calls a Reconciler each time it gets an event (see the wiring sketch after this list).
Handles back-off, queuing, and re-queuing of events.
Predicate sigs.k8s.io/controller-runtime/pkg/predicate
Filters a stream of events, passing only those that require action to the reconciler.
Reconciler sigs.k8s.io/controller-runtime/pkg/reconcile
User-provided logic is added here, in the Reconcile function.
Webhook sigs.k8s.io/controller-runtime/pkg/webhook
Zero or one webhook per Kind that is reconciled.
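A minimal sketch of how these pieces are wired together with controller-runtime's builder API (the CronJob Kind follows the kubebuilder commands above; the module path and the CronJobReconciler type are assumptions, with the reconciler sketched in the Reconcile section below, and the scheme variable set up as shown in the Scheme section that follows):

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"

	cnatv1alpha1 "www.github.com/airren/cnat-kubebuilder/api/v1alpha1" // assumed module path
)

func main() {
	// The Manager holds the clients, caches, and event handling for the process.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		os.Exit(1)
	}
	// One Controller per reconciled Kind: watch CronJob objects and call the
	// Reconciler on every event; back-off and re-queuing are handled for us.
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&cnatv1alpha1.CronJob{}).
		Complete(&CronJobReconciler{Client: mgr.GetClient()}); err != nil {
		os.Exit(1)
	}
	// Block until a shutdown signal arrives.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}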
Scheme
Every set of controllers needs a Scheme, which provides mappings between Kinds and their corresponding Go types. The Scheme is simply a way to keep track of which Go type corresponds to a given GVK.
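In a kubebuilder project this is typically set up in main.go; a sketch (the cnatv1alpha1 import is the assumed API package from the project above):

import (
	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
)

var scheme = runtime.NewScheme()

func init() {
	// Register the built-in Kubernetes types and our own API group so that
	// clients and caches can map a GVK to its Go type and back.
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))
	utilruntime.Must(cnatv1alpha1.AddToScheme(scheme))
}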
Reconcile
We return an empty result and no error, which indicates to controller-runtime that we've successfully reconciled this object and don't need to try again until something changes.
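A minimal Reconcile skeleton illustrating this (a sketch; it assumes CronJobReconciler embeds controller-runtime's client.Client):

type CronJobReconciler struct {
	client.Client
}

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cj cnatv1alpha1.CronJob
	if err := r.Get(ctx, req.NamespacedName, &cj); err != nil {
		// The object may already have been deleted; don't requeue in that case.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// ...compare the actual state with cj.Spec and converge the two...

	// Empty result and nil error: successfully reconciled, no requeue
	// until the next change event arrives.
	return ctrl.Result{}, nil
}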
kustomize
controller-gen CLI
What is controller-gen?
Kubebuilder makes use of a tool called controller-gen for generating utility code and Kubernetes YAML. This code and config generation is controlled by the presence of special "marker comments" in Go code.
controller-gen is built out of different "generators" (which specify what to generate) and "output rules" (which specify how and where to write the results). Both are configured through command-line options specified in marker format.
controller-gen paths=./... crd:trivialVersions=true rbac:roleName=controller-perms \
	output:crd:artifacts:config=config/crd/bases
This generates CRDs and RBAC, and specifically stores the generated CRD YAML in config/crd/bases. For the RBAC, it uses the default output rules (config/rbac). It considers every package in the current directory tree (as per the normal rules of the ./... wildcard).
How to use controller-gen?
Generators
- // +webhook (on package): generates (partial) {Mutating,Validating}WebhookConfiguration objects.
- // +schemapatch (on package): patches existing CRDs with new schemata.
- // +rbac:roleName=<string> (on package): generates ClusterRole objects.
- // +object (on package): generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method implementations.
- // +crd (on package): generates CustomResourceDefinition objects.
Output Rules
Output rules configure how a given generator outputs its results. There is always one global "fallback" output rule (specified as output:<rule>), plus per-generator overrides (specified as output:<generator>:<rule>).
Default rules: when no fallback rule is specified manually, a set of default per-generator rules is used, which results in YAML going to config/<generator> and code staying where it belongs.
The default rules are equivalent to output:<generator>:artifacts:config=config/<generator> for each generator.
Markers for Config/Code Generation
Markers are single-line comments that start with a plus, followed by a marker name, optionally followed by some marker-specific configuration.
// +kubebuilder:validation:Optional
// +kubebuilder:validation:MaxItems=2
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string
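Validation markers attach to the field or type directly above which they appear; a sketch with a hypothetical FooSpec type (the type and field names are made up for illustration):

type FooSpec struct {
	// Replicas is constrained by the validation markers directly above it.
	// +kubebuilder:validation:Optional
	// +kubebuilder:validation:Minimum=0
	Replicas *int32 `json:"replicas,omitempty"`
}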
CRD Generation
These markers describe how to construct a custom resource definition from a series of Go types and packages. Generation of the actual validation schema is described by the validation markers.
Calling the Generators
Usually, the code generators are called in mostly the same way in every controller project. Here, all means to call all four standard code generators for CRs:
deepcopy-gen
Generates func (t *T) DeepCopy() *T and func (t *T) DeepCopyInto(*T) methods.
client-gen
Creates typed client sets.
informer-gen
Creates informers for CRs that offer an event-based interface to react to changes of CRs on the server.
lister-gen
Creates listers for CRs that offer a read-only caching layer for GET and LIST requests.
The last two are the basis for building controllers. Together, these four code generators make up a powerful basis for building full-featured, production-ready controllers, using the same mechanisms and packages as the Kubernetes upstream controllers.
Controlling the Generators with Tags
While some of the code-generator behavior is controlled via command-line flags as described earlier, many more properties are controlled via tags in your Go files. A tag is a specially formatted Go comment of the following form.
// +some-tag
// +some-other-tag=value
There are two kinds of tags:
- Global tags, above the package line in a file called doc.go.
- Local tags, above a type declaration.
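A typical doc.go with global tags might look like this (a sketch; the group name is an assumption derived from the domain used in kubebuilder init above):

// +k8s:deepcopy-gen=package
// +groupName=cnat.bytegopher.com

// Package v1alpha1 contains the cnat v1alpha1 API group.
package v1alpha1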