A Kubernetes Scheduler Plugin Development Tutorial

The previous article, 《k8s調(diào)度器 kube-scheduler 源碼解析》 (a source-code walkthrough of kube-scheduler), gave an overview of the scheduler and introduced the concept of extension-point plugins. Let's now look at how to develop a custom scheduler.

The source code for this article is hosted at https://github.com/cfanbo/sample-scheduler.

Plugin mechanisms

The Kubernetes scheduler supports two plugin mechanisms: in-tree and out-of-tree.

  1. In-tree plugins (built-in plugins): these are compiled and shipped as part of the Kubernetes core components. They are maintained together with the Kubernetes source code and stay in sync with Kubernetes releases. They are linked statically into the kube-scheduler binary, so they require no separate installation or configuration. Common in-tree plugins include the default scheduling algorithms, packed scheduling, and so on.
  2. Out-of-tree plugins (external plugins): these are developed and maintained as independent projects, separate from the Kubernetes core code, and can be deployed and updated on their own. Essentially, an out-of-tree plugin is built against the scheduler's extension points. Such plugins exist as standalone binaries and are integrated with kube-scheduler in a custom way. To use an out-of-tree plugin you have to install and configure it separately and reference it in the kube-scheduler configuration.

As you can see, in-tree plugins are maintained and evolve together with the Kubernetes core code, while out-of-tree plugins can be developed independently and deployed as standalone binaries. Out-of-tree plugins therefore offer more flexibility and can be customized and extended as needed, whereas in-tree plugins are constrained by the capabilities and limitations of the Kubernetes core code.

As for version upgrades, in-tree plugins move in lockstep with the Kubernetes version, while out-of-tree plugins can be upgraded or kept compatible independently.

In short, in-tree plugins are part of Kubernetes and can be used and deployed directly, whereas out-of-tree plugins provide more flexibility and customizability but must be installed and configured separately.

Custom plugin development generally uses the out-of-tree mechanism.

Extension points

See the official documentation: https://kubernetes.io/zh-cn/docs/concepts/scheduling-eviction/scheduling-framework/#%E6%89%A9%E5%B1%95%E7%82%B9

The figure below shows the scheduling context of a Pod and the extension points exposed by the scheduling framework.

A plugin can register at multiple extension points in order to perform more complex or stateful tasks.

(Figure: the Pod scheduling context and the extension points exposed by the scheduling framework)

For a description of each extension point, refer to the official documentation linked above; it is not repeated here.

What we will build below is a Filter plugin for the Pod scheduling context. Its function is very simple: it checks whether a Node carries the cpu=true label. If the label is present the node is considered valid; otherwise the node is treated as invalid and does not take part in scheduling the Pod.

Plugin implementation

Implementing a scheduler plugin requires two things:

  1. Implement the plugin interface of the corresponding extension point.
  2. Register the plugin with the scheduling framework.

Different extension points can have different plugins enabled.

Implementing the interfaces

A plugin for a given extension point must implement the corresponding interface. All the interfaces are defined in https://github.com/kubernetes/kubernetes/blob/v1.27.3/pkg/scheduler/framework/interface.go.

// Plugin is the parent type for all the scheduling framework plugins.
type Plugin interface {
    Name() string
}

// FilterPlugin is an interface for Filter plugins. These plugins are called at the
// filter extension point for filtering out hosts that cannot run a pod.
// This concept used to be called 'predicate' in the original scheduler.
// These plugins should return "Success", "Unschedulable" or "Error" in Status.code.
// However, the scheduler accepts other valid codes as well.
// Anything other than "Success" will lead to exclusion of the given host from
// running the pod.
// This is the plugin interface we are going to implement
type FilterPlugin interface {
    Plugin
    // Filter is called by the scheduling framework.
    // All FilterPlugins should return "Success" to declare that
    // the given node fits the pod. If Filter doesn't return "Success",
    // it will return "Unschedulable", "UnschedulableAndUnresolvable" or "Error".
    // For the node being evaluated, Filter plugins should look at the passed
    // nodeInfo reference for this particular node's information (e.g., pods
    // considered to be running on the node) instead of looking it up in the
    // NodeInfoSnapshot because we don't guarantee that they will be the same.
    // For example, during preemption, we may pass a copy of the original
    // nodeInfo object that has some pods removed from it to evaluate the
    // possibility of preempting them to schedule the target pod.
    Filter(ctx context.Context, state *CycleState, pod *v1.Pod, nodeInfo *NodeInfo) *Status
}

// PreEnqueuePlugin is an interface that must be implemented by "PreEnqueue" plugins.
// These plugins are called prior to adding Pods to activeQ.
// Note: an preEnqueue plugin is expected to be lightweight and efficient, so it's not expected to
// involve expensive calls like accessing external endpoints; otherwise it'd block other
// Pods' enqueuing in event handlers.
type PreEnqueuePlugin interface {
    Plugin
    // PreEnqueue is called prior to adding Pods to activeQ.
    PreEnqueue(ctx context.Context, p *v1.Pod) *Status
}

// LessFunc is the function to sort pod info
type LessFunc func(podInfo1, podInfo2 *QueuedPodInfo) bool

// QueueSortPlugin is an interface that must be implemented by "QueueSort" plugins.
// These plugins are used to sort pods in the scheduling queue. Only one queue sort
// plugin may be enabled at a time.
type QueueSortPlugin interface {
    Plugin
    // Less are used to sort pods in the scheduling queue.
    Less(*QueuedPodInfo, *QueuedPodInfo) bool
}

A plugin for the PreEnqueue extension point must implement the PreEnqueuePlugin interface, and likewise a QueueSort plugin must implement the QueueSortPlugin interface. Here we implement the Filter plugin interface.
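
For illustration, a plugin for any other extension point follows exactly the same shape: implement Name() plus the extension point's method. Below is a minimal, hypothetical PreEnqueue sketch; the permitAll type and the "permit-all" name are made up for this example and are not part of the sample project:

// pkg/plugin/preenqueue.go (illustrative sketch only)
package plugin

import (
    "context"

    "k8s.io/api/core/v1"
    "k8s.io/kubernetes/pkg/scheduler/framework"
)

// permitAll is a hypothetical PreEnqueue plugin that allows every Pod into activeQ.
type permitAll struct{}

// Compile-time assertion that the interface is satisfied.
var _ framework.PreEnqueuePlugin = &permitAll{}

// Name returns the plugin name used in the registry and configuration.
func (p *permitAll) Name() string { return "permit-all" }

// PreEnqueue is called before a Pod is added to activeQ; returning Success
// lets the Pod be queued for a scheduling attempt.
func (p *permitAll) PreEnqueue(ctx context.Context, pod *v1.Pod) *framework.Status {
    return framework.NewStatus(framework.Success, "")
}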

Kubernetes already ships with a default scheduler, default-scheduler; we will now add a custom scheduler named sample-scheduler.

(Figure: the default scheduler deployed as a static Pod on the left, and a custom scheduler deployed as a regular Pod on the right)

The default scheduler is deployed as a static Pod (left), whose YAML definition typically lives at /etc/kubernetes/manifests/kube-scheduler.yaml on the control-plane host, while a custom scheduler is usually deployed as a regular Pod (right). A cluster can therefore run multiple schedulers, and a Pod chooses which one to use via spec.schedulerName.

The Kubernetes project provides some example plugin implementations; see https://github.com/kubernetes/kubernetes/tree/v1.27.3/pkg/scheduler/framework/plugins/examples for reference.

// pkg/plugin/myplugin.go
package plugin

import (
    "context"
    "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/kubernetes/pkg/scheduler/framework"
    "log"
)

// Name is the name of the plugin used in the plugin registry and configurations.

const Name = "sample"

// sample is a plugin that filters out nodes missing the cpu=true label.

type sample struct{}

var _ framework.FilterPlugin = &sample{}
var _ framework.PreScorePlugin = &sample{}

// New initializes a new plugin and returns it.
func New(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
    return &sample{}, nil
}

// Name returns name of the plugin.
func (pl *sample) Name() string {
    return Name
}

func (pl *sample) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
    log.Printf("filter pod: %v, node: %v", pod.Name, nodeInfo)
    log.Println(state)

    // Reject nodes that do not carry the cpu=true label
    if nodeInfo.Node().Labels["cpu"] != "true" {
        return framework.NewStatus(framework.Unschedulable, "Node: "+nodeInfo.Node().Name)
    }
    return framework.NewStatus(framework.Success, "Node: "+nodeInfo.Node().Name)
}

func (pl *sample) PreScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status {
    log.Println(nodes)
    return framework.NewStatus(framework.Success, "Node: "+pod.Name)
}

This implements two kinds of plugins, Filter and PreScore, but this article only demonstrates Filter.
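
Before wiring the plugin into a scheduler binary, the Filter logic can be exercised with an ordinary Go test. Below is a minimal sketch, assuming a test file next to the plugin code; the test itself is illustrative and not part of the sample repository:

// pkg/plugin/myplugin_test.go (illustrative sketch only)
package plugin

import (
    "context"
    "testing"

    "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/kubernetes/pkg/scheduler/framework"
)

func TestFilterByCPULabel(t *testing.T) {
    pl := &sample{}
    pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "test-pod"}}

    // A node carrying cpu=true should pass the filter.
    labelled := framework.NewNodeInfo()
    labelled.SetNode(&v1.Node{ObjectMeta: metav1.ObjectMeta{
        Name:   "node-with-label",
        Labels: map[string]string{"cpu": "true"},
    }})
    if st := pl.Filter(context.Background(), framework.NewCycleState(), pod, labelled); !st.IsSuccess() {
        t.Fatalf("expected Success, got %v", st)
    }

    // A node without the label should be rejected as Unschedulable.
    plain := framework.NewNodeInfo()
    plain.SetNode(&v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-without-label"}})
    if st := pl.Filter(context.Background(), framework.NewCycleState(), pod, plain); st.Code() != framework.Unschedulable {
        t.Fatalf("expected Unschedulable, got %v", st)
    }
}

Running go test ./pkg/plugin/... would then exercise the filter logic without needing a cluster.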

Register the custom plugin via app.NewSchedulerCommand(), supplying the plugin's name and constructor (see https://github.com/kubernetes-sigs/scheduler-plugins/blob/master/cmd/scheduler/main.go for reference).

// cmd/scheduler/main.go — the entry point that registers the plugin
package main

import (
    "os"

    "github.com/cfanbo/sample/pkg/plugin"
    "k8s.io/kubernetes/cmd/kube-scheduler/app"
)

func main() {
    command := app.NewSchedulerCommand(
        app.WithPlugin(plugin.Name, plugin.New),
    )

    if err := command.Execute(); err != nil {
        os.Exit(1)
    }
}

Building the plugin

Compile the application into a binary (arm64 in this case):

$ GOOS=linux GOARCH=arm64 go build -ldflags '-X k8s.io/component-base/version.gitVersion=$(VERSION) -w' -o bin/sample-scheduler cmd/scheduler/main.go

Run bin/sample-scheduler -h to see its command-line usage.

The scheduler below could also be run directly on the host, but since the cluster already has a scheduler this may cause some conflicts, so for convenience we deploy our scheduler as a Pod.

Building the image

Dockerfile contents:

FROM --platform=$TARGETPLATFORM ubuntu:20.04
WORKDIR .
COPY bin/sample-scheduler /usr/local/bin
CMD ["sample-scheduler"]

Build the image (see https://blog.haohtml.com/archives/31052 for details):

In production you should use the smallest base image possible; for convenience we use the ubuntu:20.04 image here, which is rather large.

docker buildx build --platform linux/arm64 -t cfanbo/sample-scheduler:v0.0.1 .

Push the resulting image to a remote registry so that it can later be deployed as a Pod:

docker push cfanbo/sample-scheduler:v0.0.1

The environment here is the arm64 architecture.

Deploying the plugin

With the plugin functionality in place, what remains is deployment.

For the plugin to run it must first be registered with the scheduling framework, which is done by writing a KubeSchedulerConfiguration file that customizes kube-scheduler.

In this article we run the scheduler plugin as a Pod.

When the plugin runs inside a container it needs a scheduler configuration file whose content is a KubeSchedulerConfiguration object. We can mount that configuration into the container through a volume and point the scheduler at it when it starts.

First create a ConfigMap whose content is the KubeSchedulerConfiguration we need, mount it into the container's filesystem through a volume, and finally point the scheduler at that file with --config.

The whole setup is captured in a single YAML file, which is convenient.

# sample-scheduler.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - create
      - get
      - list
  - apiGroups:
      - ""
    resources:
      - endpoints
      - events
    verbs:
      - create
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - delete
      - get
      - list
      - watch
      - update
  - apiGroups:
      - ""
    resources:
      - bindings
      - pods/binding
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - pods/status
    verbs:
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - replicationcontrollers
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
      - extensions
    resources:
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - persistentvolumeclaims
      - persistentvolumes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "storage.k8s.io"
    resources:
      - storageclasses
      - csinodes
      - csistoragecapacities
      - csidrivers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "coordination.k8s.io"
    resources:
      - leases
    verbs:
      - create
      - get
      - list
      - update
  - apiGroups:
      - "events.k8s.io"
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sample-scheduler-sa
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: sample-scheduler-clusterrolebinding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sample-scheduler-clusterrole
subjects:
  - kind: ServiceAccount
    name: sample-scheduler-sa
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-config
  namespace: kube-system
data:
  scheduler-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: false
      leaseDuration: 15s
      renewDeadline: 10s
      resourceName: sample-scheduler
      resourceNamespace: kube-system
      retryPeriod: 2s
    profiles:
      - schedulerName: sample-scheduler
        plugins:
          filter:
            enabled:
              - name: sample
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-scheduler
  namespace: kube-system
  labels:
    component: sample-scheduler
spec:
  selector:
    matchLabels:
      component: sample-scheduler
  template:
    metadata:
      labels:
        component: sample-scheduler
    spec:
      serviceAccountName: sample-scheduler-sa
      priorityClassName: system-cluster-critical
      volumes:
        - name: scheduler-config
          configMap:
            name: scheduler-config
      containers:
        - name: scheduler
          image: cfanbo/sample-scheduler:v0.0.1
          imagePullPolicy: IfNotPresent
          command:
            - sample-scheduler
            - --config=/etc/kubernetes/scheduler-config.yaml
            - --v=3
          volumeMounts:
            - name: scheduler-config
              mountPath: /etc/kubernetes

This YAML file does the following:

  1. Declares a KubeSchedulerConfiguration via a ConfigMap.
  2. Creates a Deployment whose container image cfanbo/sample-scheduler:v0.0.1 is the plugin application we built above; the scheduler plugin configuration is mounted into the container at /etc/kubernetes/scheduler-config.yaml via a volume and passed to the application at start-up. For easier debugging the log verbosity is set with --v=3.
  3. Creates a ClusterRole granting access to the various resources.
  4. Creates a ServiceAccount.
  5. Declares a ClusterRoleBinding that ties the ClusterRole and the ServiceAccount together.

Installing the plugin

$ kubectl apply -f sample-scheduler.yaml

The plugin now runs as a Pod (in the `kube-system` namespace).

$ kubectl get pod -n kube-system --selector=component=sample-scheduler
NAME                                READY   STATUS    RESTARTS   AGE
sample-scheduler-85cd75d775-jq4c7   1/1     Running   0          5m50s

Check the start-up arguments of the container process:

$ kubectl exec -it -n kube-system pod/sample-scheduler-85cd75d775-jq4c7 -- ps -auxww
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  4.9  0.6 764160 55128 ?        Ssl  07:41   0:05 sample-scheduler --config=/etc/kubernetes/scheduler-config.yaml --v=3
root          39  0.0  0.0   5472  2284 pts/0    Rs+  07:43   0:00 ps -auxww

We can see the plugin process running with the two arguments we specified.

Open another terminal and keep watching the scheduler plugin's log output:

$ kubectl logs -n kube-system -f sample-scheduler-85cd75d775-jq4c7

Testing the plugin

At this point a Pod using this scheduler cannot be scheduled, because our plugin intervenes in the scheduling decision.

Unschedulable

Create a Pod and set its scheduler to sample-scheduler:

# test-scheduler.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-scheduler
spec:
  selector:
    matchLabels:
      app: test-scheduler
  template:
    metadata:
      labels:
        app: test-scheduler
    spec:
      schedulerName: sample-scheduler # the scheduler to use; if omitted, the default-scheduler is used
      containers:
        - image: nginx:1.23-alpine
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - containerPort: 80

$ kubectl apply -f test-scheduler.yaml

The plugin filters nodes, excluding those that cannot run the Pod: at run time it checks whether a node carries the cpu=true label. If the label is missing the node fails the predicate (filter) phase, so the later scheduling and binding steps can never take place.

Let's check whether this Pod gets scheduled:

$ kubectl get pods --selector=app=test-scheduler
NAME                              READY   STATUS    RESTARTS   AGE
test-scheduler-78c89768cf-5d9ct   0/1     Pending   0          11m

It stays in the Pending state, which means it cannot be scheduled. Let's look at the Pod's description:

$ kubectl describe pod test-scheduler-78c89768cf-5d9ct
Name:        test-scheduler-78c89768cf-5d9ct
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=test-scheduler
                  pod-template-hash=78c89768cf
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/test-scheduler-78c89768cf-5d9ct
Containers:
  nginx:
    Image:        nginx:1.23-alpine
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gptlh (ro)
Volumes:
  kube-api-access-gptlh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

At this point both the Node and Events fields are empty.

Why is that? Is it our plugin at work? Let's check the scheduler plugin's logs:

# A log entry for the newly created Pod is received
I0731 09:14:44.663628       1 eventhandlers.go:118] "Add event for unscheduled pod" pod="default/test-scheduler-78c89768cf-5d9ct"

# The scheduler starts trying to schedule the Pod
I0731 09:14:44.663787       1 schedule_one.go:80] "Attempting to schedule pod" pod="default/test-scheduler-78c89768cf-5d9ct"

# Debug output from our plugin, corresponding to the two log calls in Filter()
2023/07/31 09:14:44 filter pod: test-scheduler-78c89768cf-5d9ct, node: &NodeInfo{Pods:[calico-apiserver-f654d8896-c97v9 calico-node-94q9p csi-node-driver-7bjxx nginx-984448cf6-45nrp nginx-984448cf6-zx67q ingress-nginx-controller-8c4c57cd9-n4lvm coredns-7bdc4cb885-m9sbg coredns-7bdc4cb885-npz8p etcd-k8s kube-apiserver-k8s kube-controller-manager-k8s kube-proxy-fxz6j kube-scheduler-k8s controller-7948676b95-b2zfd speaker-p779d minio-operator-67c694f5f6-b4s7h], RequestedResource:&framework.Resource{MilliCPU:1150, Memory:614465536, EphemeralStorage:524288000, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, NonZeroRequest: &framework.Resource{MilliCPU:2050, Memory:3131047936, EphemeralStorage:0, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, UsedPort: framework.HostPortInfo{"0.0.0.0":map[framework.ProtocolPort]struct {}{framework.ProtocolPort{Protocol:"TCP", Port:7472}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7946}:struct {}{}, framework.ProtocolPort{Protocol:"UDP", Port:7946}:struct {}{}}}, AllocatableResource:&framework.Resource{MilliCPU:4000, Memory:8081461248, EphemeralStorage:907082144291, AllowedPodNumber:110, ScalarResources:map[v1.ResourceName]int64{"hugepages-1Gi":0, "hugepages-2Mi":0, "hugepages-32Mi":0, "hugepages-64Ki":0}}}
2023/07/31 09:14:44 &{{{0 0} {[] {} 0x40003a8e80} map[PreFilterNodePorts:0x4000192280 PreFilterNodeResourcesFit:0x4000192288 PreFilterPodTopologySpread:0x40001922c8 PreFilterVolumeRestrictions:0x40001922a0 VolumeBinding:0x40001922c0 kubernetes.io/pods-to-activate:0x4000192248] 4} false map[InterPodAffinity:{} NodeAffinity:{} VolumeBinding:{} VolumeZone:{}] map[]}
2023/07/31 09:14:44 filter pod: test-scheduler-78c89768cf-5d9ct, node: &NodeInfo{Pods:[calico-apiserver-f654d8896-flfhb calico-kube-controllers-789dc4c76b-h29t5 calico-node-cgm8v calico-typha-5794d6dbd8-7gz5n csi-node-driver-jsxcz nginx-984448cf6-jcwfs nginx-984448cf6-jcwp2 nginx-984448cf6-r6vmg nginx-deployment-7554c7bd74-s2kfn kube-proxy-hgscf sample-scheduler-85cd75d775-jq4c7 speaker-lvp5l minio console-6bdf84b844-vzg72 minio-operator-67c694f5f6-g2bll tigera-operator-549d4f9bdb-txh45], RequestedResource:&framework.Resource{MilliCPU:200, Memory:268435456, EphemeralStorage:524288000, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, NonZeroRequest: &framework.Resource{MilliCPU:1800, Memory:3623878656, EphemeralStorage:0, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, UsedPort: framework.HostPortInfo{"0.0.0.0":map[framework.ProtocolPort]struct {}{framework.ProtocolPort{Protocol:"TCP", Port:5473}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7472}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7946}:struct {}{}, framework.ProtocolPort{Protocol:"UDP", Port:7946}:struct {}{}}}, AllocatableResource:&framework.Resource{MilliCPU:4000, Memory:8081461248, EphemeralStorage:907082144291, AllowedPodNumber:110, ScalarResources:map[v1.ResourceName]int64{"hugepages-1Gi":0, "hugepages-2Mi":0, "hugepages-32Mi":0, "hugepages-64Ki":0}}}
2023/07/31 09:14:44 &{{{0 0} {[] {} 0x40007be050} map[] 0} false map[InterPodAffinity:{} NodeAffinity:{} VolumeBinding:{} VolumeZone:{}] map[]}
I0731 09:14:44.665942       1 schedule_one.go:867] "Unable to schedule pod; no fit; waiting" pod="default/test-scheduler-78c89768cf-5d9ct" err="0/4 nodes are available: 1 Node: k8s, 1 Node: node1, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.."

# The scheduling result: failed
2023/07/31 09:14:44.666174       1 schedule_one.go:943] "Updating pod condition" pod="default/test-scheduler-78c89768cf-5d9ct" conditionType=PodScheduled conditionStatus=False conditionReason="Unschedulable"

The current environment has four nodes, but only two of them, k8s and node1, are usable.

$ kubectl get node
NAME    STATUS                        ROLES           AGE   VERSION
k8s     Ready                         control-plane   65d   v1.27.1
node1   Ready                         <none>          64d   v1.27.2
node2   NotReady                      <none>          49d   v1.27.2
node3   NotReady,SchedulingDisabled   <none>          44d   v1.27.2

The log output shows that the plugin is taking effect and intervening in the Pod's scheduling, which is exactly what we wanted. So far everything is as expected.

Restoring scheduling

Now let's make the Pod schedulable again by adding a cpu=true label to node1:

$ kubectl label nodes node1 cpu=true
node/node1 labeled

$ kubectl get nodes -l=cpu=true
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   64d   v1.27.2

Watch the plugin logs again:

# Scheduling the Pod
I0731 09:24:16.616059       1 schedule_one.go:80] "Attempting to schedule pod" pod="default/test-scheduler-78c89768cf-5d9ct"

# Debug output from our plugin
2023/07/31 09:24:16 filter pod: test-scheduler-78c89768cf-5d9ct, node: &NodeInfo{Pods:[calico-apiserver-f654d8896-c97v9 calico-node-94q9p csi-node-driver-7bjxx nginx-984448cf6-45nrp nginx-984448cf6-zx67q ingress-nginx-controller-8c4c57cd9-n4lvm coredns-7bdc4cb885-m9sbg coredns-7bdc4cb885-npz8p etcd-k8s kube-apiserver-k8s kube-controller-manager-k8s kube-proxy-fxz6j kube-scheduler-k8s controller-7948676b95-b2zfd speaker-p779d minio-operator-67c694f5f6-b4s7h], RequestedResource:&framework.Resource{MilliCPU:1150, Memory:614465536, EphemeralStorage:524288000, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, NonZeroRequest: &framework.Resource{MilliCPU:2050, Memory:3131047936, EphemeralStorage:0, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, UsedPort: framework.HostPortInfo{"0.0.0.0":map[framework.ProtocolPort]struct {}{framework.ProtocolPort{Protocol:"TCP", Port:7472}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7946}:struct {}{}, framework.ProtocolPort{Protocol:"UDP", Port:7946}:struct {}{}}}, AllocatableResource:&framework.Resource{MilliCPU:4000, Memory:8081461248, EphemeralStorage:907082144291, AllowedPodNumber:110, ScalarResources:map[v1.ResourceName]int64{"hugepages-1Gi":0, "hugepages-2Mi":0, "hugepages-32Mi":0, "hugepages-64Ki":0}}}
2023/07/31 09:24:16 &{{{0 0} {[] {} 0x4000a0ee80} map[PreFilterNodePorts:0x400087c110 PreFilterNodeResourcesFit:0x400087c120 PreFilterPodTopologySpread:0x400087c158 PreFilterVolumeRestrictions:0x400087c130 VolumeBinding:0x400087c148 kubernetes.io/pods-to-activate:0x400087c0d0] 4} true map[InterPodAffinity:{} NodeAffinity:{} VolumeBinding:{} VolumeZone:{}] map[]}
2023/07/31 09:24:16 filter pod: test-scheduler-78c89768cf-5d9ct, node: &NodeInfo{Pods:[calico-apiserver-f654d8896-flfhb calico-kube-controllers-789dc4c76b-h29t5 calico-node-cgm8v calico-typha-5794d6dbd8-7gz5n csi-node-driver-jsxcz nginx-984448cf6-jcwfs nginx-984448cf6-jcwp2 nginx-984448cf6-r6vmg nginx-deployment-7554c7bd74-s2kfn kube-proxy-hgscf sample-scheduler-85cd75d775-jq4c7 speaker-lvp5l minio console-6bdf84b844-vzg72 minio-operator-67c694f5f6-g2bll tigera-operator-549d4f9bdb-txh45], RequestedResource:&framework.Resource{MilliCPU:200, Memory:268435456, EphemeralStorage:524288000, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, NonZeroRequest: &framework.Resource{MilliCPU:1800, Memory:3623878656, EphemeralStorage:0, AllowedPodNumber:0, ScalarResources:map[v1.ResourceName]int64(nil)}, UsedPort: framework.HostPortInfo{"0.0.0.0":map[framework.ProtocolPort]struct {}{framework.ProtocolPort{Protocol:"TCP", Port:5473}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7472}:struct {}{}, framework.ProtocolPort{Protocol:"TCP", Port:7946}:struct {}{}, framework.ProtocolPort{Protocol:"UDP", Port:7946}:struct {}{}}}, AllocatableResource:&framework.Resource{MilliCPU:4000, Memory:8081461248, EphemeralStorage:907082144291, AllowedPodNumber:110, ScalarResources:map[v1.ResourceName]int64{"hugepages-1Gi":0, "hugepages-2Mi":0, "hugepages-32Mi":0, "hugepages-64Ki":0}}}
2023/07/31 09:24:16 &{{{0 0} {[] {} 0x4000a0f2b0} map[] 0} true map[InterPodAffinity:{} NodeAffinity:{} VolumeBinding:{} VolumeZone:{}] map[]}

# Binding the Pod to the node
I0731 09:24:16.619544       1 default_binder.go:53] "Attempting to bind pod to node" pod="default/test-scheduler-78c89768cf-5d9ct" node="node1"
I0731 09:24:16.640234       1 eventhandlers.go:161] "Delete event for unscheduled pod" pod="default/test-scheduler-78c89768cf-5d9ct"
I0731 09:24:16.642564       1 schedule_one.go:252] "Successfully bound pod to node" pod="default/test-scheduler-78c89768cf-5d9ct" node="node1" evaluatedNodes=4 feasibleNodes=1

I0731 09:24:16.643173       1 eventhandlers.go:186] "Add event for scheduled pod" pod="default/test-scheduler-78c89768cf-5d9ct"

Judging from the logs the scheduling succeeded; let's confirm from the Pod's status:

$ kubectl get pods --selector=app=test-scheduler
NAME                              READY   STATUS    RESTARTS   AGE
test-scheduler-78c89768cf-5d9ct   1/1     Running   0          10m

The Pod's status has changed from Pending to Running, confirming that it was indeed scheduled.

Let's look at the Pod's description again:

$ kubectl describe pod test-scheduler-78c89768cf-5d9ct
Name:             test-scheduler-78c89768cf-5d9ct
Namespace:        default
Priority:         0
Service Account:  default
Node:             node1/192.168.0.205
Start Time:       Mon, 31 Jul 2023 17:24:16 +0800
Labels:           app=test-scheduler
                  pod-template-hash=78c89768cf
Annotations:      cni.projectcalico.org/containerID: 33e3ffc74e4b2fa15cae210c65d3d4be6a8eadc431e7201185ffa1b1a29cc51d
                  cni.projectcalico.org/podIP: 10.244.166.182/32
                  cni.projectcalico.org/podIPs: 10.244.166.182/32
Status:           Running
IP:               10.244.166.182
IPs:
  IP:           10.244.166.182
Controlled By:  ReplicaSet/test-scheduler-78c89768cf
Containers:
  nginx:
    Container ID:   docker://97896d4c4fec2bae294d02125562bc29d769911c7e47e5f4020b1de24ce9c367
    Image:          nginx:1.23-alpine
    Image ID:       docker://sha256:510900496a6c312a512d8f4ba0c69586e0fbd540955d65869b6010174362c313
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 31 Jul 2023 17:24:18 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wmh9d (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-wmh9d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From              Message
  ----     ------            ----   ----              -------
  Warning  FailedScheduling  14m    sample-scheduler  0/4 nodes are available: 1 Node: k8s, 1 Node: node1, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  9m27s  sample-scheduler  0/4 nodes are available: 1 Node: k8s, 1 Node: node1, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..
  Normal   Scheduled         5m9s   sample-scheduler  Successfully assigned default/test-scheduler-78c89768cf-5d9ct to node1
  Normal   Pulled            5m8s   kubelet           Container image "nginx:1.23-alpine" already present on machine
  Normal   Created           5m8s   kubelet           Created container nginx
  Normal   Started           5m8s   kubelet           Started container nginx

Event 字段可以看到我們給 node1 添加 label 標(biāo)簽前后事件日志信息薪鹦。

At this point the plugin development work is essentially complete.

Summary

The example here is deliberately simple, mainly to make it easy to follow. Whatever kind of plugin you want to develop, just look at the corresponding plugin interface, implement it, and then enable it at the appropriate extension point in the configuration file.
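
For example, if you wanted a Score plugin instead, you would implement framework.ScorePlugin and enable it under the score extension point of the profile. Below is a minimal, hypothetical sketch; the sample-score name, the gpu=true label and the NewScore constructor are assumptions made up for this illustration and are not part of the sample project:

// pkg/plugin/score.go (illustrative sketch only)
package plugin

import (
    "context"

    "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/kubernetes/pkg/scheduler/framework"
)

const ScoreName = "sample-score"

// labelScore prefers nodes that carry a gpu=true label.
type labelScore struct {
    handle framework.Handle
}

var _ framework.ScorePlugin = &labelScore{}

// NewScore would be registered in main.go via app.WithPlugin(ScoreName, NewScore).
func NewScore(_ runtime.Object, h framework.Handle) (framework.Plugin, error) {
    return &labelScore{handle: h}, nil
}

func (pl *labelScore) Name() string { return ScoreName }

// Score looks the node up in the scheduler's snapshot and returns the maximum
// score for labelled nodes and the minimum score for everything else.
func (pl *labelScore) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
    nodeInfo, err := pl.handle.SnapshotSharedLister().NodeInfos().Get(nodeName)
    if err != nil {
        return 0, framework.AsStatus(err)
    }
    if nodeInfo.Node().Labels["gpu"] == "true" {
        return framework.MaxNodeScore, nil
    }
    return framework.MinNodeScore, nil
}

// ScoreExtensions is required by the interface; returning nil means no score normalization.
func (pl *labelScore) ScoreExtensions() framework.ScoreExtensions {
    return nil
}

To enable such a plugin, it would be listed under profiles[].plugins.score.enabled in the KubeSchedulerConfiguration, in the same way sample is listed under filter above.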

The custom scheduler itself is wired up in main.go via

app.NewSchedulerCommand(
    app.WithPlugin(plugin.Name, plugin.New),
)

to register the custom scheduler plugin; the file passed with --config then determines whether the plugin is enabled and at which extension points.

In this article the scheduler runs as a Pod, with the configuration mounted into the container at /etc/kubernetes/scheduler-config.yaml. You could instead modify the cluster's kube-scheduler configuration to add a new scheduler profile, but that is quite intrusive to the cluster and I personally don't recommend it.

Common issues

To get the k8s dependencies to resolve at all, I had to use replace directives for every one of them, pointing them at the staging directory of a local Kubernetes source checkout.

Without the replace directives, downloading the k8s dependencies fails with errors like:

k8s.io/kubernetes@v1.20.2 requires
// k8s.io/api@v0.0.0: reading https://goproxy.io/k8s.io/api/@v/v0.0.0.mod: 404 Not Found
// server response: not found: k8s.io/api@v0.0.0: invalid version: unknown revision v0.0.0

I was stuck on this for a very long time and didn't understand why it happens; even pinning a version number didn't help. The reason is that k8s.io/kubernetes declares its staging repositories (k8s.io/api, k8s.io/apimachinery, and so on) at version v0.0.0 and resolves them with replace directives in its own go.mod; replace directives only take effect in the main module, so any project importing k8s.io/kubernetes has to provide equivalent replacements itself.

Here is my go.mod:

module github.com/cfanbo/sample

go 1.20

require (
    github.com/spf13/cobra v1.6.0
    k8s.io/kubernetes v0.0.0
)

require (
    github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
    github.com/NYTimes/gziphandler v1.1.1 // indirect
    github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 // indirect
    github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a // indirect
    github.com/beorn7/perks v1.0.1 // indirect
    github.com/blang/semver/v4 v4.0.0 // indirect
    github.com/cenkalti/backoff/v4 v4.1.3 // indirect
    github.com/cespare/xxhash/v2 v2.1.2 // indirect
    github.com/coreos/go-semver v0.3.0 // indirect
    github.com/coreos/go-systemd/v22 v22.4.0 // indirect
    github.com/davecgh/go-spew v1.1.1 // indirect
    github.com/docker/distribution v2.8.1+incompatible // indirect
    github.com/emicklei/go-restful/v3 v3.9.0 // indirect
    github.com/evanphx/json-patch v4.12.0+incompatible // indirect
    github.com/felixge/httpsnoop v1.0.3 // indirect
    github.com/fsnotify/fsnotify v1.6.0 // indirect
    github.com/go-logr/logr v1.2.3 // indirect
    github.com/go-logr/stdr v1.2.2 // indirect
    github.com/go-openapi/jsonpointer v0.19.6 // indirect
    github.com/go-openapi/jsonreference v0.20.1 // indirect
    github.com/go-openapi/swag v0.22.3 // indirect
    github.com/gogo/protobuf v1.3.2 // indirect
    github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
    github.com/golang/protobuf v1.5.3 // indirect
    github.com/google/cel-go v0.12.6 // indirect
    github.com/google/gnostic v0.5.7-v3refs // indirect
    github.com/google/go-cmp v0.5.9 // indirect
    github.com/google/gofuzz v1.1.0 // indirect
    github.com/google/uuid v1.3.0 // indirect
    github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
    github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0 // indirect
    github.com/imdario/mergo v0.3.6 // indirect
    github.com/inconshreveable/mousetrap v1.0.1 // indirect
    github.com/josharian/intern v1.0.0 // indirect
    github.com/json-iterator/go v1.1.12 // indirect
    github.com/mailru/easyjson v0.7.7 // indirect
    github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
    github.com/mitchellh/mapstructure v1.4.1 // indirect
    github.com/moby/sys/mountinfo v0.6.2 // indirect
    github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect
    github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
    github.com/modern-go/reflect2 v1.0.2 // indirect
    github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
    github.com/opencontainers/go-digest v1.0.0 // indirect
    github.com/opencontainers/selinux v1.10.0 // indirect
    github.com/pkg/errors v0.9.1 // indirect
    github.com/prometheus/client_golang v1.14.0 // indirect
    github.com/prometheus/client_model v0.3.0 // indirect
    github.com/prometheus/common v0.37.0 // indirect
    github.com/prometheus/procfs v0.8.0 // indirect
    github.com/spf13/pflag v1.0.5 // indirect
    github.com/stoewer/go-strcase v1.2.0 // indirect
    go.etcd.io/etcd/api/v3 v3.5.7 // indirect
    go.etcd.io/etcd/client/pkg/v3 v3.5.7 // indirect
    go.etcd.io/etcd/client/v3 v3.5.7 // indirect
    go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.35.0 // indirect
    go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.1 // indirect
    go.opentelemetry.io/otel v1.10.0 // indirect
    go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.10.0 // indirect
    go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.10.0 // indirect
    go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.10.0 // indirect
    go.opentelemetry.io/otel/metric v0.31.0 // indirect
    go.opentelemetry.io/otel/sdk v1.10.0 // indirect
    go.opentelemetry.io/otel/trace v1.10.0 // indirect
    go.opentelemetry.io/proto/otlp v0.19.0 // indirect
    go.uber.org/atomic v1.7.0 // indirect
    go.uber.org/multierr v1.6.0 // indirect
    go.uber.org/zap v1.19.0 // indirect
    golang.org/x/crypto v0.1.0 // indirect
    golang.org/x/net v0.8.0 // indirect
    golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
    golang.org/x/sync v0.1.0 // indirect
    golang.org/x/sys v0.6.0 // indirect
    golang.org/x/term v0.6.0 // indirect
    golang.org/x/text v0.8.0 // indirect
    golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 // indirect
    google.golang.org/appengine v1.6.7 // indirect
    google.golang.org/genproto v0.0.0-20220502173005-c8bf987b8c21 // indirect
    google.golang.org/grpc v1.51.0 // indirect
    google.golang.org/protobuf v1.28.1 // indirect
    gopkg.in/inf.v0 v0.9.1 // indirect
    gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
    gopkg.in/yaml.v2 v2.4.0 // indirect
    gopkg.in/yaml.v3 v3.0.1 // indirect
    k8s.io/api v0.0.0 // indirect
    k8s.io/apimachinery v0.0.0 // indirect
    k8s.io/apiserver v0.0.0 // indirect
    k8s.io/client-go v0.0.0 // indirect
    k8s.io/cloud-provider v0.0.0 // indirect
    k8s.io/component-base v0.0.0 // indirect
    k8s.io/component-helpers v0.0.0 // indirect
    k8s.io/controller-manager v0.0.0 // indirect
    k8s.io/csi-translation-lib v0.0.0 // indirect
    k8s.io/dynamic-resource-allocation v0.0.0 // indirect
    k8s.io/klog/v2 v2.90.1 // indirect
    k8s.io/kms v0.0.0 // indirect
    k8s.io/kube-openapi v0.0.0-20230501164219-8b0f38b5fd1f // indirect
    k8s.io/kube-scheduler v0.0.0 // indirect
    k8s.io/kubelet v0.0.0 // indirect
    k8s.io/mount-utils v0.0.0 // indirect
    k8s.io/utils v0.0.0-20230209194617-a36077c30491 // indirect
    sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.1.2 // indirect
    sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
    sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
    sigs.k8s.io/yaml v1.3.0 // indirect
)

// Replace the k8s dependencies with paths into a local Kubernetes source checkout
replace (
    k8s.io/api => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/api
    k8s.io/apiextensions-apiserver => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/apiextensions-apiserver
    k8s.io/apimachinery => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/apimachinery
    k8s.io/apiserver => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/apiserver
    k8s.io/cli-runtime => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/cli-runtime
    k8s.io/client-go => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/client-go
    k8s.io/cloud-provider => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/cloud-provider
    k8s.io/cluster-bootstrap => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/cluster-bootstrap
    k8s.io/code-generator => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/code-generator
    k8s.io/component-base => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/component-base
    k8s.io/component-helpers => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/component-helpers
    k8s.io/controller-manager => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/controller-manager
    k8s.io/cri-api => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/cri-api
    k8s.io/csi-translation-lib => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/csi-translation-lib
    k8s.io/dynamic-resource-allocation => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/dynamic-resource-allocation
    k8s.io/kms => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kms
    k8s.io/kube-aggregator => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kube-aggregator
    k8s.io/kube-controller-manager => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kube-controller-manager
    k8s.io/kube-proxy => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kube-proxy
    k8s.io/kube-scheduler => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kube-scheduler
    k8s.io/kubectl => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kubectl
    k8s.io/kubelet => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/kubelet
    k8s.io/kubernetes => /Users/sxf/workspace/kubernetes
    k8s.io/legacy-cloud-providers => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/legacy-cloud-providers
    k8s.io/metrics => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/metrics
    k8s.io/mount-utils => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/mount-utils
    k8s.io/pod-security-admission => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/pod-security-admission
    k8s.io/sample-apiserver => /Users/sxf/workspace/kubernetes/staging/src/k8s.io/sample-apiserver
)

Some of these dependencies are unused; when actually developing, just run go mod tidy to clean them up.
