k8s Deployment

k8s now supports one-click deployment; see the Rancher one-click install guide at https://github.com/qxl1231/2019-k8s-centos/blob/master/rancher-k8s-install.md. This article focuses on building a k8s cluster on CentOS 7 by hand.

It is recommended to skim the whole article once before following the steps one by one.

Prerequisites
  • A Windows host running VMware, with 16 GB of RAM or more
  • Three CentOS 7 virtual machines on the same subnet (on a typical home router that is 192.168.0.x or 192.168.1.x; make sure the firewall is disabled)
Configure hosts
# run on node1
hostnamectl set-hostname node1
# run on node2
hostnamectl set-hostname node2
# run on master
hostnamectl set-hostname master

# add the following entries to /etc/hosts on every machine
vim /etc/hosts
192.168.0.158 master
192.168.0.159 node1
192.168.0.160 node2
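The /etc/hosts edit above can be made idempotent so re-running it never duplicates entries; a minimal sketch (the `HOSTS_FILE` variable is an assumption added here so you can try it on a scratch file before touching the real /etc/hosts):

```shell
# Sketch: append the cluster entries to a hosts file only if missing.
# HOSTS_FILE defaults to a scratch file here; point it at /etc/hosts
# on the real machines.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
while read -r ip name; do
  grep -q "[[:space:]]${name}\$" "$HOSTS_FILE" || echo "${ip} ${name}" >> "$HOSTS_FILE"
done <<'EOF'
192.168.0.158 master
192.168.0.159 node1
192.168.0.160 node2
EOF
cat "$HOSTS_FILE"
```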
Install Docker CE

Master榆鼠、Node節(jié)點(diǎn)都需要安裝、配置Docker(此操作屬于公共部分二鳄,每一臺虛擬機(jī)蛇摸,可以先安裝好一臺虛擬機(jī)然后再copy出來)

# remove any old Docker packages
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# install dependencies
sudo yum update -y && sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

# add the official Docker yum repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# install Docker
sudo yum install docker-ce docker-ce-cli containerd.io

# check the Docker version
docker --version

# start Docker now and enable it at boot
systemctl enable --now docker

Alternatively, install Docker with the one-line convenience script:

curl -fsSL "https://get.docker.com/" | sh
systemctl enable --now docker
Change the Docker cgroup driver to systemd, matching k8s
# change the docker cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker  # restart Docker so the config takes effect
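A malformed /etc/docker/daemon.json prevents the daemon from starting, so it can be worth validating the JSON before restarting. A minimal sketch, written against a scratch copy; it assumes `python3` is available (stock CentOS 7 may only have `python`, whose `json.tool` works the same way):

```shell
# Sketch: validate daemon.json syntax before installing it.
# TMPCFG is a scratch copy; on a real host check /etc/docker/daemon.json.
TMPCFG="$(mktemp)"
cat > "$TMPCFG" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
python3 -m json.tool "$TMPCFG" > /dev/null && echo "daemon.json OK"
```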
Install kubelet, kubeadm and kubectl

kubelet, kubeadm and kubectl must be installed on both the master and the node machines (again, you can set up one VM first and then clone it).

Installing Kubernetes requires the kubelet, kubeadm and related packages, but the yum source the k8s site points at, http://packages.cloud.google.com, is unreachable from mainland China, so we use the Alibaba Cloud yum mirror instead.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# put SELinux into permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# install kubelet, kubeadm and kubectl
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0 --disableexcludes=kubernetes

systemctl enable --now kubelet  # start kubelet now and enable it at boot

# CentOS 7 users also need to set up bridge networking:
yum install -y bridge-utils.x86_64
modprobe br_netfilter  # load the br_netfilter module (verify with lsmod)
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # reload all sysctl configuration files

systemctl disable --now firewalld  # disable the firewall

# k8s requires swap to be disabled
swapoff -a && sysctl -w vm.swappiness=0  # turn swap off
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # stop mounting swap at boot
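The sed expression above comments out the swap entry in /etc/fstab. A minimal sketch that tries it on a scratch copy first, so you can confirm only the swap line is touched (the sample fstab lines are assumptions):

```shell
# Sketch: run the swap-disabling sed on a scratch fstab and confirm that
# only the swap line gets commented out.
FSTAB="$(mktemp)"
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
UUID=1234-abcd          /boot xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri '/^[^#]*swap/s@^@#@' "$FSTAB"
cat "$FSTAB"
```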

If you are working with VMs, clone them after completing all the steps above. The lab environment is 1 master and 2 nodes.
Preparation for creating the cluster (from here on, the master and the nodes diverge)

# On the master:
kubeadm config images pull # pull the images the cluster needs; this requires access to Google's registry, which is blocked in mainland China

# --- if you cannot reach Google's registry, try the following ---
kubeadm config images list # list the required images
# (not necessarily the list below; go by what the command actually prints)
# pull each required image from a domestic mirror, matching those names
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.9-1
docker pull registry.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.aliyuncs.com/google_containers/pause:3.2

# retag the images to the names kubeadm expects (the target versions, e.g.
# v1.19.3, should match what `kubeadm config images list` printed; here the
# mirror only carried v1.19.0, so that image is retagged under the expected name)
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0 k8s.gcr.io/kube-proxy:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0  k8s.gcr.io/kube-apiserver:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0 k8s.gcr.io/kube-controller-manager:v1.19.3
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0 k8s.gcr.io/kube-scheduler:v1.19.3
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.9-1 k8s.gcr.io/etcd:3.4.9-1
docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2 
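The pull-and-retag pairs above can be generated with a small loop. A dry-run sketch: it only prints the commands so you can review them, and you can pipe the output to `sh` to execute for real. The mirror-tag=target-tag list mirrors the commands above and should be adjusted to what `kubeadm config images list` prints on your machine:

```shell
# Dry-run sketch: emit one pull and one retag command per image.
# Each entry is mirror-tag=target-tag; the target must match what
# `kubeadm config images list` reports.
gen_image_cmds() {
  local mirror="registry.aliyuncs.com/google_containers"
  local target="k8s.gcr.io"
  local spec
  for spec in kube-proxy:v1.19.0=kube-proxy:v1.19.3 \
              kube-apiserver:v1.19.0=kube-apiserver:v1.19.3 \
              kube-controller-manager:v1.19.0=kube-controller-manager:v1.19.3 \
              kube-scheduler:v1.19.0=kube-scheduler:v1.19.3 \
              etcd:3.4.9-1=etcd:3.4.9-1 \
              coredns:1.7.0=coredns:1.7.0 \
              pause:3.2=pause:3.2; do
    echo "docker pull ${mirror}/${spec%%=*}"
    echo "docker tag ${mirror}/${spec%%=*} ${target}/${spec#*=}"
  done
}
gen_image_cmds        # review the output, then: gen_image_cmds | sh
```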


# with all required images downloaded in advance, kubeadm init will not pull
# them again and fail because it cannot reach Google's registry


# --- if you cannot reach Google's registry, try the following ---

# On the nodes:
# pull the required images from a domestic mirror
docker pull kry1702/kube-proxy:v1.15.0
docker pull kry1702/pause:3.1


# retag the images (the target versions must match your cluster's version;
# these kry1702/* images are from the original article's 1.15 setup)
docker tag kry1702/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.12
docker tag kry1702/pause:3.1 k8s.gcr.io/pause:3.1


Create the cluster with kubeadm

# If /etc/kubernetes/admin.conf already exists as an empty file on the first
# init (I had created it by hand), init fails with:
# panic: runtime error: invalid memory address or nil pointer dereference
ls /etc/kubernetes/admin.conf && mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak # move it aside as a backup

# Initialize the master (the master needs at least 2 CPU cores). All kinds of
# errors can show up here; success or failure hinges on this step. Using the
# Alibaba Cloud repository avoids most network-related failures.
# 192.168.0.158 is the master's IP address.
kubeadm init --kubernetes-version=v1.19.0 --image-repository registry.aliyuncs.com/google_containers  --apiserver-advertise-address=192.168.0.158 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
# --apiserver-advertise-address: the interface used to talk to the other nodes
# --pod-network-cidr: the pod network subnet; this CIDR is required when using the flannel network
  • During initialization the program checks the environment for consistency; fix any problems based on the actual error messages.
  • kubeadm fetches https://dl.k8s.io/release/stable-1.txt to determine the latest k8s version; reaching that URL requires a proxy from mainland China. If it is unreachable, the kubeadm client's own version is used as the version to install (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version.

Check the result

Initialization output:

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# the join command below is unique to each cluster's master; save yours
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16
Set up kubectl for a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Apply the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Join the nodes
(If you lose the join command or the token expires, run kubeadm token create --print-join-command on the master to generate a fresh one.)
# node1:
kubeadm join 192.168.0.158:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252 
# node2:
kubeadm join 192.168.0.158:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252
Check the result
# on the master:
kubectl get pods --all-namespaces
# ---output---
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-p2r27   0/1     Pending   0          26m
kube-system   coredns-6d56c8448f-q25cq   0/1     Pending   0          26m
kube-system   kube-proxy-qn6db           1/1     Running   0          26m
# ---output---


kubectl get nodes
(screenshot of the kubectl get nodes output)

This article is adapted, with some modifications, from https://zhuanlan.zhihu.com/p/62814079
