I. Cluster Environment Planning and Configuration
Do not use a single-master, multi-node topology in production; use multiple masters with multiple nodes. For this test setup we use three hosts: one master (172.16.20.111) and two nodes (172.16.20.112 and 172.16.20.113).
1. Set the hostnames
After installing CentOS 7, configure a static IP; do the same on all three hosts:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
#at the bottom of the file, change ONBOOT to yes and add a static IPADDR (172.16.20.111, 172.16.20.112 and 172.16.20.113 respectively)
ONBOOT=yes
IPADDR=172.16.20.111
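A static address normally also needs BOOTPROTO, gateway and DNS entries; a minimal sketch for the master, assuming a gateway of 172.16.20.1 and a public DNS resolver (adjust both to your own network):
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.20.111
NETMASK=255.255.255.0
GATEWAY=172.16.20.1
DNS1=223.5.5.5
Apply the change with systemctl restart network.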
Once the IPs are set on all three hosts, set the hostnames and update the hosts file:
#run on the master machine
hostnamectl set-hostname master
#run on node1
hostnamectl set-hostname node1
#run on node2
hostnamectl set-hostname node2
vi /etc/hosts
172.16.20.111 master
172.16.20.112 node1
172.16.20.113 node2
2. Time synchronization
Start the chronyd service:
systemctl start chronyd
Enable it at boot:
systemctl enable chronyd
Verify:
date
3. Disable firewalld and iptables (test environment only)
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
4. Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
5. Disable the swap partition
Comment out the /dev/mapper/centos-swap swap line:
vi /etc/fstab
# comment out
# /dev/mapper/centos-swap swap
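If you prefer not to wait for the reboot at the end of this section, swap can also be turned off immediately (a convenience step, not in the original write-up):
swapoff -a
free -m    # the Swap line should now show 0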
6. Adjust the Linux kernel parameters
vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#load the bridge netfilter module first, otherwise the bridge-nf-call settings cannot be applied
modprobe br_netfilter
#check that the module is loaded
lsmod | grep br_netfilter
#reload the configuration (a bare sysctl -p only reads /etc/sysctl.conf, so pass the new file explicitly)
sysctl -p /etc/sysctl.d/kubernetes.conf
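Since this section ends with a reboot, the module load can also be made persistent; one way (an extra step, not in the original write-up) is a systemd modules-load drop-in:
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF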
7. Configure IPVS
Install ipset and ipvsadm:
yum install ipset ipvsadm -y
Add the modules that need to be loaded (run the whole block at once):
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Make the script executable:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script:
/bin/bash /etc/sysconfig/modules/ipvs.modules
Check that the modules were loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Once all of the settings above are done, be sure to reboot so that the configuration takes effect:
reboot
II. Docker Installation and Configuration
1. Install dependencies
Docker depends on a few system tools:
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the package repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast
3. Install docker-ce
#list the Docker versions available for installation
yum list docker-ce --showduplicates
#pick the version you need; to install the latest, just run: yum -y install docker-ce
yum install --setopt=obsoletes=0 docker-ce-19.03.13-3.el7 -y
4. Start the service
#start the service via systemctl
systemctl start docker
#enable it at boot via systemctl
systemctl enable docker
5. Check the installed version
After the service starts, check the current version with docker version:
docker version
6. Configure the registry mirror
Acceleration is configured through the daemon file /etc/docker/daemon.json. If the host will run Kubernetes, be sure to set "exec-opts": ["native.cgroupdriver=systemd"]. The "insecure-registries": ["172.16.20.175"] entry allows pulling from our Harbor registry over plain HTTP.
vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://eiov0s1n.mirror.aliyuncs.com"],
  "insecure-registries" : ["172.16.20.175"]
}
sudo systemctl daemon-reload && sudo systemctl restart docker
7. Install docker-compose
If the download is too slow, go to https://github.com/docker/compose/releases , pick the matching release, download it and upload it to /usr/local/bin/ on the server.
sudo curl -L "https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Note (optional): enable remote access to the Docker daemon. This is not required and must not be enabled in production; once enabled, a development environment can talk to this Docker daemon directly.
vi /lib/systemd/system/docker.service
Modify ExecStart and append -H tcp://0.0.0.0:2375
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
After the change, run:
systemctl daemon-reload && service docker restart
Test that the API is reachable:
curl http://localhost:2375/version
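Once the API is exposed, a development machine can point its Docker CLI at this host (substitute the address of the server where you enabled remote access):
docker -H tcp://172.16.20.111:2375 version
# or set it once per shell
export DOCKER_HOST=tcp://172.16.20.111:2375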
III. Harbor Private Registry Installation and Configuration (use a separate server, 172.16.20.175; do not install it on the Kubernetes master or nodes)
Docker must already be installed on this host, following the steps above, before Harbor can be installed.
1. Pick a suitable release and download it from:
https://github.com/goharbor/harbor/releases
2. Extract the archive
tar -zxf harbor-offline-installer-v2.2.4.tgz
3. Configure
cd harbor
mv harbor.yml.tmpl harbor.yml
vi harbor.yml
4. Change hostname to this server's address and comment out the https section.
......
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.16.20.175

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
#https:
  # https port for harbor, default is 443
  # port: 443
  # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path
......
5. Run the installer
mkdir /var/log/harbor/
./install.sh
6. Check that the installation succeeded
[root@localhost harbor]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de1b702759e7 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 9 seconds (health: starting) harbor-jobservice
55b465d07157 goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 9 seconds (health: starting) 0.0.0.0:80->8080/tcp, :::80->8080/tcp nginx
d52f5557fa73 goharbor/harbor-core:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 10 seconds (health: starting) harbor-core
4ba09aded494 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 13 seconds ago Up 11 seconds (health: starting) harbor-db
647f6f46e029 goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 11 seconds (health: starting) harbor-portal
70251c4e234f goharbor/redis-photon:v2.2.4 "redis-server /etc/r…" 13 seconds ago Up 11 seconds (health: starting) redis
21a5c408afff goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 13 seconds ago Up 11 seconds (health: starting) registryctl
b0937800f88b goharbor/registry-photon:v2.2.4 "/home/harbor/entryp…" 13 seconds ago Up 11 seconds (health: starting) registry
d899e377e02b goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 13 seconds ago Up 12 seconds (health: starting) 127.0.0.1:1514->10514/tcp harbor-log
7. Stopping and starting Harbor
docker-compose down      #stop
docker-compose up -d     #start
8. Open the Harbor admin console at the hostname configured above, http://172.16.20.175 (default username/password: admin/Harbor12345).
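As a quick smoke test from any Docker host whose daemon.json lists 172.16.20.175 under insecure-registries, log in and push an image; the gitegg project below is only an example and must first be created in the Harbor UI:
docker login 172.16.20.175 -u admin -p Harbor12345
docker pull nginx:latest
docker tag nginx:latest 172.16.20.175/gitegg/nginx:latest
docker push 172.16.20.175/gitegg/nginx:latest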
IV. Kubernetes Installation and Configuration
1. Switch the package source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet and kubectl
yum install -y kubelet kubeadm kubectl
3. Configure the kubelet cgroup driver
vi /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
4. Start kubelet and enable it at boot
systemctl start kubelet && systemctl enable kubelet
5. Initialize the Kubernetes cluster (run on the master only)
Initialize:
kubeadm init --kubernetes-version=v1.22.3 \
--apiserver-advertise-address=172.16.20.111 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.20.0.0/16 --pod-network-cidr=10.222.0.0/16
Create the files kubectl needs:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
6. Join the cluster (run on the Node machines only)
On the nodes (172.16.20.112 and 172.16.20.113), run the join command that was printed after the successful init in the previous step:
kubeadm join 172.16.20.111:6443 --token fgf380.einr7if1eb838mpe \
--discovery-token-ca-cert-hash sha256:fa5a6a2ff8996b09effbf599aac70505b49f35c5bca610d6b5511886383878f7
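The join token printed by kubeadm init is only valid for a limited time (24 hours by default); if it has expired by the time a node joins, generate a fresh join command on the master:
kubeadm token create --print-join-command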
Check the cluster status on the master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 2m54s v1.22.3
node1 NotReady <none> 68s v1.22.3
node2 NotReady <none> 30s v1.22.3
7. Install the network plugin (run on the master only)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Image acceleration: edit kube-flannel.yml and change quay.io/coreos/flannel:v0.15.0 to quay.mirrors.ustc.edu.cn/coreos/flannel:v0.15.0, for example with the one-liner below.
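The replacement can be done in place with sed (assuming your copy of the manifest references the flannel image exactly as above):
sed -i 's#quay.io/coreos/flannel#quay.mirrors.ustc.edu.cn/coreos/flannel#g' kube-flannel.yml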
Run the install:
kubectl apply -f kube-flannel.yml
Check the cluster status again after a minute or two; the STATUS of all nodes should now be Ready.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 42m v1.22.3
node1 Ready <none> 40m v1.22.3
node2 Ready <none> 39m v1.22.3
8. Test the cluster
Deploy an nginx service with kubectl:
kubectl create deployment nginx --image=nginx --replicas=1
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
Check the resources:
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-z5tm8 1/1 Running 0 26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.20.0.1 <none> 443/TCP 68m
service/nginx NodePort 10.20.17.199 <none> 80:32605/TCP 9s
The PORT(S) column of service/nginx shows 80:32605/TCP, so open port 32605 on any of the master or node addresses in a browser to check that nginx is running:
http://172.16.20.111:32605/
http://172.16.20.112:32605/
http://172.16.20.113:32605/
On success, the nginx welcome page is displayed.
9. Install the Kubernetes Dashboard management UI
Kubernetes can be operated entirely with the kubectl command line, but it also provides a convenient web console: Kubernetes Dashboard lets you deploy containerized applications, monitor their state, troubleshoot, and manage the various cluster resources.
1) Download the install manifest recommended.yaml; check the version compatibility between Kubernetes and Kubernetes Dashboard at https://github.com/kubernetes/dashboard/releases .
# download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
2) Edit the configuration: under the kubernetes-dashboard Service, add type: NodePort and nodePort: 30010.
vi recommended.yaml
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 30010
......
Comment out the following tolerations (they are not needed once the Deployment is pinned to the master with nodeName below):
      # Comment the following tolerations if Dashboard must not be deployed on master
      #tolerations:
      #  - key: node-role.kubernetes.io/master
      #    effect: NoSchedule
Add nodeName: master to the Deployment so that the Dashboard is installed on the master server:
......
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
......
3) Apply the manifest:
kubectl apply -f recommended.yaml
4) Check the status; service/kubernetes-dashboard is running and exposed on port 30010:
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-c45b7869d-6k87n 0/1 ContainerCreating 0 10s
pod/kubernetes-dashboard-576cb95f94-zfvc9 0/1 ContainerCreating 0 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.20.222.83 <none> 8000/TCP 10s
service/kubernetes-dashboard NodePort 10.20.201.182 <none> 443:30010/TCP 10s
5) Create an account for logging in to the Kubernetes Dashboard:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
6) Look up that account's login token:
[root@master ~]# kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-84gg6 kubernetes.io/service-account-token 3 64s
[root@master ~]# kubectl describe secrets dashboard-admin-token-84gg6 -n kubernetes-dashboard
Name: dashboard-admin-token-84gg6
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 2d93a589-6b0b-4ed6-adc3-9a2eeb5d1311
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRmbVVfRy15QzdfUUF4ZmFuREZMc3dvd0IxQ3ItZm5SdHVZRVhXV3JpZGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tODRnZzYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmQ5M2E1ODktNmIwYi00ZWQ2LWFkYzMtOWEyZWViNWQxMzExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.xsDBLeZdn7IO0Btpb4LlCD1RQ2VYsXXPa-bir91VXIqRrL1BewYAyFfZtxU-8peU8KebaJiRIaUeF813x6WbGG9QKynL1fTARN5XoH-arkBTVlcjHQ5GBziLDE-KU255veVqORF7J5XtB38Ke2n2pi8tnnUUS_bIJpMTF1s-hV0aLlqUzt3PauPmDshtoerz4iafWK0u9oWBASQDPPoE8IWYU1KmSkUNtoGzf0c9vpdlUw4j0UZE4-zSoMF_XkrfQDLD32LrG56Wgpr6E8SeipKRfgXvx7ExD54b8Lq9DyAltr_nQVvRicIEiQGdbeCu9dwzGyhg-cDucULTx7TUgA
7) Open the Kubernetes Dashboard in a browser. Be sure to use https: https://172.16.20.111:30010 . Log in with the token; once in the admin console, the operations done earlier on the command line can also be performed from the UI.
V. GitLab Installation and Configuration
GitLab is a Git repository server that can be deployed on premises. This section covers installing and using it: during development we push code to this local repository, and Jenkins pulls the code from it to build and deploy.
1. Download the package you need from https://packages.gitlab.com/gitlab/gitlab-ce/ . Here we use the latest release at the time, gitlab-ce-14.4.1-ce.0.el7.x86_64.rpm; for a real project pick a stable version that suits your needs.
2. Click the version you want to install and the page shows the install commands; run them as prompted:
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce-14.4.1-ce.0.el7.x86_64
3. Configure and start GitLab
gitlab-ctl reconfigure
4. Check the GitLab status
gitlab-ctl status
5. Set the initial login password
cd /opt/gitlab/bin
sudo ./gitlab-rails console
# once inside the console, run:
u=User.where(id:1).first
u.password='root1234'
u.password_confirmation='root1234'
u.save!
quit
6. Open the server address in a browser. GitLab listens on port 80 by default, so the plain address works; log in with root and the password set above (root/root1234).
7. Switch the UI language
User Settings ----> Preferences ----> Language ----> 简体中文 (Simplified Chinese) ----> refresh the page
8. Common GitLab commands
gitlab-ctl stop
gitlab-ctl start
gitlab-ctl restart
VI. Installing Jenkins + Sonar (code quality checks) with Docker
In a real project, give the Spring Cloud tooling its own ops server instead of installing it on the Kubernetes machines. Install docker and docker-compose on it following the steps above, then build Jenkins and Sonar with docker-compose.
1. Create the mount directories on the host and grant permissions
mkdir -p /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
chmod -R 777 /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
2. Create the Jenkins + Sonar compose file jenkins-compose.yml. The Jenkins image used here is jenkinsci/blueocean, the image recommended by the official Docker documentation; in practice it downloads plugins fine even without switching the plugin mirror, which is why it is recommended.
version: '3'
networks:
  prodnetwork:
    driver: bridge
services:
  sonardb:
    image: postgres:12.2
    restart: always
    ports:
      - "5433:5432"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/postgresql:/var/lib/postgresql
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
  sonar:
    image: sonarqube:8.2-community
    restart: always
    ports:
      - "19000:9000"
      - "19092:9092"
    networks:
      - prodnetwork
    depends_on:
      - sonardb
    volumes:
      - /data/docker/ci/sonarqube/conf:/opt/sonarqube/conf
      - /data/docker/ci/sonarqube/data:/opt/sonarqube/data
      - /data/docker/ci/sonarqube/logs:/opt/sonarqube/logs
      - /data/docker/ci/sonarqube/extension:/opt/sonarqube/extensions
      - /data/docker/ci/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    environment:
      - TZ=Asia/Shanghai
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonardb:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
  nexus:
    image: sonatype/nexus3
    restart: always
    ports:
      - "18081:8081"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/nexus:/nexus-data
  jenkins:
    image: jenkinsci/blueocean
    user: root
    restart: always
    ports:
      - "18080:8080"
    networks:
      - prodnetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - $HOME/.ssh:/root/.ssh
      - /data/docker/ci/jenkins/lib:/var/lib/jenkins/
      - /usr/bin/docker:/usr/bin/docker
      - /data/docker/ci/jenkins/home:/var/jenkins_home
    depends_on:
      - nexus
      - sonar
    environment:
      - NEXUS_PORT=8081
      - SONAR_PORT=9000
      - SONAR_DB_PORT=5432
    cap_add:
      - ALL
3. Run the install/start command from the directory containing jenkins-compose.yml
docker-compose -f jenkins-compose.yml up -d
On success the output looks like this:
[+] Running 5/5
 ✔ Network root_prodnetwork  Created   0.0s
 ✔ Container root-sonardb-1  Started   1.0s
 ✔ Container root-nexus-1    Started   1.0s
 ✔ Container root-sonar-1    Started   2.1s
 ✔ Container root-jenkins-1  Started   4.2s
4. Check whether the services started
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52779025a83e jenkins/jenkins:lts "/sbin/tini -- /usr/…" 4 minutes ago Up 3 minutes 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
2f5fbc25de58 sonarqube:8.2-community "./bin/run.sh" 4 minutes ago Restarting (0) 21 seconds ago root-sonar-1
4248a8ba71d8 sonatype/nexus3 "sh -c ${SONATYPE_DI…" 4 minutes ago Up 4 minutes 0.0.0.0:18081->8081/tcp, :::18081->8081/tcp root-nexus-1
719623c4206b postgres:12.2 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:5433->5432/tcp, :::5433->5432/tcp root-sonardb-1
2b6852a57cc2 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 5 days ago Up 29 seconds (health: starting) harbor-jobservice
ebf2dea994fb goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 5 days ago Restarting (1) 46 seconds ago nginx
adfaa287f23b goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 5 days ago Up 7 minutes (healthy) registryctl
8e5bcca3aaa1 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 5 days ago Up 7 minutes (healthy) harbor-db
ebe845e020dc goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 5 days ago Up 7 minutes (healthy) harbor-portal
68263dea2cfc goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 5 days ago Up 7 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log
We can see that Jenkins is mapped to port 18080, but sonarqube did not start. Its log complains that the sonarqube directories cannot be accessed; the path shown is the container directory, but the real problem is the permissions of the host directories, so grant them on the host:
chmod 777 /data/docker/ci/sonarqube/logs
chmod 777 /data/docker/ci/sonarqube/bundled-plugins
chmod 777 /data/docker/ci/sonarqube/conf
chmod 777 /data/docker/ci/sonarqube/data
chmod 777 /data/docker/ci/sonarqube/extension
Run a restart:
docker-compose -f jenkins-compose.yml restart
Check the services again: Jenkins is now reachable on port 18080 and sonarqube on 19000, and both admin consoles can be opened in a browser.
5. Jenkins first-time setup
The Jenkins login page shows the initial password path /var/jenkins_home/secrets/initialAdminPassword. That is the path inside the container; on the host it maps to /data/docker/ci/jenkins/home/secrets/initialAdminPassword. Open that file and enter the password to reach the Jenkins setup wizard.
6. Choose the suggested plugin installation and follow the prompts until you reach the admin console.
Notes:
- Default sonarqube credentials: admin/admin
- Uninstall command: docker-compose -f jenkins-compose.yml down -v
VII. Jenkins Automated Build and Deployment
There are many ways to deploy a project: running the executable jar directly on a JDK host, running the jar inside a Docker container, and the currently popular approach of running the jar and its container inside a Kubernetes pod. Each newer approach refines the one before it. Rather than weighing their pros and cons here, we only note why Kubernetes is used: it provides auto-scaling, service discovery, self-healing, version rollback, load balancing, storage orchestration and more.
The basic day-to-day flow is:
- Push code to the GitLab repository
- GitLab triggers a Jenkins code-quality build through a webhook
- Jenkins is triggered manually to pull the code, compile, package, build the Docker image, publish it to the private Harbor registry, and run kubectl so Kubernetes pulls the image from Harbor and deploys it
1. Install the Kubernetes plugin, the Git Parameter plugin (for parameterized pipeline builds), the Extended Choice Parameter plugin (for choosing which microservices to build when there are several), the Pipeline Utility Steps plugin (for reading the Maven project's .yaml, pom.xml and so on) and Kubernetes Continuous Deploy (use version 1.0 only; download it from the plugin site and upload it manually). Go to Jenkins --> Manage Jenkins --> Plugin Manager --> Available, tick Kubernetes plugin / Git Parameter / Extended Choice Parameter and click Install without restart.
Blue Ocean does not yet support the Git Parameter and Extended Choice Parameter plugins; Git Parameter reads branch information through the Git plugin. We use Pipeline script here rather than Pipeline script from SCM because we do not want build information stored in the code repository; this keeps development and deployment separate.
2. Configure the Kubernetes plugin: Jenkins --> Manage Jenkins --> Manage Nodes and Clouds --> Configure Clouds --> Add a new cloud --> Kubernetes
3. Add the Kubernetes certificate
cat ~/.kube/config
# The following steps are not used for now. Replace certificate-authority-data, client-certificate-data and client-key-data with the actual values from ~/.kube/config
#echo certificate-authority-data | base64 -d > ca.crt
#echo client-certificate-data | base64 -d > client.crt
#echo client-key-data | base64 -d > client.key
# Then run the following command and choose your own export password
#openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt
Manage Jenkins --> Credentials --> System --> Global credentials
4. Add the credential used to access Kubernetes: paste the token created above for the Kubernetes Dashboard login. After saving, select the new credential and click Test Connection; if it succeeds, Jenkins can reach Kubernetes.
5. Configure JDK, Git and Maven globally in Jenkins
The jenkinsci/blueocean image ships with a JDK and Git; log in to the container to find their paths, then enter them in the global tool configuration.
Enter the Jenkins container and check JAVA_HOME and the git path:
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0520ebb9cc5d jenkinsci/blueocean "/sbin/tini -- /usr/…" 2 days ago Up 30 hours 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
[root@localhost ~]# docker exec -it 0520ebb9cc5d /bin/bash
bash-5.1# echo $JAVA_HOME
/opt/java/openjdk
bash-5.1# which git
/usr/bin/git
The commands show JAVA_HOME=/opt/java/openjdk and GIT=/usr/bin/git; enter these in Jenkins Global Tool Configuration.
Maven can be installed on the host under the mapped directory /data/docker/ci/jenkins/home, then configured in Jenkins using the corresponding container path under /var/jenkins_home.
Also define MAVEN_HOME in the system configuration so the Pipeline script can reference it; if the script fails with a permission error, run chmod 777 * in the bin directory of the Maven installation on the host.
6. Create a harbor-key secret in Kubernetes so it can pull images from the private registry; it is referenced in the project's k8s-deployment.yml.
kubectl create secret docker-registry harbor-key --docker-server=172.16.20.175 --docker-username='robot$gitegg' --docker-password='Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3'
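The secret is consumed through imagePullSecrets in the pod spec; a minimal fragment of what that looks like in k8s-deployment.yml (placeholder values only, a fuller template sketch follows the pipeline script further below):
......
    spec:
      imagePullSecrets:
        - name: harbor-key
......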
7. Create a new pipeline job
8. Configure the job's build parameters
9. Configure the pipeline publish script
Under Pipeline, choose Pipeline script:
pipeline {
    agent any
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'Branch', type: 'PT_BRANCH', description: 'Select the code branch to build'
        choice(name: 'BaseImage', choices: ['openjdk:8-jdk-alpine'], description: 'Select the base runtime image')
        choice(name: 'Environment', choices: ['dev','test','prod'], description: 'Select the target environment: dev, test or prod')
        extendedChoice(
            defaultValue: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            description: 'Select the microservices to build',
            multiSelectDelimiter: ',',
            name: 'ServicesBuild',
            quoteValue: false,
            saveJSONParameterToFile: false,
            type: 'PT_CHECKBOX',
            value: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            visibleItemCount: 6)
        string(name: 'BuildParameter', defaultValue: 'none', description: 'Enter extra build parameters')
    }
    environment {
        PRO_NAME = "gitegg"
        BuildParameter = "${params.BuildParameter}"
        ENV = "${params.Environment}"
        BRANCH = "${params.Branch}"
        ServicesBuild = "${params.ServicesBuild}"
        BaseImage = "${params.BaseImage}"
        k8s_token = "7696144b-3b77-4588-beb0-db4d585f5c04"
    }
    stages {
        stage('Clean workspace') {
            steps {
                deleteDir()
            }
        }
        stage('Process parameters') {
            steps {
                script {
                    if ("${params.ServicesBuild}".trim() != "") {
                        def ServicesBuildString = "${params.ServicesBuild}"
                        ServicesBuild = ServicesBuildString.split(",")
                        for (service in ServicesBuild) {
                            println "now got ${service}"
                        }
                    }
                    if ("${params.BuildParameter}".trim() != "" && "${params.BuildParameter}".trim() != "none") {
                        BuildParameter = "${params.BuildParameter}"
                    } else {
                        BuildParameter = ""
                    }
                }
            }
        }
        stage('Pull SourceCode Platform') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-platform.git'
            }
        }
        stage('Install Platform') {
            steps {
                echo "==============Start Platform Build=========="
                sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install ${BuildParameter}"
                echo "==============End Platform Build=========="
            }
        }
        stage('Pull SourceCode') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-cloud.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    echo "==============Start Cloud Parent Install=========="
                    sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install -P${params.Environment} ${BuildParameter}"
                    echo "==============End Cloud Parent Install=========="
                    def workspace = pwd()
                    for (service in ServicesBuild) {
                        stage ("buildCloud${service}") {
                            echo "==============Start Cloud Build ${service}=========="
                            sh "cd ${workspace}/${service} && ${MAVEN_HOME}/bin/mvn -DskipTests=true clean package -P${params.Environment} ${BuildParameter} jib:build -Djib.httpTimeout=200000 -DsendCredentialsOverHttp=true -f pom.xml"
                            echo "==============End Cloud Build ${service}============"
                        }
                    }
                }
            }
        }
        stage('Sync to k8s') {
            steps {
                script {
                    echo "==============Start Sync to k8s=========="
                    def workspace = pwd()
                    mainpom = readMavenPom file: 'pom.xml'
                    profiles = mainpom.getProfiles()
                    def version = mainpom.getVersion()
                    def nacosAddr = ""
                    def nacosConfigPrefix = ""
                    def nacosConfigGroup = ""
                    def dockerHarborAddr = ""
                    def dockerHarborProject = ""
                    def dockerHarborUsername = ""
                    def dockerHarborPassword = ""
                    def serverPort = ""
                    def commonDeployment = "${workspace}/k8s-deployment.yaml"
                    for (profile in profiles) {
                        // pick up the settings of the selected environment profile
                        if (profile.getId() == "${params.Environment}") {
                            nacosAddr = profile.getProperties().getProperty("nacos.addr")
                            nacosConfigPrefix = profile.getProperties().getProperty("nacos.config.prefix")
                            nacosConfigGroup = profile.getProperties().getProperty("nacos.config.group")
                            dockerHarborAddr = profile.getProperties().getProperty("docker.harbor.addr")
                            dockerHarborProject = profile.getProperties().getProperty("docker.harbor.project")
                            dockerHarborUsername = profile.getProperties().getProperty("docker.harbor.username")
                            dockerHarborPassword = profile.getProperties().getProperty("docker.harbor.password")
                        }
                    }
                    for (service in ServicesBuild) {
                        stage ("Sync${service}ToK8s") {
                            echo "==============Start Sync ${service} to k8s=========="
                            dir("${workspace}/${service}") {
                                pom = readMavenPom file: 'pom.xml'
                                echo "group: artifactId: ${pom.artifactId}"
                                def deployYaml = "k8s-deployment-${pom.artifactId}.yaml"
                                yaml = readYaml file: './src/main/resources/bootstrap.yml'
                                serverPort = "${yaml.server.port}"
                                if (fileExists("${workspace}/${service}/k8s-deployment.yaml")) {
                                    commonDeployment = "${workspace}/${service}/k8s-deployment.yaml"
                                } else {
                                    commonDeployment = "${workspace}/k8s-deployment.yaml"
                                }
                                script {
                                    sh "sed 's#{APP_NAME}#${pom.artifactId}#g;s#{IMAGE_URL}#${dockerHarborAddr}#g;s#{IMAGE_PROGECT}#${PRO_NAME}#g;s#{IMAGE_TAG}#${version}#g;s#{APP_PORT}#${serverPort}#g;s#{SPRING_PROFILE}#${params.Environment}#g' ${commonDeployment} > ${deployYaml}"
                                    kubernetesDeploy configs: "${deployYaml}", kubeconfigId: "${k8s_token}"
                                }
                            }
                            echo "==============End Sync ${service} to k8s=========="
                        }
                    }
                    echo "==============End Sync to k8s=========="
                }
            }
        }
    }
}
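The pipeline expects a k8s-deployment.yaml template in the repository root (or per service) containing the placeholders that the sed command substitutes. The real file ships with the GitEgg project; the following is only a minimal sketch of its shape, assuming the placeholder names used in the script ({APP_NAME}, {IMAGE_URL}, {IMAGE_PROGECT}, {IMAGE_TAG}, {APP_PORT}, {SPRING_PROFILE}) and the harbor-key pull secret created earlier. It only becomes valid YAML after substitution, at which point kubernetesDeploy applies it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {APP_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {APP_NAME}
  template:
    metadata:
      labels:
        app: {APP_NAME}
    spec:
      imagePullSecrets:
        - name: harbor-key
      containers:
        - name: {APP_NAME}
          image: {IMAGE_URL}/{IMAGE_PROGECT}/{APP_NAME}:{IMAGE_TAG}
          ports:
            - containerPort: {APP_PORT}
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "{SPRING_PROFILE}"
---
apiVersion: v1
kind: Service
metadata:
  name: {APP_NAME}
spec:
  type: ClusterIP
  selector:
    app: {APP_NAME}
  ports:
    - port: {APP_PORT}
      targetPort: {APP_PORT}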
Common issues:
1. The first run of Pipeline Utility Steps may fail with "Scripts not permitted to use method ..." or "Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getProperties java.lang.Object".
Fix: Manage Jenkins --> In-process Script Approval --> click Approve.
2. Use an NFS server to collect the logs of all containers in one place on the NFS side.
3. Kubernetes Continuous Deploy: use version 1.0.0; newer versions are incompatible and fail with errors.
4. Keep services from registering the docker0 bridge address (so they register their real LAN address):
spring:
  cloud:
    inetutils:
      ignored-interfaces: docker0
5. Configure IPVS mode: kube-proxy watches Pod changes and creates the corresponding ipvs rules; ipvs forwards more efficiently than iptables and supports more load-balancing algorithms.
kubectl edit cm kube-proxy -n kube-system
Change mode: "ipvs"
Reload the kube-proxy configuration:
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
Check the ipvs rules:
ipvsadm -Ln
6. Accessing services outside the cluster (nacos, redis, etc.) from inside Kubernetes
- a. Shared host network: set hostNetwork: true on the workload you deploy
  spec:
    hostNetwork: true
- b. Endpoints mode: define an Endpoints object plus a ClusterIP Service with the same name, as below
kind: Endpoints
apiVersion: v1
metadata:
  name: nacos
  namespace: default
subsets:
  - addresses:
      - ip: 172.16.20.188
    ports:
      - port: 8848
---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8848
      targetPort: 8848
      protocol: TCP
- c. A Service of type: ExternalName: "ExternalName" works through a CNAME redirect, so port remapping is not possible and the external service is addressed by domain name (see the sketch after this list).
For the Endpoints and type: ExternalName approaches, create these YAMLs as separate external manifests rather than bundling them inside the application; they should be prepared when the environment is set up.
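A minimal sketch of option c, assuming the external Nacos is reachable at nacos.example.com (a hypothetical domain, not from the original setup):
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ExternalName
  externalName: nacos.example.com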
7. Common kubectl commands:
List pods: kubectl get pods
List services: kubectl get svc
List endpoints: kubectl get endpoints
Apply a manifest: kubectl apply -f xxx.yaml
Delete a manifest: kubectl delete -f xxx.yaml
Delete a pod: kubectl delete pod podName
Delete a service: kubectl delete service serviceName
Enter a container: kubectl exec -it podsNamexxxxxx -n default -- /bin/sh
GitEgg-Cloud is an enterprise-grade microservice application development framework built on Spring Cloud. Source code:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
If you find it interesting, a Star would be appreciated.