1. Scaling Out Compute Nodes
Before scaling out, check the following on the new nodes:
- Kernel version
- SELinux is enabled and set to enforcing
- The Docker data disk is ready
- /etc/resolv.conf is configured correctly
- The hostname is set
- Time synchronization is configured
- Every node can resolve the new nodes' domain names; if resolution is configured via /etc/hosts, restart the dnsmasq service on all nodes after updating it
- Docker certificates, especially private registry certificates, should be added to the automation. Three locations are involved:
- the /etc/sysconfig/docker configuration,
- certificates under the /etc/pki/ca-trust/source/anchors/ directory,
- image-pull authentication certificates under /etc/docker/certs.d
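The checklist above lends itself to a small pre-flight script. A minimal sketch, with function names of my own and only a few of the checks shown:

```shell
#!/bin/bash
# Illustrative pre-flight checks for a node about to be added to the cluster.

check_resolv() {               # the given resolv.conf has at least one nameserver
  grep -q '^nameserver' "$1"
}

check_hostname() {             # hostname is set and not the default "localhost"
  h=$(hostname)
  [ -n "$h" ] && [ "$h" != "localhost" ]
}

check_selinux() {              # SELinux is enforcing (requires getenforce)
  [ "$(getenforce 2>/dev/null)" = "Enforcing" ]
}
```

Run each check and abort the scale-out if any of them fails; extend with kernel version, data disk, and time-sync checks as needed.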
# /etc/ansible/hosts
[OSEv3:children]
masters
nodes
etcd
new_nodes
...
[new_nodes]
node04.internal.aws.testdrive.openshift.com openshift_node_labels="{'region': 'apps'}" openshift_hostname=node04.internal.aws.testdrive.openshift.com openshift_public_hostname=node04.580763383722.aws.testdrive.openshift.com
node05.internal.aws.testdrive.openshift.com openshift_node_labels="{'region': 'apps'}" openshift_hostname=node05.internal.aws.testdrive.openshift.com openshift_public_hostname=node05.580763383722.aws.testdrive.openshift.com
node06.internal.aws.testdrive.openshift.com openshift_node_labels="{'region': 'apps'}" openshift_hostname=node06.internal.aws.testdrive.openshift.com openshift_public_hostname=node06.580763383722.aws.testdrive.openshift.com
...
Configure the new nodes in DNS, then run:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-node/scaleup.yml
Note:
If the cluster's name resolution is configured via /etc/hosts, restart dnsmasq on all nodes after adding the new entries; otherwise the playbook fails with "could not find csr for nodes".
2.OpenShift Metrics
...
[OSEv3:vars]
...
openshift_metrics_install_metrics=true
openshift_metrics_cassandra_storage_type=pv
openshift_metrics_cassandra_pvc_size=10Gi
openshift_metrics_hawkular_hostname=metrics.apps.580763383722.aws.testdrive.openshift.com
...
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-metrics.yml
3.OpenShift Logging
...
[OSEv3:vars]
...
openshift_logging_install_logging=true
openshift_logging_namespace=logging
openshift_logging_es_pvc_size=10Gi
openshift_logging_kibana_hostname=kibana.apps.580763383722.aws.testdrive.openshift.com
openshift_logging_public_master_url=https://kibana.apps.580763383722.aws.testdrive.openshift.com
...
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
4.OpenShift Multitenant Networking
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
# net-proj.sh
#!/bin/bash
# create NetworkA, NetworkB projects
/usr/bin/oc new-project netproj-a
/usr/bin/oc new-project netproj-b
# deploy the DC definition into the projects
/usr/bin/oc create -f /opt/lab/support/ose.yaml -n netproj-a
/usr/bin/oc create -f /opt/lab/support/ose.yaml -n netproj-b
# ose.yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: ose
  labels:
    run: ose
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 600
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
  triggers:
    - type: ConfigChange
  replicas: 1
  test: false
  selector:
    run: ose
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: ose
    spec:
      containers:
        - name: ose
          image: 'registry.access.redhat.com/openshift3/ose:v3.5'
          command:
            - bash
            - '-c'
            - 'while true; do sleep 60; done'
          resources: {}
          terminationMessagePath: /dev/termination-log
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
#podbip.sh
#!/bin/bash
/usr/bin/oc get pod -n netproj-b $(oc get pod -n netproj-b | awk '/ose-/ {print $1}') -o jsonpath='{.status.podIP}{"\n"}'
Join the netproj-a network to the netproj-b network:
oc adm pod-network join-projects netproj-a --to=netproj-b
oc get netnamespace
Isolate the netproj-a network again:
oc adm pod-network isolate-projects netproj-a
oc get netnamespace
oc exec -n netproj-a $POD_A_NAME -- ping -c1 -W1 $POD_B_IP
5. Node Management
Mark a node unschedulable:
oc adm manage-node node02.internal.aws.testdrive.openshift.com --schedulable=false
List the pods running on a given node:
oc adm manage-node node02.internal.aws.testdrive.openshift.com --list-pods
Evacuate the pods on a node
Dry run:
oc adm manage-node node02.internal.aws.testdrive.openshift.com --evacuate --dry-run
Evacuate:
oc adm manage-node node02.internal.aws.testdrive.openshift.com --evacuate
Make the node schedulable again:
oc adm manage-node node02.internal.aws.testdrive.openshift.com --schedulable=true
Create a volume:
oc volume dc/file-uploader --add --name=my-shared-storage \
-t pvc --claim-mode=ReadWriteMany --claim-size=5Gi \
--claim-name=my-shared-storage --mount-path=/opt/app-root/src/uploaded
Increasing Storage Capacity in CNS
[...]
[cns]
node01.580763383722.aws.testdrive.openshift.com
node02.580763383722.aws.testdrive.openshift.com
node03.580763383722.aws.testdrive.openshift.com
node04.580763383722.aws.testdrive.openshift.com
node05.580763383722.aws.testdrive.openshift.com
node06.580763383722.aws.testdrive.openshift.com
[...]
ansible-playbook /opt/lab/support/configure-firewall.yaml
oc label node/node04.internal.aws.testdrive.openshift.com storagenode=glusterfs
oc label node/node05.internal.aws.testdrive.openshift.com storagenode=glusterfs
oc label node/node06.internal.aws.testdrive.openshift.com storagenode=glusterfs
export HEKETI_CLI_SERVER=http://heketi-container-native-storage.apps.580763383722.aws.testdrive.openshift.com
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=myS3cr3tpassw0rd
# /opt/lab/support/topology-extended.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node01.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.1.30"]
            },
            "zone": 1
          },
          "devices": ["/dev/xvdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node02.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.3.130"]
            },
            "zone": 2
          },
          "devices": ["/dev/xvdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node03.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.4.150"]
            },
            "zone": 3
          },
          "devices": ["/dev/xvdd"]
        }
      ]
    },
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node04.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.1.23"]
            },
            "zone": 1
          },
          "devices": ["/dev/xvdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node05.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.3.141"]
            },
            "zone": 2
          },
          "devices": ["/dev/xvdd"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node06.internal.aws.testdrive.openshift.com"],
              "storage": ["10.0.4.234"]
            },
            "zone": 3
          },
          "devices": ["/dev/xvdd"]
        }
      ]
    }
  ]
}
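Before loading a hand-edited topology file, validating the JSON syntax avoids leaving heketi half-configured. A quick check, assuming python3 is available on the host:

```shell
# Exit non-zero unless the given file is valid JSON
validate_json() {
  python3 -m json.tool < "$1" > /dev/null
}
```

For example: `validate_json /opt/lab/support/topology-extended.json && heketi-cli topology load --json=/opt/lab/support/topology-extended.json`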
heketi-cli topology load --json=/opt/lab/support/topology-extended.json
heketi-cli topology info  ## shows the Cluster ID
# /opt/lab/support/second-cns-storageclass.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: cns-silver
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-container-native-storage.apps.580763383722.aws.testdrive.openshift.com"
  restauthenabled: "true"
  restuser: "admin"
  volumetype: "replicate:3"
  clusterid: "INSERT-CLUSTER-ID-HERE"
  secretNamespace: "default"
  secretName: "cns-secret"
Add a disk on an existing node
# get the NODEID
heketi-cli node list | grep ca777ae0285ef6d8cd7237c862bd591c(CLUSTERID)
heketi-cli device add --name=/dev/xvde --node=33e0045354db4be29b18728cbe817605(NODEID)
Remove a faulty disk
heketi-cli node info 33e0045354db4be29b18728cbe817605(NODEID)
The output looks like this:
Node Id: 33e0045354db4be29b18728cbe817605
State: online
Cluster Id: ca777ae0285ef6d8cd7237c862bd591c
Zone: 1
Management Hostname: node04.internal.aws.testdrive.openshift.com
Storage Hostname: 10.0.1.23
Devices:
Id:01c94798bf6b1af87974573b420c4dff Name:/dev/xvdd State:online Size (GiB):9 Used (GiB):1 Free (GiB):8
Id:da91a2f1c9f62d9916831de18cc09952 Name:/dev/xvde State:online Size (GiB):9 Used (GiB):1 Free (GiB):8
Disable the disk:
heketi-cli device disable 01c94798bf6b1af87974573b420c4dff
6. Adding a Volume to the Registry Component
oc volume dc/docker-registry --add --name=registry-storage -t pvc \
--claim-mode=ReadWriteMany --claim-size=5Gi \
--claim-name=registry-storage --overwrite
7. Changing a DC's Image
oc patch dc nginx -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"harbor.apps.example.com/public/nginx:1.14"}]}}}}'
8. Granting Project A Permission to Pull Project B's Image Streams
oc policy add-role-to-user system:image-puller system:serviceaccount:A:default -n B
9. Granting Jenkins Permission to Manage Project A's Resources
oc policy add-role-to-user edit system:serviceaccount:jenkins:jenkins -n A
10. Manual etcd Maintenance
export ETCDCTL_API=3
etcdctl --cacert=/etc/origin/master/master.etcd-ca.crt --cert=/etc/origin/master/master.etcd-client.crt --key=/etc/origin/master/master.etcd-client.key --endpoints=https://master1.os10.openshift.com:2379,https://master2.os10.openshift.com:2379,https://master3.os10.openshift.com:2379 endpoint health
ETCDCTL_API=3 etcdctl --cacert=/etc/origin/master/master.etcd-ca.crt --cert=/etc/origin/master/master.etcd-client.crt --key=/etc/origin/master/master.etcd-client.key --endpoints=https://master1.os10.openshift.com:2379,https://master2.os10.openshift.com:2379,https://master3.os10.openshift.com:2379 get / --prefix --keys-only
ETCDCTL_API=3 etcdctl --cacert=/etc/origin/master/master.etcd-ca.crt --cert=/etc/origin/master/master.etcd-client.crt --key=/etc/origin/master/master.etcd-client.key --endpoints=https://master1.os10.openshift.com:2379,https://master2.os10.openshift.com:2379,https://master3.os10.openshift.com:2379 del /kubernetes.io/pods/bookinfo/nginx-4-bkdb4
11. Running One-off Tasks from an Image
--restart=Always: the default; creates a deploymentconfig
--restart=OnFailure: creates a Job (though in practice it shows up as a bare Pod)
--restart=OnFailure --schedule="0/5 * * * *": creates a Cron Job
--restart=Never: creates a standalone Pod
oc run nginx -it --rm --image=nginx --restart=OnFailure ls
oc run nginx -it --rm --image=nginx --restart=OnFailure bash
12. Cleaning Up Container Storage on a Host
When docker-storage uses the devicemapper storage driver, the following error can appear: devmapper: Thin Pool has 162394 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior. When this happens, clean up the container host's storage as follows:
# remove exited containers:
exited_containers=$(docker ps -q -f status=exited); if [ "$exited_containers" != "" ]; then docker rm $exited_containers; fi
# remove dangling volumes:
dangling_volumes=$(docker volume ls -qf dangling=true); if [ "$dangling_volumes" != "" ]; then docker volume rm $dangling_volumes; fi
# remove dangling images:
dangling_images=$(docker images --filter "dangling=true" -q --no-trunc); if [ "$dangling_images" != "" ]; then docker rmi $dangling_images; fi
Reference: http://www.cnblogs.com/mhc-fly/p/9324425.html
You can also run prune under the individual subcommands to delete one class of resource at a time:
$ docker container prune -f # remove all exited containers
$ docker volume prune -f # remove unused volumes
$ docker image prune -f # remove dangling images (all unused images with -a)
13. Reserving Memory and CPU on Nodes
/etc/origin/node/node-config.yaml
kubeletArguments:
  system-reserved:
    - cpu=200m
    - memory=1G
  kube-reserved:
    - cpu=200m
    - memory=1G
14. Showing Only a DC's Image Name with oc get
[root@master]$ oc get dc test-app --template={{range.spec.template.spec.containers}}{{.image}}{{end}}
registry.example.com/test/test-app:1.13
Get the image of the first container of the first DC:
[root@master]$ oc get dc --template='{{with $dc:=(index .items 0)}}{{with $container:=(index $dc.spec.template.spec.containers 0)}}{{$container.image}}{{"\n"}}{{end}}{{end}}'
Or use -o jsonpath:
[root@master]$ oc get dc -o jsonpath='{range.items[*]}{range .spec.template.spec.containers[*]}{.image}{"\n"}{end}{end}'
[root@master]$ oc get dc -o jsonpath='{.items[0].spec.template.spec.containers[0].image}{"\n"}'
15. Making the OpenShift Web Console Work with a Private Registry
- Create a certificate for the private registry
[root@registry ~]# mkdir /etc/crts/ && cd /etc/crts
[root@registry ~]# openssl req \
-newkey rsa:2048 -nodes -keyout example.com.key \
-x509 -days 365 -out example.com.crt -subj \
"/C=CN/ST=GD/L=SZ/O=Global Security/OU=IT Department/CN=*.example.com"
- Copy the private registry's CA file into /etc/pki/ca-trust/source/anchors/ on the registry server
- Configure TLS in the registry. For docker-distribution, edit /etc/docker-distribution/registry/config.yml:
http:
  addr: :443
  tls:
    certificate: /etc/crts/example.com.crt
    key: /etc/crts/example.com.key
- Restart docker-distribution
[root@registry ~]# systemctl daemon-reload && systemctl restart docker-distribution && systemctl enable docker-distribution
- On the registry server, run:
update-ca-trust extract
- Copy the private registry's CA file into /etc/pki/ca-trust/source/anchors/ on every OpenShift node
- On every OpenShift node, run:
update-ca-trust extract
16. Docker TLS Authentication for a Private Registry
Create a directory under /etc/docker/certs.d matching the registry's domain name; e.g. for a private registry at example.harbor.com:
$ mkdir -p /etc/docker/certs.d/example.harbor.com
Then copy the private registry's CA file into that directory.
17. Inspecting etcd Data
etcdctl --cert-file=/etc/origin/master/master.etcd-client.crt --key-file /etc/origin/master/master.etcd-client.key --ca-file /etc/origin/master/master.etcd-ca.crt --endpoints="https://master1.os10.openshift.example.com:2379,https://master2.os10.openshift.example.com:2379,https://master3.os10.openshift.example.com:2379"
export ETCDCTL_API=3
etcdctl --cacert=/etc/origin/master/master.etcd-ca.crt --cert=/etc/origin/master/master.etcd-client.crt --key=/etc/origin/master/master.etcd-client.key --endpoints=https://master1.os10.openshift.example.com:2379,https://master2.os10.openshift.example.com:2379,https://master3.os10.openshift.example.com:2379 endpoint health
ETCDCTL_API=3 etcdctl --cacert=/etc/origin/master/master.etcd-ca.crt --cert=/etc/origin/master/master.etcd-client.crt --key=/etc/origin/master/master.etcd-client.key --endpoints=https://master1.os10.openshift.example.com:2379,https://master2.os10.openshift.example.com:2379,https://master3.os10.openshift.example.com:2379 get / --prefix --keys-only
Sum the CPU/memory limits of all pods in a project
## Sum of the pods' CPU limits
data=$(pods=`oc get pod|awk '{print $1}'|grep -v NAME`; for pod in $pods; do oc get pod $pod --template={{range.spec.containers}}{{.resources.limits.cpu}}{{println}}{{end}}; done); i=0; for j in $(echo $data); do i=$(($i+$j)); done ; echo $i;
## 18. Sum of the pods' memory limits
data=$(pods=`oc get pod|awk '{print $1}'|grep -v NAME`; for pod in $pods; do oc get pod $pod --template={{range.spec.containers}}{{.resources.limits.memory}}{{println}}{{end}}; done);i=0; for j in $(echo $data); do mj=$(echo $j|cut -dG -f1); i=$(($i+$mj)); done; echo $i;
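If limits.cpu values use the millicore form ("500m"), the integer arithmetic in the one-liner above fails; normalizing everything to millicores first is safer. A sketch (the helper names are mine):

```shell
# Convert a Kubernetes CPU quantity to integer millicores.
# Handles "250m" (millicores) and whole cores like "2"; fractional
# cores such as "0.5" are not handled by shell integer arithmetic.
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;
    *)  echo $(( $1 * 1000 )) ;;
  esac
}

# Sum a whitespace-separated list of CPU quantities, printing e.g. "1750m"
sum_cpu() {
  total=0
  for q in "$@"; do
    total=$(( total + $(to_millicores "$q") ))
  done
  echo "${total}m"
}
```

For example, `sum_cpu $(oc get pod ... --template=...)` can replace the bare `$(($i+$j))` loop.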
19. dnsmasq Fails to Start with: DBus error: Connection ":1.180" is not allowed to own the service "uk.org.thekelleys.dnsmasq"
$ cat /etc/dbus-1/system.d/dnsmasq.conf
<!DOCTYPE busconfig PUBLIC
"-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
<policy user="root">
<allow own="uk.org.thekelleys.dnsmasq"/>
<allow send_destination="uk.org.thekelleys.dnsmasq"/>
</policy>
<policy context="default">
<allow own="uk.org.thekelleys.dnsmasq"/>
<allow send_destination="uk.org.thekelleys.dnsmasq"/>
</policy>
</busconfig>
$ systemctl daemon-reload
$ systemctl restart dbus
$ systemctl restart dnsmasq
20. ssh Is Very Slow, Hanging at debug1: pledge: network
Restart systemd-logind:
$ systemctl restart systemd-logind
If it hangs at Authentication instead, set StrictHostKeyChecking to no on the ssh client (note this disables host key verification):
$ cat /etc/ssh/ssh_config
Host *
GSSAPIAuthentication no
StrictHostKeyChecking no
21. Pruning the Internal Registry
$ cat > /usr/bin/cleanregistry.sh <<EOF
#!/bin/bash
oc login -u admin -p password
oc adm prune builds --orphans --keep-complete=25 --keep-failed=5 --keep-younger-than=60m --confirm
oc adm prune deployments --orphans --keep-complete=25 --keep-failed=10 --keep-younger-than=60m --confirm
#oc rollout latest docker-registry -n default
#sleep 20
oc adm prune images --keep-younger-than=400m --confirm
EOF
$ crontab -l
0 0 * * * /usr/bin/cleanregistry.sh >> /var/log/cleanregistry.log 2>&1
22. Overriding the Entrypoint with docker run
$ docker run --entrypoint="/bin/bash" --rm -it xhuaustc/nginx-openshift-router:1.15
23. Mirroring Images with oc image mirror
$ oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable --insecure=true
24. Opening a Port in the Firewall
# vi /etc/sysconfig/iptables
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 9100 -j ACCEPT
# systemctl restart iptables
25. Checking a Certificate's Validity Period
$ openssl x509 -noout -text -in ca.crt | grep Validity -A2
Validity
    Not Before: Sep  7 08:48:13 2018 GMT
    Not After : Sep  6 08:48:14 2020 GMT
26. Marking a Node Unschedulable
Option 1:
$ oc adm cordon $nodename
Option 2:
$ oc adm manage-node --schedulable=false $nodename
27. Evacuating Pods from a Node
$ oc adm manage-node --evacuate $nodename
28. Service DNS Names
Normal case:
A Service's DNS name has the form service-name.project-name.svc.cluster.local and resolves to the Service's cluster IP.
Headless case:
Set the Service's clusterIP to None; the backing pods then need a subdomain field (and a statefulset resource needs a serviceName field).
The Service's DNS name is still service-name.project-name.svc.cluster.local, but it resolves to the IPs of the backing pods' containers.
In addition, each backing pod gets its own DNS record of the form pod-name.service-name.project-name.svc.cluster.local.
29. Viewing a Docker Image's Build History
docker history ${鏡像名/ID} -H --no-trunc | awk -F"[ ]{3,}" '{print $3}' | sed -n -e "s#/bin/sh -c##g" -e "s/#(nop) //g" -e '2,$p' | sed '1!G;h;$!d'
For example, to view the build commands of the image mysql:5.6.41:
$ docker history mysql:5.6.41 -H --no-trunc | awk -F"[ ]{3,}" '{$1="";$2="";$(NF-1)="";print $0}' | sed -n -e "s#/bin/sh -c##g" -e "s/#(nop) //g" -e '2,$p' | sed '1!G;h;$!d'
#(nop) ADD file:f8f26d117bc4a9289b7cd7447ca36e1a70b11701c63d949ef35ff9c16e190e50 in /
CMD ["bash"]
groupadd -r mysql && useradd -r -g mysql mysql
apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.7
set -x && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" && export GNUPGHOME="$(mktemp -d)" && gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && gpgconf --kill all && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc && chmod +x /usr/local/bin/gosu && gosu nobody true && apt-get purge -y --auto-remove ca-certificates wget
mkdir /docker-entrypoint-initdb.d
apt-get update && apt-get install -y --no-install-recommends pwgen perl && rm -rf /var/lib/apt/lists/*
set -ex; key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; export GNUPGHOME="$(mktemp -d)"; gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; gpg --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; gpgconf --kill all; rm -rf "$GNUPGHOME"; apt-key list > /dev/null
ENV MYSQL_MAJOR=5.6
ENV MYSQL_VERSION=5.6.41-1debian9
echo "deb http://repo.mysql.com/apt/debian/ stretch mysql-${MYSQL_MAJOR}" > /etc/apt/sources.list.d/mysql.list
{ echo mysql-community-server mysql-community-server/data-dir select ''; echo mysql-community-server mysql-community-server/root-pass password ''; echo mysql-community-server mysql-community-server/re-root-pass password ''; echo mysql-community-server mysql-community-server/remove-test-db select false; } | debconf-set-selections && apt-get update && apt-get install -y mysql-server="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld && chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && chmod 777 /var/run/mysqld && find /etc/mysql/ -name '*.cnf' -print0 | xargs -0 grep -lZE '^(bind-address|log)' | xargs -rt -0 sed -Ei 's/^(bind-address|log)/#&/' && echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
VOLUME [/var/lib/mysql]
#(nop) COPY file:b79e447a4154d7150da6897e9bfdeac5eef0ebd39bb505803fdb0315c929d983 in /usr/local/bin/
ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306/tcp
CMD ["mysqld"]
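The trailing `sed '1!G;h;$!d'` in the pipeline above reverses the line order (docker history prints the newest layer first, so reversing restores Dockerfile order). In isolation the idiom behaves like tac:

```shell
# Reverse line order: on every line but the first, append the hold space (G);
# save the pattern space to hold (h); print only on the last line ($!d)
reverse_lines() {
  sed '1!G;h;$!d'
}
```

For example, `printf 'a\nb\nc\n' | reverse_lines` prints c, b, a.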
30. Push to the Internal Registry Fails After a Build
Pushing image docker-registry.default.svc:5000/apb/my-test-apb:latest ...
Pushed 0/15 layers, 0% complete
Registry server Address:
Registry server User Name: serviceaccount
Registry server Email: serviceaccount@example.org
Registry server Password: <<non-empty>>
error: build error: Failed to push image: unauthorized: unable to validate token
This is most likely because some change invalidated the registry's token while the registry itself was not restarted; restarting the registry pod restores pushes.
$ oc get pod -n default | grep docker-registry
docker-registry-1-8tjhk 1/1 Running 0 4m
$ oc delete pod `oc get pod -n default | grep docker-registry | awk '{print $1}'`
31. Assigning a Username to the Container User
- During the image build, make /etc/passwd writable by the container user:
RUN chmod g=u /etc/passwd
- Set the username when the container starts
Add the username-setup code to the ENTRYPOINT/CMD script:
USER_NAME=${USER_NAME:-ocpuid}
USER_ID=$(id -u)
if ! whoami &> /dev/null; then
if [ -w /etc/passwd ]; then
echo "${USER_NAME}:x:${USER_ID}:0:${USER_NAME} user:${HOME}:/sbin/nologin" >> /etc/passwd
fi
fi
exec "$@"
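For reference, the line this snippet appends to /etc/passwd follows the standard seven-field format name:password:UID:GID:GECOS:home:shell. A small check of the generated entry (the values here are illustrative):

```shell
# Build the /etc/passwd entry exactly as the entrypoint snippet does.
# GID 0 is deliberate: OpenShift runs arbitrary UIDs in the root group.
make_passwd_entry() {
  user_name="$1"; user_id="$2"; home_dir="$3"
  echo "${user_name}:x:${user_id}:0:${user_name} user:${home_dir}:/sbin/nologin"
}
```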
32. Upgrading Docker
The approach is the same for upgrading any OpenShift component; the two key rules are:
- Upgrade one node at a time
- Move business workloads off a node before upgrading it
Steps:
- Update the docker packages in the yum repository
$ cp docker-rpm/* ./extras/Packages/d/
$ createrepo --update extras
- Drain the node's pods and mark it unschedulable
$ oc adm drain <node_name> --force --delete-local-data --ignore-daemonsets
- Exclude packages that should not be upgraded
$ atomic-openshift-docker-excluder exclude
$ atomic-openshift-excluder exclude
- Upgrade docker
$ yum clean all
$ yum update docker
- Restart the services, or reboot the host
Master nodes:
$ systemctl restart docker
$ master-restart api
$ master-restart controllers
$ systemctl restart origin-node
Worker nodes:
$ systemctl restart docker
$ systemctl restart origin-node
or
$ reboot
- Mark the node schedulable again
$ oc adm uncordon <node_name>
33. Example: Getting a Token and Calling the OpenShift ASB Service
$ curl -k -H "Authorization: Bearer `oc serviceaccounts get-token asb-client`" https://$(oc get routes -n openshift-ansible-service-broker --no-headers | awk '{print $2}')/osb/v2/catalog
{
"paths": [
"/ansible-service-broker/",
"/apis",
"/healthz",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/metrics"
]
}
34. Calling the OpenShift API for Pod Information
$ oc get --raw /api/v1/namespaces/<namespace-name>/pods/<pod-name> | json_reformat
35. Mounting a Local Directory with HostPath
$ chcon -Rt svirt_sandbox_file_t /testHostPath
or
$ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /testHostPath
or
$ semanage fcontext -a -t svirt_sandbox_file_t '/testHostPath(/.*)?'
$ restorecon -Rv /testHostPath
# verify the rule: semanage fcontext -l | grep testHostPath
# verify the file label: ls -Z /testHostPath
# delete the rule: semanage fcontext -d '/testHostPath(/.*)?'
36. Script: Save Matching Images to Local Tar Files
$ docker images | grep redis | awk '{image=$1; gsub(/.*\//, "", $1); printf("docker save -o %s.tar %s:%s\n", $1, image, $2)}' | xargs -i bash -c "{}"
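To see what the awk program generates, feed it a sample line in `docker images` output order (repository first, tag second; the image name here is illustrative):

```shell
# Same awk body as the one-liner: strip the registry path from $1 for the
# tar file name, keep the full repository reference for docker save
gen_save_cmd() {
  awk '{image=$1; gsub(/.*\//, "", $1); printf("docker save -o %s.tar %s:%s\n", $1, image, $2)}'
}
```

For example, `echo 'docker.io/library/redis latest' | gen_save_cmd` prints `docker save -o redis.tar docker.io/library/redis:latest`.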
37. Docker Logs Contain the Error: container kill failed because of container not found or no such process
$ # check the docker logs periodically
$ journalctl -r -u docker --since '1 day ago' --no-pager | grep -i error
$ # workaround: restart docker
$ systemctl restart docker
38. Listing All Pods' Restart Counts, Sorted
$ oc get pod --sort-by='.status.containerStatuses[0].restartCount' --all-namespaces | sort -rn -k10
39. docker pull Fails with: 400 unsupported docker v1 repository request
Add --disable-legacy-registry to the docker configuration.
$ cat /etc/sysconfig/docker
...
OPTIONS='... --disable-legacy-registry ...'
...
Cause: when the docker client requests an image via the v2 API and it is not found, the client falls back to a v1 API request; if the registry does not support the v1 API, it returns this error.
40. Application Logs Cannot Be Viewed and oc exec Cannot Enter the Container
Error: Error from server: Get https://master.example.com:8443/containerLogs/namespace/pod-name/console: remote error: tls: internal error
Fix: list the CSRs and approve them manually
$ oc get csr
$ oc get csr -o name | xargs oc adm certificate approve
41. NetworkManager Manages DNS, so /etc/resolv.conf Cannot Be Edited Directly
$ nmcli con show # list all network connections
$ nmcli con show <net-connect-name> # show connection details, including DNS
$ nmcli con mod <net-connect-name> -ipv4.dns <dns-server-ip> # remove the given DNS server IP
$ nmcli con mod <net-connect-name> +ipv4.dns <dns-server-ip> # add the given DNS server IP
42. Checking the Compute Nodes' Current Resource Allocation
$ nodes=$(oc get node --selector=node-role.kubernetes.io/compute=true --no-headers | awk '{print $1}'); for i in $nodes; do echo $i; oc describe node $i | grep Resource -A 3 | grep -v '\-\-\-'; done
node1
Resource Requests Limits
cpu 10445m (65%) 25770m (161%)
memory 22406Mi (34%) 49224Mi (76%)
node2
Resource Requests Limits
cpu 8294m (51%) 25620m (160%)
memory 18298Mi (28%) 48600Mi (75%)
43. The master api Service Cannot Reach etcd During Installation
If the master host has multiple NICs bound, specify etcd_ip in /etc/ansible/hosts, as shown below:
[etcd]
master.example.com etcd_ip=10.1.2.3
Also make sure the IP that the etcd host's hostname resolves to is exactly the IP given by etcd_ip.
44. Masters Have Multiple NICs: How to Specify masterIP at Install Time
During master installation, the masterIP set in master-config.yml is openshift.common.ip, which is the node's default NIC. The IP can be set by editing the roles/openshift_facts/library/openshift_facts.py file:
def get_defaults(self, roles):
    """ Get default fact values

        Args:
            roles (list): list of roles for this host

        Returns:
            dict: The generated default facts
    """
    defaults = {}
    ip_addr = self.system_facts['ansible_default_ipv4']['address']
    exit_code, output, _ = module.run_command(['hostname', '-f'])  # noqa: F405
    hostname_f = output.strip() if exit_code == 0 else ''
    hostname_values = [hostname_f, self.system_facts['ansible_nodename'],
                       self.system_facts['ansible_fqdn']]
    hostname = choose_hostname(hostname_values, ip_addr).lower()
    exit_code, output, _ = module.run_command(['hostname'])  # noqa: F405
    raw_hostname = output.strip() if exit_code == 0 else hostname
    defaults['common'] = dict(ip=ip_addr,
                              public_ip=ip_addr,
                              raw_hostname=raw_hostname,
                              hostname=hostname,
                              public_hostname=hostname,
Alternatively, make the target NIC the default NIC.
OpenShift can also pick this up from the inventory hosts file: use openshift_node_groups to set the value of kubeletArguments.node-ip, as follows:
{'name': 'node-config-node1', 'labels': ['...,...'], 'edits': [{ 'key': 'kubeletArguments.node-ip','value': ['x.x.x.x']}]}
45. Deploying with Custom Certificates: Master1 Reports x509: certificate signed by unknown authority
Check whether a custom certificate file name in the ansible inventory hosts file collides with one of OpenShift's default component certificate names, such as ca.crt.
46. Network Errors During Deployment: Check Whether a Default Route Exists; If Not, Add One
$ ip route
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 102
172.16.10.0/24 dev eth1 proto kernel scope link src 172.16.10.11 metric 101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
$ ## add a default route
$ ip route add default via 172.16.10.1
47. Delete Files Older Than 30 Days in a Directory
$ find /dir -type f -mtime +30 -exec rm -rf {} \;
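Note that -mtime +30 matches files modified more than 30 days ago. A safe way to convince yourself before pointing it at real data (GNU touch's -d option is assumed):

```shell
# Create one 40-day-old file and one fresh file, then show which of the two
# the -mtime +30 predicate selects (only the old one)
demo_mtime() {
  dir=$(mktemp -d)
  touch -d '40 days ago' "$dir/old.log"
  touch "$dir/new.log"
  find "$dir" -type f -mtime +30
  rm -rf "$dir"
}
```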
48. Pod Evicted with: the node was low on resource ephemeral-storage
The pod ran out of ephemeral storage. Check the local disks, especially the free space on the disk holding /var/lib/origin.
49. Self-signed Certificates
- Create the root certificate
$ openssl genrsa -out ca.key 2048
$ openssl req -new -x509 -days 36500 -key ca.key -out ca.crt -subj "/C=CN/ST=shanxi/L=taiyuan/O=cn/OU=test/CN=example.com"
$ # or: openssl req -new -x509 -days 36500 -key ca.key -out ca.crt, entering the fields interactively
- Create a certificate and sign it with the root certificate
$ openssl genrsa -out app.key 2048
$ openssl req -new -key app.key -out app.csr
$ openssl x509 -req -in app.csr -CA ca.crt -CAkey ca.key -out app.crt -days 3650 -CAcreateserial
- Inspect certificate information with the openssl tool
$ openssl x509 -in signed.crt -noout -dates
$ openssl x509 -in signed.crt -noout -subject
$ openssl x509 -in signed.crt -noout -text
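After signing, it is worth verifying the chain explicitly; a wrong -CA/-CAkey pair otherwise only surfaces at runtime. A sketch using the file names from the steps above:

```shell
# Verify that a leaf certificate is signed by the given CA certificate;
# prints "<cert>: OK" and exits 0 on success
verify_signed_by() {
  openssl verify -CAfile "$1" "$2"
}
```

For example: `verify_signed_by ca.crt app.crt`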
50. An etcd Node Fails to Restart with: rafthttp: the clock difference against peer 27de23fad174dca is too high [1m16.89887s > 1s]
Check whether the etcd servers' clocks are in sync; after forcing a sync, etcd recovers automatically.
51. Viewing Warning Events from the Last Hour
The cluster keeps roughly the last hour of Events by default; filter out normal ones with a field selector:
$ oc get event --field-selector=type=Warning --all-namespaces
52. Querying Alerts from Alertmanager
$ oc exec -it alertmanager-main-0 -c alertmanager -n openshift-monitoring -- amtool alert query 'severity=critical' --alertmanager.url http://localhost:9093
53. Getting a Pod's Ordinal in a StatefulSet
[[ $(hostname) =~ -([0-9]+)$ ]] || exit
ordinal=${BASH_REMATCH[1]}
ordinal is the pod's index within the StatefulSet. This is typically used in initContainers to give each pod its own initial configuration; adapt as needed in production.
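The extraction can be exercised outside a pod as well; bash is required for `[[ =~ ]]` and BASH_REMATCH (the helper name is mine):

```shell
# Extract the trailing ordinal from a StatefulSet pod name; prints nothing
# and returns non-zero when the name has no -<number> suffix
ordinal_of() {
  [[ "$1" =~ -([0-9]+)$ ]] || return 1
  echo "${BASH_REMATCH[1]}"
}
```

For example, `ordinal_of web-3` prints 3.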
54. Deleting Images from the Registry
Deletion must be enabled in the registry.
# curl -k -I -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -I http://localhost:5000/v2/openshift/ocp-router/manifests/v3.11.129
The request above returns the manifest's sha256 digest; delete the manifest by digest:
# curl -X DELETE http://localhost:5000/v2/openshift/ocp-router/manifests/sha256:39ad17c3e10f902d8b098ee5128a87d4293b6d07cbc2d1e52ed9ddf0076e3cf9
# # log in to the registry server, then run garbage collection
# registry garbage-collect /etc/docker-distribution/registry/config.yml
55. NFS Server on AIX: Pods Fail to Mount with mount.nfs: Remote I/O error
By default the NFS client mounts over NFSv4. If the NFS service deployed on AIX does not support NFSv4, mounts fail with the mount.nfs: Remote I/O error message. Check which versions the server supports with nfsstat -s.
There are two fixes:
- Reconfigure the NFS server to support NFSv4;
- Configure the PV to force NFSv3 when accessing the backend NFS service.
Reference configuration using spec.mountOptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  mountOptions:
    - hard
    - nfsvers=3
  nfs:
    path: /tmp
    server: 172.17.0.2
Alternatively, this can be set by adding the annotation volume.beta.kubernetes.io/mount-options:
oc patch pv pv0003 -p '{"metadata":{"annotations":{"volume.beta.kubernetes.io/mount-options":"rw,nfsvers=3"}}}'
Reference: https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/
56. Detaching a Pod from Its Replica Controller
Sometimes a running pod needs to be taken out of service traffic. For example, during troubleshooting, restarting the application would destroy the live evidence, so you want to restore service quickly while preserving the problem pod's state.
The trick is simple: labels. In K8S/OCP, relationships between resources are established through labels. Remove the pod's labels and it becomes an orphaned pod: application rollouts no longer affect its lifecycle, and service traffic is no longer routed to it.
# oc label pod xxx-pod --list // list the pod's current labels
# oc label pod xxx-pod <LABEL-A>- <LABEL-B>- // remove the associated labels
57. Node Becomes NotReady and Its Status Check Shows Unknown
Check the CSRs for any stuck in Pending:
$ oc get csr
Approve those CSRs:
$ oc get csr -o name | xargs oc adm certificate approve
or
$ kubectl get csr -o name | xargs kubectl certificate approve
58. No space left on device, but df -h Shows Plenty of Free Space
Check the inodes as well:
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb 16M 502K 16M 4% /
If IUse% reaches 100, no new files can be created even though data blocks remain free.
Fix: delete large numbers of small files, such as old logs.
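To locate where the inodes went, count entries per subdirectory; the usual culprits are directories full of many small files. A rough sketch (assumes GNU find):

```shell
# Print "<entry-count> <dir>" for each direct subdirectory, biggest first,
# to locate the directories consuming the inodes
inode_hogs() {
  for d in "$1"/*/; do
    [ -d "$d" ] || continue
    printf '%s %s\n' "$(find "$d" -xdev | wc -l)" "$d"
  done | sort -rn
}
```

For example: `inode_hogs /var/log | head`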