Kubernetes Notes

Environment

  • 7 VMs: three masters, three nodes, and one machine as a private Docker registry
  • VIP
    • 192.168.11.222
  • master
    • 192.168.11.31
    • 192.168.11.32
    • 192.168.11.33
  • node
    • 192.168.11.34
    • 192.168.11.35
    • 192.168.11.36
  • Harbor registry
    • 192.168.11.200

System Configuration

  • Start SSH and enable it at boot
service sshd start
systemctl enable sshd.service
  • Disable the firewall and SELinux
systemctl stop firewalld  && systemctl disable firewalld
vim /etc/selinux/config
Set SELINUX=disabled in the file
setenforce 0
getenforce
  • Disable swap
swapoff -a && sed -i '/swap/d' /etc/fstab
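The `sed '/swap/d'` part deletes every fstab line mentioning swap so the change survives reboots. A minimal sketch of the same edit against a throwaway copy (the sample fstab content below is hypothetical):

```shell
# Work on a scratch copy so this demo never touches the real /etc/fstab
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -i '/swap/d' /tmp/fstab.sample
cat /tmp/fstab.sample    # only the root line remains
```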
  • Configure the bridge netfilter sysctls, so kubeadm does not warn about routing
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
cat /etc/sysctl.conf    # confirm the entries
sysctl -p
  • Set the hostname (one per machine)
hostnamectl set-hostname k8s1
hostnamectl set-hostname k8s2
hostnamectl set-hostname k8s3
hostnamectl set-hostname k8s4
hostnamectl set-hostname k8s5
hostnamectl set-hostname k8s6
hostnamectl set-hostname k8s-deploy
  • Edit the /etc/hosts file
192.168.11.31   k8s1
192.168.11.32   k8s2
192.168.11.33   k8s3
192.168.11.34   k8s4
192.168.11.35   k8s5
192.168.11.36   k8s6
192.168.11.200  k8s-deploy

192.30.253.113 github.com
192.30.252.131 github.com
185.31.16.185 github.global.ssl.fastly.net
74.125.237.1 dl-ssl.google.com
173.194.127.200 groups.google.com
74.125.128.95 ajax.googleapis.com
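Rather than hand-editing /etc/hosts on all seven machines, the whole block can be appended with one heredoc. A sketch against a scratch file; substitute /etc/hosts on the real hosts:

```shell
HOSTS_FILE=/tmp/hosts.demo   # use /etc/hosts on the real machines
cat >> "$HOSTS_FILE" <<'EOF'
192.168.11.31   k8s1
192.168.11.32   k8s2
192.168.11.33   k8s3
192.168.11.34   k8s4
192.168.11.35   k8s5
192.168.11.36   k8s6
192.168.11.200  k8s-deploy
EOF
grep k8s-deploy "$HOSTS_FILE"
```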
  • Configure passwordless SSH login

Run on every node:

ssh-keygen
ssh-copy-id k8s1
ssh-copy-id k8s2
ssh-copy-id k8s3
ssh-copy-id k8s4
ssh-copy-id k8s5
ssh-copy-id k8s6
ssh-copy-id k8s-deploy

Install keepalived (master nodes only)

Install

yum install -y keepalived

Edit the configuration file /etc/keepalived/keepalived.conf

Back up the original, then adjust the values marked below on each master:

global_defs {
   router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.11.222:6443"        # the VIP
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736                   # the interface carrying this node's IP (check ifconfig)
    virtual_router_id 61
    priority 120
    advert_int 1
    mcast_src_ip 192.168.11.31              # this node's own IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        #192.168.11.31                      # the other masters; comment out this node's own IP
        192.168.11.32
        192.168.11.33
    }
    virtual_ipaddress {
        192.168.11.222/24                       # the VIP
    }
    track_script {
        CheckK8sMaster
    }
}

Start the service on each master in turn

systemctl enable keepalived && systemctl restart keepalived
systemctl status keepalived

After adjusting the per-node IPs and starting keepalived, two of the three masters should show BACKUP STATE
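To see which master currently holds the VIP, look for 192.168.11.222 among the interface addresses. A sketch against a canned line of `ip addr` output (the sample string is hypothetical; on a real master, pipe `ip -4 addr show eno16777736` in instead):

```shell
# Hypothetical line as `ip addr` would print it on the node holding the VIP
sample='inet 192.168.11.222/24 scope global secondary eno16777736'
if printf '%s\n' "$sample" | grep -q '192.168.11.222'; then
  echo "this node holds the VIP"
else
  echo "backup node"
fi
```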

Create an install directory and copy all the prepared files into it; the rest of the installation works from that directory

Install etcd with HTTPS (master nodes only)

  • On k8s1:
export NODE_NAME=k8s1
export NODE_IP=192.168.11.31
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380
  • On k8s2:
export NODE_NAME=k8s2
export NODE_IP=192.168.11.32
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380
  • On k8s3:
export NODE_NAME=k8s3
export NODE_IP=192.168.11.33
export NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
export ETCD_NODES=k8s1=https://192.168.11.31:2380,k8s2=https://192.168.11.32:2380,k8s3=https://192.168.11.33:2380
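The three export blocks differ only in NODE_NAME and NODE_IP; the shared ETCD_NODES string can also be built from the name/IP pairs instead of being typed out by hand. A sketch (not part of the original procedure):

```shell
# Build the --initial-cluster string from name:IP pairs
ETCD_NODES=""
for pair in k8s1:192.168.11.31 k8s2:192.168.11.32 k8s3:192.168.11.33; do
  name=${pair%%:*}   # text before the first colon
  ip=${pair#*:}      # text after the first colon
  ETCD_NODES="${ETCD_NODES}${name}=https://${ip}:2380,"
done
export ETCD_NODES="${ETCD_NODES%,}"   # strip the trailing comma
echo "$ETCD_NODES"
```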

etcd Certificates

  • Create the CA certificate and key

Install cfssl, CloudFlare's PKI toolkit, to generate the Certificate Authority (CA) certificate and key files.
If you prefer not to install cfssl on the deployment host, run this step on any other machine and copy the generated certificates to the etcd hosts afterwards.

chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl

chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
  • Generate the TLS key and certificate for etcd

ca-config.json: can define multiple profiles with different expiry times, usage scenarios, and other parameters; a profile is selected later when signing certificates;
signing: the certificate can be used to sign other certificates; the generated ca.pem contains CA=TRUE;
server auth: clients can use this CA to verify certificates presented by servers;
client auth: servers can use this CA to verify certificates presented by clients;

To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, and between etcd members, must be TLS-encrypted; this section creates the certificates and private keys etcd needs.
Create the CA configuration file:

cat >  ca-config.json <<EOF
{
"signing": {
"default": {
  "expiry": "8760h"
},
"profiles": {
  "kubernetes": {
    "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
    ],
    "expiry": "8760h"
  }
}
}
}
EOF
cat >  ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
  "C": "CN",
  "ST": "iie",
  "L": "iie",
  "O": "k8s",
  "OU": "System"
}
]
}
EOF

"CN":Common Name介袜,kube-apiserver 從證書中提取該字段作為請(qǐng)求的用戶名 (User Name)甫何;瀏覽器使用該字段驗(yàn)證網(wǎng)站是否合法;
"O":Organization遇伞,kube-apiserver 從證書中提取該字段作為請(qǐng)求用戶所屬的組 (Group)辙喂;

  • Generate the CA certificate and private key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
  • Create the etcd certificate signing request:
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.11.31",
    "192.168.11.32",
    "192.168.11.33"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "iie",
      "L": "iie",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

hosts: the etcd node IPs authorized to use this certificate. Either list every node IP here, or issue each machine its own certificate for its IP; this setup lists all the IPs in a single certificate.

  • Generate the etcd certificate and private key:
cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/

Next, copy the files to the other master nodes:

# On the other masters, clear the directory first, then copy from k8s1
rm -rf /etc/etcd/ssl/*
scp -r /etc/etcd/ssl/ 192.168.11.32:/etc/etcd/
scp -r /etc/etcd/ssl/ 192.168.11.33:/etc/etcd/

Run on all three masters:

cd /root/install
tar -xvf etcd-v3.1.10-linux-amd64.tar.gz
mv etcd-v3.1.10-linux-amd64/etcd* /usr/local/bin

Create the etcd systemd unit file

mkdir -p /var/lib/etcd
cd /var/lib/etcd

cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • etcd's working directory and data directory are both /var/lib/etcd; the directory must be created before the service starts;
  • for secure communication, the unit specifies etcd's own key pair (cert-file and key-file), the peer key pair plus CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate used to verify clients (trusted-ca-file);
  • when --initial-cluster-state is new, the --name value must appear in the --initial-cluster list;
  • Start the etcd service
mv etcd.service /etc/systemd/system/
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

# If startup fails, inspect:
systemctl status etcd.service
journalctl -xe
  • Verify the service
 etcdctl \
  --endpoints=https://${NODE_IP}:2379  \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health
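cluster-health only needs one healthy endpoint to answer, but it is worth querying every member in turn. A sketch that loops over NODE_IPS; the etcdctl call is commented out so the loop itself runs anywhere, uncomment it on a real master:

```shell
NODE_IPS="192.168.11.31 192.168.11.32 192.168.11.33"
for ip in $NODE_IPS; do
  echo "--- checking https://${ip}:2379 ---"
  # etcdctl --endpoints=https://${ip}:2379 \
  #   --ca-file=/etc/etcd/ssl/ca.pem \
  #   --cert-file=/etc/etcd/ssl/etcd.pem \
  #   --key-file=/etc/etcd/ssl/etcd-key.pem \
  #   cluster-health
done
```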

Install Docker

# Upload the docker directory, then remove any previously installed Docker first
cd /root/install
yum -y remove docker docker-common

cd docker
yum -y localinstall docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
yum -y localinstall docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm

# Start docker
systemctl start docker && systemctl enable docker
# Check status
docker ps

# Configure the registry mirror and insecure private registries: edit /etc/docker/daemon.json and add the entries below (the recommended way to change Docker settings)
{
"max-concurrent-downloads": 3,
"max-concurrent-uploads": 5,
"registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
"insecure-registries": ["192.168.11.200","192.168.11.218","192.168.11.188","192.168.11.230","192.168.11.112"]
}
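A syntax error in daemon.json prevents the Docker daemon from starting, so it is worth validating the JSON before restarting. A sketch that checks a scratch copy with python3 (point it at /etc/docker/daemon.json on a real host):

```shell
# Scratch copy of the settings above; use /etc/docker/daemon.json on a real host
cat > /tmp/daemon.json <<'EOF'
{
  "max-concurrent-downloads": 3,
  "max-concurrent-uploads": 5,
  "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com/"],
  "insecure-registries": ["192.168.11.200"]
}
EOF
python3 -c 'import json; json.load(open("/tmp/daemon.json")); print("daemon.json OK")'
```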

Install kubelet, kubectl, kubeadm, kubecni

Switch to the k8s directory; run this on masters and worker nodes alike. The workers need kubeadm installed so they can later join the cluster with kubeadm join.

cd ../k8s
yum -y install *.rpm
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
systemctl enable docker && systemctl restart docker
systemctl daemon-reload && systemctl restart kubelet

# Switch to the docker_images directory
cd ../docker_images
for i in `ls`;do docker load -i $i;done

Create a config.yaml file on every master node

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.11.31:2379
  - https://192.168.11.32:2379
  - https://192.168.11.33:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.9.0
api:
  advertiseAddress: "192.168.11.222"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- "k8s1"
- "k8s2"
- "k8s3"
- 192.168.11.31
- 192.168.11.32
- 192.168.11.33
- 192.168.11.222
featureGates:
  CoreDNS: true
EOF

On the master currently holding the VIP, run:

kubeadm init --config config.yaml
# Record the join command printed by init
kubeadm join --token b99a00.a144ef80536d4344 192.168.11.222:6443 --discovery-token-ca-cert-hash sha256:cf68966ae386e10c0233e008f21597d1d54a60ea9202d0c360a4b19fa8443328
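If the printed join command is lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA at /etc/kubernetes/pki/ca.crt with the openssl pipeline below. This sketch first generates a throwaway self-signed certificate so it can run outside the cluster; on a real master, replace /tmp/ca.crt with /etc/kubernetes/pki/ca.crt:

```shell
# Throwaway stand-in for /etc/kubernetes/pki/ca.crt (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null
# SHA-256 of the DER-encoded public key, as kubeadm expects it
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex
```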

kubectl apply -f kubeadm-kuberouter.yaml
kubectl get pod --all-namespaces

On the other masters, run

systemctl enable kubelet && systemctl start kubelet

Then copy the certificates from the master holding the VIP to the other masters

# Create the target directory on the other masters first: mkdir -p /etc/kubernetes/pki/
scp /etc/kubernetes/pki/* 192.168.11.32:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/* 192.168.11.33:/etc/kubernetes/pki/

# Use systemctl status kubelet to confirm the state stays activating (auto-restart); kubelet is not running yet, but it will be once init completes
kubeadm init --config config.yaml # all three masters should now be up

# Allow pods to be scheduled on the masters (optional)
kubectl taint nodes --all node-role.kubernetes.io/master-
# Forbid pods on a master (optional)
kubectl taint nodes centos-master-1 node-role.kubernetes.io/master=true:NoSchedule

# Verify cluster status
kubectl get nodes
kubectl get cs
kubectl get pods --all-namespaces

On the worker nodes, run:

kubeadm join --token b99a00.a144ef80536d4344 192.168.11.222:6443 --discovery-token-ca-cert-hash sha256:cf68966ae386e10c0233e008f21597d1d54a60ea9202d0c360a4b19fa8443328

Add username/password authentication to the cluster

The default authentication methods are kubeconfig and token; here, basic auth is used for apiserver authentication instead

# Edit /etc/kubernetes/pki/basic_auth_file and store the credentials in it
#password,user,uid
admin,admin,2

Edit /etc/kubernetes/manifests/kube-apiserver.yaml to add basic auth to kube-apiserver

- --basic-auth-file=/etc/kubernetes/pki/basic_auth_file

Restart kubelet; without a restart you will get: The connection to the server 192.168.11.222:6443 was refused - did you specify the right host or port?

systemctl daemon-reload && systemctl restart kubelet.service
  • Grant admin privileges

cluster-admin holds full permissions by default; binding admin to cluster-admin gives the admin user those permissions.

kubectl get clusterrole/cluster-admin -o yaml
kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml

k8s notes and pitfalls:

  1. The connection to the server 192.168.11.160:6443 was refused - did you specify the right host or port?

    # check whether it shows Active: inactive (dead)
    systemctl status kubelet
    systemctl daemon-reload && systemctl restart kubelet
    
  2. Viewing logs

    kubectl logs kube-apiserver --namespace=kube-system
    
  3. Common commands

    kubectl get componentstatuses                 # check control-plane component status
    kubectl get svc -n kube-system                # list services in kube-system
    kubectl cluster-info                          # show cluster info
    kubectl describe --namespace kube-system service kubernetes-dashboard   # detailed service info
    kubectl apply -f kube-apiserver.yaml          # update the kube-apiserver container
    kubectl delete -f /root/k8s/k8s_images/kubernetes-dashboard.yaml        # delete an application
    kubectl delete service example-server         # delete a service
    systemctl start kube-apiserver.service        # start the service
    kubectl get deployment --all-namespaces       # list running deployments
    kubectl get pod -o wide --all-namespaces      # see which services run in which pods
    kubectl get pod -o wide -n kube-system        # see which node an application runs on
    kubectl describe pod --namespace=kube-system  # pod activity details
    kubectl describe deploy kubernetes-dashboard -n kube-system
    kubectl get deploy kubernetes-dashboard -n kube-system -o yaml
    kubectl get service kubernetes-dashboard -n kube-system   # inspect a service
    kubectl delete -f kubernetes-dashboard.yaml   # delete an application
    kubectl get events                            # list events
    kubectl get rc; kubectl get svc
    kubectl get namespace                         # list namespaces
    find -type f -print -exec grep hello {} \;
    
    kubeadm reset
    netstat -lutpn | grep 6443
    

Harbor Installation and Configuration

Installing Harbor requires docker-compose, which can be installed via pip

Install pip and docker-compose

# Install via pip; install pip itself first
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py

# Install docker-compose
pip install -U docker-compose

On CentOS 7, pip install docker-compose may fail with:
Cannot uninstall 'requests'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
Workaround: pip install docker-compose --ignore-installed requests

Install Harbor

Download Harbor v1.6.0, edit harbor.cfg so that hostname = 192.168.11.200, then run ./install.sh in the harbor directory:

# Check the status of the harbor components
docker-compose ps
# Start/stop/restart harbor
docker-compose start/stop/restart

K8s Application Log Collection: EFK (Elasticsearch/Filebeat/Kibana)

System Environment

  • 3 VMs: one master and two nodes
  • master
    • 192.168.11.218 k8s-master1-test
  • node
    • 192.168.11.219 k8s-node1-test
    • 192.168.11.221 k8s-node2-test
  • Other
    • JDK 1.8 on all 3 machines, since Elasticsearch is written in Java
    • Elasticsearch on all 3 machines
    • 192.168.11.218 as the master node
    • 192.168.11.219 and 192.168.11.221 as data nodes
    • Kibana on the master node, 192.168.11.218
  • ELK versions:
    • Elasticsearch-6.0.0
    • kibana-6.0.0
    • filebeat-5.4.0

Install Elasticsearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
rpm -ivh elasticsearch-6.0.0.rpm

### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
sudo systemctl start elasticsearch.service

Configure Elasticsearch

Elasticsearch has two configuration files, in the following locations:

  • /etc/elasticsearch/elasticsearch.yml
  • /etc/sysconfig/elasticsearch

elasticsearch.yml holds cluster and node settings; the elasticsearch sysconfig file holds settings for the service itself, such as configuration file paths and Java paths

Edit the configuration file on the master, 192.168.11.218:

cluster.name: es-master-node  # cluster name
node.name: k8s-master1-test  # this node's name
node.master: true  # this node is master-eligible
node.data: false  # this node does not hold data
network.host: 0.0.0.0  # listen on all IPs; restrict to a safe IP in production
http.port: 9200  # ES HTTP port
discovery.zen.ping.unicast.hosts: ["192.168.11.218", "192.168.11.219", "192.168.11.221"] # discovery seed hosts

Edit the configuration on the two data nodes similarly:

cluster.name: es-master-node  # cluster name (must match on all nodes)
node.name: k8s-node1-test  # this node's name (k8s-node2-test on the other data node)
node.master: false  # this node is not master-eligible
node.data: true  # this node holds data
network.host: 0.0.0.0  # listen on all IPs; restrict to a safe IP in production
http.port: 9200  # ES HTTP port
discovery.zen.ping.unicast.hosts: ["192.168.11.218", "192.168.11.219", "192.168.11.221"] # discovery seed hosts

Start after installation:

systemctl start elasticsearch.service

Check cluster health with curl:

curl 'localhost:9200/_cluster/health?pretty'
# Response
{
  "cluster_name" : "es-master-node",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 1,
  "active_shards" : 1,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
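The health JSON is easy to post-process. A sketch that pulls out just the status field with python3; the canned response below is abbreviated, on a live cluster pipe the curl output in instead:

```shell
# Abbreviated stand-in for the curl response above
resp='{"cluster_name":"es-master-node","status":"yellow","number_of_nodes":2}'
printf '%s' "$resp" | python3 -c 'import json,sys; print(json.load(sys.stdin)["status"])'
```

green means all shards are allocated, yellow means some replicas are unassigned (expected while only one data node is up, as in the response above), and red means a primary shard is missing.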

View detailed cluster information:

curl '192.168.11.218:9200/_cluster/state?pretty'

Install Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
rpm -ivh kibana-6.0.0-x86_64.rpm

Configure Kibana

Edit /etc/kibana/kibana.yml and add the following

server.port: 5601  # Kibana port
server.host: 192.168.11.218  # listen address
elasticsearch.url: "http://192.168.11.218:9200"  # ES server IP; for a cluster, use the master node's IP
logging.dest: /var/log/kibana.log  # Kibana log file path; otherwise logs go to messages by default

Start:

systemctl start kibana

Then open http://192.168.11.218:5601/ in a browser.

  • Handy commands:
# List ES indices:
curl '192.168.11.218:9200/_cat/indices?v'
# Get details of a specific index:
curl -XGET '192.168.11.218:9200/system-syslog-2018.03?pretty'
# Delete a specific index:
curl -XDELETE 'localhost:9200/logcollection-test'

Log Collection

制作 filebeat 鏡像:

First, prepare filebeat-5.4.0-linux-x86_64.tar.gz

  • docker-entrypoint.sh
#!/bin/bash
config=/etc/filebeat/filebeat.yml
env
echo 'Filebeat init process done. Ready for start up.'
echo "Using the following configuration:"
cat /etc/filebeat/filebeat.yml
exec "$@"
  • Dockerfile
FROM centos
MAINTAINER YangLiangWei <ylw@fjhb.cn>

# Install Filebeat
WORKDIR /usr/local
COPY filebeat-5.4.0-linux-x86_64.tar.gz  /usr/local
RUN cd /usr/local && \
    tar xvf filebeat-5.4.0-linux-x86_64.tar.gz && \
    rm -f filebeat-5.4.0-linux-x86_64.tar.gz && \
    ln -s /usr/local/filebeat-5.4.0-linux-x86_64 /usr/local/filebeat && \
    chmod +x /usr/local/filebeat/filebeat && \
    mkdir -p /etc/filebeat

ADD ./docker-entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/docker-entrypoint.sh

ENTRYPOINT ["docker-entrypoint.sh"]

CMD ["/usr/local/filebeat/filebeat","-e","-c","/etc/filebeat/filebeat.yml"]
  • Build the Docker image
docker build -t filebeat:v5.4.0 .
docker tag filebeat:v5.4.0 192.168.11.200/hlg_web/filebeat:v5.4.0

Configure a k8s Secret for private-registry authentication

# The .dockerconfigjson value can be obtained, after logging in to the registry, with: cat /root/.docker/config.json | base64 -w 0
kind: Secret
apiVersion: v1
metadata:
  name: regsecret
type: kubernetes.io/dockerconfigjson
data:
  ".dockerconfigjson": ewoJImF1dGhzIjogewoJCSIxOTIuMTY4LjExLjIxOCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0KfQ==

With this in place, any pod that needs to pull a private image from the registry only has to add an imagePullSecrets field:

      containers:
      - image: 192.168.11.200/hlg_web/datalower:1.1
        name: app
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-logs
          mountPath: /app/log
      - image: 192.168.11.200/hlg_web/filebeat:v5.4.0
        name: filebeat
        volumeMounts:
        - name: app-logs
          mountPath: /log
        - name: filebeat-config
          mountPath: /etc/filebeat/
      volumes:
      - name: app-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
      imagePullSecrets:
      - name: regsecret

Deployment Configuration

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logcollection-test
  namespace: default
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: logcollection-test
    spec:
      containers:
      - image: 192.168.11.200/hlg_web/datalower:1.1
        name: app
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app-logs
          mountPath: /app/log
      - image: 192.168.11.200/hlg_web/filebeat:v5.4.0
        name: filebeat
        volumeMounts:
        - name: app-logs
          mountPath: /log
        - name: filebeat-config
          mountPath: /etc/filebeat/
      volumes:
      - name: app-logs
        emptyDir: {}
      - name: filebeat-config
        configMap:
          name: filebeat-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.prospectors:
    - input_type: log
      paths:
        - "/log/*"
    output.elasticsearch:
      hosts: ["192.168.11.218:9200"]
      index: "logcollection-test"
