Setting up a Kubernetes + Calico cluster with kubeadm

I recently started building an internal container cloud platform on Kubernetes and hit a number of problems while standing up the cluster. There are plenty of setup guides online, but a k8s cluster only counts as ready once all of the following network paths work (a quick check sketch follows the list):

node <-> pod              # host and pod IPs can ping each other
pod  <-> pod              # pods can ping each other, same-host and cross-host
pod  -> svc cluster ip    # pods can reach a Service's cluster IP
node -> svc cluster ip    # nodes can reach a Service's cluster IP
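
A quick way to exercise all four paths once the cluster is up. This is only a sketch: <podname> and <other-pod-ip> are placeholders taken from `kubectl get pods -o wide`, and the exec lines assume the pod image ships ping/wget.

# Placeholders: take <podname>/<other-pod-ip> from `kubectl get pods -o wide`.
POD_IP=$(kubectl get pod <podname> -o jsonpath='{.status.podIP}')
SVC_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')
ping -c 3 "$POD_IP"                                  # node <-> pod
kubectl exec <podname> -- ping -c 3 <other-pod-ip>   # pod <-> pod, cross-host
# For the cluster IP paths, any HTTP(S) response at all proves reachability;
# a timeout means the path is broken.
kubectl exec <podname> -- wget -qO- --no-check-certificate "https://$SVC_IP/"  # pod -> svc
curl -k --connect-timeout 3 "https://$SVC_IP/"       # node -> svc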

Kubernetes cluster architecture diagram (image omitted)

Versions and machine details:

  • kubernetes 1.7.2
  • docker 1.12
  • calico 2.3.0
  • CentOS 7 x86_64, three nodes

10.12.0.18 -> k8s master
10.12.0.19 -> k8s node1
10.12.0.22 -> k8s node2, etcd node


Node initialization

  • Switch CentOS-Base.repo to the Aliyun yum mirror
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bk; 
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
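
After swapping the repo file, rebuilding the yum metadata cache avoids picking up stale data in the installs below:

yum clean all && yum makecache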

  • Configure bridge netfilter sysctls

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
sudo sysctl --system
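
If `sysctl --system` complains that the net.bridge.* keys do not exist, the bridge netfilter module is probably not loaded yet; on kernels that build it as a module, loading it first fixes this:

modprobe br_netfilter                        # makes the net.bridge.* sysctls appear
sysctl net.bridge.bridge-nf-call-iptables    # should now print "... = 1"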
  • Disable SELinux (edit the config file rather than running setenforce 0, which would not persist across the reboot below)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
  • Stop and disable firewalld
sudo systemctl disable firewalld.service
sudo systemctl stop firewalld.service
  • Disable iptables
sudo yum install -y iptables-services;iptables -F;   # optional
sudo systemctl disable iptables.service
sudo systemctl stop iptables.service
  • Clean up any previous k8s environment (if present)
systemctl daemon-reload
systemctl stop kubelet;systemctl stop kube-proxy
rm -rf /etc/systemd/system/kube-proxy.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
docker ps -aq |xargs docker rm -f 
rm -rf /etc/kubernetes/ssl/*  /var/lib/kube*

systemctl stop etcd
rm -rf /etc/etcd/ssl /var/lib/etcd /etc/systemd/system/etcd.service
systemctl daemon-reload
  • Install supporting packages
sudo yum install -y vim wget curl screen git etcd ebtables flannel
sudo yum install -y socat net-tools.x86_64 iperf bridge-utils.x86_64
  • Install docker (1.12 is currently the default version)
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum install -y libdevmapper* docker
  • Install Kubernetes
## kubernetes.repo using the Aliyun mirror (suitable inside China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

## Or kubernetes.repo using the upstream Google source (for networks that can reach Google)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

## Install k8s 1.7.2 (kubernetes-cni is installed as a dependency; its version is not pinned here)
export K8SVERSION=1.7.2
sudo yum install -y "kubectl-${K8SVERSION}-0.x86_64" "kubelet-${K8SVERSION}-0.x86_64" "kubeadm-${K8SVERSION}-0.x86_64"

  • Upgrade the kernel to the latest (4.12.5; optional)
uname -sr
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
yum --enablerepo=elrepo-kernel install -y kernel-ml

# List GRUB menu entries, then make the newly installed kernel (entry 0) the default
awk -F\' '$1=="menuentry " {print i++ " @ " $2}' /etc/grub2.cfg
grub2-set-default 0
  • Reboot the machine (this step is required)
reboot

Continue with the following steps after the reboot

  • Configure the docker daemon and start docker
cat <<EOF >/etc/sysconfig/docker
OPTIONS="-H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 --storage-driver=overlay --exec-opt native.cgroupdriver=cgroupfs --graph=/localdisk/docker/graph --insecure-registry=gcr.io --insecure-registry=quay.io  --insecure-registry=registry.cn-hangzhou.aliyuncs.com --registry-mirror=http://138f94c6.m.daocloud.io"
EOF

systemctl start docker
systemctl status docker -l
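
Note that docker is configured with the cgroupfs cgroup driver above. If kubelet later refuses to start with a cgroup-driver mismatch error, the kubelet RPM's systemd drop-in is likely pinned to systemd; the path and flag below are assumptions based on the stock kubeadm packaging of this era:

# Align kubelet's cgroup driver with docker's cgroupfs setting
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload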
  • Pull the images required by k8s 1.7.2 (a pull-loop sketch follows the list)
quay.io/calico/node:v1.3.0
quay.io/calico/cni:v1.9.1
quay.io/calico/kube-policy-controller:v0.6.0

gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/kube-proxy-amd64:v1.7.2
gcr.io/google_containers/kube-apiserver-amd64:v1.7.2
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2
gcr.io/google_containers/kube-scheduler-amd64:v1.7.2
gcr.io/google_containers/etcd-amd64:3.0.17

gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
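
A simple pre-pull loop over that list. It assumes the registry mirrors configured in the docker daemon above make gcr.io and quay.io reachable; otherwise, pull re-tagged copies from a mirror registry and `docker tag` them back.

for img in \
    quay.io/calico/node:v1.3.0 \
    quay.io/calico/cni:v1.9.1 \
    quay.io/calico/kube-policy-controller:v0.6.0 \
    gcr.io/google_containers/pause-amd64:3.0 \
    gcr.io/google_containers/kube-proxy-amd64:v1.7.2 \
    gcr.io/google_containers/kube-apiserver-amd64:v1.7.2 \
    gcr.io/google_containers/kube-controller-manager-amd64:v1.7.2 \
    gcr.io/google_containers/kube-scheduler-amd64:v1.7.2 \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 \
    gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 \
    gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4; do
  docker pull "$img"
done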
  • Start etcd on the non-master node 10.12.0.22 (a single instance to keep things simple; an etcd cluster would also work)
screen etcd -name="EtcdServer" -initial-advertise-peer-urls=http://10.12.0.22:2380 -listen-peer-urls=http://0.0.0.0:2380 -listen-client-urls=http://10.12.0.22:2379 -advertise-client-urls http://10.12.0.22:2379 -data-dir /var/lib/etcd/default.etcd
  • On every node, check that etcd is reachable. It must be; if it is not, verify the firewall really is off
etcdctl --endpoint=http://10.12.0.22:2379 member list
etcdctl --endpoint=http://10.12.0.22:2379 cluster-health
  • Bootstrap the cluster with kubeadm on the k8s master node.
    The pod IP range is set to 10.68.0.0/16; the cluster IP range keeps the default 10.96.0.0/16.
    Run the following on the master node
cat << EOF >kubeadm_config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.12.0.18
  bindPort: 6443
etcd:
  endpoints:
  - http://10.12.0.22:2379
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/16
  podSubnet: 10.68.0.0/16
kubernetesVersion: v1.7.2
#token: <string>
#tokenTTL: 0
EOF

kubeadm init --config kubeadm_config.yaml
  • A few dozen seconds after kubeadm init, the api-server, scheduler, and controller-manager containers come up on the master; check the master with the commands below.
    Run the following on the master node
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get cs -o wide --show-labels
kubectl get nodes -o wide --show-labels
  • Join the nodes. This requires the token printed by kubeadm init; run the following on each node
systemctl start docker
systemctl start kubelet
kubeadm join --token *{6}.*{16} 10.12.0.18:6443 --skip-preflight-checks
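
The *{6}.*{16} above is a placeholder for the real token (6 characters, a dot, then 16 characters). If it was lost, it can usually be recovered on the master, assuming this kubeadm version ships the token subcommand:

kubeadm token list   # run on the master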
  • Watch the nodes join from the master. Because the pod network has not been created yet, every master and node stays NotReady and kube-dns stays Pending
kubectl get nodes -o wide
watch kubectl get all --all-namespaces -o wide
  • calico.yaml was modified as follows:
    the etcd-creation section was removed, in favor of the external etcd;
    CALICO_IPV4POOL_CIDR was changed to 10.68.0.0/16.
    The resulting calico.yaml:
# Calico Version v2.3.0
# http://docs.projectcalico.org/v2.3/releases#v2.3.0
# This manifest includes the following component versions:
#   calico/node:v1.3.0
#   calico/cni:v1.9.1
#   calico/kube-policy-controller:v0.6.0

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # The location of your etcd cluster.  This uses the Service clusterIP defined below.
  etcd_endpoints: "http://10.12.0.22:2379"
  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "cniVersion": "0.1.0",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
             "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
             "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
        }
    }
---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
        # reserves resources for critical add-on pods so that they can be rescheduled after
        # a failure.  This annotation works in tandem with the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-cni-plugin
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.3.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Enable BGP.  Disable to enforce policy only.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.68.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.9.1
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy-controller
      annotations:
        # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
        # reserves resources for critical add-on pods so that they can be rescheduled after
        # a failure.  This annotation works in tandem with the toleration below.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
      # This, along with the annotation above marks this pod as a critical add-on.
      - key: CriticalAddonsOnly
        operator: Exists
      serviceAccountName: calico-policy-controller
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.6.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-cni-plugin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-cni-plugin
subjects:
- kind: ServiceAccount
  name: calico-cni-plugin
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-cni-plugin
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources:
      - pods
      - nodes
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-cni-plugin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-policy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-policy-controller
subjects:
- kind: ServiceAccount
  name: calico-policy-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
  namespace: kube-system
rules:
  - apiGroups:
    - ""
    - extensions
    resources:
      - pods
      - namespaces
      - networkpolicies
    verbs:
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-policy-controller
  namespace: kube-system
  • Create the Calico cross-host network; run the following on the master node
kubectl apply -f calico.yaml
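
Once the calico-node pods are running, here is a quick way to confirm the overlay from any node; since CALICO_IPV4POOL_IPIP is set to "always", each peer node's pod block should show up as a route over the tunl0 device:

ip route | grep tunl0    # expect routes to the other nodes' 10.68.x.x blocks
ip -d link show tunl0    # the IPIP tunnel device created by calico/node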
  • Watch each node for a pod named calico-node-**** to appear; calico-policy-controller and kube-dns come up as well. All of these pods live in the kube-system namespace
kubectl get all --all-namespaces

NAMESPACE     NAME                                                 READY     STATUS    RESTARTS   AGE
kube-system   po/calico-node-2gqf2                                 2/2       Running   0          19h
kube-system   po/calico-node-fg8gh                                 2/2       Running   0          19h
kube-system   po/calico-node-ksmrn                                 2/2       Running   0          19h
kube-system   po/calico-policy-controller-1727037546-zp4lp         1/1       Running   0          19h
kube-system   po/etcd-izuf6fb3vrfqnwbct6ivgwz                      1/1       Running   0          19h
kube-system   po/kube-apiserver-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h
kube-system   po/kube-controller-manager-izuf6fb3vrfqnwbct6ivgwz   1/1       Running   0          19h
kube-system   po/kube-dns-2425271678-3t4g6                         3/3       Running   0          19h
kube-system   po/kube-proxy-6fg1l                                  1/1       Running   0          19h
kube-system   po/kube-proxy-fdbt2                                  1/1       Running   0          19h
kube-system   po/kube-proxy-lgf3z                                  1/1       Running   0          19h
kube-system   po/kube-scheduler-izuf6fb3vrfqnwbct6ivgwz            1/1       Running   0          19h

NAMESPACE     NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       svc/kubernetes             10.96.0.1       <none>        443/TCP         19h
kube-system   svc/kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   19h


NAMESPACE     NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deploy/calico-policy-controller   1         1         1            1           19h
kube-system   deploy/kube-dns                   1         1         1            1           19h


NAMESPACE     NAME                                     DESIRED   CURRENT   READY     AGE
kube-system   rs/calico-policy-controller-1727037546   1         1         1         19h
kube-system   rs/kube-dns-2425271678                   1         1         1         19h
  • Deploy the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
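
Then check that the dashboard pod comes up (the stock manifest deploys into kube-system):

kubectl get pods -n kube-system | grep dashboard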

  • Deploy heapster
wget https://github.com/kubernetes/heapster/archive/v1.4.0.tar.gz
tar -zxvf v1.4.0.tar.gz
cd heapster-1.4.0/deploy/kube-config/influxdb
kubectl create -f ./
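
The influxdb variant creates heapster, influxdb, and grafana pods; verify with:

kubectl get pods -n kube-system | grep -E 'heapster|influxdb|grafana'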

Other useful commands

  • Force-delete a pod
kubectl delete pod <podname> --namespace=<namespace>  --grace-period=0 --force
  • Reset a node
kubeadm reset 
systemctl stop kubelet;
docker ps -aq | xargs docker rm -fv
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -rf /var/lib/kubelet /etc/kubernetes/ /var/lib/etcd 
systemctl start kubelet;
  • Access the dashboard (run on the master node)
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='^.*'
or
kubectl proxy --port=8011 --address=192.168.61.100 --accept-hosts='^192\.168\.61\.*'

then browse to http://10.12.0.18:8001/ui (the master's address)
  • Access the API with an authentication token
APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure

  • Let the master participate in scheduling; by default the master is excluded from workload scheduling
kubectl taint nodes --all node-role.kubernetes.io/master-
or
kubectl taint nodes --all dedicated-
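
For reference, the before/after listings below were captured by describing the master node:

kubectl describe node izuf6fb3vrfqnwbct6ivgwz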
  • Master node attributes before the taint is removed
Name:           izuf6fb3vrfqnwbct6ivgwz
Role:
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
            node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
            volumes.kubernetes.io/controller-managed-attach-detach=true
  • Master node attributes after the taint is removed
Name:           izuf6fb3vrfqnwbct6ivgwz
Role:
Labels:         beta.kubernetes.io/arch=amd64
            beta.kubernetes.io/os=linux
            kubernetes.io/hostname=izuf6fb3vrfqnwbct6ivgwz
            node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
            volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:         <none>

One more pitfall: the exact same steps produce a working cluster on Aliyun and UCloud, but on Azure cross-host pod traffic over the Calico network does not get through, and I still have not found the cause. (One plausible suspect, untested here: some cloud fabrics drop IPIP/protocol-4 packets, which the CALICO_IPV4POOL_IPIP=always setting relies on.)

A follow-up post will cover building a k8s cluster purely from the command line, as well as a highly available k8s deployment.

