ClickHouse on K8s

1. Install chi-operator

ClickHouse Operator creates, configures and manages ClickHouse clusters running on Kubernetes.

kubectl apply -f https://raw.githubusercontent.com/Altinity/clickhouse-operator/master/deploy/operator/clickhouse-operator-install.yaml

1.1 Verify chi-operator

kubectl -n kube-system get pod | grep clickhouse-operator

If the pod's status is Running, chi-operator has been deployed successfully. You can check its logs with the following command.

kubectl -n kube-system logs -f clickhouse-operator-5b45484748-kpg6t clickhouse-operator 
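
You can also confirm that the operator's CustomResourceDefinition was registered; the output should include clickhouseinstallations.clickhouse.altinity.com:

kubectl get crd | grep clickhouse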

2. Deploy the Cluster

2.1 Deployment Architecture

As shown in the diagram below, we will deploy a cluster with 2 shards and 2 replicas, i.e. four pods in total. Each pod stores its data on a local PV, so four machines are required.


[Figure: deployment topology, 2 shards × 2 replicas across four nodes]

2.2 Deployment Manifests

The manifest below contains two parts:

  • The local PV YAML; note that it pins the volumes to four specific machines.
  • The ClickHouseInstallation (CHI) YAML.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: clickhouse-local-volume
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-clickhouse-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: clickhouse-local-volume
  hostPath:
    path: /mnt/data/clickhouse
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "clickhouse1"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-clickhouse-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: clickhouse-local-volume
  hostPath:
    path: /mnt/data/clickhouse
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "clickhouse2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-clickhouse-2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: clickhouse-local-volume
  hostPath:
    path: /mnt/data/clickhouse
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "clickhouse3"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-clickhouse-3
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: clickhouse-local-volume
  hostPath:
    path: /mnt/data/clickhouse
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - "clickhouse4"
---
apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "aibee"
spec:
  defaults:
    templates:
      serviceTemplate: service-template
      podTemplate: pod-template
      dataVolumeClaimTemplate: volume-claim
  configuration:
    settings:
      compression/case/method: zstd
      disable_internal_dns_cache: 1
      timezone: Asia/Shanghai
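    # NOTE (assumption): a ZooKeeper ensemble must already be running and
    # reachable through a Service named zk-svc in this namespace; this
    # manifest does not deploy it.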
    zookeeper:
      nodes:
        - host: zk-svc
          port: 2181
      session_timeout_ms: 30000
      operation_timeout_ms: 10000
    clusters:
      - name: "clickhouse"
        layout:
          shardsCount: 2
          replicasCount: 2
  templates:
    serviceTemplates:
      - name: service-template
        spec:
          ports:
            - name: http
              port: 8123
            - name: tcp
              port: 9000
          type: LoadBalancer

    podTemplates:
      - name: pod-template
        spec:
          containers:
            - name: clickhouse
              imagePullPolicy: Always
              image: yandex/clickhouse-server:latest
              volumeMounts:
                # Mount the data directory
                - name: volume-claim
                  mountPath: /var/lib/clickhouse
                # Mount the log directory (same PVC, different mount path)
                - name: volume-claim
                  mountPath: /var/log/clickhouse-server
              resources:
                # CPU and memory sizing
                limits:
                  memory: "1Gi"
                  cpu: "1"
                requests:
                  memory: "1Gi"
                  cpu: "1"

    volumeClaimTemplates:
      - name: volume-claim
        reclaimPolicy: Retain
        spec:
          storageClassName: "clickhouse-local-volume"
          accessModes:
            - ReadWriteOnce
          resources:
            # Requested PV size
            requests:
              storage: 100Gi
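
Assuming the combined manifest above is saved as clickhouse-cluster.yaml (the filename is arbitrary), apply it and watch the resources come up; kubectl get chi uses the short name the operator's CRD registers:

kubectl apply -f clickhouse-cluster.yaml
kubectl get chi aibee
kubectl get pod -o wide | grep chi-aibee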

Note: the reclaimPolicy of volumeClaimTemplates must be Retain, so that data survives even if the cluster is deleted. Otherwise, when the cluster is deleted, the operator drops every table whose engine matches 'Replicated%'. This bit me for quite a while. The relevant operator source:

// hostGetDropTables returns set of 'DROP TABLE ...' SQLs
func (s *Schemer) hostGetDropTables(host *chop.ChiHost) ([]string, []string, error) {
   // There isn't a separate query for deleting views. To delete a view, use DROP TABLE
   // See https://clickhouse.yandex/docs/en/query_language/create/
   sql := heredoc.Doc(`
      SELECT
         distinct name, 
         concat('DROP TABLE IF EXISTS "', database, '"."', name, '"') AS drop_db_query
      FROM system.tables
      WHERE engine like 'Replicated%'`,
   )

   names, sqlStatements, _ := s.getObjectListFromClickHouse([]string{CreatePodFQDN(host)}, sql)
   return names, sqlStatements, nil
}

Pod placement after a successful deployment:

chi-aibee-clickhouse-0-0-0       1/1     Running        0          20m     192.168.35.196    clickhouse3   <none>           <none>
chi-aibee-clickhouse-0-1-0       1/1     Running        0          20m     192.168.132.103   clickhouse2   <none>           <none>
chi-aibee-clickhouse-1-0-0       1/1     Running        0          20m     192.168.13.41     clickhouse4   <none>           <none>
chi-aibee-clickhouse-1-1-0       1/1     Running        0          19m     192.168.133.164   clickhouse1   <none>           <none>

2.3 Check the Service Address

kubectl get svc clickhouse-aibee
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
clickhouse-aibee   LoadBalancer   10.100.185.34   <pending>     8123:30745/TCP,9000:32346/TCP   22m

2.4 Connect to the Cluster

Use the ClusterIP of the Service above. Default credentials: clickhouse_operator / clickhouse_operator_password.

clickhouse-client -h 10.100.185.34 -u clickhouse_operator --password clickhouse_operator_password 
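
Because the LoadBalancer EXTERNAL-IP is still <pending> in this environment, kubectl port-forward is a convenient alternative for ad-hoc access (a sketch; run it from a machine with kubeconfig access):

kubectl port-forward svc/clickhouse-aibee 8123:8123 9000:9000
clickhouse-client -h 127.0.0.1 --port 9000 -u clickhouse_operator --password clickhouse_operator_password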

For more customization options, see https://github.com/Altinity/clickhouse-operator/blob/master/docs/custom_resource_explained.md

2.5 Built-in Macros

The operator provides a set of macros:

  1. {installation} -- ClickHouse Installation name
  2. {cluster} -- primary cluster name
  3. {replica} -- replica name in the cluster, maps to pod service name
  4. {shard} -- shard id

ClickHouse also supports the internal macros {database} and {table}, which map to the current database and table, respectively.

The snippet below shows the macros automatically generated for this cluster; we can reference them when creating tables.

<yandex>
    <macros>
        <installation>aibee</installation>
        <all-sharded-shard>0</all-sharded-shard>
        <cluster>clickhouse</cluster>
        <shard>0</shard>
        <replica>chi-aibee-clickhouse-0-0</replica>
    </macros>
</yandex>
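
To verify the substitutions on a given replica, you can query the built-in system.macros table:

SELECT macro, substitution FROM system.macros;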

3 Create Tables

CREATE TABLE events_local on cluster '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) engine=ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}', event_date, (event_type, article_id), 8192);
CREATE TABLE events on cluster '{cluster}' AS events_local
ENGINE = Distributed('{cluster}', default, events_local, rand());
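
The ReplicatedMergeTree parameters above use the legacy engine syntax (partition date column, primary key, and index granularity packed into the engine call). Newer ClickHouse versions prefer explicit clauses; a sketch of the equivalent DDL:

CREATE TABLE events_local ON CLUSTER '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) ENGINE = ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_type, article_id)
SETTINGS index_granularity = 8192;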

3.1 Insert Data

INSERT INTO events SELECT today(), rand()%3, number, 'my title' FROM numbers(100);

3.2 Query the Data

SELECT count() FROM events;
SELECT count() FROM events_local;

The Distributed table (events) returns all 100 rows; events_local returns only the rows stored on the shard you are connected to (roughly half of them, given rand() sharding).

4 Cluster Monitoring

chi-operator already bundles a metrics exporter. Use the following command to find the monitoring service:

kubectl get service clickhouse-operator-metrics -n kube-system
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
clickhouse-operator-metrics   ClusterIP   10.102.111.74   <none>        8888/TCP   48d

Prometheus can scrape metrics from this address:
http://<service/clickhouse-operator-metrics>:8888/metrics
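
A minimal Prometheus scrape job for this endpoint could look like the following (assuming Prometheus runs inside the cluster and can resolve the Service DNS name):

scrape_configs:
  - job_name: clickhouse-operator
    static_configs:
      - targets: ['clickhouse-operator-metrics.kube-system.svc:8888']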

Grafana Dashboard:

https://github.com/Altinity/clickhouse-operator/blob/master/grafana-dashboard/Altinity_ClickHouse_Operator_dashboard.json

For more details, see https://github.com/Altinity/clickhouse-operator/blob/master/docs/prometheus_setup.md

Appendix:
https://github.com/Altinity/clickhouse-operator
