One-click deployment of a MongoDB sharded cluster with Ansible

1. Environment preparation:

MongoDB deployment information:

OS: CentOS 7.7 x86_64, with NTP already configured; the firewall imposes no restrictions by default.

Two mongos nodes, three config servers, and three shard replica sets (each shard has one primary, one secondary, and one arbiter).

Note: because of how the playbook's conditionals are written, do not place the primaries of two different services on the same node.

         Primary       Secondary     Arbiter
mongos   10.1.99.77    10.1.99.78
config   10.1.99.72    10.1.99.74    10.1.99.76
shard1   10.1.99.71    10.1.99.72    10.1.99.77
shard2   10.1.99.73    10.1.99.74    10.1.99.78
shard3   10.1.99.75    10.1.99.76    10.1.99.77

Configuration of the Ansible hosts (inventory) file:
All variables are defined here and must be written into the Ansible hosts file.

[mongo_new]    # per-host variables are set according to each node's roles
10.1.99.71 hostname=hlet-prod-mongo-01 shard1=true primary=true
10.1.99.72 hostname=hlet-prod-mongo-02 shard1=true config_server=true primary=true
10.1.99.73 hostname=hlet-prod-mongo-03 shard2=true primary=true
10.1.99.74 hostname=hlet-prod-mongo-04 shard2=true config_server=true
10.1.99.75 hostname=hlet-prod-mongo-05 shard3=true primary=true
10.1.99.76 hostname=hlet-prod-mongo-06 shard3=true config_server=true
10.1.99.77 hostname=hlet-prod-mongo-07 mongos=true shard1=true shard3=true primary=true
10.1.99.78 hostname=hlet-prod-mongo-08 mongos=true shard2=true
[mongo_new:vars]
shard1_port=20001
shard2_port=20002
shard3_port=20003
config_port=20000
mongos_port=27017
shard1_name=shard1
shard2_name=shard2
shard3_name=shard3
config_name=configs
shard1_server_1_ip=10.1.99.71
shard1_server_2_ip=10.1.99.72
shard1_server_3_ip=10.1.99.77
shard2_server_1_ip=10.1.99.73
shard2_server_2_ip=10.1.99.74
shard2_server_3_ip=10.1.99.78
shard3_server_1_ip=10.1.99.75
shard3_server_2_ip=10.1.99.76
shard3_server_3_ip=10.1.99.77
config_server_1_ip=10.1.99.72
config_server_2_ip=10.1.99.74
config_server_3_ip=10.1.99.76
root_user=mongoroot      # root user used once authentication is enabled
root_password=mongopassowrd    # password for that root user
based_dir=/home/mongodb/sharded_cluster      # base path for data and logs

Among these variables:

server_1_ip is always the primary,

server_2_ip is always the secondary,

server_3_ip is always the arbiter (the config server replica set has no arbiter, but the variable must still be set, otherwise the mongos configuration template fails to render).

These IPs are used later when the replica sets are initialized.
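
Before running any playbook, it is worth confirming that the inventory parses and that the per-host variables resolve as expected. A quick check (assuming the inventory above is saved as ./hosts) might look like:

ansible-inventory -i hosts --host 10.1.99.71     # all variables resolved for a single node
ansible-inventory -i hosts --list                # the whole group with its variables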

Ansible directory layout:

[root@hlet-prod-k8s-rancher mongo_new]# tree
.
├── iptables.yml
├── mongod.conf.mongos.j2
├── mongod.conf.normal.j2
├── mongod.service.j2
├── mongo.key
├── passwd.j2
├── setup_all.yml
├── ssh.yml
├── start_all.yml
├── stop_all.yml
└── uninstall_all.yml

First, the template files:
mongod.conf.mongos.j2 — configuration template for mongos

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: {{ based_dir }}/mysharedrs_{{ item.server_port }}/log/mongod.log

# Where and how to store data.
#storage:
#  dbPath: {{ based_dir }}/mysharedrs_{{ item.server_port }}/data/db
#  journal:
#    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod_{{ item.server_port }}.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: {{ item.server_port }}
  bindIp: 127.0.0.1,{{ansible_default_ipv4.address}}  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


security:
  # authorization: enabled
  keyFile: /etc/mongo/mongo.key
#operationProfiling:

# replication:
  # replSetName: myrs

# sharding:
sharding:
  configDB: {{ config_name }}/{{ config_server_1_ip }}:{{ config_port }},{{ config_server_2_ip }}:{{ config_port}},{{ config_server_3_ip }}:{{ config_port}}
## Enterprise-Only Options

#auditLog:

#snmp:
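
As a sanity check on this template: with the inventory values above (config_name=configs, config_port=20000), the sharding block in the rendered /etc/mongod_27017.conf should come out roughly as:

sharding:
  configDB: configs/10.1.99.72:20000,10.1.99.74:20000,10.1.99.76:20000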

mongod.conf.normal.j2 — configuration template for the shard servers and config servers

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: {{ based_dir }}/mysharedrs_{{ item.server_port }}/log/mongod.log

# Where and how to store data.
storage:
  dbPath: {{ based_dir }}/mysharedrs_{{ item.server_port }}/data/db
  journal:
    enabled: true
# engine:
  wiredTiger:
    engineConfig:
      directoryForIndexes: true
# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod_{{ item.server_port }}.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: {{ item.server_port }}
  bindIp: 127.0.0.1,{{ansible_default_ipv4.address}}  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


security:
  authorization: enabled
  keyFile: /etc/mongo/mongo.key
#operationProfiling:

replication:
  replSetName: {{ item.server_name }}
  enableMajorityReadConcern: false
# sharding:
sharding:
  clusterRole: {{ item.cluster_role }}
## Enterprise-Only Options

#auditLog:

#snmp:

mongod.service.j2 — systemd service template

[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target

[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod_{{ item.server_port }}.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod_{{ item.server_port }}.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings

[Install]
WantedBy=multi-user.target
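
Once setup_all.yml has rendered and started these per-port units, they can be checked with plain systemctl on any node. For example, on 10.1.99.72 (which carries both shard1 and a config server) one would expect something like:

systemctl status mongod_20000 mongod_20001
journalctl -u mongod_20001 --no-pager | tail -n 20    # recent log lines if a unit fails to start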

mongo.key — generated manually; it is used for internal authentication between the cluster members. Remember to change its permissions to 600.

openssl rand -base64 100 > mongo.key
chmod 600 mongo.key 
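
The key must be byte-for-byte identical on every member, because mongod rejects cluster-internal connections from members whose key file differs. After setup_all.yml has copied it out, an ad-hoc check such as the following should show the same checksum on every node:

ansible mongo_new -m shell -a "md5sum /etc/mongo/mongo.key"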

passwd.j2 — used to create the MongoDB root user and password

conn = new Mongo("127.0.0.1:{{ item.server_port }}");
db = conn.getDB("admin");
printjson(db.createUser({user:"{{ root_user }}",pwd:"{{ root_password }}",roles:[ { "role" : "root", "db" : "admin" } ]}));
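
This script is executed later by setup_all.yml on each primary with the command below; it relies on MongoDB's localhost exception, which allows the first admin user to be created over a 127.0.0.1 connection even though the key file already enforces authentication:

mongo --nodb /etc/mongo/passwd.js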

2. Ansible playbooks

2.1 ssh.yml

---
- hosts: mongo_new
  gather_facts: no

  tasks:

  - name: install ssh key
    authorized_key: user=root
                    key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
                    state=present

Run it, entering the server password when prompted:

ansible-playbook ssh.yml -k
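
Once the key is installed, a quick ad-hoc ping (this time without -k) confirms that key-based SSH works for the whole group:

ansible mongo_new -m ping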

2.2 setup_all.yml

This playbook also contains OS tuning settings; note that it reboots the nodes once in the middle so that some of those settings take effect.

---
- hosts: mongo_new
  gather_facts: yes
  tasks:                    
    - name: set hostname
      tags:
        - test1
      hostname:
        name: "{{ hostname }}"
  
    - name: mod hosts
      tags:
        - test1 
      lineinfile:
        dest: /etc/hosts
        regexp: '.*{{ item }}$'
        line: "{{item}} {{ hostvars[item].hostname }}"
        state: present
      when: hostvars[item].hostname is defined
      with_items: "{{ groups.mongo_new }}"

    - name: Disable SELinux temporarily
      tags:
        - test1
      selinux:
        state: disabled
  
    - name: Disable unnecessary services
      tags:
        - test1
      service:
        name: "{{ item }}"
        state: stopped
        enabled: false
      with_items:
        - firewalld
        - postfix
  
    - name: Set timezone to Asia/Shanghai
      tags:
        - test1
      timezone:
        name: Asia/Shanghai
  
    - name: install epel release
      tags:
        - test1
      yum: 
        name: epel-release
        
    - name: create mongodb repo file
      tags:
        - test1
      blockinfile:
        path: /etc/yum.repos.d/mongodb-org-4.2.repo
        create: True
        block: |
          [mongodb-org-4.2]
          name=MongoDB Repository
          baseurl=http://mirrors.aliyun.com/mongodb/yum/redhat/7Server/mongodb-org/4.2/x86_64/
          gpgcheck=1
          enabled=1
          gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
          
    - name: update repo
      tags:
        - test1
      shell: yum clean all && yum makecache


- hosts: mongo_new
  gather_facts: yes
  tasks:            
    - name: install necessary tools
      tags:
        - yum
      yum: 
        name: bash-completion,unzip,conntrack,ntpdate,ntp,curl,sysstat,libseccomp,wget,vim,net-tools,git,nfs-utils,rpcbind,nload,htop,tree,telnet,python-pip,numactl
  
    - name: install mongodb 
      yum: 
        name: mongodb-org
  
    - name: install pymongo
      pip:
        name: pymongo
        extra_args: "-i https://mirrors.aliyun.com/pypi/simple/"
    # - name: set ntp restrict
    #   tags:
    #     - ntp
    #   lineinfile:
    #     dest: /etc/ntp.conf
    #     regexp: '^restrict 192\.[0-9]{1,3}\.255\.1'
    #     line: restrict 192.168.1.1
    #   notify:
    #     - restart ntpd
    # - name: set ntp server
    #   tags:
    #     - ntp
    #   lineinfile:
    #     dest: /etc/ntp.conf
    #     regexp: '^server 192\.[0-9]{1,3}\.255\.1 iburst minpoll 3 maxpoll 4 prefer'
    #     line: 'server 192.168.1.1 iburst minpoll 3 maxpoll 4 prefer'
    #   notify:
    #     - restart ntpd
             
    - name: disable auto update for mongodb
      lineinfile:
        path: /etc/yum.conf
        line: 'exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools'    
   
    - name: disable hugepage
      tags:
        - test5
      blockinfile:
        path: /etc/rc.local
        create: True
        block: |
          if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
             echo never >> /sys/kernel/mm/transparent_hugepage/enabled
          fi
   
          if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
             echo never >> /sys/kernel/mm/transparent_hugepage/defrag
          fi
          
    - name: setting
      tags:
        - init
      lineinfile:
        path: /etc/security/limits.conf
        line: "{{ item }}"
      with_items:
        - '* soft nofile 65535'
        - '* hard nofile 65535'
        - '* soft nproc 65535'
        - '* hard nproc 65535'
      notify:
        - ulimit

    - name: make rc.local executable so the hugepage settings apply
      tags:
        - test5
      file:
        dest: /etc/rc.local
        mode: 0755

    - name: create dirs for mongo.key
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - /etc/mongo

    - name: create dirs for shard1
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - "{{ based_dir }}/mysharedrs_{{ shard1_port }}/log"
        - "{{ based_dir }}/mysharedrs_{{ shard1_port }}/data/db"
      when: shard1 is defined and shard1 == "true" 

    - name: create dirs for shard2
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - "{{ based_dir }}/mysharedrs_{{ shard2_port }}/log"
        - "{{ based_dir }}/mysharedrs_{{ shard2_port }}/data/db"
      when: shard2 is defined and shard2 == "true" 

    - name: create dirs for shard3
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - "{{ based_dir }}/mysharedrs_{{ shard3_port }}/log"
        - "{{ based_dir }}/mysharedrs_{{ shard3_port }}/data/db"
      when: shard3 is defined and shard3 == "true" 

    - name: create dirs for config_server
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - "{{ based_dir }}/mysharedrs_{{ config_port }}/log"
        - "{{ based_dir }}/mysharedrs_{{ config_port }}/data/db"
      when: config_server is defined and config_server == "true" 

    - name: create dirs for mongos
      tags:
        - key
      file: 
        dest: "{{ item }}"
        state: directory
        owner: mongod
        group: mongod
        mode: 0755
      with_items:
        - "{{ based_dir }}/mysharedrs_{{ mongos_port }}/log"
        - "{{ based_dir }}/mysharedrs_{{ mongos_port }}/data/db"
      when: mongos is defined and mongos == "true" 
      
    - name: copy key file
      tags:
        - key   
      copy: 
        src: mongo.key
        dest: /etc/mongo
        owner: mongod
        group: mongod
        mode: '0600'

    - name: copy config_server service
      template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
      with_items:
        - { server_port: "{{ config_port }}" }
      when: config_server is defined and config_server == "true" 

    - name: copy shard1 service
      template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
      with_items:
        - { server_port: "{{ shard1_port }}" }
      when: shard1 is defined and shard1 == "true" 

    - name: copy shard2 service
      template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
      with_items:
        - { server_port: "{{ shard2_port }}" }
      when: shard2 is defined and shard2 == "true" 

    - name: copy shard3 service
      template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
      with_items:
        - { server_port: "{{ shard3_port }}" }
      when: shard3 is defined and shard3 == "true" 

    - name: copy mongos service
      template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
      with_items:
        - { server_port: "{{ mongos_port }}" }
      when: mongos is defined and mongos == "true"

    - name: set mongos service 
      lineinfile: 
        path: /usr/lib/systemd/system/mongod_{{ item.server_port }}.service
        regexp: '^ExecStart=/usr/bin/m'
        line: 'ExecStart=/usr/bin/mongos $OPTIONS'
      with_items:
        - { server_port: "{{ mongos_port }}" }
      when: mongos is defined and mongos == "true"

    - name: Reboot all nodes to make sure all changes take effect
      reboot:
        reboot_timeout: 3600

  handlers:
    - name: ulimit
      shell: ulimit -n
  # handlers:
  #   - name: restart ntpd
  #     service:
  #       name=ntpd
  #       state=restarted
  #       enabled=true   

- hosts: mongo_new
  gather_facts: yes
  tasks:
    - name: copy mongos conf
      tags:
        - key   
      template: src=mongod.conf.mongos.j2 dest=/etc/mongod_{{ item.server_port }}.conf
      with_items:
        - { server_port: "{{ mongos_port }}" }
      when: mongos is defined and mongos == "true"

    - name: stop and disable default mongo service
      tags:
        - init_mongos
      service:
        name: mongod
        state: stopped
        enabled: false

    - name: create config file for config_server
      tags:
        - key
      template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
      with_items:
        - { server_port: "{{ config_port }}", server_name: '{{ config_name }}', cluster_role: 'configsvr' }
      when: config_server is defined and config_server == "true" 

    - name: create config file for shard1
      tags:
        - key
      template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
      with_items:
        - { server_port: "{{ shard1_port }}", server_name: '{{ shard1_name }}', cluster_role: 'shardsvr' }
      when: shard1 is defined and shard1 == "true" 


    - name: create config file for shard2
      tags:
        - key
      template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
      with_items:
        - { server_port: "{{ shard2_port }}", server_name: '{{ shard2_name }}', cluster_role: 'shardsvr' }
      when: shard2 is defined and shard2 == "true" 

    - name: create config file for shard3
      tags:
        - key
      template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
      with_items:
        - { server_port: "{{ shard3_port }}", server_name: '{{ shard3_name }}', cluster_role: 'shardsvr' }
      when: shard3 is defined and shard3 == "true" 

    - name: start config_server services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - mongod_{{ config_port }}
      when: config_server is defined and config_server == "true" 

    - name: start shard1 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard1_port }}
      when: shard1 is defined and shard1 == "true" 

    - name: start shard2 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard2_port }}
      when: shard2 is defined and shard2 == "true" 

    - name: start shard3 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard3_port }}
      when: shard3 is defined and shard3 == "true" 

    - name: init shard1 
      tags:
        - initdone
      shell: "mongo --port {{ shard1_port }} --eval \"rs.initiate({_id : '{{ shard1_name }}',members : [{_id : 0, host : '{{ shard1_server_1_ip }}:{{ shard1_port }}', priority : 2 },{_id : 1, host : '{{ shard1_server_2_ip }}:{{ shard1_port }}', priority : 1 },{_id : 2, host : '{{ shard1_server_3_ip }}:{{ shard1_port }}', arbiterOnly : true }]})\""
      when: shard1 is defined and shard1 == "true" and primary is defined and primary == "true" and config_server is not defined and mongos is not defined
 
    - name: create passwd.js on shard1 primary
      tags:
        - key   
      template:
        src: passwd.j2
        dest: /etc/mongo/passwd.js
      with_items:
        - { server_port: "{{ shard1_port }}" }  
      when: shard1 is defined and shard1 == "true" and primary is defined  and primary == "true"

    - name: init shard2 
      tags:
        - initdone
      shell: "mongo --port {{ shard2_port }} --eval \"rs.initiate({_id : '{{ shard2_name }}',members : [{_id : 0, host : '{{ shard2_server_1_ip }}:{{ shard2_port }}', priority : 2 },{_id : 1, host : '{{ shard2_server_2_ip }}:{{ shard2_port }}', priority : 1 },{_id : 2, host : '{{ shard2_server_3_ip }}:{{ shard2_port }}', arbiterOnly : true }]})\""
      when: shard2 is defined and shard2 == "true" and primary is defined  and primary == "true" and config_server is not defined and mongos is not defined

    - name: create passwd.js on shard2 primary
      tags:
        - key   
      template:
        src: passwd.j2
        dest: /etc/mongo/passwd.js
      with_items:
        - { server_port: "{{ shard2_port }}" }
      when: shard2 is defined and shard2 == "true" and primary is defined  and primary == "true"

    - name: init shard3 
      tags:
        - initdone
      shell: "mongo --port {{ shard3_port }} --eval \"rs.initiate({_id : '{{ shard3_name }}',members : [{_id : 0, host : '{{ shard3_server_1_ip }}:{{ shard3_port }}', priority : 2 },{_id : 1, host : '{{ shard3_server_2_ip }}:{{ shard3_port }}', priority : 1 },{_id : 2, host : '{{ shard3_server_3_ip }}:{{ shard3_port }}', arbiterOnly : true }]})\""
      when: shard3 is defined and shard3 == "true" and primary is defined  and primary == "true" and config_server is not defined and mongos is not defined

    - name: create passwd.js on shard3 primary
      tags:
        - key   
      template:
        src: passwd.j2
        dest: /etc/mongo/passwd.js
      with_items:
        - { server_port: "{{ shard3_port }}" }
      when: shard3 is defined and shard3 == "true" and primary is defined  and primary == "true"

    - name: init config server 
      tags:
        - init
      shell: "mongo --port {{ config_port }} --eval \"rs.initiate({_id : '{{ config_name }}', configsvr: true, members : [{_id : 0, host : '{{ config_server_1_ip }}:{{ config_port }}', priority : 3 },{_id : 1, host : '{{ config_server_2_ip }}:{{ config_port }}', priority : 2 },{_id : 2, host : '{{ config_server_3_ip }}:{{ config_port }}', priority : 1 }]})\""
      when: config_server is defined and config_server == "true" and primary is defined  and primary == "true" and mongos is not defined

    - name: create passwd.js on config server primary
      tags:
        - key   
      template: 
        src: passwd.j2
        dest: /etc/mongo/passwd.js
      with_items:
        - { server_port: "{{ config_port }}" }
      when: config_server is defined and config_server == "true" and primary is defined  and primary == "true" and mongos is not defined

    - name: create password
      tags:
        - create_password
      shell: "sleep 20 && mongo --nodb /etc/mongo/passwd.js"
      when: primary is defined and primary == "true" and config_server is not defined and mongos is not defined

    - name: create password
      tags:
        - create_password
      shell: "sleep 10 && mongo --nodb /etc/mongo/passwd.js"
      when: primary is defined and primary == "true" and config_server is defined and mongos is not defined

    - name: start mongos
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - mongod_{{ mongos_port }}
      when: mongos is defined and mongos == "true"

    - name: add shard
      tags:
        - add_shard
      mongodb_shard:
        login_host: 127.0.0.1
        login_port: "{{ mongos_port }}"
        login_user: "{{ root_user }}"
        login_password: "{{ root_password }}"
        shard: "{{ item }}"
        state: present
      with_items:
        - "{{ shard1_name }}/{{ shard1_server_1_ip }}:{{ shard1_port }},{{ shard1_server_2_ip }}:{{ shard1_port }},{{ shard1_server_3_ip }}:{{ shard1_port }}"
        - "{{ shard2_name }}/{{ shard2_server_1_ip }}:{{ shard2_port }},{{ shard2_server_2_ip }}:{{ shard2_port }},{{ shard2_server_3_ip }}:{{ shard2_port }}"
        - "{{ shard3_name }}/{{ shard3_server_1_ip }}:{{ shard3_port }},{{ shard3_server_2_ip }}:{{ shard3_port }},{{ shard3_server_3_ip }}:{{ shard3_port }}"
      when: primary is defined and primary == "true" and mongos is defined

注意:因?yàn)榕袛噙壿嫷膯栴},不要在任何節(jié)點(diǎn)同時(shí)運(yùn)行兩種服務(wù)的主點(diǎn)

Run:

ansible-playbook setup_all.yml

If the run fails, double-check the variable configuration first.
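
A failed run does not have to be repeated from scratch: the tags defined in the playbook (key, init, initdone, create_password, add_shard, ...) together with --limit can restrict a rerun to the step or host that failed, for example:

ansible-playbook setup_all.yml --tags key --limit 10.1.99.71
ansible-playbook setup_all.yml --tags add_shard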

Once everything has finished, verify the cluster.

Log in to one of the shard replica sets:

[root@hlet-prod-mongo-01 ~]# mongo --host 10.1.99.71 --port 20001 -u mongoroot -p mongopassowrd
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:20001/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a3b92fef-08b2-4c3b-bab6-f2a84fcc29da") }
MongoDB server version: 4.2.8
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

shard1:PRIMARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2020-07-20T08:41:28.060Z"),
    "myState" : 1,
    "term" : NumberLong(4),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2,
    "writeMajorityCount" : 2,
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1595234485, 3),
            "t" : NumberLong(4)
        },
        "lastCommittedWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1595234485, 3),
            "t" : NumberLong(4)
        },
        "readConcernMajorityWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
        "appliedOpTime" : {
            "ts" : Timestamp(1595234485, 3),
            "t" : NumberLong(4)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1595234485, 3),
            "t" : NumberLong(4)
        },
        "lastAppliedWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
        "lastDurableWallTime" : ISODate("2020-07-20T08:41:25.405Z")
    },
    "electionCandidateMetrics" : {
        "lastElectionReason" : "priorityTakeover",
        "lastElectionDate" : ISODate("2020-07-18T03:01:35.660Z"),
        "electionTerm" : NumberLong(4),
        "lastCommittedOpTimeAtElection" : {
            "ts" : Timestamp(1595041285, 2),
            "t" : NumberLong(3)
        },
        "lastSeenOpTimeAtElection" : {
            "ts" : Timestamp(1595041285, 2),
            "t" : NumberLong(3)
        },
        "numVotesNeeded" : 2,
        "priorityAtElection" : 2,
        "electionTimeoutMillis" : NumberLong(10000),
        "priorPrimaryMemberId" : 1,
        "numCatchUpOps" : NumberLong(0),
        "newTermStartDate" : ISODate("2020-07-18T03:01:35.709Z"),
        "wMajorityWriteAvailabilityDate" : ISODate("2020-07-18T03:01:37.743Z")
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "10.1.99.71:20001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 193204,
            "optime" : {
                "ts" : Timestamp(1595234485, 3),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2020-07-20T08:41:25Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1595041295, 1),
            "electionDate" : ISODate("2020-07-18T03:01:35Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "10.1.99.72:20001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 192792,
            "optime" : {
                "ts" : Timestamp(1595234485, 3),
                "t" : NumberLong(4)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1595234485, 3),
                "t" : NumberLong(4)
            },
            "optimeDate" : ISODate("2020-07-20T08:41:25Z"),
            "optimeDurableDate" : ISODate("2020-07-20T08:41:25Z"),
            "lastHeartbeat" : ISODate("2020-07-20T08:41:26.802Z"),
            "lastHeartbeatRecv" : ISODate("2020-07-20T08:41:26.605Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "10.1.99.71:20001",
            "syncSourceHost" : "10.1.99.71:20001",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "10.1.99.77:20001",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 193202,
            "lastHeartbeat" : ISODate("2020-07-20T08:41:26.801Z"),
            "lastHeartbeatRecv" : ISODate("2020-07-20T08:41:27.800Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("7fffffff0000000000000004")
    },
    "lastCommittedOpTime" : Timestamp(1595234485, 3),
    "$configServerState" : {
        "opTime" : {
            "ts" : Timestamp(1595234481, 3),
            "t" : NumberLong(1)
        }
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1595234485, 3),
        "signature" : {
            "hash" : BinData(0,"I33b4TAkVAv+PEn6XYoLxLgklGc="),
            "keyId" : NumberLong("6850640688736895007")
        }
    },
    "operationTime" : Timestamp(1595234485, 3)
}

The shard's replica set status looks healthy.

Log in to mongos:

[root@hlet-prod-mongo-01 ~]# mongo --host 10.1.99.77 --port 27017 -u mongoroot -p mongopassowrd
MongoDB shell version v4.2.8
connecting to: mongodb://10.1.99.77:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b94fba70-96df-4934-8111-3ce0c9cc30fa") }
MongoDB server version: 4.2.8
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5f125d69e7af9138c41835ea")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/10.1.99.71:20001,10.1.99.72:20001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/10.1.99.73:20002,10.1.99.74:20002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/10.1.99.75:20003,10.1.99.76:20003",  "state" : 1 }
  active mongoses:
        "4.2.8" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  342
                                shard2  341
                                shard3  341
                        too many chunks to print, use verbose if you want to force print
        ...
        ...

The mongos status is also normal and all three shards have been added, so the installation is complete.
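
Note that the playbooks stop at adding the shards; actually sharding data is a separate, application-specific step. Purely as an illustration (the database and collection names here are hypothetical), a collection could be sharded from mongos like this:

mongo --host 10.1.99.77 --port 27017 -u mongoroot -p mongopassowrd --authenticationDatabase admin \
      --eval 'sh.enableSharding("mydb"); sh.shardCollection("mydb.mycoll", { _id: "hashed" })'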

2.3 uninstall_all.yml

Removes the MongoDB installation and all related data in one step, which makes reinstalling easy.

- hosts: mongo_new
  gather_facts: no
  tags:
    - stop_all
  tasks:
    - name: stop mongos
      tags:
        - stop
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - mongod_{{ mongos_port }}
      when: mongos is defined and mongos == "true"

    - name: stop shard1 services
      tags:
        - stop
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard1_port }}
      when: shard1 is defined and shard1 == "true"

    - name: stop shard2 services
      tags:
        - stop
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard2_port }}
      when: shard2 is defined and shard2 == "true"

    - name: stop shard3 services
      tags:
        - stop
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard3_port }}
      when: shard3 is defined and shard3 == "true"

    - name: stop config_server services
      tags:
        - stop
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - mongod_{{ config_port }}
      when: config_server is defined and config_server == "true"


    - name: delete data and log path
      tags:
        - delete
      file:
        path: "{{ based_dir }}"
        state: absent

    - name: delete conf file
      tags:
        - delete
      file:
        path: /etc/mongod_{{ item.server_port }}.conf
        state: absent
      with_items:
        - { server_port: "{{ config_port }}" }
        - { server_port: "{{ shard1_port }}" }
        - { server_port: "{{ shard2_port }}" }
        - { server_port: "{{ shard3_port }}" }
        - { server_port: "{{ mongos_port }}" }

    - name: delete service file
      tags:
        - delete
      file:
        path: /usr/lib/systemd/system/mongod_{{ item.server_port }}.service
        state: absent
      with_items:
        - { server_port: "{{ config_port }}" }
        - { server_port: "{{ shard1_port }}" }
        - { server_port: "{{ shard2_port }}" }
        - { server_port: "{{ shard3_port }}" }
        - { server_port: "{{ mongos_port }}" }

    - name: delete key file
      tags:
        - delete
      file:
        path: /etc/mongo
        state: absent

    - name: uninstall mongodb
      yum:
        name: mongodb-org
        state: absent

    - name: undo disable auto update for mongodb
      lineinfile:
        path: /etc/yum.conf
        line: 'exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools'
        state: absent
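
Like the other playbooks it is run directly, and --limit can confine the wipe to a subset of nodes if only some of them need to be rebuilt:

ansible-playbook uninstall_all.yml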

2.4 start_all.yml

Starts the whole MongoDB cluster in one step.

- hosts: mongo_new
  gather_facts: no
  tags:
    - start
  tasks:
    - name: start config_server services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - mongod_{{ config_port }}
      when: config_server is defined and config_server == "true"

    - name: start shard1 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard1_port }}
      when: shard1 is defined and shard1 == "true"

    - name: start shard2 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard2_port }}
      when: shard2 is defined and shard2 == "true"

    - name: start shard3 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: started
      with_items:
         - mongod_{{ shard3_port }}
      when: shard3 is defined and shard3 == "true"

    - name: start mongos
      service:
        name: "{{ item }}"
        state: started
      with_items:
        - mongod_{{ mongos_port }}
      when: mongos is defined and mongos == "true"

2.5 stop_all.yml

Stops the whole MongoDB cluster in one step.

- hosts: mongo_new
  gather_facts: no
  tags:
    - stop
  tasks:
    - name: stop mongos
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - mongod_{{ mongos_port }}
      when: mongos is defined and mongos == "true"

    - name: stop shard1 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard1_port }}
      when: shard1 is defined and shard1 == "true"

    - name: stop shard2 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard2_port }}
      when: shard2 is defined and shard2 == "true"

    - name: stop shard3 services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
         - mongod_{{ shard3_port }}
      when: shard3 is defined and shard3 == "true"

    - name: stop config_server services
      tags:
        - init
      service:
        name: "{{ item }}"
        state: stopped
      with_items:
        - mongod_{{ config_port }}
      when: config_server is defined and config_server == "true"

That is the complete set of Ansible scripts for one-click MongoDB deployment, together with the related server tuning configuration.
