1. Environment preparation:
MongoDB deployment information:
OS: CentOS 7.7 x64, with NTP configured; the firewall imposes no restrictions by default.
mongos is deployed on two nodes, the config servers on three nodes, plus three shard replica sets (each shard has one primary, one secondary and one arbiter).
Note: because of how the playbook's conditional logic works, do not run the primaries of two different services on the same node.
Role | Primary | Secondary | Arbiter |
---|---|---|---|
mongos | 10.1.99.77, 10.1.99.78 | | |
config | 10.1.99.72 | 10.1.99.74, 10.1.99.76 | |
shard1 | 10.1.99.71 | 10.1.99.72 | 10.1.99.77 |
shard2 | 10.1.99.73 | 10.1.99.74 | 10.1.99.78 |
shard3 | 10.1.99.75 | 10.1.99.76 | 10.1.99.77 |
Ansible hosts (inventory) file:
All environment variables are set here and need to be written into the Ansible hosts file.
[mongo_new] # variables are assigned per host according to its roles
10.1.99.71 hostname=hlet-prod-mongo-01 shard1=true primary=true
10.1.99.72 hostname=hlet-prod-mongo-02 shard1=true config_server=true primary=true
10.1.99.73 hostname=hlet-prod-mongo-03 shard2=true primary=true
10.1.99.74 hostname=hlet-prod-mongo-04 shard2=true config_server=true
10.1.99.75 hostname=hlet-prod-mongo-05 shard3=true primary=true
10.1.99.76 hostname=hlet-prod-mongo-06 shard3=true config_server=true
10.1.99.77 hostname=hlet-prod-mongo-07 mongos=true shard1=true shard3=true primary=true
10.1.99.78 hostname=hlet-prod-mongo-08 mongos=true shard2=true
[mongo_new:vars]
shard1_port=20001
shard2_port=20002
shard3_port=20003
config_port=20000
mongos_port=27017
shard1_name=shard1
shard2_name=shard2
shard3_name=shard3
config_name=configs
shard1_server_1_ip=10.1.99.71
shard1_server_2_ip=10.1.99.72
shard1_server_3_ip=10.1.99.77
shard2_server_1_ip=10.1.99.73
shard2_server_2_ip=10.1.99.74
shard2_server_3_ip=10.1.99.78
shard3_server_1_ip=10.1.99.75
shard3_server_2_ip=10.1.99.76
shard3_server_3_ip=10.1.99.77
config_server_1_ip=10.1.99.72
config_server_2_ip=10.1.99.74
config_server_3_ip=10.1.99.76
root_user=mongoroot # root user used once authentication is enabled
root_password=mongopassowrd # password of the root user used once authentication is enabled
based_dir=/home/mongodb/sharded_cluster # base path for data and log files
Among these variables, every server_1_ip is set to the primary, every server_2_ip to the secondary, and every server_3_ip to the arbiter (the config server replica set has no arbiter, but the third IP still has to be filled in, otherwise the mongos configuration template fails to render). These IPs are used later when the replica sets are initialized.
Ansible folder structure:
[root@hlet-prod-k8s-rancher mongo_new]# tree
.
├── iptables.yml
├── mongod.conf.mongos.j2
├── mongod.conf.normal.j2
├── mongod.service.j2
├── mongo.key
├── passwd.j2
├── setup_all.yml
├── ssh.yml
├── start_all.yml
├── stop_all.yml
└── uninstall_all.yml
First, the template files:
mongod.conf.mongos.j2: configuration file template for mongos
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
destination: file
logAppend: true
path: {{ based_dir }}/mysharedrs_{{ item.server_port }}/log/mongod.log
# Where and how to store data.
#storage:
# dbPath: {{ based_dir }}/mysharedrs_{{ item.server_port }}/data/db
# journal:
# enabled: true
# engine:
# wiredTiger:
# how the process runs
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod_{{ item.server_port }}.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
# network interfaces
net:
port: {{ item.server_port }}
bindIp: 127.0.0.1,{{ansible_default_ipv4.address}} # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
security:
# authorization: enabled
keyFile: /etc/mongo/mongo.key
#operationProfiling:
# replication:
# replSetName: myrs
# sharding:
sharding:
configDB: {{ config_name }}/{{ config_server_1_ip }}:{{ config_port }},{{ config_server_2_ip }}:{{ config_port}},{{ config_server_3_ip }}:{{ config_port}}
## Enterprise-Only Options
#auditLog:
#snmp:
mongod.conf.normal.j2: configuration file template for the shard servers and config servers
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# where to write logging data.
systemLog:
destination: file
logAppend: true
path: {{ based_dir }}/mysharedrs_{{ item.server_port }}/log/mongod.log
# Where and how to store data.
storage:
dbPath: {{ based_dir }}/mysharedrs_{{ item.server_port }}/data/db
journal:
enabled: true
# engine:
wiredTiger:
engineConfig:
directoryForIndexes: true
# how the process runs
processManagement:
fork: true # fork and run in background
pidFilePath: /var/run/mongodb/mongod_{{ item.server_port }}.pid # location of pidfile
timeZoneInfo: /usr/share/zoneinfo
# network interfaces
net:
port: {{ item.server_port }}
bindIp: 127.0.0.1,{{ansible_default_ipv4.address}} # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
security:
authorization: enabled
keyFile: /etc/mongo/mongo.key
#operationProfiling:
replication:
replSetName: {{ item.server_name }}
enableMajorityReadConcern: false
# sharding:
sharding:
clusterRole: {{ item.cluster_role }}
## Enterprise-Only Options
#auditLog:
#snmp:
mongod.service.j2: systemd service unit template
[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network.target
[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod_{{ item.server_port }}.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb
ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb
PermissionsStartOnly=true
PIDFile=/var/run/mongodb/mongod_{{ item.server_port }}.pid
Type=forking
# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false
# Recommended limits for mongod as specified in
# http://docs.mongodb.org/manual/reference/ulimit/#recommended-settings
[Install]
WantedBy=multi-user.target
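Each rendered unit is named after its port (mongod_20000, mongod_20001, and so on), so a single instance can also be inspected or restarted by hand on a node if something looks off; for example, on a shard1 member (20001 is simply the shard1_port value from the inventory):
systemctl status mongod_20001
systemctl restart mongod_20001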
mongo.key is generated manually; it is the shared key file used for internal authentication between the MongoDB cluster members. Remember to change its permissions to 600:
openssl rand -base64 100 > mongo.key
chmod 600 mongo.key
passwd.j2: used to create the MongoDB root user and password
conn = new Mongo("127.0.0.1:{{ item.server_port }}");
db = conn.getDB("admin");
printjson(db.createUser({user:"{{ root_user }}",pwd:"{{ root_password }}",roles:[ { "role" : "root", "db" : "admin" } ]}));
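The script opens its own connection with new Mongo(), which is why the playbook below runs it on each primary with --nodb once the replica set is up, essentially:
mongo --nodb /etc/mongo/passwd.js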
2. Preparing the Ansible playbooks
2.1 ssh.yml
---
- hosts: mongo_new
gather_facts: no
tasks:
- name: install ssh key
authorized_key: user=root
key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
state=present
Run it and enter the server password when prompted:
ansible-playbook ssh.yml -k
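Once the key is in place, a quick connectivity check (it should succeed without any password prompt) can be done with an ad-hoc ping:
ansible mongo_new -m ping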
2.2 setup_all.yml
This playbook also contains the system tuning settings; note that it reboots the hosts once in the middle so that some of those settings take effect.
---
- hosts: mongo_new
gather_facts: yes
tasks:
- name: set hostname
tags:
- test1
hostname:
name: "{{ hostname }}"
- name: mod hosts
tags:
- test1
lineinfile:
dest: /etc/hosts
regexp: '.*{{ item }}$'
line: "{{item}} {{ hostvars[item].hostname }}"
state: present
when: hostvars[item].hostname is defined
with_items: "{{ groups.mongo_new }}"
- name: Disable SELinux temporarily
tags:
- test1
selinux:
state: disabled
- name: Disable unnecessary services
tags:
- test1
service:
name: "{{ item }}"
state: stopped
enabled: false
with_items:
- firewalld
- postfix
- name: Set timezone to Asia/Shanghai
tags:
- test1
timezone:
name: Asia/Shanghai
- name: install epel release
tags:
- test1
yum:
name: epel-release
- name: create mongodb repo file
tags:
- test1
blockinfile:
path: /etc/yum.repos.d/mongodb-org-4.2.repo
create: True
block: |
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=http://mirrors.aliyun.com/mongodb/yum/redhat/7Server/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
- name: update repo
tags:
- test1
shell: yum clean all && yum makecache
- hosts: mongo_new
gather_facts: yes
tasks:
- name: install necessary tools
tags:
- yum
yum:
name: bash-completion,unzip,conntrack,ntpdate,ntp,curl,sysstat,libseccomp,wget,vim,net-tools,git,nfs-utils,rpcbind,nload,htop,tree,telnet,python-pip,numactl
- name: install mongodb
yum:
name: mongodb-org
- name: install pymongo
pip:
name: pymongo
extra_args: "-i https://mirrors.aliyun.com/pypi/simple/"
# - name: set ntp restrict
# tags:
# - ntp
# lineinfile:
# dest: /etc/ntp.conf
# regexp: '^restrict 192\.[0-9]{1,3}\.255\.1'
# line: restrict 192.168.1.1
# notify:
# - restart ntpd
# - name: set ntp server
# tags:
# - ntp
# lineinfile:
# dest: /etc/ntp.conf
# regexp: '^server 192\.[0-9]{1,3}\.255\.1 iburst minpoll 3 maxpoll 4 prefer'
# line: 'server 192.168.1.1 iburst minpoll 3 maxpoll 4 prefer'
# notify:
# - restart ntpd
- name: disable auto update for mongodb
lineinfile:
path: /etc/yum.conf
line: 'exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools'
- name: disable hugepage
tags:
- test5
blockinfile:
path: /etc/rc.local
create: True
block: |
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never >> /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never >> /sys/kernel/mm/transparent_hugepage/defrag
fi
- name: setting
tags:
- init
lineinfile:
path: /etc/security/limits.conf
line: "{{ item }}"
with_items:
- '* soft nofile 65535'
- '* hard nofile 65535'
- '* soft nproc 65535'
- '* hard nproc 65535'
notify:
- ulimit
- name: disable hugepage for sure
tags:
- test5
file:
dest: /etc/rc.local
mode: 0755
- name: create dirs for mongo.key
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- /etc/mongo
- name: create dirs for shard1
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- "{{ based_dir }}/mysharedrs_{{ shard1_port }}/log"
- "{{ based_dir }}/mysharedrs_{{ shard1_port }}/data/db"
when: shard1 is defined and shard1 == "true"
- name: create dirs for shard2
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- "{{ based_dir }}/mysharedrs_{{ shard2_port }}/log"
- "{{ based_dir }}/mysharedrs_{{ shard2_port }}/data/db"
when: shard2 is defined and shard2 == "true"
- name: create dirs for shard3
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- "{{ based_dir }}/mysharedrs_{{ shard3_port }}/log"
- "{{ based_dir }}/mysharedrs_{{ shard3_port }}/data/db"
when: shard3 is defined and shard3 == "true"
- name: create dirs for config_server
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- "{{ based_dir }}/mysharedrs_{{ config_port }}/log"
- "{{ based_dir }}/mysharedrs_{{ config_port }}/data/db"
when: config_server is defined and config_server == "true"
- name: create dirs for mongos
tags:
- key
file:
dest: "{{ item }}"
state: directory
owner: mongod
group: mongod
mode: 0755
with_items:
- "{{ based_dir }}/mysharedrs_{{ mongos_port }}/log"
- "{{ based_dir }}/mysharedrs_{{ mongos_port }}/data/db"
when: mongos is defined and mongos == "true"
- name: copy key file
tags:
- key
copy:
src: mongo.key
dest: /etc/mongo
owner: mongod
group: mongod
mode: '0600'
- name: copy config_server service
template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
with_items:
- { server_port: "{{ config_port }}" }
when: config_server is defined and config_server == "true"
- name: copy shard1 service
template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
with_items:
- { server_port: "{{ shard1_port }}" }
when: shard1 is defined and shard1 == "true"
- name: copy shard2 service
template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
with_items:
- { server_port: "{{ shard2_port }}" }
when: shard2 is defined and shard2 == "true"
- name: copy shard3 service
template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
with_items:
- { server_port: "{{ shard3_port }}" }
when: shard3 is defined and shard3 == "true"
- name: copy mongos service
template: src=mongod.service.j2 dest=/usr/lib/systemd/system/mongod_{{ item.server_port }}.service
with_items:
- { server_port: "{{ mongos_port }}" }
when: mongos is defined and mongos == "true"
- name: set mongos service
lineinfile:
path: /usr/lib/systemd/system/mongod_{{ item.server_port }}.service
regexp: '^ExecStart=/usr/bin/m'
line: 'ExecStart=/usr/bin/mongos $OPTIONS'
with_items:
- { server_port: "{{ mongos_port }}" }
when: mongos is defined and mongos == "true"
- name: Reboot all nodes to make sure all changes take effect
reboot:
reboot_timeout: 3600
handlers:
- name: ulimit
shell: ulimit -n
# handlers:
# - name: restart ntpd
# service:
# name=ntpd
# state=restarted
# enabled=true
- hosts: mongo_new
gather_facts: yes
tasks:
- name: copy mongos conf
tags:
- key
template: src=mongod.conf.mongos.j2 dest=/etc/mongod_{{ item.server_port }}.conf
with_items:
- { server_port: "{{ mongos_port }}" }
when: mongos is defined and mongos == "true"
- name: stop and disable default mongo service
tags:
- init_mongos
service:
name: mongod
state: stopped
enabled: false
- name: create config file for config_server
tags:
- key
template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
with_items:
- { server_port: "{{ config_port }}", server_name: '{{ config_name }}', cluster_role: 'configsvr' }
when: config_server is defined and config_server == "true"
- name: create config file for shard1
tags:
- key
template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
with_items:
- { server_port: "{{ shard1_port }}", server_name: '{{ shard1_name }}', cluster_role: 'shardsvr' }
when: shard1 is defined and shard1 == "true"
- name: create config file for shard2
tags:
- key
template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
with_items:
- { server_port: "{{ shard2_port }}", server_name: '{{ shard2_name }}', cluster_role: 'shardsvr' }
when: shard2 is defined and shard2 == "true"
- name: create config file for shard3
tags:
- key
template: src=mongod.conf.normal.j2 dest=/etc/mongod_{{ item.server_port }}.conf
with_items:
- { server_port: "{{ shard3_port }}", server_name: '{{ shard3_name }}', cluster_role: 'shardsvr' }
when: shard3 is defined and shard3 == "true"
- name: start config_server services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ config_port }}
when: config_server is defined and config_server == "true"
- name: start shard1 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard1_port }}
when: shard1 is defined and shard1 == "true"
- name: start shard2 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard2_port }}
when: shard2 is defined and shard2 == "true"
- name: start shard3 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard3_port }}
when: shard3 is defined and shard3 == "true"
- name: init shard1
tags:
- initdone
shell: "mongo --port {{ shard1_port }} --eval \"rs.initiate({_id : '{{ shard1_name }}',members : [{_id : 0, host : '{{ shard1_server_1_ip }}:{{ shard1_port }}', priority : 2 },{_id : 1, host : '{{ shard1_server_2_ip }}:{{ shard1_port }}', priority : 1 },{_id : 2, host : '{{ shard1_server_3_ip }}:{{ shard1_port }}', arbiterOnly : true }]})\""
when: shard1 is defined and shard1 == "true" and primary is defined and primary == "true" and config_server is not defined and mongos is not defined
- name: create passwd.js on shard1 primary
tags:
- key
template:
src: passwd.j2
dest: /etc/mongo/passwd.js
with_items:
- { server_port: "{{ shard1_port }}" }
when: shard1 is defined and shard1 == "true" and primary is defined and primary == "true"
- name: init shard2
tags:
- initdone
shell: "mongo --port {{ shard2_port }} --eval \"rs.initiate({_id : '{{ shard2_name }}',members : [{_id : 0, host : '{{ shard2_server_1_ip }}:{{ shard2_port }}', priority : 2 },{_id : 1, host : '{{ shard2_server_2_ip }}:{{ shard2_port }}', priority : 1 },{_id : 2, host : '{{ shard2_server_3_ip }}:{{ shard2_port }}', arbiterOnly : true }]})\""
when: shard2 is defined and shard2 == "true" and primary is defined and primary == "true" and config_server is not defined and mongos is not defined
- name: create passwd.js on shard2 primary
tags:
- key
template:
src: passwd.j2
dest: /etc/mongo/passwd.js
with_items:
- { server_port: "{{ shard2_port }}" }
when: shard2 is defined and shard2 == "true" and primary is defined and primary == "true"
- name: init shard3
tags:
- initdone
shell: "mongo --port {{ shard3_port }} --eval \"rs.initiate({_id : '{{ shard3_name }}',members : [{_id : 0, host : '{{ shard3_server_1_ip }}:{{ shard3_port }}', priority : 2 },{_id : 1, host : '{{ shard3_server_2_ip }}:{{ shard3_port }}', priority : 1 },{_id : 2, host : '{{ shard3_server_3_ip }}:{{ shard3_port }}', arbiterOnly : true }]})\""
when: shard3 is defined and shard3 == "true" and primary is defined and primary == "true" and config_server is not defined and mongos is not defined
- name: create passwd.js on shard3 primary
tags:
- key
template:
src: passwd.j2
dest: /etc/mongo/passwd.js
with_items:
- { server_port: "{{ shard3_port }}" }
when: shard3 is defined and shard3 == "true" and primary is defined and primary == "true"
- name: init config server
tags:
- init
shell: "mongo --port {{ config_port }} --eval \"rs.initiate({_id : '{{ config_name }}', configsvr: true, members : [{_id : 0, host : '{{ config_server_1_ip }}:{{ config_port }}', priority : 3 },{_id : 1, host : '{{ config_server_2_ip }}:{{ config_port }}', priority : 2 },{_id : 2, host : '{{ config_server_3_ip }}:{{ config_port }}', priority : 1 }]})\""
when: config_server is defined and config_server == "true" and primary is defined and primary == "true" and mongos is not defined
- name: create passwd.js on config server primary
tags:
- key
template:
src: passwd.j2
dest: /etc/mongo/passwd.js
with_items:
- { server_port: "{{ config_port }}" }
when: config_server is defined and config_server == "true" and primary is defined and primary == "true" and mongos is not defined
- name: create password
tags:
- create_password
shell: "sleep 20 && mongo --nodb /etc/mongo/passwd.js"
when: primary is defined and primary == "true" and config_server is not defined and mongos is not defined
- name: create password
tags:
- create_password
shell: "sleep 10 && mongo --nodb /etc/mongo/passwd.js"
when: primary is defined and primary == "true" and config_server is defined and mongos is not defined
- name: start mongos
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ mongos_port }}
when: mongos is defined and mongos == "true"
- name: add shard
tags:
- add_shard
mongodb_shard:
login_host: 127.0.0.1
login_port: "{{ mongos_port }}"
login_user: "{{ root_user }}"
login_password: "{{ root_password }}"
shard: "{{ item }}"
state: present
with_items:
- "{{ shard1_name }}/{{ shard1_server_1_ip }}:{{ shard1_port }},{{ shard1_server_2_ip }}:{{ shard1_port }},{{ shard1_server_3_ip }}:{{ shard1_port }}"
- "{{ shard2_name }}/{{ shard2_server_1_ip }}:{{ shard2_port }},{{ shard2_server_2_ip }}:{{ shard2_port }},{{ shard2_server_3_ip }}:{{ shard2_port }}"
- "{{ shard3_name }}/{{ shard3_server_1_ip }}:{{ shard3_port }},{{ shard3_server_2_ip }}:{{ shard3_port }},{{ shard3_server_3_ip }}:{{ shard3_port }}"
when: primary is defined and primary == "true" and mongos is defined
注意:因?yàn)榕袛噙壿嫷膯栴},不要在任何節(jié)點(diǎn)同時(shí)運(yùn)行兩種服務(wù)的主點(diǎn)
Run:
ansible-playbook setup_all.yml
If the run fails, double-check the variable settings in the inventory first.
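The tags defined in the playbook (key, init, initdone, create_password, add_shard, ...) also make it possible to re-run only part of it after fixing a variable, for example:
ansible-playbook setup_all.yml --list-tasks
ansible-playbook setup_all.yml -t add_shard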
Once everything has finished, verify the cluster environment.
Log in to one of the shard replica sets:
[root@hlet-prod-mongo-01 ~]# mongo --host 10.1.99.71 --port 20001 -u mongoroot -p mongopassowrd
MongoDB shell version v4.2.8
connecting to: mongodb://127.0.0.1:20001/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("a3b92fef-08b2-4c3b-bab6-f2a84fcc29da") }
MongoDB server version: 4.2.8
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
shard1:PRIMARY> rs.status()
{
"set" : "shard1",
"date" : ISODate("2020-07-20T08:41:28.060Z"),
"myState" : 1,
"term" : NumberLong(4),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"majorityVoteCount" : 2,
"writeMajorityCount" : 2,
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"lastCommittedWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"readConcernMajorityWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
"appliedOpTime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"durableOpTime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"lastAppliedWallTime" : ISODate("2020-07-20T08:41:25.405Z"),
"lastDurableWallTime" : ISODate("2020-07-20T08:41:25.405Z")
},
"electionCandidateMetrics" : {
"lastElectionReason" : "priorityTakeover",
"lastElectionDate" : ISODate("2020-07-18T03:01:35.660Z"),
"electionTerm" : NumberLong(4),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(1595041285, 2),
"t" : NumberLong(3)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1595041285, 2),
"t" : NumberLong(3)
},
"numVotesNeeded" : 2,
"priorityAtElection" : 2,
"electionTimeoutMillis" : NumberLong(10000),
"priorPrimaryMemberId" : 1,
"numCatchUpOps" : NumberLong(0),
"newTermStartDate" : ISODate("2020-07-18T03:01:35.709Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2020-07-18T03:01:37.743Z")
},
"members" : [
{
"_id" : 0,
"name" : "10.1.99.71:20001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 193204,
"optime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"optimeDate" : ISODate("2020-07-20T08:41:25Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1595041295, 1),
"electionDate" : ISODate("2020-07-18T03:01:35Z"),
"configVersion" : 1,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "10.1.99.72:20001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 192792,
"optime" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"optimeDurable" : {
"ts" : Timestamp(1595234485, 3),
"t" : NumberLong(4)
},
"optimeDate" : ISODate("2020-07-20T08:41:25Z"),
"optimeDurableDate" : ISODate("2020-07-20T08:41:25Z"),
"lastHeartbeat" : ISODate("2020-07-20T08:41:26.802Z"),
"lastHeartbeatRecv" : ISODate("2020-07-20T08:41:26.605Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "10.1.99.71:20001",
"syncSourceHost" : "10.1.99.71:20001",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "10.1.99.77:20001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 193202,
"lastHeartbeat" : ISODate("2020-07-20T08:41:26.801Z"),
"lastHeartbeatRecv" : ISODate("2020-07-20T08:41:27.800Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "",
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"configVersion" : 1
}
],
"ok" : 1,
"$gleStats" : {
"lastOpTime" : Timestamp(0, 0),
"electionId" : ObjectId("7fffffff0000000000000004")
},
"lastCommittedOpTime" : Timestamp(1595234485, 3),
"$configServerState" : {
"opTime" : {
"ts" : Timestamp(1595234481, 3),
"t" : NumberLong(1)
}
},
"$clusterTime" : {
"clusterTime" : Timestamp(1595234485, 3),
"signature" : {
"hash" : BinData(0,"I33b4TAkVAv+PEn6XYoLxLgklGc="),
"keyId" : NumberLong("6850640688736895007")
}
},
"operationTime" : Timestamp(1595234485, 3)
}
The shard replica set status looks healthy.
Log in to mongos:
[root@hlet-prod-mongo-01 ~]# mongo --host 10.1.99.77 --port 27017 -u mongoroot -p mongopassowrd
MongoDB shell version v4.2.8
connecting to: mongodb://10.1.99.77:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("b94fba70-96df-4934-8111-3ce0c9cc30fa") }
MongoDB server version: 4.2.8
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5f125d69e7af9138c41835ea")
}
shards:
{ "_id" : "shard1", "host" : "shard1/10.1.99.71:20001,10.1.99.72:20001", "state" : 1 }
{ "_id" : "shard2", "host" : "shard2/10.1.99.73:20002,10.1.99.74:20002", "state" : 1 }
{ "_id" : "shard3", "host" : "shard3/10.1.99.75:20003,10.1.99.76:20003", "state" : 1 }
active mongoses:
"4.2.8" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
shard1 342
shard2 341
shard3 341
too many chunks to print, use verbose if you want to force print
...
...
The mongos status is also normal and all three shards have been added. The installation is now complete.
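As an optional extra smoke test, sharding can be enabled for a throwaway database through mongos (testdb and testdb.demo below are just placeholder names, not part of the deployment):
mongo --host 10.1.99.77 --port 27017 -u mongoroot -p mongopassowrd --authenticationDatabase admin --eval 'sh.enableSharding("testdb")'
mongo --host 10.1.99.77 --port 27017 -u mongoroot -p mongopassowrd --authenticationDatabase admin --eval 'sh.shardCollection("testdb.demo", { _id: "hashed" })'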
2.3 uninstall_all.yml
Removes the MongoDB installation and all related data in one go, which makes re-installing easy.
- hosts: mongo_new
gather_facts: no
tags:
- stop_all
tasks:
- name: stop mongos
tags:
- stop
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ mongos_port }}
when: mongos is defined and mongos == "true"
- name: stop shard1 services
tags:
- stop
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard1_port }}
when: shard1 is defined and shard1 == "true"
- name: stop shard2 services
tags:
- stop
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard2_port }}
when: shard2 is defined and shard2 == "true"
- name: stop shard3 services
tags:
- stop
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard3_port }}
when: shard3 is defined and shard3 == "true"
- name: stop config_server services
tags:
- stop
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ config_port }}
when: config_server is defined and config_server == "true"
- name: delete data and log path
tags:
- delete
file:
path: "{{ based_dir }}"
state: absent
- name: delete conf file
tags:
- delete
file:
path: /etc/mongod_{{ item.server_port }}.conf
state: absent
with_items:
- { server_port: "{{ config_port }}" }
- { server_port: "{{ shard1_port }}" }
- { server_port: "{{ shard2_port }}" }
- { server_port: "{{ shard3_port }}" }
- { server_port: "{{ mongos_port }}" }
- name: delete service file
tags:
- delete
file:
path: /usr/lib/systemd/system/mongod_{{ item.server_port }}.service
state: absent
with_items:
- { server_port: "{{ config_port }}" }
- { server_port: "{{ shard1_port }}" }
- { server_port: "{{ shard2_port }}" }
- { server_port: "{{ shard3_port }}" }
- { server_port: "{{ mongos_port }}" }
- name: delete key file
tags:
- delete
file:
path: /etc/mongo
state: absent
- name: uninstall mongodb
yum:
name: mongodb-org
state: absent
- name: undo disable auto update for mongodb
lineinfile:
path: /etc/yum.conf
line: 'exclude=mongodb-org,mongodb-org-server,mongodb-org-shell,mongodb-org-mongos,mongodb-org-tools'
state: absent
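Run it like the other playbooks; the stop and delete tags defined above can also be used on their own to only stop the services or only wipe the files:
ansible-playbook uninstall_all.yml
ansible-playbook uninstall_all.yml -t stop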
2.4 start_all.yml
Starts the whole MongoDB cluster in one go.
- hosts: mongo_new
gather_facts: no
tags:
- start
tasks:
- name: start config_server services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ config_port }}
when: config_server is defined and config_server == "true"
- name: start shard1 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard1_port }}
when: shard1 is defined and shard1 == "true"
- name: start shard2 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard2_port }}
when: shard2 is defined and shard2 == "true"
- name: start shard3 services
tags:
- init
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ shard3_port }}
when: shard3 is defined and shard3 == "true"
- name: start mongos
service:
name: "{{ item }}"
state: started
with_items:
- mongod_{{ mongos_port }}
when: mongos is defined and mongos == "true"
2.5 stop_all.yml
Stops the whole MongoDB cluster in one go.
- hosts: mongo_new
gather_facts: no
tags:
- stop
tasks:
- name: stop mongos
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ mongos_port }}
when: mongos is defined and mongos == "true"
- name: stop shard1 services
tags:
- init
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard1_port }}
when: shard1 is defined and shard1 == "true"
- name: stop shard2 services
tags:
- init
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard2_port }}
when: shard2 is defined and shard2 == "true"
- name: stop shard3 services
tags:
- init
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ shard3_port }}
when: shard3 is defined and shard3 == "true"
- name: stop config_server services
tags:
- init
service:
name: "{{ item }}"
state: stopped
with_items:
- mongod_{{ config_port }}
when: config_server is defined and config_server == "true"
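Both playbooks are run the same way as the others:
ansible-playbook start_all.yml
ansible-playbook stop_all.yml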
That is the complete set of Ansible playbooks for one-click MongoDB deployment, together with the related server tuning configuration.