- Pull the image
docker pull elasticsearch:6.5.4
6.5.4: Pulling from library/elasticsearch
a02a4930cb5d: Downloading [===================> ] 30MB/75.17MB
dd8a94cca3f9: Downloading [=> ] 6.421MB/188.1MB
bd73f551dee4: Download complete
70de352c4efc: Downloading [===================> ] 2.637MB/6.859MB
0b5ae4c7310f: Waiting
489d9f8b18f1: Waiting
8ba96caf5951: Waiting
f1df04f27c5f: Waiting
- List images
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
elasticsearch 6.5.4 93109ce1d590 5 weeks ago 774MB
- Start a container
elasticsearch's jvm.options specifies the heap with -Xms2g -Xmx2g by default. My machine only has 1 GB of RAM, so I need to set -Xms/-Xmx myself.
If you have enough memory, keep the default heap and start the container like this:
docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.5.4
d2953375ec7ea5eef1f84d9d39f3f0678a17274d7698716456034c1563aab864
If memory is tight (1 GB in my case), shrink the heap via ES_JAVA_OPTS:
docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "ES_JAVA_OPTS=-Xms512m -Xmx512m" -e "discovery.type=single-node" elasticsearch:6.5.4
ed40afba226b0ca3a148f41d142d195529b902726b0019742a83a8d595ed5583
Port 9300: transport, used for communication between ES nodes.
Port 9200: HTTP, used by external clients to talk to ES.
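To double-check that both mappings took effect, docker port (standard Docker CLI) lists the published ports for a container by name:
docker port elasticsearch
# expected output, roughly:
# 9200/tcp -> 0.0.0.0:9200
# 9300/tcp -> 0.0.0.0:9300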
- Check the running container
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2953375ec7e elasticsearch:6.5.4 "/usr/local/bin/dock…" 37 seconds ago Exited (1) 36 seconds ago elasticsearch
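Note the STATUS column: Exited (1). The ID matches the first docker run above (default 2 GB heap), which most likely died because a 1 GB machine cannot satisfy -Xms2g; the 512 MB container is the one answering the request below. Whenever a container exits like this, its logs usually say why:
docker logs d2953375ec7e
# look for a JVM error such as a failed heap allocation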
curl -v 127.0.0.1:9200
* Rebuilt URL to: 127.0.0.1:9200/
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 494
<
{
"name" : "JFvwCOs",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "gFw-ERtCRs-5vc-zEMBbIg",
"version" : {
"number" : "6.5.4",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "d2ef93d",
"build_date" : "2018-12-17T21:17:40.758843Z",
"build_snapshot" : false,
"lucene_version" : "7.5.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
* Connection #0 to host 127.0.0.1 left intact
- Install the head plugin
The simplest way is to install the elasticsearch-head-chrome extension for Google Chrome, which you can find in the Chrome Web Store.
Below is the Docker installation:
docker pull mobz/elasticsearch-head:5
5: Pulling from mobz/elasticsearch-head
75a822cd7888: Pulling fs layer
57de64c72267: Pulling fs layer
4306be1e8943: Pulling fs layer
871436ab7225: Waiting
0110c26a367a: Waiting
1f04fe713f1b: Waiting
723bac39028e: Waiting
7d8cb47f1c60: Waiting
7328dcf65c42: Waiting
b451f2ccfb9a: Waiting
304d5c28a4cf: Waiting
4cf804850db1: Waiting
Start head:
docker run -d -p 9100:9100 --name elasticsearch-head mobz/elasticsearch-head:5
a31c966d1eec8c83fceefd0515df2f9e91986f08315d0a0d07b9ae261086d7d4
Then open 127.0.0.1:9100 in a browser.
[screenshot: elasticsearch-head UI]
If this page appears, elasticsearch-head is installed successfully.
However, it shows "cluster health: not connected", meaning head could not reach elasticsearch; elasticsearch needs cross-origin (CORS) access enabled.
- elasticsearch CORS configuration
1. Enter the elasticsearch container
docker exec -it 9d53699397a8 /bin/bash
[root@9d53699397a8 elasticsearch]#
2. Install vim
[root@9d53699397a8 elasticsearch]# yum install -y vim
3. Edit /usr/share/elasticsearch/config/elasticsearch.yml
vim elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
# minimum_master_nodes need to be explicitly set when bound on a public IP
# set to 1 to allow single node clusters
# Details: https://github.com/elastic/elasticsearch/pull/17288
discovery.zen.minimum_master_nodes: 1
# settings for the head plugin
http.cors.enabled: true
http.cors.allow-origin: "*"
4. Restart the container
docker restart 9d53699397a8
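A quick way to verify CORS is now active, without going through the head UI: send a request with an Origin header and look for an Access-Control-Allow-Origin header in the response (plain curl, nothing head-specific):
curl -i -H "Origin: http://127.0.0.1:9100" http://127.0.0.1:9200/
# the response headers should now include Access-Control-Allow-Origin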
- Sync MySQL data to elasticsearch with Logstash
1. Download
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.tar.gz
2. Extract
tar -zvxf logstash-6.5.4.tar.gz
3. Adjust the JVM heap
jvm.options defaults to:
-Xms1g
-Xmx1g
My machine has very little memory, so I lower the values:
/opt/logstash-6.5.4/config# vim jvm.options
-Xms512m
-Xmx512m
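The same edit as a one-liner, if you prefer sed over vim (GNU sed, in-place):
sed -i 's/^-Xms1g$/-Xms512m/; s/^-Xmx1g$/-Xmx512m/' /opt/logstash-6.5.4/config/jvm.options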
4. Run a quick stdin/stdout test
/opt/logstash-6.5.4/bin#./logstash -e 'input { stdin { } } output { stdout {} }'
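Type a line at the prompt and Logstash should echo it back as a structured event, which proves the pipeline works (rubydebug-style output; exact fields vary by version):
hello
# {
#     "@timestamp" => 2019-01-24T...,
#        "message" => "hello",
#           "host" => "..."
# }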
5. Install the jdbc input and elasticsearch output plugins
/opt/logstash-6.5.4# bin/logstash-plugin install logstash-input-jdbc
Validating logstash-input-jdbc
Installing logstash-input-jdbc
Installation successful
/opt/logstash-6.5.4# bin/logstash-plugin install logstash-output-elasticsearch
Validating logstash-output-elasticsearch
Installing logstash-output-elasticsearch
Installation successful
6. Download mysql-connector-java
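One place to get it is Maven Central; this fetches the 8.0.12 build into the directory the config below expects (adjust the version and path to your setup):
wget -P /opt/logstash-6.5.4/sync_config/ https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.12/mysql-connector-java-8.0.12.jar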
7. Write the config file sync_table.conf
Note: rows deleted from the database cannot be synced to ES this way; only inserts and updates propagate. A common workaround is a soft-delete flag column, which replicates like any other update.
/opt/logstash-6.5.4/config# vim sync_table.conf
input {
  jdbc {
    # MySQL JDBC connection settings
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false"
    jdbc_user => "root"
    jdbc_password => "123456"
    # path to the MySQL JDBC driver jar; it must be correct, or you get "com.mysql.cj.jdbc.Driver could not be loaded"
    jdbc_driver_library => "/opt/logstash-6.5.4/sync_config/mysql-connector-java-8.0.12.jar"
    # the name of the driver class for mysql
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "50000"
    jdbc_default_timezone => "Asia/Shanghai"
    # the SQL can be written inline here, or loaded from a file via statement_filepath.
    # To keep fields aligned with camelCase entity properties, alias the columns:
    # select d_name as dName, c_id as cId from area where update_time >= :sql_last_value order by update_time asc
    statement => "select * from area where update_time >= :sql_last_value order by update_time asc"
    # statement_filepath => "./config/jdbc.sql"
    # crontab-like schedule (minute hour day-of-month month day-of-week); this runs the sync once per minute
    schedule => "* * * * *"
    #type => "jdbc"
    # whether to record the last run; if true, the last value of the tracking_column is saved to the file named by last_run_metadata_path
    #record_last_run => true
    # if true, track the value of a specific column (named below) instead of the timestamp of the last run
    use_column_value => true
    # required when use_column_value is true: the database column to track, which must be monotonically increasing (usually the MySQL primary key, or an update timestamp as here)
    tracking_column => "update_time"
    tracking_column_type => "timestamp"
    last_run_metadata_path => "area_logstash_capital_bill_last_id"
    # whether to clear the last_run_metadata_path record; if true, every run re-queries all rows from scratch
    clean_run => false
    # whether to lowercase column names
    #lowercase_column_names => false
  }
}
filter {
  date {
    match => [ "update_time", "yyyy-MM-dd HH:mm:ss" ]
    timezone => "Asia/Shanghai"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    # index name (analogous to a database); matches indexName in @Document(indexName = "sys_core", type = "area") on the entity class
    index => "sys_core"
    # document type (analogous to a table); matches type in the same @Document annotation
    document_type => "area"
    # the table's id column maps to the document _id
    document_id => "%{id}"
    template_overwrite => true
  }
  # debug output; comment this out in production
  stdout {
    codec => json_lines
  }
}
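For reference, a minimal shape of the area table this config assumes: an id primary key for document_id and an auto-updating update_time column for incremental tracking (columns other than id and update_time are hypothetical; your real table will differ):
mysql -uroot -p123456 test <<'SQL'
CREATE TABLE IF NOT EXISTS area (
  id INT PRIMARY KEY AUTO_INCREMENT,
  d_name VARCHAR(64),
  -- bumped on every write, so :sql_last_value picks up both inserts and updates
  update_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
SQL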
8. Start
/opt/logstash-6.5.4# bin/logstash -f config/sync_table.conf
9. Sync multiple tables
To sync three tables tableA, tableB, and tableC, create three config files: sync_tableA.conf, sync_tableB.conf, sync_tableC.conf,
changing only the SQL statement and the index name in each.
Once the files are created, register them in /opt/logstash-6.5.4/config/pipelines.yml:
- pipeline.id: table1
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableA.conf"
- pipeline.id: table2
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableB.conf"
- pipeline.id: table3
  path.config: "/opt/logstash-6.5.4/sync_config/sync_tableC.conf"
Then start Logstash without -f, so it reads pipelines.yml:
/opt/logstash-6.5.4# bin/logstash
The data syncs successfully:
[2019-01-24T22:40:00,333][INFO ][logstash.inputs.jdbc ] (0.013511s) SELECT version()
[2019-01-24T22:40:00,340][INFO ][logstash.inputs.jdbc ] (0.002856s) SELECT version()
[2019-01-24T22:40:00,349][INFO ][logstash.inputs.jdbc ] (0.009841s) SELECT version()
[2019-01-24T22:40:00,408][INFO ][logstash.inputs.jdbc ] (0.005667s) SELECT count(*) AS `count` FROM (select * from area where update_time >= '2019-01-23 22:36:24' order by update_time asc) AS `t1` LIMIT 1
[2019-01-24T22:40:00,410][INFO ][logstash.inputs.jdbc ] (0.002467s) SELECT count(*) AS `count` FROM (select * from dictionaries where update_time >= '2019-01-24 06:52:53' order by update_time asc) AS `t1` LIMIT 1
[2019-01-24T22:41:00,361][INFO ][logstash.inputs.jdbc ] (0.000663s) SELECT version()
10. Single-node setup: cluster status yellow and shards Unassigned
Why is the cluster yellow?
We deployed a single elasticsearch node, while the default configuration asks for one replica per shard, and a replica may not live on the same node as its primary, so the replicas cannot be assigned anywhere and the cluster reports yellow. The proper fix is to add a second node to the cluster; if you would rather not, you can delete the unassigned replica shards instead. That is not a good practice, but it is acceptable for testing, so let's try it.
Dropping the replicas resolves it:
curl -H "Content-Type: application/json" -X PUT http://localhost:9200/_settings -d '{"number_of_replicas":0}'
{"acknowledged":true}
curl -v http://localhost:9200/_cluster/health?pretty
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 9200 (#0)
> GET /_cluster/health?pretty HTTP/1.1
> Host: localhost:9200
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 470
<
{
"cluster_name" : "docker-cluster",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 10,
"active_shards" : 10,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
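To see the shard picture directly, the _cat API lists every shard's state; before the fix the replica rows show UNASSIGNED, afterwards only STARTED primaries remain:
curl 'http://localhost:9200/_cat/shards?v'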
- Raise the maximum result window in Elasticsearch
This fixes the following exception:
Caused by: org.elasticsearch.search.query.QueryPhaseExecutionException: Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.
curl -H "Content-Type: application/json" -X PUT http://localhost:9200/_settings -d '{"max_result_window":2147483647}'
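As the exception message says, the scroll API is the better tool for pulling very large result sets; a minimal sketch against the sys_core index from earlier (scroll=1m keeps the cursor alive for one minute between calls):
curl -s -H "Content-Type: application/json" 'http://localhost:9200/sys_core/_search?scroll=1m' -d '{"size": 1000, "query": {"match_all": {}}}'
# copy "_scroll_id" from the response, then fetch the next batch:
curl -s -H "Content-Type: application/json" 'http://localhost:9200/_search/scroll' -d '{"scroll": "1m", "scroll_id": "<_scroll_id from the previous response>"}'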
Notes:
1. size must not exceed the index.max_result_window setting, which defaults to 10,000.
2. For paginated search, combine from and size: from is the offset of the first hit (default 0), size is how many documents to return (default 10); see the example below.
For setting this through the head UI, see: https://blog.csdn.net/chenhq_/article/details/77507956
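For example, fetching the second page of ten hits from sys_core (from = page_number x size):
curl -s -H "Content-Type: application/json" 'http://localhost:9200/sys_core/_search' -d '{"from": 10, "size": 10, "query": {"match_all": {}}}'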