Building a Simple ELK Stack with Docker

ELK is short for Elasticsearch, Logstash, and Kibana. It is commonly used for logging systems, covering log collection, log storage, and log visualization, and provides a clean, efficient log-processing pipeline.


(Figure: service diagram of a simple ELK system)

Since no extra machines are available, Docker is used here to simulate the deployment and use of a simple ELK system.

Setting up Logstash

Prepare the image

docker search logstash // output omitted
docker pull logstash:5.5.2 // output omitted (the 5.5.2 tag matches the image listed below)
➜  logstash docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
es_test             latest              43cf30ef8591        2 days ago          349MB
redis               latest              0f55cf3661e9        9 days ago          95MB
logstash            5.5.2               98f8400d2944        17 months ago       724MB
elasticsearch       2.3.5               1c3e7681c53c        2 years ago         346MB

Run a container

➜  logstash docker run -i -t 98f8400d2944 /bin/bash
root@36134d90b5cd:/# ls
bin  boot  dev  docker-entrypoint.sh  docker-java-home  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@36134d90b5cd:/# cd /home
root@36134d90b5cd:/home# ls
root@36134d90b5cd:/home# cd ../
root@36134d90b5cd:/# ls
bin  boot  dev  docker-entrypoint.sh  docker-java-home  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@36134d90b5cd:/# cd /etc/logstash/
root@36134d90b5cd:/etc/logstash# ls
conf.d  jvm.options  log4j2.properties  log4j2.properties.dist  logstash.yml  startup.options
root@36134d90b5cd:/etc/logstash# cat logstash.yml

There is no logstash.conf here (the name is arbitrary; logstash takes a -f argument pointing to a config file that defines the input, filter, and output behavior).

The pulled image has no editor such as vim, so the file cannot be edited in place. To use a custom configuration file, docker cp is used to copy a file edited on the host into the container.

➜  logstash docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                              NAMES
36134d90b5cd        98f8400d2944        "/docker-entrypoint.…"   About a minute ago   Up About a minute                                      priceless_feistel
687bf289bb43        43cf30ef8591        "/docker-entrypoint.…"   5 minutes ago        Up 5 minutes        0.0.0.0:9200->9200/tcp, 9300/tcp   my_elasticsearch
➜  logstash docker cp logstash.conf 36134d90b5cd:/etc/logstash
➜  logstash cat logstash.conf
input { stdin {} }
output {
  elasticsearch {
    action => "index"
    hosts => "http://172.17.0.2:9200"
    index => "my_log"  # index name in ES
  }
  # stdout output for debugging; comment this out in production
  stdout {
      codec => json_lines
  }
  #redis {
  #  codec => plain
  #  host => ["127.0.0.1:6379"]
  #  data_type => list
  #  key => logstash
  #}
}
➜  logstash
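The config above has no filter stage. For reference, here is a rough sketch of a grok filter that could be added between input and output; the pattern and field names are illustrative and not part of the original setup:

```conf
filter {
  grok {
    # parse lines like "2019-02-16 03:00:00 ERROR something went wrong"
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```

With a filter like this, the parsed `ts`, `level`, and `msg` fields would appear as separate fields on each event indexed into Elasticsearch.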

The 172.17.0.2:9200 in the config file is the IP address of the elasticsearch container inside Docker; it can be obtained as follows:

➜  logstash docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
36134d90b5cd        98f8400d2944        "/docker-entrypoint.…"   9 minutes ago       Up 9 minutes                                           priceless_feistel
687bf289bb43        43cf30ef8591        "/docker-entrypoint.…"   13 minutes ago      Up 13 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp   my_elasticsearch
➜  logstash docker inspect 687bf289bb43 | grep IP
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
➜  logstash
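Grepping for `IP` is noisy; `docker inspect` also accepts a Go-template `-f` flag that prints just the address, e.g. `docker inspect -f '{{ .NetworkSettings.IPAddress }}' 687bf289bb43`. As a self-contained sketch of the same extraction, the snippet below parses a trimmed, hypothetical sample of the inspect JSON rather than a live container:

```shell
# trimmed, hypothetical sample of `docker inspect` output, saved for the demo
cat > /tmp/inspect_sample.json <<'EOF'
[{"NetworkSettings": {"IPAddress": "172.17.0.2", "IPPrefixLen": 16}}]
EOF

# docker inspect returns a JSON array; take the first element's IP
python3 -c "import json; print(json.load(open('/tmp/inspect_sample.json'))[0]['NetworkSettings']['IPAddress'])"
# prints: 172.17.0.2
```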

Testing Logstash

With that, the Logstash preparation is done; the next step is to start the logstash service.

➜  logstash docker exec -it 36134d90b5cd /bin/bash   # re-enter the running container
root@36134d90b5cd:/etc/logstash# ls
conf.d  jvm.options  log4j2.properties  log4j2.properties.dist  logstash.yml  startup.options
root@36134d90b5cd:/etc/logstash# cd conf.d/
root@36134d90b5cd:/etc/logstash/conf.d# ls
root@36134d90b5cd:/etc/logstash/conf.d# cd ..
root@36134d90b5cd:/etc/logstash# ls
conf.d  jvm.options  log4j2.properties  log4j2.properties.dist  logstash.yml  startup.options
root@36134d90b5cd:/etc/logstash# ls
conf.d  jvm.options  log4j2.properties  log4j2.properties.dist  logstash.conf  logstash.yml  startup.options
root@36134d90b5cd:/etc/logstash# logstash -f logstash.conf


Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
03:00:36.516 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
03:00:36.520 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
03:00:36.552 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"fc54205f-077a-4c07-9ae4-85f689c970b1", :path=>"/var/lib/logstash/uuid"}
03:00:37.093 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.17.0.2:9200/]}}
03:00:37.094 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://172.17.0.2:9200/, :path=>"/"}
03:00:37.188 [[main]-pipeline-manager] WARN  logstash.outputs.elasticsearch - Restored connection to ES instance {:url=>"http://172.17.0.2:9200/"}
03:00:37.200 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
03:00:37.460 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Attempting to install template {:manage_template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fielddata"=>{"format"=>"disabled"}, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true, "ignore_above"=>256}}}}}, {"float_fields"=>{"match"=>"*", "match_mapping_type"=>"float", "mapping"=>{"type"=>"float", "doc_values"=>true}}}, {"double_fields"=>{"match"=>"*", "match_mapping_type"=>"double", "mapping"=>{"type"=>"double", "doc_values"=>true}}}, {"byte_fields"=>{"match"=>"*", "match_mapping_type"=>"byte", "mapping"=>{"type"=>"byte", "doc_values"=>true}}}, {"short_fields"=>{"match"=>"*", "match_mapping_type"=>"short", "mapping"=>{"type"=>"short", "doc_values"=>true}}}, {"integer_fields"=>{"match"=>"*", "match_mapping_type"=>"integer", "mapping"=>{"type"=>"integer", "doc_values"=>true}}}, {"long_fields"=>{"match"=>"*", "match_mapping_type"=>"long", "mapping"=>{"type"=>"long", "doc_values"=>true}}}, {"date_fields"=>{"match"=>"*", "match_mapping_type"=>"date", "mapping"=>{"type"=>"date", "doc_values"=>true}}}, {"geo_point_fields"=>{"match"=>"*", "match_mapping_type"=>"geo_point", "mapping"=>{"type"=>"geo_point", "doc_values"=>true}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "doc_values"=>true}, "@version"=>{"type"=>"string", "index"=>"not_analyzed", "doc_values"=>true}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip", "doc_values"=>true}, "location"=>{"type"=>"geo_point", "doc_values"=>true}, 
"latitude"=>{"type"=>"float", "doc_values"=>true}, "longitude"=>{"type"=>"float", "doc_values"=>true}}}}}}}}
03:00:37.477 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - Installing elasticsearch template to _template/logstash
03:00:37.626 [[main]-pipeline-manager] INFO  logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://172.17.0.2:9200"]}
03:00:37.629 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
03:00:37.648 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started
The stdin plugin is now waiting for input:
03:00:37.814 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
{"@timestamp":"2019-02-16T03:00:37.715Z","@version":"1","host":"36134d90b5cd","message":""}
{"@timestamp":"2019-02-16T03:00:37.681Z","@version":"1","host":"36134d90b5cd","message":""}
helloworld
{"@timestamp":"2019-02-16T03:00:53.619Z","@version":"1","host":"36134d90b5cd","message":"helloworld"}
what's up?
{"@timestamp":"2019-02-16T03:01:41.383Z","@version":"1","host":"36134d90b5cd","message":"what's up?"}

The helloworld and what's up? lines in the example were fed in through the stdin input. No errors were reported, which shows logstash is running correctly.
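Each line logstash printed is a JSON event with @timestamp, @version, host, and message fields. A quick sanity check that such a line parses as expected, using one event copied verbatim from the output above:

```shell
# one event line exactly as the json_lines codec printed it above
line='{"@timestamp":"2019-02-16T03:00:53.619Z","@version":"1","host":"36134d90b5cd","message":"helloworld"}'

# confirm the line is valid JSON and extract the message field
echo "$line" | python3 -c "import json, sys; print(json.load(sys.stdin)['message'])"
# prints: helloworld
```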

Setting up Elasticsearch

Prepare the image

docker search elasticsearch // output omitted
docker pull elasticsearch:2.3.5 // output omitted (the 2.3.5 tag matches the Dockerfile below)

Inspecting a running elasticsearch purely over raw HTTP is a bit tedious, so, following tutorials online, a custom Dockerfile is created that installs the head plugin, making it easy to operate elasticsearch from a web UI.

Dockerfile

FROM elasticsearch:2.3.5
 
RUN /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
 
EXPOSE 9200

Build the image

# cd into the directory containing this Dockerfile, then build the image
➜  elasticsearch docker build -t es_test .

Check the built image

➜  elasticsearch ls
Dockerfile        elasticsearch.yml operate.py
➜  elasticsearch docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
es_test             latest              43cf30ef8591        2 days ago          349MB
redis               latest              0f55cf3661e9        9 days ago          95MB
logstash            5.5.2               98f8400d2944        17 months ago       724MB
elasticsearch       2.3.5               1c3e7681c53c        2 years ago         346MB
➜  elasticsearch

Run elasticsearch

➜  elasticsearch docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
➜  elasticsearch docker run -d -p 9200:9200 --name="my_elasticsearch" 43cf30ef8591

687bf289bb43d0a037293a2f308b59ce50f5fb475f0b50dd075a0f304be789a3
➜  elasticsearch docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
687bf289bb43        43cf30ef8591        "/docker-entrypoint.…"   18 seconds ago      Up 17 seconds       0.0.0.0:9200->9200/tcp, 9300/tcp   my_elasticsearch
➜  elasticsearch docker inspect 687bf289bb43 | grep IP
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
                    "IPAMConfig": null,
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
➜  elasticsearch

The -p flag in the docker run command maps host port 9200 to the container's port 9200, so the service's runtime information can be viewed on port 9200 of the host.


(Figure: response on host port 9200)

Since the head plugin was installed in the image, elasticsearch's status can now be checked through head.


(Figure: cluster status shown in the elasticsearch head plugin)

Viewing Logstash data in Elasticsearch

Before checking, enter some lines in the terminal where logstash was configured earlier, to build up some source data.


(Figure: entering some sample data)

Then check in elasticsearch whether the corresponding data is there.


(Figure: the corresponding records shown in elasticsearch)

At this point, the elasticsearch service is also up and running.

Setting up Kibana

docker search kibana
docker pull nshou/elasticsearch-kibana
Using default tag: latest
latest: Pulling from nshou/elasticsearch-kibana
8e3ba11ec2a2: Pull complete
311ad0da4533: Pull complete
391a6a6b3651: Pull complete
b80b8b42a95a: Pull complete
0ae9073eaa12: Pull complete
Digest: sha256:e504d283be8cd13f9e1d1ced9a67a140b40a10f5b39f4dde4010dbebcbdd6da0
Status: Downloaded newer image for nshou/elasticsearch-kibana:latest

Run it

➜  kibana docker images
REPOSITORY                   TAG                 IMAGE ID            CREATED             SIZE
es_test                      latest              43cf30ef8591        2 days ago          349MB
redis                        latest              0f55cf3661e9        9 days ago          95MB
nshou/elasticsearch-kibana   latest              1509f8ccdbf3        2 weeks ago         383MB
logstash                     5.5.2               98f8400d2944        17 months ago       724MB
elasticsearch                2.3.5               1c3e7681c53c        2 years ago         346MB
➜  kibana docker run --name my_kibana -p 5601:5601 -d -e ELASTICSEARCH_URL=http://172.17.0.2:9200 1509f8ccdbf3
e0e7eab877ea3a1e603efbf041038f7e3c36e2b21e3e1934bf7e7d39606b8bac
➜  kibana docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                              NAMES
e0e7eab877ea        1509f8ccdbf3        "/bin/sh -c 'sh elas…"   4 seconds ago       Up 4 seconds        0.0.0.0:5601->5601/tcp, 9200/tcp   my_kibana
36134d90b5cd        98f8400d2944        "/docker-entrypoint.…"   33 minutes ago      Up 33 minutes                                          priceless_feistel
687bf289bb43        43cf30ef8591        "/docker-entrypoint.…"   37 minutes ago      Up 37 minutes       0.0.0.0:9200->9200/tcp, 9300/tcp   my_elasticsearch
➜  kibana

Note that ELASTICSEARCH_URL above must point to the internal IP of the elasticsearch container deployed earlier; otherwise kibana may fail to connect.

(Figure: kibana running)
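The three containers above were started by hand, with the elasticsearch IP hard-coded into the logstash config and the kibana environment. As a rough, untested outline, the same stack could be described in a single docker-compose.yml, where service names replace internal IPs (image tags are taken from this article; this is a sketch, not the setup actually used here):

```yaml
version: "2"
services:
  elasticsearch:
    image: es_test            # the head-enabled image built above
    ports:
      - "9200:9200"
  logstash:
    image: logstash:5.5.2
    depends_on:
      - elasticsearch
  kibana:
    image: nshou/elasticsearch-kibana
    environment:
      # the service name resolves via Docker's internal DNS
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

With compose, `docker inspect | grep IP` is no longer needed: containers on the same compose network reach each other by service name.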

Summary

At this point I'm really glad Docker exists. Everything could have been installed directly on the host, but that would pollute the host machine, and proper isolation would be impossible.

Whatever the idea, the workflow is the same: draw the service diagram, start the containers one by one, apply the corresponding configuration, and a demo service stack can be built with very little effort.

Thanks to Docker, and to technology.
