The test environment consists of three virtual machines, each with 4 GB of RAM.
Software used:
MySQL, hadoop-2.7.1, kafka_2.11-0.9.0.1, imply-2.2.3 (Imply's Druid distribution, which bundles dsql for MySQL-style SQL queries and the Pivot data-visualization component)
Java 8 or better
Node.js 4.x or better
Master Server (master)
1527 (Derby; not needed if you are using a separate metadata store like MySQL or PostgreSQL)
2181 (ZooKeeper; not needed if you are using a separate ZooKeeper cluster)
8081 (Druid Coordinator)
8090 (Druid Overlord)
# Start the master services (this supervise config also starts the bundled ZooKeeper)
bin/supervise -c conf/supervise/master-with-zk.conf
Query Server (slave1)
8082 (Druid Broker)
9095 (Pivot)
# Start the query services
bin/supervise -c conf/supervise/query.conf
Data Server (slave2)
8083 (Druid Historical)
8091 (Druid Middle Manager)
8100–8199 (Druid Task JVMs, spawned by Middle Managers)
8200 (Tranquility Server; optional)
# Start the data services
bin/supervise -c conf/supervise/data.conf
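The port assignments above can be sanity-checked once the servers are up. A minimal sketch, assuming the host names master/slave1/slave2 from this layout and that `nc` is installed:

```shell
# Service ports from the layout above, keyed by host
SERVICES="master:8081 master:8090 slave1:8082 slave1:9095 slave2:8083 slave2:8091"
for s in $SERVICES; do
  host=${s%:*}
  port=${s#*:}
  # Uncomment on a machine that can reach the cluster; nc -z exits 0 if the port is open
  # nc -z -w 2 "$host" "$port" && echo "$s open" || echo "$s closed"
  echo "check $host:$port"
done
```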
tar -xzf imply-2.2.3.tar.gz
cd imply-2.2.3
[Global common configuration]
vim conf/druid/_common/common.runtime.properties
#
# Extensions
#
druid.extensions.directory=dist/druid/extensions
druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
druid.extensions.loadList=["druid-caffeine-cache","druid-lookups-cached-global","druid-histogram","druid-datasketches","mysql-metadata-storage","druid-hdfs-storage","druid-kafka-indexing-service"]
#
# Logging
#
# Log all runtime properties on startup. Disable to avoid logging sensitive properties:
druid.startup.logging.logProperties=true
#
# Zookeeper
#
druid.zk.service.host=192.168.31.162
druid.zk.paths.base=/druid
# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://192.168.31.162:3306/druid
druid.metadata.storage.connector.user=root
druid.metadata.storage.connector.password=root
#
# Deep storage
#
# For local disk (only viable in a cluster if this is a network mount):
# For HDFS:
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://master:9000/druid/segments
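With HDFS deep storage, the segments directory must exist and be writable by the user running Druid. A sketch using the path configured above (run the commented commands on a node with the Hadoop client):

```shell
# Deep-storage path as configured in common.runtime.properties
DEEP_STORAGE=hdfs://master:9000/druid/segments
# Create it once, then verify:
# hdfs dfs -mkdir -p "$DEEP_STORAGE"
# hdfs dfs -ls hdfs://master:9000/druid
echo "deep storage: $DEEP_STORAGE"
```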
Configure the Master server
Coordinator node:
vim conf/druid/coordinator/jvm.config
-server
-Xms500m
-Xmx500m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log
vim conf/druid/coordinator/runtime.properties
druid.service=druid/coordinator
druid.host=master
druid.port=8081
druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S
Overlord node:
vim conf/druid/overlord/jvm.config
-server
-Xms500m
-Xmx500m
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
vim conf/druid/overlord/runtime.properties
druid.service=druid/overlord
druid.host=192.168.31.162
druid.port=8090
druid.indexer.queue.startDelay=PT30S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
Configure the Query server
Query node:
vim conf/druid/broker/jvm.config
-server
-Xms1g
-Xmx1g
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:MaxDirectMemorySize=1g
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
vim conf/druid/broker/runtime.properties
druid.service=druid/broker
druid.host=slave1
druid.port=8082
# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing
# Query cache disabled -- push down caching and merging instead
druid.broker.cache.useCache=false
druid.broker.cache.populateCache=false
#druid.broker.cache.unCacheable=[]
# On-heap LRU cache size, in bytes
#druid.cache.sizeInBytes=60000000
# SQL
druid.sql.enable=true
# Query config
# How the Broker selects connections to Historical nodes: random or connectionCount
druid.broker.balancer.type=connectionCount
Pivot configuration
vim conf/pivot/config.yaml
# The port the Pivot server listens on.
port: 9095
# Pivot runtime directory
varDir: var/pivot
settingsLocation:
location: file
format: 'json-pretty'
initialSettings:
clusters:
- name: druid
type: druid
host: localhost:8082
Configure the Data server
Historical node:
vim conf/druid/historical/jvm.config
-server
-Xms1g
-Xmx1g
-XX:NewSize=256m
-XX:MaxNewSize=256m
-XX:MaxDirectMemorySize=4096m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
vim conf/druid/historical/runtime.properties
druid.service=druid/historical
druid.host=slave2
druid.port=8083
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7
druid.processing.tmpDir=var/druid/processing
# Segment storage
# Local segment-cache path and its maximum size, in bytes
druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
# Maximum storage size; the Coordinator uses this value only when deciding which segments to load here
druid.server.maxSize=130000000000
# Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=2000000000
#Tier
# Custom tier name; defaults to _default_tier. Segments cannot be replicated between different tiers.
#druid.server.tier=hot
# Custom tier priority; defaults to 0, and higher values take precedence. Useful for splitting hot and cold tiers.
#druid.server.priority=10
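Both maxSize values above are plain byte counts; a quick arithmetic check of what 130000000000 bytes amounts to:

```shell
# druid.server.maxSize / segment-cache maxSize, in bytes
MAX_SIZE=130000000000
# Integer GiB (1 GiB = 1024^3 bytes)
echo "$(( MAX_SIZE / 1024 / 1024 / 1024 )) GiB"   # prints "121 GiB"
```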
MiddleManager configuration
vim conf/druid/middleManager/jvm.config
-server
-Xms64m
-Xmx64m
-XX:+UseConcMarkSweepGC
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
vim conf/druid/middleManager/runtime.properties
druid.service=druid/middlemanager
druid.host=slave2
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx1g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true
# HTTP server threads
druid.server.http.numThreads=40
# Processing threads and buffers
druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.1"]
Start the services on each of the three nodes
master:
cd /opt/zookeeper-3.4.6
bin/zkServer.sh start
cd /opt/kafka_2.11-0.9.0.1/
bin/kafka-server-start.sh config/server.properties &
# ZooKeeper is started separately above, so use the no-zk supervise config
nohup bin/supervise -c conf/supervise/master-no-zk.conf > master.log &
slave1:
nohup bin/supervise -c conf/supervise/query.conf > query.log &
slave2:
nohup bin/supervise -c conf/supervise/data.conf > data.log &
./bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic wikiticker
curl -XPOST -H'Content-Type: application/json' -d @quickstart/wikiticker-kafka-supervisor.json http://master:8090/druid/indexer/v1/supervisor
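Once the spec is posted, the Overlord's supervisor API can confirm that ingestion is running. A sketch, assuming the supervisor id matches the wikiticker-kafka datasource (run the commented curls where master:8090 is reachable):

```shell
OVERLORD=http://master:8090
# curl "$OVERLORD/druid/indexer/v1/supervisor"                          # list supervisor ids
# curl "$OVERLORD/druid/indexer/v1/supervisor/wikiticker-kafka/status"  # detailed status
echo "overlord: $OVERLORD"
```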
export KAFKA_OPTS="-Dfile.encoding=UTF-8"
/opt/kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wikiticker < /opt/imply-2.2.3/quickstart/wikiticker-2016-06-27-sampled.json
curl -O https://static.imply.io/quickstart/wikiticker-0.4.tar.gz
tar -xzf wikiticker-0.4.tar.gz
cd wikiticker-0.4
bin/wikiticker -J-Dfile.encoding=UTF-8 -out kafka -topic wikiticker
bin/dsql
dsql> SELECT FLOOR(__time TO DAY) AS "Day", SUM("count") AS Edits FROM "wikiticker-kafka" GROUP BY FLOOR(__time TO DAY);
┌──────────────────────────┬───────┐
│ Day │ Edits │
├──────────────────────────┼───────┤
│ 2016-06-27T00:00:00.000Z │ 24433 │
│ 2017-03-07T00:00:00.000Z │ 642 │
└──────────────────────────┴───────┘
Retrieved 2 rows in 0.04s.
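Since druid.sql.enable=true is set on the Broker, the same query can also be issued over HTTP. A sketch, assuming the Broker runs on slave1:8082 as laid out above:

```shell
BROKER=http://slave1:8082
PAYLOAD='{"query":"SELECT FLOOR(__time TO DAY) AS \"Day\", SUM(\"count\") AS Edits FROM \"wikiticker-kafka\" GROUP BY FLOOR(__time TO DAY)"}'
# Run where the broker is reachable:
# curl -XPOST -H'Content-Type: application/json' -d "$PAYLOAD" "$BROKER/druid/v2/sql"
echo "$PAYLOAD"
```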
Notes:
1. Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[268,435,456], memoryNeeded[1,342,177,280] = druid.processing.buffer.sizeBytes[268,435,456] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[2] + 1)
The direct (off-heap) memory must be sized according to the configured number of processing threads and the number and size of the merge buffers.
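The numbers in that error message follow a simple formula; a quick sketch that reproduces them:

```shell
# memoryNeeded = sizeBytes * (numMergeBuffers + numThreads + 1)
SIZE_BYTES=268435456   # druid.processing.buffer.sizeBytes
MERGE_BUFFERS=2        # druid.processing.numMergeBuffers
THREADS=2              # druid.processing.numThreads
NEEDED=$(( SIZE_BYTES * (MERGE_BUFFERS + THREADS + 1) ))
echo "need -XX:MaxDirectMemorySize >= $NEEDED bytes"   # 1342177280, matching the error
```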
2. Because the metadata is stored in MySQL, the database must be created with UTF-8 encoding:
CREATE DATABASE druid DEFAULT CHARACTER SET utf8;