1. Starting and Stopping OushuDB
There are two ways to start OushuDB. The first is the "hawq start cluster" command, which starts the whole cluster, master and segments included. Which segments get started is determined by the nodes listed in /hawq-install-path/etc/slaves.
source /usr/local/hawq/greenplum_path.sh  # set up the OushuDB environment variables
hawq start cluster                        # start the whole OushuDB cluster
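The slaves file mentioned above is a plain-text list of segment hostnames, one per line. A minimal sketch of what such a file could contain (the hostnames and the /tmp path here are made up for illustration; the real file lives at /hawq-install-path/etc/slaves):

```shell
# Hypothetical slaves file contents: one segment host per line
cat > /tmp/slaves <<'EOF'
segment-host1
segment-host2
segment-host3
EOF

# Count the hosts; "hawq start cluster" would start one segment on each of them
wc -l < /tmp/slaves
```

Adding or removing a hostname in this file changes which hosts the cluster-wide commands operate on.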
The second way is to start the OushuDB master and segments separately. Because the OushuDB master and segments are decoupled, starting them independently works fine.
hawq start master   # start the master, i.e. the master on the local host
hawq start segment  # start the segment, i.e. the segment on the local host
Restarting or stopping OushuDB likewise works either way:
# Option 1
hawq restart cluster  # restart the OushuDB cluster
hawq stop cluster     # stop the OushuDB cluster
# Option 2
hawq restart master   # restart the OushuDB master on this host
hawq restart segment  # restart the OushuDB segment on this host
hawq stop master      # stop the OushuDB master on this host
hawq stop segment     # stop the OushuDB segment on this host
Starting and Stopping Magma
OushuDB 4.0 introduced starting and stopping the Magma service on its own. The commands are as follows:
# Option 1: start/stop the OushuDB 4.0 cluster together with the Magma service
# (only the "hawq init|start|stop cluster" commands accept the --with_magma option)
hawq init cluster --with_magma  # with --with_magma, the Magma service is started along with the OushuDB cluster; not supported in 3.x
# Option 2: start/stop the Magma service on its own
magma start|stop|restart cluster
magma start|stop|restart node
For detailed usage of the OushuDB hawq command, run "hawq --help".
changlei:build ChangLei$ hawq --help
usage: hawq <command> [<object>] [options]
[--version]
The most commonly used hawq "commands" are:
start Start hawq service.
stop Stop hawq service.
init Init hawq service.
restart Restart hawq service.
activate Activate hawq standby master as master.
version Show hawq version information.
config Set hawq GUC values.
state Show hawq cluster status.
filespace Create hawq filespaces.
extract Extract table metadata into a YAML formatted file.
load Load data into hawq.
scp Copies files between multiple hosts at once.
ssh Provides ssh access to multiple hosts at once.
ssh-exkeys Exchanges SSH public keys between hosts.
check Verifies and validates HAWQ settings.
checkperf Verifies the baseline hardware performance of hosts.
register Register parquet files generated by other systems into the corresponding table in HAWQ.
2. Creating Databases and Tables
This section uses psql, OushuDB's command-line tool, to show how to create the basic database objects: databases and tables. Because OushuDB is PostgreSQL-compatible, using OushuDB is essentially the same as using PostgreSQL; wherever the OushuDB documentation is unclear, you can also consult the PostgreSQL documentation to learn more about OushuDB.
The following command connects with psql to postgres, the database that a default OushuDB installation provides, then creates a new database test and creates a table foo inside it.
changlei:build ChangLei$ psql -d postgres
psql (8.2.15)
Type "help" for help.
postgres=# create database test; # create the database test
CREATE DATABASE
postgres=# \c test # connect to the test database
You are now connected to database "test" as user "ChangLei".
test=# create table foo(id int, name varchar); # create the table foo
CREATE TABLE
test=# \d # list all tables in the current database test
List of relations
Schema | Name | Type | Owner | Storage
--------+------+-------+----------+-------------
public | foo | table | ChangLei | append only
(1 row)
test=# insert into foo values(1, 'hawq'), (2, 'hdfs'); # insert two rows into foo
INSERT 0 2
test=# select * from foo; # select data from the table foo
id | name
----+------
1 | hawq
2 | hdfs
(2 rows)
To delete a table or a database, use a drop statement.
test=# drop table foo;
DROP TABLE
test=# \d
No relations found.
test=# drop database test; # fails because test is the currently connected database
ERROR: cannot drop the currently open database
test=# \c postgres # connect to postgres first, then drop the test database
You are now connected to database "postgres" as user "ChangLei".
postgres=# drop database test;
DROP DATABASE
3天试、查看查詢執(zhí)行情況
The \timing meta-command prints the execution time of each query.
test=# \timing on
Timing is on.
test=# select * from foo; # SQL statements now report their execution time
id | name
----+------
1 | hawq
2 | hdfs
(2 rows)
Time: 16.369 ms
test=# \timing off # turn timing output off
Timing is off.
The explain statement shows the query plan.
test=# explain select count(*) from foo;
QUERY PLAN
----------------------------------------------------------------------------------
Aggregate (cost=1.07..1.08 rows=1 width=8)
-> Gather Motion 1:1 (slice1; segments: 1) (cost=1.03..1.06 rows=1 width=8)
-> Aggregate (cost=1.03..1.04 rows=1 width=8)
-> Append-only Scan on foo (cost=0.00..1.02 rows=2 width=0)
Settings: default_hash_table_bucket_number=6
(5 rows)
explain analyze shows how the query actually behaved at run time, including when each operator started and finished executing, which helps locate the query's bottleneck and optimize it accordingly. How to interpret query plans and explain analyze output is covered in the chapter on query plans and query execution. For a single query there can be a very large number of possible plans, and producing an optimized plan is the job of the query optimizer. Since a query's execution time depends heavily on its plan, being familiar with query plans, and with how a specific query executes, goes a long way toward query optimization.
test=# explain analyze select count(*) from foo;
                                                                    QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1.07..1.08 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg-1:changlei/seg-1:changlei) 1/1 rows with 5.944/5.944 ms to end, start offset by 6.568/6.568 ms.
-> Gather Motion 1:1 (slice1; segments: 1) (cost=1.03..1.06 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers at destination. Max/Last(seg-1:changlei/seg-1:changlei) 1/1 rows with 5.941/5.941 ms to first row, 5.942/5.942 ms to end, start offset by 6.569/6.569 ms.
-> Aggregate (cost=1.03..1.04 rows=1 width=8)
Rows out: Avg 1.0 rows x 1 workers. Max/Last(seg0:changlei/seg0:changlei) 1/1 rows with 5.035/5.035 ms to first row, 5.036/5.036 ms to end, start offset by 7.396/7.396 ms.
-> Append-only Scan on foo (cost=0.00..1.02 rows=2 width=0)
Rows out: Avg 2.0 rows x 1 workers. Max/Last(seg0:changlei/seg0:changlei) 2/2 rows with 5.011/5.011 ms to first row, 5.032/5.032 ms to end, start offset by 7.397/7.397 ms.
Slice statistics:
(slice0) Executor memory: 223K bytes.
(slice1) Executor memory: 279K bytes (seg0:changlei).
Statement statistics:
Memory used: 262144K bytes
Settings: default_hash_table_bucket_number=6
Dispatcher statistics:
executors used(total/cached/new connection): (1/1/0); dispatcher time(total/connection/dispatch data): (1.462 ms/0.000 ms/0.029 ms).
dispatch data time(max/min/avg): (0.029 ms/0.029 ms/0.029 ms); consume executor data time(max/min/avg): (0.012 ms/0.012 ms/0.012 ms); free executor time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
Data locality statistics:
data locality ratio: 1.000; virtual segment number: 1; different host number: 1; virtual segment number per host(avg/min/max): (1/1/1); segment size(avg/min/max): (56.000 B/56 B/56 B); segment size with penalty(avg/min/max): (56.000 B/56 B/56 B); continuity(avg/min/max): (1.000/1.000/1.000); DFS metadatacache: 0.049 ms; resource allocation: 0.612 ms; datalocality calculation: 0.085 ms.
Total runtime: 13.398 ms
(20 rows)
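When tuning, it can help to save EXPLAIN ANALYZE output to a file and pull out the total runtime, so timings can be compared across runs. A minimal shell sketch (the /tmp path is arbitrary, and the sample contents are abbreviated from the output above):

```shell
# Save an (abbreviated) EXPLAIN ANALYZE result; in practice this would be
# captured with psql, e.g.: psql -d test -c 'explain analyze ...' > /tmp/plan.txt
cat > /tmp/plan.txt <<'EOF'
 Aggregate  (cost=1.07..1.08 rows=1 width=8)
 Total runtime: 13.398 ms
(20 rows)
EOF

# Extract just the total-runtime line for easy comparison between runs
grep -o 'Total runtime: [0-9.]* ms' /tmp/plan.txt
```

Running the same extraction over plans saved before and after a change makes regressions easy to spot.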