Installing a Native Apache HBase + Phoenix Cluster on CentOS

Prerequisites

  • Software versions: hadoop-2.7.7, hbase-2.1.5, jdk1.8.0_192, zookeeper-3.4.10, apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
  • At least three CentOS servers, with hostnames hadoop0001, hadoop0002, and hadoop0003
  • All software here is installed under /home/hadoop/app as the hadoop user
  • Configure hosts on every server
[hadoop@hadoop0001 ~]$ sudo vim /etc/hosts

The hosts file contains:

# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.2.1.102  hadoop0001
10.2.1.103  hadoop0002
10.2.1.104  hadoop0003
  • Passwordless SSH login (this step can be skipped, but then Hadoop asks for a password on every start)

Run the following commands on hadoop0001:

[hadoop@hadoop0001 ~]$ ssh-keygen -t rsa -P ""   # press Enter to accept the defaults
[hadoop@hadoop0001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop0001 ~]$ ssh-copy-id hadoop@hadoop0002
[hadoop@hadoop0001 ~]$ ssh-copy-id hadoop@hadoop0003

Run the following commands on hadoop0002:

[hadoop@hadoop0002 ~]$ ssh-keygen -t rsa -P ""   # press Enter to accept the defaults
[hadoop@hadoop0002 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop0002 ~]$ ssh-copy-id hadoop@hadoop0001
[hadoop@hadoop0002 ~]$ ssh-copy-id hadoop@hadoop0003

Run the following commands on hadoop0003:

[hadoop@hadoop0003 ~]$ ssh-keygen -t rsa -P ""   # press Enter to accept the defaults
[hadoop@hadoop0003 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop0003 ~]$ ssh-copy-id hadoop@hadoop0001
[hadoop@hadoop0003 ~]$ ssh-copy-id hadoop@hadoop0002

Verify passwordless login

[hadoop@hadoop0001 ~]$ ssh localhost
Last login: Fri Jan  4 13:45:54 2019   # logging in without a password prompt means it worked
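The key setup above can be scripted; here is a minimal, hedged sketch. The scratch directory keeps the demo away from your real ~/.ssh, and the commented ssh-copy-id lines are what you would run on an actual cluster (hosts from this guide):

```shell
# Generate a key pair non-interactively into a scratch directory,
# then self-authorize it -- the same thing the first two commands above do.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$tmpdir/id_rsa" -q
cat "$tmpdir/id_rsa.pub" >> "$tmpdir/authorized_keys"
chmod 600 "$tmpdir/authorized_keys"    # authorized_keys must not be group/world-writable
# On a real node, push the key to the peers:
#   ssh-copy-id -i "$tmpdir/id_rsa.pub" hadoop@hadoop0002
#   ssh-copy-id -i "$tmpdir/id_rsa.pub" hadoop@hadoop0003
```

ssh-copy-id appends the public key to the remote user's authorized_keys and fixes the permissions, which is why it replaces the broken `cat >> user@host:path` form.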

Configure the JDK environment variables:

# in the user's home directory
[hadoop@hadoop0001 ~]$ vim .bashrc

Add the following:

export JAVA_HOME=/home/hadoop/app/jdk1.8.0_192
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$HOME/bin:$HOME/.local/bin:$PATH

Finally, reload the file so the variables take effect:

# in the user's home directory
[hadoop@hadoop0001 ~]$ . .bashrc

Verify the JDK:

java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)

Copy the JDK from hadoop0001 to the other servers

[hadoop@hadoop0001 app]$ scp -r jdk1.8.0_192/ hadoop@hadoop0002:~/app/jdk1.8.0_192/
[hadoop@hadoop0001 app]$ scp -r jdk1.8.0_192/ hadoop@hadoop0003:~/app/jdk1.8.0_192/
[hadoop@hadoop0001 ~]$ scp ~/.bashrc hadoop@hadoop0002:~/
[hadoop@hadoop0001 ~]$ scp ~/.bashrc hadoop@hadoop0003:~/
  • Set up NTP
    Install ntp on every server
[hadoop@hadoop0001 ~]$ sudo yum install -y ntp

Configure ntp on hadoop0001

[hadoop@hadoop0001 ~]$ sudo vim /etc/ntp.conf

Add the following configuration:

restrict 10.2.1.0 mask 255.255.255.0 nomodify notrap
logfile /var/log/ntpd.log
server ntp1.aliyun.com
server ntp2.aliyun.com
server ntp3.aliyun.com
server 127.0.0.1
fudge 127.0.0.1 stratum 10

Full configuration file (ntp.conf):

# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

logfile /var/log/ntpd.log

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict 10.2.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp1.aliyun.com
server ntp2.aliyun.com
server ntp3.aliyun.com

server 127.0.0.1
fudge 127.0.0.1 stratum 10

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor

For public time servers, see: https://www.pool.ntp.org/zone/asia

Synchronize the time once manually:

[hadoop@hadoop0001 ~]$ sudo ntpdate -u ntp1.aliyun.com
16 Jul 16:46:39 ntpdate[12700]: adjust time server 120.25.115.20 offset -0.002546 sec

Start the time service:

[hadoop@hadoop0001 ~]$ sudo systemctl start ntpd

Enable the time service at boot:

[hadoop@hadoop0001 ~]$ sudo systemctl enable ntpd

Configure the NTP client on hadoop0002 and hadoop0003
Add the following line to /etc/ntp.conf:

server hadoop0001

Check whether NTP has synchronized
The following output means it has not synchronized yet:

[root@hadoop0002 ~]# ntpstat 
unsynchronised
  time server re-starting
   polling server every 8 s

The following output means it has synchronized:

[root@hadoop0001 ~]# ntpstat
synchronised to NTP server (120.25.115.20) at stratum 3 
   time correct to within 976 ms
   polling server every 64 s

Note: the initial synchronization can take around 10 minutes

Hadoop Installation

Download Hadoop

wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

Extract Hadoop

tar -zxvf hadoop-2.7.7.tar.gz

Configure hadoop-env.sh

# set JAVA_HOME explicitly; the inherited ${JAVA_HOME} is not available when daemons are started over ssh
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_192
# size the heap to your workload
export HADOOP_HEAPSIZE=1024

Configure mapred-env.sh

# set JAVA_HOME explicitly here as well
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_192

Configure yarn-env.sh

# size to your workload
JAVA_HEAP_MAX=-Xmx512m
YARN_HEAPSIZE=1024

Configure core-site.xml

<!-- HDFS NameNode URI -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop0001:8020</value>
  </property>
  <!-- Hadoop temporary data directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/app/hadoop-2.7.7/data</value>
  </property>
  <!-- trash retention, in minutes (14400 minutes = 10 days) -->
  <property>
    <name>fs.trash.interval</name>
    <value>14400</value>
  </property>

Configure yarn-site.xml

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop0001</value>
    <description>The host running YARN's ResourceManager</description>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Enable log aggregation</description>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>How reducers fetch map output</description>
  </property>

  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
    <description>Retain aggregated logs for 7 days (604800 seconds)</description>
  </property>

  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>15000</value>
    <description>Memory available per node, in MB</description>
  </property>

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>100</value>
    <description>Minimum memory per container request; the default is 1024 MB</description>
  </property>

  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>15000</value>
    <description>Maximum memory per container request; the default is 8192 MB</description>
  </property>

  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Total virtual CPU cores available to the NodeManager</description>
  </property>

  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>Minimum vcores per request; with 1, each MapReduce task requests at least one virtual CPU</description>
  </property>

  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>4</value>
    <description>Maximum vcores per request; with 4, a MapReduce task can request up to four virtual CPUs</description>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>

  <property>
    <name>yarn.scheduler.fair.preemption</name>
    <value>true</value>
  </property>

  <property>
    <name>yarn.scheduler.fair.preemption.cluster-utilization-threshold</name>
    <value>0.8</value>
  </property>
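As a quick sanity check on the memory figures above (all values taken from this yarn-site.xml; the arithmetic is illustrative only and not part of the installation):

```shell
node_mem_mb=15000    # yarn.nodemanager.resource.memory-mb
min_alloc_mb=100     # yarn.scheduler.minimum-allocation-mb
max_alloc_mb=15000   # yarn.scheduler.maximum-allocation-mb

# bounds on how many containers a single node can host
echo "at the minimum allocation: $(( node_mem_mb / min_alloc_mb )) containers"  # 150
echo "at the maximum allocation: $(( node_mem_mb / max_alloc_mb )) container"   # 1
```

With a 100 MB minimum, a node could in principle run 150 tiny containers, while a single maximum-size container consumes the whole node.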

Configure hdfs-site.xml

<!-- HDFS replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <!-- where the NameNode stores the fsimage
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/app/hadoop-2.7.7/data/hdfs/name</value>
  </property>
  -->

  <!-- where DataNodes store HDFS blocks
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/app/hadoop-2.7.7/data/hdfs/data</value>
  </property>
  -->

  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop0001:50090</value>
  </property>

  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop0001:50070</value>
  </property>

  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

Configure mapred-site.xml

<!-- JobHistory server address -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop0001:10020</value>
  </property>
  <!-- JobHistory server web UI address -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop0001:19888</value>
  </property>
  <!-- run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>

Configure slaves (/home/hadoop/app/hadoop-2.7.7/etc/hadoop/slaves)

hadoop0001
hadoop0002
hadoop0003

Configure the Hadoop environment variables

In .bashrc in the user's home directory:

# added by Hadoop installer
export HADOOP_HOME=/home/hadoop/app/hadoop-2.7.7
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib

Apply the changes:

. .bashrc

Copy the configured Hadoop to the other servers

[hadoop@hadoop0001 app]$ scp -r hadoop-2.7.7/ hadoop@hadoop0002:~/app/hadoop-2.7.7
[hadoop@hadoop0001 app]$ scp -r hadoop-2.7.7/ hadoop@hadoop0003:~/app/hadoop-2.7.7
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0002:~/
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0003:~/

Format the NameNode on the master

hdfs namenode -format

Start the Hadoop cluster

# the cluster is up when the master shows NameNode and SecondaryNameNode and the other machines show DataNode
start-all.sh

Stop the cluster

stop-all.sh

ZooKeeper Distributed Cluster Setup

Download ZooKeeper

wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz

Extract ZooKeeper

tar -zxvf zookeeper-3.4.10.tar.gz

Configure zoo.cfg

cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

The configuration is as follows:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=20
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=10
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/app/zookeeper-3.4.10/data
dataLogDir=/home/hadoop/app/zookeeper-3.4.10/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=hadoop0001:2888:3888
server.2=hadoop0002:2888:3888
server.3=hadoop0003:2888:3888
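The tick-based limits in this zoo.cfg translate into wall-clock timeouts; a quick check with the values above (illustrative arithmetic only):

```shell
tick_ms=2000     # tickTime, in milliseconds
init_limit=20    # initLimit, in ticks
sync_limit=10    # syncLimit, in ticks

# a follower gets initLimit * tickTime to connect to and sync with the leader
echo "initial sync window: $(( tick_ms * init_limit / 1000 )) s"   # 40 s
# and syncLimit * tickTime to acknowledge a request before it is considered out of sync
echo "request/ack window:  $(( tick_ms * sync_limit / 1000 )) s"   # 20 s
```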

Create data and logs directories under the ZooKeeper root

mkdir data
mkdir logs

Create myid under the data directory

vim myid

with the content:

1

Configure the ZooKeeper environment variables

In .bashrc in the user's home directory:

# added by zookeeper installer
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.10
export CLASSPATH=$CLASSPATH:$ZOOKEEPER_HOME/lib
export PATH=$PATH:$ZOOKEEPER_HOME/bin

Copy the configured ZooKeeper to the other machines

[hadoop@hadoop0001 app]$ scp -r zookeeper-3.4.10/ hadoop@hadoop0002:~/app/zookeeper-3.4.10
[hadoop@hadoop0001 app]$ scp -r zookeeper-3.4.10/ hadoop@hadoop0003:~/app/zookeeper-3.4.10
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0002:~/
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0003:~/

Change myid on the other machines

Set myid to 2 and 3 on the other nodes so that every machine's myid is unique within the cluster
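The per-node myid assignment can be scripted so it always mirrors the server.N lines in zoo.cfg. A minimal local sketch — the /tmp paths stand in for each node's real dataDir, and on an actual cluster you would run the echo on each host (for example over ssh):

```shell
# write the id that matches each server.N entry into that node's myid file
for entry in hadoop0001:1 hadoop0002:2 hadoop0003:3; do
  host=${entry%%:*}                      # e.g. hadoop0002
  id=${entry##*:}                        # e.g. 2
  datadir="/tmp/zkdemo/$host/data"       # stand-in for $ZOOKEEPER_HOME/data
  mkdir -p "$datadir"
  echo "$id" > "$datadir/myid"
done
cat /tmp/zkdemo/hadoop0002/data/myid     # prints 2
```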

Start the ZooKeeper service

Run on every machine:

zkServer.sh start

Check ZooKeeper status

zkServer.sh status

HBase HA Distributed Cluster Setup

Download HBase

wget http://mirror.bit.edu.cn/apache/hbase/2.1.5/hbase-2.1.5-bin.tar.gz

Extract HBase

tar -zxvf hbase-2.1.5-bin.tar.gz

Configure hbase-site.xml

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop0001:8020/hbase</value>
  </property>

  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>

  <!-- new since 0.98; earlier versions had no .port property and the default port was 60000 -->
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop0001,hadoop0002,hadoop0003</value>
  </property>

  <property>
    <name>hbase.regionserver.restart.on.zk.expire</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.coprocessor.abortonerror</name>
    <value>false</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/app/zookeeper-3.4.10/data</value>
  </property>

  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
    <description>
        Controls whether HBase will check for stream capabilities (hflush/hsync).

        Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
        with the 'file://' scheme, but be mindful of the NOTE below.

        WARNING: Setting this to false blinds you to potential data loss and
        inconsistent system state in the event of process and/or node failures. If
        HBase is complaining of an inability to use hsync or hflush it's most
        likely not a false positive.
    </description>
  </property>

Configure regionservers

Add the following to the regionservers file in the conf directory under the HBase root:

# hostnames, matching the hosts entries
hadoop0001
hadoop0002
hadoop0003

Configure the HBase environment variables

In .bashrc in the user's home directory:

# added by hbase installer
export HBASE_HOME=/home/hadoop/app/hbase-2.1.5
export CLASSPATH=$CLASSPATH:$HBASE_HOME/lib
export PATH=$PATH:$HBASE_HOME/bin

Copy the configured HBase to the other machines

[hadoop@hadoop0001 app]$ scp -r hbase-2.1.5/ hadoop@hadoop0002:~/app/hbase-2.1.5
[hadoop@hadoop0001 app]$ scp -r hbase-2.1.5/ hadoop@hadoop0003:~/app/hbase-2.1.5
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0002:~/
[hadoop@hadoop0001 ~]$ scp .bashrc hadoop@hadoop0003:~/

Configure backup-masters (standby master nodes)

Add the following to the backup-masters file in the conf directory under the HBase root:

# standby masters; more than one may be listed
hadoop0002

Start the HBase cluster

start-hbase.sh

Note: the master node should show HMaster and HRegionServer (the latter may be absent there, which is normal), the backup node should show HMaster and HRegionServer, and the remaining nodes should show HRegionServer; this means the HBase cluster is up.

Stop the HBase cluster

stop-hbase.sh

Phoenix Cluster Installation

Download Phoenix

wget http://mirror.bit.edu.cn/apache/phoenix/apache-phoenix-5.0.0-HBase-2.0/bin/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz

Extract Phoenix

tar -zxvf apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz

Copy the following jars into the lib directory under the HBase root on every node

[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp phoenix-5.0.0-HBase-2.0-queryserver.jar ~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-5.0.0-HBase-2.0-queryserver.jar hadoop@hadoop0002:~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-5.0.0-HBase-2.0-queryserver.jar hadoop@hadoop0003:~/app/hbase-2.1.5/lib/

[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp phoenix-5.0.0-HBase-2.0-server.jar ~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-5.0.0-HBase-2.0-server.jar hadoop@hadoop0002:~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-5.0.0-HBase-2.0-server.jar hadoop@hadoop0003:~/app/hbase-2.1.5/lib/

[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ cp phoenix-core-5.0.0-HBase-2.0.jar ~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-core-5.0.0-HBase-2.0.jar hadoop@hadoop0002:~/app/hbase-2.1.5/lib/
[hadoop@hadoop0001 apache-phoenix-5.0.0-HBase-2.0-bin]$ scp phoenix-core-5.0.0-HBase-2.0.jar hadoop@hadoop0003:~/app/hbase-2.1.5/lib/

Configure the Phoenix environment variables (no need to copy these to the other nodes)

# added by phoenix installer
export PHOENIX_HOME=/home/hadoop/app/apache-phoenix-5.0.0-HBase-2.0-bin
export CLASSPATH=$CLASSPATH:$PHOENIX_HOME
export PATH=$PATH:$PHOENIX_HOME/bin

Start the Phoenix query server

queryserver.py start

Stop the Phoenix query server

queryserver.py stop

Connect to the Phoenix query server

sqlline-thin.py hadoop0001:8765

Client JDBC connection (jdbcUrl)

jdbc:phoenix:thin:url=http://10.2.1.102:8765?doAs=alice
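The thin-client URL follows a fixed pattern, so it can be assembled from the pieces used throughout this guide. A small sketch — the helper name build_thin_url is illustrative, not part of Phoenix:

```shell
# build a Phoenix thin-driver JDBC URL: host, optional port (default 8765),
# and an optional doAs user appended as a query parameter
build_thin_url() {
  local host=$1 port=${2:-8765} user=$3
  local url="jdbc:phoenix:thin:url=http://${host}:${port}"
  if [ -n "$user" ]; then
    url="${url}?doAs=${user}"
  fi
  echo "$url"
}

build_thin_url 10.2.1.102 8765 alice
# prints: jdbc:phoenix:thin:url=http://10.2.1.102:8765?doAs=alice
```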