Environment:
Linux: Ubuntu 14.04 LTS
JDK: jdk1.7.0_67
Hadoop: hadoop-2.0.0-cdh4.1.0.tar.gz
Impala: impala_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
Hadoop CDH download: http://archive.cloudera.com/cdh4/cdh/4/
Ubuntu Impala download:
http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala/pool/contrib/i/impala/
Tip: do not pick too new a Hadoop release; CDH4 is the reliable choice here. I previously tried apache-hadoop 2.7 and hadoop2.6-cdh5, and Impala failed to start with both because of a protobuf incompatibility, an error that took me several days to track down.
For convenience, everything in this tutorial is done as the root user.
1论寨、安裝hadoop
# apt-get install openssh-server
Set up passwordless SSH login:
# ssh-keygen -t rsa -P ""
# cat .ssh/id_rsa.pub >> .ssh/authorized_keys
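The key-generation and append steps can be rehearsed offline in a throwaway directory first (a sketch; the temp dir is made up here, and the real files live in ~/.ssh):

```shell
# Sketch: build an authorized_keys file the same way as above, but in a
# temp dir so the result can be inspected without touching ~/.ssh.
d=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$d/id_rsa"    # -N "" = empty passphrase
cat "$d/id_rsa.pub" >> "$d/authorized_keys"
chmod 700 "$d"
chmod 600 "$d/authorized_keys"   # sshd ignores group/world-writable key files
```

After the real steps, `ssh localhost` should log you in without prompting for a password.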
Download jdk-7u67-linux-x64.tar.gz, extract it, and configure the environment variables:
# tar -vzxf jdk-7u67-linux-x64.tar.gz
# mkdir /usr/java
# mv jdk1.7.0_67 /usr/java/
# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
# tar -xvzf hadoop-2.0.0-cdh4.1.0.tar.gz
# mv hadoop-2.0.0-cdh4.1.0 /usr/local/
# vi /etc/profile
export HADOOP_HOME=/usr/local/hadoop-2.0.0-cdh4.1.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_LIB=$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# source /etc/profile
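After `source /etc/profile`, it is worth confirming that the Hadoop and JDK bin directories ended up on PATH. A minimal sketch that re-creates the relevant exports locally (values mirror the exports above):

```shell
# Re-create the relevant exports and check how PATH is assembled.
JAVA_HOME=/usr/java/jdk1.7.0_67
HADOOP_HOME=/usr/local/hadoop-2.0.0-cdh4.1.0
PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
echo "${PATH%%:*}"   # first PATH entry: $HADOOP_HOME/bin
```

Once the real profile is sourced, `java -version` and `hadoop version` should resolve from these directories.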
2. Configure Hadoop (pseudo-distributed mode)
# cd /usr/local/hadoop-2.0.0-cdh4.1.0/etc/hadoop
# vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67
# vi core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name> <!-- temporary directory -->
<value>file:/root/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name> <!-- NameNode directory -->
<value>file:/root/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name> <!-- DataNode directory -->
<value>file:/root/hadoop/tmp/dfs/data</value>
</property>
</configuration>
# cd ~
# mkdir -p hadoop/tmp/dfs/name
# mkdir hadoop/tmp/dfs/data
Note: the user must own the hadoop-2.0.0-cdh4.1.0 directory, the NameNode directory, and the DataNode directory.
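The directory creation and ownership check can be sketched as follows (the base path is parameterized here; in this tutorial it is /root/hadoop/tmp/dfs, and HADOOP_DATA_DIR is a made-up variable name):

```shell
# Create the NameNode/DataNode dirs and verify the current user owns them.
base=${HADOOP_DATA_DIR:-$(mktemp -d)}   # tutorial value: /root/hadoop/tmp/dfs
mkdir -p "$base/name" "$base/data"
stat -c '%U %n' "$base/name" "$base/data"
# If they are owned by someone else: chown -R "$(id -un)" "$base"
```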
3. Start Hadoop
Format the NameNode:
# hadoop namenode -format
# start-all.sh    (this script is in $HADOOP_HOME/sbin)
Test:
# hadoop fs -ls / # list the HDFS root directory
# hadoop fs -mkdir /user # create the directory /user in HDFS
# hadoop fs -put a.out /user # upload the file a.out to /user in HDFS
# hadoop fs -get /user/a.out # download a.out to the local filesystem
Stop Hadoop:
# stop-all.sh
4. Install Impala
Add the Cloudera APT sources:
# vi /etc/apt/sources.list.d/cloudera.list
deb [arch=amd64] http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
deb-src http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
deb [arch=amd64] http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala precise-impala1 contrib
deb-src http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala precise-impala1 contrib
# apt-get update
# apt-get install bigtop-utils
Downloading Impala via apt-get is very slow; you can instead fetch the packages directly from
http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala/pool/contrib/i/impala/
and install them with dpkg.
# dpkg -i impala_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
# dpkg -i impala-server_1.4.0-1.impala1.4.0.p0.7-precise-impala1.4.0_all.deb
# dpkg -i impala-state-store_1.4.0-1.impala1.4.0.p0.7-precise-impala1.4.0_all.deb
# dpkg -i impala-catalog_1.4.0-1.impala1.4.0.p0.7-precise-impala1.4.0_all.deb
# apt-get install python-setuptools
If this step fails, fix the broken dependencies as the error suggests (apt-get -f install).
# dpkg -i impala-shell_1.4.0-1.impala1.4.0.p0.7-precise-impala1.4.0_all.deb
Impala is now installed.
5. Configure Impala
# vi /etc/hosts
127.0.0.1 localhost
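Before going further it is worth confirming that localhost actually resolves through the system resolver, since Ubuntu's default 127.0.1.1 hostname entry is a common source of bind/connect surprises with Impala (a small sketch using getent):

```shell
# Resolve localhost the same way the daemons will.
addr=$(getent hosts localhost | awk '{print $1; exit}')
echo "localhost -> $addr"
```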
Copy core-site.xml and hdfs-site.xml from $HADOOP_HOME/etc/hadoop to /etc/impala/conf:
# cd /usr/local/hadoop-2.0.0-cdh4.1.0/etc/hadoop/
# cp core-site.xml hdfs-site.xml /etc/impala/conf
# cd /etc/impala/conf
# vi hdfs-site.xml
Add the following properties:
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.use.legacy.blockreader.local</name>
<value>true</value>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>750</value>
</property>
<property>
<name>dfs.block.local-path-access.user</name>
<value>impala</value>
</property>
<property>
<name>dfs.client.file-block-storage-locations.timeout</name>
<value>3000</value>
</property>
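A quick grep loop can confirm that every short-circuit-read property actually made it into the copied file (a sketch; the heredoc stands in for /etc/impala/conf/hdfs-site.xml):

```shell
# Stand-in for the edited hdfs-site.xml, then check each required key.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<property><name>dfs.client.read.shortcircuit</name><value>true</value></property>
<property><name>dfs.domain.socket.path</name><value>/var/run/hadoop-hdfs/dn._PORT</value></property>
<property><name>dfs.datanode.hdfs-blocks-metadata.enabled</name><value>true</value></property>
<property><name>dfs.client.use.legacy.blockreader.local</name><value>true</value></property>
EOF
for key in dfs.client.read.shortcircuit dfs.domain.socket.path \
           dfs.datanode.hdfs-blocks-metadata.enabled \
           dfs.client.use.legacy.blockreader.local; do
  grep -q "<name>$key</name>" "$conf" && echo "$key: present"
done
```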
# mkdir /var/run/hadoop-hdfs
Note: make sure /var/run/hadoop-hdfs is owned by the user running the HDFS/Impala daemons.
6. Start Impala
# service impala-state-store start
# service impala-catalog start
# service impala-server start
Check whether the daemons are running:
# ps -ef | grep impala
If anything fails to start, check the logs for error details.
Start impala-shell:
# impala-shell -i localhost --quiet
[localhost:21000] > select version();
...
[localhost:21000] > select current_database();
...
For impala-shell usage, see
http://www.cloudera.com/documentation/enterprise/latest/topics/impala_tutorial.html#tutorial
7江场、impala日志錯誤處理
impala日志位置為:/var/log/impala
impala啟動錯誤1:
Failed on local exception:
com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "database32/127.0.1.1"; destination host is: "localhost":9000;
Cause:
Hadoop 2.6 uses protobuf 2.5, while this Impala build uses protobuf 2.4.
Solution:
Downgrade Hadoop to a release that matches Impala. Impala is installed here from binary packages and cannot easily be recompiled, so the only option is to pick a Hadoop build compatible with it; I used hadoop-2.0.0-cdh4.1.0 with impala_1.4.0.
Impala startup error 2:
dfs.client.read.shortcircuit is not enabled because - dfs.client.use.legacy.blockreader.local is not enabled
Cause:
hdfs-site.xml is misconfigured.
Solution:
Set dfs.datanode.hdfs-blocks-metadata.enabled to true.
Impala startup error 3:
Impalad services did not start correctly, exiting. Error: Couldn't open transport for 127.0.0.1:24000 (connect() failed: Connection refused)
Cause:
impala-state-store and impala-catalog were not started.
Solution:
# service impala-state-store start
# service impala-catalog start
# service impala-server start