1 Configuration
1.1 Development environment:
- HBase:hbase-1.0.0-cdh5.4.5.tar.gz
- Hadoop:hadoop-2.6.0-cdh5.4.5.tar.gz
- ZooKeeper:zookeeper-3.4.5-cdh5.4.5.tar.gz
- Spark:spark-2.1.0-bin-hadoop2.6
1.2 Spark configuration
- Jars: the HBase jars required are listed below (this set has been tested and runs fine, but I have not verified whether any of them are redundant; feel free to delete any jars that turn out to be unnecessary)
- spark-env.sh
Add the following setting: export SPARK_CLASSPATH=/home/hadoop/data/lib1/*
Note: if you test with spark-shell in YARN mode, it is best to have the jars and hbase-site.xml available on every NodeManager node.
- spark-defaults.conf
spark.yarn.historyServer.address=slave11:18080
spark.history.ui.port=18080
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs:///tmp/spark/events
spark.history.fs.logDirectory=hdfs:///tmp/spark/events
spark.driver.memory=1g
spark.serializer=org.apache.spark.serializer.KryoSerializer
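Note: the spark.eventLog.dir directory has to exist on HDFS before an application starts, otherwise event logging fails. A minimal sketch for creating it via the Hadoop FileSystem API (assuming the cluster configuration is on the classpath; hdfs dfs -mkdir -p /tmp/spark/events works just as well):
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
val fs = FileSystem.get(new Configuration())   // picks up core-site.xml/hdfs-site.xml from the classpath
fs.mkdirs(new Path("/tmp/spark/events"))       // behaves like mkdir -p: harmless if the directory already exists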
1.3 Data
1) Format: barCode@item@value@standardValue@upperLimit@lowerLimit
2) Sample records:
01055HAXMTXG10100001@KEY_VOLTAGE_TEC_PWR@1.60@1.62@1.75@1.55
01055HAXMTXG10100001@KEY_VOLTAGE_T_C_PWR@1.22@1.24@1.45@0.8
01055HAXMTXG10100001@KEY_VOLTAGE_T_BC_PWR@1.16@1.25@1.45@0.8
01055HAXMTXG10100001@KEY_VOLTAGE_11@1.32@1.25@1.45@0.8
01055HAXMTXG10100001@KEY_VOLTAGE_T_RC_PWR@1.24@1.25@1.45@0.8
01055HAXMTXG10100001@KEY_VOLTAGE_T_VCC_5V@1.93@1.90@1.95@1.65
01055HAXMTXG10100001@KEY_VOLTAGE_T_VDD3V3@1.59@1.62@1.75@1.55
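To make the later code easier to follow, here is a small sketch (not part of the original walkthrough) of how one "@"-delimited record maps to the (rowKey, value) pair used in section 2.3: the row key is barCode concatenated with item, and the stored value is the measured value field.
val line   = "01055HAXMTXG10100001@KEY_VOLTAGE_TEC_PWR@1.60@1.62@1.75@1.55"
val fields = line.split("@")          // Array(barCode, item, value, standardValue, upperLimit, lowerLimit)
val rowKey = fields(0) + fields(1)    // "01055HAXMTXG10100001KEY_VOLTAGE_TEC_PWR"
val value  = fields(2)                // "1.60"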
2 Code walkthrough
2.1 Preparation
1) Since this is all about HBase, the first step is to create the tables with the hbase shell
Create the tables: create 'data','v' and create 'data1','v'
2) Use spark-shell to run the examples; launch it with the following command:
bin/spark-shell --master yarn --deploy-mode client --num-executors 5 --executor-memory 1g --executor-cores 2
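Optionally, before importing anything, you can sanity-check that the HBase jars configured in section 1.2 are visible to spark-shell; this one-liner is just a verification sketch, not part of the original steps:
Class.forName("org.apache.hadoop.hbase.HBaseConfiguration")   // throws ClassNotFoundException if the jars are missing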
3) Import the required classes
import org.apache.spark._
import org.apache.spark.rdd.NewHadoopRDD
import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapred.TableOutputFormat
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.HBaseAdmin
import org.apache.hadoop.hbase.client.HTable
import org.apache.hadoop.hbase.client.Scan
import org.apache.hadoop.hbase.client.Get
import org.apache.hadoop.hbase.protobuf.ProtobufUtil
import org.apache.hadoop.hbase.util.{Base64,Bytes}
import org.apache.hadoop.hbase.KeyValue
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
import org.apache.hadoop.hbase.HColumnDescriptor
import org.apache.commons.codec.digest.DigestUtils
2.2 Hands-on code
Create the conf and the table handle:
val conf= HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE,"data1")
val table = new HTable(conf,"data1")
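Optionally, you can fail fast if the table has not been created yet. This check with HBaseAdmin is an addition of mine, relying on the imports from section 2.1:
val admin = new HBaseAdmin(conf)
if (!admin.tableExists("data1")) println("Table 'data1' does not exist - create it in the hbase shell first")
admin.close()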
2.2.1 Writing data
The basic pattern:
val put = new Put(Bytes.toBytes("rowKey"))
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"))
Insert 5 rows with a for loop:
for (i <- 1 to 5) {
  val put = new Put(Bytes.toBytes("row" + i))                                      // row key: row1 .. row5
  put.add(Bytes.toBytes("v"), Bytes.toBytes("value"), Bytes.toBytes("value" + i))  // column family v, qualifier value
  table.put(put)
}
Check the result in the hbase shell.
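You can also read a row back without leaving spark-shell, using a Get against the table handle created above (a small sketch, relying on the imports from section 2.1):
val get = new Get(Bytes.toBytes("row1"))
val res = table.get(get)
println(Bytes.toString(res.getValue(Bytes.toBytes("v"), Bytes.toBytes("value"))))   // expected output: value1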
2.2.2 Reading data
val hbaseRdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
  classOf[org.apache.hadoop.hbase.client.Result])
1) take
hbaseRdd take 1
2) scan
val scan = new Scan()
scan.addFamily(Bytes.toBytes("v"))
val proto = ProtobufUtil.toScan(scan)
val scanToString = Base64.encodeBytes(proto.toByteArray())
conf.set(TableInputFormat.SCAN,scanToString)
val datas = hbaseRdd
  .map(_._2)                                                   // keep only the Result
  .map(result => (result.getRow, result.getValue(Bytes.toBytes("v"), Bytes.toBytes("value"))))
  .map(row => (new String(row._1), new String(row._2)))        // bytes -> String
datas.collect.foreach(r => println(r._1 + ":" + r._2))
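One caveat: depending on the Spark version, newAPIHadoopRDD may capture a copy of the configuration at the moment the RDD is created, in which case setting TableInputFormat.SCAN afterwards does not affect the hbaseRdd defined earlier. To be safe, set the scan on the conf first and only then create the RDD; a sketch:
conf.set(TableInputFormat.SCAN, scanToString)
val scannedRdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
  classOf[ImmutableBytesWritable], classOf[org.apache.hadoop.hbase.client.Result])
scannedRdd.map(_._2)
  .map(r => (Bytes.toString(r.getRow), Bytes.toString(r.getValue(Bytes.toBytes("v"), Bytes.toBytes("value")))))
  .collect()
  .foreach { case (k, v) => println(k + ":" + v) }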
2.3 Bulk insertion
2.3.1 Plain insertion
1) Code
val rdd = sc.textFile("/data/produce/2015/2015-03-01.log")
val data = rdd.map(_.split("@")).map{x=>(x(0)+x(1),x(2))}
data.foreachPartition { partition =>
  // one HBase connection per partition
  val conf = HBaseConfiguration.create()
  conf.set(TableInputFormat.INPUT_TABLE, "data")
  conf.set("hbase.zookeeper.quorum", "slave5,slave6,slave7")
  conf.set("hbase.zookeeper.property.clientPort", "2181")
  conf.addResource("/home/hadoop/data/lib/hbase-site.xml")
  val table = new HTable(conf, "data")
  table.setAutoFlush(false, false)            // buffer puts on the client instead of flushing one by one
  table.setWriteBufferSize(3 * 1024 * 1024)   // 3 MB client-side write buffer
  partition.foreach { y =>
    val put = new Put(Bytes.toBytes(y._1))
    put.add(Bytes.toBytes("v"), Bytes.toBytes("value"), Bytes.toBytes(y._2))
    table.put(put)
  }
  table.flushCommits()                        // flush whatever is still buffered
}
2) Execution time: 7.6 min
2.3.2 Bulkload
1) Code:
val conf = HBaseConfiguration.create();
val tableName = "data1"
val table = new HTable(conf,tableName)
conf.set(TableOutputFormat.OUTPUT_TABLE,tableName)
lazy val job = Job.getInstance(conf)
job.setMapOutputKeyClass(classOf[ImmutableBytesWritable])
job.setMapOutputValueClass(classOf[KeyValue])
HFileOutputFormat.configureIncrementalLoad(job,table)
val rdd = sc.textFile("/data/produce/2015/2015-03-01.log")
  .map(_.split("@"))
  .map(x => (DigestUtils.md5Hex(x(0) + x(1)).substring(0, 3) + x(0) + x(1), x(2)))   // salt the row key with a 3-char MD5 prefix
  .sortBy(x => x._1)                                                                 // HFiles must be written in sorted key order
  .map { x =>
    val kv: KeyValue = new KeyValue(Bytes.toBytes(x._1), Bytes.toBytes("v"), Bytes.toBytes("value"), Bytes.toBytes(x._2 + ""))
    (new ImmutableBytesWritable(kv.getKey), kv)
  }
rdd.saveAsNewAPIHadoopFile("/tmp/data1",classOf[ImmutableBytesWritable],classOf[KeyValue],classOf[HFileOutputFormat],job.getConfiguration())
val bulkLoader = new LoadIncrementalHFiles(conf)
bulkLoader.doBulkLoad(new Path("/tmp/data1"),table)
2) Execution time: 7 s
3) Result:
Check in the hbase shell: list 'data1'
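One thing to keep in mind about the BulkLoad example: the row key is salted with a 3-character MD5 prefix of barCode + item (which gives a more even key distribution), so reading a record back later requires rebuilding the same prefix. A small hypothetical helper, not part of the original code:
def saltedKey(barCode: String, item: String): String =
  DigestUtils.md5Hex(barCode + item).substring(0, 3) + barCode + item
val get = new Get(Bytes.toBytes(saltedKey("01055HAXMTXG10100001", "KEY_VOLTAGE_TEC_PWR")))
println(Bytes.toString(table.get(get).getValue(Bytes.toBytes("v"), Bytes.toBytes("value"))))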
Comparing the two approaches, the bulk load takes far less time than the plain insertion, a speedup of more than 60x (7.6 min vs. 7 s). I have not tested with larger data volumes, but I believe the improvement in load speed remains significant, so I strongly recommend using BulkLoad to import data into HBase in bulk.
That wraps up this walkthrough of working with HBase from Spark. If anything here is wrong or does not run for you, feel free to point it out. Thanks.