Apache Parquet has been getting a lot of attention recently as a file format. Suppose you have a table with 100 columns and most of the time you only need to access 3-10 of them. With row-oriented storage you have to scan all the columns whether you need them or not; Apache Parquet is columnar storage, so if you need 3 columns, only those 3 columns are loaded. Its data type support and compression are also very good. Below we walk through how to store a table as Parquet and how to load it back. First, set up a table:
| first_name | last_name | gender |
|---|---|---|
| Barack | Obama | M |
| Bill | Clinton | M |
| Hillary | Clinton | F |
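The Spark code below reads this data from a tab-separated text file named person (the path is an assumption; adjust it to your setup). A minimal version of that input file would look like:

Barack	Obama	M
Bill	Clinton	M
Hillary	Clinton	F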
Spark SQL:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
import hc.implicits._
case class Person(firstName: String, lastName: String, gender: String)
// read the tab-separated "person" text file and map each row to a Person
val personRDD = sc.textFile("person").map(_.split("\t")).map(p => Person(p(0), p(1), p(2)))
val person = personRDD.toDF()
person.registerTempTable("person")
val males = hc.sql("select * from person where gender='M'")
males.collect.foreach(println)
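The same filter can also be expressed with the DataFrame API instead of SQL; a minimal sketch, relying on the $ column syntax that import hc.implicits._ provides:
// equivalent DataFrame-API filter on the person DataFrame
val malesDF = person.filter($"gender" === "M")
malesDF.show()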
Save the DataFrame as Parquet:
person.write.parquet("person.parquet")
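When rerunning the job, the output directory may already exist, so it can help to write in overwrite mode and verify the schema after reading the file back; a minimal sketch, assuming Spark 1.4+ for the DataFrameWriter.mode API:
// overwrite any existing "person.parquet" output, then check the schema that was written
person.write.mode("overwrite").parquet("person.parquet")
hc.read.parquet("person.parquet").printSchema()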
Create a Parquet-format table in Hive:
create table person_parquet like person stored as parquet;
insert overwrite table person_parquet select * from person;
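Once the Hive table has been created and populated, it can be queried from the same HiveContext like any other table; a small sketch:
// query the Parquet-backed Hive table created above
val malesParquet = hc.sql("select first_name, last_name from person_parquet where gender='M'")
malesParquet.show()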
Because Parquet files store their own schema, loading them no longer requires a case class:
val personDF = hc.read.parquet("person.parquet")
personDF.registerTempTable("pp")
val males = hc.sql("select * from pp where gender='M'")
males.collect.foreach(println)
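This is also where the columnar layout described at the start pays off: projecting only a few columns means only those column chunks are read from disk. A small sketch against the pp temp table registered above:
// only the first_name and last_name column chunks are read (column pruning)
val names = hc.sql("select first_name, last_name from pp")
names.show()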
Sometimes Parquet files pulled from other sources, such as Impala, save strings as binary. To fix that issue, add the following line right after creating the SQLContext:
sqlContext.setConf("spark.sql.parquet.binaryAsString","true")
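For context, a minimal sketch of where that line fits, assuming a hypothetical directory person_from_impala holding Parquet files written by Impala:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// interpret binary columns as strings when reading Parquet written by other tools
sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
val impalaDF = sqlContext.read.parquet("person_from_impala")
impalaDF.show()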