Note: based on the Thrift JDBC/ODBC server documentation.
Environment: Spark 1.6, MySQL
- Spark SQL can serve as a distributed SQL query engine; web applications can connect over JDBC to run SQL queries.
- Steps:
- Start the Thrift JDBC/ODBC server
In the Spark directory, run the following command to start a JDBC/ODBC server:

```shell
sbin/start-thriftserver.sh --master spark://192.168.172.103:7077 --driver-class-path /usr/local/hive-2.1/lib/mysql-connector-java-5.1.32.jar
```
The script accepts the same options as the bin/spark-submit command, plus a --hiveconf option for setting Hive properties. Run ./sbin/start-thriftserver.sh --help for the complete list of options. By default, the server listens on localhost:10000. To change the listening host or port, set these environment variables:

```shell
export HIVE_SERVER2_THRIFT_PORT=<listening-port>
export HIVE_SERVER2_THRIFT_BIND_HOST=<listening-host>
./sbin/start-thriftserver.sh --master <master-uri>
```

Alternatively, specify them as Hive system properties when starting the server:

```shell
./sbin/start-thriftserver.sh \
  --hiveconf hive.server2.thrift.port=<listening-port> \
  --hiveconf hive.server2.thrift.bind.host=<listening-host> \
  --master <master-uri>
```
Connect with beeline

Next, you can test the Thrift JDBC/ODBC server from beeline. You may be prompted for a username and password. In non-secure mode, simply enter your local username and an empty password; for secure mode, refer to the beeline documentation.

```shell
./bin/beeline
beeline> !connect jdbc:hive2://192.168.172.103:10000
```
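Once the `!connect` command succeeds, you can issue SQL directly at the beeline prompt. A small illustrative session (the table name is hypothetical):

```sql
-- typed at the beeline prompt after connecting
show tables;
select count(*) from some_table;
```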
Connecting from code
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Replace "hive" here with the name of the user the queries should run as.
        Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
        Statement stmt = con.createStatement();
        String tableName = "testHiveDriverTable";
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string)");

        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        String filepath = "/tmp/a.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.execute(sql);

        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }

        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}
```
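The client above hardcodes its JDBC URL; when the server's host or port changes (as in the environment-variable section earlier), it can help to assemble the URL from those settings. A minimal sketch, with a hypothetical helper name:

```java
public class HiveUrl {
    // Builds a jdbc:hive2 connection URL from host, port, and database name.
    static String build(String host, int port, String db) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) {
        // Produces the same URL used by HiveJdbcClient above.
        System.out.println(build("localhost", 10000, "default"));
    }
}
```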
Configuring beeline via hive-site.xml
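The Thrift server reads its metastore settings from hive-site.xml in Spark's conf directory. A minimal sketch for a MySQL-backed metastore, matching the mysql-connector jar used when starting the server; the host, database name, and credentials below are placeholders, not values from this environment:

```xml
<configuration>
  <!-- JDBC URL of the MySQL database backing the Hive metastore -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.172.103:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive-password</value>
  </property>
</configuration>
```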