An Introduction to Basic Sqoop Syntax

Overview:
This article introduces Sqoop's basic syntax and simple usage.

1. Viewing command help
[hadoop@hadoop000 ~]$ sqoop help
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

# As the output suggests, run sqoop help COMMAND for details on a specific command
2. list-databases
# View help for the list-databases command
[hadoop@hadoop000 ~]$ sqoop help list-databases
usage: sqoop list-databases [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect
                                                string
   --connection-manager <class-name>            Specify connection manager
                                                class name
   --connection-param-file <properties-file>    Specify connection
                                                parameters file
   --driver <class-name>                        Manually specify JDBC
                                                driver class to use
   --hadoop-home <hdir>                         Override
                                                $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override
                                                $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication
                                                password
   --password-alias <password-alias>            Credential provider
                                                password alias
   --password-file <password-file>              Set authentication
                                                password file path
   --relaxed-isolation                          Use read-uncommitted
                                                isolation for imports
   --skip-dist-cache                            Skip copying jars to
                                                distributed cache
   --username <username>                        Set authentication
                                                username
   --verbose                                    Print more information
                                                while working

# Basic usage
[hadoop@oradb3 ~]$ sqoop list-databases \
> --connect jdbc:mysql://localhost:3306 \
> --username root \
> --password 123456

# Result
information_schema
mysql
performance_schema
slow_query_log
sys
test
3. list-tables
# Command help
[hadoop@hadoop000 ~]$ sqoop help list-tables
usage: sqoop list-tables [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect
                                                string
   --connection-manager <class-name>            Specify connection manager
                                                class name
   --connection-param-file <properties-file>    Specify connection
                                                parameters file
   --driver <class-name>                        Manually specify JDBC
                                                driver class to use
   --hadoop-home <hdir>                         Override
                                                $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override
                                                $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication
                                                password
   --password-alias <password-alias>            Credential provider
                                                password alias
   --password-file <password-file>              Set authentication
                                                password file path
   --relaxed-isolation                          Use read-uncommitted
                                                isolation for imports
   --skip-dist-cache                            Skip copying jars to
                                                distributed cache
   --username <username>                        Set authentication
                                                username
   --verbose                                    Print more information
                                                while working

# Usage
[hadoop@hadoop000 ~]$ sqoop list-tables \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456

# Result
t_order
test0001
test_1013
test_dyc
test_tb
4. Importing MySQL data into HDFS (import)

(By default the data is imported under the current user's HDFS home directory: /user/<username>/<table name>)
A quick side note:

  • hadoop fs -ls lists the current user's home directory, i.e. /user/hadoop
    hadoop fs -ls / lists the HDFS root directory
# View command help
[hadoop@hadoop000 ~]$ sqoop help import
# Run the import
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students

At this point you will very likely hit this error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
To fix it, download java-json.jar (download link) and place it in the ../sqoop/lib directory.

# Run the import again
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students

18/07/04 13:28:35 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
18/07/04 13:28:35 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/07/04 13:28:35 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/07/04 13:28:35 INFO tool.CodeGenTool: Beginning code generation
18/07/04 13:28:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students` AS t LIMIT 1
18/07/04 13:28:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students` AS t LIMIT 1
18/07/04 13:28:35 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
18/07/04 13:28:37 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/3024b8df04f623e8c79ed9b5b30ace75/students.jar
18/07/04 13:28:37 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/07/04 13:28:37 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/07/04 13:28:37 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/07/04 13:28:37 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/07/04 13:28:37 INFO mapreduce.ImportJobBase: Beginning import of students
18/07/04 13:28:38 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/07/04 13:28:39 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/07/04 13:28:39 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/07/04 13:28:41 INFO db.DBInputFormat: Using read commited transaction isolation
18/07/04 13:28:41 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `students`
18/07/04 13:28:41 INFO db.IntegerSplitter: Split size: 0; Num splits: 4 from: 1001 to: 1003
18/07/04 13:28:41 INFO mapreduce.JobSubmitter: number of splits:3
18/07/04 13:28:42 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1530598609758_0015
18/07/04 13:28:42 INFO impl.YarnClientImpl: Submitted application application_1530598609758_0015
18/07/04 13:28:42 INFO mapreduce.Job: The url to track the job: http://oradb3:8088/proxy/application_1530598609758_0015/
18/07/04 13:28:42 INFO mapreduce.Job: Running job: job_1530598609758_0015
18/07/04 13:28:52 INFO mapreduce.Job: Job job_1530598609758_0015 running in uber mode : false
18/07/04 13:28:52 INFO mapreduce.Job:  map 0% reduce 0%
18/07/04 13:28:58 INFO mapreduce.Job:  map 33% reduce 0%
18/07/04 13:28:59 INFO mapreduce.Job:  map 67% reduce 0%
18/07/04 13:29:00 INFO mapreduce.Job:  map 100% reduce 0%
18/07/04 13:29:00 INFO mapreduce.Job: Job job_1530598609758_0015 completed successfully
18/07/04 13:29:00 INFO mapreduce.Job: Counters: 30
...
18/07/04 13:29:00 INFO mapreduce.ImportJobBase: Transferred 40 bytes in 21.3156 seconds (1.8766 bytes/sec)
18/07/04 13:29:00 INFO mapreduce.ImportJobBase: Retrieved 3 records.
# The generated log output is worth reading carefully
# Check the files on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -ls /user/hadoop/students
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-04 13:28 /user/hadoop/students/_SUCCESS
-rw-r--r--   1 hadoop supergroup         13 2018-07-04 13:28 /user/hadoop/students/part-m-00000
-rw-r--r--   1 hadoop supergroup         13 2018-07-04 13:28 /user/hadoop/students/part-m-00001
-rw-r--r--   1 hadoop supergroup         14 2018-07-04 13:28 /user/hadoop/students/part-m-00002
[hadoop@hadoop000 ~]$ hadoop fs -cat /user/hadoop/students/"part*"
1001,lodd,23
1002,sdfs,21
1003,sdfsa,24

We can also add other parameters to make the import more controllable:

-m                      number of map tasks to launch; the default is 4
--delete-target-dir     delete the target directory if it already exists
--mapreduce-job-name    name for the MapReduce job
--target-dir            import into the specified directory
--fields-terminated-by  field delimiter for the output files
--null-string           replacement value for NULL in string columns
--null-non-string       replacement value for NULL in non-string columns
--columns               import only the listed columns of the table
--where                 import only rows matching a condition
--query                 import the result of a SQL statement; cannot be combined with --table or --columns
--options-file          read the options from a file
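A note on quoting --query: Sqoop requires the literal token $CONDITIONS in the WHERE clause (it rewrites it into per-split conditions at run time), so the shell must not expand it. A minimal sketch of the two quoting styles that both pass it through intact:

```shell
# Single quotes keep $CONDITIONS literal:
q1='select * from students where id>1001 and $CONDITIONS'

# Inside double quotes the dollar sign must be escaped:
q2="select * from students where id>1001 and \$CONDITIONS"

# Both hand Sqoop the same literal string:
echo "$q1"
echo "$q2"
```

If the token is missing, Sqoop rejects the query with an error about the required $CONDITIONS clause.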

# Run the import
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --table students \
> -m 1

# Check on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -ls /user/hadoop/students              
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-04 13:53 /user/hadoop/students/_SUCCESS
-rw-r--r--   1 hadoop supergroup         40 2018-07-04 13:53 /user/hadoop/students/part-m-00000
[hadoop@oradb3 ~]$ hadoop fs -cat /user/hadoop/students/"part*"
1001,lodd,23
1002,sdfs,21
1003,sdfsa,24
# Using the where parameter
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --table students \
> --mapreduce-job-name FromMySQL2HDFS2 \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0 \
> --columns "name" \
> --target-dir STU_COLUMN_WHERE \
> --where 'id<1002'

# Result on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -cat STU_COLUMN_WHERE/"part*"
lodd
# Using the query parameter
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --mapreduce-job-name FromMySQL2HDFS3 \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0 \
> --target-dir STU_COLUMN_QUERY \
> --query "select * from students where id>1001 and \$CONDITIONS"

# Check on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -cat STU_COLUMN_QUERY/"part*"
1002    sdfs    21
1003    sdfsa   24
# Using the options-file parameter
[hadoop@hadoop000 ~]$ vi sqoop-import-hdfs.txt
import
--connect
jdbc:mysql://localhost:3306/test
--username
root
--password
123456
--table
students
--target-dir
STU_option_file
# Run the import
[hadoop@hadoop000 ~]$ sqoop --options-file /home/hadoop/sqoop-import-hdfs.txt
# Check on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -cat STU_option_file/"part*"
1001,lodd,23
1002,sdfs,21
1003,sdfsa,24
5. eval

The help output describes this command as: Evaluate a SQL statement and display the results. In other words, it executes a SQL statement and prints the result set.

# View command help
[hadoop@hadoop000 ~]$ sqoop help eval
usage: sqoop eval [GENERIC-ARGS] [TOOL-ARGS]

Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect
                                                string
   --connection-manager <class-name>            Specify connection manager
                                                class name
   --connection-param-file <properties-file>    Specify connection
                                                parameters file
   --driver <class-name>                        Manually specify JDBC
                                                driver class to use
   --hadoop-home <hdir>                         Override
                                                $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override
                                                $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication
                                                password
   --password-alias <password-alias>            Credential provider
                                                password alias
   --password-file <password-file>              Set authentication
                                                password file path
   --relaxed-isolation                          Use read-uncommitted
                                                isolation for imports
   --skip-dist-cache                            Skip copying jars to
                                                distributed cache
   --username <username>                        Set authentication
                                                username
   --verbose                                    Print more information
                                                while working

SQL evaluation arguments:
-e,--query <statement>    Execute 'statement' in SQL and exit
# Run
[hadoop@hadoop000 ~]$ sqoop eval \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --query "select * from students"

18/07/04 14:28:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
18/07/04 14:28:44 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/07/04 14:28:44 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
----------------------------------------------------
| id          | name                 | age         | 
----------------------------------------------------
| 1001        | lodd                 | 23          | 
| 1002        | sdfs                 | 21          | 
| 1003        | sdfsa                | 24          | 
----------------------------------------------------
6. export (exporting HDFS data, or data stored in Hive, to MySQL)

Common parameters:

--table                        name of the target MySQL table
--input-fields-terminated-by   field delimiter of the files on HDFS; the default is a comma
--export-dir                   HDFS directory holding the data to export
--columns                      columns to export

Before running the export, the target table must already exist in MySQL (the export fails if it does not):
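For reference, a table matching the three-column students data could be created roughly like this (a sketch; the column types are assumptions, since the original schema is not shown):

```sql
-- Hypothetical DDL for the export target; adjust types to your real schema.
CREATE TABLE students_demo (
  id   INT,
  name VARCHAR(100),
  age  INT
);
```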

# Source file on HDFS
[hadoop@hadoop000 ~]$ hadoop fs -cat /user/hadoop/students/part-m-00000
1001,lodd,23
1002,sdfs,21
1003,sdfsa,24
# Export to MySQL
[hadoop@hadoop000 ~]$ sqoop export \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students_demo \
> --export-dir /user/hadoop/students/

18/07/04 14:46:20 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
18/07/04 14:46:20 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/07/04 14:46:20 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/07/04 14:46:20 INFO tool.CodeGenTool: Beginning code generation
18/07/04 14:46:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students_demo` AS t LIMIT 1
18/07/04 14:46:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `students_demo` AS t LIMIT 1
18/07/04 14:46:21 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
18/07/04 14:46:24 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/fc7b53dd6eef701c0731c7a7c4a4b340/students_demo.jar
18/07/04 14:46:24 INFO mapreduce.ExportJobBase: Beginning export of students_demo
18/07/04 14:46:25 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/07/04 14:46:25 INFO Configuration.deprecation: mapred.map.max.attempts is deprecated. Instead, use mapreduce.map.maxattempts
18/07/04 14:46:26 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
18/07/04 14:46:26 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
18/07/04 14:46:26 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
...
18/07/04 14:46:55 INFO mapreduce.ExportJobBase: Transferred 672 bytes in 29.3122 seconds (22.9256 bytes/sec)
18/07/04 14:46:55 INFO mapreduce.ExportJobBase: Exported 3 records.

# Check in MySQL
mysql> select * from students_demo;
+------+-------+------+
| id   | name  | age  |
+------+-------+------+
| 1001 | lodd  |   23 |
| 1002 | sdfs  |   21 |
| 1003 | sdfsa |   24 |
+------+-------+------+
3 rows in set (0.00 sec)

If the export is run again, the rows are appended to the table.

# Adding the columns parameter
[hadoop@hadoop000 ~]$ sqoop export \
> --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students_demo2 \
> --export-dir /user/hadoop/students/ \
> --columns id,name

# Result in MySQL
mysql> select * from students_demo2;
+------+-------+------+
| id   | name  | age  |
+------+-------+------+
| 1001 | lodd  | NULL |
| 1002 | sdfs  | NULL |
| 1003 | sdfsa | NULL |
+------+-------+------+
3 rows in set (0.00 sec)
7. Importing MySQL data into Hive

Common parameters:

--create-hive-table     create the target table; fails if the table already exists
--hive-database         target Hive database
--hive-import           import into Hive (without it, the data only goes to HDFS)
--hive-overwrite        overwrite existing data
--hive-table            name of the Hive table; defaults to the source table's name
--hive-partition-key    Hive partition column
--hive-partition-value  partition value for this import

The first import may fail with errors like these:
18/07/04 15:06:26 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
18/07/04 15:06:26 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
Solution: copy a few jar files from Hive's lib directory into Sqoop's lib directory:

# Fixing the error
[hadoop@hadoop000 lib]$ pwd
/home/hadoop/app/hive-1.1.0-cdh5.7.0/lib
[hadoop@hadoop000 lib]$ cp hive-common-1.1.0-cdh5.7.0.jar /home/hadoop/app/sqoop-1.4.6-cdh5.7.0/lib/
[hadoop@hadoop000 lib]$ cp hive-shims* /home/hadoop/app/sqoop-1.4.6-cdh5.7.0/lib/
# Run the import again once the error is fixed
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --table students \
> --create-hive-table \
> --hive-database hive \
> --hive-import \
> --hive-overwrite \
> --hive-table stu_import \
> --mapreduce-job-name FromMySQL2HIVE \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-non-string 0

# Check in Hive
hive> show tables;
OK
stu_import
Time taken: 0.051 seconds, Fetched: 1 row(s)
hive> select * from stu_import;
OK
1001    lodd    23
1002    sdfs    21
1003    sdfsa   24
Time taken: 0.969 seconds, Fetched: 3 row(s)

Tip: when importing into Hive, avoid the --create-hive-table parameter and create the Hive table in advance, because the column types of an automatically created table may not be what you want.
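For example, the table for the tab-delimited import above could be declared up front roughly like this (a sketch; the column types are assumptions):

```sql
-- Hypothetical HiveQL; the delimiter must match --fields-terminated-by '\t'.
CREATE TABLE stu_import (
  id   INT,
  name STRING,
  age  INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```

With a pre-created table, drop --create-hive-table from the import command, since it would fail on an existing table.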

# Adding the partition parameters
[hadoop@hadoop000 ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password 123456 \
> --table students \
> --create-hive-table \
> --hive-database hive \
> --hive-import \
> --hive-overwrite \
> --hive-table stu_import2 \
> --mapreduce-job-name FromMySQL2HIVE2 \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-non-string 0 \
> --hive-partition-key dt \
> --hive-partition-value "2018-08-08"
# Check in Hive
hive> select * from stu_import2;
OK
1001    lodd    23      2018-08-08
1002    sdfs    21      2018-08-08
1003    sdfsa   24      2018-08-08
Time taken: 0.192 seconds, Fetched: 3 row(s)
8. Using sqoop job

sqoop job saves a statement as a named job instead of executing it at creation time. You can list jobs, execute a job whenever you like, and delete jobs, which makes task scheduling convenient.

--create <job-id>  create a new job
--delete <job-id>  delete a job
--exec <job-id>    execute a job
--show <job-id>    show a job's parameters
--list             list all jobs

# Create a job
[hadoop@hadoop000 ~]$ sqoop job --create person_job1 -- import --connect jdbc:mysql://localhost:3306/test \
> --username root \
> --password 123456 \
> --table students_demo \
> -m 1 \
> --delete-target-dir
# List jobs
[hadoop@hadoop000 ~]$ sqoop job --list
Available jobs:
  person_job1
# Run the job (you will be prompted for the MySQL root password)
[hadoop@hadoop000 ~]$ sqoop job --exec person_job1
# Check on HDFS
[hadoop@hadoop000 lib]$ hadoop fs -ls /user/hadoop/students_demo
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-07-04 15:34 /user/hadoop/students_demo/_SUCCESS
-rw-r--r--   1 hadoop supergroup         40 2018-07-04 15:34 /user/hadoop/students_demo/part-m-00000

Notice that running person_job1 requires typing the database password. How can we avoid entering it every time?
Configuring sqoop-site.xml solves this:

# Uncomment the sqoop.metastore.client.record.password property, or add it if it is missing
[hadoop@hadoop000 conf]$ pwd
/home/hadoop/app/sqoop-1.4.6-cdh5.7.0/conf
[hadoop@hadoop000 conf]$ vi sqoop-site.xml
  <property>
    <name>sqoop.metastore.client.record.password</name>
    <value>true</value>
    <description>If true, allow saved passwords in the metastore.
    </description>
  </property>

Reference: https://blog.csdn.net/yu0_zhang0/article/details/79069251
