When a PySpark DataFrame is saved directly as a Hive table, any newline character embedded in a string column breaks the row boundaries. Take Spark 3.0.0 as an example: we save a single row whose string contains a newline, yet counting the table returns 2 rows:
>>> df = spark.createDataFrame([(1,'hello\nworld')], ('id','msg'))
>>> df.write.format('hive').saveAsTable('test.newline0')
>>> spark.sql('SELECT COUNT(1) FROM test.newline0').show()
+--------+
|count(1)|
+--------+
| 2|
+--------+
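Reading the table back makes the split visible; this is a sketch, and the exact contents of the second, truncated row depend on how Hive parses the broken line:
>>> spark.table('test.newline0').show()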
It took me a long time to find documentation on this behavior; it is eventually covered in the Specifying storage format for Hive tables section of the Spark SQL guide. When saving directly with the hive format, the underlying fileFormat is 'textfile' and the default line delimiter is '\n', so embedded newlines inevitably split rows. This can be verified with the following code, which tries to override the delimiter:
>>> df.write.format('hive').option('fileFormat', 'textfile').option('lineDelim', '\x13').saveAsTable('test.newline1')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 868, in saveAsTable
    self._jwrite.saveAsTable(name)
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
  File "/usr/share/spark-3.0.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 137, in deco
    raise_from(converted)
  File "<string>", line 3, in raise_from
pyspark.sql.utils.IllegalArgumentException: Hive data source only support newline '\n' as line delimiter, but given: ?
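The textfile default can also be confirmed from the table metadata. For a table saved with format('hive') and no explicit fileFormat, the InputFormat reported below should be the plain-text one (org.apache.hadoop.mapred.TextInputFormat); this is a sketch, and the exact metadata rows vary by Hive version:
>>> spark.sql('DESCRIBE FORMATTED test.newline0') \
...     .filter("col_name IN ('InputFormat', 'OutputFormat')") \
...     .show(truncate=False)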
The fix is simple: save with a different file format:
>>> df.write.format('hive').option('fileFormat', 'parquet').saveAsTable('test.newline1')
>>> spark.sql('SELECT COUNT(1) FROM test.newline1').show()
+--------+
|count(1)|
+--------+
| 1|
+--------+
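If the table must stay in textfile format, another option is to sanitize the strings before writing. A minimal sketch using regexp_replace to turn carriage returns and newlines into spaces (the column name 'msg' matches the example above; test.newline2 is a hypothetical table name):
>>> from pyspark.sql import functions as F
>>> # replace any CR/LF inside the string column with a space before writing
>>> clean = df.withColumn('msg', F.regexp_replace('msg', '[\\r\\n]', ' '))
>>> clean.write.format('hive').saveAsTable('test.newline2')  # hypothetical table
Note that this changes the stored data, so it is only appropriate when the embedded newlines carry no meaning; otherwise, switching to parquet (or another binary format) as shown above is the safer fix.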