Step 1: calling PySpark from IPython/Jupyter
The steps can be found here (see the references at the end).
Generate the notebook configuration file:
jupyter notebook --generate-config
Edit the generated notebook configuration file:
vi ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.ip = '1xx.xxx.xxx.xxx'
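Put together, the relevant block of jupyter_notebook_config.py looks something like this (the file is plain Python; the open_browser and port settings are optional extras, shown only as an illustration):
c = get_config()  # provided by Jupyter when it loads this file
c.NotebookApp.ip = '127.0.0.1'  # or the external IP, to allow remote access
c.NotebookApp.open_browser = False  # don't launch a browser on a headless server
c.NotebookApp.port = 8888  # the default port; change it if it is taken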
If you want the notebook reachable from outside, set ip to the machine's external IP address; otherwise set it to 127.0.0.1, which allows local access only.
Error: Unrecognized alias: '--profile=pyspark', it will probably have no effect.
Cause:
“ipython has moved to version 5.0, which means that if you are using it, it will be reading its configuration from ~/.jupyter, not ~/.ipython.
You have to create a new configuration file with
jupyter notebook --generate-config
and then edit the resulting
~/.jupyter/jupyter_notebook_config.py.”
In short: since IPython 5.0, the configuration directory is ~/.jupyter, not ~/.ipython.
The fix: edit the config file
vi ~/.jupyter/jupyter_notebook_config.py
and set c.NotebookApp.ip = '127.0.0.1'
(or the external IP address, if the notebook should be reachable from outside).
Start the notebook:
jupyter-notebook --config='~/.jupyter/jupyter_notebook_config.py'
Test pyspark in Jupyter by creating a SparkContext object:
import findspark
findspark.init()  # locate SPARK_HOME and put pyspark on sys.path
import pyspark
sc = pyspark.SparkContext()
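To confirm that the new SparkContext actually runs jobs, any small computation works as a smoke test, for example:
# smoke test: run a trivial job through the new SparkContext
print(sc.version)  # should report 2.0.0 on this setup
rdd = sc.parallelize(range(100))  # distribute the numbers 0..99
print(rdd.sum())  # expected output: 4950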
Step 2: adding a Scala kernel to Jupyter
The basic idea follows this reference (links at the end):
# install toree
pip install toree
# point toree at the Spark installation
jupyter toree install --spark_home=your-spark-home
Here your-spark-home is the Spark installation directory. It is the directory whose sbin subdirectory holds the cluster scripts; that is, from /opt/spark-2.0.0-bin-hadoop2.7/sbin you can run:
# stop spark
./stop-all.sh
# start spark
./start-all.sh
so the your-spark-home above is /opt/spark-2.0.0-bin-hadoop2.7/.
Check the list of kernels:
jupyter kernelspec list
Output: the listing should now include the newly installed Toree kernel.
Start jupyter:
jupyter-notebook --config='~/.jupyter/jupyter_notebook_config.py'
It failed with the following error:
java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
While searching for a fix there was a small detour: trying one of the methods from the Jupyter site ran into an sbt problem:
sbt: command not found
sbt: Getting org.scala-sbt sbt 0.13.6
After much fiddling this could not be resolved, so the decision was to reconfigure from scratch.
Step 3: the problem persisted, so reconfigure from scratch (so in fact you could start directly from here...). Incidentally, that NoSuchMethodError is the classic sign of a Scala binary-compatibility mismatch: the kernel was built against a different Scala version than the 2.11 that Spark 2.0 ships with, which is why installing the matching toree build below matters.
First remove all the kernels. Check the kernel details and install paths:
jupyter kernelspec list
Delete everything under the kernels directory, but not the kernels directory itself:
# /root/.local/share/jupyter/kernels/ is this machine's kernel path; the trailing '*' deletes everything inside it
rm -rf /root/.local/share/jupyter/kernels/*
Then uninstall toree:
pip uninstall toree
3.1. Installing toree, method A
This issue thread reports that for an environment of Spark 2.0 + Scala 2.11, the toree version to install is 0.2.0.dev1. This method requires Python 2.7 + conda.
Following the suggestion there, enter:
pip install -i https://pypi.anaconda.org/hyoon/simple toree
3.2. Installing toree, method B
Without Python 2.7 & conda, download the tgz file instead, then unpack it and install from the unpacked directory (pip's -e flag installs in editable mode):
tar zxvf toree-0.2.0.dev1.tar.gz
pip install -e toree-0.2.0.dev1
3.3. Installing toree, method C: fetch the archive and install it directly
wget https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz
pip install toree-0.2.0.dev1.tar.gz
3.4. With toree installed, configure the kernel
After reinstalling toree, point it at the Spark directory again:
jupyter toree install --spark_home=/opt/spark-2.0.0-bin-hadoop2.7
Check the current kernel list:
jupyter kernelspec list
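The same check can also be done from Python via jupyter_client, which is where the kernelspec machinery lives (see the jupyter-client documentation in the references). A minimal sketch; the kernel name in the comment is illustrative:
# list installed kernelspecs programmatically
from jupyter_client.kernelspec import KernelSpecManager

specs = KernelSpecManager().find_kernel_specs()  # dict: {kernel name: install path}
for name, path in specs.items():
    print(name + ' -> ' + path)  # e.g. apache_toree_scala -> /usr/local/share/jupyter/kernels/apache_toree_scala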
Start jupyter:
jupyter-notebook --config='~/.jupyter/jupyter_notebook_config.py'
3.5. Fixing the error: Unsupported major.minor version 52.0
This time the error was:
Exception in thread "main" java.lang.UnsupportedClassVersionError: com/typesafe/config/ConfigMergeable : Unsupported major.minor version 52.0
The key part is 'Unsupported major.minor version 52.0': some class was compiled for a newer Java release than the JVM that is running it, i.e. the versions don't match. For reference, the class-file major versions are:
Java SE 9 = 53,
Java SE 8 = 52,
Java SE 7 = 51,
Java SE 6.0 = 50,
Java SE 5.0 = 49,
JDK 1.4 = 48,
JDK 1.3 = 47,
JDK 1.2 = 46,
JDK 1.1 = 45
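These numbers sit in the class-file header itself (a 4-byte magic 0xCAFEBABE, then a 2-byte minor and a 2-byte major version, big-endian), so a suspect class can be checked directly. A small sketch; the file name is hypothetical (e.g. extracted from the typesafe-config jar):
# print the class-file major version of a compiled class
import struct

with open('ConfigMergeable.class', 'rb') as f:  # hypothetical .class file
    magic, minor, major = struct.unpack('>IHH', f.read(8))

assert magic == 0xCAFEBABE  # sanity check: really a class file
print(major)  # 52 means the class targets Java 8, per the table above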
Check the versions on this machine.
Java version:
java -version
Output:
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
javac version:
javac -version
Output:
javac 1.8.0_121
Note that java reports 1.8.0_151 while javac reports 1.8.0_121, which hints at more than one JDK; check whether multiple JDKs are installed:
sudo update-alternatives --config javac
Output:
There is 1 program that provides 'javac'.
Selection Command
-----------------------------------------------
*+ 1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.121-0.b13.el7_3.x86_64/bin/javac)
Taking this machine as an example: as root, check /etc/profile:
vim /etc/profile
Here both JAVA_HOME and JAVA_BIN are set to jdk1.8.0_151.
So the problem is most likely that the Java version Spark is configured with does not match the machine's Java version.
Reference: the fix for the 'unsupported major.minor version 52.0' error when submitting a jar to Spark (CSDN link at the end).
Check spark-env.sh under the Spark installation's conf directory:
vim /opt/spark-2.0.0-bin-hadoop2.7/conf/spark-env.sh
Output:
Sure enough, the Java path configured here was wrong and inconsistent with the system's /etc/profile; it had presumably been set up against an older Java version. Copy the JAVA_HOME and JAVA_BIN values over from /etc/profile and save.
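After the fix, the relevant lines in spark-env.sh mirror /etc/profile. A sketch of what they end up as, where the exact JDK path is an assumption based on this machine's /usr/java directory:
# conf/spark-env.sh: keep these in sync with /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_151  # illustrative path; copy the real value from /etc/profile
export JAVA_BIN=$JAVA_HOME/bin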
Checking the /usr/java directory confirms it: a junior labmate who set up this Spark environment earlier had apparently tried both jdk1.7 and jdk1.8 and found that jdk1.7 is incompatible with Spark 2.0.0, but left no documentation or logs of that work, which really drives home the importance of keeping work records!!!
Restart jupyter, and it finally works.
Environment on this machine:
Linux (CentOS) + JDK 1.8 + Spark 2.0.0 + Scala 2.11.8 + Hadoop 2.7.3 + Python 2.7.12 | Anaconda 4.2.0 (64-bit)
References:
https://m.2cto.com/kf/201611/566880.html
https://www.cnblogs.com/NaughtyBaby/p/5469469.html
http://blog.csdn.net/u012948976/article/details/52372644
http://blog.csdn.net/qq_30901367/article/details/73296887
http://blog.csdn.net/xmo_jiao/article/details/72674687?utm_source=itdadao&utm_medium=referral
https://datascience.stackexchange.com/questions/6555/issue-with-ipython-jupyter-on-spark-unrecognized-alias
https://issues.apache.org/jira/browse/TOREE-354
https://stackoverflow.com/questions/39535858/installing-scala-kernel-or-spark-toree-for-jupyter-anaconda
http://jupyter-client.readthedocs.io/en/latest/kernels.html#kernelspecs
https://www.cnblogs.com/liujStudy/p/7217480.html