I previously set up a big-data environment on k3s, but it turned out to be not particularly convenient and kept needing changes, so for easier experimentation I put together a development environment with Docker instead. All the materials are linked at https://github.com/henrywangx/dev-cluster
Dev cluster setup
1. Installation
Prerequisites: docker and docker-compose are already installed.
1. Bring up the containers:
make up
2. Download the mc client from the MinIO China download site.
3. Add the dev cluster to mc (newer mc releases use `mc alias set` in place of `mc config host add`):
mc config host add dev http://localhost:9000 minio minio123 --api s3v4
4. Create a MinIO access key/secret (e.g. in the MinIO web console) and save it locally.
5. Update the .env file with the MinIO access key information:
# AWS_REGION is used by Spark
AWS_REGION=us-east-1
# This must match if using minio
MINIO_REGION=us-east-1
# Used by pyIceberg
AWS_DEFAULT_REGION=us-east-1
# AWS credentials (use the MinIO access key/secret created in step 4)
AWS_ACCESS_KEY_ID=qUgyOn1f3rbQkXAgCYLa
AWS_SECRET_ACCESS_KEY=MJA9lmnlESWEJZgmJ5Itdee94DUF16wSMfyhsIzT
# If using Minio, this should be the API address of Minio Server
AWS_S3_ENDPOINT=http://minio:9000
# Location where files will be written when creating new tables
WAREHOUSE=s3a://openlake/
# URI of Nessie Catalog
NESSIE_URI=http://nessie:19120/api/v1
GRANT_SUDO=yes
Then bring up the containers again so they pick up the new .env:
make up
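To sanity-check the credentials before moving on, here is a minimal sketch that lists buckets through the MinIO S3 API. It assumes boto3 is available inside the jupyter container, where docker-compose injects the .env variables; boto3 and this usage are my assumptions, not part of the repo.

```python
# sanity-check sketch: list buckets via the MinIO S3 API using the .env values
# (assumes boto3 is installed; run inside the jupyter container so the
# environment variables from .env are set)
import os
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],         # http://minio:9000
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    region_name=os.environ.get("AWS_REGION", "us-east-1"),
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```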
2. Data preparation
1. Create the input and output buckets:
# input bucket
mc mb dev/openlake/spark/sample-data/
# output bucket
mc mb dev/openlake-tmp/spark/nyc/taxis_small
2. Download the NYC taxi data and copy it into MinIO:
wget https://data.cityofnewyork.us/api/views/t29m-gskq/rows.csv -O rows.csv
mc cp rows.csv dev/openlake/spark/sample-data/
3. Run the Spark job
1. Open the Jupyter UI in a browser: <localhost:8888>
2. Run the spark-minio.py script:
python3 spark-minio.py
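The script itself ships with the repo; as a rough sketch of its core logic (the S3A settings mirror .env, while the `passenger_count` column name and CSV options are assumptions based on the NYC taxi dataset, not copied from the script):

```python
# sketch of the core of spark-minio.py (the real script is in the repo):
# read the taxi CSV from MinIO over S3A and count rides with > 6 passengers
import os
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder.appName("MinIOSparkJob")
    .config("spark.hadoop.fs.s3a.endpoint", os.environ["AWS_S3_ENDPOINT"])
    .config("spark.hadoop.fs.s3a.access.key", os.environ["AWS_ACCESS_KEY_ID"])
    .config("spark.hadoop.fs.s3a.secret.key", os.environ["AWS_SECRET_ACCESS_KEY"])
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

df = spark.read.option("header", "true").csv("s3a://openlake/spark/sample-data/rows.csv")
total = df.count()
crowded = df.filter(col("passenger_count").cast("int") > 6).count()
print(f"Total Rows for NYC Taxi Data: {total}")
print(f"Total Rows for Passenger Count > 6: {crowded}")
```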
3. Check the Spark master UI at <localhost:8080> and Jupyter's port 4040 at <localhost:4040>; they show the running application info and the job details, respectively.
Application info:
Job details:
4. Wait for the Python script to finish and check the result; the computed number of taxi rides with more than 6 passengers is 898:
jovyan@jupyter-lab:~$ python3 spark-minio.py
Setting default log level to "WARN".
...
2024-01-28 07:55:20,121 - MinIOSparkJob - INFO - Total Rows for NYC Taxi Data: 91704300
2024-01-28 07:55:20,121 - MinIOSparkJob - INFO - Total Rows for Passenger Count > 6: 898
4. Manage tables with PySpark and Iceberg
1. Create the warehouse bucket:
mc mb dev/warehouse
2. Run the spark-iceberg-minio.py script:
python3 spark-iceberg-minio.py
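As with the previous job, the real configuration lives in the repo's script. A hedged sketch of a Spark session wired for Iceberg with the Nessie catalog from .env might look like this (the catalog name, table schema, and the required Iceberg/Nessie packages are assumptions):

```python
# sketch: Spark session with Iceberg + the Nessie catalog from .env
# (actual settings are in the repo's spark-iceberg-minio.py; the Iceberg
# runtime and Nessie jars must already be on the Spark classpath)
import os
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("IcebergMinIOJob")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.nessie", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.nessie.catalog-impl",
            "org.apache.iceberg.nessie.NessieCatalog")
    .config("spark.sql.catalog.nessie.uri", os.environ["NESSIE_URI"])
    .config("spark.sql.catalog.nessie.ref", "main")
    .config("spark.sql.catalog.nessie.warehouse", "s3a://warehouse/")
    .getOrCreate()
)

# create an Iceberg table in the Nessie catalog and query it back
spark.sql("CREATE NAMESPACE IF NOT EXISTS nessie.nyc")
spark.sql("CREATE TABLE IF NOT EXISTS nessie.nyc.taxis_large "
          "(vendor_id STRING, passenger_count INT) USING iceberg")
spark.sql("SELECT * FROM nessie.nyc.taxis_large LIMIT 5").show()
```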
5. Configure Dremio
1. Log in to the Dremio UI at <localhost:9047> and create an S3 source.
S3 source configuration:
S3 advanced configuration:
The S3 source needs the following extra settings under advanced configuration:
- fs.s3a.path.style.access: true
- fs.s3a.endpoint: http://minio:9000
- dremio.s3.compat: true
- Check "Enable compatibility mode", since the backend is MinIO rather than AWS S3
2. Format the table as Iceberg: navigate into the nyc.taxis_large directory, click the Format Table button, and save it with the Iceberg format.
3.format為iceberg后,我們就能發(fā)現(xiàn)一個table襟企,選中table狮含,運行sql曼振,發(fā)現(xiàn)我們可以用sql來操作iceberg表了,哈哈
SELECT * FROM taxis_large limit 10
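Dremio also exposes an Arrow Flight endpoint (by default on port 32010; whether this compose file maps that port is an assumption), so the same query can be run from Python with pyarrow, for example:

```python
# sketch: query Dremio over Arrow Flight (assumes pyarrow and pandas are
# installed and port 32010 is exposed; replace user/password with your
# Dremio login, and adjust the table path to match your Dremio context)
from pyarrow import flight

client = flight.FlightClient("grpc+tcp://localhost:32010")
token = client.authenticate_basic_token("user", "password")
options = flight.FlightCallOptions(headers=[token])

info = client.get_flight_info(
    flight.FlightDescriptor.for_command("SELECT * FROM taxis_large LIMIT 10"),
    options,
)
reader = client.do_get(info.endpoints[0].ticket, options)
print(reader.read_all().to_pandas())
```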
References
https://www.cnblogs.com/rongfengliang/p/17970071
https://github.com/minio/openlake/tree/main
https://www.linkedin.com/pulse/creating-local-data-lakehouse-using-alex-merced/
https://medium.com/@ongxuanhong/dataops-02-spawn-up-apache-spark-infrastructure-by-using-docker-fec518698993
https://medium.com/@ongxuanhong/are-you-looking-for-a-powerful-way-to-streamline-your-data-analytics-pipeline-and-produce-cc13ea326790