For massive data, whether batch or streaming, Walmart's verdict is that Apache Spark is the perfect choice!
- Spark Streaming reads data from Kafka and stores it in Cassandra,
- Spark SQL aggregates the data in Cassandra every six hours and saves the results in Parquet format,
- For data visualization, Spark SQL reads the Parquet files back and feeds them to Tableau!
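The ingest leg of the pipeline above can be sketched roughly as follows. This is a minimal illustration, not Walmart's actual job: it assumes Structured Streaming and the DataStax spark-cassandra-connector, and the broker address, topic, keyspace, and table names are all hypothetical.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object IngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("kafka-to-cassandra")
      .config("spark.cassandra.connection.host", "cassandra-host") // hypothetical host
      .getOrCreate()

    // Read the raw event stream from Kafka.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "kafka-host:9092") // hypothetical broker
      .option("subscribe", "events")                        // hypothetical topic
      .load()
      .selectExpr("CAST(key AS STRING) AS id", "CAST(value AS STRING) AS payload")

    // Write each micro-batch into Cassandra via the connector.
    events.writeStream
      .foreachBatch { (batch: DataFrame, _: Long) =>
        batch.write
          .format("org.apache.spark.sql.cassandra")
          .options(Map("keyspace" -> "store", "table" -> "events")) // hypothetical names
          .mode("append")
          .save()
      }
      .start()
      .awaitTermination()
  }
}
```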
Data processing had to be carried out at two places in the pipeline. First, during write, where we stream data from Kafka, process it, and save it to Cassandra. Second, while generating business reports, where we read the complete Cassandra table, join it with other data sources, and aggregate over multiple columns.
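The second, report-generating leg can be sketched as a batch job along these lines. Again a hedged illustration rather than the production code: it assumes the spark-cassandra-connector, and the keyspace, table, paths, and column names are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ReportJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("cassandra-to-parquet")
      .config("spark.cassandra.connection.host", "cassandra-host") // hypothetical host
      .getOrCreate()

    // Full read of the Cassandra table that the streaming job populates.
    val events = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "store", "table" -> "events")) // hypothetical names
      .load()

    // Join with another data source (here a hypothetical dimension table on disk).
    val stores = spark.read.parquet("/data/dim/stores")

    // Aggregate over multiple columns.
    val report = events
      .join(stores, Seq("store_id"))
      .groupBy("region", "category")
      .agg(count("*").as("event_count"), sum("amount").as("total_amount"))

    // Persist the aggregates as Parquet for Tableau to read.
    report.write.mode("overwrite").parquet("/data/reports/latest")
  }
}
```

Scheduling this job to run every six hours is left to an external scheduler (cron, Airflow, or similar).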
For both requirements, Apache Spark was a perfect choice, because it achieves high performance on both batch and streaming data using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine.