I. Work preceding this paper
Two papers have appeared in the "A Berkeley View of ..." series: besides the one summarized here, there is the 2009 report "Above the Clouds: A Berkeley View of Cloud Computing", whose Google Scholar citation count has reached 12,042.
A quick look back at the 2009 predictions about cloud computing: seen from today, they turned out to be fairly accurate.
1. Can cloud providers be profitable?
Yes: offering moderately configured servers in remote locations, at sufficient scale and utilization, can be profitable (cheaper power, and deploying close to users saves external bandwidth)
The years the corresponding public clouds launched:
AWS: 2006 (now the world's largest)
Alibaba Cloud: 2009 (now China's largest)
Baidu Cloud: 2015
Tencent Cloud: 2016
2. Is it feasible, and cost-effective, for enterprise R&D users to adopt the public cloud?
The conclusion: it is feasible, and it is cost-effective
Hardware gets cheaper over time, and different kinds of hardware get cheaper at different rates
Moore's Law
Google's other early products followed the same reasoning, e.g. Gmail and its "why not an "unlimited" inbox?". Marissa Mayer put it roughly as: we firmly believed in Moore's Law, so we dared to try it. Gmail spread by word of mouth, with everyone waiting to be lucky enough to get an invitation; a typical user does not use much storage in the first year or two, and by the time their data piles up, the price per GB of storage has already dropped by an order of magnitude. Once you change your mindset and dare to imagine what others do not, the approach opens up, the way of doing things changes sharply, and the cost structure becomes completely different. When Yahoo Mail, free below 4 MB and paid above it, saw users flooding into Gmail and rushed to follow, it found that its IOE-style stack simply could not compete on cost structure. That left an awkward choice: grit its teeth and bleed money to keep up, or cut its losses and rebuild the system.
3. Challenges cloud computing needs to solve
1) User programs have to adapt to virtual machines (performance loss: about 4% for CPU/RAM, 26% for disk I/O)
Application services and storage services are offered separately
2) Fast startup and shutdown must be supported
Virtual machines should be kept as small as possible
3) How to achieve higher availability (distribution, microservices)
Split into microservices and deploy across regions
4) Preventing cloud lock-in
Enterprise users will try deploying on multiple clouds and migrating flexibly between them
5) Access to and protection of private data
Data can be synchronized back to a private cloud
II. Background of this paper
1. The authors and their lab
1) Most of the 14 authors are well-known figures in academia and industry, and startup founders
https://people.eecs.berkeley.edu/~istoica/
https://people.eecs.berkeley.edu/~dawnsong/
http://people.eecs.berkeley.edu/~jordan/
2) The authors span several research areas:
distributed systems, AI, security, statistical algorithms, networking, database systems, embedded systems
III. Main topics discussed
Four application scenarios and the corresponding nine open technical problems
1. Four application-level trends in AI
1) Mission-critical AI
Assist humans with specific tasks, possibly better than humans can do them: surgical robots, autonomous driving, cleaning robots
Challenges: Design AI systems that learn continually by interacting with a dynamic environment, while making decisions that are timely, robust, and secure.
2) Personalized AI
By collecting users' data, provide more personalized AI services, e.g. personal assistants (such as 助理來(lái)也), the personalized face unlock on the iPhone X, and autonomous-driving systems adapted to different driving styles
Challenges: Design AI systems that enable personalized applications and services yet do not compromise users' privacy and security.
3) AI across organizations
Share training data while preserving data ownership, e.g. among hospitals or among banks (organizations in the same industry may be competitors)
Challenges: Design AI systems that can train on datasets owned by different organizations without compromising their confidentiality, and in the process provide AI capabilities that span the boundaries of potentially competing organizations.
4) AI demands outpacing the Moore's Law
Roughly 400 ZB of data (1 ZB = 1024 × 1024 × 1024 × 1024 GB) was expected to be generated in 2018, with further exponential growth by 2025. Moore's Law has effectively ended and can no longer keep up with AI's demands in compute, storage, or network capacity.
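Spelling out the binary unit conversion used above:

```latex
1\,\text{ZB} = 1024^{4}\,\text{GB} = 2^{40}\,\text{GB} \approx 1.1\times10^{12}\,\text{GB},
\qquad 400\,\text{ZB} \approx 4.4\times10^{14}\,\text{GB}
```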
Challenges: Develop domain-specific architectures and software systems to address the performance needs of future AI applications in the post-Moore's Law era, including custom chips for AI workloads, edge-cloud systems to efficiently process data at the edge, and techniques for abstracting and sampling data.
2. Nine open technical problems
Acting in dynamic environments
The example of patrol robots:
Consider a group of robots providing security for a building. When one robot breaks or a new one is added, the other robots must update their strategies for navigation, planning, and control in a coordinated manner.
Similarly, when the environment changes, either due to the robots' own actions or to external conditions (e.g., an elevator going out of service, or a malicious intruder), all robots must re-calibrate their actions in light of the change.
R1: Continual learning.
The current way of training models is offline training -> optimization -> online inference; even at its freshest this is hours out of date
To improve adaptability, more automated pipelines will be introduced, which in turn brings the security problems discussed later
Online learning: training and updating the model online (a minimal sketch follows after this block)
Reinforcement learning (RL) is expected to be the direction: train extensively in simulated environments, but this requires a great deal of systems-level optimization
"requiring millions or even billions of simulations to explore the solution space and "solve" complex tasks." There is no suitable system for this today.
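A minimal sketch of the online-learning point above, assuming scikit-learn is available; the stream generator and feature dimensions are made up purely for illustration.

```python
# Online learning: the model is updated incrementally as mini-batches arrive,
# instead of being retrained offline every few hours.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()          # linear model trained by SGD
classes = np.array([0, 1])       # the label space must be declared up front

def stream_minibatches(n_batches=100, batch_size=32, dim=20):
    """Stand-in for a real event stream (hypothetical data source)."""
    rng = np.random.default_rng(0)
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, dim))
        y = (X[:, 0] + 0.1 * rng.normal(size=batch_size) > 0).astype(int)
        yield X, y

for X, y in stream_minibatches():
    model.partial_fit(X, y, classes=classes)   # incremental update, no full retrain
```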
Simulated reality (SR).
SR enables an agent to learn not only much faster but also much more safely.
Consider a robot cleaning an environment that encounters an object it has not seen before, e.g., a new cellphone. The robot could physically experiment with the cellphone to determine how to grasp it, but this may require a long time and might damage the phone. In contrast, the robot could scan the 3D shape of the phone into a simulator, perform a few physical experiments to determine rigidity, texture, and weight distribution, and then use SR to learn how to successfully grasp it without damage.
"A test task that takes 30 minutes on the Apollo 1.5 simulation system takes only 30 seconds on the optimized simulation system." (Baidu's Wang Jingao, CES 2018)
Open technical problems:
(1) Build systems for RL that fully exploit parallelism, while allowing dynamic task graphs, providing millisecond-level latencies, and running on heterogeneous hardware under stringent deadlines.
(2) Build systems that can faithfully simulate the real-world environment, as the environment changes continually and unexpectedly, and run faster than real time.
R2: Robust decisions.
An example:
The Microsoft Tay chatbot relied heavily on human interaction to develop rich natural dialogue capabilities. However, when exposed to Twitter messages, Tay quickly took on a dark personality.
Once online learning is deployed, if the system encounters adversarial or highly uncertain data, it should either refrain from making a decision or fall back to a predefined safe action (e.g., an autonomous car slowing down and stopping).
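A hedged sketch of this abstain-or-fall-back idea: if the model's confidence is below a threshold, return a predefined safe action instead of the model's choice. The threshold and action names are illustrative, not from the paper.

```python
import numpy as np

SAFE_ACTION = "slow_down_and_stop"   # predefined fallback, e.g. for an autonomous car

def decide(probs, actions, confidence_threshold=0.8):
    """Return the model's action only when it is confident enough;
    otherwise fall back to the predefined safe action."""
    probs = np.asarray(probs)
    best = int(np.argmax(probs))
    if probs[best] < confidence_threshold:
        return SAFE_ACTION           # abstain from the learned policy
    return actions[best]

# An uncertain prediction triggers the fallback.
print(decide([0.4, 0.35, 0.25], ["keep_lane", "change_left", "change_right"]))
```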
Open technical problems:
(1) Build fine-grained provenance support into AI systems to connect outcome changes (e.g., reward or state) to the data sources that caused these changes, and automatically learn causal, source-specific noise models.
(2) Design API and language support for developing systems that maintain confidence intervals for decision-making, and in particular can process unforeseen inputs.
R3: Explainable decisions.
Especially important in medical AI
Which parts of the input data led to the conclusion?
For example, one may wish to know what features of a particular organ in an X-ray (e.g., size, color, position, form) led to a particular diagnosis and how the diagnosis would change under minor perturbations of those features.
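A small sketch of the perturbation idea in the example above: nudge one input feature at a time and record how much the prediction moves, giving a crude feature attribution. `model` is assumed to be any fitted classifier exposing `predict_proba`.

```python
import numpy as np

def perturbation_attribution(model, x, eps=0.05):
    """Crude attribution: change in predicted probabilities when each
    feature of a single example x is nudged by +eps."""
    x = np.asarray(x, dtype=float)
    base = model.predict_proba(x.reshape(1, -1))[0]
    scores = []
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        pert = model.predict_proba(x_pert.reshape(1, -1))[0]
        scores.append(float(np.abs(pert - base).max()))  # sensitivity of feature i
    return np.array(scores)
```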
Open technical problems:
(1) Build AI systems that can support interactive diagnostic analysis, that faithfully replay past executions, and that can help to determine the features of the input that are responsible for a particular decision, possibly by replaying the decision task against past perturbed inputs. More generally, provide systems support for causal inference.
Secure AI
Direct attacks that take control of the system
TensorFlow has had disclosed vulnerabilities:
"The vulnerability is triggered when an AI model is processed. One attack scenario: an attacker publishes an AI model online, and anyone who downloads and runs it is compromised. Or, an attacker who can control a system's AI model can use it to mount an attack. So systems that use TensorFlow must be careful not to run AI models that are malicious or have been tampered with.
There are currently two publicly known AI-framework vulnerabilities: the ones 360 previously found in third-party components pulled in by three AI frameworks, and the vulnerability in the framework itself that we disclosed this time."
R4: Secure enclaves.
For example, when deploying on a public cloud or other shared cluster: isolation at the runtime level, where code inside the enclave can access the data while other processes cannot see the enclave's code or data, enforced by hardware. The practical recommendation is to split the code into a confidential part and a non-confidential part and run them in separate runtimes.
Intel's Software Guard Extensions (SGX) [5] provides a hardware-enforced isolated execution environment. Code inside SGX can compute on data, while even a compromised operating system or hypervisor (running outside the enclave) cannot see this code or data. SGX also provides remote attestation [6], a protocol enabling a remote client to verify that the enclave is running the expected code.
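Not the real SGX SDK, just a conceptual sketch of the attestation check described above: the remote client accepts the enclave only if its reported code measurement matches the hash of the code the client expects. Standard-library hashing stands in for the hardware-backed measurement, and the hardware signature over it is omitted.

```python
import hashlib

def measure(code_bytes: bytes) -> str:
    """Stand-in for the hardware 'measurement' of the code loaded into the enclave."""
    return hashlib.sha256(code_bytes).hexdigest()

def verify_attestation(reported_measurement: str, expected_code: bytes) -> bool:
    """Remote client side: accept the enclave only if it runs the expected code."""
    return reported_measurement == measure(expected_code)

enclave_code = b"def private_inference(x): ..."
print(verify_attestation(measure(enclave_code), enclave_code))  # True
```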
Open technical problems:
(1) Build AI systems that leverage secure enclaves to ensure data confidentiality, user privacy and decision integrity, possibly by splitting the AI system's code between a minimal code base running within the enclave, and code running outside the enclave. Ensure the code inside the enclave does not leak information, or compromise decision integrity.
R5: Adversarial learning.
Evasion attacks
At inference time: perturbing an input image causes a misclassification (see the FGSM-style sketch below); there is no good general defense at present
Data poisoning attacks
At training time: mixing mislabeled data into the training set; especially when an AI system learns continually, untrusted training data is more likely to cause errors
Replay and explainability can be used to weed out some of the poisoned data
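A minimal sketch of the inference-time evasion attack above, using FGSM (Fast Gradient Sign Method) as one standard instance; the paper does not name FGSM, and a differentiable PyTorch classifier is assumed.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """FGSM: move the input a small step in the direction that increases the
    loss, producing an adversarial example that can flip the prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)   # keep the image in a valid pixel range
    return x_adv.detach()
```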
Open technical problems:
(1) Build AI systems that are robust against adversarial inputs both during training and prediction (e.g., decision making), possibly by designing new machine learning models and network architectures, leveraging provenance to track down fraudulent data sources, and replaying to redo decisions after eliminating the fraudulent sources.
R6: Shared learning on confidential data.
Examples: parties that both compete and cooperate
Banks sharing fraud-detection models and data
Hospitals sharing flu-detection data and models
The goal is to train models while keeping each party's data confidential
One approach is to run everything inside the hardware-isolated secure enclaves described in R4
Another approach is to use special algorithms, at a significant cost to training performance
Multi-party computation (MPC): several parties jointly perform a computation (a toy sketch follows after these two steps):
(1) local computation: each party computes gradients locally
(2) computation using MPC: the gradients are combined
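A toy sketch of the two steps above using additive secret sharing: each party splits its locally computed gradient into random shares, so the aggregator only ever sees sums of shares and learns the combined gradient but no individual one. Real MPC protocols are considerably more involved.

```python
import numpy as np

def make_shares(gradient, n_parties, rng):
    """Split a gradient vector into n additive shares that sum back to it."""
    shares = [rng.normal(size=gradient.shape) for _ in range(n_parties - 1)]
    shares.append(gradient - sum(shares))
    return shares

def secure_sum(all_shares):
    """Combine every share; individual gradients are never revealed."""
    return sum(sum(party_shares) for party_shares in all_shares)

rng = np.random.default_rng(42)
# Step (1): each organization computes its gradient locally (made-up values).
local_grads = [rng.normal(size=4) for _ in range(3)]
# Step (2): the gradients are combined via the MPC-style secure sum.
shared = [make_shares(g, n_parties=3, rng=rng) for g in local_grads]
print(np.allclose(secure_sum(shared), sum(local_grads)))   # True
```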
Open technical problems:
Build AI systems that (1) can learn across multiple data sources without leaking information from a data source during training or serving, and (2) provide incentives to potentially competing organizations to share their data or models.
AI-specific architectures
R7: Domain-specific hardware.
Moore's Law is ending, while AI places ever heavier demands on compute and memory access
On the processor side:
TPUs, FPGAs
On the DRAM and SSD side:
3D XPoint from Intel and Micron aims to provide 10x storage capacity with DRAM-like performance (a denser, DRAM-class memory tier)
STT MRAM aims to succeed Flash, which may hit similar scaling limits as DRAM (a faster successor to today's SSDs)
Server configurations will become more varied and more heterogeneous
Reference architecture:
https://bar.eecs.berkeley.edu/projects/2015-firebox.html
Open technical problems:
(1) Design domain-specific hardware architectures to improve the performance and reduce power consumption of AI applications by orders of magnitude, or enhance the security of these applications. (faster, lower power, more secure)
(2) Design AI software systems to take advantage of these domain-specific architectures, resource disaggregation architectures, and future non-volatile storage technologies. (systems that can schedule a wider variety of heterogeneous hardware)
R8: Composable AI systems
- Model composition
The importance of modularity and reuse, analogous to today's microservice architectures
Future AI systems are expected to expose layered API services as well
Composition patterns, e.g. our own design of a single detection pass followed by several recognition/classification passes
A sequence of models ordered from low to high accuracy, queried serially to trade latency against accuracy (see the cascade sketch below)
Small models on the device, large models in the cloud
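A hedged sketch of the serial low-to-high-accuracy cascade described in the list above: query the cheap model first and escalate to the more accurate, more expensive model only while confidence is low and the latency budget allows. Model objects, thresholds and budgets are illustrative assumptions.

```python
import time
import numpy as np

def cascade_predict(models, x, confidence_threshold=0.9, latency_budget_s=0.05):
    """Query models ordered from low to high accuracy; stop early once the
    current model is confident enough or the latency budget is used up."""
    start = time.monotonic()
    probs = None
    for model in models:                      # e.g. [edge_small_model, cloud_large_model]
        probs = model.predict_proba(x.reshape(1, -1))[0]
        confident = probs.max() >= confidence_threshold
        out_of_time = time.monotonic() - start >= latency_budget_s
        if confident or out_of_time:
            break
    return int(np.argmax(probs)), float(probs.max())
```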
Open technical problems:
(1) designing a declarative language to capture the topology of these components and specifying performance targets of the applications,
(2) providing accurate performance models for each component, including resource demands, latency and throughput, and
(3) scheduling and optimization algorithms to compute the execution plan across components, and map components to the available resources to satisfy latency and throughput requirements while minimizing costs.
This is similar to what a SQL query planner does: make full use of the available resources, and batch with configurable latency controls.
Reference architectures (server side):
TensorFlow Serving
Clipper
- Action composition
Compose fine-grained actions into higher-level options
Higher-level options mean fewer choices to consider and therefore faster training
Example:
In autonomous driving, an abstracted "change lane" option = (accelerate or decelerate, steer left or right, signal the lane change); a toy sketch follows below
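A toy sketch of the lane-change option above: a high-level option is just a named composition of primitive actions, so the learner chooses among a few options instead of many low-level actions. The action names and `controller.do()` interface are assumptions for illustration.

```python
# Primitive actions the low-level controller already knows how to execute.
PRIMITIVES = {"accelerate", "decelerate", "steer_left", "steer_right", "signal_lane_change"}

class Option:
    """A high-level action composed of a fixed sequence of primitive actions."""
    def __init__(self, name, primitive_sequence):
        assert set(primitive_sequence) <= PRIMITIVES
        self.name = name
        self.primitive_sequence = primitive_sequence

    def execute(self, controller):
        for action in self.primitive_sequence:
            controller.do(action)   # controller.do() is an assumed interface

# "Change lane to the left" = signal, adjust speed, then steer.
change_lane_left = Option(
    "change_lane_left",
    ["signal_lane_change", "decelerate", "steer_left", "accelerate"],
)
```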
Open technical problems:
(1) Design AI systems and APIs that allow the composition of models and actions in a modular and flexible manner, and develop rich libraries of models and options using these APIs to dramatically simplify the development of AI applications.
R9: Cloud-edge systems
Advantages of the edge:
edge devices improve security, privacy, latency and safety
Technical difficulties:
the effort of adapting to many kinds of devices and software stacks
compilers and just-in-time (JIT) technologies to efficiently compile on-the-fly complex algorithms and run them on edge devices. This approach can leverage recent code generation tools, such as TensorFlow's XLA [107], Halide [50], and Weld [83].
The small-model-at-the-edge plus large-model-in-the-cloud pattern is already used in video-recognition systems; the workload needs to shift flexibly between edge and cloud
Edge model: small, lower accuracy, updated less frequently
Cloud model: large, higher accuracy, updated more frequently
Even with 5G and a powerful cloud, the capacity and cost of network and storage mean we cannot keep everything the devices produce, so edge data has to be reduced to samples and sketches (upload summary statistics, retain a sample); a small sketch follows below
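A small sketch of the samples-and-sketches idea for edge data: keep a fixed-size uniform sample of the stream (reservoir sampling) plus cheap running statistics, and upload only those instead of the raw stream. This is one common instance of the technique, not the paper's specific design.

```python
import random

class EdgeDataReducer:
    """Retain a bounded uniform sample plus running statistics of a data stream."""
    def __init__(self, reservoir_size=1000, seed=0):
        self.reservoir_size = reservoir_size
        self.reservoir = []
        self.count = 0
        self.total = 0.0
        self.rng = random.Random(seed)

    def observe(self, value):
        self.count += 1
        self.total += value                      # running "sketch": count and mean
        if len(self.reservoir) < self.reservoir_size:
            self.reservoir.append(value)
        else:
            j = self.rng.randrange(self.count)   # reservoir sampling (Algorithm R)
            if j < self.reservoir_size:
                self.reservoir[j] = value

    def summary(self):
        mean = self.total / self.count if self.count else 0.0
        return {"count": self.count, "mean": mean, "sample": self.reservoir}
```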
Open technical problems:
Design cloud-edge AI systems that
(1) leverage the edge to reduce latency, improve safety and security, and implement intelligent data retention techniques,
(2) leverage the cloud to share data and models across edge devices, train sophisticated computation-intensive models, and take high quality decisions.