TEDxASU: How Does Self-taught Artificial Intelligence Understand Medical Images?

Author: Zongwei Zhou | 周縱葦
Weibo: @MrGiovanni
Email: zongweiz@asu.edu

How do your studies/endeavors impact your field?


My research focuses on developing novel computational methodologies to minimize the annotation effort required for computer-aided diagnosis, therapy, and surgery. Interest in applying convolutional neural networks (CNNs) to biomedical image analysis is intense and widespread, but their success is impeded by the lack of large annotated datasets in biomedical imaging. Annotating biomedical images is not only tedious and time-consuming, but also demands costly, specialty-oriented knowledge and skills that are not easily accessible. Therefore, we seek to answer a critical question: how can we dramatically reduce the cost of annotation when applying CNNs to medical imaging? I believe that developing novel learning algorithms is essential to this quest.

To dramatically reduce annotation cost, one of our studies presents a novel method called AIFT (active, incremental fine-tuning) that naturally integrates active learning and transfer learning into a single framework. By repeatedly recommending the most informative and representative samples for experts to label, this work has shown that the cost of expert annotation can be cut by at least half. Owing to its precise diagnostic performance at a significantly reduced annotation budget, this active learning framework has led to one US patent, with two additional patents pending.
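For readers who want a concrete picture, below is a minimal sketch of the active-learning idea behind AIFT: rank unlabeled samples by prediction uncertainty, send only the most uncertain ones to an expert, and incrementally fine-tune the current model. The `model.predict_proba`, `model.finetune`, and `oracle.annotate` interfaces are hypothetical stand-ins, and the real AIFT selection criteria are richer than plain entropy.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Prediction entropy per sample: higher means the model is less certain."""
    probs = np.clip(probs, eps, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def select_for_annotation(model, unlabeled, batch_size=16):
    """Rank unlabeled samples by uncertainty; return the indices most worth labeling."""
    probs = model.predict_proba(unlabeled)          # (n_samples, n_classes); hypothetical API
    return np.argsort(entropy(probs))[::-1][:batch_size]

def active_incremental_finetuning(model, unlabeled, oracle, rounds=10):
    """Simplified active, incremental fine-tuning loop: query the expert only
    for the most informative samples, then fine-tune the current model rather
    than retraining from scratch."""
    labeled_x, labeled_y = [], []
    for _ in range(rounds):
        idx = select_for_annotation(model, unlabeled)
        labeled_x.extend(unlabeled[idx])
        labeled_y.extend(oracle.annotate(unlabeled[idx]))   # expert provides labels
        unlabeled = np.delete(unlabeled, idx, axis=0)
        model.finetune(np.stack(labeled_x), np.array(labeled_y))
    return model
```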

To further reduce annotation effort across varying visual tasks in medical imaging, our recent work has built a set of models, called Models Genesis, so named because they learn representations directly from a large number of unlabeled images and then generate powerful target models through transfer learning. We envision that Models Genesis may serve as a primary source of transfer learning for 3D medical imaging applications, in particular those with limited annotated data. In recognition of these contributions, I received a Young Scientist Award at MICCAI 2019, one of the two most prestigious conferences in medical image analysis.
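To make the "learning from unlabeled images" step concrete, here is a deliberately simplified 2D sketch in PyTorch of the kind of self-supervised image-restoration pretraining Models Genesis builds on: corrupt an image, ask an encoder-decoder to restore the original, and keep the encoder for transfer. The corruption used here (noise plus local patch shuffling) is only a stand-in for the transformations in our actual work, and every name is illustrative rather than the released Models Genesis code.

```python
import torch
import torch.nn as nn

def distort(x, patch=8):
    """Build a self-supervision input: add noise and shuffle a few local
    patches; the network must learn anatomy well enough to undo the damage."""
    noisy = x + 0.1 * torch.randn_like(x)
    n, c, h, w = noisy.shape
    for _ in range(4):
        i = torch.randint(0, h - patch, (1,)).item()
        j = torch.randint(0, w - patch, (1,)).item()
        block = noisy[:, :, i:i + patch, j:j + patch].reshape(n, c, -1)
        perm = torch.randperm(patch * patch)
        noisy[:, :, i:i + patch, j:j + patch] = block[:, :, perm].reshape(n, c, patch, patch)
    return noisy

class RestorationNet(nn.Module):
    """Tiny encoder-decoder; the encoder is what gets transferred later."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Pretraining needs no labels at all, only the images themselves.
net = RestorationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
images = torch.rand(8, 1, 64, 64)                 # stand-in for unlabeled scans
for _ in range(5):
    loss = nn.functional.mse_loss(net(distort(images.clone())), images)
    opt.zero_grad(); loss.backward(); opt.step()
```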

In as few words as possible, what is the essence of the idea you are trying to share?


Topic: How does self-taught artificial intelligence understand medical images?
Introduction: Vision, one of the oldest perceptual systems, emerged in animals roughly 540 million years ago. Today, computer vision systems manage not only to recognize objects in natural images but also to detect and analyze diseases in medical images, ultimately supporting radiologists and pathologists in diagnosing a wide variety of conditions and leading to sound clinical decision-making. How? Our team provides a generic algorithm that equips computers with a common visual representation learned from hundreds of available medical images. This common representation enables remarkable progress on more advanced medical vision applications such as the detection, identification, and segmentation of numerous diseases.

What does FLUX mean to you?


Trends in artificial intelligence (AI) are bridging the state of medical image analysis between today and tomorrow. Having worked on medical imaging for nearly five years, I have witnessed, been influenced by, and, meanwhile, produced FLUX in this field. The application of AI in radiology to process and analyze medical images is becoming faster, more efficient, and more precise. To me, the changes in this field over the past few years are unprecedented.

First, the marriage between AI and medical imaging has, by far, been the hottest topic at nearly all medical conferences for radiology, cardiology, and several other subspecialties over the past two years. Take MICCAI, one of the most prestigious conferences in medical image analysis, as an example: a total of 538 papers were accepted this year from a record of more than 1,800 submissions, an increase of 63% over last year, and the more than 2,300 registered attendees this year doubled the 2017 count.

Second, computer-aided diagnosis is becoming more accurate and applicable to a variety of medical imaging tasks, resulting in eye-catching media headlines such as “The AI Doctor Will See You Now”, “Your Future Doctor May Not Be Human”, and “This AI Just Beat Human Doctors on a Clinical Exam.” Ten years ago, machine learning algorithms could achieve merely 70-80% accuracy; now, thanks to the innovative analytic strategies known as deep learning, the accuracy of many disease recognition tasks has been boosted to the 90% level, making them applicable in clinical practice. In recent years, a steep increase in the number of FDA approvals reveals the versatility of AI in medical applications.

The accelerating prosperity of AI in both academia and industry thus encourages me, as an AI researcher, to explore the frontier of AI in healthcare applications. My own research aims to advance application-oriented intelligence toward general intelligence, capable across diseases, organs, and, most importantly, modalities. Such an overwhelming FLUX of AI in medical imaging will therefore serve as both a spur and an opportunity for me.

How will your talk relate to our theme of FLUX?


Artificial intelligence (AI) technologies have the potential to transform medical image analysis by deriving new and important insights from the vast amount of image data generated during the delivery of health care every day. However, their success is impeded by the lack of large annotated datasets, because a high-performing AI system is extremely data-hungry. Annotating biomedical images is not only tedious and time-consuming, but also demands costly, specialty-oriented knowledge and skills, which has severely obstructed the development of AI in medical imaging.

In my talk, I will discuss the next generation of AI systems, Models Genesis, which learn directly from medical images without requiring as much annotated data. They are faster, more flexible, and, like humans, more innately intelligent. These computers navigate medical images to understand characteristic organ texture, layout, and anatomical structure.

The idea of self-taught Models Genesis is not only highly innovative methodologically but also expected to exert substantial impact clinically. For instance, four years ago we offered the world's leading AI solution for detecting pulmonary embolism (PE) from CT scans; PE is one of the leading causes of hospital deaths that are preventable with early diagnosis and treatment. Today, building on that state-of-the-art PE detection system, we have further increased diagnostic accuracy by 8% using self-taught Models Genesis trained on hundreds of thousands of patient CT scans. Beyond pulmonary embolism, we have demonstrated that Models Genesis benefits many other medical applications, including multiple types of disease localization, identification, and segmentation, tested on CT, MRI, ultrasound, and X-ray images.
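For the transfer step itself, the pretrained encoder simply becomes the starting point for a labeled target task such as PE classification and is fine-tuned end to end. The sketch below continues the hypothetical PyTorch example from earlier; the encoder architecture and the random stand-in data are illustrative only, not our released models or clinical data.

```python
import torch
import torch.nn as nn

class TargetClassifier(nn.Module):
    """Reuse the self-taught encoder and add a small task-specific head."""
    def __init__(self, pretrained_encoder, n_classes=2):
        super().__init__()
        self.encoder = pretrained_encoder             # weights from self-supervised pretraining
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes))                 # 32 = encoder output channels

    def forward(self, x):
        return self.head(self.encoder(x))

# Redefined here so this snippet runs alone; in practice this would be
# `net.encoder` carrying the weights learned in the pretraining sketch.
pretrained_encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

# Fine-tune on a small labeled target set, e.g. PE vs. non-PE image patches.
clf = TargetClassifier(pretrained_encoder)
opt = torch.optim.Adam(clf.parameters(), lr=1e-4)
x = torch.rand(8, 1, 64, 64)                          # stand-in labeled images
y = torch.randint(0, 2, (8,))                         # stand-in binary labels
for _ in range(5):
    loss = nn.functional.cross_entropy(clf(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```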

In the TED fashion of benefiting the research and industry community, we are making the development of Models Genesis open science, releasing Models Genesis to the public for free and inviting researchers around the world to contribute to this effort. We believe this self-taught AI, which learns common knowledge from a tremendous number of patient medical images, will lead to a remarkable cut in annotation cost across diverse medical imaging applications, offering a noticeable impact amid such a FLUX of AI.

Why is a TEDx talk the best format to showcase your idea?


TEDxASU audiences are drawn from a wide variety of fields. While most people are not very familiar with the concept of artificial intelligence (AI), their lives are already being transformed by this technology in every walk of life, such as face recognition, news recommendation, and, in the near future, disease diagnosis. I would like to bring the recent progress of AI in medical image analysis to the TEDxASU community, illustrated by our state-of-the-art pulmonary embolism detection system. My hope for this brief talk is to present self-taught AI, as a “gift”, to an audience of varied backgrounds, opinions, and interests. The broad, multi-domain TEDxASU audience is therefore a perfect fit.

Through this stage, I would also like to spotlight our team (JLiang Lab) in the Department of Biomedical Informatics at ASU. To help achieve the ASU Charter and Goals, which call on the university to “establish, with Mayo Clinic, innovative health solutions pathways capable of ... enhancing treatment for 2 million patients”, Prof. Jianming Liang has established strong collaborations with Mayo Clinic across multiple departments and divisions. Collaborating with Mayo Clinic, the No. 1 hospital in the nation, our team is one of the leading groups in medical imaging, especially in pulmonary embolism detection. As a medical AI researcher, I am fully convinced that a joint research agenda will give hospitals and academia the ability to develop the high-tech healthcare of tomorrow.

We are continuing to seek worldwide cooperation, which encourages me to stand on the TEDxASU stage and share our self-taught medical AI technology, Models Genesis. In fact, we envision that Models Genesis may serve as a primary source of transfer learning for 3D medical imaging applications, in particular those with limited annotated data. In the spirit of open science, we will release Models Genesis to the public for free and invite researchers around the world to contribute to this effort. We hope that our collective efforts will lead to the Holy Grail of Models Genesis: effective across diseases, organs, and modalities.

What experience do you have with public speaking? Can you provide link(s) to recording(s) of previous talks?


https://www.youtube.com/watch?v=PPKGCvBbj_k&t=206s
Title: “Oral Presentation at MICCAI 2019”
This is a recording of my talk at the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2019), where I was a finalist for the Best Presentation Award.
