When Professor Anant Madabhushi began to plan Emory's first symposium on artificial intelligence in health, he wondered if attendance would be enough to fill the 160 seats in the Health Sciences Research Building auditorium. "It turns out," he says, "we had to shut off registration the day before, because we topped 450. It was remarkable to see the excitement, the enthusiasm." Madabhushi leads the newly established Emory Empathetic AI for Health Institute, part of the university-wide AI.Humanity Initiative, which was created with the innovative goal of breaking down disciplinary barriers between researchers working in AI, medicine and the humanities, letting them work with common purpose to use the power of machine learning and big data for disease prevention and better patient care.
He attributes the large turnout to the robust AI community, not only among students and faculty at Emory but at Georgia Tech, the University of Georgia, the Atlanta Veterans Affairs Medical Center, Georgia State and Morehouse as well, and to interest from multiple corporate partners.

Attendees who came to learn about the broadest possible spectrum of AI topics weren't disappointed. While some sessions focused on AI's clinical potential in areas like health care diagnosis, genomics and pathology, others examined AI's impact on medical privacy and security, or the challenge of creating databases free of bias. Difficult, troublesome issues AI researchers will need to face were placed in the symposium's program so that they alternated with discussions of medical innovations.

In two days, attendees learned about leading-edge research from radiology to acute care to public health, including AI's challenge to patient privacy, how AI models can perform more precise diagnoses, and how to make diverse Emory Healthcare patient data more widely available to investigators. "An example," said Madabhushi, "is the work where we showed AI could be used to prise out subtle differences in the appearance of prostate cancer between Black men and white men. We used these subtle differences to create population-tailored models that we showed were more accurate in risk stratification and prediction of disease recurrence in Black men, compared to a population-agnostic model."

On day one, Madabhushi also held a lunchtime fireside chat with Joe Depa, Emory's new chief data officer, to discuss the daunting task of managing the oceans of medical data AI requires. The diversity of subjects (the promising developments as well as the cautions) was part of the point of the gathering. Provost Ravi Bellamkonda emphasized the need for people to become what he called "bilingual," understanding both engineering and medicine with no artificial silos to separate them.
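Madabhushi's contrast between population-tailored and population-agnostic models can be illustrated with a toy sketch. Everything below is synthetic and assumed (invented feature weights, a plain gradient-descent logistic fit, a rank-based AUC); it is not the Emory team's data or method. It only shows that when feature effects differ between two groups, a model fit within one group stratifies that group better than a model pooled over both:

```python
# Synthetic sketch: population-tailored vs. population-agnostic risk models.
# All numbers are invented for illustration; this is not the Emory data.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, w):
    """Simulate n patients whose recurrence risk follows logistic weights w."""
    X = rng.normal(size=(n, 3))           # three imaging-derived features
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # true recurrence probability
    y = (rng.random(n) < p).astype(int)   # observed recurrence label
    return X, y

# Assumption for the sketch: the same features carry different weight per group.
Xa, ya = make_group(2000, np.array([2.0, 0.1, 0.1]))  # group A
Xb, yb = make_group(2000, np.array([0.1, 2.0, 0.1]))  # group B

def fit_logistic(X, y, steps=500, lr=0.1):
    """Minimal gradient-descent logistic regression (no external deps)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def auc(y, score):
    """Rank-based (Mann-Whitney) area under the ROC curve."""
    order = np.argsort(score)
    rank = np.empty(len(y))
    rank[order] = np.arange(1, len(y) + 1)
    n_pos, n_neg = (y == 1).sum(), (y == 0).sum()
    return (rank[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

w_agnostic = fit_logistic(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))
w_tailored = fit_logistic(Xb, yb)  # fit on group B only

auc_agnostic = auc(yb, Xb @ w_agnostic)
auc_tailored = auc(yb, Xb @ w_tailored)
print(f"group-B AUC, population-agnostic model: {auc_agnostic:.3f}")
print(f"group-B AUC, population-tailored model: {auc_tailored:.3f}")
```

The same qualitative gap is why models are fit and validated within each population rather than only on pooled data.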
"This is when we see the true power of AI," Bellamkonda said. Some of that power is already being realized at Emory.

One example of the bilingual abilities Bellamkonda called for is the husband-and-wife team of Gari Clifford, professor of biomedical informatics, and Rachel Hall-Clifford, professor of anthropology. The pair worked with Indigenous Mayan midwives in Guatemala to create devices that low-literacy midwives can use to gauge a range of problems in pregnant women. "You can't simulate an adverse birth," Clifford notes. "It's unethical. But you can build models, and we've done that. The Food and Drug Administration came back and said, 'What would happen when, pathologically, the fetal heart rate goes below the mother's?' We said, well, we don't have any data. But we can simulate it. We simulated exactly how it would happen, and the FDA accepted that, and we showed that our algorithm would work on that particular type of pathology. I think that's where it's exciting. We're moving from experimenting on humans to experimenting in silico more and more."

At the School of Nursing Center for Data Science, Professor Monique Bouvier is working to address a critical shortage of nurses by incorporating AI into patient care. "Within two years, 32 percent of our novice nurses are leaving the workforce because they don't have the resources to provide the care they need," Bouvier said. "What has changed is unprecedented advances in technology." Among other projects, the center is experimenting with using AI to monitor hospital patients. That includes a virtual platform doing one-on-one observation to make sure patients don't pull out their lines or get up to walk, then fall. "A virtual nurse who can provide that hands-off care to the patient will leave the bedside nurse more time with his or her patient, to provide the holistic nurturing care that we're taught in our schools of nursing," she said.
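Clifford's account of testing the algorithm in silico rather than on patients can be sketched in miniature. The heart rates, noise levels and alarm rule below are illustrative assumptions, not the team's device logic: we simulate a fetal deceleration that takes the fetal rate below the maternal rate (the FDA's question), then check that a simple monitoring rule flags it:

```python
# Illustrative simulation of the pathological crossover the FDA asked about:
# fetal heart rate dropping below the maternal rate. Rates, noise and the
# alarm rule are assumptions for this sketch, not the actual device logic.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(600)  # ten minutes of monitoring, one sample per second

# Maternal rate ~85 bpm with slow variation; fetal baseline ~140 bpm.
maternal = 85 + 3 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 1, t.size)
fetal = 140 + rng.normal(0, 2, t.size)
fetal[300:] -= np.linspace(0, 80, t.size - 300)  # deceleration after t=300 s

def crossover_alarm(fhr, mhr, window=30):
    """Return the first time (s) the fetal rate stays below the maternal
    rate for a full `window` seconds, or None if that never happens."""
    below = (fhr < mhr).astype(int)
    run = np.convolve(below, np.ones(window, dtype=int), mode="valid")
    hits = np.where(run == window)[0]
    return int(hits[0]) if hits.size else None

alarm_at = crossover_alarm(fetal, maternal)
print("sustained crossover alarm at t =", alarm_at, "s")
```

Tuning the window trades sensitivity against false alarms, which is exactly the behavior an in-silico trial can explore before any clinical use.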
Some participants approached the "bilingual" goal through a deep dive into the problem of creating AI models and databases that clinicians can use. It's a steep technical challenge. Tony Pan, professor of biomedical informatics, discussed the demands of developing a trial algorithm that could predict sepsis, drawing on data from Epic, Emory's patient record system (on the regulatory system, liver function, cardiovascular function and the nervous system), without compromising patient privacy. The ultimate goal is to develop a model clinicians could use to predict a patient's likelihood of developing sepsis.

Marly van Assen, professor of radiology and imaging sciences, reported on efforts to predict heart and vascular disease by integrating multiple kinds of risk data into a single model. "We've seen all these studies that prove that multimodal data works," she said. "Why are we not using this currently? Getting large databases is hard. Patients move around and change doctors and hospitals. Not a lot of data is actively recorded, especially when it comes to risk factors that don't come directly from, for example, lab tests. A lot of the patients we're interested in don't necessarily show up for, or are not referred for, an imaging exam because, for example, they lack access or don't experience the typical symptoms. And those are patients that would be very interesting for studies, to see if we can also improve their outcomes."

The power, reach and intrusiveness of AI had many symposium participants worried about effects on patient privacy. "You can predict the sex of the individual," Madabhushi noted. "You can predict race. You can look at a scan of the eye and predict a whole series of different cardiometabolic conditions. Do we really think we're going to be at a point where we can truly preserve the privacy of an individual, given that AI is able to prise out so much for so little?" Other participants worried that future AI-assisted attacks might break into hospital systems and re-identify private patient data by associating it with information on social media and the like.

An entire discussion focused on persistent concerns about ethical questions raised by the hidden biases widely recognized as embedded in the data used to train AI models. It's an issue looming over the entire field of AI. "I have this weird definition of ethics," said professor of radiology Janice Newsome. "I ask, 'Who is this good for?' This is a question we have to ask at the beginning, in the middle and at the end, as we start thinking about how we ethically introduce disruptive technologies into our space." Gray areas are everywhere in AI ethics, according to computer science professor Joyce Ho. "The uncomfortable truth is that students don't like gray. What we need to teach, before we even get to the ethics, is the fact that gray is built into the plan." John Banja, a medical ethicist at the Emory University Center for Ethics, noted that there are many different definitions of fairness. "There is no one-size-fits-all," he said. "When we talk about AI, we're going to be talking about very specific kinds of cases, about specific ethical dilemmas that emerge from these cases. We are at the infancy of these kinds of problems."

This kind of ethically informed thinking behind the Empathetic AI for Health Institute may have helped drive the symposium's large turnout. Madabhushi, bragging that he didn't compose his remarks on ChatGPT, recalled the health care disparity he saw growing up in India. "I lost my aunt due to breast cancer when I was in my teens," he observed.
"And I think at a very young age I realized the importance of empathy in the practice of medicine." The word "empathetic" in the institute's name is a reflection of how much the quality of empathy, and the challenge of achieving it, matter at Emory. The institute's leaders know that American health care outcomes still show big disparities between different groups, and they're focused on working to make AI models more inclusive. As they work to leverage Emory's strengths in areas like oncology, cardiovascular health, brain health, diabetes, HIV and immunology, they're also focused on three goals: to develop new AI technologies, to introduce them into clinical practice and to scale them up through industry partnerships. "As we think about AI," Madabhushi concluded, "we need to make sure that we're imbuing that same sense of empathy in the development and the application of AI tools for precision medicine."
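The re-identification worry raised in the privacy discussion can be made concrete with a classic linkage-attack sketch. The tables and names below are invented; the point is only that quasi-identifiers left in a "de-identified" export can uniquely match public profiles:

```python
# Invented example of a linkage attack: joining a "de-identified" hospital
# export to public profiles on shared quasi-identifiers. All records and
# names are fictional.

# De-identified export: names removed, but quasi-identifiers remain.
medical = [
    {"zip": "30322", "birth_year": 1961, "sex": "F", "diagnosis": "sepsis"},
    {"zip": "30307", "birth_year": 1988, "sex": "M", "diagnosis": "asthma"},
]

# Public information, e.g. scraped from social media profiles.
profiles = [
    {"name": "alice", "zip": "30322", "birth_year": 1961, "sex": "F"},
    {"name": "bob",   "zip": "30307", "birth_year": 1988, "sex": "M"},
    {"name": "carol", "zip": "30307", "birth_year": 1990, "sex": "F"},
]

def reidentify(medical, profiles, keys=("zip", "birth_year", "sex")):
    """Link records on quasi-identifiers; a unique match leaks a diagnosis."""
    leaked = {}
    for rec in medical:
        matches = [p for p in profiles if all(p[k] == rec[k] for k in keys)]
        if len(matches) == 1:  # a unique match re-identifies the record
            leaked[matches[0]["name"]] = rec["diagnosis"]
    return leaked

print(reidentify(medical, profiles))  # → {'alice': 'sepsis', 'bob': 'asthma'}
```

Coarsening the quasi-identifiers (for example, truncated ZIP codes or age bands) makes matches ambiguous and is a standard mitigation.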
Source: Record turnout at Emory's first AI in health symposium