Showing 1 - 10 of 274 for search: '"Zitnik, Marinka"'
Author:
Chen, Jintai, Hu, Yaojun, Wang, Yue, Lu, Yingzhou, Cao, Xu, Lin, Miao, Xu, Hongxia, Wu, Jian, Xiao, Cao, Sun, Jimeng, Glass, Lucas, Huang, Kexin, Zitnik, Marinka, Fu, Tianfan
Clinical trials are pivotal for developing new medical treatments, yet they typically pose risks such as patient mortality, adverse events, and enrollment failure that can waste immense effort spanning over a decade. Applying artificial intelligence …
External link:
http://arxiv.org/abs/2407.00631
Author:
Zheng, Kangyu, Lu, Yingzhou, Zhang, Zaixi, Wan, Zhongwei, Ma, Yao, Zitnik, Marinka, Fu, Tianfan
Currently, the field of structure-based drug design is dominated by three main types of algorithms: search-based algorithms, deep generative models, and reinforcement learning. While existing works have typically focused on comparing models within a …
External link:
http://arxiv.org/abs/2406.03403
This paper introduces a min-max optimization formulation for the Graph Signal Denoising (GSD) problem. In this formulation, we first maximize the second term of GSD by introducing perturbations to the graph structure based on Laplacian distance, and t…
External link:
http://arxiv.org/abs/2406.02059
Medical image interpretation using deep learning has shown promise but often requires extensive expert-annotated datasets. To reduce this annotation burden, we develop an Image-Graph Contrastive Learning framework that pairs chest X-rays with structured …
External link:
http://arxiv.org/abs/2405.09594
Author:
Gao, Shanghua, Fang, Ada, Huang, Yepeng, Giunchiglia, Valentina, Noori, Ayush, Schwarz, Jonathan Richard, Ektefaie, Yasha, Kondic, Jovana, Zitnik, Marinka
We envision 'AI scientists' as systems capable of skeptical learning and reasoning that empower biomedical research through collaborative agents that integrate machine learning tools with experimental platforms. Rather than taking humans out of the d…
External link:
http://arxiv.org/abs/2404.02831
Author:
Jeong, Hyewon, Jabbour, Sarah, Yang, Yuzhe, Thapta, Rahul, Mozannar, Hussein, Han, William Jongwon, Mehandru, Nikita, Wornow, Michael, Lialin, Vladislav, Liu, Xin, Lozano, Alejandro, Zhu, Jiacheng, Kocielnik, Rafal Dariusz, Harrigian, Keith, Zhang, Haoran, Lee, Edward, Vukadinovic, Milos, Balagopalan, Aparna, Jeanselme, Vincent, Matton, Katherine, Demirel, Ilker, Fries, Jason, Rashidi, Parisa, Beaulieu-Jones, Brett, Xu, Xuhai Orson, McDermott, Matthew, Naumann, Tristan, Agrawal, Monica, Zitnik, Marinka, Ustun, Berk, Choi, Edward, Yeom, Kristen, Gursoy, Gamze, Ghassemi, Marzyeh, Pierson, Emma, Chen, George, Kanjilal, Sanjat, Oberst, Michael, Zhang, Linying, Singh, Harvineet, Hartvigsen, Tom, Zhou, Helen, Okolo, Chinasa T.
The third ML4H symposium was held in person on December 10, 2023, in New Orleans, Louisiana, USA. The symposium included research roundtable sessions to foster discussion between participants and senior researchers on timely and relevant topics for …
External link:
http://arxiv.org/abs/2403.01628
Author:
Gao, Shanghua, Koker, Teddy, Queen, Owen, Hartvigsen, Thomas, Tsiligkaridis, Theodoros, Zitnik, Marinka
Advances in time series models are driving a shift from conventional deep learning methods to pre-trained foundational models. While pre-trained transformers and reprogrammed text-based LLMs report state-of-the-art results, the best-performing architectures …
External link:
http://arxiv.org/abs/2403.00131
Author:
Sun, Lichao, Huang, Yue, Wang, Haoran, Wu, Siyuan, Zhang, Qihui, Li, Yuan, Gao, Chujie, Huang, Yixin, Lyu, Wenhan, Zhang, Yixuan, Li, Xiner, Liu, Zhengliang, Liu, Yixin, Wang, Yijue, Zhang, Zhikun, Vidgen, Bertie, Kailkhura, Bhavya, Xiong, Caiming, Xiao, Chaowei, Li, Chunyuan, Xing, Eric, Huang, Furong, Liu, Hao, Ji, Heng, Wang, Hongyi, Zhang, Huan, Yao, Huaxiu, Kellis, Manolis, Zitnik, Marinka, Jiang, Meng, Bansal, Mohit, Zou, James, Pei, Jian, Liu, Jian, Gao, Jianfeng, Han, Jiawei, Zhao, Jieyu, Tang, Jiliang, Wang, Jindong, Vanschoren, Joaquin, Mitchell, John, Shu, Kai, Xu, Kaidi, Chang, Kai-Wei, He, Lifang, Huang, Lifu, Backes, Michael, Gong, Neil Zhenqiang, Yu, Philip S., Chen, Pin-Yu, Gu, Quanquan, Xu, Ran, Ying, Rex, Ji, Shuiwang, Jana, Suman, Chen, Tianlong, Liu, Tianming, Zhou, Tianyi, Wang, William, Li, Xiang, Zhang, Xiangliang, Wang, Xiao, Xie, Xing, Chen, Xun, Wang, Xuyu, Liu, Yan, Ye, Yanfang, Cao, Yinzhi, Chen, Yong, Zhao, Yue
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, …
External link:
http://arxiv.org/abs/2401.05561
Post hoc explanations have emerged as a way to improve user trust in machine learning models by providing insight into model decision-making. However, explanations tend to be evaluated based on their alignment with prior knowledge, while the faithfulness …
External link:
http://arxiv.org/abs/2312.05690
Author:
Zhong, Shanshan, Huang, Zhongzhan, Gao, Shanghua, Wen, Wushao, Lin, Liang, Zitnik, Marinka, Zhou, Pan
Chain-of-Thought (CoT) guides large language models (LLMs) to reason step by step and can strengthen their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving, which often requires out-of-box t…
External link:
http://arxiv.org/abs/2312.02439