Showing 1 - 10 of 627 for the search: '"Wang Zichao"'
Author:
DU Jinlei, ZHANG Minbo, ZHANG Dianji, ZHANG Dangyu, ZHANG Zhen, CUI Li, WANG Zichao, LI Chunxin, ZHANG Fujian
Published in:
Gong-kuang zidonghua, Vol 47, Iss 7, Pp 98-105 (2021)
A single hydraulic cutting is not very effective at increasing permeability and relieving pressure in a thick coal seam with low permeability or a coal seam containing gangue. In order to solve this problem, taking the 21212 working face of …
External link:
https://doaj.org/article/bcf5452078dd424da9a686b44242f231
Author:
Lin, Zihao, Wang, Zichao, Pan, Yuanting, Manjunatha, Varun, Rossi, Ryan, Lau, Angela, Huang, Lifu, Sun, Tong
Suggested questions (SQs) provide an effective initial interface for users to engage with their documents in AI-powered reading applications. In practical reading sessions, users have diverse backgrounds and reading goals, yet current SQ features typically …
External link:
http://arxiv.org/abs/2412.12445
Author:
Zhang, Zhehao, Rossi, Ryan A., Kveton, Branislav, Shao, Yijia, Yang, Diyi, Zamani, Hamed, Dernoncourt, Franck, Barrow, Joe, Yu, Tong, Kim, Sungchul, Zhang, Ruiyi, Gu, Jiuxiang, Derr, Tyler, Chen, Hongjie, Wu, Junda, Chen, Xiang, Wang, Zichao, Mitra, Subrata, Lipka, Nedim, Ahmed, Nesreen, Wang, Yu
Personalization of Large Language Models (LLMs) has recently become increasingly important, with a wide range of applications. Despite this importance and recent progress, most existing works on personalized LLMs have focused either entirely on (a) personalized …
External link:
http://arxiv.org/abs/2411.00027
Author:
Zhang, Zhehao, Rossi, Ryan, Yu, Tong, Dernoncourt, Franck, Zhang, Ruiyi, Gu, Jiuxiang, Kim, Sungchul, Chen, Xiang, Wang, Zichao, Lipka, Nedim
While vision-language models (VLMs) have demonstrated remarkable performance across various tasks combining textual and visual information, they continue to struggle with fine-grained visual perception tasks that require detailed pixel-level analysis …
External link:
http://arxiv.org/abs/2410.16400
Despite rapid advancements in large language models (LLMs), question generation (QG) remains a challenging problem due to its complicated process, open-ended nature, and the diverse settings in which it occurs. A common approach to address these challenges …
External link:
http://arxiv.org/abs/2406.13188
Author:
Zhou, Yufan, Zhang, Ruiyi, Zheng, Kaizhi, Zhao, Nanxuan, Gu, Jiuxiang, Wang, Zichao, Wang, Xin Eric, Sun, Tong
In subject-driven text-to-image generation, recent works have achieved superior performance by training the model on synthetic datasets containing numerous image pairs. Trained on these datasets, generative models can produce text-aligned images for …
External link:
http://arxiv.org/abs/2406.09305
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks. However, it has been empirically found that LLMs fall short in recognizing and utilizing temporal information, rendering poor performance …
External link:
http://arxiv.org/abs/2405.02778
Published in:
Renmin Zhujiang, Vol 42 (2021)
Sludge pretreatment is important to the reduction, harmlessness, and recycling of sludge. In order to clarify the frontiers and hotspots of international research in the field of sludge pretreatment over the past 10 years, taking the core data collected by …
External link:
https://doaj.org/article/7a6c24f7b3034fc2ac00c0635ab407ba
Author:
Zhu, Sicheng, Zhang, Ruiyi, An, Bang, Wu, Gang, Barrow, Joe, Wang, Zichao, Huang, Furong, Nenkova, Ani, Sun, Tong
The safety alignment of Large Language Models (LLMs) can be compromised by manual jailbreak attacks and (automatic) adversarial attacks. Recent studies suggest that defending against these attacks is possible: adversarial attacks generate unlimited but …
External link:
http://arxiv.org/abs/2310.15140
We propose novel evaluations for the mathematical reasoning capabilities of Large Language Models (LLMs) based on mathematical misconceptions. Our primary approach is to simulate LLMs as a novice learner and an expert tutor, aiming to identify the incorrect …
External link:
http://arxiv.org/abs/2310.02439