Showing 1 - 10
of 455
for search: '"Wu, ZiHan"'
Co-clustering simultaneously clusters rows and columns, revealing more fine-grained groups. However, existing co-clustering methods suffer from poor scalability and cannot handle large-scale data. This paper presents a novel and scalable co-clustering …
External link:
http://arxiv.org/abs/2410.18113
Author:
Padiyath, Aadarsh, Hou, Xinying, Pang, Amy, Vargas, Diego Viramontes, Gu, Xingjian, Nelson-Fromm, Tamara, Wu, Zihan, Guzdial, Mark, Ericson, Barbara
The capability of large language models (LLMs) to generate, debug, and explain code has sparked the interest of researchers and educators in undergraduate programming, with many anticipating their transformative potential in programming education. However, …
External link:
http://arxiv.org/abs/2406.06451
Author:
Wu, Zihan, Smith IV, David H.
Parsons problems are a type of programming activity that presents learners with blocks of existing code and requires them to arrange those blocks to form a program rather than write the code from scratch. Micro Parsons problems extend this concept by …
External link:
http://arxiv.org/abs/2405.19460
Large Vision-Language Models (LVLMs) are gaining traction for their remarkable ability to process and integrate visual and textual data. Despite their popularity, the capacity of LVLMs to generate precise, fine-grained textual descriptions has not been …
External link:
http://arxiv.org/abs/2404.17534
To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for Large Models, called LMEraser. Existing unlearning research suffers from entangled …
External link:
http://arxiv.org/abs/2404.11056
Real-time collaborative editing in computational notebooks can improve the efficiency of teamwork for data scientists. However, working together through synchronous editing of notebooks introduces new challenges. Data scientists may inadvertently …
External link:
http://arxiv.org/abs/2404.04695
CodeTailor: LLM-Powered Personalized Parsons Puzzles for Engaging Support While Learning Programming
Learning to program can be challenging, and providing high-quality and timely support at scale is hard. Generative AI and its products, like ChatGPT, can create a solution for most intro-level programming problems. However, students might use these …
External link:
http://arxiv.org/abs/2401.12125
Semi-supervised semantic segmentation aims to utilize limited labeled images and abundant unlabeled images to achieve label-efficient learning, wherein the weak-to-strong consistency regularization framework, popularized by FixMatch, is widely used as …
External link:
http://arxiv.org/abs/2312.08631
After pre-training by generating the next word conditional on previous words, the Language Model (LM) acquires the ability of In-Context Learning (ICL): it can learn a new task conditional on the context of the given in-context examples (ICEs). …
External link:
http://arxiv.org/abs/2312.00351
Published in:
IEEE Transactions on Emerging Topics in Computational Intelligence 2024
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation. To address these issues, machine unlearning has emerged as a critical …
External link:
http://arxiv.org/abs/2308.07061