Showing 1 - 10
of 1,334
for search: '"Liu Mingjie"'
The remarkable reasoning and code generation capabilities of large language models (LLMs) have spurred significant interest in applying LLMs to enable task automation in digital chip design. In particular, recent work has investigated early ideas of …
External link:
http://arxiv.org/abs/2410.23299
Despite the significant progress made in code generation with large language models, challenges persist, especially with hardware description languages such as Verilog. This paper first presents an analysis of fine-tuned LLMs on Verilog coding, with …
External link:
http://arxiv.org/abs/2409.12993
The application of large-language models (LLMs) to digital hardware code generation is an emerging field. Most LLMs are primarily trained on natural language and software code. Hardware code, such as Verilog, represents only a small portion of the training …
External link:
http://arxiv.org/abs/2408.11053
Recent work targeting large language models (LLMs) for code generation demonstrated that increasing the amount of training data through synthetic code generation often leads to exceptional performance. In this paper we explore data pruning methods aimed …
External link:
http://arxiv.org/abs/2407.05040
Author:
Wang, Wentao, Xiao, Xi, Liu, Mingjie, Tian, Qing, Huang, Xuanyao, Lan, Qizhen, Roy, Swalpa Kumar, Wang, Tianyang
The accurate segmentation of medical images is crucial for diagnosing and treating diseases. Recent studies demonstrate that vision transformer-based methods have significantly improved performance in medical image segmentation, primarily due to their …
External link:
http://arxiv.org/abs/2405.12328
This paper presents a comparative analysis of total cost of ownership (TCO) and performance between domain-adapted large language models (LLMs) and state-of-the-art (SoTA) LLMs, with a particular emphasis on tasks related to coding assistance for chip …
External link:
http://arxiv.org/abs/2404.08850
This paper presents RTLFixer, a novel framework enabling automatic syntax error fixing for Verilog code with Large Language Models (LLMs). Despite LLMs' promising capabilities, our analysis indicates that approximately 55% of errors in LLM-generated …
External link:
http://arxiv.org/abs/2311.16543
Author:
Liu, Mingjie, Ene, Teodor-Dumitru, Kirby, Robert, Cheng, Chris, Pinckney, Nathaniel, Liang, Rongjian, Alben, Jonah, Anand, Himyanshu, Banerjee, Sanmitra, Bayraktaroglu, Ismet, Bhaskaran, Bonita, Catanzaro, Bryan, Chaudhuri, Arjun, Clay, Sharon, Dally, Bill, Dang, Laura, Deshpande, Parikshit, Dhodhi, Siddhanth, Halepete, Sameer, Hill, Eric, Hu, Jiashang, Jain, Sumit, Jindal, Ankit, Khailany, Brucek, Kokai, George, Kunal, Kishor, Li, Xiaowei, Lind, Charley, Liu, Hao, Oberman, Stuart, Omar, Sujeet, Pasandi, Ghasem, Pratty, Sreedhar, Raiman, Jonathan, Sarkar, Ambar, Shao, Zhengjiang, Sun, Hanfei, Suthar, Pratik P, Tej, Varun, Turner, Walker, Xu, Kaizhe, Ren, Haoxing
ChipNeMo aims to explore the applications of large language models (LLMs) for industrial chip design. Instead of directly deploying off-the-shelf commercial or open-source LLMs, we adopt the following domain adaptation techniques: domain-adaptive …
External link:
http://arxiv.org/abs/2311.00176
Author:
Gao, Xiaohan, Zhang, Haoyi, Ye, Siyuan, Liu, Mingjie, Pan, David Z., Shen, Linxiao, Wang, Runsheng, Lin, Yibo, Huang, Ru
Post-layout simulation provides accurate guidance for analog circuit design, but post-layout performance is hard to optimize directly at early design stages. Prior work on analog circuit sizing often utilizes pre-layout simulation results as the …
External link:
http://arxiv.org/abs/2310.14049
The increasing popularity of large language models (LLMs) has paved the way for their application in diverse domains. This paper proposes a benchmarking framework tailored specifically for evaluating LLM performance in the context of Verilog code generation …
External link:
http://arxiv.org/abs/2309.07544