Showing 1 - 10 of 420 for the search: '"SCHLICHTMANN, ULF"'
General-purpose optical accelerators (GOAs) have emerged as a promising platform to accelerate deep neural networks (DNNs) due to their low latency and energy consumption. Such an accelerator is usually composed of a given number of interleaving Mach …
External link:
http://arxiv.org/abs/2409.12966
While large language models (LLMs) have demonstrated the ability to generate hardware description language (HDL) code for digital circuits, they still suffer from the hallucination problem, which leads to the generation of incorrect HDL code or misun …
External link:
http://arxiv.org/abs/2407.18326
In digital circuit design, testbenches constitute the cornerstone of simulation-based hardware verification. Traditional methodologies for testbench generation during simulation-based hardware verification remain partially manual, resulting in …
External link:
http://arxiv.org/abs/2407.03891
In High-Level Synthesis (HLS), converting a regular C/C++ program into its HLS-compatible counterpart (HLS-C) still requires tremendous manual effort. Various program scripts have been introduced to automate this process, but the resulting codes usua …
External link:
http://arxiv.org/abs/2407.03889
Author:
Eldebiky, Amro, Zhang, Grace Li, Yin, Xunzhao, Zhuo, Cheng, Lin, Ing-Chao, Schlichtmann, Ulf, Li, Bing
Deep neural networks (DNNs) have made breakthroughs in various fields including image recognition and language processing. DNNs execute hundreds of millions of multiply-and-accumulate (MAC) operations. To efficiently accelerate such computations, ana …
External link:
http://arxiv.org/abs/2407.03738
In this paper, we introduce a novel low-latency inference framework for large language models (LLMs) that enables LLMs to perform inference with incomplete prompts. By reallocating computational processes to the prompt input phase, we achieve …
External link:
http://arxiv.org/abs/2406.14319
Large language models (LLMs) have recently transformed natural language processing, enabling machines to generate human-like text and engage in meaningful conversations. This development necessitates speed, efficiency, and accessibility in LLM infere …
External link:
http://arxiv.org/abs/2406.08413
Deep neural networks (DNNs) have achieved great breakthroughs in many fields such as image classification and natural language processing. However, the execution of DNNs requires massive numbers of multiply-accumulate (MAC) operations on hard …
External link:
http://arxiv.org/abs/2402.18595
Author:
Qiu, Ruidi, Eldebiky, Amro, Zhang, Grace Li, Yin, Xunzhao, Zhuo, Cheng, Schlichtmann, Ulf, Li, Bing
Having the potential for high speed, high throughput, and low energy cost, optical neural networks (ONNs) have emerged as a promising candidate for accelerating deep learning tasks. In conventional ONNs, light amplitudes are modulated at the input an …
External link:
http://arxiv.org/abs/2312.01403
Neural networks (NNs) have been successfully deployed in various fields. In NNs, a large number of multiply-accumulate (MAC) operations need to be performed. Most existing digital hardware platforms rely on parallel MAC units to accelerate these MAC o …
External link:
http://arxiv.org/abs/2309.10510