Showing 1 - 10 of 34 for the search: '"Liebenwein, Lucas"'
Filter pruning of a CNN is typically achieved by applying discrete masks on the CNN's filter weights or activation maps, post-training. Here, we present a new filter-importance-scoring concept named pruning by active attention manipulation (PAAM), …
External link:
http://arxiv.org/abs/2210.11114
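The snippet above describes pruning via discrete masks on filter weights. As a generic illustration only (not PAAM's attention-based scoring), a minimal numpy sketch of masking out whole convolutional filters by an importance score; the tensor shape, scores, and keep-count are illustrative assumptions:

```python
import numpy as np

# Hypothetical 4-D conv weight tensor: (out_filters, in_channels, kH, kW).
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 3, 3, 3))

# Score each filter (here: simple L2 norm, an illustrative choice), then
# build a discrete 0/1 mask that keeps only the top-k filters.
scores = np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)
k = 4
keep = np.argsort(scores)[-k:]
mask = np.zeros(weights.shape[0])
mask[keep] = 1.0

# Applying the mask zeroes entire filters, post-training.
pruned = weights * mask[:, None, None, None]
surviving = int((np.abs(pruned).sum(axis=(1, 2, 3)) > 0).sum())
print(surviving)  # 4 filters survive
```

PAAM instead learns the scores during training; this sketch only shows the discrete-mask mechanism the snippet contrasts it with.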
Author:
Liebenwein, Lucas
Modern machine learning often relies on deep neural networks that are prohibitively expensive in terms of memory and computational footprint. This in turn significantly inhibits the potential range of applications where we are faced with non-negligible …
In this paper, we present a novel sensitivity-based filter pruning algorithm (SbF-Pruner) to learn the importance scores of filters of each layer end-to-end. Our method learns the scores from the filter weights, enabling it to account for the correlations …
External link:
http://arxiv.org/abs/2204.07412
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges …
External link:
http://arxiv.org/abs/2107.11442
Author:
Hasani, Ramin, Lechner, Mathias, Amini, Alexander, Liebenwein, Lucas, Ray, Aaron, Tschaikowski, Max, Teschl, Gerald, Rus, Daniela
Published in:
Nature Machine Intelligence 4, 992--1003 (2022)
Continuous-time neural processes are performant sequential decision-makers built from differential equations (DEs). However, their expressive power when deployed on computers is bottlenecked by numerical DE solvers. This limitation has …
External link:
http://arxiv.org/abs/2106.13898
Continuous deep learning architectures enable learning of flexible probabilistic models for predictive modeling as neural ordinary differential equations (ODEs), and for generative modeling as continuous normalizing flows. In this work, we design a …
External link:
http://arxiv.org/abs/2106.12718
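The snippet above refers to neural ODEs, where a layer's output is the solution of a learned differential equation. As a generic sketch only (not the paper's closed-form construction), a toy neural ODE integrated with forward Euler; the network shape, weights, and step count are illustrative assumptions:

```python
import numpy as np

# Toy "neural ODE" dynamics: dx/dt = W2 @ tanh(W1 @ x).
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(16, 4))
W2 = rng.normal(scale=0.5, size=(4, 16))

def f(x):
    return W2 @ np.tanh(W1 @ x)

def odeint_euler(x0, t0=0.0, t1=1.0, steps=100):
    # The "layer" is the numerical solve from t0 to t1 -- which is
    # exactly why the solver itself becomes the bottleneck the
    # neighboring snippet mentions.
    x, dt = x0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * f(x)  # one explicit Euler step
    return x

x0 = np.ones(4)
x1 = odeint_euler(x0)
print(x1.shape)  # (4,)
```

In practice one would use an adaptive solver and backpropagate through (or around) it; Euler is used here only to keep the sketch self-contained.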
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training (i.e., active learning). By formulating the active learning problem as the prediction with sleeping experts problem, we provide a regret …
External link:
http://arxiv.org/abs/2104.02822
Neural network pruning is a popular technique used to reduce the inference costs of modern, potentially overparameterized, networks. Starting from a pre-trained network, the process is as follows: remove redundant parameters, retrain, and repeat while …
External link:
http://arxiv.org/abs/2103.03014
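The prune-retrain-repeat loop described above can be sketched generically. This is not the paper's algorithm or its networks; it is a toy linear-regression analogue under illustrative assumptions (magnitude-based pruning, least-squares "retraining", a made-up stopping tolerance):

```python
import numpy as np

# Synthetic data: only 5 of 20 features are informative.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:5] = 3.0 * rng.normal(size=5)
y = X @ true_w + 0.01 * rng.normal(size=200)

def fit(X, y, active):
    # "Retrain": refit only the surviving (active) weights.
    w = np.zeros(X.shape[1])
    w[active], *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
    return w

active = np.ones(20, dtype=bool)
w = fit(X, y, active)
baseline_mse = np.mean((X @ w - y) ** 2)

while active.sum() > 1:
    # Prune: deactivate the smallest-magnitude surviving weight.
    idx = np.argmin(np.where(active, np.abs(w), np.inf))
    trial = active.copy()
    trial[idx] = False
    w_trial = fit(X, y, trial)               # retrain the survivors
    mse = np.mean((X @ w_trial - y) ** 2)
    if mse > 2.0 * baseline_mse + 1e-3:      # stop when quality degrades
        break
    active, w = trial, w_trial

print(active.sum())  # number of surviving weights
```

The loop prunes as long as the refit model stays close to the dense baseline, mirroring the "repeat while maintaining accuracy" schedule the snippet describes.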
Author:
Schwarting, Wilko, Seyde, Tim, Gilitschenski, Igor, Liebenwein, Lucas, Sander, Ryan, Karaman, Sertac, Rus, Daniela
Learning competitive behaviors in multi-agent settings such as racing requires long-term reasoning about potential adversarial interactions. This paper presents Deep Latent Competition (DLC), a novel reinforcement learning algorithm that learns competitive …
External link:
http://arxiv.org/abs/2102.09812
Author:
Liebenwein, Lucas
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
External link:
http://hdl.handle.net/1721.1/120366