Showing 1 - 10 of 17 for search: '"Faramarzi, Mojtaba"'
Author:
Olewicki, Doriane, Habchi, Sarra, Nayrolles, Mathieu, Faramarzi, Mojtaba, Chandar, Sarath, Adams, Bram
Published in:
46th International Conference on Software Engineering: Software Engineering in Practice 2024
Nowadays, software analytics tools using machine learning (ML) models to, for example, predict the risk of a code change are well established. However, as the goals of a project shift over time, and developers and their habits change, the performance
External link:
http://arxiv.org/abs/2305.09824
Author:
Faramarzi, Mojtaba
Large-capacity deep learning models often tend to exhibit high generalization gaps when trained with a limited amount of labeled data. In this case, neural networks
External link:
http://hdl.handle.net/1866/26068
Author:
Sodhani, Shagun, Faramarzi, Mojtaba, Mehta, Sanket Vaibhav, Malviya, Pranshu, Abdelsalam, Mohamed, Janarthanan, Janarthanan, Chandar, Sarath
This primer is an attempt to provide a detailed summary of the different facets of lifelong learning. We start with Chapter 2 which provides a high-level overview of lifelong learning systems. In this chapter, we discuss prominent scenarios in lifelo
External link:
http://arxiv.org/abs/2207.04354
Author:
Shahtalebi, Soroosh, Gagnon-Audet, Jean-Christophe, Laleh, Touraj, Faramarzi, Mojtaba, Ahuja, Kartik, Rish, Irina
A major bottleneck in the real-world applications of machine learning models is their failure in generalizing to unseen domains whose data distribution is not i.i.d to the training domains. This failure often stems from learning non-generalizable fea
External link:
http://arxiv.org/abs/2106.02266
We introduce the "Incremental Implicitly-Refined Classification (IIRC)" setup, an extension to the class incremental learning setup where the incoming batches of classes have two granularity levels, i.e., each sample could have a high-level (coarse)
External link:
http://arxiv.org/abs/2012.12477
Author:
Faramarzi, Mojtaba, Amini, Mohammad, Badrinaaraayanan, Akilesh, Verma, Vikas, Chandar, Sarath
Published in:
AAAI, vol. 36, no. 1, pp. 589-597, Jun. 2022
Large capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods to address this problem uses various ways to construct a new training sample by mi
External link:
http://arxiv.org/abs/2006.07794
Author:
Bashivan, Pouya, Bayat, Reza, Ibrahim, Adam, Ahuja, Kartik, Faramarzi, Mojtaba, Laleh, Touraj, Richards, Blake Aaron, Rish, Irina
Neural networks are known to be vulnerable to adversarial attacks -- slight but carefully constructed perturbations of the inputs which can drastically impair the network's performance. Many defense methods have been proposed for improving robustness
External link:
http://arxiv.org/abs/2006.04621