Showing 1 - 10 of 65 for search: '"Anoop Cherian"'
Author:
Panagiotis Stanitsas, Anoop Cherian, Vassilios Morellas, Resha Tejpaul, Nikolaos Papanikolopoulos, Alexander Truskinovsky
Published in:
Frontiers in Digital Health, Vol 2 (2020)
Introduction: Cancerous Tissue Recognition (CTR) methodologies are continuously integrating advancements at the forefront of machine learning and computer vision, providing a variety of inference schemes for histopathological data. Histopathological…
External link:
https://doaj.org/article/27c6b2159bf142ba90f15455d8a99c64
Author:
Jue Wang, Anoop Cherian
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence. 44:6993-7009
One-class learning is the classic problem of fitting a model to the data for which annotations are available only for a single class. In this paper, we explore novel objectives for one-class learning, which we collectively refer to as Generalized One…
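The entry above concerns one-class learning: fitting a model when annotations exist for only a single class. Since the snippet of the paper's own "Generalized One…" objectives is cut off, the sketch below only illustrates the classic one-class setting with scikit-learn's OneClassSVM; the data, kernel, and hyperparameters are placeholders, not the paper's method.

```python
# Minimal one-class learning sketch: train only on "normal" samples,
# then flag outliers at test time. Hyperparameters are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # the single annotated class
test_points = np.vstack([rng.normal(0.0, 1.0, (5, 2)),          # in-distribution samples
                         rng.normal(6.0, 1.0, (5, 2))])         # anomalies

model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
model.fit(normal_train)                  # fit the support of the single class
print(model.predict(test_points))        # +1 = inlier, -1 = outlier
```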
Published in:
Gene Expression Patterns. 47:119304
Author:
Srinivas Sunkara, Luis A. Lastras, Jonathan K. Kummerfeld, Hannes Schulz, Walter S. Lasecki, Anoop Cherian, Adam Atkinson, Seokhwan Kim, Chiori Hori, Xiaoxue Zang, Jinchao Li, Sungjin Lee, Minlie Huang, R. Chulaka Gunasekara, Michel Galley, Tim K. Marks, Raghav Gupta, Mahmoud Adada, Baolin Peng, Abhinav Rastogi, Jianfeng Gao
Published in:
IEEE/ACM Transactions on Audio, Speech, and Language Processing. 29:2529-2540
This paper introduces the Eighth Dialog System Technology Challenge. In line with recent challenges, the eighth edition focuses on applying end-to-end dialog technologies in a pragmatic way for multi-domain task-completion, noetic response selection,…
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence. 41:3100-3114
We present a principled approach to uncover the structure of visual data by solving a deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions…
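The TPAMI entry above defines visual permutation learning: recover, from a shuffled version of structured visual data, the permutation that restores its original layout. Below is a minimal sketch of how such a training pair could be constructed, assuming a tile-shuffling setup with a one-hot permutation-matrix target; the grid size and helper name are illustrative, and the paper's actual network and objective are not shown in the snippet.

```python
# Toy data generation for permutation learning: shuffle image tiles and
# keep the permutation matrix as the supervision target.
import numpy as np

def make_permutation_sample(image, grid=3, rng=np.random.default_rng(0)):
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    perm = rng.permutation(grid * grid)        # ground-truth permutation
    shuffled = [tiles[i] for i in perm]        # network input: shuffled tiles
    target = np.eye(grid * grid)[perm]         # label: one-hot permutation matrix
    return shuffled, target

img = np.arange(36 * 36, dtype=np.float32).reshape(36, 36)
shuffled_tiles, perm_matrix = make_permutation_sample(img)
print(len(shuffled_tiles), perm_matrix.shape)  # 9 tiles, (9, 9) target
```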
Author:
Ankit Shah, Shijie Geng, Peng Gao, Anoop Cherian, Takaaki Hori, Tim K. Marks, Jonathan Le Roux, Chiori Hori
In previous work, we have proposed the Audio-Visual Scene-Aware Dialog (AVSD) task, collected an AVSD dataset, developed AVSD technologies, and hosted an AVSD challenge track at both the 7th and 8th Dialog System Technology Challenges (DSTC7, DSTC8).
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::94c3921cb48a2437233496ab59372571
http://arxiv.org/abs/2110.06894
Author:
Xiaoming Liu, Abhinav Kumar, Wenxuan Mou, Tim K. Marks, Michael Jones, Ye Wang, Anoop Cherian, Toshiaki Koike-Akino, Chen Feng
Published in:
CVPR
Modern face alignment methods have become quite accurate at predicting the locations of facial landmarks, but they do not typically estimate the uncertainty of their predicted locations nor predict whether landmarks are visible. In this paper, we pre…
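The CVPR entry above predicts facial-landmark locations together with their uncertainty and visibility. One standard way to model per-landmark location uncertainty, given here as a hedged sketch rather than the paper's exact formulation, is to let the network output a 2-D Gaussian per landmark and train with its negative log-likelihood; all names, shapes, and numbers below are illustrative.

```python
# Per-landmark Gaussian negative log-likelihood: the model outputs a mean
# location and a 2x2 covariance per landmark; uncertain landmarks pay a
# smaller penalty for location error but incur a log-determinant cost.
import numpy as np

def gaussian_nll(pred_mean, pred_cov, target):
    """pred_mean, target: (L, 2); pred_cov: (L, 2, 2), positive definite."""
    diff = (target - pred_mean)[..., None]                      # (L, 2, 1)
    inv_cov = np.linalg.inv(pred_cov)
    maha = (diff.transpose(0, 2, 1) @ inv_cov @ diff)[:, 0, 0]  # squared Mahalanobis distance
    logdet = np.log(np.linalg.det(pred_cov))
    return 0.5 * np.mean(maha + logdet)                         # constant term dropped

mean = np.array([[10.0, 12.0], [30.0, 31.0]])
cov = np.stack([np.eye(2) * 2.0, np.eye(2) * 0.5])
gt = np.array([[11.0, 12.5], [30.5, 30.0]])
print(gaussian_nll(mean, cov, gt))
```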
Published in:
CVPR Workshops
This paper presents a framework to recognize temporal compositions of atomic actions in videos. Specifically, we propose to express temporal compositions of actions as semantic regular expressions and derive an inference framework using probabilistic…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::47ab380f61653b967553dfdc672a8ac9
http://arxiv.org/abs/2004.13217
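The CVPR Workshops entry above expresses temporal compositions of atomic actions as semantic regular expressions. The toy snippet below is a deterministic stand-in for that idea (the paper's inference framework is probabilistic and its description is truncated here): per-frame atomic-action labels are encoded as characters and a composite activity becomes a pattern over the resulting string; the action vocabulary and the "pick and place" pattern are invented for illustration.

```python
# Deterministic toy version of "composite activity = regular expression over
# atomic actions": encode a per-frame action sequence as a string and match.
import re

SYMBOLS = {"reach": "r", "grasp": "g", "move": "m", "place": "p", "idle": "i"}

def encode(frame_actions):
    return "".join(SYMBOLS[a] for a in frame_actions)

# Hypothetical composite activity: reach, then grasp, anything in between, then place.
PICK_AND_PLACE = re.compile(r"r+g+.*p+")

frames = ["idle", "reach", "reach", "grasp", "move", "move", "place", "idle"]
print(bool(PICK_AND_PLACE.search(encode(frames))))   # True: composition detected
```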
Published in:
WACV
We propose a self-supervised approach to improve the training of Generative Adversarial Networks (GANs) via inducing the discriminator to examine the structural consistency of images. Although natural image samples provide ideal examples of both vali…
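The WACV entry above adds a self-supervised signal that makes the GAN discriminator examine the structural consistency of images. Because the snippet is truncated, the sketch below shows only one plausible shape such an auxiliary task could take, and it is an assumption rather than the paper's method: the discriminator receives an extra head that must separate intact images from patch-shuffled ones.

```python
# Sketch of an auxiliary self-supervision signal for a GAN discriminator:
# besides real/fake, predict whether an image's patches were shuffled.
import torch
import torch.nn as nn

def shuffle_patches(x, grid=2):
    """Split each image into grid x grid patches and permute them."""
    b, c, h, w = x.shape
    ph, pw = h // grid, w // grid
    patches = x.unfold(2, ph, ph).unfold(3, pw, pw)       # (b, c, g, g, ph, pw)
    patches = patches.reshape(b, c, grid * grid, ph, pw)
    patches = patches[:, :, torch.randperm(grid * grid)]  # random patch order
    patches = patches.reshape(b, c, grid, grid, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.real_fake = nn.Linear(16, 1)   # usual adversarial head
        self.intact = nn.Linear(16, 1)      # auxiliary "structurally intact?" head

    def forward(self, x):
        f = self.features(x)
        return self.real_fake(f), self.intact(f)

x = torch.randn(4, 3, 32, 32)
d = Discriminator()
bce = nn.BCEWithLogitsLoss()
_, intact_logit = d(x)
_, shuffled_logit = d(shuffle_patches(x))
aux_loss = bce(intact_logit, torch.ones(4, 1)) + bce(shuffled_logit, torch.zeros(4, 1))
print(aux_loss.item())
```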