Showing 1 - 10 of 1,690 results for search: '"P Prasant"'
The study of linear codes over a finite field of odd cardinality, derived from determinantal varieties obtained from symmetric matrices of bounded rank, was initiated in a recent paper by the authors. There, one found the minimum distance of the code…
External link:
http://arxiv.org/abs/2412.05936
The Unruh effect is a well-understood phenomenon, where one considers a vacuum state of a quantum field in Minkowski spacetime, which appears to be thermally populated for a uniformly accelerating Rindler observer. In this article, we derive a variant…
External link:
http://arxiv.org/abs/2412.02560
The increasing reliance on diffusion models for generating synthetic images has amplified concerns about the unauthorized use of personal data, particularly facial images, in model training. In this paper, we introduce a novel identity inference framework…
External link:
http://arxiv.org/abs/2410.10177
Let $G$ be a simple finite connected graph. The line graph $L(G)$ of a graph $G$ is the graph whose vertices are the edges of $G$, where $ef \in E(L(G))$ when $e \cap f \neq \emptyset$. Higher-order line graphs are then defined inductively…
External link:
http://arxiv.org/abs/2410.04607
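The definition quoted above (vertices of $L(G)$ are the edges of $G$, two of them adjacent when they share an endpoint) can be sketched in a few lines of Python; the edge-list representation and function name here are illustrative, not taken from the paper:

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph L(G): vertices are the edges of G,
    and two edges are adjacent in L(G) when they share an endpoint."""
    edge_sets = [frozenset(e) for e in edges]
    adjacency = []
    for e, f in combinations(edge_sets, 2):
        if e & f:  # e and f intersect in at least one vertex of G
            adjacency.append((tuple(sorted(e)), tuple(sorted(f))))
    return adjacency

# Path a-b-c: its two edges share vertex b, so L(G) is a single edge.
print(line_graph([("a", "b"), ("b", "c")]))
```

Iterating this function on its own output would give the higher-order line graphs the abstract refers to.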
Denoising diffusion models have emerged as state-of-the-art in generative tasks across image, audio, and video domains, producing high-quality, diverse, and contextually relevant data. However, their broader adoption is limited by high computational…
External link:
http://arxiv.org/abs/2409.13894
Semantic segmentation has emerged as a pivotal area of study in computer vision, offering profound implications for scene understanding and elevating human-machine interactions across various domains. While 2D semantic segmentation has witnessed significant…
External link:
http://arxiv.org/abs/2407.16102
We introduce FedDM, a novel training framework designed for the federated training of diffusion models. Our theoretical analysis establishes the convergence of diffusion models when trained in a federated setting, presenting the specific conditions…
External link:
http://arxiv.org/abs/2407.14730
Large Language Models (LLMs) have achieved state-of-the-art performance at zero-shot generation of abstractive summaries for given articles. However, little is known about the robustness of such a process of zero-shot summarization. To bridge this gap…
External link:
http://arxiv.org/abs/2406.03993
Author:
Huh, Dom; Mohapatra, Prasant
Sample efficiency remains a key challenge in multi-agent reinforcement learning (MARL). A promising approach is to learn a meaningful latent representation space through auxiliary learning objectives alongside the MARL objective to aid in learning…
External link:
http://arxiv.org/abs/2406.02890
A core data-centric learning challenge is the identification of training samples that are detrimental to model performance. Influence functions serve as a prominent tool for this task and offer a robust framework for assessing training data influence…
External link:
http://arxiv.org/abs/2405.03869
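For context on the influence-function snippet above: in the classical formulation, the effect of upweighting a training point $z$ on the loss at a test point $z_{\mathrm{test}}$ is estimated as $-\nabla L(z_{\mathrm{test}})^\top H^{-1} \nabla L(z)$, where $H$ is the Hessian of the total training loss. A minimal sketch on a toy least-squares model follows; the setup, variable names, and helper are assumptions for illustration, not the paper's framework:

```python
import numpy as np

# Toy linear regression: per-example loss is 0.5 * (x_i @ theta - y_i)**2.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=20)

theta = np.linalg.lstsq(X, y, rcond=None)[0]  # fitted parameters
H = X.T @ X                                   # Hessian of the total loss
H_inv = np.linalg.inv(H)

def influence(i, x_test, y_test):
    """Estimated change in test loss from upweighting training point i:
    -grad_test^T H^{-1} grad_i (the classical influence-function formula)."""
    grad_i = X[i] * (X[i] @ theta - y[i])
    grad_test = x_test * (x_test @ theta - y_test)
    return -grad_test @ H_inv @ grad_i

# Score every training point against one held-out-style test point;
# strongly negative scores flag candidates for detrimental samples.
scores = [influence(i, X[0], y[0]) for i in range(len(X))]
```

Ranking training points by these scores is the usual way such estimates feed into data-centric sample identification.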