Showing 1 - 10 of 93,071 results for the search: '"Simon S."'
Author:
Taboada Ibarra, Eunice Leticia
Published in:
Gestión y Estrategia, Jan-Jun 2023, Issue 63, p. 43-58. 16 p.
In the wake of a fabricated explosion image at the Pentagon, the ability to discern real images from fake counterparts has never been more critical. Our study introduces a novel multi-modal approach to detect AI-generated images amidst the proliferation …
External link:
http://arxiv.org/abs/2409.07913
For electric vehicles, the Adaptive Cruise Control (ACC) in Advanced Driver Assistance Systems (ADAS) is designed to assist braking based on driving conditions, road inclines, predefined deceleration strengths, and user braking patterns. However, the …
External link:
http://arxiv.org/abs/2409.05346
We initiate the study of Multi-Agent Reinforcement Learning from Human Feedback (MARLHF), exploring both theoretical foundations and empirical validations. We define the task as identifying Nash equilibrium from a preference-only offline dataset …
External link:
http://arxiv.org/abs/2409.00717
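This entry describes learning equilibria from a preference-only offline dataset. As a hedged, generic illustration of one common ingredient in preference-based RL pipelines (not the paper's MARLHF method), the sketch below fits a reward model with a Bradley-Terry loss on preferred/rejected trajectory pairs; the names `RewardNet` and `preference_loss` are hypothetical.

```python
# Hypothetical sketch: fitting a reward model from trajectory preferences with
# a Bradley-Terry loss, a standard building block in preference-based RL.
# Names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (T, obs_dim) -> scalar return estimate (sum of per-step rewards)
        return self.net(traj).sum()

def preference_loss(model: RewardNet,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry: P(preferred > rejected) = sigmoid(R(preferred) - R(rejected))
    logits = model(preferred) - model(rejected)
    return nn.functional.softplus(-logits)  # equals -log sigmoid(logits)
```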
We present Blind-Match, a novel biometric identification system that leverages homomorphic encryption (HE) for efficient and privacy-preserving 1:N matching. Blind-Match introduces an HE-optimized cosine similarity computation method …
External link:
http://arxiv.org/abs/2408.06167
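The Blind-Match snippet mentions an HE-optimized cosine similarity. Below is a minimal plaintext sketch of the underlying idea, assuming templates are L2-normalized offline so that 1:N cosine matching reduces to a single matrix-vector product (the kind of inner-product computation that maps well onto homomorphic circuits); this is an illustration of the general reduction, not Blind-Match's actual HE pipeline.

```python
# Plaintext sketch of the computation an HE scheme would evaluate:
# with L2-normalized templates, cosine similarity is just a dot product,
# so 1:N matching becomes one matrix-vector product.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def cosine_scores(query: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    # query: (d,), gallery: (N, d); both assumed already L2-normalized
    return gallery @ query

rng = np.random.default_rng(0)
gallery = np.stack([normalize(rng.standard_normal(128)) for _ in range(1000)])
query = normalize(rng.standard_normal(128))
best_match = int(np.argmax(cosine_scores(query, gallery)))
```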
Deepfake detection is critical in mitigating the societal threats posed by manipulated videos. While various algorithms have been developed for this purpose, challenges arise when detectors operate externally, such as on smartphones, when users take …
External link:
http://arxiv.org/abs/2407.10399
The fabrication of visual misinformation on the web and social media has increased exponentially with the advent of foundational text-to-image diffusion models. Namely, Stable Diffusion inpainters allow the synthesis of maliciously inpainted images …
External link:
http://arxiv.org/abs/2407.10277
Self-distillation is a special type of knowledge distillation where the student model has the same architecture as the teacher model. Despite using the same architecture and the same training data, self-distillation has been empirically observed to …
External link:
http://arxiv.org/abs/2407.04600
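As a hedged sketch of a generic self-distillation objective (not the specific setup studied in this paper): the student shares the teacher's architecture, and its loss mixes cross-entropy on the labels with a KL term toward the frozen teacher's softened predictions. The temperature and mixing weight below are illustrative choices.

```python
# Generic self-distillation loss: cross-entropy on labels plus KL divergence
# between the student's and the (frozen) teacher's temperature-softened outputs.
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels,
                           temperature: float = 4.0, alpha: float = 0.5):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so gradients are comparable across temperatures
    return alpha * ce + (1 - alpha) * kd
```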
We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground-truth Gaussian distribution …
External link:
http://arxiv.org/abs/2407.00490
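Below is a toy sketch of gradient EM in the over-parameterized setting the snippet describes, assuming a 2-component spherical GMM with fixed equal weights fit to data drawn from a single standard Gaussian; the step size and dimensions are illustrative, and this is not the paper's analysis.

```python
# Toy gradient EM for a 2-component spherical GMM (unit covariances, equal
# weights) fit to data generated by a single ground-truth Gaussian N(0, I).
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 2, 5000, 0.5
X = rng.standard_normal((n, d))      # data from one standard Gaussian
mu = rng.standard_normal((2, d))     # two component means, weights fixed at 1/2

for _ in range(200):
    # E-step: responsibilities under unit-covariance components
    logp = -0.5 * ((X[:, None, :] - mu[None]) ** 2).sum(-1)   # (n, 2)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Gradient EM: one gradient ascent step on the EM objective
    # (instead of the closed-form M-step): grad_k = mean_i w_ik (x_i - mu_k)
    grad = (w[:, :, None] * (X[:, None, :] - mu[None])).mean(axis=0)
    mu += eta * grad

# Both component means drift toward the single ground-truth mean (the origin).
```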