Showing 1 - 10 of 557 for search: '"Higham, Desmond J"'
We introduce the concept of deceptive diffusion -- training a generative AI model to produce adversarial images. Whereas a traditional adversarial attack algorithm aims to perturb an existing image to induce a misclassification, the deceptive diffusion …
External link:
http://arxiv.org/abs/2406.19807
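The snippet contrasts deceptive diffusion with a traditional adversarial attack that perturbs an existing image. A minimal sketch of the traditional side, assuming a toy two-class linear classifier and a single signed-gradient (FGSM-style) step; none of these names or parameters come from the paper:

```python
import numpy as np

# Traditional adversarial perturbation on a toy linear classifier.
# The deceptive-diffusion approach from the paper (training a
# generative model) is not shown here; this is the classical baseline.

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))      # weights of a 2-class linear classifier
x = rng.standard_normal(4)           # an "image" flattened to 4 pixels

def logits(v):
    return W @ v

# Gradient of the logit gap (class 0 minus class 1) w.r.t. the input;
# stepping against it pushes the prediction toward the wrong class.
grad = W[1] - W[0]

eps = 0.5
x_adv = x + eps * np.sign(grad)      # one signed-gradient step

gap_before = logits(x)[0] - logits(x)[1]
gap_after = logits(x_adv)[0] - logits(x_adv)[1]
```

The signed step guarantees the logit gap shrinks by `eps * |W[1] - W[0]|.sum()`, which is the sense in which a small, bounded perturbation can flip an otherwise accurate decision.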
Authors:
Sutton, Oliver J., Zhou, Qinghua, Wang, Wei, Higham, Desmond J., Gorban, Alexander N., Bastounis, Alexander, Tyukin, Ivan Y.
We reveal the theoretical foundations of techniques for editing large language models, and present new methods which can do so without requiring retraining. Our theoretical insights show that a single metric (a measure of the intrinsic dimension of t…
External link:
http://arxiv.org/abs/2406.12670
We discuss the design of an invariant measure-preserving transformed dynamics for the numerical treatment of Langevin dynamics based on rescaling of time, with the goal of sampling from an invariant measure. Given an appropriate monitor function which …
External link:
http://arxiv.org/abs/2403.11993
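The time-rescaling idea can be sketched with plain Euler-Maruyama and a hypothetical monitor function; the paper's invariant measure-preserving correction is not reproduced, so this only illustrates where a rescaled step enters the integrator:

```python
import numpy as np

# Euler-Maruyama for overdamped Langevin dynamics with the step size
# rescaled by an illustrative monitor function g(x). The
# measure-preserving correction from the paper is NOT included.

rng = np.random.default_rng(1)

def grad_U(x):
    return x                      # U(x) = x^2 / 2, a standard Gaussian target

def monitor(x):
    return 1.0 + abs(x)           # illustrative: smaller steps where |x| is large

x = 0.0
h = 0.01                          # base step in the transformed time
samples = []
for _ in range(20000):
    dt = h / monitor(x)           # rescaled physical time increment
    x = x - grad_U(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    samples.append(x)

samples = np.array(samples)
```

Without the correction terms, the chain's stationary distribution is biased by the state-dependent step; the paper's transformed dynamics is designed to remove exactly that bias.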
Published in:
J. Phys. Complexity 5, 015022 (2024)
Higher-order networks encode the many-body interactions existing in complex systems, such as the brain, protein complexes, and social interactions. Simplicial complexes are higher-order networks that allow a comprehensive investigation of the interplay …
External link:
http://arxiv.org/abs/2402.07631
Generative artificial intelligence (AI) refers to algorithms that create synthetic but realistic output. Diffusion models currently offer state-of-the-art performance in generative AI for images. They also form a key component in more general tools, …
External link:
http://arxiv.org/abs/2312.14977
We consider a random geometric hypergraph model based on an underlying bipartite graph. Nodes and hyperedges are sampled uniformly in a domain, and a node is assigned to those hyperedges that lie within a certain radius. From a modelling perspective, w…
External link:
http://arxiv.org/abs/2309.09305
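The model description above is concrete enough to sketch directly, assuming a unit-square domain and illustrative parameters `n`, `m`, `r` (not taken from the paper):

```python
import numpy as np

# Random geometric hypergraph as described in the snippet: node and
# hyperedge centres are placed uniformly in the unit square, and a node
# joins every hyperedge whose centre lies within radius r.

rng = np.random.default_rng(42)
n, m, r = 50, 10, 0.25

nodes = rng.uniform(size=(n, 2))        # node positions
centres = rng.uniform(size=(m, 2))      # hyperedge centres

# Incidence matrix of the underlying bipartite graph:
# B[i, j] = 1 iff node i is assigned to hyperedge j.
dists = np.linalg.norm(nodes[:, None, :] - centres[None, :, :], axis=2)
B = (dists <= r).astype(int)
```

The bipartite structure is explicit here: rows of `B` index nodes, columns index hyperedges, and hyperedge sizes are the column sums `B.sum(axis=0)`.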
Authors:
Bastounis, Alexander, Gorban, Alexander N., Hansen, Anders C., Higham, Desmond J., Prokhorov, Danil, Sutton, Oliver, Tyukin, Ivan Y., Zhou, Qinghua
In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider the classical distribution-agnostic framework and algorithms minimising empirical risks and poten…
External link:
http://arxiv.org/abs/2309.07072
Authors:
Sutton, Oliver J., Zhou, Qinghua, Tyukin, Ivan Y., Gorban, Alexander N., Bastounis, Alexander, Higham, Desmond J.
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data. Paradoxically, empirical evidence indicates that even systems which are robust to lar…
External link:
http://arxiv.org/abs/2309.03665
Stochastic optimization methods have been hugely successful in making large-scale optimization problems feasible when computing the full gradient is computationally prohibitive. Using the theory of modified equations for numerical integrators, we pro…
External link:
http://arxiv.org/abs/2309.02082
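The stochastic-gradient setting the snippet refers to can be sketched on a small least-squares problem; the modified-equations analysis itself is analytical and not reproduced, and all names below are illustrative:

```python
import numpy as np

# Minibatch stochastic gradient descent on least squares: only a random
# subsample of the data enters each gradient evaluation, which is what
# makes large-scale problems feasible when the full gradient is too costly.

rng = np.random.default_rng(7)
n, d = 200, 3
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                            # consistent (noise-free) targets

x = np.zeros(d)
lr, batch = 0.1, 16
for _ in range(500):
    idx = rng.integers(0, n, size=batch)  # random minibatch of rows
    g = A[idx].T @ (A[idx] @ x - b[idx]) / batch   # stochastic gradient estimate
    x -= lr * g
```

Each step uses 16 of the 200 rows, so the per-iteration cost is fixed regardless of `n`; because the system is consistent, the gradient noise vanishes at the solution and the iterates converge to `x_true`.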
Author:
Higham, Desmond J.
Over the last decade, adversarial attack algorithms have revealed instabilities in deep learning tools. These algorithms raise issues regarding safety, reliability and interpretability in artificial intelligence, especially in high-risk settings. Fro…
External link:
http://arxiv.org/abs/2308.15092