Showing 1 - 2 of 2 for search: '"Hans, Abhimanyu"'
Author:
Hans, Abhimanyu, Wen, Yuxin, Jain, Neel, Kirchenbauer, John, Kazemi, Hamid, Singhania, Prajwal, Singh, Siddharth, Somepalli, Gowthami, Geiping, Jonas, Bhatele, Abhinav, Goldstein, Tom
Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, …
External link:
http://arxiv.org/abs/2406.10209
Author:
Hans, Abhimanyu, Schwarzschild, Avi, Cherepanova, Valeriia, Kazemi, Hamid, Saha, Aniruddha, Goldblum, Micah, Geiping, Jonas, Goldstein, Tom
Detecting text generated by modern large language models is thought to be hard, as both LLMs and humans can exhibit a wide range of complex behaviors. However, we find that a score based on contrasting two closely related language models is highly accurate …
External link:
http://arxiv.org/abs/2401.12070