Showing 1 - 10 of 216 for search: '"Lie, David"'
Supervised learning-based software vulnerability detectors often fall short due to the inadequate availability of labelled training data. In contrast, Large Language Models (LLMs) such as GPT-4 are not trained on labelled data, but when prompted to…
External link:
http://arxiv.org/abs/2408.16028
The adoption of large cloud-based models for inference has been hampered by concerns about the privacy leakage of end-user data. One method to mitigate this leakage is to add local differentially private noise to queries before sending them to the cloud…
External link:
http://arxiv.org/abs/2405.16361
Author:
Chung, Mu-Huan Miles, Li, Sharon, Kongmanee, Jaturong, Wang, Lu, Yang, Yuhong, Giang, Calvin, Jerath, Khilan, Raman, Abhay, Lie, David, Chignell, Mark
Redacted emails satisfy most privacy requirements, but they make it more difficult to detect anomalous emails that may be indicative of data exfiltration. In this paper we develop an enhanced method of Active Learning using an information gain maximization…
External link:
http://arxiv.org/abs/2405.07440
Web tracking harms user privacy. As a result, the use of tracker detection and blocking tools is a common practice among Internet users. However, no such tool can be perfect, and thus there is a trade-off between avoiding breakage caused by unintentional…
External link:
http://arxiv.org/abs/2402.08031
Author:
Chung, Mu-Huan, Wang, Lu, Li, Sharon, Yang, Yuhong, Giang, Calvin, Jerath, Khilan, Raman, Abhay, Lie, David, Chignell, Mark
Research on email anomaly detection has typically relied on specially prepared datasets that may not adequately reflect the type of data that occurs in industry settings. In our research at a major financial services company, privacy concerns prevented…
External link:
http://arxiv.org/abs/2303.00870
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed)…
External link:
http://arxiv.org/abs/2209.10732
Published in:
In Decision Support Systems, December 2024, 187
Author:
Travers, Adelin, Licollari, Lorna, Wang, Guanghan, Chandrasekaran, Varun, Dziedzic, Adam, Lie, David, Papernot, Nicolas
Machine learning (ML) models are known to be vulnerable to adversarial examples. Applications of ML to voice biometrics authentication are no exception. Yet, the implications of audio adversarial examples on these real-world systems remain poorly understood…
External link:
http://arxiv.org/abs/2108.02010
Published in:
In Decision Support Systems, March 2024, 178
Author:
Qiu, Wenjun, Lie, David
Privacy policies are statements that notify users of a service's data practices. However, few users are willing to read through policy texts due to their length and complexity. While automated tools based on machine learning exist for privacy policy…
External link:
http://arxiv.org/abs/2008.02954