Author:
La Malfa, Emanuele; Petrov, Aleksandar; Frieder, Simon; Weinhuber, Christoph; Burnell, Ryan; Nazar, Raza; Cohn, Anthony G.; Shadbolt, Nigel; Wooldridge, Michael
Subject:
Source:
Journal of Artificial Intelligence Research; 2024, Vol. 80, p1497-1523, 27p
Abstract:
Some of the most powerful language models currently available are proprietary systems, accessible only via (typically restrictive) web or software programming interfaces. This is the Language-Models-as-a-Service (LMaaS) paradigm. In contrast with scenarios where full model access is available, as in the case of open-source models, such closed-off language models present specific challenges for evaluation, benchmarking, and testing. This paper has two goals: on the one hand, it delineates how the aforementioned challenges act as impediments to the accessibility, reproducibility, reliability, and trustworthiness of LMaaS. We systematically examine the issues that arise from the lack of information about language models for each of these four aspects, conduct a detailed analysis of existing solutions, put forth a number of recommendations, and highlight directions for future advancements. On the other hand, the paper serves as a synthesized overview of the licences and capabilities of the most popular LMaaS. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index
External link: