Showing 1 - 4 of 4 for search: '"Aggarwal, Aniya"'
Data values in a dataset can be missing or anomalous due to mishandling or human error. Analysing data with missing values can create bias and affect the inferences. Several analysis methods, such as principal component analysis or singular value decomposition, …
External link:
http://arxiv.org/abs/2205.04731
The increasing usage of machine learning models raises the question of the reliability of these models. The current practice of testing with limited data is often insufficient. In this paper, we provide a framework for automated test data synthesis …
External link:
http://arxiv.org/abs/2111.02161
Author:
Gupta, Nitin; Patel, Hima; Afzal, Shazia; Panwar, Naveen; Mittal, Ruhi Sharma; Guttula, Shanmukha; Jain, Abhinav; Nagalapatti, Lokesh; Mehta, Sameep; Hans, Sandeep; Lohia, Pranay; Aggarwal, Aniya; Saha, Diptikalyan
The quality of training data has a huge impact on the efficiency, accuracy and complexity of machine learning tasks. Various tools and techniques are available that assess data quality with respect to general cleaning and profiling checks. However, …
External link:
http://arxiv.org/abs/2108.05935
Author:
Aggarwal, Aniya; Shaikh, Samiulla; Hans, Sandeep; Haldar, Swastik; Ananthanarayanan, Rema; Saha, Diptikalyan
With widespread adoption of AI models for important decision making, ensuring the reliability of such models remains an important challenge. In this paper, we present an end-to-end generic framework for testing AI Models which performs automated test generation …
External link:
http://arxiv.org/abs/2102.06166