Quality-Efficiency Trade-offs in Machine Learning for Text Processing
Author: | Ricardo Baeza-Yates, Zeinab Liaghat |
---|---|
Language: | English |
Year of publication: | 2017 |
Subject: |
FOS: Computer and information sciences
Machine Learning (cs.LG); Computation and Language (cs.CL); Information Retrieval (cs.IR); Text processing; Named-entity recognition; Document classification; Sentiment analysis; Big data; Machine learning; Artificial intelligence; Training set |
Source: | IEEE BigData |
Description: | Data mining, machine learning, and natural language processing are powerful techniques that can be used together to extract information from large texts. Depending on the task or problem at hand, there are many different approaches that can be used. The methods available are continuously being optimized, but not all of them have been tested and compared on a common set of problems that can be solved with supervised machine learning algorithms. The question is what happens to the quality of the methods if we increase the training data size from, say, 100 MB to over 1 GB. Moreover, are quality gains worth it when the rate of data processing diminishes? Can we trade quality for time efficiency and recover the quality loss simply by being able to process more data? We attempt to answer these questions in a general way for text processing tasks, considering the trade-offs among training data size, learning time, and quality obtained. We propose a performance trade-off framework and apply it to three important text processing problems: Named Entity Recognition, Sentiment Analysis, and Document Classification. These problems were also chosen because they have different levels of object granularity: words, paragraphs, and documents. For each problem, we selected several supervised machine learning algorithms and evaluated their trade-offs on large publicly available data sets (news, reviews, patents). To explore these trade-offs, we use data subsets of increasing size, ranging from 50 MB to several GB. We also consider the impact of the data set and the evaluation technique. We find that the results do not change significantly and that most of the time the best algorithm is also the fastest. However, we also show that the results for small data (say, less than 100 MB) differ from the results for big data, and in those cases the best algorithm is much harder to determine.
Ten pages; long version of the paper presented at IEEE Big Data 2017 (8 pages) |
Database: | OpenAIRE |
External link: |
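The evaluation protocol the abstract describes — training each algorithm on data subsets of increasing size and recording both learning time and the quality obtained — can be sketched as follows. This is a minimal illustration, not the authors' code: the majority-class baseline and the toy labeled data are assumptions made purely for the example; in the paper, real supervised learners are run on news, review, and patent corpora.

```python
import time
from collections import Counter

def majority_baseline(train):
    """Fit a trivial majority-class 'classifier' (stand-in for a real learner)."""
    labels = [y for _, y in train]
    majority = Counter(labels).most_common(1)[0][0]
    return lambda x: majority

def evaluate_tradeoff(data, subset_sizes, fit=majority_baseline):
    """For each subset size, record (size, training time, accuracy on the held-out tail)."""
    results = []
    for n in subset_sizes:
        train, test = data[:n], data[n:]
        t0 = time.perf_counter()
        model = fit(train)
        elapsed = time.perf_counter() - t0
        accuracy = sum(model(x) == y for x, y in test) / len(test)
        results.append((n, elapsed, accuracy))
    return results

# Toy labeled data: (feature, label) pairs with a 70/30 class split.
data = [(i, 'a' if i % 10 < 7 else 'b') for i in range(1000)]
rows = evaluate_tradeoff(data, subset_sizes=[100, 300, 500])
for n, elapsed, accuracy in rows:
    print(n, round(accuracy, 2))
```

Plotting quality against training time for each (algorithm, subset size) pair is what makes the trade-off visible: if a faster algorithm recovers its quality loss once it can consume more data, its curve eventually overtakes the slower one.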