Measuring Spark on AWS: A Case Study on Mining Scientific Publications with Annotation Query
Author: McBeath, Darin; Daniel, Ron
Language: English
Year of publication: 2018
Subject:
Description: Annotation Query (AQ) is a program that provides the ability to query many different types of NLP annotations on a text, as well as the original content and structure of the text. The query results may provide new annotations, or they may select subsets of the content and annotations for deeper processing. Like GATE's Mimir, AQ is based on region algebras. Our AQ is implemented to run on a Spark cluster. In this paper we look at how AQ's runtimes are affected by the size of the collection, the number of nodes in the cluster, the type of node, and the characteristics of the queries. Cluster size, of course, makes a large difference in performance so long as skew can be avoided. We find minimal difference in performance between persisting annotations serialized to local SSD drives and persisting them deserialized in local memory (see the sketch following this record). We also find that if the number of nodes is kept constant, AWS's storage-optimized instances perform the best. But if we factor in total cost, the compute-optimized nodes provide the best performance relative to cost. 19 pages, 12 figures
Database: OpenAIRE
External link:
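
The persistence comparison and region-algebra queries mentioned in the description map onto standard Spark constructs: an annotation set cached with StorageLevel.MEMORY_ONLY (deserialized objects in executor memory) versus StorageLevel.DISK_ONLY (serialized records on each node's local disk, i.e. the instance SSDs), then queried with ordinary transformations. The sketch below is a minimal illustration under those assumptions; the Annotation schema, the input path, and the containment query are hypothetical and not taken from AQ itself.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Minimal sketch of the persistence comparison described above: annotations
// held as deserialized objects in executor memory (MEMORY_ONLY) versus
// serialized records on each node's local disk, e.g. instance SSDs (DISK_ONLY).
// The Annotation schema, input path, and query are hypothetical, not AQ's own.
object PersistSketch {

  case class Annotation(docId: String, annotType: String, start: Long, end: Long)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("aq-persist-sketch").getOrCreate()
    import spark.implicits._

    // Choose one storage level per benchmark run.
    val level =
      if (args.headOption.contains("disk")) StorageLevel.DISK_ONLY
      else StorageLevel.MEMORY_ONLY

    // Hypothetical source of precomputed annotations.
    val annotations = spark.read
      .parquet("s3://my-bucket/annotations/")
      .as[Annotation]
      .rdd
      .persist(level)

    annotations.count() // materialize the cache before timing the query

    // A representative region-algebra-style containment query: "person"
    // annotations nested inside "sentence" annotations of the same document,
    // written here as a plain offset-containment join (an illustration only,
    // not AQ's query language).
    val t0 = System.nanoTime()
    val sentences = annotations.filter(_.annotType == "sentence").map(a => (a.docId, a))
    val persons   = annotations.filter(_.annotType == "person").map(a => (a.docId, a))
    val contained = sentences.join(persons).filter {
      case (_, (s, p)) => p.start >= s.start && p.end <= s.end
    }
    val hits = contained.count()
    val elapsedMs = (System.nanoTime() - t0) / 1e6
    println(s"persons inside sentences: $hits ($elapsedMs ms, level=$level)")

    spark.stop()
  }
}
```

Run the job once per storage level (for example, `spark-submit --class PersistSketch app.jar disk`) and compare wall-clock times across collection sizes, cluster sizes, and instance types, which is the shape of the comparison the paper reports.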