Improved performance optimization for massive small files in cloud computing environment
Author: Junho Choi, Chang Choi, Pankoo Kim, Chulwoong Choi
Year of publication: 2016
Subject: distributed computing; computer science; big data; cloud computing; parallel computing; scheduling (computing); operating systems; distributed file systems; management science and operations research; general decision sciences; artificial intelligence & image processing
Source: Annals of Operations Research 265:305–317
ISSN: 1572-9338; 0254-5330
DOI: 10.1007/s10479-016-2376-0
Description: Hadoop stores big data in the Hadoop Distributed File System (HDFS) and processes it with MapReduce in cloud computing environments. Because Hadoop is optimized for large files, it handles large numbers of small files poorly. A small file is any file significantly smaller than the HDFS block size, which is typically set to 64 MB. When processing many small files, Hadoop suffers from NameNode memory insufficiency and increased scheduling and processing time. This study proposes a performance improvement method for MapReduce processing that integrates the CombineFileInputFormat method with the reuse feature of the Java Virtual Machine (JVM). Whereas existing methods create a mapper for every small file, the proposed method reduces the number of created mappers by packing large numbers of files into a single split using CombineFileInputFormat. To further improve MapReduce performance, it reduces JVM creation time by reusing a single JVM to run multiple mappers rather than creating a new JVM for every mapper (a minimal configuration sketch follows this record).
Database: OpenAIRE
External link:
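
The abstract names two concrete mechanisms: packing many small files into one input split via CombineFileInputFormat, and reusing a single JVM across mappers. The Java driver below is a minimal sketch, not the authors' implementation, of how these two settings could be wired together on classic (MRv1-era) Hadoop, where the `mapred.job.reuse.jvm.num.tasks` property controls JVM reuse. `SmallFilesDriver` and `PassThroughMapper` are hypothetical names, and `CombineTextInputFormat` (Hadoop's concrete text subclass of CombineFileInputFormat) stands in for whatever input format the paper actually uses.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SmallFilesDriver {

    // Hypothetical mapper: echoes each input line unchanged, standing in
    // for whatever per-record processing the real job performs.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(line, NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // MRv1-era JVM reuse: -1 lets one JVM run an unlimited number of
        // map tasks instead of forking a fresh JVM per mapper. (This knob
        // was removed in YARN/MRv2, where uber mode plays a similar role.)
        conf.setInt("mapred.job.reuse.jvm.num.tasks", -1);

        Job job = Job.getInstance(conf, "combine-small-files");
        job.setJarByClass(SmallFilesDriver.class);

        // CombineTextInputFormat groups many small text files into a single
        // split, so one mapper processes many files instead of one each.
        job.setInputFormatClass(CombineTextInputFormat.class);

        // Cap each combined split near the 64 MB block size cited in the
        // abstract, so splits stay roughly block-sized.
        CombineTextInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);

        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0); // map-only job is enough for this sketch
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With default TextInputFormat, a directory of 10,000 small files would yield 10,000 splits and 10,000 mapper JVMs; with the settings above, the split count drops to roughly the total input size divided by 64 MB, and the surviving mappers share JVMs, which is the combined effect the abstract attributes to the proposed method.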