Showing 1 - 10 of 16 for search: '"Hameeza Ahmed"'
Author:
Hameeza Ahmed, Muhammad Fahim Ul Haque, Hashim Raza Khan, Ghalib Nadeem, Kamran Arshad, Khaled Assaleh, Paulo Cesar Santos
Published in:
IEEE Access, Vol 12, Pp 121700-121711 (2024)
A compiler is a tool that converts a high-level language into assembly code after enabling the relevant optimizations. The automatic selection of suitable optimizations from an ample optimization space is a non-trivial task mainly accomplished through …
External link:
https://doaj.org/article/b013f57bc5ba40f58291ca9339fdba28
Author:
Hameeza Ahmed, Muhammad Ali Ismail
Published in:
IEEE Access, Vol 8, Pp 186304-186322 (2020)
Big data is a "relative" concept. It is the combination of data, application, and platform properties. Recently, big data specific technologies have emerged, including software frameworks, databases, hardware accelerators, storage technologies, …
External link:
https://doaj.org/article/100f9abba8e14ff8b1a37dd999c994b8
Author:
Hameeza Ahmed, Muhammad Ali Ismail
Published in:
IEEE Transactions on Big Data. 9:147-159
Author:
Hameeza Ahmed, Muhammad Ali Ismail
Published in:
Software: Practice and Experience. 52:1262-1293
Author:
Muhammad Ali Ismail, Hameeza Ahmed
Published in:
Computing and Informatics, Vol. 40, No. 3 (2021), pp. 543–574
Low Level Virtual Machine (LLVM) is a widely adopted open-source compiler providing numerous optimization opportunities. The discovery of the best optimization sequence in this large space is done via iterative compilation, which incurs substantial …
Author:
Antonio Carlos Schneider Beck, Luigi Carro, Marco A. Z. Alves, Paulo C. Santos, Rafael Fao de Moura, Hameeza Ahmed, João Paulo Cardoso de Lima
Published in:
Microprocessors and Microsystems. 69:101-111
Smart devices based on the Internet of Things (IoT) and Cyber-Physical Systems (CPS) are emerging as an important and complex set of applications in the modern world. These systems can generate massive amounts of data, due to the enormous quantity of …
Author:
Marco A. Z. Alves, João Paulo Cardoso de Lima, Rafael Fao de Moura, Paulo C. Santos, Hameeza Ahmed, Luigi Carro, Antonio Carlos Schneider Beck
Published in:
DATE
Although not a new technique, Processing-in-Memory (PIM) has been revived by the advent of 3D-stacked technologies, which enable the integration of large memories with logic circuitry able to compute large amounts of data. PIM is a technique to …
Published in:
Procedia Computer Science. 82:99-106
Apache Spark is an open-source cluster computing technology specifically designed for large-scale data processing. This paper deals with the deployment of a Spark cluster as a cloud service on an OpenStack-based cloud. The HiBench benchmark suite is used …
Author:
Marco A. Z. Alves, Antonio Carlos Schneider Beck, Paulo C. Santos, Luigi Carro, Rafael Fao de Moura, João Paulo Cardoso de Lima, Hameeza Ahmed
Published in:
INTESA@ESWEEK
Since modern Internet of Things (IoT) applications generate massive amounts of data, they either stress the communication mechanism or need extra resources to treat the data locally. The massive volume of data is commonly collected by sensors, and …
Author:
Hameeza Ahmed, Muhammad Khurram
Published in:
International Journal of Information Engineering and Electronic Business. 6:44-52
Media Access Control (MAC) layer protocols play a critical role in making a typical Wireless Sensor Network (WSN) more reliable and efficient. The choice of MAC layer protocol, along with other factors including the number of nodes, mobility, traffic rate, and …