Showing 1 - 10 of 25 for search: '"Jianchen Shan"'
Published in:
IEEE Transactions on Cloud Computing. :1-12
Published in:
Proceedings of the Eighteenth European Conference on Computer Systems.
Published in:
IEEE Transactions on Parallel and Distributed Systems. 32:2557-2570
Despite great advancements in hardware-assisted virtualization of the x86 architecture, certain workloads still suffer significant overhead. This article dissects this overhead in the context of multi-threading. We describe the state of the art, pinp…
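The abstract above is cut off by the catalog view. As a purely illustrative aside (not taken from the article), the sketch below shows the kind of multi-threaded synchronization microbenchmark commonly used to expose virtualization overhead, for example when a vCPU holding a spinlock is preempted by the hypervisor while the remaining threads keep spinning. Thread and iteration counts are arbitrary assumptions.

/*
 * Illustrative sketch only (not the article's methodology): a spinlock
 * contention microbenchmark.  Run on bare metal and inside a consolidated
 * VM and compare wall-clock times.
 */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define ITERS    1000000

static pthread_spinlock_t lock;
static long counter;

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_spin_lock(&lock);   /* threads spin here if the lock holder's vCPU is preempted */
        counter++;
        pthread_spin_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("counter = %ld\n", counter);
    return 0;
}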
Published in:
Proceedings of the 13th Symposium on Cloud Computing.
Author:
Jianchen Shan, Cristian Borcea, Narain Gehani, Reza Curtmola, Xiaoning Ding, Nafize R. Paiker
Published in:
IEEE Transactions on Cloud Computing. 8:97-111
With cloud assistance, mobile apps can offload their resource-demanding computation tasks to the cloud. This leads to a scenario where computation tasks in the same program run concurrently on both the mobile device and the cloud. An important challe…
Published in:
2021 IEEE International Performance, Computing, and Communications Conference (IPCCC).
Published in:
PACT
Substantial recent work on hardware caches has focused on reducing cache interference between workloads. However, cache conflicts within each workload are surprisingly overlooked. The paper identifies that cache conflicts cannot be effe…
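As an illustrative aside (not taken from the paper), the sketch below shows one classic source of intra-workload cache conflicts: with a power-of-two stride, a handful of hot lines all map to the same cache set and evict each other even though the total working set easily fits in the cache. The stride, buffer count, and iteration count are illustrative assumptions.

/*
 * Illustrative sketch only: conflict misses from power-of-two strided access.
 * Compare cache-miss counters against a run with a non-power-of-two stride,
 * e.g. `perf stat -e cache-misses ./a.out`.
 */
#include <stdio.h>
#include <stdlib.h>

#define STRIDE  (64 * 1024)   /* 64 KiB: aliases to the same set in many L1/L2 designs */
#define NBUFS   16            /* more conflicting lines than typical associativity */
#define REPEAT  (1 << 20)

int main(void)
{
    char *buf = calloc((size_t)STRIDE * NBUFS, 1);
    if (!buf)
        return 1;

    volatile long sum = 0;
    for (long r = 0; r < REPEAT; r++)
        for (int i = 0; i < NBUFS; i++)
            sum += buf[(size_t)i * STRIDE];   /* all accesses collide in one cache set */

    printf("%ld\n", sum);
    free(buf);
    return 0;
}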
Published in:
ISPA/BDCloud/SocialCom/SustainCom
The cost of TLB consistency is steadily increasing as we evolve towards ever more parallel and consolidated systems. In many cases the application memory allocator is responsible for much of this cost. Existing allocators, to our knowledge, universally…
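As an illustrative aside (not taken from the paper), the sketch below shows how an allocator's unmap policy can drive TLB-consistency traffic: eagerly returning freed memory to the kernel with munmap() forces stale TLB entries to be invalidated, and once the process has threads running on other CPUs this becomes an IPI-based cross-CPU shootdown, whereas caching and reusing the mapping avoids it. The chunk size and round count are illustrative assumptions.

/*
 * Illustrative sketch only: an "eager return" allocation pattern.
 * Each round pays a map plus a shootdown-prone unmap; an allocator that
 * cached the mapping would pay the mapping cost once.  Compare, e.g.,
 * `perf stat -e tlb:tlb_flush ./a.out` against a reuse variant.
 */
#include <string.h>
#include <sys/mman.h>

#define CHUNK  (2 * 1024 * 1024)   /* 2 MiB */
#define ROUNDS 1000

int main(void)
{
    for (int i = 0; i < ROUNDS; i++) {
        void *p = mmap(NULL, CHUNK, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;
        memset(p, 0, CHUNK);   /* touch the pages so they are actually mapped */
        munmap(p, CHUNK);      /* invalidates TLB entries; a cross-CPU shootdown
                                  when other threads of the process are running */
    }
    return 0;
}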
Author:
Matthew Elbing, Jianchen Shan
Published in:
2020 IEEE Cloud Summit.
With the rise of multicore machines, the Linux scheduler has introduced a sophisticated load-balancing mechanism to spread tasks over the cores. A study [1] has shown that the load balancer implemented in the default Linux scheduler, the Completely Fair…
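As an illustrative aside (not taken from the study cited above), the sketch below observes the load balancer in action: each thread burns CPU and periodically reports which core it is running on via sched_getcpu(); under CFS the threads should end up spread over the available cores. Thread count, rounds, and the busy-loop length are illustrative assumptions.

/*
 * Illustrative sketch only: watch where CFS places CPU-bound threads.
 * Build with -pthread; sched_getcpu() requires _GNU_SOURCE on glibc.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int round = 0; round < 5; round++) {
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 200000000UL; i++)   /* busy work */
            x += i;
        printf("thread %ld on cpu %d\n", id, sched_getcpu());
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}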
Published in:
2020 IEEE Cloud Summit.
This paper proposes a neuron drop-out mechanism to control the training paces of mobile devices in federated deep learning. The aim is to accelerate local training on slow mobile devices with minimal impact on training quality, such that…
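As an illustrative aside (not the paper's algorithm), the sketch below shows the basic idea of dropping hidden neurons so that a slower device does proportionally less work per local training step. The layer sizes, the device_speed value, and the way the drop rate is derived from it are all illustrative assumptions.

/*
 * Illustrative sketch only: skip dropped neurons in a dense-layer forward
 * pass so the per-step compute shrinks with the drop rate.
 */
#include <stdio.h>
#include <stdlib.h>

#define IN  64
#define HID 128

/* Forward pass of one dense layer; dropped neurons cost no multiply-adds. */
static void forward(float w[HID][IN], const float x[IN],
                    const unsigned char keep[HID], float h[HID])
{
    for (int j = 0; j < HID; j++) {
        h[j] = 0.0f;
        if (!keep[j])
            continue;
        for (int i = 0; i < IN; i++)
            h[j] += w[j][i] * x[i];
    }
}

int main(void)
{
    static float w[HID][IN], x[IN], h[HID];
    unsigned char keep[HID];

    /* Assumption: the device reports a relative speed in (0, 1]; slower
     * devices drop a larger fraction of neurons for their local updates. */
    float device_speed = 0.5f;
    float drop_rate = 1.0f - device_speed;

    srand(42);
    for (int j = 0; j < HID; j++)
        keep[j] = ((float)rand() / RAND_MAX) >= drop_rate;

    forward(w, x, keep, h);

    int kept = 0;
    for (int j = 0; j < HID; j++)
        kept += keep[j];
    printf("kept %d of %d neurons (drop rate %.2f)\n", kept, HID, drop_rate);
    return 0;
}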