Showing 1 - 10 of 36 for search: '"Jinkun, Geng"'
Author:
Runhua Zhang, Hongxu Jiang, Fangzheng Tian, Jinkun Geng, Xiaobin Li, Yuhang Ma, Chenhui Zhu, Dong Dong, Xin Li, Haojie Wang
Edge computing has been emerging as a popular scenario for model inference. However, inference performance on edge devices (e.g., multi-core DSP, FPGA, etc.) suffers from inefficiency due to the lack of highly optimized inference frameworks. …
External link:
http://arxiv.org/abs/2302.00282
Published in:
Dianxin kexue, Vol 33, Pp 65-70 (2017)
In recent years, network function virtualization (NFV) has been arousing wide concern from both academia and industry due to its flexible deployment and low cost. However, the performance bottleneck is becoming increasingly prominent and hinders the progress …
External link:
https://doaj.org/article/4525d6401f684e80b0cd268fb48a48cb
Published in:
IEEE/ACM Transactions on Networking. 30:572-585
Author:
Runhua Zhang, Hongxu Jiang, Fangzheng Tian, Jinkun Geng, Xiaobin Li, Yuhang Ma, Chenhui Zhu, Dong Dong, Xin Li, Haojie Wang
Published in:
Database Systems for Advanced Applications (ISBN 9783031306365)
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::671ad225d070673cb738e146469f90a8
https://doi.org/10.1007/978-3-031-30637-2_35
Author:
Junfeng Li, Dan Li, Huiyou Jiang, Du Lin, Jinkun Geng, Yukai Huang, K.K. Ramakrishnan, Kai Zheng
Published in:
Computer Networks. 229:109756
This paper presents a high-performance consensus protocol, Nezha, which can be deployed by cloud tenants without any support from their cloud provider. Nezha bridges the gap between protocols such as Multi-Paxos and Raft, which can be readily deployed …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::36e0695e2d9bb3ff3646fbbff9f2d901
http://arxiv.org/abs/2206.03285
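The snippet names Nezha only at a high level; as a rough illustration of the majority-quorum idea that Multi-Paxos, Raft, and related consensus protocols build on (a generic sketch, not Nezha's actual mechanism), a replicated entry counts as committed once more than half of the replicas acknowledge it:

    # Generic majority-quorum commit check (illustrative only, not Nezha's
    # protocol): an entry commits once a majority of replicas acknowledge it.
    def is_committed(acks: set, replicas: set) -> bool:
        return len(acks & replicas) > len(replicas) // 2

    replicas = {"r1", "r2", "r3", "r4", "r5"}
    print(is_committed({"r1", "r3"}, replicas))        # False: 2 of 5 acks
    print(is_committed({"r1", "r3", "r5"}, replicas))  # True: 3 of 5 is a majority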
Autor:
Jianping Wu, Dan Li, Shuai Wang, Shu-Tao Xia, Songtao Wang, Yanshu Wang, Jinkun Geng, Yang Cheng
Published in:
IEEE/ACM Transactions on Networking. 28:1752-1764
In large-scale distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training. In this paper we propose BML, a scalable, high-performance and fault-tolerant DML network architecture …
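To see why network performance dominates iterative training, a back-of-the-envelope calculation with illustrative numbers (not figures from the paper): shipping one full gradient of a 100M-parameter float32 model over a 10 Gb/s link already costs hundreds of milliseconds per iteration.

    # Illustrative numbers, not from the paper: naive per-iteration cost of
    # sending one full gradient over the network in data-parallel training.
    params = 100_000_000                    # 100M float32 parameters
    grad_bits = params * 4 * 8              # 4 bytes per parameter, 8 bits per byte
    link_bps = 10e9                         # 10 Gb/s link
    print(f"{grad_bits / link_bps * 1000:.0f} ms per synchronization")  # ~320 ms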
Author:
Balaji Prabhakar, Yilong Geng, Anirudh Sivaraman, Ahmad Ghalayini, Vighnesh Sachidananda, Vinay Sriram, Jinkun Geng, Mendel Rosenblum
Published in:
HotOS
Financial exchanges have begun a move from on-premise and custom-engineered datacenters to the public cloud, accelerated by a rush of new investors, the rise of remote work, cost savings from the cloud, and the desire for more resilient infrastructure …
Published in:
SERVICES
With the rapid development of the Internet, the number of Web services is increasing sharply, which makes it more difficult for Mashup developers to find suitable Web services. Nowadays, there are numerous methods to improve Web service recommendation, but …
Published in:
INFOCOM
Increasingly rich data sets and complicated models make distributed machine learning more and more important. However, the cost of extensive and frequent parameter synchronizations can easily diminish the benefits of distributed training across multiple machines …
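As a generic illustration of the synchronization cost this snippet refers to (not the method the paper proposes), accumulating gradients locally for k steps before each synchronization cuts the number of network rounds by a factor of k, at the price of applying slightly staler updates:

    # Generic illustration, not the paper's method: accumulate gradients
    # locally for k steps, then synchronize once, reducing network rounds.
    def count_sync_rounds(steps: int, k: int) -> int:
        accumulated = 0.0
        rounds = 0
        for step in range(1, steps + 1):
            accumulated += 1.0          # stand-in for one local gradient
            if step % k == 0:
                rounds += 1             # one synchronization covers k local steps
                accumulated = 0.0       # reset the local buffer after syncing
        return rounds

    print(count_sync_rounds(steps=1000, k=1))   # 1000 synchronizations
    print(count_sync_rounds(steps=1000, k=10))  # 100 synchronizations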