Description: |
The Streaming Multiprocessors (SMs) of a Graphics Processing Unit (GPU) execute instructions from groups of consecutive threads, called warps. At each cycle, an SM schedules a warp from its set of active warps and can context switch among them to hide various stalls; the performance of the warp scheduler is therefore critical to GPU performance. Several heuristic warp scheduling algorithms have been proposed, but each works well only in the situations it was designed for. GPU workloads are becoming increasingly diverse, so a single heuristic may not work well in all cases. To perform well over a diverse range of workloads, which may exhibit hitherto unseen characteristics, a warp scheduling algorithm must be able to adapt online. We propose a Reinforcement Learning based Warp Scheduler (RLWS) that learns to schedule warps based on the current state of the core and the long-term benefits of scheduling actions, adapting not only to different types of workloads but also to different execution phases within each workload. Because the design space of state variables and parameters (such as learning and exploration rates, and reward and penalty values) used by RLWS is large, we use a Genetic Algorithm to identify a useful subset of state variables and parameter values. We evaluated the proposed RLWS using the GPGPU-Sim simulator on a large number of workloads from the Rodinia, Parboil, CUDA-SDK and GPGPU-Sim benchmark suites and compared it with other state-of-the-art warp scheduling methods. Our RL-based implementation achieved either the best or very close to the best performance in 80% of the kernels, with an average speedup of 1.06x over the Loose Round Robin strategy and 1.07x over the Two-Level strategy.
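
The abstract does not spell out the learning formulation. As a minimal sketch only, assuming a tabular Q-learning scheduler with an epsilon-greedy policy, the per-cycle scheduling loop might look like the following; the class and method names (RLWarpScheduler, select_warp, update), the discretized state tuple, and the reward shaping are illustrative assumptions, not the paper's actual implementation:

    import random
    from collections import defaultdict

    class RLWarpScheduler:
        """Hypothetical tabular Q-learning warp scheduler (illustrative only).

        'state' is assumed to be a hashable tuple of discretized core
        features (e.g. pipeline stalls, ready-warp count); the useful
        feature subset and the alpha/gamma/epsilon/reward values are the
        kind of parameters the paper tunes with a Genetic Algorithm.
        """

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
            self.alpha = alpha            # learning rate
            self.gamma = gamma            # discount: weight of long-term benefit
            self.epsilon = epsilon        # exploration rate
            self.q = defaultdict(float)   # Q(state, warp) table, default 0.0

        def select_warp(self, state, ready_warps):
            """Pick a warp to issue this cycle (ready_warps must be non-empty)."""
            if random.random() < self.epsilon:
                # Explore occasionally so the policy can adapt to new
                # workloads and execution phases.
                return random.choice(ready_warps)
            # Exploit: choose the warp with the highest learned value.
            return max(ready_warps, key=lambda w: self.q[(state, w)])

        def update(self, state, warp, reward, next_state, next_ready_warps):
            """One-step Q-learning update after observing the outcome."""
            best_next = max((self.q[(next_state, w)] for w in next_ready_warps),
                            default=0.0)
            td_target = reward + self.gamma * best_next
            self.q[(state, warp)] += self.alpha * (td_target - self.q[(state, warp)])

In such a sketch, the reward might, for example, be positive when an instruction issues and negative on a stall; the abstract indicates only that reward and penalty values are tunable parameters searched by the Genetic Algorithm, not their concrete form.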