Abstract

GPU Acceleration Using CUDA Framework

Pratul P Nambiar, V.Saveetha, S.Sophia, V.Anusha Sowbarnika

This paper deals with the application of graphics processing units to general-purpose computing and the high-performance capability of a Graphics Processing Unit (GPU) using CUDA (Compute Unified Device Architecture) for parallel computing. GPGPU, which stands for General-Purpose computing on Graphics Processing Units, is the technique in which the GPU is employed to perform computations that were previously handled only by the CPU. Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently. There are many advantages in doing so, foremost among them speed, but getting the GPU to handle tasks traditionally performed by the CPU is not straightforward. CUDA was developed by NVIDIA to execute, via GPGPU, simple programs that were previously executed on the CPU. The underlying idea is that the GPU consists of many processing cores that operate in parallel and can therefore execute multiple instructions concurrently. CUDA gives program developers direct access to the virtual instruction set and memory of the parallel computation elements in CUDA-enabled GPUs.
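As a minimal illustration (not taken from the paper) of how CUDA exposes the GPU's parallel cores to a programmer, the sketch below launches a vector-addition kernel in which each GPU thread computes one output element. The kernel name, array size, and launch configuration are illustrative assumptions rather than details from the paper.

// Minimal CUDA sketch: one thread per output element of c = a + b.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes a single element of the output array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // illustrative problem size
    size_t bytes = n * sizeof(float);

    // Allocate and initialise host (CPU) data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks of 256 threads to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    // Copy the result back to the host and spot-check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The same pattern of copying data to the device, launching a kernel over many threads, and copying results back underlies most GPGPU programs written with CUDA.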
