Nvidia CUDA


What does CUDA mean for ordinary users?
CUDA is a parallel-computing platform developed by Nvidia. It speeds up computation and reduces the CPU's workload in certain applications by harnessing the available computing power of the graphics processor (GPU), and a wide range of applications can benefit from it. Any workload that applies the same processing to many data elements is a candidate: image and video processing, biology or chemistry calculations, fluid-dynamics simulation, seismic analysis and more.

CUDA (Source: hothardware.com)

Historically, the power of graphics processors has grown enormously over the last decade, which is the main reason the number of transistors in graphics chips (GPUs) far exceeds that in central processing units (CPUs). For example, the GF110 graphics processor used on Nvidia GTX 580 cards has about 3.1 billion transistors, and even a mid-range chip such as the one on the Nvidia GTX 560 Ti reaches 1.95 billion. GPUs made by ATI are similar: the Cayman XT, the top chip of a few years ago used on Radeon HD 6970 cards, has 2.64 billion transistors. By contrast, a high-end desktop CPU makes do with "only" 915 million transistors (a quad-core Sandy Bridge), and the current six-core flagship of AMD's processor family, the Phenom II X6, uses 904 million.

Unlike CPUs, which are general-purpose processors, GPUs are optimized for parallel computation over many data sets, much like an FPGA. Another major difference between the CPU and GPU architectures lies in how memory is accessed.

CUDA (Source: media.bestofmicro.com)

A CPU uses between one and three 64-bit memory channels, while current GPUs can use up to eight parallel 64-bit channels. With memory of the same type, this yields more than double the combined bandwidth. On top of that, graphics cards usually use GDDR memory, which is one or two generations ahead of the RAM typically used in PCs.

What can be done with this power when the chip is not used for 3D rendering?
One example is real-time MPEG-2 decoding of video streams. The GPU offers substantial computational power that is in many cases independent of the CPU and, to a degree, of the rest of the computer. As this technology advances, GPUs may soon outgrow their dedicated purpose and, without too much exaggeration, take on the role of central processing units.