Hi all,

today I’m going to talk about an academic project I’ve developed: an algorithm for parallel image denoising, on systems equipped with an NVIDIA GPU.

If you’re not sure what an image affected by noise looks like, take a look here. The aim of this project is to reduce the noise in the image. The kind of noise we want to reduce is called *Gaussian noise*, a statistical noise whose *probability density function (PDF)* equals that of the *normal distribution*. Pictures are very likely to be affected by Gaussian noise when there isn’t enough illumination during acquisition.
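For reference, this is the standard density of the normal distribution (a textbook formula, not something specific to this project); for zero-mean noise, μ = 0 and σ controls how strong the noise is:

```latex
p(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```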

There are several algorithms able to reduce noise in images. The easiest is to perform a convolution in the spatial domain with a Gaussian kernel; this technique does the job, but it’s too slow in some cases. To speed it up, we can work in the frequency domain instead, thanks to the *convolution theorem*: a convolution in the spatial domain becomes a component-wise multiplication in the frequency domain. I used the cuFFT library to perform the transformations, CUDA runtime routines to move data between *host* memory and *device* (GPU) memory, and a simple component-wise matrix multiplication kernel for the filtering.
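To make the pipeline concrete, here is a minimal sketch of the frequency-domain filtering step. It assumes the grayscale image and the zero-padded Gaussian kernel are already on the device as complex arrays; the names (`pointwiseMultiply`, `d_image`, `d_filter`, `denoise`) are illustrative, not the project’s actual identifiers, and error checking is omitted for brevity:

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Component-wise complex multiplication, scaled by 1/N so that the
// inverse (unnormalized) cuFFT transform returns values in the
// original range: a[i] = a[i] * b[i] * scale.
__global__ void pointwiseMultiply(cufftComplex *a, const cufftComplex *b,
                                  int n, float scale) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex v;
        v.x = (a[i].x * b[i].x - a[i].y * b[i].y) * scale;
        v.y = (a[i].x * b[i].y + a[i].y * b[i].x) * scale;
        a[i] = v;
    }
}

// d_image and d_filter are W*H complex arrays already in device memory.
void denoise(cufftComplex *d_image, cufftComplex *d_filter, int W, int H) {
    cufftHandle plan;
    cufftPlan2d(&plan, H, W, CUFFT_C2C);

    // Forward FFT of both the image and the padded Gaussian kernel.
    cufftExecC2C(plan, d_image, d_image, CUFFT_FORWARD);
    cufftExecC2C(plan, d_filter, d_filter, CUFFT_FORWARD);

    // Convolution theorem: multiply component-wise in the frequency domain.
    int n = W * H;
    int threads = 256;
    pointwiseMultiply<<<(n + threads - 1) / threads, threads>>>(
        d_image, d_filter, n, 1.0f / n);

    // Back to the spatial domain: d_image now holds the smoothed image.
    cufftExecC2C(plan, d_image, d_image, CUFFT_INVERSE);
    cufftDestroy(plan);
}
```

Note that cuFFT’s inverse transform is unnormalized, which is why the 1/N scaling is folded into the multiplication kernel.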

You can find the code on my GitHub profile; feel free to contribute in any way you like.
