In computer science, digital image processing is the use of computer algorithms to perform image processing on digital images.^{[1]} As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the buildup of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more) digital image processing may be modeled in the form of multidimensional systems.
Many of the techniques of digital image processing, or digital picture processing as it often was called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with application to satellite imagery, wirephoto standards conversion, medical imaging, videophone, character recognition, and photograph enhancement.^{[2]} The cost of processing was fairly high, however, with the computing equipment of that era.
That changed in the 1970s, when digital image processing proliferated as cheaper computers and dedicated hardware became available. Images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they began to take over the role of dedicated hardware for all but the most specialized and computer-intensive operations. With the fast computers and signal processors available in the 2000s, digital image processing became the most common form of image processing, and is generally used because it is not only the most versatile method but also the cheapest.
Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994.^{[3]}
Digital image processing allows the use of much more complex algorithms, and hence, can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analog means.
In particular, digital image processing is the only practical technology for^{[citation needed]}:
Some techniques which are used in digital image processing include:
Digital filters are used to blur and sharpen digital images. Filtering can be performed by:
The following examples show both methods:^{[4]}
Filter type | Kernel or mask | Example
Original image | | [image]
Spatial low-pass | [kernel] | [image]
Spatial high-pass | [kernel] | [image]
Fourier representation | pseudocode below | [image]
Fourier low-pass | [mask] | [image]
Fourier high-pass | [mask] | [image]

Pseudocode for the Fourier representation:

image = checkerboard
F = Fourier Transform of image
Show Image: log(1 + Absolute Value(F))
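The Fourier-domain filtering above can be sketched in Python. This is an illustrative translation of the pseudocode, not the article's original code: NumPy's FFT stands in for the Fourier transform, the checkerboard is built by hand to mimic MATLAB's checkerboard(20), and the circular mask radius is an arbitrary choice.

```python
import numpy as np

# Checkerboard test image (assumption: 8x8 alternating blocks of 20 px,
# mirroring MATLAB's checkerboard(20)).
tile = np.kron([[0.0, 1.0], [1.0, 0.0]], np.ones((20, 20)))
image = np.tile(tile, (4, 4))

# Fourier transform of the image; shift zero frequency to the center.
F = np.fft.fftshift(np.fft.fft2(image))
spectrum = np.log(1 + np.abs(F))  # what the pseudocode displays

# Circular low-pass mask: keep frequencies within `radius` of the center
# (radius 30 is an arbitrary illustrative cutoff).
rows, cols = image.shape
y, x = np.ogrid[:rows, :cols]
dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
lowpass_mask = dist <= 30

# Low-pass keeps the center of the spectrum; high-pass keeps the rest,
# so the two filtered images sum back to the original.
lowpass = np.real(np.fft.ifft2(np.fft.ifftshift(F * lowpass_mask)))
highpass = np.real(np.fft.ifft2(np.fft.ifftshift(F * ~lowpass_mask)))
```

Because the two masks partition the spectrum, adding the low-pass and high-pass results reconstructs the input, which is a handy sanity check.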
Images are typically padded before being transformed to the Fourier space; the high-pass filtered images below illustrate the consequences of different padding techniques:
Zero padded | Repeated edge padded
[image] | [image]
Notice that the high-pass filter shows extra edges when zero padded compared to the repeated edge padding.
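The padding effect can be demonstrated with SciPy's `ndimage.convolve`, whose `mode` parameter selects the padding: `'constant'` gives zero padding, while `'nearest'` repeats the edge pixels. The ramp image and Laplacian-style kernel here are illustrative choices, not the article's originals; a ramp is flat under the Laplacian, so any border response comes purely from the padding.

```python
import numpy as np
from scipy import ndimage

# Illustrative test image: a horizontal intensity ramp (each row is 0..31).
img = np.tile(np.arange(32.0), (32, 1))

# Simple high-pass (Laplacian) kernel.
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)

# 'constant' pads with zeros beyond the border; 'nearest' repeats the
# border pixels (the "repeated edge" padding of the text).
zero_padded = ndimage.convolve(img, kernel, mode='constant', cval=0.0)
edge_padded = ndimage.convolve(img, kernel, mode='nearest')
```

In the interior both results are zero (the Laplacian of a linear ramp vanishes), but along the right border the zero-padded result shows a strong spurious edge that the repeated-edge version largely avoids.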
MATLAB example for spatial-domain high-pass filtering:

img = checkerboard(20);             % generate checkerboard test image
% ************************** SPATIAL DOMAIN ***************************
klaplace = [0 -1 0; -1 5 -1; 0 -1 0]; % Laplacian filter kernel
X = conv2(img, klaplace);           % convolve test img with
                                    % 3x3 Laplacian kernel
figure()
imshow(X, [])                       % show Laplacian filtered image
title('Laplacian Edge Detection')
Affine transformations enable basic image transformations including scaling, rotation, translation, mirroring and shearing, as shown in the following examples:^{[5]}
Transformation name | Affine matrix | Example
Identity | [1 0 0; 0 1 0; 0 0 1] | [image]
Reflection | [-1 0 0; 0 1 0; 0 0 1] | [image]
Scale | [c_x 0 0; 0 c_y 0; 0 0 1] | [image]
Rotate | [cos θ  -sin θ  0; sin θ  cos θ  0; 0 0 1], where θ = π/6 = 30° | [image]
Shear | [1 s_x 0; s_y 1 0; 0 0 1] | [image]
To apply the affine matrix to an image, the image is converted to a matrix in which each entry corresponds to the pixel intensity at that location. Then each pixel's location can be represented as a vector [x, y], where x and y are the row and column of the pixel in the image matrix. This allows the coordinate to be multiplied by an affine-transformation matrix, which gives the position to which the pixel value will be copied in the output image.
However, to allow transformations that include translation, three-dimensional homogeneous coordinates are needed. The third dimension is set to a nonzero constant, usually 1, so that the coordinate becomes [x, y, 1]. This allows the coordinate vector to be multiplied by a 3 × 3 matrix, which can encode translation shifts in addition to the linear transformations above.
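A minimal sketch of a translation in homogeneous coordinates (the shift amounts and the pixel location are arbitrary illustrative values):

```python
import numpy as np

# Translation by (tx, ty) as a 3x3 homogeneous-coordinate matrix.
tx, ty = 5, 3
T = np.array([[1, 0, tx],
              [0, 1, ty],
              [0, 0, 1]])

p = np.array([10, 20, 1])  # pixel at (x, y) = (10, 20), homogeneous form
q = T @ p                  # result is [15, 23, 1]: shifted by (5, 3)
```

A plain 2 × 2 matrix acting on [x, y] could never produce this shift, which is exactly why the extra constant coordinate is introduced.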
Because matrix multiplication is associative, multiple affine transformations can be combined into a single affine transformation by multiplying the matrix of each individual transformation in the order that the transformations are done. This results in a single matrix that, when applied to a point vector, gives the same result as all the individual transformations performed on the vector [x, y, 1] in sequence. Thus a sequence of affine transformation matrices can be reduced to a single affine transformation matrix.
For example, 2-dimensional coordinates only allow rotation about the origin (0, 0). But 3-dimensional homogeneous coordinates can be used to first translate any point to (0, 0), then perform the rotation, and lastly translate the origin (0, 0) back to the original point (the opposite of the first translation). These three affine transformations can be combined into a single matrix, thus allowing rotation around any point in the image.^{[6]}
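The translate-rotate-translate composition described above can be sketched as follows. The rotation convention (counter-clockwise) and the chosen center are illustrative assumptions; note that the rightmost matrix in the product is the one applied first.

```python
import numpy as np

def rotation_about(cx, cy, theta):
    """Rotation by theta about (cx, cy), built from three homogeneous
    3x3 matrices: translate to the origin, rotate, translate back."""
    to_origin = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], float)
    c, s = np.cos(theta), np.sin(theta)
    rotate = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # CCW rotation
    back = np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]], float)
    # Matrix product = single combined affine transformation.
    return back @ rotate @ to_origin

# Rotate 90 degrees about the (arbitrary) center (8, 8):
M = rotation_about(8, 8, np.pi / 2)
moved = M @ np.array([10, 8, 1])   # (10, 8) maps to (8, 10)
fixed = M @ np.array([8, 8, 1])    # the center itself does not move
```

Because matrix multiplication is associative, M is a single matrix, yet applying it is equivalent to performing the three steps in sequence.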
Digital cameras generally include specialized digital image processing hardware – either dedicated chips or added circuitry on other chips – to convert the raw data from their image sensor into a color-corrected image in a standard image file format.
Westworld (1973) was the first feature film to use digital image processing, pixellating photography to simulate an android's point of view.^{[7]}