What is image registration and how does it work?


SOURCE: ANALYTICSINDIAMAG.COM
MAY 07, 2022

Image registration is an automated or manual process that finds corresponding points between two images and spatially aligns them so as to minimise a chosen error, i.e. a proximity measure between the two images. It is widely used in medical sciences, remote sensing, and computer vision. This article discusses the technicalities of image registration along with an implementation using OpenCV. Following are the topics to be covered.

Table of contents

  1. About image registration
  2. The use of image registration
  3. Working of image registration
  4. Application of image registration
  5. Image registration using OpenCV

Let’s start with a high-level overview of image registration.

About image registration

Once a correspondence is established between the pictures, the relationship between two or more images can be studied and handled in a controlled way. The process of establishing this correspondence is known as image registration.

It could be said that image registration is the process of calculating spatial transforms which align a set of images to a common observational frame of reference, often one of the images in the set. Registration is a key step in any image analysis or understanding task where different sources of data must be combined. During the registration process, two sub-problems become evident:

  1. The correspondence between the images has to be found; this is known as the matching problem, and it is also the most time-consuming step of the algorithm’s execution.
  2. One of the images has to be transformed, i.e. its coordinate system has to be mapped onto that of the image chosen as the reference.

The use of image registration

Image registration could be used in two ways:

  1. Image-to-image registration: two or more pictures are aligned so that matching pixels representing the same objects can be integrated or fused.
  2. Image-to-map registration: the input image is warped to match the map information of a base image while keeping its original spatial resolution.

It is frequently used to align pictures from diverse camera sources in medical and satellite photography. Image registration is a technique used by digital cameras to align and link nearby pictures into a single panoramic image.

Working of image registration

There are four major steps that every image registration method has to go through for image alignment. These could be listed as follows:

  • Feature detection: A domain expert detects salient and distinctive objects (closed boundary areas, edges, contours, line intersections, corners, etc.) in both the reference and sensed images.
  • Feature matching: It establishes the correlation between the features in the reference and sensed images. The matching approach is based on the content of the picture or the symbolic description of the control point-set.
  • Estimating the transform model: The parameters and the type of the so-called mapping functions, which align the sensed image with the reference image, are estimated.
  • Image resampling and transformation: The sensed image is transformed and resampled using the mapping functions.

Image registration methods are majorly classified into two types: area-based approaches and feature-based methods. Area-based approaches are preferred when the images lack prominent features and the distinctive information is provided by grey levels/colours rather than by local shapes and structure.

Feature-based matching methods are used when the local structural information is more significant than the information carried by the image intensities. These procedures work on image characteristics produced by a feature extraction technique. Both classes can be further divided into various methods. Let’s have a look at those classifications.

Pixel Based Method

For registration, a cross-correlation statistical methodology is employed in this procedure. It is frequently used for template matching or pattern recognition, which involves finding the location and orientation of a template or pattern in an image. Cross-correlation is a measure of similarity or a match metric.

For example, the two-dimensional normalised cross-correlation function assesses the similarity between a template (a small reference image) and the image for every possible translation, where the template is small in comparison to the image.

If the template fits the image, the cross-correlation will be at its peak. Because the measure might be influenced by local picture intensity, the cross-correlation should be adjusted.
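
To make this concrete, here is a minimal sketch of normalised cross-correlation template matching with OpenCV. The file names are placeholders, and cv2.TM_CCOEFF_NORMED is just one of the normalised match metrics OpenCV offers:

import cv2

image = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical scene image
template = cv2.imread("template.jpg", cv2.IMREAD_GRAYSCALE)  # small reference patch

# Slide the template over the image and compute a normalised correlation
# score for every translation
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

# The location of the peak is the best-matching translation
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("Best match at", max_loc, "with score", max_val)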

Drawbacks

The key disadvantages of correlation approaches are the flatness of the similarity measure maximum (owing to the self-similarity of the pictures) and the high processing complexity. The maximum can be successfully sharpened by pre-processing or by applying edge or vector correlation.

Point Mapping Method

Grey values are not used to describe matched entities in point-based matching algorithms. It employs picture features produced from a feature extraction algorithm/process.

The fundamental goal of feature extraction is to distil the essential information from a huge quantity of original input data and to filter out the redundant information that is no longer needed.

Features that are detected in both photos and are tolerant of local distortions are chosen. These features drive the computation of a suitable transformation; as a result, a significant number of them must be detected for the computation to be reliable.

After detecting characteristics in each image, they must be matched. This is the most common method for registering two photos with unknown misalignment.

Control points for point matching are crucial in this strategy. Corners, lines of intersection, points of locally greatest curvature on contour lines, centres of windows with locally maximum curvature, and centres of gravity of closed-boundary areas are all examples of control points.

Point mapping methods are divided into three stages, which are as follows (a sketch of the first stage, automatic control-point detection, follows this list):

  • Computing features in the images.
  • Identifying the corresponding feature points in the data (sensed) image.
  • Determining the spatial mapping from the paired feature points.
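
As a minimal sketch of the first stage only, control points can be detected automatically, for example with OpenCV’s Shi-Tomasi corner detector; the file name and parameter values below are assumptions chosen for illustration:

import cv2

gray = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical reference image

# Detect up to 200 strong corners to serve as candidate control points
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=10)
corners = corners.reshape(-1, 2)  # (x, y) coordinates of the detected control points
print(len(corners), "control points detected")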

Drawbacks

The downside of this feature-based technique is its dependence on the saliency of the window content. A window containing a smooth area devoid of critical information is likely, because of its non-saliency, to be mistakenly matched with other smooth areas in the reference picture.

The registration characteristics should preferably be recognised in discrete areas of the picture. Windows may lack this feature, and their selection is frequently not based on their content evaluation.

Contour Based Image Registration

In this strategy, strong statistical characteristics are employed to match picture feature points. Colour picture segmentation is used to extract regions of interest from images.

To produce the contour of an image, the mean for a given collection of colours is computed. Then each RGB pixel in an image is categorised as having a colour in a specific range or not during the segmentation process. In addition, the Euclidean distance is necessary to determine similarity.

The locus of points is a sphere with a radius equal to the threshold value, and every point situated inside or on the surface of the sphere meets the stated colour requirement. By coding these two sets of points in the image with black and white, a binary, segmented image is produced.

A Gaussian filter is used to eliminate noise after the segmentation process. The blurred image is then thresholded, and the contour of the image is obtained.
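
A minimal sketch of this colour segmentation and contour extraction pipeline is shown below; the reference colour, distance threshold, kernel size, and file name are all assumptions chosen for illustration:

import cv2
import numpy as np

img = cv2.imread("scene.jpg")                  # hypothetical input image (BGR)
ref_colour = np.array([200.0, 160.0, 120.0])   # assumed reference colour (B, G, R)
radius = 60.0                                  # assumed Euclidean distance threshold

# Classify each pixel: inside the colour sphere -> white, otherwise black
dist = np.linalg.norm(img.astype(np.float32) - ref_colour, axis=2)
binary = np.where(dist <= radius, 255, 0).astype(np.uint8)

# Remove noise with a Gaussian filter, re-threshold, then extract the contours
blurred = cv2.GaussianBlur(binary, (5, 5), 0)
_, mask = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)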

Drawbacks

The Contour Based Image Registration method is accurate, but its limitation is that it is manual and slow.

Multimodal Image Registration Using Mutual Information

Registration of multispectral/multisensory pictures is a difficult task. In general, such pictures have varying grey level properties, and basic approaches based on area correlation cannot be easily used.

The entropy of an image depends only on its histogram: even after randomly shuffling the image’s pixels, the histogram is unchanged and the entropy stays constant. Natural photographs, however, are far less ambiguous than such a shuffled image.

In a realistic image, a pixel’s value is likely to be quite near to that of some of its neighbours, and this dependence reduces the overall uncertainty. Mutual information exploits the analogous statistical dependence between corresponding pixels of two images of the same scene: it is maximised when the images are correctly aligned.
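
The mutual information between two images can be estimated from their joint intensity histogram. Below is a minimal NumPy sketch (the bin count is an arbitrary assumption); the value peaks when the two images are correctly aligned:

import numpy as np

def mutual_information(img1, img2, bins=64):
    # Joint histogram of corresponding pixel intensities
    joint_hist, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()   # joint distribution
    px = pxy.sum(axis=1)                  # marginal of img1
    py = pxy.sum(axis=0)                  # marginal of img2
    nz = pxy > 0                          # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))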

Drawbacks

When pictures have a poor resolution or the overlap region is narrow, mutual information causes misregistration.

Image Registration in Frequency Domain

The correlation theorem is central to this procedure: the Fourier transform of an image holds two values, the real and imaginary parts, at each frequency of the spectrum, and the cross-power spectrum of two images can be computed directly from their transforms.

By computing the inverse of this function, we get an impulse-like function that is essentially zero everywhere except at the displacement required to optimally register the two pictures. The approach described above is used to register pictures that differ only by a translation.
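
OpenCV exposes this idea directly through cv2.phaseCorrelate. The synthetic example below shifts an image by a known amount and recovers the translation from the cross-power spectrum; the file name is a placeholder, and the sign of the recovered shift depends on OpenCV’s convention:

import cv2
import numpy as np

ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical file
moved = np.roll(ref, shift=(15, 25), axis=(0, 1))  # shift 15 px down, 25 px right (circular)

# The peak of the inverse cross-power spectrum gives the translation between the images
(dx, dy), response = cv2.phaseCorrelate(ref, moved)
print(dx, dy)  # magnitudes close to 25 and 15; the sign follows OpenCV's convention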

Drawbacks

It should be emphasised that some type of interpolation must be utilised in the frequency domain.

Image Registration using Genetic Algorithm

Genetic Algorithms (GA) are founded on the natural notion of evolution, which suggests that variety aids in the survival of a population in the face of changing environmental conditions. GAs are iterative methods with inherent parallelism.

They keep a population of candidate solutions encoded as chromosomes, generally strings of the binary symbols “0” and “1”. The initial population may be chosen at random.

Fit candidates, defined as those with fitness values greater than a given threshold, will be chosen for reproduction in the following generation. The chosen candidates are merged using a genetic reproduction procedure known as “crossover.” The crossover operator swaps out chromosomal segments to create superior candidates with greater fitness in the following generation.

In the next generation, the entire population is reevaluated. The operation is repeated until a termination requirement is met. The termination criterion might include discovering an acceptable approximation solution, attaining a certain number of generations, or solution convergence.
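
As an illustration of the idea (not the specific algorithm used for satellite imagery), here is a minimal sketch of a genetic algorithm that searches for a purely translational alignment; chromosomes are (dx, dy) integer shifts, fitness is the negative sum of squared differences, and all parameter values are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)

def fitness(ref, moving, dx, dy):
    # Negative sum of squared differences after shifting `moving` by (dx, dy)
    shifted = np.roll(moving, shift=(dy, dx), axis=(0, 1))
    return -np.sum((ref.astype(np.float32) - shifted.astype(np.float32)) ** 2)

def register_ga(ref, moving, pop_size=30, generations=40, max_shift=50):
    # Random initial population of candidate (dx, dy) translations
    pop = rng.integers(-max_shift, max_shift + 1, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ref, moving, dx, dy) for dx, dy in pop])
        # Selection: keep the fitter half of the population as parents
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: a child takes dx from one parent and dy from another
        idx_a = rng.integers(0, len(parents), pop_size)
        idx_b = rng.integers(0, len(parents), pop_size)
        children = np.column_stack([parents[idx_a, 0], parents[idx_b, 1]])
        # Mutation: occasionally perturb a child by a small random offset
        mutate = rng.random(pop_size) < 0.2
        children[mutate] += rng.integers(-3, 4, size=(int(mutate.sum()), 2))
        pop = children
    scores = np.array([fitness(ref, moving, dx, dy) for dx, dy in pop])
    return tuple(pop[np.argmax(scores)])  # best (dx, dy) found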

For registering satellite imagery against digital vector maps, automated registration using a genetic algorithm has been shown to be particularly efficient and reliable.

Applications of Image Registration

In the field of Medical Science

  • Multimodal MRI-MEG Co-registration

In the axial view of brain imaging (top), the yellow dots indicate anatomical markers or fiducial points (anatomical information). In the bottom image, the pink dots show the positions of the MEG sensors, while the green dots represent the sites of the scalp-EEG sensors. These MEG and EEG data carry the functional information, and the bottom image depicts the co-registered brain image (sagittal view).

In the field of Photography

  • Satellite uses the point mapping method for image registration.

Image registration using OpenCV

Let’s implement one of the mentioned methods, feature-based (point mapping) registration with ORB, using OpenCV.

Import necessary libraries:

import numpy as np
import imutils
import cv2

If you are using a Google Colab notebook, you also need to import an extra library, as shown below, to display images, because “cv2.imshow” has been disabled as it crashes the kernel.

from google.colab.patches import cv2_imshow

Read the images and convert them into grey values

img_align = cv2.imread("IMG_2523.jpg") img_temp = cv2.imread("IMG_2524.JPG") img1 = cv2.cvtColor(img_align, cv2.COLOR_BGR2GRAY) img2 = cv2.cvtColor(img_temp, cv2.COLOR_BGR2GRAY) height, width = img2.shape

Set up the keypoints and descriptors that will be used to match pixels between the image and the template.

orb_detector = cv2.ORB_create(5000)
kp1, d1 = orb_detector.detectAndCompute(img1, None)
kp2, d2 = orb_detector.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck = True)
matches = matcher.match(d1, d2)
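
The homography step below expects two arrays of matched point coordinates, p1 and p2, which the listing above does not build. A minimal sketch to derive them from the ORB matches is given here; sorting by descriptor distance and keeping the best 90% is an assumption, not part of the original listing:

# Sort matches by descriptor distance and keep the strongest 90%
matches = sorted(matches, key=lambda m: m.distance)
matches = matches[:int(len(matches) * 0.9)]

# Collect the matched keypoint coordinates from both images
p1 = np.zeros((len(matches), 2), dtype=np.float32)
p2 = np.zeros((len(matches), 2), dtype=np.float32)
for i, m in enumerate(matches):
    p1[i] = kp1[m.queryIdx].pt
    p2[i] = kp2[m.trainIdx].pt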

Estimate a homography from the matched points; it will be used to warp the image into the template’s frame.

homography, mask = cv2.findHomography(p1, p2, cv2.RANSAC)

Registering Image:

transformed_img = cv2.warpPerspective(img_align, homography, (width, height))
matchedVis = cv2.drawMatches(img1, kp1, img2, kp2, matches, None)
matchedVis = imutils.resize(matchedVis, width=1000)
cv2_imshow(matchedVis)
cv2.waitKey(0)

Comparing the resultant image:

Output image

Input image

Validating the result:

Up to 5,000 keypoints were detected in the input image and matched against the template image, and the output image was generated from the resulting homography. By observing the output image, we can say with confidence that the image is properly aligned with the reference image.

Summarizing the image registration

Image registration is a necessary step in integrating, fusing, and evaluating data from numerous sensors (sources). It has a wide range of applications in medical sciences, computer vision, and remote sensing.

Registration of images with complicated nonlinear distortions, multimodal registration, and registration of occluded images remain among the hardest tasks, together with improving the robustness of the existing techniques.

In this article, we have understood the concept of image registration, different methods of registering images and implementation in Python using OpenCV.

Sourabh Mehta

Sourabh has worked as a full-time data scientist for an ISP organisation, experienced in analysing patterns and their implementation in product development. He has a keen interest in developing solutions for real-time problems with the help of data both in this universe and metaverse.