Image filtering and processing#
The sdt.image module contains filters for image data:

- wavelet filters for background estimation and subtraction: wavelet() and wavelet_bg()
- a bandpass filter (as suggested in [Croc1996]) for background estimation and subtraction: cg() and cg_bg()
- a Gaussian blur filter: gaussian_filter()
All filters make use of the sdt.helper.pipeline() mechanism, meaning
that they are only applied to image data (if it is of the right type,
e.g. an io.ImageSequence) as needed.
If OpenCV is available, the functions adaptive_thresh(),
otsu_thresh(), and percentile_thresh() provide ways of
generating binary masks from images by thresholding.
Furthermore, the module supports easy creation of boolean image masks. There
are classes for producing rectangular (RectMask), circular
(CircleMask), and diamond-shaped (DiamondMask) masks.
Examples
Subtract the background (as estimated by a bandpass filter) from images:
>>> img = io.ImageSequence("images.tif").open() # load data
>>> img_nobg = cg(img, 3) # only creates the pipeline, no calculation yet
>>> first_frame = img_nobg[0] # now (only) first image is loaded and filtered
This works similarly for the wavelet and Gaussian filters.
Create a rectangular boolean image mask:
>>> mask = RectMask((5, 3), shape=(7, 5))
>>> mask
array([[False, False, False, False, False],
[False, True, True, True, False],
[False, True, True, True, False],
[False, True, True, True, False],
[False, True, True, True, False],
[False, True, True, True, False],
[False, False, False, False, False]], dtype=bool)
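Masks like the one above are plain boolean NumPy arrays, so they can be used directly for indexing and pixel selection. The following sketch builds an equivalent mask by hand (so it runs without sdt installed) and applies it to an image:

```python
import numpy as np

# Hand-built equivalent of RectMask((5, 3), shape=(7, 5)):
# a 5x3 True rectangle centered in a 7x5 boolean array.
mask = np.zeros((7, 5), dtype=bool)
mask[1:6, 1:4] = True

img = np.arange(35, dtype=float).reshape(7, 5)

# Zero out everything outside the mask ...
img_masked = np.where(mask, img, 0.0)

# ... or compute statistics of the masked-in pixels only.
mean_inside = img[mask].mean()
```

The same indexing works with RectMask, CircleMask, and DiamondMask instances, since they are ndarrays.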
Filters#
- sdt.image.cg(image, feature_radius, noise_radius=1, nonneg=True)[source]#
Remove background using a bandpass filter according to Crocker & Grier
Convolve with the kernel

\[K(i, j) = \frac{1}{K_0} \left[\frac{1}{B} \exp\left(-\frac{i^2 + j^2}{4\lambda^2}\right) - \frac{1}{(2w+1)^2}\right],\]

where \(w\) is feature_radius, \(\lambda\) is noise_radius, and \(B, K_0\) are normalization constants

\[B = \left[\sum_{i=-w}^w \exp\left(-\frac{i^2}{4\lambda^2}\right)\right]^2,\]

\[K_0 = \frac{1}{B} \left[\sum_{i=-w}^w \exp\left(-\frac{i^2}{2\lambda^2}\right)\right]^2 - \frac{B}{(2w+1)^2}.\]

The first term in \(K\) performs Gaussian smoothing; the second is a boxcar filter that removes long-range fluctuations.
The algorithm has been described in [Croc1996].
This is a slicerator.pipeline(), meaning that it can be applied to single images or image sequences (as long as they are of type slicerator.Slicerator).
- Parameters:
image (numpy.ndarray) – image data
feature_radius (int) – This should be a number a little greater than the radius of the peaks.
noise_radius (float, optional) – Noise correlation length in pixels. Defaults to 1.
nonneg (bool, optional) – If True, clip values of the filtered image to [0, infinity). Defaults to True.
- Returns:
Bandpass filtered image (sequence)
- Return type:
Slicerator or numpy.ndarray
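The kernel formula above can be written down directly in NumPy. The sketch below (the name `cg_kernel` is my own, not part of sdt.image) only illustrates the formula; the actual implementation may differ, e.g. by using separable convolutions:

```python
import numpy as np

def cg_kernel(feature_radius, noise_radius=1.0):
    """Crocker & Grier bandpass kernel K(i, j) from the formula above.

    Illustrative sketch only; not the sdt.image implementation.
    """
    w, lam = feature_radius, noise_radius
    i = np.arange(-w, w + 1)
    # Normalization constants B and K_0 as defined above
    B = np.exp(-i**2 / (4 * lam**2)).sum() ** 2
    K0 = np.exp(-i**2 / (2 * lam**2)).sum() ** 2 / B - B / (2 * w + 1) ** 2
    ii, jj = np.meshgrid(i, i, indexing="ij")
    # Gaussian smoothing term minus boxcar term, normalized by K_0
    return (np.exp(-(ii**2 + jj**2) / (4 * lam**2)) / B
            - 1 / (2 * w + 1) ** 2) / K0
```

Note that the Gaussian and boxcar terms each sum to 1 over the kernel support, so the kernel itself sums to zero: convolving a constant background with it yields zero, which is exactly the bandpass behavior.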
- sdt.image.cg_bg(image, *args, **kwargs)[source]#
Estimate background using bandpass filter according to Crocker & Grier
This returns image - cg(image, *args, **kwargs). See the cg() documentation for details.

This is a sdt.helper.pipeline(), meaning that it can be applied to single images or image sequences (as long as they are of type sdt.helper.Slicerator).
- sdt.image.wavelet(image, *args, **kwargs)[source]#
Remove the background using wavelets
This returns image - wavelet_bg(image, *args, **kwargs). See the wavelet_bg() documentation for details.

This is a sdt.helper.pipeline(), meaning that it can be applied to single images or image sequences (as long as they are of type sdt.helper.Slicerator).
- sdt.image.wavelet_bg(image, feat_thresh, feat_mask=None, wtype='db4', wlevel=3, initial={}, ext_mode='smooth', max_iterations=20, detail=0, conv_threshold=0.005, no_conv='raise')[source]#
Estimate the background using wavelets
This is an implementation of the algorithm described in [Galloway2009] for 2D image data. It works by iteratively estimating the background using wavelet approximations and removing feature (non-background) data from the estimate.
A NoConvergence exception is raised if the estimate does not converge within max_iterations.

This is a sdt.helper.pipeline(), meaning that it can be applied to single images or image sequences (as long as they are of type sdt.helper.Slicerator).

[Galloway2009] Galloway, C. M. et al.: “An iterative algorithm for background removal in spectroscopy by wavelet transforms”. Appl Spectrosc, 2009, 63, 1370-1376
- Parameters:
image (Slicerator or numpy.ndarray) – Image data. Either a sequence (Slicerator) or a single image.
feat_thresh (float) – Consider everything that is brighter than the estimated background by at least feat_thresh a feature and do not include it in the next iteration of the background estimation process.
feat_mask (int or numpy.ndarray or None, optional) – Setting feat_thresh rather high will most probably cause unwanted effects around the edges of a feature (since those, although above background, will be treated as background). feat_mask can be used to enlarge the regions considered occupied by a feature by dilation. If feat_mask is None, no dilation is performed. If it is an int, a circular mask with radius feat_mask is used for dilation. If it is an array, it is used as the dilation mask directly. Defaults to None.
initial (dict) – Parameters for the wavelet transform of the initial background guess. The dict may contain “wtype”, “wlevel”, “ext_mode”, and “detail” keys. If any of those is not given, use the corresponding parameters from the function call.
wtype (str or pywt.Wavelet, optional) – Wavelet type. See pywt.wavelist() for available wavelet types. Defaults to “db4”.
wlevel (int, optional) – Wavelet decomposition level. The maximum level depends on the wavelet type. See pywt.dwt_max_level() for details. Defaults to 3.
ext_mode (str, optional) – Signal extension mode for wavelet de/recomposition. Refer to the pywavelets documentation for details. Defaults to “smooth”.
max_iterations (int, optional) – Maximum number of cycles of background estimation and removing features from the estimate. Defaults to 20.
detail (int, optional) – Number of wavelet detail coefficients to retain in the background estimate. Defaults to 0, i. e. only use approximation coefficients.
conv_threshold (float, optional) – If the relative difference of estimates between two iterations is less than conv_threshold, consider the result converged and return it. If this does not happen within max_iterations, raise a NoConvergence exception instead (with the last result as its last_result attribute).
no_conv ({"raise", "ignore"} or number, optional) – What to do if the result does not converge. “raise” will raise a NoConvergence exception. If a number is passed, construct an array of the same type and shape as the input that is filled with that number.
- Returns:
Slicerator or numpy.ndarray – Estimate for the background
- Raises:
NoConvergence – when the estimate did not converge within max_iterations and no_conv="raise" was passed.
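The iterate-and-mask scheme described above can be sketched without wavelets. In the following toy version (the name `iterative_bg` and the box-blur smoother are my own stand-ins; the real function uses pywt-based wavelet approximations), only the control flow of the [Galloway2009] algorithm is illustrated:

```python
import numpy as np

def iterative_bg(img, feat_thresh, smooth=5, max_iterations=20,
                 conv_threshold=0.005):
    """Toy version of the iterate-and-mask scheme used by wavelet_bg().

    A box blur stands in for the wavelet approximation, so this only
    illustrates the algorithm's structure, not its actual output.
    """
    def box_blur(a, size):
        # Edge-padded separable box blur
        p = size // 2
        padded = np.pad(a, p, mode="edge")
        k = np.ones(size) / size
        out = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="valid"), 0, padded)
        return np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="valid"), 1, out)

    work = np.asarray(img, dtype=float).copy()
    bg = box_blur(work, smooth)
    for _ in range(max_iterations):
        # Replace feature pixels (well above background) by the estimate,
        # so they do not bias the next background iteration.
        feat = work - bg > feat_thresh
        work[feat] = bg[feat]
        new_bg = box_blur(work, smooth)
        if np.abs(new_bg - bg).sum() <= conv_threshold * np.abs(bg).sum():
            return new_bg
        bg = new_bg
    return bg  # the real function would raise NoConvergence here
```

On a flat image with one bright spike, the spike is masked out after the first iteration and the estimate settles near the true background level.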
Thresholding#
- sdt.image.adaptive_thresh(img, block_size, c, smooth=1.5, method='mean')[source]#
Generate binary mask from image using adaptive thresholding
The image will be smoothed using a Gaussian blur. The mask is then calculated using cv2.adaptiveThreshold() from the OpenCV package.
- Parameters:
img (array-like) – Image data
block_size (int) – 2 * block_size + 1 is passed as the block_size parameter to cv2.adaptiveThreshold() (as it has to be an odd number).
c (float) – Passed as the C parameter to cv2.adaptiveThreshold().
smooth (float, optional) – Gaussian smoothing radius. Set to 0 to disable. Defaults to 1.5.
method ({"gaussian", "mean"}, optional) – Adaptive method. Defaults to “mean” (i.e., cv2.ADAPTIVE_THRESH_MEAN_C).
- Returns:
Boolean mask image
- Return type:
numpy.ndarray, dtype(bool)
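The idea behind mean adaptive thresholding can be shown without OpenCV. The sketch below (the name `adaptive_thresh_sketch` is hypothetical; it omits the Gaussian pre-smoothing and delegates nothing to cv2) marks a pixel True when it exceeds the mean of its neighborhood minus c:

```python
import numpy as np

def adaptive_thresh_sketch(img, block_size, c):
    """Pure-NumPy illustration of mean adaptive thresholding:
    True where a pixel is brighter than its local mean minus c.
    """
    size = 2 * block_size + 1
    k = np.ones(size) / size
    # Separable box filter gives the local mean of each pixel's
    # (2*block_size+1)^2 neighborhood (zero-padded at the edges).
    local_mean = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 0, img.astype(float))
    local_mean = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, local_mean)
    return img > local_mean - c
```

Because the threshold is local, isolated bright spots are detected even if the global brightness varies across the image.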
- sdt.image.otsu_thresh(img, factor=1.0, smooth=1.5)[source]#
Generate binary mask from image using Otsu’s binarization
The image will be smoothed using a Gaussian blur. Otsu’s method is used to calculate a global threshold, which is then applied to the image data.
- Parameters:
img (array-like) – Image data
factor (float) – Multiply the result of Otsu’s method with this factor to adjust the threshold.
smooth (float, optional) – Gaussian smoothing radius. Set to 0 to disable. Defaults to 1.5.
- Returns:
Boolean mask image
- Return type:
numpy.ndarray, dtype(bool)
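Otsu's method picks the global threshold that maximizes the between-class variance of the intensity histogram. A plain-NumPy sketch of that computation (the name `otsu_threshold` is my own; the real otsu_thresh() additionally smooths the image and applies the factor parameter):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the Otsu threshold of an intensity image (sketch)."""
    hist, edges = np.histogram(img, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # pixel count at or below each bin
    w1 = hist.sum() - w0                 # pixel count above each bin
    s0 = np.cumsum(hist * mids)          # intensity sums per class
    s1 = (hist * mids).sum() - s0
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = s0 / w0                     # class means
        m1 = s1 / w1
        between = w0 * w1 * (m0 - m1) ** 2
    return mids[np.nanargmax(between)]   # split with max between-class variance
```

For a clearly bimodal image the returned threshold falls between the two intensity clusters.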
- sdt.image.percentile_thresh(img, percentile, smooth=1.5)[source]#
Generate binary mask with a threshold based on a percentile
The image will be smoothed using a Gaussian blur. Calculate the desired percentile as a global threshold, which is then applied to the image data.
- Parameters:
img (array-like) – Image data
percentile (float) – Percentile to be used as a threshold.
smooth (float, optional) – Gaussian smoothing radius. Set to 0 to disable. Defaults to 1.5.
- Returns:
Boolean mask image
- Return type:
numpy.ndarray, dtype(bool)
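Percentile thresholding amounts to a global threshold taken from the image's own intensity distribution. An equivalent NumPy sketch (without the Gaussian pre-smoothing that percentile_thresh() performs):

```python
import numpy as np

# Keep only the brightest 25 % of pixels by thresholding at the
# 75th intensity percentile.
img = np.arange(100, dtype=float).reshape(10, 10)
mask = img > np.percentile(img, 75)
```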
Masks#
- class sdt.image.RectMask(ext, shape=None)[source]#
Boolean array representing a rectangular mask
- Parameters:
ext (Tuple[int, ...]) – Extension (shape) of the masked rectangle
shape (Tuple[int, ...] | None) – Shape of the resulting array. If this is larger than ext, the mask will be centered in the array. By default, the smallest possible size is chosen.
- Return type:
ndarray
- class sdt.image.CircleMask(radius, extra=0.0, shape=None)[source]#
Boolean array representing a circular mask
True for all coordinates \(x, y\) where \(x^2 + y^2 <= (r+e)^2\), where \(r\) is the radius argument to the constructor and \(e\) the extra argument. The origin (i. e. where \(x, y\) are 0) is the center of the image.
Examples
>>> CircleMask(2)
array([[False, False,  True, False, False],
       [False,  True,  True,  True, False],
       [ True,  True,  True,  True,  True],
       [False,  True,  True,  True, False],
       [False, False,  True, False, False]], dtype=bool)
>>> CircleMask(2, 0.5)
array([[False,  True,  True,  True, False],
       [ True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True],
       [False,  True,  True,  True, False]], dtype=bool)
- Parameters:
radius (int) – Circle radius and mask size. The shape of the created array is (2*radius+1, 2*radius+1).
extra (float) – Add extra to the radius before determining which coordinates are inside the circle. Defaults to 0.
shape (Tuple[int, int] | None) – Shape of the resulting array. If this is larger than the mask, it will be centered in the array. By default, the smallest possible size is chosen.
- Return type:
ndarray
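The CircleMask definition maps directly to a short NumPy expression. This sketch (the helper name `circle_mask` is my own) reproduces the docstring examples above:

```python
import numpy as np

def circle_mask(radius, extra=0.0):
    """True where x^2 + y^2 <= (radius + extra)^2, with the origin at
    the center of a (2*radius+1, 2*radius+1) array."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= (radius + extra)**2
```

The extra parameter widens the circle without enlarging the array, as in the CircleMask(2, 0.5) example.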
- class sdt.image.DiamondMask(radius, extra=0.0, shape=None)[source]#
Boolean array representing a diamond-shaped (rotated square) mask
True for all coordinates \(x, y\) where \(|x| + |y| \le r + e\), where \(r\) is the radius argument to the constructor and \(e\) the extra argument. The origin (i.e. where \(x, y\) are 0) is the center of the image.
Examples
>>> DiamondMask(2)
array([[False, False,  True, False, False],
       [False,  True,  True,  True, False],
       [ True,  True,  True,  True,  True],
       [False,  True,  True,  True, False],
       [False, False,  True, False, False]])
- Parameters:
radius (int) – Diamond radius and mask size. The shape of the created array is (2 * radius + 1, 2 * radius + 1).
extra (float) – Add extra to the radius before determining which coordinates are inside the diamond. Defaults to 0.
shape (Tuple[int, int] | None) – Shape of the resulting array. If this is larger than the mask, it will be centered in the array. By default, the smallest possible size is chosen.
- Return type:
ndarray
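The diamond condition \(|x| + |y| \le r + e\) likewise reduces to one NumPy expression (helper name `diamond_mask` is my own), matching the DiamondMask(2) example above:

```python
import numpy as np

def diamond_mask(radius, extra=0.0):
    """True where |x| + |y| <= radius + extra, origin at the array center."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return np.abs(x) + np.abs(y) <= radius + extra
```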
Utilities#
- sdt.image.fill_gamut(img, dtype=None)[source]#
Scale image values to fill datatype range
Sets the lowest value to 0 and the highest to the maximum value the dtype allows, or to 1.0 if dtype is a floating point type.
- Parameters:
img (array-like) – Image data
dtype (numpy.dtype or None, optional) – dtype of the output array. The image will be scaled to fill the value range of the type. E.g., if dtype=numpy.uint8, the resulting image will take values between 0 and 255. If None, use img.dtype. Defaults to None.
- Returns:
Scaled image with dtype as data type.
- Return type:
numpy.ndarray
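The documented rescaling can be sketched in a few lines of NumPy (the name `fill_gamut_sketch` is hypothetical; constant images, which would divide by zero, are not handled here):

```python
import numpy as np

def fill_gamut_sketch(img, dtype=None):
    """Scale img so its minimum maps to 0 and its maximum to the top
    of the target dtype's range (1.0 for floating point types)."""
    dtype = np.dtype(dtype if dtype is not None else img.dtype)
    top = 1.0 if dtype.kind == "f" else np.iinfo(dtype).max
    scaled = (img - img.min()) / (img.max() - img.min()) * top
    return scaled.astype(dtype)
```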
- sdt.image.center(obj, shape, fill_value=0)[source]#
Center an image in an array of different size
If the new shape is larger, the image will be padded, otherwise it will be cropped.
- Parameters:
obj (ndarray) – Image array
shape (Tuple[int, ...]) – Output shape
fill_value (Any) – Value to use for padding
- Returns:
New array with obj centered.
- Return type:
numpy.ndarray
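The pad-or-crop-per-axis behavior can be sketched as follows (the name `center_sketch` is my own; it handles each axis independently, padding when the target is larger and cropping the center when it is smaller):

```python
import numpy as np

def center_sketch(obj, shape, fill_value=0):
    """Pad or crop obj so it sits centered in an array of the given shape."""
    out = np.full(shape, fill_value, dtype=obj.dtype)
    src, dst = [], []
    for o, n in zip(obj.shape, shape):
        if n >= o:                        # pad: place obj inside out
            start = (n - o) // 2
            dst.append(slice(start, start + o))
            src.append(slice(None))
        else:                             # crop: take the center of obj
            start = (o - n) // 2
            dst.append(slice(None))
            src.append(slice(start, start + n))
    out[tuple(dst)] = obj[tuple(src)]
    return out
```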
References
[Croc1996] Crocker, J. C. & Grier, D. G.: “Methods of digital video microscopy for colloidal studies”. J. Colloid Interface Sci., 1996, 179, 298-310