class skimage.measure.CircleModel

Bases: skimage.measure.fit.BaseModel
Total least squares estimator for 2D circles.
The functional model of the circle is:
r**2 = (x - xc)**2 + (y - yc)**2
This estimator minimizes the squared distances from all points to the circle:
min{ sum((r - sqrt((x_i - xc)**2 + (y_i - yc)**2))**2) }
A minimum number of 3 points is required to solve for the parameters.
Attributes

params : tuple
    Circle model parameters in the following order xc, yc, r.
estimate(data)

Estimate circle model from data using total least squares.

Parameters:
    data : (N, 2) array
predict_xy(t, params=None)

Predict x- and y-coordinates using the estimated model.

Parameters:
    t : array
    params : (3, ) array, optional

Returns:
    xy : (..., 2) array
residuals(data)

Determine residuals of data to model.

For each point the shortest distance to the circle is returned.

Parameters:
    data : (N, 2) array

Returns:
    residuals : (N, ) array
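Examples

A minimal usage sketch (an added illustration, not from the original docstring; it assumes CircleModel is importable from skimage.measure, output formatting may vary, and some versions also return a success flag from estimate):

>>> import numpy as np
>>> from skimage.measure import CircleModel
>>> t = np.linspace(0, 2 * np.pi, 25)
>>> # noise-free points on the circle xc=10, yc=20, r=3
>>> data = np.column_stack([10 + 3 * np.cos(t), 20 + 3 * np.sin(t)])
>>> model = CircleModel()
>>> model.estimate(data)
>>> np.round(model.params, 2)   # recovers (xc, yc, r) for exact data
array([ 10.,  20.,   3.])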
class skimage.measure.EllipseModel

Bases: skimage.measure.fit.BaseModel
Total least squares estimator for 2D ellipses.
The functional model of the ellipse is:
xt = xc + a*cos(theta)*cos(t) - b*sin(theta)*sin(t)
yt = yc + a*sin(theta)*cos(t) + b*cos(theta)*sin(t)
d = sqrt((x - xt)**2 + (y - yt)**2)
where (xt, yt) is the closest point on the ellipse to (x, y). Thus d is the shortest distance from the point to the ellipse.
This estimator minimizes the squared distances from all points to the ellipse:
min{ sum(d_i**2) } = min{ sum((x_i - xt)**2 + (y_i - yt)**2) }
Thus you have 2 * N equations (x_i, y_i) for N + 5 unknowns (t_i, xc, yc, a, b, theta), which gives you an effective redundancy of N - 5.
The params attribute contains the parameters in the following order:
xc, yc, a, b, theta
A minimum number of 5 points is required to solve for the parameters.
Attributes

params : tuple
    Ellipse model parameters in the following order xc, yc, a, b, theta.

estimate(data)

Estimate ellipse model from data using total least squares.

Parameters:
    data : (N, 2) array
predict_xy(t, params=None)

Predict x- and y-coordinates using the estimated model.

Parameters:
    t : array
    params : (5, ) array, optional

Returns:
    xy : (..., 2) array
residuals(data)

Determine residuals of data to model.

For each point the shortest distance to the ellipse is returned.

Parameters:
    data : (N, 2) array

Returns:
    residuals : (N, ) array
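Examples

A short added sketch of generating points on an ellipse with predict_xy, passing parameters explicitly (the parameter values are illustrative, not from the original docstring):

>>> import numpy as np
>>> from skimage.measure import EllipseModel
>>> t = np.linspace(0, 2 * np.pi, 25)
>>> # params = (xc, yc, a, b, theta)
>>> xy = EllipseModel().predict_xy(t, params=(10, 15, 4, 8, np.deg2rad(30)))
>>> xy.shape
(25, 2)

See also the ransac example below, which fits an EllipseModel to noisy data.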
class skimage.measure.LineModel

Bases: skimage.measure.fit.BaseModel
Total least squares estimator for 2D lines.
Lines are parameterized using polar coordinates as functional model:
dist = x * cos(theta) + y * sin(theta)
This parameterization is able to model vertical lines in contrast to the standard line model y = a*x + b.
This estimator minimizes the squared distances from all points to the line:
min{ sum((dist - (x_i * cos(theta) + y_i * sin(theta)))**2) }
A minimum number of 2 points is required to solve for the parameters.
Attributes

params : tuple
    Line model parameters in the following order dist, theta.
estimate(data)

Estimate line model from data using total least squares.

Parameters:
    data : (N, 2) array
predict_x(y, params=None)

Predict x-coordinates using the estimated model.

Parameters:
    y : array
    params : (2, ) array, optional

Returns:
    x : array
predict_y(x, params=None)

Predict y-coordinates using the estimated model.

Parameters:
    x : array
    params : (2, ) array, optional

Returns:
    y : array
residuals(data)

Determine residuals of data to model.

For each point the shortest distance to the line is returned.

Parameters:
    data : (N, 2) array

Returns:
    residuals : (N, ) array
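Examples

A minimal added sketch (it assumes LineModel is importable from skimage.measure; some versions also return a success flag from estimate):

>>> import numpy as np
>>> from skimage.measure import LineModel
>>> x = np.arange(10, dtype=np.double)
>>> y = 2 * x + 1   # exact collinear data, so the fit is (nearly) perfect
>>> model = LineModel()
>>> model.estimate(np.column_stack([x, y]))
>>> np.allclose(model.predict_y(x), y)
True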
skimage.measure.approximate_polygon(coords, ...)
    Approximate a polygonal chain with the specified tolerance.
skimage.measure.block_reduce(image, block_size)
    Down-sample image by applying function to local blocks.
skimage.measure.correct_mesh_orientation(...)
    Correct orientations of mesh faces.
skimage.measure.find_contours(array, level)
    Find iso-valued contours in a 2D array for a given level value.
skimage.measure.marching_cubes(volume, level)
    Marching cubes algorithm to find iso-valued surfaces in 3D volumetric data.
skimage.measure.mesh_surface_area(verts, faces)
    Compute surface area, given vertices and triangular faces.
skimage.measure.moments
    Calculate all raw image moments up to a certain order.
skimage.measure.moments_central
    Calculate all central image moments up to a certain order.
skimage.measure.moments_hu
    Calculate Hu’s set of image moments.
skimage.measure.moments_normalized
    Calculate all normalized central image moments up to a certain order.
skimage.measure.perimeter(image[, neighbourhood])
    Calculate total perimeter of all objects in binary image.
skimage.measure.profile_line(img, src, dst)
    Return the intensity profile of an image measured along a scan line.
skimage.measure.ransac(data, model_class, ...)
    Fit a model to data with the RANSAC (random sample consensus) algorithm.
skimage.measure.regionprops(label_image[, ...])
    Measure properties of labeled image regions.
skimage.measure.structural_similarity(X, Y)
    Compute the mean structural similarity index between two images.
skimage.measure.subdivide_polygon(coords[, ...])
    Subdivision of polygonal curves using B-Splines.
Approximate a polygonal chain with the specified tolerance.
It is based on the Douglas-Peucker algorithm.
Note that the approximated polygon is always within the convex hull of the original polygon.
Parameters:
    coords : (N, 2) array
    tolerance : float

Returns:
    coords : (M, 2) array
References
[R224] http://en.wikipedia.org/wiki/Ramer-Douglas-Peucker_algorithm
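Examples

A small added sketch of the Douglas-Peucker behavior: the near-collinear middle vertex falls within the tolerance and is dropped (exact array formatting may vary by numpy version):

>>> import numpy as np
>>> from skimage.measure import approximate_polygon
>>> coords = np.array([[0, 0], [0.1, 5], [0, 10], [10, 10]])
>>> approximate_polygon(coords, tolerance=0.5)
array([[  0.,   0.],
       [  0.,  10.],
       [ 10.,  10.]])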
Down-sample image by applying function to local blocks.
Parameters:
    image : ndarray
    block_size : array_like
    func : callable
    cval : float

Returns:
    image : ndarray
|
Examples
>>> from skimage.measure import block_reduce
>>> image = np.arange(3*3*4).reshape(3, 3, 4)
>>> image
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]],

       [[24, 25, 26, 27],
        [28, 29, 30, 31],
        [32, 33, 34, 35]]])
>>> block_reduce(image, block_size=(3, 3, 1), func=np.mean)
array([[[ 16.,  17.,  18.,  19.]]])
>>> image_max1 = block_reduce(image, block_size=(1, 3, 4), func=np.max)
>>> image_max1
array([[[11]],

       [[23]],

       [[35]]])
>>> image_max2 = block_reduce(image, block_size=(3, 1, 4), func=np.max)
>>> image_max2
array([[[27],
        [31],
        [35]]])
Correct orientations of mesh faces.
Parameters:
    volume : (M, N, P) array of doubles
    verts : (V, 3) array of floats
    faces : (F, 3) array of ints
    spacing : length-3 tuple of floats
    gradient_direction : string

Returns:
    faces_corrected : (F, 3) array of ints
Notes
Certain applications and mesh processing algorithms require all faces to be oriented in a consistent way. Generally, this means a normal vector points “out” of the meshed shapes. This algorithm corrects the output from skimage.measure.marching_cubes by flipping the orientation of mis-oriented faces.
Because marching cubes could be used to find isosurfaces either on gradient descent (where the desired object has greater values than the exterior) or ascent (where the desired object has lower values than the exterior), the gradient_direction kwarg allows the user to inform this algorithm which is correct. If the resulting mesh appears to be oriented completely incorrectly, try changing this option.
The arguments expected by this function are the exact outputs from skimage.measure.marching_cubes. Only faces is corrected and returned, as the vertices do not change; only the order in which they are referenced.
This algorithm assumes faces provided are all triangles.
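A hedged sketch of the intended pipeline (myvolume stands for an existing 3D array, as in the marching_cubes example below):

>>> from skimage.measure import marching_cubes, correct_mesh_orientation
>>> verts, faces = marching_cubes(myvolume, 0.0)
>>> faces = correct_mesh_orientation(myvolume, verts, faces)  # verts unchanged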
Find iso-valued contours in a 2D array for a given level value.
Uses the “marching squares” method to compute the iso-valued contours of the input 2D array for a particular level value. Array values are linearly interpolated to provide better precision for the output contours.
Parameters:
    array : 2D ndarray of double
    level : float
    fully_connected : str, {‘low’, ‘high’}
    positive_orientation : either ‘low’ or ‘high’

Returns:
    contours : list of (n, 2)-ndarrays
Notes
The marching squares algorithm is a special case of the marching cubes algorithm [R225]. A simple explanation is available here:
http://www.essi.fr/~lingrand/MarchingCubes/algo.html
There is a single ambiguous case in the marching squares algorithm: when a given 2 x 2-element square has two high-valued and two low-valued elements, each pair diagonally adjacent. (Where high- and low-valued is with respect to the contour value sought.) In this case, either the high-valued elements can be ‘connected together’ via a thin isthmus that separates the low-valued elements, or vice-versa. When elements are connected together across a diagonal, they are considered ‘fully connected’ (also known as ‘face+vertex-connected’ or ‘8-connected’). Only high-valued or low-valued elements can be fully-connected; the other set will be considered as ‘face-connected’ or ‘4-connected’. By default, low-valued elements are considered fully-connected; this can be altered with the ‘fully_connected’ parameter.
Output contours are not guaranteed to be closed: contours which intersect the array edge will be left open. All other contours will be closed. (The closed-ness of a contour can be tested by checking whether the beginning point is the same as the end point.)
Contours are oriented. By default, array values lower than the contour value are to the left of the contour and values greater than the contour value are to the right. This means that contours will wind counter-clockwise (i.e. in ‘positive orientation’) around islands of low-valued pixels. This behavior can be altered with the ‘positive_orientation’ parameter.
The order of the contours in the output list is determined by the position of the smallest x,y (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon.
Warning
Array coordinates/values are assumed to refer to the center of the array element. Take a simple example input: [0, 1]. The interpolated position of 0.5 in this array is midway between the 0-element (at x=0) and the 1-element (at x=1), and thus would fall at x=0.5.
This means that to find reasonable contours, it is best to find contours midway between the expected “light” and “dark” values. In particular, given a binarized array, do not choose to find contours at the low or high value of the array. This will often yield degenerate contours, especially around structures that are a single array element wide. Instead choose a middle value, as above.
References
[R225] Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4), July 1987, pp. 163-170.
Examples
>>> a = np.zeros((3, 3))
>>> a[0, 0] = 1
>>> a
array([[ 1.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
>>> find_contours(a, 0.5)
[array([[ 0. ,  0.5],
       [ 0.5,  0. ]])]
Marching cubes algorithm to find iso-valued surfaces in 3D volumetric data.
Parameters:
    volume : (M, N, P) array of doubles
    level : float
    spacing : length-3 tuple of floats

Returns:
    verts : (V, 3) array
    faces : (F, 3) array
Notes
The marching cubes algorithm is implemented as described in [R226]. A simple explanation is available here:
http://www.essi.fr/~lingrand/MarchingCubes/algo.html
There are several known ambiguous cases in the marching cubes algorithm. Using point labeling as in [R226], Figure 4, as shown:
    v8 ------ v7
   / |       / |        y
  /  |      /  |        ^  z
v4 ------ v3   |        | /
 |  v5 ----|- v6        |/          (note: NOT right handed!)
 |  /      | /           ----> x
 | /       |/
v1 ------ v2
Most notably, if v4, v8, v2, and v6 are all >= level (or any generalization of this case) two parallel planes are generated by this algorithm, separating v4 and v8 from v2 and v6. An equally valid interpretation would be a single connected thin surface enclosing all four points. This is the best known ambiguity, though there are others.
This algorithm does not attempt to resolve such ambiguities; it is a naive implementation of marching cubes as in [R226], but may be a good beginning for work with more recent techniques (Dual Marching Cubes, Extended Marching Cubes, Cubic Marching Squares, etc.).
Because of interactions between neighboring cubes, the isosurface(s) generated by this algorithm are NOT guaranteed to be closed, particularly for complicated contours. Furthermore, this algorithm does not guarantee a single contour will be returned. Indeed, ALL isosurfaces which cross level will be found, regardless of connectivity.
The output is a triangular mesh consisting of a set of unique vertices and connecting triangles. The order of these vertices and triangles in the output list is determined by the position of the smallest x,y,z (in lexicographical order) coordinate in the contour. This is a side-effect of how the input array is traversed, but can be relied upon.
The generated mesh does not guarantee coherent orientation because of how symmetry is used in the algorithm. If this is required, e.g. due to a particular visualization package or for generating 3D printing STL files, the utility skimage.measure.correct_mesh_orientation is available to fix this in post-processing.
To quantify the area of an isosurface generated by this algorithm, pass the outputs directly into skimage.measure.mesh_surface_area.
Regarding visualization of algorithm output, the mayavi package is recommended. To contour a volume named myvolume about the level 0.0:
>>> from mayavi import mlab
>>> verts, faces = marching_cubes(myvolume, 0.0, (1., 1., 2.))
>>> mlab.triangular_mesh([vert[0] for vert in verts],
... [vert[1] for vert in verts],
... [vert[2] for vert in verts],
... faces)
>>> mlab.show()
References
[R226] Lorensen, William and Harvey E. Cline. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics (SIGGRAPH 87 Proceedings) 21(4), July 1987, pp. 163-170.
Compute surface area, given vertices and triangular faces.
Parameters:
    verts : (V, 3) array of floats
    faces : (F, 3) array of ints

Returns:
    area : float
Notes
The arguments expected by this function are the exact outputs from skimage.measure.marching_cubes. For unit correct output, ensure correct spacing was passed to skimage.measure.marching_cubes.
This algorithm works properly only if the faces provided are all triangles.
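Examples

A minimal added sketch: a single right triangle with unit legs has area 0.5 (not from the original docstring):

>>> import numpy as np
>>> from skimage.measure import mesh_surface_area
>>> verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=np.double)
>>> faces = np.array([[0, 1, 2]])
>>> mesh_surface_area(verts, faces)
0.5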
Calculate all raw image moments up to a certain order.
Note that raw moments are neither translation, scale nor rotation invariant.
Parameters:
    image : 2D double array
    order : int, optional

Returns:
    m : (order + 1, order + 1) array
References
[R227] Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
[R228] B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
[R229] T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
[R230] http://en.wikipedia.org/wiki/Image_moment
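Examples

A minimal added sketch: the centroid follows from the zeroth- and first-order raw moments. The region is a symmetric 4 x 4 block, so both centroid coordinates are 14.5 regardless of index convention:

>>> import numpy as np
>>> from skimage.measure import moments
>>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> m = moments(image)
>>> m[0, 1] / m[0, 0], m[1, 0] / m[0, 0]   # centroid from first-order moments
(14.5, 14.5)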
Calculate all central image moments up to a certain order.
Note that central moments are translation invariant but not scale and rotation invariant.
Parameters:
    image : 2D double array
    cr : double
    cc : double
    order : int, optional

Returns:
    mu : (order + 1, order + 1) array
References
[R231] Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
[R232] B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
[R233] T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
[R234] http://en.wikipedia.org/wiki/Image_moment
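Examples

Continuing the sketch from moments above: centering on the centroid makes the zeroth moment equal to the region's area (an added illustration):

>>> import numpy as np
>>> from skimage.measure import moments_central
>>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> mu = moments_central(image, 14.5, 14.5)
>>> mu[0, 0]   # zeroth central moment == area of the 4 x 4 block
16.0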
Calculate Hu’s set of image moments.
Note that this set of moments is proven to be translation, scale and rotation invariant.
Parameters:
    nu : (M, M) array

Returns:
    nu : (7, 1) array
References
[R235] M. K. Hu, “Visual Pattern Recognition by Moment Invariants”, IRE Trans. Info. Theory, vol. IT-8, pp. 179-187, 1962.
[R236] Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
[R237] B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
[R238] T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
[R239] http://en.wikipedia.org/wiki/Image_moment
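Examples

An added sketch demonstrating translation invariance: the Hu moments of a region match those of a shifted copy. The hu helper is illustrative, not part of the API, and the diagonal shift keeps the block symmetric so the centroid index convention does not matter:

>>> import numpy as np
>>> from skimage.measure import (moments, moments_central,
...                              moments_normalized, moments_hu)
>>> def hu(img):
...     m = moments(img)
...     cr, cc = m[0, 1] / m[0, 0], m[1, 0] / m[0, 0]
...     return moments_hu(moments_normalized(moments_central(img, cr, cc)))
>>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> shifted = np.roll(np.roll(image, -5, axis=0), -5, axis=1)
>>> np.allclose(hu(image), hu(shifted))
True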
Calculate all normalized central image moments up to a certain order.
Note that normalized central moments are translation and scale invariant but not rotation invariant.
Parameters:
    mu : (M, M) array
    order : int, optional

Returns:
    nu : (order + 1, order + 1) array
References
[R240] Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
[R241] B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
[R242] T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
[R243] http://en.wikipedia.org/wiki/Image_moment
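Examples

A small added check of the normalization formula for a second-order moment, reusing the block image from above; (i+j)/2 + 1 equals 2 when i + j = 2:

>>> import numpy as np
>>> from skimage.measure import moments_central, moments_normalized
>>> image = np.zeros((20, 20), dtype=np.double)
>>> image[13:17, 13:17] = 1
>>> mu = moments_central(image, 14.5, 14.5)
>>> nu = moments_normalized(mu)
>>> np.isclose(nu[0, 2], mu[0, 2] / mu[0, 0] ** 2)
True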
Calculate total perimeter of all objects in binary image.
Parameters:
    image : array
    neighbourhood : 4 or 8, optional

Returns:
    perimeter : float
References
[R244] K. Benkrid, D. Crookes. Design and FPGA Implementation of a Perimeter Estimator. The Queen’s University of Belfast. http://www.cs.qub.ac.uk/~d.crookes/webpubs/papers/perimeter.doc
Return the intensity profile of an image measured along a scan line.
Parameters:
    img : numeric array, shape (M, N[, C])
    src : 2-tuple of numeric scalar (float or int)
    dst : 2-tuple of numeric scalar (float or int)
    linewidth : int, optional
    order : int in {0, 1, 2, 3, 4, 5}, optional
    mode : string, one of {‘constant’, ‘nearest’, ‘reflect’, ‘wrap’}, optional
    cval : float, optional

Returns:
    return_value : array
Notes
The destination point is included in the profile, in contrast to standard numpy indexing.
Examples
>>> x = np.array([[1, 1, 1, 2, 2, 2]])
>>> img = np.vstack([np.zeros_like(x), x, x, x, np.zeros_like(x)])
>>> img
array([[0, 0, 0, 0, 0, 0],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [1, 1, 1, 2, 2, 2],
       [0, 0, 0, 0, 0, 0]])
>>> profile_line(img, (2, 1), (2, 4))
array([ 1., 1., 2., 2.])
Fit a model to data with the RANSAC (random sample consensus) algorithm.
RANSAC is an iterative algorithm for the robust estimation of parameters from a subset of inliers from the complete data set. Each iteration performs the following tasks:

1. Select min_samples random samples from the original data and check whether the set of data is valid (see is_data_valid).
2. Estimate a model on the random subset (model_class.estimate(*data[random_subset])) and check whether the estimated model is valid (see is_model_valid).
3. Classify all data as inliers or outliers by calculating the residuals to the estimated model (model_class.residuals(*data)); all data samples with residuals smaller than the residual_threshold are considered inliers.
4. Save the estimated model if it has the largest number of inliers.

These steps are performed either a maximum number of times or until one of the special stop criteria is met. The final model is estimated using all inlier samples of the previously determined best model.
Parameters:
    data : [list, tuple of] (N, D) array
    model_class : object
    min_samples : int
    residual_threshold : float
    is_data_valid : function, optional
    is_model_valid : function, optional
    max_trials : int, optional
    stop_sample_num : int, optional
    stop_residuals_sum : float, optional

Returns:
    model : object
    inliers : (N, ) array
References
[R245] “RANSAC”, Wikipedia, http://en.wikipedia.org/wiki/RANSAC
Examples
Generate ellipse data without tilt and add noise:
>>> t = np.linspace(0, 2 * np.pi, 50)
>>> a = 5
>>> b = 10
>>> xc = 20
>>> yc = 30
>>> x = xc + a * np.cos(t)
>>> y = yc + b * np.sin(t)
>>> data = np.column_stack([x, y])
>>> np.random.seed(seed=1234)
>>> data += np.random.normal(size=data.shape)
Add some faulty data:
>>> data[0] = (100, 100)
>>> data[1] = (110, 120)
>>> data[2] = (120, 130)
>>> data[3] = (140, 130)
Estimate ellipse model using all available data:
>>> model = EllipseModel()
>>> model.estimate(data)
>>> model.params
array([ -3.30354146e+03,  -2.87791160e+03,   5.59062118e+03,
         7.84365066e+00,   7.19203152e-01])
Estimate ellipse model using RANSAC:
>>> ransac_model, inliers = ransac(data, EllipseModel, 5, 3, max_trials=50)
>>> ransac_model.params
array([ 20.12762373, 29.73563063, 4.81499637, 10.4743584 , 0.05217117])
>>> inliers
array([False, False, False, False,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True], dtype=bool)
Robustly estimate geometric transformation:
>>> from skimage.transform import SimilarityTransform
>>> src = 100 * np.random.random((50, 2))
>>> model0 = SimilarityTransform(scale=0.5, rotation=1,
... translation=(10, 20))
>>> dst = model0(src)
>>> dst[0] = (10000, 10000)
>>> dst[1] = (-100, 100)
>>> dst[2] = (50, 50)
>>> model, inliers = ransac((src, dst), SimilarityTransform, 2, 10)
>>> inliers
array([False, False, False,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True,  True,  True,  True,  True,
        True,  True,  True,  True,  True], dtype=bool)
Measure properties of labeled image regions.
Parameters:
    label_image : (N, M) ndarray
    intensity_image : (N, M) ndarray, optional
    cache : bool, optional

Returns:
    properties : list
Notes
The following properties can be accessed as attributes or keys:
Spatial moments up to 3rd order:
m_ji = sum{ array(x, y) * x^j * y^i }
where the sum is over the x, y coordinates of the region.
Central moments (translation invariant) up to 3rd order:
mu_ji = sum{ array(x, y) * (x - x_c)^j * (y - y_c)^i }
where the sum is over the x, y coordinates of the region, and x_c and y_c are the coordinates of the region’s centroid.
Normalized moments (translation and scale invariant) up to 3rd order:
nu_ji = mu_ji / m_00^[(i+j)/2 + 1]
where m_00 is the zeroth spatial moment.
Spatial moments of intensity image up to 3rd order:
wm_ji = sum{ array(x, y) * x^j * y^i }
where the sum is over the x, y coordinates of the region.
Central moments (translation invariant) of intensity image up to 3rd order:
wmu_ji = sum{ array(x, y) * (x - x_c)^j * (y - y_c)^i }
where the sum is over the x, y coordinates of the region, and x_c and y_c are the coordinates of the region’s centroid.
Normalized moments (translation and scale invariant) of intensity image up to 3rd order:
wnu_ji = wmu_ji / wm_00^[(i+j)/2 + 1]
where wm_00 is the zeroth spatial moment (intensity-weighted area).
References
[R246] Wilhelm Burger, Mark Burge. Principles of Digital Image Processing: Core Algorithms. Springer-Verlag, London, 2009.
[R247] B. Jähne. Digital Image Processing. Springer-Verlag, Berlin-Heidelberg, 6. edition, 2005.
[R248] T. H. Reiss. Recognizing Planar Objects Using Invariant Image Features, from Lecture notes in computer science, p. 676. Springer, Berlin, 1993.
[R249] http://en.wikipedia.org/wiki/Image_moment
Examples
>>> from skimage import data, util
>>> from skimage.morphology import label
>>> img = util.img_as_ubyte(data.coins()) > 110
>>> label_img = label(img)
>>> props = regionprops(label_img)
>>> props[0].centroid # centroid of first labeled object
(22.729879860483141, 81.912285234465827)
>>> props[0]['centroid'] # centroid of first labeled object
(22.729879860483141, 81.912285234465827)
Compute the mean structural similarity index between two images.
Parameters:
    X, Y : (N, N) ndarray
    win_size : int
    gradient : bool
    dynamic_range : int

Returns:
    s : float
    grad : (N * N,) ndarray
References
[R250] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13, 600-612.
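Examples

A minimal added sketch: comparing an image with itself yields an index of 1 (the random image is only a placeholder):

>>> import numpy as np
>>> from skimage.measure import structural_similarity
>>> np.random.seed(0)
>>> X = np.random.rand(64, 64)
>>> round(structural_similarity(X, X), 5)
1.0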
Subdivision of polygonal curves using B-Splines.
Note that the resulting curve is always within the convex hull of the original polygon. Circular polygons stay closed after subdivision.
Parameters:
    coords : (N, 2) array
    degree : {1, 2, 3, 4, 5, 6, 7}, optional
    preserve_ends : bool, optional

Returns:
    coords : (M, 2) array
References
[R251] http://mrl.nyu.edu/publications/subdiv-course2000/coursenotes00.pdf
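Examples

A minimal added sketch: B-spline subdivision inserts new vertices, so the output is denser than the input (the square is illustrative):

>>> import numpy as np
>>> from skimage.measure import subdivide_polygon
>>> square = np.array([[0., 0.], [10., 0.], [10., 10.], [0., 10.]])
>>> new_coords = subdivide_polygon(square, degree=2)
>>> len(new_coords) > len(square)
True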