segmentationRefinement.predict.Rd
A random forest implementation of the corrective learning wrapper introduced in Wang et al., NeuroImage, 2011 (http://www.ncbi.nlm.nih.gov/pubmed/21237273). The prediction step applies the label-specific models produced during training to refine an initial segmentation.
segmentationRefinement.predict(
  segmentationImage,
  labelSet,
  labelModels,
  featureImages,
  featureImageNames,
  dilationRadius = 2,
  neighborhoodRadius = 0,
  normalizeSamplesPerLabel = TRUE,
  useEntireLabeledRegion = TRUE
)
| Argument | Description |
|---|---|
| segmentationImage | Image to refine via corrective learning. |
| labelSet | A vector specifying the labels of interest. Must be specified. |
| labelModels | A list of models, one for each element of labelSet. |
| featureImages | A list of feature images. |
| featureImageNames | A vector of character strings naming the set of features. Must be specified. |
| dilationRadius | Dilation radius (in voxels) used to determine the ROI for each label via binary morphology. Alternatively, a physical distance can be given as a character string, e.g., dilationRadius = '2.75mm', to apply an isotropic dilation in millimeters; the numeric value must be followed by the suffix 'mm'. See the sketch after this table. |
| neighborhoodRadius | Specifies which voxel neighbors are included in prediction. May be a scalar or a vector but must match the radius used for training. |
| normalizeSamplesPerLabel | If TRUE, the samples from each ROI are normalized by the mean of the voxels in that ROI. Can be a vector (one element per feature). |
| useEntireLabeledRegion | If TRUE, estimation is performed on the full dilated ROI for each label. If FALSE, estimation is performed on the combined inner and outer boundary region around each label. |
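For illustration, a call using the physical-distance form of dilationRadius and a 2-D neighborhood might look like the following sketch. The objects initialSeg, labelModels, and subjectFeatureImages are placeholders (the per-label models would typically come from segmentationRefinement.train), and the parameter values are illustrative only.

# 'initialSeg', 'labelModels', and 'subjectFeatureImages' are placeholders:
# the initial segmentation to correct, the per-label models returned by
# segmentationRefinement.train(), and this subject's feature images.
refined <- segmentationRefinement.predict(
  segmentationImage = initialSeg,
  labelSet = c( 1, 2, 3 ),
  labelModels = labelModels,
  featureImages = subjectFeatureImages,
  featureImageNames = c( "T1", "Gradient", "Laplacian" ),
  dilationRadius = "2.75mm",            # isotropic dilation by physical distance
  neighborhoodRadius = c( 1, 1 ),       # must match the radius used during training
  normalizeSamplesPerLabel = TRUE )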
a list consisting of the refined segmentation estimate (RefinedSegmentationImage) and a list of the foreground probability images (ForegroundProbabilityImages).
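A minimal sketch of working with the returned list, using the element names described above; it assumes 'refined' holds the output of a previous call and that one foreground probability image is returned per element of labelSet.

# Assumes 'refined' was returned by segmentationRefinement.predict().
refinedSeg <- refined$RefinedSegmentationImage
antsImageWrite( refinedSeg, "refinedSegmentation.nii.gz" )

# Write out each foreground probability image (assumed one per label).
for( i in seq_along( refined$ForegroundProbabilityImages ) )
  {
  antsImageWrite( refined$ForegroundProbabilityImages[[i]],
    paste0( "foregroundProbability_label", i, ".nii.gz" ) )
  }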
Tustison NJ
if (FALSE) {

library( ANTsR )
library( ggplot2 )

imageIDs <- c( "r16", "r27", "r30", "r62", "r64", "r85" )

# Perform simple 3-tissue segmentation.  For convenience we are
# going to use atropos segmentation to define the "ground-truth"
# segmentations and the kmeans to define the segmentation we
# want to "correct".  We collect feature images for each image.
# The gradient and laplacian images chosen below as feature
# images are simply selected for convenience.

segmentationLabels <- c( 1, 2, 3 )
featureImageNames <- c( 'T1', 'Gradient', 'Laplacian' )

images <- list()
kmeansSegs <- list()
atroposSegs <- list()
featureImages <- list()

for( i in 1:length( imageIDs ) )
  {
  cat( "Processing image", imageIDs[i], "\n" )
  images[[i]] <- antsImageRead( getANTsRData( imageIDs[i] ) )
  mask <- getMask( images[[i]] )
  kmeansSegs[[i]] <- kmeansSegmentation( images[[i]],
    length( segmentationLabels ), mask, mrf = 0.0 )$segmentation
  atroposSegs[[i]] <- atropos( images[[i]], mask, i = "KMeans[3]",
    m = "[0.25,1x1]", c = "[5,0]" )$segmentation

  featureImageSetPerImage <- list()
  featureImageSetPerImage[[1]] <- images[[i]]
  featureImageSetPerImage[[2]] <- iMath( images[[i]], "Grad", 1.0 )
  featureImageSetPerImage[[3]] <- iMath( images[[i]], "Laplacian", 1.0 )
  featureImages[[i]] <- featureImageSetPerImage
  }

# Perform training.  We train on images "r27", "r30", "r62", "r64",
# and "r85" and test/predict on image "r16".

cat( "\nTraining\n\n" )

segLearning <- segmentationRefinement.train(
  featureImages = featureImages[2:6],
  truthLabelImages = atroposSegs[2:6],
  segmentationImages = kmeansSegs[2:6],
  featureImageNames = featureImageNames,
  labelSet = segmentationLabels,
  maximumNumberOfSamplesOrProportionPerClass = 100,
  dilationRadius = 1,
  neighborhoodRadius = c( 1, 1 ),
  normalizeSamplesPerLabel = TRUE,
  useEntireLabeledRegion = FALSE )

cat( "\nPrediction\n\n" )

refinement <- segmentationRefinement.predict(
  segmentationImage = kmeansSegs[[1]],
  labelSet = segmentationLabels,
  labelModels = segLearning$LabelModels,
  featureImages = featureImages[[1]],
  featureImageNames = featureImageNames,
  dilationRadius = 1,
  neighborhoodRadius = c( 1, 1 ),
  normalizeSamplesPerLabel = TRUE )

# Compare "ground truth" = atroposSegs[[1]] with
# refinement$RefinedSegmentationImage

antsImageWrite( refinement$RefinedSegmentationImage,
  "r16RefinedSegmentation.nii.gz" )
}