Converting annotations to object segmentation mask images¶
Created on Fri Jan 24 2020.
- histomicstk.annotations_and_masks.annotations_to_object_mask_handler.annotations_to_contours_no_mask(gc, slide_id, MPP=5.0, MAG=None, mode='min_bounding_box', bounds=None, idx_for_roi=None, slide_annotations=None, element_infos=None, linewidth=0.2, get_rgb=True, get_visualization=True, text=True)¶
Process annotations to get RGB and contours without intermediate masks.
gc (object) – girder client object to make requests, for example: gc = girder_client.GirderClient(apiUrl = APIURL) gc.authenticate(interactive=True)
slide_id (str) – girder id for item (slide)
mode (str) – This specifies which part of the slide to get the mask from. Allowed modes include the following:
- wsi: get a scaled up or down version of the mask of the whole slide
- min_bounding_box: get the minimum box containing all annotations in the slide
- manual_bounds: use the ROI bounds given by the 'bounds' parameter
- polygonal_bounds: use the idx_for_roi parameter to get coordinates
bounds (dict or None) – if not None, has keys ‘XMIN’, ‘XMAX’, ‘YMIN’, ‘YMAX’ for slide region coordinates (AT BASE MAGNIFICATION) to get labeled image (mask) for. Use this with the ‘manual_bounds’ run mode.
idx_for_roi (int) – index of ROI within the element_infos dataframe. Use this with the ‘polygonal_bounds’ run mode.
slide_annotations (list or None) – Provide this parameter to avoid re-fetching slide annotations. If you do provide the annotations, make sure you have used scale_slide_annotations() to scale them up or down by sf BEFOREHAND.
element_infos (pandas DataFrame) – The columns annidx and elementidx encode the dict index of the annotation document and element, respectively, in the original slide_annotations list of dictionaries. This can be obtained by the get_bboxes_from_slide_annotations() method. Make sure you have used scale_slide_annotations().
linewidth (float) – visualization line width
get_rgb (bool) – get rgb image?
get_visualization (bool) – get overlaid annotation bounds on the RGB image for visualization?
text (bool) – add text labels to visualization?
Results dict containing one or more of the following keys:
- bounds: dict of bounds at scan magnification
- rgb: (m, n, 3) np array, corresponding rgb image
- contours: dict
- visualization: (m, n, 3) np array, visualization overlay
- Return type
dict
- histomicstk.annotations_and_masks.annotations_to_object_mask_handler.combs_with_unique_products(low, high, k)¶
- histomicstk.annotations_and_masks.annotations_to_object_mask_handler.contours_to_labeled_object_mask(contours, gtcodes, mode='object', verbose=False, monitorprefix='')¶
Process contours to get an object segmentation labeled mask.
contours (DataFrame) – contours corresponding to annotation elements from the slide. All coordinates are relative to the mask that you want to output. The following columns are expected:
- group: str, annotation group (ground truth label)
- ymin: int, minimum y coordinate
- ymax: int, maximum y coordinate
- xmin: int, minimum x coordinate
- xmax: int, maximum x coordinate
- coords_x: str, comma-separated vertex x coordinates
- coords_y: str, comma-separated vertex y coordinates
gtcodes (DataFrame) – the ground truth codes and information dataframe. This dataframe is indexed by the annotation group name and has the following columns:
- group: str, group name of annotation, e.g. mostly_tumor
- GT_code: int, desired ground truth code (in the mask). Pixels of this value belong to the corresponding group (class).
- color: str, rgb format, e.g. rgb(255,0,0)
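A minimal sketch of building such a dataframe with pandas; the group names and codes here are hypothetical, but the column names and group-name index follow the expected schema:

```python
import pandas as pd

# Hypothetical ground truth codes; columns follow the expected schema.
gtcodes = pd.DataFrame(
    [
        {'group': 'mostly_tumor', 'GT_code': 1, 'color': 'rgb(255,0,0)'},
        {'group': 'stroma', 'GT_code': 2, 'color': 'rgb(0,255,0)'},
    ],
)
# Index by the annotation group name, as the function expects.
gtcodes.index = gtcodes['group'].values
print(gtcodes.loc['mostly_tumor', 'GT_code'])
```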
mode (str) – run mode for getting masks. Must be one of:
- object: get a 3-channel mask where the first channel encodes the label (tumor, stroma, etc.) while the product of the second and third channels encodes the object ID (i.e. individual contours). This is useful for object localization and segmentation tasks.
- semantic: get a 1-channel mask corresponding to the first channel of the object mode.
verbose (bool) – print to screen?
monitorprefix (str) – prefix to add to printed statements
If mode is “object”, this returns an (m, n, 3) np array of dtype uint8 that can be saved as a png.
- First channel: encodes the label (can be used for semantic segmentation).
- Second & third channels: their product gives the object id (255 choose 2 = 32,385 max unique objects).
This allows us to save object labels and segmentation masks into a convenient 3-channel png, which is more compact than traditional mask-rcnn save formats like having one channel per object and a separate csv file for object labels. It is also more convenient than simply saving pickled np array objects, and allows compatibility with data loaders that expect an image or mask. If mode is “semantic”, only the labels (corresponding to the first channel of the object mode) are output. ** IMPORTANT NOTE ** When you read this mask and decide to reconstruct the object codes, convert it to float32 so that the product does not saturate at 255.
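Decoding a saved object mask can be sketched as follows; the mask here is synthetic for illustration, but the same float32 conversion applies when reading a real png:

```python
import numpy as np

# Synthetic (4, 4, 3) object mask: channel 0 is the class label,
# the product of channels 1 and 2 is the object id.
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[:2, :, 0] = 1      # label 1 (e.g. tumor) in the top half
mask[:2, :, 1] = 100    # second channel
mask[:2, :, 2] = 200    # third channel -> object id = 100 * 200 = 20000

labels = mask[..., 0]
# Convert to float32 BEFORE multiplying; uint8 arithmetic cannot
# represent products above 255.
object_ids = mask[..., 1].astype(np.float32) * mask[..., 2].astype(np.float32)

print(labels[0, 0])      # 1
print(object_ids[0, 0])  # 20000.0
```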
- Return type
- histomicstk.annotations_and_masks.annotations_to_object_mask_handler.get_all_rois_from_slide_v2(gc, slide_id, GTCodes_dict, save_directories, annotations_to_contours_kwargs=None, mode='object', get_mask=True, slide_name=None, verbose=True, monitorprefix='', callback=None, callback_kwargs=None)¶
Get all ROIs for a slide without an intermediate mask form.
This mainly relies on contours_to_labeled_object_mask(), which should be referred to for extra documentation.
This can be run in either “object” mode, whereby the saved masks are three-channel pngs where the first channel encodes the class label (i.e. same as semantic segmentation) and the product of the values in the second and third channels encodes the object ID. Otherwise, the user may decide to run in “semantic” mode, in which case the resultant mask consists of only one channel (semantic segmentation with no object differentiation).
The difference between this and version 1, found at histomicstk.annotations_and_masks.annotations_to_masks_handler.get_all_rois_from_slide(), is that this version gets the contours first, including cropping to the wanted ROI boundaries and other processing using shapely, and THEN parses these into masks. This enables us to differentiate various objects so the data can be used for object localization, classification, or segmentation tasks. If you would like to get semantic segmentation masks, i.e. you do not really care about individual objects, you can use either version 1 or this method. They re-use much of the same code base, but some edge cases may be better handled by version 1. For example, since this version uses shapely first to crop, some objects may be incorrectly parsed by shapely. Version 1, which uses PIL.ImageDraw, may not have these problems.
Bottom line: if you need semantic segmentation masks, it is probably safer to use version 1; if you need object segmentation masks, use this method.
slide_id (str) – girder id for item (slide)
GTCodes_dict (dict) – the ground truth codes and information dict. This dict is indexed by the annotation group name, and each entry is in turn a dict with the following keys:
- group: str, group name of annotation, e.g. mostly_tumor
- overlay_order: int, how early to place the annotation in the mask. Larger values mean this annotation group is overlaid last and overwrites whatever overlaps it.
- GT_code: int, desired ground truth code (in the mask). Pixels of this value belong to the corresponding group (class).
- is_roi: flag for whether this group encodes an ROI
- is_background_class: flag for whether this group is the default fill value inside the ROI. For example, you may decide that any pixel inside the ROI is considered stroma.
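A minimal sketch of such a dict; the group names and codes are hypothetical:

```python
# Hypothetical ground truth codes dict, keyed by annotation group name.
GTCodes_dict = {
    'roi': {
        'group': 'roi', 'overlay_order': 0, 'GT_code': 254,
        'is_roi': True, 'is_background_class': False,
    },
    'mostly_tumor': {
        'group': 'mostly_tumor', 'overlay_order': 1, 'GT_code': 1,
        'is_roi': False, 'is_background_class': False,
    },
    'mostly_stroma': {
        'group': 'mostly_stroma', 'overlay_order': 2, 'GT_code': 2,
        'is_roi': False, 'is_background_class': True,  # default fill inside ROI
    },
}
print(GTCodes_dict['mostly_tumor']['GT_code'])
```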
save_directories (dict) – paths to directories to save data. Each entry is a string, and the following keys are allowed:
- ROI: path to save masks (labeled images)
- rgb: path to save rgb images
- contours: path to save annotation contours
- visualization: path to save rgb visualization overlays
mode (str) – run mode for getting masks. Must be one of:
- object: get a 3-channel mask where the first channel encodes the label (tumor, stroma, etc.) while the product of the second and third channels encodes the object ID (i.e. individual contours). This is useful for object localization and segmentation tasks.
- semantic: get a 1-channel mask corresponding to the first channel of the object mode.
get_mask (bool) – While the main purpose of this method IS to get object segmentation masks, it is conceivable that some users might just want to get the RGB and contours. Default is True.
annotations_to_contours_kwargs (dict) – kwargs to pass to annotations_to_contours_no_mask(); default values are assigned for parameters that are not given.
verbose (bool) – Print progress to screen?
monitorprefix (str) – text to prepend to printed statements
callback (function) – a callback function to run on the roi dictionary output. This is internal, but if you really want to use it, make sure the callback can accept the following keys and that you do NOT assign them yourself: gc, slide_id, slide_name, MPP, MAG, verbose, monitorprefix. Also, this callback MUST return ONLY the roi dictionary, whether or not it is modified inside the callback. If it is modified inside the callback, the modified version is the one that will be saved to disk.
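A sketch of a conforming callback, assuming the roi dict is passed as the first argument (the tagging logic here is hypothetical); it accepts the internally-assigned keys without the caller supplying them, and returns only the roi dict:

```python
def my_callback(roi, gc=None, slide_id=None, slide_name=None,
                MPP=None, MAG=None, verbose=False, monitorprefix='',
                **callback_kwargs):
    """Hypothetical callback: tag each roi dict before it is saved.

    gc, slide_id, etc. are assigned internally by the caller;
    callback_kwargs come from the callback_kwargs parameter.
    """
    roi['processed_by'] = 'my_callback'  # hypothetical modification
    return roi  # MUST return only the roi dictionary


roi = my_callback({'bounds': {'XMIN': 0}}, slide_id='abc123')
print(roi['processed_by'])
```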
callback_kwargs (dict) – kwargs to pass to callback, not including the mandatory kwargs that will be passed internally (mentioned earlier here).
Each entry contains the following keys:
- mask: path to saved mask
- rgb: path to saved rgb image
- contours: path to saved annotation contours
- visualization: path to saved rgb visualization overlay
- Return type
list of dicts