module documentation

Utilities for working with Lightning Flash.

Copyright 2017-2025, Voxel51, Inc.

Function apply_flash_model: Applies the given Lightning Flash model to the samples in the collection.
Function compute_flash_embeddings: Computes embeddings for the samples in the collection using the given Lightning Flash model.
Function _do_export_array: Undocumented
Function _export_arrays: Undocumented
Function _get_output: Undocumented
Constant _MODEL_TO_DATAMODULE_MAP: Undocumented
Constant _SUPPORTED_MODELS: Undocumented
def apply_flash_model(samples, model, label_field='predictions', confidence_thresh=None, store_logits=False, batch_size=None, num_workers=None, output_dir=None, rel_dir=None, transform_kwargs=None, trainer_kwargs=None): (source)

Applies the given Lightning Flash model to the samples in the collection.

Parameters
samples: a fiftyone.core.collections.SampleCollection
model: a flash.core.model.Task
label_field ("predictions"): the name of the field in which to store the model predictions. When performing inference on video frames, the "frames." prefix is optional
confidence_thresh (None): an optional confidence threshold to apply to any applicable labels generated by the model
store_logits (False): whether to store logits for the model predictions. This is only supported when the provided model has logits
batch_size (None): an optional batch size to use. If not provided, a default batch size is used
num_workers (None): the number of workers for the data loader to use
output_dir (None): an optional output directory in which to write segmentation images. Only applicable if the model generates segmentations. If no directory is provided, the segmentations are stored in the database
rel_dir (None): an optional relative directory to strip from each input filepath to generate a unique identifier that is joined with output_dir to generate an output path for each segmentation image. This argument allows for populating nested subdirectories in output_dir that match the shape of the input paths. The path is converted to an absolute path (if necessary) via fiftyone.core.storage.normalize_path
transform_kwargs (None): an optional dict of transform kwargs to pass into the data module created for some models
trainer_kwargs (None): an optional dict of kwargs used to initialize the Trainer. These can be used, for example, to configure the number of GPUs to use and other distributed inference parameters
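A minimal usage sketch for apply_flash_model. This assumes fiftyone and lightning-flash are installed; the `fiftyone.utils.flash` module path, the zoo dataset name, and the detector head/backbone configuration are illustrative assumptions, not confirmed by this page:

```python
import fiftyone.zoo as foz
import fiftyone.utils.flash as fouf  # assumed module path for these utilities

from flash.image import ObjectDetector

# Load a small image collection to run inference on
dataset = foz.load_zoo_dataset("quickstart", max_samples=10)

# A Flash object detection task; head/backbone names are illustrative
model = ObjectDetector(head="efficientdet", backbone="d0", num_classes=91)

# Apply the model, keeping only predictions above the confidence threshold
fouf.apply_flash_model(
    dataset,
    model,
    label_field="flash_predictions",
    confidence_thresh=0.3,
    batch_size=4,
)

# Predictions are now stored on each sample in "flash_predictions"
print(dataset.count("flash_predictions.detections"))
```

Since the model here is an ObjectDetector, predictions are stored as detections; a classifier or segmentation task would populate the label field with the corresponding label type instead.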
def compute_flash_embeddings(samples, model, embeddings_field=None, batch_size=None, num_workers=None, transform_kwargs=None, trainer_kwargs=None): (source)

Computes embeddings for the samples in the collection using the given Lightning Flash model.

This method only supports applying an ImageEmbedder to an image collection.

If an embeddings_field is provided, the embeddings are saved to the samples; otherwise, the embeddings are returned in-memory.

Parameters
samples: a fiftyone.core.collections.SampleCollection
model: a flash.core.model.Task
embeddings_field (None): the name of a field in which to store the embeddings
batch_size (None): an optional batch size to use. If not provided, a default batch size is used
num_workers (None): the number of workers for the data loader to use
transform_kwargs (None): an optional dict of transform kwargs to pass into the data module created for some models
trainer_kwargs (None): an optional dict of kwargs used to initialize the Trainer. These can be used, for example, to configure the number of GPUs to use and other distributed inference parameters
Returns
one of the following
  • None, if an embeddings_field is provided
  • a num_samples x num_dim array of embeddings, if embeddings_field is None
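A minimal usage sketch showing both return modes of compute_flash_embeddings. This assumes fiftyone and lightning-flash are installed; the `fiftyone.utils.flash` module path, the zoo dataset name, and the embedder backbone are illustrative assumptions:

```python
import fiftyone.zoo as foz
import fiftyone.utils.flash as fouf  # assumed module path for these utilities

from flash.image import ImageEmbedder

dataset = foz.load_zoo_dataset("quickstart", max_samples=10)

# A Flash image embedding task; the backbone name is illustrative
model = ImageEmbedder(backbone="resnet18")

# Without embeddings_field, a num_samples x num_dim array is returned in-memory
embeddings = fouf.compute_flash_embeddings(dataset, model, batch_size=4)
print(embeddings.shape)

# With embeddings_field, the embeddings are saved on the samples and None is returned
fouf.compute_flash_embeddings(dataset, model, embeddings_field="flash_embeddings")
```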
def _do_export_array(label, input_path, filename_maker): (source)

Undocumented

def _export_arrays(label, input_path, filename_maker): (source)

Undocumented

def _get_output(model, confidence_thresh, store_logits): (source)

Undocumented

_MODEL_TO_DATAMODULE_MAP = (source)

Undocumented

Value
{fi.ObjectDetector: fi.ObjectDetectionData,
 fi.ImageClassifier: fi.ImageClassificationData,
 fi.SemanticSegmentation: fi.SemanticSegmentationData,
 fv.VideoClassifier: fv.VideoClassificationData}
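_MODEL_TO_DATAMODULE_MAP pairs each supported Flash task class with the data module class used to feed it. A self-contained sketch of how such a type-keyed lookup could work, using stand-in classes rather than the real flash.image / flash.video types:

```python
# Stand-in classes for the real Flash task and data module types
class ObjectDetector: pass
class ObjectDetectionData: pass
class ImageClassifier: pass
class ImageClassificationData: pass

# Mirrors the structure of _MODEL_TO_DATAMODULE_MAP: task class -> data module class
MODEL_TO_DATAMODULE_MAP = {
    ObjectDetector: ObjectDetectionData,
    ImageClassifier: ImageClassificationData,
}

def get_datamodule_cls(model):
    """Looks up the data module class for a model instance by its exact type.

    Note: type()-based lookup does not match subclasses; this is a sketch,
    not the actual FiftyOne implementation.
    """
    datamodule_cls = MODEL_TO_DATAMODULE_MAP.get(type(model))
    if datamodule_cls is None:
        raise ValueError("Unsupported model type %s" % type(model).__name__)
    return datamodule_cls

print(get_datamodule_cls(ObjectDetector()).__name__)  # prints "ObjectDetectionData"
```

Keying the dict on the task class keeps dispatch declarative: supporting a new task type only requires adding one entry, rather than extending an if/elif chain.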
_SUPPORTED_MODELS = (source)

Undocumented

Value
(fi.ImageClassifier, fi.ObjectDetector, fi.SemanticSegmentation)