class documentation

Wrapper for running an eta.core.learning.Model model.

Parameters
config: an ETAModelConfig
Class Method from_eta_model Builds an ETAModel for running the provided eta.core.learning.Model instance.
Method __enter__ Undocumented
Method __exit__ Undocumented
Method __init__ Undocumented
Method embed Generates an embedding for the given data.
Method embed_all Generates embeddings for the given iterable of data.
Method get_embeddings Returns the embeddings generated by the last forward pass of the model.
Method predict Performs prediction on the given data.
Method predict_all Performs prediction on the given iterable of data.
Method preprocess.setter Undocumented
Instance Variable config Undocumented
Property has_embeddings Whether this instance can generate embeddings.
Property has_logits Whether this instance can generate logits for its predictions.
Property media_type The media type processed by the model.
Property preprocess Whether to apply transforms during inference (True) or to assume that they have already been applied (False).
Property ragged_batches Whether transforms may return tensors of different sizes. If True, then passing ragged lists of data to predict_all is not allowed.
Property transforms The preprocessing function that will/must be applied to each input before prediction, or None if no preprocessing is performed.
Method _ensure_embeddings Undocumented
Method _parse_predictions Undocumented
Instance Variable _model Undocumented

Inherited from Model:

Property can_embed_prompts Whether this instance can generate prompt embeddings.

Inherited from LogitsMixin (via Model, EmbeddingsMixin):

Method store_logits.setter Undocumented
Property store_logits Whether the model should store logits in its predictions.
Instance Variable _store_logits Undocumented
@classmethod
def from_eta_model(cls, model): (source)

Builds an ETAModel for running the provided eta.core.learning.Model instance.

Parameters
model: an eta.core.learning.Model instance
Returns
an ETAModel
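
A minimal usage sketch. The fiftyone.utils.eta module path for ETAModel is an assumption here, and how the wrapped ETA model is constructed depends on your particular model:

    import numpy as np

    import fiftyone.utils.eta as foue  # assumed location of ETAModel

    # Any eta.core.learning.Model instance; construct it however your
    # particular ETA model requires
    eta_model = ...

    # Wrap the ETA model so it exposes the FiftyOne model interface
    model = foue.ETAModel.from_eta_model(eta_model)

    with model:  # __enter__/__exit__ manage the underlying model's lifecycle
        img = np.random.randint(255, size=(480, 640, 3), dtype=np.uint8)
        label = model.predict(img)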
def __enter__(self): (source)

Undocumented

def __exit__(self, *args): (source)

Undocumented

def __init__(self, config, _model=None): (source)
def embed(self, arg): (source)

Generates an embedding for the given data.

Subclasses can override this method to increase efficiency, but, by default, this method simply calls predict and then returns get_embeddings.

Parameters
arg: the data. See predict for details
Returns
a numpy array containing the embedding
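
The default strategy described above amounts to roughly the following (an illustrative sketch, not the actual implementation):

    def embed_via_predict(model, arg):
        # Default fallback: run a forward pass, then return the embeddings
        # cached by that pass
        model.predict(arg)
        return model.get_embeddings()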
def embed_all(self, args): (source)

Generates embeddings for the given iterable of data.

Subclasses can override this method to increase efficiency, but, by default, this method simply iterates over the data and applies embed to each.

Parameters
args: an iterable of data. See predict_all for details
Returns
a numpy array containing the embeddings stacked along axis 0
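
A usage sketch, assuming `model` is an embedding-capable ETAModel (e.g. constructed as in the from_eta_model example above):

    import numpy as np

    images = [
        np.random.randint(255, size=(480, 640, 3), dtype=np.uint8)
        for _ in range(4)
    ]

    if model.has_embeddings:
        embeddings = model.embed_all(images)  # embeddings stacked along axis 0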
def get_embeddings(self): (source)

Returns the embeddings generated by the last forward pass of the model.

By convention, this method should always return an array whose first axis represents batch size (which will always be 1 when predict was last used).

Returns
a numpy array containing the embedding(s)
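
The batch-size convention can be checked directly; a sketch assuming `model` is an embedding-capable ETAModel:

    import numpy as np

    img = np.random.randint(255, size=(480, 640, 3), dtype=np.uint8)

    model.predict(img)                 # single input -> batch size of 1
    embedding = model.get_embeddings()
    assert embedding.shape[0] == 1     # first axis is the batch dimension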
def predict(self, arg): (source)

Performs prediction on the given data.

Image models should support, at minimum, processing arg values that are uint8 numpy arrays (HWC).

Video models should support, at minimum, processing arg values that are eta.core.video.VideoReader instances.

Parameters
arg: the data
Returns
a fiftyone.core.labels.Label instance or dict of fiftyone.core.labels.Label instances containing the predictions
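
A sketch of handling the two documented return types, assuming `model` and `img` as in the earlier examples:

    import fiftyone.core.labels as fol

    label = model.predict(img)  # img: uint8 numpy array (HWC)

    if isinstance(label, dict):
        # Multi-output models return a dict mapping field names to Label instances
        for name, value in label.items():
            assert isinstance(value, fol.Label)
    else:
        assert isinstance(label, fol.Label)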
def predict_all(self, args): (source)

Performs prediction on the given iterable of data.

Image models should support, at minimum, processing args values that are either lists of uint8 numpy arrays (HWC) or numpy array tensors (NHWC).

Video models should support, at minimum, processing args values that are lists of eta.core.video.VideoReader instances.

Subclasses can override this method to increase efficiency, but, by default, this method simply iterates over the data and applies predict to each.

Parameters
args: an iterable of data
Returns
a list of fiftyone.core.labels.Label instances or a list of dicts of fiftyone.core.labels.Label instances containing the predictions
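
A sketch of the two documented input formats for image models, assuming `model` as above:

    import numpy as np

    # A uniform batch may be passed as a single NHWC tensor...
    batch = np.random.randint(255, size=(4, 480, 640, 3), dtype=np.uint8)
    labels = model.predict_all(batch)

    # ...or equivalently as a list of HWC arrays
    labels = model.predict_all(list(batch))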
@preprocess.setter
def preprocess(self, value): (source)

Undocumented

@property
has_embeddings = (source)

Whether this instance can generate embeddings.

This property returns False by default. Subclasses that can generate embeddings override it by implementing the EmbeddingsMixin interface.

@property
has_logits = (source)

Whether this instance can generate logits for its predictions.

This property returns False by default. Subclasses that can generate logits override it by implementing the LogitsMixin interface.

@property
media_type = (source)

The media type processed by the model.

Supported values are "image" and "video".

@property
preprocess = (source)

Whether to apply transforms during inference (True) or to assume that they have already been applied (False).

@property
ragged_batches = (source)

Whether transforms may return tensors of different sizes. If True, then passing ragged lists of data to predict_all is not allowed.
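
A sketch of how a caller might respect this flag, assuming `model` as above:

    import numpy as np

    # A ragged list of images with different sizes
    images = [
        np.random.randint(255, size=(480, 640, 3), dtype=np.uint8),
        np.random.randint(255, size=(720, 1280, 3), dtype=np.uint8),
    ]

    if model.ragged_batches:
        # Ragged lists cannot be passed to predict_all(); process items individually
        labels = [model.predict(img) for img in images]
    else:
        labels = model.predict_all(images)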

@property
transforms = (source)

The preprocessing function that will/must be applied to each input before prediction, or None if no preprocessing is performed.
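
A sketch of applying the transforms manually and disabling automatic preprocessing; this assumes transforms accepts a single image, which may not hold for every model:

    import numpy as np

    img = np.random.randint(255, size=(480, 640, 3), dtype=np.uint8)

    if model.transforms is not None:
        img = model.transforms(img)  # apply the preprocessing manually
        model.preprocess = False     # tell predict() not to re-apply the transforms

    label = model.predict(img)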

def _ensure_embeddings(self): (source)

Undocumented

def _parse_predictions(self, eta_labels_or_batch): (source)

Undocumented

_model = (source)

Undocumented