fiftyone.core.utils module documentation

Core utilities.

Copyright 2017-2025, Voxel51, Inc.

Class add_sys_path Context manager that temporarily inserts a path to sys.path.
Class BaseDynamicBatcher Class for iterating over the elements of an iterable with a dynamic batch size to achieve a desired target measurement.
Class Batcher Base class for iterating over the elements of an iterable in batches.
Class ContentSizeDynamicBatcher Class for iterating over the elements of an iterable with a dynamic batch size to achieve a desired content size.
Class LatencyDynamicBatcher Class for iterating over the elements of an iterable with a dynamic batch size to achieve a desired latency.
Class LazyModule Proxy module that lazily imports the underlying module the first time it is actually used.
Class LoggingLevel Context manager that allows for a temporary change to the level of a logging.Logger.
Class MonkeyPatchFunction Context manager that temporarily monkey patches the given function.
Class ProgressBar No summary
Class ResourceLimit Context manager that allows for a temporary change to a resource limit exposed by the resource package.
Class ResponseStream Wrapper around a requests.Response that provides a file-like object interface with read(), seek(), and tell() methods.
Class SetAttributes Context manager that temporarily sets the attributes of a class to new values.
Class StaticBatcher Class for iterating over the elements of an iterable with a static batch size.
Class SuppressLogging Context manager that temporarily disables system-wide logging.
Class UniqueFilenameMaker A class that generates unique output paths in a directory.
Function available_patterns Returns the available patterns that can be used by fill_patterns.
Function call_on_exit Registers the given callback function so that it will be called when the process exits for (almost) any reason.
Function compute_filehash Computes the hash of the given file.
Function datetime_to_timestamp Converts a datetime.date or datetime.datetime to milliseconds since epoch.
Function deserialize_numpy_array Loads a serialized numpy array generated by serialize_numpy_array.
Function disable_progress_bars Context manager that temporarily disables all progress bars.
Function ensure_import Verifies that the given requirement is installed and importable.
Function ensure_package Verifies that the given package is installed.
Function ensure_requirements Verifies that the package requirements from a requirements.txt file on disk are installed.
Function ensure_tf Verifies that tensorflow is installed and importable.
Function ensure_tfds Verifies that tensorflow_datasets is installed and importable.
Function ensure_torch Verifies that torch and torchvision are installed and importable.
Function extract_kwargs_for_class Extracts keyword arguments for the given class's constructor from the given kwargs.
Function extract_kwargs_for_function Extracts keyword arguments for the given function from the given kwargs.
Function fill_patterns Fills the patterns in the given string.
Function find_files Finds all files in the given root directory whose filename matches the given glob pattern(s).
Function get_default_batcher Returns a Batcher over iterable using defaults from your FiftyOne config.
Function get_multiprocessing_context Returns the preferred multiprocessing context for the current OS.
Function handle_error Handles the error at the specified error level.
Function indent_lines Indents the lines in the given string.
Function install_package Installs the given package via pip.
Function install_requirements Installs the package requirements from a requirements.txt file on disk.
Function is_32_bit Determines whether the system is 32-bit.
Function is_arm_mac Determines whether the system is an ARM-based Mac (Apple Silicon).
Function is_container Determines if we're currently running as a container.
Function iter_batches Iterates over the given iterable in batches.
Function iter_slices Iterates over batches of the given object via slicing.
Function justify_headings Justifies the headings in a list of (heading, content) string tuples by appending whitespace as necessary to each heading.
Function lazy_import Returns a proxy module object that will lazily import the given module the first time it is used.
Function load_requirements Loads the package requirements from a requirements.txt file on disk.
Function load_xml_as_json_dict Loads the XML file as a JSON dictionary.
Function parse_batching_strategy Parses the given batching strategy configuration, applying any default config settings as necessary.
Function parse_serializable Parses the given object as an instance of the given eta.core.serial.Serializable class.
Function pformat Returns a pretty string representation of the Python object.
Function pprint Pretty-prints the Python object.
Function recommend_batch_size_for_value Computes a recommended batch size for the given value type such that a request involving a list of values of this size will be less than alpha * fo.config.batcher_target_size_bytes bytes.
Function recommend_process_pool_workers Recommends a number of workers for a process pool.
Function recommend_thread_pool_workers Recommends a number of workers for a thread pool.
Function report_progress Wraps the provided progress function such that it will only be called at the specified increments or time intervals.
Async Function run_sync_task Runs a synchronous function as an async background task.
Function safe_relpath A safe version of os.path.relpath that returns a configurable default value if the given path does not lie within the given relative start.
Function serialize_numpy_array Serializes a numpy array.
Function set_resource_limit Uses the resource package to change a resource limit for the current process.
Function split_frame_fields Splits the given fields into sample and frame fields.
Function stream_objects Streams the iterable of objects to stdout via less.
Function timedelta_to_ms Converts a datetime.timedelta to milliseconds.
Function timestamp_to_datetime Converts a timestamp (number of milliseconds since epoch) to a datetime.datetime.
Function to_slug Returns the URL-friendly slug for the given string.
Function validate_color Validates that the given value is a valid css color name.
Function validate_hex_color Validates that the given value is a hex color string or css name.
Variable fos Undocumented
Variable logger Undocumented
Variable sync_task_executor Undocumented
Function _extract_kwargs Undocumented
Function _get_sync_task_executor Undocumented
Function _is_docker Undocumented
Function _is_podman Undocumented
Function _report_progress_dt Undocumented
Function _report_progress_n Undocumented
Function _sanitize_char Undocumented
Function _split_frame_fields_dict Undocumented
Function _strip_comments Undocumented
Constant _HYPHEN_CHARS Undocumented
Constant _NAME_LENGTH_RANGE Undocumented
Constant _REQUIREMENT_ERROR_SUFFIX Undocumented
Constant _SAFE_CHARS Undocumented
def available_patterns(): (source)

Returns the available patterns that can be used by fill_patterns.

Returns
a dict mapping patterns to their replacements
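Example usage (a minimal sketch; the actual patterns and their replacements depend on your installation):

import fiftyone.core.utils as fou

# Inspect the patterns that `fill_patterns` would substitute
for patt, value in fou.available_patterns().items():
    print(patt, "->", value)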
def call_on_exit(callback): (source)

Registers the given callback function so that it will be called when the process exits for (almost) any reason.

Note that this should only be used from non-interactive scripts because it intercepts ctrl + c.

Covers the following cases:

  • normal program termination
  • a Python exception is raised
  • a SIGTERM signal is received

Parameters
callback: the function to execute upon termination
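Example usage (a minimal sketch; the cleanup function is illustrative):

import fiftyone.core.utils as fou

def cleanup():
    # e.g. close connections, flush logs
    print("shutting down...")

fou.call_on_exit(cleanup)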
def compute_filehash(filepath, method=None, chunk_size=None): (source)

Computes the hash of the given file.

Parameters
filepath: the path to the file
method (None): an optional hashlib method to use. If not specified, the builtin str.__hash__ will be used
chunk_size (None): an optional chunk size to use to read the file, in bytes. Only applicable when a method is provided. The default is 64kB. If negative, the entire file is read at once
Returns
the hash
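Example usage (a sketch; the path is illustrative, and passing a string method name like "md5" is an assumption based on common hashlib usage):

import fiftyone.core.utils as fou

# Hash via the builtin hash function
h1 = fou.compute_filehash("/path/to/file.txt")

# Hash via a hashlib method, reading the file in chunks
h2 = fou.compute_filehash("/path/to/file.txt", method="md5")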
def datetime_to_timestamp(dt): (source)

Converts a datetime.date or datetime.datetime to milliseconds since epoch.

Parameters
dt: a datetime.date or datetime.datetime
Returns
the float number of milliseconds since epoch
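Example usage (a round-trip sketch using timestamp_to_datetime, documented below):

from datetime import datetime

import fiftyone.core.utils as fou

ts = fou.datetime_to_timestamp(datetime(2024, 1, 1))
print(ts)  # float milliseconds since epoch

print(fou.timestamp_to_datetime(ts))  # round-trips to the original datetime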
def deserialize_numpy_array(numpy_bytes, ascii=False): (source)

Loads a serialized numpy array generated by serialize_numpy_array.

Parameters
numpy_bytes: the serialized numpy array bytes
ascii (False): whether the bytes were generated with the ascii == True parameter of serialize_numpy_array
Returns
the numpy array
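Example usage (a round-trip sketch with serialize_numpy_array, documented below):

import numpy as np

import fiftyone.core.utils as fou

array = np.arange(6).reshape(2, 3)

numpy_bytes = fou.serialize_numpy_array(array)
restored = fou.deserialize_numpy_array(numpy_bytes)

assert np.array_equal(array, restored)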
@contextmanager
def disable_progress_bars(): (source)

Context manager that temporarily disables all progress bars.

Example usage:

import fiftyone as fo
import fiftyone.zoo as foz

with fo.disable_progress_bars():
    dataset = foz.load_zoo_dataset("quickstart")
def ensure_import(requirement_str, error_level=None, error_msg=None, log_success=False): (source)

Verifies that the given requirement is installed and importable.

This function imports the specified module and optionally enforces any version requirements included in requirement_str.

Therefore, unlike ensure_package, requirement_str should refer to the module name (e.g., "tensorflow"), not the package name (e.g., "tensorflow-gpu").

Parameters
requirement_str: a PEP 440-like module requirement, like "tensorflow", "tensorflow<2", "tensorflow==2.3.0", or "tensorflow>=1.13,<1.15". This can also be an iterable of multiple requirements, all of which must be installed, or this can be a single "|"-delimited string specifying multiple requirements, at least one of which must be installed
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

error_msg (None): an optional custom error message to use
log_success (False): whether to generate a log message if the requirement is satisfied
Returns
True/False whether the requirement is satisfied
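Example usage (a minimal sketch of the call patterns documented above):

import fiftyone.core.utils as fou

# Raise an error if tensorflow cannot be imported
fou.ensure_import("tensorflow")

# Log a warning instead of raising if neither module is importable
fou.ensure_import("tensorflow|torch", error_level=1)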
def ensure_package(requirement_str, error_level=None, error_msg=None, log_success=False): (source)

Verifies that the given package is installed.

This function uses importlib.metadata to locate the package by its pip name and does not actually import the module.

Therefore, unlike ensure_import, requirement_str should refer to the package name (e.g., "tensorflow-gpu"), not the module name (e.g., "tensorflow").

Parameters
requirement_str: a PEP 440 compliant package requirement, like "tensorflow", "tensorflow<2", "tensorflow==2.3.0", or "tensorflow>=1.13,<1.15". This can also be an iterable of multiple requirements, all of which must be installed, or this can be a single "|"-delimited string specifying multiple requirements, at least one of which must be installed
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

error_msg (None): an optional custom error message to use
log_success (False): whether to generate a log message if the requirement is satisfied
Returns
True/False whether the requirement is satisfied
def ensure_requirements(requirements_path, error_level=None, log_success=False): (source)

Verifies that the package requirements from a requirements.txt file on disk are installed.

Parameters
requirements_path: the path to a requirements file
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

log_success (False): whether to generate a log message if a requirement is satisfied
def ensure_tf(eager=False, error_level=None, error_msg=None): (source)

Verifies that tensorflow is installed and importable.

Parameters
eager (False): whether to require that TF is executing eagerly. If True and TF is not currently executing eagerly, this method will attempt to enable it
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

error_msg (None): an optional custom error message to print
Returns
True/False whether the requirement is satisfied
def ensure_tfds(error_level=None, error_msg=None): (source)

Verifies that tensorflow_datasets is installed and importable.

Parameters
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

error_msg (None): an optional custom error message to print
Returns
True/False whether the requirement is satisfied
def ensure_torch(error_level=None, error_msg=None): (source)

Verifies that torch and torchvision are installed and importable.

Parameters
error_level (None): the error level to use, defined as:

  • 0: raise error if requirement is not satisfied
  • 1: log warning if requirement is not satisfied
  • 2: ignore unsatisfied requirements

By default, fiftyone.config.requirement_error_level is used

error_msg (None): an optional custom error message to print
Returns
True/False whether the requirement is satisfied
def extract_kwargs_for_class(cls, kwargs): (source)

Extracts keyword arguments for the given class's constructor from the given kwargs.

Parameters
cls: a class
kwargs: a dictionary of keyword arguments
Returns
a tuple of
  • class_kwargs: a dictionary of keyword arguments for cls
  • other_kwargs: a dictionary containing the remaining kwargs
def extract_kwargs_for_function(fcn, kwargs): (source)

Extracts keyword arguments for the given function from the given kwargs.

Parameters
fcn: a function
kwargs: a dictionary of keyword arguments
Returns
a tuple of
  • fcn_kwargs: a dictionary of keyword arguments for fcn
  • other_kwargs: a dictionary containing the remaining kwargs
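Example usage (a minimal sketch; the greet function is illustrative):

import fiftyone.core.utils as fou

def greet(name, punctuation="!"):
    return "hello, " + name + punctuation

kwargs = {"name": "world", "punctuation": "?", "color": "blue"}
fcn_kwargs, other_kwargs = fou.extract_kwargs_for_function(greet, kwargs)

print(fcn_kwargs)    # {'name': 'world', 'punctuation': '?'}
print(other_kwargs)  # {'color': 'blue'}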
def fill_patterns(string): (source)

Fills the patterns in the given string.

Use available_patterns to see the available patterns that can be used.

Parameters
string: a string
Returns
a copy of string with any patterns replaced
def find_files(root_dir, patt, max_depth=1): (source)

Finds all files in the given root directory whose filename matches the given glob pattern(s).

Both root_dir and patt may contain glob patterns.

Examples:

import fiftyone.core.utils as fou

# Find .txt files in `/tmp`
fou.find_files("/tmp", "*.txt")

# Find .txt files in subdirectories of `/tmp` that begin with `foo-`
fou.find_files("/tmp/foo-*", "*.txt")

# Find .txt files in `/tmp` or its subdirectories
fou.find_files("/tmp", "*.txt", max_depth=2)
Parameters
root_dir: the root directory
patt: a glob pattern or list of patterns
max_depth (1): a maximum depth to search. 1 means root_dir only, 2 means root_dir and its immediate subdirectories, etc
Returns
a list of matching paths
def get_default_batcher(iterable, progress=False, total=None): (source)

Returns a Batcher over iterable using defaults from your FiftyOne config.

Uses fiftyone.config.default_batcher to determine the implementation to use, and related configuration values as needed for each.

Parameters
iterable: an iterable to batch over. If None, the batcher is an infinite iterator and the result of next() is a batch size instead of a batch
progress (False): whether to render a progress bar tracking the consumption of the batches (True/False), use the default value fiftyone.config.show_progress_bars (None), or a progress callback function to invoke instead
total (None): the length of iterable. Only applicable when progress=True. If not provided, it is computed via len(iterable), if possible
Returns
a Batcher
def get_multiprocessing_context(): (source)

Returns the preferred multiprocessing context for the current OS.

Returns
a multiprocessing context
def handle_error(error, error_level, base_error=None): (source)

Handles the error at the specified error level.

Parameters
error: an Exception instance
error_level: the error level to use, defined as:

  • 0: raise the error
  • 1: log the error as a warning
  • 2: ignore the error

base_error (None): an optional base Exception from which to raise error
def indent_lines(s, indent=4, skip=0): (source)

Indents the lines in the given string.

Parameters
s: the string
indent (4): the number of spaces to indent
skip (0): the number of lines to skip before indenting
Returns
the indented string
def install_package(requirement_str, error_level=None, error_msg=None): (source)

Installs the given package via pip.

Installation is performed via:

python -m pip install <requirement_str>
Parameters
requirement_str: a PEP 440 compliant package requirement, like "tensorflow", "tensorflow<2", "tensorflow==2.3.0", or "tensorflow>=1.13,<1.15"
error_level (None): the error level to use, defined as:

  • 0: raise error if the install fails
  • 1: log warning if the install fails
  • 2: ignore install failures

error_msg (None): an optional custom error message to use
def install_requirements(requirements_path, error_level=None): (source)

Installs the package requirements from a requirements.txt file on disk.

Parameters
requirements_path: the path to a requirements file
error_level (None): the error level to use, defined as:

  • 0: raise error if the install fails
  • 1: log warning if the install fails
  • 2: ignore install failures

By default, fiftyone.config.requirement_error_level is used

def is_32_bit(): (source)

Determines whether the system is 32-bit.

Returns
True/False
def is_arm_mac(): (source)

Determines whether the system is an ARM-based Mac (Apple Silicon).

Returns
True/False
def is_container(): (source)

Determines if we're currently running as a container.

Returns
True/False
def iter_batches(iterable, batch_size): (source)

Iterates over the given iterable in batches.

Parameters
iterable: an iterable
batch_size: the desired batch size, or None to return the contents in a single batch
Returns
a generator that emits tuples of elements of the requested batch size from the input
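Example usage (per the docs above, batches are emitted as tuples):

import fiftyone.core.utils as fou

for batch in fou.iter_batches(range(7), 3):
    print(batch)

# (0, 1, 2)
# (3, 4, 5)
# (6,)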
def iter_slices(sliceable, batch_size): (source)

Iterates over batches of the given object via slicing.

Parameters
sliceable: an object that supports slicing
batch_size: the desired batch size, or None to return the contents in a single batch
Returns
a generator that emits batches of elements of the requested batch size from the input
def justify_headings(elements, width=None): (source)

Justifies the headings in a list of (heading, content) string tuples by appending whitespace as necessary to each heading.

Parameters
elements: a list of (heading, content) tuples
width (None): an optional justification width. By default, the maximum heading length is used
Returns
a list of justified (heading, content) tuples
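Example usage (a minimal sketch; by default, headings are padded to the longest heading):

import fiftyone.core.utils as fou

elements = [("name", "my-dataset"), ("num_samples", "100")]
for heading, content in fou.justify_headings(elements):
    print(heading, ":", content)

# name        : my-dataset
# num_samples : 100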
def lazy_import(module_name, callback=None): (source)

Returns a proxy module object that will lazily import the given module the first time it is used.

Example usage:

# Lazy version of `import tensorflow as tf`
tf = lazy_import("tensorflow")

# Other commands

# Now the module is loaded
tf.__version__
Parameters
module_name: the fully-qualified module name to import
callback (None): a callback function to call before importing the module
Returns
a proxy module object that will be lazily imported when first used
def load_requirements(requirements_path): (source)

Loads the package requirements from a requirements.txt file on disk.

Comments and extra whitespace are automatically stripped.

Parameters
requirements_path: the path to a requirements file
Returns
a list of requirement strings
def load_xml_as_json_dict(xml_path): (source)

Loads the XML file as a JSON dictionary.

Parameters
xml_path: the path to the XML file
Returns
a JSON dict
def parse_batching_strategy(batch_size=None, batching_strategy=None): (source)

Parses the given batching strategy configuration, applying any default config settings as necessary.

Parameters
batch_size (None): the batch size to use. If a batching_strategy is provided, this parameter configures that strategy as described below. If no batching_strategy is provided, this can either be an integer specifying the number of samples to save in a batch (in which case batching_strategy is implicitly set to "static") or a float number of seconds between batched saves (in which case batching_strategy is implicitly set to "latency")
batching_strategy (None): the batching strategy to use for each save operation. Supported values are:

  • "static": a fixed sample batch size for each save
  • "size": a target batch size, in bytes, for each save
  • "latency": a target latency, in seconds, between saves

By default, fo.config.default_batcher is used

Returns
a tuple of (batch_size, batching_strategy)
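Example usage (a sketch of the implicit-strategy rules described above):

import fiftyone.core.utils as fou

# An integer batch size implies the "static" strategy
print(fou.parse_batching_strategy(batch_size=64))   # (64, 'static')

# A float batch size implies the "latency" strategy
print(fou.parse_batching_strategy(batch_size=0.5))  # (0.5, 'latency')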
def parse_serializable(obj, cls): (source)

Parses the given object as an instance of the given eta.core.serial.Serializable class.

Parameters
obj: an instance of cls, or a serialized string or dictionary representation of one
cls: an eta.core.serial.Serializable class
Returns
an instance of cls
def pformat(obj, indent=4, width=80, depth=None): (source)

Returns a pretty string representation of the Python object.

Parameters
obj: the Python object
indent (4): the number of spaces to use when indenting
width (80): the max width of each line in the pretty representation
depth (None): the maximum depth at which to pretty render nested dicts
Returns
the pretty-formatted string
def pprint(obj, stream=None, indent=4, width=80, depth=None): (source)

Pretty-prints the Python object.

Parameters
obj: the Python object
stream (None): the stream to write to. The default is sys.stdout
indent (4): the number of spaces to use when indenting
width (80): the max width of each line in the pretty representation
depth (None): the maximum depth at which to pretty render nested dicts
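Example usage (a minimal sketch):

import fiftyone.core.utils as fou

obj = {"name": "quickstart", "tags": ["validation", "test"]}

fou.pprint(obj, indent=4, width=40)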
def recommend_batch_size_for_value(value, alpha=0.9, max_size=None): (source)

Computes a recommended batch size for the given value type such that a request involving a list of values of this size will be less than alpha * fo.config.batcher_target_size_bytes bytes.

Parameters
value: a value
alpha (0.9): a safety factor
max_size (None): an optional max batch size
Returns
a recommended batch size
def recommend_process_pool_workers(num_workers=None): (source)

Recommends a number of workers for a process pool.

If fo.config.max_process_pool_workers is set, this limit is applied.

Parameters
num_workers (None): a suggested number of workers
Returns
a number of workers
def recommend_thread_pool_workers(num_workers=None): (source)

Recommends a number of workers for a thread pool.

If fo.config.max_thread_pool_workers is set, this limit is applied.

Parameters
num_workers (None): a suggested number of workers
Returns
a number of workers
def report_progress(progress, n=None, dt=None): (source)

Wraps the provided progress function such that it will only be called at the specified increments or time intervals.

Example usage:

import fiftyone as fo
import fiftyone.zoo as foz

def print_progress(pb):
    if pb.complete:
        print("COMPLETE")
    else:
        print("PROGRESS: %0.3f" % pb.progress)

dataset = foz.load_zoo_dataset("cifar10", split="test")

# Print progress at 10 equally-spaced increments
progress = fo.report_progress(print_progress, n=10)
dataset.compute_metadata(progress=progress)

# Print progress every 0.5 seconds
progress = fo.report_progress(print_progress, dt=0.5)
dataset.compute_metadata(progress=progress, overwrite=True)
Parameters
progress: a function that accepts a ProgressBar as input
n (None): a number of equally-spaced increments to invoke progress
dt (None): a number of seconds between progress calls
Returns
a function that accepts a ProgressBar as input
async def run_sync_task(func, *args): (source)

Runs a synchronous function as an async background task.

Parameters
func: a synchronous callable
*args: function arguments
Returns
the function's return value(s)
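Example usage (a minimal sketch; slow_add stands in for any blocking function):

import asyncio

import fiftyone.core.utils as fou

def slow_add(a, b):
    return a + b  # placeholder for blocking work

async def main():
    result = await fou.run_sync_task(slow_add, 1, 2)
    print(result)  # 3

asyncio.run(main())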
def safe_relpath(path, start=None, default=None): (source)

A safe version of os.path.relpath that returns a configurable default value if the given path does not lie within the given relative start.

Parameters
path: a path
start (None): the relative prefix to strip from path
default (None): a default value to return if path does not lie within start. By default, the basename of the path is returned
Returns
the relative path
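Example usage (a sketch of the default fallback behavior described above; paths are illustrative):

import fiftyone.core.utils as fou

print(fou.safe_relpath("/data/images/001.jpg", "/data"))
# images/001.jpg

# Falls back to the basename because the path lies outside `start`
print(fou.safe_relpath("/other/002.jpg", "/data"))
# 002.jpg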
def serialize_numpy_array(array, ascii=False): (source)

Serializes a numpy array.

Parameters
array: a numpy array-like
ascii (False): whether to return a base64-encoded ASCII string instead of raw bytes
Returns
the serialized bytes
def set_resource_limit(limit, soft=None, hard=None, warn_on_failure=False): (source)

Uses the resource package to change a resource limit for the current process.

If the resource package cannot be imported, this command does nothing.

Parameters
limit: the name of the resource to limit. Must be the name of a constant in the resource module starting with RLIMIT. See the documentation of the resource module for supported values
soft (None): a new soft limit to apply, which cannot exceed the hard limit. If omitted, the current soft limit is maintained
hard (None): a new hard limit to apply. If omitted, the current hard limit is maintained
warn_on_failure (False): whether to issue a warning rather than an error if the resource limit change is not successful
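Example usage (a minimal sketch; RLIMIT_NOFILE is one of the resource module's RLIMIT constants):

import fiftyone.core.utils as fou

# Raise the soft limit on open file descriptors, warning rather than
# raising if the change fails
fou.set_resource_limit("RLIMIT_NOFILE", soft=4096, warn_on_failure=True)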
def split_frame_fields(fields): (source)

Splits the given fields into sample and frame fields.

Frame fields are those prefixed by "frames.", and this prefix is removed from the returned frame fields.

Parameters
fields: a field, iterable of fields, or dict mapping field names to new field names
Returns
a tuple of
  • sample_fields: a list or dict of sample fields
  • frame_fields: a list or dict of frame fields
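Example usage (per the "frames." prefix handling described above):

import fiftyone.core.utils as fou

fields = ["filepath", "tags", "frames.detections"]
sample_fields, frame_fields = fou.split_frame_fields(fields)

print(sample_fields)  # ['filepath', 'tags']
print(frame_fields)   # ['detections']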
def stream_objects(objects): (source)

Streams the iterable of objects to stdout via less.

The output can be interactively traversed via scrolling and can be terminated via keyboard interrupt.

Parameters
objects: an iterable of objects that can be printed via str(obj)
def timedelta_to_ms(td): (source)

Converts a datetime.timedelta to milliseconds.

Parameters
td: a datetime.timedelta
Returns
the float number of milliseconds
def timestamp_to_datetime(ts): (source)

Converts a timestamp (number of milliseconds since epoch) to a datetime.datetime.

Parameters
ts: a number of milliseconds since epoch
Returns
a datetime.datetime
def to_slug(name): (source)

Returns the URL-friendly slug for the given string.

The following strategy is used to generate slugs:

  • The characters A-Za-z0-9 are converted to lowercase
  • Whitespace and +_.- are converted to -
  • All other characters are omitted
  • All consecutive - characters are reduced to a single -
  • All leading and trailing - are stripped
  • Both the input name and the resulting string must be [1, 100] characters in length

Examples:

name                             | slug
---------------------------------+-----------------------
coco_2017                        | coco-2017
c+o+c+o 2-0-1-7                  | c-o-c-o-2-0-1-7
cat.DOG                          | cat-dog
---name----                      | name
Brian's #$&@ (Awesome?) Dataset! | brians-awesome-dataset
sPaM     aNd  EgGs               | spam-and-eggs
Parameters
name: a string
Returns
the slug string
Raises
ValueError: if the name is invalid
def validate_color(value): (source)

Validates that the given value is a valid css color name.

Parameters
value: a value
Raises
ValueError: if value is not a valid css color name
def validate_hex_color(value): (source)

Validates that the given value is a hex color string or css name.

Parameters
value: a value
Raises
ValueError: if value is not a hex color string

fos = (source)

Undocumented

logger = (source)

Undocumented

sync_task_executor = (source)

Undocumented

def _extract_kwargs(cls_or_fcn, kwargs): (source)

Undocumented

def _get_sync_task_executor(): (source)

Undocumented

def _is_docker(): (source)

Undocumented

def _is_podman(): (source)

Undocumented

def _report_progress_dt(progress, dt): (source)

Undocumented

def _report_progress_n(progress, n): (source)

Undocumented

def _sanitize_char(c): (source)

Undocumented

def _split_frame_fields_dict(fields): (source)

Undocumented

def _strip_comments(requirement_str): (source)

Undocumented

_HYPHEN_CHARS = (source)

Undocumented

Value
set(string.whitespace) | set('+_.-')
_NAME_LENGTH_RANGE: tuple[int, ...] = (source)

Undocumented

Value
(1, 100)
_REQUIREMENT_ERROR_SUFFIX: str = (source)

Undocumented

Value
'''If you think this error is inaccurate, you can set `fiftyone.config.requirement_error_level` to 1 (warning) or 2 (ignore).
See https://docs.voxel51.com/user_guide/config.html for details.'''
_SAFE_CHARS = (source)

Undocumented

Value
set(string.ascii_letters) | set(string.digits)