
CLIP text tokenizer from https://github.com/openai/CLIP.

Copyright 2017-2025, Voxel51, Inc.

Class SimpleTokenizer Byte pair encoding (BPE) tokenizer that converts text to and from CLIP token IDs (see the usage sketch below).
Function basic_clean Fixes text encoding issues, unescapes HTML entities, and strips surrounding whitespace.
Function bytes_to_unicode Returns a mapping from utf-8 bytes to printable unicode strings.
Function default_bpe Returns the path to the default BPE vocabulary file bundled with the package.
Function get_pairs Returns the set of adjacent symbol pairs in a word.
Function whitespace_clean Collapses runs of whitespace into single spaces and strips the text.
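
A minimal usage sketch, assuming the upstream CLIP API from https://github.com/openai/CLIP: a SimpleTokenizer() constructed with the bundled BPE vocabulary, plus encode()/decode() methods:

    tokenizer = SimpleTokenizer()

    # encode() cleans and lowercases the text, then returns a list of
    # integer BPE token IDs
    ids = tokenizer.encode("a photo of a dog")
    print(ids)  # a list of ints, one per BPE token

    # decode() maps token IDs back to the cleaned, lowercased text
    print(tokenizer.decode(ids))  # roughly recovers the input text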
def basic_clean(text): (source)

Fixes text encoding issues via ftfy, unescapes HTML entities, and strips surrounding whitespace.

@lru_cache()
def bytes_to_unicode(): (source)

Returns a mapping from utf-8 bytes to corresponding unicode strings.

The reversible BPE codes work on unicode strings. This means you need a large number of unicode characters in your vocab if you want to avoid UNKs.

When you're at something like a 10B token dataset, you end up needing around 5K characters for decent coverage. This is a significant percentage of your normal, say, 32K BPE vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings.

The mapping also avoids the whitespace/control characters that the BPE code barfs on.
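
To make the mapping concrete, here is a small sketch of how the returned table behaves; the exact shifted code points are an implementation detail of the upstream CLIP code:

    byte_encoder = bytes_to_unicode()

    # every possible byte value gets its own printable unicode character
    assert len(byte_encoder) == 256

    # printable ASCII bytes map to themselves...
    assert byte_encoder[ord("A")] == "A"

    # ...while whitespace/control bytes are shifted to unused code points
    print(byte_encoder[ord(" ")])  # "Ġ" (U+0120) in the upstream code

    # the table renders any utf-8 byte sequence as a reversible unicode string
    byte_decoder = {v: k for k, v in byte_encoder.items()}
    encoded = "".join(byte_encoder[b] for b in "héllo".encode("utf-8"))
    assert bytes(byte_decoder[c] for c in encoded).decode("utf-8") == "héllo"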

@lru_cache()
def default_bpe(): (source)

Returns the path to the default BPE vocabulary file bundled with the package.

def get_pairs(word): (source)

Returns the set of adjacent symbol pairs in a word.

A word is represented as a tuple of symbols (symbols being variable-length strings).
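
A quick example, assuming a word that has already been split into BPE symbols:

    word = ("h", "e", "ll", "o")
    print(get_pairs(word))
    # {('h', 'e'), ('e', 'll'), ('ll', 'o')} -- a set, so order may vary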

def whitespace_clean(text): (source)

Collapses runs of whitespace into single spaces and strips leading/trailing whitespace.
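
Together, basic_clean and whitespace_clean implement the normalization that SimpleTokenizer applies to text before BPE encoding. A short sketch; the HTML-entity behavior assumes basic_clean unescapes entities via html.unescape, as in the upstream CLIP code:

    text = "  A&amp;B   photo \n of a   dog  "
    text = basic_clean(text)       # fix encoding issues, unescape HTML entities
    text = whitespace_clean(text)  # collapse whitespace runs to single spaces
    print(text)                    # "A&B photo of a dog"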