CLIP text tokenizer from https://github.com/openai/CLIP.
Kind | Name | Description
Class | SimpleTokenizer | Undocumented
Function | basic_clean | Undocumented
Function | bytes_to_unicode | Returns a list of utf-8 bytes and a corresponding list of unicode strings.
Function | default_bpe | Undocumented
Function | get_pairs | Returns the set of symbol pairs in a word.
Function | whitespace_clean | Undocumented
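The names above correspond to simple_tokenizer.py in the linked repo. As a quick orientation, a round trip through the tokenizer looks roughly like the sketch below; the import path is an assumption and depends on where the module is vendored in your project.

```python
# Import path is hypothetical; adjust to wherever simple_tokenizer.py lives.
from simple_tokenizer import SimpleTokenizer

tokenizer = SimpleTokenizer()  # loads the default BPE vocabulary via default_bpe()

# encode() runs basic_clean/whitespace_clean, lowercases the text, then
# applies byte-level BPE and returns a list of integer token ids.
tokens = tokenizer.encode("a photo of a cat")

# decode() inverts the byte-to-unicode mapping and turns the "</w>"
# end-of-word markers back into spaces.
print(tokenizer.decode(tokens))  # -> "a photo of a cat "
```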
Returns a list of utf-8 bytes and a corresponding list of unicode strings.
The reversible bpe codes work on unicode strings.
This means you need a large number of unicode characters in your vocab if you want to avoid UNKs.
When you're at something like a 10B-token dataset you end up needing around 5K unicode characters for decent coverage, which is a significant percentage of a normal, say, 32K BPE vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings, built so that nothing maps to the whitespace/control characters the bpe code barfs on.
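Concretely, the lookup table covers all 256 byte values: bytes that already correspond to printable, non-whitespace latin-1 characters keep their own codepoint, and every remaining byte is shifted to an unused codepoint above 255. A sketch of the construction, closely following the implementation in the linked repo:

```python
def bytes_to_unicode():
    """Build a reversible mapping from the 256 byte values to unicode chars."""
    # Printable, non-whitespace latin-1 ranges keep their own codepoints.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = bs[:]
    n = 0
    # Every remaining byte (whitespace/control characters) is remapped to an
    # unused codepoint above 255, so no token ever contains a raw control char.
    for b in range(2 ** 8):
        if b not in bs:
            bs.append(b)
            cs.append(2 ** 8 + n)
            n += 1
    return dict(zip(bs, [chr(c) for c in cs]))
```

Because the mapping is a bijection over the 256 byte values, the decoder simply inverts the dict, which keeps byte-level BPE fully reversible.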