I just released a new version of Skeletoken, a package for editing tokenizers. New in this version is the ability to automatically adapt a model to an edited tokenizer.
For example, you can add a new token to your tokenizer, and then ask Skeletoken to add rows for the new token indices to the correct embedding tables.
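For context, here is roughly what that looks like when done by hand with the Hugging Face APIs (the model name and token below are just illustrative examples); the new feature takes care of this resizing step for you:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Add a new token to the tokenizer's vocabulary.
tokenizer.add_tokens(["<my_new_token>"])

# Grow the embedding matrix (and tied output head) so the
# new token index has a corresponding row.
model.resize_token_embeddings(len(tokenizer))
```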
Let me know what you think!