This is an implementation of ChatTTS, a generative text-to-speech model, as a Cog model. Cog packages machine learning models into standard, production-ready containers.
Cold starts apply: you'll get a fast response if the model is warm and already running, and a slower response if it's cold and starting up.
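
For reference, a Cog model wraps the inference code in a `predict.py` with a `Predictor` class. Below is a minimal sketch of what such a predictor could look like for ChatTTS; it is not the actual implementation in this repo, and it assumes the ChatTTS Python package exposes `Chat.load()` and `Chat.infer()` (these method names have changed between ChatTTS releases).

```python
# predict.py -- illustrative sketch only; the real predictor in this repo may differ.
import numpy as np
import soundfile as sf
import ChatTTS
from cog import BasePredictor, Input, Path


class Predictor(BasePredictor):
    def setup(self) -> None:
        # Load ChatTTS weights once when the container starts (the "warm" state).
        self.chat = ChatTTS.Chat()
        self.chat.load()  # assumption: load() per recent ChatTTS versions; older ones used load_models()

    def predict(
        self,
        text: str = Input(description="Text to synthesize"),
    ) -> Path:
        # ChatTTS returns a list of waveforms sampled at 24 kHz.
        wavs = self.chat.infer([text])
        out_path = "/tmp/output.wav"
        sf.write(out_path, np.asarray(wavs[0]).squeeze(), 24000)
        return Path(out_path)
```

With Cog installed, a prediction can then be run locally with something like `cog predict -i text="Hello from ChatTTS"`, which builds the container, runs `setup()`, and returns the generated audio file.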