Readme
Latent Consistency Models (LCM) offer a faster way to generate images with Stable Diffusion XL (SDXL). By distilling the original model, LCM cuts the number of inference steps from 25-50 down to just 4-8. Distillation is a training technique that teaches a new model to reproduce the outputs of a source model in fewer steps. The distillation process itself is resource-intensive, requiring large amounts of data, training time, and GPU compute, but once distilled, the model generates images far more cheaply at inference time.
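The snippet below is a minimal sketch of running an LCM-distilled SDXL model with the diffusers library. It assumes the publicly available `latent-consistency/lcm-sdxl` UNet checkpoint, the `stabilityai/stable-diffusion-xl-base-1.0` base pipeline, and a CUDA-capable GPU; the exact model IDs and recommended settings may differ for your setup.

```python
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, LCMScheduler

# Load the LCM-distilled UNet (assumed checkpoint: latent-consistency/lcm-sdxl)
unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Plug the distilled UNet into the standard SDXL pipeline
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# LCM needs its own scheduler instead of the default SDXL one
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Generate in 4 steps instead of the usual 25-50
image = pipe(
    prompt="a close-up photograph of a red fox in the snow",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("fox.png")
```

The guidance scale of 8.0 here follows the commonly suggested range for the fully distilled LCM UNet; if you use an LCM-LoRA adapter instead, guidance is typically lowered or disabled.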