[SDXL] Allow SDXL LoRA to be run with less than 16GB of VRAM by patrickvonplaten · Pull Request #4470 · huggingface/diffusers

As the issues:

- LoRA training for sdxl on diffusers CUDA out of memory? #4368
- Gradient checkpointing not applied to UNet mid_block #4377
- how can i run lora training with sdxl1.0 on google colab? #4...
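Issue #4377 above concerns gradient checkpointing being applied to the UNet's down and up blocks but not its mid_block, which forfeits part of the memory savings. As a minimal sketch of the pattern involved (a toy stand-in model, not the actual diffusers `UNet2DConditionModel`; the module and method names below are illustrative assumptions), checkpointing must wrap every block, including the middle one:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class TinyBlock(nn.Module):
    """A stand-in for a UNet down/mid/up block."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)


class TinyUNetLike(nn.Module):
    """Toy model: checkpointing must cover down, mid, AND up blocks.

    The bug referenced in #4377 was the mid block being skipped, so its
    activations were stored in full during the forward pass.
    """

    def __init__(self, dim: int = 32):
        super().__init__()
        self.down = TinyBlock(dim)
        self.mid = TinyBlock(dim)  # the block analogous to UNet mid_block
        self.up = TinyBlock(dim)
        self.gradient_checkpointing = False

    def enable_gradient_checkpointing(self):
        self.gradient_checkpointing = True

    def forward(self, x):
        for block in (self.down, self.mid, self.up):
            if self.gradient_checkpointing and self.training:
                # Recompute activations during backward instead of storing them.
                x = checkpoint(block, x, use_reentrant=False)
            else:
                x = block(x)
        return x


model = TinyUNetLike()
model.enable_gradient_checkpointing()
model.train()
out = model(torch.randn(4, 32))
out.sum().backward()  # gradients flow through all checkpointed blocks
```

In the real diffusers UNet the same effect is obtained by calling `unet.enable_gradient_checkpointing()`; the point of the fix is that the flag must actually be honored by every sub-block during the forward pass.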