Image generation models like Stability AI's SDXL and Black Forest Labs' Flux include a safety checker that screens generated images for nudity, violence, and other unsafe content.
To protect users, we enable the safety checker for web predictions on the SDXL base model, the Flux base model, and all fine-tunes derived from either one.
The safety checker can be overly restrictive, sometimes producing false positives that flag safe content as unsafe. For those cases, you can disable the safety checker when running the model through the API. This gives you the flexibility to substitute a custom safety-checking model or a third-party moderation service in your workflow.
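For example, here's a minimal sketch using the Replicate Python client. It assumes your model exposes a boolean `disable_safety_checker` input, as the SDXL and Flux base models do; check your model's input schema to confirm before relying on it.

```python
import replicate  # requires REPLICATE_API_TOKEN in your environment

# Run SDXL with the built-in safety checker disabled. In production,
# pin an exact version with the "owner/name:version" form.
output = replicate.run(
    "stability-ai/sdxl",
    input={
        "prompt": "an astronaut riding a rainbow unicorn",
        "disable_safety_checker": True,
    },
)
print(output)
```

From there, you can pass the generated image through your own safety-checking model or a third-party moderation service before showing it to users.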
For more details on allowed use, see the terms of service.