PrometheusV1
PrometheusV1 is presumed to be the first full-rank finetune of Playground v2.5, developed by the creator of the Proteus model. This text-to-image generation model has been adapted specifically to make it more accessible to the open-source community.
Key Features and Considerations
Presumed First Full-Rank Finetune of Playground v2.5:
- Complete parameter update of Playground v2.5 architecture
- Unique approach to fine-tuning this particular base model
Enhanced Accessibility:
- Custom sampling methods removed through brute-force techniques
- Designed to be more compatible with standard open-source tools and workflows
- Post-processing applied to ensure backward compatibility with most SDXL LoRAs and tools
Output Characteristics:
- Aims to provide a balance between consistency and diversity in outputs
- May exhibit some stylistic tendencies inherited from the training process
Training Approach:
- Utilizes the extensive Proteus datasets, comprising over 400,000 images
- Brute-force-at-scale training methodology
- Focused on maintaining model capabilities while increasing compatibility
Advanced Custom CLIP Integration:
- Incorporates a meticulously trained custom CLIP model
- Steadily developed over an extended period
- Further fine-tuned for specific qualities in Proteus and Prometheus
- Estimated to contribute 90% of the model’s performance improvements
- Requires a clip skip setting of 2 for optimal performance
About PrometheusV1
PrometheusV1 represents a significant effort to make advanced text-to-image generation more accessible to the open-source community. Built upon the Playground v2.5 architecture, it has undergone a full-rank finetune using an extensive dataset of over 400,000 images from the Proteus collection. A key aspect of its development was the removal of custom sampling methods through brute-force techniques at scale, allowing the model to work more seamlessly with standard open-source tools and pipelines. Additionally, PrometheusV1 has been made backward compatible with most SDXL LoRAs and tools. This approach aims to balance the model's performance capabilities with wider compatibility and ease of use. Users can expect outputs that reflect the model's intensive training on the large Proteus dataset while benefiting from improved interoperability with common open-source frameworks and the existing SDXL ecosystem.
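The SDXL-compatibility claim suggests the checkpoint can be loaded through a standard SDXL pipeline and combined with existing SDXL LoRAs. A minimal sketch using Hugging Face diffusers, assuming a local .safetensors checkpoint; the file paths and the helper name are illustrative, not part of the model release:

```python
def load_prometheus_with_lora(checkpoint_path, lora_path):
    """Sketch: load the checkpoint as a plain SDXL pipeline and attach
    a standard SDXL LoRA, relying on the advertised compatibility.
    Both paths are placeholders."""
    # Lazy imports so the helper can be inspected without GPU deps installed.
    import torch
    from diffusers import StableDiffusionXLPipeline

    # Load the single-file checkpoint as a standard SDXL pipeline.
    pipe = StableDiffusionXLPipeline.from_single_file(
        checkpoint_path, torch_dtype=torch.float16
    )
    # Attach an off-the-shelf SDXL LoRA.
    pipe.load_lora_weights(lora_path)
    return pipe  # move to GPU with pipe.to("cuda") as needed
```

Because the custom sampling methods were trained out, no special scheduler wiring should be required beyond what any SDXL checkpoint needs.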
Training Details
- Base Model: Playground v2.5
- Finetune Type: Full rank (all layers updated)
- Training Dataset: Over 400,000 images from the Proteus datasets, extensively curated and processed
- Training Approach: Brute force at scale, focused on removing custom sampling methods while maintaining model capabilities
- Fine-tuning Techniques: Standard optimization methods compatible with open-source tools
- Special Processing: Post-processing applied for SDXL LoRA and tool compatibility
Recommended Settings
Clip Skip: 2
CFG Scale: 7
Steps: 25 - 50
Sampler: DPM++ 2M SDE
Scheduler: Karras
Resolution: 1024x1024
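For scripted use, the settings above map onto common diffusers parameter names (guidance_scale, num_inference_steps, clip_skip, width/height). A small, purely illustrative helper that bundles them as call kwargs:

```python
# Recommended defaults from this README, keyed by diffusers' SDXL
# pipeline parameter names. The helper name is an assumption for
# illustration, not part of the model release.
RECOMMENDED = {
    "clip_skip": 2,
    "guidance_scale": 7.0,
    "num_inference_steps": 30,  # anywhere in the 25-50 range
    "width": 1024,
    "height": 1024,
}

def generation_kwargs(prompt, overrides=None):
    """Merge the recommended defaults with any per-call overrides."""
    kwargs = {"prompt": prompt}
    kwargs.update(RECOMMENDED)
    if overrides:
        kwargs.update(overrides)
    return kwargs

# Usage: pipe(**generation_kwargs("a lighthouse at dusk"))
```

To match the DPM++ 2M SDE sampler with the Karras schedule in diffusers, the equivalent scheduler configuration is `DPMSolverMultistepScheduler` with `algorithm_type="sde-dpmsolver++"` and `use_karras_sigmas=True`.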