jackieyaung/qwen-image-edit-2509-gguf

Cost-efficient, reproducible image editing with full diffusion parameter control.


Run time and cost

This model runs on Nvidia L40S GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

Qwen-Image-Edit-2509 (GGUF, q4_k_m)

This model is a GGUF-based deployment of Qwen-Image-Edit-2509, using the q4_k_m quantized weights from QuantStack.

It is designed for cost-efficient, reproducible image editing with fully exposed diffusion parameters, making it suitable for engineering workflows, experimentation, and batch processing.

Model Characteristics

  • Base model: Qwen-Image-Edit-2509
  • Format: GGUF (q4_k_m)
  • Tasks: image editing, inpainting, multi-image conditioning
  • Hardware target: single-GPU inference with reduced memory footprint
  • Focus: parameter transparency and reproducibility rather than black-box generation

Input Images

  • image (required): primary input image
  • image2, image3 (optional): additional reference images for multi-image editing or conditioning
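
A minimal sketch of passing these inputs through the Replicate Python client is shown below. The model identifier string, the "prompt" field name, and all sample values are assumptions for illustration; check this page's API schema for the exact field names.

```python
# Illustrative sketch (not the authoritative schema): edit a primary image with
# one optional reference image via the Replicate Python client. The "prompt"
# field name and all values here are assumptions.
import replicate

output = replicate.run(
    "jackieyaung/qwen-image-edit-2509-gguf",  # append ":<version>" if your setup requires it
    input={
        "image": open("photo.jpg", "rb"),       # required primary image
        "image2": open("reference.jpg", "rb"),  # optional reference for conditioning
        "prompt": "soft studio lighting, clean neutral background",
        "steps": 20,
        "cfg_scale": 4.0,
        "denoise": 0.6,
        "seed": 42,
    },
)
print(output)  # typically a URL or file-like object, depending on client version
```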

Parameter Explanation

This model exposes core diffusion parameters that are often hidden or constrained in managed image editing APIs. These parameters directly affect quality, edit strength, reproducibility, latency, and cost.

Prompting style

This model follows a diffusion-style conditioning paradigm. Prompts should describe visual attributes and styles (similar to Stable Diffusion) rather than the natural-language instructions used with general-purpose multimodal models. For example, prefer "warm golden-hour light, shallow depth of field, film grain" over "please make this photo look warmer and more cinematic".

steps (integer)

Controls the number of diffusion sampling steps.

  • Higher values:
    • Potentially better visual quality
    • Increased latency and cost
  • Lower values:
    • Faster inference
    • Reduced detail or stability

Typical usage:

  • Quick previews or batch runs: low steps
  • Final outputs or higher fidelity edits: higher steps
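
As a rough illustration, the same edit can be run once at a low step count for a preview and again at a higher step count for the final pass. The step counts, prompt, and model call below are assumptions, not tuned recommendations.

```python
# Illustrative sketch: low steps for a quick preview, higher steps for the
# final render. Values are assumptions, not defaults.
import replicate

def edit(steps):
    return replicate.run(
        "jackieyaung/qwen-image-edit-2509-gguf",
        input={
            "image": open("photo.jpg", "rb"),
            "prompt": "warm golden-hour lighting, soft film grain",
            "steps": steps,
            "seed": 123,  # fixed so the two runs differ only in step count
        },
    )

preview = edit(8)   # faster and cheaper, rougher detail
final = edit(30)    # slower, higher fidelity
```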

cfg_scale (number)

Classifier-Free Guidance scale, controlling how strongly the model follows the prompt.

  • Lower values:
    • More image-driven results
    • Subtler prompt influence
  • Higher values:
    • Stronger prompt adherence
    • Risk of over-constraining or artifacts

This parameter is useful when balancing prompt intent vs. original image structure.
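
One way to find a workable balance is a small sweep over cfg_scale with everything else held fixed. The guidance values, prompt, and seed below are illustrative assumptions.

```python
# Illustrative sketch: sweep cfg_scale with a fixed seed so differences in the
# outputs come only from guidance strength. Values are assumptions.
import replicate

for cfg in (2.0, 4.0, 7.5):
    out = replicate.run(
        "jackieyaung/qwen-image-edit-2509-gguf",
        input={
            "image": open("photo.jpg", "rb"),
            "prompt": "overcast sky, muted colors, documentary style",
            "cfg_scale": cfg,
            "seed": 7,
        },
    )
    print(cfg, out)  # lower cfg: more image-driven; higher cfg: more prompt-driven
```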


denoise (number, 0–1)

Controls the strength of the edit applied to the input image.

  • Low denoise:
    • Minimal changes
    • Preserves original image structure
  • High denoise:
    • More aggressive edits
    • Allows larger visual changes

This is the primary control for edit intensity in image-to-image workflows.
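
The same comparison pattern works when calibrating edit intensity: run a conservative and an aggressive denoise value on the same input. The two values below are only illustrative endpoints, not recommendations.

```python
# Illustrative sketch: compare a conservative and an aggressive denoise value
# on the same input. Prompt and values are assumptions.
import replicate

def edit_with_denoise(denoise):
    return replicate.run(
        "jackieyaung/qwen-image-edit-2509-gguf",
        input={
            "image": open("photo.jpg", "rb"),
            "prompt": "turn the jacket deep red, keep everything else unchanged",
            "denoise": denoise,
            "seed": 11,  # fixed to isolate the effect of denoise
        },
    )

subtle = edit_with_denoise(0.3)      # preserves most of the original structure
aggressive = edit_with_denoise(0.8)  # allows larger visual changes
```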


seed (integer)

Random seed for diffusion sampling.

  • Same seed + same inputs → reproducible output
  • -1 enables random seeding

This is critical for:

  • A/B testing
  • Regression testing
  • Iterative refinement with controlled changes
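
A simple regression-style check built on fixed seeds might look like the sketch below. It assumes the run output is (or can be converted to) a downloadable URL, which may differ across client versions; adapt the comparison to the actual output type.

```python
# Illustrative sketch: with identical inputs and a fixed seed, two runs should
# produce the same image, which enables a simple regression check.
# Assumes the output can be treated as a URL string; adapt to the actual type.
import hashlib
import urllib.request
import replicate

def run_fixed():
    return replicate.run(
        "jackieyaung/qwen-image-edit-2509-gguf",
        input={
            "image": open("photo.jpg", "rb"),
            "prompt": "clean product shot, neutral background",
            "steps": 20,
            "denoise": 0.5,
            "seed": 1234,  # fixed seed -> reproducible output
        },
    )

def sha256_of(url):
    return hashlib.sha256(urllib.request.urlopen(url).read()).hexdigest()

a, b = run_fixed(), run_fixed()
assert sha256_of(str(a)) == sha256_of(str(b)), "outputs diverged despite a fixed seed"
```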

negative_prompt (string)

Specifies attributes the model should avoid (e.g. low quality, distortion).

This is applied consistently across runs and can help stabilize output quality in batch scenarios.
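
In a batch setting, a single negative_prompt can be applied uniformly across all inputs. The folder layout, prompts, and seed below are assumptions for illustration.

```python
# Illustrative sketch: apply one negative_prompt across a folder of inputs to
# keep batch outputs consistent. Paths and prompt text are assumptions.
import pathlib
import replicate

NEGATIVE = "low quality, distortion, artifacts, oversaturation"

results = {}
for path in sorted(pathlib.Path("inputs").glob("*.jpg")):
    results[path.name] = replicate.run(
        "jackieyaung/qwen-image-edit-2509-gguf",
        input={
            "image": open(path, "rb"),
            "prompt": "bright, evenly lit, catalog style",
            "negative_prompt": NEGATIVE,
            "seed": 99,  # fixed to reduce run-to-run variation across the batch
        },
    )
```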


Intended Use

This model is well-suited for:

  • Engineering-driven image editing pipelines
  • Reproducible experiments and comparisons
  • Batch generation with controlled variability
  • Cost-sensitive deployments

It may be less suitable for:

  • One-click consumer use cases
  • Fully automated prompt-only generation without parameter tuning

Notes

  • The model runs on GGUF-quantized weights to reduce memory usage and inference cost.
  • Parameter defaults are conservative and can be tuned based on quality, latency, and cost requirements.
  • Cold start latency may occur on the first request.