casia-iva-lab / fastsam

Fast Segment Anything

  • Public
  • 25.1K runs
  • T4
  • GitHub
  • Paper
  • License

Input

input_image
*file

Input image

string

Choose a model

Default: "FastSAM-x"

integer

Choose image size

Default: 640

number

IoU threshold for filtering the annotations

Default: 0.7

string

Use a text prompt, e.g. "a black dog"

number

Object confidence threshold

Default: 0.25

boolean

Draw high-resolution segmentation masks

Default: true

string

Box prompt in [x,y,w,h] format

Default: "[0,0,0,0]"

string

Point prompt in [[x1,y1],[x2,y2]] format

Default: "[[0,0]]"

string

Point labels, e.g. [1,0] — 0: background, 1: foreground

Default: "[0]"

boolean

Draw the edges of the masks

Default: false

boolean

Improve mask quality using morphologyEx

Default: false
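The box, point, and label prompts above are passed to the model as JSON-style strings. A minimal sketch of how such strings might be built and validated before submission — the helper names here are hypothetical, not part of this model's API:

```python
import json


def box_prompt(x, y, w, h):
    # Encode a box as the "[x,y,w,h]" string format described above.
    return json.dumps([x, y, w, h], separators=(",", ":"))


def point_prompt(points, labels):
    # Encode point coordinates and per-point labels
    # (1: foreground, 0: background) as compact JSON strings.
    assert len(points) == len(labels), "one label per point"
    assert all(l in (0, 1) for l in labels), "labels must be 0 or 1"
    return (
        json.dumps([list(p) for p in points], separators=(",", ":")),
        json.dumps(labels, separators=(",", ":")),
    )


box = box_prompt(50, 40, 200, 160)                      # "[50,40,200,160]"
pts, lbls = point_prompt([(120, 90), (30, 30)], [1, 0])  # "[[120,90],[30,30]]", "[1,0]"
print(box, pts, lbls)
```

Using `json.dumps` rather than hand-formatting keeps the strings parseable and matches the default values shown above (e.g. `"[0,0,0,0]"` and `"[[0,0]]"`).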

Output

output

Run time and cost

This model costs approximately $0.0042 to run on Replicate, or 238 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 19 seconds. The predict time for this model varies significantly based on the inputs.
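The two pricing figures quoted above are consistent with each other: at roughly $0.0042 per run, a dollar buys about 238 runs. A quick sanity check using only the numbers from this page (not live pricing):

```python
# Approximate cost per run quoted on this page, in USD.
cost_per_run = 0.0042

# Runs obtainable for one dollar at that rate.
runs_per_dollar = 1 / cost_per_run
print(round(runs_per_dollar))  # ≈ 238
```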

Readme

Fast Segment Anything

[Figure: FastSAM speed comparison]

The Fast Segment Anything Model (FastSAM) is a CNN-based Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves performance comparable to SAM at 50× higher run-time speed.

[Figure: FastSAM design]

Citing FastSAM

If you find this project useful for your research, please consider citing the following BibTeX entry.

@misc{zhao2023fast,
      title={Fast Segment Anything},
      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
      year={2023},
      eprint={2306.12156},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}