jigsawstack/object-detection

Recognise objects within an image with high accuracy.


Run time and cost

This model runs on CPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

🎯 JigsawStack Object Detection – Replicate Wrapper

This model wraps the JigsawStack Object Detection API.

Detect and highlight objects in images with high accuracy using JigsawStack's Object Detection API. This model on Replicate supports generic detection, prompt-based targeting, and optional annotated image output, all powered by a fast and scalable vision backend.


🧠 What It Does

You provide an image (via URL or file storage key), and the model returns:

- Detected objects with labels and coordinates
- Optionally, an annotated image with bounding boxes drawn

Detection can also be prompt-guided (e.g., only detect "cat" or "helmet"). A sketch of a possible response follows below.
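The shape below is a minimal illustration of what a response might contain, based on the outputs described above; the exact field names (objects, label, bounds, annotated_image) are assumptions for illustration, not the documented JigsawStack schema.

```python
# Hypothetical response shape, for illustration only.
# Field names are assumptions, not the documented schema.
example_response = {
    "objects": [
        {"label": "cat", "bounds": {"x": 120, "y": 64, "width": 200, "height": 180}},
        {"label": "helmet", "bounds": {"x": 410, "y": 30, "width": 90, "height": 95}},
    ],
    # Present only when annotated_image is enabled; a URL or base64 string
    # depending on return_type.
    "annotated_image": "https://example.com/annotated.png",
}
```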


🔑 Inputs

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| url | string | ❌ No | Public URL to an image file |
| file_store_key | string | ❌ No | Key of an image stored on JigsawStack File Storage |
| prompts | list of strings | ❌ No | Optional array of prompts (e.g. ["dog", "car"]) for targeted detection |
| features | list of enums | ❌ No | Features to enable. Options: object_detection, gui. At least one required |
| annotated_image | boolean | ❌ No | If true, returns the image with bounding boxes drawn |
| return_type | string | ❌ No | url or base64 image format (default: url) |
| api_key | string | ✅ Yes | Your JigsawStack API key |

📌 You must provide either url or file_store_key, not both.
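Below is a minimal sketch of calling this wrapper through the Replicate Python client. The model identifier and input names come from this page, while the image URL, prompt values, and API key are placeholders; depending on your Replicate client version you may also need to pin a specific model version hash.

```python
import replicate

# Sketch of a prompt-guided detection run; replace the placeholder
# image URL and API key with your own values.
output = replicate.run(
    "jigsawstack/object-detection",
    input={
        "url": "https://example.com/street.jpg",  # or file_store_key, not both
        "prompts": ["car", "helmet"],             # optional: restrict detection to these objects
        "features": ["object_detection"],         # enable at least one feature
        "annotated_image": True,                  # return the image with bounding boxes drawn
        "return_type": "url",                     # "url" or "base64"
        "api_key": "your-jigsawstack-api-key",
    },
)
print(output)
```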