creatorrr / flan-t5-large-squad-qag

Use lmqg/flan-t5-large-squad-qag for question-answer generation.

  • Public
  • 338 runs
  • T4

Input

Install the Replicate Python client:
pip install replicate

Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:
import replicate

Run creatorrr/flan-t5-large-squad-qag using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "creatorrr/flan-t5-large-squad-qag:9f0aa2a35a13e213b9aa08be495ab8592bf4ca1b5eb7e513207011eed40294a8",
    input={}  # fill in the input fields per the model's schema
)

print(output)

To learn more, take a look at the guide on getting started with Python.
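The model emits its generated question-answer pairs as plain text. Below is a minimal post-processing sketch, assuming the lmqg convention of "question: ..., answer: ..." pairs joined by " | " — the helper name parse_qag_pairs and the sample string are hypothetical, so check this model's actual output against its schema before relying on it:

```python
# Hypothetical helper: split a QAG output string into (question, answer) pairs.
# Assumes pairs of the form "question: ..., answer: ..." joined by " | ".
def parse_qag_pairs(raw: str) -> list[tuple[str, str]]:
    pairs = []
    for chunk in raw.split(" | "):
        chunk = chunk.strip()
        if not chunk.startswith("question:"):
            continue
        # Split on the first ", answer:" so commas inside the question survive.
        question, _, answer = chunk.partition(", answer:")
        pairs.append((question[len("question:"):].strip(), answer.strip()))
    return pairs

# Hypothetical sample output string, in the assumed format.
example = ("question: Who wrote Hamlet?, answer: William Shakespeare"
           " | question: When was it written?, answer: around 1600")
print(parse_qag_pairs(example))
```

Splitting on the first ", answer:" rather than every comma keeps questions that themselves contain commas intact.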


Run time and cost

This model costs approximately $0.0015 per run on Replicate (roughly 666 runs per $1), though the cost varies with your inputs. It is also open source, and you can run it on your own machine with Docker.

This model runs on Nvidia T4 GPU hardware. Predictions typically complete within 7 seconds.