stphtan94117 / qwen-chat

This model is the 7B-parameter chat version of the Qwen large language model series.

  • Public
  • 70 runs
  • L40S
  • License

Input

Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:
import replicate

Run stphtan94117/qwen-chat using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "stphtan94117/qwen-chat:8e6414724f8109ca63dbe6645231622b966062f915f339722252b782bbaa9705",
    input={}  # fill in input fields from the model's schema
)
print(output)
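
Because this is a chat model, its schema most likely includes a text prompt field. The exact parameter names come from the model's schema; the sketch below assumes a "prompt" input (a hypothetical name) and joins the output, since language models on Replicate often return text in chunks:

import replicate

# "prompt" is an assumed input name; check the model's schema for the
# parameters this model actually accepts.
output = replicate.run(
    "stphtan94117/qwen-chat:8e6414724f8109ca63dbe6645231622b966062f915f339722252b782bbaa9705",
    input={"prompt": "Hello, who are you?"}
)

# Language models on Replicate often return an iterable of text chunks;
# joining them yields the full response.
print("".join(output))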

To learn more, take a look at the guide on getting started with Python.

Run time and cost

This model costs approximately $0.00098 to run on Replicate, or about 1020 runs per $1, but this varies depending on your inputs. It is also open source, and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 1 second.
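
As a quick sanity check on the quoted figures (the per-run price is an estimate, not a fixed rate):

cost_per_run = 0.00098          # approximate cost per run quoted above, in USD
print(round(1 / cost_per_run))  # ~1020 runs per $1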

Readme

Type the clear, bye, or 88 command to clear the conversation history.
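
How the model handles these commands is not documented beyond the line above. The following is a minimal client-side chat loop sketch, assuming a "prompt" input field (a hypothetical name) and keeping the conversation history locally so the same commands can clear it:

import replicate

MODEL = "stphtan94117/qwen-chat:8e6414724f8109ca63dbe6645231622b966062f915f339722252b782bbaa9705"
RESET_COMMANDS = {"clear", "bye", "88"}  # commands listed in this Readme

history = []  # conversation turns kept on the client side

while True:
    user_input = input("You: ").strip()
    if user_input in RESET_COMMANDS:
        history.clear()
        print("(history cleared)")
        continue
    # "prompt" is an assumed input name; whether the model accepts prior
    # turns as an input depends on its schema, so only the latest message
    # is sent here.
    output = replicate.run(MODEL, input={"prompt": user_input})
    reply = "".join(output)
    history.append((user_input, reply))
    print("Bot:", reply)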