Report output

Machine learning models are powerful, but they're also unmoderated. Sometimes a model's output can be incorrect or harmful: it might be broken or buggy, raise copyright issues, or contain inappropriate content.

Whatever the reason, please use this form to let us know when something is wrong with the output of a model you've run.

We'll investigate the reported output and take appropriate action. We may also flag the output to the model's author if we think they should be aware of it.

Your report

Output

Large language models (LLMs) are artificial intelligence systems that have been trained on massive amounts of text data to generate human-like language output. They are capable of understanding and generating natural language, and can be used for a wide range of tasks such as language translation, summarization, question answering, and even creative writing. LLMs have become increasingly popular in recent years due to their ability to perform complex language tasks with high accuracy and speed. However, they also raise ethical concerns around issues such as bias and privacy.

Give us a few details about how this output is unsafe or broken.