Readme
Model: https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.4-GPTQ
Fast inference thanks to https://github.com/turboderp/exllama
Test out fast inference with ExLlama and 4-bit quantization!
This model costs approximately $0.0014 to run on Replicate, or 714 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 1 second.
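Since the model is packaged as an open-source container, one way to run it locally is via Docker and the standard Cog HTTP prediction API. The sketch below assumes a CUDA-capable machine and uses an illustrative image name; the actual image reference and input fields may differ.

```shell
# Start the container locally (image name is illustrative, not the real one)
docker run -d -p 5000:5000 --gpus=all r8.im/example/airoboros-7b-gpt4-1.4-gptq

# Send a prediction request to the Cog HTTP API
curl -s http://localhost:5000/predictions \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"input": {"prompt": "Explain 4-bit quantization in one sentence."}}'
```

The same model can also be called through Replicate's hosted API instead of running the container yourself.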