Source: NousResearch/Obsidian-3B-V0.5 ✦ World's smallest multi-modal LLM
Source: meta-llama/Llama-2-7b-chat-hf ✦ Quant: TheBloke/Llama-2-7B-Chat-AWQ ✦ Intended for assistant-like chat
Source: ajibawa-2023/carl-llama-2-13b ✦ Quant: TheBloke/Carl-Llama-2-13B-AWQ ✦ Carl: A Therapist AI
Source: bavest/fin-llama-33b ✦ Quant: TheBloke/fin-llama-33B-AWQ ✦ Efficient Finetuning of Quantized LLMs for Finance
Source: migtissera/Synthia-13B-v1.2 ✦ Quant: TheBloke/Synthia-13B-v1.2-AWQ ✦ SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets
Source: teknium/Mistral-Trismegistus-7B ✦ Quant: TheBloke/Mistral-Trismegistus-7B-AWQ ✦ Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual
Source: HuggingFaceH4/zephyr-7b-beta ✦ Quant: TheBloke/zephyr-7B-beta-AWQ ✦ Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series
Source: haoranxu/ALMA-7B ✦ Quant: TheBloke/ALMA-7B-AWQ ✦ ALMA (Advanced Language Model-based trAnslator) is an LLM-based translation model
Source: Nexusflow/NexusRaven-13B ✦ Quant: TheBloke/NexusRaven-13B-AWQ ✦ Surpassing the state-of-the-art in open-source function calling LLMs
Source: Severian/ANIMA-Phi-Neptune-Mistral-7B ✦ Quant: TheBloke/ANIMA-Phi-Neptune-Mistral-7B-AWQ ✦ Biomimicry Enhanced LLM
Source: meta-math/MetaMath-Mistral-7B ✦ Quant: TheBloke/MetaMath-Mistral-7B-AWQ ✦ Bootstrap Your Own Mathematical Questions for Large Language Models
Source: PocketDoc/Dans-AdventurousWinds-Mk2-7b ✦ Quant: TheBloke/Dans-AdventurousWinds-Mk2-7B-AWQ ✦ This model is proficient in crafting text-based adventure games
Source: Intel/neural-chat-7b-v3-1 ✦ Quant: TheBloke/neural-chat-7B-v3-1-AWQ ✦ Fine-tuned model based on mistralai/Mistral-7B-v0.1
Source: gorilla-llm/gorilla-openfunctions-v1 ✦ Quant: TheBloke/gorilla-openfunctions-v1-AWQ ✦ Extends the Large Language Model (LLM) chat completion feature to formulate executable API calls given natural language instructions and API context
Source: monology/openinstruct-mistral-7b ✦ Quant: TheBloke/openinstruct-mistral-7B-AWQ ✦ Commercially usable 7B model, based on mistralai/Mistral-7B-v0.1 and finetuned on VMware/open-instruct
Source: TokenBender/evolvedSeeker_1_3 ✦ Quant: TheBloke/evolvedSeeker_1_3-AWQ ✦ A fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs
Source: berkeley-nest/Starling-LM-7B-alpha ✦ Quant: TheBloke/Starling-LM-7B-alpha-AWQ ✦ An open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)
Source: fblgit/juanako-7b-UNA ✦ Quant: TheBloke/juanako-7B-UNA-AWQ ✦ juanako uses UNA (Uniform Neural Alignment), a yet-to-be-published training technique that eases alignment between transformer layers
Source: chargoddard/loyal-piano-m7 ✦ Quant: TheBloke/loyal-piano-m7-AWQ ✦ Intended to be a roleplay-focused model with some smarts and good long-context recall
Source: fblgit/una-cybertron-7b-v2-bf16 ✦ Quant: TheBloke/una-cybertron-7B-v2-AWQ ✦ A 7B MistralAI-based model, the best in its series. Trained with SFT, DPO, and UNA (Unified Neural Alignment) on multiple datasets
Source: Pclanglais/MonadGPT ✦ Quant: TheBloke/MonadGPT-AWQ ✦ What would have happened if ChatGPT was invented in the 17th century?
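Every entry above points to a TheBloke AWQ quantization, so they can all be loaded the same way. Below is a minimal sketch using Hugging Face transformers, assuming the autoawq and accelerate packages are installed alongside a recent transformers release; the model choice (TheBloke/zephyr-7B-beta-AWQ) and generation settings are illustrative, and any AWQ repo from the list can be substituted.

```python
# Minimal sketch: load an AWQ-quantized model from the list and run a single
# generation. Assumes transformers >= 4.35 with autoawq and accelerate installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/zephyr-7B-beta-AWQ"  # any AWQ repo above works the same way

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

prompt = "Explain in one sentence what AWQ quantization does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For chat-tuned entries (e.g. Zephyr, Llama-2-Chat, Starling), wrapping the prompt with tokenizer.apply_chat_template before generation will generally give better results than a raw string.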