tomasmcm / llama-3-8b-instruct-gradient-4194k
Source: gradientai/Llama-3-8B-Instruct-Gradient-4194k ✦ Quant: solidrust/Llama-3-8B-Instruct-Gradient-4194k-AWQ ✦ Extending Llama-3 8B's context length from 8k to 4194k
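Every entry below is a Replicate model identifier, so any of them can be invoked through the Replicate Python client. A minimal sketch, assuming the replicate package is installed and REPLICATE_API_TOKEN is set; the input field names are an assumption, not the verified schema of these models (check each model page):

```python
# Minimal sketch: run one of the models listed here via the Replicate client.
# Assumes `pip install replicate` and REPLICATE_API_TOKEN in the environment;
# the input parameter names below are assumptions, not the verified schema.
import replicate

output = replicate.run(
    "tomasmcm/llama-3-8b-instruct-gradient-4194k",  # pin with owner/name:version if needed
    input={"prompt": "Summarize the plot of Hamlet in two sentences.", "max_new_tokens": 256},
)
# Replicate language models typically stream text chunks; join them for the full reply.
print("".join(output))
```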
tomasmcm / pandalyst-13b-v1.0
Source: pipizhao/Pandalyst_13B_V1.0 ✦ Quant: TheBloke/Pandalyst_13B_V1.0-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas
tomasmcm / sensei-7b-v1
Source: SciPhi/Sensei-7B-V1 ✦ Quant: TheBloke/Sensei-7B-V1-AWQ ✦ Sensei is specialized in performing RAG over detailed web search results
tomasmcm / whiterabbitneo-13b
Source: WhiteRabbitNeo/WhiteRabbitNeo-13B-v1 ✦ Quant: TheBloke/WhiteRabbitNeo-13B-AWQ ✦ WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity
tomasmcm / digital-socrates-13b
Source: allenai/digital-socrates-13b ✦ Quant: TheBloke/digital-socrates-13B-AWQ ✦ Digital Socrates is an open-source, automatic explanation-critiquing model
tomasmcm / towerinstruct-7b-v0.1
Source: Unbabel/TowerInstruct-7B-v0.1 ✦ Quant: TheBloke/TowerInstruct-7B-v0.1-AWQ ✦ This model is trained to handle several translation-related tasks, such as general machine translation, grammatical error correction, and paraphrase generation
tomasmcm / neuronovo-7b-v0.3
Source: Neuronovo/neuronovo-7B-v0.3 ✦ Quant: TheBloke/neuronovo-7B-v0.3-AWQ ✦ An advanced, fine-tuned large language model, initially based on CultriX/MistralTrix-v1
tomasmcm / pandalyst-7b-v1.2
Source: pipizhao/Pandalyst-7B-V1.2 ✦ Quant: TheBloke/Pandalyst-7B-v1.2-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas
tomasmcm / tinyllama-1.1b-chat-v1.0
Source: TinyLlama/TinyLlama-1.1B-Chat-v1.0 ✦ Quant: TheBloke/TinyLlama-1.1B-Chat-v1.0-AWQ ✦ The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
tomasmcm / docsgpt-7b-mistral
Source: Arc53/docsgpt-7b-mistral ✦ Quant: TheBloke/docsgpt-7B-mistral-AWQ ✦ DocsGPT is optimized for Documentation (RAG), fine-tuned for providing answers that are based on context
tomasmcm / sam-7b
Source: SuperAGI/SAM ✦ Quant: TheBloke/SAM-AWQ ✦ SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size
tomasmcm / prometheus-13b-v1.0
Source: kaist-ai/prometheus-13b-v1.0 ✦ Quant: TheBloke/prometheus-13B-v1.0-AWQ ✦ An alternative to GPT-4 when evaluating LLMs & Reward models for RLHF
tomasmcm / openbuddy-zephyr-7b-v14.1
Source: OpenBuddy/openbuddy-zephyr-7b-v14.1 ✦ Quant: TheBloke/openbuddy-zephyr-7B-v14.1-AWQ ✦ Open Multilingual Chatbot
tomasmcm / solar-10.7b-instruct-v1.0
Source: upstage/SOLAR-10.7B-Instruct-v1.0 ✦ Quant: TheBloke/SOLAR-10.7B-Instruct-v1.0-AWQ ✦ Elevating Performance with Upstage Depth UP Scaling!
tomasmcm / v1olet-marcoroni-go-bruins-merge-7b
Source: v1olet/v1olet_marcoroni-go-bruins-merge-7B ✦ Quant: TheBloke/v1olet_marcoroni-go-bruins-merge-7B-AWQ ✦ A SLERP merge of AIDC-ai-business/Marcoroni-7B-v3 and rwitz/go-bruins-v2
tomasmcm / mistral-7b-instruct-v0.2
Source: mistralai/Mistral-7B-Instruct-v0.2 ✦ Quant: TheBloke/Mistral-7B-Instruct-v0.2-AWQ ✦ Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1
tomasmcm / metamath-cybertron-starling
Source: Q-bert/MetaMath-Cybertron-Starling ✦ Quant: TheBloke/MetaMath-Cybertron-Starling-AWQ ✦ A SLERP merge of Q-bert/MetaMath-Cybertron and berkeley-nest/Starling-LM-7B-alpha
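Two of the entries above are SLERP merges. For reference, a generic sketch of spherical linear interpolation between two weight tensors of the same shape; this illustrates the technique, not the exact recipe used for these checkpoints:

```python
# SLERP (spherical linear interpolation) between two weight tensors: interpolate
# along the great circle between the flattened weight vectors rather than linearly.
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    a, b = w0.ravel(), w1.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))  # angle between weight vectors
    if omega < eps:  # near-parallel weights: fall back to linear interpolation
        return (1.0 - t) * w0 + t * w1
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * w0 + (np.sin(t * omega) / so) * w1

# Example: halfway merge of two same-shape layers (random stand-ins here).
w_merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)
```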
tomasmcm / claude2-alpaca-13b
Source: umd-zhou-lab/claude2-alpaca-13B ✦ Quant: TheBloke/claude2-alpaca-13B-AWQ ✦ This model is trained by fine-tuning llama-2 with claude2 alpaca data
tomasmcm / go-bruins-v2
Source: rwitz/go-bruins-v2 ✦ Quant: TheBloke/go-bruins-v2-AWQ ✦ Designed to push the boundaries of NLP applications, offering unparalleled performance in generating human-like text
tomasmcm / monad-gpt
Source: Pclanglais/MonadGPT ✦ Quant: TheBloke/MonadGPT-AWQ ✦ What would have happened if ChatGPT was invented in the 17th century?
tomasmcm / una-cybertron-7b-v2
Source: fblgit/una-cybertron-7b-v2-bf16 ✦ Quant: TheBloke/una-cybertron-7B-v2-AWQ ✦ A 7B MistralAI-based model, the best in its series. Trained with SFT, DPO, and UNA (Unified Neural Alignment) on multiple datasets
tomasmcm / loyal-piano-m7
Source: chargoddard/loyal-piano-m7 ✦ Quant: TheBloke/loyal-piano-m7-AWQ ✦ Intended to be a roleplay-focused model with some smarts and good long-context recall
tomasmcm / juanako-7b-una
Source: fblgit/juanako-7b-UNA ✦ Quant: TheBloke/juanako-7B-UNA-AWQ ✦ juanako uses UNA (Uniform Neural Alignment), a yet-to-be-published training technique that eases alignment between transformer layers
tomasmcm / starling-lm-7b-alpha
Source: berkeley-nest/Starling-LM-7B-alpha ✦ Quant: TheBloke/Starling-LM-7B-alpha-AWQ ✦ An open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)
tomasmcm / openinstruct-mistral-7b
Source: monology/openinstruct-mistral-7b ✦ Quant: TheBloke/openinstruct-mistral-7B-AWQ ✦ Commercially-usable 7B model, based on mistralai/Mistral-7B-v0.1 and finetuned on VMware/open-instruct
tomasmcm / evolved-seeker-1.3b
Source: TokenBender/evolvedSeeker_1_3 ✦ Quant: TheBloke/evolvedSeeker_1_3-AWQ ✦ A fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs
tomasmcm / gorilla-openfunctions-v1
Source: gorilla-llm/gorilla-openfunctions-v1 ✦ Quant: TheBloke/gorilla-openfunctions-v1-AWQ ✦ Extends the Large Language Model (LLM) chat-completion feature to formulate executable API calls given natural language instructions and API context
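A hedged sketch of composing such a function-calling prompt: the user question and JSON function definitions are packed into a single prompt, and the model replies with an executable call. The template mirrors the examples on the model card but should be treated as an assumption, and the get_weather function is hypothetical:

```python
# Sketch of an openfunctions-style prompt; template and function are assumptions.
import json

functions = [
    {
        "name": "get_weather",  # hypothetical API, for illustration only
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

question = "What's the weather like in Lisbon right now?"
prompt = f"USER: <<question>> {question} <<function>> {json.dumps(functions)}\nASSISTANT: "
# Sent through the model, this should yield something like: get_weather(city="Lisbon")
```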
tomasmcm / obsidian-3b-v0.5
Source: NousResearch/Obsidian-3B-V0.5 ✦ World's smallest multi-modal LLM
tomasmcm / dans-adventurouswinds-mk2-7b
Source: PocketDoc/Dans-AdventurousWinds-Mk2-7b ✦ Quant: TheBloke/Dans-AdventurousWinds-Mk2-7B-AWQ ✦ This model is proficient in crafting text-based adventure games
tomasmcm / neural-chat-7b-v3-1
Source: Intel/neural-chat-7b-v3-1 ✦ Quant: TheBloke/neural-chat-7B-v3-1-AWQ ✦ Fine-tuned model based on mistralai/Mistral-7B-v0.1
tomasmcm / llama-2-7b-chat-hf
Source: meta-llama/Llama-2-7b-chat-hf ✦ Quant: TheBloke/Llama-2-7B-Chat-AWQ ✦ Intended for assistant-like chat
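Llama 2 chat checkpoints expect Meta's [INST] prompt template; a minimal sketch of building a single-turn prompt (the system block is optional):

```python
# Build a single-turn prompt in the documented Llama 2 chat format.
def llama2_chat_prompt(system: str, user: str) -> str:
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You are a helpful assistant.",
    "Explain AWQ quantization in one sentence.",
)
```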
tomasmcm / metamath-mistral-7b
Source: meta-math/MetaMath-Mistral-7B ✦ Quant: TheBloke/MetaMath-Mistral-7B-AWQ ✦ Bootstrap Your Own Mathematical Questions for Large Language Models
tomasmcm / anima-phi-neptune-mistral-7b
Source: Severian/ANIMA-Phi-Neptune-Mistral-7B ✦ Quant: TheBloke/ANIMA-Phi-Neptune-Mistral-7B-AWQ ✦ Biomimicry Enhanced LLM
tomasmcm / nexusraven-13b
Source: Nexusflow/NexusRaven-13B ✦ Quant: TheBloke/NexusRaven-13B-AWQ ✦ Surpassing the state-of-the-art in open-source function calling LLMs
tomasmcm / alma-7b
Source: haoranxu/ALMA-7B ✦ Quant: TheBloke/ALMA-7B-AWQ ✦ ALMA (Advanced Language Model-based trAnslator) is an LLM-based translation model
tomasmcm / zephyr-7b-beta
Source: HuggingFaceH4/zephyr-7b-beta ✦ Quant: TheBloke/zephyr-7B-beta-AWQ ✦ Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series
tomasmcm / mistral-trismegistus-7b
Source: teknium/Mistral-Trismegistus-7B ✦ Quant: TheBloke/Mistral-Trismegistus-7B-AWQ ✦ Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual
tomasmcm / fin-llama-33b
Source: bavest/fin-llama-33b ✦ Quant: TheBloke/fin-llama-33B-AWQ ✦ Efficient Finetuning of Quantized LLMs for Finance
tomasmcm / synthia-13b-v1.2
Source: migtissera/Synthia-13B-v1.2 ✦ Quant: TheBloke/Synthia-13B-v1.2-AWQ ✦ SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets
tomasmcm / carl-llama-2-13b
Source: ajibawa-2023/carl-llama-2-13b ✦ Quant: TheBloke/Carl-Llama-2-13B-AWQ ✦ Carl: A Therapist AI
tomasmcm / llamaguard-7b
Source: llamas-community/LlamaGuard-7b ✦ Quant: TheBloke/LlamaGuard-7B-AWQ ✦ Llama-Guard is a 7B parameter Llama 2-based input-output safeguard model