tomasmcm / llama-3-8b-instruct-gradient-4194k

Source: gradientai/Llama-3-8B-Instruct-Gradient-4194k ✦ Quant: solidrust/Llama-3-8B-Instruct-Gradient-4194k-AWQ ✦ Extending Llama-3 8B's context length from 8k to 4194k

138 runs
Public
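
Every model in this list is deployed on Replicate and exposes a broadly similar text-generation interface. Below is a minimal sketch of invoking one through the Replicate Python client; the input field names (prompt, max_new_tokens, temperature) are assumptions based on typical vLLM-backed deployments, so check each model's schema on its Replicate page.

    # Minimal sketch: run one of these models via the Replicate Python client.
    # Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
    # Input names below are assumptions; verify against the model's schema.
    import replicate

    output = replicate.run(
        "tomasmcm/llama-3-8b-instruct-gradient-4194k",  # any slug from this list;
        # you may need to pin a version hash, e.g. "tomasmcm/<model>:<version>"
        input={
            "prompt": "Summarise AWQ quantization in one sentence.",
            "max_new_tokens": 128,  # assumed parameter name
            "temperature": 0.7,     # assumed parameter name
        },
    )
    # Text models on Replicate usually stream tokens; join them into one string.
    print("".join(output))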

tomasmcm / pandalyst-13b-v1.0

Source: pipizhao/Pandalyst_13B_V1.0 ✦ Quant: TheBloke/Pandalyst_13B_V1.0-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas

17 runs
Public

tomasmcm / sensei-7b-v1

Source: SciPhi/Sensei-7B-V1 ✦ Quant: TheBloke/Sensei-7B-V1-AWQ ✦ Sensei is specialized in performing RAG over detailed web search results

34 runs
Public

tomasmcm / whiterabbitneo-13b

Source: WhiteRabbitNeo/WhiteRabbitNeo-13B-v1 ✦ Quant: TheBloke/WhiteRabbitNeo-13B-AWQ ✦ WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity

115 runs
Public

tomasmcm / digital-socrates-13b

Source: allenai/digital-socrates-13b ✦ Quant: TheBloke/digital-socrates-13B-AWQ ✦ Digital Socrates is an open-source, automatic explanation-critiquing model

17 runs
Public

tomasmcm / towerinstruct-7b-v0.1

Source: Unbabel/TowerInstruct-7B-v0.1 ✦ Quant: TheBloke/TowerInstruct-7B-v0.1-AWQ ✦ This model is trained to handle several translation-related tasks, such as general machine translation, grammatical error correction, and paraphrase generation

186 runs
Public

tomasmcm / neuronovo-7b-v0.3

Source: Neuronovo/neuronovo-7B-v0.3 ✦ Quant: TheBloke/neuronovo-7B-v0.3-AWQ ✦ An advanced, fine-tuned large language model, initially based on CultriX/MistralTrix-v1

37 runs
Public

tomasmcm / pandalyst-7b-v1.2

Source: pipizhao/Pandalyst-7B-V1.2 ✦ Quant: TheBloke/Pandalyst-7B-v1.2-AWQ ✦ Pandalyst: A large language model for mastering data analysis using pandas

18 runs
Public

tomasmcm / tinyllama-1.1b-chat-v1.0

Source: TinyLlama/TinyLlama-1.1B-Chat-v1.0 ✦ Quant: TheBloke/TinyLlama-1.1B-Chat-v1.0-AWQ ✦ The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.

102 runs
Public

tomasmcm / docsgpt-7b-mistral

Source: Arc53/docsgpt-7b-mistral ✦ Quant: TheBloke/docsgpt-7B-mistral-AWQ ✦ DocsGPT is optimized for Documentation (RAG), fine-tuned for providing answers that are based on context

74 runs
Public

tomasmcm / sam-7b

Source: SuperAGI/SAM ✦ Quant: TheBloke/SAM-AWQ ✦ SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size

76 runs
Public

tomasmcm / prometheus-13b-v1.0

Source: kaist-ai/prometheus-13b-v1.0 ✦ Quant: TheBloke/prometheus-13B-v1.0-AWQ ✦ An alternative to GPT-4 when evaluating LLMs & Reward models for RLHF

34.5K runs
Public
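
Prometheus is a fine-grained evaluator rather than a chat model: it takes an instruction, a candidate response, a reference answer, and a 1-5 score rubric, then emits feedback followed by a final score. Below is a hedged sketch of how that might look against the Replicate deployment; the template is a paraphrase of the upstream format, not a verbatim copy, and the input name is an assumption.

    # Hedged sketch of Prometheus-style evaluation. The model is documented to
    # end its output with "[RESULT] <score>"; the prompt below paraphrases the
    # upstream template and should be checked against the model card.
    import replicate

    prompt = (
        "###Task Description: Given an instruction, a response to evaluate, a "
        "reference answer, and a score rubric, write feedback and then an "
        "integer score from 1 to 5, ending with \"[RESULT] <score>\".\n"
        "###Instruction: Summarise photosynthesis in one sentence.\n"
        "###Response to evaluate: Plants turn sunlight into food.\n"
        "###Reference answer: Plants convert light, water, and CO2 into "
        "glucose and oxygen.\n"
        "###Score rubric: 5 = complete and accurate; 1 = incorrect.\n"
        "###Feedback:"
    )
    output = "".join(replicate.run("tomasmcm/prometheus-13b-v1.0",
                                   input={"prompt": prompt}))
    score = output.split("[RESULT]")[-1].strip()  # trailing 1-5 score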

tomasmcm / openbuddy-zephyr-7b-v14.1

Source: OpenBuddy/openbuddy-zephyr-7b-v14.1 ✦ Quant: TheBloke/openbuddy-zephyr-7B-v14.1-AWQ ✦ Open Multilingual Chatbot

29 runs
Public

tomasmcm / solar-10.7b-instruct-v1.0

Source: upstage/SOLAR-10.7B-Instruct-v1.0 ✦ Quant: TheBloke/SOLAR-10.7B-Instruct-v1.0-AWQ ✦ Elevating Performance with Upstage Depth UP Scaling!

4K runs
Public

tomasmcm / v1olet-marcoroni-go-bruins-merge-7b

Source: v1olet/v1olet_marcoroni-go-bruins-merge-7B ✦ Quant: TheBloke/v1olet_marcoroni-go-bruins-merge-7B-AWQ ✦ Merge AIDC-ai-business/Marcoroni-7B-v3 and rwitz/go-bruins-v2 using slerp merge

70 runs
Public
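
This entry (and metamath-cybertron-starling further down) was produced by slerp merging: interpolating two models' weight tensors along the arc between them rather than a straight line, which preserves weight norms better than plain averaging. A minimal numpy sketch of the underlying formula, not the actual merge tooling used for these models:

    # Minimal slerp (spherical linear interpolation) sketch in numpy.
    # Real merges (e.g. with mergekit) apply this per tensor, often with
    # per-layer interpolation factors; this shows only the core formula.
    import numpy as np

    def slerp(w1: np.ndarray, w2: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
        a, b = w1.ravel(), w2.ravel()
        cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
        theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
        if theta < eps:  # near-parallel tensors: fall back to linear interpolation
            return (1 - t) * w1 + t * w2
        s = np.sin(theta)
        return (np.sin((1 - t) * theta) / s) * w1 + (np.sin(t * theta) / s) * w2

    merged = slerp(np.random.randn(4, 4), np.random.randn(4, 4), t=0.5)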

tomasmcm / mistral-7b-instruct-v0.2

Source: mistralai/Mistral-7B-Instruct-v0.2 ✦ Quant: TheBloke/Mistral-7B-Instruct-v0.2-AWQ ✦ Improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1

6.6K runs
Public

tomasmcm / metamath-cybertron-starling

Source: Q-bert/MetaMath-Cybertron-Starling ✦ Quant: TheBloke/MetaMath-Cybertron-Starling-AWQ ✦ Merge Q-bert/MetaMath-Cybertron and berkeley-nest/Starling-LM-7B-alpha using slerp merge

182 runs
Public

tomasmcm / claude2-alpaca-13b

Source: umd-zhou-lab/claude2-alpaca-13B ✦ Quant: TheBloke/claude2-alpaca-13B-AWQ ✦ This model is trained by fine-tuning llama-2 with claude2 alpaca data

3.8K runs
Public

tomasmcm / go-bruins-v2

Source: rwitz/go-bruins-v2 ✦ Quant: TheBloke/go-bruins-v2-AWQ ✦ Designed to push the boundaries of NLP applications, offering unparalleled performance in generating human-like text

218 runs
Public

tomasmcm / monad-gpt

Source: Pclanglais/MonadGPT ✦ Quant: TheBloke/MonadGPT-AWQ ✦ What would have happened if ChatGPT was invented in the 17th century?

466 runs
Public

tomasmcm / una-cybertron-7b-v2

Source: fblgit/una-cybertron-7b-v2-bf16 ✦ Quant: TheBloke/una-cybertron-7B-v2-AWQ ✦ A 7B MistralAI-based model, the best in its series. Trained with SFT, DPO, and UNA (Unified Neural Alignment) on multiple datasets

85 runs
Public

tomasmcm / loyal-piano-m7

Source: chargoddard/loyal-piano-m7 ✦ Quant: TheBloke/loyal-piano-m7-AWQ ✦ Intended to be a roleplay-focused model with some smarts and good long-context recall

41 runs
Public

tomasmcm / juanako-7b-una

Source: fblgit/juanako-7b-UNA ✦ Quant: TheBloke/juanako-7B-UNA-AWQ ✦ juanako uses UNA (Uniform Neural Alignment), a yet-to-be-published training technique that eases alignment between transformer layers

38 runs
Public

tomasmcm / starling-lm-7b-alpha

Source: berkeley-nest/Starling-LM-7B-alpha ✦ Quant: TheBloke/Starling-LM-7B-alpha-AWQ ✦ An open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF)

48.8K runs
Public

tomasmcm / openinstruct-mistral-7b

Source: monology/openinstruct-mistral-7b ✦ Quant: TheBloke/openinstruct-mistral-7B-AWQ ✦ Commercially-usable 7B model, based on mistralai/Mistral-7B-v0.1 and finetuned on VMware/open-instruct

293 runs
Public

tomasmcm / evolved-seeker-1.3b

Source: TokenBender/evolvedSeeker_1_3 ✦ Quant: TheBloke/evolvedSeeker_1_3-AWQ ✦ A fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs

26 runs
Public

tomasmcm / gorilla-openfunctions-v1

Source: gorilla-llm/gorilla-openfunctions-v1 ✦ Quant: TheBloke/gorilla-openfunctions-v1-AWQ ✦ Extends Large Language Model (LLM) chat completion to formulate executable API calls given natural language instructions and API context

415 runs
Public
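
gorilla-openfunctions-v1 turns a natural-language query plus JSON function definitions into a concrete call. The <<question>>/<<function>> prompt layout below is drawn from the upstream Gorilla README; treat it, the input field name, and the example function as assumptions to verify against the model page.

    # Hedged sketch for gorilla-openfunctions-v1. `get_weather` is a
    # hypothetical function schema used purely for illustration.
    import json
    import replicate

    functions = [{
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]
    prompt = (
        "USER: <<question>> What's the weather in Lisbon? "
        f"<<function>> {json.dumps(functions)}\nASSISTANT: "
    )
    output = "".join(replicate.run("tomasmcm/gorilla-openfunctions-v1",
                                   input={"prompt": prompt}))
    print(output)  # expected shape: get_weather(city="Lisbon")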

tomasmcm / obsidian-3b-v0.5

Source: NousResearch/Obsidian-3B-V0.5 ✦ World's smallest multi-modal LLM

115 runs
Public

tomasmcm / dans-adventurouswinds-mk2-7b

Source: PocketDoc/Dans-AdventurousWinds-Mk2-7b ✦ Quant: TheBloke/Dans-AdventurousWinds-Mk2-7B-AWQ ✦ This model is proficient in crafting text-based adventure games

129 runs
Public

tomasmcm / neural-chat-7b-v3-1

Source: Intel/neural-chat-7b-v3-1 ✦ Quant: TheBloke/neural-chat-7B-v3-1-AWQ ✦ Fine-tuned model based on mistralai/Mistral-7B-v0.1

719 runs
Public

tomasmcm / llama-2-7b-chat-hf

Source: meta-llama/Llama-2-7b-chat-hf ✦ Quant: TheBloke/Llama-2-7B-Chat-AWQ ✦ Intended for assistant-like chat

71 runs
Public

tomasmcm / metamath-mistral-7b

Source: meta-math/MetaMath-Mistral-7B ✦ Quant: TheBloke/MetaMath-Mistral-7B-AWQ ✦ Bootstrap Your Own Mathematical Questions for Large Language Models

378 runs
Public

tomasmcm / anima-phi-neptune-mistral-7b

Source: Severian/ANIMA-Phi-Neptune-Mistral-7B ✦ Quant: TheBloke/ANIMA-Phi-Neptune-Mistral-7B-AWQ ✦ Biomimicry Enhanced LLM

20 runs
Public

tomasmcm / nexusraven-13b

Source: Nexusflow/NexusRaven-13B ✦ Quant: TheBloke/NexusRaven-13B-AWQ ✦ Surpassing the state-of-the-art in open-source function calling LLMs

53 runs
Public

tomasmcm / alma-7b

Source: haoranxu/ALMA-7B ✦ Quant: TheBloke/ALMA-7B-AWQ ✦ ALMA (Advanced Language Model-based trAnslator) is an LLM-based translation model

86 runs
Public

tomasmcm / zephyr-7b-beta

Source: HuggingFaceH4/zephyr-7b-beta ✦ Quant: TheBloke/zephyr-7B-beta-AWQ ✦ Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series

188.4K runs
Public

tomasmcm / mistral-trismegistus-7b

Source: teknium/Mistral-Trismegistus-7B ✦ Quant: TheBloke/Mistral-Trismegistus-7B-AWQ ✦ Mistral Trismegistus is a model made for people interested in the esoteric, occult, and spiritual

596 runs
Public

tomasmcm / fin-llama-33b

Source: bavest/fin-llama-33b ✦ Quant: TheBloke/fin-llama-33B-AWQ ✦ Efficient Finetuning of Quantized LLMs for Finance

294 runs
Public

tomasmcm / synthia-13b-v1.2

Source: migtissera/Synthia-13B-v1.2 ✦ Quant: TheBloke/Synthia-13B-v1.2-AWQ ✦ SynthIA (Synthetic Intelligent Agent) is a Llama-2-13B model trained on Orca-style datasets

575 runs
Public

tomasmcm / carl-llama-2-13b

Source: ajibawa-2023/carl-llama-2-13b ✦ Quant: TheBloke/Carl-Llama-2-13B-AWQ ✦ Carl: A Therapist AI

543 runs
Public

tomasmcm / llamaguard-7b

Source: llamas-community/LlamaGuard-7b ✦ Quant: TheBloke/LlamaGuard-7B-AWQ ✦ Llama-Guard is a 7B parameter Llama 2-based input-output safeguard model

592K runs
Public
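
LlamaGuard is used differently from the chat models above: it classifies a conversation against a safety policy and answers "safe", or "unsafe" plus the violated category codes, on separate lines. A hedged sketch follows; the template here is a heavily abbreviated stand-in for the full taxonomy prompt in the Llama Guard model card.

    # Hedged sketch for LlamaGuard moderation. The real model-card prompt
    # enumerates the full safety taxonomy (O1-O6); this template is an
    # abbreviated assumption kept short for illustration.
    import replicate

    template = (
        "[INST] Task: Check if there is unsafe content in the conversation "
        "below according to our safety policy.\n\n"
        "<BEGIN CONVERSATION>\nUser: {message}\n<END CONVERSATION>\n\n"
        "Provide your safety assessment: first line 'safe' or 'unsafe', "
        "second line the violated categories if unsafe. [/INST]"
    )
    output = "".join(replicate.run(
        "tomasmcm/llamaguard-7b",
        input={"prompt": template.format(message="How do I pick a lock?")},
    ))
    verdict = output.strip().splitlines()[0]  # "safe" or "unsafe"
    print(verdict)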