#Mathematics

Prompt "Question": A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost? #Answer: $0.05 (5 cents) Comparison of: Devstral-Small-2505 Q5, Mistral-Small-24B Instruct 2501 Q4, Gemma-3-27b-it Q4, Qwen3-30B-A3B Q5, GLM-4-32B-0414 Q5 Model files: Devstral-Small-2505-Q5_K_M.gguf Mistral-Small-24B-Instruct-2501-Q4_K_M.gguf gemma-3-27b-it-q4_0.gguf Qwen3-30B-A3B-Q5_K_M.gguf GLM-4-32B-0414-Q5_K_M.gguf 🚀 Welcome to Quantized AI Benchmark! Please also visit and subscribe to my other channel, ( CW.Only.Channel ). This channel focuses on radio electronics and CW (Morse code) training. I've included some questions in my benchmarks about CW and electronics topics to help with learning and engagement."    / @cw.only.channel   This channel is your go-to source for fast, objective, and in-depth benchmarks comparing quantized AI models. As my experience, Q5 and Q4 are the best quantized levels for getting the job done. Q5 is very close to BF16 in terms of quality, and Q4 is also great, though slightly lower in performance than Q5. Depending on the size and weight of the AI model, I usually choose between Q5 and Q4. However, for very small models (like tiny ones), I might go for Q8 instead. My goal is to work exclusively with local models in this channel. All results, charts, opinions, and recommendations are based on my personal experiences. I'm an Electronics and Computer Engineer with academic background in multiple foreign languages. This unique mix of abilities gives me a special edge when it comes to benchmarking Large Language Models (LLMs). I don’t just test how well these models can chat or write — I put them through realistic, hands-on technical challenges. 🧪 My goal is to make sure every benchmark is accurate, meaningful, and relevant to real-world use. You’ll discover which models can genuinely help professionals in STEM fields tackle actual tasks — not just casual conversations. It’s all about real performance, backed by precision. 🎯 Whether you're a developer, engineer, student, or AI enthusiast, this channel will help you understand which LLMs are actually capable of handling technical work , and where they still fall short. Benchmark quantized AI models are on topics like: 🧠 Coding & Programming 💻 Sysadmin 🔋 Electronics 🌐 Languages 🔬 Physics & Mathematics 🌍 Geography & Cosmology 🌿 Life & People All LLM models can be downloaded from Hugging Face: https://huggingface.co/ To run these models, I use llama.cpp : https://github.com/ggml-org/llama.cpp What is llama.cpp? llama.cpp is an open-source project that lets you run powerful AI language models like LLaMA locally on your PC or Mac — no internet or GPU needed. It’s fast, lightweight, and supports model quantization for better performance on low-resource devices. Great for developers and AI enthusiasts who want to experiment with large language models offline. What is Quantization? Quantization in LLMs is a technique that reduces the model's size and speeds up inference by using lower-precision numbers (like 4-bit or 5-bit) instead of full-precision 32-bit floating points. This makes large language models like LLaMA run faster and use less memory, allowing them to work efficiently on personal computers, laptops, and even devices like Raspberry Pi. It’s a key reason projects like llama.cpp can run powerful AI models locally without needing a GPU. I'm also using Termux version 0.118.1 to run llama.cpp on my Android phone. 
I'm also using Termux version 0.118.1 to run llama.cpp on my Android phone. There are many websites and videos on YouTube that show how to install and run this software, and I may make videos on this topic in the future as well. (A rough sketch of such a setup is included at the end of this description.)

What is Termux? Termux is a powerful terminal emulator and Linux environment for Android. It lets you run command-line tools, scripts, and even compile code directly on your phone. You can install packages like Python, Git, SSH, and more, which makes it great for developers, students, or anyone who wants a portable Linux-like experience.

Contact me via email: [email protected]

-----------------------------
Your donation helps this channel grow. Thank you for your support! 🙏
ko-fi: https://www.ko-fi.com/cwonly
Monero (XMR): 89DVUbtefLhLNkLrttKjFja6R4dVZJorMdGf7gRU3ya9Je2ATFmcw82TihWpwbJPkZK29vr4iLbxfdHxXSBJ39Rq1a8NjHT
Bitcoin (BTC): 1JmtegSpf8Vt1nEuTF9jRqiiKg9CAB8CFP
Litecoin (LTC): MWpHTgSH3GbP9t3aQGbZZ4S3WZw4xC8r1w
Tether (USDT): 0x556801995557453efcc4ff47186689cba88fa6a5
Ethereum (ETH): 0x59e4e457090718354e66cbc9e0a4964350c790bb
Bitcoin Cash (BCH): 19sxLCsZXQKwMPKafCXB7uqjRH39JTnYaW
DonationAlerts: https://www.donationalerts.com/r/cw_only
-----------------------------

Hardware & Software:
CPU: AMD Ryzen 5 8600G
GPU: NVIDIA GeForce RTX 4090
RAM: 64 GB
Motherboard: ASUS TUF GAMING B650M-PLUS
Kernel: Linux 6.1.0-34-amd64
OS: Debian GNU/Linux 12
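Here is the rough Termux setup sketch mentioned above: an assumed sequence for building llama.cpp from source inside Termux on an Android phone. The package names come from Termux's pkg manager and the build steps follow llama.cpp's generic CMake instructions; treat them as illustrative, since the exact steps can vary by device and llama.cpp release, and the model file name is only a placeholder.

# Inside the Termux app: install build tools
pkg update && pkg upgrade
pkg install git cmake clang

# Fetch and build llama.cpp from source
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a small quantized GGUF that was copied onto the phone
./build/bin/llama-cli -m ~/models/tiny-model-Q8_0.gguf -p 'Hello' -n 64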
