Install and Run DeepSeek-V3 LLM Locally on GPU using llama.cpp (build from source)

#llm #machinelearning #deepseek #llamacpp #llama

It takes a significant amount of time and energy to create these free video tutorials. You can support my efforts in the following ways:
Buy me a Coffee: https://www.buymeacoffee.com/Aleksand...
PayPal: https://www.paypal.me/AleksandarHaber
Patreon: https://www.patreon.com/user?u=320801...
You can also press the "Thanks" (YouTube Dollar) button.

In this tutorial, we explain how to install and run a (quantized) version of DeepSeek-V3 on a local computer using llama.cpp. To be able to use GPU resources, we first explain how to build llama.cpp from source with the CUDA and C++ compilers. llama.cpp is a powerful and simple-to-use program for running large language models on local computers. We then install and run a quantized version of DeepSeek-V3 on a local machine.

Prerequisites:
- Disk space: 200 GB for the smallest model and more than 400 GB for the larger models.
- A significant amount of RAM. In our case, we have 48 GB of RAM and the model inference is relatively slow; the inference speed can probably be improved by adding more RAM.
- A decent GPU. We performed the tests on an NVIDIA 3090 GPU with 24 GB of VRAM. A better GPU will definitely increase the inference speed.
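The video itself builds llama.cpp from source and runs the model through its command-line tools. As a rough illustration of the same idea, the sketch below uses the llama-cpp-python bindings instead (an assumption on my part, not the workflow shown in the video) to load a quantized GGUF file and offload layers to the GPU; the model filename is a placeholder, not a file shipped with the tutorial.

```python
# Minimal sketch (assumption): sanity-checking a quantized GGUF model on the GPU
# via the llama-cpp-python bindings rather than the llama.cpp CLI built in the video.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-V3-Q2_K.gguf",  # hypothetical path to a quantized GGUF file
    n_gpu_layers=-1,   # offload as many layers as fit into VRAM
    n_ctx=4096,        # context window; lower it if memory runs out
    verbose=False,
)

output = llm(
    "Explain what quantization does to a large language model.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

With a model of this size, most layers will stay in system RAM regardless of the offload setting, which is consistent with the relatively slow inference reported above.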
