ThirdAI is an early-stage startup dedicated to democratizing AI through algorithmic and software innovations that enable training and deploying large-scale neural networks on commodity CPU hardware. The core component of ThirdAI's efficient model training is our proprietary BOLT engine, a new deep learning framework built from scratch with sparsity as a first-class design principle. On certain tasks, ThirdAI's sparse deep learning models can even outperform the analogous dense architectures on GPUs in both training time and inference latency.

In this talk, we introduce our new distributed data-parallel engine, powered by Ray Core, which scales ThirdAI models to terabyte-scale datasets and billion-parameter models. We discuss how Ray enabled us to quickly build an industry-grade distributed training solution on top of BOLT with key features such as fault tolerance, multiple modes of communication, and seamless scalability. In addition, we highlight the unique scientific challenges that arise from performing distributed deep learning training on CPUs. Specifically, the unprecedented efficiency of ThirdAI's BOLT models leaves us with a considerable communication bottleneck, which we address through novel gradient compression techniques (a simplified illustrative sketch of this kind of setup follows the description below).

Finally, we present results from our rigorous evaluation of distributed BOLT on the terabyte-sized Criteo dataset, where we observe near-linear scaling up to 200 nodes and training times 42x faster than TensorFlow-CPU while using only one-sixth of the computing resources.

About Anyscale
---
Anyscale is the AI Application Platform for developing, running, and scaling AI.
https://www.anyscale.com/
If you're interested in a managed Ray service, check out: https://www.anyscale.com/signup/

About Ray
---
Ray is the most popular open source framework for scaling and productionizing AI workloads. From Generative AI and LLMs to computer vision, Ray powers the world's most ambitious AI workloads.
https://docs.ray.io/en/latest/

#llm #machinelearning #ray #deeplearning #distributedsystems #python #genai
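To make the data-parallel setup described above more concrete, here is a minimal sketch of gradient averaging across Ray Core actors with top-k sparsification, one common form of gradient compression. This is not ThirdAI's actual code: the `Worker` actor, the `topk_compress` helper, and the toy NumPy linear model are hypothetical stand-ins for a BOLT model replica and ThirdAI's proprietary compression scheme; only the Ray Core calls (`ray.init`, `@ray.remote`, `.remote`, `ray.get`) are real API.

```python
# Minimal sketch: data-parallel training with Ray Core actors and
# top-k gradient sparsification. Toy stand-ins, not ThirdAI's BOLT code.
import numpy as np
import ray

ray.init()


def topk_compress(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a flat gradient (hypothetical scheme)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]


@ray.remote
class Worker:
    """One data-parallel worker holding a local data shard and a model replica."""

    def __init__(self, dim: int, seed: int):
        rng = np.random.default_rng(seed)
        self.X = rng.normal(size=(1024, dim))  # toy local data shard
        self.y = rng.normal(size=1024)

    def compute_gradient(self, w: np.ndarray, k: int):
        """Return a compressed (indices, values) gradient for a least-squares loss."""
        residual = self.X @ w - self.y
        grad = self.X.T @ residual / len(self.y)
        return topk_compress(grad, k)


dim, k, lr = 4096, 256, 0.1
workers = [Worker.remote(dim, seed=i) for i in range(4)]
w = np.zeros(dim)

for step in range(10):
    # Gather compressed gradients from all workers in parallel.
    results = ray.get([wk.compute_gradient.remote(w, k) for wk in workers])
    # Decompress, average, and apply a plain SGD update on the driver.
    avg = np.zeros(dim)
    for idx, vals in results:
        avg[idx] += vals
    avg /= len(workers)
    w -= lr * avg
    print(f"step {step}: ||w|| = {np.linalg.norm(w):.4f}")
```

The point of the sketch is the communication pattern: each worker ships only k of its gradient entries per step instead of the full dense vector, which is the kind of trade-off gradient compression makes when CPU compute is fast enough that network transfer becomes the bottleneck.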