Eric Niebler - Working with Asynchrony Generally and AMA at CppEurope 2022
Watch Eric Niebler do a live coding session on the implementation of async patterns. The code is available here: https://godbolt.org/z/dnrabsbdq

Check out news about CppEurope here: https://cppeurope.com/

ABOUT ERIC NIEBLER
Eric is a long-time member of the ISO C++ Standardization Committee and is probably best known for his work bringing ranges support to the C++20 Standard Library. He specializes in modern C++ library design, authoring several Boost libraries and the popular range-v3 library for computing with ranges and range pipelines. He currently works at NVIDIA, where he is trying to build abstractions for asynchrony that are fit for standardization. ericniebler.com

ABOUT THE SESSION
Asynchrony is increasingly important across the software industry, yet C++ lacks a standard async programming model or a comprehensive set of asynchronous algorithms. In this talk, Eric will describe the approach to asynchrony taken by standard Executors, currently targeting C++26. The talk will focus on how to use the new facilities, with glimpses under the covers to see how things work.

"Asynchrony" means many things. It means concurrency (e.g., thread pools) and parallelism (e.g., GPUs). It means parameterizing a computation on where it should execute. It means the ability to chain work, propagating values and errors to user-specified continuations. It means forking and joining async control flow without blocking. It requires guarantees that async work can make forward progress. It means a standard way to request that a computation stop early, and a way to propagate the "I have now stopped" notification back to the caller. And, since this is C++, it means doing all of that with bare-metal performance and seamless integration with C++20 coroutines.

The async programming model espoused by the recent standard Executors papers satisfies these constraints while also enabling a full suite of generic asynchronous algorithms. C++26 will likely come with algorithms such as `then` for chaining work, `timeout` and `stop_when` for cancelling work after certain criteria are met, `when_all` for launching concurrent work and joining it (without blocking), and `sync_wait` for blocking until work has finished. The core abstraction of the Executors design, known as "Sender/Receiver", is to asynchronous algorithms what the STL's "Iterator" abstraction is to sequence algorithms: an enabling abstraction that makes reusable algorithms possible. With the containers and algorithms of the STL, the C++ Standard Library took a giant step forward in power and reusability. With C++26 Executors, the Standard Library takes the next step.
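To make the Sender/Receiver style concrete, here is a minimal sketch of chaining work with `then`, forking and joining with `when_all`, and blocking for a result with `sync_wait`. It assumes the stdexec reference implementation of the Executors proposal (https://github.com/NVIDIA/stdexec); the stdexec:: and exec:: namespaces, the header paths, and exec::static_thread_pool are that library's spellings rather than the final standard's, and this is not the code from the godbolt link above.

#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cstdio>
#include <utility>

int main() {
  // A scheduler says *where* work runs; here, a small thread pool.
  exec::static_thread_pool pool{4};
  auto sched = pool.get_scheduler();

  // Senders describe *what* runs. schedule() begins a chain on the pool,
  // and then() attaches a continuation that receives the previous value.
  auto six   = stdexec::schedule(sched) | stdexec::then([] { return 6; });
  auto seven = stdexec::schedule(sched) | stdexec::then([] { return 7; });

  // when_all() forks the two chains and joins them without blocking;
  // the joined sender completes with both values at once.
  auto both = stdexec::when_all(std::move(six), std::move(seven))
            | stdexec::then([](int a, int b) { return a * b; });

  // Nothing has run yet: a sender is a lazy description of work.
  // sync_wait() submits the work and blocks until it finishes,
  // returning an optional tuple of results.
  auto [answer] = stdexec::sync_wait(std::move(both)).value();
  std::printf("%d\n", answer); // prints 42
}

Note that the pipeline itself never says where it executes: swapping the thread-pool scheduler for a different one re-targets the same description of work, which is the "parameterizing a computation on where it should execute" point from the abstract, and values and errors flow to the continuations exactly as described there.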