NTR organizes and hosts scientific webinars on neural networks and invites speakers from all over the world to present their recent work at the webinars.
On May 4, Ilia Kulikov of the Computational Intelligence, Learning, Vision, and Robotics Lab at New York University, New York, USA, led a technical Zoom webinar on Learning and Decoding with Neural Autoregressive Language Models.

About the webinar:
Neural autoregressive language models are the de facto standard approach in many natural language processing tasks, such as machine translation and conversation modeling.
I addressed the neural sequence modeling pipeline and the approximations we inevitably have to make to use such a system in practice.
These approximations may lead to undesirable predictions, such as repetitive, inconsistent, or otherwise degenerate sequences.
Finally, we considered several methods that can be applied along the learning pipeline to help mitigate these issues.
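The repetition problem mentioned above can be illustrated with a toy example. The sketch below (my own illustration, not code from the webinar) uses a hypothetical bigram "language model" and greedy decoding: even though each step picks the most probable next token, the decoded sequence falls into a repetitive cycle and never emits the end-of-sequence token.

```python
import numpy as np

# Toy autoregressive "model": the next-token distribution depends only on
# the previous token (a bigram table). Tokens are 0..3, where 3 = EOS.
# The table and token set are invented for illustration.
probs = np.array([
    [0.1, 0.6, 0.2, 0.1],  # after token 0, token 1 is most likely
    [0.1, 0.1, 0.7, 0.1],  # after token 1, token 2 is most likely
    [0.1, 0.6, 0.1, 0.2],  # after token 2, token 1 is most likely again
    [0.0, 0.0, 0.0, 1.0],  # EOS is absorbing
])

def greedy_decode(start, max_len=10):
    """Greedy (argmax) decoding: one of the approximations that can
    produce degenerate, repetitive sequences."""
    seq = [start]
    while len(seq) < max_len and seq[-1] != 3:
        seq.append(int(np.argmax(probs[seq[-1]])))
    return seq

print(greedy_decode(0))  # cycles 1, 2, 1, 2, ... and never reaches EOS
```

Here greedy search gets trapped in the 1 → 2 → 1 loop because EOS is never the single most probable continuation, even though the probability of the whole repeated sequence keeps shrinking; this is a minimal version of the degenerate behavior the webinar discusses.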
Materials available:
Webinar presentation
Moderator and contact:
NTR CEO Nick Mikhailovsky: nickm@ntrlab.com.