Transformers for NLP
When: 15/02, 17:00 – 19:30
Venue: Aula 1
Speaker
- Katarzyna Baraniak, ML Researcher, Warsaw, Mazowieckie, Poland
Description
This talk presents transformer architectures for natural language processing. The ideas of the encoder-decoder, the attention mechanism, and language models are explained. Then example Large Language Models such as BERT and other architectures are presented.
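As a preview of the attention mechanism covered in the talk, here is a minimal sketch of scaled dot-product attention in plain Python (an illustration only, not part of the official workshop materials; the function and variable names are our own):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # Scaled dot-product attention for a single query vector:
    # score_i = (query . key_i) / sqrt(d_k), output = sum_i softmax(score)_i * value_i
    d_k = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    weights = softmax(scores)
    d_v = len(values[0])
    # Output is the attention-weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d_v)]

# Toy example: the query matches the first key most closely,
# so the output leans toward the first value vector.
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, K, V)
```

In a real transformer the queries, keys, and values are matrices produced by learned projections, and many such attention heads run in parallel; the session will work with full implementations via the Hugging Face library.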
During the second part, we will introduce the Hugging Face library and walk through the code of a transformer architecture. We will also cover how to prepare data and fine-tune a pre-trained model for an NLP task such as sentiment analysis.
Please bring your laptop for the hands-on session and make sure you have access to Google Colab or a Jupyter notebook with Python and PyTorch installed.
This talk/workshop is in English.
You can download the materials for this workshop from this repository.
Activity registration
If you want your participation to count toward the Copa de las Casas, use this registration form.