Wednesday, May 7th
11:15 AM - 1:15 PM EDT
Room 6417
Programming Neural Networks in Python: Part II (continued)
We invite students and researchers to participate in a three-part workshop series on the theory and practice of programming neural networks. The series is designed as an accessible introduction for individuals with minimal programming background who wish to develop practical skills in implementing neural networks from first principles and in using modern machine learning libraries. We will be using Google Colab, a free platform from Google for writing and running Python code in the cloud.
Link to the second lecture notebook:
Neural nets from scratch on Google Colab
For students who missed the first lecture, below is the introductory Python Colab notebook:
Intro to Python tutorial on Google Colab
Requirements and Preparation
A laptop
Attendance at the first two sessions, OR a basic understanding of Python and neural networks
Workshop Schedule and Topics
Lecture 2: Crash Course on Programming Neural Networks
In the continuation of the second session, we focus on the training loop for neural networks:
Defining and computing a loss function (cross-entropy loss)
Deriving gradients using the chain rule and implementing backpropagation
Training the network using (stochastic) gradient descent
Visualizing the learning process through loss plots and prediction results
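The training-loop steps above can be sketched in a few dozen lines of NumPy. This is an illustrative example, not the workshop notebook itself; all names, the toy XOR dataset, and the network size are assumptions made for the sketch. It defines a one-hidden-layer network, computes the cross-entropy loss, derives the gradients by hand via the chain rule (backpropagation), and updates the weights with gradient descent, recording the loss at each step.

```python
import numpy as np

# Illustrative sketch of the training loop (not the workshop's actual code):
# a tiny one-hidden-layer network trained with cross-entropy loss
# and plain gradient descent on a toy XOR dataset.

rng = np.random.default_rng(0)

# Toy data: 2-D inputs, two classes (XOR pattern)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])  # class labels

n_in, n_hidden, n_out = 2, 8, 2
W1 = rng.normal(0, 1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, n_out))
b2 = np.zeros(n_out)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
losses = []
for step in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = softmax(logits)

    # Cross-entropy loss: mean of -log(probability of the correct class)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    losses.append(loss)

    # Backward pass (chain rule): gradient of the loss w.r.t. the logits
    # for softmax + cross-entropy is (p - one_hot(y)) / batch_size
    dlogits = p.copy()
    dlogits[np.arange(len(y)), y] -= 1.0
    dlogits /= len(y)

    dW2 = h.T @ dlogits
    db2 = dlogits.sum(axis=0)
    dh = dlogits @ W2.T
    dz1 = dh * (1 - h**2)          # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {losses[-1]:.4f}")
print("predicted classes:", p.argmax(axis=1))
```

The `losses` list collected during training can then be plotted (e.g. with `matplotlib.pyplot.plot`) to visualize the learning process, as in the last topic above.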
ORGANIZERS
Simon Locke (Initiative for Theoretical Sciences / GC-CUNY) & Enrique Pujals (GC-CUNY)