Statistical Natural Language Processing (CS779) : Fall 2020

Natural language (NL) refers to the language spoken or written by humans, and it is the primary mode of communication for humans. With the growth of the World Wide Web, data in the form of text has grown exponentially. This calls for the development of algorithms and techniques for processing natural language, both for automation and for building intelligent machines. This course will primarily focus on understanding and developing linguistic techniques, statistical learning algorithms, and models for processing language. We will take a statistical approach to natural language processing, wherein we will learn how one can develop natural language understanding models from statistical regularities in large corpora of natural language text while leveraging linguistic theories.

CS779 is a research-project-based course; participants are required to work on open, unsolved research problems in NLP, and consequently considerable effort is expected from participants.


Must: Introduction to Machine Learning (CS771) or equivalent course, Proficiency in Linear Algebra, Probability and Statistics, Proficiency in Python Programming
Desirable: Probabilistic Machine Learning (CS772), Topics in Probabilistic Modeling and Inference (CS775), Deep Learning for Computer Vision (CS776)

Course Instructor:

Dr. Ashutosh Modi

Course TAs:

Samik Some (Email: )
Shubham Kumar Nigam (Email: )
Tushar Shandhilya (Email: )
Karishma Satchidanand Laud (Email: )
Gargi Singh (Email: )
Chayan Dhaddha (Email: )
Ashwani Bhat (Email: )

Course Email:

If you want to communicate with the instructor, please do not send direct emails to the instructor (these will most likely end up in spam); instead, use this course email for communication:

Weekly Meeting Session:

Monday 3:30PM to 5PM

Virtual Classroom:

Lectures, assignments and quizzes will be uploaded/conducted on HelloIITK.

Virtual classes will be held on MS Teams. A separate team/channel (CS779: Statistical Natural Language Processing) has been set up for the course. All course announcements will be made on the Teams channel, HelloIITK and Telegram.

If you are not on MS Teams, please create an account via the IITK subscription. To get the IITK subscription, please fill in this form: If you are outside IITK, you will need to log into the IITK network via VPN to fill in the form. Once you have the account, please contact the TAs to be added to the Teams channel.


Please check the Resources tab.


This is a research-project-oriented course, and the project carries the maximum weightage. Given that the course will be online, all exams will be conducted online. The tentative weightages for the different components are as follows. Please note that this grading scheme is tentative (due to COVID uncertainties and factors beyond the instructor's control), and the weightages might change.

Quizzes: 20%
Scribe Notes and Cheat Sheets: 20%
Project: 60%
Mid-Sem Exam: Project paper and presentation
End-Sem Exam: Project paper and presentation


Date Topic References
01/09/2020 Introduction and Logistics -

Course Contents:

Tentative list of topics we will be covering in this course:
  1. Introduction to Natural Language (NL): why is it hard to process NL, linguistics fundamentals, etc.
  2. Language Models: n-grams, smoothing, class-based, brown clustering
  3. Sequence Labeling: HMM, MaxEnt, CRFs, related applications of these models e.g. Part of Speech tagging, etc.
  4. Parsing: CFG, Lexicalized CFG, PCFGs, Dependency parsing
  5. Applications: Named Entity Recognition, Coreference Resolution, text classification, toolkits e.g., Spacy, etc.
  6. Distributional Semantics: distributional hypothesis, vector space models, etc.
  7. Distributed Representations: Neural Networks (NN), Backpropagation, Softmax, Hierarchical Softmax
  8. Word Vectors: Feedforward NN, Word2Vec, GloVe, Contextualization (ELMo, etc.), Subword information (FastText, etc.)
  9. Deep Models: RNNs, LSTMs, Attention, CNNs, applications in language, etc.
  10. Sequence to Sequence models: machine translation and other applications
  11. Transformers: BERT, transfer learning and applications
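To make topic 2 above concrete, here is a minimal illustrative sketch (not course-provided code) of a bigram language model with add-one (Laplace) smoothing; the toy corpus and function names are our own assumptions, chosen only to show the counting and smoothing mechanics:

```python
from collections import Counter

# Toy corpus; a real language model would be trained on a large text collection.
corpus = [
    ["the", "cat", "sat"],
    ["the", "dog", "sat"],
    ["the", "cat", "ran"],
]

# Count unigrams and bigrams, padding each sentence with boundary markers.
unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word):
    """P(word | prev) with add-one (Laplace) smoothing."""
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

# "cat" follows "the" in 2 of 3 occurrences; with V = 7 this gives (2+1)/(3+7).
print(bigram_prob("the", "cat"))   # → 0.3
```

Note how smoothing assigns non-zero probability to unseen bigrams such as ("the", "sat"), at the cost of slightly discounting the observed counts; the probabilities over the vocabulary still sum to one for each history.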


There are no specific references; this course gleans information from a variety of sources like books, research papers, other courses, etc. Relevant references will be suggested in the lectures. Some of the frequently used references are as follows:
  1. Speech and Language Processing, Daniel Jurafsky, James H. Martin
  2. Foundations of Statistical Natural Language Processing, Christopher Manning, Hinrich Schütze
  3. Introduction to Natural Language Processing, Jacob Eisenstein
  4. Neural Network Methods for NLP, Yoav Goldberg, Morgan & Claypool (if you are on the IITK network, you can download it at: Morgan Claypool Subscription for IITK)
  5. Linguistic Fundamentals for Natural Language Processing, Emily Bender, Morgan & Claypool (if you are on the IITK network, you can download it at: Morgan Claypool Subscription for IITK)
  6. Natural Language Understanding, James Allen

This is a project-oriented course, and participants will work on different NLP research projects. Once the projects have been finalized by the participants, this page will be populated with the list of projects.

Course Logistics Related Resources:

  1. Course lectures, assignments and quizzes will be on HelloIITK. Log in using your IITK username and password.
  2. Course MS Teams Channel: CS779: Statistical Natural Language Processing
  3. Telegram Channel: CS779_Fall_2020
  4. Projects List and Registration
  5. Scribe Notes and Cheatsheet Guidelines
  6. Project Paper Template

Study Resources:

  1. Linear Algebra Refresher
  2. Probability Refresher
  3. PyTorch Tutorials
  4. Deep Learning with PyTorch Book
  5. Spacy ToolKit
  6. Writing Code for NLP Research
  7. Repository of NLP research papers: ACL Anthology
  8. Human Language Technology Series by Morgan Claypool. This can be accessed only from the IITK network.
  9. Guide to ML Research
  10. Using Google CoLab for research
  11. Using Kaggle Notebooks for research