We are an informal machine learning reading group here at UC Berkeley. Our goal is to provide a low-stakes space for people to engage with and better understand modern ML research. We are structured around a topic of the month format. Each month, we pick a focused research area and read deeply into it, covering foundational papers, recent advances, and open questions. At every meeting, a subset of people volunteer to present a paper to the group. Those who don't present participate by asking questions and engaging in the group discussion.
We don't have formal prerequisites, but you'll get the most out of discussions if you have exposure to mathematical ML fundamentals, the kind covered in CS 189, CS 182, or similar. If you're newer to reading research papers, don't worry! This is a skill the group helps you build.
Overall, the group is pretty low-stakes and informal, and we are not affiliated with any student organizations on campus or with the department. If this sounds interesting to you, feel free to reach out using the link below!
Topic: Reliable AI (March)
We prioritize papers that are:
When: Fridays, 5–7 PM
Where: Contact Vijay for location by clicking the link below
Hi everyone, my name's Vijay, and I'm a third-year computer science major.
I'm interested in natural language processing (NLP), machine learning systems, and AI in general.
Outside of school, I enjoy cooking and hiking.
vkethana@berkeley.edu
Continual Learning
Meeting notes coming soon.
Reliable AI
Representation Learning for Reinforcement Learning
Before the topic-of-the-month format, meetings covered a wide range of papers each week.
No notes exist for this meeting.
Continual Learning
Continual learning studies models that learn from experience over time, as opposed to static models that are trained once and cannot update "online" from new user feedback or interactions.
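To make the contrast concrete, here is a minimal sketch (not from any specific paper, and the names and learning rate are my own illustration): a static model is fit once and frozen, while an online learner updates its parameters as each new example arrives. This toy example runs SGD on a 1-D linear model y ≈ w·x over a stream of data points.

```python
def online_update(w, x, y, lr=0.1):
    """One SGD step on the squared error (y - w*x)**2."""
    grad = -2 * (y - w * x) * x
    return w - lr * grad

# A stream of (x, y) pairs drawn from the true relation y = 3x.
stream = [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5), (1.5, 4.5)]

w = 0.0  # start with no knowledge
for x, y in stream:
    w = online_update(w, x, y)  # the model improves with each interaction

print(w)  # w moves toward the true slope of 3.0
```

A static model would instead fit `w` once on a fixed dataset and never revisit it; the continual-learning literature asks how to do the streaming version without forgetting what was learned earlier.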