Below are some candid notes on my career so far! For a full job history, please see my LinkedIn here, or email me for an updated CV :)
Research (~2015 - 2020)
I took Andrew Ng’s Machine Learning course on Coursera back in 2014, and it sparked my love for two things: online education and artificial intelligence. I was fascinated by computer vision, especially the remarkable new set of techniques called “Deep Learning”. I read Karpathy and Nielsen, and began experimenting, buying a GPU and shortly after bricking a computer while attempting to install CUDA drivers (it was much harder back then!!). I went on to intern at the startup Clarifai, where I built tools to compare and contrast convolutional neural networks.
Shortly after I matriculated at Brown, I began working with Professor Michael Littman in “Deep Reinforcement Learning”, a new field that married the older Reinforcement Learning with the newer Deep Learning. I was hooked on research from my first year: I loved reading the literature, solving new problems, and talking with the wicked-smart and, most importantly, kind students in the lab. I interned at NVIDIA, working on Active Learning projects for their self-driving car software, and at Preferred Networks in Tokyo, working on uncertainty prediction in the context of control using convolutional neural networks.
During my time at Brown, Professor Littman and I worked on two threads of research. For all publications with PDFs, please see Google Scholar.
AI + Education
This collaboration took place with Sam Saarinen, a great friend and collaborator to this day. We immediately connected because neither of us is content with the way educational systems, be they physical schools or MOOCs, work.
In educational recommendation systems (and educational systems generally), it’s important for the system to have a solid model of what a student does and does not know. If this model is inaccurate, the system cannot serve the student’s needs: it may recommend content that is too advanced, or content that is too basic. To that end, Sam and I explored a new approach to prerequisite testing in:
Sam and I were always brainstorming new approaches to fixing schooling, and in this short article, we tried to re-imagine learning systems from scratch, taking a stab at using Partially Observable Markov Decision Processes as the primary framework for understanding education.
We have much more work to do in this space.
Model-Based Reinforcement Learning
Kavosh Asadi was kind enough to take me under his wing late in my sophomore year. I credit much of my technical writing ability (if I have any!) to him. I always enjoyed working in front of the whiteboard with Kavosh, as the sessions were always as fun as they were pedagogical.
Reinforcement learning, a general agent-based learning paradigm, usually operates without an explicit world model. But when we do use an explicit model, as in “Model-Based Reinforcement Learning”, we want that model to be accurate. This is the unifying idea we explored in the two papers we wrote: one more practical, one more theoretical.
In Model-Based Reinforcement Learning we “roll out” our world model under some policy (usually our current best policy), and then act based on the reward observed in our approximate model. Typically the rollout is performed by repeatedly applying a model that predicts the next state given the previous state and one action. That is, the predicted next state is fed back into the model as input at the next step, for some finite number of steps. This can cause compounding error if the model is inaccurate (which in the “deep” setting is almost certainly the case). We wrote a paper exploring how model-based reinforcement learning algorithms perform if, instead of using “one-step” models, we train many “n-step” models, trading off data and compute for higher accuracy. Simple, hence the title! But it works :)
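To make the compounding-error intuition concrete, here is a toy sketch (my own illustration, not code from the paper): a one-step model with a small error is composed n times, versus a direct n-step model that carries the same amount of error but applies it only once.

```python
import numpy as np

# Toy linear dynamics: s' = A @ s. The matrix and error size are hypothetical.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
eps = 0.05                       # small, uniform modeling error

A_hat_1 = A + eps                # learned one-step model (slightly wrong)

def rollout_one_step(s, n):
    """Feed predictions back into the one-step model n times (compounds error)."""
    for _ in range(n):
        s = A_hat_1 @ s
    return s

n = 10
# An n-step model predicts s_{t+n} directly; suppose it suffers the same
# per-prediction error eps, but incurs it only once.
A_hat_n = np.linalg.matrix_power(A, n) + eps

s0 = np.array([1.0, 1.0])
true_sn = np.linalg.matrix_power(A, n) @ s0

err_composed = np.linalg.norm(rollout_one_step(s0, n) - true_sn)
err_direct = np.linalg.norm(A_hat_n @ s0 - true_sn)
print(err_composed, err_direct)  # the composed one-step rollout is far worse
```

Under these assumptions, repeatedly feeding the one-step model its own (slightly wrong) predictions amplifies the error, while the direct n-step predictor pays the error cost only once, which is the trade the paper’s approach exploits.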
We want our models in MBRL to be most accurate in high-value states (because we hope to spend our time in those states). We can encourage this by weighting higher-value states more heavily in our objective function. This technique is called “value-aware model learning”. We wrote a paper connecting that paradigm to optimizing models using the “Wasserstein Metric”, also known as the “Earth Mover’s Distance”.
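As a rough illustration of the value-weighting idea (the names, toy dynamics, and data below are my own invention, not the paper’s), a value-aware model loss simply scales each state’s prediction error by an estimate of that state’s value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: sampled states, "true" next states under simple linear dynamics,
# and a (hypothetical) value estimate V(s) for each sampled state.
states = rng.normal(size=(100, 3))
next_states = states * 0.9               # true dynamics: s' = 0.9 * s
values = np.abs(rng.normal(size=100))    # stand-in for estimated state values

def model_predict(states, theta):
    """Toy one-parameter model: predicts the next state as theta * state."""
    return states * theta

def value_aware_loss(theta):
    """Squared model error per state, weighted by normalized state value."""
    errors = np.sum((model_predict(states, theta) - next_states) ** 2, axis=1)
    weights = values / values.sum()      # higher-value states count for more
    return np.sum(weights * errors)

print(value_aware_loss(0.9))  # → 0.0: the model matches the true dynamics
print(value_aware_loss(0.5))  # nonzero: errors, weighted most in valued states
```

A uniform loss would treat every state equally; the weighting above pushes the model’s limited capacity toward accuracy where the agent most expects to be.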
Software Engineer and Engineering Manager at Studio.com (2020-2022)
I joined studio.com shortly after graduating. I thought it had a great mission: make creative fulfillment attainable for everyone. We worked on online courses with the best educators in fields such as Music Production, Baking, and Dance. Our model was different from that of other online courses, which, in our opinion, veered closer to “edutainment” than actual pedagogy. Our courses required homework, and difficult homework at that. We believed that deliberate practice was required to actually learn skills, and we attempted to provide the learner with the best practices and theory in the world. As the world moves towards automation, I still believe that creative fulfillment will be all the more important.
And wow, it was so much fun. I cut my teeth on real software engineering problems for the first time. The engineering served real users, and bugs meant losing real money for a company I was deeply invested in. I learned how to test code, how to set up robust systems, how to design abstractions with future refactors in mind, and how to work on a team. Later, while we were experiencing outsized growth, I had the privilege of working with our great recruiting team, scaling the backend and infrastructure team from two to seven. I loved balancing longer strategic plans with day-to-day fires, as well as building a culture that made work enjoyable (at least I hope!). But I don’t want to overstate my experience: I was a professional full-stack engineer for a year, and a manager of a growing team for a little longer than that. I know almost nothing. I look forward to furthering my skills on all fronts!
Some of my favorite resources and references that I used constantly while at Studio:
- Designing Data-Intensive Applications
- Grug Brained Developer
- Big Ball of Mud
- The MDN docs!
A couple of months after a large layoff, I decided to leave the company to venture out. I wanted to read, write, and work on the projects which are the primary focus of this website.