Manan Shah



I suppose if we couldn’t laugh at things that don’t make sense, we couldn’t react to a lot of life.

Bill Watterson

Learning Undirected Graphical Models

Undirected graphical models formed a large part of the initial push for machine intelligence, and remain relevant today. Here, I motivate and derive Monte Carlo-based learning algorithms for such models.
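
As a taste of where the post goes, here is the identity such algorithms rest on, written in standard exponential-family notation (an illustrative assumption, not necessarily the post's own notation): the log-likelihood gradient splits into a data expectation minus a model expectation, and Monte Carlo enters to estimate the latter.

```latex
% Gradient of the log-likelihood of an exponential-family MRF
% p_\theta(x) = \exp(\theta^\top \phi(x)) / Z(\theta); the model
% expectation is intractable in general and is estimated with
% MCMC samples (the Monte Carlo step).
\nabla_\theta \, \ell(\theta)
  = \mathbb{E}_{x \sim \text{data}}\!\left[\phi(x)\right]
  - \mathbb{E}_{x \sim p_\theta}\!\left[\phi(x)\right]
  \approx \frac{1}{N}\sum_{i=1}^{N} \phi(x^{(i)})
  - \frac{1}{M}\sum_{j=1}^{M} \phi(\tilde{x}^{(j)}),
  \qquad \tilde{x}^{(j)} \sim p_\theta \ \text{via MCMC}.
```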

Generalizing Backpropagation

We motivate and derive the generalized backpropagation algorithm for arbitrarily structured networks.
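
The essence is reverse-mode differentiation over an arbitrary computation DAG. Below is a minimal, self-contained sketch of that mechanism (this toy `Node` class and its two ops are illustrative assumptions, not code from the post):

```python
# Minimal reverse-mode autodiff over an arbitrary computation DAG.

class Node:
    """A scalar value with a backward rule (a chain-rule closure)."""
    def __init__(self, value, parents=(), backward=lambda g: ()):
        self.value = value
        self.parents = parents     # upstream Nodes
        self.backward = backward   # maps output grad -> parent grads
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), lambda g: (g, g))

def mul(a, b):
    return Node(a.value * b.value, (a, b), lambda g: (g * b.value, g * a.value))

def backprop(output):
    """Accumulate d(output)/d(node) for every node, in reverse topological order."""
    order, seen = [], set()
    def topo(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                topo(p)
            order.append(n)
    topo(output)
    output.grad = 1.0
    for node in reversed(order):
        for parent, g in zip(node.parents, node.backward(node.grad)):
            parent.grad += g  # sum over all paths (multivariate chain rule)

# Usage: f(x, y) = (x + y) * x  =>  df/dx = 2x + y, df/dy = x
x, y = Node(3.0), Node(4.0)
f = mul(add(x, y), x)
backprop(f)
print(x.grad, y.grad)  # 10.0 3.0
```

The `parent.grad += g` accumulation is the whole trick: it sums gradient contributions over every path through the graph, which is exactly what makes the algorithm correct for arbitrary structures rather than just layered chains.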

Grokking Fully Convolutional Networks

We discuss the fundamental ideas behind fully convolutional networks, including the transformation of fully connected layers to convolutional layers and upsampling via transposed convolutions ("deconvolutions").
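
Both ideas are small enough to sketch. The snippet below uses PyTorch purely for illustration (the framework, the 21-class VOC-style output, and the FCN-32s-style upsampling parameters are assumptions, not taken from the post):

```python
import torch
import torch.nn as nn

# (1) A fully connected layer over a 512 x 7 x 7 feature map is
# equivalent to a 7x7 convolution with reshaped weights.
fc = nn.Linear(512 * 7 * 7, 4096)
conv = nn.Conv2d(512, 4096, kernel_size=7)
conv.weight.data = fc.weight.data.view(4096, 512, 7, 7)
conv.bias.data = fc.bias.data

x = torch.randn(1, 512, 7, 7)
assert torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-4)

# (2) A transposed convolution ("deconvolution") upsamples coarse score
# maps, e.g. 32x spatial upsampling in the style of FCN-32s.
upsample = nn.ConvTranspose2d(21, 21, kernel_size=64, stride=32, padding=16)
scores = torch.randn(1, 21, 7, 7)
print(upsample(scores).shape)  # torch.Size([1, 21, 224, 224])
```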

Learning Convolutional Networks

We motivate and derive the backpropagation learning algorithm for convolutional networks.
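
The punchline of that derivation, sketched here in numpy under illustrative assumptions (1-D, single channel, "valid" cross-correlation), is that a convolutional layer's gradients are themselves convolutions:

```python
import numpy as np

def conv1d_forward(x, w):
    """'Valid' cross-correlation: y[i] = sum_k x[i + k] * w[k]."""
    n, k = len(x), len(w)
    return np.array([x[i:i + k] @ w for i in range(n - k + 1)])

def conv1d_backward(x, w, dy):
    """Given dL/dy, return dL/dx and dL/dw."""
    n, k = len(x), len(w)
    # dL/dw[j] = sum_i dy[i] * x[i + j]: cross-correlate the input with dy.
    dw = np.array([dy @ x[j:j + len(dy)] for j in range(k)])
    # dL/dx: 'full' convolution of dy with the (flipped) kernel.
    dx = np.zeros(n)
    for i, g in enumerate(dy):
        dx[i:i + k] += g * w
    return dx, dw

# Finite-difference check of dL/dw[0] for the loss L = sum(y).
x, w = np.random.randn(8), np.random.randn(3)
dy = np.ones(6)  # dL/dy when L = y.sum()
dx, dw = conv1d_backward(x, w, dy)
eps = 1e-6
num = (conv1d_forward(x, w + eps * np.eye(3)[0]).sum()
       - conv1d_forward(x, w - eps * np.eye(3)[0]).sum()) / (2 * eps)
assert np.isclose(dw[0], num)
```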

Learning Feedforward Networks

We motivate and derive the backpropagation learning algorithm for feedforward networks.
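
For a concrete anchor, here is backpropagation through a two-layer network in plain numpy; the architecture, tanh nonlinearity, and squared-error loss are illustrative choices, not necessarily those of the post:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # batch of inputs
Y = rng.normal(size=(32, 1))   # regression targets
W1, b1 = rng.normal(size=(4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

for step in range(200):
    # Forward pass
    h = np.tanh(X @ W1 + b1)           # hidden activations
    y_hat = h @ W2 + b2                # linear output
    loss = ((y_hat - Y) ** 2).mean()

    # Backward pass: chain rule, layer by layer
    d_yhat = 2 * (y_hat - Y) / len(X)  # dL/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                # propagate through the output layer
    d_pre = d_h * (1 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_pre
    db1 = d_pre.sum(axis=0)

    # Gradient descent update
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g

print(f"final loss: {loss:.4f}")
```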

On Ken Thompson's "Reflections on Trusting Trust"

A detailed look at one of my favorite software security papers, and its implications for bootstrapping trust.

Learning Directed Latent Variable Models

Directed latent variable models provide a powerful way to represent complex distributions by combining simple ones. However, their log-likelihoods are often intractable, which complicates learning. In this post, we build intuition for these concepts.
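
The standard escape hatch the post builds toward can be stated in one line (standard variational notation, assumed here rather than quoted from the post): instead of the intractable log-likelihood, one maximizes a tractable lower bound, the ELBO.

```latex
% By Jensen's inequality, for any approximate posterior q(z \mid x):
\log p_\theta(x)
  = \log \mathbb{E}_{q(z \mid x)}\!\left[ \frac{p_\theta(x, z)}{q(z \mid x)} \right]
  \ge \mathbb{E}_{q(z \mid x)}\!\left[ \log p_\theta(x, z) - \log q(z \mid x) \right]
  =: \mathrm{ELBO}(\theta, q).
```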