Manan Shah


I suppose if we couldn’t laugh at things that don’t make sense, we couldn’t react to a lot of life.

Bill Watterson

Learning Undirected Graphical Models

Undirected graphical models formed a large part of the initial push for machine intelligence and remain relevant today. Here, I motivate and derive Monte Carlo-based learning algorithms for such models.
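
As a preview of the key identity, in standard energy-based notation (which may differ from the post's):

```latex
% Log-likelihood gradient for p_\theta(x) = \exp(-E_\theta(x)) / Z(\theta):
\nabla_\theta \log p_\theta(x)
  = -\nabla_\theta E_\theta(x)
  + \mathbb{E}_{x' \sim p_\theta}\left[ \nabla_\theta E_\theta(x') \right]
```

The second term, the so-called negative phase, is an expectation under the model and involves the intractable partition function; approximating it with samples drawn by MCMC (e.g., Gibbs sampling) is where Monte Carlo enters.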

Fully Convolutional Networks

I discuss some fundamental ideas behind fully convolutional networks, including the transformation of fully connected layers to convolutional layers and upsampling via transposed convolutions ("deconvolutions").
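
As a taste, here is a minimal sketch of both ideas; PyTorch and the layer sizes are my choices for illustration, not necessarily the post's:

```python
import torch
import torch.nn as nn

# A fully connected layer over a CxHxW feature map (sizes illustrative)...
C, H, W, F = 8, 7, 7, 16
fc = nn.Linear(C * H * W, F)

# ...is equivalent to an HxW convolution with F output channels:
conv = nn.Conv2d(C, F, kernel_size=(H, W))
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(F, C, H, W))
    conv.bias.copy_(fc.bias)

x = torch.randn(1, C, H, W)
assert torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-5)

# On a larger input the conv simply slides, producing a spatial map of
# "fc" outputs instead of a single vector:
big = torch.randn(1, C, 14, 14)
print(conv(big).shape)  # torch.Size([1, 16, 8, 8])

# Upsampling 2x via a transposed convolution ("deconvolution"):
up = nn.ConvTranspose2d(F, F, kernel_size=4, stride=2, padding=1)
print(up(conv(big)).shape)  # torch.Size([1, 16, 16, 16])
```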

Generalized Backpropagation

I motivate and derive the generalized backpropagation algorithm for arbitrarily structured networks.
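
For a preview of the core mechanism, here is a hypothetical minimal sketch of reverse-mode differentiation over an arbitrary computation graph; the Var class and its names are illustrative, not the post's:

```python
# Minimal reverse-mode autodiff: the chain rule applied in reverse
# topological order over an arbitrary DAG of scalar operations.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # (parent Var, local derivative) pairs

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self):
        # Topologically order the graph, then accumulate gradients in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for parent, _ in v.parents:
                    visit(parent)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for parent, local in v.parents:
                parent.grad += v.grad * local

x, y = Var(2.0), Var(3.0)
z = x * y + x        # dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```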

Learning Convolutional Networks

I motivate and derive the backpropagation learning algorithm for convolutional networks.
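
As a preview, for a one-dimensional layer y_i = sum_k w_k x_{i+k} (the cross-correlation most libraries call "convolution"), the chain rule gives, in standard notation:

```latex
\frac{\partial L}{\partial w_k} = \sum_i \frac{\partial L}{\partial y_i}\, x_{i+k},
\qquad
\frac{\partial L}{\partial x_j} = \sum_k \frac{\partial L}{\partial y_{j-k}}\, w_k
```

That is, the gradient with respect to the weights is itself a correlation of the input with the output gradient, and the gradient with respect to the input is a convolution of the output gradient with the kernel.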

Learning Feedforward Networks

I motivate and derive the backpropagation learning algorithm for feedforward networks.
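
To preview the mechanics, here is one forward and backward pass for a two-layer network in NumPy; the sigmoid hidden layer and squared-error loss are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 10))          # batch of 32 inputs
t = rng.standard_normal((32, 1))           # targets
W1, b1 = rng.standard_normal((10, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Forward pass.
h = sigmoid(X @ W1 + b1)                   # hidden activations
y = h @ W2 + b2                            # predictions
loss = 0.5 * np.mean((y - t) ** 2)

# Backward pass: propagate dL/dy back through each layer.
dy = (y - t) / len(X)                      # dL/dy
dW2, db2 = h.T @ dy, dy.sum(0)
dh = dy @ W2.T                             # dL/dh
dz = dh * h * (1 - h)                      # through the sigmoid
dW1, db1 = X.T @ dz, dz.sum(0)

# One gradient-descent step.
for p, g in [(W1, dW1), (b1, db1), (W2, dW2), (b2, db2)]:
    p -= 0.1 * g
```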

A Discussion of Ken Thompson's "Reflections on Trusting Trust"

Ken Thompson's Turing Award lecture "Reflections on Trusting Trust" took me a while to grasp, but proved immensely rewarding to understand. Here, I discuss the exploit it presents in an approachable manner.

A (Formal) Probabilistic Interpretation of Linear Regression

Linear regression is a ubiquitous and natural algorithm, often justified probabilistically by assuming that the error in the relationship between the target and input variables is Gaussian. Here, I provide a formal proof of this justification.
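
Concretely, the informal claim is that under the model t_n = w^T x_n + e_n with i.i.d. Gaussian noise e_n ~ N(0, sigma^2), maximizing the likelihood over w is exactly least squares:

```latex
\log p(t \mid X, w)
  = \sum_{n=1}^{N} \log \mathcal{N}\left(t_n \mid w^\top x_n, \sigma^2\right)
  = -\frac{1}{2\sigma^2} \sum_{n=1}^{N} \left(t_n - w^\top x_n\right)^2
  - \frac{N}{2} \log 2\pi\sigma^2
```

The second term is constant in w, so the maximizer of the log-likelihood is precisely the minimizer of the squared errors.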

Latent Variable Models

Directed latent variable models provide a powerful way to represent complex distributions by combining simple ones. However, their log-likelihoods are often intractable, which complicates learning. In this post, I hope to build intuition for these concepts.
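
The intractability in question is the integral over the latent variable z; the standard workaround, in VAE-style notation the post may or may not use, is to maximize the evidence lower bound (ELBO) instead:

```latex
\log p_\theta(x)
  = \log \int p_\theta(x \mid z)\, p(z)\, dz
  \geq \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```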