I suppose if we couldn’t laugh at things that don’t make sense, we couldn’t react to a lot of life.
Bill Watterson

Learning Undirected Graphical Models
06 September 2020
Undirected graphical models formed a large part of the initial push for machine intelligence, and remain relevant today. Here, I motivate and derive Monte Carlo-based learning algorithms for such models.
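As a rough sketch of where Monte Carlo enters (my notation, not necessarily the post's): for an energy-based model \(p_\theta(\mathbf{x}) = \exp(-E_\theta(\mathbf{x}))/Z(\theta)\), the gradient of the log-likelihood is

\[
\nabla_\theta \log p_\theta(\mathbf{x}) = -\nabla_\theta E_\theta(\mathbf{x}) + \mathbb{E}_{\mathbf{x}' \sim p_\theta}\!\left[\nabla_\theta E_\theta(\mathbf{x}')\right],
\]

and the expectation under the model has no tractable closed form in general, so it is approximated with samples drawn by a Monte Carlo method such as Gibbs sampling.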

16 August 2020
I discuss some fundamental ideas behind fully convolutional networks, including the transformation of fully connected layers to convolutional layers and upsampling via transposed convolutions ("deconvolutions").
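As a quick, hedged illustration of both ideas, here is a minimal PyTorch sketch; the framework, layer sizes, and variable names are my own assumptions rather than the post's:

import torch
import torch.nn as nn

# (1) A fully connected layer over a 512 x 7 x 7 feature map rewritten as an equivalent
#     convolution whose kernel spans the whole 7 x 7 input.
fc = nn.Linear(512 * 7 * 7, 4096)
conv = nn.Conv2d(512, 4096, kernel_size=7)
conv.weight.data = fc.weight.data.view(4096, 512, 7, 7)  # reuse the FC weights unchanged
conv.bias.data = fc.bias.data

x = torch.randn(1, 512, 7, 7)
print(torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-4))  # True

# (2) A transposed convolution ("deconvolution") upsamples its input; here 1x1 -> 2x2.
up = nn.ConvTranspose2d(4096, 21, kernel_size=4, stride=2, padding=1)
print(up(conv(x)).shape)  # torch.Size([1, 21, 2, 2])

On a larger input the converted network produces a spatial grid of predictions instead of a single vector, which is the point of the fully convolutional formulation.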

15 August 2020
I motivate and derive the generalized backpropagation algorithm for arbitrarily structured networks.

Learning Convolutional Networks
06 July 2020
I motivate and derive the backpropagation learning algorithm for convolutional networks.

05 July 2020
I motivate and derive the backpropagation learning algorithm for feedforward networks.

Ken Thompson's Turing Award lecture "Reflections on Trusting Trust" took me a while to grasp, but proved immensely rewarding to understand. Here, I discuss the exploit it presents in an approachable manner.

Linear regression is a ubiquitous and natural algorithm, often justified probabilistically by assuming that the error in the relationship between the target and input variables is Gaussian. Here, I provide a formal proof of this justification.
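As a sketch of the equivalence behind that justification (my notation; the post's derivation may differ): if \(y_i = \mathbf{w}^\top \mathbf{x}_i + \epsilon_i\) with i.i.d. noise \(\epsilon_i \sim \mathcal{N}(0, \sigma^2)\), the log-likelihood of the data is

\[
\log p(y_{1:n} \mid \mathbf{x}_{1:n}; \mathbf{w}) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \mathbf{w}^\top \mathbf{x}_i\right)^2,
\]

so maximizing the likelihood in \(\mathbf{w}\) coincides with minimizing the sum of squared errors.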

23 June 2020
Directed latent variable models provide a powerful way to represent complex distributions by combining simple ones. However, they often have intractable log-likelihoods, yielding complicated learning algorithms. In this post, I hope to build intuition for these concepts.
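A one-line sketch of where the intractability comes from (my notation, not necessarily the post's): with a latent variable \(\mathbf{z}\),

\[
\log p_\theta(\mathbf{x}) = \log \int p_\theta(\mathbf{x} \mid \mathbf{z})\, p(\mathbf{z})\, d\mathbf{z},
\]

an integral that rarely admits a closed form, which is why learning typically relies on a tractable surrogate such as a lower bound on this quantity.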