Last year, at NIPS 2017, Ali Rahimi and Ben Recht won the Test of Time Award for their paper "Random Features for Large-Scale Kernel Machines", in which they introduced random features, later codified as the random kitchen sinks algorithm. As part of publicising their paper, they showed that their model could be implemented in five lines of MATLAB.
% Approximates Gaussian Process regression
% with Gaussian kernel of variance gamma^2
% lambda: regularization parameter
% dataset: X is dxN, y is 1xN
% test: xtest is dx1
% D: dimensionality of random feature
% training
w = randn(D, d);                            % random Gaussian directions
b = 2 * pi * rand(D, 1);                    % random phases, uniform on [0, 2*pi]
Z = cos(gamma * w * X + b * ones(1, N));    % D random cosine features per training point
alpha = (lambda * eye(D) + Z * Z') \ (Z * y');  % ridge solve; y transposed so dimensions conform
% testing
ztest = alpha' * cos(gamma * w * xtest + b);
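For anyone who wants to try the snippet, here is a minimal driver sketch. The toy target function, the sizes d, N, D, and the values of gamma and lambda are my own assumptions, not part of the talk.

% Toy driver for the snippet above (my own setup; the target function,
% sizes, and hyperparameter values are assumptions, not from the talk).
d = 2; N = 2000; D = 500;                  % input dim, training size, feature dim
gamma = 1; lambda = 1e-3;                  % kernel bandwidth, ridge parameter
X = randn(d, N);                           % training inputs, one column per point
y = sin(sum(X, 1)) + 0.1 * randn(1, N);    % noisy 1xN targets
% training (as above)
w = randn(D, d);
b = 2 * pi * rand(D, 1);
Z = cos(gamma * w * X + b * ones(1, N));
alpha = (lambda * eye(D) + Z * Z') \ (Z * y');
% testing on one point
xtest = randn(d, 1);
ztest = alpha' * cos(gamma * w * xtest + b);
fprintf('prediction %.3f, truth %.3f\n', ztest, sin(sum(xtest)));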
It is unclear to me how the above algorithm learns anything. How do random kitchen sinks work? How do they approximate Gaussian processes and support vector machines?
Edit
Rewatching Rahimi's talk, I noticed that the term random kitchen sinks is not introduced in the paper for which they won the award, but rather at the end of a trilogy of papers beginning with "Random Features for Large-Scale Kernel Machines". The other papers are:

"Uniform Approximation of Functions with Random Bases"
"Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning"
I think the code snippet introduced above is a specialisation of Algorithm 1 in the last paper.
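To make that connection concrete, here is my reading of what the snippet computes, written in its own notation rather than the paper's (a sketch, not taken from any of the papers). Each column of Z is a random feature map

$$z(x) = \cos(\gamma W x + b), \qquad W_{ij} \sim \mathcal{N}(0, 1), \quad b_i \sim U[0, 2\pi],$$

and the backslash line solves the ridge regression

$$\alpha = \arg\min_{\alpha} \sum_{n=1}^{N} \left(\alpha^{\top} z(x_n) - y_n\right)^2 + \lambda \lVert \alpha \rVert^{2} = \left(\lambda I + Z Z^{\top}\right)^{-1} Z y^{\top}.$$

If I understand correctly, this approximates kernel ridge regression with the Gaussian kernel because, by Bochner's theorem, the cosine features reproduce the kernel in expectation:

$$\mathbb{E}_{w, b}\left[2 \cos(\gamma w^{\top} x + b) \cos(\gamma w^{\top} x' + b)\right] = e^{-\gamma^{2} \lVert x - x' \rVert^{2} / 2}.$$

(The snippet drops the paper's $\sqrt{2/D}$ normalisation of the features; as far as I can tell this only rescales $\alpha$ and the effective $\lambda$.)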