[Notes] Representation Learning with Contrastive Predictive Coding (Jan 2019)

Framework for unsupervised learning in computer vision tasks.

In our projects, one of the recurring pain points is the amount of annotated data needed to train our models. In this paper, the authors explore what they call Contrastive Predictive Coding (CPC), a method that lets a model learn representations (features) in an unsupervised manner. Ultimately, the goal is a training process that requires much less annotated data.
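The contrastive objective behind CPC (the InfoNCE loss) can be sketched as follows. This is a hypothetical, simplified NumPy version, not the authors' implementation: a context vector scores one "positive" future embedding against a set of negatives, and training pushes the model to identify the positive.

```python
import numpy as np

def info_nce_loss(context, positive, negatives):
    """Simplified InfoNCE: context (d,), positive (d,), negatives (n, d)."""
    # Score each candidate by its dot product with the context vector.
    scores = np.concatenate((
        [context @ positive],   # the positive pair goes in slot 0
        negatives @ context,    # the n negative pairs
    ))
    # Cross-entropy against slot 0, via a numerically stable log-softmax.
    scores = scores - scores.max()
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    return -log_probs[0]

# Toy usage with random embeddings (dimension and counts are arbitrary).
rng = np.random.default_rng(0)
ctx, pos = rng.normal(size=8), rng.normal(size=8)
negs = rng.normal(size=(15, 8))
loss = info_nce_loss(ctx, pos, negs)
```

In the paper the scores come from a learned bilinear product between an autoregressive context and encoder outputs; the dot product above stands in for that to keep the sketch self-contained.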

[Read More]

[Notes] EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (May 2019)

This paper was published by Mingxing Tan and Quoc V. Le of Google Research, Brain Team. They propose a principled method for scaling up convolutional neural networks by jointly increasing depth, width, and input resolution. Their biggest model achieves 84.4% top-1 / 97.1% top-5 accuracy on ImageNet and state-of-the-art results on CIFAR-100, while being 8.4x smaller and 6.1x faster at inference than the best existing ConvNet.

[Read More]