- insta : @deep_papers
- fb : AI.Lookbook
A collection of reviews of papers I enjoyed, written from my own perspective on their key points.
Each review takes the form of an Issue or a Facebook page post.
Papers covered in Exploiting Contemporary ML.
- Attention is All You Need (Transformer) : review
- Deep contextualized word representations (ELMo) : review
- GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (GLUE) : review
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (BERT) : review
- Momentum Contrast for Unsupervised Visual Representation Learning (MoCo) : review
- A Simple Framework for Contrastive Learning of Visual Representations (SimCLR) : review
- End-to-End Object Detection with Transformers (DETR) : review
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT) : review
- XLNet: Generalized Autoregressive Pretraining for Language Understanding (XLNet) : review
- RoBERTa: A Robustly Optimized BERT Pretraining Approach (RoBERTa) : review
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (ALBERT) : review
- Evaluating Machine Accuracy on ImageNet : review
- Do Better ImageNet Models Transfer Better? : review
- ResNet strikes back: An improved training procedure in timm : review
- Image-to-Image Translation with Conditional Adversarial Networks (Pix2Pix) : review
- Progressive Growing of GANs for Improved Quality, Stability, and Variation (PGGAN) : review
- Self-Attention Generative Adversarial Networks (SAGAN) : review
- Large Scale GAN Training for High Fidelity Natural Image Synthesis (BigGAN) : review
- A Style-Based Generator Architecture for Generative Adversarial Networks (StyleGAN) : review
- Analyzing and Improving the Image Quality of StyleGAN (StyleGAN2) : review
- SinGAN: Learning a Generative Model from a Single Natural Image (SinGAN) : review
- Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis : review