๋ฐ˜์‘ํ˜•

๐ŸŒŒ Deep Learning/Overview 6

[StyleGAN Series] ProGAN/PGGAN, StyleGAN, StyleGAN2

From ProGAN to StyleGAN2, this post briefly traces the evolution of StyleGAN, the best-known style-based generative model, and the key features of each version. 1. ProGAN/PGGAN (ICLR 2018) Paper: Progressive Growing of GANs for Improved Quality, Stability, and Variation (link) Generating high-resolution images with a GAN is not easy. Rather than generating a high-resolution image from a latent vector in a single step, the model first learns to generate low-resolution images (4x4) and then progressively adds layers, learning to produce high-resolution images (1024x1024). When a new layer is added, a fade..
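
Below is a minimal PyTorch sketch of the fade-in idea described above: while a new higher-resolution block is being introduced, its RGB output is blended with an upsampled RGB output of the previous resolution using a weight alpha that ramps from 0 to 1. The module and argument names (`FadeInGenerator`, `to_rgb_prev`, `alpha`, the conv sizes in the usage example) are illustrative assumptions, not the official ProGAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FadeInGenerator(nn.Module):
    """Sketch of ProGAN-style layer fade-in (hypothetical module names)."""
    def __init__(self, prev_block, new_block, to_rgb_prev, to_rgb_new):
        super().__init__()
        self.prev_block = prev_block    # layers already trained at the lower resolution (e.g. 4x4)
        self.new_block = new_block      # newly added layers for the higher resolution (e.g. 8x8)
        self.to_rgb_prev = to_rgb_prev  # 1x1 conv: low-res features -> RGB
        self.to_rgb_new = to_rgb_new    # 1x1 conv: high-res features -> RGB

    def forward(self, x: torch.Tensor, alpha: float) -> torch.Tensor:
        low = self.prev_block(x)  # features at the previous resolution
        # Path 1: skip the new block, just upsample the old RGB output.
        rgb_low = F.interpolate(self.to_rgb_prev(low), scale_factor=2, mode="nearest")
        # Path 2: go through the newly added high-resolution block.
        up = F.interpolate(low, scale_factor=2, mode="nearest")
        rgb_high = self.to_rgb_new(self.new_block(up))
        # Fade-in: alpha ramps 0 -> 1 during training, smoothly handing over to the new layers.
        return (1.0 - alpha) * rgb_low + alpha * rgb_high

# Example: growing from 4x4 to 8x8 with dummy conv blocks (hypothetical channel sizes).
gen = FadeInGenerator(
    prev_block=nn.Conv2d(512, 512, 3, padding=1),
    new_block=nn.Conv2d(512, 512, 3, padding=1),
    to_rgb_prev=nn.Conv2d(512, 3, 1),
    to_rgb_new=nn.Conv2d(512, 3, 1),
)
img = gen(torch.randn(1, 512, 4, 4), alpha=0.3)  # -> shape (1, 3, 8, 8)
```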

[GAN Overview] Summary of Major GAN Models (GAN survey paper review)

Based on Generative Adversarial Networks in Computer Vision: A Survey and Taxonomy (CSUR 2021), this post summarizes the most important GAN models. The survey covers a wider range of models, but only a subset is summarized here. The post assumes some background knowledge of GANs; since the survey only gives a brief summary of each model, additional material from further research is included, and external articles worth reading are linked. Paper: https://dl.acm.org/doi/pdf/10.1145/3439723 Code: https://github.com/sheqi/GAN_Review The table of contents is as follows. Introduction B..

[Overview] Attention Summary - (2) seq2seq, +attention

Order: (1) LSTM (2) seq2seq, +attention (3) Show, Attend and Tell Reference: Visualization of the seq2seq model Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) Translations: Chinese (Simplified), Japanese, Korean, Russian, Turkish Watch: MIT's Deep Learning State of the Art lecture referencing this post May 25th update: New graphics (RNN animation, word embedd..

[Overview] Attention Summary - (1) LSTM

Order: (1) LSTM (2) seq2seq, +attention (3) Show, Attend and Tell Reference: colah.github.io/posts/2015-08-Understanding-LSTMs/ Recurrent Neural Network The basic RNN structure is shown above. The previous state is fed in together with the current input, so the network learns the relationship with earlier inputs as well. However, with this structure, the longer the input sequence grows, the more the later parts of the network forget information from the earlier parts. LSTM was proposed to overcome this problem. Long Short Term Memory Above is the structure of a simple RNN, below is the structure of an LSTM. The meaning of each symbol is as follows..
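
As a concrete illustration of the gating mechanism the excerpt leads into, here is a minimal PyTorch sketch of a single LSTM step in the standard formulation (forget, input, and output gates plus a cell state that carries long-range information). The class name `LSTMCellSketch` and the fused-gate layout are illustrative choices, not taken from the post.

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    """Minimal sketch of one LSTM step (standard gate equations)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One linear layer computes all four gate pre-activations at once.
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, h_prev, c_prev):
        z = self.gates(torch.cat([x, h_prev], dim=-1))
        f, i, g, o = z.chunk(4, dim=-1)
        f = torch.sigmoid(f)    # forget gate: how much of the old cell state to keep
        i = torch.sigmoid(i)    # input gate: how much new information to write
        g = torch.tanh(g)       # candidate cell state
        o = torch.sigmoid(o)    # output gate: how much of the cell state to expose
        c = f * c_prev + i * g  # cell state carries long-range information
        h = o * torch.tanh(c)   # hidden state passed to the next step
        return h, c

# Usage: step through one input with zero-initialized states.
cell = LSTMCellSketch(input_size=10, hidden_size=20)
h = c = torch.zeros(1, 20)
h, c = cell(torch.randn(1, 10), h, c)
```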

[Overview] YOLO-Family Object Detection Summary - (1) YOLO

Order: (1) YOLO (2016) (2) YOLOv2 (3) YOLOv3 (4) YOLOv4 YOLO (2016) Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. Paper: www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Redmon_You_Only_Look_CVPR_2016_paper.pdf Official code: pjreddie.com/darknet/yolo/ The model structure proposed in the paper is shown above..
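
For reference, a minimal sketch of how the YOLO (2016) paper lays out its predictions, assuming the paper's defaults of S=7 grid cells per side, B=2 boxes per cell, and C=20 PASCAL VOC classes; the variable names and the dummy output tensor here are illustrative, not from the official Darknet code.

```python
import torch

# YOLO (2016) defaults assumed here: S=7 grid, B=2 boxes per cell, C=20 VOC classes.
S, B, C = 7, 2, 20

# Dummy network output: one S x S x (B*5 + C) tensor per image.
pred = torch.randn(1, S, S, B * 5 + C)

boxes = pred[..., : B * 5].reshape(1, S, S, B, 5)  # per box: x, y, w, h, confidence
class_probs = pred[..., B * 5 :]                   # per cell: conditional class probabilities

# Class-specific confidence for each box = box confidence * class probability.
scores = boxes[..., 4:5] * class_probs.unsqueeze(3)
print(scores.shape)  # torch.Size([1, 7, 7, 2, 20])
```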

[Overview] R-CNN-Family Object Detection Summary (Two-stage detectors)

Order: 1. R-CNN (2014) 2. SPP-Net (2015) 3. Fast R-CNN (2015) 4. Faster R-CNN (2016) 5. FPN (2017) (to be added) Reference: yeomko.tistory.com/category/%EA%B0%88%EC%95%84%EB%A8%B9%EB%8A%94%20Object%20Detection?page=1 1. R-CNN (2014) Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recogniti..

๋ฐ˜์‘ํ˜•