๋ฐ˜์‘ํ˜•

All posts (176)

Installing the NVIDIA driver, CUDA, cuDNN, and torch on Ubuntu 22.04 (2024 ver)

์„œ๋ฒ„ ์„ธํŒ…์€ ํ•ญ์ƒ ํž˜๋“ค๋‹ค.. ๋ชฉ์ฐจ 0. ubuntu ๋ฒ„์ „ ํ™•์ธ1. nvidia driver ์„ค์น˜2. cuda ์„ค์น˜3. cuDNN ์„ค์น˜4. torch ์„ค์น˜  0. ubuntu ๋ฒ„์ „ ํ™•์ธ์ผ๋‹จ ์‚ฌ์šฉ์ค‘์ธ ubuntu์˜ ๋ฒ„์ „์„ ํ™•์ธํ•ด์ค€๋‹ค.lsb_release -a  1. nvidia driver ์„ค์น˜1-1. ์™ธ๋ถ€์ €์žฅ์†Œ ์ถ”๊ฐ€ nvidia driver์„ ํฌํ•จํ•˜๋Š” ์™ธ๋ถ€์ €์žฅ์†Œ(PPA)๋ฅผ ์ถ”๊ฐ€ํ•˜๊ณ  ํ•ด๋‹น ๋‚ด์šฉ์„ ํŒจํ‚ค์ง€ ๋ฆฌ์ŠคํŠธ์— ์—…๋ฐ์ดํŠธํ•ด์ค€๋‹ค.sudo add-apt-repository ppa:graphics-drivers/ppasudo apt update  1-2. ์„ค์น˜ ๊ฐ€๋Šฅํ•œ ๋“œ๋ผ์ด๋ฒ„ ๋ชฉ๋ก ํ™•์ธ ubuntu-drivers๋ฅผ ์ด์šฉํ•ด ์„ค์น˜ ๊ฐ€๋Šฅํ•œ nvidia driver ๋ชฉ๋ก์„ ํ™•์ธํ•œ๋‹ค.ubuntu-drivers devices..

Sharing my timetable for the Toss developer conference SLASH24

As I filled in my schedule, I ended up picking everything from the data track. There were a few special sessions I wanted to attend as well, but the overlapping data-track sessions were the ones I really didn't want to miss.. Invitations are handed out by lottery, so I really hope I get picked ~~

์นด์นด์˜ค๋ธŒ๋ ˆ์ธ Multimodal LLM Honeybee ๋…ผ๋ฌธ ๋ฆฌ๋ทฐ

์นด์นด์˜ค๋ธŒ๋ ˆ์ธ์—์„œ ์ž‘๋…„ ๋ง Multimodal LLM์ธ Honeybee๋ฅผ ๋ฐœํ‘œํ–ˆ๋‹ค. ์•„์‰ฝ๊ฒŒ๋„ ํ•œ๊ตญ์–ด ๋ชจ๋ธ์€ ์•„๋‹ˆ๊ณ  ์˜์–ด ๋ชจ๋ธ์ด๊ณ , 5๊ฐœ์˜ ๋ฒค์น˜๋งˆํฌ์—์„œ SoTA๋ฅผ ๋‹ฌ์„ฑํ–ˆ๋‹ค๊ณ  ํ•ด์„œ ๋‰ด์Šค๊ฐ€ ์—„์ฒญ ๋งŽ์ด ๋‚˜์™”๋‹ค. ๋…ผ๋ฌธ: https://arxiv.org/pdf/2312.06742.pdf ๊นƒํ—™: https://github.com/kakaobrain/honeybee GitHub - kakaobrain/honeybee: The official implementation of project "Honeybee" The official implementation of project "Honeybee". Contribute to kakaobrain/honeybee development by creating an account o..

[๋”ฅ๋Ÿฌ๋‹ ๋…ผ๋ฌธ๋ฆฌ๋ทฐ] MeZO: Fine-Tuning Language Models with Just Forward Passes (NeurIPS 2023)

๋…ผ๋ฌธ ๋งํฌ: https://arxiv.org/pdf/2305.17333.pdf ๋ฐœํ‘œ ์˜์ƒ: https://neurips.cc/virtual/2023/poster/71437 ์ฝ”๋“œ: https://github.com/princeton-nlp/MeZO NeurIPS 2023 Abstract: Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients us..

[๋”ฅ๋Ÿฌ๋‹ ๋…ผ๋ฌธ๋ฆฌ๋ทฐ] AIM: Scalable Pre-training of Large Autoregressive Image Models (Apple, 2024)

Apple์—์„œ 2024๋…„ 1์›” large pretrained image model์ธ AIM(Autoregressive Image Models)์„ ๋ฐœํ‘œํ–ˆ๋‹ค. ์ฝ”๋“œ์™€ model weight์ด Github์— ๊ณต๊ฐœ๋˜์–ด ์žˆ๋‹ค. ๋…ผ๋ฌธ ๋งํฌ: https://arxiv.org/pdf/2401.08541.pdf GitHub: https://github.com/apple/ml-aim/tree/main AIM์€ LLM์— ์˜๊ฐ์„ ๋ฐ›์•„ ๋งŒ๋“ค์–ด์ง„ ๋Œ€๊ทœ๋ชจ vision ๋ชจ๋ธ์ด๋‹ค. BEiT (2021), Masked autoencoder(MAE) (2021) ๋“ฑ์ด masked language modeling (MLM)์„ ํ†ตํ•ด ์‚ฌ์ „ํ•™์Šต ์‹œํ‚จ ๊ฒƒ๊ณผ ๋‹ค๋ฅด๊ฒŒ, ์ฃผ์–ด์ง„ ํŒจ์น˜๋กœ ๋‹ค์Œ ํŒจ์น˜๋ฅผ ์˜ˆ์ธกํ•˜๋Š” autoregressive object๋ฅผ ์ด์šฉ..

A collection of Korean open-source multimodal models (image-text)

ํ˜น์€ awesome-korean-multimodal ๊ฐ™์€๊ฒƒ ์‚ฌ์‹ค ํ•œ๊ตญ์–ด LLM๋„ ๋งŽ์ด ์—†๊ฑฐ๋‹ˆ์™€, ์˜คํ”ˆ์†Œ์Šค๋กœ ๊ณต๊ฐœ๋œ ํ•œ๊ตญ์–ด ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ LLM(MLLM)์€ ์ •๋ง ์–ผ๋งˆ ์•ˆ๋˜๋Š”๋“ฏ ํ•˜๋‹ค. (์ฐธ๊ณ : ํ•œ๊ตญ์–ด LLM ๋ชจ๋ธ ๋ชจ์Œ - awesome-korean-llm) GitHub - NomaDamas/awesome-korean-llm: Awesome list of Korean Large Language Models. Awesome list of Korean Large Language Models. Contribute to NomaDamas/awesome-korean-llm development by creating an account on GitHub. github.com ํ•œ๊ตญ์–ด multimodal llm ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ m..

Apple์˜ Multimodal LLM Ferret ๋…ผ๋ฌธ ๋ฆฌ๋ทฐ

Apple์—์„œ 2023๋…„ 10์›” ๋‚ด๋†“์€ Multimodal LLM์ธ Ferret์˜ ๋…ผ๋ฌธ์ด๋‹ค. ๋ชจ๋ธ ํฌ๊ธฐ๋Š” 7B, 13B ๋‘๊ฐ€์ง€์ด๋ฉฐ Github์— ์ฝ”๋“œ์™€ checkpoint๊ฐ€ ๊ณต๊ฐœ๋˜์–ด ์žˆ๊ณ , ๋น„์ƒ์—…์  ์šฉ๋„๋กœ ์‚ฌ์šฉ๊ฐ€๋Šฅํ•˜๋‹ค. ๋…ผ๋ฌธ ๋งํฌ: https://arxiv.org/pdf/2310.07704.pdf Github: https://github.com/apple/ml-ferret GitHub - apple/ml-ferret Contribute to apple/ml-ferret development by creating an account on GitHub. github.com Introduction Vision-language learning ๋ชจ๋ธ์˜ ์ฃผ์š”ํ•œ ๋‘ capability๋Š” referring๊ณผ groun..

[Optuna] Optimizing deep learning hyperparameters

Optuna๋Š” ํŒŒ์ด์ฌ ๊ธฐ๋ฐ˜์˜ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ ์ตœ์ ํ™” (hyperparameter optimization) ํ”„๋ ˆ์ž„์›Œํฌ๋กœ, ์‹ฌํ”Œํ•˜๊ณ  ์œ ์—ฐํ•œ API๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ๋ณธ ๊ธ€์—์„œ๋Š” Optuna์˜ ์ฃผ์š” ๊ธฐ๋Šฅ๊ณผ ์‚ฌ์šฉ๋ฐฉ๋ฒ•์„ ๊ฐ„๋‹จํžˆ ์†Œ๊ฐœํ•˜๊ณ ์ž ํ•œ๋‹ค. ๊ณต์‹ Docs: https://optuna.readthedocs.io/en/stable/index.html Optuna: A hyperparameter optimization framework — Optuna 3.4.0 documentation © Copyright 2018, Optuna Contributors. Revision 4ea580fc. optuna.readthedocs.io Basic concepts Optuna๋Š” study์™€ trial์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜ํ•œ๋‹ค. Stu..

[PyTorch] How Autograd works

์œ„ ๋™์˜์ƒ์—์„œ PyTorch Autograd๋ฅผ ์ดํ•ดํ•˜๊ธฐ ์‰ฝ๊ฒŒ ์„ค๋ช…ํ•ด์ฃผ๊ณ  ์žˆ๋‹ค. ๋‹ค์Œ์€ ์œ„ ๋™์˜์ƒ์„ ๊ฐ„๋‹จํžˆ ์ •๋ฆฌํ•œ ๊ธ€์ด๋‹ค. 1. torch.Tensor ๊ฐ tensor์€ ๋‹ค์Œ์˜ attr์„ ๊ฐ–๋Š”๋‹ค data: tensor์˜ ๊ฐ’ grad: tensor์˜ gradient ๊ฐ’. is_leaf์ธ ๊ฒฝ์šฐ์—๋งŒ gradient๊ฐ€ ์ž๋™์œผ๋กœ ์ €์žฅ๋œ๋‹ค. grad_fn: gradient function. ํ•ด๋‹น tensor๊ฐ€ ์–ด๋–ค ์—ฐ์‚ฐ์„ ํ†ตํ•ด forward๋˜์—ˆ๋Š”์ง€์— ๋”ฐ๋ผ ๊ฒฐ์ •๋œ๋‹ค. ex) a * b = c ์ธ ๊ฒฝ์šฐ c์˜ grad_fn์€ MulBackward์ด๋‹ค. is_leaf์ธ ๊ฒฝ์šฐ None is_leaf: (backward ๊ธฐ์ค€) ๊ฐ€์žฅ ๋งˆ์ง€๋ง‰ tensor์ธ์ง€ requires_grad: ๊ณ„์‚ฐ ๊ทธ๋ž˜ํ”„์˜ ์ผ๋ถ€๋กœ ๋“ค์–ด๊ฐˆ ๊ฒƒ์ธ์ง€ 2. gr..

[๋Ÿฌ๋‹ ์ŠคํŒŒํฌ] ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ ์—ฐ์‚ฐ๊ณผ ์ „์ฒ˜๋ฆฌ

spark์˜ ๋ฐ์ดํ„ฐํ”„๋ ˆ์ž„ ์—ฐ์‚ฐ๋“ค์„ ์ด์šฉํ•ด ๋ฐ์ดํ„ฐ ์ „์ฒ˜๋ฆฌ, ๋ณ€ํ™˜, ํ†ต๊ณ„ ๋“ฑ ๋‹ค์–‘ํ•œ ์ผ์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ๋‹ค. ๋‹ค์Œ์€ ๋ช‡๊ฐ€์ง€ ์—ฐ์‚ฐ๋“ค๊ณผ ํ™œ์šฉ ์˜ˆ์‹œ์ด๋‹ค. ํ”„๋กœ์ ์…˜๊ณผ ํ•„ํ„ฐ df = df.select(df.colA, df.colB) # ํ”„๋กœ์ ์…˜ (colA์™€ colB๋งŒ ์„ ํƒ) df = df.where(df.colB 10000")) # colA์˜ ๊ฐ’์ด 10000์ด์ƒ์ด๋ฉด True๋ฅผ ๊ฐ–๋Š” column largeA๋ฅผ ์ถ”๊ฐ€ df = df.drop("colA") # colA ์‚ญ์ œ ์ฐธ๊ณ ) alias์™€..

๋ฐ˜์‘ํ˜•