Paper: https://arxiv.org/pdf/2305.17333.pdf
Talk: https://neurips.cc/virtual/2023/poster/71437
Code: https://github.com/princeton-nlp/MeZO

NeurIPS 2023

Abstract: Fine-tuning language models (LMs) has yielded success on diverse downstream tasks, but as LMs grow in size, backpropagation requires a prohibitively large amount of memory. Zeroth-order (ZO) methods can in principle estimate gradients us..
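The abstract's key idea is that zeroth-order methods estimate gradients from forward passes alone, with no backpropagation. A minimal sketch of the classic two-point SPSA estimator that MeZO builds on (my own illustrative code, not the repository's implementation; the function names and the toy quadratic loss are assumptions):

```python
import numpy as np

def spsa_grad(loss, theta, eps=1e-3, seed=0):
    """Two-point SPSA zeroth-order gradient estimate (sketch).

    Perturbs all parameters with one shared random direction z and uses
    (loss(theta + eps*z) - loss(theta - eps*z)) / (2*eps) * z as the
    gradient estimate -- only two forward passes, no backward pass.
    Fixing the seed lets z be regenerated on the fly instead of stored,
    which is the memory trick the MeZO paper exploits.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)
    proj = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)
    return proj * z

# Toy example: quadratic loss whose true gradient is 2 * (theta - 1).
loss = lambda t: float(np.sum((t - 1.0) ** 2))
theta = np.zeros(4)
g = spsa_grad(loss, theta)

# A single estimate is noisy, but averaging estimates drawn with fresh
# directions converges to the true gradient, here -2 at theta = 0.
g_avg = np.mean([spsa_grad(loss, theta, seed=s) for s in range(5000)], axis=0)
```

In expectation E[z zᵀ] = I, so the averaged estimate is unbiased for the true gradient; MeZO plugs this estimator into SGD so that fine-tuning needs only inference-level memory.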