Fine-tune LLAMA2 with DPO (Direct Preference Optimization) in AWS


I'm exploring fine-tuning with DPO and have successfully trained the facebook/opt model (a Hugging Face model) with DPO (ref: https://huggingface.co/blog/dpo-trl). Following that guide, I first performed SFT training and then used the final SFT checkpoint for the DPO training stage.
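For reference, this is roughly the shape of the DPO stage I ran with TRL after SFT. The checkpoint path and dataset name are placeholders, and the exact DPOTrainer arguments vary between TRL versions, so treat this only as a sketch of the flow from the blog post:

```python
# Rough sketch of the DPO stage run with TRL after SFT.
# Paths and the dataset name are placeholders; exact DPOTrainer
# arguments differ across TRL versions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_checkpoint = "./sft-final-checkpoint"  # final checkpoint from the SFT stage
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
model_ref = AutoModelForCausalLM.from_pretrained(sft_checkpoint)  # frozen reference copy
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# Preference dataset with "prompt", "chosen", and "rejected" columns (placeholder name)
train_dataset = load_dataset("my-preference-dataset", split="train")

training_args = TrainingArguments(
    output_dir="./dpo-output",
    per_device_train_batch_size=2,
    learning_rate=5e-7,
    num_train_epochs=1,
)

dpo_trainer = DPOTrainer(
    model,
    model_ref,
    beta=0.1,  # DPO temperature hyperparameter
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
dpo_trainer.train()
```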

Now I'm working on fine-tuning Llama2 with DPO in AWS. I have successfully fine-tuned Llama2 with SageMaker JumpStart, but I'm stuck figuring out how to perform DPO training using the fine-tuned model artifact, which is stored in an S3 bucket.
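One direction I was considering, but have not verified: launch a custom training job with the SageMaker Hugging Face estimator, pass the JumpStart fine-tuned artifact as an S3 input channel, and run a TRL DPO script inside the job. The script name (train_dpo.py), bucket paths, instance type, and framework versions below are all placeholders:

```python
# Unverified sketch: run DPO as a custom SageMaker training job, feeding the
# JumpStart fine-tuned model artifact from S3 as an input channel.
# Script name, S3 URIs, instance type, and framework versions are placeholders.
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

estimator = HuggingFace(
    entry_point="train_dpo.py",      # hypothetical script that unpacks the artifact and runs DPOTrainer
    source_dir="./scripts",
    instance_type="ml.g5.12xlarge",  # placeholder instance type
    instance_count=1,
    role=role,
    transformers_version="4.28",     # check the supported version matrix for your region
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={"beta": 0.1, "epochs": 1},
)

# The "model" channel is downloaded to /opt/ml/input/data/model inside the container,
# so train_dpo.py could untar model.tar.gz from there and load it with from_pretrained.
estimator.fit({
    "model": "s3://your-bucket/jumpstart-finetuned-llama2/model.tar.gz",  # placeholder S3 URI
    "train": "s3://your-bucket/dpo-preference-dataset/",                  # placeholder S3 URI
})
```

I'm not sure whether this is the intended way to reuse a JumpStart artifact, so pointers to the recommended approach would be appreciated.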

It would be helpful if anyone could share resources or insights on how to proceed with DPO training in AWS. Thanks in advance!

Jyothi
Asked 5 months ago · 1,958 views
No answers

