How to fine-tune the Llama 2 7B model from JumpStart using PDF data


I have multiple PDF files consisting of paragraphs of text. I need to fine-tune the Llama 2 7B model and then ask questions about the content of the PDFs. Earlier, I tried Llama 2 7B Chat, providing the data by extracting the text from the PDFs using LangChain.

Now I would like to fine-tune the Llama 2 7B model instead. Can someone guide me on how to fine-tune the model with PDF data: what is the correct format for preprocessing the data, and how do I pass the data to the fine-tuning job?

1 Answer

Hi, the optimal path is to use AWS Textract to convert your PDFs back to text, and then fine-tune your model on that text.

AWS Textract service page: https://aws.amazon.com/textract/

Textract developer guide: https://docs.aws.amazon.com/textract/latest/dg/what-is.html

For a detailed use case of Textract applied to ML, this video is very interesting: https://www.youtube.com/watch?v=WA0T8dy0aGQ
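Once Textract has processed a document (for multi-page PDFs that means the asynchronous StartDocumentTextDetection / GetDocumentTextDetection calls via boto3, with the PDF in S3), you get back a list of Block objects. A minimal sketch of recovering plain text from such a response, with the boto3 call itself omitted:

```python
# Sketch: turn a Textract text-detection response into plain text.
# The response shape ("Blocks" with "BlockType"/"Text") follows the
# Textract API; the actual boto3 call to Textract is omitted here.

def textract_blocks_to_text(response: dict) -> str:
    """Join the text of all LINE blocks, in the order Textract returns them."""
    lines = [
        block["Text"]
        for block in response.get("Blocks", [])
        if block["BlockType"] == "LINE"
    ]
    return "\n".join(lines)

# Example with a response shaped like Textract's output:
sample = {
    "Blocks": [
        {"BlockType": "PAGE"},
        {"BlockType": "LINE", "Text": "First paragraph of the PDF."},
        {"BlockType": "LINE", "Text": "Second line."},
        {"BlockType": "WORD", "Text": "First"},  # WORD blocks are skipped
    ]
}
print(textract_blocks_to_text(sample))
```

For long jobs you would page through the results with the NextToken field of the response before joining the lines.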

To apply this to Llama 2 fine-tuning: https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications
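On the data-format part of your question: instruction-style fine-tuning setups (including the one in the post above) typically expect a JSONL file, one JSON object per line, built from your extracted paragraphs. A sketch under that assumption; the field names (instruction/context/response) are illustrative and must match whatever prompt template your fine-tuning setup uses:

```python
# Sketch: convert extracted PDF paragraphs into a JSONL training file.
# Field names (instruction/context/response) are illustrative; with
# SageMaker JumpStart they must match the accompanying prompt template.
import json

paragraphs = [
    "Paragraph one extracted from the PDF.",
    "Paragraph two extracted from the PDF.",
]

records = [
    {
        "instruction": "Answer questions using the following passage.",
        "context": p,
        "response": "",  # fill in the expected answer for each example
    }
    for p in paragraphs
]

with open("train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

For plain domain adaptation (no question/answer pairs), a simpler alternative is to concatenate the extracted text into .txt files and train on those directly.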

Finally, to do that fine-tuning on SageMaker: https://www.linkedin.com/pulse/enhancing-language-models-qlora-efficient-fine-tuning-vraj-routu

There is also a SageMaker notebook for it: https://github.com/philschmid/huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb
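That notebook uses the Hugging Face containers directly; since you mentioned JumpStart, the SageMaker Python SDK also exposes a JumpStartEstimator that wraps JumpStart fine-tuning. A hedged sketch, where the model id and hyperparameter names are assumptions to verify against the current JumpStart documentation:

```python
def launch_llama2_finetuning(train_s3_uri: str):
    """Sketch of launching a JumpStart Llama 2 fine-tuning job.

    Requires the `sagemaker` package and AWS credentials. The model id and
    hyperparameter names below are assumptions; check the current JumpStart
    docs for your SDK version before running.
    """
    # Imported inside the function so this sketch can be read/loaded
    # without the sagemaker package installed.
    from sagemaker.jumpstart.estimator import JumpStartEstimator

    estimator = JumpStartEstimator(
        model_id="meta-textgeneration-llama-2-7b",
        environment={"accept_eula": "true"},  # Llama 2 requires accepting the EULA
    )
    estimator.set_hyperparameters(
        instruction_tuned="True",  # "False" for plain domain adaptation on .txt data
        epoch="3",
    )
    # train_s3_uri points at the S3 prefix holding train.jsonl (and the template)
    estimator.fit({"training": train_s3_uri})
    return estimator
```

After fit() completes, estimator.deploy() gives you an endpoint you can query about the PDF content.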

Best,

Didier

AWS
EXPERT
answered 9 months ago
