How to fine-tune the Llama 2 7B model from JumpStart using PDF data


I have multiple PDFs consisting of a bunch of paragraphs. I need to fine-tune the Llama 2 7B model and then ask questions about the content of the PDFs. Earlier, I tried Llama 2 7B Chat, providing it data extracted from the PDFs with LangChain.

Now I would like to fine-tune the Llama 2 7B model instead. Can someone guide me on how to fine-tune the model with PDF data: what is the correct format for preprocessing the data, and how do I pass the data to the fine-tuning job?

1 Answer

Hi. The optimal path is to use Amazon Textract to convert your PDFs back to text, and then train your ML model on that text.

Amazon Textract service page:

Textract developer guide:

For a detailed use case of Textract applied to ML, this video is very interesting:
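For reference, a minimal sketch of extracting the text from a PDF with Textract via boto3. PDFs are not supported by the synchronous DetectDocumentText call, so this uses the asynchronous StartDocumentTextDetection API against a file in S3; bucket and key names are placeholders you would replace with your own:

```python
import time


def extract_lines(response):
    """Collect the text of all LINE blocks from a Textract response page."""
    return [b["Text"] for b in response.get("Blocks", []) if b["BlockType"] == "LINE"]


def textract_pdf_to_text(bucket, key, region="us-east-1"):
    """Run asynchronous Textract text detection on a PDF stored in S3.

    Needs AWS credentials; boto3 is imported lazily so the pure helper
    above can be used without the AWS SDK installed.
    """
    import boto3

    client = boto3.client("textract", region_name=region)
    job = client.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    # Poll until the job finishes (production code should back off or use SNS).
    while True:
        result = client.get_document_text_detection(JobId=job["JobId"])
        if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
            break
        time.sleep(5)
    lines = extract_lines(result)
    # Follow NextToken to collect the remaining pages of a multi-page PDF.
    while "NextToken" in result:
        result = client.get_document_text_detection(
            JobId=job["JobId"], NextToken=result["NextToken"]
        )
        lines.extend(extract_lines(result))
    return "\n".join(lines)
```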

To apply this to Llama 2 fine-tuning:
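On the question of data format: for domain adaptation (continued pre-training on raw text, rather than instruction tuning), the JumpStart Llama 2 examples accept a JSON Lines training file with one "text" field per example. A sketch of chunking the Textract output into such a file; the field name and chunk size here are assumptions, so check the notebook you end up using:

```python
import json


def paragraphs_to_jsonl(text, out_path, max_chars=2000):
    """Split extracted text into paragraph-based chunks and write one
    JSON object per line, each with a single "text" field."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = (current + "\n\n" + para) if current else para
    if current:
        chunks.append(current)
    with open(out_path, "w") as f:
        for chunk in chunks:
            f.write(json.dumps({"text": chunk}) + "\n")
    return len(chunks)
```

Keeping whole paragraphs together (rather than splitting mid-sentence) tends to give the model cleaner training examples.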

Finally, to run that fine-tuning on SageMaker:

There is a SageMaker notebook for it:
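To give an idea of the shape of the code, here is a sketch of launching the job with the SageMaker Python SDK's JumpStartEstimator. The model ID and hyperparameter names follow publicly documented JumpStart Llama 2 examples but are assumptions here; the notebook above is authoritative:

```python
def build_hyperparameters(epochs=3, instruction_tuned=False):
    """Hyperparameter names assumed from the JumpStart Llama 2 examples;
    JumpStart expects string values."""
    return {
        "epoch": str(epochs),
        "instruction_tuned": "True" if instruction_tuned else "False",
    }


def launch_finetuning(train_s3_uri, epochs=3):
    """Kick off JumpStart fine-tuning on the JSONL data in S3.

    Imported lazily: this needs the SageMaker SDK and AWS credentials,
    so nothing runs at import time.
    """
    from sagemaker.jumpstart.estimator import JumpStartEstimator

    estimator = JumpStartEstimator(
        model_id="meta-textgeneration-llama-2-7b",
        environment={"accept_eula": "true"},  # Llama 2 requires accepting the EULA
        hyperparameters=build_hyperparameters(epochs=epochs),
    )
    estimator.fit({"training": train_s3_uri})
    return estimator
```

After training completes, `estimator.deploy()` gives you an endpoint you can query about the PDF content.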



AWS
answered 6 months ago
