How to fine-tune the Llama 2 7B model from JumpStart using PDF data


I have multiple PDF files, each consisting of a number of paragraphs, and I need to fine-tune the Llama 2 7B model so that I can ask questions about the content of the PDFs. Earlier, I tried Llama 2 7B Chat, providing the data at inference time by extracting the text from the PDFs with LangChain.

Now I would like to fine-tune the Llama 2 7B model itself. Can someone guide me on how to fine-tune the model with PDF data, i.e. what the correct format is for preprocessing the data and how to pass the data to the fine-tuning job?

1 Answer

Hi, the optimal path is to use Amazon Textract to extract the text from your PDFs, and then train your model on that text.

AWS Textract service page: https://aws.amazon.com/textract/

Textract developer guide: https://docs.aws.amazon.com/textract/latest/dg/what-is.html

For a detailed use case of Textract applied to ML, this video is very interesting: https://www.youtube.com/watch?v=WA0T8dy0aGQ
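As a rough sketch, the extraction step with boto3 can look like the snippet below. The bucket and file names are placeholders, and a production pipeline should paginate results with NextToken and use SNS notifications instead of polling:

```python
import time
import boto3

textract = boto3.client("textract")

# Start an asynchronous text-detection job for a multi-page PDF stored in S3.
# "my-bucket" and the object key are placeholders.
job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": "my-bucket", "Name": "docs/report.pdf"}}
)

# Poll until the job finishes (a real pipeline would subscribe to an SNS topic).
while True:
    result = textract.get_document_text_detection(JobId=job["JobId"])
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

# Keep only LINE blocks and join them into plain text for training.
# Large documents return additional pages via NextToken, not handled here.
lines = [b["Text"] for b in result.get("Blocks", []) if b["BlockType"] == "LINE"]
plain_text = "\n".join(lines)
```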

Then, to apply this to Llama 2 fine-tuning: https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications
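On the data format question: fine-tuning scripts generally expect either plain-text files (domain adaptation on the raw document text) or JSON Lines with instruction-style fields (when you have question/answer pairs about the content). The exact field names depend on the training script or JumpStart template you use, so check its documentation. A minimal sketch, assuming hypothetical instruction/context/response fields and made-up example content:

```python
import json

# Hypothetical question/answer pairs written against the extracted text.
# Field names must match whatever the fine-tuning script or template expects.
examples = [
    {
        "instruction": "What does the onboarding section of the handbook cover?",
        "context": "New hires complete IT setup and compliance training in week one...",
        "response": "It covers IT setup and compliance training during the first week.",
    },
]

# Write one JSON object per line (JSONL), the format most scripts consume.
with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```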

Finally, to do that fine-tuning on SageMaker with QLoRA: https://www.linkedin.com/pulse/enhancing-language-models-qlora-efficient-fine-tuning-vraj-routu
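The article above relies on QLoRA. A minimal sketch of the idea with Hugging Face transformers, peft, and bitsandbytes is shown below; the model ID is gated (you must accept Meta's license on the Hub), and the LoRA hyperparameters are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # gated model: requires license acceptance

# Load the base model in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach small trainable LoRA adapters; only these weights are updated during training.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```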

There is also a SageMaker notebook for it: https://github.com/philschmid/huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb
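If you go the SageMaker training-job route from that notebook, launching the job looks roughly like the sketch below. The entry point, instance type, framework versions, and hyperparameters here are assumptions; take the exact values from the linked notebook:

```python
import sagemaker
from sagemaker.huggingface import HuggingFace

role = sagemaker.get_execution_role()

# Entry point, versions, and hyperparameters below are illustrative placeholders.
estimator = HuggingFace(
    entry_point="run_clm.py",
    source_dir="scripts",
    instance_type="ml.g5.4xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.28",
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={
        "model_id": "meta-llama/Llama-2-7b-hf",
        "epochs": 3,
        "per_device_train_batch_size": 2,
        "lr": 2e-4,
    },
)

# Train on the JSONL data uploaded to S3 (bucket and prefix are placeholders).
estimator.fit({"training": "s3://my-bucket/llama2-train-data/"})
```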

Best,

Didier
