AWS SageMaker: Model Parallelism for an NLP GPT-2 model


Hello all,

I am working on an NLP product based on GPT-2, and I have some problems with a script that is included in the AWS repo:

Could you please help me understand, in detail, how to customize this script for my own dataset?

Below is the class of my dataset:

import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):

    def __init__(self, data, tokenizer, randomize=True):
        title, text, Claims = [], [], []
        # Assumes each value in `data` carries 'title', 'text', and 'Claims'
        # fields; adjust the keys to match your own records.
        for k, v in data.items():
            title.append(v['title'])
            text.append(v['text'])
            Claims.append(v['Claims'])
        self.randomize = randomize
        self.tokenizer = tokenizer
        self.title     = title
        self.text      = text
        self.Claims    = Claims

    def __len__(self):
        return len(self.text)

    def __getitem__(self, i):
        input_text = SPECIAL_TOKENS['bos_token'] + self.title[i] + \
                     SPECIAL_TOKENS['sep_token'] + self.text[i] + \
                     SPECIAL_TOKENS['sep_token'] + self.Claims[i] + \
                     SPECIAL_TOKENS['eos_token']

        # The truncation/padding settings here are assumptions; set MAXLEN
        # to fit your model's context window.
        encodings_dict = self.tokenizer(input_text,
                                        truncation=True,
                                        max_length=MAXLEN,
                                        padding='max_length')
        input_ids = encodings_dict['input_ids']
        attention_mask = encodings_dict['attention_mask']
        return {'label': torch.tensor(input_ids),
                'input_ids': torch.tensor(input_ids),
                'attention_mask': torch.tensor(attention_mask)}

Thanks in advance!


asked 3 months ago · 29 views
1 Answer

I can't give you the exact method since I don't know your context, but here are some things you can do:

First, you will need to determine the format and structure of your data. For example, is it stored in a file or in a database? What are the fields or columns that are included in your data?

Based on the structure of your data, you will need to modify the __init__ method of the MyDataset class to correctly parse and extract the relevant data. For example, if your data is stored in a file with three columns, you will need to modify the code to read in these columns and store them in the appropriate variables (e.g., title, text, and Claims).
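For instance, here is a minimal sketch of that parsing step, assuming the data lives in a CSV file with 'title', 'text', and 'claims' columns (the file path and column names are placeholders for whatever your data actually uses):

```python
import csv

def load_columns(path):
    """Read a CSV with 'title', 'text', and 'claims' columns into three lists."""
    title, text, claims = [], [], []
    with open(path, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            title.append(row['title'])
            text.append(row['text'])
            claims.append(row['claims'])
    return title, text, claims
```

The three returned lists can then be stored on self.title, self.text, and self.Claims inside __init__, exactly as in your current class.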

You will also need to modify the __getitem__ method of the MyDataset class to properly process and encode your data for use with the transformer model. For example, you may need to modify the input variable to reflect the structure of your data (e.g., if your data has additional fields or columns that you want to include). You may also need to modify the tokenizer function call to reflect the specific tokenization and encoding options that you want to use.
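As a sketch of that composition step, here is how the fields could be joined into one input string before it is handed to the tokenizer (the special-token strings below are placeholders; use the ones actually registered with your tokenizer):

```python
# Placeholder special tokens; replace with the tokens added to your tokenizer.
SPECIAL_TOKENS = {'bos_token': '<|BOS|>',
                  'sep_token': '<|SEP|>',
                  'eos_token': '<|EOS|>'}

def build_input(title, text, claims):
    """Join record fields into a single sequence, separated by special tokens."""
    return (SPECIAL_TOKENS['bos_token'] + title +
            SPECIAL_TOKENS['sep_token'] + text +
            SPECIAL_TOKENS['sep_token'] + claims +
            SPECIAL_TOKENS['eos_token'])
```

If your data has extra fields, you would append them in the same way, inserting a sep_token between each one so the model can tell the segments apart.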

Finally, you may need to modify the __len__ method of the MyDataset class to accurately return the number of data points in your dataset.
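Putting those pieces together, here is a torch-free sketch of the dataset contract (__len__ plus index access) that your class needs to satisfy; the field names are hypothetical, and in your real class __getitem__ would return the encoded tensors rather than raw strings:

```python
class SketchDataset:
    """Minimal stand-in for a torch Dataset: __len__ plus __getitem__."""

    def __init__(self, title, text, claims):
        # The three lists are parallel, so they must share one length.
        assert len(title) == len(text) == len(claims)
        self.title, self.text, self.claims = title, text, claims

    def __len__(self):
        # Any of the parallel lists works here; they all have the same length.
        return len(self.text)

    def __getitem__(self, i):
        return {'title': self.title[i],
                'text': self.text[i],
                'claims': self.claims[i]}
```

Once __len__ and __getitem__ behave like this, the dataset can be wrapped in a torch DataLoader for batching during training.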

answered 3 months ago
