1 Answer
Hi there,
I was able to reproduce this behavior on a ml.m5d.2xlarge notebook instance using similar code.
import tensorflow_hub as hub

# train_examples is the text dataset from the question
tf_hub_embedding_layer = hub.KerasLayer(
    "https://tfhub.dev/google/universal-sentence-encoder/4",
    trainable=False,
    name="universal_sentence_encoder")
embeddings = tf_hub_embedding_layer(train_examples)
In my case, the code ran successfully with 25K lines of text. However, when I ran it with 50K lines of text (train_examples.repeat(2)), I also hit OOM errors. Running free -h in a terminal confirmed that the notebook instance did in fact run out of free memory while the code above was executing, hence the OOM errors:
              total        used        free      shared  buff/cache   available
Mem:            30G         22G        900M        676K        7.2G        7.6G
Swap:            0B          0B          0B
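For reference, you can run the same check from inside the notebook itself; here is a quick sketch, assuming the psutil package is installed in the kernel's environment:

import psutil

# Report total and currently available system memory in GiB
mem = psutil.virtual_memory()
print(f"total: {mem.total / 2**30:.1f} GiB, "
      f"available: {mem.available / 2**30:.1f} GiB")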
To run code like this, please consider choosing a larger instance type with more memory.
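If a larger instance is not an option, embedding the text in batches may keep peak memory bounded, since only one batch of inputs is materialized at a time. This is just a sketch, assuming train_examples is a tf.data.Dataset of strings and that the batch size of 512 is an arbitrary choice:

import tensorflow as tf

# Embed the dataset batch by batch, then concatenate the results,
# instead of feeding all 50K lines through the layer in one call
embeddings = tf.concat(
    [tf_hub_embedding_layer(batch) for batch in train_examples.batch(512)],
    axis=0)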
Hi @Peter_X, I ended up running the experiment on an ml.m5d.4xlarge instance and it succeeded. Having said that, it does not answer the question of whether the amount of memory allocated to a Jupyter notebook (or kernel) can be configured.