1 answer
Hi there,
I was able to reproduce this behavior on a ml.m5d.2xlarge notebook instance using similar code.
import tensorflow_hub as hub

tf_hub_embedding_layer = hub.KerasLayer("https://tfhub.dev/google/universal-sentence-encoder/4",
                                        trainable=False,
                                        name="universal_sentence_encoder")
embeddings = tf_hub_embedding_layer(train_examples)
In my case, it ran successfully with 25K lines of text. However, when I ran it with 50K lines (train_examples.repeat(2)), I also hit OOM errors. Running free -h in the terminal confirmed that the notebook instance did in fact run out of free memory while executing the code above, hence the OOM errors.
              total        used        free      shared  buff/cache   available
Mem:            30G         22G        900M        676K        7.2G        7.6G
Swap:            0B          0B          0B
To run code like this, please consider choosing a larger instance type with more memory.
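Besides scaling up the instance, peak memory can often be reduced by embedding the text in batches and concatenating the results, instead of materializing all 50K embeddings in a single call. A minimal sketch of that pattern, using a NumPy stand-in for the hub layer (in real code, tf_hub_embedding_layer would replace embed_batch; the batch sizes here are illustrative):

```python
import numpy as np

EMBED_DIM = 512  # universal-sentence-encoder/4 outputs 512-dim vectors


def embed_batch(texts):
    # Stand-in for tf_hub_embedding_layer(texts): one 512-dim
    # vector per input string.
    return np.zeros((len(texts), EMBED_DIM), dtype=np.float32)


def embed_in_batches(texts, batch_size=1024):
    # Embed batch_size lines at a time, so only one batch of
    # activations is resident in memory at once.
    chunks = [embed_batch(texts[i:i + batch_size])
              for i in range(0, len(texts), batch_size)]
    return np.concatenate(chunks, axis=0)


embeddings = embed_in_batches(["line"] * 2500, batch_size=1000)
print(embeddings.shape)  # (2500, 512)
```

The same idea applies when train_examples is a tf.data.Dataset: iterate over dataset.batch(batch_size) and embed each batch separately rather than feeding the whole dataset to the layer at once.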
Hi @Peter_X, I ended up running the experiment on an ml.m5d.4xlarge instance and was successful. That said, it does not answer the question of whether the memory allocated to a Jupyter notebook (or kernel) can be configured.
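The thread leaves that question open. On Linux, one kernel-side option (a general OS mechanism, not a SageMaker-specific feature) is to cap the kernel process's virtual address space with the standard-library resource module, so a runaway cell fails with a MemoryError inside the notebook instead of exhausting the whole instance:

```python
import resource


def cap_kernel_memory(max_bytes):
    # Limit this process's virtual address space. Allocations past
    # the cap raise MemoryError in the kernel rather than letting
    # the instance run out of memory and trigger the OOM killer.
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    if hard != resource.RLIM_INFINITY:
        max_bytes = min(max_bytes, hard)  # cannot exceed the hard limit
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))
    return resource.getrlimit(resource.RLIMIT_AS)[0]


# e.g. cap a kernel at 24 GiB on a 30 GB instance, leaving headroom
print(cap_kernel_memory(24 * 1024**3))
```

Running this at the top of a notebook caps only that kernel; RLIMIT_AS limits address space (not resident memory), so the cap should be set with some headroom above the expected working set.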