Data Wrangler: Data Flow: Export to S3 using Jupyter Notebook


I created a data flow using Data Wrangler. When I export it to S3 using the Jupyter Notebook option and run the notebook, I get the error below every time the processing job is created:

Error: An error occurred (ResourceLimitExceeded) when calling the CreateProcessingJob operation: The account-level service limit 'ml.m5.4xlarge for processing job usage' is 0 Instances, with current utilization of 0 Instances and a request delta of 2 Instances. Please contact AWS support to request an increase for this limit.

Please provide a solution for this. I have already increased the service quota for running apps and notebook instances, but the same issue still occurs.

1 Answer

When you export a data flow to S3, you're starting a SageMaker processing job that will run the script and store data in S3.
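For context, the generated notebook builds that processing job roughly like the sketch below (the container image URI, role ARN, and variable names are placeholders; your exported notebook will differ in detail). This is why the error names ml.m5.4xlarge and a request delta of 2 instances:

```python
# Minimal sketch of what the exported Data Wrangler notebook does when it
# creates the processing job. Image URI and role ARN are placeholders.
import sagemaker
from sagemaker.processing import Processor

processor = Processor(
    role="arn:aws:iam::111122223333:role/MySageMakerExecutionRole",  # placeholder role ARN
    image_uri="<data-wrangler-processing-container-uri>",            # placeholder image
    instance_count=2,                # matches the "request delta of 2 Instances" in the error
    instance_type="ml.m5.4xlarge",   # the instance type whose quota is currently 0
    sagemaker_session=sagemaker.Session(),
)

# processor.run(...) is what ultimately calls CreateProcessingJob and fails
# with ResourceLimitExceeded while the account quota for this type is 0.
```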

So you need to increase your account-level quota for SageMaker Processing Job usage of the ml.m5.4xlarge instance type (not the quota for running apps or notebook instances).
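You can request that increase through the Service Quotas console, an AWS Support case, or the Service Quotas API. The sketch below uses the API via boto3 and assumes the quota name matches the string shown in the error message; verify the exact name in your account and use the same region as your Studio domain:

```python
# Minimal sketch: look up the ml.m5.4xlarge processing-job quota and request
# an increase with the Service Quotas API. The quota name is assumed to match
# the string in the error; the console or a support case works equally well.
import boto3

sq = boto3.client("service-quotas", region_name="us-east-1")  # use your Studio region

target_name = "ml.m5.4xlarge for processing job usage"
quota_code = None

# Find the quota code for the SageMaker quota whose name matches the error.
paginator = sq.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        if quota["QuotaName"] == target_name:
            quota_code = quota["QuotaCode"]
    if quota_code:
        break

if quota_code:
    # Request at least 2 instances to cover the notebook's request delta.
    sq.request_service_quota_increase(
        ServiceCode="sagemaker",
        QuotaCode=quota_code,
        DesiredValue=2,
    )
```

Once the increase is approved, re-run the exported notebook and the CreateProcessingJob call should go through.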

AWS
Durga_S
answered 2 years ago
AWS
EXPERT
Alex_T
reviewed 2 years ago
