Get the Last Execution Code block time on EMR notebook/workspace


I have an EMR workspace containing 4 Jupyter notebooks in which PySpark code blocks are run. I want to get the time of the last code block execution across all 4 notebooks, to determine when that EMR workspace was last used, via an API if one is available.

Basically, the goal is to determine whether the launched EMR cluster is genuinely idle in the Waiting state, or whether it is being used by the underlying EMR workspace to execute code snippets in the Jupyter notebooks.

Thanks in advance.

Sukrit
Asked 1 month ago · 144 views
1 Answer
Accepted Answer

I am not aware of any built-in mechanism for this use case.

But you can add some custom logic to check whether any applications from the user livy are running. The livy user is what EMR notebooks use to submit jobs to the EMR cluster.

[root@ip-172-31-42-13 ~]# yarn application -list |grep -i hadoop
24/05/08 11:27:15 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-13.ec2.internal/172.31.42.13:8032
24/05/08 11:27:15 INFO client.AHSProxy: Connecting to Application History server at ip-172-31-42-13.ec2.internal/172.31.42.13:10200
[root@ip-172-31-42-13 ~]# echo $?
1
[root@ip-172-31-42-13 ~]#
[root@ip-172-31-42-13 ~]#
[root@ip-172-31-42-13 ~]# yarn application -list |grep -i livy
24/05/08 11:27:22 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-42-13.ec2.internal/172.31.42.13:8032
24/05/08 11:27:22 INFO client.AHSProxy: Connecting to Application History server at ip-172-31-42-13.ec2.internal/172.31.42.13:10200
application_1715160855732_0002	      livy-session-1	               SPARK	      livy	   default	           RUNNING	         UNDEFINED	            10%	http://ip-172-31-42-13.ec2.internal:4041
[root@ip-172-31-42-13 ~]# echo $?
0

echo $? shows the exit status of the previous command; if it is 0, a livy application is present and the session is active. But that doesn't necessarily mean a notebook execution is in progress or has just finished. It could also be the case that the kernel is simply idle, waiting for a command.
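The check above can be wrapped in a small script. A minimal sketch, assuming it runs on the EMR primary node with the `yarn` CLI on the PATH; the parsing relies on the user being the fourth whitespace-separated field of `yarn application -list` output, as in the listing above:

```shell
#!/bin/bash
# Check whether the EMR workspace is in use by looking for YARN
# applications submitted by the "livy" user.

check_livy_sessions() {
  # Reads `yarn application -list` output on stdin and prints the
  # application IDs of livy-owned applications, one per line.
  awk '$4 == "livy" { print $1 }'
}

apps=$(yarn application -list 2>/dev/null | check_livy_sessions)
if [ -n "$apps" ]; then
  echo "Workspace appears in use; livy applications: $apps"
else
  echo "No livy applications found; cluster may be idle in Waiting state"
fi
```

Note this inherits the same caveat: a listed application only proves the session exists, not that a notebook cell is actively executing.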

To know whether a job is actually executing, you would need to track it in YARN using the corresponding application ID.
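For example, once you have an application ID from the listing, `yarn application -status` reports its state. A sketch, using the example ID from the session listing above (substitute your own) and a small helper to pull the State line out of the status report:

```shell
#!/bin bash
# Extract the application state from `yarn application -status` output,
# whose report contains a line of the form "	State : RUNNING".
app_state() {
  grep -E -m1 '^[[:space:]]*State :' | awk -F': ' '{ print $2 }'
}

app_id="application_1715160855732_0002"   # example ID; substitute your own
yarn application -status "$app_id" 2>/dev/null | app_state
```

A state of RUNNING with advancing Progress suggests active execution; FINISHED or KILLED means the job is done even if the Livy session still exists.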

AWS
answered 25 days ago
AWS Support Engineer
Reviewed 20 days ago
