I believe that steps are submitted and run in order. To confirm this, I went ahead and tested it on a test cluster using your code with the small changes below.
import boto3

client = boto3.client('emr')
clusterId = 'j-XXXXXXXXXXXXX'  # replace with your cluster ID

steps = []
for i in range(1, 10):
    args = [
        'spark-example',
        '--deploy-mode',
        'cluster',
        'SparkPi',
        '10'
    ]
    step = {
        'Name': 'TestStepOrder' + str(i),
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': args
        }
    }
    steps.append(step)

response = client.add_job_flow_steps(JobFlowId=clusterId, Steps=steps)
I can confirm the order is maintained as expected.

I ran a second round of tests with step concurrency set to 5 to see whether that has any impact. In that case, by looking at the Start Time of each step, I can confirm the order is still maintained.
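For reference, here is a minimal sketch of how I checked the start times. Step concurrency can be raised on a running cluster with `modify_cluster`, and the per-step start times inspected with `list_steps` (which returns steps newest-first, so sorting by start time recovers the execution order). The live API calls are shown in comments, and a sample response of the same shape is used in their place so the snippet is self-contained; the timestamps are illustrative, not from my actual run:

```python
from datetime import datetime

# On a live cluster you would do (assuming clusterId is a valid cluster ID):
# client = boto3.client('emr')
# client.modify_cluster(ClusterId=clusterId, StepConcurrencyLevel=5)
# response = client.list_steps(ClusterId=clusterId)

# Sample of the shape list_steps returns (illustrative times, newest first):
response = {
    'Steps': [
        {'Name': 'TestStepOrder3',
         'Status': {'Timeline': {'StartDateTime': datetime(2021, 12, 1, 10, 2)}}},
        {'Name': 'TestStepOrder2',
         'Status': {'Timeline': {'StartDateTime': datetime(2021, 12, 1, 10, 1)}}},
        {'Name': 'TestStepOrder1',
         'Status': {'Timeline': {'StartDateTime': datetime(2021, 12, 1, 10, 0)}}},
    ]
}

# Sort by start time to see the order in which steps actually began running.
ordered = sorted(response['Steps'],
                 key=lambda s: s['Status']['Timeline']['StartDateTime'])
print([s['Name'] for s in ordered])
# → ['TestStepOrder1', 'TestStepOrder2', 'TestStepOrder3']
```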
I'd be interested to know more about how you are getting the order mixed up; please share steps to reproduce the behavior you are observing.

Note: I'm using the latest boto3 version (1.20.26); not sure if that makes any difference.