
Can AWS Device Farm runs be split to avoid timeout limitations?


I'm currently running automated tests using AWS Device Farm with Java, Appium, and TestNG. My test suite sometimes takes over two hours to complete, which causes AWS to terminate the run. I'm looking for solutions to this problem:

  • Is there a way to increase the timeout limit?
  • Can I split my test runs within the same project?
  • Are there best practices for dividing test suites in Device Farm?
  • How do others handle extensive test automation in Device Farm?

Any suggestions or recommendations would be greatly appreciated. Thank you in advance!

asked 10 months ago · 137 views
2 Answers

To address your concerns about AWS Device Farm runs timing out, there are several approaches you can consider:

  1. Increasing the timeout limit: Unfortunately, there is a hard limit of 150 minutes (2.5 hours) on the duration of an automated test run in AWS Device Farm, and it cannot be extended.

  2. Splitting test runs: Yes, you can split your test runs within the same project. This is actually a recommended approach for handling extensive test suites that exceed the timeout limit. You can create multiple smaller test runs, each containing a subset of your test cases, to ensure they complete within the 150-minute limit.

  3. Best practices for dividing test suites: When splitting your test suite, consider the following approaches:

  • Group tests by functionality or feature areas
  • Separate long-running tests from shorter ones
  • Prioritize critical tests in earlier runs
  • Use test data to estimate run times and balance the load across multiple runs
  4. Handling extensive test automation: To manage large-scale test automation in Device Farm:

  • Use custom test environments for faster execution times, as they run tests more similarly to local environments
  • Optimize your test code to reduce execution time where possible
  • Consider running tests in parallel across multiple devices to maximize efficiency
  • Regularly review and refactor your test suite to remove redundant or unnecessary tests
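As a rough illustration of the load-balancing bullet above, timing data from previous runs can be used to greedily pack tests into shards so each run stays well under the limit. This is only a sketch; the test names and durations below are made up:

```python
# Hypothetical timing data (test name -> minutes), e.g. gathered from past runs.
durations = {
    "checkout_flow": 25, "login": 5, "search": 12,
    "profile": 8, "payments": 30, "notifications": 10,
}

def balance_shards(durations, num_shards):
    """Assign the longest tests first, each to the currently lightest shard."""
    shards = [{"tests": [], "total": 0} for _ in range(num_shards)]
    for name, minutes in sorted(durations.items(), key=lambda kv: -kv[1]):
        lightest = min(shards, key=lambda s: s["total"])
        lightest["tests"].append(name)
        lightest["total"] += minutes
    return shards

for i, shard in enumerate(balance_shards(durations, 2)):
    print(f"shard {i}: {shard['total']} min -> {shard['tests']}")
```

Each shard then becomes its own Device Farm run (or its own test spec, as in the sample script below in this thread's second answer style), keeping every run comfortably inside the 150-minute window.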

Additionally, it's worth noting that Device Farm provides granular reporting in the standard environment, which can lead to longer execution times compared to running tests locally. If faster execution is a priority, consider using a custom test environment.

Remember to set your execution timeout with some buffer. For example, if your tests typically take 120 minutes, set the timeout to 140 minutes to allow for some variability while staying within the 150-minute limit.
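The buffered timeout can be set programmatically: `schedule_run` accepts an `executionConfiguration` with a `jobTimeoutMinutes` field. A minimal sketch, where the helper name, run name, and validation bounds are illustrative:

```python
def run_kwargs(project_arn, app_arn, device_pool_arn, test_package_arn,
               test_spec_arn, timeout_minutes=140):
    """Build keyword arguments for boto3's schedule_run, with a buffered
    per-job timeout kept under Device Farm's 150-minute ceiling."""
    if not 0 < timeout_minutes <= 150:
        raise ValueError("jobTimeoutMinutes must be between 1 and 150")
    return {
        "projectArn": project_arn,
        "appArn": app_arn,
        "devicePoolArn": device_pool_arn,
        "name": "run-with-timeout-buffer",
        "test": {
            "type": "APPIUM_PYTHON",
            "testSpecArn": test_spec_arn,
            "testPackageArn": test_package_arn,
        },
        # Per-device execution timeout for this run, in minutes.
        "executionConfiguration": {"jobTimeoutMinutes": timeout_minutes},
    }

# Usage (with a real boto3 client): client.schedule_run(**run_kwargs(...))
```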

By implementing these strategies, you should be able to work within the constraints of AWS Device Farm while still achieving comprehensive test coverage for your application.
Sources
Setting the execution timeout for test runs in AWS Device Farm - AWS Device Farm
Runs in AWS Device Farm - AWS Device Farm
Test environments in AWS Device Farm - AWS Device Farm
Limits in AWS Device Farm - AWS Device Farm

answered 10 months ago
AWS
SUPPORT ENGINEER
reviewed 10 months ago

Please also review the following sample script. It shows an example architecture for running parallel tests in Device Farm, which you may consider trying after breaking up your test suites.

import boto3
import os
import requests
import string
import random
import time
import datetime
import json
import tempfile

# The following script runs a test through Device Farm
#
# Things you have to change:
config = {
    # This is our app under test.
    "appFilePath":"app-debug.apk",
    "projectArn": "arn:aws:devicefarm:us-west-2:111122223333:project:1b99bcff-1111-2222-ab2f-8c3c733c55ed",
    "testSpecArn":"arn:aws:devicefarm:us-west-2::upload:101e31e8-12ac-11e9-ab14-d663bd873e83",
    # Since we care about the most popular devices, we'll use a curated pool.
    "devicePoolArn":"arn:aws:devicefarm:us-west-2::devicepool:082d10e5-d7d7-48a5-ba5c-b33d66efa1f5",
    "namePrefix":"MyAppTest",
    # This is our test package. This tutorial won't go into how to make these. 
    "testPackage":"tests.zip",
}

NUMBER_OF_SHARDS = 5

sample_test_spec_file = """
version: 0.1

# Phases are collection of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:

  # The pre-test phase includes commands that setup your test environment.
  pre_test:
    commands:

  test:
    commands:
      - echo "I am running test suite number {test_shard_number}"

  # The post test phase includes commands that are run after your tests are executed.
  post_test:
    commands:

# The artifacts phase lets you specify the location where your tests logs, device logs will be stored.
# And also let you specify the location of your test logs and artifacts which you want to be collected by Device Farm.
# These logs and artifacts will be available through ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from following directories
  - $DEVICEFARM_LOG_DIR
"""

client = boto3.client('devicefarm')

unique = config['namePrefix']+"-"+(datetime.date.today().isoformat())+(''.join(random.sample(string.ascii_letters,8)))

print(f"The unique identifier for this run is going to be {unique} -- all uploads will be prefixed with this.")

def upload_df_file(filename, type_, mime='application/octet-stream'):
    response = client.create_upload(projectArn=config['projectArn'],
        name = (unique)+"_"+os.path.basename(filename),
        type=type_,
        contentType=mime
        )
    # Get the upload ARN, which we'll return later.
    upload_arn = response['upload']['arn']
    # We're going to extract the URL of the upload and use Requests to upload it 
    upload_url = response['upload']['url']
    with open(filename, 'rb') as file_stream:
        print(f"Uploading {filename} to Device Farm as {response['upload']['name']}... ",end='')
        put_req = requests.put(upload_url, data=file_stream, headers={"content-type":mime})
        print(' done')
        if not put_req.ok:
            raise Exception("Couldn't upload, requests said we're not ok. Requests says: "+put_req.reason)
    started = datetime.datetime.now()
    while True:
        print(f"Upload of {filename} in state {response['upload']['status']} after "+str(datetime.datetime.now() - started))
        if response['upload']['status'] == 'FAILED':
            raise Exception("The upload failed processing. DeviceFarm says reason is: \n"+(response['upload']['message'] if 'message' in response['upload'] else response['upload']['metadata']))
        if response['upload']['status'] == 'SUCCEEDED':
            break
        time.sleep(5)
        response = client.get_upload(arn=upload_arn)
    print("")
    return upload_arn

our_upload_arn = upload_df_file(config['appFilePath'], "ANDROID_APP")
our_test_package_arn = upload_df_file(config['testPackage'], 'APPIUM_PYTHON_TEST_PACKAGE')
print(our_upload_arn, our_test_package_arn)
# Now that we have those out of the way, we can start the test runs...

running_runs = []
for test_shard_index in range(NUMBER_OF_SHARDS):
    # Use NamedTemporaryFile to create a temporary file
    with tempfile.NamedTemporaryFile(mode='w', suffix='.yml', delete=False) as f:
        f.write(sample_test_spec_file.format(test_shard_number=test_shard_index))
        test_spec_file_name = f.name  # Get the name of the temporary file

    our_test_spec_arn = upload_df_file(test_spec_file_name, "APPIUM_PYTHON_TEST_SPEC")
    os.remove(test_spec_file_name)  # Clean up the temporary file

    name = unique+"_shard_"+str(test_shard_index) 
    response = client.schedule_run(
        projectArn = config["projectArn"],
        appArn = our_upload_arn,
        devicePoolArn = config["devicePoolArn"],
        name=name,
        test = {
            "type":"APPIUM_PYTHON",
            "testSpecArn": our_test_spec_arn,
            "testPackageArn": our_test_package_arn
            }
        )
    running_runs.append(response["run"]["arn"])
    print(f"Run {name} is scheduled as arn {running_runs[-1]} ")

start_time = datetime.datetime.now()
completed_runs = []
try:
    while running_runs:
        time.sleep(10)
        i = 0
        while i < len(running_runs):
            response = client.get_run(arn=running_runs[i])
            state = response['run']['status']
            if state == 'COMPLETED':
                completed_runs.append(running_runs.pop(i))
            else:
                print(f" Run {running_runs[i]} in state {state}, total time "+str(datetime.datetime.now()-start_time))
                i+=1
except (KeyboardInterrupt, Exception):
    # If something goes wrong (or the user interrupts), stop the runs and exit.
    for run_arn in running_runs:
        client.stop_run(arn=run_arn)
    exit(1)

print("Tests finished after "+str(datetime.datetime.now() - start_time))
# now, we pull all the logs.
for shard_index in range(len(completed_runs)):
    jobs_response = client.list_jobs(arn=completed_runs[shard_index])
    # Save the output somewhere. We're using the unique value, but you could use something else
    save_path = os.path.join(os.getcwd(), unique, "shard_"+str(shard_index))
    os.makedirs(save_path, exist_ok=True)
    # Save the last run information
    for job in jobs_response['jobs']:
        # Make a directory for our information
        job_name = job['name']
        os.makedirs(os.path.join(save_path, job_name), exist_ok=True)
        # Get each suite within the job
        suites = client.list_suites(arn=job['arn'])['suites']
        for suite in suites:
            for test in client.list_tests(arn=suite['arn'])['tests']:
                # Get the artifacts
                for artifact_type in ['FILE','SCREENSHOT','LOG']:
                    artifacts = client.list_artifacts(
                        type=artifact_type,
                        arn = test['arn']
                    )['artifacts']
                    for artifact in artifacts:
                        # We replace : because it has a special meaning in Windows & macos
                        path_to = os.path.join(save_path, job_name, suite['name'], test['name'].replace(':','_') )
                        os.makedirs(path_to, exist_ok=True)
                        filename = artifact['type']+"_"+artifact['name']+"."+artifact['extension']
                        artifact_save_path = os.path.join(path_to, filename)
                        print("Downloading "+artifact_save_path)
                        with open(artifact_save_path, 'wb') as fn, requests.get(artifact['url'],allow_redirects=True) as request:
                            fn.write(request.content)
print("Finished")
AWS
SUPPORT ENGINEER
answered 10 months ago
