2 Answers
The problem was related to the display. Adding these extra Chrome options:
options.add_argument("--window-size=800,600")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--enable-automation")
and using a third-party library to simulate a virtual display fixed the problem:
from pyvirtualdisplay import Display

# Start a virtual (Xvfb) display before launching the browser
display = Display(visible=0, size=(800, 600))
display.start()
...
# Stop the virtual display after the browser has quit
display.stop()
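For reference, a minimal sketch of how the pieces fit together. The helper name `build_chrome_args` is illustrative (not from the answer); the Selenium wiring is shown in comments because it needs a real browser, Xvfb, and the `selenium`/`pyvirtualdisplay` packages installed:

```python
def build_chrome_args():
    """Return the Chrome flags used in this answer (illustrative helper)."""
    return [
        "--window-size=800,600",
        "--disable-dev-shm-usage",  # avoid exhausting the small /dev/shm tmpfs
        "--enable-automation",
    ]

# Sketch of wiring it into Selenium behind the virtual display:
#
# from pyvirtualdisplay import Display
# from selenium import webdriver
# from selenium.webdriver.chrome.options import Options
#
# display = Display(visible=0, size=(800, 600))
# display.start()
# options = Options()
# for arg in build_chrome_args():
#     options.add_argument(arg)
# driver = webdriver.Chrome(options=options)
# ...
# driver.quit()
# display.stop()
```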
answered 2 months ago
Hello.
Is it possible that the problem is with the specs of the EC2 instance running the code? Depending on the size of the website you are scraping, the process may stall if the instance specs are too low.
Have you confirmed that this code completes the process normally? For example, does it complete when run on a local PC?
Looking at the documentation, I think the usage of wait is as follows. There may be a problem with the waiting time here, so please check it.
https://www.selenium.dev/documentation/webdriver/waits/
wait = WebDriverWait(driver, 3)
element = wait.until(EC.element_to_be_clickable((By.XPATH, "//button[@class='sc-beySPh gNAvzR mde-consent-accept-btn']")))
element.click()
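An explicit wait like the one above repeatedly polls the page until the condition returns a truthy value or the timeout expires. A minimal pure-Python sketch of that polling behavior (`wait_until` is an illustrative stand-in for Selenium's `WebDriverWait.until`, not Selenium code):

```python
import time

def wait_until(condition, timeout=3.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Mirrors the shape of Selenium's WebDriverWait.until: the truthy result
    is returned; on timeout an exception is raised.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll)
```

With only a 3-second timeout, a page that renders slowly on an underpowered instance can easily time out even though the same code succeeds locally; raising the timeout (e.g. `WebDriverWait(driver, 30)`) is a cheap first check.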
It works perfectly locally, and checking metrics and instance resources (RAM at 47.3%, CPU at 64.9%, I/O, etc.), the instance's maximum resources are not being exceeded.
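Averaged metrics can hide short spikes during the scrape itself, so it may be worth sampling resources on the instance while the job runs. These are standard Linux commands, shown as a sketch; `/dev/shm` is worth checking because headless Chrome is known to fail when that tmpfs is too small (which is what `--disable-dev-shm-usage` works around):

```shell
free -m           # free/used memory in MiB; watch the "available" column
nproc             # number of vCPUs on the instance
df -h /dev/shm    # size of the shared-memory tmpfs used by Chrome
```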