
Questions tagged with Amazon Elastic Block Store


Can't see EBS Snapshot tags from other accounts

Hi, I have private snapshots in one account (source) that I have shared with another account (target). I am able to see the snapshots themselves from the target account, but the tags are not available, either on the console or via the CLI. This makes it impossible to filter for a desired snapshot from the target account. For background, the user in the target account has the following policy in effect:

```
"Effect": "Allow",
"Action": "ec2:*",
"Resource": "*"
```

Here's an example of what I'm seeing. From the source account:

```
$ aws --region us-east-2 ec2 describe-snapshots --snapshot-ids snap-XXXXX
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Tags": [
                {
                    "Value": "test-snapshot",
                    "Key": "Name"
                }
            ],
            "Encrypted": true,
            "VolumeId": "vol-XXXXX",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:XXXXX:key/mrk-XXXXX",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "XXXXX",
            "SnapshotId": "snap-XXXXX"
        }
    ]
}
```

But from the target account:

```
$ aws --region us-east-2 ec2 describe-snapshots --owner-ids 012345678900 --snapshot-ids snap-11111111111111111
{
    "Snapshots": [
        {
            "Description": "snapshot for testing",
            "VolumeSize": 50,
            "Encrypted": true,
            "VolumeId": "vol-22222222222222222",
            "State": "completed",
            "KmsKeyId": "arn:aws:kms:us-east-2:012345678900:key/mrk-00000000000000000000000000000000",
            "StartTime": "2022-04-19T18:29:36.069Z",
            "Progress": "100%",
            "OwnerId": "012345678900",
            "SnapshotId": "snap-11111111111111111"
        }
    ]
}
```

Any ideas on what's going on here? Cheers!
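For what it's worth: tags on a shared snapshot belong to the owning account, so the target account is not expected to see them. A common workaround is to read the tags with source-account credentials and join them onto the target-side listing by snapshot ID. A minimal sketch of that join; the describe results below are hard-coded stand-ins for `aws ec2 describe-snapshots` output, not live API calls:

```python
# Sketch: join tags fetched with source-account credentials onto the
# snapshots listed from the target account, keyed by SnapshotId.

def merge_tags(shared_snapshots, source_tag_listing):
    """Attach Tags from the source account's view onto the shared snapshots."""
    tags_by_id = {s["SnapshotId"]: s.get("Tags", []) for s in source_tag_listing}
    return [dict(s, Tags=tags_by_id.get(s["SnapshotId"], []))
            for s in shared_snapshots]

# Shapes matching `aws ec2 describe-snapshots` output from the question:
target_view = [{"SnapshotId": "snap-11111111111111111", "State": "completed"}]
source_view = [{"SnapshotId": "snap-11111111111111111",
                "Tags": [{"Key": "Name", "Value": "test-snapshot"}]}]

merged = merge_tags(target_view, source_view)
print(merged[0]["Tags"][0]["Value"])  # test-snapshot
```

In practice the two listings would come from two boto3 sessions or CLI profiles, one per account.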
1 answer · 0 votes · 4 views · asked a month ago

Design questions on ASG, backup/restore, EBS and EFS

Hi experts, We are designing the deployment of a BI application in AWS. We have a default policy to repave each EC2 instance every 14 days, which means rebuilding the whole cluster (instances plus services) and bringing it back to the last known good state. We want a solution with no or minimal downtime. The application has different services provisioned on different EC2 instances: the first server acts as a main node and the rest are additional nodes running different services. We install all additional nodes the same way but configure the services later during code deploy.

1. Can we use an ASG? If yes, how can we distribute the topology? That is, out of 5 instances, if one server is repaved, it should come back up with the same services as before. Is there a way to label instances in an ASG so that a given server is configured for a certain service?
2. Each server has its own EBS volume and stores some data on it. What is the fastest way to copy or attach the EBS volume to the newly repaved server without downtime?
3. For shared data we want to use EFS.
4. For metadata from the embedded Postgres, we need to take a backup periodically and restore it after the repave (a new instance with the same install and services). How can we achieve this without downtime?

We do not want to use a customized AMI, because our AMI creation process is heavy and we would need to change the image often to add installs and configuration. Sorry if this is a lot to answer; any guidance is helpful.
1 answer · 0 votes · 6 views · asked a month ago

Understanding RDS throughput limits

I have trouble understanding what throughput limit(s) my RDS instance is supposed to have. Based on this [blog post](https://aws.amazon.com/blogs/database/making-better-decisions-about-amazon-rds-with-amazon-cloudwatch-metrics/):

> An Amazon RDS instance has two types of throughput limits: Instance level and EBS volume level limits. You can monitor instance level throughput with the metrics WriteThroughput and ReadThroughput. WriteThroughput is the average number of bytes written to disk per second. ReadThroughput is the average number of bytes read from disk per second. For example, a db.m4.16xlarge instance class supports 1,250-MB/s maximum throughput. The EBS volume throughput limit is 250 MiB/s for GP2 storage based on 16 KiB I/O size, and 1,000 MiB/s for Provisioned IOPS storage type. If you experience degraded performance due to a throughput bottleneck, you should validate both of these limits and modify the instance as needed.

My RDS instance is of type db.r6g.8xlarge, which according to https://aws.amazon.com/rds/instance-types/ has 9,000 Mbps (= 1,125 MB/s) of dedicated EBS bandwidth. On the other hand, according to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html the underlying gp2 volume (5 TB) has a 250 MiB/s throughput limit. So how are these two limits applied? Should I be able to get close to 1,125 MB/s, or am I restricted to 250 MiB/s by the gp2 volume limit? In CloudWatch, during bulk write operations, I have observed total (read + write) throughput momentarily reach ~1,000 MB/s, but mostly it was steady around 420 MB/s, i.e. somewhere between the two limits.
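Both limits apply at once: a volume is capped by its own limit, and the instance by its dedicated EBS bandwidth, so the effective ceiling is the smaller of the two. The observed ~420 MB/s would be consistent with the storage being striped across more than one volume, which RDS reportedly does for larger gp2 allocations; the 4-way striping below is an assumption to verify, not a figure taken from this page:

```python
# Sketch: effective throughput is bounded by both the instance's dedicated
# EBS bandwidth and the aggregate volume-level limit.

def effective_throughput_mbps(instance_limit, volume_limit, stripes=1):
    """Smaller of the instance cap and the summed per-volume caps."""
    return min(instance_limit, volume_limit * stripes)

instance_limit = 1125   # db.r6g.8xlarge: 9,000 Mbps EBS bandwidth ~= 1,125 MB/s
gp2_volume = 250        # single gp2 volume throughput cap, MiB/s

print(effective_throughput_mbps(instance_limit, gp2_volume))             # 250
print(effective_throughput_mbps(instance_limit, gp2_volume, stripes=4))  # 1000
```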
2 answers · 0 votes · 10 views · asked 2 months ago

Is there a way to identify an EBS volume inside a Linux EC2 instance using its volume ID?

We are working on a use case where we need to map the disk label within the instance to the corresponding EBS volume ID. While validating some AMIs, we found a difference in behavior between Windows and Linux.

What we need works on Windows (AMI used: Windows_Server-2016-English-Full-Containers-2022.01.19). The following query yields the required result; the serial number of each disk maps to the EBS volume ID. The device driver on this instance was the AWS PV Storage Host Adapter.

```
PS C:\Users\Administrator> Get-WmiObject Win32_DiskDrive | select-object -property serialnumber,index

serialnumber         index
------------         -----
vol0b44250cf530aa7f3     0
vol0f38be626e3137975     1
vol0bdc570ca980fb5fb     2
```

However, on Linux instances (AMI used: amzn2-ami-kernel-5.10-hvm-2.0.20220121.0-x86_64-gp2) the EBS volume ID is not present in the disk metadata. We checked the following inside Linux:

1. Directories within /dev/disk: for the above AMI, the disk serial number is not exposed in the /dev/disk/by-id directory. In the /dev/disk/by-path directory, there are entries of the form `xen-vbd-51712 -> ../../xvda`. Is it possible to map the string `xen-vbd-51712` to the EBS volume?
2. `udevadm info <disk_label>`: this yields the information below, but the volume ID is not present in it.

```
P: /devices/vbd-51712/block/xvda
N: xvda
S: disk/by-path/xen-vbd-51712
S: sda
E: DEVLINKS=/dev/disk/by-path/xen-vbd-51712 /dev/sda
E: DEVNAME=/dev/xvda
E: DEVPATH=/devices/vbd-51712/block/xvda
E: DEVTYPE=disk
E: ID_PART_TABLE_TYPE=gpt
E: ID_PART_TABLE_UUID=08cf25fb-6b18-47c3-b4cb-fea548b3a3a2
E: ID_PATH=xen-vbd-51712
E: ID_PATH_TAG=xen-vbd-51712
E: MAJOR=202
E: MINOR=0
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=34430
```

As per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html, the device name provided when the EBS volume is attached is not guaranteed to match what is visible inside the instance:

> When you attach a volume to your instance, you include a device name for the volume. This device name is used by Amazon EC2. The block device driver for the instance assigns the actual volume name when mounting the volume, and the name assigned can be different from the name that Amazon EC2 uses.

Since our use case can involve frequent addition/removal of EBS volumes on an instance, we want a deterministic method to identify a volume inside a Linux instance. Is there a way to relate a disk within an EC2 instance to its corresponding EBS volume ID?
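On Nitro-based instances this mapping does exist in-guest: EBS volumes attach as NVMe devices whose serial number embeds the volume ID (without the dash), surfacing under /dev/disk/by-id as `nvme-Amazon_Elastic_Block_Store_vol0123...`. On Xen-based instances like the one above there is, as far as I know, no equivalent in-guest identifier. A sketch of parsing those by-id names; the sample name reuses a volume ID from the question:

```python
import re

# Sketch: recover canonical EBS volume IDs from /dev/disk/by-id entry names
# on a Nitro instance. The serial embeds "vol" + hex without the dash;
# the canonical form restores the "vol-" prefix.

PATTERN = re.compile(r"nvme-Amazon_Elastic_Block_Store_vol([0-9a-f]+)$")

def volume_ids(by_id_names):
    """Map by-id entry name -> canonical EBS volume ID."""
    out = {}
    for name in by_id_names:
        m = PATTERN.search(name)
        if m:
            out[name] = "vol-" + m.group(1)
    return out

names = ["nvme-Amazon_Elastic_Block_Store_vol0b44250cf530aa7f3"]
print(volume_ids(names))
```

On a live Nitro instance the input would come from listing /dev/disk/by-id (e.g. `os.listdir`), and the symlink targets give the corresponding /dev/nvme*n* devices.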
1 answer · 0 votes · 81 views · asked 3 months ago

EC2 - Could not set DHCPv4 address: Connection timed out (sa-east-1a)

Our c6i.2xlarge 3-year reserved instance, running for its first 5 days, generated the log entry **Could not set DHCPv4 address: Connection timed out** on Jan 28 02:59:51 UTC, followed by **Failed** and **Configured**. From there on the machine became unresponsive, and AWS finally raised a StatusCheckFailed_Instance at 06:59 UTC. At 09:06 UTC the machine was stopped and restarted through the console. I found these apparently related issues, but am still clueless:

[CoreOS goes offline on DHCP failure on Amazon VPC](https://github.com/coreos/bugs/issues/2020)
[CoreOS on EC2 losing network connection once a day](https://github.com/coreos/bugs/issues/1551)

The box is running MySQL 5.7.36 and Memcache 1.5.6 on top of Ubuntu 18.04. I would be thankful if someone could help me identify the **root cause** of this issue, and:

1. Could this be related to ntp-systemd-netif.service?
2. This instance type has a separate channel for EBS, but with the network down and no customers making requests (no usage logs on the application machine except the "MySQL connection timeouts"), what would explain a surge in EBS disk reads? CloudWatch graphs below.
3. We have an EFS disk attached to this instance that started failing at 04:04 UTC, _probably_ related to the network failure. No errors were reported on the EFS sa-east (São Paulo) status page.

```
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18179]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18180]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 28 02:17:01 ip-172-xxx-xxx-xxx CRON[18179]: pam_unix(cron:session): session closed for user root
Jan 28 02:29:11 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Configured
Jan 28 02:29:11 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:29:12 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 02:29:12 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Could not set DHCPv4 address: Connection timed out
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Failed
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-networkd[728]: ens5: Configured
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Network configuration changed, trying to establish connection.
Jan 28 02:59:51 ip-172-xxx-xxx-xxx systemd-timesyncd[623]: Synchronized to time server 169.254.169.123:123 (169.254.169.123).
Jan 28 03:00:01 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 03:01:21 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16407 '/kernel/slab/proc_inode_cache/cgroup/proc_inode_cache(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:28 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16408 '/kernel/slab/:A-0000040/cgroup/pde_opener(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:34 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16409 '/kernel/slab/kmalloc-32/cgroup/kmalloc-32(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:01:40 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16410 '/kernel/slab/kmalloc-4k/cgroup/kmalloc-4k(4935:ntp-systemd-netif.service)' is taking a long time
Jan 28 03:17:03 ip-172-xxx-xxx-xxx CRON[18284]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 28 03:17:12 ip-172-xxx-xxx-xxx CRON[18285]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 28 03:19:34 ip-172-xxx-xxx-xxx snapd[6419]: autorefresh.go:530: Cannot prepare auto-refresh change: Post https://api.snapcraft.io/v2/snaps/refresh: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 03:19:34 ip-172-xxx-xxx-xxx CRON[18284]: pam_unix(cron:session): session closed for user root
Jan 28 03:28:44 ip-172-xxx-xxx-xxx snapd[6419]: stateengine.go:149: state ensure error: Post https://api.snapcraft.io/v2/snaps/refresh: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 28 03:36:35 ip-172-xxx-xxx-xxx systemd[1]: Starting Ubuntu Advantage Timer for running repeated jobs...
Jan 28 04:01:18 ip-172-xxx-xxx-xxx systemd[1]: Started ntp-systemd-netif.service.
Jan 28 04:03:09 ip-172-xxx-xxx-xxx systemd-udevd[503]: seq 16496 '/radix_tree_node(4961:ntp-systemd-netif.service)' is taking a long time
Jan 28 04:04:00 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:06:13 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:06:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:09:14 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:09:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:15 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:12:36 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:15 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:26 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:15:34 ip-172-xxx-xxx-xxx kernel: nfs: server fs-0ac698ea1xxxxxxxx.efs.sa-east-1.amazonaws.com not responding, timed out
Jan 28 04:16:39 ip-172-xxx-xxx-xxx sshd[4657]: pam_unix(sshd:session): session closed for user ubuntu
Jan 28 04:17:30 ip-172-xxx-xxx-xxx systemd-logind[974]: Failed to abandon session scope, ignoring: Connection timed out
Jan 28 04:18:00 ip-172-xxx-xxx-xxx systemd-logind[974]: Removed session 27.
```

[CloudWatch Graphs](https://ibb.co/7tydsyQ) Thanks!
0 answers · 0 votes · 6 views · asked 4 months ago

EC2 instance fails to start after changing instance type from t2 to t3

I tried to update an EC2 instance from t2 to t3. Since the AZ the instance was running in did not support t3 instances, I stopped the instance, created an image, and then tried to create an instance from that image in us-east-1c. The instance is running RockyOS v. 8.5. The new instance did not start; using the serial console, it appears the EBS volume was not detected. I verified that the ENA and NVMe drivers were installed. I ran a series of experiments where I created new instances from the original AMI: I was able to create t2 instances, stop them, create images, and then create new t3 instances without issue. The main difference, of course, is that the production instance has a lot more data on it, has been updated via dnf update, etc. I suppose I could just create a brand new t3 instance and migrate the data over, but I would like to understand why I wasn't able to convert the instance from t2 to t3.

Some more information: the reason the experiments worked was that the original AMI was based on RockyOS v. 8.4. That version migrates between t2 and t3 without any issues. The production instance was updated at some point to version 8.5, and for some reason that version does not boot on t3 (Nitro) instance types. I repeated my experiment: I launched the original AMI on a t2, did an upgrade, and after changing the instance type to t3 the instance does not boot. While this doesn't provide a solution to the problem, at least now it is reproducible. So, what is it about Rocky OS v. 8.5 that prevents migration to a Nitro instance type? `modinfo ena` and `modinfo nvme` both show the drivers are present.
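One plausible line of investigation, sketched below: Nitro instances expose EBS as NVMe, so the guest must have the nvme module in its *initramfs* (not just installed, which is what `modinfo` checks) and should reference filesystems by UUID rather than /dev/xvd* names, since those names change on Nitro. The checklist function is illustrative; gathering its inputs (via `lsinitrd` and /etc/fstab) is left to the reader, and whether this is the actual 8.5 regression is unverified:

```python
# Sketch: pre-flight checks before switching a Xen-era instance (t2) to a
# Nitro one (t3). Inputs are plain lists/strings, hard-coded for illustration.

def nitro_ready(initramfs_modules, fstab_device_fields):
    """Return a list of problems that could prevent booting on Nitro."""
    problems = []
    if "nvme" not in initramfs_modules:
        problems.append("nvme module missing from initramfs (check `lsinitrd`)")
    for dev in fstab_device_fields:
        if dev.startswith("/dev/xvd") or dev.startswith("/dev/sd"):
            problems.append(f"fstab uses device name {dev}; prefer UUID=")
    return problems

# Example: a Xen-style image that would likely fail on t3.
print(nitro_ready(["ena", "xen_blkfront"], ["/dev/xvda1"]))
```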
2 answers · 0 votes · 12 views · asked 4 months ago

EBS volumes in RAID0 to gain performance

Hi, I have to set up a Domino server that uses a "translog" path. The Domino server documentation suggests dedicating a specific disk, with its own controller, to this directory. I am not sure which approach to use: a single gp3 volume, adding more IOPS if I see they are needed, or multiple EBS volumes (also gp3) in RAID0. The problem I see with RAID0 is that I don't know how easy it is to grow volumes in a RAID0 configuration. With a single EBS volume, starting with a "small" size (e.g. 100 GB) and then increasing it is trivial, and I can also add more IOPS/throughput very easily, so I guess I can get the same (or very similar) performance that I would get with RAID0. I am aware that with RAID0 I can "double" the performance, because IOPS accumulate across volumes, so the maximum IOPS obtained with RAID0 will always be greater; but I'm not sure I will need to raise IOPS beyond what a single gp3 disk can deliver. Moreover, my concern with RAID0 is how easy it is to grow: can I increase an EBS volume that is part of a RAID0 the same way as a normal EBS volume? Do both EBS volumes need to be the same size? Is the additional administrative work of RAID0 worth it (complexity added for growing volumes, snapshots, backups/recovery, etc.)? In summary: when is it better to have a single EBS volume with the number of IOPS you need versus a RAID0 configuration?
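The back-of-the-envelope math: RAID0 adds member volumes' IOPS and throughput roughly linearly, while a single gp3 volume can itself be provisioned up to 16,000 IOPS and 1,000 MiB/s (the published per-volume gp3 maximums), so striping only pays off beyond those figures, and only within the instance's own EBS limits. A sketch:

```python
# Sketch: aggregate figures for N identical gp3 volumes in RAID0.
# gp3 baseline is 3,000 IOPS / 125 MiB/s per volume regardless of size.

def raid0(n, iops_per_vol, mibps_per_vol):
    """RAID0 aggregates member IOPS and throughput roughly additively."""
    return n * iops_per_vol, n * mibps_per_vol

print(raid0(2, 3000, 125))  # (6000, 250): two baseline gp3 volumes

# A single gp3 volume can instead be provisioned up to these caps:
SINGLE_GP3_MAX = (16000, 1000)
```

On the growth question: resizing a RAID0 set generally means growing every member volume and then growing the array and filesystem, which is more work than the single-volume resize described in the question.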
2 answers · 0 votes · 6 views · asked 5 months ago

Multiple EBS volumes to gain performance

Hi, I want to run "HCL Domino Server 12" on an EC2 instance. Domino is a server specialized in collaboration applications and includes a mail server; you can also see it as a web server with a NoSQL database behind it as the engine for the mail and the apps. During server setup, I can specify different paths for transactional logging, view indexes, mail & database applications, etc. I was thinking of creating a separate filesystem for each path and assigning a different EBS volume to each path/filesystem, but I have several concerns/questions:

- EBS baseline: I am aware that t3-family EC2 instances have a 30% CPU baseline. What about baselines for EBS? Does t3 also have a baseline for EBS, with credits governing its use? I did not find that information clearly stated.
- EBS and IOPS/throughput: I guess that if I have 3 EBS disks, each with a base performance of 3,000 IOPS and 125 MiB/s throughput, then using all 3 I will have 9,000 IOPS and 375 MiB/s in total. But I am not sure whether there is a bottleneck at the EC2 level first (i.e. the EC2 instance having a total maximum across all disks of, say, 300 MiB/s, so that even with multiple EBS volumes the maximum throughput is whatever the EC2 machine allows).
- Root volumes: when you create an EC2 machine on a t3.large instance, how is the root volume created by default? Is it an EBS gp2 or gp3 volume?
- NVMe SSD volumes: I saw EC2 instance types (e.g. m5ad.large) that, instead of using "normal" EBS SSD volumes for the root, provide you directly with 1x75 NVMe SSD volumes and up. I am confused here, since when I mounted additional SSD volumes on my Linux systems they always appeared as "NVMe" devices too. Aren't normal gp2/gp3 volumes NVMe-based? Can someone explain the difference and the value of the 1x75 NVMe SSD volume offered by the m5ad.large instance type?
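On the second bullet: per-volume throughput sums only up to the instance's aggregate EBS bandwidth cap, so the arithmetic looks like the sketch below. The 300 MiB/s cap used here is the hypothetical figure from the question, not a published t3.large number (t3 EBS bandwidth is itself burstable, so the real cap should be checked against the EC2 instance tables):

```python
# Sketch: multiple EBS volumes add throughput only until the instance-level
# EBS bandwidth cap gates the sum.

def aggregate_mibps(per_volume, instance_cap):
    """Effective total throughput across volumes, bounded by the instance."""
    return min(sum(per_volume), instance_cap)

# Three gp3 baselines vs. a hypothetical 300 MiB/s instance cap:
print(aggregate_mibps([125, 125, 125], 300))  # 300: the instance gates the sum
print(aggregate_mibps([125], 300))            # 125: a single volume fits under it
```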
1 answer · 0 votes · 13 views · asked 5 months ago

EC2 Image Builder updating Launch Templates and wiping Snapshot configuration

I have a pretty simple Launch Template that launches an instance with a root volume defined by the AMI and a data volume that comes from a snapshot. Let's say this snapshot includes data that cannot be recreated within the context of EC2 Image Builder, but also doesn't have to be mounted during the Image Builder run. I'd like the end result of the EC2 Image Builder run to update the Launch Template to:

- Update the root volume with the new AMI snapshot
- Preserve the secondary volume's existing snapshot

If I don't specify mounting of any secondary volume during the Image Builder run, the snapshot specification in the Launch Template is wiped when the new Launch Template version is automatically created. If I manually update the Launch Template with the new AMI, the snapshot specification is preserved. From playing around with mounting the snapshot during image creation, I can see new snapshots being made by the build process. However, part of the builder process involves a modification to that secondary volume; I'd like that not to persist, and for the new image to use a "clean" slate in the launch template. The changes made are not trivial to undo, so a straight-up `rm -rf` on the secondary volume is not really an option. Is there a way for me to preserve the original snapshot usage on the secondary volume when updating the Launch Template?
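One avenue worth trying (sketched here, not Image-Builder-specific): create the new launch template version yourself from the previous one. EC2's CreateLaunchTemplateVersion accepts a SourceVersion, copies all of its settings (including the secondary volume's snapshot mapping), and applies only the delta you pass in LaunchTemplateData. The template ID and AMI below are hypothetical, and the boto3 call is left commented out; the payload builder is plain data:

```python
# Sketch: build a CreateLaunchTemplateVersion request that inherits everything
# from an existing version and overrides only the root AMI.

def new_version_request(template_id, source_version, new_ami):
    return {
        "LaunchTemplateId": template_id,
        "SourceVersion": str(source_version),           # version to inherit from
        "LaunchTemplateData": {"ImageId": new_ami},     # only the AMI changes
    }

req = new_version_request("lt-0123456789abcdef0", 3, "ami-0newbuild")
print(req["LaunchTemplateData"])
# import boto3
# boto3.client("ec2").create_launch_template_version(**req)
```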
1 answer · 0 votes · 14 views · asked 5 months ago

Every stack update tries to optimize gp3 volume

We have a stack which went through the following series of events:

1. Created EBS volumes of type st1 and attached them to EC2 instances.
2. Later decided to convert these to gp3 when it was announced. Changed the VolumeType in the stack and applied an update.
3. The volumes started optimizing.
4. ~7 hours after initiating the update, the update failed and the stack got stuck in UPDATE_ROLLBACK_FAILED. CloudFormation tried to roll back the change, but could not do so. The status message for each volume indicates: "Volume vol-... cannot be modified in modification state OPTIMIZING (Service: AmazonEC2; Status Code: 400; Error Code: IncorrectModificationState; Request ID: ...; Proxy: null)".
5. A couple of days later the volume optimization finished. The EBS console shows gp3, and on the EC2 instances we quite clearly see gp3 performance and not the previous st1. In CloudFormation the volumes show UPDATE_FAILED.
6. Some time later we had to deploy another stack update to set throughput and IOPS of the gp3 volumes. We could not do so due to being stuck in UPDATE_ROLLBACK_FAILED.
7. We rolled back the stack and excluded the volumes. The stack was now in UPDATE_ROLLBACK_COMPLETE and the volumes in UPDATE_COMPLETE, and we deployed the updated stack.
8. The volumes started optimizing again. It took over a day, but eventually optimization finished.
9. Once again, ~7 hours after starting the stack update, the update failed and the stack went to UPDATE_ROLLBACK_FAILED, with the same messages for the volumes.
10. After the volume optimization finished, the new throughput and IOPS are shown in the console. CloudWatch metrics show that the volume usage reflects the new values.
11. Today we had another update to the volumes. In this case we were only changing tags. All of the volumes except one started optimizing. The new tags were set, but the stack update failed for a different reason, CloudFormation tried to roll back, and once again the volumes are UPDATE_FAILED and the stack is UPDATE_ROLLBACK_FAILED, with the exact same message for the volumes. The volumes are still optimizing ~3 hours later.

We think the original problem was that we hit some kind of internal timeout in CloudFormation. No idea why it tries to optimize the volumes every time - shouldn't the most recent tag-only update not require optimization? Is there anything we can adjust in the template, or during the update, to force CloudFormation to fully wait for volume optimization, or to bypass the attempt to optimize every time? I've considered creating a wait condition and manually resolving it (using cURL or whatever) once we see that the optimization completes, just to get it out of the way. I've also considered creating a stack policy to prevent updates to the EBS volumes, but that doesn't guarantee we won't run into this exact same problem if we need to remove the policy to update the volumes in the future.
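On the wait-condition idea: volume modification progress is queryable via EC2's DescribeVolumesModifications, whose ModificationState moves through modifying/optimizing to completed, so a helper could poll until every volume is done before signaling. A sketch with the boto3 call commented out and hard-coded states standing in for the API response:

```python
# Sketch: decide whether all tracked EBS volume modifications have finished,
# as a building block for a polling loop that signals a WaitCondition.

def all_optimized(modifications):
    """True once every volume modification has left modifying/optimizing."""
    return all(m.get("ModificationState") == "completed" for m in modifications)

# mods = boto3.client("ec2").describe_volumes_modifications(
#     VolumeIds=["vol-..."])["VolumesModifications"]
mods = [{"VolumeId": "vol-aaa", "ModificationState": "completed"},
        {"VolumeId": "vol-bbb", "ModificationState": "optimizing"}]
print(all_optimized(mods))  # False
```

Note that a "failed" state would also need handling in a real loop, or the poll would never terminate.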
1 answer · 0 votes · 3 views · asked a year ago

EC2 Instance reachability check failed after yum upgrade

Instance id: i-096ae05732c55de39
AMI ID: amzn2-ami-hvm-2.0.20190228-x86_64-gp2 (ami-0de7daa7385332688)

I tried to reboot, stop and start, and detach and attach the volume, but nothing works. I tried creating a new instance and attaching the problematic instance's volume to it, and I'm getting the same issue with the same logs (see below). I really hope you can help me log back in to the instance or recover the current code and crontab. Thank you, Ido

System logs:

```
[    6.515568] cloud-init[2107]: Cloud-init v. 19.3-2.amzn2 running 'init' at Sun, 12 Jan 2020 14:50:46 +0000. Up 6.49 seconds.
[    6.531320] cloud-init[2107]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
[    6.531553] cloud-init[2107]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[    6.531687] cloud-init[2107]: ci-info: | Device |  Up   |  Address  |   Mask    | Scope |    Hw-Address     |
[    6.531814] cloud-init[2107]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[    6.531938] cloud-init[2107]: ci-info: |  eth0  | False |     .     |     .     |   .   | 06:0a:56:c4:72:d4 |
[    6.532090] cloud-init[2107]: ci-info: |   lo   | True  | 127.0.0.1 | 255.0.0.0 |  host |         .         |
[    6.532209] cloud-init[2107]: ci-info: |   lo   | True  |  ::1/128  |     .     |  host |         .         |
[    6.532326] cloud-init[2107]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
[    6.532442] cloud-init[2107]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[    6.532565] cloud-init[2107]: ci-info: +-------+-------------+---------+-----------+-------+
[    6.532683] cloud-init[2107]: ci-info: | Route | Destination | Gateway | Interface | Flags |
[    6.532797] cloud-init[2107]: ci-info: +-------+-------------+---------+-----------+-------+
[    6.532912] cloud-init[2107]: ci-info: +-------+-------------+---------+-----------+-------+
[    6.643586] cloud-init[2107]: Jan 12 14:50:46 cloud-init[2107]: DataSourceEc2.py[WARNING]: Calling 'http://169.254.169.254/latest/api/token' failed [0/1s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89684d0>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[    7.646859] cloud-init[2107]: Jan 12 14:50:47 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964b10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[    7.666755] cloud-init[2107]: Jan 12 14:50:47 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964a50>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[    8.670012] cloud-init[2107]: Jan 12 14:50:48 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964790>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[    8.690271] cloud-init[2107]: Jan 12 14:50:48 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964650>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[    9.693515] cloud-init[2107]: Jan 12 14:50:49 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964810>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[    9.713765] cloud-init[2107]: Jan 12 14:50:50 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89645d0>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[   10.716931] cloud-init[2107]: Jan 12 14:50:51 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[   10.737123] cloud-init[2107]: Jan 12 14:50:51 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e90>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[   11.740529] cloud-init[2107]: Jan 12 14:50:52 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964710>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[   11.760948] cloud-init[2107]: Jan 12 14:50:52 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [4/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964550>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[   12.764088] cloud-init[2107]: Jan 12 14:50:53 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964310>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[   12.785413] cloud-init[2107]: Jan 12 14:50:53 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [5/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964590>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[   14.789780] cloud-init[2107]: Jan 12 14:50:55 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964fd0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
[   14.809967] cloud-init[2107]: Jan 12 14:50:55 cloud-init[2107]: url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [7/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: [Errno 101] Network is unreachable',))]
[   16.814238] cloud-init[2107]: Jan 12 14:50:57 cloud-init[2107]: DataSourceEc2.py[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object
```
at 0x7f07a8964650>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 16.834230] cloud-init\[2107]: Jan 12 14:50:57 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[9/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e10>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 18.838605] cloud-init\[2107]: Jan 12 14:50:59 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964a50>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 18.859101] cloud-init\[2107]: Jan 12 14:50:59 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[11/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 20.863487] cloud-init\[2107]: Jan 12 14:51:01 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964a90>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 
20.890368] cloud-init\[2107]: Jan 12 14:51:01 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[13/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964b50>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 22.894642] cloud-init\[2107]: Jan 12 14:51:03 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964710>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 22.916020] cloud-init\[2107]: Jan 12 14:51:03 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[15/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 25.921482] cloud-init\[2107]: Jan 12 14:51:06 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89642d0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 25.942292] cloud-init\[2107]: Jan 12 14:51:06 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[18/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e50>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 28.947641] cloud-init\[2107]: Jan 12 14:51:09 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964ed0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 28.980291] cloud-init\[2107]: Jan 12 14:51:09 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[21/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 31.985667] cloud-init\[2107]: Jan 12 14:51:12 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964790>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 32.006417] cloud-init\[2107]: Jan 12 14:51:12 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[24/120s]: request error 
\[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964350>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 35.011882] cloud-init\[2107]: Jan 12 14:51:15 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964750>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 35.032071] cloud-init\[2107]: Jan 12 14:51:15 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[27/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 38.037589] cloud-init\[2107]: Jan 12 14:51:18 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89643d0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 38.058474] cloud-init\[2107]: Jan 12 14:51:18 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[30/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: 
/2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964ad0>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 42.064758] cloud-init\[2107]: Jan 12 14:51:22 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89646d0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 42.084761] cloud-init\[2107]: Jan 12 14:51:22 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[34/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 46.091063] cloud-init\[2107]: Jan 12 14:51:26 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964b90>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 46.111536] cloud-init\[2107]: Jan 12 14:51:26 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[38/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by 
NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964d90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 50.117923] cloud-init\[2107]: Jan 12 14:51:30 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964310>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 50.139248] cloud-init\[2107]: Jan 12 14:51:30 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[42/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 54.145806] cloud-init\[2107]: Jan 12 14:51:34 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89645d0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 54.165882] cloud-init\[2107]: Jan 12 14:51:34 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[46/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964610>: Failed to 
establish a new connection: \[Errno 101] Network is unreachable',))] \[ 58.172140] cloud-init\[2107]: Jan 12 14:51:38 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964d50>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 58.191955] cloud-init\[2107]: Jan 12 14:51:38 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[50/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 63.199313] cloud-init\[2107]: Jan 12 14:51:43 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964550>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 63.220255] cloud-init\[2107]: Jan 12 14:51:43 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[55/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a89644d0>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 68.227799] cloud-init\[2107]: Jan 12 
14:51:48 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964590>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 68.248621] cloud-init\[2107]: Jan 12 14:51:48 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[60/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 73.255912] cloud-init\[2107]: Jan 12 14:51:53 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964990>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 73.275464] cloud-init\[2107]: Jan 12 14:51:53 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[65/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964f50>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 78.282771] cloud-init\[2107]: Jan 12 14:51:58 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised 
exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e10>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 78.303398] cloud-init\[2107]: Jan 12 14:51:58 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[70/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 83.310696] cloud-init\[2107]: Jan 12 14:52:03 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964810>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 83.341005] cloud-init\[2107]: Jan 12 14:52:03 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[75/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 89.349505] cloud-init\[2107]: Jan 12 14:52:09 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by 
NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964b50>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 89.370090] cloud-init\[2107]: Jan 12 14:52:09 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[81/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 95.378513] cloud-init\[2107]: Jan 12 14:52:15 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964fd0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 95.399052] cloud-init\[2107]: Jan 12 14:52:15 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[87/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964950>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 101.407361] cloud-init\[2107]: Jan 12 14:52:21 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e50>: Failed to 
establish a new connection: \[Errno -2] Name or service not known',)) \[ 101.429123] cloud-init\[2107]: Jan 12 14:52:21 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[93/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964c90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 107.437417] cloud-init\[2107]: Jan 12 14:52:27 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964bd0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 107.460035] cloud-init\[2107]: Jan 12 14:52:27 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[99/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964510>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 113.468430] cloud-init\[2107]: Jan 12 14:52:33 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964ad0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 113.488680] cloud-init\[2107]: Jan 
12 14:52:33 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[105/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964d90>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 120.498447] cloud-init\[2107]: Jan 12 14:52:40 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964dd0>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 120.519135] cloud-init\[2107]: Jan 12 14:52:40 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[112/120s]: request error \[HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964e10>: Failed to establish a new connection: \[Errno 101] Network is unreachable',))] \[ 127.528455] cloud-init\[2107]: Jan 12 14:52:47 cloud-init\[2107]: DataSourceEc2.py\[WARNING]: Unable to get API token: None/latest/api/token raised exception HTTPConnectionPool(host='none', port=80): Max retries exceeded with url: /latest/api/token (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f07a8964850>: Failed to establish a new connection: \[Errno -2] Name or service not known',)) \[ 127.548962] cloud-init\[2107]: Jan 12 14:52:47 cloud-init\[2107]: url_helper.py\[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed \[119/120s]: unexpected error \[Attempted to set connect timeout to 0.0, but the timeout cannot be set to a value less than or equal to 0.] \[ 134.556546] cloud-init\[2107]: Jan 12 14:52:54 cloud-init\[2107]: DataSourceEc2.py\[CRITICAL]: Giving up on md from \['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 126 seconds \[\[32m OK \[0m] Started Initial cloud-init job (metadata service crawler). Edited by: idogrady on Jan 12, 2020 8:40 AM
2
answers
0
votes
5
views
asked 2 years ago

ECS agent never registers instance when user-data script contains a wait loop

Hi, I'm trying to launch an EC2-based ECS cluster with the Docker REX-Ray plugin installed, using a CloudFormation template adapted from <https://aws.amazon.com/blogs/compute/amazon-ecs-and-docker-volume-drivers-amazon-ebs/>. In the blog post, the user-data script waits for the ECS service to become responsive by curling the ECS agent metadata URL (http://localhost:51678/v1/metadata) in a loop, and my CloudFormation template does exactly the same. ``` ... Properties: UserData: Fn::Base64: !Sub | #!/bin/bash yum install -y aws-cfn-bootstrap yum update -y /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource ECSInstanceConfiguration --region ${AWS::Region} /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource ECSScalingGroup --region ${AWS::Region} exec 2>>/var/log/ecs-agent-install.log set -x until curl -s http://localhost:51678/v1/metadata; do sleep 1; done # service ecs stop # docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_REGION=${AWS::Region} --grant-all-permissions # service docker restart # service ecs start AssociatePublicIpAddress: true EbsOptimized: 'true' ``` When I launch a new stack with my user data **without** the "until curl" loop, the ECS agent starts quickly and registers the instance with my ECS cluster properly. However, as soon as I add the loop, the ECS service never becomes responsive and the stack keeps waiting until it fails by timeout. I am using an ECS-optimized AMI (ami-04e333c875fae9d77, region sa-east-1). What may be preventing the ECS service from starting properly? How should I adapt my script in order to install the Docker plugin that adds extra EBS volumes? The entire LaunchConfiguration declaration can be seen below.
```
ECSInstanceConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Metadata:
    AWS::CloudFormation::Init:
      config:
        packages:
          yum:
            jq: []
        commands:
          01_enable_ecs_cluster:
            command: !Sub |
              cat <<EOF >> /etc/ecs/ecs.config
              ECS_CLUSTER=${ECSCluster}
              ECS_ENABLE_TASK_IAM_ROLE=true
              ECS_ENABLE_CONTAINER_METADATA=true
              ECS_CONTAINER_INSTANCE_PROPAGATE_TAGS_FROM=ec2_instance
              EOF
        files:
          /etc/cfn/cfn-hup.conf:
            content: !Sub |
              [main]
              stack=${AWS::StackId}
              region=${AWS::Region}
            mode: 00400
            owner: root
            group: root
          /etc/cfn/hooks.d/cfn-auto-reloader.conf:
            content: !Sub |
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.ECSInstanceConfiguration.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource ECSInstanceConfiguration --region ${AWS::Region}
              runas=root
        services:
          sysvinit:
            cfn-hup:
              enabled: 'true'
              ensureRunning: 'true'
              files:
                - "/etc/cfn/cfn-hup.conf"
                - "/etc/cfn/hooks.d/cfn-auto-reloader.conf"
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y aws-cfn-bootstrap
        yum update -y
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource ECSInstanceConfiguration --region ${AWS::Region}
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource ECSScalingGroup --region ${AWS::Region}
        RET=0
        exec 2>>/var/log/ecs-agent-install.log
        set -x
        until curl -s http://localhost:51678/v1/metadata; do sleep 1; done
        # #docker plugin install --alias cloudstor:aws --grant-all-permissions docker4x/cloudstor:18.03.0-ce-aws1 CLOUD_PLATFORM=AWS AWS_REGION=${AWS::Region} EFS_SUPPORTED=0 DEBUG=1
        # service ecs stop
        # docker plugin install rexray/ebs REXRAY_PREEMPT=true EBS_REGION=${AWS::Region} --grant-all-permissions
        # service docker restart
        # service ecs start
    AssociatePublicIpAddress: true
    EbsOptimized: 'true'
    ImageId: !Ref ClusterInstanceImageIdParameter
    InstanceType: !Ref ClusterInstanceTypeParameter
    IamInstanceProfile: !Ref ECSInstanceProfile
    KeyName: !Ref KeyNameParameter
    SecurityGroups:
      - !Ref ECSInstanceSecurityGroup
```
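One way to make the `until curl` wait in the user data above easier to debug is to bound it, so a stack that can never reach the agent fails fast instead of hanging until the CloudFormation timeout. A minimal sketch (not part of the original template; the function name and the 120-second budget are illustrative assumptions):

```shell
#!/bin/bash
# Sketch only: wrap the blog post's "until curl" wait in `timeout`, so that
# if the ECS agent never starts listening on 51678 the user-data script
# fails fast with a nonzero status instead of blocking indefinitely.
wait_for_agent() {
  local url=$1 budget=${2:-120}   # budget in seconds; 120 is an assumption
  timeout "$budget" bash -c "until curl -sf '$url' >/dev/null; do sleep 1; done"
}
```

A possible use in the user-data script: `wait_for_agent http://localhost:51678/v1/metadata 120 || echo "ECS agent did not answer in time" >&2`, which at least surfaces the failure in the log instead of leaving the stack to time out silently.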
1
answers
0
votes
1
views
asked 3 years ago