boto3 resourcegroupstaggingapi get_resources shows phantom pod ARNs


Maybe this is a bug in the AWS backend.

Using the following boto3 calls under resourcegroupstaggingapi, I am seeing strange behavior. When I call get_resources, it lists pods that no longer exist (I have no clusters at all). It shows ARNs of pods that did exist at one point, but they were deleted weeks ago.

I have tried to tag them using tag_resources, but it doesn't work (to my relief), which means I can't interact with these phantom resources. There doesn't seem to be an AWS CLI API to delete pods. I have run aws eks list-clusters and it returns "clusters": [], so I know I have no clusters. As far as I know, pods cannot exist without clusters, so I am left with phantom pod ARNs showing up in get_resources.
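For reference, this is roughly what I am calling (a minimal sketch of my code; the "eks" resource-type filter is just an example of how I narrow it down):

    import boto3

    tagging = boto3.client("resourcegroupstaggingapi")
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate(ResourceTypeFilters=["eks"]):
        for mapping in page["ResourceTagMappingList"]:
            print(mapping["ResourceARN"])   # still lists pod ARNs from clusters deleted weeks ago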

A bug perhaps?

asked a month ago · 37 views
1 Answer
Accepted Answer

You're right to be puzzled. What you're experiencing, boto3's resourcegroupstaggingapi get_resources() returning "phantom" EKS pod ARNs even after the clusters have been deleted, is confusing, but it's not uncommon in AWS.

Here’s a breakdown of why this might happen, and what you can do about it:

🧠 Why Are Phantom Pod ARNs Showing Up?

The Tagging API isn't real-time
The Resource Groups Tagging API (get_resources) pulls data from a cache that may lag behind actual deletions by hours, or even days to weeks in some edge cases. This cache includes metadata about resources that were once taggable, such as pods, even if they no longer exist in services like EKS.

Ghost ARNs persist in the AWS tagging cache
When resources (like EKS pods) are deleted, their tagging records may persist in the resource tagging database, especially if those resources were previously tagged. These are not actual resources anymore, so you can't interact with them or delete them; they're essentially orphaned metadata.

Taggable resources are sometimes indexed independently
Even if the underlying service (like EKS) cleans up a resource, the Tagging API may still "know" about that ARN until its cache eventually expires or gets garbage collected.

✅ What You Can Do

Verify they're really gone
As you've done:
aws eks list-clusters returns an empty array → ✅
No pod-management ARNs reachable via kubectl or other EKS APIs → ✅
So it's safe to conclude that those resources are truly deleted.
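If you'd rather verify it in code, a minimal boto3 sketch of the same check:

    import boto3

    eks = boto3.client("eks")
    print(eks.list_clusters()["clusters"])   # [] means there is no cluster left for any pod to run in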

Ignore them in your logic
If you're automating with boto3, filter these phantom ARNs out of your workflows, for example:

    if "eks/pod/" in arn and not is_active_pod(arn):   # is_active_pod() is your own liveness check
        continue

(A fuller end-to-end sketch of this filtering follows below.)

Wait it out
AWS's internal cache will eventually expire these phantom entries. Sometimes this can take up to 2–4 weeks, particularly for EKS workloads.

Contact AWS Support (if critical)
If you're seeing this issue consistently, or it's affecting reporting or compliance automation, open a support ticket with AWS. They can manually purge stale tagging data if needed.
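Here is that fuller filtering sketch: it pages through get_resources and drops anything pod-shaped when no live clusters exist. The "pod/" marker and the "eks" resource-type filter are assumptions; adjust them to whatever your actual ARNs and filters look like.

    import boto3

    tagging = boto3.client("resourcegroupstaggingapi")
    eks = boto3.client("eks")

    # In your case this comes back empty, so anything pod-shaped is stale cache metadata.
    live_clusters = set(eks.list_clusters()["clusters"])

    real_arns = []
    for page in tagging.get_paginator("get_resources").paginate(ResourceTypeFilters=["eks"]):
        for mapping in page["ResourceTagMappingList"]:
            arn = mapping["ResourceARN"]
            # "pod/" is an assumed marker for the phantom entries; match it to your ARNs.
            if "pod/" in arn and not live_clusters:
                continue
            real_arns.append(arn)

    print(real_arns)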

🛑 What Not to Do

Don't try to "delete" these phantom pods via any boto3 or AWS CLI command. They aren't real resources anymore; only the tagging cache needs to forget them.

Don't panic: they can't be billed, used, or affect security.

Bonus tip: use AWS Config for cross-verification
You can query AWS Config for the resource histories it has actually recorded. Config doesn't track individual pods, but it can confirm that no EKS clusters remain on record:

    aws config list-discovered-resources --resource-type AWS::EKS::Cluster

(You may need to enable Config recording first.)
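The same lookup is available from boto3 via the Config client; a minimal sketch, assuming Config recording is enabled in the region:

    import boto3

    config = boto3.client("config")
    resp = config.list_discovered_resources(resourceType="AWS::EKS::Cluster")
    print(resp["resourceIdentifiers"])   # empty if Config has no EKS cluster records either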

answered a month ago
