
Questions tagged with Amazon Simple Storage Service



How to properly and completely terminate a multipart upload?

In our Java app we have what is basically boilerplate S3 V2 code for creating a multipart upload of a file to S3. We absolutely need the ability to cancel the upload and recover all resources used by the upload process, INCLUDING the CPU and network bandwidth.

Initially we tried simply cancelling the completionFuture on the FileUpload, but that doesn't work. I can watch the network traffic continue to send data to S3 until the entire file is uploaded. Cancelling the completionFuture seems to stop S3 from reconstructing the file, but that's not sufficient. In most cases we need to cancel the upload because we need the network bandwidth for other things, like streaming video.

I found the function shutdownNow() in the TransferManager class, and that seemed promising, but it looks like it's not available in the V2 SDK (I found it in the V1 sources). I've seen a function getSubTransfers() in the V1 MultipleFileUpload class that returns a list of Uploads, and the Upload class has an abort() function, but again, we need to use V2 for other reasons.

I've also found and implemented code that calls listMultipartUploads, looks for the upload we want to cancel, creates an AbortMultipartUploadRequest, and issues it, and the threads keep on rolling, and rolling, and rolling....

Is there a "correct" way of terminating a multipart upload, including the threads processing the upload?
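
For reference, a minimal sketch of the list-and-abort flow described above, using the V2 `S3Client` (bucket and key prefix are placeholders). Note that aborting the multipart upload only releases the stored parts on the S3 side; it does not by itself interrupt client threads that are still pushing parts, which matches the behaviour described in the question, so stopping those generally also means cancelling or closing the client-side transfer (for example closing the `S3TransferManager` or its underlying `S3AsyncClient`).

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.ListMultipartUploadsRequest;
import software.amazon.awssdk.services.s3.model.MultipartUpload;

public class MultipartAbortSketch {

    // Abort every in-progress multipart upload under the given key prefix.
    // This releases the parts stored in S3; it does not interrupt threads
    // that are still uploading parts from the client side.
    static void abortInProgressUploads(S3Client s3, String bucket, String keyPrefix) {
        ListMultipartUploadsRequest listRequest = ListMultipartUploadsRequest.builder()
                .bucket(bucket)
                .prefix(keyPrefix)
                .build();

        for (MultipartUpload upload : s3.listMultipartUploads(listRequest).uploads()) {
            s3.abortMultipartUpload(AbortMultipartUploadRequest.builder()
                    .bucket(bucket)
                    .key(upload.key())
                    .uploadId(upload.uploadId())
                    .build());
        }
    }
}
```
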
0 answers · 0 votes · 11 views · asked 2 days ago

DESCRIBE table in Athena fails with insufficient Lake Formation permissions

When I try to run the following query via the Athena JDBC driver:

```sql
describe gitlab.issues
```

I get the following error:

> [Simba][AthenaJDBC](100071) An error has been thrown from the AWS Athena client. FAILED: SemanticException Unable to fetch table gitlab. Insufficient Lake Formation permission(s) on gitlab (Service: AmazonDataCatalog; Status Code: 400; Error Code: AccessDeniedException; Request ID: be6aeb1b-fc06-410d-9723-2df066307b35; Proxy: null) [Execution ID: a2534d22-c4df-49e9-8515-80224779bf01]

The following query works:

```sql
select * from gitlab.issues limit 10
```

The role that is used has the `DESCRIBE` permission on the `gitlab` database and `DESCRIBE, SELECT` permissions on the table `issues`. It also has the following IAM permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "athena:BatchGetNamedQuery", "athena:BatchGetQueryExecution", "athena:CreatePreparedStatement",
        "athena:DeletePreparedStatement", "athena:GetDataCatalog", "athena:GetDatabase",
        "athena:GetNamedQuery", "athena:GetPreparedStatement", "athena:GetQueryExecution",
        "athena:GetQueryResults", "athena:GetQueryResultsStream", "athena:GetTableMetadata",
        "athena:GetWorkGroup", "athena:ListDatabases", "athena:ListNamedQueries",
        "athena:ListPreparedStatements", "athena:ListDataCatalogs", "athena:ListEngineVersions",
        "athena:ListQueryExecutions", "athena:ListTableMetadata", "athena:ListTagsForResource",
        "athena:ListWorkGroups", "athena:StartQueryExecution", "athena:StopQueryExecution",
        "athena:UpdatePreparedStatement"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "glue:BatchGetCustomEntityTypes", "glue:BatchGetPartition", "glue:GetCatalogImportStatus",
        "glue:GetColumnStatisticsForPartition", "glue:GetColumnStatisticsForTable", "glue:GetCustomEntityType",
        "glue:GetDatabase", "glue:GetDatabases", "glue:GetPartition",
        "glue:GetPartitionIndexes", "glue:GetPartitions", "glue:GetSchema",
        "glue:GetSchemaByDefinition", "glue:GetSchemaVersion", "glue:GetSchemaVersionsDiff",
        "glue:GetTable", "glue:GetTableVersion", "glue:GetTableVersions",
        "glue:GetTables", "glue:GetUserDefinedFunction", "glue:GetUserDefinedFunctions",
        "glue:ListCustomEntityTypes", "glue:ListSchemaVersions", "glue:ListSchemas",
        "glue:QuerySchemaVersionMetadata", "glue:SearchTables"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Condition": {
        "ForAnyValue:StringEquals": { "aws:CalledVia": "athena.amazonaws.com" }
      },
      "Action": [
        "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket",
        "s3:ListBucketMultipartUploads", "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload", "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::aws-athena-query-results-123456789012-eu-west-1",
        "arn:aws:s3:::aws-athena-query-results-123456789012-eu-west-1/*",
        "arn:aws:s3:::aws-athena-federation-spill-123456789012-eu-west-1",
        "arn:aws:s3:::aws-athena-federation-spill-123456789012-eu-west-1/*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "lakeformation:CancelTransaction", "lakeformation:CommitTransaction", "lakeformation:DescribeResource",
        "lakeformation:DescribeTransaction", "lakeformation:ExtendTransaction", "lakeformation:GetDataAccess",
        "lakeformation:GetQueryState", "lakeformation:GetQueryStatistics", "lakeformation:GetTableObjects",
        "lakeformation:GetWorkUnitResults", "lakeformation:GetWorkUnits", "lakeformation:StartQueryPlanning",
        "lakeformation:StartTransaction"
      ],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Condition": {
        "ForAnyValue:StringEquals": { "aws:CalledVia": "athena.amazonaws.com" }
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:*:*:function:athena-federation-*",
      "Effect": "Allow"
    },
    {
      "Condition": {
        "ForAnyValue:StringEquals": { "aws:CalledVia": "athena.amazonaws.com" }
      },
      "Action": [ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
```

Even if I make the role a Lake Formation admin and database creator, assign Super permissions to the table and database, and add the AdministratorAccess IAM policy to the role, it still fails.
0 answers · 0 votes · 20 views · asked 7 days ago

Unable to configure SageMaker execution Role with access to S3 bucket in another AWS account

**Requirement:** Create a SageMaker Ground Truth labeling job with the input/output location pointing to an S3 bucket in another AWS account.

**High-level steps followed:** Let's say *Account_A* hosts the SageMaker Ground Truth labeling job and *Account_B* hosts the S3 bucket.

1. Create role *AmazonSageMaker-ExecutionRole* in *Account_A* with 3 policies attached:
   * AmazonSageMakerFullAccess
   * Account_B_S3_AccessPolicy: policy with the necessary S3 permissions to access the S3 bucket in Account_B
   * AssumeRolePolicy: assume-role policy for *arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role*
2. Create role *Cross-Account-S3-Access-Role* in *Account_B* with 1 policy and 1 trust relationship attached:
   * S3_AccessPolicy: policy with the necessary S3 permissions to access the S3 bucket in Account_B
   * TrustRelationship: for principal *arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole*

**Error:** While trying to create the SageMaker Ground Truth labeling job with *AmazonSageMaker-ExecutionRole* as the IAM role, it throws the error:

*AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.*
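
One way to narrow this down is to verify the role chain outside SageMaker: using credentials for *AmazonSageMaker-ExecutionRole*, assume *Cross-Account-S3-Access-Role* with STS and list the bucket. If that works but the labeling job still fails, the likely explanation is that Ground Truth uses the execution role directly against the bucket rather than assuming the second role, in which case a bucket policy on the Account_B bucket granting the execution role access is usually also needed. A minimal sketch with the AWS SDK for Java 2.x; the role ARN and bucket name are placeholders:

```java
import software.amazon.awssdk.auth.credentials.AwsSessionCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.sts.StsClient;
import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;
import software.amazon.awssdk.services.sts.model.Credentials;

public class CrossAccountCheck {
    public static void main(String[] args) {
        // Placeholder ARN and bucket name.
        String roleArn = "arn:aws:iam::ACCOUNT_B:role/Cross-Account-S3-Access-Role";
        String bucket = "Account_B_S3_bucket_name";

        try (StsClient sts = StsClient.create()) {
            // Assume the cross-account role from Account_A credentials.
            Credentials creds = sts.assumeRole(AssumeRoleRequest.builder()
                    .roleArn(roleArn)
                    .roleSessionName("cross-account-s3-check")
                    .build()).credentials();

            S3Client s3 = S3Client.builder()
                    .credentialsProvider(StaticCredentialsProvider.create(
                            AwsSessionCredentials.create(
                                    creds.accessKeyId(),
                                    creds.secretAccessKey(),
                                    creds.sessionToken())))
                    .build();

            // If this succeeds, the assume-role chain itself is fine.
            s3.listObjectsV2(ListObjectsV2Request.builder().bucket(bucket).maxKeys(5).build())
              .contents()
              .forEach(o -> System.out.println(o.key()));
        }
    }
}
```
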
1 answer · 0 votes · 60 views · asked 9 days ago

IAM Policy - AWS Transfer Family

Hello,

This question may seem a bit long-winded since I will be describing the relevant background information to hopefully avoid back and forth, and ultimately arrive at a resolution. I appreciate your patience.

I have a Lambda function that is authenticating users via Okta for SFTP file transfers, and the Lambda function is called through an API Gateway. My company has many different clients, so we chose this route for authentication rather than creating user accounts for them in AWS. Everything has been working fine during my testing process except for one key piece of functionality.

Since we have many customers, we don't want them to be able to interact with or even see another customer's folder within the dedicated S3 bucket. The directory structure has the main S3 bucket at the top level, and within that bucket resides each customer's folder. From there, they can create subfolders, upload files, etc.

I have created the IAM policy - which is an inline policy as part of an assumed role - as described in this document: https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html. My IAM policy looks exactly like the one shown in the "Creating a session policy for an Amazon S3 bucket" section of the documentation. The "transfer" variables are defined in the Lambda function. Unfortunately, those "transfer" variables do not seem to be getting passed to the IAM policy. When I look at the Transfer Family endpoint log, it shows access denied after successfully connecting (confidential information is redacted):

```
<user>.39e979320fffb078 CONNECTED SourceIP=<source_ip> User=<user> HomeDir=/<s3_bucket>/<customer_folder>/ Client="SSH-2.0-Cyberduck/8.3.3.37544 (Mac OS X/12.4) (x86_64)" Role=arn:aws:iam::<account_id>:role/TransferS3AccessRole Kex=diffie-hellman-group-exchange-sha256 Ciphers=aes128-ctr,aes128-ctr
<user>.39e979320fffb078 ERROR Message="Access denied"
```

However, if I change the "transfer" variables in the Lambda function to include the actual bucket name and update the IAM policy accordingly, everything works as expected; well, almost everything. With this change, I am not able to restrict access and, thus, any customer could interact with any other customer's folders and files. Having the ability to restrict access by using the "transfer" variables is an integral piece of functionality.

I've searched around the internet - including this forum - and cannot seem to find the answer to this problem. Likely I have overlooked something, and hopefully it is an easy fix. Looking forward to getting this resolved. Thank you very much in advance!
5 answers · 0 votes · 52 views · asked 12 days ago

Trying to share an S3 bucket across accounts using 'aws:PrincipalOrgPaths', how to debug?

We have several AWS accounts, all arranged into a tree in Organizations: o-ABCDEF / r-1234 / ou-XXXX / ou-YYYY / ou-ZZZZ / ou-<actual_accounts>. The intermediate X->Y->Z OUs are just there for, well, organizational purposes. The "actual accounts" correspond to projects and customers and stuff with a need for isolated resources, billing, yadda yadda. There's also an "actual account" OU at the same level as the ZZZZ branch. This actual account (call it Central Account) is where we put a lot of our central internal resources: EKS running websites, S3 buckets holding gobs of data, etc.

In the interests of making new accounts a little easier to stand up (along with Account Factory out of the Control Tower service), we wanted EC2 instances in the various "actual accounts" to be able to download some stuff from one of the S3 buckets in that Central Account. There's an example given in the AWS documentation about using aws:PrincipalOrgPaths as a condition for a bucket policy, so following that example, I came up with:

```
"Version": "2012-10-17",
"Statement": [
  {
    "Sid": "meaningful human reminder goes here",
    "Effect": "Allow",
    "Principal": "*",
    "Action": [
      "s3:GetObject",
      "s3:ListBucket",
      "s3:a_few_other_Get_and_List_entries but no Put or Delete"
    ],
    "Resource": [
      "arn:aws:s3:::name-of-bucket",
      "arn:aws:s3:::name-of-bucket/special-prefix",
      "arn:aws:s3:::name-of-bucket/special-prefix/*"
    ],
    "Condition": {
      "ForAnyValue:StringLike": {
        "aws:PrincipalOrgPaths": [ "o-ABCDEF/r-1234/*/ou-ZZZZ/*" ]
      }
    }
  }
]
```

That's the entire bucket policy. "Block all public access" is on. ACLs are disabled. The path in Organizations has an intermediate wildcard because the AWS documentation explicitly mentioned it; I had originally written the intermediate OUs in there and it didn't make a difference. The trailing wildcard was always present.

[This SO question](https://stackoverflow.com/q/59349041/1824182) mentions that aws:PrincipalOrgPaths takes an array even when you're only listing a single entry, but that the AWS Console editor removes the square brackets in such cases. I've tried it with square brackets and one entry, as well as listing the same path multiple times just so that the square brackets would be preserved; it made no difference.

Organizations has "Service control policies" enabled, with the FullAWSAccess (AWS managed) policy attached. I'm not entirely certain whether that matters or not.

Trying to access s3://name-of-bucket/special-prefix/ from an EC2 instance in one of the other accounts in the OU tree gives only Access Denied errors. I've turned on server logging for the bucket, and the log entries showing my test attempts give the bucket name, originating instance role, the 403 response, etc., but obviously don't mention what Organizations OU is involved.

I'm not sure what's wrong with the policy, whether there's something else I need to change, or whether there's a way to see what test S3 is applying that's failing instead of succeeding. This should be doable with just Organizations, right?
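
One way to debug the condition from the account side is to reconstruct the account's actual org path by walking ListParents up from the account ID and comparing the result against the aws:PrincipalOrgPaths pattern in the policy. A sketch with the AWS SDK for Java 2.x; it assumes the caller is allowed to use the Organizations APIs (typically the management account or a delegated administrator), and the account ID is a placeholder:

```java
import java.util.ArrayDeque;
import java.util.Deque;

import software.amazon.awssdk.services.organizations.OrganizationsClient;
import software.amazon.awssdk.services.organizations.model.DescribeOrganizationRequest;
import software.amazon.awssdk.services.organizations.model.ListParentsRequest;
import software.amazon.awssdk.services.organizations.model.Parent;
import software.amazon.awssdk.services.organizations.model.ParentType;

public class OrgPathCheck {
    public static void main(String[] args) {
        String accountId = "111122223333"; // placeholder member account ID

        try (OrganizationsClient org = OrganizationsClient.create()) {
            Deque<String> path = new ArrayDeque<>();
            String child = accountId;

            // Walk up the OU tree from the account until we reach the root.
            while (true) {
                Parent parent = org.listParents(ListParentsRequest.builder()
                        .childId(child).build()).parents().get(0);
                path.addFirst(parent.id());
                if (parent.type() == ParentType.ROOT) {
                    break;
                }
                child = parent.id();
            }

            String orgId = org.describeOrganization(DescribeOrganizationRequest.builder().build())
                    .organization().id();

            // Compare this against the aws:PrincipalOrgPaths pattern in the bucket policy.
            System.out.println(orgId + "/" + String.join("/", path) + "/");
        }
    }
}
```
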
1 answer · 0 votes · 16 views · asked 15 days ago

S3 Interface Endpoint from On-Prem Access Denied

Hello,

We have an S3 endpoint (interface type) created in the eu-west-1 region. We are trying to write to the buckets using the DNS name created in eu-west-1 from our on-premises location, which is connected via Direct Connect.

DNS: *.vpce-1234567890-abcd2zc.s3.eu-west-1.vpce.amazonaws.com

I have given the following permissions in the bucket policy to allow writing to the bucket, but when we try to upload/write to it we still get the Access Denied error below.

```
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [ "s3:PutObject", "s3:GetObject", "s3:PutObjectAcl", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3::<bucket-name>/*",
        "arn:aws:s3:::<bucket-name>"
      ]
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [ "s3:PutObject", "s3:GetObject", "s3:PutObjectAcl", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>/*",
        "arn:aws:s3::<bucket-name>
      ]
    }
  ]
}
```

```
OTErrWrnLn||ERROR||-1||SERVICE||GBS3||<Bucket_Name> Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 0QWNYWPJZY14EGRC; S3 Extended Request ID: sXic/CHy/OU5oakn7MBb6UESIbggdr9IxaILUiVuGMeUu7iZTUpIUpLeIUieNs82g6jXdBdQ3sU=)||-1||-1||-1|| Access Denied
```

I would like to know what permission is required to write to this bucket from on-premises, or any other steps or configuration I need to apply. When I run nslookup on the S3 endpoint from the on-prem server, it resolves to a private IP. By the way, it works when I enable Allow Public Access.

Thank you
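
For what it's worth, when a client goes through an interface endpoint it has to be pointed at the endpoint-specific DNS name explicitly; the regional s3.eu-west-1.amazonaws.com name typically still resolves to public IPs unless private DNS is enabled. A minimal sketch of an upload through the endpoint with the AWS SDK for Java 2.x; the "bucket."-prefixed endpoint form follows the S3 PrivateLink documentation, and whether the on-prem tooling doing the actual uploads can be configured this way is an assumption:

```java
import java.net.URI;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class InterfaceEndpointUpload {
    public static void main(String[] args) {
        // Endpoint name taken from the question; "bucket." prefix per the S3 PrivateLink docs.
        URI endpoint = URI.create(
                "https://bucket.vpce-1234567890-abcd2zc.s3.eu-west-1.vpce.amazonaws.com");

        S3Client s3 = S3Client.builder()
                .region(Region.EU_WEST_1)
                .endpointOverride(endpoint)
                .build();

        // Simple test write through the interface endpoint.
        s3.putObject(PutObjectRequest.builder()
                        .bucket("<bucket-name>")
                        .key("test/hello.txt")
                        .build(),
                RequestBody.fromString("hello from on-prem"));
    }
}
```
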
2 answers · 0 votes · 27 views · asked 15 days ago

AWS S3 Get object not working. Getting corrupt file

I have a .zip file on an S3 bucket. When I try to get it and save it as a file, it is corrupt and can't be opened. I don't know what to do; it is really hard to get that S3 zip file. I am using this code as a guide: https://docs.aws.amazon.com/AmazonS3/latest/userguide/download-objects.html. I'm currently on Unity, but the SDK for .NET should work. The resulting file has its first 7.4 KB all NULL.

This is the code that gets the zip file, returns the buffer, and saves it with File.WriteAllBytes:

```
/// <summary>
/// Get Object from S3 Bucket
/// </summary>
public async Task<byte[]> GetZip(string pFile)
{
    try
    {
        Debug.Log("KEY: " + pFile);
        GetObjectRequest request = new GetObjectRequest
        {
            BucketName = S3Bucket,
            Key = pFile
        };
        using (GetObjectResponse response = await S3Client.GetObjectAsync(request))
        using (Stream responseStream = response.ResponseStream)
        {
            Debug.Log("Response stream");
            if (responseStream != null)
            {
                byte[] buffer = new byte[(int)response.ResponseStream.Length];
                int result = await responseStream.ReadAsync(buffer, 0, (int)response.ResponseStream.Length);
                Debug.Log("Stream result: " + result);
                File.WriteAllBytes(songPath, buffer);
                Debug.Log("Readed all bytes: " + buffer.Length + " - " + ((int)response.ResponseStream.Length));
                return buffer;
            }
            else
            {
                Debug.Log("Response is null");
            }
        }
    }
    catch (AmazonS3Exception e)
    {
        // If bucket or object does not exist
        Debug.Log("Error encountered ***. Message:" + e.Message + " when reading object");
    }
    catch (Exception e)
    {
        Debug.Log("Unknown encountered on server. Message:" + e.Message + " when reading object");
    }
    return null;
}
```
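
A single Read/ReadAsync call may legitimately return fewer bytes than requested, so buffers filled that way can end up partially zeroed; the robust pattern is to copy the response stream until it is exhausted, or let the SDK write the whole object to a file. As a reference for the pattern only, here is a minimal sketch in the AWS SDK for Java 2.x (the .NET SDK has analogous helpers for writing the response stream to a file); bucket, key, and target path are placeholders:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

import software.amazon.awssdk.core.sync.ResponseTransformer;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class DownloadZip {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            GetObjectRequest request = GetObjectRequest.builder()
                    .bucket("my-bucket")      // placeholder
                    .key("songs/archive.zip") // placeholder
                    .build();

            Path target = Paths.get("archive.zip"); // must not already exist

            // Streams the entire body to the file; no manual buffer sizing or
            // partial-read handling is needed.
            s3.getObject(request, ResponseTransformer.toFile(target));
        }
    }
}
```
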
3 answers · 1 vote · 62 views · asked 22 days ago

Lifecycle Configuration Standard --> Standard IA --> Glacier Flexible Retrieval via CloudFormation

We do shared web hosting, and my cPanel servers store backups in S3, each server with its own bucket. cPanel does not have a provision to select the storage class, so everything gets created as Standard. With around 9 TB of backups being maintained, I would really like them to be stored as Standard-IA after the first couple of days, and then transition to Glacier after they have been in IA for 30 days. The logic here is that the backup most likely to be needed is the most recent. Currently we skip the step of transferring to IA and they go straight to Glacier after 30 days.

According to this page, that kind of multi-staged transition should be OK, and it confirms that the class-to-class transitions I want are acceptable: https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html

The examples on this page show a transition in days of 1, seeming to show that a newly created object stored in Standard can be transitioned immediately: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-lifecycleconfig.html

My YAML template for CloudFormation has this section in it:

```
- Id: TransitionStorageType
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

When I run the template, all of the buckets update with nice green check marks, then the whole stack rolls back without saying what the issue is. If I turn that into 2 separate rules like this:

```
- Id: TransitionStorageIA
  Status: Enabled
  Transitions:
    - StorageClass: "STANDARD_IA"
      TransitionInDays: 2
- Id: TransitionStorageGlacier
  Status: Enabled
  Transitions:
    - StorageClass: "GLACIER"
      TransitionInDays: 32
```

then each bucket being modified errors with:

`'Days' in Transition action must be greater than or equal to 30 for storageClass 'STANDARD_IA'`

But if you look at the rules, the object is in Standard-IA for 30 days: it transitions to Standard-IA at day 2 and doesn't change to Glacier until day 32. So that error does not make any sense. What do I need to do to make this work? My monthly bill is in serious need of some trimming. Thank you.
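
One way to take CloudFormation out of the equation is to apply the combined rule directly through the S3 API and see whether the service accepts it: if PutBucketLifecycleConfiguration succeeds, the rollback is a stack/template issue rather than lifecycle validation. A minimal sketch with the AWS SDK for Java 2.x; the bucket name is a placeholder, and the rule is applied to the whole bucket via an empty prefix filter:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration;
import software.amazon.awssdk.services.s3.model.ExpirationStatus;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest;
import software.amazon.awssdk.services.s3.model.Transition;
import software.amazon.awssdk.services.s3.model.TransitionStorageClass;

public class LifecycleSketch {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Same combined rule as the CloudFormation template: IA at day 2, Glacier at day 32.
            LifecycleRule rule = LifecycleRule.builder()
                    .id("TransitionStorageType")
                    .status(ExpirationStatus.ENABLED)
                    .filter(LifecycleRuleFilter.builder().prefix("").build()) // whole bucket
                    .transitions(
                            Transition.builder()
                                    .days(2)
                                    .storageClass(TransitionStorageClass.STANDARD_IA)
                                    .build(),
                            Transition.builder()
                                    .days(32)
                                    .storageClass(TransitionStorageClass.GLACIER)
                                    .build())
                    .build();

            s3.putBucketLifecycleConfiguration(PutBucketLifecycleConfigurationRequest.builder()
                    .bucket("my-backup-bucket") // placeholder
                    .lifecycleConfiguration(BucketLifecycleConfiguration.builder()
                            .rules(rule)
                            .build())
                    .build());
        }
    }
}
```
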
1 answer · 0 votes · 13 views · asked a month ago

Uploading a file I downloaded from Sharepoint to S3 Bucket

I am attempting to download a file from SharePoint through their REST API and then upload that file to my S3 bucket. I can "successfully" download the file and upload it to S3. The issue is that when I then go to my bucket and look at the file, it's corrupted. (This code is in Apex, since I'm attempting to do this in Salesforce.)

My code:

```
public static void uploadFile() {
    AccessTokenResponse accessToken = getAccessToken();

    HttpRequest req = new HttpRequest();
    req.setEndpoint('<Sharepoint path>/_api/Web/GetFileByServerRelativePath(decodedurl=\'/sites/<channelname>/Shared%20Documents/General/TestFile.docx\')//OpenBinaryStream()');
    req.setMethod('GET');
    req.setHeader('Accept', 'application/json;odata=verbose');
    req.setHeader('Authorization', 'Bearer ' + accessToken.access_token);

    Http h = new Http();
    HTTPResponse res = h.send(req);
    uploadToAWS(res.getBody());
}

public static void uploadToAWS(String body) {
    Blob bodyBlob = Blob.valueOf(body);
    String attachmentBody = EncodingUtil.base64Encode(bodyBlob);
    String formattedDateString = Datetime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
    String key = '<key>';
    String secret = '<secret>';
    String bucketname = '<bucket name>';
    String host = 's3.us-east-1.amazonaws.com';
    String method = 'PUT';
    String filename = 'TestFile.docx';

    HttpRequest req = new HttpRequest();
    req.setMethod(method);
    req.setEndpoint('https://' + bucketname + '.' + host + '/' + filename);
    req.setHeader('Host', bucketname + '.' + host);
    req.setHeader('Content-Length', String.valueOf(attachmentBody.length()));
    req.setHeader('Content-Encoding', 'UTF-8');
    req.setHeader('Content-type', 'application/vnd.openxmlformats-officedocument.wordprocessingml.document');
    req.setHeader('Connection', 'keep-alive');
    req.setHeader('Date', formattedDateString);
    req.setHeader('ACL', 'public-read-write');
    req.setBodyAsBlob(EncodingUtil.base64Decode(attachmentBody));

    String stringToSign = 'PUT\n\napplication/vnd.openxmlformats-officedocument.wordprocessingml.document\n' + formattedDateString + '\n' + '/' + bucketname + '/' + filename;
    String encodedStringToSign = EncodingUtil.urlEncode(stringToSign, 'UTF-8');
    Blob mac = Crypto.generateMac('HMACSHA1', blob.valueof(stringToSign), blob.valueof(secret));
    String signedKey = EncodingUtil.base64Encode(mac);
    String authHeader = 'AWS' + ' ' + key + ':' + signedKey;
    req.setHeader('Authorization', authHeader);

    Http http = new Http();
    HTTPResponse res = http.send(req);
}
```

I can go to my bucket in S3 and see that a file was uploaded by that name. When I download the file and try to open it, though, it says it's corrupted. I'm not sure how to debug any further. I've tried with both a docx and a pdf, with the same corrupted result. Any idea why my file is corrupted, what I should be doing to fix it, or how I can debug more to understand where things are failing?
1 answer · 0 votes · 18 views · asked a month ago

Simple Amplify Storage Requests Which Require Authentication

Hello, I am new to AWS, and I am using Amplify to build my application (React + Node). I am trying to make a very simple storage interface for user documents, and I don't want these documents to be accessible by those who do not sign in through the Cognito user pool. However, I do want these documents to be accessible to all users who have signed in through my application.

I followed all of the directions specified in [the official documentation page regarding setup](https://docs.amplify.aws/lib/storage/getting-started/q/platform/js/#storage-with-amplify) and didn't configure any special options. I then went into the web interface for my S3 bucket, found the newly created storage bucket, and added a folder called "templates" with a couple of sub-folders, and then some user document templates.

The problems started to occur upon calling the `Storage.list(...)` function within my application. The promise would resolve successfully, but the list would be empty. I understand now that's because my application was attempting to index the S3 bucket through a `public` scope prefix. When I create a folder named public and add the files in there, everything works nicely. I was under the impression, though, that using this public folder would allow my privileged content to be indexed by users who were not credentialed (i.e. guests from outside my application who didn't pass through the Cognito login portal). Is that the case? There are no groups configured within my Cognito user pool.

Right now, calling Amplify Storage API functions works, but only in the `public` scope. I had thought that what I wanted was to only allow such functionality within the `private` scope, but I'm beginning to think, based on the docs pages regarding user access, that I would be fine using the `public` scope, as it doesn't allow access to internal files by guests, who would not be signed in. This hunch is furthered by the information describing the `protected` and `private` scopes as user-specific.

Should I delve deeper into the permissions associated with these bucket objects, configure some sort of user group system, and then configure ACLs based on the groups, or would using files within the public scope be fine for my use case? I just don't want users who aren't signed in through Cognito to be able to access files. Thank you for your time, and I hope this question finds you well.
0 answers · 0 votes · 18 views · asked a month ago

RDS Backup & Restore SP Failing with Error - Provided Token Is Expired

We have had a scheduled, daily auto backup of our SQL Server DB on RDS to an S3 bucket for at least the last 6 years, and it was working fine. Suddenly it stopped working, which means we don't see any backup in our S3 bucket since 24th March. Upon diagnosing the problem, we realized that it has been failing since then with the following error.

Step 1:

```
exec msdb.dbo.rds_restore_database @restore_db_name='RestoreDbFromS3', @s3_arn_to_restore_from='arn:aws:s3:::awsbucketName/SqlServerDb.bak';
```

Step 2:

```
exec msdb.dbo.rds_task_status @task_id=7;
```

The response indicates an error with the following task description:

```
[2022-05-28 12:51:22.030] Task execution has started.
[2022-05-28 12:51:22.237] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
[2022-05-28 12:51:22.240] Task has been aborted
[2022-05-28 12:51:22.240] The provided token has expired.
```

We studied a lot to identify the root cause and solution but could not find anything accurately relevant. The link below shows troubleshooting options for various error responses, but it does not include the error response we are getting: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html#SQLServer.Procedural.Importing.Native.Troubleshooting

Note: between 25th and 26th March, our AWS instance was suspended for a couple of hours due to delayed payment of the monthly invoice. We restored it quickly. Everything on the same AWS account has been working fine since then, but we just found out that the DB backup process has been impacted, as the last successful backup available in the S3 bucket is dated 24th March. We suspect that some token expired upon the account suspension, but we are unable to identify which one and how to restore it back to normal. Help, assistance, and guidance would be much appreciated.
1 answer · 0 votes · 35 views · asked a month ago

What is the relationship between AWS Config retention period and AWS S3 Lifecycle policy?

I found here: https://aws.amazon.com/blogs/mt/configuration-history-configuration-snapshot-files-aws-config/

> AWS Config delivers three types of configuration files to the S3 bucket:
> * Configuration history (a configuration history is a collection of the configuration items for a given resource over any time period)
> * Configuration snapshot
> * OversizedChangeNotification

However, this doc: https://docs.aws.amazon.com/ja_jp/config/latest/developerguide/delete-config-data-with-retention-period.html only says that the retention period deletes "ConfigurationItems" (a configuration item represents a point-in-time view of the various attributes of a supported AWS resource that exists in your account).

And this doc: https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-history says:

> The components of a configuration item include metadata, attributes, relationships, current configuration, and related events. AWS Config creates a configuration item whenever it detects a change to a resource type that it is recording.

I wonder: Is ConfigurationItems a subset of the configuration history? Are the things saved to S3 equal to ConfigurationItems? If not, where are ConfigurationItems stored? And if they are stored in S3, will ConfigurationItems be deleted or become damaged?

I am setting the S3 lifecycle to expire objects in 300 days and the AWS Config retention period to 7 years, so I am wondering what the relationship between those two is. Because the S3 lifecycle period is 300 days, will the AWS Config data be deleted in 300 days? Thank you so much!
1 answer · 0 votes · 25 views · asked a month ago