
Questions tagged with Amazon S3 Glacier


Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add S3 Glacier to my backup. I have the bucket created. I need to add the following to my bucket policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:GetObject",
                "s3:RestoreObject",
                "s3:ListBucket",
                "s3:AbortMultipartUpload",
                "s3:GetBucketVersioning",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:GetBucketObjectLockConfiguration",
                "ec2:DescribeInstances",
                "ec2:CreateKeyPair",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances",
                "ec2:DeleteKeyPair",
                "ec2:DescribeVpcAttribute",
                "ec2:CreateTags",
                "ec2:DescribeSubnets",
                "ec2:TerminateInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeImages",
                "ec2:DescribeVpcs",
                "ec2:CreateVpc",
                "ec2:CreateSubnet",
                "ec2:DescribeAvailabilityZones",
                "ec2:CreateRoute",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:ModifyVpcAttribute",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DescribeRouteTables",
                "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added `"Principal": {},` under the Sid. I have no idea what to put in the braces. I changed it to `"*"` and that seemed to fix it, but I'm not sure that's the right thing to do. The next error I get is that all of the `ec2:*` actions and `s3:ListAllMyBuckets` give me "Unsupported Action in Policy". This is where I get lost. Not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
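
For reference: a bucket policy only accepts `s3:` actions that apply to the bucket or its objects, and it requires a `Principal` that names who is being granted access. Account-wide actions such as `s3:ListAllMyBuckets` and all of the `ec2:*` actions cannot go in a bucket policy at all; they belong in an IAM policy attached to the Veeam user or role. A minimal sketch of a bucket policy along those lines is below; the account ID, user name, and bucket name are placeholders, not values from this question.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VeeamBucketAccessExample",
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::123456789012:user/VeeamBackupUser" },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:RestoreObject",
                "s3:AbortMultipartUpload",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:GetBucketVersioning",
                "s3:GetBucketObjectLockConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-VEEAM-BUCKET",
                "arn:aws:s3:::YOUR-VEEAM-BUCKET/*"
            ]
        }
    ]
}
```

Using a specific principal like this avoids `"Principal": "*"`, which would open the statement to everyone; making the bucket public is not required for this setup.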
2
answers
0
votes
5
views
amatuerAWSguy
asked 2 days ago

AWS: Multipart upload results in 403 Forbidden error even though single part upload works fine

**CONTEXT:** In my app, I have a feature that allows the user to upload a video. I noticed that when users try to upload large videos, the upload sometimes fails. After a bit of research, I found out that for files larger than 100 MB I should use [multipart upload][1]. So I have been following this [tutorial][2] to implement multipart upload in my app, and I reached **Stage Three**.

----------

**PART 1: Previous single part upload works fine**

This is the implementation of a single part upload using *pre-signed URLs*:

> BACKEND

```
var AWS = require("aws-sdk");

const REGION = "*************************"; //e.g. "us-east-1"
const BUCKET_NAME = "l****************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

var getVideoSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: BUCKET_NAME,
        Key: key,
        ContentType: "video/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};

exports.getVideoSignedUrl = getVideoSignedUrl;
```

> FRONTEND

```
export const getVideoPreSignedUrl = async () =>
  await axios.get("/api/profile/getVideoPreSignedURL");

export const uploadVideoFileToCloud = async (file) => {
  const { data: uploadConfig } = await getVideoPreSignedUrl();
  await axios.put(uploadConfig.url, file, {
    headers: {
      "Content-Type": file.type,
      "x-amz-acl": "public-read",
    },
    transformRequest: (data, headers) => {
      delete headers.common["Authorization"];
      return data;
    },
  });
};
```

----------

**PART 2: Multipart upload which throws 403 forbidden error**

> BACKEND

```
var AWS = require("aws-sdk");

const REGION = "***********************"; //e.g. "us-east-1"
const BUCKET_NAME = "************************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

// ==========================================================
// Replacing getVideoSignedUrl with initiateMultipartUpload
// That would generate a presigned url for every part

const initiateMultipartUpload = async (object_name) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    ContentType: "video/*",
    ACL: "public-read",
    Expires: 300,
  };

  const res = await s3.createMultipartUpload(params).promise();
  return res.UploadId;
};

const generatePresignedUrlsParts = async (object_name, number_of_parts) => {
  const upload_id = await initiateMultipartUpload(object_name);
  const baseParams = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    UploadId: upload_id,
  };

  const promises = [];

  for (let index = 0; index < number_of_parts; index++) {
    promises.push(
      s3.getSignedUrlPromise("uploadPart", {
        ...baseParams,
        PartNumber: index + 1,
      })
    );
  }

  const res = await Promise.all(promises);

  const signed_urls = {};
  res.map((signed_url, i) => {
    signed_urls[i] = signed_url;
  });
  return signed_urls;
};

exports.initiateMultipartUpload = initiateMultipartUpload;
exports.generatePresignedUrlsParts = generatePresignedUrlsParts;
```

> FRONTEND

This is where the error occurs. **See** `const resParts = await Promise.all(promises)`:

```
export const getMultiPartVideoUploadPresignedUrls = async (number_of_parts) => {
  const request_params = {
    params: {
      number_of_parts,
    },
  };

  return await axios.get(
    "/api/profile/get_multi_part_video_upload_presigned_urls",
    request_params
  );
};

// Using multipart upload
export const uploadVideoFileToCloud = async (video_file, dispatch) => {
  // Each chunk is 100Mb
  const FILE_CHUNK_SIZE = 100_000_000;
  let video_size = video_file.size;
  let video_size_in_mb = Math.floor(video_size / 1000000);
  const number_of_parts = Math.floor(video_size_in_mb / 100) + 1;

  const response = await getMultiPartVideoUploadPresignedUrls(number_of_parts);
  const urls = response.data;
  console.log(
    "🚀 ~ file: profileActions.js ~ line 654 ~ uploadParts ~ urls",
    urls
  );

  // async function uploadParts(file: Buffer, urls: Record<number, string>) {
  // const axios = Axios.create()
  // delete axios.defaults.headers.put["Content-Type"];

  const keys = Object.keys(urls);
  const promises = [];

  for (const indexStr of keys) {
    const index = parseInt(indexStr);
    const start = index * FILE_CHUNK_SIZE;
    const end = (index + 1) * FILE_CHUNK_SIZE;
    const blob =
      index < keys.length ? video_file.slice(start, end) : video_file.slice(start);

    console.log(
      "🚀 ~ file: profileActions.js ~ line 691 ~ uploadParts ~ urls[index]",
      urls[index]
    );
    console.log(
      "🚀 ~ file: profileActions.js ~ line 682 ~ uploadParts ~ blob",
      blob
    );

    const upload_params = {
      headers: {
        "Content-Type": video_file.type,
        "x-amz-acl": "public-read",
      },
      transformRequest: (data, headers) => {
        delete headers.common["Authorization"];
        return data;
      },
    };

    const axios_request = axios.put(urls[index], blob, upload_params);
    promises.push(axios_request);
    console.log(
      "🚀 ~ file: profileAction.helper.js ~ line 117 ~ uploadParts ~ promises",
      promises
    );
  }

  // Uploading video parts
  // This throws the 403 forbidden error
  const resParts = await Promise.all(promises);

  // This never gets logged
  console.log(
    "🚀 ~ file: profileAction.helper.js ~ line 124 ~ uploadParts ~ resParts",
    resParts
  );

  // return resParts.map((part, index) => ({
  //   ETag: (part as any).headers.etag,
  //   PartNumber: index + 1
  // }))
};
```

This is the error that's logged:

[![PUT 403 forbidden error][3]][3]

----------

**PART 3: AWS Bucket & CORS policy:**

1. CORS policy:

```
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT", "POST", "GET"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
```

2. The bucket policy hasn't been changed since I created the bucket and it's still empty by default:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Principal": {},
            "Effect": "Allow",
            "Action": [],
            "Resource": []
        }
    ]
}
```

[![Current bucket policy][4]][4]

So maybe I should add something here? I also have all of these **unchecked:**

[![Bucket Permissions][5]][5]

----------

**NOTES:**

1. I tested multipart upload with files both smaller and larger than 100 MB, and it always throws the 403 forbidden error.
2. I don't understand why I would get a forbidden error when the single part upload works just fine. In other words, the upload is allowed, and since both single part and multipart upload use the same credentials, that **forbidden** error should not occur.
3. I have a piece of code that shows me the progress of the upload, and I can see the upload progressing. The error seems to occur **AFTER** the upload of **EACH PART** is done:

[![Upload progress image 1][5]][5]
[![Upload progress image 2][6]][6]

  [1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
  [2]: https://www.altostra.com/blog/multipart-uploads-with-s3-presigned-url
  [3]: https://i.stack.imgur.com/1xCMz.png
  [4]: https://i.stack.imgur.com/pz2pw.png
  [5]: https://i.stack.imgur.com/OyqRp.png
  [6]: https://i.stack.imgur.com/HzICz.png
  [7]: https://i.stack.imgur.com/5W4IU.png
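
A hedged note on a likely culprit (an assumption about this setup, not a confirmed diagnosis): the part URLs above are presigned with only `Bucket`, `Key`, `UploadId`, and `PartNumber`, but each part is then PUT with extra `x-amz-acl` and `Content-Type` headers. Headers that start with `x-amz-` generally have to be covered by the signature, so an unsigned `x-amz-acl` on every part request is a plausible cause of the 403; `UploadPart` also has no ACL parameter of its own, since the ACL and content type passed to `createMultipartUpload` apply to the finished object. A minimal, untested sketch of a part upload without those headers (`uploadSinglePart` is a hypothetical helper name):

```
import axios from "axios";

// Hypothetical helper (untested sketch): PUT one part to its presigned
// "uploadPart" URL with no extra headers, so the request matches what the
// URL was actually signed for (Bucket, Key, UploadId, PartNumber only).
export const uploadSinglePart = async (presigned_part_url, blob) => {
  const res = await axios.put(presigned_part_url, blob, {
    transformRequest: (data, headers) => {
      // Don't forward the app's own Authorization header to S3, and don't
      // add "x-amz-acl" / "Content-Type" -- they were not signed.
      delete headers.common["Authorization"];
      return data;
    },
  });

  // completeMultipartUpload later needs { ETag, PartNumber } for every part,
  // so hand the ETag response header back to the caller.
  return res.headers.etag;
};
```

For the browser to be able to read that `ETag` response header at all, the bucket's CORS configuration also has to list it, e.g. `"ExposeHeaders": ["ETag"]` (it is currently empty in PART 3 above).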
1
answers
0
votes
5
views
AWS-User-9169178
asked 20 days ago

S3 intelligent tier re-archiving clarification

We have objects in the Intelligent-Tiering storage class. The bucket had an Intelligent-Tiering configuration to move objects to the Archive Access and Deep Archive Access tiers. We removed this configuration, but not before objects were moved to these tiers. For objects in Archive Access and Deep Archive Access, we have issued restore object requests.

I would expect the archiving behavior for *restored* objects to follow whatever the bucket's current config specifies, and in the absence of an archive configuration, I would expect the default behavior of objects being moved only to the Archive Instant Access tier. However, [the documentation on restoring objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-initiate-restore-object.html) is unclear:

> When you restore from the S3 Intelligent-Tiering Archive Access or Deep Archive Access tiers, the object transitions back into the S3 Intelligent-Tiering Frequent Access tier. The object **automatically transitions into the Archive Access tier after a minimum of 90 consecutive days** of no access. It moves into the **Deep Archive Access tier after a minimum of 180 consecutive days** of no access ...

This reads like objects restored from these tiers will *always* go back to Archive Access or Deep Archive Access if not accessed, regardless of whether the bucket is only configured to move objects to Archive Instant Access. Is this actually the case? Or do restored objects obey the bucket's current config, as one would expect?
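
While that behavior is being clarified, it may help to verify what the bucket and objects currently report. A rough, untested sketch using the AWS SDK for JavaScript v2 (bucket and key names are placeholders) that lists any Intelligent-Tiering archive configurations still attached to the bucket and checks one object's archive status:

```
// Untested sketch; "example-bucket" and "example-key" are placeholders.
var AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

const checkTieringState = async (bucket, key) => {
  // Lists whatever Archive Access / Deep Archive Access configurations
  // are still attached to the bucket (empty if they were all removed).
  const configs = await s3
    .listBucketIntelligentTieringConfigurations({ Bucket: bucket })
    .promise();
  console.log("Configurations:", configs.IntelligentTieringConfigurationList);

  // HeadObject reports the storage class, the ArchiveStatus
  // (ARCHIVE_ACCESS or DEEP_ARCHIVE_ACCESS), and the Restore field
  // for a restore that is in progress or has completed.
  const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
  console.log(head.StorageClass, head.ArchiveStatus, head.Restore);
};

checkTieringState("example-bucket", "example-key").catch(console.error);
```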
1
answers
0
votes
11
views
yodiggity
asked 24 days ago

HELP with uploading large files to Glacier

Hi, I don't know why Glacier is so convoluted/tricky to manage. Is this part of the reason why it's so cheap? I'm merely trying to upload some large vmdk's to Glacier. I've been trawling AWS documentation for days and I literally have 50 tabs open detailing exactly nothing. All the progress I've made thus far (just creating a freaking vault) was done with the help of information on other obscure blogs and YouTube videos.

For example, on https://docs.aws.amazon.com/cli/latest/userguide/cli-services-glacier.html#cli-services-glacier-initiate there is a heading "Preparing a File", but this is immediately followed by "Create a file". So which is it: preparing an EXISTING file or creating a NEW file? Is this step even required? Why is this so convoluted?

Then, from the looks of it, I need to use some ancient buggy Windows XP program to split the files into chunks before uploading? Are you kidding me?! It already took the best part of a day to export this large vmdk. Now I have to spend another day merely "splitting" it into chunks (if I have enough HDD space, that is), then I have to make sure I don't make any mistakes in the CLI commands that follow by correctly stating the starting/ending bytes, FOR EACH CHUNK. Then another day for uploading, and another day to reassemble it? Again, are you kidding me?! If I have a 100GB file, how many chunks will this result in? I have to address EACH chunk with its own special little line of code. Absolutely bonkers.

I'm on CBT Nuggets and TrainSignal, and neither of these has any support videos on Glacier. Does anyone know of any other material that will help me grasp what exactly I need to do in order to upload large files to Glacier? I know there are 3rd party clients available, but I'd like to understand how to do this via the command line. Thanks for reading.

Edited by: fnanfne on Sep 17, 2019 4:19 AM

Edited by: fnanfne on Sep 17, 2019 8:53 AM
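
For what it's worth, the flow on the docs page linked above boils down to a split, three CLI calls, and a tree-hash calculation; the "Create a file" step there only generates a dummy test file for the walkthrough, so with an existing vmdk you would skip it and split your own file instead. Below is a condensed, untested sketch using a tiny 3 MiB file and a 1 MiB part size so the byte ranges stay readable; the vault name, file names, and upload ID are placeholders. For a real upload you would pick a much larger part size: vault multipart uploads accept power-of-two part sizes from 1 MiB up to 4 GiB and at most 10,000 parts, so a 100 GB archive with a 1 GiB part size is only around 100 parts.

```
# Sketch only: "myvault", the file names, and UPLOADID are placeholders.

# 1. Split the archive into equal-size parts (1 MiB here; the last part may be
#    smaller). Any splitter that produces exact byte-sized chunks works.
split --bytes=1048576 --verbose my-backup.vmdk chunk    # -> chunkaa chunkab chunkac

# 2. Start the multipart upload and note the uploadId in the response.
aws glacier initiate-multipart-upload --account-id - --vault-name myvault \
    --archive-description "vmdk backup" --part-size 1048576
UPLOADID="19gaRezEXAMPLE"   # placeholder copied from the response

# 3. Upload each part, stating which byte range of the whole archive it covers.
aws glacier upload-multipart-part --account-id - --vault-name myvault \
    --upload-id "$UPLOADID" --body chunkaa --range 'bytes 0-1048575/*'
aws glacier upload-multipart-part --account-id - --vault-name myvault \
    --upload-id "$UPLOADID" --body chunkab --range 'bytes 1048576-2097151/*'
aws glacier upload-multipart-part --account-id - --vault-name myvault \
    --upload-id "$UPLOADID" --body chunkac --range 'bytes 2097152-3145727/*'

# 4. Compute the SHA-256 tree hash: hash each 1 MiB part, then hash pairs of
#    hashes together until a single root hash remains.
openssl dgst -sha256 -binary chunkaa > hash1
openssl dgst -sha256 -binary chunkab > hash2
openssl dgst -sha256 -binary chunkac > hash3
cat hash1 hash2 > hash12
openssl dgst -sha256 -binary hash12 > hash12hash
cat hash12hash hash3 > hash123
TREEHASH=$(openssl dgst -sha256 hash123 | awk '{print $2}')

# 5. Finish the upload with the total size in bytes and the tree hash.
aws glacier complete-multipart-upload --account-id - --vault-name myvault \
    --upload-id "$UPLOADID" --archive-size 3145728 --checksum "$TREEHASH"
```

If the goal is cheap archival rather than vaults specifically, uploading to an ordinary S3 bucket with `aws s3 cp --storage-class GLACIER` (or `DEEP_ARCHIVE`) sidesteps the manual chunking entirely, since the high-level `aws s3` commands handle multipart uploads automatically.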
3
answers
0
votes
0
views
fnanfne
asked 2 years ago