
Questions tagged with Amazon S3 Glacier

Sort by most recent
  • 1
  • 90 / page


How to build/access call recording filenames

When we enable the Call Recording behavior in a contact flow, the call recording files are pushed to S3 with filenames in the `contactId_timestamp.wav` format. We can get the contactId from the Amazon Connect Streams API `getContactId()` event. I want to fetch the recording of every call after the call completes, and to get a recording from S3 I need to pass the key (the filename) to the getObject API. I am trying to automate this in my code so that after every call it pulls the call recording and adds it to my call activity, the same way `Search Contacts` shows an audio file after every call. How do I get the timestamp so that I can build and access the call recording files by name? What I have done so far is use the getObject API to download a recording by filename:

```
require("dotenv").config();
const express = require("express");
const app = express();
app.listen(3001);

const aws = require("aws-sdk");

// Credentials and region are taken from the environment
aws.config.update({
  secretAccessKey: process.env.ACCESS_SECRET,
  accessKeyId: process.env.ACCESS_KEY,
  region: process.env.REGION,
});

const BUCKET = process.env.BUCKET;
const s3 = new aws.S3();

// Download a recording by its S3 key (the filename)
app.get("/download/:filename", async (req, res) => {
  const filename = req.params.filename;
  const x = await s3.getObject({ Bucket: BUCKET, Key: filename }).promise();
  res.send(x.Body);
});
```
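A note on the filename problem: since the timestamp portion of the key is not known to the client, one workaround is to list the bucket under the recording prefix and match the key on the contactId instead of reconstructing the exact name. The sketch below is not from the original post; it reuses the `s3` client and `BUCKET` from the snippet above, and the prefix value is a placeholder that would need to match wherever the Connect instance writes its recordings.

```
// Sketch: find a recording's key by contactId instead of rebuilding the
// contactId_timestamp.wav name. The prefix is an assumed placeholder.
const findRecordingKey = async (contactId, prefix) => {
  let continuationToken;
  do {
    const page = await s3
      .listObjectsV2({
        Bucket: BUCKET,
        Prefix: prefix, // e.g. the date-based folder the recordings land in
        ContinuationToken: continuationToken,
      })
      .promise();
    const match = (page.Contents || []).find((obj) => obj.Key.includes(contactId));
    if (match) return match.Key; // ".../<contactId>_<timestamp>.wav"
    continuationToken = page.NextContinuationToken;
  } while (continuationToken);
  return null;
};
```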
1 answer · 0 votes · 10 views · asked 2 months ago

COPY from S3 to Redshift with manifest fails

Hi, I need to load data from Aurora (MySQL) to Redshift, and using S3 is one of the available options. I can extract data from Aurora (MySQL) to S3 using:

```
SELECT * FROM data_table INTO OUTFILE S3 's3-XXX://bucket_name/aurora_files/data_table'
FORMAT CSV HEADER
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
OVERWRITE ON;
```

and load the same data into Redshift using:

```
copy data_table from 's3://bucket_name/aurora_files/data_table.part_00000'
access_key_id 'XXX'
secret_access_key 'XXX'
csv delimiter ';'
ignoreheader 1
timeformat 'YYYY-MM-DD HH:MI:SS'
region 'XXX';
```

If I instead extract the data with a manifest and load from the manifest, I get the following error:

```
[2022-03-14 18:08:52] [XX000] ERROR: S3 path "s3-XXX://bucket_name/aurora_files/data_table.part_00000" has invalid format.
[2022-03-14 18:08:52] Detail:
[2022-03-14 18:08:52] -----------------------------------------------
[2022-03-14 18:08:52] error:  S3 path "s3-XXX://bucket_name/aurora_files/data_table.part_00000" has invalid format.
[2022-03-14 18:08:52] code:      8001
[2022-03-14 18:08:52] context:   Parsing S3 Bucket
[2022-03-14 18:08:52] query:     312924
[2022-03-14 18:08:52] location:  s3_utility.cpp:133
[2022-03-14 18:08:52] process:   padbm@ster [pid=13049]
[2022-03-14 18:08:52] -----------------------------------------------
```

The following commands are used to create the S3 files and load them into Redshift with a manifest:

```
SELECT * FROM data_table INTO OUTFILE S3 's3-XXX://bucket_name/aurora_files/data_table'
FORMAT CSV HEADER
FIELDS TERMINATED BY ';'
LINES TERMINATED BY '\n'
MANIFEST ON
OVERWRITE ON;
```

```
copy data_table from 's3://bucket_name/aurora_files/data_table.manifest'
access_key_id 'XXX'
secret_access_key 'XXX'
csv delimiter ';'
ignoreheader 1
timeformat 'YYYY-MM-DD HH:MI:SS'
region 'XXX'
manifest;
```

What could be the issue?
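A note on the error: the manifest written by the Aurora export inherits the region-qualified `s3-XXX://` scheme used in the `INTO OUTFILE S3` target, and Redshift's COPY only accepts plain `s3://` paths, which matches the "has invalid format" message above. If Aurora accepts a plain `s3://` target in your setup, exporting that way avoids the problem; otherwise the manifest can be rewritten before the COPY. The sketch below is not from the original post, uses the aws-sdk v2 for Node.js like the other snippets on this page, and assumes the manifest has the usual `{"entries": [{"url": ...}]}` shape.

```
// Sketch: rewrite "s3-<region>://" URLs in the export manifest to "s3://"
// and save the result as a new manifest for the Redshift COPY.
const AWS = require("aws-sdk");
const s3 = new AWS.S3(); // region/credentials from the environment

const fixManifest = async () => {
  const Bucket = "bucket_name";
  const Key = "aurora_files/data_table.manifest";

  const obj = await s3.getObject({ Bucket, Key }).promise();
  const manifest = JSON.parse(obj.Body.toString("utf8"));

  manifest.entries = manifest.entries.map((entry) => ({
    ...entry,
    url: entry.url.replace(/^s3-[^:]+:\/\//, "s3://"),
  }));

  await s3
    .putObject({
      Bucket,
      Key: "aurora_files/data_table.fixed.manifest",
      Body: JSON.stringify(manifest),
      ContentType: "application/json",
    })
    .promise();
};
```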
1 answer · 0 votes · 10 views · asked 2 months ago

Total size of my buckets is not the same as what appears inside them

Hi, I have a question about how the size of my buckets is reported in S3 (I originally attached three images). In this account I have only one bucket, "gualaceo". When I open the S3 dashboard it shows a total size of 1.4 TB. When I open the bucket and select all of its folders to calculate the total size, it reports that "gualaceo" is 661 GB. I searched for possible differences between the overall size and the bucket size and was given the following link: https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-metric-discrepancy/?nc1=h_ls. After reading it and following the instructions, I checked for incomplete multipart uploads, and the result is 0. The only option I have enabled is object versioning. Still, with the total size reported as 1.8 TB and the only existing bucket at 661 GB, there is a difference of more than 1 TB and I can't understand where it comes from. Beyond the fact that this also increases the bill, my "problem" is mainly to understand where that large difference in storage comes from, or to get some help analyzing it so that I can learn in case I am managing something incorrectly. I would appreciate that. Thanks for your attention and your time. Best regards, Cereza.
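A note that may help with the analysis: with versioning enabled, noncurrent object versions (and delete markers) count toward the storage metrics shown on the dashboard but are not included when you select the current objects in the console, which is one common source of exactly this kind of gap. The sketch below is not part of the original question; it uses the aws-sdk v2 for Node.js as elsewhere on this page and simply sums the size of noncurrent versions, with the bucket name taken from the question.

```
// Sketch: sum the size of noncurrent object versions in a versioned bucket.
// Credentials and region are assumed to come from the environment.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

const sumNoncurrentBytes = async (Bucket) => {
  let total = 0;
  let KeyMarker, VersionIdMarker, truncated = true;
  while (truncated) {
    const page = await s3
      .listObjectVersions({ Bucket, KeyMarker, VersionIdMarker })
      .promise();
    for (const v of page.Versions || []) {
      if (!v.IsLatest) total += v.Size; // noncurrent versions only
    }
    truncated = page.IsTruncated;
    KeyMarker = page.NextKeyMarker;
    VersionIdMarker = page.NextVersionIdMarker;
  }
  return total;
};

sumNoncurrentBytes("gualaceo").then((bytes) =>
  console.log(`Noncurrent versions: ${(bytes / 1e9).toFixed(1)} GB`)
);
```

If noncurrent versions account for the difference, a lifecycle rule that expires them after some number of days is the usual way to keep the total under control.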
1 answer · 0 votes · 12 views · asked 3 months ago

How to use S3 bucket getObjectTorrent with Node.js?

Hi, I am using the Node.js SDK in a serverless AWS Lambda function to call getObjectTorrent on an object in an S3 bucket, but it gives me the error "The specified method is not allowed against this resource."

Request:

```
{
  "Bucket": "my-bucket-name",
  "Key": "file name with extension",
  "ExpectedBucketOwner": "my-account-id",
  "RequestPayer": "requester"
}
```

```
let s3 = new AWS.S3({
  region: "us-east-1",
  endpoint: "https://s3.amazonaws.com/",
  // endpoint: 'https://lock-bucket.s3.amazonaws.com/request.PNG?torrent',
  accessKeyId: `${event.queryStringParameters.AccessKeyID}`,
  secretAccessKey: `${event.queryStringParameters.SecretAccessKey}`,
});

let params: AWS.S3.GetObjectTorrentRequest = {
  Bucket: event.body.Bucket,
  Key: event.body.Key,
  ExpectedBucketOwner: event.body.ExpectedBucketOwner,
  RequestPayer: event.body.RequestPayer,
};
console.log(params);

let data = await s3.getObjectTorrent(params).promise();

return formatJSONResponse(
  {
    data,
  },
  200
);
```

Error message:

```
"message": "Internal server",
"error": {
  "message": "The specified method is not allowed against this resource.",
  "code": "MethodNotAllowed",
  "region": null,
  "time": "2022-02-03T09:53:50.202Z",
  "requestId": "9KMFF5C74NPN1BHY",
  "extendedRequestId": "Jpog5Y5godEIJP8+9EJlv+VbCz8QK8UewsR8f/fH7039KrSBTDjcG1SyGhkdocUd8e+VhrXQVgk=",
  "statusCode": 405,
  "retryable": false,
  "retryDelay": 8.720824387981164
}
```
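A note for context: torrent retrieval was only ever supported for a subset of objects (it was never available for objects larger than 5 GB, for instance), and AWS has been retiring the S3 BitTorrent feature, so a `MethodNotAllowed` response can simply mean the operation is not available for that object or account. If a plain download is acceptable, one fallback is to return a time-limited presigned `getObject` URL instead; the sketch below is not from the original post and reuses the `s3` client and `formatJSONResponse` helper shown above.

```
// Fallback sketch: hand the caller a presigned getObject URL rather than a
// .torrent file. The URL expires after five minutes.
const getDownloadUrl = (Bucket, Key) =>
  s3.getSignedUrlPromise("getObject", { Bucket, Key, Expires: 300 });

// Usage inside the handler:
// const url = await getDownloadUrl(event.body.Bucket, event.body.Key);
// return formatJSONResponse({ url }, 200);
```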
2 answers · 0 votes · 18 views · asked 4 months ago

Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add S3 Glacier to my backup, and I have the bucket created. I need to add the following to my bucket policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject", "s3:PutObject", "s3:GetObject", "s3:RestoreObject",
        "s3:ListBucket", "s3:AbortMultipartUpload", "s3:GetBucketVersioning",
        "s3:ListAllMyBuckets", "s3:GetBucketLocation",
        "s3:GetBucketObjectLockConfiguration",
        "ec2:DescribeInstances", "ec2:CreateKeyPair", "ec2:DescribeKeyPairs",
        "ec2:RunInstances", "ec2:DeleteKeyPair", "ec2:DescribeVpcAttribute",
        "ec2:CreateTags", "ec2:DescribeSubnets", "ec2:TerminateInstances",
        "ec2:DescribeSecurityGroups", "ec2:DescribeImages", "ec2:DescribeVpcs",
        "ec2:CreateVpc", "ec2:CreateSubnet", "ec2:DescribeAvailabilityZones",
        "ec2:CreateRoute", "ec2:CreateInternetGateway", "ec2:AttachInternetGateway",
        "ec2:ModifyVpcAttribute", "ec2:CreateSecurityGroup", "ec2:DeleteSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress",
        "ec2:DescribeRouteTables", "ec2:DescribeInstanceTypes"
      ],
      "Resource": "*"
    }
  ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added "Principal": {} under the Sid, but I have no idea what to put in the brackets. I changed it to "*" and that seemed to fix it, though I'm not sure that is the right thing to do. The next error is that all of the ec2 actions and s3:ListAllMyBuckets give "Unsupported Action in Policy". This is where I get lost, and I'm not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
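A note on why those errors appear: a bucket policy must name a Principal and only supports S3 actions that are scoped to that bucket, so the account-level s3:ListAllMyBuckets and every ec2 action are rejected as unsupported. Those permissions normally go into an IAM identity policy attached to the IAM user or role whose credentials Veeam uses; the bucket does not need to be public and does not need to be recreated. Below is a rough sketch of such an identity policy under that split; the bucket ARN is a placeholder, and only a few of the ec2 actions from the list above are repeated (the rest belong in the same statement).

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VeeamBucketAccess",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject", "s3:PutObject", "s3:GetObject", "s3:RestoreObject",
        "s3:ListBucket", "s3:AbortMultipartUpload", "s3:GetBucketVersioning",
        "s3:GetBucketLocation", "s3:GetBucketObjectLockConfiguration"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    },
    {
      "Sid": "VeeamAccountLevelAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "ec2:DescribeInstances", "ec2:DescribeVpcs", "ec2:DescribeSubnets",
        "ec2:RunInstances", "ec2:TerminateInstances"
      ],
      "Resource": "*"
    }
  ]
}
```

If a bucket policy is also required, it would carry only the bucket-scoped s3 actions above, with a "Principal" naming the ARN of that IAM user or role rather than "*".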
2 answers · 0 votes · 11 views · asked 4 months ago

AWS: Multipart upload results in 403 Forbidden error even though single part upload works fine

**CONTEXT:** In my app, I have a feature that allows the user to upload a video. I noticed that when users try to upload large videos, the upload sometimes fails. After a bit of research, I found out that for files larger than 100 MB I should use [multipart upload][1]. So I have been following this [tutorial][2] to implement multipart upload in my app, and I have reached **Stage Three**.

----------

**PART 1: The previous single-part upload works fine**

This is the implementation of a single-part upload using *pre-signed URLs*:

> BACKEND

```
var AWS = require("aws-sdk");

const REGION = "*************************"; //e.g. "us-east-1"
const BUCKET_NAME = "l****************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

var getVideoSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: BUCKET_NAME,
        Key: key,
        ContentType: "video/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};

exports.getVideoSignedUrl = getVideoSignedUrl;
```

> FRONTEND

```
export const getVideoPreSignedUrl = async () =>
  await axios.get("/api/profile/getVideoPreSignedURL");

export const uploadVideoFileToCloud = async (file) => {
  const { data: uploadConfig } = await getVideoPreSignedUrl();
  await axios.put(uploadConfig.url, file, {
    headers: {
      "Content-Type": file.type,
      "x-amz-acl": "public-read",
    },
    transformRequest: (data, headers) => {
      delete headers.common["Authorization"];
      return data;
    },
  });
};
```

----------

**PART 2: The multipart upload, which throws a 403 Forbidden error**

> BACKEND

```
var AWS = require("aws-sdk");

const REGION = "***********************"; //e.g. "us-east-1"
const BUCKET_NAME = "************************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

// ==========================================================
// Replacing getVideoSignedUrl with initiateMultipartUpload,
// which generates a presigned URL for every part
const initiateMultipartUpload = async (object_name) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    ContentType: "video/*",
    ACL: "public-read",
    Expires: 300,
  };
  const res = await s3.createMultipartUpload(params).promise();
  return res.UploadId;
};

const generatePresignedUrlsParts = async (object_name, number_of_parts) => {
  const upload_id = await initiateMultipartUpload(object_name);
  const baseParams = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    UploadId: upload_id,
  };

  const promises = [];
  for (let index = 0; index < number_of_parts; index++) {
    promises.push(
      s3.getSignedUrlPromise("uploadPart", {
        ...baseParams,
        PartNumber: index + 1,
      })
    );
  }
  const res = await Promise.all(promises);

  const signed_urls = {};
  res.map((signed_url, i) => {
    signed_urls[i] = signed_url;
  });
  return signed_urls;
};

exports.initiateMultipartUpload = initiateMultipartUpload;
exports.generatePresignedUrlsParts = generatePresignedUrlsParts;
```

> FRONTEND

This is where the error occurs. **See** `const resParts = await Promise.all(promises)`:

```
export const getMultiPartVideoUploadPresignedUrls = async (number_of_parts) => {
  const request_params = {
    params: {
      number_of_parts,
    },
  };
  return await axios.get(
    "/api/profile/get_multi_part_video_upload_presigned_urls",
    request_params
  );
};

// Using multipart upload
export const uploadVideoFileToCloud = async (video_file, dispatch) => {
  // Each chunk is 100 MB
  const FILE_CHUNK_SIZE = 100_000_000;
  let video_size = video_file.size;
  let video_size_in_mb = Math.floor(video_size / 1000000);
  const number_of_parts = Math.floor(video_size_in_mb / 100) + 1;

  const response = await getMultiPartVideoUploadPresignedUrls(number_of_parts);
  const urls = response.data;
  console.log("🚀 ~ file: profileActions.js ~ line 654 ~ uploadParts ~ urls", urls);

  // async function uploadParts(file: Buffer, urls: Record<number, string>) {
  //   const axios = Axios.create()
  //   delete axios.defaults.headers.put["Content-Type"];

  const keys = Object.keys(urls);
  const promises = [];
  for (const indexStr of keys) {
    const index = parseInt(indexStr);
    const start = index * FILE_CHUNK_SIZE;
    const end = (index + 1) * FILE_CHUNK_SIZE;
    const blob =
      index < keys.length ? video_file.slice(start, end) : video_file.slice(start);

    console.log("🚀 ~ file: profileActions.js ~ line 691 ~ uploadParts ~ urls[index]", urls[index]);
    console.log("🚀 ~ file: profileActions.js ~ line 682 ~ uploadParts ~ blob", blob);

    const upload_params = {
      headers: {
        "Content-Type": video_file.type,
        "x-amz-acl": "public-read",
      },
      transformRequest: (data, headers) => {
        delete headers.common["Authorization"];
        return data;
      },
    };
    const axios_request = axios.put(urls[index], blob, upload_params);
    promises.push(axios_request);
    console.log("🚀 ~ file: profileAction.helper.js ~ line 117 ~ uploadParts ~ promises", promises);
  }

  // Uploading video parts
  // This throws the 403 Forbidden error
  const resParts = await Promise.all(promises);

  // This never gets logged
  console.log("🚀 ~ file: profileAction.helper.js ~ line 124 ~ uploadParts ~ resParts", resParts);

  // return resParts.map((part, index) => ({
  //   ETag: (part as any).headers.etag,
  //   PartNumber: index + 1
  // }))
};
```

This is the error that's logged: [![PUT 403 forbidden error][3]][3]

----------

**PART 3: AWS bucket & CORS policy**

1. CORS policy:

```
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```

2. The bucket policy hasn't been changed since I created the bucket and is still empty by default:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Principal": {},
      "Effect": "Allow",
      "Action": [],
      "Resource": []
    }
  ]
}
```

[![Current bucket policy][4]][4]

So maybe I should add something here? I also have all of these **unchecked**: [![Bucket Permissions][5]][5]

----------

**NOTES:**

1. I tested multipart upload with files smaller and larger than 100 MB, and it always throws the 403 Forbidden error.
2. I don't understand why I would get a Forbidden error when the single-part upload works just fine. In other words, the upload is allowed, and if both the single-part and multipart uploads use the same credentials, that **Forbidden** error should not occur.
3. I have a piece of code that shows me the progress of the upload, and I can see the upload progressing. The error seems to occur **AFTER** the upload of **EACH PART** is done: [![Upload progress image 1][5]][5] [![Upload progress image 2][6]][6]

[1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
[2]: https://www.altostra.com/blog/multipart-uploads-with-s3-presigned-url
[3]: https://i.stack.imgur.com/1xCMz.png
[4]: https://i.stack.imgur.com/pz2pw.png
[5]: https://i.stack.imgur.com/OyqRp.png
[6]: https://i.stack.imgur.com/HzICz.png
[7]: https://i.stack.imgur.com/5W4IU.png
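One pattern worth checking (offered as a hypothesis, not a confirmed diagnosis): the presigned uploadPart URLs here are signed only for Bucket/Key/UploadId/PartNumber, yet the frontend sends an extra `x-amz-acl` header with every part. Any `x-amz-*` header that is not covered by the signature causes S3 to reject the request, and UploadPart does not accept an ACL at all (the ACL belongs on createMultipartUpload). Separately, after the parts succeed, completeMultipartUpload still has to be called with the collected ETags. The sketch below is not from the original post; `completeMultipartUpload` would run on the backend with the same `s3` client shown above.

```
// Sketch: upload each part without extra headers, collect the ETags, then
// finish the multipart upload on the backend.
const uploadParts = async (video_file, urls, FILE_CHUNK_SIZE) => {
  const keys = Object.keys(urls);
  const responses = await Promise.all(
    keys.map((indexStr) => {
      const index = parseInt(indexStr);
      const blob = video_file.slice(
        index * FILE_CHUNK_SIZE,
        (index + 1) * FILE_CHUNK_SIZE
      );
      // No Content-Type, no x-amz-acl: send only what the URL was signed for.
      return axios.put(urls[index], blob, {
        transformRequest: (data, headers) => {
          delete headers.common["Authorization"];
          return data;
        },
      });
    })
  );

  // S3 returns each part's ETag in a response header; these must be passed to
  // CompleteMultipartUpload to finish the object.
  return responses.map((res, index) => ({
    ETag: res.headers.etag,
    PartNumber: index + 1,
  }));
};

// Backend side (aws-sdk v2), given the parts array returned above:
// await s3.completeMultipartUpload({
//   Bucket: BUCKET_NAME,
//   Key: object_name,
//   UploadId: upload_id,
//   MultipartUpload: { Parts: parts },
// }).promise();
```

Note that the CORS configuration in Part 3 has an empty `ExposeHeaders` list, so the browser cannot read each part's ETag header until something like `"ExposeHeaders": ["ETag"]` is added.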
1 answer · 0 votes · 36 views · asked 5 months ago

S3 Intelligent-Tiering re-archiving clarification

We have objects in the Intelligent-Tiering storage class. The bucket had an Intelligent-Tiering configuration to move objects to the Archive Access and Deep Archive Access tiers. We removed this configuration, but not before objects were moved to these tiers. For objects in the Archive Access and Deep Archive Access tiers, we have issued restore object requests. I would expect the archiving behavior for *restored* objects to follow whatever the bucket's current config specifies, and in the absence of an Intelligent-Tiering configuration, I would expect the default behavior of objects being moved only to the Archive Instant Access tier. However, [the documentation on restoring objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-initiate-restore-object.html) is unclear:

> When you restore from the S3 Intelligent-Tiering Archive Access or Deep Archive Access tiers, the object transitions back into the S3 Intelligent-Tiering Frequent Access tier. The object **automatically transitions into the Archive Access tier after a minimum of 90 consecutive days** of no access. It moves into the **Deep Archive Access tier after a minimum of 180 consecutive days** of no access ...

This reads as if objects restored from these tiers will *always* go back to Archive or Deep Archive if not accessed, regardless of whether the bucket is only configured to move objects to Archive Instant Access. Is this actually the case? Or do restored objects obey the bucket's current config, as one would expect?
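Not an answer to the documentation question itself, but as a sanity check it can help to confirm that no archive configuration remains on the bucket before reasoning about what restored objects will do. The sketch below is not from the original post; it uses the aws-sdk v2 for Node.js as in the other snippets on this page, and the bucket name is a placeholder.

```
// Sketch: list any S3 Intelligent-Tiering archive configurations still
// attached to the bucket. An empty list means no Archive Access /
// Deep Archive Access opt-in remains.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

s3.listBucketIntelligentTieringConfigurations({ Bucket: "my-bucket" })
  .promise()
  .then((res) => {
    const configs = res.IntelligentTieringConfigurationList || [];
    if (configs.length === 0) {
      console.log("No archive configurations remain on this bucket.");
    } else {
      configs.forEach((c) => console.log(c.Id, c.Status, c.Tierings));
    }
  });
```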
1 answer · 0 votes · 13 views · asked 5 months ago

Invalid inventory retrieval date range

If I don't include InventoryRetrievalParameters in the initiateJob params for an inventory-retrieval job, I get my jobId. If I do include InventoryRetrievalParameters, I get a date range error:

```
code: "InvalidParameterValueException"
message: "Invalid inventory retrieval date range: InventoryRetrievalJobInput::StartDate=2020-03-17T14:21:48.776Z"
name: "InvalidParameterValueException"
requestId: "4y-oJW0GsHlp0o5dTfPQJdzkB6qi3lO4iq7e88eIpTFpDME"
retryable: false
retryDelay: 60.730553151816856
stack: "InvalidParameterValueException: Invalid inventory retrieval date range: InventoryRetrievalJobInput::StartDate=2020-03-17T14:21:48.776Z
    at Object.extractError (c:\source\repos\Glacier\node_modules\aws-sdk\lib\protocol\json.js:51:27)
    at Request.extractError (c:\source\repos\Glacier\node_modules\aws-sdk\lib\protocol\rest_json.js:55:8)
    at Request.callListeners (c:\source\repos\Glacier\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
    at Request.emit (c:\source\repos\Glacier\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
    at Request.emit (c:\source\repos\Glacier\node_modules\aws-sdk\lib\request.js:683:14)
    at Request.transition (c:\source\repos\Glacier\node_modules\aws-sdk\lib\request.js:22:10)
    at AcceptorStateMachine.runTo (c:\source\repos\Glacier\node_modules\aws-sdk\lib\state_machine.js:14:12)
    at c:\source\repos\Glacier\node_modules\aws-sdk\lib\state_machine.js:26:10
    at Request.<anonymous> (c:\source\repos\Glacier\node_modules\aws-sdk\lib\request.js:38:9)
    at ...
statusCode: 400
```

```
const initiateJob = (params) =>
  new Promise((resolve, reject) => {
    glacier.initiateJob(params, function (err, data) {
      if (err) {
        console.log(err, err.stack);
        reject(err);
      } else {
        // resolve with the jobId
        resolve(data);
      }
    });
  });

var params = {
  accountId: "-",
  jobParameters: {
    Description: `GET INVENTORY`,
    Format: 'JSON',
    Type: 'inventory-retrieval',
    // InventoryRetrievalParameters: {
    //   StartDate: '2020-03-17T14:21:48.776Z',
    //   EndDate: '2020-04-04T04:59:31.958Z',
    //   Limit: '10000'
    // }
  },
  vaultName: hidden,
};

var response = await initiateJob(params);
if (response) {
  console.log(response);
}
```

Edited by: gregbolog on Apr 14, 2020 8:25 AM
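One thing that may be worth checking (an assumption, not a confirmed cause): the requested StartDate/EndDate must fall within the window for which the vault actually has inventory data, so reading the vault's LastInventoryDate first and keeping the range at or before it may avoid the error. The sketch below is not from the original post; it uses the aws-sdk v2 Glacier client, and `vaultName` is the same placeholder used in the question.

```
// Sketch: look up the vault's last inventory date, then request the inventory
// with a range that ends at that date rather than at an arbitrary timestamp.
const describeVault = (vaultName) =>
  glacier.describeVault({ accountId: "-", vaultName }).promise();

const initiateInventoryJob = async (vaultName) => {
  const vault = await describeVault(vaultName);
  // LastInventoryDate is an ISO-8601 string, e.g. "2020-04-01T03:12:45.000Z"
  const params = {
    accountId: "-",
    vaultName,
    jobParameters: {
      Type: "inventory-retrieval",
      Format: "JSON",
      Description: "GET INVENTORY",
      InventoryRetrievalParameters: {
        EndDate: vault.LastInventoryDate,
        Limit: "10000",
      },
    },
  };
  return glacier.initiateJob(params).promise();
};
```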
1 answer · 0 votes · 0 views · asked 2 years ago

HELP with uploading large files to Glacier

Hi, I don't know why Glacier is so convoluted and tricky to manage. Is this part of the reason why it's so cheap? I'm merely trying to upload some large VMDKs to Glacier. I've been trawling AWS documentation for days and I literally have 50 tabs open detailing, exactly, nothing. All the progress I've made thus far (just creating a vault) was done with the help of information from other obscure blogs and YouTube videos. For example, on https://docs.aws.amazon.com/cli/latest/userguide/cli-services-glacier.html#cli-services-glacier-initiate there is a heading "Preparing a File", but it is immediately followed by "Create a file". So which is it: preparing an EXISTING file or creating a NEW file? Is this step even required? Why is this so convoluted? Then, from the looks of it, I need to use some ancient, buggy Windows XP program to split the files into chunks before uploading? Are you kidding me? It already took the best part of a day to export this large VMDK. Now I have to spend another day merely splitting it into chunks (if I have enough HDD space, that is), and then I have to make sure I don't make any mistakes in the CLI commands that follow by correctly stating the starting/ending bytes FOR EACH CHUNK. Then another day for uploading, and another day to reassemble it? Again, are you kidding me? If I have a 100 GB file, how many chunks will that result in? I have to address EACH chunk with its own special little line of code. Absolutely bonkers. I'm on CBT Nuggets and TrainSignal, and neither has any videos covering Glacier. Does anyone know of other material that will help me grasp what exactly I need to do to upload large files to Glacier? I know there are third-party clients available, but I'd like to understand how to do this from the command line. Thanks for reading.

Edited by: fnanfne on Sep 17, 2019 4:19 AM
Edited by: fnanfne on Sep 17, 2019 8:53 AM
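If the goal is simply cheap archival rather than using Glacier vaults specifically, one commonly suggested alternative (offered here as a sketch, not as the officially recommended path) is to upload through S3 into the GLACIER storage class: the SDK's managed uploader splits large files into parts automatically, so there is no manual chunking or byte-range bookkeeping. The snippet below uses the aws-sdk v2 for Node.js, in keeping with the other examples on this page; the bucket and file names are placeholders.

```
// Sketch: stream a large local file into S3 with the GLACIER storage class.
// The managed uploader (s3.upload) handles multipart splitting and retries.
const fs = require("fs");
const AWS = require("aws-sdk");
const s3 = new AWS.S3(); // region/credentials from the environment

const archiveFile = (localPath, bucket, key) =>
  s3
    .upload(
      {
        Bucket: bucket,
        Key: key,
        Body: fs.createReadStream(localPath),
        StorageClass: "GLACIER", // or "DEEP_ARCHIVE" for colder storage
      },
      { partSize: 100 * 1024 * 1024, queueSize: 4 } // 100 MB parts, 4 in flight
    )
    .promise();

archiveFile("./server01.vmdk", "my-archive-bucket", "vmdks/server01.vmdk")
  .then((res) => console.log("Uploaded to", res.Location));
```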
3 answers · 0 votes · 6 views · asked 3 years ago