
Questions tagged with Amazon Simple Storage Service


Metadata service is unstable: connection timeouts, "Failed to connect to service endpoint", etc.

Recently, our long-running jobs have started hitting metadata issues frequently. The exceptions vary, but they all point to the EC2 metadata service: either a failure to connect to the endpoint, a connection timeout, or a complaint that a region must be specified when building the client. The job runs on EMR 6.0.0 in Tokyo with the correct role attached, and it had been running fine for months before it recently became unstable. So my question is: how can we monitor the health of the metadata service (request QPS, success rate, etc.)?

A few call stacks:

```
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [com.amazon.ws.emr.hadoop.fs.guice.UserGroupMappingAWSSessionCredentialsProvider@4a27ee0d: null, com.amazon.ws.emr.hadoop.fs.HadoopConfigurationAWSCredentialsProvider@76659c17: null, com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.auth.InstanceProfileCredentialsProvider@5c05c23d: Failed to connect to service endpoint: ]
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
```

```
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:462)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:424)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
```

```
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.s3.ap-northeast-1.amazonaws.com
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1189) ~[aws-java-sdk-bundle-1.11.711.jar:?]
Caused by: java.net.UnknownHostException: mybucket.s3.ap-northeast-1.amazonaws.com
    at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_242]
    at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_242]
    at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_242]
```

```
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.SdkClientException: Failed to connect to service endpoint:
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
```
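There is no ready-made CloudWatch metric for instance metadata service (IMDS) health, so one rough option is to probe it from the affected nodes and graph the results yourself. Below is a minimal sketch, assuming IMDSv1-style requests (no session token) and an illustrative polling interval; the metadata path and interval are placeholders.

```python
#!/usr/bin/env python3
"""Rough IMDS health probe: logs success rate and latency of metadata requests."""
import time
import urllib.error
import urllib.request

IMDS_URL = "http://169.254.169.254/latest/meta-data/instance-id"  # any cheap metadata path works
INTERVAL_SECONDS = 5  # illustrative
ok = fail = 0

while True:
    start = time.time()
    try:
        with urllib.request.urlopen(IMDS_URL, timeout=2) as resp:
            resp.read()
        ok += 1
    except (urllib.error.URLError, OSError):
        fail += 1
    elapsed_ms = (time.time() - start) * 1000
    total = ok + fail
    print(f"ok={ok} fail={fail} success_rate={ok / total:.2%} last_latency_ms={elapsed_ms:.0f}")
    time.sleep(INTERVAL_SECONDS)
```

The same numbers could be pushed to CloudWatch as custom metrics (for example with `put_metric_data`) if alarms are preferable to log lines. Note that IMDS requests are throttled per instance, so very heavy parallel credential lookups on a busy EMR node can themselves contribute to failures.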
0 answers · 0 votes · 1 view · Hubery · asked 19 hours ago

Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and ran into an issue. I am working with Veeam to add an S3 Glacier tier to my backup, and the bucket is already created. I need to add the following to my bucket policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject", "s3:PutObject", "s3:GetObject", "s3:RestoreObject",
        "s3:ListBucket", "s3:AbortMultipartUpload", "s3:GetBucketVersioning",
        "s3:ListAllMyBuckets", "s3:GetBucketLocation", "s3:GetBucketObjectLockConfiguration",
        "ec2:DescribeInstances", "ec2:CreateKeyPair", "ec2:DescribeKeyPairs",
        "ec2:RunInstances", "ec2:DeleteKeyPair", "ec2:DescribeVpcAttribute",
        "ec2:CreateTags", "ec2:DescribeSubnets", "ec2:TerminateInstances",
        "ec2:DescribeSecurityGroups", "ec2:DescribeImages", "ec2:DescribeVpcs",
        "ec2:CreateVpc", "ec2:CreateSubnet", "ec2:DescribeAvailabilityZones",
        "ec2:CreateRoute", "ec2:CreateInternetGateway", "ec2:AttachInternetGateway",
        "ec2:ModifyVpcAttribute", "ec2:CreateSecurityGroup", "ec2:DeleteSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress", "ec2:AuthorizeSecurityGroupEgress",
        "ec2:DescribeRouteTables", "ec2:DescribeInstanceTypes"
      ],
      "Resource": "*"
    }
  ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added `"Principal": {},` under the Sid. I had no idea what to put in the brackets; I changed it to `"*"` and that seemed to fix it, but I'm not sure that's the right thing to do. The next error is that all of the EC2 actions and `s3:ListAllMyBuckets` give me an "Unsupported Action in Policy" error. This is where I get lost and I'm not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
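For context on the two editor errors: a bucket policy requires a Principal element, and it only accepts S3 actions that operate on the bucket or its objects; account-level actions such as s3:ListAllMyBuckets and all ec2:* actions can only live in an IAM identity policy attached to the user or role that Veeam authenticates with. A minimal sketch of that split, using a hypothetical bucket name and IAM user ARN:

```python
import json
import boto3

BUCKET = "my-veeam-bucket"                                        # hypothetical bucket name
VEEAM_PRINCIPAL = "arn:aws:iam::111122223333:user/veeam-backup"   # hypothetical IAM user

# Bucket policy: S3 bucket/object actions only, with an explicit Principal.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VeeamBucketAccess",
        "Effect": "Allow",
        "Principal": {"AWS": VEEAM_PRINCIPAL},
        "Action": [
            "s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:RestoreObject",
            "s3:AbortMultipartUpload", "s3:ListBucket", "s3:GetBucketVersioning",
            "s3:GetBucketLocation", "s3:GetBucketObjectLockConfiguration"
        ],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))

# Account-level and EC2 actions go into an IAM policy on the Veeam user/role instead.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListAllMyBuckets", "ec2:Describe*", "ec2:RunInstances"],  # trim to what Veeam actually needs
        "Resource": "*",
    }],
}
boto3.client("iam").put_user_policy(
    UserName="veeam-backup",                 # hypothetical user name
    PolicyName="veeam-ec2-and-list",
    PolicyDocument=json.dumps(identity_policy),
)
```

Veeam's own documentation lists the exact S3 and EC2 permissions it needs, so the action lists above are placeholders rather than a recommendation.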
2 answers · 0 votes · 5 views · amatuerAWSguy · asked 2 days ago

CloudFront Multiple Distributions Automatic Directs

Hello, I have a question. I have two CloudFront distributions with two different certificates / domains that point to the same S3 bucket:

* The main distribution is 123456789.cloudfront.net, with alternate domain + certificate: main.mydomain.com
* The second distribution is 987654321.cloudfront.net, with alternate domain + certificate: sub1.otherdomain.com

On DNS (I use Cloudflare) I have a CNAME for the main distribution's domain (main.mydomain.com → 123456789.cloudfront.net), and I add other subdomains pointing at that CNAME for easier management, since I have many subdomains (sub1.mydomain.com → main.mydomain.com). But I also point a subdomain from the other domain at it, again for management reasons and because of some hardcoded links, **so I can't point it to its own distribution**: sub1.otherdomain.com → main.mydomain.com.

In theory I would need a **CloudFront Function to redirect sub1.otherdomain.com to its own distribution (987654321.cloudfront.net)**, but it works without it and I don't know why (it shouldn't, or there is some universal property of CloudFront I'm not aware of), because **there is no pointing / redirect from the first distribution to the second one** configured, the **only DNS record pointing to CloudFront is main.mydomain.com** (CNAME to 123456789.cloudfront.net), and the **certificates are different**. Is this expected? I need to be sure, to avoid headaches in the future with production traffic.
1 answer · 0 votes · 6 views · Emerson Junior · asked 6 days ago

JS SDK createPresignedPost returns error Cannot read properties of undefined (reading 'endsWith')

I have the following code in my Node server:

```js
const { S3Client } = require('@aws-sdk/client-s3');
const { createPresignedPost } = require('@aws-sdk/s3-presigned-post');

router.post('/sign-s3', async (req, res, next) => {
  const { name, type } = req.body;
  const client = new S3Client({ region: 'eu-central-1' });
  const params = {
    Bucket: process.env.S3_BUCKET_NAME,
    Expires: 60,
    Conditions: [
      ['content-length-range', 100, 5242880],
      { 'Content-Type': 'image/jpeg' },
    ],
    Fields: {
      key: `blog/${name}`,
      'Content-Type': type,
      success_action_status: '201',
    },
  };
  try {
    const data = await createPresignedPost(client, params);
    return res.json(data);
  } catch (err) {
    return next({ status: 500, message: err.message });
  }
});
```

This route returns the following: `Cannot read properties of undefined (reading 'endsWith')`. The error is not really helpful. I've tried the following possible solutions:

* Checking whether AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are available from process.env
* Passing credentials into the S3Client, like so:

```js
credentials: {
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
}
```

* Removing credentials from .env and the S3Client to narrow down the issue — it does throw a credentials error if both are removed; however, if I pass an empty credentials object into the S3Client, I still get the `Cannot read properties of undefined (reading 'endsWith')` error. It's almost like there's no credentials validation? I also tried passing completely wrong credentials.
* Passing credentials DIRECTLY into the credentials object (omitting .env altogether)
* Uninstalling and reinstalling the SDK
* Removing params and leaving only the bucket name

Nothing works. Please help.
0 answers · 0 votes · 2 views · AWS-User-1685651 · asked 9 days ago

S3 EventBridge events have null values for VersionId. Is this a bug?

When working with Lambda functions that handle EventBridge events from an S3 bucket with versioning enabled, I find that the VersionId field of the AWSEvent object always shows a null value instead of the true value. For example, here is the AWSEvent that uses the aws.s3@ObjectDeleted schema. This was the event payload delivered to my Lambda function when I deleted an object from a bucket that had versioning enabled.

Note that $.object.versionId is null, but when I look in the bucket I see unique Version ID values for both the original cat pic "BeardCat.jpg" and its delete marker. I also found the same problem in the AWSEvent for an aws.s3@ObjectCreated event. There should have been a non-null VersionId in both the ObjectCreated and the ObjectDeleted event. Have I found a bug?

Note: where you see 'xxxx' or 'XXXXXXXXX' I have simply redacted AWS account numbers and S3 bucket names for privacy reasons.

```
{
  detail: class ObjectDeleted {
    bucket: class Bucket {
      name: tails-dev-images-xxxx
    }
    object: class Object {
      etag: d41d8cd98f00b204e9800998ecf8427e
      key: BeardCat.jpg
      sequencer: 0061CDD784B140A4CB
      versionId: null
    }
    deletionType: null
    reason: DeleteObject
    requestId: null
    requester: XXXXXXXXX
    sourceIpAddress: null
    version: 0
  }
  detailType: null
  resources: [arn:aws:s3:::tails-dev-images-xxxx]
  id: 82b7602e-a2fe-cffb-67c8-73b4c8753f5f
  source: aws.s3
  time: Thu Dec 30 16:00:04 UTC 2021
  region: us-east-2
  version: 0
  account: XXXXXXXXXX
}
```
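Until it's clear whether the null versionId is expected, one workaround sketch (not a fix for the event itself) is to ask S3 directly, from inside the handler, for the versions and delete markers recorded for that key. This assumes the function's role is allowed s3:ListBucketVersions on the bucket; the field access below follows the raw EventBridge event shape.

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Cross-check version IDs for the object named in an S3 EventBridge event."""
    bucket = event["detail"]["bucket"]["name"]
    key = event["detail"]["object"]["key"]

    # List the versions and delete markers that S3 has actually recorded for this key.
    resp = s3.list_object_versions(Bucket=bucket, Prefix=key)
    versions = [v["VersionId"] for v in resp.get("Versions", []) if v["Key"] == key]
    markers = [m["VersionId"] for m in resp.get("DeleteMarkers", []) if m["Key"] == key]

    print("raw event object detail:", event["detail"]["object"])
    print(f"S3 reports versions={versions} delete_markers={markers}")
    return {"versions": versions, "delete_markers": markers}
```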
2 answers · 0 votes · 7 views · TheSpunicorn · asked 17 days ago

How to upload video files using a REST API after receiving an "upload URL"

I'm working with ShotGrid (an Autodesk service), which makes it possible to upload media to their S3 buckets. The basic idea: the developer sends a request to ShotGrid for an AWS S3 "upload URL". [ShotGrid's upload documentation](https://developer.shotgridsoftware.com/rest-api/?shell#requesting-an-upload-url) explains how to request the "upload URL", and that seems to work fine, but there's no documentation explaining how to actually execute the upload after receiving it. So far I'm getting errors, the most promising of which shows "SignatureDoesNotMatch / The request signature we calculated does not match the signature you provided. Check your key and signing method." More detail below.

I've tried the following. The request for the 'upload URL' is:

```
curl -X GET 'https://myshow.shotgrid.autodesk.com/api/v1/entity/Version/{VersionId}/_upload?filename={FileName}' \
  -H 'Authorization: Bearer {BearerToken}' \
  -H 'Accept: application/json'
```

The result is:

```
{
  "UrlRequest": {
    "data": {
      "timestamp": "[timestamp]",
      "upload_type": "Attachment",
      "upload_id": null,
      "storage_service": "s3",
      "original_filename": "[FileName]",
      "multipart_upload": false
    },
    "links": {
      "upload": "https://[s3domain].amazonaws.com/[longstring1]/[longstring2]/[FileName]?X-Amz-Algorithm=[Alg]&X-Amz-Credential=[Creds]&X-Amz-Date=[Date]&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Security-Token=[Token]&X-Amz-Signature=[Signature]",
      "complete_upload": "/api/v1/entity/versions/{VersionId}/_upload"
    }
  }
}
```

Then the upload request:

```
curl -X PUT -H 'x-amz-signature=[Signature-See-Above]' -d '@/Volumes/Path/To/Upload/Media' 'https://[uploadUrlFromAbove]'
```

And I get the following error:

```
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
</Error>
```
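Two details that often matter with presigned URLs of this shape: the signature already lives in the query string (X-Amz-Signature, with X-Amz-SignedHeaders=host), so no extra signing header should normally be sent with the PUT; and curl's -d flag is meant for form data (it strips newlines from @file input), so --data-binary or a plain PUT from code is safer when the body has to match byte for byte. A minimal sketch of that kind of upload, with the URL and path obviously placeholders:

```python
import requests  # third-party: pip install requests

# Placeholders: paste the full presigned URL exactly as returned by ShotGrid.
upload_url = "https://[s3domain].amazonaws.com/...?X-Amz-Algorithm=...&X-Amz-Signature=..."
media_path = "/Volumes/Path/To/Upload/Media"

with open(media_path, "rb") as f:
    # No Authorization or x-amz-signature headers: the query string is the signature.
    resp = requests.put(upload_url, data=f)

print(resp.status_code)
print(resp.text)  # S3 returns an XML error body if something is still wrong
```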
3 answers · 0 votes · 8 views · Trln · asked 19 days ago

AWS: Multipart upload results in 403 Forbidden error even though single part upload works fine

**CONTEXT:** In my app, I have a feature that allows the user to upload a video. I noticed that when users try to upload large videos, the upload sometimes fails. After a bit of research, I found out that for files larger than 100 MB I should use [multipart upload][1]. So I have been following this [tutorial][2] to implement multipart upload in my app, and I have reached **Stage Three**.

**PART 1: The previous single part upload works fine**

This is the implementation of a single part upload using *pre-signed URLs*:

> BACKEND

```js
var AWS = require("aws-sdk");

const REGION = "*************************"; //e.g. "us-east-1"
const BUCKET_NAME = "l****************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

var getVideoSignedUrl = async function (key) {
  return new Promise((resolve, reject) => {
    s3.getSignedUrl(
      "putObject",
      {
        Bucket: BUCKET_NAME,
        Key: key,
        ContentType: "video/*",
        ACL: "public-read",
        Expires: 300,
      },
      (err, url) => {
        if (err) {
          reject(err);
        } else {
          resolve(url);
        }
      }
    );
  });
};

exports.getVideoSignedUrl = getVideoSignedUrl;
```

> FRONTEND

```js
export const getVideoPreSignedUrl = async () =>
  await axios.get("/api/profile/getVideoPreSignedURL");

export const uploadVideoFileToCloud = async (file) => {
  const { data: uploadConfig } = await getVideoPreSignedUrl();
  await axios.put(uploadConfig.url, file, {
    headers: {
      "Content-Type": file.type,
      "x-amz-acl": "public-read",
    },
    transformRequest: (data, headers) => {
      delete headers.common["Authorization"];
      return data;
    },
  });
};
```

**PART 2: The multipart upload, which throws the 403 Forbidden error**

> BACKEND

```js
var AWS = require("aws-sdk");

const REGION = "***********************"; //e.g. "us-east-1"
const BUCKET_NAME = "************************";

AWS.config.update({ region: REGION });

const s3 = new AWS.S3({
  signatureVersion: "v4",
  apiVersion: "2006-03-01",
});

// ==========================================================
// Replacing getVideoSignedUrl with initiateMultipartUpload
// That would generate a presigned url for every part
const initiateMultipartUpload = async (object_name) => {
  const params = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    ContentType: "video/*",
    ACL: "public-read",
    Expires: 300,
  };
  const res = await s3.createMultipartUpload(params).promise();
  return res.UploadId;
};

const generatePresignedUrlsParts = async (object_name, number_of_parts) => {
  const upload_id = await initiateMultipartUpload(object_name);
  const baseParams = {
    Bucket: BUCKET_NAME,
    Key: object_name,
    UploadId: upload_id,
  };
  const promises = [];
  for (let index = 0; index < number_of_parts; index++) {
    promises.push(
      s3.getSignedUrlPromise("uploadPart", {
        ...baseParams,
        PartNumber: index + 1,
      })
    );
  }
  const res = await Promise.all(promises);
  const signed_urls = {};
  res.map((signed_url, i) => {
    signed_urls[i] = signed_url;
  });
  return signed_urls;
};

exports.initiateMultipartUpload = initiateMultipartUpload;
exports.generatePresignedUrlsParts = generatePresignedUrlsParts;
```

> FRONTEND

This is where the error occurs. **See** `const resParts = await Promise.all(promises)`:

```js
export const getMultiPartVideoUploadPresignedUrls = async (number_of_parts) => {
  const request_params = {
    params: {
      number_of_parts,
    },
  };
  return await axios.get(
    "/api/profile/get_multi_part_video_upload_presigned_urls",
    request_params
  );
};

// Using multipart upload
export const uploadVideoFileToCloud = async (video_file, dispatch) => {
  // Each chunk is 100Mb
  const FILE_CHUNK_SIZE = 100_000_000;
  let video_size = video_file.size;
  let video_size_in_mb = Math.floor(video_size / 1000000);
  const number_of_parts = Math.floor(video_size_in_mb / 100) + 1;

  const response = await getMultiPartVideoUploadPresignedUrls(number_of_parts);
  const urls = response.data;
  console.log(
    "🚀 ~ file: profileActions.js ~ line 654 ~ uploadParts ~ urls",
    urls
  );

  // async function uploadParts(file: Buffer, urls: Record<number, string>) {
  // const axios = Axios.create()
  // delete axios.defaults.headers.put["Content-Type"];

  const keys = Object.keys(urls);
  const promises = [];
  for (const indexStr of keys) {
    const index = parseInt(indexStr);
    const start = index * FILE_CHUNK_SIZE;
    const end = (index + 1) * FILE_CHUNK_SIZE;
    const blob =
      index < keys.length ? video_file.slice(start, end) : video_file.slice(start);
    console.log(
      "🚀 ~ file: profileActions.js ~ line 691 ~ uploadParts ~ urls[index]",
      urls[index]
    );
    console.log(
      "🚀 ~ file: profileActions.js ~ line 682 ~ uploadParts ~ blob",
      blob
    );
    const upload_params = {
      headers: {
        "Content-Type": video_file.type,
        "x-amz-acl": "public-read",
      },
      transformRequest: (data, headers) => {
        delete headers.common["Authorization"];
        return data;
      },
    };
    const axios_request = axios.put(urls[index], blob, upload_params);
    promises.push(axios_request);
    console.log(
      "🚀 ~ file: profileAction.helper.js ~ line 117 ~ uploadParts ~ promises",
      promises
    );
  }

  // Uploading video parts
  // This throws the 403 forbidden error
  const resParts = await Promise.all(promises);

  // This never gets logged
  console.log(
    "🚀 ~ file: profileAction.helper.js ~ line 124 ~ uploadParts ~ resParts",
    resParts
  );

  // return resParts.map((part, index) => ({
  //   ETag: (part as any).headers.etag,
  //   PartNumber: index + 1
  // }))
};
```

This is the error that's logged: [![PUT 403 forbidden error][3]][3]

**PART 3: AWS bucket & CORS policy**

1. CORS policy:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST", "GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
```

2. The bucket policy hasn't been changed since I created the bucket and it's still empty by default:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Principal": {},
      "Effect": "Allow",
      "Action": [],
      "Resource": []
    }
  ]
}
```

[![Current bucket policy][4]][4]

So maybe I should add something here? I also have all of these **unchecked**: [![Bucket Permissions][5]][5]

**NOTES:**

1. I tested multipart upload with files smaller and larger than 100 MB, and it always throws the 403 Forbidden error.
2. I don't understand why I would get a Forbidden error when the single part upload works just fine. In other words, the upload is allowed, and if both single part and multipart upload use the same credentials, that **forbidden** error should not occur.
3. I have a piece of code that shows me the progress of the upload, and I see the upload progressing. The error seems to occur **AFTER** the upload of **EACH PART** is done: [![Upload progress image 1][5]][5] [![Upload progress image 2][6]][6]

[1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
[2]: https://www.altostra.com/blog/multipart-uploads-with-s3-presigned-url
[3]: https://i.stack.imgur.com/1xCMz.png
[4]: https://i.stack.imgur.com/pz2pw.png
[5]: https://i.stack.imgur.com/OyqRp.png
[6]: https://i.stack.imgur.com/HzICz.png
[7]: https://i.stack.imgur.com/5W4IU.png
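For reference, the server-side flow the tutorial automates maps onto three S3 calls. Here is a minimal Python/boto3 sketch (bucket, key and part count are placeholders), shown mainly to make explicit that every part's ETag has to be fed back into complete_multipart_upload at the end, and that ACL/ContentType settings belong on the create call rather than on the individual part PUTs:

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-video-bucket", "videos/example.mp4"  # placeholders

# 1) Start the multipart upload (ACL/ContentType belong here, not on the parts).
upload_id = s3.create_multipart_upload(
    Bucket=BUCKET, Key=KEY, ContentType="video/mp4"
)["UploadId"]

# 2) Presign one URL per part; the client PUTs raw bytes to each URL.
part_urls = [
    s3.generate_presigned_url(
        "upload_part",
        Params={"Bucket": BUCKET, "Key": KEY, "UploadId": upload_id, "PartNumber": n},
        ExpiresIn=900,
    )
    for n in range(1, 4)  # e.g. 3 parts
]

# 3) After the client uploads, collect each part's ETag (from the PUT response headers)
#    and complete the upload; until this call succeeds the object does not exist.
parts = [{"PartNumber": n, "ETag": "<etag-from-part-n>"} for n in range(1, 4)]  # placeholders
s3.complete_multipart_upload(
    Bucket=BUCKET, Key=KEY, UploadId=upload_id, MultipartUpload={"Parts": parts}
)
```

One more constraint worth keeping in mind: every part except the last must be at least 5 MB, otherwise the final complete_multipart_upload call fails with EntityTooSmall.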
1 answer · 0 votes · 5 views · AWS-User-9169178 · asked 20 days ago

S3 bucket permissions to run CloudFormation from different accounts and create Lambda Functions

Not sure what I am missing, but I keep getting permission denied errors when I launch CloudFormation using the HTTPS URL. Here are the details.

I have an S3 bucket "mys3bucket" in Account A. In this bucket, I have a CloudFormation template stored at s3://mys3bucket/project1/mycft.yml. The bucket is in us-east-1 and uses S3 server-side encryption with an S3-managed key (not KMS). For this bucket, I have disabled ACLs; the bucket and all objects are private, but I have added a bucket policy as below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_B_NUMBER:root"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mys3bucket",
        "arn:aws:s3:::mys3bucket/project1/*"
      ]
    }
  ]
}
```

Now I log in to Account B → CloudFormation → Create new stack → Template is ready → Amazon S3 URL, and I enter the object path to my template in this format: https://mys3bucket.s3.amazonaws.com/project1/mycft.yml

When I click Next, I get the following message on the same page as a banner in red:

S3 error: Access Denied. For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html

Also, just for your information, I am able to list and copy the object from Account B if I use Cloud9 and run:

```
aws s3 ls s3://mys3bucket/project1/mycft.yml
aws s3 cp s3://mys3bucket/project1/mycft.yml .
```

What am I missing? (I think this should work even when the bucket is private, as long as the bucket policy allows cross-account access.) Does this use case require my bucket to be hosted as a static website?
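One way to take the console out of the picture is to drive the same flow from the API with Account B credentials. A short sketch with a hypothetical stack name; validate_template should exercise the same template fetch without creating anything, which helps narrow down whether the Access Denied really comes from reading the template object:

```python
import boto3

# Run with Account B credentials.
cfn = boto3.client("cloudformation", region_name="us-east-1")
template_url = "https://mys3bucket.s3.us-east-1.amazonaws.com/project1/mycft.yml"

# Cheap check: fetches and parses the template, creates nothing.
print(cfn.validate_template(TemplateURL=template_url))

# Hypothetical stack name; Capabilities only needed if the template creates IAM resources.
resp = cfn.create_stack(
    StackName="project1-test-stack",
    TemplateURL=template_url,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
print(resp["StackId"])
```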
2 answers · 0 votes · 8 views · Alexa · asked 21 days ago

Best way to set up a bucket with access points?

Hello,

As part of a SaaS solution, I'm currently setting up the structure for an S3 bucket which will contain multiple clients' data. The idea is to use one access point per client in order to isolate the different clients' data. To be clear, the data is not made accessible to the client (not directly, at least); the bucket is only used to absorb data for processing and analysis purposes. This data is saved into different folders depending on the source type, so for example within a given access point one could have /images/, /logs/, etc.

However, I'm unsure whether I should add extra partitioning on top of that, for a few reasons. One is file collision: suppose access point A has a file /images/tree.png and then access point B tries to add a file with the same path — how is the collision handled? That could be solved with something like a hash suffix, but I'd still like to know what would happen.

Then there is the question of scalability. This is not an issue per se, but I'm trying to think about what could happen in the future. It seems to me that having an extra partition on top of the access point would make any future migration or refactoring easier. My solution would be to add the organisation id as a prefix: each access point would only have access (through its policy) to files in a specific subdirectory, like /12345/*. However, this means that callers of the access point need to add that prefix too, which adds an extra step for everything pushing data through the access point, instead of using the access point as if it were a bucket directly.

I'm not sure which way to go, whether I'm overcomplicating things, or whether there is a simpler solution, hence my question. Any advice would be greatly appreciated!
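On the collision point, it may help to remember that all access points on a bucket share the bucket's single flat keyspace, so /images/tree.png written through access point A and through access point B is the same object, and without versioning the second write simply overwrites the first; a per-client prefix is what avoids that. A small sketch of writing through an access point with an organisation-id prefix, using hypothetical ARNs and IDs:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical access point ARN for one client and that client's organisation id.
ACCESS_POINT_ARN = "arn:aws:s3:eu-west-1:111122223333:accesspoint/client-a"
ORG_ID = "12345"

def put_client_object(local_path: str, key: str) -> None:
    """Upload through the client's access point, always under the org-id prefix."""
    prefixed_key = f"{ORG_ID}/{key}"  # e.g. 12345/images/tree.png
    with open(local_path, "rb") as body:
        # A recent SDK accepts an access point ARN wherever a bucket name is expected.
        s3.put_object(Bucket=ACCESS_POINT_ARN, Key=prefixed_key, Body=body)

put_client_object("tree.png", "images/tree.png")
```

The access point policy can then be scoped to `object/12345/*` so that even a caller who omits the prefix cannot land objects in another client's space.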
1 answer · 0 votes · 11 views · AWS-User-7072076 · asked a month ago

Empty bucket not really empty?

Hi! We are using this particular bucket as temporary storage: one service stores big (1-500 GB) dump files there, and another scheduled daily job checks whether the bucket contains any files and retrieves them (the files are removed from the bucket afterwards). It's really low-frequency traffic; there shouldn't be more than a couple of these files per day, and there are days with no traffic at all. Versioning is off, of course.

And this is the problem: the bucket has been empty for days now, but if I look at the bucket metrics, CloudWatch claims there is around 1 TB of data there in more than 65K files. I understand that sometimes operations are just scheduled and do not happen immediately, but a week seems like enough time to delete a couple of files.

What have I tried so far?

1. I read somewhere that these files could be remnants of failed uploads, which are not shown because they are not complete files, and that they could be cleared out by turning versioning on with a lifecycle rule that deletes everything. I tried it, and it didn't change anything.
2. I deleted the bucket completely — which shouldn't even work if the bucket isn't empty in the first place (as far as I understand). I created a new bucket with the same name after a day and, lo and behold, the total bucket size is 1018 GB according to the metrics.

If there are still files in this bucket, how do I see/remove them from this limbo? If that's not the case, how do I stop CloudWatch from giving me false information? Why pay for "nothing"? Cheers
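If the suspicion about failed uploads is right, the leftovers would be incomplete multipart uploads, which a normal object listing does not show but which can be listed and aborted explicitly. A small sketch with a placeholder bucket name, assuming s3:ListBucketMultipartUploads and s3:AbortMultipartUpload permissions (a lifecycle rule with AbortIncompleteMultipartUpload achieves the same cleanup automatically):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-temporary-dump-bucket"  # placeholder

# List multipart uploads that were started but never completed or aborted.
paginator = s3.get_paginator("list_multipart_uploads")
for page in paginator.paginate(Bucket=BUCKET):
    for upload in page.get("Uploads", []):
        print("incomplete:", upload["Key"], upload["UploadId"], upload["Initiated"])
        # Uncomment to actually free the storage held by this upload:
        # s3.abort_multipart_upload(Bucket=BUCKET, Key=upload["Key"], UploadId=upload["UploadId"])
```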
2 answers · 0 votes · 7 views · AWS-User-8706187 · asked a month ago

Need a suggestion to automate converting .glb files to .usdz using a Docker command on an EC2 instance

We have implemented a Docker setup on an EC2 instance (i-05d**** (ARubntu)) for converting GLB files to USDZ. It is working properly and we are able to use it from the EC2 command line. But we want to offer this conversion feature to the users of our webpage: they first upload the GLB file (this we have done successfully), but now we need to trigger the conversion from the webpage, and we have no idea how to implement that, so we need help.

1. First step: the file is uploaded to an S3 bucket — in our case bucket_name (ap-south-1).
2. Second step: convert the .glb file into .usdz. Manually, using the Docker command below, the result is successfully uploaded to the same bucket:

```
docker run -e INPUT_GLB_S3_FILEPATH='bucket_name/10_Dinesh/8732f71f6eca07050f62b014354c5/model.glb' \
  -e OUTPUT_USDZ_FILE='model.usdz' \
  -e OUTPUT_S3_PATH='bucket_name/10_Dinesh/8732f71f6eca07050f62b014354c5' \
  -e AWS_REGION='ap-south-1' \
  -e AWS_ACCESS_KEY_ID='AKIA6N3W****' \
  -e AWS_SECRET_ACCESS_KEY='0GuRz3b1X8****' \
  -it --rm awsleochan/docker-glb-to-usdz-to-s3
```

3. Now we want to automate this so that whenever any user uploads a .glb file to the S3 bucket, a .usdz file is produced in the same bucket. We just want the object path in this command to be updated automatically each time. Does anyone have a solution for this?
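One common shape for this kind of automation is an S3 ObjectCreated notification filtered on the .glb suffix that invokes a small Lambda function, which then starts one conversion container per uploaded file (for example as an ECS/Fargate task wrapping the existing image) and passes the object path through as environment overrides. This is only a sketch: the cluster, task definition, subnet and container names below are hypothetical.

```python
import os
import urllib.parse
import boto3

ecs = boto3.client("ecs")

# Hypothetical ECS resources that wrap the existing conversion image.
CLUSTER = os.environ.get("CLUSTER", "glb-to-usdz-cluster")
TASK_DEFINITION = os.environ.get("TASK_DEFINITION", "glb-to-usdz-task")
SUBNETS = ["subnet-0123456789abcdef0"]

def lambda_handler(event, context):
    """Triggered by s3:ObjectCreated:* with a '.glb' suffix filter; starts one task per file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        prefix = key.rsplit("/", 1)[0]  # e.g. 10_Dinesh/8732f.../

        ecs.run_task(
            cluster=CLUSTER,
            taskDefinition=TASK_DEFINITION,
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {"subnets": SUBNETS, "assignPublicIp": "ENABLED"}
            },
            overrides={
                "containerOverrides": [{
                    "name": "glb-to-usdz",  # container name in the task definition
                    "environment": [
                        {"name": "INPUT_GLB_S3_FILEPATH", "value": f"{bucket}/{key}"},
                        {"name": "OUTPUT_USDZ_FILE", "value": "model.usdz"},
                        {"name": "OUTPUT_S3_PATH", "value": f"{bucket}/{prefix}"},
                        {"name": "AWS_REGION", "value": "ap-south-1"},
                    ],
                }]
            },
        )
```

In that setup the ECS task role could supply the S3 credentials, assuming the image falls back to the default credential chain instead of strictly requiring the access-key environment variables.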
2 answers · 0 votes · 3 views · AWS_Reality-bit · asked a month ago

Data Pipeline stops processing files in S3 bucket

I have a Data Pipeline which reads CSV files from an S3 bucket and copies the data into an RDS database. I specify the bucket/folder name, and it goes through each CSV file in the bucket/folder and processes it. When it is done, a ShellCommandActivity moves the files to another 'folder' in the S3 bucket. That's how it works in testing. With the real data it just stops after a few files. The last line in the logs is:

```
07 Dec 2021 09:57:55,755 [INFO] (TaskRunnerService-resource:df-1234xxx1_@Ec2Instance_2021-12-07T09:53:00-0) df-1234xxx1 amazonaws.datapipeline.connector.s3.RetryableS3Reader: Reopening connection and advancing 0
```

The logs show that it usually downloads the CSV file, writes the 'Reopening connection and advancing 0' line, deletes a temp file, and then moves on to the next file. But on the seventh file it just stops at 'Reopening connection and advancing 0'. It isn't that particular file that is the problem, as it processes fine on its own. I've already tried making the files smaller — originally it was stopping on the second file, but now that the file sizes are about 1.7 MB it gets through six of them before it stops.

The status of each task (both DataLoadActivity and ShellCommandActivity) shows 'CANCELLED' after one attempt (3 attempts are allowed) and there is no error message. I'm guessing this is some sort of timeout. How can I make the pipeline reliable so that it processes all of the files?
2 answers · 0 votes · 8 views · erc_aws · asked a month ago

mxnet error encountered in Lambda Function

I trained and deployed a semantic segmentation network (on an ml.p2.xlarge instance) using SageMaker. I wanted to use an AWS Lambda function to send an image to this endpoint and get a mask in return; however, when I use invoke_endpoint it gives an mxnet error in the logs. Funnily enough, when I use the deployed model through a transformer object from inside the SageMaker notebook, the mask is returned properly. Here is my Lambda function code:

```python
import json
import boto3

s3r = boto3.resource('s3')

def lambda_handler(event, context):
    # TODO implement
    bucket = event["body"]
    key = 'image.jpg'
    local_file_name = '/tmp/' + key
    s3r.Bucket(bucket).download_file(key, local_file_name)

    runtime = boto3.Session().client('sagemaker-runtime')

    with open('/tmp/image.jpg', 'rb') as imfile:
        imbytes = imfile.read()

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(
        EndpointName='semseg-2021-12-03-10-05-58-495',
        ContentType='application/x-image',
        Body=bytearray(imbytes))  # The actual image

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read()

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }
```

Here are the errors I see in the logs:

```
mxnet.base.MXNetError: [10:26:14] /opt/brazil-pkg-cache/packages/AIAlgorithmsMXNet/AIAlgorithmsMXNet-1.4.x.4276.0/AL2_x86_64/generic-flavor/src/3rdparty/dmlc-core/src/recordio.cc:12: Check failed: size < (1 << 29U) RecordIO only accept record less than 2^29 bytes
```
1 answer · 0 votes · 17 views · YashJain · asked a month ago

Greengrass v2: 'Forbidden' (403) to get a file from S3

Hi, we are using AWS Greengrass v2 and have a core device successfully running, with code provided via a Lambda function. We are already retrieving secrets via Secrets Manager and streaming our data to the cloud via Kinesis. But we still struggle to access/read a configuration file we have stored in an S3 bucket. We tried two ways, both without success. It would be great to get some advice to make at least one of them work.

1. As this solution was migrated from Greengrass v1 to v2, we had the following piece of code working fine:

```python
try:
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket="OurBucketName", Key="OurFile1.csv")
    csv_file = obj['Body'].read().decode('utf-8-sig')
    print(csv_file)
except botocore.exceptions.ClientError as error:
    print('Error long: ', error.response)
```

But since migrating to Greengrass v2 we get a botocore exception (Failed due to: ClientError('An error occurred (403) when calling the GetObject operation: Forbidden')), or, as printed by the code example above:

```
...lambda_function.py:12,Error long: . {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
...lambda_function.py:12, . {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
...lambda_function.py:12,{'Error': {'Code': '403', 'Message': 'Forbidden'}, 'ResponseMetadata': {'RequestId': '', 'HostId': '', 'HTTPStatusCode': 403, 'HTTPHeaders': {'connection': 'Keep-Alive', 'content-type': 'text/html', 'cache-control': 'no-cache', 'content-length': '5748', 'x-frame-options': 'deny'}, 'RetryAttempts': 1}}. {serviceInstance=0, serviceName=boto3_s3_test, currentState=RUNNING}
```

Does anyone have an idea why we still get a 403 (Forbidden) error?

2. We also tried to deploy our configuration files using the artifacts feature, but without success. We configured it in the "Configuration update" of the AWS component in the deployment configuration:

```json
{
  "reset": [],
  "merge": {
    "Artifacts": [
      { "URI": "s3://OurBucketName/OurFile1.csv" },
      { "URI": "s3://OurBucketName/OurFile2.csv" }
    ]
  }
}
```

After the deployment we can't find our files on the Greengrass core device, nor any hint in any of the log files. The documentation comes with this example for the artifact URI: "s3://DOC-EXAMPLE-BUCKET/artifacts/MyGreengrassComponent/1.0.0/artifact.py". Is that folder structure .../artifacts/<ComponentName>/<ComponentVersion>/... required? Does anyone have an idea why we don't get the artifacts onto our Greengrass core device?

By the way, for both attempts we followed the documentation and adjusted our version of the "GreengrassV2TokenExchangeRoleAccess" policy to allow (for now) all S3 actions on all resources:

```json
{
  "Sid": "VisualEditor0",
  "Effect": "Allow",
  "Action": [
    "greengrass:*",
    "iot:Receive",
    "logs:CreateLogStream",
    "iot:Subscribe",
    "secretsmanager:*",
    "s3:*",
    "iot:Connect",
    "logs:DescribeLogStreams",
    "iot:DescribeCertificate",
    "logs:CreateLogGroup",
    "logs:PutLogEvents",
    "iot:Publish"
  ],
  "Resource": "*"
}
```

Thanks in advance! Regards, Dirk_R

P.S.: Editing a post to get a proper result is definitely a pain in the neck!
5 answers · 0 votes · 0 views · Dirk-R · asked 9 months ago

S3 content protection and deletion methods

Hi there, I'm looking to compile a complete list of ways to protect data in S3 from accidental or malicious deletion. Assuming that Object Lock is not in play, and that we can't rely on cross-region replication for a redundant copy, are there any other ways to completely destroy data aside from the following?

1. A DeleteObject API call from an authenticated IAM principal (be that a role or a user)
2. A PutObject API call over an existing object without object versioning in play
3. A lifecycle policy that deletes objects
4. The root user issuing a DeleteObject call

Note that in certain infrequent administrative circumstances I will still need to be able to delete an object (so Object Lock in compliance mode is not usable here).

In short, it appears that data can be protected from administrators (*:* permissions on *) by doing the following (please confirm?). Either:

1. Implement Object Lock in governance mode
2. Explicitly deny s3:BypassGovernanceRetention and s3:GetBucketObjectLockConfiguration
3. Enable detective measures to undo these configurations
4. Prevent the root account from being used

Or:

1. Explicitly deny s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifecycleConfiguration
2. Enable versioning (to prevent overwrite)
3. Enable detective measures to undo these configurations
4. Prevent the root account from being used

Or:

1. Store a redundant copy of the object in a backup bucket and protect it accordingly
2. Restrict IAM access to the backup copy completely
3. Enable detective measures to undo these configurations
4. Prevent the root account from being used
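Of the options above, the deny-based one is the easiest to express in code. A minimal sketch of enabling versioning plus a bucket policy deny, using a placeholder bucket name and a hypothetical break-glass role for the rare administrative delete:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-protected-bucket"                                   # placeholder
BREAK_GLASS_ROLE = "arn:aws:iam::111122223333:role/break-glass"  # hypothetical admin-delete role

# 1) Versioning turns overwrites and deletes into recoverable versions / delete markers.
s3.put_bucket_versioning(Bucket=BUCKET, VersioningConfiguration={"Status": "Enabled"})

# 2) Deny the destructive actions to every principal except the break-glass role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDestructiveActions",
        "Effect": "Deny",
        "Principal": "*",
        "Action": [
            "s3:DeleteObject",
            "s3:DeleteObjectVersion",
            "s3:PutLifecycleConfiguration",
        ],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"ArnNotEquals": {"aws:PrincipalArn": BREAK_GLASS_ROLE}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Anyone who can still call PutBucketPolicy (including the root user) can of course remove this protection, which is where the detective controls and root-account restrictions in the lists above come in.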
1 answer · 0 votes · 1 view · AWS-User-7579179 · asked a year ago