Send sets of files to AWS S3 from Python
I need help creating a Python executable that runs continuously and uploads photos in batches of 100 until it reaches a thousand. For context: a system was built in Python (Arduino, Jetson, etc.) that takes photos 1 through 1,000 with a camera, but all of the photos are only uploaded, into a folder created by the same system, once capture is finished. I am trying to find a way to send the photos in groups of 10-20 or 100 as they are taken, instead of waiting for all 1,000 photos to be captured before uploading. The connection is through Boto3; if there is a better connection method, or anything else that can help me, that would also be helpful.
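A minimal sketch of the batching loop the question describes, assuming photos land in a local `photos/` folder as `.jpg` files and using a placeholder bucket name; it polls the folder and uploads each batch of 100 with Boto3 rather than waiting for all 1,000:

```python
# Sketch: poll a capture folder and upload new photos in batches of 100.
# BUCKET and PHOTO_DIR are hypothetical; a real version should also make
# sure a file is fully written before uploading it.
import time
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-photo-bucket"    # hypothetical bucket name
PHOTO_DIR = Path("photos")    # hypothetical capture folder
BATCH_SIZE = 100
TOTAL = 1000

uploaded = set()
while len(uploaded) < TOTAL:
    new = sorted(p for p in PHOTO_DIR.glob("*.jpg") if p.name not in uploaded)
    # Flush a batch as soon as enough new photos have accumulated,
    # plus a final partial batch once the last photo has been taken.
    if len(new) >= BATCH_SIZE or (new and len(uploaded) + len(new) == TOTAL):
        for path in new[:BATCH_SIZE]:
            s3.upload_file(str(path), BUCKET, f"photos/{path.name}")
            uploaded.add(path.name)
    else:
        time.sleep(5)  # wait for the camera to take more photos
```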
Low transfer rates
Hi, I run an EC2 instance in the Ireland region with the GoAnywhere application, which receives SFTP traffic on port 922. On premises I normally receive a constant 5-10 Mbps inbound, but on the AWS instance I receive 1 Mbps with constant drops to 0 Mbps. The instance is a c5.xlarge. Has anyone faced a similar anomaly? The same anomaly happens when I transfer to the attached S3 bucket.
Unable to set bucket accelerate configuration - Administrator Access
When trying to deploy a stack using CloudFormation, I am getting the error: ``` API s3:setBucketAccelerateConfiguration Access Denied ``` See screenshot: https://pasteboard.co/7gPE6va6ILxP.png I have the `AdministratorAccess` policy attached to my IAM user. As you can see, I am creating the bucket, therefore I am the owner. Why is there a problem with permissions? I have deployed the same stack in a different account (also using `AdministratorAccess` in that account) and there was no problem.
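One way to narrow this down is to reproduce the same API call outside CloudFormation under the same credentials, which separates an IAM/SCP denial from a bucket-level restriction (note that Transfer Acceleration is not supported on buckets whose names contain periods). A sketch with a placeholder bucket name:

```python
# Sketch: call the same API that CloudFormation is being denied on,
# using the same credentials. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-test-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)
```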
How to merge AWS Data Pipeline output files into a single file?
Hi there, I'm sourcing data from DynamoDB and landing it in an S3 bucket using AWS Data Pipeline. I run this pipeline once a week to get up-to-date records from the DynamoDB table. Everything is working fine and there are no issues with the pipeline; since I have small tables, I pull all records every time. The problem is that AWS Data Pipeline writes the exported files to S3 in chunks, which is becoming hard to work with because I have to read the files one by one, and I don't want to do that. I am pretty new to the AWS Data Pipeline service. Can someone guide me on how to configure the pipeline so that it produces just one output file for each table, or suggest a better way to resolve this? Any help would be appreciated!
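The export writes one file per underlying task, so a common workaround is to concatenate the part files after each run rather than reconfiguring the pipeline. A sketch with placeholder bucket/prefix names, reading everything into memory (fine for small tables, as described above):

```python
# Sketch: merge the chunked export files under one S3 prefix into a
# single object after the pipeline run. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
bucket = "my-export-bucket"          # hypothetical
prefix = "exports/my-table/latest/"  # hypothetical export folder

merged = bytearray()
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if not obj["Key"].endswith("_SUCCESS"):  # skip the marker file
            merged += s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()

s3.put_object(Bucket=bucket, Key=prefix + "merged.json", Body=bytes(merged))
```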
Problems with S3 disconnects
Hello AWS Support, greetings for the day and I hope that you are well. We have a special situation in production. We have an Ubuntu instance with a filesystem mounted via s3fs. Lately it has been failing and unmounting; to remount it, we perform the following actions:

1. `sudo su`
2. `su - git`
3. `fusermount -u /var/www/resources`
4. `s3fs -o nonempty -o iam_role='rol_ssm' -o endpoint="us-west-2" -o use_cache=/tmp/cachegit stargroup-resources /var/www/resources`

When the FS is unmounted, question marks appear in the directory listing:

```
d????????? ? ? ? ? ? resources/
```

Additionally, here is the data of the instance where the FS is mounted:

```
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.6 LTS"
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
```

Could you please support us? Karen T, Itera's Cloud Operations Rep
How can I upload a 50 MB file to S3 from an edge device in 1 MB chunks?
Hi, I want to upload a 50 MB file to AWS S3 from an embedded device. I want to chunk the file into 1 MB pieces and send 50 such PUT requests for the same object (providing a range in the HTTP header). Let me know which HTTP header fields I have to set for this. Is there any example available in C for this? Currently I can send a PUT request to S3 and upload 1 MB of data, but on each subsequent request the data overwrites the same file instead of appending to it. Let me know if you need more information. Thanks in advance, Darshak
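The overwriting happens because S3 objects are immutable: a plain PUT always replaces the whole object, and there is no append via a range header. The standard mechanism for uploading in pieces is the multipart upload API, with the caveat that every part except the last must be at least 5 MiB, so 1 MB chunks would need to be buffered up to 5 MB per part. A sketch of the call flow in Python with Boto3 (the question asks for C, but the same three REST operations, CreateMultipartUpload, UploadPart, and CompleteMultipartUpload, apply there); bucket, key, and file names are placeholders:

```python
# Sketch of the S3 multipart upload flow. Parts other than the last
# must be >= 5 MiB, hence the 5 MiB part size instead of 1 MB.
import boto3

PART_SIZE = 5 * 1024 * 1024

s3 = boto3.client("s3")
bucket, key = "my-bucket", "device/capture.bin"  # hypothetical names

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
with open("capture.bin", "rb") as f:   # hypothetical local file
    part_number = 1
    while chunk := f.read(PART_SIZE):
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)
```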
How to integrate Amazon S3 into Microsoft Access
I am supporting a well-established Microsoft Access application. I previously wrote a separate VB.NET application to back up the database to an AWS S3 bucket, but I would like to integrate this backup routine into the MS Access code (VBA). If I try to add the DLLs (AWSSDK.Core.dll and AWSSDK.S3.dll) from the VB.NET code as References in Access (VBA), it fails with "Can't add reference to the specified file". In Access terms, References are Type Libraries. So where can I find a Type Library that gives S3 functionality to VBA code? Many thanks
Slow AWS upload of many tiny files
Hi there, I regularly upload static websites to S3 buckets. I have a new website where this upload is very, very slow: it can take up to 1 hour to upload a ~30 MB bundle of 1,500 to 2,000 files, mainly .js (i.e. no large images or videos). The upload first indicates a correct speed and ~2 min remaining, then it just slows down and the estimated remaining time seems very lost :) It does not happen on other projects; the bucket is in the same region (eu-west-3). I activated Transfer Acceleration without any substantial difference, and I tried from 3 different networks. Any idea?! Thanks
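With thousands of tiny objects, per-request overhead tends to dominate the transfer time, so uploading in parallel usually helps far more than raw bandwidth. A sketch using Boto3 with a thread pool, assuming a local `dist/` build directory and a placeholder bucket name:

```python
# Sketch: parallel upload of many small files with a thread pool.
# "dist" and the bucket name are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import boto3

s3 = boto3.client("s3")
bucket = "my-static-site"  # hypothetical
root = Path("dist")        # hypothetical build output

def upload(path: Path) -> None:
    # Object key = path relative to the build root, with forward slashes.
    s3.upload_file(str(path), bucket, path.relative_to(root).as_posix())

files = [p for p in root.rglob("*") if p.is_file()]
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(upload, files))
```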
Uploaded media file has very slow, buffered streaming
I uploaded a media file to S3 along with an .exe and other supporting files, as it is an interactive video media file. It ran very smoothly until the last 2 months. Now it buffers a lot during playback, even every 2-3 seconds, and I am not sure why this is happening. I have checked my internet speed and it is a good 200 Mbps. I tried different machines and locations, but it is the same issue.
Total size of my buckets is not the same as what appears inside them
Hi, I am contacting you because I have a question about how the size of my buckets is reported in S3. I am attaching three images. In this account I have only one bucket, "gualaceo". When I access the S3 dashboard, it shows a total size of 1.4 TB. When I access my bucket and select all the folders to calculate the total size, it shows that the size of the bucket "gualaceo" is 661 GB.

I searched for possible differences between the overall size and the bucket size and was given the following link: https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-metric-discrepancy/?nc1=h_ls After reading it and following the instructions, I checked whether there are incomplete multipart uploads; the result is 0. The only option I have enabled is object versioning. But since the total size is 1.8 TB and the size of the only existing bucket is 661 GB, there is a difference of more than 1 TB and I cannot understand where it comes from.

Although this also increases the cost of the bill, my "problem" above all is to understand where that big difference in storage comes from, or to get help analyzing it so I can learn in case I am managing something incorrectly. I would appreciate that. Thanks for your attention and your time. Best regards, Cereza.
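With versioning enabled, noncurrent object versions still count toward the bucket's storage metrics, while the console's "Calculate total size" on selected objects only counts current versions, which is a plausible source of a gap like this. A sketch that sums current versus noncurrent bytes with Boto3, using the bucket name from the question:

```python
# Sketch: compare bytes held by current vs. noncurrent object versions.
# Noncurrent versions are billed but not shown by "Calculate total size".
import boto3

s3 = boto3.client("s3")
current = noncurrent = 0

paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="gualaceo"):
    for v in page.get("Versions", []):
        if v["IsLatest"]:
            current += v["Size"]
        else:
            noncurrent += v["Size"]

print(f"current:    {current / 1024**3:.1f} GiB")
print(f"noncurrent: {noncurrent / 1024**3:.1f} GiB")
```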
Getting rate limited when massively downloading from S3 (AWS public datasets)
Hi all, I have a large EMR cluster with the typical VPC, private subnet, public subnet, and internet gateway. There, each vCPU tries to download a WARC file from S3. All the instances are in the same VPC, but I am getting rate limited. I think this should not be happening; instances should connect to the URLs independently over different network paths. However, I do not know how to set up independent connections/IPs, rotating IPs, or something similar. This should be a common issue with a standard solution; otherwise, how do people massively work with not only the AWS Public Datasets but also their own S3 buckets without getting limited? Edit: with more than 160 vCPUs the rate limiting starts and the cluster performance degrades from 95% vCPU usage to ~10%. Thanks, David
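S3 throttles per key prefix rather than per client IP (documented at roughly 5,500 GET/HEAD requests per second per prefix), so rotating IPs would not help; clients are expected to back off and retry on 503 Slow Down responses. A sketch enabling Boto3's adaptive retry mode, with illustrative bucket/key/file names:

```python
# Sketch: client-side adaptive retries so 503 "Slow Down" throttling
# responses are backed off and retried instead of failing workers.
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# Bucket and key below are illustrative placeholders only.
s3.download_file("my-dataset-bucket", "path/to/file.warc.gz", "file.warc.gz")
```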
How to report an issue related to the Amazon Linux 2 repo?
While experimenting with an IPv6-only network in AWS, I found that `yum` in Amazon Linux 2 requires IPv4 as of 2022-02-07. The reason is that the request `curl -v https://amazonlinux-2-repos-eu-central-1.s3.dualstack.eu-central-1.amazonaws.com/2/core/2.0/aarch64/mirror.list` sent by yum (note `dualstack` in the URL) is redirected to a URL like `https://amazonlinux-2-repos-eu-central-1.s3.eu-central-1.amazonaws.com/2/core/2.0/aarch64/8b618629d7dc5df1d8d584701a7c14d0ab8ddbf11e2c586a9860a7a92072a33f` (note that there is no `dualstack`, therefore it is IPv4 only). Because of this server-side issue, `yum` fails. Does anyone know the best way to report this issue to the Amazon team? The fix might be easy to make, since S3 supports IPv6. Some reference documents I've found:

1. IPv6 support for S3: https://aws.amazon.com/blogs/aws/now-available-ipv6-support-for-amazon-s3/
2. A similar complaint about yum not working on an IPv6-only network: https://forums.aws.amazon.com/thread.jspa?messageID=842295