Questions tagged with Amazon S3 Glacier
Amplify.Storage.getUrl Security Token expiry date
Is there an expiry date on the security token present in the URL that I get through:

```java
Amplify.Storage.getUrl(
    "ExampleKey",
    result -> Log.i("MyAmplifyApp", "Successfully generated: " + result.getUrl()),
    error -> Log.e("MyAmplifyApp", "URL generation failure", error)
);
```

I'm asking because I want to hardcode the URL in the post model of my GraphQL schema.

Second question: is it a good idea to hardcode the URL? I'm worried because the S3 path-style object URL format was recently deprecated:

> Update (September 23, 2020) – Over the last year, we've heard feedback from many customers who have asked us to extend the deprecation date. Based on this feedback we have decided to delay the deprecation of path-style URLs to ensure that customers have the time that they need to transition to virtual hosted-style URLs.

Likewise, virtual hosted-style URLs might one day be deprecated too.
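For context on the first question: the URL returned by `getUrl` is a SigV4 presigned URL, and its lifetime is encoded directly in the query string (`X-Amz-Date` plus `X-Amz-Expires` seconds). A minimal Python sketch, using a hypothetical URL for illustration, that reads that expiry back:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_url_expiry(url: str) -> datetime:
    """Return the expiry time encoded in a SigV4 presigned URL."""
    qs = parse_qs(urlparse(url).query)
    # X-Amz-Date is the signing time; X-Amz-Expires is the validity in seconds
    signed_at = datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

# Hypothetical presigned URL (bucket, key, and signature are placeholders):
url = ("https://example-bucket.s3.amazonaws.com/ExampleKey"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256"
       "&X-Amz-Date=20240101T000000Z"
       "&X-Amz-Expires=900"
       "&X-Amz-Signature=abc123")
print(presigned_url_expiry(url))  # 2024-01-01 00:15:00+00:00
```

Because the signature bakes in that expiry, a hardcoded presigned URL will stop working once the window lapses, which is one reason to regenerate URLs on demand rather than store them.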
Best way to archive MSK data to S3 (MSK Connect too slow)
Hi there, I need advice on archiving MSK topic data to an S3 bucket. I'm currently doing this with MSK Connect using a Confluent custom plugin running two tasks. We have ~163 GB of data in MSK, and in the last 48 hours it has only copied ~3 GB; this will cost us a lot of money if we leave it like this, considering the MSK Connect running cost and S3 PUT charges. Is there a better, more cost-effective way to migrate the data? Below is the connector configuration, in case I missed anything:

```properties
connector.class=io.confluent.connect.s3.S3SinkConnector
s3.region=###
flush.size=1
schema.compatibility=NONE
tasks.max=2
topics=#####
format.class=io.confluent.connect.s3.format.json.JsonFormat
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
value.converter=org.apache.kafka.connect.storage.StringConverter
storage.class=io.confluent.connect.s3.storage.S3Storage
s3.bucket.name=#########
key.converter=org.apache.kafka.connect.storage.StringConverter
```

Thanks
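One thing worth noting about the configuration above: `flush.size=1` commits one S3 object per record, which both throttles throughput and multiplies S3 PUT charges. A hedged sketch of batched settings for the Confluent S3 sink (values are illustrative, not tuned for this workload):

```properties
# Illustrative only: write many records per S3 object instead of one each
flush.size=10000
# Roll files on a wall-clock interval too, so slow partitions still flush
rotate.interval.ms=600000
# Time-based partitioning keeps archive objects grouped by hour
partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
partition.duration.ms=3600000
path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
locale=en-US
timezone=UTC
timestamp.extractor=Record
```

Larger flush sizes mean fewer, bigger objects, which reduces PUT request costs and usually raises sink throughput substantially.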
Getting error while querying table in Athena
```
Row is not a valid JSON Object - JSONException: Unterminated string at 404 [character 405 line 1]
```
This query ran against the "covid_dataset" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: 64235c28-e4a8-4cb8-b37d-73bab288e2dd
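For context, this error usually means one record in the underlying S3 data is not a single-line JSON object — the JSON SerDe Athena uses expects exactly one complete object per line. A hedged sketch for locating offending records in a local copy of the data:

```python
import json

def find_bad_json_lines(path: str):
    """Report lines that fail to parse as a standalone JSON value
    (e.g. unterminated strings, objects split across lines)."""
    bad = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError as e:
                bad.append((lineno, e.msg, e.pos))
    return bad

# Usage: find_bad_json_lines("covid_data.json") returns a list of
# (line number, error message, character position) tuples.
```

Pretty-printed JSON (one object spread over several lines) triggers exactly this class of error; reserializing each record onto a single line typically resolves it.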
Automatically send new image from S3 to EC2
What happens in Glacier after you initiate an archive retrieval job
I'm writing some code to retrieve archives from Glacier. My code initiates a job, then listens on a queue for a message with that jobId, then downloads the archive. The documentation says the jobId will not expire for 24 hours. Does this mean that if you don't complete the download within those 24 hours, you can no longer download the archive and need to initiate a new job? And does the 24-hour clock start when the job starts or when the job finishes?
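For context: per the Glacier documentation, a job ID does not expire for at least 24 hours after the job *completes* (not after it is initiated), so the download window is anchored to the job's completion time, and once it lapses a new job is needed. A minimal sketch deriving the download deadline from the ISO-8601 `CompletionDate` field that `DescribeJob` returns:

```python
from datetime import datetime, timedelta, timezone

# Minimum retention of job output per the Glacier docs
GLACIER_OUTPUT_WINDOW = timedelta(hours=24)

def download_deadline(completion_date: str) -> datetime:
    """CompletionDate arrives as e.g. '2024-01-01T12:00:00.000Z';
    the job output stays retrievable for at least 24h after it."""
    completed = datetime.strptime(
        completion_date, "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)
    return completed + GLACIER_OUTPUT_WINDOW

print(download_deadline("2024-01-01T12:00:00.000Z"))  # 2024-01-02 12:00:00+00:00
```

A worker consuming the SNS/SQS completion message could compare this deadline against the current time before attempting `GetJobOutput`, and re-initiate the job if it has passed.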
Issues in S3 buckets in Ireland Region - missing TSval & TSecr on 3-way handshake
Hello everyone, we are experiencing issues with some S3 buckets in the Ireland Region. Sometimes in the 3-way handshake, the TSval & TSecr on the [SYN, ACK] are not returned (RFC 1323). When this happens we see a huge drop in bandwidth; in other cases it works fine. We are downloading files from Italy with a latency of 40 ms. The drop in performance might be caused by our CCR2004 MikroTik router delaying ACKs, which are then discarded on the S3 side, leading to a reduction in the congestion window. Why are some S3 buckets not returning the TSval & TSecr? (We have found that at least the subnet 18.104.22.168/16 is affected by this behaviour.) Does anyone else have the same problem at this latency (40 ms)?
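As a sanity check on the bandwidth impact: if the RFC 1323 options are being stripped from the handshake, window scaling may be lost along with the timestamps (an assumption here — only TSval/TSecr were confirmed missing), capping the advertised receive window at 64 KiB. Single-flow throughput is then bounded by window size divided by round-trip time:

```python
def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on single-flow TCP throughput: window / RTT,
    converted from bytes/s to megabits/s."""
    return window_bytes / rtt_seconds * 8 / 1e6

# Without window scaling, the advertised window tops out at 64 KiB
print(round(max_throughput_mbps(64 * 1024, 0.040), 1))  # 13.1
```

At 40 ms RTT that bound is roughly 13 Mbit/s regardless of link capacity, which would be consistent with a "huge drop in bandwidth" on transfers that otherwise run much faster.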
Glacier IR expenses for requests
I have only one file stored in the Glacier IR storage class, and I am not sure how that file was listed 1,000 times. For example, this is my bill:

```
Amazon Simple Storage Service Requests-GIR-Tier2                          $0.01
$0.1 per 10,000 GET and all other requests to Glacier Instant Retrieval
1,087.000 Requests                                                        $0.01
```

I do not think I have downloaded or listed that file more than 1,000 times. I would like to know how this charge is calculated.
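The line item itself is straightforward arithmetic: the request count times the tier price ($0.10 per 10,000 requests), rounded to the cent:

```python
def request_charge(requests: int, price_per_10k: float = 0.10) -> float:
    """Charge for GIR-Tier2 requests: count * price per 10,000, to the cent."""
    return round(requests * price_per_10k / 10_000, 2)

print(request_charge(1_087))  # 0.01
```

As for where 1,087 requests came from: the tier covers "GET and all other requests", which can include requests AWS makes on your behalf — console browsing, `HeadObject` calls issued by SDKs and tools, and features like inventory or metrics — not just explicit downloads, so the count can plausibly exceed what you remember doing by hand.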
Move objects in Glacier Deep Archive from one account to another while preserving storage class
Looking for the best approach to moving data in S3 Glacier Deep Archive from one account to another while preserving the current storage class. The solutions I've come across so far indicate that all data must be retrieved from Deep Archive before copying, which feels redundant and costly given that the data should ultimately remain in Glacier Deep Archive in the destination account. What is the least costly approach? Can it be done without first performing (e.g.) a Bulk Retrieval of the original data and then transitioning it back to Glacier Deep Archive in the destination account?
Restored Glacier object cannot be copied
I need advice on why an object that has been successfully restored from Glacier can't be copied or transitioned back to an S3 storage class. I'm not expecting to have to download and re-upload the object, as I didn't have to do this last time. ![Snippet from S3 console](/media/postImages/original/IM1k0y_y_tRE-wDqYm5tjr9Q) I need to restore objects to trigger a sync to a second AWS Region and change the storage class to Glacier Deep Archive.