All Questions
Sort by most recent

Browse through the questions and answers listed below or filter and sort to narrow down your results.

Amazon SES production access denied without any important reason

Amazon SES production access was denied without any meaningful reason. My case ID 9489919231 was denied without any real explanation. If the Amazon SES service is this harsh on new customers, why isn't that described in the SES docs? Why does everything in the SES docs look so easy? I'm posting my request here again; please look at it and tell me, someone, what the reason is.

"Production access request
Service: SES Sending Limits
Region: us-west-2
Please enable production access
------------
Use case description: First of all, we want to say hello! We plan to use SES to send transactional messages to our members. You can look here: https://peretz-centre.org/join-us/ (this is how visitors become members and how they are added to the recipient list). We plan to track all bounces, complaints and unsubscribe requests with the SNS service; when our app receives a message from SNS, we will stop any further sending to that address. Please enable production access on our account.
Mail Type: TRANSACTIONAL
Website URL: https://peretz-centre.org

Reply to the first denial: We usually send messages every day, because over one year of membership we already have about 350,000 members, so we receive new membership requests every day. For recipient list maintenance we use several tools: 1. A service for validation and list cleaning. 2. We track all bounces and spam rates for our campaigns. 3. We use segmentation and personalisation. 4. We don't send spam, only high-quality content. We plan to track all bounces, complaints and unsubscribe requests with the SNS service; when our app receives a message from SNS, we will stop any further sending to that address. We will set this up serverlessly (SNS + SQS + Lambda), so the cost should be next to nothing, and it's a very effective solution. There is a lot of information from AWS on how to do this, and we will do it once you enable production access on the account. Thank you for your attention, have a nice day!"

Tell me, when will Amazon Web Services start respecting customers and new users? If SES is only for established users or for users with big budgets, say so.
0
answers
0
votes
2
views
AWS-User-6191652
asked 9 hours ago

metadata service is unstable: connection timeout, Failed to connect to service endpoint etc

Starting recently, our long-running jobs have been hitting metadata issues frequently. The exceptions vary, but they all point to the EC2 metadata service: either the connection to the endpoint fails, the connection to the service times out, or the SDK complains that I need to specify the region while building the client. The job runs on EMR 6.0.0 in Tokyo with the correct role set, and it has been running fine for months; only recently did it become unstable. So my question is: how can we monitor the health of the metadata service (request QPS, success rate, etc.)? A few call stacks:

```
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [com.amazon.ws.emr.hadoop.fs.guice.UserGroupMappingAWSSessionCredentialsProvider@4a27ee0d: null, com.amazon.ws.emr.hadoop.fs.HadoopConfigurationAWSCredentialsProvider@76659c17: null, com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.auth.InstanceProfileCredentialsProvider@5c05c23d: Failed to connect to service endpoint: ]
    at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:136)
```

```
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
    at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:462)
    at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:424)
    at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
```

```
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.s3.ap-northeast-1.amazonaws.com
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1189) ~[aws-java-sdk-bundle-1.11.711.jar:?]
Caused by: java.net.UnknownHostException: mybucket.s3.ap-northeast-1.amazonaws.com
    at java.net.InetAddress.getAllByName0(InetAddress.java:1281) ~[?:1.8.0_242]
    at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_242]
    at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_242]
```

```
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.SdkClientException: Failed to connect to service endpoint:
Caused by: java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
```
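One rough way to get the numbers the question asks about (success rate, latency), assuming IMDSv1 is reachable at the standard 169.254.169.254 address, is to sample the endpoint periodically from the instances and log the results. A minimal sketch, not an EMR-specific tool:

```
# Minimal sketch: periodically probe the EC2 instance metadata endpoint and
# record success rate / latency. Assumes IMDSv1 is reachable (no token step).
import time
import urllib.request

IMDS_URL = "http://169.254.169.254/latest/meta-data/instance-id"

def probe(timeout=1.0):
    start = time.time()
    try:
        with urllib.request.urlopen(IMDS_URL, timeout=timeout) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

if __name__ == "__main__":
    successes, total = 0, 0
    while True:
        ok, latency = probe()
        total += 1
        successes += ok
        print(f"ok={ok} latency={latency:.3f}s success_rate={successes / total:.2%}")
        time.sleep(5)
```

The printed samples could be shipped to CloudWatch or a log aggregator to track QPS and error rate over time.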
0
answers
0
votes
0
views
Hubery
asked 18 hours ago

SageMaker AutoML generates ExpiredTokenException

Hi, I can train models using different AWS SageMaker estimators, but when I use the SageMaker AutoML Python SDK the following error occurs about 15 minutes into the model training process:

"botocore.exceptions.ClientError: An error occurred (ExpiredTokenException) when calling the DescribeAutoMLJob operation: The security token included in the request is expired"

The role used to create the AutoML object is associated with the following AWS managed policies as well as one inline policy. Can you please let me know what I'm missing that's causing this ExpiredTokenException error?

AmazonS3FullAccess
AWSCloud9Administrator
AWSCloud9User
AmazonSageMakerFullAccess

Inline policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "sagemaker.amazonaws.com"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:DescribeEndpointConfig",
                "sagemaker:DescribeModel",
                "sagemaker:InvokeEndpoint",
                "sagemaker:ListTags",
                "sagemaker:DescribeEndpoint",
                "sagemaker:CreateModel",
                "sagemaker:CreateEndpointConfig",
                "sagemaker:CreateEndpoint",
                "sagemaker:DeleteModel",
                "sagemaker:DeleteEndpointConfig",
                "sagemaker:DeleteEndpoint",
                "cloudwatch:PutMetricData",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:DescribeLogStreams",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage"
            ],
            "Resource": "*"
        }
    ]
}
```

Thanks, Stefan
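The timing (roughly 15 minutes in) is more consistent with the credentials behind the long-lived AutoML client expiring mid-run than with a missing permission. A minimal sketch of polling DescribeAutoMLJob with a freshly constructed boto3 client on each iteration, so the credential chain is re-resolved; this only helps if the underlying credential source (for example an attached role) can mint fresh credentials, and the job name is a placeholder:

```
# Rough sketch: poll an AutoML job with a freshly built client each time so the
# credential chain is re-resolved (helps only if the underlying source, e.g. an
# attached role, can issue fresh credentials). Job name is a placeholder.
import time
import boto3

JOB_NAME = "my-automl-job"  # placeholder

def poll_until_done(job_name, interval=60):
    while True:
        sm = boto3.Session().client("sagemaker")  # new client -> fresh credentials
        desc = sm.describe_auto_ml_job(AutoMLJobName=job_name)
        status = desc["AutoMLJobStatus"]
        print(status, desc.get("AutoMLJobSecondaryStatus"))
        if status in ("Completed", "Failed", "Stopped"):
            return desc
        time.sleep(interval)

# poll_until_done(JOB_NAME)
```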
0
answers
0
votes
0
views
AWS-User-9933965
asked 18 hours ago

Cognito - CustomSMSSender InvalidCiphertextException: null on Code Decrypt (Golang)

Hi, I followed this document to customize the Cognito SMS delivery flow: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-custom-sms-sender.html I'm not working in a JavaScript environment, so I wrote this Go snippet:

```
package main

import (
	"context"
	golog "log"
	"os"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kms"
)

// Using these types because aws-sdk-go does not support them yet.
// CognitoEventUserPoolsCustomSmsSender is sent by AWS Cognito User Pools before each message to send.
type CognitoEventUserPoolsCustomSmsSender struct {
	events.CognitoEventUserPoolsHeader
	Request CognitoEventUserPoolsCustomSmsSenderRequest `json:"request"`
}

// CognitoEventUserPoolsCustomSmsSenderRequest contains the request portion of a CustomSmsSender event
type CognitoEventUserPoolsCustomSmsSenderRequest struct {
	UserAttributes map[string]interface{} `json:"userAttributes"`
	Code           string                 `json:"code"`
	ClientMetadata map[string]string      `json:"clientMetadata"`
	Type           string                 `json:"type"`
}

func main() {
	lambda.Start(sendCustomSms)
}

func sendCustomSms(ctx context.Context, event *CognitoEventUserPoolsCustomSmsSender) error {
	golog.Printf("received event=%+v", event)
	golog.Printf("received ctx=%+v", ctx)
	config := aws.NewConfig().WithRegion(os.Getenv("AWS_REGION"))
	session, err := session.NewSession(config)
	if err != nil {
		return err
	}
	kmsProvider := kms.New(session)
	smsCode, err := kmsProvider.Decrypt(&kms.DecryptInput{
		KeyId:          aws.String("a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"),
		CiphertextBlob: []byte(event.Request.Code),
	})
	if err != nil {
		return err
	}
	golog.Printf("decrypted code %v", smsCode.Plaintext)
	return nil
}
```

I'm always getting `InvalidCiphertextException: : InvalidCiphertextException null`. Can someone help? This is how the Lambda config looks on my user pool:

```
"LambdaConfig": {
    "CustomSMSSender": {
        "LambdaVersion": "V1_0",
        "LambdaArn": "arn:aws:lambda:eu-west-1:...:function:cognito-custom-auth-sms-sender-dev"
    },
    "KMSKeyID": "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"
},
```
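For comparison, the JavaScript example in the linked document base64-decodes `event.request.code` and decrypts it with the AWS Encryption SDK (using a KMS keyring for the configured key), rather than calling KMS Decrypt directly on the raw string bytes. A rough Python sketch of that same flow, purely for illustration since the post uses Go; the key ARN is a placeholder:

```
# Rough illustration of the flow in the linked doc's JavaScript example:
# the "code" field is base64-encoded and wrapped by the AWS Encryption SDK,
# so it is base64-decoded and decrypted with an Encryption SDK provider
# backed by the user pool's KMS key (not a plain kms.Decrypt on the string).
import base64
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

KEY_ARN = "arn:aws:kms:eu-west-1:...:key/a8a566c5-796a-4ba1-8715-c9c17c6f0cb5"  # placeholder

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_ALLOW_DECRYPT
)
provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[KEY_ARN])

def decrypt_code(encrypted_code_b64: str) -> str:
    plaintext, _header = client.decrypt(
        source=base64.b64decode(encrypted_code_b64),
        key_provider=provider,
    )
    return plaintext.decode("utf-8")
```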
0
answers
0
votes
0
views
AWS-User-1153293
asked a day ago

Using Elastic Beanstalk - Docker Platform with ECR - Specifying a tag via environment variable

Hi, I am trying to develop a CI/CD process using Beanstalk's Docker platform with ECR. CodePipeline performs the builds and manages ECR tags and promotions, and Terraform manages the infrastructure. I am looking for an approach that allows us to use the same Dockerfile/Dockerrun.aws.json in production and non-production environments, despite wanting different tags of the same image deployed, perhaps from different repositories (repo_name_PROD vs repo_name_DEV). Producing and moving Beanstalk bundles that differ only in a tag feels unnecessary, and the idea of dynamically changing Dockerfiles during the deployment process also seems fragile. What I was exploring was a simple environment variable: change which tag (commit hash) of an image should be used based on a Beanstalk environment variable.

```
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repoName:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

Here TAG is the Git hash of the code repository from which the artifact was produced; CodeBuild has built the code and tagged the Docker image. I understand that Docker supports this:

```
ARG TAG
FROM 00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name:${TAG}
ADD entrypoint.sh /
EXPOSE 8080 8787 9990
ENTRYPOINT [ "/entrypoint.sh" ]
```

but it requires building the image like this: `docker build --build-arg GIT_TAG=SOME_TAG .`

Am I correct in assuming this will not work with the Docker platform? I do not believe the EB Docker platform exposes a way to specify the build-arg. What is standard practice for managing tagged Docker images in Beanstalk? I am a little leery of the `latest` tag, as a poorly timed auto scaling event could pull an update before it should be deployed; that just does not work in my case. Updating my Dockerfile during deployment (via `sed`) seems like asking for trouble.
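One pattern that sidesteps the build-arg limitation (an assumption here, not something the post settles on) is to have CodeBuild render the Dockerrun.aws.json per build, substituting the commit hash into the image name before packaging the Beanstalk bundle, so the Dockerfile and template never change between environments. A minimal sketch; the repository URI and port are placeholders:

```
# Sketch: render Dockerrun.aws.json at build time with the image tag for this
# commit, so the same template serves every environment. URI/port are placeholders.
import json
import os

REPO_URI = "00000000000.dkr.ecr.us-east-1.amazonaws.com/repo_name"  # placeholder

def render_dockerrun(tag: str, out_path: str = "Dockerrun.aws.json") -> None:
    dockerrun = {
        "AWSEBDockerrunVersion": "1",
        "Image": {"Name": f"{REPO_URI}:{tag}", "Update": "true"},
        "Ports": [{"ContainerPort": 8080}],
    }
    with open(out_path, "w") as f:
        json.dump(dockerrun, f, indent=2)

if __name__ == "__main__":
    # CODEBUILD_RESOLVED_SOURCE_VERSION holds the commit hash inside CodeBuild
    render_dockerrun(os.environ.get("CODEBUILD_RESOLVED_SOURCE_VERSION", "latest")[:12])
```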
0
answers
0
votes
1
views
bchandley
asked a day ago

Frustrating experience - Pearson VUE canceled my exam - Connection lost during cert - Although my Wifi was excellent

Hello, I had a bad experience taking an AWS certification exam with Pearson VUE yesterday, and it was highly frustrating. I had scheduled the AWS SysOps Associate for Friday the 14th at 6:45 PM PST. The test began on time, and after 10 minutes the Pearson VUE system got disconnected and showed "attempting to reconnect". It attempted to reconnect for 30 minutes, during which I tried to speak with a proctor; none responded. After 30 minutes I didn't know what else to do, so I decided to close the application and relaunch it. This time I was allowed to resume the test from the 11th minute, which was great. After 15 minutes the same problem occurred again. This time I waited another 25 minutes and no proctor responded. Finally I had to close the browser and tried to relaunch, but I was not allowed to. During this entire experience my WiFi was excellent. On Saturday morning I learnt that I had FAILED. This is a highly frustrating experience; it takes a lot of effort to prepare for and schedule these exams. Now the system allows me to retake the exam only after 14 days, meaning I have to dedicate time again for no fault of mine. A lot of people have had complaints about Pearson VUE, and I am not sure how we can keep considering them the choice of test administrator. How does Pearson VUE plan to compensate me for the time and effort that I have lost? They also seemed rude when communicating through the proctor. I don't know how large corporations are still collaborating with Pearson VUE.
0
answers
0
votes
2
views
AWS-User-3352992
asked a day ago

AWS ElasticSearch returning DIFFERENT results in Kibana and http request in browser for the exact same query

I am running this query in Kibana:

```
GET nearby/_search
{
  "from": 20,
  "size": 20,
  "query": {
    "bool": {
      "must": { "match": { "X": "B" } },
      "filter": {
        "geo_distance": {
          "distance": "3.0km",
          "PO": { "lat": 26.8466937, "lon": 80.94616599999999 }
        }
      }
    }
  }
}
```

The response to this is as expected: all hits have X=B, and 20 results are returned (I have removed some fields and some docs to keep the post short):

```
{ "took" : 228, "timed_out" : false, "_shards" : { "total" : 5, "successful" : 5, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : { "value" : 71, "relation" : "eq" }, "max_score" : 2.5032558, "hits" : [ { "_index" : "nearby", "_type" : "_doc", "_id" : "n3YeKvJqvpu1okE7QDBp", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gfn0", "X" : "B" } }, { "_index" : "nearby", "_type" : "_doc", "_id" : "5FPJ2eyr0YoQ9F0xPYzW", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gfn0", "X" : "B" } }, { "_index" : "nearby", "_type" : "_doc", "_id" : "QJflnqGKF1dpOjEaY8vy", "_score" : 2.2831507, "_source" : { "PO" : "tuc89gvr8", "X" : "B" } }] } }
```

This is the browser request; the query is the same:

```
https://search-wul8888888.ap-south-1.es.amazonaws.com/nearby/_search?q="{"from":20,"size":20,"query":{"bool":{"must":{"match":{"X":"B"}},"filter":{"geo_distance":{"distance":"3km","PO":{"lat":26.8466937,"lon":80.94616599999999}}}}}}"
```

This is the response. As you can see, the hits are mostly X=I documents, i.e. the must-match clause is not honoured. Secondly, I am sending size=20 but I get only 10 results, which is the default (again, I have removed extra docs to keep the post short):

```
{"took":149,"timed_out":false, "_shards":{"total":5,"successful":5,"skipped":0,"failed":0}, "hits":{"total":{"value":802,"relation":"eq"},"max_score":8.597985, "hits":[ {"_index":"nearby","_type":"_doc","_id":"iJ71MNq4a4TCkcT4vWSP","_score":8.597985,"_source":{//EXTRA FIELDS REMOVED "PO":"tuc8unwp7","X":"I","BI":"tRhKrWiDxFSt57tIH7g5"}}, {"_index":"nearby","_type":"_doc","_id":"PmngNe8WcC8aSraDMluz","_score":7.3973455,"_source":{"PO":"tuc8uhc5z","X":"I","BI":"m3S6yEicvu1HFI1UOTIb"}}, {"_index":"nearby","_type":"_doc","_id":"lDqjflPZGYsymPGU8iHD","_score":7.1520696,"_source":{"PO":"tuc89wpg5","X":"B"}}, {"_index":"nearby","_type":"_doc","_id":"QIf2KsO4FpCjT3m7kH4I","_score":6.402881,"_source":{"PO":"tuc8uhc5z","X":"I","BI":"m3S6yEicvu1HFI1UOTIb"}}]}}
```

Please help; I have tried everything but am not able to understand this. My hunch is that I am being returned a stale/old result every time, but I don't know how to fix that. Even in Chrome incognito mode the browser result is the same as above, and even if I change the radius in the browser the result stays the same, which suggests the browser queries are getting a stale result.
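One thing to note: the `q=` query-string parameter expects a Lucene query string, not the JSON query DSL, so pasting the Kibana body into `?q="…"` does not run the same search (and the `size`/`from` values inside it are ignored, which would explain the default 10 hits). The DSL has to be sent as the request body, or via the `source` parameter with `source_content_type=application/json`. A small sketch of sending the identical body from outside Kibana, assuming the domain accepts unsigned HTTP access (signing/auth omitted):

```
# Sketch: send the JSON query DSL as the request body, not inside "q=".
# Endpoint is the one from the post; auth/SigV4 signing omitted for brevity.
import json
import requests

ENDPOINT = "https://search-wul8888888.ap-south-1.es.amazonaws.com/nearby/_search"

query = {
    "from": 20,
    "size": 20,
    "query": {
        "bool": {
            "must": {"match": {"X": "B"}},
            "filter": {
                "geo_distance": {
                    "distance": "3.0km",
                    "PO": {"lat": 26.8466937, "lon": 80.94616599999999},
                }
            },
        }
    },
}

resp = requests.get(ENDPOINT, json=query)  # _search also accepts POST with the same body
print(json.dumps(resp.json(), indent=2)[:2000])
```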
0
answers
0
votes
2
views
PalmGini
asked 2 days ago

Annoying HLS Playback Problem On Windows But Not iOS

Hello All, I am getting up to speed with CloudFront and S3 for VOD. I have used the CloudFormation template, uploaded an MP4, and obtained the key for the m3u8 file. I created a distribution in CloudFront and embedded it in my webpage. For the most part it works great, but there is a significantly long buffering event during the first few seconds. This problem does not exist when I play the video on my iOS device, and strangely, it does not happen when I play it in Akamai's HLS tester on my Windows 11 PC using Chrome. The problem seems to occur only when I play it from my website, using any browser, on my Windows 11 PC.

Steps I take to provoke the issue: open an Incognito tab in Chrome and navigate to my website; my player is set to auto play, so it auto plays. The video starts out a bit fuzzy, then stops for a second, restarts with great resolution, and stays that way until the end of the video. If I play it again there are no problems at all, but that is to be expected; I assume there is a local cache.

Steps I have tried to fix it / clues: I have tried different segment lengths by modifying the Lambda function created when the stack was formed by the template. The default was 5; at that setting, the fuzzy aspect lasted the longest but the buffering event seemed slightly shorter. At 1 and 2, the fuzziness is far shorter but the buffering event is notably longer. One thought: could this be related to the video player I am using? I wanted to use the AWS IVS player but could not get it working the first go around, so I tried amazon-ivs-videojs. That worked immediately, except for the buffering issue, and the buffering issue seems to go away when I test the distribution via the Akamai HLS tester.

As always, much appreciation for reading this question and any time spent pondering it.
0
answers
0
votes
4
views
Redbone
asked 2 days ago

Aurora MySQL crashing randomly

I am on my third Aurora cluster that is randomly failing, leading my application to die. AWS support team didn't answer my support case. Engine version: 8.0.mysql_aurora.3.01.0 ``` /etc/rds/oscar-start-cmd: line 39: 2687 Killed /rdsdbbin/oscar/bin/mysqld --core-file --innodb_shared_buffer_pool_uses_huge_pages='1' "$@" grover/runtime/overlay.cpp:2270: Assertion failed: err == 0 Stack trace: /rdsdbbin/oscar/bin/mysqld() [0x2be2f08] /rdsdbbin/oscar/bin/mysqld(_Z27log_grover_pid_from_page_nomm+0x1d) [0x2850bdd] <inline> (in buf_page_t::set_grover_vol_pid(unsigned long, unsigned long) at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/include/ut0lock_free_hash.h:638) /rdsdbbin/oscar/bin/mysqld() [0x2597395] (in buf_page_init at /local/p4clients/pkgbuild-FRTaI/workspace/src/OscarMysql80/storage/innobase/buf/buf0buf.cc:6645) /rdsdbbin/oscar/bin/mysqld(_Z22buf_page_init_for_readP7dberr_tmRK9page_id_tRK11page_size_tm+0x2e0) [0x25a3cf0] /rdsdbbin/oscar/bin/mysqld(_Z17buf_read_page_lowP7dberr_tbmmRK9page_id_tRK11page_size_tbbb+0x91) [0x25c6c91] /rdsdbbin/oscar/bin/mysqld(_Z13buf_read_pageRK9page_id_tRK11page_size_tb+0x3c) [0x25c76bc] /rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE9read_pageEv+0x27) [0x2597ce7] /rdsdbbin/oscar/bin/mysqld(_ZN16Buf_fetch_normal3getERP11buf_block_t+0xb2) [0x259ed82] /rdsdbbin/oscar/bin/mysqld(_ZN9Buf_fetchI16Buf_fetch_normalE11single_pageEv+0x4e) [0x25a654e] /rdsdbbin/oscar/bin/mysqld(_Z16buf_page_get_genRK9page_id_tRK11page_size_tmP11buf_block_t10Page_fetchPKcmP5mtr_tb+0x1d9) [0x25a75a9] /rdsdbbin/oscar/bin/mysqld() [0x2637bc1] /rdsdbbin/oscar/bin/mysqld(_Z28fseg_alloc_free_page_generalPhjhmP5mtr_tS1_+0x1d0) [0x2639160] /rdsdbbin/oscar/bin/mysqld(_Z14btr_page_allocP12dict_index_tjhmP5mtr_tS2_+0xd5) [0x256ecc5] /rdsdbbin/oscar/bin/mysqld(_ZN3lob14alloc_lob_pageEP12dict_index_tP5mtr_tjb+0x216) [0x28bb676] /rdsdbbin/oscar/bin/mysqld(_ZN3lob12first_page_t5allocEP5mtr_tb+0x24) [0x28ab0c4] /rdsdbbin/oscar/bin/mysqld(_ZN3lob6insertEPNS_13InsertContextEP5trx_tRNS_5ref_tEP15big_rec_field_tm+0x14f) [0x28b78df] /rdsdbbin/oscar/bin/mysqld(_ZN3lob31btr_store_big_rec_extern_fieldsEP5trx_tP10btr_pcur_tPK5upd_tPmPK9big_rec_tP5mtr_tNS_6opcodeE+0xb16) [0x26edbb6] /rdsdbbin/oscar/bin/mysqld() [0x277331d] /rdsdbbin/oscar/bin/mysqld(_Z29row_ins_clust_index_entry_lowjmP12dict_index_tmP8dtuple_tP10btr_pcur_tP9que_thr_tb+0x646) [0x2774906] /rdsdbbin/oscar/bin/mysqld(_Z25row_ins_clust_index_entryP12dict_index_tP8dtuple_tP10btr_pcur_tP9que_thr_tb+0xe8) [0x277b158] /rdsdbbin/oscar/bin/mysqld(_Z12row_ins_stepP9que_thr_t+0x274) [0x277b7d4] /rdsdbbin/oscar/bin/mysqld() [0x278ca73] /rdsdbbin/oscar/bin/mysqld(_ZN11ha_innobase9write_rowEPh+0x226) [0x268fac6] /rdsdbbin/oscar/bin/mysqld(_ZN7handler12ha_write_rowEPh+0x177) [0x14a4867] /rdsdbbin/oscar/bin/mysqld(_Z12write_recordP3THDP5TABLEP9COPY_INFOS4_+0x5d4) [0x172e3d4] /rdsdbbin/oscar/bin/mysqld(_ZN21Sql_cmd_insert_values13execute_innerEP3THD+0xbaf) [0x173018f] /rdsdbbin/oscar/bin/mysqld(_ZN11Sql_cmd_dml7executeEP3THD+0x6cc) [0x119905c] /rdsdbbin/oscar/bin/mysqld(_Z30mysql_execute_command_internalP3THDb+0x1143) [0x1139f33] /rdsdbbin/oscar/bin/mysqld(_Z21mysql_execute_commandP3THDb+0x17b) [0x113d31b] /rdsdbbin/oscar/bin/mysqld(_Z20dispatch_sql_commandP3THDP12Parser_state+0x351) [0x113df91] 21:03:27 UTC - mysqld got signal 6 ; Most likely, you have hit a bug, but this error can also be caused by malfunctioning hardware. Thread pointer: 0x14652cf4e000 Attempting backtrace. 
You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... stack_bottom = 1465869fba9f thread_stack 0x40000 /rdsdbbin/oscar/bin/mysqld(my_print_stacktrace(unsigned char const*, unsigned long)+0x2d) [0x246ac4d] /rdsdbbin/oscar/bin/mysqld(handle_fatal_signal+0x532) [0x1310292] /lib64/libpthread.so.0(+0x117df) [0x147cc9b707df] /lib64/libc.so.6(gsignal+0x110) [0x147cc8ef3c20] /lib64/libc.so.6(abort+0x147) [0x147cc8ef50c7] /rdsdbbin/oscar/bin/mysqld() [0xf963d7] /rdsdbbin/oscar/bin/mysqld() [0x2dba17a] /rdsdbbin/oscar/bin/mysqld() [0x2dba333] /rdsdbbin/oscar/bin/mysqld() [0x2be2f08] /rdsdbbin/oscar/bin/mysqld(log_grover_pid_from_page_no(unsigned long, unsigned long)+0x1d) [0x2850bdd] /rdsdbbin/oscar/bin/mysqld() [0x2597395] /rdsdbbin/oscar/bin/mysqld(buf_page_init_for_read(dberr_t*, unsigned long, page_id_t const&, page_size_t const&, unsigned long)+0x2e0) [0x25a3cf0] /rdsdbbin/oscar/bin/mysqld(buf_read_page_low(dberr_t*, bool, unsigned long, unsigned long, page_id_t const&, page_size_t const&, bool, bool, bool)+0x91) [0x25c6c91] /rdsdbbin/oscar/bin/mysqld(buf_read_page(page_id_t const&, page_size_t const&, bool)+0x3c) [0x25c76bc] /rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::read_page()+0x27) [0x2597ce7] /rdsdbbin/oscar/bin/mysqld(Buf_fetch_normal::get(buf_block_t*&)+0xb2) [0x259ed82] /rdsdbbin/oscar/bin/mysqld(Buf_fetch<Buf_fetch_normal>::single_page()+0x4e) [0x25a654e] /rdsdbbin/oscar/bin/mysqld(buf_page_get_gen(page_id_t const&, page_size_t const&, unsigned long, buf_block_t*, Page_fetch, char const*, unsigned long, mtr_t*, bool)+0x1d9) [0x25a75a9] /rdsdbbin/oscar/bin/mysqld() [0x2637bc1] /rdsdbbin/oscar/bin/mysqld(fseg_alloc_free_page_general(unsigned char*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0x1d0) [0x2639160] /rdsdbbin/oscar/bin/mysqld(btr_page_alloc(dict_index_t*, unsigned int, unsigned char, unsigned long, mtr_t*, mtr_t*)+0xd5) [0x256ecc5] /rdsdbbin/oscar/bin/mysqld(lob::alloc_lob_page(dict_index_t*, mtr_t*, unsigned int, bool)+0x216) [0x28bb676] /rdsdbbin/oscar/bin/mysqld(lob::first_page_t::alloc(mtr_t*, bool)+0x24) [0x28ab0c4] /rdsdbbin/oscar/bin/mysqld(lob::insert(lob::InsertContext*, trx_t*, lob::ref_t&, big_rec_field_t*, unsigned long)+0x14f) [0x28b78df] /rdsdbbin/oscar/bin/mysqld(lob::btr_store_big_rec_extern_fields(trx_t*, btr_pcur_t*, upd_t const*, unsigned long*, big_rec_t const*, mtr_t*, lob::opcode)+0xb16) [0x26edbb6] /rdsdbbin/oscar/bin/mysqld() [0x277331d] /rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry_low(unsigned int, unsigned long, dict_index_t*, unsigned long, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0x646) [0x2774906] /rdsdbbin/oscar/bin/mysqld(row_ins_clust_index_entry(dict_index_t*, dtuple_t*, btr_pcur_t*, que_thr_t*, bool)+0xe8) [0x277b158] /rdsdbbin/oscar/bin/mysqld(row_ins_step(que_thr_t*)+0x274) [0x277b7d4] /rdsdbbin/oscar/bin/mysqld() [0x278ca73] /rdsdbbin/oscar/bin/mysqld(ha_innobase::write_row(unsigned char*)+0x226) [0x268fac6] /rdsdbbin/oscar/bin/mysqld(handler::ha_write_row(unsigned char*)+0x177) [0x14a4867] /rdsdbbin/oscar/bin/mysqld(write_record(THD*, TABLE*, COPY_INFO*, COPY_INFO*)+0x5d4) [0x172e3d4] /rdsdbbin/oscar/bin/mysqld(Sql_cmd_insert_values::execute_inner(THD*)+0xbaf) [0x173018f] /rdsdbbin/oscar/bin/mysqld(Sql_cmd_dml::execute(THD*)+0x6cc) [0x119905c] /rdsdbbin/oscar/bin/mysqld(mysql_execute_command_internal(THD*, bool)+0x1143) [0x1139f33] /rdsdbbin/oscar/bin/mysqld(mysql_execute_command(THD*, bool)+0x17b) [0x113d31b] 
/rdsdbbin/oscar/bin/mysqld(dispatch_sql_command(THD*, Parser_state*)+0x351) [0x113df91] /rdsdbbin/oscar/bin/mysqld(dispatch_command(THD*, COM_DATA const*, enum_server_command)+0x1b39) [0x113ff99] /rdsdbbin/oscar/bin/mysqld(do_command(THD*)+0x1c6) [0x1140f46] /rdsdbbin/oscar/bin/mysqld(THD_task::process_connection()+0x134) [0x12fcfc4] /rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_loop()+0x180) [0x12fbc80] /rdsdbbin/oscar/bin/mysqld(Thread_pool::worker_launch(void*)+0x20) [0x12fbea0] /rdsdbbin/oscar/bin/mysqld() [0x296c531] /lib64/libpthread.so.0(+0x740a) [0x147cc9b6640a] /lib64/libc.so.6(clone+0x3e) [0x147cc8fad09e] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (1479bc268028): [omitted] Connection ID (thread ID): 45980 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. aurora backtrace compare flag : 1 Writing a core file [...] ```
0
answers
0
votes
5
views
tobias
asked 2 days ago

Unsupported Action in Policy for S3 Glacier/Veeam

Hello, I'm new to AWS S3 Glacier and I ran across an issue. I am working with Veeam to add S3 Glacier to my backup, and I have the bucket created. I need to add the following to my bucket policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:PutObject",
                "s3:GetObject",
                "s3:RestoreObject",
                "s3:ListBucket",
                "s3:AbortMultipartUpload",
                "s3:GetBucketVersioning",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:GetBucketObjectLockConfiguration",
                "ec2:DescribeInstances",
                "ec2:CreateKeyPair",
                "ec2:DescribeKeyPairs",
                "ec2:RunInstances",
                "ec2:DeleteKeyPair",
                "ec2:DescribeVpcAttribute",
                "ec2:CreateTags",
                "ec2:DescribeSubnets",
                "ec2:TerminateInstances",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeImages",
                "ec2:DescribeVpcs",
                "ec2:CreateVpc",
                "ec2:CreateSubnet",
                "ec2:DescribeAvailabilityZones",
                "ec2:CreateRoute",
                "ec2:CreateInternetGateway",
                "ec2:AttachInternetGateway",
                "ec2:ModifyVpcAttribute",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:DescribeRouteTables",
                "ec2:DescribeInstanceTypes"
            ],
            "Resource": "*"
        }
    ]
}
```

Once I put this in, the first error I get is "Missing Principal", so I added `"Principal": {},` under the Sid, but I have no idea what to put in the brackets. I changed it to "*" and that seemed to fix it; I'm not sure if that is the right thing to do. The next error I get is for all the EC2 actions and s3:ListAllMyBuckets, which give an error of "Unsupported Action in Policy". This is where I get lost and I'm not sure what else to do. Do I need to open my bucket to the public? Is this a permissions issue? Do I have to recreate the bucket and disable Object Lock? Please help.
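Context that may explain both errors: a bucket policy is a resource policy, which is why it demands a Principal and rejects non-S3 actions such as the `ec2:*` entries and `s3:ListAllMyBuckets`; those actions normally live in an identity policy attached to the IAM user or role Veeam authenticates with. A rough boto3 sketch of attaching the statement as an inline user policy instead; the user and policy names are placeholders, and this is only one way to structure it:

```
# Sketch: attach the Veeam permissions as an identity (IAM user) policy rather
# than a bucket policy, since bucket policies cannot contain EC2 actions.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject", "s3:PutObject", "s3:GetObject", "s3:RestoreObject",
                "s3:ListBucket", "s3:AbortMultipartUpload", "s3:GetBucketVersioning",
                "s3:ListAllMyBuckets", "s3:GetBucketLocation",
                "s3:GetBucketObjectLockConfiguration",
                # ...the ec2:* actions from the post would go here as well
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="veeam-backup",            # placeholder IAM user used by Veeam
    PolicyName="veeam-glacier-backup",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)
```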
2
answers
0
votes
5
views
amatuerAWSguy
asked 2 days ago

Is it possible to use a non-default bridge network when running CodeBuild locally?

When I run CodeBuild locally, it creates the default Docker network and volumes shown in the output below. Is there a way to use a different bridge network and different volumes instead of these defaults? I tried modifying the docker command in [codebuild_build.sh](https://github.com/aws/aws-codebuild-docker-images/blob/master/local_builds/codebuild_build.sh) to add a network (the build command output below shows `--network mylocaltestingnetwork`), but that didn't help.

The reason I am trying to use a different network and volumes is that I am using [localstack](https://localstack.cloud/) along with it, and the bridge network and volumes need to be accessible from the CodeBuild local container. If I configure my localstack to use the `agent-resources_default` network created by local CodeBuild, then CodeBuild is able to access localstack. But I would like to keep the dependency external to both CodeBuild and localstack by using a separate bridge network.

```
$ ./codebuild_build.sh -i public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:3.0 -a codebuild-output/ -b buildspec-local.yml -c -p localstack
Build Command:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock -e "IMAGE_NAME=public.ecr.aws/codebuild/amazonlinux2-x86_64-standard:3.0" -e "ARTIFACTS=<repo path>/codebuild-output/" --network mylocaltestingnetwork -e "SOURCE=<repo path>" -e "BUILDSPEC=<repo path>/buildspec-local.yml" -e "AWS_CONFIGURATION=<homedir>/.aws" -e "AWS_PROFILE=localstack" -e "INITIATOR=<user>" public.ecr.aws/codebuild/local-builds:latest
Removing network agent-resources_default
Removing volume agent-resources_source_volume
Removing volume agent-resources_user_volume
Creating network "agent-resources_default" with the default driver
Creating volume "agent-resources_source_volume" with local driver
Creating volume "agent-resources_user_volume" with local driver
Creating agent-resources_agent_1 ... done
Creating agent-resources_build_1 ... done
Attaching to agent-resources_agent_1, agent-resources_build_1
agent_1 | [Container] 2022/01/14 15:57:15 Waiting for agent ping
agent_1 | [Container] 2022/01/14 15:57:17 Waiting for DOWNLOAD_SOURCE
agent_1 | [Container] 2022/01/14 15:57:23 Phase is DOWNLOAD_SOURCE
```
0
answers
0
votes
1
views
bbideep
asked 2 days ago

Lambda Execution Function Issue For RDS Reboot

Greetings, I created a simple function taking as reference the basic Lambda in Python to start/stop RDS from here: [https://aws.amazon.com/es/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/]() But I changed it for reboot purposes, so my Python code is the following: ``` # Lambda for RDS reboot given a REGION, KEY and VALUE import boto3 import os import sys import time from datetime import datetime, timezone from time import gmtime, strftime # REGION: the rds region # KEY - VALUE: the KEY and VALUE from RDS tag def reboot_rds(): region = os.environ["REGION"] key = os.environ["KEY"] value = os.environ["VALUE"] client = boto3.client("rds", region_name=region) response = client.describe_db_instances() v_readReplica = [] for i in response["DBInstances"]: readReplica = i["ReadReplicaDBInstanceIdentifiers"] v_readReplica.extend(readReplica) for i in response["DBInstances"]: # Check if the RDS is Aurora if i["Engine"] not in ["aurora-mysql", "aurora-postgresql"]: # Check if RDS is a replica instance if ( i["DBInstanceIdentifier"] not in v_readReplica and len(i["ReadReplicaDBInstanceIdentifiers"]) == 0 ): arn = i["DBInstanceArn"] resp2 = client.list_tags_for_resource(ResourceName=arn) # Check tag if 0 == len(resp2["TagList"]): print("Instance {0} tag value is not correct".format(i["DBInstanceIdentifier"])) else: for tag in resp2["TagList"]: # if tag values match if tag["Key"] == key and tag["Value"] == value: if i["DBInstanceStatus"] == "available": client.reboot_db_instance( DBInstanceIdentifier=i["DBInstanceIdentifier"], ForceFailover=False, ) print("Rebooting RDS {0}".format(i["DBInstanceIdentifier"])) elif i["DBInstanceStatus"] == "rebooting": print( "Instance RDS {0} is already rebooting".format( i["DBInstanceIdentifier"] ) ) elif i["DBInstanceStatus"] == "creating": print( "Instance RDS {0} is on creation, try later".format( i["DBInstanceIdentifier"] ) ) elif i["DBInstanceStatus"] == "modifying": print( "Instance RDS {0} {0} is modifying, try later".format( i["DBInstanceIdentifier"] ) ) elif i["DBInstanceStatus"] == "stopped": print( "Cannot reboot RDS {0} it is already stopped".format( i["DBInstanceIdentifier"] ) ) elif i["DBInstanceStatus"] == "starting": print( "Instance RDS {0} is starting, try later".format( i["DBInstanceIdentifier"] ) ) elif i["DBInstanceStatus"] == "stopping": print( "Instance RDS {0} is stopping, try later.".format( i["DBInstanceIdentifier"] ) ) elif tag["Key"] != key and tag["Value"] != value: print( "Tag values {0} doesn't match".format(i["DBInstanceIdentifier"]) ) elif len(tag["Key"]) == 0 or len(tag["Value"]) == 0: print("Error {0}".format(i["DBInstanceIdentifier"])) else: print( "Instance RDS {0} is on a different state, check the RDS monitor for more info".format( i["DBInstanceIdentifier"] ) ) def lambda_handler(event, context): reboot_rds() ``` My environment variables: | Key| Value | | --- | --- | | KEY | tmptest | | REGION | us-east-1e | | VALUE| reboot| And finally my event named 'Test' `{ "key1": "tmptest", "key2": "us-east-1e", "key3": "reboot" }` I checked the indentation of my code before execute it and its fine, but in execution of my test event I got the following output: `{ "errorMessage": "2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds" }` ``` START RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Version: $LATEST END RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e REPORT RequestId: b8d0dc59-714d-4543-8651-b5a2532dfe8e Duration: 1000.76 ms Billed Duration: 1000 ms Memory Size: 128 MB 
Max Memory Used: 65 MB Init Duration: 243.69 ms 2022-01-14T14:50:22.245Z b8d0dc59-714d-4543-8651-b5a2532dfe8e Task timed out after 1.00 seconds ``` Also my test RDS has the correct tag values in order to get the reboot action but nothing, until now I cannot reboot my instance with my Lambda function. Any clue what's wrong with my code? Maybe some additional configuration issue or something in my code is not correct, I don't know. I'd appreciate if someone can give a hand with this. **UPDATE 2022/01/15** As suggestion of **Brettski@AWS** I increased the time from 1 second to 10 then I got the following error message: ``` { "errorMessage": "Could not connect to the endpoint URL: \"https://rds.us-east-1e.amazonaws.com/\"", "errorType": "EndpointConnectionError", "requestId": "b2bb3840-42a2-4220-84b4-642d17d7a9e6", "stackTrace": [ " File \"/var/task/lambda_function.py\", line 103, in lambda_handler\n reiniciar_rds()\n", " File \"/var/task/lambda_function.py\", line 16, in reiniciar_rds\n response = client.describe_db_instances()\n", " File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n return self._make_api_call(operation_name, kwargs)\n", " File \"/var/runtime/botocore/client.py\", line 691, in _make_api_call\n http, parsed_response = self._make_request(\n", " File \"/var/runtime/botocore/client.py\", line 711, in _make_request\n return self._endpoint.make_request(operation_model, request_dict)\n", " File \"/var/runtime/botocore/endpoint.py\", line 102, in make_request\n return self._send_request(request_dict, operation_model)\n", " File \"/var/runtime/botocore/endpoint.py\", line 136, in _send_request\n while self._needs_retry(attempts, operation_model, request_dict,\n", " File \"/var/runtime/botocore/endpoint.py\", line 253, in _needs_retry\n responses = self._event_emitter.emit(\n", " File \"/var/runtime/botocore/hooks.py\", line 357, in emit\n return self._emitter.emit(aliased_event_name, **kwargs)\n", " File \"/var/runtime/botocore/hooks.py\", line 228, in emit\n return self._emit(event_name, kwargs)\n", " File \"/var/runtime/botocore/hooks.py\", line 211, in _emit\n response = handler(**kwargs)\n", " File \"/var/runtime/botocore/retryhandler.py\", line 183, in __call__\n if self._checker(attempts, response, caught_exception):\n", " File \"/var/runtime/botocore/retryhandler.py\", line 250, in __call__\n should_retry = self._should_retry(attempt_number, response,\n", " File \"/var/runtime/botocore/retryhandler.py\", line 277, in _should_retry\n return self._checker(attempt_number, response, caught_exception)\n", " File \"/var/runtime/botocore/retryhandler.py\", line 316, in __call__\n checker_response = checker(attempt_number, response,\n", " File \"/var/runtime/botocore/retryhandler.py\", line 222, in __call__\n return self._check_caught_exception(\n", " File \"/var/runtime/botocore/retryhandler.py\", line 359, in _check_caught_exception\n raise caught_exception\n", " File \"/var/runtime/botocore/endpoint.py\", line 200, in _do_get_response\n http_response = self._send(request)\n", " File \"/var/runtime/botocore/endpoint.py\", line 269, in _send\n return self.http_session.send(request)\n", " File \"/var/runtime/botocore/httpsession.py\", line 373, in send\n raise EndpointConnectionError(endpoint_url=request.url, error=e)\n" ] } ``` It's strange because my VPC configuration is fine, it's the same VPC of my RDS, its zone and the same security group. What else have I to consider in order to make my code work properly?
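One detail that may matter here: the endpoint in the second error, rds.us-east-1e.amazonaws.com, is built from the REGION environment variable, and `us-east-1e` is an Availability Zone name rather than a Region, so no such endpoint exists. A minimal sketch of constructing the client with a Region name, assuming the instances are in us-east-1; note that a Lambda attached to private subnets also needs a NAT gateway or an RDS VPC interface endpoint to reach the RDS API:

```
# Minimal check: boto3 builds service endpoints from a Region ("us-east-1"),
# not an Availability Zone ("us-east-1e"); an AZ yields an unresolvable URL.
import boto3

client = boto3.client("rds", region_name="us-east-1")  # Region, not AZ
for db in client.describe_db_instances()["DBInstances"]:
    print(db["DBInstanceIdentifier"], db["DBInstanceStatus"])
```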
2
answers
0
votes
5
views
TEENEESE
asked 2 days ago

AppSync - Cognito: user/pwd not working in jest test - rejectNoUserPool thrown

Hello, I have an AppSync app configured with Cognito authentication. All is working fine for the Amplify frontend. I would like to add some Jest integration tests, and I was hoping to also use user/password authentication there (I found examples using API_KEY, but that is not really something I want to add). I have tried a lot of things and config options, but all of them result in:

```
console.error
[ERROR] 18:24.753 AuthError -
Error: Amplify has not been configured correctly.
The configuration object is missing required auth properties.
This error is typically caused by one of the following scenarios:
1. Did you run `amplify push` after adding auth via `amplify add auth`? See https://aws-amplify.github.io/docs/js/authentication#amplify-project-setup for more information
2. This could also be caused by multiple conflicting versions of amplify packages, see (https://docs.amplify.aws/lib/troubleshooting/upgrading/q/platform/js) for help upgrading Amplify packages.
    at ConsoleLogger.Object.<anonymous>.ConsoleLogger._log (node_modules/@aws-amplify/core/src/Logger/ConsoleLogger.ts:115:4)
    at ConsoleLogger.Object.<anonymous>.ConsoleLogger.error (node_modules/@aws-amplify/core/src/Logger/ConsoleLogger.ts:174:12)
    at NoUserPoolError.AuthError [as constructor] (node_modules/@aws-amplify/auth/src/Errors.ts:34:10)
    at new NoUserPoolError (node_modules/@aws-amplify/auth/src/Errors.ts:40:3)
    at AuthClass.Object.<anonymous>.AuthClass.rejectNoUserPool (node_modules/@aws-amplify/auth/src/Auth.ts:2248:25)
    at AuthClass.Object.<anonymous>.AuthClass.signIn (node_modules/@aws-amplify/auth/src/Auth.ts:480:16)
    at signIn (test/AnnotationsSpec.ts:44:17)
```

Before sharing any of the config details (which I am sure would be needed for detailed help): is it at all possible to use AppSync with USER_PWD authentication from a Jest test? Are there any examples out there? Thanks!! Peter

PS: using a bunch of curl commands it does work to send a query to the AppSync app and get the results back.
0
answers
0
votes
1
views
AWS-User-0477472
asked 2 days ago

Mqtt connection between the user's iot devices and the user's phone

I want the communication between the user's IoT devices and the user's phone to be done with publish and subscribe methods over MQTT, and I don't want to use the Shadow service. With the JITR method, devices can easily authenticate with AWS IoT by using a device certificate signed by my unique CA. Each device has a unique certificate and a unique policy associated with that certificate. The following policy has only been added to one device's certificate.

```
Device's client id is = edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923
```

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            "Resource": "arn:aws:iot:<REGION>:<USER-ID>:client/edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923"
        },
        ..........
        .........
    ]
}
```

The user who buys the IoT device performs the following steps during registration with the device:
1. Sign up with the AWS Cognito service.
2. The policy name and client ID are sent from the IoT device to the phone via Bluetooth.
3. The app attaches the policy to the Cognito identity using AttachPolicy. [https://imgur.com/a/hfWqjkD]()

I found out that AWS IoT only accepts a single connection per client ID, which is why the above didn't work.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iot:Connect",
            "Resource": [
                "arn:aws:iot:<REGION>:<USER-ID>:client/edb656635694fb25f2e6d50f361c37d64aa31e72118224df19f151ee70cc2923",
                "arn:aws:iot:<REGION>:<USER-ID>:client/mobileUser1"
            ]
        },
```

When I changed the policy as above, the system worked. With this method I was able to restrict the resources of both the IoT devices and the phone users. But I did the above process manually (adding a new line to the policy), so what should I do for mass production? At the same time, another IoT device will have its own policy; how can the user communicate with multiple IoT devices? Also, more than one client can be paired to one IoT device. I think I'm on the wrong path, so please guide me.
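For the mass-production concern, the per-identity AttachPolicy step (point 3 above) can at least be automated from a small backend rather than done by hand in the console. A rough boto3 sketch; the policy name and Cognito identity ID are placeholders, and generating one policy per device is an assumption rather than something the post specifies:

```
# Rough sketch: automate step 3 (attaching an IoT policy to the buyer's Cognito
# identity) from a backend API instead of doing it manually.
import boto3

iot = boto3.client("iot")

def grant_phone_access(cognito_identity_id: str, device_policy_name: str) -> None:
    # The Cognito *identity ID* (e.g. "eu-west-1:1234abcd-...") is the target
    # for policies that should apply to the authenticated phone user.
    iot.attach_policy(policyName=device_policy_name, target=cognito_identity_id)

# grant_phone_access("eu-west-1:1234abcd-....", "device-edb65663-policy")
```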
0
answers
0
votes
1
views
AWS-User-8111104
asked 2 days ago

aws lambda - ES6 module error : module is not defined in ES module scope

Based on these resources:

https://aws.amazon.com/about-aws/whats-new/2022/01/aws-lambda-es-modules-top-level-await-node-js-14/
https://aws.amazon.com/blogs/compute/using-node-js-es-modules-and-top-level-await-in-aws-lambda/

it is clear that the AWS Lambda Node.js 14.x runtime now supports ES6 modules. However, when I try to run a Node.js app with ES6 modules, I get this error:

```
undefined ERROR Uncaught Exception
{
    "errorType": "ReferenceError",
    "errorMessage": "module is not defined in ES module scope\nThis file is being treated as an ES module because it has a '.js' file extension and '/var/task/package.json' contains \"type\": \"module\". To treat it as a CommonJS script, rename it to use the '.cjs' file extension.",
    "stack": [
        "ReferenceError: module is not defined in ES module scope",
        "This file is being treated as an ES module because it has a '.js' file extension and '/var/task/package.json' contains \"type\": \"module\". To treat it as a CommonJS script, rename it to use the '.cjs' file extension.",
        "    at file:///var/task/index.js:20:1",
        "    at ModuleJob.run (internal/modules/esm/module_job.js:183:25)",
        "    at process.runNextTicks [as _tickCallback] (internal/process/task_queues.js:60:5)",
        "    at /var/runtime/deasync.js:23:15",
        "    at _tryAwaitImport (/var/runtime/UserFunction.js:74:12)",
        "    at _tryRequire (/var/runtime/UserFunction.js:162:21)",
        "    at _loadUserApp (/var/runtime/UserFunction.js:197:12)",
        "    at Object.module.exports.load (/var/runtime/UserFunction.js:242:17)",
        "    at Object.<anonymous> (/var/runtime/index.js:43:30)",
        "    at Module._compile (internal/modules/cjs/loader.js:1085:14)"
    ]
}
```

I have already added `"type": "module"` in package.json.

package.json:

```
{
  "name": "autoprocess",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "type": "module",
  "scripts": { },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@aws-sdk/client-sqs": "^3.41.0",
    "aws-sdk": "^2.1030.0",
    "check-if-word": "^1.2.1",
    "express": "^4.17.1",
    "franc": "^6.0.0",
    "is-html": "^3.0.0",
    "nodemon": "^2.0.15"
  }
}
```

index.js:

```
'use strict';
import StringMessage from './StringMessage.js';

module.exports.handler = async (event) => {
  var data = JSON.parse(event.body);
  // other code goes here
  let response = {
    statusCode: 200,
    headers: { },
    body: ""
  };
  console.log("response: " + JSON.stringify(response))
  return response;
};
```

I have also tried replacing `module.exports.handler` with `exports.handler`. That does not work either; the error message then shows "exports is not defined in ES module scope". What am I doing wrong? Additional info: I am uploading the function code via a zip file.
1
answers
0
votes
4
views
az-gi
asked 3 days ago