Top contributors

| Rank | Name | Total points |
| ---- | ---- | ------------ |
| 1    |      | 1,691        |
| 2    |      | 1,567        |
| 3    |      | 1,327        |
| 4    |      | 1,170        |
| 5    |      | 1,140        |

Recent questions

  • Hi all, we just updated our ElastiCache Redis cluster from v7.0.5 to v7.0.7 with a dualstack connection, and it went down and keeps resetting all incoming connections. After investigating, we found what appears to be a bug reproducible on any AWS account. **Expected output**: everything should work fine after the Redis upgrade. **Actual output**: the Redis cluster keeps resetting all incoming connections. **Steps to reproduce**: 1. Create a new Redis cluster, choosing "dualstack" in the connection section instead of the default IPv4 option, and choosing Redis v7. 2. Check whether AWS picked v7.0.7; we can only reproduce this on v7.0.7, not on v7.0.5, v6.2, or v6. 3. Try to connect to this Redis cluster; you will find that all connections are refused. ![Using nping to test: all connections refused](/media/postImages/original/IMaVSxiVvJT1GhzmnQ1kAwFw) Thanks to everyone in the AWS User Group Taiwan who helped us narrow down the issue. Original post on Facebook in Traditional Chinese: https://www.facebook.com/groups/awsugtw/posts/5984294404980354/
    0
    answers
    0
    votes
    1
    views
    asked 2 minutes ago
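A minimal sketch of the kind of connectivity probe the question describes (an nping equivalent in Python, assuming in-transit encryption is off; the endpoint is a hypothetical placeholder):

```python
import socket

# Hypothetical ElastiCache endpoint; substitute your cluster's
# configuration endpoint and port.
HOST = "my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com"
PORT = 6379

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(b"PING\r\n")  # plain RESP ping
        print(sock.recv(64))       # a healthy node replies +PONG
except ConnectionResetError:
    print("connection reset by peer (the symptom described above)")
except OSError as exc:
    print(f"connection failed: {exc}")
```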
  • I am having issues adding storage to instances I created and have been using for a while. My storage is low, so I decided to add more. I added 30 GB of storage to each instance (General Purpose SSD (gp2), changed from 30 GiB to 60 GiB). When I sign in to the server, the new space doesn't show up in storage, but Disk Management shows 30 GB unallocated. What do I need to do to get the 30 GB I added to the instance onto the server itself?
    0
    answers
    0
    votes
    1
    views
    asked 3 minutes ago
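Before the new space can be claimed inside Windows, the EBS volume modification has to finish. A small sketch for checking that from outside the instance, assuming boto3 and a hypothetical volume ID; once the state reads `optimizing` or `completed`, the unallocated 30 GB can be merged into the existing partition with "Extend Volume" in Disk Management:

```python
import boto3

ec2 = boto3.client("ec2")
# vol-0123456789abcdef0 is a hypothetical ID; use the volume attached to your instance.
resp = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
for mod in resp["VolumesModifications"]:
    # ModificationState moves through modifying -> optimizing -> completed.
    print(mod["VolumeId"], mod["ModificationState"], mod.get("Progress"))
```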
  • Hi, I have followed the [documentation](https://wellarchitectedlabs.com/cost/200_labs/200_cloud_intelligence/cost-usage-report-dashboards/dashboards/deploy_dashboards/) linked here, but I'm not getting all accounts' data in the dashboard; it shows data only for the destination account. Since I can't use or access the management account, I generate each individual account's data (creating a CUR and storing it in S3) and store it in one destination account, like this: s3://cur-buck****/account/SourceAccount1/cur/cur/year=2023/month=3/0001.snappy.parquet s3://cur-buck****/account/SourceAccount2/cur/cur/year=2023/month=3/0001.snappy.parquet s3://cur-buck****/account/SourceAccount3/cur/cur/year=2023/month=3/0001.snappy.parquet s3://cur-buck****/account/DestinationAccount1/cur/cur/year=2023/month=3/0001.snappy.parquet I'm crawling the location s3://cur-buck****/account/ with the S3 crawler. My dashboards deployed successfully and work well, but I'm only getting data for the destination account. The account_map view also returns only one entry, for the destination account. I can't get the other accounts' data even with an Athena query. Please help; I'm still not able to get the CURBucketPath right when there are multiple accounts. Maybe I added the wrong prefix while creating the stack via CloudFormation. ![Enter image description here](/media/postImages/original/IMPH9-vgTlRJe4fwfiKS4v3w) Thanks!
    0
    answers
    0
    votes
    2
    views
    asked 9 minutes ago
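One quick check is to list the account prefixes under the CUR bucket and confirm the crawler path actually covers every source account. A sketch assuming boto3; the bucket name is a placeholder because the real one is masked in the question:

```python
import boto3

s3 = boto3.client("s3")
# Placeholder bucket name; the question's bucket is masked as cur-buck****.
resp = s3.list_objects_v2(Bucket="example-cur-bucket", Prefix="account/", Delimiter="/")
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])  # expect one prefix per source/destination account
```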
  • I have a newly set up WorkMail organization and account. However, it is unable to send emails and always gets the error below. I have searched a lot of documentation and followed the steps, but no luck. Can anybody help? Thanks. Sending Email failed. Could not send email. SubmitId: xxxxxxxxx. Your administrator needs to give permissions to WorkMail to perform e-mail sending on your behalf. To give WorkMail sending permissions, follow the instructions here: https://docs.aws.amazon.com/workmail/latest/adminguide/editing_domains.html
    0
    answers
    0
    votes
    1
    views
    asked 10 minutes ago
  • ## Issue
    We have an Aurora PostgreSQL version 14.5 RDS cluster. We have a secret in Secrets Manager with credentials for a user whose password we want to rotate. When rotating the secret, the Lambda gets stuck at the `setSecret` step with the error `Unable to log into database with previous, current, or pending secret`. We have determined that this relates to the `password_encryption` option in the cluster parameter group. If we set it to `md5` (whereas the default is, I believe, `scram-sha-256`), the rotation works again _after_ we update it manually. We can then rotate it as many times as we want.
    ### Question
    How can we get the secret rotation to work while using the default cluster parameter group for an Aurora PostgreSQL cluster?
    ### To reproduce
    1. Have a secret [formatted as expected](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_secret_json_structure.html#reference_secret_json_structure_rds-postgres).
    2. Have a Lambda running the [Python code provided by AWS](https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRDSPostgreSQLRotationSingleUser/lambda_function.py).
    3. Have a version 14.5 Aurora PostgreSQL cluster using the `default.aurora-postgresql14` cluster parameter group.
    4. Click the "Rotate secret immediately" button in the console.
    5. In the Lambda logs, see the error `setSecret: Unable to log into database with previous, current, or pending secret of secret arn arn:aws:secretsmanager:....`
    ### How to Recover
    1. Create a new cluster parameter group that is a copy of `default.aurora-postgresql14`.
    2. Change `password_encryption` to `md5`.
    3. Apply this new parameter group to the cluster.
    4. Cancel the secret rotation: `aws secretsmanager cancel-rotate-secret --secret-id ....`
    5. Manually change the user's password to a new one.
    6. Update the secret with the new password.
    7. Click the "Rotate secret immediately" button in the console.
    0
    answers
    0
    votes
    1
    views
    asked 27 minutes ago
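One way to separate a credentials problem from a driver problem is to attempt the same login outside the Lambda with a driver known to support scram-sha-256, such as the pure-Python pg8000 (an assumption for this sketch, not the driver the AWS rotation Lambda ships; host and credentials below are placeholders):

```python
import pg8000.dbapi

# Placeholder connection details; substitute the values stored in the secret.
conn = pg8000.dbapi.connect(
    user="myuser",
    password="pending-password",
    host="mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
    port=5432,
    database="postgres",
)
cur = conn.cursor()
cur.execute("SELECT current_user")
print(cur.fetchone())  # success here while setSecret fails points at the Lambda's driver
conn.close()
```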
  • Hi team, I have a requirement to support webhook (HTTP notification) consumption, and the applications/microservices that consume these requests will be hosted in clusters across multiple regions. When HTTP notification requests come in, we want all of the microservices running in the different regions to receive requests equally, so that the load is properly balanced. Can we achieve this kind of functionality using the AWS Global Accelerator service? If so, how? And is any other service needed alongside Global Accelerator? Please let us know. Thank you so much in advance. Also, we are looking to expose one URL that boils down to a single FQDN/static IP address listening on port 80.
    0
    answers
    0
    votes
    1
    views
    asked 32 minutes ago
  • My instance was giving a 504 error; to my knowledge, nothing about the site had been updated. My IP is 52.35.76.129. I spun up a new instance from a previous snapshot at 35.166.203.154 but am getting a network connection error. I also attached a Classic Load Balancer and am now getting a failed health check. What are the next steps?
    0
    answers
    0
    votes
    2
    views
    asked 34 minutes ago
  • I created EKS resources via Terraform. I now want to get temporary credentials for a new role (new_dev, which has the eks:DescribeCluster permission). The command below throws the following error; user xxxxx has the AdministratorAccess policy. Should I add an assume-role policy for the user xxxxx? aws sts assume-role --role-arn arn:aws:iam::---:role/new_dev --role-session-name dev An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:iam::---:user/xxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::---:role/new_dev
    1
    answers
    0
    votes
    6
    views
    asked 39 minutes ago
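This error usually points at the role's trust policy: new_dev must name user xxxxx (or a broader principal) as allowed to call sts:AssumeRole. A hedged boto3 sketch of updating the trust policy; the account ID is a placeholder since the question masks it:

```python
import boto3
import json

# 111122223333 is a placeholder account ID; the question masks the real one.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/xxxxx"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam = boto3.client("iam")
# Replaces the role's trust (assume-role) policy document.
iam.update_assume_role_policy(
    RoleName="new_dev",
    PolicyDocument=json.dumps(trust_policy),
)
```

Assuming a role needs two allows: one in the caller's identity policies (AdministratorAccess already covers that) and one in the role's trust policy, which is the half that appears to be missing here.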
  • I ran sudo apt-get update && sudo apt-get upgrade with no issues. The error when opening Cloud9 is: "Your AWS Cloud9 environment failed to install recommended updates: tmux, ncurses, and libevent. This may be caused by disk space, network, or permissions issues. If the updates continue to fail, we recommend that you create a new environment to stay up to date with the latest AWS Cloud9 features and security patches, or update them manually. You can report any feedback by choosing Support > Submit Feedback." I am also unable to submit feedback to report the issue.
    0
    answers
    0
    votes
    1
    views
    asked 40 minutes ago
  • Hi AWS, as you know, there are several caching strategies: cache-aside, read-through, write-through, and write-back. I want to know which of these are supported by, and beneficial with, AWS ElastiCache for Redis versus AWS ElastiCache for Memcached. Please advise.
    0
    answers
    0
    votes
    2
    views
    asked an hour ago
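For context, cache-aside (lazy loading) and write-through are patterns you implement in application code and work with either engine; read-through and write-back require the cache layer itself to talk to the database, which ElastiCache does not do on its own. A minimal cache-aside sketch, assuming the redis-py client and a hypothetical endpoint (the same pattern works with a Memcached client):

```python
import redis

# Hypothetical ElastiCache Redis endpoint.
r = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def get_user(user_id, db_lookup):
    """Cache-aside: check the cache first, fall back to the database on a miss."""
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return cached                 # cache hit
    value = db_lookup(user_id)        # cache miss: read the source of truth
    r.set(key, value, ex=300)         # populate the cache with a 5-minute TTL
    return value
```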
  • Is there a way to query Timestream from an AppSync API using an HTTP resolver? Are there any examples of setting that up using the CDK?
    0
    answers
    0
    votes
    3
    views
    asked an hour ago
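A minimal CDK sketch of the HTTP data source half of this, in Python, assuming a recent aws-cdk-lib where aws_appsync is stable; the region, endpoint, and schema file are placeholders, and Timestream's cell-specific query endpoints may additionally require endpoint discovery:

```python
from aws_cdk import App, Stack
from aws_cdk import aws_appsync as appsync

class TimestreamApiStack(Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        api = appsync.GraphqlApi(
            self, "Api",
            name="timestream-api",
            definition=appsync.Definition.from_file("schema.graphql"),
        )
        # HTTP data source that SigV4-signs requests to the Timestream Query API.
        api.add_http_data_source(
            "TimestreamDS",
            "https://query.timestream.us-east-1.amazonaws.com",  # hypothetical region
            authorization_config=appsync.AwsIamConfig(
                signing_region="us-east-1",
                signing_service_name="timestream",
            ),
        )

app = App()
TimestreamApiStack(app, "TimestreamApiStack")
app.synth()
```

You would still need to grant the data source's role the relevant Timestream permissions (e.g. timestream:Select, timestream:DescribeEndpoints) and attach a resolver whose request mapping template POSTs the query.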
  • I am seeking guidance on how to resolve a problem we are currently encountering with ElastiCache Memcached. Our setup consists of a 2-node cluster, and our application is based on Lambda. To ensure read availability, we write requests to both nodes. However, we are facing random connection timeouts that last for 10 to 30 seconds, and we see "get_misses". Upon checking the metrics, the CPU usage is low (around 5%) and there is sufficient available memory. Can someone offer suggestions on how to troubleshoot this issue?
    0
    answers
    0
    votes
    4
    views
    Ashish
    asked an hour ago
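One client-side mitigation worth trying while troubleshooting: short connect/read timeouts with failover to the other node, so a Lambda invocation never hangs for 10 to 30 seconds on one unresponsive node. A sketch assuming the pymemcache client; hostnames are hypothetical:

```python
from pymemcache.client.base import Client

NODES = [
    ("node1.xxxxxx.use1.cache.amazonaws.com", 11211),  # hypothetical endpoints
    ("node2.xxxxxx.use1.cache.amazonaws.com", 11211),
]

def get_with_failover(key):
    """Try each node in turn with 1-second timeouts instead of hanging."""
    for address in NODES:
        client = Client(address, connect_timeout=1, timeout=1)
        try:
            value = client.get(key)
            if value is not None:
                return value
        except Exception:
            continue  # node unreachable or timed out; try the next one
        finally:
            client.close()
    return None
```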
  • I've tried to send feedback on AWS documentation (e.g. https://docs.aws.amazon.com/cloud9/latest/user-guide/setup.html) where the information is incorrect, but when submitting feedback, after entering the security captcha, I end up with an HTTP Status 400 - Bad Request. The feedback button seems to work fine in an incognito window but not in my normal browser profile. Any pointers as to which cookie is causing things to break?
    0
    answers
    0
    votes
    3
    views
    KrisH
    asked an hour ago
  • I want to create a custom IAM policy with custom IAM actions, something like the following: `{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "myCustomService:MyCustomAction", "myCustomService1:MyCustomAction1" ], "Resource": "*" } ] }` I need this to control access by clients/users/client applications to my application running in an EKS cluster. Thanks in advance.
    0
    answers
    0
    votes
    5
    views
    asked an hour ago
  • Hello everybody, I subscribed to Docker Engine - Enterprise for Windows Server 2019 and followed this video: https://www.youtube.com/watch?v=7eObt3MSzWw&ab_channel=CloudInfrastructureServices, but I encountered the error below (Install-WindowsFeature reports Success: False, Restart Needed: Maybe, Exit Code: Failed): Install-WindowsFeature : A prerequisite check for the Hyper-V feature failed. 1. Hyper-V cannot be installed: The processor does not have required virtualization capabilities. Please advise.
    1
    answers
    0
    votes
    5
    views
    asked 2 hours ago
  • Hi all, I've recently started trying out AWS Transfer Family with AS2. According to the documentation, when sending AS2 messages or asynchronous MDNs to a trading partner's HTTPS endpoint, I must use a valid SSL certificate signed by a certificate authority (CA) that's trusted by AWS Transfer Family. Self-signed certificates are not supported. The list of trusted CAs can be found at https://www.amazontrust.com/repository/. I am not sure which certificate to get and how to obtain it. Can someone guide me through the process of choosing the right SSL certificate and obtaining it from a trusted CA for AWS Transfer Family with AS2 HTTPS endpoints? Thank you in advance!
    0
    answers
    0
    votes
    5
    views
    Max_H
    asked 2 hours ago
  • Hello! I'm trying to deploy a component that reads battery values from a UPS board attached to a Raspberry Pi 3 Model B+ using the I2C protocol. For this purpose I'm using the [smbus](https://pypi.org/project/smbus/) library with Python and reading address `0x36`. The code runs with no problem, but when I try to deploy it as a component using Greengrass, it doesn't show up when I list all the components. The recipe and artifact are shown below. Recipe:
```
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.battery",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "My first AWS IoT Greengrass component.",
  "ComponentPublisher": "Amazon",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "Message": "world"
    }
  },
  "Manifests": [
    {
      "Platform": {
        "os": "linux"
      },
      "Lifecycle": {
        "RequiresPrivilege": true,
        "Install": "python3 -m pip install --user smbus",
        "Run": "python3 -u {artifacts:path}/battery.py \"{configuration:/Message}\""
      }
    }
  ]
}
```
Artifact:
```
#!/usr/bin/env python
import struct
import smbus
import time

def readVoltage(bus):
    """Return as a float the voltage from the Raspi UPS Hat via the provided SMBus object."""
    address = 0x36
    read = bus.read_word_data(address, 2)
    swapped = struct.unpack("<H", struct.pack(">H", read))[0]
    voltage = swapped * 1.25 / 1000 / 16
    return voltage

def readCapacity(bus):
    """Return as a float the remaining capacity of the battery connected to the Raspi UPS Hat via the provided SMBus object."""
    address = 0x36
    read = bus.read_word_data(address, 4)
    swapped = struct.unpack("<H", struct.pack(">H", read))[0]
    capacity = swapped / 256
    return capacity

bus = smbus.SMBus(1)  # 0 = /dev/i2c-0 (port I2C0), 1 = /dev/i2c-1 (port I2C1)

while True:
    print("++++++++++++++++++++")
    print("Voltage:%5.2fV" % readVoltage(bus))
    print("Battery:%5i%%" % readCapacity(bus))
    if readCapacity(bus) == 100:
        print("Battery FULL")
    if readCapacity(bus) < 20:
        print("Battery LOW")
    print("++++++++++++++++++++")
    time.sleep(2)
```
The command to deploy the component is the following:
```
sudo /greengrass/v2/bin/greengrass-cli deployment create --recipeDir ~/greengrassv2/recipes --artifactDir ~/greengrassv2/artifacts --merge "com.example.battery=1.0.0"
```
And the error in `greengrass.log` is the following:
```
2023-03-28T15:32:51.103Z [ERROR] (pool-2-thread-12) com.aws.greengrass.deployment.DeploymentService: Deployment task failed with following errors. {DeploymentId=42b70947-9b4a-4732-ac16-de41b80ede7f, detailed-deployment-status=FAILED_NO_STATE_CHANGE, deployment-error-types=[REQUEST_ERROR], GreengrassDeploymentId=42b70947-9b4a-4732-ac16-de41b80ede7f, serviceName=DeploymentService, currentState=RUNNING, deployment-error-stack=[DEPLOYMENT_FAILURE, NO_AVAILABLE_COMPONENT_VERSION, COMPONENT_VERSION_REQUIREMENTS_NOT_MET]}
com.aws.greengrass.componentmanager.exceptions.NoAvailableComponentVersionException: No local or cloud component version satisfies the requirements. Check whether the version constraints conflict and that the component exists in your AWS account with a version that matches the version constraints. If the version constraints conflict, revise deployments to resolve the conflict. Component com.example.battery version constraints: LOCAL_DEPLOYMENT requires =1.0.0.
    at com.aws.greengrass.componentmanager.ComponentManager.negotiateVersionWithCloud(ComponentManager.java:229)
    at com.aws.greengrass.componentmanager.ComponentManager.resolveComponentVersion(ComponentManager.java:164)
    at com.aws.greengrass.componentmanager.DependencyResolver.lambda$resolveDependencies$2(DependencyResolver.java:125)
    at com.aws.greengrass.componentmanager.DependencyResolver.resolveComponentDependencies(DependencyResolver.java:221)
    at com.aws.greengrass.componentmanager.DependencyResolver.resolveDependencies(DependencyResolver.java:123)
    at com.aws.greengrass.deployment.DefaultDeploymentTask.lambda$call$2(DefaultDeploymentTask.java:125)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
    0
    answers
    0
    votes
    3
    views
    asked 2 hours ago
  • Where does CloudWatch store data? Is it S3, DynamoDB, something else, or a combination? Does CloudWatch use different services for different types of data? Are those back-end services (S3, DynamoDB, or similar) accessible to the customer or customer-manageable? Thank you.
    0
    answers
    0
    votes
    5
    views
    AWS
    asked 2 hours ago
