
Management & Governance

In the past, organizations have had to choose between innovating faster and maintaining control over cost, compliance, and security. With AWS Management and Governance services, customers don’t have to choose between innovation and control—they can have both. With AWS, customers can enable, provision, and operate their environment for both business agility and governance control.

Recent questions


Sending AS2 messages between two AWS Transfer Family servers

I've set up AWS Transfer Family servers in two different regions to test the sending functionality. However, even though the VPC is created, sending messages fails with either `UNABLE_TO_CONNECT_TO_REMOTE_HOST_OR_IP` or "File path not found". I'm using S3 for the document to send. I've checked the IP address with a different program (Mendelson AS2) and it's able to connect fine; it was even able to send a test document. Despite that, sending through a Lambda function fails.

A few things I tried:

* Checking permissions: I'm able to connect and describe the server, the connectors, etc. with no problem, so it's not that.
* Connector with the wrong URL: I used the same URL as in Mendelson with the port attached at the end (`http:/s-xxx:5080`, in the format specified in [1] with the region). I also tried the URL without the port specified, and that didn't work either.
* Region issue: I thought a region mismatch could be the problem, since the Lambda was in us-west-1 while the AS2 server I was sending to is in us-east-2, so I created a different connector and had it send to itself in the same region. Still the same error about being unable to connect.
* Checked the CloudWatch logs: they actually report that everything sent successfully with a 200 code.

Weird things noticed:

* After the Lambda is triggered, it creates the expected `failed` and `processing` folders, but after the first few times it no longer saves the results. I sometimes get a `.cms` file and a `.json` file, but not every time, even though the CloudWatch logs are created correctly every time.
* The `failed` and `processed` folders were somehow created one folder above the folder the file was uploaded to (e.g., the structure is `bucket/folder1/folder2/folder3` and the uploaded file was in `folder3`, but the `failed` and `processing` folders were created in `folder2` instead of the expected `folder3`). This happened just once, though.

Additional question (I can post this separately if needed, but since it's related to my issue I figured I'd put it here): what is the transfer ID for? Is it supposed to be the execution ID? There doesn't seem to be a way to view the results of a transfer in the documentation [2].

References:
[1] https://docs.aws.amazon.com/transfer/latest/userguide/as2-end-to-end-example.html#as2-create-connector-example
[2] https://docs.aws.amazon.com/transfer/latest/userguide/API_StartFileTransfer.html
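For reference, a minimal sketch of the request shape for the `StartFileTransfer` call in [2]. The connector ID, bucket, and key below are hypothetical, and the helper is illustrative; the `/bucket-name/key` format for `SendFilePaths` (rather than an `s3://` URI) is an assumption worth double-checking, since a malformed path is one plausible source of a "File path not found" error.

```python
# Hypothetical helper sketching the shape of the boto3 StartFileTransfer call.
# Assumption: SendFilePaths entries take the form /bucket-name/object-key.
def build_transfer_request(connector_id, bucket, key):
    return {
        "ConnectorId": connector_id,
        "SendFilePaths": [f"/{bucket}/{key}"],
    }

# Inside the Lambda one would then call (not executed here):
#   import boto3
#   response = boto3.client("transfer").start_file_transfer(
#       **build_transfer_request("c-1234567890abcdef0",
#                                "my-bucket",
#                                "folder1/folder2/folder3/test.txt"))
#   # response["TransferId"] identifies this transfer in the server's logs.
```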
0 answers · 0 votes · 3 views · asked 5 hours ago

Can a log destination work with a KMS-encrypted Kinesis stream?

I am following [AWS CloudWatch Logs - Setting up a new cross-account subscription](https://docs.amazonaws.cn/en_us/AmazonCloudWatch/latest/logs/Cross-Account-Log_Subscription-New.html) and I've been able to get WAF logs from Account A to flow through to my OpenSearch cluster in Account B using the documentation. But I want to extend it so that everything uses encryption at rest (server-side encryption). When I try to create a log destination, I get an error saying "Check if the destination is valid". I have the following setup: a data stream with server-side encryption using a KMS managed key, and an IAM role called CWLtoKinesisRole with the following trust policy:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "logs.us-east-1.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalOrgID": "o-12345"
                }
            }
        }
    ]
}
```

and the following policy:

```
{
    "Statement": [
        {
            "Action": "kinesis:PutRecord",
            "Effect": "Allow",
            "Resource": "arn:aws:kinesis:us-east-1:123456789123:stream/logs-recipient",
            "Sid": ""
        },
        {
            "Action": [
                "kms:GenerateDataKey",
                "kms:Encrypt",
                "kms:Decrypt"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:kms:*:123456789123:key/*",
            "Sid": ""
        }
    ],
    "Version": "2012-10-17"
}
```

Then when I run:

```
aws logs put-destination \
    --destination-name "testDestination" \
    --target-arn "arn:aws:kinesis:region:123456789123:stream/logs-recipient" \
    --role-arn "arn:aws:iam::123456789123:role/CWLtoKinesisRole"
```

I get `cloudwatch log destination: InvalidParameterException: Could not deliver test message to specified destination. Check if the destination is valid`. Any direction on what I am missing here would be great, thanks. Phil
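One avenue worth checking (a sketch under assumptions, not a confirmed fix): the error says `put-destination` could not deliver a test message, so the whole delivery path, including KMS, has to work. If the stream's key is a customer managed key, its key policy generally needs to allow the assumed role to use the key in addition to the role's own IAM policy; a statement along these lines (role ARN taken from the setup above, `Sid` hypothetical) could be added to the key policy. If the stream instead uses the AWS managed `aws/kinesis` key, its key policy cannot be edited, which is another thing to rule out.

```
{
    "Sid": "AllowCWLtoKinesisRoleUseOfKey",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789123:role/CWLtoKinesisRole"
    },
    "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
    ],
    "Resource": "*"
}
```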
1 answer · 0 votes · 9 views · asked 18 hours ago

aws command installed with awscli library inside a python venv on Windows invokes a python OUTSIDE the venv

For awscli 1.25.86, installing on a freshly minted Windows EC2 instance (Windows Server 2022 Datacenter), I did this:

1. Selected my home directory (`cd`).
2. Installed `pyenv` (e.g., via PowerShell using https://github.com/pyenv-win/pyenv-win#power-shell). This said it didn't succeed but seems to have fully installed pyenv; this is not the bug. (I had to open a new PowerShell to see the effect of having installed `pyenv`.)
3. Told `pyenv` to install Python 3.8 (`pyenv install 3.8.10`).
4. Selected Python 3.8 globally (`pyenv global 3.8.10`).
5. Created a virtual environment (`pyenv exec python -m venv myvenv`).
6. Entered the venv (`myvenv\scripts\activate`).
7. Installed `awscli` (`pip install awscli`).
8. Tried to invoke it (`aws --version`). This gives the message `File association not found for extension .py`, which is an ignorable problem, followed by an error that is the bug I'm reporting:

```
Traceback (most recent call last):
  File "C:\Users\Andrea\GitHub\Submit4DN\s4dn_venv\Scripts\aws.cmd", line 50, in <module>
    import awscli.clidriver
ModuleNotFoundError: No module named 'awscli'
```

After studying this problem, I believe I know its source, and I'm pretty sure it's in the `awscli` library. The library installs `myvenv\scripts\aws.cmd`, which implements the `aws` command inside the virtual environment, but that script sniffs around for a `python` to invoke and finds one _outside_ of the virtual environment. The problem isn't that it tries to get out of the virtual environment; it's just apparently oblivious to the presence of one, and so it isn't picky about which python it finds. It successively seeks `python.cmd`, `python.bat`, and `python.exe` (see line 7 of `myvenv\scripts\aws.cmd`) but finds `python.cmd` first, and that one is not inside the virtual environment. Had it checked `python.exe` first, it would have found the one in the virtual environment.

If you swap the order of `(cmd bat exe)` on line 7 of `aws.cmd` so that it searches `(exe bat cmd)`, it will invoke the python within the virtual env and so will find the `awscli` that was just installed there. That's not necessarily the right fix (it still feels fragile), but it seems to me that this proves it's the locus of the problem. Another partial workaround is to install `awscli` outside of the virtual environment: `deactivate`, then `pip install awscli`, then `myvenv/scripts/activate`, and finally `aws --version` will work, _except_ that if you change to another version of Python globally via pyenv, the `aws` command within the venv will break again unless you reinstall `awscli` in each globally selected Python.

I don't have a good fix to suggest because I'm not current on writing Windows shell scripts, but I imagine it involves a different way of discovering python that gives strong preference to a venv if one is active, e.g., by noticing there is a `%VIRTUAL_ENV%` in effect and just invoking `python` (since virtual envs always have a `python`), or `%VIRTUAL_ENV%\scripts\python` if you want to be double-sure.

Note that I was able to reproduce this problem on my professional desktop version of Windows 10 at home as well, so it's nothing specific to the EC2 instance; that's just a way to show that the problem can be demonstrated in a clean environment. The problem seems pretty definitely to be in the `awscli` library. Whatever solution you pick, I hope this illustrates the issue clearly enough that you can quickly issue some sort of fix to the `awscli` library, because the present situation is just plain broken and is impacting instructions we're trying to give some users about how to access our system remotely. I'd rather not be advising users to edit scripts they got from elsewhere, nor do I want to supply alternate scripts for them to use. Things should just work.
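The `%VIRTUAL_ENV%`-first discovery suggested above can be sketched as follows; the helper name and the `path_lookup` callback are illustrative, not awscli's actual code:

```python
# Sketch of venv-aware interpreter discovery: prefer the active virtual
# environment's python.exe, and only then fall back to a PATH search, with
# python.exe tried before python.bat and python.cmd (the reverse of the
# order aws.cmd currently uses).
def pick_python(env, path_lookup):
    """env is an environment mapping; path_lookup(name) returns the full
    path of `name` on PATH, or None if it is not found."""
    venv = env.get("VIRTUAL_ENV")
    if venv:
        # Virtual envs always ship a python, so this is safe when one is active.
        return venv + r"\Scripts\python.exe"
    for ext in ("exe", "bat", "cmd"):  # exe first, unlike aws.cmd's cmd/bat/exe
        found = path_lookup("python." + ext)
        if found:
            return found
    return None
```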
0 answers · 0 votes · 33 views · asked 19 hours ago

ResourceLimitExceeded exception but I have Quota

I am not able to start a SageMaker notebook nor a SageMaker training job with ml.c5.xlarge (or any other instance type). I checked in Service Quotas, and I clearly have quota for both tasks:

- 1 in "applied quota value" for "ml.c5.xlarge for notebook instance usage".
- 15 in "applied quota value" for "ml.c5.xlarge for training job usage".

Of course, I am checking in the same region I am trying to work in: us-east-1. I have researched for several days, and all the forums suggest asking for a limit increase. Nevertheless, I already have quota (limits) available. Still, when I try to start the Jupyter notebook, it raises the exception `The account-level service limit 'ml.c5.xlarge for notebook instance usage' is 0 Instances, with current utilization of 0 Instances and a request delta of 1 Instances. Please contact AWS support to request an increase for this limit.` It is strange, because the exception says that I have a limit of 0 instances, while Service Quotas says I have 1. Here's the output of the command `service-quotas list-service-quotas`:

```
{
    "ServiceCode": "sagemaker",
    "ServiceName": "Amazon SageMaker",
    "QuotaArn": "arn:aws:servicequotas:us-east-1:631720213551:sagemaker/L-E2BB44FE",
    "QuotaCode": "L-E2BB44FE",
    "QuotaName": "ml.c5.xlarge for training job usage",
    "Value": 15.0,
    "Unit": "None",
    "Adjustable": true,
    "GlobalQuota": false
},
{
    "ServiceCode": "sagemaker",
    "ServiceName": "Amazon SageMaker",
    "QuotaArn": "arn:aws:servicequotas:us-east-1:631720213551:sagemaker/L-39F5FD98",
    "QuotaCode": "L-39F5FD98",
    "QuotaName": "ml.c5.xlarge for notebook instance usage",
    "Value": 1.0,
    "Unit": "None",
    "Adjustable": true,
    "GlobalQuota": false,
    "UsageMetric": {
        "MetricNamespace": "AWS/Usage",
        "MetricName": "ResourceCount",
        "MetricDimensions": {
            "Class": "None",
            "Resource": "notebook-instance/ml.c5.xlarge",
            "Service": "SageMaker",
            "Type": "Resource"
        },
        "MetricStatisticRecommendation": "Maximum"
    }
},
```

I strongly appreciate your help, because I have not been able to start a SageMaker training job for several days. Thanks.
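As a sanity check on the discrepancy described above, the applied values can be pulled out of the `list-service-quotas` output programmatically. This sketch just re-reads the two entries quoted above (values copied from that output, abridged to the relevant fields); comparing these against `aws service-quotas get-aws-default-service-quota` for the same quota codes may help show whether an approved increase has actually propagated.

```python
import json

# The two entries quoted above from `service-quotas list-service-quotas`,
# abridged and wrapped in a list so the fragment parses as JSON.
quotas = json.loads("""[
    {"QuotaCode": "L-E2BB44FE",
     "QuotaName": "ml.c5.xlarge for training job usage",
     "Value": 15.0},
    {"QuotaCode": "L-39F5FD98",
     "QuotaName": "ml.c5.xlarge for notebook instance usage",
     "Value": 1.0}
]""")

def applied_value(quotas, quota_code):
    """Return the applied Value for a quota code, or None if absent."""
    for q in quotas:
        if q["QuotaCode"] == quota_code:
            return q["Value"]
    return None
```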
0 answers · 1 vote · 11 views · asked a day ago
