Questions in Management & Governance

PowerShell: trying to get the instance ID from CloudTrail requestParameters

I am trying to pull the instance ID and other parameters from CloudTrail using PowerShell, like so:

```powershell
$results = Find-CTEvent -StartTime (Get-Date).AddMinutes(-30) | ? {$_.EventName -eq "TerminateInstances"}
```

The raw CloudTrailEvent JSON looks like this:

```json
{"eventVersion":"1.08","userIdentity":{"type":"IAMUser","principalId":"xx","arn":"arn:aws:iam::462518063128:user/awslab1","accountId":"xxx","accessKeyId":"xx","userName":"awslab1","sessionContext":{"sessionIssuer":{},"webIdFederationData":{},"attributes":{"creationDate":"2022-05-27T14:28:44Z","mfaAuthenticated":"false"}}},"eventTime":"2022-05-27T17:04:12Z","eventSource":"ec2.amazonaws.com","eventName":"TerminateInstances","awsRegion":"us-west-1","sourceIPAddress":"AWS Internal","userAgent":"AWS Internal","requestParameters":{"instancesSet":{"items":[{"instanceId":"i-07efe3d31ef2cef02"}]}},"responseElements":{"requestId":"dde64a51-2fd6-40ef-b9d6-06fde8a2abd9","instancesSet":{"items":[{"instanceId":"i-07efe3d31ef2cef02","currentState":{"code":32,"name":"shutting-down"},"previousState":{"code":16,"name":"running"}}]}},"requestID":"dde64a51-2fd6-40ef-b9d6-06fde8a2abd9","eventID":"dfc1fa38-c5db-401d-9ac9-11cd5ab41dd8","readOnly":false,"eventType":"AwsApiCall","managementEvent":true,"recipientAccountId":"462518063038","eventCategory":"Management","sessionCredentialFromConsole":"true"}
```

Then I convert it from JSON:

```powershell
$results.CloudTrailEvent | ConvertFrom-Json
```

which outputs:

```
eventVersion                 : 1.08
userIdentity                 : @{type=IAMUser; principalId=xxxx; arn=arn:aws:iam::462518063128:user/awslab1; accountId=xx; accessKeyId=xxxx; userName=awslab1; sessionContext=}
eventTime                    : 5/27/2022 5:04:12 PM
eventSource                  : ec2.amazonaws.com
eventName                    : TerminateInstances
awsRegion                    : us-west-1
sourceIPAddress              : AWS Internal
userAgent                    : AWS Internal
requestParameters            : @{instancesSet=}
responseElements             : @{requestId=dde64a51-2fd6-40ef-b9d6-06fde8a2abd9; instancesSet=}
requestID                    : dde64a51-2fd6-40ef-b9d6-06fde8a2abd9
eventID                      : dfc1fa38-c5db-401d-9ac9-11cd5ab41dd8
readOnly                     : False
eventType                    : AwsApiCall
managementEvent              : True
recipientAccountId           : 462518061234
eventCategory                : Management
sessionCredentialFromConsole : true
```

But `requestParameters : @{instancesSet=}` appears to be missing the instance ID and the other values. Any idea why?
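The nested values are most likely still present in the parsed object and simply not expanded by PowerShell's default list formatting. For comparison, here is a minimal boto3 sketch (assuming CloudTrail lookup permissions in the same region) that pulls the same TerminateInstances events and digs the instance ID out of the nested `requestParameters` after parsing the `CloudTrailEvent` JSON string:

```python
import json
from datetime import datetime, timedelta

import boto3

cloudtrail = boto3.client("cloudtrail")

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}],
    StartTime=datetime.utcnow() - timedelta(minutes=30),
    EndTime=datetime.utcnow(),
)

for event in resp["Events"]:
    # CloudTrailEvent is a JSON string, the same document shown above
    detail = json.loads(event["CloudTrailEvent"])
    items = detail.get("requestParameters", {}).get("instancesSet", {}).get("items", [])
    for item in items:
        print(item.get("instanceId"))
```

The equivalent drill-down in PowerShell would follow the same path, `requestParameters.instancesSet.items`, on the object returned by `ConvertFrom-Json`.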
0
answers
0
votes
4
views
asked 2 days ago

How to create a DynamicFrame from the AWS Glue catalog in a local environment?

I have performed some AWS Glue version 3.0 job testing using Docker containers, as detailed [here](https://aws.amazon.com/blogs/big-data/develop-and-test-aws-glue-version-3-0-jobs-locally-using-a-docker-container/). The following code outputs two lists, one per connection, with the names of the tables in a database:

```python
import boto3

db_name_s3 = "s3_connection_db"
db_name_mysql = "glue_catalog_mysql_connection_db"

def retrieve_tables(database_name):
    session = boto3.session.Session()
    glue_client = session.client("glue")
    response_get_tables = glue_client.get_tables(DatabaseName=database_name)
    return response_get_tables

s3_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_s3)["TableList"]]
mysql_tables_list = [table_dict["Name"] for table_dict in retrieve_tables(db_name_mysql)["TableList"]]

print(f"These are the tables from {db_name_s3} db: {s3_tables_list}\n")
print(f"These are the tables from {db_name_mysql} db {mysql_tables_list}")
```

Now, I try to create a DynamicFrame with the *from_catalog* method in this way:

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame

# Glue context initialization (omitted in the original snippet)
sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)

source_activities = glueContext.create_dynamic_frame.from_catalog(
    database=db_name,
    table_name=table_name
)
```

When `database="s3_connection_db"`, everything works fine; however, when I set `database="glue_catalog_mysql_connection_db"`, I get the following error:

```
Py4JJavaError: An error occurred while calling o45.getDynamicFrame.
: java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
```

I understand the issue is related to the fact that I am trying to fetch data from a MySQL table, but I am not sure how to solve it. By the way, the job runs fine on the Glue console. I would really appreciate some help, thanks!
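One approach that may help in the local container is to put a MySQL JDBC driver on the Spark classpath before the GlueContext is created. This is a minimal sketch, not a confirmed fix: it assumes you have downloaded a MySQL Connector/J jar into the container, and the jar path and table name below are hypothetical.

```python
from pyspark.sql import SparkSession
from awsglue.context import GlueContext

# Hypothetical local path to a MySQL Connector/J jar inside the container
MYSQL_JAR = "/home/glue_user/workspace/jars/mysql-connector-java-8.0.28.jar"

# These settings must take effect before the Spark JVM starts, so create the
# session (and only then the GlueContext) with the jar on the classpath.
spark = (
    SparkSession.builder
    .appName("local-glue-mysql-test")
    .config("spark.jars", MYSQL_JAR)
    .config("spark.driver.extraClassPath", MYSQL_JAR)
    .config("spark.executor.extraClassPath", MYSQL_JAR)
    .getOrCreate()
)

glueContext = GlueContext(spark.sparkContext)

source_activities = glueContext.create_dynamic_frame.from_catalog(
    database="glue_catalog_mysql_connection_db",  # value from the question
    table_name="some_table",                      # hypothetical table name
)
```

On the Glue console the driver is provided by the managed environment, which would explain why the same job succeeds there.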
0
answers
0
votes
12
views
asked 2 days ago

What is the relationship between AWS Config retention period and AWS S3 Lifecycle policy?

I found here: https://aws.amazon.com/blogs/mt/configuration-history-configuration-snapshot-files-aws-config/ that "AWS Config delivers three types of configuration files to the S3 bucket": configuration history (a collection of the configuration items for a given resource over any time period), configuration snapshot, and OversizedChangeNotification.

However, this doc: https://docs.aws.amazon.com/ja_jp/config/latest/developerguide/delete-config-data-with-retention-period.html only says that the retention period deletes "ConfigurationItems" (a configuration item represents a point-in-time view of the various attributes of a supported AWS resource that exists in your account).

And this doc: https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-history says: "The components of a configuration item include metadata, attributes, relationships, current configuration, and related events. AWS Config creates a configuration item whenever it detects a change to a resource type that it is recording."

I wonder:

* Are ConfigurationItems a subset of the configuration history?
* Is what is saved to S3 the same thing as ConfigurationItems? If not, where are ConfigurationItems stored? And if they are stored in S3, are ConfigurationItems deleted or corrupted when the objects expire?

I have set my S3 lifecycle to expire objects after 300 days and the AWS Config retention period to 7 years, so I am wondering what the relationship between the two is. Because the S3 lifecycle period is 300 days, will the AWS Config data be deleted after 300 days? Thank you so much!
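For what it's worth, the two retention settings are configured through separate APIs, which suggests they act on different stores (the Config service's own configuration items versus the files delivered to the S3 bucket). A minimal boto3 sketch, with a placeholder bucket name and the day values from the question, showing the two knobs side by side:

```python
import boto3

# Hypothetical name of the bucket used by the Config delivery channel
CONFIG_BUCKET = "my-config-delivery-bucket"

config = boto3.client("config")
s3 = boto3.client("s3")

# Retention for configuration items stored by the AWS Config service itself
# (what the retention-period doc refers to); 2557 days is roughly 7 years.
config.put_retention_configuration(RetentionPeriodInDays=2557)

# Lifecycle rule on the delivery bucket; this only governs the objects
# (history/snapshot files) that Config delivers to S3.
s3.put_bucket_lifecycle_configuration(
    Bucket=CONFIG_BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-config-delivery-files",
                "Filter": {"Prefix": "AWSLogs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 300},
            }
        ]
    },
)
```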
1
answers
0
votes
13
views
asked 2 days ago

SSM Agent update problems

I've been having issues with SSM for the last 3 days when using Run Command with the AWS-UpdateSSMAgent document. It tries to upgrade amazon-ssm-agent to the latest version but fails to do so. I tried to refresh the snap from 3.1.1188.0, but no updates are available. I also tried to remove and reinstall the package, but I still get the 3.1.1188.0 version. My Windows Server instance gets updated, but all the Ubuntu 20.04 LTS and 22.04 LTS instances have issues with the update. I know 22.04 is not fully supported by SSM, but the SSM agent update document always worked prior to the last 3 days. Anyone having similar issues? Here's the log:

```
Successfully downloaded manifest
Successfully downloaded updater version 3.1.1446.0
Updating amazon-ssm-agent from 3.1.1188.0 to 3.1.1446.0
Successfully downloaded https://s3.ca-central-1.amazonaws.com/amazon-ssm-ca-central-1/amazon-ssm-agent/3.1.1188.0/amazon-ssm-agent-ubuntu-amd64.tar.gz
Successfully downloaded https://s3.ca-central-1.amazonaws.com/amazon-ssm-ca-central-1/amazon-ssm-agent/3.1.1446.0/amazon-ssm-agent-ubuntu-amd64.tar.gz
Initiating amazon-ssm-agent update to 3.1.1446.0
failed to install amazon-ssm-agent 3.1.1446.0, ErrorMessage=The execution of command returned Exit Status: 125
exit status 125
Initiating rollback amazon-ssm-agent to 3.1.1188.0
failed to install amazon-ssm-agent 3.1.1188.0, ErrorMessage=The execution of command returned Exit Status: 125
exit status 125
Failed to update amazon-ssm-agent to 3.1.1446.0
```
0
answers
0
votes
13
views
asked 3 days ago

How to ensure the latest Lambda layer version is used when deploying with CloudFormation and SAM?

Hi, we use CloudFormation and SAM to deploy our Lambda (Node.js) functions. All our Lambda functions have a layer set through `Globals`. When we make breaking changes in the layer code we get errors during deployment, because new Lambda functions are rolled out to production with the old layer and only after a few seconds *(~40 seconds in our case)* do they start using the new layer.

For example, let's say we add a new class to the layer and import it in the function code; then we get an error that says `NewClass is not found` for a few seconds during deployment *(this happens because the new function code still uses the old layer, which doesn't have `NewClass`)*.

Is it possible to ensure a new Lambda function is always rolled out with the latest layer version?

Example CloudFormation template.yaml:
```
Globals:
  Function:
    Runtime: nodejs14.x
    Layers:
      - !Ref CoreLayer

Resources:
  CoreLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: core-layer
      ContentUri: packages/coreLayer/dist
      CompatibleRuntimes:
        - nodejs14.x
    Metadata:
      BuildMethod: nodejs14.x

  ExampleFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: example-function
      CodeUri: packages/exampleFunction/dist
```

SAM build: `sam build --base-dir . --template ./template.yaml`

SAM package: `sam package --s3-bucket example-lambda --output-template-file ./cf.yaml`

Example CloudFormation deployment events. As you can see, the new layer (`CoreLayer123abc456`) is created before the Lambda function is updated, so it should be available to the new function code, but for some reason the Lambda is updated and deployed with the old layer version for a few seconds:

| Timestamp | Logical ID | Status | Status reason |
| --- | --- | --- | --- |
| 2022-05-23 16:26:54 | stack-name | UPDATE_COMPLETE | - |
| 2022-05-23 16:26:54 | CoreLayer789def456 | DELETE_SKIPPED | - |
| 2022-05-23 16:26:53 | v3uat-farthing | UPDATE_COMPLETE_CLEANUP_IN_PROGRESS | - |
| 2022-05-23 16:26:44 | ExampleFunction | UPDATE_COMPLETE | - |
| 2022-05-23 16:25:58 | ExampleFunction | UPDATE_IN_PROGRESS | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_COMPLETE | - |
| 2022-05-23 16:25:53 | CoreLayer123abc456 | CREATE_IN_PROGRESS | Resource creation Initiated |
| 2022-05-23 16:25:50 | CoreLayer123abc456 | CREATE_IN_PROGRESS | - |
| 2022-05-23 16:25:41 | stack-name | UPDATE_IN_PROGRESS | User Initiated |
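Not a fix, but a way to observe the behaviour: a minimal boto3 sketch (function and layer names taken from the template above) that prints the layer version ARNs attached to the deployed function and the latest published version of the layer, so the two can be compared right after a deployment.

```python
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "example-function"  # from the template above
LAYER_NAME = "core-layer"           # from the template above

# Layer versions the deployed function is currently configured with
config = lambda_client.get_function_configuration(FunctionName=FUNCTION_NAME)
attached = [layer["Arn"] for layer in config.get("Layers", [])]
print("Attached layer versions:", attached)

# Latest published version of the layer
versions = lambda_client.list_layer_versions(LayerName=LAYER_NAME)["LayerVersions"]
if versions:
    latest = max(versions, key=lambda v: v["Version"])
    print("Latest published layer version:", latest["LayerVersionArn"])
```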
2
answers
0
votes
43
views
asked 4 days ago

ApplicationLoadBalancedFargateService with listener on one port and health check on another fails health check

Hi, I have an ApplicationLoadBalancedFargateService that exposes a service on one port, but the health check runs on another. Unfortunately, the target fails the health check and terminates the task. Here's a snippet of my code:

```
const hostPort = 5701;
const healthCheckPort = 8080;

taskDefinition.addContainer(stackPrefix + 'Container', {
  image: ecs.ContainerImage.fromRegistry('hazelcast/hazelcast:3.12.6'),
  environment: {
    'JAVA_OPTS': `-Dhazelcast.local.publicAddress=localhost:${hostPort} -Dhazelcast.rest.enabled=true`,
    'LOGGING_LEVEL': 'DEBUG',
    'PROMETHEUS_PORT': `${healthCheckPort}`,
  },
  portMappings: [
    { containerPort: hostPort, hostPort: hostPort },
    { containerPort: healthCheckPort, hostPort: healthCheckPort },
  ],
  logging: ecs.LogDriver.awsLogs({ streamPrefix: stackPrefix, logRetention: logs.RetentionDays.ONE_DAY }),
});

const loadBalancedFargateService = new ecsPatterns.ApplicationLoadBalancedFargateService(this, stackPrefix + 'Service', {
  cluster,
  publicLoadBalancer: false,
  desiredCount: 1,
  listenerPort: hostPort,
  taskDefinition: taskDefinition,
  securityGroups: [fargateServiceSecurityGroup],
  domainName: env.getPrefixedRoute53(stackName),
  domainZone: env.getDomainZone(),
});

loadBalancedFargateService.targetGroup.configureHealthCheck({
  path: "/metrics",
  port: healthCheckPort.toString(),
  timeout: cdk.Duration.seconds(15),
  interval: cdk.Duration.seconds(30),
  healthyThresholdCount: 2,
  unhealthyThresholdCount: 5,
  healthyHttpCodes: '200-299',
});
```

Any suggestions on how I can get this to work? Thanks.
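A quick way to see why the target is being marked unhealthy is to read the health-check reason codes off the target group; the reason usually distinguishes a security-group/timeout problem from an HTTP response problem. A minimal boto3 sketch, where the target group ARN is a placeholder for the one the CDK construct creates:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN -- substitute the target group created by the CDK construct
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:account:targetgroup/placeholder"

health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    target = desc["Target"]
    state = desc["TargetHealth"]
    # Reason/Description separate e.g. a timeout (often a security-group issue on
    # the health-check port) from a failing HTTP response on /metrics
    print(target["Id"], target.get("Port"), state["State"],
          state.get("Reason"), state.get("Description"))
```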
1
answers
0
votes
35
views
asked 5 days ago

How to configure stickiness and autoscaling in an Elastic Beanstalk application?

Hello, we have an application running on Elastic Beanstalk that listens for client requests and returns a stream segment. We have some requirements for the application:

1) Client sessions should be sticky (all requests for a given session should go to the same EC2 instance) for a specified time, without any changes on the client side (we can't add cookie sending via the client). As I understand it, an Application Load Balancer supports this, so I enabled stickiness on the load balancer. My understanding is that load-balancer-generated cookies are managed by the load balancer, so we do not need to send the cookie from the client side.

2) Based on CPU utilisation we need to auto scale instances: when CPU load > 80%, we need to scale out by +1 instance.

Problems:

1) When I send requests from multiple clients from the same IP address, CPU load goes above 80% and a new instance is launched. But after some time I see the CPU load going down. Does this mean that one of the clients is now connected to the new instance and the load is being shared? That would mean stickiness is not working, though it is not clear how to test it properly. However, sometimes when I stop the new instance manually, no client gets any errors; when I stop the first instance, all clients get 404 errors for some time. How do I check whether stickiness is working properly?

2) If I get stickiness to work, my understanding is that the load will not be shared with the new instance, so average CPU usage will stay the same and autoscaling will keep launching new instances until the max limit. How do I make stickiness work together with autoscaling? I set the stickiness duration to 86400 seconds (24 hours) to be safe.

Can someone please guide me on how to configure stickiness and autoscaling the proper way?
3
answers
0
votes
23
views
asked 6 days ago

Elemental MediaConvert job template for Video on Demand

I launched the fully managed Video on Demand template from here: https://aws.amazon.com/solutions/implementations/video-on-demand-on-aws/?did=sl_card&trk=sl_card. I have a bunch of questions on how to tailor this service to my use case; I will ask a separate question for each.

Firstly, is it possible to use my own GUID as an identifier for the MediaConvert jobs and outputs? The default GUID tagged onto the videos in this workflow is independent of my application server, so it's difficult for the server to track who owns which video in the destination S3 bucket.

Secondly, I would like to compress the video input for cases where the resolution is higher than 1080p; for my service I don't want to process any videos above 1080p. Is there a way I can achieve this without adding a Lambda during the ingestion stage to compress it? I know it can be compressed on the client, but I am hoping this can be achieved within the workflow, perhaps using MediaConvert.

Thirdly, based on some of the materials I came across about this service, aside from the HLS files MediaConvert generates, it is supposed to generate an MP4 version of my video for cases where a client wants to download the full video rather than stream it. That is not the default behaviour; how do I achieve this?

Lastly, how do I add watermarks to my videos in this workflow?

Forgive me if some of these questions feel like things I could have easily researched and found answers for. I did do some research, but I failed to get a clear understanding of any of them.
1
answers
0
votes
12
views
asked 8 days ago

Config Advanced Query Editor - Return ConfigRuleName

I am using the AWS Config service across multiple accounts within my Organization. My goal is to write a query which will give me a full list of non-compliant resources in all regions, in all accounts. I have an aggregator which has the visibility for this task. The advanced query I am using is similar to the [AWS example in the docs](https://docs.aws.amazon.com/config/latest/developerguide/example-query.html):

```
SELECT
  configuration.targetResourceId,
  configuration.targetResourceType,
  configuration.complianceType,
  configuration.configRuleList,
  accountId,
  awsRegion
WHERE
  configuration.configRuleList.complianceType = 'NON_COMPLIANT'
```

However, the ConfigRuleName is nested within `configuration.configRuleList`, as there could be multiple Config rules (hence the list) assigned to `configuration.targetResourceId`.

How can I write a query that picks apart the JSON list returned this way? The results do not export to CSV well at all: a JSON object embedded in a CSV is unsuitable if we want to import the results into a spreadsheet for viewing. I have tried `configuration.configRuleList.configRuleName`, but it only returns `-`, even when the list has a single object in it.

If there is a better way to create a centralised place to view all my Org's non-compliant resources, I would like to learn about it. Thanks in advance.
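One workaround is to run the same query through the API and flatten `configRuleList` in code before writing the CSV. A minimal boto3 sketch: the aggregator name is a placeholder, and the field names follow the query above (each result comes back as a JSON string with one entry per resource, which is expanded here into one row per resource/rule pair).

```python
import csv
import json

import boto3

config = boto3.client("config")

AGGREGATOR = "my-org-aggregator"  # placeholder -- substitute your aggregator name

QUERY = """
SELECT
  configuration.targetResourceId,
  configuration.targetResourceType,
  configuration.configRuleList,
  accountId,
  awsRegion
WHERE
  configuration.configRuleList.complianceType = 'NON_COMPLIANT'
"""

rows = []
next_token = ""
while True:
    kwargs = {"ConfigurationAggregatorName": AGGREGATOR, "Expression": QUERY}
    if next_token:
        kwargs["NextToken"] = next_token
    resp = config.select_aggregate_resource_config(**kwargs)
    for result in resp["Results"]:
        item = json.loads(result)  # each result is a JSON string
        cfg = item.get("configuration", {})
        for rule in cfg.get("configRuleList", []):
            rows.append({
                "accountId": item.get("accountId"),
                "awsRegion": item.get("awsRegion"),
                "resourceId": cfg.get("targetResourceId"),
                "resourceType": cfg.get("targetResourceType"),
                "configRuleName": rule.get("configRuleName"),
                "complianceType": rule.get("complianceType"),
            })
    next_token = resp.get("NextToken", "")
    if not next_token:
        break

if rows:
    with open("non_compliant.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```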
0
answers
0
votes
5
views
asked 12 days ago

What is the suggested method to track a user's actions after assuming a cross-account role?

I need to be able to guarantee that a user's actions can always be traced back to their account regardless of which role they have assumed in another account. What methods are required to guarantee this for:

* assuming a cross-account role in the console
* assuming a cross-account role via the CLI

I have run tests and can see that when a user assumes a role in the CLI, temporary credentials are generated. These credentials appear in CloudTrail logs under responseElements.credentials for the assumeRole event. All future events generated by actions taken in the session include the accessKeyId, so I can track all of the actions in this case.

Using the web console, the same assumeRole event is generated, also including an accessKeyId. Unfortunately, future actions taken by the user don't include the same accessKeyId: at some point a different access key is generated and the session makes use of this new key. I can't find any way to link the two, and therefore am not sure how to attribute actions taken by the role to the user that assumed it.

I can see that when assuming a role in the console, the user can't change the sts:sessionName and it is always set to their username. Is this the suggested method for tracking actions? While this seems appropriate for roles within the same account, usernames are not globally unique, so I am concerned about using this for cross-account attribution. It seems placing restrictions on the value of sts:sourceIdentity is not supported when assuming roles in the web console.
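For the CLI/SDK path, a minimal sketch of the SourceIdentity approach mentioned above (the role ARN and identity values are placeholders): the source identity set at assume-role time is persisted on the session and recorded in the CloudTrail events for actions taken with those credentials, which gives an attribution link in addition to the access key ID.

```python
import boto3

sts = boto3.client("sts")

# Placeholder values -- substitute the real role ARN and the caller's identity
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/CrossAccountAuditRole",
    RoleSessionName="awslab1",   # visible in CloudTrail as part of the assumed-role ARN
    SourceIdentity="awslab1",    # persisted on the session and recorded in CloudTrail
    DurationSeconds=3600,
)

creds = response["Credentials"]

# Actions taken with these temporary credentials are logged with both the
# session's access key ID and the source identity set above.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(ec2.describe_instances(MaxResults=5)["Reservations"])
```

As noted in the question, this only covers programmatic assume-role calls; the console flow does not let the user (or an SCP) set sts:SourceIdentity.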
1
answers
2
votes
62
views
asked 13 days ago

ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.

Hi dears, I am building an ML model using the DeepAR algorithm and hit this error when I reached this point:

```
ClientError: An error occurred (UnknownOperationException) when calling the CreateHyperParameterTuningJob operation: The requested operation is not supported in the called region.
```

Code:

```python
import sagemaker
from sagemaker.tuner import (
    IntegerParameter,
    CategoricalParameter,
    ContinuousParameter,
    HyperparameterTuner,
)
from sagemaker import image_uris

container = image_uris.retrieve(region="af-south-1", framework="forecasting-deepar")

deepar = sagemaker.estimator.Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    use_spot_instances=True,  # use spot instances
    max_run=1800,             # max training time in seconds
    max_wait=1800,            # seconds to wait for spot instance
    output_path="s3://{}/{}".format(bucket, output_path),
    sagemaker_session=sess,
)

freq = "D"
context_length = 300

deepar.set_hyperparameters(
    time_freq=freq,
    context_length=str(context_length),
    prediction_length=str(prediction_length),
)

hyperparameter_ranges = {
    "mini_batch_size": IntegerParameter(100, 400),
    "epochs": IntegerParameter(200, 400),
    "num_cells": IntegerParameter(30, 100),
    "likelihood": CategoricalParameter(["negative-binomial", "student-T"]),
    "learning_rate": ContinuousParameter(0.0001, 0.1),
}

objective_metric_name = "test:RMSE"

tuner = HyperparameterTuner(
    deepar,
    objective_metric_name,
    hyperparameter_ranges,
    max_jobs=10,
    strategy="Bayesian",
    objective_type="Minimize",
    max_parallel_jobs=10,
    early_stopping_type="Auto",
)

s3_input_train = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/train/".format(bucket, prefix), content_type="json"
)
s3_input_test = sagemaker.inputs.TrainingInput(
    s3_data="s3://{}/{}/test/".format(bucket, prefix), content_type="json"
)

tuner.fit({"train": s3_input_train, "test": s3_input_test}, include_cls_metadata=False)
tuner.wait()
```

Can you please help in solving the error? I have to do this in the af-south-1 region. Thanks, Basem
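One thing worth ruling out (a minimal sketch, not a confirmed diagnosis): the DeepAR image is retrieved for af-south-1, but the CreateHyperParameterTuningJob call goes to whatever region `sess` was built in, so a mismatch between the notebook's default region and af-south-1 could produce this kind of error. Pinning the SageMaker session explicitly makes the target region unambiguous:

```python
import boto3
import sagemaker

# Pin every client the SageMaker SDK creates to af-south-1 so the tuning job
# is created in the same region the DeepAR image was retrieved for.
boto_sess = boto3.Session(region_name="af-south-1")
sess = sagemaker.Session(boto_session=boto_sess)
print("SageMaker session region:", sess.boto_region_name)
```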
1
answers
0
votes
8
views
asked 14 days ago