Questions tagged with Amazon EventBridge


How to define API Gateway to EventBridge integration?

I am building an API Gateway (v1) integration such that POSTs to a given endpoint route the body of the request to my custom event bus in EventBridge. I have created the gateway endpoints, event bus, etc., but I am struggling with the definition of the integration. I am doing this in Terraform, which basically wraps the AWS API for PutIntegration, and I cannot figure out the correct format of the request-parameters map required by AWS. Since event bus payloads have a specific structure, I assume I need to build the map to construct that payload. I also saw a post about needing a custom X-Amz-Target header. Are there any AWS-documented examples of how to build this integration and mapping? My attempts invariably lead to an error response along the lines of:

> Error updating API Gateway Integration: BadRequestException: Invalid mapping expression specified: Validation Result: warnings : [], errors : [Invalid mapping expression specified: tagmodernization-dev-us-west-2-eventbus-information-reporting, Invalid mapping expression specified: integration.request.body.Entries[0].EventBusName]

Mapping variations I have tried include:

- `integration.request.body.EventBusName`
- `integration.request.body.Entries[0].EventBusName`
- `EventBusName`

I realize I can achieve a similar goal using VTL with the request templates capability, but I am still unclear on the output format of the mapping anyway.
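For context, one commonly suggested shape for this integration skips `request_parameters` body mapping entirely (request parameters can only map headers, query strings, and paths to static or method-request values, which is why the expressions above are rejected) and instead builds the PutEvents payload in a VTL request template. A sketch only, not a verified answer: the resource names, IAM role, and region are placeholders, and the header values reflect the usual `AWSEvents.PutEvents` service-integration convention.

```terraform
# Sketch: names and ARNs are placeholders, assuming a REST API and an
# IAM role that allows events:PutEvents on the target bus.
resource "aws_api_gateway_integration" "eventbridge" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.events.id
  http_method             = "POST"
  type                    = "AWS"
  integration_http_method = "POST"
  uri                     = "arn:aws:apigateway:us-west-2:events:action/PutEvents"
  credentials             = aws_iam_role.apigw_events.arn

  # Headers are the only place request_parameters helps here; values
  # must be quoted string literals.
  request_parameters = {
    "integration.request.header.X-Amz-Target" = "'AWSEvents.PutEvents'"
    "integration.request.header.Content-Type" = "'application/x-amz-json-1.1'"
  }

  # The Entries array (the PutEvents wire format) is built in VTL,
  # wrapping the incoming POST body as the Detail string.
  request_templates = {
    "application/json" = <<EOF
{
  "Entries": [
    {
      "Source": "my.api",
      "DetailType": "api-post",
      "Detail": "$util.escapeJavaScript($input.json('$'))",
      "EventBusName": "tagmodernization-dev-us-west-2-eventbus-information-reporting"
    }
  ]
}
EOF
  }
}
```

The `$util.escapeJavaScript` call matters because `Detail` must be a JSON-encoded string, not raw JSON.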
1 answer · 0 votes · 279 views · asked 6 months ago

Run a Lambda function for RDS using the DBInstanceIdentifier

Hello, I have been trying to automate stopping RDS instances using a Lambda function when the number of connections made to an instance is lower than 1. I have an EventBridge rule that triggers the Lambda function when the alarm I created goes into the In alarm state. Currently this stops all the RDS instances, even those with connections. I understand that the issue is in the Lambda function, since it loops through all instances and turns them off. Is there a way to pass the DBInstanceIdentifier of only the instance in the alarm state to the Lambda function, so that it shuts down only the instance the alarm is on? Below is the Lambda code used.

```python
import boto3
import os

target_db = None
region = os.environ['AWS_REGION']
rds = boto3.client('rds', region_name=region)

def get_tags_for_db(db):
    instance_arn = db['DBInstanceArn']
    instance_tags = rds.list_tags_for_resource(ResourceName=instance_arn)
    return instance_tags['TagList']

def get_tags_for_db_cluster(db):
    instance_arn = db['DBClusterArn']
    instance_tags = rds.list_tags_for_resource(ResourceName=instance_arn)
    return instance_tags['TagList']

def lambda_handler(event, context):
    dbs = rds.describe_db_instances()
    readReplica = []
    for db in dbs['DBInstances']:
        readReplicaDB = db['ReadReplicaDBInstanceIdentifiers']
        readReplica.extend(readReplicaDB)
    print("readReplica : " + str(readReplica))
    for db in dbs['DBInstances']:
        db_id = db['DBInstanceIdentifier']
        db_engine = db['Engine']
        print('DB ID : ' + str(db_id))
        db_tags = get_tags_for_db(db)
        print("All Tags : " + str(db_tags))
        tag = next(iter(filter(lambda tag: tag['Key'] == 'AutoStop' and tag['Value'].lower() == 'true', db_tags)), None)
        print("AutoStop Tag : " + str(tag))
        if db_engine not in ['aurora-mysql', 'aurora-postgresql']:
            if db_id not in readReplica and len(readReplica) == 0:
                if tag:
                    target_db = db
                    print("DB Details : " + str(target_db))
                    db_id = target_db['DBInstanceIdentifier']
                    db_status = target_db['DBInstanceStatus']
                    print("DB ID : " + str(db_id))
                    print("DB Status : " + str(db_status))
                    if db_status == "available":
                        AutoStopping = rds.stop_db_instance(DBInstanceIdentifier=db_id)
                        print("Stopping DB : " + str(db_id))
                    else:
                        print("Database already stopped : " + str(db_id))
                else:
                    print("AutoStop Tag Key not set for Database to Stop...")
            else:
                print("Cannot stop or start a Read-Replica Database...")
    dbs = rds.describe_db_clusters()
    readReplica = []
    for db in dbs['DBClusters']:
        readReplicaDB = db['ReadReplicaIdentifiers']
        readReplica.extend(readReplicaDB)
    print("readReplica : " + str(readReplica))
    for db in dbs['DBClusters']:
        db_id = db['DBClusterIdentifier']
        db_engine = db['Engine']
        print('DB ID : ' + str(db_id))
        db_tags = get_tags_for_db_cluster(db)
        print("All Tags : " + str(db_tags))
        tag = next(iter(filter(lambda tag: tag['Key'] == 'AutoStop' and tag['Value'].lower() == 'true', db_tags)), None)
        print("AutoStop Tag : " + str(tag))
        if db_engine in ['aurora-mysql', 'aurora-postgresql']:
            if db_id not in readReplica and len(readReplica) == 0:
                if tag:
                    target_db = db
                    db_id = target_db['DBClusterIdentifier']
                    db_status = target_db['Status']
                    print("Cluster DB ID : " + str(db_id))
                    print("Cluster DB Status : " + str(db_status))
                    if db_status == "available":
                        AutoStopping = rds.stop_db_cluster(DBClusterIdentifier=db_id)
                        print("Stopping Cluster DB : " + str(db_id))
                    else:
                        print("Cluster Database already stopped : " + str(db_id))
                else:
                    print("AutoStop Tag Key not set for Cluster Database to Stop...")
            else:
                print("Cannot stop or start a Read-Replica Cluster Database...")
```
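For reference, a "CloudWatch Alarm State Change" event delivered by EventBridge carries the alarm's metric configuration in its `detail`, so the instance identifier can usually be read from the metric's dimensions rather than enumerating all instances. A minimal sketch, assuming the alarm monitors an `AWS/RDS` metric with a `DBInstanceIdentifier` dimension; the event below is a trimmed, hypothetical payload, not a captured one:

```python
def extract_db_instance_id(event):
    """Pull the DBInstanceIdentifier dimension out of a
    'CloudWatch Alarm State Change' EventBridge event.
    Returns None if no metric carries that dimension."""
    metrics = event.get('detail', {}).get('configuration', {}).get('metrics', [])
    for metric in metrics:
        dimensions = (metric.get('metricStat', {})
                            .get('metric', {})
                            .get('dimensions', {}))
        if 'DBInstanceIdentifier' in dimensions:
            return dimensions['DBInstanceIdentifier']
    return None

# Trimmed example event containing only the fields the helper reads
sample_event = {
    "detail": {
        "alarmName": "rds-low-connections",
        "state": {"value": "ALARM"},
        "configuration": {
            "metrics": [{
                "metricStat": {
                    "metric": {
                        "namespace": "AWS/RDS",
                        "name": "DatabaseConnections",
                        "dimensions": {"DBInstanceIdentifier": "mydb-1"}
                    }
                }
            }]
        }
    }
}

print(extract_db_instance_id(sample_event))  # mydb-1
```

With the identifier in hand, the handler could stop just that one instance instead of looping over `describe_db_instances()`.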
2 answers · 0 votes · 194 views · asked 6 months ago

Unable to pass aws.events.event.json in input transformer template

I am using EventBridge to trigger a Batch run through a custom put-events request (using boto3). My request is as follows:

```python
INPUT_DATA = {
    "data": [List<Dict>],  # i.e. the data key holds a list where each item is a dict
    "sample_key": "sample_string_value"
}
client = boto3.client("events")
response = client.put_events(Entries=[
    {
        "Source": "my.app.com",
        "Detail": json.dumps(INPUT_DATA),
        "DetailType": "sample-type",
        "EventBusName": "my-event-bus"
    }
])
```

In order to pass data from EventBridge to Batch, I have the following input transformer template:

```terraform
input_paths = {
}

input_template = <<EOF
{
  "ContainerOverrides": {
    "Environment": [
      {
        "Name": "PAYLOAD",
        "Value": "<aws.events.event.json>"
      }
    ]
  }
}
EOF
```

where `aws.events.event.json` is the predefined variable for the event payload (i.e. the detail key's value) as a *string*. When I run the script, I can see the `triggeredRule` metric at `1`, but the Batch invocation fails, i.e. a `failedInvocations` count of `1` is also seen in the metrics. At the same time, if I replace `aws.events.event.json` with `aws.events.rule-arn`, `aws.events.rule-name`, or `aws.events.event.ingestion-time`, the Batch run is triggered and the correct values of those variables are seen in the job description's environment section. I tried to refer to a similar issue [here](https://repost.aws/questions/QUCMU-UIYoThyQqlkCn2sWEQ/eventbridge-input-transformer-example-doesnt-work) but it does not seem to solve the issue. Can someone suggest where the issue is in the above input transformer when using `aws.events.event.json`? Is it due to the format of `INPUT_DATA` that I am sending? Would appreciate any hint. Thanks
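One plausible cause, offered as a hypothesis rather than a confirmed answer: `aws.events.event.json` expands to the raw JSON text of the event, and substituting that text inside the surrounding double quotes of `"Value"` leaves the inner quotes unescaped, so the rendered template is no longer valid JSON and the target invocation fails. The scalar variables (`aws.events.rule-name`, etc.) work because their values contain no quotes. A small sketch of why the substitution breaks, using naive text replacement to stand in for the transformer:

```python
import json

# The transformer template before substitution; <ev> stands in for
# the <aws.events.event.json> placeholder.
template = '{"ContainerOverrides": {"Environment": [{"Name": "PAYLOAD", "Value": "<ev>"}]}}'

# The event payload is itself JSON text, full of double quotes.
event_json = json.dumps({"data": [{"k": 1}], "sample_key": "sample_string_value"})

# Textual substitution drops the JSON, quotes and all, inside "Value".
rendered = template.replace("<ev>", event_json)

try:
    json.loads(rendered)
    valid = True
except json.JSONDecodeError:
    valid = False

print(valid)  # False: the unescaped inner quotes break the outer JSON
```

If this is the cause, passing the payload through `input_paths` (e.g. a path on `$.detail`) or avoiding quoting a JSON-valued variable would be the direction to investigate.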
1 answer · 0 votes · 63 views · asked 6 months ago