You will want something like the following. Note that it has a deliberate sleep in it to try to avoid API throttling. This isn't perfect - I could probably do better by checking the return value from the delete_log_stream call - but it will only be an issue if you're deleting many logs.
It also checks for a keyword (in this case "keyword") and only processes log groups whose names contain it. And it sets the retention period to 7 days for any log group that doesn't already have it set to 7.
The middle of the loop deletes log streams older than the retention window. They should already be gone, but when the retention period is set after streams have been created, those older streams are retained.
This doesn't completely answer your question, but it gives you something to start with.
import boto3
import time

logs = boto3.client('logs')

def lambda_handler(event, context):
    logGroups = logs.describe_log_groups()['logGroups']
    for group in logGroups:
        # Only process log groups whose name contains the keyword
        if 'keyword' not in group['logGroupName']:
            continue
        daysRetention = group.get('retentionInDays', 0)
        if daysRetention != 7:
            # Set a 7-day retention policy; the group is picked up on a later run
            logs.put_retention_policy(logGroupName=group['logGroupName'], retentionInDays=7)
            continue
        maxRetention = time.time() - (daysRetention * 86400)
        logStreams = logs.describe_log_streams(logGroupName=group['logGroupName'])['logStreams']
        for stream in logStreams:
            # creationTime is in milliseconds since the epoch; convert to seconds
            if (stream['creationTime'] / 1000) < maxRetention:
                print(f'Deleting: {group["logGroupName"]} {stream["logStreamName"]}')
                logs.delete_log_stream(logGroupName=group['logGroupName'], logStreamName=stream['logStreamName'])
                # Crude guard against API throttling
                time.sleep(0.2)
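The timestamp math is the part that is easiest to get wrong: CloudWatch Logs reports creationTime in milliseconds since the epoch, while time.time() returns seconds. A minimal, standalone sketch of that cutoff check (the function name is mine, not part of the original answer):

```python
import time

def is_older_than_retention(timestamp_ms, retention_days, now=None):
    """Return True if a CloudWatch timestamp (milliseconds) falls
    outside the retention window (days)."""
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400   # days -> seconds
    return (timestamp_ms / 1000) < cutoff   # milliseconds -> seconds

# A stream created 10 days ago fails a 7-day retention check
now = 1_700_000_000
print(is_older_than_retention((now - 10 * 86400) * 1000, 7, now=now))  # True
```

Passing `now` explicitly makes the check deterministic, which keeps it easy to test without mocking the clock.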
I would change the stream['creationTime'] to stream['lastIngestionTime'].
^^^ What he said. ;-)
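A sketch of that suggested comparison (the helper name is mine; falling back to creationTime is a defensive assumption for streams that have never received an event and so may have no lastIngestionTime):

```python
import time

def stream_is_stale(stream, retention_days, now=None):
    """Judge staleness by the stream's last ingestion, not its creation,
    falling back to creationTime when lastIngestionTime is absent."""
    now = time.time() if now is None else now
    last_activity_ms = stream.get('lastIngestionTime', stream['creationTime'])
    return (last_activity_ms / 1000) < now - retention_days * 86400

# An old but still-active stream is kept; an old idle one is not
now = 1_700_000_000
idle = {'creationTime': (now - 30 * 86400) * 1000}
active = {'creationTime': (now - 30 * 86400) * 1000,
          'lastIngestionTime': (now - 86400) * 1000}
print(stream_is_stale(idle, 7, now=now), stream_is_stale(active, 7, now=now))  # True False
```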