I have spoken with several others about this same question, and the answer really boils down to this:
There are many ways to move data around AWS, and many of them can be the 'right' way depending on several factors such as velocity, volume, data sources, data consumption patterns and tools, and more. In short: there is no blanket 'right' answer; it will depend on the specific context.
The initially proposed approach (using a Kinesis Data Stream as the primary delivery mechanism, and using that to feed two Kinesis Firehose streams, each targeted at one of the required destinations) is an acceptable approach and pattern. However, the question that should be answered is: does the customer want to create a 'raw data' bucket of these logs, or is the landed data (in either S3/Parquet or Elasticsearch) the acceptable source of truth?
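To make the fan-out pattern concrete, here is a minimal CloudFormation sketch of one Kinesis Data Stream feeding two Firehose delivery streams (one toward S3, one toward OpenSearch). This is an illustration only: the bucket and domain ARNs are placeholders, `FirehoseRole` is an assumed IAM role not defined here, and buffering, format-conversion, and index settings are omitted.

```yaml
# Sketch only — ARNs are placeholders and FirehoseRole is assumed to exist.
Resources:
  LogStream:
    Type: AWS::Kinesis::Stream
    Properties:
      ShardCount: 1

  S3Firehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamType: KinesisStreamAsSource
      KinesisStreamSourceConfiguration:
        KinesisStreamARN: !GetAtt LogStream.Arn
        RoleARN: !GetAtt FirehoseRole.Arn        # role definition omitted
      ExtendedS3DestinationConfiguration:
        BucketARN: arn:aws:s3:::my-parquet-bucket  # placeholder
        RoleARN: !GetAtt FirehoseRole.Arn
        # DataFormatConversionConfiguration would go here for Parquet output

  SearchFirehose:
    Type: AWS::KinesisFirehose::DeliveryStream
    Properties:
      DeliveryStreamType: KinesisStreamAsSource
      KinesisStreamSourceConfiguration:
        KinesisStreamARN: !GetAtt LogStream.Arn
        RoleARN: !GetAtt FirehoseRole.Arn
      AmazonopensearchserviceDestinationConfiguration:
        DomainARN: arn:aws:es:us-east-1:111111111111:domain/my-domain  # placeholder
        IndexName: logs
        RoleARN: !GetAtt FirehoseRole.Arn
        S3Configuration:                          # backup bucket for failed records
          BucketARN: arn:aws:s3:::my-backup-bucket
          RoleARN: !GetAtt FirehoseRole.Arn
```

Note that both delivery streams read independently from the same Kinesis Data Stream, which is what decouples the two destinations; whether to also retain a raw-data S3 bucket is the source-of-truth question raised above.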
The other patterns mentioned here by others are also acceptable; however, each should be reviewed for trade-offs and impacts to ensure that the solution matches the customer's requirements and context (e.g. velocity, volume, data sources, data consumption patterns and tools, and more).