I understand that you have an issue where the Redshift COPY command takes the wrong file from S3: a command such as
copy bundle FROM 's3://....../..../bundle'
ran into a column formatting issue, and when you check the stl_load_errors table, the file path captured is different from the one you specified in the command.
From my analysis, the object path in the COPY command is treated as a key prefix, so it behaves like a wildcard: COPY loads every object in the bucket whose key begins with that path. Adding a '/' at the end of the intended subfolder name narrows the match, but what I strongly suggest is using the absolute path to the file.
An example in your case would be: copy bundle FROM 's3://....../..../bundle.csv'
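To illustrate why the shorter path picks up the wrong file, here is a minimal sketch of the prefix-matching behavior in plain Python. The key names are hypothetical, and the function only models the matching rule, not the actual COPY implementation:

```python
def keys_matched_by_copy(keys, object_path):
    """Model of COPY's rule: the object path is a key prefix,
    so every object whose key starts with it gets loaded."""
    return [k for k in keys if k.startswith(object_path)]

# Hypothetical objects in the bucket:
bucket_keys = [
    "data/bundle.csv",
    "data/bundle_old.csv",    # also starts with "data/bundle"!
    "data/other.csv",
]

# The prefix "data/bundle" matches two objects, not just bundle.csv:
print(keys_matched_by_copy(bucket_keys, "data/bundle"))
# The absolute object path matches exactly one file:
print(keys_matched_by_copy(bucket_keys, "data/bundle.csv"))
```

This is why stl_load_errors can show a file path you never typed: the extra object that shares the prefix was loaded, and its formatting caused the error.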
For more information on the 's3://copy_from_s3_objectpath' parameter, see the Parameters section of the following documentation:
https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-source-s3.html
I hope the above information is helpful in resolving your issue.