Because the partition columns are also written into the schema of the Parquet files themselves, reading the data as a DynamicFrame and then performing any Spark action on it fails with the error below:
AnalysisException: Found duplicate column(s) in the data schema and the partition schema:
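To make the collision concrete, here is a minimal plain-Python sketch (no Spark required); the column names are illustrative, but they show how the columns stored inside the Parquet files can overlap with the partition columns derived from the S3 path (e.g. `s3://bucket/table/year=2023/month=07/day=04/`):

```python
# Columns written inside the Parquet files (illustrative names).
data_schema = ["id", "value", "year", "month", "day"]

# Columns inferred from the partitioned S3 path layout.
partition_schema = ["year", "month", "day"]

# Spark rejects the table because these two schemas overlap.
duplicates = [c for c in data_schema if c in partition_schema]
print(duplicates)  # → ['year', 'month', 'day']
```

Any overlap in this list is enough to trigger the AnalysisException above.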
The ideal fix is to repair the underlying Parquet files by re-writing them after dropping the partition columns from the data itself, so the partition columns exist only in the path layout, not in the file schema.
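A minimal sketch of that re-write, using plain Python dicts to stand in for rows (in Spark you would do the equivalent with `df.drop(*partition_cols).write.partitionBy(*partition_cols)`); the row contents here are assumptions for illustration:

```python
# Partition columns that should live only in the path, not in the files.
partition_cols = ["year", "month", "day"]

def drop_partition_cols(row, partition_cols):
    """Return the row without the partition columns, so that the
    re-written files no longer duplicate the partition schema."""
    return {k: v for k, v in row.items() if k not in partition_cols}

row = {"id": 1, "value": "a", "year": 2023, "month": 7, "day": 4}
print(drop_partition_cols(row, partition_cols))  # → {'id': 1, 'value': 'a'}
```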
However, as a workaround in this scenario, you can manually update the partition column names in the Glue Data Catalog so that they no longer collide with the column names in the file schema. After that, the Spark action succeeds: because the DynamicFrame is created with the 'from_catalog()' method, the partition column names it reads come from the Glue Data Catalog.
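A sketch of that rename as a transformation on a dict shaped like the Glue table definition; in practice you would fetch the table with boto3's `glue.get_table()` and push the change back with `glue.update_table()`, which requires an AWS environment, so only the local transformation is shown here. The table and the new names are assumptions for illustration:

```python
# Hypothetical Glue table definition (shape of the TableInput structure).
table_input = {
    "Name": "sales",
    "PartitionKeys": [
        {"Name": "year", "Type": "string"},
        {"Name": "month", "Type": "string"},
    ],
}

# Assumed replacement names that no longer collide with the file schema.
rename = {"year": "partition_year", "month": "partition_month"}

# Rename each partition key in place.
for key in table_input["PartitionKeys"]:
    key["Name"] = rename.get(key["Name"], key["Name"])

print([k["Name"] for k in table_input["PartitionKeys"]])
# → ['partition_year', 'partition_month']
```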