AWS Glue Crawlers automatically discover the schema of data in Amazon S3 or other data sources. They also help capture schema evolution over time.

If your schema is fixed (does not change often) and already known, and you have no issue creating your tables manually via the console or programmatically via the APIs, then you do not need to use them.

Consider also that Crawlers do have a cost, so cost optimization might be another reason to avoid them if you are fine with self-managing the schemas of your datasets.
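As a minimal sketch of the "create tables via the APIs" alternative, the following builds a Glue `TableInput` for a hypothetical CSV dataset (the database name, table name, S3 path, and columns are all illustrative assumptions) and shows where the `glue:CreateTable` call would go:

```python
# Sketch: registering a Glue Data Catalog table manually instead of running a Crawler.
# All names and the S3 location below are example assumptions, not values from your account.

table_input = {
    "Name": "sales_raw",  # hypothetical table name
    "StorageDescriptor": {
        "Columns": [
            {"Name": "order_id", "Type": "string"},
            {"Name": "order_date", "Type": "date"},
            {"Name": "amount", "Type": "double"},
        ],
        "Location": "s3://example-bucket/sales/",  # hypothetical S3 path
        "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
        "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
        "SerdeInfo": {
            "SerializationLibrary": "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe",
            "Parameters": {"field.delim": ","},
        },
    },
    "PartitionKeys": [{"Name": "region", "Type": "string"}],  # optional partition column
    "TableType": "EXTERNAL_TABLE",
}

# With AWS credentials configured, you would apply it like this:
# import boto3
# glue = boto3.client("glue")
# glue.create_table(DatabaseName="example_db", TableInput=table_input)
```

When you self-manage schemas this way, any schema change (new column, new partition key) must be applied yourself, e.g. with `glue.update_table`, which is exactly the upkeep a Crawler automates.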
For additional information on Crawlers, you can refer to this section of the AWS Glue documentation.
Hope this helps!