Hi Shayne.
Although I have not tried this scenario, a couple of possible options are:
- Use a sitemap. Per the documentation:
If you want to crawl a sitemap, check that the base or root URL is the same as the URLs listed on your sitemap page. For example, if your sitemap URL is https://example.com/sitemap-page.html, the URLs listed on this sitemap page should also use the base URL "https://example.com/".
- Include/exclude pages explicitly using robots.txt. From the documentation:
Amazon Q Web Crawler respects standard robots.txt directives like Allow and Disallow. You can modify the robots.txt file of your website to control how Amazon Q Web Crawler crawls your website. Use the user-agent to make entries designed for Amazon Q.
User-agent: amazon-QBusiness
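As a quick sketch of how a crawler honoring robots.txt would interpret such entries, you can check them locally with Python's standard `urllib.robotparser`. The `/private/` and `/docs/` paths below are hypothetical examples, not paths from your site:

```python
# Sketch: check how Allow/Disallow entries for the "amazon-QBusiness"
# user agent would be interpreted by a standards-compliant crawler.
# The paths used here are hypothetical examples.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: amazon-QBusiness
Disallow: /private/
Allow: /docs/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# /docs/ pages are allowed; /private/ pages are blocked.
print(rp.can_fetch("amazon-QBusiness", "https://example.com/docs/guide.html"))    # True
print(rp.can_fetch("amazon-QBusiness", "https://example.com/private/page.html"))  # False
```

This only verifies the directives themselves; the actual crawl behavior is up to the Amazon Q Web Crawler.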
I hope this helps.