Add additional filters to robots.txt to avoid crawler traps #335

@PGijsbers

Description

I updated the robots.txt in #334. Unfortunately, we still see a sizable number of crawlers getting stuck because of two issues (see also #336). One issue is that most pages accept filter and sort parameters, which means there is a near-limitless number of URLs to crawl. We should disallow these in our robots.txt. We probably should not do this right away, though: the entity pages (e.g., the dataset pages, such as https://www.openml.org/search?type=data&sort=runs&id=151&status=active) currently also carry filters/sorts, and I do think we want crawlers to visit the dataset pages. So we must first serve the entity pages at URLs that do not contain query strings. Then we can disallow crawling of the remaining pages that do support queries.
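For reference, a minimal sketch of what the follow-up rule could look like once the entity pages are reachable without query strings. This is not something we can ship yet, since the dataset pages still live under `/search?...`; it is only meant to illustrate the direction:

```
# Sketch only: assumes entity pages will be served at query-free URLs,
# which is not the case today (dataset pages still use /search?type=data&id=...).

User-agent: *
# Block every URL that carries a query string (filters, sorting, pagination),
# which is where crawlers currently get trapped in near-limitless combinations.
Disallow: /*?
```

Query-free entity pages would remain crawlable by default, since they are not matched by the `/*?` pattern. The `*` wildcard is understood by the major crawlers (Googlebot, Bingbot) and is part of RFC 9309, though support is not guaranteed for every bot.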

Metadata

Assignees

No one assigned

Labels

bug (Something isn't working)

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
