How robots.txt works and what it should not be used for
A practical guide to robots.txt for technical SEO, with clear limits on what crawler directives can and cannot do.
Robots.txt controls crawling, not secrecy
A robots.txt file gives instructions to crawlers about which paths they should or should not request. It is useful for steering crawl budget away from wasteful crawling of low-value areas.
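As a sketch of what such a file looks like, here is a minimal robots.txt that discourages crawling of two illustrative low-value paths (the paths and sitemap URL are assumptions for the example, not recommendations for any specific site):

```
# Example robots.txt: keep compliant crawlers out of low-value areas.
User-agent: *
Disallow: /search/        # internal search result pages
Disallow: /cart/          # transactional pages with no search value

Sitemap: https://example.com/sitemap.xml
```

Rules apply per user agent group, and the `Sitemap` line is an optional hint that is independent of the allow/disallow rules.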
That does not make a blocked URL private or guaranteed absent from search results: anyone can still request the page directly, and a search engine that discovers a blocked URL through external links may index it anyway, typically listing it without a snippet.
Use it for guidance, not as a security layer
Robots.txt helps shape crawl behavior, but it should not be treated as protection for sensitive content.
If a page must stay private, use proper access controls instead of relying on crawler directives.
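The voluntary nature of these rules can be seen by checking them the way a polite crawler does. The sketch below uses Python's standard `urllib.robotparser` to evaluate a disallow rule; note that the parser only reports what a compliant crawler should skip, and nothing in it prevents a direct request to the blocked path (the rules and URLs are illustrative assumptions):

```python
from urllib import robotparser

# Parse rules directly instead of fetching robots.txt over the network.
rp = robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /admin/
""".splitlines())

# A compliant crawler consults can_fetch() before requesting a URL.
print(rp.can_fetch("*", "https://example.com/admin/settings"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```

A crawler that ignores robots.txt, or a person with the URL, can still fetch `/admin/settings`; only server-side access control actually keeps it private.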