Robots.txt vs noindex: which one solves the problem you actually have
Compare robots.txt and noindex so you can choose the right control for crawling, indexing and sensitive pages.
These two controls solve different problems
Robots.txt tells crawlers which paths they should not request at all. A noindex directive tells search engines that a page, once crawled, should be kept out of the index. The two ideas are related, but they are not interchangeable: one controls fetching, the other controls what appears in results.
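As a minimal sketch, the two controls live in different places: robots.txt is a file at the site root, while noindex is a per-page directive (the paths and rules below are illustrative, not recommendations for any particular site).

```text
# robots.txt (served at /robots.txt) — asks compliant crawlers
# not to REQUEST these paths at all
User-agent: *
Disallow: /internal-search/

<!-- noindex (in the page's <head>, or as an X-Robots-Tag HTTP header) —
     the page IS fetched, but should be dropped from search results -->
<meta name="robots" content="noindex">
```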
This is why technical SEO mistakes cluster here: teams reach for robots.txt when the real goal is deindexation, or reach for noindex when the actual problem is wasted crawl budget.
Pick the directive based on the real risk
Use robots.txt when you want to reduce crawling of low-value areas like internal search, faceted combinations or staging sections. Use noindex when the page can be crawled but should not appear as a search result. Note that the two do not combine the way people expect: a noindex directive only works if the crawler can fetch the page, so blocking that page in robots.txt hides the directive, and a blocked URL can still end up indexed if other sites link to it.
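The crawl-blocking half of this is easy to verify offline. Python's standard library ships a robots.txt parser, so you can check which paths a given rule set blocks before deploying it. The rules and paths below are assumed examples for illustration:

```python
# Sketch: checking which paths a robots.txt rule set blocks, using the
# stdlib urllib.robotparser. The rules below are illustrative assumptions.
import urllib.robotparser

rules = """
User-agent: *
Disallow: /search
Disallow: /staging/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Blocked: compliant crawlers will not request these, but the bare URLs
# can still be indexed if other sites link to them.
print(parser.can_fetch("*", "/search?q=shoes"))   # False
print(parser.can_fetch("*", "/staging/home"))     # False

# Allowed: this page is crawlable, so an on-page noindex directive
# would actually be seen and honored.
print(parser.can_fetch("*", "/products/shoes"))   # True
```

Running a check like this against a staging copy of robots.txt is a cheap way to catch the classic mistake of accidentally blocking the very pages whose noindex tag you need crawlers to read.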
If the page is truly sensitive, neither option should be your main protection: robots.txt is a public file that advertises the paths it hides, and noindex still lets the page be fetched. That is a security and access control problem, solved with authentication, not with an SEO directive.