If your goal is for these pages not to be seen by the public at all, it's best to put a password on them, and/or add a configuration that only allows specific, whitelisted IP addresses to access the site (this can be done at the server level, likely via your host or server admin).
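For example, if the site runs on Apache and you can drop in an .htaccess file, something roughly like this would require a password unless the visitor comes from a whitelisted IP. This is just a sketch; the .htpasswd path and the IP are placeholders, and if you're on nginx or a managed host the setup will look different:

    # .htaccess (Apache 2.4) — allow a whitelisted IP, require a password for everyone else
    AuthType Basic
    AuthName "Restricted"
    AuthUserFile /path/to/.htpasswd
    <RequireAny>
        Require ip 203.0.113.10
        Require valid-user
    </RequireAny>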
If your goal is to have these pages exist publicly, just not indexed by Google or other search engines, then, as others have mentioned, you have a few options. But first I think it's important to distinguish between the two main functions of Google Search here: Crawling and Indexing.
Crawling vs. Indexing
Google crawls your site, and Google indexes your site. Crawling is how Google's crawlers discover the pages of your site; indexing is how Google organizes and stores those pages so they can appear in search results.
This distinction matters when you're trying to block or remove pages from Google's index. Many people default to blocking via robots.txt, which is a directive telling Google what (or what not) to crawl. It's often assumed that if Google doesn't crawl a page, it won't index it. In reality, it's extremely common to see pages that are blocked by robots.txt show up in Google's index anyway.
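To be clear about what "blocking via robots.txt" looks like, it's just a plain text file at the root of the site, something along these lines (the /private/ path is only an example):

    # robots.txt — asks crawlers not to fetch anything under /private/
    User-agent: *
    Disallow: /private/

Again, this only asks crawlers not to fetch those URLs; it says nothing about whether they end up in the index.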
Directives to Google & Search Engines
These types of "directives" are merely recommendations to Google about which parts of your site to crawl and index; search engines aren't required to follow them. This is important to know. Over the years I've seen many devs assume that blocking the site via robots.txt is enough, only to find the site indexed in Google a few weeks later. If someone else links to the site, or one of Google's crawlers gets hold of it some other way, it can still be indexed.
Recently, with the updated Google Search Console (GSC) dashboard, there's a report called the "Index Coverage Report." It gives webmasters data that wasn't directly available before: specific details on how Google handles a given set of pages. I've seen and heard of many websites receiving warnings there labeled "Indexed, though blocked by robots.txt."
Google's latest documentation says that if you want pages kept out of the index, you should add noindex, nofollow tags to them.
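In practice that means something like this in the head of each page you want kept out, or, if you can't edit the HTML (PDFs, for instance) and your server lets you set headers, the equivalent HTTP response header:

    <!-- in the <head> of each page to keep out of the index -->
    <meta name="robots" content="noindex, nofollow">

    # or, as an HTTP response header
    X-Robots-Tag: noindex, nofollow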
Remove URLs Tool
Just to build on what others have mentioned about the "Remove URLs Tool"...
If the pages are already indexed and it's urgent to get them out, Google's "Remove URLs Tool" lets you temporarily block pages from search results. A removal request lasts 90 days, but I've used it to get pages out of Google faster than noindex, nofollow alone; think of it as an extra layer.
While the "Remove URLs Tool" is in effect, Google will still crawl the page and possibly cache it, but during that window you can add the noindex, nofollow tags so Google sees them; by the time the 90 days are up, it should hopefully know not to index your page anymore.
IMPORTANT: Using both robots.txt and noindex, nofollow tags sends somewhat conflicting signals to Google.
The reason is that if you tell Google not to crawl a page, and you also have noindex, nofollow on that page, Google may never crawl the page and so never see the noindex, nofollow tag. The page can then still be indexed through some other route (a link from elsewhere, or whatnot). The details of why this happens are rather vague, but I've seen it happen.
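As a concrete illustration of the conflict, imagine both of these in place at once. Google is told not to fetch the page, so it may never see the tag telling it not to index the page (the paths here are just examples):

    # robots.txt — tells Google not to crawl /private/
    User-agent: *
    Disallow: /private/

    <!-- /private/page.html — Google may never fetch this page, so it never sees: -->
    <meta name="robots" content="noindex, nofollow">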
In short, in my opinion, the best way to stop specific URLs from being indexed is to add a noindex, nofollow tag to those pages. Along with that, make sure you're not also blocking those URLs with robots.txt, as that could prevent Google from properly seeing those tags. You can then use the Remove URLs Tool to temporarily hide them from search results while Google processes your noindex, nofollow.