Google and Bing now render JavaScript when crawling websites. However, there is a lot that can go wrong in that process. The first step in debugging indexing problems is to sign up for Google Search Console, verify your site, and use the URL Inspection tool. It can show a rendered screenshot of a page so that you can see whether Googlebot is actually seeing your content or not.
Here is a list of the common things that can go wrong:
Only Google and Bing are advanced enough to index JavaScript websites
Other search engines such as Yandex and Baidu still do not index client-side rendered websites, as far as I know. Since Google has a 90%+ share of the search market, this may not be a deal breaker for you.
If you need your JS site to show up in the other search engines, you can use server-side rendering (SSR), which is described in a section below.
Google takes months to index new websites
Google seems to be taking months to index any new website these days, regardless of whether it requires rendering. I'd expect an eight-month-old site to have at least some of its content indexed, but keep in mind that it could just be a matter of waiting longer.
Websites that require rendering take longer
Googlebot has separate queues for regular crawling and rendering. It does a first pass to grab the server-supplied HTML, then comes back later to do the rendering. Google has announced that the typical delay between the first crawl and rendering is now down to seconds. Despite that, websites that require rendering often seem to lag in indexing by days or weeks compared to pages that don't need to be rendered. See Rendering Queue: Google Needs 9X More Time To Crawl JS Than HTML | Onely
Assign each piece of content its own URL
When you are using a single-page application (SPA) framework, it is tempting to use a single URL for your entire website. Doing so will kill your SEO. Google needs to be able to direct users to specific content deep within your site instead of sending all visitors to your home page. That means you need to assign each piece of content on your site its own URL. Google will only crawl and index content that has its own URL. If you have a true one-page site, Google will only ever index the content that is visible when the home page loads.
Note that you can still use SPA frameworks; you just have to use pushState to change URLs for users without causing a full page reload from the server.
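For example, a bare-bones History API setup might look like the sketch below; showContent() is a hypothetical function standing in for however your app renders a given path:

    // Minimal sketch of SPA navigation with the History API.
    // showContent() is a hypothetical function that renders a route client-side.
    function navigateTo(path) {
      history.pushState({}, "", path); // change the URL without a full page load
      showContent(path);               // render the content for that URL
    }

    // Keep the URL and content in sync when users press back/forward.
    window.addEventListener("popstate", function () {
      showContent(location.pathname);
    });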
Ensure that your web app loads and shows the correct content for any starting URL
Your web app needs to load for every URL on your site. The typical way of implementing this is to put a front controller rule into .htaccess that causes index.html to be served, regardless of what URL is requested.
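On an Apache server, that front controller rule typically looks something like this (a sketch that assumes mod_rewrite is enabled and index.html sits in your web root):

    # Serve index.html for any request that doesn't match a real file or directory.
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^ /index.html [L]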
Your site also needs to show the correct content for a deep URL without the visitor navigating there from the home page. Googlebot crawls your site by requesting each URL directly. If it doesn't get the content for a URL by visiting it first, that page won't get indexed. Additionally, users coming from Google need to see the deep content for the URL when they click through to it from the search results.
You need to make sure that the content is visible within a few seconds of the page loading. Googlebot only allows the page to render for a limited time.
All content needs to load for a URL without any user interaction. Googlebot doesn't simulate any user interaction such as clicking, scrolling, or typing.
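Putting those requirements together: when the page loads, read the URL and fetch the content for it right away. Here is a rough sketch; the /api/content endpoint and the "app" element are hypothetical placeholders for however your site actually loads its data:

    // On page load, fetch and render the content for whatever URL the visitor
    // (or Googlebot) started on, without requiring any clicks or scrolling.
    document.addEventListener("DOMContentLoaded", function () {
      fetch("/api/content" + location.pathname)        // hypothetical JSON endpoint
        .then(function (response) { return response.json(); })
        .then(function (data) {
          document.getElementById("app").innerHTML = data.html;
        });
    });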
Render <a href=...> anchors for navigation
Googlebot only finds deep URLs in your site by scanning the document object model (DOM) for links. Googlebot doesn't click on anything, so you need to use links in your rendered HTML to tell Googlebot about all the pages.
When users click on these links, your JavaScript can intercept the clicks and load the desired content without reloading the whole page.
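A common way to do that is a delegated click handler on real anchors; loadRoute() here is a hypothetical stand-in for your client-side router:

    // Render real links for Googlebot, but handle clicks client-side for users.
    document.addEventListener("click", function (event) {
      var link = event.target.closest("a");
      if (!link || link.origin !== location.origin) {
        return;                         // let external links behave normally
      }
      event.preventDefault();           // stop the full page reload
      history.pushState({}, "", link.pathname);
      loadRoute(link.pathname);         // hypothetical client-side route renderer
    });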
Pay attention to 404 errors
If a URL on your site that shouldn't have any content is requested, you shouldn't serve default content for it; you should show a "404 Not Found" error. With an SPA this is harder to do than with server-side content. Common ways of simulating a 404 that Googlebot understands are to have JavaScript change the URL to an actual 404 page, or to simply render an error message that says "404 Not Found."
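Here is a sketch of both approaches. The knownRoutes list is hypothetical, and the /404.html page is assumed to be one your server returns with an actual 404 status:

    // When the requested path isn't a known route, either send the browser to a
    // real 404 page or render an explicit "404 Not Found" message in place.
    var knownRoutes = ["/", "/about", "/products"];   // hypothetical route list

    if (knownRoutes.indexOf(location.pathname) === -1) {
      // Option 1: redirect to a page the server returns with a 404 status.
      window.location.replace("/404.html");

      // Option 2: render an error message Googlebot can recognize as a soft 404.
      // document.getElementById("app").innerHTML = "<h1>404 Not Found</h1>";
    }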
Consider using server-side rendering
Most client-side JavaScript frameworks have some way of rendering the initial page load on the server, usually by running Node.js. When you implement this, search engine bots get a normal HTML and CSS page, which makes crawling and indexing much easier.
Users will get their first page view pre-rendered, but then use client-side rendering for their subsequent page views.
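As a rough sketch of the idea in Node.js with Express 4 (renderPage() is a hypothetical function shared with the client; in practice frameworks such as Next.js or Nuxt handle this for you):

    // Minimal server-side rendering sketch with Express (hypothetical renderPage()).
    const express = require("express");
    const { renderPage } = require("./render");  // same render logic the client uses

    const app = express();

    app.get("*", function (req, res) {
      const html = renderPage(req.path);         // render the content server-side
      res.send("<!DOCTYPE html><html><body><div id='app'>" + html + "</div>" +
               "<script src='/bundle.js'></script></body></html>");
    });

    app.listen(3000);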
It could be a problem with your content, your link structure, or your reputation
Even if you have all the technical stuff related to client-side rendering figured out, there are plenty of more basic reasons that Google may choose not to index content. See Why aren't search engines indexing my content?