
For SEO, I need to have my link https://www.example.com/ on a third party website. However, I also need this website to pass parameters to my site.

If the third party website used the structure below, would it pass a backlink to my main URL https://www.example.com/ or to https://www.example.com/?xyz=55?

Would this be seen as legit from an SEO perspective or pushing it? Is there a better way to accomplish this from an SEO perspective?

<script>
   var val = 55;
</script>

Link to <a href="https://www.example.com/" onclick="location.href=this.href+'?xyz='+val;return false;">My Site</a>

user1609391

2 Answers


Based on Google's crawlable links guidelines, the link will be crawled as https://www.example.com/, because Google cannot follow links from onclick attributes.

Google search central says:

Google can follow links only if they are an <a> tag with an href attribute. Links that use other formats won't be followed by Google's crawlers. Google cannot follow <a> links without an href attribute or other tags that perform as links because of script events. Here are examples of links that Google can and can't follow:

  • Can follow:
    • <a href="https://example.com">
    • <a href="/relative/path/file">
  • Can't follow:
    • <a routerLink="some/path">
    • <span href="https://example.com">
    • <a onclick="goto('https://example.com')">
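If the goal is a crawlable backlink that still carries a dynamic parameter, one option (a sketch, not something from the question or the guidelines) is to keep a plain crawlable href and attach the parameter in a click listener, building the URL with the URL API rather than string concatenation. The withParam helper and the site-link id are illustrative names:

```javascript
// Sketch: leave the crawlable href untouched and append the tracking
// parameter only when a real user clicks. withParam is a hypothetical
// helper; the URL API handles encoding and any existing query string.
function withParam(href, key, value) {
  const url = new URL(href);
  url.searchParams.set(key, value);
  return url.toString();
}

// Usage in the page (assumes an anchor like
// <a id="site-link" href="https://www.example.com/">My Site</a>):
//
// document.getElementById('site-link').addEventListener('click', (e) => {
//   e.preventDefault();
//   location.href = withParam(e.currentTarget.href, 'xyz', 55);
// });
```

Crawlers that read only the href see the clean URL, while human visitors still arrive with the parameter attached.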

I just saw the test that @Trebor mentioned in a comment, which reported that onclick links were fully crawled and followed. However, that test dates from 2015, while the official guideline was first captured on 2020-11-11 and last updated on 2021-08-26, so I prefer to trust the official announcement over the test. Because the test looks outdated, I would need a test run at least this year to be convinced otherwise.

You said in a comment:

I'm thinking of blocking the parameter pages with robots.txt or in webmaster tools to prevent duplicate content and put the focus on https://www.example.com

Based on the duplicate URLs guideline, don't block those pages with robots.txt; instead use a rel=canonical <link> tag, a rel=canonical HTTP header, a sitemap, a 301 redirect, or an AMP variant, as described in Google's official guideline.

For all canonicalization (duplicate-URL signaling) methods, follow these general guidelines:

  • Don't use the robots.txt file for canonicalization purposes.

  • Don't use the URL removal tool for canonicalization. It removes all versions of a URL from Search.

  • Don't specify different URLs as canonical for the same page using the same or different canonicalization techniques (for example, don't specify one URL in a sitemap but a different URL for that same page using rel="canonical").

  • Don't use noindex as a means to prevent selection of a canonical page. This directive is intended to exclude the page from the index, not to manage the choice of a canonical page.

  • Specify a canonical page when using hreflang tags. Specify a canonical page in same language, or the best possible substitute language if a canonical doesn't exist for the same language.

  • Link to the canonical URL rather than a duplicate URL, when linking within your site. Linking consistently to the URL that you consider to be canonical helps Google understand your preference.
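As a minimal illustration of the rel=canonical <link> method listed above (using the URL from the question; this is a sketch, not a complete setup), the parameterized page would declare the bare homepage as its canonical:

```html
<!-- Served on https://www.example.com/?xyz=55 (and on the homepage itself): -->
<link rel="canonical" href="https://www.example.com/" />
```

The HTTP-header equivalent, useful for non-HTML resources, is Link: <https://www.example.com/>; rel="canonical".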


This is an open door for XSS attacks and very bad practice!

onclick="location.href=this.href+'?xyz='+val;return false;"

Use click handlers only to track mailto: links and other non-HTTP calls.

You can use something like this:

<a href="http://example.com" target="_blank" rel="noopener">
    MySite
</a>

As for ?xyz=55: whether link juice is transferred will depend on how the "?" parameter is handled by your web server, in Google Search Console, and in robots.txt.

Roman Mikhol