
I hope the content that is pulled on page load can be crawled by the Google robot. How can I do that?

I went to Google Webmaster Tools to check whether the Google crawler could see the content pulled by the script, but sadly it does not.

I know there are some ways to make AJAX content crawlable, like http://moz.com/blog/how-to-allow-google-to-crawl-ajax-content, where the approach is to put the query parameters in the URL after #!. But my query parameters are in the script tag:

<script src="https://script.google.com/macros/s/AKfycbwjTaclaKdzdr4IgCOM_PpIWJvxFSGkz2qLgrhfJ5fNYf09djI/exec?id=1rnLnui4IXBKZcsDH5zxFHS_rO4TpRPGQD9RY4C_OMAc&callback=pullDone"></script>
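For context, the tag above works via JSONP: the endpoint wraps its JSON payload in a call to the function named by the `callback` parameter (`pullDone` here), and the browser executes the response as a script. A minimal sketch of that flow, with a hypothetical payload shape:

```javascript
// Page-side callback, defined before the <script> tag loads.
// The payload shape ({ body: ... }) is a guess for illustration.
let rendered = null;

function pullDone(data) {
  rendered = data.body; // in the real page this would be injected into the DOM
}

// The server's response is effectively a one-line script like this,
// which the browser executes when the <script> tag finishes loading:
pullDone({ body: "content pulled from the spreadsheet" });
```

Because the content only appears after this script runs, a crawler that does not execute JavaScript sees an empty page.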

Demo page: http://radiansmile.github.io/web/Test/ajax_at_Initial.html

How could I solve this problem?

Stephen Ostermiller
Radian Jheng

1 Answer


The solution is to serve the content as HTML to search engines and as JavaScript to normal users, but in a way that is not considered cloaking. The ideal approach is to keep exactly two copies of the same page - one rendered with JavaScript and one as static HTML - and choose between them based on the requester's user agent: if a crawler is detected, serve the HTML version; otherwise, serve the JavaScript-based page. There are services that render a page to HTML, cache it, identify the user agent, and serve the right version accordingly. Prerender is one such service.
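As a rough illustration, the user-agent check could look like the sketch below. The bot patterns, function names, and file names are assumptions for the example, not part of any particular service's API:

```javascript
// Hypothetical sketch: choose which version of the page to serve
// based on the requester's User-Agent string.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /baiduspider/i];

function isCrawler(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ""));
}

function pickVersion(userAgent) {
  // Crawlers get the pre-rendered static HTML snapshot;
  // everyone else gets the normal JS-driven page.
  return isCrawler(userAgent) ? "static.html" : "index.html";
}
```

In a real deployment this decision would typically live in server middleware or a reverse-proxy rule, with the static snapshot produced and cached by a prerendering service rather than maintained by hand.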

Rana Prathap