It’s actually pretty simple now that Google has updated its crawler to render pages: Search Console lets you crawl and render any page of your site on demand. Here’s a breakdown of how to check for rendering problems and pinpoint the cause, using a website that we just began monitoring and working on:
In Search Console, open the “Fetch as Google” item under the Crawl category in the left-hand menu. To check your home page, leave the URL field blank and press “Fetch and Render.” It will take a minute or two to run, but once it’s done you’ll get a result under the “Status” column that tells you whether there are problems. Do this for both desktop and mobile; the dropdown next to the “Fetch” button is where you choose between them. If the status says “Complete,” congratulations, you’re good to go. If it says “Partial,” you’ll have to go one step further and look at what isn’t being rendered properly. In the case of this sample site, you’ll see:
This site has a lot of issues. The rendered preview doesn’t display properly (it should match what you see when you visit the site in your browser), and there is a long list of items under the “Google couldn’t get all the resources for this page” area. That list shows every file Googlebot was blocked from fetching when it tried to work out the style and content of the page; blocked CSS and JavaScript files are the usual culprits, and they are typically being blocked by rules in the site’s robots.txt file.
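As an illustration, a robots.txt rule along these lines is the kind of thing that produces a “Partial” result (the /assets/ folder name here is hypothetical; substitute whatever folder actually holds your CSS and JavaScript):

# Hypothetical example of an over-broad rule
# This keeps every crawler, including Googlebot, out of the CSS/JS folder
User-agent: *
Disallow: /assets/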
The fix goes in that same robots.txt file. You can block the main folder while still allowing specific subfolders to be crawled. Alternatively, if you want to allow only Google’s robot to crawl those folders, a minimal sketch of that approach, using the same hypothetical folder name, would look like the code below:
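# Hypothetical example - replace /assets/ with your own blocked folders
# Googlebot may fetch the CSS/JS folder; all other crawlers stay out
User-agent: Googlebot
Allow: /assets/

User-agent: *
Disallow: /assets/

Crawlers obey the most specific User-agent group that matches them, so Googlebot follows the Allow rule while every other bot falls through to the general Disallow.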
Once you’ve successfully edited your robots.txt file, run the fetch and render again, and if all is well, you should get a result like the one below for this website: