In a hangout with Google’s Martin Splitt, it was mentioned that Googlebot, Google’s web crawler, does not click buttons on websites. Later in the video, Splitt explained that clicking every button on a website is very expensive in CPU power. As a result, a lot of content can remain undiscovered by Googlebot.
What is Googlebot?
Googlebot is the generic name for two of Google’s web crawlers, programs that simulate a user in a desktop or mobile environment. If you have a website, it will be crawled by Googlebot Desktop and Googlebot Smartphone. This is how Google discovers websites and content. There is no central registry of webpages, so Google has to ‘crawl’ individual pages to add them to its list of known pages. Googlebot moves between pages by following the links it finds on them.
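To make the distinction concrete, here is a minimal sketch (not Google’s actual implementation) of how a crawler discovers new pages: it parses the HTML and collects `href` attributes from `<a>` tags. A button wired to JavaScript exposes no `href`, so there is nothing for the crawler to follow.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag -- the only URLs a
    link-following crawler can discover on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# A page with one static link and one JavaScript button:
page = '<a href="/page2">Next</a> <button onclick="loadMore()">Read More</button>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # only the static link is discoverable
```

The `<button>` element never appears in the result: without an `href`, the crawler has no URL to queue, which is exactly why content behind buttons stays hidden.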
Why is this a big deal?
A lot of websites use buttons such as ‘Read More’ to divide content into smaller chunks. However, because Googlebot does not click buttons, the content hidden behind them is never crawled. A lot of content therefore remains hidden from Google, which in turn does not index it, hurting the website’s ranking in the process.
How to overcome this problem?
Splitt says the solution varies from website to website. However, there are general guidelines that can be followed. The first guideline has already been discussed:
Googlebot does not click on buttons.
The next guideline states:
It is better to use static links in place of buttons.
According to Splitt, the appearance of the link does not matter: if it leads to new, unique content, Googlebot will visit it. The optimal solution is to have every page display only its own unique content. For example, page one should display 10 products, and page two should display the next 10 unique products rather than repeating those from page one. If page two displays 20 products, including the 10 from page one, Googlebot may consider pages one and two identical.
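The pagination rule above can be sketched in code. This is an illustrative example, not a prescribed implementation: the helper name `paginate` and the URL pattern `/products?page=N` are assumptions for the sake of the demo. Each page gets a non-overlapping slice of the catalogue and a static link to the next page, so a crawler can reach every product through plain `href`s and no two pages look identical.

```python
def paginate(products, per_page=10):
    """Split a catalogue into pages of unique items, each with a static
    link to the next page, so every product is reachable via plain hrefs."""
    pages = []
    for i in range(0, len(products), per_page):
        page_num = i // per_page + 1
        items = products[i:i + per_page]  # unique slice, no overlap with earlier pages
        has_next = i + per_page < len(products)
        pages.append({
            "url": f"/products?page={page_num}",
            "items": items,
            "next": f"/products?page={page_num + 1}" if has_next else None,
        })
    return pages

catalogue = [f"product-{n}" for n in range(1, 26)]  # 25 hypothetical products
pages = paginate(catalogue)
# Page one and page two share no items, so Google sees them as distinct content:
assert set(pages[0]["items"]).isdisjoint(pages[1]["items"])
```

Note that the "next" entry is an ordinary URL rather than a button handler; that is the whole point of the guideline.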