


I’m excited to announce version 6.0 of the Screaming Frog SEO Spider, codenamed internally as ‘render-Rooney’. Our team have been busy in development and have some very exciting new features ready to release in the latest update. This includes the following –

1) Rendered Crawling (JavaScript)

There were two things we set out to do at the start of the year. Firstly, to understand exactly what the search engines are able to crawl and index. This is why we created the Screaming Frog Log File Analyser, as a crawler will only ever be a simulation of search bot behaviour. Secondly, we wanted to crawl rendered pages and read the DOM. It’s been known for a long time that Googlebot acts more like a modern-day browser, rendering content and crawling and indexing JavaScript and dynamically generated content rather well. The SEO Spider is now able to render and crawl web pages in a similar way.
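To make the idea concrete, here is a minimal sketch of rendered crawling using Puppeteer’s headless Chromium. This is an illustration under assumptions, not the SEO Spider’s actual implementation: load a page, let its JavaScript execute, then read links from the rendered DOM rather than the raw HTML source.

```typescript
// Minimal sketch of rendered crawling with headless Chromium via Puppeteer.
// Illustrative only – not how the SEO Spider itself is implemented.
import puppeteer from 'puppeteer';

async function renderAndExtractLinks(url: string): Promise<string[]> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    // Wait until network activity settles, so client-side frameworks
    // (e.g. AngularJS) have had a chance to render their content.
    await page.goto(url, { waitUntil: 'networkidle0' });

    // Read hrefs from the rendered DOM, not the static HTML response.
    return await page.$$eval('a[href]', (anchors) =>
      anchors.map((a) => (a as HTMLAnchorElement).href)
    );
  } finally {
    await browser.close();
  }
}

renderAndExtractLinks('https://example.com/').then((links) =>
  console.log(links)
);
```

A crawler built only on the static HTML would miss any of these links that are injected by JavaScript after page load, which is exactly the gap rendered crawling closes.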

Some of you may remember the excellent ‘Googlebot is Chrome’ post from Joshua G on Mike King’s blog back in 2011, which discusses Googlebot essentially being a headless browser. If you’re not already familiar, I also highly recommend reading Adam Audette’s Googlebot JavaScript testing from last year.

After much research and testing, we integrated the Chromium project library for our rendering engine to emulate Google as closely as possible. Google has deprecated their old AJAX crawling scheme, and we have seen JavaScript frameworks such as AngularJS (with links or utilising the HTML5 History API) crawled, indexed and ranking like a typical static HTML site. You can choose whether to crawl the static HTML, obey the old AJAX crawling scheme or fully render web pages, meaning JavaScript and dynamic content are executed and crawled.
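For context, the deprecated AJAX crawling scheme worked by rewriting a ‘pretty’ hash-bang URL into an ?_escaped_fragment_= URL, which the server was expected to answer with a pre-rendered HTML snapshot. Below is a hypothetical sketch of that URL mapping; the helper name is ours, and encodeURIComponent encodes more characters than the scheme strictly required.

```typescript
// Sketch of the URL mapping from Google's deprecated AJAX crawling scheme:
// a "pretty" URL such as https://example.com/page#!key=value was fetched by
// the crawler as https://example.com/page?_escaped_fragment_=key%3Dvalue.
function toEscapedFragmentUrl(prettyUrl: string): string {
  const hashIndex = prettyUrl.indexOf('#!');
  if (hashIndex === -1) {
    // No hash-bang fragment. Pages could still opt in via
    // <meta name="fragment" content="!">, in which case the crawler
    // appended an empty _escaped_fragment_ parameter instead.
    return prettyUrl;
  }
  const base = prettyUrl.slice(0, hashIndex);
  const fragment = prettyUrl.slice(hashIndex + 2);
  const separator = base.includes('?') ? '&' : '?';
  return `${base}${separator}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}

// e.g. https://example.com/app?_escaped_fragment_=%2Fproducts%2F42
console.log(toEscapedFragmentUrl('https://example.com/app#!/products/42'));
```

With full rendering available, this indirection is no longer necessary: the crawler simply executes the JavaScript and reads the resulting DOM, which is why frameworks using plain links or the History API can now be crawled like static HTML.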
