How does it work?
The intelligent Searchperience Crawler indexes and analyses all your URLs automatically and at regular intervals. We do not require any data feeds from you for this – which makes us unique among SaaS providers. All we need is a list of start URLs; the Crawler then independently indexes the entire website.
While crawling, it automatically recognizes redirects, downtime, URL parameters and duplicates and handles them accordingly.
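The crawler's internals are not public, but duplicate detection of this kind typically rests on URL canonicalization: stripping tracking parameters and normalizing case and trailing slashes so that equivalent URLs map to the same key. A minimal sketch (the parameter list and function names are illustrative, not Searchperience's actual implementation):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical list of parameters that do not change page content
# and can therefore be stripped before de-duplication.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def normalize_url(url: str) -> str:
    """Canonicalize a URL so that duplicates map to the same key."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    # Keep only content-relevant query parameters, in a stable order.
    params = sorted(
        (k, v) for k, v in parse_qsl(query) if k not in TRACKING_PARAMS
    )
    return urlunsplit((scheme.lower(), netloc.lower(),
                       path.rstrip("/") or "/", urlencode(params), ""))

seen: set[str] = set()

def is_duplicate(url: str) -> bool:
    """Return True if an equivalent URL has already been crawled."""
    key = normalize_url(url)
    if key in seen:
        return True
    seen.add(key)
    return False
```

With this scheme, `https://Example.com/page/?utm_source=x` and `https://example.com/page` resolve to the same index entry, so the second visit is skipped as a duplicate.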
During your search configuration briefing, we discuss which web pages are relevant to you and what information they contain. Once configured, the Crawler recognizes the given page types and indexes each of them individually. In detail, this involves:
- A single unified search across your web pages: various page types, such as shop, wiki, forum, FAQ and tables of contents, can be read, indexed and made available via a common search
- The different areas can be shown in the search results as separate tabs or separated via filters
- Product pages are recognized, and all product information is extracted and normalized
- When indexing a page, information from other pages can also be pulled in
- Images can be analyzed to enable, for example, color-coordinated searches
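How Searchperience analyzes images is not documented; a common building block for color-coordinated search is mapping an image's pixels onto a small palette of searchable color names. A minimal sketch under that assumption (the palette and pixel sample are illustrative):

```python
from collections import Counter

# Illustrative palette; a real system would use a richer, curated one.
PALETTE = {
    "red":   (220, 40, 40),
    "green": (40, 180, 80),
    "blue":  (50, 90, 220),
    "black": (20, 20, 20),
    "white": (245, 245, 245),
}

def nearest_color(pixel):
    """Map an (r, g, b) pixel to the closest palette color name."""
    return min(PALETTE, key=lambda name: sum(
        (a - b) ** 2 for a, b in zip(pixel, PALETTE[name])))

def dominant_colors(pixels, top=2):
    """Return the most frequent palette colors in a pixel sample."""
    counts = Counter(nearest_color(p) for p in pixels)
    return [name for name, _ in counts.most_common(top)]
```

Indexing the dominant color names alongside each product image is what lets a query like "blue sofa" match on color as well as text.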
Updating the index
The re-indexing interval depends on how frequently a page changes. A news page, for example, might be re-indexed several times a day, while a static content page might only need updating every few days.
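Searchperience does not document its scheduling algorithm; a common approach to this kind of adaptive revisiting is to shorten the interval whenever a page has changed since the last visit and lengthen it when it has not, within fixed bounds. A minimal sketch, with illustrative limits:

```python
# Hypothetical adaptive re-crawl scheduler; the bounds are illustrative.
MIN_HOURS = 1    # fast-moving pages (e.g. news) revisited hourly
MAX_HOURS = 96   # static pages revisited every few days

def next_interval(current_hours: float, changed: bool) -> float:
    """Shorten the revisit interval after a change, lengthen it otherwise."""
    if changed:
        return max(MIN_HOURS, current_hours / 2)
    return min(MAX_HOURS, current_hours * 2)
```

A page that changes on every visit converges toward the one-hour minimum, while a page that never changes drifts out to the multi-day maximum, matching the behaviour described above.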