Duplicate content within the site may be present in the following elements:

  • Article announcements and rubric (category) listings. They are typically filled with the first few sentences of the page the link leads to, duplicating its opening;
  • Titles and descriptions of categories and subcategories in paginated sections. Multiple pages filled with identical content may be treated as spam;
  • Identical product descriptions. This is typical for stores whose assortment runs to thousands of items: writing a unique description for each product requires significant time and money;
  • Generated pages with filters and sorting. They are created on the server side, and their URLs can vary while the content stays the same, which produces internal duplicates;
  • Identical content in different subcategories. For example, the same product appears both in its own category and in the listing for a specific brand;
  • Reviews. Search engines treat the same review text repeated on different pages as spam, which leads to slower indexing and lower rankings in search results.
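
The filter-and-sorting case above can be illustrated with a small sketch. It is a minimal example with a hypothetical shop URL and a hand-picked set of presentation-only query parameters (the real set depends on your site); it shows how several URL variants can be collapsed to one canonical key when auditing for internal duplicates:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Hypothetical example: query parameters that only change presentation
# (sorting, filtering, tracking) without producing unique content.
PRESENTATION_PARAMS = {"sort", "order", "filter", "page_size", "utm_source"}

def canonical_key(url: str) -> str:
    """Strip presentation-only parameters so duplicate URL variants
    collapse to a single canonical key."""
    parts = urlsplit(url)
    kept = sorted((k, v) for k, v in parse_qsl(parts.query)
                  if k not in PRESENTATION_PARAMS)
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

# Three different addresses, one and the same category page:
urls = [
    "https://shop.example/catalog/shoes?sort=price",
    "https://shop.example/catalog/shoes?sort=price&order=desc",
    "https://shop.example/catalog/shoes",
]
print({canonical_key(u) for u in urls})  # collapses to a single key
```

In practice the same normalization is what a `rel="canonical"` link expresses to search engines: all the variants point to one preferred address.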

Nor should we forget webmaster carelessness: the mistakes of the person who fills the site with content can have negative consequences for the entire resource.

Why duplicate content is bad

There are a number of problems that duplicate content can lead to:

  • deterioration of indexing. A search robot spends limited resources and time crawling the site. If it sees that content is regularly repeated, indexing will take much longer, and in rare cases it may stop altogether;
  • search engine penalties (filters). Google's Panda algorithm, one of the most dangerous for unscrupulous SEO specialists, treats duplicate content extremely negatively. Affected sites risk losing ranking positions or dropping out of search results entirely, and resources hit by the filter have rarely recovered. Panda takes traffic and user behavior into account, so if the duplicated content still engages an audience, the impact may be minimal;
  • a decline in behavioral factors. If several pages on the site share parts of the same content, a user may accidentally click a link that is irrelevant to them, get disappointed, and leave the resource. A few hundred dissatisfied visitors in a short period will noticeably worsen behavioral metrics.

How to find duplicate pages

When creating a website without a CMS, it is enough to monitor the resource's content: write unique announcement texts and avoid duplication. When the site is managed through an engine, you need to run periodic checks to make sure duplicates are not being created automatically.

Those taking over a site that has existed for a long time face painstaking work. The search for duplicates begins in the webmaster panels offered by search engines, where you can find pages with identical meta descriptions.
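
The same check the webmaster panel performs can be approximated offline. A minimal sketch, assuming you already have a crawl result mapping URLs to their meta descriptions (the paths and texts below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical crawl result: URL -> meta description pulled from each page.
pages = {
    "/blog/post-1": "Tips for writing unique product descriptions.",
    "/blog/post-2": "Tips for writing unique product descriptions.",
    "/blog/post-3": "How search engines treat duplicate reviews.",
}

# Group URLs by normalized description; groups of 2+ are duplicates.
groups = defaultdict(list)
for url, description in pages.items():
    groups[description.strip().lower()].append(url)

duplicates = {text: urls for text, urls in groups.items() if len(urls) > 1}
print(duplicates)
```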

Next, you will have to run exact-phrase (quoted) searches for specific fragments of text. This can also be done before adding new content, to avoid posting duplicate texts, descriptions, and images.

Online services that check text uniqueness across the Internet can do the same work within a single site, and their flexible settings let you fine-tune the check. Keep in mind that only indexed content will show up as non-unique, so you should not upload large batches of texts and images at once.
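
A rough version of such an internal uniqueness check can be sketched with a simple character-level similarity ratio. This is only an illustration with invented product descriptions; real services use more robust techniques such as shingling:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough similarity ratio in [0, 1]; 1.0 means identical texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical product descriptions:
original = "This leather boot is handmade from full-grain Italian leather."
near_copy = "This leather boot is hand made from full grain Italian leather."
unrelated = "Free shipping on all orders over 50 dollars within the country."

print(round(similarity(original, near_copy), 2))   # close to 1.0 -> likely duplicate
print(round(similarity(original, unrelated), 2))   # much lower  -> unique enough
```

A threshold (say, flagging anything above 0.8–0.9) would then separate near-copies from genuinely unique texts.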

What to do with duplicates:

  • rewrite the content;
  • on generated pages (pagination, filter and sorting results), point to canonical URLs and add tags that prohibit indexing and link following by the search robot. Keep in mind that the non-standard <noindex> tag is honored only by Yandex; Google ignores it and may index such content anyway.
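
The canonical-link and robots-tag setup described above can be verified programmatically. A minimal sketch using only the standard library; the page markup and URL are hypothetical:

```python
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    """Collect the rel="canonical" link and robots meta directive from a page."""
    def __init__(self):
        super().__init__()
        self.canonical = None
        self.robots = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content")

# Hypothetical filter page pointing crawlers to the canonical category URL:
html = """
<head>
  <link rel="canonical" href="https://shop.example/catalog/shoes">
  <meta name="robots" content="noindex, follow">
</head>
"""

audit = HeadAudit()
audit.feed(html)
print(audit.canonical)  # https://shop.example/catalog/shoes
print(audit.robots)     # noindex, follow
```

Running a check like this over generated pages confirms that duplicates declare a canonical address and are excluded from indexing.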

To maintain positions, it is better to publish unique content and periodically update old texts. A website content audit, which our company can carry out quickly and efficiently, will help identify low-quality text and duplicates.