Technical SEO Issues

As websites age and change, technical SEO issues can arise that degrade the user experience and harm your website’s overall organic health. Below are some of the most common technical SEO issues, which should be monitored for and fixed whenever they appear.

Indexability Issues

A page that is not indexed has no chance to generate any organic search traffic.  

Indexability issues relate to the technical configuration that determines whether a website’s URLs are “Indexable” or “Not Indexable”. Indexing signals can easily get misconfigured, which can result in important URLs not getting indexed.

<head> Contains Invalid HTML Elements

The URL being analyzed contains a <noscript> tag in the <head> that includes invalid HTML elements. Inside a <noscript> placed in the <head>, only <link>, <style>, and <meta> elements are valid.

This is a high-priority fix because once Google detects an invalid element, it assumes it has reached the end of the <head> and stops reading any further elements in the <head>.
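
To make this check concrete, here is a minimal Python sketch using the third-party requests and BeautifulSoup libraries; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

# The only elements valid inside a <noscript> that sits in the <head>.
ALLOWED = {"link", "style", "meta"}

html = requests.get("https://www.example.com/").text  # placeholder URL
head = BeautifulSoup(html, "html.parser").head

for noscript in (head.find_all("noscript") if head else []):
    bad = [tag.name for tag in noscript.find_all(True) if tag.name not in ALLOWED]
    if bad:
        print("Invalid elements in <head> <noscript>:", bad)
```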

Disallowed Image

The URL being analyzed contains an image URL that is blocked by a robots.txt disallow rule.

If Google is blocked from certain page resources, it might be unable to accurately render the page content, causing a negative impact on rankings.
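
A quick way to test this is with Python’s built-in robots.txt parser. This is a rough sketch, assuming requests and BeautifulSoup are available; the URL is a placeholder:

```python
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

import requests
from bs4 import BeautifulSoup

page = "https://www.example.com/"  # placeholder
rp = RobotFileParser(urljoin(page, "/robots.txt"))
rp.read()

soup = BeautifulSoup(requests.get(page).text, "html.parser")
for img in soup.find_all("img", src=True):
    src = urljoin(page, img["src"])
    if not rp.can_fetch("Googlebot", src):
        print("Image blocked by robots.txt:", src)
```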

Canonical Points to HTTP Version of Website

The URL in question uses the Hypertext Transfer Protocol Secure (HTTPS), but the canonical URL uses HTTP (non-secure).

This may cause issues in getting the search engines to recognize and index the ‘right’ URL (which in this case is assumed to be the HTTPS version).

Canonical Points to a Noindex URL

The URL being analyzed has a canonical element whose canonical URL is set to noindex. This sends conflicting messages to the search engines, which in turn means they will generally ignore the canonical and make their own decision. This could result in incorrect URLs being indexed, or even duplicate content.
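
Both canonical issues above can be spot-checked with a short script. The sketch below assumes requests and BeautifulSoup, and uses a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/page/"  # placeholder
soup = BeautifulSoup(requests.get(url).text, "html.parser")
canonical = soup.find("link", rel="canonical")

if canonical and canonical.get("href"):
    target = canonical["href"]

    # Issue 1: HTTPS page whose canonical points at the HTTP version.
    if url.startswith("https://") and target.startswith("http://"):
        print("Canonical points to the HTTP version:", target)

    # Issue 2: canonical target carries a noindex robots meta tag.
    target_soup = BeautifulSoup(requests.get(target).text, "html.parser")
    robots = target_soup.find("meta", attrs={"name": "robots"})
    if robots and "noindex" in robots.get("content", "").lower():
        print("Canonical points to a noindex URL:", target)
```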

Internal Link Issues

Internal links are connections between pages on the same website. They can pass link equity, which can have a huge impact on the ranking ability of a given URL.

Websites with poor internal linking or misconfigured internal links may have a very hard time ranking well on search engines.

Page has Only One Followed Internal Link

The URL being analyzed only has one followed incoming link from one other page on the website. 

Having so few internal links makes it incredibly difficult for the page to rank. If the page is important, it’s recommended to point more internal links towards it.
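
If you have crawl data available, counting followed inlinks is straightforward. In the sketch below, the (source, target, nofollow) edge format is an assumed shape, not any particular tool’s export:

```python
from collections import Counter

# Placeholder crawl data: (source URL, target URL, is_nofollow) tuples.
edges = [
    ("https://example.com/", "https://example.com/a/", False),
    ("https://example.com/a/", "https://example.com/b/", False),
]

# Count incoming followed links per target URL.
followed_inlinks = Counter(target for _, target, nofollow in edges if not nofollow)

for url, count in followed_inlinks.items():
    if count <= 1:
        print(f"Only {count} followed internal link(s): {url}")
```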

Orphaned URLs

URLs that are not part of the crawlable website architecture are considered “orphaned”.

Since these pages have no internal links, crawlers cannot find them on their own, which hurts their chances of ranking.
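
One rough way to surface orphans is to compare the URLs listed in the XML sitemap against the URLs actually reached by following internal links. A Python sketch with placeholder data (the "xml" parser requires lxml):

```python
import requests
from bs4 import BeautifulSoup

# URLs the site says exist, taken from the XML sitemap (placeholder URL).
sitemap = requests.get("https://www.example.com/sitemap.xml").text
known = {loc.text.strip() for loc in BeautifulSoup(sitemap, "xml").find_all("loc")}

# URLs actually reached by following internal links (placeholder crawl result).
crawled = {"https://www.example.com/"}

for url in sorted(known - crawled):
    print("Orphaned URL:", url)
```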

URL has an Internal Link with no Anchor Text

The URL being analyzed contains at least one outgoing link that has no anchor text. Anchor text is the visible, clickable text in a hyperlink, which search engines use to gather context about the link’s destination.

When the anchor text is empty, it means there is a missed opportunity to provide more value to the target page.  
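
A simple per-page scan can surface these links. The sketch below also treats an image’s alt text as acceptable anchor text, which is how search engines generally interpret image links; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://www.example.com/").text, "html.parser")
for a in soup.find_all("a", href=True):
    text = a.get_text(strip=True)
    # An image's alt text can stand in for anchor text on image links.
    has_img_alt = any(img.get("alt") for img in a.find_all("img"))
    if not text and not has_img_alt:
        print("Link with no anchor text:", a["href"])
```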

URL Receives Both Follow and Nofollow Internal Links

When a page receives both a follow link (which instructs crawlers to crawl the linked URL) and a nofollow link (which instructs crawlers not to crawl the linked URL), it confuses crawlers and negates the main effect you were trying to achieve. This could also indicate a site misconfiguration.
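
Using the same assumed (source, target, nofollow) edge shape as earlier, a short script can flag URLs receiving both link types:

```python
from collections import defaultdict

# Placeholder crawl data: (source URL, target URL, is_nofollow) tuples.
edges = [
    ("https://example.com/", "https://example.com/a/", False),
    ("https://example.com/b/", "https://example.com/a/", True),
]

rels = defaultdict(set)
for _, target, nofollow in edges:
    rels[target].add("nofollow" if nofollow else "follow")

for url, kinds in rels.items():
    if kinds == {"follow", "nofollow"}:
        print("Receives both follow and nofollow internal links:", url)
```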

On-Page Technical Issues

On-page optimization is one of the fundamental foundations of SEO – basic elements like title tags, heading tags and meta descriptions need to be implemented correctly if you expect to compete in competitive spaces.

<title> Tag is Missing

Pages that do not have a title tag present in the HTML are missing a large opportunity for rankings and optimal click-through rates.

As title tags are one of the most important SEO factors of a page, it is recommended that all important pages have a unique title that helps users and crawlers quickly understand the context of the page.

Page Has Multiple <title> Tags

If a page has multiple title tags, it may lead to search engines displaying the ‘wrong’ one, which in turn may lead to lower engagement or CTR from search results and may also have a negative SEO impact.
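
Both title issues (missing and multiple) reduce to counting <title> tags. A minimal sketch with a placeholder URL:

```python
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://www.example.com/").text, "html.parser")
titles = soup.find_all("title")

if not titles:
    print("<title> tag is missing")
elif len(titles) > 1:
    print(f"Page has {len(titles)} <title> tags")
```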

Images Are Missing Alternative Text

The URL being analyzed contains images that have no alt attribute, or images that have an alt attribute with no text.

Images without alt text represent poor accessibility (screen readers will not be able to identify images for the visually impaired) as well as a missed SEO opportunity.
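
A sketch for flagging both cases, with a placeholder URL (note that an intentionally empty alt is legitimate for purely decorative images):

```python
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://www.example.com/").text, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None:
        print("Missing alt attribute:", img.get("src"))
    elif not alt.strip():
        print("Empty alt text:", img.get("src"))
```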

<h1> Tag is Missing

The <h1> (heading 1) tag is an important element that helps users and crawlers understand the context of the page. When this tag is missing, it represents a missed optimization opportunity.
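
The corresponding check is a one-liner on the parsed page (placeholder URL again):

```python
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://www.example.com/").text, "html.parser")
if soup.find("h1") is None:
    print("<h1> tag is missing")
```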

Redirect Issues

While redirects are essential for website management, they can cause serious issues if not managed correctly. Misconfigured redirects can negatively impact user experience and can even stop crawlers from being able to effectively crawl your website’s content.

Internal Redirect is Broken (4xx/5xx)

This means that the URL being analyzed is an internal URL that redirected to a URL that was not accessible.

This creates a bad experience for both users and search engine crawlers, as they won’t be able to reach the content they were looking for.

Internal URL Redirects Back to Itself

The URL being analyzed has a redirect which points back at itself. For example, URL 1 is being redirected to URL 1.

This can cause a negative impact on SEO as crawlers won’t be able to access the content of the page, and users will be shown an error page (poor user experience).

Internal Redirects from Trailing Slash Mismatch

The URL being analyzed is redirected to another internal URL, where the only difference between URLs is the presence/absence of a trailing slash. For example, www.foundery.ca and www.foundery.ca/.

Redirects add an extra step, so it takes longer for the content to become available, which creates a poor user experience and means that search engine crawlers have to do extra ‘work’ to find the content.
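
The three redirect issues above can be distinguished by requesting the URL without following redirects and inspecting the Location header. A rough sketch with a placeholder URL:

```python
import requests

url = "https://www.example.com/page"  # placeholder
resp = requests.get(url, allow_redirects=False)

if 300 <= resp.status_code < 400 and "Location" in resp.headers:
    location = requests.compat.urljoin(url, resp.headers["Location"])
    if location == url:
        print("URL redirects back to itself:", url)
    elif location.rstrip("/") == url.rstrip("/"):
        print("Trailing slash redirect:", url, "->", location)
    else:
        final = requests.get(location)  # follow through to the target
        if final.status_code >= 400:
            print(f"Broken redirect ({final.status_code}):", url, "->", location)
```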

Internal Redirected URLs

As mentioned above, redirects cause additional work for search engine crawlers to find your content.  

Since internal URLs are under the website owner’s control, linking to redirecting URLs should be avoided where possible. Replace internal links that point at redirected URLs with the final destination URL.

Internal URL Issues

Many issues can be caused by subpar URL paths and how those URLs resolve, which can impact both users and search engines.

Broken Internal URLs

The URL being analyzed was not successfully crawled, and had a crawl status of either Not Found, Error, Forbidden or Timeout.  

Broken URLs create a poor user experience, since the user will be unable to access the page’s content. They can also have a severe negative impact on SEO, depending on the scale of the issue.

URL Resolves Under Both HTTP and HTTPS

The URL being audited returns a 200 (OK) status for both the HTTP and HTTPS versions of the URL.  

Firstly, this poses a security risk as a user can access content via HTTP that is supposed to be secure. Secondly, it is an SEO concern because both URLs will have identical content which could cause duplicate content issues.  
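
A sketch of the check, deliberately not following redirects (a 301 from HTTP to HTTPS is the desired behaviour, not a problem); the host is a placeholder:

```python
import requests

path = "www.example.com/page/"  # placeholder host and path
http = requests.get("http://" + path, allow_redirects=False)
https = requests.get("https://" + path, allow_redirects=False)

# Both returning 200 means the same content is served over both schemes.
if http.status_code == 200 and https.status_code == 200:
    print("URL resolves under both HTTP and HTTPS:", path)
```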

URL Contains Whitespace

URLs with whitespace characters (ex. www.ontarioseo.ca/digital ads/google/) are not recommended as they could cause issues when site visitors share or link to the URL, potentially leading to broken links and a loss of potential link equity.

URL Contains Uppercase Characters

Mixed-case URLs can cause duplicate content issues if both the lowercase version and the mixed-case version of the URL respond with a 200 status code.

If this is a problem at scale, it could also be causing crawl budget issues as crawlers may need to crawl thousands of extra, unnecessary pages.

Duplicate Content Issues

Duplicate content is problematic for search engines, and can be a detrimental problem at scale, because search engines don’t want to serve duplicate results to searchers.

All pages should have unique content as well as unique elements such as title tags and h1s to avoid triggering quality algorithms or keyword cannibalization.

URLs with Duplicate Content

This means the URL being audited has identical HTML content to at least one other indexable URL.  

If this is a large-scale issue, you could potentially trigger quality algorithms, which can depress organic search traffic to your website as a whole.  
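
A crude way to find exact duplicates is to hash each response body and group URLs by digest. Real audit tools normalize the HTML first; this bare-bones sketch uses placeholder URLs:

```python
import hashlib
from collections import defaultdict

import requests

urls = ["https://www.example.com/a/", "https://www.example.com/b/"]
by_hash = defaultdict(list)

for url in urls:
    digest = hashlib.sha256(requests.get(url).content).hexdigest()
    by_hash[digest].append(url)

for group in by_hash.values():
    if len(group) > 1:
        print("Identical content:", group)
```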

URLs with Duplicate Page Titles

If multiple pages on a website have the same title, it makes it difficult for crawlers to differentiate which page is “best” for a given search query.

This can cause keyword cannibalization, and if this is at scale, it can trip quality algorithms (like Google’s Panda).

URLs with Duplicate h1s

If multiple pages on a website have the same <h1>, it makes it difficult for crawlers to differentiate which page is “best” for a given search query.

This can cause keyword cannibalization, and if this is at scale, it can trip quality algorithms (like Google’s Panda).
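
Since the title and h1 checks are nearly identical, one sketch can cover both: group crawled URLs by their <title> and <h1> text and report any group larger than one (placeholder URLs):

```python
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

urls = ["https://www.example.com/a/", "https://www.example.com/b/"]
titles, h1s = defaultdict(list), defaultdict(list)

for url in urls:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    if soup.title and soup.title.string:
        titles[soup.title.string.strip()].append(url)
    h1 = soup.find("h1")
    if h1:
        h1s[h1.get_text(strip=True)].append(url)

for label, groups in (("title", titles), ("h1", h1s)):
    for text, pages in groups.items():
        if len(pages) > 1:
            print(f"Duplicate <{label}> '{text}':", pages)
```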

Technically Duplicate URLs

The URL being analyzed is technically identical to at least one other indexable URL. This could be URLs that are only different based on case: 

  • www.foundery.ca/Seo/ 
  • www.foundery.ca/seo/  

 Or have the same query string parameters and values (but in a different order): 

  • www.foundery.ca/page/?a=1&b=2 
  • www.foundery.ca/page/?b=2&a=1
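
Detecting these programmatically comes down to normalizing URLs before comparing them. In the sketch below, lowercasing the path is an assumption: whether your URLs really are case-insensitive depends on the server:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def normalize(url):
    p = urlparse(url)
    query = urlencode(sorted(parse_qsl(p.query)))  # stable parameter order
    return urlunparse((p.scheme, p.netloc.lower(), p.path.lower(), "", query, ""))

a = normalize("https://www.foundery.ca/page/?a=1&b=2")
b = normalize("https://www.foundery.ca/page/?b=2&a=1")
print(a == b)  # True: technically duplicate URLs
```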

XML Sitemap Issues

XML Sitemaps are extremely useful as they allow you to declare exactly which URLs on a website you wish to have indexed. This is why sitemaps should only ever contain URLs you actually want indexed, all of which should return a 200 status code.

Sitemaps with “dirt” in them can eventually erode the trust of search engines to the point where they simply ignore them.

Error (5XX) URL in XML Sitemap

5XX errors indicate an issue with the server itself (rather than the website). Server errors affect access to your site for search engines and users alike. This is a severe issue because if crawlers continue to find broken links in your XML sitemap, they may eventually stop trusting it for crawling and indexing signals.

Noindex URL in XML Sitemap

The XML Sitemap should only contain URLs you want search engines to index. If a URL is set to noindex, this is an explicit statement to search engines that you do not wish for the URL to be indexed.  

Including a noindex URL in a sitemap provides conflicting information to search engines and may result in unintended URLs getting indexed.

Not Found (4xx) URL in XML Sitemap

The XML Sitemap should only contain URLs you want search engines to index. The sitemap should be “clean” and only contain URLs with a 200 status.  

4XX errors in a sitemap can be a severe issue because if crawlers continue to find broken links in your XML sitemap, they may eventually stop trusting it for crawling and indexing signals.

Canonicalized URL in XML Sitemap

A canonicalized URL is an explicit statement to search engines that you do not want the URL to be indexed, and instead wish for the canonical URL to consolidate indexing signals.  

Including a canonicalized URL in a sitemap provides conflicting information to search engines and may affect which URL they consider to be canonical, which may in turn mean that unintended URLs get indexed.
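
All four sitemap issues above can be audited in one pass over the sitemap’s <loc> entries. A sketch assuming requests and BeautifulSoup (with lxml for the "xml" parser), using a placeholder sitemap URL:

```python
import requests
from bs4 import BeautifulSoup

sitemap = requests.get("https://www.example.com/sitemap.xml").text
for loc in BeautifulSoup(sitemap, "xml").find_all("loc"):
    url = loc.text.strip()
    resp = requests.get(url)

    if resp.status_code >= 500:
        print(f"Error ({resp.status_code}) URL in sitemap:", url)
        continue
    if 400 <= resp.status_code < 500:
        print(f"Not Found ({resp.status_code}) URL in sitemap:", url)
        continue

    soup = BeautifulSoup(resp.text, "html.parser")
    robots = soup.find("meta", attrs={"name": "robots"})
    if robots and "noindex" in robots.get("content", "").lower():
        print("Noindex URL in sitemap:", url)

    canonical = soup.find("link", rel="canonical")
    if canonical and canonical.get("href") and canonical["href"] != url:
        print("Canonicalized URL in sitemap:", url, "->", canonical["href"])
```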