Technical SEO in 2025

Before we explore the core elements of technical SEO, it’s essential to understand what it actually means and why it's such a critical part of any successful digital strategy.
Technical SEO refers to the process of optimising your website’s infrastructure to ensure that search engines can effectively crawl, render, and index your pages. In simple terms, it’s all about helping search engines discover and understand your content, so it can appear in search results.
This makes technical SEO a foundational component of your overall SEO efforts. Without it, even the most compelling content might remain hidden from your audience.
Why Is It Called “Technical”?
The term “technical” is used because this area of SEO focuses on behind-the-scenes elements — things that aren’t related to the content itself or how it’s promoted.
Instead, technical SEO addresses things like:
- HTTP status codes
- Internal linking structures
- Meta robots tags
- XML sitemaps
All of these elements work together to ensure that your website’s content is accessible and understandable to search engines like Google.
Do You Need to Be a Developer?
You don’t have to be a web developer or write code to get started with technical SEO. However, having a basic understanding of how websites are built can make a significant difference.
Foundational knowledge of HTML and JavaScript will help you navigate technical audits more confidently and understand the logic behind common recommendations.
If you’re not familiar with these technologies yet, we recommend taking some time to strengthen your understanding before diving deeper into technical SEO.
The Foundations of Technical SEO
To master technical SEO, it’s essential to first understand how search engines like Google operate. This foundational knowledge will give you a clear view of what affects your site’s visibility and what technical elements need your attention.
Crawling, Indexing, and Rendering
At the heart of search engine functionality are three key processes: crawling, indexing, and rendering.
1. Crawling: Discovering Content
Search engines begin by accessing and downloading a website’s content — including text, images, and videos. This is known as crawling. Specialised software called crawlers (also known as bots or spiders) follows links from one page to another, collecting data along the way.
Crawlers also discover content through:
- Submitted XML sitemaps
- Direct fetch requests from tools like Google Search Console
- Notification protocols such as IndexNow, used by engines like Bing and Yandex
Popular crawlers include:
- Googlebot Smartphone (mobile-first crawler)
- Googlebot Desktop
- Bingbot (mobile and desktop versions)
Because of the massive scale of the web, search engines must limit the time and resources spent crawling each site. This is known as the crawl budget, and it’s influenced by:
- Your site’s crawl capacity (technical health, server speed, configuration limits)
- Its crawl demand (page popularity, update frequency)
Most crawlers will respect rules you define in your robots.txt file, which controls what they can access and how frequently.
We’ll dive deeper into robots.txt later in this guide, but for now, just know: it plays a critical role in guiding how your site is crawled.
2. Indexing: Understanding and Storing Content
Once a page is crawled, the next step is indexing. This is when search engines analyse the content to understand what it’s about and then store it in their database.
In addition to HTML pages, search engines can also index other text-based formats such as PDFs and XML files.
However, not every crawled page is indexed. Whether a page is added to the index depends on factors such as:
- Originality and quality of content
- Proper HTML configuration
- Efficient handling of JavaScript content
Search engines assess the uniqueness of each page. If they find multiple pages with similar or duplicated content, they select a canonical version, the one they consider most representative to appear in search results.
This makes it essential to structure and configure your site properly to help search engines choose the right version of your content.
3. Rendering: Displaying JavaScript Content
For websites that use client-side JavaScript, there’s an additional step: rendering. This involves dynamically generating HTML content within the browser using JavaScript, rather than serving it directly from the server.
There are two key approaches:
- Client-side rendering (CSR): HTML is built in the browser after JavaScript loads.
- Server-side rendering (SSR): HTML is generated by the server and sent to the browser fully formed.
While search engines are capable of rendering JavaScript, doing so requires more time and resources. At scale, this can result in slower indexing or missed content altogether. That’s why server-side rendering is strongly recommended for SEO.
Also, understanding the rendering process is crucial for SEO professionals, especially when working with other platforms or developers.
A key takeaway: even on JavaScript-heavy sites, always use standard HTML <a> tags with href attributes for internal links and avoid JavaScript-based navigation alone. This ensures that crawlers can discover and follow your links effectively.
Bringing It All Together
These three core processes (crawling, indexing, and rendering) are the building blocks of technical SEO. The way your website is configured can directly affect how efficiently search engines move through these phases and whether your content appears in search results.
HTTP Status Codes for Technical SEO
One of the foundational technical configurations in SEO is understanding and properly using HTTP status codes. These codes play a vital role in how search engines interact with your website and can significantly influence how your content is indexed and ranked.
Let’s break down the most important ones and explore how they affect SEO.
What Are HTTP Status Codes?
HTTP status codes are three-digit responses sent by a server whenever a browser or a search engine crawler requests a web page. These codes are part of the HTTP protocol and indicate whether the request has been successfully processed, redirected, or resulted in an error.
In SEO, HTTP status codes guide search engines on how to treat each page — whether to index it, redirect it, or remove it from their index.
Although platforms like Wix Studio automate much of this process, understanding these codes is essential for SEO professionals.
Key HTTP Status Codes and Their SEO Impact
2XX – Success
- 200 OK
Indicates a successful request.
→ This is the ideal status for pages you want search engines to crawl and index.
If a page meant for indexing doesn’t return 200 OK, it should be corrected immediately.
3XX – Redirects
- 301 Moved Permanently
Indicates a page has been permanently moved to a new URL.
→ Essential for preserving SEO value (rankings, backlinks, traffic) during site migrations or URL updates.
Always use 301 redirects for permanent changes.
- 302 Found
Indicates a temporary redirect.
→ Unlike 301, 302 generally does not pass SEO value to the new URL. Use it only for short-term redirection.
Important note: Google follows up to 10 redirect hops. Anything beyond may be ignored or treated as an error.
4XX – Client Errors
- 404 Not Found
The server can’t find the requested page.
→ Pages returning 404s are removed from the index and can negatively affect crawl budget if too frequent.
- 410 Gone
Indicates that content has been intentionally and permanently removed.
→ Use when content is no longer needed and has no traffic or backlinks.
⚠️ 5XX – Server Errors
- 500 Internal Server Error
A general server failure.
→ Can harm SEO if unresolved. Often caused by server overload or misconfiguration.
- 503 Service Unavailable
Usually signals temporary unavailability, such as during maintenance.
→ Tells search engines to try again later without harming rankings.
- 429 Too Many Requests
Technically a 4XX client error, but it indicates that the server is receiving too many requests in a short time.
→ Can slow down or temporarily halt search engine crawling.
Persistent 5XX errors can lead to de-indexing over time, so they should be addressed promptly by the system administrator (sysadmin).
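If you want to spot-check status codes and redirect chains across a handful of URLs, a small script can help. The sketch below is a minimal example using Python's requests library; the URLs are placeholders, and a dedicated crawler such as Screaming Frog remains the better tool at scale.

import requests

# Placeholder URLs - swap in the pages you actually want to check
urls = [
    "https://www.example.com/",
    "https://www.example.com/old-page",
]

for url in urls:
    # allow_redirects=True follows the chain; response.history holds each hop
    response = requests.get(url, allow_redirects=True, timeout=10)
    hops = len(response.history)
    chain = " -> ".join(str(r.status_code) for r in response.history)
    print(url)
    print(f"  final status {response.status_code} after {hops} redirect hop(s) {chain}".rstrip())
    if hops > 10:
        print("  warning: more than 10 hops - search engines may stop following")
    if response.status_code >= 500:
        print("  warning: server error - fix promptly to avoid de-indexing")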
Soft 404 Errors: The Hidden Trap
A soft 404 occurs when a page returns a 200 OK status but contains no meaningful content, displays an error message, or signals that it shouldn't exist.
- These pages confuse search engines, as the status says "all good," but the content suggests otherwise.
- To handle this:
- Use 301 redirects to relevant content or parent category pages.
- If content is permanently gone and has no SEO value, consider using 410 Gone.
Understanding HTTP status codes is crucial for managing how search engines interpret and interact with your website. They influence everything from crawl efficiency to ranking retention, especially during redesigns, migrations, or cleanups.
URL Structure for Technical SEO Success
Every web page on your site has something critical to SEO that you should never overlook: the URL. A well-structured URL not only improves discoverability for search engines but also enhances user understanding and experience.
Here, we’ll explore best practices for URL structure and how to optimise links for better crawlability, indexing, and rankings.
Why Does URL Structure Matter?
A good URL provides immediate clarity to both search engines and users about the content of the page. A clean, well-structured URL is more likely to be indexed, ranked appropriately, and clicked on in search results.
Best Practices for SEO-Friendly URLs
Here are the key guidelines to follow when creating or updating your URLs:
Keep URLs:
- Short, simple, and lowercase
- Descriptive, using natural language that reflects the page’s content
- Free from special characters (like accents or symbols)
- Written using hyphens to separate words (not underscores or spaces)
URLs should follow the internet standard known as RFC 3986, which defines the characters and structure a valid URL may use, so it's worth being familiar with it.
Avoid:
- Duplicate or near-duplicate URLs for similar content
- Deep or unnecessarily long folder structures
- Using confusing parameters like yoursite.com/id-23-p?id=2
Good vs Bad Example:
- Good: yoursitename.com/smartphones/iphone/
- Bad: yoursitename.com/id-23-p?id=2
The good example clearly communicates content and follows best structure practices, while the bad one offers no context or hierarchy.
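If you generate URL slugs programmatically, a small helper can enforce these rules. The following is a minimal sketch in Python; the function name and example input are illustrative.

import re
import unicodedata

def to_seo_slug(text: str) -> str:
    # Strip accents, lowercase, and replace anything that isn't a-z or 0-9 with hyphens
    ascii_text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")
    hyphenated = re.sub(r"[^a-z0-9]+", "-", ascii_text.lower())
    return hyphenated.strip("-")

print(to_seo_slug("Téléphones & Smartphones 2025"))  # telephones-smartphones-2025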
Trailing Slashes and Canonical URLs
Be consistent in your use of trailing slashes (/). Inconsistent use can result in duplicate content, such as:
- yoursite.com/category-a
- yoursite.com/category-a/
To avoid indexing both versions, always 301-redirect the non-canonical version to the preferred one.
Optimising Internal and External Links
Links are essential for helping search engines discover pages and for passing SEO authority. Here’s how to ensure your links are working for you:
🔗 Internal Linking Best Practices:
- Use <a href=""> HTML tags, not JavaScript-based links
- Ensure links are included in:
- Main navigation
- Footers
- Breadcrumbs
- In-content editorial links
- Use server-side rendering to ensure search engines can crawl them
- Always link to the canonical, indexable URL
- Prefer absolute URLs over relative ones to avoid issues
- Prioritise important, high-value pages in your linking strategy
- Cross-link between related pages within the same category
Internal links should form a hierarchical structure that mirrors your site architecture. Key pages should receive more internal links to signal their importance to search engines.
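To check how a page's internal links look to a crawler, you can extract the <a href> elements from its raw HTML. Below is a rough sketch using only Python's standard library; the page URL is a placeholder, and it inspects a single page rather than running a full crawl.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import urllib.request

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # Only standard <a> elements with an href count as crawlable links
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.hrefs.append(href)

page_url = "https://www.example.com/"  # placeholder
html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "ignore")

collector = LinkCollector()
collector.feed(html)

site_host = urlparse(page_url).netloc
internal = sorted({urljoin(page_url, h) for h in collector.hrefs
                   if urlparse(urljoin(page_url, h)).netloc == site_host})
print(f"{len(internal)} internal <a href> links found in the raw HTML")
for link in internal:
    print(" ", link)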
Anchor Text in Technical SEO
Anchor text — the clickable text in a link — should be:
- Descriptive and relevant to the linked page
- Inclusive of keywords (where natural)
- Not generic (avoid "click here", "read more", etc.)
Link Attributes: rel="nofollow", rel="ugc", rel="sponsored"
Search engines allow you to define the relationship between your site and the linked page using link attributes. Here’s what they mean and when to use them:
- rel="nofollow"
Use when:
- You don't want to pass authority to an external site
- The link is part of a commercial relationship
- You don't endorse the linked page
→ Avoid using on internal links.
- rel="ugc" (User Generated Content)
Use for:
- Blog comments
- Forum posts
→ Prevents manipulation through spammy user-submitted links.
- rel="sponsored"
Use when:
- Linking to ads or paid placements
→ Clearly signals that the link is part of a commercial arrangement.
Misuse or omission of these attributes, especially when linking to sponsored content or questionable sources, can be flagged as link manipulation under Google's spam policies, potentially harming your site's rankings.
URLs and links are more than just technical details; they're essential tools in your SEO toolkit. Optimising them improves crawlability, enhances user experience, and preserves your authority in search rankings.
XML Sitemaps in Technical SEO
An XML sitemap is a file written in XML format that lists all the important URLs on a website, including pages, images, and videos, that are intended to be crawled and indexed by search engines.
Sitemaps help search engines understand:
- What content exists on your site
- How various files (like images or videos) relate to pages
- Which pages should be prioritised for crawling
Although a sitemap is not strictly required, it can significantly enhance indexing, especially for large, complex, or poorly linked websites.
Why Are XML Sitemaps Useful?
Without a sitemap, search engines must rely solely on links and crawl paths found in the site’s source code to discover content. This can delay the indexing of newer or harder-to-reach pages.
With a sitemap, search engines receive a direct signal about which pages to crawl, improving:
- Indexing speed
- Coverage of less-visible pages
- Prioritisation of high-value content
Types of XML Sitemaps
In addition to general sitemaps for standard pages, Google and other search engines support:
- Video sitemaps (for multimedia content)
- Image sitemaps
- News sitemaps
Platforms like Wix automatically generate and maintain these sitemaps, keeping them up to date whenever content is added or removed. This is particularly helpful for site owners who want hands-off technical management.
Best Practices for Creating XML Sitemaps
If you’re handling your sitemap manually or want to understand how automated platforms like Wix Studio do it, here are some important guidelines:
File Size and URL Limits
- Maximum 50MB uncompressed
- Up to 50,000 URLs per sitemap
- If your sitemap exceeds either limit, split it into multiple files and use a sitemap index to reference them all
Location and Encoding
- Place the sitemap at the root of the domain (e.g., yoursite.com/sitemap.xml)
- Use UTF-8 encoding
Use Absolute URLs
Always include absolute URLs in your sitemap (e.g., https://yoursite.com/page.html), rather than relative ones (/page.html), as search engines will only crawl the explicitly listed addresses.
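If you ever need to build a sitemap yourself rather than rely on your platform, the sketch below shows one way to do it with Python's standard library, splitting at the 50,000-URL limit and producing a sitemap index. The domain and file names are placeholders.

import xml.etree.ElementTree as ET

MAX_URLS = 50000  # per-sitemap limit described above

def write_sitemaps(urls, base="sitemap"):
    files = []
    for i in range(0, len(urls), MAX_URLS):
        urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for loc in urls[i:i + MAX_URLS]:
            url_el = ET.SubElement(urlset, "url")
            ET.SubElement(url_el, "loc").text = loc  # always an absolute URL
        name = f"{base}-{i // MAX_URLS + 1}.xml"
        ET.ElementTree(urlset).write(name, encoding="utf-8", xml_declaration=True)
        files.append(name)

    # A sitemap index referencing each file, for sites that need more than one
    index = ET.Element("sitemapindex", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for name in files:
        sm = ET.SubElement(index, "sitemap")
        ET.SubElement(sm, "loc").text = f"https://www.example.com/{name}"  # placeholder domain
    ET.ElementTree(index).write(f"{base}-index.xml", encoding="utf-8", xml_declaration=True)

write_sitemaps([f"https://www.example.com/page-{n}" for n in range(1, 101)])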
Submitting Your Sitemap
You can submit your sitemap to search engines through their dedicated tools. For example:
- Google Search Console
- Bing Webmaster Tools
These platforms help:
- Confirm sitemap submission
- Identify crawling and indexing issues
- Show which pages have been indexed
Understanding the Sitemap Format
A properly configured XML sitemap entry may look like this:
<url>
<loc>https://www.example.com/page.html</loc>
<lastmod>2025-06-01T12:00:00+00:00</lastmod>
</url>
- <loc>: The location of the page
- <lastmod>: Indicates the last modification date in W3C Datetime format
Google ignores the priority and changefreq values, but it does use lastmod if the information is accurate and consistent.
While XML sitemaps may seem technical, they’re a simple but powerful tool for improving your site’s visibility in search. Whether managed manually or via platforms such as Wix Studio or WordPress, ensuring your sitemap is properly configured and submitted can accelerate and enhance your SEO performance.
Crawlability and Indexing with robots.txt and Meta Robots Tags
Two simple but powerful tools give you fine-grained control over how search engines access and process your site’s content: the robots.txt file and meta robots tags. Used correctly, these tools help shape crawl behaviour and indexing strategy — a vital part of technical SEO.
Let’s dive into how they work and when to use each.
What Is robots.txt?
The robots.txt file is a plain text file placed at the root of your site (e.g., yoursite.com/robots.txt). It tells search engine crawlers which parts of your site they’re allowed to crawl and how they should behave when doing so.
Important: It doesn’t prevent indexing on its own. If another page links to a URL you've blocked via robots.txt, search engines may still index it without visiting it. For indexing control, use meta robots tags instead.
Key Functions of robots.txt:
- Limits crawl access to pages or directories
- Helps manage server load
- Directs crawlers to your sitemap
- Supports both broad and specific rules using user-agents and wildcards
robots.txt: Format and Directives
To function properly, your robots.txt file must:
- Be encoded in UTF-8
- Be named exactly robots.txt
- Return a 200 HTTP status
- Avoid more than 5 redirects
Otherwise, search engines may interpret it as “full allow” — meaning everything on your site can be crawled.
Common Directives:
- User-agent
Defines which crawler the rule applies to (e.g., Googlebot, Bingbot, or * for all crawlers).
- Disallow
Prevents crawlers from accessing a specific page or directory.
Example:
Disallow: /private-folder/
- Allow
Overrides a previous disallow rule to grant access.
Example:
Allow: /private-folder/public-page.html
- Sitemap
Lists the URL of your XML sitemap to guide discovery.
Example:
Sitemap: https://www.example.com/sitemap.xml
The Allow and Disallow paths support wildcards (*), and comments can be added using #.
Examples of robots.txt Rules
User-agent: *
Disallow: /
Blocks all crawlers from accessing the entire site.
User-agent: Googlebot
Disallow: /example/
Blocks Googlebot from crawling the /example/ directory.
User-agent: Bingbot
Disallow: /page.html
Blocks Bingbot from crawling a specific page.
Note: Editing robots.txt incorrectly can harm your visibility in search. Always double-check your changes and understand the SEO implications.
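One safe way to sanity-check a robots.txt change is Python's built-in robots.txt parser, which applies the same user-agent and Disallow/Allow logic crawlers use. A small sketch, assuming a placeholder domain:

from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # placeholder domain
parser.read()  # fetches and parses the live file

for path in ["/", "/example/", "/page.html"]:
    url = "https://www.example.com" + path
    allowed = parser.can_fetch("Googlebot", url)
    print(path, "->", "allowed" if allowed else "blocked", "for Googlebot")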
What Are Meta Robots Tags?
While robots.txt applies sitewide or by section, meta robots tags offer page-level control over indexing and crawling.
These tags are placed in the <head> section of a web page’s HTML or set in the HTTP headers.
<meta name="robots" content="noindex, nofollow">
This tells crawlers not to index the page or follow its links.
Key Meta Robots Directives
- noindex
Prevents the page from being indexed. Useful for:
- Duplicate content
- Internal search results pages
- Thank you or login pages
- index
Allows the page to be indexed. This is the default behaviour, even if no tag is set.
- follow
Tells crawlers to follow all links on the page. Also the default unless explicitly overridden.
- nofollow
Prevents crawlers from following any links on the page. Useful for:
- Pages linking to commercial partners
- Avoiding link manipulation penalties
- noimageindex
Prevents images on the page from being indexed or shown in image search.
- nosnippet
Stops search engines from showing text or video snippets in search results.
- none
Equivalent to noindex, nofollow.
Combining Directives
You can combine multiple directives in a single tag using commas:
<meta name="robots" content="noindex, nofollow">
Or use multiple tags:
<meta name="robots" content="noindex">
<meta name="robots" content="nofollow">
Together, robots.txt and meta robots tags give you powerful control over how your site is crawled, indexed, and represented in search results. Used strategically, they help prevent duplicate content issues, manage crawl budget, and ensure sensitive or low-value pages don’t dilute your site’s SEO performance.
Canonicalisation: Making Sure the Right Page Gets Indexed
In technical SEO, canonicalisation plays a key role in managing duplicate content and guiding search engines to index the right version of a page. If a single page exists under multiple URLs, whether due to URL parameters, protocol differences, or site structure, canonicalisation helps search engines understand which version should appear in search results.
Let’s explore how to implement canonicalisation effectively.
What Is a Canonical Page?
A canonical page is the version of a web page that you want search engines to index and rank. When duplicate or similar pages exist across different URLs, canonicalisation ensures that only the preferred version appears in the search engine results pages (SERPs).
Note: Without proper canonicalisation, search engines may index the wrong version or divide ranking signals across multiple URLs, harming SEO performance.
When Do Canonicalisation Issues Arise?
You might encounter duplicate content issues when:
- The same page is accessible via both HTTP and HTTPS
- A page is available at www and non-www versions
- URLs contain filtering or tracking parameters
- Session IDs or campaign tags are appended to URLs
In these cases, search engines may find multiple versions of the same content and try to decide which one is best unless you guide them explicitly.
How Search Engines Decide the Canonical Version
Search engines use various signals to determine which version of a page is canonical. Some signals are stronger than others.
Strong Signals
- 301 Redirects
A permanent redirect to another URL is a clear signal that the destination is the canonical version.
- Canonical Annotations (rel="canonical")
Adding a canonical tag in the <head> of the HTML page or in the HTTP header strongly indicates which version should be indexed.
Weaker Signals
- Inclusion in the XML Sitemap
Listing a URL in your sitemap suggests it's canonical, although this signal is not as strong as redirects or canonical tags.
When to Use Each Signal
- Use 301 redirects when duplicate versions don’t need to be accessible. For instance, redirect http://yoursite.com/page to https://yoursite.com/page.
- Use canonical tags when you want to keep the duplicate versions live (e.g., campaign URLs, category filters), but still guide search engines to index the primary version.
- Include only the canonical URLs in your XML sitemap to reinforce your preferred version.
Example: Campaign Tracking URLs
Imagine you're running a campaign with UTM parameters.
- The URL yoursite.com/product?utm_source=ad displays the same content as yoursite.com/product.
- You don’t want the UTM version indexed, so you place a canonical tag on the campaign URL pointing to the clean version:
<link rel="canonical" href="https://yoursite.com/product" />
This tells search engines to index the main URL and consolidate ranking signals there.
How to Implement Canonical Annotations
In HTML <head>:
Place this within the <head> section of your page:
<link rel="canonical" href="https://www.example.com/page.html" />
- Always use the absolute URL.
- Use HTTPS if your site uses SSL.
- Make sure there’s only one canonical tag per page.
In HTTP Headers:
For non-HTML files like PDFs, add the canonical tag in the HTTP header:
Link: <https://www.example.com/page.html>; rel="canonical"
This is particularly useful when you offer both a web version and a downloadable format of the same content.
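When auditing canonical annotations at a small scale, you can fetch each URL and compare its declared canonical with the address you requested. The sketch below uses the requests library and a simple regex; the URLs mirror the campaign-tracking example above and are placeholders.

import re
import requests

urls = [
    "https://yoursite.com/product",
    "https://yoursite.com/product?utm_source=ad",  # should canonicalise to the clean URL
]

for url in urls:
    html = requests.get(url, timeout=10).text
    # Find the <link ... rel="canonical" ...> tag, then pull out its href
    tag = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, re.IGNORECASE)
    canonical = None
    if tag:
        href = re.search(r'href=["\']([^"\']+)["\']', tag.group(0), re.IGNORECASE)
        canonical = href.group(1) if href else None
    if canonical == url:
        print(f"{url} -> self-canonical")
    elif canonical:
        print(f"{url} -> canonical points to {canonical}")
    else:
        print(f"{url} -> no canonical tag found")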
Why Does Canonicalisation Matter?
Search engines aim to deliver the most relevant, non-duplicate content in their results. Canonicalisation:
- Consolidates link equity across duplicate URLs
- Prevents split ranking signals
- Helps improve crawl efficiency
- Ensures consistent indexing of key pages
Canonicalisation may appear straightforward, but overlooking it can result in lost traffic and diminished search visibility. Whether through redirects, meta tags, or sitemap strategy, sending consistent canonical signals ensures your best content gets the attention it deserves.
Optimising JavaScript for SEO
JavaScript brings interactivity and dynamic experiences to websites, powering everything from animations to user interface behaviours. But when it comes to SEO, using JavaScript, particularly client-side rendering, requires careful consideration.
What Is Client-Side Rendering?
Client-side rendering (CSR) is when a browser downloads a minimal HTML page and uses JavaScript to generate the full content dynamically. This approach is great for performance and user experience in some cases, but it introduces extra complexity for search engines.
Search engines must take an additional step to execute JavaScript before they can understand and index your content. This uses more time and resources, and at scale it can limit crawl and indexing efficiency.
Because of this, server-side rendering (SSR) is the preferred method for SEO. It delivers fully rendered HTML to crawlers from the server, reducing friction in the indexing process.
Note that all Wix and Wix Studio websites use server-side rendering by default, helping you bypass many JavaScript-related SEO issues.
When You Can’t Avoid Client-Side Rendering
If you're working on a site that relies on CSR and you can’t switch to SSR, it’s critical to optimise your JavaScript for SEO.
Here are the key optimisation practices you should follow:
1. Use Crawlable Links
Ensure all links are implemented using standard HTML <a> elements with a valid href attribute.
Avoid using JavaScript-based links or onclick events to load new content, as search engines may not follow or discover those URLs.
2. Ensure Each Page Has a Unique URL
Avoid loading different page content via fragments (e.g. example.com/page#section1).
Each piece of content should live on its own distinct URL so that search engines can crawl and index it correctly.
3. Keep JavaScript Resources Crawlable
All files required to render your JavaScript including scripts, images, and stylesheets must be accessible to search engine bots. Blocked resources can result in incomplete rendering and cause content or functionality to be missed during indexing.
4. Preserve Key SEO Tags in Raw HTML
Meta tags like robots and canonical should already be present in the raw HTML, not generated or altered dynamically via JavaScript.
If they differ between the raw HTML and the rendered version, you’ll send conflicting signals to search engines.
5. Avoid Lazy Loading Above-the-Fold Content
Above-the-fold content (what users see without scrolling) should not rely on lazy loading triggered by scrolling or interaction.
This content needs to load immediately for users and for crawlers.
6. Monitor How Search Engines Render Your Pages
Use tools like Google Search Console’s URL Inspection Tool to see how your pages are rendered and indexed.
Keep in mind that crawlers don’t behave like users. JavaScript triggered by clicks or mouse actions may not be executed during crawling.
Tip: Always compare the rendered HTML (post-JavaScript execution) with the raw HTML to identify potential gaps.
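A quick way to start that comparison is to look at what the raw, unrendered HTML already contains. The sketch below (requests library, placeholder URL) checks for the elements discussed above without executing any JavaScript; anything missing here is content you are relying on rendering to provide.

import re
import requests

url = "https://www.example.com/js-heavy-page"  # placeholder
raw_html = requests.get(url, timeout=10).text  # no JavaScript is executed here

checks = {
    "canonical tag in raw HTML": bool(re.search(r'rel=["\']canonical["\']', raw_html, re.IGNORECASE)),
    "meta robots in raw HTML": bool(re.search(r'name=["\']robots["\']', raw_html, re.IGNORECASE)),
    "standard <a href> links": len(re.findall(r'<a\s[^>]*href=', raw_html, re.IGNORECASE)),
    "<title> present": "<title>" in raw_html.lower(),
}
for label, value in checks.items():
    print(f"{label}: {value}")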
Why JavaScript Optimisation Matters for SEO
While search engines have improved their ability to render JavaScript, it still requires more resources and time. Sites that rely heavily on CSR may be crawled and indexed less efficiently, potentially hurting visibility.
That’s why, if your site uses CSR:
- Audit your JavaScript-rendered pages regularly
- Test how crawlers see your content
- Stick to SEO best practices, even in dynamic environments
JavaScript can enhance your site’s experience but it needs to be strategically implemented to support SEO.
If server-side rendering isn’t an option, make sure you follow best practices to ensure your content remains discoverable, indexable, and crawl-friendly.
Why HTTPS Matters for SEO
In today’s digital environment, security is no longer optional—it’s expected. That’s why HTTPS, or HyperText Transfer Protocol Secure, plays such an essential role in both user trust and search engine visibility.
What Is HTTPS?
HTTPS is the secure version of HTTP, used to encrypt the data exchanged between a user’s browser and a website. It ensures that information such as login credentials, payment details, and personal data is protected from interception by malicious third parties.
Why HTTPS Is Important for SEO
Search engines, including Google, use HTTPS as a ranking signal. Although its weight is lower than signals like content quality or backlink authority, it still influences where your pages appear in search results.
More importantly, it contributes to a safer, more trustworthy user experience, a factor that indirectly supports your site’s overall SEO performance by increasing engagement, trust, and return visits.
How to Migrate a Site to HTTPS
If your site isn’t yet using HTTPS, transitioning is a technical must.
Here's how to migrate your or your client’s website securely and correctly:
1. Install an SSL/TLS Certificate
Begin by purchasing and installing an SSL/TLS certificate on your web server. This certificate authenticates your site’s identity and enables secure, encrypted communication between the server and the browser.
Most hosting providers and CMS platforms offer free or paid SSL options, and many allow automatic installation.
2. Redirect All HTTP URLs to HTTPS
Once your certificate is active, set up 301 permanent redirects from all HTTP URLs to their HTTPS counterparts. This ensures that users and search engines are always directed to the secure version of your pages.
Many CMS platforms offer this as a built-in option. If not, it can be configured via the .htaccess file (Apache), nginx.conf (Nginx), or equivalent server settings.
3. Update Internal Links
Go through your website and ensure all internal references, whether links to pages, images, videos, stylesheets or scripts, are updated to use https:// rather than http://.
Failing to do this may result in mixed content issues, which can degrade the user experience and affect SEO performance.
4. Check External Resources
If your site uses third-party tools or resources (e.g. CDNs, fonts, embeds), verify that those services support HTTPS and update the references accordingly.
Loading insecure assets over HTTP on an HTTPS page can trigger browser warnings or even block content from loading.
5. Validate Mixed Content Warnings
Use a site crawler or your browser's developer tools to check for mixed content errors, where HTTPS pages are pulling insecure resources.
Browsers like Google Chrome flag these warnings and may block content until the issue is resolved.
Tip: Tools like Screaming Frog or Google Search Console’s URL Inspection can help identify these problems quickly.
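For a quick spot check of a single page, you can also scan its HTML source for http:// references. A minimal sketch with the requests library and a placeholder URL:

import re
import requests

url = "https://www.example.com/"  # placeholder - must be an HTTPS page
html = requests.get(url, timeout=10).text

# Any src/href that still points at http:// is a mixed content candidate
insecure = re.findall(r'(?:src|href)=["\'](http://[^"\']+)["\']', html, re.IGNORECASE)
if insecure:
    print(f"{len(insecure)} insecure reference(s) found:")
    for ref in sorted(set(insecure)):
        print(" ", ref)
else:
    print("No http:// references found in the HTML source")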
Migrating to HTTPS isn’t just a technical checkbox—it’s a fundamental trust signal for users and an SEO best practice.
When configured correctly, HTTPS ensures your content is served securely, earns favour with search engines, and delivers peace of mind to visitors.
Mobile-First Indexing
Search engines have officially moved into a mobile-first era. Today, they primarily use mobile crawlers to evaluate and index content, and that means your mobile site version plays a critical role in how your website ranks in search results.
In this guide, we’ll walk you through the best practices for mobile SEO, including how to configure your website to deliver an excellent experience across devices while remaining search engine friendly.
What Is Mobile-First Indexing?
Mobile-first indexing means Google and other major search engines now crawl and index the mobile version of your website first, not the desktop version. If your mobile site isn’t fully optimised, it could negatively affect your visibility and rankings.
So how do you ensure your site performs well in a mobile-first world?
3 Mobile Site Configurations That Matter
There are three core approaches to mobile configuration:
1. Responsive Web Design (Recommended)
Responsive design delivers the same HTML code across all devices using a single URL, but uses CSS to adjust the layout based on the screen size.
Google strongly recommends this approach as it's the most consistent and reliable option and the easiest to maintain. All Wix Studio websites support responsive design by default, making them a smart choice for SEO professionals and developers alike.
2. Dynamic Serving
With dynamic serving, your site still uses a single URL, but serves different HTML and CSS depending on the user-agent (i.e. whether it’s a desktop browser or a mobile device).
This method is more complex to manage and requires meticulous configuration to avoid indexing issues.
3. Separate URLs for Mobile and Desktop
This configuration serves different content through distinct URLs (e.g. example.com for desktop and m.example.com for mobile). It relies on detecting the user-agent to redirect visitors accordingly.
This approach increases the risk of duplication, misconfiguration, and crawl inefficiencies, making it the least preferred option for long-term SEO.
Best Practices for Mobile SEO Success
To maximise your performance with mobile-first indexing, follow these practical steps:
Match Mobile and Desktop Content
Ensure the primary content and technical configuration on your mobile site are equivalent to those on your desktop site. This includes headings, text, links, structured data, and meta tags.
If a mobile page contains less content (e.g. only a summary of a full desktop article), that truncated version will be indexed and your rankings could suffer as a result.
Make Mobile Resources Crawlable
Search engines need to access CSS, JS, images, and other assets to render your mobile pages properly. Check your robots.txt file to confirm that no critical mobile resources are blocked, and use the URL Inspection Tool to detect any crawl issues.
Avoid Lazy-Loading Above-the-Fold Content
Lazy-loading is useful for performance, but do not lazy-load critical content that appears above the fold. Google’s crawler cannot interact with your page (e.g. it can’t scroll or swipe), so if your top content is hidden, it may never be indexed.
Simulate Mobile Crawlers During Audits
When performing a technical SEO audit, test how both mobile and desktop crawlers access your site. This helps you detect discrepancies in indexing, visibility, or content delivery.
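A lightweight way to start is to request the same URL with a mobile-style and a desktop-style user-agent and compare what comes back. The sketch below uses the requests library; the user-agent strings are simplified stand-ins, since real crawler user-agents are longer and change over time.

import requests

url = "https://www.example.com/"  # placeholder

# Simplified user-agent strings for illustration only - check the search
# engines' documentation for the current crawler user-agents
user_agents = {
    "mobile": "Mozilla/5.0 (Linux; Android 10) compatible; Googlebot/2.1",
    "desktop": "Mozilla/5.0 compatible; Googlebot/2.1",
}

sizes = {}
for label, ua in user_agents.items():
    response = requests.get(url, headers={"User-Agent": ua}, timeout=10)
    sizes[label] = len(response.text)
    print(f"{label}: status {response.status_code}, {sizes[label]} characters of HTML")

# A large gap can indicate the mobile version is serving less content
if abs(sizes["mobile"] - sizes["desktop"]) > 0.2 * max(sizes.values()):
    print("Mobile and desktop responses differ noticeably - compare content parity")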
Mobile-first indexing isn't just a trend; it's the current standard. To stay competitive in search, your site must be fully functional and content-complete on mobile. With responsive design, crawlable resources, and content parity across devices, you can create a fast, accessible, and high-performing mobile experience.
Structured Data for SEO
In today's competitive search landscape, it's not enough to simply publish quality content; you must also help search engines understand it. That's where structured data comes in.
We'll explore what structured data is, how it connects to Schema.org, and how to implement it correctly to unlock rich results and enhanced visibility in search engines.
What Is Structured Data?
Structured data is a standardised format that allows you to describe your page's content in a way that search engines can easily interpret. It enables you to define what type of content exists on a page (a recipe, review, event, product, or FAQ) and identify specific elements such as ratings, ingredients, prices, or author names.
This additional context improves how your content appears in search results and can trigger rich results, which typically lead to higher click-through rates (CTR).
Understanding Schema.org
Structured data relies on the Schema.org vocabulary, a collaborative initiative developed by major search engines to create a unified set of tags and properties for marking up content.
Schema lets you define:
- Entities (e.g. a recipe, a person, an organisation),
- Properties (e.g. cooking time, image, review),
- Relationships (e.g. product → has review → rating),
- Actions (e.g. a person publishes a blog post).
This structured format helps search engines understand the meaning behind your content, rather than just reading the words on the page.
Types of Structured Data That Trigger Rich Results
Google supports structured data for a variety of content types. Some popular ones include:
- Breadcrumb
- Logo
- Event
- FAQ
- How-To
- Image Metadata
- Product
- Q&A
- Recipe
- Review
- Software
- Video
If implemented correctly, these can result in rich results that display additional details in the search engine results page (SERP), such as star ratings, event times, and step-by-step instructions.
Supported Structured Data Formats
Google supports three main formats for structured data:
1. JSON-LD (Recommended)
- Preferred by Google.
- Written as JSON and inserted into a <script type="application/ld+json"> tag.
- Placed in the <head> or <body> of your HTML.
- Easiest to scale and maintain.
2. Microdata
- Uses HTML tag attributes to define properties directly within the content.
- More tightly woven into the page’s visible HTML.
3. RDFa
- An extension to HTML5.
- Allows linking of structured data with other web resources (linked data).
- Embedded within HTML using attributes like about, property, and typeof.
Validating Your Structured Data
Before publishing, you should always test your structured data for accuracy. Google offers several useful tools:
- Rich Results Test: Checks if a page is eligible for rich results and validates JSON-LD, Microdata, and RDFa.
- Schema Markup Validator: Provided by Schema.org to ensure your markup aligns with the specification.
- Google Search Console: Tracks the presence and performance of structured data across your site, including any errors.
Example: JSON-LD for a Recipe Page
Here's a basic example of valid JSON-LD markup for a recipe page, wrapped in the script tag it would be embedded in:
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "Recipe",
"name": "Classic Lasagna",
"author": {
"@type": "Person",
"name": "Jane Doe"
},
"cookTime": "PT1H",
"recipeIngredient": [
"1 pound ground beef",
"2 cups tomato sauce",
"12 lasagna noodles"
],
"recipeInstructions": [
"Boil noodles.",
"Cook beef.",
"Layer ingredients and bake."
]
}
</script>
Implementing Structured Data: Manual vs Automated
When deciding how to implement structured data:
- Manually: Add it directly to your site's HTML or via CMS code editors.
- Automatically: Use CMS tools and plugins to add markup site-wide.
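If you go the manual or semi-automated route, structured data can be generated from your existing content records and serialised as JSON-LD. A small sketch reusing the recipe example above (the data source and how you output the markup are up to you):

import json

# Hypothetical recipe data - in practice this would come from your CMS
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Lasagna",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "cookTime": "PT1H",
    "recipeIngredient": ["1 pound ground beef", "2 cups tomato sauce", "12 lasagna noodles"],
    "recipeInstructions": ["Boil noodles.", "Cook beef.", "Layer ingredients and bake."],
}

# Serialise to JSON-LD and wrap it in the script tag search engines expect
markup = f'<script type="application/ld+json">\n{json.dumps(recipe, indent=2)}\n</script>'
print(markup)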
How to Identify Structured Data Opportunities
Follow this workflow to uncover where structured data can benefit your site:
- Review your content types (e.g. blog, product, event, FAQ).
- Check Google Search Central’s documentation to see which schemas are supported.
- Determine feasibility of implementing structured data manually or using platform tools.
- Validate markup before publishing and monitor it in Google Search Console.
Structured data isn't just a technical nicety; it's a strategic SEO tool. When used correctly, it improves how search engines read your content and increases the likelihood of earning rich results, driving more clicks and visibility.
Take the time to assess your existing content, plan schema enhancements, and use validation tools to ensure everything runs smoothly.
Core Web Vitals
As Google continues to focus on real-world user experience, Core Web Vitals have become essential to both technical SEO and site performance. These metrics evaluate how users experience the speed, interactivity, and visual stability of a webpage, factors that directly influence how your site ranks.
Here, we’ll talk about what Core Web Vitals are, how they’re measured, and what makes a good score.
What Are Core Web Vitals?
Core Web Vitals are a set of three performance metrics developed by Google to quantify the quality of the user experience on a website. They focus on how quickly content loads, how soon a user can interact with it, and how visually stable the page is during loading.
These signals are part of Google’s page experience ranking system, meaning optimising for them can positively impact your site's visibility in search results.
The three main Core Web Vitals are:
- Largest Contentful Paint (LCP)
- Interaction to Next Paint (INP) (previously First Input Delay)
- Cumulative Layout Shift (CLS)
1. Largest Contentful Paint (LCP)
LCP measures how long it takes for the largest visible element, such as an image or block of text, to load on the screen. It focuses on content above the fold and provides a realistic estimate of when the page becomes usable to the user.
LCP Scoring:
- Good: Less than 2.5 seconds
- Needs improvement: 2.5 – 4 seconds
- Poor: Over 4 seconds
Example:
Imagine a mobile page begins loading. First, a heading appears (a candidate for LCP), followed by a paragraph, and finally a large image that completes at the 2.5-second mark. That image is the final LCP element, and the LCP score is based on its load time.
Do note that LCP scores vary by session and device, so testing on both desktop and mobile is critical.
2. Interaction to Next Paint (INP)
Google officially replaced First Input Delay (FID) with Interaction to Next Paint (INP) in March 2024. While FID measured only the first interaction, INP measures the entire interaction lifecycle, capturing the delay between a user action (like a click) and when the screen updates in response.
INP Scoring:
- Good: Less than 200 ms
- Needs improvement: 200 – 500 ms
- Poor: More than 500 ms
This broader view gives a more accurate picture of interactivity, especially on pages with multiple dynamic elements or heavy JavaScript.
3. Cumulative Layout Shift (CLS)
CLS measures visual stability during page load. If elements shift unexpectedly while the page loads, like buttons jumping or text moving, this leads to a higher CLS score.
We’ve all experienced it: you're about to tap a link and suddenly the layout shifts, causing you to click the wrong thing. That's the kind of experience CLS is designed to eliminate.
CLS Scoring:
- Good: Less than 0.1
- Needs improvement: 0.1 – 0.25
- Poor: Over 0.25
Your goal should be a score as close to zero as possible to ensure smooth and stable loading.
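The thresholds above are easy to encode if you want to classify metric values pulled from your own monitoring. A minimal sketch; the function name and sample values are illustrative.

def classify_core_web_vitals(lcp_seconds, inp_ms, cls_score):
    # Thresholds as described above: good / needs improvement / poor
    def rate(value, good, poor):
        if value < good:
            return "good"
        if value <= poor:
            return "needs improvement"
        return "poor"

    return {
        "LCP": rate(lcp_seconds, 2.5, 4),
        "INP": rate(inp_ms, 200, 500),
        "CLS": rate(cls_score, 0.1, 0.25),
    }

print(classify_core_web_vitals(lcp_seconds=2.4, inp_ms=350, cls_score=0.3))
# {'LCP': 'good', 'INP': 'needs improvement', 'CLS': 'poor'}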
Tools to Measure Core Web Vitals
Google offers several tools to assess your site’s Core Web Vitals:
- Google Search Console – Real-world performance data from actual users.
- PageSpeed Insights – Provides both lab and field data with optimisation tips.
- Lighthouse (via Chrome DevTools) – Simulated performance testing in a controlled environment.
- Wix Site Speed Dashboard – Tailored insights for Wix users, providing detailed vitals reporting per page.
These tools are essential when conducting an SEO audit or diagnosing performance issues.
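If you want to pull this data programmatically, the PageSpeed Insights API exposes both field and lab results. The sketch below assumes the public v5 endpoint and commonly seen response fields; verify both against Google's current documentation, and add an API key for anything beyond occasional use.

import requests

# Assumed endpoint for the PageSpeed Insights API (v5) - check Google's
# current documentation before relying on it
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {"url": "https://www.example.com/", "strategy": "mobile"}  # placeholder URL

data = requests.get(API, params=params, timeout=60).json()

# Field data (real users) is reported under loadingExperience; lab data under lighthouseResult
field = data.get("loadingExperience", {})
lab = data.get("lighthouseResult", {}).get("categories", {}).get("performance", {})

print("Field data category:", field.get("overall_category", "not enough traffic yet"))
for metric, values in field.get("metrics", {}).items():
    print(f"  {metric}: {values.get('category')}")
print("Lab performance score:", lab.get("score"))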
Lab Data vs Field Data: What’s the Difference?
When assessing Core Web Vitals, it’s important to understand the two types of data you may encounter:
Lab Data:
- Collected in a controlled environment (simulated).
- Great for debugging and testing changes pre-launch.
- Doesn’t always reflect real user experiences.
Field Data:
- Collected from real user visits.
- Offers authentic insights into how the site performs across various devices and network conditions.
- Used by Google for actual ranking signals.
For a comprehensive view, both types of data should be considered during performance optimisation.
Core Web Vitals are not just technical benchmarks; they are indicators of user satisfaction and direct contributors to your search performance. By improving LCP, INP, and CLS, you enhance both usability and SEO.
Lab Data vs Field Data
When it comes to understanding your website's performance, especially in relation to Core Web Vitals, there are two essential types of data to know: lab data and field data. Both serve different purposes in your technical SEO strategy, but knowing when and how to use them can be the difference between surface-level insights and real user-focused optimisation.
What’s the Difference?
At a glance:
- Lab Data = Simulated environment testing
- Field Data = Real-world user experience
Let’s break these down further.
Field Data: Real Experience from Real Users
Field data reflects how actual visitors experience your site in the wild. It's collected from real user sessions, most commonly through Chrome browsers, and represents performance across different devices, networks and geographic regions.
How it’s gathered:
Field data is sourced from the Chrome UX Report (CrUX), a public dataset that collects anonymised performance metrics from Chrome users who have opted into syncing and usage statistics.
Tools that use it:
- Google Search Console
- PageSpeed Insights (field section)
- CrUX Dashboard
When to use field data:
- To assess real user experience at scale
- To evaluate the impact of recent optimisations
- When enough traffic exists to generate reliable insights
Keep in mind: Field data takes time. You won’t instantly see the impact of changes. Google needs to collect enough sessions first.
Lab Data: Controlled, Immediate Performance Testing
Lab data is generated in a simulated environment, using predefined devices, networks and conditions. Tools like Lighthouse run performance tests and assign scores based on how your site performs during the simulation.
Tools that use it:
- Lighthouse
- PageSpeed Insights (lab section)
- Web.dev Measure
- Chrome DevTools
When to use lab data:
- Before launching a site (no users yet)
- To identify optimisation opportunities quickly
- To test design or code changes without needing to wait for user traffic
For example, if you redesign a product page, you can use lab data to test how it performs without needing to wait for thousands of real visitors.
Lab vs Field: A Practical Example
Imagine two sites:
- Site A targets users in high-income countries with fast internet and modern devices.
- Site B serves users in regions with older devices and slower connections.
If both sites score equally in lab tests, their field data may still differ significantly due to actual user conditions. This is why Google relies on field data for ranking: it reflects real experiences, not just potential performance under ideal conditions.
Both lab data and field data are valuable tools in your SEO toolkit, but for different reasons.
- Use lab data for fast feedback, experimentation, and when your site lacks enough traffic for meaningful field results.
- Rely on field data when making SEO decisions, as Google ultimately cares about how real users experience your site.
Technical SEO Audit
Conducting a technical SEO audit is one of the most powerful things you can do to boost a website’s organic search performance. It helps you uncover crawlability, indexability and structural issues that might be holding back rankings, while also revealing opportunities for growth.
In this guide, we'll walk you through how to perform a technical SEO audit step by step, ensuring your recommendations are practical, prioritised and aligned with real business goals.
What Is a Technical SEO Audit?
A technical SEO audit is an in-depth review of a website’s configuration and infrastructure to ensure it’s optimised for search engines. It examines everything from how a site is crawled and indexed to how efficiently it serves users across devices.
Importantly, it's not just about finding problems; it's about identifying opportunities to grow organic traffic, improve rankings, and help businesses achieve meaningful results online.
SEO is an iterative process. You don’t need to fix everything at once. Progress is made through consistent improvement over time.
The 5 Key Stages of a Technical SEO Audit
- Data Gathering
- Analysis
- Hypothesis
- Prioritisation
- Reporting
Let’s break down each step.
1. Data Gathering
Start by collecting data on your site's current technical health and organic performance. You’ll use a mix of tools to do this:
Tools to use:
- Google Search Console & Bing Webmaster Tools (crawling/indexing data)
- Screaming Frog or Sitebulb (site crawling simulation)
- Google Analytics (traffic and conversions from organic search)
- Server logs or CDN logs (optional but helpful)
Tip: Crawl using the mobile crawler and with JavaScript rendering enabled. It’s the best way to reflect how search engines experience your site today.
2. Analysis
Once your data is gathered, you’ll want to identify key configurations across your site pages. Focus on:
- HTTP status codes
- Meta robots and canonical tag setups
- Internal linking structure
- Structured data implementation
- Mobile-friendliness
- Core Web Vitals
- Indexed URLs and keyword rankings
- Traffic and conversion data
Use this to determine which configurations may be hurting your rankings or holding back organic growth.
3. Hypothesis
After you’ve identified issues or gaps, ask yourself:
- What happened?
What's the exact issue? Why is it a problem for SEO?
- Why did it happen?
What's the root cause? Is it due to broken links, redirects, poor tagging, or missing elements?
- How can it be fixed or improved?
What’s a practical, effective solution that can be implemented within your client's platform or resources?
For example, if your internal links are pointing to canonicalised URLs (not meant to be indexed), investigate where those links originate from and update them to point to the correct canonical URLs.
4. Prioritisation
Not all SEO issues are created equal. Prioritise them by:
- Criticality (how essential is this to crawlability/indexability?)
- Scope (how many pages are affected?)
- Effort (how complex or resource-heavy is the fix?)
- Potential Impact (on rankings, CTR, traffic, conversions)
Fixing a missing title tag on core category pages is likely more impactful and easier than implementing a complex structured data solution across thousands of minor pages.
Create a prioritised roadmap of actions. Focus first on quick wins with high impact, then move toward more complex opportunities.
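One way to keep that roadmap objective is to score each finding on the four factors above. The sketch below uses illustrative 1-5 scales and a simple formula, not a standard methodology.

# Example findings - the scores are illustrative, not a standard scale
findings = [
    {"issue": "Missing title tags on category pages", "criticality": 5, "scope": 4, "impact": 5, "effort": 1},
    {"issue": "Structured data on minor pages", "criticality": 2, "scope": 2, "impact": 2, "effort": 4},
    {"issue": "Redirect chains over 10 hops", "criticality": 4, "scope": 2, "impact": 3, "effort": 2},
]

def priority(finding):
    # Higher criticality, scope and impact raise priority; higher effort lowers it
    return (finding["criticality"] + finding["scope"] + finding["impact"]) / finding["effort"]

for finding in sorted(findings, key=priority, reverse=True):
    print(f"{priority(finding):.1f}  {finding['issue']}")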
5. Reporting
Once your findings and priorities are clear, present them in a format that your team or client can easily understand and act on.
Options include:
- A shared document (Google Docs or Word)
- A slide presentation
- A task board or project management tool (like Asana or Trello)
What matters is clarity, not format.
Your report should include:
- Clear description of the issue/opportunity
- Screenshots showing specific examples
- URLs affected and where they’re listed
- Viable, step-by-step solutions
- Competitor examples (showing correct implementations)
- Executive summary for non-technical stakeholders
Make it easy for decision-makers to say "yes". The more actionable and understandable your report, the faster you’ll see progress.
A well-executed technical SEO audit goes beyond diagnostics; it's a strategic tool for growth. When done right, it gives you:
- A clear picture of what’s blocking performance
- Data-backed insights to justify your SEO work
- A roadmap of actions to drive better rankings and traffic
And remember, SEO is not a one-off task. It’s ongoing. So build habits around regular audits, implement iteratively, and always stay curious about what can be improved next.
Best Practices for Implementing and Communicating Technical SEO Tasks
Completing an SEO audit is only half the battle. Once your recommendations are approved, the real work begins: implementing them effectively and ensuring seamless communication across teams.
Here, we'll walk you through the best practices for coordinating technical SEO tasks with developers, designers, and other stakeholders, ensuring that your recommendations don't just sit in a document but translate into actual results.
Align SEO with Development and Design Workflows
Before anything else, take time to understand your client or company’s existing web development and design workflows. This will help you align your SEO plans with how they build and maintain the site.
Your recommendations will be far more effective when they’re tailored to fit into existing processes.
Prioritise What’s Impactful and Feasible
When presenting your recommendations:
- Start with quick wins—those that are both impactful and easy to implement.
- Clearly explain the benefits—focus on how SEO changes align with broader business goals.
- Confirm feasibility with the development team before locking in timelines.
If you encounter technical pushback due to feasibility, explore alternative approaches that could produce similar results with fewer resources.
Establish Clear Communication and Follow-Up
Set up a communication workflow that keeps everyone aligned:
- Recurring updates: A weekly or biweekly call is a good baseline.
- Asynchronous tools: Use a project management system like Trello, Asana or Jira for task updates, questions, and notes.
- Collaborative spirit: Avoid placing blame. Be empathetic and involve the web team early to co-develop solutions.
Consider setting up a basic SEO knowledge base to help team members understand the “why” behind your recommendations.
Use a Staging Environment Before Going Live
Before releasing any changes that could affect SEO:
- Test them in a staging environment that mirrors the live site as closely as possible.
- Block this environment from search engines.
- Require user authentication to prevent accidental indexing.
- Agree on validation workflows with the dev and design teams to ensure everything is reviewed pre- and post-release.
This will reduce risks and give you time to fix anything unexpected before users (or Google) see it.
Communicate Results Effectively
When it’s time to report on technical SEO performance:
- Tie changes back to ranking improvements, traffic growth, and revenue impact.
- If results aren’t meeting expectations, explain why, and offer alternative strategies.
- Celebrate and share wins with the team; highlighting how everyone contributed to SEO success builds motivation and alignment.
Technical SEO Do’s and Don’ts
Do:
- Schedule regular crawls using tools like Screaming Frog (File > Scheduling) to monitor your site’s health.
- Filter results: Prioritise SEO issues based on their actual impact, not just tool alerts.
- Measure success using real metrics: Focus on crawlability, indexability, ranked queries, clicks, conversions, and revenue from organic search.
Don’t:
- Attempt to fix every issue flagged by an SEO tool without evaluating its real-world effect.
- Rely solely on third-party scores or metrics as indicators of technical SEO success.
- Overwhelm dev teams with a backlog of low-priority tasks. Use your prioritised SEO plan to stay focused.
Strong technical SEO doesn't happen in isolation; it's the result of clear communication, cross-team collaboration, and ongoing prioritisation. By following these best practices, you'll build trust, improve execution, and drive meaningful results from your SEO strategy.