Crawlable Links: The Complete SEO Guide (2026)

Saar Twito · Founder & SEO Engineer · 7 min read

Hi, I'm Saar - a software engineer, SEO specialist, and lecturer who loves building tools and teaching tech.


What Is a Crawlable Link?

A crawlable link is a real HTML anchor tag with an href attribute pointing to a regular URL — for example <a href="/products/iphone-15">iPhone 15</a>. Googlebot, Bingbot, and AI crawlers follow links by reading the href attribute on <a> elements. Anything else — buttons that navigate via JavaScript, onClick handlers without an href, links revealed only after a click — is invisible to crawlers, which means the destination page is invisible too.

Key Facts (TL;DR)

  • Google's rule is explicit: in its "Links best practices" documentation, Google states it can only follow links that are <a> elements with an href attribute pointing to a URL.
  • JavaScript-only navigation is not crawled. <a onclick="navigate()"> with no href is treated as text, not a link.
  • <button> is not a link. Buttons that route via JS are not followed by Googlebot regardless of how the JS routing works.
  • href="javascript:void(0)" is invisible to crawlers — Google explicitly lists it as a non-crawlable pattern.
  • Next.js Link, React Router Link, and Nuxt NuxtLink all output real <a href> in the rendered HTML — verify in View Source, not Inspect.
  • Content revealed only after a click is invisible. Tabs, modals, and accordions that load links via JS hide those links from crawlers.

Link Patterns: Crawlable vs Not

| Pattern | Crawlable? | Fix |
| --- | --- | --- |
| `<a href="/page">Text</a>` | Yes | This is the correct form |
| `<a href="https://example.com/page">Text</a>` | Yes | Absolute URLs work too |
| `<a onclick="navigate()">Text</a>` (no href) | No | Add `href="/page"` |
| `<a href="javascript:void(0)">Text</a>` | No | Replace with a real href |
| `<a href="#" onclick="navigate()">` | No | Replace `#` with the destination URL |
| `<button onclick="router.push('/page')">` | No | Use `<a href="/page">` styled as a button |
| `<div data-href="/page" onclick=...>` | No | Use a real anchor tag |
| `<a href="/page" rel="nofollow">` | Crawlable, but not followed for ranking | Remove nofollow for trusted internal links |
| Next.js `<Link href="/page">` | Yes | Renders to a real `<a href>` (verify in View Source) |
| React Router `<Link to="/page">` | Yes | Renders to a real `<a href>` |
| Link revealed by JS only after a click | No | Render the link in the initial HTML |
| Link inside a closed accordion / tab | Yes if in the HTML, even if visually hidden | Make sure the HTML contains the link |

Why Some Links Look Like Anchors But Aren't Crawled

Three common reasons Googlebot ignores something that looks like a link in a browser:

  1. The href doesn't contain a real URL. href="#", href="javascript:void(0)", and missing href all fall here. Google has documented these as non-crawlable.
  2. The URL only exists in JavaScript variables. Routing libraries that store paths in data-* attributes or JS state (and hijack click events) don't produce a URL Google can extract.
  3. The element renders only after user interaction. A "Load more" button that injects 50 links into the DOM after a click is useless because Googlebot doesn't click. Render the links upfront, even if they're visually hidden behind "Load more", or use server-side pagination with real URLs.

How to Implement Crawlable Links Correctly

Plain HTML

<!-- Crawlable: real anchor with a URL in href -->
<a href="/products/iphone-15">iPhone 15</a>

<!-- Crawlable: opens in new tab, still followed -->
<a href="/about" target="_blank" rel="noopener">About us</a>

<!-- NOT crawlable -->
<a onclick="goTo('/products')">Products</a>
<a href="javascript:void(0)" onclick="goTo('/products')">Products</a>
<button onclick="router.push('/products')">Products</button>

Next.js (App Router)

import Link from 'next/link';

// Renders to: <a href="/products/iphone-15">iPhone 15</a>
<Link href="/products/iphone-15">iPhone 15</Link>

// Need a button that looks like a button but is crawlable?
// Use Link styled as a button — never <button> for navigation.
<Link href="/signup" className="btn-primary">Sign up</Link>

React Router

import { Link } from 'react-router-dom';

// Renders to: <a href="/products/iphone-15">iPhone 15</a>
<Link to="/products/iphone-15">iPhone 15</Link>

Nuxt

<!-- Renders to: <a href="/products/iphone-15">iPhone 15</a> -->
<NuxtLink to="/products/iphone-15">iPhone 15</NuxtLink>

All of the above frameworks output real <a href> tags in the rendered HTML — but only if you render on the server. A pure client-side rendered SPA can ship an empty HTML shell with the links only appearing after JS runs, which defeats the point. Always verify with View Source.

Common Mistakes (Bad vs Good)

Mistake: Using <button> for navigation

Bad: <button onClick={() => router.push('/pricing')}>Pricing</button>. Crawlers ignore it.

Good: Use <Link href="/pricing"> styled as a button. <button> is for actions (submit, toggle), not navigation.

Mistake: href="#" with onClick

Bad: <a href="#" onClick={handleClick}>See pricing</a>. The # is not a real URL.

Good: Put the destination in href. If you need to intercept the click, use event.preventDefault() in the handler — Googlebot still extracts the href.
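A minimal sketch of this pattern in plain HTML/JS (the /pricing path, element id, and openPricingOverlay handler are illustrative):

```html
<!-- Crawlable: the real URL lives in href; JS intercepts the click -->
<a href="/pricing" id="pricing-link">See pricing</a>
<script>
  document.getElementById('pricing-link').addEventListener('click', (event) => {
    event.preventDefault();  // stop the default navigation for JS users
    openPricingOverlay();    // hypothetical custom behavior
  });
</script>
```

Crawlers never run the handler, but they still extract /pricing from the href, and users without JavaScript get a working link as a fallback.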

Mistake: Pagination as JS-only "Load more"

Bad: Page 1 loads 20 items; the "Load more" button appends 20 more via JS. Pages 2+ have no URLs, so Google never discovers items 21+.

Good: Provide real paginated URLs (/blog?page=2) with <a href> links to each, even if you also offer JS-based progressive loading.
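A sketch of what crawlable pagination looks like in the HTML (URLs are illustrative):

```html
<!-- Page 1 renders items 1-20, plus real URLs crawlers can follow -->
<nav aria-label="Pagination">
  <a href="/blog?page=2">2</a>
  <a href="/blog?page=3">3</a>
  <a href="/blog?page=2">Next</a>
</nav>
```

JS can still hijack these clicks for progressive loading; the point is that each page has a real URL in a real href.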

Mistake: Hiding nav inside a JS-only mega menu

Bad: Sub-category links live in a mega menu that only renders into the DOM when the user hovers. Googlebot doesn't hover.

Good: Render the menu HTML at all times and use CSS to show or hide it. The links exist in the HTML regardless of interaction.
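A sketch of the CSS-visibility approach (class names and category URLs are illustrative):

```html
<style>
  /* Hidden by default, shown on hover; the links stay in the HTML either way */
  .mega-menu { display: none; }
  .menu-item:hover .mega-menu { display: block; }
</style>
<li class="menu-item">
  <a href="/category/phones">Phones</a>
  <ul class="mega-menu">
    <li><a href="/category/phones/apple">Apple</a></li>
    <li><a href="/category/phones/samsung">Samsung</a></li>
  </ul>
</li>
```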

Mistake: Storing the URL in a data- attribute

Bad: <div data-href="/page" onClick={navigate}>. Google does not extract URLs from data-* attributes.

Good: Use a real <a href="/page">.

How to Test If Your Links Are Crawlable

  1. View page source. Right-click → "View page source" (not Inspect — that shows the post-JS DOM). Search for your link text. If you find a real <a href="/page">, it's crawlable.
  2. curl the page. curl -s https://yoursite.com/ | grep -o 'href="[^"]*"'. The resulting href values are what raw-HTML crawlers (and Google before rendering) see.
  3. Search Console URL Inspection. Run a Live Test and view the rendered HTML to confirm your <a href> links appear in what Google rendered.
  4. Disable JavaScript. In Chrome DevTools, disable JS and reload. Click each navigation link; if any don't work, those are JS-only and not crawlable.
  5. Run a crawl. Screaming Frog with JS rendering off, or Sitebulb in HTML-only mode, will list every URL discoverable from raw HTML.
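Step 2 above can be checked without touching the network. A sketch, where the sample markup stands in for the output of curl-ing your page:

```shell
# What a raw-HTML crawler sees: href values only, no JS execution.
html='<a href="/products">Products</a><button onclick="go()">Pricing</button>'
echo "$html" | grep -o 'href="[^"]*"'
# prints: href="/products" - the button contributes nothing
```

If a navigation target is missing from this list, crawlers that don't render JavaScript cannot discover it.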

FAQ

Does Google really not follow JavaScript links?

Google renders JavaScript and may discover links injected by JS — but its official documentation says only <a> tags with href attributes are guaranteed to be followed. JS-only links are best-effort, often delayed, and invisible to other crawlers (Bingbot, GPTBot, ClaudeBot). Use real anchor tags.

Are Next.js Link and React Router Link crawlable?

Yes — both render to real <a href> in the HTML. The catch is they only do so when the page is server-rendered or pre-rendered. In a fully client-side app the anchor only appears after hydration, so verify with View Source.

Should I use <button> or <a> for a CTA that looks like a button?

If clicking it changes the URL — <a href> styled as a button. If clicking it triggers an action that doesn't change the URL (open a modal, submit a form) — <button>. The visual style is independent of the semantic element.

Does rel="nofollow" make a link uncrawlable?

No. nofollow tells Google not to pass ranking signals through the link, but the link is still crawlable and the destination URL is still discovered. For internal links between your own pages, do not add nofollow.

What about href="#section-id" for in-page anchors?

Crawlable and useful — Google indexes them as fragment links to the same page. They don't create new indexable pages, but they help with featured snippets and on-page navigation.
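For example (the id and link text are illustrative):

```html
<a href="#pricing">Jump to pricing</a>

<!-- elsewhere on the same page -->
<h2 id="pricing">Pricing</h2>
```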

Is target="_blank" a problem for SEO?

No. Google follows the link normally regardless of target. Add rel="noopener" for security, not SEO.

What if my page is fully client-rendered? Can the links still be crawled?

Sometimes — Google renders JS and may discover the links. But AI crawlers like GPTBot and ClaudeBot don't render JS, and even Google's rendering is delayed. Move to SSR or SSG. See our CSR and SEO guide.

How many links per page is too many?

Google retired the old "100 links per page" guideline years ago. Pages can have hundreds of crawlable links without penalty. The practical concern is link equity dilution — each link splits some authority — not crawlability.

Conclusion

Crawlable links come down to one rule: a real <a> tag with a real href pointing to a real URL, present in the initial HTML. Anything else — buttons that navigate, onClick handlers, javascript:void(0), JS-only mega menus — is invisible to crawlers. View Source on your most important pages, pipe a curl of each through grep for href values, and verify that every URL you want indexed is reachable through real anchor tags. Without crawlable links, the rest of your SEO doesn't matter; see also our site crawlability guide.