A crawlable link is a real HTML anchor tag with an href attribute pointing to a regular URL — for example <a href="/products/iphone-15">iPhone 15</a>. Googlebot, Bingbot, and AI crawlers follow links by reading the href attribute on <a> elements. Anything else — buttons that navigate via JavaScript, onClick handlers without an href, links revealed only after a click — is invisible to crawlers, which means the destination page is invisible too.
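To see this from the crawler's side: link discovery boils down to pulling `href` values out of anchor tags in the raw HTML. A minimal sketch (an illustrative toy regex, not Googlebot's actual HTML parser):

```javascript
// Toy model of crawler link discovery: only URLs that sit in an
// href attribute of an <a> tag are found. (Illustrative regex only,
// not Googlebot's real parser.)
const html = `
  <a href="/products/iphone-15">iPhone 15</a>
  <a onclick="navigate()">Specs</a>
  <button onclick="router.push('/deals')">Deals</button>
`;

const links = [...html.matchAll(/<a\s[^>]*?href="([^"]+)"/g)].map(m => m[1]);

console.log(links); // [ '/products/iphone-15' ]
```

The JS-only anchor and the button contribute nothing; only the real anchor's URL is discovered.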
Only `<a>` elements with an `href` attribute pointing to a URL are crawlable:

- `<a onclick="navigate()">` with no `href` is treated as text, not a link.
- `<button>` is not a link. Buttons that route via JS are not followed by Googlebot, regardless of how the JS routing works.
- `href="javascript:void(0)"` is invisible to crawlers; Google explicitly lists it as a non-crawlable pattern.
- Next.js `Link`, React Router `Link`, and Nuxt `NuxtLink` all output a real `<a href>` in the rendered HTML (verify in View Source, not Inspect).

| Pattern | Crawlable? | Fix |
|---|---|---|
| `<a href="/page">Text</a>` | Yes | This is the correct form |
| `<a href="https://example.com/page">Text</a>` | Yes | Absolute URLs work too |
| `<a onclick="navigate()">Text</a>` (no `href`) | No | Add `href="/page"` |
| `<a href="javascript:void(0)">Text</a>` | No | Replace with a real `href` |
| `<a href="#" onclick="navigate()">` | No | Replace `#` with the destination URL |
| `<button onclick="router.push('/page')">` | No | Use `<a href="/page">` styled as a button |
| `<div data-href="/page" onclick=...>` | No | Use a real anchor tag |
| `<a href="/page" rel="nofollow">` | Crawlable, but not followed for ranking | Remove `nofollow` for trusted internal links |
| Next.js `<Link href="/page">` | Yes | Renders to a real `<a href>`; verify in View Source |
| React Router `<Link to="/page">` | Yes | Renders to a real `<a href>` |
| Link revealed by JS only after a click | No | Render the link in the initial HTML |
| Link inside a closed accordion / tab | Yes, if in the HTML, even if visually hidden | Make sure the HTML contains the link |
Three common reasons Googlebot ignores something that looks like a link in a browser:
1. The element isn't an `<a>` tag at all. `<button>` and `<div>` elements are never treated as links, no matter how they route.
2. The `href` doesn't contain a real URL. `href="#"`, `href="javascript:void(0)"`, and a missing `href` all fall here; Google has documented these as non-crawlable.
3. The URL lives in `data-*` attributes or JS state. Elements that store the destination outside `href` (and hijack click events) don't produce a URL Google can extract.

```html
<!-- Crawlable: real anchor with a URL in href -->
<a href="/products/iphone-15">iPhone 15</a>

<!-- Crawlable: opens in new tab, still followed -->
<a href="/about" target="_blank" rel="noopener">About us</a>

<!-- NOT crawlable -->
<a onclick="goTo('/products')">Products</a>
<a href="javascript:void(0)" onclick="goTo('/products')">Products</a>
<button onclick="router.push('/products')">Products</button>
```

Next.js:

```jsx
import Link from 'next/link';

// Renders to: <a href="/products/iphone-15">iPhone 15</a>
<Link href="/products/iphone-15">iPhone 15</Link>

// Need a link that looks like a button but is crawlable?
// Use Link styled as a button, never <button>, for navigation.
<Link href="/signup" className="btn-primary">Sign up</Link>
```

React Router:

```jsx
import { Link } from 'react-router-dom';

// Renders to: <a href="/products/iphone-15">iPhone 15</a>
<Link to="/products/iphone-15">iPhone 15</Link>
```

Nuxt:

```html
<!-- Renders to: <a href="/products/iphone-15">iPhone 15</a> -->
<NuxtLink to="/products/iphone-15">iPhone 15</NuxtLink>
```

All of these frameworks output real `<a href>` tags in the rendered HTML, but only if you render on the server. A pure client-side rendered SPA can ship an empty HTML shell where the links appear only after JS runs, which defeats the point. Always verify with View Source.
**Using `<button>` for navigation**
Bad: `<button onClick={() => router.push('/pricing')}>Pricing</button>`. Crawlers ignore it.
Good: Use `<Link href="/pricing">` styled as a button. `<button>` is for actions (submit, toggle), not navigation.

**`href="#"` with an onClick handler**
Bad: `<a href="#" onClick={handleClick}>See pricing</a>`. The `#` is not a real URL.
Good: Put the destination in `href`. If you need to intercept the click, call `event.preventDefault()` in the handler; Googlebot still extracts the `href`.
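The "real href + preventDefault" pattern can be sketched without a browser: the handler cancels the default navigation and routes client-side, while the `href` stays in the markup for crawlers. A minimal sketch with a simulated click event (the handler factory and router callback are hypothetical names, not a specific library API):

```javascript
// Sketch: the href lives in the HTML for crawlers; this handler
// takes over for real users. clientSideNavigate is a stand-in for
// your router, e.g. a router.push-style function.
function makeClickHandler(clientSideNavigate) {
  return function onClick(event) {
    event.preventDefault();                        // cancel the full page load
    clientSideNavigate(event.currentTarget.href);  // route in JS instead
  };
}

// Simulate clicking <a href="/pricing">See pricing</a>:
let prevented = false;
const visited = [];
makeClickHandler(url => visited.push(url))({
  preventDefault: () => { prevented = true; },
  currentTarget: { href: '/pricing' },
});

console.log(prevented, visited); // true [ '/pricing' ]
```

The crawler never runs the handler; it only reads the `href` attribute from the HTML, which still contains the real destination.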
**"Load more" buttons with no paginated URLs**
Bad: Page 1 loads 20 items; the "Load more" button appends 20 more via JS. Pages 2+ have no URLs, so Google never discovers items 21+.
Good: Provide real paginated URLs (`/blog?page=2`) with `<a href>` links to each, even if you also offer JS-based progressive loading.
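One way to sketch the fix: generate the paginated anchors server-side so every slice of items has a crawlable URL. (Hypothetical helper; `/blog?page=N` is an assumed server-rendered route.)

```javascript
// Emit real <a href> pagination links alongside any "Load more" UI.
// Each URL must be a server-rendered page listing that slice of items.
function paginationLinks(totalItems, perPage, basePath = '/blog') {
  const pages = Math.ceil(totalItems / perPage);
  return Array.from({ length: pages }, (_, i) =>
    `<a href="${basePath}?page=${i + 1}">${i + 1}</a>`
  );
}

console.log(paginationLinks(45, 20));
// [
//   '<a href="/blog?page=1">1</a>',
//   '<a href="/blog?page=2">2</a>',
//   '<a href="/blog?page=3">3</a>'
// ]
```

With 45 items at 20 per page, items 41-45 live on page 3, and the crawler reaches them through the `page=3` anchor even if real users only ever click "Load more".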
**Mega menus rendered only on hover**
Bad: Sub-category links live in a mega menu that only renders into the DOM when the user hovers. Googlebot doesn't hover.
Good: Render the menu HTML at all times and use CSS to hide or show it. The links exist in the HTML regardless of interaction.
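A sketch of the difference (hypothetical render function, routes, and class names): the sub-category links are emitted into the markup unconditionally, and hover only toggles a CSS class.

```javascript
// The links exist in the HTML whether or not the menu is "open".
// Visibility is pure CSS, e.g.:
//   .mega-menu { display: none }
//   .mega-menu.open { display: block }
function renderMegaMenu(categories, open = false) {
  const items = categories
    .map(c => `<li><a href="/category/${c}">${c}</a></li>`)
    .join('');
  return `<ul class="mega-menu${open ? ' open' : ''}">${items}</ul>`;
}

const closedMenu = renderMegaMenu(['phones', 'tablets']);
console.log(closedMenu.includes('href="/category/phones"')); // true even while closed
```

Because the anchors are in the initial markup, a crawler that never hovers still sees every sub-category URL.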
**URLs stored in `data-*` attributes**
Bad: `<div data-href="/page" onClick={navigate}>`. Google does not extract URLs from `data-*` attributes.
Good: Use a real `<a href="/page">`.
To verify a page, open View Source: if a link appears as `<a href="/page">`, it's crawlable. You can also list every `href` in one command:

```shell
curl https://yoursite.com/ | grep -o 'href="[^"]*"'
```

The list of `href` values is exactly what crawlers see.

**Doesn't Google execute JavaScript and find JS-only links?**
Google renders JavaScript and may discover links injected by JS, but its official documentation says only `<a>` tags with `href` attributes are guaranteed to be followed. JS-only links are best-effort, often delayed, and invisible to other crawlers (Bingbot, GPTBot, ClaudeBot). Use real anchor tags.
**Do Next.js and React Router `<Link>` components produce crawlable links?**
Yes: both render to a real `<a href>` in the HTML. The catch is that they only do so when the page is server-rendered or pre-rendered. In a fully client-side app the anchor only appears after hydration, so verify with View Source.

**Should I use `<button>` or `<a>` for a CTA that looks like a button?**
If clicking it changes the URL, use `<a href>` styled as a button. If clicking it triggers an action that doesn't change the URL (open a modal, submit a form), use `<button>`. The visual style is independent of the semantic element.

**Does `rel="nofollow"` make a link uncrawlable?**
No. `nofollow` tells Google not to pass ranking signals through the link, but the link is still crawlable and the destination URL is still discovered. For internal links between your own pages, do not add `nofollow`.

**What about `href="#section-id"` for in-page anchors?**
Crawlable and useful: Google indexes them as fragment links to the same page. They don't create new indexable pages, but they help with featured snippets and on-page navigation.

**Is `target="_blank"` a problem for SEO?**
No. Google follows the link normally regardless of `target`. Add `rel="noopener"` for security, not SEO.

**Will Google find links that only appear after client-side rendering?**
Sometimes: Google renders JS and may discover the links. But AI crawlers like GPTBot and ClaudeBot don't render JS, and even Google's rendering is delayed. Move to SSR or SSG. See our CSR and SEO guide.

**How many links can one page have?**
Google retired the old "100 links per page" guideline years ago. Pages can have hundreds of crawlable links without penalty. The practical concern is link equity dilution (each link splits some authority), not crawlability.
Crawlable links come down to one rule: a real `<a>` tag, a real `href`, a real URL, present in the initial HTML. Anything else (buttons that navigate, onClick handlers, `javascript:void(0)`, JS-only mega menus) is invisible to crawlers. View Source on your most important pages, pipe `curl` through `grep href`, and verify that every URL you want indexed is reachable through real anchor tags. Without crawlable links, the rest of your SEO doesn't matter; see also our site crawlability guide.