Network Round Trip Time (RTT) is the duration for a small packet to travel from a user's device to your server and back. It is the physical-distance component of every network operation — the speed-of-light tax you pay before any byte of HTML, image, or script can begin transferring. RTT is the network half of TTFB; server latency is the other half.
Think of RTT like a game of catch with a friend. The throw is fast and the catch is fast, but if your friend is across town rather than across the room, the round trip simply takes time, however strong the throw. RTT is that distance, paid on every connection.
Light travels through fiber at roughly two-thirds of its vacuum speed — about 200,000 km/s. That number is the unbreakable floor. The actual best-case RTT between two points is:
Theoretical minimum RTT = (2 × distance) / 200,000 km/s
London ↔ New York (~5,500 km): ~55 ms minimum
New York ↔ Sydney (~16,000 km): ~160 ms minimum
Tokyo ↔ Frankfurt (~9,500 km): ~95 ms minimum
Same city (~50 km): ~0.5 ms minimum

Real-world RTT is typically 1.5× to 2.5× the theoretical floor due to routing, congestion, peering, and last-mile networks (especially mobile and rural connections). London ↔ New York "in real life" is usually 80–120 ms.
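The formula is easy to turn into a quick calculator. A minimal Python sketch; the distances are the approximate figures used above:

```python
FIBER_SPEED_KM_S = 200_000  # light in fiber: roughly 2/3 of vacuum speed

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum RTT: the signal covers the distance twice."""
    return 2 * distance_km * 1000 / FIBER_SPEED_KM_S  # seconds -> ms

def realistic_rtt_ms(distance_km: float) -> tuple[float, float]:
    """Real-world range: typically 1.5x to 2.5x the theoretical floor."""
    floor = min_rtt_ms(distance_km)
    return (1.5 * floor, 2.5 * floor)

print(min_rtt_ms(5_500))        # London <-> New York floor: 55.0 ms
print(realistic_rtt_ms(5_500))  # (82.5, 137.5) -- matches the 80-120 ms observation
```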
The single most effective RTT reduction is to put your content physically closer to users. A CDN doesn't make data move faster; it makes the data start closer. From the user's perspective, a 50 ms RTT to an edge node beats a 200 ms RTT to a single regional origin every time.
A single HTTPS connection requires several round trips before any application data flows:
| Protocol | TCP Handshake | TLS Handshake | Total Setup (cold) | At 100 ms RTT |
|---|---|---|---|---|
| HTTP/1.1 + TLS 1.2 | 1 RTT | 2 RTTs | 3 RTTs | ~300 ms |
| HTTP/2 + TLS 1.3 | 1 RTT | 1 RTT | 2 RTTs | ~200 ms |
| HTTP/3 (QUIC, TLS 1.3) | none (QUIC) | 1 RTT (combined) | 1 RTT | ~100 ms |
| HTTP/3 (0-RTT resumption) | none (QUIC) | 0 RTT (resumed session) | 0 RTTs | ~0 ms |
Bottom line: Upgrading from HTTP/1.1 + TLS 1.2 to HTTP/3 with TLS 1.3 saves two round trips on every cold connection — a 200 ms latency gift for users at 100 ms RTT, collected on every visit and every new origin.
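The table above reduces to simple arithmetic. A sketch that computes the cold-connection setup cost at any RTT, using the round-trip counts from the table:

```python
# Round trips of setup before any application data flows (cold connection)
SETUP_RTTS = {
    "HTTP/1.1 + TLS 1.2": 3,      # 1 RTT TCP + 2 RTTs TLS
    "HTTP/2 + TLS 1.3": 2,        # 1 RTT TCP + 1 RTT TLS
    "HTTP/3 (QUIC, TLS 1.3)": 1,  # transport and TLS handshakes combined
    "HTTP/3 (0-RTT resumption)": 0,
}

def setup_ms(protocol: str, rtt_ms: float) -> float:
    """Connection-setup latency paid before the first byte of the response."""
    return SETUP_RTTS[protocol] * rtt_ms

for proto in SETUP_RTTS:
    print(f"{proto}: {setup_ms(proto, 100):.0f} ms at 100 ms RTT")
```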
Running `ping example.com` from different locations gives you a quick RTT baseline. Use a service like a global ping checker to measure from multiple regions at once.

Use a CDN. This is the single biggest RTT reduction: a CDN that caches HTML and assets at points of presence near every user reduces the effective distance from user to content to a few hundred kilometers — typically 5–30 ms RTT instead of 100–200 ms.
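If you want a scripted baseline rather than ping (which some hosts block), timing a TCP connect approximates one network round trip. A minimal sketch; the target host in the usage comment is a placeholder:

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Approximate RTT by timing the TCP handshake (one round trip).

    Includes OS and stack overhead, so it slightly overstates pure
    network RTT; take the minimum of several runs for a stable figure.
    """
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=timeout)
    elapsed_ms = (time.perf_counter() - start) * 1000
    sock.close()
    return elapsed_ms

# Usage (placeholder host):
# print(min(tcp_connect_rtt_ms("example.com") for _ in range(5)))
```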
Fix: Use any reputable CDN (most have free or low-cost tiers). Cache HTML at the edge for content that's the same for all users, and cache assets aggressively with content-hashed filenames.
HTTP/3 combines TCP and TLS into a single 1-RTT handshake (or even 0-RTT for resumed sessions), saving 100–200 ms on every cold connection at typical inter-regional RTTs.
Fix: Most CDNs and modern web servers support HTTP/3 with a single config toggle. Verify the browser is using it via DevTools → Network → Protocol column (should say h3 for first-party requests).
Tells the browser to start the TCP+TLS handshake to a third-party origin before the parser would otherwise discover it — eliminating the connection-setup RTT from the critical path.
<link rel="preconnect" href="https://fonts.example.com">
<link rel="preconnect" href="https://api.example.com">
<link rel="dns-prefetch" href="https://analytics.example.com">

Every new origin means a new connection setup: N × RTT of pure handshake before the first byte. A page that loads from 8 origins at 100 ms RTT pays 800 ms of handshake latency in total (some of it in parallel, but rarely all of it).
Fix: Self-host fonts, icons, and small libraries instead of pulling them from third-party CDNs. Audit your origin count: most marketing pages can run with 2–3 origins instead of 8–12.
Without keep-alive, each request opens a new connection — paying full handshake RTT every time. With keep-alive, the connection persists and subsequent requests skip the setup entirely.
Fix: Modern servers enable keep-alive by default. Verify that `Connection: keep-alive` appears in your origin's response headers. Long keep-alive timeouts (60+ seconds) help repeat-resource scenarios.
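Keep-alive is observable from the client side: with Python's stdlib HTTP client, a second request on the same `HTTPConnection` reuses the socket only if the server kept the connection open. A minimal sketch (host and port are placeholders):

```python
import http.client

def connection_reused(host: str, port: int = 80, path: str = "/") -> bool:
    """Issue two GETs over one connection; True if the socket was reused."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("GET", path)
    conn.getresponse().read()  # drain the body so the socket can be reused
    first_socket = conn.sock   # None if the server closed the connection
    conn.request("GET", path)
    conn.getresponse().read()
    reused = first_socket is not None and conn.sock is first_socket
    conn.close()
    return reused
```

Against a keep-alive origin this returns True; against a server that closes after each response it returns False, and the second request silently pays a fresh handshake.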
If 80% of your traffic is in Europe, hosting in us-east-1 means every visitor pays 80–120 ms of distance latency on every request — without a CDN to absorb it.
Fix: Deploy origins in regions that match your audience. Use audience-region analytics in Ahrefs or Semrush, or Google Search Console's "Country" filter, to see where traffic actually comes from.
For repeat visitors, TLS session resumption (or 0-RTT in TLS 1.3 / HTTP/3) lets the browser skip the full handshake on subsequent connections — eliminating that 100–200 ms cost entirely.
Fix: Modern TLS configurations enable resumption by default. Verify with a TLS testing tool or by inspecting the SSL_SESSION_REUSED indicator in server logs.
Inlined critical CSS in <head> needs zero additional connections — bypassing RTT entirely. Even if your CSS file would only need 1 round-trip, inlining saves the full 100+ ms cost for cold visitors.
Fix: Extract above-the-fold CSS (~10–14 KB) and inline it. Async-load the rest. Same approach works for tiny, frequently-needed JavaScript snippets.
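A build step can handle the inlining so it never goes stale. A minimal sketch, assuming a hypothetical `<!-- critical-css -->` placeholder in the HTML template and a separately generated critical-CSS file:

```python
from pathlib import Path

PLACEHOLDER = "<!-- critical-css -->"  # hypothetical marker in the template

def inline_critical_css(template_html: str, critical_css: str) -> str:
    """Swap the placeholder comment for an inline <style> block."""
    return template_html.replace(PLACEHOLDER, f"<style>{critical_css.strip()}</style>")

# Example build step (file names are placeholders):
# html = inline_critical_css(Path("index.html").read_text(),
#                            Path("critical.css").read_text())
```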
What's happening: Your origin is in one data center; users from other continents pay 150–250 ms RTT before any byte arrives.
Fix: Front the origin with a CDN that has edge nodes near every audience region. For dynamic content, move HTML rendering to an edge runtime where possible.
What's happening: Your page loads from 10+ different domains (analytics, ads, fonts, embeds, chat). Each cold origin adds 200–300 ms of handshake at typical RTTs.
Fix: Self-host fonts and icons. Consolidate analytics into one tag manager. Add preconnect hints for unavoidable third-party origins so handshakes happen in parallel with HTML parsing.
What's happening: The site is on HTTP/1.1 + TLS 1.2 — the slowest possible HTTP combination. Every cold connection pays 3 RTTs.
Fix: Upgrade to HTTP/3 (with HTTP/2 fallback) and TLS 1.3. Most CDNs support all three with a single switch.
What's happening: Mobile carriers add 30–80 ms RTT compared to wired connections, especially on 3G or weaker 4G. Effective RTT for mobile visitors can be 2× the desktop equivalent.
Fix: Mobile-first optimization is the leverage point. Reduce origin count, inline more critical resources, and aggressively cache. Every saved round-trip is worth more on mobile.
| Metric | Good Threshold | How RTT Affects It |
|---|---|---|
| Time to First Byte (TTFB) | ≤ 800 ms | RTT is the network half of TTFB; high RTT directly inflates TTFB. |
| First Contentful Paint (FCP) | ≤ 1.8 s | FCP can't paint until TTFB completes. High RTT delays everything in the chain. |
| Largest Contentful Paint (LCP) | ≤ 2.5 s | Hero images on a separate origin cost full RTT for connection setup. |
| Time to Interactive (TTI) | ≤ 3.8 s | JavaScript downloads later when origins have high RTT, pushing TTI out. |
| Speed Index | ≤ 3.4 s | Visual progress can't begin until HTML arrives — RTT delays the curve start. |
Bottom line: RTT sets a hard floor for every other metric. No amount of code optimization can rescue a page where the user is 200 ms away from a single origin and there's no CDN.
For users near your origin, target ≤ 50 ms. Globally, with a properly configured CDN, you should see < 100 ms for the vast majority of users. Anything over 150 ms usually means the user is a continent away from your nearest edge node — typically fixable by adding more PoPs.
RTT is the network travel time — bounded by physics. Server latency is the time your server spends processing the request after it arrives — bounded by your code and infrastructure. Both add into TTFB; the fix for each is different (CDN for RTT; caching/queries for server latency).
Almost always yes — for users near a CDN edge node. A CDN's edge nodes are deployed in major data-center hubs worldwide, so most users are within 200 km of one (~5 ms RTT). Users in remote regions with no nearby PoP see less benefit.
HTTP/3 combines TCP and TLS handshakes into a single 1-RTT setup, vs. HTTP/2's 2 RTTs (TCP + TLS 1.3) or 3 RTTs (TCP + TLS 1.2). At 100 ms RTT, that's a 100–200 ms savings on every cold connection. HTTP/3 also handles packet loss more gracefully on lossy networks (mobile, satellite).
Indirectly. High RTT inflates TTFB, which inflates LCP, which lowers Core Web Vitals scores, which lowers Google rankings — and AI search systems preferentially cite well-ranked pages. High RTT can also slow AI crawler bots, reducing how often they re-index your content.
The Chrome User Experience Report (CrUX) doesn't directly expose RTT, but Google Search Console's Core Web Vitals report shows TTFB by country — high TTFB in regions far from your origin is almost always RTT-driven. For real-time data, the Network Information API exposes `navigator.connection.rtt` in supporting browsers.
Two times distance divided by the speed of light in fiber (~200,000 km/s). For a user 1,000 km from the nearest server, that's 10 ms minimum. Real-world RTT is typically 1.5–2.5× the theoretical floor due to routing and infrastructure overhead.
Network RTT is the physics-bound foundation of every other performance metric. You can't beat physics — but you can avoid paying for it. Push content to edge nodes near every audience region, upgrade to HTTP/3 to cut the handshake count, and consolidate origins to reduce how many handshakes a single page load needs. Most sites that follow those three rules see RTT-driven latency drop by 50–80%.
Run a Greadme deep scan to see your TTFB broken down into network and server components, identify the routes with the worst RTT footprint, and get a prioritized fix list.