Anycast explained: how the same IP is in 200+ places at once
When you query 1.1.1.1, dozens of physical servers around the world could answer — and the right one always does. That trick is called anycast, and it underpins modern DNS, CDNs, and DDoS protection.
You query Cloudflare's public DNS server at 1.1.1.1. Your friend in Tokyo queries the same IP. A coworker in Lagos queries it. Each of you talks to a different physical machine, in a different data center, on a different continent. None of you specified which one — yet each got the closest one automatically.
That trick is called anycast, and it's quietly responsible for most of what makes the modern internet feel fast.
The two-line summary
Anycast lets multiple servers share the same IP address, and routes each user's traffic to the topologically nearest one automatically. Best for high-volume, latency-sensitive services that need to be globally available.
Cloudflare DNS, Google's 8.8.8.8, the DNS root nameservers, every major CDN, and most modern DDoS protection — all run on anycast.
How anycast works
The internet routes packets using BGP. Every router builds a map of "for IP range X, send packets toward AS Y." When multiple ASes (or multiple locations within the same AS) announce the same IP range, the routers' map ends up with multiple paths to the same destination.
Routers prefer the shortest path, usually the one that crosses the fewest ASes, or the cheapest under the operator's local policy. So a packet from your home in London to 1.1.1.1 ends up at Cloudflare's London data center, while the same destination IP from your friend in Tokyo ends up at Cloudflare's Tokyo data center.
The packet doesn't carry "which 1.1.1.1" — there's no such thing. The routing fabric handles it implicitly.
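At its core, best-path selection over duplicate announcements can be sketched in a few lines. This toy model uses illustrative PoP names and AS-paths, not real routing data:

```python
# Toy model of BGP best-path selection for an anycast prefix.
# Several PoPs announce the same prefix; each candidate route carries
# an AS-path, and the router prefers the shortest one it hears.

# Candidate routes for the same prefix as seen from a router in London
# (AS numbers and paths are illustrative, not real).
routes = [
    {"pop": "London", "as_path": [13335]},             # direct peering
    {"pop": "Frankfurt", "as_path": [3356, 13335]},    # via one transit AS
    {"pop": "Tokyo", "as_path": [2914, 4713, 13335]},  # long path over transit
]

def best_path(candidates):
    """Pick the route with the shortest AS-path, as BGP does once
    higher-priority attributes (like local preference) tie."""
    return min(candidates, key=lambda r: len(r["as_path"]))

print(best_path(routes)["pop"])  # London: the nearest PoP wins for this viewer
```

A router in Tokyo would hear the same three announcements with the path lengths reversed, so the same `min` lands on the Tokyo PoP.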
This is in contrast to:
- Unicast — one IP, one destination. The default for most internet traffic. Reliable but doesn't scale geographically.
- Multicast — one packet sent to many destinations. Used for some streaming on private networks; rarely on the public internet.
- Anycast — one IP, many destinations, route to the "closest" one. Modern, global-scale, used by infrastructure operators.
Why infrastructure providers love it
Anycast solves four problems at once.
1. Latency
The closest data center is always picked, automatically. No DNS-based geo-routing, no GeoIP lookups in your app — the network layer does it for free. For a service like DNS, where latency adds up to perceptible delay on every page load, anycast is decisive.
When you query 1.1.1.1, the answer typically returns in under 15ms because Cloudflare has a data center in or near your city. A traditional unicast DNS provider would have to choose one geographic region for the IP, and users on the other side of the world would pay 100–200ms of latency.
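A back-of-envelope check makes the gap concrete. Light in fiber covers roughly 200 km per millisecond one way, so round-trip time has a physical floor of twice the distance divided by that speed; the distances below are illustrative:

```python
# Why a nearby anycast PoP matters: RTT has a hard physical floor.
FIBER_KM_PER_MS = 200.0  # ~200 km of fiber per millisecond, one way

def min_rtt_ms(distance_km):
    """Best-case round-trip time over fiber, ignoring queuing and routing."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(min_rtt_ms(50))    # PoP in your city: ~0.5 ms floor
print(min_rtt_ms(6000))  # London to a single Virginia server: ~60 ms floor
```

Real-world numbers are higher than the floor, but the ratio holds: no amount of server tuning closes a gap imposed by distance.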
2. Resilience
If a data center goes down, BGP automatically stops announcing that IP from that location. Routers update their maps. Traffic redirects to the next-closest still-up data center. Users don't notice — the IP didn't change.
Compare to unicast: if 1.1.1.1 were a single server in Virginia, a single power outage there would make the service globally unavailable. With anycast, you'd need every data center in every region to fail simultaneously — far less likely.
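That failover amounts to re-running best-path selection over whichever announcements survive. A toy sketch, with hypothetical PoPs and path lengths:

```python
# Failover sketch: when a PoP stops announcing the prefix, routers
# simply pick the best path among the remaining candidates. The client
# keeps using the same IP; only the chosen PoP changes.

routes = [
    {"pop": "London", "as_path_len": 1},
    {"pop": "Frankfurt", "as_path_len": 2},
    {"pop": "Tokyo", "as_path_len": 3},
]

def best_pop(candidates):
    return min(candidates, key=lambda r: r["as_path_len"])["pop"]

print(best_pop(routes))  # London

# London's announcement is withdrawn (data center outage):
surviving = [r for r in routes if r["pop"] != "London"]
print(best_pop(surviving))  # Frankfurt, the next-closest, takes over
```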
3. DDoS absorption
A flood of attack traffic targeting one anycast IP gets distributed across every data center announcing that IP. A DDoS attack with 1 Tbps of traffic against 1.1.1.1 is split across hundreds of locations, each absorbing a few Gbps — often within their normal capacity.
This is how Cloudflare and similar providers absorb attacks that would have leveled a unicast service. The attacker can't focus the load on one box; the network automatically spreads it.
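The arithmetic is simple division. A sketch with a hypothetical 300-site deployment:

```python
# Anycast DDoS absorption: attack traffic lands wherever each attacking
# source happens to be routed, so a global flood is split across every
# PoP announcing the target prefix.

attack_gbps = 1000   # a 1 Tbps flood
pop_count = 300      # hypothetical number of anycast locations

per_pop_gbps = attack_gbps / pop_count
print(round(per_pop_gbps, 1))  # ~3.3 Gbps per site, a manageable share
```

The split isn't perfectly even in practice (botnets cluster geographically), but the principle stands: the attacker cannot choose to concentrate the load.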
4. Simplicity for the user
You don't need to know about regions. You don't need to pick a server. You don't need DNS-based geo routing. The same IP works from anywhere.
For users, anycast IPs are also easy to remember (1.1.1.1, 8.8.8.8, 9.9.9.9) — the operators picked memorable IPs because they only need one.
What anycast isn't good for
Anycast has tradeoffs. It's specifically good for stateless or very-short-lived connections.
Stateful connections are hard
Anycast routing is set per-packet by the underlying routing fabric. If routes change mid-connection (because a data center went down, or a new path became preferable), packets from the same connection might end up at different physical servers, which won't know about each other's state. Connection breaks.
For TCP, this is largely mitigated by route stability (major operators' routes rarely shift mid-flow) and, at higher layers, by techniques like sticky session cookies or state shared across all anycast PoPs via consistent hashing.
For long-lived persistent state (databases, file uploads, multi-step transactions), anycast alone isn't the right model. You usually pair it with something else: clients hit anycast for routing, then are redirected to a unicast or DNS-routed backend for the actual session.
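One way to share connection state across PoPs is a consistent-hash ring that maps each flow to a stable owner, so whichever PoP receives a packet can locate the state. A minimal sketch, with hypothetical server names:

```python
# Minimal consistent-hashing sketch: map a connection's flow key to a
# stable backend. Every PoP computes the same owner for the same flow,
# so state can be found regardless of where a packet lands.
import hashlib
from bisect import bisect

SERVERS = ["state-a", "state-b", "state-c"]  # hypothetical state stores

def _h(key):
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

# Place each server at several points on a hash ring for an even spread.
RING = sorted((_h(f"{s}#{i}"), s) for s in SERVERS for i in range(64))

def owner(flow_key):
    """Walk clockwise from the flow's hash to the next server point."""
    idx = bisect(RING, (_h(flow_key), "")) % len(RING)
    return RING[idx][1]

flow = "198.51.100.7:51514->203.0.113.1:443"
assert owner(flow) == owner(flow)  # deterministic from any PoP
```

Adding or removing a server moves only the flows in that server's arc of the ring, which is what makes the scheme practical at scale.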
Routing isn't always optimal
BGP picks the shortest AS-path route, not necessarily the geographically closest one. Sometimes a packet from Singapore ends up at a North American data center because of how transit links and peering work. This is rare but happens, especially at smaller anycast operators.
Major providers (Cloudflare, Google, AWS) work hard to peer broadly so anycast paths actually reflect geographic distance. Small or new anycast deployments can have weird routing for months until peering settles.
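The mismatch comes straight from BGP's metric. A toy example, with illustrative numbers, where the shorter AS-path belongs to the geographically distant PoP:

```python
# BGP compares AS-path length, not kilometers. If local peering is
# missing, a nearby PoP can lose to a far one with a shorter path.

candidates = [
    {"pop": "Singapore", "as_path_len": 4, "distance_km": 20},
    {"pop": "San Jose", "as_path_len": 2, "distance_km": 13600},
]

chosen = min(candidates, key=lambda r: r["as_path_len"])
print(chosen["pop"])  # San Jose wins despite being 13,600 km away
```

Fixing this means fixing the inputs: peering directly in Singapore shortens that AS-path, which is exactly what the big operators spend effort on.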
How CDNs use anycast
Content delivery networks like Cloudflare, Fastly, Akamai, and AWS CloudFront use anycast for the edge layer:
- DNS for the CDN-managed hostname is anycast → users hit the CDN's nearest edge.
- The edge's IP is also anycast → the connection itself goes to the nearest physical server.
- Cached content is served from that edge directly.
- Cache misses fall through to unicast origins (the customer's actual server).
The split — anycast at the edge, unicast at the origin — is the modern CDN architecture. It scales globally without forcing the customer to do geographic deployment themselves.
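The hit/miss split at the edge can be sketched in a few lines; the cache, paths, and the origin-fetch stand-in below are all hypothetical:

```python
# Edge/origin split: the anycast edge answers cache hits locally and
# only falls through to the customer's unicast origin on a miss.

cache = {}

def fetch_from_origin(path):
    # Stand-in for a unicast HTTP request to the customer's server.
    return f"<content of {path}>"

def serve(path):
    if path in cache:
        return cache[path], "HIT"
    body = fetch_from_origin(path)  # only misses leave the edge
    cache[path] = body
    return body, "MISS"

print(serve("/logo.png")[1])  # MISS: first request warms this edge's cache
print(serve("/logo.png")[1])  # HIT: served directly from the anycast edge
```

Each PoP keeps its own cache, so a popular object is fetched from the origin roughly once per region rather than once per user.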
Quick FAQ
Are anycast IPs special somehow? Not at the IP level — they look like any other IPv4 or IPv6 address. The "anycastness" is purely a routing-policy decision by whoever announces the IP. Any provider with a /24 of IPv4 (or /48 of IPv6) and BGP capabilities can deploy anycast.
Can I deploy anycast myself? You'd need: an ASN, a /24 of public IP space, and BGP-capable routers in multiple physical locations with transit. It's a real engineering investment — typically only done by ISPs, hosting providers, and large content companies.
Can I tell if a service is anycast? Trace the IP from different locations: run traceroute 1.1.1.1 from both London and Tokyo. Both traces end at the same IP, but the paths to reach it differ.
Does anycast affect privacy? Marginally — your traffic is potentially routed through whichever data center is nearest, which might cross national boundaries. For most users, this is invisible. For users with specific data-residency concerns, it's worth noting.
What about the "DNS root servers"? The thirteen "logical" root nameservers (a-root through m-root) are each implemented as anycast deployments — together totaling ~1,500 physical servers globally. Without anycast, the root system would have collapsed under modern internet load decades ago.
TL;DR
- Anycast = many servers share one IP; routers send each user's traffic to the topologically nearest one.
- Used by Cloudflare DNS, Google DNS, the DNS root, every major CDN, and DDoS protection services.
- Wins: low latency, high availability, DDoS absorption.
- Limits: harder for stateful connections; routing isn't always perfectly geographic.
The next time 1.1.1.1 answers your DNS query in 8 milliseconds, anycast is what made it possible.