If shit is really broken, you should contact the NOC team.
Anchor maintains a few points of presence (POP), one of which is our headquarters in the Sydney CBD.
202.4.224.0/20
Our original allocation. You can think of this as 16x /24s. Most allocations are still made from here, and logical separations tend to be made at /24 or /25 boundaries.
Address:   202.4.224.0          11001010.00000100.1110 0000.00000000
Netmask:   255.255.240.0 = 20   11111111.11111111.1111 0000.00000000
Wildcard:  0.0.15.255           00000000.00000000.0000 1111.11111111
=>
Network:   202.4.224.0/20       11001010.00000100.1110 0000.00000000
HostMin:   202.4.224.1          11001010.00000100.1110 0000.00000001
HostMax:   202.4.239.254        11001010.00000100.1110 1111.11111110
Broadcast: 202.4.239.255        11001010.00000100.1110 1111.11111111
Hosts/Net: 4094                 Class C
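The "16x /24s" figure above can be cross-checked with Python's ipaddress module. A minimal sketch, assuming the allocation is 202.4.224.0/20 as the binary column in the ipcalc output shows:

```python
import ipaddress

# Our original APNIC allocation (the /20 detailed above).
allocation = ipaddress.ip_network("202.4.224.0/20")

# "16x /24s": a /20 splits into 2^(24-20) = 16 /24 subnets.
subnets = list(allocation.subnets(new_prefix=24))
print(len(subnets))             # 16
print(subnets[0], subnets[-1])  # 202.4.224.0/24 202.4.239.0/24

# Usable hosts across the whole /20, matching ipcalc's Hosts/Net figure
# (total addresses minus the network and broadcast addresses).
print(allocation.num_addresses - 2)  # 4094
```

This is the same arithmetic ipcalc performs; it is handy when sanity-checking a proposed logical separation at a /24 or /25 boundary.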
110.173.128.0/19
Our second allocation from APNIC; it's seen relatively little use.
Address:   110.173.128.0        01101110.10101101.100 00000.00000000
Netmask:   255.255.224.0 = 19   11111111.11111111.111 00000.00000000
Wildcard:  0.0.31.255           00000000.00000000.000 11111.11111111
=>
Network:   110.173.128.0/19     01101110.10101101.100 00000.00000000
HostMin:   110.173.128.1        01101110.10101101.100 00000.00000001
HostMax:   110.173.159.254      01101110.10101101.100 11111.11111110
Broadcast: 110.173.159.255      01101110.10101101.100 11111.11111111
Hosts/Net: 8190                 Class A
2407:7800::/32
Mostly in use at LAX1 so far, and the first address space with an actual addressing policy
Anchor's AS number is 18020. This is used for our BGP routing and lets us get on the interwebs.
We have a few POPs inside the "Anchor cloud" (our Autonomous System)
OSPF gets traffic between POPs
BGP gets traffic in and out of the Anchor cloud
Dynamic routing is all about advertising your presence - rather than setting up static routes on every system, you advertise yourself and other systems will figure out how to reach you
Why we need to route
Anchor has multiple connections to the outside world at each POP for redundancy. If a connection fails (eg. a backhoe casualty incident) we still have connectivity. Routing protocols handle the failover for us, resulting in minimal downtime.
Routing also gives us some flexibility in addressing, as we can advertise chunks of address space from each POP where it's in use.
How we make use of BGP
BGP is primarily an exterior routing protocol, between autonomous systems. BGP is also often used inside an AS so that border routers can have a consistent picture of connectivity to the outside world. When used in this way it's called IBGP (Interior or Internal BGP).
The border routers at each POP advertise part of Anchor's address space to the internet, via our upstream providers at each POP. The border routers are referred to as "BGP speakers", and BGP is spoken along the cyan-coloured links in the diagram above.
The process behind this is roughly as follows:
We have address space allocations from APNIC, detailed above
Chunks of this space are allocated for various uses, according to our internal policy
Roughly, "this /24 will be used at that POP"
Internal routers are configured to forward traffic where it will need to go
A corresponding BGP advertisement is added to the border routers at that POP
As the advertisement propagates to the rest of the world, the address space will become reachable and traffic will arrive at the POP.
Note that a given chunk of address space (eg. a /24) can be advertised from multiple POPs. This is not a problem, so long as traffic can be correctly routed to its destination once it's inside the AS.
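The reason this works is longest-prefix matching: once a packet is inside the AS, each router forwards it along the most specific route it knows. A toy sketch in Python (the routes and labels are made up for illustration, not taken from a real router):

```python
import ipaddress

# Toy routing table: most specific matching prefix wins.
# Illustrative entries only -- real tables live on the routers.
routes = {
    ipaddress.ip_network("202.4.224.0/20"): "core (whole allocation)",
    ipaddress.ip_network("202.4.230.0/24"): "link towards another POP",
}

def lookup(dst):
    """Return the route for the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("202.4.230.17"))  # link towards another POP
print(lookup("202.4.224.5"))   # core (whole allocation)
```

So even if two POPs both attract traffic for the same /24 from the internet, the interior routes decide where each packet ultimately lands.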
We already have an interior routing protocol (OSPF), so why do we need to use BGP internally as well? We run IBGP in a limited fashion within our AS, it basically gives border routers options for egressing the AS.
At the time of writing, IBGP is used almost exclusively within each POP, and is not represented on the diagram above.
A non-representative scenario
This is a generic example of why IBGP is useful. We do not actually use IBGP like this, though.
Referring to the diagram above, you can imagine that LAX1 has good connectivity to the USA, and SYD1 has good connectivity to Australia
Let us pretend that the SYD1 border has a packet destined for a US residential broadband customer
Without IBGP, the SYD1 router will use one of its local upstream connections (eg. Vocus) and send the packet out to the internet
With IBGP, the SYD1 router learns that LAX1 has a better path to the US. It will send the packet to LAX1 internally (fibre connection, VPN tunnel, carrier pigeon, whatever), and let the router at LAX1 deal with it
A realistic scenario
We use IBGP between the border routers at each POP. Consider the following example:
We have two upstream providers at SYD1, A and B
We have two border routers at SYD1, bdr1 and bdr2
bdr1 is peering with provider A
bdr2 is peering with provider B
An outbound packet arrives at bdr2 destined for a residential Telstra Bigpond customer
Without IBGP, bdr2 consults its routing table and finds that provider B has the best route to reach Telstra Bigpond, and sends the packet on its way accordingly
With IBGP, bdr2 knows that bdr1 has a better route to Telstra Bigpond. It sends the packet to bdr1 (internal gigabit fibre/copper cross-connect) and lets bdr1 deal with it
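bdr2's decision can be sketched as a toy best-path comparison. The AS names and path lengths below are hypothetical, and real BGP has many more tie-breakers (local-pref, MED, and so on); shortest AS path is enough to show the idea:

```python
# Candidate routes bdr2 holds for the same destination prefix:
# its own eBGP route via provider B, and the route bdr1 advertises
# to it over iBGP. AS paths here are invented for illustration.
candidates = [
    ("provider-B", ["AS-provB", "AS-transit", "AS-telstra"]),  # learned directly
    ("bdr1",       ["AS-provA", "AS-telstra"]),                # learned via iBGP
]

def best_path(routes):
    # Simplified decision process: prefer the shortest AS path.
    return min(routes, key=lambda route: len(route[1]))

next_hop, as_path = best_path(candidates)
print(next_hop)  # bdr1
```

With the iBGP-learned route winning, bdr2 hands the packet across the internal cross-connect to bdr1 rather than sending it out via provider B.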
How we make use of OSPF
We use OSPF to get packets between POPs. The alternative is to maintain static routes at each POP, which has substantial management overheads and would be error-prone.
OSPF routers are always "adjacent" to each other. In an ideal world this would be a direct fibre connection. The harsh reality is that we use VPN tunnels between POPs, as described in the next section. These are the pink-coloured links in the diagram above.
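Under the hood, OSPF is a link-state protocol: every router floods its link costs, then independently runs Dijkstra's algorithm over the resulting graph. A minimal sketch with a made-up topology (SYD1/LAX1 are real POP names; the extra node and all costs are invented for illustration):

```python
import heapq

# Toy link-state database: symmetric links with example OSPF costs.
links = {
    "SYD1": {"LAX1": 10, "SYD2": 1},
    "SYD2": {"SYD1": 1, "LAX1": 20},
    "LAX1": {"SYD1": 10, "SYD2": 20},
}

def shortest_paths(graph, source):
    """Dijkstra: cheapest total cost from source to every other node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neigh, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# From SYD2, reaching LAX1 via SYD1 (1 + 10) beats the direct link (20).
print(shortest_paths(links, "SYD2"))
```

The same calculation runs on every router, which is why all POPs converge on a consistent set of paths without anyone maintaining static routes.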
Within a POP
At the time of writing, we're not doing this but it's on the cards for the future.
FIXME: finish this next
Advertising subnets "up" to our border routers.
Use of routing at each POP
We use tunnels to carry inter-POP traffic. This is fantastic because it allows our privately-addressed networks (RFC1918) to be accessed from anywhere in the Anchor network. On the downside, it means we need to be careful when allocating new privately-addressed networks.
We use OpenVPN and IPsec tunnels depending on the available networking infrastructure at each POP. Use of OSPF between POPs means we can run a full mesh of tunnels if we want to, and everything should keep working in the event of failures.
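Being careful with new privately-addressed networks mostly means never allocating a range that overlaps anything already routed across the tunnels. A small sketch of that check using Python's ipaddress module (the example networks and their labels are hypothetical):

```python
import ipaddress

# Hypothetical RFC1918 ranges already in use somewhere in the Anchor cloud.
existing = [
    ipaddress.ip_network("10.1.0.0/24"),  # eg. management net at one POP
    ipaddress.ip_network("10.2.0.0/24"),  # eg. management net at another
]

def is_free(candidate, allocated):
    """True if the candidate range overlaps nothing already allocated."""
    candidate = ipaddress.ip_network(candidate)
    return not any(candidate.overlaps(net) for net in allocated)

print(is_free("10.3.0.0/24", existing))  # True  -- safe to allocate
print(is_free("10.1.0.0/25", existing))  # False -- clashes with an existing net
```

Because the tunnels stitch every private network into one routing domain, a clash here breaks reachability in a way that static, per-site NAT setups never have to worry about.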
Private addressing is a bit of a mess
Fukken <x> how does it work!?
We should develop a template for all the information you expect to find for each POP. Structure will be something like /PoP/$popname/foo
Connectivity (upstream providers)
Network infrastructure (hardware, how it's deployed, what software it runs, local network architecture)
Naming policy (eg. VLANs in FQDNs)
Local addressing policy? Don't duplicate anything from the global scope
Access and delivery arrangements
Local routing policies (eg. BGP path prefixes, any funky processing that we do)
Resource allocations (power, network ports)
Troubleshooting? Ideally shouldn't be POP-specific