Having helped a customer set up VPNs for private connectivity to several large (i.e. Fortune 100) companies lately, I’ve really come to dread seeing how NAT has been abused, to the point that it is carving private islands out of the Internet and breaking everything from routing to DNS to any future protocol enhancements. First, some history.

Back in the day when we started out routing networks for customers, the general practice was to route a whole Class C (a /24 nowadays; this was before CIDR was widely deployed), or maybe 2 or 4 depending on how many IP addresses that customer or department needed. Large companies didn’t have connected networks; each workgroup or department was usually separate, or if they were really big, we’d help them get a Class B network from the InterNIC.

Firewalls weren’t prevalent; these connections were directly on the Internet, or if there was a firewall, it was a proxy type, with the stations having their ports blocked and needing to proxy out for things like FTP or HTTP, if that was even a consideration. It was pretty apparent even then that having a total of about 2 million Class C’s available wasn’t going to service a lot of customers with these types of connections, especially when you look nowadays and see a large ISP like Comcast easily having 2 million subscribers in one larger metro area. So out came CIDR, which saved us from handing out /24s in every case and certainly cut down on handing out IP addresses wastefully.
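To put the numbers in perspective, here is a quick sketch (in Python, using the standard `ipaddress` module; the `192.0.2.0/24` block is just a documentation example, not any real customer's allocation) of why the classful Class C space runs out, and how CIDR lets you carve a /24 into smaller blocks instead of burning a whole one per customer:

```python
import ipaddress

# Class C space covers 192.0.0.0-223.255.255.255 (leading bits 110).
# That leaves 21 network bits, so roughly 2 million classful networks total.
class_c_networks = 2 ** 21
print(class_c_networks)  # 2097152

# With CIDR, one /24 can be split into four /26 blocks, each serving a
# customer who only needs a handful of hosts.
block = ipaddress.ip_network("192.0.2.0/24")
for subnet in block.subnets(new_prefix=26):
    print(subnet, "-", subnet.num_addresses - 2, "usable hosts")
```

Four /26s out of one /24 is exactly the kind of carving that was impossible in the classful world.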

It took a while for people to get the concept of CIDR, and even longer for gear to fully support it properly, but it helped.

In the meantime NAT was created.

At first it was more of a one-to-one translation, so that if some workstations were in an unfortunate range they could be translated to another range of IP addresses that worked better. Then PAT (aka NAPT) came about, which actually let us hide a range of machines behind one IP address on the outside. This was starting to look a lot more like what the current crop of residential routers allows today: hide whole networks behind one public IP address so that we can go back to the model of having one IP address per customer. What finally allowed that was modifying existing protocols that broke with NAT to be more NAPT-friendly, recognizing that something can still be in the middle. IPsec got NAT-T (NAT traversal), and enterprise-class firewalls got ALGs, which allowed deeper inspection into protocols to help along things like FTP, SIP, and H.323, which do weird things like embedding raw IP addresses inside the protocol payload itself rather than just the headers.
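The core NAPT trick is small enough to sketch in a few lines. This is a toy model, not any real router's implementation, and the addresses are made up (RFC 5737/1918 documentation ranges): many inside hosts share one public IP, the router rewrites the source IP:port on the way out, and it keeps a mapping table so replies can be translated back on the way in.

```python
PUBLIC_IP = "203.0.113.1"  # illustrative public address

class Napt:
    """Toy NAPT table: many private IP:port pairs behind one public IP."""

    def __init__(self):
        self.next_port = 40000
        self.out_map = {}  # (inside_ip, inside_port) -> public_port
        self.in_map = {}   # public_port -> (inside_ip, inside_port)

    def outbound(self, src_ip, src_port):
        # Rewrite the source of an outgoing packet, creating a mapping
        # the first time this inside host:port is seen.
        key = (src_ip, src_port)
        if key not in self.out_map:
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return (PUBLIC_IP, self.out_map[key])

    def inbound(self, public_port):
        # Translate a reply back to the inside host, or drop (None) if
        # no mapping exists -- which is why unsolicited inbound breaks.
        return self.in_map.get(public_port)

nat = Napt()
print(nat.outbound("192.168.1.10", 5000))  # ('203.0.113.1', 40000)
print(nat.outbound("192.168.1.11", 5000))  # ('203.0.113.1', 40001)
print(nat.inbound(40000))                  # ('192.168.1.10', 5000)
```

Note that the table only rewrites headers; a protocol like SIP or FTP that writes `192.168.1.10` into its payload sails through untranslated, which is exactly why ALGs have to exist.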

The only reason the Internet could grow at the rate it did was that NAT saved the ISPs from having to deploy huge tracts of IP addressing that weren’t going to be used (do you have 256 devices connected up in your home network? Multiply your home by millions of others in your metro area). Now that we have exhausted IPv4 it seems like they should have done more, but really, at the time nobody envisioned 78% (240 million) of the US population being connected to the Internet, let alone the adoption rates around the globe.

What is broken now.

When I was working with these large enterprise companies, I found they had deployed NAT at all their borders.

They’ve totally isolated themselves from the world: anything going out or coming in has to be NATed, so when we needed to get into their networks over the VPN tunnels it produced all sorts of interesting problems that were never imagined way back when the Internet was first launching.

Since NAT was required even when talking back to us over the VPN tunnel, we determined that things like the Terminal Services (TS) Gateway our customer was trying to deploy would fail because of the NAT requirements. I don’t know of any firewall that has an ALG for TS Gateway, but even if there were one, it most likely wouldn’t have been deployed by this large dinosaur of an enterprise in its IT department.

Thankfully we weren’t doing any VoIP connections for these people, but those would have broken too, as SIP, H.323, and MGCP all require ALGs at the firewall level to deal with NAT. Since we were hidden behind NAT, things would have broken pretty hard getting those types of packets back to them.

Finally, since they NAT everything in and out, any and all external DNS is broken too, so they have to take over DNS deployments for our customers’ machines, further isolating them from the rest of the Internet. Normal DNS entries and domains don’t work in the large enterprise IT environment; custom DNS has to be requisitioned, set up, tested, and deployed at each site, negating any benefit of a single authoritative source (which, when DNSSEC takes off, will be promptly ignored because it can’t be used).

The future.

With the recent widespread rollout of IPv6, it has been so nice to get back to the roots of unique IP addressing for end-to-end communication. Unfortunately, the same mindsets that have isolated the large enterprises off the Internet are pushing IPv6 deployments toward the same NAT-in, NAT-out mentality that matches the IPv4 rollouts they already have.

NAT66 has several drafts going through the IETF process now. I was really hoping we could do away with all the nastiness of NAT, and we certainly will in the general deployment for most people. But I fear the driving force of large enterprise IT will again make things so convoluted and isolated that we will be dealing with NAT issues for quite some time to come.