The other day I was reading an article about Ford’s efforts, in conjunction with MIT, to embed wireless applications into cars. It’s really exciting stuff. One scenario described has a car communicating with cars farther up the road to prepare the driver and vehicle for impending road conditions: setting the ABS for wet pavement, for example, when the wipers on a car a mile ahead turn on. This got me thinking about the volume and responsiveness of DNS queries necessary for this to really happen. To address these issues, it’s worth a look under the hood to see how DNS works.
All internet applications depend on the DNS name server hierarchy for name resolution, and two characteristics of DNS resolution matter most: name server availability and speed of resolution. Availability hinges on the redundancy of name servers within the DNS zone. Fault tolerance is mandatory, so any authoritative DNS name server failure is backed up by peer servers within that zone. The main factor affecting performance (speed of resolution) is the proximity, in network latency terms, between the application or user and the name server node. The more geographically dispersed the nodes, the lower the resolution latency.

It gets interesting when you think about doing queries while driving down the highway at 70 MPH. So, how do these queries get handled? The principal methods are unicast and anycast routing. Unicast is the traditional model, setting up a 1:1 relationship between a single client and a single server. Anycast is a newer model, setting up a 1:many relationship between the IP address and the name server. In other words, the same name server IP address can simultaneously exist at multiple points, and routing delivers each query to the nearest instance. In general, unicast is simpler to implement, while anycast can be higher-performing with lower latency, particularly for connectionless interactions like DNS over UDP. This makes me think I’m going to need anycast under my DNS hood to support my next generation intelligent vehicle.
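To make the latency point concrete, here is a minimal sketch (Python, standard library only) that times a single UDP DNS query. The resolver addresses used (Google Public DNS at 8.8.8.8 and 8.8.4.4, both advertised via anycast) and the query name are my own illustrative choices, not anything from the Ford/MIT scenario.

# Hypothetical sketch: time one UDP DNS query using only Python's standard
# library. Resolver IPs (Google Public DNS, reached via anycast) and the
# query name are illustrative choices, not from the article.
import socket
import struct
import time

def build_query(name, txn_id=0x1234):
    # 12-byte DNS header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, root terminator,
    # then QTYPE=A (1) and QCLASS=IN (1)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def time_query(resolver_ip, name="example.com"):
    # Returns round-trip time in milliseconds for a single query.
    packet = build_query(name)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        start = time.perf_counter()
        sock.sendto(packet, (resolver_ip, 53))
        sock.recv(512)  # classic UDP DNS message size limit
        return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for ip in ("8.8.8.8", "8.8.4.4"):  # both advertised via anycast
        print("%s: %.1f ms" % (ip, time_query(ip)))

Run from different vantage points, the same anycast address will generally answer from a different, nearby instance, which is exactly the property a fast-moving vehicle would lean on.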
John Burke is a Principal Research Analyst with Nemertes Research, where he conducts primary research, develops cost models, delivers strategic seminars, advises clients, and writes thought-leadership pieces across a wide variety of topics. John’s main research focus areas are cloud computing, virtualization, application delivery networking, SOA, and SaaS. His other areas of expertise are information stewardship (including information protection, information lifecycle management, business continuity planning, compliance, and data quality management) and storage technologies.
As an established speaker, John has appeared at Interop, Network World IT Roadmap, and TechTarget events, as well as at private events for Cisco, AT&T, and others.
As a research analyst, John draws on his past experience as a practitioner and director of IT to better understand the needs of IT executives and the challenges facing vendors trying to sell to them. His career began at The Johns Hopkins University, where he supported the engineering faculty in its use of computers in research and teaching. He moved on to departmental management as well as systems and network administration at The College of St. Catherine, in St. Paul, MN, and then to directing staff in voice, data, desktop and systems management at the University of St. Thomas, also in St. Paul. He has broad and deep experience in computing, communications, and IT management.
John holds a bachelor of science degree in electrical engineering and a master’s degree in the history of science, both from The Johns Hopkins University.
He devotes his spare time to family, baking, gardening, bird watching, and wild mushroom hunting.