
DNS Working Group
Wednesday, 16 May 2018
2 p.m.

Recursive resolver DNS Working Group

CHAIR: Welcome to the second session of the DNS Working Group at RIPE 76. You are all welcome back. We'll start the session soon; there are a couple of things we'd like to say first.

I'd like to thank the RIPE NCC staff, Eesham and Alistair, who provide support with the chat and the scribing, as well as the stenography people who make it possible for us to understand each other.

Before we go into the content itself, I'd like to make a quick mention. We'll be posting to the list, but we would like to see a small change to the way we select chairs in this Working Group. The three chairs have come to realise that the way things work right now, the first person to raise their hand usually crowds out anyone else who might be thinking about it but needs a little more time, and we all think that's unfair to everyone else. So we are going to propose a modification where we open a nomination period and, once that period is over, we publish all the candidates at the same time. But you'll see this on the list; I just wanted to give you a heads-up.

This session is quite packed so let's get going, first with Colin Petrie.

COLIN PETRIE: Hi everyone. I work for the RIPE NCC, and I am talking today about DNS over TLS at the RIPE meeting. This is basically an announcement from us saying that we have set up DNS-over-TLS resolvers for use at this meeting, and you should use them, or at least play with them, and let us know whether they work or not, or whether something is broken, and get some experience. The reason why we did this: first of all, the DNS Working Group chairs asked us to, and we like to keep them happy; and also it seemed quite fun, so we thought we'd have a shot at it.

What are we doing? We are running Knot Resolver, listening on port 853. The addresses are the same as the existing resolvers that you use anyway on this meeting network; they are just on a different port. When we were choosing the software to use, there were some things we wanted. We wanted DNS over TLS, obviously. We also wanted QName minimisation, aggressive use of the DNSSEC-validated cache, and DNS64 for our NAT64 network. There is a nice matrix on the dnsprivacy.org website that tells you about the features supported by the different resolver software, which is very useful when selecting the software you want to use.
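
As an aside, here is a minimal sketch of what such a query looks like on the wire, using only the Python standard library. The resolver name is a placeholder, not the actual meeting resolver:

    import socket, ssl, struct

    def build_query(qname, qtype=1):
        # Minimal DNS query in wire format: 12-byte header (RD=1), one question.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        name = b"".join(bytes([len(l)]) + l.encode() for l in qname.rstrip(".").split("."))
        return header + name + b"\x00" + struct.pack(">HH", qtype, 1)  # type A, class IN

    def read_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed")
            buf += chunk
        return buf

    ctx = ssl.create_default_context()
    server = "dot.meeting.example"  # placeholder for the meeting resolver
    with socket.create_connection((server, 853)) as raw:
        with ctx.wrap_socket(raw, server_hostname=server) as tls:
            q = build_query("ripe.net")
            tls.sendall(struct.pack(">H", len(q)) + q)  # 2-byte length prefix, as over plain TCP
            (rlen,) = struct.unpack(">H", read_exact(tls, 2))
            print("received %d-byte DNS response over TLS" % len(read_exact(tls, rlen)))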

We first of all evaluated BIND, because we currently use BIND for our main resolvers. It doesn't yet support QName minimisation, although I hear that's coming soon. The other trouble is that it doesn't have support for TLS itself. You can work around this by running a TLS proxy in front of it. The problem we had, though, was that that hides the source address of the incoming query from BIND, so it can't apply an ACL to decide who to do DNS64 for. You can work around that with address rewriting or source routing and running multiple proxies, but that was going to make it more complicated, so we wanted to find a simpler solution.

We also evaluated Unbound, which actually supported all the features that we wanted. The main problem was that DNS64 is a global flag, and there was no ability to only do DNS64 for certain IP addresses. Again, we could run two different instances of it and do proxying and source address routing and things like that, but again, we didn't want the complexity.

We then went to look at Knot Resolver. It was basically exactly the same: it supported all the features but didn't have ACL support for DNS64. But it turned out that this was quite easy to implement, because the DNS64 module in Knot Resolver is written in Lua, and it was just a few lines of Lua to add in source address matching code to only do DNS64 for certain addresses.
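
For illustration, here is a sketch of that ACL logic, in Python rather than the Lua of the actual patch. The client network is a made-up example; the prefix is the well-known one from RFC 6052:

    import ipaddress

    NAT64_CLIENTS = ipaddress.ip_network("2001:db8:64::/48")   # hypothetical NAT64 client range
    DNS64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")       # well-known prefix, RFC 6052

    def synthesize_aaaa(a_record):
        # Embed the IPv4 address in the low 32 bits of the DNS64 prefix.
        v4 = int(ipaddress.IPv4Address(a_record))
        return str(ipaddress.IPv6Address(int(DNS64_PREFIX.network_address) | v4))

    def maybe_dns64(client_addr, a_record):
        # The ACL: only synthesize AAAA records for clients on the NAT64 network.
        if ipaddress.ip_address(client_addr) in NAT64_CLIENTS:
            return synthesize_aaaa(a_record)
        return None

    print(maybe_dns64("2001:db8:64::1", "193.0.6.139"))  # -> 64:ff9b::c100:68b
    print(maybe_dns64("2001:db8:1::1", "193.0.6.139"))   # -> None (main network)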

So we went with that.

We tested it and added it to the load balancer. We were testing it using kdig at first ‑‑

AUDIENCE SPEAKER: I was wondering why you need the ACL support for DNS64?

COLIN PETRIE: Well, we want to run one set of resolvers but only perform DNS64 for queries that are coming from people on the NAT64 network. We don't do DNS64 for the people on the main network.

AUDIENCE SPEAKER: Otherwise they would be trying to use the NAT64 where there is no NAT64, plus it's unnecessary because they have IPv4.

COLIN PETRIE: Yeah. So, we were testing it and it seems to work. I have used it as the upstream forwarder for Stubby for some time, and it seems to work fine. You can also manually test it using kdig. Now it's up to other people to play with it and test it; the details and all the technical information are up on the meeting website. Play with it and see what happens.

While we were doing that, we found some interesting things about QName minimisation. We found that Knot Resolver stops minimising whenever it gets an authoritative answer; it only keeps minimising while it is following referrals. We noticed this affects country codes where the registry operates second-level domains and the TLD and the SLD are on the same authoritative name servers. So for some affected country codes, you were basically sending the full QName towards the registry. We had a look at that and found there is a step in the QName minimisation algorithm where you can take the NS records from an authoritative answer and continue to minimise, and Knot Resolver doesn't do that. But I found it was actually quite easy to write a patch for Knot Resolver to support that, and I submitted it upstream. It's running on the resolvers that we have here. Maybe it causes problems, so we'd love everyone to test it and let us know whether it works. Especially if it works well, hopefully we can get it accepted upstream.
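
To make the algorithm concrete, here is a small sketch of the label-by-label walk. A resolver that only minimises across referrals stops this walk at the first authoritative answer (for example at co.uk when it is served by the same servers as uk), while the behaviour described above keeps walking by also using NS records from authoritative answers:

    def minimised_queries(qname):
        # Yield the successively longer names a minimising resolver asks for,
        # one label at a time, instead of revealing the full QName at once.
        labels = qname.rstrip(".").split(".")
        for i in range(len(labels) - 1, -1, -1):
            yield ".".join(labels[i:]) + "."

    print(list(minimised_queries("www.example.co.uk")))
    # ['uk.', 'co.uk.', 'example.co.uk.', 'www.example.co.uk.']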

A final note: recently, Android announced that the latest developer preview of Android will use DNS over TLS by default if a network's DNS server supports it. That should now be true on our network, because the addresses that you are using for the existing resolvers also have the TLS resolvers on the same IP, just on a different port. We're wondering if anyone has a device running the latest developer preview of Android; if so, would we be able to test that? It would be quite interesting to see if that works by default.

So that was it. Basically just a little update to let you know that these resolvers are available. Please use them and play with them.

JOAO DAMAS: Thank you Colin.

AUDIENCE SPEAKER: This is Ondřej Surý. Although I am very happy you noticed Knot Resolver, I would also tell you that the QName minimisation in BIND is almost ready, and it will be ready by the next RIPE meeting, that I'm sure of. We are also working on DNS over TLS, but it might take more time because of the network model; it's basically quite hard to staple TLS onto it, but it's always a work in progress.

COLIN PETRIE: Indeed, we would have liked to be able to use BIND, just because we were already using BIND, but the features weren't quite there yet. So... if they then come along, would we then move back to BIND? I'm not sure. We'd decide that at the time.

AUDIENCE SPEAKER: Petr, from CZ.NIC. First of all, thank you. If the QName minimisation has some weird behaviour, please talk to me if you find a problem; we are more than interested in feedback, because the behaviour you pointed out was there kind of intentionally, so we are eager to hear feedback. Please.

COLIN PETRIE: That's the thing. I wrote the patch and we can't see anything wrong with it, but we don't have enough users yet. If someone can spot a problem with it, that would be great.

AUDIENCE SPEAKER: Ondřej Surý: Actually, talk to all of us about the QName minimisation.

AUDIENCE SPEAKER: A small quick question. Have you also upstreamed the DNS64 ACL support? Or is it just a hack?

COLIN PETRIE: It was just a hack. I should probably turn it into better code and then submit it.

JOAO DAMAS: Okay. Next up...

ANAND BUDDHEV: Hi. Good afternoon. I am Anand Buddhev from the RIPE NCC, and I'm going to be presenting a short update on the RIPE NCC's DNS services: what we have been up to, and upcoming developments.

I'll start by talking about K-root. We have been expanding the footprint of K-root for some time now, and this process continues. Since the last RIPE meeting we have added four new sites, and the current count of instances is 61. Of note is that we have a new server in Manama, in Bahrain; we have one here in France, in Lyon; we have a new one in Dushanbe, in Tajikistan; and the last one we added was in Panama City. The node that we have in Dushanbe was actually funded by the RIPE NCC Community Projects Fund, and that is an underserved region, so we're quite pleased that we have some presence there as well. And our friends at LACNIC have sponsored the deployment in Panama City, so thank you, LACNIC.

K-root in France: we have two instances of K-root in France. The first one we deployed was actually at France-IX, our hosts, and that's in Paris. More recently we deployed a second node in Lyon, hosted by ED X. The graphic on the screen shows the combined query rate coming to these two servers; it peaks at about 4,000 queries per second, which is 5% of the total K-root traffic globally, so these nodes are serving a substantial amount of traffic. And you can see their footprint: mostly they are serving France, but also neighbouring European countries and then a smattering of other countries. This graphic comes from the RIPE Atlas project, where we have a map of the root name servers as seen by each Atlas probe. Naturally we have some in Africa, because some of the transit providers of these K-root hosts also provide service to African countries.

We also continue to improve the capacity of K-root. We are upgrading our core sites from 1G to 10G ports, and just last week we upgraded one more site to 10G; this process continues. Of note also is that the RIPE NCC Board has approved a proposal from us for a new site at 100G, and the planning and deployment of this is going to happen shortly after RIPE 76.

The idea behind doing this is for us to gain more experience and help us to plan deployments of more such sites in 2019. It will help us figure out the budget and how best to deploy sites at such large scale.

We have also been contributing data for research purposes. Earlier this year, DITL 2018 took place. This is where operators of various DNS servers collect PCAP data and submit it to DNS-OARC, and this data is available to researchers for looking at queries and working with them. From K-root, and also from our single AS112 instance in Amsterdam, we uploaded all the PCAPs for a period of 50 hours. Researchers who want to look at this data can do so by signing an agreement with DNS-OARC.

The root zone KSK rollover that is happening this year also requires some data. ICANN asked us to submit some data, particularly to examine the trust anchor signalling that is defined in RFC 8145. We submit data every hour from all the K-root instances to ICANN; it helps them plan and figure out how best to do this KSK rollover.

Another thing I'd like to point out, it's just a very small win: we do zone transfers between various DNS servers, and we have been pushing to do more and more of this over just IPv6. The RIPE NCC provides secondary DNS for a number of organisations, including the other RIRs. At the moment, with APNIC, ARIN, LACNIC, ICANN and ISC's secondary name service, we exclusively use IPv6 for zone transfers, so we have completely abandoned IPv4 there. With AFRINIC, we still have IPv4 as a backup, but very soon they are going to deploy some new infrastructure and we should be able to switch to just IPv6 with them. Small wins, but we're getting there.

We also still have our old DNSSEC signers. They are running, but we need to replace them; I mentioned this at the previous RIPE meeting. The feedback from the community about a non-HSM-based solution was actually quite positive, so we are continuing to look in this direction, and we are evaluating Knot DNS, PowerDNS and Secure64's own new product, which is also based on Knot DNS, as possible replacements. This work continues and we hope to have done the transition by the end of this year.

We are using Zonemaster. This is a joint project between the French registry and the Swedish registry, and it does pre-delegation checks on all domain objects that are created or updated in the RIPE Database. That's been working quite well for us. There is also a GUI that ships with this project, which we have available, that lets users do pre- and post-delegation checks. However, we would like to integrate this functionality into RIPEstat, so that there is only a single place for users to go when they wish to do their checks. So in the next two months after this meeting, we are going to move all the GUI functionality into RIPEstat, and then we won't be using the GUI code that ships with Zonemaster.

And that was it from me. Any questions?

JOAO DAMAS: Just a quick question. The 100 gig node that you got board approval for: why did you have to go through that? Was it an exceptional expense, or...?

ANAND BUDDHEV: Yes, at 100 gig the equipment is going to cost a bit more than what we would normally spend, so we needed to get board approval. And if things go well, then we would need more budget in the upcoming years, so yeah...

JOAO DAMAS: So it was a one-off because of the divergence from the budget.

Anyone else? Thanks.

(Applause)

Next up, Vicky Risk.

VICKY RISK: Hi. I am Vicky Risk from ISC. I am going to talk about the results of a survey that I ran over the past month about DNS privacy.

First of all, I may sound a little bit defensive, but I'm well aware that in the Internet community, DNS privacy is regarded as a must-have. And I see Stefan in the front row there. I was at the IETF meeting where the assembled engineers basically agreed that pervasive monitoring on the Internet could be regarded as an attack, and it was the most consensus I have ever seen at the IETF on any point. I think also the folks who have been working on the Internet standards for their whole careers feel some responsibility that the Internet seems to have evolved into a surveillance tool, and there is a strong feeling that we must fix this.

However, I worry that nobody has really asked the operators if they are interested in deploying this. I'm not convinced that they are seeing a tremendous amount of user demand. I know that typically they have to evaluate the business benefit and consider the operational cost, and I don't know if they are convinced that this is even an important problem.

So, I really just want to know if we put the effort into building the features and the tools to support DNS privacy, will they deploy it? And if they won't deploy it, then we are needlessly complicating the protocols in the software.

So, I created a survey and posted it on March 27th. I shut down the survey on May 4th in order to produce this presentation. I advertised it first on social media. The advertisement said: do you care about DNS privacy? Do you think it is another example of over-engineering on the Internet? Please consider answering this survey. I advertised that on Twitter, LinkedIn and Facebook, and then I also sent an e-mail to the RIPE DNS Working Group mailing list. I was a little concerned that the main people who would respond via these mechanisms were people who were also privacy advocates. So I also put a link on ISC's website where you download BIND, and said: if you are downloading our software and can't give us money, why don't you answer this survey.

I asked a couple of demographic questions. These are the topics I covered; I tried to keep it short. Minimal demographic questions. A question about the importance of user privacy to the organisation and its impact. A question about the deployment status of QName minimisation. A couple of questions about encrypting DNS and their concerns. Whether they would encourage their users to use one of the publicly available DNS privacy services. And then I asked whether the respondent expected to be involved in implementing GDPR for their organisation, and if they responded affirmatively, I took them down a path with some further questions.

So, who responded?
The first demographic question asked: what is your role in the DNS at your organisation? A surprisingly large number of the people who responded appeared to be individuals. 23% of the respondents identified themselves as an individual consumer or Internet user, and there was another category where I asked people to write in what their role was; their answers were hobbyist, consultant, IT engineer, small business person. So I figured that those people might be privacy advocates and probably were not responsible for managing a service that affected a lot of other users. So, throughout my analysis, I frequently looked to see if their responses were significantly different from those of the other folks.

I am very proud that we had a pretty wide geographic distribution. I know this is an unreadable chart, but I like it anyway. The pink area shows a lot of folks from the US, but we also had significant participation from Germany, from Canada, from the Netherlands, from China, and you see a lot of countries in the RIPE area there as well.

I did not offer any free T‑shirts for this survey. So...

So the first question was how important are end user privacy concerns in decisions about what products and services your company offers and how those services work?

I excluded the individual respondents from this particular statistic; that's what I'm showing here. 68% of the people, not including the people who might have been privacy advocates, said that it was either very important or extremely important; those are the two highest categories. You'll see that in most of these questions I offered five choices. That's a very conservative way to do a survey, because anyone who is on the fence can just pick the middle answer and they are not committed. When you include the individuals, the number was slightly higher, but 68% I thought was quite high.

Perhaps it's because of the recent focus on GDPR, but I'm certainly aware that Internet user data, not DNS data specifically but Internet user data, has some significant marketing value. So I also wanted to ask: what is the marketing benefit of user privacy?

If you can make end user privacy claims about your products or services, do you see a useful marketing benefit? And 50% of the respondents said they saw either a very useful or extremely useful marketing benefit.

I wanted to ask about QName minimisation, so, first I explained what it is. And said if this option is available, do you or will you enable it?

And here I'm showing the respondents in two buckets. The top line is the individuals that I mentioned plus the "other" category, who are mostly individuals. The most common response there was "I don't know". But when you look at everyone else, which is ISPs, educational institutions, enterprises, 50% of them said they either have already deployed QName minimisation or they plan to. So I thought that was a very strong positive response.

Similarly, when I asked how interested they were in offering their users the option of encrypting DNS data, 50% were very or extremely interested. I did not try to explain what the options were or ask what their technical preferences were; I just asked if they were interested in offering the service.

I wanted to show you the question. I wanted to identify the obstacles to implementing DNS encryption, because people who are big advocates of DNS privacy should probably focus on what we can do to remove the obstacles for operators. So I listed some things that I thought might be obstacles, and asked them to rate each one: is it a significant obstacle, somewhat significant, a minor consideration or not a factor? Combining the two top ratings, the biggest problem is that the features aren't available yet in the products or services they use. That's something that we can certainly fix in our own products and the developer community can fix.

The second thing, though, almost exactly as big a problem, is that the operators said they were too busy. This is not something we can fix, but obviously we have to focus on making this as easy as possible to deploy.

I did ask them a kind of timely question: would you consider migrating your users to a free public hosted DNS resolver service that implements DNS privacy features, such as 9.9.9.9 or 1.1.1.1? Almost 70% said no, they would not consider it. Now, if these were just ISPs I could understand that, but these include enterprises and educational institutions. You would think that if they are worried about the effort required to deploy a DNS privacy service, a free one that takes no effort would be a more popular choice.

So, overall in summary.
I thought the results so far were that end user privacy is actually very important in decision making about these services. People do see that there is a useful marketing benefit. I wish I'd asked if they thought the marketing benefit of privacy would offset the marketing benefit of having the user data; I didn't ask that. Half either have deployed or plan to deploy QName minimisation. Half are interested in encrypted DNS. But most report significant obstacles to offering encryption. And in this survey, which had 195 responses, almost 70% were quite sceptical about a hosted DNS privacy service. Obviously the folks who are operating those services will have even more direct information, but I was surprised to see how high the scepticism was.

I have omitted all of the data about the GDPR questions for time. They are in my backup slides, so if you look at them you'll see it. If anybody would like the full dataset, just contact me; I'd be happy to share it. There is no user-identifying information in there, so no problem with sharing it. That's it.

(Applause)

BENNO OVEREINDER: Thank you, Vicky. Do you have any idea ‑‑ this is kind of about the commitment of the respondents. Filling in a questionnaire is one thing; I did this kind of thing too, a questionnaire, seven years ago, about routing security actually. Everybody found it very important, but we are still talking about routing security, and also Andrei Robachevsky with MANRS; it's partly lip service. One thing is to subscribe to MANRS, but the other thing is to really act. So do you have any idea or feeling how strong this commitment is among the respondents, the more than 50%, 60%, 70%, 80% even? I think it's important. You did give the overview of what the barriers are, but how can we ‑‑

VICKY RISK: That's a great question. It's not a huge survey: 195 respondents. It is probably significant, and certainly when I talk about the group of "other", it's more than 30 respondents, so it's probably reasonably significant. But you're right: any time you do a survey asking "would you buy this product", people can answer yes without spending any money, and most of the time they will say yes. A lot of the interesting information is typically in the comments, and I didn't have time to go through a whole lot of text. That's why I said I take the 70% who would not use a hosted service with a bit of a grain of salt, because there is going to be much better information from the folks operating those services as they see the uptake. As for QName minimisation, I have been sort of lobbying Geoff Huston, and I think he is thinking about taking on a project to actually measure how many people are doing QName minimisation. Obviously those direct measures are better. I still thought it was unconscionable not to at least ask. But you are right. On the other hand, these people are not getting any glory. They didn't put their names in, they are not getting a T-shirt, there was no little cat video at the end. So I don't see any particular motivation to overstate it either.

JOAO DAMAS: I think there might be a little bit of bias in the population. Some are already intent on running BIND, so they are more likely not to use someone else's servers.

VICKY RISK: That's the reason I did both social media and the downloads page. Among the comments, somebody commented that they are waiting to see this in the PowerDNS Recursor, so I know these are not all BIND users. Also, a number of them had deployed QName minimisation, which isn't available in BIND yet.

AUDIENCE SPEAKER: May I just say, very disappointed there was no cat video at the end. You said that one of the reasons you did the survey was to see if there is demand from the operators, so that you would not build all these things into the software and make it more complicated to maintain, I guess. Did you receive significant enough data to come to a conclusion, and if so, what is it?

VICKY RISK: That's a very good question. Personally, I don't think this is enough, particularly because when it comes to encryption, we did not even get into the topic of what would be required, or how many of our user population would be able to take advantage of it, or get into further detail. So I think there is more to be done there. I understand that the Internet community and even my own colleagues at ISC are gung-ho to implement DNS privacy, but I still think that unless we want to be talking about this in eight years like DNSSEC, we should pay some more attention to what is going to motivate the operators to deploy it and investigate the obstacles a little bit more. But yes, particularly when it comes to encryption, I don't think we have really found out enough.

AUDIENCE SPEAKER: Maybe the follow-up question is: what will be the next step if you cannot make a conclusion yet?

VICKY RISK: I have been thinking about that, and I'd be happy to talk to other folks here. We have implemented QName minimisation, although we haven't released it yet; it won't be long before that's available in BIND. I know that we plan to make it the default once we're confident of it. So, and I know this sounds patronising, it doesn't matter as much whether they want to deploy it; if it's the default, we know that most people will just accept the default. That doesn't work for encryption, so I think we have to do some further work there, not just market research but selling.

AUDIENCE SPEAKER: Victor Kuarsingh, Oracle Dyn. In my experience from the operator world, the people who typically download BIND, those engineers, are not necessarily the decision makers in those organisations, especially when it comes to things like what they run for DNS in their infrastructure. Do you think we were able to capture that? Is the person responding actually the decision maker who gets to decide "this is what we're going to do as an organisation", and if not, do you think we are actually extracting that data? It's normally product people and other folks in the organisation who decide, right?

VICKY RISK: You could say the same thing about the marketing question. I know the people downloading the software don't have a marketing job. So it's correct: I'm asking people questions and maybe they're not the best people to answer those questions. I do think there is some follow-up work here that would really be called for, but I think the fact that even the engineers think there is a marketing benefit is really encouraging, and we should pursue that. I think creating a marketing benefit may be related to Sara Dickinson's BCOP, coming up with some kind of self-certification for DNS privacy services and trying to market to end users that this is something they should be looking for. But that wasn't any part of the survey.

AUDIENCE SPEAKER: Warren Kumari from Google. I guess I should start by saying that obviously I'm biased, but I should also mention that Android P is going to be supporting DNS over TLS, sort of, with magic auto-configuration, and Cloudflare does DNS over TLS and DoH, and one of the Google people is co-chair of the DoH Working Group. But you also have a bunch of stuff saying that people don't want to rely on hosted DNS over TLS and similar things. If you don't implement this, that's going to be their only option. So do you really want to make it so that privacy-sensitive people have to rely on 8.8.8.8 and 1.1.1.1 and 9.9.9.9 and all the rest, or are you making it available to all?

VICKY RISK: I know something about research, and this is far from a watertight survey; I totally get that. I still think that we should make an effort to find out what people want. One third of the people who answered the survey came via the downloads page. By definition those are roll-your-own, build-it-yourself kind of people. As for the two thirds of people from the Twitterverse, God only knows who they were. But I don't think you can do a privacy survey and ask people to identify themselves. If we were to do it again, I would additionally like to ask: how many users are you supporting? Because there is one person who made a comment about a million users, and a bunch of other people who I think are just running a DNS server in their basement for their family. So, big difference.

JOAO DAMAS: Thank you very much.

(Applause)

Next up we have Baptiste, about high performance DNS over TCP.

BAPTISTE JONGLEZ: So, hello everyone. I am a Ph.D. student at a university in France, and I will talk to you about large-scale DNS over TCP. My main goal is not to convince you that you should do this; I mean, that would be nice, but that's not the main goal. The main goal is to explore what is possible and to have some practical results about what works and what doesn't.

But if you are convinced, that's even better.

So, I think everybody knows the current issues with DNS over UDP: no source address validation, fragmentation, the lack of privacy and so on. But I just want to point out the set of solutions that I identified: each addresses one or several of these problems, but not all of them. And basically TCP, and really DNS over TLS, solves all of these problems. So I found it an interesting way to go.

So, my point is that I want to look at whether it's possible to send all DNS queries from stub to resolver over TCP by default, and then we can extend that to TLS, but TCP first. The model I used is not completely realistic, but it doesn't matter: you take an ISP, you assume it has a single resolver, why not, and then I assume that all customers open a persistent TCP connection to the resolver. All DNS queries from devices inside the network get pushed through this persistent TCP connection. And the question is: does this work at the scale of a big ISP?
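
A rough sketch of that client model, assuming wire-format queries are supplied: one long-lived connection with the RFC 1035 length framing, and queries pipelined over it:

    import socket, struct

    def frame(msg):
        # DNS over TCP prefixes every message with its length (RFC 1035, 4.2.2).
        return struct.pack(">H", len(msg)) + msg

    def read_exact(sock, n):
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("resolver closed the connection")
            buf += chunk
        return buf

    def pipeline(resolver, queries):
        # One persistent connection, many pipelined queries; responses are
        # matched back to queries by the 16-bit message ID in the header.
        # 'queries' are wire-format DNS messages; 'resolver' is (host, port).
        with socket.create_connection(resolver) as sock:
            for q in queries:
                sock.sendall(frame(q))  # send without waiting: pipelining
            for _ in queries:
                (rlen,) = struct.unpack(">H", read_exact(sock, 2))
                resp = read_exact(sock, rlen)
                yield struct.unpack(">H", resp[:2])[0], resp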

Yes, so the advantages of doing this: in an ideal world you could switch off UDP completely, so there would be no more amplification attacks possible. Adding TLS would not be very difficult. And also, at a certain point in my research on lossy networks, we noticed that if you use TCP you get lower response times, because you can recover from losses faster. I won't detail this point, but that was the motivation.

So, the obvious question is: if you take an ISP with millions of subscribers, can this possibly work? To answer this question, I developed a methodology to measure the performance of a resolver. Then I did real experiments with lots of clients; I'll show you how we can do that. And one of the questions also is: does the performance of the resolver depend on the number of clients? Like, if you have two clients over TCP or 1 million clients over TCP, do you get the same performance?

It is not completely obvious why the performance would depend on the number of clients, because operating systems have become very good at this, but you could still have an overhead from managing a lot of TCP connections, a lot of file descriptors and so on, low-level stuff. When you go to millions of connections, and timers and all that, it could affect performance.

So now, the setup. It's not that easy to simulate millions of clients like this. And the second question was: how do you generate queries? I won't talk a lot about this, because I didn't find a lot of models for doing it, so I used a simple model to generate queries.

And so for the first point, having millions of clients: in France we have a research test bed called Grid'5000, which is basically a lot of racks of big servers that you can use as a kind of hardware-as-a-service. You reserve the nodes, you get complete root access on the machines, you do whatever you want, and at the end the machines can be used by somebody else to run experiments.

The machines have a lot of memory, so this is an experiment at large scale, and it's all 10 gig. Basically, the setup is quite simple: I take one of these servers to run the recursive resolver, in this case Unbound; then on the other machines I run a lot of little virtual machines, typically 10 or 20 on each server, and each virtual machine opens several persistent TCP connections, and then we send DNS queries over these connections.

For the largest experiment I used something like 250 virtual machines. So this is the setup: you have the DNS resolver here, the network, which is just a local 10 gig network, and all these servers running the virtual machines.

So, now about the results. First, how do you measure the performance of a resolver? I discovered here, by discussing with Sara, that there is something called dnsperf, which is basically this, but I did not know about it at the time, so I wrote my own client and my own methodology, which is quite similar. So basically, this is a graph: this axis is the time of the experiment, so this is one experiment, and this axis is the query rate in thousands of queries per second. The black line is the load, the number of queries per second that the clients generate. And the red line is the answer rate: how many answers I receive across all clients every second.

And so basically, I ramp the load up in a linear way. At first, the resolver answers all queries. At some point it starts to have trouble, and then it kind of collapses to this rate. So the metric I use is the last point where the answer rate is higher than or equal to the query rate; here, at this point, I say okay, the server handles about 70,000 queries per second for these parameters.
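
The metric is simple enough to state in a few lines; a sketch with toy data shaped like the graph being described:

    def saturation_point(load, answered):
        # The metric from the talk: the last sample where the answer rate
        # still keeps up with (is >=) the offered query rate.
        last = None
        for t, (q, a) in enumerate(zip(load, answered)):
            if a >= q:
                last = (t, q)
        return last

    # Toy data shaped like the graph: a linear ramp, collapsing around 70k q/s.
    load = [10000 * t for t in range(1, 11)]
    answered = [min(q, 70000) for q in load]
    print(saturation_point(load, answered))  # -> (6, 70000)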

Here is another one, which is a bit more messy. Basically, you can see that when the resolver starts having trouble, it accumulates some backlog here, because it answers slower than the query rate; then it catches up, then it gets some backlog again, catches up. I don't think this is very significant, because it could be some aggregation effect, so I use this metric of the last time the two curves cross, basically, which looks like a reasonable metric.

So, this is the result of a few tens of experiments. In blue there is the performance of UDP, in red of DNS over TCP, and the parameter here is the number of connections, the total number of TCP connections that I used. So here you just have a few clients that send a lot of queries, and here you have a lot of TCP connections with fewer queries on each of them.

There is a bit of variation, but basically, with UDP the number of clients has no effect; "connections" in UDP are just different source ports, so it doesn't, or shouldn't, really make a difference, and it's mostly flat. But with TCP, if you have very few clients you get quite good performance: this is on a single core, so almost 200,000 queries per second, which is nice. But when you start to have more clients, it decreases quite fast, actually. And then if you go to many, many more clients, it stays mostly stable; here it would be 60,000 queries per second.

So, the first conclusion is that the overhead of TCP compared to UDP is not that large, a factor of maybe 5 if you have a lot of clients; it's not that much slower, just a bit slower. And then there is this bigger dip, which is maybe unexpected, I don't know. I'm not completely sure yet why this decrease happens. My hypothesis is that if you have a lot of TCP connections, the connection state will not fit into the CPU cache on the resolver, so each time you handle a query you get cache misses and it slows things down. Or it could be that when you have only a few clients, each sending a lot of queries, you can aggregate queries together, like several queries in the same TCP segment, or things like that. I did some experiments to understand this better, but they are not yet completed.

So this is what I just said...
Okay. Now for the big question, because that was only for tens of thousands of clients. I ran an experiment with 6.5 million clients, all connected to the same resolver on a single server. This took 200 VMs and a lot of TCP connections, and basically it works. I was quite surprised that it could work at this scale; Linux is quite efficient in this regard. The performance was around 50k queries per second, which is similar to what I got with just a few thousand clients. The memory usage was quite high, about 50 gig of RAM, just for the TCP connections and the buffers in Unbound. But it works.
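
A quick sanity check on those figures (our arithmetic, not from the talk):

    # 50 GB of RAM across 6.5 million persistent connections is roughly
    # 8 KiB per client, covering kernel socket state plus Unbound's
    # per-connection buffers.
    ram_bytes = 50 * 2**30
    clients = 6_500_000
    print(round(ram_bytes / clients / 1024, 1), "KiB per connection")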

So far I have only looked at the throughput of the server. But you also have to look at what the clients experience: if you have one second of delay on your queries, you're not happy.

This is a bit messy. The unit here is microseconds, so the scale runs from 2 to 8 milliseconds, and this is a case where you have 4 million clients and quite a high load; it's not saturated, but it's starting to get quite loaded, and a few milliseconds of latency is acceptable. So that's fine.

And then, the scale is completely different now, that's when I overload the server. This is a linear increase, and you see that as the load increases you get some bumps in latency when the server gets backlogged, and then it goes away again. The scale here is 1 second at the top, but here you have a few tens of milliseconds at most.

So, if you are under the capacity of the server, latency is reasonable. Otherwise obviously it completely explodes.

Okay. Now, this was all on a single CPU core. What do you get if you use several cores? You can configure Unbound to use several threads, and I used a recent option in Linux that allows several threads to bind to the same port, so that the kernel does the load balancing between the different threads. And basically: one thread, two threads, six threads, ten threads, an almost perfect linear increase, which is really nice. This is on two CPUs with ten cores each, and once you span the two distinct CPUs it's not so good, but even there you still have more than half a million queries per second, which is very reasonable.
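
Assuming the Linux option meant here is SO_REUSEPORT, the pattern looks like this; the port and worker count are arbitrary:

    import socket

    def worker_socket(port=8053):
        # Each worker binds its own listening socket to the same port; the
        # kernel then load-balances incoming connections between them.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", port))
        s.listen(128)
        return s

    listeners = [worker_socket() for _ in range(4)]  # e.g. one per thread/core
    print(len(listeners), "listeners sharing one port")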

Okay. These are some limitations of this work. Obviously this was a lab environment, so everything was served from the cache; you will get lower performance if the resolver needs to do extra work, obviously. One thing which I did not think about before, but which is quite important, is that in my case I open all the TCP connections at the beginning and then I send queries on these connections. But a real DNS server will see a lot of connections and disconnections, and this can create additional load, especially if you use TLS. I haven't looked at that yet.

The query model could also be improved; if you have any ideas, I am interested in pointers.

The constant query rate...

These are the details of the setup. I won't talk too much about it, but you have links to the source code of the client and so on, and the details about the hardware.

So the conclusion is that doing this kind of thing, millions of clients on the same resolver, is not necessarily a good idea, but it's possible. So if you hear that TCP is too slow, that it does not scale: actually, it can. One of the main takeaways is that on Linux you can have basically as many TCP clients as you want, provided you tweak a few settings and you have enough memory.

And I plan to do some more experiments: to look at the cache behaviour, to test some other software like Knot or BIND, as Sara did something similar, and one of the main goals is to try other transport protocols, to add TLS.

So that's it. Thank you. If you have any questions.

(Applause)

AUDIENCE SPEAKER: Warren Kumari, Google. I'm sorry if I missed this, I was reading some mail. How much TCP tuning did you do? I noticed stuff like it's doing 4k per TCP connection; you can tune that down a bunch, and you can do things for faster connection set-up, etc. There is a lot of work already done for multiple connections. Did you do much tuning?

BAPTISTE JONGLEZ: I didn't change any kernel code, only some settings. The 4k thing was in Unbound, because by default Unbound allocates 65 kilobytes of buffers for each client, which is quite big. But on the kernel side, I just modified the limits; that's something that Google changed in the kernel about ten years ago. Back then it was fixed, now you can change it.

BENNO OVEREINDER: Not a question, just a thank you for setting up millions of clients. We, the community, have been talking about this for years, saying it's difficult and that it cannot scale. I like your approach: just do it. And it's not simply "just do it", you explained it's not trivial, but I appreciate your work. Thank you.

AUDIENCE SPEAKER: I actually had the same question as Warren, and then I sat down and he started talking. There are also configuration tunings that you can do in Unbound, like the maximum number of TCP connections per thread. Did you touch those?

BAPTISTE JONGLEZ: I had to change those from the defaults, yes.

AUDIENCE SPEAKER: What was the number that you set it to in this setup?

BAPTISTE JONGLEZ: Usually I set it a bit higher than the number of connections I make, because the connections are load-balanced over the different threads.

AUDIENCE SPEAKER: So you based it on the number of connections.

BAPTISTE JONGLEZ: Yeah, because it's related to the number of slots that you allocate.

AUDIENCE SPEAKER: Thanks.

BAPTISTE JONGLEZ: So I set it to 7 million or something like that.

AUDIENCE SPEAKER: I am sorry, but I think the key question you are answering here is the wrong one, because "how fast is a server?", I don't care about that: if a server isn't fast enough, buy a bigger one. What I care about is how much delay there is for the clients. And that is actually pretty significant, so that is the thing that is going to kill this.

BAPTISTE JONGLEZ: Actually, this is what I was talking about at the beginning, but I didn't present it. My initial work was about the client-side performance evaluation, about latency, exactly this. And basically, if you use persistent connections, then you have a negligible latency impact.

AUDIENCE SPEAKER: Yeah, but you start without the connections, so at some point you have to initiate them, and you have a whole bunch of round trips for the TLS.

JOAO DAMAS: That's one question I had for him. It would be interesting to see. Normally TCP establishes a session, does things and then closes it. But his measurements actually point to the fact that you can have as many as you want. Maybe we need to change our mental ‑‑

BAPTISTE JONGLEZ: As many queries.

JOAO DAMAS: Maybe the device, when it boots, establishes a permanent connection to its resolver, and it stays active forever until the device goes away? That could be a really nice thing to find out.

AUDIENCE SPEAKER: And then we have NAT time-outs and firewall time-outs. So I want to see much, much, much more about this before I ever deploy it.

JOAO DAMAS: We all do. But one thing that would be cool is if you could compare the performance of the traditional model of connect, do, close versus this one, because apparently this one scales a lot. So maybe we are just doing it wrong?

WARREN KUMARI: I forgot to say this earlier. Thank you. This is awesome. Thank you for doing this work. We need more.

(Applause)

Next up, Sara Dickinson.

SARA DICKINSON: So, I'm also going to talk about some measurements, in terms of benchmarking, which are actually at the other end of the spectrum to what you have just seen. This was work done by the team at Sinodun.

I did have two topics to talk about, but I think we are quite tight for time, so I will stick to just my first one, which is benchmarking, where we have looked at both TLS and TCP at the recursive resolver. This is very much a work in progress. I want to thank the Open Technology Fund, which partly funded this work. For the very first stage of this work we had quite limited goals. What we really wanted to do was just understand the characteristics of how the existing recursive resolvers handle their TCP and TLS loads, and we wanted to look at relative performance rather than absolute performance, with quite a limited scope to begin with.

Something you probably gathered just from listening to that earlier talk: we have traditionally benchmarked DNS name servers over UDP, and as soon as you start thinking about TCP you introduce a whole new level of complexity, with more variables, test dimensions and parameters that you have to consider. So what we're doing here is just scratching the surface; this is by no means a comprehensive report, it's just the start of looking at one portion of the testing space.

So, we chose to compare four pieces of software. We wanted to look at BIND; it doesn't do TLS yet, although it's on the way, but it's so widely used that we wanted to understand how it handled TCP. We looked at Unbound and Knot Resolver, which both do TLS natively. And we wanted to look at dnsdist; it's a proxy, but it is being very widely used to provide DNS-over-TLS service.

The test setup is about the opposite of what you have just seen; we have a very trivial setup. We have two servers controlled through Jenkins. On one server we run our client, and the client that we use is dnsperf; we use that one single machine to generate all the clients for our testing. We have another server running each of the flavours of name server, connected by a 10 gig link. Each of our servers has two 8-core processors. We have done only the most basic tuning: we set the obvious default parameters and we're starting from there as our baseline.

We also chose to lock the name server to just four cores, to make sure that with this simplistic setup we saturate it. And we heat the cache up before we start the runs.

The software that we use is dnsperf; this was originally developed by Nominum, now Akamai. In the middle of last year we took a fork of that. Much more recently, we also tried to add TLS support to it, and when we did that we came across a problem to do with the threading model in dnsperf, where it uses separate threads to send and receive queries. That gives you problems if you are trying to access the same SSL object from both at the same time, so unfortunately we had to refactor it and introduce some locking, and we do think that has introduced a slight degradation in performance. The TLS numbers you see here could be improved if we had a tool that was optimised for TLS.

Also, we have just stuck to TLS 1.2 for now. We have no TCP Fast Open; it's all very vanilla. We started just by taking a very small number of clients, understanding how the name servers behave there, and then also looking at varying the number of queries we put on a connection.

So this is the first result, looking just at UDP. Across the bottom axis here we are increasing the number of clients from dnsperf; that is essentially the number of threads it uses to send queries, and on UDP it's just sending queries on each of those threads as fast as it can. The top lines here are red, which is dnsdist, and blue, which is Unbound, and you can see they are very similar here. Yellow is Knot Resolver, and in the lovely green we have BIND, which is noticeably an awful lot flatter than the others, and we're not quite sure why. BIND is the oldest of all the software here, so maybe it's just showing its age a little bit in this.

This is what you see when you then use exactly the same parameters to do TCP. Because we're doing TCP, these clients are simultaneous TCP connections, so we're in noddy land here; we're only up to 16 at this point. But what we do see, very interestingly, in this first top line here, is that dnsdist does better over TCP than over UDP in this scenario. For the others we see pretty consistent results, in that we drop by about 50% when we are doing TCP under these same limited conditions. Having chatted to the dnsdist guys yesterday, we began to understand that we are not actually comparing apples with apples here; we slightly misunderstood their documentation. For all the name servers we had locked them to four cores, and for Unbound, BIND and Knot we had limited them to four threads as well. We thought we had limited dnsdist to four threads, but what it actually does is open a new thread for every TCP connection it handles, and you can't decouple that; it's one parameter to control both. So actually, up at this end, it's using 16 threads. That's our best guess as to why we see this difference at the moment.

I have a similar graph for TCP versus TLS, but probably the easiest way to show this data is this graph. What we have here is: the blue is TCP and the yellow is TLS, and this is shown as a percentage of the UDP performance at the 8-client point, which was the mid-point of the previous graph. So again, this is with a handful of clients, and we see dnsdist way up here. But for the others, and this kind of matches what Baptiste has seen, you are up at about 50% of your throughput with TCP, and for Unbound and Knot Resolver the TLS is not that far behind. I'm quite interested as to why the dnsdist TLS is so low, but I don't quite understand that yet. I will comment in passing that Unbound has a slightly different implementation: it doesn't process queries concurrently as it takes them off the TCP connection. It will take a query off, find the answer, send a response, and only then will it pick up the next one. So it's not surprising it's the lowest one here, but it's not actually doing that badly, and maybe that's to do with the fact that we're just serving from a hot cache and it doesn't have that much overhead.

Now on to looking at what happens when you start changing the number of queries per connection. All the previous measurements were done with 20,000, so those were effectively persistent connections. We don't see much change between 20,000 and 10,000, and what we're doing here is taking 10,000 down to very low numbers. For dnsdist, we see a pretty gradual decrease, and then at about 2,000 it starts shooting down. The blue solid line is Unbound and the green solid line is BIND over TCP, and they stay flatter for a little bit longer, but at about 1,000 is where you see the tail-off. On the TLS ones we see a very gradual decay, and we think this is where the limitations of the implementation kick in; they only start falling off at very low numbers of queries.

But the interesting thing is what happens with Knot Resolver, which has this little kink in it at the end. If we then drill right down and look at what happens below 500 queries per connection, we see really quite different behaviour here: we see Knot Resolver staying particularly flat down to 100 queries, whereas the others are all in a roughly equivalent decline. So something about the way Knot is handling these connections is different. Having chatted to Petr, our best guess at the moment is that Knot Resolver uses libuv and hands off almost all of the event and connection handling to libuv, and we think that's doing a good job, whereas in some of the other cases we know there is a lot of contention when they are trying to pull queries off TCP connections.

And just as a purely hypothetical exercise: if you think theoretically of how a TCP handshake amortises over a connection, if that were the single factor that changed the overhead, you'd see that blue line there; you'd be able to stay flat down to about 50 queries per connection.
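
The amortisation argument can be put in one formula; this assumes a toy cost model where a handshake costs the equivalent of some fixed number of queries, which is our illustration, not a measurement:

    # Toy cost model: if a handshake costs the equivalent of H queries, the
    # per-query cost over a connection carrying N queries is 1 + H/N.
    def relative_cost(n_queries, handshake_cost=1.0):
        return 1.0 + handshake_cost / n_queries

    for n in (1, 10, 50, 100, 1000):
        print(n, round(relative_cost(n), 3))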

So, I think I am pretty much done for time. There is a full report written up on the DNS privacy website. We have a huge to-do list; we actually came out of this with probably a lot more questions than answers. We clearly need to understand the separate implementations a lot better. There is a whole range of work we can do on OS tuning and refining the configurations for the name servers. We very much want to drill right down to low numbers of queries per connection so we can understand the behaviour there, and start adding in tricks like TCP Fast Open; we want to look at TLS 1.3 and zero-RTT, how they affect things. And of course, as I mentioned, we're at very low numbers of clients here. We have run this up to 1,000 clients; I didn't show you that because I literally got the results yesterday. We didn't see anything dramatic happen up to 1,000 clients, apart from with dnsdist, where we're having trouble getting it to run, but we're hoping we can work with the dnsdist guys and sort that out, and we'll have numbers going up to tens of thousands of clients very soon.

One thing I would love to do is drop a pure TLS proxy into our test system and just see what profile that shows us, because it could be hugely different, or it could be that this is just generally what stuff looks like when you put it under these conditions. And there is some work that's already been done on changing the way Unbound processes queries; I'd be very interested to see how that improves the handling once the concurrent processing is there. As I mentioned, if you want to do this kind of testing, we probably need a whole new test tool, written from scratch, designed to cater for these multiple transports and to give you good comparisons between the conditions.

With that, I will leave it there and hopefully time for a few questions. Thank you very much.

(Applause)

JOAO DAMAS: Thank you Sara.

AUDIENCE SPEAKER: Ondřej Surý, ISC. It was much more fun being the new kid on the block. We know about the performance in BIND and we are working on that, but I was chatting to Ray about the measurements, and he suggested that there might be something about the NIC queues and how the kernel distributes the queues to the threads, so ‑‑

SARA DICKINSON: Agreed. That's something that we haven't explored yet, and it could have a huge impact. To get input from Ray, it would be best to chat to him directly.

AUDIENCE SPEAKER: Phil Stanhope, Oracle Dyn. I just wanted to follow up on that point. Let's follow up and talk just about pure benchmarking, because you will find that you get a softirq storm if you don't tune the queues right, and that will kill you. Once you figure out how to tune that, and I can't give the data out right now, I'll tell you that you are going to get to about 500 unique connections per second over TLS before your CPU stalls. Now, we shouldn't be doing this from scratch in the DNS community; we need to look at this, because the data is out there on how to tune these types of servers for that workload, and then you at least know what the benchmark is. And then I would suggest a proxy, like you were describing: termination in front, any filtering, anything you want to do there, and then just a persistent fat socket to your actual name server behind it.

SARA DICKINSON: That's certainly a model you could look at.

JOAO DAMAS: Thank you very much.

We are running slightly late. So bear with us.

WILLEM TOOROP: This presentation addresses the question of whether DNSSEC is still necessary if you have DNS over TLS. This comes from Eckel on the DNS coordination list, which is a public list run by the Internet Society for people that want to advance DNSSEC deployment. And there are more people that have the same question; actually, even expert draft writers sometimes seem to make this assumption. For example, the snippet below is from a work-in-progress version, I have to say that, of the DNS-over-HTTPS draft, which suggests that if you do not have DNSSEC, then you do need an authorised upstream resolver.

So I commented on this piece of text, and it now has a more nuanced description. I mention it just to indicate that it's a question that more people have, or an assumption that more people make.

So, the motivation for DNSSEC is that UDP is easy to spoof. The resolver just asks a question and receives an answer. There is no state. That's it. So an attacker could simply try to inject a false answer by sending multiple answers to the recursive resolver, and as such poison the cache. Now, one choice that could have been made when DNSSEC was developed would have been to use a secure transport instead of UDP, but perhaps because UDP scales so nicely and cheaply to very large numbers of queries this wasn't done, and instead the choice was made to sign the zone data itself. So this is how DNSSEC works.
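
A back-of-the-envelope illustration of why that race is winnable (the numbers are mine, not the speaker's): an off-path attacker has to guess the 16-bit transaction ID, and, if the resolver randomises it, the source port too, before the genuine answer arrives.

    # Rough odds that one burst of forged answers poisons a single query.
    ids = 2 ** 16                # possible DNS transaction IDs
    ports = 2 ** 16 - 1024       # roughly the usable ephemeral source ports
    forged = 10_000              # forged answers raced against the real one

    print(f"ID only:   {forged / ids:.2%} per query")
    print(f"ID + port: {forged / (ids * ports):.6%} per query")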

Each zone signs for itself. The public part of the key signing the zone is also in the zone. Parent zones authorise child zones to sign for themselves by signing the DS records, which are a hash of the key signing key of the child zone.

And then validating recursive resolvers that verify DNSSEC only need one trust anchor, the root trust anchor, and they can verify any DNSSEC data with that. The important notion here is that they verify that the data is authentic, because it's signed by the actual owners of the data, or the producers of the data. It is origin authenticity which is delivered to the recursive resolver.
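
As a small illustration of that verification step, here is a sketch using the dnspython library: it fetches a zone's DNSKEY RRset together with its RRSIG and checks that the set verifies against the zone's own keys. A real validator would also chase the DS records up to the root trust anchor; the resolver address is a placeholder.

    import dns.dnssec, dns.message, dns.name, dns.query, dns.rdataclass, dns.rdatatype

    zone = dns.name.from_text("ripe.net.")
    req = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    resp = dns.query.tcp(req, "193.0.0.1")   # placeholder resolver; TCP avoids truncation

    keys = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    sigs = resp.get_rrset(resp.answer, zone, dns.rdataclass.IN,
                          dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    # Raises dns.dnssec.ValidationFailure if the signature does not verify.
    dns.dnssec.validate(keys, sigs, {zone: keys})
    print("DNSKEY RRset verifies against the zone's own keys")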

So, also, because it's done with public key cryptography, the signature doesn't match if something changed in the data. So, integrity also comes for free. DNSSEC does not provide privacy: questions and answers are out in the open. And the protection of the path from the stub to the validating recursive resolver is not really well defined, so that's still left open. This is commonly called the first mile problem, or the last mile problem, in DNSSEC.

There is another property that DNSSEC has, and that is that it is transitive: it doesn't matter how you get DNSSEC data; if you can lay your hands on DNSSEC data, you can verify that it's authentic. So one way to overcome the first mile problem would be for a stub to do the DNSSEC validation itself. Unfortunately, for end entities like stub resolvers it's quite hard to get DNSSEC data, because it's hampered by middleboxes.

DNSSEC does not protect against address hijacking. I have seen at least two presentations on address hijacking this week. It's trivial: DHCP tricks, and with IPv6 it's also very easy by injecting router advertisements, etc.

But this can be done with transport layer security. So, transport layer security, or TLS, protects against DNS address hijacking by letting a client connecting to a service check that it is actually speaking to the service it wants to speak to. This is checked by checking the signature of a third party, a so-called certificate authority. The third party signs the domain name of the service, the TLS server's, together with the public key paired with the private key with which the service is encrypting its session. So it's actually vouching for the domain name you are trying to connect to; it also authenticates the domain name. So DNSSEC, in this case, is not needed any more.
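
In code, that whole check is what a TLS client library performs on connect. A minimal sketch (the host name is an arbitrary example):

    import socket, ssl

    host = "www.ripe.net"                 # arbitrary example host
    ctx = ssl.create_default_context()    # loads the system's CA store
    with socket.create_connection((host, 443)) as sock:
        # wrap_socket verifies the CA signature chain and that the
        # certificate matches the name we asked for, or raises SSLError.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified subject:", tls.getpeercert()["subject"])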

But there is more in DNS than just address lookups; there are a lot of name redirections. For example, if you send mail, you look up which are the mail servers for this domain, and these answers need to be authentic, otherwise you would be connecting to the wrong service by name: you would authenticate the name, but you would not be connecting to the party you intended to connect to.

So, note that with TLS it's a third party that is vouching for the transport security. The data itself is generated by the one who is creating the data, but in the case of, say, a web service over HTTPS, the content provider does not sign its own data; it's the operator that signs the transport. Therefore it's not origin authentication but just authentication, and the integrity is of the same kind: it is absolutely the content that the operator served to you, but it might not be the content that the content provider provided. So the operator can modify the content it serves.

And this is the biggest flaw of TLS. There was a measurement in 2010 of how many certificate authorities there are: at that time 1,500, so probably more now, and they can all vouch for all domain names. So that's the actual weak spot of the current normal TLS PKIX.

So, the motivation for DNS over TLS is privacy. After the Snowden revelations about the NSA's pervasive, invasive monitoring, we all started to encrypt everything to make it private and stop this monitoring.

So here you see DNS over TLS and DNSSEC, the two techniques together. And they are actually doing different things: DNSSEC is providing origin authenticity and integrity, and TLS is providing the privacy. So they are complementary ‑‑ but they are a bit more than complementary, they are also strengthening each other, because for the secure transport on the first mile, the authentication can be restricted to a single certificate authority, which might be the operator itself, with DANE. And the TLS transport covers the first mile problem of DNSSEC and delivers DNSSEC, as well as providing a trusted connection to a trusted resolver.
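
As a sketch of the DANE side of that: a TLSA "3 1 1" record simply publishes, in the DNSSEC-signed zone, the SHA-256 digest of the server's public key, so the client no longer depends on the whole CA ecosystem. Assuming the cryptography package and a placeholder certificate file, generating such a record could look like this:

    import hashlib
    from cryptography import x509  # assumes the 'cryptography' package
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    pem = open("server.crt", "rb").read()      # placeholder certificate file
    cert = x509.load_pem_x509_certificate(pem)
    spki = cert.public_key().public_bytes(
        Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

    # Usage 3 (DANE-EE), selector 1 (SPKI), matching type 1 (SHA-256);
    # the owner name below is a hypothetical DoT resolver on port 853.
    print(f"_853._tcp.resolver.example. IN TLSA 3 1 1 {hashlib.sha256(spki).hexdigest()}")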

So, to answer the question: if you have DNS over TLS, do you still need DNSSEC? In the current world, probably yes. But what if everything is DNS over TLS? So, also from the resolvers to the authoritatives. And suppose the recursive resolver ‑‑ not a validating one, but one validating the transport security ‑‑ has a certificate authority store with 13 certificates in it, the 13 certificates of the operators that serve the root, and, following the delegation, it learns the certificate authority of the child zone it follows.

This is better than having everything just on DNS over TLS ‑‑ or let me put it this way: this does not have the too-many-certificate-authorities problem that normal TLS has, because you are only trusting the operators that the zone content providers trust to serve the zones.

So the only thing that still remains is the stub-to-resolver connection. So, with this architecture of having DNS over TLS only, the question is: what do you think is better? Having the rock-solid, mathematically sound proof that something is correct with DNSSEC, or should we perhaps be more flexible and rely on our operators to do all the serving, and all the trustworthiness, for us? This is my last slide and I am open for your opinions.

JOAO DAMAS: I missed one thing. There is an alternative called DNSCrypt that a lot of people out there seem to be already using and benefiting from.

WILLEM TOOROP: DNS over TLS or DNSCrypt, that's more or less the same ‑‑ it's transport security.

JOAO DAMAS: I just wanted to make sure.

AUDIENCE SPEAKER: Jelte Jansen. There is one other problem I don't see solved here: the way X.509 certificates are handed out. What if the address is already hijacked at the moment the validation takes place by the certificate authority? They usually just connect to you, check if they get a connection, and they hand you out the certificate. So there is no actual origin authenticity anyway in the way certificates are handed out.

WILLEM TOOROP: You start out with the 13 root operators ‑‑

AUDIENCE SPEAKER: How will they validate the other ones? That's missing ‑‑

WILLEM TOOROP: You have a secure connection to k.root-servers.net, and suppose you are asking for, I don't know, ripe.net: it will give you an NS set for the operators of .net, and alongside it, it will provide the certificate authority for those operators.

AUDIENCE SPEAKER: The problem is kind of the bootstrapping problem of TLS in general: if you can spoof the address when you ask for a certificate, that's ‑‑ we have a similar problem with DNSSEC probably, but still.

AUDIENCE SPEAKER: Matthijs, Oracle Dyn. I like this idea of having that information store, the CAs in the root, and then if I want to go to a .nl site, the CA of the .nl zone is stored in the root and I can get it over TLS. So, yeah, do we need DNSSEC if this is here? Maybe not. On the other hand, DNSSEC would still be nice, because we learned that it is a very good mechanism to use aggressive NSEC caching, so we can leverage it to not send much data. And there are other things on the horizon. So it's still complementary; I think we should do both, and I'd like to discuss this over beers.

AUDIENCE SPEAKER: Andrei. Well, Willem, what you are saying is that we should ship the private keys for TLS to China and Belarus?

WILLEM TOOROP: If that's where the operators are, yes.

AUDIENCE SPEAKER: For the root servers, that's okay. They solve different kinds of problems and I think we need both, because DNSSEC is providing the authenticity of the data, and DNS over TLS is just providing the authenticity of the channel. And if one of those is compromised, it's a different thing basically. So I don't think that you can replace DNSSEC with DNS over TLS.

WILLEM TOOROP: I think it's a matter of who you want to trust. Do you trust the operators?

AUDIENCE SPEAKER: No. You can't trust the operators, because it's not only a case of authoritarian regimes spying on their citizens; there is the US case there too, you know. Most of the big operators are US-based companies, and do you trust them? Maybe you trust the companies, but do you trust all the ‑‑

WILLEM TOOROP: It's also not that you as a client have to trust them. It's the zone owners who trust those operators to serve their data. Right, so they have a contract with that operator. There is so little trust these days.

JOAO DAMAS: Warren.

AUDIENCE SPEAKER: Warren Kumari, Google. Can you go back a slide? So, you use different terminology a couple of times; specifically at the bottom, is it "learn CA of child zone operator" or is it "learn certificate of child zone operator"?

WILLEM TOOROP: Certificate, I think, yes, sorry.

AUDIENCE SPEAKER: That has basically all of the problems that DNSSEC does, which is that the child needs to tell the parent what specific key, etc. So you have got the same rollover problem. This is largely kind of rolling DNSSEC into this. Another thing that DNSSEC gives you, which I'm not clear if this does, is that for stuff like the root you can just grab the whole zone and use it, and that's, I think, a really nice feature with our local root and localised roots. I'm unclear with this if you can do that; it kind of looks like maybe, but I'm not sure.

AUDIENCE SPEAKER: Phil Stanhope. I think one thing we need to do is to get the roots, the trusted roots, into the default certificate bundles that all the OSs have, as a minimum. That's just the bootstrapping problem, and it applies to all HTTPS scenarios as well. With regard to the question ‑‑ I don't know who asked it ‑‑ about poisoning the name server address when obtaining a certificate: that has nothing to do with this work per se; that's just more about how you get certs. The protocol is about to adopt a multi-challenge scenario to avoid, or try to manage, DNS poisoning and DHCP path poisoning. That's not live yet, but there will be multiple concurrent challenges, and that has to be baked in; it's still not formally part of the protocol, but the work is being ‑‑ or it may be ‑‑ experimentally rolled out. That's going to apply to anybody who wants to get a certificate from an automated CA. So then you have to have the combination of the DNS, the trusted roots, and trust of your own certificates at the authority, to be able to even contemplate the request to get a certificate that you would then use in this type of scenario.

AUDIENCE SPEAKER: Tim from Qrator Labs. So, you state that one of the main problems of DNSSEC is privacy, which is correct, and DNS over TLS sort of solves this. But it's important to still keep in mind that under current circumstances, most of the time the data which is kept private by DNS over TLS is then leaking to a malicious person anyway, most of the time via the Server Name Indication in TLS. There is an initiative to encrypt this, but it's nowhere near completion. Because the problem of SNI encryption and the problem of DNS over TLS are, in my opinion, different initiatives developed completely independently, maybe it's a good idea to keep this in mind, and maybe find some way to help encrypt this using DNS over TLS, I don't know. Just the idea is that in the coming two or three years, the privacy given by DNS over TLS isn't that much of a thing. So this was intended to be a question, but finally it's a comment, sorry.

JOAO DAMAS: Okay. Thank you Willem. Thank you very much.

(Applause)

With that, we are done here. There is coffee waiting outside for you. I have a last request from the PC: to remind you all that there are slots for lightning talks available on Friday. So if you feel like presenting, that would be welcome.

(Coffee break)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.