
Routing Working Group
Thursday, 17 May 2018
9 a.m.

CHAIR: Good morning, Routing Working Group meeting is starting in one minute, please take your seats and get ready for an exciting session.

CHAIR: Good morning everybody. Welcome to RIPE 76. This is the Routing Working Group. My name is Paolo Moroni, I am a co‑chair of the working group. Ignas, at my side, is the other co‑chair. Let's start as usual with the administrative matters.

The first thing is the minutes of the Working Group at RIPE 75 in Dubai. The draft has been published to the mailing list not so long ago. Are there comments or corrections? No? Okay, we declare these minutes final. Thank you.

Second point is the scribe. Usually the NCC helps us with a scribe. So, thank you.

And I think we can start with the agenda.

This is what we plan for today. It is a bit packed. I hope we'll be on time. That's why I am starting on time, although some people are still coming in. Is there any addition you want to see in the agenda? Not really. Okay. So, I think then we can start. But before starting, I have a couple of things.

One is that we rely on you for improving what we are doing. So, please help us with your feedback. Please rate the talks, and if you wish, please feel free to approach any of us directly after this session, or of course on the mailing list, which is always available.



IGNAS BAGDONAS: The Working Group agenda was quite light in Dubai; this is to compensate for that. Somewhat intentionally, today's agenda is dominated by the topic of routing security. This was not planned, but maybe the Working Group might think it is overall a good idea to have dedicated topics for the meetings. If you have any feelings about that, please come to the co‑chairs, talk about it, and give feedback in general. And with this we are starting with the first routing security related session: ARTEMIS, a presentation of the work done by a Greek research organisation.

XENOFONTAS DIMITROPOULOS: Good morning everyone. Today, we will talk to you about our work on ARTEMIS, which is a tool to detect and mitigate BGP prefix hijacks within a minute. This is joint work between the University of Crete / FORTH and CAIDA in the US. So, BGP hijacks are an important problem for the operation of networks, for the customers of networks, and also for their peers. And I'm sure that most of you, as we also saw yesterday, are well aware of the problem.

Of course the problem is not new. Probably one well known solution is RPKI. RPKI, however, is presently adopted in only 8% of prefixes, which are covered by ROAs. In addition, in a survey we conducted last year with network operators, we saw that network operators are often unwilling to adopt RPKI for various reasons. The top three reasons in our survey were, first, that it's not widely adopted yet, and then the costs, complexity and risks of adopting it.

In our survey, we also saw that sometimes network operators use third‑party BGP hijack detection services. However, these services have their own set of limitations, in particular they trigger false positives and false negatives. Then they only detect basic BGP hijack events. In addition, you need to disclose some information about your prefixes and your neighbours, and this might be sensitive information. And finally, we saw that because you get third‑party alerts, you need to manually verify these alerts which introduces delay for the mitigation.

In our survey, we saw that from the organisations that had been affected by a BGP hijacking event, more than 50% of them, they required several hours or even days to resolve the problem.

ARTEMIS is an approach that operates locally in the protected network. It uses realtime BGP monitoring data from the public BGP monitoring infrastructure, which has been recently introduced, and it can therefore detect BGP hijacks in realtime. And optionally, when the user desires it and when this is possible, it can take automated mitigation measures. So in the rest of this presentation, I hope we will convince you that ARTEMIS has the following key features:

First of all, besides basic BGP hijacks, it can also detect more advanced hijacking events, like man in the middle hijacks. Then, it has a zero false positive and false negative rate for the basic hijacking events. Third, and I think this is one of the most interesting observations, in our experiments we saw that the detection and mitigation cycle, by doing prefix deaggregation, takes only a minute from the moment the hijacking event is initiated.

In addition, it works locally in the protected network, so you don't need to disclose information about your network to a third party, and it's flexible. Let me clarify here that ARTEMIS is a tool that we are currently developing. It's funded, as we will discuss further on, by the RIPE NCC Community Projects Fund, and our purpose here is to get as much feedback from you as possible, so that when we finish this tool and make it Open Source, it has features that are desirable for network operators.

So let's see an example. ARTEMIS has these modules. The monitoring module of ARTEMIS uses BGP monitoring data from RIPE RIS and from the CAIDA BGPStream service, which internally uses RouteViews data. In addition, ARTEMIS plugs into the BGP routers of the protected network.

So, assume that the administrator owns autonomous system 1 here. He will insert a configuration file which states that they own a prefix, a /22 in this example. It will also state the autonomous systems from which this prefix is advertised, as well as the neighbours of these autonomous systems. So, in this example, assume that AS4 conducts a hijack by announcing a sub‑prefix with a fake AS path towards AS1, which includes a spurious link between AS2 and AS4. The announcement will propagate and it will be seen by many of the monitors of ARTEMIS, and then ARTEMIS will reason, in this case, that this AS link has not been observed before in the available data, for any other prefix and in either direction. Because this link has not been observed, it will raise an alarm for this hijack.
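The reasoning step just described can be sketched in a few lines. This is an illustrative reconstruction, not the real ARTEMIS code: the configuration shape, the prefix value and the function name are all invented for the example.

```python
def detect_hijack(prefix, as_path, config, known_links):
    """Return an alarm string if the announcement looks hijacked."""
    origin = as_path[-1]
    # Basic (Type-0) check: an owned prefix with an illegitimate origin.
    if prefix in config["prefixes"] and origin not in config["origins"]:
        return "type-0 hijack: origin AS%d" % origin
    # Advanced check: any AS-to-AS link in the path that was never
    # observed in historical data (in either direction) is suspicious.
    for a, b in zip(as_path, as_path[1:]):
        if frozenset((a, b)) not in known_links:
            return "unseen link AS%d-AS%d" % (a, b)
    return None

# Invented configuration for the slide's scenario: AS1 owns a /22,
# and only the links AS1-AS2 and AS2-AS3 were ever observed.
config = {"prefixes": {"10.0.0.0/22"}, "origins": {1}}
known_links = {frozenset((1, 2)), frozenset((2, 3))}

# A poisoned path 4-2-1 keeps the legitimate origin AS1, but uses
# the spurious link AS4-AS2, so an alarm is raised.
alarm = detect_hijack("10.0.0.0/22", [4, 2, 1], config, known_links)
```

The direction-agnostic `frozenset` link matches the talk's point that the link has been seen for no other prefix "and in either direction".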

Further on after raising the alarm, it can optionally take automated mitigation measures.

Okay. So, the public BGP monitoring infrastructure can capture most impactful events. In our experiments, we saw using emulations that all events that affect more than 2% of the autonomous systems are visible by the vantage points of the public BGP monitoring infrastructure.

Further on, in this work we use the realtime BGP monitoring services that have been recently introduced in RIPE RIS, which internally use BMP, and we observed that the detection is extremely fast. Using the PEERING test bed, we conducted controlled hijacking experiments against our own prefixes, and we saw that, from different locations, detection can happen within only five seconds from the moment a hijacking event is initiated. At this point I'll hand over to my colleague.

VASILEIOS KOTRONIS: I would like to highlight another feature of ARTEMIS, which is comprehensiveness in terms of attack type detection. I will use a 3D model of the attack space. The first dimension is the prefix: it can be an exact or a sub‑prefix hijack. The second is what the hijacker does with the traffic: blackholing or man in the middle. The third is how it manipulates the AS path on the control plane. For the latter case, the position of the hijacker on the fraudulent path determines the type of the hijack: a fake origin is Type 0, a fake first‑hop link is Type 1, and so on.

Using this model we compared ARTEMIS to other available approaches, and we saw that it can successfully detect attacks across this space. Now, okay, we say the detection can be comprehensive, but is it accurate? We saw that for the basic hijacks, like sub‑prefix and Type 0 and Type 1 hijacks, the false positives and negatives are 0, based on the information supplied by the operator himself. For advanced hijacks, there is a tunable trade‑off, so the operator can choose to trade false positives against false negatives and time. For example, if you observe a totally new link in the Internet, it can be fake or it can be real. If we wait a few minutes for BGP to converge, we can reduce false positives down to 0 for 89% of the cases.

After detection, the next natural step to deal with the hijack is to mitigate it. With ARTEMIS, we employ two basic approaches. The first one is a do‑it‑yourself approach with deaggregation. Now, this is of course only possible for some prefixes. For /24s, which may be filtered after deaggregation, the idea is to get help from other ASes. These helper ASes announce the affected prefix, attract traffic towards it and tunnel it back towards the victim AS. This model is not unknown to the community, who have seen it before in the context of DDoS protection services. In fact, in our research, we identified the top DDoS protection organisations as very effective helper ASes should such a hijack take place.
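The do‑it‑yourself deaggregation can be illustrated with the standard library. This is a minimal sketch, assuming the victim responds by announcing the more‑specific halves of the hijacked prefix so that they win on longest‑prefix match; the prefix values are documentation examples, not taken from the talk.

```python
import ipaddress

def deaggregate(owned: str, max_len: int = 24):
    """Split an owned prefix into its two more-specific halves.

    max_len marks the longest prefix most networks still accept
    (/24 for IPv4), beyond which deaggregation no longer helps.
    """
    net = ipaddress.ip_network(owned)
    if net.prefixlen >= max_len:
        return [net]  # cannot deaggregate further; ask helper ASes
    return list(net.subnets(new_prefix=net.prefixlen + 1))

# A sub-prefix of 10.0.0.0/22 is hijacked; the victim announces the
# two /23s itself so its routes are at least as specific.
more_specifics = deaggregate("10.0.0.0/22")
# more_specifics == [10.0.0.0/23, 10.0.2.0/23]
```

For a /24, the function returns the prefix unchanged, which is exactly the case where the talk proposes outsourcing to helper ASes instead.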

The mitigation that can be done with this approach is both automated and flexible. Automated because, after detection, the countermeasures can be triggered immediately, or the system can await the operator's consent. The flexibility stems from the configuration capabilities the administrator has over the hijacked prefix, the type, or even the impact of the attack, for example how many monitors worldwide have seen it. The important message here is that the detection and mitigation cycles combined can be brought down to one minute, which is the demarcation point.

On the right side, you see a real example of this cycle using the PEERING test bed. The lower line, the continuous one, is what we observed during the time of the experiment, when four RIPE RIS monitors were realtime streaming, and the upper line, the dashed one, shows what would happen if all RIPE RIS route collectors were realtime streaming. Today's situation is somewhere in the middle.

Regarding the current status of the tool: the development is funded by the RIPE NCC Community Projects Fund. We have a basic, minimal GUI based on a web application for the monitoring input. In terms of detection, we detect the basic hijack types, like sub‑prefix and Type 0 and Type 1 hijacks. We are working on detection of the exact Type‑N ones, which capitalise on the knowledge base built by ARTEMIS. Finally, with the operators' feedback, we would like to proceed to automatic mitigation mechanisms that work in the operational network.

Thus, a short recap of the configuration file. The operator defines their own prefixes and ASNs, and also the neighbours. This involves extracting the information from the local routers and also from the RIRs and the RIPE RIS route dataset.

Feedback: you are welcome to answer our questionnaire at this URL or try the current test version. And most importantly, we'd like some collaboration for potentially testing ARTEMIS in a real network, for example supplying some configuration files and advice on the integration. And of course, you are welcome to talk to us during this RIPE meeting or via e‑mail.

I will leave the slide up to have all the sources available in one go. And we will take your questions. Thank you very much.

(Applause)

AUDIENCE SPEAKER: First of all, thank you for your talk, it's a really interesting approach to fight hijacks. May I ask you to go back; you were describing different types of hijacks and the way you are going to detect them, especially where there were new links, there was some graph. So, a side note: if you are targeting new links, they just happen, and there will be a lot of false positives. But you could instead compare the set of prefixes from your customer with what is seen in the wild. If there is AS path poisoning, the victim's autonomous system number is inserted in the AS path, so the victim ISP will drop the route, and if you have a multihop BGP session from the victim's instances, you will see that the victim is not announcing this prefix. And this is much simpler than what you are trying to do. If there is AS path poisoning, or you can call it AS path spoofing, it's very easy to detect it from the source of the announcements, because they will simply not see such a prefix. That's all.

VASILEIOS KOTRONIS: But this is a more active approach than poisoning the AS path.

AUDIENCE SPEAKER: It was just your example, where the attacker not only advertises prefixes but also inserts the victim's autonomous system in the AS path. The most simple way is not to study the way it propagates, but to compare what is seen on the victim's side and in the world, whether there are differences in the set of prefixes.

XENOFONTAS DIMITROPOULOS: Can I answer here? If you do poisoning, like in this example, AS1 will still get the announcement from the monitoring points, right.

AUDIENCE SPEAKER: I am saying you need to have a direct multihop BGP session with your customer. In this case you can compare the data from your customer and what is seen from other points, and with that you will be able to detect such hijacks. So I think we have been too long at the mic, so let's talk about it afterwards.

AUDIENCE SPEAKER: Andrei Robachevski, this detection, do you do this only for participating ASNs or for all ASNs in the Internet?

VASILEIOS KOTRONIS: No, it's a local system so it works per network.

AUDIENCE SPEAKER: I understand that, but your database of those incidents and hijacks, is it for all ASNs in the world or just certain ASNs that cooperate with you?

VASILEIOS KOTRONIS: For the basic hijacks, we keep it related to the network itself. For the cases where a link is fraudulent, we collect information related to many prefixes in the Internet.

XENOFONTAS DIMITROPOULOS: It only protects the network that uses the software.

AUDIENCE SPEAKER: Okay. Thanks.

CHAIR: Any other questions? Thank you again.

ALEXANDER AZIMOV: Good morning. I am from Qrator Labs. Today I'm going to discuss with you the problems of IP spoofing and the possible solutions.

To start with: spoofing is bad. There are various attacks that can use IP spoofing; the most popular now is amplification attacks, but they are not alone. Okay, speaking only about amplification attacks: of course, there are two parts to an amplification attack. There are vulnerable services around the world, and their number is decreasing, but it is still high enough for the attackers. And the problem is that, okay, if you have such vulnerable services inside your network, there are several community projects that can assist you in fixing them. But the majority of these services, I'd like to think so, are out of your control. They are inside your customers' networks, and things get even more complicated when we're speaking about IP transit relations.

So, we're network guys here; what can we do to make the problem less critical? What techniques are available at the level of IP transit?

So, it's BCP 38, and it provides three modes to detect and drop spoofed IP traffic. The first one is strict mode, where egress traffic is bound to the ingress route: a packet is accepted only if the route back to its source points out the interface it arrived on. Such an approach has limitations; it's just not workable at the level of IP transit. For example, in this particular case, provider C will drop traffic that is received from provider B but originated by its customer X. The next mode is loose mode. It states that if there is any route at all for the sender's IP, then it's okay. In reality, it will not drop any legitimate traffic. In reality, it will do nothing, because it gives a very strong limitation to attackers: do not use private IP ranges during spoofing. And that's all.

And the last one is feasible mode. It states that the traffic must be accepted if there is at least an alternative route through the selected interface. At first glance it looks quite okay. But if we switch to a situation where some address space is announced in only a single direction, and that happens, again we will have a problem. So, let's just summarise the situation.

There is a BCP that clearly states there are three different modes that drop spoofed traffic, and none of them is usable at the level of IP transit.
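To make the three modes concrete, they can be modelled as simple decision rules over a routing table. This is a toy model of the behaviour described above, not a router implementation: the interface names and table shape are invented, and real uRPF operates on the FIB in hardware.

```python
def urpf_accept(mode, src_prefix, iface, rib):
    """rib maps a source prefix to the interfaces from which a route
    for it was learned, with the best route listed first."""
    routes = rib.get(src_prefix)
    if mode == "loose":
        # Accept if any route at all exists for the source address.
        return routes is not None
    if mode == "strict":
        # Accept only if the best route points back out this interface.
        return routes is not None and routes[0] == iface
    if mode == "feasible":
        # Accept if any route, best or alternative, uses this interface.
        return routes is not None and iface in routes
    raise ValueError("unknown uRPF mode: %s" % mode)

# A multihomed customer prefix whose best path is via upstream-A;
# its traffic arrives via upstream-B, the alternative path.
rib = {"198.51.100.0/24": ["upstream-A", "upstream-B"]}
# strict mode drops this legitimate traffic; feasible and loose accept it.
```

The failure case from the slides is the prefix announced in only one direction: then even feasible mode has no alternative route on the receiving interface and drops legitimate traffic, while loose mode accepts everything with a known route, spoofed or not.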

The key idea behind this failure is that the BGP protocol carries routing policy information; it was not supposed to carry availability information. But I have already been accused that my slides are very depressing. So, let's speak about solutions.

Option 1: a new SAFI family. We can create a new SAFI that will carry availability information. We would have to rewrite, or change significantly, the BGP decision and propagation process, and maybe after decades we would reach a partial adoption rate.

Maybe it's a good option, we shall discuss it.

Option 2: hacking. We can try to find a way to create these informational messages in BGP using techniques that we already have. And there is a wonderful thing: we already have it. So let me introduce to you the graceful shutdown community.

It's a new well-known community that was invented to assist planned network maintenance, and it works as follows:

If you need to take down a BGP session, or for example take down the link with one of your IP transit providers, it may result in problems during the BGP propagation or convergence process, resulting in packet loss due to convergence, dynamic loops and so on. The idea is that, before dropping the session, you should mark all your prefixes with this graceful shutdown community, and your transit depreferences all your prefixes. So while it propagates, all your providers, and the providers of your providers, will finally use alternative routes. And after that, without any worries, you can drop your BGP session. But, by the way, take a look at this picture: provider B is already using not the direct route to its customer, but the route via its upstream provider.

So, it has a route, but it's not using it, and we have a guarantee that it is not used. So, we have got the informational message we were looking for. Still, there are of course limitations. If we go back to the scenario where prefixes are announced in only one direction, graceful shutdown will help us only partially, because it will still not work for transit ISPs. But it works for multihomed ISPs, and the multihomed are the many: more than 85% of all ISPs are multihomed.

So, option 2. It has a positive side: we do not need to change BGP significantly, and we don't need to ship additional software. But still, it requires significant work with your customers; you need to convince them to change their routing policy. So, it's a lot of your time and work.
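The receiving side of the mechanism Alexander describes can be sketched as an import-policy rule. This is a minimal illustration assuming the RFC 8326 semantics of the well-known GRACEFUL_SHUTDOWN community (65535:0), not any particular router implementation; the route structure is invented for the example.

```python
# Well-known community value defined by RFC 8326.
GRACEFUL_SHUTDOWN = (65535, 0)

def apply_import_policy(route):
    """route is a dict with 'communities' and 'local_pref' keys.

    Routes carrying GRACEFUL_SHUTDOWN get the lowest local preference:
    they stay usable, but every alternative route wins over them, so
    traffic drains away before the session is actually torn down.
    """
    if GRACEFUL_SHUTDOWN in route["communities"]:
        route = dict(route, local_pref=0)  # depref, do not drop
    return route

r = apply_import_policy(
    {"prefix": "203.0.113.0/24",
     "communities": {(65535, 0)},
     "local_pref": 100})
# r["local_pref"] is now 0: this is the "informational message" the
# talk is after, since any router preferring another path proves an
# alternative route exists.
```

This also shows the customer-side cost mentioned in the talk: the depref only happens if the receiving network's import policy actually honours the community.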

And option 3: do nothing. And are we really prepared for the next year? So, I can't decide for you; it's the community that must choose the proper way and follow it. But I will be very glad to know your opinion. And may I ask you to prepare your smartphones? Ladies and gentlemen, I'm not kidding you, please get your smartphones and be ready. We are going to try an innovative technology, and I will show you how the RIPE NCC elections should look.

So, here is the link, you are free to use one of them. Ladies and gents I am waiting for you, I will not go to the next slide before you all get your smartphones. Feel free to use it. So, I hope at least somebody has already followed the link.

And here is our online poll. I am already surprised by the results. So, let's discuss them. First of all ‑‑ okay, I am really happy to see that the first option is not the most popular one. We have a lot of networks that have already applied anti‑spoofing techniques to their customer links; I am surprised how that is working for you. Number 2 is full automation. I need to highlight that we may have full automation only in the case of a new SAFI family, and this would postpone the solution for decades. But it may also be an option.

Also, there are people that are already eager to work with customers. You are free to do it, because we already have this community; you just need to check that it is supported by your software, and you can try to do it now. So, thank you for listening. I will keep this slide here so it will be updated during the mic session. Thank you.

(Applause)

GERT DÖRING: Gert Döring, long-time anti-spoofing advocate. You're sort of focusing on getting BGP to automatically enable your uRPF to do the automatic job. What we have been doing is build prefix filters for the customers with an automated tool, basically bgpq3, and this can also build an ACL. So, if the customer is permitted to announce a prefix to us, by means of prefix filters, we will also accept traffic for that prefix. We already have the automation stage for the prefix filtering; just adding the ACL there is five minutes' work. So you don't need any fiddling with BGP automation or whatever, and the router can usually do ACLs in hardware, and the customer is free to say, I'm not announcing this prefix to you today because today I just want to announce it elsewhere, but you are still validating whether they can send you packets. Of course, it needs communication towards the customer: if you don't document your prefixes, neither your BGP nor your packets will get anywhere, but that's a positive side effect.

ALEXANDER AZIMOV: Aren't you afraid about the quality? You are getting these prefix lists from the AS-SETs, am I right? Aren't you afraid that there will be a lot of garbage in these AS-SETs, and that, for example, a customer that is eager to spoof will just add some /8 into its set?

GERT DÖRING: If I catch them doing it, I know where they live. And my customers know ‑‑

ALEXANDER AZIMOV: You have a direct interaction with your customers

GERT DÖRING: We're a small network, so that is easy. But you have to start somewhere. And if you are not prefix filtering your customers, if you say I'm not going to prefix filter them because there could be garbage in the AS‑SET, then you are on the wrong track.

ALEXANDER AZIMOV: There is still ROA origin validation that can also support you. But okay, I accept that ‑‑ a valid point. Thank you.

RANDY BUSH: Gert, two things. You can do this in a small network, but a large network cannot per-packet filter ‑‑ your router just goes bump ‑‑ and you don't have that direct relationship with your customers, your customers' customers, etc. But yes, for a small network or for a non-transit stub, yes.

GERT DÖRING: I have been told that large networks had to disable uRPF as well because their hardware couldn't do it. So if it works for small networks, it's at least some gain.

RANDY BUSH: But I thought Alexander's point was, what do we do? What can we do that scales up?

ALEXANDER AZIMOV: Still, the partial solution also works only for the multihomed. But these are very interesting results. Thank you.

CHAIR: Thank you. Any other questions? Thank you again.

IGNAS BAGDONAS: The next talk.

EMILE ABEN: I work at the RIPE NCC in R&D. This is a project called AS Hegemony, and it is the work of these two scientists; I was also somehow involved.

This work is about the interdependency of networks: how do networks depend on other networks? You know that you have dependencies on your directly connected networks, your upstreams or your peers, however your topology is, but there are also indirect ones: what's upstream from your upstream? What's beyond what you can directly see?

And we're collecting data on that, of course: there is RIPE RIS, there is Oregon RouteViews, and there is also Hurricane Electric, all kinds of projects. But a big problem with these route collector projects is that they are biased; they are biased towards the clueful networks. And there is also a very limited set of vantage points: it's in the order of hundreds of vantage points, whereas the total number of ASes is in the order of 10,000, so there is a bias there. And I promised to be quick, so I'll not go into the full explanation of this.

But this technique actually works on showing these interdependencies. It is based on graph theory; there is a concept there called betweenness centrality: how central is a node in a big graph? There is a nice explanation on Wikipedia, and I stole this picture from it. So, the blue nodes here are very central because lots of shortest paths go through them, and the red nodes are not very central. There is global versus local: in the global picture, which are the central nodes overall; and in the local picture, you pick one point, and which are the networks that are very central between you and the rest of the Internet.

So, people have tried this before and it didn't work very well because of the bias, but the key insight that Romain brought to this is that you can use something simple like a truncated mean. Everybody knows about averaging numbers; this is a way to solve the bias. So, we're in a very soccer-loving city ‑‑ football-loving city, I should say in English ‑‑ so picture a full stadium of supporters, and they have an average income, whatever the average income of a soccer fan is. But now you add Mark Zuckerberg. What's the average income then? It's going to be ten times higher, 100 times higher, I don't know. A way to remove this bias is to just cut off these outliers and calculate over the rest, and that's how this technique works. If you want a better explanation of it, there is a really nice slide set that Romain presented at the Passive and Active Measurement conference. There is a scientific paper if you like reading that type of stuff. Or, if you like to read code, there is code available that implements this technique.
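The outlier-cutting idea described here is just a trimmed (truncated) mean; here is a minimal sketch, with invented numbers standing in for the stadium example.

```python
def truncated_mean(values, trim=0.1):
    """Average after discarding the top and bottom `trim` fraction.

    Sorting first means a single extreme outlier lands at an end of
    the list and is dropped before averaging, so it cannot dominate
    the score the way it would in a plain mean.
    """
    data = sorted(values)
    k = int(len(data) * trim)
    kept = data[k:len(data) - k] if k else data
    return sum(kept) / len(kept)

# Nine fans earning 30,000 plus one billionaire-scale outlier.
incomes = [30_000] * 9 + [100_000_000]
plain = sum(incomes) / len(incomes)       # dominated by the outlier
robust = truncated_mean(incomes, 0.1)     # the outlier is trimmed away
```

The same trick, applied to betweenness-centrality scores computed from each vantage point, is what keeps a few over-represented route collector peers from skewing the hegemony score.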

But what I really want to show here is this. There is actually a website, the Internet Health Report at iijlab.net. You put your AS in there and you'll actually see what this simple technique produces ‑‑ I should also say this is very simple: it's truncated, so you don't have to do all kinds of very complicated calculations to get to the scoring. So, how does, in this case, Level 3 depend on other networks? You see the dependency is between 0 and 1, where 1 is fully dependent, and again this is not only direct but also indirect dependencies. So there is a little bit of dependency on other networks in terms of, in this case, the IPv4 address space that is routed. And these are the number of networks that are dependent on Level 3 in this case.

Another case: Google. No dependencies that we could measure. That's in line with what's been told about this network: no significant dependencies on other networks; it's very highly peered. And you can see this as a local example of the flattening of the Internet. If you actually look at the top, at the traditional tier 1s, we see something interesting: there is a slight trend upwards. So, they become more central over time in terms of the v4 address space that they are central to. So, maybe that's a global non-flattening: locally things are flattening, and globally these networks become slightly more important. That's what this kind of looks like.

The other interesting thing that you can potentially do with this technique is detection of leaks. This is an example of Level 3 leaking one of the Comcast networks, AS33667. You see normally it is fully dependent on AS7922, that's the mother ship for Comcast, but there was an event here and you can actually see the graph change. These graphs are typically very flat, so you can potentially use this for detection of significant events happening, and I think that part is really interesting.

So, what I want from you: look at your own network. If you see weird things happening there, if you see unexpected dependencies, come talk to me; I have an e-mail address there. Tell us if you see weird things. And where to take this: would people want alerting, an LIR Portal integration? We could use this for peer selection, or if you have any other ideas how to use this, please let us know.

That's it from me. These are the Twitter feeds of the people involved in this project. I highly recommend you follow Romain if you like seeing all these events; he does a pick of the week where he highlights interesting events in the Internet. So, that's it from me. Are there any questions?

AUDIENCE SPEAKER: From Zayo. Thanks for building it; we use it every week at least.

(Applause)

IGNAS BAGDONAS: Again, continuing on the security topic, Max presenting on RPKI AS cones.

MASSIMILIANO STUCCHI: I am from the RIPE NCC, although this is work that I have done mostly in my spare time, together with Job Snijders, but I can't see him here. He was supposed to be standing here and showing his beauty this morning, but no. Not yet.

So, AS cones, what are we talking about?

You know, everything starts with AS-SETs and route objects. You want to find out the networks of your customers, and what you do is go to the AS-SETs, do a reverse or an inverse lookup, find all the route and route6 objects that your customers are supposed to be announcing, and then you build a prefix list. We're using again an example from Job Snijders; basically this is one of his AS-SETs, and that's what you get. You make sure filtering can be done, supposedly, right.

But there are some issues.

You have limitations in AS-SETs. What are the issues? AS-SETs in different databases may have the same name, they might be managed by different organisations, and they might contain totally different data. So, you can't really rely on them in this case.

And then, moreover, how do you find out what the AS-SET for your customer is? There is no fixed way to find them. There are also different trust levels in the different IRRs; there are different issues. But what we can do is try to look at it from another perspective. There is RPKI. A lot of people don't like it. Okay. But RPKI has data you can trust, and you have data that's similar to a route object: a ROA. You have a prefix, you have an ASN that is supposed to be originating the prefix, and you even have some additional data: you have max length, you can also use that. And again, it's data you can trust. So, you know who controls it, you know where it's supposed to be, and you can validate it in a way.

So, enter AS cones. We have RPKI, we have our ROAs; what are we missing? We're missing a way to define AS-SETs in RPKI. And how can we do this? We started thinking about it and we proposed a draft. The goals are to create basic feature parity between the IRR and RPKI, and in the end one of the things you can achieve is that, maybe later on, we could think about taking the data from RPKI and feeding it to the IRR. We can make provisioning operations easier, because we create a way to more easily identify the name of an AS cone, so we can easily reference it, and we want to go global, so independent from the IRR. Well, you still need to feed data in, but you have a global repository in the end where you can find them.

So, how can you do this? You have different features. First of all, granularity of declarations; this will be achieved, as I'll show you, by having two separate types of objects: a policy and the AS cone itself. You have a default name space, so you know where to find the information. You have a simple validation process, which derives from having a simple name space. And together with it we brought in some other simplicity: if you are a stub network, a small one, you don't have to do anything; AS cones take care of that for you.

So, I mentioned the two different objects. First of all, we have a policy object. What's the idea behind it? We want to bring granularity. So in the policy object ‑‑ that's actually a bad example there ‑‑ you define: to this neighbour of mine, this is the AS cone I want to announce. This is needed because then you can easily search for which AS cones an ASN is supposed to announce to you. So you can go and look it up.

You can have ‑‑ well, you must have a default policy. By default, that policy contains only your own ASN. Actually, when I was reading my slides this morning, I found out that this can be an issue compared to what I say in the next bullet point, because in every relationship you point to only an AS cone. That's the idea. Even if you only have one ASN behind you, you need to create an AS cone, so I'll have to figure this one out, but that's a minor issue, a minor bump on the way.

Then the policy points you to an AS cone, which is basically an AS‑SET, exactly the same as you can find in an IRR nowadays.

But you reference it, that's the only difference, and you reference it similarly to the way you would reference a community. So the prefix is the AS number, a colon, and then the name of the cone. This is how you easily find the cones globally with a default name space.

The names of the AS cones then, must be unique only per ASN.
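To make the naming scheme above concrete, here is a tiny, hypothetical sketch of how a reference in that default name space might be built and parsed. The exact syntax (`AS<number>:<name>`) and the example ASN and cone name are assumptions for illustration, not taken from the draft text.

```python
# Hypothetical illustration of the default name space described above:
# a cone is referenced as "AS<number>:<cone-name>", which is why cone
# names only need to be unique per owning ASN.

def make_cone_reference(asn: int, name: str) -> str:
    """Build a globally findable cone reference from its owning ASN."""
    return f"AS{asn}:{name}"

def parse_cone_reference(ref: str) -> tuple[int, str]:
    """Split a reference like 'AS64500:CUSTOMERS' into (asn, name)."""
    asn_part, name = ref.split(":", 1)
    return int(asn_part[2:]), name  # drop the leading "AS"

print(make_cone_reference(64500, "CUSTOMERS"))    # AS64500:CUSTOMERS
print(parse_cone_reference("AS64500:CUSTOMERS"))  # (64500, 'CUSTOMERS')
```

Because the owning ASN is part of the reference, two networks can both call a cone `CUSTOMERS` without colliding.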

Then how do you find them? Basically, your local validator should give you access to them, via an API or via another export facility. So this is the way you get the objects that have already been validated by it, a local validated cache: you read the objects, and then you can build your list.

So, how do you generate the prefix list with an AS cone? As an upstream, you read the policy of your downstream, you get the AS cone that they want to announce to you, and you walk the AS cone, or the list of ASNs in there, you walk it all. If you find a duplicate reference, you just discard it, but the goal is to build a list of ASNs for which you then have to look up the ROAs in your validated cache. So you look up all the ROAs which have as origin those ASNs in the AS cone.
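The walk described above could be sketched roughly as follows. The data structures (a dict of cone members, a dict of ROAs keyed by origin ASN) are invented for the example, since the draft's actual formats are not shown here.

```python
# Illustrative sketch of the prefix-list build described above: walk the
# AS cone recursively, discard duplicate references, then collect
# prefixes from validated ROAs originated by the ASNs found.

def walk_cone(name, cones, visited=None):
    """Return the set of ASNs reachable from cone `name`.
    `cones` maps cone name -> list of members, where an int is a plain
    ASN and a string is a nested cone reference. Duplicates are
    discarded by tracking visited cones."""
    if visited is None:
        visited = set()
    if name in visited:          # duplicate reference: discard
        return set()
    visited.add(name)
    asns = set()
    for member in cones.get(name, []):
        if isinstance(member, int):
            asns.add(member)
        else:
            asns |= walk_cone(member, cones, visited)
    return asns

def prefixes_for_cone(name, cones, roas):
    """`roas` maps origin ASN -> list of (prefix, maxlen) pairs from
    the local validated cache."""
    return sorted(p for asn in walk_cone(name, cones)
                    for p, _maxlen in roas.get(asn, []))

cones = {"AS64500:CUSTOMERS": [64501, "AS64501:CUSTOMERS"],
         "AS64501:CUSTOMERS": [64501, 64502]}
roas = {64501: [("192.0.2.0/24", 24)], 64502: [("198.51.100.0/24", 24)]}
print(prefixes_for_cone("AS64500:CUSTOMERS", cones, roas))
# ['192.0.2.0/24', '198.51.100.0/24']
```

The duplicate check also protects against loops if two cones reference each other.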

So, in the end, it should look something like this. And that's why they are called cones. You start from one policy. The blue in the background is the AS cone. And the further you go, the more the cone grows. And this way you can actually walk down all the paths to you.

And lastly, references. We have a GitHub repository with the XML version of the draft, and we welcome discussion in the GROW IETF Working Group.

So, this was brief, but I'm open to questions.

AUDIENCE SPEAKER: Hi, I am Iljitsch from Logius. Basically, with RPKI the way it is today, you have one filter for everything, right? That's the way it works. What you are proposing is that you have different filters for different connections to ASes, right?

MASSIMILLIANO STUCCHI: Yes. Well, feature parity with what people already use. Two presentations ago there was a question; I think Gert pointed out that people use bgpq3 most of the time. What does bgpq3 relate to? It relates to ‑‑

AUDIENCE SPEAKER: Now you're talking about other things. We're talking about RPKI, which is one filter for everything. Now you're saying we are going to have multiple filters, so this also needs a different way to import the filters into the routers. But the real question that I have is this: who authorises what? Is it the ISP that says these are my customers, or is it the customers that say these are my ISPs, or is it both?

MASSIMILLIANO STUCCHI: It's the customers that say what they are going to be announcing to their upstream. So the relationship goes upwards.

AUDIENCE SPEAKER: So it's not the relationship in two directions?

MASSIMILLIANO STUCCHI: No.

AUDIENCE SPEAKER: I think it should be.

MASSIMILLIANO STUCCHI: Okay, if you have any suggestions, please send it to me in an e‑mail. Or the mailing list and we'll take care of that. Thanks.

AUDIENCE SPEAKER: Rudiger Volk. I am fairly disturbed; forgive me for being really rude, but this is, to me, an example of hacking before conceptual clarity. What you are talking about is ‑‑ well, okay, the intended use of the RPKI is actually a completely different usage model than what you are doing. That's not really a problem per se, as I have been proposing using the RPKI in similar ways, but the RPKI is done as something for people with authority over some name space to declare authoritative and authorising information. What I am seeing here, at least the way you have explained it, is what has been done as the model in RPSL, which is not authorisation; there is an optional authorisation model for RPSL, and it's incomplete and lacking, but RPSL is for people running networks to document their policy. And the AS‑SETs are exactly that. And a lack of the AS‑SET concept is that, well, okay, the party that puts stuff into the AS‑SET has no authority over it. There is no way to make sure that the content someone is putting there, and maintaining it cleanly and so on, is actually true and trustworthy. In the early stages of RPKI discussion there used to be a proposal for an AS owner authorising an upstream to propagate, and that's a way of authorising certain AS paths, and one could discuss that. But what I'm hearing is, well, okay, there is not really an authorisation concept here ‑‑

MASSIMILLIANO STUCCHI: Can I? The point is we leverage the system that's already in place for the rest of RPKI. Basically to get ‑‑

RUDIGER VOLK: So what are the rules to put in something that is validated in RPKI there? You are just saying well, okay, we have a database that is successfully deployed with a global scope with some structure and we will throw some other garbage into it.

MASSIMILLIANO STUCCHI: Rudiger ‑‑

RUDIGER VOLK: I am stopping. I think I made the point that this is conceptually very questionable.

MASSIMILLIANO STUCCHI: I can understand that. But the point is, you have to build your policy and you have your own name space, so if you own the ASN you can easily go and create those, and that's the point. So I authorise you with that. That's the authorisation that can be built around it.

RUDIGER VOLK: The authorisation has to come from the actual owners of the resources. And you are building it the other way around. You are essentially following the RPSL model, and that is exactly opposite to what is actually supposed to be the structure and the authorisation model in RPKI. And kind of ‑‑ we have to identify the gaps in the overall picture and fill them correctly, but do not start hacking at this before having a clear conceptual model.

MASSIMILLIANO STUCCHI: Can I suggest something? Can we maybe spend some time offline, and then you can explain to me how I can make this better? Because this was just a proposal, and if you have any idea on how to make it better, please, I am open to any suggestion. So ‑‑

AUDIENCE SPEAKER: CAIDA has worked on inferring customer cones. It would be interesting to compare the AS cones you get from this set of objects with the customer cones in the CAIDA dataset, which are based on relationship inference.

MASSIMILLIANO STUCCHI: That would be interesting.

AUDIENCE SPEAKER: Martin Levy from CloudFlare. With all respect to Rudiger, please do continue proposing this. Please do break the model. Please use the draft mechanism. Please let this get fleshed out. Because as far as I can tell, the authentication and authorisation that sits behind the creation of RPKI objects beats the heck out of anything the IRR or an RPSL or RPSL‑NG environment could ever bring to the table. So if the model is wrong ‑‑ and this is actually where I have great respect for Rudiger, because he has spent a lot of time thinking about this ‑‑ then continue pushing this until the local minima actually get found where the answer actually is, because there is structure in this and there is crypto that gives some authority to the information. So, a simple question: when you do this, you mentioned one key point, that the stub network has to do nothing, and we will equate that in the operational world to a network with not too much expertise, and I say that as politely as I can, but we all know they exist.

MASSIMILLIANO STUCCHI: And I know they are also the majority of the networks and that's why I wanted to point that out.

AUDIENCE SPEAKER: Stubs are the majority, independent of whether they are multihomed or not, with reference to the previous talk. So my question is, as you look at this, do you understand how to get involvement both up the chain of transit and down the chain, as in the cone, the customer cone? You went one way, but have you thought about it in the other direction? Have you thought about mutual authentication of some variety that would get us partly to an answer to Rudiger, because you are sitting on an authenticated base? So...

MASSIMILLIANO STUCCHI: I thought about that initially, before I started writing it, but then the idea was to try to keep it as simple as possible. And if you start putting in that authentication, then you end up with a lot more work for every network. But, again, I'm open to any suggestion on how to make this better. We only submitted the very first draft, and if people think that there is room for making it a two‑way relationship, then we can think about that.

AUDIENCE SPEAKER: Gert Döring. We talked about this on Tuesday, and I think I understood you there differently than you explained it here. What I understood on Tuesday, which was missing here, is that the customer cone object doesn't automatically build the route set, but it gives a list of candidate ASes that are then evaluated against the documented policy object.

MASSIMILLIANO STUCCHI: Exactly.

GERT DÖRING: So it's not like I could just include Level 3 in my customer cone object and get permission to transit all of their stuff, because the system would then have to actually check all the policy objects. So this is not sort of violating or turning around policy, just listing candidate objects. So maybe that helps clarify where there are uncertainties. I like this approach, by the way.

MASSIMILLIANO STUCCHI: Thank you.

RANDY BUSH: What you are trying to do is worthwhile. Okay. And I have great sympathy for it; I am an AS‑SET addict. And I kind of think the RPKI might be useful. But the problem that people keep trying to talk about, and try to talk about in the GROW Working Group, is that you have got a clash of authority models, and that really has to be solved at the design and architecture level. I have sympathy for wanting to use the well understood hierarchic authority of the RPKI to patch up the policy model of the IRR. But gluing the RPKI on the side does not make the IRR data any more authentic, nor does the authority model of the IRR data map well to the authority model of the RPKI data. I'm not saying this is impossible. I'm saying more thought is needed. This is a hack, as somebody said a few minutes ago, and I think what we need is an architecture.

MASSIMILLIANO STUCCHI: Okay.

IGNAS BAGDONAS: Thank you. Just as a summary, this appears to be a contentious and emotional topic, and while everyone is still here for a day and a half, please get together and discuss this, and maybe think about reporting what was done at the next meeting.

CHAIR: Thanks.

(Applause)

INGMAR POESE: Hi. First of all, I would like to apologise a little bit, because I'm not going to talk about BGP. Pretty much everything was about BGP up to now. What I'm going to talk about is more an AS‑centric approach on how you do routing without doing routing.

So, let's assume you have your AS in the middle, and you have your routers and your peerings that go outside, and let's assume there are a couple of CDN servers that want to push traffic into your network and deliver stuff to your customers. Let's say they want to deliver stuff there, and let's say, for some reason that ISPs usually don't know, the CDN chooses the server on the bottom. And because of routing and because of policies and because of BGP and whatever protocol you are using, that's the path that is being chosen from the CDN server to the customers that are requesting that content.

Now, you can sort of engineer that. You can put it on a different path over here, which is sort of the same length but still utilises your network. Or, if all of that is full, you can also go that way and take that path. But in the end, what you're fixed to is that traffic is coming in down here. There is another server over there that would have a very easy path over here, but you can't choose that; you are fixed at the ingress point, and short of pretty much withdrawing the routes down here, or doing some things that are not very easy to do in order to shift the traffic somewhere else, the CDN is the one that in the end chooses that server, and that's exactly what we think should change. So what we actually want is this: we want to be able to talk to the CDN, we want to be able to tell them: okay, guys, whatever you are doing here, this is not the right mapping. For that customer region, please use that CDN server. That's much better for us and it will also help you: it's fewer hops, it's better paths, and we can all gain from this. Not only us as an ISP, but the CDN can also gain from this.

So, let's put this into reality. This is a map of Germany ‑‑ I'm German ‑‑ and what you see here is fairly simple. You have four different locations where traffic can ingress. You have, in circles, the colours where this traffic is coming from. Now, the interesting one is this region up here, because there is actually something in Berlin, but half the traffic is coming from Frankfurt. Why would it be doing that? It really doesn't make much sense if the cache is already in Berlin. So what it should really look like is this: you should align your traffic, and please note that even though this up here is actually very far away, it still gets delivered from Berlin because of the network topology. You should actually use your topology and all of your internal information, and give that in an abstract way to the CDN to make them understand how you would like your traffic to flow inside the network.

So, that's exactly what we do, and what we have been building. I have been working on this pretty much for five years, and we have this ready. In order to do this ‑‑ and I'll go through this one quickly ‑‑ we need data, of course. We have basic data, and some numbers on how much we're collecting there at the moment. We need IGP, of course; we need BGP, of course, either from the route reflectors or from the edge nodes, it doesn't matter, as long as you get a full view of the network. With that you can do your basic mapping just on that data, but that's sometimes not sufficient, so we're also collecting NetFlow, for example, to determine ingress points. We're tracking SNMP, and there is a whole bunch of other things we can collect as well, which all pretty much in the end gets translated into one topology that is annotated with all the information we can get, which you then use to build your topology separation and build your mapping for a specific CDN.

That's, in the end, how it works. You pretty much have all your inputs here, and you throw them into our engine. The analytics part up here is something you have heard talks about for a while, one or two presentations, but that's not the focus today. The focus today is down here: the interfaces for the actual CDNs. We have established connections through these, and please don't get me wrong, it says BGP here, but this is really a misuse of the protocol. All they're doing here is saying: we want to communicate prefixes, and the natural thought was to use BGP. It is really just a transport protocol here. This has nothing to do with the BGP that's running in the Internet; these are point‑to‑point connections. But this is being done.

ALTO is ready on our side, and we are actually putting it into action at the moment, but when I made these slides it wasn't ready yet, it wasn't in production yet. JSON is pretty much an export of ALTO, and we can export to XML. If there is another format that is needed, we can easily convert to that.

So, and I apologise for the amount of text here, this is basically what happens inside the core.

So, we build a topology, and the crucial bit about this is that we have a function that breaks the network into segments, which can be hierarchical, and basically makes it possible to separate parts of the network out for individual mapping. For these individual mappings, for each CDN, we then have a selection function, a cost function, that determines, for each server that a CDN wants to use, what the cost to these regions that we have defined in the topology is. What that in the end boils down to is that for each CDN you have a mapping towards each of your regions, and you can easily give that over to a CDN, because all of the information that you have collected internally is now abstracted away and built into this cost function. You are not actually giving out your own data; all you are doing is telling them: this is my preference.
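The per‑CDN mapping described above can be sketched minimally as follows. The cost function, the hop‑count table, and the region and server names are all invented for illustration; the point is only that the CDN sees a ranked preference per region, never the internal topology data behind the costs.

```python
# Minimal sketch of the mapping step: a cost function scores each
# candidate CDN server against each customer region, and the exported
# mapping contains only the resulting preference order.

def build_mapping(regions, servers, cost):
    """For each region, rank the CDN's servers by the ISP's cost
    function (lower cost = more preferred)."""
    return {r: sorted(servers, key=lambda s: cost(s, r)) for r in regions}

# Toy cost: pretend the cost is an internal hop count kept in a table.
# In reality this would be derived from IGP/BGP/NetFlow topology data.
hops = {("berlin-cache", "berlin"): 1, ("frankfurt-cache", "berlin"): 5,
        ("berlin-cache", "munich"): 6, ("frankfurt-cache", "munich"): 2}

mapping = build_mapping(["berlin", "munich"],
                        ["berlin-cache", "frankfurt-cache"],
                        lambda s, r: hops[(s, r)])
print(mapping["berlin"][0])   # berlin-cache
print(mapping["munich"][0])   # frankfurt-cache
```

The internal data stays inside the `cost` callable; only the ranked server lists are handed to the CDN.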

Of course, I have been talking a lot about theory right now. Let's go to practice at this point.

And what we have done in the end is we have built this. We have it in production. We are currently controlling over 90% of a major CDN's traffic through this. What you see here is all normalised traffic. We start here, before we turn it on; you see the 0% line, that's where we basically don't do anything yet. And then we gradually increase the control: 20%, 60%, and down here 90%. As you can see in the normalised traffic ‑‑ that means we have taken out the traffic growth; this is about the distribution and how efficiently it's being delivered inside the network ‑‑ we have a general downward trend. Now, I can already see some of you saying: well, what the hell is this? Why is there a mountain in there, and why does traffic scale up by 15% here?

This, to say it mildly, was misconfiguration. At the first line here, we had a test bed trying to incorporate a subnet in DNS, which didn't quite work out the way we intended. We figured that out around here and switched back to the normal operation mode, expecting it to go down like this. By the way, this is about ten days before Christmas. We switched back, we turned everything off again, expecting to go here. The CDN also switched back, but unfortunately forgot to restore the old mapping, which means that not only did they not use their old mapping, they actually went back to ground zero, started everything anew and relearned everything. We figured that out shortly after Christmas, somewhere around here, and turned things back on, and since then it's on a downward slope. I know the last data here is already pretty much a month old, but this is basically the trend that we're seeing.

So, in other words, this is basically what's happening inside the network, and the effect that we're aiming for. On the top loaded links we want to reduce traffic, and where you have valleys, here, here and here, that's where you want to shift traffic to. You want to equalise the traffic inside your network, by just telling a CDN to please use different servers. We're not changing any routing or doing any BGP; all we're doing is calculating things and telling a CDN: please use different servers.

And with that I am pretty much done. It's fairly easy to use. We're very plugin‑based, so if there are specialities we need to handle, we can do that. I have talked about the benefits and the collaboration a lot. So with that I will finish here, and if you are interested in talking about this, either as an ISP or a CDN, please come talk to me, or talk to Oliver, who is sitting over there.

CHAIR: Thank you. Questions?

AUDIENCE SPEAKER: Patrick Gilmore, ARIN. I used to do a little work on CDNs once in a while. There is something that you didn't say, but I assume you already took it into account: not every CDN server serves the same content. There are some that are very homogeneous, some that are very much not, so giving a preference doesn't always mean traffic moves. Secondly, there has been at least one CDN, I'm sure many others, that has had a preference signalling mechanism for, you know, a decade plus. I don't suppose you'd be willing to say which CDNs you are testing this with? I didn't think so. All right, let's talk about it afterwards.

INGMAR POESE: Okay.

AUDIENCE SPEAKER: Warren Kumari, Google. I just want to point out your definition of optimal is not necessarily the same as my definition.

INGMAR POESE: That is perfectly true. We are not saying you should throw away your definition. What we're saying is: we are supplying you the data about what would be optimal for the network when you are doing the mapping. Since Google is peering directly with a lot of customers, yes, you still have your own server choice, and you have to look at load and everything. We're not telling you what to do with your network; we're telling you this would be optimal for us.

AUDIENCE SPEAKER: Randy Bush. It's all in f and c. The magic is under the covers.

INGMAR POESE: I didn't put the magic here. And the magic, or the math that you have to do, can be specific to the individual CDNs or the specific ISPs. That's why I didn't put the functions here. I can do a follow‑up about the functions we're using and how we are using them.

CHAIR: We are magically on time. Thank you.

(Applause)

IGNAS BAGDONAS: And to finish up the routing security festival of today, a lightning talk about insecurity, measuring routing insecurity.

ANDREI ROBACHEVSKY: Hello everyone. I work for the Internet Society. This is not scientific research; it's actually some work we're doing, and we are asking for your help, your feedback, and any pointers, too, if you know about research that has been done in this area.

So, you know about this MANRS project, right? One of the things that we want to do is measure: measure how members of this initiative stand against their commitments. That's one thing; the members want to measure this and improve the reputation of this effort.

Another thing is that when we talk about routing security, we often talk about it in terms of anecdotal evidence. We have in‑depth analysis of certain incidents, but in general, if we ask the question: is routing getting better, is it getting worse? Does what we do and discuss here have any impact on routing security? That's a very tough question to answer. At least I haven't seen those questions answered.

So if we can simplify routing security and come up with certain metrics, measure them over time and map them on the timeline, probably we can get some idea of where the whole routing system is going. And finally, and this is again related more to the MANRS project than anything else: when a member joins MANRS, we do some testing to verify their commitments are indeed implemented. But it's not automated, it's manual, and it's not done continuously. So this is another opportunity to do that.

So, how do we measure? This is very high level. One thing is that the measurements should be transparent, so we should use data that is available, and in principle anyone should be able to reproduce those measurements. That's transparency, so there are no questions about what's under the hood.

And they are passive: there is no cooperation required from a network, which allows those measurements to scale up.

Now, as I said, those measurements are not about routing security in general; they are related to what we call MANRS actions. There are four actions, and the next slides show the kind of metrics we devised that will indicate the readiness of a particular AS with regards to those actions.

So, the question is: what can we measure, taking into account that it should be transparent and passive? Unfortunately, at least from what I see, not very much, but at least something. If we talk about filtering, this action says you are preventing incorrect routing announcements from your customers and from your own network. By observing global routing, using data such as RIS, RouteViews and some other route collectors, we can look at route leaks, we can look at route hijacks, and we can see if the network announces bogons: bogon prefixes, bogon ASNs. So those we can probably measure.
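As a toy illustration of the bogon check mentioned above, observed announcements from a route collector could be compared against bogon prefix and ASN lists. The lists below are tiny examples (documentation and private-use ranges), not a complete bogon set.

```python
# Toy sketch: flag an announcement as a bogon if either the origin ASN
# is in a bogon ASN list or the prefix falls inside a bogon range.
import ipaddress

BOGON_PREFIXES = [ipaddress.ip_network(p) for p in
                  ("10.0.0.0/8", "192.168.0.0/16", "198.51.100.0/24")]
# Private-use ASN range plus AS0 and AS23456, as examples only.
BOGON_ASNS = set(range(64496, 64512)) | {0, 23456}

def is_bogon(prefix: str, origin_asn: int) -> bool:
    """True if the announcement looks like a bogon under the toy lists."""
    if origin_asn in BOGON_ASNS:
        return True
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(b) for b in BOGON_PREFIXES)

print(is_bogon("10.1.0.0/16", 64500))    # True: both prefix and ASN
print(is_bogon("193.0.0.0/21", 3333))    # False under these lists
```

A real measurement would use maintained bogon feeds and handle IPv6 separately, since `subnet_of` cannot compare across address families.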

If you look at anti‑spoofing, that's another action; it says you prevent spoofed traffic from your network and from your single‑homed stub customers. CAIDA has this database where they list culprits for which the tests indicate that they are spoofable, so we can look at this, and we can look at whether an AS originates this traffic or allows this traffic from their customers.

Well, coordination is simple: it's just checking that the contact information is there; it's not really routing security. And then there is the action aimed at populating global databases ‑‑ routing databases, RPKI bodies, the IRR ‑‑ with correct documentation of the policy. This is also relatively easy to check, and I will not focus on that part.

So, the real question here, even taking this limited set of metrics ‑‑ and I would like to concentrate on filtering more than anything else ‑‑ is how to calculate them. For instance, if we have this metric M2, which is routes hijacked by an autonomous system, how do we calculate it? Well, we can look at the impact, right; the impact probably depends on how many prefixes were hijacked, what the address span of those prefixes is, and what the duration was. But there are other questions. First of all, not all prefixes are equal, right? What is worse, hijacking a /8 or hijacking a /24? And which /24 are you hijacking? Those questions are very difficult to answer; you can't just measure that.

It's also hard to normalise and define thresholds, because duration can go on and on, and the address span can be very huge, so you get a very big number and you are comparing it maybe with a very small number, but again without understanding the impact very well.

Now, getting back to the original objective: we are actually not looking at the impact here, we are looking at conformity ‑‑ whether you actually implement those filters. And in this respect, it doesn't matter if you hijacked one prefix or two prefixes. What really matters is the distinct incidents: how many incidents you have in a certain period of time, and your resolution time. If you have fewer incidents, you are better; if your resolution time is shorter, that's also very good.

Now, events and incidents. Certain projects, like for instance BGPmon, show events, and in many cases those events are actually the result of one single misconfiguration. So, if you talk about incidents and not just events, how do you combine them? First of all, their weight. One suggestion is to weight depending on the distance from the culprit. If you have a very huge customer cone, and the culprit is an edge customer at a certain level of indirection, you have less control, and probably you shouldn't be penalised as much for those as for your direct customers. So, one suggestion is to have a dampening coefficient that reduces your penalising score depending on how many AS hops from you the incident happened.

Then there is combining those events into incidents; that's another thing. And then duration: defining certain thresholds, so that if an incident takes longer, you are penalised more. This is a picture of how those events are combined into incidents, how those incidents are weighted, and therefore how you can calculate this formula.
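One way the dampening and duration ideas above could be expressed is sketched below. The damping factor, the duration threshold, and the overall formula are all assumptions invented for illustration; the talk proposes the concepts, not these constants.

```python
# Illustrative scoring sketch: each incident's weight decays with the
# AS-hop distance to the culprit, and incidents lasting beyond a
# threshold are penalised extra. All constants are assumptions.

def incident_score(duration_hours, as_hops, damping=0.5,
                   duration_threshold=24.0):
    """Score one incident. as_hops=0 means it happened in your own
    network (full weight); each hop halves the weight."""
    weight = damping ** as_hops
    over = max(0.0, duration_hours - duration_threshold)
    penalty = 1.0 + over / duration_threshold
    return weight * penalty

def network_score(incidents):
    """Sum over (duration_hours, as_hops) incidents; lower is better."""
    return sum(incident_score(d, h) for d, h in incidents)

# A 2-hour incident in your own network plus a 48-hour incident two
# AS hops away:
print(network_score([(2, 0), (48, 2)]))   # 1.5
```

A short local incident (weight 1.0) can thus score more than a long one far down the cone (weight 0.25 doubled to 0.5), which matches the intuition that you have less control over indirect customers.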

So, that's it. That's my short lightning talk and any questions and feedback and pointers are very much appreciated.

CHAIR: Thank you. Any questions? No? Thank you.

(Applause)

IGNAS BAGDONAS: So, this brings the meeting to the end, with all of the routing security and some of the other topics discussed. Does anyone have any other comments, anything else? Well, one suggestion from the routing chairs on the potentially hot topics: please continue discussing them offline, and take advantage of the fact that you all are here; the bandwidth of a direct conversation might be higher than on the mailing list.

And with that, end of this meeting here and see all of you in Amsterdam.

(Coffee)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.