[Zanog-discuss] Interesting article about the Cloudflare incident
mark.tinka at seacom.mu
Sat Jul 6 17:22:03 SAST 2019
On 6/Jul/19 09:24, Ben Maddison via zanog-discuss wrote:
> Doesn't that mean that in this example:
>            37100
>           /     \
>     customer   customer
>         /         \
>     65000 ------- 65001
> If the link between 37100 and 65001 is down (for maintenance, etc.), you'll filter
> 2001:db8::/32 from 65000, and therefore it will become unreachable
> inside 37100? That doesn't seem like a desirable situation to me.
We would still reach 65001, but via a higher-latency path.
> Yup, get that. In fact, one less obvious reason that we are *so* strict
> when building customer-facing filters (we require an exact match to a
> route(6) object, not just a covering one), is that we can be relatively
> certain that if a route is accepted by us, then it will be accepted by
> pretty much anyone else that filters on IRR data too.
> This makes troubleshooting a little easier, and means that we rarely
> need to ask customers to add stuff after the fact.
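The difference between the strict policy described above (exact match to a route(6) object) and a looser covering-object policy can be sketched as follows. This is a minimal illustration, not anyone's actual tooling; the AS number and route6 objects are invented for the example.

```python
# Sketch: strict exact-match IRR filtering vs. accepting any covering
# route6 object. All data here is illustrative.
import ipaddress

# Hypothetical route6 objects registered by the customer in the IRR,
# as (prefix, origin AS) pairs.
irr_route6_objects = {
    ("2001:db8::/32", 65001),
    ("2001:db8:1000::/36", 65001),
}

def accepted_strict(prefix: str, origin_as: int) -> bool:
    """Accept only if a route6 object matches the announcement exactly."""
    return (prefix, origin_as) in irr_route6_objects

def accepted_covering(prefix: str, origin_as: int) -> bool:
    """Looser policy: accept if any object with the same origin covers it."""
    net = ipaddress.ip_network(prefix)
    return any(
        asn == origin_as and net.subnet_of(ipaddress.ip_network(obj))
        for obj, asn in irr_route6_objects
    )

# An exact match passes both policies:
print(accepted_strict("2001:db8::/32", 65001))        # True
# A more-specific with no object of its own passes only the looser policy:
print(accepted_strict("2001:db8:2000::/36", 65001))   # False
print(accepted_covering("2001:db8:2000::/36", 65001))  # True
```

The strict variant is what makes a route predictably acceptable to any other network filtering on the same IRR data.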
Totally understand that. Of course, it means you'd have a fairly large
router configuration, which may or may not be a bad thing.
We considered that option, and for us, accepting announcements from
within a customer's block at up to a /24 (IPv4) or /48 (IPv6) is not
unreasonable because:
* For a number of networks that don't use IRR within and outside
Africa, the forwarding behaviour as intended by the originating
network will be fulfilled.
* Customers whose announcements to us do not match their IRR data
will be hurting only themselves. While we put warm bodies on this
every month to reconcile inconsistencies, it's really
in the customers' hands to make sure they remain diligent. We can
find issues and help customers fix them, but we'll always be at
least one step behind as we can only dedicate a certain amount of
resources toward this.
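The "customer's block at up to a /24 or /48" compromise above can be expressed as a simple check: accept any announcement that falls inside the customer's registered block, as long as it is no more specific than /24 (IPv4) or /48 (IPv6). This is a hedged sketch with made-up example prefixes, not a production filter.

```python
# Sketch of a covering-block policy with max-length limits.
# The prefixes below are documentation ranges used for illustration.
import ipaddress

MAX_LEN = {4: 24, 6: 48}  # longest prefix accepted per address family

def within_policy(customer_block: str, announced: str) -> bool:
    """Accept announcements inside the block, up to /24 or /48."""
    block = ipaddress.ip_network(customer_block)
    pfx = ipaddress.ip_network(announced)
    return (
        pfx.version == block.version
        and pfx.subnet_of(block)
        and pfx.prefixlen <= MAX_LEN[pfx.version]
    )

print(within_policy("198.51.100.0/22", "198.51.100.0/24"))  # True
print(within_policy("198.51.100.0/22", "198.51.100.0/25"))  # False: too long
print(within_policy("2001:db8::/32", "2001:db8:a::/48"))    # True
```

This keeps the router configuration small (one filter entry per customer block) at the cost of accepting more-specifics that may not have their own IRR objects.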
For us, it's a compromise between security and admin. Of course, once
RPKI is mainstream, this problem starts to fall away.
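For readers less familiar with why RPKI makes this problem fall away: origin validation compares an announcement against signed ROAs rather than loosely maintained IRR objects. A minimal sketch of the valid/invalid/not-found logic, with an invented ROA set:

```python
# Minimal sketch of RPKI route-origin validation semantics.
# The ROA data is invented for illustration.
import ipaddress

# (prefix, maxLength, origin AS) as published in ROAs (illustrative)
roas = [
    ("2001:db8::/32", 48, 65001),
]

def rov_state(prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'not-found' for an announcement."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_pfx, max_len, roa_as in roas:
        roa_net = ipaddress.ip_network(roa_pfx)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # some ROA covers this prefix
            if roa_as == origin_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov_state("2001:db8::/32", 65001))   # valid
print(rov_state("2001:db8::/32", 64496))   # invalid: wrong origin
print(rov_state("203.0.113.0/24", 65001))  # not-found: no covering ROA
```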
> I'd suggest that you can probably just tell them to use AFRINIC at this
> point. I spent a couple of years telling people that operate mirrors to
> ensure that they have that data.
> At this point, I believe that every important mirror has the AFRINIC data.
> The downside to using multiple databases is that if they get out of
> sync, the results returned from a mirror will depend on the ordering of
> the config file of that mirror - which is tricky to diagnose in the
> event of issues.
Agreed. AFRINIC is now widely supported compared to as little as 4 years ago.
We (SEACOM) have to maintain an RADB membership because we often serve
many customers who use IP addresses that belong to non-AFRINIC regions.
We have come across some networks outside Africa that do not query
AFRINIC but will query RADB, hence the membership.
> My experience is that NOC engineers are not particularly familiar with
> the internals of IRRd!
Yes, not the gory inner workings.
On our side, our Provisioning and NOC teams know what an IRR needs to
get a customer's routing sorted, which helps.
> For a while, I've been toying with the idea of making a view of which
> prefixes are being filtered per peer available via our LG.
> Would this functionality be useful for that team?
> If yes, would it be acceptable for this to be anonymously accessible
> like the other LG output? Or would you consider this sensitive enough
> to live behind an auth wall?
> (Not just a question for MT, I'd like the views of others too pls?)
I think if any network can provide this view from their looking glass,
it is always a good thing. So yes, if you can afford to, it would be useful.