15:00:13 #startmeeting neutron_ipv6
15:00:14 Meeting started Tue Jan 27 15:00:13 2015 UTC and is due to finish in 60 minutes. The chair is sc68cal. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:17 The meeting name has been set to 'neutron_ipv6'
15:01:44 Kilo-2 milestone is coming up quickly
15:02:37 xuhanp: how goes your dhcpv6 opts review?
15:03:01 sc68cal, pretty great. Got a lot of comments, thanks to everyone
15:03:51 o/
15:03:57 Hi
15:04:10 xuhanp: cool, let me know if there is anything we can do to help
15:04:20 s/me/us
15:04:48 sc68cal, will do. thanks
15:05:44 I'll open the floor - anyone have anything they'd like to discuss?
15:06:14 lately I've been considering writing some tests for ra.py, and I tried to split the tests for that module out of test_l3_agent first. It was not trivial, and I ended up with this series: https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:master+topic:split_ra_unit_tests,n,z
15:06:45 I would like people to check whether this is something we should pursue (I guess it corresponds to a blueprint we have to restructure unit tests)
15:07:52 ihrachyshka: thanks for the heads up - I'll take a look. I like the concept already
15:08:10 the main one (and the last one) is https://review.openstack.org/#/c/150066/
15:08:18 all the previous patches are just there to make it possible
15:08:28 (which says a lot about our tests)
15:09:12 yeah, the l3 agent is going through some big changes, for the better.
15:10:23 ihrachyshka: will take a look sometime.
15:10:53 thanks guys
15:11:09 ihrachyshka: I saw your thread on the ML regarding sanity checks for the dnsmasq version
15:11:24 I was mainly interested in covering the module with tests to push for https://review.openstack.org/148017 (also worth checking whether that's the correct path)
15:11:43 sc68cal, yeah, I hope to kill that check asap since centos7+kilo is broken
15:11:49 https://review.openstack.org/#/c/150066/
15:11:50 but it's not strictly ipv6 :)
15:11:51 doh
15:11:57 http://lists.openstack.org/pipermail/openstack-dev/2015-January/053909.html
15:12:14 for those interested, https://review.openstack.org/148577
15:13:41 I get a little worried about them backporting stuff from 2.67 to 2.66 - but hey, if that's what they want to do ...
15:13:52 I wish them luck
15:14:07 sc68cal, well, that's Red Hat
15:14:21 we are pros at those kinds of perversions
15:14:27 ihrachyshka: lol yep :)
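
(For context: the sanity check discussed above verifies the installed dnsmasq version before the DHCP agent relies on DHCPv6 features. Below is a minimal stand-alone sketch of that kind of check; the minimum version constant and the parsing are illustrative assumptions, not Neutron's actual code. The ML thread's point is that distro backports, e.g. 2.67 features landing in a 2.66 package, make this sort of version sniffing unreliable.)

    import re
    import subprocess

    MIN_DNSMASQ_VERSION = (2, 67)  # illustrative floor, not Neutron's constant

    def dnsmasq_version_supported():
        # Ask the local binary for its version banner, e.g.
        # "Dnsmasq version 2.67  Copyright (c) 2000-2013 Simon Kelley"
        try:
            out = subprocess.check_output(['dnsmasq', '--version'],
                                          universal_newlines=True)
        except (OSError, subprocess.CalledProcessError):
            return False
        match = re.search(r'version (\d+)\.(\d+)', out)
        if not match:
            return False
        found = (int(match.group(1)), int(match.group(2)))
        return found >= MIN_DNSMASQ_VERSION

A check like this only sees the reported version number, which is exactly why a distro backporting features without bumping the version breaks it.
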
15:15:50 also of IPv6 note, the Juno freeze is approaching
15:15:58 and I have some ipv6 backports: https://review.openstack.org/#/q/status:open+project:openstack/neutron+branch:stable/juno+topic:bug/1358709,n,z
15:16:48 ihrachyshka: perfect, I'll take a look
15:16:50 one of those was discussed at the previous meeting, and we seemed to decide not to push for a backport, but I've thought about it since then and believe it's actually backportable, since the current behaviour is plain wrong.
15:16:55 so, again, comments
15:16:56 :)
15:18:18 well, the original commit was filed as a bug... ;)
15:21:10 dane_leblanc: how go your patches?
15:21:20 I am in favour of those backports
15:21:40 sc68cal: Fairly well, thanks. Got some reviews from baoli and Xu Han, thanks
15:22:08 The first patch is failing some Tempest tests; I think I know the root causes, but the fix is tricky
15:23:24 I was going to ask: how do we fix Tempest tests when the change in Neutron is not merged?
15:24:00 it's kind of a chicken-and-egg problem
15:24:01 xuhanp: The problems that I see so far are fixable within the patchset
15:24:13 xuhanp: as far as I can tell.
15:24:16 xuhanp, oh, have you heard about the cross-repo deps patch for Zuul?
15:24:31 https://review.openstack.org/#/c/144555/
15:24:46 ihrachyshka, good to hear that! Thanks
15:24:48 it should bring us to Pony Land
15:24:53 excellent
15:24:55 will take a look
15:25:18 One of the problems that I see is that for SLAAC, we assume that the addresses are available upon subnet create (and we make associations for the ports on subnet create)
15:25:19 that will be very useful
15:25:50 But the addresses aren't really available until radvd is spawned, and I think that happens when there's a router interface add
15:26:32 dane_leblanc, I think this was discussed in your spec review. But I cannot remember the conclusion.
15:26:35 So we may be making assumptions and displaying associations with SLAAC addresses which aren't available at the time.
15:27:05 xuhanp: I don't think I nailed it 100% in the spec.
15:27:14 dane_leblanc: true, but those are available once radvd is live. It's not like they have to be populated in dnsmasq hosts or anything
15:27:36 since the address is tied to the port's MAC
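
(For context on "the address is tied to the port's MAC": with SLAAC, the interface identifier is derived from the MAC via EUI-64 (RFC 4291, Appendix A), so the address is computable as soon as the port and the /64 prefix exist, before radvd ever sends an RA. A minimal stand-alone sketch of the derivation follows; Neutron's own implementation goes through netaddr, so this is illustrative, not its actual code.)

    import ipaddress

    def slaac_address(prefix, mac):
        """Compute the SLAAC address for a MAC within a /64 prefix."""
        net = ipaddress.IPv6Network(prefix)
        octets = [int(b, 16) for b in mac.split(':')]
        # EUI-64: insert 0xfffe in the middle of the MAC and flip the
        # universal/local bit of the first octet.
        eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
        eui64[0] ^= 0x02
        iid = int.from_bytes(bytes(eui64), 'big')
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    # The address is predictable as soon as the port exists, whether or
    # not radvd has been spawned yet:
    print(slaac_address('2001:db8::/64', 'fa:16:3e:aa:bb:cc'))
    # -> 2001:db8::f816:3eff:feaa:bbcc

This determinism is why show ports can list the association at subnet-create time, which is exactly the visibility question debated below.
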
15:27:57 dane_leblanc, what's the real problem? can we just avoid exposing those addresses if the subnet is isolated?
15:28:09 aveiga: Right, the addresses are actually available when radvd is live, but if we do a show ports before radvd is live, we'll show associations on the ports for the SLAAC addresses
15:29:04 I think we just need to roll with it though, since it isn't always radvd sending the RA
15:29:05 Let's say someone creates a SLAAC subnet, but doesn't do a router interface add to spawn radvd
15:29:45 aveiga: I think we can distinguish the source of the RA based on ra_mode
15:30:19 So before the router interface add is done, show ports will show associations with SLAAC addresses that aren't really there
15:30:19 dane_leblanc, but the IPs are still available for use; it's just that there is nothing that would automate it. Same story for subnets with DHCP disabled.
15:30:42 If I understand correctly, this is not really a new thing - there is a gap between when a port is assigned an IPv4 address and when the client actually does a DHCP request
15:30:52 in theory, something inside the instance may set up addressing in an alternative way
15:31:01 also true
15:31:12 if you were using metadata, you could still use that assigned address
15:31:38 ihrachyshka: for that scenario, ra_mode would be None and address_mode would be slaac
15:31:48 not necessarily
15:31:53 Meaning, don't use OpenStack's radvd
15:31:59 you might want the RAs there for routing purposes
15:32:09 but make your address stable
15:32:12 how about 'don't use OpenStack's radvd for now'?
15:32:45 dane_leblanc: as long as the 'mode' says it's slaac, an RA will run somewhere eventually, right?
15:32:59 baoli: Eventually, yes
15:33:22 I'm concerned about debuggability
15:33:58 it should be easy to figure out - if a guest isn't getting a SLAAC address, that means radvd is busted
15:34:00 In the interval between when we're assuming SLAAC addresses are available and when they're actually available
15:34:17 when the modes are slaac/slaac
15:35:19 sc68cal: That's true, the guest would show whether the address is assigned or not, although an openstack show ports might show something different
15:36:09 I don't see how this requires special behaviour in comparison to DHCP disabled. show ports does not show the active addresses of an instance, but the IP addresses available for use on the port.
15:36:38 this is not divergent behavior
15:36:41 dane_leblanc: maybe a CLI/API to show what radvd is currently doing would help
15:36:44 I think it's fine
15:36:58 I think we can do a little hand-waving and say that Neutron always displays the desired behavior
15:37:38 rather than digging it up manually.
15:37:53 it's eventual consistency, dude. someday the instance will set the address.
15:38:35 Okay, sounds like I'm outnumbered. :) If the expectation is there that show ports might not be 100% real, then it's okay.
15:38:57 dane_leblanc: we're just trying to cut down on your workload :)
15:39:03 lol
15:39:11 sc68cal: So am I!
15:39:38 sc68cal: This will determine a fix I need to make to get the Tempest tests working
15:39:50 we should print a motto - "We'll fix it in P"
15:40:02 sc68cal: Ha!
15:40:10 sc68cal: s/P/Current+1/g
15:40:40 then we'll cross off P and write in new ones underneath
15:40:52 Thanks for discussing this, good input.
15:45:17 Speaking of radvd reminds me to share my observations about radvd in an HA setup :-)
15:45:27 I've seen that in an HA setup, radvd is spawned on all the HA routers (irrespective of master/backup).
15:45:39 This was creating issues, as the traffic from the standby/backup routers was causing the switch ports to reconfigure their IP/MAC port info, since the interfaces still had the LLA.
15:45:48 Thanks to Assaf's patch [https://review.openstack.org/#/c/142843/], which is now merged and addresses this issue.
15:46:24 I remember that we had a discussion earlier on the need for gratuitous ARP for IPv6 subnets
15:46:25 SridharG: was it sending RAs without a priority?
15:46:45 that's actually a valid config if the active router was a higher priority than the passive one
15:46:51 because it would lower the failover time
15:47:18 aveiga: I've not checked the priority of the RAs
15:47:45 but AFAIK we do not change the priority of the radvd messages. So it would be exactly the same RA on all the HA router instances.
15:47:55 so that's the bug
15:47:56 what I actually wanted to convey is that...
15:48:10 running radvd on all instances is valid, but not if they're the same priority
15:48:10 while discussing GARP, some of you mentioned that it is not required, as RAs would serve the same purpose..
15:48:16 and this seems to be true.
15:49:10 aveiga: even though radvd is running, Assaf's patch would remove the LLA from the qr-xyz iface.
15:49:10 aveiga: I think the problem was that the HA routers were using the same MAC address
15:49:25 so the RA would not be seen by the instances/VMs.
15:49:26 oh, that's not good
15:49:42 a follow-up patch could add a priority to the RA packets and use different LLAs
15:50:09 yeah, I think we still need to figure out the multiple LLA thing
15:50:20 that was still up in the air if I remember correctly
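
(For context on the follow-up idea: radvd exposes an RA router preference via its AdvDefaultPreference option (RFC 4191), so a master could advertise "high" while backups advertise "low". A hypothetical sketch of rendering such a per-router config follows; the template, interface name, and timer values are illustrative, not from any proposed patch.)

    # Render a radvd.conf fragment per HA router, with the RA priority
    # chosen by VRRP role. AdvDefaultPreference is radvd's RFC 4191
    # router-preference knob; everything else here is an assumption.
    RADVD_CONF_TMPL = """interface %(iface)s
    {
        AdvSendAdvert on;
        MinRtrAdvInterval 3;
        MaxRtrAdvInterval 10;
        AdvDefaultPreference %(preference)s;
        prefix %(prefix)s
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };
    """

    def render_radvd_conf(iface, prefix, is_master):
        # Master advertises 'high', backups 'low', so hosts prefer the
        # active router while still learning a fallback route.
        preference = 'high' if is_master else 'low'
        return RADVD_CONF_TMPL % {'iface': iface, 'prefix': prefix,
                                  'preference': preference}

    print(render_radvd_conf('qr-1234abcd', '2001:db8::/64', is_master=True))

As the discussion notes, priorities alone would not fix the shared-LLA/MAC problem; the backups' RAs would still appear to come from the same router, hence the open "multiple LLA" question.
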
15:51:16 yes sc68cal, the IP/MAC for the qg-xyz/qr-xyz interfaces would be the same on all the HA routers.
15:52:03 SridharG: I think the bigger question is which of the HA methods we want to support
15:52:16 you can do LLA spoofing, similar to a VRRP mode
15:52:28 or multiple RAs from different LLAs with different priorities
15:52:43 aveiga: I'm talking about HA with keepalived (VRRP)
15:53:04 is neutron managing keepalived?
15:53:09 yes
15:53:19 then I guess we'd better fix that setup :(
15:54:10 neutron spawns keepalived, but it uses scripts to get notifications when there is a failover.
15:54:27 so I'm not sure if it's that straightforward.
15:56:21 sounds like a summit session proposal ;)
15:56:57 lol
15:57:02 +1
15:57:09 or at least a pod session
15:58:16 looks like we're nearly out of time
15:58:41 AFAIK neutron is not notified of a failover; only the agents have that info
16:00:27 I agree with Rajeev, there seems to be some pending work to identify the active HA router.
16:00:34 Take care everyone, see you all next week
16:00:41 #endmeeting