15:01:15 <carl_baldwin> #startmeeting neutron_l3
15:01:16 <openstack> Meeting started Thu Mar 5 15:01:15 2015 UTC and is due to finish in 60 minutes. The chair is carl_baldwin. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:20 <openstack> The meeting name has been set to 'neutron_l3'
15:01:31 <carl_baldwin> #topic Announcements
15:01:40 <carl_baldwin> #link https://wiki.openstack.org/wiki/Meetings/Neutron-L3-Subteam
15:01:52 <carl_baldwin> Just two weeks until Kilo-3. It is flying by, isn’t it?
15:02:14 <carl_baldwin> #topic Bugs
15:02:26 <carl_baldwin> Any new bugs to be aware of?
15:03:21 <haleyb> https://bugs.launchpad.net/neutron/+bug/1428305 :)
15:03:24 <openstack> Launchpad bug 1428305 in neutron "Floating IP namespace not created when DVR enabled" [Undecided,New] - Assigned to Brian Haley (brian-haley)
15:03:44 <haleyb> but i've identified two patches that fix it
15:04:10 <carl_baldwin> haleyb: Good progress. Are the patches up?
15:04:23 <haleyb> they're cisco's ipv6 patches
15:05:07 <carl_baldwin> haleyb: You confirmed that it is only with ipv6 enabled?
15:05:30 <carl_baldwin> haleyb: How much work will it be to write tests for the issue?
15:05:48 <haleyb> yes, with ipv6 and dvr, so it needs multiple things turned on
15:06:21 <haleyb> i know nothing about the dvr job, but perhaps we should enable ipv6 with it eventually
15:06:40 <salv-orlando> carl_baldwin, haleyb: that should be a test executed as part of the dvr job. But I wonder: if we isolate the failure mode correctly, would it be possible to add a functional test for this?
15:07:10 <salv-orlando> I don't think we explicitly need to enable ipv6 on a job. We need to add an ipv6-specific test.
15:07:41 <HenryG> haleyb: note the ipv6 patches are community work, not cisco's :)
15:08:17 <salv-orlando> haleyb: from the bug report it pretty much seems dvr is totally broken when using ipv6?
15:08:28 <carl_baldwin> salv-orlando: Good point. I think even having ipv6 enabled in the job might’ve caught this.
15:09:15 <haleyb> salv-orlando: floating IPs are what's broken, as the fip- namespace is never created; for ipv6 to work we need all those "community" patches :)
15:09:44 <haleyb> we can probably create a functional test to add two subnets and see if the namespace is created
15:10:27 <salv-orlando> haleyb: that sounds reasonable. But in order to repro the issue, all you have to do is attach a v4 floating IP on a DVR linking it to a v6 internal address?
15:10:29 <HenryG> I think sc68cal is working towards making 4+6 the default in devstack (and hence all jobs)
15:11:03 <sc68cal> yup
15:11:28 <salv-orlando> sc68cal: basically by making the "private" network v6?
15:11:38 <haleyb> salv-orlando: the VM is dual-stack; the problem is the l3-agent doesn't deal with multiple subnets on the gateway port properly
15:12:04 <sc68cal> salv-orlando: the private network will have an RFC 1918 IPv4 subnet associated, and an IPv6 subnet from the ULA prefix
15:12:21 <carl_baldwin> salv-orlando: IOW, I don’t think we even need ipv6 addresses involved to break ipv4. Just enabling it for the stack breaks it.
15:12:54 <sc68cal> https://bugs.launchpad.net/neutron/+bug/1401728
15:12:56 <openstack> Launchpad bug 1401728 in neutron "Routing updates lost when multiple IPs attached to router" [Medium,In progress] - Assigned to Sean M. Collins (scollins)
15:12:58 <salv-orlando> carl_baldwin: I am a bit lost about what you mean by "enabling it for the stack"
15:13:11 <salv-orlando> thanks sc68cal, that's a really good thing to do
15:13:21 <carl_baldwin> salv-orlando: Just turning on ipv6 in devstack.
15:13:43 <carl_baldwin> haleyb: Correct me if I’m wrong.
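The gateway-port problem haleyb describes can be pictured with a simplified model: if the subnet map is built on the assumption of one fixed IP per port, a dual-stack port gets skipped entirely and later lookups fail. This is a hypothetical sketch with invented dict shapes, not the actual neutron DB or l3-agent code:

```python
# Toy model of the failure mode: a mapping of subnet_id -> port id that
# is built assuming one fixed IP per port. The dict shapes here are
# invented for illustration and do not match neutron's real structures.

def build_subnet_map_buggy(ports):
    mapping = {}
    for port in ports:
        # Ports with more than one fixed IP (e.g. dual-stack v4+v6
        # gateway ports) are silently skipped, so their subnets never
        # get an entry and the agent later hits a missing key.
        if len(port["fixed_ips"]) != 1:
            continue
        mapping[port["fixed_ips"][0]["subnet_id"]] = port["id"]
    return mapping

def build_subnet_map_fixed(ports):
    mapping = {}
    for port in ports:
        # Handle every subnet on the port, however many there are.
        for fixed_ip in port["fixed_ips"]:
            mapping[fixed_ip["subnet_id"]] = port["id"]
    return mapping

gw_port = {"id": "gw-1",
           "fixed_ips": [{"subnet_id": "v4-subnet"},
                         {"subnet_id": "v6-subnet"}]}
```

On a dual-stack gateway port the buggy builder returns an empty map, while the per-fixed-IP version yields an entry for both the v4 and the v6 subnet.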
15:13:55 <sc68cal> What happens is the code in the L3 agent gets bad data from the DB layer; we have some code in the db layer to skip ports that have more than one subnet associated with the port
15:14:20 <sc68cal> and it ends up not creating a key in the dictionary the L3 agent accesses, and the whole thing goes kablooowey
15:14:33 <salv-orlando> carl_baldwin: ok, I did not recall we needed to explicitly enable it for devstack. I'm trying to understand if this issue affects not only IPv6 users, but also users who build external ipv4 networks using multiple non-contiguous ranges as subnets
15:15:17 <sc68cal> salv-orlando: it's possible it does
15:15:26 <sc68cal> *does affect
15:15:36 <carl_baldwin> salv-orlando: Interesting. My thinking is that it would not, because multiple subnets are handled with a different structure, not multiple addresses on each port.
15:15:37 <haleyb> carl_baldwin: do you mean enabling ipv6 but not on that network? i think sc68cal has it right wrt multiple subnets
15:16:09 <sc68cal> whoops, carl_baldwin is right, it's multiple addresses
15:16:15 <salv-orlando> carl_baldwin: good point.
15:16:51 <carl_baldwin> salv-orlando: There is an “extra_subnets” field (or something like that)
15:17:41 <carl_baldwin> To summarize, we need to turn attention to these ipv6 patches mentioned in the bug report. We also need to work out details for how to test this and/or turn attention to sc68cal’s effort to enable ipv6 in devstack.
15:18:14 <carl_baldwin> Anyway, good work haleyb and others on this bug.
15:18:21 <carl_baldwin> Any other bugs?
15:19:19 <carl_baldwin> #topic L3 Agent Restructuring
15:19:50 <carl_baldwin> mlavalle: We got yours merged.
15:20:06 <mlavalle> carl_baldwin: yeap, I saw that.... thanks for the help :-)
15:20:16 <carl_baldwin> I’ve got a couple of small patches lined up next and then a couple of medium-sized ones.
15:20:58 <pc_m> Please review https://review.openstack.org/#/c/160983/, which fixes the VPN UTs as a result of 147744 merging.
15:21:07 <carl_baldwin> But we’re getting to the end of the line. I looked yesterday and saw just under 40 non-abandoned patches on this topic. Most have merged.
15:21:10 <mlavalle> carl_baldwin: I am also adding a functional test for the namespaces manager. I will push it tonight or tomorrow
15:21:29 <carl_baldwin> #action carl_baldwin will be sure to review https://review.openstack.org/#/c/160983/ today for pc_m
15:21:41 <carl_baldwin> ^ Encourage others to review too.
15:21:44 <pc_m> Also, I have https://review.openstack.org/#/c/160179/ to separate the VPN device driver from the L3 agent.
15:22:04 <carl_baldwin> pc_m: Ready to go now?
15:22:08 <pc_m> carl_baldwin: Thanks, need review of both of these.
15:22:26 <pc_m> carl_baldwin: Yes, in fact, I did the part 2 changes as well, so this is the whole thing.
15:22:35 <carl_baldwin> pc_m: Thanks.
15:22:48 <pc_m> I made it dependent on the bug fix, so it should pass Jenkins.
15:23:15 <pc_m> 160179 depends on 160983, which was waiting on 147741... whew!
15:23:34 <carl_baldwin> pc_m: :)
15:23:57 <carl_baldwin> amuller do you have anything?
15:25:01 <carl_baldwin> #topic neutron-ipam
15:25:35 <carl_baldwin> I can’t believe that we have only 2 weeks left.
15:26:21 <carl_baldwin> salv-orlando: pavel_bondar: johnbelamaric: tidwellr: How are things going?
15:26:37 <salv-orlando> making progress, even if not as fast as I'd like
15:26:50 <tidwellr> same for me
15:27:06 <salv-orlando> I've addressed all comments, and completed the subnet allocation part. I need to finish the address pool allocation bits.
15:27:13 <salv-orlando> And then unit tests - and this bit is done.
15:27:20 <salv-orlando> Then we'll have to add functional tests
15:27:38 <johnbelamaric> pavel_bondar: what remains on the db base refactor?
15:28:01 <salv-orlando> but before the functional tests we'll need to integrate the reference driver with pavel_bondar's refactoring
15:28:02 <pavel_bondar> I am no longer marking the re-factored db_base as WIP, so feel free to review #link https://review.openstack.org/#/c/153236/ , however there is still some amount of work in the todo list
15:28:26 <pavel_bondar> It would be nice if we could start integration testing soon
15:29:24 <pavel_bondar> right now, I have only minor things left to do: improve tests, add rollback for delete_subnet, and probably something else I've missed
15:30:21 <carl_baldwin> salv-orlando: pavel_bondar: Sounds like we need a plan of attack for integration and testing.
15:31:11 <salv-orlando> for testing, the strategy is to have a non-voting job, similar to dvr
15:31:12 <carl_baldwin> Would it help to meet at another time to discuss this in more depth?
15:31:26 <salv-orlando> carl_baldwin: as long as it's IRC it's fine for me.
15:31:41 <carl_baldwin> We can also discuss here; I didn’t mean to imply that is not an option.
15:31:55 <carl_baldwin> IRC is fine for me either way.
15:32:05 <pavel_bondar> ok for me too
15:33:00 <pavel_bondar> once the reference driver is ready, I can rebase on it and turn on the new IPAM by default, so we will see how many tests are broken in jenkins
15:33:26 <salv-orlando> carl_baldwin: it was just because IRC allows me to have meetings also very late in the night. As I don't have to speak, I won't wake up anybody
15:33:27 <carl_baldwin> pavel_bondar: That sounds like a reasonable way to start.
15:34:25 <carl_baldwin> salv-orlando: We’ll strive to be considerate of your time.
15:35:29 <carl_baldwin> I’m inclined to let salv-orlando stay focused on the remaining address pool allocation bits and set up another time to discuss our testing strategy, maybe sometime next week.
15:35:44 <salv-orlando> even Monday will work for me
15:35:58 <carl_baldwin> salv-orlando: That could work. I’ll have to check to be sure.
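Turning on the new IPAM by default, as pavel_bondar proposes, would presumably hinge on a neutron.conf option selecting the pluggable driver. The fragment below is a sketch based on the pluggable-IPAM work under review at the time; both the option name and its value should be treated as assumptions, not the patch's final interface:

```ini
# neutron.conf fragment -- hypothetical sketch; option name and value
# are assumptions based on the in-flight pluggable-IPAM refactor.
[DEFAULT]
# Select the reference IPAM driver so the new code path is exercised
# while preserving the existing allocation behaviour.
ipam_driver = internal
```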
15:36:37 <carl_baldwin> #action carl_baldwin will find a time we can meet next week about testing IPAM
15:38:26 <carl_baldwin> tidwellr: Anything to discuss on subnet allocation?
15:38:59 <carl_baldwin> We have a good start on tempest tests
15:39:01 <carl_baldwin> #link https://review.openstack.org/159644
15:39:06 <tidwellr> some thoughts on the semantics of updates to the prefix of a pool would be appreciated
15:39:17 <carl_baldwin> Also, on python-neutronclient
15:39:20 <carl_baldwin> #link https://review.openstack.org/159618
15:39:33 <tidwellr> yes, I've seen those
15:40:00 <carl_baldwin> tidwellr: Do you want those thoughts in the review?
15:40:43 <tidwellr> review, ML, "offline" in IRC works as well
15:41:33 <tidwellr> quotas is something I'm starting to worry about a little
15:42:03 <tidwellr> in the spec it was mentioned that the built-in quota mechanism might not be sufficient for subnet pools
15:42:04 <salv-orlando> are quotas the thing that we had so much fun with back in Utah?
15:42:13 <carl_baldwin> tidwellr: Wow, quotas. I have managed not to think about quotas much at all.
15:42:35 <carl_baldwin> salv-orlando: Yes, we had some fun with that.
15:42:36 <tidwellr> yeah, la la la, what quotas? :)
15:43:27 <carl_baldwin> tidwellr: Well, first things first. Let’s talk about updating the pool a bit.
15:44:16 <salv-orlando> do we really want to do that? management-wise it is difficult, we know that
15:44:29 <salv-orlando> but is there a use case that justifies adding this complexity?
15:44:31 <carl_baldwin> tidwellr: I’m fine if updating a pool completely ignored current allocations. What do others think?
15:44:42 <salv-orlando> I think pools can be extended over time
15:44:49 <salv-orlando> so that can be a use case...
15:44:58 <tidwellr> carl_baldwin: I was going to propose we not support updates at this time, too complex
15:45:32 <carl_baldwin> tidwellr: What is the complexity?
15:45:44 <salv-orlando> in principle I am fine with detaching current allocations from the prefix. But on the other hand, how should we handle the situation where one adds a prefix to a pool for which allocations exist in another pool?
15:46:17 <carl_baldwin> tidwellr: Would it be simpler to support only updates that extend the pool but do not take away from it, at first?
15:47:15 <tidwellr> carl_baldwin: I suppose adding prefixes is not really a problem. Removing prefixes, dealing with existing allocations, compacting the prefix list, that gets messy
15:47:18 <carl_baldwin> salv-orlando: That is an interesting thought.
15:48:51 <tidwellr> salv-orlando carl_baldwin: prefixes don't have to be unique across pools, is that even a problem?
15:48:58 <carl_baldwin> salv-orlando: at one point, I was thinking that — at least initially — we’d only allow one pool per address scope. So, if two pools overlapped, it would be assumed that the addresses are in different scopes and that would be okay.
15:49:24 <tidwellr> +1
15:49:42 <salv-orlando> carl_baldwin, tidwellr: I reckon in the design the uniqueness of addresses across pools was configurable.
15:49:47 <carl_baldwin> salv-orlando: tidwellr: With ipv4, yes. With ipv6 they *should* be unique globally. However, I wasn’t thinking that we would deal with enforcing it.
15:50:06 <salv-orlando> carl_baldwin: but perhaps we don't even need to do it now
15:50:11 <carl_baldwin> salv-orlando: The configurability was for addresses *within* one pool.
15:50:22 <salv-orlando> carl_baldwin: ok
15:50:36 <carl_baldwin> salv-orlando: You know how current Neutron allows overlap all over the place. :)
15:51:26 <tidwellr> carl_baldwin: we can easily support extending the prefix pool, so I'll go that route
15:51:30 <salv-orlando> ok, let's say one pool per address scope?
15:51:44 <carl_baldwin> salv-orlando: Yes.
15:51:45 <tidwellr> salv-orlando: yes
15:52:15 <carl_baldwin> I think the most important use case to support will be extending the pool. tidwellr: does that make it any easier?
15:52:39 <tidwellr> yes, I had a patch set earlier that supported this
15:53:53 <tidwellr> we can chat later about quotas, I don't want to take any more time
15:54:42 <carl_baldwin> tidwellr: Could you be sure to note any restrictions in the commit message so that we and the documentation writers can easily figure out how to present all of this?
15:55:05 <carl_baldwin> tidwellr: Fair enough. Though we may not want to wait until next week to talk about quotas.
15:55:21 <tidwellr> I meant today or tomorrow :)
15:55:47 <tidwellr> better commit message coming
15:55:55 <carl_baldwin> tidwellr: Thanks.
15:56:00 <carl_baldwin> Anything else on IPAM?
15:57:11 <carl_baldwin> #topic Open Discussion
15:58:21 <mrsmith> I have a fip ns delay patch that may help with some of the intermittent failures of the dvr job
15:58:39 <mrsmith> https://review.openstack.org/#/c/151758/
15:58:45 <mrsmith> it's still a WIP
15:59:06 <mrsmith> but anyone who wants to take a look and let me know if I am going in the right or wrong direction, that would be appreciated
15:59:31 <mrsmith> it tries to delay the deletion of the fip ns and ports to reduce the churn that is seen on the dvr job
15:59:49 <mrsmith> also - Rajeev's lock patch may help as well - https://review.openstack.org/#/c/153422/
16:00:40 <carl_baldwin> mrsmith: Thanks.
16:00:53 <carl_baldwin> #endmeeting
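The extension-only update semantics agreed on above (a subnetpool update may add prefixes or widen existing ones, but never remove covered address space) reduce to a simple containment check. The helper below is a hypothetical illustration using the stdlib `ipaddress` module, not the actual logic of tidwellr's patch:

```python
import ipaddress

def is_extension_only(old_prefixes, new_prefixes):
    """Return True when every previously configured prefix is still fully
    covered by the new prefix list, i.e. the update only adds address
    space (new prefixes or wider supernets) and never removes any."""
    new = [ipaddress.ip_network(p) for p in new_prefixes]
    return all(
        # Compare only same-family prefixes; subnet_of() raises on a
        # v4/v6 mix, so filter by version first.
        any(old.subnet_of(n) for n in new if n.version == old.version)
        for old in (ipaddress.ip_network(p) for p in old_prefixes)
    )
```

Widening passes, e.g. `is_extension_only(["10.10.0.0/16"], ["10.0.0.0/8"])` is True, while dropping a still-configured prefix from the list returns False and would be rejected.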