22:00:19 #startmeeting neutron_drivers
22:00:20 Meeting started Thu Mar 31 22:00:19 2016 UTC and is due to finish in 60 minutes. The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:23 The meeting name has been set to 'neutron_drivers'
22:00:44 welcome to this exciting episode of neutron drivers, Newton Series
22:00:45 hi
22:00:48 hi
22:00:51 hi
22:01:04 you ready for some action?
22:01:10 no
22:01:20 get off my lawn.
22:01:31 HenryG : :-)
22:01:31 whose lawn? yours?
22:01:32 pff
22:01:40 remember to register for the austin party!
22:01:46 link?
22:01:46 whose party?
22:01:48 yours?
22:01:49 pff
22:02:04 https://www.eventbrite.com/e/stackcity-austin-a-community-festival-for-stackers-tickets-24174378216
22:02:21 #action kevinbenton to buy beer for anyone who registers for this party
22:03:03 ok, let’s dive in, but before we do that, I’d like to share a few reminders
22:03:46 Reviewing neutron-specs changes is just as important as reviewing neutron ones
22:04:06 don’t forget that as drivers members you’re the custodians of the +A
22:04:16 without that, specs stall
22:04:22 and no-one wants that
22:05:02 so please take time out of your busy schedule to go over the backlog, nudge contributors, and review and approve pending changes in that repo
22:05:28 Not a long list
22:05:28 the backlog is rather small now, so it’s pretty easy to clear
22:05:29 #link https://review.openstack.org/#/q/status:open+project:openstack/neutron-specs
22:05:31 oh boy, stackalytics /90 on that repo is grim.
22:05:45 carl_baldwin: thanks, you beat me to it
22:05:57 carl_baldwin: there’s a link on the drivers page for your convenience
22:06:02 #link https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
22:06:19 well i personally de-prioritized all spec reviews during the last part of the cycle to work on stability and bugs
22:06:34 kevinbenton: fair enough, but now it’s time to switch gears again
22:06:36 hence the reminder
22:06:49 #link https://review.openstack.org/#/c/286413/ needs some love
22:07:02 * amuller slotting spec/RFE reviews into his calendar
22:07:03 #link https://review.openstack.org/#/c/286413/ is close
22:07:14 sorry
22:07:16 #link https://review.openstack.org/#/c/225384/
22:07:24 #link https://review.openstack.org/#/c/190285/
22:07:40 as for the latter, we’d need to nudge the submitter to respin
22:07:55 bear in mind that we need to repurpose specs for Newton
22:08:17 #link https://github.com/openstack/neutron-specs/tree/master/specs/backlog/mitaka
22:08:49 during the next team meeting we’ll go over the N-1/Mitaka backlog and remind people to nudge these to the right release
22:09:12 but if you’re proactive (for the stuff you’re the approver of) then we might save a day or two
22:09:16 armax: are we fast-tracking missed and backlog specs again, or do they need to go through this meeting?
22:09:33 they can be fast-tracked
22:09:56 no need to go back to the end of the queue, but it’d be good to understand if they need new owners/approvers
22:10:12 hence I’d like to have a recorded conversation
22:10:33 dougwig: does that answer your question?
22:10:36 yep
22:11:03 well, no.
22:11:08 your answers are kind of contradictory.
:)
22:11:24 so are yours :)
22:11:32 ok
22:11:36 let me give you an example
22:11:44 vlan-aware-vms
22:11:50 the spec is in the backlog
22:12:10 it needs to be resubmitted to Newton
22:12:33 that doesn’t require a full-blown spec approval process
22:12:59 but I’d like to understand if the original approvers/owners assigned in Mitaka intend to continue to work in Newton
22:13:09 if not, then before fast-tracking we’d need to find new owners
22:13:18 otherwise what’s the point in fast-tracking?
22:13:22 you with me now?
22:13:47 so, simple +2/+A, but bring it up briefly in the meeting to verify owner/approver before the +A?
22:13:53 correct
22:13:57 with you
22:13:57 armax : good explanation - reasonably clear now
22:14:10 sounds fair
22:14:13 dougwig, Sukhdev: sorry I wasn’t crystal clear
22:14:15 armax: speaking of vlan-aware-vms, does it currently have an active approver?
22:14:37 rossella_s was in charge of it
22:14:46 we’d need to check with her if her priorities have changed
22:15:07 bence too, I haven’t seen him being super active on its patch
22:15:09 his
22:15:38 last spin of https://review.openstack.org/#/c/273954/ is as old as Feb 2
22:15:42 *sigh
22:15:50 armax : Ironic needs it as well - if no one comes forward, I will try to find a taker for this
22:15:54 * armax hurts himself
22:16:12 Sukhdev: I am sure we have many people interested
22:16:16 whoever takes over this really needs to understand the l2 agent well for the implementation
22:16:19 but no-one with the right endurance :)
22:16:27 otherwise it will stall
22:16:33 kevinbenton: exactly
22:16:41 but this isn’t just the l2 agent alone
22:16:46 pitfalls may be all over the place
22:16:51 and all of those people committed themselves.
22:16:59 anyhow we’ll discuss this in the right venue
22:17:07 dougwig: to a mental institution?
22:17:20 lol
22:17:26 * armax is unclear
22:17:26 :-)
22:17:38 I am funny, aren’t I?
22:17:40 anyhoo
22:17:45 * kevinbenton thinks dougwig is the active peanut gallery
22:17:50 if there’s nothing else, shall we dive in?
22:17:58 I'd like to quickly revisit an RFE from last week: bug 1560003
22:18:00 bug 1560003 in neutron "[RFE] Creating a new stadium project for BGP Dynamic Routing effort" [Wishlist,Triaged] https://launchpad.net/bugs/1560003 - Assigned to vikram.choudhary (vikschw)
22:18:10 HenryG: you bully
22:18:12 It was pointed out to me that there is an existing repo, networking-bgpvpn
22:18:15 HenryG: go ahead
22:18:22 HenryG: aye
22:18:25 Do we want a different repo for BGP dynamic routing?
22:19:07 it’s my understanding that the two are not quite the same
22:19:11 Some in the team indicated that they do want it separate
22:19:15 There are two different architectures. Many discussions in the past about the differences and the need for both, which has always resulted in both moving ahead so far
22:19:35 i wouldn't think they are the same.
22:19:44 bgpvpn is more closely related to vpnaas than it is to the BGP dynamic routing that recently merged
22:19:47 but to be fair, I did make the point repeatedly that neutron-bgp is probably not appropriate as a name
22:19:58 i assumed it'd be networking-bgp
22:20:26 That's all I wanted cleared up
22:20:26 HenryG: actually I just reviewed both bug and patch from vikram
22:20:31 HenryG: thanks boos
22:20:32 boss
22:20:44 let’s keep an eye on this
22:20:45 The naming can be discussed in the bug
22:21:05 the ultimate bikeshed topic.
22:21:12 HenryG, dougwig: ack
22:21:20 openstack/networking-BigGreenPenis
22:21:25 * salv-orlando did anyone say bikeshedding?
22:21:45 oh my, the painter awoke
22:21:46 ok, too many chocolate-covered espresso beans.
22:21:50 salv-orlando: go back to bed
22:21:58 * kevinbenton awkward silence
22:22:02 :)
22:22:04 ok
22:22:05 bug 1507499
22:22:06 bug 1507499 in neutron "[RFE] Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:22:35 this is not going away easily
22:22:41 there seems to be disagreement on what people want from this
22:22:42 we had a new related proposal on bug 1563538
22:22:44 bug 1507499 in neutron "duplicate for #1563538 [RFE] Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:23:08 we punted to the mid-cycle; we’re close enough to the summit that we could punt there easily
22:23:33 extension or separate command-line tool, this could proceed in a separate repo, to see if it goes anywhere.
22:23:41 yeah, maybe a session on what kind of debugging we want built in
22:23:47 or a friday session
22:24:17 I still think that we’d need to augment our API to report more sophisticated health information on a per-resource basis
22:24:22 to start off
22:24:37 then we can worry about how we provide the toolkit to implement remedy actions
22:25:09 I think both armax and dougwig provided good arguments to close the discussion on this rfe
22:25:09 I added some notes to #link https://etherpad.openstack.org/p/neutron-troubleshooting
22:25:16 as long as we actually come to an agreement for this cycle I'm game
22:25:23 I’ll make the point on the bug report again, see if I can find proselytes
22:25:32 * salv-orlando on this note goes back to his sleep
22:25:36 amuller: follow me and I’ll show you the light
22:25:50 salv-orlando: have a good one, we love you
22:26:08 bug 1520719
22:26:09 bug 1520719 in neutron "[RFE] Use the new enginefacade from oslo_db" [Wishlist,Triaged] https://launchpad.net/bugs/1520719 - Assigned to Ann Kamyshnikova (akamyshnikova)
22:26:20 HenryG: I look to you to get this resolved
22:26:33 Approve it and it shall be done
22:26:40 yep
22:26:45 HenryG: I’d like to see a plan first
22:26:47 but aye
22:27:06 like what to expect in the course of the release
22:27:12 but thanks for taking ownership
22:27:16 you shall be rewarded
22:27:23 bug 1530331
22:27:24 bug 1530331 in neutron "[RFE] [ipv6] Advertise tenant prefixes from router to outside" [Wishlist,Triaged] https://launchpad.net/bugs/1530331
22:28:36 Hi
22:28:37 shall we consider something within the scope?
22:28:57 It would be a third way to get IPv6 routes back into tenant networks.
22:29:07 It could be the simplest way though.
22:29:29 yes, if you could stuff a /128 in there
22:29:54 haleyb: I wasn't even thinking about host routes yet.
22:30:00 haleyb: That would take some more thinking.
22:30:05 to be honest, I’d rather choose a single way to address the specific use case
22:30:47 oh, the CVR doing this
22:31:26 The thing is, I haven't heard much demand for this.
22:31:33 Maybe asking operators would be good.
22:31:37 carl_baldwin, haleyb: do I read that this needs more baking until it is formalized into an actionable proposal?
22:31:55 armax: I think so.
22:32:06 carl_baldwin: most likely we’ll have an operator-sponsored session at the summit
22:32:13 yes, more baking
22:32:15 carl_baldwin: let’s make sure we get the time to talk about this
22:32:22 armax: ++
22:32:28 haleyb: you can’t leave the cake in the oven unattended
22:32:30 it’ll burn!
22:32:42 haleyb: can I trust you and carl_baldwin to see this through?
22:32:44 armax: set a timer. ;)
22:32:44 * haleyb is already hungry and that didn't help
22:32:55 haleyb: but it’s gonna burn
22:33:01 it’s not gonna taste good
22:33:07 bug 1552631
22:33:09 bug 1552631 in neutron "[RFE] Bulk Floating IP allocation" [Wishlist,Triaged] https://launchpad.net/bugs/1552631
22:33:13 armax: I'll take it.
22:33:22 carl_baldwin: don’t oversubscribe
22:33:26 I hear haleyb is a slacker
22:33:39 * armax spreads false rumors
22:33:40 i can help too, only one RFE on my plate
22:33:50 honestly, i think this is a horizon rfe, and not appropriate for the neutron api
22:33:52 * carl_baldwin passes it on to haleyb
22:34:14 I added one comment to this one about contiguous fip allocations.
22:34:21 carl_baldwin: ack
22:34:29 But, that might be a different request altogether.
22:34:31 talking about FIPs
22:35:08 and contiguous space
22:35:23 I wonder if we
22:35:37 will end up causing users to stomp on each other
22:36:23 I appreciate some customers may indeed ask for this, but at the same time, I think it’s right to say no if that means taking the rope away from users
22:36:24 armax: In what way?
22:36:58 if you have concurrent requests asking for the same contiguous space
22:37:02 if there's a contiguous fip rfe, we should deal with that separately. if this is really just the horizon use case, the ui should be driving the api.
22:37:18 dougwig: fair point
22:37:25 dougwig: +1 I fear I've expanded the scope of this rfe with my comment.
22:37:44 carl_baldwin: but that’s an important aspect nonetheless
22:38:01 amotoki: what do you reckon?
22:38:12 you’re the resident Horizon SME
22:38:14 I think we can call multiple APIs as they want.
22:38:57 meaning we can provide a client-side binding that accepts a # of FIPs
22:38:57 if someone wants 2 FIPs in the GUI, it is tougher compared to running the CLI two times...
22:38:58 ?
22:39:19 and return a list of FIPs?
22:39:28 armax: i just added a comment there as well, i'm remembering a customer wanting say a /28 contiguous, so they had one SG rule. I think they were using the IPs on their end as well so it impacted their VPN config
22:39:39 amotoki: django does two api calls instead of one, in that case, i think, right?
22:39:41 there are two candidates to implement it: horizon API wrappers or the neutron client library.
22:39:54 dougwig: yes
22:39:55 haleyb: ok
22:40:16 amotoki: I can see value in having the client library binding
22:40:22 we wrote a "trawler" to scoop up all the blocks and keep them just for them
22:40:49 armax: you hate orchestration, but you're in favor of a binding that a simple for loop can handle?
22:40:52 armax: agree to some extent.
22:41:18 dougwig: I do hate orchestration server-side
22:41:18 if we support it in the client library, how about the openstacksdk side? do we need to support it in openstacksdk?
22:41:39 dougwig: but not all orchestration
22:41:45 if I can’t live without it
22:41:48 i'm opposed to an api change, mildly opposed to a client orchestration, but will lose sleep over neither.
22:42:24 dougwig: good, then we’re on the same page
22:42:34 bug 1552680
22:42:35 bug 1552680 in neutron "[RFE] Add support for DLM" [Wishlist,Triaged] https://launchpad.net/bugs/1552680 - Assigned to John Schwarz (jschwarz)
22:42:44 see if you lose sleep over this one
22:42:51 * jschwarz ^_^
22:43:55 this could either be awesome or a disastrous nightmare of epic proportions. i don't think it has a middle ground.
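
For reference on the bulk floating IP discussion above (bug 1552631): the client-side option amotoki and dougwig outlined, a binding that a simple for loop can handle, might look roughly like the sketch below. It assumes python-neutronclient's existing create_floatingip call; the helper name and its arguments are illustrative, not an existing API.

    # Hypothetical client-side helper for bug 1552631: allocate N floating IPs
    # by repeating the existing single-FIP call, so no server-side API change
    # is needed. 'neutron' is a neutronclient.v2_0.client.Client instance
    # (construction omitted).
    def bulk_create_floatingips(neutron, external_net_id, count):
        """Return a list of floating IP dicts, allocated one call at a time."""
        fips = []
        for _ in range(count):
            body = {'floatingip': {'floating_network_id': external_net_id}}
            fips.append(neutron.create_floatingip(body)['floatingip'])
        return fips

Note this gives no contiguity guarantee, which is one reason the contiguous-FIP ask carl_baldwin raised probably belongs in a separate RFE, as dougwig suggested.
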
22:44:27 I wonder if this is an opportunity for us to take this and experiment with it for ‘new’ stuff and leave the stuff we built up until now alone
22:44:32 without adopting it in every place that touches an object protected by a lock, it's not buying us much
22:44:42 if it does end up being a disastrous nightmare as dougwig says, we can easily revert back
22:44:43 at least until we’ve given ourselves enough time to master the space
22:45:36 I agree with kevinbenton - the key is identifying specific places that can benefit from it greatly.
22:45:39 so I wonder if we can make a recommendation that for new work items this be considered
22:46:17 potentially overlapping with the ovo work that ihar, rossella et al are working on
22:46:36 not sure if I am making any sense
22:46:47 armax: you want to lock an object every time it's mutated, at the OVO layer?
22:46:52 I’ll need to give it some more thought
22:47:47 amuller: I am saying that I am not sure I am prepared to warrant refactoring to adopt DLM
22:47:51 kevinbenton: Did you give any thought to locking at what low of a layer vs locking at higher layers like jschwarz's PoC?
22:47:55 at that*
22:48:06 aside from our ability to make use of this well, is tooz stable enough to wrap our stuff around?
22:48:07 but I am definitely open to seeing how this may play out in the context of a new effort
22:48:16 we have the first real need in the L3 area? if so, we can try a PoC without overlapping efforts
22:48:27 jschwarz: are you aware of any users of tooz's DLM API?
22:48:30 amuller: locking that low won't help when it's related objects (e.g. a router and its HA network)
22:48:41 kevinbenton: aye
22:48:48 amuller, dougwig, ceilometer uses it
22:49:11 jschwarz: the locking API or grouping API?
22:49:29 amuller, grouping iirc
22:49:56 amuller, https://github.com/openstack/ceilometer/blob/master/ceilometer/coordination.py
22:50:21 so we'd probably be adding tooz and contributing to it in parallel
22:50:38 let’s keep brainstorming on this one
22:51:16 dougwig: So I don't know if anyone can make that determination at this point
22:51:40 bug 1554869
22:51:42 bug 1554869 in neutron "[RFE] Make API errors conform to API working group schema" [Wishlist,Triaged] https://launchpad.net/bugs/1554869 - Assigned to xiexs (xiexs)
22:51:50 dougwig, I think that part of the stability of tooz is the stability of the backend we choose
22:51:55 that means the answer is unknown, so no. i personally don't want to risk neutron's adoption momentum for something like that, and if we do it, i'd want to see it behind a config/enable toggle.
22:52:50 are these api error changes additive, or will we be breaking backwards compat to switch?
22:53:00 dougwig: that’s what I am questioning too
22:53:27 from the specs proposal it looks like they are not backwards compatible
22:54:25 has anyone given this any thought?
22:54:49 just my opinion, but i'm in favor of applying those standards to additions and any new v3 api, but not to potentially breaking users.
22:55:15 +1
22:55:19 yes, but before doing that we’d need to see what that v3 api looks like and who is interested in working on one
22:55:20 :)
22:55:25 v3 or versioned API
22:55:38 "next major rev, if", then.
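
A quick reference for the tooz question above (bug 1552680): tooz already exposes a distributed lock on top of a pluggable backend, which is roughly the primitive the PoC would build on. A minimal sketch follows; the backend URL, member id, and lock name are illustrative only, and error handling is omitted.

    # Minimal tooz distributed-lock sketch (illustrative values throughout).
    from tooz import coordination

    # Each server/agent process joins the coordination backend under a
    # unique member id.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'neutron-server-1')
    coordinator.start()

    # A named lock is shared across processes and hosts, e.g. around
    # operations that touch a router and its HA network together.
    lock = coordinator.get_lock(b'router-<uuid>')
    with lock:
        pass  # critical section: only one holder cluster-wide at a time

    coordinator.stop()

Whether the backing store (ZooKeeper, etcd, Redis, ...) is stable enough to depend on is exactly the concern raised above, hence dougwig's suggestion of a config/enable toggle.
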
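For context on bug 1554869: the API working group errors guideline describes a structured error payload along the lines of the second shape below, which differs from the envelope Neutron returns today, hence the backwards-compatibility concern. Field names and values here are approximate and purely illustrative, not verbatim from either document.

    # Illustrative shapes only.

    # Roughly what Neutron returns today:
    current_error = {
        'NeutronError': {
            'type': 'NetworkNotFound',
            'message': 'Network abc could not be found.',
            'detail': '',
        }
    }

    # Roughly what the API working group errors guideline describes:
    api_wg_error = {
        'errors': [{
            'request_id': 'req-<uuid>',
            'code': 'neutron.network.not_found',
            'status': 404,
            'title': 'Network not found',
            'detail': 'Network abc could not be found.',
            'links': [{'rel': 'help', 'href': 'https://docs.openstack.org/...'}],
        }]
    }

Since existing clients parse the current envelope, switching it in place would break them, which is why the consensus above leans toward applying the guideline only to additions or a future major/versioned API.
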
:)
22:55:58 amotoki: rather the latter I’d say, but a bump needs to happen nonetheless
22:56:11 bug 1557290
22:56:12 bug 1557290 in neutron "[RFE]DVR FIP agent gateway does not pass traffic directed at fixed IP" [Wishlist,Triaged] https://launchpad.net/bugs/1557290
22:56:26 I agree with carl_baldwin that this should be treated as a regular bug
22:56:33 +1
22:57:01 carl_baldwin: do you see many architectural changes involved?
22:57:22 armax: I don't think so.
22:57:26 ok
22:57:29 good to know
22:57:38 bug 1558812
22:57:39 bug 1558812 in neutron "[RFE] Enable adoption of an existing subnet into a subnetpool" [Wishlist,Triaged] https://launchpad.net/bugs/1558812
22:58:18 this is interesting, and I think it makes sense; how that is going to end up materializing I am not quite sure, so I’d advise us to iterate on a spec
22:58:29 goes against pets vs cattle, but i guess this is the real world we're living in.
22:58:38 My biggest concern here is how to make an API that ensures consistency on a network.
22:59:14 perhaps iterating on a spec may help us shed some light on these types of issues
22:59:19 dougwig: Imagine an external network with lots of things on it. You want to tear them all down? You can't kill your whole herd.
22:59:36 too bad we have one bug left from the lot
22:59:47 not sure if we can cover it in the last few seconds left
22:59:49 bug 1563069
22:59:50 bug 1563069 in neutron "[RFE] Centralize Configuration Options" [Wishlist,Triaged] https://launchpad.net/bugs/1563069
23:00:14 I see no point in rushing on this, but right now I have a kneejerk reaction to say ‘go away’
23:00:19 and on that note
23:00:22 thank you all
23:00:22 carl_baldwin: there's a place for herds of pets.
23:00:24 #endmeeting