22:01:15 #startmeeting neutron_drivers
22:01:16 Meeting started Thu Feb 18 22:01:15 2016 UTC and is due to finish in 60 minutes. The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:20 The meeting name has been set to 'neutron_drivers'
22:01:48 hello gents
22:01:52 ready for some fun?
22:02:00 hello
22:02:01 * carl_baldwin ready :)
22:02:04 ooh boy are we.
22:02:09 * mestery jumps up and down
22:02:17 Ya
22:02:20 #link https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
22:02:28 Wait, by fun do you mean pointless bikeshedding on the mailing list?
22:02:35 ok, open the boxes you received in the post this morning...
22:02:50 I didn't get one
22:02:55 you all received suspicious packages, did you?
22:03:00 This isn't some sort of "we all drink at once" thing, is it?
22:03:02 kevinbenton: that’s because you’re never home
22:03:03 mine was full of intel NICs.
22:03:16 o/
22:03:36 oh well…it looks like this didn’t work this time, I’ll try next week
22:03:38 let’s get serious
22:03:44 the list of triaged bugs for the week
22:03:45 https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe&orderby=datecreated&start=0
22:03:46 #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe&orderby=datecreated&start=0
22:04:09 we punted on bug 1507499
22:04:09 bug 1507499 in neutron "Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:04:21 so let’s move it aside for now
22:04:31 next
22:04:33 bug 1522102
22:04:33 bug 1522102 in neutron "[RFE] use oslo-versioned-objects to help with dealing with upgrades" [Wishlist,Triaged] https://launchpad.net/bugs/1522102 - Assigned to Justin Hammond (justin-hammond)
22:04:47 I was hoping Ihar would be around, but it’s too much to ask
22:05:11 I imagine that’s the sort of activity that’ll have to be chewed on slowly
22:05:18 anyone know when he will be back?
22:05:19 it won’t finish in the span of a single cycle
22:05:28 njohnston: it’s late night for him already
22:05:48 yeah, I just haven't seen him in a couple of days
22:05:57 IIRC, the upgrade sprint in March will tackle OVO for core resources.
22:06:04 amotoki: ack
22:06:27 I think that we all watch this effort with interest, and we should let it progress so long as it doesn’t disrupt anything else
22:06:32 so far it seems it isn't
22:06:46 ++
22:06:48 at one point I wasn’t even sure this needed an RFE
22:06:57 i've so far not met anyone that messed with ovo and liked it afterwards.
22:07:03 it's kinda like drupal that way.
22:07:12 in that, that’s one of the things we’ll have to tackle at some point in some form or another
22:07:19 dougwig: Ihar is one tough dude, let's see if he makes it out with his sanity
22:07:46 dougwig: that’s one of those things: you gotta try it for yourself
22:08:29 so if there are no major concerns on this one, we should watch it and make sure we have a clear understanding of the implications of using OVOs everywhere
22:08:56 ++
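
(For context, a minimal sketch of the oslo.versionedobjects pattern the RFE proposes; the Port fields and the 1.0→1.1 compatibility shim below are illustrative, not Neutron's actual object schema.)

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class Port(obj_base.VersionedObject):
        # Bump the minor version for compatible changes, the major version
        # for breaking ones; peers can then serialize to the version both
        # sides understand during a rolling upgrade.
        VERSION = '1.1'

        fields = {
            'id': obj_fields.UUIDField(),
            'name': obj_fields.StringField(nullable=True),
            'admin_state_up': obj_fields.BooleanField(default=True),
        }

        def obj_make_compatible(self, primitive, target_version):
            # Downgrade the serialized form for an older peer, e.g. drop
            # a field that did not exist in 1.0 (illustrative example).
            super(Port, self).obj_make_compatible(primitive, target_version)
            if target_version == '1.0':
                primitive.pop('admin_state_up', None)
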
22:09:15 next bug 1527671
22:09:15 bug 1527671 in neutron "[RFE]Neutron QoS Priority Queuing rule" [Wishlist,Triaged] https://launchpad.net/bugs/1527671
22:09:35 I have been seeing a number of QoS RFEs lately
22:10:21 I wonder if at some point we need to draw the line
22:10:41 I am not against this one or a few others I have seen
22:10:47 if it has active contributors, why would we?
22:11:02 +1
22:11:06 +1
22:11:56 dougwig: understood, but I’d rather work with something finite
22:12:04 and knowing that at some point I’ll be done
22:12:08 it doesn’t have to be today
22:12:10 or tomorrow
22:12:20 but at some point in the future :)
22:12:32 armax: indeed they are discrete items and therefore finite
22:12:46 natural numbers are discrete
22:12:47 now the number of those items might or might not be infinite
22:12:53 I don’t recall they are finite
22:12:54 but I digress
22:13:02 heh, we either like little pieces or mega-specs. i'm not sure either really ends. it's when they don't get done, or are lousy, that i have an issue.
22:13:08 aye indeed.
22:13:29 I am still unclear on how all these qos pieces fit together
22:13:38 we don’t seem to have a cohesive QoS strategy in Neutron
22:13:55 and I’d be wary of adding little pieces without having the overall picture in sight
22:14:06 or where we want to get
22:14:08 but that’s just me
22:14:40 I think our QoS strategy is a pretty straightforward "Rome wasn't built in a day"
22:14:48 I don't think it's unreasonable to try to understand the broader picture with respect to QoS, armax, that's within reason completely.
22:14:54 I am not saying I am against this RFE, but I’d rather look at it in the context of the other proposals to assess if it is an essential piece of the QoS puzzle
22:14:59 armax: that argues for a larger spec, then.
22:14:59 I sense it is
22:15:31 * carl_baldwin feels the same about the QoS strategy but thought maybe he just didn't understand it.
22:15:37 perhaps I am simply advocating for looking at all the QoS RFE proposals together
22:16:05 so that we have a coherent story we’re gonna sell when it comes to QoS in Neutron
22:16:23 I wouldn’t simply rubberstamp RFEs just for the sake of it
22:16:37 I wonder if the sentiment is only mine
22:16:45 I think there are 2 types of QoS RFEs that might come in: expanding QoS to different drivers (like LinuxBridge) and adding features (like DSCP and ECN). The list of possible QoS features to add is a finite and small set.
22:17:29 But the fact that you have an X times Y effect makes the number of RFEs seem large (DSCP on SR-IOV! ECN on Midonet!).
22:18:20 njohnston: right, let’s see if the people closer to QoS have a stronger opinion
22:18:34 I’ll provide this feedback and we’ll reconvene in due course
22:18:47 I don't think backend implementations are related unless it is in-tree drivers.
22:19:12 amotoki: there’s certainly a distinction between API and implementation
22:19:27 yes
22:20:34 let’s have this simmer a bit more, QoS is still churning in Mitaka, so review bandwidth is kinda limited anyway…talking about QoS :)
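
(For context on how the pieces njohnston describes compose: in the QoS extension each feature is a rule sub-resource attached to a policy, so the API grows one rule type per feature while each backend driver advertises which rule types it supports. A hedged sketch against that API follows; the endpoint, token, and values are placeholders.)

    import requests

    NEUTRON = 'http://controller:9696/v2.0'   # placeholder endpoint
    HEADERS = {'X-Auth-Token': 'TOKEN'}       # placeholder token

    # A policy is the unit attached to ports/networks; each QoS feature
    # (bandwidth limit today, DSCP/ECN-style marking as proposed) is a
    # separate rule sub-resource hanging off it.
    policy = requests.post(
        NEUTRON + '/qos/policies', headers=HEADERS,
        json={'policy': {'name': 'gold'}}).json()['policy']

    requests.post(
        NEUTRON + '/qos/policies/%s/bandwidth_limit_rules' % policy['id'],
        headers=HEADERS,
        json={'bandwidth_limit_rule': {'max_kbps': 10000,
                                       'max_burst_kbps': 1000}})
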
22:21:03 next?
22:21:07 bug #1529109
22:21:07 bug 1529109 in neutron "[RFE] Security groups resources are not extendable" [Wishlist,Triaged] https://launchpad.net/bugs/1529109 - Assigned to Roey Chen (roeyc)
22:21:19 I personally can get behind salv-orlando’s argument
22:21:52 I was not sure whether the missing method was an oversight or a by-design choice
22:22:11 and hence I initially flagged negatively the patch that brought this to our attention
22:23:23 armax: the method was missing because the guy that did that for l3 did not bother doing the same stuff for the other extensions
22:23:30 anyone feel strongly one way or another?
22:23:38 salv-orlando: would that guy be you?
22:24:03 armax: laziness always gives me away
22:24:09 salv-orlando: it does
22:24:19 salv-orlando: that’s why you’re so predictable
22:24:22 :)
22:25:14 ok, so if we move past the alignment with the ec2 version of the security group API (which I guess we diverged from by now anyway) we could let this go
22:25:41 but I’d still like to be conscious of how this is going to be used, at least within the borders of Neutron itself.
22:25:44 sounds fair?
22:25:58 Sure
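
(For context, the hook under discussion is the dict-extend mechanism that L3 and port resources already use; below is a sketch of what the security group equivalent might look like. The mixin, the 'vendor:prop' attribute, and the registration wiring are illustrative, not the actual patch, and the fix would also need the security group DB code to apply the registered functions when it builds the response dict.)

    from neutron.db import common_db_mixin

    class SecurityGroupVendorMixin(common_db_mixin.CommonDbMixin):

        def _extend_security_group_dict(self, sg_res, sg_db):
            # Add an extension-defined attribute to the API response,
            # the same way port binding extends the port dict.
            sg_res['vendor:prop'] = getattr(sg_db, 'vendor_prop', None)

    # Registration mirrors what the L3 resources already do; for this to
    # take effect, the SG code must also call _apply_dict_extend_functions
    # when serializing, which is the missing piece the bug points at.
    SecurityGroupVendorMixin.register_dict_extend_funcs(
        'security_groups', ['_extend_security_group_dict'])
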
22:26:10 ok, moving on
22:26:14 bug 1540512
22:26:14 bug 1540512 in neutron "Host Aware IPAM" [Wishlist,Triaged] https://launchpad.net/bugs/1540512
22:26:52 So, this is from Calico.
22:26:52 I tried to digest this one a few times
22:26:57 but I have been unable to
22:27:32 Doesn't ipam already get the port object?
22:27:32 They want a mechanism to aggregate IPs in subnets around something like a host, rack, or some other physical thing.
22:27:34 if you see overlap with routed networks
22:28:06 could they provide feedback to that initiative?
22:28:13 armax: routed networks works a little bit differently because subnets are confined to an L2 domain.
22:28:15 If so, the only new thing this is requesting is the change to the nova-neutron interaction for late ip assignment
22:28:16 i think we could rename this "add neutron to the nova scheduler".
22:28:28 lol
22:28:35 dougwig: not quite
22:28:44 No, this isn't adding neutron to the nova scheduler.
22:28:53 This is just saying that neutron will pick IPs based on the host
22:29:09 i get that it's a narrow case, but what this really is, at a high level, is getting neutron-awareness into the vm scheduler.
22:29:09 In fact, since their requirement to aggregate IPs is more of a soft requirement, they don't need to affect nova scheduling.
22:29:25 kevinbenton: I didn't think IPAM got a port but I'll have to check.
22:29:44 dougwig: No, it's not.
22:30:14 ok so if I understand this correctly they want the IP selection to take into account the host on which the VM needs to land
22:30:17 Won't they want to fail the nova scheduling decision if it doesn't fall in their IP addressing scheme?
22:30:18 What they need is 1) delayed IP assignment (which we're doing) and 2) sending the host to IPAM
22:30:27 armax: yes
22:30:28 ok
22:30:32 well, do we want this narrowly focused RFE (and maybe we do), or do we open this up to what it really is.. *every* network resource could benefit from locality and influence in the scheduling (pre or post, which is the fine hair you're parsing to say it's not the scheduler, but whatev.)
22:30:43 mestery: no, their requirement is soft.
22:31:04 dougwig: Nova scheduling integration would not satisfy their use case
22:31:04 I'm not sure how that's useful, but ok
22:31:06 :)
22:31:12 carl_baldwin: I think 2 is a natural consequence of 2
22:31:16 *1
22:31:19 kevinbenton: OK, I give up, explain it to me please and use small words
22:31:20 :)
22:31:26 armax: Is that recursive?
22:31:27 ;)
22:31:28 y'all are reading too narrowly into my use of the word scheduler.
22:31:42 Neutron looks at the port object and chooses the IP based on the host field
22:32:04 kevinbenton: we can only do so when we’re deferring the IP allocation
22:32:20 armax: yes
22:32:22 to way after the scheduling has taken place
22:32:24 armax: which is a consequence of which?
22:32:35 2 is a consequence of 1
22:32:55 because if we don’t have late IP assignment, we may not know the host
22:32:57 at all
22:33:12 armax: Maybe, but I kind of think not necessarily.
22:33:35 Anyway, their ask here is just to work on getting the host info to IPAM. They have a spec up but it needs a lot of work.
22:33:48 well
22:33:59 that’s not a simple API change, even if internal
22:34:01 kevinbenton: I'll look at what is already passed in the context of the spec, if we want to approve.
22:34:17 it’s a matter of where the IPAM layer is involved in the port life cycle
22:35:06 so, I guess it’s worth looking at, only after the routed networks effort has started to yield some code
22:35:19 armax: +1
22:35:28 with the current API, we receive an IP address at port-create, so if we defer IP allocation it is an API change as well.
22:35:35 armax: +1
22:35:41 ok
22:35:43 It *could* be used with my routed networks in the future, so I'm mildly interested myself.
22:36:08 amotoki: deferring IP address allocation is part of the routed networks work already.
22:36:15 right
22:36:27 carl_baldwin: yeah, i forgot it.
22:36:39 and I think that deferring IP assignment is a prerequisite to being able to pass a non-NULL host to IPAM
22:36:52 Let's let this simmer. I'll give some feedback on the spec and show some interest, but we'll wait until the routed networks work progresses.
22:37:06 armax: right
22:37:08 carl_baldwin: ack
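
(For context, a sketch of the shape of what bug 1540512 asks for once deferred IP assignment exists: the IPAM layer sees the bound host and allocates from a matching range. The HostAddressRequest class and the per-rack mapping are hypothetical, not an existing Neutron interface.)

    import ipaddress

    class HostAddressRequest(object):
        """Hypothetical address request carrying the port's bound host."""
        def __init__(self, host):
            self.host = host

    # Hypothetical operator-supplied mapping of hosts/racks to ranges,
    # so addresses aggregate around a physical thing, per the RFE.
    HOST_RANGES = {
        'compute-rack1': ipaddress.ip_network('10.0.1.0/26'),
        'compute-rack2': ipaddress.ip_network('10.0.1.64/26'),
    }

    def allocate(request, allocated):
        """Pick the first free address from the range matching the host."""
        for ip in HOST_RANGES[request.host].hosts():
            if str(ip) not in allocated:
                return str(ip)
        raise RuntimeError('range exhausted for %s' % request.host)

    # e.g. allocate(HostAddressRequest('compute-rack1'), {'10.0.1.1'})
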
22:37:25 bug 1541579
22:37:25 bug 1541579 in neutron "Port based HealthMonitor in neutron_lbaas" [Wishlist,Triaged] https://launchpad.net/bugs/1541579
22:37:38 dougwig: as the LBaaS SME in charge
22:37:45 reading
22:38:35 reasonable, i'll comment on the bug on the exact syntax and whatnot. likely not mitaka, though.
22:39:00 dougwig: ok
22:39:15 dougwig: I’ll follow up with you
22:39:21 ok
22:39:28 bug 1541895
22:39:28 bug 1541895 in neutron "[RFE] [IPAM] Make IPAM driver a per-subnet pool option" [Wishlist,Triaged] https://launchpad.net/bugs/1541895 - Assigned to John Belamaric (jbelamaric)
22:39:37 this is giving me a bit of a headache
22:40:00 for a couple of reasons
22:40:02 MLIpam!
22:40:12 MLMLIPAM
22:40:29 jokes aside, we have the migration to sort out
22:40:46 but assuming that’s sorted
22:41:53 I'm not sure how much motivation there is behind this request.
22:42:10 well I wonder about the actual demand for such a use case
22:42:16 * carl_baldwin passes armax some Ibuprofen
22:42:28 I need something stronger
22:42:31 armax: Exactly what I'm wondering.
22:42:33 mlml -- a mechanism driver that passes all ethernet frames via openstack-dev@openstack.org.
22:43:04 to be fair, if we allow for multiple *things* to operate in Neutron
22:43:18 it’s only natural to extend the concept to IPAM
22:43:19 dougwig: ROTFL
22:43:36 dougwig: can you write a spec?
22:43:41 dougwig: there is already a plugin that sends network requests via email btw
22:44:12 Human defined networking!
22:44:26 my opinion is that a feature like this should not be implemented just because it's possible
22:44:37 salv-orlando: +1
22:44:47 otherwise we should also do what dougwig suggested: an email-based datapath
22:44:48 why not
22:44:56 but do we envision the request down the line?
22:44:56 So we shouldn't implement any features that are possible?
22:45:09 ;)
22:45:22 I am not sure DHCP is really key and diversity is really needed
22:45:24 armax: If the RFE had a description of a use case we might
22:45:40 how many IPAM systems does one really need?
22:45:43 I think we all agree. Let's push back on this one to build a case for it.
22:45:47 salv-orl_: +1
22:45:53 carl_baldwin: +1
22:45:56 carl_baldwin: +1
22:45:59 but I don't have a use case, if not selling IPAM from a given vendor as a premium service in a cloud
22:46:05 Each tenant should be able to bring their own ipam
22:46:08 that’s another one
22:46:12 bug 1544676
22:46:12 bug 1544676 in neutron "[RFE] Support for multiple L2 agents on a host" [Wishlist,Triaged] https://launchpad.net/bugs/1544676
22:46:17 LL2 agent
22:46:30 mind boggling
22:46:38 meta-agent
22:46:39 What? Isn't this supported?
22:46:44 this is about running multiple L2 agents on the same compute
22:46:44 the main reason I submitted the RFE was for upgrades
22:46:58 kevinbenton: not supported until we develop an L1 agent and the ML1 plugin
22:47:04 jniesz: ack
22:47:11 these are side-ways upgrades
22:47:17 so, uhh, i'm not even sure we do *one* very well atm.
22:47:19 meaning from one technology to another
22:47:24 dougwig: :)
22:47:25 right
22:47:32 Wait, we already support this with certain agents
22:47:51 It sounds like this is a bug in either the lb agent or the ovs agent
22:48:00 I think we can run the sriov-agent and the ovs-agent.
22:48:00 or if you want to back a specific network with a particular agent for other reasons
22:48:10 like performance or features
22:48:12 amotoki: +1
22:48:33 well this is about different L2 agents
22:48:34 the ovs agent will enslave the interface first
22:48:45 if you are running ovs+linux bridge
22:48:51 SR-IOV is an L2 agent
22:49:22 We need to see what the fix to allow this would entail
22:49:22 kevinbenton: come on, go beyond my words
22:50:07 in my understanding, in theory we allow multiple l2 agents on the same host.
22:50:11 If it does not work, I think it is just a bug.
22:50:42 I recall one or two bugs
22:50:50 where you had one agent and switched to another
22:50:56 but not two at the same time
22:52:42 Let's see how invasive the fix is to see if an rfe is even necessary
22:53:11 ok I have a feeling that this is going to be messy
22:53:55 I am not sure this is something we can just try out, I imagine this needs some planning
22:53:58 and digging
22:54:04 agree. if we allow this we will need some gateway between the two types of network backends...
22:55:01 we also would need to be prescriptive about the migration use cases
22:55:11 because we can’t simply assume this would work in all possible circumstances
22:56:06 is this for no-downtime migration?
22:56:40 because it’s not that hard to have a predictable migration path where you unplug and replug the VM’s nic
22:56:42 for avoiding dedicating a set of compute hosts for migration
22:56:56 you don’t need dedicated hosts to do that
22:57:31 it is possible to cascade
22:57:45 as long as the swing space is there
22:58:29 I am unclear as to why you’d need more compute resources if you can tolerate some downtime
22:58:48 which is the safest thing to do in migration circumstances anyway
22:58:57 we should probably continue the conversation offline
22:59:06 we’re running short
22:59:09 ok
22:59:19 it’s food for thought
22:59:27 jniesz: thanks for filing the RFE
22:59:44 ok, we’ll continue next week from where we left off
22:59:48 #endmeeting
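
(For reference on the last RFE: a minimal sketch of spotting hosts that already run more than one live L2 agent, working over the fields the Neutron agents API returns. The agent_type strings are the identifiers the in-tree agents report; the sample data is made up.)

    from collections import defaultdict

    # Agent type identifiers reported by the respective L2 agents.
    L2_AGENT_TYPES = {'Open vSwitch agent', 'Linux bridge agent',
                      'NIC Switch agent'}

    def multi_l2_hosts(agents):
        """Group live L2 agents by host; return hosts running several."""
        by_host = defaultdict(list)
        for agent in agents:
            if agent['agent_type'] in L2_AGENT_TYPES and agent['alive']:
                by_host[agent['host']].append(agent['agent_type'])
        return {h: t for h, t in by_host.items() if len(t) > 1}

    # Made-up sample shaped like the Neutron agents API response:
    print(multi_l2_hosts([
        {'host': 'cn-1', 'agent_type': 'Open vSwitch agent', 'alive': True},
        {'host': 'cn-1', 'agent_type': 'Linux bridge agent', 'alive': True},
    ]))
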