22:01:15 <armax> #startmeeting neutron_drivers
22:01:16 <openstack> Meeting started Thu Feb 18 22:01:15 2016 UTC and is due to finish in 60 minutes.  The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:01:20 <openstack> The meeting name has been set to 'neutron_drivers'
22:01:48 <armax> hello gents
22:01:52 <armax> ready for some fun?
22:02:00 <jniesz> hello
22:02:01 * carl_baldwin ready :)
22:02:04 <dougwig> ooh boy are we.
22:02:09 * mestery jumps up and down
22:02:17 <kevinbenton> Ya
22:02:20 <armax> #link https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
22:02:28 <mestery> Wait, by fun do you mean pointless bikeshedding on the mailing list?
22:02:35 <armax> ok, open the boxes you received in the post this morning...
22:02:50 <kevinbenton> I didn't get one
22:02:55 <armax> you all received suspicious packages, did you?
22:03:00 <mestery> This isn't some sort of "we all drink at once" thing is it?
22:03:02 <armax> kevinbenton: that’s because you’re never home
22:03:03 <dougwig> mine was full of intel NICs.
22:03:16 <njohnston> o/
22:03:36 <armax> oh well…it looks like this didn’t work this time, I’ll try next week
22:03:38 <armax> let’s get serious
22:03:44 <armax> the list of triaged bugs for the week
22:03:46 <armax> #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe&orderby=datecreated&start=0
22:04:09 <armax> we punted on bug 1507499
22:04:09 <openstack> bug 1507499 in neutron "Centralized Management System for testing the environment" [Wishlist,Triaged] https://launchpad.net/bugs/1507499
22:04:21 <armax> so let’s move it aside for now
22:04:31 <armax> next
22:04:33 <armax> bug 1522102
22:04:33 <openstack> bug 1522102 in neutron "[RFE] use oslo-versioned-objects to help with dealing with upgrades" [Wishlist,Triaged] https://launchpad.net/bugs/1522102 - Assigned to Justin Hammond (justin-hammond)
22:04:47 <armax> I was hoping Ihar would be around, but it’s too much to ask
22:05:11 <armax> I imagine that’s the sort of activity that’ll have to be chewed on slowly
22:05:18 <njohnston> anyone know when he will be back?
22:05:19 <armax> it won’t finish in the span of a single cycle
22:05:28 <armax> njohnston: it’s late night for him already
22:05:48 <njohnston> yeah, I just haven't seen him in a couple of days
22:05:57 <amotoki> IIRC, the upgrade sprint in March will tackle OVO for core resources.
22:06:04 <armax> amotoki: ack
22:06:27 <armax> I think that we all watch this effort with interest, and we should let it progress so long as it doesn’t disrupt anything else
22:06:32 <armax> so far it seems it isn't
22:06:46 <carl_baldwin> ++
22:06:48 <armax> at one point I wasn’t even sure this needed an RFE
22:06:57 <dougwig> i've so far not met anyone that messed with ovo and liked it afterwards.
22:07:03 <dougwig> it's kinda like drupal that way.
22:07:12 <armax> in the sense that it’s one of those things we’ll have to tackle at some point, in some form or another
22:07:19 <mestery> dougwig: Ihar is one tough dude, lets see if he makes it out with his sanity
22:07:46 <armax> dougwig: that’s one of those things: you gotta try it for yourself
22:08:29 <armax> so if there are no major concerns on this one, we should watch it and make sure we have a clear understanding of the implications of using OVOs everywhere
22:08:56 <amotoki> ++
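For context on the OVO thread above: the idea in bug 1522102 is to wrap Neutron's resources in oslo.versionedobjects so that mixed-version services can negotiate object versions during rolling upgrades. A minimal sketch, assuming the stock oslo.versionedobjects API; the Network fields shown are illustrative, not Neutron's actual object definition:

    from oslo_versionedobjects import base as obj_base
    from oslo_versionedobjects import fields as obj_fields

    @obj_base.VersionedObjectRegistry.register
    class Network(obj_base.VersionedObject):
        # Bump VERSION on any field change; an older peer can then ask for
        # a compatible payload via obj_to_primitive(target_version=...).
        VERSION = '1.0'

        fields = {
            'id': obj_fields.UUIDField(),
            'name': obj_fields.StringField(nullable=True),
            'mtu': obj_fields.IntegerField(nullable=True),
        }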
22:09:15 <armax> next bug 1527671
22:09:15 <openstack> bug 1527671 in neutron "[RFE]Neutron QoS Priority Queuing rule" [Wishlist,Triaged] https://launchpad.net/bugs/1527671
22:09:35 <armax> I have been seeing a number of QoS RFEs lately
22:10:21 <armax> I wonder if at some point we need to draw the line
22:10:41 <armax> I am not against this one or a few others I have seen
22:10:47 <dougwig> if it has active contributors, why would we?
22:11:02 <vhoward> +1
22:11:06 <njohnston> +1
22:11:56 <armax> dougwig: understood, but I’d rather work with something finite
22:12:04 <armax> and knowing that at some point I’ll be done
22:12:08 <armax> it doesn’t have to be today
22:12:10 <armax> or tomorrow
22:12:20 <armax> but at some point in the future :)
22:12:32 <salv-orlando> armax: indeed they are discrete items and therefore finite
22:12:46 <armax> natural numbers are discrete
22:12:47 <salv-orlando> now the number of those items might or might not be infinite
22:12:53 <armax> I don’t recall they are finite
22:12:54 <armax> but I digress
22:13:02 <dougwig> heh, we either like little pieces or mega-specs. i'm not sure either really ends. it's when they don't get done, or are lousy, that i have an issue.
22:13:08 <dougwig> aye indeed.
22:13:29 <armax> I am still unclear on how all these qos pieces fit together
22:13:38 <armax> we don’t seem to have a cohesive QoS strategy in Neutron
22:13:55 <armax> and I’d be wary of adding little pieces without having the overall picture in sight
22:14:06 <armax> or where we want to get
22:14:08 <armax> but that’s just me
22:14:40 <njohnston> I think our QoS strategy is a pretty straightforward "Rome wasn't built in a day"
22:14:48 <mestery> I don't think it's unreasonable to try to understand the broader picture with respect to QoS, armax; that's completely within reason.
22:14:54 <armax> I am not saying I am against this RFE, but I’d rather look at it in the context of the other proposals to assess if it is an essential piece of the QoS puzzle
22:14:59 <dougwig> armax: that argues for a larger spec, then.
22:14:59 <armax> I sense it is
22:15:31 * carl_baldwin feels the same about the QoS strategy but thought maybe he just didn't understand it.
22:15:37 <armax> perhaps I am simply advocating for looking at all the QoS RFE proposals together
22:16:05 <armax> so that we have a coherent story we’re gonna sell when it comes to QoS in Neutron
22:16:23 <armax> I wouldn’t simply rubberstamp RFEs just for the sake of it
22:16:37 <armax> I wonder if the sentiment is only mine
22:16:45 <njohnston> I think there are 2 types of QoS RFEs that might come in: expanding QoS to different drivers (like LinuxBridge) and adding features (like DSCP and ECN).  The list of possible QoS features to add is a finite and small set.
22:17:29 <njohnston> But the fact that you have an X times Y effect makes the number of RFEs seem large (DSCP on SR-IOV!  ECN on Midonet!).
22:18:20 <armax> njohnston: right, let’s see if the people closer to QoS have a stronger opinion
22:18:34 <armax> I’ll provide this feedback and we’ll reconvene in due course
22:18:47 <amotoki> I don't think backend implementations are related unless they are in-tree drivers.
22:19:12 <armax> amotoki: there’s certainly a distinction between API and implementation
22:19:27 <amotoki> yes
22:20:34 <armax> let’s have this simmer a bit more; QoS is still churning in Mitaka, so review bandwidth is kinda limited anyway…speaking of QoS :)
22:21:03 <armax> next?
22:21:07 <armax> bug #1529109
22:21:07 <openstack> bug 1529109 in neutron "[RFE] Security groups resources are not extendable" [Wishlist,Triaged] https://launchpad.net/bugs/1529109 - Assigned to Roey Chen (roeyc)
22:21:19 <armax> I personally can get behind salv-orlando’s argument
22:21:52 <armax> I was not sure whether the missing method was an oversight or a by-design choice
22:22:11 <armax> and hence I initially flagged negatively the patch that brought this to our attention
22:23:23 <salv-orlando> armax: the method was missing because the guy that did that for l3 did not bother doing the same stuff for the other extensions
22:23:30 <armax> anyone feels strongly one way or another?
22:23:38 <armax> salv-orlando: would that guy be you?
22:24:03 <salv-orlando> armax: laziness always gives me away
22:24:09 <armax> salv-orlando: it does
22:24:19 <armax> salv-orlando: that’s why you’re so predictable
22:24:22 <armax> :)
22:25:14 <armax> ok, so if we move past the alignment with the ec2 version of security group API (which I guess we diverged from by now anyway) we could let this go
22:25:41 <armax> but I’d still like to be conscious of how this is going to be used, at least within the borders of Neutron itself.
22:25:44 <armax> sounds fair?
22:25:58 <kevinbenton> Sure
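To make salv-orlando's point concrete: l3 resources expose a registration hook that lets extensions decorate the API response dict, and security groups lacked the equivalent. A self-contained sketch of the pattern; the names here are illustrative, not Neutron's actual hooks:

    # Callbacks that extensions register to add keys to the security group
    # response dict (illustrative, not Neutron's real registration API).
    _sg_dict_extend_funcs = []

    def register_sg_dict_extend_func(func):
        _sg_dict_extend_funcs.append(func)

    def make_security_group_dict(sg_db):
        res = {'id': sg_db['id'], 'name': sg_db['name']}
        for func in _sg_dict_extend_funcs:
            func(res, sg_db)  # each extension mutates the dict in place
        return res

    # A hypothetical extension plugging in an extra attribute:
    register_sg_dict_extend_func(
        lambda res, sg_db: res.update(logging=sg_db.get('logging', False)))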
22:26:10 <armax> ok, moving on
22:26:14 <armax> bug 1540512
22:26:14 <openstack> bug 1540512 in neutron "Host Aware IPAM" [Wishlist,Triaged] https://launchpad.net/bugs/1540512
22:26:52 <carl_baldwin> So, this is from Calico.
22:26:52 <armax> I tried to digest this one a few times
22:26:57 <armax> but I have been unable to
22:27:32 <kevinbenton> Doesn't ipam already get the port object?
22:27:32 <carl_baldwin> They want a mechanism to aggregate IPs in subnets around something like a host, rack, or some other physical thing.
22:27:34 <armax> if you see overlap with routed networks
22:28:06 <armax> could they provide feedback to that initiative?
22:28:13 <carl_baldwin> armax: routed networks works a little bit differently because subnets are confined to an L2 domain.
22:28:15 <kevinbenton> If so, the only new thing this is requesting is the change to the nova neutron interaction for late ip assignment
22:28:16 <dougwig> i think we could rename this "add neutron to the nova scheduler".
22:28:28 <mestery> lol
22:28:35 <kevinbenton> dougwig: not quite
22:28:44 <carl_baldwin> No, this isn't adding neutron to the nova scheduler.
22:28:53 <kevinbenton> This is just saying that neutron will pick IPs based on the host
22:29:09 <dougwig> i get that it's a narrow case, but the real high-level goal is to get neutron-awareness into the vm scheduler.
22:29:09 <carl_baldwin> In fact, since their requirement to aggregate IPs is more of a soft requirement, they don't need to affect nova scheduling.
22:29:25 <carl_baldwin> kevinbenton: I didn't think IPAM got a port but I'll have to check.
22:29:44 <carl_baldwin> dougwig: No, its not.
22:30:14 <armax> ok so if I understand this correctly they want the IP selection to take into account the host on which the VM needs to land on
22:30:17 <mestery> Won't they want to fail the nova scheduling decision if it doesn't fall in their IP addressing scheme?
22:30:18 <carl_baldwin> What they need is 1) delayed IP assignment (which we're doing) and 2) sending the host to IPAM
22:30:27 <carl_baldwin> armax: yes
22:30:28 <armax> ok
22:30:32 <dougwig> well, do we want this narrowly focused RFE (and maybe we do), or do we open this up to what it really is: *every* network resource could benefit from locality and influence in the scheduling (pre or post, which is the fine hair you're parsing to say it's not the scheduler, but whatev)
22:30:43 <carl_baldwin> mestery: no, their requirement is soft.
22:31:04 <kevinbenton> dougwig: Nova scheduling integration would not satisfy their use case
22:31:04 <mestery> I'm not sure how that's useful, but ok
22:31:06 <mestery> :)
22:31:12 <armax> carl_baldwin: I think 2 is a natural consequence of 2
22:31:16 <armax> *1
22:31:19 <mestery> kevinbenton: OK, I give up, explain it to me please and use small words
22:31:20 <mestery> :)
22:31:26 <carl_baldwin> armax: Is that recursive?
22:31:27 <carl_baldwin> ;)
22:31:28 <dougwig> y'all are reading too narrowly into my use of the word scheduler.
22:31:42 <kevinbenton> Neutron looks at port object and chooses IP based on host field
22:32:04 <armax> kevinbenton: we can only do so when we’re deferring the IP allocation
22:32:20 <kevinbenton> armax: yes
22:32:22 <armax> to way after the scheduling has taken place
22:32:24 <carl_baldwin> armax: which is a consequence of which?
22:32:35 <armax> 2 consequence of 1
22:32:55 <armax> because if we don’t have late IP assignment, we may not know the host
22:32:57 <armax> at all
22:33:12 <carl_baldwin> armax: Maybe but I kind of think not necessarily.
22:33:35 <carl_baldwin> Anyway, their ask here is just to work on getting the host info to IPAM.  They have a spec up but it needs a lot of work.
22:33:48 <armax> well
22:33:59 <armax> that’s not a simple API change, even if internal
22:34:01 <carl_baldwin> kevinbenton: I'll look at what is already passed in the context of the spec, if we want to approve.
22:34:17 <armax> it’s a matter of where the IPAM layer is involved in the port life cycle
22:35:06 <armax> so, I guess it’s worth looking at, only after the routed networks effort has started to yield some code
22:35:19 <carl_baldwin> armax: +1
22:35:28 <amotoki> with the current API, we return an IP address at port-create, so if we defer IP allocation it is an API change as well.
22:35:35 <amotoki> armax: +1
22:35:41 <armax> ok
22:35:43 <carl_baldwin> It *could* be used on my routed networks in the future, so I'm mildly interested myself.
22:36:08 <carl_baldwin> amotoki: deferring IP address allocation is part of routed networks work already.
22:36:15 <armax> right
22:36:27 <amotoki> carl_baldwin: yeah, i forgot it.
22:36:39 <armax> and I think that deferring IP assignment is a prerequisite to being able to pass a non-NULL host to IPAM
22:36:52 <carl_baldwin> Let's let this simmer.  I'll give some feedback on the spec and show some interest but we'll wait until the routed networks work progresses.
22:37:06 <carl_baldwin> armax: right
22:37:08 <armax> carl_baldwin: ack
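To make the ask concrete: once IP assignment is deferred past scheduling, the port's binding host is known when IPAM runs, so the driver can prefer an aggregate tied to that host or rack and fall back rather than fail, matching the soft requirement carl_baldwin describes. A self-contained sketch over made-up data; this is neither the Calico spec's design nor Neutron's IPAM driver interface:

    import ipaddress

    # Hypothetical aggregation table: one address block per host (or rack).
    HOST_BLOCKS = {
        'compute-1': ipaddress.ip_network('10.0.1.0/26'),
        'compute-2': ipaddress.ip_network('10.0.1.64/26'),
    }

    def allocate_ip(allocated, host=None):
        """Pick a free IP, preferring the block aggregated around host."""
        if host in HOST_BLOCKS:
            # Soft requirement: try the host-local aggregate first...
            for ip in HOST_BLOCKS[host].hosts():
                if ip not in allocated:
                    return ip
        # ...but fall back to any block rather than failing the VM boot.
        for block in HOST_BLOCKS.values():
            for ip in block.hosts():
                if ip not in allocated:
                    return ip
        raise RuntimeError('address pool exhausted')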
22:37:25 <armax> bug 1541579
22:37:25 <openstack> bug 1541579 in neutron "Port based HealthMonitor in neutron_lbaas" [Wishlist,Triaged] https://launchpad.net/bugs/1541579
22:37:38 <armax> dougwig: as LBaaS SME in charge
22:37:45 <dougwig> reading
22:38:35 <dougwig> reasonable, i'll comment on the bug on the exact syntax and whatnot.  likely not mitaka, though.
22:39:00 <armax> dougwig: ok
22:39:15 <armax> dougwig: I’ll follow up with you
22:39:21 <dougwig> ok
22:39:28 <armax> bug 1541895
22:39:28 <openstack> bug 1541895 in neutron "[RFE] [IPAM] Make IPAM driver a per-subnet pool option" [Wishlist,Triaged] https://launchpad.net/bugs/1541895 - Assigned to John Belamaric (jbelamaric)
22:39:37 <armax> this is giving me a bit of a headache
22:40:00 <armax> for a couple of reasons
22:40:02 <kevinbenton> MLIpam!
22:40:12 <armax> MLMLIPAM
22:40:29 <armax> jokes aside, we have the migration to sort out
22:40:46 <armax> but assumed that’s sorted
22:41:53 <carl_baldwin> I'm not sure how much motivation there is behind this request.
22:42:10 <armax> well I wonder about the actual demand for such a use case
22:42:16 * carl_baldwin passes armax some Ibuprofen
22:42:28 <armax> I need something stronger
22:42:31 <carl_baldwin> armax: Exactly what I'm wondering.
22:42:33 <dougwig> mlml -- a mechanism driver that passes all ethernet frames via openstack-dev@openstack.org.
22:43:04 <armax> to be fair, if we allow for multiple *things* to operate in Neutron
22:43:18 <armax> it’s only natural to extend the concept to IPAM
22:43:19 <njohnston> dougwig: ROTFL
22:43:36 <armax> dougwig: can you write a spec?
22:43:41 <salv-orlando> dougwig: there is already a plugin that sends network requests via email btw
22:44:12 <kevinbenton> Human defined networking!
22:44:26 <salv-orlando> my opinion is that a feature like this should not be implemented just because it's possible
22:44:37 <armax> salv-orlando: +1
22:44:47 <salv-orlando> otherwise we should also do what dougwig suggested: an email based datapath
22:44:48 <salv-orlando> why not
22:44:56 <armax> but do we envision the request down the line?
22:44:56 <kevinbenton> So we shouldn't implement any features that are possible?
22:45:09 <kevinbenton> ;)
22:45:22 <armax> I am not sure DHCP is really key and diversity is really needed
22:45:24 <salv-orlando> armax: If the RFE had a description of a use case we might
22:45:40 <armax> how many IPAM systems does one really need?
22:45:43 <carl_baldwin> I think we all agree.  Let's push back on this one to build a case for it.
22:45:47 <kevinbenton> salv-orl_: +1
22:45:53 <armax> carl_baldwin: +1
22:45:56 <amotoki> carl_baldwin: +1
22:45:59 <salv-orlando> but I don't have a use case, other than selling IPAM from a given vendor as a premium service in a cloud
22:46:05 <kevinbenton> Each tenant should be able to bring their own ipam
22:46:08 <armax> that’s another one
22:46:12 <armax> bug 1544676
22:46:12 <openstack> bug 1544676 in neutron "[RFE] Support for multiple L2 agents on a host" [Wishlist,Triaged] https://launchpad.net/bugs/1544676
22:46:17 <armax> LL2 agent
22:46:30 <armax> mind boggling
22:46:38 <salv-orlando> meta-agent
22:46:39 <kevinbenton> What? Isn't this supported?
22:46:44 <armax> this is about running multiple L2 agents on the same compute
22:46:44 <jniesz> the main reason I submitted the RFE was for upgrades
22:46:58 <salv-orlando> kevinbenton: not supported until we develop a L1 agent and the ML1 plugin
22:47:04 <armax> jniesz: ack
22:47:11 <armax> these are sideways upgrades
22:47:17 <dougwig> so, uhh, i'm not even sure we do *one* very well atm.
22:47:19 <armax> meaning from one technology to another
22:47:24 <armax> dougwig: :)
22:47:25 <armax> right
22:47:32 <kevinbenton> Wait, we already support this with certain agents
22:47:51 <kevinbenton> It sounds like this is a bug in either lb agent or ovs agent
22:48:00 <amotoki> I think we can run sriov-agent and ovs-agent.
22:48:00 <jniesz> or if you want to back a specific network with a particular agent for other reasons
22:48:10 <jniesz> like performance or feature
22:48:12 <kevinbenton> amotoki: +1
22:48:33 <armax> well this is about different L2 agents
22:48:34 <jniesz> the ovs agent will enslave the interface first
22:48:45 <jniesz> if you are running ovs+linux bridge
22:48:51 <kevinbenton> Sr-iov is an L2 agent
22:49:22 <kevinbenton> We need to see what the fix to allow this would entail
22:49:22 <armax> kevinbenton: come on, go beyond my words
22:50:07 <amotoki> in my understanding, in theory we allow multiple l2-agents on the same host.
22:50:11 <amotoki> If it does not work, I think it is just a bug.
22:50:42 <armax> I recall one or two bugs
22:50:50 <armax> where you had one agent and switched to another
22:50:56 <armax> but not two at the same time
22:52:42 <kevinbenton> Let's see how invasive the fix is to see if an rfe is even necessary
22:53:11 <armax> ok I have a feeling that this is going to be messy
22:53:55 <armax> I am not sure this is something we can just try out, I imagine this needs some planning
22:53:58 <armax> and digging
22:54:04 <amotoki> agree. if we allow this we will need some gateway between two types of network backends...
22:55:01 <armax> we also would need to be prescriptive about the migration use cases
22:55:11 <armax> because we can’t simply assume this would work in all possible circumstances
22:56:06 <armax> is this for no downtime migration?
22:56:40 <armax> because it’s not that hard to have a predictable migration path where you unplug and replug the VM’s nic
22:56:42 <jniesz> for avoiding having to dedicate a set of compute hosts to the migration
22:56:56 <armax> you don’t need dedicated hosts to do that
22:57:31 <jniesz> it is possible to cascade
22:57:45 <jniesz> as long as the swing space is there
22:58:29 <armax> I am unclear as to why you’d need more compute resources if you can tolerate some downtime
22:58:48 <armax> which is the safest thing to do in migration circumstances anyway
22:58:57 <armax> we should probably continue the conversation offline
22:59:06 <armax> we’re running short on time
22:59:09 <jniesz> ok
22:59:19 <armax> it’s food for thought
22:59:27 <armax> jniesz: thanks for filing the RFE
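On the coexistence point amotoki and kevinbenton raised: sriov-nic-agent and ovs-agent already run side by side because each manages a disjoint set of physnets. A plausible (untested here) two-agent layout for the ovs+linuxbridge case in this RFE, using the stock ML2 agent options; the physnet and interface names are made up:

    # openvswitch_agent.ini -- the ovs agent owns physnet1 only
    [ovs]
    bridge_mappings = physnet1:br-physnet1

    # linuxbridge_agent.ini -- the linuxbridge agent owns physnet2 only
    [linux_bridge]
    physical_interface_mappings = physnet2:eth2

The failure mode jniesz describes (the ovs agent enslaving the interface first) would surface when both mappings point at the same NIC, which this split avoids.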
22:59:44 <armax> ok, we’ll continue next week from where we left off
22:59:48 <armax> #endmeeting