18:03:33 <SumitNaiksatam> #startmeeting networking_policy
18:03:34 <openstack> Meeting started Thu Nov 12 18:03:33 2015 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:03:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:03:37 <openstack> The meeting name has been set to 'networking_policy'
18:04:09 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Standing_items & https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Nov_12th_2015
18:04:24 <SumitNaiksatam> hemanthravi: do you know if mageshgv is around?
18:04:33 <hemanthravi> let me check
18:05:34 <SumitNaiksatam> i wanted to discuss this patch which was posted today #link https://review.openstack.org/#/c/244746/
18:05:52 <SumitNaiksatam> with reference to #link https://launchpad.net/bugs/1515632
18:05:52 <openstack> Launchpad bug 1515632 in Group Based Policy UI "Clearing the protocol in Classifier is not possible" [Undecided,New]
18:05:53 <hemanthravi> mageshgv not around
18:06:00 <SumitNaiksatam> hemanthravi: okay
18:06:20 <SumitNaiksatam> hemanthravi: i am wondering if the above needs a client-side fix or a backend-side fix
18:07:25 <SumitNaiksatam> hemanthravi: we already allow the protocol to be set to None on the backend, hence any user friendly options which we need to add should ideally go in the client
18:08:00 <SumitNaiksatam> anyway, i think we can follow up in gerrit on this, i just thought if mageshgv was around it might be easier to wrap it up here
18:08:23 <hemanthravi> as opposed to handling "any" in the backend
18:08:24 <hemanthravi> ?
18:08:28 <hemanthravi> will check with him
18:08:31 <SumitNaiksatam> hemanthravi: yes
18:09:07 <mageshgv> hi
18:09:31 <mageshgv> SumitNaiksatam: sorry, am on another call
18:09:34 <SumitNaiksatam> hemanthravi: i believe the issue is that the CLI will not allow you to unset certain attributes today, since it cannot distinguish between a value not specified (during an update) and an actual value of None (used to unset an attribute)
18:09:38 <SumitNaiksatam> mageshgv: np
18:09:58 <SumitNaiksatam> mageshgv: you can read the back scroll whenever you get a chance
18:10:22 <SumitNaiksatam> hemanthravi: if its indeed that issue, then we should update the CLI to accept an “unset” argument
18:10:34 <SumitNaiksatam> hemanthravi: similarly, the UI can have an “any” option
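[For illustration, a minimal sketch of the ambiguity being discussed: with a plain argparse-style parser, an omitted --protocol and an explicit None both surface as None, so an update request cannot tell "leave unchanged" from "unset". The flag name below is the suggestion from this discussion, not the actual gbp CLI.]

    # hypothetical sketch; not the actual python-gbpclient code
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--protocol', default=None)
    # suggested workaround: a dedicated flag that explicitly requests unsetting
    parser.add_argument('--unset-protocol', action='store_true')

    args = parser.parse_args(['--unset-protocol'])
    body = {}
    if args.unset_protocol:
        body['protocol'] = None          # explicit unset -> match any protocol
    elif args.protocol is not None:
        body['protocol'] = args.protocol
    # if neither flag is given, 'protocol' is omitted and left unchanged
    print(body)                          # {'protocol': None}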
18:10:54 <SumitNaiksatam> ok moving on (unless mageshgv wants to discuss this further today)
18:11:02 <mageshgv> SumitNaiksatam:  one question
18:11:12 <SumitNaiksatam> mageshgv: sure
18:11:39 <mageshgv> So we are planning to go with having None as it is today?
18:12:36 <mageshgv> because the cli framework cannot send None as a value to a field as far as I checked
18:12:43 <SumitNaiksatam> mageshgv: my recollection is that, None, as it is today, indicates that all protocols are matched for that classifier (and which i believe is also how SG does it)
18:13:27 <SumitNaiksatam> mageshgv: right, and hence the suggestion to update the CLI to take an extra argument such as “--unset-protocol”
18:14:02 <mageshgv> SumitNaiksatam: Okay, so we implement a similar option for the UI as well ?
18:14:03 <SumitNaiksatam> perhaps there is a better way to phrase that…
18:14:08 <SumitNaiksatam> mageshgv: yes
18:14:26 <SumitNaiksatam> mageshgv: i believe this is how it's done for SG as well
18:14:39 <SumitNaiksatam> mageshgv: and i think we already have the “any” option in the UI, no?
18:14:59 <mageshgv> SumitNaiksatam: actually the "any" doesn't work today
18:15:13 <SumitNaiksatam> mageshgv: ok
18:15:14 <mageshgv> I was trying to do something like this
18:15:17 <mageshgv> #link https://review.openstack.org/244746
18:15:57 <SumitNaiksatam> mageshgv: yes
18:16:38 <mageshgv> SumitNaiksatam: tweaking the cli should work as well and then we can map the "Any" in UI to this new operation
18:16:43 <mageshgv> for update
18:16:43 <SumitNaiksatam> mageshgv: i believe the other problem here is that the convert function here just returns on None
18:16:57 <SumitNaiksatam> mageshgv: right
18:17:18 <SumitNaiksatam> mageshgv: we also have the other bug to allow integer protocol numbers
18:17:21 <mageshgv> SumitNaiksatam: I was trying to preserve the old model and add Any at the same time
18:18:14 <SumitNaiksatam> mageshgv: and the backend seems to support those integer values
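[A possible shape for the convert function under discussion — a sketch assuming the helper could map the user-facing "any" to None and also accept IANA protocol numbers; this is not the code under review.]

    # hypothetical converter: maps the user-facing value to the backend's
    # representation instead of returning early on None
    def convert_protocol(value):
        if value is None:
            return None                  # attribute not specified at all
        value = str(value).strip().lower()
        if value in ('any', ''):
            return None                  # explicit "any" -> None, match all
        if value.isdigit():
            return int(value)            # IANA protocol number, e.g. 6
        return value                     # named protocol such as 'tcp'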
18:18:48 * rkukura will be back in a minute
18:19:01 <mageshgv> SumitNaiksatam: Yes, I went through it too today, looks like we need one change though - the DB has it as an enum
18:19:02 <SumitNaiksatam> mageshgv: so we might need to update the CLI and UI anyway to support the protocol numbers
18:19:17 <SumitNaiksatam> mageshgv: oh bummer!
18:19:26 <mageshgv> SumitNaiksatam: That might make the data migration tricky
18:19:54 <SumitNaiksatam> mageshgv: hmmm
18:20:34 <SumitNaiksatam> mageshgv: need to think a little more as to what's the best way to do it, let's discuss offline
18:21:02 <mageshgv> SumitNaiksatam: ok
18:21:27 <SumitNaiksatam> mageshgv: but we can probably use this patch to fix both (handling of None, and the protocol numbers)
18:21:41 * rkukura is back
18:21:59 <SumitNaiksatam> rkukura: right on time
18:22:03 <mageshgv> SumitNaiksatam: okay
18:22:10 <SumitNaiksatam> rkukura posted this patch #link https://review.openstack.org/#/c/243334/
18:22:54 <rkukura> A migration translating protocol names to numbers should not be difficult.
18:22:57 <SumitNaiksatam> this allows us to bail out of two nasty situations #link https://launchpad.net/bugs/1470646 and #link https://launchpad.net/bugs/1510327
18:22:57 <openstack> Launchpad bug 1470646 in Group Based Policy "Deleting network associated with L2P results in infinite loop" [High,Triaged] - Assigned to Robert Kukura (rkukura)
18:22:58 <openstack> Launchpad bug 1510327 in Group Based Policy "GBP: Deleting groups leads to subnet-delete in infinite loop " [Critical,In progress] - Assigned to Robert Kukura (rkukura)
18:23:32 <rkukura> This patch doesn’t solve either of the problems - it just limits the loop and adds some logging.
18:23:47 <SumitNaiksatam> rkukura: right
18:24:02 <SumitNaiksatam> i wasn't sure that we needed to iterate 100 times before bailing out, but i am not stuck on that
18:24:24 <rkukura> I still don’t even have a theory on what causes bug 1510327
18:24:24 <openstack> bug 1510327 in Group Based Policy "GBP: Deleting groups leads to subnet-delete in infinite loop " [Critical,In progress] https://launchpad.net/bugs/1510327 - Assigned to Robert Kukura (rkukura)
18:24:27 <SumitNaiksatam> if everyone is fine with adding this instrumentation we can merge this patch
18:25:09 <rkukura> I wanted to be safe enough that we didn’t give up before the loop would eventually succeed. 100 is probably more than enough, but I kind of want there to be enough iterations logged that people notice.
18:25:41 <SumitNaiksatam> rkukura: okay
18:25:56 <SumitNaiksatam> ok so moving on, since we have a lot of items to cover
18:26:01 <rkukura> ok
18:26:12 <SumitNaiksatam> i don't have any other critical bugs that i need to bring up here
18:26:24 <SumitNaiksatam> anyone else wants to discuss any specific bugs/patches?
18:26:53 <SumitNaiksatam> there are a few long-standing ones which i want to touch on later in the agenda (hopefully ivar will be around at that time)
18:27:18 <SumitNaiksatam> ok moving on
18:27:28 <SumitNaiksatam> #topic Packaging
18:27:35 <SumitNaiksatam> rkukura: anything new to discuss?
18:28:02 <rkukura> No new developments, but I need to update the fedora packages with the latest releases, and make sure RDO is getting updated.
18:28:04 <SumitNaiksatam> the CLI issue mentioned in the agenda is no longer an issue, i should have removed it
18:28:36 <SumitNaiksatam> rkukura: oh yeah on that, you can perhaps hold off, since we will have a new stable release next monday
18:28:48 <SumitNaiksatam> we found some bugs and fixed them
18:28:49 <rkukura> ok
18:28:56 <SumitNaiksatam> and there are more fixes expected
18:29:09 <SumitNaiksatam> rkukura: but thanks for keeping that on the radar
18:29:21 <rkukura> np
18:29:49 <SumitNaiksatam> #topic Integration/Rally tests
18:29:59 <SumitNaiksatam> during the summit we agreed to make these jobs voting
18:30:13 <SumitNaiksatam> on the rally tests we are waiting for some fixes to the test suite
18:30:20 <SumitNaiksatam> once that happens we will attempt to do this
18:31:06 <SumitNaiksatam> hopefully we can get to the bottom of the “host_ip” not found devstack error as well before that so that we don't have unnecessary false triggers in the gate
18:31:13 <SumitNaiksatam> #topic Docs
18:32:13 <SumitNaiksatam> a few weeks back i discussed that we need to move the API spec and the design documentation in-tree; i still haven't gotten to it, but that's the plan (you will see what i mean when i post the patches)
18:32:25 <SumitNaiksatam> #topic QoS Action
18:32:35 <SumitNaiksatam> igordcard: thanks for bringing this up
18:32:41 <SumitNaiksatam> igordcard: over to you
18:33:18 <igordcard> so I see that QoS is something that GBP wanted to have since the beginning and it makes perfect sense to have it
18:33:28 <SumitNaiksatam> igordcard: right
18:34:32 <igordcard> I would like to help/enable it, but I would first like to hear how you are imagining it to be
18:34:57 <igordcard> oh, and of course, making use of the current Neutron QoS API extensions
18:35:30 <SumitNaiksatam> igordcard: at the most basic level, since it's an action associated with a rule, which has a classifier, the expectation would be that the QoS would be applied to traffic that matches the classifier
18:36:24 <SumitNaiksatam> rest of the team, please feel free to pitch in here with your thoughts
18:36:28 <rkukura> SumitNaiksatam: Wouldn’t we want to be able to apply QoS and redirect to a service chain? Seems like an action could only do one or the other.
18:37:06 <SumitNaiksatam> rkukura: agreed, in some cases we would want to be able to apply more than one action
18:37:17 <igordcard> rkukura, in that case QoS would have to be guaranteed for the whole path of the chain right?
18:37:17 <SumitNaiksatam> rkukura: even redirect is an implicit allow
18:37:53 <SumitNaiksatam> igordcard: right
18:38:08 <SumitNaiksatam> rkukura: igordcard: we actually allow a list of actions for a policy-rule
18:38:17 <SumitNaiksatam> https://github.com/openstack/group-based-policy/blob/master/gbpservice/neutron/extensions/group_policy.py#L603-L606
18:38:39 <SumitNaiksatam> the difficult part is to compose this list if the order matters
18:38:58 <SumitNaiksatam> but i don't think there is any ordering issue with QoS and Redirect
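[For reference, a rule carrying both actions could look roughly like this — a request-body sketch using the list-of-actions provision linked above; the IDs are placeholders and the exact client call is not shown.]

    # hypothetical policy-rule body; both actions imply "allow",
    # so their relative order should not matter
    qos_action_id = 'QOS-ACTION-UUID'            # placeholder
    redirect_action_id = 'REDIRECT-ACTION-UUID'  # placeholder
    web_classifier_id = 'CLASSIFIER-UUID'        # placeholder

    rule_body = {
        'policy_rule': {
            'name': 'web-traffic',
            'policy_classifier_id': web_classifier_id,
            'policy_actions': [qos_action_id, redirect_action_id],
        }
    }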
18:39:11 <rkukura> igordcard: I’m not clear yet on what the neutron QoS API guarantees - not sure if it guarantees at least a certain minimum bandwidth or guarantees a maximum bandwidth won’t be exceeded, or both. This might impact what we need to do within a chain.
18:39:20 <igordcard> so what I could do to kick this off is propose a spec for a new QoS action which, for now, would not "live together" with the redirect action; and then make an implementation for it, leveraging the neutron APIs
18:39:30 <ivar-lazzaro> mmm
18:39:38 <rkukura> SumitNaiksatam: List of actions seems to resolve my concern, and I don’t think order would matter.
18:39:41 * ivar-lazzaro feels something is off
18:39:50 <igordcard> rkukura, the neutron API allows for bandwidth limiting, max rate at least I think
18:40:38 <hemanthravi> if it's enforcing max-rate, we might not need to do anything in the chain since it's being enforced at the entry
18:40:40 <rkukura> igordcard: In that case, limiting it in one place should be fine (rather than at each hop in a chain)
18:41:04 <igordcard> and after this initial QoS basic support we could improve it for the most difficult scenarios, like applying QoS to whole chains
18:41:17 <hemanthravi> +2 ?
18:41:27 <rkukura> igordcard: I agree on starting with something simple
18:41:41 <igordcard> hemanthravi, yes, indeed
18:42:01 <igordcard> #link https://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html
18:42:26 <SumitNaiksatam> we need to revisit what it means to have two rules with the same classifier and different actions in the same PRS
18:42:41 <SumitNaiksatam> because then it leads to ordering issues
18:43:05 <SumitNaiksatam> to get around this we added the list of actions
18:43:22 <ivar-lazzaro> SumitNaiksatam: but they all seem to imply "allow" so far
18:43:47 <SumitNaiksatam> ivar-lazzaro: right, allow is implicit
18:44:10 <SumitNaiksatam> ivar-lazzaro: you had some other conflicting thought earlier?
18:44:18 <ivar-lazzaro> SumitNaiksatam: so reordering might not be needed as long as all the operations get applied
18:44:29 <SumitNaiksatam> ivar-lazzaro: yes
18:44:39 <ivar-lazzaro> Oh yeah, like why did I find myself halfway through the meeting when I joined half an hour early
18:44:47 <ivar-lazzaro> and then I realized, time zones.
18:44:56 <SumitNaiksatam> ivar-lazzaro: you missed the time change
18:45:04 <ivar-lazzaro> yep
18:45:11 <SumitNaiksatam> i sent a reminder earlier today, but i should have probably sent it yesterday
18:45:37 <ivar-lazzaro> it's ok, my bad
18:45:53 <SumitNaiksatam> igordcard: so the proposal is that initially the QoS would not apply to redirect?
18:46:02 <SumitNaiksatam> ivar-lazzaro: np!
18:46:03 <igordcard> SumitNaiksatam, yes
18:46:42 <SumitNaiksatam> igordcard: okay
18:47:04 <ivar-lazzaro> igordcard: does the initial Neutron implementation already have traffic classification?
18:47:39 <ivar-lazzaro> igordcard: or is it just applied per port regardless of the type of traffic?
18:48:10 <igordcard> I might test it out with redirect too, considering that in the case of max rate it should be enough to limit at the source - but I wouldn't commit to that, also because more types of QoS will probably be available in the future
18:48:38 <SumitNaiksatam> igordcard: okay
18:48:43 <igordcard> there should be some way to specify "simultaneous" actions to apply, or something like that
18:49:06 <ivar-lazzaro> I ask because I'm not sure whether we really want this to be applied to an action as opposed to a policy that simply applies to a "group" (Gold/Silver/Bronze groups)
18:49:08 <SumitNaiksatam> igordcard: yes, thats the list of actions provision
18:49:18 <igordcard> ivar-lazzaro, well yeah I was thinking about that and it might be a problem
18:49:38 <ivar-lazzaro> igordcard: so it is port based? no classification?
18:49:39 <SumitNaiksatam> igordcard: ivar-lazzaro has a good question regarding where neutron applies QoS today
18:50:04 <igordcard> ivar-lazzaro, QoS is applied to ports or networks
18:50:39 <SumitNaiksatam> igordcard: that makes sense from a neutron perspective
18:50:45 <ivar-lazzaro> igordcard: ok
18:51:11 <ivar-lazzaro> SumitNaiksatam: I think Miguel's team is working on defining a classifier for Neutron though
18:51:13 <igordcard> if there's no way around that today, then in a first phase no classification will be possible
18:51:33 <ivar-lazzaro> SumitNaiksatam: in order to cover more complex cases (with networking-sfc as well)
18:51:44 <igordcard> ivar-lazzaro, and there is work for a Neutron "flow classifier" coming from the networking-sfc folks
18:51:50 <SumitNaiksatam> ivar-lazzaro: right
18:52:00 <ivar-lazzaro> ajo: Hi there! ^^
18:52:05 <igordcard> ivar-lazzaro, maybe the same then?
18:52:37 <ivar-lazzaro> igordcard: yeah probably
18:53:04 <SumitNaiksatam> igordcard: so you are suggesting that when the QoS action is used the protocol classifier in the rule would always be “any”
18:53:15 <igordcard> both QoS and classification work seem to be moving at high speed
18:53:45 <SumitNaiksatam> igordcard: so Neutron QoS is available in kilo or liberty?
18:53:56 <igordcard> SumitNaiksatam, as a first step, yes, if there isn't a straightforward workaround for now
18:54:09 <igordcard> SumitNaiksatam, it's subject to more investigation on my part
18:54:31 <SumitNaiksatam> igordcard: okay, i am wondering if we can enforce this at the driver level, and keep the API more comprehensive
18:54:36 <SumitNaiksatam> igordcard: sure
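[One way to draw that driver-vs-API line, as a hedged sketch with all names hypothetical: keep QoS actions in the comprehensive API, and have drivers that can only enforce per-port/per-network limits reject classifiers they cannot honor yet.]

    # hypothetical driver-level validation
    def validate_qos_rule(classifier, actions):
        has_qos = any(a.get('action_type') == 'qos' for a in actions)
        if has_qos and classifier.get('protocol') is not None:
            raise NotImplementedError(
                'this driver applies QoS per port/network only; use an '
                '"any" (protocol=None) classifier with QoS actions')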
18:54:50 <igordcard> one of the folks in my team is starting on Neutron QoS contributions so that is a plus for everybody
18:54:56 <ivar-lazzaro> igordcard: one more question, let's say I want to rate limit the traffic that gets to my server: where is the rate limiting applied? The server's port? Neutron's Router? Client's Port?
18:54:57 <SumitNaiksatam> igordcard: nice!!
18:55:06 <ivar-lazzaro> igordcard: ++
18:55:17 <igordcard> SumitNaiksatam, Liberty
18:55:29 <SumitNaiksatam> igordcard: ah, i was hoping it was kilo
18:55:49 <igordcard> SumitNaiksatam, yes I was also envisioning that (driver vs api line)
18:56:09 <SumitNaiksatam> igordcard: ok cool
18:56:49 <SumitNaiksatam> igordcard: ivar-lazzaro’s question ^^^
18:57:37 <igordcard> ivar-lazzaro, so inbound traffic? you can do it for either the client or server port, but I'm not sure now if the limiting is applied to the total traffic - in which case it would be bad if there were other flows
18:57:49 <igordcard> ivar-lazzaro, let me check further and talk to you offline
18:57:58 <ivar-lazzaro> igordcard: ok Thanks
18:58:00 <SumitNaiksatam> igordcard: thanks!
18:58:11 <SumitNaiksatam> we only have a couple of mins
18:58:13 <ivar-lazzaro> The way I see it, we should not be limited to a QoS action alone
18:58:27 <ivar-lazzaro> but we should think about the use cases and start from there
18:58:31 <SumitNaiksatam> #topic Liberty release
18:58:38 <SumitNaiksatam> I was working on a patch to sync with liberty
18:58:51 * tbachman wonders if he got the wrong time for the IRC meeting :(
18:58:54 <SumitNaiksatam> one problem i am facing is that we have dependencies on vendor packages in the UTs
18:59:13 <SumitNaiksatam> tbachman: sorry, missed notifying you!
18:59:21 <tbachman> no worries
18:59:37 <igordcard> ivar-lazzaro, there is a direction attribute in the Neutron QoS API
18:59:37 <SumitNaiksatam> and those vendor packages need to be updated as well
18:59:49 <SumitNaiksatam> that's delaying this sync up activity
19:00:20 <SumitNaiksatam> and we are out of time
19:00:51 <SumitNaiksatam> please take a look at these two specs: #link https://review.openstack.org/242781 and #link https://review.openstack.org/239743 (the latter i believe is in WIP)
19:00:54 <SumitNaiksatam> thanks all!
19:00:59 <SumitNaiksatam> #endmeeting