18:00:42 <SumitNaiksatam> #startmeeting networking_policy
18:00:43 <openstack> Meeting started Thu Nov 17 18:00:42 2016 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:46 <openstack> The meeting name has been set to 'networking_policy'
18:00:53 <tbachman> SumitNaiksatam: hi!
18:01:04 <igordcard> hi
18:01:09 <SumitNaiksatam> we are back after a bit of a hiatus :-)
18:01:12 <SumitNaiksatam> igordcard: hi
18:01:38 <SumitNaiksatam> we have a bit of an open agenda today, since i am not sure if people in the US have adjusted to the time change
18:01:47 <SumitNaiksatam> #topic QoS branch/patch
18:01:59 <SumitNaiksatam> igordcard: thanks for following up on the patch
18:02:19 <SumitNaiksatam> igordcard: i was on paternity leave, and wasn't able to respond to your question earlier
18:02:24 <rkukura> hi
18:02:37 <SumitNaiksatam> igordcard: were you able to make progress on the UT failures you were asking about?
18:03:17 <igordcard> SumitNaiksatam: yes I fixed the existing UTs but broke others :|
18:03:28 <SumitNaiksatam> igordcard: oh
18:03:48 <SumitNaiksatam> #link https://review.openstack.org/#/c/301701
18:03:53 <igordcard> SumitNaiksatam: so now I'll have to look at those; I was completely booked for an event today, so I'll have to study the issue later this evening or tomorrow
18:04:23 <SumitNaiksatam> igordcard: i think this is still a setup issue
18:04:34 <SumitNaiksatam> igordcard: seems like a driver is not loaded
18:04:36 <igordcard> the problem is in what extension drivers are being loaded for which test classes
18:04:49 <SumitNaiksatam> igordcard: right
18:05:00 <SumitNaiksatam> igordcard: might have been something when you did a rebase and merge
18:05:00 <igordcard> because the qos extension driver doesn't seem to be compatible with the aim_extension or apic
18:05:27 <igordcard> is it acceptable to consider qos incompatible with aim_extension?
18:05:28 <SumitNaiksatam> igordcard: i am guessing that the UT setup part is not initialized correctly
18:05:43 <SumitNaiksatam> igordcard: yeah it is completely acceptable
18:06:05 <igordcard> SumitNaiksatam: ok
18:06:16 <SumitNaiksatam> igordcard: i did not realize that you were trying to get those tests to work
18:06:54 <SumitNaiksatam> igordcard: in that context, let me know if you see an obvious fix for the UT failures you are seeing, otherwise we can look through what's going on
18:07:00 <igordcard> I need to load the qos extension as locally as possible (down in the inheritance chain of the test classes), or it will affect different sets of tests - the aim ones
18:07:00 <SumitNaiksatam> igordcard: no rush though
18:07:04 <rkukura> I doubt the apic_aim mechanism driver configuration is enabling the Neutron-level QoS extension, so I’m not sure using GBP’s QoS on top of that would work
18:07:45 <SumitNaiksatam> rkukura: right, i don't think it would, and i don't think we are targeting it at this point, right?
18:07:53 <igordcard> SumitNaiksatam: alright; rkukura yeah it's not loading it, and if I try it doesn't seem to load apic_aim
18:08:20 <rkukura> so why would aim_extension be involved in these tests?
18:10:24 <igordcard> rkukura: the aim tests are indirectly inherited from GroupPolicyPluginTestBase, and I was setting qos there
18:11:23 <igordcard> that and related settings were preventing it from loading, and making some tests fail, but I think I have a similar "noisy test class" now which made another set of tests fail
18:11:28 <igordcard> https://review.openstack.org/#/c/301701/24..25/gbpservice/neutron/tests/unit/services/grouppolicy/test_grouppolicy_plugin.py
18:11:39 <rkukura> SumitNaiksatam: Does this make sense? Seems we would need GBP unit tests to run independently on resource_mapping, apic_mapping, and aim_mapping configs.
18:12:03 <SumitNaiksatam> rkukura: yeah
18:12:57 <SumitNaiksatam> rkukura: i am wondering, to make things easier for igordcard, if we could “unload” certain extensions when initializing specific driver-related tests?
18:13:01 <rkukura> maybe we’ve got some aim-specific config in a base test class rather than in a config-specific class
18:13:34 <SumitNaiksatam> rkukura: or that, but i suspect that would require some surgery to get the right combination of extensions
18:14:18 <SumitNaiksatam> rkukura: if we could just override the loading of this specific extension from the apic_aim UT initialization that might be easier
18:14:47 <SumitNaiksatam> i haven't done this before, so i am not sure it can be done, but just thinking out loud
18:15:37 <SumitNaiksatam> as a more permanent solution, rkukura, i agree we should solve this with a new hierarchy of parent classes for setting up the UT, so that we initialize only the extensions we need to
18:16:11 <rkukura> agreed, and if a shorter-term solution is available, I’m OK with that too
18:16:37 <SumitNaiksatam> rkukura: okay
18:16:56 <SumitNaiksatam> igordcard: thanks for the update on this
18:16:56 <igordcard> alright, I should be able to fix it for now in my patch
18:17:03 <SumitNaiksatam> igordcard: great
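For reference, a minimal sketch of the short-term per-class override discussed above, assuming oslo.config drives the extension driver list; the option name ('extension_drivers' under the [group_policy] group) and the test class names are illustrative assumptions, not verified against the GBP tree:

```python
# Hypothetical sketch: load the qos extension driver only in the test
# classes that need it, rather than in the shared GroupPolicyPluginTestBase,
# so the aim/apic test classes inheriting the same base are unaffected.
from oslo_config import cfg


class QoSGroupPolicyTestCase(GroupPolicyPluginTestBase):  # assumed base class

    def setUp(self):
        # Override the extension driver list locally (per test class),
        # mirroring the "load it as locally as possible" approach.
        cfg.CONF.set_override(
            'extension_drivers', ['qos'], group='group_policy')
        # Restore the default afterwards so later test classes see a
        # clean configuration.
        self.addCleanup(cfg.CONF.clear_override,
                        'extension_drivers', group='group_policy')
        super(QoSGroupPolicyTestCase, self).setUp()
```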
18:17:26 <SumitNaiksatam> #topic Consider HAProxy for the NFP namespace Proxy
18:17:30 <SumitNaiksatam> songole: hi
18:17:41 <songole> hi SumitNaiksatam
18:17:48 <SumitNaiksatam> songole: glad you could make it
18:17:58 <SumitNaiksatam> songole: do you want to quickly summarize the proposal?
18:18:09 <songole> ok
18:18:12 <SumitNaiksatam> songole: it might be helpful to first summarize the current function of the proxy
18:18:20 <SumitNaiksatam> and/or how it works
18:18:42 <songole> we have a proxy as a means for the undercloud components to communicate with the configurator VM
18:18:50 <songole> which is running in the overcloud
18:19:06 <SumitNaiksatam> songole: so the proxy bridges the undercloud to the overcloud
18:19:16 <SumitNaiksatam> songole: and it's done over a unix domain socket?
18:19:26 <songole> it listens on a unix socket on one end and makes a TCP connection to the configurator
18:19:36 <songole> SumitNaiksatam: right
18:19:51 <songole> this is a simple proxy written in python
18:19:58 <SumitNaiksatam> songole: thanks, and this proxy is currently a home-grown GBP component, right?
18:20:10 <SumitNaiksatam> songole: you answered my question
18:20:10 <songole> SumitNaiksatam: correct
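For context, a minimal sketch of the kind of unix-socket-to-TCP bridge described above; the socket path, configurator address, and threading model are illustrative assumptions, not the actual NFP proxy implementation:

```python
# Minimal sketch of a unix-socket-to-TCP proxy like the one described
# above. The socket path and configurator address are assumed values.
import os
import socket
import threading

UNIX_SOCKET_PATH = '/var/run/nfp_proxy.sock'  # assumed path
CONFIGURATOR_ADDR = ('192.168.100.10', 8070)  # assumed configurator endpoint


def pipe(src, dst):
    # Copy bytes one way until the source closes, then half-close the
    # destination so the peer sees EOF.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except socket.error:
        pass


def handle(client):
    # For each undercloud client, open a TCP connection to the
    # configurator VM and splice the two sockets together.
    upstream = socket.create_connection(CONFIGURATOR_ADDR)
    threading.Thread(target=pipe, args=(client, upstream)).start()
    threading.Thread(target=pipe, args=(upstream, client)).start()


if os.path.exists(UNIX_SOCKET_PATH):
    os.unlink(UNIX_SOCKET_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(UNIX_SOCKET_PATH)
server.listen(5)
while True:
    conn, _addr = server.accept()
    handle(conn)
```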
18:20:28 <songole> the proposal is to replace it with haproxy
18:21:02 <songole> haproxy has some advantages.. it's more reliable and proven
18:21:03 <SumitNaiksatam> songole: did you run into issues with the current proxy implementation?
18:21:45 <songole> SumitNaiksatam: not really.. I am concerned about performance and scale
18:22:10 <SumitNaiksatam> songole: ah okay, that was my next question
18:22:10 <songole> It could also load balance when we have multiple configurators ..
18:22:45 <SumitNaiksatam> songole: i definitely think there is a direct benefit from getting the load balancing for free
18:22:47 <songole> I have yet to validate it.. but I was just trying to broach the idea
18:22:58 <SumitNaiksatam> songole: i dont know about the performance
18:23:38 <SumitNaiksatam> songole: i am not sure if haproxy will add any extra processing that you don't need, but i am quite ignorant of this, so i am just speculating
18:23:41 <songole> however, we need to install the haproxy package
18:24:04 <SumitNaiksatam> songole: so, haproxy would become a GBP dependency?
18:24:28 <songole> SumitNaiksatam: good question on additional processing.. We just need L4 LB
18:24:46 <songole> SumitNaiksatam: GBP dependency - yes
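For illustration, a hedged haproxy.cfg sketch of what the replacement could look like: a TCP (L4) frontend on the same unix socket, load balanced across multiple configurator instances. The socket path and backend addresses are assumptions, not actual NFP values:

```
# Hypothetical haproxy configuration sketch. It keeps the unix-socket
# frontend the current Python proxy provides and balances plain TCP
# across configurator instances. Paths/addresses are illustrative.
defaults
    mode tcp
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend nfp_proxy
    bind unix@/var/run/nfp_proxy.sock
    default_backend configurators

backend configurators
    balance roundrobin
    server configurator1 192.168.100.10:8070 check
    server configurator2 192.168.100.11:8070 check
```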
18:25:03 <SumitNaiksatam> rkukura: what is your thinking on adding something like haproxy as a GBP package dependency?
18:25:22 <rkukura> If it's already an OpenStack dependency, I see no issue
18:25:29 <SumitNaiksatam> i guess, firstly we have to ascertain that it will be available across distros
18:25:47 <SumitNaiksatam> rkukura: i am not aware that it's an openstack dependency
18:25:59 <SumitNaiksatam> rkukura: as in, in the global requirements, but we can check
18:26:07 <rkukura> we should check that
18:26:14 <SumitNaiksatam> i doubt it is
18:26:30 <SumitNaiksatam> rkukura: but i agree with your reasoning
18:26:48 <SumitNaiksatam> rkukura: so lets say its not, then how do we go about it?
18:28:16 <SumitNaiksatam> i did a quick search in the global requirements repo on the string “haproxy”, but did not find it
18:28:21 <rkukura> Even if it's not in global requirements, I guess I’d look for any other OpenStack projects that use it
18:28:23 <SumitNaiksatam> i mean in the requirements
18:28:29 <SumitNaiksatam> rkukura: okay
18:29:02 <rkukura> This isn’t just a python package, so I’m not sure how the dependency would be expressed
18:29:11 <SumitNaiksatam> rkukura: good point
18:29:29 <SumitNaiksatam> so looking in global requirements is probably not the right thing
18:29:34 <songole> lbaas project does use it
18:29:45 <rkukura> I’ve seen some mentions of binary dependencies in email/IRC recently
18:30:13 <SumitNaiksatam> songole: but like rkukura said, i think the lbaas dependency on haproxy is probably not captured in the requirements
18:30:48 <SumitNaiksatam> and is probably taken care of at the distro level (and by devstack in a developer env)
18:30:58 <songole> SumitNaiksatam: I got it. I don't see it either..
18:31:08 <rkukura> Would UTs have any dependency on haproxy?
18:31:26 <SumitNaiksatam> rkukura: i dont think so
18:31:51 <rkukura> So it might just be a matter of devstack plugins installing it
18:31:51 <SumitNaiksatam> i think the UTs related to the current proxy implementation would just go away
18:32:04 <SumitNaiksatam> songole can correct me
18:32:30 <songole> SumitNaiksatam: I would think so
18:32:35 <SumitNaiksatam> songole: if we were to do this, would it mean that every time we install GBP, the installation would have to somehow install haproxy?
18:33:07 <songole> SumitNaiksatam: right
18:33:10 <SumitNaiksatam> installation -> GBP installer
18:33:16 <SumitNaiksatam> songole: okay
18:34:00 <SumitNaiksatam> songole: it seems like we would actually have to go through this exercise once, to understand the dependencies issue, and also the ease of deployment
18:34:21 <rkukura> OpenStack HA docs reference haproxy: http://docs.openstack.org/ha-guide/controller-ha-haproxy.html
18:35:11 <rkukura> For GBP NFP, on what node(s) would haproxy run?
18:35:28 <SumitNaiksatam> network node
18:35:30 <songole> rkukura: network node
18:36:10 <SumitNaiksatam> songole: if you see a big benefit in doing this, perhaps we can invest time
18:36:20 <rkukura> ok
18:36:35 <songole> SumitNaiksatam: okay
18:36:49 <SumitNaiksatam> i guess it's a good experiment to try out
18:37:44 <SumitNaiksatam> songole: we can carry forward the discussion offline, and/or follow up in the upcoming IRC meetings
18:37:52 <songole> SumitNaiksatam: ok
18:37:54 <SumitNaiksatam> songole: thanks for proposing this
18:37:59 <SumitNaiksatam> #topic Open Discussion
18:38:13 <SumitNaiksatam> I wasn't able to attend the Barcelona summit
18:38:29 <SumitNaiksatam> as is probably the case with most other people here
18:38:47 <SumitNaiksatam> not sure if there were any discussions related to GBP, and/or any follow up was required
18:39:12 <rkukura> I wasn’t there either, but I have not heard of any GBP discussion
18:39:18 <SumitNaiksatam> rkukura: okay, thanks
18:39:30 <rkukura> Any timeframe for moving GBP master to depend on Neutron newton or master?
18:39:36 <SumitNaiksatam> we always have this meeting, and the #openstack-gbp channel if anyone wants to follow up
18:39:41 <SumitNaiksatam> rkukura: :-)
18:39:59 <SumitNaiksatam> i know we needed to do that like yesterday
18:40:06 <SumitNaiksatam> but currently i don't have a timeline on this
18:40:21 <SumitNaiksatam> if anyone is willing to take this on right now, i can definitely work with them
18:40:58 <rkukura> Seems we would move our master to depend on newton, get that comfortably working, create our stable/newton, then move our master to depend on master, right?
18:41:14 <SumitNaiksatam> rkukura: yes
18:41:47 <rkukura> So at least the target for the first effort would not be a moving target at this point
18:41:57 <SumitNaiksatam> to be able to cut stable/newton the non-voting gate jobs should also pass
18:42:07 <rkukura> Meaning the work could start and take as much time as needed to complete
18:42:19 <SumitNaiksatam> rkukura: true, newton is already out, that makes it a bit easier
18:42:47 <SumitNaiksatam> rkukura: one thing we need to do is fix the aim job first
18:42:55 <rkukura> good point
18:43:00 <SumitNaiksatam> “we” meaning the two of us
18:43:16 <SumitNaiksatam> i think we are flying blind a little bit there
18:43:39 <SumitNaiksatam> no point in having that job if we don't get it to work :-)
18:43:54 <SumitNaiksatam> my bad that i wasn't able to fix it earlier
18:44:25 <rkukura> SumitNaiksatam: let's look into it this afternoon if we get a chance
18:44:37 <SumitNaiksatam> rkukura: sounds good
18:44:56 <SumitNaiksatam> as a programming note, since there are a few holidays coming up, we might be meeting less often over the next few weeks
18:45:17 <SumitNaiksatam> we won't be meeting in the week of thanksgiving, and the last two weeks of december
18:45:56 <SumitNaiksatam> oh, and there is this patch from tbachman: #link https://review.openstack.org/#/c/399126/
18:46:09 <SumitNaiksatam> songole: if you are still there, you might also want to take a look
18:46:23 <songole> SumitNaiksatam: ok. will do
18:46:33 <SumitNaiksatam> songole: it turns out that tbachman was hitting the same issue you noticed with lbaas in mitaka
18:46:49 <SumitNaiksatam> songole: and which we worked around by patching
18:47:13 <SumitNaiksatam> tbachman: thanks for relentlessly pursuing the root cause of this!
18:47:43 <songole> tbachman: thank you
18:47:53 <rkukura> tbachman and SumitNaiksatam: I’d like to catch up with you guys on this DB issue as soon as I get in
18:48:00 <songole> it appears we didn't run into that issue again
18:48:03 <SumitNaiksatam> rkukura: okay
18:48:13 <SumitNaiksatam> songole: after the monkey patch, that is, right?
18:48:39 <rkukura> I was also discussing this general sort of issue with kevinbenton yesterday
18:48:41 <songole> no.. even without it.. with a new install of gbp rpms..
18:48:52 <SumitNaiksatam> songole: ah, interesting
18:48:57 <SumitNaiksatam> rkukura: okay cool
18:49:18 <SumitNaiksatam> alrighty, anyone have anything else for today?
18:49:31 <rkukura> that’s it for me
18:49:54 <SumitNaiksatam> rkukura: songole igordcard tbachman: thanks for joining!
18:49:58 <SumitNaiksatam> bye
18:50:02 <SumitNaiksatam> #endmeeting