18:00:42 #startmeeting networking_policy
18:00:43 Meeting started Thu Nov 17 18:00:42 2016 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:46 The meeting name has been set to 'networking_policy'
18:00:53 SumitNaiksatam: hi!
18:01:04 hi
18:01:09 we are back after a bit of a hiatus :-)
18:01:12 igordcard: hi
18:01:38 we have a bit of an open agenda today, since i am not sure if people in the US have adjusted to the change in time
18:01:47 #topic QoS branch/patch
18:01:59 igordcard: thanks for following up on the patch
18:02:19 igordcard: i was on paternity leave, and wasn't able to respond to your question earlier
18:02:24 hi
18:02:37 igordcard: were you able to make progress on the UT failures you were asking about?
18:03:17 SumitNaiksatam: yes I fixed the existing UTs but broke others :|
18:03:28 igordcard: oh
18:03:48 #link https://review.openstack.org/#/c/301701
18:03:53 SumitNaiksatam: so now I'll have to look at those, was completely booked for an event today so we'll have to study the issue later this evening or tomorrow
18:04:02 s/we'll/I'll
18:04:23 igordcard: i think this is still a setup issue
18:04:34 igordcard: seems like a driver is not loaded
18:04:36 the problem is in what extension drivers are being loaded for which test classes
18:04:49 igordcard: right
18:05:00 igordcard: might have been something when you did a rebase and merge
18:05:00 cuz the qos extension driver doesn't seem to be compatible with the aim_extension or apic
18:05:27 is it acceptable to consider qos incompatible with aim_extension?
18:05:28 igordcard: i am guessing that the UT setup part is not initialized correctly
18:05:43 igordcard: yeah it is completely acceptable
18:06:05 SumitNaiksatam: ok
18:06:16 igordcard: i did not realize that you were trying to get those tests to work
18:06:54 igordcard: in that context, let me know if you see an obvious fix for the UT failures you are seeing, otherwise we'll look through what's going on
18:07:00 I need to load the qos extension as locally as possible (down in the inheritance chain of the test classes), or it will affect different sets of tests - from aim
18:07:00 igordcard: no rush though
18:07:04 I doubt the apic_aim mechanism driver configuration is enabling the Neutron-level QoS extension, so I’m not sure using GBP’s QoS on top of that would work
18:07:45 rkukura: right, i don't think it would, and i don't think we are targeting it at this point, right?
18:07:53 SumitNaiksatam: alright; rkukura yeah it's not loading it, and if I try it doesn't seem to load apic_aim
18:08:20 so why would aim_extension be involved in these tests?
18:10:24 rkukura: the aim tests indirectly inherit from GroupPolicyPluginTestBase, and I was setting qos there
18:11:23 that and related settings were preventing it from loading and making some tests fail, but I think I have a similar "noisy test class" now which made another set of tests fail
18:11:28 https://review.openstack.org/#/c/301701/24..25/gbpservice/neutron/tests/unit/services/grouppolicy/test_grouppolicy_plugin.py
18:11:39 SumitNaiksatam: Does this make sense? Seems we would need GBP unit tests to run independently on resource_mapping, apic_mapping, and aim_mapping configs.
18:11:54 (...) from loading, and making some tests fail (...)
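A hedged sketch of the approach being discussed: load the qos extension driver only in the test classes that need it, rather than in a shared base such as GroupPolicyPluginTestBase, so the aim/apic test hierarchies never pick it up. The class and option names below follow the neutron ml2 unit-test pattern and are illustrative, not the exact GBP ones.

```python
# Illustrative only -- not the actual GBP test code.  The idea is to
# scope the qos extension driver to one test hierarchy by overriding
# the config in that class's setUp(), instead of in a shared base class.
from oslo_config import cfg

from neutron.tests.unit.plugins.ml2 import test_plugin as ml2_test


class QoSExtensionTestCase(ml2_test.Ml2PluginV2TestCase):
    # Loaded only for this class and its subclasses; the
    # aim_mapping/apic_mapping test hierarchies never see it.
    _extension_drivers = ['qos']

    def setUp(self):
        cfg.CONF.set_override('extension_drivers',
                              self._extension_drivers,
                              group='ml2')
        super(QoSExtensionTestCase, self).setUp()
```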
18:12:03 rkukura: yeah
18:12:57 rkukura: i am wondering, to make things easier for igordcard, if we could “unload” certain extensions when initializing specific driver-related tests?
18:13:01 maybe we’ve got some aim-specific config in a base test class rather than in a config-specific class
18:13:34 rkukura: or that, but i suspect that would require some surgery to get the right combination of extensions
18:14:18 rkukura: if we could just override the loading of this specific extension from the apic_aim UT initialization that might be easier
18:14:47 i haven't done this before, so i am not sure it can be done, but just thinking out loud
18:15:37 as a more permanent solution, rkukura, i agree we should solve this with a new hierarchy of parent classes for setting up the UT, so that we initialize only the extensions we need to
18:16:11 agreed, and if a shorter-term solution is available, I’m OK with that too
18:16:37 rkukura: okay
18:16:56 igordcard: thanks for the update on this
18:16:56 alright, I should be able to fix it for now for my patch
18:17:03 igordcard: great
18:17:26 #topic Consider HAProxy for the NFP namespace Proxy
18:17:30 songole: hi
18:17:41 hi SumitNaiksatam
18:17:48 songole: glad you could make it
18:17:58 songole: you want to quickly summarize the proposal?
18:18:09 ok
18:18:12 songole: it might be helpful to first summarize the current function of the proxy
18:18:20 and/or how it works
18:18:42 we have a proxy as a means to communicate from the undercloud components to the configurator VM
18:18:50 which is running in the overcloud
18:19:06 songole: so the proxy bridges the undercloud to the overcloud
18:19:16 songole: and it's done over a unix domain socket?
18:19:26 it listens on unix sockets on one end and makes a tcp connection to the configurator
18:19:36 SumitNaiksatam: right
18:19:51 this is a simple proxy written in python
18:19:58 songole: thanks, and this proxy is currently a home-grown GBP component, right?
18:20:10 songole: you answered my question
18:20:10 SumitNaiksatam: correct
18:20:28 the proposal is to replace it with haproxy
18:21:02 haproxy has some advantages.. more reliable, proven
18:21:03 songole: did you run into issues with the current proxy implementation?
18:21:45 SumitNaiksatam: not really.. I am concerned about performance and scale
18:22:10 songole: ah okay, that was my next question
18:22:10 It could also load balance when we have these multiple configurators ..
18:22:45 songole: i definitely think there is a direct benefit from getting the load balancing for free
18:22:47 I am yet to validate.. But I was just trying to broach the idea
18:22:58 songole: i don't know about the performance
18:23:38 songole: i am not sure if haproxy will add any extra processing that you don't need, but i am quite ignorant of this, so i am just speculating
18:23:41 however, we would need to install the haproxy package
18:24:04 songole: so, haproxy would become a GBP dependency?
18:24:28 SumitNaiksatam: good question on additional processing.. We just need L4 LB
18:24:46 SumitNaiksatam: GBP dependency - yes
18:25:03 rkukura: what is your thinking on adding something like haproxy as a GBP package dependency?
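For context on the component being replaced: the proxy described at 18:19:26 is essentially a unix-socket-to-TCP relay. A minimal sketch of that shape in Python follows; the socket path and configurator endpoint are hypothetical placeholders, and this is not the actual NFP proxy code.

```python
# Minimal sketch of a unix-socket-to-TCP relay, illustrating the role
# of the home-grown NFP namespace proxy.  Path and address below are
# hypothetical placeholders, not values from the GBP tree.
import asyncio

SOCKET_PATH = "/var/run/nfp/nfp_proxy.sock"   # hypothetical
CONFIGURATOR = ("192.168.100.10", 8070)       # hypothetical


async def pipe(reader, writer):
    # Copy bytes in one direction until EOF, then close the far side.
    try:
        while True:
            data = await reader.read(4096)
            if not data:
                break
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle(client_reader, client_writer):
    # For each undercloud client, open a TCP connection to the
    # configurator VM and shuttle bytes in both directions.
    remote_reader, remote_writer = await asyncio.open_connection(*CONFIGURATOR)
    await asyncio.gather(
        pipe(client_reader, remote_writer),
        pipe(remote_reader, client_writer),
    )


async def main():
    server = await asyncio.start_unix_server(handle, path=SOCKET_PATH)
    async with server:
        await server.serve_forever()


asyncio.run(main())
```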
18:25:22 If it's already an OpenStack dependency, I see no issue
18:25:29 i guess, firstly we have to ascertain that it will be available across distros
18:25:47 rkukura: i am not aware that it's an openstack dependency
18:25:59 rkukura: as in, in the global requirements, but we can check
18:26:07 we should check that
18:26:14 i doubt it is
18:26:30 rkukura: but i agree with your reasoning
18:26:48 rkukura: so let's say it's not, then how do we go about it?
18:28:16 i did a quick search in the global requirements repo on the string “haproxy”, did not find it
18:28:21 Even if it's not in global requirements, I guess I’d look for any other OpenStack projects that use it
18:28:23 i mean in the requirements
18:28:29 rkukura: okay
18:29:02 This isn’t just a python package, so I’m not sure how the dependency would be expressed
18:29:11 rkukura: good point
18:29:29 so looking in global requirements is probably not the right thing
18:29:34 the lbaas project does use it
18:29:45 I’ve seen some mentions of binary dependencies in email/IRC recently
18:30:13 songole: but like rkukura said, i think the lbaas dependency on haproxy is probably not captured in the requirements
18:30:48 and is probably taken care of at the distro level (and by devstack in the developer env)
18:30:58 SumitNaiksatam: I got it. I don't see it either..
18:31:08 Would UTs have any dependency on haproxy?
18:31:26 rkukura: i don't think so
18:31:51 So it might just be a matter of devstack plugins installing it
18:31:51 i think the UTs related to the current proxy implementation would just go away
18:32:04 songole can correct me
18:32:30 SumitNaiksatam: I would think so
18:32:35 songole: if we were to do this, would it mean that every time we install GBP, the installation would have to somehow install haproxy?
18:33:07 SumitNaiksatam: right
18:33:10 installation -> GBP installer
18:33:16 songole: okay
18:34:00 songole: it seems like we would actually have to go through this exercise once, to understand the dependencies issue, and also the ease of deployment
18:34:21 OpenStack HA docs reference haproxy: http://docs.openstack.org/ha-guide/controller-ha-haproxy.html
18:35:11 For GBP NFP, on what node(s) would haproxy run?
18:35:28 network node
18:35:30 rkukura: network node
18:36:10 songole: if you see a big benefit in doing this, perhaps we can invest the time
18:36:20 ok
18:36:35 SumitNaiksatam: okay
18:36:49 i guess it's a good experiment to try out
18:37:44 songole: we can carry forward the discussion offline, and/or follow up in the upcoming IRC meetings
18:37:52 SumitNaiksatam: ok
18:37:54 songole: thanks for proposing this
18:37:59 #topic Open Discussion
18:38:13 I wasn't able to attend the Barcelona summit
18:38:29 as is probably the case with most other people here
18:38:47 not sure if there were any discussions related to GBP, and/or any follow up was required
18:39:12 I wasn’t there either, but I have not heard of any GBP discussion
18:39:18 rkukura: okay, thanks
18:39:30 Any timeframe for moving GBP master to depend on Neutron newton or master?
18:39:36 we always have this meeting, and the #openstack-gbp channel if anyone wants to follow up
18:39:41 rkukura: :-)
18:39:59 i know we needed to do that like yesterday
18:40:06 but currently i don't have a timeline on this
18:40:21 if anyone is willing to take this on right now, i can definitely work with them
18:40:58 Seems we would move our master to depend on newton, get that comfortably working, create our stable/newton, then move our master to depend on master, right?
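Circling back to the haproxy proposal above: a hedged sketch of what a drop-in replacement config might look like, assuming haproxy runs on the network node, binds the same unix socket the current proxy uses, and forwards to one or more configurator VMs. Every path, address, and port here is an illustrative assumption, not taken from NFP.

```
# Hypothetical haproxy.cfg sketch for the NFP namespace proxy.
global
    daemon
    maxconn 1024

defaults
    mode tcp                  # pure L4 relaying, no protocol parsing
    timeout connect 5s
    timeout client  300s
    timeout server  300s

# Undercloud side: listen on a unix domain socket
frontend nfp_proxy
    bind unix@/var/run/nfp/nfp_proxy.sock
    default_backend configurators

# Overcloud side: TCP to the configurator VM(s); extra server lines
# give the round-robin load balancing discussed above for free
backend configurators
    balance roundrobin
    server configurator1 192.168.100.10:8070 check
    # server configurator2 192.168.100.11:8070 check
```

`mode tcp` matches the point that only L4 load balancing is needed, and the `check` keyword adds backend health checking that the home-grown proxy does not provide.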
18:41:14 rkukura: yes
18:41:47 So at least the target for the first effort would not be a moving target at this point
18:41:57 to be able to cut stable/newton, the non-voting gate jobs should also pass
18:42:07 Meaning the work could start and take as much time as needed to complete
18:42:19 rkukura: true, newton is already out, that makes it a bit easier
18:42:47 rkukura: one thing we need to do is fix the aim job first
18:42:55 good point
18:43:00 "we" meaning the two of us
18:43:16 i think we are flying blind a little bit there
18:43:39 no point in having that job if we don't get it to work :-)
18:43:54 my bad that i wasn't able to fix it earlier
18:44:25 SumitNaiksatam: let's look into it this afternoon if we get a chance
18:44:37 rkukura: sounds good
18:44:56 as a programming note, since there are a few holidays coming up, we might be meeting less often over the next few weeks
18:45:17 we won't be meeting in the week of thanksgiving, and the last two weeks of december
18:45:56 oh, and there is this patch from tbachman: #link https://review.openstack.org/#/c/399126/
18:46:09 songole: if you are still there, you might also want to take a look
18:46:23 SumitNaiksatam: ok. will do
18:46:33 songole: it turns out that tbachman was hitting the same issue you noticed with lbaas in mitaka
18:46:49 songole: and which we worked around by patching
18:47:13 tbachman: thanks for relentlessly pursuing the root cause of this!
18:47:43 tbachman: thank you
18:47:53 tbachman and SumitNaiksatam: I’d like to catch up with you guys on this DB issue as soon as I get in
18:48:00 it appears we didn't run into that issue again
18:48:03 rkukura: okay
18:48:13 songole: after the monkey patch, that is, right?
18:48:39 I was also discussing this general sort of issue with kevinbenton yesterday
18:48:41 no.. even without it.. with a new install of gbp rpms..
18:48:52 songole: ah, interesting
18:48:57 rkukura: okay cool
18:49:18 alrighty, anyone have anything else for today?
18:49:31 that's it for me
18:49:54 rkukura: songole igordcard tbachman: thanks for joining!
18:49:58 bye
18:50:02 #endmeeting