18:01:09 <SumitNaiksatam> #startmeeting networking_policy
18:01:10 <openstack> Meeting started Thu Oct 20 18:01:09 2016 UTC and is due to finish in 60 minutes.  The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:12 <igordcard> hi
18:01:13 <tbachman> SumitNaiksatam: hi!
18:01:14 <openstack> The meeting name has been set to 'networking_policy'
18:01:18 <ivar-lazzaro> hi
18:01:26 <hemanthravi> hi
18:01:35 <SumitNaiksatam> #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Oct_20th_2016
18:02:00 <SumitNaiksatam> any critical bugs or standing items we need to discuss before we get into the NFP patches?
18:02:01 <jagadish> hi
18:02:28 <SumitNaiksatam> #topic NFP Pending reviews
18:02:38 <SumitNaiksatam> #link https://review.openstack.org/#/q/topic:bp/gbp-network-services-framework+status:open
18:03:00 <SumitNaiksatam> so before we get to those, i wanted to check on the progress for fixing the NFP gate job
18:03:08 <SumitNaiksatam> it still seems to be failing intermittently
18:03:16 <SumitNaiksatam> jagadish: how are we doing on that?
18:04:03 <SumitNaiksatam> or maybe hemanthravi?
18:04:04 <jagadish> The gate passes on the top most change set which is https://review.openstack.org/#/c/359883/57
18:04:20 <SumitNaiksatam> jagadish: ah okay, nice
18:04:42 <hemanthravi> jagadish: ashutosh: are all the comments from subra addressed
18:04:44 <jagadish> It fails on the bottom change sets because of a dependency of nfp.ini file
18:04:52 <SumitNaiksatam> jagadish: that should be fine
18:05:05 <SumitNaiksatam> jagadish: i have just rechecked, to see if it passes consistently
18:05:15 <SumitNaiksatam> jagadish: hemanthravi’s question ^^^
18:05:37 <jagadish> hemanth: Yes. we are in the process of addressing them. Should be done in a day
18:05:50 <SumitNaiksatam> jagadish: okay, good
18:06:26 <SumitNaiksatam> jagadish: so for the benefit of the rest of the team, can you please let us know what was the reason for the NFP job failures?
18:06:37 <hemanthravi> SumitNaiksatam: once jagadish and ashutosh are done addressing these subra and i'll review and +2 these
18:07:15 <SumitNaiksatam> hemanthravi: okay, for the benefit of the team, i would like to spend a few minutes discussing here what is going into the patches
18:07:32 <jagadish> SumitNaiksatam: The old configuration files are unified to make a single ini. These files are deleted in the bottom change set and hence the failure.
18:08:03 <SumitNaiksatam> jagadish: however the failure is seen on other patches which are not part of this change too
18:08:28 <jagadish> SumitNaiksatam: The unified ini file is added on the top most changeset
18:09:07 <SumitNaiksatam> jagadish: yeah, but i am talking about other GBP patches which are not related to NFP or this chain of patches, and which keep intermittently failing the NFP job
18:09:42 <SumitNaiksatam> jagadish: so far we have been ignoring the failure and merging the patches, however its pointless to have the job if we cannot use it
18:09:51 <SumitNaiksatam> jagadish: a few weeks back OSM had explained it had to do with something transient in the NFP, i dont recall though
18:10:29 <SumitNaiksatam> jagadish: for instance, check this patch #link https://review.openstack.org/#/c/384282/
18:10:37 <jagadish> SumitNaiksatam: The gate failed a few days back because of an old patch number in the local.conf.nfp file. After fixing it with the latest patch number, it started working
18:10:53 <SumitNaiksatam> jagadish: see the patch above, its not related to NFP
18:11:30 <jagadish> SumitNaiksatam: Right. This is not related to NFP but still fails
18:11:39 <SumitNaiksatam> jagadish: all the exercise scripts are failing
18:12:07 <jagadish> SumitNaiksatam: Need to look at this to find out the reason for failure
18:12:39 <SumitNaiksatam> jagadish: okay, this has been happening for a few weeks now, and OSM had mentioned that someone was looking into this
18:13:06 <SumitNaiksatam> anyway, jagadish, ashutosh: lets try to fix this this week at the earliest
18:13:27 <SumitNaiksatam> perhaps your new patches fix this anyway, but lets confirm that
18:13:44 <jagadish> SumitNaiksatam: Sure. We will take a look at this.
18:13:48 <SumitNaiksatam> jagadish: thanks
18:14:03 <SumitNaiksatam> other than that, can you please provide a quick summary of what each patch is doing
18:14:36 <SumitNaiksatam> most of them talk about enhancements and some patches are really too big to review without any context
18:14:48 <SumitNaiksatam> it will help if you can set some context here for the team
18:15:11 <jagadish> SumitNaiksatam: Ok. I will start from bottom most changeset and go towards the top one
18:15:26 <SumitNaiksatam> jagadish: yes, perfect
18:15:35 <jagadish> SumitNaiksatam: https://review.openstack.org/#/c/359856/21
18:16:18 <jagadish> SumitNaiksatam: Configuration is read from single ini file instead of multiple files
18:16:47 <SumitNaiksatam> jagadish: okay, this touches the initial framework which ahmed had added, right?
18:16:49 <jagadish> and the core is enhanced to load the modules from multiple paths
18:16:53 <jagadish> yes
18:17:04 <jagadish> SumitNaiksatam: yes
18:17:04 <SumitNaiksatam> jagadish: okay, so this is not touching any of the queueing stuff
18:17:33 <SumitNaiksatam> oh wait, i do see changes to the worker
18:17:50 <jagadish> another change is to post an event to specific module
18:18:55 <SumitNaiksatam> ah okay, so that’s the worker change
18:19:02 <jagadish> This is where the events are sent by specifying the module name
18:19:32 <jagadish> yes
18:19:51 <jagadish> this change spans the worker, event and other files
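[For readers following along, a minimal sketch of what posting an event to a specific module could look like; the Event class and dispatch function below are illustrative stand-ins, not the actual NFP core API.]

```python
# Illustrative sketch only (hypothetical names): delivering an event to one
# named module instead of broadcasting it to every registered module.
class Event(object):
    def __init__(self, id, data, target_module=None):
        self.id = id                         # e.g. 'CREATE_NETWORK_FUNCTION'
        self.data = data                     # payload handed to the handler
        self.target_module = target_module   # when set, only this module gets it


def post_event(modules, event):
    """modules: dict mapping module name -> module object with handle_event()."""
    if event.target_module:
        handlers = [modules[event.target_module]]
    else:
        handlers = list(modules.values())
    for handler in handlers:
        handler.handle_event(event)
```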
18:20:13 <SumitNaiksatam> consolidating the conf files is a very good thing in my opinion
18:20:34 <SumitNaiksatam> okay, so next one #link https://review.openstack.org/#/c/359862
18:20:36 <jagadish> ok
18:21:10 <jagadish> The next one has all the changes related to orchestrator
18:21:36 <jagadish> A configuration for supported vendors has been added
18:21:42 <SumitNaiksatam> what are the “delete path optimizations”?
18:22:33 <ashutosh> delete path optimizations are where we go down parallel paths to optimize the delete path
18:23:00 <ashutosh> For example, device config deletion and user config deletion go down parallel paths to optimize it
18:23:34 <SumitNaiksatam> ashutosh: ah okay, so you are introducing an optimization in how the delete is being processed
18:23:40 <SumitNaiksatam> i was reading it all wrong
18:24:15 <SumitNaiksatam> jagadish: where is this “supported_vendors” config defined?
18:24:18 <jagadish> To add to what ashutosh mentioned, it processes the events in delete path in parallel
18:24:25 <ashutosh> Initially we were doing it in a serialized way, where device deletion happened before user config deletion
18:24:33 <hemanthravi> ashutosh: uses different worker processes to send these requests concurrently to over the cloud, right?
18:24:49 <ashutosh> yes
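[A rough sketch of the delete-path parallelism described above, approximated here with concurrent.futures; in NFP the concurrency comes from separate worker processes, and both delete helpers are hypothetical placeholders.]

```python
# Sketch of the optimization under discussion, using concurrent.futures for
# brevity; the real implementation uses NFP worker processes, and both delete
# helpers below are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor, wait


def delete_device_config(network_function_id):
    pass  # tear down the device-side (service VM) configuration


def delete_user_config(network_function_id):
    pass  # remove the user-facing service configuration


def delete_network_function(network_function_id):
    # Previously these ran serially (device config, then user config);
    # issuing them concurrently shortens the overall delete path.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(delete_device_config, network_function_id),
                   pool.submit(delete_user_config, network_function_id)]
        wait(futures)
```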
18:24:54 <jagadish> SumitNaiksatam: The supported vendors configuration is provided in the nfp.ini file
18:25:25 <SumitNaiksatam> hmmm, but thats not part of this change
18:25:43 <SumitNaiksatam> you would be defining the config elements in this change, no?
18:26:20 <jagadish> This nfp.ini is submitted in topmost change set
18:26:55 <SumitNaiksatam> jagadish: okay, but somewhere in your code you would need to define and “read” that config, right?
18:26:59 <SumitNaiksatam> i am trying to look for that
18:27:04 <jagadish> https://review.openstack.org/#/c/359883/57/gbpservice/nfp/bin/nfp.ini
18:27:18 <ashutosh> Corresponding config is present in orchestrator/modules/__init__.py file
18:27:40 <SumitNaiksatam> ashutosh: got it, i wasnt looking at the init file
18:28:08 <SumitNaiksatam> and you are using the __init__ because you want it to get loaded right at the beginning?
18:28:10 <jagadish> SumitNaiksatam: The supported_vendors configuration parameter is there at line number 14 in https://review.openstack.org/#/c/359883/57/gbpservice/nfp/bin/nfp.ini
18:28:19 <ashutosh> Reading is done at https://review.openstack.org/#/c/359862/22/gbpservice/nfp/orchestrator/modules/service_orchestrator.py@721
18:28:54 <ashutosh> Yes, all these are the basic parameters which the nfp orchestrator component is going to use
18:29:00 <SumitNaiksatam> jagadish: ashutosh: okay
18:29:15 <SumitNaiksatam> the supported vendors list does not have lbaasv1?
18:29:29 <SumitNaiksatam> oh, thats haproxy, i guess
18:29:52 <jagadish> Both lbaasv1 and v2 are with haproxy vendor
18:30:02 <jagadish> SumitNaiksatam: Yes
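[A hedged sketch of how a supported_vendors option could be defined and read with oslo.config; the group name, option name, and default vendor list are assumptions for illustration, not the exact definitions in the patch.]

```python
# Sketch only: registering and reading a supported_vendors list with
# oslo.config. Group name, option name, and defaults are assumptions.
from oslo_config import cfg

orchestrator_opts = [
    cfg.ListOpt('supported_vendors',
                default=['haproxy', 'vyos'],
                help='Service vendors the NFP orchestrator will manage.'),
]
cfg.CONF.register_opts(orchestrator_opts, group='orchestrator')


def is_supported(vendor):
    # Values would come from nfp.ini, e.g.
    #   [orchestrator]
    #   supported_vendors = haproxy,vyos
    return vendor.lower() in cfg.CONF.orchestrator.supported_vendors
```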
18:30:13 <SumitNaiksatam> there seem to be a lot of magic numbers in constants.py
18:30:40 <SumitNaiksatam> those have been derived empirically?
18:30:55 <jagadish> SumitNaiksatam: These are added for various timeout values
18:31:09 <jagadish> They are derived based on the testing done
18:31:16 <SumitNaiksatam> jagadish: okay
18:31:33 <SumitNaiksatam> jagadish: so none of this needs to change based on the scale of a deployment?
18:31:52 <SumitNaiksatam> especially L116-117
18:32:09 <jagadish> SumitNaiksatam: These might need to be changed based on the scale of a deployment
18:32:20 <jagadish> So, we need to make these configurable in future
18:32:25 <SumitNaiksatam> jagadish: so may be helpful to read from config
18:32:26 <SumitNaiksatam> yeah
18:32:58 <jagadish> A REVISIT will be added for these in current patches
18:33:43 <SumitNaiksatam> jagadish: hmmm, seems like a small enough change to make it config driven, no?
18:33:50 <SumitNaiksatam> i will leave it to you
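[A hedged sketch of what making those timeouts config-driven might look like with oslo.config; the option names, group, and defaults are illustrative, not the values in constants.py.]

```python
# Sketch only: moving hard-coded timeout constants into nfp.ini via
# oslo.config. Option names, group, and defaults are illustrative.
from oslo_config import cfg

timeout_opts = [
    cfg.IntOpt('device_create_timeout', default=600,
               help='Seconds to wait for a service VM to become ACTIVE.'),
    cfg.IntOpt('config_apply_timeout', default=300,
               help='Seconds to wait for a service configuration to apply.'),
]
cfg.CONF.register_opts(timeout_opts, group='nfp_timeouts')

# Callers would then read cfg.CONF.nfp_timeouts.device_create_timeout instead
# of importing a module-level constant, so operators can tune it per
# deployment scale.
```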
18:33:58 <SumitNaiksatam> next patch #link https://review.openstack.org/#/c/359864
18:34:44 <jagadish> SumitNaiksatam: This change set has proxy, config orchestrator and node driver changes
18:35:09 <SumitNaiksatam> jagadish: what is the context of the change to support fw+vpn chain?
18:35:12 <jagadish> Primarily has the delete path optimization changes
18:35:56 <SumitNaiksatam> okay
18:36:48 <ashutosh> In the fw+vpn chain, as we depend on the gateway ports which the plumber provides, we need to use the same gateway ports for both fw and vpn
18:37:00 <jagadish> And there are fixes related to vpn service
18:37:21 <SumitNaiksatam> ashutosh: jagadish: okay
18:37:42 <jagadish> For VPN, we eliminated some unnecessary data being sent from under the cloud to over the cloud
18:37:49 <SumitNaiksatam> jagadish: okay
18:38:06 <SumitNaiksatam> so the UTs did not need to be updated to reflect these changes?
18:39:16 <jagadish> Some fields in the dict have been removed. So, no UT change is needed here
18:39:26 <jagadish> the functional arguments remain the same
18:39:48 <SumitNaiksatam> jagadish: okay
18:40:06 <SumitNaiksatam> next patch #link https://review.openstack.org/#/c/359873
18:40:36 <jagadish> The next patch changes are related to controller, over the cloud
18:41:29 <jagadish> Primary changes are (1) fixed the issue of losing the RPC messages when controller restarts
18:41:50 <jagadish> and (2) Update API support for firewall and LB
18:42:13 <SumitNaiksatam> jagadish: what is the “pika RPC library”?
18:43:00 <jagadish> This is an AMQP messaging library
18:43:49 <SumitNaiksatam> okay
18:44:00 <jagadish> We preferred this as the oslo library does not provide the flexibility to get messages synchronously from the RPC queues
18:44:09 <SumitNaiksatam> okay
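[For context, a small sketch of the synchronous retrieval pika enables via basic_get, which is the flexibility referred to above; the host, queue name, and handler are illustrative.]

```python
# Sketch only: pulling a single message synchronously from an AMQP queue with
# pika's basic_get. Host, queue name, and the handler are illustrative.
import pika


def handle_notification(body):
    print('received notification: %r' % body)  # hypothetical handler


connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Fetch one message on demand instead of registering an asynchronous callback.
method, properties, body = channel.basic_get('nfp_notifications')
if method is not None:
    handle_notification(body)
    channel.basic_ack(method.delivery_tag)

connection.close()
```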
18:44:21 <SumitNaiksatam> but how will this library get installed?
18:44:38 <SumitNaiksatam> i dont see any requirements dependency mentioned
18:44:49 <jagadish> This is needed inside the controller VM
18:44:57 <jagadish> So, gets built along with it
18:45:01 <SumitNaiksatam> ah okay
18:45:08 <SumitNaiksatam> in this patch, where is it being used?
18:45:28 <jagadish> Let me get that
18:45:52 <ashutosh> https://review.openstack.org/#/c/359873/47/gbpservice/contrib/nfp/configurator/advanced_controller/controller.py@20
18:46:10 <jagadish> https://review.openstack.org/#/c/359873/47/gbpservice/contrib/nfp/configurator/advanced_controller/controller.py
18:46:30 <SumitNaiksatam> ashutosh: jagadish thanks for the pointer
18:46:59 <SumitNaiksatam> so pardon my ignorance but how does the VM automatically get this library?
18:47:17 <SumitNaiksatam> or for that matter any other packages that the configurator VM needs
18:47:38 <jagadish> When the controller VM is built, this package is installed inside the image
18:47:51 <jagadish> SumitNaiksatam: yes
18:48:05 <jagadish> This is like any other package that configurator needs
18:48:21 <jagadish> sorry, controller, not configurator
18:48:28 <ashutosh> From the above devstack patch - https://review.openstack.org/#/c/359883/57/gbpservice/contrib/nfp/tools/image_builder/Dockerfile@23
18:48:31 <SumitNaiksatam> jagadish: right, i am just trying to understand where its being captured
18:48:51 <ashutosh> it is specified in DockerFile
18:49:00 <SumitNaiksatam> ashutosh: aha, got it
18:49:01 <hemanthravi> ashutosh: this is part of the disk image builder scripts?
18:49:06 <ashutosh> yes
18:49:08 <jagadish> SumitNaiksatam: Yes
18:49:53 <SumitNaiksatam> ashutosh: jagadish: since the dockerfile is already present, from a code sanity perspective it would have been good to make the update to add pika in this patch itself
18:50:28 <SumitNaiksatam> you are adding it in the last patch in the chain
18:50:59 <jagadish> SumitNaiksatam: Ok. We added it in the topmost change set as it is related to the build.
18:51:06 <SumitNaiksatam> not a big deal in this case since no one is going to deploy until all patches are merged in this chain
18:51:08 <ashutosh> SumitNaiksatam, Actually we grouped the changes component-wise
18:51:22 <SumitNaiksatam> but these kind of things make it easier to review
18:51:30 <SumitNaiksatam> thanks for the clarification
18:51:36 <jagadish> SumitNaiksatam: Sure. We will do this from next time.
18:52:04 <ashutosh> SumitNaiksatam, Sure, from next time we are planning to have feature-wise patches instead of component-wise patches
18:52:05 <SumitNaiksatam> what is HM in “Loadbalancer - HM and pool resources”
18:52:47 <jagadish> HM stands for Health Monitoring
18:52:49 <ashutosh> HM stands for Health Monitor used to monitor pool members status
18:53:08 <SumitNaiksatam> ah okay
18:53:21 <SumitNaiksatam> hadnt seen that acronym before but should have guessed
18:53:39 <SumitNaiksatam> next patch #link https://review.openstack.org/#/c/359876
18:53:58 <SumitNaiksatam> so this is a monster patch
18:54:23 <jagadish> This one is for LBaasV2 support
18:54:54 <SumitNaiksatam> are we duplicating octavia code here?
18:55:07 <SumitNaiksatam> and why?
18:55:57 <jagadish> I think, it was reused here
18:56:30 <jagadish> But, need to check why this cannot be used directly from octavia repo
18:57:07 <SumitNaiksatam> hemanthravi: had you looked at this patch?
18:57:16 <hemanthravi> SumitNaiksatam: octavia is being reused, but in the over the cloud controller to talk to the lb vms
18:57:47 <SumitNaiksatam> i can understand the monkey patching to the plugin, and things like that
18:58:15 <SumitNaiksatam> but i want to be careful that we dont copy-paste entire modules
18:58:51 <hemanthravi> octavia code is not in the patch, the package is installed in the over the cloud controller
18:58:53 <SumitNaiksatam> and a lot of code in this patch seems to be like that
18:59:54 <SumitNaiksatam> hemanthravi: seems like a lot of code under gbpservice/contrib/nfp/configurator/drivers/loadbalancer/v2/haproxy has been sourced from somewhere else
19:01:02 <SumitNaiksatam> okay its time to end the meeting
19:01:11 <SumitNaiksatam> jagadish: ashutosh: thanks for staying up
19:01:32 <hemanthravi> i'll check on the haproxy octavia driver
19:01:34 <SumitNaiksatam> please take a look at the above patch and remove any gratuitous duplication
19:01:37 <SumitNaiksatam> hemanthravi: thanks
19:01:42 <jagadish> SumitNaiksatam: Thanks
19:01:43 <SumitNaiksatam> thanks all for joining
19:01:47 <SumitNaiksatam> bye
19:01:48 <ashutosh> SumitNaiksatam, thanks for the suggestions/questions, we'll address them
19:01:52 <SumitNaiksatam> #endmeeting