18:00:56 #startmeeting networking_policy
18:00:57 Meeting started Thu Mar 10 18:00:56 2016 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:58 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:00 The meeting name has been set to 'networking_policy'
18:01:18 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#March_10th.2C_2016
18:02:01 starting with the standing items -
18:02:22 as for bugs, I don't think there was any major activity last week
18:02:45 hi
18:02:53 hemanthravi: hi
18:03:02 rkukura: anything new to discuss on the packaging?
18:03:19 no update this week
18:04:25 rkukura: you were going to find out and tell me whether the non-date-based versioning is required for RHEL packaging (mitaka)
18:04:54 based on that I can release the dev milestone
18:05:08 SumitNaiksatam: yes, but never got a response, and haven't followed up yet
18:05:29 rkukura: sure, np; whenever you have info let me know
18:05:41 #topic Addition of status attributes
18:05:43 SumitNaiksatam: OK
18:05:54 spec #link https://review.openstack.org/289127
18:06:04 impl #link https://review.openstack.org/289530
18:06:28 so the implementation is ready; would appreciate it if the team can review
18:07:08 this does not do anything to the drivers, just adds to the API and DB layers
18:07:51 had to change the UT strategy a little bit since these attributes cannot be set from the API
18:07:59 so had to do direct DB testing
18:08:24 SumitNaiksatam: Are you sure we'd want to store this in the DB?
18:09:09 rkukura: yes, I don't think we can make assumptions on how the backend will handle this
18:09:25 I'd think drivers would need to get it on demand, each time a get of the resource was processed
18:09:43 I don't think we can assume async changes to status will find their way into these DB tables
18:10:18 rkukura: why not?
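The pull-on-demand alternative raised at 18:09:25 and 18:12:28 could look roughly like the following. This is a minimal sketch, not the actual GBP API: the class names, `get_policy_target_group`, and `backend.get_status` are all hypothetical stand-ins for the plugin's DB layer and a backend client.

```python
class GroupPolicyDbPlugin:
    """Hypothetical stand-in for the DB-backed plugin layer; it returns
    whatever status was last persisted in the plugin's own tables."""

    def __init__(self, db):
        self._db = db  # {id: resource dict with a persisted 'status'}

    def get_policy_target_group(self, context, ptg_id):
        return dict(self._db[ptg_id])


class BackendStatusPlugin(GroupPolicyDbPlugin):
    """Sketch of the overridden-get approach: refresh status from the
    backend on each GET instead of trusting the persisted value."""

    def __init__(self, db, backend):
        super().__init__(db)
        self._backend = backend

    def get_policy_target_group(self, context, ptg_id):
        ptg = super().get_policy_target_group(context, ptg_id)
        # Pull the live status; keep the stored value if the backend
        # has no opinion (e.g. it is unaware of this resource).
        live = self._backend.get_status(ptg_id)
        if live is not None:
            ptg['status'] = live
        return ptg
```

As noted later in the meeting (18:34:31), this trades freshness for GET latency, since every read now round-trips to the backend.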
18:10:36 rkukura: this is the standard way of processing status in other OpenStack projects
18:12:03 If the backend has its own DB with status in it (which might be asynchronously updated), the policy driver would want to return that, not copy it into the plugin's DB table
18:12:28 rkukura: I think if a particular implementation wants to always pull from the backend, then the plugin get method can be overridden
18:12:53 SumitNaiksatam: I don't think policy drivers have a way to do that
18:13:14 rkukura: the plugin would have to do that
18:13:34 rkukura: but there is no reason not to have this for the default case
18:13:40 maybe we need a way for the plugin to call the drivers to get the status
18:13:54 for instance, Neutron will always notify of state changes
18:14:10 with multiple drivers, which is responsible for updating the status in the plugin's tables? How would these get composed?
18:15:04 rkukura: that logic would have to go into the plugin
18:15:21 but that is regardless of whether the status is persisted in the DB or not
18:15:45 sure, the plugin would need to compose status from each of the drivers
18:16:02 for that, it seems better to get status from each of the drivers
18:16:34 rkukura: yes, any time composition has to be done, it would probably have to be done in the DB layer
18:16:41 not sure I agree
18:16:54 rkukura: at this point I am not trying to solve this composition problem though
18:17:17 I know we have tried to address this in the past in Neutron in various different contexts
18:17:26 I understand that, but we've had some discussion of this same topic for ML2
18:17:39 right
18:18:10 there were some in the context of DVR and FWaaS as well
18:18:21 are there cases where the driver doesn't have storage?
18:19:34 hemanthravi: in theory, yes
18:19:58 hemanthravi: this would be the case where the driver is leveraging the GBP model to store the state
18:20:24 SumitNaiksatam, in that case you'll need the plugin to compose and store the status
18:20:35 and
which, one can argue, is being addressed with this basic implementation
18:20:38 hemanthravi: right
18:20:57 SumitNaiksatam: I have not reviewed either of these yet, but would like to look at how the policy drivers are intended to update this status. They generally do not have direct access to the plugin's DB models, right?
18:21:40 rkukura: hmmm, why not?
18:22:39 rkukura: not sure I completely understood what you meant by "direct access"
18:25:00 My thinking is based more on ML2, but we've tried to follow the same general driver pattern in GBP. In ML2, we really expect each driver to see the exact state that eventually gets returned to the caller. Clearly GBP is different in this regard.
18:25:48 rkukura: yeah, I agree, GBP could be different in some cases
18:26:23 any other thoughts/questions on this?
18:26:27 Maybe we should change GBP to not have a list of policy drivers, but instead have a single one, and move the IPD to something that happens before the policy driver precommit/postcommit phases
18:26:55 rkukura: I had similar thoughts as well
18:27:10 at least in the case of the IPD
18:27:35 yes, lots of issues would be resolved if we could do the IPD work before the main GBP transaction
18:28:00 rkukura: in the new async driver, the IPD is collapsed on the "other" policy driver
18:28:01 but a single PD would avoid the issue of having to merge status
18:28:15 on -> in
18:28:43 there's also the service chain driver
18:29:02 I'd still argue that, with a single-PD model, we should get the state from the PD (while building the resource dictionary) rather than require it to be in the DB.
18:29:12 ivar-lazzaro: but that gets called from the service chain plugin, right?
18:29:13 right
18:29:42 SumitNaiksatam: not that, I mean the SC mapping driver, the one that creates SC resources depending on the GBP model
18:30:17 ivar-lazzaro: would any of those potentially influence the status of the GBP resource?
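The status-merging question rkukura raises (18:14:10, and again at 18:28:01) is commonly solved with a worst-status-wins precedence. A hedged sketch follows; the status vocabulary and precedence order are assumptions, not something the meeting settled on.

```python
# Assumed status values and precedence, worst first; the meeting did
# not agree on either.
STATUS_PRECEDENCE = ('ERROR', 'DOWN', 'BUILD', 'ACTIVE')


def compose_status(driver_statuses):
    """Merge per-driver statuses into one resource status: the worst
    status reported by any driver wins."""
    for status in STATUS_PRECEDENCE:
        if status in driver_statuses:
            return status
    return 'ACTIVE'  # no driver reported anything
```

The plugin could apply this over whatever each policy driver reports, regardless of whether those values come from the plugin's own tables or are pulled from each driver's backend, which is the point made at 18:15:21.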
18:30:26 this guy: https://github.com/openstack/group-based-policy/blob/master/gbpservice/neutron/services/grouppolicy/drivers/chain_mapping.py
18:30:37 ivar-lazzaro: okay
18:30:49 rkukura: well yeah, if an SCI fails to get created, a Policy Rule will be down
18:30:52 ivar-lazzaro: if I recall, this was initially part of the RMD and later factored out?
18:31:05 correct
18:31:13 ivar-lazzaro: okay
18:31:46 so in theory we can always come up with a design that will help to compose the status by reading from different drivers
18:32:21 but the simpler-to-implement solution seems to be to address it via one driver
18:32:50 how can the state be updated based on an async event, e.g. when a network service VM is operational?
18:32:57 SumitNaiksatam: I still have concerns about needing to push async status into the GBP DB vs. pulling it from a backend DB as needed, but we can discuss that offline
18:33:28 rkukura: okay
18:33:59 hemanthravi: by listening to the appropriate notification?
18:34:24 in that case it's not being pulled from the driver
18:34:31 pulling it from the backend will slow down all the GETs though
18:34:42 rkukura: we are not precluding the possibility of pulling from the DB backend
18:34:53 ivar-lazzaro: I agree
18:34:53 which might not even be a huge problem, in fact
18:35:11 ivar-lazzaro: I guess the more relevant point is that it might not be needed in some cases
18:35:18 and it might be in others
18:35:24 I'm not clear where this pull would occur, but we can discuss that offline
18:35:29 or in the review(s)
18:35:57 rkukura: okay
18:36:16 thanks for the discussion here
18:36:22 #topic Other Design Specs
18:36:42 hemanthravi: updated #link https://review.openstack.org/239743 earlier today
18:36:49 but I haven't gotten a chance to take a look
18:36:59 * tbachman goes to rebase his patch
18:37:10 SumitNaiksatam, updated most of the comments except for the HA-related one and updating the diagram
18:37:16 we can discuss here if anyone got to it
18:37:23 hemanthravi:
okay, thanks
18:37:38 hoping to have the impl patches posted early next week
18:37:52 hemanthravi: okay, nice
18:38:20 I think igordcard_ also updated his spec #link https://review.openstack.org/#/c/275358/
18:38:46 patchsets 12 and 13 should be the same
18:38:53 and I posted a comment that if we are doing this in a feature branch, we can mention that in the spec, and approve this spec once that is done
18:39:02 let me know if anyone disagrees with that
18:39:35 the spec would be experimental in that sense, and based on the experience with implementing it in the feature branch, we can update the spec if required
18:40:05 not sure if igordcard_ is around
18:40:28 #topic Open Discussion
18:40:55 oh, igordcard_ just sent me a message saying he is having connectivity issues
18:41:33 the following workshop proposed by the team got accepted for the Austin summit #link https://www.openstack.org/summit/austin-2016/summit-schedule/events/6894
18:41:59 we had some offline discussions on how to go about this
18:42:37 if you have any thoughts/suggestions and would like to participate, please let me know
18:43:42 anyone else have anything else for today?
18:44:33 alright, thanks everyone for joining, see you next week!
18:44:36 bye
18:44:40 bye
18:44:49 bye
18:44:57 thanks SumitNaiksatam!
18:44:58 #endmeeting
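Addendum on the async-status exchange at 18:32:50–18:33:59 (how the stored status gets updated when, e.g., a network service VM becomes operational): one hedged sketch of the push model. Every name here is a hypothetical stand-in; a real driver would subscribe to Neutron or backend notifications rather than this toy in-process notifier.

```python
class StatusTable:
    """Toy stand-in for the status column in the plugin's DB tables."""

    def __init__(self):
        self._rows = {}

    def set_status(self, resource_id, status):
        self._rows[resource_id] = status

    def get_status(self, resource_id):
        return self._rows.get(resource_id)


class Notifier:
    """Toy stand-in for a backend/Neutron notification bus."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def notify(self, payload):
        for callback in self._subscribers:
            callback(payload)


def wire_status_updates(notifier, table):
    """Push model: each backend event updates the stored status, so a
    later GET can read the plugin's own tables instead of querying the
    backend (the trade-off discussed at 18:34:31)."""
    def _on_event(payload):
        # Payload shape is assumed: {'resource_id': ..., 'status': ...}
        table.set_status(payload['resource_id'], payload['status'])
    notifier.subscribe(_on_event)
```
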