18:01:11 #startmeeting networking_policy
18:01:12 Meeting started Thu Jan 29 18:01:11 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:16 The meeting name has been set to 'networking_policy'
18:01:57 we will discuss the kilo plan and milestones as agenda items, but prior to that I don't have any milestone-related announcements
18:02:11 anything anyone wants to share in terms of info/announcements to the team?
18:02:27 #info agenda https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy#Jan_29th.2C_2015
18:02:46 #topic Bugs
18:03:01 we went through some of the major bugs last week
18:03:20 a couple more have been added to that, and mageshgv has fixed another
18:03:25 yapeng: hi
18:03:58 hi
18:04:01 but the meta-comment I had was a request that you look at the bugs assigned to you
18:04:11 most bugs have a milestone date
18:04:36 if a bug is assigned to you and is targeted for k1, please confirm that you can fix it
18:04:46 else move it to a later milestone
18:05:17 for bugs which are more open ended we have a “next” milestone
18:05:23 so use your judgement
18:05:35 KrishnaK: hi
18:05:50 SumitNaiksatam: hi
18:06:08 if we determine that a bug is high priority, we cannot defer it for too long though
18:06:14 currently we don't have any critical bugs
18:07:14 currently I think all of the k1 server side bugs are assigned to myself, rkukura, ivar-lazzaro, and mageshgv
18:07:15 ivar-lazzaro: hi
18:07:20 hi
18:08:15 also please make a call on whether a particular fix needs to be backported
18:08:40 if so please assign the “juno-backport-potential” tag
18:08:58 once the fix merges, please cherry-pick the fix to the stable/juno branch
18:09:15 questions/comments?
18:09:59 okay moving on
18:10:05 #topic Packaging Update
18:10:21 rkukura: you have some good news to report, right? ;-)
18:10:32 yes
18:10:56 At least I think this is new since the last meeting
18:11:06 rkukura: yes
18:11:11 The GBP packages are now in the official RDO yum repositories
18:11:21 yay!
18:11:33 nice!
18:11:52 This means running “yum install \*gbp\*” will install all of them on top of an existing RDO setup.
18:12:10 rkukura: sweet! that's much easier
18:12:12 cool!
18:12:23 I plan to do some testing and then update the RDO wiki page as soon as I get a chance.
18:12:32 very cool
18:12:46 rkukura: great! Also thanks for your help this AM. I am continuing my setup testing (CentOS) ...
18:12:59 This should support Fedora 20 and 21, as well as RHEL 7 and CentOS 7.
18:13:58 KrishnaK: thanks for your perseverance on this, you are the only person at this time that I know of who has dared to start testing on CentOS, so the team really needs to thank you here! :-)
18:14:09 rkukura: cool
18:14:32 Next steps are to start supporting GBP in the puppet scripts and Red Hat's packstack and foreman installers.
18:14:47 rkukura: so the issue which KrishnaK ran into (gbpautomation), is that CentOS-specific, or specific to his setup?
18:15:37 KrishnaK: Correct me if I'm wrong, but I think the issue was that the default packstack configuration did not install heat, so restarting it with GBP failed because it wasn't configured.
18:16:07 rkukura: yes. thx.
18:16:09 I'll add a note about this to the RDO GBP wiki as well.
18:16:42 That's it for RDO.
18:16:47 rkukura: good point about the installers
18:17:08 rkukura: is that a long process too, to get a patch in?
18:17:32 I've never directly worked on upstreaming puppet patches, but that may take some time
18:17:57 rkukura: okay, is it an ongoing thing, or do they have milestones and freezes too?
18:18:28 Would be good to get someone with puppet experience involved, and I think SumitNaiksatam may have someone in mind.
18:18:51 rkukura: I am not volunteering myself :-)
18:18:59 rkukura: but yeah we need to discuss offline
18:19:00 I don't think upstream puppet is tied to the OpenStack release cadence.
18:19:05 rkukura: okay
18:19:18 rkukura: anyone else wanting to jump in on this and help out is welcome
18:19:19 If anyone on the team wants to take this on, please speak up!
18:19:25 rkukura: +1
18:20:02 rkukura: if someone wants to jump in on this, will you be able to connect them to the right folks and pointers in terms of the process to follow?
18:20:31 I'm happy to advise and help, but I'm focusing on upstream neutron work right now, plus some GBP bug fixing, etc.
18:21:10 rkukura: yeah, my question was more whether we know the process
18:21:12 sorry, late
18:21:17 rkukura: seems like you know it
18:21:21 s3wong: hi, np
18:21:33 noticed songole joined earlier too, hi!
18:21:47 SumitNaiksatam: I've observed it through team meetings etc. back at Red Hat, but was never directly involved with puppet or packstack.
18:21:48 Hi SumitNaiksatam
18:22:05 rkukura: okay, I think that should be good to get us started
18:22:12 okay, any other questions for rkukura on Fedora and RDO?
18:23:05 rkukura: BIG thanks on behalf of the entire team for working on this and getting this into RDO!
18:23:26 on the ubuntu packaging
18:23:57 mageshgv ran into an issue when installing the group-based-policy-automation package in his setup
18:24:10 he investigated and found out why it happens
18:24:16 mageshgv: thanks for that
18:24:26 we are still working out the right solution for this
18:24:45 depending on which ubuntu packages you have installed, you may or may not run into that issue
18:25:05 please ping mageshgv or me if you run into it and need a solution (while we are trying to fix this correctly)
18:25:12 mageshgv: anything you wanted to add on that?
18:25:49 Basically we will run into this problem if we have pbr version <0.10.7
18:26:04 mageshgv: ah okay
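For anyone checking whether their environment is affected: a minimal sketch, assuming only setuptools' pkg_resources is available. The 0.10.7 threshold comes from mageshgv's comment above; the exact failure mode isn't detailed in the log.

    # Check whether the installed pbr predates 0.10.7, the version
    # mageshgv identified as the threshold for the packaging issue.
    import pkg_resources

    def pbr_is_affected():
        installed = pkg_resources.get_distribution('pbr').version
        return (pkg_resources.parse_version(installed) <
                pkg_resources.parse_version('0.10.7'))

    if __name__ == '__main__':
        print('affected pbr' if pbr_is_affected() else 'pbr OK')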
18:25:49 ok next topic
18:26:23 #topic Proposed Kilo Plan and Milestones
18:26:29 #link https://etherpad.openstack.org/p/kilo-gbp-plan
18:26:53 so we have been gathering feedback after the release, and to some extent discussing it in this meeting as well
18:27:21 the plan above is based on pretty much everyone's input (and the issues we have logged in launchpad)
18:27:58 this is a kind of guiding plan
18:28:13 we will adapt as we go along
18:28:24 so please provide your feedback
18:29:14 I think we need to explicitly list tempest tests and CI
18:29:24 rkukura: okay
18:29:31 I see GBP Test Plan and CI now, so OK
18:29:50 was looking for the word “tempest”
18:29:52 yeah, but we could add “tempest tests” as a separate item
18:30:02 yeah we could add that explicitly
18:30:03 I agree
18:30:27 I think that is one area where we would need to work collectively as a team
18:30:53 adding coverage in tempest for the entire project cannot be done by one or two people
18:31:29 so we will start by identifying one or two people, and they can guide the rest of the team here
18:31:48 if you are interested in participating in this, please ping me
18:32:32 given that we are planning to add more features, I think this is the most critical area of the project that needs attention so that we don't introduce regressions
18:32:59 well, even before we add features, we will be doing a bunch of refactoring
18:33:31 do people have questions on the specific line items mentioned in the plan?
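As an illustration of the tempest item discussed above, here is a bare-bones sketch of what one GBP API test could look like. This is a sketch only: the gbp_client attribute and its create/show/delete methods are hypothetical placeholders, not an existing tempest or GBP interface.

    # Illustrative skeleton for a GBP tempest API test; gbp_client and
    # its methods are hypothetical placeholders for a service client.
    from tempest import test

    class PolicyTargetGroupTestJSON(test.BaseTestCase):

        @test.attr(type='smoke')
        def test_create_show_delete_policy_target_group(self):
            # A real test would use a service client registered with
            # tempest's client manager and configured credentials.
            ptg = self.gbp_client.create_policy_target_group(name='ptg-1')
            try:
                shown = self.gbp_client.show_policy_target_group(ptg['id'])
                self.assertEqual('ptg-1', shown['name'])
            finally:
                self.gbp_client.delete_policy_target_group(ptg['id'])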
18:34:51 for the l4-7 classifiers -- do you have more details?
18:34:51 I have a question about the DB changes and REST API changes.
18:35:04 I will take Yi's question first
18:35:29 Yi: yes, LouisF has posted a spec on that
18:35:52 #link https://review.openstack.org/#/q/status:open+project:stackforge/group-based-policy-specs+branch:master,n,z
18:36:03 Yi: the above has all the specs currently in review
18:36:38 ok
18:36:43 SumitNaiksatam: I'd like to have those specs reviewed
18:36:52 LouisF: :-)
18:37:27 LouisF: so Yi should look at the “GBP Classifier Extensions” spec?
18:37:43 sure
18:37:54 SumitNaiksatam, Yi: yes
18:38:12 is nicolas here?
18:38:30 he was planning to do some investigation on the ODL side to see how this could be implemented
18:38:42 that is partly what is holding this up
18:39:10 we need to have some level of validation that we are able to realize it
18:39:22 SumitNaiksatam: don't think there is any DPI project on ODL as of yet
18:39:31 s3wong: hmmm
18:40:03 SumitNaiksatam: but tons of projects are being proposed, so I may have missed it
18:40:17 s3wong: yeah, we had this discussion a few months back; seems like you are saying that not much has changed since
18:40:40 that is part of the reason I put this in the second milestone, so that we have some more time to investigate
18:40:50 SumitNaiksatam: not that I am aware of, but then again, I have NOT been attending or catching up with the TSC meetings
18:40:54 LouisF: any chance that you can touch base with nicolas on this?
18:41:02 s3wong: np
18:41:14 is ODL the only option to implement this?
18:41:23 SumitNaiksatam: I will contact him regarding the ODL support for DPI
18:41:25 yapeng: no, not saying that
18:41:43 yapeng: I brought it up because that was the option that nicolas mentioned he was going to explore
18:41:49 yapeng: no, but we need a reference implementation
18:41:52 other than that no one has proposed anything
18:42:00 LouisF: thanks
18:42:10 yapeng: back to your question
18:42:31 yapeng: you were asking about the DB and REST API changes?
18:43:34 my question is: the DB changes seem to have a big impact on many things; how do we resolve the dependencies for all the other features developed in parallel?
18:43:45 yapeng: yeah, good question
18:43:59 in fact this is true for the entire refactor
18:44:47 let's decide as a team what the most non-disruptive path is
18:44:53 I have some ideas
18:45:07 so specifically regarding the DB
18:45:28 the critical change that we need to make is to remove the foreign key constraints
18:46:06 foreign key constraints to the neutron tables, that is
18:46:37 SumitNaiksatam: is upgrade from GBP Juno to GBP Kilo an item to consider also on this?
18:46:47 s3wong: good point :-)
18:47:03 s3wong: we will have to make some hard decisions there
18:47:43 s3wong: one of the stated goals of the Juno release was that we were making it available to users to get their feedback, so changes should be expected
18:47:56 SumitNaiksatam: that's fair
18:47:58 SumitNaiksatam: In place of the foreign key constraints, is the idea to consume notifications from neutron and clean up related GBP resources if something is deleted in neutron?
18:48:25 that said, we will gauge who is using it, and how much impact it has, and based on that decide the most effective strategy
18:48:37 we will definitely strive to minimize pain
18:49:36 rkukura: good question; the point about concurrently using the Neutron API is tricky in itself (as we have discussed before)
18:50:30 rkukura: the approach that you mention is certainly worth considering
18:51:04 SumitNaiksatam, rkukura: what approach?
18:51:32 s3wong: replacing the FK constraints with cleanup based on notifications from neutron
18:51:42 s3wong: so if someone cleans up a port on the neutron side, clean up the PT on the GBP side
18:52:01 SumitNaiksatam, rkukura: OK
18:52:02 and do this by consuming the port delete notification from Neutron
18:52:27 this happens today based on the FK constraints
18:52:36 SumitNaiksatam, rkukura: does Neutron have all the notifications on the things we care about (port, subnet, router, SG, SG rules...)?
18:52:47 and all that will have to be implemented in the code when we decouple
18:53:01 s3wong: yes, for all CRUD operations for all resources
18:53:24 SumitNaiksatam: good
18:53:44 the issue obviously is that if the PT deletion fails for some reason, then we are left in an inconsistent state
18:53:52 or if we lose the notification
18:54:14 Hopefully we'll be able to work through these kinds of details in reviewing specs for kilo.
18:54:25 rkukura: yeah
18:54:42 which means getting the spec written ;)
18:54:46 :-)
18:55:13 Sumit, rkukura: is the notification coming from the API, or from the mechanism driver?
18:55:14 SumitNaiksatam: so the notification comes as the API is invoked? Or as the operation is completed (i.e., the Neutron port is deleted)?
18:55:41 as I recall, there are start and end notifications
18:56:03 Yi, s3wong: the notification comes on an AMQP topic
18:56:33 rkukura: oh, OK
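To make the notification-based cleanup idea concrete, a minimal sketch using oslo.messaging, assuming a deployment where Neutron publishes to the default 'notifications' topic; the payload's port_id key and the cleanup hook are assumptions, not existing GBP code.

    # Minimal sketch: listen for Neutron's port.delete.end notifications
    # and trigger GBP-side cleanup when a port is removed out-of-band.
    import oslo_messaging as messaging
    from oslo_config import cfg

    class GBPCleanupEndpoint(object):
        # Neutron emits <resource>.<action>.start/.end events; react
        # only to completed port deletions.
        filter_rule = messaging.NotificationFilter(
            event_type='port.delete.end')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            port_id = payload.get('port_id')  # payload shape is an assumption
            # Hypothetical hook: remove the policy target mapped to this
            # port, flagging an inconsistency if the removal fails.
            print('port %s deleted; cleaning up its policy target' % port_id)

    transport = messaging.get_notification_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(
        transport, targets, [GBPCleanupEndpoint()])
    listener.start()
    listener.wait()

This sketch also surfaces the failure mode raised at 18:53:44: if the hook fails or a notification is lost, the mapping is left inconsistent, so a periodic reconciliation pass would likely be needed alongside the listener.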
18:56:34 Do they exist for any resource? Or just ports?
18:56:59 all resources
18:57:04 ivar-lazzaro: to my earlier point, it used to be for all resources, unless something has changed
18:57:27 the API handling code had the notifications built in
18:57:56 ivar-lazzaro: Ports have additional notifications for specific state changes to be consumed by nova
18:58:09 rkukura: good point
18:58:15 okay, we have a couple of mins
18:58:16 and/or by the firewall (SG) driver/agent
18:58:24 good discussion
18:58:29 does neutron have these kinds of notifications today?
18:58:31 SumitNaiksatam, rkukura: with this, does it mean that our policy drivers no longer need the GBP ML2 driver?
18:58:37 we need to have a lot more of that on these topics
18:58:38 SumitNaiksatam, rkukura: I see. thanks for the clarification
18:59:07 yes, ceilometer for example uses them I believe (not to mention nova)
18:59:08 ivar-lazzaro: I was trying to look up the code, but github is not responding
18:59:19 s3wong: no, that is independent
18:59:31 banix: perfect, that's where it started
18:59:39 s3wong: I guess it depends on what your ML2 driver's role is. For instance, if you use it to bind a special ML2 network type you may still need one
19:00:45 s3wong: one potential risk is that the ml2 driver and policy driver will now be in separate processes but may need some coordination; that shouldn't be too difficult, and is not that different from cases where neutron-server is replicated.
19:01:02 ivar-lazzaro, SumitNaiksatam: I see. The reason I asked is that one of the functions of the GBP ML2 driver seems to be to get notified of port state changes
19:01:20 ivar-lazzaro, rkukura, s3wong: good points
19:01:25 we are a minute over time
19:01:32 we can take this to #openstack-gbp
19:01:36 thanks all for attending
19:01:44 bye
19:01:45 SumitNaiksatam: obviously a great topic to discuss :-)
19:01:46 bye
19:01:47 thanks
19:01:51 bye
19:01:51 thanks!
19:01:56 #endmeeting