18:02:18 #startmeeting networking_policy
18:02:19 Meeting started Thu Oct 8 18:02:18 2015 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:23 The meeting name has been set to 'networking_policy'
18:02:40 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Oct_8th_2015
18:03:12 #topic Packaging
18:03:38 starting with this since rkukura mentioned we were on the edge with regard to the Fedora packages :-)
18:03:46 rkukura: any blockers for you?
18:03:59 no blockers - need to do some testing
18:04:40 then we'll get the updated packages pulled into RDO and/or Delorean
18:05:05 rkukura: okay, so we will make the cutoff?
18:05:33 SumitNaiksatam: The drop-from-Fedora cutoff?
18:05:44 rkukura: yeah
18:06:10 I think we are already clear of that since the packages have been updated to build without broken dependencies
18:06:47 rkukura: ah nice, did not realize that you already got the packages into their system
18:06:51 mageshgv: hi
18:06:54 SumitNaiksatam: hi
18:07:03 rkukura: thanks for the update on the packaging
18:08:16 no updates on the integration testing from me
18:08:29 #topic Bugs
18:09:07 rkukura: you are working on #link https://bugs.launchpad.net/group-based-policy/+bug/1503135 ?
18:09:07 Launchpad bug 1503135 in Group Based Policy "Concurrent subnet allocations from pool can overlap" [High,In progress] - Assigned to Robert Kukura (rkukura)
18:09:11 saw the WIP patch
18:09:41 yes, initial patch just added logging so I could verify my theory of what the problem is
18:09:50 rkukura: okay
18:09:56 working on the fix and should have a non-WIP patch later today
18:10:18 currently this seems like the highest priority for us
18:10:31 not marked as critical since it does not always happen
18:10:56 hopefully this will let us make rally require 100% success
18:11:35 rkukura: absolutely, fingers crossed!
18:11:35 rkukura: nice! With how many workers?
18:12:24 ivar-lazzaro: we can turn it up until it breaks
18:12:25 ivar-lazzaro: i believe the fix should be independent of the number of workers
18:12:36 rkukura: :-)
18:12:54 just wondering if it's more than 1 right now
18:13:16 of course we will go over 9000 eventually :D
18:14:36 mageshgv: #link https://bugs.launchpad.net/group-based-policy/+bug/1489090 or #link https://bugs.launchpad.net/group-based-policy/+bug/1418839 ?
18:14:36 Launchpad bug 1489090 in Group Based Policy "GBP: Resource intergrity fails between policy-rule-set & external-policy" [High,Confirmed] - Assigned to Magesh GV (magesh-gv)
18:14:37 Launchpad bug 1418839 in Group Based Policy "Service config in yaml cannot be loaded" [High,In progress] - Assigned to Magesh GV (magesh-gv)
18:15:31 SumitNaiksatam: I did not get a chance to work on this yet, right now busy with the NCP testing
18:15:38 mageshgv: okay
18:15:51 any other critical bugs that we need to discuss
18:15:58 ?
18:16:31 ivar-lazzaro: our devstack has the default number of worker threads, which is 0, and it means essentially one worker thread
18:16:54 ivar-lazzaro: rally is instrumented to exercise with a concurrency of 10
18:17:18 SumitNaiksatam: cool!
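[Editor's note] Bug 1503135 belongs to a classic class of race: two concurrent workers each read the set of allocated subnets, each pick "the first free one", and both record the same subnet, so the allocations overlap. The sketch below is illustrative only (the names `allocate_subnet`, `pool`, and `allocated` are ours, not GBP code, and rkukura's actual fix may differ): it shows the safe pattern of making the read-pick-record step atomic, here with an in-process lock, where a database-backed allocator would instead use a conflict-detecting retry.

```python
# Minimal sketch of serialized subnet allocation from a shared pool.
# Without the lock around read-pick-record, two threads can both see
# the same subnet as free and allocate it twice (the overlap bug).
import ipaddress
import threading

# Pool of four /26 subnets carved out of 10.0.0.0/24 (illustrative values).
pool = list(ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=26))
allocated = set()
alloc_lock = threading.Lock()

def allocate_subnet():
    """Atomically pick and record the first free subnet from the pool."""
    with alloc_lock:
        for subnet in pool:
            if subnet not in allocated:
                allocated.add(subnet)
                return subnet
    raise RuntimeError("pool exhausted")

# Four concurrent allocators, as rally's concurrency setting would drive.
results = []
threads = [threading.Thread(target=lambda: results.append(allocate_subnet()))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every allocation is distinct, so no two subnets overlap.
assert len(set(results)) == 4
```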
Thanks
18:17:19 by increasing the concurrency we do see these errors
18:17:34 ivar-lazzaro: Were you talking about neutron workers or rally workers?
18:17:51 rkukura: i believe ivar-lazzaro was referring to neutron workers
18:18:23 rkukura: SumitNaiksatam yes, Neutron workers
18:18:35 it seems that setting the concurrency configuration in rally should not have an impact if we have only one worker on neutron
18:18:37 however it does
18:19:18 It's clear from the logging that neutron is executing GBP requests concurrently in the rally tests.
18:19:18 SumitNaiksatam: not sure the default is 0 though, I think this was changed in Kilo to the number of cores on the host
18:19:21 and we are seeing the same error in a system where we have increased the number of neutron api workers
18:19:58 ivar-lazzaro: the devstack job is defaulting to 0 in the conf
18:20:04 ok
18:20:20 ivar-lazzaro: if internally it executes on multiple cores, then that might explain it
18:21:14 however we are seeing the same pattern of concurrency rally results across juno and kilo
18:21:29 with, what i understand, to be the same devstack job
18:21:58 perhaps best to test on a local system where we have better control over these parameters
18:22:21 ivar-lazzaro: but certainly good to investigate if there is some automatic mapping to cores happening
18:22:54 on juno, if we set api workers to 0, and rpc workers to 0, then i have verified that only 1 neutron server process gets spawned
18:23:33 There are multiple greenthreads processing API requests within each worker process, aren't there?
18:24:47 rkukura: ivar-lazzaro perhaps that is decided by the number of cores?
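[Editor's note] The worker settings under discussion live in neutron.conf. The fragment below is illustrative (the values are ours, not the devstack job's actual config): pinning both options to explicit values is the "better control over these parameters" suggested above, since leaving `api_workers` unset lets Neutron (from Kilo onward, per ivar-lazzaro) default it to the host's core count.

```ini
# neutron.conf -- [DEFAULT] section (illustrative values)
[DEFAULT]
# Separate API worker processes. 0 keeps request handling in the main
# neutron-server process; unset lets the core-count default apply.
api_workers = 0
# Separate RPC worker processes; 0 likewise keeps a single process.
rpc_workers = 0
```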
18:25:16 SumitNaiksatam: that would be the Neutron default
18:25:29 SumitNaiksatam: so unless we force it otherwise on devstack, then that's the case
18:26:01 ivar-lazzaro: yeah
18:26:28 rkukura: it will be good to investigate how the threading model/number is controlled
18:27:20 SumitNaiksatam: agreed
18:27:22 #topic Protocol number support
18:27:42 mageshgv brought up this user requirement #link https://bugs.launchpad.net/group-based-policy/+bug/1499916
18:27:42 Launchpad bug 1499916 in Group Based Policy "GBP classifier create API does not support custom IP protocol numbers" [High,Confirmed]
18:28:14 is there any objection to allowing integer protocol numbers in the policy classifier?
18:29:09 SumitNaiksatam: does Neutron allow that on SGs?
18:30:05 ivar-lazzaro: As far as I know it does
18:30:25 ivar-lazzaro: i was thinking this is something that can be rejected at the driver level if not supported, no?
18:31:07 SumitNaiksatam, ivar-lazzaro: Just checked, security groups do accept protocol numbers
18:31:08 SumitNaiksatam: also, we can have some level of translation for well-known protocol numbers
18:31:21 ivar-lazzaro: for neutron #link https://github.com/openstack/neutron/blob/master/neutron/extensions/securitygroup.py#L135-L142
18:31:25 mageshgv: thanks
18:31:40 ivar-lazzaro: the translation i believe is available in the UI currently
18:32:00 well i know it is, we added that support a few weeks back
18:33:43 since this will be an additive change, i believe it should not create any backward incompatibility?
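[Editor's note] The change being proposed is small: the classifier's protocol validator keeps accepting the well-known names it accepts today and additionally accepts integer protocol numbers 0-255, mirroring what Neutron security groups already allow. The sketch below is hypothetical (the helper `validate_protocol` and the `KNOWN_PROTOCOLS` table are ours, not the GBP validation function), showing the additive shape of such a check.

```python
# Hypothetical sketch of an additive protocol validator: names that
# worked before still work, and raw IP protocol numbers are now accepted.
KNOWN_PROTOCOLS = {"tcp": 6, "udp": 17, "icmp": 1}

def validate_protocol(value):
    """Return the numeric IP protocol for a name or number, else raise."""
    # Existing behavior: well-known protocol names.
    if isinstance(value, str) and value.lower() in KNOWN_PROTOCOLS:
        return KNOWN_PROTOCOLS[value.lower()]
    # Additive behavior: integer protocol numbers in the IANA range.
    try:
        number = int(value)
    except (TypeError, ValueError):
        raise ValueError("unknown protocol: %r" % (value,))
    if 0 <= number <= 255:
        return number
    raise ValueError("protocol number out of range: %r" % (value,))

assert validate_protocol("tcp") == 6
assert validate_protocol(112) == 112    # e.g. VRRP, has no common name
assert validate_protocol("112") == 112  # string form of a number also passes
```

Because names keep validating exactly as before, existing API clients are unaffected, which is the backward-compatibility point made above.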
18:35:04 okay, so i think we can go ahead and make this minor update to the API (basically in the validation function)
18:35:15 now our favorite part of the meeting
18:35:26 #topic Node Composition Plugin/Driver/Plumber enhancements
18:35:55 last week and over the weekend we reviewed and merged a number of patches on this topic
18:36:31 there are two specs which we have still not merged (mostly because they have been evolving as ivar-lazzaro has been implementing them):
18:36:35 #link https://review.openstack.org/#/c/228693
18:36:44 and https://review.openstack.org/#/c/203226/
18:37:46 igordcard: you had a question on the proxy ptg spec?
18:38:22 yes, just a simple technical question that can be answered by testing the current impl as well :)
18:38:31 but it is: "How does TScP currently flow traffic? By routing?"
18:39:03 igordcard: it depends on the attached services
18:39:36 igordcard: TScP creates a topology where many networks are connected by "services", these can be L2/L3 services
18:39:37 igordcard: both routing and switching
18:40:26 ivar-lazzaro, SumitNaiksatam: and is that the current state or the final state after the changes are merged?
18:40:55 that's already the case
18:41:06 not sure the heat driver still supports it though
18:42:09 thanks ivar-lazzaro SumitNaiksatam
18:43:12 ivar-lazzaro: thanks
18:43:39 by the way, there's a "bonus" patch on top of the current chain
18:43:45 https://review.openstack.org/#/c/179327/
18:43:50 #link https://review.openstack.org/#/c/179327/
18:44:19 We used to have one patch for the remote SG implementation
18:44:21 ivar-lazzaro: thanks, i noticed
18:44:33 which however is not supported/working in Neutron
18:44:39 ivar-lazzaro: it's great that you split it from the other patch
18:45:02 I removed that part, but I at least kept the part where we split the SG mapping into a pluggable component
18:45:28 Starting from there, it should be easy to move to a model where sharing PRSs is possible
18:45:35 ivar-lazzaro: yeah, this makes it easy to merge now and very useful
18:46:13 ivar-lazzaro: for the remaining patches in the chain, https://review.openstack.org/#/q/status:open+project:stackforge/group-based-policy+branch:master+topic:bp/node-centric-chain-plugin,n,z
18:46:26 ivar-lazzaro: is there any update to the spec for the HA/cluster model?
18:46:28 with that, and the SC mapping split, we finally have an RMD with less than 2000 LoC :D
18:46:52 I haven't posted the spec yet
18:46:56 ivar-lazzaro: i believe there is some change to the internal API for the HA/clustering support?
18:47:01 ivar-lazzaro: okay
18:47:15 mageshgv: were you able to test these patches?
18:47:23 SumitNaiksatam: actually, the API would be public
18:47:36 ivar-lazzaro: okay
18:47:49 SumitNaiksatam: although we can change it in the default policy.json
18:48:00 SumitNaiksatam: Yes, in progress, I should be able to complete it by this weekend
18:48:07 mageshgv: nice!
18:48:12 ivar-lazzaro: okay
18:48:49 mageshgv: so are we at a point where we are comfortable with the two posted specs?
18:49:02 igordcard: wanted to check with you as well, since you reviewed
18:49:17 SumitNaiksatam: Yes
18:49:30 mageshgv: nice, thanks
18:49:48 SumitNaiksatam, yes
18:49:56 igordcard: great, thanks
18:50:04 ivar-lazzaro: thanks for the update and all the hard work here!
18:50:26 #topic Open Discussion
18:50:55 there was another request from a user to be able to model remote CIDRs in our classifier to filter N-S connectivity
18:51:38 i believe some of us had the discussion earlier, and were considering associating a CIDR with the external policy
18:52:09 just wanted to bring this up, something we should think through and follow up on
18:52:43 there was also another request from a user regarding this #link https://bugs.launchpad.net/group-based-policy/+bug/1501657
18:52:43 Launchpad bug 1501657 in Group Based Policy "Explicit router provided for L3 Policy creation is not used" [High,New] - Assigned to Robert Kukura (rkukura)
18:53:03 which was supposed to work, but in my testing did not work
18:53:09 I have not looked into that yet, but will
18:53:22 rkukura: thanks for volunteering on that
18:53:48 anything else that we missed for today?
18:54:32 alright, thanks everyone
18:54:37 bye!
18:54:40 bye
18:54:41 bye
18:54:54 * tbachman sneaks off
18:54:57 #endmeeting