04:00:05 #startmeeting fwaas
04:00:06 Meeting started Wed May 25 04:00:05 2016 UTC and is due to finish in 60 minutes. The chair is njohnston. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:00:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:00:09 The meeting name has been set to 'fwaas'
04:00:18 #chair SridarK_
04:00:18 Current chairs: SridarK_ njohnston
04:00:26 Hi everyone
04:00:34 Hello All
04:00:39 Hi
04:00:41 Hi
04:00:44 hi
04:00:53 lets get started - i see most folks are on
04:01:00 hello
04:01:21 #topic Announcements
04:01:21 njohnston: thx for getting the agenda set
04:01:37 I just want to note that we are now 1 week from N-1. Tempus fugit!
04:02:09 This means that we are 1/3 done with the Newton cycle. We should think about our velocity to make sure we are on track.
04:02:31 +1
04:02:40 which leads me to...
04:02:44 Hi, sorry for being late.
04:02:48 #topic Mid-Cycle
04:03:08 In Austin when we met, we had talked about doing a virtual mid-cycle
04:03:22 I am still in favor of that idea, but what does everyone think?
04:03:43 If we decide we can add it to the wiki page https://wiki.openstack.org/wiki/VirtualSprints#Schedule_of_Upcoming_OpenStack_Virtual_Sprints
04:03:51 We should be aware of the timing of the Neutron mid-cycle: https://wiki.openstack.org/wiki/Sprints#Future_sprints_for_Newton
04:04:04 +1 also can you post the meeting agenda? ;)
04:04:25 https://etherpad.openstack.org/p/fwaas-meeting
04:04:39 +1
04:04:43 +1
04:05:12 yes i too think a virtual mid-cycle is more favorable given the geography of contributors
04:05:20 Does anyone have any thoughts about timing, or is it premature to think about that?
04:05:48 but perhaps lets first get some traction going - so we can be more productive during the mid-cycle
04:06:18 i think we have some things to decide before we get to this
04:06:22 ok, so sounds like we still like the virtual idea, but it's too early to get something on the books.
04:06:39 i am hoping we can get something going in the next week
04:07:10 once we have some skeletal pieces beginning to talk to each other - then we can make some quick progress by blocking some time
04:07:15 just my thought
04:07:22 +1
04:07:33 #topic L2 and L3
04:08:02 So we want to integrate with 2 agents, both L2 and L3
04:08:27 Do we want to approach this in parallel, or work on one then the other?
04:08:56 I was having a conversation with mfranc213 about this today and I realized we weren't on the same page, so I wanted to get the team's thoughts.
04:09:14 i think we should approach them in parallel
04:09:47 if some folks can get the L2 agent integration started - out of this we can evolve some patterns that can be used in L3
04:10:03 L3 is a bit simpler w.r.t. iptables
04:10:14 L3 may need to be on the back burner for a bit while the L3 agent extension gets wrangled out, and then I agree we can re-use patterns from the L2 implementation.
04:10:26 mfranc213: what do you think?
04:10:44 Now with the change to get us going, in theory we can work thru the L3 agent to go end to end
04:10:51 i agree that L2 should, minimally, be started first.
04:10:53 as we are operational
04:11:02 i don't believe we have enough people to do these in parallel
04:11:27 but i like what SridarK_ is saying
04:11:36 which i think is (tell me if i'm wrong):
04:12:11 start the L2 to get the general architecture understood/in place. then go to the L3 which is simpler to implement.
04:12:30 mfranc213: yes along those lines
04:12:32 that may get us something to present soonest
04:12:35 perhaps
04:12:39 mfranc213: yes
04:13:04 if we have fleshed out iptables on L2 then we can continue forth with the L2 implementation
04:13:34 in terms of actually getting some things committed we will need a reference backend in place
04:13:40 njohnston: what do you think?
04:13:57 which is simpler in L3
04:14:36 and which has been done, thought in a different way, in v1. right?
04:14:40 The quickest way to get something running is to run with the noopfirewall driver (for security groups). In order to run with the iptablesfirewall driver (for security groups), the coexistence changes have to be made.
04:14:47 s/thought/though
04:15:04 I agree with everything which has been said, and I look forward to tearing into the reference backend. I think the risk is in leaving the L3 agent extension to be done by a third party, which surrenders the initiative so to speak. That to me is a long pole that is difficult to control, and so is a risk item.
04:15:10 mickeys: yes we can do that for sure, but to get changes in - we cannot break SG
04:15:44 agreed
04:15:54 in essence the crux of the FW agent code should be very similar for L2 & L3
04:15:59 possibly the same
04:16:19 how we bolt things in to the L2 & L3 framework could be a bit different
04:16:26 and some validations etc
04:16:33 +1
04:17:04 OK, I think we talked that one out.
04:17:10 #topic Explicating the dependency chain
04:17:19 It'd be good to know for sure what depends on what so we can prioritize
04:17:25 but if we can flesh out the L2 extension and how we interact with it we can get some things in place for L3
04:17:34 sorry
04:17:54 This came about when I was trying to test the existing code for the FWaaS DB change and found it depended on one or two other changes
04:18:12 I made a stab at a graphical representation, please check it for accuracy: https://cdn.pbrd.co/images/1aBZwDKm.png
04:18:35 njohnston: on the DB piece, we should have a dependency on the v2 extensions
04:18:50 and this is mostly okay - i have tested that
04:19:03 Yes, that was what I discovered :-)
04:19:14 there are probably some more bits that need filling in
04:19:27 and I think the DB piece needs to be in place before versioned objects can really be gated on
04:19:27 shwetaap: i think u have started looking into this ?
04:19:54 njohnston: i have some basic stuff on rules added to the db patch
04:20:23 but we will need at least a skeletal piece of code on the Firewall group
04:20:45 before we can hit the versioned object - agent interaction if i am not mistaken
04:21:33 which change is the firewall group?
04:21:46 if we can start getting these pieces integrated that could be our first steps
04:22:01 the db changes to support firewall group
04:22:19 ah, I see
04:23:22 The good news is that Armando's fix merged last week, so the gate should be working fine now. Yay!
04:23:31 yes it is
04:23:36 SridarK_: sorry, yea I have taken a look at the extension patch, but I haven't made any changes to it yet.
04:23:39 we have tempest passing too
04:23:43 :)
04:23:54 shwetaap: ok no worries
04:24:13 so we are pretty much operational
04:24:53 #topic L3 Agent Extension
04:25:14 The spec is still waiting for comments: https://review.openstack.org/#/c/315745/
04:25:37 njohnston: yes i am looking at it
04:25:40 If folks could take a look and give feedback, or +1 if it looks good to you.
04:25:51 njohnston, Sure.
04:25:54 Thanks SridarK_!
04:26:00 Thanks yushiro!
04:26:28 #topic Observer Hierarchy
04:26:58 mfranc213 was wondering about the RPC callbacks/observer hierarchy: can the same mechanism, whatever it may end up looking like, be the same for L2 and L3 callbacks/notifications?
04:27:05 I have pinged the submitter and he is retesting the patch
04:27:27 njohnston: i don't know, unfortunately
04:27:39 what does it look like for L2
04:27:45 i suspect it should be similar
04:27:55 miguel may be able to speak to this.
04:27:59 we are essentially looking for changes in port state
04:28:13 or router state in L3
04:29:05 but my belief is that these would be structurally the same if not simply one thing. ("these" = the two RPC callback implementations, for L2 or L3)
04:29:17 again, i think miguel can shed light on this
04:29:20 for L3 extensions - this piece is already present as the Observer Hierarchy
04:29:36 yes
04:29:49 there are some router events that one can register callbacks to get notifications for
04:30:07 sorry, change would for could ^^
04:30:29 i suspect for L2 we can register for port events ?
04:30:49 Yes, I believe that is correct, looking at how QoS does it: https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/manager.py
04:31:43 mfranc213, SridarK_ regarding L2, I think it is a very similar architecture to QoS too.
04:31:54 yushiro: ok
04:31:57 actually that is the manager, here is where it registers for notifications: https://github.com/openstack/neutron/blob/master/neutron/services/qos/notification_drivers/message_queue.py
04:32:02 However, we have to define a new agent_driver, right?
04:32:05 SridarK_# Isn't that what German's patch does? https://review.openstack.org/#/c/268316/2/neutron_fwaas/services/firewall/agents/v2/l2_agent_extension/fwaas_extension.py?
04:32:47 padkrish: yes i think German started pulling things together - but i think it is still WIP
04:33:24 Basically get port notifications from L2 agent extensions and FW resource notifications similar to the observer hierarchy
04:33:31 SridarK_# Yes agreed
04:33:50 padkrish, correct.
04:34:52 on the L2 level, is it just notifications of port updates that is needed by the agent?
04:35:23 i would think so
04:35:26 please fix the grammar of that question as you read it :)
04:35:38 mfranc213: :-)
04:35:44 mfranc213, yes. I think so too.
04:36:20 i recall something about namespace information needed by the agent. maybe i imagined that?
04:36:35 mfranc213: that is for L3
04:36:40 thank you
04:36:45 ok i think we need to sort out some of these interactions for the agents
04:37:03 yes
04:37:13 yushiro: , padkrish: are u guys looking into this ?
04:37:48 SridarK_, currently, we're looking into the QoS implementation for L2.
04:37:48 njohnston: , mfranc213: i think both of u have domain expertise here
04:38:03 yushiro: ok
04:38:05 padkrish, SridarK_, And, I'll try to update xgerman's patch (https://review.openstack.org/#/c/268316/)
04:38:14 SridarK_# Yes.. thanks to mfranc213, njohnston for the QoS pointers
04:38:24 Happy to help, anytime :-)
04:38:52 padkrish: yushiro: other things u guys would like to discuss on this topic ?
04:39:19 mfranc213: njohnston: thx for all the pointers
04:39:36 if i understand right, the versioned objects in the patch https://review.openstack.org/#/c/268316/2/neutron_fwaas/objects/ports.py is where more work is needed
04:39:52 padkrish: ok
04:40:18 SridarK_, currently OK.
04:40:20 this is where i think German was referring to following the QoS patch more closely
04:40:31 yes agreed
04:40:35 Yep
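For context, a minimal sketch of the L2 extension shape discussed above: an object the L2 agent calls with port events, patterned loosely after the initialize/handle_port/delete_port interface that the QoS agent extension implements. Everything FWaaS-specific below (the class names, the no-op driver, the firewall-group lookup) is a placeholder for illustration only and is not the code in xgerman's WIP patch (https://review.openstack.org/#/c/268316/).

    # Sketch only. The three-method interface mirrors what the QoS L2 agent
    # extension implements; all FWaaS names here are hypothetical placeholders.

    class NoopFwaasDriver(object):
        """Stand-in for a real backend driver (iptables, OVS, ...)."""

        def create_firewall_group(self, port, fwg):
            print('create rules for port %s from group %s' % (port['port_id'], fwg))

        def update_firewall_group(self, port, fwg):
            print('update rules for port %s from group %s' % (port['port_id'], fwg))

        def delete_firewall_group(self, port):
            print('delete rules for port %s' % port['port_id'])

    class FwaasL2Extension(object):
        """Receive port events from the L2 agent and apply firewall groups."""

        def initialize(self, connection, driver_type):
            # A real extension would also subscribe here to firewall group
            # push notifications over RPC, much as QoS subscribes to policies.
            self.driver = NoopFwaasDriver()
            self.known_ports = set()

        def handle_port(self, context, port):
            # Called for every port the L2 agent adds or updates.
            fwg = port.get('firewall_group')  # placeholder; real code would do an RPC lookup
            if fwg is None:
                return
            if port['port_id'] in self.known_ports:
                self.driver.update_firewall_group(port, fwg)
            else:
                self.known_ports.add(port['port_id'])
                self.driver.create_firewall_group(port, fwg)

        def delete_port(self, context, port):
            # Called when the agent removes a port; tear down its rules.
            if port['port_id'] in self.known_ports:
                self.known_ports.discard(port['port_id'])
                self.driver.delete_firewall_group(port)

The same handle/delete pattern applied to router events is what an L3 flavor would need, which is why the thread keeps coming back to keeping the two structurally similar.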
04:40:50 xgerman# there you are :)
04:41:11 now to make progress on the L3 agent framework, we should look at defining a workflow that is quite similar
04:41:19 Yeah, I heard my name :-)
04:41:24 :-)
04:41:28 :)
04:41:28 :-)
04:41:37 :)
04:41:43 smile!
04:41:52 i love this meeting
04:41:53 just to be in the loop :)
04:42:09 I think the last major thing I have been tracking is
04:42:16 hang on
04:42:31 * njohnston hangs
04:42:36 So we can try to get some comments on to njohnston's spec
04:42:59 i would like to see if we can flesh out a basic L3 workflow along the lines of L2
04:43:37 i think having a similar model greatly increases how quickly we can get support on the L3 framework
04:43:41 Yeah, the more we can share the better
04:43:54 +1
04:44:43 and if we have a model in place, given that we have a working L3 model - we can refactor the code (to commonize with L2) and fit the final L3 model
04:44:54 at least in an ideal world this might work
04:45:33 ok i am done on this topic
04:45:48 #topic Neutron iptables changes
04:46:20 I still need to file the bugs and start on the code for security groups coexistence
04:46:34 mickeys: Is there anything we can do to facilitate?
04:46:41 mickeys: would you be able to email the group a sample iptables doc that shows before (with SG but not FW entries) and after (with both SG and FW entries)
04:47:06 Yes I can email the sample. Actually I should just put that in the bug?
04:47:16 yes, perfect.
04:47:23 +1
04:47:25 +1
04:47:42 and this would show the wrapped and unwrapped stuff
04:47:46 Yes
04:47:51 nice
04:48:44 chandanc: Any chance you can take on the conntrack coexistence issue?
04:49:06 Sure will take a look
04:49:11 I can file the bug
04:49:54 chandanc: SarathMekala: Did u guys also have any other questions ?
04:50:16 not at this time
04:50:33 I was going through mickeys' patch.... and felt it would be helpful if there is a spec for reference
04:51:14 any pointers in this regard?
04:51:47 I don't have anything written up. The main starting point is to understand the security groups iptables firewall driver.
04:51:58 We could have a call?
04:52:12 Yeah that would be helpful
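To make "the wrapped and unwrapped stuff" concrete, a small sketch, assuming neutron's IptablesManager is used the same way the security groups driver uses it: chains added with the default wrap=True get the per-agent prefix (e.g. neutron-openvswi-), so FWaaS chains can sit next to the SG chains without clobbering them, while wrap=False chains are global/unwrapped. The chain name 'fwaas-i' and the rules below are hypothetical examples, not the coexistence patch mickeys is working on.

    # Sketch only: a wrapped FWaaS chain added via neutron's IptablesManager,
    # alongside (not replacing) the security group chains.
    from neutron.agent.linux import iptables_manager

    ipt = iptables_manager.IptablesManager(use_ipv6=False)

    # Wrapped chain: stored as e.g. 'neutron-openvswi-fwaas-i' in the filter
    # table, namespaced the same way the SG driver namespaces its chains.
    ipt.ipv4['filter'].add_chain('fwaas-i')
    ipt.ipv4['filter'].add_rule('fwaas-i', '-p tcp --dport 80 -j ACCEPT')
    ipt.ipv4['filter'].add_rule('fwaas-i', '-j DROP')

    # Rewrites the managed chains in one iptables-restore pass; needs root.
    ipt.apply()

How the per-port SG chains would jump into a chain like this is exactly what the coexistence work has to settle; the before/after dump going into the bug should show the real layout.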
04:52:55 Well, we have seven minutes left, so let's open the floor
04:53:06 #topic Open Discussion
04:53:28 by the way, one thing i just realized (maybe i'm the last to realize): to add myself as a reviewer to all relevant fwaas-related patches in order to get notifications
04:54:22 I try and add folks when I see them missing, but I don't know everyone's launchpad ID. In particular padkrish and shwetaap I had a hard time finding you to add as a reviewer for a change.
04:54:49 also, nate's dependency chart and some things that i will finish when i get cycles should go on our wiki. sorry to keep mentioning the wiki.
04:55:14 mfranc213: yes pls let me know what u would like to add
04:55:21 we can sync offline
04:55:30 Wiki is good. But we should also start a doc directory in tree
04:55:40 i think there are issues with access so i can proxy
04:55:48 xgerman: +1
04:55:51 I like a doc directory in tree
04:55:58 xgerman +1
04:56:01 #njohnston# I will add myself as reviewer and will mail my launchpad ID
04:56:05 +3 really
04:56:10 Thanks padkrish!
04:56:15 So, is this meeting time still working for everyone?
04:56:49 not me, really :( i'm sorry. but i will keep doing it.
04:57:11 Hi, I'm summarizing the FWaaS v2 L2 part on an etherpad in order to share our (my) progress. I'll send the link ASAP.
04:57:32 for me, anyway: maybe ask this same question, about meeting time, next week?
04:57:32 yushiro: ok sounds good thx
04:57:43 :-)
04:57:48 mfranc213: sure thing
04:57:56 yushiro: Thanks, I look forward to reading it
04:58:29 njohnston: mfranc213: will sync offline to see how we can get some basic integration with the plugin going
04:58:37 xgerman, hi. I'd like to ask you some questions about your patch.
04:58:40 i would like for us to start churning the patchsets
04:58:51 +100 to churn
04:58:57 yushiro: sure
04:59:07 also folks can u pls pull a devstack env - albeit v1
04:59:10 How about discussing in #openstack-fwaas
04:59:11 ?
04:59:15 then u can start playing with things
04:59:18 Sure
04:59:24 padkrish, How about you ?
04:59:27 1 min
04:59:28 sure yushiro
05:00:00 xgerman, padkrish Thank you. Let's move to #openstack-fwaas after 1 minute.
05:00:05 all right, I have to call time
05:00:07 K
05:00:08 Oh, it's now :)
05:00:08 #endmeeting