04:00:02 #startmeeting fwaas
04:00:03 Hi FWaaS folks
04:00:03 Meeting started Wed Aug 17 04:00:02 2016 UTC and is due to finish in 60 minutes. The chair is njohnston. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:00:06 The meeting name has been set to 'fwaas'
04:00:13 Hi everyone!
04:00:15 Hi all o/
04:00:18 Hi All
04:00:25 been a busy week
04:00:28 indeed!
04:00:36 thx all for pulling together
04:00:44 Hi!
04:00:48 lets get started and move quickly
04:00:56 #topic FWaaS v2
04:01:04 Hi
04:01:10 njohnston: oops need chair priv pls
04:01:15 oops
04:01:19 #chair SridarK
04:01:20 Current chairs: SridarK njohnston
04:01:22 #chair xgerman
04:01:23 Current chairs: SridarK njohnston xgerman
04:01:30 #chair yushiro
04:01:31 Current chairs: SridarK njohnston xgerman yushiro
04:01:34 #topic FWaaS v2
04:01:38 njohnston, thanks :)
04:01:44 thx njohnston
04:01:54 lets run thru the patches in order
04:02:42 #link https://review.openstack.org/#/c/264489/
04:02:48 for ext
04:02:49 Hi, I think mine should be first .. i have addressed the comments. I should have the new patch up in like 5 mins.
04:03:00 Excellent, thanks shwetaap!
04:03:06 shwetaap: ok thx - i was just checking
04:03:21 Hi
04:03:23 shwetaap, great. I'll review it again.
04:03:27 shwetaap: all comments addressed?
04:03:46 i will review too, we can try to sign off tonight pending Jenkins
04:04:10 shwetaap: just to be safe - pls rebase b4 u push
04:04:15 Current Jenkins gate queue delay is like 12 hours, so it'll take a bit. :-(
04:04:24 njohnston: sigh
04:04:25 https://twitter.com/openstackstatus/status/765740473327771648
04:04:32 sure, i was running into git issues. But should be up in a bit. Yea I think there were comments from njohnston, yushiro and SridarK
04:04:51 i have addressed those.
04:05:07 shwetaap: ok - pls do a UT run too - so we can avoid another spin thru Jenkins
04:05:16 shwetaap: thx
04:05:25 yea the UTs are passing locally
04:05:37 ok cool lets move on
04:05:41 #link https://review.openstack.org/#/c/311159/
04:05:46 the db patch
04:05:55 njohnston: and i have been working thru this
04:06:07 indeed, and thanks all for the valuable input
04:06:20 ran into an issue with project_id which was painful
04:06:48 so we came to an agreement that we will revert back to tenant_id
04:07:08 had issues with context getting set properly, i think neutron has to do some changes here
04:07:17 yushiro: thx for ur help
04:07:19 After the tenant_id problem, there are some lingering UT issues, but a number of them look like legitimate logic errors.
04:07:33 SridarK, no worries.
04:07:39 So I am grateful for the UTs, they are helping us
04:07:43 njohnston: yes likely - we can pick them off quickly over tomorrow
04:07:47 SridarK, +1 to revert 'tenant_id'.
04:07:49 yes
04:07:58 sigh yes made it almost feel like TDD :-)
04:08:25 TDD - tears driven development :-)
04:08:39 ROTFL !!
04:08:40 :D
04:08:44 TDD!
04:09:21 after breaking my head from Sun, i also should thank one of my colleagues (Bob Melander) who provided another set of eyes
04:09:40 SridarK, njohnston I've commented on the DB patch. sorry for the repeated comments. But please check it.
04:09:53 yushiro: The periodic comment is no problem at all. :-)
04:10:06 it is kind of painful to debug the REST errors as we dont quite know what exactly failed
04:10:09 njohnston, thanks.
04:10:17 yushiro: I have 33000 emails in my Openstack mailbox, one or two more is not a problem.
04:10:28 njohnston: +1 many thx yushiro - it has been very helpful
04:10:54 njohnston, wow, thanks for your kindness.
04:11:30 SridarK, no worries. I think we shouldn't use context.get_admin_context()
04:11:36 njohnston: & I will push thru on this and we hope we will have it ready by end of day tomorrow
04:11:44 Yes, very optimistic we can get the DB patch merged in the next 24 hours.
04:12:06 yushiro: yes i agree - but thx for ur patience in providing insight - my brain had become mush
04:12:29 njohnston: anything else to add on this
04:12:50 Nothing terribly important, no.
04:12:55 Just little things to chase down
04:13:06 njohnston: ok yes agree
04:13:15 ok moving on next to the plugin
04:13:18 #link https://review.openstack.org/#/c/267046/
04:13:58 i kind of went over to help out on the db patch, so i got back to this today - i think i wired up the basic UT with the right extensions
04:14:27 basically i need to get the UT in place, and a few clean up items on it
04:14:57 the plugin mostly passes things over to the db - where most of the heavy lifting is done
04:15:49 we will keep the old rpc model for now to get things in coordination with mfranc213 & chandanc_ 's patch
04:15:57 which we can get to next
04:16:26 L3 Agent + Driver (mfranc213 & chandanc_)
04:16:38 #link https://review.openstack.org/#/c/337699/
04:17:02 mfranc213 has been working on the UTs
04:17:04 i think this patch is mostly in place, mfranc213 also added in the UTs
04:17:22 njohnston: identical thoughts :-)
04:17:27 :-)
04:17:41 chandanc_: thx for jumping in with the driver piece
04:17:51 and also running the end to end tests
04:17:59 no worries, was an opportunity to learn :)
04:18:11 which gives another measure of confidence over and above the UT
04:18:47 BTW once you guys are mostly done with the patches i would like to do a final integration test
04:18:53 chandanc_: i think mfranc213 went ahead and added some UT for the driver too - as she communicated to u
04:18:59 yes
04:19:09 chandanc_: yes that will be good
04:19:16 chandanc_: absolutely, we should all do so
04:19:27 if we find some issues we can pick it up in one of the later patches too
04:19:48 ok with that we cover our first major push
04:20:06 folks pls feel free to interrupt with any questions or thoughts
04:20:36 ok lets move on
04:20:52 #topic FWaaS v2 Phase 2
04:21:17 the new patch is uploaded, once the jenkins test completes, please review the patch.
04:21:25 The next critical patch to go in should be the CLI patch
04:21:37 in terms of time lines i believe
04:21:42 agreed
04:21:54 yushiro: i know u have this in progress
04:22:14 is there anything u will need ?
04:22:15 SridarK, Yes. in CLI patch, I've reflected comments in my local env.
04:22:22 yushiro: ok
04:22:33 I'll push the CLI patch within today.
04:22:44 if it is easy - we can add that to our integration test
04:22:50 yushiro: ok great
04:23:17 sure can have it in the integration test
04:23:49 I'm considering the command format. Hence, please feel free to comment on my next CLI patch.
04:23:57 will do yushiro!
04:24:12 +1
04:24:24 thank you all!
04:24:28 one thing we should think abt is how to reflect the L2 model
04:25:20 because unlike L3, we will need to dynamically add as a VM comes up
04:25:29 something to think abt
04:25:42 may be we can exchg some emails
04:25:51 I think you will get a call through the L2 ext right ?
04:25:55 I've been looking at the L2 code, and I have some thoughts
04:26:05 yes, should be a create port RPC message will come through
04:26:08 njohnston: yes that is my understanding
04:26:14 SridarK# aren't we hooking with the OVS neutron agent for port update events to address that?
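Earlier in the DB-patch discussion, yushiro suggested not using context.get_admin_context(). A minimal sketch of why: manufacturing a fresh admin context inside the DB layer drops the tenant identity of the original request, which is exactly the "context not getting set properly" pain hit after the tenant_id revert. The classes and function names below are illustrative stand-ins, not Neutron's actual code.

```python
class Context:
    """Toy stand-in for a request context carrying tenant identity."""
    def __init__(self, tenant_id, is_admin=False):
        self.tenant_id = tenant_id
        self.is_admin = is_admin

def get_admin_context():
    # An admin context has no tenant of its own.
    return Context(tenant_id=None, is_admin=True)

def create_firewall_group_bad(name):
    # Anti-pattern: building an admin context inside the DB layer.
    ctx = get_admin_context()
    return {'name': name, 'tenant_id': ctx.tenant_id}  # tenant_id lost

def create_firewall_group_good(context, name):
    # Preferred: thread the caller's context down and take ownership
    # (the tenant_id column just reverted to) from it.
    return {'name': name, 'tenant_id': context.tenant_id}
```

With the caller's context threaded through, `create_firewall_group_good(Context('proj-a'), 'fwg1')` records `'proj-a'` as the owner, while the admin-context version records no owner at all.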
04:26:36 padkrish_: yes i think thru the L2 ext framework as njohnston is saying
04:26:46 yes +1
04:27:06 SridarK# yes, that already seems to be there...need to tie in all the pieces
04:27:15 I think it's pretty close
04:27:20 all the pieces are there
04:27:43 njohnston: perhaps we will need to state that we want the fw applied in some form to all VMs in a project for example
04:27:49 #njohnston# if my memory serves me right, we may need to add some parameters to the get_port_details RPC....
04:27:55 since we will not be tied in to nova create
04:27:59 Will confirm
04:28:22 But yes, let's get an email chain started
04:28:23 padkrish_, Sorry, a trigger method is 'handle_port' ?
04:28:32 njohnston: yes
04:28:59 yushiro# yes, from the agent perspective.
04:29:08 padkrish_: could you email the details that need to be added to get_port_details?
04:29:15 padkrish_, OK. we're on the same page :)
04:29:23 njohnston# sure, will do
04:29:27 thanks!
04:29:45 ok i think we can have a plan in place quickly
04:30:35 njohnston: is already thinking abt this, mfranc213: & padkrish_ are looking at versioned obj - so i think in some combination of folks
04:30:42 we can get this ball rolling
04:30:45 yep
04:31:18 oh and the other piece is the L3 agent ext framework in all of this
04:31:20 yes
04:31:30 njohnston: great congrats on the patch getting merged
04:31:47 thanks!
04:32:03 njohnston, congrats!!
04:32:04 njohnston: now we will also refactor the L3 agent around this correct ?
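The handle_port flow discussed above can be sketched as follows. The real hook is Neutron's L2 agent extension framework (the OVS agent invokes extension callbacks on port events, and get_port_details is the RPC mentioned by padkrish_); this toy version fakes the RPC side so the dynamic-VM case is visible. Class and attribute names here are assumptions for illustration, not the eventual FWaaS L2 code.

```python
class FakeRpcClient:
    """Toy stand-in for the agent-to-plugin RPC channel."""
    def __init__(self, port_to_fwg):
        self.port_to_fwg = port_to_fwg

    def get_port_details(self, context, port_id):
        # The meeting notes this RPC may need extra parameters; here it
        # just reports which firewall group (if any) covers the port.
        return {'firewall_group': self.port_to_fwg.get(port_id)}

class FWaaSL2Extension:
    """Sketch of an L2 agent extension reacting to port events."""
    def __init__(self):
        self.applied = {}  # port_id -> firewall group applied

    def initialize(self, rpc_client):
        self.rpc = rpc_client

    def handle_port(self, context, port):
        # Called when a port appears or is updated, e.g. as a VM comes
        # up -- the dynamic case raised above, with no tie to nova create.
        details = self.rpc.get_port_details(context, port['id'])
        fwg = details.get('firewall_group')
        if fwg:
            self.applied[port['id']] = fwg

    def delete_port(self, context, port):
        self.applied.pop(port['id'], None)
```

Driving it with the fake RPC shows the shape of the flow: a handle_port for a covered port installs its firewall group, and delete_port removes it.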
04:32:18 mfranc213 has put up a PS for refactoring the fwaas L3 extension to use the L3 agent extension mechanism
04:32:21 #link https://review.openstack.org/#/c/355576/
04:32:33 So thanks to her we are ahead of the game there
04:32:44 oh ok great - yes - she did mention this - yes
04:33:01 ok that is covered too
04:33:20 It's on my 'to review' list as soon as we get past the DB patch
04:33:27 ok so we have our work cut out over the next few days
04:33:43 now on the iptables pieces
04:33:56 chandanc_: & SarathMekala: pls go ahead
04:34:15 chandanc_: thx for reaching out to get this on kevin's radar
04:34:18 #link https://review.openstack.org/#/c/348177/
04:34:31 I hope we can get attention on it while the midcycle is going on
04:34:44 We had some feedback from Kevin on the commit message, he also went through our doc
04:34:48 njohnston: yes - chandanc_ has added it to the etherpad
04:34:54 Yep. Will be great if we can get the code reviewed
04:35:02 yes the etherpad is updated
04:35:20 I can reach out to Kevin once more for a reminder
04:35:27 hoangcx: many thx for adding Ha Van also
04:35:54 Is there anyone else who can give us some feedback on the patch ?
04:36:03 yes SridarK
04:36:20 SridarK: No problem.
04:36:20 chandanc_: i think u can reach out to Ha Van
04:36:30 sure will do
04:36:34 hoangcx: pls help make this happen and many thanks
04:37:19 SridarK: He is investigating the design. Will push comments soon (maybe today or tomorrow).
04:37:25 We are going to start with the driver patch this week
04:37:28 hoangcx: ok great
04:37:38 thx
04:37:41 hoangcx, thanks.
04:37:49 SridarK: Thank you too :-)
04:38:23 We have gone through Yushiro's L2 agent code
04:38:50 will reach out to him for integrating with the driver code
04:39:02 SarathMekala, OK.
04:39:08 SarathMekala: ok thx
04:40:08 and how are things looking with the driver - should that be straightfwd along the lines of the L3
04:40:50 once u have the neutron piece in place and the L2 Agent piece - the driver as such will bind the rules to a VM port
04:40:51 I think it will be a bigger change than the l3 agent, We have looked at Mickey's patch as reference
04:40:59 chandanc_: ok
04:41:08 yes SridarK
04:41:48 pls let us know how we can help
04:41:53 +1
04:41:54 sure
04:42:10 sure.. will ping you for any info
04:42:20 hoangcx: we will keep Ha Van in the loop for any suggestions or help too
04:42:23 as time is short
04:42:38 ok
04:42:56 SridarK: Sure. He is yours :-)
04:42:56 ok anything else on the driver pieces
04:43:01 hoangcx: thx
04:43:04 :-)
04:43:26 chandanc_: & SarathMekala: pls reach out
04:43:37 u heard it from hoangcx :-)
04:43:42 No , we will start by reaching out to Yushiro
04:43:48 :D.. sure
04:43:59 we will be coming out with a patch soon
04:44:04 ok cool
04:44:29 if nothing else lets move on
04:44:55 #topic new cores
04:45:29 as in the email congrats and thx to njohnston: & yushiro: for taking on the additional responsibilities
04:45:41 this will enable our velocity
04:45:47 +1
04:45:48 Congrats njohnston & yushiro
04:45:54 Congrats to Nate and Yushiro
04:45:57 Thanks for the trust. Please let me know, anyone, if I can help you.
04:46:14 congrats to Nate and Yushiro :-)
04:46:18 Thank you all! I'll do my best to realize FWaaS v2!!
04:46:25 +1
04:46:38 #topic open discussion
04:47:11 firstly many thx for the cohesiveness of the team - we are all kind of all over the place - helping out as needed
04:47:44 we will probably work in this fashion with a little lack of structure to push things fwd
04:47:52 +100
04:48:13 :)
04:48:37 Hi, I have 1 thing about firewall_group status. I'd like to sync my understanding with you.
04:48:57 the next few days are going to be crazy. Lets target by Fri to get things in to give us a little buffer
04:49:14 yushiro: yes pls
04:49:19 (by next week Fri)
04:50:05 yes, in my understanding, the 'status' of firewall_group relates to port association.
04:50:31 is it only L3 for next Fri?
04:51:12 no port association -> "INACTIVE", associated ports -> "ACTIVE", waiting for update -> "PENDING_UPDATE", waiting for delete -> "PENDING_DELETE"
04:51:16 yushiro: yes and also to reflect that the driver has applied the changes and it is marked ACTIVE
04:51:28 yushiro: yes exactly
04:52:23 SarathMekala: no the week of Aug 29 is Feature Freeze - i would not count on that week
04:52:31 njohnston: am i correct ?
04:52:32 SridarK, OK, thanks. So, how about the current situation? firewall_group has no ingress_firewall_policy_id and egress_firewall_policy_id and is associated with ports.
04:52:46 SridarK: yes
04:53:20 yushiro: that is interesting - we cannot really apply anything
04:53:40 earlier the policy was a mandatory attribute
04:54:22 now we have a default of NULL - which makes sense as we need not have both ingress and egress
04:54:26 SridarK, Yes. That is my opinion. How about making either 'ingress_firewall_policy_id' or 'egress_firewall_policy_id' mandatory?
04:54:53 yushiro: yes
04:55:09 we can do the validation in the plugin
04:55:27 we can still keep the attribute spec as optional
04:55:47 but the plugin can check if either one is present
04:55:57 SridarK, OK. I understand.
04:56:00 and we have to handle the update case on fw grp
04:56:12 what if we have ingress policy only
04:56:16 SridarK, Sure. we also take care about it.
04:56:31 and now we update the fw grp and try to remove it
04:56:37 we can fail that
04:58:02 we can also create a fwg and if no policy we can keep it INACTIVE and we can fail if user tries to bind ports to a fwg that has no policy
04:58:25 yushiro: Earlier we had an option to start the firewall with state DOWN. Hope it's taken care of with Firewall groups as well.
04:58:28 SridarK, ah, yes! it's better.
04:58:37 great point yushiro - we are almost out of time - shall we continue on email
04:58:48 or on irc
04:58:57 SridarK, sure. I'll send e-mail to all.
04:58:59 ok we are almost at time
04:59:02 yushiro: +1
04:59:11 +1
04:59:14 we can add the L2 discussion also
04:59:26 ok thanks again all
04:59:35 lets get those patches merging
04:59:50 yes!
04:59:59 bye all
05:00:01 #endmeeting
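The status rules and the "at least one policy" check agreed in the open discussion can be sketched as below. The status constants match the states named by yushiro and SridarK; the function names and signatures are illustrative, not the actual plugin code.

```python
# Status values for a firewall_group, as listed in the discussion.
ACTIVE = 'ACTIVE'
INACTIVE = 'INACTIVE'
PENDING_UPDATE = 'PENDING_UPDATE'
PENDING_DELETE = 'PENDING_DELETE'

def derive_status(ports, pending=None):
    """No port association -> INACTIVE; associated ports -> ACTIVE
    (once the driver has applied the changes); in-flight operations
    map to the PENDING_* states."""
    if pending == 'delete':
        return PENDING_DELETE
    if pending == 'update':
        return PENDING_UPDATE
    return ACTIVE if ports else INACTIVE

def validate_firewall_group(fwg):
    """Both policy fields stay optional in the attribute spec (default
    NULL), but the plugin rejects binding ports to a group that has
    neither an ingress nor an egress policy -- including the update
    case where the last policy is being removed."""
    has_policy = bool(fwg.get('ingress_firewall_policy_id') or
                      fwg.get('egress_firewall_policy_id'))
    if fwg.get('ports') and not has_policy:
        raise ValueError(
            'cannot bind ports to a firewall group with no policy')
    return has_policy
```

Under these rules a freshly created group with no policy simply stays INACTIVE, and the failure only occurs when a user tries to attach ports to it, which mirrors the compromise reached at the end of the discussion.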