04:00:05 <njohnston> #startmeeting fwaas
04:00:06 <openstack> Meeting started Wed Sep 14 04:00:05 2016 UTC and is due to finish in 60 minutes.  The chair is njohnston. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:00:09 <openstack> The meeting name has been set to 'fwaas'
04:00:13 <njohnston> #chair SridarK
04:00:14 <openstack> Current chairs: SridarK njohnston
04:00:14 <chandanc__> Hello All
04:00:23 <hoangcx> hi
04:00:29 <njohnston> xgerman sends his regrets - he is feeling under the weather this evening
04:00:37 <SridarK> hi all
04:01:19 <SridarK> Also yushiro had informed us that he will not make it today
04:01:41 <SridarK> lets get started
04:02:07 <SridarK> #topic Final things for Newton
04:02:29 <SridarK> njohnston: thx for taking care of the project_id related issues
04:02:54 <njohnston> Sure thing, it was a real gate-buster so we had to clear it
04:03:04 <chandanc__> Sorry i missed all the action, catching up on the IRC logs
04:03:04 <njohnston> Newton RC-1 is in the process of being cut, as evidenced by things like this
04:03:04 <njohnston> #link https://review.openstack.org/369762 "Tag the alembic migration revisions for Newton"
04:03:24 <SridarK> i guess now we are at a stage where only something that is really critical shd go thru
04:03:45 <njohnston> yes.
04:03:50 <SridarK> i was testing a fix for an issue on rule updates
04:03:58 <annp> hi
04:04:01 <njohnston> But good point - Ocata development will be opening up
04:04:19 <SridarK> when i saw the migration patch go thru - that seemed like the gates were closing
04:04:48 <njohnston> if you can land the patch in master, perhaps we can backport it to stable/newton before the final release
04:04:59 <njohnston> we can check with ihrachys
04:05:00 <SridarK> njohnston: good point
04:05:21 <SridarK> i will push it in as soon as we jump to ocata
04:05:51 <SridarK> i know there are a few more clean up issues - that can go into Ocata
04:06:34 <SridarK> chandanc__: i know u had one too, regarding the cleanup of iptables
04:06:43 <SridarK> that can also be pushed for Ocata
04:07:14 <chandanc__> sorry, can you tell me which one you are referring to
04:07:46 <SridarK> chandanc__: the issue we discussed on a port getting deleted
04:07:54 <chandanc__> ah yes
04:07:57 <chandanc__> will do
04:07:58 <SridarK> and notification thru the agent ext
04:08:43 <SridarK> so the port is gone, but since the plugin does not initiate this - we will not have a specific delete msg reg the port
04:08:55 <SridarK> (just for some context for all )
04:08:59 <njohnston> If you note those changes using #link, we can all go through the meeting notes later and make sure your patches get reviewed very quickly
04:09:28 <SridarK> njohnston: good point
04:09:53 <SridarK> or let us add it to the Ocata etherpad
04:10:06 <SridarK> will be good to get a descriptive bug opened as well
04:10:07 <njohnston> sure
04:10:24 <njohnston> #link https://etherpad.openstack.org/p/fwaas-ocata Ocata etherpad
04:10:56 <SridarK> njohnston: anything else that u want to bring up as pending for Newton
04:11:02 <njohnston> nope
04:11:09 <SridarK> Others as well ?
04:11:14 <njohnston> since the reno note just merged literally seconds ago :-)
04:11:51 <SridarK> ok good lets move on
04:11:54 <SridarK> :-)
04:12:15 <SridarK> #topic Ocata L2
04:12:32 <SridarK> i think we have clarity on what needs to be fixed
04:13:08 <njohnston> yes
04:13:11 <SridarK> chandanc__: SarathMekala: as Newton final things get pulled pls do push for reviews on the neutron patches
04:13:22 <chandanc__> yes we know the neutron part well, the fwaas driver for l2 need some work
04:13:57 <chandanc__> sure will ping kevin again
04:13:59 <SarathMekala> SridarK, sure
04:14:06 <SridarK> chandanc__: yes understood
04:14:37 <SridarK> padkrish: pls go ahead - u had some points to discuss on the L2 Agent
04:15:01 <padkrish> Sridark# Sure, we discussed this over emails, just summarizing here for everyone's benefit
04:15:35 <padkrish> When a port is created the FW L2 agent extension gets called from Neutron agent (e.g ovs_neutron_agent)
04:16:20 <njohnston> yes
04:16:21 <padkrish> Neutron agent uses the get_devices_details RPC to get the port related details. Some features like QoS have piggybacked some information on this RPC...
04:17:13 <njohnston> ok
04:17:24 <padkrish> When handle_port gets to fwaas agent, it needs to know the FWG_ID that is associated with the port
04:18:06 <padkrish> so, one option is to enhance the neutron RPC like QoS did and piggyback FWaaS V2 related stuff like fwg_id
04:18:42 <padkrish> another option is to call fwaas plugin RPC from fwaas L2 extension to retrieve the information like fwg_id of interest
04:19:01 <padkrish> In order to be independent from neutron, i assume we want to take the latter option
04:19:12 <padkrish> right?
04:19:32 <chandanc__> yes , the second option looks better
04:19:32 <SridarK> padkrish: that seems the better option to me
04:19:36 <njohnston> The second option does sound similar to what we did in the L3 agent
04:20:28 <SarathMekala> We can investigate using proxy pattern on the Agent
04:20:39 <SridarK> the one minor difference would be that we will have a messaging from the agent to the plugin asking "I am a new port, pls tell me the fwg i should associate with"
04:20:53 <SarathMekala> the extension calls the agent which in turn does the RPC and hands over the data to the extension
04:21:09 <padkrish> SarathMekala# then that means changes to Neutron agent?
04:21:14 <SridarK> next, the plugin can associate the port with the fwg and provide the fwg dict to the agent
04:21:34 <SarathMekala> Yeah... to be clean.. multiple other plugins may have similar requirement
04:21:48 <njohnston> What we did in the L3 agent was that we just had the FWaaS L3 plugin handle RPC communications directly
04:22:04 <njohnston> no need for a proxy
04:22:19 <padkrish> njohnston, sarathmekala# i also prefer that, at least to not have a dependency on neutron
04:22:26 <njohnston> because the plugin is in the same process as the neutron agent, it has access to the rpc capabilities as well
04:22:32 <padkrish> i mean prefer not having a proxy
04:22:46 <SridarK> And L3 is a bit different - the association to a L3 port is driven by the user thru the plugin
04:23:02 <padkrish> SridarK# correct
04:23:29 <SarathMekala> njohnston, I agree it works.. just thinking.. did we tweak the topic so that the extension can receive the message
04:23:52 <njohnston> yes, we used a separate topic for communications to the extension
04:24:28 <padkrish> consts.FIREWALL_PLUGIN is the topic
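The flow being settled on above can be sketched as a toy model. This is not real neutron-fwaas code: the RPC transport is stubbed with a plain object, and the method name `get_firewall_group_for_port` plus the fwg dict shape are illustrative assumptions, not the actual plugin RPC API.

```python
# Schematic sketch of the approach under discussion: the FWaaS L2 agent
# extension asks the FWaaS plugin, over the plugin's own RPC topic
# (consts.FIREWALL_PLUGIN in the meeting), which firewall group a newly
# handled port belongs to.  The RPC plumbing is stubbed; the method name
# and fwg dict shape are illustrative guesses, not the real API.

class FakeFirewallPluginRpc:
    """Stand-in for the plugin side of the FWaaS RPC endpoint."""
    def __init__(self, port_to_fwg):
        self._port_to_fwg = port_to_fwg

    def get_firewall_group_for_port(self, context, port_id):
        # The plugin resolves the association and returns the fwg dict
        # so the agent can program the dataplane (hypothetical method).
        return self._port_to_fwg.get(port_id)


class FWaasL2AgentExtension:
    """Stand-in for the FWaaS L2 agent extension's handle_port path."""
    def __init__(self, plugin_rpc):
        self.plugin_rpc = plugin_rpc
        self.applied = {}  # port_id -> fwg id actually programmed

    def handle_port(self, context, port):
        # Called from the L2 agent after the port is wired up; the RPC
        # blocks only this callback, not port creation itself.
        fwg = self.plugin_rpc.get_firewall_group_for_port(
            context, port['port_id'])
        if fwg is None:
            return  # no firewall group associated; leave the port alone
        self.applied[port['port_id']] = fwg['id']


plugin_rpc = FakeFirewallPluginRpc(
    {'port-1': {'id': 'fwg-1', 'ports': ['port-1']}})
ext = FWaasL2AgentExtension(plugin_rpc)
ext.handle_port(None, {'port_id': 'port-1'})
print(ext.applied)  # {'port-1': 'fwg-1'}
```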
04:24:37 <SarathMekala> njohnston, ok..sounds good.. the proxy approach can be used without changing the topic
04:24:59 <SarathMekala> but we can go with this approach
04:25:16 <SarathMekala> I meant the L3 approach
04:25:17 <padkrish> ok, if we are going with the approach of a separate RPC to the plugin
04:25:39 <SridarK> SarathMekala: for the msg to be seen by the FW plugin we will need to use the appropriate topic
04:25:43 <padkrish> then one thing is that it adds an extra RPC, which is a penalty
04:25:55 <chandanc__> yes
04:26:05 <SridarK> keeping things separate from neutron will be  a good thing
04:26:57 <SridarK> Now one concern here is we will be doing an RPC in the context of the handle_port callback
04:27:01 <chandanc__> thats the price we need to pay for separation of plugin / extension
04:27:04 <padkrish> yes, we can call the RPC from the handle_port context, which blocks creation of the port
04:27:40 <SridarK> essentially we will do the whole FW agent -> plugin, then plugin response to agent all in this context
04:27:48 <njohnston> I believe the l3 agent extension manager doesn't get handle_port called on it until the port has already been created
04:29:18 <SridarK> So broadly this approach seems to be appropriate
04:29:42 <padkrish> sorry, suddenly didn't receive messages after njohnston's "I believe the l3 agent extension manager doesn't get handle_port called on it until the port has already been created"
04:30:03 <njohnston> I did not either
04:30:14 <chandanc__> me too
04:30:33 <padkrish> ok, then probably no one typed :)
04:30:45 <SarathMekala> padkrish, any thoughts on solving the blocking call
04:30:53 <chandanc__> so, will the fw driver also be called in the port creation context ?
04:31:02 <SridarK> oh
04:31:09 <SridarK> i have been typing
04:31:11 <SridarK> :-)
04:31:26 <padkrish> oh
04:31:49 <padkrish> after njohnston, i only got your next message as "So broadly this approach seems to be appropriate"
04:31:49 <njohnston> #link https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1537-L1539 where handle_port gets called in the OVS L2 agent context
04:32:07 <SridarK> ah ok
04:32:14 <hoangcx> same here :-)
04:32:24 <njohnston> note that handle_port is called after _update_port_network which actually accomplishes the adding/updating of the port
04:32:36 <padkrish> njohnston# yes, i was following this and able to get the fwaas extension called in my prototype
04:32:54 <padkrish> this also happens only after ovs flows are programmed, i believe
04:33:11 <njohnston> ok, good, I just wanted to be clear on the order of things because I believe this precludes blocking the port creation if we take action inside handle_port
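The ordering njohnston points at in the linked ovs_neutron_agent code can be modeled in a few lines. This is a toy sketch, not the real agent code: the class and method bodies are invented stand-ins, and swallowing the extension failure mirrors the behavior the meeting is debating, not a confirmed Neutron policy.

```python
# Toy model (not the real ovs_neutron_agent code) of the ordering noted
# above: the port is added/updated first, and the agent extension
# manager's handle_port runs afterwards, so an error raised in an
# extension cannot un-create the port.  Whether such a failure should
# be swallowed, as modeled here, is the design choice under discussion.

class FakeExtensionManager:
    def __init__(self, extensions):
        self.extensions = extensions
        self.errors = []

    def handle_port(self, context, port):
        for ext in self.extensions:
            try:
                ext.handle_port(context, port)
            except Exception as exc:
                # record and continue: the port stays up (assumed policy)
                self.errors.append(str(exc))


class FakeOVSAgent:
    def __init__(self, ext_manager):
        self.ext_manager = ext_manager
        self.wired_ports = []

    def _update_port_network(self, port_id, network_id):
        self.wired_ports.append(port_id)  # port add/update happens here

    def treat_port(self, context, port):
        # Order mirrors the linked code: wire the port, then extensions.
        self._update_port_network(port['port_id'], port['network_id'])
        self.ext_manager.handle_port(context, port)


class FailingFwaasExt:
    def handle_port(self, context, port):
        raise RuntimeError('fwaas driver error')


agent = FakeOVSAgent(FakeExtensionManager([FailingFwaasExt()]))
agent.treat_port(None, {'port_id': 'p1', 'network_id': 'n1'})
print(agent.wired_ports)  # ['p1'] -- the port came up despite the failure
```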
04:34:27 <SridarK> ok but still we will be doing multiple rpc's in the context of handle_port
04:35:09 <njohnston> Yes.  The alternative to that is to get deeply embedded into neutron.
04:36:09 <SridarK> hmm ok, now since we are in the L2 Agent context anyways, the former is probably ok
04:36:26 <SridarK> esp if this is not holding up the port creation
04:38:28 <padkrish> if there's any failure in fwaas agent extension, then the port creation also should fail, is it not?
04:38:56 <padkrish> or probably not
04:39:01 <SridarK> padkrish: may be the fwg association part is a bit premature for us to force that
04:39:42 <padkrish> SridarK# ok..
04:40:03 <SridarK> at least that is my thinking now
04:40:55 <SridarK> What do others think ?
04:42:21 <njohnston> I am on the fence, but I would tend to think that if someone has configured FWaaS, they will be expecting security.  If we fail to apply a firewall to a new port, then it seems like that port add should fail, because otherwise we leave an unsecured port, which is something security-conscious people will not like.
04:42:22 <_SarathMekala_> sure.. fine for now
04:42:32 <chandanc__> yes, may be for now we can let the port creation pass even if the FWG fails
04:42:47 <njohnston> But that being said, this is new, and we can iterate on it.
04:42:52 <SridarK> njohnston: that would be the ideal behavior
04:42:55 <padkrish> njohnston: +1
04:43:15 <SridarK> njohnston: but that can come after wider adoption
04:43:27 <padkrish> njohnston# basic question: does qos extension force that?
04:43:35 <SridarK> it will be a hard sell to push for that now
04:44:02 <njohnston> padkrish: Honestly, I don't recall.  I'll have to fire up a devstack and see. :-/
04:44:29 <padkrish> njohnston# no worries, i have one...will inject a failure and find out :)
04:45:06 <njohnston> :-)
04:46:06 <njohnston> I would expect QoS to allow the port creation to continue
04:46:43 <njohnston> because, with the exception of DSCP, the QoS feature is not security-related
04:47:28 <njohnston> ok, we are starting to run down on time
04:47:31 <padkrish> ok
04:47:38 <njohnston> SridarK: what else do you have on your agenda?
04:47:56 <SridarK> So it seems we are on the same page here, padkrish more investigation here perhaps
04:48:10 <padkrish> SridarK# sure, will update the etherpad
04:48:12 <SridarK> one other thing we need to nail down in relation to this
04:48:18 <hoangcx> njohnston: I have one on the etherpad. Waiting ...
04:48:42 <SridarK> how do we specify the equivalent of the default fwg to apply when a port comes up
04:48:56 <SridarK> lets take that discussion to the Ocata etherpad
04:49:40 <SridarK> hoangcx: did u want to discuss something in this context here ?
04:49:52 <hoangcx> About [Challengable work] Support logging API for firewall-rule
04:49:52 <SridarK> njohnston: i had nothing else
04:50:02 <SridarK> hoangcx: ok
04:50:05 <hoangcx> We have been proposing/developing Logging API for SG in Neutron [BP].
04:50:10 <hoangcx> [BP] https://blueprints.launchpad.net/neutron/+spec/security-group-logging
04:50:12 <njohnston> I have 1 thing, after hoangcx
04:50:16 <hoangcx> The spec[1] looks so smooth and it may be merged in O-1.
04:50:20 <hoangcx> [1] https://review.openstack.org/#/c/203509
04:50:25 <hoangcx> For source code [2], We will implement in Ocata.
04:50:29 <hoangcx> [2] https://review.openstack.org/#/c/353440
04:50:33 <hoangcx> We think this feature is necessary not only SG but also firewall.
04:50:37 <hoangcx> If possible, we would like to provide more resources on FWaaS v2 for the feature.
04:50:54 <njohnston> excellent!
04:51:00 <SridarK> hoangcx: ok, lets do more discussion on the etherpad
04:51:04 <hoangcx> I would like to get comments/feedback about that?
04:51:17 <njohnston> I'll take a look at those changes and give feedback
04:51:38 <hoangcx> njohnston SridarK: Sure. Thank you
04:51:53 <SridarK> hoangcx: i think u have answered most queries around the need for rate limit and any perf impacts on any rule logging
04:51:54 <hoangcx> The point is that we would like to support the feature for FWaaS v2 also.
04:52:15 <SridarK> hoangcx: sounds good
04:52:46 <hoangcx> SridarK: Yes. We will provide more resource to tackle it
04:53:01 <SridarK> njohnston: pls go ahead, we can go to Open Discussion if that makes more sense
04:53:10 <SridarK> hoangcx: thx
04:53:11 <njohnston> #topic Stadium compliance
04:53:16 <njohnston> I went through the items required to meet Neutron stadium compliance.
04:53:21 <njohnston> #link http://docs.openstack.org/developer/neutron/stadium/governance.html#checklist Stadium Checklist
04:53:27 <njohnston> So I dropped some changes to support the items I identified that were not in progress.
04:53:34 <njohnston> #link https://review.openstack.org/#/q/branch:master+topic:fwaas-stadium Stadium checklist items
04:53:41 <njohnston> Others are already in flight, like OSC support
04:54:05 <njohnston> But please, if you see anything that I have missed either let me know, or feel free to dive in and fix it.
04:54:20 <SridarK> njohnston: thx will do
04:54:29 * njohnston steps off his soapbox
04:54:36 <SridarK> :-)
04:55:00 <SridarK> njohnston: also we would like to make the push to get these addressed as early as possible in Ocata
04:55:23 <njohnston> Indeed, since the in-or-out call on Stadium status will be at the O-1 mark IIRC
04:55:33 <SridarK> yes
04:56:19 <njohnston> #topic Open Discussion
04:57:13 <njohnston> Anyone have anything else?
04:57:16 <SridarK> nothing much more to add, the Ocata etherpad is our to do list
04:57:21 <njohnston> indeed
04:57:22 <_SarathMekala_> SridarK, have a question.. did you sync with the horizon folks regarding UI?
04:57:52 <SridarK> _SarathMekala_: thx for the reminder, will do so and keep u in the loop
04:57:54 <njohnston> chandanc__: let me know if there is anything I can help with for the neutron iptables changes
04:57:59 <_SarathMekala_> not sure if I can go ahead and start prototyping
04:58:01 <padkrish> SridarK# one more thing on the agent extension, that SridarK also raised is the default fwg that should be associated to a port
04:58:18 <SridarK> _SarathMekala_: pls go ahead with it and kick things off
04:58:24 <chandanc__> njohnston, I will be pushing the update this week, will ping you in case i need help with the UT
04:58:37 <njohnston> chandanc__: thanks, I will look forward to it
04:58:42 <SridarK> padkrish: lets discuss on etherpad
04:58:48 <padkrish> ok
04:59:00 <SridarK> i think we are almost at time
04:59:21 <njohnston> thanks everyone!
04:59:25 <SridarK> Thx all
04:59:26 <hoangcx> 1 minute
05:00:00 <njohnston> #endmeeting