21:03:48 <markmcclain> #startmeeting networking
21:03:48 <openstack> Meeting started Mon Aug 19 21:03:48 2013 UTC and is due to finish in 60 minutes.  The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:51 <openstack> The meeting name has been set to 'networking'
21:03:51 <garyk> can i go back to sleep :)
21:04:04 <markmcclain> #link https://wiki.openstack.org/wiki/Network/Meetings
21:04:38 <markmcclain> this one will probably run a little longer than 30s
21:04:44 <markmcclain> #topic Announcements
21:04:59 <markmcclain> #link https://launchpad.net/neutron/+milestone/havana-3
21:05:32 <markmcclain> #info The feature proposal freeze is end of day Friday August 23rd
21:05:40 <markmcclain> #link https://wiki.openstack.org/wiki/FeatureProposalFreeze
21:06:24 <markmcclain> We're actually looking pretty good for this deadline.  We've got almost everything already proposed
21:07:18 <markmcclain> which means we'll spend the next 2 weeks reviewing/merging and closing out the critical bugs
21:07:21 <markmcclain> #topic bugs
21:07:30 <markmcclain> #link https://bugs.launchpad.net/neutron/+bugs?search=Search&field.importance=Critical&field.status=New&field.status=Confirmed&field.status=Triaged&field.status=In+Progress
21:08:06 <markmcclain> We've got 3 open critical bugs
21:08:14 <markmcclain> https://bugs.launchpad.net/neutron/+bug/1208661
21:08:16 <uvirtbot> Launchpad bug 1208661 in neutron "nova show command sometimes don't includes ip address" [Critical,In progress]
21:08:17 <markmcclain> and
21:08:24 <markmcclain> https://bugs.launchpad.net/neutron/+bug/1210483
21:08:27 <uvirtbot> Launchpad bug 1210483 in neutron "ServerAddressesTestXML.test_list_server_addresses FAIL" [Critical,Confirmed]
21:08:34 <markmcclain> are related to bugs discovered during tempest testing
21:08:40 <nati_ueno> Regarding 1208661,
21:08:51 <nati_ueno> sometimes nova show doesn't have the ip address.
21:09:08 <nati_ueno> I can't reproduce it yet, but I sent a devstack patch adding more logging. It is merged now
21:09:15 <nati_ueno> so we may get more hints
21:09:33 <nati_ueno> Maybe we can drop the priority to high, because the error rate from this one is not so high
21:10:00 <markmcclain> ok
21:10:53 <markmcclain> any other critical or high bugs we need to discuss?
21:11:11 <marun> yes
21:11:32 <marun> the gate blocking bug is fixed, waiting on a merge of the re-enabling patch
21:11:57 <marun> there is a bug in the grenade gate that is delaying approval, but hopefully it goes in before any other bugs slip through
21:12:11 <markmcclain> marun: do you have a number?
21:12:38 <marun> https://bugs.launchpad.net/devstack/+bug/1210664
21:12:40 <uvirtbot> Launchpad bug 1210664 in tempest "neutron can't setup basic network connnectivity in gate jobs" [Undecided,In progress]
21:12:55 <marun> the fixes are in, it's just the tempest patch to enable the connectivity check that is pending
21:13:07 <markmcclain> ah cool
21:13:17 <salv-orlando> marun, markmcclain: the latest gating issue led me to think that probably we should try and cover more scenarios in tempest. What do you reckon?
21:13:37 <marun> salv-orlando: no comment
21:14:12 <marun> salv-orlando: the lack of good lower-level testing makes adding more scenarios a problematic choice
21:14:45 <salv-orlando> cool. Thanks for the input. I don't think this is the right meeting topic
21:14:51 <salv-orlando> so let's move on.
21:14:55 <marun> fair enough
21:15:44 <markmcclain> salv-orlando: yeah I think we need to bulk up the tests on a few different levels.. we should definitely set up a discussion on this
21:15:57 <salv-orlando> cool
21:16:01 <markmcclain> Any other bug items?
21:16:31 <nati_ueno> https://review.openstack.org/#/c/42737/
21:16:49 <nati_ueno> This is the third duplicate of the multi-worker work.
21:17:00 <nati_ueno> # I'm not sure we should talk on it now
21:17:34 <markmcclain> yeah.. this is not really a bug so much as a small new feature
21:17:57 <marun> Whatever is finally merged, is there consensus that a non-threaded mode is desirable to aid in debugging?
21:18:34 <jackmccann> there is a bp for multi-workers
21:18:47 <nati_ueno> Ok I'll -2 this patch, and let's move on with the meeting
21:19:08 <markmcclain> jackmccann: right.. I think that is where the duplication occurred, because one version was filed via BP and the other via a bug
21:19:38 <nati_ueno> markmcclain: so https://review.openstack.org/#/c/36487/ is the correct one?
21:19:41 <markmcclain> marun: yes it is nice to be able to trigger single threaded mode at times
21:20:39 <markmcclain> nati_ueno: possibly I'll compare the different versions after the meeting
21:20:46 <marun> markmcclain: I'll make a note on the bp
21:20:53 <markmcclain> marun: thanks
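[Editor's note: a minimal sketch of the multi-worker pattern discussed above, where a workers=0 setting falls back to a single process so a developer can attach a debugger. All names here (serve, api_workers, handle_request) are illustrative, not Neutron's actual code.]

```python
import os

def handle_request(payload):
    """Stand-in for the real WSGI request handler."""
    return payload.upper()

def serve(api_workers, run_once):
    """Run the handler in the parent (api_workers=0) or in forked children."""
    if api_workers == 0:
        # Single-threaded mode: everything happens in this process,
        # so breakpoints and pdb work without worrying about forks.
        return run_once()
    pids = []
    for _ in range(api_workers):
        pid = os.fork()
        if pid == 0:            # child: do the work, then exit
            run_once()
            os._exit(0)
        pids.append(pid)
    for pid in pids:            # parent: wait for every child
        os.waitpid(pid, 0)
    return None

# Debugging mode: no forking, the result comes back directly.
result = serve(0, lambda: handle_request("ping"))
```

The design point raised by marun and markmcclain is the `api_workers == 0` branch: whatever multi-worker patch merges, keeping a non-forking path makes single-threaded debugging possible.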
21:21:08 <markmcclain> #topic docs
21:21:09 <nati_ueno> markmcclain: Ok, so could you -2 https://review.openstack.org/#/c/42737/ after you decide which is the correct one?
21:21:24 <markmcclain> nati_ueno: done
21:21:32 <nati_ueno> markmcclain: Thanks
21:21:39 <emagana> Paul M. committed the VPNaaS API: https://review.openstack.org/#/c/41702/
21:21:52 <emagana> reviews are welcome!
21:21:56 <emagana> and needed!
21:22:00 <nati_ueno> yes sir!
21:22:05 <annegentle> oo! With samples, nice.
21:22:06 <markmcclain> pcm_: thanks for writing the docs
21:22:30 <annegentle> o/ wanted to ask a Q
21:22:52 <markmcclain> annegentle: hi.. go ahead
21:23:13 <annegentle> The doc team is noodling on two things -
21:23:40 <annegentle> one is, assigning a "doc lead" per team. emagana has been doing a great job at this, just wanted to let you all know we want to replicate across teams
21:24:08 <annegentle> basically the doc lead will help triage doc bugs and help get info. emagana is good at this :)
21:24:15 <emagana> annegentle: Thanks!!
21:24:28 <annegentle> two is, we're probably going to "only" release the Install Guide and a Config Ref for Havana
21:24:50 <annegentle> this means a couple of things - the Network Admin Guide may have some sections deleted/moved
21:25:11 <markmcclain> +1 I think having a doc lead is a needed role
21:25:42 <markmcclain> ok.. is there a preliminary list of sections to be cut, or is it too early?
21:25:45 <annegentle> It's possible the Network Admin Guide will fold into the Config Ref. I was able to do that for the Block Storage Admin Guide.
21:25:48 <annegentle> Not so sure it'll work on the Network Admin Guide, but, it is a ton of config
21:26:27 <annegentle> markmcclain: for example, move install info into the install guide with one patch
21:27:04 <markmcclain> ok.. agreed the network guide is definitely a lot of config
21:27:08 <annegentle> markmcclain: your Admin guide is one I hate to do too much to, but it is mostly config
21:27:39 <annegentle> markmcclain: all of this is to try to offer "OpenStack unified docs" rather than project-by-project... it's still under discussion
21:27:50 <annegentle> markmcclain: but I want to bring it up at individual project meetings to get input and feedback
21:27:58 <annegentle> make sure we're not missing something obvious, and so on
21:28:11 <annegentle> since neutron is core, it's pretty easy to say what "has" to be documented
21:28:22 <annegentle> Integrated, it's harder to say.
21:28:38 <annegentle> So just wanted to share and let you all know what we're noodling on ahead of release
21:28:50 <annegentle> And, I want the teams to be able to know where to put docs
21:29:04 <annegentle> questions?
21:29:41 <nati_ueno> annegentle: How about API guide?
21:29:51 <annegentle> nati_ueno: the API guide remains the same
21:29:57 <nati_ueno> annegentle: gotcha
21:30:06 <markmcclain> thanks for sharing... I think unified docs make for a better user experience so I'm glad the discussion is occurring
21:30:10 <annegentle> this discussion is all about "release" docs
21:30:26 <annegentle> API docs are ongoing
21:30:37 <annegentle> markmcclain: thanks for having a docs topic in your meetings!
21:30:59 * annegentle is done now
21:31:12 <salv-orlando> annegentle: I tend to agree with a unified admin guide for all openstack projects. My only latent concern is that it will become a sort of encyclopaedia (i.e.: huge set of books)
21:31:25 <salv-orlando> a bit like windows NT administrator guides
21:31:49 <annegentle> salv-orlando: oh believe me I share that concern :) it's intimidating as can be.
21:32:18 <annegentle> salv-orlando: I think the Operations Guide struck a balance, but it doesn't have SDN to speak of
21:32:51 <annegentle> it's totally possible that Networking needs its own guide
21:32:55 <salv-orlando> annegentle: SDN? what is SDN? ;) we understand only neutrons here
21:33:00 <annegentle> salv-orlando: hee hee
21:33:08 * mestery chuckles.
21:33:26 <annegentle> what's your sense of who the admins are? Operators? Network admins? Sys admins?
21:33:47 <annegentle> we have done some user analysis with the user committee's data but would like your sense of it too
21:34:00 <salv-orlando> In my experience, we get all of them - but I do not have a lot of data points.
21:34:07 <annegentle> salv-orlando: yep
21:34:27 <markmcclain> my experience has been a random mix of folks too.. really depends on the size of the deployment
21:35:19 <markmcclain> Any other doc items?
21:35:23 <annegentle> markmcclain: yeah at a meetup in Austin 2 months ago I ran into a few network admins but they are few and far between. And they don't like SDN that much (security concerns mostly it seemed, like "you're gonna let users do what with my network?")
21:35:32 <marun> hah
21:35:32 <salv-orlando> elect doc lead for Neutron?
21:35:40 <emagana> new bug filed for Mellanox plugin: https://bugs.launchpad.net/openstack-manuals/+bug/1214117
21:35:42 <uvirtbot> Launchpad bug 1214117 in openstack-manuals "Include Mellanox plugin in networking guide" [Undecided,New]
21:35:48 <annegentle> salv-orlando: emagana volunteered! I don't wanna add more governance :)
21:36:01 <salv-orlando> so emagana is
21:36:30 <emagana> :-0
21:36:41 <annegentle> salv-orlando: though I consider you the point for API docs
21:37:11 <nati_ueno> +1 for emagana :)
21:37:29 <salv-orlando> annegentle: Yup, I am still the API docs guy.
21:37:59 <markmcclain> speaking of the API
21:38:04 <markmcclain> #topic API
21:38:07 <salv-orlando> I haven't yet been moaning about API docs for the new features, because the guys have been really busy with the admin docs
21:38:20 <salv-orlando> but I will begin moaning, or barking, soon.
21:38:49 <salv-orlando> Codewise, everything is quiet on the API side.
21:38:58 <markmcclain> except for the commit discussion
21:39:01 <salv-orlando> However, there is this thing about commit logic for Fw rules
21:39:09 <salv-orlando> markmcclain: yup
21:39:17 <salv-orlando> I am happy to move the discussion to gerrtit
21:39:28 <salv-orlando> unless we feel we want to chat about it here
21:39:35 <salv-orlando> in which case I suggest the open discussion topic
21:39:43 <salv-orlando> as we're already 42 mins into the meeting
21:39:54 <markmcclain> ok.. let's focus the discuss on the review
21:40:07 <markmcclain> #topic VPNaaS
21:40:34 <nati_ueno> OK, the code is still in review.
21:40:40 <markmcclain> nati_ueno: seems to be a bit of progress this week on whether to support openswan or strongswan
21:41:22 <markmcclain> I understand the reason for switching
21:41:23 <nati_ueno> markmcclain: I'm going to support both, because RHEL only supports openswan
21:41:24 <nati_ueno> markmcclain: Ah no, we don't switch; I'm adding both
21:41:24 <markmcclain> but wondering whether the burden will be higher than if we just picked one
21:41:49 <markmcclain> which would have to be openswan, unless strongswan could be included in RDO, correct?
21:41:53 <nati_ueno> markmcclain: it should be 200-300 line review burden https://review.openstack.org/#/c/42264/
21:42:08 <markmcclain> ok
21:42:24 <nati_ueno> markmcclain: on the other hand, many guys want to use StrongSwan,
21:42:39 <nati_ueno> markmcclain: so 200-300 lines of code is good for this
21:43:00 <nati_ueno> markmcclain: also, those who don't like mounting can use the openswan version
21:43:03 <marun> nati_ueno: too much free time :P
21:43:23 <nati_ueno> marun: free time?
21:43:27 <markmcclain> right.. the burden will be more on docs and support, as there will be differences
21:43:28 <salv-orlando> nati_ueno: is that patch for strong swan or open swan?
21:43:37 <marun> nati_ueno: if resources are limited, pick one and get it right before supporting two
21:43:43 <nati_ueno> salv-orlando: 42264 is for openswan
21:44:21 <salv-orlando> cool, the commit message confused me a bit (first line says openswan and the 2nd says 'will' to support also openswan)
21:44:35 <nati_ueno> salv-orlando: Ah sorry the patch is still WIP
21:44:41 <salv-orlando> so you will have a single driver for both openswan and strong swan?
21:44:44 <salv-orlando> or two drivers?
21:44:50 <nati_ueno> salv-orlando: Two drivers
21:44:56 <nati_ueno> OK so let's choose
21:45:06 <salv-orlando> ok - so chances of strong swan support breaking open swan are negligible?
21:45:09 <nati_ueno> A) 1 driver with StrongSwan
21:45:13 <nati_ueno> B) 2 drivers
21:45:18 <nati_ueno> C) 1 driver with OpenSwan
21:45:31 <nati_ueno> salv-orlando: I believe so
21:45:50 <markmcclain> let's take a look at the reviews and can discuss this more over the ML
21:45:51 <nati_ueno> Anyway, in the future we will have multiple drivers, so this discussion is for H3
21:45:51 <salv-orlando> I will vote for the safest solution, i.e.: the one in which I know openswan driver works pretty much everywhere.
21:46:24 <nati_ueno> markmcclain: gotcha
21:46:41 <nati_ueno> markmcclain: please note the openswan support is in WIP
21:46:48 <nati_ueno> That's all from me
21:46:53 <markmcclain> nati_ueno: noted… thanks for the update
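[Editor's note: a rough sketch of option B ("2 drivers") from the vote above — a common device-driver interface with one subclass per IPsec backend, chosen by a config value. Class and option names are hypothetical, not the actual Neutron VPNaaS code; it only illustrates why two drivers need not risk breaking each other.]

```python
import abc

class IPsecDriver(abc.ABC):
    """Common interface both backends implement."""

    @abc.abstractmethod
    def config_template(self):
        """Return the config template this backend consumes."""

class OpenSwanDriver(IPsecDriver):
    def config_template(self):
        return "ipsec.conf.openswan"

class StrongSwanDriver(IPsecDriver):
    def config_template(self):
        return "ipsec.conf.strongswan"

# Driver lookup keyed by a hypothetical option, e.g. vpn_device_driver=openswan
DRIVERS = {"openswan": OpenSwanDriver, "strongswan": StrongSwanDriver}

def load_driver(name):
    return DRIVERS[name]()
```

Because each backend is isolated behind the shared interface, adding strongswan support touches only its own subclass, which matches salv-orlando's point that the chance of one driver breaking the other should be negligible.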
21:46:55 <markmcclain> #topic Nova
21:46:59 <markmcclain> garyk: hi
21:48:09 <markmcclain> #undo
21:48:10 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x29c1990>
21:48:17 <markmcclain> #topic FWaaS/SG
21:48:39 <SumitNaiksatam> hi
21:48:44 <SumitNaiksatam> for FWaaS - Driver patch was merged, nice work RajeshMohan, and thanks to the reviewers
21:48:54 <SumitNaiksatam> Devstack patch (rchunduru) and Horizon patch (KC) are awaiting (hopefully) a final push from the reviewers
21:49:08 <SumitNaiksatam> and of course - the "hot" topic as voted by salv-orlando - commit API :-)
21:49:29 <SumitNaiksatam> I removed the WIP on the commit operation patch over the weekend, it's ready for review
21:49:38 <SumitNaiksatam> RajeshMohan has started CLI work for this
21:49:52 <SumitNaiksatam> thoughts/questions/comments?
21:50:00 <SumitNaiksatam> (my update is done)
21:50:24 <salv-orlando> I am reviewing the code, even if I will now suspend and resume when i wake up. I will comment on gerrit
21:50:34 <SumitNaiksatam> salv-orlando: thanks
21:50:52 <markmcclain> gerrit is a good place to focus the discussion
21:51:17 <markmcclain> #topic LBaaS
21:51:32 <markmcclain> enikanorov: hi
21:51:37 <enikanorov> hi
21:51:56 <enikanorov> two major patches are pending
21:52:21 <enikanorov> integration with service type framework - I think we need some discussion about noop driver
21:52:56 <enikanorov> nati_ueno has a followup patch that applies the same approach to vpnaas
21:53:37 <nati_ueno> SumitNaiksatam: FWaaS is going support this?
21:53:54 <SumitNaiksatam> nati_ueno: not in H3, there isn't enough time
21:54:06 <SumitNaiksatam> it wasn't planned to begin with
21:54:10 <nati_ueno> SumitNaiksatam: ok
21:54:16 <salv-orlando> the initial issue, to give more context, is 'what happens when a provider is removed'?
21:54:49 <enikanorov> there are a couple of issues, I believe. but right, that is where it started
21:55:02 <salv-orlando> it seems, from previous discussions, that this a sort of 'natural' use case, and neutron should take steps for 'salvaging' orphaned resources
21:55:27 <salv-orlando> now, which steps should be taken, and how orphaned resources should be treated is what is being discussed
21:55:43 <enikanorov> i think it's a part of the question
21:55:46 <salv-orlando> It seems there's wide agreement on this approach among the communities working on VPN and LB
21:55:56 <salv-orlando> I don't know what people working on FW think of it.
21:56:03 <enikanorov> more generic is whether to allow update on 'provider' attribute
21:56:46 <nati_ueno> IMO, we need provide way to remove a provider for admin
21:57:17 <enikanorov> i don't see why it should be restricted to admins, actually
21:57:49 <salv-orlando> Exactly what problem are you aiming at tackling for the Havana release?
21:57:51 <markmcclain> I agree with enikanorov that we should not restrict a tenant from switching; for simplicity, the answer for Havana might be no, or it will be something outside the scope we'll support
21:58:00 <enikanorov> if we ever allow changing providers for the resource, we must support removing provider from resource as well
21:58:10 <enikanorov> that would be logically natural
21:58:52 <enikanorov> and it also will cover the case of 'removed' provider
21:59:20 <markmcclain> agreed.. honestly this feels like a good summit session instead of something to cram in at the last minute
21:59:47 <salv-orlando> My option would be to just deal, if we want, with the configuration issue arising from removing a provider which is used by active resources
21:59:59 <nati_ueno> My comment is for H3, so "at least admin can remove provider"
21:59:59 <enikanorov> ok. another phrasing could be 'creating pure logical resources'
22:00:13 <salv-orlando> and as markmcclain suggests, discuss provider association updates and unbound resources at the summit
22:00:32 <markmcclain> We're running short on time, so we can carry this over to a ML discussion
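[Editor's note: a hedged sketch of the "salvaging orphaned resources" idea above — when a service provider is removed from configuration, resources bound to it either become unbound (pure logical resources, as enikanorov phrases it) or are rebound to a fallback. All names are illustrative; this is not the actual service type framework code.]

```python
def remove_provider(resources, provider, fallback=None):
    """Unbind (or rebind to `fallback`) every resource using `provider`.

    Returns the ids of the resources that were orphaned, so an admin
    can decide what to do with them.
    """
    orphaned = []
    for res in resources:
        if res["provider"] == provider:
            res["provider"] = fallback   # None => pure logical resource
            orphaned.append(res["id"])
    return orphaned

# Example: an LBaaS-style pool list where the 'haproxy' provider goes away.
pools = [{"id": "p1", "provider": "haproxy"},
         {"id": "p2", "provider": "noop"}]
orphans = remove_provider(pools, "haproxy")
```

Whether this step is admin-only, whether tenants may trigger it by updating the 'provider' attribute, and how unbound resources are later re-adopted are exactly the open questions deferred to the ML and the summit.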
22:00:50 <markmcclain> #topic ML2
22:00:50 <enikanorov> ok, sure
22:00:50 <salv-orlando> enikanorov: I think you'll find people arguing all neutron resources are purely logical, but let's move on
22:00:50 <markmcclain> mestery:  want to update?
22:00:58 <mestery> markmcclain: Hi
22:01:31 <mestery> So, ML2 is coming along nicely. We're tracking the 2 final BPs for H3 now (port-binding and multi-segment API).
22:01:41 <mestery> We are striving to get reviews out for those before feature freeze.
22:01:59 <mestery> Also, some MechanismDrivers are close to merging as well, so that's good news.
22:02:09 <markmcclain> good news all around
22:02:10 <mestery> Overall, ML2 is humming along nicely.
22:02:17 <markmcclain> thanks for the update
22:02:27 <mestery> markmcclain: sure, thanks!
22:03:02 <markmcclain> #topic Open Discussion
22:03:21 <geoffarnold> As I mentioned a couple of weeks ago, we’re submitting a (potentially controversial) proposal for managing multivendor multi-instance phys/virt L3-L7 resources. We’re talking about Icehouse, not Havana. We plan to demo a proof-of-concept implementation in Hong Kong and discuss including it in the Icehouse program. I’ve just posted the Blueprint: it’s at https://blueprints.launchpad.net/neutron/+spec/dynamic-netw
22:03:29 <enikanorov> -2 minutes left for open discussion
22:03:33 <enikanorov> -3 even
22:03:52 <nati_ueno> FYI we can see the list of reviews here http://status.openstack.org/reviews/#neutron sorted by priorities
22:04:03 <mestery> geoffarnold: Thanks for the link, will take a peek at this.
22:04:07 <markmcclain> geoffarnold: make sure you submit a design session when it opens
22:04:10 <enikanorov> geoffarnold: broken link?
22:04:14 <mestery> geoffarnold: broken link
22:04:24 <geoffarnold> https://blueprints.launchpad.net/neutron/+spec/dynamic-network-resource-mgmt
22:04:53 <markmcclain> with the conference and summit running concurrently it is hard for many on the core team to see too many conference sessions until the videos are posted
22:04:56 <geoffarnold> Full version here: https://wiki.openstack.org/wiki/File:Dnrm-blueprint-001.pdf
22:05:25 <geoffarnold> I've submitted a conf session; I'll submit a design session too
22:05:44 <markmcclain> Any other open discussion items?
22:05:57 <markmcclain> since enikanorov pointed out we're over time :)
22:06:07 <enikanorov> :)
22:07:17 <markmcclain> I want to thank everyone for their hard work getting the proposals in on time
22:07:49 <markmcclain> Be on the lookout for a few discussions to spring up on the ML or the gerrit reviews we discussed
22:07:53 <markmcclain> #endmeeting