21:03:06 #startmeeting Networking
21:03:07 Meeting started Mon Jul 29 21:03:06 2013 UTC and is due to finish in 60 minutes. The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:11 The meeting name has been set to 'networking'
21:03:16 hi
21:03:27 Hello
21:03:46 #link https://wiki.openstack.org/wiki/Network/Meetings
21:03:52 #topic Announcements
21:04:36 Congrats to our two new core team members: armax and mestery
21:05:00 markmcclain: have you told them about the ritual?
21:05:05 ?
21:05:06 congratulations mestery and armax!
21:05:11 :)
21:05:12 they have to buy a drink for all the other cores at the next summit
21:05:15 one round each
21:05:19 congrats!
21:05:23 salv-orlando: No mention of a ritual, is this something to happen in Hong Kong?
21:05:34 mestery: ^
21:05:45 ah, okay for a second I feared the worst :P
21:05:50 salv-orlando: I will be happy to do that in Hong Kong. :)
21:05:55 armax: that's the part we can talk about in public :p
21:06:08 oh my
21:06:28 ah the lies I would tell for some free booze. Don't you find it tastes a lot better when it's free?
21:06:35 :)
21:07:09 both guys will be able to help with our review bandwidth, especially with the large amount of work we have to accomplish in H3
21:07:34 I hope I won't let you guys down :)
21:07:38 they have to buy all drinks at the next summit. congratulations
21:07:59 speaking of work.. we're getting close to our target number of medium or high blueprints for H3
21:08:26 #info armax and mestery promoted to core
21:08:52 now that it is official in the logs we can continue :)
21:08:55 #link https://launchpad.net/neutron/+milestone/havana-3
21:09:52 when grooming the blueprint list I've become a little concerned with several of the blueprints that have not been started
21:11:40 It looks like some BPs marked 'not started' have reviews.. (QoS etc)
21:12:09 which one?
21:12:58 markmcclain: ah sorry it is my mistake
21:13:39 no worries.. there are several in that area and I thought I had missed one
21:14:08 #topic Bugs
21:14:16 https://bugs.launchpad.net/neutron/+bug/1194026
21:14:18 Launchpad bug 1194026 in neutron "check_public_network_connectivity fails with timeout" [Critical,In progress]
21:14:34 This bug is still open. nati_uen_ want to update?
21:15:05 OK. This week I found the ping timeout is just 20 sec.
21:15:15 so I fixed tempest, but it looks like it had no effect
21:15:27 I'm working on this https://review.openstack.org/#/c/37576/
21:15:49 the latest patch shows no failures in some gate runs, so I'll keep investigating
21:15:59 ok.. thanks for working through it
21:16:43 I can't find a way to reproduce it in my env this time.. This is a very nasty bug
21:17:11 Yes it definitely is… I've tried to replicate it a few ways too
21:17:30 Any other bugs the team should be tracking?
21:18:42 #topic Docs
21:18:54 still working on: https://bugs.launchpad.net/openstack-manuals/+bug/1202331
21:18:55 Launchpad bug 1202331 in openstack-manuals "renaming to neutron in non-networking docs" [Medium,In progress]
21:19:29 will be completed this week! I was busy updating a plugin, you can guess which one!
21:19:42 the linuxbridge :p
21:19:59 markmcclain: OVS ;-)
21:19:59 seriously, thanks for updating the other manuals
21:20:38 Any questions for docs?
21:20:43 on VPNaaS, will start ASAP! BP is created and assigned to me
21:20:56 cool
21:20:56 on FWaaS, Sumit can provide an update
21:21:11 emagana: please assign it to me :)
21:21:31 I've started on https://bugs.launchpad.net/openstack-api-site/+bug/1203865
21:21:32 Launchpad bug 1203865 in openstack-api-site "Neutron VPNaaS API Docs" [Undecided,New]
21:21:34 nati_uen_: thanks!
21:21:55 Waited till the quantum->neutron change occurred.
21:22:32 Good idea
21:22:35 #topic API
21:22:41 salv-orlando: hi
21:22:54 hello people
21:23:11 on the API side, there is no major news worth mentioning
21:23:27 i'd like to ask
21:23:30 I think there is not a lot left to sort out for FW and VPN
21:23:38 perhaps just about port ranges on FW
21:23:46 but I've not checked gerrit over the weekend
21:23:55 the 'api core for services' blueprint has high priority for H3
21:23:57 anyway, nothing that cannot be discussed directly on gerrit
21:24:05 enikanorov: yep
21:24:13 which 'service' extension is worth moving to core?
21:24:20 routers, i guess?
21:24:31 I think that's related to the L2/L3 split
21:25:03 IMHO, doing the blueprint you mentioned makes sense only if we manage to make progress on the L2/L3 split
21:25:23 I don't think, at this stage, we are in a position to consider anything else core
21:25:26 I see. then I don't see why it should have high priority
21:25:42 as i didn't see much progress with the l2/l3 split
21:26:29 I am happy to keep the same priority level on the blueprint for splitting the layer-2 and layer-3 plugins
21:26:48 and talking about that blueprint… do we have a real use case for it for Havana?
21:27:19 salv-orlando: What was the original use case for that BP, can you refresh my memory?
21:27:38 not for Havana, I guess.
21:27:56 allowing for implementing plugins which provide L3 functionality over another L2 plugin
21:28:00 IMO, the l2/l3 split should be high priority. Henry has some reviews
21:28:23 i remember that was a 5k-line patch
21:28:28 for instance you might use ML2 for L2, and then 'use-my-router-and-your-network-will-be-faster-than-ever' for L3
21:28:35 Bob has rebased this a few times; the patch is quite large. If people are interested, I'll see if we can get this pushed out again soon.
21:28:38 after getting a first set of -1s it was abandoned
21:28:51 I'm also happy to review it
21:29:08 To cut a long story short, the rules for all the other patches will apply to this one too.
21:29:10 OK, I will talk to Bob and get him to push it out, though he may be on vacation for the rest of this week.
21:29:31 We can downgrade it to low if we deem there's not enough interest around it.
21:29:48 salv-orlando: It appears there is plenty of interest.
21:29:49 At that stage, we might also downgrade the blueprint assigned to enikanorov
21:29:59 ok
21:30:18 I'll put both at medium for now
21:30:19 yes, that seems reasonable. but i would even suggest postponing it to Icehouse
21:30:47 (I'm talking about https://blueprints.launchpad.net/neutron/+spec/api-core-for-services now)
21:30:52 salv-orlando: thoughts on deferring?
21:31:18 I have no reason for making it a priority for Havana (the layer-2/layer-3 split).
21:31:44 And even api-core-for-services… I don't seem to have any reason for making it happen in Havana
21:32:06 Let's start by moving api-core-for-services to low.
21:32:14 Then we'll sync up with Bob on the other blueprint
21:32:21 ok
21:32:26 works for me too
21:32:58 It looks like some router services are proposed, so IMO the layer-2/layer-3 split should come earlier
21:33:52 nati
21:34:22 nati_uen_: I am aware of a draft blueprint for an L3 plugin, and of another 'provider router' blueprint still under discussion
21:34:50 but not of anything else - with priority medium or higher - which will require this change for havana
21:35:00 salv-orlando: I got it
21:35:14 As long as the router is implemented as part of the core plugin, we don't need to split L2/L3.
21:35:24 amotoki: correct.
21:35:51 The split allows for something like the Embrane work to exist without an L2 plugin, though. Just FYI.
21:36:08 mestery: that is the draft I mentioned
21:36:09 Heads-up: we'll be proposing a blueprint to address multivendor router support in the next few weeks
21:36:24 GeoffArnold: For Havana?
21:36:31 GeoffArnold: you should propose it kind of… now
21:36:32 No, icehouse
21:36:35 ah ok!
21:36:35 Got it.
21:36:43 Ok.. then we're fine on timing
21:36:48 GeoffArnold: +1
21:36:55 Early enough to prep for a useful discussion in Hong Kong
21:37:23 sure, it might be good if you check this blueprint from Bob Melander then, and see how it fits with your blueprint
21:37:39 We're looking seriously at multivendor configurations and also virtual appliance provisioning (they're interrelated)
21:37:51 wilco
21:38:13 #link https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routing-plugin L3 Routing BP
21:38:55 mestery: thanks for the link
21:39:15 Yeah, I looked at that. The challenge is how to do policy-based resource allocation across resources from multiple vendors
21:39:54 will be interested to read the BP
21:40:14 Agreed, and to see how it ties into Bob's work.
21:40:26 to make sure we stay on time.. I'll follow up with Bob, Salvatore
21:40:34 Any other API issues to discuss?
21:40:51 markmcclain: not from me.
21:41:05 nope.
21:41:15 Thanks for the update
21:41:20 I would like to discuss the default quota API. I will send a mail to the dev ML later.
21:42:02 please go ahead.
21:42:11 amotoki: sounds good
21:42:18 #topic VPNaaS
21:42:37 nati_uen_: Looks like it's getting real close and most of the iterations have been small changes
21:43:15 markmcclain: I think so too.
21:43:49 salv-orlando: Is the pending policy OK for you? Don't allow updating a resource in PENDING; allow delete anytime
21:44:05 lgtm
21:44:13 salv-orlando: Thanks.
21:44:23 amotoki: do you have any more concerns?
21:44:35 nati_uen_: nothing except PENDING above
21:44:54 nati_uen_: i agree with your policy
21:45:06 amotoki: the latest patch follows the policy
21:45:17 amotoki: so is the pending issue solved?
21:45:24 nati_uen_: will check
21:45:32 amotoki: thanks
21:45:48 markmcclain: amotoki: do you guys have concerns about the driver patch?
21:46:24 need to take a last look and test a bit more
21:46:47 I don't see more concerns so far. will test it after the meeting.
21:46:50 markmcclain: Thanks.
https://wiki.openstack.org/wiki/Quantum/VPNaaS/HowToInstall is up-to-date, so please use this
21:46:54 amotoki: Thanks
21:47:01 nati_uen_: will do
21:47:05 #topic Nova
21:47:08 One piece of news: heat support is in review :) That's all from us
21:47:20 nati_uen_: great!
21:48:04 markmcclain: Thanks!
21:48:16 I'm sure most saw, but nova-net deprecation has been delayed a release
21:48:30 So the earliest it will disappear is J
21:48:41 do we still want to try and have quantum set as the default for devstack?
21:48:58 garyk: yes
21:48:58 I think we should try for that, yes.
21:49:00 sorry, neutron - bad habits are hard to break
21:49:17 The pushback has been passing tempest tests unmodified
21:50:45 Anything else for Nova integration?
21:51:28 #topic FWaaS
21:52:05 SumitNaiksatam: FW is in the same situation as VPN.. super close to merging with only minor fixups made in the last 2-3 days
21:52:16 markmcclain: hi
21:52:30 yeah, the patch has been stable for a few days now
21:52:47 i hope 50 is a lucky number :-)
21:52:57 me too :)
21:53:12 i don't seem to have any pending items
21:53:53 sorry - internet went down and came back up again. did i miss anything?
21:53:55 last i checked there were 6 cores who commented on the reviews
21:54:03 is everyone happy? :-)
21:54:27 talking about the API patch: https://review.openstack.org/#/c/29004/
21:54:39 I think so.. it is really testing
21:54:53 If nobody complains about the dichotomy with port ranges wrt security groups I am fine too
21:55:08 salv-orlando: thanks
21:56:05 the agent and the driver patches are also done
21:56:09 salv-orlando: you're talking n:n in one column vs range_min and range_max as separate attributes (columns)?
21:56:16 but really waiting for the API patch to get through
21:56:43 markmcclain: yes
21:57:13 yeah.. I've gone back and forth on it
21:57:27 SumitNaiksatam replied that after a careful review they asserted that this is widely accepted as the standard way of configuring throughout the industry
21:57:39 markmcclain, salv-orlando: this seemed more natural coming from the firewalls/iptables world
21:58:17 i am not religious about this, but talking to a lot of people, they seemed to find this more usable and easier
21:58:26 markmcclain: I think both solutions are totally valid. It's the fact that they are different that makes me unhappy
21:58:59 understand.. that's my concern
21:59:24 we can talk offline since we're running out of time
21:59:41 If we have a large consensus that the range in a single attribute is the way to go, I'd add it to security groups and deprecate the range (keeping it for bw compatibility)
21:59:43 ok
21:59:53 -1 on a single attribute
22:00:09 we're storing this in a db
22:00:17 storing it as a blob is a dumb idea
22:00:33 marun: dumb???
22:00:34 maybe it works in fw land, where indexed access is easy
22:00:39 * salv-orlando please don't tell me I've opened a can of worms
22:00:42 yes, dumb in the context of db storage
22:01:04 it's not smart to concatenate columns unless there is a performance reason to do so
22:01:09 i'm not saying this has to affect the api
22:01:16 but storage should be separate
22:01:46 marun: concatenation of columns? not sure if you have looked at the patch
22:02:01 i heard 'one column vs range_min and range_max'
22:02:01 i think we are digressing here talking about performance
22:02:04 yeah, we should discuss it on the review
22:02:19 fair enough
22:02:23 yeah.. we can chat after in #openstack-neutron
22:02:37 we're at an hour and have several folks staying up late
22:02:41 markmcclain: sounds good, let's sort this out and move on
22:02:51 #topic LBaaS
22:02:56 enikanorov: anything new?
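As an aside on the port-range representation debated in the FWaaS discussion above: the two options are a single API attribute holding a 'min:max' string versus the security-group style of separate range_min/range_max integer columns. A minimal, hypothetical sketch of the round trip between the two forms (the helper names are illustrative, not from the actual patch):

```python
def split_port_range(port_range):
    """Parse an API-style 'min:max' (or single 'port') string into the
    (range_min, range_max) pair a two-column schema would store."""
    if ':' in port_range:
        lo, hi = port_range.split(':', 1)
    else:
        lo = hi = port_range  # a single port is a degenerate range
    lo, hi = int(lo), int(hi)
    if not (0 < lo <= hi <= 65535):
        raise ValueError("invalid port range: %s" % port_range)
    return lo, hi


def join_port_range(range_min, range_max):
    """Render two stored columns back into the single-attribute form."""
    if range_min == range_max:
        return str(range_min)
    return '%d:%d' % (range_min, range_max)
```

marun's objection is about storage, not the API: with two integer columns the database can index and compare ports directly, while a translation layer like the one above can still present whichever API form wins the consensus.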
22:02:59 yep
22:03:09 i'd like to have this bp in for h-3: https://blueprints.launchpad.net/neutron/+spec/lbaas-integration-with-service-types
22:03:22 i've also posted a question to the dev ML
22:03:31 I set the milestone to H3
22:03:32 regarding a possible issue in the API
22:03:36 is it not showing that way for you?
22:03:40 markmcclain: but it's not approved yet
22:03:59 sorry.. too many boxes to check on different screens.. I'll fix that
22:04:05 thanks!
22:04:35 ok.. I'll look at the ML for the API issue
22:04:35 besides that, SumitNaiksatam, nati_uen_, salv-orlando, please share your thoughts on the question in the ML
22:04:49 thanks!
22:04:53 that's all from my side
22:05:00 Thanks enikanorov
22:05:02 enikanorov: will do
22:05:06 #topic Stable Branch
22:05:13 enikanorov: sure
22:05:35 garyk sent me a text that his connection dropped, but he wanted to let everyone know that the next stable release will be Aug 8th
22:05:57 #topic Horizon
22:06:06 amotoki: Any important items to highlight?
22:06:25 FWaaS beta is available
22:06:47 markmcclain: For stable, I'd like to see if we can get this bug in there: https://bugs.launchpad.net/neutron/+bug/1204125
22:06:48 Launchpad bug 1204125 in neutron "Neutron DHCP agent generates invalid hostnames for new version of dnsmasq" [Medium,Fix committed]
22:06:59 And the default quota read API discussion was raised in a horizon bug. I will send a mail later.
22:07:03 markmcclain: I marked it as backport potential.
22:07:03 that's all.
22:07:18 mestery: yeah.. that's on the list
22:07:32 markmcclain: Thanks (sorry for being late here, was digging the link out)
22:08:08 amotoki: cool.. I'll have to try the FWaaS UI
22:08:17 Any other questions on Horizon?
22:09:04 #topic Open Discussion
22:09:22 Anything we didn't cover that needs to be talked about?
22:09:24 FYI: I have a new revision of the ML2 devstack patch out for review here: https://bugs.launchpad.net/devstack/+bug/1200767
22:09:25 Launchpad bug 1200767 in devstack "Add support for setting extra network options for ML2 plugin" [Undecided,In progress]
22:09:28 markmcclain: When will we discuss the service agents?
22:09:36 Anyone who wants to try it out, please do and provide feedback.
22:09:48 nati_uen_: thanks for the reminder
22:11:19 IRC chat about service agents Wednesday 1530 UTC in #openstack-meeting-alt
22:11:37 markmcclain: sure. Thanks :)
22:12:01 we will discuss the high-level changes needed to the l3 agent now that VPN and FW will be available
22:13:16 mestery: thanks for sharing the devstack link
22:13:24 Anything else for this week?
22:13:50 https://review.openstack.org/#/c/30447/
22:13:59 yeah, I would like to rewrite Neutron in javascript
22:14:03 jk :)
22:14:14 salv-orlando: I want LISP first
22:14:27 salv-orlando: Why not Ruby?
22:14:34 I was about to say Eiffel, but then people might have called me reactionary
22:14:35 erlang erlang erlang!
22:14:50 dkehn: I'll take a look and test concurrently with the API changes
22:15:01 markmcclain, thx
22:15:35 dkehn: sorry for the late response. I'll look at it.
22:15:43 sweet
22:15:45 I cannot run the neutron unit test suite twice in a row. Anyone else seeing this?
22:15:58 HenryG: No
22:16:02 How are you invoking it?
22:16:07 amotoki, it has all the suggestions that you brought up
22:16:09 tox -e py27
22:16:28 HenryG: I occasionally see random failures. Try a third run. :)
22:16:37 neutron.tests.unit.ml2.test_agent_scheduler.Ml2AgentSchedulerTestCase.test_network_add_to_dhcp_agent always fails in my env, but it is ok on gating
22:16:39 not I
22:17:01 HenryG: testr will actually reorder tests based on previous runtimes to try and maximize throughput
22:17:04 dkehn: Thanks for addressing them. I will check both the API and CLI sides.
22:17:08 nati_uen_: the same failures in my env
22:17:13 HenryG: it's possible that reordering results in a thing that fails
22:17:28 HenryG: nati_uen_ it is probably worth debugging the failures as chances are they are real bugs
22:17:29 enikanorov: I'm glad to hear I'm not the only one :)
22:17:29 amotoki, please read the review notes; there was a side effect on one of them
22:17:43 clarkb: agreed
22:17:53 HenryG: nati_uen_ `testr run --analyze-isolation` after a failed run is very useful for finding test interactions that cause failures
22:17:58 and they're not failing if run alone
22:18:13 clarkb: My assumption is the function uses a waiting timer. If the test exceeds 1 sec, it will fail
22:18:16 I've got to drop off folks, have a good night!
22:18:22 clarkb: I agree, we should debug it
22:18:29 clarkb: Thanks
22:18:31 mestery: bye
22:18:41 mestery: bye!
22:19:05 enikanorov: that usually indicates there is a second test (or more) that is interfering with your test
22:19:11 Thanks clarkb - I will try your suggestions when I get a chance. Will ping you if I have further questions.
22:19:16 enikanorov: by running concurrently or before the failing test
22:19:41 clarkb: i understand. the failures are pretty stable. 100% i'd say
22:20:18 It is very likely that we have a few tests that unintentionally don't clean up or change read-only data structures
22:20:59 Everyone have a good evening/morning/afternoon and talk to everyone on the mailing list and IRC
22:21:03 #endmeeting