21:03:28 #startmeeting Networking
21:03:29 Meeting started Mon Jun 3 21:03:28 2013 UTC. The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:32 The meeting name has been set to 'networking'
21:03:33 armax: you're closest to arosen, can you kick him :)
21:03:45 haha
21:03:50 hang on :)
21:04:13 #link https://wiki.openstack.org/wiki/Network/Meetings
21:04:15 done
21:04:15 what a long leg ...
21:04:17 Hi
21:04:17 :)
21:04:22 * salv-orlando if my leg was 6,000 miles long I could have kicked him
21:04:26 hi
21:04:34 hi
21:05:05 any changes to the agenda before we get going?
21:05:27 #topic Announcements
21:05:40 #info H1 was released last week
21:06:00 hi, all
21:06:05 H1 included the initial commit of the ML2 work
21:06:45 if you have some spare cycles, take a moment to test the builds and report back any findings
21:07:02 H2 will be released July 18th
21:07:12 hi all
21:07:18 #link https://launchpad.net/quantum/+milestone/havana-2
21:07:29 we have 50 blueprints targeted
21:07:42 which is way over our normal velocity
21:07:45 * salv-orlando is having a laugh
21:08:33 I'd like to ask that the subteam leads take a look through the H2 blueprint list and refine the items that need to be in H2
21:08:50 also we have a glut of Medium blueprints
21:09:56 With that, let's run through the subteam reports
21:09:58 #topic API
21:10:04 hello
21:10:16 nothing major to report other than that we've merged a few patches.
21:10:49 I will refine blueprints targeting H-2 as suggested by markmcclain
21:11:01 and unless I've missed it, we do not have any major bugs
21:11:34 ok.. cool
21:12:12 we need more testing to find some
21:12:54 we've got one bug I've highlighted, but we can talk about it when we talk Nova
21:13:07 Any questions for salv-orlando?
21:13:25 gongysh: yeah, the API lacks coverage, especially when it comes to scale and performance
21:13:39 but that involves several components, not just the API layer.
21:14:15 marun is also working on improving the API test coverage
21:14:43 #topic VPNaaS
21:15:00 nati_ueno: how are things progressing?
21:15:12 markmcclain: we are doing our bit with Tempest also
21:15:13 looks like StrongSwan is back?
21:15:34 mlavalle: good point
21:16:33 nati_ueno: around?
21:17:09 We'll skip to Nova and come back to VPN when he returns
21:17:12 #topic Nova
21:17:15 hi
21:17:17 sorry
21:17:29 May I report it?
21:17:43 nati_ueno: sure
21:17:56 VPN has two important bps for H2
21:17:59 markmcclain: Thanks
21:18:05 [1] CRUD API and DB model for IPsec https://blueprints.launchpad.net/quantum/+spec/vpnaas-python-apis
21:18:14 In review
21:18:14 Quantum Advanced Service Plugin for VPNaaS (Patchset 5) https://review.openstack.org/#/c/29812/
21:18:19 Quantum Client for VPNaaS (Patchset 3) https://review.openstack.org/#/c/29811/
21:18:24 [2] Agent Impl https://blueprints.launchpad.net/quantum/+spec/ipsec-vpn-reference
21:18:35 About to finish investigation. I'm starting to implement
21:18:46 As mark said, it looks like we can use StrongSwan
21:19:10 using the nswrap script by Francois
21:19:16 That's all from me
21:19:58 cool.. thanks for the update
21:20:08 any VPNaaS questions?
21:20:16 MarkAtwood: there is an open review for the host parameter to the plugin
21:21:04 MarkAtwood: i saw Nova
21:21:18 markmcclain: are we talking about nova now?
21:21:24 yes
21:22:06 garyk: you're talking about the extension to notify quantum of the host?
21:22:20 markmcclain: yes. i'll post the review in a sec
21:22:24 ok
21:22:45 markmcclain: https://review.openstack.org/#/c/29767/
21:22:56 this is really holding back the multi-host feature
21:23:34 we need some core review on the nova side
21:24:08 right.. let me work on getting Nova cores to review it
21:24:32 great
21:24:33 markmcclain: thanks
21:25:05 I also wanted to ask who can look into this bug: https://bugs.launchpad.net/quantum/+bug/1160442
21:25:08 Launchpad bug 1160442 in quantum "when boot many vms with quantum, nova sometimes allocates two quantum ports rather than one" [High,Incomplete]
21:25:38 markmcclain: this is on my to-do list for tomorrow.
21:25:57 markmcclain: I was able to reproduce that one
21:25:58 ok.. also looks like arosen has been digging into it too
21:26:14 I've actually seen this occur on our cloud too but haven't had time to track it down :/
21:26:21 markmcclain: supposed to provide more logs to arosen and gary
21:26:43 emagana: ok.. thanks for posting the first set
21:26:50 I was surprised emagana was able to reproduce this with just one HV. I haven't been able to do that.
21:26:58 markmcclain: arosen emagana: i'll work on this tomorrow.
21:26:59 just got stuck with other tasks but I will update the launchpad page
21:27:08 it seems there are already minds on it before I even wanted to jump in.
21:27:28 i had earlier posted the logs in the launchpad answers
21:27:39 difficult to figure out with just the logs
21:27:48 need to put in debug statements
21:28:23 ok.. I just wanted to make sure we've got folks working on it as it's currently rated high
21:28:32 garyk: any update on Nova migration?
21:28:39 markmcclain: sorry, no.
21:29:06 ok
21:29:11 Any other Nova items?
21:29:24 markmcclain: there is a little update which may be worth exploring
21:29:48 sure.. what's the update?
21:30:02 markmcclain: it may be libvirt centric, but there may be a flag in libvirt which can be set to enable a tap device to be moved from a linux bridge to an open vswitch.
21:30:41 markmcclain: i need to investigate a little more. if so then maybe we need to move that support to nova, and later this can be leveraged to enable a migration.
21:30:46 it is still early days
21:31:00 oh cool..
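(Editor's note: garyk does not name the libvirt setting above, so the following is an assumption for illustration only. One plausible candidate is the `<virtualport type='openvswitch'/>` element in the libvirt domain interface XML, which tells libvirt to plug the guest's tap device into an Open vSwitch bridge rather than treating `<source bridge>` as a plain Linux bridge; the bridge name and MAC address below are made-up examples.)

```xml
<!-- Hypothetical guest NIC definition; values are illustrative, not from the meeting. -->
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br-int'/>
  <!-- Without this element, libvirt attaches the tap to a Linux bridge;
       with it, libvirt adds the tap as a port on the OVS bridge instead. -->
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
```

If this is the mechanism garyk meant, redefining the interface with and without `<virtualport>` would be one way to move a running guest's tap between bridge types, which is what would make the linuxbridge-to-OVS migration he describes possible.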
that would help out
21:31:28 #topic Security Groups
21:31:35 arosen: any updates?
21:31:52 no updates from me
21:31:57 alright
21:32:02 #topic FWaaS
21:32:11 SumitNaiksatam: quick update?
21:32:14 hi
21:32:18 more progress in the past week
21:32:36 RajeshMohan has posted a patch for the iptables driver for the ref impl
21:32:48 KC has started work on the client and CLI
21:32:56 cool
21:33:00 i need to do more work on the API patch
21:33:04 ok
21:33:10 both the iptables driver and cli are being documented in the same fwaas google doc as before: https://docs.google.com/document/d/1PJaKvsX2MzMRlLGfR0fBkrMraHYF0flvl0sqyZ704tA
21:33:32 RajeshMohan: u there for an update?
21:33:45 you covered it all - nothing from me
21:34:06 ok, i think there is good info in the google doc
21:34:14 pls check and comment
21:34:40 is the link in the BP desc?
21:34:44 we might have to rework the review patch since it currently is not based off the API patch
21:34:53 gongysh: yeah, it's in the bp
21:35:09 we are actually using the same google doc that we used for the other fwaas bp
21:35:14 ok, I don't want to scan the meeting log to find the doc.
21:35:28 just want to make sure that we find all info in one place
21:35:28 The description for the IPTables BP has the link
21:35:34 SumitNaiksatam: ok
21:35:56 that's the quick update
21:36:04 SumitNaiksatam: can you update the blueprint assignees with the person leading the dev work? right now you're still listed for a few of them
21:36:27 yeah, i will be posting patches for those (plugin and agent)
21:36:41 other bps already assigned
21:36:48 oh.. I see, that was farmed out to others on the team
21:36:51 SumitNaiksatam: gongysh: each BP has the spec link, but there is no list which covers all of FWaaS.... It is better to have a summary page.
21:36:55 we're good then
21:37:19 plugin and agent might be smaller patches so i was thinking of rolling them into the api patch, might make it easier to test
21:38:29 amotoki: a summary page is already present
21:38:36 just finding the link, one sec
21:38:37 yeah it's nicer to be able to test end to end
21:38:43 I think the summary is here:
21:38:43 https://blueprints.launchpad.net/quantum/+spec/quantum-fwaas
21:39:03 https://wiki.openstack.org/wiki/Quantum/FWaaS/HavanaPlan
21:39:15 markmcclain: thanks, that too
21:39:43 ok, any other FWaaS questions?
21:39:44 SumitNaiksatam: thanks, good summary!
21:40:22 SumitNaiksatam: thanks for the update
21:40:41 #topic LBaaS
21:40:49 LBaaS is a bit behind schedule
21:41:02 to get back on track the team is going to focus on getting this blueprint done
21:41:02 https://blueprints.launchpad.net/quantum/+spec/multi-vendor-support-for-lbaas-step0
21:41:26 once this step is complete, others who want to write drivers will have a stable base to work from
21:41:53 This is currently in review: https://review.openstack.org/#/c/28245/
21:42:22 Also once this step is complete, the LBaaS team will resume working on some of the other items on the roadmap
21:42:27 Any LBaaS questions?
21:42:54 Is there any conclusion for inheritance vs composition?
21:42:55 i have started to review https://review.openstack.org/#/c/28245/
21:43:14 once again i am sleeping. i have a really minor nit with the last patch
21:43:42 composition is something we really should look at, but we need to do this on a project-wide basis
21:44:17 markmcclain: I agree. so this is related to the VPN code also. Should we do composition in new code? or wait for some big refactoring?
21:44:39 I expect there is going to be some necessary refactoring
21:44:41 all mixins into components?
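(Editor's note: a minimal sketch of the inheritance-vs-composition question discussed above — behavior baked into a plugin's bases via mixins versus a plugin that delegates to a pluggable driver. The class and method names are illustrative, not actual Quantum code.)

```python
# Illustrative only -- these classes do not exist in Quantum/Neutron.

# Inheritance (mixin) style: behavior lives in the plugin's base classes,
# so changing behavior means changing the class hierarchy.
class VpnDbMixin:
    def create_vpnservice(self, context, vpnservice):
        return {"id": 1, "handled_by": "db-mixin"}

class VpnPluginViaMixin(VpnDbMixin):
    """The plugin IS-A database mixin; swapping behavior requires re-basing."""


# Composition style: the plugin HAS-A driver chosen at construction time,
# so a vendor replaces the driver object, not the plugin's ancestry.
class ReferenceVpnDriver:
    def create_vpnservice(self, context, vpnservice):
        return {"id": 1, "handled_by": "reference-driver"}

class VpnPluginViaComposition:
    """Delegates to a pluggable driver; refactoring touches only the driver API."""
    def __init__(self, driver):
        self.driver = driver

    def create_vpnservice(self, context, vpnservice):
        return self.driver.create_vpnservice(context, vpnservice)


if __name__ == "__main__":
    mixed = VpnPluginViaMixin()
    composed = VpnPluginViaComposition(ReferenceVpnDriver())
    print(mixed.create_vpnservice(None, {})["handled_by"])
    print(composed.create_vpnservice(None, {})["handled_by"])
```

The composition form matches the point made later in the meeting that drivers allow refactoring with minimal disruption to vendor implementations: only the driver interface is a contract, while the plugin's internals stay free to change.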
21:45:06 that is ahead in the future for us, to align with the other projects that are moving to Pecan & WSME
21:45:41 the current wsgi framework which we inherited is planned for deprecation in Oslo and other projects
21:45:59 I expect to talk about this in Hong Kong more than what we touched on in Portland
21:46:23 nati_ueno: to answer your question specifically, I think we should strive to be consistent for now
21:46:35 so you are expecting we will work on the refactoring in the next release?
21:46:45 markmcclain: OK so we should choose inheritance for now?
21:46:46 the nice thing about drivers is that we should be able to refactor with minimal disruption to vendor impls
21:47:27 nati_ueno: I think the gist of the discussion is: whatever you choose, there's going to be a major refactoring (likely to happen over 2 release cycles)
21:47:34 nati_ueno: refactoring will be a large task
21:48:07 I got it
21:48:35 salv-orlando is correct.. this will be a long-term process
21:48:47 #topic ML2
21:49:27 the initial ml2 code was merged, but we still need core reviewers for the devstack patch: https://review.openstack.org/#/c/27576/
21:50:08 without this, the ml2 plugin isn't going to get sufficient testing as various merges change it
21:50:09 garyk: can you take a look at it?
21:50:32 markmcclain: sure
21:50:51 thanks!
21:51:12 Any other ML2 updates?
21:51:13 salv-orlando had looked at an earlier version, so I'd appreciate it if he takes another look
21:51:13 so new features for the ovs and linuxbridge plugins should go to ml2?
21:51:31 rkukura: np. That patch worked well for me.
21:51:44 nati_ueno: ideally, once we're at full parity, yes
21:51:45 will we set the ML2 plugin as the default one in devstack?
21:52:12 To get to full parity, we need https://blueprints.launchpad.net/quantum/+spec/ml2-gre
21:52:16 markmcclain: so kyle is going to add VXLAN for ovs, should this work go to ml2 or not?
21:52:30 gongysh: eventually yes.. as there will be only one open source plugin
21:52:51 rkukura: ah I got it, after ml2-gre, we will stop adding new features to ovs and linuxbridge.
21:53:00 I've been focusing on trying to make sure the agent side of the various tunnel-related work will all work with ml2
21:53:28 nati_ueno: there will be a few bits of overlap
21:53:35 So there is vxlan work going on right now for both agents, plus work on partial mesh and l2-population
21:54:23 All of these potentially affect the tunnel management RPCs that ml2 needs to implement for gre
21:54:37 the vxlan work for the current plugin is relatively small
21:54:39 markmcclain: rkukura: OK please let me know when we stop adding new features for ovs and lb.
21:54:44 I think it makes sense to still review it
21:54:46 It's been suggested to have a weekly IRC meeting on all this
21:55:07 rkukura: do we need a weekly meeting
21:55:13 Also, I filed 5 BPs today for H-2
21:55:25 or a one-time meeting to get the next steps organized and then move back to the ML?
21:55:29 *mailing list
21:55:47 We can't do all that without more people involved, so a weekly meeting might make sense
21:56:01 k
21:56:02 Of the BPs, only the GRE one is high priority
21:56:07 ok
21:56:39 Anything else?
21:56:50 So I'd really like to get names on most of these soon, or take them off h-2
21:56:57 Why is GRE so different?
21:57:04 that covers it for now - see the agenda for links, etc.
21:57:14 GRE is needed for parity right now.
21:57:30 I mean the implementation
21:57:32 gongysh: we'll have to talk about GRE offline as we're running short on time
21:57:33 There's a BP for VXLAN, which would be needed for parity if it gets into openvswitch first
21:57:37 ok
21:58:12 #topic CLI
21:58:28 A tarball for 2.2.2a was created last week
21:58:40 I still cannot see it in https://pypi.python.org/pypi/python-quantumclient
21:58:56 any problems found with the tarball?
21:58:57 gongysh: since it was tagged as an alpha
21:59:05 it won't show up on PyPI
21:59:09 http://tarballs.openstack.org/python-quantumclient/python-quantumclient-2.2.2a.tar.gz
21:59:19 is the link for folks that want to test
21:59:54 I hope to push it out to PyPI later today
22:00:14 #topic Testing
22:00:19 mlavalle: Quick update?
22:00:44 I will send an email to the ML with what will be delivered for Havana in Tempest
22:00:48 this week
22:01:02 awesome.. we'll be on the lookout for it
22:01:08 #topic Horizon
22:01:16 I am also cleaning up gate-tempest-devstack-vm-quantum-full
22:01:27 and I need help from arisen this week
22:01:31 arosen
22:02:01 mlavalle: sure. Is this on the security group stuff and ping?
22:02:05 cool.. will this make the gate more stable?
22:02:10 yes, security groups
22:02:17 mlavalle: let's talk in #openstack-dev after and i'm happy to help.
22:02:20 markmcclain++
22:02:32 mlavalle: let me help too
22:02:40 arosen: ;-)
22:02:57 nati_ueno: ;-)
22:03:00 amotoki: Looks like you've got assignees for the Quantum-related blueprints in Horizon
22:03:15 yes. i updated the Wiki. no big progress from last week. H-2 started, so i will check the target and progress of each blueprint this week.
22:03:57 that's all from me this week.
22:04:59 great.. thanks for the update
22:05:04 #topic Open Discussion
22:05:47 i am going to crash. good night all
22:05:49 markmcclain: IMO, we should also manage service framework progress
22:05:52 garyk: night
22:05:59 garyk: have a good dream :)
22:06:36 ciao ciao
22:06:49 thanks for all of the hard work on H1
22:07:31 next week we'll look at the blueprints since 50 is way too many
22:07:39 have a great week everyone
22:07:41 #endmeeting