21:03:36 <markmcclain> #startmeeting Networking
21:03:37 <mestery> hi
21:03:37 <emagana> hi folks!
21:03:37 <openstack> Meeting started Mon Jun 24 21:03:36 2013 UTC.  The chair is markmcclain. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:40 <openstack> The meeting name has been set to 'networking'
21:03:42 <openstack> markmcclain: Error: Can't start another meeting, one is in progress.
21:03:52 <markmcclain> ok.. now the bot is awake :)
21:03:59 <salv-orlando> openstack: shut up!
21:04:06 <markmcclain> #link https://wiki.openstack.org/wiki/Network/Meetings
21:04:35 <markmcclain> #topic Announcements
21:04:40 <emagana> salv-orlando: +1
21:04:56 <markmcclain> It took a while, but we have a new name: Neutron
21:05:20 <gongysh> so, what will we need to do after the name change?
21:05:37 <gongysh> admin guide doc, code change, api change ...
21:05:43 <mestery> gongysh: A lot of work, I suspect. :)
21:05:57 <gongysh> we can divide the work
21:05:58 <dkehn> no kidding
21:06:10 <mestery> yes
21:06:31 <markmcclain> gongysh: working through producing a draft of the changes we'll need to make
21:06:57 <mestery> markmcclain: The more granular it is, the easier we can spread the renaming load.
21:07:13 <markmcclain> the plan is to publish a wiki that contains all of the items we need to complete along with a timeline to sync up with H2
21:07:48 <markmcclain> mestery: yes, I plan to make it granular so that we can spread the load and switch over rather quickly
21:10:42 <salv-orlando> markmcclain: so the plan is that we wait for you and few other folks to produce a plan, and then we divide work items?
21:10:42 <markmcclain> yeah.. we're working on a draft and then we'll let everyone know just to make sure we didn't miss anything
21:10:42 <markmcclain> and then assign out the items to complete
21:10:43 <mestery> markmcclain: Sounds like a good plan!
21:10:43 <gongysh> ok,  it is a plan.
21:10:43 <markmcclain> the other trick is going to be maintaining compatibility in some places
21:10:43 <armax> do we assume that there will be a sort of freeze for bug/feature merges until the naming change takes place?
21:11:09 <armax> to mitigate potential (needless) conflicts?
21:11:15 <salv-orlando> I think infra will be taken down for a while. So there will be a forced freeze
21:11:27 <markmcclain> armax:  ^^^
21:11:55 <armax> cool
21:12:04 <armax> thanks for clearing that up
21:12:11 <markmcclain> otherwise I want us to keep working on the changes, and I'll also try to have a short script to clean up patchsets
21:12:34 <markmcclain> for patches that are in review when the changeover occurs
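[The clean-up script mentioned above is not included in this log. Below is a minimal hypothetical sketch of what such a patchset clean-up might look like, assuming a plain textual quantum -> neutron substitution over Python files in a checked-out tree; it does not attempt file or directory renames, config updates, or the compatibility shims the real changeover would also need.]

```python
# Hypothetical sketch only -- not the actual clean-up script discussed above.
# Assumes a plain textual quantum -> neutron substitution applied to .py files
# in a checked-out patchset before re-submitting it for review.
import os
import re

# Case-preserving replacements for the common spellings of the old name.
RENAMES = [
    (re.compile(r'quantum'), 'neutron'),
    (re.compile(r'Quantum'), 'Neutron'),
    (re.compile(r'QUANTUM'), 'NEUTRON'),
]


def rewrite_tree(root='.'):
    """Apply the rename to every Python file under root, in place."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.py'):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as handle:
                text = handle.read()
            for pattern, replacement in RENAMES:
                text = pattern.sub(replacement, text)
            with open(path, 'w') as handle:
                handle.write(text)


if __name__ == '__main__':
    rewrite_tree()
```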
21:12:56 <garyk> maybe we should set aside a week or 2 and all focus on the effort
21:13:25 <garyk> upside is we can stop opening bugs for the period of time we are working on the transition
21:13:53 <SumitNaiksatam> garyk: +1
21:14:04 <markmcclain> garyk: I considered it, but we have lots of items in review now and I don't want to do a full stop since we have 3 weeks until the H2 cut
21:15:42 <emagana> markmcclain: Dan was leading Documentation, who is reviewing that part now?
21:16:06 <salv-orlando> I heard a rumour it's emagana...
21:16:22 <markmcclain> we all should have rights to review docs
21:16:25 <garyk> salv-orlando: +1
21:17:43 <emagana> if you don't mind having a non-native English speaker on that task!
21:18:00 <markmcclain> I'll complete the specifics in the next day or so and post for comments
21:18:19 <salv-orlando> emagana: thanks for volunteering for writing the docs in spanish!
21:18:38 <markmcclain> I'll be creating bugs associated with the tasks so that progress can be tracked.
21:18:41 <emagana> salv-orlando: +1 (n.p.)
21:18:50 <emagana> salv-orlando: si amigo!
21:19:12 <gongysh> tracking it in bug form is a good idea.
21:19:14 <markmcclain> so that's the current update on renaming any questions?
21:19:18 <mlavalle> salv-orlando: his Spanish is not that good…
21:20:03 <salv-orlando> markmcclain: not from me. thanks for the update.
21:21:03 <emagana> mlavalle: what??? :-)
21:21:04 <markmcclain> This will add some extra work, so I appreciate everyone's patience as we work through the process
21:21:54 <markmcclain> we still have lots of other work going on too… let's run through the reports
21:22:12 <markmcclain> #topic API
21:22:36 <markmcclain> salv-orlando: hi
21:22:51 <salv-orlando> hello again
21:23:00 <salv-orlando> We shall be quick as there's a lot to discuss
21:23:08 <salv-orlando> the API is fairly quiet.
21:23:23 <salv-orlando> No major bugs, blueprints are proceeding smoothly.
21:23:35 <markmcclain> cool
21:23:52 <salv-orlando> I've posted a spec for https://blueprints.launchpad.net/quantum/+spec/sharing-model-for-external-networks
21:23:53 <markmcclain> #topic VPNaaS
21:24:00 <salv-orlando> markmcclain: I've added a bug
21:24:06 <salv-orlando> to the meeting agenda to discuss
21:24:13 <salv-orlando> can we spare a second for it?
21:24:34 <markmcclain> salv-orlando: yes
21:24:36 <salv-orlando> Bug 1184484
21:24:46 <salv-orlando> hey bot?
21:24:53 <salv-orlando> bug #1184484
21:25:01 <salv-orlando> nvm https://bugs.launchpad.net/quantum/+bug/1184484
21:25:02 <markmcclain> https://bugs.launchpad.net/quantum/+bug/1184484
21:25:22 <salv-orlando> the problem was very easy to reproduce without using code from:
21:25:29 <salv-orlando> https://review.openstack.org/#/c/27265/
21:25:44 <salv-orlando> and https://review.openstack.org/#/c/29513/ (now merged)
21:30:13 <salv-orlando> however the reporter said it still occurs, and at fairly small scale. It seems that concurrent requests immediately run quantum out of connections
21:30:13 <salv-orlando> regardless of whether pooling is enabled or not.
21:30:13 <salv-orlando> This can be mitigated by increasing the pool size
21:30:13 <salv-orlando> But with the default pool size, it's been reported that even 10 concurrent VM spawns trigger the issue again
21:30:13 <gongysh> I think pool size is not a permanent solution.
21:30:13 <salv-orlando> The solution would be to stop quantum from hogging connections:
21:30:13 <salv-orlando> 1 request = 1 connection.
21:30:13 <markmcclain> yeah we've experienced this issue internally too
21:30:13 <salv-orlando> and then the connection is immediately released
21:30:13 <salv-orlando> So I just wanted to say that if you can provide more details, please comment on the bug report
21:30:14 <markmcclain> will do
21:30:14 <salv-orlando> and provide logs and stuff
21:30:14 <rkukura> salv-orlando: Are we sure nested transactions are using additional connections? Why?
21:30:14 <salv-orlando> nested transactions are doing that at the moment because of an issue with the way we do db pooling
21:30:14 <salv-orlando> https://review.openstack.org/#/c/27265/ fixes that
21:30:14 <salv-orlando> but the issue still remains
21:30:14 <gongysh> so 27265 is not a fix for the db pool problem.
21:30:21 <gongysh> right?
21:30:28 <markmcclain> gongysh: it fixes part of the issue
21:30:45 <markmcclain> and also aligns us with how some of the other projects are using the db
21:31:18 <garyk> salv-orlando: i hope to have the review ready for https://review.openstack.org/#/c/27265/ tomorrow (tests are failing at the moment)
21:31:39 <garyk> i do not think it will be the magic bullet but as said above it will align us with the community
21:31:40 <salv-orlando> garyk: saw that, thanks
21:32:09 <markmcclain> salv-orlando: thanks for calling attention to that bug
21:32:40 <markmcclain> folks can comment on the bug offline we can work on it more
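[For reference, a minimal sketch of the two ideas discussed above, assuming SQLAlchemy's default QueuePool: (a) enlarging the pool, which only buys headroom against exhaustion, and (b) scoping one session -- and therefore one pooled connection -- to each request and releasing it as soon as the request finishes. The connection URL and the handle_request helper are illustrative assumptions, not the code under review in 27265 or 29513.]

```python
# Illustrative sketch only, not the code under review in 27265/29513.
# Shows the two knobs discussed above: (a) enlarging the connection pool as a
# mitigation, and (b) the "1 request = 1 connection, released immediately"
# pattern using a scoped session that is removed at the end of each request.
import sqlalchemy
from sqlalchemy import orm

# (a) Mitigation: a bigger pool delays exhaustion but does not fix the problem.
engine = sqlalchemy.create_engine(
    'mysql://quantum:secret@localhost/quantum',  # placeholder connection URL
    pool_size=20,        # default is 5; raising it only buys headroom
    max_overflow=40,     # extra connections allowed beyond pool_size
    pool_timeout=10,     # seconds to wait before giving up on a connection
)

# (b) One session (and therefore one pooled connection) per request.
Session = orm.scoped_session(orm.sessionmaker(bind=engine))


def handle_request(work):
    """Run work(session) with a request-scoped session, then release it."""
    session = Session()
    try:
        result = work(session)
        session.commit()
        return result
    except Exception:
        session.rollback()
        raise
    finally:
        # remove() closes the session and returns its connection to the pool
        # immediately, instead of holding it for the life of the greenthread.
        Session.remove()
```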
21:32:48 <markmcclain> anything else for the api?
21:32:55 <gongysh> and I heard nova is introducing a mysql db api, without sqlalchemy
21:33:52 <gongysh> salv-orlando: is it just a rumor?
21:34:02 <markmcclain> gongysh: no idea what they're doing, but it wouldn't fix this issue
21:34:43 <salv-orlando> the root cause might as well be in sqlalchemy, but would you cut off your arm if you bruised it?
21:34:45 <salv-orlando> :)
21:35:04 * markmcclain blames eventlet
21:35:14 <gongysh> ok, if I find the URL of the BP, I will send it to you.
21:35:27 <markmcclain> salv-orlando: anything else?
21:35:51 <salv-orlando> nope
21:35:59 <markmcclain> thanks for the report
21:36:02 <markmcclain> nati_ueno: hi
21:36:06 <nati_ueno> markmcclain: ok
21:36:29 <markmcclain> quick update on VPN?
21:36:45 <nati_ueno> We finished moving to the new directory structure. We are still working on UT. I expect we can remove WIP in 1 or 2 weeks.
21:36:47 <nati_ueno> That's all
21:36:49 <gongysh> salv-orlando: I want you to experiment; hope you will not bruise it. :)
21:37:25 <markmcclain> nati_ueno: ok 2 weeks is right around the h2 feature cutoff
21:37:59 <nati_ueno> markmcclain: gotcha.
21:38:18 <nati_ueno> I'll do my best for 1 week to remove UT :)
21:38:25 <markmcclain> cool
21:38:28 <nati_ueno> sorry typo remove WIP
21:38:40 <markmcclain> sounds good
21:38:47 <markmcclain> #topic Nova Integration
21:38:53 <markmcclain> garyk: hi
21:39:15 <garyk> markmcclain: no updates on the migrations
21:39:28 <garyk> markmcclain: but there is a patch that I'd like people to look at
21:39:42 <markmcclain> which one?
21:40:16 <garyk> markmcclain: https://review.openstack.org/#/c/33054/
21:40:25 <garyk> (sorry it took me a while to find)
21:41:02 <salv-orlando> news on bug 1192131 ?
21:41:03 <markmcclain> ok
21:41:11 <gongysh> garyk: I am guessing you will talk about the host id patch too.
21:41:16 <markmcclain> https://bugs.launchpad.net/quantum/+bug/1192131
21:41:21 <garyk> that was next on the list
21:41:28 <gongysh> https://review.openstack.org/#/c/29767/
21:41:34 <markmcclain> ^^^ is a big bug that is randomly breaking the gate
21:41:52 <markmcclain> I know arosen and some of the nova devs were working on tracking it down
21:41:52 <garyk> salv-orlando: no, i have not seen anything regarding https://bugs.launchpad.net/quantum/+bug/1192131
21:42:17 <garyk> markmcclain: i recall seeing a mail from arosen saying it was related to eventlet. not sure
21:42:23 <gongysh> shared quantum client 1192131?
21:42:53 <markmcclain> the problem with 1192131 is that folks thus far have been unable to track down which change made gate more unstable
21:43:17 <gongysh> I am trying to do a version to share the admin token
21:43:21 <garyk> if someone can help me reproduce the bug then i can take a look at it. i have yet to get it to reproduce
21:44:03 <garyk> markmcclain: thats about all at the moment
21:44:07 <markmcclain> garyk: it is a race that occurs in random places
21:44:12 <salv-orlando> garyk: it's elusive. Not a heisenbug, but it needs concurrency and some other condition that we still need to figure out
21:44:16 <gongysh> garyk: yes, that is a very random problem. IBM QA found it too.
21:44:37 <garyk> is it just caused by running tempest?
21:44:49 <markmcclain> running the gate will sometimes trigger it
21:45:34 <garyk> i'll try and look at it tomorrow
21:45:40 <markmcclain> great...we've covered some important stuff, but we're starting to run short on time
21:45:41 <gongysh> but nova has reverted the shared client, right?
21:45:53 <markmcclain> gongysh: the reversion did not have an impact
21:46:04 <markmcclain> so the revert was abandoned
21:46:24 <markmcclain> #topic FWaaS
21:46:32 <markmcclain> SumitNaiksatam: quick update?
21:46:39 <SumitNaiksatam> Hi
21:46:42 <SumitNaiksatam> yeah quick
21:46:44 <SumitNaiksatam> We have most of the fwaas patches that we were targeting for H2 in review now (except devstack and horizon).
21:46:51 <SumitNaiksatam> API/Plugin: https://review.openstack.org/#/c/29004/ Agent: https://review.openstack.org/#/c/34064/ Driver: https://review.openstack.org/#/c/34074/ CLI: https://review.openstack.org/#/c/33187/
21:46:56 <markmcclain> awesome
21:46:58 <SumitNaiksatam> We are trying to work through the integration, finding and fixing bugs (hence the patches are marked WIP), but we do have the flow working from the REST call down to the driver
21:47:26 <markmcclain> nice
21:47:36 <SumitNaiksatam> that's the quick update, unless RajeshMohan or SridarK want to add anything or there are questions
21:47:45 <SridarK> nothing more to add
21:47:48 <gduan> this is gary from vArmour
21:47:57 <gduan> we are following Sumit's patch
21:48:09 <SumitNaiksatam> thanks gduan for that update
21:48:12 <gongysh> markmcclain: the reversion is done; I see the code is reverted, if my eyes are right.
21:48:18 <gduan> and rework our rest api to fit into the structure
21:48:27 <SumitNaiksatam> gduan: we will catch up
21:48:29 <markmcclain> gduan: good to know… please feel free to comment on the work in progress
21:48:36 <gduan> sure
21:48:49 <markmcclain> gongysh: shouldn't be merged: https://review.openstack.org/#/c/33555/
21:49:09 <markmcclain> SumitNaiksatam: Thanks for the update
21:49:13 <SumitNaiksatam> sure
21:49:20 <markmcclain> #topic ML2
21:49:26 <markmcclain> rkukura or mestery?
21:49:39 <mestery> markmcclain: Hi.
21:49:58 <rkukura> we are making progress towards the H2 BPs
21:50:15 <rkukura> details are on agenda wiki
21:50:22 <mestery> #link https://wiki.openstack.org/wiki/Meetings/ML2
21:50:35 <rkukura> mestery: anything you want to bring up here?
21:50:45 <gongysh> markmcclain: https://review.openstack.org/#/c/33499/
21:50:59 <mestery> rkukura: Nope, other than to say if people want to talk ML2 in more detail to join the sub-team meeting on #openstack this Wednesday at 1400UTC
21:51:24 <rkukura> anything else from anyone on ml2?
21:51:54 <markmcclain> thanks for updating us
21:51:56 <rkukura> mestery: Make that #openstack-meeting
21:52:09 <mestery> rkukura: Good catch. :)
21:52:12 <gongysh> where are the meeting minutes for ML2?
21:52:26 <mestery> #link http://eavesdrop.openstack.org/meetings/networking_ml2/
21:52:44 <mestery> gongysh: Just posted (http://eavesdrop.openstack.org/meetings/networking_ml2/)
21:53:00 <gongysh> bookmarked it. thanks
21:53:09 <markmcclain> #topic python client
21:53:26 <gongysh> no big problems here. just about 2.2.2
21:53:33 <markmcclain> Seems that the feedback for 2.2.2a1 has been positive
21:53:46 <markmcclain> so we'll push 2.2.2 to PyPI overnight
21:54:41 <gongysh> ok
21:54:49 <gongysh> no more from me.
21:54:57 <markmcclain> alright
21:55:28 <markmcclain> #topic Horizon
21:55:38 <amotoki> hi. sorry for my absence the last 2 weeks. I had family matters.
21:55:47 <amotoki> About horizon, I have good progress on the H2 horizon blueprints: secgroup support and extension-aware features.
21:56:03 <amotoki> SumitNaiksatam: I think it is better to move FWaaS support to H3. What do you think?
21:56:31 <SumitNaiksatam> amotoki: sure
21:56:40 <SumitNaiksatam> if that works better
21:56:41 <amotoki> SumitNaiksatam: thanks.
21:56:50 <SumitNaiksatam> i will coordinate with you offline
21:57:10 <amotoki> i have no concerns about the other h2 items and will check their status.
21:57:28 <amotoki> no more from me.
21:57:34 <markmcclain> amotoki: welcome back and thanks for the update
21:57:49 <markmcclain> #topic lbaas
21:58:17 <markmcclain> enikanorov_: looks like there are a few minor items in review, but other things are stabilizing. correct?
21:58:27 <enikanorov_> right
21:58:42 <enikanorov_> the major one is adding agent scheduling to the reference implementation
21:58:55 <enikanorov_> would be great if gongysh could take a look
21:59:18 <markmcclain> gongysh: mind taking a look?
21:59:22 <enikanorov_> this one: https://review.openstack.org/#/c/32137/
21:59:27 <gongysh> enikanorov_: ok, it is on my todo list for the next two days.
21:59:37 <enikanorov_> gongysh: thanks
21:59:51 <markmcclain> gongysh: thanks
22:00:01 <markmcclain> enikanorov_: anything else?
22:00:12 <enikanorov_> not this time
22:00:18 <markmcclain> thanks for updating
22:00:24 <markmcclain> #topic Open Discussion
22:00:52 <amotoki> dkehn: around?
22:01:06 <dkehn> I'd like to get core folks' reviews of https://review.openstack.org/#/c/30441/ and https://review.openstack.org/#/c/30447/; all review comments have been addressed
22:01:31 <dkehn> if possible
22:02:25 <markmcclain> both amotoki and I are cores on 30441
22:02:34 <markmcclain> we can coordinate offline
22:02:40 <dkehn> k
22:02:49 <garyk> i am going to call it a night. good night everyone
22:02:52 <markmcclain> Any other open discussion items?
22:02:55 <SumitNaiksatam> going back to https://review.openstack.org/#/c/29767: is there anything we need to do as a quantum team to facilitate it?
22:02:55 <markmcclain> garyk: night
22:03:37 <SumitNaiksatam> thanks to gongysh for his perseverance on it (host_id from nova to quantum issue)
22:04:04 <gongysh> yes, I have rebased many times.
22:04:36 <rkukura> looks like gongysh responded to Phil Day's issue, and no change should be needed
22:04:37 <gongysh> nova guys marked it as low priority. I hate that.
22:05:00 <markmcclain> SumitNaiksatam: I think we need to make sure the -1 is understood and that gongysh has responded back
22:05:11 <markmcclain> the -1 is likely causing other reviewers to skip it
22:05:19 <SumitNaiksatam> markmcclain: it seems gongysh response is correct
22:05:36 <SumitNaiksatam> i think gongysh did respond promptly but it's a new -1 every time
22:06:05 <markmcclain> right.. all by different reviewers
22:06:23 <gongysh> there are no fixed core members on it. so a new one comes in, gives it a scan and fires a -1.
22:06:25 <salv-orlando> I know. In general it's not a good idea to -1 a patch when the reviewer has a question but no specific concern with the patch itself.
22:06:39 <SumitNaiksatam> salv-orlando: +1
22:06:56 <gongysh> problem is he does not come back again after that.
22:07:03 <SumitNaiksatam> markmcclain: anything we can do to catch the attention of two cores who can shepherd this?
22:07:24 <markmcclain> yeah we can work offline on it
22:07:32 <salv-orlando> pushing a new patch set will clear the -1 and send a notification to all the reviewers who reviewed it beforehand
22:07:51 <rkukura> I had mentioned this to the nova PTL a while back, and I'll ask him again if he can raise the priority
22:08:38 <markmcclain> yeah.. when you rebase let me know and I'll chat with a few cores offline about it
22:09:02 <markmcclain> make sure the profile of the review is raised
22:09:15 <gongysh> markmcclain: rebase?
22:10:12 <markmcclain> actually we may not need it since it was last pushed yesterday
22:10:21 <gongysh> I have just rebased and fixed some conflicts because of the reversion patch https://review.openstack.org/#/c/33499/
22:15:05 <markmcclain> k
22:15:06 <markmcclain> we can work on this offline.. any other open discussion items?
22:15:06 <gongysh> there is a new problem on the nova integration side:
22:15:06 <markmcclain> what's that?
22:15:06 <amotoki> gongysh: what?
22:15:06 <gongysh> since the quantum client is created many times on the nova side, the keystone tokens in the db will multiply many times over.
22:15:06 <gongysh> I think this was the purpose of the patch using a shared quantum client,
22:15:07 <gongysh> which was reverted by https://review.openstack.org/#/c/33499/
22:15:07 <markmcclain> gongysh: right, there are issues with how eventlet monkey-patches the Http objects
22:15:08 <markmcclain> we'll need to step back and think of a different approach
22:15:08 <gongysh> So I will push a shared-token version on the nova side.
22:15:08 <salv-orlando> gongysh: you're saying you will use a single auth context for all resources?
22:15:08 <salv-orlando> regardless of tenant?
22:15:08 <gongysh> with it, the nova side can reuse the token for as long as possible.
22:15:08 <gongysh> no, it is just for admin features on the nova side.
22:15:08 <salv-orlando> k
22:15:08 <gongysh> not for normal API invocation.
22:15:42 <gongysh> I am done.
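[A minimal sketch of the admin-token sharing gongysh describes, under the assumption that the token is fetched once and reused until it nears expiry instead of being re-issued for every quantum client instantiation. _authenticate_admin() is a hypothetical stand-in for whatever keystone call nova uses; this is not the actual nova patch.]

```python
# Minimal sketch of admin-token sharing on the nova side; not the real patch.
# _authenticate_admin() is a hypothetical helper standing in for the keystone
# call nova uses to obtain an admin token. The point is only that the token is
# fetched once and reused until it is close to expiring, instead of being
# re-issued every time a quantum client is constructed.
import datetime
import threading

_TOKEN_LOCK = threading.Lock()
_CACHED_TOKEN = None          # (token_id, expiry) tuple
_EXPIRY_MARGIN = datetime.timedelta(minutes=5)


def _authenticate_admin():
    """Hypothetical: return (token_id, expiry) for the admin user."""
    raise NotImplementedError


def get_admin_token():
    """Return a cached admin token, refreshing it only when it nears expiry."""
    global _CACHED_TOKEN
    with _TOKEN_LOCK:
        now = datetime.datetime.utcnow()
        if _CACHED_TOKEN is None or _CACHED_TOKEN[1] - now < _EXPIRY_MARGIN:
            _CACHED_TOKEN = _authenticate_admin()
        return _CACHED_TOKEN[0]
```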
22:16:27 <markmcclain> ok.. still have to be mindful of how it's implemented
22:17:09 <markmcclain> alright everyone, have a good night/afternoon/morning
22:17:12 <markmcclain> #endmeeting