17:00:14 <sdague> #startmeeting qa
17:00:15 <openstack> Meeting started Thu Oct  3 17:00:14 2013 UTC and is due to finish in 60 minutes.  The chair is sdague. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:18 <openstack> The meeting name has been set to 'qa'
17:00:23 <sdague> ok, who's around for the qa meeting?
17:00:28 <mlavalle> Hi
17:00:36 <mtreinish> hi
17:00:39 <kwhitney> Hi
17:00:50 <ravikumar_hp> hi
17:00:55 <sdague> #link Agenda - https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_October_3_2013
17:01:15 <dkranz> hi
17:01:40 <sdague> ok, lets get started on blueprint cleanup
17:01:50 <sdague> #topic Blueprints (sdague)
17:02:01 <afazekas> hi
17:02:11 <sdague> so, this morning I started going through and closing out blueprints that I think are done
17:02:21 <sdague> https://blueprints.launchpad.net/tempest
17:02:22 <ravikumar_hp> sdague: I have closed some
17:02:36 <sdague> however, we have a ton in Unknown state
17:02:50 <ravikumar_hp> sdague: anything pending more than 6 months and without any action should be closed
17:02:52 <sdague> so it would be really good if people could update those
17:02:56 <sdague> agreed
17:03:23 <dkranz> sdague: I think a lot of them are Started based on patches/reviews
17:03:26 <sdague> mlavalle: is this still "Not Started" - https://blueprints.launchpad.net/tempest/+spec/quantum-basic-api
17:03:43 <mlavalle> sdague: Not started
17:04:10 <sdague> ok, that one I think we still want. what's your icehouse target for it?
17:04:37 <mlavalle> sdague: icehouse 2
17:04:37 <giulivo> hi
17:05:10 <giulivo> sdague, I'm not in the right launchpad group to be able to work on the blueprints
17:05:20 <sdague> giulivo: ok, I can fix that
17:05:23 <mlavalle> sdague: I want to tackle first the neutron job thing
17:05:25 <sdague> will do after
17:05:36 <sdague> mlavalle: that's fine, I marked it medium / icehouse-2
17:05:45 <mlavalle> cool
17:05:57 <sdague> #action sdague to put giulivo into tempest-drivers
17:06:11 <sdague> ok, other updates on blueprints there?
17:06:28 <sdague> do we know about stevebaker's bp?
17:06:30 <giulivo> I think there are some duplicates about the neutron APIs
17:06:50 <giulivo> I reviewed quite a few changes actually implementing those which weren't using the appropriate topic
17:07:15 <giulivo> so the blueprints would need some love around that, I'd be happy to do that
17:07:24 <sdague> giulivo: ok, great, if you could collapse some of those, that would be great
17:07:45 <sdague> in the future I'd like to not use so many small blueprints, as I think they carry a lot more overhead to manage
17:07:54 <dkranz> sdague: +1
17:08:17 <sdague> honestly, I'll probably see how ttx's storyboard project is progressing at icehouse, and see if it would be good to make tempest an early user of it
17:08:24 <sdague> as a replacement for blueprints
17:08:34 <sdague> at icehouse summit that is
17:08:59 <sdague> the inability for us to target blueprints to multiple projects (like tempest + core project) is really annoying
17:09:26 <mtreinish> sdague: normally you can get around that with dependencies though
17:09:26 <sdague> ok, I'll send out an email today to the list giving people a week to update items in Unknown state, and then plan to script purge them
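(For reference, a script-purge along the lines sdague describes could be sketched with launchpadlib. This is illustrative only; it assumes the blueprint fields are writable through the API and that untriaged blueprints report implementation_status == 'Unknown':)

```python
# Sketch of a blueprint purge via launchpadlib; not the actual script.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_with('bp-cleanup', 'production')
tempest = lp.projects['tempest']

for bp in tempest.all_specifications:
    # Assumption: untriaged blueprints show up as 'Unknown' here.
    if bp.implementation_status == 'Unknown':
        bp.definition_status = 'Obsolete'  # mark as no longer planned
        bp.lp_save()
```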
17:09:39 <sdague> mtreinish: you can... but it gets a little clunky
17:09:46 <mtreinish> yes it is
17:10:00 <sdague> alright, any other blueprint discussions?
17:10:23 <sdague> #topic Neutron job status (sdague)
17:10:25 <dkranz> sdague: I was going to do some work on the log errors blueprint Friday but don't want to conflict with anyone.
17:10:43 <sdague> dkranz: honestly, I won't be able to touch it for a bit, so go ahead
17:11:14 <dkranz> sdague: OK, you had some ideas already so perhaps you can tell me out-of-meeting, or I could just proceed.
17:11:14 <sdague> ok, on the neutron jobs, mlavalle you have an update on where we are with the full job?
17:11:27 <sdague> dkranz: lets chat in irc after the meeting
17:11:31 <dkranz> sdague: k
17:11:41 <mlavalle> sdague: I've been running it. We still have 14 modules with failures
17:12:05 <mlavalle> sdague: a couple of them are related to quotas
17:12:15 <mtreinish> mlavalle: is that more or less than before?
17:12:15 <sdague> mlavalle: ok, is it in a state where we should turn it back on non voting so it's easier to work the issues?
17:12:44 <mlavalle> Yes, that will make it easier for me
17:13:04 <sdague> mlavalle: ok, lets do that. You want to submit the review? or we need someone else to?
17:13:15 <mlavalle> I'll do it
17:13:29 <mlavalle> if I need help, i'll yell
17:13:37 <sdague> #action mlavalle to submit review to turn neutron-full back on non-voting
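(For reference, making a job non-voting is normally a one-line change to the zuul layout in openstack-infra/config. A sketch; the exact job name is an assumption:)

```yaml
# layout.yaml sketch; the job name is illustrative
jobs:
  - name: check-tempest-devstack-vm-neutron-full
    voting: false
```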
17:13:46 <sdague> great
17:14:00 <dims> sdague, the bogus fd patch is holding up
17:14:07 <sdague> we're now running 4 devstack/tempest runs on neutron jobs to help prevent races from sliding through
17:14:35 <sdague> dims: we're not running any tenant isolation jobs atm (because I screwed up the config)
17:14:37 <afazekas> I guess this bug needs to be fixed in order to pass the quota tests: https://bugs.launchpad.net/neutron/+bug/1189671
17:14:38 <uvirtbot> Launchpad bug 1189671 in neutron "default quota driver not suitable for production" [High,In progress]
17:14:45 <sdague> so that might come back when we get that changed again
17:14:51 <sdague> salv-orlando had patches
17:14:54 <dims> sdague, will keep an eye out
17:15:09 <mlavalle> afazekas: correct
17:15:31 <sdague> ok, any other neutron things?
17:15:42 <afazekas> the other issue is the interfaces extension, can we skip it until it is fixed?
17:16:12 <sdague> if we get down to just a skip or two that we need, we can definitely propose that
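(For context, the usual way to propose such a skip in tempest is a bug-linked decorator. A sketch, assuming tempest's skip_because decorator; the test class and method names are hypothetical:)

```python
# Hypothetical bug-linked skip; only the decorator is tempest's mechanism.
from tempest import test


class InterfacesTest(test.BaseTestCase):

    @test.skip_because(bug="1189671")
    def test_interfaces_extension(self):
        ...
```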
17:16:42 <sdague> ok, next up...
17:16:47 <afazekas> A possible workaround for the quota issue is to configure it via devstack
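(The quota knobs afazekas refers to live in the [quotas] section of neutron.conf; an illustrative example of what devstack would need to write, with placeholder values:)

```ini
# neutron.conf sketch; values are placeholders
[quotas]
quota_network = 100
quota_subnet = 100
quota_port = 500
```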
17:16:47 <sdague> #topic Elastic Recheck Top issues (mtreinish)
17:17:05 <mtreinish> sdague: so this is about what the big offenders are now?
17:17:11 <sdague> mtreinish: yep
17:17:32 <mtreinish> well right now bug 1226337 is still the biggest one I think
17:17:33 <uvirtbot> Launchpad bug 1226337 in cinder "tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern 'qemu-nbd: Failed to bdrv_open'" [Critical,Fix committed] https://launchpad.net/bugs/1226337
17:18:01 <mtreinish> followed by a few neutron ones
17:18:09 <sdague> mtreinish: so do we need to reopen the cinder bug?
17:18:12 <mtreinish> the biggest of which was avoided by stopping tenant isolation
17:18:12 <jog0> sdague: http://status.openstack.org/elastic-recheck/
17:18:16 <sdague> it's in fix released
17:18:25 <sdague> jog0: yeh, I was about to share that :)
17:18:50 <mtreinish> sdague: not sure, I haven't really been watching things that closely the past day or so
17:18:54 <jog0> sorry for beating you to the punch
17:18:57 <mtreinish> I just have the aggregate numbers
17:18:57 <sdague> though it occurs to me that we actually should scale the metrics based on the % chance that it fails a job
17:19:17 <sdague> because the neutron bugs don't look as bad now that they only trip up in neutron jobs
17:19:27 <mtreinish> it looks like it's fixed based on the graphs
17:19:51 <mtreinish> yeah the neutron ones have a bunch of attention right now
17:19:53 <jog0> sdague: yeah this isn't perfect for showing how severe something is, but it does show when something is fixed
17:20:06 <sdague> mtreinish: yeh 1226337 actually does look probably fixed
17:20:21 <sdague> though... the gate's been really quiet today
17:20:37 <sdague> with the RCs going out
17:21:04 <jog0> 15 jobs up on zuul ATM which is quiet
17:21:23 <sdague> yeh, the nova db patch had the gate all to itself this morning
17:21:42 <sdague> ok, anything else on ER?
17:21:52 <mtreinish> no I don't think so
17:22:07 <sdague> ER has already shown great value, can't wait to see how it evolves
17:22:28 <jog0> I have one thing
17:22:33 <sdague> ok
17:22:36 <jog0> https://bugs.launchpad.net/nova/+bug/1233923
17:22:38 <uvirtbot> Launchpad bug 1233923 in neutron "FAIL: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance -- nova VM fails to go to running state when using neutron" [Undecided,Confirmed]
17:23:03 <jog0> we don't have a query for that bug because we don't know what to look for in the logs to find it
17:23:28 <jog0> it's one of a few reasons why test_run_stop_terminate_instance can fail
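(For context, an elastic-recheck fingerprint is a logstash query tied to a bug number. A hypothetical sketch of what one for this bug would look like; the message string is a placeholder, since finding a distinctive log line is exactly the open problem jog0 describes:)

```yaml
# queries/1233923.yaml (hypothetical; no known fingerprint yet)
query: >
  message:"<distinctive failure line>" AND filename:"console.html"
```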
17:23:53 <sdague> right, this is because we don't know why the network setup is wrong?
17:24:25 <jog0> well I think it's a two-part issue, both nova and neutron
17:24:36 <sdague> ok
17:24:43 <jog0> but it's very unclear to me what's going on so extra eyes would be useful
17:24:44 <afazekas> https://review.openstack.org/#/c/49439/ it might fix/work around a lot of neutron issues, I wonder why we don't monkey patch the mysql module
17:25:15 <dkranz> jog0: It is Unassigned
17:25:18 <sdague> #action extra eyes asked for on https://bugs.launchpad.net/nova/+bug/1233923
17:25:20 <uvirtbot> Launchpad bug 1233923 in neutron "FAIL: tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest.test_run_stop_terminate_instance -- nova VM fails to go to running state when using neutron" [Undecided,Confirmed]
17:25:43 <jog0> it hasn't been seen in almost 24 hours, though that may just be because the gate is quiet
17:25:59 <sdague> even if people don't manage to figure out the whole thing, we did a pretty good job on the tenant isolation issue with lots of people each adding little bits to the bug
17:26:00 <mlavalle> jog0: I'll take a look and ping you if i can help
17:26:10 <sdague> thanks mlavalle !
17:26:32 <jog0> thanks
17:26:38 <sdague> ok, next topic
17:26:55 <sdague> #topic Stable branch timing (sdague)
17:27:10 <sdague> so this came up in IRC today, and I figured we could use a plan
17:27:41 <mtreinish> sdague: haven't we just done it around release day in the past
17:27:43 <sdague> last time we basically waited until the week before release and set the stable branch on tempest / devstack / devstack-gate that day
17:28:08 <sdague> mtreinish: well in grizzly we actually did it once stable/grizzly had broken from bitrot :)
17:28:15 <sdague> but I think 1 week prior
17:28:24 <sdague> is a good plan
17:28:25 <dkranz> sdague: I think that makes sense to delay as long as we can
17:28:31 <sdague> dkranz: ok, sure
17:28:32 <dkranz> new tests are pouring in
17:28:43 <sdague> dkranz: that's actually my concern
17:28:49 <mtreinish> well this brings up the backport policy
17:28:51 <sdague> master is now open for 1/2 the projects for icehouse
17:28:52 <dkranz> sdague: I would like to touch on the post-release issue as well, which is related
17:28:58 <mtreinish> do we have a specific one for new tests
17:29:19 <sdague> mtreinish: I think our decision was new tests could backport if people wanted to do the work
17:29:19 <dkranz> mtreinish: I think some folks will be maintaining havana for a while
17:29:28 <giulivo> so btw, about cutting the line: we don't have strict rules about what we can or can't add in the period preceding the branch, is that correct?
17:29:30 <sdague> though I'd say only for the last stable
17:29:37 <dkranz> The question is whether they should be encouraged to add tests upstream or down
17:29:54 <sdague> i.e. we stop landing grizzly tests once havana is out
17:30:00 <dkranz> sdague: Yes
17:30:03 <mtreinish> sdague: that's reasonable
17:30:07 <giulivo> sdague, not even cherry pick?
17:30:15 <sdague> giulivo: not 2 releases back
17:30:24 <giulivo> ah okay, missed that sorry
17:30:57 <giulivo> so cherry pick from icehouse to havana is okay
17:31:01 <sdague> yep
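(The mechanics of such a backport are the standard gerrit cherry-pick flow; a sketch, where the branch name and commit sha are placeholders:)

```sh
git checkout -b havana-backport origin/stable/havana
git cherry-pick -x <sha-from-master>   # -x records the original commit id
git review stable/havana
```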
17:31:03 <dkranz> sdague: Do you have an opinion on my question above?
17:31:17 <sdague> what's the post-release issue?
17:31:38 <mtreinish> dkranz: I think master is priority if that's what you're asking
17:31:40 <dkranz> sdague: Whether folks continuing to add tests to havana should be encouraged to do that upstream or downstream
17:31:58 <sdague> right, they need to go master first
17:32:01 <dkranz> mtreinish: Of course they would go in trunk first
17:32:10 <dkranz> but about the backporing
17:32:14 <dkranz> backporting
17:32:28 <sdague> other than that, havana backporting is fine
17:32:42 <dkranz> Do we want a lot of activity on stable/havana or not, is the question
17:33:11 <sdague> dkranz: I'm not sure I have a huge opinion on that
17:33:45 <dkranz> sdague: ok
17:34:00 <dkranz> sdague: I am concerned about the review burden for backports
17:34:05 <sdague> I'm completely ok with people doing havana backports. I think if we think it's becoming a distraction for other work, we can sort it out later
17:34:08 <dkranz> sdague: It is a tradeoff
17:34:18 <dkranz> sdague: OK, works for me.
17:34:18 <sdague> I think this time around the amount of grizzly activity was fine
17:34:35 <dkranz> sdague: Yes, but I predict havana will be more.
17:34:45 <sdague> FYI, I did close a blueprint this morning that someone put in for backporting quantum tests to folsom
17:34:46 <dkranz> sdague: But we can see.
17:34:57 <dkranz> sdague: :)
17:35:02 <sdague> which is crazy talk :)
17:35:17 <sdague> dkranz: yeh, I'd rather deal with problems when they come up
17:35:31 <sdague> ok so
17:35:42 <sdague> #agreed wait as long as possible to set stable branch
17:35:58 <sdague> next topic...
17:36:03 <sdague> #topic Design Summit Initial Planning (sdague)
17:36:17 <sdague> #link https://etherpad.openstack.org/icehouse-qa-session-planning
17:36:48 <sdague> so, this isn't going to be a top focus for 2 weeks, as we don't need a schedule till after the release
17:36:59 <sdague> however I wanted to get people thinking about design summit
17:37:06 <sdague> who all is planning on being there?
17:37:10 <sdague> o/
17:37:20 <dkranz> o/
17:37:30 * afazekas me
17:37:30 <mtreinish> o/
17:37:34 <ravikumar_hp> me
17:37:52 * giulivo isn't sure
17:38:10 <mtreinish> sdague: I think mkoderer said he was going to be there earlier (I don't think he's around now)
17:38:19 <sdague> ok
17:38:44 <sdague> so we'll eventually need things in summit.openstack.org
17:38:59 <sdague> but like last time, I think planning for a coherent track is better in etherpad
17:39:17 <sdague> so I'm starting to fill out ideas that I think we surely need to discuss
17:39:27 <dkranz> sdague: I think multi-node test runs upstream would be a good topic (again) but not sure if that is us or infra
17:39:50 <sdague> dkranz: so I think it's actually an infra talk, because it's really a nodepool discussion
17:40:00 <sdague> about whether nodepool can allocate us a set of machines
17:40:11 <dkranz> sdague: Yeah, I just want it to happen :)
17:40:27 <sdague> dkranz: that being said, we've talked about that at every summit I've ever been at
17:40:27 <mkoderer> I am waiting to get my final approval for the trip
17:40:33 <dkranz> sdague: But it also raises the issue of "install"
17:41:00 <sdague> so I'm going to be wary of taking sessions that have shown up before and gotten no progress
17:41:02 <dkranz> sdague: Which we have ignored with all-in-one only
17:41:16 <sdague> dkranz: so devstack will do multi node
17:41:19 <mtreinish> dkranz: devstack has multinode support
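(For reference, devstack's multinode support is driven by a handful of localrc variables; a sketch of the compute-node side, with placeholder addresses:)

```sh
# localrc on an extra compute node (sketch; addresses are placeholders)
HOST_IP=192.168.1.11
SERVICE_HOST=192.168.1.10
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
MULTI_HOST=1
ENABLED_SERVICES=n-cpu,n-net,n-api-meta
```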
17:41:29 <afazekas> heat + devstack  sounds cool :)
17:41:45 <sdague> yeh with heat and nodepool I feel like we have more tools this time
17:42:01 <dkranz> sdague: I think it is worth a topic either here or infra
17:42:16 <sdague> dkranz: sure, can you put something in the etherpad in proposed for it
17:42:22 <dkranz> sdague: The chance of success is higher now
17:42:25 <sdague> yep
17:42:27 <dkranz> sdague: Yes
17:42:42 <sdague> we've got 13 sessions this time, and we won't overlap with infra
17:42:46 <sdague> we'll be in the same room
17:42:51 <sdague> so that's good
17:43:08 <sdague> personally, my top priority in HK is a solid neutron plan
17:43:15 <ravikumar_hp> sdague: how many QA sessions max?
17:43:22 <mlavalle> sdague: +1
17:43:27 <sdague> and markmcclain1 and I have been talking about that already
17:43:38 <sdague> that may actually be in the neutron track
17:43:40 <dkranz> ravikumar_hp: 13
17:44:20 <markmcclain1> I'd like to be in Neutron if everyone can attend
17:44:42 <sdague> markmcclain1: yep, I think that's a good call. I was actually going to look at schedule blocks later
17:44:58 <mkoderer> I would like to speak about the stress tests.. but it's still not certain whether the trip will be approved
17:45:14 <sdague> I think that's an important enough session we should do it in neutron when there is a QA slot and just drive all the QA folks into the neutron room
17:45:36 <sdague> mkoderer: you have an idea when you'll know about approval?
17:45:45 <sdague> for now I'll save you a slot, because I think that's important
17:45:46 <dkranz> sdague: So "dual session"?
17:45:53 <sdague> dkranz: basically
17:45:57 <mkoderer> sdague: hopefully next week
17:46:01 <sdague> that will ensure we aren't conflicting
17:46:10 <mkoderer> sdague: ok sounds great
17:46:10 <mlavalle> sdague: I won't go to HK but I want to be part of the Neutron plan
17:46:20 <dkranz> sdague: Good idea
17:46:36 <sdague> mlavalle: ok, no problem, we'll make sure to get etherpad notes up in advance
17:47:06 <sdague> reminder to everyone proposing things, we want an etherpad URL to link to with outline of the discussion
17:47:25 <mtreinish> sdague: how soon?
17:47:25 <sdague> I'm not going to give final approval to anything that doesn't have a good etherpad with agenda, as we want to make the most of our time
17:47:32 <sdague> mtreinish: 2 weeks
17:47:36 <mtreinish> ok
17:47:38 <dkranz> sdague: Let's put the urls in a canonical place
17:47:54 <sdague> I'll start evaluating on release day
17:47:57 <sdague> Oct 17
17:47:59 <dkranz> sdague: Last time around I created them all when populating the schedule
17:48:24 <sdague> dkranz: ok, well how about getting folks to link them in https://etherpad.openstack.org/icehouse-qa-session-planning
17:48:34 <sdague> I'll build a template on the train tomorrow
17:48:49 <dkranz> sdague: sounds good
17:48:49 <sdague> I'm down in NYC for most of the day, so I have a train ride to do stuff like that
17:49:05 <sdague> we can revisit next week as well
17:49:39 <sdague> ok, any other design summit questions / conversations?
17:50:41 <sdague> ok...
17:50:46 <sdague> #topic Critical Reviews
17:51:00 <sdague> any critical reviews that need to get eyes on?
17:51:43 <giulivo> I've one, not critical per se, but it doesn't pass jenkins, failing because of devstack errors on the neutron VM (this is on stable/grizzly)
17:52:05 <sdague> giulivo: right, I wonder how those regressed
17:52:14 <sdague> because we had stable/grizzly working again
17:52:24 <sdague> do we know when it started failing?
17:52:36 <sdague> and what could have caused it?
17:52:40 <psedlak> it's the https://bugs.launchpad.net/neutron/+bug/1233264 right?
17:52:41 <uvirtbot> Launchpad bug 1233264 in python-neutronclient "stable branch patches failing in check queue due to missing 'find_resourceid_by_name_or_id'" [High,Fix released]
17:52:47 <mtreinish> sdague: on the stable branch note there is this: https://review.openstack.org/#/c/47337/ with your -2
17:53:06 <giulivo> sdague, mine was pushed on Oct 3, 2013 10:53 and never got the +1
17:53:13 <psedlak> sorry, the https://code.launchpad.net/bugs/1234181
17:53:15 <giulivo> (well, not from neutron)
17:53:16 <uvirtbot> Launchpad bug 1234181 in neutron "stable/grizzly patches are failing jenkins in check-tempest-devstack-vm-neutron" [Undecided,New]
17:53:23 <sdague> mtreinish: ok, I'll put that back in the check queue
17:54:02 <giulivo> sdague, are there people who could work on it that I can refer to for help?
17:54:14 <giulivo> #openstack-infra?
17:54:44 <sdague> giulivo: honestly the #openstack-neutron channel might be better
17:55:05 <giulivo> got it, thanks :)
17:55:21 <dkranz> Has this particular devstack exit happened more than once?
17:55:33 <psedlak> dkranz: yes
17:55:39 <giulivo> dkranz, yeah it seems to be blocking actually
17:55:49 <sdague> giulivo: so something I experienced is jenkins kills the script late
17:55:55 <sdague> so the actual fail might be 100 lines up
17:56:01 <psedlak> dkranz: it's already on rechecks ... Affecting changes: 48300, 49485, 49487, 49490, 49491
17:56:03 <sdague> you have a link to a fail log?
17:56:06 <giulivo> sdague, I see, thanks
17:56:11 <giulivo> sure
17:56:18 <giulivo> http://logs.openstack.org/91/49491/2/check/check-tempest-devstack-vm-neutron/1fbbb16/
17:56:23 <sdague> giulivo: actually, lets take that to -qa after the meeting
17:56:28 <sdague> we have 4 mins left
17:56:33 <sdague> anything else from folks?
17:56:56 <mkoderer> stress tests are sometimes failing in the nightly
17:57:18 <mkoderer> I will enhance the logging to get some more information about it
17:57:46 <sdague> mkoderer: great!
17:58:03 <sdague> ok, I have to run to another meeting. so thanks everyone for coming
17:58:10 <sdague> we'll talk more in -qa
17:58:12 <mlavalle> ciao
17:58:13 <sdague> #endmeeting