09:00:09 <gmann> #startmeeting qa
09:00:10 <openstack> Meeting started Thu Apr 20 09:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is gmann. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:11 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:13 <openstack> The meeting name has been set to 'qa'
09:00:18 <gmann> who all here today
09:00:22 <jordanP> o/
09:00:25 <andreaf> o/
09:00:38 <martinkopec> o/
09:01:08 <chandankumar> \o
09:01:14 <gmann> #link https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_April_20th_2017_.280900_UTC.29
09:01:16 <gmann> ^^ today agenda
09:01:34 <gmann> #topic Previous Meeting Action review
09:01:46 <gmann> seems like no open action item from last meeting
09:01:57 <gmann> #topic The Forum, Boston
09:02:13 <gmann> we have 1 session approved
09:02:18 <gmann> #link http://forumtopics.openstack.org/openid/login/?next=/cfp/details/111
09:03:06 <andreaf> gmann: yeah I'm curious to see how that one turns out
09:03:19 <gmann> andreaf: was that decided by the TC?
09:03:31 <jordanP> I've just read the proposal, lgtm
09:03:34 <andreaf> there was a sessions review committee
09:03:39 <jordanP> I know Tempest is used in production at several places
09:03:44 <gmann> other seems rejected #link http://forumtopics.openstack.org/openid/login/?next=/cfp/details/112  http://forumtopics.openstack.org/openid/login/?next=/cfp/details/113
09:03:44 <andreaf> I don't think it was the TC
09:03:50 <gmann> ohk
09:04:23 <gmann> andreaf: whats the plan for that?
09:04:36 <andreaf> for the rejected ones?
09:04:46 <gmann> no, the approved one
09:05:07 <gmann> andreaf: is that the 40 min one or the 15 min one?
* gmann still gets confused with Forum and onboarding sessions
09:05:39 <andreaf> I think it's a 40min one
09:05:45 <gmann> ok
09:06:35 <chandankumar> gmann: andreaf what about hosting a hangout session or recorded session for the rejected forum topics after the summit and put it on youtube. just a thought
09:06:36 <andreaf> I plan to leave it really open ended - if no-one has feedback we can use it to answer questions
09:07:19 <gmann> andreaf: i see, any presentation etc from our side?
09:08:00 <andreaf> gmann: I think this is more like a design session in the old summit - I will set up an etherpad with some details about our projects but that's about it
09:08:03 <gmann> chandankumar: we can see how much space/time we have in summit but not sure about recording
09:08:15 <gmann> andreaf: i see. cool
09:08:18 <andreaf> chandankumar: heh I was thinking about starting in the ML
09:08:47 <andreaf> chandankumar: if there is enough interest we can setup some kind of meeting, IRC or hangout or whatever works best
09:09:12 <chandankumar> andreaf: that will be good.
09:09:28 <gmann> #action andreaf to setup etherpad for Forum session
09:09:31 <gmann> andreaf: ^^
09:09:32 <andreaf> chandankumar: the main idea behind this cross-session is to have interactive discussion with folks so a recorded session might not be that effective
09:09:53 <gmann> Onboarding session: #link https://etherpad.openstack.org/p/BOS-QA-onboarding
09:09:55 <chandankumar> andreaf: then ML would be a better idea
09:10:03 <andreaf> chandankumar: if there is enough to discuss when we get to the next PTG we will re-submit them there
09:10:11 <chandankumar> +1
09:10:15 <gmann> feel free to put ideas on Onboarding etherpad
09:10:28 <gmann> andreaf: chandankumar yea. that will be nice
09:11:09 <gmann> i am hoping to have at least half a day for code sprint in summit or at least QA area with feature discussions
09:11:27 <gmann> including new developers
09:11:50 <andreaf> gmann: heh yeah I think there will be some space for unconference type of meetings
09:12:15 <gmann> yea.
09:12:34 <gmann> i have my presentation on Monday so free for next 3 days :)
09:13:02 <gmann> till my boss does not bother me :)
09:13:21 <gmann> #topic Gate Stability - status update
09:13:30 <gmann> something went up in the check queue
09:13:32 <gmann> #link https://goo.gl/ptPgEw
09:14:03 <gmann> this one #link http://status.openstack.org/elastic-recheck/#1355573
09:14:19 <gmann> but i did not get a chance to look into the details
09:14:29 <gmann> andreaf: jordanP any updates from your side
09:14:30 <jordanP> first I think we can say that most of the libvirt issues are gone
09:14:43 <jordanP> thanks to the usage of UCA packages
09:14:55 <jordanP> so the situation is much much better now
09:14:56 <gmann> \o/
09:15:03 <andreaf> \o/
09:15:22 <chandankumar> yay!!
09:15:33 <gmann> ceph job is failing due to a premature merge of an infra patch
09:15:43 <gmann> i am reverting that #link https://review.openstack.org/#/c/458349/
09:15:56 <gmann> in case anyone is curious about the failure.
09:16:02 <gmann> i replied on ML also
09:16:04 <jordanP> we also have a couple of periodic jobs with 100% failure, but it's not a big deal to fix them
09:16:43 <gmann> yea the 'all' job has been broken for a long time and nobody noticed
09:17:02 <gmann> any suggestion to track those? track status in weekly meeting?
09:17:30 <jordanP> periodic jobs are not top priority, not sure it's worth talking about them in meetings
09:17:39 <jordanP> just have a look at them from time to time
09:17:40 <gmann> at least we will have eyes on periodic job status
09:17:42 <jordanP> yeah
09:18:10 <gmann> maybe just status, not any further discussion which takes time in meeting
09:19:01 <jordanP> ok
09:19:03 <gmann> maybe under gate stability we can check. and in agenda we can have a link to have a glance
09:19:07 <gmann> #topic Specs Reviews
09:19:10 <andreaf> jordanP, gmann: we might want to have something like a periodic job liaison or a rotation like for bug triage - I'll think about it
09:19:34 <gmann> andreaf: how about merging that in bug triage
09:19:46 <andreaf> gmann: yeah we could do that
09:19:57 <gmann> and same people can provide status instead of tracking separately
09:20:07 <gmann> ok
09:20:27 <andreaf> gmann: ok can you add that on to the bug etherpad?
09:20:34 <gmann> sure
09:20:52 <gmann> #action gmann to add periodic job status tracking on bug triage etherpad
09:21:00 <gmann> on specs
09:21:03 <gmann> #link https://review.openstack.org/#/q/status:open+project:openstack/qa-specs,n,z
09:21:24 * andreaf forgot the power adapter and will run out of battery at some point...
09:21:36 <gmann> we have HA testing spec updated from samP
09:21:40 <gmann> #link https://review.openstack.org/#/c/443504/
09:21:51 <gmann> i doubt i can have a look into it before summit
09:22:09 <gmann> if anyone else has time feel free to provide feedback
09:22:27 <andreaf> ok will do I didn't manage this time
09:23:08 <gmann> andreaf: thanks
09:23:19 <gmann> anything else on specs?
09:23:23 <andreaf> btw, the upgrade testing tools spec was abandoned - I assume the entire osic crew was affected by Intel pulling the plug on osic
09:23:38 <gmann> #link https://review.openstack.org/#/c/449295/
09:23:54 <andreaf> I didn't manage to talk to castulo yet though
09:23:58 <gmann> andreaf: yea, that was not good for upstream
09:24:27 <gmann> OSIC folks were making great contributions on the nova side too
09:25:15 <gmann> #topic Tempest
09:25:27 <gmann> #link https://review.openstack.org/#/q/project:openstack/tempest+status:open
09:25:55 <gmann> open review ^^
09:26:33 <gmann> 1 thing we should merge first is cinder API version things
09:26:55 <gmann> i saw 1 patch doing cinder v3 tests and adding a duplicate client for v3 also
09:27:28 <gmann> i hope we will go with a non-versioned client but it's not clear that basing it on catalog_type is the best approach
09:27:55 <gmann> oomichi might have some thoughts about those
09:29:08 <andreaf> gmann: ok I need to check back on those
09:29:15 <gmann> mainly we need to merge the v2 and v3 clients in a single place and continue on v3 testing without adding a duplicate client for v3
09:29:24 <andreaf> ideally the catalog should be unversioned and a version list can be retrieved and cached or so (but it must be project independent)
09:29:36 <gmann> andreaf: yes,
09:29:42 <andreaf> I'm not quite sure what the status is on the catalog work
09:30:07 <gmann> currently those are versioned and devstack registers different endpoints for v2 and v3 under different catalog_type
09:30:29 <andreaf> yeah in devstack - that is not very nice
09:31:17 <andreaf> from a Tempest side we still replace the version in the URL - an alternative could be to pull the list of versions and cache it somewhere to avoid the extra round trip every time
09:31:21 <gmann> #link https://github.com/openstack-dev/devstack/blob/master/lib/cinder#L383-L398
09:31:41 <andreaf> but for now I think rewriting the version in the URL is still the best bet
09:31:53 <andreaf> I haven't heard of anyone having issues with that
09:32:14 <gmann> andreaf: and version things will be fetched from test case or clients?
09:32:56 <gmann> andreaf: if from client then we have to have separate client classes with the version overridden, which i do not like actually
09:33:03 <andreaf> gmann: if we wanted to avoid rewriting the version in the URL we could have a singleton which fetches the versioned URLs the first time
09:33:22 <gmann> like this - https://review.openstack.org/#/c/442691/18/tempest/lib/services/volume/v3/volumes_client.py
09:34:26 <gmann> andreaf: you mean, depending on which version people/jobs want to run tests, the url can be loaded at startup?
09:34:30 <andreaf> gmann: yeah basically we can continue to assume that the URL contains v2 or v3 and just append that to the base URL like we do today
09:35:02 <gmann> andreaf: but that needs a dummy client class for each version, like - https://review.openstack.org/#/c/442691/18/tempest/lib/services/volume/v3/volumes_client.py
09:36:02 <andreaf> gmann: I'm just talking about how we obtain the version specific endpoint
09:36:23 <gmann> ok.
09:36:27 <gmann> anyways let's discuss that separately. we know where the problem is and we can find the best solution
09:36:38 <andreaf> gmann: the proper way would be to get it from the catalog - but I think it's fine if we continue to just build it by hand like today
09:36:51 <andreaf> gmann: yeah let's move on
09:37:02 <gmann> yea
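The "rewrite the version in the URL" approach and the singleton cache idea from the discussion above can be sketched roughly as follows. This is a hypothetical illustration, not actual Tempest code: `rewrite_version` and `VersionedEndpointCache` are invented names, and it assumes the catalog endpoint ends in a `/v2`- or `/v3`-style segment.

```python
import re


def rewrite_version(endpoint_url, target_version):
    # Strip a trailing version segment such as /v2 or /v3.0, then append
    # the requested version -- the "rewrite the version in the URL"
    # approach discussed above.
    base = re.sub(r'/v\d+(\.\d+)?/?$', '', endpoint_url.rstrip('/'))
    return '%s/%s' % (base, target_version)


class VersionedEndpointCache(object):
    # Singleton-style cache: derive the versioned URL once and reuse it,
    # instead of rewriting (or fetching the version list) on every request.
    _cache = {}

    @classmethod
    def get(cls, base_url, version):
        key = (base_url, version)
        if key not in cls._cache:
            cls._cache[key] = rewrite_version(base_url, version)
        return cls._cache[key]


# e.g. a v2 volume endpoint rewritten to v3
print(rewrite_version('http://192.0.2.1/volume/v2', 'v3'))
# http://192.0.2.1/volume/v3
```

The proper long-term fix mentioned in the discussion would be to read an unversioned endpoint from the catalog and discover versions via the version list API; the sketch only shows the interim hand-built approach.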
09:37:03 <gmann> next is Bug Triage:
09:37:15 <gmann> #link https://etherpad.openstack.org/p/pike-qa-bug-triage
09:37:31 <gmann> it was mkopec's turn this week
09:37:41 <martinkopec> yes, I've confirmed two bugs
09:37:50 <martinkopec> and created a report in the etherpad
09:37:54 <gmann> thanks.
09:37:57 <gmann> #link https://etherpad.openstack.org/p/tempest-weekly-bug-report
09:38:44 <gmann> martinkopec: anything urgent we need to check on any bug, like critical one etc
09:38:48 <andreaf> wow 8 high importance and 145 open!
09:39:20 <martinkopec> gmann, I think it would be good to finish some bugs
09:39:24 <andreaf> martinkopec: thank you for doing the bug triage! :)
09:39:27 <martinkopec> because the number is quite huge
09:39:36 <andreaf> martinkopec: +1
09:39:36 <martinkopec> *the number of open bugs
09:39:47 <gmann> martinkopec: yea. thanks a lot.
09:39:56 <gmann> open bugs we should kill
09:40:12 <andreaf> martinkopec: did you look in new bugs only or old ones as well?
09:40:31 <martinkopec> andreaf, I looked at new bugs mainly
09:40:39 <andreaf> martinkopec: ok
09:41:01 <gmann> next turn is oomichi
09:41:17 <andreaf> we might want to do a bug smash at some point because the backlog is growing too much now
09:41:39 <gmann> +1
09:41:47 <andreaf> and I'm not really sure the data is valid anymore - e.g. how can we have 8 high importance issues which are not addressed?
09:42:15 <gmann> #link https://bugs.launchpad.net/tempest/+bugs?search=Search&field.importance=High&field.status=New&field.status=Incomplete&field.status=Confirmed&field.status=Triaged&field.status=In+Progress&field.status=Fix+Committed
09:42:24 <gmann> ^^ high importance one
09:42:36 <gmann> few of them are very old
09:43:15 <andreaf> for instance, https://bugs.launchpad.net/tempest/+bug/1609156, I forgot to mark it as resolved :P
09:43:17 <openstack> Launchpad bug 1609156 in tempest "Test accounts periodic job is broken" [High,Fix committed] - Assigned to Andrea Frittoli (andrea-frittoli)
09:43:38 <gmann> heh. \o/ 1 less now :)
09:44:03 <gmann> some are pending on patches side like #link https://review.openstack.org/#/c/392464/
09:44:19 <gmann> let's move next
09:44:38 <gmann> #topic patrole
09:45:12 <gmann> patrole team needs suggestions on the patrole release model
09:45:19 <gmann> there is ML #link http://lists.openstack.org/pipermail/openstack-dev/2017-April/115624.html
09:45:30 <andreaf> is anyone around from Patrole?
09:45:52 <andreaf> gmann: yes thanks for replying to that
09:45:56 <gmann> i think they are online at 17 UTC
09:46:20 <gmann> IMO it should be branchless and released the same way as Tempest
09:46:31 <andreaf> gmann: I guess patrole should run the release job eventually
09:46:36 <gmann> but more feedback and suggestions are welcome
09:46:46 <gmann> +1
09:47:11 <andreaf> gmann: yeah branchless is to ensure consistency across releases, which applies to policy testing as well
09:47:21 <gmann> yea.
09:47:50 <andreaf> gmann: one thing I'm really worried about with patrole though is that it is all done by folks working from the same company
09:48:10 <andreaf> gmann: and the rest of us really have little involvement / insight into it
09:48:24 <gmann> the only burden might be feature flags, but policy things change very rarely in those terms. at least i can tell from nova perspective
09:48:29 <gmann> andreaf: yea thats true
09:48:39 <andreaf> gmann: we need to change that before we start investing more in Patrole I think
09:48:58 <andreaf> but that's just my feeling
09:49:08 <gmann> andreaf: i occasionally review there but we should start putting review bandwidth there
09:49:35 <andreaf> well I would like for the Patrole folks to actively seek for new non-at&t contributors :)
09:49:43 <andreaf> I will raise the point at the next meeting
09:49:57 <andreaf> honestly I won't have much bw to review patrole code myself
09:50:11 <gmann> true, contributors from different companies are key in upstream
09:50:44 <gmann> 10 min left let's move next
09:50:47 <gmann> #topic DevStack
09:51:07 <andreaf> jordanP's patch about swift services was merged :)
09:51:14 <jordanP> yeah, that's good
09:51:16 <gmann> nice.
09:51:30 <andreaf> jordanP: also sdague did a lot of work around uwsgi
09:51:40 <gmann> yea
09:51:41 <jordanP> I think we are back to a normal memory consumption, but we should always be aware of it
09:51:53 <jordanP> I am not sure what the impact of uwsgi will be
09:52:01 <jordanP> in terms of memory
09:52:41 <jordanP> also, given that I feel we should stick to serial scenario runs and concurrency = 2 for api, we also should be very aware of the overall run time of our jobs
09:53:12 <jordanP> 1h20 is a lot already. And it's super hard to delete tests, so when new tests are added, we should make super sure it's not a duplicate test
09:53:40 <gmann> yes, +1 for serial on scenario
09:54:04 <andreaf> jordanP: yeah well it depends a lot on the test node, it can be below 1h in the best case
09:54:16 <andreaf> but I agree 1h 20' is a lot
09:54:45 <jordanP> for instance https://review.openstack.org/#/c/458092/, it's all over 1 hour
09:54:53 <andreaf> jordanP: if we manage to make gate-tempest-dsvm-neutron-scenario-multinode-ubuntu-xenial-nv voting we could move all scenario tests there
09:55:05 <jordanP> 1h05 is the fastest job
09:55:32 <jordanP> yeah
09:55:48 <andreaf> jordanP: but I'm not sure having two jobs is really ideal either
09:55:53 <jordanP> but that multinode job is already slow, so not sure it will help
09:56:01 <jordanP> no, we have already too many jobs :)
09:56:16 <gmann> we should merge both
09:56:31 <gmann> will it be too slow?
09:56:44 <andreaf> gmann: well we just separated them I don't think we should merge them back
09:57:17 <gmann> not the current scenario one. i mean multinode and scenario multinode
09:57:24 <andreaf> gmann, jordanP: if we increased concurrency to two in gate-tempest-dsvm-neutron-scenario-multinode-ubuntu-xenial-nv, or even better if we removed some unnecessary scenario tests, the job would be much faster
09:57:55 <gmann> if there is coverage somewhere else then we can remove otherwise we should not i think
09:58:08 <jordanP> andreaf, +1. We should skip some scenarios
09:58:12 <gmann> we should get rid of slow API tests if somehow we can
09:58:23 <gmann> 2 min left
09:58:25 <gmann> let's move
09:58:31 <jordanP> gmann, it's hard, slow api tests make sense,
09:58:37 <gmann> i will skip grenade and o-h
09:58:43 <jordanP> we can merge some API tests to not pay the setup cost twice
09:58:47 <jordanP> but this has some side effects
09:58:59 <gmann> jordanP: humm
09:59:05 <gmann> #topic Destructive Testing
09:59:10 <gmann> we already talked about it
09:59:33 <gmann> @topic open discussion
09:59:37 <andreaf> jordanP: perhaps we should have a review of our jobs with some proposals and discuss about it in the next meeting
09:59:40 <gmann> anything on open
09:59:47 <andreaf> jordanP: can you setup an etherpad for that?
09:59:50 <gmann> #topic open discussion
09:59:51 <jordanP> andreaf, yes !
09:59:58 <jordanP> will do
10:00:02 <andreaf> gmann: Tempest 16.0.0 is out
10:00:04 <gmann> jordanP: thanks
10:00:05 <andreaf> jordanP: thanks
10:00:07 <gmann> #endmeeting