21:03:56 <ttx> #startmeeting project
21:03:57 <openstack> Meeting started Tue Jun 10 21:03:56 2014 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:04:00 <openstack> The meeting name has been set to 'project'
21:04:04 <ttx> Agenda for today is available at:
21:04:07 <ttx> #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:04:13 <eglynn> o/
21:04:24 <ttx> #topic News from the 1:1 sync points
21:04:39 <ttx> #link http://eavesdrop.openstack.org/meetings/ptl_sync/2014/ptl_sync.2014-06-10-08.41.html
21:05:02 <ttx> #topic Other program news
21:05:08 <ttx> Infra, QA, Docs... anything you'd like to mention ?
21:05:16 <ttx> mtreinish, fungi ?
21:05:26 <ttx> annegentle (if still around) ?
21:05:37 <fungi> ttx: nothing new from infra. be kind, test your patches ;)
21:05:58 <ttx> fungi: the gate fire seems to be mostly under control now ?
21:06:02 <fungi> the gate's not nearly as far behind as last week
21:06:07 <fungi> right
21:06:17 <mtreinish> ttx: nothing from me today, was heads down trying to debug the gate last week
21:06:21 <fungi> looking at ~3-4 hour delays in approval to merge at the moment
21:06:22 <ttx> Still a lot to process for juno-1
21:06:35 <ttx> which brings us to the next topic
21:06:42 <dhellmann> 3-4 hours is pretty good for a weekday
21:06:45 <ttx> #topic Juno-1 tagging
21:06:52 <ttx> #info juno-1 needs to be tagged between now and Thursday
21:07:10 <eglynn> are we going to explicitly wait for gate drainage?
21:07:13 <ttx> By the end of the day the j-1 plans should reflect what is already approved and in-flight
21:07:48 <ttx> eglynn: hmm, not sure I understand your question
21:07:56 <ttx> We should generally shoot for tagging tomorrow, rather than stretch it to Thursday
21:07:57 <eglynn> ttx: ... I mean, wait for the verification queue of approved patches to drain
21:08:17 <mikal> ttx: I was intending to produce a git sha interactively with you tonight my time, tomorrow morning your time. Is that ok with you?
21:08:21 <eglynn> ttx: ... or just go with whatever has landed by EoD tmrw
21:08:56 <ttx> eglynn: ideally we should tag when what was approved today gets merged
21:08:57 <eglynn> just mainly wondering about gate-lag
21:09:16 <eglynn> ttx: k, that sounds reasonable
21:09:20 <ttx> but we can also have exceptions, if like a blueprint completion is a few hours away in the queue
21:09:55 <ttx> so when you have everything you want in juno-1, just let me know a SHA I can use for the juno-1 tag
21:09:59 <ttx> and i'll make it happen
21:10:18 <ttx> It's fine if stuff is deferred, juno-1 is just a point in time
21:10:33 <ttx> it should reflect what we managed to squeeze in by that date
21:10:44 <ttx> Questions on juno-1 ?
21:10:53 <SergeyLukjanov> ttx, will we have proposed branches this time?
21:11:08 <ttx> mikal: I have to run an errand tomorrow morning
21:11:19 <ttx> SergeyLukjanov: no
21:11:24 <mikal> ttx: ok, I can just send you an email then
21:11:24 <SergeyLukjanov> ttx, ok
21:11:26 <ttx> SergeyLukjanov: we tag directly on master, new process
21:12:01 <ttx> mikal: you can also say "tag when change N merges"
21:12:14 <mikal> ttx: ok
21:12:20 <fungi> ttx: give me or someone in infra a heads up before you push the first tag on master just so we can be on top of triage if it goes bad
21:12:34 <ttx> as long as you tell me what to do if it gets booted out of the gate queue (block or release)
21:12:39 <fungi> though in reality i can't think of reasons it should
21:12:55 <ttx> fungi: ok, will wait until you're up
21:13:20 <ttx> I have a tag for Glance already
21:13:31 <ttx> err, a SHA for glance
21:13:45 <markwash> go glance!
21:13:50 <ttx> fungi: we could try to push the tag for it just after meeting ?
21:13:57 * markwash contributes
21:14:04 <fungi> ttx: sure
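For reference, a minimal sketch of what tagging a milestone directly on master at an agreed SHA could look like; the tag name (2014.2.b1), the "gerrit" remote, and the use of a signed tag are assumptions for illustration, not the documented release procedure:

    # Hypothetical sketch of tagging juno-1 directly on master at an agreed SHA.
    # Tag name, remote name and signing are assumptions, not the official process.
    import subprocess

    def tag_milestone(sha, tag="2014.2.b1", remote="gerrit"):
        """Create a signed, annotated tag at the given SHA and push it."""
        subprocess.check_call(["git", "fetch", "origin"])
        subprocess.check_call(["git", "tag", "-s", "-m", "juno-1 milestone", tag, sha])
        # Pushing the tag is the point of no return; infra jobs (tarballs,
        # announcements) are typically triggered off the new tag.
        subprocess.check_call(["git", "push", remote, tag])

    if __name__ == "__main__":
        tag_milestone("0123456789abcdef0123456789abcdef01234567")  # placeholder SHA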
21:14:06 <ttx> Other questions ?
21:14:24 <ttx> #topic Enable Autokick BP adjustment script
21:14:25 * dolphm is envious of real PTLs like markwash
21:14:34 <ttx> I finalized a new version of the script that adjusts milestone and series goal in Launchpad
21:14:45 <ttx> For projects using -specs it will also automatically clear the "milestone target" field if it's set on an unprioritized blueprint
21:14:56 <ttx> The goal being to use the milestone target as a blessed list
21:15:05 <ttx> rather than duplicate the entry point for people wanting to submit features (spec in Gerrit + BP in Launchpad)
21:15:20 <ttx> I've run it for most projects during the 1:1 sync today, and plan to run it on cron every 2-3 hours starting tomorrow
21:15:25 <ttx> Any final objection ?
21:15:52 <ttx> notmyname: I did not enable the script for swift, since you don't have a juno milestone yet
21:15:55 <zaneb> ttx: consider putting "this is an automated message" in the comment it leaves
21:15:58 <mestery> None here, thanks for setting this up ttx!
21:16:05 <dolphm> mestery: ++
21:16:10 <notmyname> ttx: ack
21:16:20 <ttx> zaneb: I just like to get the angry people complaining about it
21:16:23 <fungi> sounds pretty awesome
21:16:30 <ttx> #agreed autokick script to run under cron from now on
21:16:39 <ttx> NB: I promised a script that would facilitate creating a BP to track an accepted spec, still working on it
21:16:50 <ttx> It's delayed due to launchpad not supporting creating BPs from the API, so I have to play tricks around it
21:16:51 <zaneb> ttx: I'm more worried about people who think they have brought your wrath down on themselves ;)
21:16:58 <ttx> Stay tuned -- in the mean time just manually file them, like you always did :)
21:17:09 <eglynn> is the wiki updated to describe the user visible aspects of the new process?
21:17:29 <ttx> #action ttx to add "this is an automated message" to the whiteboard warning
21:17:46 <eglynn> ... i.e. that the filing of a BP in LP will now be a drivers/PTL action *after* the corresponding spec has landed
21:17:47 <ttx> eglynn: unfortunately, not. It's on my TODO list, hope to do it tomorrow
21:17:55 <eglynn> ttx: cool, thanks!
21:18:39 <ttx> #topic Sahara-dashboard to Horizon merge progress
21:18:48 <ttx> SergeyLukjanov: floor is yours
21:18:54 <SergeyLukjanov> ttx, thanks
21:19:25 <SergeyLukjanov> so, the current progress is close to zero
21:19:32 <SergeyLukjanov> almost all patches are under review
21:19:53 <SergeyLukjanov> but there is a lack of reviews to land them all in juno
21:19:59 <eglynn> SergeyLukjanov: are the patches getting reviewer traction?
21:20:06 <eglynn> k
21:20:09 <SergeyLukjanov> I presume crobertsrh is here, he is doing this work from sahara side
21:20:27 <crobertsrh> Yes, I am here.
21:20:35 <ttx> mrunge is here for Horizon
21:20:38 <eglynn> I think david-lyle is on vacation
21:20:43 <SergeyLukjanov> crobertsrh, cool, any details?
21:20:43 <mrunge> yes, correct
21:20:50 <crobertsrh> Reviewer traction has increased a bit today, but we haven't seen much action from the cores yet.
21:21:20 <mrunge> it seems like most Horizon cores are quite busy with other stuff
21:21:40 <mrunge> and we have an insane queue of patches to review in horizon
21:21:58 <mrunge> that's the sad status
21:22:24 <SergeyLukjanov> heh, I understand
21:22:39 * fungi thinks we all know what that's like
21:23:01 <SergeyLukjanov> we agreed at the design summit that if we're not able to merge sahara-dashboard into horizon in the j-2 timeframe
21:23:04 * jgriffith sighhs
21:23:17 <SergeyLukjanov> then we'll need to roll back to the separate repo for the juno release
21:24:07 <SergeyLukjanov> mrunge, do you have any suggestions for improving the review process?
21:24:16 <lcheng> crobertsrh: is it possible to have a recorded demo of sahara to help reviewers visualize how it should look (I'm not sure how easy it is to set up sahara in devstack)
21:24:34 <SergeyLukjanov> docs for running / testing horizon with sahara were requested initially and AFAIK crobertsrh shared them
21:24:43 <crobertsrh> lcheng:  There are a couple of videos on youtube.  I will try to find them for you.
21:25:23 <SergeyLukjanov> lcheng, I have a screencast recorded for icehouse release somewhere, I'll find it and share
21:25:25 <lcheng> crobertsrh: awesome, just add the links on the BP
21:25:47 <lcheng> SergeyLukjanov: thanks!
21:26:09 <SergeyLukjanov> ttx, are you ok with rollback plan?
21:26:41 <SergeyLukjanov> mrunge, thanks for +2ing sahara patch ;)
21:27:07 <crobertsrh> mrunge, that made my day :)
21:27:18 <mrunge> SergeyLukjanov, crobertsrh that's just one +2
21:27:27 <ttx> I think it's very important to expose the integrated functionality in horizon
21:27:29 <mrunge> and you just dropped off 8k lines of code
21:28:00 <crobertsrh> I'm easy to please
21:28:01 <ttx> it's a major new feature of Juno Horizon
21:28:01 <eglynn> are those 8 KLOCs nicely split up?
21:28:25 <crobertsrh> 9 or 10 patch sets.  Generally 1 new panel per patch
21:28:26 <eglynn> ... or just a few monolithic patches?
21:28:31 <eglynn> k
21:28:32 <ttx> if it's not landed by j-2 then yes, probably better to ship it separately
21:28:47 <fungi> ~1000 loc per patch is still pretty huge
21:28:51 <ttx> but we should try our best to have it in by then
21:28:58 <mrunge> I still think, we should be able to merge
21:29:17 <mrunge> given that reviewers do their duties
21:29:25 <SergeyLukjanov> mrunge, ttx, yup, I still think that we'll be able to do it too
21:29:32 <ttx> SergeyLukjanov, mrunge: we should watch progress there from time to time, don't hesitate to raise it again at future meetings
21:29:37 <eglynn> ... so is the train of patches to be gated on a -2'd sentinel patch?
21:29:55 <eglynn> ... so that either all or none land by juno-2?
21:29:56 <mrunge> rollback?
21:30:07 <eglynn> (... in the style of the recent swift approach)
21:30:17 <mrunge> there were a bunch of minors until now
21:30:32 * zaneb disappears
21:30:36 <mikal> You could try the approach we're planning for the nova ironic driver
21:30:37 <ttx> eglynn: I think the current thinking is to roll them back. They are guarded by a feature switch IIUC
21:30:47 <mikal> Build a chain of patches, with the first one having a -2
21:30:56 <eglynn> http://openstack.10931.n7.nabble.com/Swift-storage-policies-merge-plan-td41512.html
21:30:56 <SergeyLukjanov> ttx, ack, I'm monitoring the progress
21:30:59 <mikal> Once all the rest of the chain is approved, remove that first -2
21:31:02 <mikal> They merge as a block
21:31:14 <eglynn> ^^^ that's the swift approach I mentioned
21:31:18 <mikal> Oh nice
21:31:27 <mikal> I thought we'd invented it
21:31:28 <mikal> :P
21:31:37 <fungi> yeah, infra has done that a few times in the past for similar situations as well
21:31:46 <fungi> i think it's convergent evolution
21:32:02 <eglynn> "convergent evolution" ... /me likee :)
21:32:03 <SergeyLukjanov> mrunge, could horizon split affect sahara merge to horizon?
21:32:21 <mrunge> SergeyLukjanov, in theory: no
21:32:23 <notmyname> we're actually going to try to end up with one merge commit to gate. to avoid weeks-to-months in the gate to land the chain
21:32:56 <eglynn> seems like a neat idea, whoever invented it
21:33:42 <fungi> yeah, it's worth noting we've seen gerrit's jgit behave badly on dependent patch series >~4-5 changes. it will just arbitrarily claim it ran into merge conflicts. i think there's an upstream bug reported but not sure of the status
21:33:47 <mrunge> SergeyLukjanov, you're just putting code to openstack_dashboard, not to horizon dir. that shouldn't be affected at all by horizon split
21:33:59 <SergeyLukjanov> mrunge, that's cool
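A hypothetical sketch of the "-2 sentinel" idea discussed above, using Gerrit's set-review REST endpoint to hold or release the first change of a dependent series; the URL, credentials and review messages are assumptions for illustration only:

    # Hypothetical sketch: hold or release a dependent series by toggling a
    # Code-Review -2 on its first change via Gerrit's REST API.
    # URL, credentials and messages are assumptions for illustration.
    import requests

    GERRIT = "https://review.openstack.org"
    AUTH = ("username", "http-password")  # assumed HTTP credentials

    def set_sentinel(change_id, hold=True):
        """Apply (hold=True) or lift (hold=False) the -2 on the sentinel change."""
        body = {
            "labels": {"Code-Review": -2 if hold else 0},
            "message": ("Holding the series until every change is approved."
                        if hold else
                        "All changes approved; releasing the series to the gate."),
        }
        r = requests.post("%s/a/changes/%s/revisions/current/review" % (GERRIT, change_id),
                          json=body, auth=AUTH)
        r.raise_for_status()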
21:34:25 <ttx> ok, anything more on that topic ? We should revisit it every 2 or 3 meetings to make sure we are all in sync
21:34:46 <ttx> SergeyLukjanov: I'll let you put it back on the agenda regularly ;)
21:35:03 <ttx> #topic Open discussion
21:35:08 <SergeyLukjanov> ttx, ok, thx
21:35:13 <mrunge> SergeyLukjanov, probably you guys should show up on the Horizon weekly meetings as well
21:35:20 <mestery> So, I wanted to discuss the "ssh timeout" issue here.
21:35:23 <mestery> https://bugs.launchpad.net/neutron/+bug/1323658
21:35:24 <uvirtbot> Launchpad bug 1323658 in nova "SSH EOFError - Public network connectivity check failed" [Undecided,New]
21:35:33 <SergeyLukjanov> mrunge, crobertsrh is attending them AFAIK
21:35:41 <mestery> I put some comments in the bug, but it looks like in the cases I've examined, the guest VM never comes back from a resize/restart.
21:35:48 <mestery> Was hoping to get some nova eyes on that bug. :)
21:35:56 <crobertsrh> Yes, I attend them
21:36:02 <ttx> mikal: ^?
21:36:03 <crobertsrh> I'll try to be more vocal
21:36:15 <mikal> What version of libvirt are we running in the gate now?
21:36:28 <sdague> mikal: just be aware that if you drop a ton of patches all at once into the gate - you do a mini DoS attack
21:36:48 <mestery> Myself, armax and salv-orlando spent a lot of time debugging that over the past week and a half to get to this point. :)
21:36:53 <mikal> mestery: I think you might need to start an openstack-dev thread about that if you haven't already
21:37:04 <mikal> mestery: I don't have an answer off the top of my head
21:37:05 <mestery> mikal: Will do, I'll start that after the meeting.
21:37:07 <sdague> oh, we should get mriedem in here
21:37:10 <mestery> mikal: OK, thanks!
21:37:21 <mikal> mestery: although we've seen some weird behaviour with snapshots in our ancient version of libvirt
21:37:24 <sdague> the sneaking suspicion at this point is it's floating ip related
21:37:28 <mikal> mestery: I wonder if this is similar
21:37:38 <mestery> mikal: I was suspecting an older version of libvirt possibly as well.
21:37:53 <sdague> and it times with when ceilometer landed a floating ip polling patch
21:38:02 <sdague> which causes tracebacks in n-api
21:38:12 <mestery> The logs all show neutron has set everything up.
21:38:24 <mestery> when I added a console dump of the guest when the failure is hit, there is no console output in the return value. :)
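A minimal sketch of the kind of console dump mestery describes, assuming a pre-built novaclient client object; this is illustrative only, not the actual tempest change:

    # Illustrative only: on an SSH connectivity check failure, fetch the guest
    # console log so an empty console (guest never came back after the
    # resize/restart) shows up in the failure output. The `nova` client and
    # `ssh_check` callable are assumed to be supplied by the caller.
    import logging

    LOG = logging.getLogger(__name__)

    def ssh_check_with_console_dump(nova, server_id, ssh_check):
        """Run the caller-supplied ssh_check(); on failure, log the console."""
        try:
            ssh_check()
        except Exception:
            server = nova.servers.get(server_id)
            console = nova.servers.get_console_output(server, length=100)
            LOG.error("SSH check failed; last guest console output:\n%s", console)
            raise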
21:38:43 <ttx> mestery: if we can't blame neutron anymore for all weird issues...
21:38:55 <mestery> ttx: :P
21:40:59 <ttx> mestery: we won't get to a solution in the meeting, so yes, raising a ML thread about it should help getting nova eyes on it. This is a major offender so hopefully it should get people's attention
21:41:07 <ttx> Anything else, anyone ?
21:41:10 <mestery> ttx: Doing it now in fact. Thanks!
21:41:30 <jgriffith> ttx: your script works wonderfully!
21:41:32 * ttx quicklooks the gate
21:41:49 <ttx> jgriffith: as in people blame me instead of you ?
21:41:51 <jgriffith> ttx: now if it can just hit people over the head with a virtual hammer when they retarget something after it's run
21:41:55 <jgriffith> ttx: haha!
21:42:05 <jgriffith> ttx: another advantage I hadn't thought of
21:42:26 <fungi> jgriffith: you need to come at them with a clue-by-four
21:42:36 <jgriffith> fungi: LOL
21:42:45 <notmyname> ttx o/
21:42:48 <jgriffith> fungi: I should be careful, lest I'm hit with one myself
21:43:11 <ttx> eglynn: is hbase-events-feature completed by the merging of https://review.openstack.org/#/c/91408/ ?
21:43:15 <ttx> notmyname: shoot
21:43:41 <eglynn> ttx: yes, it sure is
21:43:51 <ttx> ok will mark it implemented
21:44:04 <notmyname> as we've been using a feature branch in swift, I'd like to propose a summit session on using feature branches as a common thing (as opposed to squashed branches as one patch). something to discuss in paris
21:44:30 <notmyname> err...to rephrase, we've used one feature branch. it has advantages and disadvantages
21:44:42 <ttx> notmyname: ok, hopefully we'll remember that by then :)
21:44:56 * eglynn wonders why the LP auto-update didn't kick in when the patch landed ...
21:44:58 <notmyname> ttx: I'll remind you. we'll have one for the erasure code work anyway
21:45:33 <ttx> eglynn: I don't think we have such a thing
21:46:01 <eglynn> ttx: ... a-ha, /me was thinking of LP bugs
21:46:51 <ttx> ok, looks like we don't have anything else to discuss
21:46:55 <ttx> at least for today
21:47:03 <ttx> so...
21:47:09 <ttx> #endmeeting