21:03:28 <ttx> #startmeeting project
21:03:29 <openstack> Meeting started Tue Mar 18 21:03:28 2014 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:32 <openstack> The meeting name has been set to 'project'
21:03:41 <SergeyLukjanov> щ/
21:03:52 <SergeyLukjanov> :) it was o/
21:04:12 <annegentle> SergeyLukjanov: cool trick :)
21:04:16 <devananda> o/
21:04:43 <SergeyLukjanov> annegentle, it's the russian letter on the same button with 'o' :)
21:05:09 <ttx> #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:05:15 <ttx> #topic Current Feature freeze exceptions: status review
21:05:28 <ttx> We have just a few FFEs still running at this point:
21:05:36 <ttx> * cinder/prophetstor-dpl-driver
21:05:46 <ttx> This one is likely to be deferred at Cinder meeting tomorrow, unless it's ready to go in AND core people there want that new driver in Icehouse
21:05:58 <ttx> * neutron/ml2-deprecated-plugin-migration
21:06:07 <ttx> This one is a migration script that improves the Neutron Icehouse story
21:06:15 <ttx> Pending https://review.openstack.org/#/c/76533/, needs to be in before EOW
21:06:23 <ttx> * nova/smarter-network-cache-update
21:06:28 <russellb> deferred
21:06:33 <ttx> OK cool
21:06:38 <ttx> * nova/rpc-major-version-updates-icehouse
21:06:40 <russellb> *just* merged
21:06:42 <markmcclain> ttx: two cores are currently reconciling a testing bug they found
21:06:43 <russellb> and marked implemented
21:06:44 <ttx> merged now ?
21:06:58 <ttx> OK, that's all I have. Anything I missed ?
21:07:10 <jgriffith> seems about right from here
21:08:07 <ttx> alrighty then
21:08:08 <ttx> #topic RC1 buglists
21:08:14 <ttx> #link http://old-wiki.openstack.org/rc/
21:08:22 <ttx> (Still need to fix the redirect on status.o.o at some point)
21:08:23 <russellb> some sharp drops today
21:08:45 <ttx> yes, that's the "ttx comes back from vacation and asks for cleanup" effect
21:08:59 <russellb> haha
21:09:00 <ttx> We need to quickly get those lines set up at their relevant height and then start burning them down
21:09:13 <ttx> The idea is to try to get to 0 release-critical bugs sometime next week so that we can tag the first RC1s then. So...
21:09:24 <ttx> (1) make sure that you only list release-critical bugs in there (you can use a tag to track the nice to haves)
21:09:34 <ttx> (2) go through the other reported bugs and make sure all known release-critical issues are tracked
21:09:37 * russellb uses icehouse-rc-potential as the nice to have tag
21:09:40 <russellb> if we want to be consistent ...
21:09:48 <ttx> russellb: +1
21:09:49 <ttx> (3) burn them down
21:09:57 <ttx> Questions on that process ?
21:10:05 <markmcclain> russellb: +1
21:10:29 <devananda> ttx: just curious, what would be needed to get a similar graph going for incubated projects?
21:10:33 <ttx> At this point I think neutron, and maybe Cinder, are still flying a bit high
21:10:38 <tjones1> ttx: can you paste the link again?  i joined late
21:10:48 <ttx> so they are either really broken or have not refined their lists enough :)
21:10:50 <russellb> tjones1: http://old-wiki.openstack.org/rc/
21:10:58 <tjones1> thx
21:11:17 <markmcclain> ttx: there's more refinement to be done
21:11:29 <jgriffith> ttx: same here as per this morning
21:11:39 <ttx> markmcclain, jgriffith: good news!
21:12:09 <ttx> Try to get the lists refined asap, like before EOD tomorrow
21:12:33 <ttx> I'd like to start looking into the evolution of those curves as a useful metric rather than cleanup artifacts
21:13:04 <ttx> OK, moving on
21:13:10 <ttx> #topic Dependency freeze
21:13:22 <ttx> zigo raised an interesting discussion on the ML about dependency freeze, and I'd like us to discuss when and how we can achieve that
21:13:29 <ttx> #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/030277.html
21:13:44 <ttx> Feature freeze is in effect but some dependency changes are not feature-related, and Swift is not feature-frozen yet
21:13:51 <ttx> When do you think we can freeze addition of new dependencies ?
21:13:58 <ttx> When do you think we can freeze dependency version bumps ?
21:14:04 <notmyname> swift is ok from a dependency perspective
21:14:07 <ttx> And final question... Do you think we can use openstack/requirements to enforce that, or is its scope too broad for us to leverage it ?
21:14:37 <ttx> Last release I had a hard time watching openstack/requirements changes as a way to control the freeze
21:14:48 <ttx> notmyname: no dep addition expected at this point ?
21:15:07 <ttx> notmyname: what about dep version bumps ?
21:15:25 <notmyname> ttx: no. at least none that aren't already in openstack/requirements. the only potential ones are swiftclient adding six and/or python-keystoneclient
21:15:26 <sdague> ttx: well we did only just discover trove had requirements not in the global list, so I think that should get reconciled
21:15:28 <ttx> mordred: your input wanted on final question above
21:15:37 <dolphm> someone on list mentioned pending client releases... would they not be an exception?
21:15:47 <jeblair> ttx: i think os/requirements is exactly for handling this sort of thing
21:15:55 <ttx> dolphm: up to a given date yes
21:16:05 <sdague> dolphm: why exactly are we always bumping client release minimums in global requirements?
21:16:16 <jeblair> ttx: reviewers should apply extra caution when approving those changes, and respect whatever rules we apply to dep freeze
21:16:29 <notmyname> ttx: same with dependency bumps. we keep dependencies working with centos6/precise generic installs, so no worries from our perspective on version bumps
21:16:31 <ttx> jeblair: so last cycle there were a lot of extra additions to cover for non-integrated projects
21:16:42 <ttx> jeblair: like stuff just being added to incubation
21:16:44 <jeblair> ttx: incubated projects?
21:16:51 <ttx> jeblair: yes
21:17:07 <SergeyLukjanov> savannaclient was added after rc1 I think
21:17:12 <ttx> jeblair: that made it difficult to spot freeze players
21:17:18 <sdague> it's also super common for a new driver in a project to come in with requirements, some reasonable, some odd
21:17:25 <ttx> notmyname: good to know, thanks
21:17:28 <jeblair> ttx:  we used to select the mirror based on namespace
21:17:38 <jeblair> ttx: now we select the mirror based on presence in projects.txt in requirements
21:17:58 <jeblair> ttx: so there's no reason to add a project to projects.txt if they can't follow the current rules for that repo
21:18:06 <SergeyLukjanov> jeblair, ttx, probably, we should ensure that all openstack/ projects are in list?
21:18:20 <jeblair> ttx: in other words, if an incubated project can't handle the current requirements, then they should wait until after the dep freeze to add themselves to it
21:18:27 <SergeyLukjanov> jeblair, ++
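[Context: projects.txt in openstack/requirements is, roughly, a flat opt-in list of the repositories whose requirements are gated against and synced from the global list -- an illustrative excerpt, not the actual file:]

    openstack/cinder
    openstack/ironic
    openstack/neutron
    openstack/nova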
21:18:35 <ttx> jeblair: also as soon as we opened master for icehouse dev (after havana-rc1) the slew of new features added deps as well
21:18:58 <ttx> basically we didn't cut a stable/havana branch for requirements at RC time
21:19:18 * dhellmann arrives late
21:19:21 <devananda> sdague: we're starting to push out requirements for optional drivers. AIUI, those don't need to be in global requirements
21:19:41 <sdague> my opinion is we should resolve sqla and the 3 trove requirements, and call it frozen after that point
21:19:54 <jeblair> ttx: i _think_ a milestone-proposed branch for requirements would probably work
21:19:57 <ttx> anyway, we can work on the details of using openstack/requirements to control the dep freeze, if you all think we should be able to use that
21:20:05 <devananda> sdague: let the driver load fail if its dependency isn't present, rather than enforce all those dependencies are installed even when a deployer doesn't want to use those drivers
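[Context: a minimal sketch of the optional-driver pattern devananda describes; every name below is hypothetical, not Ironic code. The import error is deferred until a deployer actually enables the driver:]

    try:
        import vendor_sdk              # hypothetical optional dependency
    except ImportError:
        vendor_sdk = None

    class VendorDriver(object):
        """Driver needing vendor_sdk; fails when loaded, not at package import."""
        def __init__(self):
            if vendor_sdk is None:
                raise RuntimeError(
                    "VendorDriver requires the 'vendor_sdk' package; "
                    "install it or configure a different driver.")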
21:20:08 <jogo> sdague: ++ to resolving trove issues first
21:20:08 <jeblair> ttx: and then branch stable/icehouse on requirements from that too
21:20:11 <ttx> the main question is WHEN we can enforce that freeze
21:20:24 <jeblair> ttx: that's untested afaik, so we'll need to pay close attention to that
21:20:28 <ttx> now ? next week ?
21:20:54 <sdague> however, I think you will find that the requirements reviewers are a pretty diverse set, which probably needs to be revisited, because there is definitely some unevenness in there
21:20:55 <ttx> how about next week, so that we can prepare any branch magic in openstack/requirements that may be needed
21:21:21 <ttx> sdague: if we rely on a MP branch, or a stable branch, the set of reviewers change
21:21:21 <sdague> I think that's fine, the 4 reviews in question are up
21:21:35 <sdague> ttx: true
21:21:42 <ttx> sdague: so that could actually work
21:21:44 <sdague> I guess that would be a thing for next time
21:22:06 <dhellmann> sdague: +1 to reviewing the list of reviewers
21:22:27 <mordred> ttx: sup?
21:22:30 <mordred> ttx: reading
21:22:43 <ttx> Should we first freeze additions and then bumps ?
21:22:55 <ttx> or do a one-step dep freeze with exceptions ?
21:23:29 <dhellmann> it seems like the process for the latter is simpler
21:23:36 <jeblair> (anyone want to volunteer to be the requirements ptl?)
21:23:37 <markmcclain> +1 for simple
21:23:41 <ttx> dhellmann: yes, i'm leaning towards that too
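[Context: the two kinds of changes being weighed here, shown as made-up global-requirements.txt diff hunks (package versions illustrative only) -- (a) a version bump of an existing dependency, (b) the addition of a brand-new one:]

    (a)  -python-keystoneclient>=0.6.0
         +python-keystoneclient>=0.7.0

    (b)  +some-new-library>=1.0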
21:23:54 <dhellmann> jeblair: I thought we already had one
21:24:01 * markwash touches his nose
21:24:04 <dhellmann> haha
21:24:25 <mordred> ttx: I agree with jeblair on everything he said
21:24:42 <ttx> so. Dep freeze Tuesday next week, working out the details around using openstack/requirements to enforce it by then ?
21:25:26 <ttx> works for everyone ?
21:25:33 <sdague> jeblair: so I kind of naturally assume that should be the release manager, so I nominate ttx
21:25:41 <notmyname> ttx: works for swift
21:25:43 <sdague> ttx: I think that will be fine
21:25:48 <jeblair> sdague: seconded
21:25:54 <dhellmann> sdague: +1
21:26:02 <markmcclain> sdague: +1
21:26:03 <dhellmann> ttx: that works for me
* ttx is honored -- just note that for a requirements PTL I'm way behind on regular requirements reviews :)
21:26:29 <SergeyLukjanov> ttx, fwiw +1
21:26:35 <dhellmann> ttx: we should announce the pending freeze on the ML, too
21:26:53 <ttx> right, that's why I say one week -- sounds like the minimum notice
21:27:09 <ttx> #agreed Dep freeze Tuesday next week
21:27:31 <ttx> #action ttx to work out the details around using openstack/requirements to enforce depfreeze before next week
21:27:49 <ttx> OK, thanks everyone
21:27:56 <ttx> #topic UTF-8 charset migrations for oslo-db
21:28:00 <ttx> markwash: floor is yours
21:28:05 <markwash> yay me
21:28:13 <markwash> I put some relevant links in the agenda
21:28:33 <markwash> the basic problem is that db tables should have UTF-8 as the charset
21:28:46 <markwash> but the migrations in cases where they *aren't* already utf8 are pretty scary
21:28:55 <markwash> and we think the deployer ultimately has to be responsible for it
21:29:13 <markwash> but there are a few options
21:29:21 <dolphm> markwash: what makes them scary?
21:29:24 <markwash> 1) we could just immediately require utf-8 with no migration
21:29:30 <markwash> 2) we could provide a migration
21:29:43 <dhellmann> dolphm: we have to re-encode the data in the tables
21:29:43 <markwash> 3) or we could have a deprecation period where we say "you better fix this but we won't hose you yet!"
21:30:12 <sdague> dhellmann: we did data migrations all the time in nova, why is that a problem?
21:30:35 <markwash> I'm also concerned about the migration being a whole lot of time and work for folks who maybe don't actually have any utf8 characters
21:30:40 <markwash> but maybe that's just me being silly
21:30:44 <bnemec> I think we're primarily basing this on http://lists.openstack.org/pipermail/openstack-dev/2014-March/029708.html
21:30:44 <dhellmann> sdague: we would have to write one for each table, no? maybe that would work.
21:31:18 <bnemec> Which suggested this isn't as simple as "make everything utf8".
21:31:19 <sdague> markwash: ascii -> utf8 is a no op right, just a type flip
21:31:41 <dolphm> we did something like this for keystone, all affected tables in one migration: https://github.com/openstack/keystone/blob/master/keystone/common/sql/migrate_repo/versions/005_set_utf8_character_set.py
21:31:53 <dolphm> i don't think i've ever seen a single bug report against that migration
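[Context: a rough sketch of the kind of one-shot MySQL charset migration dolphm is pointing at; the table list and details are illustrative, not the actual keystone code:]

    def upgrade(migrate_engine):
        if migrate_engine.name != 'mysql':
            return   # the charset issue discussed here is MySQL-specific
        tables = ['user', 'tenant', 'role', 'token']   # illustrative table list
        migrate_engine.execute(
            "ALTER DATABASE `%s` DEFAULT CHARACTER SET utf8"
            % migrate_engine.url.database)
        for table in tables:
            migrate_engine.execute(
                "ALTER TABLE `%s` CONVERT TO CHARACTER SET utf8" % table)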
21:32:08 <markwash> sdague: well, it's probably latin1. I don't think there is anything that verifies you don't have any high-bit-set characters, to decide whether you can do the easy or the hard form of the migration
21:32:24 <dhellmann> maybe that ML post made it a bigger deal than it is?
21:32:27 <markwash> dolphm: that's encouraging, but many public clouds don't run keystone yet :not a trollface:
21:32:36 <sdague> markwash: I honestly think #2 has been the default operating model for openstack
21:32:39 <dolphm> markwash: ack :)
21:32:56 <sdague> and the commits should come with giant UpgradeImpact on them
21:33:26 <sdague> especially a warning that this might take a long time
21:33:40 <dolphm> sdague: ++
21:33:42 <markwash> sdague: yeah, that would work to handle the lengthiness
21:33:58 <markwash> I'm still worried about possible corruption as was illuminated in the ML thread
21:34:04 <markwash> but I don't understand it perfectly so
21:34:07 <markwash> . . .
21:34:25 <sdague> and is this only a mysql issue?
21:34:34 <dolphm> markwash: i'm not sure what to say about corruption without being able to reproduce and understand it :(
21:34:43 <bnemec> sdague: Yes
21:34:57 <sdague> yeh, it would be good to build a test case for what bad data here ends up looking like
21:34:58 <dhellmann> sdague: IIUC, it has to do with mysql having many ways to set defaults, and then changing those defaults between versions
21:35:17 <dhellmann> assuming we write the needed migration(s), does that mean we don't need an option to turn off the charset check in oslo?
21:35:32 <markwash> dolphm: I think its basically utf8(latin1(binary(utf8(foo)))) != utf8(foo)
21:35:41 <markwash> something to that effect
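[Context: a small self-contained demonstration of the double-encoding corruption markwash describes -- UTF-8 bytes stored in a column the server believes is latin1, then "converted" to utf8:]

    original = u'\u00e9'                    # e-acute
    utf8_bytes = original.encode('utf-8')   # '\xc3\xa9'
    # A latin1 column hands those two bytes back as two latin1 characters:
    misread = utf8_bytes.decode('latin-1')  # u'\xc3\xa9', i.e. mojibake
    # Converting the column to utf8 re-encodes the mojibake, not the original text:
    corrupted = misread.encode('utf-8')     # '\xc3\x83\xc2\xa9'
    assert corrupted != utf8_bytes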
21:36:01 <markwash> dhellmann: that's correct. if there's a migration we don't need a switch
21:36:32 <dhellmann> markwash: ok
21:36:40 <sdague> so you could probably be smarter and figure out if there is anything high bit before doing the migration
21:36:44 <sdague> for fastpath
21:36:54 <sdague> with a raw bulk query
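[Context: one way sdague's "raw bulk query" fastpath could look on MySQL; the table and column names are hypothetical. CONVERT(... USING ascii) mangles high-bit bytes, so any mismatch flags rows that need the careful path:]

    # Hypothetical pre-check, run per text column before converting the table:
    non_ascii_rows = migrate_engine.execute(
        "SELECT COUNT(*) FROM instances "
        "WHERE display_name <> CONVERT(display_name USING ascii)").scalar()
    if non_ascii_rows == 0:
        pass  # pure-ASCII data: the charset flip is effectively a no-op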
21:36:58 <markwash> dolphm: the supposition being that utf8 data was going into some latin1 tables via binary(utf8(foo))
21:37:47 <dolphm> markwash: i'd have to try it, but i'd expect something to throw an exception before that succeeded :-/
21:37:57 <markwash> sdague: I'm not really confident we can confidently catch those kinds of errors :-( that's why I want to pass the ball to deployers
21:38:15 <markwash> apparently my standard is double confidence
21:38:19 <sdague> markwash: that's a giant punt though
21:38:45 <sdague> "hey guys, we screwed up, we don't know how to fix it. You tell you fixed it before we do anything"
21:39:09 <markwash> well, except we only screwed up by allowing them to screw up with latin1 as a default
21:39:19 <markwash> so there's a slight political out
21:39:25 <sdague> markwash: sure, but that was still our fault
21:39:37 <jgriffith> I'm just curious, haven't all the other projects already done this?
21:39:40 <sdague> like when we forced everything to innodb
21:39:42 <dolphm> markwash: if it's possible to detect corruption at migration time, and abort... that would be beneficial to the 99% of deployments that *don't* have corruption to worry about
21:39:51 <jgriffith> I mean... cinder just looped through and did a convert to UTF8 in its base migration
21:39:57 <jgriffith> not sure about others
21:40:00 <markwash> jgriffith: great question! if I'm just behind the times that's fine I can pipe down :-)
21:40:19 <jgriffith> markwash: no no... I'm just wondering, maybe this is more fleshed out than we think
21:40:24 <jgriffith> markwash: ie lower risk
21:40:29 <markwash> yes
21:40:39 <ttx> Yep, it sounds safe to align with what other projects have done. At least you can deflect blame on them when all else fails
21:40:46 <jgriffith> ttx: ha!
21:40:49 <markwash> anyway it sounds like the general push here is to just go with a utf8 migration
21:41:04 <markwash> less deflection, more contextualization ;-)
21:41:25 <markwash> so I can take "allow utf8 migration" as the message
21:41:31 <markwash> and we can move on
21:41:38 <ttx> I hear Cinder and Keystone have migrations doing it, and I think Glance is in the same boat
21:42:12 <ttx> usual-dataset-size-wise ?
21:42:31 <markwash> seems likely, especially with cinder
21:42:55 <ttx> so yeah, i don't think it's special enough to justify diverging
21:43:06 <markwash> everyone is special :-)
21:43:11 <ttx> the other ones seem to have not caused that much havoc
21:43:23 <ttx> markwash: can we move on ?
21:43:26 <markwash> indeed
21:43:32 <ttx> #topic Red Flag District / Blocked blueprints
21:43:33 <jgriffith> markwash: and nova
21:43:41 <ttx> Any inter-project blocked work that this meeting could help unblock ?
21:43:59 <ttx> any RC bug fix needing some other project to do work for them ?
21:45:00 <ttx> That will come in due time, I guess
21:45:02 <ttx> #topic Incubated projects
21:45:07 <SergeyLukjanov> o/
21:45:15 <SergeyLukjanov> re sahara rc1 - https://launchpad.net/sahara/+milestone/icehouse-rc1
21:45:20 <ttx> SergeyLukjanov, kgriffs, devananda: you still belong here :)
21:45:28 <devananda> o/
21:45:53 <SergeyLukjanov> ttx, we're now in renaming hell, should complete 95% of it this week
21:46:00 <ttx> SergeyLukjanov: OK, a lot left to cover I see
21:46:28 <ttx> SergeyLukjanov: when you're done, maybe trim the RC1 buglist to only contain release-critical issues
21:46:34 <SergeyLukjanov> ttx, yup, so, we'd like to have +1w before the rc1 I think
21:46:41 <ttx> so that we can reach RC1 in a reasonable time
21:46:45 <SergeyLukjanov> ttx, sure, that's what I'm planning to do
21:47:07 <ttx> RC1 in two weeks is fine by me, for incubated projects
21:47:19 <SergeyLukjanov> ttx, it should work for us
21:47:24 <ttx> leaves enough time to catch a regression and respin
21:47:51 <ttx> not as many integration concerns as for integrated release projects
21:48:03 <SergeyLukjanov> yeah
21:48:07 <ttx> https://launchpad.net/ironic/+milestone/icehouse-rc1
21:48:28 <devananda> we're focusing on getting CI up and the various bugs we're finding along the way
21:48:47 <devananda> i'm doing triage a couple times a week to keep ^ accurate
21:48:53 <ttx> devananda: is that buglist listing all release-critical fixes you'd like to get in before your icehouse release ?
21:48:57 <devananda> yes
21:49:10 <ttx> ok, sounds like you're in good shape
21:49:39 <devananda> as far as docs go, a quick question
21:49:54 <devananda> we're aiming at hosting deployer docs in the same repo as our developer docs for now
21:49:57 <devananda> is that reasonable?
21:50:03 <devananda> eg, docs.openstack.org/developer/ironic/
21:50:08 <annegentle> devananda: yep it's fine
21:50:09 <ttx> devananda: ideally you would come up with something usable enough so that people would actually play with that release as an alternative to nova-bm
21:50:20 <devananda> annegentle: thanks
21:50:20 <ttx> that should get you new bugs for Juno :)
21:50:25 <devananda> ttx: we have that :)
21:50:37 <annegentle> devananda: think of eventually fitting into one of the install guides (we have four for four distros)
21:50:39 <devananda> ttx: in devstack, you now can just set 2 ENV vars and it switches everything to ironic
21:50:43 <ttx> devananda: now you just need to market it :)
21:50:59 <devananda> ttx: yep, heh... deployer docs will help a lot there I think
21:51:14 <ttx> devananda: the goal is to limit the effects of "not being integrated" to a bare minimum
21:51:31 <ttx> so yes, easy devstack switching is a good idea
21:51:38 <devananda> annegentle: i know that's the goal, just not sure how we can do that prior to graduation / without help from you(r team). happy to talk out of meeting if you have suggestions
21:52:10 <devananda> ttx: ack. will keep that in mind
21:52:41 <ttx> devananda: could also engage with distros in early Juno so that they fully support ironic in their next set
21:52:45 <devananda> moving the nova "ironic" driver into our codebase has made that so much simpler.
21:53:01 <devananda> ttx: afaik, RH and Deb/ubuntu are both packaging ironic in their next releases
21:53:12 <ttx> devananda: great
21:53:14 <devananda> ttx: http://packages.ubuntu.com/search?keywords=ironic
21:53:36 <lifeless> zigo has debian covered, RH are shipping tripleo so definitely have ironic covered
21:53:48 <ttx> but packaged doesn't mean it works :) tested means it works :)
21:54:09 <devananda> yep
21:54:33 <ttx> any other question from our still-incubated-in-icehouse crowd ?
21:55:00 <ttx> #topic Open discussion
21:55:27 <ttx> summit.o.o has been open for session suggestions since March 7
21:55:47 <ttx> 96 proposals at this point
21:56:04 <ttx> keep them coming
21:56:21 <ttx> Anything else, anyone ?
21:56:57 <hub_cap> thx ttx!
21:57:14 <ttx> #endmeeting