21:01:10 <russellb> #startmeeting nova
21:01:11 <openstack> Meeting started Thu Nov 21 21:01:10 2013 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:15 <openstack> The meeting name has been set to 'nova'
21:01:22 <russellb> hello everyone!
21:01:23 <mriedem1> hi!
21:01:29 <alaski> hi
21:01:30 <melwitt> hi
21:01:31 <johnthetubaguy> hi
21:01:31 <n0ano> o/
21:01:32 <dripton> hi
21:01:39 <hartsocks> \o
21:01:40 <russellb> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:01:43 <jog0> o/
21:01:52 <russellb> #topic general announcements
21:01:55 <dansmith> o/
21:02:07 <russellb> the compute program needs a mission statement
21:02:10 <lifeless> o/
21:02:20 <russellb> i drafted one, if anyone has feedback let me know, before i propose it to the governance repo
21:02:26 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2013-November/019677.html
21:02:35 <cyeoh> hi
21:02:41 <bnemec> \o
21:02:42 <russellb> it's kinda buzzwordy, but ... trying to capture everything we do into a brief statement
21:02:49 <mspreitz> hi
21:03:02 <russellb> anyway, not a huge deal, don't want to bikeshed it, but want to give everyone a chance to provide input if you want to :)
21:03:23 <russellb> next thing ... mid cycle meetup
21:03:26 <russellb> we talked about this a bit last week
21:03:29 <johnthetubaguy> looks quite good to me, I wonder if we should mention the API, but yeah, looks good
21:03:50 <russellb> dates are confirmed as Feb 10-12
21:03:51 <russellb> 3 days
21:03:59 <russellb> in Orem, UT
21:04:02 <russellb> (right outside SLC)
21:04:08 <dansmith> woohoo!
21:04:16 <russellb> at Bluehost's facility, thanks to geekinutah :)
21:04:23 <jog0> I assume there will be a ML announcement
21:04:26 <russellb> yes
21:04:28 <jog0> kk
21:04:30 <russellb> we're very close
21:04:31 <jog0> and woot
21:04:40 <russellb> last thing was getting hotel links on our wiki page with all the details
21:04:43 <mikal> Morning
21:04:46 <russellb> we'll also have an eventbrite thing to register
21:05:08 <russellb> but once the details are on the page, i'll post to the ML
21:05:12 <russellb> just wanted to share the status
21:05:16 <mikal> That's super awesome, thanks for tweaking the dates
21:05:41 <russellb> i'm hoping to tack on a day of snowboarding :)
21:05:47 <russellb> if anyone wants to join ...
21:05:52 <russellb> and laugh at me
21:05:57 <jaybuff> russellb: i'm in!
21:06:00 <mikal> russellb: sorry, I'm a nerd and can't do that. I would hurt my typing hands.
21:06:01 <russellb> woot
21:06:09 <russellb> maybe we could go tubing one night, heh
21:06:19 * n0ano (skier) refuses to comment on knuckle draggers :-)
21:06:24 <mriedem1> doesn't anyone ski anymore?
21:06:25 <mikal> I would like to go skidooing
21:06:25 <johnthetubaguy> mikal: I am with you, is there a good pub with a nice view?
21:06:26 <russellb> hotel should be just under $100/night
21:06:45 <mikal> johnthetubaguy: we can find one I am sure. Oh, except it's Utah. Do they drink there?
21:06:47 <russellb> so that should be enough to go ask nicely for budget
21:07:07 <jaybuff> mikal: every day except sunday (serious)
21:07:15 <mikal> jaybuff: works for me
21:07:18 <russellb> next item of business!
21:07:20 <mikal> Sunday can be liver recovery day
21:07:21 <russellb> #topic sub-teams
21:07:32 * johnthetubaguy raises hand
21:07:39 <russellb> johnthetubaguy: k, you can go first
21:07:44 * n0ano scheduler
21:07:58 * hartsocks waves
21:07:59 <johnthetubaguy> so, xenapi stuff, making progress with tempest tests
21:08:17 <johnthetubaguy> quite a few blueprint bits and bobs going in
21:08:24 <johnthetubaguy> I think that's about all from me
21:08:45 <russellb> cool, smokestack come back?
21:08:55 <johnthetubaguy> not that I know of, it's a packaging issue
21:09:07 <johnthetubaguy> danp is on the case though, I am told
21:09:27 <russellb> ok
21:09:39 <russellb> n0ano: scheduler!
21:09:54 <n0ano> lively discussion this week (I should go away more often)
21:09:58 <n0ano> instance groups: hope to complete V3 APIs for Icehouse
21:09:59 <n0ano> scalability: hoping to get info from Boris, goal of ~1000 nodes, simulation vs. real-world numbers
21:09:59 <n0ano> sql vs. nosql: discussed a little, Google's Omega project is worth looking at
21:09:59 <n0ano> scheduler as a service: Nova, Cinder & Neutron all need a scheduler, very early on this
21:10:18 <n0ano> that's pretty much what we went through for now.
21:10:26 <russellb> ok, and i think there's a thread starting on openstack-dev about now about the common scheduler
21:10:30 <russellb> so that's something to watch
21:10:33 <russellb> i'm very keen on that idea
21:10:53 <russellb> to just bite the bullet and run with that sooner than later ... with a very tightly limited scope to only replace nova's scheduler to start with
21:11:07 <n0ano> maybe, neutron's needs are simple so I'm not totally convinced we need a separate service, but we'll see
21:11:36 <russellb> well i think it's more about the information living somewhere common
21:11:42 <russellb> failure domains, locality
21:11:52 <russellb> because scheduling instances needs all of that
21:12:03 <russellb> but anyway, a hot topic in the next week i expect
21:12:10 <russellb> let's take it to the list :)
21:12:15 <n0ano> there are other proposals to have multiple schedulers, reducing failure domains a little but good discussion
21:12:25 <russellb> OK
21:12:29 <russellb> hartsocks: you're up
21:12:32 <hartsocks> So..
21:12:37 * mriedem1 raises hand after hartsocks
21:12:42 <hartsocks> We have a couple bugs folks marked "Critical" ...
21:12:42 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1195139
21:12:42 <hartsocks> ... kills the vmware drivers for anyone on postgres it turns out. oops.
21:12:43 <hartsocks> #link https://bugs.launchpad.net/nova/+bug/1252827
21:12:43 <hartsocks> ... kills our CI environment. oops.
21:12:44 <hartsocks> Still working on priorities for our BP work. We're excited to do something on the configuration validator that will be broadly usable. *woot*
21:12:45 * lifeless puts up hand for coordinating the common scheduler stuff :)
21:12:45 <uvirtbot> Launchpad bug 1195139 in nova "vmware Hyper  doesn't report hypervisor version correctly to database" [Critical,In progress]
21:12:46 <uvirtbot> Launchpad bug 1252827 in openstack-vmwareapi-team "VMWARE: Intermittent problem with stats reporting" [Critical,Confirmed]
21:13:02 <hartsocks> I typed that up ahead of time :-)
21:13:05 <russellb> lifeless: awesome
21:13:11 * melwitt raises hand after mriedem1
21:13:13 <devananda> FWIW, as schedulers go, ironic would probably be involved if there's a general scheduling service, particularly if it starts being aware of failure domains and such
21:13:48 <hartsocks> The scheduler stuff is really interesting IMHO.
21:14:06 <russellb> OK, please be careful when using critical ...
21:14:13 <n0ano> we have weekly meetings on Tues (check the wiki for details), the more the merrier
21:14:15 <mspreitz> Failure domain awareness is one of the motivating use cases for my work on holistic scheduling.
21:14:25 <yjiang51> hartsocks: +1
21:14:28 <hartsocks> We really want these two to go back soon.
21:14:36 <hartsocks> backport to Havana, that is.
21:15:17 <hartsocks> Basically our driver kills things if you're on Postgres… and the other bug is killing our CI. :-)
21:15:34 <russellb> OK, fixes up?
21:15:51 <hartsocks> yep. or very soon on the way.
21:15:58 <hartsocks> The DB thing is up now.
21:16:18 <russellb> ok cool
21:16:18 <russellb> anything else?
21:16:27 <hartsocks> that's my pleading for today.
21:16:32 <hartsocks> :-)
21:16:38 <russellb> k!
21:16:42 <russellb> mriedem1: hi!
21:16:43 <dansmith> zomg zomg zomg
21:16:44 <mriedem1> for powervm we were requesting some core eyes on a review: https://review.openstack.org/#/c/57774/
21:17:01 <dansmith> I call dibs on +A on that
21:17:15 <mriedem1> questions?
21:17:18 <russellb> heh
21:17:41 <mriedem1> ha, so we should probably tell dperaza
21:17:42 <russellb> can you please add a link to that review to the ReleaseNotes/Icehouse wiki page?
21:17:44 <russellb> i think that's the one
21:17:48 <mriedem1> he just pushed a change for config drive
21:17:53 <russellb> fail.
21:17:53 <mriedem1> sure
21:17:53 <dansmith> mriedem1: I just -2d that :D
21:17:56 <mriedem1> haha
21:18:10 <dperaza> here now
21:18:14 <dperaza> reading
21:14:14 <lbragstad> mriedem1: yeah I just saw that. I no longer need to ask my question either :)
21:18:15 <russellb> well hello there
21:18:25 <mriedem1> dperaza: https://review.openstack.org/#/c/57774/
21:18:35 * russellb fine with it if you guys are
21:18:39 <mriedem1> yup
21:18:41 <mriedem1> i'll update the wiki
21:18:43 <russellb> easy enough
21:18:46 <dansmith> I'm more than fine with it
21:19:01 <russellb> should we apply something to stable/havana, like a deprecation log message or something?
21:19:06 <russellb> just ... in case?
21:19:15 <johnthetubaguy> I would ask questions about deprecation windows, but ...
21:19:27 <mriedem1> russellb: we can if someone wants us to
21:19:28 <dansmith> russellb: for the principle, yeah, but..
21:19:37 <russellb> heh, k
21:19:46 * mrodden was wondering about that too
21:19:58 <mrodden> to my knowledge there aren't any external users of it
21:20:03 <russellb> if it's not possible that anyone would be using it, meh
21:20:14 <russellb> what about internal users you don't know about?
21:20:19 <russellb> anyway, just stuff to consider, no rush
21:20:20 <mriedem1> it's us i think
21:20:26 <russellb> k
21:20:29 <mriedem1> i don't need to get into the internal drama here
21:20:37 <mriedem1> it's fun, but not here
21:20:38 <russellb> melwitt: hello!  have some novaclient info?
21:20:44 <russellb> k
21:20:57 <melwitt> I have our weekly report. :)
21:20:57 <melwitt> 105 bugs open !(fix released), 78 bugs open !(fix committed), 35 new bugs, no high priority bugs
21:20:57 <melwitt> 10 patches up and being actively reviewed and updated or WIP
21:20:57 <melwitt> and I just started going through cleaning the bug list, trying to reproduce the issues and closing old bugs that have since been fixed
21:20:57 <melwitt> that's about it
21:21:24 <russellb> awesome!
21:21:33 <russellb> really appreciate the bug list cleanup, long overdue
21:21:41 <russellb> if anyone wants to help, please talk to melwitt !
21:21:49 <russellb> a good opportunity to contribute :)
21:22:07 <russellb> probably some good bug fix opportunities in that list
21:22:39 <russellb> one last thing for sub-teams, since a lot are around compute drivers
21:22:53 <russellb> please see dansmith's post and nice wiki page on the driver deprecation plan ... http://lists.openstack.org/pipermail/openstack-dev/2013-November/019355.html
21:23:11 <russellb> we've talked about how we needed to write down more details about the driver testing requirement and deprecation plan, so there it is
21:23:50 <russellb> so let us know if you have any questions on any of it!
21:24:04 <russellb> i think everyone is well aware of the requirements at this point
21:24:21 <russellb> #topic bugs
21:24:26 <russellb> lifeless: hello, bug czar!
21:24:35 <lifeless> https://etherpad.openstack.org/p/nova-bug-triage
21:24:46 <lifeless> is my current thoughts
21:25:19 <lifeless> I've now basically finished my massive set of HP paper reviews so I have bandwidth to push this forward
21:25:53 <mriedem1> lifeless: so we talked about this a bit, and i added to the agenda, but wondering about how to handle CI/gate breaker bugs special?
21:25:54 <mriedem1> https://wiki.openstack.org/wiki/Meetings/Nova
21:25:57 <lifeless> next week I hope to have some metrics, and if we have consensus on the desired properties I will get onto the larger discussion about the exact process to achieve them
21:26:01 <mriedem1> i think we should work that into your etherpad somehow
21:26:19 <mriedem1> jog0 should have input there
21:26:22 <lifeless> mriedem1: jeblair's email is what I think the answer is: qa get to say 'X is critical' in any gate related project.
21:26:36 <mriedem1> lifeless: yup, his email was good timing
21:26:42 <jaybuff> if it breaks the gate, why not revert?
21:26:53 <russellb> it's not always obvious what broke it
21:26:54 <mriedem1> jaybuff: sometimes finding what to revert is the problem
21:26:59 <lifeless> jaybuff: thats certainly an option, where feasible.
21:26:59 <russellb> if it was obvious, it wouldn't have made it in
21:27:07 <jaybuff> i see
21:27:12 <lifeless> jaybuff: and folk do try that. The point in the bug context isn't /how/ the bug is fixed.
21:27:22 <lifeless> jaybuff: it's 'is such a thing critical' - yes it is. and
21:27:29 <lifeless> jaybuff: how do we get folk working on it quickly
21:27:52 <lifeless> essentially crisis management
21:28:35 <lifeless> mriedem1: So, I don't have a view [yet] on specific tags, but I think having a single tag for recheck issues [which such things are by definition] across all of openstack is desirable
21:28:35 <jog0> mriedem1: I agree with everything said
21:28:50 <russellb> i think jog0 did a great job this week of raising awareness
21:29:02 <russellb> which is a big part of getting people working on it
21:29:06 <jog0> so when gate starts failing, the first step is to find a fingerprint for the bug ideally that will be close to the root cause, but not always
21:29:10 <russellb> raising awareness, and tracking all the moving pieces
21:29:15 <lifeless> +1
21:29:20 <dansmith> agreed
21:29:22 <lifeless> anyhow, thats all I have right now
21:29:29 <russellb> lifeless: cool thanks :)
21:29:41 <jog0> then we can actually see how frequently it occurs. And if it's bad we mark it as critical
21:29:41 <jog0> such as https://bugs.launchpad.net/nova/+bug/1251920
21:29:41 <jog0> which isn't fixed, we just disabled the test
21:29:44 <uvirtbot> Launchpad bug 1251920 in nova "Tempest failures due to failure to return console logs from an instance" [Critical,In progress]
21:29:49 <jog0> so mikal has some more work on that one i think
21:29:51 <jog0> as we all do
21:29:56 <mikal> jog0: sigh.
21:30:19 <jog0> going forward we need to be more aggressive about fixing gate bugs
21:30:21 <russellb> that bug is annoying.
21:30:22 <jog0> so we never wedge the gate again
21:30:46 <russellb> jog0: thoughts on how we do that?
21:30:54 <russellb> if we have a gate critical bug, just lock the gate?
21:30:56 <clarkb> I did look at that test a little, the one that fails was typically running after the test that was skipped, they run in the same test runner and use the same VM
21:30:59 <russellb> that's a brute force solution..
21:31:20 <clarkb> fyi for whoever ends up actually looking at it. I believe it is an inter-test interaction due to test ordering and shared resources
21:31:46 <jog0> russellb: well we want to run all gate jobs x times where x > 1
21:32:00 <jog0> but also what jim said in his email
21:32:20 <jog0> just do a better job of making sure critical bugs get priority (which why its important to be careful with the critical tag)
21:32:30 <russellb> clarkb: interesting
21:32:36 <russellb> jog0: +1
21:33:14 <johnthetubaguy> +1 to using critical sparingly
21:33:42 <jog0> also any critical gate issue should be assigned to the next milestone
21:33:52 <jog0> at least thats the idea I am going with now
21:34:05 <jog0> to help with visibility  (there may be a better answer to that though)
21:34:27 <cyeoh> jog0: so openstackrecheck reports to irc when a patch fails due to a recognized bug - could it also report how many times it's seen that failure?
21:34:49 <jog0> so we are working on a summary of what happened, in short two bugs took us from an unstable gate to wedged in a few days
21:35:05 <jog0> cyeoh: we have that but hidden a bit status.openstack.org/elastic-recheck/
21:35:21 <cyeoh> jog0: thx
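[A minimal sketch of the "fingerprint" idea jog0 describes above: each known gate bug gets a signature that can be matched against job logs, so recurring failures can be counted and attributed to a bug rather than rechecked blindly. The real elastic-recheck tool does this with Elasticsearch queries over indexed logs; the regex matcher, the FINGERPRINTS mapping, and the pattern text below are simplified illustrations, not the actual queries.]

    import re

    # bug number -> pattern seen in failing console logs (patterns are illustrative)
    FINGERPRINTS = {
        1251920: re.compile(r'Console output could not be retrieved'),
    }

    def classify_failure(console_log):
        """Return the known bug numbers whose fingerprints match this log."""
        return [bug for bug, pattern in FINGERPRINTS.items()
                if pattern.search(console_log)]

    def failure_counts(logs):
        """Count how often each known bug shows up across a set of job logs."""
        counts = {}
        for log_text in logs:
            for bug in classify_failure(log_text):
                counts[bug] = counts.get(bug, 0) + 1
        return counts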
21:35:25 <johnthetubaguy> we have so many jobs now, a small error rate on each adds up to a huge error rate I guess
21:35:49 <russellb> right, and with 50 things trying to go in at once, a failure can be really disruptive
21:35:58 <russellb> and cause a ton of wasted test time
21:35:59 <jog0> bingo
21:36:09 <jog0> also we are exploring killing 'reverify no bug'
21:36:13 <jog0> as that got us here
21:36:16 <russellb> jog0: nice
21:36:18 <bnemec> +1
21:36:18 <mikal> Yeah, I've only worked on that one bug this week
21:36:22 <mikal> It's been a huge pain in the ass
21:36:23 <cyeoh> +1
21:36:26 <jog0> also we are still in critical bug fix only mode
21:36:30 <jog0> so no +A yet
21:36:41 <jog0> getting close to being out of this though
21:36:54 <jog0> mikal: that was a bitch of a bug, and we still don't understand it
21:37:18 <russellb> but thinking that it might be test interaction?
21:37:28 <russellb> clarkb's comment a bit ago was very interesting
21:37:30 <jog0> russellb: all signs point to yes
21:37:31 <russellb> a good hint...
21:37:41 <geekinutah> jog0: I like killing reverify no bug, and recheck no bug
21:37:49 <geekinutah> at least we end up classifying things
21:37:50 <russellb> i do too
21:37:55 <russellb> it's abused way too much
21:38:02 <russellb> will be annoying for a few types of failures, but worth it overall i think
21:38:02 <geekinutah> which gets us closer to figuring out if they are critical, etc
21:38:07 <jog0> anyway I don't want to derail the nova meeting, more info coming soon
21:38:16 <russellb> jog0: ok, thanks a bunch
21:38:28 <russellb> everyone give jog0 a high five
21:38:30 <russellb> jog0: ^5
21:38:32 <russellb> :)
21:38:34 <jog0> I would personally like to thank mikal for banging his head against the wall for way too long
21:38:38 * bnemec high fives jog0
21:38:39 <russellb> and mikal!
21:38:41 <russellb> mikal: ^5
21:38:43 <mikal> jog0: you're welcome
21:38:48 <mspreitz> me ^5
21:38:52 * mspreitz ^5
21:38:54 * bnemec high fives mikal
21:39:04 <jog0> \o/ to all
21:39:07 <cyeoh> jog0, mikal: ^5
21:39:10 * bnemec is starting to feel like a cheerleader
21:39:14 * johnthetubaguy jog0 mikal ^6
21:39:17 <dripton> be careful with the double high fives; you might fall over
21:39:20 <russellb> johnthetubaguy: ooh
21:39:22 <mriedem1> bnemec: the skirt might have something to do with that
21:39:29 <russellb> alright, next topic :)
21:39:31 <russellb> #topic blueprints
21:39:41 * mriedem1 raises hand
21:39:44 <mriedem1> http://lists.openstack.org/pipermail/openstack-dev/2013-November/020041.html
21:39:49 <russellb> we made some really good progress in the last week getting icehouse-1 in shape
21:39:54 <russellb> roadmap wise
21:40:02 <russellb> i suspect a bunch won't make it that's there though
21:40:10 <russellb> merges need to be in by 1.5 weeks from now
21:40:17 <russellb> and that includes a long US holiday weekend ...
21:40:22 <russellb> and the gate issues ... so ...
21:40:34 <geekinutah> throws a wrench in things
21:40:42 <russellb> please move things to icehouse-2 that you know won't make it, that will save some paperwork as we approach the milestone
21:40:57 <russellb> also, here's my latest email to the ML on the Icehouse roadmap status in general
21:40:59 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2013-November/019786.html
21:41:04 <russellb> in short: icehouse-1 looking good
21:41:16 <russellb> nova-core folks need to start sponsoring some blueprints
21:41:27 <russellb> and nova-drivers, we still have some work to do for icehouse-2 and 3
21:41:39 <russellb> lots of work now early in the cycle, but should calm down in a few weeks
21:41:56 <russellb> #link https://blueprints.launchpad.net/nova/icehouse
21:41:58 <russellb> that's the full icehouse list
21:41:58 <russellb> 90 blueprints earlier today
21:42:10 <russellb> OH, i think a bunch are missing still too
21:42:20 <russellb> for things we talked about summit ... so get your blueprints posted :)
21:42:29 <russellb> i think that's all i had ... now on to specific ones!
21:42:34 <russellb> mriedem1: ok now you can go
21:42:46 <mriedem1> russellb: was just going to point out some related bps,
21:42:52 <mriedem1> about supporting v2 APIs for cinder and glance
21:42:55 <mriedem1> #link http://lists.openstack.org/pipermail/openstack-dev/2013-November/020041.html
21:43:05 <mriedem1> i wanted to raise attention since the ML thread kind of died
21:43:19 <mriedem1> but now there is a patch to do the same for cinder v2 so bringing it back up
21:43:26 <mriedem1> since you guys are reviewing blueprints
21:43:53 <russellb> OK
21:44:00 <russellb> so, trying to remember where we left that one ...
21:44:08 <russellb> deployers still want a knob to force it to one or the other
21:44:21 <mriedem1> right, but there were thoughts on discovery
21:44:25 <russellb> in the absence of config, we want some automatic version discovery
21:44:26 <mriedem1> and the keystone service catalog
21:44:41 <mriedem1> essentially can the service catalog abstract a lot of this garbage from nova
21:44:44 <russellb> i think the catalog should be to discover the endpoint, not the versions it has
21:44:55 <russellb> and then you talk to the endpoint to do discovery
21:45:15 <russellb> and cache it so you don't double up on API requests for everything
21:45:19 <russellb> IMO
21:45:42 <mriedem1> yeah, cache on startup
21:45:48 <johnthetubaguy> russellb: +1 it seems a good end goal
21:45:57 <russellb> cool
21:46:00 <mriedem1> i just wasn't sure if jog0 was working on something else related to the service catalog that interacted with this
21:46:02 <cyeoh> russellb: agreed. There's been a bit of a discussion about version discovery given that catalog entries for specific versions of apis have started to appear
21:46:21 <johnthetubaguy> well it's cached per user though, it's a little tricky
21:47:16 <jog0> we should make sure keystone agrees with this idea
21:47:22 <johnthetubaguy> we also have a list of glance servers to round robin, not sure how that fits in the keystone catalog, I guess it might
21:47:28 <jog0> also we cache the keystone catalog for cinder already
21:47:29 <russellb> mriedem1: can you ping dolph and get him to weigh in?
21:47:30 <jog0> in the context
21:47:34 <russellb> mriedem1: and maybe summarize what we said here?
21:47:44 <dolphm> \o/
21:47:47 <cyeoh> for some apis we might need one version specific catalog entry temporarily as the endpoints often currently point to a specific version rather than the root
21:47:50 <mriedem1> russellb: sure
21:48:28 <johnthetubaguy> cyeoh: yeah, I agree, we probably need those for the short term
21:48:28 <russellb> great thanks!
21:49:30 <russellb> OK, well thanks for raising awareness of this
21:49:31 <jog0> mriedem1: FYI the only thing I worked on was not saving the entire keystone catalog in the context and that was a while ago
21:49:45 <mriedem1> jog0: ok, i'll ping you in nova
21:49:54 <jog0> mriedem1: ack
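[A minimal sketch of the discovery-and-cache approach russellb outlines above, assuming a service whose unversioned root answers a GET with a JSON document listing its versions: the client takes the endpoint from the keystone catalog, asks the endpoint what it supports, and caches the answer so the extra request is only paid once. The helper name pick_api_version and the module-level cache are illustrative, not nova's actual implementation.]

    import requests

    _VERSION_CACHE = {}  # endpoint URL -> chosen API version


    def pick_api_version(endpoint, preferred=('v2.0', 'v1.0')):
        """Return the first preferred API version advertised by `endpoint`."""
        if endpoint in _VERSION_CACHE:
            return _VERSION_CACHE[endpoint]

        # Most OpenStack services answer a GET on their unversioned root
        # with a document listing the versions they expose.
        resp = requests.get(endpoint, timeout=10)
        resp.raise_for_status()
        advertised = {v['id'] for v in resp.json().get('versions', [])}

        for candidate in preferred:
            if candidate in advertised:
                _VERSION_CACHE[endpoint] = candidate
                return candidate
        raise RuntimeError('no supported API version found at %s' % endpoint)

[As dolphm notes later in the meeting, a real implementation would also need to notice when the catalog hands it an already-versioned endpoint and do proper discovery against it rather than assume the root version document is present.]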
21:50:21 <russellb> #topic open discussion
21:50:26 <russellb> mriedem1: you're on fire with topics today
21:50:36 <mriedem1> my other one is open discussion
21:50:37 <dolphm> mriedem1: russellb: +1 for the bits about cached version discovery against unversioned endpoints in the catalog
21:50:50 <mriedem1> russellb: so i'll defer for now
21:51:08 <russellb> dolphm: woot, that was quick
21:51:19 <russellb> mriedem1: open discussion time now
21:51:23 <mriedem1> christ
21:51:28 <mriedem1> ok, so oslo-sync
21:51:37 <mriedem1> ML: http://lists.openstack.org/pipermail/openstack-dev/2013-November/019098.html
21:51:39 <mriedem1> Important because of lifeless's sync for the recent gate breakage: https://review.openstack.org/#/c/57509/ and then this: https://review.openstack.org/#/c/57511/
21:51:42 * lbragstad lurks...
21:51:42 <dolphm> i just hope that the version discovery mechanism is smart enough to realize when it's handed a versioned endpoint, and happily run with that
21:51:45 <mriedem1> there seems to be some debate here
21:51:54 <mriedem1> bnemec: lifeless ^
21:52:00 <dolphm> (by calling that endpoint and doing proper discovery)
21:52:06 <bnemec> o/
21:52:24 <russellb> dolphm: yeah, need to handle that gracefully ...
21:52:37 <bnemec> Full Oslo syncs are scary right now, for sure.
21:52:39 <russellb> not sure we can really tackle the oslo thing now
21:52:42 <russellb> it's really a cross project issue
21:52:49 <russellb> and we'd need dhellmann
21:52:57 <russellb> it might be a good topic for the cross-project weekly meeting on tuesdays
21:53:03 <mriedem1> yeah, just another timely thing for the gate issues this week and the ML thread
21:53:06 * russellb nods
21:53:13 <russellb> definitely important, but bigger than nova i think
21:53:18 <mriedem1> totally
21:53:22 <mikal> I've been trying to slowly sync the things I maintain
21:53:31 <mikal> I feel its mostly a maintainter thing
21:53:40 <mriedem1> last full sync was march
21:53:41 <mikal> And random people shouldn't be doing super syncs they don't know about
21:53:42 <russellb> would like to hear what dhellmann thinks is the future here
21:53:45 <russellb> mikal: yep
21:53:48 <russellb> and aggressively moving toward libs
21:53:50 <jog0> russellb: agreed, good tuesday chat
21:54:09 <mikal> russellb: I feel the interface stability thing is still a good argument for the current way for some things
21:54:09 <lifeless> I think we should not be syncing at all
21:54:12 <lifeless> oslo should be a library
21:54:16 <mikal> russellb: libs would be just as bad for changed interfaces
21:54:21 <lifeless> we can solve the interface stability concerns
21:54:22 <russellb> for the tuesday chat, please ask ttx to put it on the agenda
21:54:46 <mriedem1> i'll take that TODO
21:54:49 <mriedem1> since i raised it
21:54:52 <hartsocks> When is this on Tuesday? I want to learn what's going on there… I'm confused on OSLO.
21:54:53 <russellb> perfect
21:54:54 <mikal> lifeless: it would be exactly the same work, just a different mechanism
21:55:02 <lifeless> mikal: no, it wouldn't
21:55:11 <russellb> hartsocks: we've had a project release status meeting at the same time as this meeting for a long time
21:55:26 <russellb> but we've changed the format, to be more discussion based for cross-project topics
21:55:28 <hartsocks> russellb: okay, I'll lurk that one.
21:55:31 <russellb> instead of weekly release status syncing
21:55:34 <russellb> yeah it should be a good one
21:55:44 <russellb> if you're interested in cross project issues anyway
21:55:57 <russellb> which everyone should be to some degree :)
21:56:00 <hartsocks> recently becoming a problem for us...
21:56:05 * bnemec adds it to his calendar
21:56:07 <hartsocks> :-)
21:56:31 <russellb> i'll be doing the release status sync with ttx in a 1-1 chat
21:56:34 <russellb> but in a public channel
21:56:45 <mikal> As performance art
21:56:45 <russellb> it's 1515 UTC tuesday
21:56:48 <russellb> if anyone cares :)
21:57:01 <russellb> right now in #openstack-dev
21:57:26 <russellb> we go through bugs/blueprint status for the upcoming milestone, and the release roadmap (icehouse) in general
21:57:33 <mriedem1> is this the link? https://wiki.openstack.org/wiki/Meetings#OpenStack_Project_.26_Release_Status_meeting
21:57:34 <lifeless> mikal: I will be happy to discuss with you later about this
21:57:35 <russellb> make sure we have movement on the highest priority things, etc
21:57:40 <russellb> mriedem1: yes
21:57:50 <mriedem1> looks like the wiki has the wrong time
21:58:00 <mikal> lifeless: sure, more than happy to, but please try to understand the historical reasons for the way things are first before assuming we're all wrong
21:58:14 <lifeless> mikal: I was here when the decision was made.
21:58:22 <lifeless> mikal: I know the historical reason :)
21:58:50 <russellb> mriedem1: why is the time wrong?  says 2100 UTC, same as this meeting
21:59:06 <mriedem1> russellb: thought you just said it's 1515
21:59:09 <mriedem1> i must be confused
21:59:19 <russellb> oh, no, that's my weekly nova status sync
21:59:25 <mriedem1> ah, ok
21:59:26 <russellb> the above 2100 UTC meeting is the cross-project meeting
21:59:29 <mriedem1> yeah
22:00:05 <mriedem1> time's up
22:00:10 <mriedem1> this has been fun
22:00:12 <russellb> alright, time up for today!
22:00:18 <russellb> thanks everyone!
22:00:22 <russellb> #endmeeting