14:00:33 <johnthetubaguy> #startmeeting nova
14:00:33 <openstack> Meeting started Thu Dec 10 14:00:33 2015 UTC and is due to finish in 60 minutes.  The chair is johnthetubaguy. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:37 <openstack> The meeting name has been set to 'nova'
14:00:40 <alaski> o/
14:00:53 <doffm> o/
14:01:00 <raildo> o/
14:01:01 <bauzas> \o
14:01:02 <claudiub> o/
14:01:05 <mriedem> o/
14:01:06 <takashin> o/
14:01:09 <jroll> \o
14:01:11 <johnthetubaguy> welcome all
14:01:11 <andrearosa> hi
14:01:14 <jichen> p/
14:01:15 <jichen> o/
14:01:18 <johnthetubaguy> #topic Release Status
14:01:23 <edleafe> \o
14:01:33 <johnthetubaguy> #info Jan 21: Nova non-priority feature freeze
14:01:42 <johnthetubaguy> #info Thursday December 17th: non-priority feature review bash day
14:01:50 <johnthetubaguy> #info Jan 19-21: mitaka-2
14:01:52 <rlrossit> o/
14:01:53 <ndipanov> 'sup
14:01:53 <scottda> hi
14:01:58 * kashyap waves
14:02:00 <johnthetubaguy> #info Blueprint and Spec Freeze was last week
14:02:26 <johnthetubaguy> so lots of dates there, just to share them
14:02:26 <andreykurilin__> o/
14:02:26 <johnthetubaguy> #link https://wiki.openstack.org/wiki/Nova/Mitaka_Release_Schedule
14:02:26 <markus_z> o/
14:02:34 <johnthetubaguy> so just a thought about the freeze
14:02:49 <johnthetubaguy> I have been through all the exception requests, and commented on the specs
14:02:56 <johnthetubaguy> I need to loop back on some that made revisions
14:03:00 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking
14:03:07 <johnthetubaguy> one spec-less one I wanted to raise
14:03:25 <johnthetubaguy> #link https://blueprints.launchpad.net/nova/+spec/osprofiler-support-in-nova
14:03:37 <johnthetubaguy> now there is some new code up to add osprofiler into nova
14:03:56 <johnthetubaguy> it's nowhere near as invasive as it was last time we saw that code
14:04:08 <johnthetubaguy> so I am tempted to approve that, let me know if that's crazy
14:04:14 <johnthetubaguy> Any more on process or release status?
14:04:22 <mriedem> haven't looked at the osprofiler changes
14:04:24 <johnthetubaguy> mriedem: how is v3.0 of python-novaclient coming along?
14:04:30 <danpb> i think it is pretty clear that osprofiler is going to be the standard mechanism for openstack projects
14:04:45 <mriedem> johnthetubaguy: i think we're waiting on a keystoneauth change from mordred
14:04:53 <danpb> so even if there are problems with it, i feel the best thing is to take it and work on improving osprofiler where needed
14:04:58 <mriedem> this https://review.openstack.org/#/c/245200/
14:05:04 <johnthetubaguy> mriedem: ack, didn't see an update last time
14:05:36 <danpb> as we don't want nova to be the odd one out with profiling - we've already seen people propose other nova specific solutions for profiling such as the spec to hijack notifications as the mechanism
14:05:42 <johnthetubaguy> danpb: yeah, we pushed back last time because there was a huge heap of profiler code in Nova, that looks to have been fixed, seems OK now
14:05:57 <mriedem> this is the code btw https://review.openstack.org/#/c/254703/
14:06:03 <mriedem> it's mostly middleware and config options
14:06:37 <danpb> i've not looked in detail, but on the surface the proposed code is pretty reasonable imho and similar to what i would have proposed for my own profiling attempts
14:07:07 <bauzas> if that's an optional middleware, that seems good to me
14:07:14 <alaski> if it's basically out of the way when off it seems reasonable to get it in and try it
14:07:29 <johnthetubaguy> bauzas: its actually better than that, but agreed
14:07:31 <bauzas> from a design PoV, should I say
14:07:45 <danpb> alaski: IIUC, it is a no-op unless it is enabled for an API call
14:07:47 <mriedem> sounds like we're ok with specless bp approval
14:08:01 * alex_xu waves hands late
14:08:01 <alaski> danpb: that's my understanding as well, but I haven't looked closely yet
14:08:08 <danpb> works for me,
14:08:14 <bauzas> mriedem: agreed
14:08:23 <johnthetubaguy> yep, lets move on
14:08:25 <danpb> IMHO there's no sense in a nova spec, as that'd just split the discussion from the cross project spec
14:08:31 <johnthetubaguy> reviewing the code is welcome
14:08:44 <johnthetubaguy> danpb: +1 totally why I am pushing for specless
14:09:06 <johnthetubaguy> as per the newly approved pattern from the summit discussions on cross project stuff
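[editor's note: for readers outside the meeting, the osprofiler behaviour danpb and alaski describe above (a no-op unless a request explicitly enables tracing) looks roughly like the sketch below. This is an assumed illustration built on osprofiler's public helpers, not the actual nova patch; the real wiring is in https://review.openstack.org/#/c/254703/ and may differ.]

    # Hedged sketch of opt-in osprofiler tracing; assumes the public
    # osprofiler.profiler helpers, not nova's actual integration code.
    from osprofiler import profiler

    def build_server():
        # This trace is a no-op unless profiler.init() was called for the
        # current request, which the WSGI middleware only does when the API
        # call carries signed trace headers and profiling is enabled.
        with profiler.Trace("compute.build_server", info={"example": "data"}):
            return "built"

    # Normally the middleware does this, keyed off request headers; the HMAC
    # key here is a made-up example.
    profiler.init(hmac_key="SECRET_KEY")
    build_server()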
14:09:14 <johnthetubaguy> cool, so any more on freeze stuff
14:09:21 <johnthetubaguy> just a clarification
14:09:28 <johnthetubaguy> API bugs that need specs are still allowed
14:09:46 <johnthetubaguy> that's proper bugs, not "add 17 new API calls" kinds of bugs
14:09:53 <andrearosa> i have a question about the non priority freeze
14:09:59 <johnthetubaguy> andrearosa: sure
14:10:07 <andrearosa> considering the Xmas time can we extend
14:10:12 <andrearosa> the deadline
14:10:22 <andrearosa> maybe for 2 more weeks?
14:10:30 <andrearosa> now it is Jan 21: Nova non-priority feature freeze
14:10:36 <johnthetubaguy> andrearosa: its already extended for christmas by mitaka-2 being further away
14:10:40 <bauzas> andrearosa: honestly, it was communicated since the beginning
14:10:50 <ndipanov> bauzas, so what?
14:10:56 <mriedem> yeah i don't want to extend,
14:11:00 <andrearosa> ok
14:11:03 <mriedem> i feel like we spent most of m-1 on spec reviews
14:11:13 <ndipanov> so?
14:11:26 <johnthetubaguy> I mean if we hadn't already extended the release schedule an extra week for christmas, I would consider it
14:11:32 <DinaBelova> johnthetubaguy, bauzas, mriedem, danpb - thank you guys for the approval of the bp
14:11:43 <ndipanov> we focus on specs so we should leave as little time as possible to write code
14:11:48 <bauzas> looking at number of weeks per milestone
14:11:59 <bauzas> m-2 is definitely the highest
14:12:10 <johnthetubaguy> ndipanov: we aimed to get specs merged before the summit, thats the aim, but it always spills over a little
14:12:12 <bauzas> because it takes account of Xmas
14:12:27 <mriedem> people can also write POC code while working on a spec
14:12:32 <bauzas> ++
14:12:35 <ndipanov> mriedem, lol
14:12:43 <ndipanov> that's not what we tell them
14:12:45 <ndipanov> anyway
14:12:47 <johnthetubaguy> anyways, to be clear, it's our deadline, and it's not aligned with the tagging
14:12:47 <mriedem> it's not?
14:12:54 <ndipanov> well we may say that
14:12:59 <ndipanov> but not what we encourage
14:13:03 <mriedem> i encourage it
14:13:04 <raildo> "Thursday December 17th: non-priority feature review bash day" Will that be a day when we can ask for reviews on non-priority features?
14:13:07 <mriedem> for the reason you're bringing up
14:13:12 <ndipanov> I'm in the minority here so I'll just step away
14:13:29 <mriedem> if anyone is discouraging writing POC code in parallel with a spec, i'd say they are wrong
14:13:30 <ndipanov> but the reality is
14:13:33 <ndipanov> we merged a ton of specs
14:13:39 <johnthetubaguy> raildo: well it's a day I want everyone to focus on reviewing the low priority blueprints that have been up for review the longest
14:13:39 <ndipanov> most of them won't make it
14:13:49 <ndipanov> a lot will be close and people will get super frustrated
14:13:55 <ndipanov> come Nandos
14:13:57 <ndipanov> we start over
14:13:58 <raildo> johnthetubaguy: nice :)
14:14:02 <ndipanov> makes no sense imho
14:14:04 * ndipanov stops
14:14:15 <mriedem> we also have a gate on fire,
14:14:16 <jroll> ndipanov: ahem, N is Nutella :P
14:14:21 <mriedem> and people need to work on those
14:14:27 <bauzas> or Nachos
14:14:28 <ndipanov> mriedem, because we focus on the wrong things in the gate
14:14:38 <mriedem> let's move on
14:14:39 <ndipanov> but that's a whole other topic we won't agree on
14:14:59 <johnthetubaguy> so, we have 121 blueprints approved, I expect us to have bandwidth to merge 70
14:15:02 <johnthetubaguy> ish
14:15:12 <johnthetubaguy> assuming we don't get loads more folks doing quality reviews
14:15:18 <ndipanov> we won't
14:15:25 <raildo> johnthetubaguy: maybe we can extend for another day, if it is not enough? :)
14:15:35 <johnthetubaguy> hence the spec deadline
14:16:00 <johnthetubaguy> anyways, this isn't a discussion that works well on IRC
14:16:02 <johnthetubaguy> lets move on
14:16:37 <johnthetubaguy> final point is if you are curious why we do the process this way, its mostly documented in here: https://wiki.openstack.org/wiki/Nova/Process
14:17:00 <johnthetubaguy> #topic Regular Reminders
14:17:17 <johnthetubaguy> so, we have subteams with lots of code ready for core review
14:17:23 <johnthetubaguy> #link https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:17:39 <johnthetubaguy> #topic Bugs
14:17:43 <johnthetubaguy> mriedem: so the gate
14:18:03 <mriedem> waiting on https://review.openstack.org/#/c/253901/
14:18:11 <mriedem> multinode grenade fails >50% of the time
14:18:20 <johnthetubaguy> a bad gate is almost always a sign of unstable code, so fixing this really matters to me
14:18:42 <sdague> mriedem: actually, that's mostly a nicety
14:18:53 <johnthetubaguy> so this is about pinning liberty?
14:18:54 <sdague> upper-constraints comes from the requirements repo
14:18:56 <mriedem> sdague: i see it's dropped off http://status.openstack.org/elastic-recheck/gate.html
14:19:00 <mriedem> sdague: oh right
14:19:02 <mriedem> yeah so we're good now
14:19:10 <mriedem> as of last night
14:19:15 <johnthetubaguy> oh, OK, so this is more defensive
14:19:22 <mriedem> well, it's the g-r sync
14:19:27 <mriedem> we blacklisted oslo.messaging 3.1.0
14:19:28 <sdague> oslo.messaging started having a much higher failure rate after service restart
14:19:34 <bauzas> oh, so there was a workaround for the job ?
14:19:41 <sdague> it's a 3.1.0 bug
14:19:45 <bauzas> oh I see
14:19:53 <bauzas> good to hear \o/
14:20:00 <mriedem> we still have a bunch of volume-related failures in the gate, at least one of which should be fixed by https://review.openstack.org/#/c/254428/
14:20:07 <sdague> some default timeouts were changed in a way that service restarts would often fail to reconnect to rabbit
14:20:09 <mriedem> or made less frequent by
14:20:09 <johnthetubaguy> ah, I see !=3.1.0
14:20:22 <johnthetubaguy> ah, so that makes sense, ouch
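[editor's note: the "!=3.1.0" mentioned above is a version-exclusion pin carried in the openstack/requirements repo. The snippet below only illustrates how such an exclusion specifier behaves, using the packaging library; the exact oslo.messaging bounds in global-requirements/upper-constraints may differ.]

    # Illustrative only: how a "blacklist one broken release" specifier works.
    from packaging.specifiers import SpecifierSet

    spec = SpecifierSet(">=2.8.1,!=3.1.0")  # lower bound is a made-up example
    print("3.1.0" in spec)  # False -- the broken release is excluded
    print("3.1.1" in spec)  # True  -- later releases are still allowed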
14:20:47 <johnthetubaguy> markus_z: any more bug related things you wanted to highlight today?
14:21:03 <markus_z> yepp
14:21:07 <markus_z> #info: zero known criticals. High prio bugs which are not in progress are steady at 41
14:21:12 <markus_z> I haven't yet found time to clean up the very old ones
14:21:16 <markus_z> #info: latest bug stats at http://lists.openstack.org/pipermail/openstack-dev/2015-December/081415.html
14:21:25 <markus_z> The number of bug reports has been rising heavily since mitaka-1. We have 76 "new" bugs overall.
14:21:31 <markus_z> 23 bugs out of the 76 are without any first bug triage and I'm out of bandwidth to do it. This will be a heavy load later when we want to be RC ready.
14:21:36 <markus_z> #help: anyone (or more) volunteering for a "bug skimming duty" for a week?
14:21:40 <johnthetubaguy> mriedem: btw, do we have people working the top couple of gate issues, or are there some specific ones folks need to jump on?
14:21:50 * markus_z drops mic
14:22:02 <markus_z> na, seriously, we need more people triaging
14:22:06 <mriedem> johnthetubaguy: i have that patch up for the volumes one
14:22:07 <doffm> markus_z: Sure. Some of the week count? :)
14:22:17 <johnthetubaguy> more help with bug triage would be a good thing!
14:22:37 <mriedem> johnthetubaguy: i think http://status.openstack.org/elastic-recheck/gate.html#1522488 was fixed by infra, the 2 nodes had the same hostname
14:22:42 <mriedem> dansmith figured that out
14:22:43 <markus_z> doffm: What do you mean?
14:23:06 <doffm> markus_z: I mean I'm happy to help with triaging and bug skimming.
14:23:19 <ndipanov> mriedem, I see there was some good discussion on that patch so will look at it now
14:23:32 <mriedem> ndipanov: yeah, alaski and i looked into the locking more yesterday
14:23:36 <lxsli> markus_z: are there instructions?
14:23:42 <markus_z> doffm: cool! the more the better.
14:24:04 <markus_z> lxsli: I tried to summarize it once but didn't get it merged in any of the manuals
14:24:19 <sdague> mriedem: yeh the fail rates are getting back under control - http://tinyurl.com/zbg788y
14:24:30 <markus_z> lxsli: That's all I know: http://markuszoeller.github.io/posts/openstack-bugs/
14:24:41 <johnthetubaguy> markus_z: let's try to get it into ours again
14:24:58 <lxsli> markus_z: thanks!
14:25:32 <johnthetubaguy> so lets keep moving
14:25:52 <johnthetubaguy> #topic Stuck Reviews
14:26:05 <johnthetubaguy> something tells me I didn't clean these out, I could be wrong
14:26:14 <mriedem> the uefi one still needs discussion i think
14:26:23 <mriedem> i see there was an exception request, and danpb is +2
14:26:27 <mriedem> https://review.openstack.org/#/c/235983/
14:26:32 <johnthetubaguy> mriedem: OK
14:26:34 <andrearosa> the nova-manage one is just a quick reminder, I added it on purpose
14:26:44 <markus_z> doffm: lxsli: ping me in #openstack-nova after the meeting if you have further questions
14:26:44 <mriedem> i think i'm ok with this not having testing as long as there is a warning when it's used that it's experimental
14:27:37 <johnthetubaguy> it currently says add a flag for this in tempest, which sounds wrong
14:28:19 <mriedem> that's wrong
14:28:21 <bauzas> are we writing somewhere that an experimental feature means that we can remove it without a deprecation cycle ?
14:28:32 <mriedem> that implies testing upstream, which we can't do until we have new enough libvirt
14:28:42 <mriedem> bauzas: no
14:28:43 <johnthetubaguy> bauzas: so that was my feature classification thing
14:28:50 <mriedem> oh
14:28:55 <johnthetubaguy> mriedem: well it's just not a flag in tempest, right, it's an image setting
14:29:04 <bauzas> mriedem: I'd feel that it would help people balance the benefits vs. the risks
14:29:10 <mriedem> johnthetubaguy: right, per my ML thread, you'd configure tempest with the uefi image id
14:29:12 <mriedem> to enable that test
14:29:23 <mriedem> but you need an env with libvirt 1.2.9 and the ovmf package
14:29:28 <mriedem> which we don't have
14:29:47 <rgerganov> what about https://review.openstack.org/#/c/228778/ ? it's pretty much a bugfix but I posted a spec because of the API change
14:29:49 <johnthetubaguy> mriedem: bauzas: well the feature classification doesn't quite do that, just helps us audit what is untested, and think about removing some of it
14:29:54 <mriedem> intel could provide 3rd party ci, but i'm not sure how much we care as long as there is a warning about it being untested and experimental
14:30:06 <bauzas> johnthetubaguy: I certainly agree, I was just thinking of something more formal like what we say for our supported APIs
14:30:10 <mriedem> cells v1 is experimental but if we removed that w/o a deprecation cycle people's heads would explode
14:30:14 <johnthetubaguy> right, with a warning, it seems an edge case, so that's OK
14:30:38 <bauzas> mriedem: I know, but that's just something we want to explain that it's risky
14:31:04 <johnthetubaguy> mriedem: bauzas: yeah, I should clarify, any feature removal would need a deprecation cycle, I misread that, I am thinking about deprecating stuff that's just not being maintained
14:31:04 <mriedem> i think this is why we need to make the bar to inclusion high
14:31:06 <bauzas> honestly, the thought of cells v1 possibly being removed without further notice doesn't really make me afraid :)
14:31:16 <edleafe> bauzas: doesn't it mean "not guaranteed to work 100% of the time"?
14:31:19 <mriedem> not the exit criteria, because just removing stuff pisses people off
14:31:29 <mriedem> edleafe: right
14:31:33 <mriedem> like zookeeper in nova
14:31:36 <bauzas> edleafe: well, it sounds people are not afraid by experimental things
14:31:38 <mriedem> there are lots of untested things
14:31:53 <edleafe> bauzas: the thrill of living on the edge! :)
14:31:54 <johnthetubaguy> right, there was a plan to deprecate that, its just not moving very quickly
14:32:04 <mriedem> johnthetubaguy: the tooz thing?
14:32:07 <johnthetubaguy> yeah
14:32:08 <mriedem> anyway, just an example
14:32:15 <johnthetubaguy> a very valid one
14:32:33 <bauzas> so I'm fine with adding some notion of bleeding edge, but that should be very explicit that it's not really fully covered, nor supported
14:32:47 <mriedem> johnthetubaguy: so for this uefi one, i just asked that the spec point out there will be a warning that it's untested
14:32:47 <johnthetubaguy> we're getting stuck here
14:32:51 <mriedem> then i'm +2
14:32:59 <bauzas> johnthetubaguy: fair to say that
14:33:12 <johnthetubaguy> mriedem: yeah, I would be happy with that too
14:33:29 <johnthetubaguy> it doesn't seem big enough to worry about any more than that
14:33:35 <mriedem> agreed
14:33:56 <johnthetubaguy> rgerganov: you linked a bug fix that has a spec, that's not affected by the freeze
14:34:19 <rgerganov> johnthetubaguy, ah ok then
14:34:21 <johnthetubaguy> there was another one on the list right
14:34:40 <johnthetubaguy> andrearosa: the nova-manage volume thingy
14:34:45 <andrearosa> johnthetubaguy: yes
14:34:52 <andrearosa> I do not want to discuss it here
14:35:01 <andrearosa> sdague started a ML thread to get more info
14:35:07 <johnthetubaguy> we didn't want an API for a DB hack, so we said nova-manage, but the current nova-manage command looks a bit like an API
14:35:15 <johnthetubaguy> or something like that
14:35:18 <andrearosa> I tried to recap the long discussions we had
14:35:37 <andrearosa> if people interested could follow up on the ML thread I think that will help in moving on
14:35:39 <sdague> andrearosa: ... where is that recap?
14:35:51 <sdague> I don't see anything back on the mailing list
14:36:01 <scottda> recap: http://lists.openstack.org/pipermail/openstack-dev/2015-December/081119.html
14:36:19 <DuncanT> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081119.html
14:36:24 <mriedem> the message got chopped a bit
14:36:31 <sdague> hmm... and broke threading
14:36:33 <andrearosa> scottda: thanks
14:36:44 <sdague> so I didn't see the response
14:36:46 <mriedem> there he is
14:36:55 <andrearosa> sdague: yes sorry I blame MS outlook
14:37:25 <andrearosa> as I said I do not want to discuss it here,the ML seems more appropriate
14:38:14 <mriedem> scottda: DuncanT: andrearosa: you might also be interested in https://review.openstack.org/#/c/254428/
14:38:15 <johnthetubaguy> well are there any points people want to discuss here
14:38:19 * sdague takes todo to go read the other part of the thread
14:38:24 <johnthetubaguy> OK
14:38:32 <andrearosa> sdague: thanks.
14:38:53 <johnthetubaguy> making the DELETE API just work when it's stuck in the deleting state seems to make sense to me
14:39:21 <johnthetubaguy> not totally sure how we sort that, feels like the thing that failed should have moved the state from deleting back to attached
14:39:28 <johnthetubaguy> but like you say, lets take that offline
14:39:35 <johnthetubaguy> seems like:
14:39:46 <johnthetubaguy> * should the API just do the correct thing?
14:40:01 <e0ne> can cinder help with this? we're going to change the API a bit to store connector_info in the DB
14:40:03 <mriedem> thought you were taking it offline? :)
14:40:03 <johnthetubaguy> * do we need some other API or DB hack to fix up some other edge cases that are left?
14:40:14 <johnthetubaguy> mriedem: yeah
14:40:28 <mriedem> we also discussed race related volume fails in https://review.openstack.org/#/c/254428/
14:40:35 <mriedem> like moving volume create to api or conductor rather than compute
14:40:42 <johnthetubaguy> #topic open discussion
14:40:43 <mriedem> so we always have the volume_id before we build the instance
14:40:56 <johnthetubaguy> mriedem: oh good point, also related to detaching a volume for a shelved instance
14:41:21 <mriedem> e0ne: we also store the connection_info in the nova db
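[editor's note: johnthetubaguy's point above, that whatever failed should have rolled the volume state from "deleting" back to "attached", is a compensating-rollback pattern. The sketch below is a generic, hypothetical illustration of that idea; the class and function names are invented and do not correspond to real nova or cinder code.]

    # Hypothetical rollback-on-failure sketch; all names are made up.
    class Volume:
        def __init__(self):
            self.state = "attached"

    def detach_from_backend(volume):
        raise RuntimeError("backend failure")  # simulate the step that fails

    def delete_attachment(volume):
        volume.state = "deleting"
        try:
            detach_from_backend(volume)
            volume.state = "deleted"
        except Exception:
            # Roll back so the volume is not left stuck in "deleting" and the
            # user can simply retry the DELETE instead of needing a DB hack.
            volume.state = "attached"
            raise

    vol = Volume()
    try:
        delete_attachment(vol)
    except RuntimeError:
        pass
    print(vol.state)  # "attached" -- not stuck in "deleting"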
14:41:41 <johnthetubaguy> so I have two things for open discussion
14:41:59 <johnthetubaguy> the meetings on 24 December and 31st December
14:42:03 <e0ne> mriedem: I didn't know that. need to dig deeper into the code
14:42:13 <johnthetubaguy> I was thinking about cancelling those ones, or do people still want to do those?
14:42:26 <markus_z> nope, I won't be here
14:42:32 <mriedem> xmas eve and new years eve
14:42:40 <sdague> cancel them
14:42:43 <johnthetubaguy> mriedem: yeah, seems like prime skip candidates
14:42:44 <mriedem> we should maybe force people to take those days off?
14:42:46 <claudiub> maybe on some other earlier days?
14:42:47 <bauzas> ++
14:42:54 <johnthetubaguy> mriedem: not a terrible idea
14:43:02 <johnthetubaguy> so yeah, lets skip those meetings
14:43:05 <dansmith> like make the -nova channel moderated? :)
14:43:12 <johnthetubaguy> dansmith: heh
14:43:20 <mriedem> proxy all discussion through twitter
14:43:22 <bauzas> kick anyone discussing on those days?
14:43:37 <johnthetubaguy> so I don't think anyone jumped for joy, but does someone fancy the cross project CPL job?
14:44:35 <johnthetubaguy> OK, offer is still out there
14:44:40 <mriedem> so is that someone sitting in the cross project specs repo?
14:44:59 <johnthetubaguy> mriedem: yeah, and attending the cross project meeting, should it happen
14:45:17 <johnthetubaguy> it's not very Europe timezone friendly though
14:45:43 <johnthetubaguy> so that's the open discussion stuff
14:45:49 <johnthetubaguy> I am guessing we are done
14:46:00 <johnthetubaguy> not as quick as alaski or mriedem, but a touch early
14:46:03 <johnthetubaguy> thanks all
14:46:12 <johnthetubaguy> #endmeeting