14:00:14 <mriedem> #startmeeting nova
14:00:15 <openstack> Meeting started Thu May  5 14:00:14 2016 UTC and is due to finish in 60 minutes.  The chair is mriedem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:19 <openstack> The meeting name has been set to 'nova'
14:00:22 <mriedem> adrian_otto akuriata alevine alexpilotti aloga andreykurilin anteaya artom auggy
14:00:22 <mriedem> bauzas belliott belmoreira bobball cburgess claudiub danpb dguitarbite _diana_
14:00:22 <mriedem> diana_clarke dims duncant edleafe efried flip214 funzo garyk gcb gjayavelu
14:00:22 <mriedem> irina_pov jaypipes jcookekhugen jgrimm jichen jlvillal jroll kashyap klindgren
14:00:26 <dansmith> o/
14:00:27 <BobBall> o/
14:00:27 <doffm> o/
14:00:27 <takashin> o/
14:00:29 <diana_clarke> o/
14:00:30 <andrearosa> hi
14:00:30 <edleafe> \o
14:00:30 <mriedem> krtaylor lbeliveau lxsli macsz markus_z mdorman med_ mikal mjturek mnestratov
14:00:30 <mriedem> moshele mrda nagyz ndipanov neiljerram nic Nisha PaulMurray raildo rgeragnov
14:00:30 <mriedem> sc68cal scottda sdague sileht sorrison swamireddy thomasem thorst tjones tonyb
14:00:30 <mriedem> tpatil tpatzig xyang rdopiera sarafraj woodster sahid rbradfor junjie gsilvis
14:00:31 <jroll> hai
14:00:34 <johnthetubaguy> o/
14:00:37 <rlrossit> o/
14:00:38 <sdague> o/
14:00:40 * kashyap waves
14:00:42 <alex_xu> o/
14:00:44 <rbradfor> o/
14:00:44 <alaski> o/
14:00:44 <scottda> hi
14:00:51 <andreykurilin__> \o
14:00:52 <thorst> hi
14:00:52 <raildo> o/
14:00:55 <efried> yo
14:00:59 <dansmith> cripes
14:00:59 <lbeliveau> hey
14:01:05 <lxsli> o/
14:01:10 <abhishek> o/
14:01:10 <dims> o/
14:01:11 <cdent> o/
14:01:16 <mriedem> an army
14:01:16 <gibi> o/
14:01:23 <mriedem> #link agenda https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting
14:01:25 <efried> post-summit spike?
14:01:29 <mriedem> of course
14:01:29 <dims> reporting for duty sir! :)
14:01:39 <mriedem> #topic release news
14:01:45 <mriedem> #link Rough outline of Newton release milestones is up: https://wiki.openstack.org/wiki/Nova/Newton_Release_Schedule
14:01:57 <mriedem> ^ is based on the priorities session we had last week
14:02:22 <mriedem> i have a patch to get it into the releases repo also https://review.openstack.org/#/c/312245/
14:02:31 <mriedem> just need dims to approve that
14:02:43 <dims> yep, done
14:03:05 <dims> was just reviewing that :)
14:03:07 <mriedem> the next milestone is n-1 which is also non-priority spec approval freeze, on june 2nd
14:03:20 <mriedem> so we have ~4 weeks to review specs
14:03:26 <mriedem> *non-priority specs
14:03:47 <mriedem> what are the priorities? you ask
14:03:50 <mriedem> well, bam! https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html
14:04:04 <mriedem> also from the session last week
14:04:33 <mriedem> we have oodles of specs coming in from after the summit, several are just TODOs from discussions
14:04:37 <mriedem> a lot around cells v2 and API
14:04:38 * johnthetubaguy nods with approval in mriedem's general direction
14:05:03 <mriedem> any questions on the release schedule?
14:05:16 <mriedem> #topic bugs
14:05:17 <jlvillal> o/
14:05:33 <mriedem> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:05:55 <mriedem> the only news i have here is devstack switched to using fernet tokens by default on 4/29
14:06:05 <mriedem> and now we have race failures in the identity testes
14:06:07 <mriedem> *tests
14:06:19 <dansmith> and some neutron tests in our unit tests right?
14:06:21 <mriedem> a fix merged yesterday but i'm still seeing failures in logstash
14:06:31 <mriedem> dansmith: which looks like mox?
14:06:34 <mriedem> i've seen something like that
14:06:41 <dansmith> some port create failure or something
14:06:45 <mriedem> yeah
14:06:55 <mriedem> i don't have a bug or query for that one
14:07:04 <efried> Need to port away from mox?
14:07:06 <mriedem> usually those were in py34 jobs, but that one wasn't
14:07:15 <mriedem> efried: well,
14:07:18 <dansmith> I saw these in py27
14:07:20 <mriedem> have you seen that class?
14:07:26 <mriedem> mega mox setup in the base class
14:07:35 <mriedem> so porting that one is going to be a massive undertaking
14:07:43 <dansmith> in fact,
14:07:51 <efried> mriedem, I can't remember why we hate mox.
14:07:56 <dansmith> it's going to nearly require a parallel effort to make sure we get coverage before we remove the mox ones,
14:07:59 <mriedem> efried: it doesn't support py3 for one
14:08:06 <dansmith> because those tests are soooo tightly coupled to the code
14:08:06 <mriedem> dansmith: yeah
14:08:23 <mriedem> efried: there is a doc about it somewhere with more details
14:08:39 <mriedem> we could at least start by converting the test that's failing to mock, and then drop the old one
14:08:48 <mriedem> i'm sure it's some variant of allocate_for_instance
14:08:50 <efried> mriedem, +1
14:09:29 <mriedem> efried: it's in http://docs.openstack.org/infra/manual/developers.html#peer-review
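(A minimal sketch of the mox-to-mock conversion being discussed above, assuming hypothetical class and test names rather than nova's actual network test code; the mox lines shown in comments are only a rough illustration of the old style.)

```python
import unittest
from unittest import mock


class FakeNetworkAPI(object):
    """Hypothetical stand-in for the API class whose tests use mox today."""

    def _create_port(self, context, instance):
        raise NotImplementedError("would talk to neutron for real")

    def allocate_for_instance(self, context, instance):
        return [self._create_port(context, instance)]


class TestAllocateForInstance(unittest.TestCase):
    # The rough mox equivalent, set up in a shared base class today, would be:
    #   self.mox.StubOutWithMock(api, '_create_port')
    #   api._create_port(ctxt, inst).AndReturn({'id': 'fake-port'})
    #   self.mox.ReplayAll()
    # The mock version stubs the same method per test, which makes it easier
    # to verify coverage before the old mox-based test is dropped.
    @mock.patch.object(FakeNetworkAPI, '_create_port',
                       return_value={'id': 'fake-port'})
    def test_allocate_for_instance(self, mock_create):
        api = FakeNetworkAPI()
        ports = api.allocate_for_instance('ctxt', 'inst')
        self.assertEqual([{'id': 'fake-port'}], ports)
        mock_create.assert_called_once_with('ctxt', 'inst')


if __name__ == '__main__':
    unittest.main()
```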
14:09:45 <mriedem> i'm not seeing our faithful bugs subteam people
14:09:57 <mriedem> the bugs meeting this week was pretty short, people are still recovering from the summit
14:10:13 <mriedem> our new bugs are rising though https://bugs.launchpad.net/nova/+bugs?field.searchtext=&field.status%3Alist=NEW&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_no_branches.used=&field
14:10:37 <mriedem> so if you get a spare 10 min, triage a bug, won't you?
14:10:53 <mriedem> #topic reminders
14:11:01 <mriedem> #info Newton mid-cycle RSVP closes on Tuesday 5/10: http://lists.openstack.org/pipermail/openstack-dev/2016-May/093815.html
14:11:24 <mriedem> if you are planning, or thinking of asking, to go to the midcycle, please rsvp ^ by next tuesday
14:11:45 <mriedem> #info api-ref docs cleanup review sprints on Monday 5/9 and Wednesday 5/11: http://lists.openstack.org/pipermail/openstack-dev/2016-May/093844.html
14:11:52 <edleafe> The Watcher midcycle is also at Hillsboro on the same dates, FYI
14:12:00 <mriedem> ok
14:12:29 <mriedem> we're going to do a review sprint next monday and wednesday for the api-ref docs cleanup
14:12:43 <mriedem> there are already a bunch of changes up for that
14:13:07 <mriedem> well, there were yesterday; looks like there were some busy beavers overnight
14:13:23 <mriedem> #link Newton review focus list: https://etherpad.openstack.org/p/newton-nova-priorities-tracking
14:13:40 <mriedem> just remember to refresh ^
14:13:58 <mriedem> sdague also has dashboards for the virt driver subteams now i think?
14:14:15 <sdague> mriedem: I had a proposed set, was looking for feedback on it
14:14:23 <sdague> it's posted to the list
14:14:40 <mriedem> #link driver review dashboards ML thread http://lists.openstack.org/pipermail/openstack-dev/2016-May/093753.html
14:14:45 <mriedem> yeah, i haven't read it all yet
14:14:50 <mriedem> thanks for posting that though
14:14:59 <BobBall> Just checking that you saw my feedback sdague :)
14:15:11 <sdague> BobBall: yep
14:15:28 <BobBall> Coolio
14:15:29 <mriedem> claudiub and garyk aren't here
14:15:33 <mriedem> so i guess...
14:15:51 <mriedem> We have 50 approved blueprints: https://blueprints.launchpad.net/nova/newton - 5 are completed, 5 have not started, 3 are blocked
14:16:05 <mriedem> ^ as of yesterday
14:16:26 <mriedem> #help https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty Volunteers for 1 week of bug skimming duty?
14:16:37 <johnthetubaguy> sdague: hmm, that looks really quite good, with just looking for a +1, nice
14:16:52 <mriedem> looks like the table in https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty needs to be updated
14:16:56 <mriedem> #action mriedem to update the table in https://wiki.openstack.org/wiki/Nova/BugTriage#Weekly_bug_skimming_duty
14:17:03 <jroll> sdague: what's the best way to add ironic there?
14:17:27 <jroll> is it in the dashboard repo or?
14:17:37 <mriedem> jroll: he's proposing it for that i think
14:17:45 <sdague> jroll: yes, it's in the dashboard repo already in that form
14:17:47 <mriedem> see what's in the ML thread and copy/change for ironic
14:17:57 <jroll> sdague: okay, I'll propose a change there
14:18:02 <sdague> jroll: great
14:18:08 <jroll> thanks
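(For context, a gerrit-dash-creator dashboard definition of the kind being proposed generally looks like the sketch below; the file name, foreach filter, and section queries are illustrative guesses, not the actual dashboards from the ML thread.)

```ini
# dashboards/nova-ironic-driver.dash (hypothetical)
[dashboard]
title = Nova Ironic Driver Review Inbox
description = Review Inbox
foreach = project:openstack/nova status:open NOT label:Workflow<=-1 file:^nova/virt/ironic/.*

[section "Needs final +2"]
query = label:Code-Review>=2 NOT label:Code-Review<=-1

[section "Has a +1, needs more reviews"]
query = label:Code-Review>=1 NOT label:Code-Review>=2 NOT label:Code-Review<=-1
```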
14:18:11 <mriedem> anything else on bugs?
14:18:28 <mriedem> #topic stable branches
14:18:34 <mriedem> #link Stable branch status: https://etherpad.openstack.org/p/stable-tracker
14:18:49 <mriedem> there is nothing really new for issues affecting nova as far as i know
14:18:56 <mriedem> stable team is on summit hangover also
14:19:23 <mriedem> we have quite a few mitaka and liberty backports open for review
14:19:38 <mriedem> and kilo is in eol freeze
14:19:59 <mriedem> Daviey has been going through all stable/kilo changes and -2ing them to prep for the final release before EOLing the branch
14:20:25 <mriedem> so if you have something in https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/kilo,n,z that you think needs to get in, you need to propose a freeze exception to the dev list
14:20:46 <mriedem> #topic Subteam Highlights
14:20:56 <mriedem> i hope people are here :)
14:21:01 <mriedem> alaski: cells v2 meeting recap?
14:21:14 <alaski> lots of jetlag
14:21:16 <mriedem> was mostly do reviews
14:21:22 <mriedem> bunch of specs up for review
14:21:29 <mriedem> dansmith is working on migrating keypairs to the api db
14:21:34 <alaski> yes, a lot of db migrations under discussion
14:21:45 <alaski> melwitt is working on mq switching
14:21:59 <alaski> ccarmack on testing still
14:22:11 <alaski> and everyone else on db migrations
14:22:33 <mriedem> ok, and the add-buildrequest-obj series
14:22:38 <alaski> yep
14:22:44 <mriedem> which looks like it partially merged last week or so
14:22:48 <alaski> I'm sidetracked on policy briefly and then I'll be back to that
14:22:52 <mriedem> yup
14:22:56 <mriedem> ok, thanks
14:23:12 <mriedem> we don't have jaypipes, anyone else for the scheduler meeting highlights?
14:23:14 <mriedem> cdent: ?
14:23:24 <cdent> hola
14:23:27 * cdent thinks
14:23:48 <mriedem> the spec for generic-resource-pools is top priority this week i think
14:24:01 <mriedem> https://review.openstack.org/#/c/300176/
14:24:07 <jroll> not from the meeting, but we had a long talk with jay about scrapping the currently proposed ironic multiple compute host thing in favor of leveraging generic-resource-pools
14:24:07 <cdent> Yeah, nothing huge to report from the meeting, but yes, the specs rolling in on the resource-provider stack are active
14:24:11 <mriedem> and then resource-providers-allocations
14:24:23 <cdent> yeah, dynamic resource classes was posted by jay recently
14:24:25 <mriedem> jroll: cool
14:24:44 <mriedem> jroll: when i went back and read the re-approved compute hosts spec for ironic i was thinking it sounded a lot like generic-resource-pools
14:24:49 <jroll> mriedem: might end up with a spec coming your way on that
14:24:50 <cdent> there's some heated debate going on related to how to do logging of hosts not matching during scheduling
14:24:51 <jroll> yeah
14:25:01 <jroll> there's some ops-ability things to work out but I think it's solvable
14:25:06 <mriedem> cdent: cfriesen's spec?
14:25:15 <cdent> and some quibbling over whether or not to use the db to resolve scheduling
14:25:18 <cdent> mriedem: yes
14:25:25 <cdent> logging: https://review.openstack.org/#/c/306647/
14:25:43 <cdent> dynamic resource classes: https://review.openstack.org/#/c/312696/
14:25:54 <mriedem> ok, i do see some ops people +1ing https://review.openstack.org/#/c/306647/
14:25:57 <edleafe> seems like we had that filter action logging discussion years ago
14:25:58 <mriedem> i brought that up in the ops meeting yesterday
14:26:00 <mriedem> glad they chimed in
14:26:03 <cdent> scheduling in db: https://review.openstack.org/#/c/300178/
14:26:22 <cdent> yeah, quite a lot of input on the logging for "new" people, which is very useful
14:26:30 <edleafe> cdent: +1
14:26:59 <mriedem> ok, and there is the thread about placement CLI/API in the dev list, but let's not get into that here
14:27:08 <cdent> I'm, personally, still vaguely confused on the need for the logs (see my comment on https://review.openstack.org/#/c/300178/ for doing resource management via filters being bad)
14:27:34 <mriedem> #link dev list thread on placement API/CLI http://lists.openstack.org/pipermail/openstack-dev/2016-May/093955.html
14:27:57 <mriedem> moving on
14:28:03 <mriedem> PaulMurray isn't around
14:28:11 <mriedem> anyone can speak for the live migration meeting?
14:28:12 <mriedem> mdbooth: ?
14:28:30 <mdbooth> I hadn't prepared anything
14:28:30 <lxsli> mriedem: give him one sec
14:28:35 <lxsli> yelling at him
14:28:46 <mdbooth> PaulMurray would be best :)
14:28:51 <PaulMurray> hi -
14:29:00 <PaulMurray> just joined - whatsup
14:29:06 <mriedem> PaulMurray: hi! any highlights from the live migration meeting this week?
14:29:39 <PaulMurray> we did a recap of the summit
14:30:05 <mriedem> ok
14:30:15 <PaulMurray> pretty much the same as the email I put on ML
14:30:17 <mriedem> moving on?
14:30:37 <mriedem> at some point i plan on adding a patch for an lvm gate job
14:30:42 <mriedem> for mdbooth's refactor series
14:30:44 <mdbooth> mriedem: +1
14:30:58 <mriedem> sdague: want to highlight the api subteam meeting?
14:31:04 <sdague> sure
14:31:12 <mdbooth> mriedem: Not sure this is the right time, but I'm also not convinced we have CI coverage of both Qcow2 and 'Raw' backends
14:31:21 <mdbooth> Suspect we only cover 1, but may be wrong
14:31:39 <sdague> a lot of it was on the deletion of the legacy v2 code in tree, which we agreed at summit
14:32:05 <sdague> the paste.ini entries are fully removed now, as well as the functional tests for the legacy code
14:32:25 <sdague> so there will be a series of test tear downs, then the final code remove over the next couple of weeks
14:33:02 <sdague> we're doing the api-ref sprint next week, I'm trying to work out something for a progress, burn down chart
14:33:33 <sdague> and lastly people are focussed on the policy in code specs to get those reviewed to completion
14:33:46 <sdague> that's the big focus for this week
14:34:02 <mriedem> ok, thanks, already making good progress on the summit items
14:34:15 <mriedem> i don't see moshele around
14:34:29 <sdague> yeh, there are a few specs on delete / deprecation which are being posted as well, based on summit items
14:34:36 <mriedem> but he did post some notes from the sriov/pci meeting this week to the dev list
14:34:59 <mriedem> lbeliveau and sfinucane are working on filling gaps in the NFV docs
14:35:36 <mriedem> and it sounds like mellanox ci is working on moving to containers for their ci like how the intel nfv ci does it
14:35:38 <lbeliveau> mriedem: a couple of ones are already merged
14:35:46 <mriedem> lbeliveau: ah, cool
14:35:51 <mriedem> thanks for working on that
14:36:04 <mriedem> gibi: did you want to share anything about the notifications meeting?
14:36:10 <gibi> sure
14:36:34 <gibi> the etherpad https://etherpad.openstack.org/p/nova-versioned-notifications is up to date after the summit
14:36:46 <gibi> we are working on to get the transformation spec merged
14:37:01 <mriedem> https://review.openstack.org/#/c/286675/
14:37:10 <gibi> johnthetubaguy gave feedback so couple of things needs to be updated
14:37:12 <mriedem> i see john has the -1 of death on there
14:37:27 <johnthetubaguy> its not too serious, its mostly little things
14:37:34 <mriedem> of death
14:37:37 <gibi> :)
14:37:52 <johnthetubaguy> we have an odd dependency on stuff that modifies notifications
14:37:53 <mriedem> it sounds like the searchlight team is also interested in helping with this, which is great
14:38:07 <johnthetubaguy> but an idea came up where we just add TODOs for new notifications, while gibi gets that sorted
14:38:08 <gibi> besides that, quite a few new notification specs are up for review from different parties
14:38:40 <johnthetubaguy> because of that "no more old style notifications" code review rule we decided on
14:39:00 <gibi> yes, I will try to put up PoC code soon for instance.delete
14:39:08 <mriedem> johnthetubaguy: is someone going to put that in the nova devref review guide page?
14:39:10 <gibi> that will help others
14:39:23 <gibi> mriedem: I think it is already there...
14:39:39 <johnthetubaguy> yeah, its in there already
14:39:41 <johnthetubaguy> down the bottom
14:39:44 <mriedem> http://docs.openstack.org/developer/nova/code-review.html#notifications
14:39:50 <mriedem> hells bells
14:39:53 <gibi> bottom of the page
14:39:54 <johnthetubaguy> thats the one
14:40:05 <mriedem> alright, sounds good, thanks
14:40:16 <mriedem> moving on
14:40:19 <mriedem> #topic stuck reviews
14:40:27 <mriedem> there is nothing in the agenda
14:40:40 <mriedem> #topic open discussion
14:40:51 <mriedem> abhishek: are you around?
14:41:00 <abhishek> hi
14:41:00 <mriedem> review request for: Set migration status to 'error' on live-migration failure - https://review.openstack.org/#/c/215483/
14:41:10 <abhishek> I have already requested alaski for his review
14:41:47 <alaski> I've mostly lost the context on it since it's from a while ago
14:42:04 <alaski> but there's a short term workaround that can be applied if we modify task states a bit on failure
14:42:19 <mriedem> abhishek: have you talked to tdurakov about this?
14:42:21 <alaski> and then an idea to proactively clean stuff up and not rely on periodic tasks doing error cleanups
14:42:22 <johnthetubaguy> this is error vs failed in the migration object right?
14:42:25 <mriedem> seems like something related to a thing he was working on
14:42:36 <alaski> johnthetubaguy: yes
14:42:40 <abhishek> johnthetubaguy: right
14:42:53 <abhishek> mriedem: no I will catch him
14:42:59 <PaulMurray> alaski, there were two queries on this
14:43:14 <PaulMurray> one was failed vs error as an api visible change
14:43:28 <PaulMurray> the other was about continuing with the hacky cleanup - which is
14:43:33 <PaulMurray> what you're referring to I think
14:43:45 <PaulMurray> do we care about the failed vs error question ?
14:44:03 <johnthetubaguy> I am more worried about the clean up working, honestly
14:44:41 <alaski> personally I would rather this be done correctly, i.e. without relying on a periodic
14:44:53 <abhishek> IMO right now there is no way we can clean this up other than the periodic task
14:44:54 <mriedem> might be worth asking on the ops channel
14:44:56 <alaski> but I don't understand the complexities of changing the failure state
14:45:16 <PaulMurray> alaski, its just that its seen in the api
14:45:19 <mriedem> or ops list
14:45:32 <mriedem> yeah, so it would be a behavior change in the API
14:45:58 <PaulMurray> not sure about meaning of error vs failed in this case too
14:46:10 <PaulMurray> I think one is meant to be recoverable and the other not ?
14:46:13 <alaski> yeah, I guess I don't understand why those are two different states
14:46:18 <mriedem> "The 'error' means something unexpected happened, and nova can't cleanup  it automatically. The 'failed' means something wrong happened, and nova  can handle it clearly."
14:46:23 <mriedem> from alex_xu in the patch
14:47:13 <mriedem> there might be more info about this in ndipanov's migration state machine spec?
14:47:14 <alaski> this gets into a whole area of api versioning that I don't think is well covered yet
14:47:19 <PaulMurray> so it looks like we are using error to trigger an auto cleanup ?
14:47:32 <alaski> if we change the state machine in Nova do we have to advertise that?
14:48:22 <dansmith> we don't publish the actual state machine (because there isn't really one) so I don't think it matters
14:48:28 <alaski> I mean, shouldn't there be an expectation that either error or failure could happen
14:48:30 <dansmith> plus it differs between hypervisors for things like boot
14:48:31 <mriedem> if you're writing client side code that's checking for error or failed on the migration object, i doubt you're doing anything differently for either state
14:48:40 <johnthetubaguy> alaski: we have been hiding the state changes with alias and unknown status, AFAIK
14:48:40 <mriedem> to a client they are both failures
14:49:05 <alaski> mriedem: yeah
14:49:11 <johnthetubaguy> dansmith: hmm, true, the state changes and progress increments in slightly different ways I guess
14:49:16 <dansmith> yeah
14:50:08 <alaski> so to me it seems that changing from error to failure, or the other way, should be fine
14:50:17 <alaski> if what's being used now is wrong
14:50:33 <mriedem> the change is using 'error' instead of 'failed' for live migration
14:50:37 <mriedem> to work like cold migrate/resize
14:50:42 <mriedem> so that periodic cleanup task finds it
14:50:54 <abhishek> right
14:50:56 <mriedem> because that checks for 'error' migrations
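(A small, self-contained sketch of the point being made: a periodic cleanup that filters on a status of 'error' never sees live migrations left in 'failed'. The function name and dict shapes are hypothetical, not nova's actual periodic task code.)

```python
def cleanup_incomplete_migrations(migrations):
    """Return only the migrations a status-based periodic cleanup would act on."""
    return [m for m in migrations if m['status'] == 'error']


if __name__ == '__main__':
    migrations = [
        {'id': 1, 'migration_type': 'live-migration', 'status': 'failed'},
        {'id': 2, 'migration_type': 'resize', 'status': 'error'},
    ]
    # Only id=2 is picked up; switching live migration failures to 'error'
    # (the change under review) lets them be swept up the same way.
    print(cleanup_incomplete_migrations(migrations))
```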
14:51:40 <mriedem> anyway, sounds like we're meh on the state change
14:51:52 <mriedem> there were comments in the patch about not wanting to rely on the periodic task
14:52:00 <alaski> I am. seems fine to me, with the acknowledgement that this is all still hacky
14:52:09 <mriedem> i don't know what the solutions are for that,
14:52:17 <mriedem> but we don't need to get into those details in this meeting either probably
14:52:25 <PaulMurray> I think this should be a short term work around for now
14:52:29 <mriedem> i added tdurakov to the review to see if he has input
14:52:39 <mriedem> PaulMurray: i think that's reasonable
14:52:41 <PaulMurray> we should redo the migration process anyway
14:52:47 <mriedem> since there is talk of refactoring a lot of this
14:52:54 <PaulMurray> we were going that way in the friday session
14:53:05 <alaski> yeah, that's what tdurakov is working on
14:53:15 <abhishek> ok, thank you mriedem and all
14:53:17 <mriedem> abhishek: ok, so i guess for now just rebase the patch
14:53:23 <mriedem> and talk to tdurakov
14:53:36 <abhishek> I will also catch tdurakov
14:53:42 <abhishek> thank you for your time
14:53:46 <mriedem> it also wouldn't hurt to give a heads up on the ops list
14:54:04 <mriedem> ok, anyone else have anything for open discussion?
14:54:06 <mdbooth> Request to merge storage pools 'pre-patches': http://lists.openstack.org/pipermail/openstack-dev/2016-May/094059.html
14:54:24 <mdbooth> Just to highlight that, no need to discuss now
14:54:45 <mriedem> ok, i guess we need to get https://review.openstack.org/#/c/302117/ in first
14:55:00 <mriedem> mdbooth: can you check on that qcow2 vs raw in the gate question?
14:55:01 <mdbooth> These are independent of that
14:55:06 <mdbooth> mriedem: Yes, will do
14:55:08 <dansmith> mriedem: I was +2 on that minus a couple nits
14:55:15 <mriedem> mdbooth: they are linked to the bp
14:55:17 <mriedem> which is linked to the spec
14:55:18 <dansmith> mriedem: which I think have been fixed now
14:55:33 <mdbooth> mriedem: Yeah, so the patch series depends on them, but they're not really part of the patch series
14:55:42 <mdbooth> If you see what I mean
14:55:43 <dansmith> mriedem: they're bugish things
14:55:46 <mriedem> sure,
14:55:58 <mriedem> then remove the bp link in those semi unrelated cleanup patches :)
14:55:59 <dansmith> we should just merge the spec anyway
14:56:05 <mriedem> if you don't want my -2 hammer of procedure on them
14:56:13 <mriedem> or we just merge the spec yeah :)
14:56:23 * mdbooth is cool either way
14:56:32 <mriedem> #action dansmith to do that thing he does
14:56:34 <dansmith> I will look at the spec right after this and poke mriedem if it's good
14:56:39 <mriedem> cool
14:56:40 <PaulMurray> can you look at the storage pools specs too
14:56:46 <mdbooth> dansmith: I think we still had some upgrade details to thrash out.
14:56:48 <PaulMurray> (as well as mdbooth one)
14:56:52 <mriedem> danpb is specs core too you know...
14:56:59 <mriedem> and this is all libvirty things
14:57:11 <PaulMurray> will bug him too
14:57:17 <mriedem> excellent
14:57:22 <mriedem> alright, 3 minutes to spare
14:57:25 <dansmith> can we be done now?
14:57:31 <mriedem> ....
14:57:32 <mriedem> yes
14:57:35 <mriedem> #endmeeting