16:59:57 <jroll> #startmeeting ironic
16:59:58 <openstack> Meeting started Mon Feb  1 16:59:57 2016 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:02 <openstack> The meeting name has been set to 'ironic'
17:00:04 <jroll> 3 seconds early \o
17:00:06 <devananda> o/
17:00:09 <jroll> hi everyone
17:00:15 <sergek> o/
17:00:20 <rloo> o/
17:00:23 <thiagop> o/
17:00:27 <lintan_> o/
17:00:29 <jlvillal> o/
17:00:31 <stendulker> o/
17:00:45 <krtaylor> o/
17:00:50 <vdrok> o/
17:01:01 <dtantsur> o/
17:01:08 <TheJulia> o/
17:01:08 <NobodyCam> o/
17:01:15 <maurosr> \o
17:01:20 <mgould> o/
17:01:25 <jroll> #topic announcements and reminders
17:01:34 <jroll> so, before we start announcing things
17:01:47 <jroll> I've been, of my own accord, helping out a lot downstream the last few weeks
17:01:58 <jroll> and not spending enough time upstream
17:02:02 <davidlenwell> o/
17:02:04 <jroll> and that isn't fair to you all
17:02:10 <jroll> so I wanted to apologize for that
17:02:15 <rpioso> o/
17:02:20 <jroll> and thank the folks that have been driving the project forward in the meantime
17:02:21 <Nisha> o/
17:02:31 <jroll> so thank you all for that
17:02:45 <jroll> I still have some loose ends but should be contributing more from here on out
17:03:08 <jroll> and with that. announcements.
17:03:17 <jroll> our gate is down due to the devstack/keystone v3 fallout
17:03:33 <mkovacik> o/
17:03:37 <mjturek1> o/
17:03:49 <jroll> that is being reverted; we've also added v3 support to ironicclient, which is released, and waiting on a global-requirements patch. which is failing due to pypi mirrors being out of sync
17:03:52 <jlvillal> Thanks jroll. I know for a lot of us other work does get in the way of upstream.
17:04:10 <sergek> pas-ha had a patch with keystone
17:04:16 <zer0c00l> 0/
17:04:17 <rpioso> I would like to introduce myself to the group.  I work at Dell with cdearborn.  I'll be participating in the mid-cycle and attending the April Summit.  Looking forward to working with you.
17:04:18 <zer0c00l> o/
17:04:30 <jroll> hi rpioso, welcome :)
17:04:53 <jroll> any other announcements?
17:04:55 <rloo> hi rpioso and welcome.
17:05:12 <devananda> rpioso: welcome!
17:05:13 <rpioso> jroll, rloo: Thx :)
17:05:19 <jroll> also, a reminder to focus on our priorities - gate improvements, neutron integration work, manual cleaning
17:05:23 <BadCub> howdy folks
17:05:31 <jroll> I'd *love* to get manual cleaning out the door and do an ironic release this week
17:05:38 <jroll> but, the gate may prevent that, so ya know
17:05:43 <devananda> jroll: want to remind folks about the midcycle dates?
17:05:52 <jroll> yep
17:06:02 <jlvillal> Only a day left to submit a talk proposal for the summit
17:06:22 <jroll> reminder that our midcycle is happening ONLINE on february 16-18
17:06:30 <jroll> please add your topics and rsvp here: https://etherpad.openstack.org/p/ironic-mitaka-midcycle
17:06:45 <devananda> #info reminder that our midcycle is happening ONLINE on february 16-18
17:06:59 <jroll> I'm still working on the a/v situation, I expect to have a thing picked out by next week
17:07:14 <rpioso> devananda: Thx!
17:07:25 <devananda> #info reminder to focus on our priorities - gate improvements, neutron integration work, manual cleaning
17:07:40 <jroll> #chair devananda
17:07:41 <openstack> Current chairs: devananda jroll
17:07:45 <jroll> thanks for reminding me on bot commands :P
17:07:50 <devananda> np :)
17:08:21 <jroll> anything else here?
17:08:47 <jroll> #topic subteam status reports
17:08:54 <jroll> as always, these are on the whiteboard:
17:08:56 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:09:05 <cdearborn> o/
17:09:08 <jroll> I'll give folks a few minutes to review and ask questions
17:09:40 <rloo> wow, dtantsur is giving us a monthly summary for bug stats now :)
17:10:02 <zer0c00l> what needs to be done in manual cleaning?
17:10:10 <rloo> zer0c00l: review
17:10:33 <devananda> zer0c00l: reviews / landing the patches
17:10:53 <devananda> jroll: if the gate gets unbroken for long enough, I'd like to finish landing the tempest-lib migration
17:11:01 <devananda> we were so close two weeks ago, then *boom*
17:11:08 <jroll> devananda: +1
17:11:15 <zer0c00l> The reviews specified in the spec are all abandoned
17:11:24 <rloo> jroll: wrt network isolation. i guess you still want us to get the ironic parts merged asap even though the nova part is delayed til Neutron?
17:11:47 <jroll> rloo: so, it will work with nova if we can land a patch to bump the API version nova uses
17:11:51 <jroll> but the portgroups won't work
17:11:56 <jroll> which is mega :(
17:12:02 <devananda> rloo: yes. if we don't, it'll take even longer
17:12:12 <jroll> and yeah, what deva said
17:12:24 <rloo> zer0c00l: wrt manual cleaning: https://review.openstack.org/#/q/topic:bug/1526290
17:12:31 <devananda> it makes sense in nova's world for them to wait until after we do a release with the new APIs
17:12:44 <devananda> if we had been able to do that earlier this cycle, landing the changes in Nova now'ish would be fine with them
17:12:45 <rloo> devananda, jroll: got it. am on it too :)
17:12:59 <devananda> rloo: yah. thanks - your reviews on the neutron integration have been good
17:13:09 <jroll> +1, ty rloo
17:13:57 <devananda> rloo: please start +2'ing any of those patches you feel are good enough. I'll start approving everything up to the REST API changes soon
17:14:10 <rloo> devananda: will do
17:14:21 <NobodyCam> Like jroll, I hope to be wrapping up my downstream stuff and will be starting on the networking stuff
17:15:15 <NobodyCam> is this the best jumping in point: https://etherpad.openstack.org/p/ironic-neutron-mid-cycle ?
17:15:35 <jroll> NobodyCam: https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403
17:15:44 <rloo> besides reviewing, gate seems to be the biggest headache. is the tinyipa stuff high priority? (maybe it was mentioned)
17:16:08 <jroll> I'd like to get it landed soon, yes
17:16:11 <devananda> rloo: the gate has been flat-out broken by several other things, but yes
17:16:14 <NobodyCam> TY jroll :)
17:16:20 <jroll> at least so we can play with it and see how it does
17:16:21 <sambetts> whoop
17:16:23 <jroll> and make a decision from there
17:16:32 <devananda> the tinyipa stuff should help. we can't pivot to it immediately, but we should land it, and get a non-voting job going that uses it
17:16:34 <devananda> so we can collect data
17:16:41 <jroll> yep
17:16:59 <jlvillal> I thought we were actually doing pretty well for like 36 hours before the keystone v3 thing happened.
17:17:34 <rloo> jlvillal: i think things were still randomly timing out though. so lets get tinyipa in there.
17:17:54 <jlvillal> And big thanks to dtantsur for fixing the "< something" bug last week. That was a killer
17:17:56 <rloo> there should be a rule about not merging devstack changes before/on a weekend.
17:18:13 <krtaylor> ++
17:18:22 <lintan_> +1 :)
17:18:32 <devananda> jlvillal: yah, for, like, 36 hours ...
17:18:57 <zer0c00l> yes lets land tinyipa
17:18:59 <zer0c00l> :)
17:19:22 <jlvillal> I like the idea of tinyipa as I have plans of having three IPA instances for the Grenade job...
17:20:26 <rloo> krtaylor: wrt 3rd party CI. is the problem with the third party providers and/or should we give guidance wrt the pairing. (i don't actually know what the trouble is, just reading your notes in report)
17:21:15 <krtaylor> rloo, I am having trouble checking to see if everyone has registered to meet the m-2 milestone
17:21:45 <rloo> krtaylor: can you contact the folks directly, assuming we have contact info?
17:21:56 <krtaylor> I need to ping thingee and see if I can get a list of emails, I thought we had it in an etherpad somewhere, but I can't find it for the life of me
17:22:11 <sambetts> krtaylor: Isn't that what the third party wiki was for?
17:22:42 <krtaylor> we have partial info in several places for ironic
17:22:50 <cdearborn> It would be great if someone could take a look and verify that we've met the milestone or let us know if there are dangling chads that need to be addressed
17:23:21 <rloo> that thirdparty CI wiki isn't that useful at first glance for seeing which systems are related to ironic: https://wiki.openstack.org/wiki/ThirdPartySystems
17:23:25 <krtaylor> cdearborn, I did review yours and it lgtm
17:23:26 <sambetts> krtaylor: I much prefer the stackalytics driver list for working out who/what has CI etc
17:23:39 <thingee> krtaylor: hey meant to sync up with you last week, but was at an offsite. I kind of dropped the ball on getting the communication going. Can we sync up again at the ironic qa meeting
17:24:00 <cdearborn> krtaylor, thx very much - appreciate it!
17:24:07 <krtaylor> sambetts, but I'm not sure that is complete, it is just the systems that have registered with stackalytics
17:24:29 <krtaylor> thingee, no worries, absolutely, I'll ping you after
17:24:31 <sambetts> Yeah, I wish it was the standard instead of that wiki page though -> http://stackalytics.com/report/driverlog?project_id=openstack%2Fironic
17:24:45 <sambetts> definitely not complete
17:25:43 <devananda> sambetts: ++
17:25:56 <devananda> krtaylor: registering with stackalytics should be a requirement
17:26:03 <krtaylor> agreed, we can push on this in the -qa meeting
17:26:20 <krtaylor> but the infra requirement is the thirdpartysystems page
17:26:28 <devananda> hrmm
17:26:35 <devananda> perhaps the page could be organized by project?
17:26:59 <krtaylor> #link https://wiki.openstack.org/wiki/ThirdPartySystems
17:27:02 <rloo> krtaylor: the stackalytics has a 'CI' column. instead of a checkmark, could it have a link to their corresponding third-party-CI wiki?
17:27:21 <krtaylor> devananda, it would be really hard, the systems span projects
17:27:27 <devananda> krtaylor: ah, gotcha
17:28:02 <rloo> krtaylor: how about another column on thirdpartysystems that lists the projects?
17:28:09 <krtaylor> rloo, well, it kinda is hard to get folks to update that if it is not a requirement
17:28:09 <devananda> krtaylor: I see different entries by many companies, one per project
17:28:27 <jroll> can we take this into open discussion or something else?
17:28:28 <devananda> also, we're side tracking -- this is a discussion to bring up with infra
17:28:30 <rloo> krtaylor: we can make it an ironic requirement
17:28:30 <devananda> yea
17:28:49 <rloo> oops. sorry, moving on now... :)
17:28:59 <jroll> ok, moving on
17:29:03 <jroll> #topic should we support a new feature to accept header 'X-Openstack-Request-ID' as request_id?
17:29:04 <krtaylor> devananda, yes, we (powerkvm) were in stackalytics, but got removed for some reason, so I don't trust it a whole lot
17:29:10 <jroll> #link https://etherpad.openstack.org/p/Ironic-openstack-request-id
17:29:13 <jroll> lintan_: this is you
17:29:21 <lintan_> thanks jroll
17:29:40 <lintan_> what I want to say is most on the etherpad
17:29:47 <lintan_> https://etherpad.openstack.org/p/Ironic-openstack-request-id
17:30:19 <lintan_> I want to get a decision here
17:30:33 <jroll> right, so, the main question is, should we accept a request id and use it as our own
17:30:47 <jroll> one thing to note, I don't believe we log request IDs, do we?
17:31:04 <rloo> jroll: does ironic use request IDs now?
17:31:13 <devananda> rloo: no
17:31:14 <lintan_> yes, we have the request id
17:31:18 <jroll> I believe we have them in the context
17:31:21 <jroll> but do not log them
17:31:24 <mgould> jroll, AFAICT the idea is so you can track a user's request across all the services that it touches, for debugging
17:31:24 <mgould> which sounds like a Really Good Idea
17:31:25 <lintan_> using oslo context
17:31:30 <jroll> which is mildly infuriating every time I realize it
17:31:34 <jroll> mgould: I'm getting there...
17:31:34 <mgould> so +1 to accepting them and logging them
17:31:38 <devananda> hm. last time I checked, we just ignore the header
17:31:42 <jroll> so 1) we need to log them before we do anything
17:31:59 <jroll> 2) I think we *should* accept a request ID from the api client
17:32:05 <devananda> there are very specific ways to accept the header -- we DO NOT want to accept and log what ever header is passed in
17:32:09 <jroll> 3) we should make our nova driver send them
17:32:39 <devananda> jroll: we should generate our own request-id if one is not passed in and log that
17:32:49 <jroll> right.
17:32:49 <NobodyCam> I agree with all that jroll just said.
17:32:55 <jroll> we already generate one, afaik
17:33:00 <jroll> we just don't log them
17:33:05 <devananda> let me dig for a moment to find the discussion on accepting request-id's from API clients
17:33:14 <devananda> jroll: oh. that should be easy to fix then
17:33:33 <jroll> right
17:33:55 <lintan_> I don't think Ironic has to log them itself; oslo can include them in the log
17:34:11 <rloo> so if there is a request ID we use it, and if there isn't we generate one. and that is all in the X-OpenStack-Request-Id?
17:34:28 <rloo> honestly, I think we should have a RFE that describes what we want/need to do
17:34:33 <devananda> lintan_: yea, oslo knows how to log them, but we're not passing this correctly
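For context on the oslo mechanics being discussed: oslo.context generates request IDs as `req-` plus a uuid4, and oslo.log's context-aware formatter picks the ID up from the context when emitting records. A rough stdlib-only sketch of the same idea (the `RequestIdFilter` class here is a hypothetical stand-in for the oslo machinery, not actual ironic code):

```python
import logging
import uuid


def generate_request_id():
    """Generate a request ID the way oslo.context does: 'req-' + uuid4."""
    return 'req-' + str(uuid.uuid4())


class RequestIdFilter(logging.Filter):
    """Tag every log record with a request_id attribute, mimicking what
    oslo.log's context formatter does when the oslo context is passed
    through correctly."""

    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record):
        # Attach the ID so a formatter like
        # '%(request_id)s %(message)s' can include it.
        record.request_id = self.request_id
        return True
```

The point of the discussion above is that ironic already generates such an ID but never wires it into its log output.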
17:34:43 <devananda> rloo: there are cross project specs already done for this. we don't need another one ...
17:35:06 <rloo> devananda: it goes to a question i had for jroll last week. i have no idea which xproject specs ironic has decided to follow.
17:35:08 <devananda> rloo: unless you mean an RFE bug to track it , in which case I completely agree
17:35:21 <jroll> +1 for RFE to track it
17:35:24 <rloo> devananda: and even if ironic follows, yeah, would be good to know the work involved to adopt.
17:35:53 <rloo> devananda: so I don't mean to track the work, but also a description of the work that needs to be done.
17:35:59 <devananda> rloo: gotcha
17:36:02 <devananda> that's fair
17:36:07 <rloo> devananda: /track/just track/
17:36:19 <rloo> basically, what we decide now or whatever, should go in that rfe.
17:36:22 <devananda> I just do not want us re-designing it and ending up diverging unnecessarily
17:36:26 <lintan_> devananda, the question is that, according to the cross-spec, no one will pass X-OpenStack-Request-Id to other projects
17:36:30 <rloo> devananda: definitely.
17:36:54 <vdrok> http://lists.openstack.org/pipermail/openstack-dev/2016-January/085176.html
17:36:56 <devananda> lintan_: that is ... not what was discussed in the design summit on this a while back :(
17:36:59 <jroll> lintan_: is it no one *will*, or no one *must*
17:37:15 <jroll> lintan_: in other words, is it optional to pass it, or are we not allowed to pass it
17:37:16 <vdrok> that mail talks about logging 3 different request ids
17:38:02 <rloo> maybe we should wait til rocky/someone writes that spec
17:38:19 <mgould> I guess that makes it *possible* to track requests across services, but it's a lot harder than just grepping all the logs for a single string
17:38:20 <jroll> yeah, let's not go logging 3 req-ids yet
17:39:33 <lintan_> hmmm, it seems that a spec or an RFE should be done before we get an agreement
17:40:09 <jroll> I can put one up with how I see it working
17:40:14 <jroll> if nobody is opposed to that
17:40:27 <jroll> or rather, with the work I see that needs to be done
17:40:28 <rloo> jroll: as long as you do the high priority stuff first :)
17:40:47 <jroll> I'm just writing the RFE, not the code :P
17:40:56 <lintan_> OK, I will also continue on the work
17:41:04 <devananda> this looks like the best description I've seen so far: https://etherpad.openstack.org/p/request-id // http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
17:41:34 <devananda> first step is both returning this header and logging it
17:41:47 <jroll> the "return" is a python client thing
17:41:51 <mgould> https://blueprints.launchpad.net/nova/+spec/cross-service-request-id seems pretty clear that the request-ID changes at service boundaries :-(
17:41:51 <devananda> yep
17:41:55 <devananda> jroll: well, also API response
17:41:56 <jroll> I believe our api responses return it
17:41:58 <devananda> nope
17:42:00 <jroll> fun
17:42:17 <devananda> mgould: that is unfortunate
17:42:21 <jroll> mgould: that's so old, this work is to try to help unwind it
17:42:25 <devananda> omg that is old
17:42:26 <jroll> ignore that BP
17:42:27 <devananda> yea
17:42:36 <mgould> jroll, ah, great
17:42:40 <devananda> closed 2 years ago
17:42:42 <mgould> whew :-)
17:42:58 <lintan_> I'll try to add the header to our api's response
17:43:17 <devananda> mgould: see https://etherpad.openstack.org/p/icehouse-summit-nova-cross-project-request-ids which is ALSO two years old
17:43:24 <NobodyCam> lintan_: awesome
17:43:26 <jroll> #agreed jroll to write an RFE with a list of work to do
17:43:28 <devananda> but has a better description of the problem folks want to solve
17:43:50 <lintan_> but I'm confused about whether we should accept an external request-id
17:44:19 <mgould> right
17:44:29 <devananda> lintan_: right. let's not accept external request id yet
17:44:33 <jroll> why not.
17:44:36 <lintan_> this is something not expected from that cross-spec or in other projects like neutron/cinder
17:44:39 <devananda> jroll: see that etherpad
17:44:45 <devananda> it is not trivial
17:44:45 <mgould> so we have to accept an external ID, generate our own, log both, then tag all our logs with *our* ID
17:44:47 <jroll> that would be a HUGE improvement if we passed them between nova and ironic
17:44:58 <TheJulia> jroll: rfe in relation to processing and logging the id, as in completely unrelated to https://bugs.launchpad.net/ironic/+bug/1505119 ?
17:45:00 <openstack> Launchpad bug 1505119 in Ironic "[RFE] Ironic is missing a header X-Openstack-Request-Id in API response" [Wishlist,In progress] - Assigned to Tan Lin (tan-lin-good)
17:45:01 <devananda> jroll: I totally agree. but it's also a HUGE problem to accept unsigned headers
17:45:06 <jroll> I mean
17:45:07 <mgould> and debuggers have to recursively follow the chain of ID changes
17:45:21 <jroll> if an admin wants to pass a request ID
17:45:29 <jroll> that is "wrong" for whatever reason
17:45:32 <jroll> do we care?
17:45:32 <devananda> jroll: nope. bad idea.
17:45:35 <devananda> yes we care
17:45:39 <jroll> why
17:45:41 <devananda> what if I send a 4k header
17:45:48 <jroll> you can do that anyway
17:45:52 <mgould> n00b question: do we currently sign headers?
17:45:56 <TheJulia> 4k header with an exploit
17:45:59 <devananda> mgould: nope
17:46:01 <devananda> TheJulia: exactly
17:46:03 <jroll> validate it looks like "req-$uuid" and move on
17:46:17 <devananda> nope
17:46:19 <devananda> jroll: see https://etherpad.openstack.org/p/icehouse-summit-nova-cross-project-request-ids
17:46:30 <devananda> we discussed this at length at a cross project summit a few times
17:46:34 <jroll> so, how can we exploit a system by reading a string?
17:46:40 <jroll> that's what confuses me
17:46:43 <devananda> let's not rehash that right now ...
17:46:44 <mgould> jroll: /req-[0-9a-f]{48}/ or something?
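A strict check along the lines mgould sketches — accepting only the canonical `req-<uuid>` form and rejecting anything else, including oversized header values — could look like the following; the function name is hypothetical and this is only an illustration of the validation idea, not a decided approach:

```python
import re

# Canonical form oslo.context emits: 'req-' followed by a lowercase
# hyphenated uuid. Anything else (oversized strings, stray characters,
# non-strings) is rejected outright.
_REQUEST_ID_RE = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')


def is_valid_request_id(header_value):
    """Return True only for a well-formed 'req-<uuid>' header value."""
    return (isinstance(header_value, str)
            and _REQUEST_ID_RE.match(header_value) is not None)
```

This bounds both the length and the alphabet of what gets logged, which addresses the 4k-header concern raised just above.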
17:47:00 <devananda> we should be generating, logging, and returning the header now
17:47:07 <devananda> and sort out the cross project bits after that
17:47:10 <TheJulia> ++
17:47:24 <devananda> because one step at a time ....
17:47:35 <mgould> ++
17:47:39 <jroll> sure
17:47:50 <jroll> TheJulia: you're right, the rfe exists, I may add to that
17:48:01 <lintan_> OK, generating, logging and returning :)
17:48:11 <devananda> lintan_: thanks
17:48:35 <TheJulia> jroll: just wanted to make sure the agreed note was not specific and that rfe already existed :)
17:48:35 <lintan_> :) my pleasure
17:48:35 <jroll> thanks lintan_
17:48:50 <jroll> TheJulia: yeah, I forgot about it, ty
17:48:53 <jroll> moving on then
17:48:55 <TheJulia> np
17:48:57 <jroll> #topic open discussion
17:49:01 <jroll> anyone have a thing?
17:49:03 <jroll> 11 minutes
17:49:31 <jlvillal> Any Ironic related summit talks I should be prepared to vote for?  :)
17:49:48 <devananda> jlvillal: submission deadline is today
17:49:50 <sambetts> jroll: the tinyipa project config patch got a +2 from Andreas earlier today
17:49:55 <jlvillal> devananda: Actually tomorrow
17:49:56 <devananda> so the list isn't up yet
17:50:00 <devananda> jlvillal: ah, right
17:50:01 <jlvillal> It was extended.
17:50:13 <jroll> sambetts: cool
17:50:20 <zer0c00l> There was a discussion on adding 'tar' format to glance
17:50:30 <zer0c00l> i want to bring it to everyone's attention
17:50:54 <cdearborn> unfortunately won't be able to attend this mid-cycle, but rpioso will be there
17:50:55 <NobodyCam> zer0c00l: as in tarball deployments
17:51:02 <zer0c00l> Basically glance suggested that we use 'os_tarball' instead of 'tar'  to avoid confusion
17:51:05 <zer0c00l> NobodyCam: yes
17:51:08 <jlvillal> cdearborn: It is 'virtual' as a note
17:51:27 <jroll> zer0c00l: I've been meaning to reply to that thread
17:51:30 <zer0c00l> And they would approve the glance spec to add 'tar' after ironic approves the tar-payload spec
17:51:39 <zer0c00l> jroll: sure, please do.
17:51:47 <jroll> zer0c00l: tl;dr, I don't see the use case? why haven't people asked for this feature in virt?
17:51:48 <cdearborn> jlvillal, yup - have partner meetings the entire time
17:51:50 <devananda> zer0c00l: neat
17:52:08 <jroll> zer0c00l: I would think if tarballs were super useful like this, people would have wanted them in the past
17:52:14 <devananda> jroll: clone-a-server ?
17:52:25 * devananda is guessing
17:52:34 <jroll> devananda: the spec says "they're easier to build"; dunno if I buy that
17:52:44 <devananda> jroll: hm. yea, I don't buy that either
17:52:51 <zer0c00l> jroll: it is. You install all the packages in a chroot
17:52:54 <zer0c00l> and compress them
17:53:07 <NobodyCam> that's all I've really heard too... "they're easier"
17:53:16 <devananda> zer0c00l: that's what DIB does ... except it outputs a qcow, not a tgz
17:53:20 <jroll> http://libguestfs.org/virt-make-fs.1.html
17:53:25 <jlvillal> Are they faster too?
17:53:40 <jroll> I don't see the point in building an entire feature to solve what virt-make-fs already solves
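As a hedged illustration of the virt-make-fs route jroll links: a small helper that builds the conversion command line for turning a root-filesystem tarball into a qcow2 image. The helper name is hypothetical; actually running the command requires libguestfs to be installed, and the flags used are the ones documented on the virt-make-fs man page.

```python
def tarball_to_qcow2_cmd(tarball, image, fs_type='ext4', extra_space='+200M'):
    """Build the virt-make-fs argv that converts a root-filesystem
    tarball into a qcow2 image, per the virt-make-fs man page.

    Only constructs the command; pass the result to subprocess.run()
    on a host with libguestfs installed to actually convert.
    """
    return ['virt-make-fs',
            '--format=qcow2',          # output disk image format
            '--type=' + fs_type,       # filesystem to create inside
            '--size=' + extra_space,   # leave some free space beyond the tar contents
            tarball, image]
```

Something along these lines is why the existing tooling was seen as already covering the "tarballs are easier to build" use case.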
17:54:05 <devananda> glance has discussed a few times creating an image-format-conversion service
17:54:11 <devananda> seems like a reasonable addon to me, not a core feature
17:54:55 <devananda> zer0c00l: is there a compelling reason why tarballs can't be converted to .img / .qcow ?
17:55:11 <devananda> prior to uploading, I mean
17:55:24 <zer0c00l> devananda: just curious can we use .qcow2 as a ironic image format?
17:55:29 <jroll> yes
17:55:31 <devananda> zer0c00l: yah
17:55:58 <zer0c00l> devananda: i haven't tried converting tar format. At Yahoo we do OS releases as tarballs
17:56:14 <zer0c00l> we have these tarballs from back in 2008
17:56:27 <devananda> zer0c00l: I hope you're patching the kernels in there .....
17:56:34 <zer0c00l> i have to check and see if those tarballs can be converted to qcow2 and their implications
17:56:44 <zer0c00l> sure we do
17:56:47 <devananda> :)
17:57:26 <zer0c00l> it's just easier to add this feature and get it working than to convert 20+ images to qcow2
17:57:34 <devananda> zer0c00l: it is not easier
17:57:46 <jroll> yeah, definitely disagree
17:57:51 <zer0c00l> okay
17:57:54 <rloo> zer0c00l: ++ not easier
17:57:54 <devananda> zer0c00l: converting 20 images is MUCH better than causing two projects to adopt and carry support for a new image format
17:57:54 <TheJulia> totally disagree
17:58:06 <zer0c00l> :)
17:58:07 <zer0c00l> okay
17:58:14 <jroll> does anyone oppose abandoning this spec, then?
17:58:22 <devananda> jroll: nope
17:58:42 * jroll will do it shortly if he doesn't hear otherwise
17:58:49 * mgould would still like to understand why it's wanted
17:58:54 <rloo> if that is the only reason, then yeah, i don't think we need that spec.
17:58:55 <zer0c00l> i just wish this happened earlier
17:59:00 <zer0c00l> this discussion
17:59:00 <mgould> are they smaller? fs-agnostic? anything else?
17:59:05 <NobodyCam> *one* minute
17:59:09 <zer0c00l> they are fs agnostic
17:59:10 <zer0c00l> yes
17:59:15 <mkovacik> I'd like to get your opinions on https://bugs.launchpad.net/ironic/+bug/1538653 ; would like to get some precedence decision on whether 202+Location header endpoints for async requests is OK/preferred
17:59:15 <openstack> Launchpad bug 1538653 in Ironic "fix redirection of async endpoints response codes from "202 - Accepted" to "303 - See other" " [Wishlist,Opinion]
17:59:18 <zer0c00l> you can create any fs you want to
17:59:26 <jroll> zer0c00l: we've been paying attention to the priority work, sorry :(
17:59:27 <zer0c00l> that is one point i would like to make
17:59:36 <jroll> so we're out of time
17:59:40 <zer0c00l> jroll: we need fs agnostic os images too
17:59:41 <jroll> let's continue on the spec
17:59:46 <zer0c00l> jroll: sure
17:59:53 <jroll> thanks all, good meeting
17:59:57 <TheJulia> Thank you
17:59:59 <jroll> #endmeeting