14:00:50 <nikhil_k1away> #startmeeting Glance
14:00:51 <openstack> Meeting started Thu Jul  2 14:00:50 2015 UTC and is due to finish in 60 minutes.  The chair is nikhil_k1away. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:52 <kragniz> hihi
14:00:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:55 <openstack> The meeting name has been set to 'glance'
14:00:57 <llu-laptop> o/
14:01:00 <TravT> o/
14:01:01 <jokke_> o/
14:01:02 <mfedosin> o/
14:01:03 <dshakhray> o/
14:01:05 <bpoulos> o/
14:01:07 <flaper87> o/
14:01:14 <lakshmiS_> o/
14:01:15 <kragniz> o/
14:01:17 <flaper87> nikhil_k1away IS BACK!
14:01:22 <tsufiev> hi
14:01:31 <kragniz> nikhil_k1notaway
14:02:08 <hemanthm> o/
14:02:14 <nikhil_k> Hi all
14:02:31 <nikhil_k> I was away for the past few days so I don't have many updates
14:02:39 <nikhil_k> except for one thing
14:02:41 <nikhil_k> #topic
14:02:45 <nikhil_k> #topic Updates
14:03:11 <nikhil_k> #info added entry for Glance mid-cycle to #link https://wiki.openstack.org/wiki/Sprints
14:03:31 <nikhil_k> #action all: please visit https://etherpad.openstack.org/p/liberty-glance-mid-cycle-meetup and fill out the necessary surveys
14:04:34 <nikhil_k> Thanks for bringing that up on the ML, flaper87. I will respond for everyone's awareness; however, updates and meeting minutes on the event would be my preference
14:05:10 <flaper87> nikhil_k: awesome, thanks
14:05:13 <nikhil_k> I am arranging for Vidyo capability in the conf room, which very likely enables remote participation
14:05:53 <nikhil_k> Also, I will try to investigate the tool mentioned by Jeremy on the ML if that seems more useful. We had little luck with mumble last time :/
14:06:17 <nikhil_k> moving on..
14:06:21 <nikhil_k> #topic Glance images streaming from Horizon
14:06:49 <nikhil_k> tsufiev: please go ahead
14:06:57 <nikhil_k> #link https://review.openstack.org/#/c/166969/
14:07:19 <tsufiev> so, the problem is with support for file-like objects during image upload via the Glance v2 API
14:07:44 <tsufiev> AFAIK (or more precisely, I was told), there isn't such a capability in the v2 API
14:07:47 <tsufiev> is that true?
14:08:41 <jokke_> tsufiev: could you please be a bit more precise?
14:09:39 <flaper87> I believe tsufiev's referring to streaming whatever is being uploaded to horizon directly to glance
14:09:45 <flaper87> without re-uploading it to glance
14:10:00 <tsufiev> jokke_, sure. So, when I upload an image file into Glance, I can pass some object as the 'data' key value; the most important requirement is that it has a .read() method implemented - that works for me in the v1 API
14:10:22 <tsufiev> I'm speaking about using glanceclient from Horizon api wrappers
14:10:25 <flaper87> I guess I got it wrong...
14:10:35 <jokke_> tsufiev: as far as I know the only problem we have (and it is not solvable) is that the client cannot show a progress bar for stuff like FIFOs etc., where it cannot check the size
14:11:03 <flaper87> in which case we can just disable the progress bar
14:11:12 <flaper87> but since this is horizon, I doubt they are using the CLI at all
14:11:12 <jokke_> other than that I see no problem passing the data through horizon (obviously you need to implement the bits to make that happen on the horizon end)
14:11:16 <tsufiev> flaper87, nope, we're speaking about the same thing, just at different levels of detail
14:11:17 <flaper87> but rather the library itself
14:11:25 <flaper87> tsufiev: a-ha
14:11:35 <flaper87> I thought you could pass f-like objs to glanceclient
14:11:52 <flaper87> or at least it was possible
14:11:53 <flaper87> mmhh
14:12:04 <tsufiev> flaper87, well, our glance experts said I couldn't do it for API v2
14:12:05 <jokke_> tsufiev: you can push the image data through fifo or just stdin on v2 as well no problem
14:12:14 <tsufiev> mfedosin, ^^^
14:12:25 <nikhil_k> tsufiev: The file-like obj should work with the glance v2 API. If it doesn't work in glanceclient then we have a bug. But the functionality should exist
14:13:02 <flaper87> tsufiev: that's where the difference between your comment and mine is. If you're asking whether glanceclient supports file-like objects, then the answer is (probably) yes
14:13:28 <jokke_> it's there and we fixed the behavior around early Juno ... the defaults in glanceclient did not work great back then, but it has been working perfectly fine for about a year now
14:13:32 <flaper87> if the question is whether glanceclient honors the file-like object the right way, then I guess it doesn't
14:13:45 <tsufiev> flaper87, okay, but is the .read() method on that object supported in the v2 API?
14:14:14 <tsufiev> hm... perhaps I was misinformed...
14:14:31 <nikhil_k> tsufiev: I see, it's just a data stream
14:14:45 <flaper87> tsufiev: https://github.com/openstack/python-glanceclient/blob/master/glanceclient/common/http.py#L58-L64
14:15:05 <nikhil_k> you may be referring to copy-from in v1; that's more along the lines of the .read() you are asking for
14:16:27 <tsufiev> flaper87, nikhil_k: the above line seems like the thing that should work for me
14:16:27 <nikhil_k> and that's how I could make more sense of your comment on the review: "Since image_update is run in a separate thread and the browser itself sends the large file via XHR, it is imperative that the user doesn't close the Horizon page/tab before the image transfer is complete - otherwise it will fail."
14:16:35 <sigmavirus24> tsufiev: can you point to the code you're using for v1 right now?
14:16:51 <sigmavirus24> maybe that will help clear up the confusion
14:17:37 <mfedosin> https://github.com/openstack/glance/blob/master/glance/common/utils.py#L156
14:17:53 <mfedosin> I think CooperativeReader is used in both APIs
14:18:16 <flaper87> wait, I'm confused now. Are we talking about glanceclient or glance
14:18:46 <tsufiev> as a Horizon developer I'm only able to use glanceclient directly :)
14:18:53 <flaper87> tsufiev: exactly
14:19:01 <jokke_> that's what I thought as well
14:19:28 <jokke_> and from the API perspective neither of the APIs gets a file ... they get a datastream
14:20:17 <tsufiev> jokke_, so there are no serious differences in v1 vs v2 datastream processing?
14:21:07 <flaper87> not that I can think of
14:21:15 <flaper87> tsufiev: is there an issue you're facing?
14:21:25 <flaper87> Error? Failure? different behaviour ?
14:21:35 <flaper87> it may be related to the API but not to the datastream
14:21:36 <jokke_> tsufiev: I didn't say that (meaning I'm not sure), but from the API perspective there is no file for the REST API ... it gets a datastream from the client.
14:22:13 <jokke_> and the differences on the python-glanceclient side were unified about a year ago
14:22:28 <tsufiev> flaper87, I shall try it myself with v2 - will ping you back (file a bug) if something goes wrong
14:22:41 <flaper87> tsufiev: roger, that'd be very useful
14:22:41 <tsufiev> sorry for possibly false alarm
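
A minimal sketch of the upload path under discussion, assuming the python-glanceclient v2 API; the endpoint and token below are placeholders, and any object exposing .read() can serve as the data source:

    # Hypothetical endpoint/credentials; glanceclient streams from any
    # file-like object rather than buffering the whole file in memory.
    import glanceclient

    glance = glanceclient.Client('2', endpoint='http://glance.example.com:9292',
                                 token='<auth-token>')
    image = glance.images.create(name='demo', disk_format='qcow2',
                                 container_format='bare')
    with open('demo.qcow2', 'rb') as f:  # any object with .read() works here
        glance.images.upload(image['id'], f)
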
14:22:46 <sigmavirus24> Shouldn't this have been brought up before the meeting?
14:22:48 <jokke_> but as long as we're talking about doing something around the upload process, I'm assuming we're talking about the client
14:22:57 <sigmavirus24> In #openstack-glance for example?
14:23:29 <flaper87> sigmavirus24: yeah
14:23:37 <sigmavirus24> Can we move on to the next topic?
14:23:41 <tsufiev> sure
14:23:45 <jokke_> sigmavirus24: I see no problem having discussions like this in the meeting or on the mailing list ... that's why we have these
14:24:06 <flaper87> jokke_: erm, I disagree
14:24:14 <flaper87> support -> openstack-glance
14:24:16 <nikhil_k> Ok, folks. Let's clear the confusion offline.
14:24:19 <flaper87> and you get it 24h/7d
14:24:41 <nikhil_k> #topic Reviews/Bugs/Releases
14:24:45 <sigmavirus24> == flaper87
14:24:48 <flaper87> anyway, we're halfway through our meeting, let's move on
14:24:59 <nikhil_k> #info change default notification exchange name
14:25:08 <nikhil_k> #link https://review.openstack.org/#/c/194548/
14:25:26 <llu-laptop> so, this patch is meant to change the default exchange name for glance notifications
14:25:40 <llu-laptop> just like nova/cinder/etc
14:25:55 <llu-laptop> from original 'openstack' to 'glance'
14:26:06 <sigmavirus24> s/original/default/
14:26:27 <llu-laptop> sigmavirus24: thanks for the clarification
14:26:32 <flaper87> uuu, I just realized llu-laptop replied, I'm sorry about that
14:26:59 <flaper87> so, ceilo may listen by default on glance and that's fine
14:27:04 <llu-laptop> yeah, need your reviews :)
14:27:13 <nikhil_k> What's the motivation for changing defaults? (besides others doing it)
14:27:14 <sigmavirus24> Yeah I think I agree with llu-laptop
14:27:29 <sigmavirus24> nikhil_k: I think we were supposed to have changed them when we adopted the notifications work
14:27:42 <sigmavirus24> It's not that nova et al. are just doing this now
14:27:50 <sigmavirus24> It's that we never did it (if I understand correctly)
14:27:54 <flaper87> I understand it doesn't prohibit admins from setting control_exchange
14:27:55 <jokke_> llu-laptop: sorry, can't check the review right now, but what's the reasoning behind this? I'm really really reluctant to change our defaults lightly
14:28:22 <flaper87> my concern is that we're not warning them that the default has changed
14:28:25 <flaper87> :)
14:28:35 <nikhil_k> sigmavirus24: gotcha, so an antique bug :)
14:28:48 <flaper87> which means, if there are other downstream apps listening on the 'openstack' exchange, then we'll break them
14:28:57 <sigmavirus24> nikhil_k: it's an antique free-range artisanal bug
14:28:57 <jokke_> flaper87: ++
14:29:08 <nikhil_k> heh
14:29:14 <flaper87> aaaaaaaand, this is part of the public API
14:29:15 <llu-laptop> jokke_: following the openstack norm is my first motivation
14:29:21 <flaper87> well, not really public
14:29:24 <flaper87> but under-cloud public
14:29:30 <sigmavirus24> flaper87: I think the real question is "Does anyone really listen to notifications?" =P
14:29:31 <flaper87> llu-laptop: I think your change is fine
14:29:41 <flaper87> sigmavirus24: we'll never know until we break them
14:29:41 <nikhil_k> flaper87: :) notifications public?
14:29:45 <flaper87> but I'd assume yes
14:29:47 <sigmavirus24> flaper87: my point exactly
14:29:53 <sigmavirus24> so let's change them and find out what happens =P
14:29:59 <flaper87> sigmavirus24: I'll give them your home address
14:30:04 <flaper87> nikhil_k: they are in the under-cloud
14:30:15 <sigmavirus24> flaper87: yeah that's fine
14:30:18 <flaper87> say you have your own billing service, you may want to listen there
14:30:22 <sigmavirus24> I have an extra bed or two I could loan them
14:30:27 <flaper87> sigmavirus24: can you give me your home address ?
14:30:27 <nikhil_k> second level, right. After some processing. so...
14:30:29 <flaper87> :D
14:30:35 <nikhil_k> llu-laptop: I don't mind changing it given it makes things more consistent. Just want to ensure that we have it done right.
14:30:37 <sigmavirus24> flaper87: so the thing is that operators may already be changing this to do billing right
14:30:50 <sigmavirus24> Can we send a message to openstack-operators asking for feedback
14:30:59 <sigmavirus24> Like we can sit here and guess at the impact, or we can ask the people who know
14:31:02 <jokke_> Seriously guys, you're ok with breaking our next upgrade just for cosmetics?!?!?
14:31:12 <sigmavirus24> jokke_: I'm trolling
14:31:24 <flaper87> sigmavirus24: I'm all for changing it but I'd like to see if there's a way to avoid the breakage OR at least have a warning
14:31:26 <nikhil_k> llu-laptop: these flags seem important to get people's attn: DocImpact, UpgradeImpact and the email shoutout like sigmavirus24 mentioned
14:31:29 <flaper87> and deprecate it
14:31:40 <flaper87> oh and yeah, what nikhil_k just said
14:31:50 <llu-laptop> nikhil_k: got that, will put those into commit message
14:31:56 <flaper87> My point is that assumption is the mother of all fuck ups
14:31:58 <flaper87> or most of them
14:32:02 <jokke_> ++
14:32:15 <flaper87> and I don't feel comfortable assuming people have control_exchange=glance already
14:32:31 <sigmavirus24> I said "may" not "probably"
14:32:31 <sigmavirus24> =P
14:32:32 <flaper87> so, my two options are: 1) We change it in a way we don't break it
14:32:42 <flaper87> 2) we deprecate it the right way
14:32:44 <sigmavirus24> I'm fine with saying that we're changing this in M
14:32:50 <flaper87> 2 means, we change it in M
14:32:50 <sigmavirus24> You have L to set it correctly
14:32:56 <flaper87> right
14:33:09 <nikhil_k> +1
14:33:18 <jokke_> so L having deprecation warning of the old option and M changing the default
14:33:27 <jokke_> I'd be fine with that
14:34:04 <nikhil_k> #action llu-laptop : ensure https://review.openstack.org/#/c/194548 has mention and work done for -- L having deprecation warning of the old option and M changing the default
14:34:08 <llu-laptop> jokke_: it's not deprecating old options, but changing the default value
14:34:17 <nikhil_k> oh right
14:34:30 <nikhil_k> #action llu-laptop : ensure https://review.openstack.org/#/c/194548 has mention and work done for -- L having deprecation warning of the change in default and M changing the default
14:34:39 <flaper87> llu-laptop: sorry for giving you trouble over such a small change but breaking backwards compatibility is something that we try really freaking hard not to do: https://www.youtube.com/watch?v=EYKdmo9Rg2s
14:34:50 <jokke_> llu-laptop: yes old default value rather than option
14:35:01 <llu-laptop> flaper87: understandable
14:35:02 <jokke_> llu-laptop: what I thought wasn't what I typed ;)
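
For context, a minimal sketch of the knob being discussed: the exchange name comes from oslo.messaging's control_exchange option, so the patch boils down to a call like the one below (illustrative, not the actual change); operators can keep the old behaviour by pinning control_exchange = openstack in glance-api.conf.

    # Sets the service-level default for control_exchange; an explicit
    # [DEFAULT]/control_exchange in the config file still overrides it.
    import oslo_messaging as messaging

    messaging.set_transport_defaults(control_exchange='glance')
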
14:36:24 <nikhil_k> Let's move on
14:36:29 <nikhil_k> #info  scrub images in parallel
14:36:42 <nikhil_k> #link https://review.openstack.org/#/c/196240/
14:37:10 <nikhil_k> guess hemanthm is away
14:37:15 <sigmavirus24> Why are we reducing that from 1000 to 100?
14:37:16 <nikhil_k> I had a chat with him
14:37:22 <jcook> +1 to scrub images in parallel
14:37:27 <sigmavirus24> "Oh to not overwhelm the swift cluster"
14:37:32 <sigmavirus24> ("it" as in the commit message)
14:38:26 <jcook> at Rackspace production scale it can't scrub all the images in serial
14:38:34 <sigmavirus24> jcook: right
14:38:37 <jokke_> I think last time we discussed this it was discarded because of the extra load the parallel scrubbing would cause, so rather do it slower while ensuring that the services are not overloaded
14:38:59 <flaper87> I haven't reviewed the patch but the idea sounds nice
14:39:38 <jcook> I saw on the order of tens of thousands in the backlog that, in parallel, cleared up rather quickly with no noticeable prod issues
14:39:45 <nikhil_k> Yeah, the alternative is to refactor the scrubber to enable it to do the deletes over the day/week etc
14:39:52 <jcook> Swift is designed for parallel scaling for speed of downloads and uploads
14:40:00 <jcook> yeah I have a spec for that
14:40:33 <jcook> I think hemanthm is working with an intern on task scrubbing and maybe image scrubbing updates
14:40:36 <jokke_> sorry, not able to check the review atm, but can we make that configurable?
14:40:43 <nikhil_k> I agree that production deployments of swift should be able to handle this
14:40:48 <jcook> in the short term though, the scrubber does not scale
14:41:01 <jcook> and swift (in my experience) handles it fine
14:41:20 <nikhil_k> not necessarily devstack
14:41:24 * jokke_ would like to remind that we have other backends than just swift
14:41:35 <nikhil_k> yeah
14:41:41 <jcook> that is true, make it configurable?
14:42:16 <jcook> parallelizing deletes that is
14:42:20 <nikhil_k> I doubt if 100 threads would be too much for other backends
14:42:40 <jokke_> I would prefer that ... I know it's again one more thing to take into account when deploying, but it would be a way safer bet than again assuming, based on experience with swift, that the others wouldn't be affected
14:43:11 <sigmavirus24> nikhil_k: shouldn't be a problem for the filesystem backend, http store doesn't allow deletes, cinder is broken miserably
14:43:13 <sigmavirus24> rbd?
14:43:27 <jcook> either that or just make the thread pool configurable and default to 1
14:43:32 <sigmavirus24> flaper87: would that maybe be a problem for rbd (same question to sabari for vmware)
14:44:01 <nikhil_k> I doubt vmware is using the scrubber
14:44:32 <flaper87> I don't think it would be a problem for RBD itself. What would worry me is:
14:44:50 <flaper87> if you're using  rbd for nova, glance and cinder, you may be stressing it more than you want
14:45:05 <flaper87> and vms/devices will be affected
14:45:23 <kragniz> cinder should be less broken if this lands: https://review.openstack.org/#/c/166414/
14:45:25 <flaper87> so, having it configurable is definitely the right thing to do in this regard
14:45:26 <jcook> #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:45:26 <openstack> Only the meeting chair may start a vote.
14:45:34 <jcook> boo
14:45:44 <nikhil_k> #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:45:44 <openstack> Only the meeting chair may start a vote.
14:45:53 <jokke_> haha! :P
14:45:56 <jcook> yay!
14:46:01 <nikhil_k1away> #startvote What to do with scrubber? approve-it, config-for-parallel, config-for-thread-pool, config-both, other
14:46:01 <openstack> Begin voting on: What to do with scrubber? Valid vote options are approve-it, config-for-parallel, config-for-thread-pool, config-both, other.
14:46:03 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
14:46:10 <jcook> #vote approve-it
14:46:17 <jcook> but I'm ok with config too
14:46:19 <nikhil_k1away> #chair nikhil_k
14:46:20 <openstack> Warning: Nick not in channel: nikhil_k
14:46:22 <openstack> Current chairs: nikhil_k nikhil_k1away
14:46:22 <tsekiyam> kragniz: yeah, I'd like to make progress on it. Any reviews are appreciated
14:46:39 <sigmavirus24> #vote config-both
14:46:46 <flaper87> #vote config-both
14:46:50 <sigmavirus24> Why not increase testing complexity? (He says semi-seriously)
14:46:59 <jokke_> #vote config-both
14:47:01 <kragniz> #vote config-both
14:47:23 <nikhil_k> #vote config-for-thread-pool
14:47:30 <nikhil_k> config-for-thread-pool solves both problems
14:47:34 <mfedosin> #vote config-both
14:48:11 <jcook> yeah config for pool does solve both
14:48:37 <jcook> though having an option to not use threads is fine
14:48:43 <jcook> erm
14:48:46 <flaper87> 10 mins left
14:48:50 <flaper87> 12 to be precise
14:48:56 <jcook> I guess we are using them either way
14:49:01 <nikhil_k> yep
14:49:14 <jcook> #vote config-for-thread-pool
14:49:19 <nikhil_k> moving away from eventlet would be our goal when the entire ecosystem does so
14:49:34 <flaper87> if the entire ecosystem does so ;)
14:49:40 * jokke_ is really looking for that day
14:49:42 <nikhil_k> heh
14:49:51 <jokke_> even more than deprecating v1 api
14:50:00 <sigmavirus24> #vote config-for-thread-pool
14:50:02 <sigmavirus24> I hate eventlet
14:50:11 <sigmavirus24> jokke_: are we ever going to port glance to twisted?
14:50:24 <flaper87> is the vote still open?
14:50:25 <jokke_> sigmavirus24: I seriously would love that option
14:50:29 <flaper87> longest voting ever :P
14:50:30 <nikhil_k> yes
14:50:37 <jcook> >_>;
14:50:39 <sigmavirus24> #endflaper87vote
14:50:44 <flaper87> hahahaha
14:50:45 <jcook> lol
14:50:54 <nikhil_k> people still voting, what can I do.. :P
14:50:56 <nikhil_k> #endvote
14:50:56 <openstack> Voted on "What to do with scrubber?" Results are
14:50:57 <openstack> config-for-thread-pool (3): sigmavirus24, nikhil_k, jcook
14:50:58 <openstack> config-both (4): kragniz, mfedosin, jokke_, flaper87
14:51:11 <jcook> both it is
14:51:30 <nikhil_k> people are showing bitterness about more options in openstack
14:51:37 <jcook> hemanthm: ^
14:51:38 <nikhil_k> configurations to be precise
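
A minimal sketch of the configurable scrub pool under discussion, using eventlet's GreenPool (which Glance already runs on); the option and helper names are hypothetical, and a default of 1 keeps today's serial behaviour, which is one way a single pool-size option can cover both voted-for knobs:

    # Hypothetical option/helper names; pool size 1 == serial scrubbing.
    import eventlet
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('scrub_pool_size', default=1,
                                   help='Images scrubbed in parallel.')])

    def scrub_all(image_ids, delete_one):
        pool = eventlet.GreenPool(CONF.scrub_pool_size)
        for image_id in image_ids:
            pool.spawn_n(delete_one, image_id)  # queue one delete per image
        pool.waitall()                          # block until the pool drains
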
14:51:50 <nikhil_k> #topic Open Discussion
14:51:57 <jokke_> nikhil_k: certain devs are ;)
14:52:06 <tsekiyam> I'd like to make progress on my proposal to implement upload/download for the cinder glance_store.
14:52:11 <tsekiyam> https://review.openstack.org/#/c/166414/
14:52:15 <mfedosin> afair we wanted to have 5-min artifacts discussions
14:52:16 <tsekiyam> This fixes up the broken cinder backend.
14:52:28 <nikhil_k> flaper87: what's up? You seemed antsy to move on?
14:52:30 <tsekiyam> Any advice on moving it forward? Does it help if I ask cinder experts to review it?
14:52:32 <sigmavirus24> mfedosin: we have 9 minutes to have a 5 min discussion
14:52:33 <flaper87> what cinder backend? #joke
14:52:38 <tsekiyam> lol
14:52:44 <sigmavirus24> tsekiyam: yeah having cinder SMEs look at it would be helpful
14:52:56 <nikhil_k> mfedosin: yep, let's do that if you have time
14:53:05 <flaper87> nikhil_k: haha, all good. I was just doing some sigmavirus24trolling
14:53:06 <mfedosin> so, just for everyone to be aware of it...
14:53:14 <nikhil_k> I forgot to update the agenda and people had added items for today
14:53:21 <nikhil_k> :)
14:53:25 <flaper87> tsekiyam: I'll review your patch and get back to you
14:53:43 <mfedosin> Alex Tivelkov status: he is fighting with oslo.versionedobjects. He made several commits there, but they were reverted because they broke nova and cinder. Now he's working on fixing that.
14:54:11 <tsekiyam> okay, I'll ask someone from the cinder devs to review it.
14:54:30 <flaper87> mfedosin: :(
14:54:37 <flaper87> mfedosin: but I'm happy he's working on that
14:54:44 <flaper87> that's a great cross-project effort
14:54:45 <mfedosin> Status from Darja and me: we are working on the glance v3 client
14:54:57 <mfedosin> Currently we have 3 commits there:
14:54:59 <nikhil_k> sigmavirus24: flaper87 do you guys have anything from the drivers' meeting that happened this week? #link http://eavesdrop.openstack.org/meetings/glance_drivers/2015/glance_drivers.2015-06-30-14.02.html
14:55:03 <mfedosin> initial one (https://review.openstack.org/#/c/189860/)
14:55:11 <mfedosin> CRUD (https://review.openstack.org/#/c/192261/)
14:55:30 <mfedosin> blob operations (https://review.openstack.org/#/c/197970/)
14:55:45 <mfedosin> Also we have a commit in the domain model https://review.openstack.org/#/c/197053/ and it wants an approval :)
14:55:51 <jokke_> nikhil_k: I think it's worth mentioning that I managed to totally troll flaper87 at the drivers meeting :P
14:55:51 <mfedosin> Future plans are to start working on API stabilization with the api-wg and move away from the domain model to something plainer.
14:56:12 <nikhil_k> jokke_: :) my guess is glance_store?
14:56:17 <jokke_> nikhil_k: yup
14:56:18 <sigmavirus24> nikhil_k: just the email flaper87 sent to the ML
14:56:26 <nikhil_k> thanks sigmavirus24 !
14:56:36 <nikhil_k> jokke_: :)
14:56:49 <nikhil_k> mfedosin: thanks for the update. I guess the action item is
14:57:47 <mfedosin> Also there is a good bug https://bugs.launchpad.net/glance/+bug/1469817
14:57:47 <openstack> Launchpad bug 1469817 in Glance liberty "Glance doesn't handle exceptions from glance_store" [High,Confirmed] - Assigned to Mike Fedosin (mfedosin)
14:57:55 <nikhil_k> #action review needed for input for glanceclient artifacts support
14:58:23 <mfedosin> It was found by Alexei Galkin and now we have 39% test coverage instead of 38 :)
14:58:35 <mfedosin> My idea is to inherit all glance exceptions from the similar glance_store exceptions and then catch glance_store exceptions at the upper level
14:58:35 <nikhil_k> ha
14:58:58 <mfedosin> God help us when it reaches 42%
14:59:50 <tsekiyam> mfedosin: they only say "sorry for the inconvenience"
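
A minimal sketch of the exception-hierarchy idea mfedosin outlines above; the Glance-side names here are hypothetical, and it assumes glance_store exposes a base GlanceStoreException (and a NotFound subclass) in glance_store.exceptions:

    # Hypothetical Glance-side names; one top-level except clause then
    # catches glance_store errors and their Glance subclasses alike.
    from glance_store import exceptions as store_exc

    class ImageNotFound(store_exc.NotFound):
        """Glance-level error that is also a glance_store NotFound."""

    def upload(save_to_store, image_id, data):
        try:
            save_to_store(image_id, data)
        except store_exc.GlanceStoreException as e:
            raise RuntimeError('upload of %s failed: %s' % (image_id, e))
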
15:00:02 <nikhil_k> We are out of time.
15:00:03 <flaper87> jokke_: ++ for the troll :D
15:00:05 <nikhil_k> Thanks all!
15:00:07 <flaper87> sorry I got distracted
15:00:08 <nikhil_k> #endmeeting