14:00:37 <jokke_> #startmeeting glance
14:00:40 <openstack> Meeting started Thu Dec  6 14:00:37 2018 UTC and is due to finish in 60 minutes.  The chair is jokke_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:42 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:44 <openstack> The meeting name has been set to 'glance'
14:00:46 <jokke_> #topic roll-call
14:00:50 <jokke_> o/
14:01:27 <rosmaita> o/
14:02:14 <abhishekk> o/
14:02:28 <jokke_> #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:03:43 <jokke_> #topic updates
14:03:56 <jokke_> I have only one thing on my list
14:04:23 <jokke_> our survey closes today; 9 responses so far, and everyone is up for the default visibility change
14:04:39 <rosmaita> hooray!
14:04:44 <jokke_> so unless something dramatic happens today, we should be good to go with that
14:05:00 <abhishekk> great
14:05:06 <LiangFang> good
14:05:24 <rosmaita> i wonder if it will break tempest again :(
14:05:49 <jokke_> rosmaita: time will tell, most likely ;)
14:06:40 <jokke_> we'll fight that battle once we get the patch up and see how many things it breaks. I'm assuming it will break somewhere between lots and tons of our own tests as well
14:07:28 <jokke_> #topic release/periodic jobs updates
14:07:48 <abhishekk> that would be me
14:08:00 <abhishekk> We are in R-18 week; the Stein 2 milestone will be on 11th January, about five weeks from now
14:08:25 <abhishekk> so focus on our targets for milestone 2 and help with the reviews as well
14:08:25 <jokke_> abhishekk: have you had a chance to check whether there's anything we need to take care of at this point? I'd assume we need at least stable cuts once the backports are in
14:08:41 <abhishekk> jokke_, right
14:08:57 <adam_zhang_> hello, everybody
14:09:01 <abhishekk> I have 2-3 patches up which need to be backported as well
14:09:36 <abhishekk> Regarding periodic jobs, 2-3 jobs are failing today; yesterday it was all green
14:09:49 <jokke_> :(
14:09:58 <jokke_> any idea yet what broke?
14:10:00 <abhishekk> most of the tests are failing with a String exception
14:10:35 <abhishekk> jokke_, not at the moment, but these are intermittent
14:10:38 <jokke_> that's not good, sounds like something really broke
14:10:56 <abhishekk> I will try to spend some time on this tomorrow
14:11:08 <jokke_> sounds great, thanks
14:11:30 <abhishekk> we can move to next topic
14:11:41 <jokke_> ok
14:11:53 <jokke_> #topic python-glanceclient v1 support
14:12:00 <jokke_> rosmaita: you wanted to highlight this
14:12:12 <rosmaita> yeah, just in case anyone forgot
14:12:27 <rosmaita> i don't think i mentioned it last week because it's not really a config option
14:12:49 <rosmaita> but we have said that v1 client support ends in stein
14:13:13 <jokke_> that's a lot of cleanup to happen \\o \o/ o// o/7
14:13:20 <rosmaita> i don't know who has bandwidth / how we want to prioritize it
14:13:39 <rosmaita> but we do still have the tests running against queens v1 glance
14:13:55 <rosmaita> so if we don't remove it, we can still be sort of confident that it still works
14:13:59 <jokke_> from master?
14:14:10 <rosmaita> i think so, gimme a sec to check
14:14:15 <abhishekk> rosmaita, probably I will take that
14:14:47 <jokke_> I was thinking more of dropping the tests and the client option to use v1, and starting to clean the code base block by block
14:14:52 <rosmaita> #link http://git.openstack.org/cgit/openstack/python-glanceclient/tree/.zuul.yaml
14:15:00 <jokke_> similar way as with the service
14:15:02 <rosmaita> looks like we check out glanceclient master
14:15:10 <jokke_> kk
14:15:26 <rosmaita> yeah, stable/queens devstack with glanceclient master
14:15:44 <jokke_> ok, so we can drop that test
14:16:07 <rosmaita> right, after we eliminate the code :)
14:16:43 <jokke_> just before ... we can't drop the code if that change is gated against the test checking it's still there :P
14:16:44 <abhishekk> we still have some v1-related code in the glance code base as well
14:16:54 <jokke_> abhishekk: yes we do
14:17:13 <jokke_> there is quite a few bits that still needs cleanup
14:17:46 <abhishekk> yes
14:18:51 <jokke_> ok, that's a good reminder, thanks Brian
14:19:03 <rosmaita> happy to be of service
14:19:14 <jokke_> #topic Foreign key constraint patch
14:19:24 <jokke_> LiangFang: I think this was yours
14:19:40 <LiangFang> this is just a call to rosmaita:)
14:19:45 <rosmaita> i noticed!
14:19:51 <LiangFang> yes
14:20:03 <rosmaita> looks like this meeting will end early, i'll use the time to look at it
14:20:15 <rosmaita> it looks ok, but i wanted to look more carefully
14:20:22 <LiangFang> thanks, I have a question to rosmaita
14:20:23 <jokke_> #link https://review.openstack.org/#/c/617889/
14:20:34 <rosmaita> sure
14:20:42 <LiangFang> let's postpone it to the open discussion section
14:20:46 <rosmaita> ok
14:20:54 <jokke_> well let's get in there
14:21:04 <jokke_> #topic Open Discussion
14:21:18 <abhishekk> I have PoC ready for cache-manage using v2
14:21:37 <jokke_> abhishekk: you wanna link the patches here that you need eyes on and that need backporting as well?
14:21:38 <rosmaita> cool
14:21:39 <abhishekk> just need to spend some time testing it, maybe a couple of days
14:21:58 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000665.html
14:22:08 <abhishekk> jokke_, sure
14:22:22 <abhishekk> https://review.openstack.org/620781
14:22:23 <rosmaita> i don't know how to respond, i wasn't really paying a lot of attention to the cache management in the rocky cycle
14:22:29 <abhishekk> https://review.openstack.org/618468
14:22:36 <abhishekk> https://review.openstack.org/618093
14:22:56 <abhishekk> I will reply to that mail
14:23:02 <rosmaita> excellent
14:23:04 <rosmaita> thanks
14:24:16 <abhishekk> but I haven't received that mail :(
14:24:33 <jokke_> #link https://review.openstack.org/620781
14:24:42 <jokke_> #link https://review.openstack.org/618468
14:24:51 <jokke_> #link https://review.openstack.org/618093
14:25:02 <rosmaita> abhishekk: that's weird
14:25:24 <abhishekk> yeah
14:25:27 <rosmaita> but i have noticed that deliveries from openstack-discuss seem to lag behind when stuff shows up on the website
14:25:44 <abhishekk> not in my inbox, searched all of it now
14:26:12 <abhishekk> that's why it didn't come to my attention in the first place, I guess :(
14:26:21 <rosmaita> probably!
14:26:47 <LiangFang> rosmaita: i want to confirm with you about refactoring the location policy code, will you do this within two months?
14:27:05 <rosmaita> jokke_: how about you?
14:27:11 <abhishekk> jokke_, we need to discuss about location patch submitted by imacdonn
14:27:14 <jokke_> it's great when decently working things need to be changed just for the sake of change, isn't it ;)
14:27:42 <rosmaita> LiangFang: i will not be doing any location code refactoring
14:27:48 <rosmaita> someone needs to, though
14:27:57 <LiangFang> seems to be some network delay
14:28:15 <jokke_> tbh I'm just incredibly scared to touch that
14:29:18 <abhishekk> I guess your concern about nova snapshot with location doesn't apply to this patch; a snapshot will create a new image, it will not call add_location (as per my understanding)
14:29:34 <jokke_> I know we have more and more need to do it and we still have the feature branch for it, so I think we just need to take the hit, start poking at it, and see what all breaks
14:30:13 <jokke_> abhishekk: I'm not sure if nova calls add or replace
14:30:39 <abhishekk> Also, we discussed two days ago that we need to change nova and cinder to use the multiple backend support provided by glance; that way, when nova calls snapshot create, it can specify which backend the snapshot should be uploaded to
14:30:42 <jokke_> but nova uses the locations API when it creates the ceph snapshot to register the location to the image
14:30:52 <abhishekk> same goes for cinder upload-volume-to-image
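(A hedged sketch of the multi-store flow abhishekk describes, against glance's then-experimental multi-store API; the endpoint, token, image ID, and store name below are made up for illustration:)

    import requests

    GLANCE = 'http://controller:9292'   # made-up endpoint
    TOKEN = {'X-Auth-Token': 'TOKEN'}   # auth handling omitted

    # Discover which backends this deployment exposes.
    stores = requests.get(GLANCE + '/v2/info/stores', headers=TOKEN).json()
    print([s['id'] for s in stores['stores']])

    # Upload snapshot data into a named backend; 'ceph-fast' is a made-up
    # store name, X-Image-Meta-Store is glance's multi-store selector.
    headers = dict(TOKEN, **{'Content-Type': 'application/octet-stream',
                             'X-Image-Meta-Store': 'ceph-fast'})
    with open('snap.raw', 'rb') as f:
        requests.put(GLANCE + '/v2/images/IMAGE_ID/file',
                     headers=headers, data=f)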
14:31:43 <jokke_> I do not know about cinder; do they upload the image or do they do it in-band as well?
14:32:05 <abhishekk> jokke_, I will try to understand nova snapshot code
14:32:11 <jokke_> actually it must be add, as that image goes to active
14:32:37 <abhishekk> I have tried to set up the environment but it is failing
14:33:58 <jokke_> abhishekk: try to get in touch with some of our internal Nova folks. I'm very hopeful there is someone who understands how the ceph integration code works there
14:34:17 <abhishekk> jokke_, will do
14:34:51 <abhishekk> will update in/before next meeting
14:35:22 <abhishekk> kindly review my patches :D
14:35:53 <jokke_> iiuc the workflow is that Nova and Glance use the same ceph backend for ephemeral storage and the image store; Nova accesses the locations via the locations API and uses the direct path. When snapshotting, it just takes a snapshot on the ceph side, registers a new image with glance, and adds the location to that image
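(A minimal sketch of the glance side of that workflow, using python-glanceclient v2; the endpoint, token, and rbd URL are made-up examples, and the locations API requires show_multiple_locations to be enabled:)

    from glanceclient import Client

    glance = Client('2', endpoint='http://controller:9292', token='TOKEN')

    # Register an image record for the snapshot; no data is uploaded.
    image = glance.images.create(name='instance-snap', disk_format='raw',
                                 container_format='bare')

    # Add the ceph-side snapshot as a location; this is what moves the
    # image to 'active' without any bytes passing through glance.
    glance.images.add_location(
        image['id'],
        'rbd://CLUSTER_FSID/pool/image/snap',  # made-up direct-path URL
        {'store': 'rbd'})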
14:36:03 <jokke_> abhishekk: I will do
14:36:35 <abhishekk> jokke_, ack
14:37:15 <jokke_> In ceph that is just a COW snapshot, which is also the reason we have the delete dance in the ceph driver: ceph refuses to delete the parent as long as there are snapshots of the image
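(A rough illustration of that delete dance with the python rbd bindings; error handling is trimmed and glance's actual rbd store driver differs in detail:)

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('images') as ioctx:
            try:
                # A plain delete fails while snapshots/clones exist.
                rbd.RBD().remove(ioctx, 'IMAGE_NAME')
            except (rbd.ImageBusy, rbd.ImageHasSnapshots):
                # The "dance": unprotect and drop the snapshots first
                # (this still fails if other images are cloned from them),
                # then retry the parent delete.
                with rbd.Image(ioctx, 'IMAGE_NAME') as image:
                    for snap in image.list_snaps():
                        if image.is_protected_snap(snap['name']):
                            image.unprotect_snap(snap['name'])
                        image.remove_snap(snap['name'])
                rbd.RBD().remove(ioctx, 'IMAGE_NAME')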
14:38:03 <abhishekk> jokke_, will go through it, thanks for the pointers
14:38:17 <jokke_> hopefully that helps
14:38:23 <jokke_> anything else?
14:38:25 <LiangFang> abhishekk: which failure when you setup environment?
14:38:44 <abhishekk> LiangFang, http://paste.openstack.org/show/736692/
14:38:48 <LiangFang> ok
14:38:50 <abhishekk> jokke_, that's it
14:39:08 <jokke_> LiangFang: rosmaita: ?
14:39:29 <abhishekk> LiangFang, http://paste.openstack.org/show/736693/
14:39:31 <LiangFang> I sit beside a ceph team
14:39:36 <rosmaita> oops, not paying attention
14:39:38 <abhishekk> this is my local.conf
14:40:03 <LiangFang> so if you encounter some issue with ceph, I may be able to get help from them
14:40:06 <abhishekk> LiangFang, great. please drop me a mail if you find something
14:40:25 <jokke_> ok, 20min win! \\o \o/ o// o/7
14:40:29 <jokke_> Thanks all!
14:40:33 <LiangFang> thanks
14:40:40 <abhishekk> thank you all
14:40:48 <jokke_> #endmeeting