21:00:49 <notmyname> #startmeeting swift
21:00:50 <openstack> Meeting started Wed Dec 16 21:00:49 2015 UTC and is due to finish in 60 minutes.  The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:54 <openstack> The meeting name has been set to 'swift'
21:01:05 <notmyname> hello, everyone. who's here for the swift meeting?
21:01:09 <minwoob> o/
21:01:12 <blmartin> o/
21:01:14 <cschwede> o/
21:01:16 <jrichli> o/
21:01:17 <onovy> o/
21:01:18 <ho> o/
21:01:20 <acoles> \o
21:01:25 <mattoliverau> o/
21:01:25 <kota_> hello
21:01:32 <peterlisak> hi
21:01:37 <eranrom> hi
21:01:47 <mattoliverau> sorry got lost looking at the new gerrit
21:02:05 <notmyname> yup. new gerrit was just deployed
21:02:11 <notmyname> have fun, and good luck :-)
21:02:24 <kota_> mattoliverau: wow, new gerrit
21:02:38 <notmyname> agenda for this week
21:02:40 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:02:53 <notmyname> I'm not sure how, but I think we got even longer than last week!
21:03:15 <notmyname> tdasilva: are you here?
21:03:49 <cutforth> hello
21:03:52 <notmyname> first topic is one that's a little different
21:03:58 <notmyname> #topic glance driver
21:04:11 <notmyname> #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081966.html
21:04:22 <notmyname> there was a mailing list message about this (linked above)
21:04:43 <notmyname> the glance team is looking for support in maintaining their drivers for various backend storage things
21:04:48 <notmyname> one of them is the swift driver
21:05:07 <notmyname> here's the current status
21:05:08 <notmyname> #link https://etherpad.openstack.org/p/glance-store-drivers-status
21:05:50 <notmyname> the basic idea is that each driver does the Right Thing for that particular storage system, and the glance team has neither the time nor the expertise to know them all
21:06:05 <notmyname> so I think there are a few options (listed in our meeting agenda)
21:06:27 <notmyname> 1) we could move the swift driver from glance into some new repo that we maintain
21:06:48 <notmyname> 2) we could have one person from our community as the point person for all questions and reviews
21:07:17 <notmyname> 3) we could build a joint glance-swift-store review team made up of glance core and swift core, with the code living in glance
21:07:22 <notmyname> 4) we could do nothing
21:07:51 <onovy> how much code is it?
21:07:55 <notmyname> I don't like option 4, just because I don't think it's good or healthy for openstack in general to let this cross-project stuff fall by the wayside
21:08:18 <notmyname> mattoliverau: have you had a chance to look since we talked about it?
21:08:38 <mattoliverau> not yet
21:08:41 <onovy> and +1 for do something :)
21:09:05 <notmyname> mattoliverau: who were we talking to from glance about it?
21:09:14 <mattoliverau> I volunteered to be the contact person so swift support won't be deprecated. But I like option 3
21:09:16 <notmyname> flaper87 ?
21:09:22 <cschwede> yep
21:09:25 <mattoliverau> yeah, that was him
21:09:47 <notmyname> there's really two options, I think
21:09:51 <acoles> mattoliverau: having more than one person maintaining seems better than option 2
21:09:52 <onovy> option "2" has only "one person". this is not good. 3 is better
21:09:58 <notmyname> mostly around where the code lives
21:10:11 <mattoliverau> acoles: that's what I told him
21:10:22 <notmyname> does the code stay in glance and we poke at it (perhaps with commit rights) or do we move the code to be a project under the swift umbrella?
21:10:31 <onovy> https://github.com/openstack/glance_store/tree/master/glance_store/_drivers/swift <-- this is it?
21:10:40 <notmyname> onovy: yes
21:10:44 <mattoliverau> I think if we want people to use it, it should stay with the project that needs it (glance)
21:10:53 <acoles> mattoliverau: +1
21:10:59 <onovy> +- 1k lines of code
21:11:11 <onovy> mattoliverau, +1 too
21:11:13 <notmyname> yeah, I agree
21:11:29 <notmyname> and I also think that it has to be a group (rather than a person) maintaining it
21:11:39 <jrichli> +1 for #3
21:11:42 <peterlisak> option 3 makes sense
21:11:57 <notmyname> seeing a lot of support for that
21:12:17 <onovy> so keep it inside glance, and create new group from glance+swift members
21:12:19 <notmyname> that was flaper87's idea. it's a good one
21:12:24 <acoles> notmyname: my hpe glance core colleague, stuart, advised me against option 1, fwiw
21:12:35 <notmyname> acoles: ah, interesting. why?
21:13:08 <notmyname> I think it would actually be a core team for glance drivers, and swift-core would be a member with an agreement that we don't poke the non-swift stuff
21:13:50 <acoles> notmyname: because glance should own how glance talks to other core services, just like swift owns keystoneauth, nova has glance driver etc
21:14:20 <notmyname> acoles: ok
21:14:32 <onovy> notmyname, maybe it's possible to setup permissions for this. just to make changes in some subdirs, etc. don't know
21:15:00 <acoles> notmyname: the part of this that concerns me is the plan to change the driver API, at a time when they say they don't have sufficient resources to maintain it
21:15:01 <notmyname> onovy: perhaps. that was actually a conversation in tokyo (wrt neutron, I think), and IIRC the conclusion was "trust people"
21:15:09 <notmyname> acoles: :-)
21:15:09 <onovy> ok
21:15:26 <lifeless> notmyname: and in vancouver :)
21:15:44 <notmyname> lifeless: yeah, you were the outspoken one there (with wisdom) ;-)
21:15:56 <notmyname> so I'll talk to flaper87 again and say that we want to have a joint core for glance drivers. unless anyone is opposed?
21:16:07 <mattoliverau> +1
21:16:35 <acoles> +1
21:16:43 <onovy> +1
21:16:46 <ho> +1
21:16:52 <minwoob> +1
21:16:56 <peterlisak> +1
21:17:01 <notmyname> ack
21:17:32 <notmyname> #agreed swift-core team requests core rights for glance-drivers to help maintain the swift driver in the glance repo
21:17:37 <notmyname> great!
21:17:41 <kota_> if making the joint team, do we need one +2 from glance core at least?
21:18:07 <notmyname> kota_: meh. we can work those details out. I'm not too worried about someone doing the wrong thing (intentionally)
21:18:08 <ho> i think swift and swiftclient core are good for this.
21:18:24 <kota_> ok
21:18:24 <notmyname> FYI flaper87 ^ (for your scrollback later)
21:18:42 <mattoliverau> Well that's our recommendation, flaper87 will take it to the glance meeting tomorrow (if I remember correctly)
21:18:54 <notmyname> mattoliverau: ah, right. thanks
21:19:11 <notmyname> ok, hold on to your seats. time to move fast
21:19:25 <notmyname> #topic testing testr tests
21:19:35 <notmyname> the testr functests landed
21:19:45 <notmyname> thank you everyone who helped get it through!
21:20:02 <notmyname> acoles: I believe you have a final issue on it for follow-on patch?
21:20:34 <notmyname> oh, maybe I'm misremembering?
21:20:35 <acoles> some of the os-testr options did not work; i found that switching to unittest2 fixed that
21:20:41 <notmyname> oh yeah
21:20:48 <acoles> i just went to find the patch but...gerrit...
21:20:51 <notmyname> lol
21:20:57 <gmmaha> acoles: patch 258578
21:20:57 <patchbot> gmmaha: https://review.openstack.org/#/c/258578/ - Fix func test --until-failure and --no-discover op...
21:20:59 <gmmaha> gerrit is back
21:21:09 <notmyname> gmmaha: but the UI is ....new...
21:21:17 <gmmaha> notmyname: aah yeah! :)
21:21:22 <acoles> gmmaha: thanks
21:22:01 <notmyname> acoles: doesn't unittest2 also bring in that cool subtest thing?
21:22:22 <acoles> notmyname: idk. tbh it was luck that led me to that fix.
21:22:37 <acoles> notmyname: luck and the exception trace :)
21:22:50 <acoles> gerrit sign in not working for me
21:22:54 <notmyname> yeah https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
21:23:04 <notmyname> so there might be other nice reasons to use unittest2 anyway
21:23:09 <notmyname> ok, so that's the status there
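For reference, a minimal sketch of the subtest API mentioned above (the feature at the linked docs page). unittest2 backports this Python 3.4 feature to Python 2; the test class and values here are illustrative only, not from swift:

    import unittest2


    class TestNumbers(unittest2.TestCase):
        def test_even(self):
            # each failing value is reported as a separate subtest failure,
            # instead of the whole test aborting at the first bad value
            for i in range(6):
                with self.subTest(i=i):
                    self.assertEqual(i % 2, 0)


    if __name__ == '__main__':
        unittest2.main()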
21:23:26 <notmyname> #topic patch 206105
21:23:27 <patchbot> notmyname: https://review.openstack.org/#/c/206105/ - Use entrypoints for storage policy implementation ...
21:23:33 <notmyname> this is the entrypoints patch
21:23:56 <notmyname> bringing it up again this week because it still needs reviews
21:24:12 <mattoliverau> I was supposed to take a better look at that but have failed; now that testr is in, i'll find some time
21:24:20 <notmyname> beyond the technical things for the patch itself, there's the question of what it implies we support in swift
21:24:28 <notmyname> the gerrit comments are good
21:24:41 <notmyname> so look at that
21:24:43 <notmyname> please
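For context, setuptools entrypoints generally take this shape in a package's setup.cfg; the group name below is hypothetical, and not necessarily what the patch proposes (the class paths are swift's real storage policy classes):

    [entry_points]
    swift.storage_policies =
        replication = swift.common.storage_policy:StoragePolicy
        erasure_coding = swift.common.storage_policy:ECStoragePolicy

and discovery at runtime would look roughly like:

    import pkg_resources

    # iterate every policy implementation registered under the
    # (hypothetical) group, whichever package it was installed from
    for ep in pkg_resources.iter_entry_points('swift.storage_policies'):
        policy_cls = ep.load()

The "what does it imply we support" question above is about exactly this: entrypoints would let third-party packages register their own policy implementations.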
21:24:59 <notmyname> #topic other patches to look at
21:25:35 <notmyname> janonymous isn't online, but he asked for reviews on patch 227855 about keepalive settings
21:25:35 <patchbot> notmyname: https://review.openstack.org/#/c/227855/ - Eventlet green threads not released back to pool
21:26:02 <notmyname> ho: your keystone RBAC tests? did they all get messed up from the testr patch?
21:26:08 <tdasilva> hello, sorry i'm late
21:26:41 <ho> notmyname: yeah, i will rebase them today.
21:26:58 <notmyname> ho: thanks. now that testr has landed, I'd like to spend some time on them
21:27:08 <notmyname> other patches that need review
21:27:13 <ho> notmyname: thanks!
21:27:18 <notmyname> copy middleware: patch 156923
21:27:19 <patchbot> notmyname: https://review.openstack.org/#/c/156923/ - Refactor server side copy as middleware
21:27:24 <notmyname> this one is blocking other work!
21:28:15 <tdasilva> notmyname: i'm working on it. right now focusing on decoupling versioning from copy. Hoping to have something posted by friday
21:28:20 <notmyname> thanks
21:28:21 <acoles> sorry i didnt get to that yet. did the copy_hook stuff get removed?
21:28:29 <acoles> tdasilva: ok. thanks
21:28:35 <tdasilva> acoles: not yet, but i'm working on it
21:28:42 <notmyname> concurrent reads patch 117710
21:28:42 <patchbot> notmyname: https://review.openstack.org/#/c/117710/ - Add concurrent reads option to proxy
21:29:06 <notmyname> mattoliverau: what's the status here?
21:29:12 <mattoliverau> I've marked it as WIP while I finish some leftover work due to clayg's last patch
21:29:28 <mattoliverau> I'll try and get that up by Friday (or at least pre-xmas)
21:29:40 <notmyname> ok, thanks
21:30:12 <notmyname> small segments in SLOs patch 252096
21:30:12 <patchbot> notmyname: https://review.openstack.org/#/c/252096/ - Allow smaller segments in static large objects
21:30:39 <notmyname> just needs one more +2 and it really helps out users, especially after the SLO-ranges feature
21:30:40 <briancline> oops, here o/
21:31:02 <acoles> i started looking at that, didnt get too far yet though
21:31:12 <notmyname> cschwede: kota_: would either of you have a chance to look at that one?
21:31:19 * peluse says 'oh yeah, a meeting'...
21:31:20 <kota_> sure
21:31:22 <cschwede> notmyname: yep, will do
21:31:25 <notmyname> thanks
21:31:56 <cschwede> oh, i already starred that. so it’s already on my list ;)
21:32:04 <notmyname> onovy: peterlisak: the two of you have a couple of patches mentioned
21:32:10 <notmyname> patch 238799
21:32:11 <patchbot> notmyname: https://review.openstack.org/#/c/238799/ - Change schedule priority of daemon/server in config
21:32:15 <onovy> here i just want to know whether swift should support it natively or not. as i wrote in gerrit, it's useful inside swift itself
21:32:51 <notmyname> that is a great question, especially for briancline, ahale, and other ops
21:33:09 <peterlisak> i tried to discuss it on irc a few times but got no answer ... so I put it on the agenda hoping to get more attention ...
21:33:30 <onovy> it can't be done inside the init script, because we have swift-init
21:33:44 <onovy> the only solution now is to use a cron job, which is really, really weird
21:33:48 <notmyname> peterlisak: thanks
21:34:12 <cschwede> +1 on that. don’t need to modify any other scripts
21:34:25 <ahale> hi here o/ yeah thats a good idea imo
21:34:45 <briancline> onovy: I actually asked our ops folks about this recently. we already control these in production through other means, so we're fine with this going in as long as it's optional, which it certainly seems to be
21:34:56 <onovy> briancline, yep, it is
21:35:07 <peterlisak> fyi ... most of the lines are doc changes
21:35:18 <notmyname> peterlisak: yeah
21:35:21 <onovy> yes that's true. patch itself is simple
21:35:34 <briancline> i can see situations later where it'll help us to have it in config
21:35:54 <notmyname> peterlisak: onovy: ok, so based on views from cschwede and ahale and briancline, definitely a good idea. thanks for working on it
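As a rough illustration of what config-driven scheduling priority could look like for a daemon section (hypothetical option names, not necessarily the patch's final syntax):

    [object-replicator]
    # CPU scheduling priority, semantics as for nice(1)
    nice_priority = 10
    # I/O scheduling class and priority, semantics as for ionice(1)
    ionice_class = IOPRIO_CLASS_BE
    ionice_priority = 7

The point being made above is that this replaces wrapping each daemon in nice/ionice from a cron job or init script.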
21:36:29 <onovy> perfect, so the second one
21:36:32 <notmyname> the other you had listed on the agenda is patch 229532
21:36:33 <patchbot> notmyname: https://review.openstack.org/#/c/229532/ - swift-init: SIGKILL daemon after kill_wait period.
21:36:34 <peterlisak> and pls maybe test on other distros ... i just tested on debian
21:37:14 <onovy> i need to do this hack in the init script in debian: http://anonscm.debian.org/cgit/openstack/swift.git/tree/debian/FOOTER.init.in
21:37:20 <onovy> which is really weird too :)
21:37:24 <notmyname> onovy: how does this compare to the swift-oldies script?
21:37:31 <onovy> notmyname, problem is somewhere else
21:37:36 <onovy> when i do swift-init .. stop
21:37:39 <onovy> the daemon doesn't stop
21:37:45 <onovy> so you can't start - port in use
21:38:14 <onovy> we have this problem in production
21:38:49 <onovy> so, now when i want to stop/restart a daemon i signal the process with sigterm/sighup (using swift-init), wait 60 seconds, and then sigkill
21:39:01 <onovy> if the parent process didn't die
21:39:26 <onovy> my patch moves this logic from the init script to swift-init itself, which is a better place
21:39:31 <briancline> onovy: i'll have a look through after meeting and likely +1 this, we've definitely seen this where the process will get stuck
21:39:49 <onovy> briancline, perfect
21:39:52 <briancline> as long as it doesn't affect graceful shutdowns, right?
21:40:06 <onovy> briancline, and that's a question
21:40:21 <onovy> from my point of view we should kill the parent process if it doesn't die in the grace period too
21:40:31 <briancline> in the situations where we've seen it get stuck, stracing the process shows it just sitting there trying to read() data that never comes (we're talking hours at least)
21:40:48 <onovy> i didn't try to strace it
21:41:17 <onovy> how it should work: send sigterm/sighup to the PARENT process. the parent process dies, children keep running -> this is fine
21:41:34 <onovy> but when the parent process doesn't die -> it's stuck; kill it all after the grace period with sigkill
21:41:47 <onovy> all => the whole process group
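A minimal sketch of the stop sequence being described here, as hypothetical code rather than the actual patch: signal the parent, wait out the kill_wait grace period, then SIGKILL the whole process group if the parent is still around.

    import os
    import signal
    import time


    def stop_daemon(parent_pid, kill_wait=60):
        # graceful case: the parent exits, children finish their work
        os.kill(parent_pid, signal.SIGTERM)
        deadline = time.time() + kill_wait
        while time.time() < deadline:
            try:
                os.kill(parent_pid, 0)  # signal 0 only checks the pid exists
            except OSError:
                return  # parent died within the grace period
            time.sleep(1)
        # stuck case: force-kill the entire process group
        os.killpg(os.getpgid(parent_pid), signal.SIGKILL)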
21:42:07 <ho> is there any condition to reproduce?
21:42:21 <briancline> well, i would think on a graceful shutdown (swift-init shutdown / reload) it should still respect that a child proc could still be alive because it's serving a request (keepalive sessions stick around like this). since eventlet doesn't expose enough information about the connections etc, it seems best to err on the side of caution for graceful shutdowns, but for hard stops (swift-init stop) it should do what your patch is proposing, i would think
21:42:23 <onovy> ho, no :/ it hangs "sometimes"
21:42:48 <onovy> briancline, so you are suggesting what peterlisak suggests too
21:42:56 <onovy> for a graceful shutdown, kill only the parent, not the group
21:42:56 <ho> onovy: thanks.
21:43:15 <onovy> same for reload (graceful shutdown + start)
21:43:16 <onovy> right?
21:43:42 <briancline> well
21:44:00 <briancline> what i'm thinking is don't try to force a kill on a graceful shutdown, only on the hard shutdown
21:44:14 <onovy> how can you do a graceful reload then?
21:44:23 <briancline> or, i guess we could set the grace period really high for graceful shutdowns, but that would mean hard shutdowns use that same timeout too
21:44:48 <ho> I'm interested in the problem. i will investigate it.
21:45:06 <briancline> same here, i think i need to revisit the patch and test with it a bit
21:45:29 <notmyname> onovy: thanks for bringing it up. ho and briancline, thanks for looking
21:45:42 <onovy> ok, thank you. peterlisak and i will respond to your questions
21:45:52 <notmyname> I'll see if anyone here (swiftstack) has any opinions on it too
21:46:03 <peterlisak> notmyname, yeah, thanks
21:46:06 <notmyname> kota_: you had 2 patches listed on the agenda
21:46:18 <kota_> yes
21:46:22 <notmyname> patch 198633 and patch 198634
21:46:22 <patchbot> notmyname: https://review.openstack.org/#/c/198633/ - Last-Modified header support on HEAD/GET container
21:46:23 <patchbot> notmyname: https://review.openstack.org/#/c/198634/ - Support last modified on listing containers
21:46:45 <kota_> that allows the container listing to show the last-modified timestamp for each container.
21:46:52 <kota_> I talked about them in vancouver
21:47:15 <kota_> i think they are quite close to landing; they've had +2s at times
21:47:29 <kota_> but rebasing means losing them :/
21:47:37 <acoles> kota_: i need to cycle back round to those
21:47:48 <kota_> so bringing them up to ask someone to re-review :)
21:48:08 <notmyname> ok, thanks
21:48:11 <kota_> nothing much of a problem, and no new changes
21:48:59 <notmyname> #topic general stuff
21:49:22 <notmyname> first up, are we going to meet during the next two weeks? there's often a lot of vacation (at least in the US)
21:49:49 <notmyname> so who would be here if we had a meeting the next two weeks?
21:50:08 <acoles> notmyname: 23rd maybe, 30th no
21:50:26 <kota_> maybe I'm
21:50:26 * onovy says: Merry Christmas and happy new year :) so no
21:50:32 <mattoliverau> next week is Xmas eve for me; I might be available though I do have to drive 8 hours that day. the week after I won't be around.
21:50:42 <notmyname> ok, then how about this
21:50:47 <acoles> mattoliverau: you always have to be first :)
21:50:57 <tdasilva> i'd be around the 23rd, then off for a while
21:51:06 <notmyname> let's not schedule any more meetings for this year, and stay active in IRC if there are questions
21:51:12 <mattoliverau> lol
21:51:30 <kota_> +1
21:51:31 <mattoliverau> +1
21:51:33 <ho> notmyname: +1
21:51:35 <acoles> ok
21:51:36 <jrichli> +1
21:51:37 <notmyname> lots of time for reviews :-)
21:51:41 <briancline> have an enjoyable holiday everyone
21:51:51 <minwoob> +1
21:51:55 <peterlisak> +1
21:52:14 <onovy> +2
21:52:19 <notmyname> any updates from anyone else on ongoing stuff? crypto? container sync? container sharding? ec-related patches?
21:52:23 <mattoliverau> On that note then, I raised a new patch late yesterday arvo (for me), patch 258280, which once I push a new patchset up later this morning I think would be good to review, because it fixes up a problem an Op was having yesterday.
21:52:23 <patchbot> mattoliverau: https://review.openstack.org/#/c/258280/ - Pass HTTP_REFERER down to subrequests
21:52:50 <notmyname> mattoliverau: right! that's a good one. fixes what looks to be an annoying (to users) bug
21:52:59 <acoles> mattoliverau: good catch
21:53:09 <mattoliverau> it allows you to set the same referer ACL on a *LO segments container rather than having to make it globally readable
21:53:47 <mattoliverau> 1 line fix, multi-line in functional tests :)
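For context, the kind of setup this enables, sketched with the python-swiftclient CLI (container and domain names hypothetical): set the same referer-based read ACL on both the manifest container and its segments container, instead of making the segments container world-readable.

    swift post -r '.r:docs.example.com' my_container
    swift post -r '.r:docs.example.com' my_container_segments

The fix is about making the proxy pass the Referer header down to the subrequests that fetch the segments, so the ACL check on the segments container can succeed.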
21:54:19 <notmyname> last bit of stuff from me is about the hackathon. acoles and I are still working on logistics. will update people as we have more info. it's in bristol, UK the week of feb 28
21:54:32 <notmyname> anything else from anyone?
21:54:35 <eranrom> Making progress with acoles on container sync (thanks acoles!). also, we found a juicy one today: all client calls catch ClientException where they should catch urllib.HTTPError exceptions
21:54:38 <blmartin> Container sharding, minwoob and bjkeller have been working on fixing unit tests that do not agree with the sharding changes!
21:54:45 <jrichli> i have something, but can talk in channel if needed
21:54:46 <onovy> just fyi: swauth 1.1.0 is out
21:54:49 <acoles> is jrichli here to talk about encrypting content-type?
21:54:54 <jrichli> acoles, yes
21:54:59 <jrichli> some of us working on crypto have made a decision/proposal that needs agreement from everybody
21:54:59 <acoles> jrichli: oops cross-posted
21:55:09 <notmyname> jrichli: go
21:55:16 <notmyname> (fast ;-)
21:55:17 <jrichli> we are suggesting that we no longer encrypt the content-type for our first version of encryption
21:55:19 <mattoliverau> blmartin et al: thanks guys
21:55:28 <jrichli> encrypting the content-type has caused many issues - and cost much time. and we still don't have solutions for all the problems it causes
21:55:39 <notmyname> content-type in the container DB or in the object metadata too?
21:55:40 <mattoliverau> blmartin: now that testr's in I'll loop back around on sharding trello :)
21:55:53 <blmartin> mattoliverau: nice!
21:56:07 <jrichli> notmyname: it would not be encrypted at rest anywhere.  (right, both)
21:56:12 <notmyname> ok
21:56:20 <acoles> notmyname: object metadata creates the harder problems
21:56:46 <notmyname> not what I would have expected
21:56:48 <jrichli> we have considered whether it will be ok to implement this encryption later, and to the best of our knowledge, the answer to that is yes
21:56:49 <acoles> notmyname: with multipart responses, we don't know how long they are a priori when decrypting
21:57:54 <onovy> if someone can get me opinion for patch 251151 ; just say localtime or gmt, that's all :)
21:57:55 <patchbot> onovy: https://review.openstack.org/#/c/251151/ - Show local time in swift-recon in replication part
21:57:55 <acoles> notmyname: because we don't know how many encrypted content-type headers will come in the multiple parts
21:58:28 <notmyname> acoles: ok. I'll have to look more to understand :-)
21:58:31 <jrichli> any concerns?  thoughts?  I think that we will want to confirm with clayg and torgomatic when they are around
21:58:44 <notmyname> but in general I trust that acoles and jrichli are making the right call
21:59:14 <notmyname> jrichli: yeah, there's the general concern of "not everything is encrypted", but that's not new
21:59:15 <acoles> i see this as tactical scope reduction in order to make progress toward a working artifact
21:59:35 <notmyname> acoles: which is exactly how jrichli described it to me. and that's reasonable
22:00:00 <mattoliverau> I think reducing scope is ok, things can be added later
22:00:09 <notmyname> we're at time
22:00:15 <jrichli> thanks :-). I guess we will go with that for now, but again, i will feel better when there is a thumbs up from clayg and torgomatic
22:00:24 <notmyname> thank you for coming today and thanks for working on swift
22:00:33 <notmyname> jrichli: yes. ping them in -swift or tomorrow
22:00:42 <notmyname> #endmeeting