21:00:50 <timburke> #startmeeting swift
21:00:51 <openstack> Meeting started Wed Mar 27 21:00:50 2019 UTC and is due to finish in 60 minutes. The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:54 <openstack> The meeting name has been set to 'swift'
21:01:03 <timburke> who's here for the swift meeting?
21:01:05 <clayg> o/
21:01:10 <kota_> o/
21:01:16 <m_kazuhiro> o/
21:01:22 <mattoliverau> o/
21:01:26 <rledisez> o/
21:01:59 <timburke> agenda's pretty short
21:02:04 <timburke> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:02:11 <tdasilva> hi
21:02:20 <timburke> #topic Swift 2.21.0
21:02:29 <tdasilva> woo!
21:02:49 <kota_> yey
21:02:58 <timburke> so we had a release! that'll be our stein release; stable branch has already been cut
21:03:54 <timburke> a lot of great stuff went out in it; thanks to everyone for writing and reviewing patches, and just generally making swift better :-)
21:04:05 <mattoliverau> nice
21:04:33 <timburke> #topic pyeclib release
21:05:09 <timburke> it's been a bit since we did one of these
21:05:42 <timburke> and i was thinking it might be good (by which i mean, it would make my life easier ;-)
21:06:08 <timburke> authors/change log patch is up at https://review.openstack.org/#/c/647656/
21:06:18 <clayg> things happen in swift for two reasons: 1) make timburke's life easier 2) for the lulz
21:06:51 <timburke> if anyone has anything else they'd like to see for that, you should probably speak up in the next couple days
21:06:54 <tdasilva> fwiw, it would also make my life easier
21:07:11 <kota_> only pyeclib? liberasurecode too?
21:07:29 <timburke> kota_, i hadn't thought that hard about it :-)
21:07:34 <tdasilva> i just need pyeclib for now, but we still have the libec patch to review...
21:07:45 <mattoliverau> making life easier is a great reason for a new release :)
21:07:47 <tdasilva> one sec, let me dig it up
21:07:53 <kota_> ok
21:08:20 <tdasilva> https://review.openstack.org/#/c/635605/
21:08:30 <timburke> oh, yeah... zaitcev's patch might be good to release...
21:09:09 <timburke> https://review.openstack.org/#/c/636556/
21:09:12 <clayg> @timburke can we get them all on priority reviews between now and next meeting and plan to do the release the week after that (2 weeks from now-ish)?
21:10:47 <timburke> clayg, i think the stuff i'm really interested in releasing has landed... tdasilva's right that we should really review the quadiron patches (since we invited them to submit them and all), but idk that it should keep us from doing a release sooner rather than later
21:11:09 <clayg> was the aim to get a new tagged release packaged by downstream as stein?
21:11:34 <timburke> nope; too late for that. this'd be part of train regardless at this point
21:12:12 <timburke> i just want something pip-installable that fixes https://bugs.launchpad.net/pyeclib/+bug/1780320
21:12:13 <openstack> Launchpad bug 1780320 in PyECLib "If find_library('erasurecode') in setup.py does not return a library version, try to append it" [Undecided,Fix released]
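
(Reference note: bug 1780320 is about setup.py failing to locate liberasurecode when ctypes.util.find_library() comes up empty. The sketch below is only the rough shape of the fallback the bug title describes, not the fix that actually landed in pyeclib; the candidate version list is a made-up placeholder.)

    # Sketch only -- the idea from the bug title, not pyeclib's actual code.
    import ctypes
    from ctypes.util import find_library

    def find_erasurecode_lib(candidate_versions=('1',)):
        name = find_library('erasurecode')
        if name:
            return name
        # find_library() came up empty; try explicitly versioned sonames
        # (assumption: the version list is illustrative only)
        for ver in candidate_versions:
            soname = 'liberasurecode.so.%s' % ver
            try:
                ctypes.CDLL(soname)
                return soname
            except OSError:
                continue
        return None
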
21:12:38 <clayg> tdasilva: ok well at a minimum they have some pre-req "fix" patches that are probably good candidates for "priority" review, along with zaitcev's crc32 fix thing?
21:12:40 <kota_> yeah, if we need some patches for stein, backporting to the stable/stein branch is needed anyway.
21:13:10 <tdasilva> just realized that libec is already 1.6.0
21:13:20 <timburke> neither pyeclib nor liberasurecode have stable branches, as i understand it
21:13:21 <clayg> so land good code and cut a release, then do backports - is anyone against the timeline of "on priority review this week; merge what we can next week; then cut a release"?
21:13:46 <clayg> bam - stable branches are for scaredy cats
21:14:27 <clayg> oh, the crc32 fix has already landed
21:14:38 <timburke> fwiw, the lists of open patches are remarkably short: https://review.openstack.org/#/q/project:openstack/pyeclib+is:open https://review.openstack.org/#/q/project:openstack/liberasurecode+is:open
21:15:09 <clayg> so do we have important patches outstanding that we could realistically land before a release? maybe timburke was just saying "new tag incoming"
21:15:27 <timburke> i think the quadiron changes (and my prep-for-release patch) are the only "live" changes
21:15:30 <clayg> 🤦
21:15:31 <mattoliverau> then it's priority reviews until patches are landed or until we just want to cut a release. be it next week or earlier (if things have landed)
21:15:57 <tdasilva> clayg: i don't think we have anything so important that it needs to land before a release; agree quadiron could wait for the next release as it will take a bit of time to land
21:16:20 <clayg> ok, good talk 👍
21:16:20 <kota_> +1
21:16:46 <timburke> so, on to updates!
21:16:50 <clayg> also WTG everyone who's been doing work/review on py/libec! that's an amazingly short backlog
21:17:23 <timburke> clayg, it's almost like we don't really think much about it ;-)
21:17:26 <timburke> #topic losf update
21:17:51 <timburke> kota_, rledisez: how's the feature branch going?
21:18:29 <rledisez> not much movement over the last few days. alecuyer was working on a patch to automatically clean up empty volumes. he will start working again on replacing grpc with http next week
21:18:48 <kota_> sorry, not so much from my side because of a business trip last week. I had asked Norio to push their docs; he has been preparing the patch, I think.
21:19:27 <kota_> my plan is to work this week and next on setting up packaging and testing for that.
21:19:41 <timburke> cool! eventlet-friendly losf seems good, as do more docs :-)
21:20:08 <timburke> #topic py3 updates
21:20:40 <timburke> looks like zaitcev isn't here today, but i know he's been pushing up patches for DLO recently
21:21:04 <timburke> i've been spiking hard on getting some in-process functional tests running in the gate
21:21:17 <mattoliverau> timburke: nice work on that
21:21:20 <timburke> for example, https://review.openstack.org/#/c/645895/
21:22:20 <timburke> they'll have similar ratchets as the unit tests. currently targeting py37 (largely because that's what i've got as my default py3)
21:22:44 <clayg> great!
21:22:46 <kota_> excellent!
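
(Reference note: for anyone who wants to poke at the in-process functional tests locally, roughly the following should work. SWIFT_TEST_IN_PROCESS is the existing switch for in-process mode; the py37 tox environment name below is only a guess at what the gate job in https://review.openstack.org/#/c/645895/ actually wires up.)

    # in-process functional tests under the default interpreter
    SWIFT_TEST_IN_PROCESS=1 tox -e func
    # hypothetical py3.7 variant -- the real tox env / job name is whatever
    # the patch above defines, not necessarily "func-py37"
    SWIFT_TEST_IN_PROCESS=1 tox -e func-py37
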
21:23:13 <timburke> and mattoliverau's been working on getting staticweb running on py3!
21:23:35 <mattoliverau> just picked a middleware that hasn't been touched yet :)
21:24:43 <timburke> i think that's about it for updates -- am i missing anything?
21:25:13 <timburke> #topic open discussion
21:25:41 <timburke> since we've got both rledisez and m_kazuhiro here, how about we bring up https://review.openstack.org/#/c/601950/ again?
21:26:40 <clayg> does it need another rev - or is it good to go?
21:26:58 <rledisez> sure, give me a second to re-read the last answer from m_kazuhiro :)
21:27:07 <clayg> looks like mostly docs
21:27:39 <m_kazuhiro> Yes. Another rev, and rledisez's answer will be helpful for me.
21:28:29 <mattoliverau> feel free to squash the docs update into that patch if you want: https://review.openstack.org/#/c/616076
21:28:47 <mattoliverau> or we can land them one after the other, whatevs
21:28:58 <clayg> i thought the object-server needed an internal-client, so it couldn't use the object-server's config because that config already has a pipeline section that points to the object-server:app instead of the proxy:app?
21:29:11 <rledisez> I guess we can ask for opinions here: does it seem reasonable to decide the behavior of a daemon based on the name of the config file?
21:29:42 <timburke> clayg, surely we could point to an internal-client.conf or something though, yeah?
21:29:48 <kota_> mattoliverau: it looks like that patch was already squashed.
21:29:51 <m_kazuhiro> mattoliverau: I have already squashed the patch.
21:29:57 <mattoliverau> oh, great
21:30:01 <mattoliverau> never mind me then :P
21:30:30 <clayg> i see some code that says "read_conf_for_queue_access" - but that's not on master, is it?
21:30:30 <kota_> mattoliverau: appreciate you working on that :P
21:30:42 <clayg> oh, it is
21:32:14 <timburke> rledisez, that does seem a little odd...
21:32:58 <clayg> yeah i'm confused, having the object expirer use the internal-client.conf for the pipeline config seems absolutely brilliant tho...
21:33:22 <timburke> why can't we just drive off the dequeue_from_legacy setting?
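
(Reference note: the internal-client.conf idea above is that an internal client needs a proxy-style pipeline, which the object-server's own config can't provide. The sketch below is modelled on the internal-client.conf-sample shipped in the swift tree; treat the sample there as authoritative, not this.)

    [DEFAULT]

    [pipeline:main]
    pipeline = catch_errors proxy-logging cache proxy-server

    [app:proxy-server]
    use = egg:swift#proxy
    account_autocreate = true

    [filter:cache]
    use = egg:swift#memcache

    [filter:proxy-logging]
    use = egg:swift#proxy_logging

    [filter:catch_errors]
    use = egg:swift#catch_errors
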
21:34:32 <clayg> so it looks like this patch is not done - or at least all the core reviewers that look at it get confused really quickly, which isn't a great sign for "ready" - but maybe we need to do a better job of enumerating the complete set of issues which must be resolved if we want it to land
21:35:02 <timburke> seems about right
21:35:17 <clayg> AFAIK this is a pre-req for the generalized task queue which everyone thinks is a brilliant idea, so... maybe I'll put it on my list for Friday?
21:35:18 <timburke> anyone have anything else to bring up?
21:37:05 <m_kazuhiro> clayg: Thank you for your review questions, I'll add answers to the patch.
21:37:21 <clayg> timburke: I hear this one person using s3api wants bulk delete to be faster/more async? is that right?
21:37:43 <timburke> yup. so... i'm gonna be thinking about how to do that in a sane way
21:38:51 <timburke> it's looking... messy. like, i'm debating about making some REPLICATE requests from the proxy to clear out listings earlier. not sure yet about how good any of my ideas are
21:39:12 <rledisez> timburke: that's interesting, even for non-s3. i used to pass a regex with DELETE that matched all the entries of a container, and create an object-expirer entry before removing the line from the container.
21:40:31 <timburke> rledisez, that sounds rather similar to what i'm thinking about trying to do -- bulk-insert some expirer entries, then bulk-insert some tombstones...
21:40:48 <timburke> how did you get around the expirer wanting to send x-if-delete-at?
21:41:45 <rledisez> timburke: it was 5 years ago (what, already?!) and we removed it from prod since, so I would have to check
21:42:28 <timburke> eh, don't worry too much -- just curious. my current plan is to use a distinct content-type on the expirer queue entry
21:43:13 <timburke> https://review.openstack.org/#/c/635040/ seems like a good thing, but maybe a little scary
21:43:23 <timburke> (Include some pipeline validation during proxy-server start-up)
21:44:22 <timburke> we've had a few bugs get reported that ultimately came down to badly-configured pipelines (or badly-behaved auto-insertion of required middlewares)
21:45:22 <timburke> that patch tries to prevent such situations from arising, but may keep your proxy from starting on upgrade (with old configs)
21:45:58 <mattoliverau> yeah, I think we need to do something to ease pipeline issues. at least the major gotchas.
21:47:24 <timburke> https://review.openstack.org/#/c/645624/ addresses some issues with our lower-constraints job, which apparently wasn't testing what we thought it was testing
21:48:44 <timburke> it involves a couple dependency up-revs (for cryptography and netifaces, in particular), but i think they're old enough that it shouldn't really impact anyone?
21:49:45 <timburke> if someone could take another look at that, i'd appreciate it -- i did enough to fix it up that i'm not sure i should be the one to +A
21:49:55 <clayg> timburke: why can't you just +A that one - the infra guys all signed off - do you need someone else to load it into their head for any specific reason?
21:50:23 <timburke> fine -- done :P
21:50:34 <timburke> anyone have anything else?
21:51:09 <clayg> I mean getting it on master is a great way to find out if it breaks "something else we don't know about" - and even then the remediation is the same: "figure out how to have both things work and merge the fix"
21:51:23 <clayg> *getting it on master right after a release, I might say
21:52:55 <timburke> all right, i think i'm calling it then
21:53:09 <timburke> thanks for coming everyone, and thank you for working on swift!
21:53:19 <timburke> #endmeeting
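
(Post-meeting reference note on the pipeline-validation discussion above: the kinds of problems it targets are things like a pipeline that doesn't end in the proxy-server app, or middleware ordered so that the proxy's auto-insertion of required filters produces surprising results. Purely as an illustration -- not what the patch actually checks, see the review for that -- a minimal conventional proxy pipeline looks roughly like this:)

    [pipeline:main]
    # catch_errors first, the proxy app last; proxy-logging conventionally
    # appears both before and after the auth/feature middlewares
    pipeline = catch_errors gatekeeper healthcheck proxy-logging cache tempauth proxy-logging proxy-server

    [app:proxy-server]
    use = egg:swift#proxy
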