21:01:20 <timburke> #startmeeting swift
21:01:21 <openstack> Meeting started Wed Jul 10 21:01:20 2019 UTC and is due to finish in 60 minutes.  The chair is timburke. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:24 <openstack> The meeting name has been set to 'swift'
21:01:27 <timburke> who's here for the swift meeting?
21:01:30 <mattoliverau> o/
21:01:45 <kota_> hi
21:01:50 <rledisez> hi o/
21:02:42 <tdasilva> hello
21:02:56 <clayg> heyoh!
21:02:58 <mattoliverau> rledisez: you all moved? Welcome back :)
21:03:09 <timburke> i neglected to update the agenda... but it's at
21:03:11 <timburke> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:03:29 <rledisez> mattoliverau: yep :)
21:04:01 <timburke> so first, a warm welcome back to rledisez :-)
21:04:03 <clayg> rledisez: I was just thinking about the classic Canadian children's song "Donkey Riding" - you're all assimilated to Canadian culture now, eh?
21:04:39 <rledisez> clayg: i'm trying to learn *their* french first, i'll check the music culture after ;)
21:04:41 <clayg> apparently i've also been doing some object oriented programming 😬
21:05:13 <mattoliverau> lol
21:05:24 <timburke> glad to have you back rledisez -- hopefully i'll even get us merging code to losf again ;-)
21:05:26 <clayg> yeah, mattoliverau knows about "their English" plenty well
21:05:46 <timburke> speaking of... updates!
21:05:51 <mattoliverau> lol, still learning my american english tho :P
21:06:04 <timburke> #topic py3
21:06:43 <timburke> all of the patches to get us to a voting py2-tests-against-py3-services job are approved!
21:06:45 <mattoliverau> rledisez: welcome to a commonwealth country (like Australia). :P
21:06:54 <mattoliverau> \o/
21:07:13 <timburke> now it's just a matter of getting them through the gate ;-)
21:07:16 <kota_> nice
21:07:24 <mattoliverau> and then a release!
21:07:53 <timburke> once those are in, i'll rebase the authors/changelog patch and update it for whatever else has landed
21:08:15 <timburke> #link https://review.opendev.org/#/c/668990/
21:08:16 <patchbot> patch 668990 - swift - Authors/changelog for 2.22.0 - 1 patch set
21:08:50 <timburke> if you know of anything else that should absolutely be in this release, let me know soon
21:09:05 <timburke> though worst case, we can always have more releases :)
21:09:28 <kota_> :/
21:09:52 <timburke> after that... i saw zaitcev's been starting to review the func test changes
21:09:58 <timburke> thank you zaitcev!
21:10:21 <timburke> that's about it for py3
21:10:28 <timburke> #topic lots of small files
21:10:42 <clayg> tdasilva: I think we can get the cache-shard-ranges patch in, yeah?
21:11:01 <clayg> timburke: what about the reconciler+sharding bug?  worth holding the release?
21:11:07 <tdasilva> clayg: i think so, i left a couple of comments for timburke but they are mostly nits
21:11:31 <timburke> clayg, i'm not sure. maybe?
21:12:03 <rledisez> about losf, alecuyer told me about this patch to add unit tests to a core component of losf (the file-like interface)
21:12:06 <rledisez> #link https://review.opendev.org/#/c/666378/
21:12:07 <patchbot> patch 666378 - swift (feature/losf) - Add tests for vfile.py - 4 patch sets
21:12:20 <timburke> yes! and i think i found the source of the gate trouble on feature/losf
21:12:23 <timburke> #link https://review.opendev.org/#/c/670139/
21:12:23 <patchbot> patch 670139 - swift (feature/losf) - Back out urllib3 requirement - 1 patch set
21:12:43 <kota_> oic, I should go look at the patch.
21:12:52 <kota_> nice work timburke
21:13:50 <timburke> i think those were the main points of progress there. assuming that my urllib3 change is sufficient to get the gate happy again, i'll get to work on getting a fresh merge from master
21:14:08 <kota_> i'll add +A when the gate passes.
21:14:14 <timburke> which will also involve
21:14:16 <timburke> #link https://review.opendev.org/#/c/664424/
21:14:17 <patchbot> patch 664424 - swift (feature/losf) - Get new unit tests running under py3 - 2 patch sets
21:14:35 <kota_> don't we have to blacklist any version?
21:15:19 <timburke> kota_, i would've thought that requests should do that, since they've got the dep. maybe we need to blacklist some version(s) of requests, though?
21:15:20 <kota_> e.g. != 1.25.x
21:15:55 <timburke> since we don't actually use urllib3 directly, i'd rather not add it to our requirements if we don't have to...
21:16:15 <timburke> in fact, i'd kind of like to get rid of requests. we only use it for s3token iirc
21:16:18 <kota_> oic. the dependency should be resolved by requests. makes sense.
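For context on the dependency question above: pip requirement specifiers can exclude individual releases, so if a broken urllib3 version ever had to be ruled out directly, a requirements.txt entry could look like the sketch below (version numbers purely illustrative). As discussed, though, since swift only pulls urllib3 in via requests, the cleaner fix is to drop the direct requirement and let requests pin its own dependency.

    # hypothetical requirements.txt sketch; versions are illustrative only
    urllib3!=1.25.0,!=1.25.1
    # or, if the exclusion belongs on the consumer instead:
    requests!=2.22.0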
21:17:07 <timburke> does anybody have anything else on losf?
21:18:27 <timburke> #topic sharding
21:19:34 <timburke> i looked at the doc -- seems promising! and i think i can see how at least some parts of this could be broken out so we could have small, digestible pieces to review
21:20:09 <timburke> i know i'm super excited about the idea of fully-automated sharding
21:20:15 <mattoliverau> I saw that, thanks Tim
21:20:37 <timburke> but... we might have a few bugs we want to squash before turning it on
21:20:55 <mattoliverau> I still want to write a probe test for the potential edge case, but haven't done it yet.
21:21:05 <mattoliverau> yeah
21:21:13 <timburke> in particular, i'm thinking about https://bugs.launchpad.net/swift/+bug/1836082
21:21:14 <openstack> Launchpad bug 1836082 in OpenStack Object Storage (swift) "Reconciler-enqueuing needs to be shard-aware" [High,Confirmed]
21:21:23 <timburke> and to a lesser degree, https://bugs.launchpad.net/swift/+bug/1834097
21:21:23 <clayg> such a great bug
21:21:24 <openstack> Launchpad bug 1834097 in OpenStack Object Storage (swift) "container-sharder has no latch when reporting stats" [Undecided,New]
21:21:36 <clayg> oh yeah... classic
21:22:00 <timburke> but of course, the level of automation would be configurable, so those shouldn't really be thought of as blockers to mattoliverau's wonderful work
21:22:25 <timburke> just wanted to mention them since we're talking about sharding anyway
21:22:50 <mattoliverau> fair enough. Yeah, I want to be sure we're ready when we recommend/support auto sharding
21:23:05 <mattoliverau> but I want to get there, so I'm getting the ball rolling ;)
21:23:36 <timburke> absolutely
21:24:10 <timburke> i think that's it for updates
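For readers unfamiliar with the reconciler queue behind bug 1836082: misplaced-object work items are recorded as ordinary objects under a hidden account, and the reconciler walks that listing. The sketch below shows the naming scheme approximately; the helper function is illustrative, not Swift's actual code, and details may differ.

    # Rough sketch of a .misplaced_objects queue entry (illustrative):
    # entries live in the .misplaced_objects account, in containers bucketed
    # by hour, with object names carrying the policy index the object was
    # found in plus the full path of the misplaced object.
    def misplaced_objects_entry(policy_index, account, container, obj, timestamp):
        bucket = str(int(float(timestamp)) // 3600 * 3600)  # hour bucket
        name = '%d:/%s/%s/%s' % (policy_index, account, container, obj)
        return '.misplaced_objects', bucket, name

Per the bug title, the enqueuing side of this queue needs to become shard-aware, presumably because object rows can now live in shard databases rather than the root container database.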
21:24:18 <timburke> #topic shanghai
21:24:33 <kota_> shanghai ¥o/
21:24:55 <timburke> the PTG organizers are starting to ask about how many people to expect for swift
21:25:25 <diablo_rojo_phon> Yes please :)
21:25:32 <clayg> where in the world is tdasilva come ptg time?
21:25:46 <clayg> I wanna go!  when is it...
21:25:48 <timburke> we've got some time before we need to give them an answer, but i thought i should start prodding people now to figure out whether they're going ;-)
21:25:55 <diablo_rojo_phon> By the start of August ideally.
21:26:23 <timburke> summit's nov. 4-6; ptg's nov. 6-8
21:26:39 <tdasilva> ¯\_(ツ)_/¯
21:26:44 <timburke> and if you're like me, you'll need a visa to go
21:26:58 <timburke> #link http://www.china-embassy.org/eng/visas/hrsq/
21:27:01 <clayg> timburke: a passport isn't good enough!?
21:27:08 <rledisez> timburke: maybe an etherpad as usual? it's not confirmed yet but alecuyer and I should be present
21:27:09 <mattoliverau> yeah, I think I'll need a visa too.
21:27:12 <timburke> #link https://openstackfoundation.formstack.com/forms/visa_form_shanghai_summit
21:27:19 <timburke> will probably both be useful
21:27:19 <kota_> i'm planning to get there. i've got approval, but only as a verbal promise so far
21:27:27 <mattoliverau> still waiting to hear back from work on who's going.
21:27:29 <timburke> kota_, \o/
21:27:53 <timburke> i'm planning on going too
21:27:57 <diablo_rojo_phon> I highly recommend using a visa service, especially for those who don't live close to a Chinese embassy.
21:28:03 <kota_> oh, a visa? I think the Japanese passport is the strongest in the world.
21:28:24 <timburke> rledisez, yeah, i'll make an etherpad by next week
21:28:31 <kota_> nice timburke. I'm looking forward to seeing your first project update
21:28:45 <mattoliverau> +1 in chinese :P
21:28:50 <kota_> lol
21:29:33 <kota_> one thing, did anyone buy a ticket for the Summit+PTG?
21:29:47 <timburke> not yet. i should
21:29:53 <kota_> I'm confused whether to get VIP or non-VIP for myself.
21:30:11 <timburke> #link https://app.eventxtra.link/registrations/6640a923-98d7-44c7-a623-1e2c9132b402?locale=en/?aff=Shanghaih
21:30:19 <kota_> i.e., whether it includes the reception or not.
21:30:35 <tdasilva> i wonder how easy it is (visa wise) to bring spouse + family
21:30:42 <diablo_rojo_phon> kota_: to my understanding VIP just includes a VIP party? I'm told it's a common thing for events in China.
21:30:46 <kota_> either is fine with me so far. I just want the same ticket as the other contributors.
21:31:05 <rledisez> should we expect a voucher this time for the summit+ptg ticket?
21:31:26 <kota_> rledisez: every kind of ticket includes summit+ptg, to my understanding.
21:32:04 <diablo_rojo_phon> rledisez: you mean like a discount?
21:32:20 <kota_> oh
21:32:20 <timburke> extra 200 for a party?? whoa
21:33:12 <rledisez> I mean, before buying the summit+ptg ticket, should I wait for a voucher/discount? (we always had one previously if we attended the PTG or were an ATC, …)
21:33:15 <diablo_rojo_phon> timburke: yeah I guess this is a normal thing? Other conferences held in China have done the same.
21:33:34 <timburke> oh, but yeah -- i expect there will be a discount code again. so it's a good thing i haven't done anything yet :-)
21:33:46 <clayg> procrastinate ftw
21:34:04 <timburke> diablo_rojo_phon, i believe you, just surprising to me :-)
21:34:08 <diablo_rojo_phon> rledisez: there will likely be discounts for contributors, but the details of how much haven't been decided yet. The plan is to have that by the end of August I think? Maybe the start? I can't remember.
21:34:26 <diablo_rojo_phon> timburke: it surprised me too if that's any consolation.
21:34:38 <rledisez> diablo_rojo_phon: ok, thx, i'll wait a bit before buying then
21:34:41 <kota_> diablo_rojo_phon: good info. I'll wait for the details for now.
21:34:45 <timburke> diablo_rojo_phon, thanks! you've saved me from trying to dig through my email mid-meeting
21:35:20 <timburke> that's all i really wanted to bring up
21:35:22 <diablo_rojo_phon> timburke: one of my ping words came up and so I started paying attention and figured I'd spread the knowledge since I'm here :)
21:35:29 <timburke> #topic open discussion
21:35:40 <diablo_rojo_phon> also... Storyboard? :)
21:36:17 <timburke> seems goodish? can't wait for the performance optimizations :)
21:36:40 <timburke> what other topics should we be discussing?
21:36:49 <diablo_rojo_phon> When might you be interested in migrating? :)
21:37:30 <timburke> diablo_rojo_phon, ok, maybe i meant "CAN wait for the performance optimizations" :P
21:38:23 <kota_> one thing: I'll be joining an OpenStack event in Japan, where I'll be a mentor for the OpenStack Upstream Institute. If anyone has good low-hanging fruit for newbies, please let me know anytime.
21:38:34 <diablo_rojo_phon> Lol. Fair. I'll come prod y'all when we make some changes.
21:39:00 <timburke> oh, cool! that's great kota_
21:39:13 <timburke> https://bugs.launchpad.net/swift/+bug/1835324 might be a good one
21:39:14 <openstack> Launchpad bug 1835324 in OpenStack Object Storage (swift) "Internal client's delete_object always logs 499 for 404s" [Low,Confirmed]
21:39:39 <timburke> we should probably try to curate https://bugs.launchpad.net/swift/+bugs?field.tag=low-hanging-fruit a bit
21:39:57 <kota_> looks good: a confirmed but lower-priority bug. thanks timburke
21:40:13 <kota_> okay.
21:40:28 <mattoliverau> kota_: nice!
21:41:37 <timburke> oh, i've got a question for people: how do you usually deploy the reconciler? is it running on every node? just one node? some subset of nodes?
21:42:08 <timburke> iirc we do it on every container node... and it turns out that it can be kinda terrible that way
21:42:52 <timburke> see also: https://bugs.launchpad.net/swift/+bug/1836111
21:42:53 <openstack> Launchpad bug 1836111 in OpenStack Object Storage (swift) "container-reconciler needs to scale better" [Undecided,New]
21:42:53 <kota_> iirc, one node. or no reconciler if the customer is using only one policy.
21:43:04 <rledisez> timburke: we run it on every container node
21:43:23 <timburke> which clayg ('cause he's a smart guy) has a patch for at https://review.opendev.org/#/c/103779/
21:43:24 <patchbot> patch 103779 - swift - Add support for multiple container-reconciler - 9 patch sets
21:43:49 <clayg> that was forever ago, i *used* to be smart
21:44:03 <tdasilva> 2014??
21:44:23 <clayg> basically right after we finished storage policies
21:44:33 <timburke> just a heads-up, as much as anything
21:44:47 <clayg> somehow i've never seen much work in the .misplaced_objects queue 🤷‍♂️
21:44:54 <clayg> it's possible I've not spent much time looking for it
21:45:11 <clayg> I have a story to add more monitoring to that queue - but we don't really keep tabs on it ATM
21:45:43 <zaitcev> I run reconciler on every node, which is probably wasteful and bad but my cluster is too small to tell.
21:46:13 <zaitcev> Especially in view of that patch set.
21:46:18 <clayg> ok, so it sounds like everyone is already running it the way we expected - and we just need to go ahead and add the process/processes options?
21:46:36 <clayg> or maybe we should try to unify it with the expirer re-work and merge all that stuff!
21:46:51 <zaitcev> Kazuhiro's legacy, huh
21:46:52 <clayg> move the reconciler config into the container-server config - the whole nine
21:47:07 <rledisez> clayg: +1
21:47:22 <clayg> something to think about 🤔
21:48:08 <tdasilva> probably makes sense to move to the new model
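On the process/processes options just mentioned: other Swift daemons split a shared queue across workers by hashing work items modulo a process count; whether patch 103779 does exactly this is an assumption, but the pattern is roughly:

    # Minimal sketch of processes/process work division (modeled on the
    # object-expirer's options; illustrative, not the patch's actual code).
    # Run N daemons with processes=N and distinct process values 0..N-1;
    # each instance handles only the queue containers that hash to it.
    import hashlib

    def assigned_to_me(container_name, process, processes):
        if processes < 2:
            return True  # single worker owns everything
        digest = hashlib.md5(container_name.encode('utf8')).hexdigest()
        return int(digest, 16) % processes == process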
21:50:38 <timburke> anything else to bring up?
21:50:45 <clayg> rledisez: oh, i was thinking about the layout for the queues we were talking about - how to divide it up ring/part/job/obj where the containers were <ring-part> and man... I think that's a pretty solid layout
21:50:53 <clayg> I think that's what we agreed on 🤔
21:51:26 <clayg> Anyway, the reason I like it is because processing *really* starts to look like "walking the partition space", but using container listings instead of filesystem listdirs!
21:52:19 <rledisez> clayg: yep, I thought of it also and I like it. I started to work on it but had to stop to move. i'll continue next week
21:53:18 <timburke> i kinda wonder if it'll need a new partition-power-like thing though -- something to make that probe test time less terrible
21:53:19 <clayg> oh, it's .task-<ring>-<type> accounts and <partition> containers - even better!
21:54:10 <clayg> @timburke it's a pretty simple optimization to do an account-level listing and only list part containers that are assigned to you (and also exist)
21:54:32 <rledisez> and the object names start with the timestamp of the scheduled task, so we can just iterate the container listing. no more need for time buckets and stuff… we have sharding!
21:54:53 <clayg> the problem is it's an optimization that only makes sense in DEV - in prod you'll have the parts - so maybe we could do "if you get a 404 trying to list a container, go ahead and do an account listing"
21:56:59 <clayg> @rledisez yeah, like if you have 10T objects in a 2^20 part-power ring 🤔
21:57:27 <clayg> we were recently contemplating our first part power increase - which will have the same problem in the queue as it does on disk...
21:58:33 <clayg> maybe we could encode the epoch into the queue or something, so we can distinguish "this object isn't in this part anymore" from "this object isn't in this cluster anymore"
21:58:35 <rledisez> hum… moving entries between containers… (i don't like that)
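Pulling together the layout sketched in this thread: one hidden account per ring and task type, one container per ring partition, and object names led by the scheduled-task timestamp so a plain container listing walks tasks in execution order. The names below come from the conversation, not from merged code, so treat this as a sketch of the proposal:

    # Sketch of the proposed task-queue naming (illustrative only)
    def task_queue_entry(ring_name, task_type, partition,
                         timestamp, account, container, obj):
        # e.g. '.task-object-expiration' (hypothetical type name)
        q_account = '.task-%s-%s' % (ring_name, task_type)
        q_container = str(partition)  # one queue container per ring partition
        # timestamp prefix keeps the listing in scheduled order
        q_object = '%s-%s/%s/%s' % (timestamp, account, container, obj)
        return q_account, q_container, q_object

With sharding handling large containers (as rledisez notes above), each per-partition container can grow without needing extra time-bucket subdivision.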
21:58:47 <timburke> all right, we're coming up on time. i propose we move the task queue discussion to #openstack-swift :-)
21:58:55 <clayg> task queue!
21:59:17 <timburke> thank you all for coming, and thank you for working on swift!
21:59:25 <timburke> #endmeeting