*** gyee has quit IRC | 00:26 | |
*** takamatsu has quit IRC | 00:45 | |
*** takamatsu has joined #openstack-swift | 00:51 | |
*** rcernin has quit IRC | 03:25 | |
*** rcernin_ has joined #openstack-swift | 03:26 | |
*** dsariel has quit IRC | 03:35 | |
*** dsariel has joined #openstack-swift | 03:35 | |
*** psachin has joined #openstack-swift | 03:57 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-swift | 04:33 | |
*** m75abrams has joined #openstack-swift | 05:24 | |
*** dosaboy has quit IRC | 08:17 | |
*** dosaboy has joined #openstack-swift | 08:17 | |
*** tdasilva has quit IRC | 08:33 | |
*** tdasilva has joined #openstack-swift | 08:35 | |
*** ChanServ sets mode: +v tdasilva | 08:35 | |
*** adriant has quit IRC | 08:48 | |
*** adriant has joined #openstack-swift | 08:48 | |
*** rcernin_ has quit IRC | 09:39 | |
*** baojg has joined #openstack-swift | 09:53 | |
*** m75abrams has quit IRC | 14:49 | |
*** tkajinam has quit IRC | 14:57 | |
*** gyee has joined #openstack-swift | 15:26 | |
ormandj | timburke: fwiw, we're going to work on doing the servers per port thing, we'll retest after the implementation and let you know how it goes | 15:38 |
ormandj | we'll use '2' as you suggested | 15:39 |
ormandj | in 748043 you may want to consider giving an example with a setting of '2' if that's a best practice starting point | 15:42 |
timburke | i think it can vary pretty widely based on how many cores and how many disks per chassis | 16:00 |
ormandj | any good rule of thumb on calculating that, then, based on N drives and Y cores or something? | 16:00 |
timburke | not sure offhand -- i feel like clayg probably has a better idea than me, though | 16:01 |
clayg | I like 4 with 10-20 disk chassis, and only turn it down when there's more like 25+ disks. Didn't you say 50+ disk per chassis!? I think 2 is a great place to start. | 16:03 |
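A hedged sketch of what that starting point could look like in the object server config (the values are only the rules of thumb from the discussion above, not official guidance):

```ini
# object-server.conf -- illustrative fragment, not a complete config.
# With servers_per_port set, Swift runs that many object-server
# processes per unique ring port. Per the discussion above: ~4 suits
# 10-20 disk chassis, turned down toward 2 for dense (25+ or 50+
# disk) chassis.
[DEFAULT]
servers_per_port = 2
```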
ormandj | yeah, 56 disks in these | 16:06 |
ormandj | + 4 for account/container | 16:06 |
ormandj | (ssd of course) | 16:06 |
ormandj | if that's a good starting point then my suggestion would be to add that into the docs on deployment, it would be great to help people get off on at least a better foot, if not perfect | 16:07 |
clayg | ormandj: makes sense to me; have you ever done a patch in gerrit before? | 16:09 |
ormandj | nope | 16:09 |
*** dosaboy has quit IRC | 16:19 | |
*** dosaboy has joined #openstack-swift | 16:19 | |
*** psachin has quit IRC | 16:21 | |
*** dsariel has quit IRC | 16:52 | |
openstackgerrit | Merged openstack/swift master: docs: Clean up some formatting around using servers_per_port https://review.opendev.org/748043 | 17:03 |
*** jv_ has quit IRC | 17:13 | |
openstackgerrit | Clay Gerrard proposed openstack/swift master: swift-init: Don't expose misleading commands https://review.opendev.org/748281 | 17:27 |
*** cwright has joined #openstack-swift | 17:55 | |
timburke | clayg, thinking about ^^^ -- how many workers did you have? if it's more than one, shouldn't we still have a listener available for new connections? | 17:58 |
clayg | from p 747332 > If I had *more* than one worker killing just one wouldn't interrupt connections at all! | 17:59 |
patchbot | https://review.opendev.org/#/c/747332/ - swift - wsgi: Allow workers to gracefully exit - 3 patch sets | 17:59 |
clayg | but even if there's 3-4 workers - closing ALL of the sockets is NOT seamless? | 18:00 |
clayg | maybe in practice with enough workers-per-port it's unlikely they'd all have someone holding a socket open and the window before the parent respawns one is small? 🤔 | 18:01 |
clayg | what's the justification for exposing the new command? | 18:01 |
timburke | mm -- i get you now. would it be better to space the socket-closes out a little? or i just need to do that second attempt i mentioned, and see about keeping listen sockets around as long as possible ;-) | 18:02 |
timburke | i was mainly just dribbing from the seamless work. you're almost certainly right to drop it as a command | 18:02 |
timburke | *cribbing | 18:03 |
clayg | timburke: I think it's great except when you kill ALL the children | 18:10 |
clayg | i probably don't have a clear picture of the future work you're imagining | 18:10 |
clayg | maybe drop the SIGUSR1 handling and just make them do HUP - the "graceful" is very clearly what's happening from the worker perspective | 18:10 |
timburke | so i had USR1 do the same thing because *otherwise*, the default is to just terminate -- and it seemed likely that someone who got used to sending USR1 to parents might try it on a child expecting similarly graceful behavior | 18:15 |
timburke | maybe it'd be better to just ignore USR1? at least, until we find a better use for it in children? i don't think there's any way the child can achieve similar semantics to what USR1 means in the parent | 18:16 |
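The USR1 point above can be sketched in plain Python (a toy illustration of the signal semantics being discussed, not Swift's actual wsgi code): the default disposition for SIGUSR1 terminates the process, so a child must either install its own handler or explicitly ignore the signal.

```python
import os
import signal

events = []

def graceful(signum, frame):
    # Stand-in for a parent's "graceful shutdown" handler; the real
    # parent would stop accepting connections and drain workers.
    events.append('graceful')

# Parent-style behavior: a real handler runs on USR1.
signal.signal(signal.SIGUSR1, graceful)
os.kill(os.getpid(), signal.SIGUSR1)

# Child-style behavior (one option from the discussion): ignore USR1
# outright, rather than inherit the default terminate disposition.
signal.signal(signal.SIGUSR1, signal.SIG_IGN)
os.kill(os.getpid(), signal.SIGUSR1)  # silently ignored now

print(events)  # ['graceful']
```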
clayg | oh, yeah i assumed it'd default to no op instead of term - my bad | 18:25 |
clayg | I think getting used to usr1 to the parent is reasonable - leave it how you wrote it! | 18:25 |
*** baojg has quit IRC | 18:52 | |
*** baojg has joined #openstack-swift | 18:53 | |
openstackgerrit | Tim Burke proposed openstack/swift master: Suppress CryptographyDeprecationWarnings https://review.opendev.org/748297 | 20:10 |
timburke | the more i think about it, the more i want our seamless reload to pass listen socket fds over domain sockets... and have the graceful shutdown just turn off wsgi.is_accepting... | 20:15 |
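Passing listen socket fds over a Unix domain socket, as floated above, can be sketched with the stdlib (assumes Python >= 3.9 for `socket.send_fds`/`socket.recv_fds`; here a socketpair stands in for the parent/child channel):

```python
import socket

# A listening socket whose fd we want to hand off, the way a
# reloading parent could pass its listen sockets to a new child.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(('127.0.0.1', 0))
listener.listen(5)

parent_side, child_side = socket.socketpair(socket.AF_UNIX,
                                            socket.SOCK_STREAM)

# Send the listen fd with a one-byte payload (sendmsg needs data).
socket.send_fds(parent_side, [b'l'], [listener.fileno()])

# Receiving side: rebuild a socket object around the inherited fd.
msg, fds, flags, addr = socket.recv_fds(child_side, 1, 1)
inherited = socket.socket(fileno=fds[0])

# Both objects now refer to the same listening endpoint.
print(inherited.getsockname() == listener.getsockname())  # True
```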
timburke | hmm... i wonder if eventlet's monkey-patching negates the "The socket is assumed to be in blocking mode." note at https://docs.python.org/2/library/socket.html#socket.fromfd ... | 20:51 |
timburke | almost meeting time! i expect it'll be a short one | 20:51 |
clayg | that early in the process life-cycle it probably doesn't matter if it's blocking or not (I'm assuming it means the *returned* socket may not inherit all the parent socket options; like blocking) | 20:55 |
clayg | yay meeting! | 20:55 |
kota_ | good morning | 20:55 |
mattoliverau | Morning | 20:58 |
ormandj | timburke: another few quick ones: 1) when doing servers_per_port of 2, are we basically setting each device in the ring file to use unique port, and in effect servers per port is running a few server processes for each of these, allowing requests to come in even when one is blocked (downside of higher servers per port is more memory) 2) what do bind_ip and bind_port look like in the default section | 21:17 |
ormandj | of config? | 21:18 |
timburke | yeah, typically you'd have each disk get its own port in the ring. if the nodes were more cpu-constrained, i'd maybe say group a few disks together -- it wouldn't be ideal, but at least it limits the blast-radius a bit compared to having them all on a single port | 21:23 |
ormandj | servers per port working the way i think then? and the bind_ip and bind_port? | 21:24 |
timburke | iirc, bind_ip and bind_port are ignored when using servers_per_port -- all the info we need is coming from the ring (and checking what ips the node has available) | 21:25 |
ormandj | ok, so we can just leave them at the 'primary' ip and port or w/e like we have them now, and should be gtg | 21:25 |
ormandj | (we'll test in dev, obviously) | 21:25 |
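Putting the bind_ip/bind_port answer above into a config fragment (an illustration of the discussion, not authoritative docs):

```ini
# object-server.conf -- illustrative fragment. With servers_per_port
# set, the listening ports come from the unique per-device ports in
# the object ring, so per the discussion above bind_port here is
# effectively ignored; bind_ip/bind_port can stay on their existing
# "primary" values.
[DEFAULT]
bind_ip = 0.0.0.0
bind_port = 6200
servers_per_port = 2
```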
mattoliverau | I'll loop back to shrinking this week, and see if I can progress the audit to something hacky but mostly working as a POC. | 21:37 |
*** neonpastor has joined #openstack-swift | 21:52 | |
*** rcernin has joined #openstack-swift | 22:40 | |
*** tkajinam has joined #openstack-swift | 23:01 | |
*** baojg has quit IRC | 23:21 | |
*** baojg has joined #openstack-swift | 23:21 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!