*** gyee has quit IRC | 01:23 | |
*** manuvakery has joined #openstack-swift | 02:34 | |
*** rcernin has quit IRC | 02:41 | |
*** rcernin has joined #openstack-swift | 02:43 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-swift | 04:33 | |
openstackgerrit | Merged openstack/python-swiftclient master: Clean up some warnings https://review.opendev.org/736426 | 05:23 |
*** m75abrams has joined #openstack-swift | 05:31 | |
*** rpittau|afk is now known as rpittau | 06:34 | |
*** ccamacho has joined #openstack-swift | 07:02 | |
*** manuvakery has quit IRC | 07:44 | |
*** rcernin has quit IRC | 08:18 | |
alecuyer | clayg: just saw your message, reading up now | 09:07 |
openstackgerrit | Hervé Beraud proposed openstack/swift master: Remove elementtree deprecated methods https://review.opendev.org/737468 | 09:32 |
*** rpittau is now known as rpittau|bbl | 10:12 | |
*** tkajinam has quit IRC | 10:17 | |
openstackgerrit | Hervé Beraud proposed openstack/swift master: Remove lxml deprecated methods https://review.opendev.org/737468 | 11:32 |
*** rpittau|bbl is now known as rpittau | 12:14 | |
*** zaitcev has quit IRC | 12:42 | |
*** zaitcev has joined #openstack-swift | 12:44 | |
*** ChanServ sets mode: +v zaitcev | 12:44 | |
*** zaitcev has quit IRC | 13:01 | |
*** zaitcev has joined #openstack-swift | 13:02 | |
*** ChanServ sets mode: +v zaitcev | 13:02 | |
*** m75abrams has quit IRC | 13:40 | |
openstackgerrit | David Sariel proposed openstack/swift master: Let developers/operators add watchers to object audit (simplified) https://review.opendev.org/706653 | 14:08 |
*** m75abrams has joined #openstack-swift | 15:08 | |
*** renich has joined #openstack-swift | 15:33 | |
*** admin0 has joined #openstack-swift | 15:50 | |
admin0 | hi all .. anyone knows about swift implementation in kolla-ansible ? | 15:50 |
*** gyee has joined #openstack-swift | 15:59 | |
*** rpittau is now known as rpittau|afk | 16:00 | |
openstackgerrit | Clay Gerrard proposed openstack/swift master: s3pi: test retry complete/abort https://review.opendev.org/737570 | 16:04 |
*** renich_ has joined #openstack-swift | 16:17 | |
*** renich has quit IRC | 16:17 | |
*** renich_ is now known as renich | 16:18 | |
*** renich has quit IRC | 16:19 | |
timburke | good morning | 16:19 |
*** m75abrams has quit IRC | 16:33 | |
*** josephillips has joined #openstack-swift | 16:58 | |
josephillips | hey guys | 16:58 |
josephillips | 2 questions | 16:58 |
josephillips | does setting the overload factor require performing a rebalance? | 16:59 |
josephillips | and is it required to move the .gz file to the storage nodes? | 16:59 |
clayg | josephillips: overload is *not* required - but if you have unbalanced zones (and fewer zones than replicas) it's highly recommended you run with about 10% overload (trading some dispersion for balance) | 17:15
clayg | it can also be helpful during a rebalance that's adding a new zone to run with some overload 🤔 | 17:15 |
clayg | but either way, overload or no, after you rebalance, YES, you must distribute the .gz ("the ring") to all nodes 👍 | 17:16
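A minimal sketch of the workflow clayg describes, assuming the builders live in /etc/swift and using illustrative storage node hostnames:

    # set ~10% overload, then rebalance and redistribute the ring
    swift-ring-builder /etc/swift/object.builder set_overload 10%
    swift-ring-builder /etc/swift/object.builder rebalance
    for node in storage01 storage02 storage03; do
        scp /etc/swift/object.ring.gz "$node":/etc/swift/object.ring.gz
    done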
josephillips | good | 17:16 |
clayg | timburke: I'm seeing a LOT of HEADs triggered from PUTs - what do you think it would take to beat p 735738 into shape and get it over the finish line? | 17:16 |
patchbot | https://review.opendev.org/#/c/735738/ - swift - s3api: Don't do naive HEAD request for auth - 1 patch set | 17:16 |
josephillips | is there a way to confirm that the .gz file has the overload setting in it? | 17:17
timburke | clayg, i'm less worried about sending the request down the pipeline now -- i think rledisez said he deploys ceilometermiddleware left of s3api, so it won't be a problem. even if someone had it to the right, a request method like TEST should be pretty easy to filter out ;-) | 17:21 |
timburke | if we can get a test to demonstrate the fix, that'd be great, but even without it's pretty easy for me to functionally test and see that things seem better | 17:21 |
timburke | josephillips, i'd consult the builder file to see whether overload is set or not. i usually view the .ring.gz as a pretty opaque artifact and just make sure that the MD5 matches between all nodes and wherever i've got the builder | 17:24
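A hedged sketch of the checks timburke suggests (paths illustrative): the builder summary reports the overload factor, and comparing ring checksums across nodes confirms everyone is serving the same ring:

    # the builder summary includes the overload factor
    swift-ring-builder /etc/swift/object.builder
    # compare this checksum against the copy on every proxy and storage node
    md5sum /etc/swift/object.ring.gz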
josephillips | ok | 17:25 |
josephillips | i ask this because i added the function to a juju charm to set overload | 17:25
josephillips | and i just want to make sure it's moving to the nodes | 17:25
josephillips | thanks | 17:25 |
josephillips | :) | 17:25 |
josephillips | looks like it's working | 17:26
timburke | \o/ | 17:26 |
admin0 | hi .. i deployed swift using kolla-ansible, but it's not working .. so i want to know how to troubleshoot it .. how do i check the contents of the ring to see what disks are included ? | 18:23
timburke | admin0, `swift-ring-builder <builder file>` will tell you the devices in the ring | 18:39 |
timburke | if your builder is missing/lost, you can use `swift-ring-builder <ring.gz file> write_builder` to get a builder based on a ring, but note that there may be information present in the original builder that wouldn't show up in the recovered builder | 18:40 |
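For example (file names assumed to match a stock install under /etc/swift):

    # list the devices, weights and balance recorded in the builder
    swift-ring-builder /etc/swift/object.builder
    # if only the ring survives, recover an approximate builder from it
    swift-ring-builder /etc/swift/object.ring.gz write_builder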
admin0 | timburke, this is how its built/done in kolla-ansible https://docs.openstack.org/kolla-ansible/pike/reference/swift-guide.html | 18:49 |
admin0 | i have the account.ring.gz container.ring.gz and object.ring.gz .. which of these files has info related to the disk layout ? and what command do i use to view them ? | 18:53
DHE | swift-ring-builder filename.ring.gz write_builder # will convert back to a builder file, but with some metadata lost | 19:40 |
DHE | then you can examine the contents using the above methods | 19:40 |
DHE | if you have existing builder files those would be preferred | 19:40 |
timburke | admin0, fwiw, looks like kolla will put the builders somewhere like /etc/kolla/config/swift/object.builder | 19:53 |
timburke | all three rings/builders have information about which disks are used: account for account DBs, container for container DBs, object for object data. | 19:53 |
timburke | these can all use the same disks, or all separate disks, or anywhere in between | 19:54 |
timburke | often, ops will put account and container DBs on SSDs (since they demand more IOPS) and object on a separate set of spinning disks | 19:55 |
timburke | from the looks of those docs, kolla puts three disks per node in each of the rings | 19:56 |
timburke | i *think* you can run something like `docker run --rm -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ $KOLLA_SWIFT_BASE_IMAGE swift-ring-builder /etc/kolla/config/swift/object.builder` to see the device table | 19:57 |
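Putting that together, a hedged sketch for inspecting all three kolla-managed rings (paths follow the kolla docs linked above; $KOLLA_SWIFT_BASE_IMAGE is assumed to be set as in those docs):

    for ring in account container object; do
        docker run --rm -v /etc/kolla/config/swift/:/etc/kolla/config/swift/ \
            $KOLLA_SWIFT_BASE_IMAGE \
            swift-ring-builder /etc/kolla/config/swift/${ring}.builder
    done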
ormandj | how are people keeping replication from slaughtering cpu, while still keeping objects replicated? :) we've got a worker per disk and it's eating cpu at the default 30s interval | 20:21 |
ormandj | (this is train, fwiw) | 20:22 |
admin0 | timburke, that worked .. but i think i made a mistake, as the name looks funny: https://gist.github.com/a1git/9651c807c4061fe81f142554e85a1705 | 20:27
admin0 | i only had one disk .. | 20:28 |
admin0 | next question: how do I map/validate the d{1} device to the actual disks in the server? | 20:28
timburke | ormandj, makes me think of https://review.opendev.org/#/c/715298/ ... if you don't mind hacking up your swift, you could try upping that sleep in swift/common/daemon.py to see if that helps | 20:29 |
patchbot | patch 715298 - swift - Optimize obj replicator/reconstructor healthchecks (MERGED) - 5 patch sets | 20:29 |
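Separately from the healthcheck patch, the replication cadence ormandj mentions is tunable; a sketch of the [object-replicator] section of object-server.conf, assuming the stock option names (values illustrative, not recommendations):

    [object-replicator]
    # seconds between replication passes (default is 30)
    interval = 300
    # replication jobs run in parallel per replicator process (default is 1)
    concurrency = 1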
timburke | admin0, that does indeed look suspicious :-( | 20:33 |
timburke | on each server, i'd run `ls /srv/node/` | 20:33 |
admin0 | nothing is inside /srv/node | 20:33
admin0 | i already checked | 20:33 |
timburke | ...maybe have a look in fstab? is there anything that's *supposed* to get mounted there? | 20:34 |
admin0 | can the name be anything there ? or does that name have to match the name in /srv/node/<name> ? | 20:34
admin0 | so if i redo it and make it d1 (for disk1) , does it have to be mounted as /srv/node/d1 ? | 20:34 |
timburke | that's the way it's usually done. doesn't have to be at /srv/node, though, i suppose -- our saio instructions, for example, make a few "disks" under /srv/node1, /srv/node2, /srv/node3, and /srv/node4 (since we want to simulate multiple servers in a single VM) | 20:36 |
timburke | check the `devices` option in object-server.conf -- if it's not set, it'll use /srv/node | 20:36 |
timburke | the dir name under that point *does* have to match what's in the ring, though | 20:37 |
admin0 | the object-server.conf has devices = /srv/node | 20:37 |
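For reference, a minimal sketch of the [DEFAULT] settings timburke mentions in object-server.conf (mount_check shown for context; values illustrative):

    [DEFAULT]
    # parent directory that holds one mounted filesystem per ring device
    devices = /srv/node
    # skip devices whose directory is not actually a mount point
    mount_check = true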
admin0 | so i need to mount /dev/sdb as /srv/node/d1 to match the builder ? | 20:37 |
zaitcev | Sounds plausible. | 20:39 |
zaitcev | Although it's not obligatory, strictly speaking, I usually do not name devices the same (d1 on all nodes), just for convenience. | 20:41
admin0 | ok | 20:42 |
zaitcev | example of a small cluster: https://pastebin.com/viGsh4zY | 20:43 |
admin0 | zaitcev, how does it appear in /fstab and /srv/node ? | 20:47 |
zaitcev | https://pastebin.com/mA8qSyD0 | 20:49 |
admin0 | got it | 20:49 |
admin0 | thanks | 20:49 |
zaitcev | actually, I think that advice is no longer operational | 20:49 |
zaitcev | Because the cluster was created before Systemd, back when faulty /etc/fstab was a problem. | 20:50 |
zaitcev | admin0: Remember that the mount point itself must be owned by root, but the root of the volume is owned by swift. This way, if something didn't get mounted on boot, you aren't filling up your root filesystem. | 20:52
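A hedged sketch of zaitcev's advice for one device (device and mount point names are illustrative):

    # the mount point dir stays owned by root, so an unmounted disk can't fill the root fs
    mkdir -p /srv/node/d1
    mkfs.xfs /dev/sdb
    mount /dev/sdb /srv/node/d1
    # the root of the mounted volume is what swift should own
    chown swift:swift /srv/node/d1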
*** tkajinam has joined #openstack-swift | 22:53 | |
*** rcernin has joined #openstack-swift | 23:02 | |
*** rcernin has quit IRC | 23:08 | |
*** rcernin has joined #openstack-swift | 23:08 |