Saturday, 2021-02-20

*** redcaptrickster has quit IRC00:01
*** luksky has quit IRC00:03
*** jmasud has quit IRC00:11
*** bbowen_ has joined #openstack00:23
*** bbowen has quit IRC00:25
*** Ra4cal has joined #openstack00:51
*** Ra4cal has quit IRC00:55
*** waxfire7 has joined #openstack01:13
*** waxfire has quit IRC01:13
*** waxfire7 is now known as waxfire01:13
*** arnoldoree has joined #openstack01:15
*** rcernin has joined #openstack01:16
*** __ministry has joined #openstack01:22
*** LowKey has quit IRC01:25
*** rcernin has quit IRC01:27
*** jmasud has joined #openstack01:29
*** jmasud has quit IRC01:40
*** samuelbernardo has joined #openstack02:01
*** rcernin has joined #openstack02:05
*** rcernin has quit IRC02:14
*** LowKey has joined #openstack02:20
*** LowKey has quit IRC02:25
*** Ra4cal has joined #openstack02:34
*** jmasud has joined #openstack02:35
*** Ra4cal has quit IRC02:38
*** jmasud has quit IRC03:06
*** dviroel has quit IRC03:06
*** jmasud has joined #openstack03:31
*** dsneddon has quit IRC03:37
*** dsneddon has joined #openstack03:38
*** waxfire4 has joined #openstack03:41
*** waxfire has quit IRC03:41
*** waxfire4 is now known as waxfire03:41
*** usrGabriel has quit IRC03:46
*** __ministry has quit IRC04:51
*** jmasud has quit IRC05:05
*** Ra4cal has joined #openstack05:11
*** jmasud has joined #openstack05:24
*** Ra4cal has quit IRC05:31
*** Ra4cal has joined #openstack05:31
*** gyee has quit IRC06:31
*** jmasud has quit IRC06:41
*** ddstreet has quit IRC07:02
*** ddstreet has joined #openstack07:02
*** lemko7 has joined #openstack07:30
*** lemko has quit IRC07:30
*** lemko7 is now known as lemko07:30
<factor> Doing an Openstack install; no wonder puppet installs were used. Manual is a high-labor task  07:33
<factor> While the docs are now much better, quick tips will be needed for my own install.  07:33
*** waxfire has quit IRC07:36
*** waxfire has joined #openstack07:36
*** sergiuw has joined #openstack07:41
<dirtwash> Iambchop: so far it seems snapshots don't work, and only since the upgrade from ussuri to victoria... the whole error is: Failed to store image 0cddb159-0e77-4490-8a87-6d5aaca84579 Store Exception RBD incomplete write (Wrote only 8388608 out of 8399166 bytes). Yes, always the same byte number  07:57
<dirtwash> super weird  07:57
<dirtwash> I don't even know where to start debugging tbh  07:58
*** cah_link has joined #openstack07:58
<dirtwash> the behavior is: it creates the image ID, tries to store, fails, and the image disappears due to the failure. I guess that's the default behavior  07:59
*** mmethot has joined #openstack08:00
*** mmethot_ has quit IRC08:01
*** cah_link has quit IRC08:02
*** sergiuw has quit IRC08:06
*** sergiuw has joined #openstack08:06
<factor> I thought they changed snapshot to shelve, which I did not like, somewhere in the course of Rocky  08:15
*** slaweq has joined #openstack08:19
<dirtwash> I'm new to openstack, definitely need some hints on what to check; trying to get better ceph logs but not seeing much there so far  08:20
<dirtwash> it's definitely rbd.pyx throwing the errors  08:20
<factor> Have not worked with ceph or its logs.  08:23
<dirtwash> anything I could check more on the openstack side? probably not if the error is thrown by glance, I guess  08:24
<dirtwash> funny part is... it only affects snapshots  08:24
<dirtwash> if I push a new image or something, writes to ceph work fine  08:24
<factor> I usually worked with OPP: other people's puppet deploys. Now trying to figure out my own.  08:25
<factor> Was not a fan of what they did to snapshots.  08:25
<factor> I am sure lots of changes have been made to that recently. May cause issues  08:26
*** slaweq has quit IRC08:26
<factor> dirtwash, have you looked through the glance logs?  08:27
<jrosser> if you can make this error happen at the CLI and use --debug and put the output at paste.openstack.org..... hard to know where to start otherwise  08:28
<factor> I recall specific names it did not like; that may still be a thing. Special characters outside of letters and numbers.  08:28
<factor> yes, openstack with the --debug option is what I had started using.  08:29
<dirtwash> factor: there's nothing else in the glance logs  08:32
<dirtwash> it just shows the python error  08:32
<dirtwash> one sec, I'll pastebin  08:32
*** slaweq has joined #openstack08:33
<dirtwash> http://paste.openstack.org/show/802855/  08:33
<dirtwash> I don't think it gets more verbose than this  08:33
<factor> okay  08:35
<dirtwash> gotta find out what ceph is saying  08:38
<dirtwash> trying to figure out how to make rbd logs verbose  08:38
<jrosser> you can set debug=true in /etc/glance/glance.conf, should be right at the top  08:42
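[editor's note] For reference, the flag jrosser mentions sits at the top of the [DEFAULT] section; in some deployments (openstack-ansible among them) the file is named glance-api.conf rather than glance.conf, so the exact path is an assumption here:

```ini
# /etc/glance/glance-api.conf (file name/path may vary by deployment)
[DEFAULT]
debug = true
```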
<jrosser> I see you are using openstack-ansible  08:43
*** slaweq has quit IRC08:43
<jrosser> please do join #openstack-ansible - there are plenty of openstack operators hanging out there using this for real  08:43
<jrosser> however it is the weekend, and most folk are around weekdays, EU timezone  08:43
<factor> Also, I don't know the ceph stuff. But this may just be glance limits.  08:45
<factor> That number... seeing that number as max mem for virsh, 8G  08:49
<factor> checking  08:49
<dirtwash> factor: hm?  08:51
<dirtwash> jrosser: yeah I know it's the weekend, was writing here anyway, maybe someone has a decent hint  08:51
<factor> dunno yet  08:51
<dirtwash> incomplete write is weird, nobody ever seems to have had this issue yet, 0 google results  08:51
<factor> 8388608 Bytes (B) = 8 Megabytes (MB)  08:56
<factor> You're hitting an 8G limit  08:56
<factor> I think it should use cache, maybe a cache setup issue  08:56
<dirtwash> I hate container stuff  08:57
<dirtwash> factor: how do you get from 8388608 bytes to some 8G limit?  08:57
<factor> https://www.flightpedia.org/convert/8388608-bytes-to-megabytes.html  08:58
<factor> 1024 bytes equal to 1K etc. etc.  08:58
<factor> Oh sorry, 8MB not G  08:59
<factor> :)  08:59
<factor> But anyway, hitting a limit  08:59
<factor> it failed after the first write  09:00
<factor> glance has an 8MB chunk size.  09:01
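[editor's note] The arithmetic behind this chunk-size observation, as a quick sanity check (nothing glance-specific, just the byte counts from the error message):

```python
# Glance's default rbd_store_chunk_size is 8 (interpreted as MiB).
chunk_size_bytes = 8 * 1024 * 1024
assert chunk_size_bytes == 8388608   # matches "Wrote only 8388608"

# The snapshot image was just over one chunk long:
image_size = 8399166                 # matches "out of 8399166 bytes"
leftover = image_size - chunk_size_bytes
print(leftover)                      # 10558 bytes of a second chunk never made it
```

So the write died exactly at the first chunk boundary, which is why the same byte number appears every time.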
*** dlan has joined #openstack09:01
<dirtwash> yeah... but that doesn't get us anywhere, we already know it's failing upon trying to write :D  09:01
<factor> then it may be a permission issue  09:01
<factor> Look for errors in /var/log  09:02
<dirtwash> no, otherwise nothing would work, it's only snapshots  09:02
<dirtwash> loading images into the rbd backend and running VMs works fine  09:02
<factor> depends, maybe user perms  09:02
<dirtwash> rbd has no special snapshot permissions, there's only read/write  09:02
<dirtwash> either it is allowed to write or not  09:02
<factor> Openstack has a ton of user perms, which is why I am reinstalling over and over tonight to find these issues  09:02
<dirtwash> I assume glance uses the same for all rbd?  09:02
<factor> Glance, ceph, nova all have different perms.  09:03
<dirtwash> and a user permissions issue would not show like this, I hope  09:03
<factor> I was having glance issues as well, everything else seemed to work.  09:05
<factor> but auth for glance did not seem to be Openstack auth either  09:05
<factor> did not seem to be OS auth^  09:05
<dirtwash> does glance have different creds than cinder?  09:06
<factor> hmm  09:06
<factor> I think it does, I don't think I can answer your questions though, just yet  09:06
<factor> I was going to install all the systems with the same auth and password, to see if I ran into the glance write issue again  09:07
<factor> But I do think your issue is a Glance one, not ceph.  09:08
<dirtwash> can I change the glance format from qcow2 to raw in some webui? or only via reconfiguration of the config  09:08
<dirtwash> I'm not the main admin for openstack, I don't deal with it much, I don't know it well  09:09
<factor> horizon will let you do that  09:11
<factor> I normally use qcow2, but I recall it had an option for format in the gui  09:11
<factor> In horizon it's individual saves, not always.  09:13
<jrosser> ceph keys should look something like this https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/all/ceph.yml#L52-L58  09:13
<factor> Possible values:  09:14
<factor> # raw - RAW disk format  09:14
<factor> # qcow2 - KVM default disk format  09:14
<factor> # vmdk - VMware default disk format  09:14
<factor> # vdi - VirtualBox default disk format  09:14
<dirtwash> jrosser: yeah, can't be a permission issue if glance is also used for images, given that works  09:14
<factor> Determine the snapshot image format  09:14
<dirtwash> hm, I think the format is irrelevant too  09:15
<dirtwash> kinda running out of ideas  09:15
<factor> /etc/nova/nova.conf  #snapshot_image_format=<None>  09:15
<dirtwash> it's in nova.conf?  09:15
<factor> thought the same thing  09:16
<factor> Had to double check  09:16
<factor> [libvirt] section  09:16
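[editor's note] Collecting factor's fragments into one place: the option lives in nova.conf under the [libvirt] section, with the commented-out default and the possible values quoted above:

```ini
# /etc/nova/nova.conf
[libvirt]
# Determines the snapshot image format. Possible values:
#   raw   - RAW disk format
#   qcow2 - KVM default disk format
#   vmdk  - VMware default disk format
#   vdi   - VirtualBox default disk format
#snapshot_image_format = <None>
```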
<factor> Anyway, shutting down my test vms, reinstalling with all the same passwords to see if I can clear up my glance issues.  09:17
<factor> You could check the nova logs..  09:18
<factor> That is how it passes the image, it seems  09:18
<dirtwash> what is cinder-backup used for, normally?  09:18
<dirtwash> I have a suspicion  09:19
*** BakaKuna has joined #openstack09:19
<dirtwash> if cinder-backup is used for backups and snapshots, maybe it fails because it's missing  09:20
*** BakaKuna has quit IRC09:22
<factor> Not for sure, but I had never had to use cinder. Although some storage may be needed.  09:22
<factor> Can recall what storage module I used.  09:22
<factor> can't^  09:22
*** luksky has joined #openstack09:22
<factor> So you may be right.  09:23
<factor> I think I had used a loopback device in the past. So some storage is required.  09:24
<dirtwash> I don't know if that backup service has anything to do with snapshots  09:24
<factor> I think you would only need some storage device for openstack.  09:25
<factor> I had never used, "in the past", any backup service.  09:26
<jrosser> dirtwash: without steps to reproduce this is hard; glance is the image store, snapshots would normally be taken in cinder  09:38
<dirtwash> yeah, so is it related to the cinder-backup service?  09:39
<jrosser> no, that's specifically for backups; you can take snapshots in the block storage without it  09:39
<jrosser> you might use cinder-backup to put backups onto an alternate backend; we have nfs-backed cinder-backup and ceph for block devices, for example  09:40
<jrosser> but you can of course make cinder-backup use ceph too if you want, it's kind of down to what the use case is  09:40
<dirtwash> ok, so it's unrelated then  09:41
<jrosser> if you've got a set of steps to reproduce, that would be really helpful  09:41
<jrosser> you can also raise a bug at https://bugs.launchpad.net/openstack-ansible  09:41
<dirtwash> I'm not even sure it's an openstack error yet  09:41
<dirtwash> or bug  09:42
<dirtwash> trying to figure out why it fails with snapshots  09:42
<jrosser> it feels somewhat in the middle ground between openstack and ceph tbh  09:42
<dirtwash> but not with image uploads  09:42
<jrosser> that's why I'm unsure what's happening here  09:42
<dirtwash> if image writes work... then ceph shouldn't be the issue  09:42
<jrosser> snapshots would be in the cinder pool  09:42
<dirtwash> well, images too, no?  09:42
<dirtwash> the cinder user writes to ceph  09:42
<jrosser> to create an image from a snapshot (and therefore involving glance) is a different thing altogether  09:42
<dirtwash> but the error is from rbd.py saying the write was incomplete  09:43
<dirtwash> so not sure it's glance related  09:43
<jrosser> well indeed, that's what I'm wanting to get at with how to reproduce this  09:44
<jrosser> anyway, I must do $weekend  09:44
<dirtwash> where are snapshots stored? in the same pool, I guess  09:44
<dirtwash> volumes..  09:44
<jrosser> raising a bug with whatever info you have would be really helpful  09:44
<jrosser> yes, it should be  09:44
<jrosser> though depending how things are set up, the volume for cinder could be a snapshot of the original glance image  09:45
<jrosser> for all the copy-on-write goodness  09:45
<dirtwash> might not be a bug? maybe it's a config issue; from my experience reported bugs don't get immediate attention anyway :D I'm trying to fix this asap  09:46
*** jonaspaulo has joined #openstack09:51
<dirtwash> I'm confused why I see rbd.py in the glance logs, but cinder is doing the image writing?  09:54
<dirtwash> openstack is confusing  09:54
*** jonaspaulo has quit IRC09:57
*** PabloMartinez has quit IRC10:10
*** pcaruana has quit IRC10:27
*** waxfire3 has joined #openstack10:34
*** jangutter has joined #openstack10:35
*** waxfire has quit IRC10:35
*** waxfire3 is now known as waxfire10:35
*** jangutter_ has quit IRC10:37
*** jangutter_ has joined #openstack10:40
*** jangutter has quit IRC10:43
*** wallacer has quit IRC10:56
*** packetchaos has joined #openstack11:00
*** wallacer has joined #openstack11:02
*** sergiuw has quit IRC11:09
*** sergiuw has joined #openstack11:09
*** LowKey has joined #openstack11:28
*** mataeragon has joined #openstack11:30
*** mataeragon has quit IRC11:42
*** slaweq has joined #openstack11:48
*** slaweq has quit IRC11:57
*** __ministry has joined #openstack11:57
<Iambchop> dirtwash: what do you have in glance-api.conf for rbd_store_chunk_size? that value is in MB, so "8" would match up with the 8388608; not sure where the 8399166 is coming from.  11:59
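[editor's note] The option Iambchop is asking about sits in glance-api.conf's [glance_store] section; the default shown below is what matches the truncated write (8 MiB = 8388608 bytes):

```ini
# /etc/glance/glance-api.conf
[glance_store]
# Size, in megabytes, of the chunks glance writes to RBD; default is 8.
rbd_store_chunk_size = 8
```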
*** LowKey has quit IRC12:00
*** LowKey has joined #openstack12:00
*** __ministry has quit IRC12:01
<Iambchop> hmm... "Improved performance of rbd store chunk upload" https://docs.openstack.org/releasenotes/glance/victoria.html  12:03
<dirtwash> Iambchop: 8  12:05
<dirtwash> I debugged more and nova reports broken pipe errors, but not sure if that's caused by the other error or is just the reason itself  12:06
<dirtwash> lot of red herrings  12:06
<dirtwash> Iambchop: what was that value before?  12:11
<Iambchop> I don't know; the example config in the docs is 8  12:13
<dirtwash> I can try to find the code changes they did there  12:13
<dirtwash> sadly nothing is referenced..  12:13
<dirtwash> 'improved performance', could be anything  12:13
<Iambchop> is rbd_thin_provisioning on? that was added in victoria, default is off  12:20
<dirtwash> how do I check?  12:20
*** packetchaos has quit IRC12:26
<Iambchop> that would be glance-api.conf, I think  12:27
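[editor's note] For completeness, the Victoria-era option Iambchop mentions would look like this in glance-api.conf (default off, as noted above):

```ini
# /etc/glance/glance-api.conf
[glance_store]
# Added in Victoria; defaults to false.
rbd_thin_provisioning = false
```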
<Iambchop> I think this is the perf change in vic: https://review.opendev.org/plugins/gitiles/openstack/glance_store/+/c43f19e8456b9e20f03709773fb2ffdb94807a0a  12:27
<dirtwash> but that seems unrelated anyway, the thin prov stuff  12:28
*** bbowen_ has quit IRC12:29
<dirtwash> the rbd resize, hm  12:30
<dirtwash> kinda out of ideas what to look at anymore  12:40
*** waxfire9 has joined #openstack12:52
*** waxfire has quit IRC12:53
*** waxfire9 is now known as waxfire12:53
<dirtwash> Unable to establish connection to http://172.29.236.9:9292/v2/images/4ebae431-6af0-45b7-9c70-e26ad39a35e0/file: [Errno 32] Broken pipe  13:13
<dirtwash> question is whether that's because the rbd write fails, or the rbd write fails because the pipe breaks..  13:13
*** waxfire has quit IRC13:35
*** waxfire has joined #openstack13:36
*** sergiuw has quit IRC13:44
<Iambchop> write exception.  13:44
<dirtwash> Iambchop: yeah, but we don't know why  13:45
<Iambchop> I would think the broken pipe is triggered by the write exception.  13:45
<dirtwash> I think so too  13:46
<Iambchop> was that pasted log snippet running with debug set true in glance-api?  13:51
<dirtwash> no  13:52
<dirtwash> do I have to rerun openstack-ansible to apply that change?  13:52
<dirtwash> or is there a quick and easy way  13:53
<dirtwash> I hate containers  13:54
<dirtwash> everything nowadays is needlessly complicated and hidden  13:54
<Iambchop> you might be able to directly edit and restart the service; we're not using ansible so I don't know  13:56
<dirtwash> restart it how  13:56
<dirtwash> it doesn't exist as systemd  13:56
<dirtwash> glance is a container  13:56
<dirtwash> I hate container stuff :D  13:59
<dirtwash> Iambchop: I'll test it now with debug on  14:16
<dirtwash> doubt I'll see more  14:17
<dirtwash> probably unrelated, but I see a lot of: glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices.  14:22
<dirtwash> seems harmless, ok  14:23
<dirtwash> Iambchop: http://paste.openstack.org/show/802857/  14:23
<dirtwash> that's with debug on  14:23
*** PabloMartinez has joined #openstack14:24
<dirtwash> why is it creating the image with size 0  14:26
<dirtwash> and then: resizing image to 8192.0 KiB  14:26
*** LowKey has quit IRC14:28
*** luksky has quit IRC14:29
<Iambchop> the resize 8192 is the new opt; starting with size 0 indicates unknown total size, so it will allocate as it goes  14:29
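[editor's note] A rough sketch of the allocate-as-it-goes behaviour Iambchop describes, using the numbers from this log; the fixed 8192 KiB growth step and the loop structure are illustrative assumptions, not glance_store's actual code:

```python
def chunked_store(data: bytes, chunk=8 * 1024 * 1024, resize_step=8192 * 1024):
    """Simulate storing an image of unknown total size into an RBD image
    that starts at size 0 and is grown ahead of each chunk write."""
    allocated = 0                      # current size of the RBD image
    written = 0                        # bytes written so far
    while written < len(data):
        piece = data[written:written + chunk]
        # grow the image before writing past its current end
        # (the log's "resizing image to 8192.0 KiB" step)
        while allocated < written + len(piece):
            allocated += resize_step
        written += len(piece)
    return written, allocated

# An 8399166-byte snapshot takes one full 8 MiB chunk plus 10558 bytes,
# so it needs two resize steps:
written, allocated = chunked_store(b"\x00" * 8399166)
print(written, allocated)   # 8399166 16777216
```

In this model the reported failure ("Wrote only 8388608 out of 8399166 bytes") corresponds to the second, short write never completing after the first full chunk succeeded.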
<dirtwash> ok: "creation of 0 size image in Glance which is a link to real volume created before. That Glance image contains no data (just an address of data) so it is normal for the image to have 0 size"  14:30
<dirtwash> hm, no idea  14:30
*** bbowen has joined #openstack14:31
*** waxfire has quit IRC14:31
*** waxfire has joined #openstack14:31
*** PabloMartinez has quit IRC14:36
*** PabloMartinez has joined #openstack14:37
<dirtwash> I wonder if it can be a ceph issue, but writing images to ceph works, so the rbd connection/writes must be working  14:44
<dirtwash> and again: this only fails since the upgrade to victoria  14:44
<dirtwash> Iambchop: any other ideas?  15:00
*** genekuo has joined #openstack15:02
<jrosser> dirtwash: equate the openstack-ansible lxc containers to hosts  15:05
<jrosser> there's no dockerism-type stuff at all  15:05
<jrosser> all the services run under systemd, just like if it were a bare metal server or a vm  15:06
<jrosser> "glance is a container" - no, glance-api is running as a systemd service in an lxc container  15:06
<jrosser> this is very different to how application containers work  15:06
*** genekuo has quit IRC15:07
<jrosser> root@infra1-glance-container-6ea6d9e9:~# systemctl status glance-api  15:08
<jrosser> ● glance-api.service - glance-api service  15:08
<dirtwash> jrosser: ah ok, learned something  15:11
<dirtwash> still gotta figure out my issue  15:12
<jrosser> you can just dive in and fiddle with /etc/glance/glance.conf and restart with systemd  15:12
<dirtwash> yeah, I did enable debug  15:12
<dirtwash> sadly no new info  15:12
<jrosser> ultimately rbd.pyx is part of the ceph python bindings, so it's an error that's surfaced up from librbd  15:13
<dirtwash> yeah  15:16
<dirtwash> just not sure how to debug this further tbh  15:16
<dirtwash> I guess I'd have to somehow see what happens exactly on the ceph side, but that's not so easy either  15:17
*** redrobot has quit IRC15:21
*** lpetrut has joined #openstack15:24
*** lpetrut has quit IRC15:25
*** redrobot has joined #openstack15:26
*** benfelin has joined #openstack15:33
*** luksky has joined #openstack15:54
*** slaweq has joined #openstack16:15
*** jmasud has joined #openstack16:29
*** skyraven has joined #openstack16:36
*** waxfire0 has joined #openstack16:46
*** waxfire has quit IRC16:46
*** waxfire0 is now known as waxfire16:46
*** slaweq has quit IRC16:47
*** benfelin has quit IRC17:19
*** slaweq has joined #openstack17:28
*** usrGabriel has joined #openstack17:30
*** slaweq has quit IRC17:34
*** BrownBear has quit IRC17:46
*** slaweq has joined #openstack17:51
*** slaweq has quit IRC17:57
*** waxfire7 has joined #openstack18:18
*** waxfire has quit IRC18:20
*** waxfire7 is now known as waxfire18:20
*** jmasud has quit IRC18:25
*** lemko has quit IRC18:48
*** lemko has joined #openstack18:49
*** jangutter_ has quit IRC19:02
*** jangutter has joined #openstack19:02
*** usrGabriel has quit IRC19:11
*** jmasud has joined #openstack19:42
*** lemko5 has joined #openstack19:42
*** lemko has quit IRC19:42
*** lemko5 is now known as lemko19:42
*** benfelin has joined #openstack19:53
*** jmasud has quit IRC20:17
*** jmasud has joined #openstack20:19
*** benfelin has quit IRC20:23
*** benfelin has joined #openstack20:30
*** benfelin has quit IRC20:34
*** shokohsc has quit IRC20:36
*** benfelin has joined #openstack20:36
*** luksky has quit IRC20:42
*** shokohsc has joined #openstack20:43
*** luksky has joined #openstack21:01
*** sergiuw has joined #openstack21:04
*** jmasud has quit IRC21:09
*** jmasud has joined #openstack21:34
*** Ra4cal has quit IRC21:43
*** jmasud has quit IRC21:53
*** sergiuw has quit IRC22:27
*** Ra4cal has joined #openstack22:34
*** jmasud has joined #openstack22:35
*** jmasud has quit IRC22:45
*** cah_link has joined #openstack22:47
*** luksky has quit IRC22:54
*** cah_link has quit IRC23:02
*** jmasud has joined #openstack23:05
*** Ra4cal has quit IRC23:07
*** jmasud has quit IRC23:31

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!