*** redcaptrickster has quit IRC | 00:01 | |
*** luksky has quit IRC | 00:03 | |
*** jmasud has quit IRC | 00:11 | |
*** bbowen_ has joined #openstack | 00:23 | |
*** bbowen has quit IRC | 00:25 | |
*** Ra4cal has joined #openstack | 00:51 | |
*** Ra4cal has quit IRC | 00:55 | |
*** waxfire7 has joined #openstack | 01:13 | |
*** waxfire has quit IRC | 01:13 | |
*** waxfire7 is now known as waxfire | 01:13 | |
*** arnoldoree has joined #openstack | 01:15 | |
*** rcernin has joined #openstack | 01:16 | |
*** __ministry has joined #openstack | 01:22 | |
*** LowKey has quit IRC | 01:25 | |
*** rcernin has quit IRC | 01:27 | |
*** jmasud has joined #openstack | 01:29 | |
*** jmasud has quit IRC | 01:40 | |
*** samuelbernardo has joined #openstack | 02:01 | |
*** rcernin has joined #openstack | 02:05 | |
*** rcernin has quit IRC | 02:14 | |
*** LowKey has joined #openstack | 02:20 | |
*** LowKey has quit IRC | 02:25 | |
*** Ra4cal has joined #openstack | 02:34 | |
*** jmasud has joined #openstack | 02:35 | |
*** Ra4cal has quit IRC | 02:38 | |
*** jmasud has quit IRC | 03:06 | |
*** dviroel has quit IRC | 03:06 | |
*** jmasud has joined #openstack | 03:31 | |
*** dsneddon has quit IRC | 03:37 | |
*** dsneddon has joined #openstack | 03:38 | |
*** waxfire4 has joined #openstack | 03:41 | |
*** waxfire has quit IRC | 03:41 | |
*** waxfire4 is now known as waxfire | 03:41 | |
*** usrGabriel has quit IRC | 03:46 | |
*** __ministry has quit IRC | 04:51 | |
*** jmasud has quit IRC | 05:05 | |
*** Ra4cal has joined #openstack | 05:11 | |
*** jmasud has joined #openstack | 05:24 | |
*** Ra4cal has quit IRC | 05:31 | |
*** Ra4cal has joined #openstack | 05:31 | |
*** gyee has quit IRC | 06:31 | |
*** jmasud has quit IRC | 06:41 | |
*** ddstreet has quit IRC | 07:02 | |
*** ddstreet has joined #openstack | 07:02 | |
*** lemko7 has joined #openstack | 07:30 | |
*** lemko has quit IRC | 07:30 | |
*** lemko7 is now known as lemko | 07:30 | |
factor | Doing an Openstack install; no wonder the puppet install was used. Manual is a high-labor task | 07:33 |
factor | While the docs are now much better, quick tips will be needed for my own install. | 07:33 |
*** waxfire has quit IRC | 07:36 | |
*** waxfire has joined #openstack | 07:36 | |
*** sergiuw has joined #openstack | 07:41 | |
dirtwash | Iambchop: so far it seems snapshots don't work, and only since the upgrade from ussuri to victoria... the whole error is: Failed to store image 0cddb159-0e77-4490-8a87-6d5aaca84579 Store Exception RBD incomplete write (Wrote only 8388608 out of 8399166 bytes). Yes, always the same byte number | 07:57 |
dirtwash | super weird | 07:57 |
dirtwash | I don't even know where to start debugging tbh | 07:58 |
*** cah_link has joined #openstack | 07:58 | |
dirtwash | the behavior is: it creates the image ID, tries to store, fails, and the image disappears due to the failure. I guess that's the default behavior | 07:59 |
*** mmethot has joined #openstack | 08:00 | |
*** mmethot_ has quit IRC | 08:01 | |
*** cah_link has quit IRC | 08:02 | |
*** sergiuw has quit IRC | 08:06 | |
*** sergiuw has joined #openstack | 08:06 | |
factor | I thought they changed snapshot to shelve somewhere in the course of Rocky, which I did not like | 08:15 |
*** slaweq has joined #openstack | 08:19 | |
dirtwash | I'm new to openstack, defo need some hints on what to check, trying to get better ceph logs but not seeing much there so far | 08:20 |
dirtwash | it's defo rbd.pyx throwing the errors | 08:20 |
factor | Have not worked with ceph or the logs. | 08:23 |
dirtwash | anything more I could check on openstack? probably not if the error is thrown by glance, I guess | 08:24 |
dirtwash | funny part is..only affects snapshots | 08:24 |
dirtwash | if I push a new image or something, writes to ceph work fine | 08:24 |
factor | I usually worked with OPP (other people's puppet deploys). Now trying to figure out my own. | 08:25 |
factor | Was not a fan of what they did to snapshots. | 08:25 |
factor | I am sure lots of changes have been done to that recently. May cause issues | 08:26 |
*** slaweq has quit IRC | 08:26 | |
factor | dirtwash, have you looked through the glance logs? | 08:27 |
jrosser | if you can make this error happen at the cli and use --debug and put the output at paste.openstack.org..... hard to know where to start otherwise | 08:28 |
factor | I recall specific names it did not like; that may still be a thing. Special characters outside of letters and numbers. | 08:28 |
factor | yes, I had started using openstack with the --debug option. | 08:29 |
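For anyone trying the same thing, the invocation jrosser is suggesting would look roughly like this; the server and snapshot names are placeholders, and `openstack server image create` is the client call that triggers an instance snapshot upload to glance:

    # reproduce the failure from the CLI with full client-side debug output,
    # then share the log on paste.openstack.org
    openstack --debug server image create --name snapshot-test <server-name-or-id> 2>&1 | tee snapshot-debug.log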
dirtwash | factor: there's nothing else in the glance logs | 08:32 |
dirtwash | it just shows the python error | 08:32 |
dirtwash | one sec, I'll pastebin | 08:32 |
*** slaweq has joined #openstack | 08:33 | |
dirtwash | http://paste.openstack.org/show/802855/ | 08:33 |
dirtwash | I don't think it gets more verbose than this | 08:33 |
factor | okay | 08:35 |
dirtwash | gotta find out what ceph is saying | 08:38 |
dirtwash | trying to figure out how to make the rbd logs more verbose | 08:38 |
jrosser | you can set debug=true in /etc/glance/glance.conf, should be right at the top | 08:42 |
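A minimal sketch of the setting jrosser is describing, assuming the stock glance API config layout (the exact filename, glance.conf vs glance-api.conf, depends on the deployment):

    [DEFAULT]
    # verbose debug logging for the glance API service
    debug = True

The glance API service then needs a restart for the change to take effect.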
jrosser | i see you are using openstack-ansible | 08:43 |
*** slaweq has quit IRC | 08:43 | |
jrosser | please do join #openstack-ansible - there are plenty of openstack operators hanging out there using this for real | 08:43 |
jrosser | however it is the weekend and most folk are around weekdays, EU timezone | 08:43 |
factor | Also, I don't know the ceph stuff. But this may just be glance limits. | 08:45 |
factor | That number... seeing that number as max mem for virsh, 8G | 08:49 |
factor | checking | 08:49 |
dirtwash | factor: hm? | 08:51 |
dirtwash | jrosser: yea I know it's the weekend, was writing here anyway, maybe someone has a decent hint | 08:51 |
factor | dunno yet | 08:51 |
dirtwash | incomplete write is weird, nobody ever seems to have had this issue, 0 google results | 08:51 |
factor | 8388608 bytes (B) = 8 megabytes (MB) | 08:56 |
factor | You're hitting an 8G limit | 08:56 |
factor | I think it should use cache, maybe a cache setup issue | 08:56 |
dirtwash | I hate container stuff | 08:57 |
dirtwash | factor: how do u get from 8388608 bytes to some 8G limit? | 08:57 |
factor | https://www.flightpedia.org/convert/8388608-bytes-to-megabytes.html | 08:58 |
factor | 1024 bytes equal 1K, 1024K equal 1M, etc. etc. | 08:58 |
factor | Oh sorry, 8MB not 8G | 08:59 |
factor | :) | 08:59 |
factor | But anyway hitting a limit | 08:59 |
factor | it failed after the first write | 09:00 |
factor | glance has an 8MB chunk size. | 09:01 |
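To make the arithmetic concrete: the "wrote only" figure in the error is exactly one 8 MiB glance RBD chunk, while the attempted write was slightly larger:

    $ echo $((8 * 1024 * 1024))
    8388608
    $ echo $((8399166 - 8388608))
    10558

In other words, the write stopped exactly at one default chunk boundary, roughly 10 KiB short of what was requested.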
*** dlan has joined #openstack | 09:01 | |
dirtwash | yea... but that doesn't get us anywhere, we already know it's failing upon trying to write :D | 09:01 |
factor | then it may be a permission issue | 09:01 |
factor | Look for errors in /var/log | 09:02 |
dirtwash | no, otherwise nothing would work, it's only snapshots | 09:02 |
dirtwash | loading images into rbd backend and running VMs works fine | 09:02 |
factor | depends, maybe user perms | 09:02 |
dirtwash | rbd has no special snapshot permissions, there's only read/write | 09:02 |
dirtwash | either it is allowed to write or not | 09:02 |
factor | Openstack has a ton of user perms, which is why I am reinstalling over and over tonight to find these issues | 09:02 |
dirtwash | I assume glance uses the same for all rbd? | 09:02 |
factor | Glance, ceph, nova all have different perms. | 09:03 |
dirtwash | and a user permissions issue would not show like this, I hope | 09:03 |
factor | I was having glance issues as well, everything else seemed to work. | 09:05 |
factor | but auth for glance did not seem to be Openstack auth either | 09:05 |
dirtwash | does glance have different creds than cinder? | 09:06 |
factor | humm | 09:06 |
factor | I think it does; I don't think I can answer your questions though, just yet | 09:06 |
factor | I was going to install all the systems with the same auth and password, to see if I ran into the glance write issue again | 09:07 |
factor | But I do think your issue is a Glance one, not ceph. | 09:08 |
dirtwash | can I change the glance format from qcow2 to raw in some webui? or only via reconfiguration of the config | 09:08 |
dirtwash | I'm not the main admin for openstack, I don't deal with it much, I don't know it well | 09:09 |
factor | horizon will let you do that, | 09:11 |
factor | I normally use qcow2, but I recall it had an option for format in the gui | 09:11 |
factor | In horizon it's individual saves, not always. | 09:13 |
jrosser | ceph keys should look something like this https://github.com/openstack/openstack-ansible/blob/master/inventory/group_vars/all/ceph.yml#L52-L58 | 09:13 |
factor | Possible values: | 09:14 |
factor | # raw - RAW disk format | 09:14 |
factor | # qcow2 - KVM default disk format | 09:14 |
factor | # vmdk - VMWare default disk format | 09:14 |
factor | # vdi - VirtualBox default disk format | 09:14 |
dirtwash | jrosser: yea, can't be a permission issue if glance is also used for images, given that works | 09:14 |
factor | Determine the snapshot image format | 09:14 |
dirtwash | hm I think the format is irrelevant too | 09:15 |
dirtwash | kinda running out of ideas | 09:15 |
factor | /etc/nova/nova.conf #snapshot_image_format=<None> | 09:15 |
dirtwash | it's in nova.conf? | 09:15 |
factor | thought the same thing | 09:16 |
factor | Had to double check | 09:16 |
factor | [libvirt] section | 09:16 |
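For reference, the option factor is quoting lives in the [libvirt] section of /etc/nova/nova.conf and is commented out by default; a minimal sketch, with the possible values taken from the config comment quoted above:

    [libvirt]
    # Determine the snapshot image format when sending to the image service.
    # Possible values: raw, qcow2, vmdk, vdi; if unset, nova keeps the same
    # format as the source image.
    #snapshot_image_format = <None>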
factor | Anyway, shutting down my test vms, reinstalling with all the same password to see if I can clear up my glance issues. | 09:17 |
factor | You could check the nova logs.. | 09:18 |
factor | That is how it passes the image, it seems | 09:18 |
dirtwash | what is cinder-backup used for normally? | 09:18 |
dirtwash | i have a suspicion | 09:19 |
*** BakaKuna has joined #openstack | 09:19 | |
dirtwash | if cinder-backup is used for backups and snapshots, maybe it fails because it's missing | 09:20 |
*** BakaKuna has quit IRC | 09:22 | |
factor | Not sure, but I have never had to use cinder. Although some storage may be needed. | 09:22 |
factor | Can't recall what storage module I used. | 09:22 |
*** luksky has joined #openstack | 09:22 | |
factor | So you may be right. | 09:23 |
factor | I think I had used a loopback device in the past. So some storage is required. | 09:24 |
dirtwash | I don't know if that backup service has anything to do with snapshots | 09:24 |
factor | I think you would only need some storage device for openstack. | 09:25 |
factor | I had never used any backup service in the past. | 09:26 |
jrosser | dirtwash: without steps to reproduce this is hard; glance is the image store, snapshots would normally be taken in cinder | 09:38 |
dirtwash | yea, so is it related to cinder-backup service? | 09:39 |
jrosser | no, thats specifically for backups, you can take snapshots in the block storage without that | 09:39 |
jrosser | you might use cinder-backup to put backups onto an alternate backend; we have nfs-backed cinder-backup and ceph for block devices, for example | 09:40 |
jrosser | but you can of course make cinder-backup use ceph too if you want, it's kind of down to what the use case is | 09:40 |
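As an aside, pointing cinder-backup at a given backend is just a driver choice in cinder.conf; a rough sketch of the ceph-backed case jrosser mentions (the user and pool names here are illustrative, not taken from this deployment):

    [DEFAULT]
    # send cinder-backup data to Ceph/RBD
    backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup   # illustrative name
    backup_ceph_pool = backups         # illustrative name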
dirtwash | ok so its unrelated then | 09:41 |
jrosser | if you've got a set of steps to reproduce that would be really helpful | 09:41 |
jrosser | you can also raise a bug at https://bugs.launchpad.net/openstack-ansible | 09:41 |
dirtwash | I'm not even sure it's an openstack error yet | 09:41 |
dirtwash | or bug | 09:42 |
dirtwash | trying to figure out why it fails with snapshots | 09:42 |
jrosser | it feels somewhat in the middle ground between openstack and ceph tbh | 09:42 |
dirtwash | but not with image uploads | 09:42 |
jrosser | that's why I'm unsure what's happening here | 09:42 |
dirtwash | if image writes work... then ceph shouldn't be the issue | 09:42 |
jrosser | snapshot would be in the cinder pool | 09:42 |
dirtwash | well images too no? | 09:42 |
dirtwash | cinder user writes to ceph | 09:42 |
jrosser | to create an image from a snapshot (and therefore involving glance) is a different thing altogether | 09:42 |
dirtwash | but the error is from rbd.py saying the write was incomplete | 09:43 |
dirtwash | so not sure it's glance related | 09:43 |
jrosser | well indeed, that's what I'm wanting to get at with how to reproduce this | 09:44 |
jrosser | anyway, i must do $weekend | 09:44 |
dirtwash | where are snapshots stored, in same pool I guess | 09:44 |
dirtwash | volumes.. | 09:44 |
jrosser | raising a bug with whatever info you have would be really helpful | 09:44 |
jrosser | yes it should be | 09:44 |
jrosser | though depending on how things are set up, the volume for cinder could be a snapshot of the original glance image | 09:45 |
jrosser | for all the copy-on-write goodness | 09:45 |
dirtwash | might not be a bug? maybe it's a config issue; from my experience reported bugs don't get immediate attention anyway :D I'm trying to fix this asap | 09:46 |
*** jonaspaulo has joined #openstack | 09:51 | |
dirtwash | I'm confused why I see rbd.py in the glance logs but cinder is doing the image writing? | 09:54 |
dirtwash | openstack is confusing | 09:54 |
*** jonaspaulo has quit IRC | 09:57 | |
*** PabloMartinez has quit IRC | 10:10 | |
*** pcaruana has quit IRC | 10:27 | |
*** waxfire3 has joined #openstack | 10:34 | |
*** jangutter has joined #openstack | 10:35 | |
*** waxfire has quit IRC | 10:35 | |
*** waxfire3 is now known as waxfire | 10:35 | |
*** jangutter_ has quit IRC | 10:37 | |
*** jangutter_ has joined #openstack | 10:40 | |
*** jangutter has quit IRC | 10:43 | |
*** wallacer has quit IRC | 10:56 | |
*** packetchaos has joined #openstack | 11:00 | |
*** wallacer has joined #openstack | 11:02 | |
*** sergiuw has quit IRC | 11:09 | |
*** sergiuw has joined #openstack | 11:09 | |
*** LowKey has joined #openstack | 11:28 | |
*** mataeragon has joined #openstack | 11:30 | |
*** mataeragon has quit IRC | 11:42 | |
*** slaweq has joined #openstack | 11:48 | |
*** slaweq has quit IRC | 11:57 | |
*** __ministry has joined #openstack | 11:57 | |
Iambchop | dirtwash: what do you have in glance-api.conf for rbd_store_chunk_size? that value is in MB, so "8" would match up with the 8388608, not sure where the 8399166 is coming from. | 11:59 |
*** LowKey has quit IRC | 12:00 | |
*** LowKey has joined #openstack | 12:00 | |
*** __ministry has quit IRC | 12:01 | |
Iambchop | hmm... "Improved performance of rbd store chunk upload" https://docs.openstack.org/releasenotes/glance/victoria.html | 12:03 |
dirtwash | Iambchop: 8 | 12:05 |
dirtwash | I debugged more and nova reports broken pipe errors, but not sure if it's caused by the other error or is the cause itself | 12:06 |
dirtwash | lots of red herrings | 12:06 |
dirtwash | Iambchop: what was that value before? | 12:11 |
Iambchop | I don't know; the example config in the docs is 8 | 12:13 |
dirtwash | I can try to find the code changes they did there | 12:13 |
dirtwash | sadly nothing is referenced... | 12:13 |
dirtwash | 'improved performance', could be anything | 12:13 |
Iambchop | is rbd_thin_provisioning on? that was added in victoria, default is off | 12:20 |
dirtwash | how do I check? | 12:20 |
*** packetchaos has quit IRC | 12:26 | |
Iambchop | that would be glance-api.conf I think | 12:27 |
Iambchop | I think this is the perf change in vic: https://review.opendev.org/plugins/gitiles/openstack/glance_store/+/c43f19e8456b9e20f03709773fb2ffdb94807a0a | 12:27 |
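Both options Iambchop is asking about sit in the [glance_store] section of glance-api.conf; a minimal sketch with the values discussed here (rbd_store_chunk_size is in MiB, and rbd_thin_provisioning is new in Victoria and off by default):

    [glance_store]
    # write size per RBD request, in MiB; 8 MiB = 8388608 bytes
    rbd_store_chunk_size = 8
    # added in Victoria, defaults to false
    rbd_thin_provisioning = false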
dirtwash | but that seems unrelated anyway, the thin prov stuff | 12:28 |
*** bbowen_ has quit IRC | 12:29 | |
dirtwash | the rbd resize hm | 12:30 |
dirtwash | kinda out of ideas what to look at anymore | 12:40 |
*** waxfire9 has joined #openstack | 12:52 | |
*** waxfire has quit IRC | 12:53 | |
*** waxfire9 is now known as waxfire | 12:53 | |
dirtwash | Unable to establish connection to http://172.29.236.9:9292/v2/images/4ebae431-6af0-45b7-9c70-e26ad39a35e0/file: [Errno 32] Broken pipe | 13:13 |
dirtwash | question is if that is because the rbd write fails or the rbd write fails because the pipe breaks.. | 13:13 |
*** waxfire has quit IRC | 13:35 | |
*** waxfire has joined #openstack | 13:36 | |
*** sergiuw has quit IRC | 13:44 | |
Iambchop | write exception. | 13:44 |
dirtwash | Iambchop: yea but we don't know why | 13:45 |
Iambchop | I would think the broken pipe is triggered by the write exception. | 13:45 |
dirtwash | I think so too | 13:46 |
Iambchop | was that pasted log snippet running with debug set true in glance-api? | 13:51 |
dirtwash | no | 13:52 |
dirtwash | do I have to rerun openstack-ansible to apply that change? | 13:52 |
dirtwash | or is there a quick and easy way | 13:53 |
dirtwash | I hate containers | 13:54 |
dirtwash | everything nowadays is needlessly complicated and hidden | 13:54 |
Iambchop | you might be able to directly edit and restart the service; we're not using ansible so don't know | 13:56 |
dirtwash | restart it how? | 13:56 |
dirtwash | it doesn't exist as a systemd service | 13:56 |
dirtwash | glance is a container | 13:56 |
dirtwash | I hate container stuff :D | 13:59 |
dirtwash | Iambchop: I'll test it now with debug on | 14:16 |
dirtwash | doubt I'll see more | 14:17 |
dirtwash | probably unrelated but I see a lot of: glance.api.middleware.version_negotiation [-] Unknown version. Returning version choices. | 14:22 |
dirtwash | seems harmless ok | 14:23 |
dirtwash | Iambchop: http://paste.openstack.org/show/802857/ | 14:23 |
dirtwash | thats with debug on | 14:23 |
*** PabloMartinez has joined #openstack | 14:24 | |
dirtwash | why is it creating the image with size 0 | 14:26 |
dirtwash | and then: resizing image to 8192.0 KiB | 14:26 |
*** LowKey has quit IRC | 14:28 | |
*** luksky has quit IRC | 14:29 | |
Iambchop | the resize 8192 is the new opt; starting with size 0 indicates unknown total size so it will allocate as it goes | 14:29 |
dirtwash | ok "creation of 0 size image in Glance which is a link to real volume created before. That Glance image contain no data (just an address of data) so it is normal for image to have 0 size" | 14:30 |
dirtwash | hm no idea | 14:30 |
*** bbowen has joined #openstack | 14:31 | |
*** waxfire has quit IRC | 14:31 | |
*** waxfire has joined #openstack | 14:31 | |
*** PabloMartinez has quit IRC | 14:36 | |
*** PabloMartinez has joined #openstack | 14:37 | |
dirtwash | I wonder if it can be a ceph issue, but writing images to ceph works, so the rbd connection/writes must be working | 14:44 |
dirtwash | and again: this only fails since the upgrade to victoria | 14:44 |
dirtwash | Iambchop: any other ideas? | 15:00 |
*** genekuo has joined #openstack | 15:02 | |
jrosser | dirtwash: equate the openstack-ansible lxc containers to hosts | 15:05 |
jrosser | there's no dockerism-type stuff at all | 15:05 |
jrosser | all the services run under systemd just like if it were a bare metal server or a vm | 15:06 |
jrosser | "glance is a container" no, glance-api is running as a systemd service in an lxc container | 15:06 |
jrosser | this is very different to how application containers work | 15:06 |
*** genekuo has quit IRC | 15:07 | |
jrosser | root@infra1-glance-container-6ea6d9e9:~# systemctl status glance-api | 15:08 |
jrosser | ● glance-api.service - glance-api service | 15:08 |
dirtwash | jrosser: ah ok, learned something | 15:11 |
dirtwash | still gotta figure out my issue | 15:12 |
jrosser | you can just dive in and fiddle with /etc/glance/glance.conf and restart with systemd | 15:12 |
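Put together, the workflow jrosser is describing would look roughly like this from an openstack-ansible controller; the container name is a placeholder (on a real deployment, lxc-ls shows the actual one), and the config filename may differ per deployment:

    # attach to the glance LXC container, tweak the config, restart via systemd
    lxc-attach -n <glance-container-name>
    vi /etc/glance/glance-api.conf     # or glance.conf, depending on the deployment
    systemctl restart glance-api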
dirtwash | yea I did enable debug | 15:12 |
dirtwash | sadly no more new info | 15:12 |
jrosser | ultimately rbd.pyx is part of the ceph python bindings, so it's an error that's surfaced up from librbd | 15:13 |
dirtwash | yea | 15:16 |
dirtwash | just not sure how to debug this further tbh | 15:16 |
dirtwash | I guess I'd have to somehow see what happens exactly on the ceph side but that's not so easy either | 15:17 |
*** redrobot has quit IRC | 15:21 | |
*** lpetrut has joined #openstack | 15:24 | |
*** lpetrut has quit IRC | 15:25 | |
*** redrobot has joined #openstack | 15:26 | |
*** benfelin has joined #openstack | 15:33 | |
*** luksky has joined #openstack | 15:54 | |
*** slaweq has joined #openstack | 16:15 | |
*** jmasud has joined #openstack | 16:29 | |
*** skyraven has joined #openstack | 16:36 | |
*** waxfire0 has joined #openstack | 16:46 | |
*** waxfire has quit IRC | 16:46 | |
*** waxfire0 is now known as waxfire | 16:46 | |
*** slaweq has quit IRC | 16:47 | |
*** benfelin has quit IRC | 17:19 | |
*** slaweq has joined #openstack | 17:28 | |
*** usrGabriel has joined #openstack | 17:30 | |
*** slaweq has quit IRC | 17:34 | |
*** BrownBear has quit IRC | 17:46 | |
*** slaweq has joined #openstack | 17:51 | |
*** slaweq has quit IRC | 17:57 | |
*** waxfire7 has joined #openstack | 18:18 | |
*** waxfire has quit IRC | 18:20 | |
*** waxfire7 is now known as waxfire | 18:20 | |
*** jmasud has quit IRC | 18:25 | |
*** lemko has quit IRC | 18:48 | |
*** lemko has joined #openstack | 18:49 | |
*** jangutter_ has quit IRC | 19:02 | |
*** jangutter has joined #openstack | 19:02 | |
*** usrGabriel has quit IRC | 19:11 | |
*** jmasud has joined #openstack | 19:42 | |
*** lemko5 has joined #openstack | 19:42 | |
*** lemko has quit IRC | 19:42 | |
*** lemko5 is now known as lemko | 19:42 | |
*** benfelin has joined #openstack | 19:53 | |
*** jmasud has quit IRC | 20:17 | |
*** jmasud has joined #openstack | 20:19 | |
*** benfelin has quit IRC | 20:23 | |
*** benfelin has joined #openstack | 20:30 | |
*** benfelin has quit IRC | 20:34 | |
*** shokohsc has quit IRC | 20:36 | |
*** benfelin has joined #openstack | 20:36 | |
*** luksky has quit IRC | 20:42 | |
*** shokohsc has joined #openstack | 20:43 | |
*** luksky has joined #openstack | 21:01 | |
*** sergiuw has joined #openstack | 21:04 | |
*** jmasud has quit IRC | 21:09 | |
*** jmasud has joined #openstack | 21:34 | |
*** Ra4cal has quit IRC | 21:43 | |
*** jmasud has quit IRC | 21:53 | |
*** sergiuw has quit IRC | 22:27 | |
*** Ra4cal has joined #openstack | 22:34 | |
*** jmasud has joined #openstack | 22:35 | |
*** jmasud has quit IRC | 22:45 | |
*** cah_link has joined #openstack | 22:47 | |
*** luksky has quit IRC | 22:54 | |
*** cah_link has quit IRC | 23:02 | |
*** jmasud has joined #openstack | 23:05 | |
*** Ra4cal has quit IRC | 23:07 | |
*** jmasud has quit IRC | 23:31 |