jrosser | maybe important to understand that all of these services are very agnostic to the actual choice of backend store | 00:00 |
eatthoselemons | ah so the cinder-volume stores the actual packages/config/etc for the vm's? | 00:00 |
jrosser | cinder is block devices | 00:00 |
jrosser | like a hard disk, 4k blocks | 00:00 |
eatthoselemons | glance just provides the boot images so the speed of glance only matters for bootup? | 00:00 |
jrosser | usually yes | 00:00 |
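The distinction discussed above can be sketched with the openstack CLI (names like `cirros`, `m1.small`, `net1`, `data-vol` and `vm-1` are hypothetical; this assumes an existing image, flavor and network):

```shell
# glance serves images: nova pulls the image once, at boot time
openstack image list

# cinder serves block devices: a volume attached to a running server
# shows up inside the guest as a raw disk (e.g. /dev/vdb)
openstack volume create --size 10 data-vol
openstack server create --image cirros --flavor m1.small --network net1 vm-1
openstack server add volume vm-1 data-vol
```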
jrosser | it's getting late here, i'm done for today | 00:01 |
*** jpvlsmv has quit IRC | 00:01 | |
jrosser | you can explore this all in the AIO :) | 00:01 |
*** jpvlsmv has joined #openstack-ansible | 00:02 | |
eatthoselemons | Okay that is making sense | 00:03 |
eatthoselemons | I will mess with the aio | 00:03 |
eatthoselemons | thanks for all your help! Hope you have a great evening! | 00:03 |
*** tosky has quit IRC | 00:17 | |
*** eatthoselemons has left #openstack-ansible | 00:24 | |
*** eatthoselemons has quit IRC | 00:25 | |
openstackgerrit | Merged openstack/ansible-role-python_venv_build stable/victoria: Import wheels build only when necessary https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774804 | 00:28 |
*** maharg101 has joined #openstack-ansible | 00:34 | |
*** maharg101 has quit IRC | 00:38 | |
*** macz_ has quit IRC | 00:42 | |
*** macz_ has joined #openstack-ansible | 01:15 | |
*** fanfi has quit IRC | 01:17 | |
*** macz_ has quit IRC | 01:20 | |
*** ianychoi has joined #openstack-ansible | 02:09 | |
*** macz_ has joined #openstack-ansible | 02:29 | |
*** macz_ has quit IRC | 02:33 | |
*** maharg101 has joined #openstack-ansible | 02:35 | |
*** maharg101 has quit IRC | 02:40 | |
*** spatel has joined #openstack-ansible | 03:08 | |
*** gyee has quit IRC | 03:33 | |
*** LowKey has joined #openstack-ansible | 04:18 | |
*** maharg101 has joined #openstack-ansible | 04:36 | |
*** maharg101 has quit IRC | 04:40 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** spatel has quit IRC | 06:43 | |
*** CeeMac has joined #openstack-ansible | 07:07 | |
*** miloa has joined #openstack-ansible | 07:11 | |
*** kleini has joined #openstack-ansible | 07:12 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Do not use tempestconf for ironic role tests https://review.opendev.org/c/openstack/openstack-ansible/+/772907 | 07:44 |
*** rpittau|afk is now known as rpittau | 07:53 | |
noonedeadpunk | morning | 08:05 |
noonedeadpunk | wow, jrosser, are you sleeping at all?:) | 08:06 |
jrosser | not enough | 08:06 |
noonedeadpunk | you shouldn't really be burning the midnight oil | 08:10 |
noonedeadpunk | (I guess) | 08:11 |
jrosser | true, weird lockdown times i think | 08:12 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests https://review.opendev.org/c/openstack/openstack-ansible/+/774685 | 08:12 |
noonedeadpunk | Yeah, we feel it less here I guess, since we don't have real lockdowns here. Well, on paper we do, but in reality nobody enforces it, so everybody is kind of free to do whatever they want... | 08:14 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_murano master: Add global override for service bind address https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/775077 | 08:15 |
*** andrewbonney has joined #openstack-ansible | 08:15 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_murano master: Use the utility host for db setup tasks https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/747236 | 08:16 |
noonedeadpunk | I'm reading through the ML thread about CI and parallel devstack installation and wondering if we could parallelize setup-openstack (except keystone) as well... The tricky thing is resource creation... | 08:17 |
noonedeadpunk | which should be done entirely after all the setup, I guess | 08:17 |
noonedeadpunk | (in case it's run in parallel). but with our architecture it seems hardly achievable without nasty hacks... or just moving all resource creation out to a separate step | 08:18 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_murano master: Add global override for service bind address https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/775077 | 08:18 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_murano master: Use the utility host for db setup tasks https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/747236 | 08:19 |
noonedeadpunk | murano has been broken on tempest for some time... | 08:19 |
noonedeadpunk | it's just timing out and I don't see why, actually... | 08:19 |
noonedeadpunk | project is pretty much deserted overall | 08:20 |
jrosser | hmmm, seems we get a long way behind on merging stuff for the role as a result | 08:21 |
noonedeadpunk | yeah, I even stopped pushing patches for it until I figure out what's wrong with tempest... | 08:21 |
jrosser | i saw jobs break with not being able to bind to 0.0.0.0 due to our metal setup changes | 08:21 |
noonedeadpunk | I guess we would need to squash these changes anyway? | 08:22 |
*** maharg101 has joined #openstack-ansible | 08:23 | |
jrosser | maybe, for metal galera[0] == utility host anyway so it might be ok | 08:23 |
jrosser | though upgrade jobs are kind of pointless with it in this state | 08:23 |
noonedeadpunk | btw https://review.opendev.org/c/openstack/openstack-ansible-os_murano/+/747236/ | 08:23 |
noonedeadpunk | ah, that's what you rebased | 08:24 |
noonedeadpunk | lol | 08:24 |
noonedeadpunk | ok | 08:24 |
noonedeadpunk | yeah upgrade totally useless now | 08:25 |
*** jbadiapa has joined #openstack-ansible | 08:26 | |
*** ianychoi has quit IRC | 08:33 | |
*** tosky has joined #openstack-ansible | 08:36 | |
*** ianychoi has joined #openstack-ansible | 08:48 | |
jrosser | yes about parallelising stuff | 08:54 |
jrosser | it's a real shame there is no natural construct for that | 08:54 |
jrosser | in ansible | 08:54 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Fix cert verification logic for cinder api https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/775079 | 08:56 |
noonedeadpunk | well, for setup-hosts we can probably use the free strategy. that won't help us in CI though | 08:57 |
jrosser | also from back in time https://review.opendev.org/c/openstack/openstack-ansible/+/497742 | 09:03 |
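The free strategy mentioned above would look roughly like this in a playbook; only a sketch (group and role names here follow OSA conventions but are illustrative), since, as noted, Ansible has no natural construct for running independent plays in parallel:

```yaml
# Sketch: with 'strategy: free' each host runs through the play's tasks
# at its own pace instead of in lockstep. This parallelises per-host
# work within one play, but not separate plays/services.
- hosts: lxc_hosts
  strategy: free
  roles:
    - role: openstack_hosts
```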
*** fanfi has joined #openstack-ansible | 09:16 | |
*** mindthecap has joined #openstack-ansible | 09:28 | |
*** miloa has quit IRC | 09:32 | |
*** miloa has joined #openstack-ansible | 09:32 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Add ceph_mons note https://review.opendev.org/c/openstack/openstack-ansible/+/775085 | 09:39 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_horizon master: Fix race condition in compression of static files https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/775086 | 09:40 |
*** maciejel has joined #openstack-ansible | 09:44 | |
*** fanfi has quit IRC | 09:47 | |
noonedeadpunk | do you think https://bugs.launchpad.net/openstack-ansible/+bug/1732481 is still relevant? I guess qemu nowadays does include apparmor/selinux by default from system packages? | 09:52 |
openstack | Launchpad bug 1732481 in openstack-ansible "qemu config should set security driver to apparmor on ubuntu" [Medium,In progress] | 09:52 |
admin0 | good morning all .. i need to upgrade one platform from ubuntu 16 (rocky) to latest one | 09:54 |
admin0 | i need a bit of links on getting started .. i know we had some etherpad on it | 09:54 |
noonedeadpunk | ebbex made a pretty good doc out of them https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html | 09:57 |
admin0 | noonedeadpunk, thank you | 10:03 |
admin0 | i will work on this today | 10:03 |
*** jbadiapa has quit IRC | 10:04 | |
*** macz_ has joined #openstack-ansible | 10:12 | |
*** macz_ has quit IRC | 10:17 | |
*** gokhani has joined #openstack-ansible | 10:28 | |
gokhani | Hi folks, I can mount an nfs share from the host but not from a container. I am getting an access denied error on the container side. Do I need a new config on the lxc side ? | 10:31 |
admin0 | it depends on the acl/rules on the nfs server | 10:39 |
admin0 | check via tcpdump what ips it's receiving and whether they're in the allow range | 10:39 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Make requirements repo available during distro CI builds https://review.opendev.org/c/openstack/openstack-ansible/+/775095 | 10:40 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Update pip/setuptools/wheel to latest version https://review.opendev.org/c/openstack/openstack-ansible/+/770284 | 10:41 |
*** jbadiapa has joined #openstack-ansible | 10:49 | |
gokhani | admin0 I am using netapp and they are in the same ip range, the weird thing is I can mount from the host | 10:50 |
admin0 | are host and containers in the same L2 ? | 10:50 |
admin0 | if not, is there NAT happening on the gateway ? | 10:51 |
gokhani | yes, I created br-nfs and they are connected with this bridge. | 10:52 |
admin0 | is it possible to do a tcpdump and see the outgoing ip and mac from the host and also from the bridge ( you can run a single tcpdump to netapp as host ) to see this | 10:53 |
admin0 | and some logs in the netapp side | 10:53 |
*** ioni has quit IRC | 10:58 | |
kleini | noonedeadpunk: does the above linked distribution upgrade also apply for bionic -> focal? when would be the best time frame to do this OS upgrade? with U or V or maybe later OSA release? | 11:00 |
gokhani | admin0 , I am listening with 'tcpdump -s 111 port nfs -i br-nfs' and running '/bin/mount 10.1.100.21:/ussuri_glance_nfs /tmp/destek -t nfs -o clientaddr=10.1.100.252,_netdev' from the glance container. I can't see any traffic :( | 11:03 |
admin0 | ok .. so the way tcpdump works is that incoming traffic is captured before the firewall and outgoing traffic after the firewall | 11:03 |
admin0 | which means something is blocking it .. firewall, nat rules, etc | 11:03 |
admin0 | you need to find and fix :) | 11:03 |
admin0 | you can actually only do "mount 10.1.100.21:/ussuri /tmp/destek" .. and it should still mount fine | 11:05 |
noonedeadpunk | kleini: I think we need to revise and maybe adjust it | 11:07 |
*** ioni has joined #openstack-ansible | 11:07 | |
noonedeadpunk | I haven't made bionic->focal for myself yet, while already running victoria for some regions | 11:07 |
noonedeadpunk | so I have no answer for this yet. But focal has been available since Ussuri, so I think it's up to you to decide when to upgrade | 11:08 |
noonedeadpunk | in the meanwhile I'm not sure we will have bionic support for W | 11:08 |
kleini | thanks, so I need to plan either U -> V and then focal or U, focal, V | 11:11 |
gokhani | admin0, I am again getting the access denied error. I rebooted my server. | 11:14 |
admin0 | until you find and fix what's causing the blocks, you will have that issue :) | 11:14 |
admin0 | what you can do is run iptables -Z ( to reset the counters) | 11:14 |
admin0 | then run the command again from the container and do iptables -L -nvx -t nat ( and one without -t nat) | 11:15 |
admin0 | to check the counters | 11:15 |
admin0 | usually if you do the mount 10 times, the counter on one or more rules will go up | 11:15 |
admin0 | and that might tell you exactly which iptables rule is blocking this | 11:15 |
admin0 | if it's not iptables, then it's routing rules | 11:15 |
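The counter-diff technique admin0 describes, spelled out as a command sketch (run as root on the host; repeat the failing mount from the container between the two steps):

```shell
# zero all packet/byte counters in every chain
iptables -Z

# ...now repeat the failing NFS mount from the container a few times...

# list rules with exact counters; rules whose counters climbed are the
# ones matching (and possibly dropping) the NFS traffic
iptables -L -nvx -t nat
iptables -L -nvx
```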
*** MickyMan77 has joined #openstack-ansible | 11:15 | |
MickyMan77 | Hi, is version 22.0.1 of osa the stable version of victoria ? | 11:18 |
noonedeadpunk | kleini: I guess you can even setup new hosts on focal in case you're on V. But for this you will need to upgrade at least one controller as well | 11:23 |
noonedeadpunk | or disable wheels build for the new hosts iirc | 11:23 |
noonedeadpunk | MickyMan77: yeah, kind of) | 11:24 |
noonedeadpunk | it still has some bugs (as any osa version) but generally it's way better than 22.0.0 | 11:24 |
noonedeadpunk | it's safe to use for sure | 11:24 |
gokhani | admin0, iptables doesn't block it. this is the ip route show output > http://paste.openstack.org/show/802560/ | 11:27 |
*** macz_ has joined #openstack-ansible | 11:34 | |
gokhani | do you recommend mounting nfs from the host at /var/lib/lxc/infra3_glance_container-0c045fb1/rootfs/var/lib/glance/images/ ? I can't mount from the glance container :( | 11:37 |
admin0 | no you have to fix the underlying issue and not take shortcuts or cut corners :) | 11:39 |
*** macz_ has quit IRC | 11:39 | |
admin0 | you can ping the nfs server ? is that captured in the tcpdump ? | 11:39 |
gokhani | admin0, yes the ping is captured in tcpdump but I cannot see any nfs traffic :( | 11:47 |
gokhani | admin0 this is netstat -tupln output > http://paste.openstack.org/ | 11:50 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Remove version cap on PrettyTable https://review.opendev.org/c/openstack/openstack-ansible/+/775126 | 11:53 |
kleini | noonedeadpunk: disabling the wheels build would mean that the repo server would not be used for that host and everything is built locally? Is there documentation somewhere in OSA about wheels and the repo server and so on? I don't really understand that construct yet. | 11:53 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Make requirements repo available during distro CI builds https://review.opendev.org/c/openstack/openstack-ansible/+/775095 | 11:54 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Update pip/setuptools/wheel to latest version https://review.opendev.org/c/openstack/openstack-ansible/+/770284 | 11:54 |
*** macz_ has joined #openstack-ansible | 11:55 | |
jrosser | kleini: check out the readme on this repo https://github.com/openstack/ansible-role-python_venv_build | 11:55 |
noonedeadpunk | kleini: yep, that would mean exactly that. I don't think we have any good explanation, unfortunately. In case you want to have wheels, they need to be built on the same OS, so there should be at least a repo server running focal for wheels to be built for focal | 11:56 |
noonedeadpunk | there might be weird things once you fully upgrade to focal because of lsyncd though, but it's a completely different story | 11:56 |
*** macz_ has quit IRC | 12:00 | |
kleini | I think mixing up the OS upgrade to focal and the upgrade to V is not a good idea. Too much complexity for me. So I will stay on U and try to upgrade nodes to focal, either by having one controller on focal with a repo server or by disabling the use of the repo server | 12:03 |
kleini | I think the latter looks more interesting, to get used to focal without breaking any controller nodes. once some computes run without problems on focal and U, I can move on to upgrading the controller nodes. | 12:04 |
jrosser | i've been thinking that adding an extra very minimal node to the environment might be useful for upgrades | 12:08 |
jrosser | it would be the new OS and for the purpose of the upgrade you override the venv build host to be that one | 12:08 |
gokhani | admin0 , I can only capture nfs v3 with tcpdump. http://paste.openstack.org/show/802565/ | 12:09 |
jrosser | there's probably detail i've not thought about but it might make things a little less interdependent when upgrading the controllers | 12:09 |
kleini | hmm, good idea. I can create a focal VM and connect it via VLAN to the mgmt network for this upgrade scenario. how do I override the venv build host? | 12:12 |
jrosser | https://github.com/openstack/ansible-role-python_venv_build/blob/master/defaults/main.yml#L115-L121 | 12:15 |
jrosser | i guess it would need to have the repo_server role run against it and be the backend for the loadbalancer port 8181 | 12:16 |
kleini | Will check that out in my staging environment | 12:17 |
jrosser | i'm kind of hand-waving this a bit so yes good idea :) | 12:17 |
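The override jrosser links lives in the python_venv_build role defaults as `venv_build_targets`. A very rough user_variables.yml sketch of the idea (the host name `focal-repo1` is hypothetical, and the exact dict nesting should be checked against the linked defaults for your branch):

```yaml
# Rough sketch only: the authoritative structure is in the
# ansible-role-python_venv_build defaults linked above. The idea is to
# point wheel/venv builds for focal hosts at a focal build host.
venv_build_targets:
  "20.04":
    x86_64: focal-repo1
```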
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_barbican master: Fix crypto_plugin defenition https://review.opendev.org/c/openstack/openstack-ansible-os_barbican/+/768201 | 12:49 |
*** mgagne has quit IRC | 12:52 | |
*** mgagne has joined #openstack-ansible | 12:53 | |
*** luksky has joined #openstack-ansible | 12:56 | |
mgariepy | morning everyone | 13:25 |
*** waxfire has quit IRC | 13:42 | |
gokhani | Hello again folks, I tried to mount nfs from the containers and I am getting permission denied errors > http://paste.openstack.org/show/802568/ . I used both netapp and a manually created nfs server. It gave the same error. I suspect lxc3. Do you have any ideas ? This environment is Ussuri with ubuntu 18.04. I deployed it from the OSA | 13:50 |
gokhani | stable/ussuri branch. jrosser do you have any idea about this problem ? | 13:50 |
*** waxfire has joined #openstack-ansible | 13:52 | |
*** spatel has joined #openstack-ansible | 13:53 | |
*** lemko has quit IRC | 14:09 | |
*** lemko has joined #openstack-ansible | 14:09 | |
*** miloa has quit IRC | 14:09 | |
gokhani | admin0 , jrosser I found that my problem is with the apparmor lxc profile and I solved it by following these steps > http://paste.openstack.org/show/802573/. now I can mount from containers. I think we need to add these parameters to the lxc profile in the OSA lxc container create role. | 14:23 |
jrosser | gokhani: https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/lxc-openstack.apparmor.j2 | 14:26 |
*** pcaruana has quit IRC | 14:29 | |
spatel | ceph question, I have configured the cinder-api/volume services to integrate with ceph, but do I need to tell nova about ceph? otherwise how will it mount volumes for a vm? | 14:31 |
ioni | spatel: you will have to re-run nova playbook and it will detect that you have ceph configured and it will configure everything for you | 14:43 |
ioni | get ceph keys, configure nova.conf and so on | 14:43 |
ioni | it will attach the block volume to your instance and inside the vm you will see a new device called /dev/vdb or /dev/sdb depending on how you configured the bus | 14:44 |
spatel | I do have ceph running in an older cloud which has cinder and I can mount everything.. so all good there.. it's been a long time so I don't know what I did there | 14:45 |
spatel | I don't think I created any special config for nova related to cinder | 14:45 |
spatel | all i can see i have /etc/ceph/ceph.client.cinder.keyring file on all compute nodes | 14:47 |
ioni | that's fine, i was thinking that you had only now configured cinder with ceph | 14:47 |
*** macz_ has joined #openstack-ansible | 14:47 | |
ioni | in this case you have to re-run os-nova-install to pick up the ceph stuff | 14:48 |
spatel | yes.. looks like it | 14:48 |
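Re-running the nova playbook as ioni suggests, from the deployment host (standard OSA invocation, path assumes the usual /opt/openstack-ansible checkout):

```shell
# re-run the nova playbook so it detects the ceph config and distributes
# the client keyrings to the compute nodes
cd /opt/openstack-ansible/playbooks
openstack-ansible os-nova-install.yml
```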
gokhani | jrosser, this file didn't work for me. It worked only after I added these variables to the /etc/apparmor.d/lxc/lxc-default-cgns file. | 14:50 |
jrosser | sorry i'm in meetings pretty much the rest of the afternoon | 14:50 |
jrosser | it will need some debugging to find out why those settings are not applying | 14:50 |
jrosser | or it's an LXC2/3 difference in config files | 14:50 |
*** macz_ has quit IRC | 14:52 | |
gokhani | yes, maybe. I think this is an lxc3 issue. it seems those settings are not applied via the lxc-openstack.apparmor file | 14:53 |
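For context, the template jrosser links is meant to grant the container the mount permissions that NFS needs; the relevant sort of apparmor rules look roughly like this (a sketch based on the discussion, not an exact copy of the template):

```
# apparmor profile fragment (sketch): allow nfs mounts inside the container
mount fstype=nfs,
mount fstype=nfs4,
mount fstype=rpc_pipefs,
```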
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_horizon master: Fix race condition in compression of static files https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/775086 | 14:54 |
*** pcaruana has joined #openstack-ansible | 14:58 | |
*** macz_ has joined #openstack-ansible | 15:09 | |
*** macz_ has quit IRC | 15:13 | |
admin0 | gokhani, from my glance containers i can mount nfs fine .. i did not do anything special with apparmor | 15:26 |
*** waxfire has quit IRC | 15:28 | |
*** macz_ has joined #openstack-ansible | 15:44 | |
*** luksky has quit IRC | 15:48 | |
*** macz_ has quit IRC | 15:48 | |
-openstackstatus- NOTICE: Recent POST_FAILURE results from Zuul for builds started prior to 15:47 UTC were due to network connectivity issues reaching one of our log storage providers, and can be safely rechecked | 15:50 | |
gokhani | admin0, I can't mount it. I deployed this environment yesterday. My OS is ubuntu 18.04.5 LTS and the kernel version is 5.4.0.65. | 15:59 |
admin0 | is it listed in glance ? | 15:59 |
admin0 | i mean glance will mount it automatically | 15:59 |
admin0 | needs further checks .. | 15:59 |
spatel | ioni thats it, its working now | 16:02 |
ioni | spatel: nice | 16:02 |
spatel | after running the playbook on nova it deployed the ceph.cinder keyring and now my vm can access it | 16:02 |
gokhani | yes, normally glance will mount it automatically but in my environment it gets an error when running the glance playbook. | 16:03 |
admin0 | gokhani, just to check . do you have a infra/deployment host server ? | 16:04 |
admin0 | not a container | 16:04 |
admin0 | try setting up a quick nfs server there .. and then try to mount that one | 16:04 |
admin0 | just want to check if its specific to your netapp or nfs in general | 16:04 |
*** luksky has joined #openstack-ansible | 16:05 | |
admin0 | apt install nfs-kernel-server ; and put /srv/glance 172.29.244.0/22(rw,sync,no_subtree_check,no_root_squash) in /etc/exports and you are done | 16:05 |
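admin0's quick test server, spelled out (run as root on the deploy host; export path and CIDR are from the line above, and `exportfs -ra` is an extra step to re-read /etc/exports without restarting the service):

```shell
apt install -y nfs-kernel-server
mkdir -p /srv/glance
echo '/srv/glance 172.29.244.0/22(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra   # re-export everything in /etc/exports
```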
gokhani | admin0, I tested it as you said and set up an nfs server. It again gave the same error "Permission Denied". | 16:05 |
admin0 | then you have a different issue which I do not know .. maybe kernel, firewall, permissions | 16:06 |
admin0 | something in ubuntu | 16:06 |
gokhani | it is about lxc profile / apparmor settings and not specific to netapp. | 16:06 |
admin0 | why should it be .. for nfs, its just an IP ? | 16:07 |
admin0 | ip:/mountpoint | 16:07 |
admin0 | in one of my ubuntu 18.04 cluster where nfs is being used for glance, the kernel is 4.15.0-112-generic | 16:07 |
admin0 | though i think it makes no diff | 16:07 |
admin0 | gokhani, is it a new install ? | 16:08 |
admin0 | then why not go with ubuntu 20 and latest 22.0 version | 16:08 |
*** macz_ has joined #openstack-ansible | 16:09 | |
gokhani | for lxc we need to set this variable https://github.com/openstack/openstack-ansible-lxc_hosts/blob/master/templates/lxc-openstack.apparmor.j2#L18 for nfs to work in containers. But in my environment it doesn't apply these settings. | 16:10 |
gokhani | yes it is new install | 16:10 |
gokhani | our test environment is ubuntu 18.04 and we didn't test ubuntu 20.04 | 16:12 |
admin0 | whats preventing you to test 20.04 :) | 16:25 |
*** waxfire has joined #openstack-ansible | 16:26 | |
admin0 | i am mostly an ops guy and not a hardcore dev .. and i don't understand why you would not want to use a perfectly working 20.04, the latest ubuntu with all new packages .. but would still use 18.04 and take on the headache of upgrading it later | 16:29 |
admin0 | especially when its greenfield | 16:29 |
gokhani | admin0, yes you are right, but there is a lot to do on my side and I need time :( it is in my plans. Also ubuntu 18.04 is working perfectly and I haven't used 20.04 yet. | 16:35 |
*** gyee has joined #openstack-ansible | 16:37 | |
admin0 | is this platform just for openstack ? or are you doing multiple things on it ? | 16:37 |
admin0 | i mean are you using the controller and compute for something else also ? | 16:37 |
admin0 | the whole reason why i use OSA is because i don't have to write, manage, test or even document anything .. it's already done and tested and used by this community ( everyone clap for themselves ) .. | 16:40 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-lxc_hosts stable/ussuri: Fix lxc_hosts_container_image_url condition https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/775218 | 16:47 |
*** ioni has quit IRC | 16:50 | |
gokhani | admin0, I also appreciate the OSA guys. It is really awesome and I have used osa for 4 years. | 16:56 |
admin0 | so focal + osa is tested :) | 16:57 |
gokhani | we are mostly using sahara,magnum and gpus on openstack. we only have problem with time. | 17:00 |
*** ioni has joined #openstack-ansible | 17:07 | |
spatel | admin0 I have a question, I created a cinder volume, mounted it on vm-1, created a filesystem and copied some important files to that volume. Now suddenly something happened and I deleted vm-1 (Now if I create vm-2 can I mount that volume back and retrieve my data? ) | 17:08 |
admin0 | yes | 17:10 |
spatel | How? | 17:10 |
admin0 | unless that something happened was in the middle of an io operation and some data it was writing is lost or corrupted | 17:10 |
admin0 | mount to vm2 | 17:10 |
admin0 | a cinder volume is like a usb disk | 17:10 |
spatel | how does VM-2 know it's a new disk or partition? | 17:11 |
admin0 | you mount to 1 instance .. read/write data .. unplug .. mount to something else .. do the same | 17:11 |
admin0 | unmount from vm-1 | 17:11 |
admin0 | then mount to vm-2 | 17:11 |
admin0 | lsblk | 17:11 |
admin0 | will show /dev/vdX .. mount /dev/vdb /mnt/ | 17:11 |
admin0 | and that is all you need to do | 17:11 |
spatel | hmm let me create vm-2 and test quickly | 17:11 |
admin0 | again, treat it like a usb disk | 17:11 |
admin0 | when you format a usb disk, you don't have to reformat it every time you move it to a different system | 17:12 |
admin0 | if it's pre-formatted and the new OS knows that format, it will just mount it and you can read/write data | 17:12 |
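The detach/reattach flow described above, as a CLI sketch (server and volume names `vm-1`, `vm-2`, `data-vol` are hypothetical; assumes the volume survived the VM deletion, i.e. it was not set to delete on termination):

```shell
# on the control plane: move the volume to the new server
openstack server remove volume vm-1 data-vol   # skip if vm-1 is already gone
openstack server add volume vm-2 data-vol

# inside vm-2: the volume shows up as a plain disk, no reformat needed
lsblk                 # find the new device, e.g. /dev/vdb
mount /dev/vdb /mnt   # the existing filesystem and data are intact
```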
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Combined patch to unblock CI https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/775239 | 17:18 |
spatel | admin0 +1 it works! | 17:38 |
*** maharg101 has quit IRC | 17:40 | |
*** rpittau is now known as rpittau|afk | 17:43 | |
*** alinabuzachis has joined #openstack-ansible | 18:23 | |
*** alinabuzachis has quit IRC | 18:26 | |
*** alinabuzachis has joined #openstack-ansible | 18:26 | |
noonedeadpunk | whaat `cgfsng - cgroups/cgfsng.c:cgfsng_monitor_destroy:1110 - No space left on device - Failed to move monitor 52075 to "/sys/fs/cgroup/cpuset//lxc.pivot"` https://a7bc13a3b1d3ff8939d4-b66311f00e65e72370f624798f3cdac4.ssl.cf5.rackcdn.com/775239/1/check/openstack-ansible-ovs-nsh-ubuntu-focal/6a3e119/logs/host/lxc/lxc-agents1.log.txt | 18:34 |
noonedeadpunk | focal on functional looks so weird... | 18:34 |
*** gokhani has quit IRC | 18:34 | |
jrosser | oh yes that | 18:35 |
*** luksky has quit IRC | 18:35 | |
jrosser | noonedeadpunk: if we could get this into shape it would all just go away https://review.opendev.org/c/openstack/openstack-ansible/+/534318 | 18:36 |
jrosser | trouble with the neutron role right now is that there are just too many simultaneous things need addressing | 18:37 |
noonedeadpunk | we will increase CI time on the other side, but yeah | 18:38 |
jrosser | well it's about the big picture really not just job time | 18:46 |
jrosser | because we waste so many CI hours fighting with it as it is | 18:46 |
noonedeadpunk | yeah I will try to look into this tomorrow. In the meanwhile I also think that we should move the ovn job to the neutron role from integrated repo testing, or really switch the default from lxb to ovn | 18:47 |
noonedeadpunk | but not sure about that and whether we have enough experience with ovn atm | 18:48 |
noonedeadpunk | we are looking at ovn now as well, but it's just for some kind of perspective, not sure when exactly this will be done, considering I can't get to trove while I was supposed to deploy it by Christmas... | 18:50 |
*** luksky has joined #openstack-ansible | 18:52 | |
jrosser | i was just looking at the OVN feature gap list today | 18:52 |
jrosser | no BGP speaker it seems, making ipv6 kind of tricky | 18:52 |
*** ioni has quit IRC | 18:53 | |
jrosser | btw we have snapshots on victoria broken https://bugs.launchpad.net/nova/+bug/1915400 | 18:54 |
openstack | Launchpad bug 1915400 in OpenStack Compute (nova) "Snapshots fail with traceback from API" [Undecided,Incomplete] | 18:54 |
*** alinabuzachis has quit IRC | 19:01 | |
*** alinabuzachis has joined #openstack-ansible | 19:02 | |
*** alinabuzachis has quit IRC | 19:16 | |
*** sshnaidm is now known as sshnaidm|afk | 19:19 | |
openstackgerrit | Merged openstack/openstack-ansible-lxc_hosts stable/ussuri: Fix lxc_hosts_container_image_url condition https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/775218 | 19:20 |
*** spatel has quit IRC | 19:25 | |
*** spatel has joined #openstack-ansible | 19:28 | |
*** maharg101 has joined #openstack-ansible | 19:36 | |
*** maharg101 has quit IRC | 19:41 | |
*** andrewbonney has quit IRC | 19:42 | |
*** jpvlsmv has quit IRC | 19:59 | |
noonedeadpunk | oops.... | 20:58 |
* noonedeadpunk should finally set rally/refstack to run after upgrades | 21:07 | |
noonedeadpunk | btw, I can't reproduce this bug I guess... http://paste.openstack.org/show/802585/ | 21:13 |
noonedeadpunk | maybe it's horizon making some weird calls.... | 21:13 |
noonedeadpunk | oh, hm, but we don't use ssl for rabbit in the region where we have V | 21:16 |
noonedeadpunk | and my normal sandbox is broken atm to check this out :( | 21:19 |
djhankb | Hey dudes, quick question for you all - my main Openstack deployment is running an old Dell PS4100 iSCSI Array, configured with the now deprecated cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver, which does not use MPIO, and TBH is pretty slow. I don't have a lot of experience with LVM, although I know it's sort of baked in. What would be | 21:31 |
djhankb | the best approach to setting up some MPIO LUNs on that array that would be backed by the LVM driver? Would I need a dedicated machine for that? or could that run off of a controller node? | 21:31 |
* noonedeadpunk prefers ceph as storage backend | 21:33 | |
noonedeadpunk | eventually lvm might be a good solution if you're looking for local storage only for your computes | 21:34 |
djhankb | Yeah, I would *love* to get into Ceph at some point in the future, I've been working on this POC with what I've got right now - and storage is sort of painful | 21:34 |
djhankb | Ceph and small scale don't really work if I understand correctly :-) | 21:35 |
noonedeadpunk | Well, I guess in case you're migrating between drivers, it shouldn't matter much which one? And ceph imo is a more universal solution in case you need shared storage and have more than a single compute:) | 21:36 |
noonedeadpunk | well, it works. the thing is that it's really recommended to have quorum of ceph monitors | 21:37 |
noonedeadpunk | but still, we deploy it on the single VM for CI for instance | 21:37 |
djhankb | Makes sense - I should probably try to get my feet wet at some point here with it | 21:38 |
noonedeadpunk | and you can use sparse files or whatever as the osd backend. The question is performance, yes, and in case it's a single compute, you probably just don't need shared storage | 21:38 |
noonedeadpunk | you can try out with aio on some VM with 4 CPU and 10-12 GB of RAM. | 21:39 |
djhankb | Right now, I have 2 nodes for Controller, and 2 for Compute | 21:39 |
djhankb | I would have had more, but power is limited in the Lab room | 21:40 |
noonedeadpunk | eventually... it's just `git clone https://opendev.org/openstack/openstack-ansible; cd openstack-ansible; ./scripts/gate-check-commit.sh aio_ceph` | 21:40 |
noonedeadpunk | and it will deploy whole openstack with ceph on your VM | 21:40 |
noonedeadpunk | oh, and you need 100gb of hard drive on the vm | 21:40 |
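The AIO-with-ceph recipe from above in one place (needs a VM with roughly 4 vCPUs, 10-12 GB RAM and ~100 GB disk, per the sizing mentioned):

```shell
git clone https://opendev.org/openstack/openstack-ansible
cd openstack-ansible
./scripts/gate-check-commit.sh aio_ceph
```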
djhankb | interesting, I have not done an AIO build yet | 21:41 |
noonedeadpunk | well, full doc is here https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html but what I provided you is the way how our CI runs things | 21:41 |
djhankb | I've got 2 machines at home I wanted to spin up Openstack on. If I set up one using an AIO build, am I able to add the other as a compute? | 21:42 |
noonedeadpunk | yep | 21:42 |
noonedeadpunk | you will just need to configure the network on it, add it to openstack_user_config.yml and run the playbooks again | 21:43 |
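For reference, adding a second machine as a compute node in /etc/openstack_deploy/openstack_user_config.yml might look roughly like this (the hostname and IP below are hypothetical placeholders, not from the conversation):

```yaml
# Hypothetical sketch: register a second machine as a compute host.
# "compute2" and the address are placeholders for your environment.
compute_hosts:
  compute2:
    ip: 172.29.236.12
```

After that, re-running the setup playbooks against the new host brings it into the deployment.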
djhankb | Cool. I may try that at the house... but that brings up another point - I can make manual MPIO luns all day long on my PS4100 at the office openstack lab, If I wanted to set up a handful of LUNS for OSDs, would I be able to set up Ceph that way on my 2 controller nodes? Or would that not be recommended? | 21:44 |
noonedeadpunk | BUT AIO is known to kind of break after reboot, because we have long-standing issues there with network configuration (systemd-networkd conflicts with smth I guess) and loop drives might be lost. So it's eventually more for POC and playing around | 21:44 |
noonedeadpunk | we never looked into it, since it's announced for testing only anyway | 21:44 |
noonedeadpunk | but, all openstack-ansible configs, inventory, etc. are generated absolutely properly. So if you configure networking and storage manually - you can just use these configs to setup openstack as well | 21:45 |
noonedeadpunk | well the issue with 2 controllers is that you can catch split brain, which would be nasty | 21:46 |
djhankb | For sure. I've been working with OSA for about 2 years now in my free time. Its just so damn vast its easy to get lost in the details | 21:46 |
noonedeadpunk | you can even have single controller but you know... | 21:46 |
noonedeadpunk | it's impossible to have quorum with 2 nodes | 21:46 |
noonedeadpunk | BUT | 21:46 |
djhankb | Yes, I noticed the split brain - I ran into that with Galera + RabbitMQ | 21:46 |
noonedeadpunk | ^ | 21:47 |
djhankb | I was thinking of adding a VM on a compute just for Galera + RabbitMQ to give a quorum. | 21:47 |
noonedeadpunk | you can create 2 rabbitmq and galera containers on the single controller node | 21:47 |
djhankb | Ahh that's a good point too.. Does OSA Support Garbd? | 21:47 |
djhankb | I have run Garbd before in other Galera deployments to give a quorum | 21:48 |
noonedeadpunk | nah | 21:48 |
noonedeadpunk | yeah, I was running it too, but I guess I used some nasty override for that | 21:48 |
djhankb | I would do it here too but I like how everything works together, so I don't want to go adding some extra manual thing to my deployment | 21:48 |
noonedeadpunk | you can use affinity like this to create more than single container https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.aio.j2#L139-L143 | 21:49 |
djhankb | Perfect, I was just going to ask that | 21:49 |
noonedeadpunk | here's some doc regarding this https://docs.openstack.org/openstack-ansible/latest/reference/inventory/configure-inventory.html#deploying-0-or-more-than-one-of-component-type-per-host | 21:51 |
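The affinity override being discussed mirrors the linked aio.j2 example: an `affinity` map under a host entry in openstack_user_config.yml sets how many containers of a component the inventory generates on that host. A sketch, with a hypothetical host name and IP:

```yaml
# Hypothetical sketch: run two galera and two rabbitmq containers
# on a single controller to help form a quorum with a 2-node setup.
infra_hosts:
  controller1:
    ip: 172.29.236.11
    affinity:
      galera_container: 2
      rabbit_mq_container: 2
```

Note that two container copies on one physical host still share that host's failure domain; a third vote on separate hardware (as djhankb suggests) is safer.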
noonedeadpunk | sorry, need to head out | 21:51 |
djhankb | I see - add the affinity under the host section. | 21:52 |
djhankb | No problem - thanks for your help as always! | 21:52 |
*** klamath_atx has joined #openstack-ansible | 21:53 | |
*** spatel has quit IRC | 22:03 | |
*** jbadiapa has quit IRC | 22:03 | |
*** jpvlsmv has joined #openstack-ansible | 22:12 | |
jpvlsmv | Quick (I think) question, where do I configure my (physical) hosts so that their lxc containers can communicate across hosts? i.e. on host1_utility_container I can't yet ping host2_utility_container's address. | 22:21 |
*** luksky has quit IRC | 22:25 | |
*** lemko7 has joined #openstack-ansible | 22:29 | |
*** lemko has quit IRC | 22:30 | |
*** lemko7 is now known as lemko | 22:30 | |
djhankb | jpvlsmv - this is where those would be set: https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.example#L240-L250 | 22:40 |
*** PrinzElvis has quit IRC | 22:41 | |
*** PrinzElvis has joined #openstack-ansible | 22:41 | |
djhankb | I think you should end up with an "eth0" in each container bound to an internal-only bridge and an "eth1" that would bridge to your management network | 22:42 |
djhankb | etc/network/interfaces.d/lxc-net-bridge.cfg is what the eth0 bridges to | 22:43 |
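The linked example config wires the container management network along these lines; each physical host needs the `br-mgmt` bridge configured so containers on different hosts can reach each other over `eth1`. Treat the exact values as a sketch of the upstream example rather than a prescription:

```yaml
# Sketch of the provider network for container-to-container traffic,
# following the openstack_user_config.yml.example linked above.
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
```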
jpvlsmv | right, the eth0 connects to the lxcbr0 and I can ping the other containers on this host, so host1_utility can ping host1_galera with either the 172.29.x.y or 10.0.x.y galera's address | 22:43 |
jpvlsmv | is it Neutron that would put the traffic into & out of a tunnel? | 22:45 |
djhankb | Yes for VXLAN I assume? IIRC that is more for Instance traffic to controller | 22:46 |
*** waxfire has quit IRC | 22:46 | |
jpvlsmv | ah... likely "Tunneling cannot be enabled without the local_ip"... | 22:47 |
*** waxfire has joined #openstack-ansible | 22:47 | |
jpvlsmv | (error message from neutron-linuxbridge-agent.log) | 22:47 |
djhankb | Yes, VXLAN works by sending traffic back to a controller node over Multicast, IIRC you need to use regular bridged networks for containers | 22:48 |
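In OSA the `local_ip` the linuxbridge agent complains about is normally derived from a tunnel provider network in openstack_user_config.yml; if no such network is defined (or the host has no address on its bridge), the agent logs the error above. A sketch, assuming the conventional `br-vxlan` bridge and `tunnel` address queue from the upstream example:

```yaml
# Sketch of the VXLAN tunnel network that supplies local_ip to
# neutron-linuxbridge-agent; bridge name and range are assumptions.
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-vxlan"
        container_type: "veth"
        container_interface: "eth10"
        ip_from_q: "tunnel"
        type: "vxlan"
        range: "1:1000"
        net_name: "vxlan"
        group_binds:
          - neutron_linuxbridge_agent
```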
*** LowKey has quit IRC | 22:53 |