*** anbanerj|ruck is now known as frenzy_friday | 07:08 | |
*** rpittau|afk is now known as rpittau | 07:30 | |
noonedeadpunk | jrosser: we need to do something with the zun and kata-containers feature, because it seems kata has discontinued deb packages entirely for kata > 1.12 | 07:48 |
noonedeadpunk | alternatives are either snap or download/unpack/symlink releases from https://github.com/kata-containers/kata-containers/releases | 07:51 |
noonedeadpunk | both options kind of suck | 07:53 |
jrosser | i would be quite worried about running virt-type stuff that's installed as a snap | 08:41 |
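For reference, a minimal sketch of the download/unpack/symlink alternative noonedeadpunk mentions above. The release version, asset name, and symlink target follow the kata 2.x static-tarball convention but are placeholders; verify against the releases page:

```bash
# Placeholder version; pick a real tag from the releases page.
KATA_VERSION="2.2.0"
curl -LO "https://github.com/kata-containers/kata-containers/releases/download/${KATA_VERSION}/kata-static-${KATA_VERSION}-x86_64.tar.xz"
# The static tarball is rooted at /, unpacking under /opt/kata.
sudo tar -C / -xJf "kata-static-${KATA_VERSION}-x86_64.tar.xz"
# Expose the runtime on PATH via a symlink.
sudo ln -sf /opt/kata/bin/kata-runtime /usr/local/bin/kata-runtime
```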
yasemind | hi, I use the stable/victoria (22.1.0) version of OSA and I want to change the magnum service version to stable/wallaby. I changed the magnum version to wallaby in the repo-packages file and the os-magnum role to stable/wallaby. After I ran the openstack-ansible os-magnum-install.yml command, it gave an error like https://paste.opendev.org/show/809530/ and these are the python | 08:44 |
yasemind | venv logs -> https://paste.opendev.org/show/809529/ . Is there anything I missed? | 08:44 |
jrosser | i don't think you can change the version of a role to wallaby and expect that to work on a victoria deployment | 08:50 |
jrosser | you will have better chances just updating the magnum version in repo-packages and using the victoria version of the ansible role | 08:50 |
yasemind | thanks jrosser I tried and it worked | 09:00 |
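A minimal sketch of jrosser's suggestion, assuming the usual OSA pattern of overriding `magnum_git_install_branch` (the variable the repo-packages file defines) from user_variables.yml, which the openstack-ansible wrapper loads with extra-vars precedence:

```bash
# Pin magnum's source to wallaby while keeping the victoria role.
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
magnum_git_install_branch: stable/wallaby
EOF
# Rebuild the wheels/venv for magnum, then redeploy the service.
cd /opt/openstack-ansible/playbooks
openstack-ansible repo-install.yml
openstack-ansible os-magnum-install.yml
```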
thelounge05 | Hmm just realized you can give neutron bridge mappings in addition to interface mappings. https://docs.openstack.org/neutron/latest/configuration/linuxbridge-agent.html#linux-bridge | 09:52 |
thelounge05 | don't think the "bridge_mappings" variable is implemented in OSA though | 09:52 |
thelounge05 | has anyone tried this? | 09:52 |
thelounge05 | relevant OSA part: https://github.com/openstack/openstack-ansible-os_neutron/blob/master/templates/plugins/ml2/linuxbridge_agent.ini.j2 | 09:53 |
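For context, the two mapping styles the linked neutron docs describe; this is display-only illustration (network and device names are made up), since as noted the OSA template only renders physical_interface_mappings:

```bash
# Illustration of the upstream linuxbridge agent options; OSA would need
# a config override or template change to render bridge_mappings.
cat <<'EOF'
[linux_bridge]
# interface mapping: neutron creates and manages a bridge on top of eth1
physical_interface_mappings = physnet1:eth1
# bridge mapping: neutron attaches to an existing, pre-created bridge
bridge_mappings = physnet2:br-provider
EOF
```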
*** thelounge05 is now known as fresta | 09:56 | |
*** fresta is now known as Guest755 | 09:57 | |
*** Guest755 is now known as fresta | 09:59 | |
admin1 | if we provide an ssl cert, is "openstack-ansible certificate-authority.yml" still required? or does rabbitmq also use the provided certs | 09:59 |
jonher | adding a new compute node on wallaby, running os-nova-install.yml with "--tags nova-key --limit nova_compute" errors out with "Group nova does not exist". the group is only created under the "nova-group" tag | 12:18 |
jonher | should nova-key and nova-user be added here? https://github.com/openstack/openstack-ansible-os_nova/blob/stable/wallaby/tasks/nova_pre_install.yml#L22 | 12:19 |
jonher | for now i just ran with --tags nova-group before running with nova-key, as the documentation for adding a new node suggests. but since the group has to exist first, we should either have users run nova-group before nova-key, or change the tagging so that the key tasks include the group/user creation | 12:22 |
jonher | ^ ignore the above, seems like there are some local changes causing that | 12:36 |
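Even though the root cause here turned out to be local changes, the ordering jonher used is worth recording; a sketch, assuming the standard OSA playbook path:

```bash
# Create the nova group/user on the new host first, then distribute keys.
cd /opt/openstack-ansible/playbooks
openstack-ansible os-nova-install.yml --limit nova_compute --tags nova-group
openstack-ansible os-nova-install.yml --limit nova_compute --tags nova-key
```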
spatel | noonedeadpunk question related to upgrades: during an upgrade, if i don't want to touch ceph, is there any easy way to skip it? | 13:19 |
noonedeadpunk | you will need to touch ceph-client side anyway | 13:21 |
spatel | ceph-clients are the computes, right? | 13:21 |
spatel | all i am trying to do is leave the storage part alone and upgrade openstack only | 13:21 |
noonedeadpunk | not only - also cinder and glance | 13:22 |
spatel | that is ok but i don't want to upgrade ceph :) | 13:22 |
noonedeadpunk | if you use the run-upgrade script - there's no easy way | 13:22 |
noonedeadpunk | but if you run manually - just don't run those playbooks | 13:22 |
spatel | i had bad days in the past with ceph, so i'm trying to minimize impact in production | 13:22 |
spatel | good idea.. i can remove all ceph hooks from setup-openstack.yml | 13:23 |
spatel | of course a manual upgrade, one playbook at a time.. i am not going to do it all in a single shot | 13:23 |
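A sketch of the manual route being discussed: run the playbooks that setup-openstack.yml imports one by one and leave the ceph ones out. This is an excerpt, not the full sequence; check the playbook list against the deployed branch:

```bash
cd /opt/openstack-ansible/playbooks
# ...infrastructure playbooks as usual...
# deliberately skip ceph-install.yml (the ceph server side)
openstack-ansible os-keystone-install.yml
openstack-ansible os-glance-install.yml   # still configures the ceph *client*
openstack-ansible os-cinder-install.yml   # same here
# ...remaining os-*-install.yml playbooks, one service at a time
```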
noonedeadpunk | jonher: well eventually, you run with `--tags nova-key` in a second run after setup-openstack is finished | 13:25 |
noonedeadpunk | Then the group and everything should already be in place, and all you need to do is distribute the keys | 13:25 |
noonedeadpunk | btw we also have a script nowadays:) https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/add-compute.sh#L33-L37 | 13:26 |
jonher | it worked out and compute node is added | 13:26 |
noonedeadpunk | and you can define PRE_OSA_TASKS and POST_OSA_TASKS env vars if needed | 13:26 |
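A hedged usage sketch of that helper; the comma-separated host list matches how the script is usually invoked, and the hook variables are the ones noonedeadpunk names, but verify the exact semantics against the linked source:

```bash
cd /opt/openstack-ansible
# Optional hooks run before/after the OSA plays (contents are examples).
export PRE_OSA_TASKS="echo starting"
export POST_OSA_TASKS="echo done"
# Placeholder host names for the new compute nodes.
scripts/add-compute.sh compute20,compute21
```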
mgariepy | the client part and the server part in ceph are not too picky (you can have older clients and it will continue to work) | 13:53 |
noonedeadpunk | or newer clients as well | 13:53 |
spatel | noonedeadpunk why does this command always break at TASK [Clone git repos (parallel)]? ${SCRIPTS_PATH}/bootstrap-ansible.sh | 14:12 |
spatel | is there any good solution to make it run cleanly? | 14:12 |
noonedeadpunk | dunno. need to look at a specific example | 14:13 |
spatel | i always go and change the depth to 100 just to make it work | 14:13 |
spatel | TASK [Clone git repos (with git)] is clean, so should i ignore that error? | 14:14 |
spatel | This snippet failing - https://paste.opendev.org/show/809535/ | 14:15 |
spatel | can i ignore that RED error, noonedeadpunk? | 14:16 |
spatel | Maybe the issue is role_clone_default_depth | 14:18 |
noonedeadpunk | um, and don't you have shallow_since for ceph-ansible yet? | 14:18 |
spatel | what is that? | 14:18 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/stable/wallaby/ansible-role-requirements.yml#L281 | 14:18 |
spatel | where should i put it ? | 14:18 |
noonedeadpunk | because if it's about depth - this should have fixed it | 14:19 |
spatel | i do have shallow_since: '2021-09-08' | 14:20 |
spatel | i am upgrading V to W | 14:20 |
spatel | still getting the error even with shallow_since: | 14:20 |
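For readers following along, the shape of the entry being discussed in ansible-role-requirements.yml (see the link above); the version value is a placeholder, the date is the one spatel quotes:

```bash
cat <<'EOF'
- name: ceph-ansible
  scm: git
  src: https://github.com/ceph/ceph-ansible
  version: stable-6.0        # placeholder; real branches pin a tag/SHA
  shallow_since: '2021-09-08'
EOF
```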
noonedeadpunk | any output of the failed parallel clone? | 14:20 |
spatel | Question: is it safe to ignore, or should i do something to fix it? | 14:20 |
spatel | let me send you | 14:20 |
noonedeadpunk | it's safe to ignore if Clone git repos (with git) is ok | 14:21 |
spatel | git is all Green | 14:21 |
noonedeadpunk | damn... then we have some bug there I don't see.... | 14:21 |
spatel | noonedeadpunk here is the full error - https://paste.opendev.org/show/809536/ | 14:23 |
spatel | i don't know why we need two methods to pull repos. why don't we just remove one? | 14:23 |
noonedeadpunk | 'fatal: error in object: unshallow 24a62dc18613e732faade80efcf8c3408b0ecf68\nfatal: the remote end hung up unexpectedly' huh | 14:24 |
noonedeadpunk | because the old one is 10 times slower | 14:25 |
spatel | Clone git repos (with git) is slow ? | 14:25 |
noonedeadpunk | yep | 14:25 |
noonedeadpunk | but parallel still seems buggy... | 14:25 |
spatel | i didn't see that.. it took around 2 min to finish | 14:25 |
spatel | 2 minutes doesn't matter :) | 14:25 |
noonedeadpunk | you were upgrading from where to where? | 14:26 |
spatel | V to W | 14:26 |
noonedeadpunk | stable/victoria, or ? | 14:26 |
spatel | 22.1.2 to 23.1.1 | 14:26 |
noonedeadpunk | aha | 14:26 |
spatel | any issue? | 14:26 |
noonedeadpunk | let me try to reproduce | 14:26 |
spatel | you can test that out, but i am going to ignore it and continue my upgrade. hope there are no more bugs :) | 14:26 |
noonedeadpunk | eventually if we fail new method we just fallback to old one. | 14:27 |
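Given the 'unshallow' error pasted above, one manual workaround in the spirit of spatel's depth tweak is to deepen the cached role clone by hand before re-running the bootstrap; /etc/ansible/roles is the default OSA role path, adjust if yours differs:

```bash
cd /etc/ansible/roles/ceph-ansible
# Deepen the shallow history, or fetch it in full if that still fails.
git fetch --depth 100 origin || git fetch --unshallow origin
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh
```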
spatel | noonedeadpunk is ceph_all a valid group name? | 14:36 |
spatel | planning to do --limit '!ceph_all' | 14:37 |
noonedeadpunk | never used ceph-ansible in prod | 14:37 |
spatel | what do you use? | 14:37 |
noonedeadpunk | used ceph-deploy or manual setup | 14:37 |
spatel | hmm | 14:38 |
noonedeadpunk | I'm scared of running ansible against ceph, but it's just me:) | 14:38 |
noonedeadpunk | we actually use ceph-ansible nowadays, but it's run by a different team | 14:38 |
spatel | oh okay | 14:39 |
noonedeadpunk | based on https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/ceph.yml ceph_all is a valid thing | 14:40 |
spatel | i have 200 compute nodes, it will take a hell of a long time to upgrade openstack :( | 14:40 |
spatel | +1 thanks | 14:40 |
noonedeadpunk | but I think it might fail | 14:40 |
admin1 | ceph-ansible works well noonedeadpunk .. and you can import an existing ceph cluster into ceph-ansible, decouple it from osa, and then manage them independently | 14:41 |
spatel | noonedeadpunk do you guys upgrade openstack on every release in production? | 14:41 |
noonedeadpunk | admin1: yeah we manage it independently | 14:41 |
noonedeadpunk | spatel: limiting without ceph_all will most likely fail on ceph-client | 14:41 |
noonedeadpunk | in case you use ceph-mon to get ceph.conf | 14:42 |
noonedeadpunk | because you might not be able to run against them... but not 100% sure | 14:42 |
spatel | hmm, why would it fail? | 14:42 |
spatel | currently running setup-hosts across my cloud | 14:43 |
noonedeadpunk | I'm not sure whether this limit will prevent this from running https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/master/tasks/ceph_get_keyrings_from_mons.yml#L26 | 14:44 |
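For completeness, the limit spatel is planning looks like this; the single quotes stop the shell from history-expanding the '!', and as noonedeadpunk cautions, tasks delegated to the mons (such as the keyring fetch linked above) may still need those hosts reachable:

```bash
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-openstack.yml --limit '!ceph_all'
```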
spatel | i will run it on a single node to check before running it across the whole cloud | 14:45 |
noonedeadpunk | hm... parallel git clone hasn't failed for me.... | 15:01 |
noonedeadpunk | so, I bootstrapped 22.1.2, then checked out 23.1.1 and bootstrapped again; Clone git repos (parallel) did not fail.... | 15:13 |
spatel | hmmm | 15:25 |
noonedeadpunk | well, I can imagine things failing when the role tree is not clean.... | 15:37 |
noonedeadpunk | will check that... | 15:37 |
spatel | cool | 15:40 |
spatel | any idea how i can remove an ovs config option? i want to remove vhost-sock-dir but am not able to find the command | 15:40 |
spatel | ovs-vsctl clear Open_vSwitch . other_config:vhost-sock-dir | 15:40 |
spatel | clear didn't work | 15:41 |
spatel | this works - ovs-vsctl remove Open_vSwitch . other_config vhost-sock-dir vhost_sock | 15:43 |
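The distinction spatel ran into, spelled out: clear operates on a whole column and does not accept a column:key argument, while remove deletes entries from a map column. His working command is reproduced verbatim:

```bash
# What was tried first: clear takes a column, so the column:key form is
# rejected, and clearing the bare column would wipe every key.
# ovs-vsctl clear Open_vSwitch . other_config:vhost-sock-dir   # fails
# What worked: remove a single entry from the other_config map.
ovs-vsctl remove Open_vSwitch . other_config vhost-sock-dir vhost_sock
# Verify the key is gone.
ovs-vsctl get Open_vSwitch . other_config
```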
*** rpittau is now known as rpittau|afk | 16:02 | |
dmsimard | FYI, this is going to be my last reminder about it because I don't mean to spam but AnsibleFest and Ansible Contributor Summit are next week and there is still time to sign up -- it's free and virtual: https://www.ansible.com/ansiblefest && https://www.eventbrite.com/e/ansible-contributor-summit-202109-tickets-167664955395 | 18:39 |