*** cshen has joined #openstack-ansible | 00:02 | |
*** rmcall has quit IRC | 00:06 | |
*** cshen has quit IRC | 00:07 | |
*** markvoelker has joined #openstack-ansible | 00:19 | |
*** jbadiapa has quit IRC | 00:29 | |
*** gyee has quit IRC | 00:36 | |
*** cshen has joined #openstack-ansible | 02:03 | |
*** cshen has quit IRC | 02:08 | |
*** spatel has joined #openstack-ansible | 02:10 | |
*** markvoelker has quit IRC | 02:19 | |
*** rmcall has joined #openstack-ansible | 02:31 | |
*** spatel has quit IRC | 02:49 | |
*** spatel has joined #openstack-ansible | 03:22 | |
*** spatel has quit IRC | 03:22 | |
*** spatel has joined #openstack-ansible | 03:23 | |
*** spatel has quit IRC | 03:23 | |
*** spatel has joined #openstack-ansible | 03:30 | |
*** spatel has quit IRC | 03:30 | |
*** redrobot has quit IRC | 03:36 | |
*** cshen has joined #openstack-ansible | 04:04 | |
*** cshen has quit IRC | 04:09 | |
*** markvoelker has joined #openstack-ansible | 04:11 | |
*** markvoelker has quit IRC | 04:15 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-ansible | 04:33 | |
*** markvoelker has joined #openstack-ansible | 05:01 | |
*** cshen has joined #openstack-ansible | 05:02 | |
*** markvoelker has quit IRC | 05:05 | |
*** cshen has quit IRC | 05:07 | |
*** udesale has joined #openstack-ansible | 05:22 | |
*** udesale has quit IRC | 06:07 | |
*** cshen has joined #openstack-ansible | 06:15 | |
*** udesale has joined #openstack-ansible | 06:20 | |
jrosser | tow: can you give some more information? Which operating system are you using? | 06:38 |
janno | when adding designate to an existing cluster, which part would be responsible for adding the queues to rabbitmq? | 06:44 |
janno | we are currently seeing such errors: http://paste.openstack.org/show/795931/ | 06:46 |
CeeMac | morning | 06:48 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_ceilometer stable/ussuri: [New files - needs update] Update paste, policy and rootwrap configurations 2020-07-15 https://review.opendev.org/741096 | 06:51 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_ceilometer stable/ussuri: [New files - needs update] Update paste, policy and rootwrap configurations 2020-07-15 https://review.opendev.org/741098 | 06:54 |
*** this10nly has joined #openstack-ansible | 06:55 | |
jrosser | seems i have no idea how to bump the role sha using the release scripts :( | 06:56 |
CeeMac | jrosser: based on https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L670-L677 | 07:15 |
CeeMac | and in o_u_c the br-mgmt provnet uses group_binds: hosts | 07:16 |
CeeMac | but hosts isn't a member of 'physical_host_group' | 07:16 |
CeeMac | so it's falling back to the ansible_host IP? | 07:16 |
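For reference, the kind of openstack_user_config.yml provider network stanza being discussed looks roughly like the standard example below; the bridge and group names follow the shipped example config, not CeeMac's actual deployment:

    global_overrides:
      management_bridge: "br-mgmt"
      provider_networks:
        - network:
            container_bridge: "br-mgmt"
            container_type: "veth"
            container_interface: "eth1"
            ip_from_q: "container"
            type: "raw"
            group_binds:
              - all_containers
              - hosts
            is_container_address: true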
*** andrewbonney has joined #openstack-ansible | 07:17 | |
*** cshen has quit IRC | 07:18 | |
CeeMac | im also not sure about this comment https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L560-L561 | 07:18 |
CeeMac | which seems to imply only the first IP will be used? | 07:19 |
CeeMac | although i haven't found the logic for that in the code yet | 07:19 |
jrosser | isn't that more to do with making sure that the IP ansible uses to ssh to a container is the physical host | 07:20 |
jrosser | rather than the IP of the mgmt net inside the container | 07:21 |
jrosser | remember everything that ansible does to the containers is kind of proxied via the host | 07:21 |
jrosser | CeeMac: give us a minute and my colleague will be along who's also been investigating | 07:22 |
CeeMac | sure | 07:24 |
*** tosky has joined #openstack-ansible | 07:46 | |
*** udesale has quit IRC | 07:47 | |
*** udesale has joined #openstack-ansible | 07:47 | |
*** also_stingrayza is now known as stingrayza | 07:54 | |
jrosser | CeeMac: my colleague andrewbonney thinks he has a solution | 08:12 |
*** markvoelker has joined #openstack-ansible | 08:12 | |
andrewbonney | Hi. So I haven't fully traced how this worked before, but I've been tracing back and taking a look at the nova source | 08:13 |
andrewbonney | The nova docs suggest that 'live_migration_inbound_addr' can't be used if 'live_migration_tunnelled' is enabled (https://docs.openstack.org/nova/latest/configuration/config.html) | 08:14 |
andrewbonney | But whilst tunneled was enabled in https://github.com/openstack/openstack-ansible-os_nova/commit/12e09a3402cb810c53188a94ad1c820086d8e302 | 08:14 |
andrewbonney | The nova source suggested the inbound_addr config option may still get used: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L8894 | 08:15 |
andrewbonney | Having set that config on our live instance (after having fixed the Nova 'my_ip' config to use the correct network too) our migrations use the correct interface | 08:15 |
*** markvoelker has quit IRC | 08:17 | |
CeeMac | hi andrewbonney | 08:21 |
CeeMac | i'm not sure i'm following that last bit in driver.py, that still seems to suggest that tunnelling should be disabled? | 08:22 |
andrewbonney | I'm afraid I'm not familiar enough with that to say at the moment | 08:23 |
*** mmethot_ has joined #openstack-ansible | 08:24 | |
CeeMac | so in your environment you set live_migration_inbound_addr? did you take out live_migration_uri and live_migration_tunnelled settings? | 08:24 |
*** dmsimard has quit IRC | 08:25 | |
andrewbonney | No, I left those in. As far as I can tell (admittedly I haven't traced the full path) the inbound_addr is passed as metadata to the host which ultimately uses the live_migration_uri | 08:25 |
andrewbonney | This replaces the %s | 08:26 |
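Putting andrewbonney's reading together, the nova.conf options under discussion would look something like the sketch below; the addresses are placeholders and the exact live_migration_uri template rendered by OSA may differ:

    [libvirt]
    # template URI; nova substitutes the target host into the %s
    live_migration_uri = qemu+ssh://nova@%s/system
    # per the driver.py reading above, when set this is what ends up
    # substituted into the %s, even with tunnelling enabled
    live_migration_inbound_addr = 172.29.236.11
    live_migration_tunnelled = True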
*** dmsimard has joined #openstack-ansible | 08:26 | |
CeeMac | oh, interesting | 08:26 |
andrewbonney | It's a little confusing that they're all grouped together in the docs | 08:27 |
CeeMac | and you used the br-mgmt ip for inbound_addr and my_ip both? | 08:27 |
CeeMac | yes, the docs aren't clear in a lot of places | 08:27 |
jrosser | andrewbonney: do you think that the nova docs don't represent actually what the code does? | 08:28 |
andrewbonney | Yes, that's how I've got it set at present. It may be that you could just set inbound_addr to make live migration work, but we adjusted my_ip too as the git history suggested it was broken | 08:28 |
andrewbonney | jrosser: my impression is there is an error yes, but I'm willing to accept I could be missing a detail | 08:28 |
*** mmethot has quit IRC | 08:29 | |
CeeMac | seems sensible | 08:29 |
CeeMac | did you happen to look into how osa is selecting the container_address incorrectly in the dynamic inventory | 08:29 |
andrewbonney | Yeah, I traced that back to this commit: https://github.com/openstack/openstack-ansible/commit/4c04c688e70ff16ebed4ddcaf20e8e8d712a47b0#diff-6d3cb25b2133a32fce9452487bf728b2R24 | 08:32 |
andrewbonney | It looks like there may have been some confusion between 'management_bridge' and 'container_address' | 08:32 |
andrewbonney | A local patch to use 'container_address' in a couple of places in playbooks/common-playbooks/nova.yml seems to fix it, but I'd like to see if there's a cleaner way | 08:34 |
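A sketch of what such a local patch might look like; the exact variable plumbing in playbooks/common-playbooks/nova.yml is an assumption here based on the inventory keys discussed later in the conversation, not the merged fix:

    # playbooks/common-playbooks/nova.yml (hypothetical local change)
    vars:
      # prefer the management/container network address from the inventory,
      # falling back to ansible_host if it is not defined
      nova_management_address: "{{ container_networks['container_address']['address'] | default(ansible_host) }}"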
CeeMac | it also looks like management_address used to be in a provnet too | 08:34 |
andrewbonney | I'll put some patches together to share the end result for further comment. It starts to get confusing following all of the threads | 08:38 |
*** shyamb has joined #openstack-ansible | 08:41 | |
CeeMac | cool | 08:42 |
*** cshen has joined #openstack-ansible | 08:51 | |
*** cshen has quit IRC | 08:56 | |
CeeMac | sorry, was multi-tasking badly while on a call there | 08:57 |
CeeMac | I saw here https://opendev.org/openstack/openstack-ansible/src/branch/master/osa_toolkit/generate.py#L1141-L1145 | 08:58 |
*** aedc has joined #openstack-ansible | 08:58 | |
CeeMac | that there must have originally been a management ip_q and provnet which is probably where the "management_address" logic is derived from | 08:58 |
CeeMac | andrewbonney: dynamic-address-fact.yml looks like it was changed again after that commit, some time between queens and rocky | 09:04 |
CeeMac | not sure how best to track that | 09:04 |
CeeMac | but the logic in it now seems to be causing container_networks: container_address: address: to be populated by the ansible_host address, not the br-mgmt one | 09:05 |
CeeMac | by, or from, can't fathom which | 09:06 |
CeeMac | i'm still trying to get my head around what is happening in generate.py | 09:09 |
*** markvoelker has joined #openstack-ansible | 09:11 | |
*** markvoelker has quit IRC | 09:15 | |
*** shyam89 has joined #openstack-ansible | 09:17 | |
*** shyamb has quit IRC | 09:17 | |
*** jbadiapa has joined #openstack-ansible | 09:25 | |
*** dpaclt has joined #openstack-ansible | 09:57 | |
*** jbadiapa has quit IRC | 09:57 | |
dpaclt | Hi all good day ,I am unable to launch new vms getting error as http://paste.openstack.org/show/795937/ | 10:05 |
*** shyamb has joined #openstack-ansible | 10:05 | |
dpaclt | Can anyone suggest | 10:06 |
*** shyam89 has quit IRC | 10:07 | |
*** dpaclt has quit IRC | 10:09 | |
*** dpaclt has joined #openstack-ansible | 10:12 | |
*** jbadiapa has joined #openstack-ansible | 10:18 | |
*** jbadiapa has quit IRC | 10:19 | |
*** jbadiapa has joined #openstack-ansible | 10:19 | |
*** shyamb has quit IRC | 10:23 | |
*** sshnaidm is now known as sshnaidm|afk | 10:24 | |
*** shyamb has joined #openstack-ansible | 10:24 | |
*** shyamb has quit IRC | 10:27 | |
*** shyamb has joined #openstack-ansible | 10:27 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Add nova_management_address to defaults https://review.opendev.org/741146 | 10:29 |
*** cshen has joined #openstack-ansible | 10:52 | |
*** cshen has quit IRC | 10:57 | |
*** shyam89 has joined #openstack-ansible | 10:57 | |
*** shyamb has quit IRC | 11:00 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Use Nova management IP for live migrations https://review.opendev.org/741155 | 11:04 |
*** tosky has quit IRC | 11:05 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Use nova_management_address as a default VNC bind address https://review.opendev.org/741156 | 11:06 |
*** tosky has joined #openstack-ansible | 11:10 | |
*** udesale_ has joined #openstack-ansible | 11:31 | |
*** udesale has quit IRC | 11:34 | |
*** dpaclt has quit IRC | 11:43 | |
CeeMac | andrewbonney: so the nova management address will default to loopback unless overridden in user vars? | 11:44 |
*** dkopper has joined #openstack-ansible | 11:48 | |
*** dkopper has quit IRC | 11:50 | |
*** shyam89 has quit IRC | 12:04 | |
*** mgariepy has joined #openstack-ansible | 12:09 | |
andrewbonney | As far as I can see at present, nova_management_address only comes from playbooks/common-playbooks/nova.yml. The default addition is just intended as a cleanup, and I picked loopback to match up with a similar case for cinder | 12:14 |
andrewbonney | I'd be happy to take advice on a better default. These patches are just some prep to make the fix around the dynamic address fact a little easier | 12:14 |
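So the defaults addition being described is presumably along these lines; a sketch only, the exact expression in review 741146 may differ, and in practice the playbook or user vars override it:

    # openstack-ansible-os_nova defaults/main.yml (sketch of the proposed cleanup)
    nova_management_address: "127.0.0.1"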
*** vapjes has joined #openstack-ansible | 12:35 | |
CeeMac | Makes sense, just checking I understand correctly. I can't remember if there was a push recently to move away from binding loopback in favour of binding management address. But there is the outstanding issue of the management address being incorrect when using OOB IP for physical hosts. | 12:35 |
CeeMac | Your principles seem sound though, and having it overridable gives the user control if/where they need it | 12:36 |
andrewbonney | I wouldn't be surprised if I'm missing some history in some places as I'm quite new to the detail, so it may be that one or two things will need swapping to match current best practice | 12:37 |
CeeMac | I'm still trying to work my head around a lot of it too | 12:38 |
CeeMac | I believe jrosser had been doing some work on the loopback ip binding issue iirc | 12:38 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible master: Fix management address lookup for metal hosts in some deployments https://review.opendev.org/741167 | 12:39 |
*** sshnaidm|afk is now known as sshnaidm | 12:53 | |
*** cshen has joined #openstack-ansible | 12:53 | |
*** cshen has quit IRC | 12:58 | |
*** spatel has joined #openstack-ansible | 12:59 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_neutron master: Remove unused neutron_management_ip https://review.opendev.org/741177 | 13:02 |
*** dkopper has joined #openstack-ansible | 13:03 | |
admin0 | when doing a minor upgrade from 20.0.2 to 20.1.3, on setup infrastructure, i get this error message "The galera_cluster_name variable does not match what is set in mysql" .. | 13:04 |
admin0 | how is that even possible :D | 13:04 |
admin0 | it says To ignore the cluster state set '-e galera_ignore_cluster_state=true | 13:04 |
admin0 | how safe is it .. or how do I fix this up | 13:04 |
admin0 | bootstrap, setup-hosts showed no errors | 13:05 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible master: Conform cinder management address to pattern used for nova https://review.opendev.org/741180 | 13:05 |
*** dkopper has quit IRC | 13:06 | |
*** watersj has joined #openstack-ansible | 13:25 | |
watersj | for masakari hostmonitor, what parts are needed to get that working? Looks like it needs pacemaker(-remote?) and corosync but I have not found a document on the setup | 13:27 |
watersj | any pointers be great help | 13:27 |
jrosser | watersj: there are patches not yet merged to add corosync/pacemaker for this, so i'd say thats not available yet | 13:31 |
jrosser | CeeMac: my work on binding was to make sure the services all bound to the openstack management network IP rather than 0.0.0.0 | 13:32 |
jrosser | logan-: would be interested to know what you think of this https://review.opendev.org/#/c/741155/ | 13:34 |
watersj | jrosser, patches of masakari or to install/setup of corosync/pacemaker | 13:37 |
jrosser | watersj: all i know is whats in here https://review.opendev.org/#/c/739146/ :) | 13:38 |
watersj | jrosser, that confirms some of what I saw about corosync, ty | 13:39 |
jrosser | watersj: if you were in a position to test that and comment on the patch it would be great | 13:40 |
admin0 | error I am facing is here: https://gist.github.com/a1git/712b8ed02c64a82fa73690bdd70bf4d7 | 13:42 |
*** Guest14648 has joined #openstack-ansible | 13:43 | |
jrosser | admin0: you have errors related to haproxy before the galera error ? | 13:44 |
admin0 | jrosser, nova_libvirt_live_migration_inbound_addr .. isn't nova_libvirt_live_migration_inbound_interface better ? | 13:44 |
admin0 | as an operator, i would not know what address to put in that | 13:44 |
admin0 | jrosser, haproxy playbook runs fine .. and is working fine . | 13:45 |
*** Guest14648 is now known as redrobot | 13:45 | |
jrosser | admin0: the first thing in your paste is "RUNNING HANDLER [haproxy_endpoints : Set haproxy service state]" failed | 13:46 |
admin0 | rerunning haproxy playbook now to see if i can catch this | 13:47 |
admin0 | it ran fine .. the haproxy in itself runs fine .. but when i run the galera playbook/setup-infra, i think at one point it checks for the service in the container .. which is not up yet because mysql could not be installed | 13:48 |
admin0 | haproxy playbook recap is ok-40, changed =0 in all 3 controller it runs on | 13:49 |
admin0 | jrosser, https://asciinema.org/a/FI5KvhP2VTW4KOIfHg3ciTMCl --- recorded this | 13:52 |
jrosser | admin0: i can't really offer anything but debugging tips | 13:56 |
jrosser | the tasks are here https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/stable/train/tasks/galera_cluster_state.yml | 13:56 |
jrosser | and adding some -vv (or more) to the cli should print out the actual and expected cluster names | 13:57 |
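For example, something like the following on the deploy host; the playbook and group names assume the standard Train layout:

    cd /opt/openstack-ansible/playbooks
    openstack-ansible galera-install.yml --limit galera_all -vv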
admin0 | rerunning with -vvvv | 13:58 |
admin0 | https://asciinema.org/a/kFkWI1RPj6gIpHdsZQ6VST1kb | 14:00 |
admin0 | as far as i can understand, it's failing because it's unable to get the cluster name from the 'show variables like' output, but no mysql has been installed there yet | 14:00 |
admin0 | do you guys use anything to push terminal output to gist ? | 14:02 |
jrosser | admin0: sorry for the silly question but if its an upgrade from 20.0.2 to 20.1.3 why is mysql not already installed? | 14:06 |
admin0 | during the playbook run, it said container name mismatch, so without using -vvv or extra thinking, i just nuked the c1 galera container and recreated it | 14:07 |
*** this10nly has quit IRC | 14:07 | |
jrosser | i think there is some statefulness about if galera is installed or not | 14:11 |
jrosser | so that it doesn't get re-installed / restarted unless you absolutely need to | 14:12 |
jrosser | so this might now be more like a major upgrade where you need to pass -e 'galera_upgrade=true' | 14:12 |
jrosser | but i'm kind of guessing a bit | 14:12 |
admin0 | trying | 14:13 |
jrosser | admin0: look here it sets a fact on installation https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/stable/train/tasks/galera_install.yml#L22-L31 | 14:15 |
jrosser | not sure if that got deleted or not when the container was destroyed | 14:15 |
CeeMac | jrosser: ah, OK, I knew it was something to do with management address but couldn't remember the details | 14:16 |
admin0 | jrosser, https://asciinema.org/a/WQV8nSiPwDtjlAreaaxVdmuhr -- | 14:16 |
admin0 | looks like a haproxy galera ping-pong error in each run | 14:17 |
CeeMac | nova_libvirt_live_migration_inbound_addr is derived from the nova variable live_migration_inbound_addr so it makes sense in context | 14:17 |
CeeMac | admin0: ^ | 14:17 |
*** also_stingrayza has joined #openstack-ansible | 14:17 | |
admin0 | CeeMac, as an operator, in user_variables, i put stuff that is common .... so for a variable in defaults/main.yml like nova_libvirt_live_migration_inbound_addr: .. how would I know what to put there . as addr will be different for each host . if it was _cidr, would make sense | 14:19 |
admin0 | i am looking at it from an operator perspective who goes to the main.yml trying to figure out what to put where | 14:19 |
jrosser | admin0: we have lots of things like that in defaults which are different for each host | 14:20 |
spatel | jamesdenton: are you around ? | 14:20 |
jrosser | *potentially different | 14:20 |
*** stingrayza has quit IRC | 14:21 | |
CeeMac | admin0: the challenge is the same as if you were manually adding live_migration_inbound_addr variable to nova.conf | 14:21 |
CeeMac | that is the variable that nova uses, so to set it programmatically it is good practice to use the same naming | 14:21 |
admin0 | i mean if i have a br-transfer interface at 172.29.229.0/23, will that variable allow me to tell nova to use this network/interface for the vm transfers ? | 14:21 |
jrosser | admin0: and also remember that all the things in role defaults can also be overridden in /etc/openstack_deploy/group / host vars which are by definition host specific | 14:21 |
CeeMac | admin0: that is part of the challenge we're facing | 14:22 |
CeeMac | you would need the specific IP of the interface on the host whose nova.conf variable you were setting | 14:23 |
CeeMac | or an equivalent dynamic inventory value | 14:23 |
admin0 | will this use --p2p ? | 14:24 |
jrosser | it would be nova_libvirt_live_migration_inbound_addr: "{{ hostvars[inventory_hostname]['ansible_' + 'br-transfer']['ipv4']['address'] }}" | 14:24 |
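Written out as a user_variables entry it would look roughly like the sketch below. Note this is an assumption to illustrate the idea: Ansible sanitises interface names in facts, replacing '-' with '_', so the exact fact key for a br-transfer bridge is worth verifying on a host first, and the interface name itself is deployment-specific:

    # /etc/openstack_deploy/user_variables.yml (sketch)
    nova_libvirt_live_migration_inbound_addr: "{{ hostvars[inventory_hostname]['ansible_br_transfer']['ipv4']['address'] }}"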
CeeMac | admin0: as an osa operator I actually do spend a lot of time looking in defaults/main.yml to see what variables are set that I can override to do what I want/need :) | 14:24 |
admin0 | yep .. | 14:25 |
admin0 | i try not to play too much :D | 14:25 |
CeeMac | looking isnt playing :p | 14:25 |
admin0 | fair enough | 14:30 |
CeeMac | :D | 14:30 |
CeeMac | I understand your issue though | 14:30 |
admin0 | jrosser, i am destroying the container and rebuilding it again .. i failed to rm -rf /openstack/galera folder data | 14:30 |
admin0 | hopefully it will rejoin and fix itself this time | 14:30 |
CeeMac | wading through the docs for config variables for the various openstack projects can be a challenge | 14:31 |
admin0 | in my case, diff dept have their diff clusters with diff settings .. so keeping track of it all together is also a challenge | 14:32 |
jrosser | i don't think it's correct that every overridable variable has a single constant value | 14:34 |
jrosser | a lot of them are host dependent | 14:34 |
*** dave-mccowan has quit IRC | 14:43 | |
*** dave-mccowan has joined #openstack-ansible | 14:47 | |
*** mmethot_ is now known as mmethot | 14:48 | |
CeeMac | agreed | 14:51 |
*** cshen has joined #openstack-ansible | 14:54 | |
*** cshen has quit IRC | 14:59 | |
tow | jrosser: the base OS is an up-to-date CentOS 7 | 15:08 |
openstackgerrit | Georgina Shippey proposed openstack/openstack-ansible-ops master: Collect keystone apache federation files https://review.opendev.org/741236 | 15:09 |
tow | jrosser: it always fails with the placement, heat_api, aodh containers | 15:09 |
jrosser | tow: and you mentioned it was missing git and other packages? | 15:09 |
tow | jrosser: apparently just git, the setup-openstack playbook halts, then if we go into the container and yum install git, rerunning the playbook goes through | 15:11 |
jrosser | can you show me the log from where it stops? | 15:12 |
jrosser | tow: if you are able to paste something at paste.openstack.org to give some context it would be really helpful | 15:15 |
tow | jrosser: http://paste.openstack.org/show/795951/ | 15:18 |
jrosser | tow: there are a few odd things, i would expect the name of the venv to include the openstack-ansible release version number rather than be /openstack/venvs/placement-train | 15:22 |
jrosser | then it's using python2, which it really shouldn't be | 15:22 |
jrosser | and lastly, the clone of the placement repo with git on the placement container itself is not what i'd expect | 15:23 |
jrosser | normally the clone and build of the python wheels would happen on the repo server container | 15:23 |
jrosser | tow: what stage are you at with this, is it an early attempt in a lab environment, single node, multinode, ..... | 15:24 |
tow | this is a production deployment, multinode 3-node control plane, 16 compute nodes, connecting to external ceph | 15:25 |
tow | yes, it is very strange indeed, we've used OSA in the past, that's why we are at a loss | 15:26 |
jrosser | perhaps the first thing to check would be that you have the repo containers configured properly https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.prod.example#L93-L100 | 15:26 |
jrosser | OSA uses the presence of the ansible group for the repo containers to be the thing that decides if the wheels are built locally, or deferred to the repo server | 15:27 |
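That group is populated from the repo-infra_hosts stanza in openstack_user_config.yml, roughly as in the linked prod example; the host names and IPs below are placeholders:

    repo-infra_hosts:
      infra1:
        ip: 172.29.236.11
      infra2:
        ip: 172.29.236.12
      infra3:
        ip: 172.29.236.13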
tow | hmmm interesting | 15:31 |
jrosser | tow: it ultimately is decided here where the wheels are built https://github.com/openstack/ansible-role-python_venv_build/blob/897e97eb58cfc78cadf8e9e183ed0eb1297535b6/vars/main.yml#L65-L78 | 15:31 |
jrosser | it's a bit complicated but it finds an operating system and architecture match in the repo_all group | 15:31 |
tow | ok let us look into it, I'll let you know, but makes sense, it has to be related to the repo containers | 15:32 |
jrosser | and that then ends up being used here https://github.com/openstack/ansible-role-python_venv_build/blob/aabd3c07c29fd89a56cadc01ec3a2e3b682e5e14/tasks/python_venv_wheel_build.yml#L24 | 15:32 |
jrosser | i'm thinking that for some reason that's running against localhost | 15:32 |
jrosser | no not localhost, i mean ansible_host | 15:33 |
jrosser | i.e the target | 15:33 |
tow | jrosser: btw, is there any recommendation on using the stable branch vs the version number tag? e.g. stable/train or 20.1.3 | 15:38 |
jrosser | stable/train is always the tip of the stable branch | 15:39 |
jrosser | every two weeks or so a tag will get dropped on that branch to be a 'minor release' | 15:39 |
jrosser | and at that point the SHAs for all the underlying openstack services (cinder/nova) are moved to the head of whatever their stable branch is | 15:40 |
jrosser | and the SHAs for all the OSA ansible roles are moved forward to the head of their stable branches | 15:40 |
jrosser | what the tags are saying is that all of that in combination passed the CI tests | 15:40 |
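In practice the choice comes down to which ref is checked out in the openstack-ansible clone before bootstrapping; the path below assumes the usual /opt/openstack-ansible deployment layout:

    cd /opt/openstack-ansible
    git checkout 20.1.3         # pinned minor release: the CI-tested combination of SHAs
    # or
    git checkout stable/train   # tip of the stable branch; moves as new fixes land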
tow | ok, understood, looking into the repo containers as we speak | 15:46 |
*** mgariepy has quit IRC | 16:22 | |
*** vblando has joined #openstack-ansible | 16:22 | |
*** udesale_ has quit IRC | 16:38 | |
*** cshen has joined #openstack-ansible | 16:55 | |
*** cshen has quit IRC | 17:00 | |
*** mgariepy has joined #openstack-ansible | 17:09 | |
*** watersj has quit IRC | 17:11 | |
*** gyee has joined #openstack-ansible | 17:30 | |
logan- | jrosser: re https://review.opendev.org/#/c/741155/, i'm not sure. as of rocky, where my deployments are currently marooned, it seems to still use the %s replacement in live_migration_uri when tunneling is enabled. I'll have to look through the nova repo to see if that changes in later releases. they've been kicking around ideas for removing live_migration_uri for a long time so its certainly possible something changed. | 17:36 |
*** cshen has joined #openstack-ansible | 17:44 | |
*** andrewbonney has quit IRC | 17:57 | |
*** mloza has quit IRC | 18:04 | |
jrosser | logan-: there certainly seems to be a difference between the nova docs and the actual behaviour | 18:06 |
jrosser | in our deployment setting live_migration_inbound_addr to the IP of the interface we actually want seems to make the migration traffic go the right way | 18:07 |
jrosser | which seems contrary to the docs | 18:07 |
jrosser | this is whilst leaving live_migration_uri as it is currently | 18:08 |
*** cshen has quit IRC | 18:20 | |
*** tosky has quit IRC | 18:25 | |
*** tosky has joined #openstack-ansible | 18:25 | |
*** cshen has joined #openstack-ansible | 18:50 | |
*** cshen has quit IRC | 18:55 | |
*** arkan has joined #openstack-ansible | 18:58 | |
CeeMac | It is most definitely strange | 19:05 |
CeeMac | Out of interest was the variable introduced and outcome validated in isolation before my_ip etc was also updated? | 19:06 |
CeeMac | jrosser: ^ | 19:07 |
jrosser | no, I think we changed it all together | 19:08 |
*** cshen has joined #openstack-ansible | 19:12 | |
*** KeithMnemonic has joined #openstack-ansible | 19:12 | |
CeeMac | Is it worth retesting by maybe commenting that one variable out and validating the outcome? Interests of science and all that | 19:18 |
*** rmcall has quit IRC | 19:22 | |
*** rmcall has joined #openstack-ansible | 19:23 | |
*** rmcallis has joined #openstack-ansible | 19:25 | |
*** rmcall has quit IRC | 19:28 | |
*** rmcallis__ has joined #openstack-ansible | 19:28 | |
*** rmcallis has quit IRC | 19:31 | |
*** cshen has quit IRC | 19:36 | |
*** cshen has joined #openstack-ansible | 20:02 | |
*** cshen has quit IRC | 20:07 | |
*** cshen has joined #openstack-ansible | 20:52 | |
*** cshen has quit IRC | 20:56 | |
arkan | is this still active ? https://bugs.launchpad.net/openstack-ansible/+bug/1877421 | 21:07 |
openstack | Launchpad bug 1877421 in openstack-ansible "Cinder-volume is not able to recognize a ceph cluster on OpenStack Train." [Undecided,New] | 21:07 |
arkan | I'm getting this error from cinder volumes | 21:07 |
arkan | cinder.exception.ClusterNotFound: Cluster {'name': 'ceph@ceph'} could not be found. | 21:07 |
arkan | but I can do this from the container | 21:08 |
arkan | rbd -p cinder-volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls | 21:08 |
arkan | it's working, but the service is still throwing that error | 21:08 |
*** gyee has quit IRC | 21:10 | |
arkan | I think I solved the authorisation problem with magnum; the user stack_domain_admin had not been added to the heat domain with the admin role | 21:11 |
*** gyee has joined #openstack-ansible | 21:11 | |
arkan | I ran this: openstack role add --domain heat --user-domain heat --user stack_domain_admin admin | 21:11 |
*** rmcall has joined #openstack-ansible | 21:27 | |
*** rmcallis__ has quit IRC | 21:28 | |
*** rmcall has quit IRC | 21:30 | |
*** rmcall has joined #openstack-ansible | 21:31 | |
*** rmcall has quit IRC | 21:35 | |
*** rmcallis has joined #openstack-ansible | 21:36 | |
*** rmcallis has quit IRC | 21:42 | |
*** spatel has quit IRC | 22:15 | |
*** logan- has quit IRC | 22:17 | |
*** logan- has joined #openstack-ansible | 22:19 | |
*** spatel has joined #openstack-ansible | 22:29 | |
*** vapjes has quit IRC | 22:33 | |
*** spatel has quit IRC | 22:34 | |
*** tosky has quit IRC | 22:48 | |
*** watersj has joined #openstack-ansible | 22:50 | |
*** cshen has joined #openstack-ansible | 22:52 | |
*** cshen has quit IRC | 22:57 | |
*** markvoelker has joined #openstack-ansible | 23:11 | |
*** markvoelker has quit IRC | 23:15 |