*** akahat is now known as akahat|ruck | 04:37 | |
*** ysandeep|away is now known as ysandeep | 04:49 | |
anskiy | spatel: hello! When you would have a minute, could you, please, check https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829? :) | 07:25 |
noonedeadpunk | anskiy: Um, I tend to not agree here. Well, partially. As for "{{ venv_build_host != inventory_hostname }}", the important part is how venv_build_host is generated. Before Yoga it was set to groups['repo_all'][-1]. So for the first controller venv_wheel_build_enable was rendered as true. For the last one it was false, but it didn't matter, as wheels were already in place | 07:39 |
noonedeadpunk | and now it's the first host, so the condition is conflicting now | 07:39 |
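A minimal sketch of the conflict described above, assuming the variable names from the python_venv_build role (the default expressions here are paraphrased for illustration, not copied from the role):

```yaml
# Paraphrased python_venv_build defaults, for illustration only
venv_build_host: "{{ groups['repo_all'][0] }}"   # was groups['repo_all'][-1] before Yoga
venv_wheel_build_enable: "{{ venv_build_host != inventory_hostname }}"
# Pre-Yoga the build host was the *last* repo host: the condition rendered
# true on the first controller (wheels got built) and false on the last one,
# which was harmless since the wheels already existed by then. With the
# build host now being the *first* host, the condition renders false on
# exactly the host that is supposed to build the wheels.
```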
noonedeadpunk | I mean this one https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/main.yml#L68-L69 | 07:40 |
noonedeadpunk | and it has changed as well with logic of how we get this https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/vars/main.yml#L44-L57 | 07:41 |
noonedeadpunk | ah, we did not change that, ok, yes, you're right | 07:41 |
noonedeadpunk | Indeed, I've just changed the condition for when to build wheels | 07:42 |
noonedeadpunk | hm, so maybe there's an easier way to fix that issue... | 07:43 |
noonedeadpunk | so basically we can reverse ansible_play_hosts as well to fix that | 07:48 |
noonedeadpunk | not sure that makes sense though | 07:48 |
noonedeadpunk | as in the case of serial you do need the first host | 07:49 |
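A hedged sketch of the reversal idea floated above, illustrative only (this is not necessarily what the eventual fix looks like):

```yaml
# Pick the build host from the reversed play host list instead
venv_build_host: "{{ ansible_play_hosts | reverse | list | first }}"
# Caveat from the discussion above: with `serial`, ansible_play_hosts only
# contains the current batch, so you would still want the first host overall.
```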
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_placement master: Install git into placement containers https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/858258 | 07:55 |
noonedeadpunk | But yes, the line you've mentioned is incorrect indeed. Thanks! | 07:55 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Change default value for venv_wheel_build_enable https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/857897 | 07:59 |
anskiy | noonedeadpunk: I wasn't able to reproduce this issue without your patch... | 08:05 |
anskiy | (just to be clear) | 08:05 |
noonedeadpunk | It's reproducible only for metal deploys (without lxc containers at all) | 08:05 |
anskiy | yes, I've bootstrapped cluster with two control-plane nodes yesterday | 08:06 |
anskiy | and it was all is_metal (never tried LXC, honestly :) ) | 08:07 |
noonedeadpunk | ah, so you tried to reproduce with metal | 08:07 |
noonedeadpunk | huh | 08:07 |
noonedeadpunk | that is interesting | 08:07 |
anskiy | like I said, maybe gluster is faster than lsyncd was, and you can see the missing files from jamesdenton's trace on the node where the play looks for them | 08:08 |
anskiy | `"file not found: /var/www/repo/os-releases/25.1.0.dev68/ubuntu-20.04-x86_64/requirements/utility-25.1.0.dev68-constraints.txt"}` this one, for example | 08:09 |
opendevreview | Jiri Podivin proposed openstack/openstack-ansible-os_tempest master: DNM: Testing change for CIX https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/858260 | 08:19 |
*** ysandeep is now known as ysandeep|lunch | 08:22 | |
*** ysandeep|lunch is now known as ysandeep | 09:00 | |
anskiy | setting up neutron AZs with OVN and OSA is a little bit clunky :( | 09:05 |
noonedeadpunk | well, I'm dealing with an AZ deployment now and will hopefully push some docs soon | 09:10 |
noonedeadpunk | But in short - use custom env.d to define az groups | 09:10 |
noonedeadpunk | you can have az1-neutron_server and az2-neutron_server, which will both be children of the neutron_server group | 09:11 |
anskiy | you bind node to AZ via ovs-vsctl :) | 09:11 |
noonedeadpunk | and then you can set group_vars | 09:11 |
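A hedged sketch of the env.d idea above (the skeleton keys follow the usual OSA env.d layout, but the file name and group names are illustrative and untested):

```yaml
# /etc/openstack_deploy/env.d/neutron_az.yml
component_skel:
  az1-neutron_server:
    belongs_to:
      - neutron_server
  az2-neutron_server:
    belongs_to:
      - neutron_server
```

Per-AZ settings can then live in group_vars, e.g. /etc/openstack_deploy/group_vars/az1-neutron_server.yml.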
noonedeadpunk | but I don't know about OVN, as I'm working with OVS | 09:11 |
anskiy | with OVS you do this in neutron.conf | 09:12 |
noonedeadpunk | I bet for OVN we're just missing the definition of AZs | 09:12 |
noonedeadpunk | yup | 09:12 |
anskiy | with OVN you do it here: https://opendev.org/openstack/openstack-ansible-os_neutron/src/branch/master/tasks/providers/setup_ovs_ovn.yml#L25 | 09:12 |
anskiy | in ovn-cms-options | 09:13 |
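For reference, a minimal sketch of that chassis-level binding, expressed as an Ansible task (the ovn-cms-options syntax follows the upstream neutron OVN docs; the AZ name is illustrative):

```yaml
- name: Tag this chassis with a Neutron availability zone
  ansible.builtin.command: >-
    ovs-vsctl set open_vswitch .
    external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=az1"
```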
anskiy | with OVS, you define them via custom config overrides, right, so they would land in the appropriate sections of /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini? | 09:15 |
anskiy | as I don't see anything AZ-related in os_neutron at all... | 09:16 |
noonedeadpunk | I think it will also affect neutron.conf, but I'm not 100% sure | 09:17 |
noonedeadpunk | so yes, it's just a configuration setting | 09:17 |
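A hedged sketch of what such overrides might look like in user_variables.yml (the override variable names come from the os_neutron role, the section/option names from the neutron agent docs, and `az1` is illustrative):

```yaml
neutron_l3_agent_ini_overrides:
  agent:
    availability_zone: az1
neutron_dhcp_agent_ini_overrides:
  agent:
    availability_zone: az1
```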
opendevreview | Danila Balagansky proposed openstack/openstack-ansible-os_neutron master: WIP: neutron OVN plugin AZ support https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/858271 | 09:26 |
anskiy | I think I need something like this ^. Gonna need to test it a bit though | 09:27 |
noonedeadpunk | anskiy: I used these overrides for OVS: https://paste.openstack.org/show/bYsk27AB9DqT4iVdy2tE/ | 09:32 |
noonedeadpunk | but IIRC default_availability_zones is a tricky thing and there might be cases when you want to avoid defining it. | 09:32 |
noonedeadpunk | unless I'm mixing things up with nova and cinder | 09:33 |
noonedeadpunk | anskiy: I wonder if maybe for this use case it makes sense to add an availability-zone key to neutron_provider_networks... | 09:35 |
anskiy | this is a good idea, as right now I think I'd have to add the `neutron_ovn_controller` group to openstack_user_config and add `neutron_availability_zones` to `host_vars` there, or something like that | 09:40 |
anskiy | noonedeadpunk: I do believe that for nova you always have the "default" AZ `nova`, and for cinder there are these things in the variables: https://opendev.org/openstack/openstack-ansible-os_cinder/src/branch/master/defaults/main.yml#L53-L54 | 09:42 |
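For context, the linked os_cinder defaults are approximately the following (quoted from memory of that branch, so treat this as an assumption rather than an exact copy):

```yaml
cinder_storage_availability_zone: nova
cinder_default_availability_zone: "{{ cinder_storage_availability_zone }}"
```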
anskiy | the problem with adding it to neutron_provider_networks is that for OVN (and it looks like for OVS too) the AZ is a chassis/node-level option, not a network-level one. | 09:50 |
noonedeadpunk | anskiy: I do recall some trickiness when you want the nova scheduler to pick the AZ on its own, but you use BFV and default_az is set in cinder. | 10:20 |
noonedeadpunk | `cinder.cross_az_attach is False, default_schedule_zone is None, the server is created without an explicit zone but with pre-existing volume block device mappings. In that case the server will be created in the same zone as the volume(s) if the volume zone is not the same as default_availability_zone. ` | 10:21 |
noonedeadpunk | https://docs.openstack.org/nova/latest/admin/availability-zones.html#implications-for-moving-servers | 10:21 |
noonedeadpunk | anskiy: ok, yes, then let's add an AZ variable. But we would need to cover the OVS/LXB options as well in the same patch | 10:22 |
*** anbanerj is now known as frenzyfriday | 10:26 | |
*** ysandeep is now known as ysandeep|afk | 10:34 | |
anskiy | noonedeadpunk: AZs are per-service properties, except the time when they are not :( | 10:39 |
anskiy | noonedeadpunk: do you want `default_availability_zones` for neutron to be added too, or only the per-host one? | 10:43 |
noonedeadpunk | if we add a variable for default_availability_zones, I think it should be undefined (or empty) by default. | 10:44 |
noonedeadpunk | though I don't have any strict opinion on whether to add it or not - it's quite a simple override on one side, but we do have such a variable for nova and cinder on the other | 10:49 |
anskiy | this would probably break scheduling (creating networks) if, at the same time, `neutron_availability_zones` (which would go into the agents' configuration) were set to a different value. | 10:49 |
anskiy | except for the case when it is set to an empty value too, but I'm not sure that would be valid for all the drivers | 10:50 |
noonedeadpunk | oh yes, and it also requires the scheduler to be changed. So we can actually also add the variable as an undefined one (commented out in defaults for documentation purposes) and configure neutron conditionally on whether it's defined or not | 10:52 |
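A hedged sketch of that pattern (the variable name and template fragment are illustrative, not taken from the role):

```yaml
# defaults/main.yml -- documented, but deliberately left undefined:
# neutron_default_availability_zones: [az1, az2]

# neutron.conf.j2 -- render the option only when the deployer defined it:
# {% if neutron_default_availability_zones is defined %}
# default_availability_zones = {{ neutron_default_availability_zones | join(',') }}
# {% endif %}
```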
anskiy | okay, I've got your point, thank you. Will see if I can come up with something sane :) | 10:57 |
anskiy | hopefully this week | 10:57 |
anskiy | strange thing: the dhcp_agent (https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#agent) and l3_agent (https://docs.openstack.org/neutron/latest/configuration/l3-agent.html#agent) docs say that their respective `availability_zone` defaults to `nova`. | 11:20 |
anskiy | and `default_availability_zones` is empty, but the docs say that if the hints and the default zone are empty, AZs are still considered: https://docs.openstack.org/neutron/latest/configuration/neutron.html#DEFAULT.default_availability_zones | 11:21 |
anskiy | noonedeadpunk: looks like with an OVS setup you should see AZs in `openstack availability zone list --network`, even if nothing is configured yet | 11:21 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Include install_method variables for openrc https://review.opendev.org/c/openstack/openstack-ansible/+/858303 | 11:35 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump ansible-core version to 2.13.4 https://review.opendev.org/c/openstack/openstack-ansible/+/857506 | 11:35 |
*** ysandeep|afk is now known as ysandeep | 12:03 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/yoga: Fix dynamic-address-fact gathering with tags https://review.opendev.org/c/openstack/openstack-ansible/+/858329 | 12:04 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/wallaby: Fix dynamic-address-fact gathering with tags https://review.opendev.org/c/openstack/openstack-ansible/+/858330 | 12:05 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/xena: Fix dynamic-address-fact gathering with tags https://review.opendev.org/c/openstack/openstack-ansible/+/858331 | 12:05 |
opendevreview | Merged openstack/openstack-ansible master: Imported Translations from Zanata https://review.opendev.org/c/openstack/openstack-ansible/+/856910 | 12:31 |
jamesdenton | hello all | 12:39 |
damiandabrowski | hi! | 12:39 |
jamesdenton | *high five* | 12:39 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Add new line after proxyclient_address https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/858375 | 13:08 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Revert "Check the service status during bootstrap against the internal VIP" https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858335 | 13:26 |
*** prometheanfire is now known as Guest931 | 13:29 | |
anskiy | jamesdenton: hey! Do you have the repo on gluster when seeing this error: https://bugs.launchpad.net/openstack-ansible/+bug/1989506? | 13:32 |
jamesdenton | IIRC the gluster repo is configured, but wheels are skipped on the first host so there's nothing to sync | 13:33 |
jamesdenton | i believe i tested noonedeadpunk's patch successfully, but need to test to make sure it behaves on metal, lxc, single/multi, etc | 13:34 |
jamesdenton | looking into some keepalived issues at the moment, stuff be broken there, too | 13:35 |
anskiy | ahh, I see: I have `venv_wheel_build_enable: true` in my user_variables :) That's why I wasn't able to reproduce that thing... | 13:36 |
jamesdenton | that'll do it | 13:37 |
-opendevstatus- NOTICE: As of the weekend, Zuul only supports queue declarations at the project level; if expected jobs aren't running, see this announcement: https://lists.opendev.org/pipermail/service-announce/2022-September/000044.html | 13:37 | |
anskiy | that means I have to retest this patch with two-node metal control-plane | 13:38 |
noonedeadpunk | yeah, venv_wheel_build_enable: true is a simple thing to make this work | 13:38 |
noonedeadpunk | maybe we indeed should just hardcode it... | 13:38 |
anskiy | in W, when set to true, it would successfully run on the first node and fail on the second one; at least that's what my comment says | 13:41 |
anskiy | spatel: hello! | 13:46 |
spatel | anskiy hey | 13:57 |
anskiy | spatel: could you, please, check https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829 if you have some time? | 14:08 |
spatel | sure! | 14:09 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Bootstrap when running against last backend https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858385 | 14:09 |
jamesdenton | ^^ i happened to create a similar bug, it looks like: https://bugs.launchpad.net/openstack-ansible/+bug/1990008 | 14:10 |
noonedeadpunk | jamesdenton: ^ this should fix the issue you saw when bootstrapping keystone | 14:10 |
jamesdenton | good deal, thanks | 14:10 |
noonedeadpunk | aha | 14:10 |
noonedeadpunk | Ok, I missed it | 14:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_keystone master: Bootstrap when running against last backend https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/858385 | 14:11 |
jamesdenton | an unrelated question... have you seen any differences between using with_dict vs a loop w/ dict2items? | 14:12 |
noonedeadpunk | I've hardly used loop|dict2items so I can't say | 14:12 |
noonedeadpunk | eventually I find it more and more handy to use a list of mappings than just a dict. Or a simple list | 14:13 |
noonedeadpunk | what I'm not sure about with this patch is whether it will work nicely with an IDP setup | 14:15 |
noonedeadpunk | but andrewbonney seems not around :( | 14:15 |
jamesdenton | i guess the BBC is pretty busy right now | 14:16 |
noonedeadpunk | oh, well | 14:16 |
noonedeadpunk | true | 14:16 |
jamesdenton | So, this changed from with_dict to loop recently, and the conditional is no longer effective: https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L164-L165 | 14:17 |
jamesdenton | thus, our keepalived implementation is broken at the moment | 14:17 |
jamesdenton | and this was also implemented, https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L133, which causes failure since the tracking scripts haven't been dropped yet | 14:18 |
jamesdenton | i'm shuffling stuff around to get it working, but that loop w/ dict2items seems to behave differently | 14:18 |
spatel | anskiy agreed we just need openvswitch but not started/enable. | 14:18 |
noonedeadpunk | jamesdenton: huh, interesting | 14:20 |
jamesdenton | there is a typo here, too: https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L164. that should be keepalived_scripts | 14:20 |
jamesdenton | but the main issue is "item.value.src_check_script is defined" evaluates to false with the loop vs with_dict | 14:21 |
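Both forms should expose the same item.key/item.value shape, which is what makes the difference above surprising. A minimal repro sketch (the variable contents are illustrative):

```yaml
- hosts: localhost
  gather_facts: false
  vars:
    keepalived_scripts:
      haproxy_check:
        check_script: "kill -0 $(cat /var/run/haproxy.pid)"
  tasks:
    # with_dict: yields one item per dict entry, as item.key / item.value
    - debug:
        msg: "{{ item.key }}: {{ item.value.src_check_script is defined }}"
      with_dict: "{{ keepalived_scripts }}"
    # loop + dict2items: expected to yield the same key/value pairs
    - debug:
        msg: "{{ item.key }}: {{ item.value.src_check_script is defined }}"
      loop: "{{ keepalived_scripts | dict2items }}"
```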
anskiy | spatel: could you leave some kind of LGTM comment, if that's appropriate?.. I'm not sure if I'm breaking the review process with this move | 14:21 |
spatel | I did anskiy | 14:21 |
anskiy | thank you! | 14:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Do not start/enable Open vSwitch on ovn-northd nodes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/855829 | 14:23 |
noonedeadpunk | rebased just to trigger a recheck for the gates | 14:24 |
noonedeadpunk | as it seems the RDO repo for CentOS 9 was in a weird state during the previous one | 14:24 |
noonedeadpunk | I wonder though... Doesn't OVS get started/enabled on its own upon installation on Ubuntu/Debian? | 14:32 |
noonedeadpunk | As I bet when OVS is being upgraded it restarts on its own via a postinstall hook or something like that | 14:33 |
noonedeadpunk | anskiy: jamesdenton ^ | 14:33 |
noonedeadpunk | and spatel :) | 14:34 |
spatel | noonedeadpunk it does restart itself in Ubuntu during upgrade | 14:34 |
jamesdenton | yes, IIRC openvswitch would be started automatically during service install. In this case, it seems openvswitch-switch is no longer a dependency for ovn-northd so it's really not needed on those nodes, best i can tell | 14:34 |
spatel | jamesdenton still it requires the ovs binary on ovn-northd | 14:35 |
spatel | because OVN needs the OVS DB | 14:36 |
spatel | but you can keep service down | 14:36 |
jamesdenton | not according to this: https://packages.ubuntu.com/focal/ovn-central | 14:38 |
jamesdenton | vs https://packages.ubuntu.com/bionic/ovn-central | 14:39 |
jamesdenton | i went a few tiers deep and didn't see openvswitch-switch. maybe common, but not switch | 14:39 |
jamesdenton | but i did test that patch on a MNAIO and found that while openvswitch-switch was not deployed on the controllers, OVN functioned as expected | 14:39 |
jamesdenton | northd was there | 14:40 |
noonedeadpunk | ah, ok, true. That's spatel's comment that confused me then | 14:41 |
noonedeadpunk | with `agreed we just need openvswitch but not started/enable.` | 14:41 |
spatel | Yes, ovn-central has that dependency to install OVS components for the DB function only. | 14:42 |
anskiy | ovsdb-tool, which is needed on the control-plane nodes, is part of the openvswitch-common package | 14:43 |
anskiy | on which ovn-central depends | 14:44 |
spatel | I thought when you install openvswitch-common it would auto-install other OVS components too, like openvswitch-switch etc. | 14:44 |
anskiy | as jamesdenton said, some time ago it was like this on 18.04, but now it's: Depends: openssl, python3-six, python3:any, libc6 (>= 2.29), libcap-ng0 (>= 0.7.9), libssl1.1 (>= 1.1.0), libunbound8 (>= 1.8.0) | 14:47 |
jamesdenton | seems like switch relies on common, but not the other way around. | 14:47 |
anskiy | and I don't see `openvswitch-switch` being installed with this patch on freshly bootstrapped test environment | 14:48 |
spatel | great! | 14:49 |
spatel | we only need openvswitch-common, which contains the OVS DB tools - https://packages.ubuntu.com/focal/amd64/openvswitch-common/filelist | 14:49 |
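The takeaway, as a hedged sketch (the variable name here is illustrative, not necessarily what os_neutron actually uses):

```yaml
# Packages for OVN control-plane (northd) nodes
ovn_northd_distro_packages:
  - ovn-central   # pulls in openvswitch-common (ovsdb-tool) as a dependency,
                  # but not openvswitch-switch, so there is no vswitchd to disable
```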
*** ysandeep is now known as ysandeep|dinner | 14:58 | |
noonedeadpunk | yeah, that makes way more sense to me now) | 15:01 |
jamesdenton | noonedeadpunk I'll ping JP next time I see him, but here's what I've put together so far: https://github.com/evrardjp/ansible-keepalived/pull/240 | 15:44 |
noonedeadpunk | oh, I bet I told him one day that checking with `in` is better | 15:48 |
jamesdenton | :D | 15:49 |
noonedeadpunk | jamesdenton: there are more of the same issues then | 15:50 |
noonedeadpunk | or not | 15:50 |
jamesdenton | in that playbook? probably. | 15:50 |
noonedeadpunk | L177 | 15:50 |
noonedeadpunk | eventually also regarding notify script | 15:51 |
jamesdenton | oh yes, definitely. i wasn't using keepalived_sync_groups so i didn't want to jump the gun | 15:52 |
jamesdenton | i can fix them all | 15:52 |
noonedeadpunk | eventually there's quite a list | 15:53 |
jamesdenton | thx | 15:54 |
noonedeadpunk | makes sense to fix them at once | 15:54 |
*** ysandeep|dinner is now known as ysandeep | 16:02 | |
*** ysandeep is now known as ysandeep|out | 16:03 | |
jamesdenton | got 'em, i think. off to lunch with the kiddo | 16:13 |
*** admin16 is now known as admin1 | 21:41 |