gokhan | good morning noonedeadpunk | 07:44 |
---|---|---|
noonedeadpunk | o/ | 07:48 |
gokhan | we are deploying osa 29.0.2; for skyline it is doing a yarn rebuild on the repo hosts. it seems it needs to install yarn and its dependencies and build the package on the skyline container, because it can't find /openstack/src/skyline-console when installing the python venv packages | 07:48 |
gokhan | https://opendev.org/openstack/openstack-ansible-os_skyline/src/branch/master/defaults/main.yml#L138-L143 | 07:49 |
noonedeadpunk | so there's an option to build or not build yarn | 07:50 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-os_skyline/src/branch/master/defaults/main.yml#L38-L41 | 07:51 |
noonedeadpunk | so if you set `skyline_console_git_install_branch` to a specific tag rather than a SHA - it will just install from pypi | 07:51 |
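A minimal sketch of that override, assuming it goes in /etc/openstack_deploy/user_variables.yml; the tag value is hypothetical and should match the skyline-console release for your OpenStack series:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch)
# Pointing the install branch at a release tag instead of a SHA makes the
# role install skyline-console from PyPI instead of rebuilding it with yarn.
skyline_console_git_install_branch: "4.0.0"  # hypothetical tag - pick the release for your series
```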
jrosser | is there a bug there with the skyline install (i.e works in AIO but not multinode)? | 07:53 |
jrosser | gokhan: it's always good to link to a paste of the actual error you get | 07:53 |
noonedeadpunk | ++ | 07:54 |
noonedeadpunk | I've installed skyline on 29.0.0 only on multinode and it worked nicely | 07:54 |
jrosser | ah ok | 07:55 |
noonedeadpunk | but can imagine that sha bump could switch skyline from tag to sha | 07:55 |
jrosser | it just sounded like there was something being shared between the repo and skyline container through /openstack | 07:55 |
noonedeadpunk | not really: https://opendev.org/openstack/openstack-ansible/src/tag/29.0.2/inventory/group_vars/skyline_all/source_git.yml#L24 | 07:55 |
jrosser | which would not be good for multinode | 07:55 |
noonedeadpunk | oh, well, I didn't try that on metal | 07:56 |
noonedeadpunk | on multinode | 07:56 |
noonedeadpunk | but the thing is that it should not be building yarn when version is set to a tag | 07:57 |
gokhan | I'll rerun it and share the logs | 07:57 |
gokhan | jrosser, https://paste.openstack.org/show/bsN0MDnZysRQveCDyhPz/ | 08:06 |
gokhan | it seems we need to make /openstack/src/skyline-console on the repo containers shared | 08:06 |
gokhan | skyline_console_yarn_build is true | 08:16 |
kleini_ | regarding ANSIBLE_INJECT_FACT_VARS=True: I doubt this can be fixed, as "ansible_architecture" is used in the prometheus collection: https://github.com/prometheus-community/ansible/blob/main/roles/node_exporter/vars/main.yml#L9 | 08:26 |
gokhan | noonedeadpunk, jrosser for quick workaround I changed skyline_console_yarn_setup_host to inventory_host and it worked | 08:35 |
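A sketch of that workaround as a user_variables.yml override; the exact value gokhan used is not shown, but pointing the yarn setup host at the skyline host itself (e.g. via inventory_hostname) would avoid needing /openstack/src/skyline-console on the repo host:

```yaml
# /etc/openstack_deploy/user_variables.yml (sketch of the workaround described above)
# Build yarn on the skyline host itself rather than the repo host, so the
# skyline-console sources are present where the venv is built.
skyline_console_yarn_setup_host: "{{ inventory_hostname }}"
```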
noonedeadpunk | kleini: yeah, true... then you're right about ANSIBLE_INJECT_FACT_VARS :( | 08:39 |
noonedeadpunk | actually, if you're running a fresh enough osa - you can have an /etc/openstack_deploy/user.env file and set `export ANSIBLE_INJECT_FACT_VARS=True` there | 08:40 |
noonedeadpunk | to store all in-tree | 08:40 |
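For context on kleini's point: with ANSIBLE_INJECT_FACT_VARS=False only the ansible_facts dictionary is populated, so a role referencing the bare injected variable breaks. A hypothetical debug task illustrating the two spellings:

```yaml
# Hypothetical task - with fact injection disabled, only the ansible_facts
# dict form resolves; the bare ansible_architecture variable (as used in the
# prometheus node_exporter role) stays undefined.
- name: Compare fact references
  ansible.builtin.debug:
    msg:
      - "{{ ansible_facts['architecture'] }}"
      - "{{ ansible_architecture | default('undefined without fact injection') }}"
```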
kleini | okay, trying to make reviews for openstack-ansible-ops documentation | 08:40 |
noonedeadpunk | gokhan: oh, ok, that can be a valid bug for sure | 08:59 |
noonedeadpunk | gokhan: is it the same if you don't supply `-e skyline_console_yarn_build=yes`? | 09:08 |
noonedeadpunk | but yeah, I kinda see that logic does not add up in some cases | 09:09 |
noonedeadpunk | it really looks like I haven't thought through the case with repo_host and wheels build enabled | 09:10 |
noonedeadpunk | could be the reason why `skyline_console_yarn_build` is expected to be False by default | 09:11 |
noonedeadpunk | gokhan: but actually it's interesting, as in your case you do skip the wheels build, yet with that the code goes through the path taken when a wheels build is required | 09:15 |
noonedeadpunk | as you really fully skip wheels despite `-e venv_wheels_rebuild=yes` | 09:16 |
noonedeadpunk | I will try to check on that, but smth looks off in the deployment | 09:43 |
noonedeadpunk | as of course if no wheels are available but the role expects them to be present - it will fail | 09:43 |
gokhan | noonedeadpunk, the skyline git install branch is master and that makes skyline_console_yarn_build=yes, so we do not need to pass skyline_console_yarn_build=yes ourselves. I will make it get installed via the shared repo. | 11:14 |
noonedeadpunk | gokhan: but why is the wheels build actually skipped? | 11:34 |
noonedeadpunk | I also wonder how exact same flow does pass in our CI: https://zuul.opendev.org/t/openstack/build/637d3f28a5f5416eae1a44effc183e6c/log/job-output.txt#17641-17692 | 11:37 |
noonedeadpunk | though, we also do skip building wheels there | 11:37 |
noonedeadpunk | what potentially breaks the behaviour - is `-e venv_wheels_rebuild=yes` where there are actually no wheels | 11:38 |
gokhan | how did https://zuul.opendev.org/t/openstack/build/637d3f28a5f5416eae1a44effc183e6c/log/job-output.txt#17688 work? how is the skyline console buildable dir reached on the skyline container? it is on the repo container | 11:42 |
gokhan | noonedeadpunk, the wheels build is not skipped if we run with -e venv_wheels_rebuild=yes. I think it is working as expected | 11:54 |
gokhan | also it is very slow to write anything to /var/www/repo/. I am trying to make a buildable skyline console package under /var/www/repo/ | 11:55 |
noonedeadpunk | `Including python_venv_wheel_build tasks` is skipped in your paste | 11:58 |
noonedeadpunk | and paste was with `-e venv_wheels_rebuild=yes` | 11:59 |
noonedeadpunk | which means no wheels are built | 11:59 |
gokhan | oh yes you are right, it is skipped | 12:02 |
gokhani | why is the hypervisor name registered as computename.openstack.local? For masakari, if the hostname is not the same, host failure handling does not work | 15:32 |
gokhani | do we need any specific config on nova side? | 15:33 |
gokhani | in /etc/hosts file hosts are added like 10.x.x.x test-compute3.openstack.local test-compute3 | 15:34 |
noonedeadpunk | gokhani: I think you're using old masakari though | 16:11 |
noonedeadpunk | as I think I did patch for that couple of years ago | 16:11 |
noonedeadpunk | today masakari should just be using the compute service list name | 16:12 |
noonedeadpunk | but answering your question - hypervisor name is coming from libvirt | 16:12 |
noonedeadpunk | and compute service is coming from python's socket.gethostname() | 16:12 |
noonedeadpunk | so basically, what you'd need to do to align these two is to make `python3 -c "import socket; print(socket.gethostname())"` return the same as `python3 -c "import socket; print(socket.getfqdn())"` | 16:14 |
gokhani | thanks noonedeadpunk, I am trying on caracal; I made a test and host failure is not triggered. ok, the compute service list name is the same as what is in /etc/masakari/masakarimonitors.conf. I recalled wrongly that the hypervisor list name must be the same in the masakari config file | 16:19 |
noonedeadpunk | so I think that would boil down to how it's named in corosync cluster | 16:20 |
noonedeadpunk | and how you added it to masakari api | 16:20 |
noonedeadpunk | if you use the compute service list name everywhere - that should work | 16:20 |
gokhani | I will recheck but it is the same | 16:24 |
noonedeadpunk | So unless this was reverted - it should work... https://review.opendev.org/c/openstack/masakari/+/728629 | 16:27 |
noonedeadpunk | (and not caring about hypervisors) | 16:29 |
gokhani | noonedeadpunk: do we need to enable oslo messaging notifications on nova and masakari? | 17:10 |
gokhani | the masakari host monitor sends a notification https://paste.openstack.org/show/bEfQqYNv45dzGmlta4GN/ but the masakari engine ignores this notification | 17:12 |