sahid | o/ | 07:41 |
---|---|---|
elodilles | hi nova, note that tooz 4.0.0 dropped py38 support and upper-constraints was bumped on master to the new tooz version, thus all py38 jobs on master are now failing with not finding a proper version of tooz | 08:25 |
bauzas | elodilles: but we don't use tooz, right? | 08:26 |
frickler | note that this not only affects tox-py38 type jobs, but also all jobs that still run on focal | 08:26 |
frickler | but other services do, so all devstack jobs will be affected | 08:26 |
frickler | like tempest-integrated-compute-ubuntu-focal https://zuul.opendev.org/t/openstack/build/0af6ddcd55cf4b02ab996f9134d0c7f4 | 08:29 |
frickler | the latter is easy to mistake for a mirror or pypi issue | 08:30 |
elodilles | bauzas: with a quick glance i see that nova-tox-functional-py38 is failing with this issue. though i guess we shouldn't have this job on master anymore as py39 and py310 are the supported runtimes | 08:32 |
bauzas | elodilles: hmmm, lemme check | 08:33 |
bauzas | because when some people were asking whether we should *use* tooz, we said no before | 08:33 |
bauzas | oh damn https://github.com/openstack/nova/blob/72370a188c0755bc9c864b5a5e4a972077cb8dd6/nova/virt/ironic/driver.py | 08:34 |
bauzas | sorry, I was only thinking about the servicegroup drivers | 08:35 |
bauzas | so, we don't directly use tooz in Nova but yeah, we have it for the ironic driver | 08:36 |
* bauzas wonders why | 08:36 | |
bauzas | ha ok | 08:37 |
frickler | IMO don't focus too much on tooz, likely other libs will follow and drop py38 support soon | 08:38 |
bauzas | frickler: well, I'm afraid we depend on some library that we don't really need | 08:38 |
bauzas | frickler: at least the ironic driver could use tooz by its client | 08:39 |
frickler | oslo.db in its latest release is >= py3.9, too | 08:39 |
bauzas | frickler: elodilles: anyway, what's the plan then? forcing us to no longer support py3.8 because of some lib that 99% of Nova doesn't use? | 08:41 |
elodilles | bauzas: i'm about to propose a patch that removes the py38 job from .zuul.yaml :) | 08:43 |
bauzas | for master, that's OK https://governance.openstack.org/tc/reference/runtimes/2023.2.html | 08:45 |
bauzas | but I'm very sad of how we're forced to drop support for a python version due to some lib | 08:46 |
bauzas | a counter-proposal could be to not use tooz 4 yet | 08:47 |
bauzas | particularly when it comes to SLURP upgrades | 08:47 |
bauzas | https://governance.openstack.org/tc/reference/runtimes/2023.1.html was guaranteeing py3.8 | 08:47 |
bauzas | so operators upgrading to 2024.1 would have to raise their OS versions before the bump | 08:47 |
opendevreview | Elod Illes proposed openstack/nova master: Drop nova-tox-functional-py38 https://review.opendev.org/c/openstack/nova/+/881339 | 08:48 |
elodilles | yes, but master is for 2023.2 Bobcat already :) | 08:48 |
bauzas | I know | 08:48 |
bauzas | I'm saying that the requirements we have for 2023.1 can't be used when upgrading to 2024.1 now | 08:49 |
bauzas | or rather 2023.2 | 08:49 |
bauzas | I mean, I'm an operator wanting to upgrade from A to B | 08:50 |
bauzas | and ideally from A to C | 08:50 |
bauzas | if I check the common requirements between A and B, I know that I'll be able to use 22.04 but not py38 | 08:50 |
elodilles | i see, but still, the same issue would happen from C to E, as clearly we can't expect to keep py38 till end of time | 08:51 |
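The A-to-B window bauzas is describing can be sketched as a set intersection. A minimal sketch, assuming the illustrative runtime sets below (they paraphrase the governance pages linked in this discussion and are not authoritative data):

```python
# Sketch of the upgrade-window argument: during an A -> B upgrade, an
# operator can only stand on runtimes tested by *both* releases.
# Runtime sets are illustrative, per the governance pages cited above.
TESTED_RUNTIMES = {
    "2023.1": {"py3.8", "py3.9", "py3.10"},  # antelope still tests py38
    "2023.2": {"py3.9", "py3.10"},           # bobcat with py38 dropped
}

def upgrade_window(a: str, b: str) -> set:
    """Python runtimes usable while upgrading from release a to b."""
    return TESTED_RUNTIMES[a] & TESTED_RUNTIMES[b]

print(sorted(upgrade_window("2023.1", "2023.2")))  # py38 is not in the window
```

With these sets, py3.8 falls out of the 2023.1 to 2023.2 window, which is exactly why operators would need to move their host OS (and python) before, not during, the upgrade.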
elodilles | nevertheless i'm also not fond of dropping py38 support IF there is no compatibility issue | 08:52 |
bauzas | elodilles: sure, people have to upgrade anyway by some cadence | 08:52 |
bauzas | and we're not discussing any specific distro, like the one I'm paid for | 08:53 |
bauzas | I'm just saying that dropping a python version in a release when it's not strictly required, and which will prevent operators from seamlessly upgrading from A to B, is at least unfortunate | 08:53 |
bauzas | and I'd like us to give this a bit of thought before we merge this | 08:54 |
elodilles | bauzas: ack, unfortunately with the tooz release we already started that road :/ | 08:59 |
bauzas | I've seen it | 08:59 |
* bauzas rereads the SLURP TC resolution again | 09:00 | |
bauzas | elodilles: what python version is shipped with 22.04 Jellyfish ? | 09:01 |
bauzas | 3.10, sorry found it | 09:01 |
elodilles | to be honest i'd rather keep the old python versions in all deliverables as long as the code is compatible with them. but that is harder to follow and we might not notice when we add incompatible changes (unless we keep old jobs, or lower-constraints like jobs around... :S) | 09:02 |
bauzas | elodilles: found the TC resolution | 09:07 |
bauzas | elodilles: read this : | 09:07 |
opendevreview | Elod Illes proposed openstack/nova master: Drop nova-tox-functional-py38 https://review.opendev.org/c/openstack/nova/+/881339 | 09:07 |
bauzas | Testing: Just as we test and guarantee that upgrades are supported between adjacent releases today, we will also test and guarantee that upgrades between two “SLURP” releases are supported. Upgrades are tested for most projects today with grenade. A skip-level job will be maintained in the grenade repository that tests a normal configuration between the last two “SLURP” releases. The job will be updated on every new “ | 09:07 |
bauzas | SLURP” release, and there will always be a regular single-release grenade job testing between the previous release and current one, as we have today. | 09:07 |
bauzas | it says nothing for the dependencies | 09:08 |
elodilles | yes, no word about runtimes | 09:08 |
kashyap | What are "SLURP" releases, again? | 09:11 |
bauzas | kashyap: this is the new upstream release cadence where you can skip some release https://governance.openstack.org/tc/resolutions/20220210-release-cadence-adjustment.html | 09:13 |
elodilles | bauzas: note that this failure can be now in other repositories as well, it is likely that other projects will start dropping their py38 support / job (neutron for example merged a similar patch already) | 09:27 |
bauzas | elodilles: yeah, looks like the boat has sailed, but at least for nova, I'd prefer us to take a bit of time for discussing it | 09:29 |
bauzas | elodilles: at least because of computes :) | 09:29 |
kashyap | bauzas: Thanks for the link! | 09:36 |
sean-k-mooney | bauzas: elodilles: I'm +2 on the removal of python 3.8 testing because we should not be testing with 20.04 in the antelope to bobcat grenade job | 10:22 |
sean-k-mooney | you are required to upgrade your host os before you can upgrade from 2023.1 -> 2023.2 | 10:23 |
sean-k-mooney | the upgrade order is zed -> 2023.1, ubuntu 20.04 -> ubuntu 22.04, 2023.1 -> 2023.2 | 10:24 |
sean-k-mooney | bauzas: so not only should we expect that they have already upgraded their python, we should require it | 10:25 |
sean-k-mooney | we support one release for 2 versions of the base os. that was antelope for the ubuntu 20.04 -> 22.04 transition | 10:26 |
sean-k-mooney | bobcat does not need to support python 3.8 computes even with the slurp cadence, nor does C | 10:27 |
sean-k-mooney | before you can do the skip-level upgrade you will need to upgrade the host os | 10:27 |
sean-k-mooney | actually i have some other comments on https://review.opendev.org/c/openstack/nova/+/881339 so dropping down to -1 as that patch does not remove the 3.8 support in the setup.cfg and tox | 10:32 |
sean-k-mooney | we should not drop any test coverage until all jobs are using at least python 3.9, including grenade | 10:33 |
elodilles | sean-k-mooney: thanks, yes, that is what i understood as well. about the patch: it's more about dropping the py38 based job, not about 'dropping py38 support of nova', but yes, i can propose a patch (on top of the original patch) that removes py38 from setup.cfg | 11:10 |
sean-k-mooney | if we claim we support it, it should be tested | 11:11 |
sean-k-mooney | so to me dropping the jobs and dropping the support are tied | 11:11 |
sean-k-mooney | it could be two patches but i would prefer to merge them together in that case | 11:12 |
sean-k-mooney | do we currently have a gate blocker due to tooz? | 11:12 |
sean-k-mooney | or do we have time to do this properly | 11:13 |
elodilles | nova-tox-functional-py38 is blocking the gate | 11:18 |
elodilles | and it seems tempest-integrated-compute-ubuntu-focal and nova-ceph-multistore are using py38 as well | 11:19 |
elodilles | hmm | 11:19 |
sean-k-mooney | we can correct the tempest jobs by defining the python version in devstack to be 3.9 or 3.10 | 11:20 |
sean-k-mooney | both are available on 20.04 | 11:20 |
elodilles | ack, i'll add that to the patch | 11:21 |
sean-k-mooney | i'm sure you know this but just set PYTHON3_VERSION=3.9 in the job def | 11:22 |
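A minimal sketch of that workaround, assuming a zuul job that passes devstack variables through `devstack_localrc` (the job name and parent below are illustrative; `PYTHON3_VERSION` is the devstack knob being discussed):

```yaml
# Hypothetical zuul job fragment: force devstack on focal to use py3.9
# instead of the distro-default 3.8. Job name and parent are illustrative.
- job:
    name: nova-ceph-multistore
    parent: devstack-plugin-ceph-tempest-py3
    vars:
      devstack_localrc:
        PYTHON3_VERSION: "3.9"
```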
elodilles | sean-k-mooney: ack, thanks! | 11:23 |
sean-k-mooney | tempest-integrated-compute-ubuntu-focal should be deleted this release | 11:23 |
sean-k-mooney | and nova-ceph-multinode should move to 22.04 eventually | 11:23 |
sean-k-mooney | so this is just temporary | 11:23 |
sean-k-mooney | tempest-integrated-compute-ubuntu-focal is needed for stable/antelope but not bobcat | 11:24 |
sean-k-mooney | if you want to just remove that feel free, but nova-ceph-multistore needs to be moved in a separate patch, so setting the python version is the correct workaround for now | 11:25 |
opendevreview | Elod Illes proposed openstack/nova master: Drop py38 based zuul jobs https://review.opendev.org/c/openstack/nova/+/881339 | 11:34 |
elodilles | sean-k-mooney: ^^^ | 11:34 |
elodilles | ceph job change from focal to jammy would be better in a separate patch i think | 11:35 |
opendevreview | Elod Illes proposed openstack/nova master: Drop py38 support from setup.cfg and tox.ini https://review.opendev.org/c/openstack/nova/+/881365 | 11:40 |
elodilles | sean-k-mooney: and this is the py38 support drop patch ^^^ | 11:41 |
sean-k-mooney | +1 on both, i have a minor comment on the second patch, i'll take a look again once they run through ci | 11:45 |
opendevreview | Elod Illes proposed openstack/placement master: Drop py38 based jobs and add py310 instead https://review.opendev.org/c/openstack/placement/+/881366 | 11:51 |
elodilles | sean-k-mooney: thanks, then i'll wait until zuul results appear | 11:53 |
*** iurygregory_ is now known as iurygregory | 11:55 | |
opendevreview | ribaudr proposed openstack/nova master: Attach Manila shares via virtiofs (db) https://review.opendev.org/c/openstack/nova/+/831193 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Attach Manila shares via virtiofs (objects) https://review.opendev.org/c/openstack/nova/+/839401 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Attach Manila shares via virtiofs (manila abstraction) https://review.opendev.org/c/openstack/nova/+/831194 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Attach Manila shares via virtiofs (drivers and compute manager part) https://review.opendev.org/c/openstack/nova/+/833090 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Attach Manila shares via virtiofs (api) https://review.opendev.org/c/openstack/nova/+/836830 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Check shares support https://review.opendev.org/c/openstack/nova/+/850499 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add metadata for shares https://review.opendev.org/c/openstack/nova/+/850500 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add instance.share_attach notification https://review.opendev.org/c/openstack/nova/+/850501 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add instance.share_detach notification https://review.opendev.org/c/openstack/nova/+/851028 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add shares to InstancePayload https://review.opendev.org/c/openstack/nova/+/851029 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add helper methods to attach/detach shares https://review.opendev.org/c/openstack/nova/+/852085 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add libvirt test to ensure metadata are working. https://review.opendev.org/c/openstack/nova/+/852086 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add virt/libvirt error test cases https://review.opendev.org/c/openstack/nova/+/852087 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add share_info parameter to reboot method for each driver (driver part) https://review.opendev.org/c/openstack/nova/+/854823 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Support rebooting an instance with shares (compute and API part) https://review.opendev.org/c/openstack/nova/+/854824 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add instance.share_attach_error notification https://review.opendev.org/c/openstack/nova/+/860282 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add instance.share_detach_error notification https://review.opendev.org/c/openstack/nova/+/860283 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add share_info parameter to resume method for each driver (driver part) https://review.opendev.org/c/openstack/nova/+/860284 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Support resuming an instance with shares (compute and API part) https://review.opendev.org/c/openstack/nova/+/860285 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Add helper methods to rescue/unrescue shares https://review.opendev.org/c/openstack/nova/+/860286 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Support rescuing an instance with shares (driver part) https://review.opendev.org/c/openstack/nova/+/860287 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Support rescuing an instance with shares (compute and API part) https://review.opendev.org/c/openstack/nova/+/860288 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Docs about Manila shares API usage https://review.opendev.org/c/openstack/nova/+/871642 | 12:32 |
opendevreview | ribaudr proposed openstack/nova master: Mounting the shares as part of the initialization process https://review.opendev.org/c/openstack/nova/+/880075 | 12:32 |
ralonsoh | sean-k-mooney, about PYTHON3_VERSION=3.9, I've tested that in Neutron on focal and it doesn't work | 12:42 |
ralonsoh | even with https://review.opendev.org/c/openstack/devstack/+/881363 | 12:42 |
ralonsoh | about https://review.opendev.org/c/openstack/nova/+/868419, is it possible to merge this patch this week? that will unblock the Neutron CI | 12:43 |
ralonsoh | tested in https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/879036 | 12:43 |
sean-k-mooney | ralonsoh: well that's a problem since it was meant to be supported | 12:48 |
ralonsoh | 3.9 in focal? yes, we can install it | 12:49 |
ralonsoh | but uwsgi cannot | 12:49 |
sean-k-mooney | no, i mean we had support for this in the past | 12:49 |
sean-k-mooney | so it's obviously regressed in devstack | 12:49 |
ralonsoh | I'll check that | 12:50 |
sean-k-mooney | it probably regressed when the py2 support was removed | 12:50 |
sean-k-mooney | that's just a guess | 12:50 |
ralonsoh | from https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/881354 | 12:50 |
sean-k-mooney | but devstack is meant to support non-default python versions | 12:50 |
ralonsoh | https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_a41/881354/2/check/neutron-tempest-plugin-openvswitch/a419d15/controller/logs/screen-keystone.txt | 12:51 |
ralonsoh | yes, devstack can | 12:51 |
ralonsoh | but it seems that focal uwsgi cannot | 12:51 |
bauzas | sean-k-mooney: ralonsoh: sorry, saw your pings, catching up | 12:51 |
sean-k-mooney | ralonsoh: we install uwsgi from pypi, don't we? | 12:51 |
ralonsoh | let me check, one sec | 12:52 |
bauzas | sean-k-mooney: my main concern is that we are removing a python version support during a SLURP cadence | 12:52 |
sean-k-mooney | ralonsoh: maybe we don't, but we might need to extend the devstack fixup script to work around it | 12:52 |
ralonsoh | sean-k-mooney, no, apt install | 12:52 |
bauzas | this means that operators need to prepare their computes *before* the SLURP upgrade | 12:52 |
sean-k-mooney | bauzas: yes, and that's perfectly fine to do | 12:52 |
sean-k-mooney | bauzas: yes that is intentional | 12:52 |
bauzas | by upgrading to 22.10 and py310 | 12:53 |
sean-k-mooney | no | 12:53 |
sean-k-mooney | by upgrading to 22.04 | 12:53 |
bauzas | whoops | 12:53 |
bauzas | 22.04 my bad | 12:53 |
sean-k-mooney | 22.04 uses py3.10 | 12:53 |
* bauzas mixed 22.04 and py310 | 12:53 | |
sean-k-mooney | yeah, antelope is the release that supports both 22.04 and 20.04 | 12:53 |
bauzas | sean-k-mooney: my point is that I want to make sure that we can support the same python version and OS between SLURPs | 12:54 |
sean-k-mooney | ralonsoh: by the way, we should not have any focal jobs on master right now | 12:54 |
sean-k-mooney | bauzas: we can, py3.10 | 12:54 |
bauzas | at a SLURP release, operators can then upgrade | 12:54 |
ralonsoh | sean-k-mooney, that's right | 12:54 |
bauzas | sean-k-mooney: just checking yoga | 12:54 |
ralonsoh | but we still have the problem with nested virt | 12:54 |
sean-k-mooney | bauzas: yoga to antelope uses 20.04 and py 3.8 | 12:55 |
bauzas | https://governance.openstack.org/tc/reference/runtimes/yoga.html | 12:55 |
bauzas | ok, then I'm OK | 12:55 |
sean-k-mooney | then before you can SLURP to bobcat you have to do the 20.04 to 22.04 upgrade | 12:55 |
bauzas | operators can upgrade their OS to 20.04 and py3.8 during yoga | 12:55 |
bauzas | then they can skip-level upgrade to Antelope | 12:56 |
sean-k-mooney | yes, and then upgrade their os to 22.04 | 12:56 |
bauzas | then, before skip-level upgrading again to C, they need to upgrade their computes to 22.04 and py310 | 12:56 |
sean-k-mooney | then slurp to 2024.1 | 12:56 |
bauzas | cool, then I accept the plan | 12:56 |
bauzas | sean-k-mooney: ok, then I'll clarify the context | 12:56 |
bauzas | sean-k-mooney: fwiw, I'm horrified we're pulling tooz as a dep for nova | 12:57 |
bauzas | just because of the ironic virt driver | 12:57 |
sean-k-mooney | it's just for ironic and it's going away | 12:57 |
sean-k-mooney | likely in 2024.2, assuming we get the shard support done in B or C | 12:57 |
sean-k-mooney | our aim should be to deprecate the peer list in B/C so it can go out in C and we can drop it in D | 12:58 |
sean-k-mooney | ralonsoh: what is the problem with nested virt by the way | 12:58 |
ralonsoh | https://bugs.launchpad.net/neutron/+bug/1999249 | 12:58 |
sean-k-mooney | ralonsoh: you are aware that we are not meant to have any voting jobs that use it | 12:59 |
ralonsoh | timeouts in the jobs | 12:59 |
ralonsoh | so how we implement neutron-tempest-plugins jobs? | 12:59 |
sean-k-mooney | ralonsoh: they should be using QEMU, not kvm | 13:00 |
sean-k-mooney | like all the nova jobs | 13:00 |
sean-k-mooney | ralonsoh: infra provides the ability to run jobs with nested virt but we should not have any voting jobs in any project that depend on them | 13:00 |
sean-k-mooney | ralonsoh: can you link to the job so i can confirm it's actually using nested virt | 13:01 |
ralonsoh | sean-k-mooney, one sec | 13:01 |
sean-k-mooney | https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/857031/8/zuul.d/base-nested-switch.yaml#32 | 13:01 |
sean-k-mooney | so ya they are | 13:01 |
ralonsoh | https://github.com/openstack/neutron-tempest-plugin/blob/master/zuul.d/base-nested-switch.yaml | 13:01 |
ralonsoh | exactly | 13:01 |
sean-k-mooney | i'm kind of surprised that you have those, since we were asked not to use the nested virt labels in any voting job | 13:02 |
sean-k-mooney | ralonsoh: what is the reason you are using nested virt there | 13:02 |
ralonsoh | sean-k-mooney, I can't reply to this, tbh | 13:02 |
sean-k-mooney | what kvm-only functionality is neutron testing that would justify that | 13:02 |
ralonsoh | yk^^ | 13:03 |
ralonsoh | one sec | 13:03 |
sean-k-mooney | to be clear, we have some kvm-only functionality in nova we do not test since we did not have enough cloud providers to be comfortable with having a voting job | 13:03 |
ralonsoh | ykarel, hi! | 13:03 |
ykarel | hi ralonsoh | 13:03 |
ralonsoh | let me ask you the same question | 13:03 |
ralonsoh | what is the reason you are using nested virt there | 13:04 |
ralonsoh | in n-t-p | 13:04 |
ykarel | ralonsoh, context in https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/821067 | 13:05 |
ykarel | linked etherpad have more details | 13:05 |
ralonsoh | ykarel, then I think, for now, we should move to Jammy and non nested | 13:07 |
sean-k-mooney | ykarel: you realise that we are not meant to have any voting jobs that use nested virt since we don't have enough ci providers for it. we were asked in the past not to make any voting jobs unless we had 3 providers for it | 13:07 |
sean-k-mooney | it looks like we now have 2 clouds from vexxhost and one from ovh | 13:08 |
sean-k-mooney | is this why this was made to use nested virt | 13:08 |
ykarel | sean-k-mooney, when we moved i recall there were 3 providers, need to check current status | 13:08 |
sean-k-mooney | i am not aware of any announcement that it was now ok to do this | 13:08 |
sean-k-mooney | ykarel: so nested virt should only be used if a job needs it | 13:08 |
sean-k-mooney | what in this tempest plugin needs nested virt | 13:09 |
sean-k-mooney | we were previously asked not to enable it just to make the jobs faster, to conserve the capacity | 13:09 |
ralonsoh | no one in particular, that was to improve the current stability of our jobs | 13:10 |
ykarel | sean-k-mooney, yes right, nothing in those tests needs nested virt, the switch was done just for better perf/times in jobs, and that worked great for us for almost a year before the switch to jammy | 13:10 |
sean-k-mooney | ykarel: right, so perf/job times specifically was not a valid reason to use it in the past | 13:10 |
ralonsoh | ok, let's revert that for now and move to jammy | 13:11 |
ralonsoh | I'll push a DNM patch to test that | 13:11 |
lajoskatona | +1 let's have fresh results, and base any forward steps on that | 13:11 |
ykarel | sean-k-mooney, yes right, i discussed this with infra some months back and was told it should be avoided just for perf, but if the team knows the issues with using nested virt (no support, fewer providers etc) it can be used, as those nodes have resources available to use | 13:14 |
sean-k-mooney | ykarel: it used to be stated in the docs somewhere too | 13:15 |
ykarel | and at that time we switched to focal until the infra issue gets fixed (it requires infra compute nodes to be upgraded) | 13:15 |
sean-k-mooney | ykarel: my concern is we have things that at least previously could only be tested with nested virt in nova | 13:15 |
sean-k-mooney | and it was previously not considered ok to use the nested virt labels to do that | 13:16 |
ykarel | ralonsoh, lajoskatona due to the qemu/libvirt versions a plain revert will not work, as we run many concurrent guest vms (6+) and with qemu-5.0.0 that uses too much memory | 13:17 |
ykarel | 1gb per guest vm extra by default | 13:18 |
sean-k-mooney | because of the tb cache issue | 13:18 |
sean-k-mooney | when not using nested | 13:18 |
ykarel | libvirt-8.0.0 supports customizing it, and for that there's the DNM patch https://review.opendev.org/c/openstack/nova/+/868419 | 13:18 |
ykarel | will update it as per sean-k-mooney comments | 13:19 |
sean-k-mooney | this should be a specless blueprint | 13:19 |
ykarel | but even with that we need to adjust (split into multiple) our jobs | 13:19 |
sean-k-mooney | can we get this on the team meeting agenda for tomorrow | 13:19 |
sean-k-mooney | bauzas: ^ | 13:19 |
sean-k-mooney | ykarel: i think we should start a mail thread on using nested virt in voting jobs | 13:20 |
sean-k-mooney | https://lists.openstack.org/pipermail/openstack-discuss/2019-April/004681.html was the last time we discussed it on the list i think | 13:22 |
bauzas | sean-k-mooney: I'm a bit diverted by some meeting now, but agenda is up for everyone : https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting | 13:26 |
sean-k-mooney | bauzas: so tooz should be blocked by u-c, right | 13:27 |
sean-k-mooney | i guess we don't have any focal based jobs that blocked the update | 13:28 |
sean-k-mooney | so in the short term i think we are going to have to downgrade the tooz version in u-c until we can move the ceph jobs off focal or resolve the uwsgi issue | 13:28 |
dansmith | bauzas: any ideas on who else we could get to review this and the following patch? https://review.opendev.org/c/openstack/nova/+/880632 | 14:21 |
dansmith | I'm sort of waiting to rebase my actual compute ids stuff on that | 14:22 |
bauzas | dansmith: good question, let's say gibi or sean-k-mooney | 14:22 |
ykarel | sean-k-mooney, ack, will add it to the meeting agenda. and wrt the mail thread, sure, could be done, but as discussed in the past voting should be avoided unless and until nested-virt is really needed, as its support is best effort and there are only a few providers compared to others; but maybe it can help in making the messaging clearer to everyone | 14:22 |
dansmith | bauzas: ack, I just know they're both very busy with other things.. I added sean-k-mooney a while back | 14:22 |
frickler | sean-k-mooney: there could also be a specific py38 pin in u-c for tooz, like we had for py2.7 for some time. you may want to discuss with the reqs team (or maybe TC) | 14:45 |
frickler | also reminder that tooz isn't the only lib affected, oslo.db has the same situation except that u-c adoption is still blocked by other things stephenfin is having fun with | 14:47 |
opendevreview | Jorge San Emeterio proposed openstack/nova master: Have host look for CPU controller of cgroupsv2 location. https://review.opendev.org/c/openstack/nova/+/873127 | 15:07 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Revert "Temporary skip some volume detach test in nova-lvm job" https://review.opendev.org/c/openstack/nova/+/881389 | 15:23 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Revert "Temporary skip some volume detach test in nova-lvm job" https://review.opendev.org/c/openstack/nova/+/881389 | 15:43 |
*** dasm is now known as Guest12046 | 15:52 | |
bauzas | sean-k-mooney: dansmith: lemme get it right, so https://zuul.opendev.org/t/openstack/build/5f269ed5244d47be800dab988253c72c is failing because we bumped py to 3.9 and apache httpd complains ? https://review.opendev.org/c/openstack/nova/+/881339/3/.zuul.yaml#586 | 16:07 |
sean-k-mooney | yes | 16:07 |
sean-k-mooney | if we move tooz to an extra package | 16:08 |
sean-k-mooney | we can drop the bump to 3.9 | 16:08 |
sean-k-mooney | and that job should work since it does not have ironic | 16:08 |
sean-k-mooney | or barbican | 16:08 |
sean-k-mooney | so castellan and tooz won't be installed | 16:08 |
sean-k-mooney | or we can move the nova-ceph-multistore to 22.04 | 16:09 |
dansmith | I don't actually see the failure logged anywhere, other than keystone can't be imported | 16:09 |
bauzas | yeah me too, hence my question | 16:09 |
dansmith | oh I see that's the bump to 3.9 | 16:10 |
bauzas | this looks to me that keystone can't work with 3.9 | 16:10 |
dansmith | or uwsgi is using 3.8 more likely I think | 16:10 |
bauzas | because RegionOne isn't an UUID | 16:10 |
dansmith | it says importerror but doesn't actually show that it's not a valid module, so I'm not positive | 16:10 |
dansmith | either way, I feel like 3.9 on focal isn't really an option | 16:10 |
sean-k-mooney | dansmith: that's failing because uwsgi is installed under python 3.8.10 | 16:10 |
sean-k-mooney | Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] | 16:10 |
dansmith | sean-k-mooney: yup | 16:11 |
sean-k-mooney | which i think you said was a compile time dep | 16:11 |
sean-k-mooney | for uwsgi | 16:11 |
dansmith | no, I didn't say that | 16:11 |
bauzas | fwiw, the u-c bump to tooz==4.0.0 is still on hold | 16:11 |
bauzas | so we may ask for a cap | 16:11 |
dansmith | I said some of our packages we install from bindep to avoid compiling things like mysqlclient (AFAIK) | 16:11 |
sean-k-mooney | ah | 16:11 |
sean-k-mooney | ya sorry you did mention mysql | 16:12 |
sean-k-mooney | not uwsgi | 16:12 |
dansmith | I think the uwsgi python module *is* tied directly to the python version though | 16:12 |
dansmith | so we'd need a python3.9-uwsgi-module-python3 (or whatever the name is) | 16:12 |
sean-k-mooney | at least in the disto package i think that is correct | 16:12 |
dansmith | uwsgi-plugin-python3 | 16:13 |
dansmith | this ^ is compiled directly against the base python3, not 3.9 | 16:13 |
dansmith | see the last answer here: https://stackoverflow.com/questions/68413988/building-a-uwsgi-plugin-for-python-3-9-failed-for-older-version-it-works-are-th | 16:14 |
dansmith | might be able to pip install uwsgi with the 3.9 python | 16:14 |
sean-k-mooney | for the devstack venv path i think you can override that with https://review.opendev.org/c/openstack/devstack/+/558930/26/files/apache-horizon.template | 16:14 |
sean-k-mooney | WSGIPYTHONHOME | 16:15 |
dansmith | I' | 16:15 |
dansmith | I am pretty sure not | 16:15 |
dansmith | uwsgi runs the python interpreter internally, AFAIK | 16:15 |
dansmith | that's how it provides its phantom importable modules, etc | 16:15 |
sean-k-mooney | i know i was doing something to get it to work in that series | 16:15 |
sean-k-mooney | but i don't recall | 16:15 |
sean-k-mooney | oh it was this https://review.opendev.org/c/openstack/devstack/+/558930/26/functions-common#1605 | 16:16 |
sean-k-mooney | anyway that won't really help here | 16:16 |
sean-k-mooney | for the ceph job we should revert back to 3.8 i guess | 16:16 |
sean-k-mooney | if we are keeping it on focal | 16:17 |
sean-k-mooney | mixing the workaround to run uwsgi from a venv with the ceph job is not helpful; i thought there was a simple fix in that, but no | 16:18 |
bauzas | stupid question but I guess we can't block a specific tooz version in our own requirements.txt file ? | 16:19 |
sean-k-mooney | we can but we are not meant to | 16:19 |
sean-k-mooney | we can do tooz!=whatever | 16:19 |
sean-k-mooney | that will cause us to downgrade if it's already installed | 16:20 |
bauzas | and tooz<=version ? | 16:20 |
sean-k-mooney | we dont want to cap | 16:20 |
dansmith | right but we'll still install the initial version first, then downgrade it, | 16:20 |
sean-k-mooney | but we can block know broken ones | 16:20 |
dansmith | and if someone that runs after us has >=4.0 it will be re-re-installed | 16:20 |
dansmith | so that's not the best solution, IMHO | 16:20 |
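The two approaches being weighed can be sketched as requirements fragments (version numbers here are illustrative, not the actual pin):

```
# Hypothetical sketches of the two options discussed; versions illustrative.

# (a) exclusion in nova's own requirements.txt -- blocks a known-broken
#     release, but pip may install 4.0.0 first and then downgrade, and a
#     later install with >=4.0 would re-upgrade it:
tooz!=4.0.0

# (b) cap in the requirements repo's upper-constraints.txt instead, which
#     pins what CI installs globally:
tooz===3.2.0
```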
bauzas | I was just trying to buy us some time :) | 16:21 |
dansmith | yeah, it might be a quick fix | 16:21 |
bauzas | fwiw https://review.opendev.org/c/openstack/requirements/+/881329 | 16:31 |
bauzas | feel free to vote (loudly) | 16:31 |
sean-k-mooney | dansmith: was the same done for castellan? for glance? | 16:34 |
sean-k-mooney | its currently castellan===4.1.0 | 16:34 |
dansmith | we just dropped the 38 jobs | 16:34 |
sean-k-mooney | ah ok | 16:34 |
bauzas | so, before I leave, https://review.opendev.org/c/openstack/requirements/+/881329 should unblock the gate | 16:41 |
bauzas | this will give us time, ideally to investigate the uwsgi 3.9 support | 16:42 |
bauzas | that being said, I'm OK with dropping 3.8 support for the functional tests | 16:42 |
sean-k-mooney | i think we should just see if we can deploy devstack with ceph on 22.04 using cephadm | 16:54 |
sean-k-mooney | if we can, we can swap the nova ceph jobs to that | 16:55 |
sean-k-mooney | i can push a DNM patch to test that but i probably won't have time to actually try it myself | 16:55 |
dansmith | there's a patch started already IIRC | 16:57 |
dansmith | ah that was an old DNM I was thinking about | 16:59 |
*** Guest12046 is now known as dasm | 19:40 | |
opendevreview | Lin Yang proposed openstack/os-traits master: CPU: add traits for new X86 feature "AMX" https://review.opendev.org/c/openstack/os-traits/+/868149 | 21:59 |
dansmith | um, neutron now requires >=3.9 as well? https://zuul.opendev.org/t/openstack/build/c7a96046411c4e05a669003bb67b299d | 22:18 |
dansmith | yup: https://review.opendev.org/c/openstack/neutron/+/881333 | 22:18 |
clarkb | looks like they did it in response to the tooz update | 22:19 |
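What the neutron change amounts to is a bump of the minimum Python in project metadata. A minimal setup.cfg sketch of that kind of change (field names are standard setuptools; the exact diff in the linked review may differ):

```ini
[metadata]
name = example-project
# Declares py3.8 unsupported; pip on a 3.8 interpreter will refuse
# to install releases carrying this marker.
python_requires = >=3.9

classifiers =
    Programming Language :: Python :: 3.9
    Programming Language :: Python :: 3.10
```

This is why focal-based jobs (which default to py3.8) start failing outright rather than just skipping tests.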
opendevreview | Dan Smith proposed openstack/nova master: Remove silent failure to find a node on rebuild https://review.opendev.org/c/openstack/nova/+/880632 | 22:23 |
opendevreview | Dan Smith proposed openstack/nova master: Stop ignoring missing compute nodes in claims https://review.opendev.org/c/openstack/nova/+/880633 | 22:23 |
opendevreview | Dan Smith proposed openstack/nova master: Make focal job non-voting https://review.opendev.org/c/openstack/nova/+/881409 | 22:23 |
dansmith | clarkb: yeah, not sure if they were trying to help with that or not... | 22:23 |
dansmith | so I imagine that's going to take a while to undo, if at all, so this ^ marks that job as non-voting | 22:23 |
dansmith | I'm also not sure why we have that focal job, tbh | 22:24 |
dansmith | ah: https://review.opendev.org/c/openstack/nova/+/861111 | 22:24 |
dansmith | so I can probably just nuke it instead of making it n-v since it'll fail | 22:25 |
opendevreview | Dan Smith proposed openstack/nova master: Remove focal job for 2023.2 https://review.opendev.org/c/openstack/nova/+/881409 | 22:27 |
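The non-voting-vs-remove choice dansmith is making maps to a small zuul config change. A hedged .zuul.yaml sketch (project and job names are illustrative, not copied from the actual 881409 diff):

```yaml
# Option 1: keep the job but stop it from blocking merges.
- project:
    check:
      jobs:
        - tempest-integrated-compute-ubuntu-focal:
            voting: false

# Option 2 (what the patch above ended up doing): delete the job
# entry from the check/gate lists entirely.
```

Non-voting keeps the signal visible while wasting CI time on a known failure; removal is cleaner once the job is guaranteed to fail.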
opendevreview | Dan Smith proposed openstack/nova master: Remove silent failure to find a node on rebuild https://review.opendev.org/c/openstack/nova/+/880632 | 22:27 |
opendevreview | Dan Smith proposed openstack/nova master: Stop ignoring missing compute nodes in claims https://review.opendev.org/c/openstack/nova/+/880633 | 22:27 |
dansmith | gmann: ^ | 22:28 |
dansmith | oh right, the ceph jobs are focal still, which means we're blocked until neutron reverts that or we have to mark ceph jobs as n-v | 22:42 |
dansmith | fml | 22:42 |
artom | dansmith, hey, just going to drop this here drive-by style: https://github.com/openstack/devstack-plugin-ceph/blob/master/devstack/plugin.sh#L5-L9 | 23:07 |
dansmith | artom: what about it? | 23:08 |
artom | Apparently this cephadm thing is the New Correct Way of installing ceph, essentially bypassing all the Ubuntu Ceph RPMs | 23:08 |
dansmith | yeah we know :) | 23:08 |
artom | I'll follow up with an email tomorrow, but that's what I got out of a chat with Giulio | 23:09 |
dansmith | last I checked (when we discussed at the recent ptg) it either didn't work correctly, or something about what we need for our regular jobs doesn't work with that mode | 23:09 |
dansmith | I don't know the details, but that's what we need to get worked out | 23:09 |
artom | Ack - more info (and contact people) to follow tomorrow | 23:09 |
dansmith | I just rechecked this, which frickler refreshed recently: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/865315 | 23:10 |
dansmith | which I'm hoping is supposed to actually work | 23:10 |
dansmith | you see the jobs there are still deploying without that flag and on focal "until it works" and that was more recent than the cephadm patches | 23:12 |
dansmith | I mean, before the revert of course | 23:12 |
dansmith | I want to say it's something like installing with cephadm doesn't leave the ceph config visible which nova needs to resolve the cluster or something like that | 23:13 |
dansmith | artom: yeah, the cephadm-based job fails pretty early and pretty hard: https://zuul.opendev.org/t/openstack/build/cce90efc7177459e9eebdab966bab75c | 23:40 |
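For context, switching a devstack job to the cephadm deploy path is driven by a plugin flag. A sketch of a local.conf excerpt, assuming the flag is named `CEPHADM_DEPLOY` as in devstack-plugin-ceph's plugin.sh (verify against the lines artom linked above):

```text
[[local|localrc]]
enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
# Assumed flag: use the containerized cephadm bootstrap instead of
# distro packages.
CEPHADM_DEPLOY=True
```

The build linked above suggests this path still fails early in the gate, which is dansmith's point about it not being a drop-in replacement yet.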
clarkb | the jammy ceph version is also new enough for nova iirc | 23:40 |
dansmith | clarkb: right, the reason we didn't do that before was because they didn't yet have jammy packages IIRC, but I think they do now | 23:41 |
dansmith | either way, it's failing pretty hard | 23:41 |
clarkb | I think there was actually a misunderstanding. Everyone was looking for cloud archive packages for some reason, but the base distro had packages that were quite new | 23:42 |
dansmith | I mean the distro-package-based one | 23:42 |
dansmith | clarkb: no I think people were looking for ceph.org packages, or at least that's what I remembered | 23:42 |
clarkb | ya that was the first issue, then they looked for UCA packages and didn't find those either | 23:42 |
dansmith | but perhaps that was because jammy already had them and that's just how we did it before, I dunno | 23:42 |
dansmith | ack okay | 23:42 |
clarkb | but the base distro has newish packages that can probably be made to work | 23:42 |
* dansmith nods | 23:43 | |
dansmith | the job running on jammy is about to finish, but it had lots of tempest failures in it | 23:43 |
dansmith | from this: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/865315?tab=change-view-tab-header-zuul-results-summary | 23:43 |
dansmith | qemu: module block-block-rbd not found | 23:47 |
dansmith | in nova compute's log | 23:47 |
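On Debian/Ubuntu, qemu's rbd block driver is built as a loadable module shipped in a separate package (`qemu-block-extra`), so an error like the one above usually means that package isn't installed. An illustrative diagnostic, assuming an Ubuntu host:

```shell
# Check whether the rbd block module file is present
ls /usr/lib/x86_64-linux-gnu/qemu/block-rbd.so

# If missing, it comes from the qemu-block-extra package
sudo apt-get install -y qemu-block-extra
```

If the cephadm-based job's image lacks that package, guest boots from rbd-backed volumes would fail exactly as the log shows.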
sean-k-mooney[m] | dansmith: i think you just need to add the ensure podman role to a pre playbook | 23:47 |
dansmith | sean-k-mooney[m]: could be, but my point is, I don't think that job is like super healthy and well-tested such that switching to it is the obvious choice :) | 23:48 |
sean-k-mooney[m] | hehe ya... | 23:48 |
sean-k-mooney[m] | so googling a little i think there is some issue with cephadm assuming rpm-based distros but it's running on ubuntu | 23:52 |
sean-k-mooney[m] | you can correct this via containers.conf apparently | 23:52 |
dansmith | yeah idk, but if jammy has new enough ceph that seems like it might be the easiest path to it working | 23:53 |
dansmith | although we must also be missing that qemu module | 23:53 |
sean-k-mooney[m] | it is older than what's on ceph.com | 23:53 |
sean-k-mooney[m] | but it's relatively recent | 23:53 |
dansmith | but quincy is the release they're trying to install and that's it, AFAICT | 23:54 |
sean-k-mooney[m] | i think manila wanted to have a newer version | 23:54 |
dansmith | however, I think we're going to have to ask neutron to revert the pin until we figure it out regardless | 23:54 |
dansmith | it's going to be ugly because they merged it to like all their repos | 23:55 |
sean-k-mooney[m] | ya i didn't actually get time to look at any of this today | 23:57 |
sean-k-mooney[m] | i wanted to check if anything had progressed before i went to sleep but not much it seems; thanks for rechecking that devstack patch | 23:57 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!