*** dmellado81918134 is now known as dmellado8191813 | 04:48 | |
opendevreview | yatin proposed openstack/devstack master: Revert "Woraround systemd issue on CentOS 9-stream" https://review.opendev.org/c/openstack/devstack/+/892090 | 06:28 |
*** ykarel|away is now known as ykarel | 06:28 | |
kopecmartin | ayhanilhan: melwitt the test in question passed with the patch - https://review.opendev.org/c/openstack/tempest/+/884804 , see e.g. this https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e67/884804/2/gate/tempest-slow-py3/e6755c0/testr_results.html | 07:30 |
kopecmartin | i wonder what's the difference between the env in the CI where it worked and the env where it failed | 07:31 |
kopecmartin | frickler: i wouldn't drop support for centos and rocky , let's keep the bug open until both distros work with global env .. in the meantime we can use the workaround (setting the global venv to false) | 07:32 |
kopecmartin | do all the changes required to fix this need to land in devstack? or does something else, e.g. the distro, have to be modified too? | 07:33 |
frickler | kopecmartin: are you mixing two issues here? for global_venv, I have no idea why it is failing for rocky, but I also have no motivation to find out. the issue is that we default GLOBAL_VENV=true except in some CI jobs, which is breaking local installs for those poor people choosing the wrong distro | 07:37 |
frickler | the other thing that concerns me is that neutron was also having some further issues, seemingly with unclear cause https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/891550 | 07:38 |
kopecmartin | my understanding is that if the job fails, setting GLOBAL_VENV=false helps as that overrides the default value set in devstack | 07:41 |
frickler | kopecmartin: yes, but how do you tell that to someone deploying devstack locally? | 07:43 |
kopecmartin | they google it, find a bug and notice what others did in a similar situation | 07:43 |
frickler | that's not very friendly to users, so my idea was to improve the distro check in stack.sh | 07:45 |
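A distro check like the one frickler suggests could look roughly like this in stack.sh. This is only a minimal sketch, assuming devstack's existing `GetDistro`, `os_VENDOR`, and `trueorfalse` helpers; the exact condition and placement are assumptions, not an actual patch:

```bash
# Sketch: default GLOBAL_VENV off on distros known to break with it, so
# local installs on CentOS/Rocky keep working without a manual override.
# GetDistro / os_VENDOR / trueorfalse are existing devstack helpers.
GetDistro
if [[ "$os_VENDOR" =~ (CentOS|Rocky) ]]; then
    # known-broken distros: keep the old behaviour unless the user opts in
    GLOBAL_VENV=$(trueorfalse False GLOBAL_VENV)
else
    GLOBAL_VENV=$(trueorfalse True GLOBAL_VENV)
fi
```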
kopecmartin | right, ok, we know that it affects only centos and rocky (any others?) .. then we can override the default to false just for those, and then we'll find a volunteer for the task - removing the override without hitting failures | 07:47 |
kopecmartin | if it's documented, the goal clearly stated, and it's easily reproducible, I can maybe pass it to some of my interns or dedicate some time myself | 07:49 |
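For anyone hitting this on a local deployment, the workaround discussed above is a one-line override in devstack's local.conf (`GLOBAL_VENV` is the setting named in this discussion; the snippet is a sketch of the standard local.conf layout):

```ini
# local.conf -- opt out of the global venv default on affected distros
[[local|localrc]]
GLOBAL_VENV=False
```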
hamidlotfi_ | mornings | 08:27 |
hamidlotfi_ | i have a problem creating a network in the admin section | 08:27 |
hamidlotfi_ | when i click on the create button | 08:27 |
hamidlotfi_ | it shows me this error `Danger: An error occurred. Please try again later.` and the log has this message: | 08:28 |
hamidlotfi_ | `[wsgi:error] [pid 2674:tid 140131176224320] [remote 172.17.111.1:42434] Undefined provider network types are found: ['f', 'l', 'a', 't', ',', 'v', 'l', 'a', 'n', ',', 'v', 'x', 'l', 'a', 'n']` | 08:28 |
hamidlotfi_ | `[wsgi:error] [pid 2674:tid 140131176224320] [remote 172.17.111.1:42434] Internal Server Error: /admin/networks/create/` | 08:28 |
hamidlotfi_ | help me please | 08:28 |
frickler | kopecmartin: the neutron failures were on ubuntu I think, not sure how easy reproducing those would be, iirc they resulted in job timeouts rather than failures | 08:37 |
frickler | hamidlotfi_: is that a devstack deployment? | 08:37 |
hamidlotfi_ | frickler: No, it's openstack ansible (OSA) | 08:51 |
kopecmartin | frickler: have you ever seen a patch which was approved but didn't get merged automatically? .. and now it says the change is submittable but I can't submit it o.O weird - https://review.opendev.org/c/openinfra/python-tempestconf/+/889874 | 09:10 |
kopecmartin | oh, and regarding the bug, let's set GLOBAL_VENV=false as a workaround in every project that experiences an issue like that, and the task will be to remove the workaround over time. i'll try to find someone who can work on that in the devstack and tempest repos; if the issue is in another repo and the job is not only inherited, then that will be up to the maintainers of that repo | 09:14 |
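Per project, that workaround is a job variable override. A sketch of what it might look like in a project's .zuul.yaml; the job name here is a placeholder, while `devstack_localrc` is the variable the standard devstack base jobs consume:

```yaml
- job:
    name: my-project-devstack-job   # placeholder name
    parent: devstack-tempest
    vars:
      devstack_localrc:
        GLOBAL_VENV: false          # workaround until the distro issue is fixed
```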
hamidlotfi_ | frickler: what's the problem in my environment? | 09:35 |
frickler | kopecmartin: for the tempestconf patch, yes, it needs a rebase. you can notice it by the orange color of the "(Merged)" comment on the relation chain | 10:03 |
kopecmartin | i commented and that triggered the jobs again :D maybe this time it gets merged | 10:04 |
frickler | hamidlotfi_: it looks like a configuration issue, using a string in a place where a list would be expected | 10:04 |
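The character-by-character list in hamidlotfi_'s log is the classic symptom: Python iterated a comma-separated string where Horizon expects a list. A minimal illustration (`OPENSTACK_NEUTRON_NETWORK` is Horizon's local_settings.py setting; how OSA renders the value into that file is not shown here):

```python
# Horizon expects a list of provider types in local_settings.py:
OPENSTACK_NEUTRON_NETWORK = {
    'supported_provider_types': ['flat', 'vlan', 'vxlan'],  # correct: a list
}

# If the value is rendered as a single string instead, iterating it
# yields individual characters -- exactly the error in the log above:
broken = 'flat,vlan,vxlan'
print(list(broken))
# ['f', 'l', 'a', 't', ',', 'v', 'l', 'a', 'n', ',', 'v', 'x', 'l', 'a', 'n']
```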
kopecmartin | it's interesting though, it's a state between merge conflict and no merge conflict :D | 10:04 |
frickler | I don't think the submittability has changed, not sure why zuul runs anyway, it might be a pure gerrit thing | 10:08 |
frickler | what the review asks gerrit to do is merge that change on top of an outdated version of a different patch. it cannot do that, since a newer version of that other patch was merged | 10:08 |
kopecmartin | yeah, that makes sense, I would expect that gerrit either merges it (with auto rebase) or throws a conflict .. i'll wait and see what happens this time around and then, if still needed, will rebase it | 10:10 |
*** ralonsoh is now known as ralonsoh_afk | 11:06 | |
*** ralonsoh_afk is now known as ralonsoh | 13:02 | |
ralonsoh | frickler, qq: why is "openstack-tox-py311" on debian? | 13:20 |
ralonsoh | there is an issue with a library, but in ubuntu 22.04 this library is present | 13:21 |
ralonsoh | error: https://06e0331b462ed6fdf264-7660c6eb3fc520d2639a30e0db226704.ssl.cf1.rackcdn.com/892132/2/check/openstack-tox-py311/009f134/job-output.txt | 13:21 |
frickler | ralonsoh: this was discussed in the patch, ubuntu has only an rc version of py3.11 in universe, debian has a recent, supported version | 13:39 |
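For context, the py3.11 job pins a Debian nodeset rather than inheriting the default Ubuntu one. Roughly like this; a sketch modeled on the openstack-zuul-jobs definitions, so the exact parent and nodeset names are assumptions:

```yaml
- job:
    name: openstack-tox-py311
    parent: openstack-tox
    nodeset: debian-bookworm   # Ubuntu 22.04 only ships a py3.11 rc in universe
    vars:
      tox_envlist: py311
```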
frickler | I've seen a similar bindep patch adapting this somewhere, I think it was nova, let me check | 13:40 |
frickler | ralonsoh: seems I've seen it because I wrote it ... time flies ;) https://review.opendev.org/c/openstack/nova/+/891256 | 13:42 |
ralonsoh | frickler, perfect, I'll check that patch to do the same for Neutron CI | 13:45 |
ralonsoh | thanks | 13:45 |
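The nova change linked above adjusts bindep so the interpreter gets installed on the test nodes; the entries would look something like this (a sketch using standard bindep profile syntax, check the actual patch for the exact lines):

```
# bindep.txt -- make python3.11 available on Debian bookworm test nodes
python3.11 [platform:debian-bookworm test]
python3.11-dev [platform:debian-bookworm test]
```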
gmann | sent the python 3.11 plan to the ML also, I think many others might have the same question about the plan. | 18:17 |
opendevreview | sean mooney proposed openstack/whitebox-tempest-plugin master: [WIP] add role for multinode networking https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/892170 | 18:38 |
gmann | melwitt: I have the same question kopecmartin asked. the test is passing in upstream CI, right? | 18:57 |
*** elodilles is now known as elodilles_pto | 19:21 | |
melwitt | kopecmartin: thanks for pointing that out, I had managed to not find the test running in upstream CI at the time. so it must be some difference in environment 🤔 cc gmann | 21:10 |