opendevreview | Merged openstack/nova-specs master: Add maxphysaddr support for Libvirt https://review.opendev.org/c/openstack/nova-specs/+/861033 | 02:58 |
opendevreview | Merged openstack/placement master: Avoid rbac defaults conflict in functional tests https://review.opendev.org/c/openstack/placement/+/869525 | 06:24 |
*** blarnath is now known as d34dh0r53 | 06:43 | |
gibi | fyi there is a low-frequency but seemingly new functional test failure on the nova gate https://bugs.launchpad.net/nova/+bug/2002782 | 08:50 |
gibi | also I see multiple failures in various nova jobs with keystone not having the admin role defined | 08:57 |
gibi | Jan 13 03:22:30.365429 np0032719500 devstack@keystone.service[52368]: ERROR keystone.server.flask.application [None req-a8fb798b-0274-4f56-8a07-13659cf7afe4 None admin] Could not find role: admin.: keystone.exception.RoleNotFound: Could not find role: admin. | 08:57 |
gibi | example: https://zuul.opendev.org/t/openstack/build/8cec516802404c0a8af6a2724ac2b78b/log/controller/logs/screen-keystone.txt#1142 | 08:57 |
gibi | but there have been successful job runs there since, so I'm not sure whether it was just a temporary gate block that has since been resolved | 08:58 |
kashyap | gibi: Morning, 'grenade-skip-level' and 'nova-ceph-multistore' jobs are failing for me (looks unrelated): https://review.opendev.org/c/openstack/nova/+/869950/ | 09:08 |
kashyap | One is: | 09:09 |
kashyap | --- | 09:09 |
kashyap | dpkg: error processing package pcp (--configure): installed pcp package post-installation script subprocess returned error exit status 1 | 09:09 |
kashyap | --- | 09:09 |
gibi | yepp that is unrelated | 09:15 |
gibi | https://bugs.launchpad.net/devstack/+bug/1943184 | 09:16 |
kashyap | Ah, thanks for the link | 09:17 |
kashyap | And the 'nova-ceph-multistore' job seems to crash/segfault Python due to this test: | 09:17 |
kashyap | tempest.api.compute.admin.test_volume.AttachSCSIVolumeTestJSON.test_attach_scsi_disk_with_config_drive[id-777e468f-17ca-4da4-b93d-b7dbf56c0494] | 09:17 |
kashyap | gibi: Wow, if 'pcp' has been unreliable for that long, I wonder if there's an alternative or if it's necessary at all | 09:18 |
frickler | it is only for stat collection, so mostly not necessary at all. I also thought we had disabled it by default; do you enable dstat in those job(s)? | 09:19 |
kashyap | frickler: I don't know off-hand if those jobs enable 'dstat', but I assume they do | 09:21 |
*** elodilles is now known as elodilles_pto | 09:58 | |
opendevreview | Sahid Orentino Ferdjaoui proposed openstack/nova master: compute: enhance compute evacuate instance to support target state https://review.opendev.org/c/openstack/nova/+/858383 | 10:29 |
opendevreview | Sahid Orentino Ferdjaoui proposed openstack/nova master: api: extend evacuate instance to support target state https://review.opendev.org/c/openstack/nova/+/858384 | 10:29 |
opendevreview | Alexey Stupnikov proposed openstack/nova master: Add functional tests to reproduce bug #1994983 https://review.opendev.org/c/openstack/nova/+/863416 | 12:01 |
opendevreview | Alexey Stupnikov proposed openstack/nova master: Log some InstanceNotFound exceptions from libvirt https://review.opendev.org/c/openstack/nova/+/863665 | 12:02 |
lajoskatona | Hi nova team, may I ask about the migrate CLI? The question is: "is there a chance to change the --wait option to wait on the migration status instead of the server status in the case of openstack server migrate .... --wait?" | 12:06 |
lajoskatona | The logic is here: https://opendev.org/openstack/python-openstackclient/src/branch/master/openstackclient/compute/v2/server.py#L3016-L3022 and as far as I saw it (the logic, I mean) was copy-pasted from novaclient, but I can't find why it was decided to wait for the server status instead of the status of the migration | 12:08 |
lajoskatona | I see reasons for both: even if the migration failed, the server can remain on the same host and we are happy as long as it is active. | 12:08 |
opendevreview | Alexey Stupnikov proposed openstack/nova stable/zed: Remove deleted projects from flavor access list https://review.opendev.org/c/openstack/nova/+/870053 | 12:09 |
lajoskatona | But from the other perspective the user would be happy to see in this case that the migration failed (without an extra check of the migration status), as it can be misleading if --wait returns happily while the migration failed | 12:10 |
sean-k-mooney | lajoskatona: if i recall there is not a good way to find the migration object | 12:13 |
sean-k-mooney | the migrate and live migrate calls dont return the migration uuid if i recall, so you would need to have a heuristic to find it client side | 12:14 |
sean-k-mooney | something like list the migrations or server events for the instance and get the last one, and hope that is the correct one for the current command | 12:15 |
sean-k-mooney | https://docs.openstack.org/api-ref/compute/?expanded=migrate-server-migrate-action-detail#migrate-server-migrate-action | 12:16 |
sean-k-mooney | if we had an api change to return the migration uuid from that and the live migrate endpoint then it would be easy for the client to wait on the migration status instead | 12:17 |
lajoskatona | sean-k-mooney: thanks, that sounds right as I start to remember the migration details. I'll check and play with it to understand it fully. | 12:25 |
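As a side note, the client-side heuristic sean-k-mooney describes above could look roughly like the sketch below. This is only an illustration, not the novaclient/openstackclient implementation: the keystoneauth-style session, the `instance_uuid` filter on GET /os-migrations, and the terminal status set are assumptions here.

```python
# Illustrative sketch only (assumptions: a keystoneauth1-style session whose
# .get() accepts requests kwargs, an ``instance_uuid`` filter on
# GET /os-migrations, and this particular set of terminal statuses).
import time


def latest_migration_for(session, compute_url, server_id):
    """Heuristic: list migrations for the server and take the newest one."""
    resp = session.get(compute_url + '/os-migrations',
                       params={'instance_uuid': server_id})
    migrations = resp.json().get('migrations', [])
    # The migrate/live-migrate actions do not return a migration uuid, so we
    # can only *hope* the newest record belongs to the command we just issued.
    return max(migrations, key=lambda m: m['created_at']) if migrations else None


def wait_for_migration(session, compute_url, server_id, interval=5):
    """Poll the heuristic migration until it reaches a terminal status."""
    terminal = {'completed', 'finished', 'done', 'error', 'cancelled'}
    while True:
        migration = latest_migration_for(session, compute_url, server_id)
        if migration and migration['status'] in terminal:
            return migration
        time.sleep(interval)
```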
pslestang | Hi all, is it because the relation chain is not fully reviewed that I cannot merge this patchset https://review.opendev.org/c/openstack/nova/+/867832 or am I missing something else? | 13:20 |
sean-k-mooney | yes | 13:36 |
sean-k-mooney | the reproducer is not approved so the fix won't be merged | 13:36 |
sean-k-mooney | when the parent merges the top patch will be merged by zuul | 13:36 |
pslestang | ok understood, will some of you have some time to approve it? | 13:39 |
sean-k-mooney | ya we will review it as normal, i might have time to take a look later today | 13:46 |
sean-k-mooney | this code is included more or less in the follow-up patch so i have glanced over it already | 13:46 |
sean-k-mooney | so likely there will be no feedback, but im doing some email stuff right now so dont want to switch context | 13:47 |
kashyap | Man, I'm in Gate-hell, these jobs are failing w/ unrelated errors :( - nova-live-migration, nova-multi-cell, nova-ovs-hybrid-plug, nova-grenade-multinode | 13:47 |
kashyap | I wonder how many of these jobs really deserve to be "voting" | 13:48 |
sean-k-mooney | all of them | 13:51 |
sean-k-mooney | they are pretty stable; if there is currently an issue its new and we should investigate that | 13:51 |
kashyap | Well, that is the "ideal" scenario, assuming there's "unlimited bandwidth" from contributors. I can't possibly keep investigating CI issues all day and week long | 13:56 |
sean-k-mooney | https://zuul.openstack.org/builds?job_name=nova-live-migration&job_name=nova-ovs-hybrid-plug&job_name=nova-multi-cell&job_name=nova-grenade-multinode&project=openstack%2Fnova&branch=master&skip=0 | 13:56 |
kashyap | I don't know if all of them are deserving, we have to re-evaluate some jobs to see if they're still worth their salt. | 13:56 |
sean-k-mooney | i do look at those jobs and the ones that you listed are valuable | 13:57 |
kashyap | What's annoying is all of these jobs passed in the previous run :-( | 13:57 |
sean-k-mooney | the grenade job failed on test_volume_backed_live_migration | 13:59 |
bauzas | kashyap: lemme look then, change id ? | 13:59 |
* bauzas was working back on its power management series | 14:00 | |
sean-k-mooney | presumably https://review.opendev.org/c/openstack/nova/+/869587 | 14:00 |
kashyap | bauzas: Thanks for the offer, I have already opened all the failing 4 jobs and seeing what's up one-by-one | 14:01 |
sean-k-mooney | or https://review.opendev.org/c/openstack/nova/+/869950 | 14:01 |
bauzas | sean-k-mooney: ack, will look | 14:01 |
bauzas | TGIF | 14:01 |
kashyap | Previously 'nova-ceph-multistore' was failing due to 'pcp' package unreliability | 14:01 |
kashyap | bauzas: That's the patch: https://review.opendev.org/c/openstack/nova/+/869950/ | 14:01 |
sean-k-mooney | ya i saw that in one of the other failures | 14:01 |
sean-k-mooney | that might be a mirror issue | 14:02 |
sean-k-mooney | so not related to the job but the cloud it ran on | 14:02 |
sean-k-mooney | i know that there were issues with one of the ci providers running out of log storage, i think, during the week | 14:02 |
kashyap | Hmm, it's a pity that we don't have a way to selectively run just the failing jobs (while retaining the older results) - if nothing has changed in a patch | 14:02 |
sean-k-mooney | we intentionally dont, because that is dangerous | 14:02 |
sean-k-mooney | its called the green check policy | 14:03 |
kashyap | I know, the "danger" is introducing accidental regressions | 14:03 |
sean-k-mooney | the issue is that we use speculative execution in the gate | 14:03 |
sean-k-mooney | so re-running one job might mean you end up with each job testing different speculatively merged commits | 14:04 |
kashyap | I wonder what's wrong with this: *if* a patch has not changed from the previous iteration, and a job has failed due to an unrelated failure, then allow selectively re-running just that job | 14:04 |
sean-k-mooney | if you made sure the same commits were preserved for the rerun that might be ok, but that is not how zuul works | 14:04 |
bauzas | yup | 14:04 |
bauzas | and honestly, I prefer this | 14:05 |
kashyap | Sure, the commit _is_ preserved | 14:05 |
sean-k-mooney | thats not how zuul works | 14:05 |
bauzas | if we have some jobs that are not ok, we can then make them non-voting in that case | 14:05 |
sean-k-mooney | zuul rebases the commit you submit on top of master to test it merged against the current state of master | 14:05 |
sean-k-mooney | kashyap: the grenade job failed because of rabbitmq | 14:18 |
sean-k-mooney | : ERROR oslo.messaging._drivers.impl_rabbit [None req-5a9299c4-d386-4e22-8cca-281e29799492 None None] Connection failed: [Errno 111] ECONNREFUSED (retrying in 19.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED | 14:18 |
sean-k-mooney | the second compute node did not stack | 14:19 |
sean-k-mooney | this is the second time i have seen that in two days so that looks like a real issue in devstack | 14:19 |
kashyap | sean-k-mooney: I see, thank you for looking. Is that an accidental thing? | 14:19 |
sean-k-mooney | i have only seen that since yesterday | 14:19 |
kashyap | Hm, 2nd time in 2 days | 14:19 |
kashyap | But it passed just a few hours ago. Seems non-deterministic to me. | 14:19 |
sean-k-mooney | so i dont know if something changed this week that broke it | 14:20 |
sean-k-mooney | yes, so it might depend on the provider; what we should do is check the controller and see if its running there | 14:20 |
kashyap | I see nothing "green" on the controller - https://zuul.opendev.org/t/openstack/build/9c1b4d7afc4c457fa90ae4ceb128dded | 14:21 |
kashyap | Although it says in _red_, "193 OK, 103 changed, 1 Failure" | 14:21 |
kashyap | Probably it's just a miscolored thing | 14:21 |
sean-k-mooney | looks like its running fine on the controller | 14:22 |
sean-k-mooney | although nova-compute is failing on the controller too, but that might be after/during the upgrade | 14:23 |
sean-k-mooney | n-cpu is running fine initially at least | 14:24 |
sean-k-mooney | this could just be a network connectivity issue between the vms | 14:24 |
kashyap | Yeah, thought so; the classic "network connectivity" :) | 14:25 |
sean-k-mooney | grenade is able to run many of the tests | 14:25 |
sean-k-mooney | https://zuul.opendev.org/t/openstack/build/9c1b4d7afc4c457fa90ae4ceb128dded/log/controller/logs/grenade.sh_log.txt | 14:25 |
sean-k-mooney | im not sure it is a network connectivity issue, as it would not have got to grenade if the network did not work before the upgrade | 14:26 |
sean-k-mooney | we run tempest twice for grenade | 14:26 |
sean-k-mooney | before upgrade and after | 14:26 |
kashyap | Yeah, your logic makes sense, though - if it is able to do upgrade tests via grenade, then the prob is elsewhere | 14:26 |
sean-k-mooney | tempest.scenario.test_server_multinode.TestServerMultinode.test_schedule_to_all_nodes | 14:27 |
sean-k-mooney | is failing because the subnode is no longer able to connect | 14:27 |
opendevreview | Alexey Stupnikov proposed openstack/nova master: Log some InstanceNotFound exceptions from libvirt https://review.opendev.org/c/openstack/nova/+/863665 | 14:27 |
kashyap | sean-k-mooney: Thanks; I admire your ability to tirelessly look at CI failures | 14:32 |
bauzas | sean-k-mooney: I briefly looked at kashyap's CI issues and they looked to me like transient network issues | 14:33 |
sean-k-mooney | well i have swapped back to other things, but in general all active contributors are expected to help look at them | 14:33 |
dansmith | sean-k-mooney: I saw one of those connection refused to rabbit things yesterday as well | 14:33 |
sean-k-mooney | bauzas: i would agree but as i said this is the second time i have seen the rabbit issues | 14:33 |
dansmith | along with several other failures that have me concerned | 14:33 |
sean-k-mooney | ya | 14:33 |
bauzas | lemme check the nodes | 14:33 |
sean-k-mooney | so i do think that something has regressed in grenade/devstack/job-definitions | 14:34 |
*** dasm|off is now known as dasm | 14:34 | |
sean-k-mooney | it could be provider related but i think we have seen issues on more than one provider | 14:34 |
dansmith | on another note, sean-k-mooney bauzas I hope either of you can look at this soon: https://review.opendev.org/c/openstack/nova/+/866218 | 14:35 |
bauzas | https://f4c3b65ecec7dfeb9b12-92114ee8da6f13c40794db32b7bbd824.ssl.cf5.rackcdn.com/869950/4/check/nova-grenade-multinode/9c1b4d7/zuul-info/inventory.yaml | 14:35 |
bauzas | ovh | 14:35 |
dansmith | still waiting on one devstack thing, but it's small, and a dependent placement thing as well | 14:35 |
bauzas | dansmith: sure I can help | 14:35 |
bauzas | oh this | 14:35 |
bauzas | I said I was looking at it | 14:35 |
sean-k-mooney | dansmith: ya so i merged the placement fixture change last night | 14:35 |
dansmith | oh sweet thanks | 14:36 |
bauzas | already half-reviewed gmann's patch | 14:36 |
sean-k-mooney | bauzas: ack if you dont get to it by your end of day ill try and get to it before i sign off today | 14:36 |
kashyap | sean-k-mooney: Yes, of course - I look at them (within reason). I was just saying, can't drop all the other responsibilities and tend to it :( Just not enough hours | 14:36 |
dansmith | bauzas: sorry, but thanks | 14:36 |
dansmith | kashyap: well, someone has to :/ | 14:36 |
kashyap | dansmith: True, the mythical "someone"; it's the classic "tragedy of the commons" :/ | 14:37 |
dansmith | if we let it get out of control we'll never recover | 14:37 |
bauzas | sean-k-mooney: gmann: ok, so for now we only check functional tests using legacy RBAC | 14:37 |
bauzas | https://review.opendev.org/c/openstack/placement/+/869525/3/placement/tests/functional/fixtures/placement.py | 14:37 |
bauzas | I'm OK with this | 14:37 |
bauzas | but will we have a FUP modifying our tests to use the new defaults ? | 14:38 |
dansmith | bauzas: no, the main devstack job runs with old | 14:38 |
dansmith | and a new devstack job with new | 14:38 |
dansmith | that's what I had him add, so we could command them still to be old while we transition | 14:39 |
sean-k-mooney | well bauzas is asking about functional tests | 14:40 |
bauzas | dansmith: ok, I understand it so placement won't yet support new defaults, only nova/neutron/cinder blah, right? | 14:40 |
sean-k-mooney | at some point we should test with new placement defaults too | 14:40 |
bauzas | that's my point | 14:41 |
sean-k-mooney | although i dont know if we want to just swap or add new tests | 14:41 |
bauzas | I just wanna make sure what we will do | 14:41 |
sean-k-mooney | largely for placement it wont matter much | 14:41 |
dansmith | oh sorry I thought you meant you thought we weren't getting any old coverage | 14:41 |
sean-k-mooney | as we always talk to it as admin more or less now anyway | 14:41 |
dansmith | right doesn't matter much for placement | 14:41 |
dansmith | glance does have a functional job running on both, but mostly because they have specific functional tests for it and their functional tests are a lot more realistic than ours | 14:42 |
dansmith | do we even do policy checking in our functionals? maybe some of them? | 14:43 |
bauzas | we don't check policy in our tests AFAIK | 14:45 |
bauzas | butn, | 14:45 |
bauzas | we sometimes ask for the admin role | 14:45 |
bauzas | so I guess that while nova will work with new defaults, we'll then call placement using old defaults, right? | 14:46 |
*** tosky_ is now known as tosky | 14:47 | |
opendevreview | Sofia Enriquez proposed openstack/nova master: WIP: Implement encryption on backingStore https://review.opendev.org/c/openstack/nova/+/870012 | 14:47 |
dansmith | bauzas: I don't know that there's really much difference for placement | 14:48 |
dansmith | admin is still admin, and we don't use the user's token like we do when we call neutron, cinder, etc | 14:49 |
dansmith | I've also seen this grenade failure where we fail on the old side waiting for compute to be registered in the catalog | 14:56 |
dansmith | bauzas: gibi sean-k-mooney: it would be really good to get this in as soon as we can, it's seriously annoying debugging the other gate fails without it: https://review.opendev.org/c/openstack/nova/+/869900 | 15:12 |
bauzas | dansmith: gmann: sent new RBAC defaults patch to the gate | 15:14 |
bauzas | looking now at your CI fix | 15:14 |
dansmith | bauzas: thanks | 15:14 |
dansmith | bauzas: that 9900 patch avoids us spewing thousands of warnings on each run about that deprecated function, | 15:16 |
dansmith | which makes reading other log dumps pretty hard | 15:16 |
bauzas | dansmith: I don't see what you mean :p https://review.opendev.org/c/openstack/oslo.messaging/+/862419/4/oslo_messaging/rpc/client.py#389 | 15:17 |
bauzas | (joking) | 15:17 |
dansmith | mmhmm :) | 15:17 |
bauzas | every single instantiation raises a warning | 15:17 |
bauzas | lovely | 15:17 |
dansmith | yeah, which we apparently do A LOT | 15:17 |
bauzas | I wasn't expecting it | 15:18 |
bauzas | but I guess those are for tests | 15:18 |
dansmith | 15,664 times in just one n-api log run | 15:18 |
bauzas | https://review.opendev.org/c/openstack/nova/+/869900/6/nova/tests/fixtures/nova.py | 15:18 |
dansmith | search RPC in here: https://391a5777f99d615e9bdd-6109476be9a7a65d8252c7a651ade8fd.ssl.cf1.rackcdn.com/863919/6/check/tempest-integrated-compute/6af4a31/controller/logs/screen-n-api.txt | 15:19 |
bauzas | woah | 15:19 |
dansmith | we log it even in production, tens of thousands of times | 15:19 |
dansmith | and by "we" I mean "we on behalf of o.msg" | 15:19 |
sean-k-mooney | so any reason for me to not +2w this | 15:19 |
bauzas | oh wait | 15:19 |
sean-k-mooney | it looks ok to me | 15:19 |
bauzas | this isn't a singleton | 15:19 |
sean-k-mooney | it has the requirement bump | 15:19 |
* bauzas facepalms | 15:19 | |
dansmith | https://imgur.com/a/0Vvw1ss | 15:20 |
sean-k-mooney | ci will kick it back if it fails again on the recheck | 15:20 |
bauzas | sean-k-mooney: sent it to the gate too, for gosh sake we could eventually merge it | 15:20 |
dansmith | note the scroll bar showing all the matches | 15:20 |
bauzas | dansmith: yeah, look at https://review.opendev.org/c/openstack/nova/+/869900/6/nova/rpc.py | 15:20 |
sean-k-mooney | what was the nova-next failure | 15:20 |
dansmith | yup | 15:20 |
bauzas | every time we call get_client, we instantiate an RPCService | 15:20 |
dansmith | yup | 15:20 |
bauzas | I would have preferred this being a singleton, but meh now | 15:21 |
dansmith | well, | 15:21 |
* bauzas disappears for taxi reasons | 15:21 | |
gmann | dansmith: bauzas: for functional tests testing the placement new defaults, I will switch it once the placement new defaults are merged; otherwise we need to add a system scope token to work with placement's existing defaults - https://review.opendev.org/c/openstack/placement/+/865618 | 15:21 |
dansmith | it can't always be a singleton because of cell stuff, so we need to pool | 15:21 |
bauzas | dansmith: ah true | 15:21 |
bauzas | gmann: wfm | 15:22 |
* bauzas bbiab | 15:22 | |
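For context on the singleton-vs-pool point above, a minimal sketch of what "pooling" could mean is shown below. This is not the actual change under review in https://review.opendev.org/c/openstack/nova/+/869900; the cache key, the `make_client` factory, and the locking are illustrative assumptions only.

```python
# Minimal sketch only, not nova's actual fix: memoize RPC clients per
# transport URL (one per cell) instead of building a new one on every
# get_client() call, which is what triggers the repeated deprecation warning.
# ``make_client`` is a hypothetical factory standing in for whatever really
# constructs the client.
import threading

_clients = {}
_lock = threading.Lock()


def get_pooled_client(transport_url, make_client):
    """Return a cached client for this cell's transport, creating it once."""
    with _lock:
        client = _clients.get(transport_url)
        if client is None:
            client = make_client(transport_url)
            _clients[transport_url] = client
        return client
```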
gmann | dansmith: bauzas: sean-k-mooney: as you were on rbac things, can you check this, which was missed in the original change: keeping legacy admin for the 'os-tenant-networks' policy https://review.opendev.org/c/openstack/nova/+/865071 | 15:23 |
sean-k-mooney | oh thats project reader or global admin | 15:25 |
sean-k-mooney | whereas the current behavior would require you to have admin on the current project | 15:25 |
gmann | yeah keeping legacy admin unimpacted | 15:26 |
sean-k-mooney | i.e. admin implies member implies reader, but project reader will also check that the project in your token matches | 15:26 |
sean-k-mooney | i guess we dont have any tempest tests covering that | 15:27 |
sean-k-mooney | or it would have blocked either enabling the new default or the change where we broke it | 15:27 |
gmann | sean-k-mooney: this ADMIN is just role:admin, not project admin. so the rule is "role:admin or project-reader", where we do not check project_id for an admin role token | 15:31 |
sean-k-mooney | yep that is what i was expecting | 15:45 |
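To make the rule gmann describes concrete, here is a rough illustration of an "admin or project reader" policy expressed with oslo.policy. The rule name, check string, and operation below are made up for illustration and are not nova's actual 'os-tenant-networks' policy entry.

```python
# Illustrative only: an "admin or project reader" style rule with oslo.policy.
# The plain role:admin alternative keeps legacy admin working with no
# project_id match, while the reader alternative ties the token to the
# target project. Names below are hypothetical, not nova's real policy.
from oslo_policy import policy

example_rule = policy.DocumentedRuleDefault(
    name='os_compute_api:example-tenant-networks:list',
    check_str='role:admin or (role:reader and project_id:%(project_id)s)',
    description='List project networks (example rule, not the real one).',
    operations=[{'path': '/os-tenant-networks', 'method': 'GET'}],
)
```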
opendevreview | Sofia Enriquez proposed openstack/nova master: WIP: Implement encryption on backingStore https://review.opendev.org/c/openstack/nova/+/870012 | 16:47 |
sean-k-mooney | i guess sofia has got time to start on ^ again, but that spec is not approved for this cycle | 17:03 |
sean-k-mooney | on a related note to the other warning, it looks like we are spamming the functional logs with an sqlalchemy deprecation warning too | 17:15 |
sean-k-mooney | https://paste.opendev.org/show/bWlWeI8pWyiwWyvJ7NtC/ ^ dansmith | 17:15 |
sean-k-mooney | im not seeing a nova line there | 17:16 |
sean-k-mooney | so i guess this needs to be fixed in oslo_db | 17:16 |
dansmith | sean-k-mooney: yeah I've seen that too | 17:25 |
dansmith | it's annoying, just less annoying :) | 17:25 |
sean-k-mooney | ya so i just mentioned it on the oslo channel and looked at code search | 17:26 |
sean-k-mooney | other than in some puppet code this is off everywhere | 17:26 |
sean-k-mooney | so i think oslo.db just needs to drop the kwarg and do a release | 17:26 |
kashyap | Is this Oslo DB error a known issue in the "nova-tox-functional-py38" job? | 17:27 |
sean-k-mooney | its just a log message | 17:27 |
sean-k-mooney | its not breaking anything | 17:27 |
kashyap | {0} nova.tests.functional.libvirt.test_vgpu.VGPUTests.test_resize_servers_with_vgpu [6.373304s] ... FAILED() | 17:27 |
sean-k-mooney | but it makes running the tests more annoying | 17:27 |
kashyap | (From here: https://zuul.opendev.org/t/openstack/build/a229b41daba64b6f8dfdeca8c839e9f7) | 17:27 |
sean-k-mooney | thats a different oslo.db thing than the one i was talking about | 17:28 |
* kashyap goes to buy two goats and three pigs to sacrifice (on Monday) to the Zuul gods | 17:28 | |
sean-k-mooney | kashyap: that actually looks like a real bug | 17:29 |
kashyap | Again: it was not hit in the previous 3 runs :-( | 17:29 |
sean-k-mooney | im not sure why we would get a db conflict like that in a functional test | 17:29 |
kashyap | sean-k-mooney: But I agree - it "looks" on the surface like a real bug, but I'm not confident if it's _really_ a DB conflict, or a PEBKAC in the test or ... env snafu | 17:30 |
sean-k-mooney | so there are two tracebacks there | 17:30 |
sean-k-mooney | sqlite3.InterfaceError: Cursor needed to be reset because of commit/rollback and can no longer be fetched from. | 17:30 |
sean-k-mooney | and | 17:30 |
sean-k-mooney | Traceback (most recent call last): | 17:30 |
sean-k-mooney | File "/home/zuul/src/opendev.org/openstack/nova/.tox/functional-py38/lib/python3.8/site-packages/urllib3/connectionpool.py", line 440, in _make_request | 17:31 |
sean-k-mooney | httplib_response = conn.getresponse(buffering=True) | 17:31 |
sean-k-mooney | TypeError: getresponse() got an unexpected keyword argument 'buffering' | 17:31 |
sean-k-mooney | so it looks like there is an issue with urllib3 | 17:31 |
sean-k-mooney | as well | 17:31 |
kashyap | Right, the first one is the cause of the 2nd one | 17:31 |
kashyap | (If I'm parsing it correctly) | 17:31 |
sean-k-mooney | i dont see how - urllib3 is for http requests, not the db, unless this is from parsing the db connection url | 17:32 |
sean-k-mooney | oh look urllib3 had a release 2 days ago... | 17:35 |
sean-k-mooney | hum ok, but we have not changed uc to allow it in 4 months | 17:37 |
kashyap | You mean upper-constraints? | 17:38 |
kashyap | Thanks for digging that. | 17:38 |
sean-k-mooney | ya so there has been a release but i dont think its in use | 17:39 |
sean-k-mooney | upper-constraints is still clamping it to an older one on master | 17:39 |
kashyap | sean-k-mooney: As of now, just to show the "randomness" of the failures: all the jobs that failed in the previous run succeeded now - nova-live-migration, nova-multi-cell, nova-ovs-hybrid-plug, and nova-grenade-multinode jobs | 17:39 |
kashyap | (Except the new one above in the tox-functional-py38) | 17:39 |
kashyap | sean-k-mooney: Should we bump it? | 17:40 |
kashyap | Does it make sense to do so? | 17:40 |
sean-k-mooney | we have automation to bump it | 17:40 |
sean-k-mooney | that runs a set of jobs to check compatibility with most of the projects | 17:40 |
kashyap | Oh, right; I forgot the bot; that's much safer | 17:41 |
* kashyap goes to make dinner; enough Zuul sleuthing for the night. Thanks, sean-k-mooney, as always! | 17:42 | |
sean-k-mooney | it ran with urllib3-1.26.12 | 17:45 |
sean-k-mooney | which is what i have locally and that works | 17:45 |
sean-k-mooney | my guess is the two exceptions are somehow related | 17:45 |
sean-k-mooney | but i cant see how directly | 17:45 |
opendevreview | Dan Smith proposed openstack/nova master: Persist existing node uuids locally https://review.opendev.org/c/openstack/nova/+/863918 | 18:51 |
opendevreview | Dan Smith proposed openstack/nova master: Make resource tracker use UUIDs instead of names https://review.opendev.org/c/openstack/nova/+/863919 | 18:51 |
opendevreview | Dan Smith proposed openstack/nova master: WIP: Detect host renames and abort startup https://review.opendev.org/c/openstack/nova/+/863920 | 18:51 |
opendevreview | Artom Lifshitz proposed openstack/nova master: Microversion 2.94: FQDN in hostname https://review.opendev.org/c/openstack/nova/+/869812 | 20:07 |
gmann | sean-k-mooney: bauzas: gibi: reminder for osc-placement gate fixes review https://review.opendev.org/q/I4e3e5732411639054baaa9211a29e2e2c8210ac0 | 20:37 |
sean-k-mooney[m] | gmann im not sure that is what we should be doing | 21:34 |
sean-k-mooney[m] | i was suggesting continuing to use master on master and using the stable branch release for stable | 21:35 |
gmann | sean-k-mooney[m]: ok for stable. I was thinking we do it the same way as the Nova testing on master | 21:36 |
sean-k-mooney[m] | im not sure what that is off the top of my head but ill take a look | 21:37 |
gmann | ok. let me know and I can update those accordingly. | 21:37 |
sean-k-mooney[m] | i guess if it works as it says in the comment that is ok | 21:38 |
sean-k-mooney[m] | i dont know how that works however | 21:38 |
sean-k-mooney[m] | i would not expect that to be how it works unless we are doing something special with tox siblings in the jobs | 21:39 |
sean-k-mooney[m] | the tox ini does not have the described behavior on its own | 21:39 |
clarkb | siblings only take effect if you set the project as a required project on the zuul job (that pulls in the git repo and the tox role uses it as a signal to install it from source) | 21:39 |
sean-k-mooney[m] | right so osc-placement would need to declare placement as a required project | 21:40 |
sean-k-mooney[m] | and we would need the tox siblings stuff to override whats in the tox ini | 21:41 |
sean-k-mooney[m] | and install placement from the git repo prepared by zuul | 21:41 |
sean-k-mooney[m] | is that how we have the functional job configured in nova/osc-placement | 21:41 |
gmann | if we change it to test with master only then we should have another job to test with the released version | 21:41 |
sean-k-mooney[m] | well its testing with master only now | 21:42 |
gmann | as the comment says, we can test the master placement by replacing the deps line | 21:42 |
gmann | yes, in osc-placement and released version in nova | 21:42 |
sean-k-mooney[m] | so youre proposing changing to using the released version | 21:42 |
sean-k-mooney[m] | yep | 21:42 |
sean-k-mooney[m] | so it would be nice for depends-on to work | 21:43 |
sean-k-mooney[m] | and in general im fine with using the released version | 21:43 |
sean-k-mooney[m] | but im questioning if the tox job in osc-placement will give the depends-on behavior today | 21:43 |
gmann | While doing it for the stable branch I checked how Nova does it, and I thought of doing the same for osc-placement also | 21:43 |
sean-k-mooney[m] | ok so we have the required project in the zuul.yaml | 21:45 |
sean-k-mooney[m] | https://github.com/openstack/osc-placement/blob/master/.zuul.yaml | 21:45 |
gmann | yeah | 21:45 |
sean-k-mooney[m] | yep i know, i was just checking if we were using the jobs from the default template or if we had already overridden it | 21:45 |
sean-k-mooney[m] | since i did not see that in the patch you linked | 21:46 |
sean-k-mooney[m] | i was expecting both in the same patch | 21:46 |
sean-k-mooney[m] | ok this should be fine as is | 21:47 |
gmann | ack | 21:48 |
clarkb | I'm not sure the comment is correct though since siblings will be used | 21:52 |
clarkb | it will always be the latest version of the branch in the gate not the release | 21:53 |
dansmith | cripes, we're never going to get the rpc spam thing landed | 23:22 |
dansmith | seen this a few times now as well: https://zuul.opendev.org/t/openstack/build/f5aa5edd4d354c2685fc1f3e13d0ef77 | 23:22 |
dansmith | saying one of the tempest workers crashed | 23:22 |
dansmith | seems unlikely to me | 23:23 |
*** dasm is now known as dasm|off | 23:37 |