| opendevreview | Steve Baker proposed openstack/nova master: Add VNC console support for the Ironic driver https://review.opendev.org/c/openstack/nova/+/942528 | 01:48 |
|---|---|---|
| opendevreview | Nicolai Ruckel proposed openstack/nova master: Preserve UEFI NVRAM variable store https://review.opendev.org/c/openstack/nova/+/959682 | 07:39 |
| opendevreview | Rajesh Tailor proposed openstack/nova-specs master: Show finish_time field in instance action show https://review.opendev.org/c/openstack/nova-specs/+/929780 | 07:41 |
| nicolairuckel | sean-k-mooney, sorry for the delay. I didn't realize that my responses were just drafts until today. But I think I'm finally getting the hang of Gerrit. | 08:44 |
| sean-k-mooney | nicolairuckel: :) | 08:44 |
| sean-k-mooney | ya, the fact they are not sent until you hit reply is slightly different than gitlab and github, which send them by default unless you explicitly tell it you're doing a review | 08:45 |
| nicolairuckel | It's funny because I used to complain about their default because it creates so many notifications. :D | 08:45 |
| sean-k-mooney | nicolairuckel: sounds like at least locally you now have a mostly working version. | 08:46 |
| sean-k-mooney | nicolairuckel: ya i don't use notifications on github basically ever | 08:46 |
| sean-k-mooney | it's an entirely broken feature given the signal to noise ratio | 08:46 |
| sean-k-mooney | nicolairuckel: so what i'm thinking currently is we fix nvram preservation for reboot, and we do the same for vtpm | 08:48 |
| nicolairuckel | Same for GitLab where people just ignore every email and you have to notify them separately if there are relevant updates to a merge request. | 08:48 |
| nicolairuckel | sean-k-mooney, sounds good to me | 08:48 |
| sean-k-mooney | then we probably will need a spec to fix cold migration and maybe shelve as i think that will need net new design work | 08:48 |
| nicolairuckel | and then we can look into the cold migration separately as that seems a lot more complicated | 08:49 |
| nicolairuckel | right | 08:49 |
| sean-k-mooney | the cold migration part is actually not hard and might be a blueprint or bug | 08:49 |
| sean-k-mooney | shelve is harder because we need to decide where we would store the tpm and nvram data while the vm is not on a host | 08:49 |
| nicolairuckel | ah, I didn't look into that yet but that makes sense to me | 08:49 |
| sean-k-mooney | for cold migration we just need to copy the file to the correct location and deal with deleting it when you confirm or revert | 08:50 |
| nicolairuckel | similar to the old version of my patch? | 08:51 |
| sean-k-mooney | kind of, but different | 08:51 |
| sean-k-mooney | we will need to use the remote filesystem support to copy the file between hosts rather than within the same host | 08:52 |
| nicolairuckel | I see | 08:52 |
| sean-k-mooney | this https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume/remotefs.py#L172-L316 | 08:54 |
| sean-k-mooney | we just need to use create_dir and copy_file | 08:54 |
| sean-k-mooney | and then that will use ssh or rsync to move it | 08:55 |
| sean-k-mooney | depending on which you have enabled | 08:55 |
| sean-k-mooney | we do that via https://github.com/openstack/nova/blob/master/nova/virt/libvirt/utils.py#L314-L351 | 08:55 |
| sean-k-mooney | for cold migration you will just pass the host parameter | 08:56 |
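As a rough sketch of what sean-k-mooney is describing (not the actual patch), the copy could reuse the remote filesystem helpers linked above; the NVRAM path and the helper name here are assumptions for illustration only:

```python
# Rough sketch only; the nvram path and _copy_nvram_to_dest helper name are
# assumptions for illustration, not the actual patch under review.
from nova.virt.libvirt import utils as libvirt_utils
from nova.virt.libvirt.volume import remotefs


def _copy_nvram_to_dest(instance, dest_host):
    # libvirt keeps per-domain NVRAM files under /var/lib/libvirt/qemu/nvram
    # by default; in practice the path would be read from the domain XML.
    nvram_path = '/var/lib/libvirt/qemu/nvram/%s_VARS.fd' % instance.uuid

    # make sure the target directory exists on the destination host
    remotefs.RemoteFilesystem().create_dir(
        dest_host, '/var/lib/libvirt/qemu/nvram')
    # copy_image() delegates to the same RemoteFilesystem code (ssh or rsync)
    # when a host is passed, i.e. the "just pass the host parameter" case.
    libvirt_utils.copy_image(nvram_path, nvram_path, host=dest_host)
```

copy_image() handles the local case when no host is passed, which matches the "within the same host" versus "between hosts" distinction above.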
| nicolairuckel | thanks, I will look into that | 08:57 |
| nicolairuckel | then I'll update the commit message of my patch accordingly | 08:58 |
| sean-k-mooney | i'll see if i can test this again once you have the new version with the release note up, although probably not today | 08:58 |
| nicolairuckel | ah, how do I do the release note? I wasn't able to find documentation on that yet. | 09:00 |
| sean-k-mooney | given what you know now for this it would be good to sync with your coworker on updating https://review.opendev.org/c/openstack/nova/+/955657 | 09:00 |
| sean-k-mooney | ah it's pretty simple, to create one do tox -e venv -- reno new <some-string-with-dashes> | 09:01 |
| sean-k-mooney | tox -e venv -- reno new preserve-nvram | 09:01 |
| sean-k-mooney | that will template out a new file in the release notes folder | 09:01 |
| sean-k-mooney | you can delete 99% of the yaml content but keep the --- and the fixes section | 09:02 |
| sean-k-mooney | then just write a short operator-focused description of what was fixed | 09:02 |
| sean-k-mooney | you can then test it locally with tox -e releasenotes | 09:02 |
| sean-k-mooney | https://docs.openstack.org/nova/latest/contributor/releasenotes.html | 09:03 |
| nicolairuckel | we agreed that this is a bugfix, right? | 09:10 |
| opendevreview | Nicolai Ruckel proposed openstack/nova master: Preserve UEFI NVRAM variable store https://review.opendev.org/c/openstack/nova/+/959682 | 09:15 |
| nicolairuckel | okay, then it should be ready for you to test now | 09:15 |
| sean-k-mooney | nicolairuckel: just realised what version of libvirt you were checking against, sorry | 09:41 |
| sean-k-mooney | we only support libvirt 8.0.0 and above so we do not need to check if preserving the nvram is supported | 09:41 |
| sean-k-mooney | nicolairuckel: for the vtpm change this was needed because it needs 8.9.0 | 09:42 |
| sean-k-mooney | but nvram only needs 2.3.0 | 09:42 |
| nicolairuckel | ah, then I can remove that | 09:44 |
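For context, a minimal sketch of the libvirt flag choice being discussed (assuming the patch works along these lines; this is not the actual nova code):

```python
# Minimal sketch, not the actual nova patch: the choice boils down to which
# undefine flag is passed to libvirt. VIR_DOMAIN_UNDEFINE_KEEP_NVRAM has been
# available since libvirt 2.3.0, which is why no version check is needed with
# nova's 8.0.0 minimum.
import libvirt


def undefine_domain(domain: libvirt.virDomain, keep_nvram: bool) -> None:
    if keep_nvram:
        # e.g. a hard reboot where the guest comes back on the same host and
        # should still see its UEFI variable store
        flags = libvirt.VIR_DOMAIN_UNDEFINE_KEEP_NVRAM
    else:
        # e.g. instance deletion, where the variable store should go away
        flags = libvirt.VIR_DOMAIN_UNDEFINE_NVRAM
    domain.undefineFlags(flags)
```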
| sean-k-mooney | yes, i'm also wondering about these changes https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/functional/libvirt/test_uefi.py | 09:45 |
| sean-k-mooney | is this a side effect of when you removed os.append(nvram) | 09:45 |
| sean-k-mooney | when you're testing the generation of the xml https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/unit/virt/libvirt/test_config.py | 09:46 |
| sean-k-mooney | you are asserting the nvram element is present, which is correct | 09:46 |
| sean-k-mooney | it still feels like you have more changes than are required but i have not gone line by line to verify | 09:47 |
| nicolairuckel | you're right about the first one for sure. | 09:48 |
| nicolairuckel | Let me change that and the version check, and run the tests locally again | 09:49 |
| sean-k-mooney | ack. this is pretty close to correct at this point so we just need to get the last few bits fixed up and we can proceed with it and start backporting once we get someone else to review as well | 09:49 |
| nicolairuckel | I guess I can remove everything related to _may_keep_nvram then, right? | 09:51 |
| sean-k-mooney | ya | 09:52 |
| sean-k-mooney | what i would do is create a clean clone and apply just the https://review.opendev.org/c/openstack/nova/+/959682/14/nova/virt/libvirt/guest.py and https://review.opendev.org/c/openstack/nova/+/959682/14/nova/virt/libvirt/driver.py changes | 09:52 |
| sean-k-mooney | minus the _may_keep_nvram bits, then run the tests and see what breaks | 09:53 |
| sean-k-mooney | that will tell you which parts of the rest of the change are actually needed | 09:53 |
| sean-k-mooney | https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/unit/virt/libvirt/test_guest.py will be needed, and so will https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/unit/virt/libvirt/test_driver.py | 09:54 |
| sean-k-mooney | i don't think you need https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/unit/virt/test_virt_drivers.py | 09:55 |
| sean-k-mooney | but maybe | 09:55 |
| sean-k-mooney | https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/unit/virt/libvirt/test_config.py is good for extra test coverage but i'm not sure https://review.opendev.org/c/openstack/nova/+/959682/14/nova/tests/functional/libvirt/test_uefi.py is | 09:56 |
| sean-k-mooney | so if you can confirm that, it would be good | 09:56 |
| nicolairuckel | I'll do that. | 09:56 |
| nicolairuckel | I'm not sure if I'll be able to do that today though. | 09:57 |
| sean-k-mooney | that's ok, just let us know when it's ready | 09:58 |
| opendevreview | sean mooney proposed openstack/nova master: add functional repoducer for bug 2048837 https://review.opendev.org/c/openstack/nova/+/919979 | 10:32 |
| opendevreview | sean mooney proposed openstack/nova master: ensure correct cleanup of multi-attach volumes https://review.opendev.org/c/openstack/nova/+/916322 | 10:32 |
| sean-k-mooney | gibi: bauzas i realised that ^ is still open. would ye mind looking at those again | 10:33 |
| bauzas | sean-k-mooney: ack I can try | 10:33 |
| gibi | sean-k-mooney: added inline comments in the fix | 11:34 |
| sean-k-mooney | gibi: so many of those comments i dismissed as won't do in the previous revision | 11:37 |
| sean-k-mooney | partly because i don't want to reopen the formatting holy wars | 11:37 |
| sean-k-mooney | well, these comments https://review.opendev.org/c/openstack/nova/+/916322/8/nova/compute/manager.py; i'm still reading the rest | 11:38 |
| sean-k-mooney | gibi: good question on the thread case. | 11:42 |
| sean-k-mooney | what i can do is acquire the named lock before i add it to the list | 11:42 |
| sean-k-mooney | that way i think only one thread will be able to do that | 11:43 |
| sean-k-mooney | so i'll reorder https://review.opendev.org/c/openstack/nova/+/916322/8/nova/utils.py#1196 and 1197 | 11:43 |
| sean-k-mooney | that should fix that. | 11:43 |
| gibi | sean-k-mooney: I'm not sure that lock is enough | 12:21 |
| sean-k-mooney | well the sharing between threads you are describing is not intended to be supported, but i'll comment more on the review | 12:23 |
| gibi | there can be guard([a,b,c]) at the stage where a is released but b is not released yet by T1 in exit. Then T2 can enter and take lock a and manipulate the list | 12:23 |
| gibi | the doc of FairLockGuard does not really explain what is supported and what is not | 12:24 |
| sean-k-mooney | ya i can improve that | 12:24 |
| sean-k-mooney | to me it's a logic bug to share a single instance of a context manager between threads | 12:28 |
| gibi | OK so T1 and T2 each have their own context manager but both use the same list of names to acquire the same set of locks | 12:34 |
| gibi | that is the way we have shared locks but not shared context managers | 12:34 |
| gibi | that is not intuitive so needs to be documented | 12:34 |
| gibi | normally you do `with self.lock` in both T1 and T2 so the context manager (=lock) is shared | 12:35 |
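To make the discussed usage pattern concrete, here is a toy sketch (the real FairLockGuard in the patch under review is more involved); it shows the lock being acquired before it is recorded as held, and threads sharing only the list of names rather than a guard instance:

```python
# Toy sketch of the pattern discussed above, not the actual FairLockGuard.
import threading
from collections import defaultdict

_named_locks = defaultdict(threading.Lock)  # toy stand-in for named locks


class Guard:
    def __init__(self, names):
        self.names = sorted(names)  # consistent order avoids lock inversion
        self._held = []

    def __enter__(self):
        for name in self.names:
            lock = _named_locks[name]
            lock.acquire()           # acquire first...
            self._held.append(lock)  # ...then record it as held
        return self

    def __exit__(self, *exc):
        while self._held:
            self._held.pop().release()


# Intended usage: each thread builds its *own* Guard from the shared names.
def worker(names):
    with Guard(names):
        ...  # critical section protected by all of the named locks
```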
| sahid | o/ | 12:38 |
| sahid | I have noticed this function during reboot: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4286 | 12:38 |
| sahid | we are in a situation where we would like to avoid rebooting the VMs | 12:39 |
| sahid | usually we use live-migration when we have to (expect to) fix issues like that | 12:39 |
| sahid | i'm wondering if it makes sense for you as well that i share a patch to clean up attachments that are not valid anymore during the pre-live-migration process | 12:40 |
| sahid | basically, somewhere around that point: https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L9342 | 12:41 |
| sahid | if that feels good to you I would provide a bug report and share a patch | 12:41 |
| sean-k-mooney | gibi: it's pretty cheap for me to add that support | 12:46 |
| sean-k-mooney | gibi: so i can add it | 12:46 |
| gibi | OK | 12:47 |
| *** | haleyb|out is now known as haleyb | 12:55 |
| sean-k-mooney | gibi: i would still discourage the pattern of creating a context manager and passing it as a parameter to 2 or more threads in general as i think that likely is not generically thread safe, but i can make it less unsafe in this case. sorry, was on a 1:1 so was not really around for the last 30 mins | 12:57 |
| gibi | ack. I'm not advocating to make that work, I'm OK if we clearly document the usage pattern of the Guard | 13:28 |
| gibi | if the shared data is not the Guard, just the list of names the Guard is instantiated with, that is fine by me | 13:28 |
| gibi | just pointing out that | 13:29 |
| gibi | that `with threading.Lock()` is wrong while `with Guard(names)` is the correct way is non-intuitive | 13:29 |
| gibi | hence we need documentation | 13:29 |
| sean-k-mooney | right | 13:29 |
| sean-k-mooney | that's the delta in usage | 13:30 |
| sean-k-mooney | i need to push a patch to cyborg and venus to fix a requirement issue that i just fixed in watcher | 13:30 |
| sean-k-mooney | but when i'm done i'll update the patch again. thanks for reviewing | 13:30 |
| gibi | cool thanks | 13:31 |
| opendevreview | Vasyl Saienko proposed openstack/nova master: Set vnic model and queue for VDPA https://review.opendev.org/c/openstack/nova/+/966855 | 14:21 |
| opendevreview | Vasyl Saienko proposed openstack/nova master: Set vnic model and queue for VDPA https://review.opendev.org/c/openstack/nova/+/966855 | 14:39 |
| vsaienko | sean-k-mooney: I found that the queues parameter is not passed to the domain xml interface section in the case of VDPA; this was the reason for the slow iperf test. I've reported a bug https://bugs.launchpad.net/nova/+bug/2131150 and created a fix https://gerrit.mcp.mirantis.com/c/packaging/sources/nova/+/247944 please add it to your review queue | 14:40 |
| sean-k-mooney | vsaienko: yep i just -1 your patch | 14:41 |
| sean-k-mooney | vsaienko: this is not a bug | 14:41 |
| sean-k-mooney | it was not possible to use multi queue when we added vdpa support | 14:41 |
| sean-k-mooney | and as a result it was intentionally not implemented | 14:41 |
| vsaienko | hm... but now it's possible | 14:42 |
| sean-k-mooney | so one, this is a new feature; two, if we are going to support this we will need much more testing than what is in the patch and more docs as well | 14:42 |
| sean-k-mooney | vsaienko: right, so as this is a feature request it would normally need a blueprint or a spec | 14:42 |
| sean-k-mooney | we could do it under a bug in very limited cases but it's not actually a bug from an upstream point of view | 14:42 |
| vsaienko | ack | 14:42 |
| sean-k-mooney | the main gaps are when did this support land in the kernel and qemu | 14:43 |
| sean-k-mooney | we need to document that both in a release note and in the vdpa docs | 14:43 |
| sean-k-mooney | depending on the qemu version we may also need to add a version check | 14:43 |
| sean-k-mooney | vsaienko: have you tested the packed ring format? | 14:44 |
| sean-k-mooney | vsaienko: one of the open questions too | 14:45 |
| sean-k-mooney | is what happens if you have more cpus than the VF that backs the vdpa device supports in hardware queues | 14:45 |
| sean-k-mooney | i.e. is it ok for the number of queues we pass to qemu to exceed or even not match exactly the number of queues supported by the hardware | 14:46 |
| sean-k-mooney | vsaienko: if we can answer those questions and ensure there is no upgrade impact when live migrating between hosts with and without this support, we may be fine to proceed with a specless blueprint, but those questions are why this would normally require a spec | 14:48 |
| sean-k-mooney | for example it is possible to set the queues for each vdpa device when they are created https://github.com/os-net-config/os-net-config/commit/384f46f8e0dfb847ab543e82f8cd08f5065edceb#diff-f43ec4c1cd48d8d241917be63e57e70516ef2bbba6e9ba786ce2ad5f1a92ca7dR628-R642 | 14:49 |
| sean-k-mooney | now we don't officially support that in nova but we also don't not support it | 14:50 |
| -opendevstatus- | NOTICE: Zuul job log URLs for storage.*.cloud.ovh.net are temporarily returning an access denied/payment required error, but the provider has been engaged and is working to correct it | 14:50 |
| sean-k-mooney | in the same way, the number of queues for a sriov vf is determined by how you allocate the vf | 14:50 |
| sean-k-mooney | that is how it works today for vdpa as well, so we need to make sure we don't break that flow if you don't enable multi queue via the flavor. | 14:51 |
| sean-k-mooney | vsaienko: sorry, that's a bit of an info dump but you can see why this is non trivial to do correctly and why the documentation of this really matters | 14:52 |
| vsaienko | I haven't checked the packed ring format | 14:53 |
| vsaienko | "is what happens if you have more cpus than the VF supports in hardware queues" - probably this is always the case when we have no multiqueue, the VM always has more CPUs than queues. | 14:53 |
| sean-k-mooney | ack, your change enables opting in to that | 14:53 |
| sean-k-mooney | vsaienko: in nova we set the queue count equal to the number of cpus when you enable multi queue | 14:54 |
| sean-k-mooney | so the question is if the vdpa device only supports 4 queues and you set it to 16 in qemu/libvirt what happens | 14:54 |
| vsaienko | if the queue count in the xml is higher than on the VDPA device, the VM will see the number that is configured on the vdpa side | 14:55 |
| vsaienko | I checked this | 14:55 |
| sean-k-mooney | ack, so we would need to 1) agree that's ok, it should be, and 2) document that so ops are not surprised by it | 14:55 |
| sean-k-mooney | we have a config option to clamp the number of queues so it is already possible today for the vcpu count and queue count to be different even with multiqueue enabled | 14:56 |
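For reference, the clamping sean-k-mooney mentions amounts to something like the sketch below (illustrative, not the literal nova code; [libvirt]max_queues is the existing operator option being referred to):

```python
def pick_queue_count(guest_vcpus: int, max_queues: int | None) -> int:
    """Sketch of the existing sizing logic: virtio multiqueue gets one queue
    per vCPU, optionally clamped by the operator-set [libvirt]max_queues
    option, so vCPUs and queues can already differ."""
    queues = guest_vcpus
    if max_queues:
        queues = min(queues, max_queues)
    return queues


# e.g. a 16 vCPU guest on a host where the operator clamped queues to 8
assert pick_queue_count(16, 8) == 8
assert pick_queue_count(16, None) == 16
```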
| sean-k-mooney | the reason i asked about the packed format is you can now set it here https://review.opendev.org/c/openstack/nova/+/966855/2/nova/virt/libvirt/vif.py#206 | 14:57 |
| sean-k-mooney | by the way, did you intentionally remove this https://review.opendev.org/c/openstack/nova/+/966855/1/nova/virt/libvirt/config.py from your v2? | 14:58 |
| vsaienko | I removed it because driver_name is not set to virtio there, and that config.py change is not needed | 15:06 |
| sean-k-mooney | ack | 15:07 |
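For readers unfamiliar with the XML being discussed, this is roughly the interface element such a change produces, built here with plain lxml rather than nova's config classes (device path and values are just examples):

```python
# Illustration of the libvirt domain XML under discussion: multiqueue and the
# packed ring format are attributes on the interface's <driver> element.
from lxml import etree


def vdpa_interface_xml(dev_path: str, queues: int, packed: bool) -> str:
    iface = etree.Element('interface', type='vdpa')
    etree.SubElement(iface, 'source', dev=dev_path)
    etree.SubElement(iface, 'model', type='virtio')
    driver = etree.SubElement(iface, 'driver', queues=str(queues))
    if packed:
        driver.set('packed', 'on')
    return etree.tostring(iface, pretty_print=True).decode()


print(vdpa_interface_xml('/dev/vhost-vdpa-0', 8, packed=True))
```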
| sean-k-mooney | vsaienko: so it looks like the libvirt support was added by https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/D5QUZT2ZCUKF2526EOLGCB5JWCHHTIRV/?sort=thread and the qemu support by https://patchew.org/QEMU/20210903091031.47303-1-jasowang@redhat.com/ | 15:10 |
| sean-k-mooney | https://github.com/qemu/qemu/commit/402378407dbdce79ce745a13f5c84815f929cfdd | 15:13 |
| sean-k-mooney | so that is qemu v6.2.0 and later | 15:13 |
| vsaienko | yes, looks like it's supported now | 15:14 |
| vsaienko | and I need to check the mtu, it seems we need to set it in the xml as well | 15:14 |
| vsaienko | because right now I can't use more than 1500 inside the guest | 15:15 |
| sean-k-mooney | we do support setting the mtu i think | 15:15 |
| sean-k-mooney | although that is one of the things that will conflict with the vdpa tool | 15:15 |
| sean-k-mooney | our min qemu is currently 6.2.0 https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L221 | 15:16 |
| sean-k-mooney | so we don't have to check if that supports vdpa multi queue | 15:16 |
| sean-k-mooney | but we do have to confirm when the libvirt support actually shipped and if it's in 8.0.0 | 15:16 |
| vsaienko | the libvirt change is in an 8.x version https://bugzilla.redhat.com/show_bug.cgi?id=2024406 | 15:17 |
| sean-k-mooney | ya i was looking at that but that's 8.2.0 | 15:18 |
| vsaienko | "we do support setting the mtu i think" - does it mean that we can't use jumbo frames with vdpa? | 15:19 |
| vsaienko | on libvirt 8 indeed setting the mtu for vdpa is not possible https://paste.openstack.org/show/bi1VqvW3ScWE5obNpaRu/ | 15:20 |
| sean-k-mooney | so the mtu will only be set if the mtu on the neutron network is not 1500 | 15:20 |
| sean-k-mooney | i thought https://issues.redhat.com/browse/RHEL-7298 referenced both mac and mtu but apparently not | 15:21 |
| vsaienko | yes, they closed https://bugzilla.redhat.com/show_bug.cgi?id=2057748 as a duplicate but forgot to implement it? | 15:22 |
| vsaienko | on the nova side the mtu is not configured for the vdpa device at all | 15:22 |
| vsaienko | even when the neutron network has > 1500 | 15:22 |
| sean-k-mooney | ack, that's because of that original limitation | 15:22 |
| sean-k-mooney | right, so i don't know if that was also fixed in libvirt/qemu or not | 15:23 |
| sean-k-mooney | vsaienko: so ya https://github.com/libvirt/libvirt/commit/a5e659f071ae5f5fc9aadb46ad7c31736425f8cf | 15:25 |
| sean-k-mooney | libvirt support for multiqueue came in 8.2.0 | 15:25 |
| sean-k-mooney | that means we need to either raise our min version to libvirt 10, which is declared as our next min version | 15:26 |
| sean-k-mooney | or you would have to support libvirt 8.0.0 by checking the local version before enabling it | 15:26 |
| sean-k-mooney | bumping the min version is the better choice but more work | 15:26 |
| sean-k-mooney | we now know the minimum requirements: at least qemu 6.2.0 and libvirt 8.2.0 | 15:27 |
| sean-k-mooney | that would need to be captured in the release notes | 15:27 |
| vsaienko | ack | 15:27 |
| vsaienko | do we need an additional check for the libvirt version, because right now the min libvirt is 8.0.0 and it is supported starting from 8.2.0 | 15:29 |
| sean-k-mooney | yep, i said that while you were disconnected | 15:29 |
| vsaienko | ack | 15:29 |
| sean-k-mooney | we either need to support 8.0.0 via a version check or bump our min version | 15:29 |
| sean-k-mooney | which we can also do this cycle | 15:30 |
| sean-k-mooney | it's a little more work but we have 1 or 2 features that it would simplify | 15:30 |
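The version-check option would look roughly like this sketch (constant names are illustrative; the pattern follows nova's existing Host.has_min_version() checks, not the actual patch):

```python
# Sketch of the version-gating option, as opposed to bumping the minimum
# libvirt to 10.x. Constant names here are illustrative only.
MIN_LIBVIRT_VDPA_MULTIQUEUE = (8, 2, 0)
MIN_QEMU_VDPA_MULTIQUEUE = (6, 2, 0)   # already nova's minimum qemu


def supports_vdpa_multiqueue(host) -> bool:
    # host is nova.virt.libvirt.host.Host; if the local libvirt is older than
    # 8.2.0 we silently fall back to whatever queue count the vdpa device was
    # created with, i.e. the existing behaviour.
    return host.has_min_version(lv_ver=MIN_LIBVIRT_VDPA_MULTIQUEUE,
                                hv_ver=MIN_QEMU_VDPA_MULTIQUEUE)
```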
| bauzas | sean-k-mooney: gibi: I can join the eventlet meeting but honestly I don't have the context | 15:30 |
| vsaienko | should I add a limitations section and add the mtu limitation here? https://docs.openstack.org/nova/latest/admin/vdpa.html | 15:34 |
| sean-k-mooney | https://bugs.launchpad.net/nova/+bug/2119114 | 15:39 |
| gibi | sean-k-mooney: dansmith: bauzas: Why does it make sense to use a named lock instance in the provider tree? https://github.com/openstack/nova/blob/b7d50570c7a79a38b0db6476ccb3c662b237f69b/nova/compute/provider_tree.py#L252 It was there from the beginning, added by Jay, but it does not make sense to me. This means that any instance of the ProviderTree class will share the same named lock instead of having | 16:58 |
| gibi | its own lock. | 16:58 |
| dansmith | the providertree is always for a single compute provider root in placement (per process) right? | 16:59 |
| gibi | we are deep.copying ProviderTree objects around within the same process | 17:00 |
| dansmith | right but they represent the same provider (tree) in placement right? | 17:00 |
| gibi | but a single ProviderTree represents a single compute | 17:00 |
| gibi | hm | 17:01 |
| gibi | maybe ironic | 17:01 |
| gibi | working with multiple nodes per compute service | 17:01 |
| gibi | has multiple trees one for each node | 17:01 |
| gibi | the ProviderTree class allows multiple roots for some reason :/ | 17:02 |
| bauzas | yeah I remember sometime before we said we could have multiple roots | 17:04 |
| bauzas | at least jay said it :p | 17:04 |
| bauzas | I can't unfortunately remember the usecase where we could have multiple roots per compute | 17:04 |
| bauzas | probably shared storage or something like that | 17:05 |
| gibi | OK, multiple roots is because a single ProviderTree will have all the root providers that are connected to our compute root via placement aggregates | 17:06 |
| gibi | Returns a fresh ProviderTree representing all providers which are in the same tree or in the same aggregate as the specified provider, including their aggregates, traits, and inventories | 17:06 |
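A toy illustration of the two locking choices being debated, plus the deepcopy wrinkle that the follow-up patch addresses (names are illustrative, not the actual ProviderTree code or fix):

```python
# Toy contrast: a process-wide shared lock (what a named lock gives you)
# versus a per-instance lock, which needs deepcopy handling because
# threading.Lock objects cannot be copied.
import copy
import threading


class SharedLockTree:
    # one lock for every instance in the process, like a named lock
    _lock = threading.Lock()

    def update(self):
        with self._lock:
            ...  # mutating two unrelated trees still serializes on one lock


class PerInstanceLockTree:
    def __init__(self):
        self._lock = threading.Lock()

    def __deepcopy__(self, memo):
        # copy.deepcopy() would blow up on the lock, so give the copy its own
        new = self.__class__()
        memo[id(self)] = new
        # ... deep-copy the provider data here, but not the lock ...
        return new


copy.deepcopy(PerInstanceLockTree())  # works; deepcopying a bare lock would not
```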
| opendevreview | Balazs Gibizer proposed openstack/nova master: Fix ProviderTree copying with threading Lock https://review.opendev.org/c/openstack/nova/+/956091 | 17:23 |
| opendevreview | Balazs Gibizer proposed openstack/nova master: [test]Further categorization of disabled unit tests https://review.opendev.org/c/openstack/nova/+/956092 | 17:23 |
| opendevreview | Balazs Gibizer proposed openstack/nova master: Do not fork compute workers in native threading mode https://review.opendev.org/c/openstack/nova/+/965466 | 17:23 |
| opendevreview | Balazs Gibizer proposed openstack/nova master: Compute manager to use thread pools selectively https://review.opendev.org/c/openstack/nova/+/966016 | 17:23 |
| opendevreview | Balazs Gibizer proposed openstack/nova master: Libvirt event handling without eventlet https://review.opendev.org/c/openstack/nova/+/965949 | 17:23 |
| opendevreview | Balazs Gibizer proposed openstack/nova master: Run nova-compute in native threading mode https://review.opendev.org/c/openstack/nova/+/965467 | 17:23 |
| opendevreview | sean mooney proposed openstack/nova master: ensure correct cleanup of multi-attach volumes https://review.opendev.org/c/openstack/nova/+/916322 | 17:31 |
| sean-k-mooney | gibi: ^ i swapped to a reader writer lock in the test too to show that it really fixes the issue | 17:32 |
| sean-k-mooney | and hardened the context manager to technically work if shared across threads even if you should not do that | 17:32 |