gmann | bauzas: sean-k-mooney: dansmith: patches removing the HyperV driver/APIs are up for review, please check https://review.opendev.org/q/topic:%22bp/remove-hyperv-driver%22 | 02:23 |
---|---|---|
*** ministry is now known as __ministry | 05:14 | |
opendevreview | Amit Uniyal proposed openstack/nova master: Allow swap resize from non-zero to zero https://review.opendev.org/c/openstack/nova/+/857339 | 06:45 |
gibi | elodilles: when you have time could you take a look https://review.opendev.org/q/topic:%22power-mgmt-fixups%22+branch:stable/2023.2 ? | 08:40 |
gibi | hold on, I naively thought the master revert had landed | 08:41 |
elodilles | gibi: i'll have a look soon | 08:50 |
elodilles | i mean just after the master patch has landed :D | 08:51 |
gibi | I know the rules :) | 08:52 |
*** tosky_ is now known as tosky | 10:19 | |
opendevreview | Merged openstack/python-novaclient master: Update hacking version in pre commit config https://review.opendev.org/c/openstack/python-novaclient/+/906964 | 10:49 |
sean-k-mooney | gmann: thanks, i'll try and make my way through that series this week | 11:16 |
sean-k-mooney | skimming over it, the patches make sense | 11:17 |
gibi | bauzas: did you deploy the manila series in devstack? if so, which backend did you use, nfs or ceph? | 13:31 |
bauzas | nope I just reviewed the code for the moment | 13:32 |
bauzas | until we merge the API change, it's not a problem if we don't test it | 13:32 |
gibi | bauzas: I just tested it with nfs backend and it worked for me | 13:33 |
gibi | bauzas: so if you test it you can start with ceph to increase manual coverage :) | 13:33 |
gibi | I'm reviewing the code now | 13:33 |
bauzas | OK | 13:34 |
bauzas | I had to work on a lot of changes last week, but I can start now | 13:34 |
bauzas | I also need to test my own mdev live-migration series :) | 13:34 |
opendevreview | Merged openstack/nova master: Attach Manila shares via virtiofs (objects) https://review.opendev.org/c/openstack/nova/+/839401 | 15:43 |
bauzas | Uggla: now that patch is merged, a gentle reminder that I asked you a follow-up patch for https://review.opendev.org/c/openstack/nova/+/839401/36/nova/objects/share_mapping.py#93 | 16:34 |
Uggla | @bauzas, I'll do it no pb. | 16:35 |
opendevreview | zhouzhong proposed openstack/nova master: Validate flavor image min ram when resizing a volume-backend instance https://review.opendev.org/c/openstack/nova/+/906813 | 16:46 |
gibi | Uggla: I see a discrepancy between the spec and the impl | 17:43 |
gibi | The spec says | 17:43 |
gibi | Share attachment/detachment can only be done if the VM state is ``STOPPED`` | 17:43 |
gibi | or ``ERROR``. | 17:43 |
gibi | These are operations only on the database, and no RPC calls will be required | 17:43 |
gibi | to the compute API. This is an intentional design for this spec. | 17:43 |
gibi | As a result, this could lead to a situation where the VM start operation fails | 17:43 |
gibi | as an underlying share attach fails. | 17:43 |
gibi | -- | 17:43 |
gibi | But I see that the actual host attachment happens during the attach API call in my devstack | 17:43 |
gibi | is it intentional? | 17:43 |
Uggla | @gibi, if I'm not wrong, this case is handled, or I misunderstand something. If the share is in error, it is "discarded" from the list and the VM should start. | 17:45 |
Uggla | I think it is in get_share_info | 17:45 |
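To illustrate the behaviour Uggla describes, here is a minimal sketch of filtering out error-state shares at instance start, so a broken share is discarded rather than blocking power-on. The name `get_share_info` comes from the chat, but the body below is only a guess at the described behaviour, not Nova's actual implementation.

```python
# Illustrative sketch only: skip shares whose mapping is in error so that a
# bad share does not block instance start. get_share_info is the name
# mentioned in the chat; this body is a guess at the described behaviour.

def get_share_info(share_mappings):
    """Return the share mappings usable at instance start, skipping errors."""
    usable = []
    for mapping in share_mappings:
        if mapping.get("status") == "error":
            # Discard broken shares instead of failing the whole power-on.
            continue
        usable.append(mapping)
    return usable


if __name__ == "__main__":
    mappings = [
        {"share_id": "share-1", "status": "active"},
        {"share_id": "share-2", "status": "error"},
    ]
    print(get_share_info(mappings))   # only share-1 survives
```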
gibi | my point is the spec states that attaching a share via the REST API is a DB only operation. But in my devstack the host attachment happens during the attach REST API call. | 17:46 |
gibi | I guess the original plan was to do the share attachment to the host during the VM start, but for some reason it is moved to the share attach call. | 17:52 |
Uggla | yes, we discussed that; we moved away from it because it made things more complex and caused some issues. But I don't remember well what the issues were. | 17:54 |
gibi | The current design means that all the share mounting logic and the retry loop waiting for the share access setup from manila is in a synchronous RPC call that holds up the share attach REST API response. | 17:54 |
Uggla | yes. It was the intention to have it synchronous. | 17:55 |
gibi | OK. If bauzas is OK with this then I can accept it, but we need to document that share_apply_policy_timeout cannot be set above our RPC timeout, as that will lead to strange errors. | 17:57 |
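A minimal sketch of the constraint gibi is raising: the share access setup waits inside a synchronous compute RPC call, so `share_apply_policy_timeout` (the option name used in the chat) has to stay below oslo.messaging's `rpc_response_timeout`, or the RPC will time out first. The helper below is purely illustrative and not part of Nova.

```python
# Illustrative only: check that the share-apply wait cannot outlive the RPC
# response timeout. rpc_response_timeout is a real oslo.messaging option
# (default 60s); share_apply_policy_timeout is the option name used in the
# discussion above.

def share_timeouts_are_safe(share_apply_policy_timeout: int,
                            rpc_response_timeout: int = 60) -> bool:
    """Return True when the attach RPC cannot time out before the share
    access setup has finished (or given up)."""
    if share_apply_policy_timeout >= rpc_response_timeout:
        print("WARNING: share_apply_policy_timeout=%ds >= "
              "rpc_response_timeout=%ds; the synchronous attach RPC would "
              "time out before the share access setup completes, surfacing "
              "a confusing error to the API caller."
              % (share_apply_policy_timeout, rpc_response_timeout))
        return False
    return True


if __name__ == "__main__":
    share_timeouts_are_safe(share_apply_policy_timeout=90)   # unsafe example
    share_timeouts_are_safe(share_apply_policy_timeout=30)   # safe example
```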
Uggla | I remember discussing that with @bauzas too. | 17:57 |
bauzas | I quickly looked | 17:57 |
gibi | and we need to amend the spec | 17:57 |
bauzas | I need to leave now | 17:57 |
bauzas | I obviously need to reload that context in my mind | 17:58 |
bauzas | oh I remember | 17:58 |
Uggla | @bauzas, I'm sure we discussed that synchronous stuff etc., but that's an old discussion and I don't remember it well. | 17:58 |
bauzas | the fact is that if you start making it asynchronous (which is OK with me) then you need to manage the states somewhere | 17:59 |
gibi | bauzas: the spec said that the share attach REST API is DB only so I assume the state is managed in the DB :) | 17:59 |
gibi | also I understood from the spec that the host mount happens during the start_instance RPC | 18:00 |
bauzas | because the instance has to be in stopped state, right? | 18:00 |
bauzas | (as a precondition) | 18:01 |
gibi | yes | 18:01 |
bauzas | I'm a bit afraid of a blocking call, I may have missed it | 18:01 |
Uggla | If I remember well, doing it at power_on led to a "large" change in the error handling of power_on / power_off and other actions. | 18:02 |
bauzas | gibi: you only saw that behaviour by testing, right? I'm saying that because I haven't yet reviewed the API change | 18:02 |
bauzas | and I guess the blocking call to the compute RPC API is made there | 18:02 |
gibi | bauzas: I saw that in testing, but I can probably dig up the RPC call from the API code | 18:02 |
bauzas | no worries, I was just asking because I missed your very good point when reviewing | 18:02 |
bauzas | so it should have been done later in the API patch, or I am blind | 18:03 |
gibi | bauzas: https://review.opendev.org/c/openstack/nova/+/836830/42/nova/api/openstack/compute/server_shares.py#150 | 18:03 |
bauzas | Uggla: I think that we more likely discussed how to decouple the RPC call | 18:03 |
bauzas | but not *when* we were calling it | 18:03 |
bauzas | (*when* or *where* if you prefer) | 18:04 |
bauzas | yeah, OK, I'm not blind, I just hadn't gone up to that point yet | 18:04 |
bauzas | Uggla: IIRC we discussed decoupling two actions that were done in lockstep | 18:05 |
bauzas | the first one was to "allocate" the mapping | 18:05 |
bauzas | the other one was to "mount" it | 18:06 |
bauzas | in your early patches, iirc you were doing all of them through the compute manager and I asked you to provide RPC methods for differentiating the two actions | 18:06 |
bauzas | amirite? | 18:06 |
Uggla | bauzas, we discussed the call/cast and yes, all the previous code used the compute part, and we changed it that way. If I'm not wrong. | 18:11 |
Uggla | I have the old code somewhere, I could check. | 18:11 |
bauzas | ok, that's good news then | 18:11 |
bauzas | I'm not yet forgetting everything | 18:11 |
bauzas | so, now you decoupled both actions, | 18:12 |
bauzas | you just need to ensure that you issue those two actions at the right points in the lifecycle we agreed on | 18:12 |
bauzas | meaning that you should call the share mount only when the instance is starting | 18:12 |
bauzas | get rid of that single line : https://review.opendev.org/c/openstack/nova/+/836830/42/nova/api/openstack/compute/server_shares.py#150 | 18:13 |
bauzas | and just call mount_share in the compute manager | 18:14 |
bauzas | when the instance is starting | 18:14 |
bauzas | later you could call the RPC bus for that, but that would be for other actions | 18:14 |
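For context, a hypothetical outline of the lifecycle bauzas is sketching here: the share attach REST API only records the mapping in the database, and the host mount happens later, when the stopped instance is started by the compute manager. None of the names below are Nova's actual code; they only illustrate the spec-described split between recording the mapping and mounting it.

```python
# Hypothetical outline of the lifecycle described above: the share attach
# REST API only writes a DB record, and the host mount happens later, when
# the stopped instance is started. All names here are illustrative; they do
# not reflect Nova's actual API layer or compute manager.

share_mappings = {}          # stand-in for the database table


def attach_share_api(instance_uuid, share_id):
    """API layer: DB-only, no RPC to the compute service."""
    share_mappings.setdefault(instance_uuid, []).append(
        {"share_id": share_id, "status": "inactive"})


def mount_share_on_host(mapping):
    """Compute-manager side: do the actual virtiofs mount (stubbed here)."""
    mapping["status"] = "active"


def start_instance(instance_uuid):
    """Mount pending shares while powering the instance on; a mount failure
    here is what would make the start operation fail, as the spec warns."""
    for mapping in share_mappings.get(instance_uuid, []):
        mount_share_on_host(mapping)
    print("powering on", instance_uuid, share_mappings.get(instance_uuid))


if __name__ == "__main__":
    attach_share_api("inst-1", "share-1")   # instance must be STOPPED/ERROR
    start_instance("inst-1")                # mount happens here, not above
```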
Uggla | nope, we reverted from that, so we do not mount in the compute part. | 18:15 |
Uggla | that was the "original" code. | 18:16 |
bauzas | Uggla: let's do a gmeet with gibi tomorrow | 18:19 |
bauzas | this is late for all of us :) | 18:19 |
Uggla | ok yes, and I'm cooking in parallel so I can't follow well. | 18:20 |
gibi | ack | 18:21 |