Tuesday, 2025-01-07

*** __ministry is now known as Guest519801:39
opendevreviewTakashi Kajinami proposed openstack/nova master: doc: Use dnf instead of yum  https://review.opendev.org/c/openstack/nova/+/93849601:39
opendevreviewhuanhongda proposed openstack/nova master: Rollback live migration when virt driver failed  https://review.opendev.org/c/openstack/nova/+/93815411:21
dansmithartom: gibi sean-k-mooney melwitt do you want to jump on a high-bandwidth chat today about the tpm stuff?14:43
artomWas just about to say...14:44
sean-k-mooneyam sure i can whenever14:44
sean-k-mooneyim almost done my first pass on the vfio-variant driver spec14:44
sean-k-mooney(the first one)14:44
dansmithI'm pretty open today14:44
artomThe ACL vs make the Barbican secret owned by the Nova user thing is kind of a red herring, in the sense that it's an implementation detail that achieves the exact same thing (mostly): the Nova service user can read the Barbican secret. What we're really debating is - do we give access to the Nova user unconditionally, or do we make it opt-in for the user.14:45
artomFor the human user.14:46
sean-k-mooneywell there is that topic14:46
sean-k-mooneybut also should we change how the vtpm secret is stored in libvirt so we can read it back14:46
dansmithwell, or if we use libvirt for it14:47
dansmithyeah14:47
artomThat would make it a non-starter for rolling upgrades, so I thought we'd dropped that idea14:47
sean-k-mooneyand i think we should check if libvirt can already copy it if its not marked as private for what its worth14:47
dansmithand one thing we could discuss is how much of the implementation is the same (i.e. the copying of a secret from source, vs. the dest fetching it) on each thing14:47
dansmithsean-k-mooney: I thought the spec said that wasn't implemented yet?14:47
sean-k-mooneyartom: im not convinced we should be upgrading new vms to support this without requiring them to be cold migrated or hard rebooted or something like that14:47
sean-k-mooneydansmith: its definitely not supported for private secrets14:48
sean-k-mooneybut the docs for this are slightly odd so i would like to check what happens if we dont make them private14:48
artomsean-k-mooney, OK, we definitely need a call then :P14:48
dansmithartom: no, I'm not saying we should require the upgrade path, remember, I just said your reason didn't sound good enough to exclude it.. but if we exclude because of a better reason it's fine14:48
artomOh right.14:49
sean-k-mooneyfyi "The value of the secret must not be revealed to any caller of libvirt, nor to any other node." the bit after the comma makes me wonder what happens for non private secrets14:49
dansmithsean-k-mooney: that sounds like it never tells anyone14:50
sean-k-mooneycorrect if its private14:50
sean-k-mooneybut the default is private=no14:50
sean-k-mooneywe explicitly mark only the vtpm secret as private=yes14:50
sean-k-mooneyi asked internally on the virt channel late last night but no one responded14:51
sean-k-mooneyartom: how much work would it be for you to push up a DNM patch to disable the api block and test live migration with vtpm and just remove setting private=yes?14:52
sean-k-mooneydo you have a similar patch already?14:53
artomNot too much, I think?14:53
artomI can piggy back off https://review.opendev.org/c/openstack/nova/+/92577114:53
artomWait, no, I explicitly _don't_ want to piggy off that, because it has all the fetch-from-barbican-define-in-libvirt-on-the-dest mechanics14:54
sean-k-mooneywell you could just comment out https://review.opendev.org/c/openstack/nova/+/925771/6/nova/virt/libvirt/driver.py#1151214:54
artomAnd we want to test do-nothing-except-make-the-secret-public-and-see-if-it-magically-works14:54
sean-k-mooneywhere you are adding the download14:54
sean-k-mooneyyes exactly14:54
sean-k-mooneyoh actually no there is one other change14:55
sean-k-mooneyyou need to also stop nova deleting the vtpm secret after the vm boots14:55
gibidansmith: artom: sure I can jump on a call14:55
sean-k-mooneyso make it public and dont delete it14:55
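The experiment sketched above hinges on libvirt's `private` flag on secret objects. A minimal illustration of the difference, building the vTPM secret XML in libvirt's documented format (the helper name and UUID here are illustrative, not nova's actual code):

```python
# Sketch: how the libvirt <secret> XML differs when private="yes" vs "no".
# The element and attribute names follow libvirt's secret XML format; the
# helper name and UUID are made up for illustration.
import xml.etree.ElementTree as ET

def vtpm_secret_xml(uuid: str, private: bool) -> str:
    secret = ET.Element('secret', ephemeral='no',
                        private='yes' if private else 'no')
    ET.SubElement(secret, 'uuid').text = uuid
    usage = ET.SubElement(secret, 'usage', type='vtpm')
    ET.SubElement(usage, 'name').text = uuid
    return ET.tostring(secret, encoding='unicode')

# private="yes": per the libvirt docs quoted above, the value must not be
# revealed to any caller, which is what blocks reading it back on migration.
print(vtpm_secret_xml('a81f9b0f-0000-0000-0000-000000000000', True))
# private="no": the proposed DNM experiment -- can the value then be read
# back (and carried to the destination) during live migration?
print(vtpm_secret_xml('a81f9b0f-0000-0000-0000-000000000000', False))
```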
gibiI just finished with the SRIOV vGPU call14:55
dansmithvkw-ozjo-pap14:56
dansmithmeet.google.com/vkw-ozjo-pap14:56
sean-k-mooneyim going to grab a drink but  ill join shortly14:57
artomComing, just on a call to the mechanic where my car is14:58
artom1.5 hours away from Montreal :(14:58
bauzas#startmeeting nova16:01
opendevmeetMeeting started Tue Jan  7 16:01:18 2025 UTC and is due to finish in 60 minutes.  The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot.16:01
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:01
opendevmeetThe meeting name has been set to 'nova'16:01
bauzashi folks16:01
bauzaswho's around ?16:01
sean-k-mooneyo/16:01
gibi /o (partially)16:01
elodilleso/16:01
bauzas(I'm partially here too but I need to run the meeting :) )16:02
* bauzas likes doing three things at same time16:02
bauzasokay let's start and hopefully it will be quick16:03
bauzas#topic Bugs (stuck/critical) 16:03
bauzas#info No Critical bug16:04
bauzas#info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster16:04
bauzasany bugs people wanna raise ?16:04
Ugglao/16:04
fwieselo/16:04
bauzaslooks not, moving on16:05
bauzas#topic Gate status 16:05
bauzas#link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs 16:05
bauzas#link https://etherpad.opendev.org/p/nova-ci-failures-minimal16:05
bauzas#link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&branch=stable%2F*&branch=master&pipeline=periodic-weekly&skip=0 Nova&Placement periodic jobs status16:05
bauzasnova-emulation is in the weeds16:05
bauzas#info Please look at the gate failures and file a bug report with the gate-failure tag.16:06
bauzas#info Please try to provide meaningful comment when you recheck16:06
bauzas#topic Release Planning 16:07
bauzasa few things to mention here16:07
bauzas#link https://releases.openstack.org/epoxy/schedule.html16:08
bauzas#info Nova deadlines are set in the above schedule16:08
bauzas#info Implementation review day is planned tomorrow16:08
bauzasI'll send an email about it ^16:08
bauzas#action bauzas to notify about review day thru email16:08
bauzasthe other, also important :16:09
bauzas#info Specs approval freeze planned for Thursday EOB16:09
bauzasyou are warned16:09
bauzasI'm mostly burned today by spec reviews and I'll continue tomorrow16:09
bauzasanything worth mentioning now ?16:09
s3rj1klate hey to all, connection issues16:10
bauzasmoving on16:10
bauzas#topic Review priorities 16:11
bauzas#link https://etherpad.opendev.org/p/nova-2025.1-status16:12
bauzasnothing to mention, continuing16:12
bauzas#topic Stable Branches 16:12
bauzaselodilles: happy new year16:12
elodilles:)16:12
elodilleshappy new year too o/16:12
elodilles:)16:12
elodillesspeaking of that16:12
elodillesi see not much activity on stable branches in the past weeks16:13
elodillesbut i'm not aware of any stable gate issue16:13
elodilles(maybe because of the above reason o:))16:14
elodilles#info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci16:14
elodillesplease if you see any issue, add there ^^^16:14
elodillesthat's all about stable branches from me16:15
bauzascool16:15
bauzas#topic vmwareapi 3rd-party CI efforts Highlights 16:15
bauzasfwiesel: heya, happy new year too16:16
fwieselHi, happy new year. No updates from my side.16:16
bauzas++16:16
bauzas#topic Open discussion 16:16
bauzasone item in the agenda, quite on time given thursday's deadline16:16
bauzas(bauzas) Specless approval for https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher ?16:16
bauzastl;dr: I'm hereby asking for an exception to write a spec16:17
bauzasthis is just a weigher, we have everything we already need16:17
bauzasin the past, some filters and weighers required a spec, some others not16:17
bauzasso I'm leaning towards asking you to grant this blueprint as it is16:18
gibibauzas: fine by me to have it specless16:18
bauzastbc, we have Instance objects in the HostState16:22
sean-k-mooneywe do16:23
sean-k-mooneyfor affinity/anti affinity filters16:23
sean-k-mooneyand as a result the weighers have them too for the same reason16:23
bauzasthe weigher design will be 'lookup at each of the instances from the host, see their imagemeta from the instance and compare with the request'16:24
sean-k-mooneyand the instance objects have the image metadata available16:24
sean-k-mooneythe only concern is whether that is lazy loaded today or not16:24
bauzasthe instances list ?16:24
sean-k-mooneynot the instnace list16:24
sean-k-mooneythe image metadta which is construction form the cached copy in teh instance_system_metadata table16:25
sean-k-mooneyi dont know if we load that in the host manager16:25
sean-k-mooneywe proably do just wanted ot call that out16:25
bauzasah good point16:25
bauzaswe can't load that later16:26
bauzasas we're on the scheduler16:26
bauzasor we need to target the cell db16:26
sean-k-mooneyhttps://github.com/openstack/nova/blob/master/nova/objects/instance.py#L70-L7116:26
sean-k-mooneywe are good, the instance system metadata is in the default fields16:26
sean-k-mooneyso we have that already16:26
bauzasexcellent16:27
bauzasthanks for checking16:27
bauzasare we then ok with the design ?16:27
sean-k-mooneyi think so. i captured most of my feedback in the blueprint already16:28
sean-k-mooneyso im more or less ok with proceeding to the implementation review based on that design16:28
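The weigher design bauzas described (look at each instance on the host, compare its image metadata with the request) could look roughly like this. This is a standalone sketch: `HostState` and `weigh` are simplified stand-ins for nova's real scheduler classes, not the actual implementation.

```python
# Standalone sketch of the proposed image-metadata-props weigher: prefer
# hosts whose existing instances share image-metadata properties with the
# requested image. HostState here is a simplified stand-in; in nova the
# scheduler's HostState carries full Instance objects (whose
# system_metadata, per the discussion above, is in the default fields).
from dataclasses import dataclass, field

@dataclass
class HostState:
    host: str
    # image properties of the instances already resident on this host
    instance_image_props: list = field(default_factory=list)

def weigh(host: HostState, requested_props: dict) -> float:
    """Count property matches between the request and resident instances."""
    score = 0
    for props in host.instance_image_props:
        score += sum(1 for k, v in requested_props.items()
                     if props.get(k) == v)
    return float(score)

hosts = [
    HostState('cmp1', [{'os_type': 'windows'}, {'os_type': 'windows'}]),
    HostState('cmp2', [{'os_type': 'linux'}]),
]
request = {'os_type': 'windows'}
best = max(hosts, key=lambda h: weigh(h, request))
print(best.host)  # cmp1: scores 2.0 vs cmp2's 0.0
```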
bauzas++16:29
bauzasokay, then let's approve it as specless16:29
bauzas#approved https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher approved as specless16:30
bauzasdoh16:30
bauzas#agreed https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher approved as specless16:30
bauzasthat was it for me16:30
bauzasanything else to mention ?16:30
sean-k-mooneyif there is time/energy i have one topic16:30
bauzasI don't have energy but we have time16:31
sean-k-mooneyhttps://blueprints.launchpad.net/nova/+spec/host-discovery-distributed-lock16:31
sean-k-mooneywe talked about this at the ptg16:31
sean-k-mooneyand there is a related spec https://review.opendev.org/c/openstack/nova-specs/+/93638916:32
sean-k-mooneytldr they would like to be able to enable the discover hosts periodic on multiple schedulers at a time16:32
sean-k-mooneyso i dont actually like the proposal as written16:32
sean-k-mooneybut i did a short poc of one of the alternatives16:32
sean-k-mooneyhttps://review.opendev.org/c/openstack/nova/+/93852316:32
sean-k-mooneyi think this could be a reasonable minor improvement16:33
sean-k-mooneythe change is basically just this https://review.opendev.org/c/openstack/nova/+/938523/2/nova/scheduler/manager.py#11316:33
sean-k-mooneywhat i wanted to know is: if we took this super simple approach of doing leader election in the periodic and returning if not the leader16:34
sean-k-mooneywould this be a spec/blueprint or bugfix?16:34
bauzashmmmm16:35
sean-k-mooneyim asking because of the time constraint for the first two options16:35
gibiit has no API impact so I lean towards specless bp16:35
s3rj1kspecless from me if that counts :)16:35
sean-k-mooneys3rj1k: input is always welcome :)16:36
gibisean-k-mooney: but you can formulate it as a bug as nova allows enabling the periodic in multiple schedulers today16:36
s3rj1k(plus its somewhat covered in the alternatives section of the spec mentioned above)16:36
gibisean-k-mooney: without a safety net16:36
sean-k-mooneyya so my perspective was: if we are taking the heavyweight approach of adding a distributed lock manager, that needed a spec because its complex and a large change16:37
bauzascan't we just add a config option ?16:37
sean-k-mooneyif we do a very very small change to just do best-effort leader election it might be a bug16:37
bauzasI understand the reasoning, magically nova will elect a new leader16:37
sean-k-mooneybauzas: we could also add a config option yes16:38
bauzasbut I'm afraid this periodic could silently run somewhere else without the op noticing it16:38
dansmithomg, DLM?16:38
* dansmith reads up16:38
sean-k-mooneybauzas: so you have to opt into the periodic16:38
bauzasand you know, brainsplits and all the likes happen16:38
sean-k-mooneyby setting a config option to specify the interval today16:38
bauzasthat's my point16:38
sean-k-mooneyso this would just be a change to the behaviour when you opt in16:39
bauzas-1 disables the periodic IIRC16:39
sean-k-mooneyyep and i dont want to change that16:39
bauzasbut I wouldn't trust python for electing my leader16:39
bauzasparticularly the sorted command16:39
sean-k-mooneyso today the db enforces that bad things dont happen16:39
sean-k-mooneyyou just get exceptions in the log16:39
sean-k-mooneyif multiple race that is16:40
sean-k-mooneywhich is annoying for operators16:40
dansmithif you want to automate host discovery active-active, you can do that yourself with nova-manage and your own DLM right?16:40
sean-k-mooneyyes16:40
bauzasyes, that's why I don't like that approach16:41
sean-k-mooneythats a pain in k8s however, which s3rj1k cares about16:41
bauzasyou're considering that DLM is just not needed because sorted exists16:41
gibidefinitely having a full DLM in nova just for this is way overkill16:41
sean-k-mooneyyep which is why i did not like the dlm/tooz approach16:41
bauzasthe SG API isn't also good at doing live healthchecks16:41
bauzasthere could be a reasonable amount of time where nova wouldn't see a node gone16:42
sean-k-mooneywhich is fine16:42
gibitoday if you run the periodic in multiple schedulers you burn power unnecessarily but you don't break nova DB just get exceptions.16:42
sean-k-mooneywe can miss running the periodic for a protracted period of time without bad impacts16:42
bauzaswe could just let CONF.scheduler.discover_hosts_in_cells_interval be mutable and leave DLMs to manage nova-scheduler A/Ps instead of us16:43
sean-k-mooneynope16:44
dansmithThere are also things we could do in nova that don't require a DLM if we really care16:44
sean-k-mooneythat does not work for k8s deployments which is the motivating usecase16:44
dansmithlike the periodic could say "am I the oldest-started nova-scheduler service that is currently up? If so, then run the discovery, if not, then don't"16:44
sean-k-mooneydansmith: right im proposing not using a DLM16:44
dansmiththen only one of them would run it until you shut down the oldest and then the next one would do it.. sort of lazy-consensus election16:44
sean-k-mooneydansmith: thats basically what my patch does16:44
gibidansmith: exactly, that is very close to what sean-k-mooney proposes16:44
bauzasdansmith: I don't like to trust the service group API for electing my leader16:44
dansmithoh, then.. yeah :D16:45
dansmithbauzas: it's not really trust, it's just optimization16:45
bauzassorted was one option, the oldest is another alternative16:45
gibibauzas: don't think about as leader election, it is more like limiting the number of discover host runs based on an input16:45
dansmithI see sean-k-mooney's patch now, sorry I'm catching up16:45
sean-k-mooneysorted is just for determinism16:45
sean-k-mooneydansmith: no worries16:45
dansmithdeterminism .. yeah, let's do something like that if we care, for sure16:46
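The check being discussed (sort the "up" schedulers deterministically and run the periodic only on the first one) can be sketched standalone. The real change lives in `nova/scheduler/manager.py` in the PoC review linked above; the function and data shapes below are simplified illustrations, not nova's actual code.

```python
# Simplified stand-in for the leader-election check discussed above: each
# scheduler sorts the list of currently-up schedulers by host name and the
# discover_hosts periodic only proceeds on the first one. Worst case, two
# schedulers briefly disagree and both run it, which the discussion above
# notes is harmless (the db prevents any actual corruption).
def should_run_discovery(my_host: str, services: list[dict]) -> bool:
    up = sorted(s['host'] for s in services if s['up'])
    # no up schedulers known: nothing to elect, so don't run
    return bool(up) and up[0] == my_host

services = [
    {'host': 'sched-b', 'up': True},
    {'host': 'sched-a', 'up': True},
    {'host': 'sched-c', 'up': False},  # down: excluded from the election
]
print(should_run_discovery('sched-a', services))  # True: first in sort order
print(should_run_discovery('sched-b', services))  # False: not the leader
```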
gibibauzas: it could be a last run timestamp if we don't like age16:46
gibibauzas: but as we hav age we have a good set of value to find a single scheduler that min / max in that value to be the only one to do the work16:46
bauzaswell,16:46
sean-k-mooneyso for today what i really want to know is: spec, bug or specless blueprint, so i know if we have till thursday or m3 to finalise the design and implementation16:47
bauzasthere are possibilities where a nova-scheduler could see itself being the oldest while another one, running 2 secs later, could also see itself as the oldest16:47
sean-k-mooneythats why im not using time16:47
sean-k-mooneyim sorting on the value of host16:48
bauzasI know16:48
bauzasbut that's still the same16:48
dansmithbauzas: sort by id, filter by up, that should be stable16:48
bauzassome host could see itself as up while someother too, 1 sec later16:48
dansmithif it does, then it's flapping from the view of the operator, which I think is not likely to be unnoticed16:48
sean-k-mooneyalso people run this in production without this protection or any kind of arbitration today.16:49
bauzasdansmith: if you are okay with operators unnoticing which nova-scheduler runs where, then ok16:49
sean-k-mooneyi.e. they just ignore the logs when there is a collision16:49
dansmithbauzas: which nova scheduler runs...host discovery?16:49
bauzasyes16:49
sean-k-mooneywhy would they care?16:50
dansmithisn't that the point of this? instead of run-everywhere it's run-one-place, automatically managed?16:50
dansmithright now, they can all run it in parallel all the time, if you care, but that's expensive16:50
gibi^^ yepp16:50
dansmithI thought the point was to make it less expensive and automatically decide that (hopefully) only one does it each time.. seems fine to me16:50
bauzasOK, then let's go for the proposal16:50
dansmithworst case, two do it, no problem16:50
sean-k-mooneyyep ^16:51
bauzaswell, you're right16:51
bauzastwo running at same time don't split brains16:51
bauzasbecause of the periodic itself16:51
sean-k-mooneythey dont even cause an error if you have not added any hosts16:51
dansmithyeah16:51
bauzasokay, then to answer sean-k-mooney's question, I'm fine with it being specless16:51
bauzasI actually would prefer it to be specless please16:52
bauzasas I wouldn't want to step into DLM muds and leader elections16:52
sean-k-mooneyok i think ill file a new blueprint for it and update the description16:52
sean-k-mooneyif we are ok with that, are we ok to approve it async16:52
bauzassure16:52
sean-k-mooneyill do it now and ping outside of the meeting16:53
bauzas#agreed https://review.opendev.org/c/openstack/nova/+/938523 can be filed as a specless blueprint16:53
s3rj1ksean-k-mooney: thanks for spending time on this16:53
bauzascool thanks16:53
sean-k-mooneys3rj1k: no worries, i meant to do it before going on pto i just didnt finish hacking on it until i got back on monday16:54
sean-k-mooneyok thats it from me16:54
bauzascool16:55
bauzasthanks all then16:55
bauzas#endmeeting16:55
opendevmeetMeeting ended Tue Jan  7 16:55:58 2025 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:55
opendevmeetMinutes:        https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.html16:55
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.txt16:55
opendevmeetLog:            https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.log.html16:55
elodillesthanks o/16:56
s3rj1kthanks16:56
gibithanks folks16:58
sean-k-mooneys3rj1k: bauzas: i think https://blueprints.launchpad.net/nova/+spec/distributed-host-discovery covers it17:22
s3rj1kyea, looks good17:26
luvnhey guys! in openstack, is it possible — through some workaround ­— to detach a boot volume to extend and/or retype it?17:40
luvni found an old spec related to this but as far as i know it never moved forward — https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/detach-boot-volume.html17:41
luvnso i was just wondering if we have a workaround for that17:42
dansmithI want to say you might be able to shelve/offload a BFV instance and take action on the volume, but don't quote me17:45
sean-k-mooneydansmith: i think when shelved its in the reserved state, im not sure if cinder will allow extend then but perhaps.17:51
sean-k-mooneyluvn: but no we dont allow detach of the root volume for bfv guests17:51
dansmithyeah idk, but I think it's detached from the host which definitely keeps it pinned17:51
sean-k-mooneyi think you can do retype without special workarounds17:51
sean-k-mooneyim also not sure if you can just do live extend17:52
sean-k-mooneyyou can for non root volumes17:52
sean-k-mooneyluvn: i assume you tried just extending via cinder and it was unhappy17:52
dansmithretyping often means a different connection_info right?17:54
opendevreviewDouglas Viroel proposed openstack/nova master: WIP - Add support for showing scheduler_hints in server details  https://review.opendev.org/c/openstack/nova/+/93860417:55
sean-k-mooneyya but that calls the swap volume api right17:55
luvnthat's right, i did a quick experiment on devstack and was unsuccessful :(17:55
luvnyou can't really retype/resize the volume when it's reserved17:55
dansmithsean-k-mooney: idk, I didn't think so17:55
luvn(which is the state shelving brings it to)17:55
sean-k-mooneydansmith: i dont really know of a reason that the root volume should be different from a data volume with respect to resize/retype17:56
sean-k-mooneyim not saying its not different, just unsure what the reason would be for them to behave differently17:56
dansmithsean-k-mooney: well, I guess I don't really know the story here.. I thought swap_volume was mostly for snapshots and things but maybe it should/would work for retype17:56
dansmithI guess the detach-and-reattach-elsewhere is the thing that is most special about BFVs17:57
sean-k-mooneyright, because we dont want to allow an instance to not have an associated root disk17:59
sean-k-mooneybut retype or extend should not impact that so im not sure either if it can work or not17:59
sean-k-mooneyshelve is worth a shot but i have never tried it if im being honest18:00
dansmithsounds like luvn has18:00
sean-k-mooneyoh because its not allowed in reserved state18:01
sean-k-mooneyperhaps its worth asking cinder about why that is.18:01
luvnyup, that's right18:01
dansmithluvn: and to be clear you've tried with it attached/running as well right?18:01
luvndansmith: i did! i don't really remember the output rn, but it complained about the volume status as well18:02
dansmithack18:03
dansmiththat's consistent with how I thought it worked18:03
dansmith(or didn't)18:03
luvnokay, so with the instance running and the volume attached:18:04
sean-k-mooneythat reminds me that we still have not completed https://review.opendev.org/c/openstack/nova/+/87356018:04
sean-k-mooneyhttps://blueprints.launchpad.net/nova/+spec/assisted-volume-extend18:04
luvnresize: Failed to set volume size: Volume is in in-use state, it must be available before size can be extended18:04
luvnretype: Failed to set volume type: Invalid volume: Retype needs volume to be in available or in-use state, not be part of an active migration or a consistency group, requested type has to be different that the one from the volume, and for in-use volumes front-end qos specs cannot change.18:04
luvnretyping appears to be more "loose" than resize, but still, can't really perform it18:05
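The two errors luvn quoted encode cinder's preconditions: extend (on this path) wants the volume `available`, while retype accepts `available` or `in-use`; neither accepts `reserved`, which is the state a shelved BFV instance leaves its root volume in. A tiny illustration of those rules (a simplified reading of the quoted messages, not cinder's actual code):

```python
# Sketch of the status preconditions implied by the error messages quoted
# above. can_extend/can_retype are hypothetical helpers that mirror the
# quoted checks, not cinder's real validation logic.
def can_extend(status: str) -> bool:
    # "Volume is in in-use state, it must be available before size can
    # be extended"
    return status == 'available'

def can_retype(status: str) -> bool:
    # "Retype needs volume to be in available or in-use state ..."
    return status in ('available', 'in-use')

# 'reserved' is what shelving a BFV instance produces, hence the dead end:
for status in ('available', 'in-use', 'reserved'):
    print(f'{status}: extend={can_extend(status)} retype={can_retype(status)}')
```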
sean-k-mooneywhat storage backend are you using out of interest18:05
luvni'm using ceph18:06
sean-k-mooneyok ya that should be the most flexible18:06
sean-k-mooneyim kind of torn, i guess this would be a new feature either on the cinder or nova side18:08
sean-k-mooneyfor nova to enable it we would need a new instance action18:08
sean-k-mooneyim not sure what would be required on the cinder side to make it work if we did it from cinder instead, or if that is even possible18:08
sean-k-mooneythere is a complex back and forth between nova and cinder and the storage backend to orchestrate, depending on how the volume is attached and if the instance is running, so it wouldnt be a 1 liner to enable unfortunately18:11
luvnoh, i see18:11
sean-k-mooneyfor example for nfs if the vm is running nova needs to grow the backing file on the nfs share (it cant be done by cinder)18:11
sean-k-mooneyfor ceph thats pretty easy, you just need to ask ceph to grow it18:12
sean-k-mooneyfor iscsi/fiber channel cinder would have to ask the storage backend to extend it and that may or may not be visible to the guest before the iscsi export is remounted18:12
sean-k-mooneythe thing is we do support extending cinder volumes when they are not the root volume (i dont recall in which cases its offline vs online)18:14
luvnnow i see why ceph is by far the most flexible heh18:14
luvnin any case, i sent a message on cinder's channel; let's see if we can get a better insight into why these limitations exist18:14
luvni'm legit curious18:14
dansmithlikely because three connection mechanisms require three different paths, among other things :)18:15
dansmithalthough in reserved state that should be not a problem I think18:15
sean-k-mooneyright, reserved means for one that the vm is not running so cinder can always do the operation itself18:16
sean-k-mooneythe only reason nova has to grow it for nfs when the vm is running is qemu has locked the file on the share18:16
sean-k-mooneyhttps://github.com/openstack/nova/blob/master/releasenotes/notes/nova-support-attached-volume-extend-88ce16ce41aa6d41.yaml18:18
luvnso i guess there's a world where it's a matter of cinder allowing these operations to be performed when the volume is reserved18:19
luvn(of course, if there are no impediments, etc)18:19
sean-k-mooneythat release note might be out of date and does not mention BFV but it says "only the libvirt compute driver with iSCSI and FC volumes18:19
sean-k-mooney    supports the online volume size change."18:19
sean-k-mooneywe added support for extending in-use rbd volumes later18:21
sean-k-mooneyhttps://github.com/openstack/nova/commit/eab58069ea6afba875fb0a7d7ac68c7e83ebf14a18:21
sean-k-mooneythat was stein18:21
sean-k-mooneyso everything is in place for ceph for additional cinder data volumes while the vm is running18:27
sean-k-mooneythere must just be something missing for BFV but im not sure why that could be18:28
opendevreviewArtom Lifshitz proposed openstack/nova master: DNM: Try vTPM live migration with non-private, non-eph secret  https://review.opendev.org/c/openstack/nova/+/93861319:56
opendevreviewArtom Lifshitz proposed openstack/nova master: DNM: Try vTPM live migration with non-private, non-eph secret  https://review.opendev.org/c/openstack/nova/+/93861320:34

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!