Monday, 2025-01-06

opendevreviewWenping Song proposed openstack/nova master: support create vm with ipv4 and ipv6  https://review.opendev.org/c/openstack/nova/+/93842702:07
*** __ministry is now known as Guest510302:54
opendevreviewbenlei proposed openstack/nova master: Abort live migration task when stop nova compute service  https://review.opendev.org/c/openstack/nova/+/93822305:44
*** __ministry is now known as Guest512406:30
*** elodilles_pto is now known as elodilles08:34
nimesh_Encountered the following error in nova logs while trying to attach a cinder volume to a vm.  Jan 06 12:52:09 root1-10 nova-compute[1281]: DEBUG os_brick.initiator.linuxfc [-] Could not get HBA channel and SCSI target ID, path: /sys/class/fc_transport/target8:*, reason: Unexpected error while running command. Jan 06 12:52:09 root1-10 nova-compute[1281]: Command: grep -Gil "500507681012b195" /sys/class/fc_transport/target8:*/port_name Jan 06 12:52:09 root108:53
harshHello everyone. We are getting the above error when attaching a volume using the Cinder driver. Has anyone come across this in your environment? We have not made any changes in the switch config or backend config. Our OpenStack deployment is done using DevStack.08:56
sean-k-mooneyhmm, fibre channel. i suspect that is coming from os-brick rather than nova08:57
sean-k-mooneynimesh_:^08:57
sean-k-mooneyoh yes, it's right there in the message: os_brick.initiator.linuxfc 08:58
harshWe can see that the host's WWPN (500507681012b195) is present on the storage backend, and when we attach a volume to this host directly on the storage backend it works fine. But on OpenStack, it triggers addvdiskhostmap and then immediately triggers rmvdiskhostmap to terminate the host attach.09:00
harshand so the volume goes from available to reserved and back to available.09:00
sean-k-mooneythe cinder folks may be able to help more, but it sounds like the volume is not appearing on the host with the expected identifier09:02
sean-k-mooneyi.e. 500507681012b19509:02
nimesh_Thanks for the reference to os_brick.initiator.linuxfc09:41
sean-k-mooneythis could be a case where, if you could just reboot the compute node, it might fix itself. my guess is there is some invalid state that is preventing the attachment of the volume via the host's fibre channel adapter, but i don't really have experience with FC to say exactly how to detect/fix that09:43
sean-k-mooneyalthough i think a host reboot might resolve it09:43
sean-k-mooneythe cinder folks have much more experience debugging that than we do09:43
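
A minimal sketch of checking the same FC state by hand on the compute node, mirroring the grep that os-brick logged above; the WWPN and the target8/host8 numbers are taken from the pasted error and will differ per host:

    # WWPNs of the local FC HBAs (standard sysfs layout)
    cat /sys/class/fc_host/host*/port_name
    # the check os-brick ran: look for the WWPN from the error under fc_transport
    grep -Gil "500507681012b195" /sys/class/fc_transport/target8:*/port_name
    # if nothing matches, rescanning the corresponding SCSI host sometimes makes the target appear
    echo "- - -" | sudo tee /sys/class/scsi_host/host8/scan
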
aaronHey Nova folks. I’m trying to figure out how I can migrate a VM from a Centos 8 Stream system to an Ubuntu 22 host. Both running Yoga.  The main issue I have right now is that the VM uses machine type 'pc-i440fx-rhel7.6.0' which isn't available on my Ubuntu host.  Does anyone have any ideas on how I can work around this, or perhaps if I've missed any Nova tricks to get this working? I am trying to avoid recompiling qemu with the rhel m12:10
sean-k-mooneyaaron: unfortunately the only way to do that is to cold migrate13:13
sean-k-mooneythat's for two reasons: 1) the machine type and 2) the qemu emulator path is different on centos vs ubuntu13:13
sean-k-mooneyso we can't live migrate, but cold migrate will work13:13
sean-k-mooneythe fact it's currently using the rhel-specific pc machine type means that there is no opportunity to live migrate the instance, as that cannot be changed while the guest is running.13:14
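
For reference, a rough sketch of the cold-migration path being suggested; <server-uuid> is a placeholder and the exact confirm subcommand varies with the openstackclient version:

    # cold-migrate the instance to a host picked by the scheduler
    openstack server migrate <server-uuid>
    # once the server reaches VERIFY_RESIZE, confirm (or revert) the migration
    openstack server resize confirm <server-uuid>
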
sean-k-mooneybauzas: this https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher sounds kind of like my boring host weigher idea. not quite the same but similar. i wanted to select the most boring host based on traits rather than image properties. i was also suggesting adding image/flavor affinity weighers that would prefer packing/spreading instances based on the13:21
sean-k-mooneyimage/flavor they request and the instances already on a host. in this case i guess you want to pack/spread instances based on the image properties of the current instance and those requested by the instances already on a host13:21
sean-k-mooneywe do have access to the instances already on a host in the HostState object13:25
bauzassean-k-mooney: not sure I understand what you're saying13:25
bauzassorry13:26
bauzassean-k-mooney: which host weigher are you explaining ?13:26
sean-k-mooneyso for the last couple of releases i have been debating adding 3 new weighers based on internal conversations and customer bugs13:27
sean-k-mooneythe 3 i was considering implementing were an image affinity weigher, a flavor affinity weigher and a boring host weigher13:27
sean-k-mooneythe image property weigher in https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher13:27
sean-k-mooneysounds like it's similar to those in intent13:28
sean-k-mooneyi assume the intent is to look at the image properties in the current request and prefer or avoid hosts that have instances with the same properties, correct?13:28
bauzasah, ok13:29
sean-k-mooneyso rather than soft affinity/anti-affinity based on the flavor/image, have soft affinity/anti-affinity based on the image properties13:29
sean-k-mooneyso you can pack/spread similar images; in this case OS_TYPE=windows is the motivating example13:30
bauzasyeah indeed, some *people* would want to be able to have Windows instances in some hosts13:30
bauzaslike, being packed13:30
sean-k-mooneybut without forcing it right13:30
bauzasexactly13:30
bauzasjust because of the money you need to pay to Microsoft :)13:31
sean-k-mooneyand would this use host aggregate metadata or just the instances on the host13:31
bauzasI discussed the aggregate approach with the *people*13:31
sean-k-mooneyok, the reason i asked is that i could see both as options, but i think this would be the first aggregate-aware weigher13:32
sean-k-mooneyit's not actually a blocker either way13:32
bauzasbut the problem with aggregates is that you need to know *how many* hosts you will need to use per the aggregate (for example for Windows)13:32
sean-k-mooneyjust wondering13:32
sean-k-mooneyya, so that's why i was wondering if we would do this more organically13:32
sean-k-mooneyby looking at the instances already on a host rather than using aggregate metadata13:32
bauzasof course, you can add a new host to an existing aggregate, but if you want to remove one host from an aggregate, then you need to move the instances13:32
sean-k-mooneynot necessarily13:33
bauzasthat's why they prefer to have a way to pack the windows instances13:33
sean-k-mooneywell ya kind of 13:33
bauzaswithout creating aggs13:33
sean-k-mooneyso i would be inclined to do this without relying on aggregates13:33
sean-k-mooneythe image affinity weigher i was considering was a more restrictive version of that, i.e. it would only look at the image uuid rather than the properties and try to pack/spread based on that13:34
sean-k-mooneyif i was to implement https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher i would either not depend on host aggregates or provide two different implementations13:35
sean-k-mooneyone that looked at the instances already on the host and a second that would look at metadata on the host's aggregates13:36
sean-k-mooneybut i think as an operator i would want this to work more or less automatically, without requiring me to add aggregate metadata or a lot of config13:37
bauzasexactly13:37
bauzasthat's what I wrote in the description13:37
sean-k-mooneywell, the reason i started this conversation is that i did not find the description particularly clear13:37
bauzasha13:38
sean-k-mooneyi could read between the lines, but if you want to ask for a specless blueprint tomorrow i think you need a clearer description13:38
bauzaswell, I wrote the problem and I explained why aggregates can be used but why we need to create a new weigher13:38
sean-k-mooneyso "That's why we would like to create a specific ImageMetadataPropsWeigher for looking at how many instances have a specific prop and then operators could use it for that case or for something else (like packing instances by machine types)"13:38
bauzaswithout using such aggs13:38
sean-k-mooneythe last line13:38
sean-k-mooneyis really what you want to do, right13:38
sean-k-mooneyit was not clear to me that you wanted to avoid aggregates13:39
bauzasI can surely modify the description :)13:39
sean-k-mooneyif you can add something explicit like "This blueprint proposes adding a new weigher that will pack/spread instances based on the image properties requested by the instance and the image properties of existing instances on individual hosts"13:40
sean-k-mooneythen i think it will be fine13:40
sean-k-mooneythe other comment i have is more an implementation detail, but since all weighers are enabled by default, the multiplier for this one should likely be 0 to avoid any upgrade impact, and then +/- can be used for spread/pack13:42
bauzassean-k-mooney: done :)13:43
sean-k-mooneyi'm in two minds about whether this should require a spec or not, but i assume we will discuss that in the team meeting tomorrow13:43
sean-k-mooneyi don't think it's particularly controversial13:43
sean-k-mooneybut we can see what others think13:43
sean-k-mooneysince we already have the instances in memory in the HostState object for the affinity/anti-affinity filter, this should not require any additional db lookups, so i expect the performance impact to be pretty minimal13:45
sean-k-mooneyand if the multiplier defaults to 0 there is no upgrade impact13:45
sean-k-mooneyso i'm leaning towards specless13:45
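
To illustrate the upgrade-impact point, a hypothetical nova.conf snippet; the option name image_props_weight_multiplier does not exist, it is only a guess at how such a weigher might be wired up, following the existing *_weight_multiplier pattern:

    [filter_scheduler]
    # hypothetical option: 0.0 means the new weigher has no effect (upgrade-safe default);
    # a positive value packs instances with similar image properties, a negative value spreads them
    image_props_weight_multiplier = 0.0
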
bauzassean-k-mooney: that's a good question, that's why I provided the bp, but we need to discuss this tomorrow indeed13:46
sean-k-mooneyi assume the algorithm would be something like (count of common image properties with the instance / count of total image properties of existing instances)13:48
sean-k-mooneythat way, the more image properties the current instance has in common with the instances on a given host, the more we prefer it13:48
sean-k-mooneycommon/total will naturally pack, or we can take 1 - (common/total) to spread by default13:51
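
A small Python sketch of the scoring idea described above (common/total to pack, 1 - common/total to spread); the flat dicts of image properties and the function shape are simplifications, not the real scheduler weigher interface:

    # Sketch only: score a candidate host by how many image properties the
    # requested instance shares with the instances already running on it.
    def score_host(requested_props, host_instances_props, spread=False):
        # union of all image properties of instances already on the host
        existing = set()
        for props in host_instances_props:
            existing.update(props.items())
        if not existing:
            return 0.0  # empty host: neutral score
        common = len(set(requested_props.items()) & existing)
        ratio = common / len(existing)
        # common/total naturally packs; 1 - common/total spreads
        return 1.0 - ratio if spread else ratio

    # example: packing prefers the host that already runs windows guests
    print(score_host({"os_type": "windows"}, [{"os_type": "windows"}]))  # 1.0
    print(score_host({"os_type": "windows"}, [{"os_type": "linux"}]))    # 0.0
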
sean-k-mooneybauzas:i updated the whiteboard with my comments14:04
*** sfinucan is now known as stephenfin14:06
sean-k-mooneybauzas: by the way, the bill would not be huge if you spread windows instances across all hosts14:06
sean-k-mooneybauzas: microsoft only does host-based licensing on hyperv14:06
sean-k-mooneyfor libvirt, the volume licensing is not host-based but vm-based, so packing or spreading has no impact in general on cost14:07
sean-k-mooneythe only way cost factors into this is if you segregated your instances today based on guest os; then you may have under-utilisation because of the static partitioning14:09
sean-k-mooneywhich the weigher can avoid to some degree by allowing linux guests to use the slack space, as you no longer need to statically partition. in a normal libvirt deployment i would not recommend partitioning hosts based on OS_TYPE any more, really. there are some cases where it could make sense but in general not so much14:10
sean-k-mooneythe weigher is still reasonable even without the cost motive IMO14:11
aaronthanks for the info sean-k-mooney - I did see your reply on the mailing list thread on this from 18 months ago. I can work around the qemu-kvm path using a symlink. I will do some testing with cold-migration rather than hacking qemu-kvm at this point14:13
sean-k-mooneyah ok. ya, the symlink should work for the path issue. unfortunately rhel, and as a result centos 8, compiled out the standard machine types from upstream qemu, so it's not possible to use a common machine type between centos and ubuntu14:14
sean-k-mooneyaaron: as i said on a past mail thread, centos 6/7 could live migrate to the contemporary ubuntu because, at least in the centos 6 time, it had the upstream qemu machine types so you could select a common one14:15
sean-k-mooneyin rhel 7 or 8 they stopped packaging the upstream qemu machine types in rhel, which broke this interoperability14:16
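
For the path workaround aaron mentions, a hedged example of what the symlink on the Ubuntu destination might look like, assuming the instance XML from the CentOS host references /usr/libexec/qemu-kvm; paths can differ per release:

    # on the Ubuntu host, map the RHEL/CentOS emulator path to the Ubuntu qemu binary
    sudo mkdir -p /usr/libexec
    sudo ln -s /usr/bin/qemu-system-x86_64 /usr/libexec/qemu-kvm
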
aaronsean-k-mooney - got it, that makes sense, thanks. i will have a play - cold migration is a valid way forward so that should be good :)14:28
aaron(fwiw i don't have a bouncer setup atm so if I DC then i'm not being rude - my laptop has probably died/slept but I can see history on the openstack eavesdrop pages )14:29
sean-k-mooneyno worries, i also don't use a bouncer but i never turn off my laptop14:30
sean-k-mooneywell unless im on pto for a few weeks :)14:30
opendevreviewTakashi Kajinami proposed openstack/nova master: doc: Use dnf instead of yum  https://review.opendev.org/c/openstack/nova/+/93849616:21
opendevreviewTakashi Kajinami proposed openstack/placement master: doc: Use dnf instead of yum  https://review.opendev.org/c/openstack/placement/+/93849916:24
dansmithartom: can you resolve the comments on your tpm spec that have been addressed? It's cluttered with lots of things, many of which seem out of date now17:31
dansmithartom: left a bunch of comments that will make gibi hate me, let me know if you want to chat about them high-bandwidth18:03
artomdansmith, lemme take a look18:04
artom(I kinda assumed the original comment author would resolve their comment if my changes/response adequately addressed them. I always feel bad making others' stuff disappear)18:05
sean-k-mooneyartom: it depends, generally we expect the patch author to mark them as done if they made the change18:07
sean-k-mooneybut it's kind of a judgement call18:07
dansmithyeah, which is why I hate the resolve stuff, but since it's there, someone has to do it18:10
dansmithsean-k-mooney: iso+gpt now passing: https://review.opendev.org/c/openstack/nova/+/93183318:22
sean-k-mooneyawesome18:33
artomdansmith, left some answers. I fixed all the typos, but I won't push just yet with no other changes so far, pending more discussion.18:48
artomdansmith, I think I'm coming around to your way of thinking - re: making the secrets owned by the Nova user, if that only means that compute host root and no one else gets access to them.19:04
artomOut of respect for gibi, I'd like to get his thoughts when he's back19:05
dansmitheither that or libvirt storing them seem like the ideals you're going for... not exposing them to the api admin person, but allowing the system to reboot and migrate instances on their behalf if authorized to do so19:05
dansmiththe benefit of the former is that they're not stored on disk in the theft case19:05
dansmithartom: for sure, wait for gibi of course19:05
artomStoring in libvirt would break your rolling upgrades live migration point.19:06
artomIf the source isn't using the new code, it can't read it back and send it to the new dest.19:06
dansmithyep19:06
artomSo we're converging on making it Nova-owned.19:06
dansmithI wasn't making the argument for that being mandatory, I was saying that your argument for why it's not in your arrangement seemed weak19:07
dansmithif there's a stronger technical reason for it, then it's reasonable19:07
artomAck.19:07
artomSo - do I want to rewrite this now, or chat to gibi first?19:08
dansmithnah, wait for gibi19:08
artom*procrastinating artom is happy*19:08
sean-k-mooneydansmith: artom: i was trying to figure out why downstream ci was failing and gave up, so i was not following ^19:41
artomsean-k-mooney, don't bother, it's an infra thing19:41
sean-k-mooneybut skimming back, dansmith are you in favor of the tpm secret being owned by nova19:41
artomsean-k-mooney, you can just read the comments on the review19:41
artomsean-k-mooney, tl;dr yes to what you just said19:41
opendevreviewMerged openstack/nova master: Refactor response schemas for share API  https://review.opendev.org/c/openstack/nova/+/93665620:36
opendevreviewsean mooney proposed openstack/nova master: [WIP] allow discover host to be enabeld in multiple schedulers  https://review.opendev.org/c/openstack/nova/+/93852321:25
opendevreviewsean mooney proposed openstack/nova master: [WIP] allow discover host to be enabeld in multiple schedulers  https://review.opendev.org/c/openstack/nova/+/93852321:37
