opendevreview | Merged openstack/nova master: block_device_info: Add swap to inline https://review.opendev.org/c/openstack/nova/+/826523 | 03:10 |
opendevreview | Amit Uniyal proposed openstack/nova master: Adds check for VM snapshot fail while quiesce https://review.opendev.org/c/openstack/nova/+/852171 | 05:19 |
opendevreview | Merged openstack/nova master: libvirt: Improve creating images INFO log https://review.opendev.org/c/openstack/nova/+/826524 | 06:52 |
opendevreview | Merged openstack/nova master: libvirt: Remove defunct comment https://review.opendev.org/c/openstack/nova/+/826525 | 06:57 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Trigger reschedule if PCI consumption fail on compute https://review.opendev.org/c/openstack/nova/+/853611 | 09:38 |
gibi | sean-k-mooney: regarding yesterday's pci filter bug discussion: the PciDevicePool object is set up differently depending on where it is used. in the compute it has a devices dict with PciDevice objects, but that is not present from the scheduler's perspective. | 10:04 |
gibi | so we cannot simply replace support_requests with consume_requests | 10:04 |
gibi | as consume_requests depends on the pool['devices'] to be present | 10:04 |
gibi | we can call apply_request and that will decrease the pool count so it will detect the double booking. the only downside is that it does not support dependent devices | 10:06 |
gibi | as that would again require pool['devices'] to be present in the stats | 10:07 |
sean-k-mooney | ack | 10:07 |
gibi | btw support_requests is called from both the PciPassthroughFilter and the NumaTopologyFilter | 10:07 |
sean-k-mooney | ya the scheduler just works on the count | 10:07 |
sean-k-mooney | yes | 10:07 |
sean-k-mooney | the numa topology filter needs to validate the device numa topology | 10:08 |
gibi | yepp | 10:08 |
gibi | so I will replace support with apply | 10:08 |
gibi | that will enhance the scheduling logic | 10:08 |
sean-k-mooney | you probably want to call both, no? | 10:08 |
sean-k-mooney | support then apply | 10:08 |
gibi | apply does what support does but also decreases counts | 10:09 |
sean-k-mooney | support is just a simple dict comprehension so it's cheap | 10:09 |
sean-k-mooney | ack but is apply much more complex or about the same | 10:09 |
gibi | I can call both but I think apply is a superset of support | 10:09 |
sean-k-mooney | have not looked in a while | 10:09 |
gibi | support is basically _filter_pools, apply is _filter_pools + _decrease_pool_count | 10:10 |
sean-k-mooney | ya im just wondering about cost but i guess apply is already called for multi create | 10:10 |
gibi | apply is called at the end yes | 10:10 |
gibi | anyhow | 10:10 |
sean-k-mooney | ack so i guess that is fine | 10:10 |
sean-k-mooney | will apply decrease the pools if they all don't fit | 10:11 |
sean-k-mooney | i.e. can it handle partial cases | 10:11 |
sean-k-mooney | i'm just wondering do we need to do an atomic swap | 10:11 |
gibi | I can call apply on a local copy of stats | 10:12 |
gibi | then drop the copy | 10:12 |
sean-k-mooney | ack | 10:12 |
gibi | so no interference between parallel requests | 10:12 |
sean-k-mooney | well i was thinking swap the copy with the original if it succeeds | 10:12 |
sean-k-mooney | so that multi create works | 10:12 |
sean-k-mooney | although | 10:13 |
sean-k-mooney | maybe not | 10:13 |
sean-k-mooney | dropping it might be better to not break the numa topology filters | 10:13 |
gibi | we have two phases for multicreate: 1) running filters (this will apply on a copy) 2) consuming the selected host (this already applies on the shared stats) | 10:13 |
sean-k-mooney | right we will need to do apply 3 times | 10:13 |
sean-k-mooney | pci filter, numa topology filter and host manager | 10:14 |
gibi | yes | 10:14 |
sean-k-mooney | the first two should be copies to not break each other | 10:14 |
gibi | yes | 10:14 |
gibi | so I will keep the support call to do the apply on a copy | 10:14 |
gibi | and keep the apply work on the shared stats | 10:14 |
sean-k-mooney | you could make supports call apply internally on a copy | 10:14 |
gibi | but then I need to signal to apply when to copy and when not to copy | 10:15 |
sean-k-mooney | i have not looked at the signature but do we pass in the pools to apply | 10:15 |
sean-k-mooney | or does it get them itself | 10:15 |
sean-k-mooney | i was assuming we passed them in | 10:16 |
sean-k-mooney | so supports could do the copy for it | 10:16 |
sean-k-mooney | _apply takes the pools https://github.com/openstack/nova/blob/master/nova/pci/stats.py#L622-L653 | 10:17 |
sean-k-mooney | its up to you which you think is cleaner | 10:17 |
gibi | yes, but apply_requests doesnt | 10:17 |
gibi | anyhow I will push the code soon | 10:17 |
sean-k-mooney | let me know when its ready to review | 10:17 |
gibi | and we can look at that | 10:18 |
sean-k-mooney | cool | 10:18 |
gibi | thanks for the discussion | 10:18 |
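The copy-then-apply idea from the exchange above, as a rough sketch: the filter consumes the PCI requests on a throwaway deep copy of the host's pool stats and simply drops it, so parallel filter runs (and multi-create candidates) never mutate the shared stats; only the host manager's later consume step does. PciDeviceStats.apply_requests and PciDeviceRequestFailed mirror nova's real names, but the filter helper itself is illustrative, not the actual patch.

```python
import copy

from nova import exception


def pci_requests_fit(host_state, pci_requests):
    """Illustrative scheduler-side check using apply on a copy.

    apply_requests is a superset of support_requests: it runs the same
    pool filtering (_filter_pools) and additionally decreases the pool
    counts (_decrease_pool_count), which is what catches double booking.
    """
    if not pci_requests:
        return True

    # Work on a copy so a failed or partial apply never leaks into the
    # shared stats used by the other filters and parallel requests.
    stats_copy = copy.deepcopy(host_state.pci_stats)
    try:
        stats_copy.apply_requests(pci_requests)
    except exception.PciDeviceRequestFailed:
        return False
    # The copy is dropped here; nothing has been consumed for real yet.
    return True
```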
sean-k-mooney | i'm going to try and power through and finish the vdpa patches today, just an fyi | 10:24 |
sean-k-mooney | its mainly just tests and docs at this point | 10:24 |
opendevreview | efineshi proposed openstack/python-novaclient master: Fix nova host-evacuate won't work with hostname is like a.b.c https://review.opendev.org/c/openstack/python-novaclient/+/853465 | 10:24 |
sean-k-mooney | although i have one downstream bug to look at first... | 10:25 |
gibi | sean-k-mooney: sure I can review the rest of the vdpa series when it is ready | 10:27 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Trigger reschedule if PCI consumption fail on compute https://review.opendev.org/c/openstack/nova/+/853611 | 10:29 |
opendevreview | Balazs Gibizer proposed openstack/nova master: Trigger reschedule if PCI consumption fail on compute https://review.opendev.org/c/openstack/nova/+/853611 | 10:35 |
opendevreview | Merged openstack/nova master: imagebackend: default by_name image_type to config correctly https://review.opendev.org/c/openstack/nova/+/826526 | 12:31 |
opendevreview | Merged openstack/nova master: image_meta: Add ephemeral encryption properties https://review.opendev.org/c/openstack/nova/+/760454 | 12:31 |
opendevreview | Merged openstack/nova master: BlockDeviceMapping: Add encryption fields https://review.opendev.org/c/openstack/nova/+/760453 | 12:31 |
opendevreview | Merged openstack/nova master: BlockDeviceMapping: Add is_local property https://review.opendev.org/c/openstack/nova/+/764485 | 12:31 |
opendevreview | Merged openstack/nova master: compute: Update bdms with ephemeral encryption details when requested https://review.opendev.org/c/openstack/nova/+/764486 | 12:31 |
opendevreview | sean mooney proposed openstack/nova master: add sorce dev parsing for vdpa interfaces https://review.opendev.org/c/openstack/nova/+/841016 | 12:47 |
opendevreview | sean mooney proposed openstack/nova master: add sorce dev parsing for vdpa interfaces https://review.opendev.org/c/openstack/nova/+/841016 | 12:52 |
*** abhishekk is now known as akekane|home | 13:42 | |
*** akekane|home is now known as abhishekk | 13:43 | |
dansmith | gibi: can you look at my reply here real quick? https://review.opendev.org/c/openstack/nova/+/852900 | 13:49 |
dansmith | if you agree that I need to chase down the tests that fail because of shared state, I'll give that a shot | 13:50 |
gibi | dansmith: sure I will look in 5 | 13:50 |
dansmith | but if you have some other idea about why that might be, I'll be glad to have it before getting into that rabbit hole :) | 13:50 |
dansmith | thx | 13:50 |
gibi | OK I need to run it locally to see the issues | 13:53 |
dansmith | the libvirt reshape test is the one I remember | 13:55 |
dansmith | hmm, maybe resetting the client during restart_compute_service is all I need | 13:58 |
dansmith | I need to run a full set now to see if the reshape ones are the only ones | 13:58 |
dansmith | what I really wanted to say in that comment was something like "this compute service stack is where we use a client with all the complex internal state and thus maybe we shouldn't share that with anything else that isn't part of this set of objects" | 14:00 |
dansmith | which is maybe still a good idea, I dunno | 14:00 |
dansmith | but re-using the same init code has the benefit of the similar error messages and things | 14:00 |
gibi | hm restart_compute_service will trigger the creation of a new ComputeManager instance | 14:02 |
gibi | so you are right, that behaves differently in the test, where the two ComputeManagers will share report client state, from reality, where a compute restart will recreate the client | 14:03 |
dansmith | yeah, so I put a call there to reset the global state and it passed the few reshape tests I had in my history | 14:04 |
dansmith | running the full set now | 14:04 |
gibi | but, in func tests we run all of our computes in the same process, so now they will share the same report client across compute hosts | 14:05 |
dansmith | will that work because of the multi-root functionality you mentioned? | 14:06 |
sean-k-mooney | i kind of feel like we should separate the singleton changes from the lazy loading | 14:06 |
sean-k-mooney | if you have not already done that | 14:06 |
dansmith | but as I said above, I'm also fine keeping them separate for the compute manager part if you think that's better | 14:06 |
sean-k-mooney | just so there is less change to consider | 14:06 |
gibi | dansmith: I'm not sure about the scope of the multi root functionality | 14:06 |
dansmith | and I'll try to write something more concise than my verbose sentence above, but more useful than the one I put in there that caused the confusion | 14:07 |
dansmith | sean-k-mooney: they're already together, and can't be separated without losing some of the behavior that gibi wanted to keep | 14:07 |
gibi | I think in short term keep a separate client per ComputeManager instance and note why we are doing it. | 14:07 |
dansmith | gibi: ack, sounds good | 14:08 |
gibi | I dont like the global but I checked and in most cases it is safe based on how we use it | 14:08 |
gibi | the compute manager is an exception | 14:08 |
gibi | at least due to the func test, but also might be due to ironic multiple node per compute | 14:09 |
dansmith | the global results in fewer hits to keystone and also fewer places we could fail if a call to keystone fails, and mirrors our other client behaviors | 14:09 |
dansmith | (the internal state does not, of course, but...) | 14:09 |
gibi | yeah I accept the compromise | 14:10 |
gibi | the global has pros and cons | 14:10 |
dansmith | if you really hate the global I can switch it to per-use lazy load | 14:10 |
dansmith | but aside from the state thing, I don't know why we would perfer that | 14:10 |
gibi | yeah, it's only the shared state that makes it complicated. | 14:11 |
dansmith | (or prefer even) | 14:11 |
dansmith | ack, so we could also make each call to get the singleton generate a new local state object so that that part is not shared everywhere | 14:11 |
dansmith | I could leave a note in there with the idea in case it becomes problematic in the future | 14:11 |
gibi | if I had time I would also trim the report client to have only those methods there that depend on the shared state and move the independent functions somewhere else to make it clear what is problematic | 14:12 |
dansmith | ack, there are several things we *could* do to make this cleaner for sure | 14:12 |
gibi | the extra note in the singleton works for me | 14:12 |
* gibi needs a time machine | 14:12 | |
dansmith | ack will do that, no singleton for the compute manager and add that test | 14:13 |
gibi | ack | 14:13 |
gibi | thanks | 14:13 |
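For reference, the shape of the lazy singleton being discussed (a minimal sketch, not the patch under review): one process-wide report client created on first use, plus a reset hook of the kind a functional test's restart_compute_service would need so a "restarted" in-process compute gets a fresh client. Per the agreement above, the ComputeManager keeps its own client rather than the singleton because of the client's internal provider-tree state.

```python
from nova.scheduler.client import report

_placement_client = None


def get_placement_client():
    """Return the process-wide placement client, creating it lazily.

    Lazy creation means keystone is only contacted when placement is
    actually needed, and most callers share one client (and one keystone
    session) instead of each building their own.
    """
    global _placement_client
    if _placement_client is None:
        _placement_client = report.SchedulerReportClient()
    return _placement_client


def reset_placement_client():
    """Drop the shared client so the next caller builds a fresh one.

    In-process functional tests that restart a compute service need
    something like this to mimic what a real process restart does;
    otherwise the "new" service would inherit the old client's state.
    """
    global _placement_client
    _placement_client = None
```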
gibi | sean-k-mooney: I moved forward with the allocation candidate filtering in the hardware.py step. That made me realize two things: 1) placement tells us RP uuids in the allocation candidates and the stats hardware.py code needs to map those RP uuids to pools. For that I need extra information in the pool. 2) the prefilters are run in the scheduler so creating RequestGroups there is good for scheduling but | 14:17 |
gibi | not good for using those RequestGroups (especially the provider mapping in them) later to drive the PCI claim as those RequestGroups are not visible outside of the scheduler process. For the QoS ports we create the groups in compute.api so those are visible to the conductor and the compute | 14:17 |
sean-k-mooney | i think for cpu pinning we do it elsewhere too | 14:19 |
gibi | as prefilters are acting on the RequestSpec I'm tempted to move the prefilter run to the conductor | 14:19 |
gibi | I can try and see what falls out | 14:20 |
gibi | how do you feel about it? | 14:20 |
sean-k-mooney | moving part of the scheduling out of the scheduler | 14:20 |
sean-k-mooney | that feels like more than we need to address this | 14:20 |
gibi | alternatively I can try to return the modified request spec from the scheduler to the conductor | 14:20 |
gibi | but that is an RPC change | 14:20 |
gibi | afaik | 14:20 |
sean-k-mooney | i'm looking to see what we do for PMEM and cpu pinning, i remember we did it differently for one of them | 14:21 |
sean-k-mooney | before the prefilters | 14:21 |
sean-k-mooney | i'm fine with moving the request group creation to before the call to the scheduler | 14:22 |
sean-k-mooney | moving running the prefilters to the conductor however is probably more than we want to do | 14:22 |
gibi | ack, that would be another alternative, to do the request group generation not in a prefilter | 14:23 |
gibi | that would make this similar to how QoS works | 14:23 |
sean-k-mooney | this is where we do it for cpu pinning https://github.com/openstack/nova/blob/e6aa6373d98103348a8ee3c59814350ea1556049/nova/scheduler/utils.py#L80 | 14:28 |
gibi | I think that also runs in the scheduler | 14:29 |
sean-k-mooney | the implementation i think is in the request spec object | 14:30 |
sean-k-mooney | oh its not | 14:30 |
sean-k-mooney | https://github.com/openstack/nova/blob/e6aa6373d98103348a8ee3c59814350ea1556049/nova/scheduler/utils.py#L305-L321 | 14:30 |
sean-k-mooney | but those are free standing functions that build up the resource class request | 14:31 |
gibi | so that works as you never need to know from where the VPMEM resource was fulfilled | 14:31 |
sean-k-mooney | ya | 14:32 |
gibi | I will figure out something along the line of cyborg and qos requests | 14:32 |
gibi | both need the request groups after the scheduling to drive the claim on the compute | 14:32 |
sean-k-mooney | ya | 14:33 |
sean-k-mooney | we do have some pci affinity code there by the way, added by https://github.com/openstack/nova/commit/db7517d5a8aaa5a24be12d9c3453dcd98d9a887e | 14:34 |
gibi | yeah that also only extends the unsuffixed request group | 14:34 |
sean-k-mooney | yep its just finding a host that supports it | 14:35 |
sean-k-mooney | but you would potentially have to do that per pci device now | 14:35 |
sean-k-mooney | or at least eventually | 14:35 |
sean-k-mooney | since the policy is settable per alias | 14:35 |
sean-k-mooney | we can probably pretend i did not mention this for now :) | 14:36 |
sean-k-mooney | we said in the spec that we would leave numa to the numa topology filter | 14:36 |
sean-k-mooney | so going back to your original issue | 14:37 |
sean-k-mooney | you need to be able to correlate the pci pools to the placement candidates | 14:37 |
sean-k-mooney | and provider summaries | 14:37 |
gibi | hm is that settable per alias? I see a flavor extra spec 'hw:pci_numa_affinity_policy': 'socket' | 14:37 |
sean-k-mooney | ya it is one sec | 14:37 |
opendevreview | Dan Smith proposed openstack/nova master: Unify placement client singleton implementations https://review.opendev.org/c/openstack/nova/+/852900 | 14:38 |
opendevreview | Dan Smith proposed openstack/nova master: Avoid n-cond startup abort for keystone failures https://review.opendev.org/c/openstack/nova/+/852901 | 14:38 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/nova/pci/request.py#L104-L107 | 14:38 |
dansmith | gibi: ^ | 14:38 |
dansmith | (no rush) | 14:38 |
gibi | ahh we have "numa_policy": "required" in aloas | 14:38 |
gibi | alias | 14:38 |
gibi | dansmith: ack, I will look today | 14:38 |
sean-k-mooney | gibi: you can set any of the policies via the alias but they only take effect if not overridden by the flavor or image. | 14:39 |
sean-k-mooney | or port | 14:39 |
sean-k-mooney | but port is out of scope for now | 14:39 |
gibi | sean-k-mooney: I assume now (with a big bunch of ignorance) that it is handled by the NumaTopologyFilter properly regardless of the placement allocation :D | 14:40 |
gibi | but we have functional tests to see if this assumption holds | 14:40 |
sean-k-mooney | the numa topology filter should reject the host if the host can't support it today | 14:41 |
sean-k-mooney | but it's not currently allocation candidate aware | 14:41 |
sean-k-mooney | so it's considering all possible devices | 14:41 |
gibi | the numa topology filter will be after my change as it calls support_requests which will be a_c aware | 14:41 |
sean-k-mooney | not the ones in any one set of allocation candidates | 14:42 |
sean-k-mooney | yep | 14:42 |
gibi | so it has a chance to work :) | 14:42 |
sean-k-mooney | yep | 14:42 |
sean-k-mooney | as i said we can ignore that wrinkle for now | 14:42 |
gibi | as for pool correlation with allocation candidates, I will try to add a list of RP uuids to each pool showing where the pool gets its devices from | 14:42 |
sean-k-mooney | ack. currently it should be 1:1 | 14:43 |
sean-k-mooney | each pool should be mapped to a single RP, correct | 14:43 |
gibi | that would be awesome if pool:RP is 1:1 | 14:43 |
gibi | I thought that VFs from two PFs might end up in the same pool | 14:43 |
sean-k-mooney | i think they will be two pools | 14:44 |
gibi | I will check | 14:44 |
gibi | but this sounds good at least | 14:44 |
sean-k-mooney | pools are not 1:1 to device_spec entries | 14:44 |
gibi | then I can drive the pool consumption logic based on the RP uuids in the allocation candidate | 14:44 |
sean-k-mooney | but i think each PF gets its own pool | 14:44 |
sean-k-mooney | and VFs from different PFs are separate | 14:44 |
* gibi makes a note to create extra test cases with multiple PFs providing identical VFs | 14:45 | |
sean-k-mooney | by the way if that is not the case today i don't see any reason we can't change it to make it 1:1 | 14:45 |
gibi | yeah that would have been my next proposal :) | 14:45 |
sean-k-mooney | the pools are stored in the pci_stats object in the compute node record | 14:46 |
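A rough sketch of the pool/RP correlation gibi describes, assuming each pool dict grows an rp_uuid key (that key, and the example values, are illustrative, not existing nova fields): once a pool is tagged with the resource provider backing it, the scheduler-side stats code can restrict itself to the pools covered by one allocation candidate.

```python
def filter_pools_by_candidate(pools, candidate_rp_uuids):
    """Keep only the pools whose backing RP is part of this candidate.

    With a 1:1 pool-to-RP mapping (each PF, and the VFs under it, in its
    own pool) this trims the stats to exactly the devices the placement
    allocation candidate actually allocates from.
    """
    return [p for p in pools if p.get('rp_uuid') in candidate_rp_uuids]


# Two VF pools with identical vendor/product from different PFs; only the
# pool backed by the candidate's RP survives.
pools = [
    {'vendor_id': '8086', 'product_id': '1515', 'count': 4,
     'rp_uuid': 'rp-of-pf1'},
    {'vendor_id': '8086', 'product_id': '1515', 'count': 4,
     'rp_uuid': 'rp-of-pf2'},
]
print(filter_pools_by_candidate(pools, {'rp-of-pf1'}))
```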
gibi | hehe, I already have a note where to create RequestGroups from flavor https://github.com/openstack/nova/blob/master/nova/objects/request_spec.py#L534-L540 | 14:47 |
sean-k-mooney | hehe ya i was expecting it to be in the request_spec object | 14:48 |
sean-k-mooney | for cpu etc | 14:48 |
sean-k-mooney | not the scheduler utils | 14:48 |
gibi | ohh, and I even tried to resolve that todo at some point https://review.opendev.org/c/openstack/nova/+/647396 | 14:50 |
sean-k-mooney | hehe you have too many commits :P | 14:51 |
gibi | OK at least the commit message confirms my current view | 14:53 |
gibi | and adding just the pci alias related groups from the flavor does not seem problematic at that point | 14:54 |
sean-k-mooney | ya we already require that the alias definition is the same on the api and the compute nodes | 14:55 |
sean-k-mooney | so you should be able to use the alias from the current config safely | 14:57 |
sean-k-mooney | actually that does not matter, you are not changing the requirement | 14:57 |
gibi | I don't think I depend on the alias on the compute but good to know | 14:58 |
sean-k-mooney | resize requires the alias to be the same to create the correct pci requests i believe | 14:58 |
gibi | hm, interesting | 14:59 |
sean-k-mooney | i have to join a call but we have docs about it | 15:00 |
sean-k-mooney | https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-nova-api i think we dropped the reason from the doc | 15:01 |
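A hedged sketch of the direction settled on above: generate the PCI-alias request groups when the RequestSpec is built (the compute.api / request_spec.py TODO linked earlier) rather than in a scheduler prefilter, so the groups, and the provider mappings later resolved for them, are visible to the conductor and the compute and can drive the PCI claim, the way QoS port groups already do. The resource class string, the requester_id scheme, and the simplified input dicts stand in for real InstancePCIRequest handling and are assumptions, not the eventual implementation.

```python
from nova import objects


def pci_request_groups_from_flavor(flavor_pci_requests):
    """Turn parsed flavor alias requests into suffixed request groups."""
    groups = []
    for idx, pci_request in enumerate(flavor_pci_requests):
        groups.append(objects.RequestGroup(
            use_same_provider=True,
            # assumed naming; anything that lets the claim code map the
            # resolved provider back to its InstancePCIRequest would do
            requester_id='pci-alias-%d' % idx,
            resources={'CUSTOM_PCI_DEVICE': pci_request['count']},
        ))
    return groups


# e.g. a flavor asking for two devices of one alias and one of another
print(pci_request_groups_from_flavor([{'count': 2}, {'count': 1}]))
```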
JayF | melwitt: ack; thank you. Assuming you are OK if I want to run with trying to get those landed? | 15:08 |
melwitt | JayF: yes, please feel free | 15:09 |
JayF | Anything else vaguely-ironic-related you want to point me at, please do | 15:09 |
JayF | I'm throwing a backport party and all patches are invited ;) | 15:09 |
*** dasm|off is now known as dasm | 15:09 | |
melwitt | ok, can do :) | 15:13 |
opendevreview | John Garbutt proposed openstack/nova master: Ironic: retry when node not available https://review.opendev.org/c/openstack/nova/+/842478 | 15:24 |
opendevreview | John Garbutt proposed openstack/nova master: Ironic: retry when node not available https://review.opendev.org/c/openstack/nova/+/842478 | 15:27 |
opendevreview | Arnaud Morin proposed openstack/nova master: Unbind port when offloading a shelved instance https://review.opendev.org/c/openstack/nova/+/853682 | 15:55 |
amorin | hello, sean-k-mooney, see ^ for a proposal for the shelved instance with bound ports | 15:57 |
amorin | I can write a unit test for this if it seems good to the team | 15:58 |
sean-k-mooney | ah you went with the flag approach rather than splitting it | 16:12 |
sean-k-mooney | ya that should work | 16:12 |
sean-k-mooney | ideally we would have both unit and functional tests for this | 16:12 |
sean-k-mooney | but the direction looks fine | 16:12 |
sean-k-mooney | i guess we can see what ci says | 16:13 |
sean-k-mooney | and if it breaks anything | 16:13 |
opendevreview | sean mooney proposed openstack/nova master: fix suspend for non hostdev sriov ports https://review.opendev.org/c/openstack/nova/+/841017 | 19:11 |
sean-k-mooney | dansmith: got a sec to confirm something. is bumping the compute service version and checking it in pre live migrate sufficient to assert that the source and dest support a feature where there are no other rpc changes required | 19:32 |
dansmith | yeah | 19:32 |
sean-k-mooney | i believe the answer is yes but just want to check before i add that and add tests | 19:33 |
sean-k-mooney | ok | 19:33 |
sean-k-mooney | it's for the hotplug migration for vdpa. i'm adding a conductor check to bail if both hosts are not at the required compute service version | 19:33 |
sean-k-mooney | its what we did for sriov migration too | 19:34 |
dansmith | cool | 19:55 |
sean-k-mooney | https://github.com/openstack/nova/blob/e6aa6373d98103348a8ee3c59814350ea1556049/nova/objects/service.py#L34 is not where i was expecting that to be set... i was looking in the compute manager | 19:56 |
sean-k-mooney | is the service version really global across the compute agent/scheduler/api etc | 19:57 |
sean-k-mooney | i thought they all had their own version number | 19:58 |
sean-k-mooney | apart from the rpc version | 19:58 |
dansmith | it's tied to the object not the service, | 19:58 |
dansmith | and it's supposed to be global because it's kinda like a git hash.. "the code is up to this level" | 19:58 |
sean-k-mooney | ok | 19:58 |
dansmith | but within any given service (and like inside a container) it's always the same value, that's the point | 19:59 |
sean-k-mooney | ack i'm just trying to see where i need to bump it | 19:59 |
dansmith | there, there's only one place | 19:59 |
sean-k-mooney | cool | 19:59 |
sean-k-mooney | just making sure | 19:59 |
dansmith | it's global in the code, but set on the service record in the database when a service updates its live-ness, which is why you can check it per-binary and per-host | 20:00 |
sean-k-mooney | ah ok | 20:00 |
sean-k-mooney | so the services initialise themselves with the current value when the process starts and that is used to set the value in the db | 20:00 |
sean-k-mooney | so if a node has a different value in the db you know if it's older or newer | 20:01 |
sean-k-mooney | i was just expecting to see the constant directly in the compute manager | 20:01 |
dansmith | right, but we can use service versions for any of the other services as well | 20:07 |
dansmith | so it's per-service not just for compute | 20:07 |
sean-k-mooney | yep i thought we had a separate counter per service | 20:08 |
sean-k-mooney | but i can see why we share one | 20:08 |
sean-k-mooney | as that way we don't need to check if scheduler is x and conductor is y | 20:09 |
dansmith | you could still have that situation, of course | 20:10 |
dansmith | but instead of checking conductor_version=x, you check binary=conductor,version=x | 20:10 |
sean-k-mooney | ya you could | 20:11 |
sean-k-mooney | but at least there would be 1 x value for all services for a given feature | 20:11 |
dansmith | no | 20:13 |
dansmith | if you have an old conductor and a new conductor, you could have different versions for each | 20:13 |
sean-k-mooney | that's not what i meant | 20:13 |
dansmith | it's really host=$host,binary=conductor,version=$x | 20:13 |
dansmith | that's the tuple to get the version of one service on one host | 20:14 |
sean-k-mooney | i can define 62 to mean we support vdpa hotplug migration | 20:14 |
sean-k-mooney | and if that needed to be checked for the compute or conductor or scheduler it's still just: is that service >= 62 | 20:14 |
dansmith | yes, for a given host,binary pair, or (all),binary if you want to wait until they're all ready, yes | 20:15 |
sean-k-mooney | yes each service instance for the conductor might be above or below it but by having a single counter i can correlate that 62 means "support feature X" | 20:15 |
dansmith | yeah | 20:15 |
sean-k-mooney | so i'm currently debating between checking the specific source and dest | 20:16 |
sean-k-mooney | or min compute service version in the deployment | 20:16 |
sean-k-mooney | technically i just need the specific ones but i would prefer to wait till you're fully upgraded so min is tempting | 20:16 |
dansmith | yeah, min is pretty easy to understand and hard to screw up :) | 20:17 |
sean-k-mooney | yep i'll go with that | 20:18 |
sean-k-mooney | if i do min i can actually do the check in the api instead | 20:19 |
sean-k-mooney | if the instance has vdpa ports and version < 62, raise 400 | 20:20 |
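The API-side min-version gate sean sketches above, as a hedged illustration: the version value 62 comes straight from the discussion, get_minimum_version_all_cells is nova's existing helper for querying the lowest service version across cells, and the check itself (the function name and the instance_has_vdpa_ports flag) is illustrative rather than the actual patch.

```python
from webob import exc

from nova import objects

# value taken from the discussion above
MIN_VER_VDPA_HOTPLUG_LIVE_MIGRATION = 62


def check_vdpa_live_migration_supported(context, instance_has_vdpa_ports):
    """Reject the request until every nova-compute is new enough."""
    if not instance_has_vdpa_ports:
        return
    min_ver = objects.service.get_minimum_version_all_cells(
        context, ['nova-compute'])
    if min_ver < MIN_VER_VDPA_HOTPLUG_LIVE_MIGRATION:
        raise exc.HTTPBadRequest(
            explanation='Live migration of instances with vDPA ports is '
                        'not supported until all computes are upgraded.')
```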
opendevreview | Merged openstack/nova master: virt: Add ephemeral encryption flag https://review.opendev.org/c/openstack/nova/+/760455 | 20:30 |
opendevreview | sean mooney proposed openstack/nova master: [WIP] Add VDPA support for suspend and livemigrate https://review.opendev.org/c/openstack/nova/+/853704 | 20:38 |
sean-k-mooney | ok i'll finish the docs and release note on ^ tomorrow but basically that series is code complete and ready for review | 20:39 |
sean-k-mooney | i'm going to call it a night there and pick it up again tomorrow | 20:39 |
opendevreview | Jay Faulkner proposed openstack/nova stable/wallaby: Ignore plug_vifs on the ironic driver https://review.opendev.org/c/openstack/nova/+/821349 | 20:44 |
*** dasm is now known as dasm|off | 22:08 |