opendevreview | Callum Dickinson proposed openstack/nova master: Add image meta to libvirt XML metadata https://review.opendev.org/c/openstack/nova/+/942766 | 00:23 |
---|---|---|
*** jayaanand_ is now known as jayaanand | 01:24 | |
opendevreview | melanie witt proposed openstack/nova master: Let oslo.limit handle retrieval of the endpoint ID https://review.opendev.org/c/openstack/nova/+/943507 | 02:54 |
opendevreview | Michael Still proposed openstack/nova master: Don't calculate the minimum compute version repeatedly. https://review.opendev.org/c/openstack/nova/+/940848 | 07:33 |
opendevreview | Michael Still proposed openstack/nova master: libvirt: Add extra spec for sound device. https://review.opendev.org/c/openstack/nova/+/926126 | 07:33 |
opendevreview | Michael Still proposed openstack/nova master: Protect older compute managers from sound model requests. https://review.opendev.org/c/openstack/nova/+/940770 | 07:33 |
opendevreview | Michael Still proposed openstack/nova master: libvirt: Add extra specs for USB redirection. https://review.opendev.org/c/openstack/nova/+/927354 | 07:33 |
opendevreview | Xiang Wang proposed openstack/nova master: Improve Error Handling for Placement API Responses in Nova https://review.opendev.org/c/openstack/nova/+/943530 | 08:42 |
opendevreview | Xiang Wang proposed openstack/nova master: Improve Error Handling for Placement API Responses in Nova https://review.opendev.org/c/openstack/nova/+/943530 | 08:43 |
opendevreview | Gaudenz Steinlin proposed openstack/nova master: Make SSH compression for remote copy configurable https://review.opendev.org/c/openstack/nova/+/943377 | 09:57 |
sean-k-mooney | gibi: bauzas this is a nice performance fix from mikal that came up during the spice series https://review.opendev.org/c/openstack/nova/+/940848 it would be nice to include that in epoxy | 10:49 |
bauzas | done | 10:53 |
* bauzas doing the rubber stamping paperwork to look at what we merged and what we need before RC1 | 10:53 | |
sean-k-mooney | cool let me know if we have anything pending that needs another pair of eyes | 10:54 |
bauzas | we'll need to merge the usual release patches | 10:55 |
bauzas | but I'll ping folks for sure | 10:55 |
* bauzas goes cycling, the weather is nice | 10:55 | |
sean-k-mooney | im starting to prepare the same for watcher. i did the cycle highlights yesterday and have a draft of the prelude but i need to update launchpad blueprints and the specs repo etc | 10:56 |
sean-k-mooney | i also need to confirm if watcher needs the same rpc aliases etc that we need for nova | 10:56 |
opendevreview | Merged openstack/nova master: Don't calculate the minimum compute version repeatedly. https://review.opendev.org/c/openstack/nova/+/940848 | 12:54 |
opendevreview | Gaudenz Steinlin proposed openstack/nova master: Make SSH compression for remote copy configurable https://review.opendev.org/c/openstack/nova/+/943377 | 13:45 |
dansmith | gibi: this is a little different than I remember it originally: https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L2059 | 15:15 |
dansmith | that seems to do an atomic swap and not an allocate..delete | 15:15 |
dansmith | I can go dig, but you might know quicker: will that check reserved or let us get away with the swap if we're reserved? | 15:16 |
gibi | dansmith: I have no context but at least that seems to be a single request from nova to placement. I don't know how placement processes that. I know that when nova tries to move an allocation that is for a reserved resource the placement call will fail. | 15:24 |
dansmith | okay yeah i think it will | 15:24 |
gibi | but I don't know the exact behavior of placement here | 15:24 |
gibi | I mean the internal behavior | 15:24 |
dansmith | what I think placement does is start a transaction, delete the source allocations, create the replacement allocations, then _check_capacity_exceeded() and if that fails, roll back the transaction | 15:25 |
sean-k-mooney | https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/allocations-bug-1779717.yaml | 15:25 |
sean-k-mooney | we can update/create mutliple allocation at once | 15:25 |
dansmith | right, but they will still check the reserved amount | 15:25 |
sean-k-mooney | possibly | 15:25 |
dansmith | they do | 15:26 |
sean-k-mooney | ok | 15:26 |
dansmith | https://github.com/openstack/placement/blob/master/placement/objects/allocation.py#L73 | 15:26 |
dansmith | see my logic above, this ^ will run after the swap and fail because we're over capacity due to the reserved amount | 15:26 |
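(A rough sketch of the flow dansmith describes above, as illustrative Python pseudocode rather than placement's actual code; the helper names are hypothetical, the real logic lives in placement/objects/allocation.py linked above.)

```python
# Illustrative pseudocode only -- helper names are hypothetical, not placement's API.
def replace_all(context, allocations_by_consumer):
    with context.session.begin():  # one DB transaction for the whole POST
        for consumer_uuid, allocations in allocations_by_consumer.items():
            # drop whatever the consumer currently holds ...
            _delete_current_allocations(context, consumer_uuid)
            # ... then write the replacement allocations
            _create_allocations(context, consumer_uuid, allocations)
        # Re-check used <= (total - reserved) * allocation_ratio for every touched
        # provider. If an operator raised `reserved` or lowered the allocation
        # ratio since the original allocation was made, this fails even though the
        # swap consumes the same amount, and the whole transaction rolls back.
        _check_capacity_exceeded(context, allocations_by_consumer)
```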
sean-k-mooney | right i think thats what happened for evacuate too | 15:26 |
dansmith | yeah, so I think from the outside, it's reasonable for an api consumer to assume that "allocations.replace_all()" is just a swap but it's not | 15:27 |
sean-k-mooney | well maybe | 15:28 |
sean-k-mooney | it was not originally for swapping, it was for doing multiple updates i think | 15:28 |
gibi | from API semantic perspective it is not well defined what it means when such a POST call has multiple consumers | 15:28 |
dansmith | it's also reasonable to assume that we check the capacity again, I'm just saying I think it's a bit ambiguous | 15:28 |
dansmith | gibi: exactly | 15:28 |
gibi | it might be a move, it might be two independent allocations in the POST | 15:29 |
sean-k-mooney | yep | 15:29 |
dansmith | so one way to resolve that would be to say "if we were over capacity before, we allow you to be over capacity by the same exact amount after, otherwise fail" | 15:29 |
sean-k-mooney | that or a flag to control the behavior | 15:29 |
sean-k-mooney | either way i think it needs a new microversion | 15:29 |
dansmith | yeah I was also thinking a flag would be a thing | 15:30 |
dansmith | sean-k-mooney: can you dig up a reference on the evacuate bug? | 15:30 |
sean-k-mooney | gibi found it not long ago i can check my email/irc/slack logs and see | 15:31 |
sean-k-mooney | i know the last time it came up downstream it was because the operator had adjusted the allocation ratio | 15:31 |
sean-k-mooney | putting the host into an oversubscribed state, and then later it could not evacuate | 15:31 |
sean-k-mooney | but there was a second edge case too | 15:32 |
dansmith | right, so that's what I was going to say, | 15:32 |
dansmith | I think that an alloc ratio change or reserved change will make our existing code fail to swap an allocation of equal size, when it really shouldn't | 15:33 |
dansmith | for all the move ops | 15:33 |
sean-k-mooney | in the evacuate case i think we only have one allocation, however it just happens to have resources from 2 hosts or something slightly odd | 15:34 |
dansmith | any feeling on allowing the reserved check to be skipped if the total size of the allocations against a provider are not changing? or do you think we need a flag to ask for that specifically | 15:35 |
dansmith | I tend to think a new microversion after which we say that's the behavior is okay, | 15:35 |
sean-k-mooney | https://bugs.launchpad.net/nova/+bug/1943191 https://bugs.launchpad.net/nova/+bug/1924123 and https://bugs.launchpad.net/nova/+bug/1941892 | 15:35 |
dansmith | but if we want to only do that if we specifically ask for it then... | 15:35 |
dansmith | heh the subject of that first bug is exactly what I'm saying yeah | 15:36 |
sean-k-mooney | none of those actually say evacuate but i think its a related issue | 15:37 |
dansmith | but blocking cold migrate is worse | 15:38 |
dansmith | brb | 15:38 |
sean-k-mooney | as i said, i think, i could be wrong about this, that for evacuate we do not have a migration uuid because its actually not a "move op" in that sense, it just rebuilds to a new host | 15:38 |
sean-k-mooney | so for evacuate i think we extend the single allocation with an rp for the dest | 15:38 |
sean-k-mooney | i could be wrong about that because i know we changed that at some point | 15:39 |
sean-k-mooney | i would have to look at the detail | 15:39 |
dansmith | yeah idk, I'm more familiar with the cold migrate case, which is clearly broken already | 15:40 |
gibi | sean-k-mooney: yeah evac is implemented the way you describe | 15:41 |
sean-k-mooney | we talked about changing that at some point but there was a reason why it was hard | 15:42 |
gibi | I'm wondering if checking that the allocated value does not change is enough to use a swap mode instead of the current independent allocation mode. I mean from the placement API perspective there's just not enough information about the intention of the user. | 15:42 |
gibi | what if at some point we want a swap behavior that also decreases the size | 15:42 |
dansmith | gibi: I don't think there are multiple different operations are there? | 15:43 |
dansmith | we're using POST /allocations for both | 15:43 |
dansmith | and placement is using the same logic regardless of if we're deleting and re-allocating the same provider vs. different ones, AFAIK | 15:43 |
gibi | yeah, my point is that the POST call can be understood both ways (swap or create/delete) even if the total allocated resources do not change in the process | 15:44 |
dansmith | but yeah that was my point.. if the operation is not further over-subscribing the provider (or maybe not changing the amount at all) it seems reasonable to let it through | 15:44 |
gibi | so placement does not know | 15:44 |
dansmith | yeah | 15:44 |
sean-k-mooney | if the set of RPs in the set of allocations dont change and the total resources consumed dont change then we might be able to skip the capacity check | 15:44 |
gibi | if nova tells placement it is a swap, then do a swap regardless of the allocation size. but otherwise keep the current behavior of create/delete | 15:45 |
dansmith | on the one hand, if you have an allocation for X, right now you can't even *decrease* the allocation size to get closer to happy because "closer to happy is still oversubscribed so reject" | 15:45 |
sean-k-mooney | honestly it would be better to do this in 2 calls | 15:45 |
dansmith | sean-k-mooney: I don't see how.. two calls seems worse to me | 15:45 |
sean-k-mooney | 1 call to create the dest allocation and a dedicated call to swap the 2 allocations | 15:45 |
dansmith | gibi: a flag to just skip the consistency check also feels wrong to me | 15:45 |
dansmith | sean-k-mooney: oh that's not the problem, it's the swap | 15:46 |
sean-k-mooney | well right now i think we do the swap and creation of the dest allocation in one call | 15:46 |
gibi | if a flag in the current API is too anti-REST then lets have a separate resource explicitly for swapping | 15:46 |
dansmith | either way, for the purposes of one-time-use devices, can we say this is already a problem that needs to be solved and it'll block cold migrate for OTU but once we fix it, we're good? | 15:46 |
gibi | yes that is fair to say | 15:47 |
dansmith | I will commit to fixing both, FWIW | 15:47 |
sean-k-mooney | i.e. dont we update the allocation with the instance uuid to the dest host RPs and move the existing allocations to the migration uuid in one call that just "modifies multiple allocations" | 15:47 |
gibi | sean-k-mooney: we remove the allocation from the instance uuid and add it to the migration uuid in a POST call then we do a scheduling | 15:48 |
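(Roughly what that single POST /allocations body looks like when nova moves the allocation over to the migration consumer; the UUIDs, amounts and generations below are made up for illustration, and the exact fields vary by placement microversion.)

```python
# Illustrative payload for POST /allocations as built by report.py's
# move_allocations; UUIDs, amounts and generations are made up.
payload = {
    "<instance uuid>": {          # the instance consumer is emptied ...
        "allocations": {},
        "consumer_generation": 1,
        "project_id": "<project uuid>",
        "user_id": "<user uuid>",
    },
    "<migration uuid>": {         # ... and the same resources are re-created here
        "allocations": {
            "<source rp uuid>": {"resources": {"VCPU": 2, "MEMORY_MB": 4096}},
        },
        "consumer_generation": None,  # new consumer
        "project_id": "<project uuid>",
        "user_id": "<user uuid>",
    },
}
# Placement handles this as delete + create in one transaction, so the capacity
# check discussed above still runs against the source provider.
```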
dansmith | gibi: I think a flag would be okay if the semantics are "allow equal or less oversubscription to exist after you do this" | 15:48 |
dansmith | right now it is "oversubscription means fail" even if it's the same amount as before | 15:48 |
gibi | dansmith: if the client explicity call a swap between two consumers placement should just do a swap without extra checks | 15:48 |
gibi | i.e. keep the API generic | 15:49 |
sean-k-mooney | if we move the current allocation to the migration uuid before we schedule | 15:49 |
dansmith | gibi: well, that's not very RESTful I think | 15:49 |
sean-k-mooney | then im a little confused | 15:49 |
dansmith | gibi: what about a PATCH to just change the consumer uuid? | 15:49 |
gibi | dansmith: sounds good to me | 15:49 |
dansmith | (not sure if we can be that fine-grained about it, I'd have to look) | 15:49 |
sean-k-mooney | because at that point we just need to convert the allocation candidate back to an allocation with the instance uuid | 15:49 |
sean-k-mooney | and then delete the migration uuid | 15:50 |
sean-k-mooney | so it does not need to be in one api post | 15:50 |
gibi | it really should be a simple operation to update the owner of the allocation (of a full blown swap between two consumers but we don't need that right now) | 15:50 |
sean-k-mooney | well that you can do | 15:50 |
gibi | s/of/or/ | 15:50 |
sean-k-mooney | you can modify the consumer uuid | 15:50 |
sean-k-mooney | well the consumer, but isnt the allocation uuid the instance uuid or migration uuid depending | 15:51 |
sean-k-mooney | we added consumer_type to the api at some point to track that | 15:51 |
gibi | I don't think we have a single placement API operation where you can update the consumer uuid without a reallocation (like the POST call we do currently) | 15:52 |
gibi | dansmith: so for me I don't need a full blown swap operation between any two consumer. I'm OK to have just a PATCH on an existing allocation to update the consumer uuid in it and do nothing else. | 15:53 |
sean-k-mooney | this is the existing api we are talking about yes https://docs.openstack.org/api-ref/placement/#update-allocations | 15:54 |
dansmith | gibi: yeah if we can do that cleanly that might be good.. I'll have to start digging in to determine, but yeah | 15:54 |
dansmith | are you thinking PATCH /allocations/uuid1 with {consumer_uuid: uuid2} ? | 15:54 |
dansmith | that will work, not sure how the REST police feel, but I'd think it's better than a POST /swap_action | 15:55 |
gibi | sean-k-mooney: we are talking about https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L2059 that is what we do today | 15:55 |
gibi | dansmith: yeah lets ask the REST gods about it but the PATCH feels simple to me | 15:55 |
dansmith | ack | 15:55 |
sean-k-mooney | patch is an update to an existing resource | 15:56 |
sean-k-mooney | the resource is keyed by the consumer_uuid /allocations/{consumer_uuid} | 15:56 |
sean-k-mooney | we could support patch to change that | 15:56 |
sean-k-mooney | without conflicting with the existing put | 15:56 |
sean-k-mooney | but it feels a little odd to do /allocations/{instance_uuid} body: consumer_uuid=<migration uuid> | 15:57 |
gibi | yeah we need to change the "key" that is why it feels problematic from REST perspective | 15:57 |
bauzas | I started to pay attention to this channel but I failed to get the problem being solved | 15:57 |
sean-k-mooney | i dont hate it | 15:57 |
bauzas | can someone summarize the thread for me? | 15:57 |
* bauzas becomes lazy now he's about to be past 45 | 15:58 | |
sean-k-mooney | the other option is something like /allocations/{old uuid}/swap/{new uuid} | 15:58 |
dansmith | bauzas: https://bugs.launchpad.net/nova/+bug/1943191 | 15:58 |
gibi | bauzas: move operations do not work if the source RP is over-allocated or allocated and reserved | 15:58 |
sean-k-mooney | that would be a post i guess | 15:58 |
dansmith | bauzas: that and several other bugs, plus my OTU spec | 15:58 |
bauzas | I see | 15:59 |
sean-k-mooney | gibi: i think if we document things properly either could work, i think POST /allocations/{consumer_uuid}/swap/{new_consumer_id} is clear honestly | 16:00 |
sean-k-mooney | that would not have any body | 16:01 |
sean-k-mooney | so there is no possibility of changing the allocation or capacity usage and therefore its ok to not do checks | 16:01 |
sean-k-mooney | if we go with patch we would have to ensure the resources are not changed as well | 16:02 |
gibi | yeah I'm not against either the PATCH or the POST //swap/ way | 16:07 |
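(Neither shape exists in placement today; purely as a strawman, the two options being discussed would look roughly like the following. `placement_session`, the microversion and the field names are all hypothetical.)

```python
# Both shapes are hypothetical -- neither exists in the placement API today and
# either would need a new microversion. `placement_session` is assumed to be a
# keystoneauth Session pointed at the placement endpoint.

# Option 1: PATCH the existing allocation, changing only its consumer. The "key"
# of the resource (the consumer uuid in the URL) is what changes, which is the
# awkward part REST-wise; the resource amounts themselves stay untouched.
placement_session.patch(
    "/allocations/%s" % instance_uuid,
    json={"consumer_uuid": migration_uuid},
    microversion="1.x",  # hypothetical
)

# Option 2: an explicit swap action with no body, so the amounts cannot change
# and the capacity/reserved checks can safely be skipped.
placement_session.post(
    "/allocations/%s/swap/%s" % (instance_uuid, migration_uuid),
    microversion="1.x",  # hypothetical
)
```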
sean-k-mooney | i think we can all agree that if it allows us to finally resolve those bugs in nova either approach is better than what we have today. so if thats the direction we head in im fine with saying lets define that in the placement api spec | 16:10 |
gibi | I agree | 16:13 |
dansmith | yup | 16:14 |
melwitt | sean-k-mooney: not sure what you'll think of this but I uploaded a patch in regard to the oslo.limit endpoint discovery https://review.opendev.org/c/openstack/nova/+/943507 fyi | 17:06 |
sean-k-mooney | so we remove the endpoint check since oslo.limit will do that for us internally | 17:17 |
sean-k-mooney | and we replace the endpoint function with one that returns the limits for this service/region | 17:18 |
sean-k-mooney | these are the global limits right not the project specific ones | 17:19 |
sean-k-mooney | melwitt: without properly loading context and looking at oslo.limit it seems reasonable at first glance | 17:21 |
melwitt | sean-k-mooney: yeah the thing I think is controversial is the private attributes access | 17:23 |
sean-k-mooney | thats not required with the latest version of oslo.limit right | 17:24 |
melwitt | maybe in parallel propose some public accesses in oslo.limit and update it when available? dunno | 17:24 |
sean-k-mooney | https://review.opendev.org/c/openstack/oslo.limit/+/914783/3/oslo_limit/limit.py | 17:25 |
melwitt | what is there currently will work for both old and new versions | 17:26 |
melwitt | and underneath oslo.limit will do what it needs to | 17:27 |
sean-k-mooney | right in the new version we can just call get_registered_limits | 17:27 |
sean-k-mooney | on the enforcer | 17:27 |
sean-k-mooney | https://github.com/openstack/oslo.limit/blob/8ada297e6a82f7e50bf746453b8dd21716743798/oslo_limit/limit.py#L183 | 17:27 |
sean-k-mooney | melwitt: do you need to use the connection today | 17:29 |
sean-k-mooney | oh that function takes a set of resources to check | 17:30 |
melwitt | sean-k-mooney: yeah I do that because I want to pull all the resource limits in one API call. oslo.limit does them one API call per resource | 17:33 |
sean-k-mooney | it sounds like we should add a function to oslo.limit to do that | 17:34 |
sean-k-mooney | but i think we can work around it in nova for now | 17:34 |
melwitt | maybe. there's already one so not sure if we change the internal impl or make another alongside | 17:34 |
sean-k-mooney | you could make None a sentinel value for all or use the sentinel object pattern | 17:35 |
melwitt | yeah | 17:36 |
sean-k-mooney | so gibi's main concern seems to be those are private fields and therefore could break, which is valid. if oslo were ok providing public access in some way i would be ok with this as a backportable approach as long as we update to using the public approach in the future | 17:38 |
melwitt | sean-k-mooney: right. it's mine too, which is why I mentioned it in the commit message. yeah I think that sounds reasonable | 17:39 |
sean-k-mooney | i dont hate it but if we could just do ENFORCE.get_registered_limits(None) | 17:39 |
sean-k-mooney | i would be happier | 17:39 |
melwitt | sure | 17:39 |
sean-k-mooney | the other option for now | 17:39 |
melwitt | will that work currently? /me looks | 17:40 |
sean-k-mooney | is keep doing it the old way and instead of raising, try and do it this way | 17:40 |
sean-k-mooney | melwitt: i was trying to figure that out | 17:40 |
sean-k-mooney | you can pass it i just dont know what happens if you do | 17:40 |
sean-k-mooney | its this right https://docs.openstack.org/api-ref/identity/v3/#list-registered-limits | 17:41 |
melwitt | it looks like it would do nothing | 17:41 |
melwitt | https://github.com/openstack/oslo.limit/blob/master/oslo_limit/limit.py#L355 | 17:41 |
sean-k-mooney | it says resource name is optional in the api | 17:41 |
sean-k-mooney | you could try [""] | 17:41 |
melwitt | yes that's why I'm using the connection directly | 17:42 |
sean-k-mooney | i.e. ENFORCE.get_registered_limits([""]) | 17:42 |
sean-k-mooney | ya well the connection is public it seems | 17:42 |
sean-k-mooney | its just the endpoint id and service id that are not, right | 17:42 |
melwitt | yes and I'm using it in that patch :) | 17:42 |
melwitt | Enforcer.get_registered_limits() does "for resource_name in resource_names:" and will do nothing if you don't pass it a list of names and if you pass it a list of names it will pull each limit in a separate API call | 17:43 |
melwitt | sean-k-mooney: correct it's the endpoint id and service id that are not public | 17:44 |
melwitt | I could alternatively pull those separately the same way that oslo.limit does today | 17:45 |
sean-k-mooney | ya im not seeing a better way to do it with the current api | 17:45 |
melwitt | I considered that might be more well liked | 17:45 |
melwitt | so maybe I should push a new PS that pulls the services and then gets the endpoint ID from it, that way no private attrs need to be used | 17:46 |
sean-k-mooney | thats an incremental improvement. i feel like they didnt fully fix the usability problem | 17:47 |
sean-k-mooney | as in they didnt really fix the usability problem reported in https://launchpad.net/bugs/1931875 | 17:48 |
sean-k-mooney | if we have to port that code to nova | 17:48 |
melwitt | well that works for 99% of users who have no need to query for the list of limits | 17:48 |
sean-k-mooney | ya but thats a pretty fundamental thing | 17:49 |
melwitt | strictly for limit enforcement what they did fixes it all | 17:49 |
sean-k-mooney | i guess oslo.limit is not intended as a generic keystone client | 17:50 |
sean-k-mooney | you could argue that we should not be using it to get the registered limits and should use the sdk instead | 17:51 |
melwitt | yeah.. it's not. that was what I felt conflicted about. like making it a passthrough to the underlying keystone client? when it already has a public connection attribute? I dunno. just seemed kind of weird | 17:51 |
melwitt | yeah. I think maybe they did that because the keystone API does not have the ability to filter a limit list by a set of resource names. but you could just filter it yourself after getting it. you'll get all of them in one API call | 17:52 |
sean-k-mooney | ubuntu@sean-devstack-watcher-1:~/repos/nova$ curl -s -H "X-Auth-Token: $TOKEN" -H "OpenStack-API-Version: compute 2.99" http://192.168.50.140/identity/v3/registered_limits | jq | nc termbin.com 9999 | 17:53 |
sean-k-mooney | https://termbin.com/kqfj | 17:53 |
melwitt | so maybe the right thing to do is "fix" the internal impl of get_registered_limits() to just filter in python instead of doing N API calls | 17:54 |
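(A sketch of what that change inside oslo.limit's Enforcer might look like; everything beyond the get_registered_limits name is an assumption about the internals, and it assumes keystone's registered limits list can be filtered by service_id/region_id in a single call.)

```python
# Sketch only: filter registered limits in python instead of one keystone call
# per resource name. self._connection / self._service_id / self._region_id are
# assumed internals, not oslo.limit's actual attribute names.
def get_registered_limits(self, resource_names=None):
    limits = self._connection.registered_limits(
        service_id=self._service_id, region_id=self._region_id)
    return [
        (limit.resource_name, limit.default_limit)
        for limit in limits
        if resource_names is None or limit.resource_name in resource_names
    ]
```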
sean-k-mooney | without a region or service id you get all registered limits if you call keystone | 17:54 |
melwitt | I know lol | 17:54 |
sean-k-mooney | the region we know right | 17:55 |
melwitt | that's literally what it's doing already | 17:55 |
melwitt | yes we know the region | 17:55 |
sean-k-mooney | well yes but is it just the service id we need | 17:55 |
melwitt | we don't know the endpoint id | 17:55 |
melwitt | yes | 17:55 |
melwitt | er sorry yeah it's service id that the call actually needs. I keep mixing it up with the conf options | 17:57 |
sean-k-mooney | so the sdk supports the service | 17:57 |
sean-k-mooney | https://github.com/openstack/openstacksdk/blob/master/openstack/identity/v3/service.py | 17:57 |
sean-k-mooney | and you can query by type, i.e. nova | 17:58 |
sean-k-mooney | *compute | 17:58 |
melwitt | yes, that's what oslo.limit does for the discovery | 17:58 |
melwitt | https://github.com/openstack/oslo.limit/blob/master/oslo_limit/limit.py#L294 | 17:58 |
sean-k-mooney | ya so you were suggesting we could port that to nova | 17:58 |
melwitt | so we can do that too obviously | 17:58 |
melwitt | yeah, if I do that then no more private access. I thought about it but figured I'd just throw the "most efficient" thing up to get discussion | 17:59 |
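(A sketch of that discovery done on the nova side with the SDK, mirroring what oslo.limit does internally; `conn`, the 'compute' service type and the query filters are assumptions for the example.)

```python
# Sketch: discover the compute service id with the SDK and pull all of its
# registered limits in one keystone call, filtering locally. `conn` is an
# authenticated openstack.connection.Connection; error handling is omitted.
def get_compute_registered_limits(conn, region_id=None, resource_names=None):
    service = next(conn.identity.services(type='compute'))
    limits = conn.identity.registered_limits(
        service_id=service.id, region_id=region_id)
    return {
        rl.resource_name: rl.default_limit
        for rl in limits
        if resource_names is None or rl.resource_name in resource_names
    }
```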
opendevreview | Dan Smith proposed openstack/nova-specs master: Add one-time-use-devices spec https://review.opendev.org/c/openstack/nova-specs/+/943486 | 17:59 |
melwitt | that would also get us endpoint discovery as a backport | 18:00 |
sean-k-mooney | ya | 18:00 |
melwitt | probably a better move. I'll push an update with that and see what yall think | 18:01 |
sean-k-mooney | before you do | 18:04 |
sean-k-mooney | why are we even using oslo.limit for this when the openstack sdk has a function to get registered limits | 18:05 |
sean-k-mooney | https://github.com/openstack/openstacksdk/blob/master/openstack/identity/v3/_proxy.py#L1625-L1634 | 18:05 |
melwitt | sean-k-mooney: to make it use the [oslo.limit] config section. or do you suppose there's another way to do that? | 18:05 |
sean-k-mooney | well we could read those config variables ourselves | 18:06 |
sean-k-mooney | i.e. for region | 18:06 |
sean-k-mooney | look up the service id ourselves and then just call keystone directly | 18:07 |
melwitt | yeah. will need all the other auth stuff too and I was thinking it should match what nova is using for enforcing | 18:07 |
sean-k-mooney | or even use our keystoneauth section | 18:07 |
sean-k-mooney | i would agree if oslo.limit provided the apis we needed :) | 18:08 |
sean-k-mooney | ok for now lets not boil the ocean | 18:08 |
sean-k-mooney | and focus on the smallest change to make it work | 18:08 |
melwitt | eh.. I was wary of that because of the idea of using one avenue to get the limits and a different one to enforce. I'm not sure if there could be consequences from some part of the config being different | 18:08 |
melwitt | I was thinking it's likely less error prone to use the same exact thing oslo.limit is using | 18:08 |
melwitt | same user, same password etc and not let there be the ability to make them different | 18:09 |
melwitt | and creating a dummy enforcer seemed like the simplest way to do that. maybe I can make a direct sdk connection read the config section I want. there's probably a way to do that, I'll check | 18:11 |
melwitt | oh yeah, looks like that can be done easily. I'll try that and see what we think | 18:13 |
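(For reference, a minimal sketch of building an SDK connection straight from the [oslo_limit] config section with keystoneauth, so discovery uses exactly the same credentials the enforcer does; it assumes the usual keystoneauth auth/session options are registered under that group.)

```python
# Minimal sketch, assuming the standard keystoneauth options are registered
# under nova's [oslo_limit] group.
from keystoneauth1 import loading as ks_loading
from openstack import connection

import nova.conf

CONF = nova.conf.CONF

def get_limit_connection():
    auth = ks_loading.load_auth_from_conf_options(CONF, 'oslo_limit')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'oslo_limit', auth=auth)
    return connection.Connection(session=session)
```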
opendevreview | Merged openstack/nova master: api: Add response body schemas for for console auth token APIs (v2.99) https://review.opendev.org/c/openstack/nova/+/943252 | 20:37 |
*** haleyb is now known as haleyb|out | 22:48 |