opendevreview | Merged openstack/networking-generic-switch stable/ussuri: Remove grenade jobs from old stable branches https://review.opendev.org/c/openstack/networking-generic-switch/+/800467 | 00:04 |
opendevreview | Jacob Anders proposed openstack/ironic master: [WIP] Add support for verify steps https://review.opendev.org/c/openstack/ironic/+/800001 | 00:55 |
opendevreview | Merged openstack/ironic-python-agent-builder master: typo in docu, admin, ironic-python-agent-ramdisk, outputs three files -> two https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/800890 | 01:36 |
opendevreview | Jacob Anders proposed openstack/ironic master: [WIP] Add support for verify steps https://review.opendev.org/c/openstack/ironic/+/800001 | 02:45 |
arne_wiebalck | Good morning, Ironic! | 06:31 |
iurygregory_ | morning arne_wiebalck and Ironic o/ | 06:37 |
arne_wiebalck | hey iurygregory_ o/ | 06:38 |
*** iurygregory_ is now known as iurygregory | 06:38 | |
iurygregory | stendulker, hey you around? o/ I'm trying to understand why get_subscriptions would be required =) | 06:40 |
iurygregory | it would be to directly ask the BMC for the subscriptions and be able to identify the ones that were deleted so we can sync the DB? | 06:41 |
stendulker | iurygregory: Good morning | 06:44 |
stendulker | iurygregory: Are we going to fetch from BMC every time for the list of subscriptions, or serve from DB? | 06:45 |
iurygregory | morning :D | 06:45 |
iurygregory | stendulker, the plan was to serve from the DB... | 06:45 |
stendulker | iurygregory: It was not clear to me.. if we fetch it from BMC then it's fine | 06:45 |
iurygregory | now I'm wondering if we need to even save... | 06:45 |
iurygregory | since this type of thing can happen .-. | 06:45 |
iurygregory | I'm not sure how we would allow the user to delete the subscriptions if we don't save | 06:46 |
stendulker | if we save it in DB then there's the extra overhead of translating BMC data to Ironic form every time | 06:46 |
iurygregory | yeah | 06:46 |
stendulker | iurygregory: yes, delete will be an issue. | 06:46 |
stendulker | if we save in DB, we would need regular sync up as well | 06:47 |
stendulker | as we will not know if a subscription is deleted | 06:48 |
iurygregory | oh god | 06:48 |
iurygregory | D: | 06:48 |
stendulker | we do not need to worry about a user manually deleting a subscription, but one that gets deleted by the BMC | 06:49 |
iurygregory | hummm maybe the person would need to say the node and id (that matches what is in the BMC) to delete a subscription? | 06:49 |
iurygregory | and not save anything in the DB at all... | 06:49 |
iurygregory | do something like we did for indicators... | 06:49 |
iurygregory | yeah this is true (I would hope the user wouldn't try to manually delete a subscription) | 06:50 |
stendulker | how would it match the subscription id? | 06:50 |
iurygregory | well, the user would need to request the subscriptions... | 06:51 |
stendulker | identifier may differ from bmc to bmc | 06:51 |
iurygregory | yup | 06:51 |
iurygregory | ilo uses integers and idrac uses uuid | 06:51 |
iurygregory | to identify =) | 06:51 |
stendulker | the driver can serve subscription details in an ironic-standardised form | 06:52 |
iurygregory | it would be a DELETE with {"node_uuid": <so we know where we will try to delete>, "bmc_id": <id_bmc_would_give>} | 06:52 |
stendulker | ok, sounds good | 06:52 |
iurygregory | if the user gives an invalid bmc_id we say we couldn't find the subscription | 06:53 |
stendulker | or we can pass it to BMC and let BMC throw error | 06:54 |
iurygregory | I'm just wondering if dtantsur TheJulia rpittau|afk will be ok with this approach (we wouldn't need the DB and something to sync the subscriptions, and it would probably be easier) | 06:54 |
iurygregory | stendulker, yeah that would make sense | 06:55 |
stendulker | to save the call to the BMC to compare validity of the id | 06:55 |
iurygregory | save in the DB you mean? | 06:56 |
stendulker | no | 06:56 |
iurygregory | oh ok =) I'm still waking up =) | 06:56 |
iurygregory | time for the second mug of coffee =) | 06:56 |
stendulker | if we were to validate the subscription id then we would have to fetch from the bmc and compare. instead we can directly attempt the delete and let the bmc throw an error if it's incorrect | 06:57 |
iurygregory | yeah, that makes sense =) | 06:57 |
stendulker | no worries, it can be handled in code | 06:57 |
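For illustration, a minimal sketch of the "attempt the delete and let the BMC complain" idea from the exchange above, using plain requests against a hypothetical BMC (in ironic this would go through sushy and the node's driver_info instead):

```python
import requests

def delete_subscription(bmc_address, subscription_id, auth):
    """Delete a Redfish event subscription without pre-validating the id.

    A sketch: an unknown id simply comes back as an HTTP error from the
    BMC, so no extra GET is needed to check validity first.
    """
    url = (f'https://{bmc_address}/redfish/v1/EventService/'
           f'Subscriptions/{subscription_id}')
    resp = requests.delete(url, auth=auth, verify=False)
    if resp.status_code == 404:
        raise LookupError(f'Subscription {subscription_id} not found')
    resp.raise_for_status()
```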
iurygregory | so we would need to move the subscriptions endpoint under the node since we won't save things in the DB, right? (I'm fine with this also) | 06:59 |
iurygregory | the /v1/subscriptions wouldn't make sense in this case but /v1/node/<node_id>/management/subscriptions would | 07:00 |
*** rpittau|afk is now known as rpittau | 07:31 | |
rpittau | good morning ironic! Happy Friday! o/ | 07:31 |
iurygregory | morning rpittau o/ | 07:32 |
rpittau | hey iurygregory :) | 07:35 |
iurygregory | rpittau, when you are more awake I would like to hear your thoughts about the approach I was talking with stendulker about the subscription API =) | 07:37 |
rpittau | iurygregory: I'm reading now, I would probably need one more espresso :) | 07:38 |
iurygregory | yeah =) | 07:38 |
rpittau | I see the point | 07:40 |
rpittau | so we would get the subscriptions every time from the bmc | 07:43 |
iurygregory | yeah | 07:43 |
iurygregory | like we do for indicators | 07:43 |
rpittau | kk I guess it makes sense, and that could be aggregated anyway and saved in an external service if needed | 07:45 |
rpittau | no need to use the ironic DB | 07:45 |
iurygregory | ok, any thoughts about the endpoint? | 07:46 |
iurygregory | v1/subscriptions won't make sense since we can't get the subscriptions from all nodes | 07:46 |
iurygregory | so I was thinking about the same endpoint we have for indicators | 07:47 |
rpittau | yep, that should be per node, the one you wrote looks ok | 07:47 |
iurygregory | I will start updating the spec | 07:48 |
iurygregory | lets see what Dmitry thinks when he is online | 07:49 |
opendevreview | Arne Wiebalck proposed openstack/ironic-python-agent master: Force immediate NTP time sync with chronyd at IPA startup https://review.opendev.org/c/openstack/ironic-python-agent/+/801032 | 08:46 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/sushy master: Fix Context for EventDestination https://review.opendev.org/c/openstack/sushy/+/801034 | 09:12 |
iurygregory | rpittau, if you can take a look at ^ I just found this while testing | 09:12 |
iurygregory | I've tested locally to be sure that sushy will be able to retrieve the subscription when we create with empty string in the context field =) | 09:14 |
rpittau | that's interesting, I guess there was a confusion between required and requiredOnCreate, even though we didn't set Protocol as required | 09:17 |
rpittau | mmm I see that in more recent versions it becomes required | 09:19 |
rpittau | well, sushy is based on 1.0.0 so it looks ok to me | 09:21 |
rpittau | if we ever update EventDestination to something more recent we'll have to deal with Context being required | 09:21 |
iurygregory | we will send an empty string by default | 09:25 |
iurygregory | the BMC didn't report "Context" when it is an empty string | 09:25 |
iurygregory | so sushy failed to parse the information | 09:26 |
rpittau | yeah | 09:26 |
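A minimal sketch of the fix under discussion, written as a sushy-style resource field; the field subset here is illustrative, but the behaviour shown (a non-required field tolerating a missing JSON key) is how sushy fields work:

```python
from sushy.resources import base


class EventDestination(base.ResourceBase):
    """Illustrative field subset of an EventDestination resource."""

    identity = base.Field('Id', required=True)
    # The fix discussed above: 'Context' must not be marked required,
    # because some BMCs drop the key entirely when its value is an
    # empty string; with the default required=False a missing key
    # parses to None instead of raising MissingAttributeError.
    context = base.Field('Context')
```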
fmuyassarov_ | Hello Ironic team! I have a question regarding IPA. We had an IPA uplift, and after that, finding a disk based on root device hints (WWN) is taking much longer than before. In one of the deployments there are almost 1000 disks on a node, and for some reason IPA is checking each disk very slowly: after 2 hours it was only at /dev/sdfh, which (before the uplift) was fast. I wonder if you have noticed something similar or have an idea of what has changed in this regard, or what extra work could be taking so much time now. | 09:32 |
arne_wiebalck | fmuyassarov_: I have not noticed anything, but then we do not have that many disks on a node. This is which version of the IPA? | 09:35 |
arne_wiebalck | fmuyassarov_: Also, how do you know it is the root device hints, you see this in the logs? (post some messages & timeline maybe to paste.o.o for others to have a look) | 09:37 |
fmuyassarov_ | arne_wiebalck, sure, just a moment. I'll provide that information | 09:39 |
rpittau | fmuyassarov_: console logs from ipa ramdisk would help a lot | 09:40 |
fmuyassarov_ | sure, I will try to get that | 09:40 |
arne_wiebalck | fmuyassarov_: FWIW: cleaning/deploy logs are also stored on the conductor, usually in /var/log/ironic/deploy/ | 09:44 |
arne_wiebalck | fmuyassarov_: but only once the processing finishes | 09:44 |
fmuyassarov_ | arne_wiebalck, so the IPA to which we uplifted was built on 2021-04-27. From here https://tarballs.opendev.org/openstack/ironic-python-agent/dib/files/, it seems it is Centos7-stable-train. | 09:46 |
fmuyassarov_ | arne_wiebalck, because we checked the logs, and at that point IPA was on /dev/sdfh. TBH, we are not sure if it is due to root device hints or something else. Previously this wasn't taking this much time :), maybe some drivers were removed. And we are also using multipathing. But I will share logs so that we have a better picture | 09:52 |
arne_wiebalck | fmuyassarov_: removed drivers is something I was thinking as well (we had this with c8 and some hardware), but since you are on c7 it seems less likely | 09:53 |
arne_wiebalck | fmuyassarov_: let's see what the logs say, but we may need to enable debug mode on the IPA | 09:55 |
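As a rough illustration (not IPA code), a hypothetical probe one could run from the ramdisk to check whether per-disk WWN lookups themselves are the slow part, before full debug logs are available:

```python
import subprocess
import time

# List whole disks only (-d), no header (-n), names only.
devices = subprocess.check_output(
    ['lsblk', '-ndo', 'NAME'], text=True).split()

for dev in devices:
    start = time.monotonic()
    # Reading the WWN per device roughly mirrors what root device
    # hint matching has to look up for each disk.
    wwn = subprocess.check_output(
        ['lsblk', '-ndo', 'WWN', f'/dev/{dev}'], text=True).strip()
    print(f'/dev/{dev}: WWN={wwn or "?"} '
          f'({time.monotonic() - start:.2f}s)')
```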
fmuyassarov_ | arne_wiebalck, sorry but our IPA is actually on centos8 it seems. Do you have a reference to which drivers were removed? | 09:57 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/ironic-specs master: Event Subscription Spec https://review.opendev.org/c/openstack/ironic-specs/+/785742 | 10:00 |
iurygregory | stendulker, I've updated the spec =) let me know if I'm missing something | 10:00 |
stendulker | iurygregory: Thanks, will have a look. | 10:00 |
iurygregory | tks! | 10:01 |
arne_wiebalck | fmuyassarov_: this one maybe: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/hardware-enablement_considerations-in-adopting-rhel-8#removed-hardware-support_hardware-enablement | 10:04 |
arne_wiebalck | rpittau: thanks for the chronyd review, I replied on the change | 10:06 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/sushy master: Fix Context for EventDestination https://review.opendev.org/c/openstack/sushy/+/801034 | 10:06 |
iurygregory | ajya, tks for the review in the sushy patch I think I've answered your question o/ | 10:07 |
ajya | iurygregory: thanks, in the new versions there is "dos not", should be "do not". Also if it is not _all known_ BMCs, then could say "some BMCs" or even add more detail "some BMCs such as X, Y" | 10:10 |
ajya | other than that, wondering if excluding a required nullable field from JSON is according to the spec or not; in any case we have to work around it for released firmware | 10:10 |
rpittau | ajya: in the EventDestination version we support in sushy the "Context" field is not required | 10:12 |
iurygregory | I've updated the releasenote | 10:14 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/sushy master: Fix Context for EventDestination https://review.opendev.org/c/openstack/sushy/+/801034 | 10:14 |
ajya | rpittau: then ok for now | 10:14 |
ajya | iurygregory: noticed one more typo in the release note just now :( - "sbuscription" | 10:16 |
iurygregory | oh god D: | 10:17 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/sushy master: Fix Context for EventDestination https://review.opendev.org/c/openstack/sushy/+/801034 | 10:18 |
dtantsur | morning/afternoon ironic | 10:55 |
iurygregory | morning dtantsur, if you have some time I would like your thoughts about the discussion I had with stendulker re subscription API =) I've pushed the updates in the spec already | 11:05 |
* iurygregory brb lunch time | 11:05 | |
* dtantsur checks the news and shivers | 11:29 | |
dtantsur | iurygregory, I'm not exactly fond of a subscription suddenly disappearing from ironic | 11:30 |
dtantsur | if the BMC decided it does not like it | 11:30 |
iurygregory | dtantsur, it would be the BMC removing it | 11:46 |
iurygregory | so we would need to ensure that the ironic DB is in sync for the subscriptions for each node | 11:47 |
iurygregory | so we could do like we do for indicators without saving information in the DB | 11:48 |
dtantsur | I don't really want us to have GET API talking directly to BMC | 12:08 |
dtantsur | I'd rather have a subscription in some sort of an "error" state or even re-created automatically | 12:09 |
iurygregory | indicators do a GET directly on the BMC, no? .-. | 12:10 |
dtantsur | I hope no, but maybe.. | 12:10 |
iurygregory | https://opendev.org/openstack/ironic/src/branch/master/ironic/drivers/modules/redfish/management.py#L512 (directly -> using sushy..) | 12:12 |
opendevreview | Merged openstack/ironic stable/victoria: Cache AgentClient on Task, not globally https://review.opendev.org/c/openstack/ironic/+/797679 | 12:12 |
dtantsur | iurygregory: okay, let's not take a half-broken feature that nobody uses as an example :) | 12:16 |
dtantsur | iurygregory: listing all subscriptions on a node will get you what, 4 + N HTTP requests at least? | 12:16 |
dtantsur | it seems like an easy way to DoS the BMC | 12:17 |
dtantsur | (or ironic itself if you enroll a lot of nodes with unavailable BMCs) | 12:17 |
iurygregory | + N you mean (N subscriptions?) | 12:19 |
iurygregory | I would need one to authenticate to the BMC, one to get the root information from sushy, another to get the event service and another to list all subscriptions | 12:21 |
dtantsur | how do you list them by system though? | 12:26 |
dtantsur | and you need N to fetch the details, the root resource will only give you redfish UUIDs | 12:26 |
iurygregory | well, to list them all we can just provide the ids of the subscriptions | 12:28 |
iurygregory | https://paste.opendev.org/show/807525/ | 12:29 |
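For illustration, the request sequence being counted above, as a plain requests sketch against a hypothetical BMC: three GETs (service root, EventService, subscription collection) yield the ids, and fetching details costs one more GET per subscription:

```python
import requests

BMC = 'https://bmc.example.com'
AUTH = ('admin', 'password')

def get(uri):
    return requests.get(f'{BMC}{uri}', auth=AUTH, verify=False).json()

root = get('/redfish/v1')                      # GET 1: service root
evt = get(root['EventService']['@odata.id'])   # GET 2: EventService
subs = get(evt['Subscriptions']['@odata.id'])  # GET 3: collection
# The collection members only carry URIs, so listing ids is cheap,
# but any subscription detail costs one more GET each (the "+ N").
ids = [m['@odata.id'].rstrip('/').rsplit('/', 1)[-1]
       for m in subs['Members']]
print(ids)
```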
janders_ | see you on Monday Ironic o/ | 12:31 |
*** janders_ is now known as janders | 12:31 | |
iurygregory | bye janders | 12:31 |
janders | have a great weekend everyone | 12:31 |
rpittau | thanks janders you too | 12:32 |
dtantsur | iurygregory: our consumers don't really need redfish IDs, I suspect.. and how do you filter by System then? | 12:37 |
iurygregory | well the redfish id is the only way to delete a subscription... | 12:38 |
dtantsur | yep, in the original idea it was stored in the database | 12:38 |
dtantsur | an alternative that we haven't considered for a while is to just create a vendor passthru for Redfish | 12:38 |
iurygregory | filter by system? I'm not sure if there is such a thing | 12:38 |
iurygregory | EventService is under the root of redfish | 12:39 |
dtantsur | well, here is another reason why we need to store subscriptions in our database (or use a vendor passthru) | 12:39 |
dtantsur | how does it even work when a redfish endpoint is responsible for several systems? | 12:39 |
iurygregory | well the schemas for EventService and EventDestination don't mention any Systems... | 12:41 |
iurygregory | mraineri, do you know how it would work if we try to create a subscription when the redfish endpoint is responsible for several systems? | 12:42 |
dtantsur | yeah, we need to figure out how it works before we design further | 12:42 |
dtantsur | iurygregory: given the number of uncertainties we keep discovering, I wonder if starting with a vendor passthru is the right choice | 12:42 |
iurygregory | humm so we would define methods that would be called via /v1/nodes/{node_ident}/vendor_passthru?method={method_name} ? | 12:45 |
dtantsur | iurygregory: yep. and maybe start with only creating and deleting a subscription? | 12:48 |
* iurygregory opens a new branch locally and start looking at vendor-passthru | 12:48 | |
mraineri | iurygregory: It depends on how you structure your subscription | 12:56 |
mraineri | If you don't specify resources you wish to monitor, particular messages, etc, you'll get every event published by the service | 12:56 |
mraineri | Which in the case of a multi-system enclosure would mean all systems | 12:57 |
mraineri | You can use the "OriginResources" and "SubordinateResources" properties in the subscription to restrict what areas of the model you want to see (if it's a particular system instance for example) | 12:58 |
iurygregory | but this is not available in old Event schemas, right? | 12:58 |
mraineri | OriginResources has been available for a while | 12:59 |
iurygregory | for example if the machine has 1_0_6 https://redfish.dmtf.org/schemas/v1/EventDestination.v1_0_6.json | 12:59 |
iurygregory | and has multiple systems the subscription would work for all systems on it | 12:59 |
mraineri | SubordinateResources was added later | 12:59 |
mraineri | Yeah, 1.0.6 would mean you don't have that control | 13:00 |
iurygregory | yeah (not even sure if some hardware vendor has support for it) | 13:00 |
mraineri | 1.1.0 introduced OriginResources | 13:00 |
mraineri | I know several do at the moment | 13:00 |
iurygregory | hummm | 13:00 |
dtantsur | mraineri: can we filter subscriptions by OriginResources? | 13:04 |
iurygregory | dtantsur, https://opendev.org/openstack/ironic/src/branch/master/ironic/drivers/modules/redfish/vendor.py#L29 this would be the place right? | 13:05 |
dtantsur | iurygregory: yep. you can see at least one example there already | 13:06 |
iurygregory | ack going to start hacking and push a patch | 13:06 |
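A rough sketch, not the actual patch, of what such a passthru method could look like, following the pattern in the vendor.py file linked above; the method name and behaviour are assumptions:

```python
from ironic.drivers import base


class RedfishVendorPassthru(base.VendorInterface):
    """Illustrative vendor passthru skeleton for subscriptions."""

    def get_properties(self):
        return {}

    def validate(self, task, method=None, **kwargs):
        # Real code would check driver_info and the kwargs here.
        pass

    @base.passthru(['POST'], async_call=False,
                   description='Create a Redfish event subscription.')
    def create_subscription(self, task, **kwargs):
        # Would talk to the BMC via sushy using the node's driver_info
        # and return the subscription the BMC created (including its
        # id), since nothing is stored in the ironic DB.
        pass
```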
mraineri | As in when you're performing inspection of the active subscriptions? You should be able to with $filter | 13:15 |
dtantsur | mraineri: imagine we're looking for subscriptions that belong to a particular System. in the Redfish API, I mean. | 13:15 |
mraineri | Yeah, $filter can be used to narrow down subscriptions with particular OriginResources | 13:18 |
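As a hedged illustration of that query: the exact $filter grammar below follows general OData conventions and is an assumption rather than a verified Redfish example, and support varies by BMC:

```python
import requests

# Hypothetical query narrowing the subscription collection to entries
# whose OriginResources reference one particular system.
url = ('https://bmc.example.com/redfish/v1/EventService/Subscriptions'
       "?$filter=OriginResources/any(r: r/@odata.id eq "
       "'/redfish/v1/Systems/1')")
resp = requests.get(url, auth=('admin', 'password'), verify=False)
print(resp.status_code,
      [m['@odata.id'] for m in resp.json().get('Members', [])])
```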
TheJulia | good morning | 13:27 |
dtantsur | morning TheJulia | 13:27 |
TheJulia | iurygregory: stendulker: syncing is not *really* required if update is not permitted on the api. Granted, the BMC could have an external action occur and syncing would make sense to remediate that. But that doesn't need to be solved upfront. | 13:30 |
iurygregory | good morning TheJulia | 13:31 |
TheJulia | iurygregory: afaik indicators go through the conductor rpc layer | 13:32 |
TheJulia | iurygregory: under no circumstances should the api be permitted to talk to bmcs directly | 13:32 |
iurygregory | thinking about the vendor passthru, it would allow the user to pass more information and we wouldn't need to worry about only accepting Alert in the ironic api | 13:32 |
dtantsur | yeah, I think we should aim for eventual 1st class API, but vendor passthru is great as a staging area to prove usability | 13:33 |
iurygregory | we would let the user have more flexibility I think | 13:33 |
TheJulia | w/r/t vendor passthru, please no if it is going to be a standardized or multi-vendor thing. | 13:33 |
dtantsur | why? | 13:33 |
dtantsur | have we changed our stance on it? | 13:33 |
TheJulia | singular vendor one off thing | 13:33 |
dtantsur | OR a staging area | 13:34 |
TheJulia | true | 13:34 |
dtantsur | at least that's how we originally saw it | 13:34 |
TheJulia | but we've got discussion amongst two separate vendors | 13:34 |
dtantsur | have we? I've only heard about Redfish so far. | 13:34 |
TheJulia | I've seen chatter in this channel on the events subject between dell and hpe reps | 13:34 |
TheJulia | true, redfish, but hpe mainly drives folks through ilo except if it is one of the edgelines | 13:35 |
dtantsur | and does iLO support subscriptions? | 13:35 |
dtantsur | (both the hardware and the driver/proliantutils) | 13:36 |
TheJulia | I believe they do, and if they wanted to mirror vendor passthru then it wouldn't be ideal | 13:36 |
TheJulia | also... vendor passthrough basically requires the permission level of system-admin to utilize | 13:36 |
TheJulia | because it can be used as a generic passthrough to the bmc to do any number of things | 13:36 |
dtantsur | in theory, we can provide policies per call | 13:37 |
TheJulia | that is likely okay in the metal3 case though | 13:37 |
dtantsur | (we don't do it now, but there is nothing preventing us) | 13:37 |
TheJulia | when you say in theory, based upon the schema? | 13:37 |
dtantsur | I mean, there is nothing preventing us from doing it | 13:37 |
TheJulia | Yes, I guess, adding more policies and matching the call to the policy yeah | 13:38 |
dtantsur | IIRC we went down a simpler path of using the same policy for all vendor passthru calls in existence | 13:38 |
TheJulia | yes, because the risk and level of access it provides to the bmc | 13:38 |
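For illustration, a sketch of what a per-method policy could look like with oslo.policy if ironic ever defined one; the rule name is hypothetical (today a single vendor_passthru policy covers every method):

```python
from oslo_policy import policy

# Hypothetical per-method rule; ironic currently applies one policy
# to all vendor passthru calls for the reasons discussed above.
create_subscription_rule = policy.DocumentedRuleDefault(
    name='baremetal:node:vendor_passthru:create_subscription',
    check_str='role:admin',
    description='Create a Redfish event subscription via passthru.',
    operations=[{'path': '/nodes/{node_ident}/vendor_passthru',
                 'method': 'POST'}])
```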
dtantsur | anyway, my comment about vendor passthru was mostly along the lines of "if we have a hard time understanding what we can and cannot do, let's start with a vendor passthru" | 13:38 |
TheJulia | oh, absolutely agree | 13:38 |
TheJulia | My concern is largely something living in vendor passthrough forever and being kind of a nebulous thing | 13:39 |
dtantsur | possibly | 13:40 |
dtantsur | another concern is having synchronous calls in the API (I must admit, my initial version of this RFE already had it) | 13:41 |
TheJulia | also security risk and all of that | 13:41 |
TheJulia | yeah | 13:41 |
TheJulia | If we make it nebulous and undefined, then we won't have the information we really need to expand it and make it a proper interface unless it is the same exact person using and driving it along forever more | 13:42 |
TheJulia | so it is a huge risk too as a nebulous thing | 13:42 |
dtantsur | a counter-example: indicators API | 13:43 |
dtantsur | something we introduced because of a metal3 request. then metal3 lost interest, and here we are | 13:43 |
dtantsur | having an improperly built API which we need to support forever for likely 0 consumers | 13:44 |
TheJulia | true | 13:44 |
TheJulia | so, which poison do we want | 13:44 |
TheJulia | preferably the tasty beverage kind? | 13:44 |
dtantsur | *node* | 13:44 |
dtantsur | egh | 13:44 |
dtantsur | *nod* | 13:44 |
* dtantsur nods with a node | 13:44 | |
* TheJulia needs a teleporter to make a tasty beer appear before dtantsur | 13:45 | |
dtantsur | I wish people were less religious about microversioning and listened to me about having experimental APIs. lost battle, sigh. | 13:45 |
dtantsur | mmm, beer | 13:45 |
dtantsur | we've developed taste for https://sapporobeer.com/ recently | 13:46 |
dtantsur | it's simple but very tasty | 13:46 |
TheJulia | Ahh yes, it is a tasty ?rice? beer | 13:46 |
dtantsur | I think it's just a normal lager | 13:46 |
dtantsur | we pick it up in the Japanese shop at the same time when we pick up fish for sushi | 13:46 |
dtantsur | so they go together :) | 13:46 |
TheJulia | of course | 13:46 |
iurygregory | dtantsur, if you have some time https://review.opendev.org/c/openstack/sushy/+/801034 | 13:54 |
iurygregory | I've found this while testing | 13:54 |
TheJulia | There is some long story behind the evolution of beers to include rice as one of the ingredients. It has always surprised me how the popular american beers have used this for a while and the general public is not really consciously aware of it | 13:55 |
* TheJulia continues role of collector of useless facts | 13:56 | |
iurygregory | I knew about corn, oatmeal.. rice as a beer ingredient is new to me | 13:58 |
TheJulia | what beers use corn? | 13:58 |
TheJulia | I want to run away now | 13:59 |
iurygregory | I know a few in Brazil; people told me about Corona, Sol | 14:01 |
TheJulia | hmm | 14:01 |
TheJulia | corona actually makes sense | 14:01 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/ironic master: [WIP] Add vendor_passthru method for subscriptions https://review.opendev.org/c/openstack/ironic/+/801064 | 14:04 |
iurygregory | dtantsur, something along these lines? ^ | 14:04 |
iurygregory | I would only need one for delete and another to list and that's it? *magic* | 14:05 |
dtantsur | iurygregory: yeah, something like this. this is why vendor passthru is so good for prototyping | 14:06 |
iurygregory | I wasn't aware of this magic | 14:06 |
iurygregory | I have to say I liked it | 14:06 |
iurygregory | time to finish this | 14:06 |
rpittau | bye everyone, have a great weekend! o/ | 14:08 |
*** rpittau is now known as rpittau|afk | 14:08 | |
opendevreview | Merged openstack/ironic-python-agent master: Catch ismount not being handled https://review.opendev.org/c/openstack/ironic-python-agent/+/798394 | 14:20 |
opendevreview | Merged x/sushy-oem-idrac master: setup.cfg: Replace dashes with underscores https://review.opendev.org/c/x/sushy-oem-idrac/+/794698 | 14:22 |
opendevreview | Merged x/sushy-oem-idrac master: Add Python3 Xena unit tests https://review.opendev.org/c/x/sushy-oem-idrac/+/793752 | 14:22 |
opendevreview | Merged x/sushy-oem-idrac stable/wallaby: Update .gitreview for stable/wallaby https://review.opendev.org/c/x/sushy-oem-idrac/+/800809 | 14:34 |
opendevreview | Merged x/sushy-oem-idrac stable/wallaby: Update TOX_CONSTRAINTS_FILE for stable/wallaby https://review.opendev.org/c/x/sushy-oem-idrac/+/800810 | 14:34 |
opendevreview | Eric Barrera proposed x/sushy-oem-idrac master: Enable coverage HTML output https://review.opendev.org/c/x/sushy-oem-idrac/+/795698 | 14:37 |
iurygregory | dtantsur, since we are thinking about the vendor passthru, gophercloud needs to support that, right? (not sure if it already does) | 14:57 |
dtantsur | iurygregory: in this case yes | 14:58 |
* dtantsur hopes no terraform support is needed | 14:58 | |
iurygregory | it shouldn't =) | 14:58 |
iurygregory | subscriptions will be created after the nodes are deployed afaik | 14:59 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/ironic master: Add vendor_passthru method for subscriptions https://review.opendev.org/c/openstack/ironic/+/801064 | 14:59 |
iurygregory | I'm working on the unit tests now, I've added validations for create and we can allow users to pass the other parameters | 15:00 |
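For illustration, how a client could invoke the passthru once merged; the endpoint shape is ironic's real vendor_passthru route, but the method name and payload keys are assumptions based on the discussion, not the final patch:

```python
import requests

IRONIC = 'http://ironic.example.com:6385'
NODE = 'node-0'

# POST /v1/nodes/<node>/vendor_passthru?method=<name>; the payload is
# passed through to the driver method as kwargs.
resp = requests.post(
    f'{IRONIC}/v1/nodes/{NODE}/vendor_passthru'
    '?method=create_subscription',
    json={'Destination': 'https://listener.example.com/events'},
    headers={'X-OpenStack-Ironic-API-Version': '1.37'})
resp.raise_for_status()
print(resp.status_code)
```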
opendevreview | Merged x/sushy-oem-idrac master: Enable coverage HTML output https://review.opendev.org/c/x/sushy-oem-idrac/+/795698 | 15:08 |
opendevreview | Takashi Kajinami proposed openstack/metalsmith master: Replace deprecated import of ABCs from collections https://review.opendev.org/c/openstack/metalsmith/+/801089 | 15:59 |
TheJulia | NobodyCam: https://etherpad.opendev.org/p/pain-point-elimination | 16:03 |
TheJulia | retweets of https://twitter.com/ashinclouds/status/1416068954427035654 would be greatly appreciated | 16:17 |
TheJulia | arne_wiebalck: https://etherpad.opendev.org/p/pain-point-elimination | 16:24 |
NobodyCam | ++ Thank you TheJulia ++ I will check with folks around these parts and see if there is anything not on the list already | 16:24 |
opendevreview | Bob Fournier proposed openstack/sushy-tools master: Fix to handle correct path for BiosRegistry https://review.opendev.org/c/openstack/sushy-tools/+/801099 | 16:42 |
TheJulia | NobodyCam: oh hai, got a moment for a few questions? | 16:46 |
TheJulia | SPUC!? https://bluejeans.com/250125662 | 16:59 |
JayF | yep be there in ~5 | 17:00 |
TheJulia | NobodyCam: the power of the SPUC compels you.... | 17:00 |
TheJulia | arne_wiebalck: is it always ironic-conductor that causes db load spikes with mysql, or is it nova-compute in your environment? I'm looking through the code, based upon what NobodyCam was noting earlier, and it looks like nova-compute is where things get hit hard | 20:49 |
NobodyCam | sorry in meeting right now | 20:50 |
TheJulia | no worries | 20:53 |
TheJulia | arne_wiebalck is ZzZzZzZzZzZz | 20:53 |
TheJulia | and I'm honestly thinking an afternoon nap is on the docket | 20:53 |