16:00:00 <iurygregory> #startmeeting ironic
16:00:00 <opendevmeet> Meeting started Mon Mar 14 16:00:00 2022 UTC and is due to finish in 60 minutes.  The chair is iurygregory. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:00 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:00 <opendevmeet> The meeting name has been set to 'ironic'
16:00:09 <TheJulia> o/
16:00:13 <iurygregory> Hello ironicers, welcome to our weekly meeting o/
16:00:13 <dtantsur> o/
16:00:17 <ameya49> o/
16:00:21 <rpittau> o/
16:00:24 <kamlesh6808c> o/
16:00:26 <ajya> o/
16:00:29 <rloo> o/
16:00:37 <MahnoorAsghar> o/
16:00:47 <iurygregory> The agenda for our meeting can be found in the wiki
16:00:51 <iurygregory> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
16:01:17 <iurygregory> #topic Announcements / Reminder
16:01:17 <arne_wiebalck> o/
16:01:51 <iurygregory> #info Yoga final release is March 30th, we will be trying to release the rest of the projects during this week.
16:02:08 <rpioso> \o
16:02:13 <iurygregory> #info Zed PTG April 4 - 8
16:02:40 <iurygregory> we already have an etherpad, feel free to add topics you would like to discuss during the PTG
16:02:53 <iurygregory> #link https://etherpad.opendev.org/p/ironic-zed-ptg
16:03:15 <iurygregory> Does anyone have anything else to announce/remind us of ?
16:04:24 <iurygregory> ok, moving on because we have a big agenda for today's meeting =)
16:04:26 <MahnoorAsghar> Perhaps I should add that this is my last week with Outreachy
16:04:57 <iurygregory> wow time flies =)
16:05:10 <MahnoorAsghar> It does! Will come back in Open discussion ^-^
16:05:23 <iurygregory> awesome =D
16:05:45 <iurygregory> #topic Review action items from previous meeting
16:05:49 <rpioso> MahnoorAsghar: Congrats!
16:06:06 <MahnoorAsghar> Thank you :D
16:06:16 <iurygregory> skipping since we don't have any action items
16:06:36 <iurygregory> #topic Review subteam status reports
16:06:43 <iurygregory> #link https://etherpad.opendev.org/p/IronicWhiteBoard
16:06:51 <iurygregory> starting around L62
16:08:10 <rpittau> I believe the support for CS9 in bifrost and ipa-builder is completed
16:08:36 <iurygregory> agree
16:11:58 <iurygregory> I've updated the items I had info already, should we wait a bit more or we can move on?
16:13:03 <TheJulia> Likely proceed
16:13:09 <iurygregory> moving on =)
16:13:16 <iurygregory> #topic Deciding on priorities for the coming week
16:14:11 <iurygregory> ok, since we will be trying to create stable/yoga for ironic, inspector, ipa, ipa-b, bifrost this week we will focus on patches for these 5 projects =)
16:14:45 <TheJulia> sounds reasonable
16:15:47 <iurygregory> any outstanding patches we think would be good to have in Yoga?
16:16:16 <dtantsur> the Jacob's cleaning patch, I assume?
16:16:37 <arne_wiebalck> yes, it is on the discussion list for today
16:16:43 <iurygregory> ++, we will have a discussion regarding the patch today
16:16:50 <iurygregory> yeah, tks arne_wiebalck =)
16:17:56 <iurygregory> I will be checking the projects that don't have open patches and submitting a DNM just to check if the CI is fine before cutting the release for it
16:19:31 <dtantsur> ++
16:19:44 <iurygregory> ok, moving to our next topic, if anyone has patches that will need attention please add the hashtag to it =)
16:19:57 <iurygregory> #topic Discussion
16:20:17 <iurygregory> okay first topic for our discussion is mine =)
16:21:00 <iurygregory> #topic Zed PTG slots
16:21:10 <iurygregory> #link http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027647.html
16:21:59 <iurygregory> based on the feedback I got last week from some people we would go with option A or C
16:22:44 <TheJulia> either work for me
16:23:12 <iurygregory> if we want to go with option C I would need to know who would be responsible for starting the meeting etc =)
16:23:32 <arne_wiebalck> A minimises the overlap with the TC
16:23:37 <rloo> iurygregory: given that you're the PTL, I'd prefer A
16:24:13 <iurygregory> arne_wiebalck, yup
16:24:19 <rpittau> A also for me
16:24:38 <rloo> or D? -- is 17:00-18:00 late for people?
16:24:54 <arne_wiebalck> for A: if we work hard during the week, we get Friday off :-D
16:25:04 <iurygregory> keep in mind that the 2hrs on Friday will be only in case *we really need*
16:25:40 <rloo> let's just decide now that we won't need Friday ;)
16:25:50 <iurygregory> yeah =)
16:26:12 <rloo> 3 hours T, W, Th is ... a lot. Did we do that before? I'd think 2 hours would be sufficient?
16:26:13 <rpittau> I doubt we will get to Friday with enough brain power left :)
16:26:23 <iurygregory> rloo, yup we had this in the past
16:26:48 <rloo> ah. ok. then let's try to work hard and be more concise, ha ha.
16:27:01 <iurygregory> we did two breaks I think *10-15min each*
16:27:02 <arne_wiebalck> :-D
16:27:52 <iurygregory> just a heads-up: 14-17 on Tue is all booked, so we will probably use something else for the meeting
16:28:06 <iurygregory> I will figure this out and let everyone know =)
16:28:23 <rloo> how can things be 'all booked' if this is virtual?
16:28:33 <TheJulia> limited number of zoom rooms
16:28:39 <iurygregory> ^ this =)
16:28:59 <iurygregory> from A to N (I think)
16:29:04 <arne_wiebalck> how come we have fewer rooms than registered projects?
16:29:20 <rloo> arne_wiebalck: maybe you can ask that question at the tc level? heh
16:29:31 <iurygregory> ops XD
16:29:42 <arne_wiebalck> rloo: heh
16:29:54 <iurygregory> I will talk with the organizers, just to check if they are able to add another or not
16:30:15 <iurygregory> it would be amazing if they can =)
16:30:23 <iurygregory> maybe it's related to $$$$
16:31:07 <iurygregory> ok, moving to our next topic, rpittau the mic is yours
16:31:36 <iurygregory> #topic March 27th time change in EU, move the meeting 1 hour earlier?
16:31:56 <TheJulia> I thought we already reached consensus on this
16:32:32 <iurygregory> he sent an email to the ML and we would wait and see (at least that's how I understood it...)
16:33:21 <rloo> the email sez 'if there are no objections'...
16:33:35 <rpittau> well, no one replied so I guess we're ok with moving the meeting one hour earlier starting from March 28?
16:33:42 <TheJulia> Lazy Consensus :)
16:33:45 <arne_wiebalck> rpittau: !!
16:34:30 * arne_wiebalck has a lazy finger it seems
16:34:34 <arne_wiebalck> rpittau: ++
16:34:35 <rpittau> :D
16:34:50 <rloo> yes, please move it! Anyway, you can always wait til after next meeting. it starts Mon Mar 28 :)
16:35:00 <iurygregory> yeah
16:35:03 <iurygregory> ok, moving to our next topic with kamlesh6808c
16:35:12 <iurygregory> #topic usage of DRACClient in ironic-tempest-plugin for raid cleaning tempest test for physical baremetal servers
16:35:25 <TheJulia> Why?
16:35:31 <kamlesh6808c> Hi, Ironic Team !
16:35:33 <iurygregory> kamlesh6808c, feel free to provide details so people can understand
16:35:53 <TheJulia> The *intent* of tempest plugins is that they are to be executed externally from a cloud, with no abnormal level of context or access to the backend infrastructure
16:36:06 <TheJulia> No client libraries are permitted under Tempest commandment T102
16:36:07 <kamlesh6808c> I am part of the Dell team where we are working on improving the test coverage in its third-party CI.
16:37:00 <dtantsur> I have hard feelings about "no client libraries", but I do agree that dracclient may be a bit excessive..
16:37:01 <kamlesh6808c> Currently I am working on the RAID cleaning step tempest implementation. We are performing and testing this on physical baremetal servers.
16:37:36 <kamlesh6808c> In order to leverage the build_raid_and_verify_node method (https://github.com/openstack/ironic-tempest-plugin/blob/37d61a4acf34040c3f4af63a3b2142bfe59d81a1/ironic_tempest_plugin/tests/scenario/baremetal_standalone_manager.py#L580) in the cleaning test, we are planning to use the root device hint "by_path" in order to deploy the node (the present hint "name" doesn't work on physical baremetal).
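(For context on the hint kamlesh6808c mentions: a by_path root device hint lives in the node's properties. A hedged example follows; the node UUID and device path are placeholders, real by-path values come from the server's own hardware:)

```shell
# Hypothetical example: pin the root device with a by_path hint.
# The path below is a placeholder; list real candidates under
# /dev/disk/by-path/ on the target server.
openstack baremetal node set <node-uuid> \
    --property root_device='{"by_path": "/dev/disk/by-path/pci-0000:00:1f.2-ata-1"}'
```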
16:37:40 <TheJulia> dtantsur: well, it's an independent API contract test above all else, so... *shrugs*
16:38:20 <TheJulia> kamlesh6808c: so you're requiring an elevated level of access and insight to perform the test
16:38:36 <kamlesh6808c> My query or concern: how can we use DRACClient in ironic-tempest-plugin to grab the virtual disk information that gets generated at runtime when the RAID configuration is created on the servers?
16:38:37 <TheJulia> That seems to be counter to https://docs.openstack.org/tempest/latest/HACKING.html#test-data-configuration
16:38:52 <dtantsur> TheJulia: I wish we could use keystoneauth at least
16:39:04 <TheJulia> dtantsur: that would be glorious
16:39:43 * rpioso requests folks listen to the problem statement.
16:39:46 <TheJulia> kamlesh6808c: Realistically, from my point of view, it would need to be supplied configuration outside of the test job, and thus can't be in the test itself. So, a configuration parameter in which such data can be supplied
16:40:22 <TheJulia> It would be a violation of the tempest use model to use a client library to go get data from the drac card directly
16:40:42 <TheJulia> that implies *elevated* and *private* infrastructure access, both of which are contrary to the use and meaning of tempest
16:40:57 <TheJulia> Because, the use is "as a user of a cloud from outside that cloud"
16:41:41 <dtantsur> I wonder if the test can be combined with inspection to get the necessary data...
16:41:54 <TheJulia> dtantsur: that seems reasonable
16:41:56 <kamlesh6808c> ohh any suggestion of way ahead?
16:42:00 <rpioso> dtantsur: Unfortunately, it can't.
16:42:10 <dtantsur> rpioso: even in-band, via inspector?
16:42:17 <rpioso> dtantsur: Nope, again.
16:42:19 <TheJulia> API surface access to existing services should be entirely doable
16:42:24 <dtantsur> sad
16:43:32 <rpioso> Information about the virtual disk is needed to construct the by_path root device hint. The iDRAC's OOB APIs offer that.
16:43:34 <dtantsur> I'm not a tempest purist, to be honest. Given what we do in the n-g-s tempest plugin, using dracclient doesn't sound too bad to me.
16:43:59 <dtantsur> But then again, if you ask me, I'd burn tempest with fire :)
16:44:09 <TheJulia> dtantsur: link to what you're referring to?
16:44:14 <rpioso> We might be able to use sushy, instead of dracclient. Our downstream JetPack scripts use dracclient, so that's more straightforward for us.
16:44:27 <TheJulia> that would still be external elevated access to the backend
16:44:35 <dtantsur> TheJulia: n-g-s literally accesses local network interfaces: https://opendev.org/openstack/networking-generic-switch/src/branch/master/tempest_plugin/tests/scenario/test_ngs_basic_ops.py#L51
16:44:48 <dtantsur> not n-g-s, its "tempest" plugin
16:45:01 <TheJulia> dtantsur: eww, yeah
16:45:12 <TheJulia> that is really bad since it means that test can only ever be run in devstack locally
16:45:22 <TheJulia> that should be fixed...
16:45:26 <dtantsur> rpioso: sushy would be slightly better from the requirements perspective
16:45:39 <dtantsur> I think dracclient is not in global-requirements
16:46:00 <rpioso> dtantsur: Could that be a follow-on thing?
16:46:26 <dtantsur> rpioso: well... you'll have a hard time making the requirements job accept adding dracclient
16:46:50 <dtantsur> (unless I've forgotten how the requirements processes work)
16:46:51 <TheJulia> And it is likely to trigger a T102 check
16:47:17 <rpioso> dtantsur, TheJulia: Gotcha :) We'll investigate using sushy.
16:48:18 <TheJulia> https://opendev.org/openstack/patrole/src/branch/master/patrole_tempest_plugin/hacking/checks.py#L24-L54 is not too horrible
16:48:39 <rpioso> dtantsur: We'll also double check your introspection suggestion. Seems like that would require yet another reboot of the physical server.
16:48:40 <kamlesh6808c> yes, thanks Richard, TheJulia, dtantsur, Iury
16:48:48 <iurygregory> np
16:48:54 <dtantsur> TheJulia: ha, we can use keystoneauth! :)
16:48:59 <dtantsur> rpioso++
16:49:01 <iurygregory> moving to our next topic
16:49:05 <TheJulia> https://opendev.org/openstack/tempest/src/branch/master/tempest/hacking/checks.py#L43 lifted directly from tempest's code
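(The hacking checks linked above boil down to a regex over import lines in test modules. A minimal sketch of that style of check follows; the pattern, path test, and message here are illustrative, not tempest's or patrole's actual code:)

```python
import re

# Illustrative pattern: flag imports of backend client libraries
# (e.g. dracclient) inside test modules, in the spirit of T102.
CLIENT_IMPORT_RE = re.compile(
    r"^\s*(?:import|from)\s+(dracclient|ironicclient)\b")


def check_no_client_imports(physical_line, filename):
    """Return an (offset, message) tuple if a forbidden import is found.

    Mirrors the shape of tempest-style hacking checks: each check
    receives one source line plus the filename and returns a
    flake8-style error tuple, or None to pass.
    """
    if "/tests/" not in filename:
        return None
    match = CLIENT_IMPORT_RE.match(physical_line)
    if match:
        return (match.start(1),
                "T102: client library import not allowed in tests")
    return None
```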
16:49:12 <iurygregory> arne_wiebalck, the mic is yours =)
16:49:44 <arne_wiebalck> This is about the hybrid cleaning patch from janders mentioned earlier.
16:49:47 <iurygregory> #topic Hybrid Cleaning Patch - how to go about the concern about potentially long-running ata_erase?
16:49:51 <iurygregory> #link https://review.opendev.org/c/openstack/ironic-python-agent/+/818712
16:50:24 <arne_wiebalck> After TheJulia raised some concerns, he laid out 3 options for how to move forward and would like to get some input.
16:50:51 <TheJulia> My preference would be to mirror the pattern, use a thread pool
16:51:01 <arne_wiebalck> parallelism?
16:51:16 <TheJulia> Every operator who has enabled that with cleaning has come back and expressed gratitude cleaning is way faster
16:51:17 <TheJulia> yes
16:51:27 <TheJulia> well, gratitude that cleaning...
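(The thread-pool pattern TheJulia refers to can be sketched as below; ``erase_fn`` is a stand-in for whatever actually wipes one block device, not IPA's real code:)

```python
from concurrent.futures import ThreadPoolExecutor


def erase_devices_in_parallel(devices, erase_fn, max_workers=4):
    """Run one blocking erase call per device, concurrently.

    Running the per-disk erases in parallel bounds total cleaning
    time by the slowest disk rather than the sum over all disks.
    Returns a mapping of device -> erase_fn result.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(devices, pool.map(erase_fn, devices)))
```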
16:51:57 <dtantsur> it still can take long?
16:52:00 <TheJulia> Secure erase suffers from a similar conundrum, just as using shred against the filesystem
16:52:02 <arne_wiebalck> yes
16:52:14 <dtantsur> if we call something "express", we should make it fast..
16:52:20 <arne_wiebalck> I think when we say "express", it should be express :)
16:52:28 <TheJulia> yes, if you have a spinning device and no encryption key set/in use or just not supported, secure erase becomes a zero-out operation
16:52:39 <TheJulia> fast ++
16:52:48 <arne_wiebalck> TheJulia: dtantsur right
16:53:06 <dtantsur> I don't want to upset janders.. but I wonder if we're introducing this feature prematurely
16:53:13 <TheJulia> s/shred against the filesystem/shred against the block device/
16:53:23 <dtantsur> if we understood the requirements behind it, we would not try to guess
16:53:31 <arne_wiebalck> the problem is we cannot have it all as we do not know if when we try fast, we get fast
16:54:15 <arne_wiebalck> the requirement was to have sth which is smart enough to do the maximum in a short time
16:54:27 <arne_wiebalck> rather than only metadata or only shred
16:54:41 <dtantsur> define "short time"
16:54:52 <arne_wiebalck> less than 1 minute per disk
16:55:01 <dtantsur> is risk to have it running 2 hours acceptable?
16:55:25 <arne_wiebalck> I don't think so: what if users just want to redeploy?
16:55:30 <dtantsur> right
16:55:45 <dtantsur> then we either need to find a way to determine if ATA erase is going to be fast.. or skip ATA erase from this new call
16:55:51 <iurygregory> can the user check the type of cleaning we are doing?
16:56:05 <dtantsur> an operator has access to configuration
16:56:11 <TheJulia> So, the disks are *supposed to* provide an estimated time
16:56:16 <dtantsur> an API user can only determine when it's already running
16:56:18 <TheJulia> before you launch it
16:56:19 <arne_wiebalck> dtantsur: I was suggesting to launch and abort if too long, but this seems to break some hardware.
16:56:28 <dtantsur> yeah, I don't think aborting is a good idea
16:56:30 <TheJulia> That is the give away on the pattern
16:56:41 <TheJulia> Often, you can't abort secure erase ops
16:57:01 <arne_wiebalck> I have no feeling for this: if a disk offers secure erase, how often does it take "long"?
16:57:02 <TheJulia> if you *try*, you brick the disk into a security locked state, and we've seen a couple of people who've needed help with such issues over the years
16:57:52 <arne_wiebalck> one suggestion I had was to try, let operators run into it, and disable on a per-node basis
16:57:52 <TheJulia> arne_wiebalck: getting samples
16:58:09 <arne_wiebalck> but then it seems we cannot configure this per node
16:58:25 <dtantsur> I think there are clean priorities overrides per node?
16:58:30 <dtantsur> or only in the configuration?
16:58:36 <dtantsur> (we do need janders in this discussion...)
16:58:58 <arne_wiebalck> we could try to schedule a dedicated session maybe?
16:59:10 <iurygregory> ++
16:59:21 <arne_wiebalck> not sure this will be feasible with the interested parties nicely distributed all over the globe
16:59:35 <rloo> is that a ptg topic, or something that needs to be addressed asap?
16:59:49 <arne_wiebalck> janders would like to get this into yoga
16:59:55 <arne_wiebalck> so ptg is too late
16:59:58 <rloo> oh...
17:00:01 <dtantsur> Jacob wanted it in Yoga, but I don't think it's business-critical for us
17:00:03 <arne_wiebalck> but it would be a good ptg topic
17:00:10 <iurygregory> we discussed as a PTG topic last time
17:00:14 <MahnoorAsghar> maybe same time of day, different date?
17:00:17 <TheJulia> https://paste.openstack.org/show/bqeB4BHeKR2AtVWVJ6n7/
17:00:20 <dtantsur> can we mark it tech preview / proof of concept / do not use if you don't understand?
17:00:26 <TheJulia> wow, one of my disks doesn't support Security at all *gasp*
17:00:30 <TheJulia> arne_wiebalck: ^^^
17:00:34 <iurygregory> dtantsur, can we do that? O.o
17:00:36 <arne_wiebalck> yes, this was what I had in mind
17:00:43 <arne_wiebalck> basically opt-in
17:01:00 <arne_wiebalck> and then refine if too many issues, e.g. with a config option
17:01:02 <dtantsur> iurygregory: who will forbid us? :)
17:01:19 <rloo> i'm concerned. unless it is a high priority, we shouldn't try to get it into yoga. Or at least, unless there is fairly strong consensus on a solution, we shouldn't.
17:01:23 <arne_wiebalck> TheJulia: ++
17:01:29 <iurygregory> dtantsur, not me :D (I'm just checking)
17:01:35 <dtantsur> TheJulia: so, if both times are within several minutes, we can continue?
17:01:54 <arne_wiebalck> rloo: I agree
17:02:01 <TheJulia> dtantsur: likely, but we can run these things in parallel as well
17:02:07 <arne_wiebalck> rloo: so maybe it is a good ptg topic after all :)
17:02:11 <TheJulia> The first one is an intel ssd, the second one is a samsung ssd
17:02:13 <iurygregory> ++
17:02:15 <dtantsur> even in parallel, 36 minutes is not express
17:02:26 <TheJulia> the third is an enterprise grade 7200 rpm spinner
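(The estimates TheJulia sampled come from the drive's Security section as reported by ``hdparm -I``, which prints a line like "2min for SECURITY ERASE UNIT". A hedged sketch of reading that estimate before launching an erase; the sample outputs below are fabricated:)

```python
import re


def erase_minutes(hdparm_output):
    """Extract the drive's estimated SECURITY ERASE UNIT time in minutes.

    ``hdparm -I /dev/sdX`` reports the estimate in its Security
    section; returns None if the drive reports no estimate.
    """
    match = re.search(r"(\d+)\s*min for SECURITY ERASE UNIT", hdparm_output)
    return int(match.group(1)) if match else None


# Fabricated sample output for two drives:
fast_ssd = "Security:\n\t2min for SECURITY ERASE UNIT. 2min for ENHANCED SECURITY ERASE UNIT."
slow_hdd = "Security:\n\t528min for SECURITY ERASE UNIT."
```

A check like this could gate whether the "express" path attempts ATA secure erase at all, falling back when the reported estimate is too long.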
17:02:57 <rpittau> probably leaving the option just for NVMe for now is better, and see possible solutions for ATA during the PTG?
17:03:36 <arne_wiebalck> rpittau: it is the least invasive if we want to merge sth for yoga
17:03:43 <rpittau> yeah, exactly
17:03:47 <rpittau> and nvme express works
17:04:01 <arne_wiebalck> rpittau: but will we ever touch this again once sth is merged ?
17:04:01 <TheJulia> another question is "how do I re-run multiple distinct cleaning steps if I skip drives for a reason and not re-run on them?"
17:04:28 <TheJulia> I'm starting to think this might just be too early to call ready to ship
17:04:37 <rpittau> arne_wiebalck: I think so, it's still a half-feature :)
17:05:16 <arne_wiebalck> rpittau: yeah ... but we leave out the tricky bit, and everyone knows that
17:05:40 <arne_wiebalck> TheJulia: I agree, maybe we should wait (and discuss as rloo suggested)
17:06:02 <iurygregory> yeah, we can try to have discussion before the ptg and also during the PTG
17:06:30 <arne_wiebalck> ok, let's see if we can settle before, otherwise it has to wait
17:06:37 <iurygregory> cool
17:07:05 <iurygregory> we are over the time of our meeting, but we have the Outreachy part
17:07:33 <iurygregory> those who can't stay it's fine =)
17:07:47 <iurygregory> I don't think we have updates for the SIG right arne_wiebalck ?
17:07:54 <arne_wiebalck> iurygregory: NTR for the SIG
17:08:52 <arne_wiebalck> sorry, nothing to report :)
17:09:17 <iurygregory> no worries
17:09:40 <iurygregory> MahnoorAsghar, TheJulia what would be the Outreachy discussion? =)
17:10:27 <MahnoorAsghar> I have been working on the Sphinx extension to process docstrings
17:11:05 <MahnoorAsghar> And would appreciate reviews, if somebody finds time please do review!
17:11:23 <TheJulia> Part of the reason why is this is MahnoorAsghar's last week working on it as part of Outreachy
17:11:43 <iurygregory> ack, I will provide some feedback today o/
17:11:46 <MahnoorAsghar> [https://review.opendev.org/c/openstack/ironic/+/827200/]
17:12:15 <MahnoorAsghar> Small patch coming up, just to add a bit more docstring input  into the extension
17:12:27 <TheJulia> Thanks MahnoorAsghar
17:12:35 <MahnoorAsghar> :)
17:12:55 <MahnoorAsghar> I want to add it for all the API modules, but that would make the patch very hard to review
17:13:21 <TheJulia> understandable
17:13:24 <iurygregory> yeah, no worries! thank you for working on this!
17:13:36 <MahnoorAsghar> ^-^
17:14:01 <MahnoorAsghar> It has been a pleasure working on it :)
17:14:28 <rloo> thank you MahnoorAsghar!
17:14:44 <MahnoorAsghar> :))
17:14:46 <iurygregory> we appreciate your efforts in our community! Tyvm! =)
17:15:44 <MahnoorAsghar> I have had fun doing this, thanks to the community and Julia's help along the way :)
17:16:04 <iurygregory> nice!
17:16:18 <iurygregory> Thanks everyone! sadly it's time to end our meeting =)
17:16:22 <iurygregory> #endmeeting