15:00:17 #startmeeting ironic
15:00:18 Meeting started Mon Dec 7 15:00:17 2020 UTC and is due to finish in 60 minutes. The chair is TheJulia. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:19 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:19 o/
15:00:21 The meeting name has been set to 'ironic'
15:00:21 o/
15:00:24 o/
15:00:25 Good morning everyone!
15:00:26 \o
15:00:27 o/
15:00:33 o/
15:00:38 o/
15:00:46 It is time for our weekly meeting of baremetal cloud irony!
15:00:53 o/
15:01:00 Our agenda can be found on the wiki today.
15:01:03 #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
15:01:11 Looks like only one discussion item, so maybe today will go quick?!?
15:01:23 #topic Announcements / Reminders
15:02:13 o/
15:02:33 #info We are approaching the end of the year. Numerous contributors are expected to be out during the last half of December. Please remember to provide review feedback on sprint-1 and early sprint-2 items if applicable and present. If an existing item won't make it, now would be a good time to signal so.
15:03:02 we have releases due this week, right?
15:03:09 umm... good question!
15:03:11 checking
15:03:16 o/
15:03:22 o/
15:03:34 dtantsur: next week
15:03:42 o/
15:03:56 okay, cool. let's try to do it rather earlier than later
15:04:18 ++
15:04:48 I can help with the releases if needed
15:04:54 I can always put in the actual change for the releases team later in the week, but items need reviews.
15:05:15 I'll be here next week, but only Mon-Wed (same for the week after that)
15:06:18 k
15:06:30 Anyway, shall we proceed?
15:06:46 Let's
15:06:58 yep
15:07:22 #topic Review action items from the previous meeting
15:09:24 We had one action item, which was for me to sync with the stable team regarding skipping backports on branches in the "extended maintenance" phase. Basically they feel it is a stable policy violation, and then discussion kind of devolved because their view switched to the community as a whole and not to a project or a contributor. The resulting consensus seemed to be that we need to do what is reasonable for the situation that exists at that time.
15:09:51 rloo: ^^
15:09:58 Moving on!
15:10:07 #topic Review subteam status reports
15:10:19 #link https://etherpad.opendev.org/p/IronicWhiteBoard
15:10:25 This makes EM quite costly..
15:10:35 thx TheJulia!
15:10:47 i'll update the etherpad...
15:10:48 dtantsur: They don't seem to care and we're not really interested in discussing further, I pointed that out and they just let it drop
15:11:02 So I think our EM path is to kind of just do whatever is reasonable for ourselves.
15:11:17 Which likely needs to include starting to drop extensive testing on older branches
15:11:26 i wonder if other projects are backporting patches to the EM releases
15:11:54 (I think there ought to be some community ... understanding... about this but ...)
15:12:00 kaifeng: nova is doing skip patching, so master -> appropriate current stables and then jumping to the queens EM that they care about
15:12:21 rloo: the problem is the community; they fall back on the community having said in the past that they don't want to do this
15:12:46 so their stance was frozen in time and not considering the realities of those that actually have to get stuff into EM.
15:12:54 Anyway!
15:13:01 #link https://etherpad.opendev.org/p/IronicWhiteBoard
15:13:06 Line 267
15:13:22 but if some projects are doing it, i think that ought to be communicated to the community, vs separate projects doing things their own way. having said that, i don't really want to get involved and don't want this to block/delay whatever.
15:13:47 skip patching doesn't seem upgrade compatible tbh
15:14:27 kaifeng: depends on the patch I guess, if it is not a data model change, then there is low risk as long as someone is going to a newer release version and not just going from, say, Queens -> Rocky
15:14:30 kaifeng: good point. i think though, that the stuff that is backported is to fix bugs, not add features. so it won't prevent upgrades, and if one upgrades enough, they'll get the bug fix.
15:15:14 rloo: I think the same is true, they did not want to be in the middle of such a discussion either.
15:15:33 dtantsur: rpittau: No luck on NVMe secure erase?
15:15:36 TheJulia: to confirm -- we're only talking about backporting patches as per the stable team's rules wrt backporting. we're just not going to backport to 'all' the interim releases.
15:16:05 wrt rules, i mean wrt what qualifies to be backported.
15:16:06 TheJulia: nothing for now
15:16:12 rloo: let's continue this in open discussion
15:16:17 ok
15:16:41 rpittau: Could you put a note saying that it is not going to make sprint 1 and will likely make sprint 2?
15:16:42 TheJulia: nothing from me, and I think janders will only get to it after the holidays
15:16:48 k
15:17:00 kaifeng: I see you have an approved spec \o/
15:17:03 TheJulia: will do
15:17:09 w/r/t node history
15:17:30 TheJulia: yeah, thanks everyone for the review!
15:18:04 ajya, bdodd: reminder: redfish raid is presently listed for sprint 2. I see there is some discussion on the change. I'm curious if it can be updated at this point?
15:18:29 iurygregory: same status on oslo.privsep?
15:18:47 TheJulia, going to update things this week
15:18:49 I'm working on updates to the RAID patches
15:18:50 I was on PTO
15:18:54 \o/
15:18:58 Okay, thanks
15:19:54 zer0c00l: Looks like you have a little discussion on the anaconda patch, I can drop my +2 if you want for now. You'll likely want to consider revising maybe.
15:20:34 Everything looks good to me, are we good to proceed to priorities for the week?
15:20:35 ++ for revising
15:21:40 vote changed
15:21:42 or... changing
15:21:46 I certainly want my reviews taken into account before it merges :D
15:21:51 * TheJulia needs a gerrit stop-watch
15:22:33 #topic Deciding on priorities for the coming week
15:22:46 #link https://etherpad.opendev.org/p/IronicWhiteBoard
15:22:46 (I added a note wrt anaconda revision desired)
15:23:19 Starting at line 120
15:23:32 Looks like we got some stuff merged last week, so I'll clean that up. I added items around line 198
15:25:22 * arne_wiebalck cannot join the meeting today, just briefly: SIG meeting tmrw with rpioso on Redfish interop profiles, NTR for the SIG otherwise, bye o/
15:26:26 Thanks arne_wiebalck
15:26:32 Any objections to the items to add?
15:27:34 lgtm
15:27:59 Okay, I'll do the shuffling after the meeting
15:28:06 Onward!
15:28:22 * TheJulia hears crickets
15:28:27 #topic Discussion
15:28:54 One quick discussion item: should we have a final meeting for the month next week?
15:29:13 I'll likely be here for the next 2 Mondays
15:29:24 won't be around on the 4th of January though
15:29:45 I was about to suggest we skip the 21st, 28th, and January 4th
15:29:54 sounds reasonable to me
15:30:13 Which is a long gap, but I think a number of us need the heads-down time, be it code or in our pillows
15:30:46 ++ so dec 14 is the last meeting of the year!
15:31:04 Seems that way if there are no objections :)
15:31:07 prepare your santa hats \o/
15:31:14 Oh wait, are we doing that?!?
15:31:19 eek!
15:31:21 :)
15:31:29 we also have two more SPUCs planned
15:31:32 ++
15:31:35 SPUCs are good
15:31:36 * dtantsur had to miss last Friday, sorry for that
15:31:36 I don't have santa hats =(
15:31:50 I assume santa hats should now be on SPUCs
15:31:50 I have a wizard goofy hat, is that ok?
15:31:55 maybe ;)
15:32:01 rpittau, yes!
15:32:21 Anyway, I'll send out the "end of year email" and we'll switch to etherpad updates as we have for the last couple of end-of-year holiday seasons
15:32:33 If there are any questions or concerns, please feel free to reach out to me.
15:32:47 Anyway! Onward to the Baremetal SIG
15:32:50 #topic Baremetal SIG
15:33:44 #info Baremetal SIG session covering Redfish Interop profiles - Tomorrow, December 8th, at 2PM UTC
15:33:55 #link https://etherpad.opendev.org/p/bare-metal-sig
15:34:19 arne_wiebalck: one reminder, record the talk portion so we can build some content for youtube :)
15:34:32 TheJulia: Ack
15:34:36 Since we have no RFEs on the agenda, we'll go directly to Open Discussion
15:34:45 #topic Open Discussion
15:35:31 rloo: where were we?
15:35:55 TheJulia: for myself, I think what I'd like is a clarification of this process, so everyone (in ironicland) knows.
15:36:00 I think we just try to do the best we can, consistent with stable policy, and just start killing EM branch testing that doesn't make sense
15:36:08 if i recall, i was fine with what we discussed at ptg
15:36:25 what EM branch testing are we doing now?
15:36:26 I can write a doc update then
15:36:33 rloo: basically trying to keep everything alive
15:36:35 which is insane
15:36:46 right. i agree. i thought we had already killed some branch testing.
15:36:56 Some, but not tons
15:37:25 We can likely dial it back by 50%
15:37:31 Does OpenStack's branch model fit this concept? e.g. can we put "r" in unmaintained while "q" is in EM?
15:37:39 there were 2 things? 1. kill some branch testing. 2. allow backports to skip some branches, e.g. if someone wants to backport to rocky but not stein. (we're talking only branches in EM, can't skip branches in M)
15:38:24 JayF: EM is up to anyone wanting to put patches forth to burn the time to get the patch in, but forcing someone to go through a ton of EM branches and keeping them all perfectly alive with full testing is a huge time sink
15:38:26 and then maybe 3. is it ok to backport to an EM branch where there is no upstream testing.
15:38:57 or limited testing, like we know some scenarios are more likely to fail than others
15:39:03 TheJulia: I don't disagree in concept, I'm just wondering if there's a clear way to communicate that to users. OpenStack not having an "EM ... but more maintained than the other EM" branch type is unfortunate
15:39:05 At least in CI
15:39:27 it won't help to kill "some" branch testing, our jobs tend to break in large groups :)
15:39:37 JayF: unfortunately it is up to those that care about an EM branch to carry that load
15:40:21 dtantsur: what do you mean by that? keep all branch testing then? or kill all of them? :)
15:40:28 dtantsur: well, huge breakages are generally easy to resolve once we identify the issue. The problem is noise and overall load crushing the ability for the job to actually pass
15:40:28 I'm just thinking, we're going through a process right now to determine which release to upgrade to. As a contributor, I know to look for either Queens or a "M" release or newer. I don't think an external user would easily be able to determine that Queens is a better choice than Rocky, for instance
15:40:35 I mean, it's not about removing one job, it's about removing most of them
15:40:48 fwiw, stein is EM now.
15:40:54 and I'll personally have reservations about +2'ing something that only passed unit tests..
15:41:16 Yeah, I'd prefer that some tests remain functional
15:41:48 I still want to see a model similar to the Linux kernel and some other projects:
15:41:50 just, not 10+ integration jobs
15:42:04 we maintain any branch for a while. then whoever cares can provide support, everything else is killed off.
15:42:19 dtantsur: that is kind of where we're headed I think
15:42:23 dtantsur: are you including branches in Maint, or only EM?
15:42:25 to be totally honest
15:42:37 rloo: Maint is the "we maintain any branch" part. EM is the "whoever cares" part.
15:42:39 Verification of a change to openstack/bifrost failed: Support testing secure boot https://review.opendev.org/c/openstack/bifrost/+/760791
15:42:52 (M=maintenance, EM=extended maintenance)
15:43:16 EM's naming is not the best, but I remember when that was a huge fight
15:43:20 dtantsur: does upstream community/stable say anything wrt CI for M branches?
15:43:23 dtantsur: TheJulia: ++ the linux kernel did this a couple of years back to solve this exact problem
15:43:39 rloo: I don't recall any firm position. I think it's up to us as a project.
15:43:52 shall we take EM first, that seems easiest?
15:43:53 rloo: aiui, it is up to the projects.
15:44:05 rloo: I think we're only talking about EM now
15:44:27 the main maintenance phase is not long, and the stable policy requires us to provide a high level of maintenance for them
15:44:38 I'd like to point out some of the major projects also only have 1-3 integrated scenario jobs on older branches. We've got many more
15:44:55 one of the reasons I started removing the iscsi deploy :)
15:45:00 ++
15:45:11 but yeah. our problem is low concurrency of our tests
15:45:18 and a relatively large test time
15:45:20 And limited CI resources
15:45:28 8 GB of ram, no ability to touch swap
15:45:45 if Nova can boot a VM with cirros in 30 seconds and 256M of RAM, they're in a much better position
15:45:47 i'm totally good with 0 CI for EM (except unit tests)
15:45:57 to quote an infra person, "Swap is the mind or CI job killer"
15:46:04 rloo: would you +2 any patch under these conditions?
15:46:16 i would if we agree to that 'policy'. it is a backport.
15:46:44 I think patches exist where I'd be hesitant to +2 without any integration tests
15:46:55 clean backports of minor fixes? sure, let's forgo the integration tests
15:46:55 same for me
15:46:58 same
15:47:07 hmm. what if said person tested downstream?
15:47:08 but I've seen some backports that I'd want to see a deploy work for
15:47:25 rloo: there are people who I'd trust in this case, there are people (most of them) who I'd not
15:47:28 Then we go down a path of why isn't "trust me, I tested it" good enough everywhere :|
15:47:45 i just don't know how reasonable it is to expect upstream to provide CI, given the state of things upstream.
15:47:49 right
15:48:20 bifrost jobs can potentially have a longer lifetime now that we default to using virtual environments
15:48:24 so if we don't +2, then an alternative is for the person/company to patch downstream. which is also fine with me (that's what we do at verizon media)
15:48:28 because of their simplicity
15:49:24 Pure downstream patching adds a lot of cost as well, but it all becomes what makes the most sense in the situation
15:49:27 honestly, i haven't been contributing upstream. for the folks that have -- what is your preference?
15:49:56 we have a limited number of people working upstream. i think they ought to have a bigger vote wrt where they want to put their time/energy.
15:49:56 rloo: TBH I think our input, contributor or not, is interesting here because we're soon-to-be consumers, likely, of one of these EM branches
15:50:17 It seems like the action is to kind of write down the overall feeling/perception, and be able to point to that whilst also possibly beginning to nuke some but not all integration jobs *where it makes sense*
15:50:29 right, we aren't the only ones (I hope) that will be consumers of those EM branches. having said that though, things aren't 'free'.
15:50:30 JayF: would you become a "patron" of an EM branch?
15:50:47 I don't care so much what we do w/r/t deciding support. Putting resources where they matter (e.g. targeting queens and not other EMs) makes sense; I'm just concerned about upstream users (vs rh users) realizing that Queens is more maintained than other EMs
15:51:10 How does the kernel solve this messaging problem?
15:51:12 * TheJulia likes this idea dtantsur just raised
15:51:16 dtantsur: ^ is my concern, that we communicate it well. I think OpenStack's branch system is almost designed to make this sorta "branch skipping" awkward and hard to communicate
15:51:39 dtantsur: the kernel specifically calls out longterm releases on kernel.org
15:51:45 dtantsur: so you get a menu of maintained things to pick from
15:51:47 Honestly, the lack of mutual understanding around EM branches is already problematic
15:51:53 so... we introduce a new 'class' of branches. M -> EM, and EM can include .. sponsored M?
15:52:14 Can we maybe pull our EM branches from releases.openstack.org (if they're anyhow mentioned there) and only provide this information in our docs?
15:52:36 I'm not convinced this issue is solved by adding more concepts
15:52:47 can we pull EM branches *except queens* from releases?
15:52:57 The whole idea around EM is that it can be "sponsored" or have a "patron" if the resources are supplied by the interested party/org. The thing is they will never ever be released again, so patches are merged at the will of the project or the EM maintainers who care
15:52:59 I think openstack as a whole should do a better job communicating that the exact quality of EM branches is up to a project
15:53:10 dtantsur: ++
15:53:55 This may be worth taking to the TC, in order for them to drive that visibility
15:54:14 are you still on TC? ;)
15:54:16 since, in reality, what is needed is for that to be documented OR for the EM branch model to be re-visited
15:54:33 I think a fair summary is: 1) Nobody has objections to the idea of targeting your resources on a single EM branch and not spreading the love. 2) We need a good way for operators to know that Queens is more maintained than Rocky (and soon Stein), but OpenStack's release system doesn't make that easy
15:54:37 No, Board until January when the new board is seated, that is if I'm not re-elected.
15:55:03 JayF: that sounds right
15:55:27 I'd suggest taking a summary of what you wanna do, call out that sticking point, and hit the ML
15:56:04 This is the sorta thing we should have a longer form discussion about ... but we can have that discussion while effectively implementing the new plan
15:56:59 Agreed
15:57:34 ++
15:58:35 Okay, well, we're almost at time. Thanks everyone
15:58:45 a heads-up
15:58:59 CentOS 8.3 is out, watch out for related failures
15:59:06 joy
15:59:18 we have a minor breakage in openshift land
15:59:30 #info Heads up - CentOS 8.3 has been released, keep an eye out for failures.
15:59:39 If anyplace explodes it will be metalsmith
15:59:45 or bifrost. or IPA-builder.
15:59:47 :)
15:59:51 Yup
15:59:59 Anyway, thanks everyone! Have a wonderful week!
16:00:16 thank you!
16:00:31 \o
16:01:26 #endmeeting