Wednesday, 2022-11-02

opendevreviewMerged openstack/ironic-prometheus-exporter stable/ussuri: CI: Various fixes  https://review.opendev.org/c/openstack/ironic-prometheus-exporter/+/86018302:31
*** mnasiadka_ is now known as mnasiadka06:29
arozmanHi Ironic!07:45
rpittaugood morning ironic! o/08:02
dtantsurTheJulia: hi! you seem to be familiar with IPv6 PXE, do you know why we hardcoded 343 (Intel) here? https://github.com/metal3-io/ironic-image/blob/d79c07796a8ef43ad25fe741cab2ee20bd4e7f5b/ironic-config/dnsmasq.conf.j2#L4010:06
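
For context on the 343 question: in DHCPv6 the vendor-class option carries an IANA enterprise number, and 343 is the number registered to Intel, which most PXE client firmware advertises. A minimal dnsmasq sketch of that matching mechanism might look like the following; the tag name, address and boot file are illustrative, not the actual ironic-image template:

    # Tag DHCPv6 requests whose vendor-class uses Intel's enterprise number (343)
    # and advertises a PXE client, then hand those clients an IPv6 boot file URL.
    dhcp-vendorclass=set:pxe6,enterprise:343,PXEClient
    dhcp-option=tag:pxe6,option6:bootfile-url,tftp://[2001:db8::1]/snponly.efi
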
dtantsurJayF, stevebaker[m], Bifrost is interesting, it tends to rot quite quickly because of all the distro changes..10:28
dtantsurW has CentOS 8 in its jobs, which is not functioning any more10:29
dtantsurkubajj: maybe try removing UUID first and get another run? I have a feeling the tests may be misreporting the actual failure..11:34
kubajjdtantsur: Ok, will do11:35
kubajjdtantsur: It seems that the error remains the same (except now it doesn't complain about index uuid).12:15
dtantsurkubajj: oookay, let's upload the new patch though12:19
opendevreviewJakub Jelinek proposed openstack/ironic master: WIP: Implements node inventory: database  https://review.opendev.org/c/openstack/ironic/+/86256912:20
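
For readers following the node-inventory work above, a minimal, self-contained SQLAlchemy sketch of what storing inventory in its own table can look like. The table and column names are assumptions for illustration only, not the schema from the patch under review:

    # Illustrative sketch only; not the model proposed in the patch above.
    from sqlalchemy import Column, DateTime, Integer, Text
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class NodeInventory(Base):
        """Hardware inventory collected for a node, stored as JSON text."""
        __tablename__ = 'node_inventory'

        id = Column(Integer, primary_key=True)
        # Refer to the owning node by its integer primary key; whether to also
        # store and index the node UUID here is the kind of detail the failing
        # test run above was flagging.
        node_id = Column(Integer, nullable=True)
        inventory_data = Column(Text, nullable=True)
        created_at = Column(DateTime, nullable=True)
        updated_at = Column(DateTime, nullable=True)
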
viks__Hi, with normal VMs I can create a snapshot of the instance, and in case of any backend node failure I can recreate the instance from the snapshot.. Is it somehow possible to achieve the same with a baremetal instance?13:18
opendevreviewHarald Jensås proposed openstack/ironic master: [WIP] Use ORM relationship for ports node_uuid  https://review.opendev.org/c/openstack/ironic/+/86293313:19
TheJuliagood morning13:23
TheJuliadtantsur: no idea...13:23
TheJuliaw/r/t enterprise 34313:24
TheJuliaviks__: ... So from an automation standpoint, Ironic does not support snapshots13:25
TheJuliaviks__: but you can image disks; it just requires knowledge of how to do it and how to package the contents. The downside is sometimes the size of the resulting disks13:25
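
To illustrate the manual disk-imaging approach TheJulia is alluding to (there is no Ironic-native snapshot feature), one rough workflow, run from a rescue or ramdisk environment with access to the machine's disk, could look like the following; paths and names are examples, and as noted the resulting image can be large:

    # Capture and compress the disk into a qcow2 image (can still be very large).
    qemu-img convert -O qcow2 -c /dev/sda /mnt/backup/baremetal-backup.qcow2
    # Upload it to Glance so it can be used to redeploy a node later.
    openstack image create --disk-format qcow2 --container-format bare \
        --file /mnt/backup/baremetal-backup.qcow2 baremetal-backup
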
viks__TheJulia: ok.. thanks.. one more question.. suppose I have my baremetal instances in region A and region B.. is there a way I can migrate a baremetal instance node from region A to region B and start accessing it from B? 13:30
TheJuliaviks__: not really. I mean, you could say "hey, put that server on a truck and move it to the other data center", but there is no automation to capture that system state and move it for you13:32
viks__TheJulia: Ok thanks..13:32
dtantsurTheJulia: good morning. okay, I wonder if it was just copied from somewhere..13:35
TheJuliaI've never seen it done quite like that before, but enterprise 343 may be referring more to Intel architecture?13:38
TheJuliaI suspect the question is what purpose the field it is matching serves13:38
JayFdtantsur: https://zuul.opendev.org/t/openstack/config-errors majority of them left for Ironic are because old-old-old bugfix branches have queue:Ironic13:54
JayFexcept that link is dead now (?)13:55
JayFit's the right link; it's just busted right now13:55
JayFfrom https://etherpad.opendev.org/p/zuul-config-error-openstack13:55
dtantsurack, will check when I can14:12
JayFdtantsur: apparently the JS renderer is broke; you have to hit the bell in the top corner, but the info is there lol14:37
dtantsurJS amiright?14:37
JayFonly thing worse is actual-java14:37
dtantsurtrue14:38
dtantsurso, everything I can find is about queue, which is easy to fix if it makes infra people upset14:38
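
For context, the queue errors being discussed appear to come from older branches still declaring the shared gate queue inside a pipeline section, which newer Zuul deprecates in favour of a project-level queue attribute. Roughly, and with illustrative job names rather than the exact Ironic config:

    # Deprecated form still present on old bugfix branches:
    - project:
        gate:
          queue: ironic
          jobs:
            - openstack-tox-py38

    # Current form: declare the queue once at the project level.
    - project:
        queue: ironic
        gate:
          jobs:
            - openstack-tox-py38
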
JayFI think, bluntly, that's a little naive given I've been personally fighting, along with TheJulia at times, to get these patches through *actually maintained CI* 14:42
JayFI also don't think it's OK or good for us to have a dozen open dead branches when, in all other openstack projects, it's a baked-in assumption that a branch existing means it's maintained14:42
JayFI don't wanna retire things that you all are using (and I saw on the list, and I responded), but I also am not onboard to just leave these dangling open forever and ever with no end14:43
JayFI find it hard to believe that anyone is using an Ironic bugfix branch from 5 major version ago, for instance, lol14:44
dtantsuryeah, and I don't think we will either14:48
dtantsurrpittau: wdyt about bugfix branch lifetimes?14:48
JayFLike I said on list, I'm even OK with explicitly saying "this bugfix branch represents code shipped in $product and will be maintained while $product is maintained" 14:49
JayFlike, that's maybe a little more on the nose than we'd typically do14:49
rpittaummmm as far as I'm concerned I really care about 4 of them at the moment :)14:50
JayFI just want there to be a documented upstream method to our madness, which is at least somewhat controlled by your downstream usage14:50
rpittauI think what we have in the etherpad now is pretty accurate14:50
TheJuliawhich etherpad?14:50
rpittausorry, the whiteboard14:51
rpittauL4914:51
rpittauand below14:51
JayFhttps://etherpad.opendev.org/p/IronicBugfixBranchCleanup is based on that14:52
JayFso I think my updated version, with the "needs to have branch retired" list, is accurate14:52
rpittaulet me double-check that14:52
rpittauJayF: yep, that's perfect IMHO14:53
dtantsurbtw the notion of vendor-sponsored branches is not new, at least the kernel does it14:53
rpittauspoiler alert: next april we should be able to retire 18.1, 8.1 and 10.714:54
dtantsurI do agree that we should probably put a cap on how long a branch is supported even in this mode. rpittau thoughts?14:54
dtantsurrpittau: or to phrase it differently: can we declare how long we want to maintain a branch upstream even if the downstream release is in some sort of an extended support mode?14:54
JayFI mean, I'm completely game for all that; the pieces I think are missing are: 1) being more explicit about why and what branches are supported and 2) an upstream understanding of life-cycle for these (e.g. RH sponsors these branches thru 4/2023 is exactly what I'd like to be able to say)14:54
dtantsurI think we should take https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, adjust it to match the reality and put to the docs.14:55
rpittaudtantsur, JayF, from what I know the standard timeframe of downstream support is roughly 18 months14:55
dtantsurincluding, yeah, vendor commitment to maintain certain branches longer than they're supposed to14:55
JayFdtantsur: that's what I'm working towards :D 14:55
JayFdtantsur: I made the thread partially to tease out the preexisting consensus into something explicit so we could document it14:56
rpittaudoes 18 months sound reasonable ?14:56
JayFLet me put it this way: if you'd be doing the work to maintain it downstream if it wasn't upstream; I'd rather it be upstream14:56
JayFso whatever timeline that requires is reasonable, as long as the CI stays alive and we document it :)14:56
rpittauJayF: the longest I saw was 24 months downstream and it was an exception14:56
dtantsur18 months is how long "normal" stable branches are maintained, right?14:57
rpittaudtantsur: correct14:57
dtantsurrpittau: I think we can ignore our long-term support release for the sake of upstream sanity14:57
dtantsur(we WILL need to figure out how to backport things to them downstream, but that shouldn't be the upstream headache IMO)14:57
rpittaummmm ok, so full support is 6 months14:57
rpittauI lied14:58
rpittau8 months14:58
rpittausorry14:58
dtantsurLIAR!14:58
rpittau:P14:58
dtantsursorry, JayF, we're still trying to figure out how long we support our product :D14:58
* dtantsur reads https://access.redhat.com/support/policy/updates/openshift14:58
JayFAs I've offered many times, I'm willing to be the bad cop you blame all your problems on if it helps downstream lol14:58
rpittauthe support likes changing :)14:58
JayF"this new PTL is a real hardass, he's demanding we [checks notes] know how long we support products"14:59
JayFlol14:59
dtantsur:D14:59
dtantsurokay, so full+maintenance support is 18 months, I suggest we definitely do not go beyond that upstream14:59
rpittaujokes aside, standard support is 8 months, extended maintenance is 18 months, if I can count correctly14:59
JayF18 months would be like Wallaby, which is already feeble15:00
rpittauyeah15:00
rpittauI don't mind keeping track and retiring old bugfix branches when it's the time15:00
JayFMy thing is, I want it somewhat written down, not just personal responsibility15:01
dtantsurI agree with Jay, this makes us better prepared15:01
JayFbecause if upstream has a maintained branch, we can't leave the details of how long that branch is supported as downstream-only knowledge15:01
rpittaulet's write it down then, I can take care of that, just need to find the right place15:01
JayFI'm working under the (likely false but good to pretend it's true) assumption that the bugfix branches hold value upstream outside of making RH's life easier15:01
dtantsurrpittau: my intuition is to say "12 months" for OCP-supported branches (and we'll deal with the remaining 6 months + EUS ourselves)15:02
JayFthat assumption makes it much easier to support bugfix branches with upstream hats on :D15:02
dtantsurIIRC community support for bugfix branches is 6 months per my spec15:02
JayFWe don't communicate that through actual branch EOL, like we would for stable branches15:02
rpittaudtantsur: yes, it was 6 months, we cheated15:02
dtantsurwe did indeed15:03
JayFand if we did, we'd probably break your downstream given what I'm hearing now lol15:03
dtantsurI don't think we actually EOL stable branches either15:03
JayFwhich is why I'm asking questions instead of just going ka-pow because I don't think that doc aligns with reality15:03
dtantsurat least at some point we just let them stay in EM forever15:03
JayFdtantsur: we just EOL'd Q/R/S15:03
dtantsurnice!15:03
JayFdtantsur: and stevebaker[m] and TheJulia are talking about doing T/U/V as well15:03
JayFbecause CI is so busted on them15:04
dtantsurrpittau: V going away will mean we'll need to figure out 4.7, so this question is timely15:04
rpittau4.7 is gone15:04
JayFI am onboard for  keeping V a little longer too if we can fix CI15:04
dtantsurrpittau: oh my sweet summer child ;)15:04
rpittau:D15:04
JayFfor T/U/V I sorta wanna take an "as needed by CI" approach15:04
dtantsurI'll explain it to you downstream15:04
TheJuliadtantsur: I was thinking the exact same thing15:04
JayFe.g. no desire to EM->EOL Victoria unless CI is demanding it15:05
JayFbut Train seems already lost, and I think Ussuri is getting close to that point w/CI15:05
TheJuliaI *think* we got most of the ussuri stuff happy, but that might have changed in the last week or two?15:06
JayFnot all of it, but if it's doable we should do it \o/15:06
JayFTheJulia: https://review.opendev.org/c/openstack/ironic-lib/+/860176 you're right, this is the only U patch still needed to fix CI15:07
JayFTheJulia: we did remove a job on ironic-prom-exporter to make it land in that one's ussuri tho15:07
TheJuliaohh, that might just work on recheck15:07
* JayF kicks it15:08
TheJuliaOh, just did as well15:09
* TheJulia shrugs15:09
JayFhttps://review.opendev.org/c/openstack/ironic-lib/+/860175 is all that's left for victoria queue param stuff too, I'm going to kick it as well15:11
* TheJulia raises an eyebrow15:13
TheJuliaI guess because centos8?15:13
JayFI didn't kick that one15:14
JayFit looks actual-broken15:14
TheJuliayeah15:18
TheJuliasteve updated it to depend on the requirements change, but no dice it looks like15:18
TheJuliaI guess on the plus side, how often have we *actually* needed to backport ironic-lib fixes15:19
TheJulia... in recent history15:19
JayFYep. This is one of those cases where I'm probably OK just removing the integration testing15:24
JayFI'll put it this way: I think for things the age of V, if we can keep it going with just unit tests and then be judicious about what we backport we can get more longevity15:24
TheJuliaagreed15:25
JayFI'm going to update that patch to do just that, then15:26
TheJuliaok15:26
opendevreviewJay Faulkner proposed openstack/ironic-lib stable/victoria: CI: Various fixes  https://review.opendev.org/c/openstack/ironic-lib/+/86017515:28
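
A rough sketch of what "unit tests only" can look like in a branch's .zuul.yaml, in the spirit of the patch above; the template name is the standard per-cycle one and the comment marks where integration jobs would be dropped (this is illustrative, not the actual change):

    # Illustrative only: keep the standard unit-test/pep8 template for the
    # branch and drop the devstack/tempest integration jobs that no longer
    # run reliably on the aging distro images.
    - project:
        templates:
          - openstack-python3-victoria-jobs
        # integration jobs that used to be listed under check/gate here
        # have been removed for this old branch
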
TheJuliaJayF: remember mentioning hardware assisted software raid?15:43
TheJuliaduring the PTG I think?15:43
JayFIntel RSTe!15:44
JayFI knew it well15:44
TheJuliawell, VROC now15:44
JayFI changed that o to an e because any knowledge I had of it was in a rackspace wiki15:44
JayFlol15:44
TheJuliaheh15:44
TheJuliaAnyway, as of this morning, I have a customer asking for it15:44
TheJulialolz15:44
JayFso if I say it, it comes15:45
JayFI always have wondered how our bare metal provisioning system would hold up with a million dollars on top of it15:45
JayF<.< >.>15:45
opendevreviewMerged openstack/bifrost master: Fix initial python/venv dependencies  https://review.opendev.org/c/openstack/bifrost/+/86153415:45
opendevreviewMerged openstack/bifrost master: Switching netstat to ss in report  https://review.opendev.org/c/openstack/bifrost/+/86154215:45
opendevreviewMerged openstack/bifrost master: Remove remaining traces of Suse  https://review.opendev.org/c/openstack/bifrost/+/86154116:06
rpittaubye o/16:39
opendevreviewHarald Jensås proposed openstack/ironic master: Add port/portgroup list conductor groups filter  https://review.opendev.org/c/openstack/ironic/+/86229217:55
opendevreviewJulia Kreger proposed openstack/ironic-specs master: Add a shard key  https://review.opendev.org/c/openstack/ironic-specs/+/86180318:35
stevebaker[m]good morning19:43
opendevreviewEbbex proposed openstack/bifrost master: Enable epel repository for more than CentOS  https://review.opendev.org/c/openstack/bifrost/+/86108223:05
opendevreviewEbbex proposed openstack/bifrost master: Move kpartx to dib_host_required_packages  https://review.opendev.org/c/openstack/bifrost/+/86153923:23
opendevreviewEbbex proposed openstack/bifrost master: Use a more traditional ansible approach to include_vars  https://review.opendev.org/c/openstack/bifrost/+/85580623:29
opendevreviewEbbex proposed openstack/bifrost master: Refactor use of include_vars  https://review.opendev.org/c/openstack/bifrost/+/85580723:29
