Tuesday, 2021-04-27

*** sshnaidm_ has joined #openstack-ironic00:00
*** sshnaidm|afk has quit IRC00:01
*** sshnaidm_ has quit IRC00:07
*** sshnaidm_ has joined #openstack-ironic00:08
*** zzzeek has quit IRC00:38
*** zzzeek has joined #openstack-ironic00:39
*** gyee has quit IRC01:22
*** ricolin has quit IRC01:37
*** mrda has quit IRC02:14
*** paras333 has joined #openstack-ironic02:26
*** rloo has quit IRC02:29
*** paras333 has quit IRC02:31
*** mrda has joined #openstack-ironic02:33
*** rh-jelabarre has quit IRC02:34
*** rcernin has quit IRC02:39
*** rcernin has joined #openstack-ironic02:40
*** k_mouza has joined #openstack-ironic02:59
*** k_mouza has quit IRC03:03
*** k_mouza has joined #openstack-ironic04:02
*** k_mouza has quit IRC04:07
*** tzumainn has quit IRC05:22
*** tsturm has quit IRC06:35
<arne_wiebalck> Good morning, ironic!  [06:42]
*** vmud213 has joined #openstack-ironic06:43
<iurygregory> good morning janders arne_wiebalck and Ironic o/  [06:54]
<janders> hey iurygregory  [06:56]
<janders> you made it - well done!  [06:56]
<janders> (and good morning arne_wiebalck :) )  [06:57]
<arne_wiebalck> hey iurygregory and janders o/  [06:57]
<iurygregory> janders, yeah!  [06:58]
<iurygregory> jetlag causing problems but it's life :D  [06:58]
<iurygregory> time to grab more coffee :D  [06:59]
<vmud213> Good morning Ironic  [07:14]
<vmud213> iurygregory o/  [07:14]
<iurygregory> vmud213, morning o/  [07:15]
*** vmud213 has quit IRC07:18
*** vmud213 has joined #openstack-ironic07:20
*** jobewan has quit IRC07:33
*** tosky has joined #openstack-ironic07:41
*** ricolin has joined #openstack-ironic07:43
*** rpittau|afk is now known as rpittau07:45
<rpittau> good morning ironic! o/  [07:45]
*** ociuhandu has joined #openstack-ironic07:48
<arne_wiebalck> hey rpittau and vmud213 o/  [07:49]
<rpittau> hey arne_wiebalck :)  [07:49]
<vmud213> arne_wiebalck o/  [07:50]
*** dougsz has joined #openstack-ironic07:50
*** ociuhandu has quit IRC07:58
*** lucasagomes has joined #openstack-ironic08:04
<iurygregory> rpittau, o/  [08:07]
<rpittau> hey iurygregory :)  [08:07]
*** rcernin has quit IRC08:08
*** jobewan has joined #openstack-ironic08:15
*** ociuhandu has joined #openstack-ironic08:17
<openstackgerrit> MaAoyu proposed openstack/tenks master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/tenks/+/788206  [08:27]
*** dtantsur|afk is now known as dtantsur08:30
<dtantsur> morning ironic  [08:30]
<iurygregory> morning dtantsur o/  [08:31]
<janders> hey rpittau dtantsur o/  [08:35]
<rpittau> hey janders :)  [08:35]
*** ociuhandu has quit IRC08:37
*** ociuhandu has joined #openstack-ironic08:41
*** uzumaki has quit IRC08:43
*** rcernin has joined #openstack-ironic08:45
<openstackgerrit> Dmitry Tantsur proposed openstack/ironic master: Docs: dhcp-less works with Glean 1.19.0  https://review.opendev.org/c/openstack/ironic/+/788210  [08:47]
<dtantsur> mraineri: hi! do you think we could do something about https://storyboard.openstack.org/#!/story/2008852 in ironic?  [08:49]
<openstackgerrit> Arne Wiebalck proposed openstack/ironic-python-agent master: [WIP] Burn-in: Add CPU step  https://review.opendev.org/c/openstack/ironic-python-agent/+/788211  [08:49]
<iurygregory> and the virtual media saga continues :D  [08:51]
<arne_wiebalck> dtantsur: ^^ proposal to add CPU burn-in via a clean step, early feedback welcome (before I add more steps ;)  [08:52]
<dtantsur> neat!  [08:52]
<arne_wiebalck> dtantsur: it is very close to what we do outside of Ironic atm with all our nodes  [08:53]
<arne_wiebalck> dtantsur: once we have consensus that this schema makes sense, I'll add mem, then disk. Network will be more complicated.  [08:53]
<dtantsur> looks good at first glance  [08:53]
<arne_wiebalck> dtantsur: thanks, I'll move fwd then :)  [08:54]
<dtantsur> arne_wiebalck: maybe some code refactoring: move the actual burn-in code to its own module  [08:54]
<dtantsur> and call it from hardware.py  [08:54]
<arne_wiebalck> dtantsur: ok!  [08:54]
<dtantsur> just to stop hardware.py from growing further :)  [08:54]
<rpittau> ideally we would have that in a different module  [08:54]
<arne_wiebalck> heh  [08:54]
<arne_wiebalck> dtantsur: rpittau: will do  [08:55]
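A rough sketch of the module split being discussed, with the burn-in logic in its own file that hardware.py can call into. This is illustrative only, not the patch under review; the stress-ng arguments, the driver_info option name and the 24h default are assumptions.

    # ironic_python_agent/burnin.py -- hypothetical layout, not the patch under review
    from oslo_concurrency import processutils
    from oslo_log import log

    from ironic_python_agent import errors
    from ironic_python_agent import utils

    LOG = log.getLogger(__name__)


    def stress_cpu(node):
        """Burn in the CPU with stress-ng for a configurable amount of time."""
        info = node.get('driver_info', {})
        # Option name and the 24h default are assumptions for illustration only.
        timeout = info.get('agent_burnin_cpu_timeout', 86400)

        args = ('stress-ng', '--cpu', '0', '--timeout', str(timeout),
                '--metrics-brief')
        LOG.debug('Burn-in CPU command: %s', args)

        try:
            _out, err = utils.execute(*args)
            LOG.info('stress-ng CPU burn-in finished: %s', err)
        except (processutils.ProcessExecutionError, OSError) as e:
            error_msg = 'stress-ng CPU burn-in failed: %s' % e
            LOG.error(error_msg)
            raise errors.CommandExecutionError(error_msg)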
<timeu> the one ticket about the virtual media slots was from me ;-) I also asked the Lenovo people in their github issue tracker if they have some kind of workaround (disable those slots).  [09:05]
<openstackgerrit> Mark Goddard proposed openstack/bifrost master: Install DIB dependencies in bifrost-ironic-install role  https://review.opendev.org/c/openstack/bifrost/+/756005  [09:05]
<timeu> and if you need me to test anything on the actual hardware, let me know.  [09:05]
<openstackgerrit> Merged openstack/tenks master: Fix ansible lint  https://review.opendev.org/c/openstack/tenks/+/782682  [09:05]
<openstackgerrit> MaAoyu proposed openstack/sushy master: setup.cfg: Replace dashes with underscores  https://review.opendev.org/c/openstack/sushy/+/788224  [09:06]
<iurygregory> I don't see python-requires and description-file in the list of opts to give a warning O.o  https://github.com/pypa/setuptools/commit/a2e9ae4cb ^ wondering if we want to change or not  [09:11]
*** yoctozepto4 has joined #openstack-ironic09:25
*** yoctozepto has quit IRC09:26
*** yoctozepto4 is now known as yoctozepto09:26
<vmud213> dtantsur: i want to discuss https://review.opendev.org/c/openstack/ironic/+/786167  [09:35]
<vmud213> Would u mind sparing a minute?  [09:35]
<vmud213> this is in regards to the ilo-uefi-https boot interface  [09:36]
*** rcernin has quit IRC09:46
<rpittau> iurygregory: the reference in the patch is probably wrong, all the options with "dash" are now included https://github.com/pypa/setuptools/blob/main/setuptools/dist.py#L656  [09:53]
<iurygregory> rpittau, gotcha!  [09:54]
*** sshnaidm_ is now known as sshnaidm10:00
<dtantsur> vmud213: could we keep it in gerrit? I don't have much time until Friday.  [10:02]
*** rcernin has joined #openstack-ironic10:04
<vmud213> dtantsur: I didn't get what u meant by keeping it in gerrit. Nevertheless I can wait till Friday.  [10:04]
<vmud213> I have seen your comment.  [10:04]
<vmud213> Apparently I was using an old IPA  [10:04]
<vmud213> will try to build one and retest it.  [10:05]
*** rcernin has quit IRC10:18
*** ociuhandu has quit IRC10:23
*** ociuhandu has joined #openstack-ironic10:24
*** k_mouza has joined #openstack-ironic10:27
*** ociuhandu has quit IRC10:28
*** dsneddon has quit IRC10:50
*** rcernin has joined #openstack-ironic10:50
*** ociuhandu has joined #openstack-ironic10:54
*** ociuhandu has quit IRC11:01
*** dtantsur is now known as dtantsur|brb11:03
*** ameya49 has joined #openstack-ironic11:05
<ameya49> Greetings dtantsur|brb! While testing the secure boot management patch, I came across a few observations, as below:  [11:23]
<ameya49> 1. Deploying a node with boot_mode: bios and secure_boot: True  [11:23]
<ameya49> Observations: Deployment is successful with secure boot "Enabled" on iDRAC. The code converts the boot mode from 'bios' to 'uefi' internally.  [11:23]
<ameya49> Query: Is this expected behavior?  [11:23]
<ameya49> Looking at the code, it should raise the error message "Configuring secure boot requires UEFI for node <UUID>", but instead it completes successfully by changing the boot mode to uefi during the deployment process.  [11:23]
<ameya49> 2. Secure Boot already Enabled on iDRAC  [11:23]
<ameya49> In this scenario, Secure Boot is already enabled on iDRAC and we try to deploy the node with secure_boot: True.  [11:23]
<ameya49> Observations: Node deployment fails and a pending job gets created for disabling Secure Boot  [11:23]
<ameya49> Query: Was this scenario tested? Or is this expected?  [11:23]
<ameya49> Let me know if any more details are needed here. Thanks!!!  [11:23]
<iurygregory> secure boot should be with uefi afaik  [11:28]
<iurygregory> so I would say this is correct  [11:28]
*** ociuhandu has joined #openstack-ironic11:29
*** rh-jelabarre has joined #openstack-ironic11:30
<ameya49> Thanks iurygregory. And what about scenario 2? Is that expected too?  [11:30]
<iurygregory> scenario 2 is a bit strange... if iDRAC is configured and also the node is configured (with uefi + secure boot), the deployment should be successful, I would say  [11:31]
*** ociuhandu has quit IRC11:33
<ameya49> Yes, I mean the expected behavior (looking at the code) is that it should give the message "Secure boot state for node <UUID> is already Enabled"  [11:35]
*** frigo has joined #openstack-ironic11:36
*** zaneb has quit IRC11:37
*** zaneb has joined #openstack-ironic11:38
*** frigo has quit IRC11:39
*** ameya49 has quit IRC11:44
*** ociuhandu has joined #openstack-ironic11:45
*** ociuhandu has quit IRC11:50
*** ociuhandu has joined #openstack-ironic12:01
<openstackgerrit> Merged openstack/ironic master: Remove a pause before cleaning when fast-tracking  https://review.opendev.org/c/openstack/ironic/+/785876  [12:02]
*** ociuhandu has quit IRC12:06
*** bburns has quit IRC12:10
*** bburns has joined #openstack-ironic12:13
*** tkajinam has quit IRC12:18
*** tkajinam has joined #openstack-ironic12:18
*** ociuhandu has joined #openstack-ironic12:18
*** tkajinam has quit IRC12:24
*** tkajinam has joined #openstack-ironic12:25
*** ociuhandu has quit IRC12:26
*** ameya49 has joined #openstack-ironic12:30
*** ameya49 has quit IRC12:33
*** frigo has joined #openstack-ironic12:34
*** frigo has quit IRC12:35
*** rcernin has quit IRC12:41
<TheJulia> good morning  [12:50]
<rpittau> good morning TheJulia :)  [12:52]
*** dtantsur|brb is now known as dtantsur12:54
<dtantsur> TheJulia: morning  [12:54]
*** ociuhandu has joined #openstack-ironic12:54
<dtantsur> I like it when people say "failed" without providing any details at all :(  [12:54]
<iurygregory> good morning TheJulia =)  [12:57]
<TheJulia> yay, half an hour until my first call  [12:58]
<TheJulia> dtantsur: side effect of not having an understanding of just how complex booting is  [12:59]
<dtantsur> likely  [12:59]
<TheJulia> vmud213: I replied to your change and tried to clarify with more verbosity what the issue is and how to fix it.  [12:59]
<dtantsur> speaking of meetings, tomorrow I have a wall of 1pm to 9pm with very few breaks  [12:59]
<iurygregory> dtantsur, wow  [12:59]
<vmud213> TheJulia: Thanks. Will look into it  [13:00]
<iurygregory> dtantsur, feel free to skip the one at 1pm =)  [13:01]
<TheJulia> vmud213: it is partially a result of a security change which forces the embedded configuration *only* if we can actually validate it is virtual media, which explains why that broke, as it is not virtual media.  [13:01]
<dtantsur> I may end up doing just that  [13:01]
<TheJulia> I *think* I only have like 4 hours of meetings today  [13:01]
*** ociuhandu has quit IRC13:02
<vmud213> TheJulia: Actually, the deploy image is old (though not quite old).  [13:03]
<TheJulia> vmud213: so then flagging it as vmedia with a pregenerated key is just the wrong path  [13:03]
<vmud213> And I can see in the latest code that, in case of failure to get a vmedia device, it is not failing, unlike what I reported earlier. So will test and confirm the behavior  [13:04]
<TheJulia> vmud213: whatever you do, the change will need to be cherry-picked as well to stable/wallaby  [13:06]
<vmud213> TheJulia: https://opendev.org/openstack/ironic-python-agent/src/branch/master/ironic_python_agent/utils.py#L172-L174  [13:10]
<vmud213> in case it fails to get the device, it just updates the existing parameters with an empty dictionary, which means it's just ignoring it. The parameters read from /proc/cmdline are used, which is fine I think  [13:11]
<TheJulia> vmud213: that is a security check.  [13:12]
<vmud213> TheJulia: Thanks. I got it.  [13:12]
<TheJulia> we can't trust that the machine was clean to begin with, that a prior consumer didn't set up contents for IPA to try and load  [13:12]
<vmud213> Ok  [13:13]
<openstackgerrit> Arne Wiebalck proposed openstack/ironic-python-agent master: [WIP] Burn-in: Add CPU step  https://review.opendev.org/c/openstack/ironic-python-agent/+/788211  [13:17]
<arne_wiebalck> rpittau: dtantsur: like this ^^ ?  [13:17]
*** ociuhandu has joined #openstack-ironic13:21
<rpittau> arne_wiebalck: yes, better  [13:22]
<arne_wiebalck> rpittau: while this seems to work (I do this in parallel on our downstream tree), the logging is lost and the debug line does not go anywhere ... any idea?  [13:23]
<iurygregory> arne_wiebalck, just wondering why you have ports in burnin if you don't use them =)  [13:23]
<arne_wiebalck> rpittau: thanks btw ;)  [13:23]
<arne_wiebalck> iurygregory: I copied the interface from the other steps (which have ports as well, but do not use them either, I think)  [13:24]
<iurygregory> arne_wiebalck, gotcha =)  [13:25]
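For context on the ports question: IPA hands both the node and its ports to every hardware-manager clean step, so steps keep that signature even when ports go unused. A minimal sketch of what the hardware.py side might look like once it delegates to a separate burn-in module (step and module names here are illustrative, not the actual change):

    # Methods on the hardware manager in hardware.py; assumes
    # 'from ironic_python_agent import burnin' at the top of the module.
    def get_clean_steps(self, node, ports):
        return [
            {
                'step': 'burnin_cpu',
                'priority': 0,   # 0 = not run during automated cleaning, manual only
                'interface': 'deploy',
                'reboot_requested': False,
                'abortable': True,
            },
        ]

    def burnin_cpu(self, node, ports):
        """Clean step wrapper: IPA passes every step the node and its ports."""
        # ports is accepted but unused, matching the other in-tree steps.
        burnin.stress_cpu(node)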
<rpittau> arne_wiebalck: what do you mean, the logging is lost?  [13:27]
*** tkajinam has quit IRC13:27
<arne_wiebalck> rpittau: the line where I print the args does not appear in journalctl of ironic-python-agent on the node  [13:27]
*** tkajinam has joined #openstack-ironic13:27
<arne_wiebalck> rpittau: it did when it was in hardware.py  [13:27]
<arne_wiebalck> rpittau: line 35 in https://review.opendev.org/c/openstack/ironic-python-agent/+/788211/2/ironic_python_agent/burnin.py#1  [13:28]
*** rloo has joined #openstack-ironic13:29
<rpittau> yeah I saw it  [13:29]
<openstackgerrit> Arne Wiebalck proposed openstack/ironic-python-agent master: [WIP] Burn-in: Add CPU step  https://review.opendev.org/c/openstack/ironic-python-agent/+/788211  [13:29]
<arne_wiebalck> rpittau: that may be good to have for these long-running commands  [13:31]
*** paras333 has joined #openstack-ironic13:32
<rpittau> yeah, I don't see a quick root cause for that  [13:33]
<arne_wiebalck> it is not the level, I tried info/warning as well  [13:33]
<arne_wiebalck> I will keep looking ...  [13:34]
<arne_wiebalck> rpittau: related question, any idea why we mix "from oslo_log import log" and "from oslo_log import log as logging"?  [13:35]
<rpittau> arne_wiebalck: not really, no, and using logging as an alias could be confusing as there is already a "logging" module in Python. We should always use log or just oslo_log  [13:36]
<arne_wiebalck> rpittau: and it creates work for copy-and-paste developers like me  [13:37]
<rpittau> heh :)  [13:37]
* arne_wiebalck had to rebuild the image since he mixed logging and log  [13:38]
<rpittau> that's the confusion I was talking about :)  [13:38]
<arne_wiebalck> yeah ... I may propose to clean this up  [13:39]
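The consistent pattern being suggested is the usual oslo.log idiom (a minimal sketch, not a specific patch):

    # Import oslo_log's log module under its own name, so it is not
    # confused with the stdlib logging module.
    from oslo_log import log

    LOG = log.getLogger(__name__)


    def run_step(args):
        # %-style arguments are only interpolated if the debug level is enabled.
        LOG.debug('Running burn-in step with arguments: %s', args)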
<arne_wiebalck> anyway, the code seems to work (just the logging issue)  [13:39]
<arne_wiebalck> I can run time-controlled CPU burn-in with this  [13:40]
<rpittau> great :)  [13:40]
<rpittau> I'm checking the logging issue too  [13:40]
*** tzumainn has joined #openstack-ironic13:42
<arne_wiebalck> rpittau: thanks!  [13:45]
*** k_mouza has quit IRC13:46
*** frigo has joined #openstack-ironic14:05
<openstackgerrit> Bob Fournier proposed openstack/sushy master: Add support for BIOS Attribute Registry  https://review.opendev.org/c/openstack/sushy/+/784516  [14:06]
<rpittau> arne_wiebalck: couldn't find anything, I checked the code on oslo and some more examples but it all looks ok, so no idea :/  [14:06]
<rpittau> maybe ask in the #openstack-oslo channel  [14:06]
<arne_wiebalck> thanks, rpittau!  [14:07]
<openstackgerrit> Bob Fournier proposed openstack/ironic-specs master: Include Redfish BIOS Attribute Registry in bios API  https://review.opendev.org/c/openstack/ironic-specs/+/774681  [14:08]
<viks____> hi, how do I get deploy images installed every time I provision the baremetal server... i.e. changing the state from `manage` to `available`?  [14:10]
*** k_mouza has joined #openstack-ironic14:17
*** k_mouza has quit IRC14:22
*** sshnaidm has quit IRC14:38
*** sshnaidm has joined #openstack-ironic14:40
<TheJulia> viks____: deployed like... after cleaning, the agent OS is deployed and continues running on the machine?  [14:41]
*** frigo has quit IRC14:46
<JayF> Also noting that going to 'available' is not provisioning the server... it's getting it into inventory, ready for deployment  [14:48]
<JayF> active is the state you want for a node with an installed os  [14:48]
<TheJulia> ++  [14:51]
<TheJulia> Yup  [14:51]
<TheJulia> I was hoping to tease out what they meant :)  [14:51]
<viks____> TheJulia: the deploy image gets installed during `openstack baremetal node provide <node>`, right?  [14:54]
<JayF> That tells Ironic "I want to make this machine ready for provisioning"; it does that by (optionally) cleaning off disks and hardware  [14:55]
<JayF> If you want an OS to be deployed on the node, you have to use the `deploy` verb  [14:55]
<JayF> (along with other things to configure how to deploy, which are documented)  [14:56]
<rpioso> Will there be a review jam today?  [14:57]
<viks____> JayF: so the deploy image basically does cleaning... ??  [14:57]
<JayF> What exactly do you mean when you say deploy image?  [14:58]
<viks____> i'm not very clear on what the role of deploy images is  [14:58]
<openstackgerrit> Julia Kreger proposed openstack/ironic-specs master: Xena themes  https://review.opendev.org/c/openstack/ironic-specs/+/784143  [14:58]
<viks____> JayF: as mentioned here: https://docs.openstack.org/ironic/train/install/creating-images.html  [14:59]
<TheJulia> viks____: to perform installation of the tenant/user requested image as well as teardown/cleaning of disks. It is never actually deployed, just booted via virtual media or network booting  [14:59]
<viks____> "The deploy images are used by the Bare Metal service to prepare the bare metal server for actual OS deployment. Whereas the user images are installed on the bare metal server to be used by the end user. There are two types of user images:"  [14:59]
<JayF> Yeah. deploy images are just the tool that Ironic uses to deploy/clean/etc  [14:59]
<JayF> they do not get "installed" on the node. They just boot up, do what Ironic says, and stop.  [14:59]
<TheJulia> JayF: ++  [15:00]
<JayF> If you're deploying (going from available -> active), one of the things Ironic tells it to do is install whatever image you configured in instance_info  [15:00]
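A minimal sketch of that flow using openstacksdk rather than the CLI: set the user image in instance_info, then request the 'active' provision state, which is when the deploy image boots and writes that image to disk. The cloud name, node name, image URL and checksum are placeholders, and this is an assumed illustration rather than the exact workflow discussed above.

    import openstack

    # Cloud and node names are placeholders.
    conn = openstack.connect(cloud='my-cloud')
    node = conn.baremetal.get_node('node-0')

    # The user image to install is described in instance_info;
    # the deploy image is only booted to do the work.
    conn.baremetal.update_node(
        node,
        instance_info={
            'image_source': 'http://images.example.com/ubuntu.qcow2',
            'image_checksum': '<checksum-of-the-image>',
        },
    )

    # Asking for 'active' triggers the deploy: Ironic boots the deploy image (IPA),
    # which installs the user image, then the node boots into it.
    conn.baremetal.set_node_provision_state(node, 'active')
    conn.baremetal.wait_for_nodes_provision_state([node], 'active')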
*** k_mouza has joined #openstack-ironic15:01
<viks____> so the command `baremetal node clean <node> --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'`, does this need the deploy image to be installed for cleaning?  [15:03]
<TheJulia> it never "installs" it  [15:03]
<TheJulia> it just "boots" it  [15:04]
<viks____> yep... sorry...  [15:04]
<TheJulia> cleaning is automatic if you are going from providing a node to be available or tearing down a node  [15:04]
<TheJulia> you don't need to explicitly invoke cleaning as such in that case, unless you've explicitly disabled automatic cleaning  [15:05]
*** k_mouza has quit IRC15:05
<viks____> so if I set `automated_clean=true`, what clean steps will it run?  [15:06]
<TheJulia> whatever is defined by config and hardware managers; the defaults we provide are erase_devices or erase_devices_metadata, and you can override the priority of each  [15:06]
<viks____> I use ipmi/pxe boot ... when I tried a few weeks ago, I was having some issues... so I set it to false... automated cleaning should work fine in this case, right?  [15:09]
*** k_mouza has joined #openstack-ironic15:18
*** dsneddon has joined #openstack-ironic15:20
*** dsneddon has quit IRC15:25
<TheJulia> so the scientific sig uses slack?!  [15:36]
<TheJulia> Are we no longer doing the SPUCs? I guess that stuff got deleted from the whiteboard  [15:39]
<rpittau> TheJulia: yep, the scientific sig is on slack  [15:39]
*** ociuhandu has quit IRC15:43
<openstackgerrit> Arne Wiebalck proposed openstack/ironic-python-agent master: [WIP] Burn-in: Add CPU step  https://review.opendev.org/c/openstack/ironic-python-agent/+/788211  [15:45]
<dtantsur> TheJulia: SPUCs are intact from my side  [15:45]
<dtantsur> although the US one is no longer comfortable :(  [15:45]
<TheJulia> comfortable time wise?  [15:46]
<openstackgerrit> Arne Wiebalck proposed openstack/ironic-python-agent master: [WIP] Burn-in: Add CPU step  https://review.opendev.org/c/openstack/ironic-python-agent/+/788211  [15:46]
<JayF> TheJulia: I'm still game to spuc. That was in the subteam status section though so it got nuked per your instruction :)  [15:51]
<dtantsur> TheJulia: yep (it's one hour later now)  [15:55]
<TheJulia> ahh yeah  [15:55]
<TheJulia> JayF: doh!  [15:56]
*** paras333 has quit IRC15:59
*** paras333_ has joined #openstack-ironic15:59
*** lucasagomes has quit IRC16:12
*** ociuhandu has joined #openstack-ironic16:13
*** jamesden_ has joined #openstack-ironic16:21
*** jamesdenton has quit IRC16:22
<TheJulia> do we have anything to review jam today?  [16:27]
*** ociuhandu has quit IRC16:27
*** ociuhandu has joined #openstack-ironic16:29
*** gyee has joined #openstack-ironic16:30
*** dougsz has quit IRC16:32
<rpittau> good night! o/  [16:33]
*** rpittau is now known as rpittau|afk16:33
*** ociuhandu_ has joined #openstack-ironic16:34
*** ociuhandu has quit IRC16:38
*** ociuhandu_ has quit IRC16:39
<JayF> I guess that means no review jam in 15?  [16:45]
*** vmud213 has quit IRC16:46
<TheJulia> I guess  [16:46]
<dtantsur> FYI more redfish fun https://review.opendev.org/c/openstack/sushy/+/787859  [16:49]
<TheJulia> joy  [16:50]
<arne_wiebalck> bye everyone o/  [17:01]
*** dtantsur is now known as dtantsur|afk17:05
<openstackgerrit> Julia Kreger proposed openstack/ironic master: Add tiny little statistic collection utility  https://review.opendev.org/c/openstack/ironic/+/788335  [17:28]
<openstackgerrit> Julia Kreger proposed openstack/ironic-python-agent master: Fix NVMe Partition image on UEFI  https://review.opendev.org/c/openstack/ironic-python-agent/+/788338  [18:12]
*** valleedelisle has joined #openstack-ironic18:22
*** dtantsur has joined #openstack-ironic18:25
*** dtantsur|afk has quit IRC18:28
*** _dvd has quit IRC18:29
*** shadower has quit IRC18:29
*** shadower has joined #openstack-ironic18:29
*** openstackstatus has quit IRC18:31
*** openstackstatus has joined #openstack-ironic18:33
*** ChanServ sets mode: +v openstackstatus18:33
*** lbragstad has quit IRC19:17
*** lbragstad has joined #openstack-ironic19:30
*** lmcgann has joined #openstack-ironic19:35
<TheJulia> dtantsur: the comment about killing instance_info in the dict... Is that a thing hitting your universe hard?  [19:38]
<lmcgann> Why can you only specify 'power', 'management', 'deploy', 'bios', 'raid' steps for manual cleaning? Do interfaces such as 'storage' not do anything during deploy or clean?  [19:53]
<TheJulia> not that I'm aware of...  [20:03]
<TheJulia> storage is more about going and talking to cinder or getting information back out  [20:04]
*** sdanni has joined #openstack-ironic20:11
*** kajalsah07 has quit IRC20:27
*** rcernin has joined #openstack-ironic20:30
*** rcernin has quit IRC21:22
*** rcernin has joined #openstack-ironic21:40
*** lbragstad has quit IRC21:46
*** rcernin has quit IRC21:46
*** rcernin has joined #openstack-ironic22:00
*** rcernin has quit IRC22:07
*** rcernin has joined #openstack-ironic22:16
*** lbragstad has joined #openstack-ironic22:39
*** rcernin has quit IRC22:47
*** rcernin has joined #openstack-ironic22:50
*** rcernin has quit IRC22:50
*** rcernin has joined #openstack-ironic22:51
*** rloo has quit IRC23:06
*** tosky has quit IRC23:08
<janders> good morning Ironic o/  [23:32]
*** sdanni has quit IRC23:49
*** rcernin has quit IRC23:54
