*** dmellado4 is now known as dmellado | 11:09 |
iurygregory | good morning Ironic | 11:46 |
TheJulia | good morning everyone | 13:00 |
TheJulia | It feels like this week will be quiet | 13:02 |
TheJulia | I may be projecting some hope there | 13:03 |
iurygregory | good morning TheJulia =) | 13:05 |
iurygregory | well, at least today it will be quiet (holiday in Europe and some other countries if I recall) | 13:06 |
iurygregory | TheJulia, I just saw https://review.opendev.org/c/openstack/ironic/+/913432, do you still need it to land ? | 13:09 |
TheJulia | I think it would be nice for anyone using wallaby from master | 13:09 |
TheJulia | err, upstream git | 13:09 |
iurygregory | makes sense, approving it | 13:09 |
opendevreview | Merged openstack/sushy-tools master: Normalize relative path for emulator config file https://review.opendev.org/c/openstack/sushy-tools/+/914756 | 14:06 |
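Per the patch title above, "normalize relative path" typically means something like the following; this is an illustration with invented names, not the patch's actual code:

```python
import os

def normalize_config_path(path):
    # Expand ~ and make a relative path absolute so the emulator
    # finds its config file regardless of the working directory.
    return os.path.abspath(os.path.expanduser(path))
```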
opendevreview | Julia Kreger proposed openstack/ironic master: Add note regarding metal3 ci job in CI config for stable runs https://review.opendev.org/c/openstack/ironic/+/910536 | 14:29 |
TheJulia | rutro, launchpad appears down | 14:30 |
TheJulia | and back | 14:34 |
opendevreview | Merged openstack/ironic-python-agent master: Fix mocking for TestGenericHardwareManager https://review.opendev.org/c/openstack/ironic-python-agent/+/913208 | 14:57 |
opendevreview | Verification of a change to openstack/ironic-python-agent master failed: Add get_additional_skip_list and get_skip_list https://review.opendev.org/c/openstack/ironic-python-agent/+/913209 | 14:57 |
dking | Jay caught a good issue there in the comments, but I see that last one is now failing some zuul tests, and I don't understand why yet. | 15:42 |
dking | BRB | 15:43 |
dking | back | 15:48 |
iurygregory | dking, if I recall Jay is out this week | 15:51 |
dking | iurygregory: Yes, he is. So I'm open to input if anybody else has any information. I don't see how my updates would have caused errors in those particular tests. For all non-custom instances, things should be running exactly the same. | 16:05 |
iurygregory | if you have the link for the patch I can look after lunch o/ | 16:05 |
dking | https://review.opendev.org/c/openstack/ironic-python-agent/+/913209?tab=change-view-tab-header-zuul-results-summary | 16:06 |
dking | Things seemed to go well at first, but then it failed 3 tests. I have a patch to resolve the comment, but I wanted to look at those failures before submitting it again. | 16:08 |
iurygregory | you mean the source jobs that failed right? | 16:25 |
iurygregory | just me, or is it crazy that Jay added +W on Mar 28, and only today we got the failed jobs? O.o | 16:26 |
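For context on the patch being discussed: ironic-python-agent lets deployments ship custom hardware managers, and the change adds skip-list hooks to that interface. A minimal sketch of a custom manager overriding such a hook might look like this; the method name follows the patch title, but the exact signature and semantics are assumptions, so check the review itself:

```python
# Sketch only: a custom ironic-python-agent hardware manager using the
# skip-list hook discussed above. The get_skip_list signature is an
# assumption based on the patch title; see the review for the real one.
from ironic_python_agent import hardware


class ExampleHardwareManager(hardware.HardwareManager):
    HARDWARE_MANAGER_NAME = 'ExampleHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # Advertise ourselves above the generic manager so our
        # overridden methods win, while everything else falls through.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_skip_list(self, node, block_devices):
        # Hypothetical: skip any block device the deployer listed in
        # the node's properties, returning the set of device names to
        # exclude from cleaning.
        tagged = node.get('properties', {}).get('skip_block_devices', [])
        names = {entry.get('name') for entry in tagged}
        return {dev.name for dev in block_devices if dev.name in names}
```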
TheJulia | hmm | 17:01 |
TheJulia | partition image issues | 17:01 |
opendevreview | Julia Kreger proposed openstack/ironic master: DNM: Try to figure out missing /bin/sh failures with partition tests https://review.opendev.org/c/openstack/ironic/+/914772 | 17:03 |
TheJulia | I have a theory, but unfortunately I didn't get any data supporting it last week | 17:04 |
dking | So, perhaps that's an unrelated error, or could something in the change possibly be causing that? They don't look related, but I don't run those tests locally. | 17:06 |
TheJulia | my theory, because we use this really weird way to make a centos partition image, is that the image unpack process is failing | 17:07 |
TheJulia | or the image pack process is failing | 17:07 |
TheJulia | so I'm really starting to think that the partition build is just not reliable | 20:26 |
samcat116 | I am trying to deploy an ironic node with Nova, and it seems that for some reason placement is filtering the node out. What's weird is that if I do an allocation candidate list with the resource class on the node and in the flavor, it shows the right node as a candidate. But when deploying with nova, it seems like placement returns no candidates | 21:05 |
opendevreview | Julia Kreger proposed openstack/ironic master: DNM: Try to figure out missing /bin/sh failures with partition tests https://review.opendev.org/c/openstack/ironic/+/914772 | 21:05 |
samcat116 | Do host aggregate groups come into play with scheduling here? | 21:06 |
samcat116 | As in the placement logs I see "getting providers with 1 CUSTOM_BAREMETAL_SSR_120", then "found 7 providers after applying required aggregates filter", then "found 1 providers with available 1 CUSTOM_BAREMETAL_SSR_120", then "found 0 providers after applying initial aggregate and trait filters" | 21:07 |
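The check samcat116 describes (an allocation candidate list outside of nova) can be reproduced directly against the placement API; here is a sketch with openstacksdk, assuming a clouds.yaml entry named 'mycloud' and using the resource class from the log excerpt:

```python
import openstack

# Sketch of the allocation-candidate check described above; 'mycloud'
# and the microversion are assumptions, the resource class is from
# the placement log excerpt.
conn = openstack.connect(cloud='mycloud')
resp = conn.session.get(
    '/allocation_candidates',
    endpoint_filter={'service_type': 'placement'},
    headers={'OpenStack-API-Version': 'placement 1.29'},
    params={'resources': 'CUSTOM_BAREMETAL_SSR_120:1'})
data = resp.json()
# provider_summaries is keyed by resource provider UUID; with one
# matching ironic node, exactly one key should appear here.
print(list(data['provider_summaries']))
```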
TheJulia | I guess it could, and samcat116 it sounds like you've homed in on the right place, I just don't know what aggregate traits/filters get applied there | 21:09 |
samcat116 | the hard part is that placement and nova have a concept of aggregates, and they aren't the same thing | 21:11 |
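To see the mismatch samcat116 mentions, the two kinds of aggregates can be listed side by side: nova host aggregates via the compute API, and the aggregates placement has attached to the node's resource provider. A sketch, again assuming the 'mycloud' entry and with a placeholder provider UUID:

```python
import openstack

# Sketch comparing the two "aggregate" concepts; 'mycloud' and the
# provider UUID are assumptions to be replaced with real values.
conn = openstack.connect(cloud='mycloud')

# Nova host aggregates (what `openstack aggregate list` shows).
for agg in conn.compute.aggregates():
    print('nova aggregate:', agg.name, agg.hosts)

# Placement aggregates attached to the ironic node's resource provider.
provider_uuid = 'REPLACE-WITH-PROVIDER-UUID'
resp = conn.session.get(
    '/resource_providers/%s/aggregates' % provider_uuid,
    endpoint_filter={'service_type': 'placement'},
    headers={'OpenStack-API-Version': 'placement 1.19'})
print('placement aggregates:', resp.json()['aggregates'])
```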
TheJulia | :( Unfortunately, I just don't know | 21:19 |
TheJulia | sounds like you've already got placement in debug mode? | 21:20 |
samcat116 | yep | 21:20 |
TheJulia | I remember the code/logic behind each individual filter being very simple; maybe it's time to add a little extra logging? | 21:22 |
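What that extra logging might look like, as a purely hypothetical sketch; the function name and data structures here are invented and do not reflect placement's actual filter code:

```python
import logging

LOG = logging.getLogger(__name__)

# Hypothetical illustration of the kind of per-filter debug logging
# TheJulia suggests: record how many providers each step keeps.
def filter_by_required_aggregates(providers, required_aggs):
    kept = [p for p in providers if required_aggs <= set(p.aggregates)]
    LOG.debug('aggregate filter kept %d of %d providers (required=%s)',
              len(kept), len(providers), sorted(required_aggs))
    return kept
```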