Wednesday, 2024-01-10

04:43 <opendevreview> Merged openstack/ironic master: Fix system scoped manageable node network failure  https://review.opendev.org/c/openstack/ironic/+/905022
04:43 <opendevreview> Merged openstack/ironic master: RBAC: Fix allocation check  https://review.opendev.org/c/openstack/ironic/+/905038
04:55 <opendevreview> Merged openstack/networking-baremetal master: Do not try to bind port when we can't  https://review.opendev.org/c/openstack/networking-baremetal/+/903252
05:20 <opendevreview> Julia Kreger proposed openstack/ironic stable/2023.2: Fix system scoped manageable node network failure  https://review.opendev.org/c/openstack/ironic/+/905086
05:21 <opendevreview> Julia Kreger proposed openstack/ironic stable/2023.1: Fix system scoped manageable node network failure  https://review.opendev.org/c/openstack/ironic/+/905087
05:21 <opendevreview> Julia Kreger proposed openstack/ironic stable/2023.2: RBAC: Fix allocation check  https://review.opendev.org/c/openstack/ironic/+/905088
05:22 <opendevreview> Julia Kreger proposed openstack/ironic stable/2023.1: RBAC: Fix allocation check  https://review.opendev.org/c/openstack/ironic/+/905089
07:48 <rpittau> good morning ironic! o/
07:55 <opendevreview> Dmitry Tantsur proposed openstack/ironic-python-agent master: Make inspection URL optional if the collectors are provided  https://review.opendev.org/c/openstack/ironic-python-agent/+/904026
07:57 <rpittau> JayF: re bifrost deps: from what I can see, and just to confirm what TheJulia said, I don't think bifrost is impacted
08:01 <opendevreview> Dmitry Tantsur proposed openstack/bifrost master: [WIP] Deprecate inspector support  https://review.opendev.org/c/openstack/bifrost/+/905192
08:55 <rpittau> the issue with disk space in the metal3 integration job is really odd, but at least now we have confirmation that it's indeed / getting full https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_853/905133/1/check/metal3-integration/8532d95/controller/system/df--h.txt
08:56 <rpittau> if we can increase / to 60 GB it should be fine
08:56 <rpittau> still, I don't get what changed in terms of size to trigger this issue
08:56 <dtantsur> yeah, it's odd indeed
08:58 <rpittau> in this particular change we're just missing a few MB, 14 to be precise https://zuul.opendev.org/t/openstack/build/8532d952cc6843ed8488462c2366ac40/log/controller/before_pivoting/ironic.log#1548
08:58 <rpittau> anyway, let's see if we can increase the / partition first to make the job pass
09:04 <opendevreview> Merged openstack/ironic-python-agent master: Remove unnecessary egg_info options  https://review.opendev.org/c/openstack/ironic-python-agent/+/904054
09:25 <opendevreview> Merged openstack/networking-baremetal master: Remove deprecated pbr options  https://review.opendev.org/c/openstack/networking-baremetal/+/904058
09:32 <opendevreview> Merged openstack/ironic-python-agent master: Remove deprecated pbr options  https://review.opendev.org/c/openstack/ironic-python-agent/+/904055
09:34 <opendevreview> Merged openstack/ironic bugfix/23.1: Handle LLDP parse Unicode error  https://review.opendev.org/c/openstack/ironic/+/904860
10:02 <opendevreview> Merged openstack/sushy master: Allows System to access VirtualMedia in Sushy  https://review.opendev.org/c/openstack/sushy/+/904463
10:47 <opendevreview> Merged openstack/metalsmith master: CI: Ask ironic devstack to set node owner  https://review.opendev.org/c/openstack/metalsmith/+/905012
11:29 <rpittau> dtantsur: I proposed reducing the size of sushy-tools, that should mitigate the issue https://github.com/metal3-io/ironic-image/pull/473
11:30 <rpittau> thanks clarkb for spotting the image size :)
11:44 <dtantsur> nice!
11:54 <edebeste> Hi all, is ironic able to target bluefield dpus to provision OS images to?
11:54 <dtantsur> TheJulia: ^^
12:07 <dtantsur> rpittau: I wonder if we can stop installing libvirt-python from source. bookworm has 9.0.0, which is reasonably recent.
12:07 <rpittau> right, that would probably save us some more space
12:07 <rpittau> I'll add that to the change
12:10 <dtantsur> yeah, not having gcc will help a ton as well
12:12 <rpittau> mmm, or maybe not, it went up to 1 GB :D
12:12 <rpittau> the python3-libvirt package is bringing in a ton of other packages
12:14 <rpittau> but I can probably remove gcc after installation
12:20 <rpittau> we're down to 350 MB, hopefully CI will be happy about it :)
12:22 <dtantsur> rpittau: you probably don't need gcc at all any more?
12:22 <dtantsur> python stuff has to be installed from wheels (if it's not, we're going to have issues)
12:57 <dtantsur> ugh, their networking is quite unstable now
13:24 <rpittau> dtantsur: we need gcc because of the libvirt binding
13:31 <rpittau> AFAIK libvirt-python needs to be compiled as it's tied to the architecture and to the libvirt package installed in the OS
13:31 <opendevreview> Merged openstack/ironic master: Make bandit voting on check and gate  https://review.opendev.org/c/openstack/ironic/+/879498
13:32 <dtantsur> rpittau: hmm, no, debian's python3-libvirt must play well with their libvirt
13:32 <dtantsur> actually, pip install is riskier in this sense
13:33 <rpittau> dtantsur: yes, but the package has so many dependencies that the image size is 1 GB
13:33 <rpittau> basically most of the image is taken by libvirt + python-libvirt when installing using apt
13:35 <dtantsur> wow, that's crazy
13:36 <dtantsur> okay, let's at least try to uninstall gcc afterwards
13:38 <opendevreview> Jake Hutchinson proposed openstack/bifrost master: Bifrost NTP configuration  https://review.opendev.org/c/openstack/bifrost/+/895691
14:10 <TheJulia> edebeste: so... the answer is sort of complicated, depending on the version of bluefield because, aiui, there are non-upstream patches in the mix from nvidia's ubuntu images, and on whether that card has a real bmc, or the bmc is just a VM in the main OS on the card.
14:12 <TheJulia> edebeste: I've pondered adding a bfb-install clean step to kind of simplify that, but given rshim updates are only an nvidia image thing, that just seems more like a "reset the bluefield's OS to what nvidia ships" step, which is not really what most want.
14:14 <edebeste> TheJulia: Thanks. Appreciate the response. I ask because I know my group is currently investigating its use for a deployment and is eyeing maas instead due to their claim of bluefield support. I need to do more research into things myself as well. New to bluefield
14:15 <TheJulia> edebeste: ahh, fun. So the bottom line is basically ubuntu is the only OS nvidia supports on bluefields
14:17 <TheJulia> I know folks who have used ironic "and it just works" for their scenario with bluefields (i.e. a real BMC on the cards, and bluefield2s, not bluefield 3 cards)
14:17 <TheJulia> but only a slim selection of cards have the real BMC
14:18 * TheJulia wonders if she should just hammer out a bfb-install clean step for the "hell of it"
14:29 * dtantsur wonders if he should just hammer his laptop and run to the woods to live with owls for the "hell of it"
14:30 <edebeste> TheJulia: Thanks, that's insightful
14:33 * TheJulia wonders if she should be sending dtantsur a beverage
14:37 <dtantsur> That could help :)
14:41 <dtantsur> Some time ago, we agreed that it's better to have a per-node external_http_url rather than a global external_http_url_v6 used for nodes with IPv6 BMCs. I'm now seriously starting to question this decision.
14:42 <dtantsur> The problem I have right now is: the caller does not know the correct IP address. I don't know how to populate driver_info[external_http_url] without somehow learning Ironic's networking.
14:42 <dtantsur> JayF: I think you were the most against it back in the day, or am I confusing something?
14:43 <TheJulia> why can't an IP be allocated, assigned a DNS name, and ironic be informed of that DNS name?
14:43 <JayF> I certainly don't know if I had some sort of problem with it in the past, but I do know that the need for this feature implies really, really strange or bad network design
14:43 <TheJulia> or IPs with A and AAAA records
14:43 <dtantsur> First and foremost, that would need to push DNS settings to the BMC
14:43 <JayF> For basically the reason that Julia is laying out in detail
14:44 <dtantsur> nothing strange at all, just some BMCs having only one networking stack, while Ironic is dual-stack
14:44 <TheJulia> most people's BMCs I've seen have had DNS configured, actually
14:44 <dtantsur> yeah, but it may not be your cluster's DNS
14:44 <TheJulia> either through static reservations or actual config, since vendors now prefer software updates to be auto-magic
14:44 <dtantsur> 8.8.8.8 won't help me much
14:45 <TheJulia> wasn't there an initial-state DNS requirement for openshift back in the day?
14:45 <dtantsur> I'm quite sure we don't add OpenShift's internal DNS to BMCs
14:45 <dtantsur> I'm equally sure that many customers will never allow that
14:46 <dtantsur> (also, don't get me started about DNS in kubernetes)
14:47 <dtantsur> (also, I have a bootstrap issue here - there is no openshift yet at all)
14:48 <TheJulia> dtantsur: I thought the requirement was DNS external to the cluster, which was working before the cluster launched
14:48 <TheJulia> overall, fair though
14:49 <TheJulia> I just worry we're making a mistake by not embracing DNS, but there are valid reasons not to, just... duplicating parameters because of v6's presence is also NotGreat
14:50 <TheJulia> err, NotGreatā„¢
14:51 <dtantsur> I'm afraid "you need a valid DNS record for your temporary bootstrap VM on your laptop" is not a viable thing I can ask
14:51 <TheJulia> I've heard crazier requirements in the past
14:51 <TheJulia> :)
14:51 <dtantsur> Dual-stack is generally a weird place. It has been added as an afterthought both to OpenStack and Kubernetes (including OCP and Metal3).
14:52 <dtantsur> TheJulia: I'm not in a position to set arbitrary rules...
14:53 <TheJulia> oh yeah
14:53 <TheJulia> Always an afterthought :(
14:54 <TheJulia> and the closer you get to the metal, the more people just expect it to be your problem and just make it all work
14:54 <dtantsur> yea
14:55 * TheJulia should send hjensas some booze
14:55 <dtantsur> And v6-only BMCs are not entirely insane... they just don't play well with v4-primary dual stacks
14:55 <TheJulia> yup
14:55 <dtantsur> and then there is this aspect of metal3's containers that self-determine their networking
14:55 <TheJulia> I'd almost prefer a giant "is this a v4 or a v6 bmc" knob
14:56 <dtantsur> so, Ironic knows its IP addresses, but the callers (BMO, whatever) do not.
14:56 <TheJulia> and by default, just treat the variables like you have an FQDN
14:56 <TheJulia> and you could kind of "guess" based upon the input
14:56 <TheJulia> unless they are operating a well run/configured dual stack environment
14:56 <TheJulia> and give you dns names to start
15:00 <dtantsur> It absolutely does not help that bloody gopls takes GiBs of memory on the bloody openshift-installer repo
15:10 <opendevreview> Merged openstack/ironic master: Fix versions in release notes  https://review.opendev.org/c/openstack/ironic/+/904103
15:20 <opendevreview> Jake Hutchinson proposed openstack/bifrost master: Bifrost NTP configuration  https://review.opendev.org/c/openstack/bifrost/+/895691
15:35 <JayF> Yeah, makes sense that bootstrapping is a problem.
15:35 <JayF> I will say, none of those networks sound like the kind I'd want deployed in my environment, but I can easily see how organic growth gets someone there :/
15:59 <dtantsur> JayF, TheJulia, so, is a combination of external_http_url_v6 with a driver_info[bmc_ip_version] something worth putting my thoughts into, or is it still rather meh from your perspective?
16:00 <JayF> my preference would be driver_info[bmc_ip_version]
16:00 <dtantsur> it's not really either/or: we need a 2nd configuration option for the alternative version
16:00 <JayF> I keep trying to figure out some way to magic the other half
16:00 <JayF> but you can't
16:00 <dtantsur> or something like external_http_url_per_version
16:01 <dtantsur> ironic is not self-conscious enough to determine its own networking :)
16:01 <JayF> honestly, it makes me wish we had split urls
16:01 <dtantsur> split?
16:01 <JayF> external_http_host { v4: 1.2.3.4, v6: 1.2.3.4 }
16:01 <JayF> er lol
16:01 <JayF> external_http_host { v4: 1.2.3.4, v6: ::11 }
16:01 <JayF> external_http_path "/baremetal/v1"
16:01 <JayF> that sort of thing
16:02 <dtantsur> gotcha (needs a port as well)
16:02 <JayF> yeah, and I'm not saying we should change
16:02 <JayF> I'm just saying at least the config would be more sensible if it was structured that way
16:02 <JayF> provide one dns value or two ips for the host
16:03 <JayF> You have real world problems, you should try to solve them, I'm just trying to hammer it into a generalized shape and it may not be possible :D
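[Editor's note] The "split url" shape JayF sketches above can be illustrated in a few lines. This is purely hypothetical: neither the option names (external_http_host, external_http_port, external_http_path) nor a driver_info[bmc_ip_version] flag exist in Ironic; the sketch only shows how one config tree plus a per-node IP-family flag could replace duplicated v4/v6 URL options.

```python
# Hypothetical sketch of the "split url" config discussed above, combined
# with a per-node driver_info[bmc_ip_version] flag. None of these option
# names exist in Ironic today; this only illustrates the shape of the idea.

def external_http_url(conf, bmc_ip_version="v4"):
    """Build the external URL matching the BMC's IP family."""
    host = conf["external_http_host"][bmc_ip_version]
    if ":" in host:  # literal IPv6 addresses must be bracketed in URLs
        host = "[%s]" % host
    return "http://%s:%d%s" % (host,
                               conf["external_http_port"],
                               conf["external_http_path"])

conf = {
    "external_http_host": {"v4": "192.0.2.10", "v6": "2001:db8::11"},
    "external_http_port": 8089,
    "external_http_path": "/baremetal/v1",
}
print(external_http_url(conf))        # http://192.0.2.10:8089/baremetal/v1
print(external_http_url(conf, "v6"))  # http://[2001:db8::11]:8089/baremetal/v1
```

Under these assumptions a dual-stack deployment provides either one DNS name or two literal IPs, and the only per-node knob is the BMC's address family.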
16:07 <opendevreview> Jake Hutchinson proposed openstack/bifrost master: Bifrost NTP configuration  https://review.opendev.org/c/openstack/bifrost/+/895691
16:15 <TheJulia> I do sort of agree with the sentiment overall, but I also feel like there is a line around asserting DNS servers to BMCs remotely, which is both a feature and something we should never do unless we're explicitly asked to
16:16 <TheJulia> and there didn't seem to be a ton of interest around asserting things to the BMC networking configuration-wise when we discussed it at the PTG
16:19 <clarkb> rpittau: you might also want to consider a multistage build. The first stage can install all your build deps and produce wheels for everything, then the second stage will install the wheels and not need the extra bits like git and gcc and all the resulting intermediate build artifacts
16:21 <clarkb> rpittau: if you are interested in that you can look at https://opendev.org/opendev/system-config/src/branch/master/docker/python-builder (stage 1) and https://opendev.org/opendev/system-config/src/branch/master/docker/python-base (stage 2) and how to consume them https://opendev.org/opendev/system-config/src/branch/master/docker/ircbot/Dockerfile
16:21 <rpittau> clarkb: yep, that's what we do in other cases
16:29 <TheJulia> Regarding the two rbac-related bug fixes, both of which have merged against master (Thanks!), I've backported them to 2023.1. I'm not sure there is really a need to backport further.
17:01 <rpittau> good night! o/
17:05 <Sandzwerg[m]> hmm, was there any request for encryption at rest for BMC credentials when they get stored at the node level?
17:12 <TheJulia> Sandzwerg[m]: there has been some mild expression of desire, but nobody has ever stepped forward to work on it.
17:12 <Sandzwerg[m]> I see. Assumed something like this.
17:14 <TheJulia> given the API also doesn't expose the password value, that has also seemed reasonable for most folks. Some folks have mused about integrating with external services, but we also use the credentials a lot for things like nodes supporting ipmi, because we don't cache the credentials in RAM.
17:15 <TheJulia> The only case where credentials might get cached in RAM is with redfish sushy clients
17:16 <TheJulia> Since we cache those, largely for their sessions with the established credentials, but they need to be able to re-authenticate when the session expires.
17:18 <Nisha_Agarwal> TheJulia,
17:18 <Nisha_Agarwal> hi
17:18 <TheJulia> Hi Nisha, how are you today?
17:18 <Nisha_Agarwal> TheJulia, doing good :) How are you?
17:19 <TheJulia> I'm alright, any response from your firmware team?
17:20 <Nisha_Agarwal> TheJulia, I am seeing something different... wanted to ask about that... Since RHOSP is kolla based, which IP address is associated with the ironic conductor container?
17:21 <Nisha_Agarwal> In other words, I don't see the IP address of the undercloud in the client details
17:21 <Nisha_Agarwal> of the session
17:21 <Nisha_Agarwal> hence asking
17:21 <TheJulia> Do you have an HTTP proxy in place?
17:21 <Nisha_Agarwal> Yes
17:21 <TheJulia> ohhh...
17:21 <TheJulia> that might be the issue then
17:22 <Nisha_Agarwal> proxy was required for the undercloud setup itself... else podman doesn't work
17:22 <TheJulia> Anyway, where are you looking and not seeing it?
17:22 <Nisha_Agarwal> Ok, will paste the output
17:22 <TheJulia> ok
17:26 <TheJulia> I'm trying to remember which ironic is used in that case, I guess I need to go look at the logs
17:28 <Nisha_Agarwal> for example I see the IP address as "ClientOriginIPAddress": "10.79.90.44"
17:28 <Nisha_Agarwal> when I do a GET on the session URI for a particular session
17:28 <Nisha_Agarwal> and the undercloud IP starts with 128*
17:28 <TheJulia> I guess that is a header your proxy is injecting?
17:30 <Nisha_Agarwal> ok, could be... I need to check that, but in that case how does it work in a devstack env?
17:30 <Nisha_Agarwal> proxy is used there also
17:33 <Nisha_Agarwal> the client origin IP looks to be the proxy IP...
17:33 <TheJulia> Okay, that makes sense then
17:39 <Nisha_Agarwal> The firmware team says Connection: close just closes the network socket and doesn't invalidate the session
17:39 <Nisha_Agarwal> and session key expiry is by default 24 hrs
17:40 <Nisha_Agarwal> TheJulia, ^^^
17:40 <TheJulia> Interesting
17:40 <TheJulia> I wonder if we're hitting something with the proxy then
17:41 <Nisha_Agarwal> may be... when I use curl then the ClientOriginIpAddress is the system IP
17:42 <TheJulia> dtantsur: do you have a good example for the networkData field in metal3?
17:42 <TheJulia> specifically with openshift
17:42 <dtantsur> TheJulia: for the final instance, not for the ramdisk? I don't think I do, we don't use this feature.
17:44 <TheJulia> ramdisk is what I'm looking for
17:45 <dtantsur> TheJulia: point 4 https://docs.openshift.com/container-platform/4.14/installing/installing_bare_metal_ipi/ipi-install-expanding-the-cluster.html#preparing-the-bare-metal-node_ipi-install-expanding
17:45 <TheJulia> perfect, thanks!
17:47 <TheJulia> I'm guessing userData is likely just a string of json?
17:47 <TheJulia> which would align with the config drive in ironic's api?
17:47 <dtantsur> yeah
17:48 <dtantsur> TheJulia: note that openshift does *not* use Ironic's network_data. Also note that this uses the preprovisioningNetworkData field, which is CoreOS-specific.
17:48 <TheJulia> ack
17:48 <TheJulia> hmmmmm
17:49 <Nisha_Agarwal> I can see when I used sushy, the ClientOriginIPAddress is the proxy IP
17:50 <TheJulia> dtantsur: oh, interesting, so it looks like some of the docs exclude preprovisioningNetworkData
17:51 <dtantsur> In practice, it may mean that you need to provide the network config twice: for the ramdisk as shown in the document and for the instance through the normal networkData
17:51 <TheJulia> yup
17:53 <TheJulia> Nisha_Agarwal: where exactly are you seeing that, since it is not something we log or report on afaik
18:00 <Nisha_Agarwal> TheJulia, I have put a pdb in sushy where it gets the auth token
18:01 <Nisha_Agarwal> where we populate the auth token and the session_uri
18:01 <TheJulia> is it in the header responses back on the request?
18:01 <TheJulia> err, response header
18:01 <Nisha_Agarwal> So I just get that session URI and used a GET call on that session URI
18:02 <Nisha_Agarwal> in that GET output we can see the field as "ClientOriginIPAddress": "1.1.1.1"
18:03 <Nisha_Agarwal> So this curl -gikx ""  -H 'X-Auth-Token:3vFTqTQRTpORP8seu0ikAQ==' -X GET https://<ip>/redfish/v1/SessionService/Sessions/Administrator75516825a7ad4051a0be59aa865a69a7
18:03 <Nisha_Agarwal> gives us the response as
18:03 <Nisha_Agarwal> {"@odata.etag": "\"30e0fdb421510233428419775960592b\"", "@odata.id": "/redfish/v1/SessionService/Sessions/Administrator75516825a7ad4051a0be59aa865a69a7", "@odata.type": "#Session.v1_7_0.Session", "ClientOriginIPAddress": "1.1.1.1", "CreatedTime": "2024-01-10T17:57:23Z", "Id": "Administrator75516825a7ad4051a0be59aa865a69a7", "Name": "User Session Administrator75516825a7ad4051a0be59aa865a69a7", "Oem": {"Hpe": {"@odata.type": "#HpeH3Session.v1_0_0.HpeH3Session", "InitialRole": "Administrator"}}, "SessionType": "Redfish", "UserName": "Administrator"}
18:04 <Nisha_Agarwal> Here the ClientOriginIPAddress for sushy in RHOSP 17.1 is the proxy IP
18:04 <TheJulia> Ahh, okay, you're directly curling the IP from the session list
18:04 <TheJulia> well, curling the url of the session
18:05 <Nisha_Agarwal> I checked once in a non-RHOSP env for the auth token created by sushy... and doing the same as above I saw the ClientIPAddress as the system IP
18:05 <Nisha_Agarwal> for non-RHOSP I can test more, but for the RHOSP env I have seen this now many times
18:06 <TheJulia> which makes sense, it inherently doesn't get the environment variable for the proxy
18:06 <Nisha_Agarwal> and hence for sure with sushy it adds the IP of the proxy, while with curl it adds the IP of the system
18:06 <TheJulia> I think it is more a matter of the remote BMC recording the IP where the session originates
18:07 <TheJulia> the post to create the session is really quite simple, with two fields in response
18:07 <TheJulia> a token and a uri
18:07 <Nisha_Agarwal> And I think that's where the sushy token is not getting validated...
18:08 <Nisha_Agarwal> hmmm, I will speak to the firmware team once more... but not so sure... as of now everything is hit and trial
18:08 <TheJulia> do you mean if the IP somehow changes, you think the redfish endpoint is invalidating the request
18:08 <TheJulia> ?
18:08 <Nisha_Agarwal> Yes... looks like it
18:08 <TheJulia> That seems like a weird feature
18:08 <TheJulia> but would explain it entirely if there is a transparent proxy that picks up the request
18:08 <TheJulia> or even a proxy cluster
18:09 <TheJulia> I remember when I was at HPE, all browser requests could end up with like one of seven IPs
18:09 <Nisha_Agarwal> hmmm...
18:09 <Nisha_Agarwal> weird thing is that I didn't see this for the non-RHOSP env
18:09 <TheJulia> That was also a very long time ago
18:10 <TheJulia> because it likely picks up your shell environment's proxy environment variables
18:10 <TheJulia> whereas in the container, there are none
18:10 <Nisha_Agarwal> for non-RHOSP where the proxy is still there, I see the ClientIP is the system IP address
18:10 <TheJulia> Yeah, which makes sense, because there is no proxy IP
18:10 <Nisha_Agarwal> proxy is there in both
18:11 <Nisha_Agarwal> if there is no proxy in the container then it shouldn't pick up the proxy IP when I use sushy
18:11 <TheJulia> it could be writing the X-Forwarded-For header to ClientIP in the session json/tracking
18:11 <Nisha_Agarwal> but actually when it has the proxy IP, I guess that's where it is failing...
18:12 <Nisha_Agarwal> will check with the firmware team and see if they can crack anything with this
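[Editor's note] TheJulia's X-Forwarded-For theory can be modeled with a toy function. This is an assumption about one plausible firmware behaviour, not documented behaviour of any real BMC: the session records either the TCP peer address (which becomes the proxy's address when a proxy relays the request) or, if the proxy added an X-Forwarded-For header, the original client. The IPs below are placeholders.

```python
# Toy model of the theory above: how a BMC *might* choose the value of
# ClientOriginIPAddress. Assumption for illustration only; not the known
# behaviour of any real firmware. All addresses are placeholders.

def recorded_client_ip(tcp_peer_ip, headers):
    forwarded = headers.get("X-Forwarded-For")
    if forwarded:
        # first entry is the original client; later entries are proxies
        return forwarded.split(",")[0].strip()
    return tcp_peer_ip

# Direct request (curl from the undercloud): the system IP is recorded.
print(recorded_client_ip("128.0.2.5", {}))
# Proxied request, proxy adds no header: the proxy IP is recorded.
print(recorded_client_ip("10.79.90.44", {}))
# Proxied request with X-Forwarded-For: the original client is recovered.
print(recorded_client_ip("10.79.90.44",
                         {"X-Forwarded-For": "128.0.2.5, 10.79.90.44"}))
```

If the firmware then invalidates tokens whose requests arrive from a different address than the recorded one, a transparent proxy (or proxy cluster) in the path would explain the RHOSP-only failures.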
18:12 <TheJulia> can you unset your proxy environment variables and see what happens?
18:12 <Nisha_Agarwal> using sushy?
18:12 <TheJulia> yeah
18:13 <TheJulia> That would explain the basic fundamental operating difference
18:20 <Nisha_Agarwal> yes, I ran it one time after unsetting the proxy, and I can see the system IP
18:21 <Nisha_Agarwal> in the clientip
18:21 <TheJulia> your machine, or the undercloud?
18:21 <Nisha_Agarwal> undercloud
18:21 <Nisha_Agarwal> from non-RHOSP it always adds the system IP
18:22 <Nisha_Agarwal> even with the proxy being there
18:22 <TheJulia> but sushy sessions from the undercloud get the proxy IP as the ClientIP?
18:23 <Nisha_Agarwal> yes
18:24 <Nisha_Agarwal> even if I assume it's a firmware issue, I should see the same behaviour irrespective of RHOSP and non-RHOSP
18:24 <TheJulia> Eh, not if the input is different
18:24 <Nisha_Agarwal> input is the same... on the same system
18:25 <Nisha_Agarwal> with curl, on the undercloud we see the system IP in the ClientIP
18:26 <Nisha_Agarwal> I will check with the firmware team also on this... But request you to check if some network setting on the RHOSP side can solve this
18:26 <Nisha_Agarwal> network or general setting
18:27 <Nisha_Agarwal> we tested 4 scenarios:
18:28 <Nisha_Agarwal> RHOSP Undercloud <--> RHOSP SDFLEX BM target  ---> fails with attribute missing error
18:28 <Nisha_Agarwal> RHOSP Undercloud <--> Devstack SDFLEX BM target  ---> fails with attribute missing error
18:28 <Nisha_Agarwal> Devstack System <--> RHOSP SDFLEX BM target  ---> fails with attribute missing error
18:28 <Nisha_Agarwal> Devstack System <--> Devstack SDFLEX BM target  ---> doesn't fail
18:29 <Nisha_Agarwal> so it is certain that the issue is seen only in the RHOSP env
18:30 <Nisha_Agarwal> and the ClientIP thing can be one of the possible reasons...
18:31 <TheJulia> Wait, so you have two different targets?
18:32 <Nisha_Agarwal> nope... these are all the scenarios we tested to root-cause the error
18:32 <TheJulia> so you're saying firing up devstack on rhel, it fails?
18:32 <Nisha_Agarwal> nope
18:33 <Nisha_Agarwal> I said if you have a devstack setup
18:34 <Nisha_Agarwal> and try to do two simple steps,
18:34 <Nisha_Agarwal> create a sushy object and then do get_system
18:34 <TheJulia> so I'm trying to understand your different scenarios, and I guess I'm not sure what you mean by "RHOSP Undercloud <--> Devstack SDFLEX BM target"
18:35 <Nisha_Agarwal> here I meant that from the RHOSP undercloud we tried to deploy (or simply use a python shell to root-cause) an Overcloud Baremetal target
18:36 <Nisha_Agarwal> here the Overcloud Baremetal target is the SuperdomeFlex series of servers
18:36 <TheJulia> so how is that different from "RHOSP Undercloud <--> RHOSP SDFLEX BM target"?
18:37 <Nisha_Agarwal> the same series of servers which are not part of the RHOSP subnet, while they are part of the devstack subnet; we do not hit the issue when we try the same tests from the devstack subnet to the devstack subnet.
18:37 <Nisha_Agarwal> BM target is the target system which we want to certify
18:38 <TheJulia> so the targets are just different groups of servers?
18:39 <Nisha_Agarwal> wait... I think you are confused
18:39 <TheJulia> Very confused, which is why I'm asking questions trying to clarify
18:40 <Nisha_Agarwal> the undercloud is a normal HPE server (not a SuperDome Flex system)
18:41 <Nisha_Agarwal> in the Overcloud servers we have 3 SuperDomeFlex servers (SuperdomeFlex, SuperDomeFlex 280 and HPE ScaleupCompute 3200)
18:41 <Nisha_Agarwal> we can hit the issue on all of these servers from the undercloud
18:43 <TheJulia> And nodes accessed from the undercloud get a proxy IP as the ClientIP of the session record on the BMC?
18:43 <Nisha_Agarwal> Yes
18:44 <TheJulia> Okay, and in the devstack setup, is the proxy being leveraged?
18:44 <TheJulia> and in those cases the ClientIP is the devstack node?
18:46 <Nisha_Agarwal> yes, it is there in devstack also
18:46 <Nisha_Agarwal> but the ClientIP is the system IP even though the proxy is there
18:46 <TheJulia> because those processes launch with the environment variables which include a proxy parameter
18:46 <Nisha_Agarwal> hmmmm
18:46 <TheJulia> The container-launched processes shouldn't have them, aiui
18:47 <Nisha_Agarwal> if we add the baremetal IP to no_proxy, the proxy has no effect, right?
18:47 <TheJulia> it should not be used then
18:47 <TheJulia> although, it sounds like we've got a transparent proxy somewhere in the mix
18:48 <Nisha_Agarwal> hmmm, not sure... will do some more experiments tomorrow
18:48 <Nisha_Agarwal> may be something to do with the proxy
18:48 <Nisha_Agarwal> and that's leading to all this
18:48 <TheJulia> That is what it really does sound like
18:48 <TheJulia> since it gets the IP even without it being explicitly configured or being in the environment variables
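[Editor's note] The curl-versus-sushy difference discussed above hinges on proxy environment variables: Python HTTP clients (requests, which sushy uses underneath) honour http_proxy/https_proxy/no_proxy from the process environment, so a shell that exports a proxy behaves differently from a container started without those variables. A minimal stdlib demonstration (the proxy URL and IPs are placeholders):

```python
import os
import urllib.request

# Simulate a shell that exports a proxy; the values are placeholders.
os.environ["https_proxy"] = "http://proxy.example.com:3128"
os.environ["no_proxy"] = "192.0.2.50"

# Python HTTP clients resolve proxies from the environment like this:
proxies = urllib.request.getproxies()
print(proxies["https"])  # http://proxy.example.com:3128

# Hosts listed in no_proxy bypass the proxy entirely, which is why adding
# the BMC IP to no_proxy should make sushy connect directly, as curl does:
print(bool(urllib.request.proxy_bypass("192.0.2.50")))  # True
print(bool(urllib.request.proxy_bypass("192.0.2.51")))  # False
```

This matches the observed behaviour: a devstack shell with the proxy exported routes sushy's requests through the proxy (so the BMC records the proxy IP), while unsetting the variables, or listing the BMC in no_proxy, makes the system IP appear again.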
18:49 <Nisha_Agarwal> will do some more experiments tomorrow and then ask the engineer to trigger a deploy with no_proxy
18:49 <TheJulia> and if that IP changes due to infrastructure config or a load balancer of some sort, and the "bmc" invalidates the request then, that would explain everything, really
18:49 <TheJulia> okay
18:49 <Nisha_Agarwal> infrastructure config means RHOSP config?
18:50 <TheJulia> no, like your switches/routers
18:50 <TheJulia> well, routers, really
18:50 <Nisha_Agarwal> ok
18:50 <TheJulia> I'm not sure we have explicit proxy passing capability in rhosp
18:51 <Nisha_Agarwal> hmmm
19:07 <TheJulia> so when reproducing the issue, you get an AccessError with dmitry's patch, right?
19:07 <Nisha_Agarwal> Yes
19:08 <TheJulia> okay
19:08 <TheJulia> That is good
19:10 <Nisha_Agarwal> but that patch is not yet there in our RHOSP, as I think then we would need to do the undercloud setup again
19:11 <Nisha_Agarwal> we just checked manually and did see the correct error message
19:14 <TheJulia> ok
23:43 <TheJulia> gah, I never did service/adminy access to ironic-inspector
23:48 *** jph4 is now known as jph

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!