*** ysandeep|away is now known as ysandeep | 06:12 | |
*** jpena|off is now known as jpena | 07:51 | |
*** ysandeep is now known as ysandeep|lunch | 08:10 | |
*** akekane_ is now known as abhishekk | 09:02 | |
*** ysandeep|lunch is now known as ysandeep | 09:03 | |
*** bhagyashris_ is now known as bhagyashris | 10:07 | |
*** ysandeep is now known as ysandeep|afk | 11:01 | |
*** jpena is now known as jpena|lunch | 11:38 | |
*** rlandy is now known as rlandy|rover | 11:38 | |
*** jcapitao is now known as jcapitao_lunch | 11:44 | |
*** ysandeep|afk is now known as ysandeep | 11:58 | |
sean-k-mooney | fungi: clarkb just noticed that https://review.opendev.org/c/openstack/project-config/+/798071 is still open too if ye have time can ye take a look | 11:59 |
fungi | looks like frickler just got it | 12:09 |
opendevreview | Merged openstack/project-config master: Add os-vif-core to stable maintainers https://review.opendev.org/c/openstack/project-config/+/798071 | 12:11 |
opendevreview | Merged openstack/openstack-zuul-jobs master: Remove neutron and networking-midonet Ocata jobs definitions https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/791364 | 12:23 |
sean-k-mooney | frickler++ thanks | 12:23 |
sean-k-mooney | fungi: and thanks for looking too | 12:23 |
fungi | once zuul reports on that change for the deploy pipeline, it should hopefully be working | 12:24 |
sean-k-mooney | i can check it later. i noticed it because i was going to do a stable review before a meeting so i'll check back later today | 12:24 |
*** jpena|lunch is now known as jpena | 12:41 | |
*** jcapitao_lunch is now known as jcapitao | 12:48 | |
zul | fungi: those two fixes for the tox role seem to have failed | 13:33 |
fungi | zul: yes, i'm working through a solution in a parent change to get the siblings library able to deal with the default increased verbosity the tox_extra_args addition brings | 13:36 |
fungi | https://review.opendev.org/806621 | 13:37 |
fungi | it's getting a bit complicated dealing with python 2.7 backward compatibility for jobs on older platforms like centos 7 | 13:37 |
zul | fungi: ack | 13:41 |
fungi | as soon as i get clear of my morning meetings i'll hopefully get that finished up | 13:42 |
zul | gotcha good luck ;) | 13:42 |
fungi | zul: i can probably resequence that stack to stick the tox_config_file rolevar addition first as i expect it to be less contentious (it's purely an addition, doesn't introduce any behavior changes in existing uses) | 13:43 |
fungi | i initially expected it to require more discussion, but that was before i realized how into the weeds the tox_extra_args change would get | 13:44 |
clarkb | fungi: feel free to throw reviews my way. I'll be catching up on emails and getting a meeting agenda out (late) after meetings and some food | 14:22 |
fungi | clarkb: thanks, will do once it's reviewable ;) | 14:23 |
*** ysandeep is now known as ysandeep|afk | 14:31 | |
*** jpena is now known as jpena|off | 16:02 | |
*** ysandeep|afk is now known as ysandeep | 16:05 | |
*** ysandeep is now known as ysandeep|away | 17:54 | |
opendevreview | Mathieu Gagné proposed openstack/project-config master: INAP mtl01 region is now owned by iWeb https://review.opendev.org/c/openstack/project-config/+/806788 | 18:30 |
opendevreview | Mathieu Gagné proposed openstack/project-config master: INAP mtl01 region is now owned by iWeb https://review.opendev.org/c/openstack/project-config/+/806791 | 18:44 |
opendevreview | Mathieu Gagné proposed openstack/project-config master: INAP mtl01 region is now owned by iWeb https://review.opendev.org/c/openstack/project-config/+/806788 | 18:44 |
opendevreview | Mathieu Gagné proposed openstack/project-config master: INAP mtl01 region is now owned by iWeb https://review.opendev.org/c/openstack/project-config/+/806788 | 19:23 |
opendevreview | Clark Boylan proposed openstack/project-config master: INAP mtl01 region is now owned by iWeb https://review.opendev.org/c/openstack/project-config/+/806788 | 20:30 |
ade_lee | clarkb, fungi hey -- I'm trying to figure out why I'm not able to connect to a machine in ci using ssh-python. This is fips related and I can't reproduce locally. Any chance we can freeze the instances so I can get to them and troubleshoot? | 22:01 |
ade_lee | clarkb, fungi this is for https://review.opendev.org/c/openstack/octavia/+/798151 | 22:01 |
johnsom | ade_lee Want to chat about this in the #openstack-lbaas channel? | 22:06 |
ade_lee | johnsom, it's not octavia related really. I just chose the octavia job because it was small and I was seeing the thing I wanted to fix | 22:07 |
ade_lee | johnsom, but I can certainly discuss it there in the channel | 22:08 |
johnsom | Yeah, it's ssh from a fips enabled instance to a cirros VM it looks like | 22:08 |
*** rlandy|rover is now known as rlandy|rover|bbl | 22:16 | |
fungi | ade_lee: yeah, i can put an autohold in place, which of the three jobs there would be best? | 22:18 |
ade_lee | fungi, octavia-v2-dsvm-scenario-fips | 22:19 |
fungi | thanks, workin' on it now | 22:19 |
ade_lee | fungi, I didn't know you could do an autohold on just one job - but yeah, that would be great, thanks! | 22:19 |
clarkb | ade_lee: johnsom: if it is to a cirros VM the cirros VM runs dropbear not openssh | 22:21 |
clarkb | I'm not sure what limitations that may imply | 22:21 |
johnsom | clarkb I confused the situation by jumping in. It's a bit different of an issue than the cirros image. So, ignore my comments | 22:22 |
fungi | ade_lee: i've set an autohold on the octavia-v2-dsvm-scenario-fips job for change 798151 so recheck the change and when that job fails again we should have held node(s) i can give you access into | 22:23 |
ade_lee | fungi, thanks -- rechecking now | 22:23 |
ade_lee | fungi, rechecked | 22:23 |
clarkb | ssh.exceptions.AuthenticationDenied: b"The key algorithm 'ssh-rsa' is not allowed to be used by PUBLICKEY_ACCEPTED_TYPES configuration option" <- that seems pretty straightforward: either allow rsa or use a different algorithm? | 22:23 |
clarkb | similar situation with new fedora talking to gerrit | 22:24 |
ade_lee | clarkb, so that error is because fips is enabled -- and you can't do rsa + sha1 for the signature | 22:24 |
ade_lee | which makes sense of course | 22:25 |
clarkb | ya that's all fine, it's the same thing fedora does | 22:25 |
ade_lee | what I'm trying to do is replace paramiko in tempest -- which is the thing that is trying to do the ssh | 22:25 |
clarkb | it breaks when you talk to an ssh server that either cannot do sha2 or does not support key exchange extensions (kex) in its sshd | 22:25 |
clarkb | because the default in the protocol is sha1, which means if you don't explicitly negotiate sha2 you fall back to sha1 | 22:26 |
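A minimal sketch of the client-side fix clarkb is describing, using paramiko purely for illustration (it is not the ssh-python code path under discussion): either refuse the legacy ssh-rsa/SHA-1 signature so the client must negotiate rsa-sha2, or avoid RSA entirely with an Ed25519 key. The host, user, and key paths below are placeholders, and rsa-sha2 negotiation assumes a reasonably recent paramiko (2.9+).

```python
import paramiko

HOST = "203.0.113.10"   # placeholder test VM address
USER = "cirros"         # placeholder guest user

# Option 1: keep the RSA key but disable the SHA-1 signature variant,
# so the client has to negotiate rsa-sha2-256/512 (the server must
# support it, which is exactly the dropbear question discussed here).
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(
    HOST,
    username=USER,
    key_filename="/tmp/test_rsa_key",              # placeholder key path
    disabled_algorithms={"pubkeys": ["ssh-rsa"]},  # drop the SHA-1 algorithm
)
client.close()

# Option 2: sidestep RSA/SHA-1 altogether with an Ed25519 key.
ed_key = paramiko.Ed25519Key.from_private_key_file("/tmp/test_ed25519_key")
client2 = paramiko.SSHClient()
client2.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client2.connect(HOST, username=USER, pkey=ed_key)
client2.close()
```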
fungi | forcing to ecc may be an easier solution, assuming both sides support it | 22:26 |
ade_lee | https://review.opendev.org/c/openstack/tempest/+/806274 (very wip patch) | 22:26 |
clarkb | gerrit suffers a similar problem with people using fedora because the java sshd doesn't support kex negotiation. I continue to assert fedora should have also patched openssh to default to sha2 but they didn't | 22:26 |
ade_lee | tempest uses keys obtained from nova which are rsa | 22:27 |
fungi | yeah, they default to sha1 and then break by refusing to remove sha1 | 22:27 |
clarkb | also forgive me but why does it matter that tempest be fips certified? | 22:27 |
clarkb | tempest is an external validator just run it elsewhere | 22:27 |
fungi | that's a good point, tempest could be run from a non-fips node | 22:27 |
clarkb | in any case I suspect that either cirros dropbear or libssh/ssh-python are not negotiating kex properly and you fall back to the default, which is sha1, and then you fail | 22:28 |
fungi | though perhaps the idea is you're testing whether interactions from a fips system will work... but tempest may not be well designed to test that specifically | 22:28 |
ade_lee | so - what I'd like to be able to do is run all my ci jobs - and a bunch of them are failing because of tempest using paramiko/md5 | 22:29 |
clarkb | also we had to stop using libssh in a completely different context because it wasn't reliable talking to gerrit's event stream | 22:29 |
clarkb | it would error every hour or something | 22:29 |
ade_lee | but yeah - when I do libssh/ssh-python to a machine locally, it works .. so likely then it's dropbear maybe .. | 22:30 |
ade_lee | huh .. interesting | 22:30 |
ade_lee | clarkb, fungi when tempest is run in the ci jobs - where does it run from? | 22:32 |
fungi | ade_lee: depends on the job configuration. it could be run from a completely dedicated node in the nodeset if desired | 22:32 |
clarkb | typically though it runs on the "controller" of the devstack install | 22:33 |
ade_lee | clarkb, yeah - and that we'd want to be fips -enabled | 22:33 |
clarkb | right but as fungi mentions it doesn't have to be if you have a multinode job | 22:34 |
clarkb | https://www.libssh.org/features/ that says libssh does sha2 with rsa | 22:34 |
ade_lee | clarkb, yes -- that's why I was confused it wasn't working in this case. dropbear? | 22:35 |
clarkb | https://github.com/mkj/dropbear/blob/a8d6dac2c53f430bb5721f913478bd294d8b52da/CHANGES#L43 that says dropbear added it in the most recent release: 202.81 | 22:36 |
clarkb | *2020.81 | 22:36 |
clarkb | er no I read that file wrong. | 22:36 |
clarkb | 3 releases ago 2020.79 | 22:36 |
johnsom | Could be the version of dropbear in the cirros image. The job had cirros 0.5.2 in it, but I don't know what version of dropbear that maps to | 22:36 |
clarkb | johnsom: ya I don't either, but that could definitely be it | 22:37 |
johnsom | I might have one, just a sec | 22:37 |
clarkb | it's also possible they support the key type now but don't properly negotiate the kex to the client | 22:37 |
johnsom | ade_lee I am almost positive that is the issue, cirros 0.5.2 has Dropbear v2018.76 in it. There have been some compatibility fixes since then. | 22:40 |
johnsom | https://matt.ucc.asn.au/dropbear/CHANGES | 22:40 |
clarkb | I think it is based on ubuntu 18.04 | 22:40 |
clarkb | cirros I mean. | 22:40 |
ade_lee | yeah 2020.79 .. | 22:41 |
clarkb | hrm bionic has much older dropbear so maybe it gets dropbear from somewhere else or builds it locally | 22:42 |
fungi | so basically back to the ongoing ml thread about updating cirros? | 22:42 |
* johnsom changes channels to avoid distro wars part 15,000 | 22:42 | |
clarkb | fungi: ya I suspect this is another addition to cirros that would be helpful. | 22:43 |
clarkb | oh yup I think the busybox config says build dropbear | 22:43 |
ade_lee | clarkb, fungi so -- if thats it -- then the short term solution then is to use ecc keys? | 22:46 |
clarkb | assuming dropbear supports those then yes | 22:47 |
ade_lee | yeah - I can verify that when I log into the instance | 22:47 |
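As a rough sketch of the "use ecc keys" short-term workaround, assuming the cryptography library is available (a 3.x or newer release for the OpenSSH private-key format): generate an ECDSA P-256 keypair, whose signatures use SHA-256 and therefore never hit the ssh-rsa/SHA-1 restriction under FIPS. The file names are placeholders, and as noted above, dropbear on the cirros image still has to support ecdsa-sha2-nistp256 for this to help.

```python
# Hypothetical helper illustrating the ECC-key alternative: generate an
# ECDSA P-256 keypair and write it out in OpenSSH format.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())

private_openssh = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.OpenSSH,
    encryption_algorithm=serialization.NoEncryption(),
)
public_openssh = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

# Placeholder file names for illustration only.
with open("test_ecdsa_key", "wb") as f:
    f.write(private_openssh)
with open("test_ecdsa_key.pub", "wb") as f:
    f.write(public_openssh + b"\n")
```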
clarkb | the further we get into sha1 deprecation and removal in ssh the more I'm convinced that no one cares about getting it right :/ | 22:49 |
clarkb | it's frustrating that the client knows it cannot do sha1 but then refuses to even try sha2 as a fallback. This wouldn't fix the dropbear situation but would fix the gerrit situation | 22:49 |
ade_lee | fungi, clarkb johnsom thanks - been super helpful. gotta head to dinner -- fungi - let me know when you have a box I can log into. | 22:50 |
fungi | ade_lee: what ssh key do you want authorized for it? feel free to let me know after dinner too | 22:51 |
fungi | i'm digesting mine now, so should be around for a while | 22:51 |
ade_lee | fungi, https://github.com/vakwetu.keys | 22:51 |
clarkb | sorry it is buildroot and not busybox that builds dropbear for cirros | 22:51 |
fungi | ade_lee: thanks, will add that | 22:54 |
clarkb | it looks like the latest version of buildroot will build the latest version of dropbear. Fixing this upstream in cirros may just require a rebuild | 22:54 |
clarkb | nevermind https://github.com/cirros-dev/cirros/blob/77a944c1e65f57ec145e8502eec1a02bd7e99a84/bin/build-release#L31 implies that is very difficult | 23:02 |
*** sshnaidm is now known as sshnaidm|afk | 23:34 |