*** rlandy has quit IRC | 00:05 | |
*** ryohayakawa has joined #openstack-infra | 00:17 | |
*** armax has quit IRC | 00:32 | |
*** eharney has joined #openstack-infra | 00:36 | |
*** smarcet has joined #openstack-infra | 01:14 | |
*** smarcet has left #openstack-infra | 01:14 | |
*** rfolco has quit IRC | 02:07 | |
*** apetrich has quit IRC | 02:14 | |
*** bauzas has quit IRC | 02:14 | |
*** tetsuro has quit IRC | 02:19 | |
*** bauzas has joined #openstack-infra | 02:20 | |
*** rcernin has quit IRC | 02:27 | |
*** bauzas has quit IRC | 02:27 | |
*** gyee has quit IRC | 02:29 | |
*** bauzas has joined #openstack-infra | 02:29 | |
*** rcernin has joined #openstack-infra | 02:30 | |
*** ysandeep|away is now known as ysandeep | 02:32 | |
*** bauzas has quit IRC | 02:42 | |
*** bauzas has joined #openstack-infra | 02:43 | |
ianw | donnyd: it looks like the host has ipv6 issues | 02:48 |
ianw | our node launch attempts to ping -6 review.opendev.org and that's failing; i've held the generated node | 02:48 |
ianw | it has an address : inet6 2001:470:e126:0:f816:3eff:feec:cf45/64 scope global dynamic mngtmpaddr noprefixroute | 02:49 |
*** bauzas has quit IRC | 02:50 | |
openstackgerrit | Adrian Turjak proposed openstack/project-config master: Return gnocchi back to openstack https://review.opendev.org/744592 | 02:51 |
*** bauzas has joined #openstack-infra | 02:53 | |
ianw | i can ping the router : 64 bytes from fe80::f816:3eff:fe4a:404a%ens3: icmp_seq=1 ttl=64 time=0.873 ms | 02:55 |
*** bauzas has quit IRC | 02:58 | |
*** bauzas has joined #openstack-infra | 02:58 | |
ianw | also : Request to https://api.us-east.open-edge.io:8776.../volumes/detail timed out when trying to list volumes | 03:07 |
ianw | clarkb / fungi: ^ something to take up tomorrow i think | 03:08 |
ianw | i have uploaded a focal image, and it does boot and chat via ipv4 | 03:08 |
ianw | ./launch-node.py --cloud openstackci-openedge --region=us-east --flavor 8cpu-8GBram-80GBdisk --image ubuntu-focal mirror01.us-east.openedge.opendev.org | 03:08 |
ianw | to save someone some typing :) | 03:08 |
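(The failing check ianw describes can be reproduced from the held node roughly as below; the addresses and interface name are copied from the lines above, and the packet counts are arbitrary.)

```shell
ping -6 -c 2 review.opendev.org                    # the launch-node connectivity check that is failing
ping -6 -c 2 fe80::f816:3eff:fe4a:404a%ens3        # the link-local router, which does answer
```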
*** Tengu has quit IRC | 03:13 | |
*** Tengu has joined #openstack-infra | 03:15 | |
openstackgerrit | Ghanshyam Mann proposed openstack/openstack-zuul-jobs master: Migrate functional tests jobs to Ubuntu Focal https://review.opendev.org/738325 | 03:27 |
openstackgerrit | Ghanshyam Mann proposed openstack/openstack-zuul-jobs master: Migrate openstack-tox-docs jobs to Ubuntu Focal https://review.opendev.org/738326 | 03:28 |
openstackgerrit | Ghanshyam Mann proposed openstack/openstack-zuul-jobs master: Pin ubuntu bionic for tox-py27 and tox-py36 in no-constraints template https://review.opendev.org/738327 | 03:28 |
openstackgerrit | Ghanshyam Mann proposed openstack/openstack-zuul-jobs master: Pin nodejs6 & nodejs8 jobs on ubuntu-bionic https://review.opendev.org/738328 | 03:28 |
*** armax has joined #openstack-infra | 03:29 | |
*** ramishra_ has quit IRC | 03:34 | |
*** hongbin has joined #openstack-infra | 03:35 | |
*** psachin has joined #openstack-infra | 03:37 | |
*** vishalmanchanda has joined #openstack-infra | 03:42 | |
*** armax has quit IRC | 03:43 | |
*** Tengu has quit IRC | 03:57 | |
*** ramishra has joined #openstack-infra | 03:58 | |
*** Tengu has joined #openstack-infra | 03:58 | |
*** hamalq has joined #openstack-infra | 04:09 | |
*** hongbin has quit IRC | 04:10 | |
*** hamalq has joined #openstack-infra | 04:10 | |
*** hamalq has quit IRC | 04:17 | |
*** raukadah is now known as chkumar|rover | 04:23 | |
*** Tengu has quit IRC | 04:26 | |
*** Tengu has joined #openstack-infra | 04:27 | |
*** evrardjp has quit IRC | 04:33 | |
*** evrardjp has joined #openstack-infra | 04:33 | |
*** ysandeep is now known as ysandeep|afk | 04:33 | |
*** Tengu has quit IRC | 04:33 | |
*** Tengu has joined #openstack-infra | 04:35 | |
*** Lucas_Gray has quit IRC | 04:36 | |
*** Ajohn has joined #openstack-infra | 04:43 | |
*** ykarel has joined #openstack-infra | 04:59 | |
*** lmiccini has joined #openstack-infra | 05:03 | |
*** marios has joined #openstack-infra | 05:13 | |
*** Tengu has quit IRC | 05:17 | |
*** Tengu has joined #openstack-infra | 05:19 | |
*** udesale has joined #openstack-infra | 05:36 | |
*** marios is now known as marios|ruck | 05:43 | |
*** Tengu has quit IRC | 05:44 | |
*** Tengu has joined #openstack-infra | 05:45 | |
*** Tengu has quit IRC | 05:49 | |
*** Tengu has joined #openstack-infra | 05:51 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack/project-config master: Normalize projects.yaml https://review.opendev.org/744096 | 06:10 |
*** dchen has quit IRC | 06:17 | |
*** ysandeep|afk is now known as ysandeep | 06:18 | |
*** xek has joined #openstack-infra | 06:19 | |
*** Tengu has quit IRC | 06:24 | |
*** Tengu has joined #openstack-infra | 06:26 | |
*** eolivare has joined #openstack-infra | 06:27 | |
*** dchen has joined #openstack-infra | 06:29 | |
*** eharney has quit IRC | 06:35 | |
openstackgerrit | Merged openstack/project-config master: Normalize projects.yaml https://review.opendev.org/744096 | 06:37 |
*** Tengu has quit IRC | 06:38 | |
*** Tengu has joined #openstack-infra | 06:40 | |
*** hashar has joined #openstack-infra | 06:49 | |
openstackgerrit | Merged openstack/project-config master: Add notifications for openstack-stable channel https://review.opendev.org/744050 | 06:51 |
*** slaweq has joined #openstack-infra | 06:52 | |
*** eharney has joined #openstack-infra | 06:53 | |
*** Tengu has quit IRC | 07:01 | |
*** Tengu has joined #openstack-infra | 07:02 | |
*** dchen has quit IRC | 07:04 | |
*** dchen has joined #openstack-infra | 07:05 | |
*** dchen_ has joined #openstack-infra | 07:07 | |
*** jcapitao has joined #openstack-infra | 07:08 | |
*** dchen has quit IRC | 07:10 | |
*** svyas is now known as svyas|afk | 07:13 | |
*** dtantsur|afk is now known as dtantsur | 07:16 | |
*** xek has quit IRC | 07:22 | |
*** rcernin has quit IRC | 07:22 | |
*** andrewbonney has joined #openstack-infra | 07:47 | |
*** rcernin has joined #openstack-infra | 07:48 | |
*** tosky has joined #openstack-infra | 07:48 | |
*** ralonsoh has joined #openstack-infra | 07:50 | |
*** ralonsoh has quit IRC | 07:50 | |
*** ralonsoh has joined #openstack-infra | 07:52 | |
*** rcernin has quit IRC | 07:53 | |
*** jpena|off is now known as jpena | 07:55 | |
*** Tengu has quit IRC | 08:00 | |
*** xek has joined #openstack-infra | 08:06 | |
*** Tengu has joined #openstack-infra | 08:07 | |
*** AJaeger has joined #openstack-infra | 08:08 | |
*** xek has quit IRC | 08:18 | |
*** jhesketh has quit IRC | 08:24 | |
*** rcernin has joined #openstack-infra | 08:27 | |
*** Lucas_Gray has joined #openstack-infra | 08:30 | |
*** rcernin has quit IRC | 08:31 | |
openstackgerrit | Merged openstack/project-config master: Add Ceph iSCSI charm to OpenStack charms https://review.opendev.org/744479 | 08:32 |
*** pkopec has joined #openstack-infra | 08:37 | |
*** xek has joined #openstack-infra | 08:37 | |
*** priteau has joined #openstack-infra | 08:43 | |
*** derekh has joined #openstack-infra | 08:47 | |
*** ramishra has quit IRC | 08:51 | |
*** ramishra has joined #openstack-infra | 08:52 | |
*** tetsuro has joined #openstack-infra | 08:55 | |
*** rcernin has joined #openstack-infra | 08:59 | |
*** nightmare_unreal has joined #openstack-infra | 09:06 | |
*** ramishra has quit IRC | 09:08 | |
*** xek has quit IRC | 09:08 | |
*** lpetrut has joined #openstack-infra | 09:12 | |
*** rcernin has quit IRC | 09:16 | |
*** ociuhandu_ has joined #openstack-infra | 09:21 | |
*** ociuhandu has quit IRC | 09:21 | |
openstackgerrit | Merged openstack/project-config master: Revert "Remove os_congress gating" https://review.opendev.org/742532 | 09:24 |
*** apetrich has joined #openstack-infra | 09:40 | |
*** ralonsoh has quit IRC | 09:40 | |
*** ralonsoh has joined #openstack-infra | 09:41 | |
*** hashar has quit IRC | 09:42 | |
*** auristor has quit IRC | 10:01 | |
*** xek has joined #openstack-infra | 10:11 | |
*** udesale_ has joined #openstack-infra | 10:12 | |
*** tkajinam has quit IRC | 10:12 | |
*** sshnaidm_ has joined #openstack-infra | 10:12 | |
*** udesale has quit IRC | 10:14 | |
*** sshnaidm has quit IRC | 10:15 | |
*** jhesketh has joined #openstack-infra | 10:21 | |
*** yamamoto has quit IRC | 10:23 | |
*** sshnaidm_ is now known as sshnaidm | 10:26 | |
*** yamamoto has joined #openstack-infra | 10:27 | |
*** ociuhandu_ has quit IRC | 10:30 | |
*** ociuhandu has joined #openstack-infra | 10:31 | |
*** ociuhandu has quit IRC | 10:36 | |
*** yamamoto has quit IRC | 10:38 | |
*** tetsuro has quit IRC | 10:44 | |
*** xek has quit IRC | 10:50 | |
*** yamamoto has joined #openstack-infra | 10:50 | |
*** rcernin has joined #openstack-infra | 10:55 | |
*** dklyle has quit IRC | 10:56 | |
*** dchen_ has quit IRC | 11:03 | |
*** aedc has quit IRC | 11:08 | |
*** eharney has quit IRC | 11:13 | |
*** zxiiro has joined #openstack-infra | 11:15 | |
*** tosky_ has joined #openstack-infra | 11:20 | |
*** tosky has quit IRC | 11:20 | |
*** tosky_ is now known as tosky | 11:20 | |
*** ociuhandu has joined #openstack-infra | 11:23 | |
*** ociuhandu has quit IRC | 11:24 | |
*** ociuhandu has joined #openstack-infra | 11:25 | |
*** udesale has joined #openstack-infra | 11:27 | |
*** udesale_ has quit IRC | 11:27 | |
*** ociuhandu has quit IRC | 11:30 | |
*** udesale_ has joined #openstack-infra | 11:31 | |
*** udesale has quit IRC | 11:32 | |
*** eharney has joined #openstack-infra | 11:32 | |
*** gfidente has joined #openstack-infra | 11:32 | |
*** jpena is now known as jpena|lunch | 11:36 | |
*** udesale_ has quit IRC | 11:38 | |
*** ociuhandu has joined #openstack-infra | 11:40 | |
*** ykarel_ has joined #openstack-infra | 11:40 | |
*** udesale has joined #openstack-infra | 11:41 | |
*** jcapitao is now known as jcapitao_lunch | 11:42 | |
*** ykarel has quit IRC | 11:43 | |
*** marios|ruck has quit IRC | 11:48 | |
*** smarcet has joined #openstack-infra | 11:50 | |
*** rfolco has joined #openstack-infra | 11:51 | |
*** rcernin has quit IRC | 11:53 | |
openstackgerrit | Rico Lin proposed openstack/openstack-zuul-jobs master: Add job for openstack-python3-victoria-jobs-arm64 https://review.opendev.org/742090 | 11:53 |
openstackgerrit | Thierry Carrez proposed openstack/project-config master: Retire Zuul's Kata tenant https://review.opendev.org/744687 | 11:58 |
*** takamatsu has joined #openstack-infra | 12:04 | |
*** rlandy has joined #openstack-infra | 12:07 | |
*** artom has joined #openstack-infra | 12:10 | |
donnyd | Well in the upgrade to ussuri it would seem something busted in the bgp dragent | 12:10 |
donnyd | it doesn't seem to work at all - so I am running that down now | 12:11 |
*** tosky has quit IRC | 12:14 | |
*** tosky_ has joined #openstack-infra | 12:14 | |
*** tosky_ is now known as tosky | 12:15 | |
*** hashar has joined #openstack-infra | 12:16 | |
*** dave-mccowan has joined #openstack-infra | 12:16 | |
*** ralonsoh_ has joined #openstack-infra | 12:26 | |
*** ramishra has joined #openstack-infra | 12:27 | |
*** ralonsoh__ has joined #openstack-infra | 12:28 | |
rpittau | hello everyone! In ironic projects we have oslo.config==5.2.0 in lower-constraints in ussuri, but it's installing 8.3.1, which breaks the l-c tests because of the validators dependencies | 12:28 |
rpittau | it seems that tox is upgrading the packages without taking lower-constraints into consideration; has anything changed recently that could cause that? | 12:28 |
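(A quick way to see what actually ended up in the testenv; the path assumes tox's default layout, and oslo.config is just the package from the question above.)

```shell
.tox/lower-constraints/bin/pip freeze | grep oslo.config
# lower-constraints.txt pins oslo.config==5.2.0, yet 8.3.1 is what gets installed
```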
*** ryo_hayakawa has joined #openstack-infra | 12:28 | |
*** ryohayakawa has quit IRC | 12:29 | |
*** ralonsoh has quit IRC | 12:30 | |
*** ralonsoh_ has quit IRC | 12:31 | |
*** svyas|afk is now known as svyas | 12:32 | |
*** xek has joined #openstack-infra | 12:33 | |
AJaeger | rpittau: Not sure whether this helps, just wanted to point out http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014237.html | 12:34 |
rpittau | yay | 12:37 |
rpittau | AJaeger: thanks, I'll test that | 12:37 |
*** jpena|lunch is now known as jpena | 12:39 | |
rpittau | AJaeger: we already removed install_command roughly 3 months ago | 12:40 |
AJaeger | rpittau: in that case I have no idea ;( | 12:43 |
rpittau | :( | 12:43 |
*** arxcruz is now known as arxcruz|off | 12:43 | |
AJaeger | rpittau: IMHO the lower-constraints is just too complex, see the followup to my email as well. You might need to recreate the file | 12:44 |
*** ralonsoh__ is now known as ralonsoh | 12:44 | |
rpittau | AJaeger: ok, I'll have a look at that, thanks! | 12:45 |
AJaeger | rpittau: did you see on the discuss mailing list the thread "[nova] openstack-tox-lower-constraints broken"? You might want to chime in there as well... | 12:46 |
rpittau | AJaeger: yep, saw that too :/ | 12:47 |
rpittau | seems l-c hates us | 12:47 |
AJaeger | those mails seem to indicate that constraints in general hate us ;( | 12:48 |
AJaeger | sorry, hope others can help. Perhaps you can join forces with smcginnis and fungi about this. | 12:49 |
rpittau | AJaeger: we were talking in #openstack-requirements before, they pointed out a change in tox, I guess we need to keep testing | 12:50 |
fungi | rpittau: i think something has changed with how tox is installing packages. in #openstack-cinder smcginnis is experimenting with my suggestion of setting -c in install_command instead of deps | 12:51 |
fungi | this did seem to begin roughly the same time as the most recent tox release (within margin of error anyway) | 12:52 |
smcginnis | fungi: AJaeger pointed out the reason it was removed above. That breaks the l-c job. | 12:52 |
rpittau | fungi: ok, thanks | 12:52 |
smcginnis | The tox release is oddly suspicious. But as fungi pointed out, there really wasn't much in that release that should have affected this. | 12:53 |
smcginnis | And I've updated my tox locally, and I am still unable to reproduce this on F32. | 12:53 |
fungi | to be clear, i didn't look at the complete list of commits between the last two releases, just their (rather minimal) changelog | 12:53 |
rpittau | so basically re-add install_command with -c ? | 12:53 |
smcginnis | rpittau: I have a test patch up to try that now. | 12:54 |
fungi | rpittau: well, that's what we're experimenting with, but we might need to do it separately in the lower-constraints env as well | 12:54 |
rpittau | alright, gotcha | 12:54 |
*** yamamoto has quit IRC | 12:54 | |
fungi | to be able to specify a different constraints file default | 12:54 |
smcginnis | If we do that, we will also need to explicitly define install_command in [testenv:lower-constraints] too. | 12:55 |
rpittau | smcginnis: yeah, that's what I wanted to do | 12:55 |
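(A rough tox.ini sketch of the workaround being tested here; the constraint URL and file names follow the usual OpenStack layout and are assumptions, not the exact cinder/ironic patches.)

```ini
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt

[testenv:lower-constraints]
# the override has to be repeated here so -c points at the lower bounds instead
install_command = pip install -c{toxinidir}/lower-constraints.txt {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt
```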
*** eharney has quit IRC | 12:55 | |
*** eharney has joined #openstack-infra | 12:56 | |
dtantsur | hey folks! I'm trying to use devstack CI jobs on a non-standard branch. How do I figure out why zuul does not run them? | 12:56 |
fungi | right, that's basically what i was describing | 12:56 |
dtantsur | https://review.opendev.org/#/c/744700/ doesn't seem enough, do I need overrides for all required projects? | 12:57 |
smcginnis | fungi: Yeah, what you said. I was too busy typing to see that you had already said it. | 12:57 |
smcginnis | And now it looks like we have a grenade issue going around too. | 12:57 |
smcginnis | dtantsur: You might. Probably good to ask in #openstack-qa. | 12:58 |
dtantsur | will do | 12:58 |
fungi | dtantsur: you can turn on debug for your project in the zuul.yaml for your change and zuul should comment with a detailed explanation of what it selected. that might help narrow down why a particular job was not selected. but also common reasons for a job to not be selected are file filters or branch filters | 12:58 |
smcginnis | But IIRC, devstack has to be aware of each "stable" branch, so this might be a little more work. | 12:58 |
dtantsur | mm, branch filters, lemme check | 12:58 |
dtantsur | interestingly, one non-devstack job also does not run | 12:58 |
dtantsur | fungi: could you remind me how to turn on debug? | 12:59 |
fungi | sure, after i remind myself, just a moment | 13:00 |
fungi | dtantsur: https://zuul-ci.org/docs/zuul/reference/project_def.html#attr-project.%3Cpipeline%3E.debug | 13:02 |
*** ryo_hayakawa has quit IRC | 13:02 | |
dtantsur | thx! | 13:02 |
*** ryo_hayakawa has joined #openstack-infra | 13:03 | |
*** auristor has joined #openstack-infra | 13:05 | |
*** mgoddard has quit IRC | 13:07 | |
fungi | dtantsur: and this is the documentation for the branch override you want to use for your required-projects list, i suspect: https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.required-projects.override-checkout | 13:09 |
dtantsur | I guess so. I wonder what to do with jobs that are fully defined in other projects.. | 13:10 |
*** ryo_hayakawa has quit IRC | 13:10 | |
fungi | you can still set that in a variant (basically where you include the job in your pipeline) | 13:10 |
fungi | you don't need to redefine or inherit from the job itself | 13:11 |
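(A hedged .zuul.yaml sketch of the two knobs fungi points at; the job and branch names are placeholders rather than dtantsur's actual change.)

```yaml
- project:
    check:
      debug: true                # Zuul comments with its job-selection reasoning
      jobs:
        - example-devstack-job:
            override-checkout: stable/ussuri        # default checkout for required projects
            required-projects:
              - name: openstack/devstack
                override-checkout: stable/ussuri    # per-project override, if needed
```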
*** mgoddard has joined #openstack-infra | 13:11 | |
openstackgerrit | Antoine Musso proposed openstack/pbr master: Update some url to use opendev.org https://review.opendev.org/744704 | 13:17 |
*** ysandeep is now known as ysandeep|mtg | 13:18 | |
*** jcapitao_lunch is now known as jcapitao | 13:20 | |
dtantsur | nice, I'll try | 13:21 |
rpittau | smcginnis: I'm testing the change you did for cinder in ironic and it seems working, at least locally on ubuntu bionic and latest tox | 13:22 |
*** ykarel_ has quit IRC | 13:22 | |
*** ykarel_ has joined #openstack-infra | 13:22 | |
*** lbragstad_ has joined #openstack-infra | 13:24 | |
*** __ministry1 has joined #openstack-infra | 13:26 | |
*** lbragstad has quit IRC | 13:26 | |
*** __ministry1 has quit IRC | 13:33 | |
smcginnis | rpittau: Looks like the cinder one is passing too. | 13:39 |
dtantsur | next question, folks: https://opendev.org/openstack/project-config/src/branch/master/gerritbot/channels.yaml#L763 doesn't seem to work (the patches do not appear). anything I missed? | 13:39 |
smcginnis | That's good news / bad news. Glad it fixes it, but if the "fix" is we have to update every tox file everywhere... that's a pain. | 13:39 |
rpittau | smcginnis: ehm...yeah :/ | 13:39 |
rpittau | I guess we need to start from master and backport... gosh that's a lot of files | 13:40 |
*** lbragstad_ has quit IRC | 13:41 | |
fungi | dtantsur: you haven't missed anything, we have to manually update gerritbot configuration right now because it's in limbo having not been containerized yet after we containerized gerrit itself | 13:44 |
dtantsur | ah! could someone please do this update? | 13:45 |
smcginnis | There was also a recent gerritbot update for the #openstack-stable channel as well. | 13:45 |
fungi | dtantsur: i'll set myself a reminder to try and do it in a bit | 13:46 |
dtantsur | thank you again! | 13:46 |
*** lbragstad has joined #openstack-infra | 13:46 | |
*** armax has joined #openstack-infra | 13:55 | |
*** ysandeep|mtg is now known as ysandeep | 14:00 | |
openstackgerrit | Jeremy Stanley proposed openstack/pbr master: Re-add ChangeLog https://review.opendev.org/744719 | 14:00 |
openstackgerrit | Jeremy Stanley proposed openstack/pbr master: Add Release Notes to documentation https://review.opendev.org/744720 | 14:00 |
*** beagles has joined #openstack-infra | 14:01 | |
*** yamamoto has joined #openstack-infra | 14:05 | |
donnyd | routing is fixed | 14:05 |
donnyd | can someone else confirm | 14:05 |
donnyd | ping6 2001:470:e126:0:f816:3eff:fee4:7b2d | 14:05 |
*** michael-beaver has joined #openstack-infra | 14:05 | |
fungi | donnyd: yep! i can reach that | 14:07 |
donnyd | ok, then we should be good to go | 14:16 |
*** ysandeep is now known as ysandeep|off | 14:16 | |
*** yamamoto has quit IRC | 14:17 | |
donnyd | apparently the BGP agent just doesn't want to work anymore in my deployment - so I am debugging it now - but for the two simple projects to support the community, static routes work just fine | 14:17 |
*** pkopec has quit IRC | 14:18 | |
donnyd | so the mirror should work now | 14:19 |
donnyd | but I am thinking it probably needs to be redeployed | 14:19 |
*** dave-mccowan has quit IRC | 14:25 | |
noonedeadpunk | AJaeger: hi, can you push that change to the gates?:) https://review.opendev.org/#/c/742534/ | 14:26 |
openstackgerrit | Jeremy Stanley proposed openstack/pbr master: Re-add ChangeLog https://review.opendev.org/744719 | 14:27 |
openstackgerrit | Jeremy Stanley proposed openstack/pbr master: Add Release Notes to documentation https://review.opendev.org/744720 | 14:27 |
*** hashar has quit IRC | 14:29 | |
*** dklyle has joined #openstack-infra | 14:31 | |
*** psachin has quit IRC | 14:33 | |
*** Tengu has quit IRC | 14:35 | |
*** udesale_ has joined #openstack-infra | 14:35 | |
*** Tengu has joined #openstack-infra | 14:36 | |
*** udesale has quit IRC | 14:38 | |
*** smarcet has quit IRC | 14:46 | |
*** dave-mccowan has joined #openstack-infra | 14:54 | |
*** jamesmcarthur has joined #openstack-infra | 14:56 | |
*** dave-mccowan has quit IRC | 14:59 | |
*** pkopec has joined #openstack-infra | 14:59 | |
*** artom has quit IRC | 15:03 | |
smcginnis | Not sure if it's related yet, but just noticed a new pip was released today that has a few bug fixes related to resolving dependencies. Might be interesting to see if that gets picked up and if it addresses our constraints issues. | 15:04 |
smcginnis | https://pip.pypa.io/en/stable/news/ | 15:04 |
*** artom has joined #openstack-infra | 15:04 | |
*** derekh has quit IRC | 15:05 | |
clarkb | smcginnis: we run tox from a virtualenv iirc now to prevent polluting the global image python. You should be able to run a pre job step to upgrade the pip in that virtualenv before the run stage | 15:05 |
clarkb | (otherwise will need to wait for image updates) | 15:05 |
clarkb | I'm in a meeting now but can help sort that out if it is something you want to try | 15:05 |
*** smarcet has joined #openstack-infra | 15:05 | |
*** derekh has joined #openstack-infra | 15:05 | |
fungi | smcginnis: ooh, "Correctly find already-installed distributions with dot (.) in the name and uninstall them when needed." | 15:06 |
smcginnis | Thanks clarkb. Hoping to try some things a little later, so I can ping you. | 15:06 |
fungi | that could be what we're seeing! | 15:06 |
smcginnis | Yeah, that one caught my eye. | 15:06 |
fungi | it would explain why some already installed stuff gets ignored but other stuff doesn't (thinking about it, those had a . in their names) | 15:06 |
smcginnis | Oh? I didn't check that yet. Was it really only ones that had a . in the name? | 15:07 |
fungi | so not actually a behavior change in tox, but rather because the new tox release brought in new pip | 15:07 |
fungi | oslo.cache, dogpile.cache, oslo.messaging all come to mind | 15:08 |
fungi | as ones i saw getting reinstalled with the wrong versions | 15:08 |
smcginnis | Should we now pick up this latest pip? Or will there be a delay before we see that in our jobs? Would be great if we can just recheck one of the failing jobs and see if that's now resolved. | 15:09 |
fungi | smcginnis: i think it'll be delayed until the next tox release | 15:09 |
fungi | unless we override the version of pip in the tox envs like clarkb described | 15:09 |
*** chkumar|rover is now known as raukadah | 15:10 | |
smcginnis | Reading the pip bug report, it is sounding kind of like that may be it: https://github.com/pypa/pip/issues/8645 | 15:12 |
clarkb | fungi: smcginnis we build a venv for tox in our images | 15:13 |
clarkb | I think we just need to rebuild our images and that will pull in newer pip | 15:13 |
fungi | oh, tox pulls in latest pip? | 15:14 |
clarkb | that happens roughly daily, and we can speed it up and test it with a pre run playbook update that upgrades pip in that virtualenv | 15:14 |
clarkb | fungi: yes I believe so | 15:14 |
clarkb | or rather the virtualenv creation does | 15:14 |
smcginnis | That all sounds like it could also explain why we saw a slight delay between releases and when we started getting failures. | 15:14 |
fungi | oh, actually it depends on what version of pip is vendored in the version of virtualenv used | 15:15 |
*** lpetrut has quit IRC | 15:15 | |
fungi | https://virtualenv.pypa.io/en/latest/changelog.html "v20.0.29 (2020-07-31) [...] Upgrade embedded pip from version 20.1.2 to 20.2" | 15:16 |
clarkb | python3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox is what we do roughly | 15:17 |
clarkb | so ya maybe tox is pulling in newer pip (otherwise how did it update in the first place) | 15:17 |
smcginnis | Should we add another step to that of /usr/tox-env/bin/pip install --upgrade pip to make sure it grabs the latest? | 15:19 |
clarkb | I'm trying to check now if tox pulls in pip | 15:20 |
*** openstackgerrit has quit IRC | 15:20 | |
clarkb | https://github.com/tox-dev/tox/blob/master/setup.cfg#L45 they install virtualenv which will pull in pip | 15:21 |
clarkb | I think that explains it? | 15:21 |
clarkb | they aren't using the pip in our tox virtualenv; they are using the pip installed in the virtualenvs that tox creates for each testenv | 15:22 |
fungi | #status log manually updated /etc/gerritbot/channel_config.yaml on review01 with latest content from openstack/project-config:gerritbot/channels.yaml (for the first time since 2020-03-17) and restarted the gerritbot service | 15:22 |
clarkb | it's turtles the whole way down but as long as virtualenv updates they will update, and that is in line with the changelog fungi found | 15:22 |
openstackstatus | fungi: finished logging | 15:22 |
clarkb | smcginnis: but also that means updating pip in the tox venv is unlikely to fix it for us | 15:22 |
clarkb | we need that nested virtualenv version to update | 15:22 |
fungi | dtantsur: i've updated gerritbot's channel config just now at your request | 15:23 |
dtantsur | thank you! | 15:23 |
smcginnis | Smoking gun at least. Last virtualenv release was July 31 - right when we started getting failures. | 15:23 |
smcginnis | So now we just need a new virtualenv release to happen? | 15:24 |
clarkb | with an updated pip version included yes | 15:24 |
clarkb | if we need to we can change `python3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox` to `python3 -m venv /usr/tox-env && /usr/tox-env/bin/pip install tox virtualenv==oldversionthatworks` | 15:25 |
clarkb | or do a pre step that does the equivalent of ^ | 15:25 |
smcginnis | That could be a good workaround for now. | 15:26 |
smcginnis | I don't see any open virtualenv issues asking for updates to newer pip requirements. | 15:26 |
clarkb | probably starting with a pre step and confirming that works is a good start. Then we can put that in the images themselves if we expect to be in that state for a while | 15:27 |
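(A minimal sketch of that workaround, assuming the image-build step clarkb quotes above; the virtualenv pin is illustrative, i.e. whatever release last vendored the older pip.)

```shell
python3 -m venv /usr/tox-env
/usr/tox-env/bin/pip install tox 'virtualenv<20.0.29'   # hold back the release that bumped vendored pip to 20.2
```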
rpittau | smcginnis, clarkb, would it still make sense to modify the install_command to fix the issue? asking as I started with a couple of patches for ironic, and if there is a "global fix" that would be way better :) | 15:38 |
clarkb | rpittau: if this is a pip bug then no I don't think that will help much. Best case you'll end up reverting it shortly anyway | 15:39 |
ralonsoh | mordred, hi | 15:39 |
ralonsoh | can you help me on a problem in the Neutron CI? | 15:39 |
ralonsoh | specifically on the grenade jobs | 15:39 |
ralonsoh | I think the problem is related to https://review.opendev.org/#/c/662734 | 15:40 |
ralonsoh | we have released 9.1.0 version recently | 15:40 |
rpittau | clarkb: ok, what smcginnis proposed (and I applied to ironic) actually works; at least I'm seeing the correct packages installed in the lower-constraints job, both in master and ussuri. | 15:42 |
clarkb | rpittau: to be clear I'm not saying it can't be fixed that way, just pointing out it is likely that an updated pip will address the problem and you'll want to revert | 15:42 |
*** lpetrut has joined #openstack-infra | 15:43 | |
clarkb | ralonsoh: it would be helpful to link to a specific failure | 15:43 |
*** ykarel_ is now known as ykarel|away | 15:43 | |
rpittau | clarkb: I see, thanks! I'll check if we can wait for that as at the moment all stable branches are broken :/ | 15:43 |
ralonsoh | clarkb, you are right | 15:43 |
ralonsoh | https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-placement-api.txt | 15:43 |
ralonsoh | keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for identity service not found | 15:43 |
ralonsoh | I see the project config is | 15:44 |
ralonsoh | keystone_authtoken.interface = admin | 15:44 |
*** hashar has joined #openstack-infra | 15:45 | |
*** dmellado has joined #openstack-infra | 15:46 | |
clarkb | ralonsoh: that job appears to have installed keystonemiddleware 9.0.0 | 15:46 |
ralonsoh | Collecting keystonemiddleware===9.1.0 | 15:47 |
clarkb | ralonsoh: I see 9.0.0 in that log | 15:48 |
clarkb | is that the correct log | 15:48 |
ralonsoh | clarkb, it uninstalls 9.0.0 | 15:48 |
ralonsoh | 2020-08-04 13:39:25.813 | Requirement already satisfied: keystonemiddleware===9.1.0 in /usr/local/lib/python3.6/dist-packages (from -c /opt/stack/new/requirements/upper-constraints.txt (line 215)) (9.1.0) | 15:48 |
*** ykarel|away has quit IRC | 15:49 | |
clarkb | ah I see. The old side installs 9.0.0 then we upgrade to the new side and get 9.1.0 | 15:50 |
clarkb | and I need to look in the grenade log to see that | 15:50 |
ralonsoh | clarkb, I think the problem is | 15:50 |
ralonsoh | keystone_authtoken.interface = admin | 15:50 |
ralonsoh | any other job in the CI uses internal | 15:50 |
ralonsoh | instead of admin | 15:51 |
ralonsoh | I need to find now in grenade where this is configured | 15:51 |
tosky | ralonsoh: the grenade job is basically a devstack job which then runs grenade.sh | 15:53 |
ralonsoh | tosky, yeah, but why this option is different? | 15:53 |
clarkb | and the configs come from https://opendev.org/openstack/placement/src/branch/master/etc/placement | 15:53 |
ralonsoh | because I think this is the problem there | 15:53 |
clarkb | ralonsoh: I think genconfig on stable is using keystonemiddleware with 9.0.0 so the default is admin | 15:53 |
clarkb | then I believe we don't update configs for the new side | 15:53 |
ralonsoh | ahhhhhh | 15:54 |
clarkb | but that should work if it worked on the old side? what is making admin invalid? | 15:54 |
clarkb | that would require a keystone db update wouldn't it? | 15:54 |
ralonsoh | I have no idea, sorry | 15:54 |
fungi | yeah, it's also an intentional choice that we run grenade with config from the previous stable branch, so that we can catch when upgrades fail to be backward-compatible with configs from the previous release | 15:55 |
*** ykarel|away has joined #openstack-infra | 15:55 | |
tosky | grenade does update the db, but iirc the point is to not update the config | 15:55 |
tosky | as fungi explained it in a better way | 15:55 |
ralonsoh | ok, when keystone is restarted | 15:56 |
ralonsoh | in https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-keystone.txt | 15:56 |
clarkb | ya I think the trick here is to figure out why admin is no longer valid as it should be (as intentional testing design objective) | 15:56 |
ralonsoh | the interface option is changed | 15:56 |
ralonsoh | from "admin" to "internal" | 15:56 |
ralonsoh | at 13:37:57 | 15:56 |
clarkb | is this the port 5000 vs 35something? | 15:56 |
clarkb | or is this the endpoint attribute in the database? | 15:56 |
ralonsoh | keystone_authtoken.interface = internal | 15:56 |
ralonsoh | I'm noob on keystone | 15:57 |
clarkb | I think the likely fix here is we need to keep admin as the value on the new side if the old side defaults to admin | 16:02 |
clarkb | then when old side defaults to internal the new side can default to internal | 16:02 |
ralonsoh | exactly, and this should be done in grenade, if I'm not wrong | 16:03 |
clarkb | ralonsoh: yes in openstack/grenade/projects/10_keystone/from_whatevertheversionthatmattershereis | 16:03 |
ralonsoh | clarkb, thanks a lot | 16:03 |
clarkb | there are examples of other upgrade steps like that in grenade | 16:03 |
tosky | uhm, why doesn't this happen on other grenade jobs? | 16:03 |
clarkb | tosky: because both old and new side use older middleware with the admin default | 16:03 |
clarkb | tosky: the issue here is old defaults to admin but new defaults to internal but we don't change other service configs aiui | 16:04 |
clarkb | if we keep the new side fixed to the admin default then the other services don't need their configs updated | 16:04 |
*** lmiccini has quit IRC | 16:05 | |
clarkb | I'm trying to check if that is set in keystones conf file already | 16:06 |
ralonsoh | clarkb, I'm not very sure about the use of those upgrade files | 16:07 |
ralonsoh | no one is calling those methods | 16:07 |
clarkb | well and https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt doesn't set the interface either | 16:07 |
clarkb | oh I see placement is complaining about the default too | 16:08 |
clarkb | we aren't fixing the value in the configs so we use the default everywhere | 16:08 |
ralonsoh | yes | 16:08 |
clarkb | if we fixed the value in the configs then we'd be ok I think | 16:08 |
ralonsoh | can we set the default value globally? | 16:09 |
clarkb | https://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L426 adds to the mystery :) | 16:11 |
clarkb | when I first found that function I thought yes we can and that is where, but that sets the value to public and we don't see that in the services | 16:11 |
*** jpena is now known as jpena|off | 16:13 | |
donnyd | fungi: how are things holding up down there? | 16:13 |
ralonsoh | clarkb, but this is being called only in nova, when upgrading to rocky | 16:14 |
clarkb | ralonsoh: ya I guess it is going to be branch specific and that may help explain tosky's question | 16:14 |
*** jcapitao has quit IRC | 16:14 | |
clarkb | ralonsoh: but that job was ussuri to master right? | 16:15 |
ralonsoh | yes | 16:15 |
tosky | I didn't check how that job differs from the normal grenade (or grenade-multinode) job | 16:15 |
*** derekh has quit IRC | 16:15 | |
clarkb | in devstack master and ussuri many of the services call that | 16:15 |
ralonsoh | neutron and nova | 16:15 |
ralonsoh | and placement | 16:16 |
ralonsoh | https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/screen-placement-api.txt | 16:16 |
clarkb | ralonsoh: and placement swift glance and cinder | 16:16 |
ralonsoh | yes | 16:16 |
ralonsoh | all of them | 16:16 |
clarkb | but if you look in their config files we don't have that value set | 16:17 |
clarkb | https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt | 16:17 |
*** ykarel|away has quit IRC | 16:17 | |
ralonsoh | clarkb, because we don't configure it explicitly | 16:17 |
ralonsoh | we don't call configure_auth_token_middleware | 16:18 |
clarkb | ralonsoh: we do in https://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L426 | 16:18 |
clarkb | ralonsoh: devstack does | 16:18 |
clarkb | or it should | 16:18 |
ralonsoh | indeed | 16:18 |
clarkb | iirc the way this process works is we run devstack on the old branch to configure and deploy the services. Then grenade runs specific portions of devstack things to only update the service installations but not their configs | 16:19 |
clarkb | I would've expected the config to have interface = public in them and then that won't chnage | 16:19 |
clarkb | figuring out why that isn't happening may be the bug that needs to be fixed here | 16:19 |
ralonsoh | clarkb, actually, this is the only variable not set | 16:19 |
ralonsoh | in https://opendev.org/openstack/devstack/src/branch/master/lib/keystone#L403-L426 list | 16:19 |
ralonsoh | compared to https://026716e2072b41f3360b-f5b932bfaec6245f8b3ddc87df48d7a4.ssl.cf2.rackcdn.com/640258/23/check/neutron-grenade-multinode/bcc8e1e/controller/logs/etc/placement/placement_conf.txt | 16:19 |
*** pkopec has quit IRC | 16:19 | |
ralonsoh | in the same order | 16:20 |
*** lpetrut has quit IRC | 16:21 | |
clarkb | I see that function doesn't set that var in ussuri | 16:21 |
clarkb | so I think the fix here is to update devstack stable branches to include that iniset on that function call | 16:22 |
ralonsoh | clarkb, no way! | 16:22 |
ralonsoh | thanks! | 16:22 |
*** __ministry1 has joined #openstack-infra | 16:23 | |
clarkb | but then we'll have the ussuri value fixed and won't have changing defaults to worry about | 16:23 |
clarkb | (and train stein etc depending on how far back we go) | 16:23 |
ralonsoh | clarkb, do you mean we need to backport this ussuri fix up to the last supported branch? | 16:26 |
*** ociuhandu has quit IRC | 16:27 | |
clarkb | maybe? I think it depends on if that default has changed before (it probably hasn't, now that I think about it more) | 16:27 |
clarkb | so it really only matters where the default has changed (between ussuri and master I think) | 16:27 |
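(Roughly the kind of one-line addition being discussed for devstack's stable branches, alongside the other iniset calls in lib/keystone's auth-token middleware helper; a sketch, not the merged patch.)

```shell
iniset $conf_file $section interface public
```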
*** hamalq has joined #openstack-infra | 16:38 | |
*** hamalq_ has joined #openstack-infra | 16:41 | |
fungi | donnyd: we dodged a bullet here, the water came up high enough to saturate the yard but didn't rise above the slab for our downstairs. cleanup should be minimal | 16:41 |
clarkb | oh, that reminds me: mirror things. I'll likely have time for that after the meeting today | 16:42 |
ralonsoh | just a heads-up | 16:43 |
ralonsoh | https://review.opendev.org/#/c/744753/ | 16:43 |
donnyd | clarkb: I added static routes in my edge box for the two openstack projects. Routing should not be an issue any longer | 16:43 |
clarkb | cool I think the best process will be to rebuild the mirror to be sure no broken network artifacts remain and take it from there | 16:44 |
*** hamalq has quit IRC | 16:44 | |
*** sshnaidm is now known as sshnaidm|afk | 16:48 | |
*** gfidente is now known as gfidente|afk | 16:48 | |
*** gyee has joined #openstack-infra | 16:49 | |
*** priteau has quit IRC | 16:51 | |
*** priteau has joined #openstack-infra | 16:52 | |
*** xek has quit IRC | 16:55 | |
*** dtantsur is now known as dtantsur|afk | 17:00 | |
*** ralonsoh has quit IRC | 17:00 | |
*** __ministry1 has quit IRC | 17:01 | |
*** ralonsoh has joined #openstack-infra | 17:01 | |
*** ralonsoh has quit IRC | 17:06 | |
*** ralonsoh has joined #openstack-infra | 17:07 | |
rpittau | clarkb, smcginnis, FYI new virtualenv with the fix has been released https://pypi.org/project/virtualenv/20.0.30/ | 17:10 |
clarkb | rpittau: great, we can trigger image builds | 17:11 |
*** nightmare_unreal has quit IRC | 17:11 | |
* clarkb does that now | 17:12 | |
clarkb | I've triggered image builds for the ubuntus which is where we run the tox jobs predominantly. Does anyone know if we need fedora or centos etc? | 17:13 |
clarkb | if not they can pick up the updates on the normal update schedule (which is every 24 hours) | 17:13 |
*** udesale_ has quit IRC | 17:16 | |
*** hashar is now known as hashardinner | 17:16 | |
*** Lucas_Gray has quit IRC | 17:22 | |
smcginnis | rpittau: Excellent timing. | 17:26 |
smcginnis | clarkb: I have only seen the failures on ubuntu. So probably good just starting there. | 17:26 |
*** openstackgerrit has joined #openstack-infra | 17:26 | |
openstackgerrit | Merged openstack/project-config master: Retire Zuul's Kata tenant https://review.opendev.org/744687 | 17:26 |
smcginnis | clarkb: Does that include xenial images that some of the stable jobs run on? | 17:26 |
clarkb | yes | 17:26 |
clarkb | xenial, bionic, and focal | 17:26 |
smcginnis | Cool, thanks. I will recheck a few to see how they go. | 17:27 |
clarkb | the builds do take some time ~an hour plus upload time | 17:33 |
clarkb | I'll let you know when I think rechecks are likely to pass | 17:33 |
*** priteau has quit IRC | 17:33 | |
*** andrewbonney has quit IRC | 17:39 | |
*** michael-beaver has quit IRC | 17:45 | |
*** lpetrut has joined #openstack-infra | 17:49 | |
*** lpetrut has quit IRC | 17:50 | |
*** jamesmcarthur has quit IRC | 18:06 | |
clarkb | xenial has finished and has started uploading. bionic and focal are still building but I expect they'll complete soon. I need to pop out for a bit then prep for the meeting but we're making progress | 18:07 |
*** jamesmcarthur has joined #openstack-infra | 18:13 | |
*** yamamoto has joined #openstack-infra | 18:15 | |
*** jamesmcarthur has quit IRC | 18:17 | |
*** bcafarel has joined #openstack-infra | 18:19 | |
*** yamamoto has quit IRC | 18:20 | |
*** jamesmcarthur has joined #openstack-infra | 18:24 | |
*** jamesmcarthur has quit IRC | 18:30 | |
*** jamesmcarthur has joined #openstack-infra | 18:31 | |
*** mmethot_ has joined #openstack-infra | 18:42 | |
*** mmethot has quit IRC | 18:45 | |
*** jamesmcarthur has quit IRC | 18:48 | |
*** jamesmcarthur has joined #openstack-infra | 18:48 | |
*** jamesmcarthur has quit IRC | 18:51 | |
clarkb | all three images are uploading now but not done uploading | 18:51 |
clarkb | we only seem to upload with 4 threads now on each builder which is less than I thought we were doing in the past. I wonder if that is due to a nodepool update | 18:54 |
fungi | tobiash mentioned something about uploader threads consuming too much memory on his deployment and reducing their number, but not sure if a change was merged to nodepool itself to alter that | 18:55 |
*** markvoelker has joined #openstack-infra | 18:56 | |
tobiash | fungi: it's https://review.opendev.org/#/c/743790 | 18:56 |
tobiash | But not yet ready | 18:56 |
fungi | thanks, so no, it sounds like we haven't changed the default there | 18:57 |
clarkb | fungi: its because we don't set upload-workers on the command anymore | 18:57 |
clarkb | I think we were overriding that value with the old sysv init script but now we use the container command as is and don't add that in | 18:58 |
fungi | oh, got it, so this changed when containerizing | 18:58 |
clarkb | after the meeting I'll figure out how to fix that as it seems to be slowing us down on uploads | 18:58 |
clarkb | fungi: ya appears to be so | 18:58 |
clarkb | I'm guessing we have to override the command in the docker compose file | 18:59 |
fungi | sounds likely | 19:00 |
*** jamesmcarthur has joined #openstack-infra | 19:04 | |
tobiash | maybe the command can take an env var as arg which can be overridden | 19:04 |
tobiash | Or we make that a config option | 19:04 |
clarkb | a config option would be nice since we already have to supply the configs | 19:05 |
tobiash | It feels weird anyway to configure everything in nodepool.yaml but not the upload workers | 19:05 |
clarkb | ++ | 19:05 |
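(If the command override route were taken, a hypothetical docker-compose.yaml stanza might look like the following; the service name, config path and worker count are assumptions.)

```yaml
services:
  nodepool-builder:
    command: nodepool-builder -c /etc/nodepool/nodepool.yaml --upload-workers 8
```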
*** vishalmanchanda has quit IRC | 19:12 | |
*** auristor has quit IRC | 19:14 | |
*** ramishra has quit IRC | 19:15 | |
ralonsoh | clarkb, sorry (again) | 19:23 |
ralonsoh | there could be a problem with pip, version 20.2 | 19:23 |
ralonsoh | it's not resolving correctly the dependencies | 19:24 |
ralonsoh | https://bugs.launchpad.net/neutron/+bug/1890331 | 19:24 |
openstack | Launchpad bug 1890331 in neutron "[stable] oslo.service 2.3.2 requests eventlet 0.25.2, but upper version is 0.25.1" [Undecided,New] | 19:24 |
ralonsoh | --> second comment | 19:24 |
*** jamesmcarthur has quit IRC | 19:25 | |
clarkb | ralonsoh: 20.2.1 is the fix I think | 19:25 |
clarkb | we're rebuilding new images now to pick that up | 19:25 |
ralonsoh | clarkb, thanks a lot! | 19:25 |
ralonsoh | is there a LP link? | 19:25 |
ralonsoh | just for documentation | 19:25 |
clarkb | no it was a pip bug | 19:25 |
clarkb | which is in github | 19:25 |
*** jamesmcarthur has joined #openstack-infra | 19:25 | |
ralonsoh | https://github.com/pypa/pip/issues/8686 this one, I think | 19:26 |
ralonsoh | https://github.com/pypa/pip/issues/8695 | 19:26 |
ralonsoh | sorry, last one | 19:26 |
clarkb | https://github.com/pypa/pip/issues/8645 is the one smcginnis found | 19:26 |
*** mtreinish has quit IRC | 19:27 | |
fungi | yes, 8645 is the one which has been causing problems with constraints lists not being obeyed under tox anyway | 19:27 |
ralonsoh | thanks a lot for the info, we'll wait for this version then | 19:28 |
fungi | it'll probably be in use within the hour | 19:29 |
*** jamesmcarthur has quit IRC | 19:29 | |
*** jamesmcarthur has joined #openstack-infra | 19:30 | |
*** jamesmcarthur has quit IRC | 19:35 | |
*** jamesmcarthur has joined #openstack-infra | 19:35 | |
*** jamesmcarthur has quit IRC | 19:39 | |
*** jamesmcarthur has joined #openstack-infra | 19:43 | |
*** auristor has joined #openstack-infra | 19:46 | |
*** ralonsoh has quit IRC | 19:47 | |
*** jamesmcarthur has quit IRC | 19:48 | |
*** jamesmcarthur has joined #openstack-infra | 19:50 | |
ianw | donnyd: there's currently two active servers with name "mirror01.us-east.openedge.opendev.org" ... i'm guessing you started them as part of testing? | 19:52 |
ianw | neither have an ipv4 address, and we have one floating ip not attached | 19:53 |
ianw | fungi/clarkb: ^ i'm launching the node now | 19:56 |
clarkb | k | 19:56 |
ianw | (i.e. pressing up twice and enter :) | 19:56 |
clarkb | ianw: have a link to the sshfp change? I'd like to review that either way | 19:56 |
ianw | 64 bytes from review01.openstack.org (2001:4800:7819:103:be76:4eff:fe04:9229): icmp_seq=1 ttl=53 time=53.9 ms | 19:56 |
ianw | it's happier on ipv6 | 19:56 |
ianw | clarkb: that was https://review.opendev.org/#/c/743461/ which i put in hoping to test yesterday, but that didn't happen | 19:57 |
ianw | ... and, it didn't work it seems ... | 19:58 |
ianw | ok, firstly, OE must be in the disabled list because the launch didn't do anything | 19:58 |
ianw | $ ssh-keyscan -D 108.44.198.35 | 20:02 |
ianw | unknown option -- D | 20:02 |
ianw | i guess that option came in after bionic? | 20:02 |
fungi | you can run it on any system which is able to reach that ip address | 20:02 |
ianw | yeah, but that's not so good for automatically putting in the launch.py output :) | 20:03 |
fungi | though the typical way to generate sshfp rrs is to run a command locally on the system where the host key resides and derive them from what's on the fs | 20:03 |
fungi | using ssh-keygen -r | 20:05 |
fungi | i suppose we could have ansible run that remotely? | 20:05 |
ianw | yes, or just dump that from a ssh call at this point in the script | 20:07 |
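(A sketch of dumping the SSHFP records over ssh as ianw suggests; the hostname is the mirror from earlier and the loop just covers each host key.)

```shell
ssh root@mirror01.us-east.openedge.opendev.org '
  for k in /etc/ssh/ssh_host_*_key.pub; do
    ssh-keygen -r mirror01.us-east.openedge.opendev.org -f "$k"
  done'
```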
donnyd | ianw: no, I did not start any instances | 20:11 |
clarkb | leftovers from earlier launch runs? maybe the cleanups failed | 20:11 |
donnyd | there is only one instance total on OE right now | 20:12 |
donnyd | it looks to have been running for 16 minutes | 20:12 |
donnyd | I would like to point out it's the wrong flavor unless the disk size for the mirror nodes is being reduced | 20:12 |
donnyd | or block storage is being used | 20:13 |
clarkb | donnyd: ianw indicated that cinder was going to be attempted but maybe that is the wrong approach for OE? | 20:13 |
donnyd | oh no, cinder is fine | 20:13 |
donnyd | I have an all nvme block store | 20:13 |
donnyd | so it should be pretty awesome | 20:13 |
donnyd | eat all you need there | 20:13 |
donnyd | there is 20TB of nvme available in cinder | 20:14 |
donnyd | https://www.irccloud.com/pastebin/VsISka7B/ | 20:15 |
ianw | donnyd/clarkb: oh, ok, so the last mirror didn't use a volume but a bigger instance? | 20:15 |
clarkb | ianw: ya that sounds right | 20:15 |
clarkb | I think maybe bceause we didn't have cinder to start? | 20:15 |
donnyd | ianw: i can get the sshfp keys using your command | 20:15 |
clarkb | but I could be wrong about that | 20:15 |
donnyd | we did | 20:15 |
donnyd | but the 200G instance size was used | 20:16 |
ianw | donnyd: yeah, it seems everywhere can but the one place i want to do it (bridge.o.o :) | 20:16 |
donnyd | you should be able to use cinder without issue | 20:16 |
ianw | donnyd: i still get "Request to https://api.us-east.open-edge.io:8776/v3/2ed8e9a22ebf4eaeb4149f316b9d6c3d/volumes/detail timed out" | 20:16 |
donnyd | hrm | 20:16 |
*** jamesmcarthur has quit IRC | 20:16 | |
donnyd | that is strange | 20:16 |
ianw | 8cpu-8GBram-250GBdisk ... i don't mind. it's easier to not have volumes attached | 20:17 |
donnyd | yea I think that is the flavor | 20:19 |
donnyd | but that is weird cinder is doing that | 20:19 |
donnyd | what happens when you curl cinder endpoint? | 20:21 |
ianw | umm .... one sec | 20:22 |
ianw | $ curl https://api.us-east.open-edge.io:8776/v3 | 20:22 |
ianw | {"error": {"code": 401, "title": "Unauthorized", "message": "The request you have made requires authentication."}} | 20:22 |
ianw | so that responds | 20:22 |
ianw | yeah, running with debug it happily starts chatting but the /volumes/detail request just hangs | 20:24 |
donnyd | [04/Aug/2020:20:19:25 +0000] "GET /v3/2ed8e9a22ebf4eaeb4149f316b9d6c3d/volumes/detail?limit=101&sort=created_at%3Adesc HTTP/1.1" 200 1169 "-" "python-cinderclient" 62152(us) | 20:24 |
donnyd | i can see the requests coming in - and it's weird that it just hangs (assuming this is from the bridge) | 20:28 |
donnyd | which command are you running if I can ask? | 20:28 |
donnyd | is it a pre-built openstacksdk app? | 20:29 |
ianw | just "volume list" | 20:29 |
clarkb | could there be ipv6 routing issues still? bridge has ipv6 interface so will prefer it to talk to that api server? | 20:29 |
ianw | it's from ... umm ... the container that has openstacksdk in it on bridge | 20:30 |
donnyd | ah.. that is possible | 20:30 |
clarkb | and maybe asymmetric routes or similar are causing packets to disappear | 20:30 |
ianw | clarkb: yeah, but the initial chats are all ok | 20:30 |
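(One way to compare the two address families from bridge while chasing the hang; the timeout value is arbitrary.)

```shell
curl -4 -m 10 https://api.us-east.open-edge.io:8776/v3   # force IPv4
curl -6 -m 10 https://api.us-east.open-edge.io:8776/v3   # force IPv6
```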
*** hashardinner has quit IRC | 20:34 | |
donnyd | that is very strange indeed | 20:35 |
donnyd | I am testing from a laptop that has v6 on not my network and it seems to work ok | 20:36 |
donnyd | I will investigate more | 20:36 |
donnyd | but if we want to just get on with it, the 250G disk flavor is what has been used in the past | 20:36 |
donnyd | I also just forced a machine outside my wan onto v6 only and it also seems to work ok | 20:37 |
ianw | ++; because it's in the disabled list i want to remove it from the inventory first, so we don't have a broken state | 20:37 |
donnyd | ah yes, that makes sense | 20:38 |
donnyd | do you think it will try to reach out via already known ip addresses? | 20:38 |
ianw | it will try to talk to the old address if it's in the inventory and thus CD playbook runs will fail | 20:40 |
clarkb | smcginnis: bionic and xenial are all uploaded now. focal is still in progress | 20:43 |
clarkb | smcginnis: I think you should be able to recheck things since we haven't moved much to focal yet iirc | 20:43 |
smcginnis | clarkb: Thanks! Things are already looking better - https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-lower-constraints | 20:49 |
*** jamesmcarthur has joined #openstack-infra | 21:04 | |
*** jamesmcarthur has quit IRC | 21:23 | |
*** jamesmcarthur has joined #openstack-infra | 21:23 | |
*** markvoelker has quit IRC | 21:30 | |
*** armax has quit IRC | 21:46 | |
openstackgerrit | Sean McGinnis proposed openstack/pbr master: Fix compatiblity with virtualenv 20.x+ https://review.opendev.org/744793 | 21:46 |
*** armax has joined #openstack-infra | 21:46 | |
*** yamamoto has joined #openstack-infra | 21:55 | |
*** smarcet has quit IRC | 21:58 | |
*** slaweq has quit IRC | 21:58 | |
openstackgerrit | Sean McGinnis proposed openstack/pbr master: Fix compatiblity with virtualenv 20.x+ https://review.opendev.org/744793 | 21:58 |
*** Lucas_Gray has joined #openstack-infra | 22:01 | |
*** jamesmcarthur has quit IRC | 22:05 | |
*** yamamoto has quit IRC | 22:07 | |
*** jamesmcarthur has joined #openstack-infra | 22:14 | |
*** ociuhandu has joined #openstack-infra | 22:15 | |
*** jamesmcarthur has quit IRC | 22:17 | |
*** jamesmcarthur has joined #openstack-infra | 22:17 | |
*** ociuhandu has quit IRC | 22:20 | |
*** eolivare has quit IRC | 22:50 | |
*** tosky has quit IRC | 22:50 | |
*** dpawlik2 has quit IRC | 22:55 | |
*** bnemec-pto has quit IRC | 22:55 | |
*** freerunner has quit IRC | 22:55 | |
*** lastmikoi has quit IRC | 22:55 | |
*** guillaumec has quit IRC | 22:55 | |
*** bradm has quit IRC | 22:55 | |
*** tobberydberg_ has quit IRC | 22:55 | |
*** dansmith has quit IRC | 22:55 | |
*** bstinson has quit IRC | 22:55 | |
*** frickler has quit IRC | 22:55 | |
*** tkajinam has joined #openstack-infra | 22:55 | |
clarkb | we're still waiting on focal uploads ... | 22:56 |
*** dpawlik2 has joined #openstack-infra | 22:57 | |
*** bnemec-pto has joined #openstack-infra | 22:57 | |
*** freerunner has joined #openstack-infra | 22:57 | |
*** lastmikoi has joined #openstack-infra | 22:57 | |
*** guillaumec has joined #openstack-infra | 22:57 | |
*** bradm has joined #openstack-infra | 22:57 | |
*** tobberydberg_ has joined #openstack-infra | 22:57 | |
*** dansmith has joined #openstack-infra | 22:57 | |
*** bstinson has joined #openstack-infra | 22:57 | |
*** frickler has joined #openstack-infra | 22:57 | |
clarkb | but ianw if you get a moment, a review on https://review.opendev.org/#/c/744780/ would be great. Then I can approve it once uploads are done (though that might be tomorrow morning) | 22:57 |
*** bdodd has quit IRC | 22:59 | |
*** bdodd has joined #openstack-infra | 23:01 | |
ianw | lgtm, thanks | 23:02 |
*** Lucas_Gray has quit IRC | 23:10 | |
*** Lucas_Gray has joined #openstack-infra | 23:14 | |
*** jamesmcarthur has quit IRC | 23:16 | |
*** jamesmcarthur has joined #openstack-infra | 23:18 | |
*** jamesmcarthur has quit IRC | 23:21 | |
*** yamamoto has joined #openstack-infra | 23:22 | |
*** jamesmcarthur has joined #openstack-infra | 23:25 | |
*** jamesmcarthur has quit IRC | 23:27 | |
*** jamesmcarthur has joined #openstack-infra | 23:27 | |
*** dchen has joined #openstack-infra | 23:32 | |
*** jamesmcarthur has quit IRC | 23:34 | |
*** jamesmcarthur has joined #openstack-infra | 23:34 | |
*** jamesmcarthur has quit IRC | 23:40 | |
*** jamesmcarthur has joined #openstack-infra | 23:41 | |
*** jamesmcarthur has quit IRC | 23:45 | |
*** jamesmcarthur_ has joined #openstack-infra | 23:46 | |
*** hamalq_ has quit IRC | 23:52 | |
*** jamesmcarthur_ has quit IRC | 23:54 | |
*** jamesmcarthur has joined #openstack-infra | 23:54 |