*** olaph1 has quit IRC | 00:00 | |
*** yamamoto has joined #openstack-infra | 00:08 | |
andreaf | pabelanger : thanks - trying that | 00:10 |
ianw | hmm, subscribing to the vcs repo for nodepool/zuul install isn't quite right https://git.openstack.org/cgit/openstack-infra/puppet-nodepool/tree/manifests/init.pp#n119 ... if install fails we don't detect that. we really should use the pip provider i guess | 00:11 |
clarkb | ianw: you mean if the git update fails we don't detect that? | 00:12 |
ianw | clarkb: no, if that install fails, when we re-run we don't try to reinstall because the git tree hasn't changed | 00:13 |
mnaser | anyone against adding the link to infrastructure status in the topic? | 00:14 |
mnaser | i feel like its a nice place for it there | 00:14 |
*** yamamoto has quit IRC | 00:14 | |
clarkb | ianw: gotcha | 00:15 |
fungi | ianw: problem with the pip package provider is that it lacks support for setting a non-default upgrade strategy, so will unconditionally fight with distro-installed packages of any of your dep tree | 00:15 |
ianw | fungi: yeah, i think it's sufficiently corner case to not worry ... but how do i reset vcsrepo's idea of what it's done? | 00:16 |
fungi | we really need it to have some means of specifying --upgrade-strategy=only-if-needed | 00:16 |
fungi | ianw: easiest brute-force method is to git reset --hard HEAD^1 and then let puppet do its thing all over again | 00:17 |
ianw | large hammer, i like it :) | 00:18 |
fungi | ianw: good news is https://github.com/pypa/pip/pull/4500 seems to be targeted for pip 10 | 00:18 |
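A minimal sketch of the brute-force reset fungi describes, plus the manual equivalent of the upgrade strategy the pip provider can't yet express; the checkout path and puppet invocation are assumptions for illustration, not the actual manifest:

```shell
# Assumed location of the nodepool checkout managed by the vcsrepo resource.
cd /opt/nodepool

# Roll the checkout back one commit so vcsrepo sees a change on the next run
# and re-triggers the (previously failed) pip install exec.
git reset --hard HEAD^1
puppet agent --test

# The gentler upgrade behaviour fungi wants from the provider, done by hand;
# --upgrade-strategy only-if-needed exists in pip >= 9.
pip install --upgrade --upgrade-strategy only-if-needed /opt/nodepool
```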
*** lbragstad has quit IRC | 00:18 | |
mnaser | infra-root: does anyone have a status update on the mirror rebuilding indices? just to know if i can recheck or not yet :) | 00:19 |
pabelanger | still working on it | 00:19 |
mnaser | pabelanger: not to rush things but any approximate ETA to know how long so i don't bug anyone :p | 00:19 |
pabelanger | mnaser: unknown, likely some time tomorrow | 00:20 |
pabelanger | mnaser: I'll ping once it is fixed | 00:20 |
fungi | pabelanger: is it still trying to resync the afsdb? | 00:20 |
mnaser | pabelanger: ok cool, thank you so much and good luck :) | 00:20 |
pabelanger | fungi: no, I've stopped it because there was an issue with AFS | 00:21 |
pabelanger | about to start it up again | 00:21 |
pabelanger | also | 00:21 |
pabelanger | | opensuse-tumbleweed-0000000001 | opensuse-tumbleweed | nb02 | qcow2,vhd | ready | 00:00:05:27 | | 00:21 |
*** tosky has quit IRC | 00:21 | |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Reduce duplicate data in nodepool configuration https://review.openstack.org/546819 | 00:24 |
mnaser | i always thought tumbleweed was a super cool name for a rolling release | 00:25 |
mnaser | lol | 00:25 |
ianw | ok, nb03 has puppetted itself, nodepool is there, config looks "right" (is probably wrong, but is deployed correctly) | 00:26 |
ianw | nodepool-builder doesn't start, debugging now | 00:26 |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Bring online opensuse-tumbleweed images https://review.openstack.org/546821 | 00:30 |
pabelanger | ianw: clarkb: mnaser: ^next patches to boot opensuse-tumbleweed image. | 00:31 |
clarkb | pabelanger: https://review.openstack.org/#/c/546821/1/nodepool/nl02.openstack.org.yaml didn't we remove the images from citycloud? | 00:33 |
clarkb | I would not add new images there at least | 00:33 |
*** dmellado has joined #openstack-infra | 00:33 | |
pabelanger | clarkb: from the nodepool.yaml file, that stops them from getting uploaded | 00:33 |
pabelanger | which is builders | 00:33 |
clarkb | I see so we only did it to the builders, I guess that makes sense | 00:34 |
pabelanger | clarkb: can you also get the parent too | 00:35 |
pabelanger | a clean up of data | 00:35 |
*** HeOS has quit IRC | 00:35 | |
*** liusheng has quit IRC | 00:36 | |
*** liusheng has joined #openstack-infra | 00:37 | |
*** andreww has quit IRC | 00:41 | |
*** xarses has joined #openstack-infra | 00:41 | |
*** Miouge has quit IRC | 00:43 | |
clarkb | fungi pabelanger ianw we are about ~15 minutes away from what has become a daily gerrit slowdown. I've got a call I've got to be on but if you can maybe keep an eye out for unusual gerrit behavior and what might be causing this it would be helpful | 00:44 |
fungi | clarkb: yup, not sure what to look for but i'll see if we can catch it | 00:45 |
*** d0ugal has quit IRC | 00:47 | |
ianw | run an iostat maybe? | 00:47 |
pabelanger | fungi: up for 2 reviews? https://review.openstack.org/546819/ and https://review.openstack.org/546821/ nodepool changes | 00:47 |
clarkb | fungi: ianw pabelanger my hunch is it's something like stackalytics running a bunch of expensive queries | 00:49 |
clarkb | the behavior I have been able to isolate using melody is that we have a large spike in threads but not a spike in http requests | 00:49 |
fungi | ianw: good call. i have `iostat -t 10` going in a root screen session on review.o.o now | 00:50 |
fungi | clarkb: i've got cacti and javamelody up too, though i assume those were of no help in previous instances | 00:51 |
clarkb | not really. The only thing melody helped with was showing a large spike in threads and that it didn't appear to be garbage collection related | 00:51 |
pabelanger | I've restarted mirror-update.o.o to see if that helps with AFS kernel modules | 00:54 |
pabelanger | we seem to be losing network access a fair bit | 00:54 |
*** cuongnv has joined #openstack-infra | 00:54 | |
clarkb | pabelanger: do you see similar packet loss outside of afs? | 00:54 |
clarkb | mtr is a nice tool to monitor that | 00:54 |
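For reference, a typical mtr invocation for spotting loss between the mirror host and the AFS servers; the target hostname here is just an example:

```shell
# 60 probe cycles, summarized as a per-hop loss/latency report.
mtr --report --report-cycles 60 afs01.dfw.openstack.org
```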
fungi | 83 tasks according to show-queue but a lot of those look related to replication for the ancient git tags which got pushed by release automation a little while ago | 00:54 |
fungi | f88e687b 22:23:28.880 [18949c8c] push cgit@git02.openstack.org | 00:55 |
fungi | :/var/lib/git/openstack/nova.git | 00:55 |
clarkb | completely unrelated but I think that grenade may be broken | 00:56 |
clarkb | http://logs.openstack.org/00/546200/2/gate/neutron-grenade/dbcbf60/logs/grenade.sh.txt.gz#_2018-02-21_23_43_04_988 | 00:56 |
fungi | so yeah, i think we have a replication backlog for nova for a while | 00:56 |
clarkb | my guess is because we use stestr now | 00:56 |
clarkb | ianw: ^ | 00:56 |
pabelanger | clarkb: I'll check | 00:56 |
mnaser | clarkb: https://review.openstack.org/#/q/Ic1fa3a98b6bcd151c489b078028687892655a19b | 00:56 |
mnaser | pabelanger: ^ | 00:56 |
clarkb | mnaser: thanks | 00:56 |
mnaser | i backported the master fix | 00:57 |
clarkb | I don't appear to have approval rights on that maybe tonyb can help us | 00:57 |
*** d0ugal has joined #openstack-infra | 00:57 | |
*** daidv has joined #openstack-infra | 00:58 | |
ianw | me either .. #openstack-qa? | 01:00 |
ianw | oh, the call is out | 01:00 |
pabelanger | okay, reboot seems to help, reprepro is now reporting db corruption. I'm going to copy good database from RO volume, into RW, then run reprepro again | 01:01 |
tonyb | que? | 01:01 |
pabelanger | should then add missing debs into the database | 01:01 |
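A rough sketch of that recovery, assuming the conventional AFS read-write (dotted) and read-only paths and an illustrative reprepro confdir; the authoritative procedure is the system-config reprepro recovery doc ianw links shortly after:

```shell
# Paths and confdir are illustrative; see the infra reprepro recovery docs.
RW=/afs/.openstack.org/mirror/ubuntu    # read-write volume (note the dot)
RO=/afs/openstack.org/mirror/ubuntu     # last released read-only copy

rm -rf "$RW/db"
cp -a "$RO/db" "$RW/db"                           # restore known-good db files
reprepro --confdir /etc/reprepro/ubuntu update    # re-add any missing debs
```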
clarkb | tonyb: https://review.openstack.org/#/q/Ic1fa3a98b6bcd151c489b078028687892655a19b stable reviews for that | 01:01 |
clarkb | tonyb: to make grenade happy with stsetr | 01:01 |
clarkb | (I can't spell) | 01:01 |
fungi | who needs spilling? | 01:02 |
tonyb | Gah no dice sorry | 01:02 |
*** stevebaker has joined #openstack-infra | 01:03 | |
fungi | gerrit queue has doubled to 141 | 01:03 |
mnaser | oh gerrit is fully dead | 01:04 |
mnaser | "Reason: Error reading from remote server" | 01:04 |
tonyb | Looks like grenade is limited https://review.openstack.org/#/admin/groups/188,members + https://review.openstack.org/#/admin/groups/425,members | 01:04 |
tonyb | sorry | 01:04 |
ianw | pabelanger: ok, i hope it's not necessary but https://docs.openstack.org/infra/system-config/reprepro.html#advanced-recovery-techniques we expanded after the last issues | 01:04 |
*** gyee has quit IRC | 01:05 | |
ianw | that's worryingly small for a project that runs so much | 01:06 |
pabelanger | ianw: yah, I'll fallback to that | 01:06 |
openstackgerrit | Ian Wienand proposed openstack-infra/system-config master: Add linaro cloud to nodepool clouds.yaml https://review.openstack.org/546834 | 01:09 |
*** yamamoto has joined #openstack-infra | 01:10 | |
fungi | not seeing any significant spikes in cacti graphs for review.o.o | 01:13 |
clarkb | heisenbug | 01:14 |
fungi | not heise-enough because mnaser reported it going out to lunch at 01:04 | 01:14 |
*** claudiub has quit IRC | 01:15 | |
*** olaph1 has joined #openstack-infra | 01:15 | |
*** olaph has quit IRC | 01:16 | |
*** yamamoto has quit IRC | 01:16 | |
fungi | so cacti does show a relatively anomalous write spike to xvdb starting up right at 01:00 | 01:18 |
fungi | same for 24 hours ago | 01:18 |
clarkb | is it bup maybe? | 01:19 |
fungi | xvdb is the pv for the main vg | 01:19 |
clarkb | which hosts the git repos iirc | 01:19 |
clarkb | someone doing clones maybe? | 01:19 |
fungi | so something writing to /home/gerrit2 | 01:19 |
fungi | it's write activity, not read | 01:20 |
fungi | according to cacti | 01:20 |
clarkb | hrm | 01:20 |
clarkb | pushes then? or logging? | 01:20 |
fungi | instead of ianw's iostat suggestion (which just told us what block devices were seeing activity) i should have had iotop running | 01:25 |
fungi | which i do now, but it's probably too late to be useful | 01:26 |
fungi | but perhaps tomorrow | 01:26 |
fungi | currently the bulk of the write activity is the gerrit jvm, it looks like | 01:27 |
*** xarses has quit IRC | 01:27 | |
clarkb | oh! | 01:27 |
clarkb | could it be the garbage collecting and packing? | 01:28 |
fungi | gerrit does that now, right? we don't cron it any longer | 01:28 |
clarkb | ya its the jvm doing it now iirc | 01:28 |
fungi | though there is also this: | 01:28 |
fungi | no, wait, i was reading cron fields backwards | 01:29 |
fungi | so yeah, very well may be what you're saying | 01:29 |
fungi | we don't seem to have any cronjobs around that timeframe at any rate | 01:29 |
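A quick way to double-check both points being discussed, i.e. that no repack cronjob exists and whether the JVM-internal git gc is configured; the gerrit site path follows the usual infra layout, and the [gc] keys should be checked against the docs for the running Gerrit version:

```shell
crontab -l -u gerrit2     # any scheduled repack/gc jobs for the gerrit user?
# gerrit.config uses git-config syntax, so grep its gc section directly
git config -f /home/gerrit2/review_site/etc/gerrit.config --get-regexp '^gc\.'
```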
pabelanger | generating universe/Contents-amd64... | 01:30 |
pabelanger | better | 01:30 |
*** kiennt26 has joined #openstack-infra | 01:31 | |
pabelanger | hopefully not much longer to finish | 01:31 |
pabelanger | then I'll manually vos release | 01:31 |
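The manual release step pabelanger mentions would look roughly like this; the keytab and principal are placeholders for whatever admin credentials mirror-update uses:

```shell
# Push the updated read-write volume out to its read-only replicas.
k5start -t -f /etc/afsadmin.keytab service/afsadmin -- vos release -v mirror.ubuntu
```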
fungi | clarkb: threadcount didn't really spike by more than maybe 20% at 01:00 | 01:33 |
fungi | (if even that much) | 01:33 |
*** dhill__ has quit IRC | 01:34 | |
*** slaweq has quit IRC | 01:34 | |
clarkb | fungi: ok it was more dramatic in the past | 01:34 |
clarkb | several hundred percent according to melody | 01:34 |
*** slaweq has joined #openstack-infra | 01:35 | |
fungi | i must be looking at a different graph | 01:35 |
*** esberglu has quit IRC | 01:35 | |
fungi | the "threads count" graph says it jumped from around 425 threads to 500 at 01:00 | 01:35 |
clarkb | fungi: https://review.openstack.org/monitoring?part=graph&graph=activeThreads | 01:36 |
clarkb | sorry active threads | 01:36 |
fungi | oh, active threads | 01:36 |
clarkb | and it spiked here too | 01:36 |
fungi | yes, though the bigger spike there was apparently closer to 01:20 | 01:36 |
fungi | spiking up to around 55-ish | 01:37 |
*** slaweq has quit IRC | 01:39 | |
*** Kevin_Zheng has joined #openstack-infra | 01:40 | |
*** salv-orlando has joined #openstack-infra | 01:41 | |
openstackgerrit | Ian Wienand proposed openstack-infra/system-config master: Add linaro cloud to nodepool clouds.yaml https://review.openstack.org/546834 | 01:42 |
fungi | io ops to /home/gerrit2 are still pretty heavy and iotop says it's mostly the jvm | 01:45 |
fungi | so daily git repacking is seeming a likely explanation | 01:45 |
*** salv-orlando has quit IRC | 01:46 | |
*** jlabarre has quit IRC | 01:47 | |
*** namnh has joined #openstack-infra | 01:50 | |
*** ykarel|afk has joined #openstack-infra | 01:52 | |
*** lbragstad has joined #openstack-infra | 01:56 | |
*** owalsh has quit IRC | 01:56 | |
*** owalsh has joined #openstack-infra | 01:56 | |
ianw | | ubuntu-xenial-arm64-0000000003 | ubuntu-xenial-arm64 | nb03 | qcow2 | building | 00:00:04:22 | | 01:58 |
*** kien-ha has joined #openstack-infra | 01:58 | |
persia | That looks promising :) | 01:58 |
pabelanger | neat | 02:01 |
*** owalsh has quit IRC | 02:01 | |
ianw | it's a start. it's going to take a while to cache everything in | 02:02 |
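Progress on the arm64 image can be followed from the builder with the standard nodepool commands:

```shell
nodepool dib-image-list | grep arm64   # diskimage-builder builds on nb03
nodepool image-list | grep arm64       # provider uploads, once a build succeeds
```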
*** annp has joined #openstack-infra | 02:02 | |
*** eharney has quit IRC | 02:05 | |
*** owalsh has joined #openstack-infra | 02:06 | |
*** olaph1 is now known as olaph | 02:12 | |
*** yamamoto has joined #openstack-infra | 02:12 | |
*** hongbin has joined #openstack-infra | 02:13 | |
pabelanger | okay, DB recreated, just checking pools now | 02:15 |
pabelanger | (reprepro) | 02:16 |
*** dhill__ has joined #openstack-infra | 02:16 | |
*** dklyle has joined #openstack-infra | 02:17 | |
*** yamamoto has quit IRC | 02:18 | |
openstackgerrit | Merged openstack-infra/project-config master: Reduce duplicate data in nodepool configuration https://review.openstack.org/546819 | 02:19 |
*** david-lyle has quit IRC | 02:21 | |
*** inc0 has quit IRC | 02:21 | |
*** inc0 has joined #openstack-infra | 02:21 | |
pabelanger | reprepro successful | 02:22 |
pabelanger | I'm going to vos release now | 02:22 |
fungi | woo! thanks for sticking with it | 02:22 |
pabelanger | the reboot did help, no AFS issues that time | 02:22 |
openstackgerrit | Merged openstack-infra/project-config master: Bring online opensuse-tumbleweed images https://review.openstack.org/546821 | 02:23 |
*** olaph1 has joined #openstack-infra | 02:24 | |
*** olaph has quit IRC | 02:25 | |
*** zhongjun has joined #openstack-infra | 02:27 | |
*** dave-mccowan has joined #openstack-infra | 02:28 | |
pabelanger | Released volume mirror.ubuntu successfully | 02:29 |
pabelanger | yay | 02:29 |
pabelanger | #status log mirror.ubuntu reprepro has been repaired and back online | 02:30 |
openstackstatus | pabelanger: finished logging | 02:30 |
pabelanger | mnaser: ^ | 02:30 |
*** ykarel|afk has quit IRC | 02:31 | |
*** mriedem has quit IRC | 02:32 | |
*** zhenguo has joined #openstack-infra | 02:32 | |
*** yamahata has quit IRC | 02:34 | |
*** kien-ha has quit IRC | 02:34 | |
melwitt | does anyone know if reviewday is still supported at http://status.openstack.org/reviews ? I notice it says "Page refreshed at 2018-02-02" | 02:37 |
*** dhill__ has quit IRC | 02:40 | |
clarkb | melwitt: it should be, its likely just broken for one reason or another | 02:41 |
melwitt | cool, thanks | 02:42 |
pabelanger | | 0002664319 | inap-mtl01 | opensuse-tumbleweed | d7390c45-35c9-4c30-bb1e-4dc634f2a3d0 | 198.72.124.114 | | ready | 00:00:00:11 | unlocked | | 02:47 |
pabelanger | clarkb: dirk: ^ | 02:47 |
mnaser | pabelanger: awesome thank you | 02:51 |
mnaser | #thanks pabelanger and infra for getting ubuntu mirrors repaired and backup quickly! | 02:52 |
openstackstatus | mnaser: Added your thanks to Thanks page (https://wiki.openstack.org/wiki/Thanks) | 02:52 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add opensuse-tumbleweed testing to ozj https://review.openstack.org/546844 | 02:55 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add opensuse-tumbleweed testing to ozj https://review.openstack.org/546844 | 02:57 |
*** dave-mccowan has quit IRC | 02:58 | |
*** wolverineav has quit IRC | 03:00 | |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: DNM https://review.openstack.org/546846 | 03:00 |
*** andymccr has quit IRC | 03:00 | |
*** andymccr has joined #openstack-infra | 03:03 | |
*** rlandy|rover|bbl is now known as rlandy|rover | 03:03 | |
*** lbragstad has quit IRC | 03:04 | |
*** andreas_s has joined #openstack-infra | 03:06 | |
*** andreas_s has quit IRC | 03:10 | |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add fedora-27 testing to ozj https://review.openstack.org/546847 | 03:12 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: DNM https://review.openstack.org/546848 | 03:13 |
*** yamamoto has joined #openstack-infra | 03:14 | |
openstackgerrit | Matt Riedemann proposed openstack-infra/openstack-zuul-jobs master: Don't run neutron-grenade job on stable/ocata changes https://review.openstack.org/546850 | 03:18 |
*** ramishra has joined #openstack-infra | 03:19 | |
*** yamamoto has quit IRC | 03:20 | |
*** mriedem has joined #openstack-infra | 03:23 | |
mriedem | zuulv3 question: the neutron-grenade-multinode job is defined in neutron, but not in the stable/ocata branch for neutron. nova has the neutron-grenade-multinode job in its check queue via project-config, | 03:25 |
mriedem | we need to not run the grenade job on ocata changes since newton is eol, | 03:25 |
mriedem | but i'm not sure if i need to add a branch restriction to project-config, or neutron's job def (on master?), or other | 03:25 |
mriedem | neutron-grenade is defined in openstack-zuul-jobs and is part of the integrated-gate template, and is defined to not run on stable/newton (and i have a patch up to make it stop running it against ocata changes too) | 03:26 |
mriedem | so a bit confused about where the branch restriction should live | 03:26 |
mriedem | actually i guess it's also defined in neutron stable/ocata https://github.com/openstack/neutron/blob/stable/ocata/.zuul.yaml#L137 so that probably needs to be deleted, | 03:28 |
mriedem | and nova's re-def of it in project-config also needs to go | 03:28 |
mriedem | tonyb: ^ you might care since ocata is blocked atm | 03:28 |
tonyb | mriedem: Thanks. I'd like to know how to avoid making this same mistake (if?) when ocata goes EOL | 03:30 |
mriedem | we've always had to just update branch restrictions for running grenade after a branch goes eol, | 03:31 |
tonyb | mriedem: but I have no idea. I kinda thought it should be modified in openstack-zuul-jobs on the ocata branch (or as you say that should be deleted) but I'm just confused | 03:31 |
mriedem | but now it's spread out more since the jobs can be defined in one place, and then re-defined (sort of) still in project-config per project | 03:31 |
mriedem | i'm gonna remove the job def from neutron in ocata | 03:31 |
tonyb | mriedem: Yeah, this time I'd like to do it right after the branches get deleted but it's a matter of knowing what "it" is | 03:32 |
mriedem | it is it | 03:32 |
mriedem | what is it | 03:32 |
tonyb | back and forth I sway with the wind ? | 03:33 |
*** wolverineav has joined #openstack-infra | 03:33 | |
tonyb | Gah wrong song :( | 03:33 |
* tonyb queues up album as a corrective measure | 03:34 | |
mriedem | https://review.openstack.org/546855 | 03:34 |
mriedem | ah crap | 03:34 |
mriedem | that's master | 03:34 |
*** yamamoto has joined #openstack-infra | 03:37 | |
*** wolverineav has quit IRC | 03:37 | |
mriedem | https://review.openstack.org/#/c/546857/ | 03:37 |
clarkb | the best place to put the exclusion is on the main job def if it will be global | 03:37 |
clarkb | and I think these grenade exclusions are effectively global right? | 03:38 |
*** dave-mccowan has joined #openstack-infra | 03:38 | |
clarkb | that said I think the best thing here is to eventually get to having the job list on each branch in each project maybe? | 03:38 |
clarkb | then you just update the jobs a project wants to run and that's an easy direct exclusion | 03:39 |
mriedem | but then you have to remove this job from every project that defines it once a branch goes eol, which sucks | 03:39 |
*** wolverineav has joined #openstack-infra | 03:40 | |
mriedem | i'm not really sure what happens if we remove the job def from neutron's stable/ocata branch, will nova still attempt to run it against ocata changes and if so, which job def would it be using? | 03:40 |
mriedem | https://github.com/openstack-infra/project-config/blob/master/zuul.d/projects.yaml#L10339 | 03:41 |
mriedem | that's where nova says it wants to run that job right now | 03:41 |
clarkb | that's a good question I think it would be an error without removing the job from other branches that use it. Grenade may be a special case because we drop it a cycle early | 03:41 |
*** ramishra has quit IRC | 03:41 | |
clarkb | so adding exclusion to main job def is probably best for grenade | 03:42 |
mriedem | does that go in the main job def in the neutron stable/ocata branch though? | 03:42 |
*** xarses has joined #openstack-infra | 03:43 | |
mriedem | b/c that seems bonkers - the job def in stable/ocata would have to say, don't run this in stable/ocata | 03:43 |
*** salv-orlando has joined #openstack-infra | 03:43 | |
*** ramishra has joined #openstack-infra | 03:43 | |
clarkb | ya that's a good point this may be a corner case we have to figure out | 03:43 |
*** andreww has joined #openstack-infra | 03:43 | |
mriedem | ok so for now, i'm going to push a project-config patch to say don't run that job on stable/ocata changes | 03:43 |
*** andreww has quit IRC | 03:44 | |
*** wolverineav has quit IRC | 03:44 | |
*** andreww has joined #openstack-infra | 03:44 | |
mriedem | there is still a newton branch for devstack, not sure why | 03:45 |
mriedem | or if that works | 03:45 |
mriedem | tonyb: ? | 03:45 |
*** xarses has quit IRC | 03:47 | |
*** cuongnv has quit IRC | 03:47 | |
*** salv-orlando has quit IRC | 03:49 | |
*** rlandy|rover has quit IRC | 03:50 | |
openstackgerrit | Matt Riedemann proposed openstack-infra/project-config master: Don't run neutron-grenade-multinode on newton or ocata changes https://review.openstack.org/546858 | 03:51 |
mriedem | ok ^ says don't run that job on projects that have stable branches (excluded tempest since it's branchless) | 03:51 |
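For context, the project-config change being discussed boils down to a branch matcher on the job entry; a sketch of the shape it takes, where the file location and exact surrounding structure are only illustrative:

```shell
# Excerpt of what the nova entry in zuul.d/projects.yaml ends up looking like;
# the negative lookahead keeps the job off newton/ocata changes.
cat <<'EOF'
    check:
      jobs:
        - neutron-grenade-multinode:
            branches: ^(?!stable/(newton|ocata)).*$
EOF
```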
*** wolverineav has joined #openstack-infra | 03:51 | |
tonyb | mriedem: because $some projects are still using it. | 03:54 |
tonyb | mriedem: I'll look into which ones but I suspect we can drop it and clean up the mess later | 03:55 |
mriedem | and ocata is marked for eol next week :) | 03:56 |
mriedem | yeehaw! | 03:56 |
mriedem | alright i'll check things out in the morning, gotta drop | 03:56 |
*** gema has quit IRC | 03:57 | |
*** mriedem has quit IRC | 03:57 | |
tonyb | mriedem: Yeah but we have a session at the PTG to work out if that's the reality | 03:57 |
*** wolverineav has quit IRC | 03:57 | |
tonyb | I suspect it'll get another 3-6 months but that depends who shows up | 03:58 |
*** rlandy has joined #openstack-infra | 03:59 | |
*** rlandy is now known as rlandy|rover | 03:59 | |
*** slaweq has joined #openstack-infra | 03:59 | |
*** d0ugal has quit IRC | 04:00 | |
*** rlandy|rover has quit IRC | 04:01 | |
*** slaweq has quit IRC | 04:04 | |
*** udesale has joined #openstack-infra | 04:04 | |
*** ykarel|afk has joined #openstack-infra | 04:04 | |
*** udesale_ has joined #openstack-infra | 04:07 | |
*** dave-mccowan has quit IRC | 04:09 | |
*** olaph has joined #openstack-infra | 04:10 | |
*** olaph1 has quit IRC | 04:13 | |
*** wolverineav has joined #openstack-infra | 04:17 | |
*** wolverineav has quit IRC | 04:22 | |
*** sree_ has joined #openstack-infra | 04:25 | |
*** sree_ is now known as Guest96947 | 04:26 | |
*** VW has joined #openstack-infra | 04:26 | |
*** d0ugal has joined #openstack-infra | 04:27 | |
*** links has joined #openstack-infra | 04:30 | |
*** udesale_ has quit IRC | 04:31 | |
*** udesale_ has joined #openstack-infra | 04:31 | |
*** udesale has quit IRC | 04:32 | |
*** olaph1 has joined #openstack-infra | 04:33 | |
*** psachin has joined #openstack-infra | 04:34 | |
*** olaph has quit IRC | 04:34 | |
*** olaph has joined #openstack-infra | 04:37 | |
*** olaph1 has quit IRC | 04:38 | |
*** olaph1 has joined #openstack-infra | 04:41 | |
*** zhenguo has quit IRC | 04:42 | |
*** olaph has quit IRC | 04:42 | |
*** salv-orlando has joined #openstack-infra | 04:45 | |
*** salv-orlando has quit IRC | 04:49 | |
*** pgadiya has joined #openstack-infra | 04:53 | |
*** ykarel|afk is now known as ykarel | 04:54 | |
*** VW has quit IRC | 04:55 | |
*** VW has joined #openstack-infra | 04:55 | |
*** dhajare_ has joined #openstack-infra | 04:57 | |
*** VW has quit IRC | 05:00 | |
*** hongbin has quit IRC | 05:02 | |
*** rosmaita has quit IRC | 05:06 | |
*** andreww has quit IRC | 05:18 | |
*** wolverineav has joined #openstack-infra | 05:21 | |
*** andreww has joined #openstack-infra | 05:22 | |
*** xarses has joined #openstack-infra | 05:24 | |
*** slaweq has joined #openstack-infra | 05:25 | |
*** ihrachys_ has joined #openstack-infra | 05:25 | |
*** ihrachys has quit IRC | 05:26 | |
ianw | pabelanger: 485748 ... i just ... dunno. which is why i guess this sat in review, because it's not *obviously* correct and so nobody wants to break odd things ... hopefully based on that there's a convincing argument for why this and not before=basic.target? | 05:26 |
*** andreww has quit IRC | 05:27 | |
*** wolverineav has quit IRC | 05:27 | |
*** janki has joined #openstack-infra | 05:27 | |
*** slaweq has quit IRC | 05:30 | |
*** oidgar has joined #openstack-infra | 05:32 | |
*** eernst has quit IRC | 05:32 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/nodepool master: Add /label-list to the webapp https://review.openstack.org/535563 | 05:35 |
openstackgerrit | Jens Harbott (frickler) proposed openstack-infra/devstack-gate master: Add neutron-tempest-plugin-api job https://review.openstack.org/529000 | 05:37 |
*** bhujay has joined #openstack-infra | 05:45 | |
*** salv-orlando has joined #openstack-infra | 05:45 | |
*** claudiub has joined #openstack-infra | 05:47 | |
*** aeng has quit IRC | 05:49 | |
*** salv-orlando has quit IRC | 05:50 | |
*** oidgar has quit IRC | 05:51 | |
*** olaph has joined #openstack-infra | 05:51 | |
*** olaph1 has quit IRC | 05:53 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: bindep: use shell instead of command with executable https://review.openstack.org/546869 | 05:56 |
*** bhujay has quit IRC | 06:00 | |
*** jogo has quit IRC | 06:03 | |
*** bhujay has joined #openstack-infra | 06:04 | |
openstackgerrit | OpenStack Proposal Bot proposed openstack-infra/project-config master: Normalize projects.yaml https://review.openstack.org/546870 | 06:05 |
*** cuongnv has joined #openstack-infra | 06:10 | |
*** pfalleno1 has quit IRC | 06:10 | |
*** tdasilva has quit IRC | 06:10 | |
*** kjackal has joined #openstack-infra | 06:12 | |
*** lpetrut has joined #openstack-infra | 06:18 | |
*** salv-orlando has joined #openstack-infra | 06:19 | |
*** tdasilva has joined #openstack-infra | 06:25 | |
*** slaweq has joined #openstack-infra | 06:27 | |
*** gema has joined #openstack-infra | 06:28 | |
*** gema has joined #openstack-infra | 06:28 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: bindep: use shell instead of command with executable https://review.openstack.org/546869 | 06:30 |
*** dbecker has quit IRC | 06:31 | |
dirk | pabelanger: nice! I'm creating some test jobs today | 06:31 |
*** slaweq has quit IRC | 06:31 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: bindep: use shell instead of command with executable https://review.openstack.org/546869 | 06:41 |
*** alexchadin has joined #openstack-infra | 06:41 | |
*** dbecker has joined #openstack-infra | 06:44 | |
*** salv-orlando has quit IRC | 06:51 | |
*** salv-orlando has joined #openstack-infra | 06:51 | |
*** jtomasek has joined #openstack-infra | 06:56 | |
*** salv-orlando has quit IRC | 06:56 | |
openstackgerrit | Tristan Cacqueray proposed openstack-infra/zuul-jobs master: ara: check for return code instead of succeed https://review.openstack.org/546892 | 06:57 |
*** bhujay has quit IRC | 06:57 | |
*** jogo has joined #openstack-infra | 07:00 | |
*** ianychoi has quit IRC | 07:01 | |
*** ianychoi has joined #openstack-infra | 07:01 | |
*** bhujay has joined #openstack-infra | 07:07 | |
*** agopi has quit IRC | 07:11 | |
*** lpetrut has quit IRC | 07:11 | |
*** bhujay has quit IRC | 07:13 | |
*** Swami has quit IRC | 07:13 | |
openstackgerrit | Andreas Jaeger proposed openstack-infra/zuul-jobs master: Add abstract attribute to base jobs https://review.openstack.org/545603 | 07:16 |
*** hashar has joined #openstack-infra | 07:16 | |
AJaeger | dirk: see also https://review.openstack.org/546846 , both tumbleweed tests fail | 07:17 |
*** rcernin has quit IRC | 07:20 | |
openstackgerrit | Merged openstack-infra/project-config master: Normalize projects.yaml https://review.openstack.org/546870 | 07:24 |
openstackgerrit | Andreas Jaeger proposed openstack-infra/zuul-jobs master: Replace sphinx_check_warning_is_error.py with ini lookup https://review.openstack.org/528812 | 07:25 |
*** snapiri1 has joined #openstack-infra | 07:28 | |
*** dhajare_ has quit IRC | 07:28 | |
openstackgerrit | Merged openstack-infra/storyboard master: Create a StoryBoard gui manual https://review.openstack.org/325474 | 07:32 |
openstackgerrit | Andreas Jaeger proposed openstack-infra/zuul-jobs master: Replace sphinx_check_warning_is_error.py with ini lookup https://review.openstack.org/528812 | 07:33 |
*** andreas_s has joined #openstack-infra | 07:33 | |
ameeda | Morning :), should I revert the os-xenapi version from 0.3.1 to 0.1.0 as this bug mentions: https://bugs.launchpad.net/nova/+bug/1718606 | 07:34 |
openstack | Launchpad bug 1718606 in OpenStack Compute (nova) "doc: Outdated XenServer document" [Low,Confirmed] | 07:34 |
*** ykarel is now known as ykarel|lunch | 07:34 | |
*** snapiri1 is now known as snapiri- | 07:35 | |
*** pcaruana has joined #openstack-infra | 07:37 | |
*** lpetrut has joined #openstack-infra | 07:37 | |
*** HeOS has joined #openstack-infra | 07:38 | |
*** slaweq has joined #openstack-infra | 07:45 | |
*** slaweq_ has joined #openstack-infra | 07:47 | |
AJaeger | mordred: I fixed your change https://review.openstack.org/528812 - could you double check it, please? | 07:48 |
*** armaan has quit IRC | 07:48 | |
*** slaweq has quit IRC | 07:48 | |
*** armaan has joined #openstack-infra | 07:48 | |
AJaeger | ameeda: there's an #openstack-nova channel for nova discussion and questions. | 07:49 |
*** slaweq has joined #openstack-infra | 07:49 | |
ameeda | AJaeger: Thanks for reply :) | 07:50 |
*** salv-orlando has joined #openstack-infra | 07:52 | |
*** slaweq_ has quit IRC | 07:52 | |
*** Miouge has joined #openstack-infra | 07:54 | |
*** kjackal has quit IRC | 07:56 | |
*** salv-orlando has quit IRC | 07:56 | |
*** slaweq_ has joined #openstack-infra | 08:00 | |
*** slaweq_ has quit IRC | 08:04 | |
*** lpetrut has quit IRC | 08:09 | |
*** shu-mutou-AWAY is now known as shu-mutou | 08:12 | |
*** threestrands_ has joined #openstack-infra | 08:15 | |
*** liusheng has quit IRC | 08:17 | |
*** liusheng has joined #openstack-infra | 08:18 | |
*** tesseract has joined #openstack-infra | 08:18 | |
*** threestrands has quit IRC | 08:18 | |
*** dhajare_ has joined #openstack-infra | 08:27 | |
*** ralonsoh has joined #openstack-infra | 08:27 | |
*** ykarel|lunch is now known as ykarel | 08:28 | |
*** slaweq_ has joined #openstack-infra | 08:28 | |
*** rossella_s has joined #openstack-infra | 08:31 | |
*** gfidente has joined #openstack-infra | 08:36 | |
*** kjackal has joined #openstack-infra | 08:38 | |
andreaf | tobiash corvus mordred ianw AJaeger some more improvements on stage-output https://review.openstack.org/#/c/544606/ | 08:38 |
*** electrofelix has joined #openstack-infra | 08:40 | |
*** florianf has joined #openstack-infra | 08:41 | |
*** pblaho has joined #openstack-infra | 08:44 | |
*** liusheng has quit IRC | 08:44 | |
*** liusheng has joined #openstack-infra | 08:44 | |
*** slaweq_ has quit IRC | 08:45 | |
*** jpena|off is now known as jpena | 08:50 | |
*** kjackal has quit IRC | 08:51 | |
*** kjackal has joined #openstack-infra | 08:52 | |
*** xarses has quit IRC | 08:52 | |
*** salv-orlando has joined #openstack-infra | 08:52 | |
*** salv-orlando has quit IRC | 08:57 | |
*** salv-orlando has joined #openstack-infra | 08:57 | |
*** vivsoni has quit IRC | 08:59 | |
*** vivsoni has joined #openstack-infra | 08:59 | |
*** jpich has joined #openstack-infra | 09:02 | |
*** alexchadin has quit IRC | 09:03 | |
*** oidgar has joined #openstack-infra | 09:04 | |
*** hjensas has joined #openstack-infra | 09:04 | |
slaweq | hi all | 09:12 |
slaweq | on zuul.openstack.org I see error "status.json: Not Found" and no jobs are displayed | 09:13 |
slaweq | is it some already known issue or should I search on my side? | 09:13 |
*** amoralej|off is now known as amoralej | 09:13 | |
*** annp has quit IRC | 09:14 | |
*** daidv has quit IRC | 09:14 | |
*** cuongnv has quit IRC | 09:14 | |
AJaeger | slaweq: we updated javascript, do a hard reload to invalidate the cached javascript | 09:14 |
*** daidv has joined #openstack-infra | 09:14 | |
*** cuongnv has joined #openstack-infra | 09:14 | |
*** annp has joined #openstack-infra | 09:14 | |
slaweq | AJaeger: thx, that helped :) | 09:15 |
*** dtantsur|afk is now known as dtantsur | 09:17 | |
*** hamzy_ has quit IRC | 09:20 | |
*** hamzy_ has joined #openstack-infra | 09:20 | |
*** pfallenop has joined #openstack-infra | 09:23 | |
*** mgoddard_ has joined #openstack-infra | 09:31 | |
*** kjackal has quit IRC | 09:32 | |
*** kjackal has joined #openstack-infra | 09:32 | |
*** kiennt26 has quit IRC | 09:36 | |
*** threestrands_ has quit IRC | 09:37 | |
*** e0ne has joined #openstack-infra | 09:38 | |
*** jaosorior has joined #openstack-infra | 09:39 | |
*** pfallenop has quit IRC | 09:39 | |
*** pfallenop has joined #openstack-infra | 09:44 | |
*** shu-mutou is now known as shu-mutou-AWAY | 09:54 | |
*** danpawlik has joined #openstack-infra | 09:55 | |
*** jpich_ has joined #openstack-infra | 10:03 | |
openstackgerrit | Hoang Trung Hieu proposed openstack-infra/system-config master: Fix dead link https://review.openstack.org/546930 | 10:04 |
openstackgerrit | Jean-Philippe Evrard proposed openstack-infra/project-config master: Add OSA os_panko repo base jobs https://review.openstack.org/546573 | 10:04 |
openstackgerrit | sebastian marcet proposed openstack-infra/openstackid-resources master: Added new endpoint to seed default event types per summit https://review.openstack.org/546931 | 10:04 |
openstackgerrit | Jean-Philippe Evrard proposed openstack-infra/project-config master: Add OSA nspawn host/container repo base jobs https://review.openstack.org/546294 | 10:04 |
AJaeger | evrardjp: note we still have not recovered from your broken imports, hope corvus can help later today. We might need a Zuul restart. | 10:05 |
evrardjp | Resubmitting -- I guess I could :) | 10:05 |
evrardjp | Oh darn I thought it was done during the night, silently! | 10:05 |
openstackgerrit | Merged openstack-infra/openstackid-resources master: Added new endpoint to seed default event types per summit https://review.openstack.org/546931 | 10:05 |
evrardjp | during my night at least :) | 10:05 |
evrardjp | I am sorry for that then | 10:06 |
AJaeger | evrardjp: I rechecked one change this morning and it was still failing. | 10:06 |
evrardjp | oh I haven't seen the rechecks, that's why I rebased. | 10:06 |
*** yamamoto has quit IRC | 10:06 | |
evrardjp | Probably need moar coffee then. | 10:06 |
AJaeger | and I didn't see any mention in IRC that it's fixed either | 10:06 |
AJaeger | evrardjp: yeah, drink some coffee - and ask the US based team later today again, please. | 10:07 |
*** jpich has quit IRC | 10:07 | |
evrardjp | yup will do. | 10:07 |
*** cuongnv has quit IRC | 10:08 | |
evrardjp | sorry, it wasn't my intent to offend . | 10:08 |
*** links has quit IRC | 10:09 | |
*** links has joined #openstack-infra | 10:10 | |
AJaeger | evrardjp: no offense taken or understood - I just wanted to explain since it looked like you weren't away. | 10:10 |
evrardjp | thanks | 10:11 |
AJaeger | s/away/aware/ | 10:11 |
* AJaeger needs better typing skills ;( | 10:11 | |
*** jpich_ is now known as jpich | 10:15 | |
*** olaph1 has joined #openstack-infra | 10:16 | |
*** lpetrut has joined #openstack-infra | 10:17 | |
*** olaph has quit IRC | 10:17 | |
*** rossella_s has quit IRC | 10:20 | |
*** tosky has joined #openstack-infra | 10:22 | |
*** annp has quit IRC | 10:23 | |
*** rossella_s has joined #openstack-infra | 10:24 | |
*** kjackal has quit IRC | 10:26 | |
*** kjackal has joined #openstack-infra | 10:27 | |
*** vipul has quit IRC | 10:28 | |
*** spiffxp has quit IRC | 10:29 | |
*** pcrews has quit IRC | 10:30 | |
danpawlik | coreycb, zigo, beisner: Hi, I have a small question for you. Is it possible to create Octavia packages for Ubuntu? I created a storyboard for it https://storyboard.openstack.org/#!/story/2001566 | 10:30 |
zigo | danpawlik: Sure it's possible, do you have the needed skills? | 10:31 |
zigo | danpawlik: I can sponsor the upload in Debian if you want, then it will migrate to Ubuntu. | 10:31 |
danpawlik | I guess I don't have such skills :( | 10:32 |
*** yamahata has joined #openstack-infra | 10:32 | |
*** kjackal has quit IRC | 10:33 | |
*** zoli is now known as zoli|lunch | 10:33 | |
*** zoli|lunch is now known as zoli | 10:33 | |
*** alexchadin has joined #openstack-infra | 10:34 | |
danpawlik | zigo: maybe I will try to do it tomorrow | 10:34 |
zigo | danpawlik: My advice would be to just look how other packages are made. | 10:35 |
zigo | danpawlik: For Queens, everything is uploaded in Experimental until the final releases, so just use Sid + Experimental. | 10:35 |
zigo | danpawlik: FYI, at the moment, I'm almost finished with Queens, and I'm uploading services in RC1 versions. | 10:36 |
*** pcrews has joined #openstack-infra | 10:36 | |
zigo | (packages are already prepared, doing the final uploads ...) | 10:36 |
*** spiffxp has joined #openstack-infra | 10:36 | |
danpawlik | zigo: nice :) | 10:37 |
danpawlik | zigo: so I'll try to do it tomorrow, based on some other queens package | 10:37 |
danpawlik | zigo: could you tell me what is the storyboard for cloud archive? | 10:37 |
*** vipul has joined #openstack-infra | 10:38 | |
zigo | danpawlik: I don't do Ubuntu, I do Debian. | 10:39 |
zigo | danpawlik: So I use the Debian BTS, and salsa.debian.org as a Git repository. | 10:39 |
danpawlik | ah, ok | 10:39 |
zigo | danpawlik: I can add you in the team if you create an account there. | 10:40 |
zigo | danpawlik: https://signup.salsa.debian.org/ | 10:40 |
zigo | Give me your account, and I'll add you there. | 10:40 |
zigo | We're using a Git tag packaging workflow, I'm not sure if you know how this works ... | 10:40 |
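A very rough sketch of how getting started might look, assuming the team's salsa layout and git-buildpackage as the build tool; the repository URLs here are illustrative:

```shell
sudo apt-get install git-buildpackage
# Study debian/ in an existing OpenStack service package from the team...
git clone https://salsa.debian.org/openstack-team/services/designate.git
# ...then work in the new octavia repo once zigo has created it, and build a
# source package for review and sponsorship.
git clone https://salsa.debian.org/openstack-team/services/octavia.git
cd octavia
gbp buildpackage -S
```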
zigo | danpawlik: How many source package will you need? | 10:41 |
danpawlik | zigo: user @daniel.pawlik-guest | 10:42 |
danpawlik | zigo: I know the workflow on RDO | 10:42 |
danpawlik | but here not | 10:43 |
danpawlik | zigo: what do you mean "How many source package will you need?" | 10:43 |
danpawlik | zigo: I'll try to create the debian packaging for Octavia | 10:43 |
danpawlik | Openstack Octavia service | 10:43 |
zigo | danpawlik: So, just one? | 10:43 |
*** panda|off is now known as panda|sick | 10:44 | |
*** kjackal has joined #openstack-infra | 10:44 | |
zigo | There's already python-octaviaclient that I did. | 10:44 |
danpawlik | zigo: but its just a client :P | 10:44 |
danpawlik | just one | 10:44 |
zigo | Right. | 10:44 |
zigo | I'm creating the repo for you. | 10:44 |
danpawlik | :D | 10:45 |
danpawlik | zigo: thanks | 10:45 |
*** dhajare_ is now known as dhajare | 10:46 | |
zigo | danpawlik: You can also file the ITP bug. Do you know how to do that? | 10:48 |
danpawlik | zigo: maybe come on priv | 10:48 |
*** yee37935 has joined #openstack-infra | 10:48 | |
*** yee379 has quit IRC | 10:49 | |
*** panda has joined #openstack-infra | 10:57 | |
*** purp_too has joined #openstack-infra | 10:57 | |
*** scarpino has joined #openstack-infra | 10:58 | |
*** ericyoung has quit IRC | 10:59 | |
*** witek has quit IRC | 10:59 | |
*** panda|sick has quit IRC | 10:59 | |
*** Zara has quit IRC | 10:59 | |
*** ilpianista has quit IRC | 10:59 | |
*** EmilienM has quit IRC | 10:59 | |
*** purp has quit IRC | 10:59 | |
*** links has quit IRC | 10:59 | |
*** mattoliverau has quit IRC | 10:59 | |
*** Zara has joined #openstack-infra | 10:59 | |
*** scarpino is now known as ilpianista | 10:59 | |
*** olaph has joined #openstack-infra | 10:59 | |
*** witek has joined #openstack-infra | 10:59 | |
*** ericyoung has joined #openstack-infra | 10:59 | |
*** olaph1 has quit IRC | 11:00 | |
*** mattoliverau has joined #openstack-infra | 11:00 | |
*** links has joined #openstack-infra | 11:00 | |
*** EmilienM has joined #openstack-infra | 11:00 | |
*** salv-orl_ has joined #openstack-infra | 11:00 | |
*** yamahata has quit IRC | 11:00 | |
*** Qiming has quit IRC | 11:01 | |
*** kambiz has quit IRC | 11:01 | |
*** purp_too has quit IRC | 11:01 | |
*** salv-orlando has quit IRC | 11:03 | |
*** namnh has quit IRC | 11:03 | |
*** kambiz has joined #openstack-infra | 11:04 | |
*** purp has joined #openstack-infra | 11:04 | |
*** Qiming has joined #openstack-infra | 11:05 | |
*** yamamoto has joined #openstack-infra | 11:06 | |
*** pcaruana has quit IRC | 11:07 | |
*** yamamoto has quit IRC | 11:13 | |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 11:16 |
*** panda is now known as panda|sick | 11:17 | |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 11:18 |
*** tbarron_ has joined #openstack-infra | 11:23 | |
*** tbarron has quit IRC | 11:24 | |
*** Zara_ has joined #openstack-infra | 11:25 | |
*** kaisers1 has joined #openstack-infra | 11:25 | |
*** olaph1 has joined #openstack-infra | 11:26 | |
*** Qiming_ has joined #openstack-infra | 11:26 | |
*** mattoliverau_ has joined #openstack-infra | 11:27 | |
odyssey4me | Morning all - apologies for the drama yesterday with the various new repo imports. I've learned some things which will hopefully make it a smoother process next time. | 11:31 |
odyssey4me | AJaeger evrardjp I see that we're still not quite done as the base jobs aren't yet importing. I guess we're waiting for the US crew to wake to determine next steps? | 11:32 |
*** efried is now known as efried_rollin | 11:32 | |
*** Faster-Fanboi_ has joined #openstack-infra | 11:32 | |
*** olaph has quit IRC | 11:34 | |
*** Zara has quit IRC | 11:34 | |
*** kaisers has quit IRC | 11:34 | |
*** mattoliverau has quit IRC | 11:34 | |
*** rbergeron has quit IRC | 11:34 | |
*** Qiming has quit IRC | 11:34 | |
*** bradm has quit IRC | 11:34 | |
*** Faster-Fanboi has quit IRC | 11:34 | |
*** rwsu has quit IRC | 11:34 | |
*** ldnunes has joined #openstack-infra | 11:34 | |
*** rwsu has joined #openstack-infra | 11:34 | |
*** rbergeron has joined #openstack-infra | 11:35 | |
*** luzC has quit IRC | 11:36 | |
*** aviau has quit IRC | 11:39 | |
*** aviau has joined #openstack-infra | 11:39 | |
*** luzC has joined #openstack-infra | 11:39 | |
*** armaan has quit IRC | 11:39 | |
*** armaan has joined #openstack-infra | 11:40 | |
*** rbergero1 has joined #openstack-infra | 11:41 | |
*** pbourke has quit IRC | 11:42 | |
*** shardy has quit IRC | 11:42 | |
*** Zara_ is now known as Zara | 11:44 | |
*** bradm has joined #openstack-infra | 11:44 | |
*** jaosorior has quit IRC | 11:45 | |
*** jaosorior has joined #openstack-infra | 11:45 | |
*** danpawlik has quit IRC | 11:45 | |
*** rbergeron has quit IRC | 11:46 | |
*** armaan has quit IRC | 11:46 | |
*** pbourke has joined #openstack-infra | 11:47 | |
*** snapiri- has quit IRC | 11:47 | |
*** snapiri1 has joined #openstack-infra | 11:48 | |
*** danpawlik has joined #openstack-infra | 11:49 | |
*** udesale_ has quit IRC | 11:50 | |
*** pbourke_ has joined #openstack-infra | 11:52 | |
*** armaan has joined #openstack-infra | 11:57 | |
*** jpena is now known as jpena|lunch | 11:57 | |
*** pbourke has quit IRC | 11:58 | |
*** snapiri1 has quit IRC | 11:58 | |
*** danpawlik has quit IRC | 12:00 | |
*** thorre_se has joined #openstack-infra | 12:03 | |
*** lihi has joined #openstack-infra | 12:04 | |
*** danpawlik has joined #openstack-infra | 12:05 | |
*** thorre has quit IRC | 12:06 | |
*** thorre_se is now known as thorre | 12:06 | |
*** yamamoto has joined #openstack-infra | 12:09 | |
*** pcaruana has joined #openstack-infra | 12:09 | |
*** thorre_se has joined #openstack-infra | 12:09 | |
*** thorre has quit IRC | 12:13 | |
*** thorre_se is now known as thorre | 12:13 | |
*** yamamoto has quit IRC | 12:14 | |
openstackgerrit | sebastian marcet proposed openstack-infra/openstackid-resources master: Added get tracks by summit endpoints https://review.openstack.org/546962 | 12:15 |
openstackgerrit | Merged openstack-infra/openstackid-resources master: Added get tracks by summit endpoints https://review.openstack.org/546962 | 12:16 |
openstackgerrit | Hoang Trung Hieu proposed openstack-infra/zuul-jobs master: Update and replace http with https for doc links https://review.openstack.org/546965 | 12:26 |
*** dave-mccowan has joined #openstack-infra | 12:27 | |
*** janki has quit IRC | 12:27 | |
*** yamamoto has joined #openstack-infra | 12:27 | |
*** links has quit IRC | 12:29 | |
evrardjp | odyssey4me: yes | 12:30 |
*** pcichy has quit IRC | 12:32 | |
openstackgerrit | Hoang Trung Hieu proposed openstack-infra/zuul master: Update and replace http with https for doc links https://review.openstack.org/546970 | 12:37 |
*** links has joined #openstack-infra | 12:41 | |
*** pcichy has joined #openstack-infra | 12:48 | |
*** zul has quit IRC | 12:51 | |
*** olaph1 is now known as olaph | 12:51 | |
*** tbarron_ is now known as tbarron | 12:52 | |
*** wxy has quit IRC | 12:52 | |
*** rosmaita has joined #openstack-infra | 12:52 | |
*** wxy has joined #openstack-infra | 12:52 | |
*** hrybacki has quit IRC | 12:53 | |
*** hrybacki has joined #openstack-infra | 12:54 | |
*** zul has joined #openstack-infra | 12:55 | |
*** jlabarre has joined #openstack-infra | 13:01 | |
*** CrayZee has joined #openstack-infra | 13:03 | |
*** salv-orl_ has quit IRC | 13:10 | |
*** salv-orlando has joined #openstack-infra | 13:11 | |
*** jcoufal has joined #openstack-infra | 13:12 | |
*** amoralej is now known as amoralej|lunch | 13:13 | |
*** electrofelix has quit IRC | 13:14 | |
*** betherly has quit IRC | 13:15 | |
*** salv-orlando has quit IRC | 13:15 | |
*** salv-orlando has joined #openstack-infra | 13:15 | |
*** betherly has joined #openstack-infra | 13:15 | |
*** ying_zuo has quit IRC | 13:16 | |
*** dprince has joined #openstack-infra | 13:16 | |
coreycb | danpawlik: happy to help you to get octavia into ubuntu as well. it would need to get into ubuntu bionic by march 1st for queens. | 13:16 |
*** hamzy_ is now known as hamzy | 13:16 | |
*** ying_zuo has joined #openstack-infra | 13:17 | |
*** olaph1 has joined #openstack-infra | 13:17 | |
*** berendt has quit IRC | 13:18 | |
*** olaph has quit IRC | 13:19 | |
*** berendt has joined #openstack-infra | 13:21 | |
openstackgerrit | Omer Anson proposed openstack-infra/project-config master: Dragonflow: Add requirement for neutron-dynamic-routing in tests https://review.openstack.org/546978 | 13:21 |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 13:22 |
*** jpena|lunch is now known as jpena | 13:27 | |
*** zhipeng has joined #openstack-infra | 13:28 | |
*** vivsoni has quit IRC | 13:28 | |
*** vivsoni has joined #openstack-infra | 13:28 | |
*** ralonsoh_ has joined #openstack-infra | 13:32 | |
*** udesale has joined #openstack-infra | 13:32 | |
*** yamamoto has quit IRC | 13:34 | |
*** ykarel is now known as ykarel|away | 13:35 | |
*** efried_rollin is now known as efried | 13:35 | |
*** ralonsoh has quit IRC | 13:35 | |
*** rlandy has joined #openstack-infra | 13:36 | |
*** rlandy is now known as rlandy|ruck | 13:36 | |
*** sshnaidm|afk is now known as sshnaidm | 13:37 | |
fungi | evrardjp: odyssey4me: it's possible approving another new project will get things back on track, or a manual zuul reconfig | 13:38 |
fungi | i'll see if i can figure something out in a sec | 13:38 |
*** CrayZee has quit IRC | 13:41 | |
AJaeger | fungi: in that case: Oldest not approved new project change is: https://review.openstack.org/#/c/537653/ | 13:42 |
AJaeger | good morning, fungi! | 13:42 |
*** mriedem has joined #openstack-infra | 13:42 | |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 13:42 |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 13:43 |
fungi | AJaeger: thanks! that saved me looking one up | 13:44 |
evrardjp | good mornign fungi :) | 13:44 |
evrardjp | morning* | 13:44 |
*** yamamoto has joined #openstack-infra | 13:44 | |
*** yamamoto has quit IRC | 13:44 | |
AJaeger | fungi: if you do approve it, please review its sibling https://review.openstack.org/537802 as well ;) | 13:44 |
openstackgerrit | Simon Westphahl proposed openstack-infra/zuul master: Allow using remote refs to find commits for change https://review.openstack.org/544964 | 13:44 |
fungi | AJaeger: will do | 13:46 |
fungi | AJaeger: mmm... do you think this is safe to import? https://github.com/pinodeca/tatu-dashboard/blob/master/.zuul.yaml | 13:46 |
fungi | i'm looking now for where that template is defined | 13:47 |
AJaeger | fungi: let's recheck - it should fail ;) | 13:48 |
*** shardy has joined #openstack-infra | 13:48 | |
fungi | yeah, i believe that is yet another broken zuul config | 13:48 |
* fungi looks for a new project not importing prewritten zuul configuration | 13:49 | |
*** pgadiya has quit IRC | 13:49 | |
*** kgiusti has joined #openstack-infra | 13:50 | |
*** annp has joined #openstack-infra | 13:50 | |
AJaeger | https://review.openstack.org/546260 ? | 13:50 |
AJaeger | has no imports ;) | 13:50 |
*** VW has joined #openstack-infra | 13:51 | |
*** briancurtin has quit IRC | 13:52 | |
AJaeger | fungi, 537802 has no zuul config | 13:52 |
*** briancurtin has joined #openstack-infra | 13:52 | |
AJaeger | my test did not work on tatu-dashboard - why? | 13:53 |
*** arxcruz|ruck is now known as arxcruz|rover | 13:53 | |
*** mwhahaha has quit IRC | 13:54 | |
*** eernst has joined #openstack-infra | 13:55 | |
odyssey4me | morning fungi :) apologies for all that drama yesterday... I guess this is how we learn, and how zuul development gets its feedback :) | 13:55 |
fungi | AJaeger: 537802 has https://github.com/pinodeca/python-tatuclient/blob/master/.zuul.yaml | 13:55 |
fungi | similarly broken too i think | 13:55 |
fungi | odyssey4me: yep! | 13:55 |
*** ralonsoh_ is now known as ralonsoh | 13:55 | |
*** oidgar has quit IRC | 13:55 | |
*** mwhahaha has joined #openstack-infra | 13:56 | |
*** zxiiro has quit IRC | 13:56 | |
AJaeger | fungi: yeah ;( Sorry, looked wrong place | 13:56 |
evrardjp | odyssey4me: wait, it's not over yet :D | 13:56 |
*** GregHouse has quit IRC | 13:56 | |
fungi | AJaeger: since 546260 doesn't import a repo, i'm reviewing that on | 13:56 |
AJaeger | fungi: my test is broken, fix coming... | 13:56 |
fungi | e | 13:56 |
*** imacdonn has quit IRC | 13:56 | |
openstackgerrit | Andreas Jaeger proposed openstack-infra/project-config master: Fix typo in tools/check_valid_gerrit_projects.py https://review.openstack.org/546988 | 13:57 |
AJaeger | fungi: care to +2A simple typo, please? | 13:57 |
AJaeger | that should block the two reviews you mentioned now | 13:57 |
*** robcresswell has quit IRC | 13:57 | |
*** zxiiro has joined #openstack-infra | 13:58 | |
*** amoralej|lunch is now known as amoralej | 13:58 | |
*** GregHouse has joined #openstack-infra | 13:58 | |
*** Kevin_Zheng has quit IRC | 13:58 | |
*** robcresswell has joined #openstack-infra | 14:00 | |
*** zhongjun has quit IRC | 14:01 | |
*** zhongjun has joined #openstack-infra | 14:01 | |
*** bstinson has quit IRC | 14:01 | |
*** Kevin_Zheng has joined #openstack-infra | 14:02 | |
*** clayg has quit IRC | 14:02 | |
*** bstinson has joined #openstack-infra | 14:03 | |
*** clayg has joined #openstack-infra | 14:05 | |
*** ihrachys_ has quit IRC | 14:05 | |
*** ihrachys has joined #openstack-infra | 14:06 | |
openstackgerrit | Merged openstack-infra/project-config master: Add new project 'osel' https://review.openstack.org/546260 | 14:06 |
*** Goneri has joined #openstack-infra | 14:06 | |
openstackgerrit | Merged openstack-infra/project-config master: Fix typo in tools/check_valid_gerrit_projects.py https://review.openstack.org/546988 | 14:06 |
fungi | once puppet updates zuul01.o.o for the osel addition we should hopefully be back on track | 14:06 |
*** imacdonn has joined #openstack-infra | 14:06 | |
* fungi needs to step away and do morning things for a few minutes | 14:07 | |
*** alexchadin has quit IRC | 14:08 | |
*** lbragstad has joined #openstack-infra | 14:10 | |
*** oidgar has joined #openstack-infra | 14:11 | |
*** zhipeng has quit IRC | 14:12 | |
*** lathiat has quit IRC | 14:13 | |
*** zhipeng has joined #openstack-infra | 14:14 | |
*** derekh has joined #openstack-infra | 14:15 | |
pabelanger | spacex launch now, FYI | 14:19 |
pabelanger | AJaeger: dirk: we'll also need mirrors for opensuse-tumbleweed | 14:20 |
pabelanger | that's likely why the job is failing | 14:20 |
dirk | pabelanger: ehm, sorry, which job is failing? | 14:20 |
dirk | good morning btw | 14:20 |
pabelanger | dirk: see https://review.openstack.org/546846/ | 14:21 |
pabelanger | zypper looks to be configured wrong | 14:22 |
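A quick way to see what the tumbleweed node is actually configured with; the mirror URL in the comment is only the pattern the other distros follow, since a tumbleweed AFS volume doesn't exist yet:

```shell
zypper lr --uri     # list configured repositories and their URIs
# eventually these should point at the in-cloud mirror, e.g. something like
#   http://mirror.<region>.<provider>.openstack.org/opensuse/tumbleweed/
```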
*** myoung|afk is now known as myoung | 14:23 | |
*** vabada has quit IRC | 14:27 | |
*** agopi has joined #openstack-infra | 14:27 | |
*** baoli has joined #openstack-infra | 14:27 | |
*** vabada has joined #openstack-infra | 14:27 | |
openstackgerrit | David Shrewsbury proposed openstack-infra/nodepool master: Add additional builder debug logging https://review.openstack.org/546303 | 14:28 |
*** baoli has quit IRC | 14:28 | |
*** lathiat has joined #openstack-infra | 14:28 | |
*** baoli has joined #openstack-infra | 14:29 | |
*** dhill__ has joined #openstack-infra | 14:29 | |
pabelanger | mnaser: fungi: ubuntu mirror looks good this morning too, we've released with no issues overnight. And AFS errors (lost connection to cell) are all but gone today. We haven't lost connection so far | 14:30 |
*** pblaho has quit IRC | 14:30 | |
openstackgerrit | Merged openstack-infra/nodepool master: Add /label-list to the webapp https://review.openstack.org/535563 | 14:30 |
mnaser | pabelanger: sweet. I think my jobs went through without a hitch too | 14:31 |
*** esberglu has joined #openstack-infra | 14:31 | |
pabelanger | mnaser: great | 14:32 |
*** ykarel|away has quit IRC | 14:35 | |
*** olaph has joined #openstack-infra | 14:35 | |
*** alexchadin has joined #openstack-infra | 14:35 | |
*** olaph1 has quit IRC | 14:35 | |
*** Guest96947 has quit IRC | 14:36 | |
*** sree_ has joined #openstack-infra | 14:36 | |
*** sree_ is now known as Guest64810 | 14:37 | |
*** dtantsur is now known as dtantsur|brb | 14:38 | |
openstackgerrit | Hoang Trung Hieu proposed openstack-infra/zuul master: WIP: Update and replace http with https for doc links https://review.openstack.org/546970 | 14:38 |
*** dklyle has quit IRC | 14:39 | |
*** caphrim007_ has quit IRC | 14:39 | |
*** caphrim007 has joined #openstack-infra | 14:40 | |
*** Guest64810 has quit IRC | 14:41 | |
*** pcichy has quit IRC | 14:42 | |
*** caphrim007 has quit IRC | 14:44 | |
*** yamamoto has joined #openstack-infra | 14:45 | |
*** eernst has quit IRC | 14:45 | |
*** eharney has joined #openstack-infra | 14:46 | |
*** r-daneel has joined #openstack-infra | 14:46 | |
*** psachin has quit IRC | 14:49 | |
*** ameeda has quit IRC | 14:51 | |
*** yamamoto has quit IRC | 14:51 | |
*** auristor has quit IRC | 14:52 | |
*** auristor has joined #openstack-infra | 14:54 | |
*** eernst has joined #openstack-infra | 14:54 | |
thingee | hi infra, just one more +2 on adding a core group for the project sphinx-feature-classification. already has the ptl +1 too | 14:54 |
thingee | https://review.openstack.org/#/c/545861/ | 14:55 |
*** e0ne has quit IRC | 14:57 | |
*** cmurphy has quit IRC | 14:57 | |
*** olaph has quit IRC | 14:58 | |
*** olaph has joined #openstack-infra | 14:58 | |
mnaser | thingee: voila | 14:59 |
*** e0ne has joined #openstack-infra | 14:59 | |
*** ykarel|away has joined #openstack-infra | 14:59 | |
*** ldnunes has quit IRC | 15:00 | |
*** wolverineav has joined #openstack-infra | 15:01 | |
odyssey4me | pabelanger fungi yeah, thanks for sorting out the ubuntu mirrors - any idea why the update took so long? | 15:02 |
fungi | odyssey4me: there had apparently been some disconnect with the afs backend db, and the resulting recovery was causing writes to go slowly. couple that with a largeish set of security updates for ubuntu, and a mirror sync ran longer than the allotted time for its kerberos ticket, so it got aborted, leaving a lockfile behind | 15:04 |
odyssey4me | AJaeger are we good to go to recheck https://review.openstack.org/546573 & https://review.openstack.org/546294 now? | 15:04 |
*** annp has quit IRC | 15:05 | |
pabelanger | Yup, when the database becomes corrupt in reprepro, we need to reindex everything again from AFS. And if info isn't in cache, we depend on network bandwidth to fetch new file info | 15:05 |
thingee | mnaser: thank you | 15:05 |
fungi | then, after cleaning up the lockfile and rerunning manually, we noticed the slowness, realized the situation with the cache repair wasn't progressing as expected, and rebooted the server, which seemed to get the cache sync back underway... | 15:05 |
odyssey4me | looks like the previously mentioned tests that were added are now working to validate a repository's contents before allowing a merge | 15:05 |
*** baoli has quit IRC | 15:05 | |
*** wolverineav has quit IRC | 15:05 | |
odyssey4me | fungi ah, makes sense - thanks all for following that through | 15:06 |
*** baoli has joined #openstack-infra | 15:06 | |
fungi | it was mostly pabelanger | 15:06 |
fungi | odyssey4me: AJaeger: yeah, job working as designed! http://logs.openstack.org/53/537653/7/check/project-config-gerrit/82bf64d/job-output.txt.gz#_2018-02-22_14_44_16_817563 | 15:07 |
fungi | thanks for adding that test, AJaeger! | 15:07 |
openstackgerrit | Merged openstack-infra/project-config master: Set sphinx-feature-classification own core group https://review.openstack.org/545861 | 15:07 |
pabelanger | as for 546573 and 546294, I think we need a full reload of the zuul configuration. I was holding off from doing that until corvus was online, in case he wanted to debug why it still fails | 15:07 |
fungi | pabelanger: i approved another project addition a little while ago | 15:08 |
fungi | pabelanger: hoping to see if that triggers a reconfig when puppet kicks it again | 15:08 |
pabelanger | kk | 15:08 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add certbot-dns-openstack repo https://review.openstack.org/547022 | 15:08 |
fungi | pabelanger: for the record, that was https://review.openstack.org/546260 | 15:08 |
AJaeger | fungi: great! | 15:09 |
*** tpsilva has joined #openstack-infra | 15:09 | |
odyssey4me | ok, good - looks like the plan right now is to simply block *any* pre-implemented zuul config from being imported - which works... that's certainly easier than any other options I can think of | 15:10 |
openstackgerrit | Andreas Jaeger proposed openstack-infra/project-config master: Add zuul config for new project 'osel' https://review.openstack.org/546421 | 15:10 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add zuul entry for certbot-dns-openstack https://review.openstack.org/547023 | 15:11 |
pabelanger | odyssey4me: there's also been some discussion in #zuul about maybe not loading a project in zuul if its configuration is bad. Will be a topic at PTG I believe. | 15:11 |
AJaeger | odyssey4me: yes, big axe - let's import these files via a normal review step. | 15:11 |
odyssey4me | I support this course of action. | 15:16 |
fungi | aha, i think the big gcc update we saw in ubuntu was the retpoline enablement backport | 15:16 |
odyssey4me | Of course ideally zuul could protect itself without requiring an external test, but I can understand that's hard to do and certainly the axe works as a stopgap to buy more time. | 15:17 |
*** udesale has quit IRC | 15:19 | |
*** mriedem has left #openstack-infra | 15:20 | |
*** mriedem has joined #openstack-infra | 15:20 | |
*** links has quit IRC | 15:20 | |
pabelanger | dmsimard: are you aware of an issue with fedora-26 jobs in ozj? http://logs.openstack.org/48/546848/1/check/openstack-infra-multinode-integration-fedora-26/9c7aa43/job-output.txt.gz | 15:22 |
pabelanger | Timeout (32s) waiting for privilege escalation prompt: | 15:22 |
pabelanger | when reloading iptables | 15:22 |
dmsimard | pabelanger: yes, I have held nodes to troubleshoot it | 15:22 |
dmsimard | pabelanger: Haven't gotten to the bottom of it yet | 15:22 |
AJaeger | fungi, we're green again with the new repos! | 15:22 |
dmsimard | pabelanger: If it's blocking something, we can put it non-voting | 15:23 |
AJaeger | dmsimard, fungi , pabelanger, could you review https://review.openstack.org/545234 to refactor static publish jobs as well as https://review.openstack.org/545442 and https://review.openstack.org/545603 to mark base jobs as abstract, please? | 15:23 |
pabelanger | dmsimard: okay, I don't fully understand what the job is doing. Not blocking, wanted to add fedora-27 nodes, because we want to delete fedora-26 | 15:24 |
dmsimard | pabelanger: that particular test is for iptables rules persistence integration testing -- we run the multinode roles from zuul-jobs, flush iptables and then restart it to see if the rules are back | 15:25 |
dmsimard | pabelanger: I don't have any objection to move forward with f27 | 15:25 |
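A rough sketch (in Python, purely for illustration) of the persistence check dmsimard describes above; the real check is an Ansible role in zuul-jobs, so the service name and commands here are simplifying assumptions, and it needs root on the test node.

```python
#!/usr/bin/env python3
# Hedged sketch: flush the live iptables rules, restart the service, and
# verify the persisted rules come back. Not the actual zuul-jobs role.
import subprocess


def current_rules():
    out = subprocess.check_output(['iptables-save']).decode()
    # Drop comments and chain/counter lines so before/after output compares cleanly.
    return sorted(line for line in out.splitlines()
                  if line and not line.startswith(('#', ':')))


def main():
    before = current_rules()
    subprocess.check_call(['iptables', '-F'])                    # flush live rules
    subprocess.check_call(['systemctl', 'restart', 'iptables'])  # reload persisted rules
    after = current_rules()
    if before != after:
        raise SystemExit('persisted iptables rules did not come back after restart')
    print('iptables rules restored, persistence looks good')


if __name__ == '__main__':
    main()
```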
*** dtantsur|brb is now known as dtantsur | 15:26 | |
*** alexchadin has quit IRC | 15:26 | |
fungi | odyssey4me: yep, i believe the plan is that zuul will skip loading repos with invalid configuration rather than reverting to an old layout | 15:27 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Set openstack-infra-multinode-integration-fedora-26 non-voting https://review.openstack.org/547026 | 15:27 |
pabelanger | dmsimard: ^ | 15:27 |
*** alexchadin has joined #openstack-infra | 15:27 | |
AJaeger | fungi: and that's what happened in this case. The repo was completely ignored - but that also meant we could not run any tests to merge anything using the gerrit workflow. | 15:29 |
*** slaweq has quit IRC | 15:30 | |
*** slaweq has joined #openstack-infra | 15:30 | |
*** alexchadin has quit IRC | 15:31 | |
*** e0ne has quit IRC | 15:34 | |
*** slaweq has quit IRC | 15:35 | |
*** e0ne has joined #openstack-infra | 15:35 | |
fungi | yeah, i guess the solution is more nuanced than that, but for discussing in #zuul | 15:36 |
*** alexchadin has joined #openstack-infra | 15:37 | |
*** krtaylor has quit IRC | 15:38 | |
*** yamahata has joined #openstack-infra | 15:40 | |
*** alexchadin has quit IRC | 15:42 | |
*** zhipeng has quit IRC | 15:42 | |
openstackgerrit | Paul Belanger proposed openstack-infra/glean master: Testing https://review.openstack.org/547034 | 15:42 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add certbot-dns-openstack repo https://review.openstack.org/547022 | 15:42 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add zuul entry for certbot-dns-openstack https://review.openstack.org/547023 | 15:42 |
corvus | pabelanger: debug why what still fails? | 15:45 |
openstackgerrit | Andrea Frittoli proposed openstack-infra/devstack-gate master: Add stable/pike Tempest bitrot job https://review.openstack.org/547035 | 15:46 |
*** hongbin has joined #openstack-infra | 15:46 | |
pabelanger | corvus: ya, we force merged changes to remove the bad zuul.d folders, however zuul still complained about project conflict. It looks like we got a full reconfiguration @ Last reconfigured: Thu Feb 22 2018 09:46:54 GMT-0500 (EST). So, I guess we recheck and see if it works | 15:47 |
*** yamamoto has joined #openstack-infra | 15:47 | |
pabelanger | wasn't sure if that was the required workflow or some other issue | 15:47 |
pabelanger | okay, https://review.openstack.org/546573/ is good | 15:48 |
pabelanger | so, guess we needed a reload then | 15:48 |
openstackgerrit | Merged openstack-infra/project-config master: Add zuul config for new project 'osel' https://review.openstack.org/546421 | 15:48 |
corvus | pabelanger: yes, zuul only reloads its tenant config file on a full reconfiguration. | 15:49 |
pabelanger | corvus: ack, thanks | 15:49 |
*** david-lyle has joined #openstack-infra | 15:50 | |
openstackgerrit | Andrea Frittoli proposed openstack-infra/project-config master: Remove legacy tempest bitrot jobs for pike https://review.openstack.org/547037 | 15:51 |
*** rossella_s has quit IRC | 15:51 | |
*** yamamoto has quit IRC | 15:51 | |
openstackgerrit | Sean McGinnis proposed openstack-infra/project-config master: Limt jobs on release-tools presentations https://review.openstack.org/547041 | 15:55 |
openstackgerrit | Andrea Frittoli proposed openstack-infra/openstack-zuul-jobs master: Cleanup tempest pike legacy jobs. https://review.openstack.org/547042 | 15:56 |
*** VW has quit IRC | 15:56 | |
*** VW has joined #openstack-infra | 15:57 | |
*** eventingmonkey has quit IRC | 15:58 | |
pabelanger | dynamic reloads looked to have climbed up a little: http://paste.openstack.org/show/682153/ | 15:58 |
pabelanger | 20 seconds now | 15:58 |
fungi | pabelanger: yeah, memory utilization is climbing in the past hour too | 16:00 |
fungi | http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64792&rra_id=all | 16:00 |
pabelanger | so it is | 16:01 |
zxiiro | When OpenStack was still using Jenkins, did jobs run using ansible playbooks? If so, how was that done - did they use the ansible jenkins plugin? | 16:01 |
andreaf | AJaeger fungi corvus pabelanger tosky tonyb three patches to replace legacy tempest jobs with zuulv3 jobs for pike (on stable and bitrot jobs) now that devstack patches have been backported to pike https://review.openstack.org/#/q/status:open+branch:master+topic:stable_pike | 16:02 |
pabelanger | zxiiro: we wrote zuul-launcher which loaded JJB files at run time, and converted them into ansible-playbooks. Then the launchers ran ansible-playbook locally | 16:02 |
fungi | zxiiro: no, we switched from jenkins ssh plugin running jobs to zuul v2's experimental ansible launcher which basically replaced our jenkins masters | 16:03 |
fungi | or what pabelanger also said | 16:03 |
zxiiro | ah ok. so even before that they were basically regular freestyle jobs with shell | 16:04 |
fungi | the ansible solution in v2 was our replacement/alternative for jenkins, basically | 16:04 |
pabelanger | zxiiro: yah, you can look at 2.6.0 release of zuul to see how it worked. | 16:04 |
pabelanger | (not that I recommend running it in production) | 16:04 |
pabelanger | but like fungi said, jenkins was gone at that point | 16:05 |
openstackgerrit | Sean McGinnis proposed openstack-infra/project-config master: Switch notifications for stable branches to match all https://review.openstack.org/546690 | 16:05 |
fungi | right, we limited our jenkins plugin utilization as much as possible, anticipating a future where we switched to something else and wouldn't want to have to reimplement various jenkins plugins in some other framework | 16:05 |
*** rossella_s has joined #openstack-infra | 16:05 | |
zxiiro | good to know. thanks. mostly just curious. I'm exploring ways to make Jenkins launch Ansible. The Ansible Jenkins plugin seems to be the current frontrunner. | 16:06 |
openstackgerrit | Merged openstack-infra/release-tools master: stub in presentation for rocky ptg https://review.openstack.org/545066 | 16:06 |
openstackgerrit | Merged openstack-infra/release-tools master: draft of presentation for rocky ptg https://review.openstack.org/545105 | 16:06 |
fungi | which made translating our mostly-shell job definitions from jjb to ansible playbooks tractable | 16:06 |
VW | hey folks - our AUP department is getting a takedown request for this http://paste.openstack.org/raw/665906/ | 16:06 |
*** caphrim007 has joined #openstack-infra | 16:06 | |
VW | supposedly the decompiler offered there is a no no | 16:07 |
hughsaunders | Hey, is there a timescale for a nodepool release post zuulv3 branch becoming master? | 16:07 |
pabelanger | zxiiro: you might also be interested in https://etherpad.openstack.org/p/zuulv3-jenkins-integration which is some ideas on how zuulv3 might integrate with jenkins | 16:07 |
pabelanger | hughsaunders: already done! we merged feature/zuulv3 into master last month | 16:07 |
fungi | VW: i can likely delete it from the database, but we also tell search engines not to index that service so i'm curious how it was even found | 16:07 |
pabelanger | hughsaunders: but, v3.0 release should be coming in the next few weeks | 16:07 |
VW | yeah - that I don't know, fungi | 16:08 |
pabelanger | hughsaunders: https://storyboard.openstack.org/#!/board/53 is last items to finish off | 16:08 |
hughsaunders | pabelanger: ahh, ok I was going to say 0.5.0 was 9th Jan and merge was after (18th?) | 16:08 |
VW | I just got hit up because I'm listed as the primary on the account you all use | 16:08 |
hughsaunders | Thanks for storyboard link | 16:08 |
fungi | VW: oh! our robots.txt isn't being served now for some reason. that might explain it | 16:08 |
VW | well, there you go | 16:08 |
VW | :) | 16:08 |
*** eventingmonkey has joined #openstack-infra | 16:09 | |
pabelanger | hughsaunders: yes, that sounds right. Let me see if I can find the review | 16:09 |
fungi | VW: thanks! cleaning that up now and also figuring out why robots.txt is missing | 16:09 |
VW | cool - thanks fungi | 16:09 |
VW | I'll inform the Brute Squad | 16:09 |
pabelanger | hughsaunders: https://review.openstack.org/535435/ | 16:10 |
hughsaunders | yeah thats the one | 16:10 |
AJaeger | fungi, the two tatu repos should be fixed now - want to remvoe your -1 from https://review.openstack.org/#/c/537653/ ? | 16:10 |
hughsaunders | I just noticed there's quite a large delta between last release and master (due to that merge). Will use a SHA for now while waiting for the v3 tag. | 16:11 |
*** ramishra has quit IRC | 16:11 | |
andreaf | pabelanger not using delegate_to and doing push/pull was the working solution https://review.openstack.org/#/c/528074/19/roles/sync-devstack-data/tasks/main.yaml | 16:12 |
*** sshnaidm is now known as sshnaidm|off | 16:12 | |
pabelanger | hughsaunders: yah, there are some breaking changes compared to pre-feature/zuulv3 nodepool too. The configuration format will be a breaking change. | 16:12 |
hughsaunders | not to mention the disappearance of zmq... | 16:13 |
pabelanger | andreaf: good to know | 16:14 |
*** olaph has quit IRC | 16:14 | |
*** olaph has joined #openstack-infra | 16:14 | |
pabelanger | hughsaunders: yah, 0.5.0 does support zookeeper for builders, IIRC. But yah, zmq dropped too | 16:14 |
*** shoogz has joined #openstack-infra | 16:15 | |
*** sshnaidm|off has quit IRC | 16:17 | |
*** rossella_s has quit IRC | 16:20 | |
smcginnis | Looks like zuul events are getting backed up. Anything to be concerned about? | 16:21 |
*** rossella_s has joined #openstack-infra | 16:23 | |
pabelanger | for some reason, reloads are taking a little longer now. Which means we are not starting builds as fast: http://grafana.openstack.org/dashboard/db/zuul-status | 16:25 |
openstackgerrit | Javier Peña proposed openstack-infra/system-config master: Move AFS mirror code to puppet-openstackci https://review.openstack.org/529032 | 16:26 |
smcginnis | Gerrit UI seems slow too. Network issues? | 16:26 |
*** agopi has quit IRC | 16:29 | |
fungi | #status log deleted http://paste.openstack.org/raw/665906/ from lodgeit openstack.pastes table (paste_id=665906) due to provider aup violation/takedown notice | 16:30 |
openstackstatus | fungi: finished logging | 16:30 |
fungi | VW: ^ | 16:30 |
*** yolanda has quit IRC | 16:30 | |
openstackgerrit | Jeremy Stanley proposed openstack-infra/puppet-httpd master: Allow access to docroot for proxy exclusions https://review.openstack.org/547062 | 16:30 |
*** yolanda has joined #openstack-infra | 16:30 | |
fungi | infra-puppet-core: ^ should make our robots.txt work on the paste.o.o site | 16:31 |
*** tosky has quit IRC | 16:31 | |
VW | thanks fungi! | 16:32 |
fungi | VW: thanks for bringing it to our attention! we had intentionally disallowed crawlers on that server for precisely that reason | 16:32 |
VW | my pleasure. I'll let you know if I hear anything else | 16:32 |
fungi | please do | 16:33 |
*** armaan has quit IRC | 16:33 | |
*** armaan has joined #openstack-infra | 16:33 | |
*** oidgar has quit IRC | 16:35 | |
*** d0ugal has quit IRC | 16:35 | |
*** andreas_s has quit IRC | 16:38 | |
*** slaweq has joined #openstack-infra | 16:40 | |
*** d0ugal has joined #openstack-infra | 16:40 | |
*** david-lyle is now known as dklyle | 16:42 | |
*** andreas_s_ has joined #openstack-infra | 16:43 | |
*** slaweq has quit IRC | 16:45 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: Tenant config can be read from an external script https://review.openstack.org/535878 | 16:46 |
*** e0ne has quit IRC | 16:46 | |
*** andreas_s has joined #openstack-infra | 16:47 | |
*** andreas_s_ has quit IRC | 16:47 | |
openstackgerrit | Merged openstack-infra/project-config master: Add new project for Tatu (SSH as a Service) Horizon Plugin. https://review.openstack.org/537653 | 16:47 |
*** yamamoto has joined #openstack-infra | 16:48 | |
*** cmurphy has joined #openstack-infra | 16:48 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: Tenant config can be read from an external script https://review.openstack.org/535878 | 16:48 |
*** eernst has quit IRC | 16:49 | |
*** oidgar has joined #openstack-infra | 16:50 | |
*** pblaho has joined #openstack-infra | 16:51 | |
*** andreas_s has quit IRC | 16:51 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Refactor status functions, add web endpoints, allow params https://review.openstack.org/536301 | 16:52 |
*** yamamoto has quit IRC | 16:52 | |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: Tenant config can be read from an external script https://review.openstack.org/535878 | 16:53 |
*** eernst has joined #openstack-infra | 16:54 | |
*** wolverineav has joined #openstack-infra | 16:55 | |
*** wolverineav has quit IRC | 16:55 | |
*** wolverineav has joined #openstack-infra | 16:55 | |
*** ihrachys has quit IRC | 16:56 | |
dirk | pabelanger: is there a way for me to download the qcow2 file (opensuse-tumbleweed) somehow? | 16:56 |
*** ihrachys has joined #openstack-infra | 16:56 | |
dirk | pabelanger: I am interested what is in there | 16:56 |
*** dklyle has quit IRC | 16:56 | |
*** rossella_s has quit IRC | 16:57 | |
pabelanger | dirk: nothing public by default, I can expose it manually. Something your DIB doesn't have? | 16:57 |
*** rossella_s has joined #openstack-infra | 16:59 | |
*** VW_ has joined #openstack-infra | 17:00 | |
*** VW has quit IRC | 17:00 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Refactor status functions, add web endpoints, allow params https://review.openstack.org/536301 | 17:00 |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Add separate modules for management commands https://review.openstack.org/536303 | 17:00 |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: webapp: add optional admin endpoint https://review.openstack.org/536319 | 17:00 |
*** david-lyle has joined #openstack-infra | 17:01 | |
Zara | hm, not sure where to ask- I'm getting my ptg ticket now; I have a tsp code but the cost is still displayed as $400 after applying it. is that normal? worried I'll accidentally charge myself or am otherwise confused. :) | 17:02 |
*** jpich has quit IRC | 17:04 | |
*** links has joined #openstack-infra | 17:04 | |
fungi | Zara: good question, i'll find out who you should contact | 17:05 |
Zara | thanks :) | 17:07 |
clarkb | good morning | 17:07 |
pabelanger | o/ | 17:08 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add certbot-dns-openstack repo https://review.openstack.org/547022 | 17:09 |
openstackgerrit | Graham Hayes proposed openstack-infra/project-config master: Add zuul entry for certbot-dns-openstack https://review.openstack.org/547023 | 17:09 |
*** zoli is now known as zoli|gone | 17:11 | |
*** zoli|gone is now known as zoli | 17:11 | |
fungi | Zara: kendall waters said it shouldn't charge you, but suggested you e-mail ptg@openstack.org and she can help you out | 17:11 |
*** armaan has quit IRC | 17:12 | |
*** armaan has joined #openstack-infra | 17:13 | |
*** agopi has joined #openstack-infra | 17:16 | |
*** agopi_ has joined #openstack-infra | 17:18 | |
*** agopi has quit IRC | 17:20 | |
*** agopi_ has quit IRC | 17:22 | |
*** baoli has quit IRC | 17:24 | |
*** baoli has joined #openstack-infra | 17:24 | |
*** gyee has joined #openstack-infra | 17:25 | |
Zara | fungi: great, thanks, will do :) | 17:27 |
*** dprince has quit IRC | 17:28 | |
*** slaweq has joined #openstack-infra | 17:29 | |
jlvillal | Hey clarkb. I did the puppet-gerritbot change to install via git. It is passing Zuul. But I'm mostly cargo-culting my changes. So a review from someone who knows puppet is appreciated :) https://review.openstack.org/#/c/546700/ | 17:32 |
clarkb | jlvillal: ok | 17:32 |
jlvillal | Thanks. | 17:33 |
openstackgerrit | Merged openstack-infra/nodepool master: Refactor playbooks/nodepool-zuul-functional/pre.yaml https://review.openstack.org/546272 | 17:34 |
mnaser | clarkb: jlvillal ok if i review from a puppet pov? :) | 17:34 |
jlvillal | mnaser, Yes please :) | 17:35 |
clarkb | mnaser: yes! | 17:35 |
fungi | mnaser: speaking of puppet module changes, https://review.openstack.org/547062 is semi-urgent | 17:36 |
mnaser | fungi: will look after this | 17:36 |
clarkb | jlvillal: left some thoughts but you amy want to wait for mnaser to comment too | 17:36 |
jlvillal | clarkb, thanks | 17:37 |
*** olaph1 has joined #openstack-infra | 17:38 | |
*** olaph has quit IRC | 17:39 | |
mnaser | jlvillal: left a few comments but it's mostly ok! :) | 17:40 |
AJaeger | config-core, could you review a refactor of our static publish jobs, please? https://review.openstack.org/545234 | 17:40 |
jlvillal | mnaser, Thanks! | 17:40 |
AJaeger | mnaser: could you review these initial jobs for repos: https://review.openstack.org/#/c/546573/ https://review.openstack.org/538307 https://review.openstack.org/546294 | 17:43 |
AJaeger | , please? | 17:43 |
mnaser | AJaeger: sure, ill have a look and fungi it looks good to me but cant +A :) | 17:44 |
AJaeger | thanks! | 17:44 |
fungi | mnaser: no problem, just appreciate the look! | 17:44 |
*** agopi has joined #openstack-infra | 17:44 | |
*** olaph1 is now known as olaph | 17:45 | |
mnaser | AJaeger: all done | 17:45 |
AJaeger | great | 17:45 |
*** lpetrut has quit IRC | 17:46 | |
mnaser | https://review.openstack.org/#/c/546690 is an easy one for any config-core :) | 17:46 |
AJaeger | mnaser: time for a job removal as well, please? https://review.openstack.org/#/c/546699/ | 17:47 |
AJaeger | mnaser: will check... | 17:47 |
fungi | infra-puppet-core reviewers: i'm looking for a second +2/approval on 547062 to fix serving robots.txt on paste.o.o so spam/scams/illegal content indexed there stop getting picked up by search engines | 17:47 |
*** slaweq has quit IRC | 17:48 | |
*** yamamoto has joined #openstack-infra | 17:48 | |
clarkb | fungi: oh interesting this is if it uses the built in template rather than providing one of its own (whcih we typically do instead) | 17:50 |
clarkb | fungi: I've approved it | 17:50 |
fungi | thanks clarkb | 17:51 |
*** armaan has quit IRC | 17:51 | |
fungi | and yeah, the built-in one was basically broken (at least for apache 2.4 on xenial, may have been working-ish on 2.2 without explicit access allowances) | 17:51 |
*** jpena is now known as jpena|off | 17:52 | |
*** armaan has joined #openstack-infra | 17:52 | |
openstackgerrit | John L. Villalovos proposed openstack-infra/puppet-gerritbot master: Change gerritbot to install from git https://review.openstack.org/546700 | 17:52 |
fungi | my guess is it worked on an implicit lack of acl until we upgraded paste.o.o to trusty | 17:52 |
fungi | er, to xenial | 17:52 |
fungi | at which point we stopped serving robots.txt and it reverted to being a cesspool of cut-n-paste links to illicit sites, eventually drawing complaints to our provider | 17:54 |
*** yamamoto has quit IRC | 17:54 | |
*** dprince has joined #openstack-infra | 17:55 | |
*** mgoddard_ has quit IRC | 17:55 | |
*** efoley has joined #openstack-infra | 17:55 | |
fungi | huh... kde started using phabricator for hosting/reviewing source code? | 17:56 |
*** derekh has quit IRC | 18:00 | |
mnaser | jlvillal: any reason behind the decision to install deps then gerritbot after? | 18:02 |
jlvillal | mnaser, cargo-cult from puppet-zuul | 18:02 |
openstackgerrit | Merged openstack-infra/project-config master: Add OSA os_panko repo base jobs https://review.openstack.org/546573 | 18:02 |
fungi | mnaser: we want to upgrade gerritbot but not unconditionally upgrade its deps if what is already installed is satisfactory | 18:03 |
jlvillal | mnaser, https://github.com/openstack-infra/puppet-zuul/blob/master/manifests/init.pp#L193 | 18:03 |
mnaser | jlvillal: i hate thinking of a suggestion after reviewing but i think this is really useful for you - https://github.com/openstack-infra/puppet-statusbot/blob/master/manifests/init.pp | 18:03 |
mnaser | statusbot deploys from master afaik | 18:03 |
fungi | this will get simpler with pip 10 since --upgrade-strategy=only-if-needed becomes the new default | 18:04 |
jlvillal | mnaser, looking... | 18:04 |
AJaeger | any config-core around to review https://review.openstack.org/545234 to simplify our static publish jobs , please? | 18:04 |
jlvillal | mnaser, What am I looking for? | 18:04 |
mnaser | jlvillal: drawing general inspiration for future maybe :P but it's useful code to look at in comparison, pretty much same idea | 18:05 |
mnaser | the change looks fine to me if fungi mentioned the whole pip install thing is ok | 18:05 |
jlvillal | mnaser, Okay. Thanks! | 18:05 |
fungi | well, mainly just explaining why it's done in such a roundabout way in some of our modules right now | 18:05 |
fungi | "okay" is not the term i'd use, but it's a necessary evil in at least some situations | 18:06 |
mnaser | :D | 18:06 |
fungi | particularly if you want to mix distro-packaged python modules and pypi | 18:06 |
clarkb | AJaeger: part of me wonders if we want to do it as it may become easier to sneak a change that dumps the secret through since its not immediately obvious that the secret is being used in the child jobs | 18:07 |
clarkb | but that could be me being overly paranoid of potential situations | 18:07 |
* jlvillal remembers only the paranoid survive :) | 18:07 | |
fungi | as pip historically (and still for a little while longer) will download newer versions of your deps from pypi when you specify --upgrade/-U even if you have locally preinstalled sufficient versions | 18:07 |
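A minimal sketch of the upgrade behaviour being discussed: only call pip when the installed version does not already satisfy the requirement, and pass --upgrade-strategy=only-if-needed explicitly (the flag pip 9 added and pip 10 makes the default) so preinstalled distro dependencies are left alone when they are sufficient. The requirement string is a hypothetical example; this is not how the puppet modules actually wire it up.

```python
#!/usr/bin/env python
# Hedged sketch of "only upgrade if needed": skip pip entirely when the
# installed version already satisfies the requirement.
import subprocess
import sys

import pkg_resources


def install_if_needed(requirement):
    """Shell out to pip only when `requirement` is not already satisfied."""
    try:
        pkg_resources.require(requirement)
        print('%s already satisfied, leaving existing packages alone' % requirement)
    except (pkg_resources.DistributionNotFound, pkg_resources.VersionConflict):
        subprocess.check_call([
            sys.executable, '-m', 'pip', 'install',
            '--upgrade', '--upgrade-strategy', 'only-if-needed',
            requirement,
        ])


if __name__ == '__main__':
    # Hypothetical requirement purely for illustration.
    install_if_needed('gerritbot>=0.3.0')
```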
*** SumitNaiksatam has joined #openstack-infra | 18:07 | |
*** pcichy has joined #openstack-infra | 18:08 | |
clarkb | speaking of pip 10, are they still planning to break the world by not uninstalling things globally? | 18:08 |
clarkb | thats going to be a fun weekend for us :/ | 18:08 |
fungi | not sure | 18:08 |
clarkb | I should resurrect my change to test it | 18:08 |
fungi | but yeah, the problem there crops up when, say, you have something like numpy in your dep tree. it has no manylinux1 wheel on pypi so has to be built from sdist, requiring a number of additional c headers preinstalled and potentially taking far longer than our puppet exec timeout to complete | 18:09 |
AJaeger | clarkb: according to the docs, the child has no access to it ;) | 18:09 |
AJaeger | clarkb: https://docs.openstack.org/infra/zuul/user/config.html#secret has | 18:10 |
fungi | the obvious workaround is to apt install python-numpy or whatever, but then pip will try to upgrade to a newer version, drag down the sdist, and break or timeout rebuilding leaving you with a nonfunctional server | 18:10 |
AJaeger | "Additional pre or post playbooks which appear in child jobs will not have access to the secrets" | 18:10 |
AJaeger | clarkb: ^ | 18:10 |
clarkb | fungi: https://review.openstack.org/513825 new patchset that undoes some of my earlier workarounds so that we can test more vanilla pip 10 behavior | 18:11 |
fungi | clarkb: thanks! | 18:11 |
clarkb | AJaeger: aha! that answers that then | 18:11 |
*** ralonsoh has quit IRC | 18:12 | |
*** pcichy has quit IRC | 18:12 | |
clarkb | fungi: in our case pyyaml and python-psutil were breaking in devstack | 18:12 |
*** pcichy has joined #openstack-infra | 18:12 | |
clarkb | fungi: because we install pyyaml in devstack-gate for features.yaml support from the distro and not pypi and python-psutil is part of devstack's debs and rpms lists but then python packages we pip install dep on it too and try to upgrade it | 18:13 |
clarkb | fungi: we should know soon enough how broken it is based on 513825's results | 18:13 |
openstackgerrit | Merged openstack-infra/project-config master: Zuul templates for new project tatu-dashboard. https://review.openstack.org/538307 | 18:14 |
openstackgerrit | Merged openstack-infra/project-config master: Switch notifications for stable branches to match all https://review.openstack.org/546690 | 18:14 |
*** oidgar has quit IRC | 18:14 | |
clarkb | anyone else want to review 545234 for AJaeger before I approve it? | 18:14 |
fungi | clarkb: given that the default upgrade strategy is now only-if-needed in 10, that may no longer happen | 18:15 |
clarkb | fungi: ya though it was broken on pip 10 when I first tested it. Do you know if that is a recent pip 10 change? | 18:15 |
clarkb | oh actually I bet its constraints burning us there | 18:15 |
AJaeger | yeah, we can get rid of jenkins/data now - infra-root, one bindep change: https://review.openstack.org/#/c/543142/ and one for config-core on openstack-zuul-jobs: https://review.openstack.org/#/c/543141/ - please review | 18:15 |
clarkb | upper constraints is almost always going to be an upgrade required compared to the distro packages | 18:15 |
*** slaweq has joined #openstack-infra | 18:15 | |
*** gfidente has quit IRC | 18:16 | |
AJaeger | once those two are in, we can merge https://review.openstack.org/#/c/543140/ | 18:16 |
*** ykarel|away has quit IRC | 18:16 | |
fungi | clarkb: oh, yes, so the upgrade strategy default changed in decemberish i think, but also constraints is likely an issue if we specify a different version than the distro is packaging | 18:16 |
fungi | clarkb: i stand corrected. looks like it may have gone in as early as may of last year https://github.com/pypa/pip/pull/4500 | 18:17 |
clarkb | assuming pip doesn't change this behavior they are effectively saying you must use virtualenvs | 18:17 |
fungi | so yeah, probably constraints is the issue | 18:17 |
mordred | clarkb, fungi: maybe we should stop installing python things from distro packages in devstack or devstack-gate | 18:17 |
clarkb | mordred: that was my workaround | 18:17 |
mordred | clarkb: awesome. I think it's not a workaround- I think it's the correctthing | 18:17 |
clarkb | mordred: problem is you can't completely avoid it in all cases | 18:18 |
clarkb | because cloud-init for example | 18:18 |
clarkb | we don't have it but other people running devstack will | 18:18 |
clarkb | and I disagree I think pip's behavior is completely the wrong thing | 18:18 |
clarkb | you may as well just error if not installing to a virtualenv now | 18:18 |
*** dklyle has joined #openstack-infra | 18:18 | |
mordred | oh - well, no arguments from me there | 18:18 |
fungi | i'm still not convinced that `sudo pip install ...` is a sane strategy anyway. using virtualenvs and avoiding mixing package managers makes a lot more sense to me but that train has sailed i guess | 18:18 |
clarkb | or give us a flag to say we really know what we are doing and leave us alone | 18:19 |
pabelanger | fungi: yah, agree | 18:19 |
clarkb | fungi: ya the problem is it has been a supported configuration for as long as pip has existed | 18:19 |
mordred | I just meant in our repos we should basicaly never install python from distro packages because we install things from pip globally as well - and doing both is *always* going to break something | 18:19 |
clarkb | you can't just remove that functionality imo | 18:19 |
clarkb | maybe disable it by default | 18:19 |
*** dhajare has quit IRC | 18:19 | |
pabelanger | install everything in to virtualenv, update systemd unit files as needed, that's worked well so far with testing I've been doing | 18:19 |
fungi | clarkb: a "supported" configuration where the upstream pip maintainers for the past decade have basically said "you really shouldn't do this but..." | 18:19 |
clarkb | fungi: then they shouldn't have added the feature :P | 18:20 |
clarkb | fungi: the problem is the ship sailed once it was allowed and people used it | 18:20 |
mordred | ++ | 18:20 |
fungi | i agree on that point | 18:20 |
fungi | i don't think they realized what a problem it would grow into | 18:20 |
clarkb | pabelanger: there are weird corner cases where it doesn't work like what we had with dib | 18:20 |
clarkb | pabelanger: these are often fixable its just not something you can rely on when pulling python packages | 18:21 |
fungi | and once they did and better solutions were devised, they couldn't easily take it back | 18:21 |
clarkb | pabelanger: also does that mean you are volunteering to fix devstack? :P | 18:21 |
*** david-lyle has quit IRC | 18:22 | |
*** yamahata has quit IRC | 18:22 | |
clarkb | fwiw on debuntu it really isn't an issue | 18:22 |
mordred | clarkb: you know - I wonder if devstack would work if we just added --user to the pip install commands | 18:22 |
clarkb | mordred: you'd have to update all the unit file paths too for example but ya its doable | 18:23 |
clarkb | and then fix any dib sort corner cases if you run into them | 18:23 |
mordred | ugh. unit files | 18:23 |
clarkb | (the joy of systemd requiring fully rooted paths) | 18:23 |
mordred | \o/ | 18:23 |
clarkb | also this will break how we install zuul and nodepool etc etc etc | 18:24 |
clarkb | its going to be a fairly painful transition | 18:24 |
openstackgerrit | Merged openstack-infra/puppet-httpd master: Allow access to docroot for proxy exclusions https://review.openstack.org/547062 | 18:25 |
openstackgerrit | Merged openstack-infra/openstack-zuul-jobs master: remove legacy-rally-dsvm-fakevirt-heat https://review.openstack.org/546699 | 18:25 |
*** slaweq has quit IRC | 18:25 | |
mordred | clarkb: I'm still fully catching up - what does it break about our zuul/nodepool installs? | 18:25 |
mordred | clarkb: or, rather, what's the behavior change? | 18:26 |
clarkb | mordred: the breakage is if you pip install nodepool and it needs to upgrade some package that is system installed pip says no I won't do it and your install fails | 18:26 |
openstackgerrit | Fabien Boucher proposed openstack-infra/zuul master: Tenant config can be read from an external script https://review.openstack.org/535878 | 18:26 |
clarkb | since we install zuul and nodepool globally and can't completely control the fact that there are common python system libs we lose | 18:26 |
clarkb | so basically anywhere we overlap in dependencies between system and $application we are likely to eventually break due to pip 10 | 18:27 |
mordred | I thought we'd already stopped installing python libs from distro packages for nodepool/zuul | 18:27 |
*** lpetrut has joined #openstack-infra | 18:27 | |
clarkb | for our things yes | 18:27 |
clarkb | but nothing stops $package from pulling things in | 18:27 |
mordred | nod | 18:27 |
clarkb | I expect that this will be particularly painful on centos | 18:27 |
clarkb | because yum | 18:27 |
clarkb | fedora's switch to dnf in theory makes this better? | 18:27 |
*** tesseract has quit IRC | 18:28 | |
fungi | does dnf use different installation paths? | 18:28 |
clarkb | fungi: no it just doesn't use python | 18:28 |
openstackgerrit | John L. Villalovos proposed openstack-infra/puppet-gerritbot master: Change gerritbot to install from git https://review.openstack.org/546700 | 18:28 |
fungi | aha, got it | 18:28 |
clarkb | on ubuntu our biggest headache has been cloud init which we personally do not use but others do | 18:28 |
pabelanger | clarkb: ha, I mean adding it to devstack would be a good adventure | 18:28 |
fungi | so saying yum itself depends on a bunch of rpms of python libs | 18:28 |
clarkb | fungi: ya | 18:28 |
*** gfidente has joined #openstack-infra | 18:29 | |
fungi | this is one of the reasons red hat has persisted in stating that you shouldn't run any non-rh-packaged python applications using the distro-installed interpreter | 18:29 |
pabelanger | clarkb: but, I'd be willing to try using virtualenv directly in devstack. Maybe find some time at PTG to test it out | 18:30 |
fungi | and instead have your own python build with your own lib search path and install your python apps and deps in there | 18:30 |
*** lamt has quit IRC | 18:30 | |
clarkb | pabelanger: ya in theory devstack supports it but I have no idea if it works or if it was updated to use unit files | 18:30 |
fungi | the party line, at least for years, seems to have been that the packaged python interpreter and libraries are only there to support packaged python tools and applications shipped as a part of the distro | 18:31 |
fungi | which i think is also how they justified not having python 3 available even as recently as rhel 7 | 18:32 |
clarkb | personally I think the best approach would be to disable the uninstall behavior by default with a warning that pip 11 will completely remove the functionality | 18:32 |
clarkb | then give people ability to fix things on pip 10 by enabling the feature | 18:32 |
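A hedged approximation of the pip 10 failure mode clarkb is describing: pip refuses to uninstall (and therefore upgrade in place) packages installed by plain distutils because they ship no file manifest. This sketch only mimics that check by looking for RECORD or installed-files.txt metadata; pip's real logic lives in pip._internal and may differ in detail.

```python
#!/usr/bin/env python
# Rough approximation (not pip's actual code) of why pip 10 refuses to
# upgrade some system packages: without a RECORD (.dist-info installs) or
# installed-files.txt (.egg-info installs) pip has no manifest of files,
# so it cannot safely remove the old version before installing the new one.
import pkg_resources


def pip_can_probably_uninstall(dist):
    """True if the distribution ships a file manifest pip can uninstall from."""
    try:
        return (dist.has_metadata('RECORD') or
                dist.has_metadata('installed-files.txt'))
    except Exception:
        return False


if __name__ == '__main__':
    for dist in sorted(pkg_resources.working_set,
                       key=lambda d: d.project_name.lower()):
        if not pip_can_probably_uninstall(dist):
            print('%s %s looks distutils-installed; pip 10 will likely '
                  'refuse to upgrade it in place'
                  % (dist.project_name, dist.version))
```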
openstackgerrit | Merged openstack-infra/openstack-zuul-jobs master: Set openstack-infra-multinode-integration-fedora-26 non-voting https://review.openstack.org/547026 | 18:33 |
*** openstackgerrit has quit IRC | 18:33 | |
AJaeger | clarkb: shall I +A 545234 or anybody else reviewing the static publish job changes? | 18:38 |
clarkb | AJaeger: I was hoping others would give it a look over since it changes a security related item but I'm fairly comfortable with it based on the docs at least | 18:38 |
AJaeger | mordred, clarkb : we have updated images, could you review https://review.openstack.org/#/c/543141 and https://review.openstack.org/#/c/543142/ , please? | 18:39 |
*** slaweq has joined #openstack-infra | 18:39 | |
AJaeger | clarkb: I can wait until tomorrow morning my time and then approve if nobody reviews - ok? | 18:39 |
*** pcichy has quit IRC | 18:39 | |
clarkb | AJaeger: wfm | 18:39 |
AJaeger | ack | 18:40 |
*** openstackgerrit has joined #openstack-infra | 18:40 | |
openstackgerrit | Merged openstack-infra/project-config master: Migrate to Storyboard for Qinling related projects https://review.openstack.org/540681 | 18:40 |
openstackgerrit | Merged openstack-infra/project-config master: Add individual core group for openstack-ansible-os_neutron https://review.openstack.org/546577 | 18:40 |
*** claudiub has quit IRC | 18:41 | |
*** slaweq has quit IRC | 18:43 | |
clarkb | mordred: http://logs.openstack.org/25/513825/6/check/tempest-full/ec9c70b/job-output.txt.gz#_2018-02-22_18_34_18_732384 that shows you how pip 10 will break | 18:47 |
clarkb | and ^ confirms the behavior has not changed so we should be prepared for much of our world to be broken once pip 10 happens | 18:47 |
fungi | ugh | 18:47 |
openstackgerrit | Merged openstack-infra/project-config master: Add OSA nspawn host/container repo base jobs https://review.openstack.org/546294 | 18:47 |
mnaser | gerrit just 502'd | 18:48 |
mnaser | for what its worth | 18:48 |
clarkb | looks like could be GC related looking at melody | 18:49 |
clarkb | if it persists we'll probably have to restart the server | 18:49 |
mnaser | it doesn't bother me to f5 but i just want to give a heads up in case you're tracking anything | 18:49 |
fungi | clarkb: what's the temporary workaround flag for the refusal to uninstall distutils-based packages? | 18:49 |
clarkb | fungi: I don't think there is one | 18:49 |
clarkb | fungi: thats my grump | 18:50 |
mpeterson | hey, I have a set of jobs that are not generating the ARA html, I think it might be a bug of the role definition but I don't have enough visibility in the logs to be able to see what ansible is doing.. can someone check? https://review.openstack.org/#/c/517359/ any of the functional or fullstack jobs | 18:50 |
clarkb | mpeterson: we only generate them on failure | 18:50 |
fungi | clarkb: oh, thought you said there was a temporary one which would be removed in pip 11 | 18:50 |
clarkb | fungi: no I'm saying thats what pip should do but as far as I know is not doing that | 18:50 |
fungi | ahh | 18:50 |
*** yamamoto has joined #openstack-infra | 18:50 | |
fungi | :( | 18:50 |
clarkb | mpeterson: and looks like functional and fullstack are successful job runs so won't have ara output. We did this to conserve inodes on the log filesystem :/ there is work to have ara use a database file rather than many tiny files | 18:51 |
clarkb | dmsimard: ^ is that work done? maybe we can enable ara on successes now? | 18:51 |
clarkb | pabelanger: we should probably pick a night for the team dinner soonish | 18:52 |
mpeterson | clarkb: oh, gotcha! thanks for the explanation, since they are also useful when jobs have succeeded | 18:52 |
*** dtantsur is now known as dtantsur|afk | 18:52 | |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add fedora-27 devstack / tempest jobs https://review.openstack.org/547107 | 18:52 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove fedora-26 devstack / tempest testing https://review.openstack.org/547108 | 18:52 |
*** pbourke_ has quit IRC | 18:53 | |
pabelanger | clarkb: yah, last I looked Tuesday seem to be the winner | 18:53 |
*** yamamoto has quit IRC | 18:56 | |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Switch devstack / tempest testing to fedora-27 https://review.openstack.org/547109 | 18:56 |
*** dhajare has joined #openstack-infra | 18:58 | |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove fedora-26 devstack / tempest testing https://review.openstack.org/547108 | 18:58 |
*** yamamoto has joined #openstack-infra | 18:59 | |
*** slaweq has joined #openstack-infra | 19:00 | |
openstackgerrit | Eric Kao proposed openstack-infra/system-config master: Start logging openstack-self-healing channel https://review.openstack.org/547110 | 19:03 |
*** yamamoto has quit IRC | 19:04 | |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Change windmill jobs from fedora-26 to fedora-27 https://review.openstack.org/547112 | 19:04 |
*** slaweq has quit IRC | 19:05 | |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Switch bindep testing to fedora-27 https://review.openstack.org/547113 | 19:06 |
pabelanger | AJaeger: thanks for review, will fix in a moment | 19:07 |
clarkb | here is an interesting side effect to the pip 10 behavior | 19:08 |
clarkb | the python3 job appears to work because we install python2 system packages :) | 19:08 |
*** gfidente has quit IRC | 19:09 | |
AJaeger | pabelanger: can I trade in https://review.openstack.org/#/c/543141 and https://review.openstack.org/#/c/543142 , please? ;) | 19:10 |
*** links has quit IRC | 19:11 | |
*** amoralej is now known as amoralej|off | 19:12 | |
*** wolverineav has quit IRC | 19:12 | |
*** wolverineav has joined #openstack-infra | 19:13 | |
*** wolverineav has quit IRC | 19:13 | |
*** wolverineav has joined #openstack-infra | 19:14 | |
*** yamamoto has joined #openstack-infra | 19:14 | |
mnaser | infra-root: how much of a hassle is it to run a manual mirror update for ubuntu? lxc released new images (with newer packages) and mirrors aren't updated yet so installs are broken in the lxc containers for OSA | 19:16 |
*** wolverineav has quit IRC | 19:16 | |
*** olaph has quit IRC | 19:16 | |
*** wolverineav has joined #openstack-infra | 19:17 | |
clarkb | mnaser: it should happen automatically every 4 hours now that pabelanger fixed the mirrors | 19:17 |
*** olaph has joined #openstack-infra | 19:17 | |
clarkb | pabelanger: ^ maybe a lock was held? | 19:17 |
mnaser | clarkb: oh i thought mirrors were every 24 hours for some reason, my bad | 19:17 |
AJaeger | pabelanger: thanks | 19:19 |
clarkb | mnaser: that said do we even mirror lxc images? | 19:19 |
clarkb | oh wait I get it | 19:19 |
*** yamamoto has quit IRC | 19:19 | |
mnaser | clarkb: :) | 19:19 |
mnaser | new images with old mirrors | 19:19 |
clarkb | it's the new images (which we don't mirror) using new packages that we don't have yet, so updates don't work | 19:19 |
mnaser | i was investigating mirroring them right now | 19:20 |
mnaser | and how much space that costs | 19:20 |
mnaser | looks like its a ~100M or so pull for every job | 19:20 |
*** rossella_s has quit IRC | 19:20 | |
clarkb | mnaser: we'd probably want to cache them rather than properly mirror them like we do with docker images | 19:20 |
clarkb | mnaser: http://mirror.dfw.rax.openstack.org/ubuntu/timestamp.txt last update was about an hour ago | 19:21 |
mnaser | clarkb: that would probably make things much easier then, because the urls are timestamped for the images so easy invalidation | 19:21 |
AJaeger | mnaser: could I trouble you for another review, please? https://review.openstack.org/#/c/543141 is ready... | 19:23 |
*** rossella_s has joined #openstack-infra | 19:23 | |
mnaser | AJaeger: is bindep-fallback confirmed to exist by any jobs right now | 19:24 |
AJaeger | mnaser: yes, see https://review.openstack.org/#/c/543142 | 19:24 |
AJaeger | it only works since we have rebuilt images with that one in now | 19:25 |
mnaser | AJaeger: ok cool makes sense! | 19:26 |
mnaser | +A | 19:26 |
AJaeger | thanks, mnaser | 19:27 |
*** rossella_s has quit IRC | 19:28 | |
*** xarses has joined #openstack-infra | 19:28 | |
AJaeger | pabelanger: barbican and bifrost have jobs which use legacy-fedora-26 | 19:29 |
*** yamamoto has joined #openstack-infra | 19:29 | |
pabelanger | clarkb: I can look | 19:31 |
clarkb | pabelanger: I checked the timestamp and I think we are good | 19:31 |
pabelanger | okay | 19:31 |
clarkb | pabelanger: sorry didn't make that clear | 19:31 |
pabelanger | np | 19:31 |
pabelanger | AJaeger: kk, I'll update them shortly | 19:31 |
*** timburke_ is now known as timburke | 19:32 | |
*** rossella_s has joined #openstack-infra | 19:32 | |
*** yamamoto has quit IRC | 19:34 | |
openstackgerrit | Merged openstack-infra/openstack-zuul-jobs master: Move jenkins/data/bindep-fallback.txt https://review.openstack.org/543141 | 19:36 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add fedora-27 devstack / tempest jobs https://review.openstack.org/547107 | 19:37 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove fedora-26 devstack / tempest testing https://review.openstack.org/547108 | 19:37 |
*** VW_ has quit IRC | 19:37 | |
*** olaph1 has joined #openstack-infra | 19:38 | |
*** VW has joined #openstack-infra | 19:38 | |
corvus | clarkb, fungi, mordred: if we have to install zuul/nodepool in venvs for pip10 we'll just need to have our deployment tooling symlink all their entrypoint scripts into /usr/local/bin | 19:38 |
clarkb | I guess the othe roption is to have distros use setuptools and not use distutils | 19:39 |
clarkb | but thats probably not going to happen on xenial and may be too late for whatever b is | 19:39 |
*** olaph has quit IRC | 19:39 | |
fungi | corvus: yeah, that sounds reasonable (and is basically the approach i take when pip installing things on my personal systems) | 19:43 |
fungi | install into a virtualenv, symlink into the venv's bindir from somewhere in the default command search path | 19:43 |
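A minimal sketch of the venv-plus-symlink approach corvus and fungi describe above, assuming a hypothetical venv path, package and console script names; real deployment tooling would presumably drive this from puppet rather than a standalone script.

```python
#!/usr/bin/env python3
# Hedged sketch of installing an application into its own venv and exposing
# its console scripts on the default command search path. Paths and script
# names are illustrative assumptions; needs root for /usr/local/bin.
import os
import subprocess
import venv

VENV = '/opt/venvs/nodepool'                  # assumed install prefix
BIN_DIR = '/usr/local/bin'                    # assumed spot on the default PATH
SCRIPTS = ('nodepool', 'nodepool-builder')    # assumed entry points to expose


def main():
    # Create the venv and install the application into it.
    venv.EnvBuilder(with_pip=True, clear=False).create(VENV)
    subprocess.check_call([os.path.join(VENV, 'bin', 'pip'), 'install', 'nodepool'])
    # Symlink the venv's console scripts into the default PATH so callers
    # (including systemd units, which need absolute paths) keep stable paths.
    for script in SCRIPTS:
        target = os.path.join(BIN_DIR, script)
        if not os.path.islink(target):
            os.symlink(os.path.join(VENV, 'bin', script), target)


if __name__ == '__main__':
    main()
```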
*** yamamoto has joined #openstack-infra | 19:44 | |
*** agopi has quit IRC | 19:45 | |
*** dhajare has quit IRC | 19:47 | |
mnaser | clarkb: apparently they're already mirrored here https://github.com/openstack-infra/system-config/blob/master/modules/openstack_project/templates/mirror.vhost.erb#L166-L169 | 19:50 |
mnaser | or cached rather | 19:50 |
AJaeger | infra-root, could I get another review on this bindep change to help get rid of jenkins/data, please? https://review.openstack.org/#/c/543142 | 19:50 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove fedora-26 devstack / tempest testing https://review.openstack.org/547108 | 19:50 |
AJaeger | pabelanger: with https://review.openstack.org/#/c/546848/ failing, are we really ready to make fedora-27 voting yet? | 19:51 |
*** yamamoto has quit IRC | 19:52 | |
pabelanger | AJaeger: no, that need to be rebased into non-voting | 19:52 |
pabelanger | AJaeger: dmsimard is working on fixing | 19:52 |
AJaeger | thanks, dmsimard | 19:52 |
AJaeger | pabelanger: ok | 19:52 |
*** rossella_s has quit IRC | 19:52 | |
dmsimard | I'll look now, I have held nodes from the last failure -- not sure yet what the issue is | 19:52 |
*** rossella_s has joined #openstack-infra | 19:53 | |
dmsimard | welp, it hasn't even been held that long: There were 48439 failed login attempts since the last successful login. | 19:54 |
pabelanger | mnaser: yah, that was added a while back. Is there an issue? | 19:56 |
mnaser | pabelanger: i'm not sure i understand the failure fully; installs inside osa containers fail with "debhelper : Depends: dh-strip-nondeterminism (>= 0.028~) but it is not going to be installed" | 19:57 |
*** mriedem has quit IRC | 19:57 | |
*** mriedem has joined #openstack-infra | 19:57 | |
mnaser | if we are using cached images and download an older image and install packages with new mirrors, it should just update without an issue | 19:57 |
mnaser | if we download a newer image and have old repos, that might be the cause i can imagine | 19:57 |
*** olaph1 has quit IRC | 19:58 | |
fungi | http://mirror.dfw.rax.openstack.org/ubuntu/timestamp.txt suggests it updated ~1.5 hours ago | 19:58 |
*** olaph has joined #openstack-infra | 19:59 | |
pabelanger | mnaser: have a log I can look at? | 20:00 |
fungi | yeah, that's waaay newer than what packages.ubuntu.com claims is available on xenial at least (even xenial-backports) | 20:00 |
fungi | looks like bionic | 20:01 |
fungi | https://packages.ubuntu.com/bionic/debhelper | 20:01 |
*** bmjen has quit IRC | 20:01 | |
fungi | depends dh-strip-nondeterminism (>= 0.028~) | 20:01 |
fungi | mnaser: are you trying to use bionic images? | 20:01 |
*** slaweq has joined #openstack-infra | 20:01 | |
*** Hunner has quit IRC | 20:01 | |
*** Hunner has joined #openstack-infra | 20:02 | |
*** Hunner has quit IRC | 20:02 | |
*** Hunner has joined #openstack-infra | 20:02 | |
mnaser | i don't think so, but maybe this was a recent osa bug or change | 20:02 |
fungi | yeah, that looks like it's installing or upgrading to the debhelper in bionic | 20:02 |
fungi | but a log would help | 20:02 |
*** yamahata has joined #openstack-infra | 20:03 | |
mnaser | fungi: http://logs.openstack.org/44/545844/1/gate/openstack-ansible-deploy-ceph-ubuntu-xenial/0095672/logs/ara/result/d268817e-7b12-41f9-962e-e20f5bd00ec7/ | 20:03 |
*** slaweq_ has joined #openstack-infra | 20:03 | |
mnaser | i can point you to the direct log but i didnt see much more interesting info (and its cleaner in ara) | 20:03 |
mnaser | let me see what image was being used | 20:04 |
openstackgerrit | sebastian marcet proposed openstack-infra/openstackid-resources master: Added add new track by summit endpoint https://review.openstack.org/547125 | 20:04 |
*** bmjen has joined #openstack-infra | 20:05 | |
openstackgerrit | Merged openstack-infra/bindep master: Move jenkins/data/bindep-fallback.txt https://review.openstack.org/543142 | 20:05 |
*** slaweq has quit IRC | 20:06 | |
mnaser | ok so job is running on xenial according to zuul-info | 20:06 |
AJaeger | config-core, time to say good-bye to jenkins/data - who wants to do the honor? https://review.openstack.org/#/c/543140 | 20:06 |
pabelanger | fungi: mnaser: we do have bionic repos now on AFS | 20:07 |
mnaser | i'm trying to make sense of whats going on exactly with this failure | 20:07 |
*** slaweq_ has quit IRC | 20:07 | |
*** rossella_s has quit IRC | 20:08 | |
*** rossella_s has joined #openstack-infra | 20:10 | |
dmsimard | pabelanger: I've reproduced the f26 issue, not sure what's causing it yet though | 20:11 |
pabelanger | xenial|main|amd64: debhelper 9.20160115ubuntu3 | 20:13 |
pabelanger | mnaser: fungi: ^version of debhelper in reprepro | 20:13 |
pabelanger | looks to be insync with packages.ubuntu.com | 20:14 |
mnaser | pabelanger: what about dh-strip-nondeterminism, the dependency its failing on? | 20:14 |
pabelanger | xenial|main|amd64: dh-strip-nondeterminism 0.015-1 | 20:14 |
*** bmjen has quit IRC | 20:15 | |
mnaser | um | 20:15 |
*** bmjen has joined #openstack-infra | 20:15 | |
pabelanger | which is same as https://packages.ubuntu.com/xenial/dh-strip-nondeterminism | 20:15 |
openstackgerrit | Merged openstack-infra/openstackid-resources master: Added add new track by summit endpoint https://review.openstack.org/547125 | 20:15 |
mnaser | "debhelper : Depends: dh-strip-nondeterminism (>= 0.028~) but it is not going to be installed" | 20:15 |
mnaser | i wonder where it's getting that from | 20:15 |
pabelanger | mnaser: so, maybe repo files are not setup properly and using bionic now? | 20:15 |
fungi | mnaser: that's trying to resolve the dependencies for the version of debhelper in bionic | 20:16 |
jlvillal | clarkb, Made the suggested changes and it is passing the gate now: puppet-gerritbot: https://review.openstack.org/#/c/546700/ | 20:16 |
jlvillal | As an FYI | 20:16 |
mnaser | fungi: ok thats interesting | 20:17 |
persia | Does something automatically pull from backports? Usual Ubuntu developer practice is to backport things like devscripts, debhelper, etc. into Ubuntu Backports so folk can run the current release whilst working on the next release. | 20:17 |
* persia remembers seeing backports not being commented out in some mirror scripts, but doesn't know if it ever gets used | 20:17 | |
fungi | persia: in this case the dep doesn't match the version in either xenial or xenial-backports, but matches the version in bionic | 20:18 |
fungi | i expect there's an incorrect sources.list | 20:18 |
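A small diagnostic sketch for the incorrect-sources.list theory: list every apt source on a node and flag suites that do not belong to the expected release. The paths and the expected release name are assumptions for illustration; as the discussion below shows, the extra UCA source turned out to be involved rather than a bionic entry.

```python
#!/usr/bin/env python3
# Hedged sketch: print origin and suite for every deb line in the apt
# sources, marking suites that do not start with the expected release.
import glob
import re

EXPECTED_RELEASE = 'xenial'
SOURCE_FILES = (['/etc/apt/sources.list'] +
                glob.glob('/etc/apt/sources.list.d/*.list'))

DEB_LINE = re.compile(r'^\s*deb(?:-src)?\s+(?:\[[^\]]*\]\s+)?(\S+)\s+(\S+)')

for path in SOURCE_FILES:
    try:
        with open(path) as handle:
            for lineno, line in enumerate(handle, 1):
                match = DEB_LINE.match(line)
                if not match:
                    continue
                url, suite = match.groups()
                marker = ('' if suite.startswith(EXPECTED_RELEASE)
                          else '  <-- unexpected suite')
                print('%s:%d: %s %s%s' % (path, lineno, url, suite, marker))
    except IOError:
        pass
```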
*** slaweq has joined #openstack-infra | 20:22 | |
clarkb | jlvillal: one small thing | 20:22 |
jlvillal | clarkb, looking | 20:23 |
*** slaweq_ has joined #openstack-infra | 20:24 | |
jlvillal | clarkb, I can change it: I did see this: https://github.com/openstack-infra/puppet-statusbot/blob/master/manifests/init.pp#L61-L66 | 20:25 |
jlvillal | clarkb, I'm no expert | 20:25 |
jlvillal | clarkb, What you say makes sense to me as it is a chain of subscribes | 20:26 |
clarkb | jlvillal: ah I think status bot might be wrong there too then | 20:26 |
clarkb | basically the git repo doesn't matter so much there as does the install itself so its better to explicitly call that out I think | 20:26 |
*** dprince has quit IRC | 20:27 | |
jlvillal | clarkb, Sounds good. Updating now. | 20:27 |
mtreinish | pabelanger, clarkb, fungi: if you get a sec https://review.openstack.org/534953 is a super easy review | 20:28 |
openstackgerrit | John L. Villalovos proposed openstack-infra/puppet-gerritbot master: Change gerritbot to install from git https://review.openstack.org/546700 | 20:28 |
jlvillal | clarkb, ^^ Hopefully Zuul likes it :) | 20:29 |
*** slaweq_ has quit IRC | 20:29 | |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Refactor status functions, add web endpoints, allow params https://review.openstack.org/536301 | 20:31 |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: Add separate modules for management commands https://review.openstack.org/536303 | 20:31 |
openstackgerrit | Matthieu Huin proposed openstack-infra/nodepool master: webapp: add optional admin endpoint https://review.openstack.org/536319 | 20:32 |
openstackgerrit | David Moreau Simard proposed openstack-infra/openstack-zuul-jobs master: Attempt to rescue lost SSH connection when restarting iptables https://review.openstack.org/547130 | 20:35 |
dmsimard | pabelanger: attempt at fixing, dunno if it'll work ^ | 20:35 |
dmsimard | I don't understand the specificity to Fedora | 20:36 |
mnaser | fungi: i'm a bit confused at how the broken sources.list can cause this | 20:36 |
*** rossella_s has quit IRC | 20:36 | |
dmsimard | but even without ansible involved, if I just restart iptables over ssh, I lose connectivity | 20:36 |
mnaser | if the sources.list pointed to bionic, it would pull debhelper from bionic and install that dependency. if it pointed to xenial, well, that would never be an issue in the first place | 20:37 |
mnaser | dmsimard: order of iptables rule might have the DROP rule before accept? | 20:37 |
odyssey4me | pabelanger mnaser interestingly, the sources are right: http://logs.openstack.org/73/546773/1/check/openstack-ansible-functional-ubuntu-xenial/e232748/logs/etc/openstack/openstack1/apt/sources.list.txt.gz | 20:37 |
mnaser | so as rules are being reinstalled, DROP rules come in and block everything, then ACCEPT a few seconds later | 20:37 |
odyssey4me | the only other extra is http://logs.openstack.org/73/546773/1/check/openstack-ansible-functional-ubuntu-xenial/e232748/logs/etc/openstack/openstack1/apt/sources.list.d/uca.list.txt.gz | 20:37 |
*** SumitNaiksatam has quit IRC | 20:38 | |
dmsimard | mnaser: I don't know... rules are here: http://logs.openstack.org/15/536615/2/check/openstack-infra-multinode-integration-fedora-26/139d9d2/ara/result/c36376e0-25a2-467d-870d-eb09679d09d1/ | 20:38 |
dmsimard | mnaser: default policy is accept | 20:38 |
dmsimard | mnaser: someone mentioned "kernel" here but I doubt that would be it: https://github.com/ansible/ansible/issues/20033 | 20:38 |
pabelanger | dmsimard: I usually use reload on ubuntu, doesn't fedora have the same? | 20:38 |
*** rossella_s has joined #openstack-infra | 20:39 | |
dmsimard | pabelanger: reload has the same behavior, breaks connectivity as well | 20:39 |
odyssey4me | pabelanger mnaser aha, it's UCA that's broken it | 20:39 |
dmsimard | brb, picking up kid from school | 20:39 |
pabelanger | dmsimard: ouch | 20:39 |
mnaser | thats weird.. im not sure why it wouldn't come back, looks like it drops all connections | 20:39 |
mnaser | odyssey4me: oh? | 20:39 |
odyssey4me | mnaser pabelanger http://paste.openstack.org/show/682458/ | 20:40 |
odyssey4me | see the dep specification in line 12 | 20:40 |
openstackgerrit | John L. Villalovos proposed openstack-infra/puppet-statusbot master: Have the 'statusbot' service subscribe to the pip install https://review.openstack.org/547132 | 20:40 |
jlvillal | clarkb, ^^ Made suggested change to puppet-statusbot too | 20:40 |
clarkb | re specificity to fedora is it using firewalld? | 20:40 |
mnaser | odyssey4me: guess we can go back to #openstack-ansible | 20:40 |
clarkb | because firewalld is weird and does break things in unexpected ways | 20:40 |
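Whether firewalld or the plain iptables service owns the ruleset, and what the rules look like before a restart, can be checked directly; the sketch below also restarts the service detached from the SSH session so the flush/re-apply finishes even if the connection drops partway through (service names are the Fedora defaults and are an assumption here):

    # Which firewall service, if any, is active on this node?
    systemctl is-active firewalld iptables

    # Dump current rules and default policies before touching anything
    sudo iptables -S

    # Restart detached from the SSH session so the reload completes even if
    # this connection is cut while rules are being flushed and re-applied
    sudo nohup systemctl restart iptables >/dev/null 2>&1 &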
fungi | mnaser: i was theorizing that an image with xenial packages but misconfigured with a sources.list for bionic would create this effect, but forgot that uca was yet another repository we hadn't considered checking yet | 20:41 |
mnaser | fungi: yeah uca is the culprit here :( | 20:42 |
openstackgerrit | Matthieu Huin proposed openstack-infra/zuul-jobs master: role: Inject public keys in case of failure https://review.openstack.org/535803 | 20:42 |
odyssey4me | fungi mnaser and the package UCA is providing for that dep is Version: 0.040-1~cloud2 - clearly a later version, so something is hinky | 20:42 |
openstackgerrit | Merged openstack-infra/release-tools master: Add StoryBoard filter and tag tools https://review.openstack.org/534808 | 20:43 |
dmsimard | clarkb: we have iptables in the fedora DIB images, not firewalld | 20:43 |
pabelanger | let me check reprepro for UCA | 20:43 |
AJaeger | team, it's time to say good-bye to jenkins/data - could I get another review on https://review.openstack.org/#/c/543140 , please? | 20:43 |
mtreinish | clarkb: thanks | 20:43 |
clarkb | dmsimard: right but is firewalld applying them? | 20:43 |
fungi | our uca mirror is up to date too at least, looks like: http://mirror.bhs1.ovh.openstack.org/ubuntu-cloud-archive/timestamp.txt | 20:43 |
dmsimard | clarkb: don't think so, but I'll double check. | 20:44 |
pabelanger | UCA mirror in reprepro is working | 20:44 |
fungi | yeah, it's current as of 45 minutes ago | 20:44 |
mnaser | so to sum it up | 20:45 |
mnaser | debhelper is being pulled from uca which depends on dh-strip-nondeterminism (>= 0.028~), dh-strip-nondeterminism_0.040-1~cloud2 exists in uca but apt doesn't like it (i guess) | 20:45 |
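When the needed version clearly exists in the repository but apt still refuses it, a simulated install with the resolver debug option usually shows which constraint or pin is rejecting the candidate; for example (nothing is installed, -s only simulates):

    # Dry-run the install with the dependency resolver's debug output enabled
    sudo apt-get -s -o Debug::pkgProblemResolver=yes install debhelper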
fungi | yep, just confirmed looking in http://mirror.bhs1.ovh.openstack.org/ubuntu-cloud-archive/dists/xenial-updates/queens/main/binary-amd64/Packages | 20:46 |
clarkb | AJaeger: where is /usr/local/bindep-fallback/bindep-fallback.txt created? | 20:46 |
pabelanger | aptmethod got 'http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/d/debhelper/debhelper_11.1.4ubuntu1~cloud0_all.deb' | 20:46 |
pabelanger | 2018-02-22T16:00:01,559430775+0000 | 20:46 |
pabelanger | that is when we updated | 20:46 |
fungi | mnaser: so, yes, queens uca seems to be at fault i guess | 20:46 |
AJaeger | clarkb: https://review.openstack.org/#/c/543139/6/nodepool/elements/nodepool-base/post-install.d/89-data-files | 20:46 |
mnaser | fungi: cool, thanks.. time to figure out why | 20:46 |
mnaser | or how to fix | 20:46 |
fungi | mnaser: looks like maybe they build queens uca against bionic package revs instead of xenial | 20:46 |
AJaeger | clarkb: and https://review.openstack.org/#/c/543142/ shows that it's in our images | 20:47 |
clarkb | AJaeger: ty | 20:47 |
clarkb | AJaeger: approved | 20:47 |
AJaeger | thanks, clarkb ! | 20:47 |
*** yamamoto has joined #openstack-infra | 20:48 | |
*** rossella_s has quit IRC | 20:48 | |
fungi | mnaser: though the packages file for xenial queens uca claims to include dh-strip-nondeterminism 0.040-1~cloud2 | 20:48 |
fungi | so interesting that it's not being selected | 20:49 |
AJaeger | config-core, small cleanup for openstack-zuul-job for review, please - https://review.openstack.org/#/c/546307/ | 20:49 |
*** aeng has joined #openstack-infra | 20:50 | |
*** rossella_s has joined #openstack-infra | 20:51 | |
fungi | jamespage: do you happen to know who would be able to tell us why uca's xenial-updates/queens is bringing in a debhelper version from bionic instead of xenial? seems like dh is something you would not want to update in uca, but that could just be my lack of imagination too | 20:52 |
dmsimard | clarkb: yeah firewalld isn't installed, nothing weird in the systemd journal for iptables either | 20:53 |
clarkb | huh | 20:54 |
dmsimard | only fedora is exhibiting that behavior and as far as I know, it wasn't always this way | 20:55 |
*** yamamoto has quit IRC | 20:55 | |
pabelanger | TheJulia: mind adding https://review.openstack.org/547119/ to your review pipeline. It moves bifrost to fedora-27, we'd like to remove fedora-26 nodes. | 20:56 |
dmsimard | in that upstream ansible issue, someone said on RAX it disconnected him and on Linode it didn't, I'm not sure what to make of that | 20:56 |
*** camunoz has joined #openstack-infra | 20:56 | |
*** vivsoni_ has joined #openstack-infra | 20:57 | |
*** vivsoni has quit IRC | 20:57 | |
*** efoley has quit IRC | 20:58 | |
openstackgerrit | Merged openstack-infra/system-config master: Add logstash worker to services table https://review.openstack.org/534953 | 20:58 |
AJaeger | infra-root, do you want to restart zuul some time today? It's getting slower IMHO ... | 20:59 |
pabelanger | memory looks good still | 21:00 |
clarkb | the grafana status page shows it is getting bursty | 21:00 |
clarkb | which appears to be related to the executor governors | 21:00 |
fungi | i need to disappear for a few minutes, but will return soonish | 21:00 |
clarkb | load average and memory spikes which then result in not taking jobs | 21:00 |
pabelanger | reloads are up, around 20s | 21:01 |
mnaser | fungi: i will never understand distro versioning and will never try doing it again :D | 21:01 |
pabelanger | clarkb: no, I think we are spending a lot of time reloading in scheduler, which is starving CPU and preventing new builds from starting | 21:02 |
clarkb | pabelanger: ah that could be too though we are definitely having longish periods where executors stop accepting jobs | 21:02 |
pabelanger | http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=389&rra_id=all | 21:02 |
pabelanger | you can see we are using more CPU | 21:03 |
pabelanger | not sure why however | 21:03 |
clarkb | ~5 minutes at a time | 21:03 |
AJaeger | after my last approval it took 7 minutes until the change was queued in gate | 21:03 |
dmsimard | pabelanger: actually wait_for_connection is a thing now btw :p http://docs.ansible.com/ansible/latest/wait_for_connection_module.html | 21:03 |
dmsimard | pabelanger: but yeah, +1, I'll use that instead | 21:04 |
pabelanger | dmsimard: neat, TIL | 21:04 |
dmsimard | pabelanger: yeah it's more convenient when testing for ssh connectivity | 21:04 |
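For reference, wait_for_connection can also be exercised ad hoc from the command line, which makes the reconnect behaviour easy to test outside a full job run (the host pattern and inventory path here are placeholders):

    # Wait up to two minutes for the target hosts to become reachable again,
    # re-checking every five seconds
    ansible all -i inventory -m wait_for_connection -a "timeout=120 sleep=5"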
clarkb | looks like cpu increase started about 1500UTC? | 21:05 |
AJaeger | config-core, thanks for the reviews - we still have some backlog on the projects, reviews welcome! | 21:05 |
* AJaeger waves good night | 21:05 | |
AJaeger | clarkb: and memory as well | 21:05 |
AJaeger | clarkb: http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=64792&rra_id=all | 21:05 |
clarkb | oh ya | 21:06 |
clarkb | so some config change maybe? | 21:06 |
AJaeger | no idea - I glanced at zuul.o.o at that time but didn't see anything obvious ;/ | 21:06 |
*** r-daneel has quit IRC | 21:06 | |
*** r-daneel has joined #openstack-infra | 21:06 | |
*** mgoddard_ has joined #openstack-infra | 21:07 | |
*** hashar has quit IRC | 21:07 | |
dmsimard | I think it'd be useful to install some diagnostic tools by default, namely iotop and sysstat (for iostat) | 21:08 |
AJaeger | config-core, some change that would be nice to have in are https://review.openstack.org/546695 https://review.openstack.org/546432 https://review.openstack.org/545726 and https://review.openstack.org/537634 | 21:08 |
* AJaeger really leaves for today | 21:08 | |
pabelanger | clarkb: AJaeger: that's around the time when fungi approved the change to clear our zuul errors for OSA project | 21:08 |
dmsimard | pabelanger: wasn't that yesterday ? | 21:08 |
*** slaweq_ has joined #openstack-infra | 21:09 | |
AJaeger | pabelanger: the change was approving another new project creation which then triggered the reconfig | 21:09 |
pabelanger | dmsimard: there was some this morning too | 21:09 |
pabelanger | yah, that | 21:09 |
dmsimard | hmm | 21:09 |
pabelanger | so, zuul did full reconfiguration | 21:09 |
*** eharney has quit IRC | 21:10 | |
dmsimard | There's been quite a few exceptions if you look at just today's logs http://paste.openstack.org/raw/682503/ | 21:11 |
jamespage | fungi: in order to ease the overhead of backporting >300 source packages back to xenial, we've backported a newer debhelper so that any new bits landing in bionic which require debhelper >= 10 don't require us to hold a patch to downgrade it | 21:11 |
jamespage | fungi: 3-4 pkgs vs 30-40 patches if you see what I mean | 21:12 |
odyssey4me | jamespage :) evening | 21:12 |
dmsimard | wow that's a nasty exception | 21:12 |
*** r-daneel_ has joined #openstack-infra | 21:13 | |
dmsimard | exception from the poll loop error: http://paste.openstack.org/raw/682505/ | 21:13 |
dmsimard | That's almost java levels of stack trace | 21:13 |
*** r-daneel has quit IRC | 21:13 | |
*** r-daneel_ is now known as r-daneel | 21:13 | |
*** slaweq_ has quit IRC | 21:14 | |
odyssey4me | we hit some trouble with that in OSA, although we discovered that we didn't really know why debhelper was being installed in the first place so we have patches up to remove it... and we're not entirely sure why the installs didn't work... we add UCA/queens on xenial, we try to install debhelper, kerblooey | 21:14 |
openstackgerrit | Merged openstack-infra/project-config master: Move jenkins/data/bindep-fallback.txt (2/2) https://review.openstack.org/543140 | 21:14 |
clarkb | dmsimard: the unknown job error is causing the other error (no job means no attribute) | 21:17 |
clarkb | dmsimard: I think that may imply we still have configs that are not quite correct? | 21:17 |
clarkb | and we are trying to run a job that doesn't exist? | 21:17 |
clarkb | it would be nice if unknown job error had the job name too | 21:18 |
*** dprince has joined #openstack-infra | 21:18 | |
* clarkb goes to poke at gear to see if that is doable | 21:18 | |
dmsimard | I'm not sure, there's a lot of other errors as well | 21:18 |
dmsimard | Not familiar enough to tell if any of those are "normal", recoverable or fatal | 21:18 |
jamespage | hi odyssey4me | 21:19 |
dmsimard | I've seen this one before and it's happening a lot: "ERROR zuul.Scheduler: Unable to process autohold for None" I formally created a story for it so we don't forget | 21:19 |
dmsimard | There's also "ERROR zuul.Scheduler: Exception reporting runtime stats", "ERROR zuul.nodepool: Error unlocking node" and "ERROR zuul.nodepool: Node <Node 0002678661 ['primary']:fedora-26> is not locked" | 21:20 |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Remove jenkins-slave element from DIB images https://review.openstack.org/514485 | 21:20 |
pabelanger | AJaeger: clarkb: ^now we can remove the jenkins-slave element from nodes | 21:20 |
*** iyamahat has joined #openstack-infra | 21:22 | |
openstackgerrit | Clark Boylan proposed openstack-infra/gear master: Include job name in UnknownJobError https://review.openstack.org/547143 | 21:23 |
clarkb | something like ^ may help in debugging this | 21:23 |
pabelanger | ianw: https://review.openstack.org/547107/ brings online fedora-27 for tempest / devstack | 21:23 |
jamespage | odyssey4me: hmmm did something break? we pushed the rc's from last week through to -updates PM today | 21:24 |
jamespage | that may have had a number of these toolchain type things with it | 21:24 |
odyssey4me | jamespage ok, so something did change today | 21:24 |
*** rossella_s has quit IRC | 21:24 | |
jamespage | it was quite a large number of packages so might not have all gone in during the same sync cycle, which would create some installability issues for an hour or so | 21:24 |
odyssey4me | I'm tempted to fire up a test now to see if the issue happens outside of infra. | 21:25 |
clarkb | reading the code more though its getting a status_res packet and then not finding the job associated with that packet so that it can update the job | 21:25 |
dmsimard | clarkb: do we want "handle" or "job" ? | 21:25 |
clarkb | dmsimard: job not existing is why we error, handle is the key we are looking for | 21:25 |
jamespage | odyssey4me: issue is with debhelper installability right? | 21:25 |
dmsimard | clarkb: makes sense | 21:25 |
clarkb | thinking about why this may happen maybe the client is cancelling the jobs but that state hasn't made it all the way through | 21:25 |
clarkb | I still think the name would help us understand why it happens | 21:25 |
odyssey4me | jamespage mind if we switch into #openstack-ansible? I'd rather not interfere with the other conversation going on here. | 21:26 |
jamespage | odyssey4me: ok | 21:26 |
ianw | pabelanger: tbh, i'd prefer if we could get the stack @ https://review.openstack.org/#/c/540704/ in | 21:27 |
*** jtomasek has quit IRC | 21:27 | |
openstackgerrit | Merged openstack-infra/openstack-zuul-jobs master: Finish golang job removal https://review.openstack.org/546307 | 21:28 |
*** rossella_s has joined #openstack-infra | 21:29 | |
openstackgerrit | David Moreau Simard proposed openstack-infra/openstack-zuul-jobs master: Attempt to rescue lost SSH connection when restarting iptables https://review.openstack.org/547130 | 21:30 |
pabelanger | ianw: okay, I've given my +1 | 21:31 |
pabelanger | :) | 21:31 |
clarkb | dmsimard: thinking about restarting iptables: if using persistent iptables rules, I wonder if stopping iptables implies dropping all connections on fedora? maybe we can just start iptables and avoid that? | 21:32 |
pabelanger | ianw: can you comment on the parts you have issue with in https://review.openstack.org/547107/ ? some jobs are using legacy-fedora-26, so we'll need to land legacy-fedora-27 for the moment | 21:32 |
ianw | thanks, just need some eyes on the non-trivial stuff below that | 21:32 |
pabelanger | it's possible that patch will just turn into adding nodesets | 21:32 |
dmsimard | clarkb: we flush iptables rules on purpose to check if restarting iptables loads them back as a means of testing that the persistence works -- I wonder why the other distros don't have that issue | 21:33 |
*** kgiusti has left #openstack-infra | 21:34 | |
clarkb | I would look at the unit file and any scripts it is calling | 21:34 |
clarkb | my guess is its doing something smart | 21:34 |
*** mgoddard_ has quit IRC | 21:35 | |
clarkb | I doubt its the kernel since they are pretty steadfast about no user noticeable breakages but I guess its possible the kernel is doing something | 21:35 |
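On a systemd-based node the unit and whatever helper script it calls can be read directly, which is usually the fastest way to see what "restart" actually does on stop; the unit name below (iptables.service, as shipped by Fedora's iptables-services package) is an assumption:

    # Print the full unit definition, including ExecStart/ExecStop lines
    systemctl cat iptables.service

    # Or just the execution-related properties
    systemctl show -p ExecStart -p ExecStop -p ExecReload iptables.service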
ianw | pabelanger: if f27 runs devstack with python3, the updown job doesn't serve much purpose any more. it's really not worth running it with py2 on f27 i wouldn't think | 21:36 |
dmsimard | yeah, I'll look after we figure out what's going on with Zuul :p | 21:36 |
dmsimard | so I see that the full reconfigure appears to have been done at 14:39, starting with 2018-02-22 14:39:31,197 INFO zuul.TenantParser: Loading previously parsed configuration from openstack-infra/project-config | 21:37 |
dmsimard | that seems to correlate with the higher cpu usage | 21:37 |
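The same correlation can be pulled straight out of the scheduler's debug log using the message quoted above; the log path below is what is normally deployed on the scheduler and is an assumption:

    # Timestamps of every tenant configuration load today, for lining up
    # against the cacti CPU/memory graphs
    grep 'Loading previously parsed configuration' /var/log/zuul/debug.log \
        | awk '{print $1, $2}'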
odyssey4me | fungi not sure if you're available, but it's plausible that the last UCA mirror sync happened in the middle of UCA itself being updated upstream... are there logs for when UCA last updated? | 21:37 |
clarkb | dmsimard: and with the memory use jump | 21:37 |
*** camunoz has quit IRC | 21:37 | |
odyssey4me | or pabelanger ^ not sure who can help | 21:37 |
*** pcaruana has quit IRC | 21:37 | |
clarkb | odyssey4me: yes navigate to the mirror with your browser and there will be a timestamp file you can check | 21:38 |
pabelanger | ianw: k, is there somebody else that could confirm if we don't need it. If so, then lets delete | 21:38 |
pabelanger | odyssey4me: I can look | 21:38 |
pabelanger | but it seems to be fine | 21:38 |
pabelanger | 1 sec | 21:38 |
ianw | pabelanger: well i just put it in as a smoke test, not sure anyone but me ever cared :) | 21:38 |
pabelanger | it should be running again in 22mins however | 21:38 |
pabelanger | ianw: ah, okay. then lets remove it | 21:39 |
ianw | just make sure the other job is doing USE_PYTHON3 | 21:39 |
pabelanger | odyssey4me: have time to wait 20mins? | 21:40 |
odyssey4me | clarkb pabelanger ah you mean http://mirror.bhs1.ovh.openstack.org/ubuntu/timestamp.txt | 21:40 |
clarkb | odyssey4me: for uca it iwll be a different path, but yes | 21:40 |
odyssey4me | ta, thanks - happy to help ourselves do more digging :) | 21:40 |
clarkb | http://mirror.bhs1.ovh.openstack.org/ubuntu-cloud-archive/timestamp.txt | 21:40 |
odyssey4me | timestamps are UTC? | 21:40 |
*** iyamahat_ has joined #openstack-infra | 21:40 | |
pabelanger | ya | 21:40 |
odyssey4me | k thx | 21:41 |
pabelanger | they update every 2hours | 21:41 |
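Since the timestamp is just a plain-text file published on every mirror, the last completed sync can be checked from anywhere using the URLs already mentioned:

    # When did the UCA mirror volume last finish publishing?
    curl -s http://mirror.bhs1.ovh.openstack.org/ubuntu-cloud-archive/timestamp.txt

    # Same check for the main Ubuntu mirror
    curl -s http://mirror.bhs1.ovh.openstack.org/ubuntu/timestamp.txt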
*** iyamahat has quit IRC | 21:41 | |
odyssey4me | is that timestamp a file that comes from the source, or placed there by the sync process when the sync is done? | 21:41 |
dmsimard | clarkb: it looks like there's two reconfigure in a row.. | 21:42 |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Remove legacy-devstack-dsvm-py36-updown-fedora-26 https://review.openstack.org/547147 | 21:44 |
dmsimard | Seeing a lot of "WARNING zuul.GerritConnection: Unable to get change for [...]" | 21:44 |
*** slaweq_ has joined #openstack-infra | 21:46 | |
*** olaph1 has joined #openstack-infra | 21:48 | |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Add fedora-27 devstack / tempest jobs https://review.openstack.org/547107 | 21:48 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove fedora-26 devstack / tempest testing https://review.openstack.org/547108 | 21:48 |
openstackgerrit | Paul Belanger proposed openstack-infra/openstack-zuul-jobs master: Remove devstack-dsvm-py36-updown-fedora-26 job https://review.openstack.org/547152 | 21:48 |
*** dave-mccowan has quit IRC | 21:49 | |
*** olaph has quit IRC | 21:49 | |
openstackgerrit | Paul Belanger proposed openstack-infra/project-config master: Switch devstack / tempest testing to fedora-27 https://review.openstack.org/547109 | 21:50 |
pabelanger | ianw: ^okay, removed the updown job | 21:50 |
clarkb | dmsimard: whats the ... in there? | 21:50 |
*** slaweq_ has quit IRC | 21:50 | |
clarkb | dmsimard: looking at the code for the gerrit connection I expect its just content we are effectively ignoring because it isn't a change or a ref | 21:51 |
odyssey4me | fungi clarkb pabelanger we got to the bottom of the package install issue - briefly: jamespage pushed a bunch of updates to the PPA for UCA, which got pulled into UCA before the push was complete, and that in turn got pulled in by the infra mirror process | 21:51 |
dmsimard | clarkb: looks like this: http://paste.openstack.org/raw/682561/ | 21:51 |
odyssey4me | since then, further syncs have been done and the issue is resolved | 21:52 |
clarkb | dmsimard: ya those are I think just noise, we are effectively ignoring the ref-replicated events this way | 21:52 |
pabelanger | odyssey4me: you can see how we generate the timestamp: http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/reprepro/reprepro-mirror-update.sh | 21:52 |
clarkb | something we might want to clean up in the logs so it doesn't create distractions | 21:52 |
pabelanger | odyssey4me: I guess we could expand it based on which phase it is on | 21:52 |
*** yamamoto has joined #openstack-infra | 21:53 | |
dmsimard | clarkb: there's a LOT of them, 19k in today's log | 21:53 |
*** dave-mcc_ has joined #openstack-infra | 21:53 | |
dmsimard | I'll create a story, I've created a few so far: https://storyboard.openstack.org/#!/project/679 | 21:53 |
pabelanger | odyssey4me: good to know | 21:53 |
odyssey4me | thanks, as always, for the help and support :) | 21:53 |
clarkb | dmsimard: ya anytime a ref gets replicated by gerrit we'll get one (so x 8 for git mirror + 1 for github) | 21:53 |
clarkb | dmsimard: 19000k/9 refs roughly I think | 21:54 |
clarkb | er no k there | 21:54 |
pabelanger | yah, those warnings have been around for a while :) | 21:55 |
*** dhill__ has quit IRC | 21:56 | |
*** yamamoto has quit IRC | 21:57 | |
evrardjp | hello again | 21:59 |
pabelanger | pushing 30mins since zuul has launched any new builds. Guess we have a lot of zuul.yaml files in pipelines right now | 21:59 |
evrardjp | is there someone that can add the openstack-ansible-core group into the group https://review.openstack.org/#/admin/groups/1872,members (see also the review https://review.openstack.org/#/c/546577/) | 21:59 |
clarkb | pabelanger: or possibly the gearman issues that dmsimard found are causing us to not launch jobs | 22:00 |
clarkb | is gearman running? | 22:00 |
dmsimard | if you look at the web status | 22:00 |
dmsimard | some changes' jobs have all finished running but they're just staying there | 22:00 |
dmsimard | https://i.imgur.com/Gl9M1JB.png | 22:01 |
clarkb | dmsimard: thats "normal" if you look at the queue counts at the top of the page | 22:01 |
pabelanger | dmsimard: right, zuul hasn't processed them yet | 22:01 |
clarkb | 624 results to process which are probably your changes | 22:01 |
pabelanger | and there it goes now | 22:01 |
dmsimard | ah, okay yeah it just cleared it | 22:01 |
clarkb | looks like ya it just went | 22:01 |
*** yamamoto has joined #openstack-infra | 22:01 | |
pabelanger | I think we just have a lot of zuul.yaml files right now | 22:01 |
pabelanger | and with reloads taking 20 seconds, it's taking longer | 22:02 |
dmsimard | seeing fingergw error | 22:03 |
dmsimard | http://paste.openstack.org/raw/682573/ | 22:03 |
*** r-daneel_ has joined #openstack-infra | 22:05 | |
* fungi is back again, catching up | 22:05 | |
dmsimard | I'm tailing logs and I'm not seeing anything obvious | 22:06 |
*** r-daneel has quit IRC | 22:06 | |
*** r-daneel_ is now known as r-daneel | 22:06 | |
clarkb | I think we may just be seeing a further regression from the memory improvements? | 22:06 |
clarkb | zuul is operating just more slowly | 22:06 |
pabelanger | it was working great, up until this morning memory increase | 22:07 |
dmsimard | executors are very spiky | 22:07 |
clarkb | fungi: seems to be correlated to fixing the osa stuff? | 22:07 |
pabelanger | dmsimard: starting builds graph is the once to watch, if that is flat, no new jobs are starting | 22:07 |
pabelanger | s/once/one | 22:08 |
dmsimard | pabelanger: yeah I realize that but executors *are* picking up new builds | 22:08 |
TheJulia | pabelanger: sure, can it wait a day or two? Looks like it didn't take to fedora27 very well. Likely a fix, but I won't have the requisite brain cells for at least another day. | 22:08 |
*** danpawlik has quit IRC | 22:08 | |
pabelanger | dmsimard: yah, once builds are started, executors run them without issue | 22:08 |
pabelanger | TheJulia: Sure, just wanted to make sure you were aware | 22:09 |
*** jcoufal has quit IRC | 22:09 | |
dmsimard | nodepool is showing over 100 nodes ready, at least according to http://grafana.openstack.org/dashboard/db/nodepool | 22:10 |
dmsimard | so zuul is just not picking up the nodes from nodepool to assign them to builds or something ? | 22:10 |
*** dhill_ has joined #openstack-infra | 22:10 | |
TheJulia | pabelanger: awesome, thanks! | 22:10 |
pabelanger | dmsimard: zuul is busy doing dynamic reloads, and events are backing up in the queue. see top left of zuul.o.o. When reloading, no new events are processed | 22:11 |
pabelanger | so, nodepool will finish bringing nodes online, then wait for zuul to move them in-use | 22:12 |
dmsimard | pabelanger: thus the bursts of executors picking up new jobs we've been seeing | 22:12 |
pabelanger | yah | 22:12 |
*** armaan has quit IRC | 22:13 | |
fungi | clarkb: if what i did this morning (approving another new project creation change) is what triggered the current performance regression then we have some reason to be concerned, i suppose | 22:13 |
pabelanger | reloads are taking 20 seconds now, up from 12 seconds yesterday. not sure why | 22:13 |
pabelanger | fungi: clarkb: isn't there a way we can dump current layouts in memory and see if we've leaked something? | 22:16 |
pabelanger | I'm not sure how that worked | 22:16 |
clarkb | I think it required doing the repl socket | 22:17 |
clarkb | not sure if that is still enabled | 22:17 |
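Short of attaching a REPL, a stack dump can at least show what the scheduler's threads are busy doing; this assumes the SIGUSR2 stack-dump handler is enabled in the running build, which may not be the case:

    # Ask the scheduler to write a stack trace of every thread to its debug log
    sudo kill -USR2 $(pgrep -f zuul-scheduler)

    # Then inspect the dump (log path is an assumption)
    sudo tail -n 200 /var/log/zuul/debug.log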
*** agopi has joined #openstack-infra | 22:18 | |
*** slaweq_ has joined #openstack-infra | 22:18 | |
dmsimard | So I think this isn't a perfect query but it shows recent changes for zuul files https://review.openstack.org/#/q/file:%22%255E(%255C.zuul.d%257Czuul.d%257C%255C.zuul.yaml%257Czuul.yaml).*%2524%22 | 22:19 |
*** rlandy|ruck is now known as rlandy|biab | 22:21 | |
*** slaweq has quit IRC | 22:22 | |
*** slaweq has joined #openstack-infra | 22:22 | |
*** slaweq_ has quit IRC | 22:23 | |
dmsimard | In zuul-web, there's a lot of these: "Submitting job zuul:status_get with data {'tenant': 'openstack'}" and sometimes they don't come back immediately -- I suppose that's the /status endpoint.. is the apache caching working ? | 22:23 |
dmsimard | I'm also not sure why that is but in the apache error log, there's this error but for like... every repo "Not a git repository: '/var/lib/zuul/git/openstack/neutron-vpnaas'" | 22:25 |
dmsimard | (/var/lib/zuul/git is indeed empty on zuul01) | 22:25 |
dmsimard | I have no idea where I'm going, just thinking out loud and hoping it will make someone knowledgeable think of something :D | 22:26 |
clarkb | I think that may be old jobs looking for zuul refs? | 22:26 |
clarkb | not sure | 22:27 |
*** slaweq has quit IRC | 22:27 | |
smcginnis | clarkb: Do I remember right that one of the lunch presos next week will be about zuulv3? | 22:29 |
smcginnis | clarkb: Or is that its own session? | 22:29 |
fungi | smcginnis: yes, tuesday (infra/qa joint update, much of which will be zuul related on both fronts) | 22:30 |
smcginnis | fungi: Great, thanks | 22:30 |
dmsimard | Never realized there was so many queries for status/status.json: http://paste.openstack.org/raw/682607/ | 22:31 |
*** esberglu has quit IRC | 22:32 | |
*** dprince has quit IRC | 22:33 | |
*** kjackal has quit IRC | 22:34 | |
*** tonyb has quit IRC | 22:35 | |
*** tonyb has joined #openstack-infra | 22:36 | |
dmsimard | There's an IP in particular that's hammering status.json, it's owned by RAX but doesn't seem to be one of our servers | 22:37 |
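The heavy requester is easy to pick out of the Apache access log on the zuul server; the log filename below is a guess at the vhost's access log, so adjust as needed:

    # Top client IPs requesting status.json, busiest first
    awk '/status.json/ {print $1}' /var/log/apache2/*access*.log \
        | sort | uniq -c | sort -rn | head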
*** rcernin has joined #openstack-infra | 22:38 | |
dmsimard | ¯\_(ツ)_/¯ | 22:39 |
dmsimard | I have no idea | 22:39 |
*** sshnaidm has joined #openstack-infra | 22:40 | |
*** sshnaidm is now known as sshnaidm|off | 22:40 | |
*** tosky has joined #openstack-infra | 22:41 | |
*** shoogz has quit IRC | 22:44 | |
fungi | probably someone's personal vm | 22:44 |
*** dave-mcc_ has quit IRC | 22:44 | |
fungi | maybe running a command-line status display | 22:44 |
mnaser | sneak in a quick rewrite rule for their ip only to show jobs saying 'pls-contact-infra' | 22:47 |
*** Goneri has quit IRC | 22:49 | |
*** lpetrut has quit IRC | 22:50 | |
dmsimard | they're getting 404's so I don't suppose it's hurting us too much | 22:58 |
openstackgerrit | David Moreau Simard proposed openstack-infra/zuul master: Add debug information when failing to autohold https://review.openstack.org/547160 | 23:00 |
clarkb | is corvus around today? wondering if we shouldn't restart the server to see if reconfig time drops back down to 12s but worried that might destroy debugging ability | 23:01 |
corvus | clarkb: i'm here | 23:01 |
clarkb | oh hi | 23:01 |
corvus | what's up? | 23:01 |
clarkb | corvus: tl;dr is reconfigures are apparently up to 20s now and we're seeing higher cpu use on zuul.o.o since ~1500UTC which also came along with a jump in memory use | 23:02 |
clarkb | that time roughly correlates to when fungi fixed the osa repo | 23:02 |
corvus | or rather, the reconfiguration after it was fixed i guess? | 23:02 |
clarkb | as for user impacts zuul "feels" slow. The status graphs show it as being far more bursty than it was before | 23:02 |
clarkb | corvus: ya | 23:02 |
*** yamamoto has quit IRC | 23:02 | |
corvus | have we added any new projects other than osa since the last restart? | 23:03 |
* clarkb checks for github emails | 23:03 | |
dmsimard | corvus: I created a few stories about the different errors/exceptions I've been seeing which might or might not be related: https://storyboard.openstack.org/#!/project/679 | 23:03 |
clarkb | yes tatu-dashboard and possibly osel | 23:03 |
corvus | did those have memory/cpu bumps? | 23:04 |
*** VW has quit IRC | 23:05 | |
dmsimard | need to step away for now, I'll catch up later | 23:05 |
* clarkb does timestamp maths | 23:05 | |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: GPT partitioning support https://review.openstack.org/533490 | 23:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Fail if two elements provide the same thing https://review.openstack.org/539401 | 23:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Add block-device defaults https://review.openstack.org/539375 | 23:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: Choose appropriate bootloader for block-device https://review.openstack.org/539731 | 23:05 |
openstackgerrit | Ian Wienand proposed openstack/diskimage-builder master: arm64: use HWE kernel and fix console https://review.openstack.org/547161 | 23:05 |
clarkb | 9:15 was when I got the tatu-dashboard email | 23:05 |
clarkb | which would be 1715UTC | 23:05 |
*** VW has joined #openstack-infra | 23:05 | |
clarkb | no memory jump for that one | 23:06 |
clarkb | osel was at 1430UTC | 23:06 |
clarkb | doesn't look like a jump for that one either | 23:07 |
*** rossella_s has quit IRC | 23:07 | |
corvus | so it sounds like there may be something amiss with sucessful full reconfigurations after a failed full reconfiguration | 23:07 |
clarkb | the memory increase appears to start at 1510 | 23:07 |
*** rossella_s has joined #openstack-infra | 23:09 | |
corvus | i don't think there's much more we can do right now with the current running process; i think we should take that as a hypothesis, go ahead and restart, and, since we're not going to get to fixing it right away on account of travel, keep an eye out for any more data points that could confirm/disprove that for the next little bit. | 23:10 |
*** VW has quit IRC | 23:10 | |
clarkb | ok I'm loading my ssh key now. Anyone else able to help with that? | 23:12 |
corvus | clarkb: i'm deep into another problem but can switch if you need me | 23:13 |
clarkb | ok I'll ping if I run into trouble | 23:13 |
clarkb | checking there aren't any release jobs queued | 23:14 |
*** rlandy|biab is now known as rlandy|ruck | 23:14 | |
clarkb | ya no release jobs so I'll go ahead and save queues and restart the scheduler and zuul-web | 23:14 |
fungi | clarkb: corvus: approving the osel project creation is what i did to "fix" the osa repos (i figured approving another project addition would trigger a full reconfig, and that seems to have done the trick) | 23:14 |
clarkb | ah | 23:15 |
*** VW has joined #openstack-infra | 23:15 | |
clarkb | corvus: ^ still want me to restart knowing that? | 23:15 |
clarkb | I'm not sure it changes the hypothesis though the timing is less tight | 23:16 |
corvus | clarkb, fungi: then the 1430 approval probably translated into a 15.. something reload | 23:16 |
clarkb | ok moving ahead with restart then | 23:16 |
fungi | sounds likely since that part is apparently still puppet-driven | 23:16 |
corvus | wfm | 23:16 |
corvus | 14:46 reload. so there was a bit of a lag. | 23:17 |
clarkb | I've told zuul to stop now just waiting for it to stop | 23:18 |
clarkb | (its not an immediate response but the log did show it got the stop command on the socket) | 23:19 |
*** yee37935 has quit IRC | 23:20 | |
corvus | if it's in a slow reconfiguration queue run, it should stop when it gets to the end | 23:20 |
*** dbecker has quit IRC | 23:20 | |
*** VW has quit IRC | 23:20 | |
*** yee379 has joined #openstack-infra | 23:20 | |
*** rossella_s has quit IRC | 23:21 | |
*** rossella_s has joined #openstack-infra | 23:22 | |
clarkb | looks like it is currently processing queue items | 23:24 |
*** apetrich has quit IRC | 23:25 | |
clarkb | lots of zuul.Pipeline.openstack.check: debug messages currently | 23:26 |
corvus | that seems strange | 23:26 |
corvus | there are no other pipelines it's processing | 23:28 |
clarkb | ya I've not seen any | 23:28 |
corvus | clarkb: i suspect you should just kill it | 23:28 |
clarkb | corvus: sigint or sigkill? | 23:29 |
*** rossella_s has quit IRC | 23:29 | |
clarkb | I guess I can start with sigint | 23:29 |
fungi | i take it sigterm is what the initscript sent? | 23:29 |
corvus | fungi: i think it uses the socket | 23:29 |
clarkb | fungi: no it sent a command on the command socket | 23:29 |
fungi | oh | 23:29 |
fungi | kill uses sigterm by default, iirc | 23:30 |
clarkb | oh right not int | 23:30 |
clarkb | I can start with term then | 23:30 |
clarkb | here goes | 23:30 |
*** rossella_s has joined #openstack-infra | 23:31 | |
corvus | it's stuck because it's not actually removing 547073,5 from the queue for some reason | 23:31 |
fungi | sigint is what happens when you ctrl-c in most foreground applications | 23:31 |
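For reference, the escalation path being discussed here, gentlest first (the stop-via-command-socket form is what the initscript uses, assuming this build exposes it; the pgrep patterns are illustrative):

    # 1. Graceful: ask the scheduler to stop over its command socket
    zuul-scheduler stop

    # 2. SIGTERM (kill's default) if the command socket is being ignored
    sudo kill $(pgrep -f zuul-scheduler)

    # 3. SIGKILL only as a last resort; the process gets no chance to clean up
    sudo kill -9 $(pgrep -f zuul-scheduler)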
clarkb | that seems to have stopped the queue processing but not stopped the process | 23:31 |
clarkb | and now its gone | 23:31 |
clarkb | starting it up again now | 23:31 |
*** efried has quit IRC | 23:32 | |
fungi | 547073,5 looks like just a bog-standard tempest patch | 23:33 |
*** slaweq has joined #openstack-infra | 23:33 | |
corvus | yeah, i don't know what's special there | 23:34 |
fungi | git dependency on another tempest patch which is also typical-looking | 23:34 |
clarkb | once these cat jobs finish I'll reenqueue | 23:34 |
*** hongbin has quit IRC | 23:34 | |
andreaf | clarkb corvus fungi is there any chance that jobs that were in progress will complete and still have their logs uploaded? | 23:36 |
*** VW has joined #openstack-infra | 23:36 | |
clarkb | andreaf: if they finished before zuul released the nodesets, yes; otherwise it's a race against how quickly nodepool cleans them up | 23:36 |
corvus | the executors will abort the jobs too once they lose the connection to the gearman server | 23:37 |
corvus | it all happens very fast, so i'd say the chance is slim. | 23:37 |
andreaf | clarkb corvus ok np - can I recheck now? | 23:38 |
openstackgerrit | lifeless proposed openstack/gertty master: Don't lose sync requests that get bad responses https://review.openstack.org/547168 | 23:38 |
corvus | andreaf: yep | 23:38 |
*** slaweq has quit IRC | 23:39 | |
clarkb | yes zuul is up and running now | 23:39 |
clarkb | I am re-enqueuing too which may catch your thing | 23:39 |
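"Saving queues" here normally means dumping every in-flight item as a zuul enqueue command (infra keeps a small helper, tools/zuul-changes.py in the zuul repo, for exactly this) and replaying those commands once the scheduler is back; a single re-enqueue, using the tempest change discussed above and assuming the v3 client syntax of this period, looks roughly like:

    # Re-add one change to the check pipeline after the restart
    zuul enqueue --tenant openstack --trigger gerrit \
        --pipeline check --project openstack/tempest --change 547073,5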
*** tosky has quit IRC | 23:39 | |
*** rosmaita has quit IRC | 23:40 | |
andreaf | clarkb ok thanks | 23:40 |
*** VW has quit IRC | 23:41 | |
lifeless | fungi: ^ that might be a thing to merge and cut a release. It could explain a lot of 'wtf did my change go' moments :) | 23:41 |
lifeless | fungi: there's also a bunch of other gertty stuff sitting there with multiple +1s etc :( | 23:42 |
*** VW has joined #openstack-infra | 23:42 | |
corvus | lifeless: thanks, i'll try to take a look soon | 23:42 |
lifeless | corvus: ta! | 23:42 |
lifeless | corvus: bug for it https://storyboard.openstack.org/#!/story/2001572 | 23:43 |
*** slaweq has joined #openstack-infra | 23:43 | |
openstackgerrit | Ian Wienand proposed openstack-infra/project-config master: Add arm64 nodes to launcher https://review.openstack.org/547216 | 23:44 |
fungi | lifeless: cool! i think i've probably hit that too but didn't manage to hunt down the cause | 23:45 |
*** abelur has quit IRC | 23:45 | |
fungi | also, while i am a sometimes reviewer/contributor to gertty, it's thoroughly a corvus production | 23:46 |
*** VW has quit IRC | 23:46 | |
lifeless | fungi: Its been a while, I just looked at the 'reviewers' list and grabbed the first name I remembered ;) | 23:46 |
fungi | ;) | 23:47 |
lifeless | fungi: corvus: there may be other cases where syncs are lost - checkResponse for instance may want to look for 400's and 5xx's more generically. But this specific case wouldn't be fixed by that either, so.. | 23:47 |
*** slaweq has quit IRC | 23:48 | |
andreaf | corvus how difficult is it to set up a local zuul and get it running with a nodepool running a couple of local virtualbox vms? | 23:48 |
andreaf | corvus it would be nice to be able to do some quicker local testing | 23:49 |
ianw | arm64 getting soooo close. wouldn't mind eyes on https://review.openstack.org/546834 which is a quick one to add linaro cloud to nodepool clouds conf ... one less hand-applied thing | 23:50 |
clarkb | andreaf: there is a quick start guide in review somewhere | 23:50 |
ianw | builds are working & uploading ... just a few tweaks to get console/config-drive working | 23:50 |
clarkb | andreaf: but ya it shouldn't be too hard let me try and find it | 23:51 |
*** rlandy|ruck is now known as rlandy|bbl | 23:51 | |
clarkb | andreaf: https://docs.openstack.org/infra/zuul/admin/quick-start.html | 23:53 |
clarkb | looks like it doesn't go over nodepool but for that I'd configure nodepool to use some static nodes provided by whatever (virtualbox for example) | 23:53 |
corvus | andreaf: if you do that, you can use the static driver in nodepool | 23:53 |
corvus | andreaf, clarkb: you may prefer https://docs.openstack.org/infra/zuul/admin/zuul-from-scratch.html which is a more thorough step-by-step walkthrough for a quick example | 23:54 |
corvus | (it does cover nodepool, but not static nodes (yet)) | 23:54 |
*** caphrim007_ has joined #openstack-infra | 23:54 | |
corvus | andreaf: but if you just want to do quick local testing, why not just run ansible directly? | 23:55 |
*** caphrim007 has quit IRC | 23:57 | |
andreaf | corvus: that doesn't go e2e from the job definition - so I need to parse my job definition into an inventory file to supply ansible | 23:57 |
andreaf | corvus merging the dicts and so on | 23:57 |
corvus | andreaf: you can grab an inventory file from a previously run job | 23:58 |
corvus | modify it to use your host, setup the git repos on your test node, then run the devstack playbook. that should be about it. | 23:59 |
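Put together, that suggestion is roughly the following; the log URL and playbook path are placeholders, and the inventory file is the one jobs publish under zuul-info/ in their logs (assuming that artifact layout):

    # Grab the inventory that a previous run of the job published with its logs
    curl -O http://logs.openstack.org/<path-to-a-previous-run>/zuul-info/inventory.yaml

    # Point it at your own test VM instead of the original nodepool node,
    # then run the job's playbook directly
    sed -i 's/ansible_host: .*/ansible_host: 192.0.2.10/' inventory.yaml
    ansible-playbook -i inventory.yaml playbooks/devstack/run.yaml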