ianw | repo: "deb [arch=amd64] {{ docker_mirror_base_url }} {{ ansible_lsb.codename }} {{ docker_update_channel }}" | 00:00 |
ianw | that would do it | 00:00 |
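For readers following along, a minimal sketch of what the unpinned sources entry would look like once the follow-up change lands, with the role variables from the line above expanded for illustration (the mirror hostname here is a placeholder, not the real opendev mirror URL):

    # /etc/apt/sources.list.d/docker.list -- no [arch=amd64] pin, so apt
    # follows the node's native architecture (amd64 or arm64)
    deb https://mirror.example.org/deb-docker bionic stable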
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: ensure-docker: remove amd64 architecture pin https://review.opendev.org/746245 | 00:09 |
*** hashar_ has joined #opendev | 00:26 | |
*** hashar has quit IRC | 00:28 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: ensure-docker: remove amd64 architecture pin https://review.opendev.org/746245 | 00:29 |
*** ryohayakawa has joined #opendev | 00:29 | |
openstackgerrit | Ian Wienand proposed zuul/zuul-jobs master: ensure-docker: remove amd64 architecture pin https://review.opendev.org/746245 | 00:30 |
ianw | oh boo, we don't mirror arm64 | 00:43 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Add arm64 to debian-docker mirroring https://review.opendev.org/746253 | 00:46 |
ianw | fungi: ^ mind a quick poke if you have a sec? i think it should be that easy | 00:48 |
ianw | corvus: thanks :) | 01:05 |
ianw | i don't know why the zuul-jobs depends-on doesn't seem to work in pyca ... maybe it's trusted? i don't think i intended that | 01:06 |
ianw | i'm still poking at the buildx path -- we can up the builds to "-j 8", which should be at least twice as fast as the old hardcoded -j4 | 01:08 |
openstackgerrit | Merged opendev/system-config master: Add arm64 to debian-docker mirroring https://review.opendev.org/746253 | 01:16 |
ianw | ok, so it takes ~40 minutes to build the manylinux2014_aarch64 container with buildx, a bit annoying but workable | 01:24 |
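For context on the buildx path being discussed, a rough sketch of cross-building an aarch64 image on an amd64 host under QEMU emulation; the image tag and the MAKE_JOBS build-arg are illustrative assumptions, not the exact pyca/cryptography job configuration:

    # register qemu binfmt handlers so arm64 binaries run on the amd64 host
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # create a buildx builder and build for the arm64 platform
    docker buildx create --name arm-builder --use
    docker buildx build --platform linux/arm64 \
        --build-arg MAKE_JOBS=8 \
        -t manylinux2014_aarch64:local --load .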
ianw | mnaser: just fyi all the vexxhost jobs in this zuul-jobs change failed with POST_FAILURE so no logs ... https://review.opendev.org/#/c/746245/ | 02:20 |
mnaser | ianw: rechecking | 02:27 |
mnaser | and ill watch it | 02:28 |
ianw | fungi: http://paste.openstack.org/show/796833/ | 02:36 |
ianw | mnaser: thanks | 02:36 |
ianw | fungi: ^ this is the error mirroring deb-docker ... i won't say how long it's been going on :/ | 02:36 |
ianw | ERROR: Same file is requested with conflicting checksums: | 02:37 |
ianw | root@mirror-update01:/afs/.openstack.org/mirror/deb-docker/lists# cat *_Packages | grep Filename | sort | uniq -c | 02:44 |
ianw | seems to show every package using a unique filename (no double counts) | 02:45 |
ianw | https://download.docker.com/linux/ubuntu/dists/bionic/pool/stable/amd64/ | 02:51 |
ianw | https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/ | 02:51 |
ianw | do have files with the same name :/ | 02:51 |
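A hedged sketch of how the overlap could be confirmed from the upstream indices themselves, pulling each dist's Packages file and flagging any Filename that appears under more than one dist with a different SHA256 (the index location follows the standard apt repository layout for the repo linked above):

    for dist in xenial bionic; do
      curl -s "https://download.docker.com/linux/ubuntu/dists/${dist}/stable/binary-amd64/Packages" \
        | awk -v d="$dist" '/^Filename:/ {f=$2} /^SHA256:/ {print d, f, $2}'
    done | sort -k2 \
      | awk '{if ($2 == prev && $3 != sum) print "conflict:", $2; prev=$2; sum=$3}'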
*** guillaumec has quit IRC | 02:53 | |
*** frickler has quit IRC | 02:53 | |
*** lourot has quit IRC | 02:53 | |
*** spotz has quit IRC | 02:53 | |
*** lourot has joined #opendev | 02:54 | |
*** spotz has joined #opendev | 02:54 | |
*** guillaumec has joined #opendev | 02:59 | |
*** frickler has joined #opendev | 02:59 | |
fungi | are you sure it's complaining about local files and not files on the remote mirror? | 03:02 |
ianw | fungi: i think it's the remote mirror? | 03:03 |
ianw | with the two links above, bionic & xenial seem to have some overlap of the same files | 03:04 |
fungi | yeah, i wonder if it's saying an index on the mirror lists a different checksum than a file actually has | 03:04 |
ianw | the containerd.io files in particular | 03:05 |
mnaser | ianw: the issue is resolved now | 03:05 |
mnaser | thank you for reporting | 03:05 |
ianw | mnaser: cool :) | 03:05 |
mnaser | tons of change/clean up happening here and all sorts of small fall out :[ | 03:05 |
ianw | fungi: | 03:06 |
ianw | containerd.io_1.2.13-2_amd64.deb 2020-05-15 00:17:59 19.9 MiB | 03:06 |
ianw | containerd.io_1.2.13-2_amd64.deb 2020-05-15 00:17:39 20.4 MiB | 03:06 |
fungi | oh, i see, we're telling reprepro to combine deb-docker's xenial and bionic repositories into one common pool, when they don't share a common pool on the source side | 03:06 |
ianw | one's in xenial, one's in bionic | 03:06 |
ianw | upstream | 03:06 |
fungi | we should be mirroring those separately, not combining them | 03:07 |
ianw | i think we basically can not combine these repos (anymore?) ... this has been happening for a while | 03:07 |
ianw | like we'll have to have /afs/.openstack.org/mirror/deb-docker/<xenial|bionic|focal>/... right? | 03:07 |
fungi | er, i mean, we need to mirror those separately (obviously we're not yet) | 03:07 |
fungi | yeah | 03:08 |
fungi | basically docker doesn't understand how to maintain a deb package repository | 03:08 |
fungi | why use the pool model when you don't have a single pool? | 03:08 |
ianw | much like mnaser, it never surprises me how much a "simple" change manages to turn into yak shaving :) | 03:09 |
mnaser | It’s been two days of this :) | 03:09 |
ianw | fungi: i guess it must have worked, at some point. anyway, i'll pull them out separately ... | 03:10 |
fungi | yeah, i expect it worked briefly until they decided to rebuild the same docker version for more than one dist | 03:10 |
fungi | normally you build against one distro release and then use it for more than one, or you vary the package version to include a unique identifier for the distro version so they don't wind up with the exact same filename. then they can share a package pool | 03:12 |
ianw | yeah, except the same filename has different sizes ... so i guess it's not as "same" as the name suggests :) | 03:14 |
fungi | well, the name is the same, but yes the actual package content differs | 03:16 |
fungi | probably built with different compilers, linked different libs, et cetera | 03:17 |
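Before the fix that follows, this is the shape of the separation being discussed: each release gets its own reprepro basedir and pool under deb-docker/ instead of one shared pool. A hedged sketch only; the paths follow the /afs/.openstack.org/mirror/deb-docker/<dist>/ idea above and the reprepro field values are illustrative, not the actual system-config change:

    for dist in xenial bionic focal; do
      base=/afs/.openstack.org/mirror/deb-docker/${dist}
      mkdir -p "${base}/conf"
      # one conf/distributions per release, so same-named debs never collide
      printf 'Origin: Docker\nCodename: %s\nComponents: stable\nArchitectures: amd64 arm64\nUpdate: docker\n' \
        "${dist}" > "${base}/conf/distributions"
    done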
openstackgerrit | Ian Wienand proposed opendev/system-config master: Fix debian-docker mirroring https://review.opendev.org/746262 | 03:21 |
ianw | fungi: ^ something like ... maybe | 03:22 |
fungi | ianw: yup, exactly. single-core approved so you can take it for a spin | 03:48 |
ianw | unfortunately airship references it a bit -- http://codesearch.openstack.org/?q=deb-docker&i=nope&files=&repos= | 03:55 |
ianw | i guess we can leave the old mirror as is, notify/fix and then remove the non-updating directories | 03:55 |
fungi | yeah, we ought to be able to deprecate that one without blowing it away immediately. it's not been updating for a while anyway | 04:02 |
openstackgerrit | Merged opendev/system-config master: Fix debian-docker mirroring https://review.opendev.org/746262 | 04:04 |
ianw | fungi: thanks ... well that went easier than i thought it would -> https://static.opendev.org/mirror/deb-docker/ | 04:49 |
openstackgerrit | Ian Wienand proposed opendev/base-jobs master: Fix docker base location https://review.opendev.org/746271 | 05:00 |
openstackgerrit | Ian Wienand proposed opendev/system-config master: Update deb-docker path https://review.opendev.org/746272 | 05:01 |
*** zbr1 has joined #opendev | 05:05 | |
*** zbr has quit IRC | 05:06 | |
*** zbr1 is now known as zbr | 05:06 | |
*** ianw has quit IRC | 05:48 | |
*** ianw has joined #opendev | 05:49 | |
*** logan- has quit IRC | 05:49 | |
*** hashar_ has quit IRC | 05:50 | |
*** logan- has joined #opendev | 05:51 | |
*** logan- has quit IRC | 05:59 | |
*** logan- has joined #opendev | 05:59 | |
*** DSpider has joined #opendev | 06:01 | |
ianw | corvus/clarkb/fungi: i've made some decent progress on getting manylinux2014_aarch64 wheels and put notes in https://github.com/pyca/cryptography/issues/5292#issuecomment-671759306 | 06:59 |
ianw | if you could have a look at https://review.opendev.org/746245 that will help installing docker on arm64 nodes | 07:00 |
ianw | a recheck on https://github.com/pyca/cryptography/pull/5386 should then attempt to build the wheels | 07:00 |
*** hashar_ has joined #opendev | 07:33 | |
*** hashar_ is now known as hashar | 07:35 | |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Disable E208 for now https://review.opendev.org/746310 | 07:51 |
openstackgerrit | Andreas Jaeger proposed zuul/zuul-jobs master: Disable E208 for now https://review.opendev.org/746310 | 07:57 |
*** moppy has quit IRC | 08:01 | |
*** moppy has joined #opendev | 08:03 | |
*** hashar has quit IRC | 08:34 | |
*** andrewbonney has joined #opendev | 08:45 | |
*** tkajinam has quit IRC | 08:47 | |
openstackgerrit | Merged zuul/zuul-jobs master: Disable E208 for now https://review.opendev.org/746310 | 09:02 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-pip: add instructions for RedHat system https://review.opendev.org/743750 | 09:02 |
*** tosky has joined #opendev | 10:22 | |
*** ryo_hayakawa has joined #opendev | 10:53 | |
*** ryohayakawa has quit IRC | 10:55 | |
*** hipr_c has joined #opendev | 11:54 | |
*** hashar has joined #opendev | 12:18 | |
*** ryo_hayakawa has quit IRC | 12:26 | |
*** priteau has joined #opendev | 14:02 | |
*** priteau has quit IRC | 14:23 | |
*** priteau has joined #opendev | 14:32 | |
*** hashar has quit IRC | 14:46 | |
*** qchris has quit IRC | 14:57 | |
*** mlavalle has joined #opendev | 15:08 | |
*** auristor has quit IRC | 15:08 | |
*** qchris has joined #opendev | 15:09 | |
*** auristor has joined #opendev | 15:31 | |
*** tosky has quit IRC | 15:43 | |
openstackgerrit | Clark Boylan proposed opendev/system-config master: Add gerrit static files that were lost in ansiblification https://review.opendev.org/746335 | 16:34 |
clarkb | upgrading from 2.13 -> 2.14 with the images from https://review.opendev.org/#/c/745595/ has a working javamelody now. Going to work through the 2.15 and 2.16 upgrades today and assuming javamelody and codemirror-editor are all happy I think we should try to land that change | 16:53 |
clarkb | trying to take my time and poke around and ensure things are working. Our css and all that seems to work with 2.14 too | 16:54 |
clarkb | one thing that isn't clear to me is the function of the accountPatchReviewDb database. It has an empty table in my test setup, but that is one thing I/we want to test as it has changed and we modified it on 2.13 to make it work with mysql | 16:54 |
*** andrewbonney has quit IRC | 16:56 | |
fungi | that was in h2 previously, right? | 16:58 |
fungi | and then moved to sql temporarily? | 16:58 |
clarkb | I think we never put it in h2 due to performance issues we were warned about | 16:58 |
clarkb | the default was to h2 it though | 16:58 |
fungi | right, but that's why it's a separate sql db and not in the regular reviewdb | 16:59 |
clarkb | but it didn't work with mysql due to index width issues so we changed the primary index on the table | 16:59 |
clarkb | then upstream themselves changed the index for other reasons | 16:59 |
clarkb | (so we may end up wanting to change it again?) | 16:59 |
clarkb | that's probably something we can sort out on its own as an open question while I work through the general "do things work" testing | 16:59 |
clarkb | oh you know, it may be for inline comments | 17:00 |
clarkb | and I haven't made any of those yet | 17:00 |
* clarkb makes a note to retest when done with inline comments | 17:01 | |
donnyd | just an FYI it would appear something is busted in the latest bionic image for ipv6 | 17:48 |
clarkb | donnyd: our nodepool image? | 17:49 |
donnyd | I am testing again, could have been a fluke | 17:49 |
donnyd | yes, it does appear as though it is an issue.. testing some of the other images now | 17:51 |
fungi | have a link/log/something? | 17:51 |
donnyd | yes | 17:52 |
donnyd | https://usercontent.irccloud-cdn.com/file/VWEQSyMR/image.png | 17:52 |
donnyd | that would be an issue | 17:52 |
donnyd | LOL | 17:52 |
donnyd | may have been a partial upload or something | 17:52 |
donnyd | https://usercontent.irccloud-cdn.com/file/lb3qCX5z/image.png | 17:53 |
donnyd | that appears to be a bit small | 17:54 |
fungi | indeed, that doesn't look ipv6-specific | 17:54 |
donnyd | no - no it does not | 17:54 |
fungi | we can delete the most recent image and nodepool will reupload | 17:55 |
donnyd | initially I was looking at why I couldn't ping the instances | 17:55 |
fungi | does it seem to just be ubuntu-bionic affected? | 17:55 |
donnyd | so the tinkering I was doing with ipv6 led me down that road | 17:55 |
donnyd | I am going to check them all | 17:56 |
openstackgerrit | Merged zuul/zuul-jobs master: ensure-docker: remove amd64 architecture pin https://review.opendev.org/746245 | 17:57 |
donnyd | the rest of them look ok so far | 17:59 |
fungi | in that case i'll just delete ubuntu-bionic-1597381079 from openedge-us-east | 18:02 |
fungi | looks like it was uploaded just shy of 12 hours ago | 18:03 |
donnyd | oh I already deleted it from OE.. should I not do that? | 18:03 |
fungi | i may still be able to delete it from nodepool, not sure if i'll have to manually remove the znodes from zk | 18:03 |
fungi | well, it took my nodepool image-delete command at least. checking nodepool image-list now to see if it's gone or at least marked for deletion | 18:07 |
fungi | it's marked as "deleting" so that's a good start | 18:07 |
clarkb | I think we'll clear our db if we don't see it in the cloud listing | 18:08 |
fungi | | 0000114341 | 0000000001 | openedge-us-east | ubuntu-bionic | ubuntu-bionic-1597381079 | 1fec9ab7-d54c-4167-9d37-ed8226c82098 | deleting | 00:00:01:41 | | 18:09 |
fungi | | 0000114341 | 0000000002 | openedge-us-east | ubuntu-bionic | None | None | uploading | 00:00:01:38 | | 18:09 |
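For reference, this is the delete-and-wait flow being used, with the build and upload ids taken from the listing above; the flag names follow nodepool's image-delete CLI of this era and should be treated as an approximation:

    # mark the truncated upload for deletion; nodepool then re-uploads it
    nodepool image-delete --provider openedge-us-east --image ubuntu-bionic \
        --build-id 0000114341 --upload-id 0000000001
    # watch the old row move through "deleting" and the new upload appear
    nodepool image-list | grep openedge-us-east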
clarkb | infra-root https://review.opendev.org/#/c/745595/ seems to make things better for javamelody and codemirror-editor upgrading from 2.13 -> 2.14, 2.14 -> 2.15, and 2.15 -> 2.16 | 18:09 |
fungi | that looks like the behavior we want | 18:09 |
clarkb | I'll leave a note with that info on the change but I think it's ready to land and address some of these issues. Then we can keep pushing on this | 18:10 |
clarkb | fungi: donnyd: I'm at a good spot to context switch out of gerrit things (in fact I think I'll do that regardless), let me know if I can help at all | 18:12 |
fungi | looks like this is probably fine, unless we see the same truncated image appear when the upload completes | 18:13 |
donnyd | I will check the new one when its finished | 18:13 |
clarkb | we've seen it a few times. Maybe more frequently lately (or we've just noticed more recently?) | 18:14 |
clarkb | I wonder if it could be related to us updating the docker image for the builder in a forceful way | 18:14 |
clarkb | and it just stops the upload and upstream thinks we are done | 18:14 |
clarkb | (really really really wish the hashes were checked) | 18:14 |
fungi | yeah, i was about to say the same | 18:14 |
fungi | interestingly, the short image has been sitting in a deleting state for almost 10 minutes now | 18:15 |
clarkb | infra-root I started to compose https://etherpad.opendev.org/p/gerrit-upgrade-luca-questions | 18:34 |
fungi | clarkb: lgtm | 18:38 |
clarkb | thanks. I won't send it today as it will get lost in end-of-week email, but early monday is my goal | 18:39 |
*** fressi has joined #opendev | 18:58 | |
fungi | clarkb: i don't think the truncated images from 12 hours ago was related to container restarts. start time for dumb-init on both amd64 builders is 10 days ago | 18:58 |
fungi | er, truncated image (singular) | 18:58 |
clarkb | the mystery deepens | 18:59 |
clarkb | any network errors in dmesg? | 18:59 |
fungi | hard to say, since dib's chroot activity floods the ring buffer | 19:00 |
fungi | it's all full of dracut file listings and the like | 19:00 |
clarkb | also I think we've only seen this happen with vexxhost and openedge, both run newer clouds. Maybe a glance-side bug | 19:01 |
clarkb | donnyd: ^ anything logged around that upload that is suspicious from your side? | 19:01 |
donnyd | that is directly possible | 19:01 |
donnyd | I am on ussuri | 19:01 |
fungi | the vexxhost occurrence sounded like it was possibly related to storage changes which were happening, and the image was only a few bytes not several gigabytes | 19:01 |
donnyd | I have no central logging setup yet | 19:02 |
donnyd | but it's on my list of things to do | 19:02 |
donnyd | checking glance logs one by one now | 19:03 |
donnyd | No errors in glance logs | 19:05 |
*** fressi has quit IRC | 19:13 | |
fungi | i'm struggling to find any record of us uploading that image. it's probably just friday fumbles on my end | 19:17 |
clarkb | fungi: we do multiple log rotations a day, may be in older log gile | 19:20 |
clarkb | *file | 19:20 |
fungi | yeah, i was looking through those too, it's just that they're huge so i'm trying to nail down a good pattern to match on | 19:20 |
fungi | i can find the upload for 0000114340 just fine, but not 0000114341 | 19:22 |
clarkb | maybe grep by the uuid? | 19:25 |
fungi | tried that too | 19:25 |
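A sketch of the kind of search being run here across the rotated builder logs on each builder host; the log path glob and the host list are assumptions (and, as it turns out just below, the missing piece was a newer builder that wasn't being checked):

    # look for the upload's glance uuid in every builder's logs
    for host in nb01 nb02 nb03 nb04; do
      ssh "${host}.opendev.org" \
        'zgrep -l "1fec9ab7-d54c-4167-9d37-ed8226c82098" /var/log/nodepool/*.log*'
    done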
*** hashar has joined #opendev | 19:26 | |
fungi | donnyd: btw, the reupload completed a little over an hour ago, does it look larger to you now? | 19:27 |
fungi | clarkb: what's strange is i can't even find the reupload logged either. it's like there are other builders than just the ones i'm checking, or something | 19:27 |
fungi | oh! yep, that's it :/ | 19:28 |
fungi | i forgot we added an nb04 | 19:28 |
fungi | and there it is | 19:28 |
* fungi sighs | 19:28 | |
clarkb | friday :) | 19:29 |
fungi | 2020-08-14 04:57:59,624 INFO nodepool.builder.UploadWorker.2: Uploading DIB image build 0000114341 from /opt/nodepool_dib/ubuntu-bionic-0000114341.qcow2 to openedge-us-east | 19:29 |
fungi | 2020-08-14 06:25:03,747 INFO nodepool.builder.UploadWorker.2: Image build ubuntu-bionic-0000114341 (external_id 1fec9ab7-d54c-4167-9d37-ed8226c82098) in openedge-us-east is ready | 19:30 |
fungi | so that's the one which was short, and which i deleted a couple hours ago | 19:30 |
clarkb | the time delta there seems like it actually uploaded the data | 19:31 |
fungi | and i can't find any related errors/tracebacks | 19:33 |
fungi | though it's a little hard to say for sure because there's a flood of centos-8 upload errors which look like a corrupt znode | 19:33 |
fungi | json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) | 19:33 |
fungi | one of those | 19:33 |
fungi | well, more like thousands of those | 19:34 |
fungi | skimming the `tree /nodepool/images` output from zkshell to see what's there for it | 19:35 |
clarkb | fungi: that's the thing pabelanger asked about and I pointed out a change for. I think we can rm the znode as it is empty | 19:37 |
clarkb | I should check if I need to rereview that change | 19:37 |
fungi | yeah, except the log doesn't say which znode it is | 19:37 |
fungi | or is that the proposed change, to actually report the znode? | 19:37 |
clarkb | yes that | 19:38 |
clarkb | it also skips over the node and ignores it | 19:38 |
fungi | i don't actually see any empty build nodes for centos-8 though | 19:43 |
fungi | tons for ubuntu-bionic | 19:43 |
fungi | er, sorry, no, these are ubuntu-xenial-arm64 | 19:43 |
fungi | also centos-8-arm64 | 19:44 |
fungi | ubuntu-bionic-arm64 | 19:44 |
fungi | beginning to see a pattern here | 19:45 |
fungi | anyway, this tree is 13k lines long, so i'm not quite sure what needs deleting from it | 19:47 |
clarkb | we have to find the empty znode | 19:48 |
clarkb | the cleanup process bails on that, which is why we leak | 19:48 |
clarkb | I can take a look in a few | 19:48 |
fungi | i guess there's a zkshell to dump all the data from every znode so we can mechanically search for the empty one? | 19:48 |
fungi | er, a zkshell command | 19:49 |
clarkb | I'm guessing it's in an earlier numbered one | 19:49 |
clarkb | and everything behind it has leaked | 19:49 |
fungi | also is centos-8 upload failure an indication that it's somewhere under the centos-8 portion of the tree, or is that just happenstance? | 19:49 |
clarkb | not sure if we have such a command | 19:49 |
clarkb | that I dont know | 19:49 |
fungi | it's at least strange that it only seems to be complaining about centos-8 uploads in the logs | 19:50 |
donnyd | had to go do some friday tasks | 19:52 |
donnyd | checking image now | 19:52 |
donnyd | much mo betta | 19:53 |
donnyd | https://usercontent.irccloud-cdn.com/file/ajhBvffw/image.png | 19:53 |
fungi | good, good | 19:53 |
donnyd | going to fire one up to make sure it works right | 19:54 |
fungi | trying to see if maybe there are build ids for centos-8 in the zk tree which haven't been uploaded. the ones in a ready state for any providers are 0000078539 and 0000078540 | 19:56 |
fungi | the only one zk has besides those is 0000078538 in vexxhost-ca-ymq-1 and that has no upload id nor lock | 19:57 |
fungi | could that be the problem entry? | 19:57 |
fungi | maybe the delete for that failed in a weird way and leaked an incomplete entry in zk? | 19:58 |
clarkb | fungi: ya that one is weird, but I think it may be not cleaning up because there is an empty upload record somewhere | 19:59 |
clarkb | the cleanup thread basically gets hit by this issue and it noops | 19:59 |
clarkb | and ya this is like a needle in a haystack | 19:59 |
clarkb | we might need to write a zk script to iterate through all of them and search? | 19:59 |
fungi | it does seem to be the only anomalous subset of the centos-8 portion of the tree though | 20:00 |
donnyd | ok, it's good to go | 20:00 |
clarkb | fungi: ya but its failing a json decode not a node doesn't exist | 20:01 |
fungi | thanks donnyd!!! | 20:01 |
clarkb | I'm going to hack the old install of nodepool on nl01 root (not container) to log the location when it fails to dib-image-list | 20:02 |
*** hipr_c has quit IRC | 20:04 | |
clarkb | fungi: that seems to show it is the nodes like the one you found (there are more of them though) | 20:07 |
fungi | i'm getting roped into dinner prep now, but can try cleaning those up after unless you're already on it | 20:09 |
clarkb | I'm still trying to make sense of the problem | 20:09 |
clarkb | because those nodes have data if you get them but if you ls in them they don't have subtrees | 20:10 |
clarkb | ah ok the centos-8 one is empty | 20:11 |
clarkb | but the others aren't so maybe its just that one causing problems | 20:11 |
clarkb | fungi: you think delete /nodepool/images/centos-8/builds/0000078538 and its sub entries? | 20:12 |
fungi | that's where i'd start at least | 20:13 |
fungi | it's the only build which nodepool image-list doesn't show | 20:13 |
fungi | for centos-8 anyway | 20:13 |
clarkb | ok I'll do that now | 20:13 |
fungi | and that's the only label currently getting errors in the log | 20:13 |
fungi | so it's my somewhat unfounded guess | 20:14 |
clarkb | I can dib image list now | 20:15 |
clarkb | I'm undoing my hack to the script on nl01 | 20:15 |
clarkb | also we seem to have a sad arm64 builder | 20:15 |
fungi | that znode deletion caused nb02 to report: 2020-08-14 20:14:22,615 INFO nodepool.builder.BuildWorker.0: Deleting old build log /var/log/nodepool/builds/centos-8-0000078538.log | 20:16 |
fungi | or maybe it was coincidental | 20:16 |
clarkb | we probably have something that cleans those up too when the images go away | 20:16 |
clarkb | I'm guessing not coincidental | 20:17 |
fungi | i'm guessing not either | 20:17 |
*** priteau has quit IRC | 20:19 | |
fungi | the "ERROR nodepool.builder.UploadWorker.6: Error uploading image centos-8 ..." messages seem to have not reappeared since 20:14:19 | 20:22 |
clarkb | our dib image list count is falling too | 20:22 |
clarkb | (it's cleaning up all the failed arm64 builds that leaked) | 20:22 |
clarkb | fungi: go eat dinner, I think we are good here | 20:22 |
fungi | awesome, thanks for the help! | 20:23 |
clarkb | I've rereviewed swest's change that should help and left a note in #zuul that we hit this in the getBuilds not getUploads path | 20:24 |
clarkb | (I think we may need similar guards there?) | 20:24 |
*** hashar has quit IRC | 20:48 | |
*** DSpider has quit IRC | 23:10 | |
*** DSpider has joined #opendev | 23:11 | |
*** mlavalle has quit IRC | 23:13 | |
ianw | $ ls ./wheelhouse.final/cryptography-3.1.dev1-cp3 | 23:19 |
ianw | cryptography-3.1.dev1-cp36-cp36m-manylinux2014_aarch64.whl cryptography-3.1.dev1-cp37-cp37m-manylinux2014_aarch64.whl cryptography-3.1.dev1-cp38-cp38-manylinux2014_aarch64.whl cryptography-3.1.dev1-cp39-cp39-manylinux2014_aarch64.whl | 23:19 |
openstackgerrit | Matthew Thode proposed openstack/diskimage-builder master: update gentoo to allow building arm64 images https://review.opendev.org/746000 | 23:21 |
*** DSpider has quit IRC | 23:34 | |
fungi | w00t! | 23:40 |
fungi | prometheanfire wants a piece too, looks like | 23:41 |
prometheanfire | wat? | 23:42 |
fungi | the aarch64/arm64 goodness | 23:42 |
prometheanfire | :D | 23:43 |
prometheanfire | indeed | 23:43 |