ianw | maybe should have just let chatgpt write the rules. i mean sure every change would have to include "i love you chatgpt", but small price to pay :) | 00:02 |
opendevreview | Ian Wienand proposed openstack/project-config master: gerrit/acl : Convert remaining AnyWithBlock to submit requirements https://review.opendev.org/c/openstack/project-config/+/875996 | 00:12 |
ianw | clarkb: ^ that's not new, but i think i accidentally unstacked it from all the prior changes so it went into merge conflict | 00:13 |
*** dhill is now known as Guest7617 | 00:19 | |
clarkb | ianw: small thing on that | 00:21 |
opendevreview | Ian Wienand proposed openstack/project-config master: gerrit/acl : Convert remaining AnyWithBlock to submit requirements https://review.opendev.org/c/openstack/project-config/+/875996 | 00:22 |
ianw | thanks! | 00:22 |
opendevreview | Merged opendev/system-config master: launch: add a probe for ssh after reboot https://review.opendev.org/c/opendev/system-config/+/868369 | 00:26 |
fungi | better yet, get pre-lobotomy bingbot to write the rules | 00:26 |
fungi | though it may demand blood sacrifice for every change upload | 00:27 |
*** \join_weakmayors is now known as \join_iwp9 | 03:33 | |
opendevreview | Merged openstack/project-config master: gerrit/acl : Convert remaining AnyWithBlock to submit requirements https://review.opendev.org/c/openstack/project-config/+/875996 | 05:49 |
*** jpena|off is now known as jpena | 08:20 | |
opendevreview | Ebbex proposed openstack/diskimage-builder master: Simplify epel/pkg-map https://review.opendev.org/c/openstack/diskimage-builder/+/877347 | 08:51 |
opendevreview | Ebbex proposed openstack/diskimage-builder master: Simplify epel/pkg-map https://review.opendev.org/c/openstack/diskimage-builder/+/877347 | 08:57 |
dpawlik | fungi, ianw, clarkb: o/ https://review.opendev.org/c/zuul/zuul-jobs/+/876081 - could review or reply for Tengu comment when you have few min please? | 15:16 |
clarkb | dpawlik: the Zuul matrix room would be a better venue to discuss zuul-jobs. I mentioned there when you first pushed the change that you should consider adding a job toat runs the role | 15:17 |
dpawlik | ack clarkb | 15:18 |
clarkb | *job that runs the role | 15:19 |
dpawlik | clarkb: will do. Just please check Tengu comment and reply. We would like to move the role from zuul-jobs to dedicated project in oko org and zuul-jobs will just use it. I will cover it with proper zuul job | 15:20 |
clarkb | again that is a discussion for zuul-jobs | 15:21 |
clarkb | er sorry for the zuul matrix room | 15:21 |
dpawlik | understand. Thanks | 15:22 |
clarkb | ianw: feedback from foundation is that the statement we have looks good but it would be better if we can add data/stats like Debian and Ocaml have. Maybe we can talk about numbers of tests run weekly/test nodes weekly? | 15:42 |
clarkb | s/weekly/some time frame/ | 15:42 |
opendevreview | Clark Boylan proposed opendev/git-review master: Test Python bounds only https://review.opendev.org/c/opendev/git-review/+/877321 | 16:02 |
opendevreview | Clark Boylan proposed opendev/git-review master: Test old and new Gerrit https://review.opendev.org/c/opendev/git-review/+/877313 | 16:02 |
clarkb | fungi: ^ I think those will both pass testing now and should be mergeable assuming you want to switch to nox. | 16:02 |
clarkb | not terrible to adapt to tox but jobs will need redoing and tox.ini would need updating to pass the var through | 16:03 |
fungi | i'm not convinced that the max-concurrency setting for rax-ord is actually taking effect: https://grafana.opendev.org/d/a8667d6647/nodepool-rackspace?orgId=1 | 16:03 |
fungi | i see spikes as high as 57 nodes building at the same time there | 16:03 |
clarkb | fungi: a thread dump should confirm as I think you'll be able to identify the threads associated with the provider | 16:03 |
fungi | just as recently as an hour ago | 16:03 |
clarkb | (they should be named in such a way that this is possible iirc) | 16:03 |
fungi | kill -USR1 yeah? | 16:05 |
fungi | twice obviously | 16:05 |
fungi | USR2 | 16:06 |
fungi | https://zuul-ci.org/docs/nodepool/latest/operation.html#daemon-usage | 16:06 |
fungi | running this now on nl01 since it seems like we're consistently well above 10 building concurrently: sudo kill -USR2 1285922;sleep 60;sudo kill -USR2 1285922 | 16:08 |
fungi | nl01:~fungi/stack_dump.2023-03-14 | 16:14 |
fungi | looks like there are 8 openstack-api-rax-ord_* threads, 6 keyscan-rax-ord_* threads, and a PoolWorker.rax-ord-main thread | 16:17 |
fungi | there are also some generic threads not tied to a particular provider but it doesn't seem like they handle building nodes | 16:19 |
clarkb | ya a building node is always in its own thread iirc | 16:21 |
clarkb | I think those counts don't contradict our limit of 10 (I'm not sure you can add 8+6; it may be that 6 of the 8 are in keyscan mode?) | 16:22 |
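For reference, a minimal Python sketch of the thread-tallying step fungi did by hand above. It assumes the dump was saved to a plain text file (like nl01:~fungi/stack_dump.2023-03-14) and that provider threads carry names such as openstack-api-rax-ord_*, keyscan-rax-ord_*, and PoolWorker.rax-ord-main; it is not part of nodepool itself.

```python
# Hypothetical helper: count nodepool threads per provider-related name prefix
# in a saved stack dump. Thread-name patterns are taken from the dump
# described above; adjust PROVIDER for other providers.
import collections
import re
import sys

PROVIDER = "rax-ord"

def tally(path):
    counts = collections.Counter()
    with open(path) as dump:
        for line in dump:
            # Grab any token mentioning the provider and strip the unique
            # per-thread suffix after "_" so equivalent threads group together.
            for name in re.findall(r"\S*" + re.escape(PROVIDER) + r"\S*", line):
                counts[name.split("_")[0]] += 1
    return counts

if __name__ == "__main__":
    for prefix, count in tally(sys.argv[1]).most_common():
        print(f"{count:4d}  {prefix}")
```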
fungi | i'm more concerned by the "Maximum number of node requests that this provider is allowed to handle concurrently." but maybe those building nodes don't correspond to a node request, or maybe we average >2.5 nodes per request? | 16:23 |
clarkb | I'm not sure I understand that | 16:24 |
fungi | the graph is showing node count rather than node request count, that could be the difference | 16:24 |
clarkb | the limit is 10 concurrent builds and the data above is under that limit I think | 16:24 |
fungi | https://zuul-ci.org/docs/nodepool/latest/configuration.html#attr-providers.max-concurrency | 16:24 |
clarkb | yes we set that to 10 and the data above shows 8 | 16:25 |
clarkb | let me pull up the dump and look directly | 16:25 |
fungi | it says "node requests" there, not "nodes" which is where my confusion over the graph likely stems from | 16:25 |
clarkb | docker just emailed infra-root and said we have been identified as possibly being a free team organization which is sunsetting on April 14 | 16:26 |
clarkb | I thought we never proceeded with that due to their odd requirement to not document the use of podman or whatever | 16:26 |
fungi | we never did complete that application, no | 16:26 |
corvus | yeah i was just reading that email and looking into it... | 16:27 |
clarkb | they are warning us that after that time we may lose access to some of our data | 16:27 |
clarkb | corvus: thanks! | 16:27 |
fungi | anyway, on the max-concurrency setting, what we're trying to limit is the number of nova create calls which haven't returned a ready node yet, what we control is the number of outstanding node requests which have been accepted, there's no 1:1 correlation between building nodes and node requests, so we need to scale it by the average request size i think | 16:27 |
corvus | it looks like "opendevorg" and "zuul" are "organizations" under the "Docker Free Team" subscription | 16:28 |
corvus | so my reading of that email is that after april, those can disappear at any time | 16:28 |
corvus | so that's pretty much worst-case scenario for us | 16:28 |
clarkb | corvus: huh we asked about it then they told us we couldn't document use of tools other than docker which caused us to not pursue further. Did they just grant it to us anyway? | 16:29 |
clarkb | oh or maybe they are saying there is no free tier at all? | 16:29 |
corvus | https://paste.opendev.org/show/bPGrtnOfGSNjw5UG225s/ is the email btw | 16:29 |
clarkb | ya I think thats it. Basically no free option period | 16:30 |
corvus | clarkb: erm, i don't remember being involved in any application process... | 16:30 |
clarkb | corvus: they had a program for open source projects to avoid the rate limits. We looked into that and then didn't pursue it due to their requirements. I was confusing this with that | 16:30 |
clarkb | corvus: But I think in reality what is happening is they are saying docker hub will have zero free options after april 14 | 16:30 |
corvus | clarkb: gotcha, yeah i think this is sort-of orthogonal to the rate limits thing (except inasmuch as their subscription levels have differing rate limits) | 16:31 |
corvus | and i mostly read your second thing about "no free" as correct -- except that i think they may still have a "personal" level that's free | 16:32 |
corvus | (which they say is "good for open source projects" !) | 16:32 |
clarkb | I'll make a note to bring this up on the infra team meeting today. But I guess this just became a bit of a priority (hosting elsewhere) | 16:33 |
fungi | looks like this has exploded on reddit, unsurprisingly | 16:33 |
clarkb | the month + 30 days of RO access seems really aggressive too | 16:33 |
corvus | perhaps there is a possibility to convert each of our several (we have 4) orgs into 4 different "personal" accounts? but i don't see a button to do that -- only upgrade. | 16:33 |
clarkb | any idea what the costs are? I could see us potentially paying once to extend that timeframe and actively work to get away from them | 16:34 |
clarkb | https://www.docker.com/pricing/ $9/user/month | 16:35 |
corvus | https://www.docker.com/pricing/ | 16:35 |
corvus | i think that would be $600 one time cost to keep opendevorg and zuul for 1 year (and abandon the other 2) | 16:36 |
corvus | because of minimums | 16:36 |
clarkb | corvus: sorry what were the other two? | 16:36 |
corvus | openstackinfra and stackforge | 16:36 |
clarkb | ah yup those should be abandonable | 16:36 |
corvus | yes they have no repos | 16:37 |
corvus | i think we can move pretty quickly if necessary | 16:37 |
clarkb | so ya that gives us a potential out so that we are not scrambling over the next month. I think we should continue to look into what it would take to migrate. In particular we probably need to start with base images | 16:37 |
clarkb | basically we can move the base images elsewhere, then once that is done rebuild everything that sits on top of those | 16:38 |
clarkb | corvus: I agree. I think the main gotcha might be whether or not the images we rely on will continue to be available | 16:38 |
corvus | if it's another public service, we can probably move everything in a few days. if we want to self-host, give us another week. | 16:38 |
corvus | mm like python base? | 16:38 |
clarkb | or if we need to also spin up the hosting of a python base image for example | 16:38 |
clarkb | yes | 16:38 |
clarkb | those are all docker hub "library" images so presumably won't be going away? | 16:38 |
clarkb | but maybe you'll lose access to them without a docker account? | 16:39 |
clarkb | fungi: re max concurrent requests I think those are "node build requests" and not api requests. | 16:39 |
fungi | oh, in that case the graph is misleading i guess? or maybe those are nodes which nodepool has given up on but haven't exited building state yet? | 16:40 |
clarkb | fungi: ya I'm wondering if it is an accounting problem more than a real state issue. Probably needs more investigating to understand. One way to do it is check nodepool list for ord building state nodes over time | 16:41 |
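A rough sketch of that over-time check, assuming it runs on the launcher host where the nodepool CLI is available; it simply scans the human-readable `nodepool list` table output once a minute rather than relying on any structured output format.

```python
# Hypothetical sampler: count rax-ord nodes in "building" state once a minute
# by scanning the nodepool list table output.
import subprocess
import time

for _ in range(30):  # ~30 samples, one per minute
    out = subprocess.run(["nodepool", "list"],
                         capture_output=True, text=True).stdout
    building = sum(1 for line in out.splitlines()
                   if "rax-ord" in line and "building" in line)
    print(f"{time.strftime('%H:%M:%S')}  rax-ord building: {building}")
    time.sleep(60)
```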
corvus | i believe max_concurrency is for requests, so if we get 10 requests for 2 nodes each, we will spawn 20 node building state machines simultaneously | 16:42 |
clarkb | oh! | 16:43 |
fungi | yeah, that's what i was surmising | 16:43 |
corvus | so that's not quite what we wanted for the ord case -- in that we really want to limit node requests -- but on average, maybe it will work out? | 16:43 |
fungi | so like i said, we'll need to scale it by the average node request size | 16:43 |
clarkb | ya and if 10 is still too high we could scale it back a bit more I guess | 16:43 |
corvus | like, maybe on average we have 1.2 nodes per request or something | 16:43 |
clarkb | fungi: sorry I was thinking about api requests and thought that is what you were suspecting | 16:44 |
clarkb | so 10 api requests in flight at a time rather than 10 nodepool things | 16:44 |
fungi | well, the graph suggests that we averaged 6 nodes per request at the point where there were 57 building nodes at max-concurrency 10, and currently it's around 2.5 nodes per request average | 16:44 |
corvus | then, unfortunately, we'll just get hit in the face when we're unlucky enough to run infra jobs where we get 10 requests for 5 nodes each :) | 16:44 |
corvus | that's higher than i would have guessed | 16:45 |
corvus | 6 nodes/request is unpossible, right? | 16:45 |
clarkb | ya I thought our limit is 5 but /me checks | 16:45 |
fungi | well, was current up until a few minutes ago, now there are no requests i think | 16:45 |
clarkb | max is apparently 10 | 16:45 |
corvus | oh nope, we raised the limit | 16:46 |
corvus | it's possible | 16:46 |
corvus | it's 10 | 16:46 |
corvus | (in most tenants) | 16:46 |
fungi | it could still be some combination of high average nodes per request and the graph also reflecting "building" nodes which the launcher gave up on due to timeouts but haven't transitioned to ready/deleting yet | 16:46 |
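As a back-of-the-envelope check on the numbers in this exchange (a sketch, not nodepool code): max-concurrency caps outstanding node requests, so the building-node count can exceed it by roughly the average request size.

```python
# Worked arithmetic from the discussion above.
max_concurrency = 10        # providers.max-concurrency set for rax-ord
building_nodes = 57         # peak seen on the grafana dashboard
max_nodes_per_request = 10  # per-request node limit in most tenants

implied_avg = building_nodes / max_concurrency          # ~5.7 nodes/request
worst_case = max_concurrency * max_nodes_per_request    # 100 building nodes
print(f"implied average nodes per request: {implied_avg:.1f}")
print(f"theoretical worst case: {worst_case} building nodes")
```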
clarkb | https://web.docker.com/rs/790-SSB-375/images/privatereposfaq.pdf is an FAQ that seems to create more confusion because it specifically calls out "private repos" | 16:53 |
clarkb | some people seem to think that only private data (of which we have none) is affected | 16:53 |
clarkb | however that isn't how I read the email | 16:53 |
clarkb | I think we should begin planning at a high level but looking at discussion around this there is a tremendous amount of confusion. It is probably a good idea to try and avoid making any hard decisions for a day or two giving docker some time to clarify things which may impact our decisions | 17:07 |
clarkb | I'm going to start collecting notes here: https://etherpad.opendev.org/p/MJTzrNTDMFyEUxi1ReSo | 17:13 |
corvus | clarkb: agreed all around | 17:17 |
*** jpena is now known as jpena|off | 17:18 | |
frickler | seems the kolla docker account is also affected, mnasiadka received the same email | 17:42 |
frickler | clarkb: I don't think I'll make it to the meeting (again), but since the storyboard topic was triggered at the last PTG, I think it would be really good if we could come up with at least some kind of answer before the next one | 17:45 |
frickler | maybe at least we can schedule a session during the ptg itself joining the affected projects | 17:46 |
clarkb | frickler: re docker I think this is universal for all free orgs on docker hub | 17:46 |
clarkb | frickler: re ptg and storyboard. The problem is I can't seem to get any opinions other than my own to be stated :) | 17:46 |
clarkb | I don't want to make any unilateral decisions here. I think projects that are actively moving to launchpad should talk to each other and coordinate potential tooling/process to reduce the amount of effort, but none of that needs to go through us. It's just that they weren't talking to each other from what I could tell, so I tried to centralize that discussion, which didn't really happen | 17:47 |
clarkb | If I were to make a decision I would probably recommend we sunset storyboard (topical for today), give a shutdown date probably more than a month in the future, and take it from there | 17:48 |
clarkb | but I am/was hopeful that we could reach such a decision more collectively or reach a different conclusion as long as it was a bit more of a collective path | 17:50 |
frickler | I'd agree to that except maybe we should see if we can keep it in read-only mode for longer, allowing us to still reference existing stories and not have any short-term pressure to migrate those | 17:50 |
clarkb | ya or maybe some sort of read-only archive export? That's a good idea worth investigating | 17:50 |
frickler | worst case a recursive wget that can be put onto static | 17:52 |
clarkb | fungi: that git-review stack is green now fwiw | 18:07 |
*** gibi is now known as gibi_pto | 18:38 | |
clarkb | Ramereth: hey, if you do end up hearing back from docker re the DSOS I would be curious to know if they have kept the requirement that program participants document that you must use docker and docker desktop to run the images hosted in the program | 18:49 |
clarkb | Ramereth: we looked into this a while back and that was one of the requirements in the response we got, which led us to not follow up on it | 18:50 |
clarkb | NeilHanlon: ^ same for you | 18:50 |
Ramereth | clarkb: I will certainly do that. That's quite the requirement if that's the case but how would they enforce that? | 18:51 |
fungi | yeah, they really didn't seem keen on the idea of including mention of docker alternative container tooling | 18:51 |
fungi | in project documentation i mean | 18:51 |
clarkb | Ramereth: they could potentially enforce it using user agent strings in requests? But I suspect it was more of an honor code thing that they would evaluate in the annual reapplication/reapproval process | 18:51 |
fungi | also they wanted frequent participation in writing docker marketing materials | 18:51 |
* NeilHanlon sighs | 18:51 | |
NeilHanlon | we'll make our own container registry | 18:52 |
clarkb | it is entirely possible they've removed that rule | 18:52 |
NeilHanlon | with blackjack.. | 18:52 |
clarkb | which is why I'm curious | 18:52 |
Ramereth | yup, sounds like the best option is to run your own registry if that's the case | 18:52 |
NeilHanlon | the new TOS is pretty.. simple https://web.docker.com/rs/790-SSB-375/images/DockerOpenSourceProgramTermsofAgreement.pdf | 18:52 |
fungi | odds are we caught them while they were still on an early draft of the requirements | 18:53 |
NeilHanlon | i do remember that from the old program | 18:53 |
clarkb | I wish us all much luck. we've been taking notes here: https://etherpad.opendev.org/p/MJTzrNTDMFyEUxi1ReSo though a lot of that is specific to us | 18:54 |
fungi | we originally approached them right after the change in download quotas was announced, so it was probably very early days for their open source community options still | 18:54 |
fungi | probably they loosened up a bit after enough communities turned down the offer | 18:55 |
NeilHanlon | probably. i do know I got pushback when publishing the Rocky images in their 'library' because our documentation mentioned instructions for running systemd in non-docker containers.. they did not like that :) | 18:58 |
fungi | ugh | 18:59 |
* NeilHanlon was only half kidding about the 'build our own' thing and begins thumbing through his collection of unused, purchased-at-3am domains | 19:01 | |
Ramereth | I just got a reply for the OSUOSL request: All right! Your request has been received and put into our queue. The team will start to address this issue immediately. | 19:03 |
Ramereth | we'll see what they say.. | 19:04 |
NeilHanlon | i applied again as well.. we'll see. will let you know clarkb, fungi, how it goes | 19:07 |
clarkb | thanks! | 19:08 |
mnasiadka | From Kolla side we might consider swapping out docker for something else (we already direct our users mainly to quay.io) | 20:18 |
Clark[m] | mnasiadka: is that with a paid or free quay.io account? It isn't clear how to set up the free use and creating an account requires a phone number | 20:21 |
*** dviroel_ is now known as dviroel | 20:21 | |
NeilHanlon | quay is free for public repos | 20:25 |
NeilHanlon | at the cost of a RH account ;) | 20:26 |
NeilHanlon | Can I use Quay for free? | 20:26 |
NeilHanlon | Yes! We offer unlimited storage and serving of public repositories. We strongly believe in the open source community and will do what we can to help! | 20:26 |
clarkb | NeilHanlon: ya but then when you go to sign up it asks for a phone number. Maybe we just have to give them a phone number and send it | 20:27 |
clarkb | ianw: did you see my note re works on arm statement? | 20:27 |
clarkb | ianw: gitea09 has the backup cron jobs on it now and I haven't seen email yet that it has failed. I'll try to do more indepth verification of it though | 20:36 |
ianw | clarkb: yeah, was just thinking about how to get some raw counters | 20:51 |
clarkb | we can't query by label in zuul's api unfortunately | 20:52 |
clarkb | but we can query by pipeline and many of those jobs are in known pipelines. That may be good enough? | 20:52 |
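A hedged sketch of that pipeline-based query against Zuul's REST API. It assumes the openstack tenant, a pipeline named check-arm64, and that the builds endpoint returns results newest-first and pages via limit/skip; treat the details as illustrative rather than a finished report.

```python
# Count arm64-pipeline builds over the last week using Zuul's builds API.
import datetime
import requests

API = "https://zuul.opendev.org/api/tenant/openstack/builds"
cutoff = datetime.datetime.utcnow() - datetime.timedelta(days=7)

count, skip, done = 0, 0, False
while not done:
    page = requests.get(
        API, params={"pipeline": "check-arm64", "limit": 100, "skip": skip}
    ).json()
    if not page:
        break
    for build in page:
        start = build.get("start_time")
        if start and datetime.datetime.fromisoformat(start) < cutoff:
            done = True  # results are newest-first, so stop paging here
            break
        count += 1
    skip += len(page)

print(f"builds in the check-arm64 pipeline over the last 7 days: {count}")
```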
ianw | perhaps i shouldn't have looked because now i don't like the look of the linaro graph | 20:53 |
clarkb | uh oh :) | 20:53 |
clarkb | I'm going to pop out now to take advantage of some "warm" sunny weather. Haven't had this in a few weeks | 20:53 |
clarkb | but happy to help look more when I get back | 20:54 |
ianw | '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:997)') | 20:54 |
ianw | kevinz: ^ | 20:54 |
ianw | i'm not sure what acme method is used here, but this seems like something that can be a cron job | 20:55 |
Ramereth | clarkb: so it looks like the OSUOSL request got approved and this part of the email was interesting: If you haven’t already, please take the time to update your project’s Hub pages to include a detailed project description, links to your project source code, as well as contributing guidelines, and a link to your organization’s website. Projects lacking this information may not receive the Docker Sponsored Open Source badging for their images on Docker Hub. | 21:14 |
Ramereth | so far nothing mentioning the requirement of using docker tools | 21:16 |
fungi | that's reassuring | 21:17 |
opendevreview | Sergiy Markin proposed opendev/base-jobs master: Bindep libraries update https://review.opendev.org/c/opendev/base-jobs/+/877430 | 22:50 |
opendevreview | Sergiy Markin proposed opendev/base-jobs master: Bindep libraries update https://review.opendev.org/c/opendev/base-jobs/+/877430 | 23:14 |
clarkb | did anyone prune the vexxhost backups server? We got an email about it being at 90% yesterday and haven't received one yet today | 23:16 |
* clarkb writes a service-announce email for the April 6 22:00 UTC gerrit work | 23:16 | |
ianw | clarkb: i didn't, i thought at the time "that seems like not that long since we last did it". which i then looked up to be 2023-01-30 | 23:17 |
ianw | but what would actually be interesting is how often we have done it before that | 23:18 |
ianw | i feel like it was not usually ~1.5 months | 23:18 |
clarkb | ianw: that is the smaller of the two backup servers right? But ya I think you are correct that it hasn't been this frequent. We probably do end up accumulating more over time simply with new servers but also with the pruning keeping more content over time (eg two annual backups or whatever our retention is) | 23:19 |
ianw | is cacti graphing them? | 23:20 |
clarkb | looks like yes | 23:20 |
clarkb | the saw edge on the disk graph is actually pretty consistent | 23:20 |
ianw | http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=69081&rra_id=4&view_type=&graph_start=1645782851&graph_end=1678836035 | 23:20 |
clarkb | http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=69081&rra_id=all | 23:20 |
ianw | heh, jinx again, yeah, that looks fairly periodic | 23:21 |
clarkb | I'm going to state the gerrit outage will be approximately 2 hours for the april stuff | 23:21 |
clarkb | since we have to do offline reindexing I want to give us a bit more time than usual | 23:21 |
ianw | in that case, i don't mind running the prune in a screen, i'll start it in a bit | 23:21 |
clarkb | thanks! | 23:21 |
ianw | currently it seems we run playbooks/service-base.yaml just against all hosts in the inventory | 23:26 |
clarkb | yes, since service-base is our very base config (users, firewall, email) | 23:26 |
ianw | i'm wondering if for the linaro cloud we could have it in the inventory, but maybe in a group "unmanaged" or something | 23:26 |
clarkb | we want to keep that stuff in sync globally | 23:26 |
ianw | and then pick-and-choose the bits we want to apply | 23:26 |
clarkb | ianw: and exclude that group from base? | 23:26 |
ianw | yeah | 23:27 |
clarkb | ya that might work | 23:27 |
clarkb | maybe stick them in a special section of the inventory file too to make the distinction more clear | 23:27 |
ianw | so install our users, something to manage renewing the LE certs, maybe other stuff in the future | 23:27 |
ianw | the other option is just to put acme.sh renewal on the linaro host in a local cron job and largely forget about it | 23:28 |
ianw | which frankly makes sense, but also triggers my gitops/collaborative infra nerves | 23:28 |
clarkb | another option is to do what we did with the inmotion cloud and self sign a longer term cert and not worry about it for a while | 23:28 |
clarkb | thats less clean, but is nice and stable | 23:29 |
ianw | true, but it is nice having a cert trusted by all the nodepools with no extra effort | 23:29 |
clarkb | gerrit outage email sent | 23:33 |
ianw | actually acme.sh has a deploy plugin to haproxy that concatenates things automatically | 23:34 |
clarkb | ianw: as far as works on arm statements go maybe something like "This has enabled us to run X arm64 test VMs that executed Y test jobs within our CI system" assuming we can answer what X and Y are without too much effort would be good | 23:37 |
ianw | i wonder if we can just do something in graphite that keeps adding the in-use nodes, and then just take the highest point | 23:39 |
clarkb | yes I think that is doable with graphite | 23:42 |
ianw | clarkb: https://graphite.opendev.org/S/X | 23:42 |
clarkb | I've also done a thing where i ask it for json data instead of pngs and then have python do some calculations | 23:42 |
ianw | i think you could probably say 7,000 raw | 23:42 |
clarkb | ianw: ya or maybe take a weekly value since that 7k will be out of date in a few weeks | 23:42 |
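A minimal sketch of the JSON-instead-of-PNG approach clarkb mentions: fetch a Graphite series and total it in Python. The metric name below is an assumption (substitute whatever target the graphite.opendev.org link above actually uses).

```python
# Sum a nodepool launch counter for the linaro provider over the last 30 days
# via Graphite's render API with format=json.
import requests

TARGET = "sum(stats_counts.nodepool.launch.provider.linaro-*.ready)"  # assumed metric

resp = requests.get(
    "https://graphite.opendev.org/render",
    params={"target": TARGET, "from": "-30d", "format": "json"},
)
series = resp.json()
if series:
    datapoints = series[0]["datapoints"]  # list of [value, timestamp] pairs
    total = sum(value for value, _ in datapoints if value is not None)
    print(f"nodes launched in the last 30 days: {total:.0f}")
```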
clarkb | "Enabled 1k test VMs weekly for CI jobs within our CI system" ish | 23:43 |
ianw | yeah, between 01/26 -> 02/26 ~ 5.5 -- say 6k | 23:43 |
ianw | flip d/m there if you live in a weird place that does that :) | 23:43 |
clarkb | but ya I would give a monthly weekly or daily count rather than a total sum as that should be more accurate going forward. Probably pick whichever one sounds most impressive | 23:44 |
ianw | yeah, i think the figures support saying 6k/month | 23:46 |
clarkb | works for me | 23:47 |
ianw | "OpenDev currently provides almost 6000 testing virtual-machines per week, with steady growth as community engagement with ARM increases." | 23:50 |
clarkb | ianw: I made a couple of edits to that line in the etherpad (also it's per month not week right?) | 23:51 |
ianw | oh yeah, sorry | 23:51 |
clarkb | I think that looks great | 23:52 |
ianw | ok, i can send it on tomorrow maybe, let it marinate and see if anyone else wants to chime in | 23:52 |
clarkb | sounds good. I'll ask foundation to take another look at it too | 23:53 |