*** flwang has joined #openstack-tc | 00:45 | |
*** whoami-rajat has joined #openstack-tc | 02:14 | |
*** ricolin has joined #openstack-tc | 03:38 | |
*** Luzi has joined #openstack-tc | 05:46 | |
*** e0ne has joined #openstack-tc | 06:41 | |
*** e0ne has quit IRC | 06:47 | |
*** e0ne has joined #openstack-tc | 07:35 | |
*** e0ne has quit IRC | 07:38 | |
*** dtantsur|afk is now known as dtantsur | 07:42 | |
*** tosky has joined #openstack-tc | 08:33 | |
*** jpich has joined #openstack-tc | 08:57 | |
*** e0ne has joined #openstack-tc | 09:08 | |
tosky | I noticed a new wave of "add py37" jobs - is it for sure this time? I may have missed the new discussion on the list | 09:28 |
*** cdent has joined #openstack-tc | 10:55 | |
smcginnis | tosky: Stein should be py36 for the officially targeted Python 3 runtime: https://governance.openstack.org/tc/reference/runtimes/stein.html | 13:07 |
smcginnis | tosky: So maybe those are being added to make sure we don't have problems in Train. They should probably post something to the ML to let folks know what their plan is though. | 13:08 |
* cdent wonders if we should shrink the tc | 13:11 | |
cmurphy | if you shrink the tc then that leaves even less opportunity for new blood to come in | 13:12 |
ttx | cdent: we might want to do that yes. 13 is a large number | 13:12 |
cdent | cmurphy: yes, that is the big (rather quite very big) negative | 13:13 |
cdent | but 13 is a pretty big number relative to the size of the community | 13:13 |
evrardjp_ | cdent: well we need to automate more things if we want to scale down -- I can't imagine accurately following all the projects with fewer than 13 ppl. | 13:15 |
cdent | evrardjp_: keep in mind that "following all the projects" is a relatively new thing. It might be the right thing, but it hasn't been round long enough for us to base the composition of the TC on just that. | 13:17 |
*** leakypipes is now known as jaypipes | 13:39 | |
*** mriedem has joined #openstack-tc | 13:50 | |
*** cdent has quit IRC | 13:59 | |
*** cdent has joined #openstack-tc | 13:59 | |
*** Luzi has quit IRC | 14:04 | |
*** jroll has quit IRC | 14:13 | |
*** elbragstad has joined #openstack-tc | 14:14 | |
*** jroll has joined #openstack-tc | 14:14 | |
*** elbragstad is now known as lbragstad | 14:24 | |
*** ricolin_ has joined #openstack-tc | 14:40 | |
*** ricolin has quit IRC | 14:42 | |
*** ricolin_ has quit IRC | 15:16 | |
*** ricolin_ has joined #openstack-tc | 15:17 | |
*** e0ne has quit IRC | 15:42 | |
*** e0ne has joined #openstack-tc | 15:48 | |
*** jamesmcarthur has joined #openstack-tc | 16:01 | |
dhellmann | smcginnis , tosky : one of the folks from canonical was trying to get 3.7 jobs added because they're shipping on 3.7 (stein, I think). IIRC, we said "ok, but you have to do it" | 16:30 |
dhellmann | and there was a mailing list thread, several months ago | 16:30 |
tosky | dhellmann: thanks; I remember the thread, which is also referenced from the commit messages | 16:31 |
dhellmann | ok, good | 16:31 |
tosky | but I remember that at some point the attempt to add new jobs was stopped (before the end of the year vacations, maybe) | 16:31 |
dtantsur | F29 also ships with Py37 by default, so it helps people running tox locally. (although Fedora has many versions available) | 16:31 |
tosky | so I was not sure whether it was fine to propose again those patches or not | 16:32 |
dhellmann | I think there was some confusion about whether adding the 3.7 jobs was related to dropping 3.5 and if having 3.5, 3.6, and 3.7 all at the same time was "a waste" | 16:32 |
tosky | oh, I see | 16:33 |
dhellmann | I don't think that adding those unit test jobs is the biggest "resource hog" of our CI though so I tried to get people to consider the questions separately | 16:33 |
tosky | wouldn't it make sense to also add 3.7 as supported into setup.cfg? | 16:33 |
dhellmann | now that we have the 3.6 decision in place and documented, we could drop 3.5 anyway | 16:33 |
*** e0ne has quit IRC | 16:34 | |
dhellmann | probably. I'm not sure we've tied unit test jobs to the classifiers in setup.cfg directly before | 16:34 |
tosky | not always, but I think it wouldn't hurt if we can avoid another wave of reviews just to fix the classifiers | 16:35 |
dhellmann | yeah, good point. I guess the question is, do you want to do that for unit tests alone, or wait for functional tests of some sort? | 16:37 |
dhellmann | I don't have an answer to that one | 16:38 |
clarkb | also don't we still run a subset of tests in many cases for python3 unittests? | 16:38 |
dhellmann | cdent , ttx, evrardjp_ : this would also be a good opportunity to engage with some of our Chinese contributors who have run in the past but not won seats | 16:39 |
ttx | dhellmann: work in progress ! | 16:39 |
dhellmann | clarkb : I don't think so | 16:39 |
dhellmann | a few, but not nearly as many as before | 16:39 |
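For context, the change being discussed -- adding a py37 unit test job alongside the required py36 one -- would typically look something like the following in a project's .zuul.yaml. This is an illustrative sketch only; the job names are assumptions following the openstack-tox-pyXX convention, not any specific repo's actual configuration.

```yaml
# Illustrative sketch: a project opting in to py37 testing on top of the
# py36 jobs targeted for Stein. Job names are assumed for illustration;
# check a project's real config before copying.
- project:
    check:
      jobs:
        - openstack-tox-py36
        - openstack-tox-py37
    gate:
      jobs:
        - openstack-tox-py36
        - openstack-tox-py37
```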
*** ricolin_ has quit IRC | 17:00 | |
smcginnis | I wonder if it would be more explicit if we had job templates for each release cycle that defined the python versioned jobs to run. | 17:10 |
smcginnis | Maybe slightly less confusion around what needs to be run versus what would be useful. | 17:10 |
smcginnis | tosky: I would think we would want to have functional tests running, and passing, first before adding it to setup.cfg. | 17:10 |
tosky | oki | 17:11 |
tosky | makes sense | 17:11 |
smcginnis | There probably wouldn't need to be much of a gap between adding a functional job and adding it to setup.cfg. Assuming everything passes. | 17:12 |
*** ricolin has joined #openstack-tc | 17:16 | |
clarkb | smcginnis: re release-specific python templates I think 3.7 would still be an outlier because it's informational, future-looking work, not a directly supported python. | 17:21 |
smcginnis | clarkb: Not sure if I parsed that, but I think that was what I was saying. python-stein-pti template would only have the designated py3 environment of py36. So it would be somewhat more clear that any py37 jobs would be additional testing, not the core required runtime for stein. | 17:23 |
clarkb | ah I read it more as "this is what we test on stein, don't add others" but that expansion is basically what I was saying. You'd have the main template then additional stuff on top | 17:23 |
smcginnis | ++ | 17:27 |
smcginnis | I was also thinking then for stable branches we would have a set template of jobs to run and we may choose to drop some of the forward-looking jobs that aren't as interesting for stable. | 17:27 |
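A minimal sketch of the per-cycle template idea, using the "python-stein-pti" name smcginnis floats above; the template and job names are hypothetical, not what was actually adopted:

```yaml
# Hypothetical per-cycle project-template: it carries only the designated
# runtime jobs for Stein (py36), so any py37 job a project adds on top is
# visibly extra, forward-looking testing rather than a required runtime.
- project-template:
    name: python-stein-pti
    check:
      jobs:
        - openstack-tox-py36
    gate:
      jobs:
        - openstack-tox-py36
```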
*** jpich has quit IRC | 17:50 | |
*** ricolin has quit IRC | 17:52 | |
fungi | especially if those jobs depend on "ephemeral" node states such as short-support distro releases or rolling release distros | 17:54 |
clarkb | fungi: rolling release seems to address the target function of these jobs well though. Test with whatever the newest python3 is, basically, then let the rolling release update that | 17:56 |
fungi | we can test stable/stein changes on ubuntu 18.04 lts and centos 7 for years to come still. fedora 29 on the other hand will cease to be supported upstream shortly after we release | 17:56 |
fungi | and testing stable changes on rolling releases is basically a ticking timebomb | 17:56 |
clarkb | oh ya don't test stable on rolling releases. I meant for master the rolling release model seems ideal for this particular use case | 17:57 |
fungi | but for master branch testing, sure i'm all for it as long as we also test development on whatever the target long-term release is to make sure it works by the time we release | 17:57 |
clarkb | as with pypy I expect we'll not have long term interest/maintenance of such forward looking jobs though and so adding them if/when there is interest on $whateverplatform is good enough | 17:58 |
cdent | this conversation reminds me of a somewhat related but mostly orthogonal question: Have we got the capacity to do unit tests in containers if that's what we prefer? | 18:06 |
clarkb | we don't currently have access to managed containers at that scale no. In fact I think mnaser prefers to give us VMs because we can scale that up and down whereas container resources are always on | 18:06 |
clarkb | (we could in theory build scale up and down for containers, but it isn't there currently) | 18:07 |
cdent | s/capacity/capability/ is probably more accurate for what I really meant | 18:07 |
clarkb | cdent: nodepool has kubernetes drivers and we have a test cluster (mostly to make sure the driver itself functions). So yes | 18:07 |
clarkb | fwiw many of our unittest jobs do consume the whole node so resource wise they are quite efficient | 18:08 |
cdent | I suspect that's true, but there are other cases where it's not. For example, in placement a local run of the unit tests takes ... (one mo) | 18:09 |
cdent | 9 seconds, 99% of that is tox and stestr overhead | 18:10 |
cdent | Ran: 146 tests in 0.1959 sec. | 18:10 |
cdent | functional is a bit longer at 29s, with about 50% in overhead | 18:11 |
cdent | but the nodepool jobs report at 4-6 minutes | 18:11 |
fungi | how much of that is installing software? | 18:11 |
clarkb | cdent: sure, but containers don't make cpus or IO faster. | 18:12 |
fungi | unless we pre-baked job-specific container images knowing in advance what dependencies to install, it's going to spend the same amount of time installing software | 18:12 |
cdent | clarkb: that's my point: those tests don't care about cpus or IO. The reason I want to use a container is so that the node creation time is shorter and the resources can be freed up for other things | 18:12 |
clarkb | what containers in theory give you is caching of system deps (some of the overhead in those jobs for sure is installing these deps) and better resource packing at the cost of being always on (at least with current tooling) | 18:12 |
clarkb | cdent: right the node creation time is shorter because we never turn off the hosts | 18:13 |
clarkb | which is a different potentially worse cost according to the clouds | 18:13 |
cdent | fungi: presumably projects could be responsible for providing their own images to some repo? | 18:14 |
fungi | they could do that with vm images too | 18:14 |
fungi | containers don't change that | 18:14 |
cdent | clarkb: yeah, understood. Do we go to zero nodes in use much? | 18:14 |
cdent | fungi: I think it does. Containers can be easier in some instances. | 18:15 |
fungi | basically the only efficiency gains i see from running some jobs in reused containers on nodes we don't recycle is we spend (marginally) less time waiting for nova boot and nova delete calls to complete | 18:15 |
cdent | s/easier/easier to define and manage/ | 18:15 |
clarkb | cdent: not quite zero but we are very cyclical. http://grafana.openstack.org/d/T6vSHcSik/zuul-status?orgId=1&from=now-7d&to=now the test nodes graph there illustrates this | 18:15 |
fungi | our longest-running jobs are also the ones which use the majority of our available quota, and would see proportionally the least benefit from containerizing | 18:16 |
clarkb | weekends are very quiet and we tend to have less demand during apac hours | 18:16 |
fungi | our average job runtime across the board is somewhere around 30 minutes last i checked, while our average nova boot and delete times measure in seconds, so optimizing those away doesn't necessarily balance out the new complexities implied | 18:17 |
clarkb | you can actually see the difference in boot times between ceph backed clouds and non ceph backed clouds in our time to ready graphs | 18:18 |
cdent | I think focusing on the averages suggests that those patterns are somehow something we should strive to maintain. The fact that many of the larger projects have unit tests that take an age is definitely not a good thing. | 18:18 |
clarkb | under a minute for ceph and just under 5 minutes for non ceph. So the cost is definitely there | 18:19 |
fungi | it might mean a significant efficiency gain proportional to a 30-second job, but when most of our nodes are tied up in jobs which run for an hour it's not a significant improvement | 18:19 |
clarkb | right similar to why running python3.7 tests isn't a major impact on our node usage | 18:19 |
fungi | it's really less the unit tests and more the devstack/tempest/grenade/et cetera jobs we spend an inordinate amount of our resources on | 18:20 |
cdent | right, I know, but that's in aggregate. Every time a job finishes sooner, though, another job can start sooner | 18:20 |
cdent | If that's not something we care about, fine, but it seems like it would be nice to achieve. It sounds like containers are not an ideal way to do that at this time, that's fine too. | 18:21 |
clarkb | cdent: I think that would be nice to achieve which is why we now start jobs in a priority queue. The problem with simply running short jobs quicker is we quickly end up using all the nodes for the long running jobs | 18:21 |
clarkb | the priority queue we've got now aims to reduce that pain | 18:22 |
fungi | it's a ~2% efficiency improvement which implies rather a lot of effort to implement and significant additional complexity to maintain. i expect we can spend that same effort in other places to effect much larger performance gains | 18:22 |
cdent | yeah, I'm keen on the priority queue | 18:22 |
cdent | okay, let's go for a different angle | 18:23 |
cdent | what about an up-and-running openstack node that we install incremental software updates onto and run tempest against | 18:23 |
cdent | (instead of from scratch grenade and tempest) | 18:23 |
cdent | so again, a long running node, which I guess is not desirable | 18:24 |
cdent | I assume you both saw the blogpost lifeless had on an idea like that? If not I can find the link. | 18:24 |
clarkb | ya the lifeless suggestion basically. It is doable if you somehow guard against (or detect and reset after) crashes | 18:24 |
*** e0ne has joined #openstack-tc | 18:24 | |
fungi | yet another thing we've tried in the past. running untrusted code as root on long-running machines is basically creating a job for someone (or many someones) to go around replacing them every time they get corrupted | 18:24 |
clarkb | ya I've read it. Nothing really prevents us from doing that except it's a large surface area of problems you have to guard against | 18:24 |
clarkb | the reason we use throwaway nodes is we avoid that problem entirely (and scale down on weekends to be nice to clouds) | 18:25 |
* cdent nods | 18:25 | |
clarkb | and at the same time can give our jobs full root access which is useful when testing systems software | 18:25 |
fungi | we haven't explicitly tried lifeless's suggestion, but we've definitely run proposed code on pets-not-cattle job nodes in years gone by and it's a substantial maintenance burden (which led directly to the creation of nodepool) | 18:26 |
clarkb | a concrete place this would get annoying is we have a subset of projects that insist on running with nested virt. This often crashes the test nodes | 18:26 |
clarkb | (if you can detect for that and replace nodes then maybe its fine, but how do you detect for that when the next job runs and fails in $newesoteric manner) | 18:27 |
clarkb | one naive approach is to rerun every job that fails with a recycled node | 18:27 |
cdent | I guess in my head what I was thinking was that if a test fails or a node crashes (hopefully that is detectable somehow) it gets scratched and the | 18:27 |
cdent | yeah, that | 18:27 |
clarkb | (and use that to turn over instances) | 18:27 |
clarkb | but now you are running 2x the resources for every test failure | 18:27 |
clarkb | that may or may not be a cheaper resource cost (someone would need to maths it) | 18:27 |
cdent | oh, that's not quite what I meant | 18:27 |
cdent | if a test fails or a node fails, that job fails, the next job in the queue (whichever it was) starts on a node (building what it needs) | 18:28 |
fungi | also you run the risk that one build corrupts the node in such a way that the next build which should have failed succeeds instead | 18:28 |
cdent | you wouldn't do it for unit or functional tests, only integration, and if, in integration tests, we can't cope with that kind of weird state, we have a bug that needs to be fixed | 18:29 |
fungi | so you can't necessarily identify corrupt nodes based on job failures alone | 18:29 |
cdent | however, I recognize that we don't have a good record with fixing gate unreliability | 18:29 |
clarkb | cdent: yes, but pre merge testing accepts there will be many bugs | 18:29 |
fungi | and back to the suggestion of running on pre-baked job-specific images, we did that for a while too and the maintenance burden there pushed us into standardizing on image-per-distro(+release+architecture) | 18:29 |
clarkb | and then ya you add on top that me and mriedem and slaweq end up fixing all the gate bugs... it is likely to not be sustainable | 18:29 |
fungi | image management is actually one of the more fragile parts of running a ci system | 18:30 |
cdent | clarkb: yeah, I wish we didn't run tempest, grenade, dsvm style jobs until after unit, functional, pep, docs had all passed | 18:30 |
clarkb | cdent: we've tried that too :) | 18:30 |
cdent | heh | 18:30 |
fungi | something else we've tried! ;) | 18:30 |
* cdent shakes fist at history | 18:30 | |
clarkb | (and zuul totally supports the use case, we just found it ends up in more round trips overall) | 18:30 |
clarkb | basically what happens is you end up with extra cycles to find bugs | 18:30 |
clarkb | so rather than getting all results on ps1 then being able to address them, it's ps1 fails, then ps2 fails, then ps3 fails, and ps4 now passes | 18:31 |
clarkb | but zuul can be configured to run that way if we really want to try it again | 18:31 |
fungi | yeah, 1: it extends time-to-report by the sum of your longest job in each phase, and 2: i often find that a style checker and a functional job point out different bugs in my changes which i can fix before resubmitting | 18:31 |
cdent | Is it a thing zuul can do per project, or is it more sort of tenant-wide? | 18:32 |
clarkb | cdent: it's per project, in the job list | 18:32 |
fungi | whereas if we aborted or skipped the functional job, i'll be submitting two revisions and also running more jobs as a result | 18:32 |
clarkb | cdent: basically where you list out your check/gate jobs you can specify that each job depends on one or more other jobs completing successfully first | 18:32 |
fungi | i want to say one of the projects started doing that with their changes recently. was it tripleo? or someone else? | 18:33 |
fungi | curious to see if they stuck with it or found the drawbacks to be suboptimal | 18:33 |
cdent | ooooh. this is quite interesting. I'd be interested to try it in placement. most of us in the group were disappointed when we had to turn on tempest and grenade. If we could push it into a second stage that might be interesting. On the other hand, everyone should be running the functional and unit before they send stuff up for review... | 18:34 |
* cdent shrugs | 18:34 | |
cdent | TIL | 18:34 |
clarkb | https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies | 18:34 |
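For reference, the mechanism clarkb describes (zuul's job.dependencies attribute, documented at the link above) looks roughly like this in a project's pipeline configuration; the job names are placeholders for illustration:

```yaml
# Rough sketch of job.dependencies: the integration job only starts once the
# cheaper jobs have succeeded. Job names are placeholders.
- project:
    check:
      jobs:
        - openstack-tox-pep8
        - openstack-tox-py36
        - tempest-full:
            dependencies:
              - openstack-tox-pep8
              - openstack-tox-py36
```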
cdent | thanks | 18:34 |
cdent | I really wish I had more time. zuul is endlessly fascinating to me, and I've barely scratched the surface | 18:35 |
cdent | It's a very impressive bit of work. | 18:36 |
clarkb | fungi: I am not aware of anyone currently doing it but it wouldn't surprise me | 18:36 |
fungi | as someone who is a zuul maintainer, i too wish i had more time to learn about it | 18:37 |
clarkb | on the infra side it is more there for building buildset resources like with the docker image testing work corvus is doing | 18:37 |
clarkb | rather than as a canary | 18:37 |
* cdent must go feed the neighbor's cat | 18:38 | |
cdent | Thanks to you both for the info and stimulating chat. | 18:38 |
fungi | any time! | 18:39 |
*** dtantsur is now known as dtantsur|afk | 18:47 | |
* smcginnis had wondered about container usage in CI too | 19:14 | |
smcginnis | This has been interesting. | 19:14 |
*** jamesmcarthur has quit IRC | 19:19 | |
*** e0ne has quit IRC | 19:30 | |
*** jamesmcarthur has joined #openstack-tc | 19:33 | |
dhellmann | smcginnis : your idea about well-named job templates makes a lot of sense. I agree that would make rolling out job changes simpler | 19:33 |
dhellmann | or at least clearer | 19:34 |
smcginnis | At least one more step towards making things more explicit. | 19:34 |
dhellmann | yeah | 19:34 |
*** e0ne has joined #openstack-tc | 19:37 | |
openstackgerrit | Gorka Eguileor proposed openstack/governance master: Add cinderlib https://review.openstack.org/637614 | 19:40 |
*** zaneb has quit IRC | 19:50 | |
mnaser | interesting talk about containers | 19:59 |
mnaser | i think they do bring an interesting value to the table in terms of being able to run more jobs, but we can do that by leveraging different flavors on clouds | 20:00 |
mnaser | i.e.: i think it would be good if we run all doc/linting jobs in 1c/1gb instances | 20:00 |
mnaser | instead of 1 doc/lint job, we now run 8 at the same time (possibly) | 20:00 |
clarkb | mnaser: one complication for ^ is most clouds set a max instances quota so that doesn't help us much (your cloud is an exception to that) | 20:01 |
evrardjp_ | lots of scrollback, and interesting | 20:01 |
mnaser | clarkb: yeah, i think it would be good to approach the clouds and ask to drop the max instance limit and move to a ram based approach | 20:01 |
mnaser | in aggregate you're still using just as many resources really | 20:01 |
clarkb | ya it is a possibility | 20:01 |
evrardjp_ | yeah aggregating docs/linting jobs that are pretty much self contained in env seems a good way to reduce turnover (at first sight....now experience has proven me wrong there :p) | 20:02 |
evrardjp_ | increase turnover? anyway | 20:02 |
mnaser | it doesn't get us jobs that start quicker and get rid of the overheads.. but it does give us more efficient use of resources | 20:02 |
clarkb | mnaser: right can run more jobs in parallel | 20:02 |
clarkb | (assuming quotas are updated) | 20:02 |
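A rough sketch of how the small-flavor idea might be expressed in a nodepool provider config; the cloud, flavor, and label names are all invented for illustration:

```yaml
# Hypothetical nodepool snippet: a small 1 vCPU / 1GB label for doc and lint
# jobs alongside the standard label. All names here are made up.
labels:
  - name: ubuntu-bionic
  - name: ubuntu-bionic-small

providers:
  - name: example-cloud
    cloud: example
    diskimages:
      - name: ubuntu-bionic
    pools:
      - name: main
        max-servers: 50
        labels:
          - name: ubuntu-bionic
            flavor-name: general-8g
            diskimage: ubuntu-bionic
          - name: ubuntu-bionic-small
            flavor-name: small-1c-1g
            diskimage: ubuntu-bionic
```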
mnaser | also, something i've thought about is with the PTI, we can have a set of dependencies that we almost always deploy, so for example -- doc jobs have a prebaked image, lint jobs as well | 20:03 |
clarkb | mnaser: we just got away from that :/ | 20:03 |
evrardjp_ | clarkb: because of the pain of image management? | 20:03 |
*** mriedem has quit IRC | 20:03 | |
*** mriedem has joined #openstack-tc | 20:04 | |
clarkb | evrardjp_: that's a big part of it. You end up with 20GB images * number of image variants and on top of that the infra team becomes this bottleneck for fixing jobs or adding features to jobs | 20:04 |
evrardjp_ | (just wondering the reasons, and _if containers are thrown into the mix_ if that has changed) | 20:04 |
clarkb | if you want to use prebaked container images vs bindep I don't really see that being a huge speedup because we already locally cache the data in both cases | 20:05 |
clarkb | where you might see a difference is if those packages are expensive to install for some reason | 20:05 |
clarkb | and maybe yum being slow makes that the case | 20:05 |
mnaser | clarkb: also you're introducing io into the equation in this case i guess | 20:05 |
mnaser | doing a 1g image pull (maybe only once if it's cached) that does big sequential writes | 20:06 |
mnaser | vs doing a ton of tiny io as apt-get install or yum does its thing | 20:06 |
clarkb | ya but it is all already locally cached in the cloud region and bottleneck should be the wire not the disk | 20:06 |
evrardjp_ | my question is more can we run multiple of those jobs on a containerized infra? Instead of getting a minikubecluster per job, we could have a shared minikube that's spawned every x time, which allows tests in containers in jobs if we receive a kubeconfig file for example | 20:07 |
clarkb | calculating deps and verifying package integrity is likely to be the bigger delta in cost | 20:07 |
clarkb | evrardjp_: see the earlier discussion. The problem with that is that becomes a more static setup which costs the clouds more | 20:07 |
evrardjp_ | ok | 20:07 |
evrardjp_ | yeah true I remember what you said above | 20:08 |
clarkb | in theory we could manage those resources on demand too, but don't have the drivers for nodepool to do that today | 20:08 |
evrardjp_ | so basically we've got the best already. :D | 20:08 |
clarkb | I think what we have is a good balance between what the clouds want, what the infra team can reliably manage, and throughput for jobs | 20:08 |
clarkb | there are many competing demands and what we have ended up with works fairly well for those demands | 20:09 |
clarkb | but not perfectly in any specific case | 20:09 |
evrardjp_ | I am happy with it myself. | 20:09 |
evrardjp_ | I don't think we say enough thank you to infra :p | 20:10 |
clarkb | one thing we can experiment with is jobs consuming docker images to bootstrap system dep installations | 20:11 |
clarkb | it is possible that that will be quicker due to the package manager overhead of installing individual packages | 20:11 |
clarkb | (at the same time docker images tend to be a fair bit larger than installing a handful of packages so it might go the other direction) | 20:11 |
*** whoami-rajat has quit IRC | 20:23 | |
evrardjp_ | clarkb: what do you mean by "bootstrap system dep installation" ? I am confused | 20:28 |
clarkb | evrardjp_: at the beginning of most of our jobs we run a bindep role that installs all the system dependencies for that test | 20:29 |
clarkb | evrardjp_: you could instead pull and run a docker container | 20:29 |
clarkb | whether one is faster than another depends on a lot of factors though | 20:30 |
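As a point of reference, the bindep step clarkb mentions is an Ansible role applied in a job's pre-run phase; a minimal sketch of such a playbook follows (role name as in zuul-jobs, the profile value is an assumption for illustration):

```yaml
# Minimal sketch of a pre-run playbook applying the zuul-jobs "bindep" role,
# which installs the system packages a repo declares in its bindep.txt
# before the tests run. The profile value here is an assumption.
- hosts: all
  roles:
    - role: bindep
      bindep_profile: test
```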
evrardjp_ | I guess it depends on the jobs too -- because some can have lightweight image that's fast to pull and fast to install a few packages -- but then PTI comes in the way I guess | 20:31 |
fungi | clarkb: have we tried using eatmydata to cut down on i/o cost from package installs? | 20:36 |
fungi | it seems that clouds tend to be *much* more i/o-constrained than network-bandwidth-constrained | 20:37 |
clarkb | fungi: not recently. I remember fiddling with it once long ago and deciding that $clouds were probably not doing safe writes in the first place. This may have changed and is worth reinvestigating possibly | 20:37 |
clarkb | fungi: we have seen a move towards tmpfs for zookeeper and etcd though in part due to the io slowness on some clouds | 20:38 |
clarkb | so probably would help at least in some cases | 20:38 |
fungi | it does seem like most of the system package installer delay is i/o-related, but i have no proper experimental data to back that up | 20:39 |
*** zaneb has joined #openstack-tc | 21:12 | |
*** zaneb has quit IRC | 21:37 | |
*** cdent has quit IRC | 21:42 | |
*** e0ne has quit IRC | 22:03 | |
*** e0ne has joined #openstack-tc | 22:04 | |
*** mriedem has quit IRC | 23:03 | |
*** dklyle has quit IRC | 23:03 | |
*** e0ne has quit IRC | 23:12 | |
*** e0ne has joined #openstack-tc | 23:23 | |
*** e0ne has quit IRC | 23:30 | |
*** spsurya has quit IRC | 23:58 |