*** jamesmcarthur has joined #openstack-tc | 00:20 | |
*** zhipeng has quit IRC | 00:27 | |
*** zhipeng has joined #openstack-tc | 00:37 | |
*** ricolin has joined #openstack-tc | 01:08 | |
*** jamesmcarthur has quit IRC | 01:11 | |
*** whoami-rajat has joined #openstack-tc | 01:25 | |
*** lbragstad has quit IRC | 01:56 | |
*** lbragstad has joined #openstack-tc | 02:09 | |
*** jamesmcarthur has joined #openstack-tc | 02:20 | |
*** jamesmcarthur has quit IRC | 02:25 | |
*** dangtrinhnt has quit IRC | 02:25 | |
*** dangtrinhnt has joined #openstack-tc | 02:30 | |
mnaser | asettle, jroll: just wanted to follow up if you had a chance to push up a review to slot in a place for the project review (wrt vision) in governance and update community afterwards over ML | 02:35 |
openstackgerrit | Merged openstack/governance master: Explicitly declare Train supported runtimes. https://review.openstack.org/642272 | 02:38 |
*** jamesmcarthur has joined #openstack-tc | 02:38 | |
*** jamesmcarthur has quit IRC | 02:43 | |
*** ricolin has quit IRC | 02:44 | |
*** jamesmcarthur has joined #openstack-tc | 03:14 | |
*** jamesmcarthur has quit IRC | 03:31 | |
*** jamesmcarthur has joined #openstack-tc | 03:31 | |
*** diablo_rojo has quit IRC | 04:15 | |
*** jamesmcarthur has quit IRC | 04:40 | |
*** dklyle has quit IRC | 04:44 | |
*** dklyle has joined #openstack-tc | 04:45 | |
prometheanfire | just saw the thing about golang, wondering if coinstallability was considered | 04:54 |
*** ricolin has joined #openstack-tc | 05:23 | |
*** lbragstad has quit IRC | 06:38 | |
*** ricolin has quit IRC | 06:43 | |
*** jaosorior has quit IRC | 06:55 | |
*** jaosorior has joined #openstack-tc | 06:57 | |
*** Luzi has joined #openstack-tc | 06:59 | |
*** tosky has joined #openstack-tc | 07:25 | |
*** dtantsur|afk is now known as dtantsur | 08:18 | |
*** e0ne has joined #openstack-tc | 08:25 | |
asettle | mnaser, sorry - just got back online. I have not done the thing, I'll check in with jroll | 08:58 |
*** ricolin has joined #openstack-tc | 08:58 | |
*** Luzi has quit IRC | 08:59 | |
*** jpich has joined #openstack-tc | 09:00 | |
ttx | ohai | 09:01 |
ttx | tc-members: quick reminder to add your name to https://framadate.org/dBXkYU4MUyROFSYB if you're interested in joining a TC dinner on Saturday evening of the summit | 09:02 |
asettle | Morning o/ | 09:02 |
ricolin | hola! | 09:02 |
asettle | ttx, the saturday before? | 09:02 |
ttx | asettle: yes | 09:02 |
asettle | Ah! | 09:02 |
ttx | evrardjp: also need to update your answer to say whether 8:30pm would work for you | 09:03 |
evrardjp | will do ttx | 09:03 |
* ttx go grab coffee | 09:04 | |
*** dklyle has quit IRC | 09:08 | |
*** dklyle has joined #openstack-tc | 09:09 | |
ttx | I was reading https://twitter.com/mikal/status/1107787256012562433 and wondering if someone had more context ? I wish that discussion would happen on the ML instead of Twitter, but if we want to be friendly to part-time contributors I guess we can't force all discussions to happen on ML. | 09:11 |
ricolin | ttx to have a way to be friendly in social media maybe?:) | 09:20 |
evrardjp | ttx: If mikal wanders off due to comments from the nova community, that's quite the opposite of the anti-nitpicking culture we have tried to build | 09:55 |
evrardjp | then I think we can say anti-nitpicking has failed | 09:56 |
evrardjp | anyone has an idea on how to improve there? | 09:56 |
*** e0ne has quit IRC | 09:56 | |
cmurphy | I don't think it's fair to say anti-nitpicking has failed because one person complains | 09:59 |
*** jpich has quit IRC | 09:59 | |
ttx | yeah, I really need more context to form an opinion | 09:59 |
cmurphy | I also don't think mikal's tweet is about nitpicking, it sounds like a legitimate disagreement about the value of a feature or design | 09:59 |
evrardjp | cmurphy: that's not what I meant | 09:59 |
frickler | this seems to be the review being talked about https://review.openstack.org/640688 | 09:59 |
*** jpich has joined #openstack-tc | 10:00 | |
evrardjp | but I am glad my controversial message got ppl to talk about it :p | 10:00 |
evrardjp | frickler: thanks for the context :) | 10:02 |
evrardjp | cmurphy: it seems so | 10:02 |
ttx | Looks like mikal was not ready/willing to discuss implementation... I don't see anyone saying that would be a bad idea here. | 10:05 |
evrardjp | yeah | 10:06 |
evrardjp | that's totally different :) | 10:06 |
ttx | I suspect there is more context though... since in another tweet he also said people redirected him to os-profiler | 10:07 |
evrardjp | My point was -- this is a contributor, whoever this might be, known to us or not. We shouldn't shut him down, and he should listen to positive improvements. That's all I meant. | 10:07 |
evrardjp | Probably terribly formulated though | 10:07 |
ttx | The os-profiler mention is what got me interested in that discussion -- as I don't think having os-profiler is a reason not to do Prometheus support | 10:08 |
ttx | But no mention of os-profiler on that review so I suspect some of that discussion happened elsewhere | 10:08 |
*** cdent has joined #openstack-tc | 10:27 | |
*** e0ne has joined #openstack-tc | 10:50 | |
mugsie | ttx: I think it was IRC - but I can't find it anywhere ... | 11:01 |
mugsie | and in this case it is really unfortunate, I know that these style of stats are very useful (and completely different to os-profiler) | 11:01 |
ttx | mugsie: exactly | 11:10 |
jroll | mugsie: +100 | 11:21 |
jroll | we (and I assume other companies) have just built little binaries which run on the hypervisor and push similar data into $metrics_system_of_choice | 11:23 |
jroll | would be nice for it to be built in and standard (and I'd much rather just choose a tech rather than support a million systems) | 11:24 |
jroll | asettle: mnaser: vision things is on my list to catch up on today - I'll dig around governance and see if I can find a place it belongs | 11:24 |
dtroyer | (disclaimer: I've only read the review comments noted above but it raises a point in my head) Many of our projects push back against anything that isn't fully-developed or designed. I am fighting this in multiple places these days... where does the community expect things to get worked out that need exposure to real deployments? | 11:24 |
asettle | jroll, thank you :) I am *swamped* with non-stack work today | 11:24 |
asettle | dtroyer, good question | 11:25 |
jroll | asettle: no worries | 11:25 |
cdent | dtroyer: good question. Our development and release processes aren't friendly to that kind of thing, which is a real bummer | 11:26 |
jroll | I feel like we assume every deployment has patches on top of openstack. and we also assume everyone has a corporate sponsor. so asking people to prove things out downstream is typical | 11:29 |
jroll | (which isn't always good) | 11:30 |
jroll | I have personally found the community is quite receptive to helping design not-quite-baked features where there's a clear need. proving that the feature is valuable is harder without data. (in this case, probably showing off a dashboard built from these metrics) | 11:30 |
cdent | I bet openstack-the-project would be drastically improved if we hosted our own public cloud (with real customers) | 11:32 |
cdent | not that I expect that to happen | 11:32 |
cdent | but it would be interesting | 11:32 |
mugsie | cdent: at one point I considered running a large Designate install on top of master, but failed to get the resources needed :/ | 11:32 |
jroll | a million times agree, cdent | 11:32 |
cdent | dtroyer: I got pinged yesterday about whether or not I was intending to go any further with my etcd-compute stuff and thus far I haven't been motivated because there's little conversation about it | 11:33 |
mugsie | I think it really helps | 11:33 |
cdent | that lack of solid feedback loops is a big problem (to me) in openstack | 11:33 |
jroll | when infra-cloud was coming up, the infra team contributed a ton of feedback/code | 11:33 |
evrardjp | jroll: I like what you said "would be nice for it to be built in and standard" -- this is what I believe the twitter rant is about. | 11:54 |
evrardjp | ofc I might be wrong | 11:54 |
* cdent missed a twitter rant? | 11:55 | |
cdent | link? | 11:55 |
evrardjp | we have one feedback here, let's just try to convert the feedback with a built in feature | 11:55 |
evrardjp | https://twitter.com/mikal/status/1107787256012562433 | 11:55 |
evrardjp | if we can call that a rant | 11:55 |
evrardjp | you know my english is not great :p | 11:56 |
evrardjp | anyway | 11:56 |
cdent | I find it utterly hilarious that mikal complains about nova's behavior when, from my standpoint, he is one of the major factors in why droves of people walked away from nova's toxic environment. | 11:57 |
cdent | That said, he's right | 11:57 |
evrardjp | I don't judge/look at the messenger -- I just look at what could be interesting to learn from this | 11:58 |
evrardjp | I totally would like this prometheus thingies :) | 11:58 |
cdent | yes, me too | 11:58 |
evrardjp | more built in, less downstream please :D | 11:58 |
evrardjp | the metrics case is interesting, as I feel it's very close to healthchecking middleware | 11:59 |
evrardjp | where do we stop, how do we implement things together? | 12:00 |
cdent | I think it is _much_ different from healthchecking middleware | 12:00 |
evrardjp | oh? | 12:02 |
cdent | I guess it depends on how it goes, but I think metrics and health are important to keep distinct | 12:03 |
cdent | in part because of the granularity/frequency of the info | 12:03 |
mugsie | I think they are in the same vein - using "cloud native" app features to help deployers / ops use standard toolchains and common app suites | 12:17 |
mugsie | e.g. would love to replace os-profiler with jaeger / opentracing, so if someone does a k8s volume create, we could trace the entire call down to cinder and the backend to see what takes time or errors out | 12:18 |
evrardjp | jaeger/open tracing is indeed a compelling use case from what I can see here | 12:23 |
mugsie | it is | 12:29 |
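The jaeger/opentracing idea mugsie describes above can be illustrated with a small sketch: nested, timed spans that share a trace id are what let a tracer reconstruct a cross-service call such as the k8s-to-cinder volume create. This is a pure-Python stand-in, not the jaeger client API; the operation names and the in-memory `SPANS` list are hypothetical.

```python
import time
import uuid
from contextlib import contextmanager

# Collected spans; a real tracer (e.g. jaeger) would ship these to a
# collector service instead of keeping them in a process-local list.
SPANS = []

@contextmanager
def span(operation, trace_id=None, parent_id=None):
    """Record one timed span. Propagating trace_id/parent_id across
    service boundaries is what lets a tool rebuild the call graph."""
    record = {
        "trace_id": trace_id or uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex,
        "parent_id": parent_id,
        "operation": operation,
        "start": time.monotonic(),
    }
    try:
        yield record
    finally:
        record["duration"] = time.monotonic() - record["start"]
        SPANS.append(record)

# A volume-create request traced across two hypothetical services;
# the inner span finishes (and is recorded) before the outer one.
with span("api.volume_create") as parent:
    with span("cinder.create", trace_id=parent["trace_id"],
              parent_id=parent["span_id"]):
        pass  # backend work would happen here
```

With real instrumentation the ids would travel in request headers between services, which is the part os-profiler and opentracing both have to solve.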
fungi | the prometheus/statsd functionality is compelling if ceilometer maintenance erodes further | 12:36 |
fungi | so as to have a more integrated alternative | 12:36 |
*** ijolliffe has joined #openstack-tc | 12:37 | |
mugsie | fungi: I think this is orthogonal to ceilometer - even if that was a really active project I would advocate for prometheus | 12:39 |
mugsie | it is a standard base level service for a lot of IaaS/PaaS/DCaaS software these days, and we should support the standard way people collect stats from a lot of other software | 12:39 |
fungi | well, yes, because 1. used by more tooling outside openstack, and 2. implementation would be actually integrated in services rather than some external codebase which tries to import code from services directly | 12:40 |
* fungi doesn't actually know whether ceilometer still imports nova and depends on details of private objects, but that was at least the case for a very long time | 12:41 | |
fungi | oh, and by monkey-patching the services themselves too | 12:42 |
cdent | As I recall, the reason for that all that stuff was because ceilo was unable to get cooperation from the projects-being-measured | 12:44 |
cdent | so ceilo has been surfacing fundamental issues in the openstack process for _years_ | 12:44 |
fungi | right, i'm not saying they made that choice lightly | 12:44 |
fungi | just that it probably would have been far easier to maintain if the services were up for integrating stat emitting routines/hooks in their codebases | 12:45 |
* cdent nods | 12:45 | |
fungi | having to work around the unwillingness to carry what effectively has to be integrated routines in the service implementations necessarily implies fragile design patterns and a need to constantly chase any changes made to related parts of the services | 12:48 |
*** mriedem has joined #openstack-tc | 12:54 | |
mugsie | Yeah - things like changing the format of events don't help either - it forces projects to import nova.objects to read events from the notifications queue | 13:03 |
* dhellmann senses the makings of a community goal for the U cycle | 13:06 | |
fungi | openstack unicorn: making the unpossible | 13:18 |
dhellmann | I would wear that t-shirt | 13:25 |
fungi | so... wrt the ad hoc goals meeting, whatever came of the idea that we could use the 14:00 utc thursday slot similar to our ~monthly meetings? | 13:26 |
fungi | the poll ended up using thursday office hour instead | 13:27 |
fungi | which even though it seems to be the most popular, i'll probably have to bow out of as i already have two other meetings going on at the same time | 13:28 |
*** lbragstad has joined #openstack-tc | 13:34 | |
*** marst has joined #openstack-tc | 13:44 | |
mnaser | *honestly* i'm all for us adopting other projects into our ecosystem that have been more successful and thriving with contributions than our own little pet projects | 13:46 |
mnaser | we talk about being better open source citizens and part of that is integrating across * | 13:46 |
mnaser | having said that, i'm pretty sure osprofiler != prometheus/exporters | 13:47 |
mnaser | that's more on the opentracing side of things | 13:47 |
mnaser | i do have some disagreements on what mikal believes prometheus should be exposing but i didn't want to get into an in-depth discussion about implementation details | 13:48 |
mnaser | ceph has done a great job exposing things like this: http://docs.ceph.com/docs/mimic/mgr/prometheus/ | 13:49 |
ttx | that's something that would benefit from openstack-level design. you would likely want those Prometheus metrics exposed across more than just Nova anyway. | 13:54 |
fungi | yeah, but you really need code *in nova* to generate the events | 13:54 |
*** Luzi has joined #openstack-tc | 13:54 | |
ttx | oh for sure. But most of the objections (in that review at least) were more around how to expose the metrics, rather than which metrics or how to generate them | 13:55 |
fungi | (and ideally not just flood the already overtaxed and broken external message queue) | 13:55 |
*** whoami-rajat has quit IRC | 13:55 | |
fungi | but yes, i'm not going to just assume this is a case of the nova reviewers rejecting the idea of stats emitting altogether, and merely taking exception to mikal's proposed implementation thereof | 13:56 |
fungi | er, rather than taking exception to | 13:56 |
evrardjp | as dhellmann said -- nice goal IMO | 13:57 |
TheJulia | So speaking of metrics code, in some downstream discussions at least, we've been discussing exposing metrics to prometheus for hardware, so we actually have someone that is going to look at hooking into our existing metrics data stream that we had to send to ceilometer, and be able to redirect it to prometheus. As for measurement of internals, ironic could also change its internal metric measurement decorator to support | 14:05 |
TheJulia | additional options. | 14:05 |
*** needscoffee is now known as kmalloc | 14:09 | |
*** jpich has quit IRC | 14:09 | |
*** jpich has joined #openstack-tc | 14:13 | |
zaneb | I'd just like to take this opportunity to say that if the compute node had a ReST API then it would be very easy to expose metrics for monitoring tools | 14:19 |
zaneb | and then run away very quickly | 14:19 |
zaneb | before people start throwing things at me | 14:19 |
jroll | too late, we now have automated catapults which are triggered on that phrase | 14:20 |
jroll | I agree with that, though :) | 14:21 |
dhellmann | maybe adding a rest api for metric collection is the first step to unifying all of the team-specific agents that run on the compute host | 14:22 |
zaneb | is this the point in the discussion where we need to ping Nova people and invite them to throw stuff at us, to avoid them throwing stuff at us later when they find out we discussed it without pinging them? | 14:24 |
TheJulia | dhellmann: are you speaking in only the nova context, or a larger context? | 14:25 |
* zaneb can never keep track of the rules | 14:25 | |
mriedem | did someone say nova? i'll just drop this in here https://docs.openstack.org/nova/latest/contributor/policies.html#metrics-gathering | 14:26 |
cdent | zaneb: I think you've been to enough nova ptg and forum sessions to know that throwing stuff is allowed at any point in the time space continium if you express the correct key phrases | 14:26 |
zaneb | cdent: I've been to maybe like 3? | 14:27 |
cdent | that should be enough | 14:27 |
mriedem | zaneb: how is a rest api on the compute node itself going to make metric unicorns happen? | 14:27 |
mriedem | there are already compute REST APIs for hypervisor stats | 14:27 |
mriedem | https://developer.openstack.org/api-ref/compute/?expanded=show-hypervisor-statistics-detail#show-hypervisor-statistics | 14:28 |
mriedem | would all of the people here that want to add metrics gathering and management to nova please sign this 4 year contract to test and maintain that stuff as well please | 14:28 |
mriedem | thanks | 14:28 |
zaneb | mriedem: context https://review.openstack.org/640688 | 14:29 |
mriedem | we also have old crap like this in the rest api https://developer.openstack.org/api-ref/compute/?expanded=#server-usage-audit-log-os-instance-usage-audit-log | 14:32 |
mriedem | which was added for ceilometer long ago | 14:32 |
mriedem | and some bandwidth notification stuff for xen which is just rotten at this point | 14:32 |
mriedem | https://github.com/openstack/nova/blob/d3254af0fe2b15caff3990c965194133625b681d/nova/compute/manager.py#L7513 | 14:33 |
mriedem | oh hi and this stuff https://github.com/openstack/nova/tree/d3254af0fe2b15caff3990c965194133625b681d/nova/compute/monitors | 14:33 |
mriedem | which is not tested anywhere | 14:33 |
dhellmann | TheJulia : the larger context | 14:33 |
*** jamesmcarthur has joined #openstack-tc | 14:39 | |
*** cdent has quit IRC | 14:59 | |
mnaser | yeah nova does indeed have a lot of bit rotted metrics stuff that barely works | 15:00 |
mnaser | i.e. instance usage audit log pretty much can kill your database server on a reasonably sized cloud | 15:01 |
*** diablo_rojo has joined #openstack-tc | 15:02 | |
mriedem | i'd rather wipe the slate before re-inventing the wheel here, but since it's exposed in the REST API that's kind of hard to deprecate | 15:06 |
TheJulia | dhellmann: I think one size fits all is a mistake. Each project has slightly different architectures and implementation details that do not map into the concept of compute nodes running services | 15:06 |
*** cdent has joined #openstack-tc | 15:10 | |
*** Luzi has quit IRC | 15:12 | |
gmann | mriedem: well, if we are sure it's not used much, then we can think of deprecating it via a version bump. cleaning those up helps a lot with maintenance. nova already has lots of APIs | 15:14 |
mriedem | deprecating the api with a microversion is just symbolic, the code remains essentially forever for backward compat | 15:15 |
gmann | at least that indicates the deprecation and freezes the code for that API. | 15:17 |
bauzas | MHO on the Prometheus rant is that AFAIK Prometheus maintainers have a very strong opinion about push vs. pull metrics | 15:17 |
bauzas | and they don't like push | 15:17 |
bauzas | on the other side, we do have many ways for Prometheus to pull our metrics thru our API | 15:18 |
bauzas | so, I'm very disappointed by the rant since it's not even something I think Prometheus maintainers would love | 15:18 |
mriedem | always helpful for the conversation https://twitter.com/mikal/status/1107787256012562433 | 15:20 |
bauzas | gosh dear Twitter | 15:20 |
mnaser | prometheus style metrics => a rest endpoint that exposes key/values pretty much | 15:20 |
mnaser | thats the whole 'exporter' idea | 15:20 |
bauzas | I know simon pasquier well | 15:20 |
mnaser | prometheus on its own scrapes those exporters and then does $things with them | 15:20 |
bauzas | he was an openstack dev before he moved to prometheus | 15:21 |
bauzas | I can ask him to chime in | 15:21 |
bauzas | but I'm pretty sure they don't like push models | 15:21 |
bauzas | oh, fun, actually, he replied | 15:21 |
bauzas | "As stated in the documentation [1][2], the Pushgateway is targeted for very specific use cases (mostly services that don't live long enough to be scraped by Prometheus). I'd definitely recommend exposing the metrics via HTTP." | 15:21 |
mnaser | prometheus is pull based (exposing a rest api), statsd is push based (push to socket on event X) | 15:22 |
mnaser | so they are inherently different architectures | 15:22 |
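The architectural split mnaser describes can be made concrete with a hedged sketch of both wire formats (metric names here are made up, not Nova's): the Prometheus side renders the plain-text exposition format a scraped `/metrics` endpoint would serve, while the statsd side builds the UDP datagram a push client would emit per event.

```python
def render_prometheus_text(metrics):
    """Pull model: render metrics in Prometheus' text exposition
    format; Prometheus scrapes this from an HTTP /metrics endpoint
    on its own schedule."""
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append("# TYPE %s gauge" % name)
        lines.append("%s %s" % (name, value))
    return "\n".join(lines) + "\n"

def statsd_datagram(name, value, kind="c"):
    """Push model: one statsd UDP datagram, emitted at the moment an
    event happens ('c' counter, 'g' gauge, 'ms' timer)."""
    return ("%s:%s|%s" % (name, value, kind)).encode()
```

The pull side carries current state and is cheap to re-scrape; the push side carries individual events, which is why the two don't substitute for each other.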
mnaser | now if you ask me, a good exercise would be for us to gather around a bunch of operators, ask how you're doing metrics, and try to find the common denominator | 15:22 |
bauzas | oh yeah | 15:22 |
*** whoami-rajat has joined #openstack-tc | 15:23 | |
bauzas | yup, but my take is that we have notifications for event-based metrics | 15:23 |
bauzas | for other metrics we have our API | 15:23 |
bauzas | so, what are we missing ? | 15:23 |
mnaser | and we use oslo.notifications for event based metrics which means anyone can easily write a driver if they want to use statsd | 15:23 |
mnaser | i guess it narrows down to nova itself providing _native_ prometheus-consumable information, but that might start to creep scope | 15:24 |
bauzas | yup, it's not that I'm against stats push | 15:24 |
bauzas | it's more that I don't think it should be nova's responsibility to push those metrics | 15:24 |
mnaser | like i cant just _wire_ prometheus into nova and pull info out, there's integration work to be done. should that live inside nova or not is the question .. i think there is strong reasons for both | 15:24 |
mnaser | i found this the other day | 15:25 |
mnaser | https://github.com/zhangjianweibj/prometheus-libvirt-exporter | 15:25 |
*** cdent has left #openstack-tc | 15:25 | |
mnaser | *SUPER* neat. | 15:25 |
bauzas | again, if we go with prometheus, it has to be a pull model | 15:25 |
TheJulia | mnaser: we've been talking about creating a plugin that intercepts our event stream, and kind of sorts our data out by baremetal node, and then expect to have something read the data and transform it out into something exporter friendly since we have a curse of varying vendor data | 15:25 |
gmann | yeah, i agree on consuming it via push driver and consume nova notification/API info and tell "this is we need and missing and how we can get" | 15:25 |
bauzas | I have to go get my kids, but I'll reply on the change itself | 15:26 |
mnaser | i think we need to also realize _what_ we need to instrument too | 15:26 |
bauzas | I'm way off Twitter for discussing design points | 15:26 |
mnaser | this has a huge impact, if you want to instrument instance boot times vs # of instances on a hypervisor, those are very much different things | 15:26 |
mriedem | heh https://github.com/zhangjianweibj/prometheus-libvirt-exporter looks like a mishmash of existing stuff that is exposed in the compute API | 15:26 |
mriedem | for instance: https://developer.openstack.org/api-ref/compute/#servers-diagnostics-servers-diagnostics | 15:26 |
gmann | bauzas: :) true. | 15:27 |
mriedem | and ceilometer agent already pulls information directly from libvirt apis | 15:27 |
mnaser | mriedem: yeah, but this exposes it in 'prometheus'-friendly format | 15:27 |
TheJulia | mnaser: ++, although we shouldn't front load too much because of the variability and if we decide to instrument too much then it will be perceived as too hard and too much work. | 15:27 |
mriedem | many years ago we had talked about ways to send that information out of nova via notification so ceilometer wouldn't need to ask libvirt for it | 15:27 |
mriedem | mnaser: well anything can translate what nova exposes yeah? | 15:27 |
mnaser | if i was to write something that hit the nova api directly, we'd get list of all vms and then call diagnostics on every single server, way slower overall i guess | 15:27 |
mriedem | right, i'd have to dig (far) but i remember the ceilometer thing from years ago was a way to avoid that | 15:28 |
mriedem | but still get to ceilometer what they were doing with libvirt | 15:28 |
mnaser | yeah ceilometer actually does the same thing as this | 15:28 |
gmann | yeah GET diagnostics if per VM only not as whole list | 15:28 |
mnaser | but ceilometer pushes notifications periodically, this just echoes out info every time you hit the endpoint | 15:28 |
TheJulia | mnaser: so for collecting data from redfish BMCs, we'll have to do much the same for components :( | 15:29 |
mnaser | also | 15:29 |
mnaser | i'm in favour of bringing this problem one step up in some sense | 15:29 |
mnaser | like: a document on how to best instrument $foo using $some_tooling | 15:30 |
mnaser | and deployment tools can implement that, or not, but that stuff doesn't live inside the project itself | 15:30 |
mriedem | btw https://etherpad.openstack.org/p/BER19-OPS-LOGGING-AND-MONITORING | 15:30 |
mriedem | this was just discussed in berlin | 15:30 |
mnaser | *OR* we can have something like nova/prometheus-exporter under opendev and those interested in that can hack on it too :) | 15:30 |
mriedem | well, this == monitoring | 15:30 |
TheJulia | mnaser: ++ I feel this is an easy discussion to sidetrack on because of different interpretations to words | 15:30 |
mnaser | i do not feel that we're "cutting out" the casual contributor in this scenario going back to the original topic | 15:32 |
gmann | i feel we can have it under openstack/xyz-metrics, meaning it consumes all the other services' metrics too if needed. I agree on not pushing these into projects in-tree. | 15:32 |
mnaser | this is a fundamental part of a project and i don't think realistically a part time contributor could succeed with it, because it involves work, writing specs, getting people to agree | 15:32 |
mnaser | i think this is just something we have to accept. this isn't a small change. | 15:33 |
TheJulia | It is a nice thought, but it is going to push users away if we only support our own thing | 15:33 |
mnaser | fyi what mikal is bringing up is not accurate | 15:33 |
mnaser | osprofiler != prometheus | 15:34 |
mriedem | mnaser: found it https://openstack.nimeyo.com/49291/openstack-ceilometer-proposal-hypervisor-notifications | 15:35 |
mriedem | 2015 | 15:35 |
mnaser | these are two different solutions for two different problems. one is cross-project / request tracing, the other is metrics exporting | 15:35 |
mnaser | mriedem: i think that's how it should have been in the first place but who's gonna do the work is always the fun question | 15:36 |
mriedem | http://lists.openstack.org/pipermail/openstack-dev/2015-June/thread.html#67283 is a bit nicer | 15:36 |
mnaser | i.e. ceilometer is publishing info that is pretty much useless when it comes to network because i dont even know what port is publishing X usage | 15:36 |
mnaser | it uses the port id generated by nova so 'qbr324342-324' which tells me nothing | 15:37 |
mnaser | heck, ceilometer is in a great position to get a prometheus style api with all of its existing pollers | 15:37 |
TheJulia | I don't fully understand how they handle the data model or services, but it seems like a single point could work if there was no implication of what type of data was being provided by a general pull from ceilometer, but getting/sorting/selecting what is also kind of a conundrum because if I remember correctly, for Ceilometer, operators who used it with ironic also had to ensure ceilometer would pick the event_type out | 15:56 |
TheJulia | of the queue. | 15:56 |
bauzas | mnaser: FWIW, that's the exact reason why I'll reply on the gerrit change and not on twitter | 15:57 |
bauzas | mnaser: because when I see some other tweets, it's just "meh, nova cores are badboys" | 15:58 |
*** jaosorior has quit IRC | 16:02 | |
jroll | dansmith's comments on the review are A+, specifically: | 16:03 |
jroll | > Nova exporting metrics for which it is authoritative is pushing things in the right direction of depending on something else for metrics, IMHO. | 16:03 |
jroll | I see a screen or two here worth of discussion on implementation details, which is totally missing the point. mikal said: | 16:04 |
jroll | > The previous answer was "we don't see the value in that, and have a ban on new implementations of (completely different other thing with similar name)", so I'd see that as a step forward. | 16:04 |
jroll | > I wrote a experimental trivial implementation which I've now wandered off from because of these comments. | 16:04 |
*** jamesmcarthur has quit IRC | 16:04 | |
*** jamesmcarthur has joined #openstack-tc | 16:05 | |
jroll | it appears he's looking to have a discussion on what he's trying to accomplish, bringing a POC along, I suspect the implementation details aren't important here | 16:06 |
TheJulia | ++ | 16:06 |
jroll | of course, I don't know if or where he attempted to start that conversation so it's hard to say what happened | 16:07 |
asettle | Mark Shuttleworth is doing the keynote FYI https://openinfradays.co.uk/ | 16:08 |
jroll | anyway, I feel like we all missed the point here (though it's certainly planted some seeds) | 16:09 |
asettle | ohffs wrong channel | 16:10 |
asettle | Sorry | 16:10 |
* mugsie shakes head at asettle | 16:11 | |
asettle | "Elect her to the TC" they said | 16:12 |
asettle | "She's perfectly capable" they said | 16:12 |
dhellmann | TheJulia : I think we're mixing 2 unrelated comments. I think it makes sense to consider a goal to "expose useful internal metrics in a consistent way". I also think it makes sense to consider the compute agent as a source of those metrics, and to just add a REST API to it to provide them, rather than trying to collect them centrally and serve them some other way. As mriedem points out, there is a lot of existing work in this area to take into | 16:13 |
dhellmann | account when considering implementation details deeper than what I've just said. | 16:13 |
jroll | ++ | 16:15 |
dhellmann | for one thing, when we start talking about adding a similar rest endpoint to every service, I start wondering how we can take advantage of WSGI to provide middleware to do that | 16:17 |
dhellmann | and of course, what does "useful internal metrics" even mean? | 16:18 |
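One way dhellmann's WSGI thought could look, as a hedged sketch (the `collect` callback and the metric name are placeholders, not an existing oslo interface): middleware that answers `/metrics` itself and passes every other request through to the wrapped app unchanged.

```python
class MetricsMiddleware:
    """Wrap any WSGI app; serve GET /metrics directly, pass the rest
    through. `collect` is a hypothetical callback returning whatever
    counters the wrapped service maintains."""

    def __init__(self, app, collect):
        self.app = app
        self.collect = collect

    def __call__(self, environ, start_response):
        if environ.get("PATH_INFO") == "/metrics":
            body = "".join(
                "%s %s\n" % (name, value)
                for name, value in sorted(self.collect().items())
            ).encode()
            start_response("200 OK",
                           [("Content-Type", "text/plain; version=0.0.4"),
                            ("Content-Length", str(len(body)))])
            return [body]
        return self.app(environ, start_response)

def fake_app(environ, start_response):
    # stands in for any existing WSGI service
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = MetricsMiddleware(fake_app, lambda: {"requests_total": 3})
status = []
metrics_body = b"".join(
    app({"PATH_INFO": "/metrics"}, lambda s, h: status.append(s)))
other_body = b"".join(
    app({"PATH_INFO": "/servers"}, lambda s, h: status.append(s)))
```

Because it only touches the WSGI layer, the same middleware could in principle sit in any service's paste pipeline, which is what makes it attractive for a cross-project goal.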
TheJulia | dhellmann: absolutely on all counts, I just want us to also avoid thinking that we can proceed down a path of one size fits all when I think we would be hurting ourselves if we dictated from the TC that this is the direction we think the community should head in. Ideally I think a couple projects should try and head in at least a POC direction if they see the need, and go from there. I know for internal process metrics, | 16:25 |
TheJulia | the original focus when the work was done in ironic was timing data... which also doesn't seem like the most useful thing in all cases. | 16:25 |
bauzas | jroll: this isn't an implementation question | 16:30 |
bauzas | jroll: IMHO, Nova could expose more if needed, for sure, but it's not the role of Nova to be responsible for sending specialized metrics to a very specific tool | 16:31 |
bauzas | if so, we create a new API contract | 16:31 |
bauzas | with which I disagree | 16:32 |
*** e0ne has quit IRC | 16:32 | |
jroll | bauzas: that sounds like an implementation detail to me :) | 16:34 |
jroll | the point of the POC is to show that these metrics are useful, not to say this is how it should be done. | 16:34 |
mugsie | bauzas: but if this "very specific tool" is used by most other projects in this space, we should look at it | 16:34 |
bauzas | creating a new API contract to a specific 3rd-party tooling ? | 16:34 |
mugsie | not just dismiss out of hand | 16:34 |
mugsie | afaik there was a pluggable point (or he was working on one) for swapping to statsd over /metrics | 16:35 |
bauzas | mugsie: I'm not dismissing it, see my comment | 16:35 |
bauzas | mugsie: there are many ways to collect nova metrics in prometheus already | 16:35 |
bauzas | and I'm up for discussing what we can add more | 16:36 |
jroll | building a daemon which reads the notifications queue and emits metrics to $system is not a trivial thing that we can just handwave away as "this already works, we don't need to think about alternatives" | 16:37 |
jroll | if for no reason other than the sheer volume of the message queue | 16:38 |
mugsie | that read as a dismissal. prometheus (via a /metrics endpoint) is the de facto way people expose metrics for these systems, not parsing rmq events or ceilometer | 16:38 |
bauzas | here is the thing I can propose | 16:38 |
bauzas | I know a Prometheus maintainer well | 16:38 |
bauzas | I worked with him in the past, he also worked in the OpenStack community | 16:38 |
bauzas | he knows ceilometer and all openstack things | 16:39 |
bauzas | now, he changed his role to some other job in Red Hat and is now a Prometheus maintainer | 16:39 |
bauzas | so he looks to me like the best person to discuss this with | 16:39 |
bauzas | I can ask him to provide more thoughts in the change (he already did) | 16:40 |
TheJulia | I think one thing people are missing with this: we don't actually need to enforce sending over a message bus. A plugin can presently write the same data to syslog, or with some work write it to... well... anything... which could be a service that exposes a protected /metrics endpoint that is not on the same API surface as existing projects.... | 16:42 |
*** cdent has joined #openstack-tc | 16:42 | |
bauzas | jroll: accepting a new time-based collecting thread for a specific 3rd party tooling isn't a trivial thing either fwiw | 16:42 |
bauzas | and the nova community really cared about the support it needs | 16:43 |
bauzas | https://docs.openstack.org/nova/latest/contributor/policies.html#metrics-gathering | 16:43 |
jroll | bauzas: again, implementation details. mikal is trying to show here that putting these specific metrics in prometheus is useful | 16:43 |
jroll | we have to agree on the "why" before the "how" | 16:43 |
mugsie | bauzas: the metrics in that policy are different to the metrics that were proposed | 16:44 |
jroll | +100 | 16:44 |
jroll | this is about metrics on the nova-compute daemon, not the system it is running on (which that policy is about) | 16:45 |
bauzas | and again, I'm not against discussing on how to expose some metrics that are useful | 16:45 |
bauzas | or the why if you prefer | 16:45 |
*** cdent has left #openstack-tc | 16:46 | |
bauzas | jroll: there is already an openstack exporter https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config | 16:47 |
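For reference, the openstack_sd_config bauzas links to is service discovery, not an exporter: Prometheus enumerates hypervisors or instances via the OpenStack APIs and then scrapes an exporter running on each discovered target. A hedged sketch of what such a scrape config could look like (the endpoint, credentials, and port are placeholders; consult the linked docs for the authoritative field set):

```yaml
# Illustrative Prometheus scrape config using OpenStack service
# discovery; all credential values below are placeholders.
scrape_configs:
  - job_name: openstack-hypervisors
    openstack_sd_configs:
      - role: hypervisor            # discover hypervisors ('instance' also exists)
        region: RegionOne
        identity_endpoint: http://keystone.example.com:5000/v3
        username: prometheus
        password: secret            # placeholder
        domain_name: Default
        project_name: admin
        port: 9100                  # port of the exporter scraped on each host
```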
jroll | bauzas: I was expressing that as a group, we tend to dive right into discussing implementation when people try to discuss general ideas. I wasn't saying anything about you specifically. | 16:47 |
dhellmann | I'm less concerned with what metrics are collected than with getting teams to accept the idea that metrics collection is something we should be doing in the first place | 16:47 |
bauzas | so, the question is: should we expose driver specific metrics to prometheus? yes | 16:47 |
fungi | maybe it's just me, but i can't help but think if we simply got all projects to provide an snmp callout hook and a nicely structured mib, then metrics would be solved in a way compatible with proper traditional tooling | 16:48 |
bauzas | should it be nova's responsibility to expose such metrics ? that's where I disagree | 16:48 |
jroll | bauzas: I'm glad you agree - it appears mikal doesn't think the nova team as a whole agrees. | 16:48 |
fungi | i have a hard time imagining prometheus or any metrics collection system designed in the past 25 years lacks support for good old-fashioned snmp | 16:48 |
mugsie | fungi: not really well | 16:48 |
bauzas | jroll: with respect to mikal, the -2 was procedural | 16:48 |
fungi | mugsie: well, polling-oriented metrics collection system i mean | 16:49 |
jroll | bauzas: it might not be nova's responsibility to emit the metrics, but it's nova's (or openstack's?) responsibility to make sure that getting the right metrics there is a good experience for operators | 16:49 |
bauzas | jroll: I totally agree with that sentence | 16:49 |
jroll | bauzas: the -2 was after the tweet; the tweet came from conversations. | 16:49 |
jroll | dhellmann: +1000 | 16:50 |
fungi | mugsie: after all, your routers and switches and disk arrays and lights-out management interfaces and various other network-connected hardware are all providing snmp services already | 16:50 |
fungi | your power distribution units, your smart power strips, your hvac systems, your security systems... | 16:50 |
persia | (not all: many of those run REST servers with various interesting things and don't provide SNMP anymore) | 16:51 |
fungi | eek, really? they've dropped their snmp services? | 16:51 |
persia | Some manufacturers, yes. Some are using derivatives of openBMC to provide data feeds. Others custom embedded stuff. | 16:52 |
fungi | wow, gross | 16:52 |
asettle | On another note for Twitter conversations today, anyone see this? https://twitter.com/jessfraz/status/1107358385459081218 | 16:52 |
asettle | Specifically the convo between Maish Saidel-Keesing and Steve Gordon | 16:53 |
bauzas | I'm seriously considering either grabbing popcorn every time I go looking at twitter, or just not looking | 16:53 |
fungi | amusingly, i *just* popped some | 16:53 |
persia | asettle: Some of that comes from viewpoint rather than "cloud": http://www.openforumeurope.org/wp-content/uploads/2019/03/OFA_-_Opinion_Paper_-_Simon_Phipps_-_OSS_and_FRAND.pdf is a good recent read on different viewpoints | 16:54 |
asettle | bauzas, I honestly don't look often for many reasons. It tends to be a cesspool of shit-human interactions and everyone touting their self-righteous bullshit. | 16:55 |
asettle | Nevertheless, it has its place for occasionally interesting debate | 16:55 |
jroll | asettle: got a pointer to said conversation? I'm not seeing it | 16:55 |
asettle | jroll, uno moment | 16:55 |
jroll | ta | 16:55 |
*** dtantsur is now known as dtantsur|afk | 16:56 | |
fungi | it's probably that i can't figure out twitter's ui, but i don't see any discussion between steve and maish there | 16:56 |
asettle | jroll, probably from here: https://twitter.com/sparkycollier/status/1107365187512909825 | 16:56 |
asettle | oh ffs fucking twitter | 16:56 |
jroll | lol | 16:56 |
asettle | https://twitter.com/maishsk/status/1107386258114965506 | 16:56 |
asettle | THERE we go | 16:56 |
asettle | Sorry folks. | 16:57 |
jroll | fungi: twitter has threaded replies and it's super weird to navigate | 16:57 |
asettle | ^^ exactly that. You *think* you're clicking on the thread, but actually you're looking at a recipe for hot sauce | 16:57 |
jroll | a recipe for hot sauce would be an improvement | 16:58 |
fungi | this hot sauce recipe is entirely unsatisfying | 16:58 |
asettle | Hell yes it would | 16:58 |
asettle | But an improvement | 16:58 |
evrardjp | Most of the discussion above should go into some kind of documentation for a future community goal, if we want to implement things across openstack in a common way -- we are just exposing "prior art" here | 16:59 |
fungi | in another 6-ish hours when we close the nova and charms ptl elections you can follow up with the voter turnout for those | 17:00 |
asettle | 't would certainly be interesting | 17:01 |
fungi | preliminary numbers are 34% for openstack charms and 42% for nova | 17:01 |
*** ricolin has quit IRC | 17:01 | |
asettle | Innnnteresting | 17:01 |
bauzas | 42% is nice given our large community | 17:01 |
asettle | Yes ^^ | 17:01 |
fungi | the tc election turnout of 20% was less great, granted | 17:01 |
bauzas | yeah, I've heard of some friends not voting for the TC | 17:02 |
asettle | I do have some thoughts on that. But it ties into a lot of other conversations we've had in the past and I don't think it adds much. But it would be good to chat to you all in Denver. | 17:02 |
bauzas | they told me literally "heh, I like you but I had no energy for reading emails and voting" | 17:02 |
asettle | Is this currently an item on our agenda? | 17:02 |
mnaser | asettle: i may or may not have asked for someone to volunteer to get our ptg etherpad started | 17:03 |
asettle | >.> did ya now | 17:03 |
asettle | I go away 8 days, I miss all the fun stuff | 17:03 |
fungi | also, to maish's comment about "people that contribute on a regular basis to the code" the *majority* of our electorate is made up of people who have gotten a single patch merged to a repository | 17:04 |
mnaser | asettle: http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003916.html | 17:04 |
fungi | i expect if we had insight into *who* is and isn't voting, we'd see that the minority of people who contribute most of the code are roughly the same minority who participate in elections | 17:04 |
asettle | fungi, I think you're right there | 17:05 |
asettle | mnaser, sure. I'll grab it then. I'll put it as an AI for tomorrow as I'm absolutely shattered today and need to R&R a bit more. | 17:05 |
mnaser | asettle: yay, thanks. :) | 17:05 |
asettle | I'll reply to your email so it's public. | 17:05 |
asettle | Anyone got last releases TC etherpad? | 17:07 |
asettle | Just for a point of ref | 17:07 |
gmann | asettle: https://etherpad.openstack.org/p/tc-stein-ptg | 17:09 |
asettle | Gracias gmann | 17:09 |
fungi | okay, i take back what i said about the majority of the tc electorate having only one patch merged. it's 30% with only one change, 41% with 2 or fewer, 48% with 3 or fewer, and 54% with 4 or fewer | 17:12 |
fungi | in the last election anyway | 17:12 |
fungi | 66% have up to 10 changes merged in the qualifying year for the last election | 17:13 |
*** jaypipes has joined #openstack-tc | 17:16 | |
fungi | the most active (by patch count) 20% of the electorate merged 29 or more changes that year | 17:16 |
fungi | so we don't know which 20% voted in the tc election, but if it was the most active 20% by patch count then it was the people getting on average one patch merged every 13 days | 17:19 |
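fungi's back-of-the-envelope figure is easy to verify: 29 changes over a 365-day qualifying year works out to one merge every ~12.6 days, which rounds to the 13 quoted. A quick check:

```python
# Verify fungi's estimate: the most active 20% of the electorate
# merged 29 or more changes in the 365-day qualifying year.
changes_per_year = 29
days_between_merges = 365 / changes_per_year
print(round(days_between_merges, 1))  # ~12.6, i.e. roughly one patch every 13 days
```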
evrardjp | Is it me, or do the projects with little badges always look more serious than those without? I saw one of the projects above with a proud badge of "tests passing" and "coverage 0%", and I was totally biased at first... "it has badges, it must be pro" | 17:19 |
evrardjp | sneaky | 17:19 |
fungi | or phrased another way, the share of contributors who merge a patch every couple of weeks over the course of a year is roughly the same as the share of our electorate who voted in the tc election | 17:20 |
evrardjp | nice insights fungi | 17:21 |
*** jpich has quit IRC | 17:23 | |
bauzas | fungi: from the personal feedback I got (which is of course subject to debate), there is no clear relationship between voting and contributing ratios | 17:32 |
bauzas | some with high contribution levels just expressed some TC fatigue | 17:33 |
bauzas | (which doesn't surprise me and which was one of the reasons why I was running) | 17:33 |
bauzas | anyway, time to wrap | 17:34 |
fungi | yep, i don't necessarily expect them to be equivalent groups, more reacting to maishk's implication that the technical electorate are necessarily engaged contributors to the projects | 17:39 |
*** jamesmcarthur has quit IRC | 17:39 | |
fungi | nearly a third only ever got one change merged in a year, and the percent who voted is equal to the percent who get one change merged every couple weeks | 17:40 |
fungi | so it depends a lot on what you consider "contribute on a regular basis" to mean | 17:41 |
fungi | and one other point... where he compares it to participation for the osi board election, i don't know what their year-over-year turnout was like but the numbers they cite there are for this year's election which was wildly controversial. that sort of controversy definitely increases voter engagement, but... wow that was such a dumpster fire of a situation in their community | 17:45 |
fungi | i'd rather have a third the voter turnout they do and also a civil community who see elections as routine and not terribly "exciting" | 17:46 |
*** timburke has joined #openstack-tc | 17:48 | |
mugsie | fungi: yeah. OSI also had a lot of people joining just to vote in that election (due to its dumpster fire status), which would have shifted numbers | 17:51 |
*** jamesmcarthur has joined #openstack-tc | 17:54 | |
*** jamesmcarthur has quit IRC | 18:12 | |
*** jamesmcarthur has joined #openstack-tc | 18:14 | |
*** jamesmcarthur has quit IRC | 18:14 | |
*** jamesmcarthur has joined #openstack-tc | 18:14 | |
*** efried has left #openstack-tc | 18:18 | |
cmurphy | framadate doesn't support time zones :( https://framagit.org/framasoft/framadate/framadate/issues/158 | 18:25 |
evrardjp | oh shucks | 18:33 |
evrardjp | what is framadate based on? | 18:34 |
evrardjp | it seems it's their own soft? wow I didn't know that | 18:35 |
evrardjp | learning everyday | 18:35 |
fungi | looks like php, so i'm not going to attempt to identify the time-handling bits in there | 18:36 |
fungi | though neat that they're using zanata for translations | 18:37 |
*** diablo_rojo has quit IRC | 19:03 | |
*** e0ne has joined #openstack-tc | 19:28 | |
*** gouthamr_ has joined #openstack-tc | 19:34 | |
*** gouthamr_ has quit IRC | 19:38 | |
*** jamesmcarthur has quit IRC | 19:41 | |
*** jamesmcarthur has joined #openstack-tc | 19:42 | |
*** jamesmcarthur has quit IRC | 19:45 | |
*** jamesmcarthur_ has joined #openstack-tc | 19:45 | |
*** e0ne has quit IRC | 19:51 | |
*** jamesmcarthur_ has quit IRC | 20:02 | |
*** jamesmcarthur has joined #openstack-tc | 20:03 | |
*** ijolliffe has quit IRC | 20:07 | |
*** jamesmcarthur has quit IRC | 20:08 | |
*** jamesmcarthur has joined #openstack-tc | 20:08 | |
*** cdent has joined #openstack-tc | 20:19 | |
*** diablo_rojo has joined #openstack-tc | 20:25 | |
*** cdent has quit IRC | 20:26 | |
*** tosky has quit IRC | 20:30 | |
*** e0ne has joined #openstack-tc | 20:41 | |
*** e0ne has quit IRC | 20:46 | |
jroll | asettle: I didn't get around to finding a place in governance for the team vision things today, but am going to prioritize it tomorrow morning | 21:01 |
*** cdent has joined #openstack-tc | 21:26 | |
*** whoami-rajat has quit IRC | 21:52 | |
*** diablo_rojo has quit IRC | 21:57 | |
*** ijolliffe has joined #openstack-tc | 22:03 | |
*** jamesmcarthur has quit IRC | 22:08 | |
*** diablo_rojo has joined #openstack-tc | 22:30 | |
*** jamesmcarthur has joined #openstack-tc | 22:30 | |
* diablo_rojo puts election official hat on | 22:40 | |
diablo_rojo | So.. it's looking like at PTL election close we will have 6 PTLs with no IRC nicks provided. I know this is something that has been brought up in the past but I don't remember exactly what we decided in terms of handling this. | 22:40 |
* diablo_rojo puts community member hat on | 22:40 | |
diablo_rojo | I personally think that if you are a PTL, you should have a registered IRC nick. | 22:40 |
diablo_rojo | Projects with PTLs missing provided IRC nicks: congress, freezer, karbor, senlin, watcher, zun | 22:42 |
cdent | nice hat | 22:44 |
cdent | several of those projects have ptls from china, yes? | 22:45 |
tonyb | cdent: correct | 22:45 |
openstackgerrit | Matt Riedemann proposed openstack/governance master: Document goal closure https://review.openstack.org/641507 | 22:45 |
gmann | congress PTL is on irc ekcs | 22:45 |
cdent | for whom irc has proven a roadblock? | 22:45 |
*** jamesmcarthur has quit IRC | 22:45 | |
tonyb | gmann: Thanks. It'd be good if Eric added that to his community profile | 22:45 |
*** jamesmcarthur has joined #openstack-tc | 22:46 | |
gmann | yeah, making IRC mandatory does not work globally | 22:46 |
gmann | tonyb: yeah, i think he might have forgotten or is planning to move off IRC :). till now he has conducted congress meetings regularly on IRC | 22:46 |
cdent | I suspect a fair few people never know to even look at their foundation profile (although, one would hope a PTL would) | 22:47 |
cdent | it's not something that is highly visible (at least from where I'm slouching) | 22:48 |
tonyb | cdent: Yeah, it's 'baked' into the election, summit, forum and PTG? planning process so I'd like to think that project leaders/influencers know about it and use it | 22:48 |
dtruong | The incoming Senlin PTL nickname is xuefeng . But he is not on IRC much so I'm not sure it helps to point people to contact him that way. | 22:49 |
diablo_rojo | cdent, me many times. When I don't get email responses I go to IRC for many PTLs | 22:50 |
gmann | ML and email id are mandatory at least, and we expect PTLs to check the communication there actively | 22:50 |
cdent | diablo_rojo: If you're responding to "for whom..." what I meant there was that people in china have trouble accessing irc sometimes (either technically or socially), not who is hurt by people not being on irc | 22:51 |
diablo_rojo | Ahhhhh my bad | 22:51 |
diablo_rojo | misinterpreted | 22:51 |
* tonyb gets outside the box what if we let/encouraged people set the NICK to 'weechat:$user_name' ? | 22:52 | |
persia | I think occasional nonparticipation is better than explicit fragmentation. | 22:52 |
*** jamesmcarthur has quit IRC | 22:52 | |
cdent | tonyb: there have been suggestions like that in the past, but persia's comment hits one of the main objections | 22:53 |
persia | Most of the IRC "nonparticipants" seem to be folk who occasionally use a web gateway of some sort to attend meetings, but don't maintain presence (so chasing on IRC isn't a good alternative to email). | 22:53 |
cdent | another is some people being very -- on using wechat | 22:53 |
tonyb | Well isn't it just formalising the fragmentation we already have? | 22:53 |
diablo_rojo | Do we want that formalization though? I feel like we shouldn't do that. | 22:54 |
persia | tonyb: Not necessarily, unless there are a lot of formally organised openstack channels on weechat of which I'm unaware. | 22:54 |
tonyb | *I* think that email and IRC are the correct tools and I don't want to fragment the community | 22:54 |
persia | It's not just about nicks: it's also about project documentation specifying comms fora, etc. | 22:54 |
tonyb | We can live with the status quo and just treat IRC as semi-mandatory but it won't help me reach $people | 22:55 |
diablo_rojo | gmann, I'll add their nicks to the results but we should get them to update their profiles. | 22:55 |
tonyb | Gah /me has a meeting now :( | 22:55 |
gmann | yeah, i (as TC liaison to congress) can ask him to add his nick in next congress meeting | 22:57 |
diablo_rojo | dtruong, thanks for the nick | 22:57 |
diablo_rojo | So with those two irc nicks we are now missing 4 | 22:58 |
*** marst has quit IRC | 23:00 | |
fungi | with my tc member hat on, i think requiring them to read mailing list messages tagged with [ptl] or their project and to provide an e-mail address they will actually read regularly and respond from for private communication ought to be sufficient. i see irc as convenient and so nice to include, but i don't think making it mandatory would really help | 23:02 |
fungi | just requiring they provide an irc nick even if they don't ever use it is pointless | 23:02 |
fungi | and if they are on irc regularly, they're likely to provide it already | 23:03 |
cdent | these things are true | 23:05 |
diablo_rojo | Indeed. All true. | 23:06 |
*** mriedem is now known as mriedem_away | 23:07 | |
*** jamesmcarthur has joined #openstack-tc | 23:14 | |
*** jamesmcarthur has quit IRC | 23:30 | |
*** tjgresha has joined #openstack-tc | 23:40 | |
*** diablo_rojo has quit IRC | 23:46 | |
*** jamesmcarthur has joined #openstack-tc | 23:51 | |
*** tjgresha has left #openstack-tc | 23:52 | |
*** penick has joined #openstack-tc | 23:56 | |
*** diablo_rojo has joined #openstack-tc | 23:56 | |
tonyb | fungi: so last cycle I proposed a patch to the governance code to make the IRC nick optional and at that time it was decided that it was mandatory. | 23:59 |
tonyb | so today if we add a ptl: block to projects.yaml without an irc_nick key/value the docs will fail to build | 23:59 |
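For context, the shape tonyb is describing looks roughly like this (illustrative only; the real schema lives in openstack/governance's projects.yaml and its validation code, and the exact key names may differ):

```yaml
# Hypothetical projects.yaml fragment: as tonyb notes, omitting the
# IRC nick key from a ptl block currently fails the docs build.
someproject:
  ptl:
    name: Some Person
    irc_nick: somenick          # required today; the patch proposed making it optional
    email: someone@example.com
```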