scas | gmail is not very appropriate for participating, indeed. it likes to do "stuff" | 00:00 |
clarkb | fwiw gmail will show them as separate too since it seems to only rely on subject for grouping | 00:00 |
clarkb | I noticed this with the way storyboard sends notifications | 00:01 |
scas | i have issues handling my mail in most desktop clients, though, even through imap | 00:01 |
scas | thunderbird has ground a laptop with 16gb of ram and a ssd to a dead stop. my workstation isn't much better despite more resources | 00:02 |
clarkb | scas: is that due to the indexing? you can disable that | 00:03 |
clarkb | might affect your ability to do full text search on all the emails though | 00:03 |
zaneb | it's certainly true that every email client has its own set of heuristics for identifying what is and is not part of the same thread | 00:05 |
scas | as far as threading goes, gmail breaks occasionally, but it doesn't seem to be any different than in, say, thunderbird | 00:06 |
zaneb | out of curiosity I checked gmail and can confirm that that subthread shows up as a separate thread | 00:08 |
*** mriedem has quit IRC | 00:13 | |
fungi | zaneb: d'oh, for some reason i thought you said you were using gmail, but now i see you did not ;) | 00:25 |
fungi | and yes, i've heard reports that gmail threads by subject and ignores in-reply-to references | 00:26 |
zaneb | fungi: maybe you just assume that anyone complaining about mailing list headers is probably using gmail ;) | 00:26 |
fungi | so the reverse of what you said | 00:26 |
fungi | heh, well, odds are it's an accurate guess! ;) | 00:26 |
zaneb | technically it's both because thunderbird is in fact fetching via IMAP from a gmail account | 00:27 |
fungi | ahh | 00:27 |
fungi | well, at any rate, suggestion heard and i'll try making it in no way a reply next round | 00:27 |
zaneb | but each uses it's own set of heuristics for determining what a thread is | 00:28 |
zaneb | its | 00:28 |
fungi | yep, maddening | 00:29 |
fungi | mutt shows them as part of the same thread but indicates where subject lines change, fwiw | 00:29 |
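The two client behaviors described above can be sketched in a few lines. This is a hedged illustration, not any client's actual code: subject-based grouping (as gmail reportedly does) versus following the RFC 5322 `References` chain (as mutt does). The message-dict shape is invented for the example.

```python
# Hedged sketch of two threading heuristics mail clients use.
import re

def normalize_subject(subject):
    """Strip Re:/Fwd: prefixes, as subject-based threaders do."""
    return re.sub(r'^\s*((re|fwd?)\s*:\s*)+', '', subject, flags=re.I).strip().lower()

def thread_by_subject(messages):
    """Group messages whose normalized subjects match (gmail-style)."""
    threads = {}
    for msg in messages:
        threads.setdefault(normalize_subject(msg['subject']), []).append(msg['id'])
    return threads

def thread_by_references(messages):
    """Group messages connected through References headers (mutt-style),
    using a tiny union-find over message ids."""
    root = {}

    def find(x):
        while root.get(x, x) != x:
            x = root[x]
        return x

    for msg in messages:
        root.setdefault(msg['id'], msg['id'])
        for ref in msg.get('references', []):
            root.setdefault(ref, ref)
            root[find(ref)] = find(msg['id'])

    threads = {}
    for msg in messages:
        threads.setdefault(find(msg['id']), []).append(msg['id'])
    return threads
```

On a thread where someone changed the subject mid-discussion, the first function splits it into two threads while the second keeps it as one, which matches the client disagreement observed above.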
*** tosky has quit IRC | 00:50 | |
*** zaneb has quit IRC | 01:10 | |
*** dklyle has joined #openstack-tc | 01:32 | |
*** ricolin has joined #openstack-tc | 02:21 | |
*** jamesmcarthur has joined #openstack-tc | 03:00 | |
*** jamesmcarthur has quit IRC | 03:04 | |
*** diablo_rojo has quit IRC | 03:11 | |
*** jamesmcarthur has joined #openstack-tc | 03:14 | |
*** jamesmcarthur has quit IRC | 03:30 | |
*** dklyle has quit IRC | 03:58 | |
*** diablo_rojo has joined #openstack-tc | 04:17 | |
*** jamesmcarthur has joined #openstack-tc | 04:20 | |
*** jamesmcarthur has quit IRC | 04:25 | |
*** whoami-rajat has joined #openstack-tc | 04:47 | |
*** Luzi has joined #openstack-tc | 06:59 | |
*** e0ne has joined #openstack-tc | 07:10 | |
*** diablo_rojo has quit IRC | 08:29 | |
*** dims has quit IRC | 08:32 | |
*** dims has joined #openstack-tc | 08:33 | |
*** tosky has joined #openstack-tc | 09:03 | |
*** ricolin has quit IRC | 09:07 | |
*** jpich has joined #openstack-tc | 09:15 | |
*** cdent has joined #openstack-tc | 09:39 | |
*** purplerbot has quit IRC | 11:32 | |
*** purplerbot has joined #openstack-tc | 11:33 | |
*** jamesmcarthur has joined #openstack-tc | 12:00 | |
*** jamesmcarthur has quit IRC | 12:04 | |
cdent | are we low on nodes again | 12:48 |
*** e0ne has quit IRC | 12:53 | |
fungi | looks like we have >800 actively running jobs: http://grafana.openstack.org/d/rZtIH5Imz/nodepool?orgId=1 | 13:03 |
fungi | tripleo still accounts for nearly half our utilization, though they've been steadily decreasing over the past month according to clarkb's most recent analysis | 13:04 |
fungi | hard to know the impact of their efforts exactly with holidays and conferences underway | 13:04 |
*** whoami-rajat has quit IRC | 13:19 | |
fungi | (note that's not >800 jobs, but rather >800 nodes actively in use by running jobs, in case my phrasing was unclear) | 13:24 |
cdent | I got it, thanks for the link. I always forget about the url for some reason | 13:25 |
fungi | we are down ~10% capacity because we were getting complaints about jobs running slow/timing out in one ovh region. after confirming with them that they're running a 2:1 cpu oversubscription ratio in our dedicated host aggregate i've tried halving our max in that region to see if it resolves the performance issues | 13:26 |
cdent | I was trying to run a 2:1 over sub in my home testing rig and it completely tanked | 13:27 |
*** e0ne has joined #openstack-tc | 13:31 | |
cdent | I guess that can work if you're sparse, but not otherwise | 13:32 |
*** jamesmcarthur has joined #openstack-tc | 13:48 | |
fungi | right, in the case of a diverse user population it can be an effective means of utilizing all your resources | 13:51 |
fungi | in this particular case it's a host aggregate dedicated to our ci system, so when our amplitude peaks it's akin to overdriving a waveform amplifier | 13:52 |
* cdent feedback solos | 13:53 | |
* fungi breaks into a rousing bit of air guitar | 13:54 | |
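For readers unfamiliar with the knob under discussion: a hedged sketch of how a 2:1 vCPU oversubscription ratio like the one fungi describes is typically expressed in nova. `cpu_allocation_ratio` is a real nova-compute option; the surrounding numbers are illustrative.

```ini
# Illustrative nova.conf fragment on a compute node (values are examples).
# With 2.0, a host with 32 physical cores advertises 64 schedulable vCPUs.
# That works when tenants are mostly idle ("sparse"), but a host aggregate
# dedicated to a CI system tends to peak all at once, as described above.
[DEFAULT]
cpu_allocation_ratio = 2.0
```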
EmilienM | fungi: anything we can do in tripleo to help? (like stopping to approve patches or anything) | 13:58 |
EmilienM | (hi btw) | 13:58 |
* cdent waves | 13:59 | |
* cdent stops crying | 13:59 | |
*** jamesmcarthur has quit IRC | 14:04 | |
fungi | EmilienM: i think the efforts so far have been great, just more of the same (make jobs more efficient, combine jobs which are mostly redundant, et cetera) | 14:06 |
fungi | discourage unwarranted rebasing, encourage people to look into failures before blindly rechecking... | 14:07 |
EmilienM | ack | 14:09 |
*** mriedem has joined #openstack-tc | 14:29 | |
scas | i know i'm not the cause of the elevated usage. i had someone join the channel recently who mentioned that 'people' at the summit were saying chef is for-reals dead | 14:31 |
scas | i have somewhat of the opposite effect happening, it seems | 14:31 |
fungi | saying chef the configuration management ecosystem is dead, or the chef-openstack project specifically? | 14:33 |
fungi | i recall it coming up a couple times in conversation and i was sure to mention that you were doing an admirable job of keeping it going with the help of some other occasional contributors | 14:34 |
fungi | and that it's definitely not dead, even if you were unfortunately unable to make it to the conference this time | 14:35 |
smcginnis | I do recall Chef being mentioned at least once. Nothing about its health though. | 14:36 |
scas | appreciate it. i probably heard something that came in through the hallway track. chef-openstack itself was apparently mentioned to a new-to-openstack person as being 'dead' | 14:37 |
scas | in my head, that's a sign for me to actually finish writing some blog content | 14:39 |
dhellmann | lbragstad : o/ | 14:41 |
scas | sabbaticals have a way of making people think things :) | 14:41 |
lbragstad | does anyone know if there is a document floating around for project-initiated leadership changes? | 14:41 |
lbragstad | cc dhellmann ^ | 14:41 |
cdent | lbragstad: can you translate that to other words? | 14:42 |
lbragstad | i know we just went through this with hogepodge and loci | 14:42 |
dhellmann | yeah, the loci team is going through a change right now | 14:42 |
dhellmann | I don't think we have a document per se, so that may be a good outcome of this new change | 14:42 |
lbragstad | a PTL wants to hand things over mid-cycle and they want to know if there is anything the TC expects in that transition | 14:42 |
lbragstad | as far as crossing T's and dotting I's | 14:42 |
dhellmann | the transition should be announced on the ML, and then a governance patch proposed with the change | 14:42 |
lbragstad | ok - cool | 14:43 |
dhellmann | I don't think we have any formal requirements about timing for new elections to replace PTLs like we do for TC members | 14:43 |
cdent | thanks lbragstad, I thought you meant something like that but wasn't quite sure | 14:43 |
* lbragstad wonders if that should be written down | 14:43 | |
lbragstad | cdent np | 14:43 |
lbragstad | i'm also not through my first cup of coffee, so... words are even harder at the moment | 14:44 |
dhellmann | lbragstad : yeah, we should document it. I think we've always handled it as a "TC confirmation" thing so it should be a minimal update to the charter https://governance.openstack.org/tc/reference/charter.html | 14:44 |
dhellmann | maybe have some coffee and then see about a patch? :-) | 14:44 |
lbragstad | yeah - i'll get something proposed this morning | 14:45 |
dhellmann | lbragstad : maybe also make a note of this in the health tracker in the wiki under the section for this team. It's not serious if they have a new candidate now, but it's relevant if it turns into part of a trend (either with this team, or across teams) | 14:46 |
lbragstad | good point | 14:47 |
lbragstad | noted | 14:47 |
evrardjp | dhellmann: that's interesting; I didn't want to write anything in the health tracker while there were activities -- it seems the data can quickly become outdated | 14:56 |
evrardjp | by activities I meant leadership discussions and changes. | 14:56 |
evrardjp | because I'd say things change with new leadership | 14:57 |
evrardjp | but writing the fact there is a new leadership is important | 14:57 |
evrardjp | IMO | 14:57 |
dhellmann | writing something like "during stein changed PTLs due to $reason" seems ok, doesn't it? | 14:57 |
evrardjp | yeah | 14:57 |
evrardjp | on that I agree :) | 14:57 |
dhellmann | the idea is to use that page as a shared set of status notes | 14:59 |
dhellmann | with interpretation left up to the reader | 14:59 |
lbragstad | i don't expect to update it with changes due to the normal election process, though? | 14:59 |
fungi | shhh! it's time for another office hour | 15:00 |
fungi | ;) | 15:00 |
gmann | o/ | 15:00 |
ttx | I'm around in case of questions, and have a canned one in case nobody says anything | 15:01 |
cdent | thrilling | 15:01 |
* smcginnis takes his chair in the corner | 15:01 | |
ttx | As previously announced, we/OSF are coordinating a regular (biweekly is the plan) newsletter for people who want to keep up with what's happening with our project(s) but are not ready (or don't have the time) to engage on ML. | 15:02 |
ttx | There should be a "openstack project news" section in there, let me know if you have anything significant to mention | 15:02 |
ttx | We already have a couple of items: | 15:03 |
dhellmann | lbragstad : no, if there's an election everything is going normally | 15:03 |
ttx | “Vision for OpenStack clouds” document published | 15:03 |
ttx | openstack mailing list merge | 15:03 |
ttx | New Autoscaling SIG | 15:03 |
scas | just wanted to mention it again that i appreciate what folks have heard through the grapevine about chef-openstack. it's not something i anticipate, but i can see where having a written policy regarding project lead succession out-of-cycle would be of benefit. i presume it would be a 'special election' type thing | 15:03 |
gmann | ttx: and what is the target audience for that? operators, developers, end users? | 15:04 |
*** Luzi has quit IRC | 15:04 | |
ttx | gmann: I'd say potential user / not directly engaged yet | 15:04 |
fungi | scas: in the past there's basically been consensual appointment of an interim ptl until the next regularly-scheduled election | 15:04 |
ttx | gmann: in Berlin we have talked about using that newsletter as the step 0 of community engagement | 15:05 |
fungi | scas: we reelect ptls with great enough frequency that a <=6mo appointment isn't a huge concern | 15:05 |
*** cosss_ has joined #openstack-tc | 15:05 | |
*** openstackgerrit has joined #openstack-tc | 15:06 | |
openstackgerrit | Lance Bragstad proposed openstack/governance master: Update charter to include PTL appointment https://review.openstack.org/620928 | 15:06 |
ttx | We'll likely put in place some etherpad for people to regularly drop OpenStack-news bullet points, but for the first one we are winging it a bit | 15:06 |
lbragstad | open for feedback ^ | 15:06 |
gmann | i see, that was my next question | 15:06 |
gmann | thanks | 15:06 |
ttx | Once we have one out it will be clearer to replicate :) Fungi, diablo_rojo and myself should be on point to collect openstack bits | 15:07 |
scas | i think arch is trolling me with fonts. i almost thought i had a dead pixel, but it was an accent mark | 15:07 |
ttx | ȧ | 15:08 |
ttx | ṁ | 15:08 |
gmann | ttx: will you also announce it on the ML, so PTLs and teams know about it? | 15:09 |
dhellmann | ttx: I'm glad to hear the newsletter is taking off | 15:09 |
cdent | ttx: I think it is a great initiative. The thing I most worry about with those sorts of things is that if they are contribution driven, rather than reporting, it's hard to get the _right_ content | 15:09 |
ttx | gmann: there is a bit of chicken and egg -- will be much easier to talk about it once we have one out. It was announced on ML posts already | 15:09 |
gmann | k | 15:10 |
*** ricolin has joined #openstack-tc | 15:10 | |
ttx | cdent: yeah, and making it _too_ crowdsourced usually fails (if past experiences are to be learned from) | 15:10 |
cdent | on the flip side, reporting is time consuming | 15:10 |
ttx | which is why it's a OSF newsletter... sourcing content from community | 15:11 |
gmann | the main point is that we source it from all projects/SIGs, not only the key projects | 15:12 |
ttx | we are still going back and forth on right length/content/frequency, but we'd like to push one out soon. Which is why I ask if there is anything critical to mention, beyond the 3 things already mentioned | 15:12 |
gmann | This is feedback I got from the blazar team as well: they want their things published, announced, etc | 15:12 |
ttx | obviously there is an editorial aspect to it too -- some "news" might not make the cut | 15:13 |
ttx | although I'd rather worry of the opposite :) | 15:14 |
fungi | but conversely, things may fail to get included if not brought to the attention of those assembling the newsletter | 15:14 |
ttx | anyway, I did not want to abuse the office hour to talk about that, just something to keep in mind if you see newsworthy bits pass | 15:15 |
fungi | if blazar or any other team is concerned that they're not getting their news isn't getting the visibility of the news for other things, they need to start supplying it | 15:15 |
fungi | er, that was a terrible sentence. i blame too much multitasking | 15:15 |
scas | a while back, we mentioned not flying the help flag too hard. upstream, there is a library in chef itself that chef-openstack depends on. its future is even more precarious than chef-openstack, but i wasn't sure if it'd be too forward to have a mention in my next ml update. much like other roll-up updates, it'd be a bulletpoint | 15:16 |
dhellmann | scas : I think that conversation was about communicating with the board, rather than discussions on our mailing list | 15:16 |
scas | it's partly one reason why there hasn't been a chef ml update in a few months | 15:16 |
scas | fair enough. things blur over time | 15:17 |
* dhellmann nods | 15:17 | |
gmann | fungi: +1, i said the same, and there are a few things that never got a response. an example is the user survey, which i want to bring up next | 15:17 |
fungi | scas: explaining what bits of the project would be most effective/useful to get additional contribution on is great, in my opinion. just don't phrase it in an overly alarming doom-n-gloom sort of way | 15:17 |
gmann | that came from Blazar health checking and my discussion with PTL in summit. | 15:17 |
openstackgerrit | Merged openstack/governance master: update chair guide reference to closed mailing list https://review.openstack.org/620656 | 15:17 |
openstackgerrit | Merged openstack/governance master: clarify chair responsibility for managing chair selection process https://review.openstack.org/620657 | 15:18 |
gmann | Blazar is not in the user survey; the PTL requested it but got no response or fix. | 15:18 |
scas | of course. i wanted a quick sanity check before i did send something out | 15:18 |
dhellmann | gmann: that's good info to have, thanks | 15:18 |
gmann | may be ttx fungi ^^ can help to get them involve in user survey | 15:18 |
dhellmann | did you put that in the health-tracker wiki page? | 15:18 |
fungi | gmann: did their ptl not get contacted by aprice last time project-specific survey questions were solicited? | 15:18 |
dhellmann | I would have thought the user survey would include all of the official projects | 15:18 |
fungi | gmann: that's been fairly standard process in the past | 15:19 |
dhellmann | yeah, we should have all of that down pat by now :-) | 15:19 |
gmann | fungi: i think not; he mentioned they requested it but never heard back | 15:19 |
scas | the current maintainer did a good enough job at explaining the gloom on their end, if one looks through bug reports. i simply like for people to be aware, much like how fog-openstack caused some ruckus for some a while back | 15:19 |
fungi | gmann: it's possible they were overlooked, though what usually happens in these cases is that the ptl was e-mailed for input and never responded to those working to assemble the survey updates. that only happens periodically (yearly now i think) | 15:21 |
dhellmann | yeah, we're only doing the user survey once a year now | 15:21 |
aprice | gmann: apologies for the oversight. we actually just kicked off the 2019 version at the Berlin Summit so it's an easy addition | 15:21 |
*** whoami-rajat has joined #openstack-tc | 15:21 | |
openstackgerrit | Merged openstack/governance master: Retire openstack-ansible-os_monasca-ui https://review.openstack.org/617322 | 15:22 |
aprice | we are reaching out to the PTLs in December for the project-specific questions that are asked at the end of the survey. We are planning to update all of these in January | 15:22 |
gmann | fungi: aprice great, that will help. i can tell priteau to wait for that. | 15:23 |
dhellmann | tc-members: how do folks feel about my proposed plan for landing our various resolutions for tracking python 3 release? http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000144.html | 15:23 |
scas | appreciate the responses and feedback so far. i have something to work with | 15:23 |
dhellmann | are we ready to move ahead with zaneb's bit in https://review.openstack.org/#/c/613145/ as it is or do we need updates? | 15:24 |
fungi | dhellmann: sounds great, i was waiting to rebase mine into place per that plan until the one which would go ahead of it is updated | 15:24 |
aprice | gmann: ok, great. It should be added to the project list shortly in the meantime. Happy to help with any other questions there may be | 15:24 |
gmann | aprice: thanks again. | 15:25 |
smcginnis | dhellmann: I'm good with moving ahead with it. | 15:25 |
dhellmann | let's get some votes lined up, then :-) | 15:25 |
dhellmann | oh, smcginnis has already voted | 15:25 |
smcginnis | I made some updates to how my declaration patch was done. Would appreciate any input if that is more clear. | 15:26 |
smcginnis | More betterer. | 15:26 |
*** mriedem is now known as mriedem_afk | 15:29 | |
evrardjp | for https://review.openstack.org/#/c/611080/ I thought the addition of champions later would do | 15:30 |
smcginnis | What champions though? | 15:30 |
smcginnis | evrardjp: We are the champions, my friends. :) | 15:31 |
dhellmann | smcginnis : that new version looks good | 15:31 |
* fungi will certainly keep on fighting till the end (just like freddie!) | 15:31 | |
evrardjp | smcginnis: I think you're watching too much sports, listening to too much music, and watching some recent movies. | 15:31 |
smcginnis | :) | 15:32 |
evrardjp | on top of that I think the latest version of smcginnis's patch can easily be followed by a reader wanting to know 'which version of python do I need for openstack version x' | 15:33 |
smcginnis | That was my hope. | 15:33 |
smcginnis | I think it would be worth adding older release pages so there is a record of what was targeted at the time. | 15:33 |
dhellmann | that would be interesting info to track down for sure | 15:34 |
evrardjp | fair. | 15:34 |
smcginnis | Another research project. | 15:34 |
ttx | dhellmann: do you expect a new patchset on Zane's python3 review? | 15:34 |
evrardjp | dhellmann: I am not sure the last iteam of the plan you sent on the ML does still apply | 15:34 |
ttx | or more of a followup patch? | 15:34 |
evrardjp | item* | 15:34 |
dhellmann | ttx: maybe a follow-up to clear up the wording on setting the goal but the more I thought about it the more I like the slightly vague wording for the deadline section | 15:35 |
dhellmann | evrardjp : do you think smcginnis' patch encompasses fungi's? | 15:36 |
evrardjp | yes | 15:36 |
evrardjp | well | 15:36 |
evrardjp | it doesn't really, but I think it has clarified enough, and for the reader, the rest of the pages are not referring to any version of python, which sounds good enough for me | 15:37 |
evrardjp | s/referring/explicitly hardcoding/ | 15:37 |
dhellmann | yeah, I guess I'd need to see them rebased together to tell for sure but you might be right | 15:37 |
fungi | as long as it drops the explicit python3.5 mention from reference/pti/python.rst i'm fine abandoning mine | 15:38 |
* fungi checks | 15:38 | |
fungi | yeah, looks like it basically incorporates the same changes from mine now | 15:40 |
evrardjp | it gives an example, and points IIRC | 15:40 |
evrardjp | ok | 15:40 |
evrardjp | that is clarified then :) | 15:40 |
fungi | his original version didn't alter reference/pti/python.rst (mainly i think because mine was the first to be proposed and the other two changes grew out of the subsequent discussions about it) | 15:41 |
dhellmann | makes sense | 15:42 |
evrardjp | I might be the cause of that, but that sounded more intertwined in smcginnis's patch, anyway... | 15:42 |
cdent | nice progress all round I'd say | 15:43 |
fungi | they were initially sort of written to be complementary to each other, but subsequent revisions resulted in overlap | 15:43 |
* evrardjp whistles | 15:43 | |
evrardjp | sorry! | 15:43 |
fungi | no need to apologize. it's not as if i need more patches to my credit ;) | 15:44 |
fungi | we probably could have done this as revisions to my original but the different pieces were being kept separate so that their impact could be discussed independently | 15:45 |
dhellmann | after that lands, the next thing to take up is the list of LTS platforms. evrardjp, are you leading that set of changes? | 15:45 |
evrardjp | yeah I'd like to. I am currently preparing something about that internally, and will use that time to come up with a proper argument about what belongs on this list in governance | 15:46 |
dhellmann | fungi : yeah, I think coming at it from several directions at once helped clarify what needed to change, so thank you for triggering the discussion and being gracious about the resolution | 15:47 |
dhellmann | evrardjp : ok | 15:47 |
* fungi is always glad to incite others to heated debate | 15:47 | |
cdent | no you're not | 15:48 |
smcginnis | I agree with cdent about the ideal being one platform, but since that is not likely I would be glad to see some version of SUSE added to the list of distros. | 15:48 |
cdent | ++ | 15:48 |
evrardjp | let's see how this pans out in the review :) | 15:48 |
dhellmann | I'm not sure I see the point of focusing dev and testing upstream on the least used platform. It very much ignores the fact of how this project is used, not to mention backed. | 15:49 |
fungi | keep in mind that at the moment we say ubuntu lts and centos, but we're vague enough about what it is we're testing that in practice it's mostly just devstack on ubuntu and tripleo on centos | 15:49 |
dhellmann | if we're going to use that list of platforms to determine language version support, I'm comfortable extending it to include some other names, as long as we update our description of what the list means at the same time | 15:50 |
fungi | and if it weren't for tripleo being an rh-centric project, we wouldn't really be living up to our claims to do integration testing on centos | 15:50 |
scas | chef-openstack tests on centos and ubuntu. anything else is considered to be best effort, but i'm pretty sure chef-openstack could even dredge up the actually-dead freebsd implementation of openstack and get most of the way with it | 15:51 |
evrardjp | same applies within openstack-ansible | 15:51 |
fungi | back when this was still sitting in a wiki article, in the before-time, the expectation was that we'd have devstack jobs exercising both ubuntu lts and centos, but the centos devstack jobs were poorly-maintained and often broken/disabled for months or years | 15:52 |
fungi | so in actuality most of the time it was just ubuntu lts consistently, and occasionally some centos, until tripleo shifted their focus to rdo | 15:53 |
evrardjp | clarification is indeed needed, I am tackling this, and I will probably come for help a little later :) | 15:53 |
scas | i'm considering updating chef-openstack docs to clarify platform support. i mentioned freebsd only because i was following the progress before the implementer publicly gave up and it came to mind in context | 15:54 |
fungi | evrardjp: what integration testing is happening on opensuse these days? any devstack+tempest jobs? or is it mainly osad? or something else? | 15:55 |
cmurphy | devstack itself tests on opensuse, looks like non-voting though | 15:56 |
fungi | i note that in the pti we don't actually say integration testing for those platforms, just "functional tests" which... could be interpreted in a lot of ways | 15:57 |
gmann | yeah we have few job on n-v or experimental on opensuse | 15:57 |
gmann | example - https://github.com/openstack/nova/blob/8c318d0fb20fdfe0ae8e203245e4d4d6668c8a44/.zuul.yaml#L254 | 15:57 |
gmann | but it's an experimental job | 15:58 |
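The distinction gmann is drawing can be made concrete. A hedged sketch of the relevant `.zuul.yaml` shapes, with hypothetical job names: a non-voting job reports in check but cannot block a merge, while an experimental job runs only when someone leaves a "check experimental" comment.

```yaml
# Illustrative .zuul.yaml fragment (job names are hypothetical).
- project:
    check:
      jobs:
        - devstack-platform-opensuse-15:
            voting: false            # reports results, can't block the merge
    experimental:
      jobs:
        - nova-opensuse-functional   # runs only on a "check experimental" comment
```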
* fungi suddenly realizes we've monopolized the office hour mostly for discussion between tc members, and feels guilty | 15:59 | |
mnaser | :< | 16:00 |
scas | i've balanced it out with some non-tc noise! | 16:00 |
mnaser | only 2 non-tc participants unfortunately | 16:01 |
fungi | scas: yes, thank you!!! | 16:01 |
*** dklyle has joined #openstack-tc | 16:01 | |
cmurphy | I don't think it's a problem for TC members to chat during the TC office hours, especially when it would most likely otherwise be quiet | 16:01 |
mnaser | fungi: openstack ansible does CI for centos and its fully supported :) | 16:01 |
cmurphy | if people have thoughts on the topic (which scas did) they can chime in | 16:02 |
fungi | there was an assertion that non-tc lurkers are hesitant to speak up in this channel when they see us discussing something that looks important | 16:02 |
scas | honestly, tc office hours is probably the one time that people know they'll have high bandwidth responses. it makes sense that it's largely tc members talking to tc members, in lieu of physical/video/voice interaction | 16:02 |
cmurphy | I think most people in the community are trained to know that if there is an important community-wide topic they want input on, they use the mailing list | 16:03 |
cmurphy | the office hour feels like it's more for chatting than about bringing up important issues | 16:04 |
evrardjp | fungi: cmurphy gmann interesting, you're busy doing what I was doing today -- a real map of what the projects are actually gating :p | 16:04 |
scas | it can also be hard to jump in at times, at the times of day if folks are particularly incensed | 16:04 |
evrardjp | ofc it's no surprise | 16:04 |
scas | much like a group of people engaged in deep conversation with one another, the casual observer wants to let them be | 16:04 |
*** jamesmcarthur has joined #openstack-tc | 16:05 | |
scas | knowledge workers are also particularly engaged at this time of morning. many commute during this time of day, or they're in other high bandwidth interactions | 16:06 |
scas | in PST, it's barely after 0800. the 5 is a parking lot, i'm certain | 16:07 |
*** jamesmcarthur has quit IRC | 16:10 | |
*** mriedem_afk is now known as mriedem | 16:22 | |
*** e0ne has quit IRC | 16:26 | |
*** ricolin has quit IRC | 16:26 | |
*** e0ne has joined #openstack-tc | 16:38 | |
*** e0ne has quit IRC | 16:48 | |
*** david-lyle has joined #openstack-tc | 17:04 | |
*** dklyle has quit IRC | 17:05 | |
*** jamesmcarthur has joined #openstack-tc | 17:24 | |
*** cosss_ has quit IRC | 17:24 | |
*** jpich has quit IRC | 17:29 | |
*** david-lyle has quit IRC | 18:15 | |
*** jamesmcarthur has quit IRC | 18:24 | |
*** diablo_rojo has joined #openstack-tc | 18:31 | |
*** dklyle has joined #openstack-tc | 18:45 | |
*** dklyle has quit IRC | 19:07 | |
*** jamesmcarthur has joined #openstack-tc | 19:37 | |
*** jamesmcarthur has quit IRC | 19:50 | |
*** jamesmcarthur has joined #openstack-tc | 19:52 | |
*** whoami-rajat has quit IRC | 19:56 | |
*** jamesmcarthur has quit IRC | 20:00 | |
*** e0ne has joined #openstack-tc | 20:20 | |
*** jamesmcarthur has joined #openstack-tc | 20:25 | |
*** jamesmcarthur has quit IRC | 20:29 | |
*** e0ne has quit IRC | 20:30 | |
*** jamesmcarthur has joined #openstack-tc | 20:53 | |
*** dklyle has joined #openstack-tc | 21:10 | |
openstackgerrit | Sean McGinnis proposed openstack/governance master: Add openstack/arch-design repo for Ops Docs SIG https://review.openstack.org/621013 | 21:34 |
*** dklyle has quit IRC | 21:39 | |
*** cdent has quit IRC | 21:40 | |
*** jamesmcarthur has quit IRC | 21:48 | |
*** jamesmcarthur has joined #openstack-tc | 21:48 | |
openstackgerrit | Lance Bragstad proposed openstack/governance master: Update charter to include PTL appointment https://review.openstack.org/620928 | 21:50 |
*** jamesmcarthur has quit IRC | 22:03 | |
*** dklyle has joined #openstack-tc | 22:19 | |
*** dklyle has quit IRC | 22:24 | |
*** dklyle has joined #openstack-tc | 22:32 | |
*** mriedem is now known as mriedem_afk | 22:33 | |
*** dklyle has quit IRC | 22:44 | |
clarkb | hello TC I know it isn't office hours, but this is sort of on my mind recently due to gate behavior and compounding factors from the underlying clouds that we dogfood openstack through. In berlin I brought up quality as a goal we should have in the goals session. I think we all agreed that this doesn't quite fit the goals framework but there also seemed to be agreement that it would be worthwhile to pursue | 22:49 |
clarkb | In concrete terms the infra team has been watching various gate queues struggle to merge code due to flaky tests/software, but also flaky clouds (that run our software). Things like duplicate IP addresses fighting each other with ARP leading to jobs that have ssh host key verification errors, networking just straight up breaking in multiple clouds making our mirrors unreliable/unresponsive, port leaks | 22:51 |
clarkb | in neutron leading to quota limits being hit, and I'm sure I could dig up a bunch of other things if I went looking | 22:51 |
clarkb | while I don't have hard data to support this, it feels like this is sort of a snowballing feedback loop. We've effectively lowered our standards upstream, downstream deploys said software, and then we get to feel the results in the form of flakiness | 22:51 |
clarkb | any thoughts on how we can make quality a goal/objective? and hopefully reverse the direction this snowball is rolling in and produce software that works better for our clouds leading to more reliable testing? | 22:52 |
mnaser | clarkb: i agree with this a lot | 22:54 |
mnaser | unfortunately i think not a lot of people look/care about gate problems | 22:55 |
mnaser | let me just put "recheck" and we're good | 22:55 |
mnaser | bit of a bummer too, because infra donations really aren't cheap. | 22:55 |
clarkb | yup I think that definitely feeds into it. I think we've also abstracted the gate enough where people don't perceive gate failures as future production failures | 22:55 |
clarkb | when the reality is they very likely are/will be | 22:55 |
mnaser | im not really sure what we can do to help this other than make changes to zuul to prioritize jobs that pass more often to run before those that fail all the time | 22:56 |
clarkb | mnaser: and ya it would also be good to be better stewards of the resource we are given (by treating the output of that system as valuable) | 22:56 |
mnaser | "want your jobs to run faster and be ahead? make sure your jobs are stable" | 22:57 |
clarkb | mnaser: one concern I have with that approach is the trivial way to fix jobs is to delete tests | 22:57 |
clarkb | and then we are arguably in a worse spot because the jobs pass but don't cover things we know are broken | 22:57 |
clarkb | one (admittedly less practical idea) I had was to encourage a sort of "sdague/jogo/mtreinish" rotation. Basically have a group of people that can take on the tasks they did in the past, but be explicit that it shouldn't be a full time thing to help avoid burn out but also ensure more than one person knows what to do | 23:03 |
clarkb | a lot of that work is/was categorize failures with elastic-recheck, then when identification rate is high dig into the most problematic issues | 23:04 |
clarkb | eventually you burn down the list and things get stable | 23:04 |
clarkb | the trick is to not stop at that point | 23:04 |
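The categorization step clarkb describes is driven by per-bug query files in the elastic-recheck repo; each file holds a Lucene-style query run against indexed job logs. A hedged sketch follows (the bug number and query text are made up for illustration):

```yaml
# queries/1234567.yaml -- hypothetical elastic-recheck query file.
# Hits against indexed console logs classify a gate failure as this bug,
# feeding the identification rate that the burn-down effort tracks.
query: >-
  message:"Host key verification failed" AND
  tags:"console" AND voting:1
```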
mnaser | clarkb: I agree. It is important to do this type of initiative. | 23:06 |
mnaser | For example in OpenStack Ansible world we don't really use elastic-recheck, which isn't great, but sometimes we have failures in tempest that are kinda out of our scope | 23:07 |
mnaser | clarkb: I’m almost tempted to say that maybe we should introduce another “tier” in the review before code review | 23:08 |
mnaser | Once a commit has a core who gives an initial +2, then we start running tests on it | 23:08 |
clarkb | mnaser: if were honest cores are just as bad as anyone else :P | 23:08 |
mnaser | That won’t improve on quality of code but at least reduce the amount of churn we have. I dunno. | 23:08 |
mnaser | At least if something is glaringly bad it doesn't have to undergo CI to get a -1 anyway | 23:09 |
mnaser | Not bad, wrong rather | 23:09 |
clarkb | I'm not sure a mechanical change will help much. The issue (from where I sit) is we undervalue quality | 23:09 |
clarkb | we overvalue JFDI | 23:09 |
mnaser | It’s a lot easier to type recheck than debug an issue | 23:11 |
clarkb | yup | 23:11 |
clarkb | but chances of your code actually merging if the issue is debugged is way higher | 23:11 |
clarkb | and that seems to be the piece we miss out on | 23:11 |
*** dhellmann_ has joined #openstack-tc | 23:26 | |
*** dhellmann has quit IRC | 23:26 | |
*** dhellmann_ is now known as dhellmann | 23:30 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!