19:01:13 #startmeeting infra
19:01:14 Meeting started Tue Jul 3 19:01:13 2018 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:17 The meeting name has been set to 'infra'
19:01:30 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:01:43 #topic Announcements
19:01:51 o/
19:02:03 Tomorrow is a US holiday. I'll be afk all day and then doing followup holidaying on thursday as well.
19:02:29 Fungi and I are traveling next week as well
19:02:44 i'll be around some tomorrow, but keeping an eye on the smoker will be my priority
19:03:05 yeah, i'm travelling monday through friday next week
19:03:16 and then i'm disappearing for a couple weeks starting the following wednesday
19:03:24 That means we will need a volunteer who isn't Fungi or myself to run the infra meeting.
19:03:37 for my first actual vacation in i can't remember how long. a couple years anyway
19:04:36 #topic Actions from last meeting
19:04:47 #link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-06-26-19.01.txt Minutes from last meeting
19:04:49 i will also be on PTO a few days next week, including my wednesday when the meeting is, sorry
19:05:01 ianw: no worries
19:05:17 We had a collective action to review mordred's spec
19:05:24 why don't we just jump ahead to that
19:05:30 #topic Specs approval
19:05:39 #link https://review.openstack.org/#/c/565550/ config mgmt and containers
19:05:49 o/
19:05:52 * mordred wrote some words
19:05:56 I've reviewed it. I personally like the incremental approach it describes
19:06:19 corvus: I updated it a little since you last reviewed/voted
19:06:40 have others had a chance to review it as well? and if so how do we feel about continuing to push on it? is it ready to be put up for approval?
19:06:43 mordred: oh cool -- enough i should re-review?
19:06:44 https://review.openstack.org/#/c/449933/ is still there though it's pretty close to done at this point
19:06:55 corvus: it should be a quick re-review
19:07:02 cool, will do today
19:07:58 mordred: if your spec doesn't have a "this is how to puppet 4" link in it yet (I don't recall) we should add one to ^
19:08:10 and probably merge those together
19:08:15 I *think* I copied in the relevant text from the puppet4 spec
19:08:51 ah ok so truly a merged/converged spec. cmurphy are you comfortable with tracking that work as part of the larger-scope spec?
19:08:53 it definitely has the work items - and cmurphy as a co-author
19:09:03 oh sure, I will look at it
19:09:13 I didn't change any of cmurphy's words
19:09:31 or, at least not many
19:10:49 As a rough poll, how many have managed to review it? I think corvus and myself, and mordred as author, are who I know about so far. Do we need more time to review it? <- cmurphy fungi ianw. pabelanger this might be one you'd have some interest in looking at as well?
19:11:12 i, er, did pull it up to start reading a couple days ago but...
19:11:18 I haven't looked yet, will do so
19:11:54 ok I don't think we are ready to put it up for approval this week then. Considering the amount of travel and time off between now and next week, do we want to aim for the week after? that should give us plenty of time
19:11:57 it's the most recent non-emergency review in my gertty stack
19:12:15 wfm
19:12:20 July 17
19:12:30 also - feel free to poke me with questions or whatnot
19:13:05 mordred: considering we have a zuul spec suggesting the use of k8s, do we want to reconsider using docker for that (just for merging of brain space)
19:13:37 Shrews: the spec as is is largely a precursor to being able to use k8s
19:13:42 by getting the image management done first
19:14:13 (I think the zuul work is compatible with this spec given that)
19:14:52 yes, this is true - there's also some words in the more recent version that talk about cri-o+k8s briefly and on docker as a temporary expedience
19:15:00 nod. not being familiar with k8s, wasn't sure if there was overlap
19:15:07 mordred sometimes calls k8s a 'container orchestration engine (coe)' in the spec and suggests that we could potentially use that later, after what clarkb says
19:15:34 (so, sometimes it helps to read coe as k8s)
19:16:00 (or openshift or...)
19:16:26 shiftynetes
19:17:14 I'll send out an email to the infra list asking for reviews as well, with a plan to put it up for approval on the 17th (two weeks from today)
19:17:41 any other questions or feedback for mordred before we move on?
19:18:00 also - if anyone gets bored ...
19:18:35 grab the openstack/pbrx repo and try the 'pbrx build-images' command out
19:19:08 it currently would like some of the bindep depends marked with a 'build' tag (since install, build and test are all potentially different things)
19:19:09 mordred: do the repos need to be preconfigured to make that work? or should it work on any pbr-using setup.py project?
19:19:20 it should work for any pbr+bindep repo
19:19:50 with some caveats about maybe some extra tags in bindep.txt - or we might want to consider what we're communicating via bindep ...
19:20:12 but I'd love feedback from folks on that - so we can maybe write up some docs on how to use it, etc
19:20:24 or that it's a stupid idea and we should do something completely different
19:20:45 can we make zuul jobs that use it to build zuul images yet?
19:20:51 sure
19:21:08 cool, that might be fun to play with, and is a start on the spec work too
19:21:09 I mean - those jobs would likely want to install it from source at the moment
19:21:11 semi-related to that, there is a bindep change up to add alpine support
19:21:28 swift is pushing on that, but could be useful in the combination of docker images + pbr + bindep
19:21:57 cool. that would allow us to use the python:slim base image instead of the python base image
19:22:30 #link https://review.openstack.org/#/c/579056/ alpine support in bindep
19:22:38 I've reviewed an older patchset and should re-review it
19:22:58 same here
19:23:09 alpine's package manager is called apk? that's not confusing at all.
19:23:14 corvus: yes and yes
19:23:25 s/python:slim/python:alpine/
19:23:26 apk is going to be my new apt typo
19:23:46 python:slim is also fine - and is debian-based
19:23:51 it's the default right now
19:24:04 we don't have alpine images in nodepool do we
19:24:05 ?
19:24:12 corvus: correct
19:24:19 not to my knowledge - but we don't need them if we make alpine container images :)
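For reference, a minimal bindep.txt sketch of the tagging idea discussed above is included here. The 'build' profile name and the platform:apk selector are assumptions based on the pbrx discussion and the alpine change under review, not necessarily merged bindep behavior, and the package names are examples only:

    # illustrative bindep.txt entries (sketch only; the 'build' profile and
    # the platform:apk selector are assumptions from the discussion above)
    gcc [build platform:dpkg platform:rpm]
    libffi-dev [build platform:dpkg]
    libffi-devel [build platform:rpm]
    libffi-dev [build platform:apk]
    python3-dev [build platform:dpkg]

In this sketch a line only matches when the 'build' profile is requested and the node's platform matches one of the platform: selectors, which is what would let a tool like pbrx install build-only dependencies separately from install and test ones.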
19:25:03 in any case reviewers should check that the version parsing in particular is correct (I'm not convinced yet and will focus on that myself)
19:25:11 do we want to make landing that contingent on some kind of real testing? (i'm really asking: what is/should be/ our testing requirements for bindep platforms?)
19:25:27 corvus: that's an excellent question
19:25:47 corvus: we've already accepted support for platforms like Arch without "real" testing
19:26:02 the unit test framework is fairly robust as long as you have the package manager output correct in the mocks
19:26:04 otoh, i recall lots of gentoo effort going into real testing
19:26:12 and macports or brew or something too, right?
19:26:16 fungi: yes
19:26:35 also we've switched to the distro library for identifying distros which means we have to worry less about that
19:26:35 ok. so the policy is "get the mocks right". wfm.
19:27:00 so strictly speaking we haven't insisted on automatable testing for every platform implemented, just the ones where we already had those platforms available in nodepool
19:27:16 (i mean nodepool's policy is mostly "get the fakes right", with a few key things real-tested)
19:27:31 certainly if we _did_ get alpine-based vm or container images, we'd want to add a bindep job
19:28:26 Last call for spec-related items
19:28:36 also - if we make a zuul job to build alpine-based zuul container images using pbrx - we could totally run that job on bindep changes too :)
19:29:50 #topic Priority Efforts
19:30:27 The puppet 4 work is related to the previously mentioned spec and so will get added as a priority effort soon (I expect). Considering that, it would be helpful if others can review topic:puppet-4
19:30:49 I'm happy to be the babysitter of those changes if we can get reviews. Feel free to ping me if you are +2 but can't watch it
19:31:47 fungi: Any storyboard items for today?
19:32:32 #link https://review.openstack.org/#/q/topic:puppet-4+status:open puppet 4 migration changes
19:32:43 oh, yep, sorry, a friend was just dropping off the pork belly for tomorrow
19:33:39 looks like we ended up not having a sb meeting last week
19:34:10 fungi: good friend
19:34:17 Now I want pork belly
19:34:36 not much to report, no. fatema (our outreachy intern) could use some help with a sql migration according to channel scrollback
19:34:55 i'm also still poking at the decoding issue with slashes in project names for the name-based project pages
19:35:04 not having much luck debugging the api though
19:35:27 and we had a few wishlist items crop up
19:35:31 that's pretty much it
19:35:57 ok, thanks. Looks like dhellmann was trying to help with the sql migration (thank you!)
19:36:11 #topic General topics
19:37:03 As a followup to winterscale naming: I've been asked if the foundation can take a week or two to get the creative team's input on what we have on the etherpad.
19:37:29 I expect that mid-July will be when we start to push on that again (all the things in mid-July I guess)
19:37:46 also starting to get internal questions from foundation staff on what sorts of things are going to have the option of being whitelabeled
19:38:37 questions ultimately coming from other new pilot project communities via foundation staff liaisons (who are encouraging those communities to get in touch with the infra team more directly if they have actual questions)
19:39:27 they've been getting pushed to e-mailing actual community infrastructure questions to the openstack-infra@l.o.o ml
19:39:39 so i've been trying to get a little more diligent about watching the moderation queue
19:39:45 fungi: thanks.
19:39:54 but if anyone else is interested in pitching in on that, please let me know
19:40:15 re whitelabeling, maybe we should draft up a listing of services and their provisional future naming state (I'm happy to put up a first draft list and we can refine from there)
19:40:28 will probably be useful for us as well as our users to get a rough idea of what goes where
19:40:54 yeah, and i think that's a relevant question for existing services/communities too.
19:40:54 that sounds good, i expect most of those would be entries for future specs
19:41:27 i'm very keen to talk more about what our gerrit offering looks like :)
19:41:54 like, an example which came up earlier today, whitelabeled eavesdrop vhosts filtered to community-specific channel and meeting lists. probably doable but we'd want a spec for that and may also want to make it dependent on the irc bot refactor spec
19:42:05 i'm just not sure we're quite at the point of having gerrit be a productive conversation yet
19:42:54 and ideally, for some of these services communities want whitelabeled, they'd step forward with people to do the bulk of that work
19:43:10 right. that, like gerrit, is also very difficult to talk about without knowing what winterscale looks like exactly -- do we still care about whitelabeling so much if we have a better name?
19:43:26 exactly
19:43:32 (or do we not care so much about a better name because everything will be whitelabeled?)
19:43:34 in general I've been assuming we don't
19:43:34 a more timely example might be mailman3
19:44:29 it looks like it has built-in support for whitelabeling the listserv (mailman3) itself and the archiver (hyperkitty), but not the management webui (postorius). it may be that non-whitelabeled postorius is just fine on a neutral domain name
19:44:30 in part because I'm selfish and it is less work, but also because people seem ok with services like bitbucket and github operating in a similar manner
19:46:19 yah - mailing lists seem to be where I'd expect more people to have whitelabel interest
19:46:20 I'll try to work up a rough draft, maybe one that we could declare as an ideal with the caveat that reality may force changes
19:46:23 once we have a name
19:46:40 thanks clarkb!
19:47:05 As an update on packethost: we've been told that bad cables have been replaced. John is working to get the MTU situation sorted out as well before we turn it back on again
19:47:17 Hopefully we can run jobs there soon
19:47:30 and vexxhost is back in the pool again, right?
19:47:46 yes, vexxhost ipv6 issues got sorted out and I turned it back on again late yesterday
19:47:55 excellent
19:48:30 #topic project renames
19:49:38 Really quickly before we run out of time: we have had a couple of project renames floated by us. I'm in no position to drive that for a while (just with travel and other commitments). For some reason I'm not finding our release calendar
19:50:02 there it is, https://releases.openstack.org/rocky/schedule.html
19:50:05 https://releases.openstack.org/rocky/schedule.html
19:50:37 End of this month seems like a possibility
19:51:14 aw can't we do it mid-july like everything else? :)
19:51:21 boo :P
19:51:37 let's see where we are on the 17th and maybe we can do it end of july?
19:52:13 that would be cool
19:52:24 i know the api sig has been waiting a little while
19:52:55 #topic Open Discussion
19:52:56 and i think tripleo-ci is the only remaining non-infra-team repo in the openstack-infra namespace once release-tools retirement completes
19:53:10 #undo
19:53:11 Removing item from minutes: #topic Open Discussion
19:53:13 Sorry
19:53:22 that was all i had. no worries
19:53:33 fungi: also tripleo-ci bumps the renames up to >1 which makes it less painful
19:53:39 yep
19:53:47 Anyway, let's talk about it once we get past the first half of this month
19:53:55 #topic Open Discussion
19:53:59 And now, anything else?
19:54:26 i need to go start soaking a bunch of applewood for tomorrow morning
19:55:09 I've got to clean out my smoker
19:55:32 brisket?
19:55:36 ribs
19:55:40 also going to grill burgers
19:55:42 yessss
19:55:51 i should do some ribs soonish
19:56:58 seems like that may be it. Enjoy the holiday if it is a holiday in your part of the world. Thanks everyone
19:57:00 #endmeeting