19:03:46 <fungi> #startmeeting infra
19:03:47 <openstack> Meeting started Tue Jun 27 19:03:46 2017 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:50 <openstack> The meeting name has been set to 'infra'
19:03:57 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:59 <fungi> #topic Announcements
19:04:04 <fungi> #info Don't forget to register for the PTG if you're planning to attend!
19:04:06 <fungi> #link https://www.openstack.org/ptg/ PTG September 11-15 in Denver, CO, USA
19:04:08 <fungi> as always, feel free to hit me up with announcements you want included in future meetings
19:04:14 <fungi> #topic Actions from last meeting
19:04:20 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-06-20-19.03.html Minutes from last meeting
19:04:25 <fungi> #action ianw abandon pholio spec and shut down pholio.openstack.org server
19:04:35 <ianw> oh, forgot to abandon the spec
19:04:44 <ianw> #link https://review.openstack.org/#/c/477386/
19:04:54 <ianw> review to remove it from system-config
19:04:59 <fungi> cool!
19:05:19 <fungi> and no worries. i just noticed one we agreed to move out of the priority efforts list to implemented that i forgot to clean up, so will have a patch up for that forthwith as well
19:05:53 <fungi> #topic Specs approval
19:05:57 <fungi> #info APPROVED "PTG Bot" spec
19:06:03 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/ptgbot.html "PTG Bot" spec
19:06:10 <fungi> #info APPROVED "Provide a translation check site for translators" spec
19:06:14 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/translation_check_site.html "Provide a translation check site for translators" spec
19:06:37 <fungi> #topic Priority Efforts - Gerrit 2.13 Upgrade: status update (clarkb)
19:07:14 <fungi> i know we discussed last week that we'd follow up this week
19:07:40 <clarkb> ya
19:07:47 <clarkb> so it turns out that 2.13 is a really weird release
19:08:06 <fungi> weird even as gerrit releases go
19:08:32 <jeblair> yowza!
19:08:37 <clarkb> and if you upgrade to a 2.13 release earlier than 2.13.8, you have to do db migrations by hand when you go to 2.13.8 (and presumably 2.14)
19:08:57 <clarkb> so we reverted the 2.13.7 upgrade and are working on a plan to go straight to 2.13.8
19:09:04 <fungi> migrations... between different db backends too
19:09:16 <clarkb> I have changes up under the gerrit-upgrade topic to get artifacts built
19:09:28 <clarkb> so reviews on those would be great
19:09:56 <clarkb> once we have artifacts I just need to update our process doc on etherpad and we can give it another go
19:10:01 <clarkb> maybe tomorrow even
19:10:38 <fungi> #link https://review.openstack.org/#/q/is:open+topic:gerrit-upgrade
19:11:37 <clarkb> also the new db can't have the same name as the old db or you lose your old db
19:11:48 <clarkb> that's a nice feature that only just got documented in the tip of the 2.13 branch
19:13:06 <clarkb> so that's the basic update. Things to review to make wars and plugin jars so that we can try 2.13.8
19:13:32 <fungi> "new" being the "accountPatchReview" database?
19:13:44 <clarkb> yes
19:14:39 <fungi> that's what i thought, just clarifying for those coming up to speed ;)
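For readers following along, a minimal sketch of the gerrit.config stanza under discussion, assuming the accountPatchReviewDb section added in the 2.13 series and a MySQL backend; the host, credentials, and database name below are placeholders, and per clarkb's warning the database named here must not collide with the main ReviewDb:

    [accountPatchReviewDb]
        ; separate database for per-file "reviewed" flags, split out of ReviewDb in 2.13
        url = jdbc:mysql://localhost:3306/accountpatchreviews?user=gerrit2&password=secret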
19:15:05 <fungi> okay, so you'll give it a shot on review-dev possibly as early as tomorrow?
19:15:23 <clarkb> yes if we can get those changes merged today then plugins will be built overnight allowing us to attempt upgrade tomorrow
19:15:38 <clarkb> I'd like another infra-root around just to bounce ideas off of especially considering the way the last one went
19:15:44 <clarkb> fungi: are you willing/able to do that again?
19:15:47 <fungi> you can count me in for that
19:16:14 <mordred> yah - same here
19:16:35 <fungi> worth noting, if we switch to building off the upstream stable branch tips rather than point release tags, we will be making non-fast-forward changes to our local branches (possibly orphaning earlier changes that add our local .gitreview et cetera) when we update that branch
19:17:15 <jeblair> consider me on standby for this
19:17:22 <fungi> awesome
19:17:47 <clarkb> sounds good, I can likely get going as early as 9am pdt tomorrow
19:18:24 <jeblair> fungi: we modify so little now, i'm not too worried about changes on the branches
19:18:53 <fungi> jeblair: that was my feeling on it as well, just wanted to make sure that (slight) change in workflow was known
19:19:05 <clarkb> ya it's mostly just the submodule change to make builds work
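To make fungi's point concrete, a rough sketch of what the non-fast-forward branch update would look like; the remote names (upstream, gerrit) and the stable-2.13 branch name are assumptions for illustration, not the actual repository layout:

    # move the local stable branch to the new upstream tip (non-fast-forward)
    git fetch upstream
    git checkout stable-2.13
    git reset --hard upstream/stable-2.13
    # re-apply local commits (.gitreview, submodule fix) on top, then force-push,
    # which orphans the earlier local history fungi mentions
    git push --force gerrit stable-2.13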
19:19:18 <fungi> anything else we should know about at this stage?
19:19:31 <clarkb> that is all I have
19:19:44 <clarkb> unless we want to talk about how much more fun 2.14 will be :)
19:19:45 <fungi> i know you had to abort before viability last week, so not much was learned about anything besides the upgrade process itself
19:20:24 <fungi> discussing 2.14 upgrades at this stage is probably premature
19:20:59 <fungi> unless we get far enough with 2.13 to decide that we really should be running 2.14 because of some currently unknown problem
19:21:11 <clarkb> ya I don't think we are anywhere near that point
19:21:47 <fungi> thanks clarkb! and thanks mordred and jeblair for volunteering to help put out fires if we cause any
19:22:11 <fungi> #topic Infra-cloud's future and call for volunteers (fungi)
19:23:15 <fungi> i've been trying to take point on the private three-way discussions with hpe and osu on the possibility of relocating the infra-cloud hardware
19:23:25 <fungi> but things have gotten more complicated
19:24:11 <fungi> particularly because, over the course of one or more of the previous data center moves, the server mounting rails and top of rack switches we originally had were "lost"
19:24:56 <fungi> and it's starting to look like if we want to continue running infra-cloud, we'll need to buy some
19:25:22 <fungi> i don't personally have the bandwidth to research, spec and price out the necessary additional gear
19:25:43 <fungi> so i'm looking for volunteers to pick up this project
19:25:51 <clarkb> fungi: and that is because osuosl isn't going to have switches we can plug into (I'm guessing that's a capacity problem?) and rails are always hard to come by, it seems?
19:26:04 <fungi> yeah, pretty much
19:26:10 <clarkb> fungi: is hpe wanting to turn them off in the current shelf/sit on floor/daisy chained to router setup?
19:26:22 <jeblair> they're... what, stacked on top of each other now?
19:26:31 <clarkb> jeblair: I'm assuming based on lack of rails
19:26:44 <fungi> we'll need fairly high-speed interconnections between the servers so basic connectivity is going to be insufficient
19:27:00 <fungi> jeblair: correct, they are stacked directly on one-another
19:27:06 <fungi> according to what we've been told
19:28:32 <jeblair> "neat"
19:28:35 <fungi> i can still continue acting as go-between on the foundation budget/funding/contracts part
19:28:46 <ianw> i can try reaching out to some people internally at RH who might have recent experience with this
19:29:25 <fungi> but i'm wondering whether it makes sense to try to move forward with the current hardware if we're going to need a significant capital investment in rails and switch gear
19:29:30 <ianw> personally i know not very much about rack/rail solutions, but if we can find someone from RDO cloud ....
19:30:35 <jeblair> fungi: do we have a hardware manifest somewhere?
19:31:46 <fungi> i haven't seen one that i can recall. i believe most of the hardware model info we have was worked out by rcarrillocruz, yolanda or cmurphy remotely
19:32:07 <jeblair> https://docs.openstack.org/infra/system-config/infra-cloud.html  does not look up to date....
19:32:43 <jeblair> there may be some stuff in bifrost files, but i don't know if we would have (accurate?) chassis information if all of that was collected remotely
19:32:55 <fungi> i can ask the data center manager who's been e-mailing our contact at osu if they can do an audit of the hardware that's in place
19:33:18 <clarkb> we can gather that via ilo at least
19:33:25 <clarkb> it may be slow but should work
19:33:46 <fungi> and possibly incomplete for any where the ilo fails to respond
19:33:52 <fungi> but better than nothing
19:34:35 <jeblair> i reckon that's step one if we want to find rails
19:35:01 <jeblair> personally, i'd ask the dc manager.  i'd prefer that to trusting ilo.
19:35:06 <fungi> it'll also only take me a few minutes to ask them to confirm one last time that there really are no mounting rails attached to the servers (maybe we've misinterpreted what they were saying about that) and get us a count of switchports/speeds
19:35:35 <jeblair> (or, you know, both.  trust but verify.  :)
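As an illustration of the iLO route clarkb mentions, a rough sketch of pulling chassis and model details over the iLO's IPMI interface with ipmitool; the hostname and credentials are placeholders, and HP's own iLO web/scripting tools could be used instead:

    # FRU data includes product name, part and serial numbers for chassis and boards
    ipmitool -I lanplus -H ilo-node01.example.org -U admin -P secret fru print
    # quick check that the iLO is actually answering
    ipmitool -I lanplus -H ilo-node01.example.org -U admin -P secret chassis status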
19:36:01 <fungi> after that, i can start a thread on the infra ml about this i guess and see if we get any good suggestions/offers
19:37:16 <fungi> #action fungi start a ml thread about the infra-cloud rails and switching situation
19:37:40 <fungi> we can continue there, hopefully reaching a wider audience
19:37:47 <jeblair> fungi: can you go ahead and ask the dc op too?
19:38:06 <fungi> jeblair: yeah, i figured i was going to do that first
19:38:10 <jeblair> cool
19:38:45 <fungi> #action fungi get details on current server models, presence of rails and switchport counts for infra-cloud
19:39:21 <fungi> #topic Should we skip our next meeting? (fungi)
19:39:39 <clarkb> I will not be here.
19:39:43 <jeblair> yes
19:39:54 <fungi> in reading yesterday's zuul meeting minutes to make sure i hadn't missed anything important while i was out, i was reminded that it will be a holiday in the usa
19:40:10 <fungi> so if people want an infra meeting next week, we'll need a volunteer chair
19:40:36 <fungi> because my wife will probably hound me to stay off the computer
19:40:48 <jeblair> (i'm really proud that i actually remembered to bring that up in meeting for once)
19:41:06 <fungi> (and just look at the example it set!)
19:41:28 <ianw> i could ... but i think it's not worth it
19:41:54 <fungi> thanks ianw. not hearing any objections, let's go ahead and cancel next week
19:42:52 <fungi> #agreed The Infra team meeting for July 4 will be skipped due to a major holiday for many attendees; next official meeting will take place on Tuesday, July 11.
19:43:29 <fungi> #topic Open discussion
19:43:57 <clarkb> I'm fairly confident dns over ipv4 was the vast majority of our dns problems
19:44:19 <clarkb> the leftovers appear related to how various jobs deploy software
19:44:31 <clarkb> eg docker using 8.8.8.8 in osic anyways
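For reference, a minimal sketch of pointing the Docker daemon at cloud-local resolvers instead of its 8.8.8.8 fallback, assuming /etc/docker/daemon.json is in use; the resolver addresses are placeholders for whatever the provider hands out, and dockerd needs a restart afterwards:

    {
        "dns": ["203.0.113.53", "203.0.113.54"]
    }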
19:45:08 <fungi> i spotted this on the general ml but haven't had time to look into it:
19:45:09 <ianw> ++ my testing has stabilised greatly
19:45:10 <fungi> #link http://lists.openstack.org/pipermail/openstack/2017-June/045095.html git-review removing /gerrit
19:45:14 <fungi> would be really awesome if someone has time to jump in on that since it's another case of our tools being used outside the openstack community
19:45:35 <ianw> #link https://review.openstack.org/#/c/477736/
19:45:47 <cmurphy> new batch of beaker fixes here, mordred already +2'd all of them https://review.openstack.org/#/q/project:%22%255Eopenstack-infra.*%2524%22+topic:fix-beaker
19:46:12 <ianw> there's been some stretch talk ... patches for nodes etc.  does anyone mind if i do 477736 and i imagine we need a manual run to get the first sync down
19:46:35 <fungi> ianw: seems fine to me
19:46:49 <clarkb> ianw: ++ also check volume quota/size/space first
19:47:03 <ianw> will do
19:48:05 <fungi> thanks for adding all those cmurphy
19:48:27 <fungi> i also approved the change to make the beaker jobs voting a few hours ago
19:48:37 <clarkb> ianw: I think someone said we might need to add the stretch release gpg key too
19:48:42 <cmurphy> fungi: awesome
19:49:11 <fungi> clarkb: yes, the key should be added but it's not actually in use yet afaik
19:49:31 <fungi> the "stretch release key" is so named because it may start being used following the stretch release
19:49:43 <clarkb> ah
19:49:53 <fungi> (sort of backward from how we name our signing keys)
19:52:59 <fungi> jhesketh: not sure if you're awake yet, but are you still free to process the requested stable/mitaka eol? that would take one more thing off my plate
19:53:11 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118473.html Tagging mitaka as EOL
19:54:06 <fungi> i'll follow up to your reply on the ml
19:54:59 <fungi> clarkb: do you (or anybody else) know the status on gerrit review tags? ttx was asking in this thread:
19:55:01 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118960.html Turning TC/UC workgroups into OpenStack SIGs
19:55:38 <fungi> i vaguely recall earlier discussions suggesting that was waiting for their notedb implementation
19:55:48 <fungi> so not sure if 2.13 gets us far enough for that
19:56:10 <clarkb> fungi: I tried to find mention of them in changelog and docs and found nothing
19:56:21 <clarkb> I wonder if they just stalled out
19:56:59 <fungi> what was the word they actually used for those (as "tags" would be ambiguous with git tags)
19:57:02 <fungi> ?
19:57:53 <jeblair> hashtag was used at one point
19:57:55 <clarkb> I dont recall
19:58:13 <fungi> thanks!
19:58:24 <fungi> #link https://bugs.chromium.org/p/gerrit/issues/detail?id=287 arbitrary labels/tags on changes
19:59:58 <fungi> i'll follow up to the ml after reading whatever i can find, if nobody beats me to it
20:00:00 <jeblair> erm
20:00:03 <jeblair> https://review.openstack.org/Documentation/config-hooks.html#_hashtags_changed
20:00:08 <jeblair> my google bubble pointed me at that
20:00:27 <fungi> strange indeed!
20:00:29 <clarkb> looks like you have to enable notedb
20:00:31 <fungi> and we're out of time
20:00:38 <fungi> thanks everyone!
20:00:40 <clarkb> which last I read is not stable
20:00:45 <fungi> #endmeeting