19:02:38 #startmeeting infra
19:02:39 Meeting started Tue Jan 19 19:02:38 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:42 The meeting name has been set to 'infra'
19:02:57 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:00 o/
19:03:01 o/
19:03:06 #topic Announcements
19:03:47 #info Volunteers sought to present Upstream Development track at the Newton summit in Austin, TX
19:03:58 #link https://www.openstack.org/summit/austin-2016/call-for-speakers/
19:04:08 #link https://etherpad.openstack.org/p/austin-upstream-dev-track-ideas
19:04:32 pabelanger (who isn't in here?) has already volunteered, but let me know if you want to do something for this
19:04:40 o/
19:04:51 o/
19:05:17 #info Delegate sought for Cross-Project Specs Liaison
19:05:35 fungi, if there's a topic in my area, I'm happy to (co)present
19:06:01 o/
19:06:06 basically we need someone reviewing the cross-project specs regularly and spotting when there are infra-specific items in them
19:06:29 i'm on the hook for it right now, but if anyone's interested in taking that on instead, get up with me later
19:06:51 #link https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons
19:07:06 any other important announcements not covered in meeting topics on the agenda?
19:07:21 going once... twice...
19:07:35 fungi: looks like there was an ml thread about the cpl?
19:07:35 #topic Actions from last meeting
19:07:44 jeblair: yeah, there was
19:08:08 there were no action items last week, good for us!
19:08:20 #topic Specs approval
19:08:21 \o/
19:09:01 we have two proposed for council vote
19:09:06 ( http://lists.openstack.org/pipermail/openstack-dev/2016-January/084136.html for those who are curious)
19:09:20 PROPOSED: Move docs.o.o/releases to releases.o.o
19:09:28 #link https://review.openstack.org/266506
19:09:38 PROPOSED: Make translation set up consistent
19:09:47 #link https://review.openstack.org/262545
19:09:55 I need a bit of guidance on the releases.o.o one, but I think anteaya has explained one of my questions. The other question is how to handle the redirects.
19:10:04 anyone object to setting the council vote deadline on these to thursday at 19:00?
19:10:15 dhellmann: I can help with redirects - those are done in openstack-manuals
19:10:21 AJaeger : excellent, thanks
19:10:22 o/ (sorta)
19:10:26 I has no objection
19:10:33 english hard
19:10:43 fwiw I have long been an unfan of the proliferation of doc sites, but I also accept that we have decided to do it for various reasons. Might be good to keep track of the number of times we go back and forth and have to reconfigure apache and dns for this stuff to quantify why it is painful
19:10:45 dhellmann: oh, is it still being tweaked, or are you ready for it to go up for approval?
19:11:01 fungi : I am ready to go, I will just need help with the implementation
19:11:04 fungi, AJaeger, dhellmann: no objection here -- i take it from AJaeger's participation in the conversation that this won't come as a surprise for the docs folks?
19:11:10 fungi : several of the patches to implement it are already up for review
19:11:31 jeblair: I'm not sure - let me tell them...
19:11:44 jeblair : I didn't consult with them, but they don't have much to do with that repo so I wasn't worried about it. I'll make sure they know the new URL.
19:11:46 jeblair: you mean dhellmann's change, correct?
19:11:51 AJaeger, fungi: maybe we should get a docs ptl +1 on that?
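[Editor's note for readers of the log: the redirect work AJaeger offers to help with above is, in rough terms, an Apache rule of the kind sketched below. This is a hedged illustration only; the regex, status code, and target hostname are assumptions, and the actual rules live in the openstack-manuals repo.]

```apache
# Hypothetical sketch: permanently redirect the old docs.openstack.org/releases
# pages to the new releases.openstack.org site, preserving any sub-path.
# The real openstack-manuals rules may differ in form and detail.
RedirectMatch 301 ^/releases(/.*)?$ http://releases.openstack.org$1
```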
19:11:52 AJaeger: yep
19:12:02 for translation consistency big +1
19:12:04 loquacities: ^
19:12:09 it is something we need to do just haven't found time for it
19:12:11 i agree
19:12:15 loquacities: specifically https://review.openstack.org/266506
19:12:17 jeblair : This isn't a docs repo. It's not for them to say.
19:12:43 clarkb, fungi: I hope I covered everything for translations...
19:12:43 dhellmann: i mostly don't want to yank content from docs.o.o without them at least knowing about it
19:12:46 oh, i thought we were talking about the translation consistency spec
19:12:51 dhellmann: honestly I think it's better for SEO and for people to understand where to fix stuff, but the "not for them to say" is a bit much :)
19:13:04 jeblair: yes, definitely let loquacities know
19:13:05 jeblair : sure. I'll make sure they're aware of the new URL
19:13:07 fungi: the translation one does not handle docs at all. That's a followup ;)
19:13:17 but yeah, i guess giving the docs team a heads up that the release details are moving to a separate site is reasonable
19:13:26 fungi: +1
19:13:28 AJaeger: oh, right--thanks
19:13:32 translation of docs works fine, it's the many python and django projects that cause trouble
19:13:56 if they -1 then we can have the territorial dispute, but if they +1 we can dodge the issue. :)
19:14:23 okay, i'll put them up for council vote and see if we can get loquacities to weigh in on the releases.o.o spec as a courtesy before then
19:14:58 #info Voting is open on "Move docs.o.o/releases to releases.o.o" spec until 19:00 UTC Thursday, January 21
19:15:04 fungi: I'm writing an email to docs team
19:15:15 #info Voting is open on "Make translation set up consistent" spec until 19:00 UTC Thursday, January 21
19:15:32 thanks AJaeger
19:15:48 anything else we need to handle regarding specs voting? if not, we're at 44 minutes left
19:16:36 #topic Priority Efforts: Ansible Puppet Apply
19:16:43 we're ready
19:16:49 note here says "Ready to go live"
19:16:49 * jeblair boggles!
19:16:58 all of the changes that are not the actual puppet apply go live patch have landed
19:16:58 \o/
19:17:10 I have verified that we're copying the hiera files and the puppet files appropriately
19:17:14 great! so we're on track to wrap that up later this week?
19:17:20 (and in fact ran puppet apply by hand on git01.openstack.org)
19:17:25 or this afternoon? :)
19:17:26 fungi: I was thinking this afternoon actually
19:17:36 (or before the meeting is over?)
19:17:41 heh, cool
19:17:51 and congrats to all involved!
19:17:56 this was a biggie
19:18:00 huzzah!
19:18:02 neat
19:18:15 mordred: and you said puppetdb posting was functional ya?
19:18:25 someone also needs to propose a patch to infra-specs to do the cleanup (move it to implemented, take it out of the priority efforts list and query/url)
19:18:30 nibalizer: I will verify that before landing the change
19:18:33 nibalizer: as of just before the meeting, yep
19:18:39 nibalizer: oh, nevermind
19:18:40 so cool
19:18:44 nibalizer: but I have all the pieces in place to be able to verify that and be satisfied with the verification
19:18:48 you meant still functional once we switch to applying
19:18:55 yes
19:19:20 and what's the change for go-live, just for the record?
19:19:36 someone #link us a url for that
19:19:38 landing the go live patch - it should be mostly unnoticeable
19:19:54 #link https://review.openstack.org/249492
19:19:58 appreciated
19:20:03 the docs on the new system are in that patch
19:20:04 mordred: is there a doc change for explaining how to run puppet apply locally?
19:20:07 so folks might want to review it
19:20:09 yes
19:20:13 mordred: it's not complicated and I did it myself at one point but oh good
19:20:21 what a cool number
19:20:28 although luckily the answer is "puppet apply /opt/system-config/production/manifests/site.pp"
19:20:39 so it's not bad
19:20:45 okay, anything else we need to cover before moving on to swift logs?
19:20:48 ya I did it at one point testing things
19:20:48 it even works on git*.o.o where we have the project-config sha fact
19:21:04 I also included docs on how to run puppet manually from the puppetmaster via ansible
19:21:14 in case you want to get hiera things copied and modules updated and whatnot
19:22:11 ansible-playbook --limit "$host;localhost" /opt/system-config/production/playbooks/remote_puppet_allyaml ... in case anybody was curious
19:22:23 s/allyaml/all.yaml/
19:22:24 #link http://docs.openstack.org/infra/system-config/puppet.html
19:22:31 is where the bulk of that documentation is winding up
19:22:40 for the benefit of those reading the meeting log
19:23:23 #topic Priority Efforts: Store Build Logs in Swift
19:23:30 Reviews still needed
19:23:35 mordred: 249492 is a beautiful change! mostly docs and like 3 lines of code
19:23:40 #link https://review.openstack.org/#/q/status:open+topic:enable_swift
19:23:44 I reviewed the zuul changes which need to go in before the config changes can happen
19:23:57 so I think the zuul changes would be the priority
19:24:06 jeblair: statistically I've got to at least SOMETIME produce one of those, right?
19:24:14 looks like some of those should be ready to go in
19:24:34 unless anyone else spots issues in there
19:25:02 otherwise looks like same situation we were in last week, so moving on
19:25:05 er
19:25:14 what's the problem that 262112 is trying to solve?
19:25:17 or not moving on yet
19:25:39 jeblair: we need to write to two containers with the second having a different root
19:25:48 jeblair: so that we can build a file system directory
19:26:09 clarkb: are we not having the log server do that for us?
19:26:15 jeblair: no
19:26:28 clarkb: i've clearly missed something. how can i catch up?
19:26:31 jeblair: because that requires some method to write back to the server from unprivileged slaves
19:26:34 does the spec update clarify why we aren't putting the filesystem metadata on the same container?
19:26:37 jeblair: the spec with that topic should catch it up
19:26:45 fungi: in a different container and yes
19:27:16 clarkb: I think my change would allow us to do that, have a different container root https://review.openstack.org/#/c/229582/
19:27:37 okay. i will read the spec update. in the mean time, i'm not sure i'm on board.
19:28:10 ah, got it, so it's the out-of-tree requirement driving having it on a different container
19:28:38 this does, i think, mean that any malicious change could in theory trash the filesystem metadata, right?
19:29:01 fungi: yes, though in theory it's hashed so semi difficult to trash particular things
19:29:07 the spec calls that out iirc
19:29:13 I read that
19:29:27 in the spec
19:29:28 right, now i see the security section there
19:29:36 honestly I am beginning to think we shouldn't use swift at all
19:29:47 i also feel like this is a substantial enough change/setback that we should at least reconsider the alternatives of "a) keep using the big log server" and "b) use afs"
19:30:05 jeblair: or C) something else
19:30:07 I agree
19:30:10 clarkb: ya
19:30:13 so the worker won't actually have access granted to upload anything into the metadata container?
19:30:25 (e.g. to overwrite files in it)
19:30:50 fungi: it will have enough access to write the metadata object, I am not sure if the acls allow us to prevent writing to other metadata objects
19:31:02 that's what i was concerned about
19:31:06 clarkb: also a+b: keep jenkins scping to the big log server but have it actually backed with afs
19:31:16 it seems that uploading logs to a thing that has a file structure is useful
19:31:32 right so the underlying issue here is we actually find the posix filesystem to be useful
19:31:43 swift doesn't provide that so we either have to build it ourselves or not use swift
19:32:05 and the build it ourselves has a security concern
19:32:35 and the answer from swift developers (which is entirely reasonable from their perspective) is that if our application is dependent on posix filesystem implementation details/features then we should redesign that application to need something different
19:33:20 though i'm not sure that's possible if you consider the use case to be presenting a filesystem-like browsing experience to our end users
19:33:40 o/
19:33:43 it basically means we should tell the consumers of our log data that their expectations are wrong
19:33:59 that makes people sad
19:34:05 both the teller and the listener
19:34:09 i am sad
19:34:26 so anyway, i guess let's get this additional feedback into the proposed changes
19:34:37 and see if there's a compromise to be had
19:35:07 #topic Priority Efforts: maniphest migration
19:35:24 craige wanted some input on direction, according to the agenda
19:35:49 I did.
19:35:49 craige: any details on what those questions were?
19:35:55 (or where?)
19:37:04 Not at present.
19:37:35 oh
19:37:37 * anteaya finds it hard to provide input
19:37:40 I'll have to punt them to -infra when I'm more awake.
19:37:50 it does. Sorry.
19:38:13 craige: okay, please do. it's entirely reasonable to ask questions in #openstack-infra when they occur to you rather than waiting for the weekly meeting
19:38:31 #topic puppetlabs-apache migration (pabelanger)
19:38:41 #link https://review.openstack.org/205596
19:38:53 ohau
19:38:55 hi*
19:39:03 pabelanger: what's the status and/or blocker on this work?
19:39:11 * anteaya likes ohau as a greeting
19:39:20 * jeblair likes oahu
19:39:25 so, I wanted to see if people are still interested in the puppetlabs-apache migration? I've had nodepool up for a month or so, but have no feedback
19:39:44 so, before going crazy with patches, want to see if I can have some puppet core ready for reviews
19:40:00 and make sure people are happy with apache::vhost::custom patch
19:40:58 maybe we need to ask when nibalizer is more around?
19:41:13 as a reminder, this should finally get us off our puppet-http fork
19:41:49 pabelanger: maybe start by getting the apache module update change in?
19:41:58 pabelanger: it is currently marked WIP so not likely to get much attention
19:42:06 oh it merged
19:42:10 with WIP...
19:42:18 ha!
19:42:21 Ya, not sure why that merged honestly :)
19:42:24 but, it is in
19:42:40 well then ya probably best to start with a service and make sure it is happy then move on
19:42:55 okay, so apache::vhost::custom is a working thing now. and looks from the example change like we don't lose our current capabilities
19:43:18 thanks for driving that to a successful conclusion upstream!
19:43:24 Ya, I can take this offline, like I say, just wouldn't mind puppet-core to confirm they are happy and I can get started on others
19:43:50 I think my biggest concern at this point would be making sure the module code works across platforms as we roll out
19:43:59 since we tripped over the apache mod stuff before
19:44:22 Ya, puppetlabs-apache has some good test coverage for other OSs.
19:44:26 and beaker jobs upstream
19:45:08 anything else on this?
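[Editor's note for readers of the log: the apache::vhost::custom pattern discussed above lets a puppet manifest hand the module a raw vhost body instead of the structured apache::vhost parameters. The sketch below is a hedged illustration only; the resource title and template path are invented, and the real usage is in the example change linked from 205596.]

```puppet
# Hypothetical sketch of puppetlabs-apache's apache::vhost::custom define.
# The title becomes the vhost file name; 'content' supplies the full vhost
# body. Both the title and the template path here are made-up examples.
apache::vhost::custom { 'example.openstack.org':
  content => template('example_module/example.vhost.erb'),
}
```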
19:45:19 none here
19:45:21 i apparently need to run and sign for a package at the door real fast
19:45:26 #topic gerritlib release for jeepyb to set gerrit project descriptions (yolanda, zaro)
19:45:58 #link https://review.openstack.org/#/c/110009/
19:46:03 jeepyb feature to set project descriptions needs a gerritlib release
19:46:11 to work.
19:46:17 can someone do that?
19:46:46 there hasn't been a release in a while.
19:47:04 sorry about that, back now
19:47:17 https://review.openstack.org/#/admin/groups/730,members
19:47:47 i can do that, but i haven't been watching gerritlib and have no idea what's in the new version
19:47:52 has anyone looked at what the divergence is from the last tag (number of significant changes) yet?
19:48:00 er, what jeblair just said
19:48:19 zaro: are you relatively confident it's ready for release, won't cause major problems, etc? also, are there any backwards incompatible changes?
19:48:47 not atm. i would have to probably take another look.
19:48:57 I'm currently spamming the universe. For a CI I want to chop down the jobs section of zuul's layout.yaml file to just what I care about, right?
19:48:57 i have been doing reviews on changes though.
19:49:26 Swanson: please move to -infra
19:49:51 i can take a look and let you know on -infra.
19:49:52 skimming, maybe half a dozen notable changes in the log since 0.4.0
19:50:14 anteaya, Whups! Clicked the wrong tab.
19:51:08 zaro: i can go ahead and tag this afternoon. looks like it's probably a 0.5.0
19:51:36 #action fungi release gerritlib 0.5.0
19:51:44 fungi: cool
19:51:59 just get up with me first on whether the master branch tip testing is successful
19:52:16 and i'll hold off tagging until you confirm
19:52:48 the only openstack-infra thing i know that depends on gerritlib is jeepyb, does anything else depend on it?
19:52:49 #topic update on gerrit performance after adding memory. Anybody experiencing slowness with pushes to gerrit? (zaro)
19:53:20 zaro: we probably need a much longer discussion on this topic than the next 6 minutes will allow
19:53:40 there're lots of reports of 404s and 502s - and some of these cause slowness since git review will retry...
19:53:41 was there anything real quick you needed to bring to the team? or should we try to dig into it on #openstack-infra later?
19:54:11 my theory is as memory gets constrained we can't serve requests fast enough
19:54:13 we could probably spend half a meeting just talking about the performance issues at a high level
19:54:23 then apache decides the backend has gone away and 500s everyone for a minute
19:55:11 i want to save the last 5 minutes for pleia2's updates/reminders on the mid-cycle too
19:55:28 we have also experienced 502/503 after a service restart where gc isn't running
19:55:33 we can take this offline. i was just looking for feedback. put this on the schedule before yesterday's restart. so now i know we still have a big problem.
19:55:34 fungi: agreed
19:55:53 yeah, the one i saw yesterday right after restart seemed to coincide with a large outbound traffic spike
19:56:05 so maybe an inadvertent dos condition
19:56:17 zaro: yes, we still have performance issues
19:56:26 #topic Infra-cloud sprint (pleia2)
19:56:34 #link https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
19:56:42 so first, reminder that we want the final attendee count to HPE for catering planning Friday, January 29th (11 days from now)
19:56:58 so if you haven't signed up, please do :)
19:57:15 #info Deadline for attendee sign-up for the mid-cycle is Friday, January 29th (11 days from now)
19:57:20 HPE is also sending some infra-branded swag, so that'll be fun (t-shirts and stuff)
19:57:26 neat!
19:57:31 (thanks hpe!)
19:57:42 the last thing is hotels, I haven't booked yet but I don't want to end up at the hotel where no one else is since we'll need rides/cars/shuttles to the office in the morning
19:57:45 assuming I end up buying this house I may not end up going... would get keys just a couple days before sprint. Will keep things updated and will remove my name from list for catering purposes if that happens
19:57:46 wooo
19:58:00 has anyone booked? I added a hotel column to the sign up
19:58:10 oh - I have booked
19:58:10 pleia2: i was going to just pick the nearest hotel, but i haven't yet no
19:58:11 i'm at the courtyard
19:58:19 clarkb: would love to see you but also would love you to have a good house
19:58:25 but I'm not at any of those hotels
19:58:39 i booked at the hilton
19:58:41 but yeah, happy to stay at whichever is most popular
19:58:49 those hotels are within a rock's toss of each other
19:58:57 they basically share the same parking lot
19:59:03 mordred: where are you staying?
19:59:10 ok, can folks who book add their hotels to https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint#Registration ?
19:59:16 will help inform the rest of us ;)
19:59:19 jeblair: I'm staying at the Fort Collins Marriott
19:59:23 then we can also coordinate rides with each other
19:59:31 also - I'll only be there the last 2 days
19:59:44 yeah I don't post the hotel I'm staying at
19:59:50 anyone who wants to know can pm me
20:00:13 yep, i agree privacy in these matters is entirely understandable
20:00:13 anteaya: that's fine
20:00:14 (with the marriott/starwood merger, I'm falling back to marriott now when starwood is not a choice on the thinking that perhaps the merger will merge my accounts)
20:00:18 and we're out of time
20:00:21 thanks fungi
20:00:33 thank you
20:00:44 dimtruck: thomasem: we'll have to get you next time
20:00:54 /sadpanda
20:00:55 okey dokey
20:00:57 fungi: no worries! we'll ping on -infra
20:00:58 SergeyLukjanov: i didn't get to bring up the renames, but maybe we can talk in #openstack-infra later
20:01:00 Might just hit y'all up in infra
20:01:01 thanks
20:01:01 channel
20:01:04 @endmeeting
20:01:06 endmeeting
20:01:09 #endmeeting