19:03:22 <fungi> #startmeeting infra
19:03:23 <openstack> Meeting started Tue Feb 28 19:03:22 2017 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:26 <openstack> The meeting name has been set to 'infra'
19:03:31 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:41 <fungi> #topic Announcements
19:03:47 <fungi> clarkb: want to #info the zanata maintenance here?
19:04:14 <clarkb> sure
19:04:16 <fungi> we can cover general discussion about it later
19:04:38 <clarkb> #info upgrading translate.openstack.org to Xenial + java 8 + zanata 3.9.6 Wednesday March 1 at 2300 UTC
19:04:51 <fungi> thanks!
19:04:53 <clarkb> this adds a bunch of features that the translators have been asking for
19:04:58 <fungi> as always, feel free to hit me up with announcements you want included in future meetings
19:05:10 <fungi> #topic Actions from last meeting
19:05:25 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2017/infra.2017-02-14-19.03.html Minutes from last meeting
19:05:33 <fungi> "1. (none)"
19:05:40 <fungi> and done
19:05:47 <fungi> #topic Specs approval
19:06:13 <fungi> we don't seem to have anything new up this week, though there are a few hanging out there that could use some eyeballs
19:07:11 <fungi> #link https://review.openstack.org/434951 Stackalytics Persistent Cache
19:07:42 <fungi> that one in particular could stand to get some attention, since it's basically the last bit standing between us and running stackalytics.openstack.org as production
19:08:09 <fungi> so anybody with an interest in stackalytics, please weigh in
19:08:15 <clarkb> fungi: isn't that something better for the stackalytics devs to weigh in on?
19:08:24 <clarkb> I guess I don't really feel like I'd be a great reviewer
19:08:50 <fungi> well, we need to review it from the standpoint of whether this is a fit for the current automation and configuration management issues we have with the service
19:09:05 <pabelanger> I can take a look, only because I helped get stackalytics.o.o online. But I agree with clarkb, we should ping the stackalytics core reviewers
19:09:24 <fungi> but yes, the implementation on the stackalytics side would still need buy-off from their maintainers
19:10:08 <fungi> mrmartin: ^ reminder that the stackalytics devs should also take a look at that spec
19:10:37 <fungi> #topic Priority Efforts
19:10:57 <fungi> nothing called out specifically here for this week, but we got a lot tackled at the ptg last week
19:11:19 <fungi> #topic Apache workers on static.o.o (ianw)
19:11:47 <ianw> hi, this has caused several problems for me lately
19:12:11 <fungi> looks like the first change linked there has already merged
19:12:18 <ianw> people have been reporting job failures, and on monday there were at least two reports of intermittent connection problems with static.o.o and docs.o.o
19:12:45 <ianw> did i link https://review.openstack.org/#/c/426639/ ?
19:12:57 <fungi> #link https://review.openstack.org/426639 Switch static.openstack.org to worker MPM
19:12:58 <pabelanger> I gave a +2 a while back, thought we were ready to move it into production
19:13:09 <pabelanger> didn't +3 since I couldn't monitor it
19:13:25 <ianw> yeah, i just didn't want to go fiddling with it without consensus
19:13:33 <fungi> oh, right, it didn't get workflow +1 yet, i misread
19:14:10 <jeblair> +2.  is mpm-event in xenial?
19:14:21 <fungi> #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-02-26.log.html#t2017-02-26T23:57:30 discussion of job failures and user downtime
19:14:24 <clarkb> jeblair: it's the default in xenial iirc
19:14:39 <jeblair> so this might be short lived if we organize our xenial virtual sprint
19:14:43 <ianw> i believe the xenial version includes fixes for this particular issue, though
19:15:41 <pabelanger> ++
19:15:57 <ianw> alright, if no major issues then i might babysit it this afternoon when it's quiet.  in theory, at least, it should just be an apache restart
19:16:22 <fungi> yeah, i can't personally participate in the xenial upgrades sprint until at least week-after-next but hope we can do that rsn
19:16:38 <fungi> ianw: sounds fine to me. go for it
19:17:24 <fungi> anything else we need to cover on this topic?
19:17:35 <ianw> nope /EOT
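    [For reference, a minimal sketch of what the MPM switch amounts to on an Ubuntu host; the real change is applied through puppet via review 426639, and the module names here assume stock Apache 2.4 packaging rather than anything stated in the meeting:

        a2dismod mpm_prefork    # or mpm_event, whichever MPM is currently loaded
        a2enmod mpm_worker
        service apache2 restart
    ]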
19:18:06 <fungi> #topic migrate github/bagpipe-bgp into openstack/networking-bagpipe (pabelanger)
19:18:28 <pabelanger> so, clarkb and I talked a little about this topic this morning. I think we have it under control now.
19:18:39 <fungi> #link https://review.openstack.org/438573 migrate github/bagpipe-bgp into openstack/networking-bagpipe
19:18:45 <pabelanger> I wasn't sure if we had an automated process for re-import or it was manual
19:18:50 <pabelanger> looks like manual wins out
19:19:04 <pabelanger> so, we just need to schedule this with the project owner, I think
19:19:08 <fungi> right, there's no automation since we expect it to be a rare occurrence
19:19:17 <pabelanger> great, that's all I had
19:19:36 <fungi> excellently brief topic!
19:19:49 <fungi> #topic translate.openstack.org upgrade (clarkb)
19:20:00 <clarkb> ohai
19:20:18 <clarkb> just wanted to let people know that things are going well on this, and we plan to run through it tomorrow
19:20:24 <fungi> #link https://etherpad.openstack.org/p/translate.o.o-upgrade translate server upgrade maintenance plan
19:20:34 <clarkb> translate-dev has been happy on the new code once I figured out how jboss and zanata are different
19:20:41 <clarkb> ianychoi has tested translate-dev and is happy with it
19:20:48 <clarkb> so now we are ready to do production
19:21:00 <fungi> is translate-dev back to authenticating against openstackid-dev.o.o now?
19:21:09 <clarkb> fungi: there is a change up for that but I don't think it has merged
19:21:12 <clarkb> let me find it really quick
19:21:34 <fungi> nevermind, i just tested and it's not
19:21:35 <clarkb> https://review.openstack.org/#/c/419667/
19:21:59 <clarkb> please read through the etherpad steps and let me know if you think anything is missing
19:22:07 <fungi> #link https://review.openstack.org/419667 Switch to openstackid-dev for translate-dev
19:22:12 <clarkb> we are doing it at an apac friendly time so that ianychoi and others can help test once done
19:22:13 <fungi> i'll go ahead and approve that one now
19:23:00 <pabelanger> I can be on standby if needed
19:23:45 <fungi> 23:00 utc wednesday. i _may_ be around or might show up partway into the maintenance
19:24:04 <fungi> hard to know what dinner plans will be with family here
19:24:36 <fungi> but i'll review the changes you've got linked in the pad at least
19:24:42 <pabelanger> fungi: I don't mind doing it
19:24:48 <fungi> thanks pabelanger!
19:24:58 <clarkb> thanks
19:25:11 <fungi> #link https://review.openstack.org/438738 Add translate01 to cacti
19:25:36 <fungi> #link https://review.openstack.org/438737 Update db creds for translate01.o.o
19:25:52 <clarkb> note ^ is WIP because I don't want the new server talking to the db until we are ready to switch
19:26:06 <fungi> yep, i assumed so
19:27:04 <fungi> looks like it hasn't actually saved a db dump yet
19:27:39 <fungi> i guess that merged more recently than utc midnight
19:28:37 <fungi> nothing in the etherpad is jumping out at me as overlooked or concerning
19:28:49 <clarkb> oh it should've done a db dump since I manually ran one
19:29:20 <clarkb> fungi: I ran both the mysqldump command and bup command from cron (in that order) to jump start the backup process
19:29:25 <fungi> the /var/backups/mysql_backups dir is empty except for what looks like a logrotated empty file
19:29:36 <fungi> on translate01
19:29:37 <clarkb> fungi: on translate.openstack.org?
19:29:42 <clarkb> ya translate01 does not have backups yet
19:30:04 <clarkb> translate.o.o never had backups, so I wanted those in place first; then we'll transition backups to 01 when we switch
19:30:30 <fungi> got it. i saw it had the dump job in its crontab, but i guess it doesn't have access to the production db yet anyway
19:30:37 <clarkb> correct
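    [A hedged sketch of the kind of cron pair clarkb mentions above (a mysqldump followed by a bup run); the paths, dump options, and backup target are illustrative assumptions, not quoted from the server:

        mysqldump --all-databases | gzip -9 > /var/backups/mysql_backups/translate.sql.gz
        bup index /var/backups
        bup save -r backup-server: -n translate-mysql /var/backups
    ]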
19:31:59 <fungi> you're going to use zuul enqueue-ref to test/prime the periodic translation jobs?
19:32:19 <clarkb> ya
19:32:35 <clarkb> well not enqueue-ref since they're periodic iirc
19:32:37 <clarkb> enqueue?
19:32:49 <fungi> enqueue needs a change id
19:32:54 <fungi> and patchset
19:33:07 <fungi> enqueue-ref can be provided a branch tip or whatever
19:33:11 <clarkb> ah ok
19:33:24 <fungi> though i guess the ref itself is irrelevant in that case
19:33:52 <clarkb> right, just need a way to say "run nova's periodic translation jobs early"
19:33:53 <fungi> i haven't personally tried manually enqueuing a periodic job in zuul. you might find you have to compose a trigger-job.py invocation for each one instead
19:34:12 <fungi> but hopefully the zuul cli works in this case
19:34:13 <clarkb> otherwise they run around 0600 UTC which is quite a bit later
19:36:28 <clarkb> I will sort out what the command to enqueue periodic jobs is
19:37:24 <fungi> might save you some time later
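    [A hedged sketch of the kind of zuul 2.x client invocation being discussed; the pipeline, trigger, and ref values here are illustrative guesses, not the command that was actually run:

        zuul enqueue-ref --trigger timer --pipeline periodic \
            --project openstack/nova --ref refs/heads/master
    ]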
19:37:36 <fungi> okay, anything else on this? if not, open discussion ensues
19:38:06 <clarkb> I don't have anything else. ianychoi has done a bunch of user-side testing and I think I have the system side sorted, so expect it to be mostly happiness (now I have jinxed it)
19:38:53 <fungi> well, here's hoping that if nothing else, we only need to upgrade to zanata 3.9.6 once
19:39:19 <fungi> #topic Open discussion
19:39:37 <clarkb> for open discussion: more and more groups seem interested in upgrading our nodepool builders
19:39:41 <clarkb> we might want to schedule that too
19:40:35 <fungi> in service of assembling a ptg summary, i've seeded an etherpad from the zuul v3 accomplishments mentioned in yesterday's zuul meeting
19:40:38 <fungi> #link https://etherpad.openstack.org/p/infra-ptg-pike-recap brainstorming pad for ptg recap
19:41:19 <fungi> people who worked on other things during the ptg (storyboard, translations tooling, job failure diagnosis, et cetera) please add some bullets!
19:41:19 <pabelanger> clarkb: agreed, should be something we can roll into production
19:41:37 <clarkb> looking at the puppetry for the builders, it appears that we should already work with systemd, except for possibly needing to run the systemctl command to reload units (in this case our SysV scripts)
19:41:42 <pabelanger> We'd need xenial for opensuse images too
19:41:47 <felipemonteiro> Hi, Mirantis currently hosts Murano's CI environment. They've expressed interest in moving it to me. Would it be possible to have infra do this?
19:42:32 <fungi> felipemonteiro: does it do anything special or have any special requirements which they were unable to upstream previously?
19:42:47 <jeblair> clarkb: can you elaborate on 'upgrading our nodepool builders'?
19:43:24 <clarkb> jeblair: ianw/dib have wanted us to upgrade from trusty to xenial to be closer to how dib is running CI. And SUSE is interested in building suse images, but that requires a working zypper install, which isn't available on trusty but is on xenial
19:43:26 <fungi> adding xenial-based builders
19:43:27 <felipemonteiro> fungi: I doubt it. Serg Melikyan told me it's "1 medium sized VM with Jenkins and 1 hardware node with 96 RAM".
19:43:38 <clarkb> jeblair: so adding xenial based builders, then deleting the trusty ones
19:44:06 <jeblair> gotcha
19:44:13 <mordred> felipemonteiro: the hardware node with 96 ram would be the thing that might make running the jobs in infra tricky
19:44:21 <mordred> felipemonteiro: is that node just being used as a source of vms?
19:44:36 <mordred> (I believe I remember Serg was doing things with libvirt builders)
19:44:41 <jeblair> clarkb, fungi, pabelanger: sounds good to me.  we wanted to only change one variable at a time with nodepool.  i think we're good to change the next now.  ;)
19:45:01 <fungi> felipemonteiro: you probably want to look at https://docs.openstack.org/infra/manual/testing.html to confirm there are no known show-stoppers for you
19:45:31 <clarkb> felipemonteiro: mordred fungi I think what we'd want to do is port the jobs over into our infra and not host a separate jenkins
19:45:36 <fungi> #link https://docs.openstack.org/infra/manual/testing.html Test Environment documentation
19:45:37 <mordred> clarkb: yes
19:45:37 <pabelanger> jeblair: ya, I'm not rushing nodepool-builder swap. Maybe in a few weeks :)
19:45:44 <mordred> clarkb, felipemonteiro: yes. that is
19:46:18 <ianw> i'm happy to take as an action item getting a xenial builder into some pre-production state
19:46:30 <ianw> i would like that for dib reasons
19:46:32 <mordred> clarkb: also s/not host a separate jenkins/not host a jenkins at all/
19:46:47 <clarkb> ianw: cool, I think that would be a good first step, just to make sure that systemd and friends work, given the funniness around how puppet deals with that
19:47:04 <fungi> #action ianw launch a "beta" nodepool builder on xenial
19:47:15 <clarkb> ianw: I half expect our post-puppet reboot to fix that for our first install, but in the general case it may not work, since you need a systemctl load-units or whatever to pick up the sysv scripts
19:48:50 <ianw> if that's the only systemd issue, i think we can call ourselves lucky :)
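    [The systemctl step clarkb is reaching for above is most likely daemon-reload, which makes systemd re-scan its unit files, including the units it generates as wrappers around SysV init scripts:

        systemctl daemon-reload
    ]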
19:48:53 <felipemonteiro> mordred: Thanks. I'll check with Mirantis to see if these requirements are good enough for murano-ci.
19:49:06 <clarkb> felipemonteiro: so I think you want to read over the document that fungi linked to understand the limitations involved; then, if you can work with those, start porting your jobs into our CI by adding new jobs like everyone else
19:49:14 <mordred> felipemonteiro: awesome.
19:49:52 <felipemonteiro> clarkb: You mentioned how you don't want to host a separate Jenkins...does this mean you want everything infra if I understood correctly?
19:50:05 <felipemonteiro> in infra
19:50:09 <mordred> felipemonteiro: yes
19:50:16 <clarkb> felipemonteiro: right we would not take over the setup by running a new jenkins for you. Instead you would need to run your jobs as first party jobs in infra
19:50:38 <mordred> felipemonteiro: either all of the jobs should just be ported into our project-config repo and run as normal jobs, or none of it should be run in infra
19:51:11 <blancos> I had a question regarding setting up CI gates for a new(ish) project
19:51:21 <mordred> if the jobs are already written in jenkins-job-builder, then porting them over into project-config should be _fairly_ easy
19:52:19 <clarkb> blancos: feel free to ask, or hop over to #openstack-infra after the meeting and ask there
19:52:50 <felipemonteiro> mordred: I see. I'll try to get more information regarding whether it's just the ci jobs that are hosted by Mirantis. I have no objection to moving everything to infra.
19:53:15 <pabelanger> ++
19:53:18 <mordred> felipemonteiro: cool. if it's possible, it certainly seems like a good direction!
19:54:19 <blancos> clarkb: I guess the question is really just procedural; i.e., what's needed from me?
19:54:51 <clarkb> blancos: you'll need to propose changes to openstack-infra/project-config that add jobs in jenkins/jobs and then tell zuul to run them via zuul/layout.yaml
19:55:34 <fungi> #link https://docs.openstack.org/infra/manual/creators.html#add-basic-jenkins-jobs Project Creator’s Guide: Add Basic Jenkins Jobs
19:55:52 <fungi> blancos: ^ not sure if you've been following that guide
19:55:55 <blancos> fungi: Thank you :)
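    [As a very rough illustration of the two pieces clarkb describes above, a hedged sketch of the shapes involved; the project and job names are placeholders, and the authoritative examples live in the creator's guide linked above:

        # jenkins/jobs/projects.yaml -- define the jobs via job templates
        - project:
            name: example-project
            jobs:
              - python-jobs

        # zuul/layout.yaml -- tell zuul when to run them
        - name: openstack/example-project
          check:
            - gate-example-project-pep8
          gate:
            - gate-example-project-pep8
    ]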
19:56:07 <ayoung> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/
19:56:17 <ayoung> http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/
19:57:27 <fungi> ayoung: those are fun
19:57:42 <ayoung> sorry wrong room
19:57:55 <fungi> ayoung: no problem. thought maybe you were reporting a problem/need wrt those
19:58:14 <ayoung> nah, more as a solution to other problems fungi
19:58:34 <fungi> okay, open discussion seems to be winding down. i'll go ahead and polish up the ptg recap this evening. don't forget to add your highlights to the recap etherpad if you have any
19:58:47 <fungi> thanks everyone!
19:58:49 <clarkb> I'm going to enter a cave this afternoon to do track chair duties for summit
19:59:00 <clarkb> I will try to watch irc for fires but really need to get ^ done before the deadline
19:59:15 <fungi> i have family in town still so am mostly not around
19:59:41 <fungi> and that basically brings us to time for the tc meeting, up now!
19:59:44 <fungi> #endmeeting