19:01:29 #startmeeting infra
19:01:30 Meeting started Tue Aug 6 19:01:29 2013 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:31 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:33 The meeting name has been set to 'infra'
19:01:35 #link http://eavesdrop.openstack.org/meetings/infra/2013/infra.2013-07-30-19.02.html
19:01:40 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:02:00 o/
19:02:29 jeblair has super exciting news I think
19:02:41 * fungi thinks so too
19:03:01 that's a slightly stale agenda, but i think we can work with it
19:03:18 #topic asterisk
19:03:19 o/
19:03:45 o/
19:04:10 it's probably not worth calling in again, as i'm not sure anyone has had a chance to identify causes/solutions to the high cpu from transcoding we saw...
19:04:25 jeblair: I haven't seen any puppet changes to address that
19:04:34 there was one small change to deal with NAT better but I doubt that is related
19:04:49 russellb: i haven't had a chance to look into it, have you?
19:04:56 pabelanger isn't around :(
19:05:08 jeblair: i haven't touched it
19:05:12 clarkb: not related
19:05:22 that was related to me helping someone get their local PBX behind NAT working
19:05:55 i don't think we know that it's transcoding specifically, and not just the cost of running the conference bridge
19:06:25 i don't think we had monitoring set up last week? so would be worth doing it again once we have a graph to look at i guess
19:07:08 #link http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=39
19:07:42 awesome
19:08:00 did we possibly under-size the vm? we can always grow it to the next flavor up if needed, i expect
19:08:19 but we'll obviously want to observe its performance under load
19:08:26 i don't see CPU on there
19:08:28 should we call in again to get some numbers in cacti?
19:08:32 russellb: second page
19:08:49 oops :)
19:08:52 (or set "graphs per page" to a high number at the top)
19:09:14 where "high number" is in excess of the ~25 graphs currently generated
19:09:33 that's 5 min intervals
19:10:35 let's aim for calling in again next week
19:10:39 ok
19:10:48 * russellb is in now in case anyone wants to call in and idle for a bit
19:11:01 oh, or we could do it now. :)
19:11:03 should see if we can get a graph of number of calls on here too
19:11:12 i'm in bridge 6000
19:11:18 what's the DID again?
19:11:26 https://wiki.openstack.org/wiki/Infrastructure/Conferencing
19:11:35 #link https://wiki.openstack.org/wiki/Infrastructure/Conferencing
19:11:40 thanks!
19:13:19 i couldn't figure out how to mute my other client, lol.
19:13:20 sorry.
19:13:36 * mordred not calling in because he's in brazil. btw
19:14:21 mordred: you could try SIP :)
19:14:27 * jeblair is listening in stereo
19:15:51 so we presumably want to load the pbx for a good 10 minutes to get a decent snmp sample period
19:16:02 fungi: yeah
19:17:02 so while that's going on...
19:17:04 #topic Multiple Jenkins masters (jeblair)
19:17:11 yes!
19:17:19 we have them!
19:17:21 \o/
19:17:28 several!
19:17:34 all of the test load should now be on jenkins01 and jenkins02
19:17:43 while jenkins is running the jobs on the special slaves
19:18:03 this is going to be a big help in that we can scale out jenkins with our test load
19:18:12 i've seen no problem reports which seem attributable to the change
19:19:07 (from an overall system pov, we've moved the bottleneck/spof to zuul, but there's a logical bottleneck there in terms of having a single logical gate)
19:21:31 next i'm going to be working on devstack-gate reliability, and then speed improvements
19:22:01 i haven't looked back at the resource graphs for zuul recently to see whether that's creeping upward
19:22:53 it seems to be keeping up with gerrit changes, etc
19:23:02 i think the new scheduler there did a lot to help that
19:23:28 so, completely unscientifically, now that we're spreading the load out a bit, it seems as if jenkins itself is able to keep up with the devstack-gate turnover better.
19:23:29 * mordred bows down to jeblair and the jenkinses
19:23:30 cpu and load average look fine on zuul
19:23:44 at least, when i've looked, the in-progress and complete jobs are finishing quickly
19:24:35 #topic Requirements and mirrors (mordred)
19:24:42 and we can upgrade jenkins with zero downtime :)
19:24:50 mordred, fungi: updates on that?
19:24:57 clarkb: yes! i hope to do that soon!
19:25:16 well... we've gotten somewhere
19:25:27 oops, i hung up on the pbx
19:25:31 devstack is now updating all projects' requirements to match openstack/requirements
19:25:37 before installing them
19:25:48 and requirements is now gated on devstack
19:25:53 so that's super exciting
19:26:06 anyway, i've tried and scrapped about three different designs for separate-branch mirrors and settled on one i think should work
19:26:17 woot
19:26:35 i've got a wip patch for the job additions up right now while i hammer out the jeepyb run-mirror.py addition
19:27:12 i still don't have good ideas for bootstrapping new branches of openstack/requirements to the mirror without manual intervention
19:27:44 fungi: is listing the branches in the requirements repo not sufficient?
19:27:56 and what to do with milestone-proposed periods
19:28:16 milestone-proposed belongs to the parent branch, right?
19:28:35 clarkb: well, if we branch requirements for a new release, we want the mirror to already be there so tests will run, right?
19:29:03 fungi: yes, we can branch requirements first though
19:29:11 so do we rename the mirrors in place, or duplicate them, or play with symlinks during transitions or...
19:29:21 I'm not sure we should ever have a milestone-proposed requirements, should we?
19:29:34 I think we can duplicate. It is easy, pip cache prevents it from being super slow
19:29:39 and python packages are small
19:29:47 can we do failover-to-master? or is that crazy
19:29:54 mordred: not so much a milestone-proposed requirements, but what do we gate nova milestone-proposed against? master? havana?
19:30:11 fungi: gotcha
19:30:22 thinking mostly in terms of which mirror to use in integration tests around release time
19:30:26 yah
19:30:45 mordred: particularly since devstack forces the requirements now, the m-p branch of code will either use what's in requirements/m-p or requirements/master
19:30:47 so anyway, i'll get the bare functionality up first and then we can iterate over release-time corner cases for the automation
19:31:04 mordred: if we don't branch requirements, then that means master requirements must be frozen during the m-p period
19:31:24 nod. branching requirements seems sensible
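(Side note on the requirements sync mentioned at 19:25:31: what "updating all projects' requirements to match openstack/requirements" amounts to is roughly the sketch below. The real logic lives in openstack/requirements and the devstack hook that calls it and handles many more cases; the parse_line() helper, function name, and file paths here are illustrative assumptions, not that script's actual interface.)

    import re

    def parse_line(line):
        """Return (canonical package name, full requirement line)."""
        name = re.split(r'[<>=!\[;\s]', line.strip(), 1)[0].lower()
        return name, line.strip()

    def sync_requirements(global_reqs_path, project_reqs_path):
        """Rewrite a project's requirements so globally managed packages use the global specifiers."""
        with open(global_reqs_path) as f:
            pins = dict(parse_line(l) for l in f
                        if l.strip() and not l.strip().startswith('#'))
        output = []
        with open(project_reqs_path) as f:
            for line in f:
                stripped = line.strip()
                if not stripped or stripped.startswith('#'):
                    output.append(line.rstrip('\n'))
                    continue
                name, original = parse_line(stripped)
                # use the global specifier when the package is globally managed,
                # otherwise leave the project's own entry alone
                output.append(pins.get(name, original))
        with open(project_reqs_path, 'w') as f:
            f.write('\n'.join(output) + '\n')

The point of doing this before installation, as discussed above, is that every project is then tested against exactly the versions the requirements repo has agreed on.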
19:31:26 which i think is counter to what we want m-p for (to keep master open)
19:31:36 so i think we probably have to branch requirements...
19:31:39 and if we branch it first, then that could trigger the m-p mirror
19:31:47 yeah. I'm on board with that now
19:31:52 fungi: the act of creating a branch is a ref-updated event that could trigger the run
19:31:57 agreed
19:32:17 so it should be transparent to the projects as long as requirements is branched first
19:32:29 another point in question here is, once we have requirements-sync enforced on projects, do we need to keep carrying forward the old package versions?
19:32:33 fungi: we'll want to make sure that the release documentation for the m-p branching process is updated for this.
19:32:40 absolutely
19:33:20 fungi: it doesn't seem like we do need to carry those.
19:33:32 fungi: maybe clean them up after 48/72 hours or something?
19:33:51 fungi: (to give us a chance to pin them if something breaks)
19:34:02 i wouldn't think so either, just wanted to make sure i didn't go to extremes to populate new branch mirrors with the contents of the old ones indefinitely
19:35:02 we may also need requirements branches for feature branches
19:35:25 cleaning up old packages will be an interesting endeavor as well... especially if different python versions cause different versions of a dependency to get mirrored
19:35:56 jeblair: I think it is fair to make feature branches dev against master
19:36:08 s/master/master requirements/
19:36:11 so we can't necessarily guarantee that if two versions appear in the mirror, the lower-numbered one should be cleared out
19:36:16 fungi: yeah, i don't think there's any rush to do that. if you were mostly asking about carrying existing things to new branches, then i think nbd.
19:36:24 otherwise you won't be able to sanely test keystone branch foo against everything else master in tempest
19:36:54 clarkb: the feature branch requirements would see that it is done.
19:36:57 jeblair: that was mostly it. create the new mirror from scratch when the branch happens, vs prepopulating with a copy of the old mirror first
19:37:34 jeblair: the problem with it is if nova master conflicts with keystone foo
19:37:37 clarkb: if reqs has feature branches, then devstack will now force the feature reqs to be the deps for all the projects. that will either work or not in the same way as master.
19:37:39 then all of your testing fails
19:38:06 jeblair: I think the net effect is you end up with something a lot like the master requirements
19:38:11 clarkb: yes, such a requirements change would not be allowed to merge to the feature branch.
19:38:22 clarkb: so that's the system working as intended.
19:38:41 yup, but the diff between master and foo requirements will be tiny
19:38:44 clarkb: if two openstack projects need updating to work with an updated requirement in the feature branch, then two openstack projects will need that feature branch.
19:38:57 gotcha
19:38:57 clarkb: tiny but important.
19:39:19 (important enough for you to go through all this hassle to get a new version of something. :)
19:39:49 ++
19:39:50 anything else?
19:40:11 (i really like the way this is heading)
19:40:19 #topic Gerrit 2.6 upgrade (zaro)
19:40:42 we forgot to link zaro's change last meeting (or maybe it wasn't pushed yet)
19:40:50 zaro: do you have the link to your gerrit patch handy?
19:41:01 yes, give me a min.
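(Returning briefly to fungi's 19:31:52 point that creating a branch shows up as a ref-updated event: a minimal sketch of how such an event from Gerrit's event stream could kick off the per-branch mirror run discussed above. The field names follow the standard ref-updated JSON shape; trigger_mirror_run() is a hypothetical stand-in for whatever actually starts run-mirror.py, and the exact refName format varies by Gerrit version, so this is not the actual jeepyb/Zuul wiring.)

    import json

    ZERO_SHA = '0' * 40  # an oldRev of all zeros means the ref was just created

    def handle_stream_event(raw_line, trigger_mirror_run):
        """Call trigger_mirror_run(ref) when a new openstack/requirements branch appears."""
        event = json.loads(raw_line)
        if event.get('type') != 'ref-updated':
            return
        update = event.get('refUpdate', {})
        if update.get('project') != 'openstack/requirements':
            return
        if update.get('oldRev') != ZERO_SHA:
            # an existing branch moved; only branch creation should bootstrap a new mirror
            return
        trigger_mirror_run(update.get('refName'))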
19:41:31 zaro: btw i played around with your wip feature poc and it seems to do what we want
19:41:39 I agree
19:41:40 excellent stuff
19:41:41 # link https://gerrit-review.googlesource.com/48254
19:41:41 I like it
19:41:53 of: [cgit, py3k, git-review, and storyboard], which do we need to talk about at this meeting?
19:41:53 #link https://gerrit-review.googlesource.com/48254
19:41:57 #link https://gerrit-review.googlesource.com/48255
19:42:08 it's been uploaded for almost a week, no love yet.
19:42:09 jeblair: cgit and py3k have recent changes
19:42:25 i have py3k and g-r updates, but they're not critical to be covered
19:42:47 so just waiting now..
19:43:04 * ttx lurks
19:43:08 zaro: should we try and get people to review that change?
19:43:21 i could make a request.
19:43:22 zaro: I am not sure what their review backlogs look like, but asking in IRC might help
19:43:36 probably mfick
19:43:52 zaro: david did have a nit on that second change
19:43:58 he just came back from vacation.
19:44:01 zaro: very cool
19:44:27 jeblair: it was a nit but he didn't give a score.
19:44:53 jeblair: i was waiting for maybe someone else to score before fixing the nit
19:44:54 zaro: anyway, you might want to update that, and then, yeah, start pestering people. :)
19:45:09 jeblair: ok. will give that a try.
19:45:22 #topic cgit server status
19:45:41 so, we're pretty close, 2 of the 3 reviews outstanding should be pretty good to go
19:46:23 someone keeps scope-creeping at least one of them, sorry. :)
19:46:35 yeah, that's the 3rd :)
19:46:43 pleia2: i see the replication and https changes open... what's the third?
19:46:49 fungi: ssl
19:46:51 oh
19:46:57 I intend to update my reviews now that there are new patchsets (I believe)
19:47:03 fungi: git daemon https://review.openstack.org/#/c/36593/
19:47:10 pleia2: did we agree on https only?
19:47:12 ahh, yes
19:47:14 jeblair: yeah
19:47:26 patched the ssl one accordingly
19:47:34 that seems reasonable to me, fwiw. https (no http) and git:// otherwise.
19:47:38 ++
19:47:53 wfm
19:48:06 once these are done there is just cleanup and theming if we want
19:48:08 ++
19:48:23 ttx: you may be interested in the "Requirements and mirrors" topic earlier in this meeting
19:48:29 I'd like theming - but I'm fine with that coming later
19:48:55 yeah, I think right now having performant fetches on centos is more important than theming :)
19:48:56 also, I'm flying to philly on thursday for fosscon, so my availability will be travel-spotty as I prep and attend that
19:49:03 jeblair: reading backlog
19:49:07 ttx: short version: we will probably need to create an m-p branch of the openstack/requirements repo before doing any other project m-p branches.
19:49:24 pleia2: have fun
19:49:29 clarkb: thanks :)
19:49:34 #topic Py3k testing support
19:49:52 we have a couple of outstanding moving parts that need to get reviewed
19:49:57 #link https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:py3k,n,z
19:50:18 clark has a patch out there to start non-voting py33 tests on all the clients after those above go in
19:50:26 #link https://review.openstack.org/#/c/40323/
19:50:31 jeblair: I'll need to do an m-p for swift 1.9.1 tomorrow or thursday morning
19:51:08 ttx: if it's just for swift, it shouldn't be an issue
19:51:30 ttx: it only matters if the requirements for master and m-p diverge, which i am certain will not happen for swift. :)
19:52:02 ttx: so you should be able to ignore it for this, and hopefully we'll have all the machinery in place by h3
19:52:12 jeblair: ack
19:52:18 fungi: i will promote those to the top of my queue
19:52:34 jeblair: awesome. thanks
19:52:58 other than that, the projects which are testing on py33 seem to be doing so successfully
19:53:06 not much to add on that topic
19:53:12 very excited about running py33 tests for the clients, since zul is actually submitting changes there!
19:53:16 #topic Releasing git-review 1.23 (fungi)
19:53:32 this was on the agenda just as a quick heads up
19:53:56 fungi: cool. did we get anywhere with installing the hook differently?
19:54:05 fungi: so that it applies to merge commits too?
19:54:07 we have a contributed patch to convert git-review to pbr
19:54:24 #link https://review.openstack.org/#/c/35486/
19:54:26 and i want to tag a release on the last commit before that
19:54:48 ++
19:54:54 just so if we have installation problems in the following release for some users, the fallback is as up to date as possible
19:55:07 i've been using the current tip of master for weeks successfully
19:55:20 have one cosmetic patch i want to cram in and then tag it, probably later this week
19:55:35 fungi: sounds good
19:55:40 yup
19:55:42 the pbr change is exciting though, because we have integration tests which depend on that
19:55:48 very exciting
19:55:54 ah, that's what's holding that up
19:56:01 #link https://review.openstack.org/#/c/35104/
19:56:11 #topic Storyboard (anteaya)
19:56:19 also i want to turn on the tests as no-ops first so we can gate them on themselves
19:56:19 anteaya: 4 minutes. :(
19:56:26 hello
19:56:31 hi!
19:56:39 well most of my questions from last week were answered
19:56:47 I hadn't set up the db properly
19:57:06 I have a patch waiting to merge that adds those instructions to the readme
19:57:07 * ttx started working on the project group feature today
19:57:34 basically what I do know is that ttx wants to stay with django 1.4, correct ttx?
19:57:36 ttx: https://review.openstack.org/#/q/status:open+project:openstack-infra/storyboard,n,z
19:57:47 ttx: i think both of those changes could use some input from you if you have a min.
19:57:57 and other than that I am still trying to get the models straight
19:58:10 jeblair: oh. I wasn't notified on those for some reason
19:58:17 that is about it from me. ttx, anything else?
19:58:26 ttx: ah, you may need to add it to your watched projects list in gerrit
19:58:35 https://review.openstack.org/#/settings/projects
19:58:37 doing that right now
19:58:44 ttx: i don't think i've started watching it in gerrit either. good reminder
19:58:52 (used to happen as part of the lp group sync)
19:59:23 ttx: I personally don't see any reason to stay with 1.4
19:59:28 but defer to you
19:59:37 upgrade to all the new things
19:59:45 no good reason, except that supporting 1.4 is not really causing an issue
20:00:02 oh, and reminder to everyone i'm working from seattle the week of the 18th, then mostly unreachable the week after that
20:00:10 ttx: if I were able to put together a patch to upgrade to 1.5, would you look at it?
20:00:11 I'm fine with 1.5+ if we end up using something that is only 1.5 :)
20:00:12 django is tricky because supporting multiple versions isn't super straightforward aiui
20:00:28 thanks all!
20:00:32 #endmeeting
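(A footnote on the 20:00:12 point that straddling Django versions isn't entirely straightforward: one well-known 1.4-to-1.5 change is the removal of django.utils.simplejson, so code meant to run on both typically carries a small shim like the one below. This is a generic compatibility idiom and a minimal sketch; dump_story() is a hypothetical example, not anything currently in storyboard.)

    try:
        from django.utils import simplejson as json   # present through Django 1.4
    except ImportError:
        import json                                    # Django 1.5 dropped the bundled copy

    def dump_story(story):
        """Serialize a story dict identically under either Django version."""
        return json.dumps(story, sort_keys=True)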