19:04:09 #startmeeting infra
19:04:10 Meeting started Tue Sep 9 19:04:09 2014 UTC and is due to finish in 60 minutes. The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:04:11 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:04:14 The meeting name has been set to 'infra'
19:04:21 #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting
19:04:23 #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-09-02-19.01.html
19:04:24 (and now for something completely different)
19:04:28 o/
19:04:31 #topic Priority Specs (jeblair)
19:04:34 o/
19:04:42 o/
19:04:52 hey, so we like totally merged those specs
19:05:09 yay
19:05:13 #link http://specs.openstack.org/openstack-infra/infra-specs/
19:05:13 showed them what for
19:05:16 yay!
19:05:17 o/
19:05:21 yay
19:05:26 also i have a todo to update my license
19:05:30 because i screwed that up
19:05:36 o/
19:06:00 nibalizer: you're no longer licensed?
19:06:10 i think the thing to do now with this time is to start tracking implementation of those and other important things we're working on
19:06:12 fungi: i need to be re-registered in oregon
19:06:23 an unlicensed nibalizer
19:06:24 (we have a few things in progress that predate specs but probably would have been)
19:06:38 a review to split out a module: https://review.openstack.org/#/c/119543/
19:06:44 i'm thinking specifically about puppet 3 migration, config repo split, nodepool dib, logs in swift, docs publishing
19:07:12 ...finishing moving jobs to trusty
19:07:33 fungi: right, the 34 work is in service of that, right?
19:07:40 yeah
19:07:53 or did you mean specifically things that are already specs?
19:08:07 fungi: nope, including something big that's pre-spec like that is good
19:08:30 fungi: i want to make sure we make forward progress on our priorities, and that we're good at communicating those
19:08:34 right, that. i just reread your list ;)
19:08:36 ++
19:08:38 and i think moving to trusty is one of our priorities
19:09:05 so maybe next week we should expect to have agenda items for each of those
19:09:17 sounds like a good idea
19:09:31 we'll need to coordinate with jhesketh on logs/swift, or potentially have a meeting annex later in the day
19:09:37 sounds like a candidate for an action item
19:09:51 * nibalizer can keep updates for puppet3 migration as that goes forward
19:10:55 since that's already on todays agenda, and we have a looming deadline, let's go ahead and switch to that
19:11:03 #topic Puppet 3 Migration
19:11:18 * mordred welcomes his puppet3 overlords
19:11:19 * anteaya is curious about the looming deadline
19:11:26 so sept 30 is the day puppet 2.7 goes EOL
19:11:33 that would be it
19:11:34 oh crikey
19:11:39 jeblair: has stated that we will not run puppet if its eol
19:11:47 ++
19:12:00 there is a patchset chain that paves the way for us to build a p3 master
19:12:06 #link https://review.openstack.org/#/c/117604/
19:12:08 that is exactly 3 weeks from today
19:12:19 after that rooters can run nodes against the p3 master with --noop and inspect the breakage
19:12:37 it may be possible to move many nodes over immediately, others may require some refactors
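
A minimal sketch of the --noop check described above, for illustration only: the master hostname is a placeholder, and the exit-code handling assumes puppet's --detailed-exitcodes convention (0 no changes, 2 changes pending, 4 or 6 failures).

    #!/usr/bin/env python
    # Sketch: run one no-op agent pass against the candidate puppet 3
    # master and report whether this node would change or fail.
    import subprocess
    import sys

    def noop_check(server):
        # --detailed-exitcodes: 0 = clean, 2 = changes pending,
        # 4 = resource failures, 6 = changes plus failures
        return subprocess.call(
            ['puppet', 'agent', '--test', '--noop',
             '--detailed-exitcodes', '--server', server])

    if __name__ == '__main__':
        rc = noop_check('p3-master.example.org')  # placeholder hostname
        if rc in (4, 6):
            sys.exit('no-op run reported failures; node needs refactoring')
        print('clean' if rc == 0 else 'changes pending (exit %d)' % rc)
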
19:12:40 looks like the beginning of that stack has merged
19:12:56 nibalizer: did the gerrit change merge?
19:13:00 have a link to that one handy?
19:13:13 also there is a tool in puppet that lets us generate catalogs, so we could generate 2.7 and 3 catalogs and diff them
19:13:14 we could also create puppet3 nodes to run the apply test on to inspect breakage
19:13:45 and wikipedia built a tool to operationalize that as a webservice, i have no experience with either
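
A minimal sketch of that catalog diff, assuming the two catalogs have been compiled to JSON files (e.g. via `puppet master --compile <node>` redirected to disk); the wrapper layout differs between puppet versions, so both known shapes are checked.

    #!/usr/bin/env python
    # Sketch: compare two compiled puppet catalogs by resource set.
    import json
    import sys

    def resources(path):
        with open(path) as f:
            doc = json.load(f)
        data = doc.get('data', doc)  # 2.7/3.x nest the catalog under 'data'
        return {(r['type'], r['title']): r.get('parameters', {})
                for r in data.get('resources', [])}

    def diff_catalogs(old_path, new_path):
        old, new = resources(old_path), resources(new_path)
        for key in sorted(set(old) - set(new)):
            print('removed: %s[%s]' % key)
        for key in sorted(set(new) - set(old)):
            print('added:   %s[%s]' % key)
        for key in sorted(set(old) & set(new)):
            if old[key] != new[key]:
                print('changed: %s[%s]' % key)

    if __name__ == '__main__':
        diff_catalogs(sys.argv[1], sys.argv[2])
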
19:13:56 jesusaurus: i created https://review.openstack.org/#/c/107250/ but got stuck on 'how do i jjb'
19:14:01 any current insurmountable obstacles?
19:14:14 clarkb: the gerrit change did not merge i think, but it isn't required, those are only deprecation warnings
19:14:18 * jesusaurus looks at 107250
19:14:23 nibalizer: oh right
19:14:41 clarkb: https://review.openstack.org/#/c/107243/
19:15:25 sounds like reviewing the remainder of https://review.openstack.org/#/c/117604/ then getting 107250 going are the next steps?
19:15:37 at any rate i think getting the chain that ends at https://review.openstack.org/#/c/117604/ to a place where it can land, then landing it, then building the p3 master should be our top priorities
19:16:01 nibalizer: sounds good; maybe we can get that merged this afternoon and i can spin up a p3 master again
19:16:13 jeblair: that would be great
19:16:29 nibalizer: given we have 3 weeks to finish this, where do you want to be by next week?
19:16:53 anteaya: with a p3 master up, and rooters checking nodes with noop
19:16:58 ++
19:17:05 kk
19:17:16 * fungi volunteers to help test
19:17:22 if we dont have a p3 master by next meeting, thats an oshitsignal
19:17:28 indeed
19:17:43 the sooner we get the easy bits out of the way, the more time we have left to fix the harder stuff
19:17:56 exactly
19:18:06 sounds good to me
19:18:13 i dont want to burn meeting time theorizing what the hard bugs would be, but yea
19:18:18 if we have to prioritize nodes, we can do so for the gerrit/jenkins/zuul conglomerate
19:18:25 ya
19:18:30 those are the ones we want to keep updating with the puppetmaster
19:18:41 if paste loses its connection to puppet27 master, we'll live :)
19:18:43 the storyboard node, the puppetdb node those machines could have puppet just disabled on them
19:18:55 does the -dev ml need a heads up or just announce once it is completed?
19:19:15 anteaya: this should be invisible to developers
19:19:19 very good
19:19:25 i don't think the -dev subscribers are likely to particularly care unless we break something
19:19:53 ya this should be a backend change no one ever knows we made
19:19:56 anything else on puppet3?
19:20:04 unless they are reconsuming our puppet so maybe a note to the infra list
19:20:17 yeah, ci might care
19:20:31 nothing else on p3
19:20:39 #topic Manila project renaming (fungi, bswartz)
19:21:05 fungi: istr you communicated with someone about this?
19:21:14 i talked to vponomaryov1 and he said it would be fine for us to just pick a date/time and then give the manila devs a heads up on the -dev ml
19:21:32 and bswartz later agreed when he popped in to ask an unrelated question
19:21:58 so... do we want to rename this weekend? hold off until there are other things we need to add to the pile?
19:22:11 do it on a friday now that ff is behind us?
19:22:21 this weekend is hard for me to do. but I won't stop you guys from doing it
19:22:24 though the gate still is a bit submerged
19:22:36 this weekend is also tough for me
19:22:49 hopefully by later this week it will have lightened up considerably
19:22:51 so friday or kick it down the road
19:22:55 although I could do sunday before noon central time
19:23:08 i'm honestly fine either way, just want to make sure it doesn't fall through the cracks indefinitely
19:23:28 sounds like maybe we should just revisit next week
19:23:37 I'm for later, since p3 testing this week sounds like a priority
19:23:43 and might leak into friday
19:24:19 okay, let's kick it down the road and see what sept 19-21 ends up looking like
19:24:28 yeah, table it for now
19:24:39 or shelve it
19:24:41 #topic Fedora/Centos testing updates (ianw 09-09-2014)
19:24:53 hey
19:25:04 firstly nodepool
19:25:30 wondering if any thoughts on https://review.openstack.org/#/c/118939/ (Ignore min-ready when at capacity)
19:26:26 ianw: cool, we should look at that since it may help with operational problems; sorry i haven't had a chance
19:26:39 ok
19:26:41 ianw: we did reduce min-ready to minimize that problem, but i'd like to put it back
19:26:59 i think we may need to do a nodepool restart soon to help mordred with the dib work
19:27:11 jeblair: dib work schmib work
19:27:19 so maybe we can try to push on that review and maybe get it in too
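
A rough illustration of the idea behind that min-ready change (118939), not nodepool's actual implementation; every name and parameter here is invented for the sketch.

    # Illustration of "ignore min-ready when at capacity": when the
    # provider is saturated, warm-spare padding (min-ready) only
    # displaces nodes that queued jobs actually need, so fall back to
    # serving real demand. Not nodepool's real code.

    def nodes_to_launch(demand, ready, building, min_ready,
                        allocated, max_servers):
        capacity = max_servers - allocated   # free slots at the provider
        if capacity <= 0:
            return 0
        want = max(demand, min_ready) - ready - building
        if want > capacity:
            # at capacity: drop the min-ready floor, serve demand only
            want = demand - ready - building
        return max(0, min(want, capacity))
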
19:27:29 on the bare-f20 nodes, last week i was happy to wait for postgresql puppet release to enable these nodes, but that didn't happen
19:27:49 https://review.openstack.org/117397 and dependent
19:27:56 ianw: ya we should probably just go ahead with the git installed version
19:28:26 so my concern with that is that it will break the puppetdb server
19:28:35 but that wouldn't actually break anything else i think
19:28:47 nibalizer: because puppetdb uses postgres?
19:28:50 arg
19:29:03 im +1 on the change
19:29:06 like lets do it
19:29:18 if it breaks, chances are good it would have broken on the stable release anyways
19:29:28 nibalizer: what do we need to do to fix puppetdb? and will it be puppetdb the service or just puppeting of puppetdb
19:29:30 nibalizer: you don't think it will break the postgres testing on the gate workers?
19:29:42 jeblair: ianw said he tested that
19:29:51 rock on
19:30:02 also why did they take the travis badge off the postgres module? shame?
19:30:19 nibalizer: maybe travis ran out of nodes
19:30:38 but the little travis.png isn't on their readme
19:30:49 * anteaya is amazed to see jeblair type rock on
19:30:49 mordred: you said we have more than twice the test capacity of travis at this point?
19:31:04 hax: https://travis-ci.org/puppetlabs/puppetlabs-postgresql
19:31:24 okay so yea tests are green
19:31:24 nibalizer: is it puppetdb the service that will break or puppeting of puppetdb server?
19:31:25 * fungi keeps meaning to find out where you got the capacity info for travis, but can defer that discussion to not-meeting
19:31:34 nibalizer: because puppetdb the service breaking completely hoses all of puppet
19:32:02 clarkb: i can run the beaker tests locally for the puppet module
19:32:12 that will give us "high confidence" that the git version is not busted
19:32:22 I thought it was busted?
19:32:24 since that's essentially what puppetlabs does to vet a release
19:32:25 I am confused
19:32:31 clarkb: i see green in the travis
19:32:38 basically what I want to prevent is that our puppetdb will break
19:32:41 because that breaks everything
19:32:42 clarkb: no i getcha
19:32:44 clarkb: busted for fedora 20
19:33:11 nibalizer: ok. earlier you said puppetdb would break
19:33:20 clarkb: it might
19:33:37 idunno, i cant answer for all potentialities
19:33:58 thats fine but earlier you made it sound more definite
19:34:09 i was just calling it out as a dependency
19:34:32 okay, so it sounds like we're ready to try to collapse that wave function!
19:35:01 so what was the conclusion?
19:35:07 give it a shot
19:35:08 i think we should merge it
19:35:31 clarkb: if postgres really does crap itself we can switch puppetdb to use the in memory database until we fix postgres
19:35:39 but i dont think that will happen
19:35:41 nibalizer: ok.
19:35:51 ianw: nibalizer: is that git sha1 the one we want to use still?
19:36:20 I assume that is the one that was tested so probably
19:36:48 clarkb: that is the one i tested and got a good run with on f20 ...
19:36:54 great
19:37:08 last question. :) should we remove the regular module install from install_modules.sh?
19:37:16 I think its ok as is but will end up doing more work
19:37:27 i think we want to use a1edd262d84f97a33ff64b8c542cffc5fa6d98c1
19:37:51 oh i hadn't thought that ianw had done testing
19:38:16 ianw: see inline comment on the change otherwise I agree lets do this
19:38:33 seems like we should get rid of the regular module?
19:38:55 ya I think it is most correct to do so
19:39:07 +1 to deleting the regular module
19:39:44 ianw: how hard was testing the new postgres module with the rax node you used?
19:40:34 nibalizer: not super hard
19:40:56 okay, can you test on the tip of master then? it looks like hunner fixed the tests, since they are passing and the patch is called 'patch specs'
19:41:51 nibalizer: i can, but i wouldn't like to be having this same discussion next week...
19:42:21 ok, review updated with that removal
19:42:26 that makes sense
19:42:32 consensus is important
19:42:50 so maybe lets go ahead and merge with the tested change, and maybe ianw can also test master if he has a chance
19:42:57 ++
19:43:01 that will hopefully help us avoid breakage as they get closer to release
19:43:20 i can do that
19:43:33 yea lets merge ianw's change
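
A rough python rendering of the pinning idea just agreed on: install the module from git at the tested sha instead of the released tarball. install_modules.sh itself is a shell script in the config repo; the module table, repo url, and paths here are illustrative, with only the sha taken from the discussion above.

    #!/usr/bin/env python
    # Sketch: install a puppet module from git pinned to a tested sha
    # rather than the released version, mirroring the approach discussed
    # for puppetlabs-postgresql.
    import os
    import subprocess

    SOURCE_MODULES = {
        'postgresql': (
            'https://github.com/puppetlabs/puppetlabs-postgresql.git',
            'a1edd262d84f97a33ff64b8c542cffc5fa6d98c1'),  # tested on f20
    }

    def install(module_root='/etc/puppet/modules'):
        for name, (url, ref) in SOURCE_MODULES.items():
            dest = os.path.join(module_root, name)
            if not os.path.isdir(dest):
                subprocess.check_call(['git', 'clone', url, dest])
            subprocess.check_call(['git', 'fetch', 'origin'], cwd=dest)
            # hard-pin so a new upstream release can't change the deploy
            subprocess.check_call(['git', 'checkout', '-q', ref], cwd=dest)

    if __name__ == '__main__':
        install()
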
19:43:55 i've been keeping the centos7 images up-to-date https://review.openstack.org/#/c/118246/ (Update to newer Centos7 images)
19:44:01 in disk-image-builder
19:44:23 reviews are a little slow
19:44:50 last time i added -core to it, and got back responses like "don't add core to changes"
19:45:21 why this matters for infra is if d-i-b is a key part of nodepool, getting changes through in a timely manner is important
19:45:39 ianw that always annoyed me - reason being it means the change affects all of core and should be voted on by the entire core team
19:45:45 nod
19:45:53 that was the "core" concept of it but not always how people use it
19:45:54 ianw: indeed, using dib was supposed to free us from having to wait for operators to upgrade
19:46:40 it is always interesting to see how groups use gerrit differently
19:47:08 sdake_: so my opinion is a bit different, if you're part of -core then i feel like you should be interested in all changes. but yeah, different groups, etc
19:47:20 ianw: ++
19:47:47 i honestly mostly don't notice if authors add me as a reviewer on changes (at least insofar as it would affect the urgency of review)
19:48:04 fungi: indeed, gertty lacks that 'feature' :)
19:48:36 ;)
19:48:41 well either way, timely sync between d-i-b and infra seems like it's going to be of increasing importance
19:48:46 ianw: indeed
19:48:48 ianw: thanks for mentioning it though; we'll want to keep an eye on that
19:50:01 alright, well not much else for me, thanks
19:50:14 ianw: thanks!
19:50:14 #topic Open discussion
19:50:21 yolanda and I are giving some OpenStack CI talks at Fossetcon late this week (Friday and Saturday), if we could get our slide changes in so they're published by then it would be very helpful, particularly for yolanda's since no version is up on the publications site yet (my sysadmin ones are close enough) https://review.openstack.org/#/q/status:open+project:openstack-infra/publications,n,z
19:50:32 also, this means I'm gone tomorrow - next tuesday, will be back on Wednesday
19:50:51 pleia2: have fun
19:51:00 thanks :) forecast is hot and muggy, woo florida
19:51:08 enjoy yourself
19:51:27 I'll be away tomorrow and intermittent on Thursday, personal errands
19:51:28 * jeblair subscribes to publications. oops.
19:51:46 * mordred mocks jeblair
19:52:13 * jeblair wonders if mordred's mocking covers all interfaces
19:52:23 * mordred segfaults
19:52:42 lol
19:52:55 pleia2: gl!
19:53:08 if all goes according to plan I will be working from a new place in oregon next week
19:53:14 nibalizer: thanks, I'll bug you when I'm back to review some puppetconf slides :)
19:53:15 ++
19:53:18 so I may be working off of a phone tether for a bit
19:53:21 clarkb: happy transfer
19:53:26 clarkb: yay, congrats
19:53:29 clarkb: just wander over to puppetlabs and steal wifi from them
19:53:44 that leaves zaro alone in an office, yeah?
19:53:48 pleia2: are you speaking at puppetconf!? puppetconf is gonna be siiiiiick
19:53:49 with krotscheck
19:53:55 nibalizer: I am!
19:54:03 Wait, wha?
19:54:07 anteaya: nah, jesusaurus and krotscheck are there ... unless jesusaurus is back to portland too...
19:54:12 and i have to run now to deal with apartment stuff, back shortly
19:54:24 mordred: not yet. but im aiming at heading that way in november
19:54:24 Oh. Yes. I’ll be in the office.
19:54:27 ah cool, glad zaro still has company
19:54:31 krotscheck: great
19:54:45 krotscheck: soon it will just be you and zaro up in there
19:55:04 in an office with tables where the pencils roll off
19:55:10 but PDX will be full of awesome people
19:55:21 nibalizer: _full_ is a strong word
19:55:31 nibalizer: there will be a cadre of awesome people and then a bunch of hipsters ...
19:55:47 mordred: s/awesome people/hipsters/
19:55:56 :)
19:56:30 is discussion of hipsters the natural terminus for all our conversations?
19:56:37 think about that over the next week
19:56:38 +1
19:56:40 thanks everyone!
19:56:43 #endmeeting