15:01:10 <amoralej> #startmeeting RDO meeting - 2017-10-25
15:01:14 <openstack> Meeting started Wed Oct 25 15:01:10 2017 UTC and is due to finish in 60 minutes. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:17 <openstack> The meeting name has been set to 'rdo_meeting___2017_10_25'
15:01:28 <amoralej> #topic roll call
15:01:31 * Duck o/
15:01:34 <chandankumar> \o/
15:01:35 <ykarel> o/
15:01:49 <jjoyce> o/
15:02:08 <jpena> o/
15:02:13 <amoralej> #chair chandankumar ykarel jjoyce Duck jpena number80
15:02:14 <openstack> Current chairs: Duck amoralej chandankumar jjoyce jpena number80 ykarel
15:02:26 <number80> o/
15:03:23 <PagliaccisCloud> \o/
15:03:48 <amoralej> #chair PagliaccisCloud
15:03:49 <openstack> Current chairs: Duck PagliaccisCloud amoralej chandankumar jjoyce jpena number80 ykarel
15:03:57 <jrist> o/
15:04:06 <amoralej> #chair jrist
15:04:07 <openstack> Current chairs: Duck PagliaccisCloud amoralej chandankumar jjoyce jpena jrist number80 ykarel
15:04:08 <rdogerrit> Merged rdo-infra/ci-config master: Remove "static" option from include_role https://review.rdoproject.org/r/10280
15:04:19 <amoralej> let's start with chandankumar's topic
15:04:33 <amoralej> #topic Make RDO Office Hour biweekly and reduce its duration to one hour
15:04:39 <amoralej> chandankumar, ^
15:04:57 <chandankumar> So currently we are running RDO office hour each tuesday for 2 hours.
15:05:28 <chandankumar> We found that most people have conflicting meetings and 2 hours appears to be too much time.
15:05:46 <chandankumar> So can we bring it down to one hour and make it biweekly?
15:05:50 <apevec> yeah, biweekly makes sense
15:06:17 <number80> +2
15:06:19 <amoralej> +1
15:06:31 <apevec> +1
15:06:34 <PagliaccisCloud> +1
15:06:40 <jpena> +1
15:06:47 <jrist> +1
15:06:52 <jjoyce> o/+1
15:06:55 <chandankumar> locked down, merged! +W
15:06:58 <amoralej> ok, it seems there is consensus
15:07:21 <chandankumar> next: do we need to change the timing?
15:07:24 <amoralej> chandankumar, will you send a mail to communicate it?
15:07:42 <chandankumar> amoralej: i will send an email about the changes.
15:07:44 <number80> chandankumar: let's keep it at the same time for now
15:08:10 <chandankumar> number80: sure.
15:08:37 <amoralej> #agreed to make RDO Office Hour biweekly with a duration of one hour
15:08:41 <chandankumar> so starting from next week, we will be keeping it biweekly.
15:08:58 <amoralej> #action chandankumar will send a mail to communicate the new schedule
15:09:10 <chandankumar> that's it from my side.
15:09:20 <amoralej> thanks chandankumar
15:09:36 <amoralej> #topic infra: any problem to report after the ML migration?
15:09:59 <amoralej> Duck, is this yours?
15:10:09 <Duck> quack
15:10:15 <Duck> yes
15:10:23 <Duck> but it's not for me to talk :-)
15:10:41 <amoralej> from my side, we changed the mail to notify failures in ci to infra@
15:10:45 <amoralej> and it's working fine
15:11:19 <Duck> I'll be away from 31-08 for RubyWorld Conference and some PTO, so it's the right time to fix things if there is any problem
15:11:26 <number80> nothing special
15:11:44 <Duck> ok, then that's all for me :-)
15:12:16 <apevec> Duck, thanks for the lists work!
15:12:20 <dmsimard> \o sorry didn't realize the meeting had started
15:12:30 <apevec> now we are a proper project :)
15:12:36 <dmsimard> Duck: yeah thanks, it looks like there were no issues in the migration at all
15:12:37 <Duck> apevec: that's just step 1 :-)
15:12:38 <amoralej> #chair apevec dmsimard jjoyce
15:12:38 <apevec> with its own lists.*
15:12:39 <openstack> Current chairs: Duck PagliaccisCloud amoralej apevec chandankumar dmsimard jjoyce jpena jrist number80 ykarel
15:12:54 <jpena> everything looks fine
15:13:09 <apevec> Duck, where is the machine running btw?
15:13:19 <Duck> apevec: RDO Cloud
15:13:20 * apevec hopes rdocloud
15:13:24 <apevec> excellent
15:13:25 <Duck> the doc has been updated
15:13:44 <Duck> the doc you started on service continuity, and some other things on the site
15:14:27 <Duck> so the only thing to change is to finish the ansible base roles topic, and move the rules into your repo
15:14:45 <Duck> backup is done, but monitoring is waiting for this fix
15:14:46 <amoralej> Duck, iirc, there was some plan to deploy some fancy webui, right?
15:15:09 <Duck> yes, that's step 2 but blocked because of a bug
15:15:18 <amoralej> ok, cool
15:15:20 <Duck> so this will be Mailman _3_
15:15:25 <Duck> and later Ponymail is possible
15:15:28 <jruzicka> o/
15:15:30 <dmsimard> no hyperkitty ?
15:15:30 <Duck> if
15:15:39 <Duck> dmsimard: yes, in step 2
15:15:48 <apevec> is there Unicornmail too? :P
15:16:02 <Duck> OMG, please choose one guys
15:16:11 <amoralej> :D
15:16:21 <Duck> hyperkitty comes with MM3 and is already handled by our roles and scripts
15:16:27 <Duck> but the rest needs work
15:16:50 <number80> Hyperkitty
15:17:26 <apevec> alright let's move on?
15:17:29 <amoralej> i have no preference, so the simpler one is ok
15:17:31 <amoralej> yeah
15:17:43 <number80> next
15:17:48 <Duck> number80: rbowen tested Ponymail and found it better
15:17:52 <Duck> and
15:18:08 <dmsimard> rbowen is biased, Ponymail is an apache project :P
15:18:28 <dmsimard> anyway, let's defer that discussion to some other time :)
15:18:34 <amoralej> #info no issues have been reported after the mailing lists migration
15:18:39 <amoralej> let's move on to the next topic
15:18:42 <amoralej> #topic given master CI status, delay Queens milestone 1 to next week? https://www.rdoproject.org/testday/queens/milestone1/
15:19:07 <apevec> so this should be a nobrainer given the status:
15:19:18 <apevec> https://dashboards.rdoproject.org/rdo-dev
15:19:28 <apevec> master didn't pass for > 40 days
15:19:39 <apevec> there is no other option than to delay to next week
15:19:45 <amoralej> yeah
15:19:51 <jpena> yep, let's delay
15:19:56 <number80> yep
15:20:02 <amoralej> migration to cinder v3 seems fixed but there are still some open issues
15:20:05 <apevec> ok, I'll send an update to the website page
15:20:07 <dmsimard> has it really not passed for 40 days ?
15:20:11 <amoralej> yeah
15:20:11 <apevec> yes
15:20:14 <apevec> sadly
15:20:28 <dmsimard> usually when it doesn't pass for >7 days the world is on fire
15:20:37 <dmsimard> why are we not making a bigger deal out of this ?
15:20:41 <apevec> so the world was on fire for 40 days
15:20:42 <apevec> we are
15:20:46 <amoralej> dmsimard, many people are still working on pike
15:20:50 <amoralej> stabilization
15:20:50 <apevec> you are just not on the right calls :)
15:21:01 <dmsimard> amoralej: yeah I've seen a lot of work on pike..
15:21:03 <jpena> we live in our happy world dmsimard
15:21:08 <amoralej> :)
15:21:21 <dmsimard> I might be a bit more disconnected from CI and promotion recently :/
15:21:28 <amoralej> btw, we now have ansible 2.4 in the centos extras repos
15:21:50 <apevec> heh
15:21:52 <amoralej> in case you start seeing weird breakages
15:21:54 <dmsimard> yeah it appeared sometime recently, 2.3.2 is still available
15:22:03 <apevec> but there should be workarounds available in ooo I think?
15:22:20 <dmsimard> it actually broke something in software factory because of a "mutual" bug between ara and ansible in 2.4.0
15:22:44 <dmsimard> 2.4.1 has been delayed for like 2 weeks now.. it should hopefully be tagged today so I can tag a new release for ara that works with 2.4.1
15:22:53 <apevec> so, actions
15:23:09 <ykarel> for ansible 2.4 there is a fix: https://review.openstack.org/#/c/513701/
15:23:23 <apevec> #action apevec to update testdays page and move queens1 testday to Nov 2/3
15:23:27 <amoralej> containers builds should also work after https://review.rdoproject.org/r/#/c/10280/
15:24:13 <dmsimard> we should probably consider pinning the package versions we get from extras like we did with pip ..
15:24:20 <dmsimard> ansible upgrades are dangerous :(
15:24:41 <amoralej> there was some discussion about that in the past
15:25:13 <amoralej> but i think we need to stay compatible with whatever is in centos extras
15:25:29 <apevec> yeah we can't really pin extras
15:25:42 <apevec> instead we need to plug into ansible's pre-release ci
15:26:07 <apevec> there is work on that Cc flepied :)
15:26:52 <apevec> ok, I think we're done with the topic, we can discuss general CI later
15:26:54 <amoralej> ok, let's hope we can get a promotion before Nov the 1st
15:27:04 <amoralej> 2nd
15:27:09 <number80> ack
15:27:14 <apevec> if we don't I'll be sad
15:27:25 <apevec> let's move on
15:27:32 <amoralej> #topic longer EOL goodbye for Newton - keep trunk running for some projects longer (apevec)
15:27:44 <amoralej> #info http://lists.openstack.org/pipermail/openstack-dev/2017-October/123624.html
15:27:54 * eggmaster perks up
15:27:59 <eggmaster> o/
15:28:04 <apevec> so deployment projects want to stay around longer
15:28:08 <amoralej> #chair eggmaster
15:28:09 <openstack> Current chairs: Duck PagliaccisCloud amoralej apevec chandankumar dmsimard eggmaster jjoyce jpena jrist number80 ykarel
15:28:10 <apevec> the question is how we can support them
15:28:21 <apevec> post newton EOL is pushed upstream
15:28:27 <apevec> which is happening ~now
15:28:38 <dmsimard> There's one thing for sure -- it's that we can't build openstack projects in general past EOL since they delete the branch and tag eol.
15:28:39 * jrist waves newton a farewell
15:28:42 <apevec> we could pin EOLed projects
15:28:56 <dmsimard> So if the deployment projects happen to need a patch that isn't in newton-eol for, say, nova, there's nothing we can do about it
15:28:59 <apevec> and build only deployment projects
15:29:05 <jpena> we could change the branch in DLRN to newton-eol
15:29:08 <apevec> dmsimard, and we won't
15:29:11 <jpena> I mean, in rdoinfo
15:29:11 <apevec> that would be a limitation
15:29:28 <apevec> jpena, or the last tag, which should be ==
15:29:30 <amoralej> as we disable fallback to master, they will fail
15:29:39 <jjoyce> It would be nice to see a promotion with all of the eol tags if possible.
15:29:41 <number80> dmsimard: I get EmilienM's proposal to allow people to push fixes in installers, not in the projects themselves
15:29:48 <dmsimard> apevec: right, but this also means we need to keep the trunk, testing and stable repositories around -- which are no longer supported (especially from a security perspective)
15:29:59 <apevec> dmsimard, only trunk
15:30:24 <number80> jpena:
15:30:25 <number80> +1
15:30:31 <dmsimard> apevec: do you consider -deps (-testing) as trunk ?
15:30:43 <dmsimard> apevec: who's on the hook for maintaining that ?
15:31:09 <apevec> same suspects
15:31:14 <amoralej> the main problem i see is ensuring things keep working with RHEL updates
15:31:43 <dmsimard> the main problem I see is a resource one (both human and physical)
15:31:44 <apevec> that's a good point and we could say this is only for the currently known working rhel minor release
15:31:54 <apevec> i.e. 7.4
15:31:55 <amoralej> if we want to keep testing that over time, it may happen that we need patches in EOLed repos
15:32:13 <apevec> amoralej, we are not going to do that
15:32:17 <amoralej> apevec, the problem is how to enforce using 7.4 in CI
15:32:24 <apevec> the proposal is to keep non-EOLed projects building in trunk
15:32:41 <amoralej> after 7.5 is published
15:32:52 <apevec> amoralej, then we EOL this
15:33:06 <dmsimard> 7.5 is in like what, 6 months maybe ?
15:33:16 <apevec> so it's a 6-month proposal then
15:33:23 <apevec> dmsimard, there isn't a fixed schedule
15:33:31 <amoralej> so, it's like a best effort
15:33:32 <apevec> it follows rhel and that's not a public schedule
15:33:33 <dmsimard> apevec: yeah it's a hand-wavy approximation
15:33:36 <number80> dmsimard: no assumptions, can be earlier or later
15:33:43 <apevec> re. resources, how much do we need for Newton?
15:33:50 <apevec> we could trim trunk
15:33:58 <dmsimard> so this means we have to manage an additional openstack release per cycle
15:34:35 <dmsimard> because if we do this for pike, we'll be doing it for ocata and so on
15:34:41 <dmsimard> errrrr s/pike/newton/
15:34:54 <apevec> what does that actually mean?
15:35:03 <apevec> e.g. https://review.rdoproject.org/r/#/q/topic:rdo-FTBFS-newton is not that frequent
15:35:12 <apevec> (as it should be in a stable release)
15:35:31 <jpena> in terms of storage it should not mean much, with purging it will get down to a minimal footprint pretty soon
15:35:36 <apevec> it means 1 more dlrn process and disk storage
15:35:38 <amoralej> maintaining the DLRN builder and fixing FTBFS shouldn't really be an issue, i think
15:35:48 <dmsimard> apevec: it means instead of the ~1 month overlap during which we support 3 releases (N, N-1, N-2), we extend that by a few months
15:36:05 <dmsimard> not too different from what we had to do during the ocata short cycle
15:36:08 <apevec> dmsimard, ok, let's quantify the work
15:36:29 <amoralej> the main problem is the time to debug CI and keep promotions passing when we start having issues
15:36:32 <apevec> I'd say let's try this post EOL trunk with Newton
15:36:33 <dmsimard> EmilienM: are you there ?
15:36:38 <amoralej> and note that for newton there is no upstream promotion
15:36:42 <apevec> then we see what comes out of it
15:36:44 <amoralej> only the RDO CI promotion pipeline
15:36:47 <EmilienM> I'm very busy now
15:36:56 <EmilienM> dealing with tripleo gate & stuff
15:36:57 <EmilienM> what's up
15:37:12 <EmilienM> oh newton EOL?
15:37:15 <dmsimard> EmilienM: we're talking about extending EOL for deployment projects
15:37:16 <apevec> EmilienM, we're figuring out how to give you more work :)
15:37:25 <EmilienM> bah, do what you like
15:37:27 <apevec> b/c you asked for it!
15:37:30 <EmilienM> I said it sucks if we EOL
15:37:40 <EmilienM> I guess I already said my opinions and the why etc
15:37:45 <EmilienM> do I need to repeat?
15:38:00 <apevec> we can experiment with a post-EOL rdo trunk
15:38:14 <dmsimard> EmilienM: We're interested in what it means in terms of resources to keep it going -- what jobs we need to keep, what packages still need to be maintained, etc.
15:38:59 <EmilienM> i guess we just want to run tripleo jobs
15:39:07 <EmilienM> because we still have backports in tripleo / newton
15:39:15 <EmilienM> we want to make sure they actually work (HEH)
15:39:55 <apevec> weshay|PTO, ^ what ooo newton jobs can we run in review.rdo?
15:39:57 <apevec> ah PTO
15:40:01 <amoralej> so, we'd maintain only RDO Trunk repos, pinned for EOLed projects and following stable/newton for the non-EOLed ones
15:40:08 <apevec> adarazs, trown ^
15:40:21 <apevec> amoralej, yes
15:40:35 <dmsimard> EmilienM: so we need a promotion pipeline ? periodic jobs ? etc
15:40:37 <apevec> questions are about: deps repo and storage
15:40:58 <dmsimard> newton uses... hammer? which has been EOL for several months already
15:41:05 <adarazs> sorry, in a meeting at the moment.
15:41:06 <dmsimard> we have however not retired the repositories (yet)
15:41:21 <amoralej> in ci.centos we could keep a reduced pipeline with only tripleo jobs
15:41:24 <apevec> dmsimard, I think periodic jobs weren't there for newton?
15:41:33 <apevec> amoralej, +1
15:41:38 <jjoyce> dmsimard: It would be helpful to keep the pipeline up until we are through all of the eol tags and see promotions with them.
15:41:50 <dmsimard> apevec: there was a promotion pipeline for newton, whether it was in tripleo-ci, ci.centos or review.rdo
15:42:09 <dmsimard> jjoyce: yeah, that's a given, regardless of whether we keep newton around longer or not
15:42:12 <EmilienM> dmsimard: no promotion
15:42:15 <amoralej> for newton we just have a single tripleo periodic job
15:42:16 <EmilienM> just gate in tripleo-ci
15:42:24 <EmilienM> on patches sent to tripleo / newton
15:42:27 <apevec> there wasn't https://trunk.rdoproject.org/centos7-newton/current-tripleo/
15:42:32 <apevec> it's 404
15:42:47 <amoralej> newton is the old model
15:42:52 <amoralej> only RDO-CI promotion
15:42:57 <amoralej> not upstream promotion
15:43:03 <amoralej> not tripleo-ci promotion
15:43:05 <jjoyce> dmsimard: So we would see dlrn hashes that have been promoted for newton, correct?
15:43:19 <amoralej> i'd keep some periodic jobs in ci.centos
15:43:55 <amoralej> or it'll be a mess when jobs fail in upstream gates
15:43:56 <apevec> amoralej, which jobs are periodic in ci.c.o ?
15:44:11 <apevec> or you mean promotion running periodically?
15:44:15 <amoralej> https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo-delorean-promote-newton/
15:44:21 <amoralej> yes
15:44:22 <apevec> ok, so the latter
15:44:27 <amoralej> yes
15:44:46 <amoralej> we could get rid of weirdo if it's a problem
15:45:09 <amoralej> and remove all jobs for cloudsig repos
15:45:11 <apevec> alright, so can we agree to try this for Newton, then review what we do for Ocata?
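A minimal sketch of the rdoinfo pin jpena suggests above (pointing an EOLed project at its newton-eol tag so the newton builder stops chasing a deleted branch), assuming the usual source-branch override under a release tag; the project entry shown is illustrative, not the actual change:

    # illustrative rdoinfo (rdo.yml) fragment, not the real submitted patch:
    # pin an EOLed project to the newton-eol tag for the newton DLRN builder
    - project: nova
      name: openstack-nova
      tags:
        newton:
          source-branch: newton-eol

Deployment projects that keep a live stable/newton branch would simply keep their existing source-branch and continue building from trunk.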
15:45:22 <dmsimard> it would be, because p-o-i will be EOL'd and I suppose we won't make packstack follow an extended cycle
15:45:42 <apevec> yeah, no patches for packstack
15:45:48 <apevec> it's rock stable ;)
15:45:55 <dmsimard> well, hang on, EmilienM -- would your thing also run for puppet ?
15:45:56 <amoralej> bug-free software
15:45:59 <dmsimard> EmilienM: as in, extended EOL for puppet modules
15:46:07 <EmilienM> if mnaser is ok yes
15:46:13 <EmilienM> if too late no
15:46:44 <dmsimard> yeah it's a bit last minute considering the draft was not broadly discussed upstream
15:46:52 <EmilienM> you have to understand we have customers (HEH!) running OSP10, and the more backports we do downstream-only, the less testing we have when shipping these backports. I see value in keeping Newton a little bit longer
15:47:47 <dmsimard> EmilienM: we can bring the production chain back 2 years and do backports manually even for supported releases if you want :P
15:48:59 <amoralej> we can keep p-o-i jobs while the stable/newton branch exists
15:49:05 <amoralej> so, for actions
15:49:28 <apevec_> looks like I dropped
15:49:39 <amoralej> i think newton-EOL tags are still not in place, right?
15:49:55 <amoralej> jpena, did you already deploy dlrn with the new option to prevent fallback to master?
15:49:59 <apevec_> they are coming
15:50:00 <jjoyce> amoralej: They are starting to show up now.
15:50:03 <EmilienM> guys do what you like
15:50:07 <jpena> apevec: not yet
15:50:11 <EmilienM> I just gave some feedback on why I think we should keep newton
15:50:12 <apevec_> EmilienM, we want to make you happy!
15:50:17 <EmilienM> no you don't !
15:50:22 <amoralej> that'd be important to avoid getting builds from master in newton
15:50:23 <EmilienM> btw, I'll never be happy.
15:50:26 <jpena> I mean amoralej
15:50:28 <jpena> yep
15:51:09 <apevec> ok, so we can refine this in the newton eol card, which I need to create from a template https://trello.com/c/fV69VODx/165-rdo-release-eol
15:51:27 <apevec> #action apevec to create Newton EOL card in rdo trello
15:51:27 <amoralej> ok
15:51:41 <apevec> then we discuss details there
15:51:49 <amoralej> ok
15:51:50 <apevec> let's move on
15:52:13 <amoralej> there are no more topics in the etherpad
15:52:24 <amoralej> #topic who will chair next week?
15:52:28 <amoralej> any volunteer?
15:54:06 <amoralej> ok, i will do it again
15:54:43 <jpena> Is Nov 1 a non-working day everywhere?
15:54:53 <jpena> it is in Spain, so amoralej and I will be off
15:55:05 <amoralej> i was checking, thanks jpena
15:55:32 <apevec> oh right, I'm off too
15:55:58 <amoralej> dmsimard, number80 chandankumar ykarel do you want to keep the meeting?
15:56:19 <ykarel> amoralej, no item from my side
15:56:31 <amoralej> or is it a non-working day there too on Nov-1st?
15:56:31 <apevec> for those outside the core-catholic states :)
15:56:37 <amoralej> :)
15:56:42 <ykarel> and it's a working day here
15:57:13 <apevec> ykarel, can you run the meeting, we might still need a sync to confirm testday?
15:57:13 <amoralej> ykarel, that sounded to me like volunteering to chair? :)
15:57:21 <ykarel> amoralej, sure
15:57:42 <amoralej> #action ykarel will chair the meeting next week
15:57:50 <amoralej> #topic open floor
15:57:56 <amoralej> we still have a couple of minutes
15:58:14 <apevec> so, quickly on general CI,
15:58:15 <amoralej> any topic you'd like to bring up?
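On the DLRN side, a hedged sketch of the fallback change jpena is asked about above, assuming it is exposed through the nonfallback_branches setting in projects.ini; the option name and the prefix list shown here are assumptions for illustration, not the deployed configuration:

    # illustrative projects.ini fragment for the newton builder (not the
    # actual deployed config): branches matching these prefixes never fall
    # back to master when missing, so a deleted stable/newton or a pinned
    # newton-eol tag fails the build instead of silently building master
    [DEFAULT]
    nonfallback_branches=master,rpm-,stable/,newton-eol

With that in place, the "getting builds from master in newton" failure mode amoralej mentions would surface as a build error rather than a wrong promotion.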
15:58:27 <apevec> looks like people are not aware of where to look for the current issues
15:58:42 <apevec> and which series of issues we were hitting
15:59:22 <apevec> amoralej, and now I'm not sure where we have it publicly presented...
15:59:36 <apevec> https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status is not clear since it's just a scratchpad
15:59:49 <apevec> and it even links to the internal docs ;(
16:00:04 <apevec> we need to fix that so folks like jpena and dmsimard know what is going on ...
16:00:05 <amoralej> yeah, i'm trying with a search in LP
16:00:08 <amoralej> gimme a while
16:00:15 <apevec> there is https://bugs.launchpad.net/tripleo/+bugs?field.tag=alert
16:00:27 <amoralej> yeah, that's what i was looking for
16:00:55 <amoralej> although that's not complete, to be honest
16:01:18 <apevec> yeah, ok, I'll discuss that internally and see what we can do to increase visibility
16:01:24 <apevec> that's it from me
16:01:43 <amoralej> ok, thanks
16:01:48 <amoralej> we are over time
16:01:48 <number80> amoralej: the network got slower at the office
16:01:54 <number80> so I'm good
16:02:11 <amoralej> i'm closing the meeting unless there is something else
16:02:30 <amoralej> #endmeeting