15:01:25 <chandankumar> #startmeeting RDO meeting - 2017-12-13
15:01:26 <openstack> Meeting started Wed Dec 13 15:01:25 2017 UTC and is due to finish in 60 minutes. The chair is chandankumar. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:30 <openstack> The meeting name has been set to 'rdo_meeting___2017_12_13'
15:01:41 <chandankumar> #topic Roll Call
15:01:42 <amoralej> o/
15:01:52 <PagliaccisCloud> \o
15:01:55 <jpena> o/
15:02:01 <chandankumar> #chair amoralej PagliaccisCloud jpena
15:02:02 <openstack> Current chairs: PagliaccisCloud amoralej chandankumar jpena
15:02:10 <rbowen> o/
15:02:16 <number80> o/
15:02:16 <chandankumar> #chair rbowen
15:02:18 <openstack> Current chairs: PagliaccisCloud amoralej chandankumar jpena rbowen
15:02:23 <chandankumar> #chair number80
15:02:24 <openstack> Current chairs: PagliaccisCloud amoralej chandankumar jpena number80 rbowen
15:04:09 <chandankumar> EmilienM: apevec meeting time
15:04:23 <dmsimard> hi there o/
15:04:34 <chandankumar> #chair dmsimard
15:04:35 <openstack> Current chairs: PagliaccisCloud amoralej chandankumar dmsimard jpena number80 rbowen
15:05:04 <chandankumar> so starting with today's first topic
15:05:09 <EmilienM> hello
15:05:11 <chandankumar> #topic Dry-run/test/firedrill of the queens release ? (in retrospect of the challenges from pike)
15:05:17 <chandankumar> #chair EmilienM
15:05:18 <openstack> Current chairs: EmilienM PagliaccisCloud amoralej chandankumar dmsimard jpena number80 rbowen
15:05:22 <chandankumar> dmsimard: please go ahead
15:05:58 <dmsimard> I'd like to propose we change the way we ship new stable releases
15:06:36 <dmsimard> Historically, we've populated the testing repositories with RCs and eventually the signed tarballs, and then pushed everything to the stable release mirror when everything was ready
15:07:13 <dmsimard> The problem with this is that we only do the initial push to stable repos once every 6 months, and the process is largely prone to code rot or other problems in general
15:07:41 <dmsimard> We've seen this for pike, it took us several days to get the release out
15:08:31 <dmsimard> What I think would be good would be to build everything in -testing and then, as soon as everything is bootstrapped, push to the stable repositories -- at this point we would ship neither rdo-release nor centos-release-openstack-*; we would only release those on release day
15:08:32 <number80> So?
15:09:03 <amoralej> that means pushing RC releases for all packages to mirror.c.o
15:09:16 <amoralej> before GA, right?
15:09:19 <dmsimard> This way, we are not chasing KB and other things in an emergency to ship ASAP
15:09:27 <dmsimard> we just push hotfixes or new builds as required
15:09:29 <dmsimard> amoralej: yes
15:10:02 <amoralej> i'm not sure if there is any policy in CentOS about it, but i'm ok with that, although it may help less than we expect
15:10:09 <dmsimard> amoralej: the fact that the packages are there does not make anyone start using them unless they manually change their repo files, or install rdo-release or centos-release
15:10:19 <dmsimard> the only exception to this might be the common repo
15:10:26 <dmsimard> but these are updated throughout the cycle already
15:10:57 <dmsimard> amoralej: why would it help less ?
We would have like 2 weeks to address potential issues with releasing instead of 2 days
15:11:05 <amoralej> dmsimard, depends
15:11:09 <amoralej> let me ask something first
15:11:11 <jpena> we cannot remove the RC packages once pushed, right?
15:11:22 <amoralej> those pushes would have to pass promotion criteria?
15:11:30 <amoralej> jobs in rdoinfo gate
15:11:42 <dmsimard> those pushes would have to pass whatever criteria there would be for a release
15:11:44 <rbowen> KB has also asked me to communicate that if we can give plenty of warning of an upcoming release, and several reminders a week, a day, etc. out, that will greatly improve the chances that he'll be available.
15:11:44 <amoralej> i'd say so
15:11:49 <amoralej> ok, then
15:12:09 <amoralej> the problem is that trailing services (tripleo, etc...)
15:12:19 <amoralej> push RC tags a short time before GA
15:12:32 <amoralej> i think in pike it was 4 days or so
15:12:40 <amoralej> so yeah, we'd have that time
15:12:58 <dmsimard> did we ever get around to wrapping up the definition of done ? I think I had an action on that..
15:12:59 <amoralej> that could help, but maybe not as much as we'd like
15:13:19 <dmsimard> because that reminds me we had the discussion of whether or not we actually ship on release day or wait until cycle-trailing is out
15:13:33 <amoralej> yes, in fact this is closely related
15:14:17 <amoralej> in Pike, for example, we almost couldn't push RCs to the testing repo because of missing tags until very close to GA
15:14:41 <dmsimard> RDO historically released (or at least tried to) on upstream release
15:15:09 <amoralej> yes, that's still the goal, i guess
15:15:29 <dmsimard> but if we do that, we ship RCs of cycle-trailing :p
15:15:46 <dmsimard> which might be blocked due to bugs or whatever
15:15:51 <dmsimard> (I mean promotion criteria)
15:16:03 <amoralej> so the goal is to provide 1) working packages 2) as soon as possible
15:16:36 <amoralej> in the last releases, RCs for deployment tools have been good enough to promote
15:16:42 <amoralej> and push on GA
15:16:46 <dmsimard> number80: what are your thoughts on trying to ship to stable mirrors ASAP to give us more time to troubleshoot ahead of the upstream release ?
15:16:48 <dmsimard> amoralej: yeah
15:17:18 <amoralej> at some point they may not be, but i think the compromise is to keep it as-is and *try* to promote with RCs for cycle-trailing
15:17:30 <amoralej> keeping in mind that at some point it may not be possible
15:17:45 <amoralej> but TBH, it may happen even with GA releases, right?
15:17:50 <dmsimard> yeah
15:17:54 <number80> dmsimard: I think we need a specific process to ship GA as it is, as you said, a once-per-release thing
15:18:26 <number80> but relaxing CI jobs for the GA push could be a different way
15:18:27 <dmsimard> number80: the process (or the code involved in the process) is highly prone to bit rot
15:18:37 <dmsimard> automation, scripts, documentation
15:18:40 <amoralej> we have worked to implement some improvements in tooling based on the pike retrospective
15:18:41 <dmsimard> a lot of things can change in 6 months
15:18:53 <number80> yeah
15:18:58 <dmsimard> if we have a way to test this process from end to end without pushing to stable repos, I'm all ears
15:19:02 <rbowen> We have a document here - http://rdoproject.org/rdo/release-checklist/ - that talks about the peripheral things around a release, too, by the way.
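To illustrate the point dmsimard made earlier -- packages staged on the stable mirror stay invisible to users until release day, because the repo definition only reaches them via the rdo-release or centos-release-openstack-* package -- here is a minimal sketch of such a repo file; the baseurl and GPG key path are illustrative, not necessarily the exact production values:

    # Illustrative sketch of the repo file that the release package would ship on release day
    [centos-openstack-queens]
    name=CentOS-7 - OpenStack Queens
    baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-queens/
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
    enabled=1

Until that release package is published, content already pushed to the mirror is only reachable by people who add a repo file like this by hand, which is why the stable repos can be populated ahead of GA.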
15:19:03 <amoralej> and i think it should work better for queens but
15:19:07 <dmsimard> because I mean, that's the purpose
15:19:12 <rbowen> Mostly that's more about communication than the actual release process.
15:19:39 <rbowen> But it's helpful to keep it in mind with regard to timing and whatnot.
15:20:05 <amoralej> if we think that we should move to a different model in queens to make it shorter and less prone to automation errors, we can do it
15:20:08 <dmsimard> rbowen: yeah, it's about the technical aspect of things -- how we build RC releases, the automation involved, updating specs, updating rdoinfo, getting them tagged to stable, getting someone to sign the packages, shipping rdo-release and centos-release
15:20:51 <dmsimard> amoralej, number80: despite having quorum, I would love to have apevec_'s opinion, so maybe we can take this one to the ML
15:20:52 <amoralej> in fact, with the current automation model based on rdoinfo, it's easy to push packages manually
15:21:03 <number80> ack
15:21:04 <amoralej> either with cbs or graffiti
15:21:11 <amoralej> it will not break anything
15:21:16 <chandankumar> dmsimard: +1 on taking it to the mailing list for more eyes and ideas
15:21:22 <chandankumar> for improvements
15:21:23 <amoralej> yeah
15:21:30 <dmsimard> ok
15:21:47 <chandankumar> dmsimard: can you take the action item for the same?
15:21:56 <dmsimard> #action dmsimard to create a ML thread about testing the process of shipping a new stable release
15:22:07 <chandankumar> dmsimard: thanks :-)
15:22:31 <chandankumar> i think i can move to the next topic then.
15:22:51 <chandankumar> #topic Test day TOMORROW
15:23:11 <chandankumar> RDO test day is tomorrow https://www.rdoproject.org//testday/queens/milestone2/
15:23:27 <rbowen> Please join us, and please help get the word out to anybody you think might have an hour or two to try it out.
15:23:39 <rbowen> dmsimard is working on our testing cloud.
15:23:46 <dmsimard> oh, not just me
15:23:48 <dmsimard> amoralej and jpena too :D
15:25:40 <chandankumar> Here are the instructions on how to get a test cloud
15:25:43 <chandankumar> #link https://etherpad.openstack.org/p/rdo-queens-m2-cloud
15:26:15 <rbowen> Also, if you have suggestions of what people should test, please update the test matrix
15:26:28 <rbowen> That's a place where we have historically not done a great job. People that do show up aren't sure what to do.
15:27:23 <chandankumar> rbowen: is it a good idea to create a meetup event in my local openstack user group, so that people can join from there if needed?
15:27:53 <rbowen> That's actually a great idea. Thanks.
15:28:04 <chandankumar> rbowen: i will create it right now
15:28:19 <dmsimard> oh
15:28:20 <rbowen> I'll make a note of that for next time. It's a little late now but every bit helps.
15:28:26 <rbowen> Next time we can do it in advance.
15:28:28 <dmsimard> I happen to be a speaker at the Montreal OpenStack meetup tonight
15:28:34 <dmsimard> I'll totally advertise the test day there
15:28:41 <rbowen> dmsimard: awesome. Tell them all to show up and bang on it.
15:29:29 <EmilienM> offer them a free poutine
15:29:33 <EmilienM> they'll help
15:29:46 <dmsimard> EmilienM: the poutine at openstack canada day in ottawa wasn't even that good :/
15:29:56 <rbowen> I think this topic is exhausted.
15:29:56 <EmilienM> disappointing.
15:30:12 <chandankumar> moving to the next topic,
15:30:26 <chandankumar> #topic ICYMI: http://rdoproject.org/newsletter/2017/december/ (Thanks to mary_grace)
15:30:29 <rbowen> I wanted to mention here that I had a lot of help putting together December's newsletter
15:30:41 <rbowen> mary_grace did almost all the work there, and it's the best one we've ever had
15:30:51 <rbowen> So if you want to show off what we're doing, please do tell folks about it.
15:31:06 <rbowen> And help suggesting items for that newsletter is always appreciated.
15:31:18 <rbowen> It goes out to 2000+ people, and I have evidence that at least 3 of them actually read it.
15:31:19 <rdogerrit> Merged rdoinfo master: Add python-octavia-tests-tempest https://review.rdoproject.org/r/10523
15:31:35 <rbowen> </topic>
15:32:18 <chandankumar> thank you rbowen and mary_grace for the nice newsletter :-)
15:32:23 <chandankumar> moving to the next topic
15:32:47 <chandankumar> #topic Review "rebase if necessary" strategy in gerrit (e.g. rdoinfo)
15:32:51 <EmilienM> o/
15:32:58 <chandankumar> #chiar EmilienM
15:33:17 <chandankumar> #chair EmilienM
15:33:18 <openstack> Current chairs: EmilienM PagliaccisCloud amoralej chandankumar dmsimard jpena number80 rbowen
15:33:30 <EmilienM> I want to know why the rdoinfo project is configured in Gerrit in a way that each patch has to be rebased to be merged.
15:33:43 <EmilienM> it's super painful
15:33:51 <chandankumar> it is too irritating.
15:33:55 <rbowen> That's also the case in the 'website' repo, and it is a fairly recent change.
15:33:56 <EmilienM> and it discourages me from reviewing from now on
15:34:01 <amoralej> yeah, i understand it
15:34:08 <rbowen> It didn't used to be that way. I can't pinpoint when it changed.
15:34:15 <EmilienM> so unless that thing changes, i'll probably stop reviewing this project
15:34:16 <dmsimard> amoralej: ^ would it break things to put merge ?
15:34:17 <amoralej> but it's not so easy for rdoinfo
15:34:25 <amoralej> so
15:34:31 <amoralej> there are two problems
15:34:56 <amoralej> 1. make sure it doesn't break the diff-tags command (i need to check it)
15:35:19 <amoralej> 2. we are running deployment jobs (weirdo and tripleo) only in the check pipeline, not in the gate one
15:35:27 <amoralej> to minimize ci resources usage
15:36:00 <amoralej> and changing the policy to merge would mean that we would be merging untested rdoinfo content
15:36:04 <dmsimard> so the problem I see is
15:36:07 <rdogerrit> Merged openstack/cinder-distgit rpm-master: Remove policy.json file https://review.rdoproject.org/r/10962
15:36:12 <dmsimard> If we allow merge in rdoinfo, what can happen is
15:36:28 <dmsimard> we test change A and it works, we test change B and it works -- but testing them together fails
15:36:35 <amoralej> yeah
15:36:50 <amoralej> i see two options:
15:36:52 <dmsimard> so zuul is kind of built to prevent exactly that
15:37:16 <dmsimard> except I'm not sure how to prevent that scenario from happening
15:37:30 <amoralej> 1) add jobs to the gate pipeline + fixing rdopkg if needed + changing to merge
15:37:34 <pabelanger> why would changing the merge strategy allow untested code to be merged into rdoinfo?
15:37:48 <pabelanger> you'd still need a +1 from jenkins / zuul right?
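For context, the split amoralej describes (expensive deployment jobs only in the check pipeline, a lighter set in gate) would look roughly like this in a Zuul v3-style project stanza; the job and scenario names here are illustrative, not the actual review.rdoproject.org configuration:

    # Illustrative sketch only -- job names are invented for the example
    - project:
        name: rdoinfo
        check:
          jobs:
            - rdoinfo-lint
            - weirdo-packstack-scenario001   # expensive deployment job
            - tripleo-quickstart-deploy      # expensive deployment job
        gate:
          jobs:
            - rdoinfo-lint                   # only the cheap validation job gates

In a layout like this, anything that is only combined at submit time is covered by the gate jobs alone, which is the "merging untested rdoinfo content" concern amoralej raises about switching the submit strategy.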
15:37:57 <amoralej> my understanding is that the solution is to add the old jobs to the gate pipeline
15:38:19 <rdogerrit> Merged rdoinfo master: Import ansible-role-redhat-subscription https://review.rdoproject.org/r/10934
15:38:29 <amoralej> pabelanger, i may be wrong here, but the problem is that we only run a subset of jobs in the gate pipeline, not all the ones in check
15:38:48 <amoralej> so in check we are testing master + review, right?
15:39:10 <amoralej> but if there is an automatic rebase, it's only tested in the gate one
15:39:11 <dmsimard> pabelanger: there's two changes, change A and change B -- each is individually tested irrespective of the other. If these two merge, the end result might not be the same
15:40:01 <pabelanger> dmsimard: sure, you could fix that by making a dependent pipeline for rdoinfo, some sort of special check to support that
15:40:37 <pabelanger> or, just keep the gerrit merge strategy the same for rdoinfo and change it globally every place else. IIRC, you can override it per project
15:41:15 <dmsimard> I believe there's just two projects where we have this kind of issue -- the config repo and the rdoinfo repo
15:41:17 <rdogerrit> Merged rdoinfo master: Remove python-django-openstack-auth from queens https://review.rdoproject.org/r/10921
15:41:20 <pabelanger> dmsimard: but the issue you are describing is the same issue that happens upstream with tripleo-check today
15:41:48 <dmsimard> EmilienM is suggesting we change the policy for rdoinfo to merge if required (which I guess is the default upstream) but the concern is about the scenario I told you about.
15:42:13 <amoralej> pabelanger, to be sure i understand correctly, if we have two reviews for rdoinfo, review A and B
15:42:15 <dmsimard> I'd be happy to change the merge policy if zuul can help us test the changes together under the hood (which I guess by design it is supposed to do ?)
15:42:28 <amoralej> in check, master + A and master + B are tested
15:42:34 <amoralej> but with merge policies
15:42:36 <dmsimard> I'm not very familiar with the concept of queues and dependent pipelines in zuul
15:43:04 <amoralej> in the gate pipeline, wouldn't jobs for review B use master + A + merge of B ?
15:43:18 <pabelanger> right
15:43:32 <pabelanger> but that is not specific to gate, that is a dependent pipeline
15:43:42 <pabelanger> which, you could make check or some other pipeline
15:44:09 <pabelanger> https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#value-pipeline.manager.dependent
15:44:19 <amoralej> then, let's assume we make the rdoinfo check pipeline dependent
15:44:24 <dmsimard> pabelanger: ah, okay so we could (theoretically) make check a dependent pipeline
15:44:46 <pabelanger> however, at the risk of increasing complexity, why not just add jobs into gate and the issue is solved by default. I guess you don't have resources?
15:44:47 <amoralej> it would turn into a kind of sequential process ?
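For reference, the dependent pipeline pabelanger links to is declared in Zuul roughly as follows (Zuul v3 syntax per the linked docs; the trigger and voting details here are illustrative, not RDO's actual configuration):

    # Illustrative sketch of a dependent pipeline definition
    - pipeline:
        name: gate
        manager: dependent
        trigger:
          gerrit:
            - event: comment-added
              approval:
                - Workflow: 1
        success:
          gerrit:
            Verified: 2
            submit: true
        failure:
          gerrit:
            Verified: -2

With manager: dependent, Zuul queues the changes and tests review B on top of review A's expected merge -- the master + A + merge of B case amoralej asks about -- at the cost that a failing change ahead in the queue forces everything behind it to be retested, which is the downside pabelanger points out next.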
15:44:53 <pabelanger> amoralej: right
15:45:03 <pabelanger> but, at the risk of being more complex
15:45:26 <pabelanger> because if patch A is broken, then all new reviews are broken
15:45:31 <pabelanger> and will slow development
15:45:38 <amoralej> there is another option that would help to optimize it
15:45:48 <amoralej> re-distribute the data files in rdoinfo
15:45:54 <rdogerrit> rdo-trunk created config master: Create project for octavia-tempest-plugin https://review.rdoproject.org/r/10977
15:46:05 <pabelanger> that is one reason why having check / gate is powerful
15:46:17 <dmsimard> pabelanger: the jobs that run on rdoinfo are pretty expensive
15:46:43 <amoralej> dmsimard, the problem with that is that if a job fails in that pipeline
15:46:46 <amoralej> for a review
15:46:57 <amoralej> all chained reviews need to be re-executed
15:47:10 <pabelanger> dmsimard: sure, cloud resources are cheap :)
15:47:44 <amoralej> dmsimard, jpena, back to option 2), change the rdoinfo files
15:47:51 <amoralej> i explained it in the meeting etherpad
15:48:03 <amoralej> if we would have per-repo files
15:48:08 <pabelanger> amoralej: yah, you'd need a high rate of success for a dependent pipeline, otherwise you'll get into a gate reset loop
15:48:22 <amoralej> cloudsig-newton-testing-tags.yaml
15:48:27 <amoralej> cloudsig-ocata-testing-tags.yaml
15:48:28 <amoralej> and so on
15:48:38 <amoralej> then we'd only have to rebase when it's actually needed
15:48:46 <amoralej> to ensure test consistency
15:48:52 <amoralej> but that needs changes in rdoinfo
15:48:58 <amoralej> and in automation scripts
15:49:10 <dmsimard> ok, so, just to summarize
15:49:30 <dmsimard> the merge strategy in rdoinfo is the way it is for good reasons for the time being
15:49:41 <dmsimard> we can change that, but it requires changes in configuration and tooling which might take a while
15:49:49 <dmsimard> did I get that right ?
15:49:57 <amoralej> yes
15:50:00 <dmsimard> EmilienM: ^
15:51:24 <amoralej> the "easy" solution is to fix rdoinfo (probably not much work) + more CI resources (add jobs to the gate pipeline)
15:51:44 <amoralej> or change the check pipeline to dependent for rdoinfo
15:52:12 <dmsimard> more ci resources
15:52:14 <dmsimard> hah
15:52:24 <amoralej> the "long" solution is to change the rdoinfo data files, but i'd like that to be agreed with the team
15:52:46 <amoralej> as it may fit with the effort to transform rdoinfo into a python module
15:52:50 <amoralej> + data files
15:52:53 <amoralej> in a separate repo
15:53:08 <EmilienM> ok
15:53:08 <amoralej> that has been discussed in the past
15:53:23 <dmsimard> rdoinfo hasn't exactly scaled very well
15:53:33 <dmsimard> it grew organically as we added stuff we needed
15:53:33 <rdogerrit> rdo-trunk created config master: Create project for ansible-role-redhat-subscription https://review.rdoproject.org/r/10978
15:53:42 <EmilienM> I enjoyed doing review on this project!
15:53:51 <dmsimard> it might be worth thinking about how we would like things to be
15:54:39 <amoralej> yeah, i agree
15:54:55 <amoralej> that's what i'd like to do before we start changing things
15:55:39 <dmsimard> I guess we're done with this topic
15:55:41 <dmsimard> open floor ?
15:56:04 <chandankumar> #topic chair for next meeting
15:56:26 <chandankumar> Anyone up for volunteering for the next meeting
15:56:28 <chandankumar> ?
15:56:55 <amoralej> i can take it
15:56:55 <chandankumar> Do we want to keep the meeting on 27th Dec, 2017? As it is during the shutdown
15:57:04 <rbowen> A lot of people are going to be out the next 2 - 3 weeks.
15:57:08 <chandankumar> #action amoralej to chair for next meeting
15:57:17 <amoralej> no, i think we'll cancel on 27th
15:57:22 <rbowen> Cancel on the 27th
15:57:53 <chandankumar> #info 27th Dec, 2017 RDO community meeting is cancelled due to shutdown :-)
15:58:01 <chandankumar> #topic openfloor
15:58:19 <chandankumar> we still have 3 mins left, any topic to kickstart
15:59:33 <chandankumar> if not then let's end on a count of 3
15:59:35 <chandankumar> ...
15:59:37 <chandankumar> ..
15:59:40 <chandankumar> .
15:59:45 <chandankumar> #endmeeting
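To make the per-release data file idea from the rdoinfo discussion above concrete, one possible shape for the split amoralej sketched is shown below; the file names come from the meeting, while the package names and tag values are invented purely for illustration:

    # cloudsig-queens-testing-tags.yaml (illustrative content only)
    cloud7-openstack-queens-testing:
      openstack-nova: openstack-nova-17.0.0-0.1.0rc1.el7
      openstack-neutron: openstack-neutron-12.0.0-0.1.0rc1.el7

    # cloudsig-ocata-testing-tags.yaml (illustrative content only)
    cloud7-openstack-ocata-testing:
      openstack-nova: openstack-nova-15.1.0-1.el7
      openstack-neutron: openstack-neutron-10.0.5-1.el7

With the tag data split per release this way, a change touching only the queens file would no longer force a rebase of every pending review against the other releases' data, which is the "rebase only when actually needed" point amoralej makes.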