15:01:06 <number80> #startmeeting RDO meeting - 2017-09-27
15:01:07 <openstack> Meeting started Wed Sep 27 15:01:06 2017 UTC and is due to finish in 60 minutes. The chair is number80. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:11 <openstack> The meeting name has been set to 'rdo_meeting___2017_09_27'
15:01:15 <rbowen> Partially here, but I'm in an all-day meeting
15:01:22 <number80> #chair amoralej dmsimard rbowen
15:01:23 <openstack> Current chairs: amoralej dmsimard number80 rbowen
15:01:27 <number80> #topic roll call
15:01:31 <rbowen> o/
15:01:34 <jpena> o/
15:01:37 <amoralej> o/
15:01:42 <number80> #chair jpena
15:01:42 <openstack> Current chairs: amoralej dmsimard jpena number80 rbowen
15:01:55 <number80> amoralej: if you want to switch, feel free
15:01:55 <jatanmalde|lunch> o/
15:02:10 <amoralej> number80, no, thanks for starting it
15:02:19 <number80> #chair jatanmalde
15:02:20 <openstack> Current chairs: amoralej dmsimard jatanmalde jpena number80 rbowen
15:02:26 <amoralej> i'll chair next week
15:02:36 <number80> chandankumar, ykarel, shreshtha, jschlueter ^
15:02:38 <number80> amoralej: ack
15:02:54 <chandankumar> \o/
15:02:55 <number80> agenda is here
15:03:00 <number80> https://etherpad.openstack.org/p/RDO-Meeting
15:03:04 <number80> #chair chandankumar
15:03:05 <openstack> Current chairs: amoralej chandankumar dmsimard jatanmalde jpena number80 rbowen
15:03:33 <number80> let's start with infra since we have most people around
15:03:36 <myoung> o/
15:03:41 <number80> #chair myoung
15:03:42 <openstack> Current chairs: amoralej chandankumar dmsimard jatanmalde jpena myoung number80 rbowen
15:04:01 <number80> We'll start with Infrastructure topics
15:04:11 <jruzicka> o/
15:04:27 <number80> Duck: ^
15:04:34 <number80> #chair jruzicka
15:04:35 <openstack> Current chairs: amoralej chandankumar dmsimard jatanmalde jpena jruzicka myoung number80 rbowen
15:05:18 <number80> ok, people who submitted topics are not here, so let's wait a few more minutes
15:05:33 <arxcruz> o/
15:05:43 <number80> arxcruz: excellent :)
15:05:46 <number80> #chair arxcruz
15:05:47 <openstack> Current chairs: amoralej arxcruz chandankumar dmsimard jatanmalde jpena jruzicka myoung number80 rbowen
15:05:54 <number80> #topic Can we have a DLRN API endpoint for OSP in RDO infrastructure?
15:06:01 <number80> arxcruz: let's start!
15:06:27 <arxcruz> yes, so, we want to use DLRN everywhere
15:06:45 <arxcruz> we know that RDO has endpoints for ocata, master, pike
15:06:52 <arxcruz> we want to create one for OSP as well
15:06:59 <arxcruz> is it possible to enable it on RDO infrastructure?
15:07:25 <jpena> I'm trying to understand this. The idea is to report on CI jobs for OSP, right?
15:07:52 <Duck> quack
15:07:52 <dmsimard> I think we've already discussed this in the past and decided against it, but my memory fails. I think if we ended up wanting to do that, we'd need to map an OSP "puddle" back to a DLRN hash and report against that DLRN hash
15:08:03 <jschlueter> o/
15:08:07 <Duck> sorry, my previous confcall is trying to finish now
15:08:09 <arxcruz> dmsimard: yes, exactly
15:08:10 <number80> #chair Duck jschlueter
15:08:11 <openstack> Current chairs: Duck amoralej arxcruz chandankumar dmsimard jatanmalde jpena jruzicka jschlueter myoung number80 rbowen
15:08:29 <dmsimard> arxcruz: so you don't need any "endpoints", right? You just need credentials to report to the proper place
15:08:53 <dmsimard> say you're reporting data for an OSP "pike" thing, the OSP pike thing needs to know what RDO hash it came from and report it accordingly
15:08:54 <myoung> correct, Release Delivery is going to be dropping the dlrn hash used as the base for the OSP import co-located with the puddle, so we'll be able to log results from OSP phase 0 (like we are doing for RDO phases 2 and 1) via the dlrn api
15:09:03 <trown> o/
15:09:12 <number80> #chair trown
15:09:13 <openstack> Current chairs: Duck amoralej arxcruz chandankumar dmsimard jatanmalde jpena jruzicka jschlueter myoung number80 rbowen trown
15:09:13 <myoung> so we would need endpoints for rhos-10, rhos-11, rhos-12 to report against
15:09:21 <myoung> like we have for newton/ocata/pike/master
15:09:27 <arxcruz> yes
15:09:30 <arxcruz> thanks myoung
15:09:45 <dmsimard> I don't think I'm following. What's the need here if you're not reporting against DLRN hashes?
15:09:48 <jpena> myoung: but what would you be reporting on? For the DLRN API, the "base item" is a commit_hash+distro_hash
15:10:22 <Duck> quack, I'm ready now :-)
15:10:35 <jschlueter> jpena: reporting, for that same "base item" set of changes, what the CI results from OSP are
15:10:44 <trown> ah right, so pike would just have entries for OSP jobs running pike with the dlrn_hash they were imported from
15:11:04 <dmsimard> trown: that's my understanding
15:11:16 <trown> myoung: arxcruz: does that make sense?
15:11:18 <number80> ok, so we'd be running OSP jobs against RDO DLRN builds but they want to report the results to the endpoint
15:11:28 <dmsimard> if I run an OSP pike job, and that OSP build is derived from hash 'abcd', that OSP pike job would report against pike on hash 'abcd'
15:11:30 <myoung> correct, today afaik the 'key' is commit_hash+distro_hash, logged against an endpoint like "pike" - would like to be able to log results against a separate endpoint, as an OSP puddle starts as a base hash, but can oftentimes have additional patches applied/removed. it's not "exactly" the same as rdo phases
15:11:45 <trown> number80: no, the OSP jobs would be running against OSP... just reporting to the same API by an associated hash
15:11:50 <myoung> it's "mostly" the same :)
15:12:19 <dmsimard> myoung: What's the underlying need? You want to decentralize the OSP promotion process?
15:12:26 <number80> trown, myoung: so against OSP rebuilds of the RDO hash
15:12:30 <trown> myoung: but there is only one OSP "pike", right? so as long as you have a place to report, and a place to check results, you would be good?
15:12:30 <myoung> but I don't necessarily want to "muddy results" by reporting results from OSP and RDO to the same endpoint, afaict there's not a way to disambiguate other than encoding things into the job name, etc
15:12:34 <trown> number80: ya
15:12:49 <amoralej> dlrn api is intended to report ci results for jobs running with repos created by DLRN
15:12:54 <PagliaccisCloud> o/
15:13:00 <number80> gotcha, I think it may require having additional fields, right?
15:13:05 <number80> #chair PagliaccisCloud
15:13:05 <openstack> Current chairs: Duck PagliaccisCloud amoralej arxcruz chandankumar dmsimard jatanmalde jpena jruzicka jschlueter myoung number80 rbowen trown
15:13:10 <amoralej> or derived from repos related to a hash
15:13:26 <trown> I am not sure extra fields are required
15:13:46 <trown> all promotions now happen based on an arbitrary list of jobs that need to pass for that promote
15:13:47 <dmsimard> Can someone please answer my previous question
15:14:00 <trown> so there is no reason adding more jobs reporting would mess that up
15:14:02 <dmsimard> What is the purpose? You want to decentralize the OSP promotion process?
15:14:21 <trown> dmsimard: it is more about reusing the same logic in all the phases
15:14:46 <number80> ok
15:14:52 <trown> dmsimard: so we have a promote script that checks a list of jobs and promotes ... and that same script could be used at all phases
15:14:59 <trown> as long as all phases report
15:15:03 <number80> so basically, they want a 2- or 3-phase pipeline
15:15:04 <arxcruz> exactly
15:15:10 <number80> RDO promotion is phase 0
15:15:12 <dmsimard> trown: sure, okay, so OSP jobs can still do that while reporting to RDO DLRN hashes
15:15:14 <myoung> OSP puddles are often the same content - at a patch level - as the contents of a delorean repo, but the RPMs are built from OSP distgit/repos, and there are oftentimes cases where additional patches are needed, either proactively or removed... so for OSP results it's really commit_hash+distro_hash, +- additional OSP-only patches. If there's an additional field to use to report/query to disambiguate osp/rdo, cool. seems simpler to just have a discrete endpoint though
15:15:23 <trown> dmsimard: yep, agree
15:15:25 <number80> they want to record OSP CI results there as phase 1 and ...
15:15:56 <dmsimard> what's the URL again for the dlrn api frontend? I always forget it
15:15:59 <trown> myoung: I don't think an endpoint is needed...
15:16:33 <number80> it would be interesting to discuss with DCI folks
15:16:39 <myoung> until recently we didn't have an artifact co-located with OSP puddles indicating which delorean hash the import was *based* on, we expect this to land soon so we can now log these results (that's the "why now")
15:16:40 <trown> myoung: we only care about job names
15:16:56 <dmsimard> myoung: that's great
15:16:58 <jpena> all you need to do to encode those additional patches and disambiguate is to use the "notes" field when calling report_result
15:17:28 <trown> jpena: myoung: ya, I am unclear whether we are even interested in tracking those differences via dlrn_api...
15:17:33 <dmsimard> right, I think the job names and the notes field will be enough by themselves
15:17:35 <myoung> jpena: understood, I just wanted to be really clear then that if we're going to log (for example) osp12 results against "pike" there are 2 basic issues
15:17:50 <dmsimard> plus, it will be valuable data to get OSP jobs feedback on DLRN hashes IMO
15:17:50 <trown> seems we just care about "did X jobs pass on this hash"... if so, then promote
15:17:53 <myoung> 1.) osp != rdo at times at a patch level, so we could have some variance, where we're muddying the waters
15:18:03 <dmsimard> myoung: it doesn't matter because it's just data
15:18:06 <myoung> 2.) "osp 12" started at "master", until we created "pike"
15:18:11 <trown> ya, but osp == osp at all times
15:18:19 <myoung> every osp cycle it's "master" until the RDO branches are created
15:18:21 <trown> and we would only promote osp based on osp jobs
15:18:36 <myoung> so we'll have for example osp 13 results split between "master" and "queens"
15:18:38 <dmsimard> myoung: the OSP data won't be used in the RDO promotion process but the data will be there and available
15:18:42 <rdogerrit> Merged openstack/octavia-distgit rpm-master: Use %{service} macro instead of 'octavia' https://review.rdoproject.org/r/9775
15:18:42 <myoung> if we don't have a discrete endpoint for 13
15:18:46 <number80> myoung: as long as you keep the mapping 1:1 between those hashes and internal snapshots, it doesn't really matter
15:18:46 <dmsimard> I don't see that as a problem
15:18:52 <jpena> me neither
15:19:27 * myoung shrugs and nods
15:19:34 <dmsimard> if anything, it'll allow us to do data analytics (BIG DATA!) across a variety of RDO and OSP jobs on given dlrn hashes
15:20:07 <myoung> right, potentially complicating the ability to say "show me osp13 results" - we'll always have to hit multiple endpoints
15:20:15 <myoung> (queens, master)
15:20:24 <trown> 2 could be trickier, because the promote piece has to know which results to query and it would change over the course of 13 for example... but that seems like a pretty small issue
15:20:42 <jpena> creating specific endpoints opens its own can of worms, for example database integrity: commit ids in votes are a foreign key
15:20:45 <trown> myoung: ya, the reverse is also true though if there was a separate endpoint for 13
15:21:01 <myoung> trown: that's a good/valid point
15:21:04 <trown> myoung: take the query "show me all jobs that passed on hash Z"
15:21:14 <dmsimard> myoung, trown: osp13 still ultimately ties back to a dlrn hash whether it's on queens or master
15:21:16 <trown> myoung: you would need to query multiple endpoints to get all of that data
15:21:23 <myoung> so long as we're cool/ok with logging OSP results via dlrn api at a high level, I'm happy
15:21:30 <dmsimard> the logic is client side, not server side
15:21:35 <myoung> just wanted to be clear that OSP is not always *the same patches per rpm* as RDO
15:21:39 <myoung> for a given puddle/hash
15:21:48 <dmsimard> that's fine
15:22:12 <myoung> so TLDR: use the notes field and/or job name to indicate rdo vs. osp?
15:22:21 <trown> myoung: job name
15:22:21 <number80> myoung: as previously stated, as long as there's a 1:1 mapping, it doesn't matter
15:22:23 <dmsimard> +1
15:22:23 <jpena> myoung: yes
15:22:28 <number80> +1
15:22:36 <trown> myoung: because promote logic is based on job names
15:22:36 <myoung> groovy. \o/
15:22:51 <myoung> exciting to see dlrn api stretching into OSP-land #ftw
15:22:58 <number80> so can someone formulate an action?
15:22:59 <dmsimard> trown: right, but the notes field is freeform to add misc data so it can be used to identify whatever
15:23:02 <number80> or an info item?
15:23:39 <myoung> trown: ack, makes sense.
15:24:39 <myoung> number80: if I'm understanding the above, the takeaway is we don't need an action, we can use things as is.
15:24:43 <trown> #info OSP will start reporting data to dlrn_api via the dlrn hash it was imported from ... soon
15:25:27 <number80> myoung: yeah, but we need a trace in the meeting logs :)
15:25:29 <number80> trown: thanks
15:25:43 <number80> so next topic?
15:26:29 <number80> next
15:26:36 <number80> #topic Mailman3 migration
15:26:39 <number80> Duck:
15:26:43 <Duck> quack
15:27:00 <dmsimard> someone made the openstack bot an operator?
15:27:12 <Duck> I've been trying to get input on my tickets, and fortunately people recently replied :-)
15:27:34 <Duck> would like some input from dmsimard too
15:27:38 <number80> (not me)
15:28:09 <Duck> so we can sort out what we do to fix the repository conflict with the Mailman deps (for people not knowing the subject)
15:28:14 <dmsimard> right, so the gist of the problem is that the sensu implementation of opstools requires the installation of different repositories, which happen to conflict with some things
15:28:27 <Duck> then there is still the LMTP bug (with loss of mail), I had no time to dig into it
15:28:32 <jruzicka> Duck, any news on the losing-mails-under-load issue?
15:28:48 <Duck> but no reply from MM upstream either
15:29:16 * number80 saw an Aurelien out of his cave somewhere
15:29:20 <Duck> dmsimard: from what we've seen by digging, a sensu *client* does not need all this
15:29:23 <Duck> only the sensu repo
15:29:30 <Duck> and this would be fine for MM
15:29:56 <Duck> so I proposed to test it on the machine by downgrading all packages to the base system and testing if sensu works well
15:30:09 <Duck> I can prepare the machine but have no knowledge about sensu
15:30:14 <Duck> and no access either
15:30:29 <number80> anyone who can help?
15:30:44 * number80 is un-sensu-ble
15:30:47 <Duck> dmsimard: so, first, is the plan ok (followed by patching the opstools role if it works), and if ok, would you help me test?
15:31:14 <dmsimard> Duck: if this is a blocker, let's do whatever is easiest to unblock it. I'm okay with not monitoring that machine for the time being
15:31:44 <dmsimard> We can look at why opstools-ansible pulls in those extra repositories if they don't seem to be required
15:31:58 <dmsimard> Wouldn't be the first patch I sent their way
15:32:51 <Duck> dmsimard: it's a blocker, but the LMTP bug is a blocker too
15:33:10 <Duck> so if you have a little bit of time, we can test and then be sure it works before working on a patch
15:33:12 <dmsimard> I'm afraid I'm out of the loop on LMTP
15:33:25 <Duck> and btw the machine would be cleaned
15:33:43 <dmsimard> we can even reinstall it if you'd like
15:33:47 <Duck> just do not reapply the base playbook on it and we're fine :-)
15:34:08 <dmsimard> yeah, that's fine
15:34:23 <Duck> so I can ping you to test sensu?
15:34:53 <Duck> dmsimard: my main problem is getting your opinion and time :-)
15:34:53 <rdogerrit> Attila Darazs created rdo-infra/ci-config master: Enable rdocloud based promotion for upstream jobs https://review.rdoproject.org/r/9778
15:35:04 <Duck> LMTP is much more complex, I fear
15:35:29 <dmsimard> let's try and find out why opstools-ansible pulls in those repos first, and we might not even need to do any further testing if it turns out it really doesn't need those repos
15:35:53 <dmsimard> but yes, it's something I can spend time on
15:36:33 <Duck> dmsimard: from what I've seen it's just because they mixed client and server config in the same bunch and this is part of the common install
15:36:36 <Duck> ok
15:37:05 <Duck> as for LMTP, if anyone is proficient in Python and has some time, it could help. I'll try to ping upstream
15:37:36 <Duck> that's all from my side
15:37:39 <number80> #action Duck ping upstream about LMTP. Request for help if you can
15:37:42 <dmsimard> Duck: can you link a reference to the LMTP issue
15:37:54 <dmsimard> you keep mentioning it but I have no idea what you're talking about :)
15:38:02 <Duck> maybe dmsimard or apevec or jpena would have things to add
15:38:14 <Duck> let me find it
15:39:12 <Duck> https://github.com/mailman/mailman/issues/152
15:39:59 <dmsimard> hang on, so hyperkitty still sits on top of MM3?
15:40:07 <dmsimard> it's just the UI?
15:40:33 <number80> hyperkitty is just the MM3 UI
15:40:41 <number80> MM3 has split backend and frontend
15:40:42 <Duck> dmsimard: yes, a better UI
15:41:15 <Duck> PonyStuff is untested and I've got no deployment rules, so we're with the defaults, the most tested one
15:41:26 <Duck> (and packages)
15:41:52 <Duck> but the LMTP problem is routing the mails properly, so this is slightly critical
15:41:56 <dmsimard> ok
15:42:19 <Duck> it's in the mailman3 daemon, not in the UI
15:42:55 <Duck> OMFG, the certificate for mail.corp.redhat.com just expired :-/
15:43:09 <Duck> sorry, ranting time :-)
15:43:30 <Duck> dmsimard: I can provide more insight by mail or IRC
15:43:42 <Duck> number80: too
15:45:18 <Duck> number80: please don't loop
15:45:33 <number80> ok
15:45:36 <Duck> maybe people have other topics to discuss
15:45:46 <number80> that was the last one in the agenda
15:45:50 <Duck> ho
15:45:56 <number80> #topic open floor
15:46:08 <number80> last call for throwing out whatever you have in mind
15:46:14 <number80> #info amoralej chairing next meeting
15:46:47 <number80> This week, I'll be cleaning up our release trackers + trello board
15:47:54 <number80> I also want to draw attention to this discussion about dependency management:
15:47:55 <number80> https://review.rdoproject.org/etherpad/p/dependencies-management
15:48:12 <number80> This concerns everyone in this channel, please look at it and provide your feedback
15:48:33 <number80> #action * help on brainstorming deps management
15:49:06 <number80> does anyone have something to say, or should I just close the meeting?
15:50:20 <jpena> nothing for me
15:52:40 <number80> ok
15:52:52 <number80> Thanks for attending, you get 7 minutes of your life back :)
15:52:55 <number80> #endmeeting
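
Addendum to the DLRN API topic above (15:05-15:25): the agreed approach is to keep reporting against the existing commit_hash + distro_hash key and to distinguish OSP results from RDO ones through the job name and the free-form "notes" field of report_result, rather than adding per-OSP-release endpoints. The sketch below illustrates what such a report could look like; it is only a sketch based on this discussion, and the endpoint URL, the /api/report_result route, the exact payload fields, and the job name are assumptions or placeholders, not a verified client implementation.

    # Hedged sketch: report an OSP CI result against the RDO DLRN hash the
    # OSP puddle was imported from. The commit_hash/distro_hash key, the job
    # name and the notes field come from the discussion above; the endpoint
    # URL and exact report_result payload are assumptions.
    import requests

    DLRN_API = "https://trunk.rdoproject.org/api-centos-pike"  # placeholder endpoint

    def report_osp_result(commit_hash, distro_hash, job_name, success,
                          log_url, notes, auth):
        """POST a CI vote for the given commit_hash + distro_hash pair."""
        payload = {
            "commit_hash": commit_hash,
            "distro_hash": distro_hash,
            "job_id": job_name,   # e.g. "periodic-osp12-ovb-ha" (hypothetical name)
            "success": success,
            "url": log_url,
            "notes": notes,       # e.g. "OSP puddle, extra patches vs RDO hash"
        }
        resp = requests.post(DLRN_API + "/api/report_result",
                             json=payload, auth=auth)
        resp.raise_for_status()
        return resp.json()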
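
The promote logic trown describes ("promote logic is based on job names") can likewise be read as a simple client-side check: a hash is promoted once every job in a required list has reported success for it, regardless of whether those jobs are RDO or OSP. A sketch under the same caveats; the /api/repo_status route, the shape of its response, and the job names are assumptions and placeholders:

    # Hedged sketch of a promote check: fetch the CI votes recorded for a
    # commit_hash + distro_hash pair and promote only if every required job
    # (identified purely by job name) has a passing vote.
    import requests

    REQUIRED_JOBS = {"periodic-osp12-ovb-ha", "periodic-osp12-minimal"}  # placeholders

    def ready_to_promote(api_url, commit_hash, distro_hash):
        resp = requests.get(api_url + "/api/repo_status",
                            params={"commit_hash": commit_hash,
                                    "distro_hash": distro_hash})
        resp.raise_for_status()
        passed = {vote["job_id"] for vote in resp.json() if vote.get("success")}
        return REQUIRED_JOBS.issubset(passed)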