15:01:17 <jpena> #startmeeting RDO meeting - 2018-08-29
15:01:17 <openstack> Meeting started Wed Aug 29 15:01:17 2018 UTC and is due to finish in 60 minutes.  The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:20 <openstack> The meeting name has been set to 'rdo_meeting___2018_08_29'
15:01:24 <leanderthal> o/
15:01:25 <number80> o/
15:01:31 <jpena> #topic roll call
15:01:35 <jpena> #chair leanderthal number80
15:01:35 <openstack> Current chairs: jpena leanderthal number80
15:01:49 <jpena> remember, the agenda is available at https://etherpad.openstack.org/p/RDO-Meeting in case you want to add any last-minute topic
15:01:59 <baha> o/
15:02:09 <ykarel> o/
15:02:21 <jpena> #chair baha ykarel
15:02:22 <openstack> Current chairs: baha jpena leanderthal number80 ykarel
15:02:55 <jpena> let's start with the agenda, we have many topics to cover
15:03:03 <jpena> #topic Policy for abandoning packages proposal
15:03:19 <jpena> ykarel, number80: is ^ yours?
15:03:33 <ykarel> not mine
15:03:34 <mjturek> o/
15:03:41 <jpena> #chair mjturek
15:03:42 <openstack> Current chairs: baha jpena leanderthal mjturek number80 ykarel
15:04:04 <number80> Yep
15:04:18 <number80> We have cases where packages lose their maintainers and are not replaced
15:04:24 <number80> So I suggest a few things:
15:04:41 <number80> 1. ping maintainers once a year to check whether they're still active
15:05:10 <number80> 2. have a proper procedure to abandon and transition packages to someone else (and announce it if there are no candidates)
15:05:29 <number80> So what do you think and what's missing in the headlines?
15:07:19 <number80> Nobody? I guess I will then follow up in the mailing list :)
15:07:51 <number80> #action number80 create a thread about abandoning packages policy on the rdo-dev list
15:08:30 <jpena> I think both options are valid... It's more about how to manage packages when the initial maintainers are no longer responsive
15:08:38 <jpena> so probably option 1 is a bit better to me
15:09:52 <ykarel> so a year is not much time
15:10:09 <jruzicka> yearly maintainer ping might be worth a try
15:11:31 <jruzicka> proper procedure to abandon and transition packages... other than creating respective rdoinfo review?
15:11:31 <leanderthal> wouldn't it need to be twice yearly?
15:11:42 <ykarel> ack let's try that
15:11:43 <number80> OK
15:12:01 <number80> leanderthal: we can change the cadence if needed
15:12:05 <leanderthal> fair
15:12:17 <jruzicka> or maybe we could go with a less intrusive way of examining builds/distgits and detecting inactivity
15:12:29 <number80> jruzicka: to my non-surprise, people don't know what to do, so documenting it is useful :)
15:12:55 <jruzicka> number80, so basically that translates to documenting what to do... sounds useful :)
15:12:58 <number80> jruzicka: if we can exploit gerrit data, we can target the people to be pinged
15:13:47 <jruzicka> if a package hasn't been updated for some time and there is no activity from the maintainer somewhere, we might detect it and make someone check
15:14:12 <jruzicka> number80, yes gerrit data, distgit/patches branch commits, builds...
15:14:30 <number80> jruzicka: builds are now made by a bot so not relevant here :)
15:14:39 <jruzicka> right :)
15:15:11 <jruzicka> anyway, a solution that doesn't require any action from a maintainer who is still active is preferred here
15:16:01 <number80> ack
15:16:04 <jruzicka> documenting what to do when a maintainer is gone sounds almost inevitable
15:16:13 <number80> hell yeah :)
15:16:58 <jpena> shall we move on to the next topic?
15:17:09 <apevec> we could copy fedora unresponsive maint process?
15:17:36 <leanderthal> yessssss
15:17:39 <leanderthal> +1
15:18:17 <number80> apevec: we could adapt it
15:19:35 <leanderthal> exactly.
15:20:28 <number80> I have enough material to shape something for vote next week then :)
15:21:02 <jpena> #topic Release preparations
15:22:19 <number80> We have repositories ready (all 3 platforms) and same for centos-release-openstack-rocky
15:22:39 <number80> leanderthal: do you need any help with the release announce?
15:23:01 <leanderthal> number80, of course
15:23:14 <leanderthal> i have this https://www.rdoproject.org/rdo/release-checklist/
15:23:18 <leanderthal> #link https://www.rdoproject.org/rdo/release-checklist/
15:23:26 <number80> Yup
15:24:24 <leanderthal> and i've started https://etherpad.openstack.org/p/rdo-rocky-release
15:24:26 <leanderthal> #link https://etherpad.openstack.org/p/rdo-rocky-release
15:24:31 <number80> \o/
15:24:48 <rdogerrit> Ronelle Landy proposed rdo-jobs master: Add subnodes and primary node IPs to yaml file  https://review.rdoproject.org/r/16013
15:25:33 <number80> #action all help with the release announcement
15:25:37 <leanderthal> who is willing / able to contribute to the release announcement? this will be one of my main projects at ptg (the other one being the interviews)
15:26:43 <apevec> release announcement should go out before ptg?
15:26:51 <leanderthal> also, can we take a minute to squee for that sentence you made earlier, number80 ?
15:27:02 <leanderthal> #info We have repositories ready (all 3 platforms) and same for centos-release-openstack-rocky
15:27:05 <leanderthal> SQUEEEEEE
15:27:14 <number80> Yay \o/
15:27:18 <leanderthal> apevec, no, it'll go out during or immediately after ptg
15:27:23 <apevec> if CI works, we can release tomorrow?
15:27:28 <apevec> oh
15:27:36 <rdogerrit> Nicolas HICHER proposed rdo-jobs master: DNM: use hostvar to set /etc/nodepool/provider  https://review.rdoproject.org/r/15919
15:27:36 <leanderthal> apevec, i'd LOVE to do it before PTG
15:27:37 * ykarel having some network issues
15:27:51 <leanderthal> is it possible?
15:28:01 <apevec> leanderthal, upstream GA is tomorrow
15:28:20 <apevec> ykarel, ^ how is CI doing, any blockers for release tomorrow?
15:28:37 <ykarel> apevec, just some NODE_FAILURES started
15:28:40 <ykarel> rest is good
15:28:42 <leanderthal> it'd be super keen to do the announcement tomorrow, but it's highly unlikely given the time needed to gather the information
15:28:49 <ykarel> apevec, upstream GA postponed?
15:29:02 <ykarel> wasn't that today?
15:29:16 <apevec> ykarel, schedule still has tomorrow
15:29:28 <apevec> https://releases.openstack.org/rocky/schedule.html#r-release
15:29:59 <rdogerrit> Nicolas HICHER proposed rdo-jobs master: Add vexxhost job and nodeset  https://review.rdoproject.org/r/15895
15:30:09 <apevec> leanderthal, yeah, also it usually takes time to get SIG rpms signed by centos and published on mirror.c.o
15:30:22 <ykarel> mmm maybe I misread earlier, it was on the 29th
15:30:59 <apevec> min rdo ga = upstream ga + time to rebuild rpms in CBS + CI time + centos signing and sync to mirror.c.o
15:31:04 <apevec> leanderthal, ^
15:31:17 <apevec> for one release, it was <12h
15:31:21 <ykarel> apevec, also promotion to -testing and then -release
15:31:28 <apevec> number80, ^ or was it even less?
15:31:40 <apevec> ykarel, yeah, that's "CI time"
15:31:41 <leanderthal> apevec, that was two years ago - it was awesomeness. but the release announcement doesn't have the same speed
15:31:45 <leanderthal> due to gathering the info
15:31:46 <ykarel> apevec, ohhk
15:32:44 <apevec> yeah, we could start announcement draft earlier for next GA
15:33:11 <leanderthal> agreed
15:33:28 <number80> apevec: no less than 12h, but if we'd had the repo ready, we could have done it within 2h
15:33:40 <number80> so it's possible to break the record this time
15:33:57 <ykarel> yeah we can try that
15:34:06 <ykarel> setting new targets
15:34:23 <apevec> ah, we already have http://mirror.centos.org/centos/7/cloud/x86_64/openstack-rocky/ !
15:34:45 <apevec> with RC builds
15:34:55 <ykarel> yes that was published today
15:35:01 <apevec> cool!
15:35:23 <ykarel> we only have a few candidates; I will be proposing a patch in a few minutes
15:35:33 <ykarel> so we are in sync
15:35:37 <number80> :)
15:35:51 <apevec> nice
15:35:53 <ykarel> but CI :(
15:36:16 <ykarel> seeing NODE_FAILURES
15:37:14 <apevec> not nice :(
15:37:35 <apevec> I hope it's not rdocloud weekly outage coming up
15:38:07 <ykarel> me too hoping the same
15:39:04 <jpena> next topic?
15:39:50 <number80> yup
15:40:00 <jpena> #topic Relationship with rpm-packaging project
15:40:04 <jpena> that's mine
15:40:35 <jpena> so you're probably aware that there is an upstream rpm-packaging project, where we are collaborating primarily with people from SUSE, but also from some other companies
15:41:18 <jpena> we've been successful in reusing some tools created by that project, but we've never been able to reuse the spec file templates generated by the project
15:41:50 <apevec> the only one we reuse directly is openstack-macros
15:41:54 <apevec> in rdo trunk
15:41:58 <jpena> so the RDO community involvement has always been like "half-done"
15:42:26 <apevec> we tried to reuse more for python3 poc
15:42:42 <jpena> during the last cycle, we've only seen reviews/commits from ykarel or me, and we need to consider how we want to collaborate there (if at all)
15:42:51 <apevec> but looks like maintainers lean to add py3 support directly
15:43:14 <apevec> jpena, I think it makes sense to continue contributing to common tooling
15:43:23 <apevec> where it makes sense like pymod2pkg
15:43:55 <apevec> but for spec.j2
15:44:50 <jpena> leaving the spec.j2 side would mean stopping the 3rd party CI jobs, I assume
15:44:58 <jpena> so we'd effectively be pulling out
15:45:10 <apevec> yeah, we wouldn't have a use-case to support it
15:45:44 <number80> yeah
15:46:18 <number80> At some point, we can't do everything so focusing on tooling is probably the most sensible path
15:46:25 <jpena> do you think we should discuss it on the mailing list, or can we make a decision now?
15:47:02 <number80> Expanding the discussion on the list wouldn't hurt but I'd limit it in time
15:47:12 <leanderthal> i recommend discussing ^ that.
15:47:21 <leanderthal> yes. start the discussion with a deadline
15:47:32 <jpena> ok, sounds like a plan
15:47:52 <jpena> #action jpena will start discussion on rdo-dev list about rpm-packaging involvement, with a deadline to decide
15:47:59 <leanderthal> something along the lines of, "this is what we'd like to do, we're making the decision at the rdo meeting on such and such date; please discuss here and / or attend the meeting to help decide"
15:48:23 <leanderthal> i recommend two weeks except that it's right over ptg, so.....
15:49:15 <jpena> let's see if we get somewhere in one week
15:49:29 <apevec> sounds good
15:49:30 <jpena> and let's move to the next topic, we have lots to cover and little time :D
15:49:30 <leanderthal> awesomeness
15:49:42 <jpena> #topic Patch for introducing job for ppc64le container builds
15:49:42 <apevec> let's move on, 10min left and few more topics
15:49:51 <jpena> mjturek, baha ^^
15:50:00 <mjturek> hey jpena pretty straight forward
15:50:12 <mjturek> have a patch available that should introduce the job
15:50:21 <mjturek> would like some eyes on it as it's pretty ugly right now
15:50:24 <baha> https://review.rdoproject.org/r/#/c/15978/
15:51:48 <leanderthal> mjturek, baha - also email devs list about this one
15:51:49 <apevec> adding few reviewers
15:51:54 <apevec> in gerrit
15:52:01 <leanderthal> awesomeness, thx apevec
15:52:18 <jpena> #action CI reviewers to check https://review.rdoproject.org/r/#/c/15978/
15:52:39 <mjturek> thanks all
15:52:49 <jpena> #topic Hardware for building octavia-tempest-test-golang package
15:53:01 <mjturek> so baha had emailed about this package
15:53:16 <mjturek> and the resolution seemed to be that it couldn't be built as there's no hardware for it in RDO
15:53:27 <mjturek> wondering who we could talk to about introducing power hardware into the cloud
15:53:40 <mjturek> and if that's something you all are open to
15:53:57 <leanderthal> me and ..... leadership team
15:54:12 <leanderthal> mjturek, let's have a meeting about this
15:54:23 <mjturek> sounds good
15:54:24 <apevec> mjturek, alternative was to build that pkg in CBS
15:54:36 <apevec> it's actually annoying subpkg
15:54:36 <leanderthal> is that possible? ^
15:54:39 <baha> That was my thought. The CentOS cloud has access to power nodes, and it's only one package
15:54:42 <apevec> just for testing
15:55:02 <mjturek> apevec that would be nice!
15:55:11 <apevec> it's not changing that frequently?
15:55:11 <leanderthal> did you two ask on #centos-devel ?
15:55:24 <apevec> leanderthal, CBS is multiarch
15:55:27 <baha> We figured we'd come here first, since it's for the delorean repo
15:55:38 <leanderthal> ooOOOoo
15:55:39 <leanderthal> right
15:55:52 <mjturek> https://lists.rdoproject.org/pipermail/dev/2018-August/008884.html thread here for those curious
15:56:10 <mjturek> package name is actually python-octavia-tests-tempest-golang
15:57:38 <mjturek> leanderthal: still open to introducing power hardware into rdo though, ping me after and we can discuss
15:57:42 <number80> golang?
15:57:57 <jpena> it's a single httpd server in golang, used for tests
15:57:59 <apevec> it's basically static https://github.com/openstack/octavia-tempest-plugin/commits/master/octavia_tempest_plugin/contrib/httpd/httpd.go
15:58:07 <apevec> jpena, yeah, silly isn't it
15:58:09 <number80> My left eyebrow raised so high that it touched my hair
15:58:14 <mjturek> :)
15:58:19 <apevec> number80, picture!
15:58:41 <leanderthal> mjturek, definitely, but i'm off in less than a minute - i'll ping you tomorrow
15:58:47 <apevec> jpena, I think we could just create a srpm from that
15:58:56 <mjturek> no worries at all leanderthal
15:59:01 <apevec> and have it in deps
15:59:11 <jpena> my gut reaction to this is: let's just rebuild it as a dep. We'd have to fix the octavia spec not to require a fixed version, but that should be enough
15:59:24 <apevec> yep
15:59:25 <jpena> https://github.com/rdo-packages/octavia-distgit/blob/rpm-master/openstack-octavia.spec#L144
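For context, the fix jpena is describing amounts to loosening a version pin in the octavia spec. The lines below are an illustrative sketch of that change, not the exact contents of the linked spec file:

```spec
# openstack-octavia.spec (illustrative sketch, not the real file contents)
# A version pin like this breaks once the golang test subpackage is
# rebuilt independently as a dependency:
Requires: python-octavia-tests-tempest-golang = %{version}-%{release}

# Dropping the pin lets a separately-built package satisfy it:
Requires: python-octavia-tests-tempest-golang
```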
15:59:49 <apevec> right
15:59:53 <mjturek> cool!
15:59:59 <jpena> who takes ownership?
16:01:06 <mjturek> jpena: correct me if I'm wrong, but rebuilding it would take place in CBS? should we reach out to centos-devel for help here?
16:01:31 <leanderthal> jpena, i need to scoot as it's the end of my day, but my topic is coming up; it needs to be discussed here and tomorrow i'll email the lists
16:01:40 <leanderthal> i'll read the notes tomorrow before i email
16:01:47 <jpena> leanderthal: ack
16:01:58 <apevec> mjturek, no, we can add it as any other dep in cloud sig tags
16:02:05 <apevec> via rdoinfo deps.yml
16:02:27 <jpena> apevec: so that would require a new spec in rdo-common, right?
16:02:43 <apevec> yes, we need to write a new spec to build only that go file
16:02:58 <apevec> basically extract it from octavia plugin spec
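A hypothetical sketch of what such an entry in rdoinfo's deps.yml could look like (the field and tag names here are assumptions for illustration; the authoritative format is whatever the rdoinfo repo defines):

```yaml
# deps.yml (hypothetical entry; field/tag names assumed)
- project: python-octavia-tests-tempest-golang
  conf: rdo-dependency
  tags:
    dependency:
```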
16:03:28 <jpena> ok, in the absence of other volunteers, I can have a look at that
16:03:58 <mjturek> thank you jpena
16:04:02 <jpena> #action jpena to extract octavia-tempest-test-golang to a separate dep
16:04:10 <apevec> and octavia plugin even has a release now! https://github.com/openstack/octavia-tempest-plugin/releases
16:04:27 <baha> Thank you very much!
16:04:30 <jpena> apevec: that's not even in the tempest plugin, it's part of the octavia repo
16:04:49 <jpena> we're late!
16:04:52 <jpena> #topic  propose to shift test days to M1 / M3 instead of M2 / GA
16:05:09 <apevec> jpena, it is in tempest plugin  https://github.com/openstack/octavia-tempest-plugin/blob/master/octavia_tempest_plugin/contrib/httpd/httpd.go
16:05:24 <jpena> #undo
16:05:25 <openstack> Removing item from minutes: #topic propose to shift test days to M1 / M3 instead of M2 / GA
16:05:37 <apevec> since we are over time, let's move this last topic to next week?
16:06:07 <apevec> also Rain left
16:06:32 <apevec> ah she said to still discuss it?
16:06:38 <apevec> I'd say +2 :)
16:06:39 <jpena> she said she wanted this topic discussed in the meeting
16:06:46 <jpena> let's get it back quickly
16:06:51 <jpena> #topic  propose to shift test days to M1 / M3 instead of M2 / GA
16:07:05 <apevec> makes sense
16:07:11 <jpena> TL;DR: there's a lot to do for GA, no time for tests. So proposal: test M1/M3 instead
16:07:15 <jpena> I'm +1 to that
16:07:38 <jpena> anyone else?
16:07:40 <apevec> so for Stein that would be end of Oct and early March
16:07:46 <apevec> +1
16:08:10 <apevec> NB Stein is longer cycle upstream https://releases.openstack.org/stein/schedule.html
16:08:18 <apevec> GA is early April
16:08:55 <jpena> looks ok to me
16:09:02 <jpena> since nobody is opposing...
16:09:09 <number80> +1
16:09:11 <apevec> shipit
16:09:12 <jpena> #agreed test on M1 / M3 instead, and cancel the rocky GA test days next week
16:09:21 <jpena> #topic open floor
16:09:32 <jpena> we already have a chair for next week, anything else before we run?
16:09:47 <number80> Yes one more thing
16:10:05 <number80> Kudos to ykarel for driving his first release by himself :)
16:10:15 <jpena> ykarel++
16:10:56 * number80 is done
16:11:03 <jpena> let's finish, then, it's late
16:11:05 <jpena> #endmeeting