15:01:37 <amoralej> #startmeeting RDO meeting - 2017-11-29
15:01:38 <openstack> Meeting started Wed Nov 29 15:01:37 2017 UTC and is due to finish in 60 minutes. The chair is amoralej. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:42 <openstack> The meeting name has been set to 'rdo_meeting___2017_11_29'
15:01:57 <amoralej> #topic roll call
15:02:07 <ykarel> o/
15:02:10 <mary_grace> o/
15:02:25 <jpena> o/
15:02:44 <amoralej> #chair dmsimard ykarel mary_grace jpena
15:02:45 <openstack> Current chairs: amoralej dmsimard jpena mary_grace ykarel
15:03:34 <number80> o/
15:03:42 <rbowen> o/
15:04:03 <amoralej> #chair number80 rbowen
15:04:04 <openstack> Current chairs: amoralej dmsimard jpena mary_grace number80 rbowen ykarel
15:04:50 <jruzicka> o/
15:05:12 <amoralej> #chair jruzicka
15:05:13 <openstack> Current chairs: amoralej dmsimard jpena jruzicka mary_grace number80 rbowen ykarel
15:05:19 <amoralej> ok, let's start with first topic
15:05:42 <amoralej> #topic Test day: "trystack" for the duration of the test day
15:05:49 <amoralej> who sent that?
15:05:55 <rbowen> dmsimard and I, I believe.
15:05:59 <dmsimard> hi
15:06:17 <rbowen> Just to confirm that this is indeed what we're planning to do, and that folks are on board with this.
15:06:26 <dmsimard> So for the next test day we'll be doing an experimental thing that we hope will be successful
15:06:44 <dmsimard> and I'd like everyone to do a lot of buzz around this, tell your friends about it, blog about it etc etc
15:07:05 <amoralej> dmsimard, let me know how i can help
15:07:17 <amoralej> to deploy it or whatever
15:07:17 <number80> so do we have a proper howto for that?
15:07:19 <rbowen> We also need to come up with test scenarios that we encourage people to go through on this test cloud.
15:07:30 <dmsimard> For the next test day, in addition to the usual test scenarios where we encourage people to try to install openstack on their own, RDO will provide a test cloud so that people can test it from a user perspective
15:08:11 <dmsimard> This is cool for a lot of reasons, 1) installing openstack is boring, 2) they can test things within minutes instead of potentially hours, 3) people don't necessarily have the hardware to install it on
15:08:28 <jruzicka> > True
15:08:54 <amoralej> #info in next test day, RDO will provide a test cloud so that people can perform user test-cases in addition to RDO deployment tests
15:08:56 <dmsimard> We're planning to do this first installation with the Packstack installer with as many projects as possible (magnum, etc.), on one single bare metal node with appropriate specifications
15:09:27 <dmsimard> If this experiment is successful, it would be great to try different installers throughout the cycle for each test day
15:09:55 <amoralej> dmsimard, so what about tenants
15:10:06 <amoralej> we'll create tenants in advance for users that request it?
15:10:11 <amoralej> or self-service?
15:10:12 <dmsimard> However this means involving the core teams of the installers not only to deploy the server for the duration of the test day but also to help troubleshoot issues throughout the test days
15:10:44 <dmsimard> I think we'll want to create a tenant on-demand to avoid abuse, people interested in testing would let us know in advance if possible
15:10:57 <amoralej> ok
15:11:16 <dmsimard> We can probably write a small playbook that'd create the tenant/users and then just a list of user/passwords to iterate through
15:11:39 <amoralej> self-service + approval would be ideal, but i'm not aware of any out-of-the-box tool to do that
15:11:41 <number80> Then, we need a simple procedure to be written
15:11:47 <dmsimard> Since the first installer is Packstack, I'd like to propose dmsimard, amoralej and jpena as core folks for this first experiment :)
15:12:01 <amoralej> +1
15:12:02 <dmsimard> amoralej: right
15:12:04 <number80> otherwise, people will show up and by the time we create tenants, they're gone
15:12:14 <dmsimard> number80: creating a tenant/user is easy..
15:12:16 <number80> (and I know this time can be very short)
15:12:43 <jpena> if people register in advance, we could pre-create the users/tenants for them, then send the credentials on the test day
15:12:45 <number80> dmsimard: yeah, but it's better to ask people to register in advance (and fix for others on-demand)
15:12:56 <dmsimard> yes
15:13:14 <rbowen_> What mechanism do we want to use for folks to register?
15:13:38 <dmsimard> rbowen_: I'd probably centralize everything in an etherpad
15:13:58 <dmsimard> rbowen_: in the header you'd have a list of people participating (name, email), we'd create access and send them the credentials by email
15:14:14 <number80> *nods*
15:14:15 <dmsimard> rbowen_: and the pad would also contain feedback for the experiment (issues from an operational perspective, bugs found by users, etc.)
15:14:33 <dmsimard> because let's not forget that this is also a good opportunity to test the installer in question
15:15:13 <dmsimard> rbowen_: I can draft the etherpad format
15:15:24 <rbowen_> ok.
15:15:24 <dmsimard> if we're okay with that approach
15:15:27 <rdogerrit> Pradeep Kilambi created openstack/gnocchi-distgit rpm-master: Rename openstack-gnocchi to gnocchi https://review.rdoproject.org/r/10736
15:15:41 <rbowen_> And I'll turn the above transcript into a blog post draft, and run that by you.
15:15:55 <amoralej> ok, so we'll advertise it on the users and dev mailing lists and ask people to add themselves to the etherpad
15:16:09 <mary_grace> rbowen_: we can use that same text to put a brief paragraph in the newsletter as well.
15:16:16 <dmsimard> amoralej: right, I'd try to be as loud as I can everywhere
15:16:25 <amoralej> yeah, ok
15:16:25 <dmsimard> twitter, blogs, mailing lists
15:16:37 <dmsimard> I'll even post something to openstack-dev/openstack-operators
15:16:56 <rbowen_> The scheduled date is Dec 13/14, so we have a few days to get the word out.
15:17:02 <amoralej> how much capacity will we have?
15:17:15 <dmsimard> amoralej: looking at a 64GB RAM machine with sufficient disk space
15:17:17 <amoralej> could users deploy aio packstack in an instance in the tenant?
15:17:21 <amoralej> mmm
15:17:25 <ykarel> dmsimard, packstack is still not deprecated, right? last sunday at a meetup a guy was saying that he heard packstack is being deprecated
15:17:26 <mary_grace> Dec 13/14? or 14/15?
15:17:33 <rbowen_> Um ... checking
15:17:37 <amoralej> that's not much for users to deploy packstack in a tenant
15:17:44 <rbowen_> 14/15. Thanks.
15:17:48 <dmsimard> I'm not sure the point is to allow people to deploy tripleo/packstack in that environment
15:18:08 <dmsimard> It's not a bad idea by any means, but it has an impact on the budget
15:18:16 <rbowen_> I'll update http://rdoproject.org/testday/ today, too.
15:18:29 <rbowen_> dmsimard: Let's start small, see how it goes, and then work up.
15:18:43 <jpena> well, a single 64 GB machine with an all-in-one setup will leave enough room for about 10-20 VMs with 4 GB of RAM, not more
15:18:44 <amoralej> dmsimard, it's a way to allow testing both user and admin use cases in the same environment
15:18:45 <dmsimard> like, I even considered limiting the flavors available
15:18:50 <amoralej> but only aio
15:18:55 <dmsimard> so that there's no abuse and room for everyone
15:19:06 <amoralej> and with 64GB, there aren't many options
15:19:39 <dmsimard> Let's start with 64GB, if we're too successful we can consider bumping it up to 128GB for the next one :)
15:19:49 <amoralej> ok, so it sounds like a plan
15:19:57 <mary_grace> dmsimard: let me know how I can help amplify via RTs, blog round-up, etc. I'll keep an eye out, but if I miss anything, don't hesitate to reach out.
15:20:04 <dmsimard> mary_grace: ack
15:20:31 <dmsimard> that wraps up this topic
15:20:34 <amoralej> ykarel, it's not deprecated
15:20:50 <amoralej> but it's recommended only for PoC use cases
15:21:13 <dmsimard> ykarel: people tend to confuse the state of packstack upstream and in OSP
15:21:23 <amoralej> yeah
15:21:33 <ykarel> amoralej, Ok, that's the same thing I told the guy :)
15:21:34 <dmsimard> ykarel: I'm not sure what the state of packstack in OSP is anymore but it's probably not supported for production use case scenarios
15:22:01 <amoralej> for sure not, it's in a separate channel, iirc
15:22:01 <ykarel> dmsimard, ack
15:22:24 <amoralej> ok, so are we done with this topic?
15:22:26 <dmsimard> Packstack upstream is not "owned" by anyone other than the openstack community -- there's definitely not a lot of new features going in but it's very much kept in working condition
15:22:38 <dmsimard> I think so
15:23:00 <amoralej> dmsimard, we could create another etherpad to coordinate the deployment of the test cloud
15:23:26 <dmsimard> amoralej: sure
15:24:28 <amoralej> #action dmsimard will create an etherpad for user registration to the "test day cloud"
15:24:47 <dmsimard> #undo
15:24:48 <openstack> Removing item from minutes: #action dmsimard will create an etherpad for user registration to the "test day cloud"
15:24:58 <dmsimard> #action dmsimard will create an etherpad for user registration and feedback reports for the "test day cloud"
15:24:58 <amoralej> dmsimard, go ahead :)
15:25:15 <dmsimard> #action dmsimard to advertise the experiment on openstack-dev and openstack-operators
15:25:38 <dmsimard> #action everyone to be loud about this, it's very cool and it would be nice if it were successful so we can keep doing it in the future
15:25:49 <rbowen> #action rbowen to draft blog post with information from this discussion, re test day cloud.
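As dmsimard suggested at 15:11:16, tenant/user creation for registered participants can be scripted ahead of the test day. Below is a minimal sketch of that provisioning step, written in Python with openstacksdk rather than as the Ansible playbook mentioned in the meeting; the `testday` cloud name, the registration list, the role name and the quota numbers are illustrative assumptions, not details agreed here.

```python
# Minimal sketch: pre-create a project + user per registered test-day
# participant. Assumes openstacksdk is installed, that an admin "testday"
# entry exists in clouds.yaml, and that the registration list below stands
# in for the names collected in the etherpad (all hypothetical).
import secrets

import openstack

registrations = [
    {"name": "alice", "email": "alice@example.com"},
    {"name": "bob", "email": "bob@example.com"},
]

conn = openstack.connect(cloud="testday")  # admin credentials

# Role name differs between releases ("_member_" vs "member"); try both.
member_role = conn.identity.find_role("_member_") or conn.identity.find_role("member")

for reg in registrations:
    password = secrets.token_urlsafe(12)
    project = conn.identity.create_project(
        name="testday-{}".format(reg["name"]),
        description="RDO test day tenant for {}".format(reg["email"]),
    )
    user = conn.identity.create_user(
        name=reg["name"],
        password=password,
        default_project_id=project.id,
    )
    conn.identity.assign_project_role_to_user(project, user, member_role)
    # Keep each tenant small so the single 64 GB node has room for everyone
    # (quota values are illustrative, not agreed numbers).
    conn.set_compute_quotas(project.id, instances=3, cores=6, ram=12288)
    # Credentials would then be emailed to reg["email"] out of band.
    print(reg["email"], project.name, user.name, password)
```

Whatever form this takes (playbook or script), the idea is the same: the etherpad registration list drives tenant creation, and the generated credentials are sent back to each participant by email.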
15:26:17 <amoralej> #action amoralej jpena dmsimard to build test cloud
15:26:25 <amoralej> ok, i think we are done
15:26:29 <amoralej> let's move on
15:26:54 <amoralej> #topic Revisiting RDO Technical Definition of Done
15:26:59 <amoralej> #info https://lists.rdoproject.org/pipermail/dev/2017-November/008412.html
15:27:06 <dmsimard> I've already posted my opinion on the thread
15:27:32 <amoralej> just to summarize, since queens tripleo can only deploy openstack in containers
15:27:58 <dmsimard> Beyond installers picking up problems with *packaging* such as missing packages (which sometimes happens due to missing tags, etc.) or bad mirrors, etc., I would not block the release of RDO
15:28:21 <amoralej> so we are reconsidering what the definition of done should be, as we are not
15:28:59 <amoralej> dmsimard, but by "installers" you mean packstack/puppet/tripleo/kolla/...
15:28:59 <number80> Well, I'm fine with that
15:29:08 <dmsimard> amoralej: yes
15:29:19 <number80> but we need to run 3o and other installer tests *and* fix issues during the cycle
15:29:20 <amoralej> i mean, with current proposal packstack/p-o-i are blocking
15:29:35 <dmsimard> amoralej: the likelihood of issues being related to packaging beyond RC status is very very low
15:29:48 <dmsimard> amoralej: therefore if the mirror is good, we should release
15:30:01 <number80> e.g: tripleo containers using packages not supported by RDO can cause issues later
15:30:24 <dmsimard> number80: we're always testing all installers throughout the whole cycle
15:30:52 <jpena> makes sense to me. Also, installers tend to release their final versions after the OpenStack release, so it makes sense not to block on TripleO
15:30:56 <number80> Yeah, but what I don't want to see is TripleO folks running their tests on their side, not reporting issues to us
15:31:19 <number80> Nightmare would be them setting up a copr to build packages there to work around issues
15:31:45 <dmsimard> lol ?
15:31:52 <number80> It should be clear that's a no-no for us, if there are issues => fix must land in upstream, then RDO, then installer X
15:31:59 <amoralej> but a user can create their own containers and deploy from them
15:32:04 <amoralej> i'd like to validate that case
15:32:08 <number80> +1
15:32:44 <dmsimard> number80: the circumstances of a patch being carried in RDO before it lands upstream are probably not explicit but there must be a clear consensus and an exceptional situation for that to happen
15:32:47 <amoralej> in the past, both kolla and tripleo RC releases at GA have been good enough to validate
15:33:18 <dmsimard> number80: tripleo needing a patch in RDO because it hasn't been landed and tagged upstream would be quite exceptional and handled on a case-by-case basis
15:33:47 <dmsimard> but it's not like we can't do hotfixes
15:34:11 <dmsimard> if this kind of issue shows up, let's fix them ad-hoc as hotfixes
15:34:13 <number80> dmsimard: I was thinking about all kinds of fixes, but TripleO should use our packages, not forks, not packages that are not provided by us
15:34:23 <dmsimard> at that point, RDO has done its job of shipping the signed tarballs as released upstream
15:34:39 <dmsimard> number80: if tripleo ends up using packaged forks, then we have other problems
15:34:51 <dmsimard> and I very much hope they don't do that
15:35:19 <amoralej> i don't think they will, i have not seen anything that can lead us to consider that, tbh
15:35:39 <number80> dmsimard: yes, and I'd like to state that in DoD, installers are not blockers for GA but they will be run during the whole cycle and issues will get fixed
15:36:12 <dmsimard> number80: yes, let's formalize that in the DoD.
15:36:20 <number80> DoD is not just GA criteria but also process to reach GA
15:36:20 <amoralej> "installers are not blockers" is to ambiguous, IMHO
15:36:29 <amoralej> s/to/too/
15:36:46 <number80> amoralej: open to any better wording, I didn't give much thought to that :)
15:36:57 <dmsimard> Bugfixes can be released as adhoc hotfixes after release, not unlike how upstream freezes the branches and eventually allows backports to stable branches
15:37:04 <amoralej> i mean, if we are saying we need to pass packstack/p-o-i to promote, we are assuming they are blocking
15:37:06 <number80> Yeah
15:37:21 <amoralej> and they are installers
15:37:32 <dmsimard> promotion of the trunk repositories during the cycle != release
15:37:40 <dmsimard> the promotion is really a process to allow themselves to not break their gates
15:37:57 <amoralej> dmsimard, the cloudsig promotion
15:38:05 <dmsimard> oh, hmm
15:38:06 <amoralej> from -testing to -release is also gated
15:38:14 <amoralej> using weirdo
15:38:28 <amoralej> and tripleo (non-voting right now)
15:38:34 <dmsimard> I think that is for stable releases post-GA
15:38:42 <dmsimard> and yes, we should keep that clear
15:38:56 <amoralej> in pike we used it for initial tagging also
15:39:11 <amoralej> so, we need to validate packages are sane and work
15:39:12 <dmsimard> amoralej: the process of gating updates to the stable mirrors allows us to prevent regressions and breaking stable people and it is required to do that
15:39:30 <dmsimard> amoralej: yes, we used installers for pike, and you saw how long it took us to release it
15:39:42 <amoralej> dmsimard, but it was a tooling issue
15:39:49 <dmsimard> part was automation issues, but there were a lot of things like timeouts and installer bugs
15:40:01 <jpena> we've always used installers to validate the repos before release, haven't we?
15:40:06 <amoralej> in previous releases we also validated it using the pipeline in ci.centos.org
15:40:12 <amoralej> yes
15:40:31 <dmsimard> yes, and they've been useful in finding those odd packages that didn't get tagged and things like that
15:40:41 <jpena> ok, so let's refocus a bit
15:41:03 <jpena> it's not like we don't want to use installers as a criterion for GA
15:41:17 <jpena> it's just that we want to use a more limited set, that allows us to do basic checks
15:41:28 <jpena> other than a full-fledged, multi-node, HA installation
15:41:38 <amoralej> especially because tripleo requires containers which are not an RDO deliverable
15:41:51 <jpena> exact
15:42:17 <amoralej> so, IMO, i'd keep non-container based installers as criteria
15:42:38 <amoralej> + a job to validate the container-build process from the packages in the cloudsig repo
15:43:11 <jpena> I would specify the criteria as "basic installers" (specifying which ones they are) + the container-build process you mention. We can always change what basic installers means to us in future releases
15:43:45 <dmsimard> jpena: even a limited set is awkward
15:43:56 <dmsimard> this limited set could be blocking due to issues unrelated to RDO and that's what's concerning me
15:44:27 <amoralej> dmsimard, that has always been the case in previous releases
15:44:30 <jpena> that would be very rare
15:44:30 <dmsimard> It's almost.. like, we'll run these installer jobs and we'll, at our discretion, decide whether to release or not, depending on the results
15:45:25 <dmsimard> Were we not in agreement that using deployment projects that have the "cycle-with-trailing" model to vet the release was an issue earlier? I'm confused
15:45:44 <amoralej> dmsimard, not afaik
15:45:54 <dmsimard> OpenStack releases on day 0, deployment projects can release on day 14 which means they are not ready
15:46:18 <amoralej> then, what's the point of releasing RDO on day 0
15:46:20 <dmsimard> If we use deployment projects to block the release of RDO, it means we can no longer guarantee day 0 release
15:46:28 <amoralej> i agree
15:46:39 <amoralej> but i prefer to provide repos that are deployable
15:46:52 <amoralej> at least in default use cases, even if we have to wait
15:47:04 <jpena> What would be the point in releasing a set of repos that nobody can use?
15:47:04 <amoralej> in fact, it has not been an issue before
15:47:10 <dmsimard> It's not a bad thing if we want to do that, but one of the objectives (was it an OKR? I forget) was to release RDO within two hours of upstream
15:47:13 <amoralej> although, potentially, it may be
15:47:30 <amoralej> but with an acceptable quality
15:47:39 <dmsimard> jpena: Does OpenStack delay its release if deployment projects can't deploy it?
15:47:52 <dmsimard> We package OpenStack and the tarballs aren't going to change
15:47:54 <jpena> yes if it doesn't work on devstack(tm)
15:48:17 <dmsimard> jpena: right, but day 0 for us starts when they tag the release
15:48:37 <dmsimard> I'd rather we push this requirement upstream if we're going to block the release
15:48:49 <dmsimard> Worth discussing with the community/TC
15:49:02 <number80> (sorry, must be off immediately for family matters)
15:49:09 <dmsimard> people don't deploy devstack in production
15:49:10 <amoralej> number80, ok
15:49:42 <dmsimard> ok I'll take the action to ask the community about it
15:50:03 <jpena> dmsimard: my point (besides the pun) was: upstream won't release something that can't be at least deployed with devstack. Once we release, it should at least be deployable somehow
15:51:04 <dmsimard> jpena: I don't know, I wonder if that's actually documented somewhere
15:51:11 <dmsimard> jpena: do they have a definition of done?
15:51:25 <jpena> dmsimard: I guess not, but devstack is part of the gate
15:51:44 <dmsimard> ok, I'll reach out and find out
15:52:09 <dmsimard> #action dmsimard to reach out to the community about definition of done and discuss cycle-with-trailing
15:52:15 <dmsimard> #undo
15:52:16 <openstack> Removing item from minutes: #action dmsimard to reach out to the community about definition of done and discuss cycle-with-trailing
15:52:20 <dmsimard> #action dmsimard to reach out to the OpenStack community about definition of done and discuss cycle-with-trailing
15:52:34 <amoralej> we can keep the discussion in the mailing list thread
15:52:53 <dmsimard> so, another thing to consider in the DoD
15:53:04 <dmsimard> which is again awkward due to cycle-with-trailing
15:53:12 <dmsimard> is that we actually carry packages for tripleo
15:53:26 <dmsimard> so if we release before tripleo, we're actually releasing unreleased software
15:53:35 <dmsimard> very chicken-and-egg
15:53:36 <amoralej> we release RC releases
15:53:47 <amoralej> they always release before GA
15:53:51 <dmsimard> amoralej: right, but otherwise everything else is released
15:53:56 <dmsimard> amoralej: nova, keystone, etc
15:54:00 <amoralej> yes
15:54:38 <amoralej> so, there is a time gap between "main GA" and "cycle-with-trailing" where we are in a weird situation
15:54:45 <amoralej> with GA + RC builds
15:54:50 <amoralej> that's right
15:55:04 <amoralej> but RDO includes all of them
15:55:19 <amoralej> we can't do a partial release with only GA, IMO
15:55:28 <amoralej> so we have two options
15:55:50 <amoralej> 1. Create and validate GA + RC (for cycle-with-trailing)
15:56:01 <amoralej> 2. Wait for cycle-with-trailing
15:56:19 <amoralej> currently we are doing 1
15:56:51 <amoralej> and i'd say it has been working pretty well in the last releases
15:56:57 <amoralej> correct me if i'm wrong
15:58:03 <jpena> we're running out of time, shall we move to the last item in the agenda?
15:58:06 <amoralej> we are almost out of time
15:58:07 <amoralej> yes
15:58:26 <amoralej> #action everyone to keep discussion about DoD in mailing list thread
15:58:45 <amoralej> #topic Dependencies automation status
15:58:57 <amoralej> #info https://review.rdoproject.org/etherpad/p/deps-automation
15:59:19 <amoralej> so, this is just to make everyone aware of the activities we are doing to automate dependencies management
15:59:22 <rdogerrit> Merged openstack/tempest-distgit pike-rdo: Disable warning-is-error for sphinx https://review.rdoproject.org/r/10734
15:59:59 <amoralej> the goal is to implement some automation in the processes to update dependencies in the RDO repo
16:00:33 <amoralej> in the etherpad there are links to trello cards and reviews
16:00:55 <amoralej> so if anyone is interested in the topic, don't hesitate to join #rdo and ask
16:01:15 <amoralej> that's it for this topic
16:01:26 <amoralej> #topic open floor
16:01:34 <amoralej> who wants to chair next week?
16:01:37 <amoralej> any volunteer?
16:02:12 <radez> dmsimard: ping, hey could you make me a pike tree under the aarch64 directory?
16:02:23 <jpena> note next Wednesday is a bank holiday in Spain, so amoralej and I will be off
16:02:25 <dmsimard> radez: yeah.
16:02:30 <radez> dmsimard: thx!
16:02:38 <amoralej> in fact, i'll be out the whole week :)
16:02:43 <dmsimard> I can chair
16:02:58 <amoralej> #action dmsimard will chair next weekly meeting
16:03:03 <radez> dmsimard: I'm going to work on queens and master next too so if you wanted to throw them in there too while you're at it
16:03:06 <amoralej> any other topic that you want to bring up?
16:03:35 <amoralej> ok, i'm closing the meeting
16:03:43 <amoralej> thank you all for joining!
16:03:44 <amoralej> #endmeeting