15:01:19 <jpena> #startmeeting RDO meeting - 2018-01-31
15:01:20 <openstack> Meeting started Wed Jan 31 15:01:19 2018 UTC and is due to finish in 60 minutes. The chair is jpena. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:01:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:24 <openstack> The meeting name has been set to 'rdo_meeting___2018_01_31'
15:01:46 <jpena> Remember you can add last-minute topics at https://etherpad.openstack.org/p/RDO-Meeting
15:01:49 <jpena> #topic roll call
15:02:34 <mjturek> o/
15:02:40 <amoralej> o/
15:02:46 <ykarel> o/
15:02:47 <mary_grace> o/
15:02:49 <number80> o/
15:03:09 <jpena> #chair mjturek amoralej ykarel mary_grace number80
15:03:09 <openstack> Current chairs: amoralej jpena mary_grace mjturek number80 ykarel
15:03:09 <bcafarel> o/
15:03:14 <jpena> #chair bcafarel
15:03:15 <openstack> Current chairs: amoralej bcafarel jpena mary_grace mjturek number80 ykarel
15:05:02 <chandankumar> v\o/
15:05:42 <jpena> #chair chandankumar
15:05:43 <openstack> Current chairs: amoralej bcafarel chandankumar jpena mary_grace mjturek number80 ykarel
15:05:47 <jpena> let's start with the agenda
15:05:56 <jpena> #topic Plan to provide VM images for Octavia
15:06:33 <jpena> there have been some discussions about how we could provide VM images for Octavia (and probably others, like Manila, in the future)
15:06:52 <jpena> bcafarel ^ do you want to summarize the current status?
15:07:23 <bcafarel> jpena: sure
15:08:06 <bcafarel> for octavia, the patches to use distribution packages are merged now
15:08:48 <bcafarel> so the script (provided in the openstack-octavia-diskimage-create package) can generate a VM image in a few commands (summarized in the rdo-dev thread)
15:10:01 <bcafarel> current suggestion: create a periodic job to rebuild the images (daily), then upload them to images.rdoproject.org
15:10:53 <bcafarel> that means, for example, packages in the image are guaranteed to be up to date (security etc.), as one of the DIB steps runs yum update
15:11:22 <bcafarel> some nice fellow that I will not name tested the image generation and it did work fine :)
15:12:35 <bcafarel> from the octavia/tripleo point of view, that sounds good, and as mentioned before it would help Octavia adoption (generating the image can be tricky sometimes)
15:13:13 <jpena> would that image be used in tripleo jobs? During some informal discussions, I remember someone mentioned a potential issue if a new image breaks the job
15:13:53 <jpena> until that happens, I think publishing the image should be ok, but I'd like to hear more feedback
15:14:39 <amoralej> we could test the images after creating them
15:15:18 <bcafarel> I think the original mention was in tripleo CI (beagles would know, but I don't think he's around right now)
15:15:20 <amoralej> in the same job, i'm not sure if that's possible
15:16:27 <bcafarel> but yes, the initial goal is more to enable octavia usage with a pre-built image
15:16:59 <rdogerrit> Yatin Karel proposed openstack/oslo-db-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11681
15:18:38 <amoralej> i think it's good to start by creating the images and pushing them in a periodic job
15:18:52 <jpena> yes, it's a good start
15:19:15 <jpena> we'll need to monitor disk usage on images.rdo, but it seems manageable
15:19:59 <amoralej> jpena, you are thinking of a periodic job on review.r.o, right? or ci.centos.org?
15:20:05 <jpena> amoralej: review.r.o
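
For reference, the "few commands" bcafarel mentions boil down to something like the sketch below. The script name and flags are assumptions based on the upstream Octavia diskimage-create tooling; the rdo-dev thread has the exact steps.

    # Install the image-building tooling from RDO
    sudo yum -y install openstack-octavia-diskimage-create
    # Build a CentOS-based amphora image; one of the DIB steps runs
    # 'yum update', so packages inside the image are current at build time
    octavia-diskimage-create.sh -i centos -o amphora-x64-haproxy.qcow2
    # Upload the result into Glance so Octavia can find it; 'amphora' is
    # the tag Octavia looks for by default, but it is configurable
    openstack image create --disk-format qcow2 --container-format bare \
        --tag amphora --file amphora-x64-haproxy.qcow2 amphora-x64-haproxy
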
15:21:30 <jpena> if nobody complains, I'd propose to send an e-mail to dev@lists with the agreed proposal, then roll up our sleeves and implement it
15:21:38 <amoralej> +1
15:21:50 <number80> +1
15:22:10 <bcafarel> I certainly won't complain about that :)
15:22:42 <number80> bcafarel: tsk tsk, you're not French enough if you can't complain about it :)
15:22:58 <jpena> #agreed Build images for Octavia using a periodic job on review.rdoproject.org, store them on images.rdoproject.org
15:23:02 <bcafarel> ahah
15:23:11 <bcafarel> number80: I'm keeping my complaints in reserve for later
15:23:31 <jpena> #action jpena to send email to dev@lists with the results of the Octavia image discussion
15:23:45 <jpena> I think we can move on
15:24:16 <jpena> #topic Discuss Power CI for RDO
15:24:23 <jpena> mjturek ^
15:24:24 <mjturek> hey!
15:24:38 <number80> welcome :)
15:24:49 <mjturek> thanks :) so, we're working on testing the ppc64le queens build of rdo available on RDO Trunk (starting with current-passed-ci)
15:25:01 <mjturek> basically we're installing with packstack and running tempest against it
15:25:18 <mjturek> allinone on a centos power guest
15:25:34 <mjturek> I've listed a couple of questions in the agenda that I think would really help us better understand what the rdo community would like from us CI-wise
15:25:54 <mjturek> could we go through them here? or would you rather I start this conversation on the ML?
15:26:52 <jpena> let's start discussing here, if some topic needs more time we can send it to the ML
15:27:06 <mjturek> cool!
15:27:47 <mjturek> so the first question is: what does a normal RDO CI scenario look like? Is it a packstack deployment with tempest like we have? or different?
15:28:20 <mjturek> any reference we can look at for this?
15:28:45 <jpena> mjturek: do you mean the scenarios we use in the promotion pipeline?
15:29:27 <mjturek> jpena: yeah, I think so
15:29:51 <jpena> amoralej: you know the current pipeline better than I do :)
15:30:18 <amoralej> i was thinking about the best way to test
15:30:19 <rdogerrit> Yatin Karel created openstack/oslo-i18n-distgit rpm-master: Adjust python2 requirements for Fedora https://review.rdoproject.org/r/11696
15:30:48 <amoralej> it may be interesting to report the jobs to the dlrn-api
15:31:17 <amoralej> mjturek, i'd start by running all the packstack and p-o-i scenarios
15:31:53 <mjturek> amoralej: p-o-i?
15:32:03 <amoralej> puppet-openstack-integration
15:32:26 <mjturek> amoralej: awesome - are these scenarios defined in weirdo?
15:32:28 <amoralej> mjturek, look at https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/
15:32:33 <mnaser> TIL yum -y install python2-shade is a thing, thanks :D
15:32:38 <mjturek> oh cool
15:32:49 <amoralej> mjturek, weirdo uses the scenarios defined in the packstack and p-o-i repos
15:32:54 <amoralej> so, currently
15:33:19 <amoralej> what we do is run all of those + 2 tripleo jobs after there is an upstream promotion in the tripleo pipeline
15:33:26 <amoralej> mjturek, do you use weirdo?
15:33:53 <mjturek> amoralej - we have our own setup actually
15:34:34 <amoralej> i'd recommend re-using the tooling used in RDO CI as much as you can
15:34:42 <amoralej> and improving it if needed
15:35:07 <mjturek> makes sense! so, try to migrate over to weirdo
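
As a rough sketch of the kind of all-in-one run mjturek describes, pointed at the same RDO Trunk repo the promotion pipeline consumes (the repo URLs match the current-tripleo link amoralej shares below; the packstack flags are assumptions, not the actual job definition):

    # Enable the promoted trunk repo plus its dependencies repo
    sudo curl -o /etc/yum.repos.d/delorean.repo \
        https://trunk.rdoproject.org/centos7-master/current-tripleo/delorean.repo
    sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
        https://trunk.rdoproject.org/centos7-master/delorean-deps.repo
    # All-in-one deployment on the local host, then run tempest against it
    # (the tempest flag is an assumption; check 'packstack --help' on the
    # installed version)
    sudo yum -y install openstack-packstack
    sudo packstack --allinone --run-tempest=y
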
15:35:16 <rdogerrit> Yatin Karel proposed openstack/oslo-log-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11682
15:35:36 <amoralej> mjturek, do you test some tripleo deployment using tripleo-quickstart?
15:35:54 <mjturek> amoralej - we don't have any tripleo jobs yet actually
15:36:13 <amoralej> that would also be good
15:36:26 <amoralej> but it'd probably require more work
15:36:39 <mjturek> fair enough - are the tripleo jobs virtualized?
15:36:51 <amoralej> does virtualization on ppc64le support libvirt?
15:37:12 <mjturek> yep, we support libvirt
15:37:26 <mjturek> (see the PowerKVM CI job on nova for more detail)
15:37:43 <amoralej> tripleo-quickstart (aka oooq) can create the required virtual machines in a host to test a tripleo deployment
15:37:46 <amoralej> using libvirt
15:37:48 <number80> yes, from what I remember the biggest blocker was mongodb for telemetry, but it's gone :)
15:37:54 <number80> *on ppc64le
15:38:02 <mjturek> awesome!
15:38:48 <mjturek> I don't want to take up too much time here, I know there's another topic coming up. But the gist is: start moving to weirdo and start with the packstack/p-o-i scenarios
15:38:54 <amoralej> ok
15:39:20 <amoralej> mjturek, my advice is to start checking weirdo and try to mimic the scenarios in https://ci.centos.org/view/rdo/view/promotion-pipeline/job/rdo_trunk-promote-master-current-tripleo/
15:39:39 <mjturek> amoralej - perfect, I'll come here as I hit blocks
15:39:42 <amoralej> you should use the same rdo trunk hash repo
15:39:47 <amoralej> that we use
15:39:53 <amoralej> you can check current-tripleo
15:39:55 <amoralej> link
15:40:12 <amoralej> http://trunk.rdoproject.org/centos7-master/current-tripleo
15:41:15 <mjturek> cool, great info
15:41:19 <mjturek> thanks amoralej
15:41:33 <amoralej> ok, poke me in #rdo if you need something
15:41:42 <mjturek> will do!
15:41:50 <amoralej> ok
15:42:16 <jpena> let's move to the next topic, then
15:42:22 <jpena> #topic Preparation for queens release
15:43:34 <amoralej> we are starting the activities for queens GA preparation
15:43:39 <rdogerrit> Yatin Karel proposed openstack/oslo-messaging-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11683
15:44:09 <amoralej> I've created a trello card at https://trello.com/c/4hiSJdKq/656-queens-release-preparation
15:44:28 <amoralej> libraries and most clients are already released
15:44:44 <amoralej> and requirements is frozen, iiuc
15:45:07 <amoralej> so it's time to start doing the final spec adjustments, especially requirements updates
15:45:55 <amoralej> #action maintainers to send final updates for specs with requirements updates
15:46:25 <amoralej> additionally, we'll use it to adjust specs for fedora policies
15:46:44 <amoralej> especially moving requirements to python2- instead of python- names
15:47:07 <amoralej> so, expect a bunch of reviews in the queens-branching topic
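
To illustrate the python2- requirement moves amoralej mentions, here is a hypothetical before/after for a single spec; the real changes are the per-package Gerrit reviews rdogerrit keeps posting above, not a blanket search-and-replace, and Fedora vs EL7 conditionals still vary per package:

    #   before:  Requires:       python-oslo-config
    #   after:   Requires:       python2-oslo-config
    # a rough starting point for one distgit checkout (filename is hypothetical);
    # BuildRequires lines usually need the same treatment
    sed -i 's/^\(Requires:[[:space:]]*\)python-/\1python2-/' python-oslo-log.spec
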
15:47:37 <amoralej> we also need to freeze non-openstack puppet modules for queens soon
15:47:52 <amoralej> EmilienM, mwhahaha ^ let me know when you think we can do it
15:48:03 <amoralej> if you prefer to wait until RC1, no problem
15:48:04 <EmilienM> yeah
15:48:06 <EmilienM> I think we can do it now
15:48:12 <EmilienM> I don't see any ongoing work in these modules
15:48:25 <EmilienM> amoralej: do you do it manually or do you have a script?
15:48:41 <amoralej> ok, if we get a promotion soon, i'll take the builds from the new promotion
15:48:44 <amoralej> EmilienM, manually
15:48:46 <EmilienM> perfect
15:49:28 <amoralej> #info we will pin non-OpenStack puppet modules for queens with the builds in the next promotion
15:50:00 <amoralej> i think that was my update on this topic
15:50:18 <amoralej> ykarel, anything else to add?
15:50:29 <ykarel> amoralej, no
15:51:18 <amoralej> one concern i have is about the synchronization to buildlogs and mirror.r.o
15:51:37 <amoralej> from queens on, we need to ensure that we don't synchronize common tags to the queens repos
15:51:43 <rdogerrit> Yatin Karel proposed openstack/oslo-db-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11681
15:51:45 <amoralej> only queens-testing and queens-release
15:51:55 <amoralej> number80, what's the best way to manage this?
15:52:05 <amoralej> file a ticket with centos?
15:52:08 <amoralej> in advance?
15:53:09 <amoralej> i guess so :)
15:53:15 <amoralej> jpena, i think we can move on
15:53:19 <jpena> cool
15:53:25 <jpena> #topic Chair for the next meeting
15:53:30 <jpena> Any volunteer?
15:53:33 <amoralej> i can do it
15:53:42 <jpena> thx amoralej :)
15:53:49 <jpena> #action amoralej to chair the next meeting
15:53:54 <jpena> #topic open floor
15:54:14 <jpena> anything else to discuss?
15:54:46 <pabelanger> nodepool-builder is slow, how can we fix the IO issues there?
15:54:51 <pabelanger> and who could maybe do that?
15:55:02 <number80> amoralej: yes, a ticket, but it will be when we set up the repos
15:55:08 <number80> which is not happening yet
15:55:16 <amoralej> number80, ok
15:55:32 <mary_grace> I'm finalizing the February newsletter - if there's anything you want included, drop me a line here or email mthengva@redhat.com
15:55:32 <amoralej> we just need to flag that the -common- tags should not be synced
15:55:57 <rdogerrit> Yatin Karel proposed openstack/oslo-log-distgit rpm-master: Requirement sync for queens https://review.rdoproject.org/r/11682
15:56:25 <jpena> pabelanger: about nodepool-builder, I/O on ceph volumes is not superfast in RDO Cloud. The team is working on getting more SSD disks to make it faster, but that takes time
15:57:19 <pabelanger> jpena: maybe we could PoC mounting a local HDD in the compute node too, if not SSD
15:57:36 <jpena> local HDDs were not faster than ceph the last time we tested them
15:57:40 <pabelanger> or consider moving nodepool-builder to another cloud
15:58:52 <tosky> (not of general interest, but I have a review for sahara which I'd really like to have merged before queens: https://review.rdoproject.org/r/#/c/10843/ )
16:00:45 <jpena> ok, it's time to end the meeting, let's continue discussions after it
16:00:48 <jpena> #endmeeting