15:00:37 <PagliaccisCloud> #startmeeting RDO meeting - 2019-05-22
15:00:38 <openstack> Meeting started Wed May 22 15:00:37 2019 UTC and is due to finish in 60 minutes. The chair is PagliaccisCloud. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:42 <openstack> The meeting name has been set to 'rdo_meeting___2019_05_22'
15:01:15 <jpena> o/
15:01:18 <apevec> o/
15:01:21 <iurygregory> o/
15:01:31 <cgoncalves> o/
15:01:41 <amoralej> o/
15:01:52 <PagliaccisCloud> #chair jpena apevec iurygregory cgoncalves amoralej
15:01:53 <openstack> Current chairs: PagliaccisCloud amoralej apevec cgoncalves iurygregory jpena
15:02:50 <cgoncalves> #topic What to do with lbaas in RDO Train
15:03:27 <cgoncalves> the neutron-lbaas project was retired during this Train cycle. there will be no neutron-lbaas Train release
15:04:08 <baha> o/
15:04:09 <cgoncalves> there are conversations about how to accommodate users in Train still wanting to migrate from lbaas to octavia
15:04:18 <PagliaccisCloud> #chair baha
15:04:19 <openstack> Current chairs: PagliaccisCloud amoralej apevec baha cgoncalves iurygregory jpena
15:04:29 <cgoncalves> Octavia added providers support in Rocky and evolved it in Stein
15:05:23 <PagliaccisCloud> also, Octavia manual config instructions make my face hurt
15:05:29 <cgoncalves> a group of people were initially proposing tagging lbaas stein in train, since the code was already removed in the master branch
15:05:58 <cgoncalves> so that operators/users could have a last chance to migrate
15:06:27 <ykarel> o/
15:06:40 <cgoncalves> but, as I was just discussing with some lbaas/octavia engineers, we might not need lbaas in Train after all
15:06:49 <PagliaccisCloud> #chair ykarel
15:06:50 <openstack> Current chairs: PagliaccisCloud amoralej apevec baha cgoncalves iurygregory jpena ykarel
15:07:09 <cgoncalves> operators/users should be able to upgrade to Train with lbaas disabled, migrate the lbaas DB records to octavia and purge the lbaas DB tables afterward
15:07:46 <cgoncalves> there is a community-supported tool for that purpose. we know of at least one vendor that has tried it and reported a successful live migration
15:08:14 <cgoncalves> #link https://github.com/openstack/neutron-lbaas/tree/stable/stein/tools/nlbaas2octavia
15:08:47 <apevec> ok, so we could retire lbaas in Train, to follow upstream
15:08:49 <cgoncalves> we're confident that would do, but we are all open to other ideas
15:08:54 <cgoncalves> yes
15:08:55 <apevec> since RDO is defined as vanilla upstream
15:09:01 <amoralej> cgoncalves, that makes the migration directly from db to db without api access?
15:09:08 <cgoncalves> amoralej, correct
15:09:12 <apevec> any preferred timeline or could we do it right now?
15:09:16 <amoralej> that's good
15:09:37 <apevec> technically that would mean removing train* tags from rdoinfo
15:09:40 <amoralej> we can keep it pinned for one or two more weeks until this is confirmed
15:10:00 <amoralej> anyway we need to fix things in other distgits, p-o-i and tripleo
15:10:02 <apevec> cgoncalves, can you send an rdoinfo review with the above justification?
15:10:04 <amoralej> or at least kolla
15:10:06 <cgoncalves> if possible, I'd suggest keeping it for a bit more time just to allow other folks to process this info and raise objections
15:10:09 <amoralej> before removing
15:10:24 <apevec> amoralej, what needs to be fixed in poi?
15:10:27 <cgoncalves> apevec, sure
15:10:31 <apevec> and elsewhere
15:10:40 <amoralej> apevec, it's still running lbaas in some scenarios
15:10:46 <ykarel> jobs are running with neutron-lbaas
15:10:56 <amoralej> and kolla is still building the lbaas-agent container iirc
15:11:05 <amoralej> at least in the tripleo build-containers job
15:11:35 <amoralej> apevec, see https://review.opendev.org/#/c/658803/
15:13:39 <cgoncalves> #action cgoncalves to send rdoinfo review with justification to keep neutron-lbaas for a few more weeks
15:13:41 <amoralej> cgoncalves, let's follow up next week
15:13:45 <cgoncalves> thanks, folks!
15:14:19 <apevec> cgoncalves, I'd say send the rdoinfo review now, with -W
15:14:32 <apevec> then we collect links to things to be cleaned up there
15:14:49 * apevec prefers tracking in gerrit reviews
15:15:39 <cgoncalves> apevec, wait, an rdoinfo review to delete lbaas for good, or to package stein code in master/train?
15:15:59 <apevec> to remove train tags
15:16:06 <cgoncalves> lbaas is already pinned to the last commit before retirement
15:16:06 <amoralej> cgoncalves, if it's just a temporary thing for a couple of weeks, i think we can pin it to the last commit
15:16:06 <cgoncalves> ok
15:16:07 <apevec> that will stop DLRN building it in trunk
15:16:40 <cgoncalves> ok
15:16:54 <apevec> again, keep the review open as a place to collect all the info
15:17:07 <amoralej> apevec, a trello card may also be good
15:17:13 <amoralej> pointing to the review
15:17:54 <apevec> the review is more visible, trello is for RDO-only tracking
15:18:11 <apevec> here we need to coordinate multiple projects
15:18:18 <amoralej> yes, right
15:18:37 <apevec> cgoncalves, btw can you answer beagles' migration concern in review.opendev.org/#/c/658803/ ?
15:19:48 <rdogerrit> Carlos Goncalves created rdoinfo master: Remove neutron-lbaas train tags https://review.rdoproject.org/r/20890
15:20:01 <cgoncalves> apevec, yes. I also reviewed Tobias' other patch about this
15:20:04 <cgoncalves> there ^
15:20:27 <PagliaccisCloud> sweet, you're fast :D
15:21:39 <PagliaccisCloud> ready for the next topic?
15:21:49 <iurygregory> o/
15:22:18 <PagliaccisCloud> ok moving on
15:22:25 <PagliaccisCloud> #topic ironic_prometheus_exporter in RDO (https://github.com/metal3-io/ironic-prometheus-exporter)
15:23:06 <iurygregory> ok =)
15:23:10 <iurygregory> the ironic_prometheus_exporter is composed of an oslo messaging notifier driver that will be used by ironic to transform sensor data from baremetal nodes into the Prometheus format, and a Flask application that will provide the metrics to a Prometheus instance.
15:23:41 <iurygregory> in metal3-io we would like to have a package for it since it will be used by Ironic =)
15:24:13 <apevec> I had a quick look, metal3 is using Ironic from master?
15:24:33 <apevec> any plans for stable branches?
15:24:43 <apevec> and will they follow openstack releases?
15:25:02 <iurygregory> about stable branches i think i can say yes
15:25:12 <iurygregory> but following openstack releases I'm not sure about =(
15:25:30 <apevec> here's how ironic is currently consumed https://github.com/metal3-io/ironic-image/blob/master/Dockerfile
15:25:35 <iurygregory> not sure if dhellmann knows
15:25:44 <iurygregory> apevec, correct
15:25:49 <apevec> Ironic bits come from current-tripleo i.e. master
15:26:13 <apevec> (I don't really like that curl | ... but that's another topic :)
15:26:30 <iurygregory> we have a use case to get metrics (sensor data) from baremetal nodes and expose them to Prometheus
15:27:01 <apevec> ok, to grill you some more: why not move that exporter to openstack governance under https://releases.openstack.org/teams/ironic.html ?
15:27:04 <amoralej> as stated previously, i think it'd be nice to get that under opendev and make it follow the same release model as ironic itself
15:27:24 <amoralej> including some CI to test it with ironic and so on
15:27:39 <iurygregory> we didn't start there because of the timeframe for the demo etc
15:27:52 <iurygregory> to create an upstream project etc..
15:28:00 <apevec> that's fine, thinking about the near future
15:28:22 <iurygregory> but I think we would also need to have the oslo messaging notifier driver under oslo messaging, correct?
15:28:22 <apevec> for now there aren't any releases yet https://github.com/metal3-io/ironic-prometheus-exporter/releases
15:28:37 <apevec> iurygregory, not necessarily
15:29:01 <amoralej> iurygregory, you mean being an oslo project?
15:29:06 <apevec> your actual dep is Ironic
15:29:15 <apevec> semantically
15:29:21 <amoralej> yeah, functionally it's tied to ironic iiuc
15:29:22 <iurygregory> amoralej, nope, the code of the oslo driver in the oslo repository
15:30:11 <iurygregory> so it's ok to have it in the repository under opendev
15:30:14 <amoralej> sorry, i don't follow you now
15:30:27 <amoralej> "the code of the oslo driver in the oslo repository"
15:31:11 <amoralej> this is somehow integrated with oslo_messaging via oslo_messaging_notifications, right?
15:31:18 <iurygregory> I was wondering if it would be necessary to also move the driver to the oslo repository, since we would have the repository in openstack
15:31:19 <amoralej> but in a separate module
15:31:21 <iurygregory> correct
15:31:40 <apevec> it's not an oslo candidate afaict
15:31:44 <apevec> this is the driver? https://github.com/metal3-io/ironic-prometheus-exporter/blob/master/ironic_prometheus_exporter/messaging.py
15:32:03 <iurygregory> apevec, yup
15:32:04 <apevec> it uses oslo but the actual "biz" logic is Ironic
15:32:15 <iurygregory> correct
15:32:21 <apevec> like the hardware.ipmi.metrics event
15:32:38 <iurygregory> the only usage for the driver is with Ironic
15:32:42 <apevec> oslo is for components reusable in all projects
15:33:49 <rdogerrit> Merged openstack/tripleo-common-distgit rpm-master: Exclude tripleo-deploy-openshift from OSP https://review.rdoproject.org/r/20883
15:33:54 <iurygregory> so to have a package for the ironic_prometheus_exporter I would need to move it to opendev under ironic governance, correct?
15:34:31 <iurygregory> and make sure it would follow openstack releases
15:34:55 <apevec> iurygregory, that's for a next step, you can propose it where it is now
15:35:19 <iurygregory> apevec, so I can follow the guide for openstack projects, correct?
15:35:26 <apevec> but we'd like to move it to a proper place asap
15:35:53 <iurygregory> apevec, ack I will talk with the metal3-io team
15:36:16 <apevec> iurygregory, yes, just to make sure: you'll own it so be prepared to fix any failures to build in trunk!
15:36:31 <iurygregory> apevec, yeah
15:36:32 <apevec> one FTBFS blocks all
15:36:40 <ykarel> same thing was done with ansible-role-atos-hsm and ansible-role-thales-hsm
15:36:49 <ykarel> later they moved to the openstack namespace
15:37:00 <apevec> yep
15:38:11 <iurygregory> I think that's it =), anything else I should be aware of?
15:38:43 <ykarel> iurygregory, also u mentioned prometheus_client
15:39:06 <ykarel> that is also needed right?
15:39:23 <iurygregory> ykarel, the ironic_prometheus_exporter uses prometheus_client
15:39:34 <apevec> is that packaged in Fedora?
15:39:40 <ykarel> apevec, yes https://koji.fedoraproject.org/koji/packageinfo?packageID=27137
15:39:43 <apevec> we'll need to import it as a dep
15:40:03 <ykarel> iurygregory, ack, it needs to be included as an rdo dep ^^
15:40:10 <iurygregory> ykarel, ack
15:40:27 <ykarel> as explained in https://www.rdoproject.org/documentation/requirements/#adding-a-new-requirement-to-rdo
15:40:35 <iurygregory> Ty!
15:41:00 <amoralej_> i was kicked off
15:41:16 * ykarel who kicked? apevec?
15:41:28 <amoralej_> nop, some network problem i mean
15:41:31 * ykarel nope he is not operator here
15:41:36 <apevec> I did not :)
15:41:50 <ykarel> :D
15:41:51 <PagliaccisCloud> OpenStack bot is pretty aggressive since BotGate :/
15:42:13 <iurygregory> Thanks for the help everyone!
15:42:49 <PagliaccisCloud> thanks for joining us <3
15:42:54 <amoralej_> sorry, but i lost part of the conversation
15:43:06 <amoralej_> so the plan is to add the repo to opendev and then add the package?
15:43:09 <amoralej_> to rdo?
15:43:11 <apevec> iurygregory, thanks for the explanations!
15:43:29 <apevec> amoralej_, we can add the pkg and required dep first in rdo
15:43:36 <apevec> then move upstream in rdoinfo
15:43:44 <apevec> (strongly suggested)
15:43:54 <amoralej_> ok
15:43:59 <iurygregory> =D
15:44:02 <amoralej_> yeah, the dependency is the other topic
15:44:22 <amoralej_> i assume someone explained how to add the dep
15:44:30 <apevec> yes, ykarel linked the doc
15:44:32 <iurygregory> now I'm going to run to another meeting, thanks!
15:44:34 <amoralej_> ok
15:44:36 <amoralej_> thanks iurygregory
15:44:48 <apevec> let's link it for the minutes:
15:44:57 <apevec> #link how to add a new dep in RDO https://www.rdoproject.org/documentation/requirements/#adding-a-new-requirement-to-rdo
15:45:07 <apevec> ^ does that work?
15:45:45 <amoralej_> yes
15:45:48 <PagliaccisCloud> yup
15:46:07 <apevec> looking at indirect deps, prometheus client requires:
15:46:08 <apevec> python3dist(decorator)
15:46:08 <apevec> python3dist(twisted)
15:46:57 <ykarel> iirc we don't have twisted
15:47:14 <amoralej_> mm
15:47:18 <amoralej_> it rings a bell
15:47:33 <ykarel> it's there, but i remember some issue in the past, will recheck
15:47:51 <amoralej_> yes we have twisted in rdoinfo
15:47:56 <amoralej_> not sure if it's good enough
15:47:59 <amoralej_> but it's there
15:48:08 <amoralej_> python-twisted-16.1.1-3.el7
15:50:36 <apevec> we don't have decorator
15:51:03 <apevec> ok, we'll resolve those during the dep adding process
15:51:22 <ykarel> that comes from centos
15:51:29 <amoralej_> it's in base
15:51:51 <amoralej_> we will see when iurygregory tries to add the new dep :)
15:52:06 <amoralej_> i think we can move to the next topic
15:52:35 <PagliaccisCloud> ok, moving on
15:52:43 <apevec> yeah, need to check if the base version is good enough
15:52:59 <PagliaccisCloud> #topic Next week's chair
15:53:12 <PagliaccisCloud> any volunteers?
15:53:31 <PagliaccisCloud> if not, i don't mind taking it again
15:54:00 * ykarel might not be available during next meeting
15:54:24 <amoralej_> i can also take it, but it's fine if you want it PagliaccisCloud
15:55:26 <PagliaccisCloud> eh i'll grab the one after next week amoralej_
15:55:40 <amoralej_> ok
15:55:48 <PagliaccisCloud> #action amoralej_ to chair next meeting
15:56:05 <PagliaccisCloud> #topic Open floor
15:57:17 <PagliaccisCloud> anything else you would like to discuss that wasn't on the agenda?
15:57:39 <amoralej_> oops, i forgot something, will add it for next meeting
15:58:08 <amoralej_> on a centos ML i read a question about whether SIGs would want to provide modules
15:58:25 <amoralej_> i need to check what value that could bring us
15:58:31 <amoralej_> any idea?
15:58:51 <amoralej_> i mean, with CentOS8
15:59:20 <amoralej_> something we need to think about, I'll add it for next meeting
16:00:59 <PagliaccisCloud> Good meeting folks, gonna go ahead & close out. See you all next week!
16:01:14 <PagliaccisCloud> #endmeeting
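For readers of the minutes: the ironic_prometheus_exporter topic above describes transforming baremetal sensor data into the Prometheus format. Below is a minimal, stdlib-only sketch of that idea; the payload shape, metric name, and helper function are made up for illustration, and the real exporter instead implements an oslo.messaging notifier driver and uses the prometheus_client library.

```python
def to_prometheus(metric_name, help_text, samples):
    """Render samples as Prometheus text exposition format.

    samples: list of (labels_dict, value) tuples.
    Produces the familiar "# HELP / # TYPE / name{labels} value" lines
    that a Prometheus server scrapes over HTTP.
    """
    lines = [
        "# HELP {} {}".format(metric_name, help_text),
        "# TYPE {} gauge".format(metric_name),
    ]
    for labels, value in samples:
        # Sort labels so the output is deterministic.
        label_str = ",".join(
            '{}="{}"'.format(k, v) for k, v in sorted(labels.items())
        )
        lines.append("{}{{{}}} {}".format(metric_name, label_str, value))
    return "\n".join(lines) + "\n"


# Hypothetical readings resembling IPMI sensor data from two BMC sensors:
readings = [
    ({"node": "node-0", "sensor": "CPU Temp"}, 42.0),
    ({"node": "node-0", "sensor": "Inlet Temp"}, 21.5),
]
print(to_prometheus("baremetal_temperature_celsius",
                    "Temperature reported by BMC sensors", readings))
```

In the actual project, the notifier-driver side receives sensor notifications emitted by ironic over oslo.messaging, and the Flask application serves the rendered metrics to Prometheus; this sketch only shows the formatting step in between.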