14:00:13 <mwhahaha> #startmeeting tripleo
14:00:13 <mwhahaha> #topic agenda
14:00:13 <mwhahaha> * Review past action items
14:00:13 <mwhahaha> * One off agenda items
14:00:13 <mwhahaha> * Squad status
14:00:13 <mwhahaha> * Bugs & Blueprints
14:00:13 <openstack> Meeting started Tue Nov 28 14:00:13 2017 UTC and is due to finish in 60 minutes.  The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:13 <mwhahaha> * Projects releases or stable backports
14:00:14 <mwhahaha> * Specs
14:00:14 <mwhahaha> * open discussion
14:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:15 <mwhahaha> Anyone can use the #link, #action and #info commands, not just the moderator!
14:00:15 <mwhahaha> Hi everyone! Who is around today?
14:00:16 <openstack> The meeting name has been set to 'tripleo'
14:00:20 <d0ugal> Hello!
14:00:22 <beagles> o/
14:00:25 <arxcruz> o/
14:00:31 <abishop> o/
14:00:39 <trown> o/
14:00:43 <ccamacho> hey folks
14:00:50 <slagle> hi
14:00:53 <gfidente> o/
14:00:54 <jfrancoa> o/
14:01:00 <EmilienM> o/
14:01:04 <lyarwood> o/
14:01:23 <marios> o/
14:01:29 <chem> o/
14:01:30 <shardy> o/
14:02:17 <jpich> o/
14:02:32 <atoth> o/
14:02:41 <adarazs|ruck> o/
14:02:58 <openstackgerrit> Merged openstack/tripleo-ui stable/pike: Change plan files whitelist when creating plan  https://review.openstack.org/523316
14:03:37 <mwhahaha> ok lets do this
14:03:38 <mwhahaha> #topic review past action items
14:03:41 <mwhahaha> none
14:03:53 <mwhahaha> moving on to the agenda
14:04:00 <mwhahaha> #topic one off agenda items
14:04:00 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:04:06 <mwhahaha> (gfidente) how to discern containerized vs non-containerized services within the templates?
14:04:25 <gfidente> yeah I was trying to understand if we have a way to do that from the templates right now?
14:04:56 <beagles> I could use that ability as well
14:05:10 <mwhahaha> seems like maybe a setting we could do in docker.yaml?
14:05:18 <mwhahaha> if one isn't already there
14:05:25 <marios> mwhahaha: well i think it would have to be per service?
14:05:45 <mwhahaha> depends on what you're looking for
14:05:50 <mwhahaha> but yea perhaps there as well
14:06:05 <marios> mwhahaha: like a hiera key we set for 'servicename_is_docker' or something
14:06:17 <marios> gfidente: but what do you have in mind/ have a spec? bug?
14:06:46 <gfidente> marios: no, I frequently hit a pattern where I need to know what other services are deployed
14:06:53 <gfidente> for example, to build the list of pools to create
14:06:58 <gfidente> to grant permissions on a key
14:07:11 <jaosorior> o/
14:07:40 <marios> gfidente: well, the service list you can get from the role data, but whether a service is containerized is not there, if that is what you want specifically
14:07:42 <gfidente> I think I should approach this differently and have every enabled service emit the pool it needs created
14:07:43 <marios> afaik at least
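(For reference, the per-role service list marios mentions comes from roles_data.yaml; a representative excerpt of its shape:)

    - name: Controller
      ServicesDefault:
        - OS::TripleO::Services::CephMon
        - OS::TripleO::Services::CinderVolume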
14:07:47 <mandre> o/
14:08:45 <gfidente> regarding containerized vs non-containerized, that seems like just another special case, where we need to grant permissions on a file only if the target service is not containerized
14:09:12 <gfidente> I was mostly trying to understand if there were ideas on how this could have been approached
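A rough sketch of the pattern gfidente floats above, with hypothetical key names: each enabled composable service would emit the ceph pool it needs in its role_data output, so the pool list and key permissions could be aggregated rather than special-cased.

    # Hypothetical sketch only -- a service template emitting the pool it
    # needs as part of its standard role_data output:
    outputs:
      role_data:
        description: Role data for the Cinder volume service
        value:
          service_name: cinder_volume
          config_settings:
            # assumed key name; a ceph service would aggregate these to
            # build the list of pools to create and keys to grant
            tripleo_ceph_pools:
              - volumes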
14:10:13 <ooolpbot> URGENT TRIPLEO TASKS NEED ATTENTION
14:10:14 <ooolpbot> https://bugs.launchpad.net/tripleo/+bug/1731063
14:10:14 <openstack> Launchpad bug 1731063 in tripleo "CI: tempest TestVolumeBootPattern tests fail due to not being able to ssh to the VM" [Critical,Triaged]
14:10:15 <ooolpbot> https://bugs.launchpad.net/tripleo/+bug/1734134
14:10:15 <openstack> Launchpad bug 1734134 in tripleo "Pike periodic promotion job multinode-1ctlr-featureset016 fail with error running docker 'gnocchi_db_sync' - rados.Rados.connect PermissionDeniedError: error connecting to the cluster" [Critical,In progress] - Assigned to Giulio Fidente (gfidente)
14:10:27 <gfidente> anyway looks like the answer is, not right now
14:10:37 <gfidente> I think we can move on
14:10:55 <mwhahaha> ok thanks
14:10:59 <gfidente> emitting hieradata
14:11:06 <gfidente> only works with puppet
14:11:26 <gfidente> maybe we can append ansible vars into a playbook in the future, not sure
14:11:29 <mwhahaha> no it works beyond that
14:11:36 <mwhahaha> because you can query hieradata externally
14:11:39 <marios> we are still applying puppet and the hiera can still be queried
14:12:11 <gfidente> yeah so that probably is fine if we move the logic in the playbook vs heat
14:12:38 <gfidente> (heat templates)
14:13:09 <mwhahaha> well you need to set the fact that it is containerized vs not in the THT
14:13:23 <mwhahaha> then how that information gets consumed in the deployment can live in ansible/puppet
14:13:36 <jtomasek_> o/
14:13:48 <mwhahaha> it seems that writing out a hash in hiera of the containerized services that can be queried might be the best way at the moment
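A minimal sketch of the hash mwhahaha suggests here, assuming a hypothetical key name: the templates write it into hieradata once, puppet reads it directly, and anything else can query it from outside.

    # Hypothetical hieradata emitted by the templates:
    tripleo::containerized_services:
      cinder_volume: true
      ceph_mon: false
      nova_compute: true
    # queryable outside puppet too, e.g.:
    #   hiera -c /etc/puppet/hiera.yaml tripleo::containerized_services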
14:13:49 <openstackgerrit> Merged openstack/python-tripleoclient master: Deploy with --config-download even when some role has count 0  https://review.openstack.org/523136
14:14:02 <matbu> o/
14:14:52 <mwhahaha> anyway moving on
14:15:10 <mwhahaha> #topic Squad status
14:15:10 <mwhahaha> ci
14:15:10 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
14:15:10 <mwhahaha> upgrade
14:15:10 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
14:15:11 <mwhahaha> containers
14:15:11 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-containers-squad-status
14:15:12 <mwhahaha> integration
14:15:12 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-integration-squad-status
14:15:13 <mwhahaha> ui/cli
14:15:13 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
14:15:14 <mwhahaha> validations
14:15:14 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-validations-squad-status
14:15:15 <mwhahaha> networking
14:15:15 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-networking-squad-status
14:15:16 <mwhahaha> workflows
14:15:16 <mwhahaha> #link https://etherpad.openstack.org/p/tripleo-workflows-squad-status
14:15:50 <mwhahaha> i see folks updating their status now :D
14:16:52 <mwhahaha> any other status related items that folks want to raise attention on?
14:19:03 <mwhahaha> sounds like nope
14:19:10 <mwhahaha> #topic bugs & blueprints
14:19:11 <mwhahaha> #link https://launchpad.net/tripleo/+milestone/queens-2
14:19:11 <mwhahaha> For Queens we currently have 71 (+1) blueprints and about 541 (+19) open bugs. 256 queens-2 and 285 queens-3.
14:19:41 <mwhahaha> so a reminder, queens-2 ends next week
14:20:00 <mwhahaha> please get your blueprints updated with their current status
14:20:16 <mwhahaha> remember we want features merged by the end of queens-2
14:20:43 <mwhahaha> considering we have ~541 open bugs, we shouldn't be continuing to add features in queens-3
14:20:52 <EmilienM> what did we say about CI changes?
14:21:00 <mwhahaha> which ci changes
14:21:10 <EmilienM> I'm worried about what kinds of changes we can make in CI after m-2
14:21:25 <openstackgerrit> Keith Schincke proposed openstack/tripleo-heat-templates master: Add ceph-rbdmirror ansible container service  https://review.openstack.org/520244
14:22:11 <mwhahaha> EmilienM: so if it's an addition, i think those are fine. I'm not sure which types of changes you're planning. are you talking about the ovb stuff?
14:22:28 <EmilienM> no, the ovb stuff will be fine by end of next week.
14:22:37 <EmilienM> I'm interested in the scenarios and undercloud-container
14:22:51 <mwhahaha> you can start planning them but I'm not sure we should switch to that
14:22:59 <EmilienM> weshay and Slower have some WIP but I'm afraid we won't make the m-2 schedule
14:23:02 <mwhahaha> i still wanted the undercloud container jobs by m2
14:23:16 * mwhahaha pokes dprince, weshay & Slower
14:23:27 <EmilienM> I propose to re-discuss when the work has been done
14:23:42 <EmilienM> we can use 2 weeks and observe stability numbers
14:23:47 <EmilienM> and take a decision afterward
14:23:50 <mwhahaha> sure
14:24:06 <mwhahaha> I think for things that are close to being done by m2 i'd be ok letting them slip ~2 weeks
14:24:11 <mwhahaha> but that's it
14:24:29 <EmilienM> I agree
14:24:45 <mwhahaha> so as a reminder for folks who have open reviews for blueprints and features, get your status updated and be ready to report how close they are to being done
14:24:45 <EmilienM> it's a trade-off if we want to release on time and in good condition
14:24:47 <dprince> mwhahaha: I think we are close
14:25:56 <mwhahaha> dprince: sounds good, if you need reviews plz ping us
14:26:10 <mwhahaha> any other bugs/blueprint items?
14:27:00 <jkilpatr> when an overcloud is being deployed, if a node gets stuck in wait-call-back is there a retry on that now?
14:27:25 <mwhahaha> jkilpatr: depends
14:27:30 <mwhahaha> #topic projects releases or stable backports
14:27:41 <mwhahaha> queens-2 next week
14:27:49 <mwhahaha> any backports that need attention?
14:28:29 <shardy> https://review.openstack.org/#/c/522803/ needs a review please, and I'd like to land/backport https://review.openstack.org/#/c/513450/ as that's a regression for pike
14:29:55 <flaper87> bogdando: shardy jistr http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/job-output.txt.gz#_2017-11-28_13_15_58_711025 T_T
14:30:13 <flaper87> this is the iptable rules for that job: http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/primary/logs/iptables.txt.gz
14:30:33 <EmilienM> shardy: ack
14:30:41 <flaper87> no idea why dns doesn't work there
14:30:44 <bogdando> flaper87: dns dns dns
14:30:46 <mwhahaha> any other backport items?
14:30:50 <bogdando> again dns
14:30:58 <flaper87> interestingly enough, 2 of the dns containers run http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/primary/logs/k8s-describe-all.txt.gz
14:31:09 <flaper87> bogdando: yeah, figured as much but not sure why it doesn't work :(
14:31:16 <mwhahaha> flaper87: we're in a meeting
14:31:25 <mwhahaha> moving on to specs
14:31:28 <mwhahaha> #topic specs
14:31:29 <mwhahaha> #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:31:33 <flaper87> mwhahaha: oh man, so sorry :(
14:31:41 <mwhahaha> so reminder, we will be freezing the specs next week
14:31:45 <mwhahaha> please review open specs
14:32:09 <mwhahaha> we don't have that many so it should be trivial
14:32:21 <mwhahaha> the important one for jaosorior is https://review.openstack.org/521727
14:32:31 <mwhahaha> please take a second to review the ipsec spec
14:33:19 <mwhahaha> any other spec related topics?
14:33:31 <EmilienM> ipsec for queens?
14:33:37 <EmilienM> hum
14:34:00 <mwhahaha> that's the hope, but we'll see
14:34:02 <EmilienM> like the spec was pushed 8 days again, (in the middle of QUeens)
14:34:10 <EmilienM> s/again/ago/
14:34:24 <EmilienM> I thought we were doing better at planning
14:34:36 <EmilienM> to me, a spec added in the middle of a cycle is for the next cycle
14:34:48 <EmilienM> our experience should help us plan better
14:34:53 <shardy> Hey I've been working on some interface tweaks to enable multiple compute-only stacks
14:35:16 <shardy> it's been possible for a while, but I'm trying to make it easier - wasn't planning a spec but can do one if folks want one?
14:35:20 <mwhahaha> EmilienM: i agree but it is what it is
14:36:03 <shardy> it's not really a feature, more an improved interface I think
14:36:08 <EmilienM> shardy: specs aren't required all the time, it's just a good way to collaborate on planning and design
14:36:09 <mwhahaha> shardy: it would probably be beneficial to have one. is the thought to support cells or something?
14:36:34 <shardy> mwhahaha: Initially it's just to enable potentially easier scaling where you don't want to update a single heat stack with 2000 nodes in it
14:36:46 <shardy> mwhahaha: but yeah could be a stack per cell or something in future
14:36:56 <shardy> also it's for folks that want to scale out without touching the controlplane
14:37:31 <mwhahaha> yea it wouldn't hurt to write out these use cases in a spec
14:37:41 <shardy> mwhahaha: ack OK I'll push one today
14:37:53 <mwhahaha> i'm also wondering how big a deal that is as we switch to ansible-driven deployment
14:38:22 <shardy> mwhahaha: yeah that's kind of a step further, e.g deploy the controlplane w/heat then just use ansible to configure the computes
14:38:34 <bogdando> flaper87: where is this test located?
14:38:35 <shardy> but we'd still need a tool to generate the inventory in that case
14:38:41 <mwhahaha> yea
14:38:54 <shardy> mwhahaha: for now I'm making use of the dynamic-inventory, just so we can work out how to decouple things
14:39:24 <mwhahaha> sounds good
14:39:30 <shardy> https://review.openstack.org/#/q/topic:compute_only_stack2+(status:open+OR+status:merged) has the first steps anyway
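An illustrative play showing the decoupling shardy describes: with the controlplane stack already deployed, the dynamic inventory can drive ansible against just the compute role (group and file names are assumptions).

    # Illustrative only -- e.g. run as:
    #   ansible-playbook -i tripleo-ansible-inventory configure-computes.yml
    - hosts: Compute
      gather_facts: true
      tasks:
        - name: placeholder compute-only configuration step
          debug:
            msg: "configuring {{ inventory_hostname }} without touching the controlplane"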
14:40:03 <mwhahaha> moving on to open discussion since we're basically doing that now :D
14:40:04 <mwhahaha> #topic open discussion
14:40:17 <shardy> hehe yeah sorry about that :D
14:40:48 <EmilienM> we have 2 CI alerts
14:40:56 <mwhahaha> so yea wouldn't hurt for a spec on that. would also help us track the work
14:41:02 <EmilienM> and one of them is here for long time: https://bugs.launchpad.net/tripleo/+bug/1731063
14:41:02 <openstack> Launchpad bug 1731063 in tripleo "CI: tempest TestVolumeBootPattern tests fail due to not being able to ssh to the VM" [Critical,Triaged]
14:41:15 <slagle> shardy: do you think scaling will still be an issue when driving everything with config-download?
14:41:15 <arxcruz> EmilienM: dalvarez is working on that
14:41:28 <EmilienM> arxcruz: so why isn't he assigned?
14:41:31 <slagle> shardy: e.g., is the scaling issue due to the # of resources
14:41:48 <EmilienM> when you work on a bug, put your name on it so we know something is happening
14:41:53 <arxcruz> EmilienM: he just joined this morning on that, I'll check with him
14:41:59 <arxcruz> EmilienM: ok
14:42:02 <shardy> slagle: yeah probably less of an issue, but I'm still not sure we'd want really huge environments deployed with a common stack for controlplane and compute nodes?
14:42:10 <dalvarez> EmilienM, arxcruz i was asked to take a look by lpeer just in case i could see something
14:42:13 <shardy> so this is an attempt to give some more options
14:42:16 <EmilienM> arxcruz: it's not obvious. 8 days without any comment and no assignment
14:42:31 <EmilienM> dalvarez: please assign yourself to the bug
14:42:42 <openstackgerrit> Dmitry Tantsur proposed openstack/instack-undercloud master: Enable the ansible deploy interface out of box  https://review.openstack.org/522568
14:43:25 <slagle> shardy: true. but i'd almost rather see us work on more native ironic and ansible interfaces to enable that, instead of building more reliance on Heat
14:43:40 <slagle> shardy: just deploy some nodes with ironic, use ansible to configure them
14:44:01 <shardy> slagle: yes we could do that, but atm we'd still need to generate the playbooks and inventory
14:44:02 <slagle> as opposed to multiple stacks, which is going to cause issues with a lot of baked in assumptions everywhere
14:44:07 <dtantsur> a (shameless) highlight of something that aligns well with the idea of an ansible deploy: https://blueprints.launchpad.net/tripleo/+spec/ansible-deploy
14:44:53 <shardy> slagle: I think to do pure ansible we'd have to do some more work, e.g. refactor all the heat-config things into pure ansible, and write a tool that converts all composable service templates into ansible roles
14:45:04 <EmilienM> dalvarez: what's your launchpad ID?
14:45:23 <dalvarez> EmilienM, not sure if i have to be the assignee anyways but i'll do it
14:45:28 <shardy> which would be cool, but a step beyond what I was attempting
14:45:49 <dalvarez> EmilienM, done
14:45:50 <slagle> shardy: i don't think we'd have to take it that far. we could make what gets generated with config-download configurable via ansible directly
14:45:53 <EmilienM> dalvarez: cool
14:46:17 <slagle> shardy: or perhaps this is an opportunity to integrate with apb directly
14:46:32 <slagle> to use those native roles
14:46:44 <shardy> slagle: ack, yeah open to ideas but I was looking for ways to help us scale in the Queens timeframe
14:46:57 <shardy> e.g before we move to the roles flaper87 has been working on
14:47:10 <shardy> as atm those expect k8s etc
14:47:56 <slagle> shardy: right, for queens this could be difficult. i'm just averse to adding new deps on Heat around multiple stacks, etc.
14:48:05 <slagle> as that becomes more difficult to move away from in the future
14:48:30 <slagle> but yea for Queens, not sure what options there really are.
14:48:39 <shardy> slagle: sure, I'm not really saying we have to use heat, only that we could deploy the controlplane with no computes, then work out what data is needed to configure the computes via ansible
14:48:55 <mwhahaha> shardy: given that m2 is next week, i don't think we should target this for queens. i'd rather see a spec and conversations on how to approach it in rocky
14:48:58 <shardy> slagle: config download provides a nice starting point for that, but definitely more we can do there
14:49:32 <shardy> mwhahaha: well lets have the discussion and see where it goes I guess
14:49:37 <mwhahaha> sure
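As background for the exchange above: config-download renders the deployment as per-step ansible plays, which is what slagle suggests could be made configurable or swapped for native roles. A rough sketch of their shape (illustrative, not the actual generated file):

    # Illustrative shape of a generated deploy-step play:
    - hosts: overcloud
      gather_facts: false
      vars:
        step: 1
      tasks:
        - name: placeholder for a generated deploy-step task
          debug:
            msg: "running step {{ step }} on {{ inventory_hostname }}"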
14:49:42 <mwhahaha> anyway any other topics?
14:49:49 <dtantsur> another small highlight please
14:49:59 <dtantsur> we're moving away from classic drivers in ironic
14:50:07 <dtantsur> so I'm putting a lot of patches as part of https://bugs.launchpad.net/tripleo/+bug/1690185
14:50:07 <openstack> Launchpad bug 1690185 in tripleo "[RFE] Deprecate classic drivers" [High,In progress] - Assigned to Dmitry Tantsur (divius)
14:50:29 <dtantsur> I'd appreciate attention to them to avoid a rush later on, when deprecation warnings start to pop up
14:50:33 <openstackgerrit> Merged openstack/tripleo-heat-templates stable/pike: Ensure os-net-config conditional for upgrade doesn’t fail.  https://review.openstack.org/523073
14:50:39 <dtantsur> (or when we actually pull the trigger next cycle (?))
14:50:43 <dtantsur> thanks
14:50:55 <EmilienM> you mean, you need reviews?
14:51:13 <dtantsur> yep, I need reviews
14:51:26 <dtantsur> it's a lot of small patches to ~ all OoO projects
14:51:57 <EmilienM> dtantsur: we like to create etherpads in this kind of situation
14:52:06 <EmilienM> dtantsur: so people can follow the WiP
14:52:08 <dtantsur> good point, I'll get you one
14:52:18 <EmilienM> dtantsur: send it to ML, people will help
14:52:19 <rdopiera> in fact, we like to create etherpads in every situation
14:52:25 <slagle> dtantsur: is all the work done for undercloud deploy and undercloud install?
14:52:39 <dtantsur> slagle: I'm on "undercloud install" stage currently
14:52:51 <slagle> dtantsur: i was looking at https://review.openstack.org/#/c/519300/ earlier after you pasted it
14:52:53 <dtantsur> also some THT patches are up as well
14:53:11 <slagle> and was wondering if you shouldn't just focus your effort on undercloud deploy
14:53:29 <dtantsur> slagle: if you promise me that people switch to it in Queens in production ;)
14:53:40 <slagle> i can't promise :)
14:53:46 <slagle> i think it has to be in undercloud deploy though
14:53:52 <dtantsur> I'm going to do both
14:53:56 <slagle> ok
14:54:01 <dtantsur> especially since it also affects ironic in the overcloud
14:54:04 <dtantsur> ETOOIRONIC
14:54:35 <dtantsur> I'm waiting for undercloud install bit to merge to cargo-cult it to undercloud deploy
14:54:41 <dprince> yeah, do it in both so we have parity maybe
14:55:41 <mwhahaha> ok i've got to take a kid to school, any other notable topics?
14:55:58 <EmilienM> mwhahaha: close it, thanks
14:56:04 <EmilienM> we can continue here
14:56:17 <mwhahaha> thanks everyone
14:56:19 <mwhahaha> #endmeeting