14:00:13 #startmeeting tripleo
14:00:13 Meeting started Tue Nov 28 14:00:13 2017 UTC and is due to finish in 60 minutes. The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:13 #topic agenda
14:00:13 * Review past action items
14:00:13 * One off agenda items
14:00:13 * Squad status
14:00:13 * Bugs & Blueprints
14:00:13 * Projects releases or stable backports
14:00:14 * Specs
14:00:14 * open discussion
14:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:15 Anyone can use the #link, #action and #info commands, not just the moderator!
14:00:15 Hi everyone! who is around today?
14:00:16 The meeting name has been set to 'tripleo'
14:00:20 Hello!
14:00:22 o/
14:00:25 o/
14:00:31 o/
14:00:39 o/
14:00:43 hey folks
14:00:50 hi
14:00:53 o/
14:00:54 o/
14:01:00 o/
14:01:04 o/
14:01:23 o/
14:01:29 o/
14:01:30 o/
14:02:17 o/
14:02:32 o/
14:02:41 o/
14:02:58 Merged openstack/tripleo-ui stable/pike: Change plan files whitelist when creating plan https://review.openstack.org/523316
14:03:37 ok let's do this
14:03:38 #topic review past action items
14:03:41 none
14:03:53 moving on to the agenda
14:04:00 #topic one off agenda items
14:04:00 #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:04:06 (gfidente) how to discern containerized vs non-containerized services within the templates?
14:04:25 yeah I was trying to understand if we have a way to do that from the templates right now?
14:04:56 I could use that ability as well
14:05:10 seems like maybe a setting we could do in docker.yaml?
14:05:18 if one isn't already there
14:05:25 mwhahaha: well i think it would have to be per service?
14:05:45 depends on what you're looking for
14:05:50 but yea perhaps there as well
14:06:05 mwhahaha: like a hiera we set for 'servicename_is_docker' or something
14:06:17 gfidente: but what do you have in mind/ have a spec? bug?
14:06:46 marios no, I frequently hit a pattern where I need to know what other services are deployed
14:06:53 for example, to build the list of pools to create
14:06:58 to grant permissions on a key
14:07:11 o/
14:07:40 gfidente: well, the service list you can get from the role data, but whether the service is containerized is not there, if that is what you want specifically
14:07:42 I think I should approach this differently and have every enabled service emit a pool to be created
14:07:43 afaik at least
14:07:47 o/
14:08:45 regarding containerized vs non-containerized, that seems just another special case where we need to grant permissions on a file only if the target service is not containerized
14:09:12 I was mostly trying to understand if there were ideas on how this could be approached
14:10:13 URGENT TRIPLEO TASKS NEED ATTENTION
14:10:14 https://bugs.launchpad.net/tripleo/+bug/1731063
14:10:14 Launchpad bug 1731063 in tripleo "CI: tempest TestVolumeBootPattern tests fail due to not being able to ssh to the VM" [Critical,Triaged]
14:10:15 https://bugs.launchpad.net/tripleo/+bug/1734134
14:10:15 Launchpad bug 1734134 in tripleo "Pike periodic promotion job multinode-1ctlr-featureset016 fail with error running docker 'gnocchi_db_sync' - rados.Rados.connect PermissionDeniedError: error connecting to the cluster" [Critical,In progress] - Assigned to Giulio Fidente (gfidente)
14:10:27 anyway looks like the answer is, not right now
14:10:37 I think we can move on
14:10:55 ok thanks
14:10:59 emitting hieradata
14:11:06 only works with puppet
14:11:26 maybe we can append ansible vars into a playbook in the future, not sure
14:11:29 no it works beyond that
14:11:36 because you can query hieradata externally
14:11:39 we are still applying puppet and the hiera can still be queried
14:12:11 yeah so that probably is fine if we move the logic into the playbook vs heat
14:12:38 (heat templates)
14:13:09 well you need to set the fact that it is containerized vs not in the THT
14:13:23 then how that information gets consumed in the deployment can live in ansible/puppet
14:13:36 o/
14:13:48 it seems that writing out a hash in hiera of the containerized services that can be queried might be the best way at the moment
14:13:49 Merged openstack/python-tripleoclient master: Deploy with --config-download even when some role has count 0 https://review.openstack.org/523136
14:14:02 o/
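To make the 'servicename_is_docker' idea above concrete, here is a minimal sketch of what such a flag could look like in a docker service template. The key name and the surrounding layout are assumptions for illustration, not an existing tripleo-heat-templates interface; only the general flow, where a service's config_settings output is aggregated into hieradata on the node, is how THT actually works.

```yaml
# Hypothetical docker service template snippet: publish a "containerized"
# flag through config_settings so it lands in hiera on the node.
# The tripleo::profile::gnocchi_api::containerized key is an assumption,
# not an existing THT setting.
outputs:
  role_data:
    description: Role data for the containerized Gnocchi API service
    value:
      service_name: gnocchi_api
      config_settings:
        tripleo::profile::gnocchi_api::containerized: true
```

Because config_settings end up as hieradata on disk, such a flag would stay readable outside the puppet run as well, e.g. via the `hiera` command-line tool from a shell or an ansible task, which is the "query hieradata externally" point made above.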
14:14:52 anyway moving on
14:15:10 #topic Squad status
14:15:10 ci
14:15:10 #link https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
14:15:10 upgrade
14:15:10 #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
14:15:11 containers
14:15:11 #link https://etherpad.openstack.org/p/tripleo-containers-squad-status
14:15:12 integration
14:15:12 #link https://etherpad.openstack.org/p/tripleo-integration-squad-status
14:15:13 ui/cli
14:15:13 #link https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
14:15:14 validations
14:15:14 #link https://etherpad.openstack.org/p/tripleo-validations-squad-status
14:15:15 networking
14:15:15 #link https://etherpad.openstack.org/p/tripleo-networking-squad-status
14:15:16 workflows
14:15:16 #link https://etherpad.openstack.org/p/tripleo-workflows-squad-status
14:15:50 i see folks updating their status now :D
14:16:52 any other status related items that folks want to raise attention on?
14:19:03 sounds like nope
14:19:10 #topic bugs & blueprints
14:19:11 #link https://launchpad.net/tripleo/+milestone/queens-2
14:19:11 For Queens we currently have 71 (+1) blueprints and about 541 (+19) open bugs. 256 queens-2 and 285 queens-3.
14:19:41 so a reminder, queens-2 ends next week
14:20:00 please get your blueprints updated with their current status
14:20:16 remember we want features merged by the end of queens-2
14:20:43 considering we have ~541 open bugs, we shouldn't be continuing to add features in queens-3
14:20:52 what did we say about CI changes?
14:21:00 which ci changes
14:21:10 I'm worried about what kind of CI changes we can make after m-2
14:21:25 Keith Schincke proposed openstack/tripleo-heat-templates master: Add ceph-rbdmirror ansible container service https://review.openstack.org/520244
14:22:11 EmilienM: so if it's an addition, i think those are fine. I'm not sure which types of changes you're planning. are you talking about the ovb stuff?
14:22:28 no, the ovb stuff will be fine by end of next week.
14:22:37 I'm interested in the scenarios and undercloud-container
14:22:51 you can start planning them but I'm not sure we should switch to that
14:22:59 weshay and Slower have some WIP but I'm afraid we won't make the m-2 schedule
14:23:02 i still wanted the undercloud container jobs by m2
14:23:16 * mwhahaha pokes dprince, weshay & Slower
14:23:27 I propose to re-discuss when the work has been done
14:23:42 we can use 2 weeks and observe stability numbers
14:23:47 and make a decision afterward
14:23:50 sure
14:24:06 I think for things that are close to being done by m2 i'd be ok letting slip ~2 weeks
14:24:11 but that's it
14:24:29 I agree
14:24:45 so as a reminder for folks who have open reviews for blueprints and features, get your status updated and be able to report how close you are to being done
14:24:45 it's a trade-off if we want to release on time and in good conditions
14:24:47 mwhahaha: I think we are close
14:25:56 dprince: sounds good, if you need reviews plz ping us
14:26:10 any other bugs/blueprint items?
14:27:00 when an overcloud is being deployed, if a node gets stuck in wait-call-back is there a retry on that now?
14:27:25 jkilpatr: depends
14:27:30 #topic projects releases or stable backports
14:27:41 queens-2 next week
14:27:49 any backports that need attention?
14:28:29 https://review.openstack.org/#/c/522803/ needs a review please, and I'd like to land/backport https://review.openstack.org/#/c/513450/ as that's a regression for pike
14:29:55 bogdando: shardy jistr http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/job-output.txt.gz#_2017-11-28_13_15_58_711025 T_T
14:30:13 these are the iptables rules for that job: http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/primary/logs/iptables.txt.gz
14:30:33 shardy: ack
14:30:41 no idea why dns doesn't work there
14:30:44 flaper87: dns dns dns
14:30:46 any other backport items?
14:30:50 again dns
14:30:58 interestingly enough, 2 of the dns containers run http://logs.openstack.org/51/521951/17/check/ansible-role-k8s-keystone-kubernetes-centos/8511f86/primary/logs/k8s-describe-all.txt.gz
14:31:09 bogdando: yeah, figured as much but not sure why it doesn't work :(
14:31:16 flaper87: we're in a meeting
14:31:25 moving on to specs
14:31:28 #topic specs
14:31:29 #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:31:33 mwhahaha: oh man, so sorry :(
14:31:41 so reminder, we will be freezing the specs next week
14:31:45 please review open specs
14:32:09 we don't have that many so it should be trivial
14:32:21 the important one for jaosorior is https://review.openstack.org/521727
14:32:31 please take a second to review the ipsec spec
14:33:19 any other spec related topics?
14:33:31 ipsec for queens?
14:33:37 hum
14:34:00 that's the hope, but we'll see
14:34:02 like the spec was pushed 8 days ago, (in the middle of Queens)
14:34:24 I thought we were doing better at planning
14:34:36 to me, a spec added in the middle of a cycle is for the next cycle
14:34:48 our experience should help us plan better
14:34:53 Hey I've been working on some interface tweaks to enable multiple compute-only stacks
14:35:16 it's been possible for a while, but I'm trying to make it easier - wasn't planning a spec but can do one if folks want one?
14:35:20 EmilienM: i agree but it is what it is
14:36:03 it's not really a feature, more an improved interface I think
14:36:08 shardy: specs aren't required all the time, it's just a good way to collaborate on planning and design
14:36:09 shardy: it would probably be beneficial to have one. is the thought to support cells or something?
14:36:34 mwhahaha: Initially it's just to enable potentially easier scaling where you don't want to update a single heat stack with 2000 nodes in it
14:36:46 mwhahaha: but yeah could be a stack per cell or something in future
14:36:56 also it's for folks that want to scale out without touching the controlplane
14:37:31 yea it wouldn't hurt to write out these use cases in a spec
14:37:41 mwhahaha: ack OK I'll push one today
14:37:53 i'm also wondering how much of a deal that is as we switch to ansible driven deployment
14:38:22 mwhahaha: yeah that's kind of a step further, e.g. deploy the controlplane w/heat then just use ansible to configure the computes
14:38:34 flaper87: where is this test located?
14:38:35 but we'd still need a tool to generate the inventory in that case
14:38:41 yea
14:38:54 mwhahaha: for now I'm making use of the dynamic-inventory, just so we can work out how to decouple things
14:39:24 sounds good
14:39:30 https://review.openstack.org/#/q/topic:compute_only_stack2+(status:open+OR+status:merged) has the first steps anyway
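As a rough sketch of the dynamic-inventory approach shardy mentions, a playbook could target just the compute nodes that the generated inventory exposes, leaving the controlplane stack untouched. The playbook below is hypothetical, and it assumes the `tripleo-ansible-inventory` script publishes a group per role such as `Compute`.

```yaml
# scale-computes.yml -- hypothetical playbook; run against the dynamic
# inventory with something like:
#   ansible-playbook -i /usr/bin/tripleo-ansible-inventory scale-computes.yml
# Only the Compute role hosts are touched; the controlplane is left alone.
- hosts: Compute
  gather_facts: false
  tasks:
    - name: verify the compute nodes are reachable through the inventory
      ping:
```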
14:40:03 moving on to open discussion since we're basically doing that now :D
14:40:04 #topic open discussion
14:40:17 hehe yeah sorry about that :D
14:40:48 we have 2 CI alerts
14:40:56 so yea wouldn't hurt for a spec on that. would also help us track the work
14:41:02 and one of them has been here for a long time: https://bugs.launchpad.net/tripleo/+bug/1731063
14:41:02 Launchpad bug 1731063 in tripleo "CI: tempest TestVolumeBootPattern tests fail due to not being able to ssh to the VM" [Critical,Triaged]
14:41:15 shardy: do you think scaling will still be an issue when driving everything with config-download?
14:41:15 EmilienM: dalvarez is working on that
14:41:28 arxcruz: so why isn't he assigned?
14:41:31 shardy: e.g., is the scaling issue due to the # of resources
14:41:48 when you work on a bug, put your name on it so we know something is happening
14:41:53 EmilienM: he just joined this morning on that, I'll check with him
14:41:59 EmilienM: ok
14:42:02 slagle: yeah probably less of an issue, but I'm still not sure we'd want really huge environments deployed with a common stack for controlplane and compute nodes?
14:42:10 EmilienM, arxcruz i was asked to take a look by lpeer just in case i could see something
14:42:13 so this is an attempt to give some more options
14:42:16 arxcruz: it's not obvious. 8 days without any comment and no assignment
14:42:31 dalvarez: please assign yourself to the bug
14:42:42 Dmitry Tantsur proposed openstack/instack-undercloud master: Enable the ansible deploy interface out of box https://review.openstack.org/522568
14:43:25 shardy: true. but i'd almost rather see us work on more native ironic and ansible interfaces to enable that, instead of building more reliance on Heat
14:43:40 shardy: just deploy some nodes with ironic, use ansible to configure them
14:44:01 slagle: yes we could do that, but atm we'd still need to generate the playbooks and inventory
14:44:02 as opposed to multiple stacks, which is going to cause issues with a lot of baked in assumptions everywhere
14:44:07 a (shameless) highlight of something that aligns well with the idea of an ansible deploy: https://blueprints.launchpad.net/tripleo/+spec/ansible-deploy
14:44:53 slagle: I think to do pure ansible we'd have to do some more work, e.g. refactor all the heat-config things into pure ansible, and write a tool that converts all composable service templates into ansible roles
14:45:04 dalvarez: what's your launchpad ID?
14:45:23 EmilienM, not sure if i have to be the assignee anyways but i'll do it
14:45:28 which would be cool, but a step beyond what I was attempting
14:45:49 EmilienM, done
14:45:50 shardy: i don't think we'd have to take it that far. we could make what gets generated with config-download configurable via ansible directly
14:45:53 dalvarez: cool
14:46:17 shardy: or perhaps this is an opportunity to integrate with apb directly
14:46:32 to use those native roles
14:46:44 slagle: ack, yeah open to ideas but I was looking for ways to help us scale in the Queens timeframe
14:46:57 e.g. before we move to the roles flaper87 has been working on
14:47:10 as atm those expect k8s etc
14:47:56 shardy: right, for queens this could be difficult. i'm just averse to adding new deps on Heat around multiple stacks, etc.
14:48:05 as that becomes more difficult to move away from in the future
14:48:30 but yea for Queens, not sure what options there really are.
14:48:39 slagle: sure, I'm not really saying we have to use heat, only that we could deploy the controlplane with no computes, then work out what data is needed to configure the computes via ansible
14:48:55 shardy: given that m2 is next week, i don't think we should target this for queens. i'd rather see a spec and conversations on how to approach it in rocky
14:48:58 slagle: config download provides a nice starting point for that, but definitely more we can do there
14:49:32 mwhahaha: well let's have the discussion and see where it goes I guess
14:49:37 sure
14:49:42 anyway any other topics?
14:49:49 another small highlight please
14:49:59 we're moving away from classic drivers in ironic
14:50:07 so I'm putting up a lot of patches as part of https://bugs.launchpad.net/tripleo/+bug/1690185
14:50:07 Launchpad bug 1690185 in tripleo "[RFE] Deprecate classic drivers" [High,In progress] - Assigned to Dmitry Tantsur (divius)
14:50:29 I'd appreciate attention to them to avoid a rush later on, when deprecation warnings start to pop up
14:50:33 Merged openstack/tripleo-heat-templates stable/pike: Ensure os-net-config conditional for upgrade doesn’t fail. https://review.openstack.org/523073
14:50:39 (or when we actually pull the trigger next cycle (?))
14:50:43 thanks
14:50:55 you mean, you need reviews?
14:51:13 yep, I need reviews
14:51:26 it's a lot of small patches to ~all OoO projects
14:51:57 dtantsur: we like to create etherpads in this kind of situation
14:52:06 dtantsur: so people can follow the WiP
14:52:08 good point, I'll get you one
14:52:18 dtantsur: send it to the ML, people will help
14:52:19 in fact, we like to create etherpads in every situation
14:52:25 dtantsur: is all the work done for undercloud deploy and undercloud install?
14:52:39 slagle: I'm at the "undercloud install" stage currently
14:52:51 dtantsur: i was looking at https://review.openstack.org/#/c/519300/ earlier after you pasted it
14:52:53 also some THT patches are up as well
14:53:11 and was wondering if you shouldn't just be focusing your effort on undercloud deploy
14:53:29 slagle: if you promise me that people switch to it in Queens in production ;)
14:53:40 i can't promise :)
14:53:46 i think it has to be in undercloud deploy though
14:53:52 I'm going to do both
14:53:56 ok
14:54:01 especially since it also affects ironic in the overcloud
14:54:04 ETOOIRONIC
14:54:35 I'm waiting for the undercloud install bit to merge to cargo-cult it to undercloud deploy
14:54:41 yeah, do it in both so we have parity maybe
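For a sense of what the classic-driver migration above looks like from the TripleO side: classic drivers such as `pxe_ipmitool` give way to hardware types such as `ipmi`. A hedged sketch of an overcloud environment file follows; the parameter names are based on the THT ironic service templates of this era but should be treated as assumptions rather than a verified interface.

```yaml
# Hypothetical environment file switching overcloud ironic from classic
# drivers to hardware types; the parameter names are assumptions.
parameter_defaults:
  # replaces classic drivers like pxe_ipmitool / pxe_drac
  IronicEnabledHardwareTypes:
    - ipmi
    - redfish
  # explicitly empty out the deprecated classic driver list
  IronicEnabledDrivers: []
```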
14:55:41 ok i've got to take a kid to school, any other notable topics?
14:55:58 mwhahaha: close it, thanks
14:56:04 we can continue here
14:56:17 thanks everyone
14:56:19 #endmeeting