14:00:26 #startmeeting tripleo
14:00:26 #topic agenda
14:00:26 * Review past action items
14:00:26 * One off agenda items
14:00:26 * Squad status
14:00:27 * Bugs & Blueprints
14:00:27 * Projects releases or stable backports
14:00:27 Meeting started Tue Sep 3 14:00:26 2019 UTC and is due to finish in 60 minutes. The chair is mwhahaha. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:28 * Specs
14:00:28 * open discussion
14:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:29 Anyone can use the #link, #action and #info commands, not just the moderator!
14:00:29 Hi everyone! who is around today?
14:00:30 The meeting name has been set to 'tripleo'
14:00:36 o/
14:00:42 mwhahaha: i am curious about one thing: are you against adding the zuul-job or even about adding the molecule scenario to the repo? having that scenario would be handy for local testing/development afaik
14:00:49 o/
14:00:53 Hi
14:00:58 o/
14:01:02 o/
14:01:03 o/
14:01:05 \o/
14:01:05 o/
14:01:06 zbr: setting up molecule is not handy for local testing/development IMHO
14:01:17 zbr: that's your workflow, which may not be useful for everyone
14:01:46 it might be for other projects, but not that one
14:02:38 0/
14:04:21 o/
14:04:24 mwhahaha +1 - I use molecule a lot, but never on my local workstation.
14:04:51 alright let's get on with it
14:04:58 #topic review past action items
14:04:58 None
14:05:11 need more action items
14:05:26 o/
14:05:31 #topic one off agenda items
14:05:31 #link https://etherpad.openstack.org/p/tripleo-meeting-items
14:05:39 (Tengu + holser) podman ghost containers https://bugzilla.redhat.com/show_bug.cgi?id=1747885
14:05:41 bugzilla.redhat.com bug 1747885 in rhosp-director "[UPDATE] undercloud update failed with error "ERROR configuring mysql"" [Urgent,Assigned] - Assigned to cjeanner
14:05:51 oh noes
14:06:02 mwhahaha sorry
14:06:02 :)
14:06:08 :)
14:06:12 I know you like podman issues, mwhahaha
14:06:21 holser: care to do the talk?
14:06:30 sure
14:07:40 o/
14:09:43 so we have a potential solution
14:10:00 but tl;dr we need to add --storage to podman rm in THT/container-puppet.py
14:10:14 and do a downstream backport of the version of podman. It's a safe mitigation according to podman folks
14:10:21 until they root cause the actual issue
14:10:36 o/
14:10:41 \o/
14:13:02 o/
14:13:20 sooo anything else about this issue?
14:13:26 we just need to work around a podman problem?
14:13:27 EmilienM to add --storage we need newer podman
14:13:37 podman 1.0.3 doesn't have such an option
14:13:44 holser: that's what I wrote if you read me
14:13:51 "do a downstream backport of the version of podman"
14:13:58 i don't think we can do that
14:14:01 * holser hides
14:14:03 i thought the containers team needs to
14:14:07 it's already done
14:14:10 ok
14:14:11 and I'm working on the import now
14:14:17 can we move on and take it offline?
14:14:25 fair enough
14:14:31 I am fine
14:14:47 ssbarnea: Use case: How to test python code locally, isolated, and on multiple platforms?
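The mitigation discussed in the podman topic above — add `--storage` to `podman rm`, which only works once a newer podman is available — could be sketched roughly like this. This is a minimal illustration, not the actual THT/container-puppet.py change; `build_rm_command` is a hypothetical helper name.

```python
# Sketch of the workaround discussed above: `podman rm --storage` removes a
# "ghost" container straight from the storage backend, but the flag does not
# exist in podman 1.0.3, so it has to be added conditionally.
# `build_rm_command` is a hypothetical helper, not actual
# container-puppet.py code.

def build_rm_command(container, podman_version):
    """Return the `podman rm` argv, adding --storage when supported."""
    # Naive numeric version comparison, for illustration only.
    version = tuple(int(part) for part in podman_version.split('.'))
    cmd = ['podman', 'rm']
    if version > (1, 0, 3):
        cmd.append('--storage')
    cmd.append(container)
    return cmd


# The caller would then execute the command, e.g.:
# subprocess.check_call(build_rm_command('mysql', detected_version))
```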
14:15:08 zbr please explain
14:16:34 zbr ^
14:16:40 use case: i develop code on a centos-7 host and I want to check how it behaves on another platform, like rhel-8
14:17:24 now I cannot run anything *local* to do this, I need to provision these platforms on my own, even if I want to do only a "cat /etc/os-release".
14:18:20 funny that I'm looking at an article that might do just that for you... https://developers.redhat.com/blog/2019/08/23/run-red-hat-enterprise-linux-8-in-a-container-on-rhel-7/?sc_cid=701f2000000RtqCAAS
14:18:39 even when I do succeed doing that, I often re-use them between tests and they become dirty, so when I test again i may get different results.
14:18:48 Merged openstack/tripleo-docs master: Fix trunk.rdoproject.org url https://review.opendev.org/679621
14:19:06 my concern is running rhel-8 upstream, as it's not an approved platform
14:19:16 even if it's just a container
14:19:18 if I would be able to test locally what-happens-on-platform-x, before even creating a CR, it means a faster feedback loop, a better CR from the start, less load on zuul.
14:19:41 zbr have you run that idea by the infra guys?
14:19:43 IMHO we should not be reliant on deployment tests for pure python projects such as tripleo-repos. In the tripleo-repos case we should be able to write a unit test to cover any specific cases that we expect to support. Then the actual integration tests should likely be covered by our normal CI jobs.
14:20:07 weshay: we have been running fedora jobs for 9+ months, even if it is not an approved platform ;)
14:20:12 mocking out the cat /etc/os-release should be trivial
14:20:15 fedora is approved
14:20:22 We've done multi-os testing via unit tests in puppet for years so I don't see why we can't perform similar tests in python.
14:20:27 ok..
cool
14:20:47 The whole point of tripleo-repos was to replace the ansible-specific code that existed in quickstart, which didn't have any form of unit testing and would have required something like molecule to actually check. We were repeatedly bitten by changes to those roles.
14:20:51 also, i care less about whether we run them or not, more about me being able to run them locally. -- it's practical.
14:21:18 zbr mwhahaha so.. do we need to mock rhel-8 or can we use a container?
14:21:38 so my way is being more testy driven development
14:21:42 s/testy/test
14:21:47 vs deployment driven development
14:21:59 we've missed a ton of issues by relying on deployments
14:22:01 see tripleo-ci
14:22:20 so i would prefer that we not push reliance on actual deployments unless absolutely necessary
14:22:29 in this case, for tripleo-repos I do not believe it is necessary
14:22:51 so .. just making sure I understand.. you would like to see us help push more molecule tests across ansible roles, and unit tests across python
14:22:55 mwhahaha -- i agree here, still. molecule is clearly not deployment, it is closer to functional testing, i would say.
14:23:26 right that's fine, I just don't think that's the platform we should be pushing for python projects
14:23:29 ansible stuff, sure
14:23:31 python no
14:23:40 it's not what the other upstream projects are doing
14:23:52 if you want to test, use an actual zuul node as that's the current pattern
14:24:10 if molecule gets adopted for other openstack python projects, then we can revisit
14:24:18 but i believe we should limit it to ansible-related items at this time
14:24:23 similar to the beaker tests we have in puppet
14:24:31 we don't push beaker for python even if we could reuse it
14:24:44 it has its place but it should be specifically defined when/where to be used
14:26:05 mainly I see molecule here as a way of allowing us some kind of testing which was impossible (or inconvenient) outside zuul, by using local virtualization.
14:26:17 Emilien Macchi proposed openstack/tripleo-heat-templates master: container-puppet: run podman rm with --storage https://review.opendev.org/679793
14:26:56 i confess, it is a bit outside its original scope, but it works. not going to push more for it, just wanted to explain why I found it useful.
14:28:00 i see it as a specific method of doing functional testing which may work for you but may not for others. there are other options (vagrant, beaker, etc)
14:28:38 I'm just not sold on this specific case
14:29:46 mwhahaha so to summarize, limit molecule use to ansible only
14:30:06 in other use cases, mock out what is required
14:30:11 yes i believe that should be the current scope
14:30:27 we can revisit later, but i think that's where we should draw the line today
14:30:33 zbr comments?
14:30:37 zbr: can't we say, start using it as much as possible for ansible. once we start to have some decent coverage, revisit the 'use it for python' too?
14:30:58 zbr: was just on a call with panda, he was saying there are good use cases for molecule/python (e.g.
different distros)
14:31:26 no more comments
14:31:32 i also think this should be something we raise to the larger community as a possible replacement for maybe existing testing
14:31:35 zbr: so maybe once we start using it more/enough for ansible, folks might like to revisit, and we can also see what the wider community does wrt python/molecule
14:31:47 if we get broader adoption then it makes it easier to say we should be using it too
14:31:56 but currently the upstream doesn't do this type of testing
14:32:35 yep, I had similar concerns, when you're testing multiple platforms on a node, you're adding a provisioning layer on top of zuul that is left to the developer
14:33:00 and that's something we definitely want to do with support from infra
14:33:17 but the use case is clear
14:33:31 upstream testing was mainly focused around use of zuul exclusively, every time i mentioned something related to local testing they were not really interested, zuul was considered enough.
14:33:56 but i am still working to slowly introduce them to molecule for zuul roles, i will see what success rate I will have.
14:34:19 zuul is great, but nothing beats running local/isolated tests, the iteration on changes is really much faster
14:34:50 Harald Jensås proposed openstack/tripleo-heat-templates master: Don't add IpList for disabled networks https://review.opendev.org/679354
14:35:03 Jose Luis Franco proposed openstack/tripleo-heat-templates stable/stein: Split upgrade_steps_playbook into different plays. https://review.opendev.org/679737
14:35:05 mainly "create new zuul jobs" was the de facto answer for any new test case, but we all know what happens when your test matrix explodes ;)
14:35:21 but if we can have coverage even for multiple platforms with just unit tests then it's ok for me
14:35:45 zbr: one problem with unit tests is we currently have some
14:35:50 I just hope it can be achieved easily.
14:35:50 zbr: i.e.
we'd need to convert those first
14:36:00 zbr: vs net new additions for the molecule/ansible case
14:36:09 i definitely don't think unit tests should be replaced by molecule items
14:36:39 augmented?
14:36:50 not replaced, molecule would probably just run tox on a different distribution
14:37:11 there shouldn't be a need to run tox on a different distro
14:37:17 like that doesn't make sense really
14:37:40 molecule should be actually validating system changes, not for us to rerun unit tests
14:37:48 unit tests shouldn't care what distro they're run on
14:37:55 that's the whole point of python
14:40:03 so what if python code touches distro things,
14:40:15 that depends a lot on what your python code is doing, the code may be portable, but if it does os.system("python") and the python executable does not exist on newer platforms, you will fail to find a platform-specific bug.
14:40:23 like https://github.com/openstack/tripleo-repos/blob/master/tripleo_repos/main.py#L79-L106
14:40:23 so in the case of tripleo-repos, installing tripleo-repos and executing it would be the molecule tests
14:40:46 yes
14:40:51 I imagine
14:40:52 in the case of that, we really should be doing a python file open and parsing, not relying on sourcing os-release
14:41:03 that bit of code is not really proper python
14:41:07 it's a bit more of a shell hack
14:41:38 however, it can easily be mocked to return different values
14:41:51 so mocking _get_distro to return various info is trivial
14:42:22 anyway i think we've discussed this a bit much, let's take further discussions about specifics offline
14:42:35 +1
14:42:48 alright moving on
14:42:48 fultonj: Split Control Plane Unified Ansible Inventory
14:42:54 ack
14:43:02 i have an open blueprint
14:43:11 i'm raising my hand and saying i plan to have it done before m3
14:43:24 note m3 is this week
14:43:27 ack
14:43:31 so we need to merge it like today/tomorrow
14:43:34 for a chance to make m3
14:43:36 yes
14:43:44 two patches
14:43:51 one is ready for review.
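As an aside to the earlier tripleo-repos exchange: parsing os-release directly in python (instead of sourcing it in a shell) and mocking that lookup in a unit test could look roughly like this. This is a sketch under assumptions — `get_distro` is an illustrative stand-in, not tripleo-repos' actual `_get_distro`.

```python
from unittest import mock


def get_distro(path='/etc/os-release'):
    """Parse os-release as plain KEY=value pairs instead of shelling out.

    Illustrative stand-in for a distro lookup such as tripleo-repos'
    _get_distro; the real function's behavior may differ.
    """
    info = {}
    with open(path) as f:
        for line in f.read().splitlines():
            line = line.strip()
            if not line or line.startswith('#') or '=' not in line:
                continue
            key, _, value = line.partition('=')
            info[key] = value.strip('"')
    return info.get('ID'), info.get('VERSION_ID')


# Mocking the file content makes multi-distro unit tests trivial: no
# container or second host is needed to "test on rhel-8".
rhel8 = 'NAME="Red Hat Enterprise Linux"\nID="rhel"\nVERSION_ID="8.0"\n'
with mock.patch('builtins.open', mock.mock_open(read_data=rhel8)):
    assert get_distro() == ('rhel', '8.0')
```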
it depends on a second
14:43:59 the second just needs the unit test i'm writing
14:44:06 https://review.opendev.org/#/c/664065
14:44:07 https://review.opendev.org/#/c/679520
14:44:15 currently they are zuul -1, is that not related? /me has to check
14:44:25 isn't M3 next week?
14:44:32 sept 9
14:44:33 6 days
14:44:42 monday
14:44:44 yes Sep 09 - Sep 13
14:45:04 oh right
14:45:23 ok so next week, needs to land this week then
14:45:30 (ideally)
14:45:32 mwhahaha: yes, i think this is possible
14:46:02 wanted to state this. since it's an open blueprint and m3 is coming.
14:46:07 that's all unless anyone has comments
14:46:14 owalsh: ^ fyi
14:47:34 k i'll keep an eye on them this week and review
14:47:41 anything else about these patches?
14:48:06 the inventory generation code is kept the same if there's one heat stack
14:48:13 we're just adding a multistack option
14:48:27 so i don't think it's dangerous
14:49:05 k
14:49:12 shall we move on?
14:49:17 works for me. thanks
14:49:22 Chandankumar: Gating https://github.com/containers/libpod repo using ooo CI jobs
14:49:23 fultonj: ack
14:49:33 yes it's me
14:49:40 soooo shouldn't podman stuff be gated in like delorean deps?
14:49:49 or are we getting it straight from centos repos
14:50:03 * fultonj will remove dnm on https://review.opendev.org/#/c/679520 today when unit tests added
14:50:05 it's already gated in RDO
14:50:26 mwhahaha: we can gate any project hosted on github using the software factory github app
14:50:38 like https://review.rdoproject.org/r/#/c/21994/
14:50:41 there is some work also done by jpena to test ceph-ansible
14:50:49 chandankumar in theory, we haven't done it yet
14:51:02 we have a to-do item from gfidente to do it for ceph
14:51:08 and now podman
14:51:10 weshay++
14:51:16 ceph-ansible
14:51:19 aye
14:51:28 Currently in RDO, we update the version then test it
14:51:38 isn't shade doing it already; so it's a working example?
14:51:41 if it does not pass all the jobs then we roll back
14:51:48 seems more like a question for the containers team on the viability of doing such things.
14:52:00 now we support two distros, both having different podman versions
14:52:03 Sorin Sbarnea proposed openstack/tripleo-ci master: Removed fedora28-master from molecule testing https://review.opendev.org/679800
14:52:55 It is basically about gating early once a release lands, or on each commit where we have a feeling that it is going to break the tripleo stuff
14:52:57 chandankumar either way.. we should probably tackle this first w/ ceph-ansible
14:53:02 then look at podman
14:53:16 the ceph-ansible folks have been pretty patient
14:53:36 weshay: ok, I will keep this in the tripleo-ci backlog
14:53:49 i assume most of the podman contributors/maintainers are not that aware of TripleO, so just running jobs there might not help
14:53:50 But one question: how do we want to gate?
14:53:50 chandankumar ya.. put it right under ceph-ansible
14:54:03 would need some monitoring and tracking also
14:54:04 per commit or per tag
14:54:16 ykarel chandankumar there are some background conversations going on about podman atm
14:54:37 regarding this 3rd party checking
14:55:10 it may be premature for this level of detail in this meeting
14:55:14 weshay, where ?
14:55:15 weshay: ack
14:55:17 weshay, ack
14:55:25 alright we only have 5 minutes left and we have the rest of the agenda to tackle.
let's take the rest of this offline
14:55:30 we have https://github.com/packit-service/packit provided by fedora, which can also be used
14:55:32 I'll be in touch w/ you guys offline
14:55:34 sounds like there's some other discussions around this
14:55:42 moving on real quick
14:55:46 * mwhahaha goes turbo
14:55:53 look at em go
14:55:54 #topic Squad status
14:55:54 ci
14:55:54 #link https://etherpad.openstack.org/p/tripleo-ci-squad-meeting
14:55:54 upgrade
14:55:54 #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
14:55:55 edge
14:55:55 #link https://etherpad.openstack.org/p/tripleo-edge-squad-status
14:55:56 integration
14:55:56 #link https://etherpad.openstack.org/p/tripleo-integration-squad-status
14:55:57 validations
14:55:57 #link https://etherpad.openstack.org/p/tripleo-validations-squad-status
14:55:58 networking
14:55:58 #link https://etherpad.openstack.org/p/tripleo-networking-squad-status
14:55:59 transformation
14:55:59 #link https://etherpad.openstack.org/p/tripleo-ansible-agenda
14:56:10 call for any important status items?
14:56:28 just a note on the current situation in CI. There was an outage this weekend in the dlrn repo instances with a cascading effect also on the AFS mirrors. Right now every other job is unable to correctly download delorean repos for a specific hash. jpena has the specifics, but apparently DNS propagation is causing some problems.
14:56:47 :(
14:57:26 the pass rate upstream is quite terrible atm
14:57:32 sounds like maybe hold off on some rechecks until resolved
14:57:39 can someone send a note to the ML
14:57:49 with details and an update when it's resolved
14:58:11 I'll make sure it's done
14:58:15 thanks
14:58:18 #topic bugs & blueprints
14:58:18 #link https://launchpad.net/tripleo/+milestone/train-3
14:58:18 For Train we currently have 27 blueprints and 506 (+1) open Launchpad Bugs. 505 train-3, 1 train-rc1. 167 (-3) open Storyboard bugs.
14:58:18 #link https://storyboard.openstack.org/#!/project_group/76
14:58:28 fyi i created the future milestones now that we have a name for U
14:58:31 and created an rc1
14:58:34 m3 is next week
14:58:42 #topic projects releases or stable backports
14:58:52 m3 is next week, so land your patches (when ci is fixed)
14:59:04 #topic specs
14:59:04 #link https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
14:59:30 plz update your specs for the ussuri cycle
14:59:35 i'll lift my -2s today
14:59:41 #topic open discussion
14:59:48 anything else (with a minute to go)?
15:00:24 thanks everyone
15:00:25 #endmeeting