16:04:37 #startmeeting openstack_ansible_meeting
16:04:38 Meeting started Tue Mar 31 16:04:37 2020 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:04:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:04:41 The meeting name has been set to 'openstack_ansible_meeting'
16:04:42 #topic office hours
16:04:44 o/
16:05:57 o/
16:06:07 o/
16:08:29 ok, so, arxcruz, take a world:)
16:08:38 *word
16:08:40 lol
16:08:47 hehe
16:08:59 so, we are working on consolidating our skip lists into one single repository
16:09:01 https://opendev.org/openstack/openstack-tempest-skiplist
16:09:22 the idea is to have a tool that will give you a list of tests to be skipped based on job, release, and installer (tripleo, osa, etc)
16:09:37 also, an ansible module to call it directly from ansible
16:09:53 we want it integrated with os_tempest as much as possible as well
16:10:17 the idea is to call something like tempest-skip --release master --job bla
16:10:27 and it returns the skipped tests that we can pass to tempest
16:11:00 if anyone is interested in helping, you are more than welcome; we are now in the phase of discussing what the tool will do, and how
16:11:06 so it's a good starting point :)
16:11:31 we are doing this because tripleo now has jobs per component
16:11:37 tripleo-component-compute
16:11:41 tripleo-component-network
16:12:05 and sometimes we see tests failing in one job but not in the other, because the component has a bug or whatever other reason
16:12:18 so we now need to be able to have a skip list per job/release
16:12:32 and we have wanted for a long time to have the skip list in its own repository
16:12:53 instead of using the one we have right now, which is from our now-deprecated validate-tempest role
16:13:16 if osa is interested in this approach, it would be nice to coordinate collaboration :)
16:13:35 that's it :)
16:14:01 ok, I see. Not really sure I got how the ansible module should act. Like, what should it do besides running that command, and what output will it provide?
16:14:35 the mvp is to call this command, and it returns a list of the tests to be skipped, which can be saved in a txt file and passed to tempest
16:15:00 as we are doing today
16:15:22 having an ansible module is just an idea; whether that will be done, or whether it would be easier to just call the command, is what we are discussing
16:15:25 vars_files: "{{ release ~ '/' ~ job ~ '/skiplist.yml' }}"
16:15:27 Ok, so its output can be registered and passed to the tempest role include as a variable?
16:16:01 yes
16:16:05 probably can be done
16:16:16 as i said, we are at the beginning
16:16:28 planning everything
16:16:37 actually yes, I like jrosser's way of thinking...
16:16:51 this can probably be an ansible role that is called with branch/job and a var name
16:17:03 it then set_facts that var name
16:17:09 then everything is nicely decoupled
16:18:19 yup, can be done this way
16:18:33 but i'm really looking for more integration between tripleo and osa :D
16:18:33 maybe these can all co-exist
16:18:43 and have it integrated in the os_tempest role
16:18:49 not only for us, but for osa
16:18:50 i expect OSA would prefer something natively ansible in preference to a cli tool
16:19:16 and that's why I wanted to have an ansible module or role
16:19:27 sure
16:19:57 is there anything you would like to specifically integrate in os_tempest?
16:20:07 roles calling roles can get messy
16:20:21 I would like the skip list used by osa to be there as well :)
16:20:28 of course cores would be from both groups
16:20:44 right - so if we could set a var with a role that generates the skip list, we can pass it to os_tempest today
16:21:06 yup
16:21:13 we can work in this direction
16:21:45 and that would get wired in somewhere like this https://github.com/openstack/openstack-ansible/blob/master/playbooks/os-tempest-install.yml#L31-L33
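
A minimal sketch of the wiring discussed above: a role resolves the skip list for a given release/job into a fact, which is then handed to os_tempest. The skiplist role name, its variable names, the job value, and the host pattern are all hypothetical (nothing was settled in the meeting); tempest_test_blacklist is an existing os_tempest variable.

  - name: Resolve and run tempest with a generated skip list
    hosts: utility_all[0]  # host pattern is illustrative
    tasks:
      - name: Resolve the skip list for this release/job (hypothetical role)
        include_role:
          name: tempest_skiplist  # hypothetical role name
        vars:
          skiplist_release: master                    # hypothetical var
          skiplist_job: osa-default                   # hypothetical var
          skiplist_fact_name: tempest_skipped_tests   # role would set_fact this name

      - name: Run os_tempest with the generated skip list
        include_role:
          name: os_tempest
        vars:
          tempest_test_blacklist: "{{ tempest_skipped_tests }}"

This keeps the two roles decoupled, as jrosser suggests: the skiplist role only publishes a fact, and os_tempest only consumes a plain variable.
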
16:22:14 i have to be afk for a while
16:22:16 sure
16:22:22 noonedeadpunk maybe you have some thoughts too?
16:23:20 anyway, we are now working out how the tool will work, so it might take a while until we are in a position to make everything work together
16:23:27 so, all help is welcome :)
16:23:30 that's all from me
16:24:09 Yeah, I actually think that roles should remain as lightweight as possible. As we have the option to pass blacklists, it's good to use it. If something needs to be adjusted in os_tempest regarding the format of passed variables - that's fine
16:24:28 But I'm not sure that we should add this module as a requirement to the role
16:24:46 As it would not work so well for standalone usage of the role
16:24:54 I see
16:25:04 yeah, we can think about it in the future
16:25:10 when we have something to show :D
16:26:44 actually, even if we make such a dependency - another var should be passed to specify whether to use it or not
16:27:32 But I think we may also use your blacklisting role for our CI jobs as well
16:29:51 so I'm probably pretty interested in having such tooling
16:30:22 cool :)
16:30:27 glad to hear :)
16:39:21 So I got everything up and running last night, could log in to the web interface and even uploaded an image. This morning, looking at things, I found my compute node is not there.
16:39:45 looking at the system, the service for the neutron agent is crashing/restarting constantly
16:39:53 neutron-linuxbridge-agent: 2020-03-31 09:38:10.766 18509 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface eth12 for physical network flat does not exist. Agent terminated!
16:40:00 velmeran: sorry, we kinda have a meeting here :p
16:40:06 at least trying to have:)
16:40:06 ah no problem
16:40:43 Ok, so another thing I wanted to say is that our rocky finally entered EM
16:41:06 and I hope that the train bump will be merged soon as well
16:41:35 btw, openstack seems not to be supporting python 3.5, which comes with debian stretch
16:41:58 however, we deploy venvs on py3.5 there and CI says it's working
16:42:36 so we can kinda continue doing that, or we can actually roll back to py2...
16:42:59 which would be kinda a regression for users
16:53:32 * jrosser back
16:59:16 jrosser: do you have some thoughts on this?
16:59:57 the easiest thing would be to not deploy rally on stretch
17:00:47 In terms of rally, it can be deployed on py3 I believe
17:01:14 so the issue there is the lack of py3 support on train
17:01:24 the thing is that py3.5 has not been tested, according to https://governance.openstack.org/tc/reference/runtimes/train.html#python-runtime-for-train
17:01:49 hrrm, well yes, then the whole business of deploying on stretch is not supported on that basis?
17:02:07 smth like that
17:02:13 even though it works now
17:02:16 maybe we start small
17:02:18 (probably)
17:02:32 backport the necessary changes to python_venv_build, which we are going to need anyway
17:02:41 and then switch over just the utility host stuff
17:02:58 but it will still fail though?
17:03:01 because 3.5
17:03:21 nope. but we don't run tempest against all projects tbh
17:03:47 i thought the main issue was the installation of rally requiring >= py3.6
17:03:54 what do you want to backport for python_venv_build?
17:04:08 jrosser: yeah, in case it's from master
17:04:24 i fear we may be talking about different things :)
17:04:30 but I think we can bump rally to 1.7 and live with it
17:05:14 Also, we maybe should do it in a better way, but currently it's not easy without a circular dependency
17:05:41 ok. so I think we have 2 problems now. Rally, which supports only py3.6+
17:06:00 and openstack not being tested with 3.5 (but it seems to work as of now)
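
In practice, the "deploy <3.0.0" option discussed below means pinning rally before the 3.x series, since (per the meeting) rally 3.x requires py3.6+ while stretch ships py3.5. A minimal sketch as an Ansible task; the virtualenv path is a hypothetical example:

  - name: Install rally pinned below 3.0.0 so it still installs on Python 3.5
    pip:
      name: "rally<3.0.0"
      virtualenv: /openstack/venvs/utility  # hypothetical venv path
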
17:08:25 I think between not being able to deploy rally at all and deploying <3.0.0, it's better to choose deploying <3.0.0?
17:08:50 yes, i would agree
17:08:55 And actually https://review.opendev.org/#/c/715215/ passes for debian
17:09:24 and that patch also fixes centos
17:09:32 yeah
17:09:35 it's a bit messy
17:09:51 but I can't imagine a cleaner patch without disabling half of the ci
17:10:11 it's ok - these are all external things that have changed underneath us
17:10:42 i think we'd be better off spending the time getting the backlog of patches in good shape than worrying too much about stretch
17:11:00 unless there are some deployments that are depending on something we are missing
17:11:32 yeah, agree
17:12:26 so I think we almost have clean branches then
17:12:30 i need to go AFK again (TZ changed, this is now an hour later for me)
17:12:34 except the lxc thing
17:12:46 changed for me as well...
17:12:53 ok, then I think we're done
17:12:57 #endmeeting