15:00:48 #startmeeting neutron_ci
15:00:49 Meeting started Wed May 13 15:00:48 2020 UTC and is due to finish in 60 minutes. The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:52 The meeting name has been set to 'neutron_ci'
15:00:52 hi
15:01:00 hi
15:01:32 o/
15:01:33 o/
15:02:28 Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:02:30 Please open now :)
15:02:50 and now, let's start :)
15:02:51 #topic Actions from previous meetings
15:02:58 maciejjozefczyk will increase timeout in neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
15:03:03 slaweq, done
15:03:09 maciejjozefczyk: thx :)
15:03:13 that's fast
15:03:23 but I haven't had time to verify if this helped
15:03:32 I didn't see such a failure this week
15:03:37 but maybe others did
15:03:56 no, sorry
15:04:45 I don't think I saw one either (so that sounds good)
15:04:52 ralonsoh: no need to be sorry, maybe it's just fixed now :)
15:05:11 ok, next one
15:05:14 bcafarel to check neutron-dynamic-routing failures in stable/train
15:05:52 it was indeed caused by a recent tempest install change, there is an LP bug and a patch up for review
15:06:02 https://review.opendev.org/#/c/725920/
15:07:01 thx bcafarel
15:07:02 +2
15:07:04 :)
15:07:38 almost there :)
15:07:43 ok, so next one
15:07:45 ralonsoh to check ovn jobs timeouts
15:08:03 sorry, I didn't spend too much time on this
15:08:08 and I didn't find anything yet
15:08:25 but are ovn jobs still timing out so often?
15:08:32 not so often
15:08:35 I think it was like 100% of runs last week
15:08:42 or almost
15:08:47 but I still need to find why we could have this
15:08:52 ok
15:09:00 and maybe push a solution for the CI
15:09:00 o/ sorry I am late
15:09:01 so can I assign it to You for next week?
15:09:04 sure
15:09:07 thx
15:09:18 #action ralonsoh to continue checking ovn jobs timeouts
15:09:29 hi njohnston :)
15:09:33 and the last one
15:09:35 maciejjozefczyk to check failing compile of ovs in the fedora periodic job
15:09:42 #link https://review.opendev.org/#/c/726759/
15:09:59 Fedora has a new kernel and OVS doesn't support compilation for it
15:10:04 ok, I saw this patch
15:10:07 (newer than 5.5)
15:10:20 and I have one question - did You test that it will solve the problem with the fedora job?
15:10:58 can You maybe send a DNM patch on top of this one, where You add this fedora job to the check queue? then it will run like other jobs and we will see if that helped
15:11:07 actually, not on the gates
15:11:28 I can verify now by sending another patch that relies on this one
15:11:32 * maciejjozefczyk will do after meeting
15:11:43 thx
15:11:50 but the failure in the fedora job was related to that compilation, so it should fix this
15:12:01 just to make sure it solves our issue with the periodic job :)
15:12:05 yep ++
15:12:29 and that's all actions from last week
15:12:48 anything else You want to add here? or can we move on to the next topic?
15:14:17 ok, so let's move on
15:14:19 #topic Stadium projects
15:14:32 standardize on zuul v3
15:14:58 as we talked on Monday, I will send a patch this week to switch the ovn grenade job to zuul v3 syntax
15:15:15 for networking-odl we have patches ready for review, right?
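As context for the zuul v3 migration mentioned above, a native Zuul v3 job definition generally looks something like the sketch below; the job name, parent, and variables here are assumptions for illustration, not the actual patch that was proposed:

```yaml
# Hedged sketch only: job name, parent job, and variables are assumptions,
# not the real neutron/grenade change discussed in the meeting.
- job:
    name: neutron-ovn-grenade          # assumed job name
    parent: grenade                    # assumed zuul v3 native grenade base job
    required-projects:
      - openstack/grenade
      - openstack/neutron
    irrelevant-files:                  # skip the job for docs-only changes
      - ^.*\.rst$
      - ^doc/.*$
      - ^releasenotes/.*$
    vars:
      devstack_services:
        q-ovn-metadata-agent: true     # assumed service toggle for the OVN setup
```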
15:16:06 yes, I have to check
15:16:41 ok, and then only midonet will be left
15:16:44 I have a comment so I have to check, but it is ongoing
15:16:53 thx lajoskatona
15:17:25 from other things related to stadium, we fixed the midonet gate in master and ussuri
15:17:31 thx ralonsoh for the patch for that
15:17:39 yw!
15:18:00 and I proposed a series of patches https://review.opendev.org/#/q/status:open+branch:stable/ussuri+topic:ussuri-release to use ussuri jobs from neutron-tempest-plugin in the ussuri branch
15:18:07 please review them :)
15:18:54 anything else regarding stadium projects' CI?
15:20:30 I think there are a few patches sent recently for hacking 3, to add to the review pile :)
15:21:38 "Bump hacking min version to 3.0.1" is the topic for most of them
15:22:39 thx bcafarel, I will check them tonight
15:24:20 bcafarel: I see such a patch only for networking-baremetal, which isn't a neutron project
15:24:31 ok, I see
15:24:37 for other projects too
15:24:43 I will review them later today
15:25:48 yes, search does not work well there
15:26:16 I mostly saw them passing on the #openstack-neutron bot
15:27:39 let's move on to the next topic then
15:27:48 #topic Stable branches
15:27:57 Train dashboard: http://grafana.openstack.org/d/pM54U-Kiz/neutron-failure-rate-previous-stable-release?orgId=1
15:27:59 Stein dashboard: http://grafana.openstack.org/d/dCFVU-Kik/neutron-failure-rate-older-stable-release?orgId=1
15:28:21 I think we should update those dashboards now, as ussuri is officially released
15:28:57 true, add it to my pile
15:29:03 thx bcafarel :)
15:29:15 #action bcafarel to update stable branches grafana dashboards
15:29:40 also, overall most backports are waiting atm on pycodestyle capping
15:29:58 yes, pep8 failing 100% of the time
15:30:15 https://review.opendev.org/#/c/727274/3 just merged on ussuri, we will need it up to rocky included
15:32:48 I just +W'd the train patch
15:33:01 so we will still need backports for the older ones
15:33:27 bcafarel: other jobs in stable branches are fine, right?
15:33:39 or do we have any other problems there which I missed?
15:34:22 yes, the rest of the jobs are fine, we actually got a few backports merged without too many rechecks :)
15:35:36 ok, so let's move on to the next topics
15:35:37 #topic Grafana
15:35:48 #link http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:35:53 and some stats:
15:36:01 Average number of rechecks in last weeks:
15:36:02 week 18 of 2020: 1.25
15:36:04 week 19 of 2020: 2.0
15:36:06 week 20 of 2020: 0.67
15:36:10 so overall it seems that it's not very bad
15:37:22 I don't see anything alarming there
15:38:00 a quiet week for release time :)
15:38:47 seems so :)
15:39:43 anything else about grafana and our dashboards?
15:41:07 ok, so let's move on
15:41:10 #topic fullstack/functional
15:41:18 first, functional tests
15:41:43 I wanted to ask You: what do You think about switching the functional uwsgi job to be voting?
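For reference, making a job voting in Zuul usually just means dropping the `voting: false` flag where the job is listed in the check queue and adding it to the gate queue; a rough sketch, with the job name and file layout assumed rather than taken from neutron's actual zuul.d files:

```yaml
# Illustrative only: the real job name and the project-template layout
# in neutron's zuul.d/ directory may differ.
- project:
    check:
      jobs:
        - neutron-functional-with-uwsgi   # was previously listed with "voting: false"
    gate:
      jobs:
        - neutron-functional-with-uwsgi   # voting jobs are normally also gated
```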
15:42:00 +1, seems to be quite stable
15:42:02 IMO it's more stable now, and we are at the beginning of the cycle
15:42:04 +1
15:42:13 so worst case we can revert it :)
15:42:55 ok, so I will propose such a patch this week
15:43:08 #action slaweq to switch functional uwsgi job to be voting
15:43:27 and I didn't find any new issues in the functional job this week
15:43:32 I found one bug in the way we execute functional tests: https://bugs.launchpad.net/neutron/+bug/1878160
15:43:32 Launchpad bug 1878160 in neutron "[OVN] Functional tests environment is using old OVN" [High,In progress] - Assigned to Maciej Jozefczyk (maciej.jozefczyk)
15:43:48 OVS and OVN are compiled from a hardcoded version, 2.12
15:44:03 So we actually test master code with old OVN as a backend (at least the NB DB)
15:44:25 I started working on a solution for that, in order to have the possibility to specify the tags from which we compile OVN/OVS for functional tests
15:44:32 we're going to reuse the same for the OVN octavia provider
15:44:45 #link https://review.opendev.org/#/c/727193/
15:45:02 For now I'm checking how to properly perform it
15:46:03 but are You going to propose 2 such new functional tests jobs? or is that only for Your testing now?
15:46:49 for the OVN octavia provider we can try to propose an OVN master job and an OVN latest release job
15:47:09 and I need to discuss within the OVN team if we need such -master and -release jobs in Neutron
15:47:30 TBH I'm not sure if we want yet another functional tests job
15:47:52 I think we are testing master and released ovn in some scenario job
15:48:00 so maybe it would be enough :)
15:48:19 In scenario jobs we're testing only the released OVN branch
15:48:27 master is 'experimental' for now
15:48:48 but the problem with it is that we write new code that needs changes in core OVN; mostly those are sent to master and not yet released
15:48:59 some of those changes rely on the OVN schema
15:49:17 and we have logic like: if the required change is not in the schema, fall back to the old behavior
15:49:39 so for such changes there is the experimental queue :)
15:49:46 So when for now we have a master or release job only, we test only 1 'code path'
15:50:26 So we merge code that is not tested? :)
15:50:41 You can run experimental jobs on every patch You want
15:50:52 the experimental queue is not voting
15:50:52 just put the comment "check experimental" in gerrit
15:51:05 yes, but we need some "balance"
15:51:14 IMHO we already have too many jobs in the check queue
15:51:27 yes, that is why Lucas came up with an idea
15:51:37 and if we also start testing with various versions of ovs/ovn/other software - we will be lost
15:51:50 #link https://review.opendev.org/#/c/717270/
15:52:14 Maybe we can define a -master job that will be voting only if the change is related to OVN? if not, *only* the -release job will be run
15:52:53 For now we're producing a lot of code that fills the gaps between ovs/ovn, and I feel sometimes we don't have everything properly covered and then we find issues in d/s gates
15:54:33 I'm not sure about it :/
15:54:38 But those issues could be found u/s if we had master jobs voting, which would include the OVN code part that is triggered by the OVN driver.
15:55:03 yes, I don't like too many jobs either...
15:55:45 if a change touches ovn files and also something else, like e.g. some constants, it will run all jobs then, right?
15:55:46 maybe we can add -master to gate? I dunno...
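The idea of running (and voting on) the -master job only for OVN-related changes maps to Zuul's `files` matcher, which restricts a job to changes touching the listed paths; a hypothetical sketch, with the job name, parent, file patterns, and the branch-selection variable all assumed:

```yaml
# Hypothetical sketch: none of these names come from an actual neutron patch.
- job:
    name: neutron-functional-ovn-master   # assumed "-master" job name
    parent: neutron-functional            # assumed existing functional base job
    voting: true
    files:                                # run only when OVN-related code changes
      - ^neutron/plugins/ml2/drivers/ovn/.*$
      - ^neutron/common/ovn/.*$
    vars:
      ovn_branch: master                  # assumed knob selecting the OVN tag/branch to compile
```

With a matcher like this, a change that only touches other parts of the tree would run just the -release job, which is the balance discussed above.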
I need to talk about it during the OVN team meeting, maybe we'll find some solution for that, or create a general rule to run the experimental job manually before merging, then +W
15:55:57 so in most cases we will probably still run all jobs
15:56:32 for constants, yes
15:56:43 for other cases it should run only the -release job
15:56:53 but yeah, the probability of changing constants is high
15:58:10 ok, we will have to get back to it - maybe it will be a good topic for the PTG :)
15:58:16 :)
15:58:33 we are almost running out of time now
15:58:33 yes :)
15:58:53 so is there anything else anyone wants to discuss quickly?
15:58:58 no thanks
15:58:59 last minute call :)
15:59:03 nope
15:59:10 btw ralonsoh, I just replied to Your comment in https://review.opendev.org/#/c/726777/2
15:59:14 thanks!
15:59:26 please let me know if You still have any concerns with this
15:59:34 and thx for attending the meeting
15:59:37 o/
15:59:40 #endmeeting