18:29:42 #startmeeting networking_policy
18:29:43 Meeting started Thu Dec 14 18:29:42 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:29:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:29:46 The meeting name has been set to 'networking_policy'
18:30:02 mostly wanted to report on my progress on the pike sync
18:30:14 hi!
18:30:38 tbachman: annakk: perhaps if you have some time, you can take a quick look at: #link https://review.openstack.org/514864
18:30:52 SumitNaiksatam: will have a look next week
18:30:58 tbachman: we can delay the above if you think there are higher priority things
18:31:00 will do
18:31:13 tbachman: annakk thanks
18:31:22 that one and its backport have my +2
18:31:28 rkukura: thanks
18:31:45 the other patch is this #link https://review.openstack.org/518183
18:32:18 i have been trying to keep rebasing it repeatedly, but the current patch set is neither the latest nor rebased
18:33:05 hopefully we can merge the former patch next week, it will reduce the pain of rebasing
18:33:26 on the above patch, unfortunately i still have some UTs which are not passing
18:33:46 most notably, there are two classes of issues:
18:34:13 1. gbpservice.neutron.tests.unit.plugins.ml2plus.test_apic_aim.TestAimMapping.test_shared_network_topologies
18:34:36 this is the problem i had reported last week also, and unfortunately i haven't been able to crack it yet
18:35:06 was this problem with the notifications?
18:35:10 the issue is that somehow the NetworkSegment object gets added to the session prior to it being marked for delete, and the delete fails
18:35:18 oh, that
18:35:19 rkukura: this is not the notification problem, i fixed that one
18:35:40 there is nothing specific to segments in that test
18:36:00 rkukura: on the notification problem, the issue was that the notification queue was getting reset on every call to get the "writer"
18:36:15 ok
18:36:28 rkukura: so the notifications were not going out (as opposed to there being a mismatch)
18:36:35 rkukura: something to watch out for in the future as well
18:36:43 glad you figured that out
18:36:49 i have simplified the patching considerably
18:36:54 tricky!
18:37:19 on the other problem, yes NetworkSegment is involved as a cascaded delete
18:37:34 the problem is that it seems to get added to the session when a get is done
18:37:58 and stays in sqlalchemy's "identity_map" even when the delete is called
18:38:00 sounds weird
18:38:14 it's not supposed to be there at that time
18:38:25 and this does not happen in the case of an external segment
18:38:38 is this happening on network delete?
18:38:50 rkukura: yes
18:39:01 but it doesn't happen all the time
18:39:19 i did a lot of breakpointing in the sqlalchemy code to get some of these clues, but unfortunately i am still not able to fix the problem
18:39:22 does it ever happen if just that single UT is run?
18:39:30 rkukura: yes
18:39:36 always
18:39:51 so sometimes it succeeds when run with other UTs?
18:40:02 no it doesn't seem to ever succeed
18:40:16 but there are other deletes in this test and other tests which seem to succeed
18:40:28 so this test seems to be taking it down a code path that is problematic
18:40:30 thought you said "it doesn't happen all the time"
18:40:52 yes, i meant it doesn't happen with external networks for example
18:41:10 but there is a "net1" in this test, and its delete always fails
18:41:35 ok, so nothing intermittent about this specific test, run individually or with others?
18:41:37 i believe there are a few other tests also which fail predictably and on the network delete
18:42:19 rkukura: no not at all, at least it's consistently reproducible, if it's any consolation
18:42:19 might be best to focus on the simplest test that always fails on network delete
18:42:29 rkukura: okay, makes sense
18:42:36 i will try to identify that
18:43:11 2. gbpservice.neutron.tests.unit.services.grouppolicy.test_aim_mapping_driver.TestL2PolicyWithAutoPTG.test_auto_ptg_lifecycle_unshared
18:43:51 in this test an update_policy_target_group is done, which is supposed to fail (an edit of the shared attribute is attempted)
18:44:05 so the operation fails as expected
18:44:11 however the rollback does not happen
18:44:26 if i try this operation in isolation, the rollback works
18:44:41 so again, not sure what's happening specifically in that test
18:45:19 SumitNaiksatam: What exactly do you mean by "rollback does not happen"? Do you mean that the transaction gets committed?
18:45:26 rkukura: yes
18:46:12 so it's like this - in the update call we update the shared attribute first in the DB, then we check if the update is valid
18:46:30 if it's not valid, we raise an Exception, and the DB update is expected to roll back
18:46:59 is it possible we don't have an open transaction surrounding all of this?
18:47:08 sounds like another transaction sneaked in
18:47:17 or autocommit
18:47:45 rkukura: annakk: yeah that's my suspicion, but the check we are doing is in the UT, so perhaps some issue with the particular UT setup itself
18:47:57 if there is no transaction open on this session, I think each change will get committed when a flush happens
18:47:57 rkukura: all transactions are autocommitted by default
18:48:35 I'm suggesting the update might be happening outside any transaction
18:48:53 and therefore auto-committed on the next DB query using the session
18:49:06 rkukura: yeah my suspicion as well, but to the extent i can tell it's inside a transaction :-)
18:49:32 i mean the transactions are nested and not independent transactions
18:49:44 definitely the same session?
18:50:07 rkukura: yes
18:50:48 anyway, like last week, i think i need to rebase and upload the latest patch set
18:51:12 ok
18:51:39 so that you will have a better feel for what i am referring to
18:52:05 ok, i will watch
18:52:15 thanks
18:52:39 that's it from me
18:52:46 anything else we want to discuss today?
18:53:09 not from me
18:53:27 not from me - thanks for merging the UT schema patches - working on AIM validation again
18:53:49 rkukura: thanks for those patches
18:54:14 alrighty, i will update you guys as soon as i have something more to share (will post the patch set shortly)
18:54:26 most likely will be bugging you if i continue to hit a wall
18:54:27 thanks
18:54:28 SumitNaiksatam: thanks!
18:54:37 thanks rkukura annakk tbachman for joining
18:54:41 thanks, bye!
18:54:46 bye!
18:54:48 #endmeeting
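A quick illustration of the notification-queue bug fixed before the meeting (the real fix is in the patch under review; the class and method names here are hypothetical stand-ins, not GBP's actual code). If the accessor that hands out the "writer" rebuilds its pending-notification queue on every call, anything queued between two accesses is silently dropped, so the notifications never go out:

    # Hypothetical sketch of the anti-pattern described above; not GBP's code.
    class BuggySessionContext:
        def get_writer(self):
            self.pending_notifications = []  # BUG: resets the queue on every call
            return self

        def queue_notification(self, event):
            self.pending_notifications.append(event)

    ctx = BuggySessionContext()
    ctx.get_writer()
    ctx.queue_notification('port.create.end')
    ctx.get_writer()                    # second access wipes the queue ...
    print(ctx.pending_notifications)    # [] ... so the notification is lost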
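For reference on issue 1 (the identity_map discussion): a minimal, self-contained sketch of the SQLAlchemy mechanics involved, assuming SQLAlchemy 1.4+ and an in-memory SQLite database. Network and NetworkSegment here are simplified stand-ins for the real Neutron models, not the actual schema. It shows how a plain get pulls an object into the session's identity_map, and how the cascaded delete of the parent is expected to remove the child anyway:

    from sqlalchemy import Column, ForeignKey, Integer, create_engine
    from sqlalchemy.orm import Session, declarative_base, relationship

    Base = declarative_base()

    class Network(Base):  # simplified stand-in for the Neutron model
        __tablename__ = 'networks'
        id = Column(Integer, primary_key=True)
        # deleting the Network cascades the delete to its segments
        segments = relationship('NetworkSegment', cascade='all, delete-orphan')

    class NetworkSegment(Base):  # simplified stand-in for the Neutron model
        __tablename__ = 'networksegments'
        id = Column(Integer, primary_key=True)
        network_id = Column(Integer, ForeignKey('networks.id'))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    with Session(engine) as setup:
        setup.add(Network(id=1, segments=[NetworkSegment(id=1)]))
        setup.commit()

    with Session(engine) as session:
        print(len(session.identity_map))         # 0: nothing loaded yet
        seg = session.get(NetworkSegment, 1)     # a plain get ...
        print(seg in session)                    # True: ... put it in the identity map
        session.delete(session.get(Network, 1))  # cascades to the segment
        session.commit()
        print(session.get(NetworkSegment, 1))    # None: cascaded delete succeeded

In this toy setup the delete goes through even with the segment already sitting in the identity map, which is consistent with the report that only certain code paths in the failing UT trigger the problem.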
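And on issue 2 (the rollback question): a minimal sketch, again assuming SQLAlchemy 1.4+ and SQLite, of the update-then-validate pattern described in the meeting. PolicyTargetGroup and update_shared are hypothetical stand-ins, not GBP's actual code. With an explicit transaction open around the write, the raised exception rolls the write back; in the autocommit scenario rkukura and annakk suspect, the write would already be committed before the validation check, leaving the exception nothing to undo:

    from sqlalchemy import Boolean, Column, Integer, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class PolicyTargetGroup(Base):  # hypothetical stand-in for the real model
        __tablename__ = 'ptgs'
        id = Column(Integer, primary_key=True)
        shared = Column(Boolean, default=False)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)

    with Session(engine) as setup:
        setup.add(PolicyTargetGroup(id=1))
        setup.commit()

    def update_shared(session, ptg_id, value):
        # write first, validate afterwards -- the pattern described above
        with session.begin():                # the enclosing transaction
            ptg = session.get(PolicyTargetGroup, ptg_id)
            ptg.shared = value
            session.flush()                  # change hits the DB, still uncommitted
            raise ValueError('shared attribute may not be edited')  # validation fails

    with Session(engine) as session:
        try:
            update_shared(session, 1, True)
        except ValueError:
            pass  # session.begin() rolled the transaction back on the exception
        print(session.get(PolicyTargetGroup, 1).shared)  # False: write undone

If the UT's failure mode really is the suspected one, the equivalent here would be a commit (or an autocommitted flush with no open transaction) landing between the write and the validation check.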