15:00:17 #startmeeting manila
15:00:18 Meeting started Thu Sep 6 15:00:17 2018 UTC and is due to finish in 60 minutes. The chair is tbarron. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:22 The meeting name has been set to 'manila'
15:00:31 .o/
15:00:35 Hi
15:00:44 o/
15:00:56 ping xyang
15:01:02 ping ganso
15:01:07 ping erlon
15:01:08 hello
15:01:11 ping tpsilva
15:01:15 ping vkmc
15:01:27 Hi all
15:01:36 hello
15:01:46 Agenda: https://wiki.openstack.org/wiki/Manila/Meetings
15:01:59 Probably a short meeting today ...
15:02:20 #topic Announcements
15:02:28 Next week is PTG!
15:02:51 Amit just pinged me, he's in SFO on his way to the rockies ...
15:03:06 We won't have *this* meeting next week.
15:03:21 But we'll meet for PTG on Monday and Tuesday.
15:03:42 Tune in to #openstack-ptg for real-time updates.
15:04:07 ganso: are you still planning on bringing the webcam?
15:04:17 tbarron: yes!
15:04:36 ganso: excellent. We'll do remote connectivity the way we did in Dublin then.
15:04:52 What meeting platform will you use?
15:05:13 In Dublin we ended up with bluejeans, iirc.
15:05:22 vkmc got it going.
15:05:51 I will check with her.
15:05:56 +1 for the AV club
15:06:01 afaik bluejeans is easier to convert to a video format than webex. I remember gouthamr saying it was a lot of effort to convert the videos to upload to youtube
15:06:22 she made some nice youtube recordings for us.
15:06:45 people can set up webex for an additional voice channel if useful.
15:06:47 yeah, cisco had a proprietary converter that stopped working for the latest webex
15:07:03 don't know if they released/fixed the converter
15:07:14 but bluejeans has a perfectly good dial-in for phone
15:07:37 No complaints about bluejeans
15:07:43 and you can run it on android, etc. if you don't want to pollute your notebook.
15:07:51 kk
15:08:05 Reminder that the planning etherpad is:
15:08:12 in the channel topic
15:08:14 and
15:08:14 Although bluejeans has the distinction of being the only app to ever cause a kernel panic on my laptop
15:08:30 #link https://etherpad.openstack.org/p/manila-ptg-planning-denver-2018
15:08:36 bswartz: it eats ram
15:09:12 I'll spend some time cleaning up that page, but go ahead and dump ideas into it.
15:09:31 Also indicate there or ping me with any time restrictions/topic interests.
15:09:50 Remember we're on Mountain Time, UTC-6 I think.
15:10:09 Do you plan to go 9AM-5PM local time?
15:10:21 zhongjun2_: we'll be sure to talk about access-list-prio early
15:10:30 Thanks
15:10:44 bswartz: yes, at least start at 9. Maybe we don't have to go till 5.
15:11:06 There will be lots of remotees to the east of us.
15:11:31 We might do some working sessions towards the end of the day Monday and Tuesday.
15:11:40 I'll miss being able to hang out with you guys
15:12:01 bswartz: we are going to miss you! It will feel really different.
15:12:04 But I'll stay as late as I can on the remote connection
15:12:28 We'll have a team dinner with the Cinder folks Tuesday night at 7:30.
15:12:37 bswartz: :'(
15:12:44 jungleboyj and amito are planning where.
15:12:45 At that place with the pork wings?
15:12:51 so stay tuned ...
15:13:13 bswartz: not sure if the dinner will be there, but I'll surely head there to grab some pork wings
15:13:15 We'll start Monday with a retrospective on Rocky.
15:13:37 I set up an etherpad which you can start filling out *now*!
15:13:56 #link https://etherpad.openstack.org/p/manila-rocky-retrospective
15:14:18 Please take a little time to think about what went well and what we can do better in Stein.
15:14:54 Also, we have through Tuesday to brainstorm on topics for the Berlin Forum.
15:15:17 Pls. think of stuff that we should discuss with Operators.
15:15:27 And note ideas here:
15:16:00 #link https://etherpad.openstack.org/p/manilap-berlin-forum-brainstorm
15:16:38 tbarron, typo in the title :)
15:16:39 In Vancouver we had several high-performance / scientific SIG type folks in the room for this
15:17:02 gouthamr: I'll make a new etherpad in a moment w/o the typo
15:17:07 gouthamr++
15:17:42 and I'll add a bit of context to the pad
15:17:59 #link https://etherpad.openstack.org/p/manila-berlin-forum-brainstorm
15:19:13 anyways, the Vancouver session was worth it for meeting the people who showed up, Minn Supercomputing Center, the big array telescope whose acronym is escaping me, and CERN of course
15:19:25 SKA
15:19:32 square kilometer
15:20:14 I'll email the dev and ops lists with these etherpad links.
15:20:23 And solicit input.
15:20:37 Any other announcements?
15:21:02 #topi Open Discussion
15:21:09 #topic Open Discussion
15:21:14 I have a topic
15:21:24 ganso: go for it
15:21:35 my team is working on a functional test for a specific bug we are fixing
15:22:04 the test involves creating an additional neutron subnet and then creating 1 share on each subnet (2 total)
15:22:30 there is a cleanup problem when deleting the neutron subnet
15:22:38 it says it has a port in use
15:22:56 Any nova VMs involved?
15:22:57 it needs a sleep of 20 seconds for the test to pass without complaining about ports in use
15:23:02 bswartz: no VMs
15:23:11 What happens during those 20s?
15:23:29 ganso gets quick coffed
15:23:32 coffee
15:23:44 bswartz: I believe that it gives time to delete the share server and clear out all allocated neutron ports
15:23:48 ganso: are you explicitly cleaning up the share server?
15:24:00 20s isn't much time to get coffee
15:24:14 gouthamr: at the moment, we aren't. We are doing experiments on another class with an admin client to do that
15:24:28 s/coffed/covfefe
15:24:36 bswartz: if the coffee machine is right beside you, it is
15:24:37 I think gouthamr is onto the right idea
15:25:00 In a testing context, you need to use an admin credential to force deletion for cleanup purposes
15:25:11 Otherwise the required wait time is hard to guess
15:25:39 Even if you had a proper wait loop that queried on readiness for deletion, you can't bound how long that loop will run
15:25:41 bswartz: yes, but I believe our mechanism is a bit flawed, in the sense that it is a bit incompatible with our current cleanup strategy
15:25:50 when we delete shares, share servers are not deleted instantly
15:26:04 But they should be
15:26:06 I tried to set that flag "delete_share_server_with_last_share" and it still did not work
15:26:07 ganso: all test classes that create a share should have an admin client now, thanks to vkmc's latest work..
15:26:41 ganso: that last thing might be a bug
15:26:56 gouthamr: hmmm, I didn't know that in detail. I'll test to see whether I have access to the admin client now
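
[Aside: a minimal sketch of the cleanup approach suggested above: force the share server deletion with an admin credential, then poll with a bounded wait instead of a fixed 20-second sleep. The show_share_server call, the admin_client parameter, and the timeout values are illustrative assumptions, not the verified manila-tempest-plugin API.]

    import time

    from tempest.lib import exceptions

    def wait_for_share_server_deletion(admin_client, server_id,
                                       timeout=120, interval=2):
        """Poll until the share server is actually gone."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                # Illustrative call: an admin-only view of the share server.
                admin_client.show_share_server(server_id)
            except exceptions.NotFound:
                # Server is gone, so its neutron ports should follow shortly.
                return
            time.sleep(interval)
        raise exceptions.TimeoutException(
            'share server %s was not deleted within %ss'
            % (server_id, timeout))

[As the discussion notes, a loop like this has no guaranteed upper bound on its own; forcing the deletion with admin credentials first is what keeps it short in practice.]
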
15:26:57 Or if the deletion is asynchronous, it might have started but not finished
15:27:19 You need a way to synchronously delete for testing purposes
15:27:39 so, one thing I know is that when deleting a share server, after the teardown the driver returns, then the manager does some stuff, and lastly it sends an asynchronous (AFAIK) request to neutron to deallocate the ports
15:28:04 but I was still surprised that it took 20 seconds for the ports to be cleared up. That's too much
15:29:01 the sleep proves that nothing is actually broken; the problem is that things which should be cleared immediately instead rely on a timer (or at least the timer should be configurable, and working, since "delete_share_server_with_last_share" does not seem to be working properly)
15:29:06 I'm surprised the share server was always cleaned up with a 20s delay
15:29:18 bswartz: the delete is not asynchronous, as we have a waiter right after the delete
15:29:21 because the default automatic cleanup interval is 10 mins
15:29:52 oh wait, you mentioned you turned off automatic cleanup and set "delete_share_server_with_last_share" instead
15:29:53 gouthamr: yeah, I found a config option saying that the "minimum is 10 min"; clearly it is being deleted in less than 10 minutes
15:30:49 I am actually confused as to what is causing the share server to be deleted. With "delete_share_server_with_last_share" set to False, it still gets cleared within 20 seconds
15:30:54 ganso: 10 mins absolute, best effort; the timer begins with the manila-share process, so some lag might occur
15:31:44 Ah
15:32:05 I bet that delete_share_server_with_last_share is what's ensuring that it happens relatively quickly (20 seconds instead of 10 minutes)
15:32:16 right now, I am investigating this https://github.com/openstack/manila-tempest-plugin/blob/master/manila_tempest_tests/tests/api/admin/test_share_servers.py#L245
15:32:21 as it could lead me to the answer
15:32:30 bswartz: but that flag is disabled
15:32:30 ganso: you're using pre-provisioned test credentials and not dynamically creating them?
15:32:38 gouthamr: I tried with both; the result is the same
15:32:41 Maybe the wait loop is waiting on the wrong thing
15:33:36 bswartz: could be
15:33:55 but I get the feeling that this cleanup of share servers will need some refactoring
15:34:09 as it is not working as intended
15:35:02 anyway, this is what I had to say
15:35:21 ganso: thanks for bringing the issue to our attention
15:35:32 ganso: and thanks in advance for fixing it
15:35:37 :)
15:35:40 =)
15:35:40 +1
15:35:48 gotta squash those bugs
15:35:56 Anything else today?
15:36:11 3
15:36:17 2
15:36:24 1
15:36:27 ty!
15:36:31 OK, thanks all!!
15:36:36 #endmeeting
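
[Aside: the "port in use" failure discussed above can also be guarded against directly, by polling neutron until no ports remain on the extra subnet before deleting it. A sketch assuming tempest's network PortsClient is available; the helper name and timeout values are hypothetical.]

    import time

    from tempest.lib import exceptions

    def wait_for_subnet_ports_cleared(ports_client, subnet_id,
                                      timeout=120, interval=2):
        """Poll neutron until no ports reference the given subnet."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            ports = ports_client.list_ports()['ports']
            # Port deallocation after share server teardown is
            # asynchronous, so check the fixed_ips on every port.
            if not any(ip.get('subnet_id') == subnet_id
                       for port in ports
                       for ip in port.get('fixed_ips', [])):
                # No ports left: the subnet can now be deleted without
                # tripping the 'port in use' error.
                return
            time.sleep(interval)
        raise exceptions.TimeoutException(
            'ports still allocated on subnet %s after %ss'
            % (subnet_id, timeout))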