17:00:22 #startmeeting nova notification
17:00:23 Meeting started Tue Apr 18 17:00:22 2017 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:26 The meeting name has been set to 'nova_notification'
17:00:27 hi!
17:00:31 o/
17:00:52 dansmith is lurking too i think
17:01:22 ohh two cores, I think that is an all-time record in the life of this meeting
17:01:57 well this is your intervention gibi
17:02:02 hah
17:02:10 dansmith: hi
17:02:17 ohai
17:02:23 I think we should start
17:02:51 yes
17:03:08 #topic searchlight integration
17:03:29 I got some sad news from sneti and aunnam.
17:03:43 yeah osic
17:03:51 they will not be able to continue the https://review.openstack.org/#/q/status:open+topic:bp/additional-notification-fields-for-searchlight
17:04:03 i think Kevin_Zheng could help with this
17:04:12 he's dependent on the instance tags stuff for a bp of his
17:04:38 i can confirm with him later
17:04:38 yes, the instance tags part of that bp is the hardest so far
17:05:05 I can take over the BDM patch of that bp
17:05:25 if needed
17:05:54 are we agreed now on the way forward with tags?
17:06:13 Is it option b) then?
17:06:27 https://review.openstack.org/#/c/407228/
17:06:27 so https://review.openstack.org/#/c/407228/
17:06:35 i forgot what i said there last week
17:06:47 b) create a new payload class that inherits from InstanceActionPayload and adds tags. Use this new payload for instance.create. Extend InstanceUpdatePayload to have tags (that is used for instance.update)
17:07:12 yes
17:07:15 so we're just going down the route of having action-specific payloads,
17:07:22 which i think is ok, and was probably a mistake to not do that from the start
17:07:40 because as we've seen, every time something needs to go into an action, if it's using the base payload it affects all action payloads
17:07:40 agree
17:07:43 which is unfortunate
17:07:49 makes sense to me
17:08:10 there was a counterargument that we don't need other things for all action payloads like we do today, like flavor
17:08:13 I'm thinking about doing a patch that introduces the action-specific payloads in a single step
17:08:17 but...i don't have a good reply
17:08:46 anyway, i'm trying to think about this like the rest api, which has specific responses for each api
17:09:02 and then we can add things to one api's response without affecting others
17:09:21 I think the argument is that the consumer should not consume all the notifications to build up the instance representation
17:09:40 should not need to
17:09:59 yeah, which makes sense if you don't know the internals,
17:10:03 but I just think it's too expensive
17:10:06 without consumers, it's hard to know what they need
17:10:17 we're throwing the kitchen sink in the payload
17:10:20 * johnthetubaguy lurks
17:10:23 and don't know if that's necessary
17:10:53 isn't the problem that we don't have a "tag updated" payload? it's just yet another instance.update notification?
17:10:54 anyway, i think we agree that we'll have a new payload for instance.create.start/end with tags
17:11:06 johnthetubaguy: that's a separate spec that Kevin_Zheng owns
17:11:08 and is dependent on this
17:11:13 OK
17:11:14 https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/send-tag-notification.html
17:11:33 mriedem: agreed then
17:11:41 ok cool,
17:11:46 it's just that you can't specify any tags when you create a server, AFAIK, so maybe I am a bit confused
17:11:57 and this also means we can drop the change that puts tags in the default list of fields to load when getting an instance, right?
17:12:07 johnthetubaguy: Kevin_Zheng is also working on that :)
17:12:10 mriedem: yes, we can drop that
17:12:11 which is dependent on this
17:12:17 mriedem: ah, good
17:12:30 johnthetubaguy: https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/tag-instances-when-boot.html
17:12:35 Kevin <3's tags
17:12:46 :)
17:13:30 still on searchlight. I'm trying to follow the patch on the searchlight side that adds support for consuming versioned notifications
17:13:49 at this point, it's just searchlight that's going to consume these, that we know about I mean
17:14:28 johnthetubaguy: yes, as far as I know
17:14:41 ok i've dropped https://review.openstack.org/#/c/415298/
17:14:49 so based on https://review.openstack.org/#/c/453352/ it seems that searchlight can be adapted to consume the versioned notifications without too much work
17:14:54 mriedem: thanks
17:15:09 oh nice
17:15:15 mriedem: this can also be dropped https://review.openstack.org/#/c/435146/16
17:16:44 done
17:16:59 thanks for that too
17:17:23 do we have any other searchlight-related issues to discuss?
17:17:56 last question i have is,
17:18:01 for https://review.openstack.org/#/c/407228/,
17:18:10 should we do just the instance.update thing now?
17:18:20 and leave instance.create and the new payload for https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/tag-instances-when-boot.html ?
17:19:03 That would make https://review.openstack.org/#/c/407228/ nice and simple, as instance.update has its own payload type
17:19:12 yeah let's do that then
17:19:14 so it is OK for me
17:19:52 gibi: are you going to take over that patch then?
17:21:00 First I planned to take over the BDM patch, but if the tags are more important and Kevin is busy then I can start with the tags
17:21:30 you might be able to get the instance.update + tags one done faster
17:21:37 sure
17:21:37 since i imagine you know what needs to change there
17:21:39 cool
17:21:47 cool
17:22:32 if nothing else on searchlight then I'd like to use this nice audience to ask about soft_delete and force_delete notifications
17:23:02 oh boy
17:23:02 ok
17:23:05 i defer to dansmith
17:23:15 wat, why?
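
A minimal sketch of what the agreed option (b) could look like, in the style of nova's versioned notification objects. The class name, version number, and constructor signature here are assumptions for illustration, not the merged code:

    # Hypothetical option (b): an instance.create-specific payload that
    # inherits from InstanceActionPayload and adds tags, so the new
    # field does not leak into every other action payload.
    from nova.notifications.objects import instance as instance_payload
    from nova.objects import base as nova_base
    from nova.objects import fields


    @nova_base.NovaObjectRegistry.register_notification
    class InstanceCreatePayload(instance_payload.InstanceActionPayload):
        # 1.0: adds tags on top of InstanceActionPayload (version is
        # illustrative)
        VERSION = '1.0'
        fields = {
            'tags': fields.ListOfStringsField(),
        }

        def __init__(self, instance, fault):
            super(InstanceCreatePayload, self).__init__(
                instance=instance, fault=fault)
            # assumes the caller loaded the instance with its tags
            self.tags = [tag.tag for tag in instance.tags]

instance.update would instead get tags added directly on InstanceUpdatePayload, as the option (b) text above describes, which keeps the per-action payloads independent of each other.
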
17:23:20 #topic soft and force delete notifications
17:23:21 because you love soft delete
17:23:35 so there was a regression about delete notifications
17:23:45 couple of weeks ago
17:23:49 no, there was a regression in functionality, and it's called "soft delete"
17:23:59 yeah yeah
17:24:01 :)
17:24:24 I tried to continue the actual transformation work of the special-case delete notifications after the regression was resolved
17:24:27 https://review.openstack.org/#/c/443764/1/nova/compute/api.py
17:24:33 this is the next step ^^
17:24:57 reusing the notification context manager in other places
17:25:31 and of course I hit some soft delete and force delete cases again
17:25:37 "Basically sending a soft_delete notification in a local delete case doesn't really make any sense, and that's the only time that we send the soft_delete notification, so Dan and I were thinking we should just not ever send a soft_delete notification since it never is a soft_delete when that happens."
17:25:41 and I need your help to remove or support those
17:26:03 i have to dredge this all up from memory
17:26:08 because it was really really confusing at the time
17:26:15 but that comment sums it up,
17:26:17 yeah
17:26:21 when we do a soft delete, but the compute is down,
17:26:25 it's not really a soft delete
17:26:34 soft delete implies it's around for a bit until the compute "reclaims" it
17:26:37 and hard deletes it
17:26:51 with 'local delete' in the api, the compute host is down so the api hard deletes it then and there
17:27:00 so sending out a soft_delete notification in that case doesn't really make sense
17:27:22 OK so in L1868 there is no need for soft_delete but the delete_type there could contain 'force_delete' as well
17:27:48 is force_delete a thing?
17:27:56 it is
17:28:09 we should have limp_delete
17:28:12 i guess that's soft_delete
17:28:29 roulette_delete
17:28:33 sometimes we delete, sometimes we don't
17:29:00 could we simply forget about soft_ and force_ delete in notifications and only use delete?
17:29:31 _do_delete and _do_force_delete are exactly the same
17:30:19 except the delete_type :)
17:30:53 yeah
17:31:10 so the only difference between force_delete and delete is the @check_instance_state decorator
17:31:50 but there is some complication based on the comment in https://github.com/openstack/nova/blob/master/nova/compute/rpcapi.py#L976
17:31:57 which i think just means, you can't delete an instance that is undergoing a task_state transition
17:32:07 but force_delete bypasses the task_state check
17:32:41 I see
17:33:14 gibi: because https://github.com/openstack/nova/blob/master/nova/cells/rpcapi.py#L539
17:33:17 then it might make sense to keep force_delete in the notification
17:33:20 that's just saying they need to be the same signature
17:33:25 it was a bug fix at one point
17:33:51 yeah force_delete might make sense still
17:34:06 OK, so nobody down there uses the value of delete_type
17:34:09 soft_delete notifications are ok for the normal case where the compute host is up and we delete the instance on the compute
17:34:41 https://review.openstack.org/#/c/443764/1/nova/compute/api.py@1868 is not the normal case though
17:34:49 because if not instance.host and not shelved_offloaded:
17:35:44 OK
17:36:07 I guess @2066 is also not the normal case
17:36:36 but then where is the normal case for soft_delete?
:)
17:37:09 I found it
17:37:33 gibi: https://review.openstack.org/#/c/443764/1/nova/compute/api.py@1908
17:37:34 is the normal case
17:37:39 https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2575
17:37:40 the cb is the _do_soft_delete method
17:38:32 and that will send the notification from the compute manager
17:38:50 https://review.openstack.org/#/c/443764/1/nova/compute/api.py@2120
17:38:58 yup
17:39:21 then I can say that in the compute api every delete notification is either delete or force_delete
17:40:18 is it OK to keep force_delete there as it is a bit different from delete due to skipping the state check?
17:41:29 force_delete is ok for this i think
17:41:52 if the instance is hard deleted in the api because the compute host is down or the instance is shelved-offloaded, then it's never a soft delete
17:41:57 which means keeping the delete_type as a parameter of the context manager
17:42:09 yeah i suppose
17:42:12 OK
17:42:33 then I will fix the two failing force_delete unit tests and remove the two meaningless failing soft_delete unit tests
17:43:18 ok
17:43:38 cool thanks
17:44:04 I will update the patch
17:44:11 i left some comments in there too
17:44:13 replied to your questions
17:44:21 thanks for that too
17:44:41 I have one more thing for today
17:44:54 #topic factor out sample file duplications
17:44:56 i hope it's easy
17:45:16 so two weeks ago we discussed this a bit https://review.openstack.org/#/c/452818/
17:45:54 basically it changes the sample files to use json refs instead of duplicating the actual sample data
17:46:23 and it needs a 3pp lib https://pypi.python.org/pypi/jsonref
17:46:31 I reached out to the author of that lib
17:46:48 to ask about a new release of the lib and ask about the liveness of it
17:46:57 unfortunately he is not working on the lib any more
17:47:24 I see two possible ways forward (if you like the idea of the deduplication)
17:47:43 a) take over the maintenance of the lib, release a new version etc
17:48:09 b) add a small, limited, but hand-rolled implementation of the ref resolution to nova tests
17:48:26 probably depends on how hard (b) is
17:48:48 if you did (a) you still need to get it into global-requirements, which has its own rules
17:48:55 one of which is things like py3 support,
17:48:58 and if it's still maintained
17:49:39 as we only need to resolve refs from the local file system for our tests it seems not too complex, but I have to try first to see if there are devils in the details
17:49:56 and we also do not need to support recursive resolution in nova
17:50:06 so i'd lean toward b
17:50:09 if you can get that to work
17:50:40 I'm a bit afraid to try to convince the community that I will be able to maintain the lib (or a fork of the lib), so I would first try option b
17:51:02 if that is OK for you
17:51:07 yes
17:51:17 coolio
17:51:19 my answer to all multiple choice questions is "b"
17:51:29 I need to remember that :)
17:51:51 I think that was all from me today
17:51:56 you helped a lot
17:51:59 thanks again
17:52:23 Is there anything else or can I go home? :)
17:52:40 nothing from me
17:53:05 thank you guys
17:53:12 #endmeeting
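
On the delete-notification discussion: keeping delete_type as a parameter of the notification context manager could look roughly like the sketch below. The helper name, the notify stand-in, and the signature are assumptions for illustration, not the exact code in the patch under review:

    import contextlib


    @contextlib.contextmanager
    def notify_about_instance_delete(notify, context, instance,
                                     delete_type='delete'):
        """Send <delete_type>.start / <delete_type>.end around a delete.

        ``notify`` is a stand-in callable for nova's notification
        helpers; the real patch wires this up differently.
        """
        # In the compute API, delete_type ends up being only 'delete' or
        # 'force_delete': the local-delete path hard deletes the instance
        # even if a soft delete was requested, so a 'soft_delete'
        # notification there would be misleading. soft_delete
        # notifications for the normal case are sent from the compute
        # manager instead.
        notify(context, instance, '%s.start' % delete_type)
        yield
        notify(context, instance, '%s.end' % delete_type)

    # hypothetical usage in the force-delete path:
    # with notify_about_instance_delete(notify, ctxt, instance,
    #                                   'force_delete'):
    #     do_the_delete(ctxt, instance)
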
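On the sample-file deduplication: a rough sketch of what option (b) might amount to, a small hand-rolled resolver that only handles local-file $refs with an optional JSON pointer and deliberately does not resolve refs found inside the referenced documents, matching the "no recursive resolution" point above. All names are illustrative; this is not the jsonref library API:

    import json
    import os


    def _load_ref(ref, base_dir):
        """Resolve a local-file ref like 'common_payload.json#/payload'."""
        file_part, _, pointer = ref.partition('#')
        with open(os.path.join(base_dir, file_part)) as f:
            doc = json.load(f)
        # Walk the optional JSON pointer, e.g. '/payload/nova_object.data'.
        if pointer:
            for key in pointer.strip('/').split('/'):
                doc = doc[key]
        return doc


    def resolve_refs(node, base_dir):
        """Replace {'$ref': ...} nodes in a loaded sample file.

        Refs inside the referenced documents are intentionally left
        unresolved, which is all the nova tests would need.
        """
        if isinstance(node, dict):
            if set(node) == {'$ref'}:
                return _load_ref(node['$ref'], base_dir)
            return {k: resolve_refs(v, base_dir) for k, v in node.items()}
        if isinstance(node, list):
            return [resolve_refs(v, base_dir) for v in node]
        return node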