17:00:41 #startmeeting vmwareapi
17:00:46 hi
17:00:49 Meeting started Wed Jul 16 17:00:41 2014 UTC and is due to finish in 60 minutes. The chair is tjones. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:52 The meeting name has been set to 'vmwareapi'
17:01:16 hi golks
17:01:21 s/golks/folks
17:01:22 heh
17:01:27 Hello
17:01:54 fyi: gary and dado can't make it
17:02:03 s/dado/rado
17:02:09 ok bummer
17:02:14 let's get started then
17:02:32 just want to bring to your attention - the schedule
17:02:35 #link https://wiki.openstack.org/wiki/Juno_Release_Schedule
17:02:41 juno-2 is next week!
17:03:32 that came fast!!!
17:03:42 o/
17:03:57 #topic BP under review
17:04:14 #link https://review.openstack.org/#/q/status:open+project:openstack/nova-specs+message:vmware,n,z
17:04:21 so take a look here - there is nothing
17:04:25 no open specs
17:05:19 here are the merged specs #link https://review.openstack.org/#/q/status:merged+project:openstack/nova-specs+message:vmware,n,z
17:05:34 kiran is not here
17:05:37 his is missing
17:05:42 tjones: n00b question: what does 'open' mean in the above context?
17:05:56 open means still in review
17:06:04 not yet approved
17:06:10 merged means approved
17:06:12 Thanks
17:06:16 Ok
17:06:28 yeah so i am wondering what happened to the specs we had last week in review
17:06:53 johnthetubaguy was doing some spec cleanup
17:07:01 Did he kill them?
17:07:06 https://review.openstack.org/#/c/98704/
17:07:07 -2 them
17:07:08 yep
17:07:15 I suspect I did, sorry, it's freeze time
17:07:50 bummer
17:07:51 so, in future, it would be awesome to see lots of +1s on your specs that you really think are ready, it will help us identify when a sub-group has blessed something
17:08:15 now we are compiling the list of things that might want exceptions, but it's getting too big already
17:08:24 yeah thanks johnthetubaguy.
i see that in your comment
17:08:32 do you have anything that is about driver consistency?
17:08:42 johnthetubaguy: Like what?
17:09:00 like adding support for API xyz that doesn't already work
17:09:06 or something like that
17:09:32 I don't think so
17:09:45 i don't think so either
17:10:22 While we're on the subject, I (very briefly) discussed the refactor series with johnthetubaguy, this morning I think
17:10:30 bummer, oh well, feel free to raise one or two you think are super super important, but I can't promise anything
17:10:50 We've got a huge patch series which is ready for core review
17:11:10 However, because the phase 3 stuff hasn't landed yet, we can't mark the BP as needs review
17:11:29 Which means we're less likely to get attention on the stuff which is already in
17:11:40 I think we need to land that asap
17:12:00 I am working on getting the phase 3 stuff posted in the next day or two.
17:12:17 vuil: Awesome
17:12:18 But honestly the current huge list of patches are ready and orthogonal
17:12:32 vuil: Phase 3 patches?
17:12:33 We should encourage review eyes on them
17:12:38 let's talk about approved BP then
17:12:46 #topic approved BP
17:12:51 so - the refactor!!
17:13:11 mdbooth: I mean the current rather sizeable list of reviews that we should have the cores look at
17:13:43 #link http://paste.fedoraproject.org/118493/40552732/raw/
17:13:47 wow - it's 15 or so patches now right?
17:13:49 Did I do that right?
17:14:06 I was referring to the refactor-related chain of patches
17:14:23 vuil: Cool. I'm surprised they're orthogonal, though.
17:14:44 I can't imagine it's possible to touch that code without conflicting with something in the existing series.
17:14:55 for review purposes, I mean. It's not like the current dozen or so patches cannot be reviewed and merged
17:15:25 Yes, they can
17:15:36 mdbooth: can you paste the link to the dependent patches list like you did that time?
17:15:37 The phase 3 code mainly refactors the second part of spawn, which the current series mostly does not touch.
17:15:42 the list has grown
17:15:44 tjones: ^^^
17:15:53 that is all of our outstanding patches
17:16:00 not just spawn
17:16:06 refactor
17:16:14 tjones: If you make it wide enough you can see the refactor series in the middle of it
17:16:41 ok i see it
17:16:43 Incidentally, I've now ported the dep feature to qgerrit
17:16:47 wow
17:17:07 The above was generated by my qgerrit fork
17:17:24 106794 to 104148
17:17:24 So I could probably improve it by removing some fields :)
17:18:03 this is great
17:18:21 anyway -- do you guys think we can get this done by juno-2 (7/24)?
17:18:56 It's not unreasonable to ask that the cores look at those, right? They have had various stages of +1/+2 already
17:19:10 yes that seems very reasonable
17:20:07 vuil: Right. If we genuinely think the whole BP is landed, though, we can mark it as needs review
17:20:15 Which apparently puts it in a priority queue
17:21:08 vuil: mdbooth: do you have anything else to do or is it done?
17:21:23 tjones: Nothing on refactor, no.
17:21:45 let's mark it as needs review then!
17:21:51 The last patch in my list, https://review.openstack.org/#/c/98322/, is being rebased and broken up right now.
17:22:05 Here's output with fewer fields, btw: http://paste.fedoraproject.org/118519/40553124/raw/
17:22:09 Slightly easier to read
17:22:16 I do think it is not a stretch to mark this as needs review.
17:22:25 ok so vuil when you have pushed it, let's mark the BP as needs review
17:22:30 It will take time for the cores to wade through even the initial list
17:22:42 https://blueprints.launchpad.net/openstack?searchtext=vmware
17:23:38 we are not going to have much time to get vsan, spbm, etc in after refactor
17:24:08 spbm has a dep on oslo.vmware
17:24:23 I think we need to push oslo.vmware hard
17:24:54 If we don't, it won't make juno-3
17:25:05 that BP is already marked as needs review.
it's quite large. i see dims and gary have already +1'd
17:25:22 it could use a few more from this team before we start asking the cores to look
17:25:27 I think I've reviewed it at least once
17:25:34 Although I haven't looked in detail recently
17:26:01 so for the next week - the prio is spawn refactor reviews and oslo.vmware reviews
17:26:07 #link https://review.openstack.org/#/c/70175/
17:27:10 70175 is large by necessity, but the changes are quite mechanical
17:27:24 oslo.vmware may be higher prio as it's already marked in review - so it really has a chance of making j-2
17:27:39 anything else on BP for juno?
17:28:40 on the glance side (which might be relevant for vmware stuff at some point): I am asking for an exception for this one https://review.openstack.org/#/c/84887/
17:29:56 will there be significant changes in nova if you get it approved?
17:30:05 mdbooth: hey, I got distracted, but about those refactor patches, if you people can review them all and get it to where you are all happy, and add lots of +1s, it makes life easier for the core reviewers that come along afterwards, and should make it easier to get merged
17:30:36 johnthetubaguy: We're there :)
17:30:55 It's good to go
17:31:13 tjones: I can bring up the patches, they are already pretty much written since icehouse
17:31:19 or maybe havana
17:31:38 arnaud__: lol.
17:31:42 it's not too big
17:31:55 fairly mechanical I would say
17:32:15 have you rebased on top of the refactor work ??
17:32:38 I am not the author of the patches
17:32:42 just the spec
17:32:42 :)
17:32:47 ah ok
17:32:57 It doesn't impact the vmware driver
17:33:20 ok no worries then ;-)
17:33:25 anything else on BP
17:33:26 ?
17:34:51 #topic bugs
17:35:26 i'm doing a quick triage
17:35:32 this one looks interesting
17:35:35 #link https://bugs.launchpad.net/nova/+bug/1337798
17:35:36 Launchpad bug 1337798 in nova "VMware: snapshot operation copies a live image" [Undecided,New]
17:35:47 Ah, yes
17:35:54 * mdbooth had forgotten that one
17:36:08 vuil: I replied, btw
17:36:21 I think you may have misread it
17:36:39 ah, reading now.
17:37:53 vuil: My suspicion is aroused because we do:
17:37:58 snapshot = self._create_vm_snapshot(instance, vm_ref)
17:38:14 And the only other place the snapshot variable is used is when it's deleted
17:38:32 IIRC there are some APIs for streaming a snapshot, and it doesn't look like we're using them
17:38:57 yeah, the snapshot changes the vmdk file that is written to, making the original vmdk the read-only (frozen) data
17:39:09 we copy that out to glance,
17:39:25 Ok, so:
17:39:28 dest_vmdk_file_path = ds_util.build_datastore_path(datastore_name, "%s/%s.vmdk" % (self._tmp_folder, random_name))
17:39:29 deleting the snapshot undoes the disk chain lengthening, restores the file name, etc.
17:39:38 That's now the snapshot
17:39:55 ?
17:40:02 the key is vmdk_file_path_before_snapshot
17:40:21 probably a bad name; what it is saving is the path that will become the frozen disk after the snapshot
17:40:44 sourceName=vmdk_file_path_before_snapshot,
17:40:52 Yeah, that's deeply suspicious :)
17:40:58 it also happens to be the path of the disk that we write to again once the snapshot is deleted.
17:41:08 Is that documented anywhere?
17:41:20 Also, what happens if you make 2 snapshots?
17:41:26 They can't both have the original uri
17:41:44 * mdbooth has seen vmdk filenames with lots of zeroes in them
17:42:21 only the first one does, the chain builds from there. If we take two (which we are careful not to), the path is still valid and points to the first.
17:42:57 not the first one
17:43:27 Anyway, it was non-obvious so I opened that bug to ensure somebody looked
17:43:32 anyway, I now get your concern, we can sort this out offline to make sure we didn't miss anything.
17:43:32 You looked :)
17:43:42 so the code works but is confusing?
17:43:52 *i'm confused*
17:43:56 Might be worth leaving the bug open to track adding a big comment
17:44:03 ok let's do that
17:44:20 i don't have any other bugs to bring up. anyone else?
17:45:01 * mdbooth doesn't have anything
17:45:15 #topic open discussion
17:45:45 i guess we don't want to ask for an exception on any of the missed BP. The only one i am wondering about is kiran's
17:46:20 thoughts on that (or any of the others)?
17:46:32 tjones: Link?
17:47:01 https://review.openstack.org/#/c/98704/
17:48:28 https://review.openstack.org/#/q/status:abandoned+project:openstack/nova-specs+message:vmware,n,z
17:48:32 here are all of them
17:48:45 fyi: I sent an email to the ML about oslo.vmware and requirements. If anyone has a pov: http://www.mail-archive.com/openstack-dev%40lists.openstack.org/msg29544.html , that would help
17:49:24 arnaud__: Reading
17:49:54 fwiw, I only really read posts with [nova] or [vmware]
17:50:06 * mdbooth can't handle the firehose
17:50:35 I forgot the vmware tag
17:50:44 I realized that just after sending
17:50:45 lol
17:52:40 * mdbooth isn't familiar with project requirements in openstack, unfortunately
17:53:15 I will ping openstack-qa to see if they are familiar with that
17:53:22 arnaud__: did you talk with dhellmann?
17:53:26 yep
17:53:41 he is suggesting adding retrying to the requirements of icehouse
17:53:55 but that would mean repackaging for operators
17:54:12 sdague may have an idea
17:54:30 also, that would not fix the issue in the long run from my pov
17:55:29 anyway, I will let you know if I find a solution
17:56:03 ok 5 more minutes - anyone have other stuff?
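[editor's note] The snapshot disk-chain behaviour debated in the bug discussion above (snapshot freezes the current vmdk, writes move to a new delta file, deleting the snapshot restores the original path) can be illustrated with a toy model. This is not the nova driver code: `DiskChain` and its file-naming scheme are invented for illustration, and only the variable name `vmdk_file_path_before_snapshot` comes from the snippet quoted in the meeting.

```python
class DiskChain:
    """Hypothetical model of a VMware-style vmdk disk chain.
    The real driver talks to vSphere; this only simulates the
    path/chain logic discussed in bug 1337798."""

    def __init__(self, base="disk.vmdk"):
        self.chain = [base]  # read-only ancestors, then the active disk

    @property
    def active(self):
        # The disk file the VM is currently writing to.
        return self.chain[-1]

    def snapshot(self):
        # The current active disk becomes the read-only (frozen) data;
        # a new delta disk is appended and receives all further writes.
        frozen = self.active
        self.chain.append("disk-%06d.vmdk" % len(self.chain))
        return frozen

    def delete_snapshot(self):
        # Consolidation: the delta merges back and the original
        # file name becomes the active disk again.
        self.chain.pop()


vm = DiskChain()
path_before = vm.active        # cf. vmdk_file_path_before_snapshot
frozen = vm.snapshot()
assert frozen == path_before   # the saved path is the frozen disk
# ... this is the point where the driver copies `frozen` out to glance ...
vm.delete_snapshot()
assert vm.active == path_before  # writes resume on the original file
```

This matches what vuil describes: the path saved before the snapshot is both the frozen source copied to glance and the disk written to again after the snapshot is deleted, which is why copying it mid-snapshot is safe but confusing to read.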
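[editor's note] On the `retrying` dependency mentioned in the open discussion: oslo.vmware uses the third-party `retrying` library to re-run transient-failure-prone calls. The sketch below is a minimal stand-in for that decorator written against the stdlib only (the `retrying` package itself is not imported, and `flaky` is a made-up example function), just to show the pattern being pulled into the requirements list.

```python
import time


def retry(stop_max_attempt_number=3, wait_fixed=1000):
    """Minimal stand-in for the `retrying` library's decorator:
    re-run the wrapped function until it succeeds or attempts run out.
    wait_fixed is in milliseconds, mirroring `retrying`'s convention."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(1, stop_max_attempt_number + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == stop_max_attempt_number:
                        raise  # out of attempts: propagate the failure
                    time.sleep(wait_fixed / 1000.0)
        return wrapper
    return decorator


calls = []


@retry(stop_max_attempt_number=3, wait_fixed=0)
def flaky():
    # Fails twice, then succeeds - the shape of a transient API error.
    calls.append(1)
    if len(calls) < 3:
        raise IOError("transient vSphere error")
    return "ok"


assert flaky() == "ok"
assert len(calls) == 3
```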
17:57:25 My changes to qgerrit just got merged upstream
17:57:38 So if you pull that you can generate those dep tree views
17:57:41 mdbooth: cool!
17:57:47 please review oslo.vmware https://review.openstack.org/70175
17:58:06 and the other refactor patches starting with https://review.openstack.org/106794
17:58:33 #endmeeting