17:00:24 #startmeeting vmwareapi
17:00:25 Meeting started Wed Mar 5 17:00:24 2014 UTC and is due to finish in 60 minutes. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:29 The meeting name has been set to 'vmwareapi'
17:00:47 #topic icehouse feature freeze
17:01:10 whoami
17:01:21 #undo
17:01:21 Removing item from minutes:
17:01:27 #topic greetings
17:01:38 Folks want to go around and introduce themselves?
17:01:49 Eric here from vmware
17:01:52 I'm Shawn Hartsock… I run the IRC meeting and keep track of blueprints etc.
17:02:07 Vincent Hou from IBM.
17:02:14 Arnaud VMware
17:02:54 zhaoqin from ibm
17:02:54 Good to meet everyone.
17:03:10 hi vincent
17:03:28 Hey.
17:03:29 * hartsocks smiles and nods at everyone
17:03:47 Okay. So for those who might be new...
17:03:58 #link https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
17:04:25 We're still in the IceHouse release cycle, but we've passed Nova's Feature Freeze (known as FF on IRC).
17:05:01 If you have a blueprint that is not targeted at IceHouse-3 (or i-3) as of today … you have to apply for an exception to the rule...
17:05:05 known as an FFE
17:05:23 #topic icehouse Feature Freeze
17:05:32 #link https://etherpad.openstack.org/p/vmware-subteam-icehouse
17:05:49 I've compiled notes on what stayed targeted at IceHouse and what didn't.
17:06:05 If you're on the Nova project… bad news all around.
17:06:22 Not a single one of the blueprints (aka features) stayed inside IceHouse.
17:06:40 That means if you want to try and make this release you need to apply for an FFE.
17:07:39 I am going to apply for an FFE for the image handling code
17:07:41 I've identified two blueprints I'll be filing for an FFE. If you would be so kind, give a +1 by the BP you think I should ask for an FFE on … or a −1 if you disagree and think we should revisit the BP in Juno.
17:07:46 hartsocks: i have asked for one for image cache
17:07:47 the base framework has been merged
17:08:07 could I get you folks to note that on the etherpad for coordination's sake?
17:08:09 garyk has asked for one for ISO
17:08:32 arnaud: i recommend you get core review sponsorship ahead of time if possible
17:08:34 okay, cool.
17:09:11 yep tjones
17:09:15 arnaud: danpb expressed an interest
17:09:42 (assuming we're talking about the image cache: sorry I'm late)
17:10:27 mdbooth: no, this is utilizing the image handling changes that arnaud made in glance
17:10:34 Ah, sorry
17:10:38 mdbooth: actually no, I am talking about the image handling (from glance to nova)
17:11:34 i will be back in a few minutes. sorry
17:11:55 arnaud: are we tracking that? do we need to?
17:12:33 hartsocks: yes, let's please track it
17:12:40 link?
17:12:58 I will update the page as soon as I am in the office
17:13:08 the bp has been marked as implemented
17:13:15 arnaud: okay.
17:13:18 so I need to discuss it with the core reviewer first
17:13:41 arnaud: It might already be on my list… https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
17:14:00 this one is the glance code
17:14:03 this has been merged
17:14:21 okay, I already marked that one down in the "made it" column.
17:14:27 there are these two: https://review.openstack.org/#/c/63975/ and https://review.openstack.org/#/c/67606/
17:15:07 arnaud: okay, the BP links there are broken (on the reviews)
17:15:27 arnaud: but I was not tracking that one.
17:15:28 yes (I think it's because the bp became "implemented")
17:16:26 okay. I've made a note of the patch set. We'll clean up the notes later.
17:16:48 sabari: do we want to ask for an FFE on SPBM?
17:17:19 i guess it did not make it into cinder???
17:17:28 I don't think so
17:17:33 tjones: let me check if it made it into cinder
17:17:51 tjones: nope
17:17:59 I didn't even send the glance patch for review
17:18:20 none of the cinder patches merged
17:18:26 bummer
17:19:21 On the bright side, we're fairly well lined up for Juno-1, right?
17:19:28 yes
17:19:29 lol
17:19:33 yeah....
17:19:43 I think cinder-nova-glance should have spbm in juno-1
17:19:47 it will be good to have spbm so we can do away with the regex for choosing datastores, which is quite painful for an admin.
17:19:53 vsan?
17:20:00 vsan in juno
17:20:33 ok - well it is what it is
17:21:55 Well. I don't think there's much we can do in this meeting other than self-organize around who is going to ask for an FFE for which BP.
17:22:15 So long as we can help each other out on that front, I think we're doing all we can.
17:22:48 we already asked for image-cache right ?
17:22:52 yes
17:23:01 we can't ask for spbm since cinder is not in
17:23:04 sabari: https://etherpad.openstack.org/p/vmware-subteam-icehouse
17:23:08 we have asked for iso
17:23:22 mark up the etherpad as appropriate so we're coordinated.
17:24:02 ok
17:24:08 let's just be sure we ask for what we *really* need to support our customers and not what we wish for
17:24:16 yeah, just to avoid spamming/crossing over each other.
17:24:33 IMO it is image cache, ISO, and glance image support.
17:24:50 but you can certainly disagree - let's make sure we are on the same page
17:25:18 BTW: https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage I know of one large customer seeking that feature, but they wanted it in spring this year at the earliest.
17:25:24 i'm thinking about the things that really hurt our customers
17:25:47 The ISO feature seems like a no-brainer. It's a fairly basic admin tool.
17:26:14 hartsocks: that bp is partially implemented. We do support root disks sized according to the flavor.
17:26:20 we just miss out on ephemeral disks.
17:26:26 image cache - without it they fill up their datastores with unused images
17:26:49 I've got no objection to the image cache system changes.
17:27:08 sabari: ephemeral disks are a big use-case in OpenStack-style deployments in some clouds.
17:27:24 sabari: without ephemeral disks there are a number of use-cases we just don't support.
17:27:43 For example, it is common to boot an image with a config drive…
17:27:48 hartsocks: do you think we should ask for an FFE on that one?
17:28:02 yes. but I'm admittedly biased. I've worked on it a while.
17:28:14 it's been in play since before havana
17:28:16 hartsocks: we do support booting with config drives.
17:28:29 … and then populate the root disk with an OS
17:28:37 … and the ephemeral disk with the specific application.
17:28:53 The ephemeral disk typically acts as the application drive for an application server.
17:29:03 you said there was 1 customer blocked on this but they don't need it until spring?
17:29:06 This is a typical deploy strategy for shops using JVM-based application servers.
17:29:32 tjones: yes, it was in their future plans. When I saw Tang was going to finish this in Havana I thought we would be well set.
17:30:04 i've not been paying attention to that one
17:30:28 tjones: I thought it had been in play so long and *so well* reviewed that it would be a sure-fire merge by now.
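[Editor's note: the ephemeral-disk use case described above (an OS image on the root disk plus a separate ephemeral disk for the application) is driven by the flavor definition. As a minimal, hypothetical sketch, assuming python-novaclient of that era; the credentials, flavor name and sizes are placeholders and do not come from the meeting:

    # Hypothetical sketch only; names, sizes and credentials are placeholders.
    from novaclient import client

    nova = client.Client("2", "admin", "secret", "demo",
                         "http://keystone.example.com:5000/v2.0")

    # 20 GB root disk for the OS image, plus a 40 GB ephemeral disk that an
    # application server (e.g. a JVM-based one) can use as its application drive.
    flavor = nova.flavors.create(name="m1.appserver", ram=4096, vcpus=2,
                                 disk=20, ephemeral=40)

Instances booted with such a flavor are what the ephemeral-disk blueprint needs the VMware driver to support.]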
17:30:41 ugh
17:30:56 tjones: this is the *second* attempt at ephemeral disks. The first was in Havana-2
17:31:06 (that was July 2013)
17:31:35 so I've been shepherding that for 9 months now. It's kind of become personal at this point. :-/
17:31:50 jenkins barfed on both dependent patches
17:31:54 nice.
17:32:08 yup, we should consider it next. I based that on the fact that there hasn't been any talk on the vmware community forum about ephemeral disk support.
17:32:45 fair enough.
17:33:00 BTW - v3 diags did make it
17:33:04 so we had 1
17:33:09 in nova
17:33:10 :-)
17:33:14 yay!
17:33:30 but v3 itself may not make it, right - i didn't follow that thread completely.
17:33:30 sabari: curious, what vmware community forum do you monitor?
17:33:35 so apologies if i missed something.
17:33:39 true
17:33:43 i lost track of it too
17:34:06 browne: #link http://communities.vmware.com/community/vmtn/openstack
17:34:28 sabari: nice, thx
17:35:41 I want to talk about the driver-specific-snapshot BP at some point today. If we run out of time, remind me.
17:35:55 It's definitely a Juno BP though… priorities.
17:36:02 OK
17:36:22 BTW: I've started ...
17:36:30 #link https://etherpad.openstack.org/p/vmware-subteam-juno
17:36:38 for what it's worth...
17:36:50 https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot
17:37:40 vincent_hou: you guys brought up something that really made me think about how the current driver works, and I think the current snapshot system is fundamentally wrong. But more on that later.
17:37:50 :vincent_hou I did not understand your last comment in this BP
17:38:12 sorry, typo
17:38:50 I don't know if it's relevant yet, but for Juno I think we'd be well served to serialise development somewhat
17:39:02 :vincent_hou can you explain it a little more?
17:39:04 Interested parties in driver-specific-snapshot … let's talk over on #openstack-vmware after we clear the icehouse-related discussion. I just didn't want to forget you :-)
17:39:24 mdbooth: okay, you've piqued my interest.
17:39:47 hartsocks: Just thinking of ways to reduce the 'in-flight'
17:39:53 chaochin: which comment or sentence exactly?
17:40:12 If we pick a set of features which can be developed either entirely independently
17:40:14 your last comment today..
17:40:15 mdbooth: any suggestions on making this better are HIGHLY appreciated
17:40:15 Or serially
17:40:35 Then absolutely hammer on a much smaller set until they get merged
17:40:38 Before moving on
17:41:27 Right now we've got a massive amount in flight
17:41:44 And so many inter-dependencies it makes my head hurt
17:42:23 mdbooth: I'm not sure how we ended up with these long-chain dependencies between patches, but that is definitely a problem.
17:42:28 chaochin: we will discuss it later.
17:42:33 I wholeheartedly agree, Matt. The independent stuff we can call out and have it worked on, well, independently.
17:42:45 vincent_hou: thx
17:43:22 We can be more focused in the 'hammering', but the 'move on' part depends a lot on the review process, and until things merge, they will always be interdependent on one another's changes.
17:44:00 and the longer it takes the worse it gets
17:44:23 Develop, then swap reviews until you get attention? ;)
17:44:35 We could just deal with merge conflicts as they happen.
17:44:42 take refactoring vmops (or even spawn). it has to happen, but it has to happen quickly so we can take advantage of it
17:45:42 mdbooth: do you still have those lovely graphs you posted on the ML?
17:45:57 mdbooth: care to post them here with a #link tag?
17:46:01 Let me dig
17:46:31 #link http://lists.openstack.org/pipermail/openstack-dev/2014-February/028077.html
17:46:46 #link http://lists.openstack.org/pipermail/openstack-dev/attachments/20140225/d0de8dd7/attachment.svg
17:46:54 #link http://lists.openstack.org/pipermail/openstack-dev/attachments/20140225/d0de8dd7/attachment-0001.svg
17:47:53 I think there should be more utility patches that get shared between features in review. Otherwise, the common code will be reimplemented in different places.
17:47:57 I think we need to ask for help, but we also need to focus on what we can do
17:48:29 What's interesting is that the Nova bubble on the days-in-flight graph is 10 days and vmwareapi is 12 days.
17:48:35 Did I read that right?
17:48:44 Yeah
17:48:57 I'm actually not convinced that number is correct
17:49:11 mdbooth: does that mean our code is not getting looked at for 12 days, or that it's not merging for 12 days?
17:49:12 However, I haven't spent the time digging
17:49:24 If that number is accurate...
17:49:43 It's supposed to be a measure of the time it takes the final patch version to be approved
17:49:56 I *might* draw the conclusion that our code *is* getting looked at but it's being rejected at a rate much higher than other people's.
17:50:38 I would accept that as supporting the conclusion that for whatever reason we're getting rebuffed on code quality (whatever that might actually mean; don't take it personally, folks).
17:50:51 I can do some more work on this if people think it's interesting
17:50:56 I do.
17:51:06 However, it didn't seem to get much traction from the people who matter
17:51:09 So I'm not convinced
17:51:31 Well, sometimes these things are sales pitches, not substance.
17:51:48 But I think the real value in your graph is the two numbers together.
17:51:48 it really would be nice if there was a core person working on the driver. that would help considerably.
17:52:11 if it is any consolation, the xen drivers also take time to get reviews in and they have, i think, 4 cores
17:52:37 garyk: I'm sure that would help. Primarily I think it's a matter of meeting their expectations, which as a group we simply don't know yet.
17:53:05 i think that each core has different expectations (which is a healthy thing).
17:53:29 and we can't match those if we get random cores?
17:53:34 core reviewers, that is.
17:53:50 that is the million-dollar question
17:54:09 Which I think is what mdbooth's number *might* answer.
17:54:41 I would love to be able to drill down to the patch level with these stats.
17:55:12 we know we are suffering from a quality perspective in our driver. we've been hesitant to make large changes due to the lengthy review process. but now we are at a point where we *have* to do it. so we need to make a plan for doing that in juno. that's what we need to figure out - how to do it most effectively and get it in in juno-1 so we can take advantage of it
17:55:24 mdbooth, do you have the script(s) you used to generate those up somewhere, in case someone wants to have a play?
17:55:43 sgordon_: Check out the original mail post. There's a link to the code.
17:55:47 ta
17:56:19 tjones: yes, unfortunately we are in a chicken-and-egg type of problem, I think.
17:57:22 and for the record, garyk is right… we would be helped tremendously by having a core reviewer working with us.
Then we would at least know we were measuring the right metrics for one of our +2 votes :-)
17:57:43 We can help ourselves with the in-flight problem, though, by having less stuff in flight
17:58:09 mdbooth: so your counsel is literally ....
17:58:10 I think we've hit a point where it's counter-productive both to ourselves and to external reviewers
17:58:19 *do less*
17:58:22 :-)
17:58:29 ok, let's take that. to have less stuff in flight we need smaller patches that have good tests, and we need to get them in quickly
17:58:30 * mdbooth heads for the beach
17:58:45 guys, it seems that we are overrunning.
17:58:56 we have 2 minutes ;-)
17:59:26 hartsocks: I still have one question
17:59:29 tjones: so for that maybe as a group we should look at addressing just the top blueprints, say 3 at a time, and working down a list, holding reviews.
17:59:30 tjones: Also perhaps more review swaps to get more attention
17:59:38 Although garyk is a machine
17:59:59 chaochin: are you around for the next 30 minutes? I'll be on #openstack-vmware
18:00:01 I am pretty sure he literally is one
18:00:05 Well, that's time.
18:00:06 hartsocks: Do we hope to have glance recognize VM templates in the future?
18:00:07 mdbooth: by review swap - you mean review other people's patches that are *not* vmware patches
18:00:16 tjones: Yes
18:00:30 mdbooth: yes, we definitely need to do that
18:00:31 * hartsocks calls time
18:00:37 #endmeeting
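[Editor's note: the "days in flight" numbers discussed above come from mdbooth's analysis linked at 17:46. Purely to illustrate the kind of measurement involved, and not the actual methodology behind those graphs, a crude proxy (time from a change's creation to its last update for merged VMware-driver patches) could be pulled from the Gerrit REST API roughly as below; the query, sample size, and averaging are the editor's assumptions:

    # Rough sketch, not mdbooth's script: average "created -> last updated" time
    # for recently merged nova changes that touch the VMware driver.
    import json
    from datetime import datetime

    import requests

    GERRIT = "https://review.openstack.org"
    QUERY = "project:openstack/nova status:merged file:^nova/virt/vmwareapi/.*"

    resp = requests.get(GERRIT + "/changes/", params={"q": QUERY, "n": 100})
    # Gerrit prefixes its JSON responses with ")]}'" to guard against XSSI.
    changes = json.loads(resp.text[len(")]}'"):])

    FMT = "%Y-%m-%d %H:%M:%S.%f000"  # Gerrit timestamps carry 9 fractional digits
    days = [(datetime.strptime(c["updated"], FMT) -
             datetime.strptime(c["created"], FMT)).days for c in changes]
    print("average days in review: %.1f" % (sum(days) / float(len(days))))

This is only a starting point for the "drill down to the patch level" idea raised in the meeting.]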