17:00:27 #startmeeting VMwareAPI
17:00:28 Meeting started Wed Jan 15 17:00:27 2014 UTC and is due to finish in 60 minutes. The chair is hartsocks. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:32 hi!
17:00:32 The meeting name has been set to 'vmwareapi'
17:00:36 Who's around?
17:00:40 hi
17:00:47 i am here
17:00:50 brief intros this week?
17:00:51 * mdbooth is here
17:02:01 AndyDugas from VMware is lurking
17:02:10 I'm Shawn Hartsock, your friendly neighborhood community secretary aka community chair
17:02:18 Hey Shawn
17:02:38 browne - Eric Brown from VMware
17:02:47 :-) hey Andy
17:03:08 browne: welcome to the new guy!
17:03:22 Matt Booth: Red Hat and OpenStack n00b
17:03:43 mdbooth: welcome new guy :-) all hands are welcome
17:04:00 * mdbooth got his ATC wings today when a 2 line patch in the libvirt driver was accepted :)
17:04:10 :-)
17:04:14 congratulations
17:04:56 * hartsocks listens for a moment 'cuz he knows it's just 9am in California
17:05:40 we're doing short intros for those just joining...
17:06:40 okay...
17:06:51 just do this if you join late btw
17:06:55 \o
17:06:59 \o
17:07:01 heh
17:07:01 :-D
17:07:08 do what? i just joined
17:07:13 #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI
17:07:20 tjones: just getting rolling
17:07:37 #undo
17:07:38 Removing item from minutes:
17:07:53 #link https://wiki.openstack.org/wiki/Meetings/VMwareAPI#Agenda
17:08:12 Okay, meeting agenda, blueprints, bugs, open discussion
17:08:21 #topic blueprints
17:08:37 We're just a few days away from icehouse-2 feature freeze
17:08:56 We've been tracking our BPs for icehouse-2 here:
17:09:04 #link https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
17:09:12 I've just updated the page.
17:09:57 It turns out that there was a review deadline *yesterday*: if the patch for your feature aka blueprint wasn't set to status 'ready for review', you got *bumped* to icehouse-3.
17:10:29 Unfortunately, 3 of the top 7 blueprints we identified were bumped.
17:10:51 Actually, the top 3 priority BPs were bumped :-(
17:11:09 not sure that i understand the point about the review deadline
17:11:18 hartsocks: we also need to have CI listening to all patches etc by i-2. CI is blocked on the image cache AND the session management BP
17:11:23 all of the BPs mentioned have received reviews.
17:11:46 well, don't shoot the messenger
17:11:56 I'm just doing y'all's book keeping here.
17:12:17 i know but… is there anything that can be done?? image cache has been reviewed up, down, and sideways
17:12:24 The short of it is: if you didn't manually set the status "Needs Review", the core reviewers assumed you weren't paying attention.
17:13:07 So what we need to do is get garyk & vuil to set "needs review"… and we'll need to call attention to those blueprints.
17:13:13 i checked image cache yesterday and it was needs review
17:13:35 #link https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management
17:13:42 currently set to icehouse-3
17:14:00 right and it's in "needs code review"
17:14:05 ah
17:14:12 I think I see why it was bumped...
17:14:12 just like yesterday
17:14:18 #link https://blueprints.launchpad.net/nova/+spec/multiple-image-cache-handlers
17:14:51 tjones: yeah, that all went down last night.
17:15:16 we will not be able to make the CI commitments without image cache
17:15:16 the image-cache-handlers BP is a dependency of the vmware-image-cache-management BP
17:15:54 #action call out https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management to russellb
17:16:08 CI?
17:16:31 continuous integration tests
17:16:35 Thanks
17:16:40 mdbooth: there is the upstream gating.
17:16:55 each driver is meant to have its own CI.
the vmware one is called minesweeper
17:17:21 at the moment we are trying to speed up the time taken to run all of the tests
17:17:37 with the image cache temp patch the times are improved as we can run tests in parallel
17:17:41 because we have a requirement to run within 4 hours
17:17:59 Got it
17:19:24 okay.
17:19:37 I've put a note on the BP to russellb and I'll see about calling this out later.
17:19:45 after this meeting.
17:20:24 The other BPs that slipped are still important, but I think we can tolerate them slipping a bit better because they don't endanger Minesweeper.
17:20:45 BTW: anyone in here want to give an update on Minesweeper while we're talking about it?
17:21:03 hartsocks: i can try
17:21:29 the last few days there have been infrastructure issues which have been resolved (or are in the process of being resolved)
17:21:42 this has caused a lag in the scores from minesweeper
17:21:53 the queue of reviews is also terribly long
17:22:04 that's about all at the moment
17:22:19 garyk: do we have a public link for that so we can show the core reviewers?
17:22:24 Does it have a web interface, btw?
17:22:32 garyk: I know we expose logs now.
17:22:38 hence the importance of getting parallel runs
17:22:49 * hartsocks digs around for a link to Minesweeper logs
17:22:59 mdbooth: it is jenkins like the upstream one. the logs from the runs are posted but access is currently not possible
17:23:01 the logs are posted with the patches
17:23:23 Ok
17:23:24 web access is not possible, you mean. the logs are accessible
17:23:25 https://review.openstack.org/#/dashboard/9008
17:23:40 #link http://162.209.83.206/logs/
17:23:47 tjones: correct.
17:23:49 The directory numbers are review numbers.
17:24:23 looks like we have no publicly viewable UI to show the health of Minesweeper itself.
17:24:38 please note that minesweeper is currently working with 2 projects - nova and neutron
17:25:38 anything else to cover here before I start whining to core reviewers?
17:25:43 :-)
17:26:04 i have spoken with vipin and can give an update regarding the oslo support.
17:26:06 sorry for jumping all over you hartsocks - but this is critical
17:26:08 just let me know when
17:26:26 the review queue is long in general though, not just vmware stuff
17:26:35 russellb: hey
17:26:38 hi
17:27:23 russellb: brief summary is this BP https://blueprints.launchpad.net/nova/+spec/vmware-image-cache-management improves our Minesweeper performance significantly ...
17:27:35 but it was bumped to icehouse-3
17:27:44 wasn't set to "needs code review"
17:27:52 we bumped everything not ready for review
17:27:57 since we have a big enough queue of stuff ready
17:28:03 Well, without it we're in danger of not being able to meet the 4-hour test turn-around requirement.
17:28:30 If we can get some slack either on the test turn-around time or on the BP, we'll be in much better shape.
17:28:46 the deadlines for all of this are icehouse
17:28:50 not icehouse-2
17:29:51 I think we had Jan 22nd in mind for some reason. Might have been an internal deadline now that I think about it.
17:29:59 ok
17:30:15 So as a driver we're cool?
17:30:18 actually i thought it was THE deadline (not internal)
17:30:29 we did say icehouse-2 was a soft deadline actually ... and we'd start putting warning messages in after icehouse-2
17:30:32 let me pull up the wiki page ...
17:30:37 right,
17:30:40 if not - all i can say is phew!
17:30:49 but for things like this where we're steaming towards success, I think we're fine
17:31:01 we don't need to put in a warning because minesweeper is taking five hours, IMHO
17:31:06 https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
17:31:14 dansmith: +1
17:31:18 dansmith: i agree with you.
great progress has been made
17:31:35 based on the existing progress, totally agreed no warning for this driver
17:31:50 * hartsocks bows graciously
17:31:50 I think the warnings go in for drivers that have no (current) hope of meeting the deadline, like powervm would have been, and docker still is (I think)
17:32:09 yeah
17:33:06 i think we should just start a -dev thread at icehouse-2 asking for status from everyone
17:33:31 or just do it on the date and watch the fireworks
17:33:35 either way is fine with me :P
17:33:40 you guys are in good shape
17:33:48 good to hear
17:34:44 anything else on these blueprints that were moved from icehouse-2 to icehouse-3?
17:36:14 Okay,
17:36:33 Blueprints for vmwareapi not in Nova…
17:36:51 #link https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type I had noted this for icehouse-2; it's now set for icehouse-3
17:36:58 That's in Cinder.
17:37:10 I'll send a note to Subbu on this one.
17:37:41 garyk: you said you spoke to the team doing our Oslo work.
17:37:47 hartsocks: yes
17:37:53 garyk: now would be a good time to talk about that :-)
17:38:11 at the moment vipin is addressing all of the comments posted on the patch set. he will be posting another patch hopefully tomorrow.
17:38:29 please note that this is an initial forklift and later patches will deal with improvements
17:38:32 I feel bad for beating him up a bit in review.
17:38:53 the purpose is to get common code into oslo and have it used by nova/cinder/glance and, if you guys have been following, ceilometer
17:39:24 he has addressed most. :)
17:39:37 I've had a hard time getting real-time chat with Vipin; we're too many timezones apart.
17:39:50 #link https://etherpad.openstack.org/p/vmware-oslo-session-management-notes
17:40:05 I made this as an attempt to capture notes we might want to share on the topic.
17:41:07 In general, I know we don't want to shift the API while doing the "forklift" process, but I still want to try to address the error logging and handling issues that make debugging a nightmare for us in some corner cases.
17:42:00 okay.
17:42:14 other blueprints?
17:43:31 * hartsocks waits for network lag and slow typists
17:44:29 #topic bugs
17:44:50 I'll send out my mailing update after this meeting…
17:44:59 #link https://etherpad.openstack.org/p/vmware-subteam-icehouse-2
17:45:10 has my preliminary report
17:45:29 #link https://review.openstack.org/#/c/62587/
17:45:52 that is stable/havana
17:45:58 heh.
17:46:03 bug in my report system.
17:46:32 the most serious issue at the moment is https://review.openstack.org/#/c/64598/
17:46:49 minesweeper passes on this but for some reason it is failing to post the +1
17:47:04 that is causing a ton of our patches to fail the CI
17:47:42 #action shill for one more +2 on https://review.openstack.org/#/c/64598/
17:48:19 :-)
17:48:36 okay, open discussion time?
17:49:07 Next week we'll have to focus on bugs to make up for shorting them this week.
17:49:22 will there be a meeting next week?
17:49:31 good question...
17:50:00 would it be possible at the nova meetup for whoever is going to try and maybe sit with one or more cores to review a blueprint?
17:50:00 i'll be in costa rica ;-)
17:50:04 Jan 22nd is not a holiday as far as I see...
17:50:10 tjones: unacceptable!
17:50:14 hee hee
17:50:35 Monday Jan 20th is a holiday so no meeting Monday, m'kay?
17:50:47 #topic open discussion
17:50:58 my thinking is that if a ton of nova people are in the same room then why not divide into groups and have a ton of eyes on certain BPs.
maybe russellb can prioritize X BPs and then a group of people spend a session on those (cross my fingers that we may have 1 or 2 in the mix)
17:52:54 garyk: that's a good idea and I had hoped we could get some good skills sharing with core reviewers
17:53:31 garyk: one of my focuses has been on the use of Python 3's mock libs… I hope I can contribute more broadly there...
17:53:54 nice
17:54:17 I'm currently toying with the idea that we should be able to Mock *any* observed failure mode in production/test and code for it in a unit test…
17:54:32 … not sure how realistic that is, however.
17:54:57 when does openstack plan to switch over to python 3?
17:55:28 I have no idea… but the code is slowly being walked into python 3 compatibility.
17:55:43 ah
17:55:52 Right now you have to write things so they work from 2.6 through 3.3, if I recall correctly.
17:55:59 the nova client has it as part of the gating
17:56:33 I've put some notes in some places that say things like "once python 2.6 is dropped, rewrite"
17:56:39 Another Red Hatter mentioned to me today that mock is preferred over mox in new code. Is that true in the vmwareapi driver?
17:56:50 I would like it to be.
17:57:05 yeah, i believe mox doesn't work in python 3.x
17:57:24 I have some broken code which is WIP where I'm playing with mock over mox: https://review.openstack.org/#/q/topic:bp/vmware-soap-session-management,n,z
17:57:35 WIP: work in progress
17:57:45 https://wiki.openstack.org/wiki/Python3
17:58:21 BTW: there is a mox lib that works in Python 3, but IIRC it's not officially supported or something like that.
17:58:45 browne: ah, it's on that page, mox3
17:58:56 yeah, just saw that too
17:59:12 We're out of time.
17:59:36 We're over on #openstack-vmware if anyone needs to chat. Sometimes I'm quite colourful over there.
17:59:49 Good night
17:59:51 Otherwise, same time next week in here. We'll focus on bugs.
18:00:05 #endmeeting
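The mock-over-mox pattern and the "mock any observed failure mode" idea discussed above can be sketched roughly as follows. This is a minimal illustration, not code from the driver: the `list_datastores` function, its `session` collaborator, and the `get_datastores()` call are hypothetical names invented for the example.

```python
# Sketch of the mock style discussed in the meeting: stub a collaborator
# with mock.Mock instead of mox's record/replay pattern, and use
# side_effect to replay a failure mode seen in production inside a unit
# test. All names here are hypothetical, for illustration only.
try:
    from unittest import mock  # Python 3.3+
except ImportError:
    import mock  # the standalone backport package on Python 2.6/2.7


class SessionError(Exception):
    """Stand-in for a driver-level session failure."""


def list_datastores(session):
    # Hypothetical production code under test.
    return session.get_datastores()


# Happy path: replace the session with a Mock and assert on the call.
session = mock.Mock()
session.get_datastores.return_value = ['ds1', 'ds2']
assert list_datastores(session) == ['ds1', 'ds2']
session.get_datastores.assert_called_once_with()

# Failure mode: side_effect raises the observed error, so a unit test
# can exercise the error-handling path deterministically.
session.get_datastores.side_effect = SessionError('session expired')
try:
    list_datastores(session)
except SessionError as exc:
    assert 'session expired' in str(exc)
```

Unlike mox, no `ReplayAll()`/`VerifyAll()` phases are needed; expectations are asserted after the fact, which is one reason this style survived the move to Python 3.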