21:00:38 #startmeeting nova
21:00:38 Meeting started Thu Jun 6 21:00:38 2013 UTC. The chair is dansmith. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:39 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:41 The meeting name has been set to 'nova'
21:00:43 #link https://wiki.openstack.org/wiki/Meetings/Nova
21:00:51 russellb asked me to run this in his absence
21:00:53 who is around?
21:00:57 hi
21:00:58 hi
21:00:59 \o
21:01:00 o/
21:01:02 hi
21:01:02 o/
21:01:16 hi
21:01:22 hi
21:01:30 \\o
21:01:41 cool, not sure we'll have all that much to talk about, but we'll run through the same ol' agenda items and see
21:01:49 #topic havana blueprints
21:01:56 blueprints: we've got 'em
21:02:04 anything anyone needs to bring up about a blueprint?
21:02:18 any blueprints needing (re-)targeting or anything?
21:02:24 o/
21:02:29 vishy: go!
21:02:36 o/
21:02:53 well i have two small blueprints i will be creating soon
21:03:09 one was the ml post and poc i made earlier about volume-swap
21:03:18 the other is about live-snapshots that i mentioned yesterday
21:03:33 * dansmith thinks live snapshots == cool
21:04:15 vishy: are you expecting these to be targeted to H2?
21:04:22 possibly
21:04:25 i think yes
21:04:30 since i don't think either is very big
21:04:35 okay, cool
21:04:44 the outstanding question is: is someone going to implement them for other hypervisors?
21:05:29 hartsocks: ?
21:05:44 @dansmith I'll look at it.
21:05:50 okay
21:06:14 I don't think we have representation from any of the other hypervisors here, but I assume they would have spoken up on the ML
21:06:23 I can bring it up with johnthetubaguy re: xen
21:06:28 alaski: cool
21:06:28 just to get his thoughts
21:06:35 dansmith: i can look at it for powervm
21:06:46 #action alaski to talk to johnthetubaguy about live snapshots and volume swap
21:07:04 #action mriedem1 to look at live snapshots and volume swap for powervm
21:07:18 vishy: anything else on that?
21:07:31 @dansmith I'll put it on the vmwareapi team agenda to discuss
21:07:52 #action hartsocks to bring up volume swap and live snapshots at the vmwareapi team meeting
21:08:06 nope
21:08:16 okay, alaski: did you have a blueprint to bring up?
21:08:47 dansmith: that was my hello wave. Though I do have a quick question on the scheduling blueprint that I can ask
21:08:58 cool
21:09:37 So in order to break up the tasks to prepare for orchestration, things like allocate_networks are looking like they need a new compute rpcapi call
21:10:03 But for that one in particular, the only reason it can't be done in conductor is the get_mac_* call, which differs based on virt driver
21:10:40 I guess I'm wondering what the thought is on more back-and-forth RPC calls, and if I should worry about trying to move that up a level
21:11:05 even though conceptually it fits on the compute node. It just doesn't actually need to run there based on the logic I've seen
21:11:21 but it might someday, right?
21:11:29 seems like a very virt-driver-ish sort of thing
21:11:49 it is
21:12:11 And I kind of like it staying there. It's just that spawn would be moving from one RPC call to probably 4
21:12:16 yeah
21:12:25 just wanted to make sure no one is concerned about that
21:12:26 interestingly,
21:12:40 this would apply rather neatly to objects, although kinda different than the other cases
21:12:51 where we could ask for a driver info object from compute once,
21:13:01 which could implement the mac function locally, never needing an rpc call back,
21:13:15 or it could make it remoted and then every call would go back to compute if needed
21:13:23 but that's just craziness since I have objects on the brain :)
21:13:35 comstud: any thoughts on the extra rpc traffic alaski is asking about?
21:14:03 :) that sounds great actually, the never needing an RPC call. Since in this case the logic is driver specific, but not host specific
21:14:03 the problem is expanding instances right?
21:14:11 sorry, i need to scroll back
21:14:18 more RPC calls
21:14:22 the problem is moving things out of compute to conductor,
21:14:30 Why is the message larger ?
21:14:34 and then needing access to things that compute has, like asking the virt driver to generate a mac
21:14:41 Is this the qpid problem we're talking about?
21:14:45 It doesn't support > 64K msg size
21:14:49 no
21:14:52 Oh
21:14:52 The message isn't larger, just more of them
21:15:03 also, the qpid fix was merged yesterday
21:15:07 hah
21:15:23 I am in favor of more calls from conductor to compute
21:15:28 we should have woken up ol' man bearhands earlier
21:15:28 * johnthetubaguy has stopped playing the tuba
21:15:28 if that's what's being asked
21:15:38 Yeah, I forgot about this mtg :)
21:16:10 comstud: cool. Just wanted to get some positive vibes before I do all that plumbing
21:16:29 Splitting up the work into more calls makes for better error recovery and retrying and whatever you may need to do
21:16:32 IMO
21:16:53 sorry, I missed stuff, conductor is gonna start making all the RPC calls to compute still right?
21:16:55 I'm fine with it on the backend... but not in the API :)
21:17:01 alaski: sounds like you're good then
21:17:10 comstud +1 hoping migrate improves due to that
21:17:12 (ie, not when a caller is blocking)
21:17:15 dansmith: cool
21:17:24 just an fyi, i'm planning on submitting a blueprint to support config drive and attach/detach volumes for the powervm driver sometime here (hopefully next week)
21:17:25 johnthetubaguy: conductor to compute manager
21:17:37 alaski: cool, all good
21:17:47 okay, so we're good on that it sounds like
21:17:50 +1
21:17:53 I'm good
21:18:05 dansmith: 'for now'
21:18:07 mriedem1: did you have anything more to say about that, or just a warning to the group?
21:18:14 just giving a warning
21:18:24 comstud: right, until you change your mind. it's always implied
21:18:27 mriedem1: okay
21:18:29 correcto
21:18:31 any other blueprint discussion?
21:18:54 can i bring api-validation up?
21:18:59 sure
21:19:35 now we are discussing WSME or jsonschema for validation.
21:19:45 neat
21:21:02 https://review.openstack.org/#/c/25358/
21:21:23 thank you. I couldn't find it.
21:22:04 ken1ohmichi: I don't think we are getting WSME in Nova for a while? afaik no one is working on it.
21:22:17 In these patches, WSME is not used for nova-v3-api, so I'd like to discuss whether jsonschema should be used for nova-v3-api.
21:22:38 cyeoh: yes, right.
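For context on the api-validation exchange above: the proposal being discussed is to validate v3 API request bodies with jsonschema rather than WSME. The sketch below is only a rough illustration of that general approach using the standalone jsonschema library; the schema, the validate_body helper, and the keypair example are illustrative assumptions, not the actual patch under review (25358).

```python
import jsonschema

# Hypothetical schema for a keypair-create request body (illustrative only).
create_keypair_schema = {
    "type": "object",
    "properties": {
        "keypair": {
            "type": "object",
            "properties": {
                "name": {"type": "string", "minLength": 1, "maxLength": 255},
                "public_key": {"type": "string"},
            },
            "required": ["name"],
            "additionalProperties": False,
        },
    },
    "required": ["keypair"],
    "additionalProperties": False,
}


def validate_body(body, schema):
    """Reject a malformed request body before any controller code runs."""
    try:
        jsonschema.validate(body, schema)
    except jsonschema.ValidationError as exc:
        # In a real API layer this would become an HTTP 400 response.
        raise ValueError("Invalid request body: %s" % exc.message)


# Well-formed body passes; the commented-out call would raise.
validate_body({"keypair": {"name": "mykey"}}, create_keypair_schema)
# validate_body({"keypair": {}}, create_keypair_schema)
```

The appeal of this style is that each API extension declares its expected input declaratively, and bad requests are rejected with a clear error before hitting controller logic.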
21:24:02 is there some discussion to have here?
21:24:53 any other blueprint items to bring up?
21:25:23 alright
21:25:25 #topic bugs
21:25:32 we have those too!
21:25:46 is this nova meeting? is it still active? how do I make a comment/discuss a topic?
21:25:54 dansmith: https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup needs a review
21:26:07 I don't have the link to the bug plot, does anyone have it handy?
21:26:10 or two or three
21:26:14 58 new bugs right now
21:26:15 whenry_: yes, it's the right time to talk about the qpid leak
21:26:48 #link https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup
21:27:08 * whenry_ has identified the qpidd leak problem. it's a bad nasty one. tross and I have been looking at it and we think we have fixed it and simplified dramatically
21:27:31 whenry_: okay, is there a link to a review?
21:27:57 dansmith, I don't have a link right now. we are still testing but it looks promising.
21:28:07 where do I post the link for review when it's time?
21:28:15 okay, is this different than the one alaski mentioned a minute ago?
21:28:22 yes
21:28:35 okay
21:28:36 alaski's qpid bug was the 65535 limit on a message
21:28:40 gotcha
21:28:43 this one is qpid leaking lots of exchanges
21:29:04 okay
21:29:08 whenry_: is https://review.openstack.org/#/c/29617/ helpful or unrelated?
21:29:23 do we need to discuss it? sounds like until it hits gerrit, there's not much else to do, is that right?
21:29:25 (It at least lets us turn durable on and off for qpid like we can for rabbit.)
21:29:32 dripton: 65535 limit on what?
21:29:44 size of a message sent across qpid
21:29:55 there is no such limit
21:29:59 see commit 781a8f908cd3e5e69ff8b88d998fa93c48532e15 from yesterday
21:30:04 I think it was just the size of a string within a message, but not exactly sure
21:30:05 dripton: it's size of a single string, IIRC
21:30:09 right
21:30:12 dripton: there's no such limit unless you're carrying the content in a header
21:30:20 rather than the message body
21:30:55 well, the commit went in yesterday, so you're a bit late, but feel free to go back and see if there's a better way.
21:31:55 dripton, who was the "feel free" comment addressed to?
21:32:13 dragondm, is this re the string size limit?
21:32:21 you and tedross. Anyone who thinks that wasn't a real qpid problem that alaski fixed.
21:32:27 dripton, is this re the string size limit?
21:32:34 whenry_: right
21:32:49 dripton, ack I'll let tedross look at that
21:32:50 well, I didn't fix it. I just copied the fix from oslo
21:33:45 okay, so I think we're good on these two issues for now, anything else?
21:34:03 so re https://review.openstack.org/#/c/29617/ when do we discuss this?
21:34:14 jog0: shall we get back to your blueprint link in open discussion?
21:34:19 what use case is begging for durable?
21:34:45 dansmith: I hope there isn't too much to discuss about it, just to review
21:34:48 whenry_: I think the leak was begging for a way to turn durable off
21:34:51 Hrrrm?
21:35:05 jog0: okay
21:35:10 dripton, oh well that's fixed now ;-) when we send the code for review
21:35:15 dansmith: the only thing related to bugs i'd bring up is asking for help from core reviewers to check this out, i think it's ready: https://review.openstack.org/#/c/30694/
21:35:26 dragondm, sorry about that .. go back
21:36:22 okay, anything else on bugs?
21:36:34 heh, if we're soliciting reviews, I've got one: https://review.openstack.org/#/c/30036/
21:37:04 everyone has a pet review, let's bring up critical bugs needing discussion here and save the others for the end, okay? :)
21:37:15 sorry.
21:37:22 #topic subteam reports
21:37:30 db
21:37:31 who has a subteam report to make?
21:37:37 scheduler
21:38:04 xenapi: smokestack (was) back testing on XenServer 6.1 EOM
21:38:09 okay, dripton: go for db
21:38:13 johnthetubaguy: nice :)
21:38:32 DB meeting failed to happen again. Need a new meeting time that's more Europe-friendly since that's where everyone is.
21:38:39 But we got a bunch of good patches in
21:38:50 And maurosr fixed the postgresql gate failure, which is awesome.
21:38:52 that's all
21:38:55 dansmith: credit to danprince for his good work
21:39:12 johnthetubaguy: sorry, you said EOM, so you have to save it until next week :)
21:39:17 dripton: okay, cool
21:39:30 n0ano: scheduler, go for it
21:39:33 dansmith: lol, oK
21:39:38 ironic
21:39:51 long discussion on isolation filter, hope to coalesce (sp) 3 BPs into one ...
21:40:18 jog0 brought up issue of BlueHost creating a 16K OpenStack cluster and had to throw out the scheduler (didn't scale)...
21:40:31 expect to see a lively discussion over that issue at the meeting next Tues.
21:40:44 that's about it, read the IRC log for details
21:40:49 hah, okay, cool
21:41:04 hartsocks: vmware
21:41:19 So, we're starting to get some motion on blueprints.
21:41:29 There are several blueprint-related patches up for review.
21:41:54 We're having some in-depth design discussions on some of these b/c a few conflict with each other.
21:42:19 Also, reviews are taking a long time.
21:42:59 We're looking at getting together a group of people who are really good with VMwareAPI to kind of "pre-review" these patches ahead of the core team; that way y'all can focus on the OpenStack bits of the patches.
21:43:29 That… and we've got guys in about 5 timezones now… so yeah. That's fun.
21:43:50 That's it for us. See the Wiki for logs.
21:45:04 welcome to the club :)
21:45:04 it's really scary when you get five guys in six timezones, but..
21:45:04 okay, cool
21:45:04 devananda: ironic
21:45:10 hi!
21:45:26 wanted to say thanks to dansmith - we've ported some of his work with unified objects
21:45:42 and are moving with that now, hoping to see it get into oslo :)
21:45:46 woo!
21:46:07 other than that, i think there's not a lot of nova-facing news.
21:46:32 [eol]
21:46:37 heh, okay, cool
21:46:50 comstud: you want to (or want me to) talk about objects a little?
21:46:50 oh!
21:47:00 i nearly forgot - one other thing.
21:47:17 gherivero is porting a lot of the image manipulation from nova to glance-client
21:47:21 * devananda looks for the link
21:47:54 #link https://review.openstack.org/#/c/31473/
21:48:08 not sure how directly that affects you guys, but worth knowing about
21:48:14 really done this time :)
21:48:20 cool
21:48:31 well, I'll talk about objects real quick then
21:48:47 so, we're making lots of progress on getting objects into the tree
21:49:07 well
21:49:11 we've got a lot of base stuff integrated and we keep refining that with the patches to actually start using them floating ever so slightly above the cut line
21:49:12 we need reviewers.
21:49:17 other than me
21:49:20 and dan
21:49:21 :)
21:49:24 heh, yes :)
21:49:26 https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/unified-object-model,n,z
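The unified objects work mentioned here ties back to the "driver info object" idea dansmith floated earlier in the meeting: compute could hand back an object whose methods either run locally on the caller's side or get remoted back to the compute node, so driver-specific but host-independent logic (like MAC generation) does not force an extra RPC round trip. Below is only a toy sketch of that pattern, not nova's actual object code; DriverInfo and ComputeRPCStub are made-up names for illustration.

```python
class ComputeRPCStub(object):
    """Stand-in for a conductor->compute RPC client (illustrative only)."""
    def call(self, method, **kwargs):
        print("RPC call back to compute: %s(%s)" % (method, kwargs))
        return "52:54:00:12:34:56"


class DriverInfo(object):
    def __init__(self, mac_generator=None, rpc=None):
        # If the virt driver's logic is host-independent, compute can ship
        # the generator with the object and no RPC round trip is needed.
        self._mac_generator = mac_generator
        self._rpc = rpc

    def get_mac_address(self, instance_uuid):
        if self._mac_generator is not None:
            # Runs locally in the caller (e.g. conductor), no RPC traffic.
            return self._mac_generator(instance_uuid)
        # Otherwise every call goes back to the compute node.
        return self._rpc.call("get_mac_address", instance_uuid=instance_uuid)


# Local case: conductor gets the behaviour once, then never calls back.
local = DriverInfo(mac_generator=lambda uuid: "fa:16:3e:00:00:01")
print(local.get_mac_address("fake-uuid"))

# Remoted case: each lookup is another RPC call to compute.
remote = DriverInfo(rpc=ComputeRPCStub())
print(remote.get_mac_address("fake-uuid"))
```

In the actual unified-object-model patches this local-vs-remote dispatch is meant to be handled by the shared object infrastructure rather than hand-rolled per object as in this sketch.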
21:49:37 devananda: what image manip?
21:49:49 dansmith has about 100 patches up already
21:49:49 er,
21:49:53 #link https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/unified-object-model,n,z
21:49:54 heh
21:50:09 this meeting was a great break from gerrit emailing me constantly with more dansmith patches
21:50:17 haha
21:50:30 vishy: comments on that review explain it pretty thoroughly. tl;dr - nova and cinder duplicate a bunch of the code to download and manipulate glance images. ironic needs to do the same stuff
21:50:49 vishy: and we figure it's time to share that code instead of keep copying it all over ... :)
21:50:53 devananda: does that mean the code comes out of nova after it goes into glanceclient?
21:51:08 i guess i'm not clear on "manipulate"
21:51:18 dripton: i think that's the plan, yes.
21:51:23 vishy: nova's image_service code.
21:51:24 devananda: ++
21:51:37 nova/image/glance.py
21:51:53 yeah I'm wondering what manipulation it did
21:52:00 dripton: i'm not sure if ghe is planning on doing that (would love it if any nova folks want to help out)
21:52:04 vishy: file injection
21:52:06 just curious if it overlaps with the stuff going into brick
21:52:10 i think that's just his word for create/update/delete.. which it all wraps
21:52:13 :)
21:53:30 if we need this much wrapping of glanceclient...
21:53:36 it does feel like there's a problem with glanceclient
21:53:43 or something.
21:54:06 ok i don't see anything in there for qcow or lvm handling
21:54:09 seems ok then
21:55:37 okay, anything else on subteam reports?
21:56:18 well...
21:56:35 some of us vmware folks might need to be schooled on what "ephemeral" disks are...
21:57:02 sounds like a perfect segue into the open discussion/whining portion
21:57:07 #topic open discussion
21:57:12 I keep forgetting to mention we also have #openstack-vmware
21:57:33 for *developer* discussion tho
21:57:49 Please don't send disgruntled Admins in there...
21:58:09 :-p
21:59:53 okay, time
22:00:02 #endmeeting