20:37:51 #startmeeting reddwarf
20:37:52 Meeting started Tue May 14 20:37:51 2013 UTC. The chair is SlickNik. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:37:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:37:55 The meeting name has been set to 'reddwarf'
20:38:03 #topic Update to action items
20:38:20 #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-05-07-21.05.html
20:38:34 First one is mine.
20:38:53 I haven't had a chance to look into archiving the logs yet.
20:39:05 Got pulled into working on other stuff.
20:39:14 So I'm going to re-action this.
20:39:43 #action Slicknik to look into archiving logs for rdjenkins test runs.
20:39:54 awwww
20:40:15 datsun180b: you got the second one..
20:40:15 SlickNik: i was hopin i would come back from vacay and it would be done :-P
20:40:16 Next I'm up. I managed to pull a couple of the gerrit changesets and run them with no problem on my machine, but I haven't had a chance to figure out what the delta between jenkins and my own box is
20:40:17 haha
20:40:45 Besides that, the jobs appear to be working with unprecedented consistency as of late
20:40:50 datsun180b: I've only seen it fail on the cloud instances.
20:40:56 datsun180b: and it's intermittent
20:41:07 datsun180b: Yeah, seems to be happening less of late too.
20:41:22 So its interesting, hub_cap couldn't run it on the Rackspace cloud due to the resize tests never finishing IIRC
20:41:39 It seems like we may just hit these issues in general when running on a cloud
20:42:03 maybe a longer timeout?
20:42:22 run it manually and see how long it takes
20:42:23 grapex: perhaps that's the case. I'm inclined to put this on the back burner for now and keep an eye out for it happening again.
20:42:27 maybe it is just taking longer
20:42:29 Maybe we should consider running only a subset of the real mode tests on the reddwarf Jenkins box.
20:42:35 #agree
20:42:51 grapex: i dont think thats a good idea
20:42:53 My issue is I feel the Reddwarf Jenkins box has been failing a lot of pull requests, which causes them to not get looked at and slows things down
20:42:57 Moving forward I'd love to remove as many free radicals as possible
20:43:04 We'd run everything except for the resize tests.
20:43:26 cp16net: I'm not saying we canonically get rid of them, I just don't think we have the environment to run these tests now
20:44:02 grapex / cp16net: In the near future, I want to figure out how moving to openstack-ci will affect this too.
20:44:14 It'd be interesting to know if there are issues with resize related tests in Tempest on openstack-ci
20:45:06 thats a good point
20:45:30 I think we should follow up with hub_cap / ttx / openstack_ci to figure out what the next steps would be to get integrated with devstack_vm_gate.
20:45:34 action item for someone to look into that?
20:45:59 any volunteers?
20:46:20 I can't speak for hub_cap but he had mentioned that recently
20:46:22 i wasn't real helpful except as a control group this last week
20:46:39 #action SlickNik to follow up with hub_cap / openstack_ci to see what the next steps are.
20:46:59 cool, I can follow up on it.
20:47:05 thanks for that
20:47:08 no worries.
20:47:12 let's keep moving.
20:47:23 I think that means you just got #3
20:47:24 How about a second one
20:47:31 robertmyers, you're next.
20:47:36 Wait
20:47:48 sure, grapex?
20:47:49 okay, the notification pull request is passing now
20:47:58 Let's create an action item to look into if there are resize test problems in Tempest and see if maybe we can determine a fix
20:48:45 That's a good idea.
20:49:00 grapex: do you want to follow up on that?
20:49:07 Sure
20:49:22 #action GrapeX to determine if Tempest community has similar issues with resize.
20:49:27 thanks!
20:49:42 moving on to the notifications patch.
20:49:47 back to you SlickNik for #5
20:49:54 Thanks robertmyers, it looks like it's passing.
20:50:12 I think it is ready, just needs a +2
20:50:20 or more eyes on it
20:50:36 i think we just need more eyes in general
20:50:40 #link https://review.openstack.org/#/c/26884/
20:50:46 go now
20:51:03 datsun180b: like flies?
20:51:07 well my +1 only means so much
20:51:13 compound eyes.
20:51:35 i had issues this morning signing in to review these
20:51:40 looks like it's working now tho
20:51:50 i'll look at it today/tonight
20:51:56 I +2'ed. hub_cap had some comments, so was waiting for either him / grapex to approve.
20:51:58 but yes, more eyes in general
20:52:00 #link https://review.openstack.org/#/q/is:watched+status:open,n,z
20:52:13 SlickNik: Good point
20:52:39 Just wanted to make sure we got all the comments addressed.
20:52:56 okay, #5.
20:53:05 Didn't get a chance to look into it yet.
20:53:06 Reminder- if you want something to get into trunk, and you comment on it, be sure to +1 it later! Otherwise it can look like there's concern over it and will make people not look at it. I've done the same thing myself...
20:53:41 SlickNik: Can +1's in bulk get something merged?
20:54:00 nope, grapex.
20:54:18 SlickNik: Ok, thanks
20:54:19 all right, i had my concerns
20:54:25 yeah only approval merges
20:54:42 for a merge, a patch needs at least one +2 and an "Approved"
20:55:19 That's the end of action items.
20:55:50 So what else to discuss?
20:55:52 #topic TC
20:56:00 Just walking through the agenda
20:56:15 Anything to discuss here?
20:56:19 i dont think anyone updated the agenda...
20:56:27 that was old i believe
20:56:38 hey looks like we agreed on 30min early ;)
20:56:42 cp16net: I think you're right.
20:56:43 btw... we are incubation
20:56:46 :-P
20:56:51 welcome hub_cap.
20:56:57 cp16net: yay! :P
20:57:00 hub_cap: yeah it was a surprise to me as well
20:57:10 moving on
20:57:24 hub_cap: I was momentarily surprised, but am used to constantly learning things late and so quickly got over it and accepted the time as the new reality.
20:57:28 hub_cap any updates on next steps for incubation?
20:57:32 not yet
20:57:43 live in the now
20:57:51 exactly
20:57:59 #topic OpenVZ
20:58:04 sry i just got back from lunch, working on some super secret stuffs ;)
20:58:15 hub_cap: perl rewrite?
20:58:17 yay openvz
20:58:23 go?
20:58:35 Go... so hot right now. Go.
20:58:41 no worries. Just got done with the action items and working through the rest of what looks like last week's agenda. :P
20:58:47 currently I'm merging in the migration code we wrote internally and will be releasing it into the wild
20:58:54 it's in my public repo
20:58:57 hold for link
20:59:10 #link https://github.com/imsplitbit/nova/tree/openvz_support
20:59:13 thx
20:59:29 imsplitbit: This needs some ascii art on the README
20:59:43 #agreed
20:59:43 the code is currently merged in but I need to add more unittests for the migration code
20:59:46 #agreed with grapex
20:59:50 and then test in my lab
20:59:59 then add ascii art to the README
21:00:12 should be done by EOB monday
21:00:13 imsplitbit: So will this code as time goes on need to be constantly kept up to date with Nova trunk?
21:00:13 ok then we can put the stamp of approval on it
21:00:28 grapex: well I think we just tag releases
21:00:29 I know at one point there was an idea of making it only unique files with a few patches to existing files
21:00:38 this one will be for havanna
21:00:41 havana
21:00:42 imsplitbit: yea that sounds like a good idea
21:01:06 It should be able to be backported to grizzly
21:01:14 but right now it's 100% bleeding edge
21:01:26 awesome
21:01:28 Cool
21:01:29 gotcha.
21:01:32 next up
21:01:45 #topic Jenkins
21:02:01 Things seem much better in rdJenkins world.
21:02:31 It would seem so, but I'm still keeping an eye on it
21:02:33 Did anything change, redstack script or tests related, to cause that?
21:02:42 Most builds seem to pass, and no more false positives.
21:03:31 Well, we nailed down the right regex to use, and Matty made some needed fixes to the Jenkins cloud instance plugin.
21:03:41 I don't know about "no more", but it seems they're greatly reduced compared to say two weeks ago
21:04:04 But even last week I had something get thrown to "abandoned" because rdjenkins zapped it and a week passed
21:04:15 datsun180b: What day was that?
21:04:16 I did notice that rdjenkins stopped working from :8080 and seems to have gone to :80
21:04:22 datsun180b: I haven't seen one in two weeks. If you see one, let me know.
21:04:30 It's only SSL right now.
21:04:33 Well I woke it up yesterday, let me find the link
21:04:36 So 443, I believe.
21:04:56 #link https://rdjenkins.dyndns.org
21:05:04 #link https://review.openstack.org/#/c/28061/ for example
21:05:17 All I did was wake it up and it all passed again
21:06:00 So whatever you're feeding jenkins now, keep it up
21:06:29 heh, will do
21:06:41 anything else Jenkins related?
21:06:42 steroids?
21:07:01 actually powerthirst.
21:07:06 datsun180b: So are these failures resize?
21:07:12 http://www.youtube.com/watch?v=qRuNxHqwazs
21:07:22 i think it was a failure to upload to glance lacking a table called 'reddwarf'
21:07:26 Because if not I'd rather not commit to that action item if the issues are not related to the resize tests.
21:07:37 oh datsun180b: that was when devstack broke us.
21:07:51 SlickNik: It's got what tests need. :)
21:07:52 gotcha, sounds like we found the loose bearing
21:08:19 yup, I can send you the patchset that fixed it, fyi.
21:08:23 after the meeting.
21:08:33 you know where to find us
21:08:35 okay, let's move on.
21:08:39 yup!
21:08:53 #topic Backup Status
21:09:09 Got a bunch of comments from you guys.
21:09:27 Working on addressing them and uploading a new patchset.
21:09:32 most of mine were just nits, but it looks good
21:09:34 So stay tuned for more on that.
21:09:36 Cool
21:09:59 nothing more on that.
21:10:12 #topic Notification Plan
21:10:43 Let's get robertmyers' patch merged.
21:10:54 yes
21:11:05 and start talking exists events
21:11:05 working on getting the exists event stood up for our billing team to test
21:11:25 i'll +1 it legit, it's been in an open tab all day
21:11:38 yup. thanks robertmyers and juice for the awesome work on this.
21:11:39 juice: are you doing a public code or private?
21:12:23 public
21:12:46 just need to get patch submitted so saurabh can deploy to a test env. here
21:12:56 and then I'll iterate over it to improve
21:12:57 I believe the idea is to do something along the lines of the way nova does it.
21:13:07 in taskmanager?
21:13:30 yes, in taskmanager...
21:13:33 task manager would be the best place to put it
21:13:33 juice?
21:14:18 okay any more notifications related info?
21:14:58 ...
21:14:59 #topic Rootwrap
21:15:12 I don't think anyone has looked into this.
21:15:22 not i, said the Datsun
21:15:25 SlickNik: is there a blueprint on exists events?
21:15:48 sorry late to that party
21:15:55 cp16net: I don't think so. juice?
21:16:24 hub_cap: We can haz blu print, plz?
21:16:24 #link https://blueprints.launchpad.net/reddwarf/+spec/reddwarf-notifications
21:16:27 nope
21:16:37 oh wait there is one! ;)
21:16:40 heh
21:16:41 oh, thanks robertmyers
21:16:56 Looks like it's part of the original notifications blueprint.
21:17:22 So let's keep on with updating that.
21:17:36 ok
21:18:13 As for the rootwrappah, I think we're gonna pass on that one till we have some more bandwidth to work on it.
21:18:44 So it might be a while before we tackle it.
21:19:40 moving on
21:19:56 #topic Actions / Action Events
21:21:09 I believe we de-prioritized that for the moment based on all the actions that came out of incubation...
21:21:38 SlickNik: I think hub_cap has been busy, so no news on that front.
21:21:50 yup, that was my understanding.
21:22:00 def grapex, no news
21:22:04 so that brings us to...
21:22:07 ill be finishing it soon, prolly nxt wk
21:22:20 #topic Meeting Time
21:22:44 ooh ooh
21:22:45 so....
21:22:50 can we talk about ephemeral storage
21:22:57 yes
21:23:07 1. what is it
21:23:23 2. make a blueprint?
21:23:29 ^^
21:23:33 sorry.. i assumed it was mentioned last meeting, which i didn't attend
21:23:33 or let us help you make it :)
21:23:43 we can give you a blank BP if you need it
21:23:55 there is a bug https://bugs.launchpad.net/reddwarf/+bug/1175719
21:24:31 we can convert that bug into a bp if you want
21:24:53 Please do
21:25:33 esmute: I've got a question- how come on the models, we're renaming what is called "ephemeral" by Nova to "storage?": https://review.openstack.org/#/c/28751/3/reddwarf/flavor/models.py
21:25:42 here in HP we need not only support for Volumes and root partition, which is controlled by the reddwarf_volume_support flag
21:26:00 esmute: can you use this plz
21:26:01 https://blueprints.launchpad.net/reddwarf/+spec/ephemeral-storage-volume
21:26:05 we also need to have support for storing mysql data in ephemeral partitions
21:26:55 grapex: we thought that the term "ephemeral" may be too confusing for customers... and we didnt want to use "disk" because it overlapped with the "disk" from nova
21:27:01 so we decided on "storage"
21:27:16 will do cp16net
21:27:17 So ephemeral is an alternative to volume or root storage
21:27:17 grapex: also because — like volumes — all of the ephemeral drive will be available as mysql 'storage' for the database instance.
21:27:18 esmute: So this is funny- hub_cap, SlickNik, your thoughts on this might be helpful-
21:27:26 we originally made Flavors look like Nova flavors on purpose
21:27:31 as a convenience to the customers
21:27:38 But the TC seemed to not like that
21:27:56 however, renaming flavor attributes seems confusing to me. Clearly it's a Nova flavor, so why rename the fields?
21:28:09 i dislike the rename of fields
21:28:22 maybe we should have a "moved" and point to the nova install if they try to get flavors
21:28:30 i dislike changing the fields' purpose
21:29:08 I'm not sure I know exactly where I stand on this one yet.
21:29:10 so you guys like to leave it as "ephemeral"?
21:29:28 like the reddwarf_volume_support was true/false and now you repurpose it to be 3 different values?
21:29:34 I'd prefer to leave it as "ephemeral", but honestly I need to research a bit more on ephemeral flavors before I know what I think.
21:29:52 so you are mounting something from the host i assume for this?
21:30:02 Might need to read up / think a bit more about this one.
21:30:14 cp16net: yes.. because originally, there were only two options.. volumes/root partition
21:30:25 no it was volumes on or off
21:30:27 now we wanted to support another option.. ephemeral
21:30:33 yes..
21:30:34 hub_cap: it's auto mounted as vdb when it's part of the flavor.
21:30:51 we might want to change this to some sort of strategy
21:30:52 when you boot an instance with ephemeral, a new partition is made available..../dev/vdb
21:30:57 esmute: Wouldn't ephemeral just mean the flavor is ephemeral and volume support is off?
21:31:07 reddwarf.compute.volume.Ephemeral etc etc
21:31:08 which can be mounted and used the same way as volume-support
21:31:14 and it can stay "disk"
21:31:19 It sounds like the scope of ephemeral storage is bigger than our meeting allotment
21:31:23 u have to ask yourself this
21:31:33 does the customer care whether they get ephemeral or some other "disk"
21:31:33 Ok, I get it- the ephemeral part is for the unused disk part of flavors
21:31:41 which maybe we now need to be aware of in the Reddwarf API
21:31:48 Ok, this is a huge philosophical can of worms. :)
21:31:49 or is this just for the people that stand up and manage the service
21:31:55 grapex: no. because originally when you had reddwarf_volume_support == false, the data was stored in the root partition
21:32:01 to me, the customer cares about "local" vs "remote"
21:32:20 technically the disk _in_ the image is ephemeral right?
21:32:26 I'm concerned that they'll see "storage: 0" in a 4G flavor and pick up the big red phone
21:32:32 hub_cap: so you are saying "local" == ephemeral?
21:32:37 ya
21:32:41 technically yes
21:32:46 when u delete an instance esmute, does it not go away?
21:32:52 but the strategy for how you partition it is up to the implementors
21:33:01 youre saying u want 2 partitions instead of 1
21:33:02 well if it reboots it does not disappear
21:33:03 thats all
21:33:14 sure neither does the vm cp16net
21:33:36 do we foresee a case where we would want to store the data in the root partition?
21:33:47 ie /dev/vda
21:34:02 hub_cap: yes to your question
21:34:20 right what im getting at is
21:34:23 does the customer care
21:34:36 if they see flavor disk=100 or ephemeral=100 does it make a diff to them
21:34:53 or do they see disk=100 and the people running it in the background say, lets do a 100g vol called vdb
21:35:20 If I'm reading these changesets right though, if we're not using ephemeral then we're going to see "storage: 0" in our flavors
21:35:33 hub_cap: but these are different in nova. And nova flavors expose them separately.
21:35:34 confirm / deny
21:35:36 Any chance we can take this discussion offline?
21:36:02 datsun180b: Yeah, I agree. That's probably a good case for renaming storage to ephemeral.
21:36:03 i second that
21:36:06 datsun180b: i am about to change that.. if we are not using ephemeral, we wont display it
21:36:06 grapex: +1
21:36:21 esmute: that is a good solution too.
21:36:22 good to hear on both counts
21:36:26 Yup ++ to grapex
21:36:33 if that's the case I'll scrap these comments
21:36:42 one question i have
21:36:47 ga
21:36:51 Let's take this offline.
21:36:59 Looks like we have some more to discuss here.
21:37:05 back to #reddwarf then?
21:37:22 Sure.
21:37:31 for you guys, with local storage, it doesnt matter if its stored in root or in ephemeral partition right?
21:37:34 #topic Open Discussion.
21:37:49 as long as it is "local"
21:38:10 i like cheese
21:38:10 Any other items for discussion?
21:38:24 hub_cap: pepperjack is my fav
21:38:41 Tillamook Smoked Extra Sharp Cheddar
21:38:46 cheddar or die
21:38:48 Let's start the academy aware music
21:38:52 *award
21:39:07 Wensleydale for the win...
21:39:17 okay, sounds good.
21:39:17 i think we are done.
21:39:23 Alright, this was a good one.
21:39:24 #end meeting
21:39:24 salty blue cheese FTW
21:39:27 thanks everyone
21:39:29 #endmeeting
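
For reference, juice's plan (around 21:13) to emit the exists event from the taskmanager "along the lines of the way nova does it" maps roughly onto the notifier API in the openstack.common code Reddwarf carried at the time. This is a minimal sketch under that assumption; the module paths and the helper names (_instance_payload, publish_exists_events) are hypothetical, not taken from the pending patch.

```python
# Hedged sketch only: a periodic "exists" usage event emitted from the
# taskmanager, in the style of nova's notifications. Module paths follow
# the openstack.common code Reddwarf vendored at the time (an assumption);
# helper names here are illustrative.
from reddwarf.openstack.common import timeutils
from reddwarf.openstack.common.notifier import api as notifier_api


def _instance_payload(instance):
    """Build the payload the billing team would consume."""
    return {
        'instance_id': instance.id,
        'tenant_id': instance.tenant_id,
        'flavor_id': instance.flavor_id,
        'created_at': timeutils.isotime(instance.created),
        'audit_period_ending': timeutils.isotime(),
    }


def publish_exists_events(context, instances):
    """Emit one 'reddwarf.instance.exists' event per active instance."""
    for instance in instances:
        notifier_api.notify(context,
                            'reddwarf.taskmanager',   # publisher id
                            'reddwarf.instance.exists',
                            'INFO',
                            _instance_payload(instance))
```

Whichever notification driver is configured (log, rabbit, etc.) then decides where the event actually goes, which is what lets the billing team consume it from a test deployment.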
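
esmute describes ephemeral storage (around 21:30) as a second partition, /dev/vdb, that shows up when the flavor includes ephemeral space and "can be mounted and used the same way as volume-support." Below is a minimal sketch of that guest-side setup; the device path, filesystem, and mount point are assumptions, and the real guestagent change may handle this differently (fstab entries, ownership, and permissions are omitted here).

```python
# Hedged sketch: formatting and mounting the ephemeral device for mysql
# data, analogous to what the guestagent does for volume-support. DEVICE,
# FS_TYPE, and MOUNT_POINT are assumptions, not values from the change
# under review.
import subprocess

DEVICE = '/dev/vdb'           # ephemeral disk exposed by the flavor
FS_TYPE = 'ext3'
MOUNT_POINT = '/var/lib/mysql'


def prepare_ephemeral_storage():
    """Format the ephemeral disk and mount it where mysql keeps its data."""
    subprocess.check_call(['sudo', 'mkfs', '-t', FS_TYPE, DEVICE])
    subprocess.check_call(['sudo', 'mount', '-t', FS_TYPE, DEVICE, MOUNT_POINT])
```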
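
On datsun180b's "storage: 0" concern and esmute's reply that "if we are not using ephemeral, we won't display it": one way to express that is a flavor view that simply omits the field when the underlying Nova flavor has no ephemeral space. This is an illustrative sketch only; the class and attribute names are assumptions, and the real models are in the changeset discussed above (https://review.openstack.org/#/c/28751/).

```python
# Hedged sketch: only expose ephemeral storage in the flavor view when the
# underlying Nova flavor actually provides it, so customers never see a
# confusing "storage: 0". Names here are illustrative.
class FlavorView(object):

    def __init__(self, flavor):
        self.flavor = flavor

    def data(self):
        view = {
            'id': self.flavor.id,
            'name': self.flavor.name,
            'ram': self.flavor.ram,
        }
        # Nova calls this field "ephemeral"; omit it entirely when the
        # flavor has none rather than reporting a zero.
        if getattr(self.flavor, 'ephemeral', 0):
            view['ephemeral'] = self.flavor.ephemeral
        return view
```

Omitting the field for flavors that have nothing to report also sidesteps most of the naming debate, whichever name ("ephemeral" or "storage") eventually wins.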