14:01:00 #startmeeting sahara
14:01:01 Meeting started Thu Jun 4 14:01:00 2015 UTC and is due to finish in 60 minutes. The chair is SergeyLukjanov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:04 The meeting name has been set to 'sahara'
14:01:14 #link https://wiki.openstack.org/wiki/Meetings/SaharaAgenda
14:01:26 #topic sahara@horizon status (crobertsrh, NikitaKonovalov)
14:01:32 #link https://etherpad.openstack.org/p/sahara-reviews-in-horizon
14:01:39 is that list still up to date?
14:02:12 SergeyLukjanov: the changes on review are still there
14:02:21 I've added a few bug fixes and enhancements, on review of course.
14:02:31 I did talk with the horizon folks.....
14:02:31 I mean, hi folks :) (wrong window)
14:02:52 They are thinking that for L, a single core +2 combined with enough +1s from us will get a patch merged.
14:03:25 crobertsrh, it'll be great IMO
14:03:26 nice
14:03:27 crobertsrh: that's great
14:03:35 crobertsrh, and add you to the core team ;)
14:03:45 heh
14:04:05 +1
14:04:22 hi all, these days I've found that the gate-sahara-neutron-direct-spark-aio test always fails. Is it a bug? Has it been resolved?
14:04:48 alazarev linked to a bug patch for that issue, in one of the reviews
14:05:15 https://review.openstack.org/#/c/187155/
14:06:42 #topic News / updates
14:06:55 * SergeyLukjanov recovering from vacation the previous week
14:07:23 SergeyLukjanov, any update on the Sahara-Ironic integration blog post / patches?
14:07:24 That is what work is for, right? Recovering from vacation?
14:07:49 tmckay, the blog post is on review for the blog :(
14:07:55 i've been researching the improved secret spec more, and also working on a spec for keystone session objects. i have a few questions about the latter, for the group.
14:08:12 SergeyLukjanov, well, review is good! :) Almost done then.
14:08:23 tmckay, we've not yet worked on the patches, we hope to attach volumes using cloud-init
14:08:45 rewrote the heat engine to use ResourceGroup, please review https://review.openstack.org/#/c/186543/
14:08:47 still working on auto-tuned hadoop configs; the vanilla one is ready for review
14:08:54 #link https://review.openstack.org/#/c/177280/
14:08:54 I'm now working on a few specs we were discussing at the summit, hopefully I will start to publish them soon
14:09:33 There are some NetApp guys near me physically, we are going to meet face to face and brainstorm about Sahara/Manila integration. There was a Manila talk at Summit that mentioned Sahara integration, so I followed up with them. I'll let you know how it turns out.
14:09:47 i just started working on the multiple clusters change, we need to discuss some issues before i can start implementing
14:09:58 tmckay, cool, looking forward to the info
14:10:13 I am wondering if we can get some security benefits from it (multi-tenancy, file shares scoped to users maybe) and also some file transfer efficiency (job binaries mounted in the cluster)
14:10:52 and maybe hdfs shares (intel wrote a Manila hdfs driver I think) can provide a simple way for users to set up shared libraries for Hadoop and host them in a long-running hdfs share
14:11:03 that could be linked into multiple clusters. Things like that.
14:11:20 tmckay, cool
14:11:26 #topic Open discussion
14:11:33 tmckay, sounds interesting
14:11:42 can we talk about keystone session objects for a few minutes?
14:12:01 Yes, we did an hdfs share in Manila.
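
For context on the ResourceGroup rewrite mentioned above, here is a minimal sketch (not Sahara's actual template) of how OS::Heat::ResourceGroup stamps out identical instances from a single resource definition; the image, flavor, and keypair names are placeholders.

    # Minimal sketch, not Sahara's real heat template: a HOT template built as a
    # Python dict, using OS::Heat::ResourceGroup to create "count" identical
    # servers from one resource definition, then launched via python-heatclient.
    from heatclient import client as heat_client

    template = {
        "heat_template_version": "2014-10-16",
        "resources": {
            "workers": {
                "type": "OS::Heat::ResourceGroup",
                "properties": {
                    "count": 3,  # number of identical servers to create
                    "resource_def": {
                        "type": "OS::Nova::Server",
                        "properties": {
                            "name": "cluster-worker-%index%",  # %index% is expanded per member
                            "image": "sahara-vanilla-image",   # placeholder image
                            "flavor": "m1.medium",             # placeholder flavor
                            "key_name": "my-keypair",          # placeholder keypair
                        },
                    },
                },
            }
        },
    }

    def launch_stack(session, stack_name="sahara-cluster"):
        # 'session' is an authenticated keystone session object
        heat = heat_client.Client("1", session=session)
        return heat.stacks.create(stack_name=stack_name, template=template)
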
14:12:30 elmiko, I don't see sreshetnyak and apavlov online now
14:12:30 Working on the interface map; pretty much down to tests and review revisions at this point (client and Horizon changes to follow).
14:12:55 SergeyLukjanov: ok, i'll keep working on the spec. maybe we can talk about it next week
14:13:04 egafford, I am trying to run against that but I hit a snag with my neutron based stack :(
14:13:15 but, I am looking at it. Looks good so far
14:13:23 And we can discuss how to use it in sahara.
14:13:24 tmckay: Interesting; I'm also using Neutron. We should talk.
14:14:05 tmckay: (Also, I'm sure there are bugs yet, as I posted a working happy path WIP without even vaguely adequate testing, and knew it.)
14:14:10 weiting, certainly
14:14:27 egafford, ack on WIP, I understand
14:17:11 SergeyLukjanov, are you working on a spec for moving plugins out-of-tree?
14:17:18 tmckay, yup
14:18:02 on my list now - resources ACL, extract plugins, extract scenario tests
14:18:36 okay. We talked about that a little more, and what impact supporting multiple versions of plugins against multiple versions of Sahara might have on the CI matrix. How do we test it all? Who tests it?
14:18:41 I've got the most powerful jetlag this time after a few days at home :(
14:18:45 heh
14:19:07 but we can ask such questions on review ^^
14:19:48 tmckay, it's one of the most important questions, I'd like to start the spec with basic ideas and grow it to the point where it covers all the questions
14:19:50 yea, i would imagine those will be addressed in the spec
14:19:55 because it's a very big and important thing
14:20:02 SergeyLukjanov: +1
14:20:16 so far, I think it could be done with a minimal set of tests rather than all plugins
14:20:23 for the stable branches
14:20:43 we don't need to test all plugins, we need to check that the api isn't changed
14:22:56 probably it's a good time to chat about multiple cluster creation
14:23:00 tellesnobrega, ^^
14:23:04 sure
14:23:06 okay. Looking forward to the spec :) thanks
14:23:22 we need to define a couple of things
14:23:46 first is to decide whether we implement the change on the sahara side or on the horizon and client side
14:24:29 tellesnobrega, sorry, what exactly does "multiple cluster creation" mean? I can already create multiple clusters :)
14:24:38 I think I need more details
14:24:52 ok
14:24:57 Any feature that we have should eventually be mirrored in the client lib and horizon
14:25:16 the idea is to add a field that lets the user select how many clusters with that configuration they want to create
14:25:55 ah, ok. So it's like pushing the launch button multiple times. We need a naming convention
14:26:04 Is there a real use case for such a feature?
14:26:22 in nova this is implemented on the server side, the REST API has a "count" field; as I understand it, the proposal is to do the same in Sahara
14:26:23 i can think of research labs (like the one i work in)
14:26:35 alazarev, exactly
14:26:49 crobertsrh, the motivation would be fewer clicks and forms
14:26:59 sometimes we need to create more than one cluster to process different sets of data and so on
14:27:24 tellesnobrega, and the clusters will be exactly the same?
14:27:26 same image, same ssh key, etc. just a different name
14:27:32 yes
14:27:53 Possible to do this with just a horizon change, if desired.
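
To make the proposal above concrete: the "count" field below is hypothetical (it does not exist in Sahara's API today); this only sketches what a nova-style cluster-create request could look like if Sahara adopted it. The endpoint, token, and IDs are placeholders.

    # Hypothetical sketch: a Sahara v1.1 cluster-create request with a
    # nova-style "count" field. "count" is the proposed addition; everything
    # in angle brackets is a placeholder.
    import requests

    SAHARA_URL = "http://controller:8386/v1.1/<project-id>"  # placeholder endpoint
    TOKEN = "<keystone-token>"                               # placeholder token

    cluster_request = {
        "name": "analytics-lab",
        "plugin_name": "vanilla",
        "hadoop_version": "2.6.0",
        "cluster_template_id": "<template-uuid>",
        "default_image_id": "<image-uuid>",
        "user_keypair_id": "lab-key",
        "neutron_management_network": "<network-uuid>",
        "count": 3,  # proposed field: how many identical clusters to launch
    }

    resp = requests.post(SAHARA_URL + "/clusters",
                         json=cluster_request,
                         headers={"X-Auth-Token": TOKEN})
    # With count > 1 the response would naturally become a list of clusters,
    # which is one of the open API questions raised later in the discussion.
    print(resp.status_code, resp.json())
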
14:27:54 I wonder if one big cluster, with multi-tenancy (I know we don't have it), would work as well
14:28:24 it is easy to implement, the only thing I'm concerned about is the use case; I can hardly imagine a situation where multiple clusters with the same parameters are needed and this can't be automated via a script
14:28:24 tmckay: i think we might need some sort of job queueing for that though?
14:28:37 if we do it on the client side, there is an issue with how to check quotas
14:28:46 tellesnobrega, how many clusters would you like to create at the same time in your case?
14:29:58 that depends, we have different labs that use our cloud, one of them is just analytics stuff, they have lots of students needing clusters there, so sometimes plenty, sometimes 2, most of the time just 1 for sure
14:30:54 so, the good news is that it's already partially implemented - the case when count==1
14:31:08 very true
14:31:11 yes
14:31:11 :) yay, partly done
14:31:16 lol
14:31:31 count==0 is implemented too
14:31:40 alazarev, that was the POC
14:31:58 You could make an argument for count==-1 too.
14:32:21 egafford, -inf as well, yay
14:32:23 will we implement count > 1?
14:32:29 so, the last case is count==2
14:32:35 :)
14:33:09 I'm ok with this feature, the only question I have is what's the correct place to do it
14:33:36 if we'd like to check quotas then we need to do it on the server side, somewhere after the intelligent quotas check
14:33:38 and do we just add "-N" to the name given to the cluster?
14:34:13 tmckay, I think so
14:34:15 tmckay, i was thinking uuid, but -N was suggested
14:34:24 so the name isn't too big
14:34:34 -N is nice so you know they're in a series
14:34:41 +1
14:34:43 uuid could be confusing
14:34:54 what about network settings? do we want the clusters segregated? Do we need to change neutron settings for that? (not sure)
14:35:08 good question
14:35:21 just wondering if anything else on the cluster launch form has to be tweaked besides the name
14:35:33 (well, the cluster JSON really)
14:36:21 tmckay, it looks like nothing, you can create clusters with the same parameters
14:36:53 not sure, but i think the network can keep the same settings for now, if we detect problems we improve it
14:37:42 +1 for -N (ideally implemented exactly the same way it's done in nova)
14:38:00 +1
14:38:18 sounds like we are assuming that when creating multiple clusters they will all belong to the same keystone project?
14:38:43 I might suggest that we pad with as many zeros as needed for the last number in the sequence.
14:39:05 elmiko, yes
14:39:09 elmiko, yeah. All within the same tenant. otherwise you're going to have to log out and log in again in the UI.
14:39:10 (So that an alphabetic sort by name will yield the expected order, in case anyone cares.)
14:39:14 egafford, like that
14:39:20 k, just wanted to confirm
14:39:25 egafford: +1
14:39:45 although, i can't imagine needing to pad with more than 1 or *maybe* 2 0's lol
14:40:07 but, who knows...
14:40:18 i think 1 is enough, but 2 would be a safer choice
14:40:20 you never know
14:40:22 you can have up to 1000 clusters with one button push. If you need more, you can push the button again
14:40:25 yea, 2 for safety
14:40:32 tmckay: lol!
14:40:44 tmckay, +1
14:40:56 Well, I just figure, okay, you're asking for 10 clusters? 1 x "0". 349? You're crazy, but okay, you get 2 zeros. 400,000? That's ambitious. 5 zeros.
14:41:31 oh, are you saying dynamically create the padding 0s?
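
A small sketch of the dynamic-padding idea described above: pad the numeric suffix to the width implied by the requested count so an alphabetic sort keeps the clusters in order. The helper name is made up for illustration.

    # Illustration of dynamic zero-padding for the "-N" naming convention:
    # 10 clusters get a 2-digit suffix, 349 get 3 digits, and so on.
    def generate_cluster_names(base_name, count):
        width = len(str(count))  # digits needed for the largest index
        return ["%s-%0*d" % (base_name, width, i) for i in range(1, count + 1)]

    # generate_cluster_names("analytics", 10)
    # -> ['analytics-01', 'analytics-02', ..., 'analytics-10']
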
14:41:42 But just padding to 2 might be easier, and yeah, it's hard to imagine a case for 100,000 individual clusters.
14:42:04 sorry, this is serious bike-shedding at this point
14:42:05 i'm ok with dynamic as well
14:42:09 what about the API request and response? Does it break backward compatibility with older sahara versions?
14:42:11 elmiko: That was my thought, but auto-padding with a rational number also meets the use case in huge numbers of cases.
14:42:19 True.
14:42:43 sreshetnyak: not sure, sounds like it will just be an extra param in the json
14:42:55 We could use random words from a dictionary, with localization
14:43:11 sreshetnyak, if we implement it on the sahara side, that might be a point where we need to return a list of clusters
14:43:26 hey, now that sreshetnyak is here could we talk about keystone sessions for a minute? =)
14:44:03 elmiko, hi :)
14:44:10 hey!
14:44:17 we need to decide where to implement it, but we can talk about this later
14:45:00 so, i'm working through the spec for replacing the keystone Client with Session based auth. i'm thinking the first step will be to replace our usage of keystone clients, but as a follow-up we should replace all the service clients with Session usage. does that make sense?
14:45:00 elmiko, apavlov wants to work on keystone sessions
14:45:11 cool!
14:45:40 elmiko, i think you need to talk to him
14:45:51 apavlov, here?
14:47:09 oh well, i'll try and send something to the ML
14:47:25 sreshetnyak: do you know if he's working on a poc or spec yet?
14:47:56 also, should i add both of you to the spec as co-assignees?
14:48:59 elmiko, no, but he wants to start
14:49:16 sreshetnyak: ok, i'll have to talk with him
14:49:22 is he in the saratov timezone?
14:49:30 yes
14:49:32 k
14:49:36 thanks =)
14:51:11 9 mins left
14:51:18 anything else to chat about?
14:51:27 Nothing from me
14:51:58 quotas are a separate topic to discuss; right now we are retrieving the whole neutron DB for each cluster creation, and horizon does the same thing
14:52:18 just a quick question: what's the status of the HDP 2.2 plugin?
14:53:03 hdd, the first parts are already on review
14:53:07 by sreshetnyak
14:53:25 We discussed the idea of using an agent and the possibility of Zaqar adoption at Summit. It was extremely brief, and I don't think anything was decided, but I'd like to get a notion of where we are on the question of trying to move away from straight SSH for communication.
14:53:48 ok, cool, I'll check it out. I'm eager to test
14:53:58 we can discuss the multiple clusters questions that are still open later
14:54:27 (This could be worth discussing next week as a larger topic.)
14:54:34 * flaper87 drops something here: http://specs.openstack.org/openstack/zaqar-specs/specs/liberty/pre-signed-url.html
14:54:46 flaper87: :D
14:54:48 flaper87, thx :)
14:55:13 egafford, so, it's the base for the potential guest agent impl
14:56:23 SergeyLukjanov: Right; I just didn't come away with a firm picture of whether the guest agent impl is potential or planned.
14:57:11 egafford, it's a potential implementation option, but right now it looks like the best option
14:57:39 egafford, with zaqar queues per cluster and signed urls it'll give full isolation for the control traffic
14:57:46 egafford, and no need for SSH :)
14:58:08 I see. So we are moving toward a guest agent, and Zaqar is a potential route toward it. Cool; that's what I needed to know. Thanks.
14:58:20 (Potential and likely.)
14:59:09 egafford, yup, in real life - someday, hopefully sooner :)
14:59:20 we're running out of time
14:59:23 thanks folks!
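
As a rough illustration of the Session-based auth discussed above (not Sahara's actual code): build one keystoneclient session from an auth plugin and hand it to the service clients, instead of constructing keystone Client objects directly. Credentials and endpoints below are placeholders.

    # Rough sketch of Session-based auth with python-keystoneclient; all
    # credentials and endpoints are placeholders.
    from keystoneclient import session
    from keystoneclient.auth.identity import v3
    from novaclient import client as nova_client
    from heatclient import client as heat_client

    auth = v3.Password(
        auth_url="http://controller:5000/v3",  # placeholder auth endpoint
        username="sahara",
        password="<secret>",
        project_name="service",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    # One session object can back every service client, so token handling
    # and refresh live in a single place.
    nova = nova_client.Client("2", session=sess)
    heat = heat_client.Client("1", session=sess)
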
14:59:26 #endmeeting