14:01:19 #startmeeting sahara
14:01:20 Meeting started Thu Jun 18 14:01:19 2015 UTC and is due to finish in 60 minutes. The chair is SergeyLukjanov. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:23 The meeting name has been set to 'sahara'
14:01:25 o/
14:01:46 #topic sahara@horizon status (crobertsrh, NikitaKonovalov)
14:01:53 #link https://etherpad.openstack.org/p/sahara-reviews-in-horizon
14:02:15 I've no updates here since I've been focused on other things
14:02:38 But I guess not much has changed from the last meeting
14:02:52 Event log change still on review, that is sad
14:02:57 NikitaKonovalov, do you happen to know anything about moving the sahara dashboard to the contrib dir in horizon?
14:03:07 vgridnev: =(
14:03:08 Template editing changes still on review == sad
14:03:32 I haven't added any new UI patches recently, been working more on the service side of things.
14:03:33 SergeyLukjanov: haven't seen any movement there
14:03:56 NikitaKonovalov, any movement in other projects?
14:04:56 looks like no
14:05:01 sahara was just enabled for horizon integration tests https://review.openstack.org/#/c/192645/
14:05:49 oh, merged, nice; I missed the commit message
14:06:02 SergeyLukjanov: does that mean that we can have fake plugin tests
14:06:23 (another review will reenable the horizon integration tests as voting)
14:06:49 NikitaKonovalov, hm, I think so
14:06:54 I mean make the UI create a cluster with a fake plugin and check that all panels, configs, etc. are in place
14:07:00 NikitaKonovalov: I thought Horizon integration tests are for full deployment
14:07:22 NikitaKonovalov, tosky I think it should be possible now
14:07:52 We have registered a bp to integrate manila and sahara in https://blueprints.launchpad.net/sahara/+spec/edp-manila-hdfs. We are working on the spec.
14:07:58 https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/tests/test_sahara_job_binaries.py
14:08:04 Hope we can have a spec by next week
14:08:50 kchen, excellent
14:08:57 job_binaries are quite standard == easy to test
14:09:06 clusters should be more interesting
14:09:53 NikitaKonovalov, yeah, but it seems like there's no difference if we have a full devstack install
14:10:00 anything else re sahara@horizon?
14:10:29 nothing I can think of
14:10:33 nothing from me
14:10:34 so, let's move on
14:10:38 thx folks
14:10:39 #topic News / updates
14:11:50 I've been working on refactoring the sahara.utils module. There are too many things in it now, some of which are required by provisioning plugins and some not
14:11:54 i was out part of last week and at spark summit this week. i have talked with apavlov about the keystone sessions, i think we are close to a spec for that. also i've been investigating using gabbi to fuzz test a live sahara server. also, lots of cool stuff at spark summit, i think we need to improve our spark support =)
14:12:13 Finished work on the recommendations provider, really need reviews on some of my changes
14:12:15 I've been doing a little experimental work getting a Spark 1.3 cluster up and running the Zeppelin notebook stuff on top of the cluster. Looks promising so far.
14:12:21 #link https://review.openstack.org/#/q/owner:%22Vitaly+Gridnev%22+status:open,n,z
14:12:28 I'm working on the HDP 2.2 plugin
14:12:29 So the idea is to move the code from utils to where it's actually needed so the sahara-plugins split will be easier
14:13:30 NikitaKonovalov, at least in theory, if we agree to do it :)
14:13:33 i'm working on custom scenarios
14:13:43 reviewing/testing egafford's job interface mapping, I think it's almost ready for a +2 from me
14:13:46 working on the scheduler and recurring edp jobs
14:13:47 looks good
14:13:51 I'm trying to test hadoop performance with disks attached directly to the VM via the cinder driver
14:15:52 sounds like time to move on
14:16:05 #topic Liberty-1
14:16:08 maybe for open topics, but I think someone needs to work on the hdp plugin ci tests (if not already) or we should move them to non-voting for now. They break too often, and HWX is not here to fix them :)
14:16:13 so, we have liberty-1 next week
14:16:31 tmckay, esikachev is working on CI failures now
14:16:39 yay! thanks
14:17:20 #info Liberty-1 next
14:17:25 #undo
14:17:26 Removing item from minutes:
14:17:39 #info Liberty-1 next week (Jun 23-25)
14:17:44 #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
14:18:02 #info Liberty release tag for sahara will be 3.0.0
14:18:11 neat
14:18:12 so, Liberty-1 is 3.0.0b1
14:18:46 the Sahara changes were already applied in I6a35fa0dda798fad93b804d00a46af80f08d475c
14:18:55 and I'm now working on doing the same for the rest of the repos
14:19:30 so, note that the epoch should be increased :)
14:19:36 any questions about it?
14:20:35 #topic Next client releases
14:20:56 I'd like all of you to think about all potential changes to the client that should be done in Liberty
14:21:06 especially new feature additions
14:21:22 and I'd like to make a release schedule for the client
14:21:45 SergeyLukjanov, why do we need a release schedule for it?
14:21:58 SergeyLukjanov, usually it is shipped when needed
14:22:03 note: we should have the final "feature" release of the client by liberty-3
14:22:04 I have at least 2 client changes that will be needed, for editing data sources and job binaries.
14:22:10 alazarev, it's needed for us
14:22:32 at least one change adding the auto-configuration parameter to the client
14:22:34 i would like to add storm job submission to the client
14:22:37 I have one tiny one that tosky noted. default templates should be marked somehow in the template list output
14:22:51 there are two main issues I'd like to solve by listing needed client features - not missing any of them in the official Liberty client and decreasing the latency of releasing new features
14:23:09 and also if multiple clusters creation lands I would have to implement it on the client as well (if we use a new api call)
14:23:17 The Interface field on both job and job template needs to get into the client as well.
14:23:30 SergeyLukjanov: +1
14:23:30 tellesnobrega, will it be different compared to the other jobs?
14:23:40 no
14:24:03 i have to take a look to see if any changes are needed
14:24:41 sounds like we need an etherpad for this (did I miss one above?)
14:25:18 tmckay, +1
14:25:24 https://etherpad.openstack.org/p/sahara-liberty-client
14:25:38 yay! good work SergeyLukjanov. Fast too :)
14:26:10 Please, add links to the specs / CRs, put your name as a contact, and if you have an idea when the patch will be ready, put the date as well.
14:26:26 #link https://etherpad.openstack.org/p/sahara-liberty-client
14:26:31 just for the logs =)
14:26:48 tmckay, I was creating it while you wrote the idea :)
14:26:52 elmiko, thx
14:26:58 parallel, excellent
14:27:06 tmckay, yeah, like hadoop :)
14:27:20 so, it'll help a lot to plan the client release and merge all of the stuff in time
14:27:22 We Are a Cluster
14:27:32 +1
14:27:33 yay!
14:27:49 I'll write a follow-up to the mailing list as well
14:28:09 all, make sure you actually create the bug/spec/bp if you put ( words ) on the etherpad!
14:28:18 I'll do mine today
14:28:26 random idea: do we perhaps want to make a saharaclient-core group with all sahara-cores plus someone else?
14:29:01 SergeyLukjanov: is it a different group in gerrit?
14:29:06 hmmm, do other projects do this?
14:29:16 tmckay, swift is already doing it
14:29:21 My first instinct is that it seems like it might be overkill
14:29:24 tmckay, probably someone else
14:29:28 crobertsrh, ++
14:29:39 yea, kinda agree with crobertsrh
14:29:41 what about the combined client? will python-saharaclient go away eventually?
14:29:56 but it could be helpful if we have someone very active in the client
14:30:02 not the case right now
14:30:04 tmckay: the CLI client could go away, but not the library
14:30:05 I think we only have a few client changes per cycle, not too bad
14:30:10 gotcha
14:30:20 tmckay, its python part is still needed, it's only about the combined CLI
14:30:20 Agreed that it seems like overkill.
14:30:32 so I hear -1
14:30:39 ("client" is a bit of an overloaded word in OpenStack, unfortunately)
14:30:47 yeah
14:31:08 #agreed no need for a separate saharaclient core group
14:31:16 crobertsrh, btw, I was going to jam through those spec approvals today
14:31:24 everyone has had more than enough time to comment :)
14:31:25 great
14:31:35 if someone has a big issue, they can re-open it as a CR
14:32:06 #topic Hadoop 1 drop
14:32:13 I'd like to chat about it again
14:32:15 poor hadoop 1
14:32:40 question: vanilla is the first, and then it will be for all other plugins, or is it up to each plugin "maintainer"?
14:33:00 there were no requests for hadoop 1 for the whole sahara life (even when it was savanna, eho, etc)
14:33:14 tosky, the HDP plugin is now maintained by the community
14:33:20 due to the lack of HWX support
14:33:21 so we won't support hadoop1?
14:33:30 huichun, do you need it? :)
14:33:48 no need
14:34:46 my reasoning for it - not to support the obsolete thing, it requires CIs, images, etc.
14:34:53 but no one is asking for it
14:35:06 +1
14:35:26 SergeyLukjanov: HDP1 has been a repeat offender in terms of failing all the CI jobs, as well.
14:35:38 so do we need a deprecation cycle?
14:35:56 SergeyLukjanov: is Mapr 3.1.1 also based on Hadoop 2?
14:35:57 deprecated for L, removed in "M"onster
14:36:07 +1
14:36:10 tmckay, probably not for Hadoop 1
14:36:23 not sure that we should keep them while no one is using them
14:36:42 yea, makes no sense to keep it if there have been no requests for it.
14:36:46 even apache has lost a bit of interest in hadoop 1
14:36:48 truth is, the plugin SPI is relatively stable, it wouldn't be hard for someone to maintain it out-of-tree if they really wanted to
14:37:00 tellesnobrega, correct
14:37:03 tmckay: From a customer perspective, everyone's on 2. For enthusiasts, less sure; there could be someone out there running Hadoop 1 on Sahara in his/her basement who'd be very angry at us.
14:37:14 tmckay, if we're going to extract plugins, it'll be additional work
14:37:20 egafford: lol
14:37:22 egafford, they always have Kiko
14:37:37 plus sreshetnyak is now working on the HDP plugin rework and we'll be able to drop the current HDP plugin as well
14:37:49 ack. I have no problem with it
14:38:23 and Liberty will be shipped to customers early next year, so, I think it's pretty safe :)
14:38:37 NikitaKonovalov, have you already created the Hadoop 1 drop spec?
14:38:48 yes
14:38:54 plus it makes the gate faster ++
14:39:04 tmckay, exactly!
14:39:11 #link https://review.openstack.org/#/c/192563/
14:39:14 great
14:39:24 so, we could discuss it offline in the spec
14:39:40 it sounds like we have partial agreement on dropping hadoop 1
14:40:02 at least no users here ;)
14:40:16 we can always send an email about it to openstack-dev and see if anyone screams
14:40:28 Like a doctor saying "Does this hurt?"
14:40:43 tmckay, yeah
14:40:45 #topic Open discussion
14:40:49 then we can prescribe Hadoop 2 as the cure
14:41:03 has anyone talked with the cognitive team?
14:41:16 to find out what it is they are putting together?
14:41:20 good question. channel is kind of quiet
14:41:27 (by kind of, I mean totally)
14:41:41 oh, I have a question about unit tests
14:41:55 quick copy&paste coming :)
14:41:57 we have a global dependency on testtools>=0.9.6, but some unit tests implicitly depend on newer versions
14:42:02 for example, I've seen usage of assertRegex which works only with testtools>=1.2 (thanks to unittest2); and other stuff like that
14:42:26 should we consider these usages as wrong and file bugs for them? Otherwise the global requirements have no meaning
14:42:30 It's not a problem on the gates because they test only the latest version of deps (so testtools 1.8)
14:43:06 i have a question about the multiple clusters spec. sreshetnyak talked to me this week suggesting that i create a new api call, clusters/multiple, this way we keep compatibility and we don't have to wait until API v2 to have this feature in
14:43:38 tosky, so shouldn't the global dep be moved past 0.9.6, if the gates use 1.8?
14:43:52 sorry tosky, kinda broke the thought there
14:44:07 tmckay: that's the other possibility, but I'm not sure how it works in that case
14:44:25 tmckay: maybe there are requirements from distributions; ubuntu has 0.9.6, we ship 1.1...
14:44:32 I have no idea
14:44:33 I see
14:44:51 tosky: maybe talk with the infra team to learn a little more about global reqs and why it's at 0.9.6?
14:45:05 otherwise, yea, bugs would be good
14:45:10 it would be worth checking, but maybe SergeyLukjanov knows something with his infra hat
14:45:36 or at least, who can we talk to about this?
14:45:39 tosky, well, technically, it's not bounded. It doesn't say >= 0.9.6, < XXXX
14:45:54 so, it's not really a bug per se. I see your point, though
14:46:15 tmckay: it is a bug, IMHO; it's not working with a supported version
14:46:17 I think we need to fix the global req. bump it up, or bound it, or everything is okay
14:46:22 guess it depends how each distro satisfies those reqs
14:46:26 so either you raise the dependency or you fix the code
14:46:32 tosky: +1
14:47:27 yeah, I think investigate moving the version in the global req, and if it can't be moved, then bound it, and file bugs
14:47:36 oops, I'm back :)
14:47:46 (imho the global requirements.txt that everybody must take is a sort of annoyance)
14:48:04 SergeyLukjanov, lots of talk about infra stuff ^^
14:48:10 pino|work: necessary evil i suppose ;)
14:48:11 tosky, IMO it's good to bump the testtools version
14:48:26 tosky, there was the same input from the heat team
14:48:39 SergeyLukjanov: yep, they have similar issues in their unit tests
14:48:39 there is already one bump patch for testtools
14:48:40 https://review.openstack.org/#/c/192574/
14:48:47 that's what my grep says :)
14:49:08 oh, from yesterday
14:49:15 in fact, we should be able to work on a min version
14:49:21 if the dependency is raised, will this be backported to kilo?
14:49:26 because kilo is also affected
14:49:31 but, the specific solution for now is to bump the min version
14:49:46 if we can bump, that seems like the ideal
14:50:15 let's review and +1/2 https://review.openstack.org/#/c/192574/
14:50:35 sounds good
14:51:11 tellesnobrega: i think adding new endpoints to v1.1 is fine with regards to v2. v2 will be more about improving the api and perhaps breaking some bad patterns from the past.
14:51:37 sure
14:51:45 does anyone oppose that?
14:51:52 elmiko, ++
14:52:28 i'm going to rewrite the spec detailing that i'm going to add a new call to the API
14:52:50 also going to write the spec for the saharaclient
14:53:26 tellesnobrega: sounds good
14:53:31 SergeyLukjanov: have you talked with the cognitive team at all? (i'm curious what kind of overlap we might have with them)
14:53:58 elmiko, (trying to remember what cognitive is)
14:54:05 MLaaS
14:54:32 i'm wondering if they are going to deploy spark or something
14:54:37 (mailing lists as a service)
14:54:42 lol
14:54:54 tmckay, that's what i read
14:54:55 lol
14:54:56 MachineLearning-aas
14:55:34 elmiko, we could just start posting stuff in the channel
14:56:07 yea, i'm just curious if anyone has talked to them yet. that will be the next step, but i have too much on my plate already ;)
14:56:16 SergeyLukjanov: fyi https://wiki.openstack.org/wiki/Cognitive
14:56:30 oh, I remember there was an email about it
14:56:48 and I answered something like folks, please, don't duplicate :)
14:56:57 it's good if they will use sahara as a base
14:57:07 for example to provision clusters for ML
14:58:13 yea, that would be awesome
14:59:43 wow, we used the whole meeting
15:00:22 the same happened with the project Surge (stream processing on openstack)
15:00:35 interesting...
15:00:58 Hi, L3 meeting scheduled for now.
15:01:24 i guess our time is up
15:01:28 yep, we have to leave
15:01:30 bye all
15:01:38 carl_baldwin: sorry
15:01:39 bye
15:01:39 bye
15:01:43 bye
15:01:47 just need SergeyLukjanov to #endmeeting
15:01:50 elmiko: thanks!
15:02:00 hi
15:02:01 carl_baldwin: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
15:02:16 #endmeeting
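[Editor's note] The testtools pitfall tosky raised at 14:41:57 can be illustrated with a minimal sketch. The point is that `assertRegex` only exists in newer test frameworks (testtools >= 1.2 via unittest2, or Python's stdlib unittest since 3.2), so a test calling it on an older framework fails with an AttributeError rather than a test assertion. The test case below is hypothetical (not from the sahara tree) and uses the stdlib `unittest` to stay self-contained; the `hasattr` fallback shows one way such a test could stay portable to older versions instead of bumping the minimum requirement.

```python
# Sketch of the assertRegex version dependency discussed in the meeting.
# On frameworks older than the method's introduction, self.assertRegex
# raises AttributeError at call time; the guarded fallback avoids that.
import re
import unittest


class RegexExampleTest(unittest.TestCase):
    def test_cluster_status_format(self):
        status = "Cluster: Active"
        if hasattr(self, "assertRegex"):
            # Available in stdlib unittest >= 3.2 and testtools >= 1.2.
            self.assertRegex(status, r"Cluster: \w+")
        else:
            # Portable fallback for older frameworks.
            self.assertTrue(re.search(r"Cluster: \w+", status))


if __name__ == "__main__":
    unittest.main()
```

The alternative the meeting converged on (review 192574) was the other option: raise the minimum testtools version in global requirements so the guarded fallback becomes unnecessary.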
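[Editor's note] The `clusters/multiple` call tellesnobrega proposed at 14:43:06 was still at the idea stage here, so any payload is speculative. The sketch below assumes the new endpoint would reuse the v1.1 cluster-create body plus a hypothetical `count` field saying how many clusters to create; the field name, IDs, and versions are all placeholders, not the final spec.

```python
# Hypothetical request body for the proposed POST .../clusters/multiple
# endpoint. Only the endpoint name comes from the discussion; the
# "count" field and all values here are illustrative assumptions.
import json

cluster_request = {
    "name": "spark-cluster",                   # base name for the clusters
    "plugin_name": "spark",
    "hadoop_version": "1.3.1",
    "cluster_template_id": "<template-uuid>",  # placeholder
    "default_image_id": "<image-uuid>",        # placeholder
    "count": 3,                                # assumed: number of clusters
}

body = json.dumps(cluster_request, indent=2)
print(body)
```

Keeping this as an additive v1.1 endpoint matches elmiko's point at 14:51:11: new endpoints are fine pre-v2, since v2 is about reworking the API rather than blocking new features.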