17:00:26 #startmeeting Murano Community Meeting
17:00:27 Meeting started Tue Jan 21 17:00:26 2014 UTC and is due to finish in 60 minutes. The chair is ativelkov. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:30 The meeting name has been set to 'murano_community_meeting'
17:00:47 Hi folks
17:01:09 It's time for the Murano meeting, isn't it?
17:01:20 Hi
17:02:00 Let's start then
17:02:05 hi
17:02:16 #topic AI review
17:02:38 We had lots of action items related to the new DSL
17:02:51 stanlagun to submit the sources to a custom repo
17:03:12 Stan, as far as I remember you've done that
17:03:24 Could you please remind us of the URL of this repo?
17:03:40 https://github.com/istalker2/MuranoDsl
17:04:20 #info new DSL is temporarily available at a custom repo at https://github.com/istalker2/MuranoDsl
17:04:50 We'll need to decide where to put it when it is ready for integration into 0.5
17:05:32 Right now? )
17:06:00 Well, there is no rush, but we need to start working on the actual 0.5 soon
17:06:45 I suggest murano-common or murano-repository
17:06:58 +1 for murano-common for the DSL
17:07:24 The DSL engine is reusable, so common is a good place for it
17:07:44 murano-shared
17:07:58 stanlagun: is it going to be a new repo?
17:07:59 ok, let's move it to murano-common. One more repository... )
17:08:42 We do have murano-common, and shared has the same meaning though
17:09:12 We will need to reorganize our repos. But this requires a discussion - let's cover all the open AIs first
17:09:23 I would suggest having "murano" for reusable parts and "murano-services" for services
17:10:09 The second AI is "stanlagun to update the DSL description according to the implemented PoC"
17:10:40 The PoC is still in its final implementation steps, right?
17:11:29 not yet done, as the PoC is not fully implemented and there are still several problems left. But it is close to the final steps, as today I've done ~70% of the remaining changes in the DSL
17:12:12 Good.
17:12:38 So, the last AI is not applicable as well. It was on me, and I was supposed to organize a call with Dmitry to demonstrate the PoC
17:12:51 So, we expect this to happen during this week
17:13:02 right?
17:14:09 not sure on the documentation, but as for the PoC - yes
17:14:33 #info DSL PoC to be completed this week. Documentation could be delayed
17:14:39 So, the AIs remain the same
17:14:47 #action stanlagun to update the DSL description according to the implemented PoC
17:14:56 #action ativelkov to schedule a meeting with Dmitry
17:15:04 OK, we are done with the AIs
17:15:09 #topic New DSL
17:15:46 So, speaking about the new DSL: what remains to be done in terms of implementation?
17:17:13 Several APIs to talk to Heat and the Agents, several YAQL functions/macros, and most difficult - memoization of some DSL methods
17:17:43 what do you mean by memoization?
17:17:46 That is, make Deploy callable many times without side effects
17:18:05 And synchronization between several parallel callers
17:19:42 Ah, you mean the logic where a method can be called only once with the given set of parameters, and the result is kept in memory, so subsequent calls return the same without actual computations?
17:20:10 yes, more or less
17:21:10 Ok. Any estimates on this?
17:22:16 this is all part of the PoC
17:22:26 so this week, as I said
17:23:11 Ah, good
17:24:03 Anything else to discuss about the new DSL?
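The memoization described above - Deploy being callable many times while the underlying work runs only once per argument set, with synchronization between parallel callers - could look roughly like the following Python sketch. The decorator name, cache keying, and locking scheme are illustrative assumptions, not Murano's actual implementation.

```python
import functools
import threading

def memoize_once(func):
    """Cache a function's result per argument set, so repeated calls
    return the stored result instead of re-running side effects.
    Hypothetical sketch, not Murano's real code."""
    cache = {}
    locks = {}
    guard = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        # One lock per argument set: a parallel caller with the same
        # arguments blocks until the first call finishes, then gets
        # the cached result.
        with guard:
            lock = locks.setdefault(key, threading.Lock())
        with lock:
            if key not in cache:
                cache[key] = func(*args, **kwargs)
            return cache[key]
    return wrapper

@memoize_once
def deploy(service_id):
    print("deploying %s" % service_id)  # side effect runs only once
    return "deployed-%s" % service_id

print(deploy("app-1"))  # performs the deployment
print(deploy("app-1"))  # returns the cached result, no side effect
```

The per-key lock is what gives the "synchronization between several parallel callers" behavior mentioned in the discussion: concurrent callers with identical parameters all end up with the single computed result.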
17:24:36 let's continue at the next CM
17:25:20 Ok
17:25:29 #topic Release 0.4.1 status
17:25:50 katyafervent2, can you give an update on the status of the release?
17:26:14 Yep, everything is on track
17:27:14 At the last meeting we agreed to add support for Neutron LB in this delivery. Any progress on this?
17:27:20 Work on all items that were added to our deliverables is in progress
17:27:35 ativelkov, yes
17:27:53 The first part of the blueprint was implemented by Serg
17:28:01 neutron support was reviewed and two new blueprints were added
17:28:13 for 0.4.1
17:28:14 tnurlygayanov_: have we deployed the LB to some of our labs?
17:28:14 https://blueprints.launchpad.net/murano/+spec/auto-assign-floating-ip
17:28:15 And just a small piece is left
17:28:25 and https://blueprints.launchpad.net/murano/+spec/auto-assign-virtual-ip
17:28:52 Also we have several medium bugs left
17:28:56 yes, I deployed it on my QA environment with devstack
17:29:32 #info Neutron LB support implementation in progress
17:29:41 #info release 0.4.1 on track
17:30:07 And from now on we need to concentrate more on testing
17:30:32 Please note that 0.4.1 should not have any strict dependencies on IceHouse artifacts or trunk
17:30:34 Also, Dmitry is done with the devstack integration
17:31:34 yes, we will test it with devstack from trunk
17:31:55 tnurlygayanov_: please test it on the latest Havana
17:31:56 Let's move on
17:32:14 ativelkov, it is already done.
17:32:21 good.
17:32:56 The reason I am asking is that 0.4.1 will likely be included in MOS 4.1, which will be Havana-based
17:33:24 So, the next item on our agenda
17:33:34 #topic Repository reorganization
17:33:50 That's a long topic and will probably require more meetings
17:34:03 But in general I see a problem with our infrastructure
17:34:29 we have 11 (eleven) repositories
17:34:58 that is about 5 times more than optimal :)
17:35:26 We need to reorganize them
17:35:47 At the same time, we need to preserve testability
17:36:58 Also, it is ok to have separate repos for the Python bindings (python-muranoclient or whatever it is called) and the OpenStack Dashboard plugin
17:39:12 I would suggest having a single repository for the main components of Murano (we will need to merge murano-api, murano-conductor and murano-repository), and a single repo for the Python bindings (we need to merge python-muranoclient and murano-metadataclient)
17:39:16 services + python-client + UI + app catalog services metadata + (maybe) shared code + docs = 5-6 at minimum. How do you make fewer?
17:39:49 what about murano-deployment?
17:39:56 +1
17:40:02 what is the difference between "services" and "app catalog services metadata"?
17:40:19 services = api/conductor etc.
17:40:30 metadata should be there as well
17:40:39 it is a core component for us
17:40:46 metadata = app templates, sample services. Those are pure YAML/templates etc.
17:41:09 Ah, the examples
17:41:13 They should go to extra
17:41:32 as well as deployment scripts, buildable docs etc.
17:41:48 I don't think that metadata for core classes like Object can be considered extra
17:42:19 I don't speak about core classes
17:42:36 core should be stored in the main repo, together with the engine itself
17:43:30 I don't see the reason for such granularity
17:43:42 Core classes are part of the initial AppCatalog content and should also be installed to glance
17:44:18 And what is the problem with that if they reside in the primary repo?
17:44:27 Alongside the common base class library
17:45:40 If they were in a separate repository, companies could make forks with custom basic workflows adapted to their concrete IT environment
17:46:01 We don't want any forks of our base library!
17:46:09 Why?
17:46:19 We have inheritance for that
17:46:31 Forking is not an OpenStack way of doing this
17:46:58 The base library should remain base :)
17:47:48 And anyway, if somebody wants to fork, they can fork the whole repo
17:48:30 The packaging and glance publishing should be done by deployment scripts
17:48:50 I don't speak about inheritance. I'm talking about use cases like "all VMs in our cloud must be booted from volume", or additional security constraints, or additional workflow steps. If you use inheritance for this, you would have to create a custom version of every possible service in the catalog instead of modifying one class, "Instance"
17:48:51 And the location of the sources for these packages is irrelevant
17:50:41 in this case yes, a custom fork of the library is possible, but I don't see a reason to have a separate repo for that
17:51:08 There may be a need to modify the engine as well - with approximately the same probability
17:51:20 So, they can fork the whole repo
17:52:00 Anyway, I would suggest initiating a discussion
17:52:14 Probably there is more than one way of doing this
17:52:41 I suggest writing to the infra ML and asking for best practices and some advice from OpenStack folks
17:53:00 Well, not with the same probability. Metadata is designed to be modified by cloud operators while code is not. But this doesn't necessarily mean you're wrong :)
17:54:04 For me, metadata is an advanced version of config files
17:54:23 Cool. Do you suggest having a separate repo for all the config files? ;)
17:55:08 #action ativelkov to write to the ML about repository re-organization
17:55:30 Guys, we have 5 minutes left. It seems like we have to continue the repo discussion later
17:55:57 One more note: next week gokrokve and I will be at the Glance mini-summit in Washington DC
17:58:39 Could someone else lead the community meeting next week? Any volunteers? ;)
17:59:21 Volunteers will be assigned :-)
17:59:29 Nice idea )
17:59:41 I like this way of volunteering! )
17:59:47 Ok, thanks all for joining
17:59:53 #endmeeting
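A rough Python analogy of the customization trade-off stanlagun raises in the repository discussion above (the actual Murano metadata is YAML-based; all class and method names here are hypothetical). With inheritance, an operator who wants every VM booted from volume must subclass and re-wire each service; with a forkable base library, changing the one base class propagates to every service:

```python
class Instance(object):
    """Hypothetical base class every service workflow boots VMs through."""
    def boot(self):
        return "boot from image"

class MySQLService(object):
    def deploy(self):
        return Instance().boot()

class TomcatService(object):
    def deploy(self):
        return Instance().boot()

# Inheritance-based customization: the subclass changes the behavior,
# but every service in the catalog must be rewritten to use it.
class VolumeInstance(Instance):
    def boot(self):
        return "boot from volume"

# Fork-based customization: change the one base class in a forked base
# library, and every service picks it up without being touched.
Instance.boot = lambda self: "boot from volume"

print(MySQLService().deploy())   # -> "boot from volume"
print(TomcatService().deploy())  # -> "boot from volume"
```

This is only a sketch of the argument, not of Murano's design: the counter-position in the meeting is that such a fork can just as well target the whole repo, so the base library need not live separately.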