20:01:42 <ttx> #startmeeting tc
20:01:43 <openstack> Meeting started Tue Aug 27 20:01:42 2013 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:43 <megan_w_> them too :)
20:01:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:46 <openstack> The meeting name has been set to 'tc'
20:01:55 <ttx> Our short agenda:
20:02:01 <ttx> #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
20:02:08 <ttx> #topic Marconi incubation request: initial discussion
20:02:18 <ttx> #link http://lists.openstack.org/pipermail/openstack-dev/2013-August/014076.html
20:02:22 <ttx> #link https://wiki.openstack.org/wiki/Marconi/Incubation
20:02:29 <ttx> We generally do incubation requests over two meetings
20:02:40 <ttx> so that we can ask questions and get precise answers before we actually vote
20:02:55 <ttx> Marconi folks: care to quickly introduce yourselves and the project ?
20:03:03 <russellb> o/
20:03:05 <ttx> then we'll fire all kinds of questions at you
20:03:08 <kgriffs> sure thing
20:03:16 <kgriffs> I'm Kurt Griffiths
20:03:39 <kgriffs> I've been nominated to be the PTL for Marconi if it is accepted
20:03:49 <kgriffs> I'll let everyone else introduce themselves...
20:03:57 * flaper87 is Flavio Percoco
20:04:14 <cppcabrera> Alejandro Cabrera here, enjoying working on the marconi project for some time now.
20:04:20 * ametts is Allan Metts, Director of Engineering at Rackspace
20:04:25 <flaper87> I've been helping kgriffs lead the project since its very beginning
20:04:37 <amitgandhi> Amit Gandhi - Dev Manager at Rackspace
20:04:42 <megan_w_> Megan Wohlford, I am a Product Manager at Rackspace, working on Marconi
20:04:44 <malini> I am Malini Kamalambal, developer in test
20:04:53 * flaper87 is a Software Engineer at Red Hat
20:05:04 <kgriffs> and, I should mention I am an architect at Rackspace
20:05:31 <ttx> kgriffs: You mention 1.0 in your incubation wiki page. How far are you from this featureset ?
20:05:32 <oz_akan_> Oz Akan, Engineering Manager at Rackspace, I have been working on the deployment and performance side of Marconi
20:05:55 <markmc> lots of managers :)
20:06:09 <kgriffs> heh, Oz is very hands on. :D
20:06:18 <gabrielhurley> inappropriate
20:06:55 <ttx> kgriffs: is it more the current state, or the expected Icehouse state ?
20:07:24 <ttx> (or the expected J state)
20:07:28 <kgriffs> ttx: we are close. We have some performance work to do, we need to improve logging, and we have to finish up the marconi proxy
20:07:52 <kgriffs> I would like to have a 1.0 release ready for Icehouse
20:07:58 <ttx> kgriffs: You compared with Amazon SQS in one of the follow-up emails, mentioning what you did and what they did not...
20:08:03 <dolphm> kgriffs: how was the 1.0 featureset defined? to fulfill a specific use case or to achieve parity with an alternative solution?
20:08:05 <ttx> Is there anything they support that you don't ? Large messages, delay queues, long retention time, long polling...
20:08:11 <dolphm> v1.0 API, specifically
20:08:34 <flaper87> #link https://wiki.openstack.org/wiki/Marconi/specs/api/v1
20:08:42 <kgriffs> ttx: yes, they support delayed messages
20:08:58 <kgriffs> we have it on our roadmap but are waiting to see what the demand is
20:09:04 <ttx> kgriffs: +1
20:09:21 <russellb> who is they?
20:09:27 <flaper87> russellb: SQS
20:09:31 <russellb> ok.
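(For readers following the v1 API spec linked above: client interaction with Marconi is plain HTTP. The sketch below is illustrative only; the base URL, port, headers, and payload fields are assumptions paraphrased from the draft spec, not an authoritative reference.)

    # Illustrative sketch only -- endpoint paths, port, and fields are assumptions
    # paraphrased from the Marconi v1 API draft.
    import json
    import requests

    BASE = "http://localhost:8888/v1"       # assumed local marconi-server endpoint
    HEADERS = {
        "Client-ID": "3381af92-2b9e-11e3-b191-71861300734c",  # per-client UUID (assumed required)
        "X-Auth-Token": "<keystone token>",                   # if keystone middleware is enabled
        "Content-Type": "application/json",
    }

    # Create a queue, post one message with a TTL, then read it back without claiming it.
    requests.put(BASE + "/queues/demo", headers=HEADERS)
    requests.post(BASE + "/queues/demo/messages", headers=HEADERS,
                  data=json.dumps([{"ttl": 300, "body": {"event": "backup.start"}}]))
    listing = requests.get(BASE + "/queues/demo/messages?echo=true", headers=HEADERS)
    print(listing.json())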
20:09:39 <ttx> russellb: was in response to my question
20:09:42 <kgriffs> we have also discussed doing long polling and even push-based transports (websockets, zmq) but I feel like I want to get a solid baseline release out first.
20:09:47 <russellb> ah k
20:10:12 <ttx> kgriffs: right. Was just checking those were not somehow off-limits.
20:10:18 <kgriffs> nope
20:10:24 <kgriffs> (not off limits)
20:10:40 <ttx> kgriffs: dolphm had a question above
20:10:51 <kgriffs> dolphm: the project's requirements have evolved over time
20:11:00 <kgriffs> let me elaborate. :D
20:11:21 <kgriffs> so, the initial set came out of a brainstorming unconference session at the Grizzly summit
20:12:17 <kgriffs> we have refined them based on community discussion, mostly during our team meetings in #openstack-meeting-alt and in #openstack-marconi
20:12:42 <kgriffs> also, a little on the mailing list, but we seemed to have better luck engaging folks via IRC
20:13:04 <kgriffs> so, early on, Iron.IO was involved in giving feedback, also flaper87 (Flavio) from Red Hat
20:13:49 <kgriffs> and at my own company, Rackspace, we have been talking with lots of "internal" customers and have also surveyed regular customers to find out what they want
20:14:26 <kgriffs> obviously, we ended up with a ton of ideas that we won't implement for 1.0 but we have plenty to keep us busy for quite a while.
20:14:46 <ttx> other questions ?
20:14:55 <kgriffs> so, overall, I've tried to be very open and public about design/requirements
20:15:01 <ttx> or comments/opinions ?
20:15:05 <markmc> kgriffs, there was talk of a zmq backend - how does/will that work?
20:15:19 <kgriffs> FWIW, some reqs. also came from discussions at the last summit in Portland
20:15:32 <markmc> e.g. are zmq endpoints exposed to users, or is it purely an implementation detail?
20:15:37 <kgriffs> markmc: I'll let flaper87 take that q.
20:15:42 * notmyname thinks a queue service is a terribly useful feature that is missing from openstack
20:15:50 <zaneb> fwiw we have use cases waiting for something like this already in Heat
20:16:03 <jd__> hm does that rely on oslo?
20:16:03 <russellb> notmyname: agreed from a high level, at least. trying to work through the approach though ...
20:16:05 <ttx> notmyname: +1... AWS has had SQS since 2004, two years before they introduced S3 and EC2
20:16:22 <flaper87> markmc: re zmq. Current proposal is to implement a transport plugin for zmq, which means being able to talk to marconi using zmq as if we were using HTTP
20:16:27 <markmc> jd__, just for oslo utilities AFAIK
20:16:45 <russellb> doesn't exist yet though, right?
20:16:48 <russellb> (the zmq thing)
20:16:51 <markmc> flaper87, ok, so it's an alternate frontend
20:16:51 <notmyname> so the important point here is that marconi isn't provisioning new queues for people, but actually implementing a queue service
20:16:52 <flaper87> there are 2 layers in Marconi, one for the transports and one for the storage. They both are pluggable
20:16:53 <notmyname> right?
20:16:54 <russellb> i see an empty __init__.py file, heh
20:16:56 <flaper87> markmc: correct
20:16:58 <flaper87> russellb: correct
20:16:59 <markmc> flaper87, backend remains the same?
20:17:04 <flaper87> markmc: yup
20:17:04 <russellb> flaper87: storage == the messaging backend?
20:17:09 <flaper87> russellb: correct
20:17:14 <russellb> and right now you have mongodb?
20:17:19 <notmyname> flaper87: "pluggable" is a horrible word. too many assumptions about the definition
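(A rough illustration of the two pluggable layers flaper87 describes, transport and storage; the class and method names below are invented for this sketch and are not Marconi's actual internals.)

    import abc


    class TransportDriver(abc.ABC):
        """Hypothetical base for transports: speaks a wire protocol (WSGI/HTTP
        today, possibly zmq or websockets later) and hands requests to storage."""

        @abc.abstractmethod
        def listen(self):
            """Start accepting client requests."""


    class StorageDriver(abc.ABC):
        """Hypothetical base for storage: persists queues and messages
        (sqlite for testing, MongoDB for production, others later)."""

        @abc.abstractmethod
        def post_messages(self, queue, messages, client_id):
            """Durably store messages and return their ids."""

        @abc.abstractmethod
        def list_messages(self, queue, marker=None, limit=10):
            """Page through stored messages without claiming them."""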
20:17:25 <russellb> (and sqlite, but presumably for testing)
20:17:27 <flaper87> russellb: as of now we have sqlite and mongodb
20:17:39 <russellb> so does mongodb have magic messaging semantics or what?
20:17:43 <flaper87> notmyname: sorry, bad wording from my side. We allow third-party modules through stevedore
20:17:58 <russellb> like .... why is this not an API on top of rabbit or some existing message broker?
20:18:07 <kgriffs> we have discussed supporting AMQP and/or SQLAlchemy as well, but we want to keep the set of "official" drivers small.
20:18:07 <cppcabrera> russelb: yep, sqlite is a testing storage backend.
20:18:12 <flaper87> russellb: no, mongodb works as a database (as mysql / psql would do).
20:18:25 <cppcabrera> *russellb
20:18:26 <flaper87> russellb: an API on top of existing message brokers is on our roadmap
20:18:38 <russellb> right, so what makes that the first choice for a messaging service backend?
20:18:42 <flaper87> so far we have plans for 3 more storage backends
20:18:52 <flaper87> 1) rel dbs, 2) rabbit, 3) proton
20:19:06 <kgriffs> russellb: we wanted to include a mongo driver since it can be easier to achieve HA than with, say, RabbitMQ
20:19:21 <markmc> how do you implement "List messages" with rabbit as a storage backend?
20:19:23 <kgriffs> it also gives us some extra features
20:19:23 <flaper87> russellb: the main reason is that we wanted to focus on the API without struggling with AMQP issues (and Message broker issues) in the first place
20:19:46 <russellb> markmc: heh, true.
20:19:46 <dolphm> markmc: i was just wondering what the use case was for supporting "list messages"
20:19:59 <russellb> also, i'm very interested in digging into your views on what backends go in the project vs out
20:20:12 <russellb> because based on some list posts, your position doesn't seem to align that well with other projects
20:20:27 <ttx> kgriffs: <notmyname> so the important point here is that marconi isn't provisioning new queues for people, but actually implementing a queue service, right? <-- would like to see the answer to that question as well
20:20:42 <kgriffs> dolphm: so, we are supporting listing messages when you want observers that don't claim messages (pub-sub).
20:21:09 <kgriffs> ttx: correct, marconi is a queuing service itself, not a provisioning service
20:21:36 <zaneb> that's a little bit frightening ;)
20:21:41 <jd__> Redis anyone?
20:21:51 * jd__ likes to pop tech' randomly
20:21:54 <cppcabrera> +1 jd__
20:21:59 <flaper87> jd__: cppcabrera played with that but the implementation isn't complete yet
20:22:38 <kgriffs> again, regarding listing messages, it allows interesting hybrid semantics
20:22:39 <markmc> flaper87, kgriffs, how do you implement "List messages" with rabbit as a storage backend?
20:23:00 <jd__> they don't?
20:23:13 <russellb> heh
20:23:17 <kgriffs> markmc: that's a great question.
20:23:17 <markmc> (trying to get my head around how a storage backend based on an existing broker would work)
20:23:20 <flaper87> markmc: sorry, I missed that question
20:23:22 <kgriffs> that came up last week
20:23:42 <markmc> (and if it makes sense to do that, why would it not be the default)
20:23:55 <markwash> just out of curiosity, why would someone want to use a rabbit backend for marconi?
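(On the stevedore point above: third-party storage drivers would be discovered by entry point, roughly like the sketch below. The namespace and driver names are placeholders, not Marconi's real entry points.)

    # Minimal sketch of loading a named storage driver via stevedore.
    from stevedore import driver

    def load_storage_driver(conf, name="mongodb"):
        # 'marconi.storage' is a placeholder namespace; a real deployment would
        # use whatever entry-point group the project registers in setup.cfg.
        manager = driver.DriverManager(namespace="marconi.storage",
                                       name=name,
                                       invoke_on_load=True,
                                       invoke_args=(conf,))
        return manager.driver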
20:24:09 <notmyname> markmc: kgriffs: based on the previous answer, wouldn't that mean that marconi isn't using something like rabbit et al for a storage backend
20:24:29 * ttx would generally prefer fewer backends and a more consistent experience
20:24:35 <kgriffs> markwash: some people have expressed interest in doing it in order to bridge to a different (may or may not be legacy) system
20:24:40 <oz_akan_> markmc: I think queuing as a service has different requirements than a standalone queuing application like rabbit or redis
20:24:45 <notmyname> ttx: indeed
20:24:48 <russellb> ttx: yeah, but i also like not implementing our own, where it makes sense
20:24:57 <markmc> notmyname, double negative there, so not sure I understand - but my question is "how would a rabbitmq storage backend work?"
20:25:00 <markwash> kgriffs: I feel like some sort of adapter/shoveling mechanism would be better for that
20:25:37 <markmc> oz_akan_, so a rabbitmq storage backend doesn't make sense?
20:25:49 <kgriffs> that would be my preference as well, esp. considering that parts of the API would not be implementable on rabbit
20:25:55 <russellb> saying that we're building a queue service, but also saying that using existing message queueing systems is *not* appropriate seems odd at first take
20:25:55 <notmyname> markmc: from my limited knowledge, it doesn't sound like that would make sense (using a queue as storage for a queue)
20:25:59 <kgriffs> flaper87: thoughts?
20:26:11 <SpamapS> from my limited experience with AMQP, messages do not have to be ACK'd (and thus will not be removed)...
20:26:20 <russellb> but we're building an API for a queue, not a queue itself ... i hope?
20:26:33 <kgriffs> the emphasis is on the API, yes
20:26:38 <flaper87> kgriffs: agreed
20:26:41 <annegentle> russellb: that's my understanding
20:26:53 <zaneb> russellb: "<kgriffs> ttx: correct, marconi is a queuing service itself, not a provisioning service"
20:27:06 <flaper87> I think not all features around the API will be supported by all backends
20:27:14 <oz_akan_> markmc: I can't say it doesn't make sense, but it probably answers a different need than using mongodb as a backend
20:27:16 <flaper87> storage backends*
20:27:19 <kgriffs> i mean, it isn't provisioning AMQP queues or something
20:27:20 <russellb> zaneb: i get that, but you can still have an API on top of an internal message fabric that you didn't build yourself, right?
20:27:35 <kgriffs> it is a service itself in that it has its own API that is multi-tenant
20:27:36 <markwash> I don't think that's the point
20:27:36 <cppcabrera> #info marconi is a queuing service itself, not a provisioning service
20:27:36 <dolphm> russellb: not today
20:27:54 <russellb> dolphm: yeah, i know ...
20:28:02 <russellb> speaking hypothetically, i suppose.
20:28:22 <kgriffs> to be clear, it is a data API, not a management API (although a management API will come along at some point)
20:28:31 <torgomatic> seems to me that if you have a set of API methods that are supported, and using storage backend X precludes implementation of at least one of those API methods, then using storage backend X is precluded
20:28:43 <torgomatic> (in response to "why not rabbitmq")
20:28:52 <markwash> and marconi storage backends *implement* a queue, correct?
20:28:53 <zaneb> kgriffs: OK, so the user doesn't get direct access to the backend, but neither is marconi touching every message itself?
20:29:13 <kgriffs> what do you mean by "touching"?
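(Going back to markmc's "list messages with rabbit" question above: one reason listing is natural for a database backend but awkward for a broker is that a store like MongoDB can simply be queried in place, with no consume/ack involved. The collection and field names below are invented for illustration.)

    from pymongo import MongoClient

    # Hypothetical schema: one document per message, scoped by project and queue.
    db = MongoClient("mongodb://localhost:27017")["marconi"]

    def list_messages(project, queue, marker=None, limit=10):
        """Page through messages non-destructively -- no claim, no ack."""
        query = {"p": project, "q": queue}
        if marker is not None:
            query["_id"] = {"$gt": marker}   # resume after the last id the client saw
        return list(db.messages.find(query).sort("_id", 1).limit(limit))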
20:29:27 <zaneb> queuing might be a more accurate term
20:29:33 <markwash> zaneb: I think all messages are submitted and retrieved through the marconi api #amiwrong?
20:29:36 <ttx> markwash: IIUC Marconi implements a queue, and uses sqlite/MongoDB/* to store data
20:29:43 <flaper87> markwash: correct
20:30:35 <ttx> It can use Rabbit to store data too, but that's a bit of a corner case
20:30:42 <markwash> kgriffs: what kinds of usage monitoring/notifications do you support at present?
20:30:48 <ttx> So yes, it's a bit scary
20:30:58 <notmyname> ttx: s/scary/awesome/
20:30:58 <ttx> scarier than, say, Trove is.
20:31:20 <flaper87> notmyname: +1 :)
20:31:23 <ttx> notmyname: oh right. s/scary/exciting/ :)
20:31:26 <kgriffs> markwash: for the end user, for ops, or both?
20:31:33 <markwash> kgriffs: ops/billing
20:31:37 <annegentle> ttx: scary because of vagaries in backends and API matching up?
20:31:45 <markwash> kgriffs: I guess I probably mean "ceilometer"
20:31:47 <markwash> actually
20:32:05 <ttx> annegentle: scary because writing message queue software or a database can be hard
20:32:21 <markwash> IIRC SQS charged some small amount per api request
20:32:25 <annegentle> ttx: okie
20:32:34 <gabrielhurley> seealso: quotas/limites
20:32:41 * gabrielhurley can't type today
20:32:47 <kgriffs> markwash: so, let me answer that
20:32:51 <ttx> annegentle: though I'm also concerned about supporting too many backends and frontends making it a deployer dream and a user nightmare
20:33:04 <flaper87> gabrielhurley: we have some rate limits but no quotas yet
20:33:10 <annegentle> ttx: yeah I was more scared of backend frontend and docs
20:33:12 <flaper87> gabrielhurley: that's on our roadmap, though
20:33:14 <notmyname> markwash: isn't that "simply" marconi being good about reporting what's going on? is ceilometer support required out of the gate?
20:33:14 <kgriffs> depending on what you are metering, you may just use web server logs
20:33:36 <torgomatic> ttx: as long as the same API methods are supported regardless of backend, I don't see why that would be scary for users at all
20:33:39 <kgriffs> otherwise, we have a bp or two to work on collecting operational stats
20:33:53 <ttx> torgomatic: I'd agree with that.
20:33:58 <kgriffs> we have been discussing whether/how to use ceilometer for that
20:34:00 <markwash> notmyname: oh, I'm just being a bit curious
20:34:04 <kgriffs> or statsd or whatever
20:34:19 <jd__> if there's a WSGI API with middleware, it's pretty easy to account for requests using Ceilometer
20:34:31 <ttx> torgomatic: as long as we don't start having extensions for those who deploy with RabbitMQ as backend, we should be good.
20:34:32 * torgomatic is a big statsd fan, fwiw
20:34:42 <kgriffs> yep, the HTTP transport is WSGI
20:34:51 <kgriffs> marconi doesn't try to provide a web server.
20:35:02 <flaper87> we want to integrate w/ ceilometer at some point
20:35:10 <jd__> if there's a need to account for things like number of messages stored, I guess we should talk about it
20:35:26 * markwash envies
20:35:33 <flaper87> it does provide a base server using wsgiref, which is supposed to be used for testing
20:35:44 <cppcabrera> fwiw, I've had pretty awesome experiences so far wrapping marconi in middleware. It plays nice with keystone-auth-middleware. :)
20:35:47 <kgriffs> flaper87: right, forgot to mention that.
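(A minimal sketch of the kind of WSGI middleware jd__ mentions for request accounting; the Marconi app variable and the metric sink are placeholders.)

    class RequestCounter(object):
        """Counts API requests per project before passing them through."""

        def __init__(self, app):
            self.app = app
            self.counts = {}

        def __call__(self, environ, start_response):
            project = environ.get("HTTP_X_PROJECT_ID", "anonymous")
            self.counts[project] = self.counts.get(project, 0) + 1
            # A real deployment would emit this to ceilometer, statsd, etc.
            return self.app(environ, start_response)

    # app = RequestCounter(marconi_wsgi_app)   # then serve under any WSGI container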
20:35:50 <kgriffs> :D
20:36:27 <notmyname> kgriffs: what do you mean by (paraphrasing) "it's an http server" but "it's not a web server"?
20:36:31 <kgriffs> so, a developer can pip install marconi today and be up and running in a couple minutes
20:37:10 <kgriffs> I mean, it doesn't speak/parse HTTP - it doesn't self-host unless you count wsgiref for development/testing
20:37:18 <markwash> it's a wsgi application, not a wsgi server
20:37:32 <ttx> OK, is there a specific area we'd like to see clarified before next week ?
20:37:45 <cppcabrera> #info marconi is a wsgi application, not a wsgi server
20:37:54 <ttx> So far I've spotted a few worries and a few surprises, but nothing obscure ?
20:37:56 <markmc> perhaps clarification of what parts of the API would be required for storage backends
20:38:14 <markmc> i.e. that a storage backend wouldn't be accepted unless it could implement "list messages"
20:38:23 <ttx> markmc: personally I'd prefer if the API was the same for all backends
20:38:25 <markmc> or that "list messages" is an optional API
20:38:34 <markwash> it sounds like the whole rabbit backend thing should be deemphasized. . it was somewhat confusing for us I guess. . .
20:38:34 <flaper87> markmc: +1
20:38:35 <ttx> otherwise I'd question the usefulness of the extra backend
20:38:50 <jgriffith> ttx: +1 in fact I'd almost think that should be required
20:39:08 <annegentle> I'd prefer the API be standard for all backends as well
20:39:17 <markwash> but that's just my "if I were you" opinion, not any kind of tc-vote-related issue
20:39:17 <flaper87> ttx: I wouldn't question it. You can have Marconi deployed in different regions and running on top of different backends
20:39:18 <jd__> makes sense
20:39:36 * markmc isn't against discoverable extensions, but "list messages" seems like a fundamental part of the API
20:39:39 <kgriffs> personally, I'm not a big fan of having partial API support depending on backends. If we think something like that is needed, it would be better to split marconi into two projects
20:39:45 <markmc> if it's not a fundamental part of the API, that should be called out
20:39:49 <ttx> flaper87: sure, as long as they have the same API
20:39:51 * SpamapS would like to raise a hand for "extra backends"
20:40:00 <SpamapS> MongoDB.. AGPL.. it's a problem.
20:40:18 * gabrielhurley is not a huge fan of mongo at scale, either...
20:40:26 <dhellmann> isn't the AGPL only an issue if you're making custom changes to the app?
20:40:26 <russellb> is the mongodb client lib agpl?
20:40:27 <SpamapS> I understand that RabbitMQ is sufficiently different from a data store that it confuses the situation..
20:40:33 <flaper87> russellb: nope
20:40:34 <markwash> sure, other backends. . but not a broker backend
20:40:41 <kgriffs> but, we went for listing messages since if nothing else it lets you audit work queues, which was always a pain with SQS
20:40:52 <russellb> from an openstack perspective, it's the client lib license we're mostly concerned with
20:40:54 <dolphm> kgriffs: is "list messages" the only api feature specifically intended to support pub-sub?
20:41:00 <russellb> but surely deployers care about mongo itself, too
20:41:07 <kgriffs> yes
20:41:09 <russellb> deployers and packagers / vendors
20:41:12 <SpamapS> russellb: All due respect, but that makes no sense at all.
20:41:16 <zaneb> dhellmann: it's not that it's an *issue*, so much as that you have to convince the whole world that it's not an issue
20:41:24 <SpamapS> russellb: what good is a client library without the database?
20:41:29 <flaper87> pymongo is under the Apache license https://github.com/mongodb/mongo-python-driver/blob/master/LICENSE
20:41:32 <kgriffs> I am not opposed to having alternate backends, but it may be a matter of other SQL/NoSQL rather than adding in AMQP
20:41:33 <russellb> SpamapS: read the rest of what i said there bud before you freak out
20:41:38 <dolphm> kgriffs: (was that a yes to my question or to russellb?)
20:41:54 <markmcclain> I'd like to revisit one aspect of the wsgi topic… namely that marconi uses another wsgi framework not found in any other integrated project
20:41:54 <markmc> SpamapS, an AGPL client lib means (or could mean, depending on your take on these things) that Marconi wouldn't be distributable under the Apache License
20:41:56 <SpamapS> russellb: Sorry, the rest came in while I was mounting my high horse.
20:42:03 <russellb> SpamapS: i noticed
20:42:05 <markmc> SpamapS, which is a requirement of all our projects
20:42:13 <markmc> SpamapS, that's the valid distinction russellb is making
20:42:37 <SpamapS> markmc: Yes, and the AGPL license for MongoDB means you can't deploy without accepting the terms of the AGPL on at least MongoDB.
20:42:53 <notmyname> kgriffs: I like that. backend == some way to durably store stuff. not the thing that does the queue work
20:42:57 <ttx> markmc: how did we solve that in ceilometer-land ?
20:42:58 <markmc> SpamapS, doesn't affect the license that Marconi is distributed under
20:43:03 <ttx> or did we not ?
20:43:04 <SpamapS> My point isn't "burn MongoDB", it is "Allow deployers to choose alternatives, please."
20:43:10 <markmc> ttx, it's not an AGPL client lib, it's not an issue
20:43:18 <jd__> markmcclain: +1
20:43:26 <kgriffs> SpamapS: +1
20:43:39 <russellb> SpamapS: fair enough.
20:43:54 <jd__> ttx: we didn't?
20:44:04 <jd__> :)
20:44:27 * ttx slaps jd__
20:44:45 <jd__> markmcclain raised a good point on the API btw
20:44:47 <dhellmann> jd__: well, we built the sqlalchemy driver, too, didn't we? I don't remember the timing of that, but I think it was at least in progress.
20:44:50 <flaper87> markmcclain: we had a plan to test Pecan and try to replace Falcon, however, the last benchmark (kgriffs has more details about this than I do) resulted in falcon being faster and we needed that
20:45:00 <jd__> dhellmann: I think so, indeed
20:45:11 <flaper87> that doesn't mean we won't "try" again, we just didn't stress that point so we could focus on the design
20:45:12 <SpamapS> ceilometer has the appropriate abstraction to allow deployers to write a new backend and choose an alternative.
20:45:15 <lifeless> I must have missed something, if you can't use the service without an AGPL service, isn't that still an issue?
20:45:17 <flaper87> hope that makes sense
20:45:18 <jd__> flaper87: hm, because other projects don't need speed? :]
20:45:22 * flaper87 is trying to find that blueprint
20:45:32 <flaper87> jd__: not saying so :)
20:45:40 <jd__> SpamapS: doesn't marconi have that?
20:45:47 <dolphm> flaper87: i'd be curious to see the benchmarks that supported that decision
20:45:52 <annegentle> I did find the prereq "install mongodb" a bit surprising myself.
20:46:00 <lifeless> You can legally distribute binaries of it, sure, but that doesn't get you the ability to /use/ it.
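(For reference on markmcclain's framework point: Falcon routes HTTP verbs to on_get/on_post responder methods rather than Pecan's object-dispatch controllers. A toy example only, not Marconi's actual transport code.)

    import json

    import falcon


    class MessagesResource(object):
        """Toy responder only; the real transport layer does far more."""

        def on_get(self, req, resp, queue_name):
            resp.body = json.dumps({"queue": queue_name, "messages": []})  # resp.text in newer Falcon
            resp.status = falcon.HTTP_200


    api = falcon.API()   # falcon.App() in newer Falcon releases
    api.add_route("/v1/queues/{queue_name}/messages", MessagesResource())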
20:46:07 <kgriffs> dhellmann: re Pecan, it isn't off the table, just was de-prioritized since just getting a solid baseline release has been taking all our time
20:46:17 <SpamapS> jd__: indeed, but there was talk of that being confusing because RabbitMQ was mentioned in the same breath as sqlite/mongodb. I want to make sure the abstraction isn't thrown out with the confusion.
20:46:27 <flaper87> annegentle: we can remove that from the wiki / readme
20:46:40 <lifeless> flaper87: so you don't need mongodb ?
20:46:44 <markmcclain> flaper87: ok.. would be interested to see the benchmarks and code to see how they are constructed
20:46:45 <flaper87> mongodb is what we suggest for production right now because there's no other option
20:46:48 <flaper87> right now
20:46:50 <kgriffs> jd__: re speed, not all projects have APIs that are directly in the line of fire
20:46:56 <annegentle> flaper87: nah, if it's the "easiest" way forward for people then speak the truth :)
20:46:59 <lifeless> flaper87: so you do need it: it should stay on the wiki.
20:46:59 <flaper87> I wouldn't suggest using sqlite in production
20:47:15 <jd__> kgriffs: everything's relative? ;)
20:47:26 <kgriffs> jd__: yep.
20:47:27 <jd__> kgriffs: but I get what you mean though
20:47:27 <ttx> lifeless: they are entering incubation, not being integrated yet
20:47:28 <flaper87> we can remove it as soon as we add more backends, I guess
20:47:43 <annegentle> flaper87: right
20:47:53 <jd__> SpamapS: ack, abstraction's good :)
20:47:54 <lifeless> ttx: sure, still a bit nervous-making
20:47:57 <kgriffs> I like having sqlite because say I am implementing a Rust library
20:48:00 <ttx> so we bless the general architecture and the promise of it, more than the details
20:48:03 <kgriffs> maybe I want to run on my local box
20:48:15 <kgriffs> with sqlite I don't have to install anything, just pip install and run marconi-server
20:48:26 <kgriffs> anyway, that's a discussion for another time I suppose
20:48:48 <ttx> we need to move to open discussion in a bit. This will be continued next week. Last minute questions ?
20:49:00 <flaper87> Can we make a list of points we should prepare for next week?
20:49:03 <annegentle> ttx: right this is about incubation now. Ok I'm good.
20:49:03 <jd__> talking about architectures, there's a marconi-gc and a marconi-server that need to run and that's it?
20:49:04 <flaper87> markmc: mentioned something
20:49:11 <flaper87> but I might have missed other points
20:49:12 <kgriffs> flaper87: +1
20:49:20 <ttx> annegentle: and definitely too late to be integrated in Icehouse anyway.
20:49:58 <kgriffs> ttx: so, seems like there would be plenty of time to iron out any implementation concerns
20:50:05 <russellb> btw, when are the next elections again?
20:50:11 <ttx> kgriffs: indeed
20:50:14 <kgriffs> (if targeting the J release)
20:50:15 <russellb> and is there any reason to wait for voting on things incubating next cycle?
20:50:17 <ttx> russellb: end of Sept
20:50:28 <jgriffith> russellb: we haven't in the past
20:50:42 <russellb> k, i'm fine with it, just wanted to make sure ...
20:51:03 <ttx> that brings us to...
20:51:08 <hub_cap> drumroll
20:51:08 <ttx> #topic Open discussion
20:51:16 <ttx> Next week we'll start the end-of-cycle graduation review for Trove and Ironic
20:51:22 <ttx> To decide if they should be integrated for the Icehouse release
20:51:24 <hub_cap> ohh ohh ttx pick me pick me, i have an issue
20:51:31 <jgriffith> hah
20:51:32 <ttx> hub_cap: you wanted to raise an issue with your work on Trove Heat integration ?
20:51:38 <hub_cap> yes so its going quite well
20:51:50 <hub_cap> with one small issue, SpamapS and sdake and i talked about yesterday
20:52:06 <hub_cap> the create stack call requires a user's password currently
20:52:19 <ttx> russellb: so a project starting incubation now/next week will definitely be too young to graduate next week/the week after
20:52:27 <russellb> ttx: sure
20:52:41 <russellb> ttx: i was basically considering projects applying now as incubating during the icehouse cycle
20:52:55 <hub_cap> so until trusts is baked in to heat, we dont have a way to create stacks on behalf of a user
20:52:56 <shardy> hub_cap: and this will be resolved when heat-trusts gets merged, right?
20:53:06 <ttx> russellb: that's what we are discussing for Marconi.
20:53:13 <russellb> right.
20:53:15 <hub_cap> shardy: i believe so, from what SpamapS/sdake mentioned
20:53:17 <zaneb> shardy: yes
20:53:25 <hub_cap> oh wait it was zaneb
20:53:28 <shardy> which, hopefully, should happen now for Havana if all goes to plan
20:53:36 <ttx> russellb: i actually encourage projects to file now, so that they can get more space at the design summit
20:53:36 <hub_cap> yes so thats what im afraid of
20:53:40 <shardy> as the keystoneclient stuff we were gated on has been released
20:53:42 <zaneb> shardy: although, it sounds like we have to change the middleware to allow it too
20:53:43 <hub_cap> "hopefully" :)
20:53:44 <russellb> ttx: makes sense
20:54:09 <hub_cap> even if it is merged it's such a small window to get it working properly
20:54:12 <shardy> hub_cap: yeah, the whole thing has taken way longer than I ever expected
20:54:37 <hub_cap> hehe shardy /me understands... heat integration sure did take a while too for me ;)
20:54:41 <zaneb> hub_cap: so the back end of trove is switchable between heat/non-heat, right?
20:54:44 <SpamapS> So bottom line is, Trove should be able to use Heat as soon as Heat can use trusts.
20:54:47 * markwash disappears for 25 minutes
20:54:50 <zaneb> you weren't planning to cut directly over
20:54:55 <hub_cap> zaneb: yes, default is non-heat
20:55:20 <hub_cap> correct, as discussed by the TC, Icehouse would be optional, and then release+1 would be default
20:55:31 <SpamapS> The eventual plan is for extra capabilities in trove, like complex replication scenarios, HA, etc., to make use of Heat, but for now.. Trove works fine w/o Heat.
20:55:35 <hub_cap> its basically deprecating the old way of provisioning, so it follows the same thing as, say, removing nova-volume
20:55:40 * SpamapS hopes he got that right.
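(Background on what "heat-trusts" enables: a Keystone v3 trust lets the user delegate specific roles to Heat's service user, so Heat can act on their behalf with a token and never needs their password. A rough python-keystoneclient sketch; the ids and role name are placeholders.)

    from keystoneclient.v3 import client

    # Token-only auth: no user password involved.
    keystone = client.Client(token="<user token>",
                             endpoint="http://keystone:5000/v3")

    # The user (trustor) delegates the needed roles to Heat's service user (trustee).
    trust = keystone.trusts.create(trustor_user="<user id>",
                                   trustee_user="<heat service user id>",
                                   project="<project id>",
                                   role_names=["heat_stack_owner"],   # placeholder role
                                   impersonation=True)
    # Heat can later exchange this trust for a scoped token when acting for the user.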
20:55:51 <hub_cap> correct SpamapS
20:55:52 <zaneb> so, what I suggested to hub_cap is to continue the integration work and try to get things to the point where the lack of token-only middleware in Heat is the only thing preventing support being turned on
20:56:10 <shardy> zaneb: +1
20:56:20 <hub_cap> and thats what im working on now zaneb, create is almost finished
20:56:21 <zaneb> and if heat-trusts makes it into Havana, flip the switch
20:56:25 <hub_cap> and ive got ~1 wk to finish it
20:56:39 <hub_cap> well yes assuming 1) trusts works flawlessly, and 2) the switch is really small
20:56:56 <zaneb> hub_cap: well, you're not actually making use of the trusts part
20:57:03 <hub_cap> correct zaneb
20:57:06 <zaneb> so it doesn't really have to work ;)
20:57:13 <anteaya> please find the patch creating the governance repo here: https://review.openstack.org/#/c/43002/ awaiting your reviews
20:57:28 <hub_cap> haha zaneb nice
20:57:39 <hub_cap> i just need token-based stack creation :)
20:57:44 <ttx> #info patch creating the governance repo here: https://review.openstack.org/#/c/43002/ awaiting your reviews
20:57:47 <zaneb> indeed
20:58:03 <anteaya> thanks ttx forgot that
20:58:22 <shardy> Well the trusts functionality will work with both token and user/pass auth, so you can test it by flipping a config file option in heat
20:58:25 <hub_cap> so i am slightly concerned that heat integration will creep into Icehouse if the pieces dont land perfectly
20:58:39 <hub_cap> cool shardy that makes it easy
20:58:50 <ttx> hub_cap: you could grant yourself a feature freeze exception over that, if you need a few more days
20:59:10 <hub_cap> yes i would need that, as well as having heat land trusts
20:59:10 <ttx> we'll all blame shardy for that
20:59:20 <hub_cap> :)
20:59:28 <shardy> ttx: haha ;)
20:59:36 <hub_cap> so if we are 100% certain trusts will land then ill grant a FF exception
20:59:39 <hub_cap> for myself lol
20:59:42 <zaneb> shardy will blame keystone, no doubt ;)
20:59:42 <ttx> hub_cap: we can continue the discussion in the incubated project section of the next meeting
20:59:44 * shardy looks for someone else to blame..
20:59:50 <hub_cap> yes ttx
20:59:56 <ttx> let's close this one
20:59:57 <hub_cap> im good w that
20:59:59 <ttx> final words ?
21:00:06 <hub_cap> hugs
21:00:07 <hub_cap> always hugs
21:00:12 * dolphm accepts the blame
21:00:18 <ttx> #endmeeting