16:09:50 #startmeeting marconi
16:09:51 Meeting started Mon Nov 18 16:09:50 2013 UTC and is due to finish in 60 minutes. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:09:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:09:56 The meeting name has been set to 'marconi'
16:09:59 #link https://wiki.openstack.org/wiki/Meetings/Marconi#Agenda
16:10:13 #topic Hong Kong summit retrospective
16:10:35 ametts, flaper87: how did you think things went?
16:10:53 I'm going to drop a few relevant links here.
16:10:56 I'm very happy with how things went
16:10:57 #link https://etherpad.openstack.org/p/IcehouseMarconiFeaturesAPIChanges
16:11:02 #link https://etherpad.openstack.org/p/IcehouseMarconiNotificationService
16:11:06 #link https://etherpad.openstack.org/p/IcehouseMarconiAMQPSupport
16:11:08 People attended our sessions
16:11:11 #link https://etherpad.openstack.org/p/IcehouseMarconiInOut
16:11:16 we got feedback from the community overall
16:11:35 Decent attendance: 30-50 people at the design sessions, and I counted about 65 at the afternoon session.
16:11:42 After our talk people gave feedback as well
16:12:02 We probably would have had more folks there, except it was Friday afternoon and people were starting to head out already.
16:12:06 I'm also happy with how kgriffs' session went
16:12:49 I also thought it went well
16:12:55 In spite of the jet lag. :p
16:12:59 Was there any particular feature
16:13:06 that people were asking for?
16:13:23 Lots of people interested in notifications
16:13:31 one person was asking about security
16:13:52 as in, how do I ensure that one client can't get messages meant for a different one
16:14:06 Another theme was "queue provisioning as a service"
16:14:16 hmmm
16:14:31 yeah
16:14:47 I think at some point it may make sense to spin up a project to provision single-tenant AMQP instances or something, a la Trove
16:14:53 so notifications is definitely a hot topic - cool.
16:14:53 I'm not 100% sure about it, but I haven't given it much thought yet
16:15:00 right
16:15:11 something to keep in the back of our minds - not a high priority right now
16:15:37 HP suggested several things
16:16:01 like, binary payloads combined with storing the content type in the metadata
16:16:05 (a la Swift)
16:16:32 Sounds like they don't have bandwidth to contribute code at the moment, but maybe that will change if they hire more people
16:16:54 flaper87, ametts: other things you would call out?
16:16:59 I'm looking forward to that! +1 for contributor bandwidth
16:17:18 that's cool. I found Hung's suggestions very valuable; unfortunately many of them didn't fit into Marconi's current goals
16:17:26 but we should definitely keep an eye on those
16:17:35 +1
16:17:46 he had some good "lessons learned" to share based on customer feedback
16:17:58 sounded like most customers are internal ones, fwiw
16:18:02 Sounds like there was good support for an early 1.1 version in the fourth session.
16:18:05 (internal == HP)
16:18:10 yep
16:18:22 I was happy that Everett attended
16:18:32 flaper87: He maintains the Rackspace Java SDK
16:19:44 - jclouds guy
16:19:45 https://github.com/everett-toews
16:20:04 flaper87: you should talk about client design with him some time
16:20:18 ow, didn't know that! Awesome!
16:20:21 which also reminds me, a couple people asked about JMS support
16:20:23 I'll definitely do that
16:20:35 flaper87: maybe you can ask him about JMS as well
16:20:45 +1
16:20:52 rock on
16:20:57 I already have some ideas re JMS / AMQP / STOMP / MQTT
16:21:02 but we need to dig more into that
16:21:04 kk
16:21:13 and it sounds like a "J" summit thing
16:21:19 anyway, I'll ping him offline
16:22:05 otherwise, I still need to finish digesting my notes from the summit and updating the wiki/blueprints accordingly. TBH, I've been busy just catching up on email and preparing for the Solum mini-design-summit happening in SFO tomorrow and Wed.
16:22:35 #action flaper87 to get in touch with Everett Toews and talk shop
16:23:04 speaking of which, we should start thinking about how Marconi could tie into a PaaS
16:23:27 +1
16:23:33 +1
16:23:38 ok, anything else on the topic of the summit?
16:23:56 I have to take a deeper look at Solum. I wish I had more bandwidth to contribute to it
16:24:08 flaper87: bandwidth is hard to come by. I hear ya!
16:24:19 I need to upgrade to 10g
16:24:23 lol
16:24:31 :P
16:24:54 fwiw I am planning to start contributing on a regular basis to Oslo
16:25:09 I also may be doing some webob work
16:25:12 we'll see
16:25:19 so much to do, so little time. :p
16:25:27 aaanyway
16:25:46 I would like to see more Marconi folks contributing to other OS projects
16:26:09 helps us stay in sync with the community, and we may end up getting people to reciprocate and contribute to Marconi
16:26:24 I know flaper87 already does that.
16:26:29 (kudos)
16:26:35 :D
16:26:40 I see the value of it. Just got to make the time. :)
16:27:14 ok, let's move on
16:27:21 #topic review actions from last mtg
16:27:30 http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-10-28-16.04.html
16:27:34 #link http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-10-28-16.04.html
16:27:43 I'll go first
16:27:51 T-shirts: mission accomplished!
16:28:12 kgriffs: +1
16:28:15 w00t
16:28:15 we handed out all but about 4 of the 75 I brought with me
16:28:30 * ametts is glad kgriffs didn't get arrested by Hong Kong Customs agents
16:28:34 lol
16:28:36 lol
16:28:50 they were like, "dude, do you have any drugs in there?"
16:28:51 "no"
16:28:53 "carry on"
16:28:58 (I'm not sure how to encode bail on an expense report)
16:29:06 LOL
16:29:16 lol
16:29:30 ametts: you have the contact info for the screen printing shop if you'd like to order more
16:29:42 Yes I do. Thanks.
16:30:07 They are great for meetups and recruiting fairs
16:30:20 flaper87: I will forward the graphics on to you
16:30:32 moving on
16:30:36 kgriffs: awesome, thanks!
16:30:42 "flaper87 to firm up cross-transport-api-spec"
16:31:18 kgriffs: done, I've also been chatting with Cindy about that
16:31:28 we've been writing more stuff there
16:31:39 and we'll start coding on it ASAP
16:31:49 ok, cool
16:32:01 once the whole idea is well defined, we'll also move the info to the wiki
16:32:08 Architecture-related info
16:32:13 flaper87: you may also want to chat with the Nova guys about this
16:32:21 since they will be using JSON schema as well, sounds like
16:32:28 yeah
16:32:41 jsonschema - picking up steam across the board. gtk.
16:32:48 we are also using it in Glance, FWIW. But I'll definitely keep an eye on the Nova work
16:32:58 I envision a sort of RESTful WSME module growing out of this that multiple projects can share.
16:33:32 alcabrera: oh yeah, you should have been at the Pecan/WSME nova session
16:33:37 oh?
16:33:49 let me summarize
16:34:08 "you guys should be using Pecan/WSME for your next API"
16:34:14 "OK. Who else is using it?"
16:34:35 "Just ceilometer, but we have commitments from a bunch of other projects"
16:34:51 "Blah blah blah..."
16:34:59 "So how do extensions work with WSME?"
16:35:49 "They don't, really. WSME definitions are static. But that is OK because you can just define a generic string table and use that. And OS projects should stop supporting arbitrary extensions anyway."
16:35:59 "That's not going to happen any time soon"
16:36:17 "But we need to enforce portability of OS apps and scripts"
16:36:43 [Back-and-forth, no consensus reached on that point]
16:36:51 doesn't sound like that went well.
16:37:14 "So, going back to WSME and extensions, did I hear right that the definitions are static?"
16:37:17 "That's correct."
16:37:26 "OK, then that isn't going to work."
16:37:40 [blah blah blah...]
16:37:50 "OK, let's go with Pecan and JSON schema"
16:38:07 I forgot to mention this little gem earlier...
16:38:25 heh. :)
16:38:30 "Marconi gets a pass on using Falcon since they are a data plane API"
16:38:37 oh yeah!
16:38:40 I heard that one from flaper87
16:38:50 very cool
16:39:01 Lemme just note that down...
16:39:08 #info Marconi gets a pass on using Falcon since they are a data plane api
16:39:15 but, I still promised to do a POC Pecan driver and try to help improve Pecan performance, which is starting to look like improving webob
16:39:35 isn't a spin-off of webob used in Swift?
16:39:39 swiftob?
16:39:51 p.s. - flaper87 and I only spoke a couple sentences the entire time
16:40:18 swob
16:40:22 :D
16:40:22 not sure how related it is
16:40:41 we may be able to "steal" some code or at least ideas from there
16:41:00 ok, so, other action item
16:41:08 alcabrera to research ways we can use heat to deploy marconi itself, as well as provision queues, whatever
16:41:20 started
16:41:38 I did some reading on Heat this morning to familiarize myself with what it is and what it can be used for.
16:41:43 The whole orchestration idea.
16:41:46 alcabrera: swob = a Swift replacement for webob that doesn't break on every minor version bump
16:41:46 alcabrera: have you already reached out to keith bray?
16:41:58 kgriffs: I have not.
16:42:09 notmyname: does it have the same interface?
16:42:17 alcabrera: ok, I will introduce you
16:42:25 thanks!
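[Editor's note: the request validation with JSON Schema discussed above can be sketched with a toy, stdlib-only validator. Real projects would use the `jsonschema` library's `validate()` instead; the message schema below is a hypothetical illustration, not Marconi's actual API.]

```python
# Toy subset of JSON Schema validation (type, required, minimum).
# MESSAGE_SCHEMA is a hypothetical queue-message schema for illustration.

MESSAGE_SCHEMA = {
    'type': 'object',
    'required': ['ttl', 'body'],
    'properties': {
        'ttl': {'type': 'integer', 'minimum': 60},
        'body': {'type': 'object'},
    },
}

TYPES = {'object': dict, 'integer': int, 'string': str}


def validate(instance, schema):
    """Check a decoded JSON value against a tiny subset of JSON Schema."""
    if not isinstance(instance, TYPES[schema['type']]):
        raise ValueError('expected %s' % schema['type'])

    for key in schema.get('required', []):
        if key not in instance:
            raise ValueError('missing required property: %s' % key)

    for key, subschema in schema.get('properties', {}).items():
        if key not in instance:
            continue
        if not isinstance(instance[key], TYPES[subschema['type']]):
            raise ValueError('%s: expected %s' % (key, subschema['type']))
        minimum = subschema.get('minimum')
        if minimum is not None and instance[key] < minimum:
            raise ValueError('%s: below minimum %d' % (key, minimum))

    return True
```

A transport layer would call `validate()` on each decoded request body and map `ValueError` to an HTTP 400 response.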
16:43:10 kgriffs: it's intended to be a drop-in replacement (i.e. that's what it first did). I assume it still is, but we don't test that path. https://github.com/openstack/swift/blob/master/swift/common/swob.py#L17
16:44:08 notmyname: ah, interesting. I wonder how much in Pecan would break if you just swapped it in place of webob
16:45:18 transit time to the office. if there are other questions, just ping me in one of those dozen or so channels I'm in
16:45:27 notmyname: will do, thanks!
16:46:02 I'll keep researching Heat-side and I'll speak with Keith Bray before next time.
16:46:26 sounds like a plan
16:46:41 #action alcabrera to keep researching heat, and engage with the team
16:47:01 #action kgriffs to play with swob
16:47:12 #topic Sharding Updates (cpp-cabrera)
16:47:58 alright
16:48:02 lots of great news here
16:48:20 so - we have sharding being tested extensively over at Rackspace.
16:48:50 Here's a report from a test run last week using 3 mongodb shards:
16:48:55 #link http://198.61.239.147:8000/log/20131112-1806/report.html
16:49:38 While the report tells us little about the scaling behavior, it *is* scaling, as we were maxing out a single storage instance at about 2000 rps previously. :)
16:49:42 So - sharding works!
16:49:46 so we peak at just under 4000 shards
16:50:17 well, not 4000 shards... 4000 rps. :)
16:50:19 but yes.
16:50:21 one other data point - I noticed that the standard deviation (so to speak) is a lot smaller under heavy load for shards vs. no shards
16:50:34 so, it gives more consistent performance to users
16:50:38 which is always a good thing
16:50:46 (just ask anyone using AWS)
16:50:50 yup
16:50:59 hahaha
16:51:09 we're better able to hide GC latency this way. :D
16:51:19 I believe the reason is that having more shards adds more "jitter"
16:51:36 as in, more chances for requests to get through in a timely manner without stalling
16:52:03 you know, chaos theory. ;)
16:52:36 This'll be true as long as few heavy hitters end up colocated - the "busy queues are colocated" problem
16:52:41 coined as of this moment. :P
16:53:03 So, other sharding news -
16:53:06 The patches!
16:53:22 We have patches awaiting review, and I've been updating as I go.
16:53:34 Thanks for the review, flaper87!
16:53:39 alcabrera: +1
16:53:59 The entry patch to sharding is:
16:54:01 #link https://review.openstack.org/#/c/54605/
16:54:11 BTW, we've been lacking reviews lately. We should get back to doing a number of reviews per day
16:54:11 The rest follows from there.
16:54:18 flaper87: +1
16:54:28 I think the summit threw us all off our routine.
16:54:30 :D
16:55:18 Hopefully, we can get sharding merged soon. I'm requesting some help with reviews on that.
16:55:20 so that's that for sharding. Any thoughts or questions on this?
16:56:04 +1 wrt reviews
16:56:27 flaper87, alcabrera: I would love it if you guys could also peek at Solum once in a while and do some +1/-1's there
16:56:38 I want to make sure it is aligned with other projects
16:56:46 #link https://review.openstack.org/#/q/status:open+project:stackforge/solum,n,z
16:56:48 sure thing!
16:57:05 wow, solum is busy.
16:57:11 A page of reviews pending!
16:57:23 I will be doing lots of reviews for Marconi this week
16:57:45 alcabrera: looks like you have a -1 here to address
16:57:46 https://review.openstack.org/#/c/54605/
16:58:16 flaper87: I was kind of waiting for you to get to a point where you are ready to +2 alcabrera's patches before I did my final review
16:58:32 kgriffs: yup. I'm trying to DRY up the sharding implementation. It's proving to be a little tricky.
16:58:45 I've addressed all feedback on that one save for the DRYness.
16:58:53 I went over the patches quite a bit, esp. when I filled in a week or so ago for alcabrera while he was out of the office
16:59:01 ok
16:59:10 maybe we leave the DRYness for a follow-up patch?
16:59:15 kgriffs: sounds good! I'll take another look at that patch
16:59:28 it wouldn't hurt to save it for later, IMO.
16:59:40 flaper87: thanks. I just feel like I am too close to the code and need to let it "cool off" before I look again
17:00:08 flaper87: thoughts on saving the DRY refactoring?
17:00:33 (for the near future)
17:00:58 If the DRY changes are not big, I'd prefer seeing them in this patch; otherwise we can let them land in a separate patch
17:01:01 * kgriffs has his eye on the time
17:01:04 oops - I actually didn't upload the patch where I addressed the feedbaclk. Let me do that now... >.>
17:01:13 flaper87: agreed
17:01:14 *feedback
17:01:24 alcabrera: how much churn would the DRY work create?
17:01:40 kgriffs: It would change the implementation for every forwarded method.
17:01:54 ah, *that*
17:02:05 I remember thinking it could be made more DRY
17:02:11 I may have even commented on it at one point
17:02:22 but that's beside the point
17:02:26 hmmm
17:02:28 The pattern currently is: 1) look up a target, 2) if it exists, call a method on a particular controller, 3) if it doesn't, return a value or raise an exception.
17:02:46 1) ... 2a) ... 2b)
17:02:57 * flaper87 just read alcabrera's answer to his comment about that
17:05:03 ok, we are out of time
17:05:03 should we move on to the next topic and address sharding further in #openstack-marconi?
17:05:14 yep, that would do it. :P
17:05:18 hahaha
17:05:41 #action alcabrera to work on sharding feedback and get it approved
17:05:53 #endmeeting
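[Editor's note: the forwarding pattern described at 17:02:28 — 1) look up a target, 2) if it exists, call a method on that shard's controller, 3) if it doesn't, return a value or raise — could be DRYed up roughly as below. This is a hypothetical sketch; names like `ShardedQueueController` and `_lookup` are illustrative, not Marconi's actual code.]

```python
# Sketch of generating the forwarding boilerplate once, instead of
# repeating lookup/forward/fallback in every controller method.

class QueueDoesNotExist(Exception):
    pass


_RAISE = object()  # sentinel meaning "no default: raise instead"


def forward(method_name, default=_RAISE):
    """Build a method that routes a queue operation to its shard."""
    def method(self, queue, **kwargs):
        shard = self._lookup(queue)                          # 1) lookup
        if shard is not None:
            # 2) forward the call to the shard's controller
            return getattr(shard, method_name)(queue, **kwargs)
        if default is not _RAISE:                            # 3a) default
            return default
        raise QueueDoesNotExist(queue)                       # 3b) or raise
    return method


class ShardedQueueController:
    """Routes queue operations to whichever storage shard owns the queue."""

    def __init__(self, catalogue):
        self._catalogue = catalogue  # queue name -> shard controller

    def _lookup(self, queue):
        return self._catalogue.get(queue)

    # Each forwarded method is now a one-liner:
    exists = forward('exists', default=False)
    stats = forward('stats')
```

With this shape, adding a new forwarded method is a single assignment, which is the kind of churn-reducing refactor the follow-up patch discussed above could carry.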