21:00:02 <devkulkarni> #startmeeting Solum Team Meeting
21:00:02 <openstack> Meeting started Tue Jun  9 21:00:02 2015 UTC and is due to finish in 60 minutes.  The chair is devkulkarni. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:06 <openstack> The meeting name has been set to 'solum_team_meeting'
21:00:19 <devkulkarni> #link https://wiki.openstack.org/wiki/Meetings/Solum#Agenda_for_2015-06-09_2100_UTC Agenda for today
21:00:30 <devkulkarni> #topic Roll Call
21:00:36 <devkulkarni> Devdatta Kulkarni
21:00:57 <devkulkarni> hi adrian_otto
21:01:02 <devkulkarni> I just started the meeting..
21:01:12 <adrian_otto> devkulkarni: good. I was going to ask you to chair
21:01:18 <adrian_otto> I am still in transit today.
21:01:42 <james_li> james li
21:01:45 <devkulkarni> ok.. we had discussed last time that you might be out
21:01:56 <devkulkarni> hope you have safe travels
21:02:01 <devkulkarni> hi james_li
21:02:09 <adrian_otto> thanks!
21:02:33 <mkam> Melissa Kam
21:02:46 <devkulkarni> hi mkam
21:03:40 <pritic> Hello, this is Priti Changlani, new to the team.
21:03:46 <devkulkarni> hi pritic
21:03:54 <devkulkarni> glad to have you on the team
21:04:28 <devkulkarni> I am going to continue with the next phase of the meeting.
21:04:43 <james_li> Hi pritic
21:05:00 <devkulkarni> If anyone wants to chime in to mark their presence, please feel free to do so anytime during the meeting
21:05:05 <devkulkarni> #topic Announcements
21:05:08 <datsun180b> ed cranford
21:05:19 <kebray> kebray here.
21:05:23 <datsun180b> i promise i'm here
21:05:23 <devkulkarni> any announcements from anyone
21:05:29 <devkulkarni> hi kebray datsun180b
21:06:23 <devkulkarni> moving on to review action items topic
21:06:29 <devkulkarni> #topic Review Action Items
21:07:02 <james_li> devkulkarni: just saw your new spec: https://review.openstack.org/#/c/189929/3
21:07:17 <devkulkarni> we had two action items for adrian_otto.. but I guess we can carry them over to next week since adrian_otto you are out.. let me know
21:07:37 <devkulkarni> james_li: yeah.. i just submitted it, it is not completely ready yet.
21:08:03 <devkulkarni> to jog our memory, the action items were: 1) adrian_otto to spring clean our blueprints 2) adrian_otto to spring clean our bug list
21:08:31 <james_li> devkulkarni: do we want to send it to the mailing list once you finish writing it?
21:08:45 <devkulkarni> james_li: we could
21:08:50 <james_li> ok
21:08:59 <devkulkarni> I definitely want to get randallburt's opinions on it
21:09:24 <devkulkarni> I am going to carry forward the two action items mentioned above for next time
21:09:32 <devkulkarni> #action adrian_otto to spring clean our blueprints
21:09:40 <devkulkarni> #action adrian_otto to spring clean our bug list
21:09:45 <adrian_otto> tx!
21:10:02 <devkulkarni> thanks adrian_otto
21:10:19 <devkulkarni> #topic BP/Task Review
21:10:43 <devkulkarni> I can talk about the spec that james_li was referring to above.
21:10:54 <devkulkarni> hi gpilz
21:11:00 <gpilz> hi
21:11:15 <devkulkarni> we are in the Task Review topic
21:11:26 <devkulkarni> ok, about the spec --
21:11:45 <devkulkarni> it is about how to support app update without changing the app's URL
21:12:03 <devkulkarni> the basic idea is to use a heat template with load balancer and a server
21:12:16 <james_li> app's endpoint URL
21:12:25 <devkulkarni> yes, that is correct james_li
21:12:59 <devkulkarni> in the spec I have outlined a two-step process that is supposed to achieve the end goal
21:13:19 <devkulkarni> the main constraint to keep in mind is we may have multiple deployers
21:13:44 <devkulkarni> and so need to ensure that race conditions don't lead to incorrect system state
21:14:02 <devkulkarni> (such as, more than two servers being created within the heat stack)
21:14:24 <devkulkarni> please take a look at the spec whenever you get a chance
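For illustration only, a minimal sketch of the shape being described here: a Heat stack in which a load-balancer pool member fronts a single app server, plus the heatclient call a deployer might use to update it. The resource layout, parameter names, and endpoint values are assumptions made for the sketch, not details taken from the spec under review.

```python
# Hedged sketch: resource names, parameters, and endpoints are illustrative,
# not taken from the spec.
from heatclient.client import Client as HeatClient

TEMPLATE = """
heat_template_version: 2013-05-23
parameters:
  image: {type: string}
  flavor: {type: string}
  pool_id: {type: string}
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
  pool_member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [app_server, first_address]}
      protocol_port: 80
"""

# Endpoint and token are placeholders; in Solum the deployer would build the
# client from its own keystone context.
heat = HeatClient('1', endpoint='http://heat.example.com:8004/v1/TENANT',
                  token='AUTH_TOKEN')

# Updating the stack with a new image keeps the load balancer (and therefore
# the app's endpoint URL) in place while the server behind it is replaced.
heat.stacks.update('app-stack-id', template=TEMPLATE,
                   parameters={'image': 'app-image-v2',
                               'flavor': 'm1.small',
                               'pool_id': 'LB_POOL_ID'})
```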
21:14:33 <james_li> so the spec is just focused on a *single* server? will it apply to the case of apps with multiple servers?
21:15:11 <devkulkarni> james_li: I have not considered apps with multiple servers. we will have to add support for multiple servers from the ground up (API layer, worker, etc.)
21:15:30 <adrian_otto> devkulkarni: Magnum has a solution for that
21:15:31 <devkulkarni> we are not there yet in other areas of the code
21:15:41 <devkulkarni> oh nice!
21:15:49 <adrian_otto> so if you deploy into a Magnum pod that might be one less thing to deal with
21:15:54 <devkulkarni> adrian_otto: mind elaborating on it?
21:15:54 <adrian_otto> the actual solution is in Heat
21:16:11 <adrian_otto> it has a new feature that allows for concurrent updates to the same stack
21:16:25 <devkulkarni> adrian_otto: I see..
21:16:52 <devkulkarni> is there any spec/docs that you can share with us on this? I would like to take a look
21:16:56 <adrian_otto> it automatically serializes them so the last one is complete before you get an UPDATE_COMPLETE status back from the heat API.
21:17:08 <adrian_otto> randallburt has details on this one
21:17:14 <adrian_otto> I don't know about docs on it
21:17:14 <devkulkarni> ok cool.
21:17:24 <devkulkarni> I will follow up with randallburt on this
21:17:36 <devkulkarni> thanks for the pointer adrian_otto
21:17:40 <adrian_otto> np
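A sketch of the deployer-side wait implied above: after an update, poll the stack until Heat reports UPDATE_COMPLETE. Whether Heat's serialization of concurrent updates removes the need for the spec's own race-condition handling is exactly the question for randallburt; this only shows the generic polling loop, with arbitrarily chosen timeouts.

```python
import time

from heatclient.client import Client as HeatClient


def wait_for_update(heat, stack_id, timeout=600, interval=5):
    """Poll a stack until its update finishes (UPDATE_COMPLETE) or fails.

    Generic polling sketch; timeout and interval values are arbitrary.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        stack = heat.stacks.get(stack_id)
        if stack.stack_status == 'UPDATE_COMPLETE':
            return stack
        if stack.stack_status.endswith('_FAILED'):
            raise RuntimeError('stack update failed: %s'
                               % stack.stack_status_reason)
        time.sleep(interval)
    raise RuntimeError('timed out waiting for UPDATE_COMPLETE on %s' % stack_id)
```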
21:18:44 <devkulkarni> thanks james_li and adrian_otto for the comments
21:18:50 <adrian_otto> I do remember more about it
21:18:57 <adrian_otto> if you use a ScalingGroup resource
21:19:17 <adrian_otto> you can define a webhook for scaling up the count, and back down
21:19:39 <adrian_otto> you can pass in a desired value of elements to those webhooks
21:19:59 <devkulkarni> and this webhook will be triggered on Solum app update…?
21:20:08 <adrian_otto> so if you have two callers asking for the new count to be "3" that's fine
21:20:37 <adrian_otto> if the goal is to scale it to 0 and then back to a nonzero value
21:20:57 <adrian_otto> then you scale to 0, wait for the count to reach 0, and then adjust it again to the nonzero value.
21:21:22 <devkulkarni> heat takes care of serializing the calls then?
21:21:27 <adrian_otto> yes
21:21:34 <devkulkarni> nice
21:21:42 <adrian_otto> I think you can also indicate which node to kill off in the scale down call
21:21:55 <adrian_otto> so if you wanted you could set the value to n+1
21:22:24 <adrian_otto> then do an n-1 indicating the uuid of the server resource or container resource you want to eliminate.
21:22:46 <adrian_otto> having a scaling group of Magnum containers could make that go really fast.
21:22:56 <adrian_otto> it will soon have support for auto-scaling the bay
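A hedged sketch of the scaling-group pattern described above: an AutoScalingGroup of servers with ScalingPolicy webhooks to adjust the count. Whether a specific desired count (or the uuid of the member to remove) can be passed in the signal body is one of the details to confirm with randallburt; the fragment below only shows the standard Heat resources and a plain webhook POST, with illustrative sizes and image names.

```python
import requests  # used only to hit the pre-signed webhook URLs

# Standard Heat resources; the nested server definition and sizes are
# illustrative, not Solum's actual template.
SCALING_TEMPLATE = """
heat_template_version: 2013-05-23
resources:
  app_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 0
      desired_capacity: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: app-image-v2
          flavor: m1.small
  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: {get_resource: app_group}
      adjustment_type: change_in_capacity
      scaling_adjustment: 1
  scale_down:
    type: OS::Heat::ScalingPolicy
    properties:
      auto_scaling_group_id: {get_resource: app_group}
      adjustment_type: change_in_capacity
      scaling_adjustment: -1
outputs:
  scale_up_url:
    value: {get_attr: [scale_up, alarm_url]}
  scale_down_url:
    value: {get_attr: [scale_down, alarm_url]}
"""

# The alarm_url outputs are pre-signed webhooks; an empty POST triggers the
# policy. Passing a desired count in the body is the part to verify.
scale_up_url = 'https://heat.example.com/v1/.../signal'  # placeholder; read from stack outputs
requests.post(scale_up_url)
```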
21:23:43 <devkulkarni> cool.. how far along is python-magnumclient? could it be used to do these things from within Solum?
21:24:00 <adrian_otto> it's stable enough for that in my view
21:24:24 <adrian_otto> there could be some new API functions coming to support the upcoming Mesos bay type.
21:24:45 <devkulkarni> ok.. maybe we can add a bug/story to our backlog to investigate its usage and possible integration
21:25:21 <devkulkarni> sure.. I guess the current bay types should be fine for solum, right?
21:25:33 <adrian_otto> but the existing API should be enough for what Solum needs. It supports multiple concurrent API versions. So you can even get new versions of it, and keep old ones around without the need to tweak things integrated with older API versions.
21:26:14 <devkulkarni> nice
21:26:19 <adrian_otto> yes, I think the Swarm Bay type is probably enough for the Solum use case. That would give you more control than the Kubernetes bay would
21:26:42 <adrian_otto> so you could have Solum control the LB pool membership
21:27:24 <adrian_otto> we already hit a Docker API on the node we bring up through Heat
21:28:02 <adrian_otto> instead we just bring up a Magnum (swarm) bay, and bring up containers using the Bay's docker API
21:28:10 <adrian_otto> super small change to Solum
21:28:43 <adrian_otto> and would allow us to fall back to using Heat directly (no Magnum) in clouds that have Heat but not Magnum
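A rough sketch of the "super small change" being described: instead of hitting the Docker API on a single Heat-created node, look up a Magnum swarm bay and point the same docker-py client at the bay's endpoint. The magnumclient constructor arguments and the bay attribute used for the Docker endpoint are assumptions about the client of this era, worth checking as part of the solum-magnum investigation.

```python
import docker  # docker-py, the same library Solum already uses per node

from magnumclient.v1 import client as magnum_client

# Constructor kwargs are assumptions; adjust to whatever auth plumbing
# Solum's deployer already has.
magnum = magnum_client.Client(username='solum', api_key='PASSWORD',
                              project_id='TENANT_ID',
                              auth_url='http://keystone.example.com:5000/v2.0')

bay = magnum.bays.get('BAY_UUID')              # bay lookup by uuid
endpoint = 'tcp://%s:2376' % bay.api_address   # attribute name assumed

# From here on this is the same docker-py flow Solum uses today, just pointed
# at the swarm bay instead of an individual Nova server.
dc = docker.Client(base_url=endpoint)
container = dc.create_container(image='registry.example.com/app:v2',
                                command='/start web')
dc.start(container=container.get('Id'))
```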
21:29:07 <devkulkarni> so you are saying that the LB pool is actually a pool of container instances which can be controlled via the swarm bay api
21:29:31 <adrian_otto> yes, that would be preferred, right?
21:29:37 <devkulkarni> but what about situations when the LB pool is actually made up of VM instances
21:29:45 <adrian_otto> that way we could create and destroy them really fast.
21:30:08 <adrian_otto> either way you use Heat.
21:30:25 <adrian_otto> you just decide whether to use the Heat Docker resource or the Heat Magnum resource.
21:30:25 <devkulkarni> sure
21:30:41 <adrian_otto> using an alternate template in each case
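Two hedged fragments for the "alternate template in each case" idea: clouds without Magnum keep the existing contrib Docker resource against a Nova server, while Magnum clouds would swap in a bay-backed resource. The Magnum resource type and its properties shown below are hypothetical, since those Heat resources were still under review at the time.

```python
# Existing approach: Heat's contrib Docker resource pointed at a node's
# Docker endpoint (DockerInc contrib plugin).
DOCKER_ON_VM_FRAGMENT = """
  app_container:
    type: DockerInc::Docker::Container
    properties:
      docker_endpoint: {get_param: docker_endpoint}
      image: {get_param: app_image}
"""

# Hypothetical shape only: the real resource names and properties depend on
# the Heat Magnum resources that were still in review at this point.
MAGNUM_BAY_FRAGMENT = """
  app_bay:
    type: OS::Magnum::Bay          # hypothetical type name
    properties:
      baymodel: {get_param: baymodel_id}
      node_count: 1
"""
```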
21:31:06 <devkulkarni> ok.. at a high-level I think I get what you are suggesting.. will need to dig a little deeper to understand how it will all fit together in solum
21:31:32 <adrian_otto> some contributors from Cisco are working on the Heat Magnum resource(s)
21:31:45 <devkulkarni> let me take an action item to file a bug to investigate solum-magnum integration
21:31:54 <adrian_otto> so sdake should be able to name them if you want to know more than what is up for review in tree now.
21:32:21 <devkulkarni> sure.. I can reach out to sdake to find out the current state of that resource
21:32:52 <adrian_otto> he's traveling today as well, but I expect him back later this week.
21:33:18 <devkulkarni> #action devkulkarni to file a bug to investigate solum-magnum integration outlining the various options, relevant documentation, etc.
21:33:23 <devkulkarni> sure..
21:33:35 <devkulkarni> you guys hang out on #magnum?
21:33:49 <adrian_otto> #openstack-containers
21:33:53 <devkulkarni> ok
21:34:37 <devkulkarni> thanks adrian_otto for all the pointers on this
21:34:46 <adrian_otto> my pleasure
21:35:03 <devkulkarni> are there other tasks/blueprints that we want to discuss today?
21:36:25 <devkulkarni> ok, I will move on to open discussion then..
21:36:35 <devkulkarni> #topic Open Discussion
21:37:49 <adrian_otto> pritic you still here?
21:38:22 <pritic> I am, hi!
21:38:50 <adrian_otto> thanks for joining us today. Would you feel comfortable taking a moment to introduce yourself to the rest of our team?
21:40:22 <pritic> Sure, I am working with the Rackspace Solum QE Team as a summer intern. Originally I am from India, but I have been in the US since last fall for my master's in Computer Science at the University of Florida.
21:40:56 <adrian_otto> excellent. I'm looking forward to working with you.
21:41:01 <pritic> It is my first day today and I am really looking forward to a great summer experience.
21:41:17 <mkam> :D
21:41:30 <adrian_otto> you're lucky to be on such a great team
21:41:43 <adrian_otto> it should indeed be a fun and challenging summer for you
21:41:53 <devkulkarni> +1
21:42:16 <pritic> That's the plan!
21:42:41 * adrian_otto needs to disambark. Catch you later
21:42:48 <devkulkarni> thanks adrian_otto
21:42:51 <adrian_otto> disembark
21:43:00 <devkulkarni> others — anything else for today or should we call it?
21:43:39 <datsun180b> nothing from me
21:43:56 <devkulkarni> ok.. mkam, james_li, gpilz, pritic?
21:44:05 <mkam> I'm good
21:44:10 <james_li> yes
21:44:30 <pritic> I am good. Thanks.
21:44:34 <devkulkarni> ok then.. ending the meeting
21:44:37 <devkulkarni> #endmeeting