16:00:35 <adrian_otto> #startmeeting Solum Team Meeting
16:00:37 <openstack> Meeting started Tue Mar 18 16:00:35 2014 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:40 <openstack> The meeting name has been set to 'solum_team_meeting'
16:00:59 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Solum#Agenda_for_2014-03-18_1600_UTC Our Agenda
16:01:14 <adrian_otto> #topic Roll Call
16:01:17 <adrian_otto> Adrian Otto
16:01:18 <paulmo> Paul Montgomery
16:01:19 <datsun180b> Ed Cranford
16:01:22 <tomblank> tom blankenship
16:01:23 <julienvey> Julien Vey
16:01:31 <gokrokve_> Georgy Okrokvertskhov
16:01:39 <stannie> Pierre Padrixe
16:01:45 <devkulkarni> Devdatta Kulkarni
16:02:07 <noorul> Noorul Islam K M
16:02:18 <coolsvap> Swapnil
16:02:46 <muralia> murali
16:03:37 <adrian_otto> Hi everyone.
16:03:50 <adrian_otto> got all the green beer out of our systems yet?
16:04:02 <devkulkarni> :D
16:04:25 <adrian_otto> ok, we can begin!
16:04:30 <adrian_otto> #topic Announcements
16:04:46 <adrian_otto> last night I proposed a set of adjustments to the Solum core reviewer team
16:05:03 <adrian_otto> #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/030250.html
16:05:46 <adrian_otto> so that should wrap up today, and I will adjust the settings in Gerrit and LP
16:06:13 <adrian_otto> Any thoughts on the core reviewer subject?
16:07:02 <devkulkarni> is it now a 24-hour core team?
16:07:04 <devkulkarni> :)
16:07:09 <adrian_otto> yes, in fact
16:07:12 <datsun180b> i've got no problem with it
16:07:14 <rajdeep> +1 on the changes
16:07:47 <tomblank> +1
16:07:49 <adrian_otto> one issue we were seeing is that patches from contributors outside of the US timezones had to wait a long time to merge
16:08:06 <noorul> I agree and I think this will help
16:08:16 <adrian_otto> so this new team will help address that
16:08:31 <devkulkarni> +1
16:08:36 <aratim> +1 on the new core reviewers
16:08:40 <adrian_otto> it's pretty well balanced I think in terms of geography and affiliations
16:09:04 <adrian_otto> we are rather close to M1
16:09:26 <adrian_otto> Roshan was kind enough to draft a vision statement for the M1 exit criteria
16:09:28 <adrian_otto> https://wiki.openstack.org/wiki/Solum/Milestone1
16:10:01 <devkulkarni> The steps need to be revisited.
16:10:01 <adrian_otto> so during open discussion today we can critique that, and see if any tweaks are appropriate
16:10:07 <devkulkarni> ok.
16:10:19 <adrian_otto> devkulkarni: noted
16:10:37 <adrian_otto> #topic Review Action Items
16:10:45 <adrian_otto> adrian_otto to draft mission statement and solicit input from interested contributors (gokrokve, devkulkarni)
16:10:57 <adrian_otto> I did make a rough draft at https://etherpad.openstack.org/p/solum-mission
16:11:11 <adrian_otto> but I would like to get input on that, and suggested alternatives
16:11:16 <devkulkarni> I haven't had a chance to think about it
16:11:19 <noorul> Ops usually deploys binary artifact
16:11:34 <noorul> User = application developer building app, or ops engineer deploying/monitoring the app
16:11:53 <adrian_otto> noorul: yes, let's circle back to that in a moment after blueprint updates
16:12:08 <adrian_otto> adrian_otto to locate Nova blueprints/reviews on libvirt driver support for Docker
16:12:27 <adrian_otto> I did not find this. I admit I did not look very hard either
16:12:41 <adrian_otto> so I will carry this forward without closing the item
16:12:41 <gokrokve_> If I am not mistaken, the Docker plugin was removed from the Nova repo.
16:12:47 <adrian_otto> #action adrian_otto to locate Nova blueprints/reviews on libvirt driver support for Docker
16:13:01 <devkulkarni> Is libvirt driver support for Docker going to land in Icehouse?
16:13:02 <adrian_otto> gokrokve_: yes, it was. I think that happened last week.
16:13:08 <julienvey> gokrokve_: Yes, it's in Stackforge now
16:13:19 <devkulkarni> gokrokve_: that is different, if I understand it correctly
16:13:20 <adrian_otto> julienvey: can you supply a link?
16:13:37 <julienvey> https://github.com/stackforge/nova-docker
16:13:55 <adrian_otto> #link https://github.com/stackforge/nova-docker
16:13:57 <gokrokve_> Was Devstack support also removed?
16:13:57 <devkulkarni> Docker plugin has been removed, but integration via libvirt was another path (at least thats what I thought in our last meeting)
16:14:12 <julienvey> devkulkarni: +1
16:14:28 <devkulkarni> gokrokve_: what do you mean? (most likely, yes)
16:14:57 <gokrokve_> When devstack installs nova most probably it will not install docker.
16:15:27 <devkulkarni> yeah, that is what I assume will happen
16:15:35 <gokrokve_> Why do we need libvirt docker support? We plan to use Heat for VM creation, and Heat will use nova for that.
16:15:55 <adrian_otto> gokrokve_: we want instances that launch very fast
16:16:16 <adrian_otto> so we like the idea of having Nova be able to produce instances based on container technology
16:16:18 <devkulkarni> But Nova with libvirt driver that has Docker support will provide us fast launches as mentioned above
16:16:25 <noorul> https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg19119.html
16:17:03 <coolsvap> docker has been removed from devstack as well
16:17:12 <rajdeep> or we could explore using plain lxc for the time being
16:17:20 <devkulkarni> thanks coolsvap for confirming
16:17:20 <stannie> +1 rajdeep
16:17:33 <stannie> and we've a lxc driver already available in libvirt
16:17:47 <rajdeep> not sure what additional benefit docker provides over lxc for solum use case
16:17:48 <gokrokve_> I am not sure how it will work with Heat.
16:18:01 <rajdeep> unless you want to use docker images
16:18:03 <noorul> But how are we going to use heroku app in this case?
16:18:05 <devkulkarni> -1 to plain lxc (we don't have build scripts/steps for those)
16:18:18 <adrian_otto> rajdeep: good question. It's container images. That's the difference
16:18:19 <gokrokve_> I like containers if they are abstracted on Heat level.
16:18:26 <noorul> docker provides lang pack alternative
16:18:44 <rajdeep> perhaps we should have container abstract layer
16:18:47 <rajdeep> like cloud foundry
16:18:51 <adrian_otto> lxc gives you a way to start a process, but does not give you a way to do incremental releases of code to start those processes
16:18:58 <rajdeep> they are not tied to a container implementation
16:19:09 <devkulkarni> +1 to noorul's points
16:19:33 <adrian_otto> rajdeep: we already have an abstraction through Heat+Nova
16:19:56 <adrian_otto> and there is a place to plug in multiple hypervisor tech (or containers) behind that
16:20:00 <rajdeep> my concern is tight coupling between docker and solum
16:20:10 <rajdeep> might not be a great idea in the long run
16:20:17 <noorul> rajdeep: that tight coupling is for m1 I suppose
16:20:17 <gokrokve_> rajdeep: Agree.
16:20:24 <adrian_otto> it's really not coupling with Docker so much as coupling with an image format for containers
16:20:57 <aratim> Can we use the Docker plugin for Heat ? #link https://github.com/dotcloud/openstack-heat-docker
16:21:01 <gokrokve_> Solum should state clearly that there will be other options via plugins
16:21:04 <adrian_otto> and until we have a multitude of choices for that, I don't see the harm in selecting one as the default format
16:21:10 <aratim> It does not use Nova though.
16:21:13 <devkulkarni> aratim: that bypasses Nova right?
16:21:23 <aratim> yes thats right
16:21:36 <noorul> gokrokve_: We can also use image builder
16:21:37 <adrian_otto> aratim: that's a possibility too, but we prefer not to do that
16:22:05 <noorul> gokrokve_: But then for m1 it requires more work
16:22:14 <gokrokve_> noorul: Yes. But it should not be a custom fix in Solum.
16:22:15 <adrian_otto> using a Docker plugin for Heat would be a nice last resort if we can't put something under nova
16:22:26 <gokrokve_> noorul: Agree. For M1 it will be an overkill.
16:22:27 <julienvey> http://blog.docker.io/2014/03/docker-will-be-in-openstack-icehouse/
16:22:37 <aratim> yeah bypassing Nova would not be a great approach
16:23:30 <gokrokve_> I think this is up to docker how to work with openstack. As long as they keep the Heat resource consistent we should not care how it works under the hood.
16:24:08 <gokrokve_> Still it can be an obstacle on Solum incubation.
16:24:24 <rajdeep> lxc is integrated into nova
16:24:25 <adrian_otto> We will discuss this at the Solum Summit
16:24:26 <rajdeep> http://docs.openstack.org/trunk/config-reference/content/lxc.html
16:24:27 <adrian_otto> https://etherpad.openstack.org/p/SolumRaleighCommunityWorkshop
16:24:30 <devkulkarni> why do you say that? if it comes after Icehouse I don't see that to be an issue
16:26:29 <devkulkarni> I guess we need a path forward in the interim. My preference order is 1) disk-image-builder 2) Docker Heat plugin 3) LXC
16:27:20 <devkulkarni> We are already using approach 1 (using paulczar's build_app scripts).
16:27:21 <gokrokve_> I saw some parts of disk-image-builder implemented. Can we use it for M1?
16:27:51 <devkulkarni> gokrokve_: yes, we are doing that. we use Docker to build the app but then inject it in the VM image
16:27:57 <adrian_otto> gokrokve_: we can, provided the VM images we use are small enough
16:28:02 <devkulkarni> and upload VM image to glance, which is then spun up by Heat
16:28:16 <gokrokve_> Cool.
16:28:25 <adrian_otto> https://wiki.openstack.org/wiki/HypervisorSupportMatrix
16:28:53 <adrian_otto> that shows the gaps in LXC features (snapshot is a very telling row)
16:29:17 <adrian_otto> ok, so let's advance through our agenda a bit
16:29:32 <adrian_otto> #topic Review Blueprints: https://launchpad.net/solum/+milestone/milestone-1
16:29:44 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/api Solum API (aotto)
16:29:57 <devkulkarni> +1 to snapshots requirement
16:30:09 <adrian_otto> all API functionality needed for M1 has been merged, or is in review.
16:30:13 <adrian_otto> anything I missed?
16:30:19 <devkulkarni> yes..
16:30:35 <devkulkarni> we need julienvey's lp API code to be merged
16:30:45 <devkulkarni> don't know whether that has been merged yet or not
16:30:50 <julienvey> not yet
16:31:04 <devkulkarni> ok, lets try to get that in soon. we need it.
16:31:04 <adrian_otto> ok, link the review permalinks here, and I will prioritize reviewing that code
16:31:28 <julienvey> https://review.openstack.org/#/q/status:open+project:stackforge/solum+branch:master+topic:new-crud-lp,n,z
16:31:52 <adrian_otto> ok, thanks
16:32:00 <devkulkarni> paulczar: you missed lots of interesting discussion ;)
16:32:07 <adrian_otto> julienvey: be ready for a number of iterations of feedback
16:32:15 <julienvey> I'm ready :)
16:32:21 <paulczar> sorry, had a candidate call I had to take
16:32:38 <adrian_otto> paulczar: check the transcripts when we adjourn
16:32:59 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/solum-minimal-cli Command Line Interface for Solum (devdatta-kulkarni)
16:33:18 <devkulkarni> muralia is working on adding 'status' field and adding lp commands
16:33:22 <adrian_otto> we had a pypy breakdown that jammed up all of CI for the python-solumclient
16:33:29 <adrian_otto> I merged a fix yesterday that solved that
16:33:40 <noorul> devkulkarni: lp commmands for m1?
16:33:48 <devkulkarni> the 'status' is for 'assembly create' (since it will be async now — thanks to datsun's work)
16:34:05 <devkulkarni> noorul: need to update the bp/etherpad/wiki for this
16:34:20 <muralia> im working on that noorul. To register and get a list of lp's
16:34:22 <devkulkarni> adrian_otto: thanks for fixing that
16:34:27 <datsun180b> i feel like i owe an explanation for the rpc services
16:34:33 <adrian_otto> on the subject of STATUS attributes
16:34:49 <adrian_otto> I suggest that all status values be in UPPERCASE
16:35:04 <adrian_otto> and that we use the values consistently among multiple resources
16:35:10 <devkulkarni> fine by me.
16:35:20 <adrian_otto> so that we don't just make them up arbitrarily
16:35:29 <muralia> +1.
16:35:30 <aratim> +1
16:35:35 <devkulkarni> datsun180b: may be you can give the update/explanation when we come to the last bp update for today
16:35:45 <datsun180b> right, i'll wait my turn
16:36:23 <adrian_otto> ok, I look forward to the status field feature finishing up, I recognize that as a really important one.
16:36:25 <devkulkarni> that is all adrian_otto on the CLI bp for updates
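[Editor's note: the UPPERCASE, shared-across-resources status convention agreed above can be sketched as follows. The specific status names and the helper are illustrative assumptions, not Solum's actual values.]

```python
# One shared set of UPPERCASE status values, reused across resources
# (assembly, component, ...) instead of ad-hoc per-resource strings.
# The value names here are examples only, not Solum's real statuses.
VALID_STATUSES = ('PENDING', 'BUILDING', 'DEPLOYING', 'READY', 'ERROR')


def normalize_status(value):
    """Uppercase a status string and reject values outside the shared set."""
    status = value.strip().upper()
    if status not in VALID_STATUSES:
        raise ValueError('unknown status: %r' % value)
    return status


print(normalize_status('pending'))  # PENDING
```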
16:36:37 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/solum-git-pull Pull integration of Solum from an external Git repo (kraman)
16:37:04 <devkulkarni> about this — the main thing has been generation of trigger_url and where do we store trigger_id
16:37:21 <adrian_otto> this was a subject of discussion in #solum yesterday
16:37:24 <devkulkarni> julienvey, aratim, adrian_otto, and myself discussed this yesterday
16:37:53 <devkulkarni> and we have agreed to a solution where we will define a new table <plan_id, hook_id, assembly_id>
16:37:55 <adrian_otto> devkulkarni: did you get a chance to reflect on this further?
16:37:56 <devkulkarni> aratim is working on this
16:38:19 <devkulkarni> adrian_otto: not yet. need to check if asalkeld replied to the comments on aratim's patch
16:38:44 <adrian_otto> aratim: is there a review posted for this WIP yet?
16:38:53 <devkulkarni> adrian_otto: lot of good options came up in yesterday's discussion though
16:38:54 <adrian_otto> if so, let's reference that here
16:39:25 <devkulkarni> that is all as far as updates go
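[Editor's note: the <plan_id, hook_id, assembly_id> mapping table agreed above might look roughly like this, using sqlite3 for illustration. The schema, column names, and trigger URL shape are assumptions, not the actual Solum migration.]

```python
import sqlite3
import uuid

# A POST to the trigger_url carries hook_id, which resolves back to
# the plan to (re)build and, later, the assembly to update.
db = sqlite3.connect(':memory:')
db.execute('''CREATE TABLE plan_trigger (
                  plan_id     TEXT NOT NULL,
                  hook_id     TEXT PRIMARY KEY,
                  assembly_id TEXT)''')


def register_trigger(plan_id):
    """Store a new hook_id for a plan and return its trigger URL."""
    hook_id = uuid.uuid4().hex
    db.execute('INSERT INTO plan_trigger (plan_id, hook_id) VALUES (?, ?)',
               (plan_id, hook_id))
    return 'http://solum.example.com/v1/triggers/%s' % hook_id  # hypothetical URL


def plan_for_hook(hook_id):
    """Look up which plan a webhook invocation belongs to."""
    row = db.execute('SELECT plan_id FROM plan_trigger WHERE hook_id = ?',
                     (hook_id,)).fetchone()
    return row[0] if row else None
```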
16:39:38 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/specify-lang-pack Specify the language pack to be used for app deploy (devdatta-kulkarni)
16:39:48 <devkulkarni> this we kind of covered.
16:40:01 <adrian_otto> ok, next...
16:40:02 <devkulkarni> julienvey and muralia are tackling the remaining lp actions
16:40:02 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/logging Logging Architecture (paulmo)
16:40:32 <devkulkarni> paulmo: I feel this done, right?
16:40:44 <paulmo> The trace/logging code was merged.  Just needs to be used throughout the code.
16:40:53 <adrian_otto> should we be marking this BP as completed?
16:41:07 <paulmo> Sure, we can make a new one for other features.
16:41:10 <adrian_otto> because there is scope in the BP, such as structured logging...
16:41:30 <adrian_otto> I'm not clear on what we completed, and what's still planned
16:41:34 <paulmo> Sure, we have structured logging ability now.
16:41:55 <adrian_otto> excellent, and what about documentation for using that?
16:42:02 <paulmo> We identify confidential (operator-only) data, we can structure log output like JSON, etc...
16:42:27 <paulmo> Blueprint, the tests run on the code provide good examples, and I had an example file as part of the original pull request for folks to look at.
16:42:35 <paulmo> (and I'm happy to help anyone that wants to learn more)
16:43:40 <adrian_otto> thanks paulmo
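[Editor's note: a minimal sketch of the structured JSON logging paulmo describes, including a confidential (operator-only) marker. The formatter and field names are illustrative, not Solum's actual logging API.]

```python
import json
import logging


class JSONFormatter(logging.Formatter):
    """Emit each record as a JSON object instead of a plain string."""

    def format(self, record):
        entry = {'level': record.levelname, 'message': record.getMessage()}
        # Structured data rides along via extra={'context': {...}};
        # a 'confidential' key could flag operator-only fields.
        context = getattr(record, 'context', None)
        if context:
            entry.update(context)
        return json.dumps(entry, sort_keys=True)


handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
log = logging.getLogger('solum.example')
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info('assembly created',
         extra={'context': {'assembly_id': 'a1', 'confidential': False}})
```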
16:43:53 <adrian_otto> #link https://blueprints.launchpad.net/solum/+spec/deploy-workflow Workflow outlining deployment of a DU (asalkeld/devdatta-kulkarni)
16:44:04 <devkulkarni> datsun180b your chance now :)
16:44:26 <datsun180b> where to start
16:44:58 <devkulkarni> high-level should be fine — what RPC services have you added, how asalkeld is using them, etc.
16:45:00 <datsun180b> in order to desync the build and deploy process i've built three rpc services in a sort of notification setup
16:45:29 <devkulkarni> cool
16:45:41 <datsun180b> so there's deployer, conductor, and worker. worker is builder-api's man-about-town, freeing up the latter to respond immediately to its requests
16:46:00 <adrian_otto> is the conductor a dispatcher?
16:46:23 <datsun180b> conductor's name parallels its cousin in trove, it's to facilitate status updates
16:46:25 <adrian_otto> the name conductor is unfortunate, as OpenStack has a number of things called that, all of which do different things
16:46:42 <adrian_otto> supervisor?
16:47:12 <adrian_otto> anyway, just think on that for later
16:47:14 <adrian_otto> continue
16:47:16 <datsun180b> we're not married to the names until the service_enable lines get into infra
16:47:27 <devkulkarni> adrian_otto: after datsun180b is done do you want to give a follow up on the trollius issue, how it relates to datsun180b's work, what is the py33 issue, py33 gating email from yesterday, etc.?
16:48:24 <datsun180b> well i don't know what more to explain right now. the basic rundown is each of the services has their ear to their own topic, populated by another service that we'd prefer not to block until these processes are complete
16:48:29 <devkulkarni> +1 to datsun180b — we can change the names
16:48:48 <adrian_otto> relevant thread: http://lists.openstack.org/pipermail/openstack-dev/2014-March/030230.html
16:48:50 <adrian_otto> #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/030230.html
16:48:51 <devkulkarni> datsun180b: cool. that is great information.
16:49:02 <datsun180b> and as the services can respond before a task is complete, necessarily we'll need to convey the intermediate states, hence murali's recent status work
16:49:14 <adrian_otto> datsun180b: are you up-to-date on the content of that ML thread?
16:49:17 <noorul> datsun180b: Can the patch be split into three?
16:50:09 <datsun180b> noorul: i'm willing to negotiate, a lot of it's repeated in triplicate
16:50:17 <datsun180b> adrian_otto: let's say 80%
16:51:14 <datsun180b> now at present if i'm read up these services are using eventlet and that won't fly with 3.3, which is the cue for the trollius work, right adrian_otto?
16:51:35 <adrian_otto> datsun180b: yes.
16:52:01 <adrian_otto> and the new code in oslo.messaging is not considered "great"
16:52:06 <datsun180b> okay, so then i'm at 100%
16:52:18 <adrian_otto> even by Victor Stinner (author)
16:52:19 <datsun180b> so how to move forward from here?
16:52:44 <adrian_otto> I am happy using a subprocess listening to a topic for now.
16:53:05 <adrian_otto> but I'm open to criticism if there is a better clear answer that works equally well for py26 and py33
16:53:23 <datsun180b> oh that's another thing, i don't intend to pour cement around these agents. in the future i'd like for them possibly to manage their own pools
16:53:56 <datsun180b> at present i suppose scaling would just be a matter of spawning extra services, and that's probably not the best long-term solution
16:53:57 <adrian_otto> datsun180b: that might be a suitable job for nodepool
16:54:12 <paulczar> datsun180b: it's easy enough to operationalize running multiple instances of a binary … so I'm not sure we really need to solve that in solum itself
16:54:25 <datsun180b> with conductor (facilitating db updates) it should be easier to move these agents away from the host box
16:54:40 <paulczar> especially if this is a shorter term solution and we'll circle back to threading when the py3.3 stuff settles with oslo
16:55:01 <datsun180b> so i think i've covered the basics, anything else i can explain?
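[Editor's note: the deployer/conductor/worker layout datsun180b describes, where each service listens on its own topic so the API can hand off work without blocking, can be approximated with stdlib queues standing in for oslo.messaging topics. Service names, statuses, and flow are illustrative.]

```python
import queue
import threading

worker_topic = queue.Queue()     # stand-in for the worker's RPC topic
conductor_topic = queue.Queue()  # stand-in for the conductor's RPC topic
statuses = {}                    # stand-in for DB rows the conductor updates


def worker():
    """Consume build requests; report intermediate states via the conductor."""
    while True:
        assembly_id = worker_topic.get()
        if assembly_id is None:  # shutdown sentinel
            break
        conductor_topic.put((assembly_id, 'BUILDING'))
        conductor_topic.put((assembly_id, 'READY'))  # build "finishes"


def conductor():
    """Facilitate status updates (the db-writing role described above)."""
    while True:
        msg = conductor_topic.get()
        if msg is None:
            break
        assembly_id, status = msg
        statuses[assembly_id] = status


threads = [threading.Thread(target=worker), threading.Thread(target=conductor)]
for t in threads:
    t.start()

worker_topic.put('assembly-1')  # the API "casts" this and returns immediately
worker_topic.put(None)
threads[0].join()
conductor_topic.put(None)
threads[1].join()
print(statuses)  # {'assembly-1': 'READY'}
```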
16:55:23 <adrian_otto> time for open discussion I think
16:55:29 <devkulkarni> noorul had issues with turning off py33 gates (making non-voting). have we resolved that?
16:55:37 <adrian_otto> #topic Open Discussion
16:55:51 <devkulkarni> noorul ^^
16:56:15 <noorul> devkulkarni: it has been made non-voting
16:56:23 <devkulkarni> ok, great !
16:56:44 <devkulkarni> adrian_otto: the steps for M1 — they don't seem to align with what we have been working towards
16:56:59 <devkulkarni> Step 1: Register plan
16:57:18 <adrian_otto> from earlier noorul mentioned in the context of https://wiki.openstack.org/wiki/Solum/Milestone1 "Ops usually deploys binary artifact
16:57:18 <adrian_otto> User = application developer building app, or ops engineer deploying/monitoring the app"
16:57:21 <devkulkarni> After this user can either do git push or call assembly create
16:57:36 <devkulkarni> oh yeah, lets discuss that first
16:57:59 <adrian_otto> we might not have enough time to do it justice today
16:58:15 <devkulkarni> ok. are we having meeting next week?
16:58:30 <paulczar> we'll be meeting in person next week!
16:58:31 <julienvey> adrian_otto: can we share an etherpad on this ?
16:58:40 <adrian_otto> good point
16:58:54 <adrian_otto> julienvey:  please make one and link here
16:58:58 <julienvey> yes
16:59:17 <adrian_otto> paulczar: we should cancel the team meeting though
16:59:25 <julienvey> #link https://etherpad.openstack.org/p/solum-milestone1
16:59:32 <adrian_otto> I will be on my flight
16:59:37 <paulczar> true!
16:59:52 <devkulkarni> what are the timings of the summit?
17:00:03 <adrian_otto> any objections to cancel of 3/24 meeting?
17:00:13 <devkulkarni> no
17:00:15 <adrian_otto> Summit is all day Tue-Wed
17:00:20 <tomblank> no...
17:00:21 <adrian_otto> ok
17:00:29 <adrian_otto> #agreed no meeting on 2014-03-24
17:00:37 <adrian_otto> will meet at Solum Summit
17:00:39 <julienvey> 25 actually
17:00:41 <adrian_otto> thanks everyone
17:00:48 <adrian_otto> #endmeeting