19:00:01 <amitgandhinz> #startmeeting Poppy Weekly Meeting
19:00:02 <openstack> Meeting started Thu Oct  2 19:00:01 2014 UTC and is due to finish in 60 minutes.  The chair is amitgandhinz. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:06 <openstack> The meeting name has been set to 'poppy_weekly_meeting'
19:00:11 <amitgandhinz> #topic Roll Call
19:00:17 <amitgandhinz> o/
19:00:22 <obulpathi> o/
19:00:26 <tonytan4ever> o/
19:00:58 <malini1> o/
19:01:12 <amitgandhinz> #link agenda https://wiki.openstack.org/wiki/Meetings/Poppy
19:01:45 <ametts> o/
19:02:14 <amitgandhinz> #link: http://eavesdrop.openstack.org/meetings/poppy_weekly_meeting/2014/poppy_weekly_meeting.2014-09-25-19.00.html
19:02:24 <amitgandhinz> #topic Review Last Week's Items
19:02:52 <amitgandhinz> amitgandhinz to investigate MaxCDN CDN Manager API for master/sub accounts
19:02:55 <amitgandhinz> yeah
19:03:02 <amitgandhinz> i should just table this one lol
19:03:22 <amitgandhinz> actually let me create a bp for it and unassign my name from it
19:03:28 <amitgandhinz> i don't know when i'm going to get to this
19:03:43 <obulpathi> that's a good idea for long-running tasks
19:03:55 <obulpathi> or long-waiting tasks
19:03:56 <amitgandhinz> megan_w_ to get a MaxCDN point of contact
19:04:04 <amitgandhinz> megan_w is out today
19:04:12 <amitgandhinz> anyone get the PoC from her?
19:04:15 <malini1> no
19:04:23 <tonytan4ever> not yet.
19:04:36 <catherine_> can assign her all the things :)
19:04:41 <amitgandhinz> hehe
19:04:44 <obulpathi> haha
19:04:45 <malini1> :D
19:04:54 <amitgandhinz> she is back on monday then out again next week
19:04:54 <tonytan4ever> I am waiting for the MaxCDN PoC from her too.
19:05:04 <amitgandhinz> so if you want this then get on her case all day monday
19:05:08 <amitgandhinz> ping her every 5 minutes
19:05:11 <malini1> we created that action for you tonytan4ever ;)
19:05:31 <amitgandhinz> #action megan_w_ to get a MaxCDN point of contact for tonytan4ever
19:05:54 <amitgandhinz> #topic bp Updates
19:06:08 <amitgandhinz> #link https://blueprints.launchpad.net/poppy
19:06:23 <amitgandhinz> get-service
19:06:31 <amitgandhinz> tonytan4ever:  update?
19:06:39 <tonytan4ever> That one has been merged.
19:06:55 <tonytan4ever> Now I am just waiting for bugs to emerge and fix them if any.
19:07:04 <amitgandhinz> ok
19:07:09 <amitgandhinz> obulpathi: list-services
19:07:19 <obulpathi> it is in progress
19:07:25 <obulpathi> in review
19:07:36 <malini1> I have API tests for the GET service.. But they are currently failing at the gate because they run against the mock DB :(
19:07:42 <obulpathi> had a bug, fixed that and incorporating review comments
19:07:46 <amitgandhinz> ok
19:07:56 <amitgandhinz> tonytan4ever: store-provider-details
19:08:00 <malini1> obulpathi: have you already fixed the 500s?
19:08:18 <tonytan4ever> That one has been done along with the create-service bp
19:08:18 <obulpathi> malini1: no not yet
19:08:25 <obulpathi> I am still testing other stuff
19:08:46 <tonytan4ever> So it has been merged.
19:09:02 <amitgandhinz> nitin is not around, so no update on the sqla driver
19:09:10 <amitgandhinz> we do need this for the gate
19:09:24 <amitgandhinz> has anyone heard from him recently?
19:09:29 <obulpathi> nope
19:09:30 <malini1> no
19:09:44 <tonytan4ever> nothing from me.
19:10:04 <amitgandhinz> i'm tempted to unassign it from him until he comes back
19:10:18 <amitgandhinz> i don't think he has started it yet, so....
19:10:21 <malini1> amitgandhinz: that sounds reasonable
19:10:27 <obulpathi> +1
19:10:37 <tonytan4ever> sure.
19:10:49 <amitgandhinz> miqui: update on add-docstrings?
19:11:27 <obulpathi> miqui won't be able to attend
19:11:38 <malini1> miqui has already started on it
19:11:54 <obulpathi> yep
19:11:59 <amitgandhinz> ok
19:12:03 <amitgandhinz> i'll keep it as started
19:12:16 <amitgandhinz> tonytan4ever: delete-service?
19:12:36 <tonytan4ever> This one is in progress.
19:13:01 <tonytan4ever> I am actively working on it, and should be rolling out a PR in a few days.
19:13:08 <amitgandhinz> awesome
19:13:21 <amitgandhinz> amitgandhinz: dns-driver
19:13:25 <amitgandhinz> oooh thats me =)
19:13:36 <amitgandhinz> so i started investigating this one
19:13:47 <amitgandhinz> looked at Rackspace Cloud DNS, which has a 500-record limit
19:14:07 <amitgandhinz> so im trying to figure out how this would scale out for many customers
19:14:16 <obulpathi> 500 record limit per domain?
19:14:19 <amitgandhinz> still investigating...
19:14:22 <amitgandhinz> per account
19:14:28 <obulpathi> ok
19:14:34 <malini1> do we need to look at designate as well?
19:14:45 <amitgandhinz> designate?
19:15:00 <amitgandhinz> oooh
19:15:07 <malini1> Openstack Designate https://wiki.openstack.org/wiki/Designate
19:15:12 <amitgandhinz> nice
19:15:23 <amitgandhinz> yes, it would make sense to build a designate driver also
19:15:47 * amitgandhinz malini1 is the openstack guru
19:15:54 <malini1> doesn't look like they are incubated
19:15:58 <malini1> but neither are we :D
19:16:18 <malini1> I googled it right before posting the link ;)
19:16:25 <amitgandhinz> hehe
19:16:31 <obulpathi> hahaha .. malini1 is smart
19:16:50 <obulpathi> maybe we should ping them and see if we can collaborate?
19:16:53 <amitgandhinz> ok ok....
19:17:04 <amitgandhinz> obulpathi: patch-service
19:17:20 <obulpathi> I started it, but no progress as I am working on list
19:17:35 <amitgandhinz> can i remove your name for now, and reassign when you pick it up?
19:17:40 <obulpathi> sure
19:18:04 <amitgandhinz> obulpathi: mock-cassandra
19:18:22 <obulpathi> I made good progress
19:18:40 <obulpathi> and submitted a work-in-progress patch
19:19:15 <obulpathi> will work on it once I finish my current tasks
19:19:33 <obulpathi> which one should I go after list? mock-cassandra or patch?
19:19:33 <amitgandhinz> ok.  since it is a low priority bp, right now i will mark it as deferred
19:19:37 <obulpathi> ok
19:19:39 <amitgandhinz> we can pick it back up later
19:19:49 <obulpathi> ok
19:19:57 <amitgandhinz> i think list and patch are more important
19:20:48 <obulpathi> list is almost done, will start working on patch, once I finish list
19:20:53 <amitgandhinz> cool
19:20:59 <amitgandhinz> malini1: gate-cassandra
19:21:09 <malini1> I just started looking at it today
19:21:26 <malini1> Cassandra is not part of the official deb repo
19:21:53 <malini1> But as long as we are in stackforge, we can install cassandra from 3rd party repos
19:22:12 <malini1> Right now I am going through a bunch of yaml, figuring out how to do that
19:22:27 <malini1> it's not a lot of fun - but I am making progress
19:23:18 <malini1> that's all I have.. next bp :-P
19:23:44 <amitgandhinz> and the last one is...
19:23:53 <amitgandhinz> miqui: home-doc
19:24:24 <amitgandhinz> since he isn't here right now... this discussion just happened on the poppy channel
19:24:31 <obulpathi> we don't have any updates from miqui on this bp
19:24:41 <amitgandhinz> basically the home doc got neglected and we need to add the newer endpoints to it
19:24:50 <obulpathi> ok
19:25:01 <amitgandhinz> #topic New Items
19:25:09 <amitgandhinz> no new items on the agenda
19:25:18 <amitgandhinz> #topic Open Discussion
19:25:30 <amitgandhinz> anyone have anything they want to discuss?
19:25:46 <obulpathi> testing cassandra with tox
19:25:59 <amitgandhinz> obulpathi: the stage is yours
19:26:12 <obulpathi> malini1: can you please shed some light on how to do that?
19:26:16 <amitgandhinz> #topic testing cassandra with tox
19:26:33 <malini1> obulpathi: my battery is down :(
19:26:51 <malini1> hmmm.. you want to talk about API tests with cassandra, right?
19:27:05 <obulpathi> as suggested by amitgandhinz, if we can run the tox tests and pass configuration parameters to tox as command-line parameters
19:27:07 <amitgandhinz> malini1 needs some apple juice ;-)
19:27:08 <obulpathi> it would be great
19:28:11 <obulpathi> this way we don't need to change the config file
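The approach obulpathi describes could be sketched roughly like this (a hypothetical illustration only; the function name, environment variable, and `tox` invocation are assumptions, not anything Poppy settled on):

```python
import os

def cassandra_host(environ=None):
    """Resolve the Cassandra host for the API tests.

    Hypothetical sketch: instead of editing a checked-in config file,
    the gate (or a dev laptop) passes the value in from outside, e.g.:

        CASSANDRA_HOST=127.0.0.1 tox -e py27

    and the test setup reads it here, falling back to a local default.
    """
    if environ is None:
        environ = os.environ
    return environ.get("CASSANDRA_HOST", "localhost")
```

An environment variable is one option; tox can also forward positional arguments given after `--` to the test command via its `{posargs}` substitution, which would work similarly.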
19:28:22 <malini1> let's add a bp for that & figure out how to make it work at the gate & on dev laptops
19:28:28 <obulpathi> great :)
19:28:39 <amitgandhinz> obulpathi: can you make the bp
19:28:43 <obulpathi> and I promise to buy you apple juice after that :D
19:28:49 <obulpathi> sure
19:28:54 <obulpathi> amitgandhinz: sure
19:30:05 <amitgandhinz> ok any other topics?
19:30:32 <malini1> I believe obulpathi wanted to talk about the mock provider
19:30:33 <tonytan4ever> Can I talk about the status on poppy service ?
19:30:56 <malini1> sure tonytan4ever
19:31:00 <amitgandhinz> #topic poppy service status
19:31:07 <amitgandhinz> floor is yours
19:31:16 <tonytan4ever> We define the poppy service with 3 different statuses:
19:31:31 <tonytan4ever> creating, deployed, and delete_in_progress.
19:32:06 <tonytan4ever> obviously each status is based on corresponding provider's service status.
19:32:14 <malini1> we also have an unknown status, right?
19:32:28 <tonytan4ever> I don't think the unknown status is useful.
19:32:33 <malini1> +1
19:33:01 <tonytan4ever> After I implemented all these endpoints, I found only those three statuses make sense.
19:33:14 <amitgandhinz> what about updating?
19:33:36 <malini1> creating can become in_progress
19:33:37 <tonytan4ever> If a service gets created in poppy, it is in creating status.
19:33:49 <tonytan4ever> I will explain in_progress in a minute.
19:34:42 <tonytan4ever> when updating, the service should still be in deployed status, unless we need to use an "updating" status to do something with a client.
19:35:27 <tonytan4ever> Now each poppy service has a provider_details field with a lot of provider's detail information in it.
19:36:25 <tonytan4ever> Each provider's detail will have a status of "in_progress", "deployed", "disabled", or "delete_in_progress"
19:37:13 <tonytan4ever> And poppy service's status should be a calculated field based on each provider detail's status.
19:37:24 <amitgandhinz> +1
19:38:05 <malini1> tonytan4ever: it is deployed, only if it is a success for all providers under the flavor?
19:38:09 <tonytan4ever> So we will not store an extra status field on the service schema of poppy.
19:38:20 <tonytan4ever> yes that's correct.
19:38:32 <amitgandhinz> that makes sense, always derive from provider_details
19:39:00 <tonytan4ever> all provider details' statuses should be deployed; then the poppy service's status will be deployed (for one flavor)
19:39:00 <malini1> sounds like a good idea tonytan4ever
19:39:23 <obulpathi> +1
19:39:27 <amitgandhinz> +1
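The derivation rule tonytan4ever describes could be sketched like this (a minimal illustration under stated assumptions; the function name and exact precedence between statuses are guesses, not Poppy's actual code):

```python
def derive_service_status(provider_statuses):
    """Derive a poppy service status from its providers' statuses.

    Hypothetical sketch of the rule above: for one flavor, the service
    is 'deployed' only when every provider detail reports 'deployed'.
    """
    statuses = set(provider_statuses)
    if statuses == {"deployed"}:
        return "deployed"
    # Any provider mid-delete drags the whole service into delete_in_progress.
    if "delete_in_progress" in statuses:
        return "delete_in_progress"
    # Otherwise at least one provider is still being provisioned.
    return "creating"
```

Since the status is computed on read from provider_details, no separate status column needs to be kept consistent in the store.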
19:39:35 <malini1> do we have some way of telling the user it is partially successful?
19:39:57 <malini1> it gets deployed in provider_a but not in provider_b
19:40:01 <malini1> does the user care?
19:40:08 <amitgandhinz> i dont think the user cares
19:40:13 <amitgandhinz> it should be all or nothing
19:40:28 <amitgandhinz> think heat for example, do i care if some of my servers were provisioned?
19:40:46 <amitgandhinz> if it failed, i would start again right?
19:40:56 <malini1> the user might want to have the entries removed from the provider that succeeded
19:41:08 <amitgandhinz> if it fails, delete the service?
19:41:15 <amitgandhinz> which will then cleanup at the provider?
19:41:52 <malini1> but then we returned a failed status, so the user won't know if they have to delete
19:41:53 <tonytan4ever> I don't think that's a good idea, because we then need to delete those successful ones.
19:42:25 <amitgandhinz> but when they do a get on that service they see failed.  so they should delete right?
19:42:32 <amitgandhinz> why leave a bad service config around?
19:43:05 <amitgandhinz> and as long as they don't cname to the provisioned service, it won't be used by their customers, right?
19:43:26 <tonytan4ever> when one provider fails at creating, but another provider created successfully, if we do a full delete, we also need to delete those successful provider's service.
19:43:34 <malini1> yes
19:43:35 <amitgandhinz> yes
19:44:19 <malini1> we are relying on the user to make a delete call, after a failed service
19:44:21 <amitgandhinz> the other option is to rollback at the end of create if one or more fail
19:44:27 <tonytan4ever> and that in turn increases the chance of failure, and it will also increase the time to delete that service
19:46:55 <tonytan4ever> So imagine this: if a user creates a service named "service_abc", and it fails in fastly but succeeds in Akamai/Cloudfront
19:47:06 <obulpathi> ok
19:47:43 <tonytan4ever> to follow this all-or-nothing approach, we would need to delete the service named "service_abc" from the Akamai/Cloudfront side, which will possibly take 10+ min
19:48:12 <tonytan4ever> during those 10 minutes, the user cannot create another service "service_abc" on the Akamai/Cloudfront side,
19:48:20 <tonytan4ever> and that is confusing for them.
19:48:37 <malini1> now imagine we leave that service in Akamai/Cloudfront, with no service_abc in fastly
19:48:54 <malini1> user makes a poppy call to post service_abc
19:49:06 <amitgandhinz> so heat has that problem too - when you delete a stack it takes time, and meanwhile you can't create a new stack with that name (and yes it's annoying)
19:49:12 <malini1> now it will fail in Akamai/Cloudfront because it's already there,
19:49:31 <malini1> Assuming it goes through ok in fastly the second time, we still have a failed service ^o)
19:49:39 <amitgandhinz> good point
19:49:49 <obulpathi> all or nothing is a clean way
19:49:55 <malini1> maybe we are overthinking this
19:50:19 <amitgandhinz> let's start with the easy approach: if one fails, the service is in a failed state and the user must delete the service themselves
19:50:35 <malini1> we will let catherine_ mention this in the docs ;)
19:50:37 <obulpathi> +1
19:50:40 <amitgandhinz> +1
19:50:41 <tonytan4ever> +1
19:50:56 <obulpathi> this way the user no longer has confusion about why he cannot recreate a new service with the same name
19:51:04 <amitgandhinz> #agreed if one provider fails on create, the service is in failed state and user must delete the service themselves
19:51:04 <catherine_> Eek!
19:51:26 <amitgandhinz> 9 min remain
19:51:33 <amitgandhinz> tonytan4ever: any more on status ?
19:51:42 <tonytan4ever> that's all
19:51:55 <tonytan4ever> I am drinking apple juice now.
19:52:14 <malini1> let light be with you!
19:52:37 <amitgandhinz> any other topics?
19:52:49 <amitgandhinz> going once
19:52:52 <amitgandhinz> going twice
19:52:58 <amitgandhinz> gone
19:53:03 <amitgandhinz> ok thanks everyone
19:53:07 <amitgandhinz> good discussion today
19:53:07 <malini1> 'bye' folks
19:53:11 <catherine_> bye!
19:53:11 <amitgandhinz> #endmeeting