19:00:01 #startmeeting Poppy Weekly Meeting
19:00:02 Meeting started Thu Oct 2 19:00:01 2014 UTC and is due to finish in 60 minutes. The chair is amitgandhinz. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:06 The meeting name has been set to 'poppy_weekly_meeting'
19:00:11 #topic Roll Call
19:00:17 o/
19:00:22 o/
19:00:26 o/
19:00:58 o/
19:01:12 #link agenda https://wiki.openstack.org/wiki/Meetings/Poppy
19:01:45 o/
19:02:14 #link: http://eavesdrop.openstack.org/meetings/poppy_weekly_meeting/2014/poppy_weekly_meeting.2014-09-25-19.00.html
19:02:24 #topic Review Last Weeks Items
19:02:34 #link: http://eavesdrop.openstack.org/meetings/poppy_weekly_meeting/2014/poppy_weekly_meeting.2014-09-25-19.00.html
19:02:52 amitgandhinz to investigate MaxCDN CDN Manager API for master/sub accounts
19:02:55 yeah
19:03:02 i should just table this one lol
19:03:22 actually let me create a bp for it and unassign my name from it
19:03:28 i dont know when im going to get to this
19:03:43 thats a good idea for long living tasks
19:03:55 or long waiting tasks
19:03:56 megan_w_ to get a MaxCDN point of contact
19:04:04 megan_w is out today
19:04:12 anyone get the PoC from her?
19:04:15 no
19:04:23 not yet.
19:04:36 can assign her all the things :)
19:04:41 hehe
19:04:44 haha
19:04:45 :D
19:04:54 she is back on monday then out again next week
19:04:54 I am also waiting for the PoC of MaxCDN from her too.
19:05:04 so if you want this then get on her case all day monday
19:05:08 ping her every 5 minutes
19:05:11 we created tht action for you tonytan4ever ;)
19:05:31 #action megan_w_ to get a MaxCDN point of contact for tonytan4ever
19:05:54 #topic bp Updates
19:06:08 #link https://blueprints.launchpad.net/poppy
19:06:23 get-service
19:06:31 tonytan4ever: update?
19:06:39 That one has been merged.
19:06:55 Now I am just waiting for bugs to emerge and fix them if any.
19:07:04 ok
19:07:09 obulpathi: list-services
19:07:19 it is in progress
19:07:25 in review
19:07:36 I have API tests for the GET service.. But they are currently failing at the gate because they run against the mock DB :(
19:07:42 had a bug, fixed that and incorporating review comments
19:07:46 ok
19:07:56 tonytan4ever: store-provider-details
19:08:00 obulpathi: have you already fixed the 500s?
19:08:18 That one has been done along with the create-service bp
19:08:18 malini1: no not yet
19:08:25 I am still testing other stuff
19:08:46 So it has been merged.
19:09:02 nitin is not around, so no update on the sqla driver
19:09:10 we do need this for the gate
19:09:24 has anyone heard from his recently?
19:09:29 nop
19:09:29 s/his/him
19:09:30 no
19:09:44 nothing from me.
19:10:04 im tempted to unassign it from him until he comes back
19:10:18 i dont think he started it yet, soo....
19:10:21 amitgandhinz: tht sounds reasonable
19:10:27 +1
19:10:37 sure.
19:10:49 miqui: update on add-docstrings?
19:11:27 miqui won't be able to attend
19:11:38 miqui has already started on it
19:11:54 yep
19:11:59 ok
19:12:03 i'll keep it as started
19:12:16 tonytan4ever: delete-service?
19:12:36 This one is in progress.
19:13:01 I am actively working on it, and should be rolling out a PR in a few days.
19:13:08 awesome
19:13:21 amitgandhinz: dns-driver
19:13:25 oooh thats me =)
19:13:36 so i started investigating this one
19:13:47 looked at Rackspace Cloud DNS which has a 500 record limit
19:14:07 so im trying to figure out how this would scale out for many customers
19:14:16 500 record limit per domain?
19:14:19 still investigating...
19:14:22 per account
19:14:28 ok
19:14:34 do we need to look at designate as well?
19:14:45 designate?
19:15:00 oooh
19:15:07 Openstack Designate https://wiki.openstack.org/wiki/Designate
19:15:12 nice
19:15:23 yes, it would make sense to build a designate driver also
19:15:47 * amitgandhinz malini1 is the openstack guru
19:15:54 doesnt look like they are incubated
19:15:58 but neither are we :D
19:16:18 I googled it right before posting the link ;)
19:16:25 hehe
19:16:31 hahaha .. malini1 is smart
19:16:50 maybe we should ping them and see if we can collaborate?
19:16:53 ok ok....
19:17:04 obulpathi: patch-service
19:17:20 I started it, but no progress as I am working on lsit
19:17:33 s/lsit/list
19:17:35 can i remove your name for now, and reassign when you pick it up?
19:17:40 sure
19:18:04 obulpathi: mock-cassandra
19:18:22 I made good progress
19:18:40 and submitted a work-in-progress patch
19:19:15 will work on it once I finish my current tasks
19:19:33 which one should I go after list? mock-cassandra or patch?
19:19:33 ok. since it is a low priority bp, right now i will mark it as deferred
19:19:37 ok
19:19:39 we can puck it back up later
19:19:43 s/puck/pick
19:19:49 ok
19:19:57 i think list and patch are more important
19:20:48 list is almost done, will start working on patch once I finish list
19:20:53 cool
19:20:59 malini1: gate-cassandra
19:21:09 I just started looking at it today
19:21:26 Cassandra is not part of the official deb repo
19:21:53 But as long as we are in stackforge, we can install cassandra from 3rd party repos
19:22:12 Right now I am going through a bunch of yaml figuring out how to do that
19:22:27 is not a lot of fun - but am making progress
19:23:18 tht's all I have.. next bp :-P
19:23:44 and the last one is.
19:23:53 miqui: home-doc
19:24:24 since he isnt here right now... this discussion just happened on the poppy channel
19:24:31 we dont have any updates from miqui on this bp
19:24:41 basically the home doc got neglected and we need to add the newer endpoints to it
19:24:50 ok
19:25:01 #topic New Items
19:25:09 no new items on the agenda
19:25:18 #topic Open Discussion
19:25:30 anyone have anything they want to discuss?
19:25:46 testing cassandra with tox
19:25:59 obulpathi: the stage is yours
19:26:12 malini1: can you please shed some light on how to do that?
19:26:16 #topic testing cassandra with tox
19:26:33 obulpathi: my battery is down :(
19:26:51 hmmm.. you want to talk abt API tests with cassandra, rt?
19:27:05 as suggested by amitgandhinz, if we can run the tox tests and pass configuration parameters to tox as command line parameters
19:27:07 malini1 needs some apple juice ;-)
19:27:08 it would be great
19:28:11 this way we don't need to change the config file
19:28:22 lets add a bp for tht & figure out how to make it work at the gate & dev laptops
19:28:28 great :)
19:28:39 obulpathi: can you make the bp
19:28:43 and I promise to buy you apple juice after that :D
19:28:49 sure
19:28:54 amitgandhinz: sure
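A minimal sketch of the idea discussed above, assuming hypothetical variable names (POPPY_STORAGE_DRIVER, CASSANDRA_CONTACT_POINTS, POPPY_KEYSPACE); this is not Poppy's actual test harness. The point is that tox can forward environment variables to the API tests with passenv, so the tests pick up their storage settings from the environment instead of a hand-edited config file:

import os


def storage_settings():
    """Build test storage settings from environment variables.

    The variable names here are illustrative, not real Poppy settings.
    Defaults fall back to the mock DB so the tests still run when
    nothing is exported.
    """
    driver = os.environ.get("POPPY_STORAGE_DRIVER", "mockdb")
    settings = {"storage": driver}
    if driver == "cassandra":
        settings["cluster"] = os.environ.get(
            "CASSANDRA_CONTACT_POINTS", "127.0.0.1").split(",")
        settings["keyspace"] = os.environ.get("POPPY_KEYSPACE", "poppy_test")
    return settings


if __name__ == "__main__":
    # A tox.ini would forward these with something like:
    #   passenv = POPPY_STORAGE_DRIVER CASSANDRA_CONTACT_POINTS POPPY_KEYSPACE
    print(storage_settings())

With this approach the gate and a dev laptop would differ only in the variables they export, e.g. exporting POPPY_STORAGE_DRIVER=cassandra before running tox.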
19:30:05 ok any other topics?
19:30:32 I believe obulpathi wanted to talk abt mock provider
19:30:33 Can I talk about the status on poppy service?
19:30:56 sure tonytan4ever
19:31:00 #topic poppy service status
19:31:07 floor is yours
19:31:16 We define a poppy service with 3 different statuses
19:31:31 creating, deployed, and delete_in_progress.
19:32:06 obviously each status is based on the corresponding provider's service status.
19:32:14 we also have an unknown status, rt?
19:32:28 I don't think the unknown status is useful.
19:32:33 +1
19:33:01 After I implemented all these endpoints, I found only those three statuses make sense.
19:33:14 what about updating?
19:33:36 creating can become in_progress
19:33:37 If a service gets created in poppy, it is in created status.
19:33:49 I will explain in_progress in a minute.
19:34:42 when doing updating, the service should still be in deployed status, unless we need to use an "updating" status to do something with a client.
19:35:27 Now each poppy service has a provider_details field with a lot of the provider's detail information in it.
19:36:25 Each provider's detail will have a status of "in_progress", "deployed", "disabled", or "delete_in_progress"
19:37:13 And the poppy service's status should be a calculated field based on each provider detail's status.
19:37:24 +1
19:38:05 tonytan4ever: it is deployed, only if it is a success for all providers under the flavor?
19:38:09 So we will not store an extra status field on the service schema of poppy.
19:38:20 yes that's correct.
19:38:32 that makes sense, always derive from provider_details
19:39:00 all provider details' statuses should be deployed, then the poppy service's status will be deployed (for one flavor)
19:39:00 sounds like a good idea tonytan4ever
19:39:23 +1
19:39:27 +1
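A minimal sketch of the derivation rule described above; the function name and the exact precedence between statuses are assumptions, not Poppy's implementation. The service-level status is never stored, only calculated from the statuses in provider_details:

# Per-provider statuses mentioned in the discussion; "disabled" is listed
# for completeness but not used in this simplified rule.
DEPLOYED = "deployed"
IN_PROGRESS = "in_progress"
DELETE_IN_PROGRESS = "delete_in_progress"
DISABLED = "disabled"


def derive_service_status(provider_details):
    """Derive the poppy service status (for one flavor) from provider statuses.

    `provider_details` maps provider name -> status string, e.g.
    {"fastly": "deployed", "maxcdn": "in_progress"}.
    """
    statuses = set(provider_details.values())
    if statuses == {DEPLOYED}:
        # deployed only when every provider under the flavor is deployed
        return "deployed"
    if DELETE_IN_PROGRESS in statuses:
        return "delete_in_progress"
    # otherwise at least one provider is still being provisioned
    return "creating"


print(derive_service_status({"fastly": "deployed", "maxcdn": "deployed"}))    # deployed
print(derive_service_status({"fastly": "in_progress", "maxcdn": "deployed"}))  # creating

Handling a provider-level failure, discussed next, would add one more branch to this rule.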
19:39:35 do we have some way of telling the user it is partially successful?
19:39:57 it gets deployed in provider_a but not in provider_b
19:40:01 does the user care?
19:40:08 i dont think the user cares
19:40:13 it should be all or nothing
19:40:28 think heat for example, do i care if some of my servers were provisioned?
19:40:46 if it failed, i would start again right?
19:40:56 the user might want to have the entries removed from the provider who succeeded
19:41:08 if it fails, delete the service?
19:41:15 which will then cleanup at the provider?
19:41:52 but then we returned a failed status, so the user wont know if they have to delete
19:41:53 I don't think that's a good idea, because we then need to delete those successful ones.
19:42:25 but when they do a get on that service they see failed. so they should delete right?
19:42:32 why leave a bad service config around?
19:43:05 and as long as they dont cname to the provisioned service it wont be used by their customers right?
19:43:26 when one provider fails at creating, but another provider created successfully, if we do a full delete, we also need to delete those successful providers' services.
19:43:34 yes
19:43:35 yes
19:44:19 we are relying on the user to make a delete call, after a failed service
19:44:21 the other option is to roll back at the end of create if one or more fail
19:44:27 and that in turn increases the chance of failure, and it will also increase the time of deleting that service
19:46:55 So imagine this: if a user were to create a service named "service_abc", and it failed in fastly, but succeeded in Akamai/Cloudfront
19:47:06 ok
19:47:43 to follow this all-or-nothing approach, we would need to delete the service named "service_abc" from the Akamai/Cloudfront side, which will possibly take 10+ min
19:48:12 during this 10 min, the user cannot create another service "service_abc" on the Akamai/Cloudfront side,
19:48:20 and that is confusing for them.
19:48:37 now imagine we leave tht service in Akamai/Cloudfront, with no service_abc in fastly
19:48:54 user makes a poppy call to post service_abc
19:49:06 so heat has that problem too - when you delete a stack it takes time, and meanwhile you cant create a new stack with that name (and yes its annoying)
19:49:12 now it will fail in Akamai/Cloudfront coz its already there,
19:49:31 Assuming it goes thru ok in fastly the second time, we are still failed ^o)
19:49:39 good point
19:49:49 all or nothing is a clean way
19:49:55 maybe we are overthinking this
19:50:19 lets start with the easy approach. if one fails, the service is in failed state and the user must delete the service themselves
19:50:35 we will let catherine_ mention this in the docs ;)
19:50:37 +1
19:50:40 +1
19:50:41 +1
19:50:56 this way the user no longer has confusion why he can recreate a new service with same name
19:51:04 #agreed if one provider fails on create, the service is in failed state and user must delete the service themselves
19:51:04 Eek!
19:51:05 why he cannot
19:51:26 9 min remain
19:51:33 tonytan4ever: any more on status?
19:51:42 that's all
19:51:55 I am drinking apple juice now.
19:52:14 let light be with you!
19:52:37 any other topics?
19:52:49 going once
19:52:52 going twice
19:52:58 gone
19:53:03 ok thanks everyone
19:53:07 good discussion today
19:53:07 'bye' folks
19:53:11 bye!
19:53:11 #endmeeting