21:01:43 #startmeeting crossproject
21:01:44 Meeting started Tue Jul 14 21:01:43 2015 UTC and is due to finish in 60 minutes. The chair is nikhil_k. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:47 The meeting name has been set to 'crossproject'
21:01:50 o/
21:01:50 Hi
21:01:52 o/
21:01:52 here
21:02:00 here
21:02:02 yo/
21:02:04 o/
21:02:05 courtesy ping for david-lyle flaper87 dims ttx johnthetubaguy rakhmerov
21:02:09 courtesy ping for smelikyan morganfainberg bswartz slagle adrian_otto mestery
21:02:11 courtesy ping for kiall jeblair thinrichs j^2 stevebaker mtreinish Daisy
21:02:15 courtesy ping for notmyname dtroyer isviridov gordc SlickNik loquacities thingee
21:02:18 courtesy ping for hyakuhei redrobot TravT emilienm SergeyLukjanov devananda
21:02:18 o/
21:02:20 o/
21:02:21 courtesy ping for boris-42
21:02:22 o/
21:02:23 o/
21:02:24 o/
21:02:27 o/
21:02:37 * fungi is mostly here for this
21:02:39 o/
21:02:43 o/
21:02:44 * dhellmann is going to have to duck out soon
21:02:55 o/
21:03:01 o/
21:03:11 o/
21:03:12 \o
21:03:24 #info Agenda: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
21:03:45 Before we get started
21:04:12 o/
21:04:19 hi
21:04:22 I wanted to ask if there's anyone available to sign up to volunteer as chair for next week's meeting
21:04:56 Won't be able to cover this one if nobody signs up
21:05:06 * EmilienM is on holiday
21:05:09 so we'd likely skip the meeting if nobody volunteers :)
21:05:09 do it, it's fun and rewarding!
21:05:27 annegentle: I put my name down for August
21:05:34 thanks EmilienM
21:05:48 o/
21:05:56 "cancelled due to oscon"
21:06:39 ok, cancelled it is, then!
21:06:57 #agreed we will skip the meeting next week
21:07:02 * fungi hangs a "gone conferencing" sign on the meeting room door
21:07:24 Let's get started.
21:07:33 #topic Team announcements (horizontal, vertical, diagonal)
21:07:50 #info horizontal announcements
21:08:00 any news on this?
21:08:00 On the release management side, I don't have anything specific to announce this week
21:08:06 dhellmann: anything?
21:08:19 nothing from me this week
21:08:19 * ttx blames fireworks
21:08:24 nothing this week for qa
21:08:55 Nova-wise, we have our midcycle next week, no other big news
21:09:10 I don't have a separate section for the API_WG specs, thought we could cover them here
21:09:38 nikhil_k: we don't have any new specs to announce at this time. we will have more up for review later this week.
21:09:50 thanks elmiko
21:10:01 #info vertical announcements
21:10:20 Any other midcycles to announce?
21:10:42 ironic (finally) came to a consensus on how we're doing independent release cycles, and will be doing one shortly after we finish up one feature in flight
21:10:55 * notmyname is working on getting a new python-swiftclient release
21:11:10 also, Swift will be doing a midcycle hackathon next month
21:11:43 oh, on the release front, the infra spec for the automation around release tagging landed: http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
21:11:58 oh, and the ironic midcycle is 8/12-14 in Seattle
21:12:20 defcore mid-cycle in two weeks
21:12:31 notmyname: mind adding the info here? #link https://wiki.openstack.org/wiki/Sprints
21:12:36 July 29/30 in Austin
21:13:17 nikhil_k: we have a virtual one
21:13:20 wiki is updated
21:13:35 the puppet openstack group has a sprint (or midcycle) in September
21:13:46 the horizon midcycle is next week
21:14:30 good stuff
21:14:36 since everyone is saying something: ceilometer had ours last week
21:15:24 timely announcement ;)
21:15:24 I guess we are short of announcements this week, so we don't have a diagonal one
21:15:41 But great to hear all the midcycle shout-outs
21:15:58 Moving on..
21:16:12 A couple of cross-project specs
21:16:17 first up
21:16:19 #topic Replace eventlet + monkey-patching with ??
21:16:30 #link https://review.openstack.org/#/c/164035
21:17:03 There are a decent number of comments on it already, but it would be nice to have diverse feedback
21:18:10 Are there any crazy ideas for a replacement?
21:18:58 I like that John has written "let's propose the most dramatic change first"
21:20:09 this is josh right?
21:20:31 Joshua Harlow, yes
21:20:57 I know glyph had some good ideas about slowly transitioning to twisted or asyncio by using a non-greenlet hub
21:21:09 #help as a first work item on the spec: request PTLs' feedback/analysis on their own projects' thread-safety.
21:21:33 johnthetubaguy: indeed, I just pointed glyph at this spec
21:21:48 jroll: me too, dropped him an email, cool
21:22:05 honestly we need to identify actual issues really
21:22:24 I am interested in swift's performance issues, I should catch up with the go folks about that I guess
21:22:55 I haven't looked over this in detail yet, but yeah, I'm pretty concerned about Swift being able to work under a thread/fork model
21:23:04 well, my big issue was the DB problems, but it seems like we are making those go away, a little bit
21:23:10 I think we have some in Glance right up front
21:23:25 i.e. today's swift clusters are required to handle many thousands of req/sec.
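[editor's note] The thread-safety analysis requested in the #help item above concerns code that is effectively atomic under eventlet's cooperative green threads (no preemption between I/O yield points) but racy under preemptive OS threads. A hypothetical stand-alone illustration using plain `threading` — not code from the spec:

```python
# Sketch: why a green-thread -> OS-thread migration needs a locking audit.
# Under a cooperative scheduler, a read-modify-write with no yield point in
# it runs atomically; under preemptive OS threads it does not.
import threading


class Counter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def incr_unsafe(self):
        # Fine under cooperative green threads (no yield point inside),
        # racy under preemptive OS threads: LOAD / ADD / STORE can interleave.
        self._value += 1

    def incr_safe(self):
        # The explicit lock a threaded port would need to add.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        return self._value


def hammer(fn, workers=4, n=100000):
    threads = [threading.Thread(target=lambda: [fn() for _ in range(n)])
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


safe = Counter()
hammer(safe.incr_safe)
print(safe.value)  # 400000
```

With `incr_unsafe` the final count can fall short of 400000 under OS threads; auditing each project for such spots is exactly the per-project analysis the work item asks PTLs for.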
21:24:51 anyway, dhellmann has some great comments on there, I should take a look
21:26:52 I feel like this spec may be too large of a change (with too many unknowns) to reasonably discuss and come to any consensus in this meeting
21:27:06 yeah, it seems like this spec needs some use cases from OpenStack projects
21:27:13 clearly eventlet has some problems
21:27:18 threads and asyncio also have problems
21:27:32 we need to evaluate the real-world usefulness of switching more
21:27:50 jroll: right, I don't see anything compelling enough in the spec
21:28:07 johnthetubaguy: agree
21:28:53 thanks guys
21:28:58 moving on..
21:29:02 #topic Service Catalog updates
21:29:09 #link https://review.openstack.org/#/c/181393
21:29:14 holla
21:29:36 exciting topic, I know!
21:30:20 Looks neatly written ;-)
21:30:31 I agree ha ha :)
21:30:55 +1 on standard required naming for endpoints
21:30:55 as usual, naming is the hardest part, but I think we can get a lot done towards consistency in the catalog
21:31:53 annegentle: I think you hit all the key issues there
21:32:27 sdague as partner-in-crime, er, collab
21:32:51 I hope we got everything, the etherpad got hosed, cough cough, but I think it's good to go
21:33:07 kind-of-side-question: is there a way for keystone to send different catalogs for internal/external use?
21:33:25 (I know little-to-nothing about catalogs, feel free to tell me that's a dumb question)
21:33:34 jroll: hm, I dunno. dolphm or morgan would know
21:33:43 jroll: it would certainly neaten things up
21:34:07 jroll: I need to look at some of that for the nova config
21:34:09 annegentle: as an operator, that feels kind of key to me
21:34:12 there are the three urls I think
21:34:19 jroll: sort of. there's a feature called endpoint filtering that lets you whitelist certain endpoints for certain projects
21:34:45 jroll: just don't mistake it for security by obscurity!
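[editor's note] The "three urls" jroll's internal/external question runs into are keystone's endpoint interface types (public, internal, admin). A minimal sketch of how a consumer might pick a URL from a v3-style catalog; the catalog shape follows keystone's v3 token format, but the `endpoint_for` helper and the sample URLs are hypothetical:

```python
# Hypothetical helper (not a keystone API): select a service URL from a
# keystone-v3-style catalog by service type and endpoint interface.
catalog = [
    {
        "type": "network",
        "name": "neutron",
        "endpoints": [
            {"interface": "public", "region": "RegionOne",
             "url": "https://cloud.example.com:9696"},
            {"interface": "internal", "region": "RegionOne",
             "url": "http://10.0.0.5:9696"},
        ],
    },
]


def endpoint_for(catalog, service_type, interface="public", region=None):
    """Return the first matching endpoint URL, or raise LookupError."""
    for service in catalog:
        if service["type"] != service_type:
            continue
        for ep in service["endpoints"]:
            if ep["interface"] != interface:
                continue
            if region is not None and ep["region"] != region:
                continue
            return ep["url"]
    raise LookupError("%s/%s not found in catalog" % (service_type, interface))


# A service like nova reaching neutron over the management network asks for
# the "internal" interface; end users get "public".
print(endpoint_for(catalog, "network", interface="internal"))
```

This is the mechanism behind "nova should use this neutron endpoint that's only accessible internally" without hardcoding the URL in nova's config file.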
21:34:48 jroll: there are different endpoint types you can set for a service
21:35:07 it's the handing it out to the end user issue that's not so good
21:35:14 it only being in nova config is handy
21:35:21 dolphm: of course, more like "nova should use this neutron endpoint that's only accessible internally"
21:35:44 dolphm +1 on the obscurity thing, but this is more URLs on an internal network that are of no use to end users
21:35:47 I'm not opposed to continuing to put the url in config files, but this spec specifically calls that out as bad
21:36:01 jroll: yeah, I was adding a comment on that just now
21:36:01 jroll: that's the intention of the "internal" endpoint interface type
21:36:26 dolphm: maybe the filtering stops us handing that out to the user, and that's the bit that's handy
21:36:32 dolphm: ok, cool. I want to make sure that doesn't get nuked here :)
21:36:40 dolphm: for glance it's a round-robin list of ips in some cases
21:36:45 is this a good place to bring up the sub-project endpoints possibility?
21:37:11 I may just put that comment in the spec
21:37:31 nikhil_k: sure, would like an example
21:37:47 I'm glad this made it on the cross project agenda so I could get a sense of whether we've all looked at it
21:37:54 so it can go another week or so
21:38:00 annegentle: ah, so one possible issue
21:38:06 if you use keystone auth for a non-openstack project
21:38:12 you might get name clashes
21:38:22 in the catalog
21:38:30 like there could be two sorts of compute APIs
21:38:40 given we will establish a naming criterion, it would be nice to assume that we may have more than one endpoint per project-scope
21:38:46 johnthetubaguy: yeah I'm not sure what to do about that, alluded to it with the tie-in to projects.yaml for service name
21:38:57 on the keystone "give different endpoints for internal/external use" -- morganfainberg ?
21:38:59 I think namespacing like the one above should help ^ ?
21:39:04 annegentle: I was just thinking os-compute rather than compute, but it's messy :(
21:39:25 devananda: I think dolphm answered that for me :)
21:39:25 johnthetubaguy: yah
21:39:37 jroll: oh, cool. i missed it in the scrollback then
21:39:49 morganfainberg is traveling today
21:39:55 our midcycle starts tomorrow
21:40:16 definitely comment on the review and I'll respond
21:40:27 re: service names vs. project names -- i think this is more complex than we're giving it credit for
21:40:32 annegentle: "The tying between auth and getting a service catalog seems unnecessary" -- this might be tricky in some cases
21:41:18 devananda: seriously...
21:41:24 I think some value-added services that we keep optional may be exposed to, say, premium users
21:41:28 "might get name clashes" is unacceptable. two different services with different REST APIs both registering with the same name in keystone? .....
21:41:55 in such cases exposing the endpoint might be irrelevant
21:42:17 devananda: I think we need to tie to governance/projects.yaml to ensure uniqueness
21:42:34 devananda: so it's actually happened already, I think, we use the same auth for our old cloud and openstack cloud, I think the old one is called "compute" already, but I could be wrong
21:42:44 annegentle: I agree. Except that puts the TC back in the seat of blessing things
21:42:55 annegentle: well I am talking about non-openstack projects using keystone really
21:43:19 johnthetubaguy: there are a lot of new-openstack-projects that don't have official service names, right?
21:43:25 our in this context being rackspace, I need to stop using pronouns
21:43:30 devananda: very true, containers?
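[editor's note] The clash devananda and annegentle describe (two different services registering the same type in one region, as with Rackspace's two compute generations) amounts to a uniqueness constraint on (region, service type). A quick validation sketch; `find_clashes` and the sample entries are hypothetical:

```python
# Hypothetical check: in a sane catalog, a (region, service type) pair should
# map to at most one service. Duplicates are the clash discussed above.
from collections import Counter


def find_clashes(catalog):
    """Return (region, type) pairs registered more than once."""
    pairs = Counter(
        (ep["region"], service["type"])
        for service in catalog
        for ep in service["endpoints"]
        if ep["interface"] == "public"
    )
    return [pair for pair, count in pairs.items() if count > 1]


# Illustrative only: two competing compute services in one region.
catalog = [
    {"type": "compute", "name": "first-gen-compute",
     "endpoints": [{"interface": "public", "region": "RegionOne",
                    "url": "https://old.example.com:8774"}]},
    {"type": "compute", "name": "next-gen-compute",
     "endpoints": [{"interface": "public", "region": "RegionOne",
                    "url": "https://new.example.com:8774"}]},
]
print(find_clashes(catalog))  # [('RegionOne', 'compute')]
```

Tying names to governance/projects.yaml, as annegentle suggests, is one way to make this check pass by construction for official projects; non-OpenStack services using keystone remain outside that guarantee.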
21:43:36 * devananda checks his assumption in the gov repo
21:44:16 devananda: well, I am thinking about the competing ones that are not yet registered as such, the folks looking in the door of the tent, or something
21:44:45 If someone has a single deployment (under the same keystone) as hybrid (pub + priv) then name clashes are evident
21:45:07 nikhil_k: that would be different regions I think
21:45:10 there should be uniqueness per region though.
21:45:16 exactly - unique per region
21:45:24 johnthetubaguy: yeah it's the competing ones I was thinking of orignally
21:45:30 (I can't spell)
21:45:46 but if I had two computes in the same region, with different REST APIs, well, that's broken
21:45:54 anyway, perhaps we're bikeshedding now
21:45:59 x-container until approved? :)
21:46:05 hmm, I don't have a real-world use case where the same operator uses the same DC for pub and priv, so will take the region argument
21:46:11 devananda: we have that at Rackspace now with "first gen compute" and "next gen compute" and yea it is confusing
21:46:22 devananda: yeah, so Rackspace has that, sort of, but anyways, fun fun
21:46:27 :)
21:46:42 I guess we've enough spice to see the spec move forward in some direction
21:46:46 sure, thanks nikhil_k
21:47:06 I see tpatil sneaked in his etherpad
21:47:11 hi
21:47:15 #topic return request-id to caller
21:47:19 In the last meeting, lifeless suggested returning the x-openstack-request-id back to the caller in the response itself.
21:47:23 #link https://etherpad.openstack.org/p/request-id
21:47:28 We have analyzed and documented all cinder client methods' information in a google spreadsheet
21:47:33 #link: https://docs.google.com/spreadsheets/d/1al6_XBHgKT8-N7HS7j_L2H5c4CFD0fB8xT93z6REkSk/edit?usp=sharing
21:47:43 There are 6 different values returned from volume/snapshot methods
21:47:51 list, dict, resource class object, None, Tuple(Response object, None), Cinder exception
21:48:00 For each of the above return values, we have identified what changes are required to pass response headers containing x-openstack-request-id. You can find all that information on the etherpad
21:48:30 I would like to talk about a few limitations of solution #3
21:48:55 Deleting metadata is one big problem, as internally it deletes metadata keys one by one, so it's not possible to return a single response header back to the caller
21:49:11 Retrieving x-openstack-request-id from the response is not uniform for each method
21:49:23 You can see an example of how to get x-openstack-request-id here
21:49:28 https://review.openstack.org/#/c/201434/2/nova/volume/cinder.py, refer to the _cinder_volumes method.
21:49:48 POC: python-cinderclient patch: https://review.openstack.org/#/c/201428/
21:49:56 tpatil: I thought we were thinking about a list of request-ids, for the general case? I can't remember why that was a bad idea now
21:50:06 johnthetubaguy: I don't see why it would be a bad idea
21:50:15 clearly one-call-one-id is the special case
21:50:50 johnthetubaguy: for snapshot list, there will be a single request, so we have added ListWithHeader which will contain the response header
21:52:07 I think I like option one, where get_request_id() returns a list or request IDs
21:52:11 tpatil: sorry, not sure I get the comment, we can return a list of request-ids in all cases I think
21:52:25 jroll: agreed we should have one option
21:52:37 s/list or/list of/
21:52:52 johnthetubaguy: Are you talking about the deleting metadata case?
21:53:01 tpatil: all cases
21:53:01 tpatil: talking about all cases really
21:53:05 a list can have one item
21:53:16 if there's one request, return a list of one request ID
21:53:35 in the order the requests were made
21:54:00 yeah, I fell asleep just before the meeting, so only half awake, but I thought we discussed this a few weeks back, and went for a list of request-ids, I figured we found something bad with that and went back to a single request-id, but my mind is probably playing tricks on me
21:54:11 nope
21:54:32 the main thing last week was the discussion about return-with or separate-call or trigger-event
21:55:08 yeah, it was maybe the week before that we got into request-ids vs a list, but it probably got buried somewhere
21:55:10 and I believe we wanted to return objects?
21:55:20 return-with seems bad for the obvious reason that the return value may have different types
21:55:46 separate-call seems like the most straightforward option, from a dev standpoint
21:55:55 jroll: that's one concern we have
21:56:15 I thought the idea was we pass the return value to a function that extracts the value for the user?
21:56:21 so we get the best of both worlds
21:56:23 jroll: so it's hugely racy and hard to get right, from a dev standpoint
21:56:40 I agree, it's very racy with a separate call
21:56:53 johnthetubaguy: yeah, or just an attribute at a well-known place...
21:57:00 return values of None, True and False are where it's tricky
21:57:02 lifeless: thing = client.do_a_thing(); req_id = thing.request_ids() # is racy and hard?
21:57:23 jroll: That's return-with
21:57:40 are we going to have time for open discussion before the end?
21:57:42 oh. ohhhh. I totally had this backward.
21:57:54 jroll: thing = client.do_a_thing(); req_ids = client.get_request_ids() # that's separate-call
21:57:55 jroll: I think the issue before was client.do_a_thing() client.get_request_id()
21:58:02 yeah, that
21:58:04 Ok, I think we have some interest for open discussion
21:58:09 lifeless: yeah, got it. it doesn't return a response object
21:58:09 Request everyone to please add your feedback to the etherpad so that I can include it in the specs
21:58:19 by next meeting
21:58:26 so I think we have support for #3 here?
21:58:27 Looks like we need to move this discussion to the etherpad #link https://etherpad.openstack.org/p/request-id
21:58:38 we are just arguing about what it is called
21:58:38 tpatil: no meeting next week (pls scroll back)
21:58:45 but maybe that's just me?
21:58:50 I know, next-to-next meeting
21:59:03 #topic Open Discussion
21:59:12 maybe we can roll over a min or two if needed
21:59:17 The instance user spec could use some attention: https://review.openstack.org/#/c/186617/
21:59:18 kfox1111 ^
22:00:04 don't forget, tomorrow's the deadline for the Call for Speakers https://www.openstack.org/summit/tokyo-2015/call-for-speakers/
22:00:08 #link https://www.openstack.org/summit/tokyo-2015/call-for-speakers/
22:00:53 I mentioned it earlier, but anyone who is interested in interop and testing standards please think about attending the defcore mid-cycle #link https://etherpad.openstack.org/p/DefCoreFlag.MidCycle
22:01:23 kfox1111: i'll take a look, sahara could really use something like this
22:01:35 elmiko: thanks.
22:01:52 yeah. most of the projects that build on top of vm's seem to need it.
22:02:04 Alright, we are out of time now.
22:02:04 kfox1111: thanks for moving this to the backlog, it would be good to get eyes on these ideas
22:02:06 i thought it was a cool idea last time this came up
22:02:12 Thanks for joining, have a great day/evening!
22:02:22 #endmeeting
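[editor's postscript] The return-with option compared at 21:57 (and the ListWithHeader idea from tpatil's POC at 21:50:50) can be sketched roughly as below. All names are illustrative, not the actual python-cinderclient implementation; the point is that the request ids travel with the return value, avoiding the race of a separate get_request_ids() call:

```python
# Sketch of "return-with": wrap client return values in thin subclasses that
# carry the x-openstack-request-id values collected while servicing the call.
class ListWithMeta(list):
    """A list that also carries the request ids behind it."""

    def __init__(self, values, request_ids):
        super().__init__(values)
        self.request_ids = list(request_ids)


class FakeClient:
    """Toy stand-in for a python-*client; no real HTTP is performed."""

    def __init__(self):
        self._count = 0

    def _request(self):
        # A real client would read the x-openstack-request-id response header.
        self._count += 1
        return "req-%04d" % self._count

    def list_snapshots(self):
        rid = self._request()
        return ListWithMeta(["snap-1", "snap-2"], [rid])

    def delete_all_metadata(self, keys):
        # Several requests under the hood (one DELETE per key), hence a
        # *list* of request ids rather than a single one.
        rids = [self._request() for _ in keys]
        return ListWithMeta([], rids)


client = FakeClient()
snaps = client.list_snapshots()
print(snaps, snaps.request_ids)  # ['snap-1', 'snap-2'] ['req-0001']
result = client.delete_all_metadata(["a", "b", "c"])
print(result.request_ids)  # ['req-0002', 'req-0003', 'req-0004']
```

The wart noted at 21:57:00 remains: methods returning None, True, or False cannot be wrapped this way, since those types cannot be subclassed, which is why the delete case above returns a wrapper object instead of None.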