14:02:25 #startmeeting glance
14:02:26 Meeting started Thu Aug 15 14:02:25 2013 UTC and is due to finish in 60 minutes. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:29 The meeting name has been set to 'glance'
14:02:40 lets see if folks show up today, I forgot to send out a reminder :-)
14:03:13 hi
14:03:55 vasiliy: hi!
14:04:18 o/
14:04:22 markwash: I'd like to discuss blueprint https://blueprints.launchpad.net/glance/+spec/glance-nfs-storage-support and the patch https://review.openstack.org/#/c/39080/
14:05:47 hey
14:06:04 markwash: i'd shared some progress with flwang yesterday ( https://github.com/komawar/glance/commits/async-workers-2 )
14:06:24 might help him with his work
14:07:00 okay
14:07:01 also, wanted to discuss why I worked on this part (I was getting blocked on other parts of the code due to lack of db support atm)
14:07:02 markwash: venkatesh and me have been discussing db optimization some. just some concerns maybe we can bring up in open discussion
14:07:08 markwash: my question is - do we need to continue working on this patch. could you please take a look at my latest comment in the patch https://review.openstack.org/#/c/39080/
14:07:13 esheffield: o/
14:07:21 \o
14:08:27 okay, lets talk a bit about async processing first, and then take a look at nfs and db stuff, sound good?
14:08:36 yup
14:08:37 +1
14:08:53 ok
14:09:02 the first thing I want to mention there is
14:09:16 I retargeted the relevant blueprints to the "next" milestone
14:09:29 which is essentially my way of saying that they're approved and important
14:10:01 markwash: so when is our feature freeze date?
14:10:21 but that I don't want to communicate any sense of delivery date that isn't pretty certain
14:10:38 flwang, we don't have one apart from the general openstack one, let me look
14:11:12 looks like september 5th
14:11:29 well, september 4th actually
14:11:31 #link https://wiki.openstack.org/wiki/Havana_Release_Schedule
14:12:21 nikhil: flwang how is coordination going?
14:12:23 got it, thanks, markwash
14:13:31 markwash: understand what flwang is approaching here
14:13:44 oh?
14:13:47 and looks like i won't step on his toes
14:14:10 and venkatesh and I are talking a bit today on how we can get some db migration+driver code going
14:14:26 okay cool
14:14:26 based on the assignment from last meeting, I'm responsible for the api part
14:14:45 i meant understand that flwang is working on the api part and that he may need db code (not sure)
14:14:47 nikhil and I discussed it yesterday
14:14:54 okay cool
14:14:58 however if it helps we have some discussion/work in progress
14:15:14 yes, I saw your link, I'd like to look into it some today
14:15:27 sure, that would be great
14:15:39 could not sync with you yesterday due to a timing conflict
14:15:42 today for sure
14:15:46 okay. . vasiliy shall we talk a bit about nfs?
14:15:59 markwash: yes - please
14:16:19 so, we have a lot of comments from the community
14:16:35 please read my latest comment on the patch to be on the same page
14:17:05 okay, one sec
14:17:33 I have only one question - would the community like to have this feature or not?
we spent a lot of time but the 'metadata' FileSystem store can solve the same problem - return a 'direct url' to clients
14:18:23 that does seem to be an important question
14:18:34 I know that metadata was set up with this kind of thing in mind
14:18:58 maybe jbresnah had glusterfs more in mind
14:19:31 but glusterfs and nfs should basically just mean slightly different metadata?
14:20:38 markwash: the metadata solution is more common
14:21:08 vasiliy: is there any reason you can think of why metadata would be worse than creating a new store driver?
14:21:56 only the new features for multi-share support
14:22:04 such as load balancing, priority and so on
14:22:22 hmm
14:22:45 what's multi-shares? essentially client-side processing of multiple equivalent nfs paths to data?
14:23:37 the ability to have several mounted shares and select among them the most suitable share to store a new image
14:24:25 do you think it would be possible to use multiple fs locations to achieve the same thing?
14:25:12 actually - I'm not sure that it's possible
14:26:02 could you please explain - what do you mean by "multiple FS locations"?
14:26:16 sorry, had a brief problem on my end and had to run. . but I'm back
14:26:27 ok
14:26:52 we added a feature recently to support multiple image locations--an image can share with the client that it is stored in multiple places and where those places are
14:27:04 the idea sounds similar to multi-shares, except broader than nfs
14:27:18 so if you store something in both, say, cinder, and swift, the client can pick which one
14:27:42 or if you store an image in two swift endpoints on different systems, the client can pick which one if he wants to do a direct download
14:28:28 how can we select in which place we put an image?
14:29:04 we haven't quite gotten that far, but it would be possible to start streaming to multiple locations on an upload
14:29:21 or some sort of multi-locations setup based on policy
14:29:35 policy/config
14:29:54 vasiliy: I think it sounds like the answer is we'd prefer to work within the existing fs store if at all possible
14:30:18 * iccha makes mental note to ask markwash a question related to multiple image locations
14:30:28 we can implement balancing between shares. and I'm not sure that it can be implemented using multiple image locations
14:32:36 vasiliy: is there any reason to believe that load balancing between nfs shares shouldn't be generalized to other forms of storing images?
14:32:53 we might have to table this for now, since I think it deserves a fair amount of thought
14:34:11 markwash: do you mean to hold this feature or discuss it later with other developers?
14:34:24 both
14:34:34 we are pretty close to the end for havana, and I'd rather not rush the design
14:35:13 so, when do we need to come back to discuss it?
14:35:53 can we basically try to discuss it again next week, with jbresnah around?
14:35:58 I really want his input
14:36:09 markwash: ok - good
14:36:24 it's hard for him to make the early meeting, since it's his 4am
14:36:35 oh - clear
14:36:38 of course, it may be a bad time for you next week. . UTC 20:00
14:37:16 #action markwash discussion with jbresnah about nfs store
14:37:37 vasiliy: let me know if next week's meeting (thursday at utc 20:00) does not work out
14:37:42 iccha: db stuff?
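For context on the multiple-image-locations discussion above, here is a minimal client-side sketch. It assumes the Glance v2 image record shape where `locations` is a list of entries with a `url` and a `metadata` dict (exposed when `show_multiple_locations` is enabled) and `direct_url` is available when `show_image_direct_url` is set; the preference order and the example image record are purely illustrative, not anything Glance itself defines.

```python
# Illustrative only: picking a preferred location from a Glance v2 image
# that exposes multiple locations. The preference order below is a
# hypothetical client policy, e.g. "prefer a locally mounted file path
# for direct access, then swift, then plain http".
from urllib.parse import urlparse

PREFERRED_SCHEMES = ("file", "swift", "http")  # hypothetical preference order

def pick_location(image):
    """Return the most preferred location entry for direct download."""
    locations = image.get("locations", [])
    if not locations:
        # Fall back to the single direct_url, if the deployment exposes it.
        return {"url": image.get("direct_url"), "metadata": {}}

    def rank(loc):
        # "swift+https" and friends are reduced to their base scheme.
        scheme = urlparse(loc["url"]).scheme.split("+")[0]
        return (PREFERRED_SCHEMES.index(scheme)
                if scheme in PREFERRED_SCHEMES else len(PREFERRED_SCHEMES))

    return min(locations, key=rank)

# Hypothetical image record, just to exercise the selection policy:
image = {
    "id": "1234",
    "direct_url": "swift+https://example.net/v1/AUTH_x/glance/1234",
    "locations": [
        {"url": "file:///mnt/nfs1/images/1234", "metadata": {"share": "nfs1"}},
        {"url": "swift+https://example.net/v1/AUTH_x/glance/1234", "metadata": {}},
    ],
}
print(pick_location(image)["url"])  # -> file:///mnt/nfs1/images/1234
```

This is also roughly where the NFS "multi-share" idea would intersect with the existing feature: each mounted share would simply show up as another location entry, with any share-specific details carried in that entry's metadata.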
14:37:59 oh and afterwards at some point we should talk again about docs
14:38:03 markwash: ok - see you next week
14:38:14 yeah so venkatesh and me were talking about how our joins keep growing
14:38:52 images, members, properties, tags all queried for one call and we were wondering if there is a better way to approach this
14:38:59 fundamental design wise
14:39:09 venkatesh: has a patch up for optimization for now
14:39:54 i am working on it. But still, when more filters get added, esp for properties etc., the query performs slower.
14:40:25 I'd love to see some measurements around these things
14:40:49 +1 what would be the best way to get performance testing in openstack?
14:40:56 iccha: I would like to see some comparison
14:41:06 change and after change
14:41:20 before change and after change
14:41:35 flwang: I am working on it. soon will share it with the team.
14:41:45 for a while I've been toying with the idea of a dedicated glance benchmark suite
14:41:54 but for db stuff, it may make sense to take a different approach
14:42:42 markwash: you thinking like mongodb or something like that?
14:42:58 oh no, still talking about measuring performance
14:43:11 I mean, for db stuff, it may make sense to put instrumentation in at an internal interface
14:43:20 rather than at the api level, which is what a benchmark would have to do
14:44:25 anyway, for me, I'm interested in seeing slightly more detailed proposals for both incremental and radical changes to the db, but there's a good rule out there, measure before you optimize :-)
14:44:28 we would need a standardized db for that
14:45:26 i agree with iccha
14:45:44 interesting, what do you mean by "standardized db" exactly?
14:46:50 in the sense that different developers may be testing in different envs or at different scales. the measurement should be done at different scales and the numbers provided. and i do not know what's a good way to provide a large db for testing for all
14:47:01 ah
14:47:37 we may at least define a rule
14:47:50 I think some kind of standard setup would be helpful for sharing results, but I think we could do some important work without one
14:48:04 such as image list in 0.x sec with 10000+ images :)
14:48:19 basically, each interested org could have their own "standard" setup, and use it to compare before and after code changes
14:48:22 and publish the results
14:49:41 venkatesh: is there a patch for your optimization stuff?
14:49:43 markwash: +1
14:50:04 yes mark, https://review.openstack.org/#/c/41404/
14:50:45 I have done the major part of the work. In the testing phase and collecting some explain plans to compare the results.
14:51:08 okay cool
14:51:54 thanks venkatesh looks like a good idea!
14:52:21 but mark the query is growing. we need a check. :)
14:52:28 I agree
14:53:09 also wanted to check with you whether we need to send back image properties and locations as well as part of the result?
14:53:26 I don't quite follow that
14:53:29 i meant through the left outer join.
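The "instrumentation at an internal interface" idea mentioned above could look roughly like the sketch below: time each db-layer call directly, so before/after comparisons of a query change are not drowned out by HTTP and serialization overhead at the API level. The function name and signature being wrapped here are made up for illustration; in practice one would decorate whatever callables the Glance db driver actually exposes.

```python
# A rough sketch (not Glance code): per-call timing at the db-driver
# boundary, to compare the same image-list workload before and after a
# query optimization such as the left-outer-join change discussed above.
import functools
import time
from collections import defaultdict

_timings = defaultdict(list)

def timed(fn):
    """Record wall-clock duration of each call, keyed by function name."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            _timings[fn.__name__].append(time.monotonic() - start)
    return wrapper

def report():
    """Print call counts and average/max latency per instrumented function."""
    for name, samples in sorted(_timings.items()):
        avg = sum(samples) / len(samples)
        print("%-20s calls=%5d avg=%.4fs max=%.4fs"
              % (name, len(samples), avg, max(samples)))

# Hypothetical driver function, standing in for the real query:
@timed
def image_get_all(filters=None, limit=None):
    time.sleep(0.01)  # placeholder for the join-heavy image list query
    return []

# Run the same workload against the old and new query code, then diff
# the two reports.
for _ in range(100):
    image_get_all(filters={"status": "active"}, limit=25)
report()
```

Combined with EXPLAIN plans like the ones venkatesh is collecting, this kind of before/after report is what "measure before you optimize" would amount to for the db work, even without a shared "standardized db".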
14:53:40 real quick, we have 7 minutes left
14:53:49 doc updates markwash
14:54:03 no, the query returns images with their properties and locations
14:54:09 yes, lets fit in some doc discussion, we can revisit the db performance
14:54:17 ok mark
14:54:32 * markwash passes the mic to iccha
14:54:58 markwash: i thought u spoke to some ppl and had updates :)
14:55:14 btw the documentation team is doing some revamping efforts
14:55:31 where they are having contact points from each project to help keep documentation on track
14:55:55 that sounds sensible
14:56:03 so they can be kept posted on new features and add relevant documentation
14:56:21 they have not started anything concrete yet but will keep us posted
14:56:58 we had some documentation action items from last time
14:57:10 i guess brianr is not here, we can follow up with him next meeting
14:57:44 other brief topics?
14:57:51 3 minutes left. . .
14:58:09 real quick - anyone have any updates on the glance service thing we discussed last week?
14:58:21 I haven't done anything with that since then
14:58:42 . . .
14:58:54 which glance service thing again?
14:59:39 the one Ghe was working on - moving that common code from cinder and nova into glanceclient
14:59:48 ah, right
15:00:02 there was a bit more discussion, and I think we redirected our goals
15:00:12 I think there is a lot of value in having client code that is version independent
15:00:18 in fact, I'm kinda confused that we don't already have that
15:00:38 however, I'm concerned about publishing and supporting the interface pulled from nova/cinder
15:00:49 so we redirected their efforts away from glanceclient
15:01:15 okay, that's about all we have time for, folks
15:01:16 ah, ok
15:01:19 thanks
15:01:40 #endmeeting