21:00:32 <notmyname> #startmeeting swift
21:00:33 <openstack> Meeting started Wed Oct 25 21:00:32 2017 UTC and is due to finish in 60 minutes. The chair is notmyname. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 <openstack> The meeting name has been set to 'swift'
21:00:40 <notmyname> who's here for the swift team meeting?
21:00:42 <mattoliverau> o/
21:00:47 <m_kazuhiro> o/
21:00:48 <rledisez> hi o/
21:01:53 <notmyname> hmm... I'm sure there's more than 4 people out there :-)
21:01:59 <tdasilva> hi
21:02:07 <peluse> yo!
21:02:50 <timburke> o/
21:02:51 <acoles> hi, sorry to be late
21:03:13 <notmyname> now that acoles is here, we can finally begin
21:03:14 <notmyname> ;-)
21:03:18 <peluse> sheesh
21:03:36 <notmyname> acoles: no worries. we haven't started yet :-)
21:03:39 <acoles> I was here, I just didn't say hello
21:03:47 <notmyname> ah. of course
21:04:28 <notmyname> ok, let's get started then. most of the people I expected to discuss today's topic are here
21:04:34 <notmyname> #link https://wiki.openstack.org/wiki/Meetings/Swift
21:04:41 <notmyname> #topic next release
21:04:52 <notmyname> it's time to tag another release
21:04:58 <notmyname> #link https://wiki.openstack.org/wiki/Swift/PriorityReviews
21:05:06 <notmyname> priority reviews page is updated with 2.16 stuff
21:05:17 <notmyname> it's in the rough order that I'd prioritize it
21:05:26 <notmyname> please review stuff as you are able
21:05:54 <notmyname> ideally, I'd like to release next week (earlier rather than later, since I'll be flying to the openstack summit late next week)
21:06:14 <notmyname> are there any questions about any of these patches?
21:06:47 <notmyname> ok
21:06:52 <mattoliverau> Nope, let's aim for next week
21:07:02 <notmyname> #topic bug triage
21:07:08 <notmyname> #link https://etherpad.openstack.org/p/swift-bug-triage-list
21:07:18 <notmyname> last week we said we'd have a bug triage day next week
21:07:32 <notmyname> ideally, that would be right after a release is tagged :-)
21:07:43 <notmyname> which is great
21:08:10 <notmyname> so put the bug-bash day on your calendar and let's clean up that list
21:08:24 <notmyname> (this is a reminder topic instead of one that needs a lot of discussion, I think)
21:08:44 <notmyname> ok, now for the meaty topics :-)
21:08:52 <notmyname> #topic SPDK and swift
21:09:07 <notmyname> today peluse is back with us (yay) to talk about something he's been working on
21:09:12 <peluse> rock n roll
21:09:14 <notmyname> peluse: take it away
21:09:27 <peluse> I'm thinking I should have typed some shit up in advance to avoid all the typos I'm about to introduce :)
21:09:31 <peluse> anyways...
21:09:34 <peluse> http://spdk.io
21:09:45 <notmyname> #link http://spdk.io
21:09:54 <peluse> is the URL as I mentioned before. Quick high-level overview, then I'll bring up a proposal someone in our community has made
21:10:02 <peluse> that we haven't spent a whole lot of time thinking about, TBH
21:10:06 <notmyname> ok
21:10:25 <peluse> Also, here's a SNIA talk I did last month about SPDK in general and one relevant component called blobstore https://www.snia.org/sites/default/files/SDC/2017/presentations/Solid_State_Stor_NVM_PM_NVDIMM/Luse_Paul_Verma_Vishal_SPDK_Blobstore_A_Look_Inside_the_NVM_Optimized_Allocator.pdf
21:10:37 <peluse> So SPDK is a set of user-space components that are all BSD licensed
21:11:09 <peluse> it's used in a whole bunch of ways, but mainly by storage appliances to optimize SSD performance in what swift would call the storage node
21:11:19 <peluse> FYI it's in Ceph already, but not as the default driver
21:11:41 <peluse> and when I say "it" I mean whatever component the system has chosen to take on; in Ceph it's the user-space polled-mode NVMe driver
21:11:56 <peluse> there are some basic perf marketing-type hype slides in that deck I pasted in, for anyone interested
21:12:19 <peluse> pretty huge gains when you consider latency- and CPU-sensitive apps running with the latest SSDs
21:12:35 <notmyname> so the basic idea is a fast/efficient way to talk to fast storage media that might potentially be useful in swift's object server?
21:12:46 <peluse> anyway, the real trick is that it's all user space: direct access to HW, no interrupts, and no locking
21:12:51 <peluse> yup
21:13:08 <peluse> but there are a ton of components, well not a ton, but a bunch that would not be relevant
21:13:15 <timburke> could it be useful for the account/container servers, too, or are we just looking at object servers (and diskfile in particular)?
21:13:17 <notmyname> what are the integration points? I doubt it's as simple as mmaping a file and you're done
21:13:19 <peluse> and some are libraries and some are applications.
21:13:44 <peluse> I think since it's SSD only (well, not technically, but it wouldn't make sense to use on spinning media) most likely container
21:13:46 <rledisez> so we are talking of object servers on SSD. is it a real use case? (i would think it's the target of ceph, very low latency)
21:14:06 <peluse> if you used object servers there are probably some limitations wrt what we call blobstore
21:14:21 <peluse> I'll get to the integration question in a sec
21:14:44 <peluse> so, assuming a node takes on the user-space NVMe driver and the driver talks directly to HW, you can see there's no kernel and no FS
21:15:00 <peluse> so... unless the storage application talks in blocks, it doesn't make much sense
21:15:11 <notmyname> ok
21:15:16 <peluse> blobstore is SPDK's answer to this, but it's not a FS
21:15:46 <peluse> it's a super simple way for apps that don't talk blocks to use a really simple file-ish, object-ish interface to take advantage of SPDK
21:15:54 <peluse> so for example, RocksDB
21:16:07 <peluse> in that slide deck I mention some work we did there to bolt blobstore up to RocksDB as a back end
21:16:13 <notmyname> so... as you know, swift likes to be HW and driver agnostic. what does this tie in to? is it possible to write stuff in a way that works whether you have fast media or not?
21:16:15 <peluse> it's that kind of idea that might make sense for Swift
21:16:21 * jungleboyj looks in late
21:16:32 <notmyname> or is the idea that swift would engage spdk mode if it detects flash?
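[editor's note: as peluse answers next, SPDK itself does no media detection, so any "engage spdk mode on flash" logic would live in Swift. One plausible approach on Linux (this is an editorial sketch, not anything proposed in the meeting) is the sysfs rotational flag:

```python
# Minimal sketch, assuming Linux sysfs. A value of '0' in the
# rotational file means the kernel sees the device as non-rotational
# (flash/SSD). The device name 'nvme0n1' below is just an example.

def is_ssd(device_name):
    """Return True if /sys reports the block device as non-rotational."""
    path = '/sys/block/%s/queue/rotational' % device_name
    with open(path) as f:
        return f.read().strip() == '0'

# only consider an SPDK-style backend on non-rotational media
if is_ssd('nvme0n1'):
    print('non-rotational device; an SPDK-backed code path could apply')
```
]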
21:16:50 <peluse> so there are lots of things that could be done there
21:17:09 <peluse> but yeah, I think anything more aggressive than NVMe-only would not be worth it
21:17:18 <peluse> SPDK doesn't automatically do any of that kind of detection
21:17:33 <peluse> so that would have to be considered
21:17:33 <notmyname> that makes sense
21:17:41 <notmyname> I could imagine swift detecting that
21:18:07 <peluse> and blobstore itself is pretty immature, need to point that out. We just now added code to recover from a dirty shutdown, if that gives you an idea
21:18:10 <notmyname> ok, so tell me (us) more about the blobstore. would that be a diskfile thing?
21:18:16 <peluse> so this whole thing would be a proof-of-concept type activity for sure
21:18:24 <notmyname> how does this make rledisez's LOSF work awesomer?
21:18:26 <peluse> so yeah, I think diskfile would make sense
21:18:40 <peluse> but I don't remember the details there, of course. my brain is pretty small :)
21:18:53 <peluse> In that slide deck you can see a super simple example of the interface
21:19:34 <peluse> blobstore basically takes over an entire disk, writes its own private metadata, and then the app creates "blobs" and does basic LBA-sized reads and writes to them
21:19:46 <notmyname> ah, ok
21:19:47 <peluse> it can't handle sub-LBA access (by design)
21:20:17 <peluse> well, we call them pages in blobstore, but they're 4K
21:20:17 <notmyname> that sounds like a haystack-in-a-library thing. or something similar to what you're working on, rledisez
21:20:59 <rledisez> yes, blobstore would be what we call a volume. and I guess it embeds its own k/v indexation. so it looks similar in some ways
21:21:20 <peluse> yeah, I think the integration effort w/Swift for production would be a decent-sized lift, but for a POC it may be worth it, maybe for container SSDs, provided the latency and CPU usage benefit made sense
21:21:23 <notmyname> peluse: is there any spdk component that could replace sqlite? eg some kv store that does transactions?
21:21:34 <notmyname> eg to replace the container layer
21:21:54 <peluse> rocksDB would be the closest match, using blobstore as a backing component
21:22:07 <peluse> but that's really what Wewe's proposal was - to add a k/v interface on blobstore
21:22:14 <notmyname> ah ok. so a 3rd-party db that works with spdk
21:22:27 <peluse> yeah, maybe that's the best first step
21:23:04 <notmyname> any questions from anyone, so far?
21:23:11 <peluse> I can't remember what sqlite guts look like. can you easily replace the storage engine, as it's called in like MariaDB? anyone know?
21:23:25 <notmyname> no
21:23:33 <peluse> yeah, OK, didn't think so
21:23:36 <notmyname> sqlite is "just" a DB library
21:23:37 <tdasilva> dumb question from me, but can you explain the difference between spdk and the intel cas tech?
21:23:49 <notmyname> ^ not a dumb question
21:23:54 <peluse> sure, good question
21:24:00 <peluse> they are totally different, for one thing
21:24:21 <peluse> CAS is a caching project/product that works between an app and the FS.
21:24:52 <peluse> SPDK is a whole bunch of stuff, but not caching layers. It has to be integrated with an application unless you use one of the things like the compiled iSCSI target
21:25:48 <peluse> dunno if that's enough explanation - block cache vs library of stuff for integration, mainly a polled-mode device driver for NVMe
21:26:25 <peluse> so a Q for you guys: is there any urgency with container SSDs and latency and/or using a bunch of CPU?
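[editor's note: to make the blobstore semantics above concrete (the store owns the whole device, the app creates blobs, I/O happens in fixed 4K "pages", and sub-page access is rejected by design), here is a toy Python illustration. This is an editorial sketch only, not SPDK's actual C API; the class and its methods are invented for illustration:

```python
PAGE_SIZE = 4096  # blobstore "pages" are 4K, per the discussion


class ToyBlobStore(object):
    """Toy model of a blobstore-like allocator over one raw device."""

    def __init__(self, device_path):
        # a real blobstore takes over the raw device and keeps its own
        # private metadata; an ordinary file stands in for it here
        self.dev = open(device_path, 'r+b')
        self.blobs = {}     # blob_id -> (start_page, num_pages)
        self.next_page = 1  # page 0 reserved for (imaginary) metadata
        self.next_id = 0

    def create_blob(self, num_pages):
        blob_id = self.next_id
        self.blobs[blob_id] = (self.next_page, num_pages)
        self.next_page += num_pages
        self.next_id += 1
        return blob_id

    def write(self, blob_id, page_offset, data):
        # whole-page I/O only: no sub-LBA (sub-page) access, by design
        if len(data) % PAGE_SIZE:
            raise ValueError('writes must be a multiple of PAGE_SIZE')
        start, num_pages = self.blobs[blob_id]
        if page_offset + len(data) // PAGE_SIZE > num_pages:
            raise ValueError('write past end of blob')
        self.dev.seek((start + page_offset) * PAGE_SIZE)
        self.dev.write(data)

    def read(self, blob_id, page_offset, num_pages):
        start, _ = self.blobs[blob_id]
        self.dev.seek((start + page_offset) * PAGE_SIZE)
        return self.dev.read(num_pages * PAGE_SIZE)
```
]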
21:26:29 <tdasilva> peluse, so spdk provides performance improvements by substituting the FS and writing directly to block storage?
21:26:37 <rledisez> do you handle caching in bdev or blobstore? or do you assume the underlying device is fast enough?
21:26:41 <peluse> tdasilva, yup
21:26:55 <peluse> rledisez, there's no data caching at all right now
21:27:11 <tdasilva> peluse: very similar to bluestore?
21:27:26 <peluse> bdev is a layer for abstracting different types of block devices. For example, we can have an NVMe device at the bottom of the stack or a RAM disk, and the layers above bdev don't care. it's super lightweight
21:27:54 <peluse> tdasilva, yeah, bluestore and blobstore are a lot alike, but bluestore was done of course just for Ceph and I think is more mature/feature-rich right now
21:28:26 <peluse> but Sage mentioned in his keynote at SNIA SDC about looking at maybe using rocksdb w/blobstore at some point in the future (don't quote me though)
21:28:44 <peluse> that would be in addition to bluestore as the backing FS though, not instead of
21:28:54 <notmyname> peluse: what questions do you have for us?
21:29:02 <tdasilva> peluse: ack, thanks
21:29:19 <peluse> just the one above about pain points wrt latency and/or CPU utilization around SSDs
21:30:11 <peluse> well, and whether anyone is interested enough to work with someone from the SPDK community to try and see if there's some sort of proof of concept worth messing with here
21:30:13 <notmyname> the only pain points I've seen recently with the container layer are drive fullness and the container replicator not having all the goodness we've added to the object replicator for when drives fill up
21:30:52 <notmyname> rledisez: how about you? any latency or cpu issues on containers or accounts?
21:31:11 <rledisez> peluse: from my experience, there is not really a pain point about storage speed on containers. having a lot of containers slows down some processes (like the replicator) as they need to scan all the dbs. not sure yet if blobstore would help here
21:31:50 <peluse> when I say CPU util, there's more in that deck I referenced: using SPDK (nvme + blobstore) greatly reduces CPU utilization while at the same time greatly improving perf
21:31:57 <peluse> so you get kind of a two-fer-one thing
21:32:22 <peluse> so for containers you'll get more CPU available for other things happening on the storage node, and the IOs will be faster and more responsive
21:32:38 <peluse> (or your money back)
21:32:44 <notmyname> heh
21:33:02 <rledisez> how can you measure the CPU usage related to kernel/fs? i don't think i see any, but i would like to check
21:33:12 <rledisez> most of the cpu usage comes from the replicator or container-server
21:33:29 <peluse> There's a perf blog on spdk.io that may have some good info in it. honestly, I haven't read it :(
21:33:49 <peluse> but we have some folks in our community that live for that kind of stuff, so I can ask there and get back to y'all
21:34:30 <peluse> rledisez, yeah, unless used for object storage it wouldn't help w/the replicator
21:34:48 <rledisez> if you have a magic command to get the cpu usage i would be interested (i guess it would be something related to the perf command)
21:35:33 <notmyname> honestly, spdk sounds really cool. it seems like something that would be great for an all-flash future. (but I'm not sure if anyone deploying swift is there yet)
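[editor's note: a small illustration of the bdev idea peluse describes above, where layers above the abstraction don't care whether the backing device is NVMe or a RAM disk. This is an editorial Python sketch of the concept, not SPDK's actual C bdev API:

```python
import abc


class BlockDevice(abc.ABC):
    """What upper layers (e.g. a blobstore) program against."""

    block_size = 4096

    @abc.abstractmethod
    def read_blocks(self, lba, count):
        ...

    @abc.abstractmethod
    def write_blocks(self, lba, data):
        ...


class RamDisk(BlockDevice):
    """RAM-backed implementation, handy for tests."""

    def __init__(self, num_blocks):
        self.buf = bytearray(num_blocks * self.block_size)

    def read_blocks(self, lba, count):
        off = lba * self.block_size
        return bytes(self.buf[off:off + count * self.block_size])

    def write_blocks(self, lba, data):
        off = lba * self.block_size
        self.buf[off:off + len(data)] = data


# an NVMe-backed class would provide the same two methods on top of a
# user-space driver; code above the abstraction wouldn't change
dev = RamDisk(num_blocks=1024)
dev.write_blocks(0, b'x' * 4096)
assert dev.read_blocks(0, 1) == b'x' * 4096
```
]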
21:35:48 <peluse> rledisez, yeah, I dunno the details of the various measurements, but the team has looked at every metric known to man using a variety of tools
21:36:25 <notmyname> peluse: do you have people in the spdk community who are interested in swift? if so, are they interested because they just want to integrate spdk everywhere, or because they are using swift already?
21:37:00 <peluse> Wewe is the only person I know that's brought it up, and he wasn't able to get connected today due to network issues
21:37:33 <peluse> right now there's more demand on features/integration than there is anything else, so I don't think the former is driving anyone
21:37:52 <notmyname> ok
21:38:06 <peluse> which is one of the reasons I wanted to chat w/you guys about this - if it doesn't make a lot of sense to investigate from your perspective, we certainly have enough work on our plate :)
21:38:51 <peluse> that's all I got for ya. other questions?
21:39:00 <notmyname> I think it makes sense when looking a few years into the future and preparing for that. it doesn't make sense right now, in the sense that all of our current employers have a huge amount of stuff we need to do in swift way before we get to needing spdk
21:39:14 <peluse> yup yup
21:39:14 <notmyname> (my opinion)
21:39:34 <peluse> what is the current split of SSD usage, still mostly containers?
21:39:35 <notmyname> definitely something I want to keep an eye on
21:39:38 <notmyname> yeah
21:39:48 <peluse> cool
21:39:57 <notmyname> flash is still too expensive for interesting-sized object server deployments
21:40:08 <peluse> makes sense
21:40:19 <notmyname> people these days are going for bigger nodes. 80 10TB drives in a single chassis
21:40:28 <notmyname> (and getting all the eww that implies)
21:40:51 <peluse> well, that's not to say nobody on this end will work on a proof of concept anyway, and if so I'll encourage them to check in with the Swift community frequently, of course...
21:40:53 <rledisez> i like the idea, and we can surely share some stuff between LOSF/blobstore, but i think that people looking for a really low-latency object store will check out ceph, as by its design/implementation it looks more suited
21:40:54 <notmyname> let's move on so we can give m_kazuhiro appropriate time :-)
21:40:58 <notmyname> peluse: that's great!
21:41:08 <peluse> thanks for the time guys!!
21:41:09 <notmyname> and thanks for stopping by to give an update
21:41:27 <peluse> my pleasure... ping me later if anyone has followup questions. take care!
21:41:29 <notmyname> rledisez: I can get you in contact with peluse if you can't find him on IRC later
21:41:43 <notmyname> #topic symlinks
21:41:49 <mattoliverau> Yeah, thanks peluse, sounds like cool tech :)
21:41:54 <notmyname> m_kazuhiro: looks like the discussions and code have been going well!
21:42:05 <notmyname> only one more big question, and that's for CORS, right?
21:42:12 <notmyname> #link https://etherpad.openstack.org/p/swift_symlink_remaining_discussion_points
21:42:34 <m_kazuhiro> notmyname: Yes. There is only one discussion point for symlinks. It's about CORS.
21:42:59 <m_kazuhiro> Details are in #4 of the etherpad page.
21:42:59 <m_kazuhiro> The overview is that...
21:43:03 <notmyname> timburke and I talked about it as soon as I walked in the office this morning. he didn't even let me put down my bag! ;-)
21:44:00 <m_kazuhiro> When a symlink and its target are in different containers and these containers have different CORS settings...
21:45:19 <m_kazuhiro> clients will receive an error response to GET/HEAD of the symlink even if the request follows the CORS settings of the symlink container.
21:46:04 <m_kazuhiro> The discussion points are "Do we accept this behavior?" and "If we update the behavior, how do we update it?"
21:46:28 <notmyname> timburke: can you give a summary of what we talked about earlier? (you understand the context better than me)
21:48:23 <timburke> m_kazuhiro: so is the error you mention because of ACLs, or because of CORS allowed-origin settings? 401/403 because of container ACLs is definitely fine -- and we can currently return such (kinda curious) responses
21:49:09 <timburke> (ie, 200 on the preflight OPTIONS request, but then a 401/403 on the subsequent GET/POST/whatever)
21:50:04 <timburke> the behavior i'd expect, given two containers both publicly readable, one with a permissive allowed-origin and one without,
21:51:41 <m_kazuhiro> timburke: Because of ACLs. My concern is the case where the CORS settings match the ACLs, but clients will receive an ACL error even when following the CORS settings.
21:52:07 <timburke> would be that a symlink from the permissive container into the "normal" one would work -- we'd 200 the OPTIONS request (because of the settings on the container that's actually in the HTTP request path), then allow the subsequent GET (because the ACLs on both containers allow it)
21:52:40 <timburke> while the *other* way wouldn't work, because we fast-fail the OPTIONS request and the ACLs have no bearing
21:54:25 <timburke> m_kazuhiro: do we know the corresponding behavior for DLO/SLO?
21:54:51 <kota_> sorry, I did sleep too much
21:55:05 * kota_ just got woken up.
21:55:14 <timburke> like, if i have a DLO in one container, which has one set of ACLs and one particular CORS setting, but all of its segments are in another container where everything's different...
21:55:25 <m_kazuhiro> timburke: I'm not sure for DLO/SLO.
21:57:30 <timburke> it seems like a similar situation could arise -- following whatever precedent that gives us would at least have the advantage of consistency
21:57:55 <mattoliverau> +1
21:58:14 <mattoliverau> Sounds like a nice way of answering the question
21:58:31 <notmyname> :-)
21:58:51 <m_kazuhiro> timburke: So, the conclusion is that we should accept and keep the current behavior. correct?
21:59:47 <timburke> pretty sure. it's worth double-checking (and probably having a func test or two that include OPTIONS requests)
21:59:56 <notmyname> yeah
22:00:01 <m_kazuhiro> +1
22:00:36 <notmyname> I was just thinking that with questions like this, a functional test for each and the question "which one do we want to get passing" would be a great way to frame the discussion
22:00:49 <notmyname> ...and we're out of time
22:00:58 <timburke> generally, though, it seems like we rarely think about OPTIONS in middleware, so symlinks probably behaves like slo/dlo :-)
22:01:07 <notmyname> m_kazuhiro: I think you've got enough to go on today, right?
22:01:20 <m_kazuhiro> notmyname: Yes!
22:01:23 <notmyname> great!
22:01:30 <notmyname> thanks everyone for coming!
22:01:33 <notmyname> thank you for your work on swift
22:01:37 <notmyname> #endmeeting
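[editor's note, appended after the log: timburke suggests func tests that include OPTIONS requests. A rough sketch of what such a test could look like, using the `requests` library rather than Swift's actual functional test framework; the storage URL and container/object names are placeholders, though X-Container-Meta-Access-Control-Allow-Origin is Swift's real per-container CORS metadata header:

```python
import requests

STORAGE_URL = 'http://saio:8080/v1/AUTH_test'  # hypothetical endpoint
ORIGIN = 'http://example.com'


def test_cors_preflight_then_get(container, obj):
    url = '%s/%s/%s' % (STORAGE_URL, container, obj)

    # CORS preflight: the browser sends OPTIONS with an Origin header
    # and the method it intends to use
    preflight = requests.options(url, headers={
        'Origin': ORIGIN,
        'Access-Control-Request-Method': 'GET',
    })
    # a permissive X-Container-Meta-Access-Control-Allow-Origin on the
    # container in the request path should yield 200 here; per the
    # discussion, a restrictive one fast-fails regardless of ACLs
    assert preflight.status_code == 200

    # the actual request: for a symlink, the ACLs on *both* the
    # symlink container and the target container must allow the read
    resp = requests.get(url, headers={'Origin': ORIGIN})
    assert resp.status_code == 200


if __name__ == '__main__':
    # the interesting cases pair a permissive and a restrictive
    # container in each direction (names are hypothetical)
    test_cors_preflight_then_get('cors-open-container', 'symlink-to-normal')
```
]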