15:04:25 #startmeeting cinder_testing
15:04:26 Meeting started Wed Aug 31 15:04:25 2016 UTC and is due to finish in 60 minutes. The chair is scottda. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:04:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:04:30 The meeting name has been set to 'cinder_testing'
15:05:00 hi
15:05:09 o/
15:05:15 In another meeting as usual, but will kinda be paying attention. :)
15:05:26 hi
15:05:34 hello!
15:05:53 We've nothing on the agenda. I've been trying to narrow down the oom_killer issue...
15:06:07 same here, sprint demo meeting every other wednesday :)
15:06:25 hi
15:06:25 I cannot repro oom on stable/mitaka with 'tox -epy34' and a 2GB VM (as opposed to master, which repros every time)
15:06:54 I would like to ask smcginnis and all cores to take a look at https://review.openstack.org/#/c/348449/ - fake drivers integration for devstack
15:07:11 e0ne: Opened
15:07:22 I'm doing a loose binary search on commits to see if I can find a point in time where the problem starts.
15:07:23 e0ne: I'll look
15:07:39 smcginnis: thanks. IMO a PTL vote will be useful for it
15:07:57 xiexs proposed openstack/cinder: Convert InvalidVolumeMetadataSize to webob.exc.* https://review.openstack.org/356213
15:08:06 scottda: My concern is that it's just been a gradual increase in memory needed to run the UTs, so I'm worried we won't find a smoking gun.
15:08:11 I would like to ask that we try to figure out a way to test manage/unmanage in the gate. cFouts has been mentioning it a lot lately, but we have internal tests for those functions that have been broken about 3 times so far this release due to a lack of functional gate tests
15:08:16 We could ask people to review patrickeast's devstack patches, but I see one with a +2 and 8 +1's and yet no reviews in 4 weeks!
15:08:21 i'm planning to cycle back to the oom bug this week myself, and look for general issues in that regard
15:08:29 smcginnis: Yes, that's my worry as well.
15:08:52 This causes us lots of headaches trying to get patches through our internal review process :(
15:08:56 eharney: Are you able to reproduce it?
15:09:03 akerr: +1
15:09:06 geguileo: i did last week
15:09:19 eharney: What are the requirements to trigger it?
15:09:39 Any help is welcome. I've tried a few approaches, but I'm sure there are ideas I'm missing...
15:09:40 easy route is to run unit tests on a smaller VM, i think i was using 1500MB of RAM
15:09:47 i'm going to go at it this week w/ profiling tools
15:09:57 geguileo: I trigger it with a 2GB VM, running tox -epy34, every time
15:10:12 eharney: scottda OK, maybe I'll give it a go
15:10:18 It occasionally hits with 4GB RAM, so I think anything under that increases the likelihood that it will happen.
15:10:27 I see a slow, steady increase in memory usage as the unit tests run.
15:11:06 strace doesn't show me anything stuck or looping in system calls.
15:11:07 scottda: So it's not a couple of tests' fault, but a general issue
15:11:23 geguileo: That's my feeling.
15:11:30 geguileo: That's the current hypothesis
15:11:43 I tried removing the zonemanager tests, but it still reproduced.
15:12:02 I might try removing other entire directories.
15:12:16 scottda: +1 - that can help us narrow down the issue.
15:12:25 scottda: You should be able to automate that. :)
15:12:53 We didn't use to hit these issues before
15:13:17 Has anybody tried to git bisect it?
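A rough sketch of the bisect approach being asked about here, assuming the OOM can be approximated by capping memory with ulimit rather than using a real 2GB VM (the helper script name, the cap, and the good/bad endpoints are illustrative; behaviour under an address-space limit is not identical to the oom_killer on a small VM):

    # check_oom.sh - hypothetical bisect helper: run the py34 unit tests under
    # a ~2GB virtual-memory cap so a leaky revision fails the way a small VM does.
    cat > check_oom.sh <<'EOF'
    #!/usr/bin/env bash
    ulimit -v $((2 * 1024 * 1024))   # cap in KB, roughly 2GB
    tox -epy34
    EOF
    chmod +x check_oom.sh

    git bisect start
    git bisect bad HEAD                    # master reproduces every time
    git bisect good origin/stable/mitaka   # stable/mitaka does not reproduce
    git bisect run ./check_oom.sh          # non-zero exit marks a commit bad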
15:13:35 geguileo: Yeah, but akerr said he had to up internal VMs for unit tests from 2GB -> 4GB in January
15:13:57 geguileo: No, I just ran stable/mitaka without repro this morning. I can try some bisect next.
15:14:19 scottda: I think that could at least give us an idea
15:14:20 xiexs proposed openstack/cinder: Make examples consistent with actul API https://review.openstack.org/326852
15:15:14 scottda: geguileo: and we had to up it from 1G to 2G a while before that, so it has been a gradual thing for a long while now
15:15:40 akerr: That's great to know
15:15:59 Because this could be related to the number of tests more than anything else
15:16:12 Ivan Kolodyazhny proposed openstack/cinder: RBD Thin Provisioning stats https://review.openstack.org/178262
15:16:15 So it's probably something in the base class
15:17:24 OK, so there's that issue...
15:17:44 Anything we need to talk about regarding the upcoming release?
15:17:45 Are tests subject to FeatureFreeze?
15:18:04 scottda: I don't think they should be
15:18:13 I have a few things related to rolling upgrades testing, not sure if related to the upcoming release
15:18:19 no, me neither. just asking
15:19:04 I've been fiddling with some ansible scripts to deploy on multiple nodes at mitaka, combined with some python scripts for setting up a volume / reading and writing to it and backing it up https://github.com/ntpttr/rolling-upgrades-devstack
15:19:05 ntpttr: Is that stuff in devstack or infra? It'd be nice to get rolling upgrades in before the release, I would think.
15:19:30 scottda: +1
15:19:35 I've come across a few things that we might want to take a look at, or at least have in some kind of upgrade documentation
15:19:41 scottda: Actually multinode grenade is in the check queue now, non-voting currently.
15:19:52 scottda, I've been doing manual tests w/ devstack, not infra
15:20:23 ntpttr: What are those issues?
15:20:25 ntpttr: Oh, upgrade docs would be cool. I've wanted to write them, but stopped when I noticed that there isn't a good place to put them.
15:20:30 Catching up - tests will not be subject to feature freeze.
15:20:52 One thing is, after upgrading the DB and API service to master from mitaka, it's possible that no volume creation will work, because in mitaka having a volume_type of 'None' is okay, but recently an exception has started to be thrown for that
15:21:13 so people will need to add a default_volume_type before they upgrade
15:21:20 ntpttr: i thought we fixed that
15:21:35 eharney: I thought so too...
15:21:46 eharney: I ran into the issue yesterday after upgrading the DB and just the API service, with the rest running on mitaka
15:21:46 ntpttr: https://review.openstack.org/#/c/353665/
15:22:13 ntpttr: please file a bug if it still throws exceptions
15:22:24 eharney: I'll give it another test today and file if it does
15:22:52 ntpttr: Thanks, what other issues?
15:23:06 ntpttr: dulek We'd like some upgrade docs. I'm guessing we can figure out somewhere to put them.
15:23:16 Another thing I ran into had to do with the backup service - it seems like when Cinder is out of sync with the rest of the deployment (in my case it was cinder at mitaka and the other services at master), creating a backup results in an error
15:23:40 ntpttr: Yes, I've seen that. I'm not sure what the resolution was...
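ntpttr's first upgrade finding above amounts to a one-line configuration change operators would need before moving the API service off Mitaka; a minimal sketch, assuming crudini is available, the stock config path is used, and "default" is just a placeholder volume type name:

    # Create (or pick) a volume type and make it the default before upgrading,
    # so create requests without an explicit type keep working on the new API.
    cinder type-create default
    crudini --set /etc/cinder/cinder.conf DEFAULT default_volume_type default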
15:23:57 I thought it might have had to do with cinder and swift being at different branches, but it looked like the problem was with rootwrap or something before any calls to swift were even made
15:24:09 I think it was a version pinning thing in the DB
15:24:19 ntpttr: What error? I've tested that manually and fixed a bug related to that.
15:24:46 ntpttr: https://review.openstack.org/#/c/350534/
15:24:48 dulek: I don't have a paste of the error handy, but it was rootwrap throwing an unauthorized action exception I think
15:25:14 ntpttr: Oh, so maybe rootwrap filters weren't updated.
15:25:40 ntpttr: If there's such a requirement when upgrading, we need to signal it in the release notes.
15:25:41 ntpttr: I think you should create bugs for all the issues with instructions, and maybe an Etherpad, so we can nail those down asap
15:25:52 geguileo: Will do
15:25:58 ntpttr: Thanks!!
15:26:03 np!
15:26:07 ntpttr: Thanks a lot!
15:26:13 Okay, so can I use a moment here?
15:26:19 ntpttr: Please let us know when you have a list of them so we can give them priority
15:26:21 I had one other question related to this, but go ahead dulek
15:26:34 ntpttr: Oh, sorry, go on. :)
15:26:35 geguileo: Sure, I'll try and nail it down today and tomorrow
15:26:49 ntpttr: Awesome!
15:27:06 ntpttr: Go ahead with your question.
15:28:02 dulek: Okay thanks :). I was just wondering if it would maybe be good to have some kind of API that an admin could call when they're planning an upgrade, something that could check to see if there are any pending tasks like a backup or volume being created, so that services don't go down in the middle of the process
15:28:28 Maybe also to send a pause signal to stop new processes from starting until the upgrade has begun
15:29:34 a deployment tool could use it to wait to shut down until a running process is complete
15:29:35 ntpttr: Good thinking. I've always assumed this is addressed by the fact that services finish all already-running jobs when shutting down
15:29:37 ntpttr: If the service is properly configured then it won't just go down in the middle
15:29:55 ntpttr: There is a timeout that must be set
15:30:05 Only SIGKILL destroys the service immediately.
15:30:06 ntpttr: But nobody sets it up
15:30:11 And now the fun part!
15:30:25 geguileo: oh really? That's good to know - I assumed, though, that a deployment tool like kolla, which destroys containers and brings up new ones, wouldn't be checking that
15:30:27 I think that in DevStack the service may get killed immediately.
15:30:33 kolla just being one example
15:30:56 Because oslo.service has some strange stuff related to the process being a daemon or not.
15:31:18 ntpttr: It's done by oslo.service, dulek and I will be talking about that among other things in our OpenStack Barcelona talk
15:31:36 geguileo: awesome, I'll be sure to check that out
15:31:37 ntpttr: Yup, in the case of kolla that sucks. But IMO that should be addressed in Kolla.
15:31:57 dulek: I agree - if the services can handle it, deployment tools can make use of it
15:32:13 that was sort of my idea w/ the api, just a tool for deployment tools to make use of
15:32:19 if it already exists that's great
15:33:31 dulek: What did you want to talk about?
15:33:43 I think that answers my question, I'll be sure to watch your talk in person or via the internet if I can't make it to the summit
15:34:00 thanks!
15:34:19 Just wanted to note that the multinode grenade job is in the check queue as non-voting right now. I wonder what the requirements on job stability are to make it voting.
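For reference on the graceful-shutdown timeout geguileo mentions above: it is an oslo.service option that, per the discussion, is rarely configured. A sketch of setting it for the Cinder services, again assuming crudini and the stock config path; the 300-second value is only an example:

    # On SIGTERM the service stops taking new work and waits up to this many
    # seconds for in-flight operations (0 means wait indefinitely); only
    # SIGKILL terminates it immediately.
    crudini --set /etc/cinder/cinder.conf DEFAULT graceful_shutdown_timeout 300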
15:34:29 And how to actually check job stability.
15:34:47 Because goo.gl/g6GO7t isn't really clear. ;)
15:37:43 Hm, did I just get disconnected? ;)
15:37:45 e0ne: Do you have thoughts? You've looked at moving various jobs to voting in the past.
15:37:58 dulek: No, I just have no idea on those
15:38:25 scottda: AFAIR, we don't have requirements, but usually ask to have it stable for 1-3 months
15:38:49 e0ne: And how do we check its stability?
15:39:02 yeah, and then we ask at a cinder meeting about people's opinions to move to voting, IIRC
15:39:12 geguileo: http://graphite.openstack.org/
15:39:30 geguileo: and compare it to the devstack+lvm job
15:39:47 e0ne: Oh, OK, comparing it
15:39:52 e0ne: Thanks!
15:39:59 dulek: np
15:40:12 So - we'll have voting upgrade testing in a few months. :)
15:40:23 (unless it gets unstable :P)
15:40:44 lol
15:40:59 dulek: let's talk about it at the design session :)
15:41:00 I have a question related to the HA A/A testing
15:41:50 Will anybody with a storage that supports CGs be able to test that part?
15:42:14 patrickeast: Are you already doing this with your CI?
15:42:59 I have no access to that kind of storage for tests
15:43:09 me neither
15:43:10 And I'm afraid that with the amount of changes that are going in they'll break something
15:43:21 geguileo: if we get e0ne's patch in https://review.openstack.org/#/c/348449/, we can use the fake gate driver
15:43:37 that uses LVM
15:44:01 geguileo: The fake driver has CG support.
15:44:01 geguileo: Or GateDriver.
15:44:01 geguileo: It mocks CGs on LVM.
15:44:25 I was hoping for some real tests...
15:44:40 geguileo: that is a real test
15:44:54 geguileo: it creates volumes on LVM
15:44:56 xyang: Then it's not mocking them?
15:44:56 hey guys, who has any idea about the Jenkins issues?
15:45:03 not mocking
15:45:14 xyang: Aaaaaah, awesome!!!
15:45:16 geguileo: it is called fake just because it is not for production
15:45:42 xyang: OK, then I'll try to use that one in my tests
15:45:45 geguileo: because it can't guarantee consistency on the snapshots, but we are not testing that part anyway
15:45:49 And see if I can test with it
15:45:52 geguileo: It mocks CG consistency, not Cinder resources. ;)
15:46:07 dulek: Cool
15:46:17 dulek: to be accurate, it does not mock consistency
15:46:20 Then I will be able to test it myself
15:46:32 dulek: it claims that it is not consistent
15:46:46 xyang: dulek Thanks for the answers :-) but definitely good enough for testing
15:47:08 xyang: I think we had the same thing in mind. :)
15:47:20 dulek: sure :)
15:47:38 Anything else on that topic geguileo? I'm another HA testing question...
15:47:51 s/I'm/I've
15:48:10 dulek: Do we have any multi-node testing other than for upgrades?
15:49:04 I'm thinking we could start in on the next steps for HA, either 2 c-vols with Ceph, or work on a shared LVM solution, or both in parallel.
15:50:53 I think we could/should try to wrap as much as possible into one multi-node configuration, to avoid having to get multiple jobs and changes through the infra/devstack review process. I don't know if there's other multi-node testing that could run on the same config?
15:51:12 Sorry, only just back to my desk. Lots talked about today, I'll read the log.
15:51:29 scottda: I think some devstack-gate changes will be required to make configurations different on the primary and sub nodes.
15:51:37 scottda: But yes - that should be the next step.
15:51:46 Maybe testing migrate/retype between nodes?
15:52:20 dulek: Yes, that's a good idea. I'll start with that, since I've got in-flight patches already for the single-node case.
15:52:25 I think we would need to run 2 cinder-volume services on each node
15:52:38 One of those with LVM and out of the cluster
15:52:38 geguileo: Does DevStack allow that?
15:52:57 And the other with a storage that can be clustered
15:53:09 dulek: Well, if you use a custom local.sh you can do it
15:53:48 geguileo: local.sh, or localrc?
15:54:05 geguileo: OK, we'll keep that in mind as we set this up. Thanks.
15:54:16 dulek: local.sh
15:54:18 sorry, was afk for a bit, scottda geguileo: yeah, Pure's CI is testing CGs with the HA stuff
15:54:29 patrickeast: AWESOME!!!
15:54:37 BTW, 5 minutes before the cinder meeting. Let's wrap this up...
15:54:55 dulek: geguileo: i've got a job that does pure + ceph in A/A on two nodes, but the ceph plugin doesn't seem to like it
15:55:00 Anything else today?
15:55:08 so ceph won't just be an out-of-the-box clustered kind of deal
15:55:12 patrickeast: lol, we'll have to look at that
15:55:36 patrickeast: Deploying with devstack, you mean?
15:55:41 geguileo: yea
15:55:52 geguileo: ceph itself is fine, just the devstack setup scripts
15:56:11 patrickeast: You have to deploy Ceph outside of devstack
15:56:23 patrickeast: And not let devstack do the deployment of Ceph
15:56:50 ah, that's unfortunate
15:56:53 patrickeast: Or let it do it and then do some ceph config copying and have a different local.conf for the other node
15:57:05 geguileo: we probably need to change that if we're going to use that for our gate tests with A/A
15:58:00 ok, time to move to the next meeting. Thanks everyone
15:58:00 #endmeeting
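A hypothetical local.sh sketch of the two-services-per-node idea geguileo describes above: leave DevStack's stock LVM cinder-volume out of the cluster and start a second cinder-volume, with its own config, that joins one. The paths, the "ceph" backend name, the cluster name, and the [DEFAULT] cluster option all reflect the in-progress A/A work and are assumptions rather than settled DevStack support:

    #!/usr/bin/env bash
    # local.sh runs after stack.sh; clone the generated cinder.conf and start a
    # second cinder-volume that serves only the clusterable backend.
    CONF=/etc/cinder/cinder-clustered.conf
    sudo cp /etc/cinder/cinder.conf $CONF
    sudo crudini --set $CONF DEFAULT cluster mycluster        # join cluster "mycluster"
    sudo crudini --set $CONF DEFAULT enabled_backends ceph    # leave LVM to the stock service
    /usr/local/bin/cinder-volume --config-file $CONF &        # usual DevStack pip install location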