Wednesday, 2019-07-24

*** brinzhang has joined #openstack-cinder00:03
*** brinzhang_ has quit IRC00:06
*** _hemna has joined #openstack-cinder00:15
*** ianychoi has quit IRC00:18
*** ianychoi has joined #openstack-cinder00:20
*** tejdeep has quit IRC00:46
*** Liang__ has joined #openstack-cinder00:59
*** _hemna has quit IRC01:14
*** eharney has quit IRC01:15
*** imacdonn has quit IRC01:18
*** imacdonn has joined #openstack-cinder01:18
*** _erlon_ has quit IRC02:05
*** brinzhang_ has joined #openstack-cinder02:08
*** brinzhang has quit IRC02:12
*** zzzeek has quit IRC02:16
*** stakeda has joined #openstack-cinder02:17
*** zzzeek has joined #openstack-cinder02:19
*** deiter has quit IRC02:21
*** n-saito has joined #openstack-cinder02:27
*** dklyle has quit IRC02:34
*** david-lyle has joined #openstack-cinder02:35
*** tkajinam_ has joined #openstack-cinder02:57
*** tkajinam has quit IRC02:59
*** m75abrams has joined #openstack-cinder03:05
*** tkajinam__ has joined #openstack-cinder03:07
*** tkajinam_ has quit IRC03:09
*** psachin has joined #openstack-cinder03:30
*** dviroel has quit IRC03:33
*** abhishekk has joined #openstack-cinder03:34
*** gkadam has joined #openstack-cinder03:50
*** gkadam has quit IRC03:50
*** whoami-rajat has joined #openstack-cinder03:52
*** udesale has joined #openstack-cinder03:54
*** sapd1_x has joined #openstack-cinder03:57
*** sapd1_x has quit IRC04:07
*** tejdeep has joined #openstack-cinder04:12
*** pcaruana has joined #openstack-cinder04:27
*** jmccrory has quit IRC04:52
*** Luzi has joined #openstack-cinder04:55
*** jmccrory has joined #openstack-cinder05:04
*** _hemna has joined #openstack-cinder05:11
*** pcaruana has quit IRC05:12
*** jmccrory has quit IRC05:14
*** jmccrory has joined #openstack-cinder05:14
*** _hemna has quit IRC05:15
*** boxiang has joined #openstack-cinder05:25
*** pcaruana has joined #openstack-cinder05:25
*** jmccrory has quit IRC05:41
*** ruffian_sheep has joined #openstack-cinder05:46
*** ruffian_sheep has quit IRC05:59
*** tejdeep has quit IRC06:02
*** jmccrory has joined #openstack-cinder06:11
*** baojg has joined #openstack-cinder06:26
*** baojg has quit IRC06:27
*** baojg has joined #openstack-cinder06:28
*** jawad_axd has joined #openstack-cinder06:38
*** tosky has joined #openstack-cinder06:41
*** raghavendrat has joined #openstack-cinder06:42
<raghavendrat> is whoami-rajat: online?  06:43
<whoami-rajat> raghavendrat: Hi  06:43
<raghavendrat> this is regarding; whenever you get some time, could you please review & share your feedback  06:44
*** brinzhang has joined #openstack-cinder  06:48
<whoami-rajat> raghavendrat: sure  06:56
*** sahid has joined #openstack-cinder06:59
*** bhagyashris_ has joined #openstack-cinder07:04
*** rcernin has quit IRC07:04
*** tesseract has joined #openstack-cinder07:05
*** dpawlik has joined #openstack-cinder07:07
*** _hemna has joined #openstack-cinder07:12
*** _hemna has quit IRC07:16
*** gfidente has joined #openstack-cinder07:29
<gfidente> geguileo in the upgrade from queens to train  07:30
<gfidente> is there a specific order in which it is best to upgrade the cinder services?  07:30
<geguileo> gfidente: is this a rolling upgrade?  07:30
<gfidente> geguileo no I think it's going to be like it was for newton to queens  07:31
<gfidente> so we'd basically run all the db schema updates and jump the packages to the newest version  07:32
<gfidente> one node at a time  07:32
<geguileo> gfidente: I assume this is going to be a FFU  07:34
<gfidente> geguileo yeah that  07:34
<gfidente> I think the assumption is that upgrading api/scheduler on controlplane nodes  07:34
<gfidente> one by one  07:34
<gfidente> then move on to -volume  07:34
<gfidente> will magically get everything back to work on the newer version  07:35
<gfidente> assuming service is unavailable or unreliable until all -volume instances are upgraded too  07:35
<geguileo> gfidente: in Queens we added a "bad" online migration script  07:36
<geguileo> gfidente: that requires it to be run with cinder-volumes running  07:36
<geguileo> gfidente: so we need to run the cinder-manage online-data-migrations command with cinder-volumes running  07:37
<geguileo> gfidente: and then we can stop them and continue the FFU normally  07:37
<gfidente> geguileo and volume needs to stay at the queens level  07:37
<gfidente> until the online migrations finished  07:38
<geguileo> gfidente: yeah, until the queens online migrations are completed  07:38
<gfidente> wait, this is to upgrade to train  07:38
<geguileo> gfidente: then we can go through the cycle of upgrading to N+1, db sync, online migration, upgrading to N+2, etc  07:38
<gfidente> yeah and can -volume instances be upgraded last, jumping to train  07:39
<gfidente> in case they are on dedicated nodes  07:39
<geguileo> iirc on FFU we don't start the services until we have upgraded everything to N+3  07:40
<geguileo> so the order is not really relevant, since everything will have been stopped  07:40
<gfidente> geguileo yeah but if the services are on different nodes it might happen  07:41
<gfidente> that api/scheduler get to the newer version before volume is upgraded  07:41
<gfidente> I assume the N / N-3 won't work?  07:41
<gfidente> I mean, it's not reliable?  07:41
<geguileo> the db-sync and online migration happens on a single node iirc  07:42
<geguileo> so you only install the intermediary versions on 1 node  07:42
<geguileo> to be able to run the migrations  07:42
<geguileo> and then on the rest of the nodes you just install the N+3 version  07:43
<gfidente> yeah and assuming on the api nodes you jump db to N+3  07:43
*** raghavendrat has quit IRC07:43
<gfidente> but still have -volume running at the N level, is that a situation in which service is supposedly available  07:43
<gfidente> or not until -volume gets to N+3 too?  07:44
<geguileo> No, you have stopped all the services in FFU  07:44
<geguileo> So you have NOTHING running at N, N+1, N+2  07:44
<gfidente> geguileo that is the point I was trying to make, not if -volume is on a separate node  07:44
<geguileo> you had running N, you stop, do db-online-migration in N, then install N+1 in 1 node, do db-sync, do online-migration, install N+2 in 1 node, db-sync, online-migration, install N+3 on 1 node, db-sync (no online-migration here), install N+3 on all nodes, run N+3 on all nodes  07:45
<geguileo> gfidente: it is irrelevant where the services are installed  07:46
<geguileo> gfidente: you have 1 node that will be in charge of DB changes  07:46
<geguileo> gfidente: and then you will install N+3 on ALL of them with the DB ready  07:46
<geguileo> gfidente: so you will be running N+3 on all of them, regardless of the order at which they start  07:46
<geguileo> gfidente: I have never run FFU, but that was the design I heard last  07:47
<gfidente> geguileo no if -volume is on a different node  07:48
<geguileo> gfidente: why not?  07:48
<gfidente> geguileo we might end up in a situation where certain nodes (controlplane) hosting api/scheduler are at train level, and got the db migrated  07:48
<gfidente> but -volume instances are all still up and running at the queens level  07:48
<geguileo> gfidente: that should never happen  07:49
<geguileo> gfidente: because FFU should stop all services before doing the FFU  07:49
<gfidente> geguileo it is, but it's not doing it on all nodes  07:49
<geguileo> regardless of where the services are running  07:49
<geguileo> gfidente: then that's a bug  07:49
<gfidente> for example it doesn't stop nova-computes when upgrading the controlplane  07:49
<gfidente> ok but I think I get the point, all services must be upgraded to the train level for service to be restored  07:50
<gfidente> a different question now  07:50
<gfidente> and thanks a lot for helping clarifying this  07:50
<geguileo> gfidente: all services OF THE SAME COMPONENT must be upgraded at the same time  07:51
<gfidente> yeah of the same component of course, all cinder services in this case  07:51
<geguileo> gfidente: and it is recommended that you don't run different components with a difference of more than 1 release  07:51
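The ordering geguileo spells out above can be sketched as a small Python helper. This is only an illustration of the sequence (per-release online data migrations must complete before the next release's db sync, and only the final release runs everywhere); the step strings and the `ffu_steps` name are invented here, not real cinder-manage syntax:

```python
def ffu_steps(releases):
    """Illustrative sketch of the fast-forward-upgrade (FFU) ordering
    described in the discussion: services stop first, the starting
    release's online data migrations run, then each hop does a db sync
    followed by that release's online data migrations, and only the
    final release is installed and started on every node.
    Step strings are illustrative, not real cinder-manage commands."""
    steps = ["stop all cinder services",
             f"run online data migrations at {releases[0]}"]
    for rel in releases[1:]:
        steps.append(f"install {rel} on the migration node")
        steps.append(f"db sync to {rel}")
        if rel != releases[-1]:
            # the last hop skips online data migrations, per the chat
            steps.append(f"run online data migrations at {rel}")
    steps.append(f"install {releases[-1]} on all nodes")
    steps.append(f"start all services at {releases[-1]}")
    return steps
```

For a queens-to-train jump, `ffu_steps(["queens", "rocky", "stein", "train"])` reproduces the sequence geguileo lists: queens migrations first, then sync/migrate through rocky and stein, a train db sync with no online migrations, and a final cluster-wide install and start.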
<gfidente> so the different question is  07:51
<geguileo> gfidente: btw, I just detected a bug in cinder's online-db-migrations  07:52
* geguileo facepalms  07:52
<gfidente> wait a sec the other question  07:52
<geguileo> a bug that we've been carrying since Rocky  07:52
<gfidente> can a nova guest be hard rebooted with cinder/api down, if it had any cinder volume attached?  07:52
* gfidente doesn't think so  07:53
<geguileo> gfidente: that's a Nova question...  07:53
<gfidente> geguileo right  07:53
<geguileo> gfidente: I don't know if they detach the volumes on hard reboot or if they let them as they were  07:53
<gfidente> lyarwood ^^ ? :D  07:54
<geguileo> gfidente: even if they detach them, depending on the release they could have the connection info on their DB and use it to attach again without contacting Cinder  07:54
<lyarwood> geguileo / gfidente ; we detach on hard reboot  07:56
<lyarwood> so reading back, no we can't hard reboot with c-api down  07:57
<gfidente> lyarwood so to add up on your discussion yesterday, guests using cinder volumes can only be rebooted if cinder is available  07:57
<gfidente> did I get it right?  07:57
<gfidente> yeah that  07:57
<gfidente> thanks guys  07:57
<geguileo> lyarwood: you call os-brick to detach, but do you also call Cinder to remove the export/map?  07:57
<lyarwood> geguileo: yes we do now  07:58
<lyarwood> geguileo: prior to queens we didn't  07:58
<geguileo> lyarwood: good to know, that's probably the best thing to do  :-)  07:58
<lyarwood> geguileo: yeah, helps reset connection_info etc  07:58
*** e0ne has joined #openstack-cinder07:58
<whoami-rajat> geguileo: Hi, so i really don't know why this is happening, whenever i run all tests (tox -epy27), and i set  08:05
<whoami-rajat> 1) persistence -> db : all tests pass  08:05
<whoami-rajat> 2) persistence -> memory : hit the following traceback
<whoami-rajat> the persistence is set in test_base and no changes are made to dbms tests.  08:06
<geguileo> whoami-rajat: looks like some test is trying to access a volume that doesn't exist  08:06
<whoami-rajat> geguileo: yep, but test_base is only running 2 tests with memory persistence and clearing all class attributes at teardown, don't understand why it affects the dbms tests. the same error i hit before when trying out memory pers.  08:08
<geguileo> whoami-rajat: I can help you find the issue  08:09
<geguileo> whoami-rajat: how can I get the code that is failing?  08:09
<whoami-rajat> geguileo: should i update the PS, i refactored helper methods into a separate file.  08:10
<geguileo> whoami-rajat: if you update the PS with what's failing I can have a look and help you figure out what's wrong (it could be a cinderlib bug)  08:11
<whoami-rajat> geguileo: sure, will update in few secs  08:11
<openstackgerrit> Rajat Dhasmana proposed openstack/cinderlib master: Fix: Snapshot creation with volume types
<whoami-rajat> geguileo: ^ done  08:12
<geguileo> whoami-rajat: thanks, I'll have a look in 10 minutes, as soon as I finish fixing an online data migration bug I've just detected  08:13
<whoami-rajat> geguileo: oh ok. sure, i think i've a doubt regarding that too, hah  08:15
*** tkajinam__ has quit IRC08:18
<openstackgerrit> yenai proposed openstack/cinder master: Fix creation from raw image failed when qemu-img is not installed
*** gfidente has quit IRC08:25
<geguileo> whoami-rajat: interesting. It only fails in Python 3. Python 2 works fine...  08:26
<whoami-rajat> geguileo: oh, these are different from the results of my local env but the test failure reason is same.  08:28
<openstackgerrit> yenai proposed openstack/cinder master: Add host level for volume_get_all_by_host
<openstackgerrit> yenai proposed openstack/cinder master: Fix creation from raw image failed when qemu-img is not installed
<openstackgerrit> Bhaa Shakur proposed openstack/cinder master: Zadara VPSA: Move to API access key authentication
<openstackgerrit> Gorka Eguileor proposed openstack/cinder master: Fix online data migrations
<geguileo> whoami-rajat: I get the error on Python 2 as well XD  Time to debug it  08:38
<openstackgerrit> yenai proposed openstack/cinder master: Change volume_host to backup_host in backup rpcapi
<openstackgerrit> yenai proposed openstack/cinder master: Change volume_host to backup_host in backup rpcapi
<whoami-rajat> geguileo: hah, cause gate isn't running the tests that are failing locally :(  08:40
<geguileo> whoami-rajat: whaaaat?  :-O  08:41
<whoami-rajat> geguileo: oh wait, my bad  08:41
<whoami-rajat> geguileo: i did a typo while searching the test, sorry  08:41
<geguileo> you gave me a scare  XD  08:41
<geguileo> jaja, np, I did the same thing on Monday  08:42
<whoami-rajat> geguileo: oh, still gate is executing tests differently :/  08:45
<whoami-rajat> geguileo: i've one more query to ask if i'm not wasting much of your time :)  08:46
<openstackgerrit> yenai proposed openstack/cinder master: Speed up starting cinder-backup
<openstackgerrit> yenai proposed openstack/cinder master: Speed up starting cinder-backup
<geguileo> whoami-rajat: sure  08:51
<whoami-rajat> geguileo: thanks  08:51
*** altlogbot_0 has quit IRC08:51
*** irclogbot_3 has quit IRC08:51
*** altlogbot_1 has joined #openstack-cinder08:52
*** irclogbot_0 has joined #openstack-cinder08:52
<whoami-rajat> geguileo: so while upgrading suppose STEIN -> TRAIN, we run online migrations of stein and then migration scripts of train, is there a way to run online data migration of TRAIN before the migration scripts?  08:52
<openstackgerrit> yenai proposed openstack/cinder master: Correct the exception msg of ImageUnacceptable
<geguileo> whoami-rajat: no, the process to upgrade from stein to train is: stein online data migrations and then train db sync  08:53
<geguileo> whoami-rajat: but we have an online-data-migration bug in Cinder Rocky, Stein, and Train  08:54
<openstack> Launchpad bug 1837703 in Cinder "Cinder Rocky, Stein, and Train are running wrong online-data-migrations" [Undecided,In progress] - Assigned to Gorka Eguileor (gorka)  08:54
<geguileo> whoami-rajat: I just proposed a fix for it in master, and we'll have to backport it to Stein and Rocky  08:54
<whoami-rajat> geguileo: oh yes, thanks for fixing this. i think we never faced the issue cause two migrations never overlapped  08:56
<geguileo> whoami-rajat: the problem we also had is that in Queens we had a terrible data migration that required cinder-volume services to be running  08:57
<geguileo> whoami-rajat: and since we didn't remove it as we should have in Rocky, Stein, and Train, we end up requiring this incorrect behavior when upgrading to Stein and Train as well  08:58
<whoami-rajat> geguileo: oh, so i've implemented a migration to change volume_type_id(nullable=False) but it requires all volumes and snapshots to migrate to not None types, sadly both migrations are part of same patch and grenade is failing, so i need to implement the db migration in the next release, got it.  08:58
<geguileo> whoami-rajat: sorry, I didn't get it...  08:59
<openstackgerrit> yenai proposed openstack/cinder master: Correct the exception msg of ImageUnacceptable
<geguileo> whoami-rajat: snapshots don't have volume types  09:00
<geguileo> whoami-rajat: and the "proper" way is to implement the setting of this new type on the loading of Volume OVOs when they detect that they don't have a volume type  09:01
<geguileo> whoami-rajat: not on a migration  09:01
<whoami-rajat> geguileo: yeah, i agree the data migrations shouldn't be travelling to next release, can we also add an upgrade check in the patch you proposed.  09:02
<geguileo> whoami-rajat: and then have an online-data-migration on that same patch that actually sets it  09:02
<geguileo> whoami-rajat: upgrade check?  09:02
<whoami-rajat> geguileo: yep
<geguileo> whoami-rajat: I was wrong, the snapshots do have volume type references  (bad memory)  09:03
<whoami-rajat> geguileo: the snapshot table does have a volume_type_id field.  09:03
<geguileo> whoami-rajat: yup, I had forgotten  09:04
<whoami-rajat> geguileo: so to brief out my query,
<whoami-rajat> this particular migration is failing the grenade on gate cause the online migrations that need to be performed are part of the same patch  09:05
<whoami-rajat> this script should be implemented in next release IIUC  09:05
<geguileo> whoami-rajat: that code should be a check in the status command  09:07
<geguileo> whoami-rajat: not part of the DB  09:07
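geguileo's suggestion above (set the default volume type when a Volume object is loaded without one, rather than in a blocking schema migration) can be sketched roughly as follows. This is a hypothetical illustration, not the real cinder OVO API: the `Volume` class, `_from_db` hook, and `DEFAULT_TYPE_ID` value are all invented for the example:

```python
# Hypothetical sketch of backfilling a default volume type at object-load
# time instead of via a schema migration. Names here (Volume, _from_db,
# DEFAULT_TYPE_ID) are illustrative only, not cinder's actual code.
DEFAULT_TYPE_ID = "00000000-0000-0000-0000-000000000000"

class Volume:
    def __init__(self, volume_type_id=None):
        self.volume_type_id = volume_type_id

    @classmethod
    def _from_db(cls, row):
        # On load, legacy rows created before volume_type_id became
        # non-nullable get the default type assigned transparently.
        type_id = row.get("volume_type_id") or DEFAULT_TYPE_ID
        return cls(volume_type_id=type_id)
```

The point of the pattern is that old rows keep working immediately after the upgrade, while an online data migration can persist the default type in the background at its own pace.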
*** lpetrut has joined #openstack-cinder09:07
<whoami-rajat> geguileo: oh ok. but in the next release right?  09:08
<geguileo> whoami-rajat: I think so  09:09
*** awalende has joined #openstack-cinder  09:10
<whoami-rajat> geguileo: ok. thanks for the help!  09:10
<geguileo> whoami-rajat: I'm debugging the cinderlib thing now, but I'll have a look later to see if I should add the check you suggested to the data migration patch  09:14
<openstackgerrit> ye proposed openstack/cinder master: This fix let the delete err-info more precisely
<whoami-rajat> geguileo: since you mentioned that online migrations of that release shouldn't go into the next release, this sounds similar to the need of upgrade check.  09:16
*** davidsha has joined #openstack-cinder  09:16
<geguileo> whoami-rajat: since the migration should have been removed in Rocky we would need the check since rocky (if we end up needing it)  09:17
<whoami-rajat> geguileo: thanks for looking into the issue, i'm getting to know cinderlib from docs and testing and will try to help if i find something.  09:18
<whoami-rajat> geguileo: but the base framework is implemented in stein and seemingly shouldn't be backported. hmm. i think the upgrade check should be a separate patch  09:19
<geguileo> whoami-rajat: oh, then I would just backport it to stein  09:20
<whoami-rajat> geguileo: yep, the online migration removal could move to rocky and the upgrade check to stein, (since we allowed upgrade checks to be backported that were implemented in train)  09:21
<whoami-rajat> geguileo: but i'm really not sure, jungleboyj and smcginnis would provide better input here than me :)  09:22
<geguileo> whoami-rajat: looks like the problems are being introduced by some of your tests...  09:23
<geguileo> whoami-rajat: I'll try to figure out why now  09:23
<geguileo> whoami-rajat: we'll want to backport the check for sure  :-)  09:23
*** Liang__ has quit IRC  09:25
<whoami-rajat> geguileo: yes, when set to memory persistence, my tests are causing the issues in other tests but i'm not really sure how :(  09:26
<geguileo> whoami-rajat: I now have an easy way to reproduce it just running 1 of your tests and then a DB test  ;-)  09:27
*** Kuirong has quit IRC09:37
*** boxiang has quit IRC09:42
<whoami-rajat> geguileo: oh yes that failed, thanks for the heads up, now i can debug better :)  09:42
*** lpetrut has quit IRC09:47
*** Kuirong has joined #openstack-cinder09:49
*** m75abrams has quit IRC09:53
*** trident has quit IRC10:06
*** ganso has quit IRC10:06
*** ganso has joined #openstack-cinder10:07
*** trident has joined #openstack-cinder10:08
*** lpetrut has joined #openstack-cinder10:09
*** brinzhang has quit IRC10:10
*** boxiang has joined #openstack-cinder10:18
*** sahid has quit IRC10:24
*** abhishekk has quit IRC10:24
*** Kuirong has quit IRC10:35
*** dpawlik has quit IRC10:39
<geguileo> whoami-rajat: I figured out what the problem is... It is not your fault  :-(  10:45
<geguileo> whoami-rajat: the issue is in cinderlib test code, and it appears in your patch because test_base goes before test_dbms  10:46
<geguileo> whoami-rajat: working on a patch to fix this  10:47
*** carloss has joined #openstack-cinder  10:49
<whoami-rajat> geguileo: oh ok. i tried the debug run and got error during conversion of snapshot ovo to dict ( dict(snap._ovo) ), glad the problem is tracked :)  10:49
*** sahid has joined #openstack-cinder  10:54
*** bhagyashris_ has quit IRC  10:55
<geguileo> whoami-rajat: renaming the test file from test_base to test_this_sucks should make the tests run (I haven't tried), but I'm looking to fix it right  10:56
<whoami-rajat> geguileo: yup, that works. IIUC the catch here is to run the tests after dbms, i tried running memory tests before dbms and the same failure occurred.  11:00
<geguileo> whoami-rajat: exactly, that's when the problem happens  11:01
<geguileo> whoami-rajat: it's because memory uses the DB class which replaces some OVO methods  11:01
<whoami-rajat> geguileo: oh, can we use cinderlib with multiple plugins simultaneously or only one plugin should be configured at a time?  11:03
<geguileo> whoami-rajat: that's precisely the problem, cinderlib is only meant to run with 1 persistence plugin  11:03
<geguileo> whoami-rajat: but in the tests we have to work around that limitation to test the different ones we have  11:04
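The cross-test leak geguileo diagnosed above follows a general Python pattern worth spelling out: if a test (or a persistence plugin, as with the memory plugin replacing OVO methods) patches a method at *class* level, the replacement survives into every later test unless it is explicitly restored. The `OVO` class and `save` method below are invented stand-ins for the real cinderlib objects:

```python
# Minimal illustration of a class-level method patch leaking across tests.
# OVO and save() are hypothetical stand-ins, not the real cinderlib API.
class OVO:
    def save(self):
        return "db"

original = OVO.save
OVO.save = lambda self: "memory"   # plugin-style class-level patch
patched_result = OVO().save()      # every later caller now sees "memory"

OVO.save = original                # the restore step the tests were missing
restored_result = OVO().save()     # back to the original behavior
```

In real test suites this restore is usually automated with `unittest.mock.patch` or an `addCleanup()` callback, so the patch cannot outlive the test that made it.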
<openstackgerrit> Pawel Kaminski proposed openstack/os-brick master: connectors/nvme: Wait utill nvme device show up in kernel
<whoami-rajat> geguileo: ok. so we need to optimize the teardown code in memory tests to leave cinderlib in original state after the test run?  11:07
<geguileo> whoami-rajat: yeah, I have that code now  11:07
<geguileo> whoami-rajat: but in one of the multiple tests I ran I had an error, so I was trying to reproduce it to see what went wrong there...  11:07
*** dpawlik has joined #openstack-cinder  11:08
<whoami-rajat> geguileo: great :)  11:08
*** boxiang has quit IRC11:10
*** boxiang has joined #openstack-cinder11:11
*** udesale has quit IRC11:15
*** m75abrams has joined #openstack-cinder11:18
*** deiter has joined #openstack-cinder11:29
*** irclogbot_0 has quit IRC11:33
*** irclogbot_3 has joined #openstack-cinder11:34
<openstackgerrit> hjy proposed openstack/cinder master: Add MacroSAN cinder driver
<geguileo> whoami-rajat: finally found the other issue... you'll have to change in your code the tearDown from "= {}" to ".clear()" and I'll have to change it in test_memory as well  11:51
<whoami-rajat> geguileo: wow :D  11:51
<geguileo> whoami-rajat: yeah, since there can only be 1 plugin the volumes and other dicts are class attributes  11:51
<geguileo> whoami-rajat: so we are not actually cleaning them when we set them to = {} and when you instantiate another memory persistence test they come back to life  11:52
<whoami-rajat> geguileo: i'm still seeing the same failure :(  11:59
<geguileo> whoami-rajat: no, no, that's additional  11:59
<geguileo> whoami-rajat: I'm writing the commit message for the proper fix  11:59
<geguileo> whoami-rajat: but even with that fix you'll still need to make that .clear() modification  12:00
<whoami-rajat> geguileo: oh, ok  12:00
*** jsquare has quit IRC12:00
*** deiter has quit IRC12:00
<whoami-rajat> geguileo: thanks for fixing this, it was a good learning experience :)  12:00
*** jsquare has joined #openstack-cinder  12:01
<geguileo> whoami-rajat: for me this was hell  XDXDXD  12:01
<geguileo> nasty bugs  12:01
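The tearDown pitfall behind those nasty bugs is plain Python semantics and can be reproduced in a few lines: when a dict is a *class* attribute, `self.volumes = {}` only creates a new instance attribute that shadows it, while `.clear()` actually empties the dict shared by every instance. The `Persistence` class here is a stand-in for the cinderlib plugin classes:

```python
# Demonstration of "= {}" vs ".clear()" on a class-attribute dict,
# the exact fix geguileo asked for. Persistence is a stand-in class.
class Persistence:
    volumes = {}               # shared by all instances (single-plugin design)

a = Persistence()
a.volumes["vol-1"] = "leaked"  # writes into the shared class-level dict
a.volumes = {}                 # WRONG: only shadows it on this instance
leak = dict(Persistence.volumes)   # the shared dict still holds vol-1

Persistence.volumes.clear()    # RIGHT: empties the shared dict for everyone
```

This is why the leftover volumes "came back to life" in later tests: each new test instance saw the still-populated class attribute, not the empty instance dict the previous test's tearDown had created.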
<whoami-rajat> geguileo: lol, hope i don't find any bug while implementing the volume type code in cinderlib  12:03
*** zul has joined #openstack-cinder12:04
<geguileo> whoami-rajat: well, you know where to find me if you do ;-)  12:05
<openstackgerrit> Gorka Eguileor proposed openstack/cinderlib master: Fix cleanup of persistence tests
<whoami-rajat> geguileo: glad that you're VERY active !  12:06
<geguileo> whoami-rajat: ^ try rebasing your patch on top of that one and making the "clear()" change  12:06
<jungleboyj> whoami-rajat:  ++  12:06
<whoami-rajat> geguileo: Thanks!!  12:07
<tosky> hi! When would be a good time to discuss the porting of the remaining legacy jobs to native Zuul v3 jobs?  12:08
*** rosmaita has joined #openstack-cinder  12:09
<geguileo> tosky: probably in today's cinder IRC meeting  12:09
<tosky> should I add a point to the agenda?  12:09
<geguileo> tosky: yup, just add it to
<geguileo> tosky: thanks!  12:11
*** deiter has joined #openstack-cinder12:15
*** dviroel has joined #openstack-cinder12:20
<openstackgerrit> Rajat Dhasmana proposed openstack/cinderlib master: Fix: Snapshot creation with volume types
*** mriedem has joined #openstack-cinder12:35
*** coreycb has quit IRC12:45
*** coreycb has joined #openstack-cinder12:47
*** eharney has joined #openstack-cinder12:49
*** e0ne has quit IRC12:57
*** e0ne has joined #openstack-cinder12:57
*** Kuirong has joined #openstack-cinder13:01
*** jawad_axd has quit IRC13:01
*** awalende has quit IRC13:04
*** awalende has joined #openstack-cinder13:05
*** awalende has quit IRC13:09
*** deiter has quit IRC13:19
*** awalende has joined #openstack-cinder13:34
openstackgerritBhaa Shakur proposed openstack/cinder master: Zadara VPSA: Move to API access key authentication
openstackgerritRafael Weingärtner proposed openstack/cinder master: Add mv to return volume type name and ID with volumes
*** awalende has quit IRC13:38
*** ganso has quit IRC13:39
openstackgerritRafael Weingärtner proposed openstack/cinder master: Add mv to return volume type name and ID with volumes
*** enriquetaso has joined #openstack-cinder13:42
openstackgerritRafael Weingärtner proposed openstack/cinder master: Add mv to return volume type name and ID with volumes
*** ganso has joined #openstack-cinder14:01
*** Luzi has quit IRC14:14
*** lennyb has quit IRC14:18
*** raghavendrat has joined #openstack-cinder14:18
<raghavendrat> hi hemna: and jungleboyj: this is regarding; whenever you get some time, could you please review & share your feedback  14:20
<jungleboyj> raghavendrat:  Ok.  Will do.  14:21
*** dpawlik has quit IRC14:25
*** mriedem has quit IRC14:29
*** dpawlik has joined #openstack-cinder14:33
*** raghavendrat has quit IRC14:34
*** psachin has quit IRC14:41
*** raghavendrat has joined #openstack-cinder14:42
<raghavendrat> thanks jungleboyj: for providing +2  14:43
<jungleboyj> raghavendrat:  Welcome.  :-)  14:43
*** jmlowe has quit IRC14:45
*** david-lyle is now known as dklyle14:45
*** dpawlik has quit IRC14:48
*** ag-47 has joined #openstack-cinder14:48
<ag-47> Hello everyone  14:49
*** raghavendrat has quit IRC14:51
<ag-47> i'm having a problem: each time i launch cinder-volume-usage-audit, it creates an Aborted connection in the mysql logs. it is not critical but did anyone have the same problem  14:52
<ag-47> i'm on rocky with ubuntu 18.04  14:56
*** sahid has quit IRC14:59
*** jmlowe has joined #openstack-cinder15:00
*** mriedem has joined #openstack-cinder15:09
<jungleboyj> I have not heard of that problem before.  15:10
<jungleboyj> Is there any indication of what is causing the abort?  Are we maybe not closing the connection properly?  15:10
<eharney> it looks likely that that tool just doesn't do any teardown of the db connections  15:11
<jungleboyj> eharney:  That was what I was thinking.  15:11
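The missing teardown eharney suspects is easy to sketch: a one-shot tool like cinder-volume-usage-audit should close its database connection explicitly instead of exiting with it open, which is what makes servers such as MySQL log "Aborted connection". The example below uses stdlib `sqlite3` as a stand-in for the real database, and `run_audit` is an invented name for illustration:

```python
# Sketch of explicit DB connection teardown for a one-shot tool.
# sqlite3 stands in for MySQL; run_audit is a hypothetical name.
import sqlite3
from contextlib import closing

def run_audit(db_path=":memory:"):
    # closing() guarantees conn.close() runs on exit, even on exceptions,
    # so the server sees a clean disconnect instead of an aborted one.
    with closing(sqlite3.connect(db_path)) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS usage (vol TEXT)")
        conn.execute("INSERT INTO usage VALUES ('vol-1')")
        return conn.execute("SELECT COUNT(*) FROM usage").fetchone()[0]
```

With SQLAlchemy-backed services the equivalent step is disposing of the engine (and its connection pool) before the process exits, rather than relying on interpreter shutdown.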
<openstackgerrit> Eric Harney proposed openstack/cinder master: Prevent double-attachment race in attachment_reserve
<eharney> finally got unit tests in working order for this ^  i'd love some reviews since this is a pretty nasty bug  15:21
*** tejdeep has joined #openstack-cinder15:42
openstackgerritCY Chiang proposed openstack/cinder stable/stein: doc: Fix rbd driver marked support multiattach
*** boxiang has quit IRC15:50
*** boxiang has joined #openstack-cinder15:50
*** Liang__ has joined #openstack-cinder15:54
<openstackgerrit> Gorka Eguileor proposed openstack/cinder master: Fix online data migrations
<geguileo> whoami-rajat: ^ Added the status checks like we discussed  15:57
<whoami-rajat> geguileo: we can't add checks for queens now :(  15:59
*** Liang__ is now known as LiangFang  15:59
<geguileo> whoami-rajat: these are checks that make sense here as well  15:59
<geguileo> whoami-rajat: they are not for queens, they are for Stein  15:59
<geguileo> and later  15:59
<whoami-rajat> geguileo: oh wait,  16:00
<whoami-rajat> geguileo: yep, realized it later,  16:00
<geguileo> so what we are doing is detecting if you try to upgrade from queens or later to train without running the online data migrations from queens  16:00
<whoami-rajat> geguileo: oh ok. it's a case for FFU  16:01
<geguileo> whoami-rajat: also if someone tries to do the upgrade manually  16:01
<geguileo> whoami-rajat: like directly jumping from queens to stein, or something like that  16:01
<whoami-rajat> geguileo: are we planning to backport the removal of migrations to rocky?  16:02
<geguileo> whoami-rajat: only to where the status tool exists  16:02
<geguileo> whoami-rajat: if it didn't exist in Rocky, then we won't backport that part to Rocky  16:03
*** Kuirong has quit IRC16:03
<whoami-rajat> geguileo: okay, wanted to know that only. Thanks for including the upgrade check :)  16:04
<geguileo> whoami-rajat: it was a good suggestion :-)  16:04
<openstackgerrit> Eric Harney proposed openstack/cinder-specs master: Volume Rekey spec
*** mvkr_ has quit IRC16:10
*** pcaruana has quit IRC16:13
*** mvkr_ has joined #openstack-cinder16:16
*** m75abrams has quit IRC16:19
<whoami-rajat> eharney: could you take a look at rosmaita's comment here?  16:20
*** ag-47 has quit IRC16:27
*** tejdeep has quit IRC16:53
*** davidsha has quit IRC17:01
*** mmethot_ is now known as mmethot17:02
<jungleboyj> geguileo:  You still around?  17:04
<geguileo> jungleboyj: yeah, I'm around :-)  17:04
<jungleboyj> Hey.  So, pots and I got a note from a customer seeing multipathing issues.  17:05
<jungleboyj> The problem sounds familiar.  17:05
<geguileo> jungleboyj: what was the problem? was it FC or iSCSI?  17:06
<jungleboyj> After migrating a VM from one node to another there are faulty paths left on the source.  17:06
<geguileo> jungleboyj: but the migration completed?  17:06
<geguileo> jungleboyj: it could be that Nova ignored some disconnect issues from OS-Brick  17:08
<jungleboyj> This is back on Pike.  17:08
<jungleboyj> I feel like we have made changes in that code path since.  17:08
<geguileo> jungleboyj: they should see some tracebacks in the hk-m1-comp-node22 nova compute logs  17:08
<geguileo> jungleboyj: I have made a few, but it depends on the release they are running  17:09
<geguileo> jungleboyj: I would first look at the compute logs and why it wasn't properly detached  17:10
<geguileo> jungleboyj: and I think at some point we fixed something in Nova as well  17:10
<geguileo> but I really don't recall what it was...  17:10
<jungleboyj> Ok.  I will get more info from the customer.  17:11
<smcginnis> geguileo: All of those os-brick fixes got backported to pike, right?  17:11
<smcginnis> jungleboyj: Do they have the latest os-brick release for pike? ^  17:11
*** LiangFang has quit IRC  17:12
<jungleboyj> That is a good question.  I will check on that as well.  17:13
<geguileo> smcginnis: I'm not 100% sure...  But I think most of them did  17:13
<geguileo> jungleboyj: it could also be related to this Nova bug I was thinking of
<jungleboyj> That wasn't backported to pike.  17:15
<geguileo> jungleboyj: It's also important to know whether the open-iscsi version has support for disabling automatic scans  17:15
<jungleboyj> We could do that though if it looks like the same issue.  17:15
<geguileo> jungleboyj: if disabling automatic scans is not supported, then it could be that they don't see a backtrace in the logs, and the issue is a race condition (which the manual scans prevent)  17:16
<jungleboyj> geguileo:  Thanks for the pointers.  I will send them the info and see what they say.  17:17
<jungleboyj> smcginnis:  What do we want to do about the MacroSAN driver?  17:17
<jungleboyj> I see that they have been pushing patches up.  But if their CI is down now ...  17:18
<geguileo> jungleboyj: they can tell if their open-iscsi version supports manual scans by checking one of the nodes and seeing if it has the 'node.session.scan' parameter  17:18
* geguileo thinks the issue is probably that their iscsi version doesn't have that feature  17:19
<jungleboyj> It is SuSE Open Cloud 8  17:19
<geguileo> jungleboyj: I know SuSE has an open-iscsi package that supports manual scans, because one of hemna's customers has been having trouble with it, but I don't know what your customer is running...  17:22
<jungleboyj> Ok.  I will have them check that as well.  17:22
<hemna> SOC8 is fairly old now  17:27
<hemna> I'm not sure what version of the open-iscsi utils that is  17:27
<hemna> geguileo: what version of open-iscsi added the manual scan feature ?  17:27
<geguileo> hemna: I don't remember, I added it a while, while back...  17:28
<geguileo> Damn, the last patch is from March 2017!  17:29
<hemna> soc8 has open-iscsi 2.0.876  17:30
<geguileo> hemna: it seems like it was in 2.0.875  17:30
<geguileo> hemna: then if they are on that version it should have manual scans and os-brick should be using them  17:30
<hemna> jungleboyj: have to see what version you have  17:30
*** tejdeep has joined #openstack-cinder  17:31
<pots> geguileo: are there any specific guidelines for setting up multipath.conf for use with os-brick?  17:31
<jungleboyj> Ok, just collected up those questions and am sending them.  17:31
<geguileo> pots: not really  17:31
<jungleboyj> hemna: geguileo  Thanks for the help.  17:31
*** gnufied has quit IRC  17:31
<geguileo> pots: it should work with almost anything  17:32
<hemna> pots: it should work with friendly names on or off if that's what you might be asking  17:33
<pots> good guess :)  17:34
*** awalende has joined #openstack-cinder  17:34
<hemna> geguileo: fwiw, one of my qa guys tested the iscsi iface0 workaround and got it to work  17:35
<openstackgerrit> Jay Bryant proposed openstack/cinder stable/rocky: Revert "Declare multiattach support for HPE MSA"
<hemna> geguileo: you da man  17:35
<hemna> jungleboyj: revert?  17:35
<hemna> ah ok  17:35
<jungleboyj> hemna:  Yep.  17:35
* hemna read the patch  17:35
<jungleboyj> It was an update now that the commit message is accurate.  17:36
*** e0ne has quit IRC17:37
*** awalende has quit IRC17:39
*** pcaruana has joined #openstack-cinder17:41
*** tpsilva has joined #openstack-cinder17:41
*** henriqueof has joined #openstack-cinder17:47
*** ag-47 has joined #openstack-cinder17:49
<ag-47> response to message [17:10]: i launched the tool in debug mode so i had this as information related to the db connection  17:53
<ag-47> Other than this i didn't see anything weird  17:54
*** KeithMnemonic has joined #openstack-cinder17:54
*** ag-47 has quit IRC18:05
*** ag-47 has joined #openstack-cinder18:13
*** gary_perkins_ has quit IRC18:19
*** gary_perkins has joined #openstack-cinder18:20
*** ag-47 has quit IRC18:21
*** jmlowe has quit IRC18:33
*** henriqueof has quit IRC18:40
*** tosky has quit IRC18:42
*** tesseract has quit IRC18:49
*** lpetrut has quit IRC18:50
*** henriqueof has joined #openstack-cinder19:13
*** jmlowe has joined #openstack-cinder19:22
*** e0ne has joined #openstack-cinder19:25
*** vishalmanchanda has quit IRC19:32
*** e0ne has quit IRC19:46
<TheJulia> I've got a devstack change I've been trying to make and I'm seeing a weird failure with cinder's grenade scripting, has anyone ever seen this before?  19:48
*** e0ne has joined #openstack-cinder19:57
*** henriqueof has quit IRC20:01
*** eharney has quit IRC20:05
*** e0ne has quit IRC20:19
<jungleboyj> smcginnis: rosmaita whoami-rajat  I have RSVPd for the PTG in Shanghai.  I guessed we would get maybe 15 people based on responses.  20:29
<rosmaita> jungleboyj: that seems like a pretty decent turnout  20:30
<whoami-rajat> jungleboyj: great  20:30
<jungleboyj> Half of what we would normally get.  We had 15 or 30 in Denver.  20:30
*** deiter has joined #openstack-cinder  20:35
<BLZbubba> i'm seeing a bandwidth limit with cinder-iscsi, about 400MB/sec  20:38
<BLZbubba> iirc, tgtd uses a single socket and doesn't support MC/S - am I the only one seeing this limit?  20:39
<BLZbubba> i remember a couple years ago i set up 4 ip aliases and multipath to the same target, and it went like 4x faster  20:40
*** hemna has quit IRC20:50
*** hemna has joined #openstack-cinder20:50
*** eharney has joined #openstack-cinder20:56
*** e0ne has joined #openstack-cinder21:04
*** pcaruana has quit IRC21:05
*** henriqueof has joined #openstack-cinder21:07
*** imacdonn has quit IRC21:16
<BLZbubba> duh, i should just use the nvmeof version  21:25
*** henriqueof has quit IRC21:27
*** e0ne has quit IRC21:30
<jungleboyj> TheJulia:  Generally that error message means that the Instance didn't finish booting.  So, if it booted from volume something went wrong there.  It started pinging.  Not sure why SSH didn't come up.  21:36
*** e0ne has joined #openstack-cinder21:39
*** e0ne has quit IRC21:46
*** mchlumsky has quit IRC21:59
*** harlowja has quit IRC22:45
*** tkajinam has joined #openstack-cinder22:57
*** carloss has quit IRC23:15
*** rcernin has joined #openstack-cinder23:30
*** Kuirong has joined #openstack-cinder23:46
<openstackgerrit> Merged openstack/cinder stable/rocky: Revert "Declare multiattach support for HPE MSA"

Generated by 2.15.3 by Marius Gedminas - find it at!