Wednesday, 2025-10-22

opendevreviewJoan Gilabert proposed openstack/watcher master: Disable cinder volume service in subnode  https://review.opendev.org/c/openstack/watcher/+/96446208:00
opendevreviewJoan Gilabert proposed openstack/watcher master: Disable cinder volume service in subnode  https://review.opendev.org/c/openstack/watcher/+/96446210:14
opendevreviewJoan Gilabert proposed openstack/watcher-tempest-plugin master: Skip Storage Capacity test until it's fixed  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/96453211:16
opendevreviewJoan Gilabert proposed openstack/watcher master: Disable cinder volume service in subnode  https://review.opendev.org/c/openstack/watcher/+/96446211:18
opendevreviewMerged openstack/watcher master: Remove unused glance client integration  https://review.opendev.org/c/openstack/watcher/+/96321711:44
jgilabersean-k-mooney chandankumar when you have some time please review https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/964532 and https://review.opendev.org/c/openstack/watcher/+/96446212:38
jgilaberthey are needed to move forward with the addition of volume migration tempest tests12:39
sean-k-mooneylooks like chandan approved the first already so I'll look at the other now12:43
sean-k-mooneyI'm not sure I agree with the second patch12:44
sean-k-mooneywe really should have the backends on different hosts12:44
sean-k-mooneywhy would we want to test with 2 backends on the controller when that is not really reflective of how it would be deployed in production12:45
jgilaberit came from a discussion with dviroel in https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/95864412:46
jgilaberthis one in particular https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/958644/comment/a081ec2e_c35d284e/12:46
dviroelfor lvm it is true that both will be on the same host, but the usual deployment is to have multiple backends configured in a single cinder config file12:47
jgilaberI'll admit I'm not very familiar with the difference between Active-Active or Active-Passive modes of Cinder (I'm trying to read up on that)12:47
dviroelwe shouldn't deploy multiple c-vols12:47
sean-k-mooney but that's testing something else12:48
sean-k-mooneyit's very common to have multiple cinder volume services, each pointing at different backends12:48
sean-k-mooneythis is not really related to active-active vs active-passive12:48
sean-k-mooneyit's about ensuring we have 2 separate dataplanes12:48
dviroeland it is now how cinder is testing upstream too12:49
sean-k-mooneywe could do that with the lvm driver on one host but we would need two loopback devices with two different lvm volume groups12:49
sean-k-mooneythere are two different test scenarios12:51
sean-k-mooneymoving between two pools on one backend12:51
sean-k-mooneyand moving between 2 backends12:51
sean-k-mooneywe support both in watcher12:51
sean-k-mooneyand the existing tests are mainly for migration between separate backends, which is why we are testing with 2 c-vols on different hosts12:52
dviroeliiuc the second one is not working in this strategy, and requires a fix12:52
jgilaberthe current setup is configuring the same backend on different hosts, right?12:53
jgilaberboth the controller and compute configure a 'lvmdriver-1' backend12:53
sean-k-mooneythe second config is the default we use in all devstack multi node jobs https://github.com/openstack/devstack/blob/master/.zuul.yaml#L69912:53
sean-k-mooneyjgilaber: yep which is a valid config12:54
jgilaberright now that is the only configuration supported for volume migrations in watcher12:54
dviroelyes, they will be different since the host is different12:54
sean-k-mooneythe backend is defined by not just the driver name but also the host12:55
sean-k-mooneyso right now we have support for migrating between storage backends and/or between pools12:55
sean-k-mooneythe same operation is used for both cases in cinder12:56
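(A minimal illustrative sketch of that single operation via python-cinderclient; the session setup, volume id and target strings are assumptions, not taken from the patches under review, and the exact client signature may vary between releases.)

    from cinderclient import client as cinder_client

    # Assumes a keystoneauth1 session has been configured elsewhere.
    cinder = cinder_client.Client('3.16', session=session)
    volume_id = 'VOLUME_UUID'  # placeholder

    # Cross-backend migration: target a different host@backend.
    cinder.volumes.migrate_volume(
        volume_id, 'subnode@lvmdriver-1',
        force_host_copy=False, lock_volume=True)

    # Cross-pool migration: same host@backend, only the part after '#' changes.
    cinder.volumes.migrate_volume(
        volume_id, 'controller@lvmdriver-1#pool-2',
        force_host_copy=False, lock_volume=True)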
jgilabernot between storage backends, since the migrate method expects the destination to have the same 'volume_backend_name'12:57
sean-k-mooneynot on the cinder side12:57
dviroelno, this is the bug in watcher side12:57
sean-k-mooneyif we require that, that is a bug, which I thought we discussed12:57
sean-k-mooneyright12:57
jgilaberyes, that is a bug, and I have a fix for it up12:58
sean-k-mooneyright so when fixing that bug we will need to still have two c-vol instances12:58
dviroelit should work with 2 backends, regardless of the c-vol layout?12:59
sean-k-mooneyyes13:00
sean-k-mooneythe presence of the c-vol on the compute is not incorrect13:00
sean-k-mooneyand it needs to work in that case13:00
dviroelok, I just don't see that stated clearly anywhere either13:01
dviroelbut it was working13:01
sean-k-mooneyit's valid for example to migrate from lvm on the controller to ceph or netapp on the compute/subnode13:01
sean-k-mooneydviroel: that's how we test volume migration in the nova gate13:01
sean-k-mooneywe always have 2 c-vols13:01
sean-k-mooneyand we test migrating between the 2 backend hosts to make sure the data migration actually works13:02
sean-k-mooneyit's not related to active-active or active-passive at all13:02
dviroelso if it is a valid config and works, that's fine then13:06
dviroelbut maybe watcher is not really prepared for that, when looking only at "backend_name"13:06
dviroelsince they deploy with the same name13:06
sean-k-mooneyif we want to run 2 backends on a single host with one cinder-vol instance we need to make sure both backends are using different storage, meaning 2 loopback devices with 2 separate lvm volume groups13:06
dviroelI guess that this is covered somehow, since we have some cinder jobs using 2 lvm backends in the same host13:07
dviroelbut I didn't check really13:07
sean-k-mooneyI have never seen an lvm job like that for what it's worth13:08
sean-k-mooneyso this looks like a very non-standard job proposal13:08
sean-k-mooneyit may be valid but I think we need to look at how this is represented on the dataplane side and in the cinder api to confirm13:09
dviroelthis one https://zuul.opendev.org/t/openstack/build/3c783bfaa7824d7b8f572860b894216613:09
sean-k-mooneythe fact that that is non-voting is not promising13:10
jgilaberthis is the cinder conf from my patch https://07c2e7671593cc30b79e-a759d6b54561529b072782a6b0052389.ssl.cf2.rackcdn.com/openstack/87adae320fa64ae283202ebe60313f01/controller/logs/etc/cinder/cinder_conf.txt13:10
jgilaberwith the two backends in the controller13:10
opendevreviewMerged openstack/watcher-tempest-plugin master: Skip Storage Capacity test until it's fixed  https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/96453213:17
dviroelthis ^ test wasn't triggered even with 2 backends enabled, because they had the same name?13:20
* dviroel needs to check the skip_checks in this case13:20
jgilaberit relies on the tempest config backend_names13:21
sean-k-mooneyok so that is using two lvm volume groups13:21
jgilaberwhich is not set in the current job13:21
sean-k-mooneyso that is also valid but that's not really what I would consider multi-backend13:21
sean-k-mooneythat is more a single backend with 2 pools13:21
sean-k-mooneywell I don't know, it's maybe multi-backend but it's not how we normally test that13:22
jgilaberack, so the question is do we want to test both scenarios? So instead of changing the existing job, add a new one?13:22
dviroelack, if you want to migrate across nodes, the previous would be the right one13:22
sean-k-mooneywe definitely want to have at least one job migrating the data across nodes13:23
dviroelnot needed I think, should be the more complex/common one13:23
dviroelacross node in this case13:23
sean-k-mooneynormally when you have different backends deployed they don't end up using the same driver, i.e. they would be netapp and ceph13:23
sean-k-mooneyso I'm concerned that if we are not testing physically separate hosts we may hit issues or miss bugs that we would catch in the more standard case13:24
dviroelit depends, it is common to have more than 2 netapp backends, yes13:25
sean-k-mooneyjgilaber: do you happen to know what cinder returns for the pool list in both cases13:25
jgilaberthis is from the decision-engine logs with the current setup13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:   <StorageNode host="np47ad865b91dc4@lvmdriver-1" zone="nova" status="enabled" state="up" volume_type="['lvmdriver-1']" uuid="" human_id="">13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:     <Pool name="np47ad865b91dc4@lvmdriver-1#lvmdriver-1" total_volumes="1" total_capacity_gb="28" free_capacity_gb="28" provisioned_capacity_gb="0" allocated_capacity_gb="0" virtual_free="0" uuid="" human_id=""/>13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:   </StorageNode>13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:   <StorageNode host="np87be28cf22444@lvmdriver-1" zone="nova" status="enabled" state="up" volume_type="['lvmdriver-1']" uuid="" human_id="">13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:     <Pool name="np87be28cf22444@lvmdriver-1#lvmdriver-1" total_volumes="1" total_capacity_gb="28" free_capacity_gb="28" provisioned_capacity_gb="0" allocated_capacity_gb="0" virtual_free="0" uuid="" human_id=""/>13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]:   </StorageNode>13:26
jgilaberOct 21 19:30:37.095967 np87be28cf22444 watcher-decision-engine[93381]: </ModelRoot>13:26
jgilabernot great formatting, you can see it by searching for 'StorageNode' in https://daf6a8c4e4a7fc0beca6-64bde2a99f8c0525b91ac2e42ffc128a.ssl.cf5.rackcdn.com/openstack/ee005da80be347b69b4efcf6df0adbae/controller/logs/screen-watcher-decision-engine.txt13:27
jgilaberthis is from the modified job with the two backends in the controller https://07c2e7671593cc30b79e-a759d6b54561529b072782a6b0052389.ssl.cf2.rackcdn.com/openstack/87adae320fa64ae283202ebe60313f01/controller/logs/screen-watcher-decision-engine.txt13:28
jgilaberhttps://pastebin.com/tCi5C1Bc here is hopefully more legible13:29
sean-k-mooneyso https://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=detach-volume-from-server-detail#list-all-back-end-storage-pools is what I really want to see for both topologies13:31
jgilaberthat is also in the decision engine logs, give a minute and I'll collect it for both jobs13:32
dviroelnot sure if we log that in dec-eng13:32
jgilaberwe do when building the storage data model, here is the response for both jobs https://pastebin.com/vw1eY05h13:36
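(For reference, a hedged sketch of reading that scheduler-stats endpoint directly with a keystoneauth1 session; this is the same GET /v3/{project_id}/scheduler-stats/get_pools?detail=True data shown in the pastebin above, and the service type and field handling are assumptions.)

    # Assumes 'session' is an authenticated keystoneauth1 session.
    resp = session.get('/scheduler-stats/get_pools', params={'detail': True},
                       endpoint_filter={'service_type': 'block-storage'})
    for pool in resp.json()['pools']:
        caps = pool.get('capabilities', {})
        print(pool['name'],                      # fully qualified host@backend#pool
              caps.get('volume_backend_name'),
              caps.get('pool_name'))             # may be absent for some drivers (e.g. ceph, nfs)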
dviroeljgilaber: so, in the end it is perfectly fine to have the 2 c-vols working without any AA or AP mode, sorry about the confusion13:37
dviroelwe could just drop the second change and focus on the fix now?13:38
jgilaberno problem dviroel, I wasn't sure about it when it was mentioned, I find the naming around most cinder concepts quite confusing, so this discussion has been quite helpful13:40
jgilaberI have the series for volume migration tests in tempest https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/958644 the first one has a workaround for the bug about the backend name13:40
dviroelyeah, the thing is that there were 2 pools, from different backends, that have the same backend_name, but they are still 2 different pools13:41
jgilaberthe last patch of the series removes the workaround but depends on the bug fix13:42
jgilaberso we could also merge the bug fix first and then add the testing13:42
dviroelI would try to fix the bug first, yes13:43
jgilaberthis is the bug fix for reference https://review.opendev.org/c/openstack/watcher/+/96385713:43
sean-k-mooneyso in the first case https://pastebin.com/vw1eY05h we have 2 different backends, each with a single pool with the same pool name, and in the second case we have 2 backends with 2 different pool names14:01
sean-k-mooneythe pool name can't be assumed to be unique in the cluster as far as I am aware14:01
dviroelwe just can't have 2 identical host@backend#pool14:03
dviroelthe other options are valid14:04
dviroelwhat we need to take care of is that, if they have the same backend_name and one pool each, it doesn't mean that we have a backend with 2 pools14:05
dviroelwhich is different14:06
dviroelonly means that we have 2 pools14:06
dviroelthe scheduler should accept the migration since they have the same backend_name in this case14:07
dviroelif we set the backend_name in the volume_type14:07
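(To make the distinction concrete, a purely illustrative listing of the two topologies being discussed; the hostnames are made up.)

    # Two backends with the same volume_backend_name, one pool each
    # (the multinode job: one c-vol on the controller, one on the subnode):
    two_backends_one_pool_each = [
        'controller@lvmdriver-1#lvmdriver-1',
        'subnode@lvmdriver-1#lvmdriver-1',
    ]

    # A single backend that actually reports two pools (a different topology):
    one_backend_two_pools = [
        'controller@lvmdriver-1#pool-a',
        'controller@lvmdriver-1#pool-b',
    ]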
jgilaberI think that internally watcher only deals with pools, so whether they are part of the same backend or not should not make any difference14:07
jgilaberat least for zone migration14:07
sean-k-mooneywe can't have 2 identical ones, no, but as long as one of host, backend or pool is different14:08
sean-k-mooneythen we can do a volume migrate between them14:08
sean-k-mooneyit should be noted too that pools are entirely optional in the cinder api14:08
sean-k-mooneyand cinder volume migrate was originally for migrating between different hosts and/or different backends14:09
sean-k-mooneywe should not be assuming that there are different pools in the zone migration strategy14:10
sean-k-mooneyhttps://opendev.org/openstack/watcher/src/branch/master/watcher/decision_engine/strategy/strategies/storage_capacity_balance.py is the only one that is meant to be pool-specific14:12
sean-k-mooneyI guess you can argue that zone migration is phrased in terms of pools as well14:13
sean-k-mooneybut it really should be phrased in terms of the triple in likely both cases14:13
sean-k-mooneyi.e. the fact that this has pool in the name is rather unfortunate https://opendev.org/openstack/watcher/src/branch/master/watcher/decision_engine/strategy/strategies/zone_migration.py#L114-L12314:14
jgilabertbh looking through the zone migration code now I don't think it requires a pool name at all14:14
jgilaberit's just using pool to refer to some storage host14:14
sean-k-mooneyhttps://docs.openstack.org/api-ref/block-storage/v3/index.html?expanded=detach-volume-from-server-detail#migrate-a-volume14:14
jgilaberI think just passing some host@backend would work just fine14:14
sean-k-mooneyso volume migrate on the cinder side does not14:15
sean-k-mooneyand technically the documentation shows using it for host@backend, not host@backend#pool14:15
dviroelI think that the strategy should only mention pools if the idea is just to migrate between pools of the same backend, which is not the case14:15
dviroeland then only storage backends that support multiple pools would be considered14:16
dviroelthese ones are migrating between backends 14:16
sean-k-mooneyya so long term watcher should support migration across backend, host or pool because cinder supports all 314:17
dviroelwe would just need to fix the parameter naming, and docs to make that clear14:18
dviroeland fix bugs14:18
sean-k-mooneyyep, which is what I thought was the original plan. not the parameter naming but just to support all the migration cases14:19
jgilaberI don't think there is much to change in the zone migration, the issues will mostly be in the volume migration action and the cinder helper14:19
sean-k-mooneywe can deprecate the existing parameter and add a src/dest storage_backend or similar14:19
jgilaberthat is where the assumptions are made14:19
sean-k-mooneyhow do we want to proceed14:24
sean-k-mooneychanging parameters like this is not a bug14:24
sean-k-mooneyit technically should have a spec to cover the api and upgrade impact14:24
sean-k-mooneydo we want to focus on just the pool case, where we strictly only support migrating between pools on the same "backend", even though that is not a documented use case for the cinder api14:25
sean-k-mooneywe could talk about it at the ptg or in the irc call tomorrow14:25
dviroelI think that we can properly document that for now, and we should still support all migration types, and not restrict to the same backend and so on14:26
jgilaberother than the known bug, I don't think we need to change much to support any migration that is supported by cinder14:27
dviroeljgilaber: yeah, we are just saying that the parameter name is confusing, which we can discuss changing in the future14:28
jgilaberI can write a spec for the parameters change, but I think we can do that while moving forward with the tempest testing14:28
dviroelother than that, we would just fix docs and any other bug14:28
jgilaberack, +1 from me14:29
jgilaberI'll bring the topic up tomorrow in the irc meeting for visibility14:29
sean-k-mooney+114:31
dviroelack14:31
sean-k-mooneyif we need to update the ci job temporarily we can14:33
sean-k-mooneyI just don't want that to be the long-term state14:33
jgilaberif the current job is valid I don't think we need to, we just need to fix the known bug14:36
sean-k-mooneyit's valid if we support cross-backend migration14:36
sean-k-mooneyor cross-host migration14:37
jgilaberIt should work https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/963860 removes the problematic check and it works fine14:37
sean-k-mooneybut it's not valid if we only support cross-pool migration on the same host and backend14:37
jgilaberhttps://softwarefactory-project.io/zuul/t/rdoproject.org/build/eb3f8a5518e44b8d9e1bf2a39cadd35a migrates volumes across two different backends14:38
sean-k-mooneyright, my point is it depends on what we claim to support rather than what actually works14:38
sean-k-mooneyso this is not really testing the right thing https://review.opendev.org/c/openstack/watcher-tempest-plugin/+/963860/5/watcher_tempest_plugin/tests/scenario/test_execute_zone_migration.py#13614:39
sean-k-mooneyin that it's explicitly testing for multiple pools from a single backend on a single host14:39
sean-k-mooneyi.e. that's still enforcing the strict case14:40
sean-k-mooney{pool['name'] for pool in pools['pools']} is still depending on unique pool names at the cinder level14:41
jgilaberah ok, I see what you mean14:41
dviroelright, we need to take care with that, only host@backend#pool is unique14:42
jgilaberactually I don't think that is wrong, the pool['name'] returns the host@backend#pool14:43
jgilaberpool['capabilities']['pool_name'] would return only the pool name, which will not be necessarily unique14:43
dviroel"pools": [{"name": "np6144f3e50f784@lvmdriver-1#lvmdriver-1"14:44
dviroelright14:44
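(In other words, a small sketch against a get_pools payload like the one pasted above; the second entry is invented for illustration, and only the fully qualified name is safe to treat as unique.)

    pools = {'pools': [
        {'name': 'np6144f3e50f784@lvmdriver-1#lvmdriver-1',
         'capabilities': {'volume_backend_name': 'lvmdriver-1',
                          'pool_name': 'lvmdriver-1'}},
        {'name': 'np6144f3e50f784@lvmdriver-2#lvmdriver-2',   # illustrative
         'capabilities': {'volume_backend_name': 'lvmdriver-2',
                          'pool_name': 'lvmdriver-2'}},
    ]}

    # Fully qualified host@backend#pool strings: unique by construction.
    full_names = {p['name'] for p in pools['pools']}

    # Bare pool_name values: optional and not guaranteed unique across backends.
    bare_names = {p['capabilities'].get('pool_name') for p in pools['pools']}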
opendevreviewAlfredo Moralejo proposed openstack/watcher master: APISchedulingService migrate audits also on first discovery of services  https://review.opendev.org/c/openstack/watcher/+/96398115:13
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Move decision-engine monitoring service to the decision-engine  https://review.opendev.org/c/openstack/watcher/+/96325215:13
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Add second instance of watcher-decision-engine in the compute node  https://review.opendev.org/c/openstack/watcher/+/96454615:13
opendevreviewAlfredo Moralejo proposed openstack/watcher master: APISchedulingService migrate audits also on first discovery of services  https://review.opendev.org/c/openstack/watcher/+/96398116:24
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Move decision-engine monitoring service to the decision-engine  https://review.opendev.org/c/openstack/watcher/+/96325216:24
opendevreviewAlfredo Moralejo proposed openstack/watcher master: Add second instance of watcher-decision-engine in the compute node  https://review.opendev.org/c/openstack/watcher/+/96454616:24
sean-k-mooneysigh ok16:44
sean-k-mooneyit does not help that there are like 3 overloaded meanings of pool vs backend etc16:45
sean-k-mooneymost people I think would call the triple of host@backend#pool the storage backend, whereas cinder calls it the host for the migrate api and we are calling it the pool or pool['name']16:47
jgilaberI agree the naming is quite confusing, I still get confused by it, and it makes the discussions longer than necessary since we often mean different things16:57
dviroelmanila has the same host,backend,pool naming :) 17:11
dviroelin manila docs, https://docs.openstack.org/api-ref/shared-file-system/#start-migration - host format is defined as host@backend#pool17:13
jgilaberthe host for a volume in cinder is also host@backend#pool, so the substitute for src_pool should probably be src_host with the full name17:17
jgilaberand funnily enough the help for openstack volume migrate also shows the full name:17:19
jgilaber  --host <host>17:19
jgilaber                        Destination host (takes the form: host@backend-name#pool)17:19
jgilaberI think that was the format in the documentation not long ago, it might have been changed recently17:19
dviroelhum,  so it doesn't support only host@backend? 17:21
dviroelIIRC, in manila, for drivers that don't report pools, it creates a default pool for the entire backend in this case...17:21
sean-k-mooneyat some point they added pool17:22
sean-k-mooneybut they never updated the api ref to mention it17:22
sean-k-mooneyhttps://docs.openstack.org/api-ref/block-storage/v3/index.html#migrate-a-volume says17:23
sean-k-mooneyhost is "The target host for the volume migration. Host format is host@backend. Required before microversion 3.16."17:23
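(For reference, a sketch of the raw volume action that the CLI forms discussed below boil down to, using jgilaber's example targets; the field names follow the block-storage api-ref, and whether the #pool suffix is required is exactly the open question.)

    # POST /v3/{project_id}/volumes/{volume_id}/action
    body_backend_only = {
        'os-migrate_volume': {
            'host': 'jgilaber-watcher-2@lvmdriver-2',              # host@backend
            'force_host_copy': False,
            'lock_volume': False,
        }
    }

    body_with_pool = {
        'os-migrate_volume': {
            'host': 'jgilaber-watcher-3@lvmdriver-3#lvmdriver-3',  # host@backend#pool
            'force_host_copy': False,
            'lock_volume': False,
        }
    }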
dviroelyeah17:23
dviroelcinder also creates a default pool for drivers that don't report pools17:24
dviroelhttps://github.com/openstack/cinder/blob/d02171164bdd702b12b59888b744d172f30d712d/cinder/scheduler/host_manager.py#L252-L26017:24
sean-k-mooneyyou say that but we have a bug report where it does not17:24
sean-k-mooneyand I have checked that with ceph I think17:24
sean-k-mooneyand it didn't17:24
dviroelyeah, i remember17:24
sean-k-mooneyhttps://bugs.launchpad.net/watcher/+bug/208811817:26
jgilaberyes I think I also checked with ceph and nfs and the pool_name is not there when reporting the pool capabilities17:26
jgilaberin my env this command works well openstack volume migrate test_migrate --host jgilaber-watcher-2@lvmdriver-217:26
jgilaberand so does openstack volume migrate test_migrate --host jgilaber-watcher-3@lvmdriver-3#lvmdriver-317:27
jgilaberunless this has changed in the flamingo release, my env is a bit old17:27
sean-k-mooneyit should not17:27
sean-k-mooneyI think pool is only required if there are pools, and even then that's not entirely clear17:28
sean-k-mooneyjgilaber: can you run https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/volume-backend.html#volume-backend-pool-list17:28
sean-k-mooneyopenstack volume backend pool list --long | nc termbin.com 999917:29
jgilabersure https://termbin.com/zyvr17:33
jgilaberI've messed a bit with the backend configs in my environment, so it doesn't follow the default deployment config17:33
jgilaberI've changed backend and pool names17:33
sean-k-mooneyack you have 4 backends on 3 hosts17:34
sean-k-mooney2 on jgilaber-watcher-3 and one on the other 217:34
jgilabercorrect17:34
jgilaberI also changed pool names to configure types that would only be scheduled on some hosts17:35
sean-k-mooneycan you do the same with --debug, I want to see what the raw json api output is17:35
sean-k-mooneywell I can kind of see what I want I guess17:36
sean-k-mooneyname is the fully qualified name and pool_name is just the part after the #17:36
jgilaberhere it is just in case https://termbin.com/iy5b17:36
jgilaberthat's right17:37
jgilaberalso the pool name matches the value of volume_backend_name in the backend configuration, which I found surprising17:37
dviroelyeah, that's what the scheduler does if no pool is reported17:38
dviroelit creates one pool with the same name of the backend17:38
dviroelso we always have a pool17:38
dviroelthe problem is that it does not always have the attribute "pool_name"17:39
jgilaberack, that makes sense, and tbh I found it documented somewhere17:39
dviroelhttps://www.irccloud.com/pastebin/9ST8oQXS/17:39
sean-k-mooneyjgilaber: so the reason I wanted the debug with the raw json was to see if it was the client or the api that was doing that17:39
dviroelcheck this get_pools from a cinder job with ceph backend17:40
dviroel"name": "np3b49a2e5bd594@ceph#ceph" - "but no pool_name" 17:40
sean-k-mooneyright17:40
jgilaberdviroel, that matches what I remember, same for nfs17:40
sean-k-mooneyso it's not in the api response17:40
jgilaberit's driver dependent17:41
sean-k-mooneyit's added to the name but the pool_name key is not injected17:41
sean-k-mooneyya which also matches the bug17:41
sean-k-mooneyhttps://bugs.launchpad.net/watcher/+bug/208811817:41
sean-k-mooneyso name will always have a pool, which will just be a copy of the backend name if there is no pool name17:42
jgilaberso it looks to me like watcher is doing the right thing, but it should call it 'host' instead of 'pool'17:43
dviroelyes, the pool really exists in the scheduler, but it does not populate all capabilities, since it does not populate "pool_name", not sure about others17:44
sean-k-mooneywe should probably ignore the pool_name field entirely and just parse "name"17:44
sean-k-mooneyand ya in the input we should just call it host since that is the api field, or maybe something like FQBN, fully qualified backend name17:46
dviroelyeah, we should17:47
sean-k-mooneyhost is close to what the cli uses but I think the important thing for us is just to document the format properly and note that the pool (after the #) is option17:47
sean-k-mooney*optional17:47
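(A minimal sketch of that parsing, as an assumption about how it could be done rather than watcher's current code: split the fully qualified name and treat the pool segment as optional.)

    def split_backend_name(fully_qualified):
        """Split 'host@backend#pool' into (host, backend, pool).

        The '#pool' segment is optional: drivers such as ceph or nfs may not
        report a pool_name, and the migrate target can be given as host@backend.
        """
        host_backend, _, pool = fully_qualified.partition('#')
        host, _, backend = host_backend.partition('@')
        return host, backend, pool or None

    # Examples (first name taken from the logs above, second is made up):
    split_backend_name('np47ad865b91dc4@lvmdriver-1#lvmdriver-1')
    # -> ('np47ad865b91dc4', 'lvmdriver-1', 'lvmdriver-1')
    split_backend_name('controller@ceph')
    # -> ('controller', 'ceph', None)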
jgilaberack, I'll get started on a spec for that tomorrow17:49
sean-k-mooneyok I'm going to call it a day. cool, can you summarise how we want to proceed tomorrow in the irc meeting17:49
sean-k-mooneyi.e. what part you want to do as a bug vs the rest17:50
jgilaberI will also redeploy my env and try again, to ensure that with an up to date deployment the os-migrate_volume api accepts both with and without pool name17:50
jgilaberyep, I'll do that tomorrow17:50
sean-k-mooneythanks17:50
jgilaberthanks for the discussion, this has been immensely helpful!17:50
dviroel++ tks folks17:54
