16:01:35 <jungleboyj> #startmeeting Cinder
16:01:36 <openstack> Meeting started Wed Oct 16 16:01:35 2019 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:38 <smcginnis> o/
16:01:40 <openstack> The meeting name has been set to 'cinder'
16:01:48 <davidsha> o/
16:01:50 <geguileo> hi! o/
16:01:54 <jungleboyj> @!
16:01:54 <_pewp_> jungleboyj (✧∇✧)╯
16:01:56 <eharney> hi
16:02:07 <e0ne> hi
16:02:21 <tosky> o/
16:02:26 <carloss> hi
16:02:33 <lseki> hi
16:02:34 <davee__> o/
16:02:58 <jungleboyj> Howdy everyone.
16:03:13 <davee__> greetings all
16:03:20 <jungleboyj> Give everyone another minute to join.
16:04:34 <walshh_> hi all
16:05:25 <jungleboyj> Ok.  Let's get started.
16:05:44 <jungleboyj> Our last Train meeting.
16:05:48 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-train-meetings
16:06:05 <jungleboyj> And, therefore I guess the last meeting that I will be running.  :-)
16:06:21 <jungleboyj> #topic announcements
16:06:50 <jungleboyj> So, next week we will switch to the Ussuri meeting etherpad.
16:06:55 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-ussuri-meetings
16:07:53 <jungleboyj> If you have topics for next week please put them in there.
16:08:21 <jungleboyj> BTW, Brian isn't feeling well today so I am going to run things.
16:08:25 <jungleboyj> For this meeting.
16:08:40 <jungleboyj> Anyway, want to get feedback on how to handle the ping list.
16:09:01 <jungleboyj> Brian has copied it over right now.  Do we want to keep the current list or remove it and start fresh?
16:09:37 * jungleboyj hears crickets
16:09:56 <e0ne> AFAIR, we usually start with an empty list each release
16:09:57 <smcginnis> I think start fresh.
16:09:57 <davee__> start fresh to trim dead weight
16:10:10 <smcginnis> Kind of the reason to have a new etherpad anyway...
16:10:31 <jungleboyj> Ok, so, I will add a new ping list out there and give it a couple of meetings then we will remove the old list.
16:11:04 <jungleboyj> Thanks for the feedback.
16:11:32 <jungleboyj> Reminder to please add to the PTG planning list:
16:11:35 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-ussuri-ptg-planning
16:11:56 <jungleboyj> Not a lot of content right now.  Don't want to go all the way to Shanghai for nothing.  ;-)
16:12:30 <davee__> frequent flier miles for that trip is not exactly nothing ;)
16:12:41 <smcginnis> Maybe a good task for Brian to post to the ML to try to get some topics on there from the Chinese community.
16:13:03 <jungleboyj> davee__:  True.  And I am already Comfort + there and back.
16:13:22 <whoami-rajat> Hi
16:14:05 <jungleboyj> Ok.  So, Forum Sessions.  We have these two accepted:
16:14:17 <jungleboyj> #link https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24404/how-are-you-using-cinders-volume-types
16:14:28 <jungleboyj> #link https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24403/are-you-using-upgrade-checks
16:14:51 <jungleboyj> If you have notes, discussion points, etc. on these please add them here:
16:14:54 <jungleboyj> https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals
16:15:03 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-shanghai-forum-proposals
16:15:15 <jungleboyj> We will get them into etherpads for discussion at the summit.
16:15:28 <jungleboyj> Ok.  That was what I had for announcements.
16:15:46 <jungleboyj> #topic follow-up discussion on Removing Legacy Attach Code in Nova
16:15:57 <jungleboyj> So, we said we would continue this discussion this week.
16:16:09 <jungleboyj> geguileo: mriedem Updates here?
16:16:46 <geguileo> jungleboyj: I had a look at the mail thread, replied, and then got dragged into fixing FC in OS-Brick, so I'm not up to date with the conversation
16:17:19 <jungleboyj> geguileo:  Ok.  I don't think there was a lot more discussion after your last note.
16:17:39 <geguileo> jungleboyj: I think there was a reply
16:18:02 <geguileo> jungleboyj: but in summary I don't think Nova should try to find a flow with what we have that works for all cases
16:18:07 <jungleboyj> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010066.html
16:18:19 <geguileo> I think we should add new functionality to Cinder
16:18:29 <jungleboyj> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010069.html
16:18:32 <smcginnis> That sounded like the best path forward.
16:18:36 <geguileo> one that allows them to add the connection_info
16:18:44 <jungleboyj> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-October/010071.html
16:18:46 <geguileo> on the Cinder attachment object
16:19:18 <jungleboyj> geguileo:  Ok, and I think that was the general agreement for last week's meeting as well.
16:19:23 <smcginnis> I get a vague feeling that was discussed with John as these APIs were being designed and shot down, but I have no idea why at this point.
16:19:30 <smcginnis> Maybe it was a slightly different idea though.
16:19:48 <geguileo> I don't know
16:19:56 <jungleboyj> smcginnis:  That does feel slightly familiar.
16:20:23 <geguileo> but I think that that's the option that reduces the chances of finding weird bugs
16:20:33 <smcginnis> Well, unless someone steps forward with an argument of why it shouldn't be done, I think it makes sense given the current use case.
16:20:35 <geguileo> that will be blamed on Cinder even if it's not our "fault"
16:20:50 <geguileo> smcginnis: +1
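For context, a minimal sketch of what the proposal above could look like from a caller's point of view, assuming python-cinderclient and the attachment API (microversion 3.27+); the set_connection_info() call is hypothetical, it stands in for the new Cinder functionality geguileo is suggesting and is not an existing API:

    # Sketch only: set_connection_info() is the HYPOTHETICAL new Cinder call
    # under discussion. attachments.create() without a connector is the
    # existing "reserve" behaviour of the attachment API.
    from cinderclient import client as cinder_client

    def migrate_legacy_attachment(sess, volume_id, instance_uuid, connection_info):
        cinder = cinder_client.Client('3.44', session=sess)

        # Create an attachment record without a connector so Cinder does not
        # ask the driver for a brand-new backend mapping.
        new_attachment = cinder.attachments.create(volume_id, None, instance_uuid)

        # Proposed/hypothetical: store the connection_info Nova already holds
        # from the legacy os-initialize_connection call on the new attachment
        # object, instead of building a second mapping on the backend.
        cinder.attachments.set_connection_info(new_attachment, connection_info)

        return new_attachment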
16:21:23 <smcginnis> Tangential, but Matt had a good idea for a tempest plugin test to enforce the initialize_connection issue.
16:21:43 <smcginnis> But then, that's only really useful if the CIs actually run with the tempest plugin, which I think right now they do not.
16:21:49 <smcginnis> At least the majority of them.
16:21:51 <davee__> user:: I didn't say it was your fault, I said I blamed you
16:22:00 <geguileo> smcginnis: what was that meant to test?
16:22:17 <mriedem> that creating more than 1 attachment with the same host connector doesn't result in multiple backend connections
16:22:30 <mriedem> i.e. that the driver code was idempotent i think
16:22:32 <smcginnis> The after the fact assumption enforced by nova that calling initialize_connection multiple times would be idempotent.
16:22:47 <geguileo> mriedem: but that's not a requirement in Cinder
16:22:50 <smcginnis> davee__: ++ :D
16:23:01 <geguileo> mriedem: it's something that we should have been explicit about, but we weren't
16:23:16 <geguileo> because we don't have the driver interface properly documented
16:23:37 <mriedem> given the api isn't even documented i'm not sure how anyone outside of digging into the cinder code would know the intentions of the api
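To make the check Matt describes concrete, here is a rough sketch, assuming python-cinderclient and a keystoneauth session; whether drivers are actually required to behave this way is exactly the open question above:

    # Rough sketch of the idempotency check: calling the legacy
    # os-initialize_connection API twice with the same host connector should
    # hand back the same connection and not create a second backend mapping.
    from cinderclient import client as cinder_client

    def check_initialize_connection_idempotent(sess, volume_id, connector):
        cinder = cinder_client.Client('3.0', session=sess)
        volume = cinder.volumes.get(volume_id)

        first = cinder.volumes.initialize_connection(volume, connector)
        second = cinder.volumes.initialize_connection(volume, connector)

        # If the driver is idempotent, both calls describe the same target;
        # a driver that creates a new mapping each time will differ here.
        assert first == second, "driver returned different connection info"

        # Clean up the (single) backend connection.
        cinder.volumes.terminate_connection(volume, connector)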
16:23:48 <tosky> smcginnis: let's start from the basics: how do we force the CIs to run the tests from cinder_tempest_plugin?
16:24:13 <smcginnis> tosky: Best we've been able to come up with is a periodic review of what they are actually running.
16:24:14 <jungleboyj> tosky: Oh, that is a whole other discussion.
16:24:21 <jungleboyj> smcginnis: ++
16:24:27 <smcginnis> Since everyone reports a slightly different format, hard to automate the checking.
16:24:31 <tosky> it may be useful for other tests too
16:24:32 <mriedem> before going off and adding a spec and new API for updating the connection_info on an attachment, geguileo should probably read my response
16:24:51 <mriedem> because i also feel like jgriffith would be rolling in his grave talking about adding that :)
16:24:58 <smcginnis> :D
16:25:06 <jungleboyj> Ha!
16:25:17 <geguileo> mriedem: I will read the reply, but I don't think my recommendation will change...
16:25:31 <smcginnis> If we say his name three times, maybe he'll show up and review it. ;)
16:26:03 <jungleboyj> https://gph.is/1u23SD7
16:26:13 <jungleboyj> smcginnis: You mean jgriffith
16:26:15 <geguileo> the 2 APIs were not meant to be mixed
16:26:21 <davee__> smcginnis: or stab you in the ear with a Q-tip
16:26:29 <mriedem> what is wrong with this again:
16:26:36 <mriedem> 1. create new attachment,
16:26:41 <mriedem> 2. complete that attachment,
16:26:56 <mriedem> 3. call os-terminate_connection for the old attachment with the old host connector?
16:27:09 <mriedem> well, the order there is wrong
16:27:09 <geguileo> #2 may create a different mapping on the backend or may not
16:28:06 <mriedem> how is that different from migrating a server?
16:29:20 <jungleboyj> I was going to say that this would be good for a cross project discussion (yelling across the room we will be in) but mriedem is lucky enough to avoid the trip across the pond.
16:29:26 <mriedem> i.e. create a new attachment on dest host, terminate the connection for the source host, complete the dest host attachment to put the volume back into in-use status
16:29:44 <geguileo> mriedem: because migrating uses 2 different hosts
16:30:00 <geguileo> mriedem: and the bug in Nova where it was calling Cinder initialization a second time was fixed
16:30:07 <mriedem> i'd have to confirm but i wouldn't be surprised if we do ^ for same-host resize as well
16:30:25 <mriedem> geguileo: you're talking about live migration right
16:30:26 <mriedem> ?
16:30:31 <geguileo> mriedem: yup
16:30:38 <mriedem> i mean even just simple same-host resize
16:31:00 <geguileo> mriedem: then you may get a similar bug
16:31:22 <geguileo> I don't know the Nova code
16:31:24 <mriedem> which would have been around since....forever
16:31:51 <mriedem> i would need to confirm we do that for same host resize but i don't have anything coming to mind that we treat same-host resize differently wrt volumes
16:32:49 <mriedem> anyway we can move to the ML thread again
16:32:52 <mriedem> and yeah i won't be in shanghai
16:34:33 <jungleboyj> Ok.  Let's move this to the ML if you can keep working on this, geguileo?
16:34:39 <geguileo> mriedem: the paths of old terminate and removing an attachment are completely different
16:35:12 <mriedem> not from a nova pov
16:35:21 <geguileo> rofl rofl rofl
16:35:24 <geguileo> good for you
16:35:42 <mriedem> my point is if this is a problem where the host doesn't change for some drivers, it's been a problem forever
16:36:24 <geguileo> you say it as if that would surprise you...
16:37:08 <mriedem> no it wouldn't
16:37:13 <mriedem> nothing surprises me in openstack anymore
16:37:21 <mriedem> i'm surprised when shit *works*
16:37:29 * jungleboyj shakes my head
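For readers following the thread, a rough sketch of the sequence mriedem asks about above (create a new attachment, terminate the old connection, then complete the new attachment), assuming python-cinderclient with microversion 3.44; note that it deliberately mixes the attachment API with the legacy os-terminate_connection call, which is the part geguileo objects to, so this is the flow under debate rather than an endorsed Cinder workflow:

    # Sketch of the mixed old/new flow being debated; not an endorsed workflow.
    from cinderclient import client as cinder_client

    def swap_to_new_style_attachment(sess, volume_id, instance_uuid, host_connector):
        cinder = cinder_client.Client('3.44', session=sess)
        volume = cinder.volumes.get(volume_id)

        # 1. Create (and map) a new attachment with the current host connector.
        #    The response is a dict-like record containing the attachment id.
        new_attachment = cinder.attachments.create(volume_id, host_connector,
                                                   instance_uuid)
        attachment_id = new_attachment['id']

        # 2. Tear down the old-style connection for the same host connector.
        #    Whether this disturbs the mapping just created in step 1 depends
        #    on the backend/driver, which is geguileo's concern above.
        cinder.volumes.terminate_connection(volume, host_connector)

        # 3. Mark the new attachment complete so the volume returns to in-use.
        cinder.attachments.complete(attachment_id)

        return attachment_id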
16:38:15 <jungleboyj> Ok.  Let's move on to the other topics and we can keep working this one in the channel or through the ML.
16:38:57 <jungleboyj> Any disagreement?
16:39:07 <mriedem> nope
16:41:00 <jungleboyj> Okie dokie.
16:41:10 <jungleboyj> #topic Team Dinner at Shanghai PTG.
16:41:17 <jungleboyj> Anyone interested in doing this?
16:41:43 <e0ne> +1
16:41:56 <jungleboyj> There is 1 yes.
16:42:03 <jungleboyj> e0ne:  You will be there?
16:42:12 <geguileo> jungleboyj: +1
16:42:33 <e0ne> jungleboyj: yes, just booked a hotel and flights
16:42:39 <smcginnis> I like food.
16:42:51 <jungleboyj> Oh yeah, there have been some updates in the etherpad.
16:43:00 <jungleboyj> Actually have a good list of people there.
16:43:06 <jungleboyj> smcginnis:  So do I.  Too much.
16:44:00 <jungleboyj> Ok, so, it sounds like there is interest.  I will work with Brian to plan something.
16:44:22 <lseki> +1
16:44:40 <geguileo> jungleboyj: thanks!
16:44:48 <jungleboyj> #action jungleboyj  to work with rosmaita to plan a dinner.  Can talk about nights, etc in next week's meeting.
16:45:26 <jungleboyj> #topic recording of PTG meetings
16:45:43 <jungleboyj> So, given the Great Firewall, I am trying to decide what we want to do here.
16:46:54 <jungleboyj> I am going to sign up for a VPN that is supposed to work, but who knows.
16:47:08 <jungleboyj> I am also only taking my work laptop.  Not personal one.
16:47:26 <e0ne> as a backup plan, we can record it and publish videos after the PTG
16:47:34 <jungleboyj> I see that we have a few people listed as remote attendees.
16:48:01 <jungleboyj> e0ne: True.  Also concerned by the fact that we are all going to be in one big room.
16:48:13 <smcginnis> I like that plan e0ne.
16:48:29 <geguileo> jungleboyj: true, everyone in one room is not going to be great for remotees
16:48:31 <smcginnis> Even if we can't do real time, we can at least record things for others to watch later.
16:48:39 <smcginnis> You know, if they have trouble sleeping or something.
16:48:58 <jungleboyj> smcginnis:  So we will at least have content like the other events.
16:49:23 <geguileo> yeah, that would be nice
16:49:35 <whoami-rajat> Glad I'm not remote this time :)
16:49:51 <davee__> can anyone point out where to find more info on attending remotely since I cannot attend this one?
16:49:52 <jungleboyj> Ok.  So I will still bring my big Mic and camera.
16:50:19 <jungleboyj> davee__:  Well, we will put info in the etherpad and IRC as it happens if we are able to do so.
16:50:58 <jungleboyj> I will need someone to take over recording on the second day as I will be in TC meetings.
16:51:28 <jungleboyj> Ok.  That answers my question there.
16:51:55 <jungleboyj> #topic Update of legacy jobs for moving to py3 only.
16:52:00 <jungleboyj> Who added this topic?
16:52:43 <smcginnis> Ah, that'd be me.
16:52:48 <jungleboyj> :-)
16:52:50 <smcginnis> Just a heads up to start thinking or researching.
16:52:52 <jungleboyj> smcginnis:  Take it away.
16:53:03 <smcginnis> I know we have a PTG topic to talk about moving to py3 only.
16:53:23 <smcginnis> One thing that I saw somewhere was a mention that some of the legacy jobs may not be set up right to run on a py3-only node.
16:53:29 <smcginnis> I'm not sure if we are impacted or not.
16:53:36 <smcginnis> We have the LIO-barbican job.
16:53:49 <smcginnis> And I think we run a few others that are not in-tree.
16:53:54 <tosky> if a certain zuul patch lands, we may quickly kill a few of the legacy jobs
16:54:05 <smcginnis> OK, great. Thanks tosky.
16:54:23 <smcginnis> I don't really have the bandwidth to investigate, so I wanted to at least make sure others were aware of it.
16:54:26 <tosky> (see my pending patches; of course you all can start checking if they do what they are supposed to do in the meantime :)
16:54:37 <smcginnis> tosky: Do you have a link to that patch?
16:54:51 <smcginnis> Would be great if folks could review those and get them through.
16:54:58 <smcginnis> That would be one less concern for the migration.
16:55:15 <smcginnis> Nothing's finalized with the overall plan, but the hope is to be able to drop py2 support by milestone 1.
16:55:28 <smcginnis> So really just a couple months to identify any blockers to doing that.
16:55:40 <smcginnis> And personally, I really wish I had more time to rip out all that compat code. :)
16:55:51 <tosky> uhm, https://review.opendev.org/#/q/status:open+owner:%22Luigi+Toscano+%253Cltoscano%2540redhat.com%253E%22++topic:zuulv3
16:56:26 <smcginnis> Awesome, thanks tosky
16:56:28 <jungleboyj> https://gph.is/2F9U7tV
16:56:32 <tosky> and also this zuul patch: https://review.opendev.org/#/c/674334/ (but you can see that from the dependency)
16:57:01 <smcginnis> Cool, I will try to review those later.
16:57:08 <jungleboyj> Cool.
16:57:15 <smcginnis> I guess that's all from me. Running out of time anyway.
16:57:17 <jungleboyj> Anything else there, smcginnis?
16:57:22 <smcginnis> Nope
16:57:23 <jungleboyj> Ok.
16:57:28 <jungleboyj> #topic open discussion
16:57:35 <jungleboyj> Any topics for the last 3 minutes?
16:57:46 <anastzhyr> Lastly, I wanted to say hello to everyone, I am a newbie in Cinder
16:57:59 <jungleboyj> anastzhyr: Welcome!
16:58:02 <anastzhyr> And I wanted to contribute to Cinder+Tempest
16:58:04 <tosky> as a follow-up to my previous question (how do we force 3rd party CIs to run the cinder_tempest_plugin?)
16:58:22 <jungleboyj> anastzhyr:  Let us know if you have questions.
16:58:30 <tosky> welcome anastzhyr
16:58:37 <anastzhyr> And I would be happy for any help and support in my first steps in open source
16:58:45 <jungleboyj> tosky:  Need to go through the logs and see who is and isn't running them.
16:58:54 <anastzhyr> Thanks a lot
16:59:03 <smcginnis> Welcome anastzhyr!
16:59:08 <tosky> jungleboyj: I guess we don't have a unified log place; but do all of them at least publish the subunit file?
16:59:17 <tosky> we can probably discuss it after the meeting, or next week
16:59:23 <whoami-rajat> tosky: I think improving the test coverage in cinder-tempest-plugin is also a related topic
16:59:31 <jungleboyj> whoami-rajat:  ++
16:59:43 <smcginnis> tosky: They should be. But then, they also should be running the plugin tests.
16:59:59 <smcginnis> And should be running py3, and should be .....
17:00:02 <anastzhyr> Right now I am starting with unit tests for creating and deleting volumes
17:00:05 <jungleboyj> There isn't a unified log location.
17:00:09 <tosky> the two issues can be solved in parallel (making sure that the CIs run the plugin, and that the test coverage is increased)
17:00:10 <smcginnis> Time's up
17:00:17 <jungleboyj> I usually start with http://cinderstats.ivehearditbothways.com/cireport.txt and go look at what they push up.
17:00:36 <tosky> ack, thanks!
17:00:37 <jungleboyj> Ok.  We need to stop for today.  We can take this discussion to the cinder channel.
17:00:51 <jungleboyj> anastzhyr:  Join us in #openstack-cinder if you have more questions.
17:00:55 <jungleboyj> Thanks everyone!
17:00:57 <whoami-rajat> Thanks!
17:00:59 <jungleboyj> #endmeeting