Wednesday, 2024-11-13

02:35 *** mhen_ is now known as mhen
14:01 <jbernard> #startmeeting cinder
14:01 <opendevmeet> Meeting started Wed Nov 13 14:01:12 2024 UTC and is due to finish in 60 minutes.  The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01 <opendevmeet> The meeting name has been set to 'cinder'
14:01 <Luzi> o/
14:01 <jbernard> #topic roll call
14:01 <whoami-rajat> hey
14:01 <jbernard> o/ heya everyone
14:01 <simondodsley> o/
14:01 <rosmaita> o/
14:01 <msaravan> Hi
14:01 <mhen> o/
14:02 <manudinesh> Hi everyone
14:02 <sp-bmilanov> o/
14:02 <Luzi> o/
14:02 <akawai> o/
14:03 <eharney> o/
14:05 <jbernard> hello everyone, thanks for coming
14:05 <jbernard> let's get started
14:05 <jbernard> #topic announcements
14:05 <jbernard> M1 is this week
14:05 <jbernard> #link https://releases.openstack.org/epoxy/schedule.html
14:07 <jbernard> I was in a TC meeting yesterday discussing the state of our documentation for 3rd party CI
14:07 <jbernard> rosmaita reminded me of this one: https://etherpad.opendev.org/p/cinder-drivers-documentation
14:08 <jbernard> the general idea in the meeting is that it's difficult to find up-to-date docs on how to configure zuul to run devstack, test a local feature, and report the result to gerrit
14:09 <simondodsley> i agree. it was very painful setting up SoftwareFactory - very little or poor documentation for Zuul configuration of SF
14:09 <jbernard> i suspect that's true, CI is not my area of comfort
14:09 <jbernard> so I'm wondering, what are our thoughts, do we agree? Are there docs that aren't well known?
14:10 <jbernard> simondodsley: how did you finally get it to work, just trial and error?
14:10 <whoami-rajat> I remember we had a presentation at some PTG about setting up CI with software factory, not sure how in-depth it was, but at least a reference
14:10 <jbernard> for the TC meeting reference, here's the backlog: https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-12-18.00.log.html
14:11 <sp-bmilanov> > test a local feature, and report that to gerrit
14:11 <sp-bmilanov> What does a local feature mean in the context of a Zuul configured as a third-party CI?
14:11 <sp-bmilanov> > I remember we had a presentation in some PTG about setting up CI
14:11 <sp-bmilanov> that was me presenting :)
14:11 <rosmaita> someone from Pure did a virtual presentation at one of the midcycles (i think), and someone else did a presentation at the last vancouver summit
14:11 <rosmaita> sp-bmilanov: was that you at vancouver?
14:11 <jbernard> they're mostly concerned about how potential contributors may be turned away if 3rd party CI is both required and nearly un-navigable
14:12 <simondodsley> This is all the documentation out there for SoftwareFactory - https://softwarefactory-project.io/docs/contributor/index.html. It is only supported on RHEL 9 these days as well.
14:12 <jbernard> oy
14:12 <rosmaita> well, should be ok on Rocky 9, right?
14:12 <simondodsley> The Pure person is no longer with us, hence why I had such issues rebuilding a dead CI
14:12 <jbernard> sp-bmilanov: as in, testing code that requires local hardware
14:13 <jbernard> sp-bmilanov: cinder is relatively unique among the other projects in that respect
14:13 <simondodsley> No idea - it comes from Red Hat, so it's unlikely they will help with Rocky 9 issues
14:13 <simondodsley> manila also requires 3rd party CIs
14:14 <sp-bmilanov> jbernard: sure, but what I would usually do is submit a WIP patch upstream, which gets detected as an event by our Zuul; that's why I got confused by the "local" part
14:14 <sp-bmilanov> I got what you mean now
14:14 <jbernard> sp-bmilanov: ahh, understandable
14:15 <jbernard> i think i'll try to compile a list of our various docs to start
14:16 <simondodsley> sp-bmilanov: you should really be running all the main tox tests locally, and passing, before submitting an upstream patch
14:16 <jbernard> having a single living setup/configuration document would be ideal, but starting with a survey of what we have seems like a good starting point
14:17 <sp-bmilanov> simondodsley: oh yeah, definitely, it's after those that I send the WIP upstream
14:17 <simondodsley> jbernard: it might be worth setting up a working group to try and build a CI from scratch and document it.
14:17 <jbernard> simondodsley: that's an excellent idea
14:18 <sp-bmilanov> +1
14:18 <simondodsley> I have some SF documentation from my build attempts. Happy to join this
14:18 <jbernard> given that I have nearly zero experience in doing that, i have no idea how much time and effort might be required
14:18 <sp-bmilanov> simondodsley: what did you end up doing with your CI? Zuul from scratch or a new SF?
14:18 <rosmaita> one thing to ask here, though, is how vendors feel about third party CI in general
14:18 <simondodsley> new SF
14:18 <simondodsley> i think 3rd party is critical for cinder
14:18 <rosmaita> because the alternative is no CI and no in-tree non-community drivers
14:19 <jbernard> rosmaita: my sense is that they are interested in complying, as long as the hoops are not too numerous and time consuming
14:19 <simondodsley> there are so many potential changes that could affect other drivers, especially if vendors make changes to os-brick
14:19 <simondodsley> getting the CI set up is the annoying part - but once it is up and running it doesn't need much care and feeding
14:19 <jbernard> rosmaita: but i have no empirical evidence, just a hunch
14:20 <jbernard> ok, how about this:
14:21 <jbernard> i will distill the TC meeting backlog into a summary, and collect our current scattered docs into a single reference
14:21 <jbernard> and we can see what we have, and try to create a POC project that demonstrates a simple CI that works
14:21 <jbernard> a starting point
14:21 <jbernard> both for potential new drivers, and a reference for existing ones
14:22 <rosmaita> it would be good if we could get a "tiger team" of vendor driver maintainers to work together on this
14:22 <simondodsley> i'll help
14:22 <jbernard> rosmaita: i agree
14:22 <rosmaita> simondodsley: ++
14:22 <sp-bmilanov> I can help too
14:23 <jbernard> thank you simondodsley and sp-bmilanov
14:23 <rosmaita> sp-bmilanov: ++
14:23 <jbernard> ok, that's enough to start
14:23 <simondodsley> it would be good to have someone from the Red Hat Software Factory team assist as well...
14:23 <jbernard> i can try to arrange that
14:23 <jbernard> or at least reach out and see if it's possible
14:23 <simondodsley> they can then see how their stuff is being used in real life and see how they can make their docs better
14:24 <sp-bmilanov> TBH I am not sure SF is viable anymore since it requires an OKD cloud (I need to fact-check the specifics)
14:24 <simondodsley> you can use RHEL 9 for SF - OpenShift is also an option
14:24 <msaravan> We tried deploying SF from scratch on a centos box a few months back, and it worked after a lot of attempts.
14:24 <jbernard> sp-bmilanov: the TC backlog (https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-12-18.00.log.html) has some discussion about different approaches
14:25 <jbernard> msaravan: the issues you faced and what you ended up with would be really helpful as we try to capture this information
14:25 <simondodsley> msaravan: that is the issue - it's the multiple attempts due to poor support and documentation that are annoying
14:25 <sp-bmilanov> jbernard: thanks
14:25 <msaravan> Sure, we have our notes.. and we can share
14:25 <rosmaita> i'm glad to see that a key takeaway from this discussion is that vendors think the third-party CI requirement is worthwhile
14:26 <jbernard> sp-bmilanov: there was significant discussion after the meeting ended, i have that in a local log i can make available
14:26 <jbernard> sp-bmilanov: but i'll try to summarize it in the thing I start
14:26 <sp-bmilanov> jbernard: nice, can you send it over on #openstack-cinder?
14:27 <jbernard> rosmaita: i think the value is understood, it's the time, trial-and-error, and poor docs that are frustrating
14:27 <jbernard> sp-bmilanov: can do
14:28 <sp-bmilanov> simondodsley: sure, and I guess that's not an issue for many vendors, but the minimal base for a cluster is IIRC three machines and 96GB RAM.. Zuul can be run on a smaller machine
14:28 <simondodsley> I have SF running my CI on a single VM (ironically in OpenStack)
14:28 <sp-bmilanov> getting the new SF to run now includes getting the undercloud to run
14:29 <simondodsley> you don't need all those resources
14:29 <sp-bmilanov> is that an SF version that's still not EOL?
14:29 <simondodsley> yes
14:30 <sp-bmilanov> nice, I need to read through the docs again then
14:30 <simondodsley> i ended up speaking directly with the SF developers
14:31 <jbernard> ok, i think we have a starting point
14:31 <jbernard> i will have updates in next week's meeting, if not sooner
14:31 <jbernard> #topic multi-transport protocol volume driver support
14:31 <jbernard> sp-bmilanov: ^
14:32 <jbernard> #link https://review.opendev.org/c/openstack/cinder/+/847536
14:33 <sp-bmilanov> thanks, that's a continuation of last week's discussion: how can we enable a single backend driver to handle multiple transport protocols
14:34 <simondodsley> i think the model followed by all other vendors is to have separate drivers for different protocols.
14:34 <sp-bmilanov> right now, in initialize_connection(), we try to detect what the host wants and return an appropriate driver_volume_type
14:35 <sp-bmilanov> yep, but there are use cases with hypervisors connecting over different protocols
14:36 <simondodsley> that is OK - you can define multiple backend stanzas in the config file pointing to the same backend, but with a different driver.
14:36 <simondodsley> then you just use volume_types
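[The multi-stanza pattern simondodsley describes might look roughly like this in cinder.conf; backend names and the driver class paths below are illustrative placeholders, not any vendor's actual classes:]

```ini
# Two backend stanzas pointing at the same physical array, one per
# transport protocol (hypothetical driver classes for illustration).
[DEFAULT]
enabled_backends = myarray-fc, myarray-iscsi

[myarray-fc]
volume_backend_name = myarray-fc
volume_driver = cinder.volume.drivers.vendor.VendorFCDriver

[myarray-iscsi]
volume_backend_name = myarray-iscsi
volume_driver = cinder.volume.drivers.vendor.VendorISCSIDriver
```

[Each stanza is then selected via a volume type carrying the matching `volume_backend_name` extra spec, so users choose the transport by choosing the type.]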
14:39 <sp-bmilanov> but in that case, how would Cinder work if the volume is requested by a client that does not support the protocol of the backend instance that manages the volume?
14:39 <simondodsley> if you have a situation where there are multiple hypervisors in the cluster that don't all have the same dataplane connectivity, you can also cater for that with nova flavors
14:40 <sp-bmilanov> so a Nova flavor would map to another volume type?
14:41 <sp-bmilanov> s/another volume type/another volume type -> another volume backend/
14:42 <simondodsley> you set a flavor to have a requirement of a hypervisor with specific device availability, e.g. an FC card. I can't remember the exact setup, but we use that in our CI, ironically
14:43 <simondodsley> it is usual, though, that production clusters will all have hypervisors with the same connectivity, to cater for failover or upgrade events
14:44 <sp-bmilanov> usually yes, but not always
14:46 <whoami-rajat> sp-bmilanov, I'm not 100% clear on the use case, the consumer of cinder volumes (mostly Nova) sets up the appropriate initiator-related packages on the compute node, right? doesn't this look like a deployment/consumer issue rather than a cinder issue?
14:48 <sp-bmilanov> the issue is that a volume managed by one backend instance (which also defines the transport protocol) will not be accessible to those clients that do not support that protocol
14:48 <simondodsley> i just had a quick look - we use the flavor metadata parameter `pci_passthrough:alias`. This flavor is then reflected in the zuul config. So maybe this is only good for a CI system.
14:50 <simondodsley> sp-bmilanov: i think your issue is because you are an SDS solution, rather than a full external storage platform. If that is the case, this must be catered for in the way Ceph deals with the issue???
14:51 <whoami-rajat> what is the client that we are referring to here? is it a compute node or a different consumer?
14:52 <sp-bmilanov> simondodsley: I see Ceph exporting only over iSCSI in the codebase?
14:53 <sp-bmilanov> whoami-rajat: yes, compute nodes
14:53 <simondodsley> so what are the protocols SP supports?
14:53 <sp-bmilanov> the StorPool block protocol and iSCSI
14:54 <sp-bmilanov> whoami-rajat: we can extend this to how Cinder attaches volumes to itself as well
14:54 <sp-bmilanov> but it's mainly compute nodes
14:55 <simondodsley> so SP has its own connector for your block storage protocol?
14:55 <sp-bmilanov> yes, but the SP driver in Cinder can also expose volumes over iSCSI, and in that case, os-brick uses the iSCSI connector/initiator
14:55 <whoami-rajat> looks like it https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/storpool.py
14:58 <simondodsley> my concern is that if you start merging protocols into a single driver, this will potentially cause issues in the future. What happens if/when you start to support NVMe?
14:59 <sp-bmilanov> the way it's proposed now, that would be a case of extending initialize_connection()
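[A minimal sketch of the dispatch sp-bmilanov describes, not the actual StorPool driver code: the "storpool" capability key is hypothetical, while "initiator" is the connector key that normally carries an iSCSI IQN.]

```python
# Illustrative sketch of a single driver's initialize_connection()
# choosing a driver_volume_type from what the connecting host advertises.

def choose_transport(connector: dict) -> str:
    """Pick a transport based on connector capabilities."""
    if connector.get("storpool"):
        # Host runs the native block-protocol client (hypothetical flag).
        return "storpool"
    if connector.get("initiator"):
        # Host only advertises an iSCSI initiator IQN: fall back to iSCSI.
        return "iscsi"
    raise ValueError("connector supports no transport this backend offers")


def initialize_connection(volume_id: str, connector: dict) -> dict:
    transport = choose_transport(connector)
    # A real driver would fill in protocol-specific data here
    # (portals, LUNs, cluster credentials, ...); this returns only the type.
    return {"driver_volume_type": transport, "data": {"volume_id": volume_id}}
```

[simondodsley's objection is that each new transport (NVMe, FC) adds another branch here, whereas split drivers keep each protocol's attach path isolated.]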
14:59 <jbernard> we're nearly out of time
14:59 <jbernard> mhen, Luzi: i see your updates, regarding your questions
14:59 <simondodsley> and then what happens when you add FC support - it will get too complicated. Better to split
15:00 <jbernard> sp-bmilanov: can you weigh in on https://etherpad.opendev.org/p/cinder-epoxy-meetings#L86 ? it is storpool ci related
15:00 <jbernard> mhen, Luzi: i'll look at the IBM CI situation
15:00 <sp-bmilanov> jbernard: thanks, I will have a look
15:00 <jbernard> sp-bmilanov: is there a technical reason that it cannot be split?
15:01 <jbernard> let's continue this one in the cinder channel
15:01 <jbernard> or in next week's meeting
15:01 <jbernard> very quickly
15:01 <sp-bmilanov> let's continue next week
15:01 <jbernard> this friday is the 3rd friday, so it's the festival of reviews
15:02 <jbernard> i propose we add spec review to that
15:02 <jbernard> there are only 2 or 3
15:02 <jbernard> raised by whoami-rajat in the agenda
15:02 <whoami-rajat> ++ thanks
15:02 <jbernard> and with that I think we've covered everything (at least in some part)
15:03 <jhorstmann> I hope to have the dm-clone driver spec ready by friday as well
15:03 <jbernard> #topic last thoughts
15:03 <jbernard> jhorstmann: awesome
15:03 <whoami-rajat> just wanted to raise awareness so we don't forget about specs (I also have one there) + anyone who wants to propose a spec for a feature, this was a reminder
15:03 <whoami-rajat> thanks jhorstmann, looking forward to it
15:03 <jbernard> ok, thanks everyone!
15:03 <jbernard> #endmeeting
15:03 <opendevmeet> Meeting ended Wed Nov 13 15:03:52 2024 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:03 <opendevmeet> Minutes:        https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.html
15:03 <opendevmeet> Minutes (text): https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.txt
15:03 <opendevmeet> Log:            https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html
15:03 <whoami-rajat> thanks!
15:04 <sp-bmilanov> thanks

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!