jbernard | #startmeeting cinder | 14:01 |
---|---|---|
opendevmeet | Meeting started Wed Nov 13 14:01:12 2024 UTC and is due to finish in 60 minutes. The chair is jbernard. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:01 |
opendevmeet | The meeting name has been set to 'cinder' | 14:01 |
Luzi | o/ | 14:01 |
jbernard | #topic roll call | 14:01 |
whoami-rajat | hey | 14:01 |
jbernard | o/ heya everyone | 14:01 |
simondodsley | o/ | 14:01 |
rosmaita | o/ | 14:01 |
msaravan | Hi | 14:01 |
mhen | o/ | 14:01 |
manudinesh | Hi everyone | 14:02 |
sp-bmilanov | o/ | 14:02 |
Luzi | o/ | 14:02 |
akawai | o/ | 14:02 |
eharney | o/ | 14:03 |
jbernard | hello everyone, thanks for coming | 14:05 |
jbernard | lets get started | 14:05 |
jbernard | #topic announcements | 14:05 |
jbernard | M1 is this week | 14:05 |
jbernard | #link https://releases.openstack.org/epoxy/schedule.html | 14:05 |
jbernard | I was in a TC meeting yesterday discussing the state of our documentation for 3rd party CI | 14:07 |
jbernard | rosmaita reminded me of this one: https://etherpad.opendev.org/p/cinder-drivers-documentation | 14:07 |
jbernard | the general idea in the meeting is that it's difficult to find up-to-date docs on how to configure zuul to run devstack, test a local feature, and report that to gerrit | 14:08 |
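For context on the kind of configuration being discussed, here is a rough, illustrative sketch of what a third-party Zuul needs in order to run jobs and report back to Gerrit. It assumes a Gerrit connection named `opendev` is already defined in zuul.conf and that the upstream devstack/tempest repos are loaded into the tenant so their jobs can be reused; all other names are placeholders, not a documented recipe.

```yaml
# Illustrative third-party check pipeline; "opendev" is an assumed
# Gerrit connection name defined in zuul.conf.
- pipeline:
    name: third-party-check
    manager: independent
    trigger:
      opendev:
        - event: patchset-created
    success:
      opendev:
        Verified: 1
    failure:
      opendev:
        Verified: -1

# A vendor job reusing the upstream devstack-tempest job; the job and
# backend names are placeholders.
- job:
    name: vendor-cinder-tempest
    parent: devstack-tempest
    vars:
      devstack_localrc:
        CINDER_ENABLED_BACKENDS: "vendor:vendor-backend-1"
```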
simondodsley | i agree. it was very painful setting up SoftwareFactory - very little or poor documentation for Zuul configuration of SF | 14:09 |
jbernard | i suspect that's true, CI is not my area of comfort | 14:09 |
jbernard | so I'm wondering, what are our thoughts, do we agree? Is there a doc (or docs) that isn't well known? | 14:09 |
jbernard | simondodsley: how did you finally get it to work, just trial and error? | 14:10 |
whoami-rajat | I remember we had a presentation in some PTG about setting up CI with software factory, not sure how in-depth it was but at least a reference | 14:10 |
jbernard | for the TC meeting reference, here's the backlog: https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-12-18.00.log.html | 14:10 |
sp-bmilanov | > test a local feature, and report that to gerrit | 14:11 |
sp-bmilanov | What does a local feature in the context of a Zuul configured as a third-party CI mean? | 14:11 |
sp-bmilanov | > I remember we had a presentation in some PTG about setting up CI | 14:11 |
sp-bmilanov | that was me presenting :) | 14:11 |
rosmaita | someone from Pure did a virtual presentation at one of the midcycles (i think), and someone else did a presentation at the last vancouver summit | 14:11 |
rosmaita | sp-bmilanov: was that you at vancouver? | 14:11 |
jbernard | they're mostly concerned about how potential contributors may be turned away if 3rd party CI is both required and nearly un-navigable | 14:11 |
simondodsley | This is all the documentation out there for SoftwareFactory - https://softwarefactory-project.io/docs/contributor/index.html. It is only supported on RHEL 9 these days as well. | 14:12 |
jbernard | oy | 14:12 |
rosmaita | well, should be ok on Rocky 9, right? | 14:12 |
simondodsley | The Pure person is no longer with us, which is why I had such issues rebuilding a dead CI | 14:12 |
jbernard | sp-bmilanov: as in, testing code that requires local hardware | 14:12 |
jbernard | sp-bmilanov: cinder is relatively unique among the other projects in that respect | 14:13 |
simondodsley | No idea- it comes from Red Hat so unlikely they will help with Rocky 9 issues | 14:13 |
simondodsley | manila also requires 3rd party CIs as well | 14:13 |
sp-bmilanov | jbernard: sure, but what I would usually do is submit a WIP patch upstream, which gets detected as an event by our Zuul; that's why I got confused by the "local" part | 14:14 |
sp-bmilanov | I got what you mean now | 14:14 |
jbernard | sp-bmilanov: ahh, understandable | 14:14 |
jbernard | i think i'll try to compile a list of our various docs to start | 14:15 |
simondodsley | sp-bmilanov: you should really be running all the main tox tests locally and passing before submitting an upstream patch | 14:16 |
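For anyone new to that workflow, a typical local run before pushing might look like the following, assuming the standard cinder tox environments (exact environment names can differ per branch):

```shell
# from a cinder checkout
tox -e pep8          # style checks
tox -e py3           # unit tests
tox -e functional    # functional tests (slower, optional)
```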
jbernard | having a single living setup/configuration document would be ideal, but starting with a survey of what we have seems like a good starting point | 14:16 |
sp-bmilanov | simondodsley: oh yeah, definitely, it's after those that I send the WIP upstream | 14:17 |
simondodsley | jbernard: it might be worth setting up a working group to try and build a CI from scratch and document it. | 14:17 |
jbernard | simondodsley: that's an excellent idea | 14:17 |
sp-bmilanov | +1 | 14:18 |
simondodsley | I have some SF documentation from my build attempts. Happy to join this | 14:18 |
jbernard | given that I have nearly zero experience in doing that, i have no idea how much time and effort might be required | 14:18 |
sp-bmilanov | simondodsley: what did you end up doing with your CI? Zuul from scratch or a new SF? | 14:18 |
rosmaita | one thing to ask here though, is how vendors feel about third party CI in general | 14:18 |
simondodsley | new SF | 14:18 |
simondodsley | i think 3rd party is critical for cinder | 14:18 |
rosmaita | because the alternative is no CI and no in-tree non-community drivers | 14:18 |
jbernard | rosmaita: my sense is that they are interested in complying, as long as the hoops are not too numerous and time consuming | 14:19 |
simondodsley | there are so many potential changes that could affect other drivers, especially if vendors make changes to os-brick | 14:19 |
simondodsley | getting the CI setup is the annoying part - but once it is up and running it doesn't need much care and feeding | 14:19 |
jbernard | rosmaita: but i have no empirical evidence, just a hunch | 14:19 |
jbernard | ok, how about this: | 14:20 |
jbernard | i will distill the TC meeting backlog into a summary, collect our current scattered docs into a single reference | 14:21 |
jbernard | and we can see what we have, and try to create a POC project that demonstrates a simple CI that works | 14:21 |
jbernard | a starting point | 14:21 |
jbernard | both for potential new drivers, and a reference for existing ones | 14:21 |
rosmaita | it would be good if we could get a "tiger team" of vendor driver maintainers to work together on this | 14:22 |
simondodsley | i'll help | 14:22 |
jbernard | rosmaita: i agree | 14:22 |
rosmaita | simondodsley: ++ | 14:22 |
sp-bmilanov | I can help too | 14:22 |
jbernard | thank you simondodsley and sp-bmilanov | 14:23 |
rosmaita | sp-bmilanov: ++ | 14:23 |
jbernard | ok, that's enough to start | 14:23 |
simondodsley | it would be good to have someone from the Red Hat Software Factory team assist as well... | 14:23 |
jbernard | i can try to arrange that | 14:23 |
jbernard | or at least reach out and see if it's possible | 14:23 |
simondodsley | they can then see how their stuff is being used in real life and see how they can make their docs better | 14:23 |
sp-bmilanov | TBH I am not sure SF is viable anymore since it requires an OKD cloud (I need to fact-check the specifics) | 14:24 |
simondodsley | you can use RHEL 9 for SF - OpenShift is also an option | 14:24 |
msaravan | We tried deploying SF from scratch on a CentOS box a few months back, and it worked after a lot of attempts. | 14:24 |
jbernard | sp-bmilanov: the TC backlog (https://meetings.opendev.org/meetings/tc/2024/tc.2024-11-12-18.00.log.html) has some discussion about different approaches | 14:24 |
jbernard | msaravan: the issues you faced and what you ended up with would be really helpful as we try to capture this information | 14:25 |
simondodsley | msaravan: that is the issue - it's the multiple attempts due to poor support and documentation that are annoying | 14:25 |
sp-bmilanov | jbernard: thanks | 14:25 |
msaravan | Sure, we have our notes.. and we can share | 14:25 |
rosmaita | i'm glad to see that a key takeaway from this discussion is that vendors think that the third-party CI requirement is worthwhile | 14:25 |
jbernard | sp-bmilanov: there was significant discussion after the meeting ended, i have that in a local log i can make available | 14:26 |
jbernard | sp-bmilanov: but ill try to summarize it in the thing I start | 14:26 |
sp-bmilanov | jbernard: nice, can you send it over at #openstack-cinder ? | 14:26 |
jbernard | rosmaita: i think the value is understood, it's the time, trial-and-error, and poor docs that are frustrating | 14:27 |
jbernard | sp-bmilanov: can do | 14:27 |
sp-bmilanov | simondodsley: sure, and I guess that's not an issue for many vendors, but the minimal base for a cluster is IIRC three machines and 96 GB of RAM... Zuul can be run on a smaller machine | 14:28 |
simondodsley | I have SF running my CI on a single VM (ironically in OpenStack) | 14:28 |
sp-bmilanov | getting the new SF to run now includes getting the undercloud to run | 14:28 |
simondodsley | you don't need all those resources | 14:29 |
sp-bmilanov | is that an SF version that's still not EOL? | 14:29 |
simondodsley | yes | 14:29 |
sp-bmilanov | nice, I need to read thru the docs again then | 14:30 |
simondodsley | i ended up speaking directly with the SF developers | 14:30 |
jbernard | ok, i think we have a starting point | 14:31 |
jbernard | i will have updates in next week's meeting, if not sooner | 14:31 |
jbernard | #topic multi-transport protocol volume driver support | 14:31 |
jbernard | sp-bmilanov: ^ | 14:31 |
jbernard | #link https://review.opendev.org/c/openstack/cinder/+/847536 | 14:32 |
sp-bmilanov | thanks, that's a continuation of last week's discussion: how can we enable a single backend driver to handle multiple transport protocols | 14:33 |
simondodsley | i think the model followed by all other vendors is to have separate drivers for different protocols. | 14:34 |
sp-bmilanov | right now, in initialize_connection(), we try to detect what the host wants and return an appropriate driver_volume_type | 14:34 |
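To make the shape of that concrete, here is a hypothetical sketch of such a dispatch, not the actual patch under review; `_host_has_native_client()` and `_build_iscsi_connection_data()` are made-up helper names used only for illustration.

```python
# Hypothetical sketch: return a driver_volume_type matching what the
# connecting host supports (native block protocol vs. iSCSI).
def initialize_connection(self, volume, connector):
    if self._host_has_native_client(connector):   # made-up helper
        return {
            'driver_volume_type': 'storpool',
            'data': {'volume': volume.name},       # illustrative payload
        }
    return {
        'driver_volume_type': 'iscsi',
        'data': self._build_iscsi_connection_data(volume, connector),  # made-up helper
    }
```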
sp-bmilanov | yep, but there are use cases with hypervisors connecting over different protocols | 14:35 |
simondodsley | that is OK - you can define multiple backend stanzas in the config file pointing to the same backend, but with a different driver. | 14:36 |
simondodsley | then you just use volume_types | 14:36 |
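For illustration, the pattern simondodsley describes would look roughly like this in cinder.conf; the section, driver, and backend names are placeholders for a single array exposed through two protocol drivers.

```ini
[DEFAULT]
enabled_backends = vendor-iscsi,vendor-fc

[vendor-iscsi]
volume_backend_name = vendor-iscsi
volume_driver = cinder.volume.drivers.vendor.iscsi.VendorISCSIDriver

[vendor-fc]
volume_backend_name = vendor-fc
volume_driver = cinder.volume.drivers.vendor.fc.VendorFCDriver
```

Volume types would then be tied to one stanza or the other via the `volume_backend_name` extra spec, e.g. `openstack volume type set --property volume_backend_name=vendor-iscsi <type>`.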
sp-bmilanov | but in that case, how would Cinder work if the volume is requested from a client that does not support the protocol of the backend instance that manages the volume? | 14:39 |
simondodsley | if you have a situation where there are multi hypervisors in the cluster that don't all have the same dataplane connectivity, you can also cater for that with nova flavors | 14:39 |
sp-bmilanov | so a Nova flavor would map to another volume type? | 14:40 |
sp-bmilanov | s/another volume type/another volume type -> another volume backend/ | 14:41 |
simondodsley | you set a flavor to require a hypervisor with a specific device available, e.g. an FC card. I can't remember the exact setup, but we use that in our CI, ironically | 14:42 |
simondodsley | it is usual though that production clusters will all have hypervisors with the same connectivity, to cater for failover or upgrade events | 14:43 |
sp-bmilanov | usually yes, but not always | 14:44 |
whoami-rajat | sp-bmilanov, I'm not 100% clear on the use case, the consumer of cinder volumes (mostly Nova) sets up the appropriate initiator-related packages in the compute node right? doesn't this look like a deployment/consumer issue rather than a cinder issue? | 14:46 |
sp-bmilanov | the issue is that a volume managed by one backend instance (that defines the transport protocol as well) will not be accessible to part of the clients that do not support that protocol | 14:48 |
simondodsley | i just had a quick look - we use the flavor metadata parameter `pci_passthrough:alias`. In this case, the flavor is then reflected in the zuul config. So maybe this is only good for a CI system. | 14:48 |
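For reference, a flavor-based approach along those lines might look like the following; `fc-hba` is a placeholder that would have to match a `[pci]` alias configured on the Nova side, and the exact setup varies per deployment.

```shell
# Illustrative only: steer instances to FC-capable hosts via a PCI alias.
openstack flavor create --ram 4096 --vcpus 2 --disk 20 fc-capable
openstack flavor set fc-capable --property "pci_passthrough:alias"="fc-hba:1"
```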
simondodsley | sp-bmilanov: i think your issue is because you are an SDS solution, rather than a full external storage platform. If that is the case, this must be catered for in the way Ceph deals with the issue??? | 14:50 |
whoami-rajat | what is the client that we are referring to here? is it a compute node or a different consumer? | 14:51 |
sp-bmilanov | simondodsley: I see Ceph exporting only over iSCSI in the codebase? | 14:52 |
sp-bmilanov | whoami-rajat: yes, compute nodes | 14:53 |
simondodsley | so what protocols does SP support? | 14:53 |
sp-bmilanov | the StorPool block protocol and iSCSI | 14:53 |
sp-bmilanov | whoami-rajat: we can extend this to how Cinder attaches volumes to itself as well | 14:54 |
sp-bmilanov | but it's mainly compute nodes | 14:54 |
simondodsley | so SP has its own connector for your block storage protocol? | 14:55 |
sp-bmilanov | yes, but the SP driver in Cinder can also expose volumes over iSCSI, and in that case, os-brick uses the iSCSI connector/initiator | 14:55 |
whoami-rajat | looks like it https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connectors/storpool.py | 14:55 |
simondodsley | my concern is that if you start merging protocols into a single driver, this will potentially cause issues in the future. What happens if/when you start to support NVMe | 14:58 |
sp-bmilanov | the way it's proposed now, that would be a case of extending initialize_connection() | 14:59 |
jbernard | we're nearly out of time | 14:59 |
jbernard | mhen, Luzi: i see your updates, regarding your questions | 14:59 |
simondodsley | and then what happens when you add FC support - it will get too complicated. Better to split | 14:59 |
jbernard | sp-bmilanov: can you weigh in on https://etherpad.opendev.org/p/cinder-epoxy-meetings#L86 ? it is storpool ci related | 15:00 |
jbernard | mhen, Luzi ill look at the IBM CI situation | 15:00 |
sp-bmilanov | jbernard: thanks, I will have a look | 15:00 |
jbernard | sp-bmilanov: is there a technical reason that it cannot be split? | 15:00 |
jbernard | lets continue this one in the cinder channel | 15:01 |
jbernard | or in next week's meeting | 15:01 |
jbernard | very quickly | 15:01 |
sp-bmilanov | let's continue next week | 15:01 |
jbernard | this friday is the 3rd friday, so it's the festival of reviews | 15:01 |
jbernard | i propose we add spec review to that | 15:02 |
jbernard | there are only 2 or 3 | 15:02 |
jbernard | raised by whoami-rajat in the agenda | 15:02 |
whoami-rajat | ++ thanks | 15:02 |
jbernard | and with that I think we've covered everything (at least in some part) | 15:02 |
jhorstmann | I hope to have the dm-clone driver spec ready by friday as well | 15:03 |
jbernard | #topic last thoughts | 15:03 |
jbernard | jhorstmann: awesome | 15:03 |
whoami-rajat | just wanted to raise awareness so we don't forget about specs (I also have one there) + anyone who wants to propose a spec for a feature, this was a reminder | 15:03 |
whoami-rajat | thanks jhorstmann , looking forward to it | 15:03 |
jbernard | ok, thanks everyone! | 15:03 |
jbernard | #endmeeting | 15:03 |
opendevmeet | Meeting ended Wed Nov 13 15:03:52 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:03 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.html | 15:03 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.txt | 15:03 |
opendevmeet | Log: https://meetings.opendev.org/meetings/cinder/2024/cinder.2024-11-13-14.01.log.html | 15:03 |
whoami-rajat | thanks! | 15:03 |
sp-bmilanov | thanks | 15:04 |