* puck waves | 08:00 | |
tobberydberg | o/ | 08:00 |
fkr | o/ | 08:00 |
fkr | how is everyone? | 08:02 |
gtema | o/ - looks like a pattern | 08:02 |
fkr | i killed the pattern | 08:02 |
tobberydberg | ;-) | 08:03 |
tobberydberg | All good here! Hope you are all well as well! | 08:03 |
tobberydberg | #startmeeting publiccloud_sig | 08:03 |
opendevmeet | Meeting started Wed Feb 15 08:03:51 2023 UTC and is due to finish in 60 minutes. The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot. | 08:03 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 08:03 |
opendevmeet | The meeting name has been set to 'publiccloud_sig' | 08:03 |
tobberydberg | #link https://etherpad.opendev.org/p/publiccloud-sig-meeting | 08:04 |
tobberydberg | As usual, please put your name in there | 08:04 |
tobberydberg | Awesome that somebody stepped up and created the agenda - puck? | 08:05 |
puck | Yeah, guilty as charged. | 08:05 |
tobberydberg | Thank you :-) | 08:05 |
puck | np | 08:05 |
puck | I also started putting some content into the spec. | 08:05 |
puck | (but only a little bit) | 08:06 |
tobberydberg | #topic 1. How to continue the "standard set of properties" work | 08:06 |
tobberydberg | I saw that, great! | 08:06 |
puck | I'm thinking that the spec probably needs detail on what each of the properties are about. | 08:07 |
tobberydberg | Looking at that, it seems to have the decided additions and looks correct | 08:07 |
tobberydberg | +1 | 08:07 |
tobberydberg | I guess we will have to structure it and outline it towards a template... | 08:11 |
puck | Agreed | 08:12 |
fkr | aye | 08:13 |
tobberydberg | Not really sure if there exists a template somewhere to look at ... was trying to find that... | 08:13 |
puck | I can't even think of where one would be. | 08:16 |
puck | Within OpenStack, that is. | 08:16 |
tobberydberg | I mean, we can look at specs from other teams etc | 08:16 |
tobberydberg | https://specs.openstack.org/openstack/cinder-specs/specs/2023.1/extend-volume-completion-action.html | 08:16 |
tobberydberg | for example | 08:17 |
fkr | seems legit to follow that (from quickly eyeballing it) | 08:17 |
puck | Yeah, fair enough. | 08:18 |
tobberydberg | Have been scrolling through a bunch of specs from various teams... This is a more "top level spec" that will (hopefully) result in specs in various teams... | 08:19 |
fkr | I wanted to suggest ADR style in the beginning | 08:20 |
fkr | (however I lack in-depth knowledge how this is usually done within openstack) | 08:20 |
fkr | however, ADR style would offer a 'high-level spec' from which in-depth specs for the teams can be derived | 08:21 |
gtema | ADRs are too "new" for OpenStack to start adopting | 08:21 |
gtema | but I would agree it is a pretty good usecase | 08:21 |
tobberydberg | ok. I guess dependencies are needed in there as well | 08:24 |
tobberydberg | To be successful we have dependencies towards nova, glance, sdk at least, right? | 08:25 |
tobberydberg | Implementation specifics seem pretty far off for us to go into here ... | 08:26 |
gtema | dependency on SDK is totally different to nova/glance, but yes | 08:27 |
gtema | in the same way, ansible-collections-openstack would also be one | 08:27 |
puck | yup, object storage (although it should be agnostic between swift and radosgw) | 08:28 |
tobberydberg | Yea, indeed | 08:28 |
tobberydberg | So I guess the next step is drafting some text here as well. I will try to find some time to give it a stab before the next meeting. | 08:32 |
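To make the "standard set of properties" discussion more concrete: below is a minimal openstacksdk sketch that stamps a few commonly used glance metadata properties (os_distro, os_version, architecture) onto an image and reads them back. The property list the SIG eventually agrees on is not spelled out in this log, so the names and values are illustrative only; "my-cloud" and "Ubuntu 22.04" are placeholder identifiers.

```python
# Illustrative sketch only: the SIG's agreed property list is not in this log.
# Assumes a clouds.yaml entry named "my-cloud" and an existing image "Ubuntu 22.04".
import openstack

conn = openstack.connect(cloud="my-cloud")

image = conn.image.find_image("Ubuntu 22.04")

# Stamp a few commonly used glance metadata properties on the image.
conn.image.update_image(
    image,
    os_distro="ubuntu",
    os_version="22.04",
    architecture="x86_64",
)

# Read them back, e.g. to check that a cloud follows the (future) standard.
image = conn.image.get_image(image)
for key in ("os_distro", "os_version", "architecture"):
    print(key, "=", getattr(image, key, None))
```

In practice a cloud operator (or a tool acting on their behalf) would apply whatever list the spec ends up defining, so that users and SDKs can discover images the same way on every cloud.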
puck | Cool | 08:34 |
tobberydberg | Should we spend a few minutes on the rest of the topics on the agenda? | 08:35 |
gtema | +1 | 08:36 |
fkr | +1 | 08:37 |
tobberydberg | #topic 2. A number of distros publish images directly to the big cloud providers, can we facilitate this for OpenStack public clouds? (puck) | 08:37 |
tobberydberg | I'll hand the floor over to you here, puck :-) | 08:38 |
gtema | I do not think this will/can ever happen. At least on our side we prepare all images to include supplementary HW drivers and do some other "security" related changes | 08:38 |
gtema | therefore those bare images are not really working properly in our cloud (only on a few basic flavors) | 08:38 |
puck | Interesting, we just publish the vendor images. I'd like us to customise some, but it hasn't happened yet. | 08:39 |
puck | Especially Ubuntu: since we aren't paying them the license fees, we can't modify their images. | 08:39 |
gtema | this is not our case, and we are also obligated (towards our customers) to do additional security hardening | 08:40 |
tobberydberg | But you all allow users to "bring your own image", right? | 08:40 |
gtema | yes | 08:40 |
puck | Yes, we allow customers to bring their own image. | 08:40 |
puck | It is a bit annoying to see the distros uploading their images for the big three and officially publishing them. I was just wondering if there is anything we can do to help get the smaller players recognised. | 08:41 |
tobberydberg | So, I like the idea of having a central location for "openstack ready images" ... but I agree that it will be hard to get all public clouds to actually use them | 08:41 |
puck | Even finding those images for some distros is hard! | 08:41 |
tobberydberg | exactly that, puck, I agree | 08:41 |
gtema | I also can't imagine e.g. Fedora pushing their builds to 100 other OpenStack based clouds | 08:41 |
tobberydberg | Not sure how to address the issue though | 08:42 |
gtema | also from security pov giving somebody from outside write permissions for public images is definitely not going to work on our side | 08:42 |
puck | Unfortunately I have no idea, which is why I wanted to table it. ;) Perhaps a central location within OpenStack that points to where distro images are available from? | 08:43 |
puck | Public cloud providers could indicate which images they make available and whether they're vanilla, or modified? | 08:43 |
tobberydberg | Well, we could potentially have one central location "for OpenStack" | 08:43 |
gtema | I can imagine building some sort of portal for the OS based clouds from where they can do something like: "import latest Fedora/Ubuntu image into my cloud" | 08:44 |
gtema | but the biggest question for me - why | 08:44 |
gtema | what should this actually address | 08:44 |
fkr | how is the security aspect handled with the "big clouds"? I mean, I suspect that they will analyze the images that are pushed by the distros before releasing them to their customers | 08:44 |
puck | I don't think that happens for the Debian images. | 08:44 |
fkr | interesting. the notion feels different between "pulling the image from the distro and offering it to customers" and "having the distro directly push it", even though there is not really a difference ;) | 08:45 |
frickler | for the SCS project we have https://github.com/osism/openstack-image-manager which tries to keep track of the various upstream sources | 08:46 |
fkr | good point frickler | 08:46 |
gtema | cool, I meant exactly something like that | 08:46 |
puck | Subtle difference: we smoke test the images before we make them public. (Spin up an instance, make sure we can ssh in and ping out.) That process is automated. | 08:46 |
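A rough sketch of the kind of automated smoke test puck describes (boot an instance from the image, wait for it to become active, then check basic reachability). The flavor, network and key pair names plus the clouds.yaml entry are placeholders, the provider's actual automation is not shown in this log, and the ssh step is reduced to a ping for brevity.

```python
# Hedged sketch of an image smoke test: boot, wait for ACTIVE, ping the floating IP.
# All resource names and the "my-cloud" clouds.yaml entry are placeholders.
import subprocess

import openstack

conn = openstack.connect(cloud="my-cloud")

image = conn.image.find_image("Ubuntu 22.04")
flavor = conn.compute.find_flavor("c1.c1r1")
network = conn.network.find_network("private-net")

server = conn.compute.create_server(
    name="image-smoke-test",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="smoke-test-key",
)
server = conn.compute.wait_for_server(server)  # raises if it never goes ACTIVE

fip = conn.create_floating_ip(server=server, wait=True)

# The "ssh in and ping out" check, reduced here to a single ping of the instance.
ok = subprocess.run(["ping", "-c", "3", fip.floating_ip_address]).returncode == 0
print("smoke test", "passed" if ok else "FAILED")

# Clean up the throwaway resources.
conn.compute.delete_server(server)
conn.delete_floating_ip(fip.id)
```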
frickler | we could try to move that into a more general upstream location | 08:46 |
tobberydberg | That is what I meant as well :-) | 08:47 |
tobberydberg | That would make a lot of sense imo frickler | 08:47 |
frickler | so would this fit as a repo owned by this sig? or do you see a different entity? | 08:48 |
frickler | (pending discussion within the scs community of course) | 08:49 |
tobberydberg | That is a good question...I don't see that it doesn't fit, but maybe there is an even better suited location? | 08:50 |
tobberydberg | Some tests etc can be run against multiple clouds for each image ... potentially each cloud is able to sign up for verification of an image in their cloud... | 08:52 |
frickler | maybe we could then get vendors to contribute by updating information about their images in that repo | 08:52 |
fkr | frickler: +1 | 08:52 |
frickler | the verification could maybe be combined with what gtema is building for SDK/OSC | 08:52 |
tobberydberg | "Image Ubuntu XX.XX is proven to work fine on these clouds" kind of thing | 08:53 |
frickler | in terms of access to clouds being needed | 08:53 |
fkr | frickler: i can take the discussion into team iaas @ scs for 'upstreaming' the image manager, since we have the discussion on long-term maintenance anyways | 08:53 |
frickler | but I'm not sure if the sdk project would be a good home for this. otoh maybe why not | 08:53 |
gtema | yes, this sounds feasible. Some sort of sdk/osc driven verification for that is possible | 08:54 |
gtema | most likely not the SDK itself, but something like a new redesigned "certification" platform | 08:54 |
tobberydberg | Yea ... like the one for "external tempest testing" | 08:55 |
frickler | I was just talking in terms of openstack governance where the openstack-image-manager might be homed | 08:55 |
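To make gtema's "import the latest Fedora/Ubuntu image into my cloud" idea concrete, here is a minimal openstacksdk sketch using glance's web-download import, roughly the operation a tool like osism/openstack-image-manager automates (adding checksum verification, versioning and property management on top). The URL, image name and clouds.yaml entry are placeholders, and it assumes a deployment where the web-download import method is enabled and where creating an image record without uploading data is permitted.

```python
# Minimal sketch of pulling a distro image into a cloud via glance web-download.
# URL, names and the "my-cloud" clouds.yaml entry are placeholders;
# openstack-image-manager layers checksums, versioning and properties on top of this.
import openstack

# Example source URL only; use the distro's official image mirror of your choice.
IMAGE_URL = "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2"

conn = openstack.connect(cloud="my-cloud")

# Create the (empty, queued) image record, then let glance fetch the bits itself.
# Assumes an SDK version where create_image() without data just creates the record.
image = conn.image.create_image(
    name="Debian 12 (genericcloud)",
    disk_format="qcow2",
    container_format="bare",
    visibility="private",
    allow_duplicates=True,  # skip the SDK's duplicate-name check for this sketch
)
conn.image.import_image(image, method="web-download", uri=IMAGE_URL)

image = conn.image.wait_for_status(image, status="active", wait=1800)
print("imported", image.name, image.id)
```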
tobberydberg | So, this feels like something we can continue to talk about and I think a Forum session around this might be suitable as well | 08:56 |
gtema | +1 | 08:57 |
tobberydberg | #link https://etherpad.opendev.org/p/publiccloud-sig-vancouver-2023-forum-sessions | 08:57 |
tobberydberg | If you feel like it, puck, please put it in there as a suggestion | 08:58 |
puck | Okay, sure | 08:58 |
* puck considers attending the possibly contentious topic of "do tested clouds need to be OpenInfra financial members". :) | 08:59 | |
tobberydberg | We are running out of time here ... we have one item more on the agenda before other matters :-) | 08:59 |
tobberydberg | Shall we push that one until next meeting? | 08:59 |
gtema | yeah | 09:00 |
puck | ack | 09:00 |
tobberydberg | I guess that question has multiple answers | 09:00 |
tobberydberg | Running the tests is one thing; being presented as "certified" most probably does, yeah | 09:01 |
puck | Yup, but we can park that for next time. | 09:02 |
puck | And in fact, we're out of time. | 09:02 |
tobberydberg | yea we are | 09:02 |
tobberydberg | One quick last thing ... should we try to have a session during the PTG? | 09:02 |
tobberydberg | Could it be worth starting to present our ideas around standard properties, certifications etc there? | 09:04 |
puck | Seems sensible. | 09:04 |
gtema | yes, why not | 09:04 |
tobberydberg | I'll make sure to sign up for that then! | 09:05 |
tobberydberg | I know I'm doing some travelling around those dates, but not the full week | 09:05 |
tobberydberg | Thanks a lot for today folks! Talk to you soon! | 09:05 |
puck | Cheers, hope you all have a good day! | 09:06 |
tobberydberg | #endmeeting | 09:07 |
opendevmeet | Meeting ended Wed Feb 15 09:07:49 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 09:07 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.html | 09:07 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.txt | 09:07 |
opendevmeet | Log: https://meetings.opendev.org/meetings/publiccloud_sig/2023/publiccloud_sig.2023-02-15-08.03.log.html | 09:07 |
gtema | thks, have a nice day | 09:08 |
andrewbogott_ | since upgrading keystone to Zed I'm seeing quite a few "MySQL server has gone away" log messages from sqlalchemy. Only from keystone, though, not in the other services. Anyone else seeing that? | 14:11 |
*** Guest3941 is now known as diablo_rojo | 15:03 | |
felixhuettner[m] | is the connection_recycle_time in openstack lower than the idle timeout of mysql (or a proxy in between)? Otherwise this could cause that message | 17:09 |
andrewbogott_ | felixhuettner[m]: (much later) my keystone connection_recycle_time is 300, my haproxy server timeout is 90m, my mysql wait_timeout is 3600 | 23:37 |
andrewbogott_ | I keep looking for other sneaky timeouts that are interposing, but those values seem to me like they should work! I tried setting everything to an across-the-board 3600s and got many many more 'has gone away' messages. | 23:38 |
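For anyone hitting the same "MySQL server has gone away" errors: felixhuettner's rule of thumb is that the SQLAlchemy connection pool must recycle idle connections before MySQL, or any proxy in between, silently drops them. The standalone sketch below encodes that ordering with the values andrewbogott reports (300s recycle, 90m haproxy, 3600s wait_timeout); note that with these values the ordering already holds, which is why the thread ends unresolved. This is not keystone's actual code and the DSN is a placeholder; in keystone itself pool_recycle comes from oslo.db's [database] connection_recycle_time option.

```python
# Standalone illustration, not keystone code: oslo.db's connection_recycle_time
# ends up as SQLAlchemy's pool_recycle, which must be shorter than every idle
# timeout between the service and MySQL (server wait_timeout, any haproxy timeout).
from sqlalchemy import create_engine, text

CONNECTION_RECYCLE_TIME = 300      # keystone.conf [database] connection_recycle_time
HAPROXY_SERVER_TIMEOUT = 90 * 60   # 90m, if haproxy sits between keystone and MySQL
MYSQL_WAIT_TIMEOUT = 3600          # MySQL server-side idle timeout

assert CONNECTION_RECYCLE_TIME < min(HAPROXY_SERVER_TIMEOUT, MYSQL_WAIT_TIMEOUT), (
    "connections would be dropped upstream before the pool recycles them, "
    "producing 'MySQL server has gone away'"
)

engine = create_engine(
    "mysql+pymysql://keystone:secret@db.example.org/keystone",  # placeholder DSN
    pool_recycle=CONNECTION_RECYCLE_TIME,  # discard idle connections before upstream does
    pool_pre_ping=True,  # optional: probe a pooled connection before handing it out
)

with engine.connect() as db:
    db.execute(text("SELECT 1"))
```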