tobberydberg | Hello folks, time for Public Cloud SIG meeting o/ | 08:01 |
---|---|---|
tobberydberg | Anyone around? | 08:01 |
gtema | yey, thks a lot for reminder yesterday ;-) | 08:01 |
tobberydberg | That was kind of the only action point from last meeting, so would have been bad if I didn't remember to do that ;-) | 08:02 |
gtema | lol | 08:02 |
gtema | do we have agenda for today? | 08:03 |
tobberydberg | I hope to see a few more people joining in, but let's start the meeting so we get some logs for it | 08:03 |
gtema | cool | 08:03 |
tobberydberg | #startmeeting publiccloud_sig | 08:04 |
opendevmeet | Meeting started Wed Aug 31 08:04:01 2022 UTC and is due to finish in 60 minutes. The chair is tobberydberg. Information about MeetBot at http://wiki.debian.org/MeetBot. | 08:04 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 08:04 |
opendevmeet | The meeting name has been set to 'publiccloud_sig' | 08:04 |
tobberydberg | Agenda and meeting notes here: https://etherpad.opendev.org/p/publiccloud-sig-meeting | 08:04 |
tobberydberg | #topic 1. Last meeting log/notes review | 08:05 |
gtema | I was thinking a bit further on the point of using "alias" metadata or a property for having "standardized" names | 08:06 |
gtema | and I still think this is the only thing we could ever reach among different clouds | 08:06 |
tobberydberg | I agree, that is probably the way to get somewhere | 08:07 |
Puck__ | Or at least have a mapping from the standardised to the local names. | 08:07 |
Puck__ | Aliases would be nice. | 08:08 |
gtema | well, this kind of mapping would need to be stored somewhere | 08:08 |
gtema | if we ever reach ".well-known" - maybe | 08:08 |
gtema | but that would mean all tools need to know that | 08:08 |
tobberydberg | I had another thought though, to have a "non enforcing" naming convention as a guideline | 08:09 |
Puck__ | Start with having it in docs. | 08:09 |
gtema | having some "guideline" to agree on is of course a must. Otherwise we will never proceed :) | 08:10 |
Puck__ | os_distro is an example of recommended names. Dunno how well adopted they are. | 08:10 |
tobberydberg | So, the basis for being able to have some kind of mapping is a "non enforcing naming convention", right? | 08:10 |
Puck__ | Yeah, perhaps a survey of what is currently in use would be good. Does refstack | 08:11 |
gtema | I think what is important is to agree on attributes which are important and must be in some form discoverable in every cloud | 08:11 |
Puck__ | Refstack collect that? (Sorry, using a phone) | 08:11 |
gtema | os_distro - surely yes, but smth like hw_disk_bus is maybe not so important | 08:11 |
tobberydberg | +1 | 08:12 |
gtema | Puck__ - refstack doesn't care at all about anything, that is why we think a public cloud alternative needs to be developed | 08:12 |
gtema | it is sad that so few people are around here, as if this is once again not needed by anybody | 08:13 |
Puck__ | Ack. Was hoping there was already some central information. | 08:13 |
tobberydberg | yes exactly | 08:13 |
tobberydberg | And for me it would be an addition to refstack, and a check that can run from a central location towards all public clouds in the program. | 08:13 |
Puck__ | That'd be good. I don't know when we last ran refstack | 08:14 |
gtema | Puck__ - haven't you recently received a reminder to upload fresh results? | 08:14 |
tobberydberg | Exactly, and it can easily be faked by setting up an isolated env to run the checks, which makes the results not worth much | 08:15 |
Puck__ | Erm, dunno. I'd have to go and dig. I remember it being discussed a while ago... | 08:15 |
gtema | tobberydberg: I propose to start declaring properties of things we think should be standardized and open a vote to figure out what the community thinks is mandatory and what is not | 08:15 |
NilsMagnus[m] | I still wonder why using the filters of metadata shouldn't be sufficient? That in turn would require some conventions for the metadata itself, for example the operating system. | 08:15 |
gtema | Nils Magnus: because first there must be agreement on what comes into metadata at all | 08:15 |
gtema | and with which name | 08:16 |
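The metadata-filter idea above can be sketched entirely client-side. The property names (`os_distro`, `os_version`) follow the Glance image-properties docs; the helper function and the image data are illustrative assumptions, not an agreed standard:

```python
# Toy sketch: client-side filtering on agreed ("standardized") image
# property names. The images here are made-up dicts in the shape the
# Glance API returns; filter_images is a hypothetical helper.

def filter_images(images, **wanted):
    """Return images whose properties match all wanted key/value pairs."""
    return [
        img for img in images
        if all(img.get("properties", {}).get(k) == v for k, v in wanted.items())
    ]

images = [
    {"name": "ubuntu-22.04", "properties": {"os_distro": "ubuntu", "os_version": "22.04"}},
    {"name": "centos-9", "properties": {"os_distro": "centos", "os_version": "9"}},
]

matches = filter_images(images, os_distro="ubuntu")
```

This only works across clouds if everyone agrees on which keys exist and what values they carry, which is exactly the agreement gtema points out is missing.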
Puck__ | The benefits would definitely be easier sharing of terraform etc. | 08:16 |
tobberydberg | That is probably a good starting point. For the central run of the full-powered program we need some collaboration with the Interop SIG later on | 08:16 |
Puck__ | So, perhaps we need to work out what metadata needs to be standardised? | 08:16 |
gtema | that is exactly what my proposal is about | 08:16 |
NilsMagnus[m] | Maybe we could relate to LSB (Linux standard base)? | 08:17 |
NilsMagnus[m] | For naming conventions | 08:17 |
Puck__ | Refstack (or whatever) should also be run from a central location against the public facing API for each cloud. | 08:17 |
tobberydberg | Puck__ +1 | 08:17 |
gtema | agree with Nils, LSB is a good start. Not 100% sure how well e.g. windows images will fit into it, but definitely worth checking | 08:18 |
tobberydberg | But as gtema says, we will need to define what to check and which attributes etc before that | 08:18 |
Puck__ | Yup. Should we have people put their ideas on the etherpad to be discussed next time? (List them, put +1 against the ones you agree with ) | 08:19 |
gtema | I started exactly this last time | 08:20 |
Puck__ | Oh, awesome! | 08:20 |
gtema | and tried to get some of the props from SovereignCloudStack proposals | 08:20 |
gtema | (https://etherpad.opendev.org/p/publiccloud-sig-kickoff) | 08:20 |
Puck__ | Okay, I see the empty list now! :) | 08:20 |
gtema | under "resources to be standardized" | 08:21 |
tobberydberg | #link https://etherpad.opendev.org/p/publiccloud-sig-kickoff | 08:21 |
tobberydberg | You can find it there | 08:21 |
tobberydberg | haha | 08:21 |
tobberydberg | So, as starting point we will go with flavors and images | 08:21 |
gtema | I doubt there is any interest in having something more than those 2 | 08:22 |
Puck__ | os_distro, sw_release for metadata | 08:22 |
Puck__ | Volume name | 08:22 |
Puck__ | Object storage policy | 08:23 |
Puck__ | Volume type, not name (i.e nvme and IOPS) | 08:23 |
gtema | why should volume name matter? | 08:23 |
gtema | for swift policies I also do not really think it is worth any effort - this is too variable | 08:24 |
Puck__ | (The text input box on the webclient on my phone is crazy small) | 08:24 |
Puck__ | Happy to be voted down, just throwing out some ideas | 08:25 |
gtema | looking at lsb and ansible I see it returns: codename=Core, description="CentOS Linux...", id=CentOS, major_release=7, release=7.5.xxxx | 08:25 |
tobberydberg | I agree that there could probably be more. But I wonder if it will be easier to solve the flavor and image situation first, as the 2 most obvious ones first before identifying more? | 08:25 |
Puck__ | Ack, so re | 08:25 |
Puck__ | Sure | 08:25 |
tobberydberg | os_distro mapping to "id", os_version mapping to "release" or "major_release" | 08:26 |
Puck__ | Recommended os_distro names are already in the official docs. | 08:27 |
gtema | which docs are you refering? | 08:27 |
Puck__ | Erm, hang on | 08:28 |
tobberydberg | https://docs.openstack.org/glance/latest/admin/useful-image-properties.html | 08:29 |
tobberydberg | ?? | 08:29 |
Puck__ | Yes, that is it | 08:30 |
Puck__ | Just found it. :) | 08:30 |
gtema | hmm, maybe then just list all of them and open voting on which should become part of the "standard" | 08:31 |
gtema | ? | 08:31 |
gtema | for me 80% of them are not really helpful | 08:31 |
tobberydberg | Yea, because naming will be impossible to align with | 08:31 |
gtema | not really, I can't imagine why e.g. hw_video_model would matter for me | 08:32 |
Puck__ | I reckon that a minimum set should be defined and others are optional. | 08:33 |
tobberydberg | No, a selected list of attributes is needed indeed | 08:33 |
Puck__ | We'd also like to see another property which defines the key software installed for app-specific images. | 08:34 |
gtema | tja, and this is where it gets funny - something you find important is not what others think is important | 08:35 |
Puck__ | We need that for some licensing requirements. Currently we're overloading os_distro for that, which we hate. | 08:35 |
gtema | but you can add any other property as you want. Here we need to find a common base of what really matters for the end user | 08:36 |
tobberydberg | could add other properties yes for licensed stuff indeed, that would be really useful | 08:37 |
tobberydberg | We have a shitty solution for that as well | 08:37 |
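The LSB-to-Glance mapping sketched above could look roughly like this; the pairing of fields is a proposal under discussion in this meeting, not an agreed standard, and the lowercasing is an assumption based on common `os_distro` values:

```python
# Proposed (not yet agreed) mapping from LSB release fields to Glance
# image property names, per the discussion above.
LSB_TO_GLANCE = {
    "id": "os_distro",              # e.g. "CentOS" -> os_distro=centos
    "major_release": "os_version",  # e.g. "7"      -> os_version=7
}

def lsb_to_properties(lsb_fields):
    """Translate an LSB-style release dict into Glance-style image properties."""
    return {
        glance_key: str(lsb_fields[lsb_key]).lower()
        for lsb_key, glance_key in LSB_TO_GLANCE.items()
        if lsb_key in lsb_fields
    }

props = lsb_to_properties({"id": "CentOS", "major_release": "7", "codename": "Core"})
```

Fields without an agreed target (like `codename`) are simply dropped here, which mirrors the point that only a selected subset of attributes needs standardizing.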
gtema | ok, time is ticking. I suggest everyone interested should add missing properties (and vote) to the defined properties for flavors and images | 08:41 |
gtema | and maybe we can "touch" the point of doing useful tests as part of the public cloud certification, instead of what is currently used for certification? | 08:41 |
tobberydberg | Yea - maybe move them over to the new etherpad first | 08:41 |
Puck__ | Sorry, my battery is almost flat. I'll probably drop off soon. | 08:41 |
gtema | from SDK/cli pov I really struggle with different behavior of clouds on relatively simple cases | 08:42 |
tobberydberg | Yea, that is for a fact an issue | 08:42 |
gtema | I actually was also asking to have the possibility to run sdk/cli verification on various clouds, which should help improve user experience | 08:43 |
Puck__ | So, a basic set of tests that should pass. | 08:43 |
Puck__ | Agreed. We run various tempest tests against ours on a regular basis. | 08:43 |
tobberydberg | (moved over the notes to the new etherpad - voting could take place there) | 08:44 |
gtema | we could build up set of useful tests with sdk and use this as some part of "certification" or at least initial information what is going wrong | 08:44 |
Puck__ | Ideally clouds could flag what services they run (discovered via keystone catalogue?) And then test against those. | 08:44 |
gtema | Puck__ I am very unhappy with tempest as such, since it mostly verifies the cloud from a developer perspective. It barely touches the "user" side | 08:45 |
tobberydberg | Like that and think that is a really good start for this | 08:45 |
tobberydberg | Puck__ That is also a really good idea | 08:45 |
gtema | sdk is already taking care of service discovery and runs only tests for available services | 08:45 |
Puck__ | We made more tempest tests to test usage. | 08:45 |
gtema | on the other side we test our cloud with ansible, where we built "user scenarios" and run them in the loop | 08:46 |
Puck__ | Sorry, I need head off. | 08:46 |
gtema | advantage here is that "anybody" can understand what is going on and can easily extend | 08:46 |
gtema | this does not require tempest knowledge, which is itself not very user friendly | 08:47 |
tobberydberg | I thought about "alias" as we discussed earlier as well .... worth getting that custom attribute in there to align with a "naming convention"? | 08:47 |
gtema | on the other side together with sovereign cloud stack we now integrate this approach into their "certification" | 08:47 |
tobberydberg | That is something I will have to look into, that sounds like a really good thing | 08:48 |
gtema | an additional advantage out of the box - we also test the performance of the cloud with that, for every single API call being made | 08:48 |
tobberydberg | That sounds really good as well. Even deeper, like disk, network and cpu performance? | 08:50 |
gtema | yes | 08:50 |
tobberydberg | interesting | 08:50 |
gtema | its quite extensible | 08:50 |
tobberydberg | that might be interesting to get in here in the future as well | 08:50 |
gtema | and for sure we also have the possibility to monitor down to "ns lookup performance" inside the cloud | 08:50 |
tobberydberg | So, time is almost out here.. Should I send an email about the list in etherpad and ask people to go in and vote before next meeting? | 08:53 |
tobberydberg | I mean, 2 of us here right now only... | 08:53 |
gtema | :) | 08:53 |
gtema | would be good I guess | 08:53 |
tobberydberg | Do you have any personal thoughts about the "alias" thing? | 08:53 |
gtema | not really. If we agree on specific metadata being present (and really verified) we can solve the problem this way | 08:54 |
gtema | and it is easier than having cryptic names inside of aliases, which anyway need to be parsed and processed | 08:54 |
Vladi[m] | so the naming convention for the common base of properties should be in some special form, for example starting with __, just to avoid potential conflicts with existing properties on different clouds? | 08:55 |
tobberydberg | Yea ... alias needs to be processed to give anything useful | 08:56 |
gtema | so for me an alias would contain all of the agreed properties anyway, and require complex parsing to get at things that should already be there and make "filtering" easier | 08:56 |
gtema | I much prefer standardizing properties and their names over aliases | 08:57 |
gtema | this fits natively into the current concept without changes | 08:57 |
tobberydberg | Yup, agree on that, properties will resolve that situation as long as all clients can make the filtering based on it | 08:57 |
tobberydberg | Vladi[m] I guess that is definitely something to have in mind | 08:58 |
gtema | and this is much easier compared to parsing names, and especially to extending such a name with "new" additions | 08:58 |
tobberydberg | +1 | 08:58 |
tobberydberg | Ok, time is up for this week. I will send out an email asking people to go in and vote, add suggestions as well if they want to. | 08:59 |
tobberydberg | Another short thing before we close this | 09:00 |
gtema | cool, thks tobberydberg | 09:00 |
tobberydberg | We missed the deadline for the PTG ... worth trying to get us in late? | 09:00 |
gtema | similar to the SDK issue - do you think a lot of people are going to join since it is anyway virtual (like exactly this meeting)? | 09:01 |
frickler | maybe somehow attach to the operators sessions? | 09:01 |
gtema | I would rather think of arranging an ad-hoc video meeting if desired | 09:01 |
tobberydberg | Yea, I agree | 09:02 |
frickler | getting some visibility on the PTG timetable might still be useful and attract some people | 09:02 |
gtema | hm, maybe | 09:02 |
tobberydberg | Good point. We can definitely check if we can tag along the operators sig if not our own... | 09:03 |
tobberydberg | Ok, I'll check. Then we really need to avoid collisions with a few other teams for it to be worth it... | 09:04 |
gtema | ok | 09:04 |
tobberydberg | Thanks for today and see you in 2 weeks :-) | 09:05 |
tobberydberg | #endmeeting | 09:05 |
opendevmeet | Meeting ended Wed Aug 31 09:05:20 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 09:05 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.html | 09:05 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.txt | 09:05 |
opendevmeet | Log: https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.log.html | 09:05 |
gtema | see ya | 09:05 |
ttx | #startmeeting large_scale_sig | 15:01 |
opendevmeet | Meeting started Wed Aug 31 15:01:00 2022 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:01 |
opendevmeet | The meeting name has been set to 'large_scale_sig' | 15:01 |
ttx | #topic Rollcall | 15:01 |
mdelavergne | Hi! | 15:01 |
felixhuettner[m] | Hi | 15:01 |
ihti[m] | Hi! | 15:01 |
ttx | Welcome back! | 15:01 |
ttx | Who else is here for the Large Scale SIG meeting? | 15:01 |
ttx | pinging amorin | 15:02 |
felixhuettner[m] | that looks like a small round | 15:02 |
ttx | let me see if Belmiro is anywhere close | 15:02 |
ttx | Hmm, looks like he is not | 15:03 |
ttx | That is indeed a short crew, but let's roll it through anyway or we will never get back to a rhythm :) | 15:04 |
ttx | ihti[m]: is it your first Large Scale SIG meeting? | 15:04 |
ihti[m] | Yes, I am a colleague of Felix :) | 15:04 |
ttx | Oh right! | 15:04 |
ttx | Well, welcome and thanks for joining | 15:05 |
ttx | Our agenda for today is at: | 15:05 |
ttx | #link https://etherpad.openstack.org/p/large-scale-sig-meeting | 15:05 |
ttx | #topic OpenInfra Live September 29 episode - Deep dive into Schwarz Gruppe | 15:05 |
ttx | First topic is the OpenInfra Live episode we'll be running on Sept 29 | 15:05 |
felixhuettner[m] | we already collected some fun stories to share there | 15:06 |
felixhuettner[m] | especially about the rabbit you all know and love :) | 15:06 |
ttx | That's great! Usually Belmiro starts a thread to discuss content as we get closer to the event | 15:06 |
ttx | I'll try to make sure he starts one | 15:06 |
felixhuettner[m] | ok, sounds great | 15:07 |
ttx | felixhuettner[m]: ihti[m] : have you viewed a former Deep Dive episode to see what to expect? It's a pretty loose format | 15:07 |
felixhuettner[m] | yep, two colleagues of ours did also join one in the past | 15:07 |
ttx | We try to have recurring hosts (mnaser, amorin and belmiro) | 15:07 |
* frickler sneaks in late | 15:07 | |
ttx | right, but the deep dive is a specific format, centered on one company in particular | 15:08 |
ttx | frickler: hi! | 15:08 |
ttx | We did one on OVHcloud and one on Yahoo so far | 15:08 |
ihti[m] | Yes, have seen a couple of the deep dives in the past so quite familiar with the format | 15:08 |
felixhuettner[m] | ok, then we'll definitely take a look | 15:08 |
ttx | just go to https://openinfra.dev/live/ and search for "Deep Dive" | 15:09 |
ihti[m] | Ah okay you just had 2 till now, so didn't miss any :) | 15:09 |
ttx | It's more like an open discussion between ops, only live on the Internet | 15:09 |
ttx | usually pretty popular episodes | 15:09 |
ttx | anyway, I planned to use that meeting to confirm the hosts but none of them are around | 15:10 |
ttx | so I'll follow up on the email thread we started | 15:10 |
felixhuettner[m] | ok, thanks | 15:10 |
ttx | #action ttx to confirm hosts and ask belmiro to start a content thread about Sept29 episode | 15:11 |
ttx | felixhuettner[m]: ihti[m]: do you have questions about this show, before we move on to another topic? | 15:11 |
felixhuettner[m] | not from me | 15:11 |
ihti[m] | Nope | 15:11 |
ttx | We usually join 30min in advance to have time to walk around the platform | 15:12 |
ttx | ok! | 15:12 |
ttx | #topic Status on docs website transition | 15:12 |
ttx | I pushed a few changes to clean up the generated docs website at https://docs.openstack.org/large-scale/ | 15:12 |
ttx | https://review.opendev.org/c/openstack/large-scale/+/854419 is the last one | 15:13 |
felixhuettner[m] | they look a lot nicer now | 15:13 |
frickler | oh, I missed that one, will review later | 15:13 |
ttx | Once that is approved and merged I will replace the old wiki pages with a redirect message | 15:13 |
ttx | pointing people to the new location | 15:13 |
frickler | +1 | 15:13 |
ttx | #action ttx to replace all Large Scale SIG wiki pages with redirects to the docs (once 854419 merges) | 15:14 |
ttx | Any question or comment on that topic? | 15:14 |
ttx | alright then | 15:15 |
ttx | #topic RabbitMQ questions | 15:15 |
felixhuettner[m] | thanks, i collected a few over the last 2 months | 15:15 |
ttx | I'm not sure with amorin and mnaser and Belmiro away we will have that many answers right now, but can't hurt to ask | 15:16 |
felixhuettner[m] | one thing is the `rabbit_transient_queues_ttl` setting which is recommended in https://docs.openstack.org/large-scale/other/rabbitmq.html#rabbit-transient-queues-ttl | 15:16 |
felixhuettner[m] | and for me that feels more like something we need to fix in oslo.messaging | 15:16 |
ttx | felixhuettner[m]: yeah that definitely feels like a workaround | 15:17 |
felixhuettner[m] | so what happens is that when an agent shuts down, the fanout queues it created are not deleted | 15:17 |
felixhuettner[m] | and they fill up until they are deleted | 15:17 |
felixhuettner[m] | the short look i had in the oslo.messaging code looked like it should actually remove them on shutdown | 15:17 |
felixhuettner[m] | but that does not seem to work reliably | 15:17 |
felixhuettner[m] | if there is no specific reason for that then i would open a bug for that, maybe we can then get rid of this recommendation | 15:18 |
ttx | yeah, that sounds like a good path to follow... It might be a tricky one to fix though | 15:19 |
ttx | but filing a bug sounds like a good start | 15:19 |
frickler | iiuc the recommendation also helps in case of unscheduled shutdown, like hardware failure | 15:19 |
frickler | but creating and possibly fixing a bug seems useful anyway | 15:19 |
felixhuettner[m] | thats a really valid point | 15:20 |
felixhuettner[m] | i did not think about that | 15:20 |
felixhuettner[m] | but then i'll go ahead and create a bug | 15:20 |
felixhuettner[m] | #action felix.huetter to create a bug regarding oslo.messaging not deleting fanout queues when neutron agents stop | 15:20 |
ttx | yeah not sure fixing the bug would make the recommendation invalid | 15:20 |
ttx | but that's a good open discussion to have | 15:20 |
felixhuettner[m] | aaah, can't even write my name :) | 15:21 |
ttx | we got it | 15:21 |
felixhuettner[m] | ok :) | 15:21 |
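For reference, the `rabbit_transient_queues_ttl` workaround discussed above is an oslo.messaging option set on the service side. A minimal sketch; the value shown is oslo.messaging's documented default, used here only as an example:

```ini
[oslo_messaging_rabbit]
# Expire abandoned transient queues (e.g. fanout and reply queues left
# behind by an agent that died uncleanly) instead of letting them fill
# up. 1800 seconds is the oslo.messaging default.
rabbit_transient_queues_ttl = 1800
```

As frickler notes, this also covers unscheduled shutdowns such as hardware failure, so fixing the clean-shutdown bug alone would not make the setting redundant.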
felixhuettner[m] | the other thing is the ha policies we recommend | 15:21 |
ttx | what about the other issue? | 15:21 |
felixhuettner[m] | at the moment only the normal incoming queues are made durable and HA | 15:21 |
felixhuettner[m] | however we do not do the same thing for reply queues | 15:21 |
felixhuettner[m] | but since they are also tied to the lifetime of the service using them i don't see why we should treat them differently | 15:22 |
felixhuettner[m] | (and also oslo.messaging treats them differently) | 15:22 |
ttx | that's a good question, would be good to have others opinion on it | 15:23 |
ttx | the "policy" might have been from the one contributor to that doc, and not that much shared | 15:23 |
felixhuettner[m] | i think it fits to what oslo.messaging does | 15:24 |
ttx | so reopening the case is interesting | 15:24 |
felixhuettner[m] | as long as it does not set the durable flag on these queues we can not make them ha anyway | 15:24 |
ttx | hah | 15:24 |
ttx | This one could be worth a ML thread then, if it's oslo.messaging behavior | 15:24 |
ttx | it's technically not a bug since it works as designed... just the design is questionable | 15:25 |
felixhuettner[m] | yep | 15:25 |
felixhuettner[m] | ok, then i'll send something to the ML | 15:25 |
ttx | yeah, so my recommendation would be to open a ML thread and see who gets out of the woodwork to defend the current behavior | 15:25 |
felixhuettner[m] | #action felix.huettner to raise a question on the mailinglist why reply queues are not created as durable | 15:25 |
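The asymmetry raised above shows up in commonly circulated classic-HA policy patterns, which typically exclude reply and fanout queues by regex. A toy check of such a pattern; the pattern itself is an example of the kind seen in operator guides, not a recommendation from this meeting:

```python
import re

# Example classic-mirrored-queue policy pattern: mirror every queue
# EXCEPT amq.*, *_fanout_* and reply_* -- i.e. exactly the reply/fanout
# queues whose exclusion is questioned in the discussion above.
HA_PATTERN = re.compile(r"^(?!(amq\.)|(.*_fanout_)|(reply_)).*")

def is_mirrored(queue_name):
    """True if the example HA policy would mirror this queue."""
    return HA_PATTERN.match(queue_name) is not None

mirrored = is_mirrored("compute.node1")      # normal incoming queue
excluded = is_mirrored("reply_abc123")       # reply queue, not mirrored
```

Since oslo.messaging does not create reply queues as durable, mirroring them would have no effect anyway, which is why this belongs on the mailing list rather than in a policy tweak.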
ttx | Alright, anything else on that topic? | 15:26 |
felixhuettner[m] | nothing from me | 15:26 |
ttx | #topic Next meetings | 15:27 |
ttx | Next meeting is September 14, but I'll be in Dublin for Open Source Summit EU so I won't be able to chair it | 15:27 |
ttx | Who is available to run the meeting? We definitely need one as there might be last minute details to discuss for the Deep dive on Sept 29 | 15:27 |
ttx | i could ask belmiro but if we have a volunteer present that might be a stronger bet | 15:28 |
felixhuettner[m] | i'll probably be there (unless something breaks :) ) | 15:28 |
frickler | I could do it, but I don't mind if you ask belmiro or amorin first | 15:28 |
ttx | OK how about I ask them (we need them on that one anyway) and if they can't I'll take one of your generous offers to help | 15:29 |
ttx | #info September 14 - IRC meeting (ttx to ask belmiro/amorin if they can chair, or felixhuettner/frickler if they can't) | 15:30 |
felixhuettner[m] | sounds good | 15:30 |
ttx | #action ttx to ask belmiro/amorin if they can chair next meeting | 15:31 |
ttx | #info September 29 - will be our OpenInfra Live | 15:31 |
ttx | #topic Open discussion | 15:31 |
ttx | Anything else, anyone? | 15:31 |
felixhuettner[m] | not from me | 15:31 |
mdelavergne | not from my part | 15:31 |
frickler | I have an ad for the publiccloud_sig | 15:31 |
ihti[m] | nothing from my side | 15:31 |
frickler | might be interesting for some people here, too | 15:32 |
ttx | frickler: promote away! | 15:32 |
ttx | definitely big overlap | 15:32 |
frickler | https://meetings.opendev.org/meetings/publiccloud_sig/2022/publiccloud_sig.2022-08-31-08.04.html is the meeting that happened earlier today | 15:32 |
frickler | happening every other wed at 08 UTC | 15:32 |
frickler | so EU friendly as well as NZ hopefully | 15:33 |
frickler | current hot topic is discussing how to provide standard cloud images | 15:33 |
ttx | #info Public Cloud SIG meetings every other Wednesday at 8UTC | 15:33 |
frickler | like tagged via metadata or standardized names | 15:34 |
RamonaBeermann[m] | is it in this channel? | 15:34 |
frickler | yes | 15:34 |
ttx | yes | 15:34 |
RamonaBeermann[m] | thx | 15:34 |
ttx | frickler: anything else? | 15:35 |
frickler | I think that's it from me | 15:35 |
ttx | Alright, then thanks everyone for attending | 15:35 |
ttx | #endmeeting | 15:35 |
opendevmeet | Meeting ended Wed Aug 31 15:35:40 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:35 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.html | 15:35 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.txt | 15:35 |
opendevmeet | Log: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-08-31-15.01.log.html | 15:35 |
mdelavergne | thanks! see you! | 15:35 |
felixhuettner[m] | thank you | 15:35 |
ihti[m] | Thanks! | 15:35 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!