*** slaweq has quit IRC | 00:04 | |
*** jamesmcarthur has joined #openstack-tc | 00:14 | |
*** jamesmcarthur has quit IRC | 00:19 | |
*** jamesmcarthur has joined #openstack-tc | 00:24 | |
*** jamesmcarthur has quit IRC | 00:29 | |
*** tetsuro has joined #openstack-tc | 01:33 | |
*** david-lyle is now known as dklyle | 02:05 | |
*** jamesmcarthur has joined #openstack-tc | 02:09 | |
*** jamesmcarthur has quit IRC | 02:28 | |
*** jamesmcarthur has joined #openstack-tc | 02:31 | |
*** jamesmcarthur has quit IRC | 02:42 | |
*** tetsuro has quit IRC | 04:02 | |
*** tetsuro has joined #openstack-tc | 04:12 | |
*** tetsuro has quit IRC | 05:24 | |
*** tetsuro has joined #openstack-tc | 05:25 | |
*** evrardjp has quit IRC | 05:34 | |
*** evrardjp has joined #openstack-tc | 05:34 | |
*** tetsuro has quit IRC | 05:37 | |
*** lpetrut has joined #openstack-tc | 07:03 | |
*** witek has joined #openstack-tc | 07:56 | |
*** e0ne has joined #openstack-tc | 07:57 | |
*** e0ne has quit IRC | 08:00 | |
*** tosky has joined #openstack-tc | 08:27 | |
*** e0ne has joined #openstack-tc | 08:42 | |
*** e0ne has quit IRC | 08:44 | |
*** rpittau|afk is now known as rpittau | 08:55 | |
ttx | ohai | 09:00 |
mnaser | o/ | 09:04 |
mnaser | i've been having crazy ideas lately and wondering if many of our *aaS projects can benefit in some way or another by being integrated directly on top of k8s | 09:05 |
mnaser | we seem to be pretty good at delivering *aaS -- k8s is pretty good at scheduling lightweight controllers | 09:05 |
mnaser | so something like manila (or even trove) can benefit from that whole existing ecosystem and all they provide is a nicely integrated API to deliver these blocks of infrastructure | 09:06 |
*** slaweq has joined #openstack-tc | 09:07 | |
ttx | I think that was the thinking behind standalone Cinder | 09:09 |
ttx | and Kuryr | 09:10 |
ttx | or are you talking about something else? | 09:10 |
mnaser | ttx: yeah, something similar to that, but making services both functional as standalone but also leveraging the k8s system | 09:14 |
mnaser | for example: instead of octavia creating service vms inside nova, it can simply schedule containers inside a k8s cluster that's properly configured (with cinder/kuryr/neutron to be able to provide a 'native' experience) | 09:15 |
mnaser | and that way amphoras are not these heavy images but they're just docker images that can be pulled anytime/anywhere | 09:15 |
ttx | Could be containers that either run in a pod or a VM, and support both "native K8s" and "native openstack" | 09:18 |
mnaser | right, i think that might simplify the deployment experience so much more too (and the maintenance story too) | 09:18 |
mnaser | it's really difficult right now to manage those service VM services, they've historically been one of the hardest to deploy/manage | 09:19 |
ricolin | o/ | 09:22 |
mnaser | a wild idea spinning off of this is much of the openstack services perhaps being k8s operators which provide both a native k8s api and native openstack api | 09:25 |
mnaser | and given we have the possibility of keystone auth in k8s, you could potentially view your resources via 2 different apis | 09:25 |
ttx | that... might be a bit too wild | 09:29 |
mnaser | i need to come up with really wild things so that the less wild ones seem more sane | 09:30 |
mnaser | =P | 09:30 |
ttx | haha | 09:33 |
ttx | I suggest we rewrite nova in serverless where the function would just spin up a vm. | 09:34 |
mnaser | now you're making mine seem reasonable | 09:34 |
mnaser | thank you | 09:34 |
ttx | I call it serverlesserver | 09:34 |
evrardjp | reading scrollback | 09:34 |
mnaser | i wonder how hard it would be to poc this for something that's relatively simple like 'cache as a service' for something like memcache | 09:34 |
evrardjp | mnaser: yeah that's been what we've discussed with mugsie and zaneb | 09:35 |
ttx | Also we should write it in Rust because Go is so yesterday | 09:35 |
evrardjp | haha | 09:35 |
evrardjp | not sure licensing is appropriate ;) | 09:35 |
mnaser | i mean i'd be up for it to be in go but not sure how many other people would be excited about that :) | 09:36 |
mnaser | having static builds and no dependency mess is really, really nice. | 09:36 |
evrardjp | but yeah, I think the most successful services going forward will be the ones that not only work inside the whole traditional openstack, but also live outside it (standalone), like ironic, cinder, or manila. But hey I am no oracle :p | 09:36 |
evrardjp | mnaser: there is still a dependency mess | 09:36 |
evrardjp | it's just hidden somewhere else | 09:37 |
ttx | and a supply chain mess | 09:37 |
evrardjp | about that I was analysing some things :) | 09:37 |
evrardjp | should we have an openstack OS? :D | 09:37 |
mnaser | i dunno, i've enjoyed it and haven't had too many bad experiences with go | 09:37 |
ttx | on topic: https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/ | 09:40 |
evrardjp | I should probably give a link to our suse thing doing the same :p | 09:41 |
mnaser | njeat | 09:41 |
mnaser | neat | 09:41 |
evrardjp | I obviously won't | 09:41 |
mnaser | ttx: the idea is to leverage all of that. so for example, add kuryr to the mix and you can easily have a 'memcache as a service' | 09:42 |
mnaser | and i hate to say it but getting a k8s cluster up is a hell of a lot easier to do than openstack, so i dont know if people will be that freaked out by it. | 09:43 |
evrardjp | not sure I understand that last sentence. | 09:43 |
mnaser | if our *aaS projects can be simplified by having access to a k8s cluster | 09:44 |
mnaser | i think that's a relatively trivial bridge to jump to get a lot of value | 09:44 |
evrardjp | oh you mean exposing the openstack services inside k8s? | 09:44 |
evrardjp | that's the scope of openstack cloud provider integration ... | 09:45 |
mnaser | let me try again | 09:45 |
mnaser | "if you want to use openstack's memcache-as-a-service, you will need a kubernetes cluster that's integrated with your existing openstack cloud using the cloud provider / kuryr / etc" | 09:45 |
mnaser | because it will be deploying pods on top of that | 09:46 |
mnaser | i'm thinking at the end of the day, openstack's users are those who want *aaS. k8s users are those who are building things on top of it | 09:46 |
mnaser | historically, we've called ourselves a 'cloud operating system' so we might as well deliver on that in the best way possible | 09:46 |
evrardjp | wait, so you mean rewiring APIs to basically trigger operators? | 09:47 |
mnaser | possibly, something along those lines | 09:47 |
evrardjp | let me rephrase this | 09:47 |
mnaser | hence my comment earlier of: some of our services can become k8s operators that rely on a cluster that is deeply integrated with the core openstack services such as neutron/cinder | 09:47 |
evrardjp | well, I guess I don't need to if you said so | 09:47 |
mnaser | and yet deliver an openstack api experience (and by second nature, a k8s native one too) | 09:48 |
evrardjp | mnaser: I think what matters here is that we don't have too many *aaS -> some for which it's more appropriate to be in k8s should be moved there | 09:48 |
mnaser | because the openstack api experience (a normal rest api) will end up just creating those resources which are then managed | 09:48 |
evrardjp | is that what you're saying? | 09:48 |
ttx | So.. a brand of k8s operators that would depend on Kubernetes being run on openstack cloud provider ? | 09:48 |
mnaser | i think we have some *aaS that benefit a lot from being operators imho | 09:49 |
mnaser | yes, but more than openstack cloud provider, also things like kuryr | 09:49 |
mnaser | so you can access them for example from your VMs | 09:49 |
mnaser | or expose them via floating ips | 09:49 |
ttx | "operators for openstack-backed kubernetes" ? | 09:49 |
mnaser | correct | 09:49 |
ttx | god I hate that operators name | 09:49 |
mnaser | and there's a ton of projects that could fit that criteria | 09:49 |
evrardjp | ttx: what mnaser is looking for is to say (for example) "I provide you db aaS, and you can reach it from anywhere, vms or k8s cluster. And behind the scenes, it's on k8s". | 09:50 |
ttx | Even koperators would have been better, and I also hate the knaming kmania | 09:50 |
evrardjp | mnaser: did I get that right on your user perspective? | 09:50 |
evrardjp | ttx: haha | 09:50 |
mnaser | evrardjp: somewhat correct, id drop the 'or k8s cluster' | 09:50 |
evrardjp | but this is kool | 09:50 |
mnaser | because we can expose a k8s api to add/remove/manage | 09:50 |
mnaser | but not to run any workloads | 09:51 |
mnaser | that's another story | 09:51 |
evrardjp | yup | 09:51 |
mnaser | so you use rbac rules to limit only for those 'operator managed' resources | 09:51 |
evrardjp | I didn't mean that the thing had to run inside :) | 09:51 |
evrardjp | just reachable/provisionable | 09:51 |
evrardjp | but yeah | 09:51 |
mnaser | as an operator (the human, not the k8s concept), this makes life so much easier | 09:52 |
mnaser | and it opens up a whole bunch of useful things where someone might say "openstack is neat but i like their memcache operator so i'll just deploy that on my k8s and contribute fixes that i need" | 09:53 |
evrardjp | this/these new *aaS API can probably be scaffolded | 09:53 |
mnaser | it feels a little weird, building tools for k8s, but in a way it opens up value beyond openstack and potentially other contributors | 09:53 |
mnaser | obviously at the heart of this remains our core stuff: nova/neutron/cinder/etc | 09:54 |
evrardjp | I think this would be nice to write in an email on the ML and distill that in the ideas repo! :) | 09:55 |
evrardjp | see if that gets traction | 09:55 |
mnaser | i think i'll wait until NA wakes up and people see the insanity that i'm bringing up and hear what they have to say =P | 09:56 |
*** slaweq has quit IRC | 09:56 | |
*** e0ne has joined #openstack-tc | 09:57 | |
evrardjp | I have serious doubts on the willingness to maintain an api for a translation layer | 09:58 |
evrardjp | well it's not only translation but... | 09:58 |
mnaser | you dont need an api for translation layer | 09:58 |
evrardjp | ttx: about your questions on ideas, indeed the intention was that anyone could merge | 09:58 |
mnaser | you just need an api that creates k8s cr's | 09:58 |
evrardjp | that's what I meant | 09:59 |
mnaser | probably relatively trivial given our current openstack tooling imho | 09:59 |
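A minimal sketch of the thin wrapper API mnaser describes, assuming a recent client-go: the OpenStack-style REST handler would do little more than map a Keystone project to a namespace and create a custom resource for the operator to reconcile. The group `aaas.openstack.example.org`, the `Memcached` kind, the `subnetID` field and the example IDs are all hypothetical names invented for illustration.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the usual way; a real wrapper API would use
	// in-cluster config and its own service account.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Hypothetical convention: one namespace per Keystone project.
	projectID := "d4e2f0aa90c24b3c8f0d6a1b2c3d4e5f" // would come from the token
	namespace := "project-" + projectID

	// Hypothetical CRD owned by the "memcache as a service" operator.
	gvr := schema.GroupVersionResource{
		Group:    "aaas.openstack.example.org",
		Version:  "v1alpha1",
		Resource: "memcacheds",
	}
	cr := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "aaas.openstack.example.org/v1alpha1",
		"kind":       "Memcached",
		"metadata":   map[string]interface{}{"name": "cache-1", "namespace": namespace},
		"spec": map[string]interface{}{
			"replicas": int64(3),
			// The "openstack-y" bit: which Neutron subnet kuryr should
			// plug the pods into.
			"subnetID": "11111111-2222-3333-4444-555555555555",
		},
	}}

	// The OpenStack-style REST API just creates the CR; the operator does the rest.
	created, err := dyn.Resource(gvr).Namespace(namespace).Create(context.TODO(), cr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.GetNamespace()+"/"+created.GetName())
}
```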
ttx | evrardjp: ok, so it's more of a community wall ? And core reviewers are just checking that it's an idea and not spam but would otherwise not question the content? | 10:00 |
evrardjp | probably not hard indeed | 10:00 |
evrardjp | ttx: correct | 10:00 |
evrardjp | it's written in the documentation of the repo | 10:00 |
ttx | evrardjp: OK, so it's a bit like how we (I) handle openstack-planet or irc-meetings | 10:00 |
ttx | Will +1 all, sorry for having held it | 10:01 |
evrardjp | the review should also reflect what's being discussed on the ml | 10:01 |
evrardjp | so a simple check | 10:01 |
evrardjp | but yeah, not questioning the content | 10:01 |
evrardjp | if you could tox -e docs on the ideas repo, and xdg-open doc/build/html/index.html you should see :) | 10:01 |
evrardjp | or wait until this whole thing gets merged :) | 10:02 |
*** rpittau is now known as rpittau|bbl | 11:21 | |
*** fungi has quit IRC | 11:24 | |
*** fungi has joined #openstack-tc | 11:27 | |
*** openstackgerrit has joined #openstack-tc | 11:47 | |
openstackgerrit | Jean-Philippe Evrard proposed openstack/governance master: Introduce 2020 upstream investment opportunities. https://review.opendev.org/707120 | 11:47 |
*** ricolin has quit IRC | 12:10 | |
*** jaosorior has joined #openstack-tc | 12:15 | |
*** jaosorior has quit IRC | 12:32 | |
*** adriant has quit IRC | 13:02 | |
*** adriant has joined #openstack-tc | 13:03 | |
*** jamesmcarthur has joined #openstack-tc | 13:15 | |
openstackgerrit | Jean-Philippe Evrard proposed openstack/governance master: Add QA upstream contribution opportunity https://review.opendev.org/706637 | 13:16 |
*** rpittau|bbl is now known as rpittau | 13:22 | |
*** jamesmcarthur has quit IRC | 13:24 | |
*** jamesmcarthur has joined #openstack-tc | 13:25 | |
gmann | o/ | 13:27 |
*** jamesmcarthur has quit IRC | 13:28 | |
*** jamesmcarthur has joined #openstack-tc | 13:28 | |
*** jamesmcarthur has quit IRC | 13:35 | |
openstackgerrit | Nate Johnston proposed openstack/governance master: Add QA upstream contribution opportunity https://review.opendev.org/706637 | 13:35 |
njohnston | o/ | 13:35 |
smcginnis | There are a lot of folks running k8s, but also a very large set not running it with no interest in it. | 13:39 |
tbarron | I think mnaser evrardjp and zaneb among others know my interest in running manila "share servers" per-tenant in dynamically spawned containers | 13:39 |
tbarron | so k8s orchestrated | 13:39 |
smcginnis | So that means integration with k8s could be very useful to some end users, but it would greatly complicate projects because they would need to handle both. So it's an AND and not an OR. Or not even an OR if I followed the conversation. | 13:40 |
tbarron | i'm actually less interested in "manila standalone" if that's like what jgriffith did for "cinder standalone", i.e. no-auth w/o keystone | 13:40 |
mnaser | the idea is that these projects entirely change in architecture, in that they're built on top of k8s overall | 13:40 |
tbarron | I want keystone multi-tenancy in the API, a major deficit in k8s for those who think k8s plus metal3 plus kubevirt would be sufficient by itself for an IAAS | 13:41 |
mnaser | and the apis dont do much more than create a k8s resource that then creates it, its a huge shift | 13:41 |
mnaser | obviously i imagine that we'd do some mapping where project => ns or something along those lines | 13:41 |
mnaser | but i think the value we get out of it means ripping out a *crapton* of code (like the generic driver in manila, amphora vm management/deleting/etc in octavia, etc) | 13:42 |
tbarron | so the direction of https://github.com/tombarron/pocketmanilakube is to run manila and keystone under k8s | 13:42 |
mnaser | and realistically most of this will probably be abstracted in the deployment tools | 13:42 |
mnaser | no one wants to run rabbitmq but they all do anyways :P | 13:42 |
tbarron | zaneb has suggested getting the manila services to use grpc instead of rabbitmq | 13:42 |
mnaser | ++ | 13:43 |
*** jamesmcarthur has joined #openstack-tc | 13:43 | |
tbarron | really the only dependencies are rpc/messaging across services, database, and keystone | 13:43 |
tbarron | mnaser: that means no generic driver of course with its cinder, neutron, nova dependencies | 13:44 |
mnaser | right but it would abstract it to say.. csi stuff, which cinder has an impl and these services would run on an "openstack-ified k8s" which has those integrated and plumbed a few layers down | 13:44 |
tbarron | mnaser: yeah, i'm not saying throw out cinder, just run cinder-csi and manila-csi and cinder and manila without cross-dependencies | 13:48 |
mnaser | and hell, if someone wants to use something else under it, why not | 13:49 |
*** ijolliffe has joined #openstack-tc | 13:52 | |
zaneb | mnaser: I wish to subscribe to your newsletter :) | 14:03 |
fungi | and here i thought i was going to wake up to a proposal for rewriting swift in swift | 14:05 |
ttx | swiftly | 14:05 |
mnaser | SoS would be accurate | 14:05 |
fungi | if only all of our services could have programming languages named after them | 14:06 |
fungi | http://navgen.com/nova/ | 14:06 |
mnaser | https://github.com/the-neutron-foundation/neutron-language | 14:07 |
fungi | see, apparently they do | 14:08 |
smcginnis | Cinder has a book, does that count? https://www.goodreads.com/book/show/36381037-cinder | 14:08 |
fungi | why did i never think of this before? | 14:08 |
smcginnis | "the tale of a teenage cyborg who must fight for Earth's survival against villains from outer space | 14:08 |
smcginnis | Beat that manila. | 14:08 |
zaneb | rofl | 14:08 |
fungi | well, there's also a c++ visualization lib named cinder, but i'm still digging deeper | 14:08 |
fungi | granted, that book is hard to top | 14:08 |
zaneb | mnaser, evrardjp: re dependency management in golang: without go modules it's at least as/more painful than in Python. (I have high hopes for go modules, but not tried it out yet.) It does at least put it entirely on the developer and not make it the deployer's problem, so that's not nothing | 14:08 |
mnaser | gomodules have been nothing but a joy personally | 14:09 |
mnaser | you dont have to wake up to find every deployment ever that didnt pin virtualenv broke one day :) | 14:09 |
fungi | at least there are workarounds | 14:10 |
tbarron | smcginnis: well you didn't have the smarts to pick an obscure skunk as a mascot, or to name yourself after an office folder only used by previous generations | 14:10 |
zaneb | mnaser: it does indeed look better once you can migrate to it | 14:11 |
tbarron | mnaser: go mod is useless automation that ruins requirements' teams' job security | 14:11 |
mnaser | :p | 14:11 |
tbarron | seriously for manila-csi and nfs-csi (just switched from go dep) they seem to be a nice simplification | 14:13 |
zaneb | tbarron: from go dep or just dep? | 14:15 |
tbarron | zaneb: go dep | 14:16 |
zaneb | you just skipped right over dep? :) | 14:16 |
zaneb | let's hope they got it right on the third try | 14:17 |
zaneb | because Python certainly didn't | 14:17 |
* fungi still uses manila file folders to organize his personal papers | 14:19 | |
* fungi gets the feeling he's being called "old" | 14:19 | |
mnaser | ok so it seems like kuryr recently added support for the ability to connect multiple vifs.. but more importantly to a subnet defined at run-time | 14:33 |
mnaser | i keep going back to 'memcache as a service' cause it seems like the initial trivial one | 14:33 |
mnaser | it means that you can probably give it a subnet id and then rely on kuryr to plumb the memcache containers correctly (and make sure they're configured via env properly) | 14:34 |
mnaser | looks like we literally have all the right tools available today :) | 14:35 |
tbarron | zaneb: i may not know what i'm talking about, manila-csi was built using 'go mod'; csi-driver-nfs just switched from using 'Gopkg.lock' and 'Gopkg.toml' to go modules | 14:37 |
zaneb | yeah, I think 'Gopkg.lock' and 'Gopkg.toml' are dep. I never actually used go dep so I don't know how that worked | 14:39 |
mnaser | i am fairly sure go modules make use of go.mod files | 14:40 |
zaneb | yes, go mod is the latest (and best) and uses go.mod files instead | 14:41 |
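For reference, the difference being discussed is essentially Gopkg.toml/Gopkg.lock (dep) versus a single go.mod file. A minimal, hypothetical one for a memcached operator might look like this; the module path and versions are illustrative only, and `go mod tidy` maintains the require block automatically.

```
// go.mod -- module path and versions are made up for illustration.
module example.org/memcached-operator

go 1.20

require (
	k8s.io/apimachinery v0.27.2
	k8s.io/client-go v0.27.2
	sigs.k8s.io/controller-runtime v0.15.0
)
```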
*** witek has quit IRC | 14:41 | |
tbarron | agree, my ignorance was exactly what we were moving *from* with the Gopkg.lock and Gopkg.toml | 14:43 |
zaneb | you know, it's entirely possible that I made up the idea that the thing before dep was called go dep | 14:47 |
zaneb | anyhoo | 14:47 |
zaneb | mnaser: I agree with large parts of your idea. I think the future of cloud (especially public cloud) is in managed services (like DBaaS), and that by far the easiest way to run those is going to be on k8s. we've been trying for nearly a decade to get Nova to support the stuff that e.g. Trove needed to do that; Nova just never cared about that use case and now the world has moved on | 14:55 |
zaneb | mnaser: I also think the future of cloud is on bare metal, and I think that's where we disagree. but I think a lot of the things we're both thinking about are common to both. I'm very interested in finding ways to do managed services that can work in either context | 14:57 |
mnaser | zaneb: good to hear | 15:00 |
mnaser | i'd like to personally explore the possibility of what this might look like, so i'm going to: a) write a simple standalone memcache operator that works with vanilla k8s, b) build a k8s cluster that integrates with neutron using kuryr, c) iterate on (a) to support plugging into specific subnets as a user, d) create a very simple wrapper api that creates resources on behalf of a user with a restful api (to | 15:02 |
mnaser | where operator picks things up afterwards) | 15:02 |
mnaser | because i think it makes sense to see what that might really look like in practice and maybe share that poc for others to reproduce/try out | 15:02 |
zaneb | mnaser: you could save time by just using an existing operator | 15:03 |
zaneb | I don't see one for memcache | 15:04 |
zaneb | but you could do e.g. etcd or something | 15:04 |
mnaser | zaneb: that's actually a reasonable thing, the main reason behind having access to an existing one is finding a way to add our openstack-y bits on top of it (i.e. a subnetId field for example) | 15:04 |
mnaser | but i think i can get away with that by using a very lightweight operator that just does all this on top of it | 15:05 |
zaneb | yeah, that's what I would do | 15:05 |
mnaser | so it creates etcd resources | 15:05 |
* mnaser scrolls through https://operatorhub.io/ | 15:05 | |
zaneb | because eventually you'll want to extend this to running any kind of service that the cloud operator can install an operator for | 15:05 |
* zaneb also hates the name 'operator' | 15:05 | |
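A sketch of what such a lightweight wrapper CRD could look like in Go (controller-runtime/kubebuilder style). The group, kind and fields here are hypothetical; the point is just that the OpenStack-specific knobs, like the subnet to plug into, live in the spec while an existing operator's resource sits underneath.

```go
// Package v1alpha1 sketches a hypothetical "Memcached" CRD that wraps an
// existing operator's resource and adds OpenStack-specific fields.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type MemcachedSpec struct {
	// Replicas is the number of memcached pods to run.
	Replicas int32 `json:"replicas"`
	// SubnetID is the Neutron subnet kuryr should plug the pods into.
	SubnetID string `json:"subnetID"`
	// FloatingIP optionally exposes the service on an external network.
	FloatingIP bool `json:"floatingIP,omitempty"`
}

type MemcachedStatus struct {
	// Endpoints are the addresses tenants point their clients at.
	Endpoints []string `json:"endpoints,omitempty"`
}

// +kubebuilder:object:root=true
// Memcached is both the "native k8s API" (kubectl get memcacheds) and the
// object the OpenStack-style REST API would create on the user's behalf.
type Memcached struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              MemcachedSpec   `json:"spec,omitempty"`
	Status            MemcachedStatus `json:"status,omitempty"`
}
```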
mnaser | i wonder if i can get away with actually just embedding other operators into one big openstack operator | 15:06 |
mnaser | (i've done this in the past and it worked pretty nicely) | 15:07 |
mnaser | so you just run one pre-integrated operator rather than 1 "openstack" operator and 1 "per service" | 15:07 |
mnaser | (all you do is register the controllers when you start up and you're good to go) | 15:07 |
zaneb | for running OpenStack services themselves, you mean? | 15:07 |
mnaser | no for example a: openstack-operator which embeds memcache-operator and postgres-operator etc | 15:08 |
mnaser | so you dont need to run openstack-operator + all of these other ones, it's just one big operator that manages all of the CRs | 15:08 |
zaneb | ah, right | 15:08 |
zaneb | potentially | 15:09 |
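A rough illustration of the "register all the controllers at startup" idea with controller-runtime (recent versions assumed; signatures differ slightly in older releases). The reconciler is a stub that watches ConfigMaps only so the snippet stays self-contained; a real child operator would watch its own CRD.

```go
package main

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

// MemcachedReconciler is a stand-in for an embedded "child" operator.
type MemcachedReconciler struct {
	client.Client
}

func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Reconcile logic would go here: read the CR, create/patch Deployments,
	// plumb networking, update status, etc.
	return ctrl.Result{}, nil
}

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}). // placeholder; would be the Memcached CRD type
		Complete(r)
}

func main() {
	ctrl.SetLogger(zap.New())

	// One manager hosts every controller, so a single "openstack-operator"
	// binary can embed memcache-operator, postgres-operator, and so on.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		os.Exit(1)
	}
	if err := (&MemcachedReconciler{Client: mgr.GetClient()}).SetupWithManager(mgr); err != nil {
		os.Exit(1)
	}
	// ...register the other embedded reconcilers the same way...

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```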
evrardjp | Some colleagues proposed this kind of meta operator, and I am not too fond of it. I don't see it as more valuable to deal with the lifecycle of the child operators than dealing with them directly in some kind of tooling | 15:11 |
evrardjp | else your 'openstack-operator' becomes very complex. | 15:12 |
evrardjp | But I would be glad to be proven wrong. | 15:12 |
mnaser | going back to the deployer experience | 15:12 |
mnaser | i think they'd rather have one easy-to-deploy operator with sane defaults than our historical million choices | 15:13 |
evrardjp | and in it, just a boolean to flip if you want *insert child operator*? | 15:13 |
mnaser | or it always runs and if you want to disable it you use rbac | 15:14 |
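The RBAC angle is the standard Kubernetes one: tenants only get verbs on the CR kinds the deployer wants to expose, even though the operator itself is always running. A hedged sketch, with a made-up API group and namespace, that just builds and prints such a Role:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical API group for the embedded *aaS CRDs. Granting (or not
	// granting) these rules in a tenant namespace is how a deployer would
	// "enable" or "disable" a given service.
	role := rbacv1.Role{
		TypeMeta:   metav1.TypeMeta{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "Role"},
		ObjectMeta: metav1.ObjectMeta{Name: "tenant-aaas", Namespace: "project-d4e2f0aa"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"aaas.openstack.example.org"},
			Resources: []string{"memcacheds"}, // add "postgresqls" etc. as services are enabled
			Verbs:     []string{"get", "list", "watch", "create", "update", "delete"},
		}},
	}
	out, err := yaml.Marshal(role)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```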
mnaser | i wonder how much time it could take me to get something like trove's api "re-implemented" with postgres-operator for example | 15:16 |
evrardjp | I am not sure the user experience will be better than k8s service catalog. | 15:19 |
evrardjp | or maybe I misunderstand you | 15:20 |
evrardjp | I guess it depends on what you want the users to see. | 15:20 |
mnaser | users would see openstack only resources | 15:20 |
mnaser | they wouldnt see the stuff plumbed beneath it | 15:20 |
johnsom | mnaser: FYI, there are still some major limitations in the k8s scheduler for infrastructure type applications like load balancing. | 15:20 |
mnaser | johnsom: oh? curious as to hear what are some of them for example? | 15:21 |
johnsom | We have been following it closely, but there is a lot of pushback for what we need out of the scheduler | 15:21 |
mnaser | johnsom: any examples that stick out? | 15:22 |
*** jamesmcarthur has quit IRC | 15:23 | |
johnsom | FYI, I am about 30 minutes out from being in the "office" and my first cup of coffee so bear with me. | 15:24 |
evrardjp | :) | 15:24 |
johnsom | The most important one is preventing the scheduler from "evacuating" infrastructure services from pods. I.e. currently there is no way to stop k8s from killing an instance to "move" it to another pod for workload balancing. | 15:25 |
mnaser | ahaha | 15:25 |
johnsom | This means you can't stop it from interrupting network flows, like downloading a large file, mid-flow. | 15:26 |
*** jamesmcarthur has joined #openstack-tc | 15:26 | |
mnaser | hmm | 15:26 |
johnsom | They have added the new priority concept, but it still doesn't preempt this. | 15:26 |
johnsom | The other issue we see, but there are (granted ugly) ways around is the layers of packet translation required with the k8s pods. | 15:27 |
mnaser | ah yes, well with my (concept) you'd have your pods plugged directly into the neutron port | 15:27 |
mnaser | so it is not much different than a vm | 15:28 |
johnsom | Right. That is the best answer, but not straight forward with k8s | 15:28 |
mnaser | i feel like there should be a solution to the problem of "a long running task" esp in terms of networking | 15:29 |
johnsom | FYI, we have had discussions about this at a few PTGs, so there are also notes on our etherpads. | 15:29 |
mnaser | our amphoras today could technically run into the same issue in other ways | 15:29 |
fungi | and presumably with network devices like load balancers you need more than one logical port (or some nasty hairpinning/dsr solutions) | 15:29 |
mnaser | fungi: or the load balancer lives on the same network | 15:29 |
mnaser | and floating ip attach to get it to an external net | 15:29 |
fungi | "the same network" makes no sense in a routing context. there are numerous networks | 15:30 |
johnsom | Not so much, the service VM approach is very stable as we have some level of control. Plus, we can implement HA technologies that allow very fast failover that you can't with "traditional" k8s | 15:30 |
zaneb | johnsom: such as? I curious what HA stuff is not possible with k8s | 15:30 |
zaneb | *I'm | 15:31 |
johnsom | fungi Correct, there are issues with "one-armed" or hairpin from a performance perspective. | 15:31 |
johnsom | This is an issue with the "trunking" trick like kuryr | 15:31 |
fungi | oh, or do kubernetes-land load balancers nat the source as well as the destination? i guess that's another way around it | 15:33 |
fungi | still ugly and prone to overflowing pat tables | 15:33 |
johnsom | zaneb I didn't say "not possible" (I'm not much of a believer), but requires some non-k8s patterns. Specifically I am talking about using VRRP with something like keepalived to migrate IP addresses quickly (we are currently at around a second, but moving sub-second in a release or two). | 15:33 |
fungi | did the vrrp patents eventually expire? | 15:33 |
johnsom | fungi Yes, there is NAT involved with the current, limited solutions. But right now, most LBs are sitting outside the k8s scheduler. | 15:34 |
johnsom | fungi That is long since gone and CARP is dead. lol | 15:34 |
fungi | er, right, it was cisco asserting the hsrp patents covered vrrp implementations | 15:34 |
*** witek has joined #openstack-tc | 15:35 | |
zaneb | johnsom: are you familiar with MetalLB? (not as a replacement for L7 load balancing, obviously, but for managing failover of traffic into the L7 load balancer it seems like a viable approach) | 15:35 |
johnsom | Our goal in an HA solution is to be able to failover inside the TCP retry window. | 15:35 |
zaneb | fungi: "kubernetes-land load balancers" consist mostly of 'let the cloud do that for us' | 15:35 |
johnsom | zaneb I am! It also lives outside the k8s scheduler | 15:35 |
zaneb | true | 15:36 |
fungi | zaneb: sure, i'm wondering more what "the cloud" is doing in those scenarios (especially when it's not openstack) | 15:36 |
johnsom | It is pretty good actually | 15:36 |
evrardjp | yup | 15:36 |
johnsom | I assume you all have seen the new cloud management service added to k8s? | 15:36 |
johnsom | Basically, yes, they are acknowledging that there are useful cloud infrastructure services that live outside the k8s scheduler. | 15:37 |
fungi | yep! the official answer to how to manage your infrastructure for kubernetes seems to be to have it install openstack | 15:37 |
zaneb | *gasp* | 15:37 |
johnsom | https://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager | 15:37 |
mnaser | i'm sure there might be roadblocks or specific services where it might make sense to maybe have 1 or the other | 15:37 |
mnaser | and i think k8s as a platform will probably slowly develop to maybe adjust to those patterns | 15:38 |
mnaser | esp with how quickly things are moving | 15:38 |
mnaser | it's sad to see sometimes but kinda amazing to see the huge amounts of fast progress that goes there | 15:38 |
fungi | i wouldn't count on them moving quickly indefinitely. their activity curve basically looks like ours but shifted a few years | 15:38 |
*** jamesmcarthur has quit IRC | 15:38 | |
johnsom | Yeah, really, we have been constantly searching for a workable container story. K8s is getting there, but still has some issues that make it less than ideal for an infrastructure service. The health monitoring and pod spin up and down are the major issues. | 15:39 |
fungi | and lf/cncf have also basically stopped touting yearly activity metrics and switched to cumulative (conveniently that always increases) | 15:39 |
johnsom | It does, very similar growing pains as well | 15:39 |
zaneb | fungi: it's a question of what's going to eat their lunch the way they ate our lunch. I don't see anything on the horizon | 15:39 |
fungi | well, they didn't eat our lunch. they siphoned off the distracting vendors trying to cram things down our throats so they could advertise their name on something trendy | 15:40 |
mnaser | ^^ | 15:40 |
mnaser | those two arrows were for zaneb :p | 15:40 |
mnaser | as someone who's interacted a lot with k8s, my "i need to get an app up" needs are much better served with plenty of tooling around it | 15:41 |
fungi | last i saw their activity is already heading for a trough, and i don't know what specifically is "eating their lunch" if that's what you like to blame hype trends on... ai/ml? edge? | 15:41 |
zaneb | fungi: they did, and they also siphoned off all of the application developers who were still hoping that we were going to build a thing that they could build applications against | 15:41 |
fungi | ahh, yeah, luckily i don't think that should have been in scope for openstack anyway | 15:42 |
zaneb | well, you have plenty of company | 15:42 |
mnaser | well turns out people want *aaS and openstack promised that and we struggle to deliver | 15:42 |
zaneb | bingo | 15:43 |
mnaser | and k8s doesnt care about delivering that so that's a really good space for us to leverage their tooling | 15:43 |
mnaser | so we can deliver *aaS better and get those users those infrastructure blocks they need | 15:43 |
johnsom | There have been a lot of people burned by their k8s deployments as well..... | 15:43 |
fungi | i personally think the curve is more of a sociological pattern, where technologies become less interesting as they're entrenched and as they start to work well and people are no longer talking about them as much because they can take them for granted | 15:43 |
johnsom | It is hard to shift your brain and re-architect to use that environment properly | 15:44 |
zaneb | fungi: there were 12k people at KubeCon and they were talking plenty | 15:44 |
zaneb | and mostly users, not just people trying to sell stuff to each other | 15:45 |
johnsom | Does anyone know if there are OpenStack people starting work on the new cloud provider framework for k8s? (new as of 1.16) | 15:45 |
fungi | that's awesome. i hope they can turn those users into code contributors | 15:45 |
fungi | (some significant fraction of them i mean) | 15:46 |
*** jamesmcarthur has joined #openstack-tc | 15:46 | |
fungi | kubernetes represents a large benefit to our community, so i'd like to see them continue to be able to maintain it | 15:46 |
zaneb | johnsom: what part is new in 1.16? | 15:47 |
johnsom | zaneb https://github.com/kubernetes/enhancements/blob/master/keps/sig-cloud-provider/20190422-cloud-controller-manager-migration.md | 15:48 |
johnsom | "Support a migration process for large scale and highly available Kubernetes clusters using the in-tree cloud providers (via kube-controller-manager and kubelet) to their out-of-tree equivalents (via cloud-controller-manager)." | 15:49 |
fungi | johnsom: diablo_rojo may know, she has stepped into that group after hogepodge moved on | 15:49 |
johnsom | Cool, thanks. | 15:50 |
zaneb | johnsom: isn't this it? https://github.com/kubernetes/cloud-provider-openstack | 15:50 |
zaneb | afaik ours was split out a long time back already | 15:50 |
johnsom | zaneb If I am following the k8s side correctly, that is the "old" code/way | 15:50 |
johnsom | Ah, no, it looks like you are right, this already is using the cloud manager | 15:51 |
johnsom | Thanks, there is my answer | 15:51 |
fungi | and yes, that's the effort hogepodge pushed in sig-cloud-provider | 15:53 |
zaneb | https://github.com/kubernetes/enhancements/issues/669#issuecomment-537277111 | 15:53 |
fungi | basically removing all the in-tree providers and separating them into their own repositories | 15:53 |
fungi | openstack's got done quickly, i think it was the first to complete extraction, some of the others took much longer | 15:55 |
*** KeithMnemonic has quit IRC | 15:56 | |
*** lpetrut has quit IRC | 16:01 | |
*** cgoncalves has joined #openstack-tc | 16:05 | |
diablo_rojo | fungi, johnsom catching up... | 16:06 |
fungi | diablo_rojo: i think we got the answer, it was about the (now relatively old and complete) effort for extracting the cloud providers from the kubernetes source tree | 16:06 |
johnsom | diablo_rojo: I think I got my answer already. | 16:07 |
diablo_rojo | Ah yes. I think that is still ongoing. The plan was for most providers to be extracted from tree and have a beta by the end of the next release? | 16:07 |
fungi | but as for the openstack provider it was done with extraction over a year ago if memory serves | 16:08 |
diablo_rojo | Ehhh it might be extracted, but I dont think it has feature parity with what was in tree. | 16:09 |
zaneb | the OpenStack one is marked as 'implemented' as of last week https://github.com/kubernetes/enhancements/pull/1492/files#diff-c5fdd15e8c9a42196844891e9726a417R17 | 16:09 |
diablo_rojo | zaneb, thank you! | 16:09 |
zaneb | diablo_rojo: I didn't do it ;) | 16:10 |
diablo_rojo | zaneb, but you found the link which is helpful :) | 16:10 |
smcginnis | Since there are more tc-members here now, just raising again that this week was reserved for any discussion or needs the TC has before starting the official W naming poll. | 16:18 |
smcginnis | https://wiki.openstack.org/wiki/Release_Naming/W_Proposals | 16:18 |
ttx | smcginnis: Could you start a ML thread on them? | 16:18 |
ttx | Lots of good names | 16:19 |
ttx | Potential cultural sensitivity on Wuhan (positive or negative I don't know) | 16:20 |
ttx | Also Wodewick if it's indeed perceived as a joke on speech-impaired people | 16:21 |
ttx | Pretty sure Wakanda will be struck out on trademark review, but that's not really our role to anticipate that | 16:21 |
ttx | (probably same for Wookiee -- the film industry is well known to protect their names fiercely) | 16:22 |
ttx | I guess Whiskey/Whisky could also be seen as culturally problematic (promote alcohol consumption) | 16:27 |
fungi | stein and icehouse were technically street names | 16:29 |
smcginnis | ttx: ML post sent. | 16:29 |
ttx | OK will repeat that early feedback there | 16:29 |
gmann | fungi: zaneb diablo_rojo johnsom : ++ a few points on the in-tree and separate openstack-provider in k8s. continuous testing was fixed and enabled now, and it is well documented, with this - https://github.com/kubernetes/kubernetes/pull/85637#issuecomment-584308490 | 16:35 |
gmann | it is also marked as beta with this, and people are asked to migrate from the in-tree one - https://github.com/kubernetes/kubernetes/pull/85637 | 16:35 |
gmann | not sure how easy/tough the migration is. | 16:36 |
*** e0ne has quit IRC | 16:40 | |
*** e0ne has joined #openstack-tc | 16:41 | |
*** slaweq has joined #openstack-tc | 16:47 | |
*** e0ne has quit IRC | 16:51 | |
*** rpittau is now known as rpittau|afk | 17:00 | |
*** gmann is now known as gmann_afk | 17:20 | |
*** evrardjp has quit IRC | 17:34 | |
*** evrardjp has joined #openstack-tc | 17:34 | |
*** jamesmcarthur has quit IRC | 17:40 | |
*** e0ne has joined #openstack-tc | 17:42 | |
*** e0ne has quit IRC | 17:58 | |
*** gmann_afk is now known as gmann | 18:49 | |
*** e0ne has joined #openstack-tc | 19:15 | |
*** e0ne has quit IRC | 19:24 | |
*** e0ne has joined #openstack-tc | 21:19 | |
*** jamesmcarthur has joined #openstack-tc | 22:06 | |
*** ijolliffe has quit IRC | 22:14 | |
*** slaweq has quit IRC | 22:25 | |
*** jamesmcarthur has quit IRC | 22:41 | |
*** witek has quit IRC | 22:47 | |
*** slaweq has joined #openstack-tc | 22:49 | |
*** iurygregory has quit IRC | 22:52 | |
*** slaweq has quit IRC | 22:55 | |
*** e0ne has quit IRC | 22:58 | |
*** jamesmcarthur has joined #openstack-tc | 23:02 | |
*** jamesmcarthur_ has joined #openstack-tc | 23:05 | |
*** jamesmcarthur has quit IRC | 23:05 | |
*** slaweq has joined #openstack-tc | 23:11 | |
*** slaweq has quit IRC | 23:16 | |
*** jamesmcarthur_ has quit IRC | 23:24 | |
*** jamesmcarthur has joined #openstack-tc | 23:25 | |
*** jamesmcarthur has quit IRC | 23:25 | |
*** jamesmcarthur has joined #openstack-tc | 23:26 | |
*** jamesmcarthur has quit IRC | 23:32 |