*** whoami-rajat has quit IRC | 00:01 | |
*** ricolin has joined #openstack-tc | 00:55 | |
*** wxy-xiyuan has joined #openstack-tc | 01:11 | |
*** tdasilva has quit IRC | 01:20 | |
*** tdasilva has joined #openstack-tc | 01:21 | |
*** mriedem has quit IRC | 01:49 | |
*** whoami-rajat has joined #openstack-tc | 03:06 | |
*** Luzi has joined #openstack-tc | 05:03 | |
*** jaosorior has quit IRC | 06:03 | |
*** jaosorior has joined #openstack-tc | 06:20 | |
-openstackstatus- NOTICE: The git service on opendev.org is currently down. | 06:51 | |
*** ChanServ changes topic to "The git service on opendev.org is currently down." | 06:51 | |
*** jamesmcarthur has joined #openstack-tc | 07:04 | |
*** iurygregory has joined #openstack-tc | 07:15 | |
*** adriant has quit IRC | 07:17 | |
*** adriant has joined #openstack-tc | 07:18 | |
*** jaosorior has quit IRC | 07:57 | |
*** tosky has joined #openstack-tc | 08:30 | |
-openstackstatus- NOTICE: Services at opendev.org like our git server and at openstack.org are currently down, looks like an outage in one of our cloud providers. | 08:35 | |
*** ChanServ changes topic to "Services at opendev.org like our git server and at openstack.org are currently down, looks like an outage in one of our cloud providers." | 08:35 | |
*** ChanServ changes topic to "OpenStack Technical Committee office hours: Tuesdays at 09:00 UTC, Wednesdays at 01:00 UTC, and Thursdays at 15:00 UTC | https://governance.openstack.org/tc/ | channel logs http://eavesdrop.openstack.org/irclogs/%23openstack-tc/" | 08:42 | |
-openstackstatus- NOTICE: The problem in our cloud provider has been fixed, services should be working again | 08:42 | |
*** jamesmcarthur has quit IRC | 08:48 | |
*** lpetrut has joined #openstack-tc | 09:15 | |
*** lpetrut has quit IRC | 09:16 | |
*** lpetrut has joined #openstack-tc | 09:16 | |
*** e0ne has joined #openstack-tc | 09:32 | |
*** jaosorior has joined #openstack-tc | 10:47 | |
*** adriant has quit IRC | 11:07 | |
*** adriant has joined #openstack-tc | 11:08 | |
*** jaosorior has quit IRC | 11:08 | |
*** mriedem has joined #openstack-tc | 11:51 | |
*** sapd1_ has joined #openstack-tc | 11:59 | |
*** sapd1 has quit IRC | 11:59 | |
*** iurygregory has quit IRC | 12:11 | |
*** iurygregory has joined #openstack-tc | 12:11 | |
*** jeremyfreudberg has joined #openstack-tc | 13:17 | |
*** jaosorior has joined #openstack-tc | 13:23 | |
*** Luzi has quit IRC | 13:33 | |
*** ricolin has quit IRC | 13:36 | |
*** AlanClark has joined #openstack-tc | 13:41 | |
*** jaosorior has quit IRC | 13:43 | |
*** iurygregory has quit IRC | 13:59 | |
*** ijolliffe has joined #openstack-tc | 14:01 | |
*** iurygregory has joined #openstack-tc | 14:02 | |
*** AlanClark has quit IRC | 14:21 | |
*** ijolliffe has quit IRC | 14:43 | |
*** lbragstad has joined #openstack-tc | 14:51 | |
*** zaneb has joined #openstack-tc | 14:56 | |
fungi | it's thursday office hour yet again | 15:00 |
mnaser | \o/ | 15:00 |
*** ricolin_phone has joined #openstack-tc | 15:00 | |
evrardjp | o/ | 15:01 |
ricolin_phone | O/ | 15:02 |
*** zaneb has quit IRC | 15:02 | |
mnaser | so uh | 15:02 |
mnaser | i'm pretty concerned at the current state of magnum | 15:02 |
mnaser | i personally have struggled to get any code landed that fixes fundamental bugs that affect our deployment, and unfortunately had to run a fork with cherry-picks because it's taken so long | 15:03
lbragstad | o/ | 15:03 |
mnaser | there are 4 reviewers in magnum-core, 2 of which haven't reviewed things for months | 15:03
mnaser | and the 2 "active" cores, i've emailed them and asked to add a few potential new cores, but no action was taken | 15:03
*** zaneb has joined #openstack-tc | 15:03 | |
*** zbitter has joined #openstack-tc | 15:05 | |
mnaser | i'd like for this project to succeed but .. i dont know what to do at this point | 15:06 |
mnaser | it's broken out of the box right now. | 15:07 |
*** zaneb has quit IRC | 15:08 | |
lbragstad | has anyone floated the idea of expanding the core team to allow for more review throughput? | 15:09 |
lbragstad | other teams have done that to land critical patches | 15:10 |
evrardjp | mnaser: so basically the action item of last time didn't result in a good outcome for you? | 15:11 |
evrardjp | lbragstad: it was proposed in these office hours I think... maybe 2 weeks ago? mnaser when did you bring that up last time? | 15:11 |
lbragstad | aha | 15:12 |
mnaser | i did that, i proposed two people (not me :]) and there was some "yay good idea" | 15:12 |
mnaser | but no actionable things | 15:12 |
evrardjp | oh I thought you were planning to send an email | 15:12 |
mnaser | nope i sent it out already, got a positive response but no action was taken (or maybe those individuals refused to be on core) | 15:12
mnaser | for all i know, nothing changed | 15:13 |
evrardjp | ok | 15:13 |
evrardjp | so basically you're saying we effectively lost the way to provision a k8s cluster on top of openstack using a community project. | 15:13
jroll | provision a k8s cluster via an API* | 15:14 |
evrardjp | sorry to sound harsh, but that sounds like a big deal to me :p | 15:14 |
jroll | using an openstack community project* | 15:14 |
evrardjp | jroll: thanks for clarifying | 15:14 |
jroll | there are definitely FOSS projects to provision k8s clusters :D | 15:14 |
evrardjp | that's exactly what I meant | 15:14 |
evrardjp | ofc you can still do it with heat and/or terraform using opensuse tools :) | 15:14 |
mnaser | i mean, i think it can work with like 1 specific config | 15:15 |
mnaser | but the defaults (like for example flannel) are broken | 15:15 |
evrardjp | wrong smiley but everyone got the idea there :) | 15:15 |
mnaser | they used to work, but now you can't just upload a fedora image, install magnum, and get a cluster; most of that doesn't work out of the box | 15:15
jroll | yeah, that's rough | 15:16 |
zbitter | so IMHO for provisioning within your own tenant, everything is going to move to cluster-api in the medium term | 15:16 |
*** zbitter is now known as zaneb | 15:16 | |
jroll | right | 15:16 |
zaneb | wrong nick | 15:16 |
evrardjp | zaneb: do you think the current cores of magnum have moved to that? | 15:17 |
zaneb | managed services could be a different story | 15:17 |
zaneb | evrardjp: I have no idea | 15:17 |
mnaser | yeah, but that (cluster-api) is still a while away, and it requires a "hosted" k8s cluster that a user has to self-host | 15:17
zaneb | just looking in my crystal ball here | 15:17 |
jroll | does magnum still suffer from the security problems with service VM authentication and such? | 15:17 |
*** dklyle has quit IRC | 15:17 | |
mnaser | so it implies that the user needs to a) create the "bootstrap" cluster ?? somewhere ?? and then use that to deploy against it | 15:17
mnaser | jroll: no it mostly uses heat to orchestrate this stuff | 15:17 |
mnaser | with os-collect-config and its friends | 15:18
evrardjp | mnaser: the bootstrap cluster can be pivoted to the final cluster after creation | 15:18 |
evrardjp | if necessary | 15:18 |
mnaser | evrardjp: right, but that's not exactly a good experience | 15:18 |
zaneb | mnaser: there's a bootstrap host, but you can run it on a VM on your laptop and it goes away once the cluster is up | 15:18 |
*** dklyle has joined #openstack-tc | 15:18 | |
mnaser | making an api request to get a cluster | 15:18 |
mnaser | vs having to setup a cluster locally to get a cluster | 15:18 |
mnaser | but that's a whole another discussion :) | 15:18 |
evrardjp | mnaser: the problem is not that, the problem is that we have software on our hands that doesn't match user expectations due to brokenness | 15:18
evrardjp | let's not fix cluster api right now :) | 15:18 |
jroll | mnaser: so the magnum VMs are all owned by the tenant, and don't talk to magnum or anything? and so the user has to manage the cluster post-provision? | 15:18 |
mnaser | jroll: yes, its a user-owned cluster, magnum talks to it but over public apis | 15:19 |
mnaser | and it only talks to it to scale up or down afaik | 15:19 |
mnaser | it doesnt necessarily provide an "integration" point | 15:19 |
mnaser | maybe it does upgrades now? i dunno :) | 15:19 |
jroll | hm, ok | 15:19 |
mnaser | also, it deploys on top of fedora atomic 27, we're at 29 and atomic is being replaced by fedora coreos i think | 15:20 |
mnaser | so its quite seriously lagging behind in terms of being a solid deliverable :X | 15:20 |
jroll | just trying to figure out how much value magnum actually adds, and how much I care if it ends up going away | 15:20 |
zaneb | F27 has been EOL for quite a while | 15:20 |
mnaser | well i think you would care about it because: it provides an easy self-serve way for users to get a k8s cluster on demand | 15:20 |
mnaser | users *and* machines (think zuul k8s cluster for every job, lets say) | 15:21 |
jroll | sure | 15:21 |
jroll | but if other things in the landscape do it better... ¯\_(ツ)_/¯ | 15:21 |
jroll | (I don't yet know if they do) | 15:21 |
evrardjp | so many tools | 15:22 |
mnaser | cluster-api is an option, but it's still very early and you might as well just deploy a bunch of vms and use kubeadm at that point if you're going to go through the hassle of setting up a local vm/cluster to pivot to, etc | 15:22
*** ricolin has joined #openstack-tc | 15:22 | |
jroll | there's metalkube if you're deploying on bare metal | 15:23 |
mnaser | i mean i spent significant time trying to refactor things and trying to leverage the existing infrastructure inside magnum | 15:23
*** e0ne has quit IRC | 15:23 | |
zaneb | jroll: there will be but it's even earlier for metalkube | 15:24 |
mnaser | so that it relies on kubeadm to deploy instead of all the manual stuff it has now | 15:24 |
jroll | zaneb: ok | 15:24 |
mnaser | but.. i can't merge simple stuff right now | 15:24
mnaser | i can't start working on something more complex knowing i'll just end up having to run a fork. | 15:24
jroll | right | 15:24 |
jroll | we also can't expect to keep every current openstack project alive, unfortunately | 15:25 |
evrardjp | jroll: agreed. | 15:25 |
mnaser | right, but i guess in this case, someone is ready to do the work and cleanup, but their hands are tied up :) | 15:25 |
jroll | so I think when these things come up, it's important to ask questions like "how many people use this", "how valuable is it", "are there other tools that do it better", etc | 15:25 |
evrardjp | cf. convo we had at last ptg about projects dying | 15:25 |
mnaser | and i've tinkered with the idea of: forking magnum and ripping out all the extra stuff in it which is cruft/extras | 15:25 |
mnaser | and make it a simple rest api that gives you k8s containers. | 15:26 |
*** ijolliffe has joined #openstack-tc | 15:26 | |
mnaser | (because remember, magnum is a "coe" provisioner, not a k8s-specific one, but it's mostly only used for k8s) | 15:26
fungi | sorry, in several discussions at once, but the biggest complaint i've heard about users running kubernetes in their tenant is that it results in resources which are consumed 100% of the time (to maintain the kubernetes control plane), and second is that it means the users have to know how to build and maintain kubernetes rather than just interact with it | 15:26 |
zaneb | mnaser: wait, did you just describe Zun? | 15:28 |
mnaser | zaneb: zun afaik *uses* k8s clusters to run "serverless" (aka one time) workloads | 15:28 |
evrardjp | I don't think so | 15:28 |
zaneb | nope | 15:28 |
evrardjp | isn't zun just running containers? | 15:28 |
zaneb | that's a different project | 15:28 |
mnaser | i meant rather than having magnum be a "coe" deployment tool, it becomes a "k8s" deployment tool only | 15:28 |
evrardjp | I thought zun would integrate into k8s | 15:28 |
jroll | I read it as "gives you k8s clusters", not containers, but I'm not sure. but "k8s containers" aren't really a thing so | 15:28 |
mnaser | oh yeah thats qinling | 15:29 |
ricolin | evrardjp, more try to do serverless IMO | 15:29 |
zaneb | zun is a Nova-like API for containers instead of VMs | 15:29 |
mnaser | yes yes my bad | 15:29 |
evrardjp | I understood it like zaneb :) | 15:29 |
* ttx waves | 15:29 | |
mnaser | well no, i just meant removing all the stuff that lets you deploy dcos/swarm | 15:29 |
evrardjp | mnaser: our governance allows you to create and/or fork magnum project to make it what you want | 15:29 |
mnaser | and make it something that deploys k8s only and nothing else | 15:29 |
zaneb | mnaser: right, yeah, fair to say that k8s has beaten Mesos and whatever that Docker rubbish was called at this point | 15:29 |
evrardjp | hahaha | 15:30 |
mnaser | and that would be a cleaner api than the current "label" based loose api | 15:30 |
evrardjp | not sure that's politically correct, but that surely made me smile | 15:30 |
jroll | so if you're deleting most of the code and changing the API... sounds like a new project :) | 15:30 |
evrardjp | :) | 15:30 |
* mnaser shrugs | 15:30 | |
evrardjp | and then the question becomes -- why this vs using terraform for example? | 15:31 |
ttx | Yes, Zun just runs containers (not on K8s) | 15:31 |
mnaser | evrardjp: the api | 15:31 |
evrardjp | so yeah one is an API, the other one is client based | 15:31 |
mnaser | POST /v1/clusters => get me a cluster | 15:31 |
ttx | There is definitely value in having "K8s clusters as a service" | 15:31 |
mnaser | GET /v1/clusters => here's all my clusters | 15:31
evrardjp | I get the idea, I am just playing devil's avocate here | 15:31 |
mnaser | the state is not local which is huge value for anything that grows to more than one person imho | 15:31 |
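For context, here is a minimal sketch of the "cluster as a service" request shape mnaser describes above, assuming a Magnum-style /v1/clusters endpoint, a pre-obtained Keystone token, and illustrative field names (cluster_template_id, node_count); it is a hedged example, not an authoritative Magnum API reference.

```python
# Illustrative only: the API shape described above ("POST /v1/clusters => get me
# a cluster", "GET /v1/clusters => here's all my clusters"), assuming a
# Magnum-style endpoint and a Keystone token obtained separately.
import requests

MAGNUM = "https://magnum.example.com"          # assumed endpoint URL
HEADERS = {"X-Auth-Token": "<keystone-token>",  # placeholder token
           "Content-Type": "application/json"}

# POST /v1/clusters => ask the service for a new cluster
resp = requests.post(
    f"{MAGNUM}/v1/clusters",
    headers=HEADERS,
    json={
        "name": "demo",
        "cluster_template_id": "<template-uuid>",  # illustrative field
        "node_count": 3,
    },
)
print(resp.status_code, resp.json())

# GET /v1/clusters => list clusters; server-side state, no local tooling needed
clusters = requests.get(f"{MAGNUM}/v1/clusters", headers=HEADERS).json()
for cluster in clusters.get("clusters", []):
    print(cluster.get("name"), cluster.get("status"))
```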
ttx | Now there may be simpler ways of doing that than going through Heat and disk images | 15:32 |
zaneb | ttx: is there though? to me it depends on whether you're talking about a managed service or just a provisioning tool | 15:32 |
evrardjp | advocate* | 15:32 |
ttx | zaneb: obviously useful for public clouds to do a GKE-like thing | 15:33
ttx | but also for others (think CERN) | 15:33 |
zaneb | ttx: GKE is a managed service. I agree that's valuable | 15:33 |
evrardjp | ttx: it's also good for implementing provider of cluster-api | 15:33 |
ttx | On my recent trip I had multiple users that were interested in giving that service to their users | 15:33 |
*** ricolin_ has joined #openstack-tc | 15:33 | |
jroll | I agree there's value in being able to push a button and get a k8s cluster, I'm just wondering if there's value in that being an openstack project. it's a layer above the base infrastructure. | 15:33
mnaser | imho talking to users, people seem a lot more interested in providing k8s-as-a-service | 15:33
ttx | their users in that case being internal developers | 15:33 |
evrardjp | but there is already a cluster api provider for openstack | 15:33 |
mnaser | which is in alpha | 15:34 |
mnaser | and all of cluster-api is being rewritten | 15:34 |
jroll | are k8s contributors still mostly avoiding working on openstack projects because they're labelled openstack? | 15:34 |
evrardjp | yeah that's fair | 15:34 |
*** ricolin_phone has quit IRC | 15:34 | |
mnaser | *and again* cluster-api is something for which you have to build a local VM or a cluster $somewhere to be able to make it work | 15:34
jroll | s/still// | 15:34 |
mnaser | jroll: i dunno, i dont think so? | 15:34 |
evrardjp | jroll: I don't think it's that | 15:34 |
ttx | jroll: I think there is. As demonstrated by all the public clouds | 15:34 |
mnaser | ttx: and yeah i agree, users want to provide a managed k8s as a service as they dont want to maintain things for their users | 15:34 |
mnaser | err, they WANT to maintain them | 15:35 |
* ricolin_ just had a connection failure so resending... | 15:35 |
mnaser | and make sure their clusters are all up to date, etc | 15:35 |
ricolin_ | magnum makes a nice bridge between k8s and OpenStack, which includes connecting the management workflow across them, like integrating autoscaling for k8s on top of OpenStack. we can try to figure out what other options we have, but we need to remember we also have to think about those integrations | 15:35
ttx | mnaser: did you contact the good folks at CERN for an assessment? They drove Magnum recently, and depend on it | 15:35
jroll | so if there are so many clouds that want k8s as a service, and we have a project that does that but is broken, why aren't these clouds contributing to making it work? or why is vexxhost the only one? | 15:35
jroll | are they all running a fork or? | 15:35 |
ttx | OVH developed their own Kubernetes as a service solution | 15:36 |
fungi | jroll: also some are avoiding working on openstack because they keep hearing from various places that openstack is a dead end and they should pretend it never existed | 15:36 |
lbragstad | ^ that's my question | 15:36 |
lbragstad | er - i have the same question :) | 15:36 |
ttx | They tried Magnum, fwiw | 15:36 |
mnaser | ttx: i have emailed both cores (one of which is at cern); again, they were excited about adding cores but nothing actionable happened | 15:36
ttx | But they may have reached the same conclusion mnaser did | 15:36 |
mnaser | afaik city also runs magnum | 15:36 |
*** ricolin has quit IRC | 15:36 | |
ricolin_ | ttx Catalyst too depends on magnum IIRC | 15:36 |
mnaser | catalyst is in magnum-core | 15:36 |
ttx | Like, you need to deploy Heat to have Magnum | 15:37 |
fungi | getting flwang to provide an update on the current state of magnum from his perspective may be a good idea | 15:37 |
ttx | if you don't want that, you may prefer to build your own thing | 15:37
ricolin_ | fungi, +1 | 15:37 |
ttx | And I'd say, for deploying Kubernetes in 2019 you have lots of simpler options | 15:37
lbragstad | adriant might know something about it too - i want to say he worked with magnum some in the past at catalyst | 15:37 |
zaneb | mnaser: I think it'd be fair to say that cluster-api might be 6-12 months away from primetime, and we have a short-term interest in maintaining magnum to cover that gap, but it's not totally surprising that people aren't queueing up to invest in it | 15:37
mnaser | zaneb: i agree but i cant imagine cluster-api is replacing it. unless we implement some really badass way of running the bootstrap cluster side by side with openstack | 15:38 |
ttx | zaneb: my understanding is that Cluster-API would not alleviate the need for magnum-like tooling | 15:38 |
*** ricolin_ is now known as ricolin | 15:38 | |
mnaser | so users dont have to *deploy* a bootstrap cluster | 15:38 |
mnaser | magnum will continue to solve a different issue | 15:38 |
ttx | But yes... Cluster-API would probably trigger a Magnum 2.0 | 15:38 |
ttx | that would take advantage of it | 15:38
ttx | Also, cluster-api is imho farther than 12 months away | 15:39 |
mnaser | so the only way magnum would be 'alienated' is cluster-api running with openstack auth (as part of the control plane), where creating a cluster uses those credentials to create the cluster in your own tenant | 15:39
zaneb | mnaser: you can always spin up the bootstrap VM on openstack itself from the installer (which already has your creds anyway)... I really don't see that being an obstacle to people | 15:39 |
mnaser | ++ it's undergoing a huge "redesign" atm | 15:39 |
ttx | from what hogepodge tells me of progress | 15:39 |
mnaser | not an obstacle for people but lets say zuul wanted to add k8s cluster support | 15:40 |
mnaser | 1 cluster for each job | 15:40 |
zaneb | it makes more sense to talk about the parts of cluster-api separately | 15:40 |
ttx | Next steps would be: is there critical mass to maintain Magnum as it stands ? Would there be additional mass to redo it in a simpler way (one that would leverage some recent K8s deployment tooling) ? | 15:41 |
zaneb | the Cluster part of cluster-api may be some time away and there are disagreements about what it should actually do, and it involves a master cluster to create the workload clusters and blah blah blah | 15:42 |
mnaser | i've personally been experimenting with keeping heat as the infra orchestrator, but using ubuntu as the base os + kubeadm to drive the deployment | 15:42
zaneb | the Machine part of cluster-api is here to stay with only minor tweaks, and it's the important part | 15:42 |
mnaser | but it involves a lot of change inside magnum; while it can be done, i dunno if i can actually get it to land | 15:43
mnaser | and a lot has changed since magnum existed | 15:44 |
mnaser | we have really neat tooling like ansible to be able to orchestrate work against vms (instead of heat and its agents) | 15:44 |
mnaser | kubeadm is a thing which gives us a fully conformant cluster right off the bat | 15:44 |
ricolin | Cluster API is a way easier and more native solution for sure, but can we jump back to how we deal with magnum's current situation? | 15:44
ttx | ricolin: I think we need to reach out to current users and see if there is critical mass to maintain Magnum as it stands | 15:45
ricolin | so we need action items to get acks from users and cores | 15:46
ttx | Also reach out to those who decided not to use Magnum but built their own thing | 15:46
evrardjp | we are just answering the question "are there alternatives explaining the disappearance of interest" I think | 15:46
ttx | and ask why | 15:46 |
mnaser | i think it just doesn't work easily and people give up | 15:47 |
ttx | I can speak to OVH to see why they did not use Magnum for their KaaS | 15:47 |
ttx | mnaser: yes that would be my guess too | 15:47 |
ricolin | agree, we do need a more detailed evaluation | 15:47
evrardjp | I can tell why the next suse caasp product is not integrating with magnum, but I am not sure it interests anyone | 15:47 |
ttx | mnaser: could you reach out to the public clouds that DID decide to use magnum ? You mentioned Citycloud | 15:47 |
evrardjp | it's not a community project, it's a product | 15:47 |
mnaser | a product can be a result of an upstream community project :D | 15:48 |
evrardjp | fair | 15:48 |
mnaser | ttx: i can try, but i usually hear the "we're too busy" or "it just works in this one perfect combination we have" | 15:48 |
mnaser | (until you upgrade to stein and it breaks) | 15:48 |
mnaser | i guess it hits harder as we are closer to latest so we see these issues crop up before most.. | 15:48 |
*** altlogbot_1 has quit IRC | 15:48 | |
ttx | mnaser: also would be good to hear from CERN | 15:49 |
* ricolin wondering if it makes sense to have a full evaluation and ask everyone to help work under a single structure, or have APIs to cover all? :) | 15:49 |
ttx | They are probably the ones that are the most dependent on it, and they are also the ones leading its maintenance recently | 15:50
*** altlogbot_2 has joined #openstack-tc | 15:50 | |
fungi | this is all great discussion, but we really ought to bring this up on the openstack-discuss ml where the maintainers and deployers of magnum can be realistically expected to chime in | 15:52 |
ttx | yes++ | 15:52 |
ricolin | fungi, +1 | 15:52 |
fungi | it might be good if mugsie and evrardjp, as the tc liaisons to magnum, could convince flwang to start that ml thread, even? | 15:52 |
ttx | 1/ ML, 2/ reach out to known users (or known non-users like OVH) and get details | 15:53 |
fungi | if flwang can't/won't start the discussion on the ml, then one of us can of course | 15:53 |
* mnaser has already reached out once (not to start a topic but about the core thing) | 15:54 | |
mnaser | perhaps someone else so im not nagging :) | 15:54 |
ricolin | I can do it:) | 15:54 |
ricolin | we are in closer TZs | 15:54
ricolin | closer than evrardjp I believe:) | 15:55 |
fungi | thanks ricolin. to be clear, i do think it would be healthy for the ptl of magnum to start the discussion | 15:55 |
evrardjp | indeed. I don't mind sending emails though | 15:55 |
*** jeremyfreudberg has quit IRC | 15:55 | |
evrardjp | moar emails | 15:55 |
mnaser | ok great, thanks for coming to my ted talk everyone | 15:55 |
evrardjp | fungi: that's what I was thinking of -- how do we make that happen? | 15:55 |
ttx | mnaser: thanks for raising it | 15:55 |
mnaser | if it doesnt piss off the world, i'll.. work on magnum 2.0 -- publicly | 15:56 |
fungi | well, by asking him to start a "current state of magnum" or "what's to come for magnum" or similar sort of thread | 15:56 |
mnaser | and then we can maybe converge or just have something else | 15:56 |
mnaser | because at this point i've made a demonstrable effort of pushing/fixing patches without much success, so i'll probably have a branch somewhere | 15:57
fungi | flwang does at least seem to chime in on some of the recent magnum bug discussions on the ml, so presumably he's able to send messages at least occasionally | 15:58 |
ricolin | regarding getting user feedback, what actions do we list now? | 15:59 |
fungi | looks like the magnum "containers" meetings were being held somewhat regularly through april of this year, but they haven't had another logged for several months now. i see some discussion on the ml about meetings but seems they don't get around to holding them | 16:02 |
mnaser | on my side, i have tried to work with the osf to get magnum certified for the latest releases of k8s (conformant) | 16:04 |
mnaser | kinda like our powered program | 16:04 |
mnaser | and i cant really do that because the patches that need to land havent landed | 16:04 |
fungi | it was certified under an earlier version, right? i vaguely remember something about that | 16:05 |
mnaser | yes, but i think 1.12 (or was it 1.13 certification) has now expired | 16:05 |
mnaser | so 'technically' speaking we're certified for an expired thing so we cant really use that (which is no bueno) | 16:05 |
*** iurygregory has quit IRC | 16:20 | |
hogepodge | Sorry to drop in late on this | 16:21 |
hogepodge | Part of why it's coming up is because we lost our K8s certification about a month ago. | 16:21
hogepodge | I was working with mnaser (who was doing all the work really tbh) to run the conformance tests and reestablish that certification, and we honestly thought it would be a quick job. It wasn't, and it's in part because magnum needs to be updated to work with recent K8s releases. | 16:24
hogepodge | A few points on the discussion. With the success that CERN is having with Magnum, managing hundreds of clusters with it, I think it's a valuable project. Installing Kubernetes securely and easily is a problem, and Magnum provides an API that is a solution to that problem. | 16:25 |
hogepodge | We need to think of Cluster API as a tool for installing K8s also. It's still in development and not stable yet, but it will be stable with basic functionality soon. If the project leaders are successful, it will become the preferred tool for installing and managing K8s clusters on any cloud, and will add other valuable features like auto-scaling and auto-healing that should be provider-independent. | 16:26 |
hogepodge | So if Magnum is to continue, at some point in the future it would be to the benefit of the project to use Cluster API as the orchestration tool under the hood. It's not an either-or proposition. | 16:27
*** lpetrut has quit IRC | 16:29 | |
hogepodge | But if as a community we decide there's more value to user-managed clusters using the deployment tools out there and Magnum isn't providing value for our users, that's fine. We should support efforts to maintain openstack provider integrations and cluster management tools though. | 16:31 |
*** ricolin has quit IRC | 16:39 | |
*** diablo_rojo has joined #openstack-tc | 17:11 | |
*** jaypipes has quit IRC | 17:18 | |
* dhellmann apparently missed an epic office hours | 18:01 | |
dhellmann | mnaser : another option to consider is just having the TC add some core reviewers to the magnum team, if the existing team is not responsive. That would obviously have to come after starting the new mailing list thread. | 18:02 |
fungi | yep, i totally see that as a possible step, but only after there's been community discussion | 18:04 |
*** dklyle has quit IRC | 18:11 | |
*** dklyle has joined #openstack-tc | 18:12 | |
*** jamesmcarthur has joined #openstack-tc | 18:17 | |
*** dims has quit IRC | 18:19 | |
*** dims has joined #openstack-tc | 18:29 | |
*** lbragstad has quit IRC | 18:51 | |
*** mriedem has quit IRC | 18:54 | |
*** mriedem has joined #openstack-tc | 19:03 | |
*** tosky has quit IRC | 19:10 | |
hogepodge | mnaser: it doesn't help much because I'm not core, but I went and reviewed the entire patch backlog on magnum | 19:15 |
hogepodge | to my eye there's a lot of pretty basic maintenance stuff that should be no problem to merge | 19:15 |
*** jaypipes has joined #openstack-tc | 19:16 | |
*** dims has quit IRC | 19:30 | |
jrosser | re. magnum: from a deployer perspective it has been a hard journey; we just about made things work in rocky, fixing a bunch of bugs along the way and chasing reviews to get bugfixes backported - i figure not many folks will follow it through so persistently with contributions | 19:38
*** dims has joined #openstack-tc | 19:39 | |
*** e0ne has joined #openstack-tc | 19:39 | |
jrosser | then i've had to revert a bunch of broken code out of the stable branches which should never have been merged to master imho, those reverts are largely still not merged | 19:39 |
*** jamesmcarthur has quit IRC | 19:40 | |
jrosser | given it's all broken in stein, i'm sad to say that my users are now using rancher instead | 19:40 |
*** jamesmcarthur has joined #openstack-tc | 19:41 | |
dhellmann | I'm not sure it's absolutely necessary for us to provide openstack APIs to do *everything* | 19:42 |
fungi | especially if there are other api services which can be run alongside openstack, and especially especially if they can share resources (authentication, block storage, networks, et cetera) | 19:44 |
*** jamesmcarthur has quit IRC | 19:46 | |
evrardjp | dhellmann: agreed with you | 20:05 |
evrardjp | it's not my approach to maintain an API if a client I don't have to maintain provides an equivalent feature... | 20:06
evrardjp | I'd rather help maintain the client than build my own thing | 20:06
*** jamesmcarthur has joined #openstack-tc | 20:11 | |
*** jamesmcarthur has quit IRC | 20:19 | |
*** jamesmcarthur has joined #openstack-tc | 20:30 | |
*** diablo_rojo has quit IRC | 20:40 | |
*** mriedem has quit IRC | 20:52 | |
*** mriedem has joined #openstack-tc | 20:53 | |
*** jamesmcarthur has quit IRC | 21:00 | |
*** jamesmcarthur has joined #openstack-tc | 21:01 | |
*** jamesmcarthur has quit IRC | 21:13 | |
*** diablo_rojo has joined #openstack-tc | 21:19 | |
*** whoami-rajat has quit IRC | 21:28 | |
hogepodge | jrosser: I saw the reversions. Those are failing the zuul gate it looks like | 21:34 |
hogepodge | dhellmann: sure, the issue right now is that we have a high-profile user running a lot of clusters. If I had to guess why patches haven't been merged, it's because we're outside of the academic season and also pushing against European holidays. | 21:36
hogepodge | In the short term I would support promoting some trusted people to core and letting them merge patches, especially the easy ones (there are a bunch of cleanups) and the critical ones that have been production tested, then reevaluating what the path forward is once the core team members are back | 21:38 |
*** jamesmcarthur has joined #openstack-tc | 21:46 | |
*** jamesmcarthur has quit IRC | 21:51 | |
*** e0ne has quit IRC | 21:56 | |
dhellmann | hogepodge : ok, but "it's a bad time of the year" seems like a very good reason for us to push to expand the size of that team, too | 22:07 |
dhellmann | the sun doesn't set on openstack, and all that | 22:07 |
*** ijolliffe has quit IRC | 22:36 | |
*** jamesmcarthur has joined #openstack-tc | 22:38 | |
*** jamesmcarthur has quit IRC | 22:44 | |
*** mriedem has quit IRC | 23:18 | |
*** jamesmcarthur has joined #openstack-tc | 23:20 | |
hogepodge | oh yeah, definitely | 23:22 |
*** jamesmcarthur has quit IRC | 23:24 | |
*** tjgresha has quit IRC | 23:31 | |
*** jamesmcarthur has joined #openstack-tc | 23:50 | |
*** jamesmcarthur has quit IRC | 23:55 | |
*** smcginnis has quit IRC | 23:56 |