*** jamesmcarthur has quit IRC | 00:02 | |
*** lbragstad has quit IRC | 00:59 | |
*** lbragstad has joined #openstack-tc | 01:08 | |
*** lbragstad has quit IRC | 01:09 | |
*** tosky has quit IRC | 01:17 | |
*** jamesmcarthur has joined #openstack-tc | 01:32 | |
*** jamesmcarthur has quit IRC | 03:21 | |
*** jamesmcarthur has joined #openstack-tc | 03:24 | |
*** jamesmcarthur has quit IRC | 03:31 | |
*** jamesmcarthur has joined #openstack-tc | 04:00 | |
*** dangtrinhnt_x has joined #openstack-tc | 04:02 | |
*** dangtrinhnt_x has quit IRC | 04:09 | |
*** jamesmcarthur has quit IRC | 04:47 | |
*** jamesmcarthur has joined #openstack-tc | 04:47 | |
*** diablo_rojo has quit IRC | 05:03 | |
*** whoami-rajat has joined #openstack-tc | 05:22 | |
*** jamesmcarthur has quit IRC | 05:27 | |
*** jamesmcarthur has joined #openstack-tc | 06:50 | |
*** jamesmcarthur has quit IRC | 06:55 | |
*** jaosorior has joined #openstack-tc | 08:39 | |
*** jpich has joined #openstack-tc | 08:44 | |
*** tosky has joined #openstack-tc | 08:47 | |
*** zaneb has quit IRC | 09:17 | |
ttx | On yesterday's discussion... I agree that OpenStack's positioning as "providing infrastructure" means you can use it to power anything (and that is where its true value lies). But that value is hard to visualize, and having a few killer narrow use cases could definitely help | 09:34 |
ttx | We now have a few shops that deploy OpenStack solely as a backend for providing infrastructure to Zuul v3 | 09:35 |
ttx | We need more of that sort of thing, and we need to make the experience of deploying such narrow use cases VERY simple | 09:35 |
ttx | (it is a lot easier than making the experience of deploying "OpenStack" very simple) | 09:36 |
*** ricolin has quit IRC | 09:41 | |
*** jpich has quit IRC | 09:48 | |
*** jpich has joined #openstack-tc | 09:48 | |
*** e0ne has joined #openstack-tc | 09:52 | |
*** cdent has joined #openstack-tc | 10:22 | |
smcginnis | I think that's key. If OpenStack can be just a piece of an overall solution, not the solution itself, a default deployment that covers 80% of needs would really make adoption grow, I think. | 10:25 |
cmurphy | is kubernetes that? | 10:28 |
cdent | a) "just a piece of an overall solution" is totes right, b) smcginnis never sleeps! | 10:28 |
smcginnis | I would see that as one thing. | 10:29 |
smcginnis | cdent: Doesn't help when I fall asleep by 1900. Considering I just got back to my normal time zone a day and a half ago, I think I'm doing pretty good. :) | 10:30 |
ttx | cmurphy: K8s sits slightly higher on the stack, and also encourages more focused/small/separate clusters | 10:30 |
ttx | so it suffers a bit less from that "universality" issue -- but it's still affected by it | 10:30 |
smcginnis | It could make "I just need a cloud to deploy my k8s cluster on" a lot easier for those that don't need a lot. | 10:31 |
ttx | But being "a homegrown K8s substrate" could be one of those narrow use cases | 10:31 |
ttx | what smcginnis said | 10:31 |
ttx | which raises the question of whether we should encourage bottom-up (Magnum and its API for cluster generation) or top-down (the openstack cloud provider in K8s) | 10:33 |
cdent | I think "k8s substrate" is something we (whoever that is) ought to pursue aggressively | 10:33 |
ttx | cdent: I think we do, via the cloud provider effort mostly | 10:34 |
ttx | Like having "openstack" in that list together with Azure and GKE is awesome | 10:34 |
ttx | (https://cncf.ci/) | 10:35 |
cdent | ttx: yeah, but how close to that is "press a button and boom I've got one"? That's the experience people get out of something like minikube | 10:35 |
ttx | well it assumes a preexisting cloud, so I think we still have work to do to come up with "the minimal set of components you need to act as a K8s cloud provider" | 10:36 |
ttx | heck I don't even know that answer | 10:37 |
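For reference, a minimal sketch of what "acting as a K8s cloud provider" asks of the cloud: the OpenStack provider (in-tree at the time, cloud-provider-openstack since) is configured with an INI file and, at its core, only needs Keystone for auth and Nova for instance metadata; Neutron/Octavia and Cinder only come into play for LoadBalancer services and volumes. The key names and values below are illustrative placeholders, not an authoritative configuration.

```shell
# Hypothetical minimal cloud config for the OpenStack provider; all values
# are placeholders. Only Keystone + Nova are strictly needed by the core
# node/instance logic.
cat > /etc/kubernetes/cloud.conf <<'EOF'
[Global]
auth-url=https://keystone.example.com:5000/v3
username=k8s
password=secret
tenant-name=k8s-project
domain-name=Default
region=RegionOne
EOF
# kubelet and kube-controller-manager are then pointed at it, roughly:
#   --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf
```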
smcginnis | I think dims was talking about an effort being started, when we were in Seattle, to enable deploying a minimal OpenStack cloud that supports hosting a k8s cluster. | 10:38 |
ttx | and then facilitate a one-node deployment of that, which can then be extended | 10:38 |
ttx | like do we need a ministack (think devstack, centered on specific use-case needs, that you can evolve into something durable and use in prod)? | 10:40 |
smcginnis | Given the number of times I've seen people try to use devstack to set up a long running deployment, I'd say there is probably a need for something like that. | 10:41 |
ttx | It feels like some of the container-driven deployment toolkits would have the flexibility to go from one-node quick setup to sustainable multi-node | 10:42 |
ttx | if the goal is to serve as a substrate for Zuul or a single Kubernetes cluster, you don't need hundreds of nodes either | 10:42 |
*** jpich has quit IRC | 10:43 | |
ttx | trying/comparing/refreshing my knowledge on the various openstack deploy toolkits is on my 2019 list | 10:43 |
ttx | see which one I would place my bet on as the "simple" option | 10:44 |
EmilienM | I'm not aware of a "simple" option to deploy OpenStack ;-) | 10:47 |
EmilienM | (and hi) | 10:47 |
cdent | 'ministack' would be great | 10:47 |
cdent | even greater would be some people with the time and energy to make one | 10:47 |
EmilienM | the problem with these things is that it always starts small and gets (too) big because $reasons | 10:47 |
EmilienM | once ministack works, people will start to want it to be multinode and such | 10:48 |
EmilienM | once multinode works, they'll want loadbalancing, etc | 10:48 |
EmilienM | that's where it goes south :) | 10:49 |
* EmilienM bbl, lunch time here | 10:49 | |
*** jpich has joined #openstack-tc | 10:49 | |
EmilienM | FWIW, this is our attempt to make TripleO deployed by one command on a fresh system: | 10:49 |
EmilienM | https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/standalone.html | 10:49 |
EmilienM | feedback welcome | 10:49 |
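For context, the linked standalone doc really does reduce to roughly one deploy command after host preparation. This is a sketch reconstructed from that era's docs; exact flags, template paths, and parameter files are illustrative and release-dependent:

```shell
# Approximate TripleO standalone deploy, per the linked docs (paths,
# environment files, and the example IP are placeholders)
sudo openstack tripleo deploy \
  --templates \
  --standalone \
  --local-ip=192.168.24.2/24 \
  -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml \
  -e "$HOME/containers-prepare-parameters.yaml" \
  -e "$HOME/standalone_parameters.yaml" \
  --output-dir "$HOME"
```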
smcginnis | I think it's fine if folks want to add multinode. But doing that should be a flag they need to set that does not impact those that don't need that. | 10:51 |
smcginnis | Protect the simple case, enable tweaking settings for the more complicated cases. | 10:52 |
cdent | doesn't osa have an all in one mode? | 10:52 |
smcginnis | Yep - https://docs.openstack.org/openstack-ansible/latest/user/aio/quickstart.html | 10:53 |
cdent | yeah, just found that too | 10:53 |
cdent | hmm, more steps than I was hoping for | 10:54 |
smcginnis | Yep | 10:54 |
cdent | not a ton of steps | 10:54 |
cdent | but was really looking for one call | 10:54 |
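For reference, the OSA all-in-one quickstart that cdent and smcginnis are counting steps in looked roughly like this at the time (reconstructed from memory; the repo URL and script names may have shifted between releases):

```shell
# Approximate OSA all-in-one bootstrap: several discrete steps rather than
# the single "one call" cdent was hoping for
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
scripts/bootstrap-ansible.sh   # install ansible and pull in the roles
scripts/bootstrap-aio.sh       # prepare the host for an all-in-one
cd playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml
```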
smcginnis | I need to go back and try again, but when I tried to use that to set up my lab I was hitting various problems. | 10:54 |
cdent | and I can't understand why it takes so long | 10:55 |
cdent | /o\ | 10:58 |
cdent | I'm exhausted by the fact that so many things that seemed wrong 4 years ago are still the same kind of wrong, but somewhere else. | 10:58 |
cdent | And there's this creeping thing where if you try to hold a line on "doing better" it gets slowly clobbered by "that's the way we do things". For example I'd tried to make it normal and okay in placement that a fresh install didn't do database migrations, to save a little bit of time | 11:00 |
cdent | Or removing moving parts and having fewer steps in install | 11:01 |
smcginnis | "removing moving parts and having fewer steps in install" is a great thing to do, IMO. | 11:01 |
cdent | yet we all agreed to make upgrade checks a community goal | 11:02 |
cdent | it's a nice safety valve, but it is yet one more thing in the process | 11:02 |
cdent | we could have built it in (somehow) instead | 11:03 |
smcginnis | But the upgrade checks don't have to be run to do an upgrade, right? They just give the person or tool doing the upgrade something they can run that can check for things they should be aware of. | 11:13 |
cdent | yes, it was probably a bad example, because it is mostly a good thing, but on the other hand it is yet one more thing to be aware of | 11:13 |
cdent | I recognize that openstack is a complex system so there's lots to be aware of, but if you spread that awareness over N(large) different projects... I'd not enjoy managing openstack... | 11:14 |
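The upgrade checks being discussed are the per-project `$PROJECT-status upgrade check` commands from the Stein community goal (nova's predates the goal). A sketch of how an operator or deployment tool consumes one; exact output varies by project:

```shell
# Run pre-upgrade checks: exit code 0 means all checks passed, non-zero
# signals warnings or failures, so tooling can gate the upgrade on it.
nova-status upgrade check || echo "resolve the reported issues before upgrading"
```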
*** cdent has quit IRC | 11:30 | |
*** dtantsur|afk is now known as dtantsur | 11:32 | |
*** cdent has joined #openstack-tc | 12:46 | |
*** ssbarnea has quit IRC | 12:51 | |
*** lbragstad has joined #openstack-tc | 14:08 | |
*** mriedem has joined #openstack-tc | 14:30 | |
*** EmilienM is now known as EvilienM | 14:34 | |
*** cdent has quit IRC | 14:58 | |
*** cdent has joined #openstack-tc | 15:11 | |
TheJulia | I've heard that echoed time and time again. The key for people and their adoption is solving a real problem... and it does take growth of expertise in some areas where specialized things are needed, like highly specific networking, to meet particular needs. We've gotten stuck in this model that an operator will install the software, and manage the software, not let the software manage the system. In a sense we have database | 15:25 |
TheJulia | migrations that are outside of the daemon that uses the database on a normal basis. Largely because we built the interaction models to have very fixed behaviors and patterns for an operator... which in a sense is great in a regulated environment, and is a nightmare when you're just wanting something to "work". | 15:25 |
cdent | yes | 15:27 |
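An example of the pattern TheJulia describes, using nova: schema migrations are a separate step the operator runs by hand, outside the daemons that actually use the database. A sketch of the commonly documented commands, not a full upgrade procedure:

```shell
# Out-of-band schema migrations, invoked by the operator rather than by the
# running services themselves
nova-manage api_db sync   # migrate the API database schema
nova-manage db sync       # migrate the (cell) database schema
```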
*** lbragstad has quit IRC | 15:36 | |
*** EvilienM is now known as EmilienM | 15:37 | |
*** lbragstad has joined #openstack-tc | 15:41 | |
*** cdent has quit IRC | 15:59 | |
*** diablo_rojo has joined #openstack-tc | 16:00 | |
*** jamesmcarthur has joined #openstack-tc | 16:07 | |
*** e0ne has quit IRC | 16:36 | |
*** cdent has joined #openstack-tc | 16:43 | |
fungi | to rephrase (making sure i understand what you're saying): openstack evolved in an environment where explicit manual operation was preferred over magic automation of maintenance tasks? | 16:51 |
scas | declarative over imperative is what it seems like to me from a deploy aspect | 17:13 |
scas | lest we trust those pesky computers /too/ much and have an even worse version of skynet | 17:13 |
scas | or something equally ranty | 17:13 |
mriedem | keep in mind that rax public cloud, back in the day, specifically didn't want projects doing data migrations during the schema migrations (db sync) because of the downtime involved in migrating hundreds of thousands of records in a production database | 17:14 |
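The split mriedem describes ended up reflected in nova's tooling: `db sync` applies schema changes only, while record-level data migrations run as a separate, batchable online step so a large production database doesn't sit in downtime. A sketch, with flag names per nova-manage's documented interface:

```shell
# Schema-only migration, kept fast even with huge tables
nova-manage db sync
# Data migrations run separately, in bounded batches, while services are up;
# repeat until it reports nothing left to migrate
nova-manage db online_data_migrations --max-count 1000
```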
scas | having served time at rax, pre-openstack, i know that there was a notion of not letting machines do /too/ much unattended, whether or not it was actively fostered. it seemed that optimizing for a declarative model was preferred over a sort of self-service model | 17:17 |
scas | for that to make it into openstack is not the most surprising | 17:18 |
scas | but, scas, what about the whole self-service thing that openstack provides? | 17:21 |
scas | the caveat seems to be that at the operator level, it's bring your own shovels and pickaxes | 17:22 |
scas | the cycle of ENOTIME enforces this, be it through attrition, lack of interest, or conflict of interest. over time, people don't like the idea of having to build their own toolbox; they'd rather go find something shrinkwrapped that sort of mostly does what needs to be done | 17:24 |
clarkb | fwiw (and again huge biases like debugging openstack all the time) I run a lot of software and running infracloud was relatively easy compared to other things | 17:25 |
clarkb | the trouble with our openstack cloud wasn't openstack it was the data center flooding | 17:25 |
clarkb | and our hosting provider's switches turning into hubs | 17:26 |
clarkb | and the hard drives and cpus and motherboards and memory on our hardware dying | 17:26 |
scas | chef openstack's standard deployment is getting as close to push-button as possible. documentation would close the gap, were it not for the cycle of ENOTIME pushing things down the list | 17:26 |
*** whoami-rajat has quit IRC | 17:27 | |
clarkb | and I think it's those costs that people underestimate. When you go from small fiefdoms mostly managing themselves in your datacenter to trying to provide broad infrastructure, all of a sudden you are responsible for that undersized switch flooding its CAM table and the hard drives on hypervisors dying | 17:27 |
clarkb | operationally the cost goes up, but I'm not sure it's entirely due to the software | 17:27 |
scas | my penchant for non-self-promotion is not helping that much, either | 17:28 |
clarkb | though in the flooding case you lose regardless of how you are organized as a host | 17:28 |
scas | the main theme that i've run into in my years is that it's not about how much you prepare, but how well you can recover when things /do/ go wrong. preparing can help one see the patterns, but being in the moment is nothing like drills | 17:30 |
scas | even in the public cloud side, i've spoken with individuals about their cloud strategy. their eyes widen when i ask what would they do if 'the cloud' went down | 17:31 |
scas | the verbal response is usually a measured one, but the initial facial response is pretty much the same unless you think about this kind of stuff for morbid 'fun' | 17:32 |
scas | i've seen it up and down the management stack | 17:33 |
*** e0ne has joined #openstack-tc | 17:43 | |
*** e0ne has quit IRC | 17:44 | |
*** jpich has quit IRC | 17:49 | |
fungi | reminds me of when, at a previous employer providing colocation and dedicated hosting services, we had an automatic transfer switch failure during a power incident at our oldest facility and it went dark for 5 minutes. i and other staff spent the better part of two mostly-sleepless days helping customers get their systems back online due to resulting filesystem/database corruption and the like. | 17:52 |
fungi | customers thanked us up and down and seemed generally accepting of the situation, but our primary competitor in a neighboring city ran scathing ads implying we were an untrustworthy fly-by-night operation. then a month later their flagship facility suffered a power outage that lasted several days (and nearly a week for some parts of that facility), and at least a third of their customer base walked out | 17:52 |
fungi | rather than wait for it to get fixed | 17:52 |
bsilverman | K8s and Docker are the hot topic right now. I have a bunch of OpenStack deals in the pipeline and most revolve around how they are going to do hybrid cloud using containers. Some are OpenShift, some aren't. Each OpenStack distro now has its own opinionated way of solving this challenge and most aren't aligned real closely with toolsets and methodologies. | 17:59 |
bsilverman | Discussions around OpenStack, for most of these deals, were a software drag that evolved from showing companies that orchestrating bare metal for containers and k8s wasn't easy, and they'd need a private cloud with similar capabilities to public cloud for the rest of their persistent workloads. | 18:02 |
bsilverman | Meanwhile, the major telcos are still moving forward with their implementations using NFV/Edge as their main use cases. | 18:05 |
*** tosky has quit IRC | 18:09 | |
bsilverman | Honestly, I was surprised at the number of people I spoke to at Kubecon who didn't even know what OpenStack was. I see that as a major failure of the community and an opportunity to get more people familiar with the software by introducing them to OpenStack as an infrastructure orchestrator for their use case. | 18:09 |
*** jamesmcarthur has quit IRC | 18:10 | |
* bsilverman fades slowly off into the distance. | 18:10 | |
*** jamesmcarthur has joined #openstack-tc | 18:10 | |
bsilverman | o/ | 18:10 |
*** jamesmcarthur has quit IRC | 18:11 | |
*** dtantsur is now known as dtantsur|afk | 18:11 | |
*** jamesmcarthur has joined #openstack-tc | 18:11 | |
scas | in some of the chat media that i lurk in, just saying 'openstack' can send people frothy | 18:12 |
scas | particularly the home power user contingent | 18:12 |
clarkb | scas: are there concrete issues, or is it "tried it 5 years ago, got mad, and forever held on to that opinion"? | 18:13 |
clarkb | (I do think we don't give openstack enough credit for the improvements that have been made) | 18:14 |
scas | clarkb: a little of both | 18:15 |
scas | on one end of the spectrum, you've the old crusty people who tried mashing it together manually, maybe had a bad time with it in the upcycle of hype. on the opposite end, you have someone that's just learning about it, sees that tripleo is the dominant tool in terms of raw numbers, gets frustrated, and forever expounds the shittiness of their experience | 18:16 |
clarkb | ya that second individual is why I think reevaluating our defaults for new users is worthwhile | 18:17 |
clarkb | short of building a time machine fixing the first individuals experience is less straightforward :) | 18:17 |
scas | aye | 18:18 |
cdent | $IF_ONLY_TIMEMACHINE | 18:18 |
scas | for the first case, it's more of changing set opinions. in the latter, they have no preconceptions of it, good or bad. they just heard it was 'cool' | 18:18 |
scas | the former is difficult to overcome, unless such a dissenter revisits that 'forever' decision | 18:19 |
scas | post-berlin, one person showed up in #openstack-chef that is a relatively new user. their opinion and perspective was enlightening in a good way, to me at least | 18:21 |
scas | at the end of the day, it's down to the implementer to look at things through an unbiased lens first and foremost | 18:22 |
scas | i'm simplifying for the sake of brevity for the medium | 18:23 |
*** jamesmcarthur has quit IRC | 18:46 | |
*** jamesmcarthur has joined #openstack-tc | 18:47 | |
*** jamesmcarthur has quit IRC | 18:51 | |
*** jamesmcarthur has joined #openstack-tc | 18:55 | |
*** whoami-rajat has joined #openstack-tc | 18:57 | |
*** jamesmcarthur has quit IRC | 19:11 | |
*** jamesmcarthur has joined #openstack-tc | 19:17 | |
*** jamesmcarthur has quit IRC | 19:43 | |
*** jamesmcarthur has joined #openstack-tc | 19:43 | |
*** jamesmcarthur has quit IRC | 19:46 | |
*** jamesmcarthur has joined #openstack-tc | 19:47 | |
*** jamesmcarthur has quit IRC | 19:52 | |
cdent | "return of mid-cycles" sighting! | 19:58 |
*** jaypipes has quit IRC | 20:04 | |
smcginnis | Cinder discussion started soon after the PTG change announcement. | 20:05 |
cdent | makes sense | 20:07 |
smcginnis | It will be interesting to see how it works now. They were always hugely productive in the pre-PTG era, but quite a bit has changed since then. | 20:08 |
smcginnis | But I think it did always help to have high-bandwidth face-to-face events at regular intervals. I hope it still has enough critical mass to keep working well. | 20:09 |
clarkb | smcginnis: do you anticipate cinder forgoing ptg room time as a result? | 20:10 |
clarkb | or doing both? | 20:10 |
cdent | i like the idea of them, but I'm not sure how many companies will be eager to pay | 20:10 |
smcginnis | I expect doing both. | 20:10 |
smcginnis | The midcycle is really to fill that gap between Summits now. (again) | 20:10 |
*** cdent has quit IRC | 20:13 | |
TheJulia | yay for getting distracted by other things | 20:32 |
TheJulia | fungi: Essentially what scas was saying is what I was attempting to convey. Because of ENOTIME and all of the other various issues that go into it, we've never gotten around to improving the model or interaction. I fear we've only complicated it in the grand scheme of the universe. | 20:36 |
TheJulia | clarkb: That is a downright frightening sequence of events... | 20:37 |
clarkb | TheJulia: that the datacenter flooded? | 20:38 |
clarkb | ya that resulted in losing all the rails for our servers somehow when they moved them | 20:38 |
TheJulia | scas: I would argue that thinking of the morbid kind of stuff is an excellent skill.... although it severely impacts things like... bathroom remodels. | 20:38 |
clarkb | then stacked them like pizza boxes on the floor | 20:38 |
TheJulia | clarkb: yeah | 20:38 |
clarkb | also our switches | 20:39 |
clarkb | which is how we ended up on shared switch gear that turned into hubs when the CAM tables filled | 20:39 |
TheJulia | fungi: was it one of the static switches locking over to one side? | 20:40 |
TheJulia | bsilverman: I think it is highly dependent upon the circle in which one keeps themselves, which is hard to break out of, and then we reach the ENOTIME issue spoken of previously. :( | 20:42 |
TheJulia | clarkb: that is... horrifying. | 20:43 |
clarkb | On the one hand it was donated hosting which was nice. On the other it was really difficult to deal with the curveballs we were thrown :) | 20:43 |
fungi | TheJulia: the facility in question had a SPOF ATS, and it fused, refusing to trip to the live feed of the two we had coming into the facility. the UPSes were also in a sore state, and when load all ended up on one side it decided it had insufficient capacity, gave up, failed over to the second bank of UPSes which then had a similar fit... it was really not a good time for us | 20:44 |
TheJulia | smcginnis: I anticipate few teams will be able to coalesce to a single location for a mid-cycle and that most teams will end up having to do high bandwidth check-ins electronically. | 20:45 |
fungi | on a positive note, it at least got the owner to agree to start properly servicing batteries in the UPS again :/ | 20:45 |
TheJulia | #thisiswhycloudsarehard | 20:45 |
TheJulia | Then it kind of all goes to the sheep vs cattle mentality conundrum | 20:46 |
TheJulia | err | 20:46 |
TheJulia | pets vs cattle | 20:46 |
smcginnis | TheJulia: Yeah, I'm sure it won't work for many to do face to face. Hangouts and the like are probably going to be more common. | 20:46 |
* TheJulia is braindead | 20:46 | |
* smcginnis would like a pet sheep | 20:47 | |
TheJulia | I actually met someone who had some... they were unbelievably cute. | 20:47 |
smcginnis | Totally. :) | 20:47 |
* dhellmann wonders if other cities are out of tidy cat or if this is a localized emergency | 20:53 | |
TheJulia | dhellmann: I am avoiding running to the pet store since I already went to the vet this morning | 20:54 |
TheJulia | I can report in from california tomorrow! :) | 20:54 |
dhellmann | :-) | 20:54 |
dhellmann | I'll tell the cats to hold it | 20:54 |
TheJulia | I'm not sure that will work | 20:54 |
dhellmann | no, not likely | 20:55 |
TheJulia | Feline Pine multi-cat clumping or... the walnut shell multi-cat works really well | 20:55 |
dhellmann | I'm trying feline pine with them. Our previous cat would only use that, but these 2 didn't seem to like it as kittens. | 20:56 |
TheJulia | :( | 20:57 |
dhellmann | they're ~7 now, so maybe they've forgotten | 20:57 |
dhellmann | if that doesn't work I'll see if I can find some cheap non-clumping clay. Theresa doesn't like to use the clumping stuff because they track it all through the house | 20:58 |
TheJulia | That is where a robotic vacuum helps.... | 20:58 |
TheJulia | set it on a schedule, keep it near the boxes.... presto! | 20:58 |
TheJulia | no episodes of cats riding the vacuum around either... sadly | 21:00 |
clarkb | I just got our vacuum running again | 21:00 |
clarkb | toddlers were not amused | 21:00 |
clarkb | but I've got them saying robot in the original asimov pronunciation | 21:01 |
clarkb | so I'll call that a win | 21:01 |
dhellmann | clarkb : what is that, "rowbut"? | 21:02 |
clarkb | ya | 21:02 |
dhellmann | TheJulia : unfortunately our vac is dougic and not robotic | 21:03 |
TheJulia | :( | 21:10 |
fungi | based on advanced doug-powered technology | 21:15 |
*** whoami-rajat has quit IRC | 22:37 | |
*** jaosorior has quit IRC | 22:42 | |
* TheJulia tries to make a joke, and fails to find the words after doing spec reviews | 23:35 | |
*** openstack has joined #openstack-tc | 23:45 | |
*** ChanServ sets mode: +o openstack | 23:45 |