*** HenryG_ is now known as HenryG | 00:16 | |
*** adrian_otto has quit IRC | 00:28 | |
*** unicell has quit IRC | 01:25 | |
*** marcoemorais has quit IRC | 01:40 | |
*** unicell has joined #openstack-containers | 01:52 | |
*** unicell has quit IRC | 02:56 | |
*** unicell has joined #openstack-containers | 03:35 | |
*** harlowja is now known as harlowja_away | 03:47 | |
*** harlowja_away is now known as harlowja | 03:48 | |
*** marcoemorais has joined #openstack-containers | 03:53 | |
*** marcoemorais has quit IRC | 03:54 | |
*** adrian_otto has joined #openstack-containers | 04:47 | |
*** adrian_otto has quit IRC | 05:45 | |
*** adrian_otto has joined #openstack-containers | 05:46 | |
*** adrian_otto has quit IRC | 05:46 | |
*** harlowja is now known as harlowja_away | 06:31 | |
*** stannie1 has joined #openstack-containers | 07:48 | |
*** stannie has quit IRC | 09:00 | |
*** julienvey has joined #openstack-containers | 09:06 | |
*** PaulCzar has joined #openstack-containers | 12:54 | |
*** unicell has quit IRC | 12:57 | |
*** jeckersb_gone is now known as jeckersb | 13:06 | |
*** PaulCzar has quit IRC | 13:34 | |
*** PaulCzar has joined #openstack-containers | 13:35 | |
*** PaulCzar has quit IRC | 13:35 | |
*** thomasem has joined #openstack-containers | 13:38 | |
*** adrian_otto has joined #openstack-containers | 14:17 | |
*** EricGonczer_ has joined #openstack-containers | 14:44 | |
*** adrian_otto has quit IRC | 14:54 | |
*** apmelton1 has quit IRC | 15:04 | |
*** apmelton has joined #openstack-containers | 15:06 | |
*** unicell has joined #openstack-containers | 15:15 | |
*** EricGonczer_ has quit IRC | 15:18 | |
*** diga has joined #openstack-containers | 15:21 | |
*** EricGonczer_ has joined #openstack-containers | 15:31 | |
*** EricGonczer_ has quit IRC | 15:36 | |
*** diga has quit IRC | 15:37 | |
*** EricGonczer_ has joined #openstack-containers | 15:37 | |
*** adrian_otto has joined #openstack-containers | 15:50 | |
*** julienvey has quit IRC | 15:56 | |
*** diga_ has joined #openstack-containers | 16:01 | |
*** stannie1 is now known as stannie | 16:01 | |
diga_ | Hi | 16:02 |
diga_ | do we have meeting today ? | 16:02 |
diga_ | yes, into meeting | 16:03 |
*** EricGonczer_ has quit IRC | 16:14 | |
*** marcoemorais has joined #openstack-containers | 16:18 | |
*** harlowja_away is now known as harlowja | 17:15 | |
*** marcoemorais has quit IRC | 17:16 | |
*** marcoemorais has joined #openstack-containers | 17:17 | |
*** marcoemorais has quit IRC | 17:17 | |
*** marcoemorais has joined #openstack-containers | 17:18 | |
*** marcoemorais has quit IRC | 17:26 | |
*** marcoemorais has joined #openstack-containers | 17:29 | |
*** EricGonczer_ has joined #openstack-containers | 17:49 | |
*** diga_ has quit IRC | 17:52 | |
*** thomasem has quit IRC | 17:59 | |
*** thomasem has joined #openstack-containers | 18:02 | |
*** EricGonczer_ has quit IRC | 18:04 | |
*** marcoemorais has quit IRC | 18:14 | |
*** marcoemorais has joined #openstack-containers | 18:14 | |
*** marcoemorais has quit IRC | 18:15 | |
*** marcoemorais has joined #openstack-containers | 18:15 | |
*** EricGonczer_ has joined #openstack-containers | 18:19 | |
*** marcoemorais has quit IRC | 18:21 | |
*** marcoemorais has joined #openstack-containers | 18:21 | |
*** thomasem has quit IRC | 18:27 | |
*** EricGonczer_ has quit IRC | 18:27 | |
*** marcoemorais has quit IRC | 18:28 | |
*** marcoemorais has joined #openstack-containers | 18:28 | |
*** thomasem has joined #openstack-containers | 18:53 | |
*** thomasem has quit IRC | 19:03 | |
*** harlowja has quit IRC | 19:06 | |
*** harlowja has joined #openstack-containers | 19:08 | |
*** thomasem has joined #openstack-containers | 19:16 | |
*** thomasem_ has joined #openstack-containers | 19:19 | |
*** thomasem has quit IRC | 19:20 | |
*** EricGonczer_ has joined #openstack-containers | 19:21 | |
*** thomasem_ has quit IRC | 19:54 | |
*** marcoemorais has quit IRC | 19:59 | |
*** marcoemorais has joined #openstack-containers | 19:59 | |
*** marcoemorais has quit IRC | 20:00 | |
*** marcoemorais has joined #openstack-containers | 20:01 | |
*** marcoemorais has quit IRC | 20:03 | |
*** marcoemorais has joined #openstack-containers | 20:04 | |
*** EricGonczer_ has quit IRC | 20:16 | |
*** EricGonczer_ has joined #openstack-containers | 20:19 | |
*** EricGonczer_ has quit IRC | 20:24 | |
*** marcoemorais has quit IRC | 21:02 | |
*** marcoemorais has joined #openstack-containers | 21:03 | |
*** marcoemorais has quit IRC | 21:04 | |
*** marcoemorais has joined #openstack-containers | 21:04 | |
*** thomasem has joined #openstack-containers | 21:15 | |
*** apmelton has quit IRC | 21:29 | |
*** apmelton has joined #openstack-containers | 21:30 | |
*** harlowja is now known as harlowja_away | 21:43 | |
*** stannie has quit IRC | 21:54 | |
adrian_otto | Our team meeting will begin in 5 minutes in #openstack-meeting-alt | 21:55 |
*** harlowja_away is now known as harlowja | 22:00 | |
*** apmelton has quit IRC | 22:00 | |
*** apmelton has joined #openstack-containers | 22:01 | |
*** chuck_ has joined #openstack-containers | 22:25 | |
*** slagle_ has joined #openstack-containers | 22:25 | |
*** unicell1 has joined #openstack-containers | 22:35 | |
*** marcoemorais has quit IRC | 22:36 | |
*** adrian_otto1 has joined #openstack-containers | 22:36 | |
*** marcoemorais has joined #openstack-containers | 22:36 | |
*** s1rp_ has joined #openstack-containers | 22:37 | |
*** zul has quit IRC | 22:37 | |
*** slagle has quit IRC | 22:37 | |
*** adrian_otto has quit IRC | 22:39 | |
*** slagle_ has quit IRC | 22:43 | |
*** unicell has quit IRC | 22:43 | |
*** s1rp has quit IRC | 22:43 | |
*** adrian_otto1 is now known as adrian_otto | 22:43 | |
*** HenryG has quit IRC | 22:46 | |
*** jogo has joined #openstack-containers | 22:59 | |
*** mtesauro has joined #openstack-containers | 23:00 | |
adrian_otto | hi jogo | 23:00 |
apmelton | so jogo, right now, if someone wanted to run openstack + kubernetes, they could | 23:02 |
apmelton | the only thing to solve there is the deployment situation | 23:02 |
jogo | apmelton: go on | 23:02 |
apmelton | and maybe heat recipes (don't think that's the right term) so each tenant can spin their own kubernetes instance up | 23:03 |
jogo | apmelton: I would think an image would suffice as well | 23:03 |
adrian_otto | I need a heat template to produce a kubernetes install for myself, and configure that, and maintain it. | 23:03 |
adrian_otto | it could use a special image | 23:03 |
adrian_otto | plus a cloud init script or whatever | 23:03 |
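A minimal sketch of what such a Heat template might look like, assuming a hypothetical pre-built Kubernetes image and a placeholder cloud-init bootstrap step (every name below is illustrative, not taken from the discussion):

```yaml
heat_template_version: 2013-05-23
description: Illustrative single-node Kubernetes stack (all names hypothetical)
resources:
  kube_node:
    type: OS::Nova::Server
    properties:
      image: fedora-k8s-image        # assumed pre-built "special image"
      flavor: m1.small
      user_data_format: RAW
      user_data: |
        #cloud-config
        runcmd:
          - systemctl enable --now kubelet   # placeholder bootstrap step
```

The point of the pattern is that the image carries the Kubernetes binaries while the cloud-init script does per-stack configuration, so each tenant can spin up their own stack from the same template.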
apmelton | the entire point of our project is to offer a containers service that integrates with openstack on as many levels as possible | 23:03 |
jogo | apmelton: right | 23:04 |
jogo | so what does that mean | 23:04 |
apmelton | not only for a good end user experience, but also a good deployer experience | 23:04 |
adrian_otto | apmelton, maybe I can articulate the use case for solum | 23:04 |
jogo | why can't there be a good experience for things that aren't offically openstack? | 23:04 |
adrian_otto | jogo: say I want to offer hosted CI/CD service | 23:04 |
adrian_otto | and in order to deploy my applications using containers I need to use kubernetes or a steroid enhanced coreos/fleet setup in the middle | 23:05 |
jogo | or mesos | 23:05 |
jogo | or ? | 23:06 |
adrian_otto | for every CI customer, I need an instance of kubernetes. | 23:06 |
adrian_otto | or mesos, or some other single tenant system | 23:06 |
adrian_otto | and I need to keep track of all those, and upgrade them, and deal with them | 23:06 |
adrian_otto | our ops guys *hate* the idea of doing that as much as they would hate managing 2000 instances of Jenkins | 23:06 |
adrian_otto | which is why they want this solution to begin with | 23:07 |
adrian_otto | OpenStack has the concept of multi-tenancy at every level | 23:07 |
adrian_otto | and none of the container management systems do | 23:07 |
adrian_otto | because "it would be stupid to run two containers on the same hosts with workloads from different customers in them" | 23:08 |
adrian_otto | but that reasoning is myopic | 23:08 |
adrian_otto | it does not consider that you are using Bare Metal or VM instances in combination with your container solution | 23:08 |
adrian_otto | and that you may never attempt to place two containers on the same host | 23:09 |
adrian_otto | what I will absolutely commit to is avoiding NIH at all costs | 23:10 |
jogo | so this container service would support multi tenancy on a single compute instance? | 23:10 |
apmelton | what is NIH? | 23:10 |
adrian_otto | NIH = Not Invented Here disease: rebuilding everything from scratch | 23:10 |
apmelton | ah | 23:10 |
jogo | apmelton: http://en.wikipedia.org/wiki/Not_invented_here | 23:10 |
adrian_otto | unfortunately the OpenStack community has a good dose of the disease already | 23:10 |
thomasem | lol, very true | 23:11 |
adrian_otto | and that's hard to counter | 23:11 |
jogo | adrian_otto: that is for sure | 23:11 |
adrian_otto | but I will commit to resisting that at every turn | 23:11 |
adrian_otto | in Solum, I have even done that to my own detriment | 23:11 |
thomasem | jogo: It would support multi-tenancy by nature of using OpenStack for the logical grouping of customers' containers. | 23:11 |
jogo | adrian_otto: so am glad we agree about the NIH part :) | 23:11 |
jogo | thomasem: so seprate compute resources per user | 23:12 |
thomasem | yeah | 23:12 |
jogo | where a compute resource is something that nova produces | 23:12 |
thomasem | yeah | 23:12 |
apmelton | jogo, you mentioned before still paying for an instance even if you aren't using all of it (not paying per container) | 23:14 |
apmelton | think of instances as a reservation of space to grow and spawn containers near each other | 23:14 |
thomasem | In my mind, by using Nova we won't have to worry as much about scheduling: we can have a thin API that gives customers the advanced features of containers without standing up full control planes, doubling up on scheduling, or giving up the seamlessness of multi-tenant compute resources for separation and billing. | 23:15 |
adrian_otto | thomasem: +1 | 23:15 |
thomasem | If we went with an existing solution, we'd introduce a bunch of waste. | 23:15 |
thomasem | either on our side or our customers | 23:16 |
thomasem | neither of which is desirable. | 23:16 |
mtesauro | thomasem +1 | 23:16 |
adrian_otto | thomasem: +1 | 23:16 |
thomasem | When there's so much overlap between Nova and other existing solutions already, save for multi-tenancy. | 23:16 |
jogo | so if the cloud operator is managing the container service I agree running 1 per user is bad | 23:16 |
thomasem | They would be in the case of providing PaaS-like solutions. | 23:17 |
apmelton | jogo, well, bad for the user, it'd bring more money in for the operator :P | 23:17 |
jogo | apmelton: hehe | 23:17 |
thomasem | Lol | 23:17 |
thomasem | that's very true | 23:17 |
adrian_otto | apmelton: aggregate income is lower that way. volume economics apply. | 23:17 |
jogo | so as consumer of a public cloud | 23:18 |
thomasem | The problem, though, is we could prevent pain on both ends. | 23:18 |
adrian_otto | and profit is lower too because oversubscription rates are not as dense | 23:18 |
thomasem | And I'd personally rather have less overhead so I can sell more resources to other customers. | 23:18 |
jogo | when would I use this service over running my own 'container service' (mesos coreOS docker etc) on top of a cloud | 23:18 |
adrian_otto | thomasem: exactly | 23:18 |
adrian_otto | jogo, I have this idea for a new mobile app with this innovative backend, let me whip it up! | 23:19 |
adrian_otto | I will just provision it on my Rackspace cloud account | 23:19 |
adrian_otto | oh, that's going to cost me how much? | 23:19 |
adrian_otto | oh, damn. | 23:19 |
adrian_otto | nevermind. | 23:19 |
adrian_otto | I want hosting an application to cost about as much as making a phone call | 23:20 |
adrian_otto | it should be dirt cheap to run processes. | 23:20 |
adrian_otto | particularly if they are stateless things that can go to sleep | 23:21 |
jogo | adrian_otto: but if all I have to do is spin up a single compute instance and then start playing, that's easy | 23:22 |
adrian_otto | every single openstack operator worldwide should have an answer for this scenario. | 23:22 |
jogo | how is that different billing-wise than using the proposed container service | 23:22 |
thomasem | Think of the undercloud implications | 23:22 |
adrian_otto | it allows operators to bill in new ways | 23:22 |
adrian_otto | like per second instead of per hour | 23:22 |
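A toy illustration of the billing-granularity point being made here; the hourly rate and runtime are made-up numbers, not anything from the discussion:

```python
import math

# Assumed instance price and workload duration (illustrative values only).
RATE_PER_HOUR = 0.04          # $/hour
runtime_seconds = 90          # a short-lived, stateless process

# Per-hour billing rounds any partial hour up to a whole hour.
per_hour_cost = math.ceil(runtime_seconds / 3600) * RATE_PER_HOUR

# Per-second billing charges only for the seconds actually consumed.
per_second_cost = runtime_seconds * (RATE_PER_HOUR / 3600)

print(f"per-hour billing:   ${per_hour_cost:.4f}")    # $0.0400
print(f"per-second billing: ${per_second_cost:.4f}")  # $0.0010
```

For short-lived container workloads the gap between the two models is a factor of 40 in this example, which is the economic argument for sub-hour granularity.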
jogo | adrian_otto: why can't that be done now? | 23:22 |
adrian_otto | because starting a VM is expensive | 23:23 |
adrian_otto | it's not a subsecond operation | 23:23 |
apmelton | jogo, if you want to spin up a 128M container, and your service provider provides a 128M instance from nova, there will be no extra space | 23:23 |
jogo | adrian_otto: sure that is a bit of a pathological case | 23:23 |
adrian_otto | jogo, let's make a bet and regroup in 5 years! | 23:24 |
thomasem | lol | 23:24 |
jogo | well pathological cases do happen | 23:24 |
thomasem | Alrighty, I've got to run. Have a great one. I'll catch y'all later! | 23:25 |
adrian_otto | I don't want to be perceived as a Chicken Little, but consider this. If OpenStack totally misses the boat for containers, it may be completely eclipsed. All of us who have made huge investments here will be kicking ourselves. I'm offering us insurance for that contingency. | 23:25 |
jogo | adrian_otto: you have a similar pathological case with the container service. if I have 129M of containers I need double the servers and one is mostly empty | 23:26 |
jogo | adrian_otto: yeah I agree with that statement | 23:26 |
jogo | adrian_otto: I would like to see OpenStack have a strong container story | 23:26 |
adrian_otto | jogo: not if I am running a private cloud with the nova virt type of libvirt/LXC | 23:26 |
*** thomasem has quit IRC | 23:26 | |
jogo | but that doesn't mean it needs to be the container service you are proposing | 23:27 |
jogo | there are many ways to solve it | 23:27 |
*** thomasem has joined #openstack-containers | 23:27 | |
adrian_otto | so far this is the only plan that has any sort of consensus at all | 23:27 |
apmelton | jogo you have that 129M issue in both cases | 23:27 |
jogo | this is part of my bigger issue with: OpenStack does a bad job of supporting its surrounding ecosystem | 23:27 |
adrian_otto | is there an alternative that will get better support, if so we should consider it | 23:27 |
jogo | its all about can it use the trademark or not | 23:28 |
apmelton | the size of the instance spun up to run a container will always be, at a minimum, the smallest flavor that can support it | 23:28 |
apmelton | no matter what service is handling it | 23:28 |
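A sketch of the packing argument apmelton is making: a container lands on the smallest Nova flavor that can hold it, so the 129M case jogo raised spills into the next flavor size regardless of which service does the placement. The flavor sizes here are assumptions, not a real deployment's flavor list:

```python
# Hypothetical memory-based flavor ladder, in MB, smallest first.
FLAVOR_SIZES_MB = [128, 256, 512, 1024]

def smallest_fitting_flavor(container_mb):
    """Return the smallest flavor (in MB) that can hold the container."""
    for size in FLAVOR_SIZES_MB:
        if size >= container_mb:
            return size
    raise ValueError("container too large for any flavor")

print(smallest_fitting_flavor(128))  # 128 -> perfect fit, no waste
print(smallest_fitting_flavor(129))  # 256 -> 127 MB of the instance stranded
```

The one-MB overshoot doubling the reserved footprint is the pathological case under discussion; it applies equally to a user-run Kubernetes cluster on Nova instances and to the proposed container service.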
jogo | adrian_otto: make it easier for users to spin up single tenant container services | 23:28 |
jogo | give them more flexibility | 23:28 |
jogo | don't try to become the de facto standard via trademark | 23:29 |
adrian_otto | jogo, we have heat templates. | 23:29 |
adrian_otto | done. | 23:29 |
jogo | do it through technical prowess | 23:29 |
jogo | adrian_otto: so make it super easy, document it, test it. solve the auto-scaling of compute resources case | 23:29 |
jogo | etc | 23:29 |
adrian_otto | jogo, we did that with docker and a libswarm backend. | 23:30 |
adrian_otto | but that's not going to solve containers for all openstack operators. | 23:30 |
jogo | adrian_otto: why not? | 23:30 |
jogo | why can't the compute program say, the default way to do this is use docker + libswarm | 23:31 |
jogo | and add that into devstack etc | 23:31 |
jogo | document it in offical docs | 23:31 |
jogo | whatever | 23:31 |
adrian_otto | because the truth is that most openstack cloud operators are still trying to find out what containers even are, let alone how to wire them up to their clouds as managed services | 23:31 |
jogo | adrian_otto: so taking a step back here | 23:31 |
apmelton | jogo, because docker + libswarm is very limited in features | 23:31 |
*** thomasem has quit IRC | 23:31 | |
jogo | OpenStack has taken the big tent approach for a while | 23:32 |
apmelton | you don't get intelligent scheduling | 23:32 |
jogo | but IMHO it's not really working | 23:32 |
jogo | apmelton: mesos and its frameworks can solve that | 23:32 |
jogo | anyway, as we grow 'OpenStack' into a big tent | 23:32 |
jogo | it dilutes the brand if some of the things in the brand fail (think Ceilometer 6 months ago) | 23:32 |
*** mtesauro has quit IRC | 23:33 | |
jogo | given our current track record I don't think the big tent approach is right for us | 23:33 |
jogo | big tent crushes the ecosystem | 23:33 |
adrian_otto | I don't see this as a big tent issue | 23:33 |
adrian_otto | I see this as a way to give cloud operators a working containers solution, and points to integrate innovations of their own | 23:34 |
jogo | I think the way to grow 'OpenStack' is to have a solid set of foundational services and grow the 3rd-party ecosystem around it | 23:34 |
adrian_otto | it's more like a pluggable middleware than a vertically integrated stack | 23:34 |
jogo | in https://dague.net/2014/08/26/openstack-as-layers/ | 23:34 |
jogo | IMHO we should drop layer 4 | 23:35 |
adrian_otto | you don't see a containers API as a foundational substrate? | 23:35 |
jogo | and promote a ecosystem around it | 23:35 |
jogo | adrian_otto: well maybe it can be seen that way | 23:35 |
jogo | but there are lots of tools that do something similar | 23:35 |
adrian_otto | just like trove is | 23:35 |
jogo | and we have tons of chances to get it wrong | 23:35 |
jogo | adrian_otto: IMHO trove is not foundational | 23:35 |
jogo | err layer 1 | 23:35 |
jogo | to use the terms Sean is using | 23:35 |
adrian_otto | I see containers service at level 2. | 23:36 |
jogo | adrian_otto: it consumes layer 1 things and builds upon them | 23:36 |
jogo | like the rest of layer 4 | 23:36 |
jogo | I would love to see OpenStack promote container adoption and usage | 23:37 |
jogo | I just don't think the best way to do that is with our own special OpenStack container service | 23:37 |
jogo | I am afraid by coupling it with OpenStack we would hold it back | 23:37 |
jogo | we have a lot of procedure and overhead due to the integration of all the things and our stability requirements. | 23:37 |
jogo | adrian_otto: does that make sense? | 23:40 |
jogo | adrian_otto: it very well may not | 23:40 |
adrian_otto | I understand your position clearly, but I disagree with it. | 23:40 |
jogo | it's almost the end of the day | 23:40 |
jogo | adrian_otto: so I think I understand your view, but can you re-iterate just to make sure I am clear | 23:41 |
adrian_otto | I see the power of the OpenStack community is derived from the ability to assemble and conduct open development and build community around it. | 23:41 |
adrian_otto | building an ecosystem is something that we have not *really* committed to yet as a community. | 23:42 |
jogo | adrian_otto: and why can't that community be larger than things under the 'OpenStack' umbrella | 23:42 |
adrian_otto | we have committed to working on the projects inside the ecosystem. | 23:42 |
adrian_otto | I think it should be. | 23:42 |
adrian_otto | but there is a practical reality as well. | 23:42 |
adrian_otto | Most OpenStack sponsors are reluctant to work on projects that are not in the OpenStack fold | 23:43 |
jogo | adrian_otto: agreed we have not tried to grow the ecosystem to date, but I think that is the way forward | 23:43 |
adrian_otto | I'm not talking about the diamond level guys like us | 23:43 |
adrian_otto | I'm talking about the gold level guys | 23:43 |
jogo | adrian_otto: yeah, so that part scares me. if it isn't openstack don't work on it | 23:43 |
adrian_otto | that's a fundamental truth today. | 23:44 |
adrian_otto | as much as we both hate it | 23:44 |
adrian_otto | and I'm trying to activate on ideas now, in the today we have. | 23:44 |
jogo | adrian_otto: another idea, one that I have yet to fully formulate, is: have a more separated workflow for layer 4 services | 23:44 |
adrian_otto | I'll help you get to the better tomorrow too, but I think that needs to be more planned and with more buy in than we have now. | 23:44 |
adrian_otto | jogo: that's definitely a good idea | 23:45 |
jogo | where we can bless multiple projects to do the same thing etc | 23:45 |
adrian_otto | one example I can point to of a successful ecosystem that has a higher degree of federation | 23:45 |
jogo | so it's the same as the ecosystem but it can use the OpenStack trademark because somehow that is what we are all hung up on | 23:45 |
adrian_otto | is the Apache Foundation | 23:45 |
jogo | yeah, they have a very different model | 23:46 |
adrian_otto | basically projects run without cross project integration as a goal | 23:46 |
jogo | right | 23:46 |
jogo | so maybe we need a apache foundation style wing | 23:46 |
adrian_otto | and in that sort of a world, your tents construct works great | 23:46 |
adrian_otto | so I think we do both want the same thing for the long term of OpenStack | 23:47 |
adrian_otto | but the method of getting there from here is not clear | 23:48 |
adrian_otto | not yet, not to me. | 23:48 |
adrian_otto | there is a fundamental difference between Apache and OpenStack | 23:48 |
adrian_otto | we have an ecosystem here of API enabled services that are growing more and more interdependent | 23:49 |
jogo | adrian_otto: so yes we agree on that | 23:49 |
jogo | adrian_otto: I don't know how to get there either | 23:49 |
jogo | adrian_otto: but I think its one of the biggest issues we as a community are facing today | 23:49 |
adrian_otto | Apache has a bunch of disparate applications that are only really unified by themes. | 23:50 |
jogo | adrian_otto: they have a unified theme? | 23:50 |
jogo | adrian_otto: I didn't even know that | 23:50 |
adrian_otto | well, some might argue they did at one point | 23:50 |
jogo | adrian_otto: I thought they were unified by some infra and legal and that is it | 23:50 |
adrian_otto | that has sprawled | 23:50 |
adrian_otto | jogo: LOL, fair enough | 23:50 |
jogo | adrian_otto: but having unified legal and infra is huge | 23:50 |
adrian_otto | is this a discussion that we are having at the OpenStack BoD level? | 23:51 |
jogo | and something we should foster for ecosystem things if they don't already have a home | 23:51 |
jogo | adrian_otto: at some point maybe | 23:51 |
adrian_otto | because this really strikes me as business that needs to be dealt with between the BoD and the TC | 23:51 |
jogo | adrian_otto: I don't disagree per-se | 23:52 |
adrian_otto | our jobs are to make things work for OpenStack users within the frameworks we have | 23:52 |
jogo | adrian_otto: but it would be good to make the BoD aware of the brewing storm | 23:52 |
jogo | at the very least | 23:52 |
jogo | so if I may try to summarize what we agree on: | 23:52 |
apmelton | I think that hits at what I want adrian_otto | 23:52 |
apmelton | I want the containers service to fit into the OpenStack framework | 23:53 |
apmelton | tightly fit | 23:53 |
apmelton | for instance, we have tools built around openstack's styles of notifications, I want this service to work with that | 23:53 |
apmelton | I think there is value in having a tightly coupled service | 23:54 |
adrian_otto | jogo: we should think on that further to find the most productive way to address that set of concerns | 23:54 |
jogo | currently OpenStack only knows how to grow a big tent. Sometimes we pick the winner in a race before there is actually one and we get it wrong. This hurts the brand and hurts development. The big tent model breaks down when there is an existing solution that lives somewhere else | 23:54 |
adrian_otto | I'm worried of causing knee-jerks if there is a more refined and smooth way to iterate toward what we want | 23:54 |
adrian_otto | apmelton: +1 | 23:55 |
jogo | with our current model of the integrated gate and tightly coupled development work, this model isn't scalable much further | 23:55 |
jogo | To move to a big ecosystem model there are two use cases to resolve | 23:56 |
adrian_otto | jogo, the bottleneck is the infra team, correct? Or are there other concerns too? | 23:56 |
jogo | * supporting existing services, so we don't have to reinvent something to bless it | 23:56 |
jogo | * foster the built for OpenStack ecosystem, open it up to a unified development process, legal and trademark (?) | 23:57 |
jogo | adrian_otto: no, not really | 23:57 |
jogo | adrian_otto: I commented on that in one of the ML threads | 23:57 |
jogo | adrian_otto: infra is part of the limit sure. (We can use more rax test VMs) | 23:58 |
apmelton | I'd think the bottleneck is really that you can only test so many integration paths | 23:58 |
jogo | and HP | 23:58 |
jogo | apmelton: yup | 23:58 |
jogo | apmelton: with such a tightly integrated system we have an N^2 issue | 23:58 |
jogo | we have no good way of rolling out changes across all projects too | 23:59 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!