ttx | Re: elections, I saw threads from asettle, jroll and me announcing we would not run for reelection. That means we'll have at least one open seat, despite the reduction in size | 09:11 |
AJaeger | mnaser, ttx, evrardjp, we lost specs.openstack.org/openstack/openstack-specs and I pushed https://review.opendev.org/713869 to populate it again and fix a few small problems. Could you review and merge, please? I know it's deprecated but let's have it published and updated... | 12:54 |
evrardjp | well, it seems I don't have access to this. | 12:56 |
evrardjp | I reviewed | 12:56 |
ttx | me neither | 12:56 |
evrardjp | looks good. | 12:56 |
ttx | heh https://review.opendev.org/#/admin/groups/1372,members | 12:57 |
evrardjp | wow I did miss discsuss | 12:57 |
ttx | evrardjp: It's my second superpower. Spotting typos. Not very useful during a Zombie Apocalypse | 12:57 |
evrardjp | that's quite a graveyard of many | 12:57 |
evrardjp | we should probably have tc as a group in this. | 12:58 |
evrardjp | WDYT? | 12:58 |
AJaeger | fixed the typo - thanks. | 12:58 |
AJaeger | I thought you had access - seems I looked at the wrong place ;( | 12:59 |
smcginnis | Wow, that is a long interesting list of names. Agree, it would be much better to add some groups in there. | 12:59 |
ttx | Looks like a snapshot of openstack circa 2013 | 12:59 |
evrardjp | yes that sounds about right | 13:00 |
smcginnis | Good to see at least several still-active names in there. | 13:00 |
evrardjp | automagically was OSA spec liaison at about that time | 13:01 |
evrardjp | smcginnis: yes indeed! | 13:01 |
evrardjp | though it's probably worth cleaning, and assigning this to <surviving members> + tc + docs ? | 13:01 |
evrardjp | or we just leave it as is, and add AJaeger as maintainer for life | 13:02 |
evrardjp | :D | 13:02 |
AJaeger | Ok, so I see mugsie and diablo_rojo in the list, let's ping them and ask to review 713869 and update the list. | 13:02 |
evrardjp | it's very high volume | 13:02 |
AJaeger | evrardjp: I would just abandon everything open ;) | 13:02 |
evrardjp | yeah ricolin is also active in this chan :) | 13:02 |
evrardjp | and jroll too | 13:03 |
AJaeger | Great | 13:03 |
evrardjp | many folks can still merge and change this, just not the initial ping list :) | 13:03 |
evrardjp | anyway, thanks for the update AJ! | 13:03 |
AJaeger | thanks, evrardjp. | 13:05 |
AJaeger | Yeah, initial ping list can tag the repo and abandon changes - but not approve ;/ | 13:06 |
jroll | oh nice, thanks | 13:07 |
jroll | idk the approval rules there, but I'm happy to do so, if that's needed | 13:07 |
mugsie | jroll: I think it was supposed to be consensus | 13:11 |
mugsie | but who knows now. that list is a blast from the past :D | 13:12 |
jroll | right? | 13:12 |
jroll | that's a lot of people to dig up for 50% consensus | 13:13 |
mugsie | I +W'd it. if there are issues, please revert, and shout at me :D | 13:16 |
jroll | can we also shout at you if there are not issues? | 13:17 |
mugsie | i would expect so, yes | 13:17 |
jroll | perfect | 13:17 |
mugsie | I just may not listen :) | 13:17 |
AJaeger | thanks, mugsie ! | 13:17 |
* AJaeger won't shout | 13:17 | |
fungi | also that cross-project-spec-liaisons group is owned by the cross-project-spec-liaisons-chair group which has tech-committee-chair included so anyone in https://review.opendev.org/#/admin/groups/206,members can update the cross-project-spec-liaisons membership if desired | 13:26 |
AJaeger | fungi: oh, so let's say evrardjp could make the ACL changes? Cool :) | 13:27 |
fungi | well, the group membership | 13:37 |
AJaeger | ok | 13:39 |
evrardjp | ok will do | 13:48 |
evrardjp | wow I should probably clean up that group too | 13:48 |
fungi | northern hemisphere spring equinox is at 19:50 utc today, so if you wait 6 hours you could rightfully call it spring cleaning? | 13:55 |
evrardjp | haha | 14:08 |
ricolin | o/ | 15:00 |
*** jamesmcarthur has joined #openstack-tc | 15:01 | |
evrardjp | o/ | 15:02 |
jungleboyj | o/ | 15:03 |
gmann | o/ | 15:03 |
evrardjp | anything for office hours? I suppose folks want to talk at least about covid19 and ptl-less openstack ! | 15:04 |
jungleboyj | I am going to lock myself away at some point today and update the Survey Response. | 15:04 |
evrardjp | doesn't the government handle that for you? : p | 15:05 |
evrardjp | (did you see what I did? ;) ) | 15:05 |
jungleboyj | They don't keep my co-workers from interrupting remotely. | 15:05 |
gmann | there is power button things on laptop :) | 15:06 |
jungleboyj | :-p | 15:07 |
njohnston | o/ | 15:08 |
mnaser | how do we feel about openstack projects publishing official docker images | 15:16 |
mnaser | i think its about time we start having something consistent, i know infra/opendev has a really neat set of build images and runtime images | 15:17 |
mnaser | the build images build wheels and the runtime images install from those built wheels and run bindep to get the dependencies installed | 15:17 |
mnaser | it would also help make openstack more approachable for deployments and create some unity, so someone can do a git clone openstack/nova; docker build . | 15:17 |
jungleboyj | mnaser: I have played with docker a bit more lately and I can see the appeal of that. | 15:18 |
mnaser | and with the recent work being done on speculative container jobs, we totally can have devstack deploy from containers (and *only* build the containers needed in CI) | 15:18 |
mnaser | it'll also make devstack nicely platform independent too! | 15:19 |
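[mnaser's imagined "clone and build" workflow, sketched as a script. Everything here is illustrative: no OpenStack repo ships this Dockerfile at this point in the discussion, and the actual `docker build` is only echoed since running it needs a Docker daemon.]

```shell
# Sketch of the proposed UX: clone a project, build its image in place.
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for "git clone https://opendev.org/openstack/nova": a repo that
# would carry a Dockerfile at its root alongside the usual bindep.txt.
mkdir nova && cd nova
cat > Dockerfile <<'EOF'
# Hypothetical single-Dockerfile build at the repo root.
FROM docker.io/library/python:3-slim
COPY . /src
RUN pip install /src
EOF

# With a Docker daemon available, the whole deployment story would then be:
echo "would run: docker build -t nova:dev ."
```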
fungi | it'll probably need a revisit of https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.html | 15:20 |
clarkb | mnaser: except for where docker is podman and podman doesn't act like docker :) | 15:21 |
ricolin | I'm also working on building docker image for heat-agents as well, will be nice if we can have more consistency here:) | 15:21 |
mnaser | clarkb: aha. yeah. there'll always be those subtle small things too | 15:21 |
mnaser | fungi: yeah, i mean, the container ecosystem has seen a lot of change from the past 3 years | 15:22 |
ricolin | alias docker=podman :) | 15:22 |
mnaser | things like multi-stage builds were not a thing before | 15:22 |
mnaser | now we have a whole ecosystem and infrastructure to build containers with _minimum_ amount of dependencies/layers | 15:22 |
mordred | yeah - python-builder/python-base images provide a real easy way to make simple containers from bindep/requirements that should work with any openstack project | 15:27 |
evrardjp | mnaser: that was the intent of the containers SIG -- first agree on bindep usage across projects to be consistent, then start to build containers in a consistent manner | 15:29 |
ricolin | is there any previous decision regarding having our own container repo, or having an official Open Infra docker repo? | 15:29 |
mnaser | lets push things into dockerhub to make lives easier for people | 15:29 |
evrardjp | yeah I would push into dockerhub too | 15:30 |
jungleboyj | I thought there was already someone doing this with OpenStack. | 15:30 |
mnaser | evrardjp: i mean, what's there to agree on that, we already have bindep files, we already have some 'predefined' profiles, so we just.. add dockerfiles | 15:30 |
evrardjp | jungleboyj: there are multiple projects | 15:30 |
evrardjp | for example LOCI and openstack-helm do that | 15:30 |
ricolin | also for what I know, magnum doing their own repo, and heat plan so as well | 15:30 |
evrardjp | the existing bindep is not enough | 15:30 |
mnaser | that is the problem | 15:30 |
mnaser | everyone is doing their own thing | 15:31 |
evrardjp | you need to have extra things | 15:31 |
evrardjp | and so you have many stacks as webserver of choices | 15:31 |
evrardjp | or whatever | 15:31 |
mnaser | yeah, we should add these extras when we add the dockerfile | 15:31 |
mnaser | plop uwsgi in there and call it a day | 15:31 |
mnaser | thats what we use in ci | 15:31 |
mnaser | thats what people should be using | 15:31 |
evrardjp | yeah, that's what I would tend to agree to | 15:31 |
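[A sketch of the "plop uwsgi in there" idea as a uwsgi ini the image could ship. The module path, port, and worker counts are placeholders, not any project's real WSGI entry point.]

```shell
# Write an illustrative uwsgi config of the kind a service image might carry.
cat > api-uwsgi.ini <<'EOF'
[uwsgi]
; serve the API over plain HTTP inside the container
http-socket = :8080
; hypothetical WSGI module; each service would point at its own
module = myservice.wsgi:application
master = true
processes = 4
; exit cleanly when the container is stopped
die-on-term = true
EOF
echo "wrote api-uwsgi.ini"
```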
mnaser | i can drive this but i want us to all be on the same page | 15:32 |
evrardjp | I had an example Dockerfile which had a staged build for very simple images for all the projects | 15:32 |
mnaser | and i would like to follow the pattern established by opendev because that gives us _a lot_ of advantages with speculative image builds | 15:32 |
jungleboyj | Ah, but we don't have a common OpenStack repo for the images. Gotcha. | 15:32 |
evrardjp | jungleboyj: that's not really a problem | 15:32 |
mnaser | im thinking we dont have an openstack/docker-images repo, i'm thinking we have a Dockerfile inside openstack/cinder repo | 15:33 |
evrardjp | that's what I think would be good | 15:33 |
evrardjp | the problem with this mnaser, is that we will allow drift | 15:33 |
mnaser | what is going to drift exactly? | 15:33 |
evrardjp | well all the projects can do whatever they want in their docker file | 15:34 |
evrardjp | and let it rot too | 15:34 |
evrardjp | what I think would be better if we had some proposal bot to update the main Dockerfile | 15:34 |
mnaser | it won't rot if we test it across those projects :) | 15:34 |
evrardjp | and leverage the bindep appropriately | 15:34 |
evrardjp | you don't need to go that far as changing everything | 15:34 |
mnaser | it won't rot if i convince people running devstack with docker is the way to go, hah | 15:34 |
mnaser | it's about user experience, people want to build an image when checking out a repo, 99.9999% of projects that have docker images do that | 15:35 |
mnaser | why do we try and become unicorns again | 15:35 |
evrardjp | you know that projects have been doing full containers for a while? OSA had done that in the past, and kolla is doing it | 15:35 |
mnaser | yes | 15:35 |
mnaser | i know that. | 15:35 |
evrardjp | OSA has decided to not pursue that road because some things are just easier to deal with outside a container | 15:36 |
evrardjp | my point was it would be good to agree on the first steps | 15:36 |
njohnston | this sounds like a great discussion on the ML. I am sure the tripleo guys can comment on the edge cases they have seen - I know EmilienM has been suggesting devs use tripleo-standalone instead of devstack for personal dev as it gives you a containerized deployment. It'd be nice to get that as a standard | 15:36 |
mordred | well - we have a builder image that makes the dockerfile tiny | 15:36 |
mnaser | ^^^ yeah | 15:36 |
mnaser | i think people should look at the builder image stuff... | 15:36 |
EmilienM | hi | 15:36 |
mordred | https://opendev.org/openstack/python-openstackclient/src/branch/master/Dockerfile | 15:37 |
mnaser | you seriously don't need much more than a bindep file and the dockerfile does everything for you. | 15:37 |
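[The python-builder pattern mordred links above follows roughly this two-stage shape. This is reproduced from that pattern, so the image names and helper commands (`assemble`, `install-from-bindep`) should be checked against the linked python-openstackclient Dockerfile rather than taken as authoritative.]

```shell
# Write out a two-stage Dockerfile in the opendev python-builder style.
cat > Dockerfile <<'EOF'
# Stage 1: build wheels for the project and everything bindep says it needs.
FROM docker.io/opendevorg/python-builder as builder
COPY . /tmp/src
RUN assemble

# Stage 2: a slim runtime image; install only the prebuilt wheels.
FROM docker.io/opendevorg/python-base
COPY --from=builder /output/ /output
RUN /output/install-from-bindep
EOF
echo "wrote two-stage Dockerfile"
```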
njohnston | EmilienM: we're talking about openstack projects publishing official docker images | 15:37 |
mordred | incidentally - with speculative images now - you can do things like : docker run -it -v$HOME/.config/openstack:/etc/openstack --rm osclient/python-openstackclient:change_711246_latest openstack --os-cloud=vexxhost server list | 15:37 |
EmilienM | njohnston: ok | 15:37 |
mnaser | ^ yes, this whole effort is not much more than adding that to all repos _and_ maybe setting up test jobs which use them | 15:38 |
mnaser | IMHO you shouldn't need more than that in your repo. | 15:38 |
mordred | builds tiny images that do not do anything other than install the python | 15:38 |
mnaser | and it also installs bindep in the build image (with profile 'build' afaik) | 15:39 |
EmilienM | I like how things happen "incidentally" :D | 15:39 |
EmilienM | mordred: really cool command ^^^ | 15:39 |
mordred | EmilienM: \o/ | 15:39 |
mnaser | the big thing that made me think about this is mordred published this to osclient/openstackclient | 15:39 |
ricolin | mordred, question, who owns https://hub.docker.com/u/opendevorg ? | 15:39 |
mordred | the image jobs use the promote pipeline to retag whatever the gate image was after it lands - so we don't have to rebuild in post | 15:39 |
mordred | ricolin: the opendev team | 15:40 |
mnaser | to me that's just sad and bad ux | 15:40 |
mnaser | we wouldn't publish to opendevorg... we'd publish to openstack/ | 15:40 |
mnaser | (side note: there seems to be a lot of confusion that openstack-infra => opendev) | 15:40 |
mordred | mnaser: I believe ricolin is curious about python-builder and python-base | 15:40 |
mnaser | oh | 15:40 |
mnaser | yes, that | 15:40 |
clarkb | mordred: I think that is an important distinction. Building images on all the different platforms misses the point and is a crazy use of resources. Focusing on building an image that is known to work is much better | 15:40 |
mordred | since those are opendevorg projets | 15:40 |
ricolin | yep | 15:40 |
evrardjp | yeah I would prefer if we had an official openstack namespace on dockerhub, and image name = project | 15:40 |
EmilienM | hub.docker.com is meh, constrained by random HTTP errors (in tripleo we suffered a lot from it; we implemented our own python registry which handles these errors) | 15:41 |
mordred | yes. this is _explicitly_ avoiding building images for different guest os's | 15:41 |
evrardjp | EmilienM: true that | 15:41 |
mnaser | EmilienM: yeah i mean we can totally get registry.opendev.org if someone wants to work on that and publish to both | 15:41 |
ricolin | mnaser, and yes, having one namespace for openstack is definitely the way to go | 15:41 |
clarkb | EmilienM: were there more errors than 1) not using our caches and 2) not using a proper client? | 15:41 |
mnaser | i mean, these days, docker is pretty good at using registries that are not dockerhub | 15:41 |
mordred | we could just as easily publish to quay.io | 15:41 |
EmilienM | i tested quay.io; a bit better but not super stable yet IMO, having a bunch of downtimes & also random errors | 15:41 |
mnaser | publish to ALL THE REGISTRIES | 15:41 |
clarkb | (because those are the two errors I know about and I think we largely know how to address those) | 15:41 |
mordred | I think the main point would be more whether we coudl start publishing some simple images | 15:41 |
evrardjp | yeah I think it's not a problem of where to publish too | 15:41 |
evrardjp | to* | 15:41 |
mordred | evrardjp: yah | 15:41 |
EmilienM | evrardjp: ack | 15:42 |
mordred | the point is more to come to an agreement to maybe publish some super simple single-app images | 15:42 |
mnaser | i guess this might add a small tiny review load on teams having to review changes to bindep/requirements | 15:42 |
evrardjp | if we manage to make the projects accept this new way of building different images, in a simple way, we are winning | 15:42 |
evrardjp | for me to build the different kind of image in a same project, we could use bindep's features | 15:42 |
mnaser | if we want to stay relevant and make openstack approachable and future proof, we need to get containers | 15:43 |
mordred | yeah - I mean - this wouldn't mean that people doing container things HAVE to use these ... just that it's SO EASY to make them that it could add a nice option for people who want them | 15:43 |
mnaser | because people _will_ build their own | 15:43 |
evrardjp | for me it's all about experience too -- you don't want glance to have apache + mod_wsgi and cinder to have uwsgi | 15:43 |
mnaser | i'm hoping all projects work with uwsgi now, actually neutron finally seems to have uwsgi jobs too so thats nice | 15:43 |
evrardjp | mordred: agreed! | 15:43 |
mnaser | i think that was the last project that didn't have them | 15:44 |
njohnston | mnaser: yes in neutron land we were lagging, but we have non-voting jobs now | 15:44 |
evrardjp | also the side effect on working on the bindep all together was that for projects that didn't want to use containers, they could still guess the package list to install by using bindep | 15:44 |
mnaser | uh | 15:44 |
mnaser | this might be a little too much but | 15:44 |
mnaser | i dont think you get to choose if you dont want containers | 15:45 |
evrardjp | for me bindep is a very good thing we have, we just need to make sure it's standardized | 15:45 |
mordred | evrardjp: yah. and then being able to use bindep instead of duplicating it in a dockerfile is also really nice :) | 15:45 |
evrardjp | correct! | 15:45 |
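[What "leverage bindep instead of duplicating it in a dockerfile" could look like. The selectors (`platform:dpkg`, `platform:rpm`) and profile mechanism are real bindep syntax; the package names and the `compile` profile split are purely illustrative, not any project's actual bindep.txt.]

```shell
# Write an illustrative bindep.txt with per-distro and build-only entries.
cat > bindep.txt <<'EOF'
# Run-time libraries, selected per distro family.
libpq5 [platform:dpkg]
postgresql-libs [platform:rpm]

# Only needed while building wheels: activated with a profile,
# e.g. `bindep compile`.
gcc [compile]
libpq-dev [platform:dpkg compile]
postgresql-devel [platform:rpm compile]
EOF
echo "wrote bindep.txt"
```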
slaweq | njohnston: mnaser we have non-voting jobs with uwsgi for long time, I will promote those jobs to be voting this week | 15:45 |
njohnston | So for neutron where the same codebase results in neutron-server as well as neutron-openvswitch-agent, neutron-linuxbridge-agent, etc. would those all be the same container image invoked differently? Or different images? | 15:45 |
mnaser | slaweq: nice! i saw these jobs :) | 15:45 |
mordred | evrardjp: the python-builder image looks at a "build" tag to know what it needs to build wheels, and then installs the untagged stuff into the final image | 15:45 |
slaweq | njohnston: mnaser also, some time ago I was trying to switch to use uwsgi as default in devstack | 15:45 |
ricolin | slaweq, cool! | 15:45 |
mnaser | openstack ships containers. i can have someone push up changes to all the projects because it's important for me (and this is a good time to help improve many more things) | 15:45 |
mordred | evrardjp: so we can have people optimize their bindep files | 15:45 |
slaweq | but there were some problems with grenade jobs IIRC | 15:45 |
evrardjp | mordred: I guess we could extend a little bit with a few profiles ? | 15:46 |
slaweq | so this patch is lost somewhere in my list for now :/ | 15:46 |
slaweq | but I will get back to it :) | 15:46 |
mordred | we could - but so far for all the things we use python-builder with we haven't found a need for additional ones | 15:46 |
mnaser | i don't think we should extend it with a few profiles, i think the image should contain all the dependencies and just different targets. but, we can discuss | 15:46 |
mnaser | i don't want us to get into the rabbit hole of optimizing before even making any progress tbh | 15:46 |
mordred | yeah - I think step one is start with just the super-simple "just do the one thing" | 15:46 |
evrardjp | it's not about that | 15:46 |
mnaser | slaweq: awesome, this makes me really happy | 15:46 |
evrardjp | is there such a project in openstack mordred? | 15:47 |
evrardjp | AFAIK, no | 15:47 |
slaweq | mnaser: njohnston: that's the patch https://review.opendev.org/#/c/694042/ | 15:47 |
mordred | evrardjp: opendevorg/python-builder | 15:47 |
slaweq | I need to rebase it and get back to it | 15:47 |
evrardjp | that's not what I meant | 15:47 |
mnaser | slaweq: thanks for purusing it, i know there was some really hairy bugs at the time when i tried it so im sure it wasnt easy | 15:47 |
slaweq | that's all from me about uwsgi in neutron | 15:47 |
evrardjp | a service project that's easy to ship as a single container | 15:47 |
mordred | all of them | 15:47 |
mordred | it's just python | 15:47 |
evrardjp | I would say the easiest is probably keystone | 15:47 |
evrardjp | it's not | 15:48 |
mordred | oh - let me be a little clearer with the intent here ... | 15:48 |
evrardjp | well I guess it depends on what you put inside the container | 15:48 |
mordred | these are not "this containers runs 5 different service and a rabbitmq to run service X" | 15:48 |
mordred | that's not what this is or wants to be | 15:48 |
evrardjp | let's just take nova's example | 15:49 |
evrardjp | what will the images be? | 15:49 |
mordred | this is "this container contains the keystone app installed so you can run the keystone-api server in one container and the keystone-db service in another container" - and you'll be expected to run db and rabbit and whatnot in other containers | 15:49 |
mordred | evrardjp: nova-api, nova-conductor, nova-scheduler, etc | 15:49 |
evrardjp | ok so we agree on the multiple images, one per usage | 15:50 |
evrardjp | that's a first start :) | 15:50 |
mnaser | i mean fwiw | 15:50 |
mnaser | it's all the same image | 15:50 |
mnaser | we just have multiple targets | 15:50 |
evrardjp | so question, where does libvirt go? | 15:50 |
mnaser | with different entrypoints | 15:50 |
mnaser | not in nova | 15:50 |
evrardjp | the c bindings? | 15:50 |
mnaser | c bindings would naturally come in as runtime dependencies | 15:50 |
mnaser | alongside the libvirt python bindings | 15:51 |
mnaser | libvirt (the binary) is not included. | 15:51 |
mnaser | you can run that however way your heart desires and use the connection_uri to point to it | 15:51 |
evrardjp | I see what you mean | 15:51 |
clarkb | I think evrardjp's concern is that the bindings in container A may not work with application binary in container B | 15:51 |
mordred | exactly. the idea is that these are all intended to be single-process containers | 15:51 |
evrardjp | clarkb: correct | 15:51 |
clarkb | that is a valid concern but libvirt is a bad example as they work hard to have compatibility | 15:51 |
ricolin | It's important we start with the most basic container, which we can build some others on top of if a team decides to do it | 15:51 |
mnaser | clarkb: but also, we have things like "supported libvirt versions" inside nova for example | 15:52 |
evrardjp | clarkb: I just took an example, but there are many of those | 15:52 |
mordred | yah - that's right. if we start with simple "this is what pip install gets you" | 15:52 |
mordred | it's _something_ | 15:52 |
mordred | will it solve everyone's use cases for all deployments? Nope | 15:52 |
evrardjp | my point was agreeing on a first base, not to throw a solution for everyone | 15:52 |
mordred | evrardjp: ++ | 15:52 |
clarkb | evrardjp: well you have to distinguish between C bindings that talk to things in the container and things that don't | 15:52 |
clarkb | and I think with openstack you largely avoid these problems because the deps are sane | 15:52 |
mordred | I guess - the main thing here is - could we let mnaser go off and work on a couple of services to show people? | 15:53 |
evrardjp | clarkb: I think that's exactly what I said , but with words that are less good than yours | 15:53 |
evrardjp | :D | 15:53 |
evrardjp | fine for me | 15:53 |
evrardjp | that was the intent of the container sig | 15:53 |
mordred | might also be easier for people to talk about if we can point at a few specific examples | 15:53 |
evrardjp | so if you're happy to share results, use the containers-sig topic on the ML :) | 15:53 |
evrardjp | mordred: agreed | 15:54 |
mnaser | im gonna speculate that i could probably even do some neat docker-compose job too | 15:54 |
mnaser | just to do a basic func test | 15:54 |
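[mnaser's docker-compose idea, following mordred's single-process-per-container framing (db and rabbit in their own containers, one project image reused with different commands). The `openstack/nova` image name and the service commands are placeholders; no such published image exists at the time of this discussion.]

```shell
# Write an illustrative compose file: one image, several single-process services.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb
  rabbit:
    image: rabbitmq
  nova-api:
    image: openstack/nova
    command: nova-api
    depends_on: [db, rabbit]
  nova-conductor:
    image: openstack/nova
    command: nova-conductor
    depends_on: [db, rabbit]
  nova-scheduler:
    image: openstack/nova
    command: nova-scheduler
    depends_on: [db, rabbit]
EOF
echo "wrote docker-compose.yml"
```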
evrardjp | this is why I also said that I wanted to start with a simple example, for example keystone, and continue forward. | 15:54 |
evrardjp | mnaser: after all of that you can use nomad to start those containers | 15:54 |
evrardjp | when you want to go webscale | 15:55 |
evrardjp | lol | 15:55 |
mnaser | shhh i'm already writing an openstack operator right now | 15:55 |
evrardjp | what with? | 15:55 |
mnaser | just wait till running openstack is cool again because its so easy | 15:55 |
mnaser | kubebuilder. | 15:55 |
evrardjp | I didn't touch kubebuilder yet | 15:55 |
evrardjp | the de-facto standard nowadays | 15:56 |
evrardjp | anyway | 15:56 |
mnaser | if anyone is curious about following that progress: https://opendev.org/vexxhost/openstack-operator | 15:56 |
ricolin | I'm almost about to reach tomorrow! Will keep tracking this when I awake:) | 15:56 |
clarkb | from an infrastructure side of things I would strongly encourage avoiding trying to build these containers on all the base images. Pick one that works for the use case and stick to it. This allows you to keep the problem space smaller, but also reduces the number of jobs necessary as well as required bandwidth and storage. On top of that use the caches we have in place (or add new ones if necessary), and | 15:57 |
evrardjp | there are people which tried this docker-compose stack before, might be worth talking over the ML to share experiences | 15:57 |
clarkb | avoid writing custom clients as much as possible. | 15:57 |
ricolin | mnaser, are you planning to send a post to the ML later? | 15:57 |
mnaser | that even has CI jobs which deploy memcache/mcrouter inside k8s and use that alongside devstack and run tempest on it, so slowly but surely | 15:57 |
clarkb | if you follow the example that corvus and mordred have set up in zuul and opendev that should all largely be handled for you | 15:57 |
evrardjp | clarkb: agreed | 15:57 |
mnaser | clarkb: i absolutely would support no other way than doing it the way opendev/zuul has done it. let's align more stuff so we deduplicate work. | 15:58 |
clarkb | ++ | 15:58 |
mnaser | clarkb: it also gives us huge benefits like you can probably run some of openstack's build jobs and make sure you dont break us | 15:58 |
mnaser | i dont think other people probably want to do that ;P | 15:58 |
clarkb | also we've sorted out ipv6 and you don't want to sort that out again | 15:59 |
mnaser | ricolin: i think i would like to get a few patches in and kinda show-off something that's functional | 15:59 |
mnaser | so uh, cinder/glance/keystone/nova sorry for the spam in advance :) | 16:00 |
njohnston_ | i bet placement would also be a good candidate | 16:01 |
mnaser | oh yeah, pretty much everything in the base devstack job | 16:01 |
fungi | tons of scrollback, trying to catch up after meetings, but one of the major drivers for the 20170530-binary-artifacts resolution was that we don't want openstack to be liable if it distributes a vulnerable version of, say, glibc in a container. we don't have reporting channels nor contributor bandwidth to deal with security-related issues in non-openstack software, so any "security support" would be | 17:25 |
fungi | best effort and on whoever's maintaining the images to sort out (or declare they don't care) | 17:25 |
njohnston | hey, looks like maybe | 17:26 |
njohnston | sorry, wrong channel | 17:27 |
fungi | also any image building automation (jobs) will need people caring after them. if they break, and we stop publishing images for a while as a result, people are left with possibly lengthy gaps where they're running vulnerable versions | 17:29 |
fungi | i am kinda curious though how "official" openstack images would be different from "kolla" or "loci" images (and what makes those not official by comparison). how will those projects feel about someone reinventing what they're doing and saying it's the one true way and their solutions are second-rate? | 17:30 |
mnaser | fungi: unless someone says they'd like to step up and make sure those images keep being published | 17:30 |
mnaser | fungi: it's just about simplifying and unifying the experience. kolla builds "system" containers, not really the expected way of getting containers these days | 17:31 |
fungi | also for bindep, it's designed specifically for the case of "known distro includes these packages we require" and so doesn't handle 1. packages outside the distro's blessed set, nor 2. anything beyond saying "these are the names of packages you don't seem to have installed" really | 17:31 |
mnaser | fungi: loci is a lot like what opendev does, but it has a lot of complexities (and i think has struggled with maintainership), the idea of some other remote external repo just seems confusing | 17:32 |
mnaser | i expect issues to come up as we go through them, but i think for most of the _python_ bits, we're probably ok | 17:33 |
clarkb | fungi: to me the big difference is kolla in particular is a distribution with a fairly opinionated deployment tool | 17:34 |
clarkb | fungi: the idea proposed with these images is they'd just be the minimalist thing that can run $project's command | 17:34 |
fungi | yeah, i get that kolla and loci approaches are different | 17:34 |
fungi | is there a reason not to evolve loci toward that? | 17:35 |
mnaser | loci is largely a derivative of what opendev does | 17:35 |
fungi | i'm just getting a strong https://xkcd.com/927/ vibe, that's all | 17:35 |
mnaser | it's somewhat of a marketing thing too, "docker pull loci/who?" | 17:35 |
clarkb | to me loci still has the multi-platform problem | 17:35 |
clarkb | and overcomplicated setup | 17:35 |
*** evrardjp has quit IRC | 17:36 | |
clarkb | you don't need all that if you pick a single platform and stick to bindep and pip install | 17:36 |
fungi | the loci and kolla namespaces are because we don't want to imply that those containers come with support from the entire openstack community, but only from the loci or kolla teams | 17:36 |
mnaser | ^ and a "requirements" layer which is confusing and non trivial too | 17:36 |
clarkb | if loci wants to evolve into ^ then great | 17:36 |
*** evrardjp has joined #openstack-tc | 17:36 | |
mnaser | i think there's big value in docker build . inside a folder and getting an openstack container | 17:36 |
fungi | in fact, the tc told kolla and loci (with that resolution) that they needed to use separate namespaces | 17:36 |
mnaser | at the time, building containers was a whole different story | 17:37 |
mnaser | it was still early, no clear patterns were emerging | 17:37 |
mnaser | multi-stage builds didnt exist | 17:37 |
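The multi-stage pattern mentioned here, roughly in the spirit of the opendev python-builder/python-base split discussed later in this log, might look like the sketch below. The base image tags, apt packages, and pip steps are illustrative assumptions, not the actual opendev Dockerfiles:

```shell
# Sketch of a multi-stage Dockerfile: build wheels (and pull in
# compilers) in a throwaway "builder" stage, then copy only the built
# artifacts into a slim runtime stage.
cat > Dockerfile <<'EOF'
FROM python:3.8-slim AS builder
RUN apt-get update && apt-get install -y gcc libffi-dev
COPY . /src
RUN pip wheel --wheel-dir /wheels /src

FROM python:3.8-slim
COPY --from=builder /wheels /wheels
RUN pip install --no-index --find-links /wheels /wheels/*.whl
EOF

# The final image never contains gcc or the build headers.
grep -c '^FROM' Dockerfile
```

This is why the earlier pre-multi-stage tooling needed so much extra machinery: without a builder stage, keeping compilers out of the runtime image required separate scripts and intermediate images.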
fungi | i get that. but why does the fact that people have been building containers in different ways mean that now you have to start over from scratch? or that every time there's some innovation in how containers are used that you have to start over for that matter, instead of fixing the thing you've got to incorporate modern concepts? | 17:38 |
fungi | that's 100% nih | 17:38 |
fungi | that thing which exists is wrong, rather than try to make it right i'll just ignore it and redo everything myself, possibly discovering a lot of the same problems along the way | 17:39 |
clarkb | fungi: I think the assertion is that the ways people have been building those containers is not good | 17:40 |
clarkb | except no one wants to say that (so I did sorry) | 17:40 |
clarkb | and there has been little indication of them changing | 17:40 |
fungi | because people have tried to get involved to change them and been told to go away? | 17:41 |
clarkb | fungi: specifically I brought this up with kolla at the first denver ptg | 17:41 |
clarkb | they all agreed this would be a better approach but no one had the interest in implementing it | 17:41 |
clarkb | because its significant effort particularly when you've got the preexisting deployment tooling coupled in already | 17:41 |
clarkb | (you don't want to break those use(rs| cases) | 17:41 |
clarkb | this may allow them to break that chicken and egg relationship by having a new thing on the side they can incorporate over time | 17:42 |
mugsie | the overhead of a dockerfile per project is way less than a central thing like loci, and doesn't require yet another team to maintain it day to day. if we have people who are motivated at the moment to set up a sane openstack/base image that we can build on top of for each service, I see value in that | 17:43 |
mugsie | e.g. if loci's ability to make designate images broke, I would have 2 places to debug, where if it is in my repo, I can manage it better | 17:44 |
mugsie | if I don't care, we are in the same place as we were with loci | 17:44 |
fungi | i'm just curious what the migration path is, if we agree this is superior... can loci basically become you have a dockerfile in every service repo? | 17:44 |
mugsie | basically | 17:44 |
fungi | our project silos make this very hard. loci centralized the container configuration because individual projects didn't want to have to review it. that makes it harder for some projects which do care to keep that updated. on the other hand distributing it into all the different teams makes it hard for someone who has an interest in keeping the set working when some teams ignore patches for it | 17:46 |
mugsie | yeap, that is the trade off | 17:46 |
mugsie | adding CI jobs can help to make sure they're looked at, but YMMV per team | 17:46 |
fungi | at its most basic level, the proposal is to expect/force every team to care about "their" dockerfiles, vs the loci and kolla approaches of having a team who cares about the dockerfiles for all projects | 17:48 |
mugsie | yep. I understand the ask, but it is not something I would recommend we force in the near future, but we should flag it for projects | 17:49 |
clarkb | fungi: for me its more about building minimal images in a consistent manner using modern container build tooling | 17:49 |
clarkb | fungi: you can do that with or without the specifications in repo | 17:49 |
fungi | if you do it without the specifications in repo, then why couldn't that just be the next evolution of loci? | 17:50 |
fungi | it's not like source code is carved into stone | 17:50 |
clarkb | fungi: it could be if we are ok with breaking loci's existing users | 17:50 |
clarkb | (I doubt anyone wants to figure out a migration) | 17:50 |
mugsie | true, I think mnaser's proposal was in repo, but it could be elsewhere. there would have to be other tooling for the base images etc, but I like the dockerfile in the repo | 17:50 |
fungi | that's what major release numbers are for ;) | 17:50 |
fungi | or if the dockerfiles are in each repo, then maybe the next evolution of loci is the jobs which are being run to build images from those? | 17:51 |
mnaser | mugsie: i just feel that we should simplify the world for users, instead of "if you want docker images go clone this other repo and then do some magic" | 17:51 |
clarkb | fungi: those jobs are basically already defined in opendev/base-jobs :) | 17:51 |
clarkb | fungi: but you do need to inherit from them and sprinkle in specifics | 17:51 |
fungi | and need no care and feeding, or changes to accommodate openstack's weird software choices? | 17:52 |
mugsie | mnaser: ++ | 17:52 |
clarkb | fungi: not as long as bindep is populated properly and pip install . works | 17:52 |
clarkb | fungi: we've intentionally built them to work well for python (because we too do a lot of python) | 17:52 |
mnaser | mugsie: also, yeah, the jobs already exists. the opendev builder image and stuff is _seriously_ well thought out | 17:52 |
clarkb | and maybe thats the right way to frame it | 17:53 |
fungi | i am kinda curious how the "average docker user" will be satisfied by cloning nova and building and running that image, without also having to do the same for additional repos, but since i'm not really a docker user at all i probably underestimate the degree to which they're used to deploying complicated software consisting of dozens of separate microservices | 17:53 |
clarkb | by tackling this from the perspective of "how do we properly build docker images for python applications" we've built something that works well pretty globally | 17:54 |
mnaser | i already have keystone images with minimal changes (and by that i mean, tagging bindep with the correct compile/test profiles and ensuring uwsgi gets installed -- which i'm using extras to add, therefore not touching the existing "infrastructure") | 17:54 |
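The "extras" mechanism mnaser mentions can be sketched as a setuptools `extras_require` entry. The extra name and version pin below are assumptions about the approach, not keystone's actual packaging config:

```shell
# Sketch: declare uwsgi as a setuptools "extra" so a container build
# can pull it in with `pip install .[uwsgi]` without touching the
# project's base requirements. Names/versions here are illustrative.
cat > setup.cfg <<'EOF'
[options.extras_require]
uwsgi =
    uWSGI>=2.0
EOF
grep -q 'uWSGI' setup.cfg && echo ok
```

The point of the design is that the image build layers its own needs (a wsgi server) on top of the project without modifying the project's existing requirements files.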
clarkb | rather than siloed towers that only work well for a specific use case | 17:54 |
fungi | just sort of find it odd that a user accustomed to deploying a suite of services as large as openstack would be surprised by needing to "clone one more repo" for the container config | 17:55 |
mnaser | i think a lot of the ci/cd tooling also expects a dockerfile in the same repo | 17:56 |
fungi | or that many wouldn't be equally surprised that there's not one repo that contains the configuration they need to build openstack containers, but that they have to look at dozens of dockerfiles in different repositories | 17:56 |
clarkb | (ours does not fwiw) | 17:56 |
mnaser | i mean, no one should need to touch those dockerfiles.. and if they need to, they can build them knowing where they are going to use them as a "source" from | 17:57 |
fungi | i wonder if a viable middle ground might be to have a central container configuration for openstack services, but then allow projects to relocate those into their repositories if they want to care for them (and relocate them back into the central repo if they no longer care about them) | 17:59 |
fungi | assuming there's also a team of folks who want to collaborate on maintaining those images, making sure they're updated consistently and so on | 18:00 |
mnaser | i just dont know why we want to deviate from how every project on earth does docker containers | 18:00 |
mnaser | and again, i don't think those will actually need that many changes. also, i intend to help drive this (and) ensure that we can somehow give devstack the ability to consume those | 18:01 |
fungi | i don't know how projects on earth do docker containers, but i do know that we have teams who will not want to have one more thing they're responsible for in their git repositories | 18:01 |
mnaser | maybe this is a good time to evaluate the idea of CODEOWNERS alongside this | 18:02 |
mnaser | i mean, if the job is green and the change seems sane.. | 18:02 |
fungi | that'll be a fun conversation | 18:02 |
mnaser | well, i'm not trying to grow the scope of things, but i can imagine if we put CI in place, then hopefully we won't break it, and dockerfiles are pretty static | 18:03 |
mnaser | they won't change often | 18:03 |
clarkb | and most of the fixing will be to bindep and requirements | 18:03 |
clarkb | (things that you want to update anyway) | 18:03 |
mugsie | yeah - I suggest that teams may initially push back, but if there is a good base, and minimal work, it should be easy enough to get a critical mass on side | 18:05 |
fungi | to circle back to my first comment, the reason i strongly recommend revisiting (and if necessary revising) https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.html is that, while some things about the situation which prompted us to write that have changed in the nearly three years since, other things have not | 18:05 |
*** rpittau is now known as rpittau|afk | 18:06 | |
mugsie | I never understood the issue for e.g. SSL - the binaries can be apache, which is an AS IS licence, so I am not sure where liability would come from | 18:07 |
fungi | what was the ssl liability issue? | 18:07 |
fungi | where? | 18:07 |
mugsie | was the issue not with a potential liability if we ship a binary with a broken / vulnerable SSL (as an example) | 18:08 |
fungi | i can think of a few potential issues around openssl, if that's what you're talking about, but anyone who works knee-deep in distro packaging has plenty of license incompatibility horror stories | 18:08 |
mnaser | given the velocity of how much we merge changes | 18:09 |
mnaser | i doubt the images would be stale for long | 18:09 |
mnaser | and im not even proposing tagged images with releases or stable branches | 18:09 |
mnaser | just :latest pointing towards master, for now | 18:09 |
fungi | oh, i think i said glibc, but yes. one of the problems that the resolution was intended to solve was making sure consumers of those images know that we aren't explicitly tracking vulnerabilities in other people's software we're redistributing in them, and that if they want to be sure they're updated with non-vulnerable versions they should build the images themselves | 18:10 |
mnaser | i think we can start by at least having docker files and then when we publish them to dockerhub, we can put (in the readme) a warning saying those are not supported and you should build your own | 18:11 |
mnaser | anyways, im pretty sure all those individuals who are after that high level of security | 18:11 |
mnaser | are certainly _not_ docker pull-ing things from the internet | 18:11 |
fungi | to approach it as "the apache (or whatever) license says we're not to be held liable" is not great for users. we need them to know we're not keeping track of vulnerabilities in other people's software out of COURTESY and so they don't hate us later when it becomes a problem for them | 18:11 |
clarkb | related to that I think one of the ways you minimize that risk in addition to communication is careful choice of base image | 18:12 |
mugsie | clarkb: ++ | 18:12 |
clarkb | comparing to kolla again they build (or tried to build) on like 5 different platforms | 18:12 |
fungi | also the apache license is a red herring here. the images wouldn't (couldn't) be "apache licensed" because they will almost certainly aggregate software under lots of licenses, some of which aren't necessarily even apache-compatible | 18:12 |
clarkb | all with different update schedules and versions and it gets complicated very quickly | 18:13 |
clarkb | if you stuck to a single minimal base image the problem space gets much smaller | 18:13 |
mugsie | can we try and avoid dockerhub? I have ... issues ... with that being a thing | 18:13 |
mnaser | clarkb: yep, i agree. | 18:13 |
mnaser | we can publish to registry.opendev.org | 18:13 |
mugsie | something like distroless/python3 is interesting | 18:13 |
mnaser | or where-ever our heart desires | 18:13 |
clarkb | I have 0 desire to run a production docker registry fwiw | 18:14 |
mnaser | mugsie: this is actually something that's already done by the infra team | 18:14 |
clarkb | the tooling in the space is terrible | 18:14 |
clarkb | and you need lots of disk | 18:14 |
mugsie | yeap | 18:14 |
clarkb | (it's so bad zuul wrote its own to deal with speculative images) | 18:14 |
mnaser | opendev/python-builder + opendevorg/python-base is already built on "distroless" (somewhat). the latter is FROM python:foo | 18:14 |
mnaser | i mean, we can use quay, we can use dockerhub, that's just an implementation detail where it ends up | 18:14 |
mnaser | we can run gitlab and use its container registry | 18:15 |
* mnaser giggles | 18:15 | |
*** jamesmcarthur has quit IRC | 18:15 | |
mugsie | clarkb: thats fair. it is just a personal peeve :) | 18:15 |
clarkb | and ya all our tooling should support arbitrary registries | 18:16 |
clarkb | you have to use a qualified name if not talking to dockerhub and we only cache dockerhub and quay currently iirc | 18:16 |
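The qualified-name convention clarkb and mordred describe can be sketched with a tiny hypothetical helper: unqualified names implicitly resolve to dockerhub, so fully qualifying them makes the registry explicit and lets the same tooling target quay.io or a private registry. The `qualify` function and its dot-before-slash heuristic are illustrative, not part of any real tool:

```shell
# If the image name already starts with a registry host (detected by
# a rough "dot before the first slash" heuristic), leave it alone;
# otherwise prepend the dockerhub default.
qualify() {
  case "$1" in
    *.*/*) echo "$1" ;;          # already has a registry host
    *)     echo "docker.io/$1" ;;
  esac
}

qualify opendevorg/python-base   # gets the dockerhub default
qualify quay.io/myorg/myimage    # left as-is
```

Real container tools apply the same defaulting internally, which is exactly why unqualified references quietly couple a deployment to dockerhub.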
mnaser | yeah. and folks can publish it where they want if they want to | 18:20 |
*** jamesmcarthur has joined #openstack-tc | 19:19 | |
*** jamesmcarthur has quit IRC | 19:28 | |
*** jamesmcarthur has joined #openstack-tc | 19:29 | |
mordred | mugsie: distroless/python3 doesn't actually help a ton - it's interesting, but for the common openstack usecase it's a little bit too distroless | 19:32 |
mordred | mugsie: and yeah - I agree - dockerhub-by-default is annoying :) | 19:32 |
mordred | we fully qualify our dockerhub references these days so that we can use the richer multi-registry versions of the container tools | 19:32 |
mordred | but - you know - if it's a road we want to go down and it winds up being *the* thing - having a registry.opendev.org isn't untenable ... ACTUALLY - that might solve several problems at once | 19:36 |
mordred | lemme write down a thought real quick | 19:36 |
openstackgerrit | Jay Bryant proposed openstack/governance master: Analysis of 2019 User Survey Feedback https://review.opendev.org/698582 | 19:43 |
jungleboyj | tc-members ^^^ Took another stab at the User Survey. Take a quick look if you can. Would like to wrap it up so aprice can include it with up-coming communications! | 19:44 |
*** jamesmcarthur has quit IRC | 20:03 | |
*** jamesmcarthur has joined #openstack-tc | 20:05 | |
*** jamesmcarthur has quit IRC | 20:14 | |
*** jamesmcarthur has joined #openstack-tc | 20:16 | |
*** jamesmcarthur has quit IRC | 20:22 | |
*** jamesmcarthur has joined #openstack-tc | 20:46 | |
*** e0ne_ has quit IRC | 21:04 | |
*** e0ne has joined #openstack-tc | 21:08 | |
*** jamesmcarthur has quit IRC | 21:15 | |
*** jamesmcarthur has joined #openstack-tc | 21:16 | |
*** e0ne has quit IRC | 21:22 | |
*** jamesmcarthur has quit IRC | 21:23 | |
*** jamesmcarthur has joined #openstack-tc | 21:23 | |
*** jamesmcarthur has quit IRC | 21:40 | |
*** jamesmcarthur has joined #openstack-tc | 21:41 | |
*** jamesmcarthur has quit IRC | 21:47 | |
*** jamesmcarthur has joined #openstack-tc | 21:47 | |
*** jamesmcarthur has quit IRC | 21:54 | |
*** jamesmcarthur has joined #openstack-tc | 21:57 | |
*** jamesmcarthur has quit IRC | 21:59 | |
*** jamesmcarthur has joined #openstack-tc | 22:02 | |
*** jamesmcarthur has quit IRC | 22:05 | |
*** jamesmcarthur has joined #openstack-tc | 22:05 | |
*** jamesmcarthur has quit IRC | 22:13 | |
*** jamesmcarthur has joined #openstack-tc | 22:17 | |
*** jamesmcarthur has quit IRC | 22:36 | |
*** jamesmcarthur has joined #openstack-tc | 22:37 | |
*** slaweq has quit IRC | 22:45 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!