barancw | I'm looking to configure cinder to use the cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver driver. It doesn't look like this is supported at the moment. Has anyone done this or know of examples of custom cinder driver implementations? | 06:13 |
noonedeadpunk | barancw: I think it's all about cinder.conf and installing some extra packages for cinder-volume, isn't it? | 07:25 |
noonedeadpunk | As you can apply any cinder config you want, and you can also override the list of packages that will be installed inside containers, so while it's not officially supported, there should be close to no blockers getting it working from my perspective | 07:26 |
jrosser | they’ve gone - but it looks like it's in-tree https://github.com/openstack/cinder/tree/master/cinder/volume/drivers/synology | 07:57 |
noonedeadpunk | ah, well, then it would be even easier | 08:00 |
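[Editor's note: the usual OSA pattern for wiring in a cinder backend is a `cinder_backends` entry in `openstack_user_config.yml`, whose keys land verbatim in cinder.conf. A minimal, untested sketch - the host name, IPs, and the Synology-specific option values below are illustrative assumptions to be checked against the in-tree driver's documentation:]

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (fragment)
storage_hosts:
  storage1:                        # assumed host name
    ip: 172.29.236.11              # assumed management IP
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        synology:
          volume_driver: cinder.volume.drivers.synology.synology_iscsi.SynoISCSIDriver
          volume_backend_name: synology
          # driver-specific options below are assumptions; consult the
          # driver docs for the exact set required by your DSM version
          target_ip_address: 172.29.244.100
          synology_admin_port: 5000
          synology_username: admin
          synology_pool_name: volume1
```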
amarao | I wanted to remove repo_server and go back to 'build and download the whole venv'. In the docs it's noted as an existing option, but I can't find any traces of it in current openstack-ansible. Is it a supported mode, or do I need to do it myself? | 10:04 |
noonedeadpunk | amarao: so to rebuild a venv we have 2 variables that can be used - venv_rebuild and venv_wheels_rebuild - you can set them both to true if you want a clean deployment of some service | 10:15 |
jrosser | though for the repo_server that is really a very small thing it does | 10:15 |
noonedeadpunk | and they can be passed as extra args ofc, like `openstack-ansible os-glance-install.yml -e venv_rebuild=true -e venv_wheels_rebuild=true` | 10:15 |
jrosser | amarao: what issue do you have with the repo server? | 10:16 |
jrosser | amarao: i also cannot find the string "download venv" in our documentation, do you have a link? | 10:17 |
amarao | I'm trying to split deployment into two phases: build venvs and pack them; then at the install phase I just download them and do not run venv installation. | 10:17 |
amarao | I've tried to remove the repo_all group, but utility-install.yml fails because there is no server to answer utility_upper_constraints_url. | 10:17 |
noonedeadpunk | yes, was just gonna ask that ^, as it kind of depends what the purpose of the repo_server removal is; rebuilding and downloading venvs also doesn't have much to do with the repo server, except that wheels are stored there | 10:17 |
noonedeadpunk | well, that concept of pre-building venvs was removed in Stein | 10:18 |
jrosser | amarao: in the past (many releases ago) that is how OSA worked and it was pretty bad so was removed | 10:18 |
amarao | https://specs.openstack.org/openstack/openstack-ansible-specs/specs/queens/python-build-install-simplification.html, grep "downloading a complete virtualenv". | 10:18 |
jrosser | queens though | 10:18 |
jrosser | which release are you using now? | 10:18 |
amarao | Oh... sorry. Yep, I'm on zed. | 10:19 |
jrosser | right - so everything there is completely historical | 10:19 |
noonedeadpunk | Well, grepping "downloading a complete virtualenv" does correspond to the status-at-the-moment-of-writing section | 10:19 |
noonedeadpunk | And the proposed change, I assume, is close to what we have now | 10:20 |
noonedeadpunk | I believe that's exactly the spec that was finalized for Stein | 10:20 |
jrosser | it talks about building and storing venvs still | 10:21 |
amarao | I really want to go the 'download venv' way, because I want to have a completely 'internet-less' installation with pre-built venvs. I more or less wrote my code, and it works with reinstall and reboot. | 10:21 |
amarao | But I got a problem with repo_server, which is queried even if venvs are built, and I wonder if I can disable it... | 10:21 |
noonedeadpunk | the main problem with the old approach was that one specific project that couldn't build prevented all the rest of the venvs from building as well | 10:22 |
noonedeadpunk | also, repo_server was used even more there | 10:22 |
jrosser | amarao: i don't believe there is a way to use "pre built venvs" with OSA as it exists today | 10:22 |
amarao | In the current code, if requirements/constraints didn't change, the venv is not rebuilt (I use this trick). But constraints are downloaded from a URL (hardcoded use of the module)... | 10:23 |
jrosser | i think that noonedeadpunk has some recent experience with deployments with no internet connectivity | 10:23 |
noonedeadpunk | As of today you need to have either internet or some pypi mirror to build wheels. Then it's all done over a local connection | 10:23 |
noonedeadpunk | As once you have wheels built, venvs are installed from these wheels | 10:24 |
noonedeadpunk | you can also override the URL for upper-constraints, but the repo_server was never considered an outbound connection. So instead of an external URL we cache upper-constraints there to make the connection local | 10:25 |
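[Editor's note: a sketch of such an override in `/etc/openstack_deploy/user_variables.yml`, reusing the `utility_upper_constraints_url` variable mentioned earlier in this discussion; the mirror hostname is made up:]

```yaml
# fetch upper-constraints from a local mirror instead of the repo server
utility_upper_constraints_url: "http://mirror.internal/constraints/upper-constraints.txt"
```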
amarao | https://gist.github.com/amarao/a3062f9790d7ea19633d18d3eeb83709 | 10:25 |
amarao | (My code to unpack venvs). Venvs are just 'tarred' from aio in CI. | 10:25 |
jrosser | amarao: is it that you have a hard requirement for the entire environment to have no internet connection | 10:26 |
jrosser | amarao: or is it that you just want the deployed nodes to have no internet? | 10:26 |
amarao | Yep. We can have locally cached mirrors, but nothing else. | 10:26 |
noonedeadpunk | amarao: fwiw, that was the role that was building venvs https://opendev.org/openstack/openstack-ansible-repo_build/src/branch/stable/train | 10:26 |
noonedeadpunk | amarao: and that's exactly the scenario I'm having right now - just local mirrors | 10:27 |
amarao | It doesn't work. Nova installs additional stuff in venvs, so I opted for snatching the venv from aio. | 10:27 |
amarao | (I believe the vnc stuff is deployed directly from git) | 10:27 |
noonedeadpunk | yeah, it might. But you still need to have local forks/mirrors of git regardless | 10:28 |
noonedeadpunk | at least of specific projects that are being used | 10:28 |
noonedeadpunk | I do have 1 issue with the wheels build right now, hopefully will sort it out by EOD, but it's related to building package metadata, and when they do that, they try to install pbr from outside, not respecting --index-url | 10:29 |
amarao | I pack roles as tar too (snatch from aio), and I clone openstack-ansible from local mirror. | 10:29 |
jrosser | urgh | 10:30 |
jrosser | ^ this is going to be very messy | 10:30 |
jrosser | imho you should host an internal git server | 10:30 |
amarao | And I pack the bootstrapped ansible as a tar too. | 10:30 |
jrosser | we have one with all the roles, all the services | 10:30 |
amarao | I'm ok to have internal git server, but I don't want to maintain 100500 repos. | 10:30 |
jrosser | it's not 100500 | 10:30 |
jrosser | and you don't have to maintain them | 10:31 |
noonedeadpunk | Have you checked out https://pypi.org/project/pulpcore/ ?:) | 10:31 |
amarao | It's more than 60 repos (including those connected in different roles outside of ansible-role-requirements.yml), too much for my taste. | 10:32 |
noonedeadpunk | um, but you don't need them all? | 10:32 |
jrosser | amarao: but this is what ansible and cron are for :) | 10:32 |
jrosser | it's like the thing i never have to touch | 10:33 |
amarao | No-no-no, maintenance is a much more complicated thing. I want to get a working copy, freeze it, and use it, with a guarantee of no drift... I hoped to get rid of repo_server (use local files or something like that), but if it's not possible, well, let it be. | 10:33 |
jrosser | the idea is that the git SHAs we put in the openstack-ansible repo are doing that freeze for you | 10:34 |
noonedeadpunk | well, I think you can get rid of it. Though it's mainly helping to have local stuff that doesn't drift rather than fetching stuff each time | 10:34 |
amarao | I'll do the internet installation at image build time, and then will have everything (except for apt packages) in a single docker image, to deploy on hosts via copy methods... | 10:34 |
noonedeadpunk | what you're describing I think is closer to kolla-ansible approach? | 10:35 |
noonedeadpunk | when they build docker-images for each service and then deploy them | 10:36 |
amarao | More or less. They want their own database, and I want to provide a passive container (built in CI with internet available) to 'just install what is in the inventory'. | 10:39 |
amarao | The main thing is that I have ~2.5k lines of code for things to set up for openstack (network equipment, bgp for VIP ip instead of vrrp, etc), so openstack is just a part of installation. | 10:39 |
amarao | Whole idea: you have a new inventory, apt repos (mirrors), pack of servers with ssh and no internet access, make openstack. | 10:39 |
amarao | I pack into the docker image not the target things on the host, but the 'deployment image' - playbooks, roles, packed venvs. | 10:39 |
noonedeadpunk | that's exactly what I have as of today for one of my new deployments | 10:39 |
noonedeadpunk | And deploy host is either ephemeral or fully managed with CI | 10:40 |
amarao | .. but you use git repo mirrors, don't you? | 10:40 |
noonedeadpunk | well, yes, we do have mirrors/forks | 10:40 |
noonedeadpunk | But I still don't understand why it's any of concern? | 10:40 |
amarao | Do you use them at 'production install time'? | 10:40 |
noonedeadpunk | They're used for wheels build only during the first deployment. Then all stuff (well, except maybe vnc or smth like that) goes from pre-built wheels | 10:41 |
noonedeadpunk | and wheels are built when running against the first host, so neither compute nor net nodes build wheels, as neutron was already built for the API part | 10:42 |
noonedeadpunk | Also, there's no way things could drift, as we install all packages by SHA | 10:42 |
amarao | Oh, that's the difference. We have multiple openstack installations (almost like 'as a service', but for internal teams), and 'production installation time' can't use too many mirrors. Each region needs its own mirror (the SG <-> US link is terrible), and the fewer mirrors there are, the better. | 10:42 |
jrosser | i deploy the mirror inside each deployment | 10:43 |
jrosser | but it's automated so really a no-op to think about | 10:43 |
noonedeadpunk | the only thing that can move forward is upper-constraints. | 10:43 |
noonedeadpunk | amarao: but again. You have wheels inside each region by default. | 10:43 |
noonedeadpunk | So for deploy time you do this venv installation once regardless | 10:44 |
noonedeadpunk | s/venv installation/wheels build/ | 10:44 |
noonedeadpunk | and then all venvs are getting built from the local repo server | 10:44 |
amarao | Okay, I get the idea. But I opt for packing the whole venv without involving wheels (I don't need them, because venvs are not touched by the installer, since requirements/constraints do not change for a given 'version' (local version)). | 10:44 |
jrosser | so do you run openstack-ansible at all on the deployment? like how is the config written..... | 10:45 |
amarao | Yep. When I pack on aio, the venv/.../etc is packed too, and openstack-ansible is nice enough to create a symlink from /etc and override everything as it should be. | 10:46 |
amarao | (at production install time) | 10:46 |
amarao | But it's still work-in-progress, I'm not sure I fixed all internet dependencies... | 10:47 |
noonedeadpunk | well, you might indeed want to look at our old repo_build role to get some hints... But yeah, as of today it has been completely cut out | 10:47 |
amarao | Thanks, it helped a lot (because there are many odd references to old packaging in search results, and I tried to squeeze it from the current code). | 10:48 |
noonedeadpunk | Though I still haven't quite got the real difference between the approaches - having built wheels to quickly provision a venv versus packing an already-built venv. | 10:50 |
jrosser | amarao: do you see also that the python_venv_build role symlinks some libraries from the host into the venvs? | 10:50 |
noonedeadpunk | Oh, btw, keep in mind that some venvs have symlinks to system packages inside them | 10:50 |
noonedeadpunk | Especially the case with all ceph modules | 10:51 |
noonedeadpunk | yeah, that :) | 10:51 |
jrosser | and you should take care to version your venvs to OS + OS release as well or i can guarantee bad things will happen, they are totally not portable | 10:52 |
noonedeadpunk | I mean - you can lsyncd repo_server with all wheels that are built somewhere else, or do s3fs mount or smth like that | 10:53 |
noonedeadpunk | as standalone repo_servers (that are "generic" for many deployments and not part of any specific one) are also a practice | 10:53 |
jrosser | afs like the infra folk do would be interesting option too | 10:55 |
noonedeadpunk | yes, or that | 10:56 |
amarao | Thanks for the warning about symlinks to the 'outside', I didn't think about it. But if I install the same packages on the host, the symlinks should work just fine, I suppose. | 11:14 |
*** dviroel|out is now known as dviroel | 11:18 | |
noonedeadpunk | jrosser: question - what pip options do you use for isolated wheels builds? As I'm facing an issue with setuptools, which is not being passed `--index-url` | 11:45 |
jrosser | you mean in OSA? | 11:46 |
noonedeadpunk | yeah | 11:46 |
noonedeadpunk | and setuptools does smth like https://github.com/pypa/setuptools/blob/main/setuptools/installer.py#L47-L52 | 11:46 |
jrosser | you've done these things? https://docs.openstack.org/openstack-ansible/latest/user/limited-connectivity/index.html#python-package-repositories | 11:47 |
noonedeadpunk | But I can't really figure out for quite some time how to pass opts here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_wheel_build.yml#L140 to make things happy | 11:47 |
noonedeadpunk | well.. no, I don't have /root/.pydistutils.cfg | 11:47 |
noonedeadpunk | You create that manually? | 11:48 |
jrosser | yeah, looks like that's dealt with in our ansible that prepares hosts before we run OSA | 11:49 |
noonedeadpunk | as for pip.conf there should be no need - you can supply "venv_pip_build_args: --index-url http://some.url" instead nicely... | 11:49 |
noonedeadpunk | ugh... Will dig deeper then into setuptools.... | 11:50 |
jrosser | then we use `lxc_container_cache_files_from_host` to write it into the lxc base image | 11:51 |
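[Editor's note: per the limited-connectivity guide linked above, the two files being discussed are plain setuptools/pip configs along these lines; the mirror URL is a placeholder:]

```ini
# /root/.pydistutils.cfg - read by setuptools (easy_install machinery)
[easy_install]
index_url = https://pypi.internal/simple

# /etc/pip.conf (or ~/.config/pip/pip.conf) - read by pip itself
[global]
index-url = https://pypi.internal/simple
trusted-host = pypi.internal
```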
noonedeadpunk | as I guess some option for pip should be passed and respected by setuptools.... | 11:51 |
noonedeadpunk | but it's needed _only_ for repo_container, isn't it? | 11:52 |
noonedeadpunk | and then you don't really need index_url at all, as everything will come from find-links (as wheels are there) | 11:52 |
* jrosser not sure | 11:54 | |
jrosser | i wonder why we didn't just use `venv_pip_build_args` and `venv_pip_install_args` | 11:57 |
jrosser | as far as i remember needing `/root/.pydistutils.cfg` was sort of second-order, like maybe it was in installing dependencies of dependencies where the behaviour was just different | 11:58 |
noonedeadpunk | Well, I faced the issue when trying to build the utility venv, as at the very least murano-pkg-check tries to install pbr as part of setuptools | 12:02 |
noonedeadpunk | and setuptools is not passed the index-url for pypi | 12:02 |
noonedeadpunk | so running `/openstack/venvs/wheel-builder-python3/bin/pip wheel --requirement utility-26.0.0-requirements.txt --constraint utility-26.0.0-global-constraints.txt --constraint utility-26.0.0-source-constraints.txt --find-links /var/www/repo/os-releases/26.0.0/ubuntu-22.04-x86_64/wheels --index-url http://pulp.index/url` | 12:03 |
noonedeadpunk | is failing on metadata processing as for that it tries to install pbr | 12:04 |
noonedeadpunk | So error is like `distutils.errors.DistutilsError: Command '['/openstack/venvs/wheel-builder-python3/bin/python3', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', '/tmp/tmppm10dzdz', '--quiet', 'pbr>=1.8']' returned non-zero exit status 1.` | 12:04 |
noonedeadpunk | which is suuuuper annoying | 12:04 |
jrosser | iirc there was brokenness / fixes recently in pbr? | 12:05 |
jrosser | i don't remember related to what though | 12:06 |
noonedeadpunk | and it's frustrating that adding PIP_INDEX_URL as an env var does help | 12:06 |
noonedeadpunk | so one way could be adding `env` to the command here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_wheel_build.yml#L132-L140 | 12:07 |
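[Editor's note: a rough sketch of that idea - exporting PIP_INDEX_URL via Ansible's task-level `environment` keyword so that pip invocations spawned by setuptools inherit the index too; the task and variable names are simplified stand-ins for the role's real ones:]

```yaml
- name: Build wheels (simplified stand-in for the role's wheel-build task)
  ansible.builtin.command: >-
    {{ wheel_build_python }} -m pip wheel
    --requirement {{ wheel_build_requirements_file }}
    --index-url {{ pip_index_url }}
  environment:
    # inherited by any pip that setuptools spawns for build dependencies,
    # which otherwise ignore the --index-url given on the command line
    PIP_INDEX_URL: "{{ pip_index_url }}"
```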
jrosser | does messing with pip.conf help at all? | 12:07 |
jrosser | just wondering what actually is broken, cli parsing or taking the setting from different places depending on where the code is | 12:08 |
noonedeadpunk | well, messing with .pydistutils.cfg does | 12:08 |
noonedeadpunk | but now I refuse to believe that there's no pip option to make setuptools use index-url.... | 12:08 |
noonedeadpunk | As I'd assume smth like --global-option or --build-option should have worked.... | 12:09 |
noonedeadpunk | but it seems that https://github.com/pypa/setuptools/blob/main/setuptools/installer.py#L42 only uses what's in the project's setup.cfg.... Or well, overrides from .pydistutils.cfg are also respected | 12:10 |
noonedeadpunk | But yeah, I asked just in case you had also followed that route instead of creating the files in a pre-OSA step | 12:37 |
noonedeadpunk | ok, I'm done | 12:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-python_venv_build master: Allow to set ENV vars for wheels build and venv install https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/871472 | 13:17 |
noonedeadpunk | ^ | 13:18 |
noonedeadpunk | I'm not able to find an easier way :( | 13:18 |
noonedeadpunk | with that in place it seems these overrides are enough to avoid any manually created config files: https://paste.openstack.org/show/bLvEypPSBqgqK1zRy5ls/ | 13:20 |
noonedeadpunk | well, except _custom_pypi_mirror shouldn't have {{}} :D | 13:20 |
jrosser | that's nice - anything that reduces the amount of external config is cool | 13:38 |
noonedeadpunk | Will document that once I get a bit more progress, to be sure that it's actually enough | 13:55 |
jrosser | urgh, on the ML someone tried a V->Y upgrade | 14:43 |
mgariepy | how did it go? | 14:47 |
noonedeadpunk | Assume not good since it's on ML | 14:48 |
mgariepy | hmm i don't see the mail in ML :/ | 14:48 |
jrosser | unhelpfully it's "snapshot error" | 14:48 |
mgariepy | LOL | 14:49 |
jrosser | idk how the db migrations and stuff work if you skip all those releases | 14:49 |
jrosser | maybe thats fine | 14:50 |
noonedeadpunk | Well, cinder did clean up DB migrations in W at least | 14:50 |
noonedeadpunk | V>Y is quite messy to be fair | 14:51 |
noonedeadpunk | Ah, no, forget that | 14:51 |
noonedeadpunk | T->Y was messy :D | 14:51 |
noonedeadpunk | cinder DB should not be an issue for V->Y | 14:52 |
noonedeadpunk | But I'm not sure what the issue is? | 14:52 |
noonedeadpunk | There were no errors in ML? | 14:52 |
noonedeadpunk | Yes, a couple of INFO/DEBUG entries, but that's it? | 14:52 |
jrosser | oh hold on is this the same person asking about Y>Z on the 20th? | 14:53 |
noonedeadpunk | No idea.... | 14:56 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Ensure management_address is used instead of ansible_host https://review.opendev.org/c/openstack/openstack-ansible/+/871483 | 15:07 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Ensure daemon is reloaded on socket change https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/871487 | 15:29 |
andrewbonney | noonedeadpunk: does this state var look right to you? I've just done a minor upgrade on a zed deploy and zookeeper ended up stopped during the service load step https://github.com/openstack/ansible-role-zookeeper/blob/stable/zed/tasks/main.yml#L76 | 15:49 |
noonedeadpunk | Hm..... | 15:50 |
noonedeadpunk | I'm quite sure I won't do that without need.... | 15:50 |
noonedeadpunk | Maybe it's because cert generation is performed later on.... | 15:51 |
noonedeadpunk | and likely because of that https://github.com/openstack/ansible-role-zookeeper/blob/stable/zed/handlers/main.yml#L31-L38 | 15:52 |
noonedeadpunk | but yeah.... I see how things go wrong here when nothing is changed.... | 15:52 |
andrewbonney | I guess the first time I installed it the handler started the service, but when upgrading that doesn't need to run, and the task causes it to stop | 15:52 |
andrewbonney | :) | 15:52 |
*** dviroel is now known as dviroel|lunch | 15:53 | |
andrewbonney | Should the upgrade CI have caught that? | 15:53 |
jrosser | i don't think there is any coverage of that | 15:53 |
noonedeadpunk | nah, there's none | 15:53 |
jrosser | there is this https://github.com/openstack/openstack-ansible/blob/master/playbooks/healthcheck-infrastructure.yml#L364 | 15:54 |
jrosser | but that's only for fresh installations | 15:54 |
noonedeadpunk | damn, how to get rid of that chicken-and-egg nicely.... | 15:54 |
noonedeadpunk | if only we had state - don't do anything | 15:55 |
jrosser | i think state is not a required parameter | 15:56 |
jrosser | https://github.com/openstack/ansible-role-systemd_service/blob/1f7091a11cfdcad885443c23e00d8ab85b0783a6/tasks/systemd_load.yml#L23 | 15:57 |
noonedeadpunk | well, it kind of is | 15:58 |
noonedeadpunk | as then it's executed https://github.com/openstack/ansible-role-systemd_service/blob/1f7091a11cfdcad885443c23e00d8ab85b0783a6/handlers/main.yml#L21 | 15:58 |
jrosser | why do we do that | 15:59 |
noonedeadpunk | well, we could notify `systemd service changed` there https://github.com/openstack/ansible-role-systemd_service/blob/1f7091a11cfdcad885443c23e00d8ab85b0783a6/tasks/systemd_load.yml#L23 | 15:59 |
noonedeadpunk | I assume, because we don't want to execute smth twice when it's already done in systemd_load.yml | 16:00 |
jrosser | feels like systemd_service role ought to support not specifying the state | 16:04 |
jrosser | just like the underlying systemd: module | 16:04 |
noonedeadpunk | well, I mean, you can avoid configuring state. But then the role will attempt to restart the service once it's done | 16:05 |
noonedeadpunk | as otherwise we will find ourselves in situation like with sockets | 16:06 |
jrosser | that would then let the restart be handled outside that role by `systemd service changed` handler | 16:06 |
jrosser | oh i mean maybe it is an error in the current role that it assumes service.state being undefined -> restart | 16:07 |
jrosser | but perhaps we rely on that behaviour elsewhere | 16:08 |
noonedeadpunk | yeah, I got what you mean, I'm just not sure about the consequences | 16:08 |
jrosser | maybe more hackily check for it not being `noop` | 16:08 |
noonedeadpunk | yeah, already thought about adding "ignore" or smth | 16:09 |
noonedeadpunk | but that is... | 16:09 |
noonedeadpunk | /o\ | 16:09 |
noonedeadpunk | maybe zookeeper role can be adjusted... | 16:10 |
noonedeadpunk | As `Symlink zookeeper` can be easily moved to tasks for example | 16:10 |
noonedeadpunk | and tbh I'm not sure if service was failing due to absent certs or not... | 16:11 |
noonedeadpunk | but likely adding some option to avoid doing anything with state would be beneficial | 16:12 |
noonedeadpunk | the easiest thing would be to add another key, like ignore_handlers or smth | 16:12 |
jrosser | there is also the state of the package install from a previous task i guess | 16:13 |
jrosser | but i don't think that actually helps | 16:14 |
noonedeadpunk | I was also trying to look at the idempotence of sockets... As https://opendev.org/openstack/openstack-ansible-galera_server/src/branch/master/tasks/galera_server_post_install.yml#L56 is not great either | 16:15 |
jrosser | oh yes that is really nasty | 16:16 |
jrosser | there is some very specific ordering to creating / restarting those | 16:16 |
noonedeadpunk | well. restart doesn't work as of today anyway | 16:17 |
noonedeadpunk | or well. restart works. but daemon is not reloaded | 16:17 |
noonedeadpunk | so it's a bit useless | 16:17 |
noonedeadpunk | just pushed https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/871487 | 16:18 |
jrosser | this is all related to needing to add `load: False` | 16:18 |
noonedeadpunk | jrosser: oh, wait, lol | 16:35 |
noonedeadpunk | we have `restart_changed` | 16:35 |
noonedeadpunk | so on zookeeper role s/state: restarted/restart_changed: false | 16:35 |
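[Editor's note: i.e. a sketch of the zookeeper fix being proposed - keep the unit definition but stop forcing a restart on every run; the unit options are trimmed for brevity:]

```yaml
systemd_services:
  - service_name: zookeeper
    # ...existing unit/config options unchanged...
    restart_changed: false   # replaces `state: restarted`, which stopped the
                             # service on idempotent re-runs
```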
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-zookeeper master: Ensure zookeeper is not stopped after role re-run https://review.opendev.org/c/openstack/ansible-role-zookeeper/+/871517 | 16:50 |
noonedeadpunk | andrewbonney: can you check this out ^ ? | 16:50 |
*** dviroel|lunch is now known as dviroel | 16:51 | |
jrosser | noonedeadpunk: we might get to look at that tomorrow but there's a good chance not till weds | 17:30 |
noonedeadpunk | sure, np | 17:31 |
noonedeadpunk | I will check back here tomorrow then :) | 17:31 |
moha7 | Hi | 17:36 |
moha7 | jrosser: Did you merge the patch (after the recent release) you were talking about here days ago? I'm going to deploy a new AIO and need to know whether I should patch the stable branch or not | 17:37 |
jrosser | you should be able to see that yourself | 17:38 |
jrosser | go to the link i gave you and the status of the patch will be shown | 17:38 |
jrosser | and the history of merged commits is here https://opendev.org/openstack/openstack-ansible/commits/branch/stable/zed | 17:39 |
moha7 | "Verified" | 17:40 |
jrosser | well "Verified +2" which has a specific meaning | 17:41 |
jrosser | but more importantly down the bottom it says "Change has been successfully merged" | 17:42 |
jrosser | rows get added to the table at the bottom of the page for each event that happens to the patch | 17:43 |
moha7 | +1 | 17:43 |
moha7 | `/opt/openstack-ansible/scripts/bootstrap-aio.sh` --> Failed: https://ibb.co/Wgc7GGq | 18:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_service master: Restart sockets when they are changed https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/871526 | 18:15 |
jrosser | moha7: try `ansible localhost -m setup` | 18:15 |
noonedeadpunk | moha7: do you have default route on host? | 18:15 |
jrosser | fwiw i see this sometimes when making a fresh aio | 18:16 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-galera_server master: Do not forcefully restart socket https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/871527 | 18:17 |
noonedeadpunk | oh, rly? I've never seen that... | 18:17 |
noonedeadpunk | well, except maybe there's no IP or interfaces are named not as expected | 18:18 |
noonedeadpunk | btw would be awesome to land https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/871304 - I've held the Y release for it | 18:19 |
moha7 | noonedeadpunk: the netplan config: https://u.teknik.io/lFux1.png; `ip a` --> https://ibb.co/KrR53xy | Sorry for sending pictures, it's because I lost the ssh connection to the server | 18:23 |
noonedeadpunk | well "network is unreachable" is a bit suspicious | 18:34 |
noonedeadpunk | also ip a doesn't show ip on the interface | 18:34 |
moha7 | Whoops! Solved; there was a mistake by me in the Netplan configuration; the changes were not applied | 18:35 |
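[Editor's note: for reference, netplan edits only take effect once applied, e.g.:]

```sh
sudo netplan try    # applies with automatic rollback if the session is lost
sudo netplan apply  # applies permanently
```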
moha7 | FYI: Yesterday, my colleague had this error while deploying `os-cinder-install.yml`: https://p.teknik.io/AMKjg; he successfully got past it by removing `venv_tag` from the YML file. | 18:37 |
noonedeadpunk | well, to be frank, I'm not sure we have any good reason today to keep these local facts. I think it's mostly that they rarely bring issues and seem to work somehow | 18:40 |
noonedeadpunk | or maybe there is one, ofc, and it's just me thinking it's possible to drop them at no cost | 18:40 |
moha7 | |setup-everything.yml is still running (checked from the web console), but I lost the ssh connection and `ping`! Probably because of security hardening. Then I need to set it to false next time.| | 19:00 |
jrosser | that doesn't usually interrupt the playbooks | 19:08 |
jrosser | and losing the ability to ping suggests some trouble outside openstack-ansible as well | 19:11 |
moha7 | Deployment documentation: "Note that br-vxlan is not required to be a bridge at all, a physical interface or a bond VLAN subinterface can be used directly and will be more efficient." <-- https://docs.openstack.org/project-deploy-guide/openstack-ansible/zed/targethosts.html | 20:31 |
moha7 | If it's not required to be a bridge, then how should the interface for the tenant network be defined in `openstack_user_config.yml`? The same as the _br-vlan_ section here: https://paste.opendev.org/show/bb714u2x5OD377JJtp14/ ? | 20:32 |
moha7 | (in a multi-node env) | 20:32 |
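[Editor's note: a hedged sketch of the corresponding `provider_networks` entry in `openstack_user_config.yml`, pointing `container_bridge` at a bond VLAN subinterface directly instead of a bridge; the interface name and VNI range are illustrative assumptions:]

```yaml
- network:
    container_bridge: "bond0.30"   # subinterface used directly, no br-vxlan
    container_type: "veth"
    container_interface: "eth10"
    ip_from_q: "tunnel"
    type: "vxlan"
    range: "1:1000"
    net_name: "vxlan"
    group_binds:
      - neutron_linuxbridge_agent
```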
*** dviroel is now known as dviroel|out | 21:25 | |
moha7 | jamesdenton: ^ | 21:59 |