*** tosky has quit IRC | 00:20 | |
*** openstackgerrit has quit IRC | 00:49 | |
*** cshen has joined #openstack-ansible | 01:16 | |
*** openstackstatus has quit IRC | 01:20 | |
*** openstack has joined #openstack-ansible | 01:23 | |
*** ChanServ sets mode: +o openstack | 01:23 | |
*** cshen has joined #openstack-ansible | 03:01 | |
*** cshen has quit IRC | 03:06 | |
*** mwhahaha has quit IRC | 03:26 | |
*** PrinzElvis has quit IRC | 03:27 | |
*** CeeMac has quit IRC | 03:27 | |
*** PrinzElvis has joined #openstack-ansible | 03:28 | |
*** mwhahaha has joined #openstack-ansible | 03:29 | |
*** CeeMac has joined #openstack-ansible | 03:31 | |
*** spatel has joined #openstack-ansible | 03:33 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** gyee has quit IRC | 06:11 | |
*** spatel has quit IRC | 07:01 | |
*** cshen has joined #openstack-ansible | 07:02 | |
CeeMac | morning | 07:12 |
*** miloa has joined #openstack-ansible | 07:17 | |
*** lkoranda has joined #openstack-ansible | 07:56 | |
*** rpittau|afk is now known as rpittau | 07:59 | |
*** maharg101 has joined #openstack-ansible | 08:05 | |
*** andrewbonney has joined #openstack-ansible | 08:19 | |
*** lkoranda has quit IRC | 08:20 | |
*** lkoranda has joined #openstack-ansible | 08:27 | |
jrosser | morning | 08:30 |
*** openstackgerrit has joined #openstack-ansible | 08:33 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: WIP - use the cached copy of the u-c file from the repo server https://review.opendev.org/c/openstack/openstack-ansible/+/774523 | 08:33 |
*** lkoranda has quit IRC | 08:37 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Add reference to netplan config example https://review.opendev.org/c/openstack/openstack-ansible/+/774425 | 08:50 |
jrosser | noonedeadpunk: i put -W against my cached u-c patches https://review.opendev.org/q/topic:%22osa-uc-cache%22 | 08:52 |
jrosser | i think it's going to work certainly for CI / AIO cases | 08:53 |
*** lkoranda has joined #openstack-ansible | 08:53 | |
*** lkoranda has quit IRC | 08:53 | |
jrosser | but if you run master branch magnum on victoria for example, we need to be able to somehow get a cache of the V u-c file as well to point magnum_upper_constraints_url to | 08:54 |
jrosser | ah master not V of course | 08:55 |
noonedeadpunk | I don't fully understand why we use content at the moment. To filter out things like tempest from it? | 08:55 |
noonedeadpunk | but I don't see this happening... It would be much easier just to use copy from localhost location (path of which we know)? | 08:57 |
jrosser | it's not just a CI time thing though | 08:58 |
noonedeadpunk | nasty thing here is that we link and rely on really different steps | 08:58 |
jrosser | it also has to work for multinode, and also when the deploy host is separate | 08:58 |
noonedeadpunk | yeah, but we store file for non-ci as well | 08:58 |
jrosser | yeah well this is kind of why i -W it, as there's a lot to consider | 08:59 |
noonedeadpunk | yeah, it's a bit hacky I guess at the moment... and if you change requirements SHA - it won't get updated | 09:00 |
jrosser | if the repo server role is done it should, but that's kind of not obvious | 09:01 |
*** jbadiapa has joined #openstack-ansible | 09:03 | |
jrosser | i'm wondering if this should all be some common_tasks stuff put into each role playbook, really not sure what is the right approach | 09:06 |
*** tosky has joined #openstack-ansible | 09:10 | |
openstackgerrit | Alfredo Moralejo proposed openstack/openstack-ansible-os_tempest master: Remove horizon tempest plugin installation for EL8 distro https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/774604 | 09:13 |
noonedeadpunk | I think the approach with repo_server is good. But I think I'd move more logic into the role. We can just always try using copy to get constraints from /opt/ansible-runtime-constraints-{{ requirements_git_install_branch }}.txt and if it fails - get_url | 09:14 |
noonedeadpunk | but yeah, good question in this is how to determine upstream url for repo container only, and point the same variable to repo container for all other hosts (except localhost as well) | 09:15 |
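The copy-first / get_url-fallback idea sketched above might look roughly like this as an Ansible task. This is a hypothetical sketch, not the merged implementation; `constraints_dest` and `requirements_upper_constraints_url` are assumed variable names.

```yaml
# Hypothetical sketch of the idea above: prefer the locally cached
# constraints file, and fall back to fetching from upstream if the
# cached copy is absent. Variable names here are assumptions.
- name: Use cached upper-constraints when available
  block:
    - name: Copy cached upper-constraints from the deploy host
      copy:
        src: "/opt/ansible-runtime-constraints-{{ requirements_git_install_branch }}.txt"
        dest: "{{ constraints_dest }}"
  rescue:
    - name: Fall back to fetching upper-constraints from upstream
      get_url:
        url: "{{ requirements_upper_constraints_url }}"
        dest: "{{ constraints_dest }}"
```

The open question raised above — which hosts should point at the upstream URL versus the repo container — would still need to be settled outside this fragment.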
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Pass upper-constraints content to the repo_server role https://review.opendev.org/c/openstack/openstack-ansible/+/774518 | 09:45 |
*** lkoranda has joined #openstack-ansible | 09:55 | |
*** gokhani has joined #openstack-ansible | 10:02 | |
gokhani | hi folks, can I ask how to change lxc containers' timezone ? | 10:03 |
noonedeadpunk | that's funny, but I never tried to do that so have no idea. I guess they're tied to the host timezone? | 10:04 |
noonedeadpunk | oh, I think you can set it in lxc config | 10:05 |
*** lkoranda has quit IRC | 10:05 | |
noonedeadpunk | I think you can set environment.TZ for lxc_container_config_list variable | 10:07 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: WIP - use the cached copy of the u-c file from the repo server https://review.opendev.org/c/openstack/openstack-ansible/+/774523 | 10:08 |
gokhani | noonedeadpunk , I think I need to add the environment.TZ variable to /etc/lxc/lxc-openstack.conf | 10:10 |
gokhani | noonedeadpunk, it doesn't change anything :8 | 10:19 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/774494 | 10:19 |
*** lkoranda has joined #openstack-ansible | 10:20 | |
noonedeadpunk | As I said, most likely setting lxc_container_config_list and re-running the lxc-containers-create.yml playbook should apply the config. but that may also require a container restart | 10:20 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/774494 | 10:20 |
noonedeadpunk | so it will result in adding config to "/var/lib/lxc/{{ inventory_hostname }}/config" https://opendev.org/openstack/openstack-ansible-lxc_container_create/src/branch/master/tasks/lxc_container_config.yml#L17-L26 | 10:21 |
noonedeadpunk | and yeah, container restart will be performed by the role | 10:21 |
gokhani | noonedeadpunk, how to set key/value pair "environment.TZ = Europe/Istanbul" to lxc_container_config_list. This variable is list. | 10:27 |
jrosser | gokhani: see the way it is used here https://codesearch.opendev.org/?q=lxc_container_config_list | 10:29 |
jrosser | also that some values are already set there so perhaps be careful with overriding that var | 10:29 |
*** lkoranda has quit IRC | 10:33 | |
gokhani | jrosser, it didn't work for me. I overrode this variable and in the tasks I can see it adds this variable > http://paste.openstack.org/show/802461/ | 10:39 |
gokhani | may be I need to set lxc.environment.TZ=Europe/Istanbul instead of environment.TZ=Europe/Istanbul | 10:40 |
jrosser | you need to set whatever it is that the lxc config file expects | 10:42 |
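For reference, the override being attempted would be expressed in user_variables.yml roughly as follows. This is a sketch only: the exact key LXC accepts depends on the LXC version, and whether the variable actually reaches already-running services inside the container is a separate question.

```yaml
# Sketch of a user_variables.yml override (assumed key syntax for
# LXC 3.x). Note: overriding the list discards any entries the roles
# set by default, so those may need to be copied in as well.
lxc_container_config_list:
  - "lxc.environment.TZ=Europe/Istanbul"
```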
openstackgerrit | Jonathan Rosser proposed openstack/ansible-role-pki master: Add boilerplate ansible role components https://review.opendev.org/c/openstack/ansible-role-pki/+/774620 | 10:45 |
openstackgerrit | Merged openstack/openstack-ansible-os_zun master: Move zun pip packages from constraints to requirements https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/772300 | 10:46 |
jrosser | noonedeadpunk: did you try this on something with containers? https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 10:50 |
jrosser | the CI jobs look all like metal | 10:50 |
noonedeadpunk | functional are lxc? | 10:51 |
noonedeadpunk | oh, hm... | 10:52 |
noonedeadpunk | seems like they're not | 10:52 |
jrosser | well that's the funny thing, i'm not seeing that | 10:52 |
jrosser | yeah surprising, as i'm sure they used to be | 10:52 |
noonedeadpunk | but according to recap they are lxc... https://zuul.opendev.org/t/openstack/build/57ac419aa1424096aa07f9dd514c3ae8/log/job-output.txt#3675 | 10:54 |
gokhani | jrosser noonedeadpunk , lxc.environment.TZ didn't work. I only changed the timezone by installing tzdata in the lxc container. And maybe if we copy the /usr/share/zoneinfo directory to the container and link it with /etc/localtime it will work. | 10:54 |
noonedeadpunk | we do install tzdata by default nowadays... maybe we should backport its installation... | 10:56 |
jrosser | seems [test1 -> localhost] everywhere | 10:56 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-lxc_hosts/src/branch/stable/victoria/vars/debian.yml#L50 | 10:57 |
openstackgerrit | Merged openstack/openstack-ansible master: Do not prepare upstream repos for distro jobs https://review.opendev.org/c/openstack/openstack-ansible/+/774372 | 10:57 |
noonedeadpunk | jrosser: btw I guess we're missing https://review.opendev.org/c/openstack/project-config/+/773387 for pki CI | 10:59 |
jrosser | oh hrrm yes | 11:00 |
jrosser | i didn't have much of a plan with that other than trying to get the CI to be alive | 11:00 |
noonedeadpunk | aha, I see. I was wondering if we need integrated tests right now while not having a playbook in place | 11:01 |
jrosser | i was also thinking maybe just the infra tests | 11:01 |
jrosser | perhaps stage one is just getting to the point we can fix up rabbitmq | 11:01 |
gokhani | noonedeadpunk, thanks I will add tzdata to ubuntu vars file for ussuri | 11:01 |
noonedeadpunk | oh, yes, actually that's also good! | 11:01 |
* noonedeadpunk checking lxc | 11:07 | |
*** evrardjp has quit IRC | 11:10 | |
*** guilhermesp has quit IRC | 11:10 | |
*** nicolasbock has quit IRC | 11:11 | |
*** guilhermesp has joined #openstack-ansible | 11:11 | |
*** nicolasbock has joined #openstack-ansible | 11:11 | |
*** evrardjp has joined #openstack-ansible | 11:12 | |
noonedeadpunk | that looks weird https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/containers-lxc-create.yml#L81 | 11:12 |
*** miloa has quit IRC | 11:12 | |
noonedeadpunk | I think we should have smth more reliable in inventory nowadays? | 11:13 |
noonedeadpunk | like `is_metal` | 11:15 |
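A condition based on the inventory's `is_metal` property might look like this — a hypothetical sketch of the idea, not the actual playbook change:

```yaml
# Hypothetical alternative to the physical_host string comparison
# linked above: skip container creation for hosts the inventory
# marks as bare metal.
- name: Create containers
  include_role:
    name: lxc_container_create
  when: not (is_metal | default(false) | bool)
```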
*** gokhani has quit IRC | 11:16 | |
*** gokhani has joined #openstack-ansible | 11:18 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 11:19 |
noonedeadpunk | well, in sandbox role is running ok | 11:19 |
noonedeadpunk | (against lxc) | 11:19 |
*** miloa has joined #openstack-ansible | 11:22 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Replace import with include https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774224 | 11:23 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Adjust openstack_distrib_code_name https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774624 | 11:25 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts stable/victoria: Adjust openstack_distrib_code_name https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774625 | 11:26 |
openstackgerrit | Merged openstack/openstack-ansible-os_aodh master: Move aodh pip packages from constraints to requirements https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/772259 | 11:45 |
*** lkoranda has joined #openstack-ansible | 11:52 | |
*** zul has quit IRC | 11:54 | |
*** andrewbonney has quit IRC | 12:08 | |
*** guilhermesp has quit IRC | 12:08 | |
*** nicolasbock has quit IRC | 12:09 | |
*** andrewbonney has joined #openstack-ansible | 12:10 | |
*** guilhermesp has joined #openstack-ansible | 12:10 | |
*** nicolasbock has joined #openstack-ansible | 12:12 | |
openstackgerrit | Merged openstack/openstack-ansible stable/victoria: Add haproxy_*_service variables https://review.opendev.org/c/openstack/openstack-ansible/+/774126 | 12:17 |
*** cshen has quit IRC | 12:22 | |
*** cshen has joined #openstack-ansible | 12:26 | |
jrosser | hmm weird error https://zuul.opendev.org/t/openstack/build/f7594d20ed424213b6b7478c8c634e14 | 12:30 |
* jrosser wonders if this is pip being too old | 12:30 | |
frickler | jrosser: is that distro pip? then very likely yes | 12:32 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Pass upper-constraints content to the repo_server role https://review.opendev.org/c/openstack/openstack-ansible/+/774518 | 12:35 |
frickler | jrosser: if you run with "-vvv" you can see how pip sees the wheels but deems them incompatible | 12:39 |
jrosser | hrrm that's ugly | 12:41 |
*** CeeMac has quit IRC | 12:58 | |
*** CeeMac has joined #openstack-ansible | 13:01 | |
noonedeadpunk | uh, that's nasty as well https://zuul.opendev.org/t/openstack/build/7bf27b24d95c410ab9c22fb44c527ca9/log/job-output.txt#2231 | 13:28 |
noonedeadpunk | the new python cryptography module requires rust now... | 13:29 |
andrewbonney | Think I just hit that same thing in our internal CI | 13:29 |
noonedeadpunk | for ubuntu fixed with apt install rustc setuptools-rust | 13:30 |
jrosser | there was some chat in the infra irc about this yesterday | 13:31 |
jrosser | maybe that old pip isn't smart enough to find the built wheel | 13:31 |
noonedeadpunk | * apt install rustc && pip3 install setuptools-rust | 13:31 |
* jrosser will try in VM | 13:31 | |
jrosser | we currently pin virtualenv back in the functional tests which could be the cause | 13:32 |
openstackgerrit | Merged openstack/openstack-ansible master: Disable ssl for rabbitmq https://review.opendev.org/c/openstack/openstack-ansible/+/773377 | 13:32 |
* noonedeadpunk trying to get rid of functional tests for openstack-hosts and lxc roles | 13:39 | |
*** lkoranda has quit IRC | 13:44 | |
jrosser | really we need to merge this i think https://review.opendev.org/c/openstack/openstack-ansible-tests/+/773862 | 13:46 |
jrosser | no point fixing things on bionic | 13:46 |
noonedeadpunk | yeah | 13:46 |
noonedeadpunk | but that patch is more about adding things to gates | 13:47 |
noonedeadpunk | that we're missing | 13:47 |
noonedeadpunk | it doesn't drop bionic | 13:47 |
noonedeadpunk | we can drop it though there | 13:48 |
*** lkoranda has joined #openstack-ansible | 13:49 | |
openstackgerrit | Merged openstack/openstack-ansible master: Add reference to netplan config example https://review.opendev.org/c/openstack/openstack-ansible/+/774425 | 13:55 |
*** spatel has joined #openstack-ansible | 13:59 | |
*** sshnaidm is now known as sshnaidm|afk | 14:00 | |
spatel | I am trying to deploy ceph using OSA and successfully deployed the mon container, but when I run the playbook on the OSD node it gets stuck here - http://paste.openstack.org/show/802471/ | 14:03 |
spatel | deploying octopus | 14:03 |
spatel | i checked and i do have a /dev/sdb disk but it's saying it can't find it, so it looks like something else is going on | 14:04 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-tests master: Unpin virtualenv version https://review.opendev.org/c/openstack/openstack-ansible-tests/+/774651 | 14:04 |
admin0 | is scaling from 1 -> 3 controllers as easy as defining them in the config and running the playbooks ? | 14:04 |
jrosser | ^ that allows cryptography to install on bionic but it does mean moving to the new resolver at the same time | 14:05 |
noonedeadpunk | doh | 14:06 |
jrosser | spatel: that's an error from ceph_volume Failed to find physical volume "/dev/sdb1" | 14:07 |
jrosser | related to LVM pvs, you need to check what ceph_volume wants your disk setup to be (this is not an OSA thing) | 14:07 |
jrosser | spatel: as usual there is a goldmine of info in the AIO :) https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/tasks/prepare_ceph.yml#L65-L91 | 14:08 |
spatel | I have created /dev/sdb1 using fdisk because i ran out of option otherwise i told ceph use /dev/sdb raw disk | 14:11 |
jrosser | well first check that ceph-volume works with a raw partition or if you need to put some LVM on that first | 14:20 |
mgariepy | ceph-volume can manage the bare drives if you let it. | 14:21 |
*** pcaruana has joined #openstack-ansible | 14:24 | |
admin0 | 22.0.0 -- on a 3x controller setup, mysql fails to come up .. the error is: WSREP: Member 1.0 (m2b10_galera_container-ccaa6815) requested state transfer from '*any*', but it is impossible to select State Transfer donor: Resource temporarily unavailable | 14:39 |
spatel | mgariepy jrosser when i run this command manually "/usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb" it just hangs and doesn't return any output, so I'm trying to understand what it's doing | 14:41 |
admin0 | spatel, why not run a decoupled build ? where u bring up ceph separately to osa | 14:45 |
admin0 | and then just integrate the two | 14:45 |
spatel | I have very small ceph deployment, just for glance images and small backup stuff that is why deploying using OSA + ceph | 14:46 |
kleini | spatel: I saw your blog post regarding PowerDNS and Designate. Does your setup work to upgrade both PowerDNS instances from Designate? | 14:50 |
spatel | upgrade both PowerDNS instances from Designate? I don't know what you're trying to say | 14:51 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Remove neutron_keepalived_no_track variable https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/770808 | 14:52 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/773979 | 14:54 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/773979 | 14:55 |
kleini | sorry, the question is: do both PowerDNS instances transfer new zone data from Designate successfully? In my case only one PowerDNS instance is notified through Web API, although I have two targets defined. | 15:01 |
spatel | Yes it notify both PowerDNS using designate | 15:03 |
spatel | kleini | 15:03 |
kleini | okay, then that is different for your setup. maybe something fixed with Victoria and broken in Ussuri. | 15:04 |
spatel | if you have 10 PowerDNS servers they will all get notified as long as they are in the target list. (I have noticed the first target DNS always gets updated immediately but the next target takes a few more seconds (in my case a 20 to 30 second delay) | 15:05 |
kleini | maybe I just need more time to test this again in my environment | 15:06 |
spatel | I never tested designate on Ussuri but i can tell you victoria its working | 15:31 |
spatel | kleini :) | 15:31 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests https://review.opendev.org/c/openstack/openstack-ansible/+/774685 | 15:37 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Use integrated tests https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774688 | 15:41 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add hosts integrated tests https://review.opendev.org/c/openstack/openstack-ansible/+/774685 | 15:41 |
*** sshnaidm|afk is now known as sshnaidm | 15:44 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Use integrated tests https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/774688 | 15:45 |
*** gokhani has quit IRC | 15:48 | |
*** spatel has quit IRC | 15:59 | |
*** macz_ has joined #openstack-ansible | 16:02 | |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 16:02 |
openstack | Meeting started Tue Feb 9 16:02:54 2021 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:02 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:02 |
*** openstack changes topic to " (Meeting topic: openstack_ansible_meeting)" | 16:02 | |
openstack | The meeting name has been set to 'openstack_ansible_meeting' | 16:02 |
*** macz_ has joined #openstack-ansible | 16:03 | |
noonedeadpunk | trying to check if we have some bugs to discuss... | 16:05 |
noonedeadpunk | I think jrosser covered most of them today :) | 16:06 |
noonedeadpunk | #topic office hours | 16:07 |
*** openstack changes topic to "office hours (Meeting topic: openstack_ansible_meeting)" | 16:07 | |
jrosser | o/ | 16:09 |
jrosser | hello | 16:09 |
noonedeadpunk | \o/ | 16:09 |
noonedeadpunk | I don't have really much from my side to say, since I had pretty little time on my hands :( | 16:10 |
jrosser | feels like we need to get all this new-pip stuff merged | 16:11 |
noonedeadpunk | I'd say we almost did? | 16:11 |
noonedeadpunk | https://review.opendev.org/q/topic:%22osa-new-pip%22+(status:open) | 16:11 |
noonedeadpunk | it's super close | 16:11 |
jrosser | we don't yet land the patch to the integrated repo which turns it on | 16:12 |
jrosser | this is related for the tests repo https://review.opendev.org/c/openstack/openstack-ansible-tests/+/774651 | 16:12 |
noonedeadpunk | we stuck on neutron pretty much | 16:13 |
noonedeadpunk | and tests repo does not make this easy for us | 16:13 |
jrosser | yeah lots of things there, the tests repo patch will help | 16:13 |
jrosser | then we need the bionic->focal patch for os_neutron | 16:13 |
noonedeadpunk | which just doesn't work actually... | 16:13 |
*** spatel has joined #openstack-ansible | 16:14 | |
jrosser | indeed, the functional tests are all generally unhappy | 16:14 |
jrosser | https://review.opendev.org/773979 is failing horribly in CI just now | 16:15 |
jrosser | oh right | 16:16 |
jrosser | we can't land the change to the tests repo + bionic->focal without also the constraints->requirements changes for os_neutron | 16:17 |
jrosser | some of these patches are going to need to be squashed | 16:17 |
noonedeadpunk | why do the constraints->requirements changes relate to bionic vs focal? I guess they will get the same versions during the play? | 16:18 |
noonedeadpunk | but see no issues in merging as well if it will be required | 16:18 |
noonedeadpunk | also I'm wondering what to do with octavia on centos | 16:19 |
noonedeadpunk | should we just mark it nv now? | 16:19 |
jrosser | i wonder if johnsom is around? | 16:20 |
johnsom | Hi | 16:20 |
jrosser | woah | 16:20 |
jrosser | :) | 16:20 |
johnsom | You rang? | 16:20 |
johnsom | What is up? | 16:20 |
jrosser | did you see this http://lists.openstack.org/pipermail/openstack-discuss/2021-February/020218.html | 16:21 |
jrosser | we are a bit stuck on our centos-8 CI jobs | 16:21 |
johnsom | Hmm, reading through. The initial report is a nova bug it looks like. Let me read all the way down | 16:22 |
noonedeadpunk | the issue here is that nova and neutron tempest tests are passing for us.. | 16:23 |
noonedeadpunk | maybe we're testing wrong things... | 16:23 |
jrosser | we should check they actually boot something :) | 16:24 |
johnsom | Well, Octavia tends to actually test more than other projects. We have true end-to-end tests, where some gloss over things | 16:24 |
johnsom | Is there a patch with logs I can dig in? | 16:24 |
noonedeadpunk | sure https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/769952 | 16:25 |
johnsom | Thanks, give me a few minutes. | 16:25 |
noonedeadpunk | `tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops` should boot smth I guess | 16:26 |
noonedeadpunk | https://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/openstack/aio1-utility/tempest_run.log.txt#135 (it's for nova role CI) | 16:26 |
johnsom | Hmm, those logs have expired. Another patch maybe? | 16:26 |
johnsom | Ah, nevermind, I had the wrong link | 16:26 |
noonedeadpunk | jrosser: yeah they do spawn an instance https://zuul.opendev.org/t/openstack/build/0a123e189be8445da96927be09220d7a/log/logs/host/nova-compute.service.journal-20-52-15.log.txt#5241 | 16:27 |
jrosser | cool | 16:28 |
noonedeadpunk | johnsom: you can also check this one if the previous has expired https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a | 16:29 |
noonedeadpunk | btw for the last patch debian also failed in pretty much the same way I'd say... | 16:30 |
*** chandankumar is now known as raukadah | 16:33 | |
jrosser | noonedeadpunk: i did not know what to do about the 0.0.0 version here https://bugs.launchpad.net/openstack-ansible/+bug/1915128 | 16:36 |
openstack | Launchpad bug 1915128 in openstack-ansible "OpenStack Swift-proxy-server do not start" [Undecided,New] | 16:36 |
jrosser | other than say we're not really supporting rocky..... | 16:36 |
noonedeadpunk | I'm wondering if it's because they checked-out to rocky-em tag | 16:38 |
noonedeadpunk | I could imagine that pbr might go crazy about that | 16:39 |
jrosser | oh interesting, could be | 16:39 |
jrosser | perhaps an assumption that a tag is a number | 16:39 |
jrosser | whilst we are in meeting time i guess we should also talk about CI resource use? | 16:40 |
noonedeadpunk | yeah | 16:40 |
noonedeadpunk | I think the best we can do, except reducing time, is also move bionic tests to experimental | 16:41 |
jrosser | i think that the conclusion on the ML is a good one, reducing job failures is the biggest win | 16:41 |
noonedeadpunk | not sure if we should actively carry on bionic | 16:41 |
jrosser | because that may be even 100% overhead right now, or more | 16:41 |
noonedeadpunk | and main issue with failures I guess is galera | 16:41 |
noonedeadpunk | yeah, there were other ones, like the auditd bug... | 16:42 |
jrosser | i'm going to try and be a bit more disciplined with recheck to note on the etherpad (https://etherpad.opendev.org/p/osa-ci-failures) when there is some systematic error | 16:42 |
noonedeadpunk | and I guess looking into gnocchi is also useful | 16:42 |
jrosser | oh yes there is a whole lot of mess there | 16:42 |
jrosser | something very strange with the db access unless i'm reading the log badly | 16:43 |
noonedeadpunk | +1 to having that etherpad | 16:43 |
noonedeadpunk | I think I need to deploy it to see what's going on | 16:43 |
*** rh-jelabarre has quit IRC | 16:45 | |
jrosser | what to do with mariadb? is this an irc sort of thing? | 16:45 |
noonedeadpunk | I actually have no idea except asking, yeah. | 16:47 |
* noonedeadpunk goes to #mariadb for this | 16:47 | |
noonedeadpunk | * #maria | 16:47 |
admin0 | hi all .. i am getting an issue in setup-infra that i cannot understand .. this is the error: https://gist.githubusercontent.com/a1git/bf7c55a1befd59e3682be485bc4b1e88/raw/785c1d0a32fc05ae23e5fa5dbd859d3934f6930a/gistfile1.txt -- does it mean i need to downgrade my pip ? | 16:48 |
admin0 | i tried 22.0.0 .. but it fails on galera setup .. so going back on 21.2.2 | 16:48 |
*** rh-jelabarre has joined #openstack-ansible | 16:50 | |
jrosser | admin0: have you used venv_rebuild=true ever on that deployment? | 16:52 |
noonedeadpunk | uh.... | 16:53 |
admin0 | i have not .. this is a new greenfield | 16:53 |
noonedeadpunk | we need to merge https://review.opendev.org/q/I6bbe66b699ce5ab245bb9779b61b5c4625eba927 | 16:53 |
admin0 | on one line in the python_venv_build log, I find 2021-02-09T22:13:01,803 error: command 'x86_64-linux-gnu-gcc' failed with exit status 1 | 16:54 |
spatel | noonedeadpunk ++++++1 for that patch | 16:54 |
admin0 | aren't those installed by ansible inside the container ? | 16:54 |
noonedeadpunk | I guess it should be installed only on repo container where we usually delegate | 16:55 |
admin0 | i will lxc-containers-destroy .. and retry once more | 16:55 |
spatel | admin0 it cooks everything on the repo container and then just deploys on the other containers to reduce duplicated work | 16:56 |
openstackgerrit | Merged openstack/openstack-ansible-tests master: Unpin virtualenv version https://review.opendev.org/c/openstack/openstack-ansible-tests/+/774651 | 16:56 |
johnsom | jrosser noonedeadpunk I think we need to bring in a nova expert on this. I don't see why nova is going out to lunch, but there are a bunch of errors in the nova logs. This seems to be related: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-api-os-compute.service.journal-12-56-44.log.txt#6893 | 16:56 |
spatel | venv_rebuild can be evil without that patch :) I learnt that hard way | 16:57 |
jrosser | it should never be trying to build that wheel on the utility container like spatel says | 16:57 |
jrosser | it means that for some reason it is not being taken from the repo server | 16:57 |
johnsom | This is the other key message: https://zuul.opendev.org/t/openstack/build/df371a76c1ab4e76b97e4b6b974fe29a/log/logs/host/nova-compute.service.journal-12-56-44.log.txt#5970 | 16:57 |
johnsom | But that may be a side effect of the cleanup/error handling related to the above error | 16:58 |
* jrosser sees eventlet...... | 17:00 | |
johnsom | Yeah | 17:03 |
noonedeadpunk | hm, that seems like libvirt issue indeed | 17:03 |
noonedeadpunk | wondering why we don't see it anywhere else... | 17:03 |
johnsom | Well, I really think it's related to the messaging queue problem. The libvirt very well may be a side effect | 17:03 |
johnsom | I'm just not sure what it is trying to message there. | 17:04 |
jrosser | rabbitmq log is totally unhelpful :( | 17:04 |
noonedeadpunk | eventually I saw these messages in my deployment with ceilometer | 17:06 |
noonedeadpunk | when its agent tries to poll libvirt | 17:06 |
noonedeadpunk | and the metric it's polling is not supported by libvirt | 17:07 |
noonedeadpunk | but here we don't have any pollster I guess (except nova) | 17:07 |
*** spatel has quit IRC | 17:07 | |
*** spatel has joined #openstack-ansible | 17:08 | |
noonedeadpunk | well anyway, thanks for taking time and watching johnsom! | 17:08 |
noonedeadpunk | #endmeeting | 17:08 |
*** openstack changes topic to "Launchpad: https://launchpad.net/openstack-ansible || Weekly Meetings: https://wiki.openstack.org/wiki/Meetings/openstack-ansible || Review Dashboard: http://bit.ly/osa-review-board-v3" | 17:08 | |
openstack | Meeting ended Tue Feb 9 17:08:56 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 17:08 |
johnsom | Yeah, sorry I can't be of more help | 17:08 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.html | 17:08 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.txt | 17:09 |
openstack | Log: http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2021/openstack_ansible_meeting.2021-02-09-16.02.log.html | 17:09 |
johnsom | We need nova team level knowledge on that one. | 17:09 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron master: Switch neutron functional jobs to focal https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/773979 | 17:12 |
jrosser | mgariepy: can you take a look at this if you have a moment https://review.opendev.org/c/openstack/ansible-role-python_venv_build/+/774159/4 | 17:20 |
*** gyee has joined #openstack-ansible | 17:20 | |
*** maharg101 has quit IRC | 17:39 | |
*** lkoranda has quit IRC | 17:46 | |
*** rpittau is now known as rpittau|afk | 17:56 | |
spatel | I think i am totally stuck with ceph, ceph-ansible is running this command and it's just hanging forever " ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb " | 18:07 |
spatel | that command is supposed to create the LVM etc.. right | 18:07 |
spatel | i wonder if this is octopus issue | 18:08 |
admin0 | spatel, i have octopus using ceph-ansible | 18:08 |
admin0 | worked fine for me | 18:08 |
spatel | hmm ubuntu 20.04? | 18:08 |
admin0 | i used sgdisk to zap the partitions .. so the disk is clear and used auto detect | 18:09 |
admin0 | yep | 18:09 |
spatel | i did zap to wipe out all partition | 18:09 |
spatel | did you use raw disk or LVM? | 18:09 |
admin0 | it will do the lvm itself | 18:09 |
admin0 | let me share my configs | 18:10 |
spatel | please | 18:10 |
admin0 | spatel, v | 18:12 |
admin0 | https://gist.github.com/a1git/1f0b75a169a74e463b92214c6148bb55 | 18:12 |
spatel | reading | 18:12 |
spatel | where are the data disks in the config? | 18:13 |
admin0 | no need :) | 18:14 |
admin0 | thats the magic | 18:14 |
spatel | how does ceph know which is OS disk and which is spare disk | 18:14 |
admin0 | if there is no partition table, it uses the disk | 18:14 |
admin0 | so your main disk which has data and partitions is untouched | 18:14 |
spatel | hmm | 18:14 |
admin0 | this means i can have uneven disks and i don't have to worry about making a config | 18:14 |
admin0 | else you have to write every disk and thats a pain | 18:15 |
admin0 | https://gist.github.com/a1git/7f491039325bd735d3e93b2cae418da7 | 18:15 |
spatel | in my old ceph deployment i have each disk in the config because some disks are not part of ceph | 18:15 |
admin0 | one of the server where there are disks | 18:15 |
admin0 | for those disks, just create a partition .. like fdisk .. 1 partition .. and ceph will not touch it | 18:15 |
admin0 | it will only add those disks which have no partition table .. so sgdisk --zap ones | 18:16 |
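The zap-then-auto-detect approach admin0 describes maps to ceph-ansible's `osd_auto_discovery` option; a minimal, hedged sketch of an osds group_vars file (values illustrative, not admin0's actual config):

```yaml
# group_vars/osds.yml -- illustrative sketch only
# With auto-discovery on, ceph-ansible only consumes disks that have
# no partition table, so zap candidates first, e.g.:
#   sgdisk --zap-all /dev/sdb
# Disks that carry a partition table (the OS disk, disks you want kept
# out of ceph) are left untouched.
osd_auto_discovery: true

# The explicit alternative is to list every data disk by hand:
# devices:
#   - /dev/sdb
#   - /dev/sdc
```

This is why admin0's gist has no per-disk list: uneven disk layouts across hosts need no per-host configuration.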
spatel | hmm | 18:16 |
*** jpvlsmv has joined #openstack-ansible | 18:18 | |
spatel | admin0 this is what i have http://paste.openstack.org/show/802489/ | 18:19 |
jrosser | public and cluster networks overlap there weirdly | 18:21 |
admin0 | it failed again .. unable to execute 'x86_64-linux-gnu-gcc': No such file or directory .. | 18:22 |
admin0 | can't seem to make utility-install work | 18:23 |
jrosser | admin0: CI tests are passing, it is usually something in your environment | 18:23 |
jrosser | i built an AIO today which worked | 18:24 |
admin0 | if i have to redo from scratch, i do a rm -rf /openstack on all servers, rm -rf /etc/ansible .. rm -rf /etc/openstack_deploy and then start again | 18:24 |
jrosser | and my colleague upgraded a region U->V today too | 18:24 |
jrosser | thats not cleaning out the repo server | 18:25 |
jrosser | really you should read the detail of the bug i pasted earlier | 18:25 |
jrosser | understand what it breaks and why | 18:25 |
admin0 | this seems like the error i have https://bugs.launchpad.net/rally/+bug/1618714 | 18:26 |
openstack | Launchpad bug 1618714 in Rally "Running setup.py install for netifaces: finished with status 'error'" [Undecided,Fix released] | 18:26 |
jrosser | this is from 2016 | 18:27 |
admin0 | jrosser, you asked about _rebuild=true is used or not .. but which bug did you paste earlier | 18:27 |
admin0 | i seem to have missed it | 18:27 |
jrosser | https://bugs.launchpad.net/bugs/1914301 | 18:29 |
openstack | Launchpad bug 1914301 in openstack-ansible "passing venv_rebuild=true leaves repo server in unusable state" [Undecided,Fix released] | 18:29 |
jrosser | if you just delete the directories that you listed then you are not doing a clean deployment | 18:29 |
mgariepy | jrosser, looking at it now. | 18:30 |
jrosser | perhaps this explains why you experience so much failure when switching between tags? | 18:30 |
jrosser | specifically, if you leave the /var/www contents available to the repo server, then next deployment you may inherit bad state for that | 18:31 |
jrosser | and if at some point you have run with venv_rebuild=true, then the trouble caused by that bug will propagate into the next deploy | 18:31 |
spatel | jrosser oh! i can see my mon ips are wrong. thanks for pointing that out | 18:31 |
spatel | after fixing the mon IP it's still hanging on the ceph-volume lvm create command, damn it! | 18:35 |
spatel | admin0 can you send me output of ceph-volume inventory /dev/sdX command | 18:35 |
admin0 | i do lxc-container-destroy to destroy all containers first . then delete the /openstack on all controllers and computes | 18:36 |
spatel | I don't have a "sas address" | 18:36 |
admin0 | what would be the best way to clean and redo it ? | 18:36 |
admin0 | and i also delete the inventory file, so that the new containers will be new ones ( and not reusing old ips or folders by any chance) | 18:37 |
jrosser | admin0: you should do whatever works best for you :) here we tend to just pxeboot stuff again if it gets messed up | 18:39 |
admin0 | i have that too . trying to avoid that | 18:39 |
admin0 | one final question | 18:40 |
jrosser | the answer lies in your repo server | 18:40 |
jrosser | why did it not build the wheel for netifaces | 18:40 |
jrosser | thats the actual trouble | 18:40 |
admin0 | is expanding from 1 to 3 controllers only a matter of adding them into the config file and running the playbooks ? | 18:40 |
admin0 | my 22.0.0 failed on galera .. so i can try to delete all and go back to victoria with only 1 controller added | 18:40 |
jrosser | with one controller you will have the internal/external IP on an interface yourself | 18:40 |
jrosser | that needs converting to keepalived managing the VIP | 18:41 |
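For the single-to-three controller conversion jrosser describes, the VIPs move from a plain interface address to keepalived management, driven from user_variables; a hedged sketch using OSA's haproxy/keepalived variables (addresses and interface names are placeholders):

```yaml
# /etc/openstack_deploy/user_variables.yml -- illustrative values only
# keepalived moves the VIPs between the haproxy nodes; the addresses
# must no longer be statically configured on any one controller.
haproxy_keepalived_external_vip_cidr: "203.0.113.10/32"   # placeholder
haproxy_keepalived_internal_vip_cidr: "172.29.236.9/32"   # placeholder
haproxy_keepalived_external_interface: br-flat            # placeholder
haproxy_keepalived_internal_interface: br-mgmt
```

As admin0 notes just below, if keepalived/haproxy with a VIP is already in place on the single controller, this part is already taken care of.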
admin0 | i also have keepalive/haproxy on single controller with a VIP | 18:41 |
admin0 | so that is taken care of | 18:41 |
jrosser | it should be easy to add more then | 18:41 |
jrosser | galera does sometimes fail to start properly on focal | 18:41 |
jrosser | did you check that? | 18:42 |
admin0 | spatel, https://gist.github.com/a1git/0b8b1906382318e021f9d6a782af558e | 18:43 |
admin0 | one is from virtual(kvm) based ceph install, another one is on metal | 18:43 |
spatel | thanks, so you don't have a "sas address" either; the error i am getting may be safe to ignore | 18:46 |
admin0 | the last time i did ceph+osa was in 2016 :) | 18:50 |
admin0 | always decoupled after that | 18:50 |
admin0 | that too from copying the rackspace playbooks | 18:50 |
admin0 | not the osa one, but their other one | 18:50 |
admin0 | i forgot which one | 18:51 |
admin0 | i think i may have a clue/hunch :D | 18:52 |
*** andrewbonney has quit IRC | 18:52 | |
admin0 | i copied my config from an identical m1000e .. but this one does not have mtu9k, while that one has .. now i realize it may have been the cause of this and also why the galera was not syncing | 18:53 |
admin0 | going back on 22.0.0 to retry this again | 18:53 |
admin0 | i had already destroyed the containers before i could validate that it had the 9000 mtu, so that would not have worked | 18:54 |
*** jpvlsmv has quit IRC | 19:00 | |
openstackgerrit | Merged openstack/ansible-role-uwsgi stable/train: Run uwsgi tasks only when uwsgi_services defined https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/774494 | 19:11 |
*** miloa has quit IRC | 19:23 | |
admin0 | it was the mtu :( | 19:34 |
admin0 | all working good now | 19:34 |
admin0 | "<jrosser> the answer lies in your repo server" \o/ | 19:34 |
*** maharg101 has joined #openstack-ansible | 19:36 | |
*** jpvlsmv has joined #openstack-ansible | 19:36 | |
spatel | jrosser admin0 my ceph problem resolved :) | 19:39 |
*** macz_ has quit IRC | 19:39 | |
spatel | don't ask me what was the issue otherwise you will beat me.. lol | 19:39 |
jrosser | awesome | 19:39 |
jrosser | wrong host?! | 19:40 |
spatel | problem was mtu | 19:40 |
jrosser | lol | 19:40 |
*** macz_ has joined #openstack-ansible | 19:40 | |
*** maharg101 has quit IRC | 19:40 | |
spatel | in netplan i set mtu on bridge but that doesn't work, you have to set on bond0 or vlan interface | 19:40 |
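spatel's finding matches how netplan applies MTU: it has to be set on the underlying bond and VLAN interfaces, not only on the bridge on top. A hedged sketch (interface names, VLAN id and file name are placeholders):

```yaml
# /etc/netplan/01-osa.yaml -- illustrative sketch only
network:
  version: 2
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      mtu: 9000            # set the MTU here, on the bond...
  vlans:
    bond0.10:
      id: 10
      link: bond0
      mtu: 9000            # ...and on the VLAN riding on it
  bridges:
    br-mgmt:
      interfaces: [bond0.10]
      mtu: 9000            # the bridge only gets a usable 9000 MTU
                           # if its member interfaces carry it too
```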
spatel | it would be good if our ansible playbooks checked the MTU with a ping -s 9000 kind of test to see if it can make it, and otherwise threw an error :) | 19:42 |
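spatel's suggested pre-flight check could be sketched as an Ansible task; this is hypothetical, not an existing OSA task. The payload is 8972 bytes because a 9000-byte MTU leaves 9000 − 20 (IP header) − 8 (ICMP header) for data, and -M do forbids fragmentation:

```yaml
# Hypothetical pre-flight MTU check -- not part of openstack-ansible.
# The group name 'all_hosts' is a placeholder for whatever inventory
# group holds the peers that must be reachable with jumbo frames.
- name: Verify jumbo-frame path MTU to the other hosts
  command: "ping -M do -c 3 -s 8972 {{ item }}"
  loop: "{{ groups['all_hosts'] | default([]) }}"
  changed_when: false
  # A failure here means the packet needed fragmenting, i.e. the
  # 9000 MTU path is broken somewhere between the hosts.
```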
jpvlsmv | Hello, I'm a first time caller... | 19:45 |
spatel | jpvlsmv hello | 19:46 |
jpvlsmv | trying to deploy Victoria, with some security constraints... specifically, root must not be able to log in via ssh. There are many tasks that specify user:root | 19:47 |
jpvlsmv | which overrides the -u openstackuser and fails | 19:47 |
admin0 | spatel, yours also :D | 19:51 |
jpvlsmv | For example, in openstack-hosts-setup.yml task Install Ansible prerequisites, changing the user:root to become: true fixed my situation | 19:51 |
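The change jpvlsmv describes amounts to swapping a hard-coded root login for privilege escalation; schematically (task name from the example above, body elided):

```yaml
# Before: forces an ssh login as root, which PermitRootLogin=no blocks
- name: Install Ansible prerequisites
  remote_user: root
  # (task body unchanged)

# After: ssh in as the unprivileged deploy user, escalate with sudo
- name: Install Ansible prerequisites
  become: true
  # (task body unchanged)
```

`become: true` uses whatever escalation method is configured (sudo by default), so the security requirement of no direct root ssh is preserved.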
admin0 | i burned 6 hours on mtu today :) before i realized it | 19:51 |
spatel | it would be that part of test in ansible-playbook so one less thing to worry | 19:52 |
jrosser | jpvlsmv: currently we don't really have support for deploying as the non root user | 19:52 |
admin0 | jpvlsmv, is there an issue just logging in as root, or adding the keys to root directly ? | 19:52 |
jrosser | however as you've seen it's actually not so hard to fix things up | 19:53 |
jrosser | if you are wanting to make some contributions to openstack-ansible this would be excellent | 19:53 |
openstackgerrit | Ghanshyam proposed openstack/openstack-ansible-os_horizon master: Use Tempest for dashboard test instead of tempest-horizon https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/774719 | 19:54 |
jpvlsmv | well, I have a key to log in to the target, but it's as my own userid rather than root, and the security folks have blocked root via sshd_config | 19:54 |
admin0 | jpvlsmv, then why do you make this your issue ? go back to the security folks and say you need root to continue | 19:57 |
admin0 | or send them the playbooks and ask them to help you write a workaround for you | 19:57 |
jpvlsmv | Unfortunately, they require "PermitRootLogin no" (because that line is specified in their controls) | 19:59 |
jpvlsmv | ok, well, I'll look at how to make such a contribution... | 20:00 |
jrosser | jpvlsmv: have you built an all-in-one? | 20:02 |
jpvlsmv | @jrosser I have before, but currently working to expand out to a proper-sized grid | 20:03 |
jrosser | you can probably do a lot of this in an AIO by disabling root login and seeing what breaks | 20:03 |
spatel | I have an openstack design question: for the last couple of years i have been building openstack in my own datacenter where i have full control of the network / servers etc. but now i want to build openstack in a remote datacenter where i am renting some servers and don't have control of the network. what options do i have to build openstack there? | 20:05 |
admin0 | its not that hard | 20:05 |
spatel | I am thinking about network design | 20:06 |
spatel | currently i have all VLAN base provider | 20:06 |
admin0 | you need to create your own OOB network .. | 20:06 |
admin0 | so throw in 1 server ( i call it infra node ) .. and install ubuntu, vyos, or pfsense .. where you can create vlans, tags, multiple networks | 20:06 |
admin0 | then you can vpn here and reach the idrac/cimc of the servers and switches | 20:07 |
spatel | i need VLAN right on physical network br-mgmt, br-vxlan etc.. | 20:07 |
admin0 | yes | 20:07 |
admin0 | you will need to rent/colo the switch .. so it needs to be able to do vlans | 20:07 |
admin0 | and also based on your traffic and how you get the ips, you can have either the DC have .1 on the subnet, or you can ebgp to the dc and split their /24 to /25 internally on your own vlan tag for br-vlan | 20:08 |
spatel | lets say i don't have control on DC network (you can't put any switch or have vlan) in that case what option i have? | 20:10 |
admin0 | you mean pure dedicated servers only ? | 20:10 |
spatel | currently collecting all information | 20:10 |
spatel | Yes pure 100% metal server | 20:11 |
spatel | with public IPs | 20:11 |
admin0 | if they are in the same L2 , you can still run your network between them | 20:11 |
spatel | that is what i am thinking and trying to understand what i can do with minimum resource. | 20:12 |
admin0 | else, you need something else like openvpn where on subinterfaces you run the br-mgmt br-vlan br-vxlan etc to make it work .. but you need to do some NAT tricks for the ip addresses | 20:12 |
spatel | I am sure its not going to be easy | 20:13 |
spatel | I need DPDK or SRIOV also because my network profile is bursty | 20:13 |
jrosser | i have seen this done using linux vxlan to overlay on top of no-vlans metal deployment | 20:18 |
spatel | jrosser that would be overkill right? | 20:24 |
admin0 | ext-net can also be on a vxlan right ? | 20:25 |
spatel | running all VLANs on top of overlay network like spine-leaf | 20:25 |
spatel | I need to find good datacenter who can provide me dedicated VLANs for my OSA control plane | 20:26 |
jrosser | spatel: depends if you think it is overkill if all you have is public IPs | 20:26 |
jrosser | like where your internal vip would go, for example | 20:27 |
jrosser | all http and insecure..... | 20:27 |
spatel | i am still thinking it through.. not sure what kind of deal we will get; currently i am preparing all the questions to ask when going to hunt for DC racks | 20:28 |
spatel | if its very complicated then i have to go for ranting Racks instead of server | 20:29 |
spatel | renting* | 20:29 |
*** waxfire has quit IRC | 20:39 | |
*** waxfire has joined #openstack-ansible | 20:40 | |
*** d34dh0r53 has quit IRC | 21:07 | |
*** d34dh0r53 has joined #openstack-ansible | 21:08 | |
*** jpvlsmv has quit IRC | 21:10 | |
*** poopcat has quit IRC | 21:22 | |
*** poopcat has joined #openstack-ansible | 21:22 | |
spatel | jrosser do you know how or which part of the playbook creates the default ceph pools like vms, images and metrics etc? | 21:27 |
spatel | I am seeing it didn't create any pools on the ceph storage, so wondering if i am missing something | 21:28 |
*** ianychoi__ has joined #openstack-ansible | 21:40 | |
*** ianychoi_ has quit IRC | 21:43 | |
*** cshen has quit IRC | 21:59 | |
jrosser | spatel: https://github.com/openstack/openstack-ansible/blob/master/tests/roles/bootstrap-host/templates/user_variables_ceph.yml.j2#L25 | 22:23 |
spatel | I do have that setting in user_variables.yml | 22:24 |
spatel | openstack_config: true | 22:24 |
jrosser | this is all ceph-ansible stuff, not OSA | 22:24 |
jrosser | https://github.com/ceph/ceph-ansible/blob/b1f37c4b3ddec4ff564514c079026f1e020575ea/roles/ceph-defaults/defaults/main.yml#L578-L633 | 22:24 |
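The ceph-ansible switches being discussed are `openstack_config` plus the `openstack_pools` list from ceph-defaults; a hedged fragment modelled on those defaults (pool names and pg counts are illustrative, not the values from the linked defaults file):

```yaml
# ceph-ansible variables (e.g. via user_variables.yml) -- sketch only
openstack_config: true       # gates the openstack_config.yml task file
openstack_pools:             # pools created when the gate passes
  - name: volumes
    pg_num: 32
    rule_name: replicated_rule
  - name: images
    pg_num: 32
    rule_name: replicated_rule
```

If the pools never appear, the first thing to check is whether the task file that consumes `openstack_pools` ran at all, which is what the conditionals linked below control.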
spatel | is it possible i don't have enough disk or space on ceph that is why it didn't create default pools? | 22:25 |
spatel | Totally understand its not OSA but just trying to get some help to see if i missed something | 22:25 |
spatel | anyway i may re-build ceph again to verify because i did lots of poking and not sure if something went wrong | 22:26 |
admin0 | spatel, if you want to do it separately, buzz me :) | 22:26 |
spatel | does it make any difference if i do it separately ? | 22:27 |
spatel | it's the same playbook even if i do it separately | 22:27 |
admin0 | are the variables also the same ? | 22:28 |
admin0 | i have not looked into our inbuilt playbooks tbh .. | 22:28 |
spatel | My ceph deployment is very small and limited usage so i don't want to give it more hardware | 22:28 |
spatel | Everything is same | 22:28 |
admin0 | it can be from the same deply container | 22:28 |
admin0 | deploy* | 22:28 |
admin0 | git clone into /opt/ceph-ansible and thats it | 22:29 |
spatel | OSA use ceph-ansible | 22:29 |
spatel | advantage of using OSA is i can run mon nodes on containers | 22:29 |
jrosser | spatel: https://zuul.opendev.org/t/openstack/build/1546dae2f15d456fb98eff4a4860fec9/log/job-output.txt#30984-31010 | 22:30 |
jrosser | every patch to openstack-ansible runs a ceph job | 22:30 |
jrosser | you can compare the logs | 22:30 |
spatel | Perfect! i can see TASK now which creating pools | 22:31 |
*** cshen has joined #openstack-ansible | 22:32 | |
jrosser | i found it by looking in the ceph-ansible code here https://github.com/ceph/ceph-ansible/blob/371d854a5c03cfb30d27d5cdbaaad61f7f8d6c58/roles/ceph-osd/tasks/openstack_config.yml#L4 | 22:33 |
jrosser | then finding some random patch to openstack-ansible in the review dashboard and searching the log for that task name | 22:33 |
spatel | I have noticed ansible not executing that TASK | 22:37 |
jrosser | then you would look at the conditionals here https://github.com/ceph/ceph-ansible/blob/371d854a5c03cfb30d27d5cdbaaad61f7f8d6c58/roles/ceph-osd/tasks/main.yml#L97-L103 | 22:38 |
admin0 | the cloud is up \o/ .. thanks guys for helping | 22:38 |
spatel | in that block i have openstack_config: true | 22:40 |
spatel | not sure about this part not add_osd | bool | 22:40 |
spatel | jrosser thanks for the clue, let me take it from here and start investigating | 22:45 |
jrosser | no worries | 22:45 |
jrosser | always worth throwing in some debug: task to print those followed by a fail: to make it stop | 22:45 |
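jrosser's debug-then-fail trick, sketched as a temporary task pair dropped in front of the suspect conditional (variable names taken from the ceph-ansible conditional discussed above):

```yaml
# Temporary instrumentation -- remove once the conditional is understood
- name: Show the values feeding the openstack_config conditional
  debug:
    msg:
      openstack_config: "{{ openstack_config | default('UNDEFINED') }}"
      add_osd: "{{ add_osd | default('UNDEFINED') }}"

- name: Stop here so the values can be inspected
  fail:
    msg: "Deliberate stop for debugging"
```

Printing each variable through `default('UNDEFINED')` distinguishes "set to false" from "never defined", which is usually the whole question with a skipped task.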
spatel | yup! let me start debugging to see how that condition will be true | 22:46 |
*** spatel has quit IRC | 22:50 | |
*** spatel has joined #openstack-ansible | 23:02 | |
*** maharg101 has joined #openstack-ansible | 23:37 | |
*** maharg101 has quit IRC | 23:42 | |
*** tosky has quit IRC | 23:48 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!