*** ysandeep|out is now known as ysandeep | 05:21 | |
*** ysandeep is now known as ysandeep|afk | 06:44 | |
*** ysandeep|afk is now known as ysandeep | 07:31 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Use PKI role for certificate generation https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/839068 | 08:31 |
noonedeadpunk | so, seems all jammy jobs fail quite early on repo stuff | 08:44 |
noonedeadpunk | I believe on glusterfs actually | 08:44 |
noonedeadpunk | `Failed to find required executable "gluster" in paths` | 08:45 |
noonedeadpunk | https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c85/843711/6/check/openstack-ansible-deploy-aio_lxc-ubuntu-jammy/c859f0a/logs/ara-report/results/1704.html | 08:45 |
noonedeadpunk | we need glusterfs-cli for 22.04 | 08:47 |
mgariepy | i don't have time today but i think to reproduce locally we need to add the disable-recommends file in apt. | 09:49 |
jrosser_ | also i don't have time today, but didn't we already do something with disable-recommends for ceph? | 09:53 |
jrosser_ | https://opendev.org/openstack/openstack-ansible-openstack_hosts/commit/c4405603be81e66515fcff3a9528d15a286b1b00 | 09:59 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-openstack_hosts master: Add default apt config for ubuntu 22.04 https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/844410 | 10:03 |
mgariepy | the CI does disable recommends and suggest. | 10:04 |
mgariepy | but it still works on focal. | 10:04 |
mgariepy | tomorrow i'll try to reproduce locally just to confirm it's the issue. | 10:05 |
mgariepy | it's kinda crazy how the image is so big but contains so little. | 10:05 |
jrosser_ | we landed our own support for disabling recommended packages kind of in parallel with adding 22.04 support, so we missed that for jammy | 10:05 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-plugins master: Ensure that glusterfs-cli is installed for ubuntu 22.04 https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/844413 | 10:07 |
jrosser_ | noonedeadpunk: two fixes there ^^ but i don't know if it's sufficient | 10:08 |
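(For context, a minimal sketch of the two mechanisms being discussed, assuming illustrative file paths and task names rather than the actual contents of the linked reviews: an apt drop-in that disables Recommends/Suggests the way the CI images do, and an explicit install of glusterfs-cli on jammy, where it is no longer pulled in as a recommended package.)

```yaml
# Hypothetical sketch only - paths and task names are illustrative, not the
# contents of the linked reviews.
- name: Reproduce the CI apt behaviour locally (disable Recommends/Suggests)
  ansible.builtin.copy:
    dest: /etc/apt/apt.conf.d/99-no-recommends   # illustrative filename
    content: |
      APT::Install-Recommends "false";
      APT::Install-Suggests "false";
    mode: "0644"

- name: Ensure glusterfs-cli is present on Ubuntu 22.04
  ansible.builtin.apt:
    name: glusterfs-cli
    state: present
  when: ansible_facts['distribution_release'] == 'jammy'
```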
opendevreview | Merged openstack/openstack-ansible-galera_server master: Fix systemd and centos9. https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/844288 | 10:17 |
*** ysandeep is now known as ysandeep|break | 10:49 | |
*** dviroel|out is now known as dviroel | 11:12 | |
*** ysandeep|break is now known as ysandeep | 11:27 | |
opendevreview | Merged openstack/openstack-ansible-plugins master: Ensure that glusterfs-cli is installed for ubuntu 22.04 https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/844413 | 11:49 |
*** ysandeep is now known as ysandeep|afk | 13:06 | |
noonedeadpunk | oh, ok, good :) | 13:36 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_ironic master: Remove [keystone] configuration block https://review.opendev.org/c/openstack/openstack-ansible-os_ironic/+/831544 | 13:39 |
Mouaa | Hi guys. It seems that we have side effects on our nested DEV platform (VXLAN in VXLAN with MTU tuned) -> keepalived heartbeats currently configured over VXLAN multicast are flapping the VRRP routers' state. Do you know if it is possible and supported (via openstack-ansible) to configure keepalived in unicast? In the meantime, we will try to tune the values of fall and rise in the vrrp_script... Thanks for your reply | 14:15 |
jrosser_ | Mouaa: the configuration for keepalived is set here https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/haproxy/keepalived.yml#L56 | 14:18 |
jrosser_ | if you want to change any of that, copy that whole variable definition to your user_variables.yml and make whatever adjustments you need | 14:19 |
jrosser_ | that variable is used by this ansible role https://github.com/evrardjp/ansible-keepalived | 14:19 |
jrosser_ | you can see in the template that generates the keepalived config file that there is the possibility for unicast https://github.com/evrardjp/ansible-keepalived/blob/master/templates/keepalived.conf.j2#L117-L122 | 14:20 |
jrosser_ | in openstack-ansible we leave all these hooks open for you to adjust the config however you need, but you won't find a specific example for keepalived+unicast | 14:21 |
jrosser_ | it would be up to you to find what is needed in keepalived.conf and create the right contents for that in user_variables.yml | 14:22 |
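(As a rough illustration of that approach, a user_variables.yml override might look like the sketch below. The keepalived_instances structure should be copied in full from the linked group_vars file; the unicast_src_ip / unicast_peers key names and the haproxy_all group lookup here are assumptions to be verified against the role's keepalived.conf.j2 template.)

```yaml
# Hypothetical, trimmed user_variables.yml override - start from the complete
# keepalived_instances definition in inventory/group_vars/haproxy/keepalived.yml
# and only add the unicast settings. Key names are assumptions; check the
# ansible-keepalived keepalived.conf.j2 template linked above.
keepalived_instances:
  external:
    interface: "{{ haproxy_keepalived_external_interface }}"
    state: MASTER                 # per-host, as in the original definition
    virtual_router_id: 10
    priority: 100
    vips:
      - "{{ haproxy_keepalived_external_vip_cidr }} dev {{ haproxy_keepalived_external_interface }}"
    # switch VRRP from multicast to unicast:
    unicast_src_ip: "{{ ansible_facts['default_ipv4']['address'] }}"
    unicast_peers: "{{ groups['haproxy_all'] | map('extract', hostvars, ['ansible_host']) | list }}"
```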
Mouaa | thx Jonathan | 14:22 |
jrosser_ | Mouaa: i am describing here keepalived for the openstack-ansible H/A API endpoints | 14:26 |
jrosser_ | do you mean for neutron HA routers? | 14:26 |
Mouaa | You're right, our problem is with the H/A API endpoints | 14:28 |
*** ysandeep|afk is now known as ysandeep | 14:28 | |
jrosser_ | ok, cool | 14:29 |
spatel | Folks, any good RAID controller for building Ceph storage? | 14:40 |
spatel | Any recommendation? | 14:40 |
supamatt | don't use a raid controller ;S | 14:52 |
spatel | hmm | 14:53 |
spatel | I read that a RAID controller increases performance, like write cache etc. (RAID-0 or JBOD) | 14:54 |
spatel | I meant passthrough RAID | 14:54 |
jrosser_ | spatel: how many disks do you want? | 15:02 |
spatel | 2x18TB disk | 15:03 |
spatel | on each server | 15:03 |
spatel | we have total 15 servers for Ceph | 15:04 |
jrosser_ | sounds like you would have enough onboard sata ports | 15:06 |
spatel | hmm so i don't need any external RAID controller correct? | 15:07 |
spatel | someone proposed to buy this one - https://arecadirect.com/areca-arc-1886-16i-16-port-pcie-gen-4-0-tri-mode-raid-adapters/ | 15:07 |
jrosser_ | for what reason? :) | 15:07 |
spatel | that is what i am researching.. :) | 15:07 |
spatel | why do i need a $1200 RAID controller? | 15:08 |
spatel | does the on-board RAID controller provide speed and write cache? | 15:08 |
jrosser_ | first really decide if you are building..... highest possible capacity / best iops / best throughput / lowest cost / some other definition of "best" | 15:09 |
spatel | I am using 18TB disks which have a 12Gb/s interface so at least I need enough speed on the RAID controller | 15:09 |
spatel | This storage is for HPC openstack workload | 15:09 |
jrosser_ | the interface may well be 12Gbit/sec but the sequential write rate of an HDD is way way way less than that | 15:10 |
spatel | We need massive file storage (more space to store research data, i don't think IOPS matter here) | 15:10 |
jrosser_ | then when you get non-sequential access the throughput will be much lower again | 15:11 |
spatel | I am going to put WAL+DB on NVMe | 15:11 |
jrosser_ | imho you should look at the cost / benefit of doing the whole thing on nvme | 15:11 |
spatel | NVME is very costly :( | 15:11 |
jrosser_ | right - this is why i say you have to decide what you are optimising for | 15:11 |
spatel | we need more TB space than IOPS | 15:11 |
jrosser_ | you can optimise for low cost and large storage but that will come at the price of io/sec and throughput | 15:12 |
spatel | You are saying that buying an Areca RAID controller with HDDs won't do better compared to the onboard one | 15:13 |
jrosser_ | there are some advantages to having a SAS interface over SATA, like command queuing is better and so on | 15:14 |
jrosser_ | but really you don't need anything raid or complicated, it's just more money and more to go wrong | 15:14 |
jrosser_ | you say nvme is expensive but that card would cost more than your 2x 18T disks? | 15:14 |
spatel | Agreed | 15:15 |
jrosser_ | get a "sensible" server with onboard LSI sas chip and it's job done | 15:16 |
spatel | what is the difference between keeping WAL+DB on NVMe + HDD vs all NVMe? | 15:16 |
spatel | jrosser_ +1 | 15:16 |
jrosser_ | depending on what you are doing some db operations on ceph can be quite intensive | 15:17 |
spatel | hmm | 15:17 |
jrosser_ | so if you have HDD those might be unacceptably slow, which is why there is a lot of advice to put the DB on nvme | 15:17 |
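(For the HDD-plus-NVMe layout being discussed, one way to express it with ceph-ansible, which openstack-ansible can drive, is sketched below; the devices / dedicated_devices variable names are ceph-ansible conventions and the device paths are illustrative only.)

```yaml
# Hypothetical sketch: two 18TB HDDs per host as OSD data devices, with their
# RocksDB/WAL placed on a shared NVMe drive. Device paths are illustrative;
# variable names follow ceph-ansible conventions and should be checked against
# the ceph-ansible version in use.
devices:
  - /dev/sda          # 18TB HDD
  - /dev/sdb          # 18TB HDD
dedicated_devices:
  - /dev/nvme0n1      # block.db (and WAL) for /dev/sda
  - /dev/nvme0n1      # block.db (and WAL) for /dev/sdb
```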
spatel | This is for an HPC workload and i am still debating what storage software i should use? Ceph/Gluster/Lustre? | 15:18 |
jrosser_ | without knowing what the workload is, it's not possible to say | 15:18 |
jrosser_ | and if you want RBD, or filesystem mounts, cinder volumes, or object storage....... | 15:18 |
jrosser_ | anyway, for ceph i'd not get a fancy raid card | 15:19 |
spatel | It's a university and they do all kinds of research, so finding an answer is difficult (it's a general purpose HPC cluster for students) | 15:19 |
spatel | I am going to propose the same, to not buy a RAID controller :) | 15:19 |
spatel | The plan is to have storage and use Manila to mount a filesystem on the VMs so they have a shared filesystem for MPI jobs | 15:20 |
jrosser_ | remember that the performance can never be better than the underlying performance of the drives, really | 15:21 |
jrosser_ | there is no magic or free lunch here | 15:21 |
spatel | can't disagree with you :) | 15:21 |
jrosser_ | and the marketing performance figures for HDD will be for sequential | 15:22 |
jrosser_ | everything else will be terrible | 15:22 |
spatel | 16TB NVMe costs $3999 | 15:23 |
opendevreview | Merged openstack/openstack-ansible-openstack_hosts master: Add default apt config for ubuntu 22.04 https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/844410 | 15:35 |
*** ysandeep is now known as ysandeep|out | 15:38 | |
*** dviroel is now known as dviroel|lunch | 15:44 | |
*** dviroel|lunch is now known as dviroel | 16:26 | |
*** dviroel is now known as dviroel|afk | 20:28 | |
*** dviroel|afk is now known as dviroel | 23:02 | |
*** dviroel|afk is now known as dviroel | 23:03 | |
*** dviroel is now known as dviroel|afk | 23:23 |