opendevreview | Jimmy McCrory proposed openstack/openstack-ansible-galera_server master: Additional TLS configuration options https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 | 03:24 |
noonedeadpunk | mornings | 08:05 |
jrosser | hello | 08:24 |
jrosser | is there anything we should do on v / w / x branches ahead of them becoming unmaintained/<...> ? | 09:03 |
noonedeadpunk | yes, for sure | 09:10 |
noonedeadpunk | I was actually going today to spend some time on sorting things out | 09:10 |
noonedeadpunk | and we're already quite overdue to create new releases for the stable branches as well | 09:10 |
noonedeadpunk | that was one thing I really wanted to backport, as it messes up 2023.2 quite a lot: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/910384 | 09:13 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add ovn-bgp-agent to source install requirements https://review.opendev.org/c/openstack/openstack-ansible/+/909694 | 09:15 |
noonedeadpunk | let's quickly land https://review.opendev.org/c/openstack/openstack-ansible/+/908499 to unblock master branch and renos | 09:16 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: reno: Update master for unmaintained/yoga https://review.opendev.org/c/openstack/openstack-ansible/+/908499 | 09:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Do not use underscores in container names https://review.opendev.org/c/openstack/openstack-ansible/+/905433 | 09:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Allow env.d to contain underscores in physical_skel https://review.opendev.org/c/openstack/openstack-ansible/+/905438 | 09:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Update upstream SHAs https://review.opendev.org/c/openstack/openstack-ansible/+/910386 | 09:18 |
jrosser | you think we can get some of the older branches working again? i've not really had time to look at them much recently | 09:22 |
noonedeadpunk | I guess yes... They can't be terribly borked, but also time is super limited :( | 09:35 |
noonedeadpunk | Most likely smth rabbit, smth erlang... | 09:36 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Drop upgrade jobs for Zed https://review.opendev.org/c/openstack/openstack-ansible/+/911050 | 09:44 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/zed: Bump SHAs for Zed https://review.opendev.org/c/openstack/openstack-ansible/+/911051 | 09:46 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-tests stable/zed: Bump ansible-core to 2.12.8 https://review.opendev.org/c/openstack/openstack-ansible-tests/+/911064 | 09:49 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_networkd stable/zed: Use OriginalName instead of Name in systemd.link https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/908815 | 09:49 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/2023.1: Bump SHAs for 2023.1 (Antelope) https://review.opendev.org/c/openstack/openstack-ansible/+/911054 | 09:55 |
opendevreview | Merged openstack/openstack-ansible master: reno: Update master for unmaintained/yoga https://review.opendev.org/c/openstack/openstack-ansible/+/908499 | 10:01 |
jrosser | noonedeadpunk: did you make an AIO for magnum tls upgrade? | 11:00 |
jrosser | i am not quite sure how to run that locally as it needs a depends-on for the upgrade | 11:01 |
jrosser | this https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 | 11:01 |
noonedeadpunk | ugh, well 50% of it... | 11:15 |
jrosser | i can do it manually, but i was attempting to replicate the gate-check-commit script and got a bit stuck | 11:23 |
noonedeadpunk | nah, you can't do that easily through gate-check-commit | 11:26 |
opendevreview | Merged openstack/openstack-ansible-openstack_hosts master: Drop task that deletes old UCA repo https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/907433 | 13:56 |
g3t1nf0 | hey there, what services do I need before deploying openstack? DNS, DHCP, NTP, LDAP is what I've found in the docs. Am I missing something? Also, can I install it with selinux and firewalld enabled if I'm willing to configure them for the steps that fail? | 14:33 |
jrosser | g3t1nf0: assuming we are talking about an openstack-ansible deployment, LDAP is optional, DHCP is at your choice about how you configure networking, selinux is entirely untested and will almost certainly break stuff, and firewalld also will be troublesome | 14:35 |
g3t1nf0 | so in other words it's way better to move to debian than insisting on using Centos | 14:36 |
jrosser | oh well there are other reasons not to use centos | 14:36 |
g3t1nf0 | like ? I'm using it because of selinux | 14:36 |
jrosser | you will get to experience all the things that will be in RHEL at some point in the future, regardless of whether they work or break everything | 14:37 |
jrosser | remember that centos these days is the precursor to RHEL, not a rebuild of it | 14:37 |
g3t1nf0 | yeah I know but still you could go the Rocky way which is a clone | 14:38 |
jrosser | well you did say centos :) | 14:38 |
g3t1nf0 | so back to the question just DNS and NTP. DHCP and LDAP are optional | 14:39 |
jrosser | if selinux is mandatory then you will need to put in some work, i think | 14:39 |
jrosser | openstack-ansible does not really care about how you configure your networking, so long as the requirements are met | 14:39 |
jrosser | and keystone can use LDAP, if you choose to configure it that way, else you don't need it at all | 14:40 |
g3t1nf0 | Then my info is wrong. From my understanding I should always have a dedicated LDAP to connect keystone with and not rely just on keystone | 14:40 |
jrosser | it's entirely up to you as a design decision for your deployment | 14:41 |
jrosser | out of the box, openstack-ansible will default to users in keystone | 14:41 |
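As an aside on the LDAP option mentioned above, a minimal sketch of what an LDAP-backed keystone domain tends to look like in openstack-ansible, assuming the os_keystone role's `keystone_ldap` structure; the domain name, server URL, DNs and password below are placeholders, not values from this discussion.

```yaml
# user_variables.yml -- hypothetical LDAP-backed keystone domain (all values are placeholders)
keystone_ldap:
  CorporateDomain:                       # extra keystone domain served from LDAP
    url: "ldap://ldap.example.com"
    user_tree_dn: "ou=Users,dc=example,dc=com"
    user: "cn=keystone-bind,dc=example,dc=com"
    password: "example-bind-password"
```

Without anything like this, users simply live in keystone's own SQL backend, as noted above.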
g3t1nf0 | how does openstack-ansible differentiate between bare metal and lxc? | 14:43 |
jrosser | by default, some things run on the bare metal of the hosts and some things are in lxc containers | 14:44 |
jrosser | if you want it all on bare metal, you can do that optionally | 14:44 |
g3t1nf0 | so I don't need any support infra to launch a cluster? | 14:44 |
g3t1nf0 | just the one deployment host | 14:45 |
jrosser | at a minimum yes. some way to delete / reprovision the cluster hosts quickly can be very useful when testing | 14:45 |
jrosser | i always recommend starting with an all-in-one deployment https://docs.openstack.org/openstack-ansible/2023.2/user/aio/quickstart.html | 14:46 |
jrosser | it can be very beneficial to have that as a reference to compare to first steps on making a production deployment | 14:47 |
g3t1nf0 | haven't thought of doing that but it makes sense | 14:48 |
jrosser | if you have any way to get an 8G ram / 80G disk VM then it would really be time well spent | 14:48 |
jrosser | there are some differences with the way the networking is setup, as it hides everything behind one interface, one IP and a bit of NAT | 14:49 |
jrosser | but for everything else it will be quite close to a non-HA production deployment | 14:50 |
g3t1nf0 | can the AIO be used in production to utilize the hardware 100%? | 14:50 |
jrosser | if you have one server as a homelab and you don't mind if you break your own workload, maybe | 14:51 |
jrosser | but for any degree of high availability then a multinode deployment is needed | 14:51 |
g3t1nf0 | guess I'm not explaining what I want correctly ... So 3 servers with the Control Plane for everything, and when I add a new node it is used for everything like nova, cinder and glance | 14:56 |
g3t1nf0 | something like hyperconverged infra | 14:56 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:01 |
opendevmeet | Meeting started Tue Mar 5 15:01:54 2024 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:01 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:01 |
noonedeadpunk | #topic rollcall | 15:01 |
noonedeadpunk | o/ | 15:03 |
NeilHanlon | morning :) | 15:04 |
jrosser | o/ hello | 15:05 |
noonedeadpunk | #topic office hours | 15:07 |
noonedeadpunk | I was mostly occupied with the OVN BGP Agent implementation last week, frankly speaking | 15:08 |
noonedeadpunk | While it's not in scope/a priority for the project, I have quite harsh deadlines internally nowadays | 15:09 |
noonedeadpunk | I also wanted to look today at implementing new variables to control these improvements and push patches to all repos: https://review.opendev.org/q/topic:bug-2031497 | 15:10 |
noonedeadpunk | also would be good to see what is broken... | 15:10 |
NeilHanlon | yeah | 15:11 |
noonedeadpunk | jrosser: do we have more capi things to be merged? Or we're almost done with the topic? | 15:12 |
jrosser | well now really it turns to what is wrong with the magnum tls upgrade jobs | 15:12 |
jrosser | capi is basically done for something we can say is "experimental" for next cycle | 15:12 |
noonedeadpunk | yeah, ok, I will get aio done today after all | 15:12 |
jrosser | i am still trying | 15:12 |
noonedeadpunk | oh, I think it's almost deadline for the cycle-highlights. Should we discuss what to include there? | 15:13 |
noonedeadpunk | I guess openstack-resources role is one thing? | 15:13 |
noonedeadpunk | Then capi as experimental | 15:13 |
jrosser | yeah, vpnaas and bgp agent are pretty good new features too | 15:14 |
noonedeadpunk | ++ | 15:14 |
noonedeadpunk | what else did we do... | 15:14 |
noonedeadpunk | push for collectification? | 15:14 |
noonedeadpunk | quorum queues as default? | 15:14 |
jrosser | did we make octavia ovn provider work? | 15:15 |
noonedeadpunk | yup | 15:15 |
jrosser | and bookworm, is that this cycle? | 15:15 |
noonedeadpunk | tested it in our ovn sandbox and cleaned-up the patch | 15:15 |
noonedeadpunk | I think bookworm was the previous one | 15:16 |
noonedeadpunk | ovn driver for octavia: https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/868462 | 15:16 |
jrosser | hmm i wonder if there is any kind of test for that | 15:16 |
noonedeadpunk | nope. not yet. | 15:17 |
noonedeadpunk | I guess this relates to overall making octavia work for ovn | 15:17 |
noonedeadpunk | or well. there could be some tempest settings... | 15:17 |
noonedeadpunk | I need to look at it | 15:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/ansible-role-systemd_networkd stable/zed: Use OriginalName instead of Name in systemd.link https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/908815 | 15:26 |
noonedeadpunk | It's pretty disappointing that I was not able to work on anything that I planned during the last PTG though :( | 15:28 |
noonedeadpunk | but we have what we have I guess. | 15:28 |
noonedeadpunk | also this seems to be needed to fix functional jobs for Zed: https://review.opendev.org/c/openstack/openstack-ansible-tests/+/911064 | 15:31 |
noonedeadpunk | and this is what I want to include in the 2023.2 bump for the 28.1.0 tag: https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/910384 | 15:33 |
noonedeadpunk | anything else we wanna discuss? | 15:37 |
NeilHanlon | not from me.. sorry I have not been as active lately.. hoping to be more involved this spring | 15:37 |
noonedeadpunk | I also hope to be more involved starting April.... | 15:40 |
noonedeadpunk | Ok, then I will wrap this up early | 15:42 |
noonedeadpunk | #endmeeting | 15:42 |
opendevmeet | Meeting ended Tue Mar 5 15:42:25 2024 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:42 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.html | 15:42 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.txt | 15:42 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2024/openstack_ansible_meeting.2024-03-05-15.01.log.html | 15:42 |
opendevreview | Merged openstack/openstack-ansible master: Update upstream SHAs https://review.opendev.org/c/openstack/openstack-ansible/+/910386 | 16:07 |
g3t1nf0 | jrosser, guess I'm not explaining what I want correctly ... So 3 servers with the Control Plane for everything, and when I add a new node it is used for everything like nova, cinder and swift, something like hyperconverged infra? | 16:09 |
noonedeadpunk | g3t1nf0: you can do that, yes | 16:15 |
noonedeadpunk | though I guess it's good to define what backend storage you use for cinder | 16:16 |
noonedeadpunk | as if it's going to be ceph - then you don't need cinder on computes (and don't need swift per se, as ceph rgw has some swift compatibility) | 16:17 |
noonedeadpunk | basically you manage which components are to be installed, and whether they are metal/non-metal, with the groups definition in the openstack_user_config.yml file | 16:17 |
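To illustrate the openstack_user_config.yml point, a minimal sketch assuming the standard group layout; host names and IPs are placeholders, and the per-host `no_containers` flag (shown here as the metal/non-metal switch) should be checked against the deployment guide for the release in use.

```yaml
# openstack_user_config.yml (sketch) -- hosts/IPs are placeholders
shared-infra_hosts:        # galera, rabbitmq, memcached -> LXC containers by default
  infra1:
    ip: 172.29.236.11
storage-infra_hosts:       # cinder-api
  infra1:
    ip: 172.29.236.11
    no_containers: true    # assumed knob: run this host's services on bare metal instead of LXC
compute_hosts:             # nova-compute always runs directly on the host
  compute1:
    ip: 172.29.236.21
```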
g3t1nf0 | I'm using 25x ssd hp 380 gen9 servers and I want to utilize them to the fullest with the cpu and the storage so I want every node to be used for nova, with ceph as the main storage for the cinder and swift | 16:17 |
g3t1nf0 | then I haven't understood how cinder and swift are working. I thought they use ceph as the backend | 16:19 |
noonedeadpunk | g3t1nf0: so, do you want actual Swift, or object storage which can be provided by Ceph RadosGateway? | 16:20 |
noonedeadpunk | And which is compatible with the Swift and S3 APIs? | 16:20 |
g3t1nf0 | I want to have ceph in the backend and allow my users to have the s3 api exactly | 16:20 |
g3t1nf0 | but also block storage | 16:21 |
noonedeadpunk | ok, yeah, then you probably don't need swift per se :) it's just quite a difference in architecture | 16:21 |
g3t1nf0 | so just ceph then ? | 16:21 |
noonedeadpunk | So, your cinder-api and cinder-volume should be running just on the control plane - no need to have any cinder services on a compute | 16:21 |
noonedeadpunk | and cinder-volume can be running in a container then | 16:22 |
noonedeadpunk | Basically what you'd need to run on computes are - nova-compute, some neutron (ovn? ovs?) agents, and ceph-osds | 16:22 |
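To make the cinder-on-control-plane point concrete, a hedged sketch of an RBD-backed cinder-volume definition in openstack_user_config.yml; the host name, IP, pool and cephx user are placeholders, and the keys mirror cinder's standard RBD driver options.

```yaml
# openstack_user_config.yml (sketch) -- cinder-volume on the control plane, ceph RBD backend
storage_hosts:
  infra1:
    ip: 172.29.236.11
    container_vars:
      cinder_backends:
        rbd:
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          rbd_pool: volumes                 # placeholder pool name
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder                  # placeholder cephx user
          volume_backend_name: rbd
```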
opendevreview | Merged openstack/openstack-ansible stable/zed: Drop upgrade jobs for Zed https://review.opendev.org/c/openstack/openstack-ansible/+/911050 | 16:22 |
opendevreview | Merged openstack/openstack-ansible stable/zed: Bump SHAs for Zed https://review.opendev.org/c/openstack/openstack-ansible/+/911051 | 16:22 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Bump SHAs for 2023.1 (Antelope) https://review.opendev.org/c/openstack/openstack-ansible/+/911054 | 16:22 |
noonedeadpunk | g3t1nf0: ceph can provide block storage and object storage (and even shared FS) | 16:23 |
g3t1nf0 | exactly what I was thinking | 16:23 |
noonedeadpunk | and you can integrate ceph object storage with openstack as well | 16:23 |
opendevreview | Jimmy McCrory proposed openstack/openstack-ansible master: Add check_hostname option to db healthcheck tasks https://review.opendev.org/c/openstack/openstack-ansible/+/911150 | 16:23 |
noonedeadpunk | so then you don't need Swift | 16:23 |
g3t1nf0 | okay so to run all that on the CP what kind of resources will I need ? | 16:24 |
g3t1nf0 | I have 48vcpus and 768G ram | 16:24 |
opendevreview | Jimmy McCrory proposed openstack/openstack-ansible-galera_server master: Additional TLS configuration options https://review.opendev.org/c/openstack/openstack-ansible-galera_server/+/911009 | 16:24 |
g3t1nf0 | on all servers, but I can lower the ram if it's overkill and use it in an extra server as compute | 16:25 |
noonedeadpunk | so, I'd suggest having not less than 128 for the control plane, but that depends on the number of openstack services you'll include | 16:27 |
noonedeadpunk | I guess I'd go with 256 though | 16:29 |
g3t1nf0 | is the cpu enough? also should I use the drives on the CP as well as Ceph OSDs ? | 16:30 |
noonedeadpunk | vcpus meaning HT enabled? So like 24cpu 1 socket? | 16:32 |
g3t1nf0 | 12cpus 2 sockets | 16:32 |
noonedeadpunk | well. you might want to tune the number of threads for services as they're slightly maxed out by default | 16:33 |
noonedeadpunk | but you should be fine | 16:33 |
g3t1nf0 | usually what is recommended for a CP? | 16:33 |
noonedeadpunk | This is a very-very tricky question as it all depends on the anticipated API load/amount of interactions with the API for your cluster | 16:35 |
g3t1nf0 | services I'd like to run, aside from the storage which will be ceph as discussed earlier: murano, freezer, senlin, nova, magnum, zun, horizon, trove, designate, ec2-api, keystone, watcher, barbican, octavia, zaqar, neutron, tacker, heat, placement, ceilometer, glance (manila, cinder, swift) | 16:40 |
noonedeadpunk | well. this is pretty optimistic, I would say.... | 16:42 |
noonedeadpunk | iirc ec2-api is already deprecated, murano is not going to be released for 2024.1 and its future is unclear, freezer has not been maintained nicely for the last.... 4 cycles? pretty much the same for zun. | 16:43 |
noonedeadpunk | so unless you want to step into maintenance of these projects... I would double-check if you _really_ want them | 16:44 |
g3t1nf0 | https://docs.openstack.org/2023.2/install/ so this page is not updated then? | 16:47 |
noonedeadpunk | I guess it depends on the perspective.... | 16:50 |
noonedeadpunk | but also - why would you need ec2-api? | 16:51 |
noonedeadpunk | g3t1nf0: but I guess I was more referring to contribution metrics for projects - ie Freezer: https://www.stackalytics.io/?module=freezer-group&metric=commits | 16:52 |
noonedeadpunk | or zun: https://www.stackalytics.io/?module=zun-group&metric=commits | 16:52 |
g3t1nf0 | those are scary to look at, why are there not as many commits as earlier? are companies moving away from openstack? | 16:55 |
noonedeadpunk | I would not say from openstack. But some projects get less love/usage than others | 16:56 |
noonedeadpunk | Some are just getting stable and feature complete | 16:56 |
noonedeadpunk | they still should be maintained, but the orgs behind these projects may neglect this need | 16:56 |
noonedeadpunk | frankly - I have no idea why freezer has that much love | 16:57 |
noonedeadpunk | but it's also worth mentioning that the main arena for openstack is private cloud after all, which has all sorts of internal solutions for backups | 16:57 |
g3t1nf0 | yeah true, but companies like metallb offer it so they should give you an option for backup, no? | 16:58 |
noonedeadpunk | and talking about the number of openstack deployments - it just grows. But mostly they're limited to some core components | 16:58 |
noonedeadpunk | frankly - I already know 2-3 organizations that were looking at freezer, but then got scared by the number of its maintainers and moved to commercial products like Trilio | 17:00 |
noonedeadpunk | while they could pick up basic maintenance tasks and keep the gates alive. Dividing this between these 3 companies would be a really small involvement... | 17:01 |
g3t1nf0 | I get what you mean | 17:01 |
jrosser | I agree your list of services is optimistic | 17:02 |
jrosser | you should start with the minimum core stuff and build up later | 17:02 |
jrosser | and have a test lab for the lesser used ones | 17:02 |
noonedeadpunk | these ones are in a good shape though: nova, magnum, horizon, designate, keystone, barbican, octavia, neutron, heat, placement, ceilometer, glance, cinder, manila. | 17:04 |
jrosser | things like designate require you to make some design decisions to suit your deployment, and the same increasingly goes for magnum | 17:05 |
noonedeadpunk | (and for manila) | 17:05 |
jrosser | also Barbican, to hsm or not is the question there | 17:05 |
jrosser | and business requirements will perhaps dictate the answer there | 17:06 |
noonedeadpunk | and actually you can say that about each service, except placement :D | 17:06 |
noonedeadpunk | and maybe ceilometer, but arguable | 17:06 |
jrosser | so really it's best to view all of this as a toolbox, and each tool likely has multiple pluggable backends in some way | 17:06 |
noonedeadpunk | still need to choose datastore for it | 17:06 |
jrosser | so everyone’s deployment ends up somehow different | 17:06 |
noonedeadpunk | ++ | 17:07 |
noonedeadpunk | yeah, so it is a very tough question to recommend or suggest anything | 17:07 |
noonedeadpunk | like it all very much depends | 17:07 |
jrosser | even hyper-converged would be too much for me | 17:08 |
g3t1nf0 | so compared to the services jrosser listed, I have trove, watcher and zaqar left over as extras | 17:08 |
jrosser | managing upgrades would just be too much of a nightmare | 17:08 |
noonedeadpunk | yeah, hyper-converged is not a good idea when talking about day2 | 17:09 |
jrosser | great example is ceph for that | 17:09 |
g3t1nf0 | trove would be nice for the DBs; watcher and zaqar are not that high a priority | 17:09 |
noonedeadpunk | it is very appealing/cheap on day0, but will quickly become an utter mess with plenty of exceptions | 17:09 |
jrosser | ceph releases, openstack releases and OS releases all have unique release cadences | 17:09 |
noonedeadpunk | trove should work, in general. clusterization might need some love there | 17:10 |
jrosser | and this backs you very quickly into some sort of upgrade deadlock | 17:10 |
g3t1nf0 | or you use katello for repository lock and upgrade at your own pace | 17:10 |
noonedeadpunk | not to mention scaling up, where storage/compute resources grow differently | 17:10 |
noonedeadpunk | and that osds also consume a significant amount of memory, which could otherwise be "sold" as VMs... | 17:11 |
g3t1nf0 | jrosser: so you would recommend that I skip ceph and go for manila, cinder and swift ? | 17:11 |
noonedeadpunk | So the profit in price is not as significant as you might think | 17:11 |
jrosser | g3t1nf0: I am saying that by hyper converging you are making your post deployment life tough | 17:12 |
noonedeadpunk | manila and cinder require some storage backend | 17:12 |
jrosser | but absolutely you should use ceph | 17:12 |
noonedeadpunk | and this can be ceph | 17:12 |
g3t1nf0 | so we are back again then on the update deadlock that you mentioned earlier | 17:12 |
jrosser | Manila is kind of an api/orchestrator for file systems | 17:12 |
noonedeadpunk | so it's not cinder or ceph - it's what to use instead of ceph for cinder | 17:12 |
jrosser | where you get that file system from is another thing | 17:13 |
g3t1nf0 | iscsi would be one idea for block | 17:13 |
jrosser | g3t1nf0: you would have less upgrade deadlock, imho, by having dedicated storage nodes | 17:13 |
jrosser | we are currently stepping through antelope to bobcat, ceph | 17:14 |
jrosser | Quincy to reef | 17:14 |
jrosser | and focal to jammy | 17:14 |
jrosser | and it’s pretty involved sequencing to make that happen nicely | 17:14 |
g3t1nf0 | then I'll be losing 75 drive trays just on the CP | 17:14 |
noonedeadpunk | well, I mean, given you already have hardware, there's probably not much choice. | 17:15 |
jrosser | like some weird HP raid controller? | 17:16 |
jrosser | that needs some thought too | 17:17 |
g3t1nf0 | yeah they are on raid controller but I've flashed them for HBA | 17:17 |
g3t1nf0 | the raid controllers :) | 17:17 |
jrosser | noonedeadpunk makes a good point about how you manage ram on the compute nodes too | 17:18 |
jrosser | under error conditions ceph can need lots of ram | 17:19 |
jrosser | and how you separate the ram needs of your VMs and the OSDs is tricky | 17:19 |
g3t1nf0 | 768 should be fine if I leave 128 always free | 17:19 |
jrosser | as the OOM killer is sitting ready, which should it choose? | 17:19 |
jrosser | anyway, I think we’re both saying hyper-converged is pretty advanced usage | 17:22 |
jrosser | both for resource management and day-2 operations | 17:22 |
jrosser | ceph is never about performance so if you don’t need the Tb of the control plane node disks, just ignore them | 17:23 |
g3t1nf0 | so you would recommend for a one man show to just keep it simple and start this way, because when shit hits the fan it's only me to blame | 17:24 |
noonedeadpunk | yeah, reserving 128 should do the trick indeed. There's a parameter in nova which allows you to mark this space as reserved in placement | 17:25 |
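For reference, a minimal sketch of the parameter being referred to, nova's `reserved_host_memory_mb`, set here via an openstack-ansible config override; the 131072 MB value simply reflects the ~128G reservation discussed above.

```yaml
# user_variables.yml -- reserve ~128G per compute for the host OS and ceph OSDs,
# so placement never offers that memory to VMs
nova_nova_conf_overrides:
  DEFAULT:
    reserved_host_memory_mb: 131072
```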
noonedeadpunk | and I don't think of HCA as advanced usage, but more as trying to save startup costs and getting a bag full of shit right after... | 17:27 |
noonedeadpunk | As operational cost outweighs the potential profit relatively fast | 17:27 |
noonedeadpunk | but anyway, if hardware is already there.... | 17:28 |
noonedeadpunk | you can move things around once you grow :) | 17:28 |
g3t1nf0 | my thought exactly: to start as small as possible and, when I grow, to separate everything as expected - a cluster with ceph, a CP just with openstack, plus the worker nodes | 17:35 |
g3t1nf0 | hmm what are the network nodes needed for? Should I have separate network nodes? Isn't it enough if I keep neutron on my CP and worker nodes ? | 17:47 |
jrosser | g3t1nf0: it is your decision again :) | 17:50 |
jrosser | mine are separate as the public networks are direct on the internet and we have some policy stuff around separating that | 17:51 |
g3t1nf0 | too many decisions :D how to keep it simple when I can take a shitton of decisions :D | 17:51 |
g3t1nf0 | I see, so no firewall in front of openstack? | 17:51 |
jrosser | well mine is like a private(public) cloud, so no | 17:52 |
jrosser | but your use case might be different | 17:52 |
jrosser | most people make the network nodes also the infra nodes | 17:53 |
jrosser | and put the api loadbalancer also there | 17:53 |
jrosser | but really OSA is a toolbox, not a shrink wrap installer that decides everything for you | 17:54 |
jrosser | there’s a reference architecture, and that’s what you get if you don’t tell it otherwise | 17:54 |
jrosser | but ultimately pretty much anything is possible | 17:54 |
g3t1nf0 | guess I need a bit more reading before deploying anything | 17:55 |
jrosser | and that makes the learning curve kind of steep, for the benefit of flexibility | 17:55 |
jrosser | if you went to a large vendor of a commercial solution, there may be no choice but to take their defined architecture | 17:55 |
jrosser | so many ways to achieve an openstack, really | 17:56 |
g3t1nf0 | yeah but as a starting point, as you've said, pretty steep learning curve | 17:57 |
jrosser | well that brings us back full circle to the all-in-one | 17:58 |
jrosser | it makes its own config for you, and deploys the defaults automatically | 17:58 |
g3t1nf0 | okay I do see the point in the network node, should it be a single node or again 3x ? | 17:59 |
g3t1nf0 | I can put the LB, DNS and the whole network there so it makes sense | 18:00 |
noonedeadpunk | I think what we mean here under network node is how external access to VMs will be provided. | 18:17 |
noonedeadpunk | This potentially brings some limitations, like VMs not being able to have direct connectivity to public nets, only through a private network and floating IPs/src nat and an L3 router | 18:18 |
noonedeadpunk | and then with ovn having separate net nodes makes sense if you for some reason do not want to pass public vlans to computes | 18:23 |
noonedeadpunk | as with OVS it was also handling L3/DHCP namespaces which are not needed today with OVN | 18:24 |
noonedeadpunk | jrosser: I did reproduce the issue in the sandbox with magnum fwiw | 18:34 |
noonedeadpunk | I do see 2 images for magnum. different checksums, both having fedora-coreos tag | 18:37 |
jrosser | I wonder if we don’t differentiate between the images sufficiently across upgrades | 18:42 |
jrosser | the cluster template needs to point to the correct things for the upgraded environment | 18:43 |
noonedeadpunk | oh | 18:44 |
noonedeadpunk | how does this work at all... https://opendev.org/openstack/magnum-tempest-plugin/src/branch/master/magnum_tempest_plugin/common/datagen.py#L269 | 18:44 |
noonedeadpunk | or why it's called... | 18:45 |
noonedeadpunk | or why it's not failing otherwise.... | 18:47 |
noonedeadpunk | and it indeed sends the request with null in it: https://paste.openstack.org/show/bFSBYMJ2JOcSE3ZZjhwI/ | 18:56 |
noonedeadpunk | I _think_ cluster_distro should not be null..... | 18:57 |
noonedeadpunk | or image_id should not be "fedora-coreos-latest".... | 18:58 |
noonedeadpunk | yeah, ok, so magnum goes for the image | 19:01 |
noonedeadpunk | https://paste.openstack.org/show/bpFRctqvXvTWsFKzSImc/ | 19:01 |
noonedeadpunk | but indeed there are 2 images... not sure what's making trouble though... | 19:07 |
noonedeadpunk | like they obviously both have the correct property... | 19:08 |
noonedeadpunk | hm, but once I've hidden 1 image - it's passing.... | 19:09 |
jrosser | calling it -latest is probably not great either, I assume it’s versioned | 19:10 |
noonedeadpunk | and yes, either image can be hidden for the tests to pass /o\ | 19:10 |
noonedeadpunk | I think it's still better... | 19:10 |
noonedeadpunk | Like latest will always be latest... | 19:10 |
noonedeadpunk | better than failing on the user? | 19:10 |
jrosser | oh right it’s not totally clear what those two images actually are though? | 19:10 |
noonedeadpunk | But why is there no failure on non-tls upgrades.... | 19:11 |
noonedeadpunk | so, during upgrade image is switched from 35.20220424.3.0 to 38.20230806.3.0 | 19:13 |
noonedeadpunk | basically... I guess openstack_resources rotation should drop old image with the same name? | 19:13 |
noonedeadpunk | but that is also a valid thing to fix in magnum.... | 19:13 |
noonedeadpunk | and also it should not fetch deactivated images, for instance... | 19:15 |
noonedeadpunk | so this is actually the image being asked for, but nothing obvious is being raised in the result https://opendev.org/openstack/magnum/src/branch/master/magnum/api/attr_validator.py#L32-L44 | 19:43 |
noonedeadpunk | mnasiadka: what's our opinion of magnum returning `Cluster type (vm, None, kubernetes) not supported` when there are simply 2 images with os_distro=fedora-coreos? | 19:44 |
noonedeadpunk | Should it try getting only the first one and ignore deactivated images? or just fail in a more meaningful way? | 19:45 |
noonedeadpunk | ok, I guess I got why only TLS is failing... as it's the only job applying master aio vars on upgrade.... | 20:27 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Enable image rotation for Magnum https://review.opendev.org/c/openstack/openstack-ansible/+/911377 | 20:39 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_magnum master: Adopt for usage openstack_resources role https://review.opendev.org/c/openstack/openstack-ansible-os_magnum/+/901185 | 20:40 |
noonedeadpunk | *hopefully* this will fix it | 20:40 |
noonedeadpunk | alternatively, we should be able to rename the images as well here https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/openstack_resources/tasks/image_rotate.yml#L23 - by appending some suffix | 20:41 |
noonedeadpunk | but I would need to think a bit about the implementation, as a couple of things depend on the image name there.... | 20:42 |
spatel | noonedeadpunk hey! | 21:51 |
spatel | there | 21:51 |
spatel | I have a stupid NFS question. I have configured cinder-volume with NFS, but how do the VMs on compute nodes talk to NFS? If compute nodes need access to NFS, then at what path should I mount it? | 21:52 |
jrosser | spatel: it's all here i think https://docs.openstack.org/cinder/latest/admin/nfs-backend.html | 22:06 |
spatel | Hmm! I am on same page and confused :( | 22:22 |
spatel | How does kvm know where to pick up the NFS share? | 22:24 |
spatel | because I have configured cinder-volume on the controller and created a 10G volume. I have attached it to a VM but my VM can't see that 10G volume :( | 22:26 |
spatel | Trying to understand how all these pieces are glued together | 22:27 |
jrosser | don't you mount the nfs in some known place and tell cinder about that with nfs_shares_config | 22:29 |
jrosser | then you get a file backing the volume in there? | 22:29 |
spatel | I did mount NFS on the controller nodes and configured the cinder-volume service. everything is working and I am able to create volumes etc.. | 22:30 |
spatel | I can see volume-XXXXXX files on NFS share | 22:30 |
spatel | but the question is how does Compute know about NFS and those volume files? | 22:30 |
spatel | Who will tell libvirt to talk to NFS and attach the volume file to the VM, and how? | 22:32 |
jrosser | doesn't nova-compute do that? | 22:35 |
spatel | I don't know, and none of the docs say anything about compute config | 22:36 |
jrosser | i think you should look at the nova-compute logs when you try to attach the volume | 22:38 |
jrosser | you need the nfs-common package i would expect and also there needs to be correct permissions to mount/read/write the share | 22:39 |
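For context, a hedged sketch of what the NFS backend usually looks like on the cinder side in an openstack-ansible deployment; the backend name, share-list path and mount base below are placeholders. When a volume is attached, it is nova-compute (via os-brick) that mounts the share on the compute node and hands the backing file to libvirt, which is why the NFS client package and share permissions matter there.

```yaml
# user_variables.yml (sketch) -- NFS-backed cinder-volume; values are placeholders
cinder_backends:
  nfs_volume:
    volume_backend_name: NFS_VOLUME1
    volume_driver: cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config: /etc/cinder/nfs_shares   # file listing "host:/export" lines
    nfs_mount_point_base: /var/lib/cinder/mnt
```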
spatel | Looking | 22:42 |
opendevreview | Jimmy McCrory proposed openstack/openstack-ansible master: Add check_hostname option to db healthcheck tasks https://review.opendev.org/c/openstack/openstack-ansible/+/911150 | 23:07 |