Wednesday, 2022-06-08

00:19 *** rlandy|bbl is now known as rlandy
00:41 *** rlandy is now known as rlandy|out
08:10 *** blarnath is now known as d34dh0r53
10:33 *** rlandy|out is now known as rlandy
11:27 <dgripa> Hey code_bleu, I can see your message; unfortunately I can't help you
11:27 <dgripa> I'm here asking for help too
11:28 <dgripa> seems like no one is active nowadays
11:35 <jrosser_> dgripa: a lot of people will be at https://openinfra.dev/summit/ this week
13:29 <dgripa> hey jrosser, TY for the information
14:18 <sorin-mihai> I'm trying to run an Ethereum node in an OpenStack cluster with 2 storage backends, Ceph and Cinder LVM. Ceph has both an SSD-only pool and an HDD pool with the DB on SSD, but even on the SSD-only pool the IOPS is way lower than what I could get from Cinder LVM. Since Cinder LVM is not HA and the volume is tied to a single node, what are my options for getting the highest IOPS from the hardware while still keeping the volume highly available?
14:50 *** Gilou_ is now known as Gilou
14:57 <jcmdln> sorin-mihai: You have a lot of options, but they all have trade-offs. In this case you might want to review the various SSD-backed or NVMe-backed Ceph papers released by organizations like Micron and explore which trade-offs make sense for your needs.
15:04 <jcmdln> Ultimately, LVM-backed Cinder volumes will be hard to beat with distributed storage in small(er) clusters. A Ceph IRC channel or mailing list might be able to give you more accurate advice, though be sure not to make your redundancy worse in favor of higher performance until you've explored other options.
15:06 <sorin-mihai> jcmdln, NVMe is not an option in the existing Ceph cluster... It is a hyperconverged setup, with the OSDs (both SSDs and SAS HDDs with the DB on SSD) spread equally among the compute nodes, but the mon, mgr, mds and rgw daemons are only on the controllers. Would it improve anything, in relation to OpenStack (maybe Cinder RBD, if I understand correctly?), to run the mon, mgr and mds on the compute nodes as well?
15:59 <jcmdln> sorin-mihai: Sorry, I didn't mean to suggest getting NVMes; I simply meant that there are examples of tuning various kinds of flash-based storage devices that might be relevant for your use case. As for your hyperconverged setup, I cannot speak authoritatively, but co-locating those services can only make performance worse, and only by a negligible margin unless you are using nodes with very old hardware.
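One way to weigh the options discussed above is to expose both backends through Cinder volume types, so latency-sensitive volumes can land on the LVM backend while anything that needs HA stays on Ceph RBD, and then compare the two from inside a guest with fio. A minimal sketch, assuming backend names lvm and ceph-ssd are already defined in cinder.conf (the type names, sizes and fio parameters below are illustrative, not a tuning recommendation):

    # Map volume types to the two Cinder backends; the volume_backend_name
    # values must match what is configured in cinder.conf (assumed here).
    openstack volume type create lvm-fast --property volume_backend_name=lvm
    openstack volume type create ceph-ssd --property volume_backend_name=ceph-ssd

    # Create one test volume of each type and attach them to a test instance.
    openstack volume create --type lvm-fast --size 20 bench-lvm
    openstack volume create --type ceph-ssd --size 20 bench-ceph

    # Inside the guest, measure 4k random-read IOPS against the attached
    # device (e.g. /dev/vdb), once per volume, and compare the results.
    fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
        --iodepth=32 --numjobs=4 --direct=1 --runtime=60 --time_based \
        --group_reporting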
16:26 <jamesbenson> Open question to any/every one: What do you do after deployment to verify that your cluster is functional? We run refstack, but are there other tests you run? Thanks!
16:30 <frickler> jamesbenson: have you seen rally?
16:32 <jamesbenson> frickler: I started some stuff with rally (https://gitlab.com/utsa-ics/osias/-/blob/Rally_Openstack/test_rally.sh), but it's not part of our main tests. What would you recommend? You can see what we've done at the link.
16:36 <frickler> jamesbenson: at a glance that looks like a good start. possibly add in some neutron, octavia and designate
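A rally task covering one of those extra services can stay quite small. A minimal sketch, assuming the rally-openstack plugins are installed and an environment is already configured; NeutronNetworks.create_and_list_networks is a standard rally-openstack scenario, while the file name and runner numbers here are arbitrary:

    # Write a minimal task that exercises Neutron, run it, and render a report.
    # Octavia and Designate scenarios follow the same overall shape.
    cat > neutron_smoke.yaml <<'EOF'
    ---
    NeutronNetworks.create_and_list_networks:
      - runner:
          type: constant
          times: 10
          concurrency: 2
        context:
          users:
            tenants: 2
            users_per_tenant: 2
    EOF

    rally task start neutron_smoke.yaml
    rally task report --out neutron_smoke.html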
16:37 <jamesbenson> Do you run refstack tests also? If so, do you use it through the refstack-client or through rally?
16:43 <frickler> no, I don't, although I think we have some code for that; I'll need to look that up tomorrow
16:46 <jamesbenson> That would be great. Thank you! Also, if you have any tests better than the sample tests from refstack (https://rally.readthedocs.io/en/stable/quick_start/tutorial/step_5_task_templates.html), please let me know. I'm open to ideas :-)
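On the refstack-client-vs-rally question above: refstack-client runs tempest directly against a test list, while rally wraps tempest as a verifier under its verify subcommands. A rough sketch of both, assuming a working tempest.conf and a locally downloaded interop test list (both paths below are placeholders):

    # Option 1: refstack-client with an existing tempest.conf and test list.
    refstack-client test -c ~/tempest.conf --test-list ~/platform-required.txt -v

    # Option 2: rally's tempest verifier, driven by the same test list.
    rally verify create-verifier --type tempest --name interop
    rally verify start --load-list ~/platform-required.txt
    rally verify report --type html --to verify_report.html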
22:42 *** rlandy is now known as rlandy|bbl
