*** rlandy|bbl is now known as rlandy | 00:19 | |
*** rlandy is now known as rlandy|out | 00:41 | |
*** blarnath is now known as d34dh0r53 | 08:10 | |
*** rlandy|out is now known as rlandy | 10:33 | |
dgripa | Hey code_bleu, I can see your message; unfortunately I can't help you | 11:27 |
dgripa | I'm here asking for help too | 11:27 |
dgripa | seems like no one is active nowadays | 11:28 |
jrosser_ | dgripa: a lot of people will be at https://openinfra.dev/summit/ this week | 11:35 |
dgripa | hey jrosser, TY for the information | 13:29 |
sorin-mihai | I'm trying to run an Ethereum node in an OpenStack cluster with 2 storage backends, Ceph and Cinder LVM. Ceph has both an SSD-only pool and an HDD pool with its DB on SSD, but even on the SSD-only pool the IOPS is way lower than what I could get from Cinder LVM. Since Cinder LVM is not HA and the volume is tied to a single node, what are my options for getting the highest IOPS from the hardware while still having functional HA for the volume? | 14:18 |
*** Gilou_ is now known as Gilou | 14:50 | |
jcmdln | sorin-mihai: You have a lot of options but they all have trade-offs. I think in this case you might want to consider reviewing various SSD-backed or NVMe-backed Ceph papers released by organizations like Micron and explore what trade-offs make sense for your needs. | 14:57 |
jcmdln | Ultimately, LVM-backed Cinder volumes will be hard to beat with distributed storage in small(er) clusters. I think a Ceph IRC room or mailing list might be able to give you more accurate advice, though be sure not to make your redundancy worse in favor of higher performance until you've explored other options. | 15:04 |
sorin-mihai | jcmdln, NVMe is not an option in the existing Ceph cluster... It is a hyperconverged setup, with the OSDs, both SSDs and SAS HDDs with SSD DB, spread equally among the compute nodes, but the mon, mgr, mds and rgw daemons are only on the controllers. Would it improve anything, in relation to OpenStack (maybe Cinder RBD, if I understand correctly?), to run the mon, mgr and mds on the compute nodes as well? | 15:06 |
jcmdln | sorin-mihai: Sorry, I didn't mean to suggest getting NVMes; I simply meant that there are examples of tuning various kinds of flash-based storage devices that might be relevant for your use case. With regards to your hyperconverged setup, I cannot speak authoritatively, but co-locating those services should only hurt performance by a negligible margin unless you are using nodes with very old hardware. | 15:59 |
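A quick way to put numbers on the Ceph-vs-LVM gap discussed above is to benchmark both backends from inside a test instance. The following is a minimal sketch, assuming fio is installed in the guest and that two throwaway Cinder volumes (one from the Ceph SSD-only pool, one from the LVM backend) are attached at the hypothetical paths /dev/vdb and /dev/vdc; the device paths and job parameters are illustrative, not taken from the discussion.

```python
# Minimal sketch: compare 4k random-read IOPS on two attached test volumes,
# e.g. one from the Ceph SSD-only pool and one from the Cinder LVM backend.
# Device paths and job parameters are illustrative assumptions; fio must be
# installed in the guest and the script needs permission to read the devices.
import json
import subprocess

VOLUMES = {
    "ceph-ssd-pool": "/dev/vdb",  # hypothetical attachment of a Ceph-backed volume
    "cinder-lvm": "/dev/vdc",     # hypothetical attachment of an LVM-backed volume
}

def fio_randread_iops(device: str, runtime_s: int = 30) -> float:
    """Run a short 4k random-read fio job against `device` and return its IOPS."""
    cmd = [
        "fio", "--name=randread", "--filename=" + device,
        "--rw=randread", "--bs=4k", "--iodepth=32", "--ioengine=libaio",
        "--direct=1", "--time_based", f"--runtime={runtime_s}",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    job = json.loads(out)["jobs"][0]
    return job["read"]["iops"]

if __name__ == "__main__":
    for backend, dev in VOLUMES.items():
        print(f"{backend}: {fio_randread_iops(dev):.0f} IOPS (4k randread)")
```

Running the same job with --rw=randwrite (on scratch volumes only) is also worth doing, since Ceph's replicated write path is typically where the gap to local LVM is widest.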
jamesbenson | Open question to anyone/everyone: What do you do after deployment to verify that your cluster is functional? We run refstack, but are there other tests you run? Thanks! | 16:26 |
frickler | jamesbenson: have you seen rally? | 16:30 |
jamesbenson | frickler: I started some stuff with rally (https://gitlab.com/utsa-ics/osias/-/blob/Rally_Openstack/test_rally.sh), but it's not part of our main tests. What would you recommend? You can see what we've done so far at that link. | 16:32 |
frickler | jamesbenson: at a glance that looks like a good start. Possibly add in some Neutron, Octavia and Designate scenarios | 16:36 |
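For the Neutron/Octavia/Designate suggestion above, the rally task can be extended with additional scenarios. Below is a minimal sketch for one Neutron scenario, assuming a recent rally-openstack installation with an environment already configured; the scenario choice, iteration counts and the helper script itself are illustrative, not from the log.

```python
# Minimal sketch: build a rally task with a Neutron scenario and launch it.
# Assumes `rally` is on PATH and a rally environment/deployment is active.
import json
import subprocess
import tempfile

task = {
    "NeutronNetworks.create_and_delete_networks": [
        {
            "args": {"network_create_args": {}},
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 2, "users_per_tenant": 1}},
        }
    ],
}

# Write the task definition to a temporary JSON file rally can consume.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(task, f, indent=2)
    task_file = f.name

subprocess.run(["rally", "task", "start", task_file], check=True)
```

Octavia and Designate checks can be added as further top-level keys in the same task, depending on which scenario plugins the installed rally-openstack version ships.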
jamesbenson | Do you run refstack tests also? If so, do you use it through the refstack-client or through rally? | 16:37 |
frickler | No, I don't, although I think we have some code for that; I'll need to look it up tomorrow | 16:43 |
jamesbenson | That would be great. Thank you! Also, if you have any tests better than the sample tests from refstack (https://rally.readthedocs.io/en/stable/quick_start/tutorial/step_5_task_templates.html), please let me know. I'm open to ideas :-) | 16:46 |
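Beyond refstack and rally, a small scripted smoke test can catch obvious post-deployment breakage quickly. The sketch below uses openstacksdk to boot and delete a throwaway server on a temporary network; the cloud name, image and flavor names are assumptions and would need to match the actual deployment.

```python
# Minimal post-deployment smoke-test sketch using openstacksdk: create a
# network, boot a tiny server on it, then clean everything up again.
# "mycloud", "cirros" and "m1.tiny" are assumed names, not from the log.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry

# Network path: create a temporary network and subnet.
net = conn.network.create_network(name="smoke-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="192.168.199.0/24", name="smoke-subnet"
)

# Compute path: boot a tiny instance on that network and wait for ACTIVE.
image = conn.compute.find_image("cirros")    # assumed test image
flavor = conn.compute.find_flavor("m1.tiny")  # assumed flavor
server = conn.compute.create_server(
    name="smoke-server", image_id=image.id, flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)
print("server status:", server.status)

# Tear everything down again.
conn.compute.delete_server(server, ignore_missing=True)
conn.compute.wait_for_delete(server)
conn.network.delete_subnet(subnet, ignore_missing=True)
conn.network.delete_network(net, ignore_missing=True)
```

It is deliberately shallow compared to tempest/refstack runs, but it exercises Keystone, Neutron and Nova end to end in a minute or two.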
*** rlandy is now known as rlandy|bbl | 22:42 | |