*** Apoorva has joined #openstack-operators | 00:01 | |
*** VW has quit IRC | 00:02 | |
*** itsuugo has quit IRC | 00:06 | |
*** itsuugo has joined #openstack-operators | 00:07 | |
*** uxdanielle has quit IRC | 00:07 | |
*** armax has quit IRC | 00:14 | |
*** armax has joined #openstack-operators | 00:20 | |
*** ducttape_ has joined #openstack-operators | 00:21 | |
*** rmcall has joined #openstack-operators | 00:23 | |
*** ducttape_ has quit IRC | 00:26 | |
*** piet has quit IRC | 00:30 | |
*** markvoelker has joined #openstack-operators | 00:34 | |
*** itsuugo has quit IRC | 00:45 | |
*** ducttape_ has joined #openstack-operators | 00:45 | |
*** itsuugo has joined #openstack-operators | 00:46 | |
*** piet has joined #openstack-operators | 00:52 | |
*** itsuugo has quit IRC | 00:56 | |
*** itsuugo has joined #openstack-operators | 00:58 | |
*** ducttape_ has quit IRC | 01:04 | |
*** itsuugo has quit IRC | 01:06 | |
*** itsuugo has joined #openstack-operators | 01:08 | |
*** ducttape_ has joined #openstack-operators | 01:11 | |
*** esker[away] is now known as esker | 01:12 | |
*** britthouser has quit IRC | 01:12 | |
*** esker has quit IRC | 01:13 | |
*** itsuugo has quit IRC | 01:15 | |
*** itsuugo has joined #openstack-operators | 01:17 | |
*** britthouser has joined #openstack-operators | 01:17 | |
*** itsuugo has quit IRC | 01:22 | |
*** piet has quit IRC | 01:22 | |
*** itsuugo has joined #openstack-operators | 01:22 | |
*** vinsh has joined #openstack-operators | 01:24 | |
*** britthouser has quit IRC | 01:27 | |
*** ducttape_ has quit IRC | 01:27 | |
*** karad has quit IRC | 01:28 | |
*** itsuugo has quit IRC | 01:30 | |
*** ducttape_ has joined #openstack-operators | 01:31 | |
*** itsuugo has joined #openstack-operators | 01:31 | |
*** Apoorva_ has joined #openstack-operators | 01:35 | |
*** Apoorva has quit IRC | 01:39 | |
*** itsuugo has quit IRC | 01:40 | |
*** Apoorva_ has quit IRC | 01:40 | |
*** itsuugo has joined #openstack-operators | 01:41 | |
*** itsuugo has quit IRC | 01:51 | |
*** itsuugo has joined #openstack-operators | 01:53 | |
*** armax has quit IRC | 01:59 | |
*** vijaykc4 has joined #openstack-operators | 01:59 | |
*** itsuugo has quit IRC | 02:03 | |
*** itsuugo has joined #openstack-operators | 02:03 | |
*** itsuugo has quit IRC | 02:11 | |
*** itsuugo has joined #openstack-operators | 02:12 | |
*** itsuugo has quit IRC | 02:17 | |
*** itsuugo has joined #openstack-operators | 02:18 | |
*** itsuugo has quit IRC | 02:23 | |
*** itsuugo has joined #openstack-operators | 02:23 | |
*** rstarmer has joined #openstack-operators | 02:26 | |
*** vinsh has quit IRC | 02:27 | |
*** catintheroof has joined #openstack-operators | 02:27 | |
*** rstarmer has quit IRC | 02:31 | |
*** VW has joined #openstack-operators | 02:32 | |
*** vijaykc4 has quit IRC | 02:33 | |
*** vinsh has joined #openstack-operators | 02:33 | |
*** vinsh has quit IRC | 02:36 | |
*** vinsh has joined #openstack-operators | 02:36 | |
*** vinsh has quit IRC | 02:40 | |
*** britthouser has joined #openstack-operators | 02:43 | |
*** britthouser has quit IRC | 02:43 | |
*** ducttape_ has quit IRC | 02:44 | |
*** ducttape_ has joined #openstack-operators | 02:45 | |
*** itsuugo has quit IRC | 02:51 | |
*** sudipto_ has joined #openstack-operators | 02:53 | |
*** sudipto has joined #openstack-operators | 02:53 | |
*** itsuugo has joined #openstack-operators | 02:53 | |
VW | anyone around for the LDT meeting? | 03:00 |
VW | sorrison: you online? | 03:00 |
*** itsuugo has quit IRC | 03:01 | |
VW | #startmeeting Large Deployments Team | 03:01 |
openstack | Meeting started Fri Sep 16 03:01:45 2016 UTC and is due to finish in 60 minutes. The chair is VW. Information about MeetBot at http://wiki.debian.org/MeetBot. | 03:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 03:01 |
openstack | The meeting name has been set to 'large_deployments_team' | 03:01 |
*** vijaykc4 has joined #openstack-operators | 03:01 | |
*** itsuugo has joined #openstack-operators | 03:02 | |
VW | looks like we might be short on folks for the scheduled LDT meeting, but I'll wait and see if some wander in | 03:03 |
*** ducttape_ has quit IRC | 03:04 | |
*** armax has joined #openstack-operators | 03:05 | |
*** itsuugo has quit IRC | 03:09 | |
*** itsuugo has joined #openstack-operators | 03:11 | |
*** itsuugo has quit IRC | 03:15 | |
*** itsuugo has joined #openstack-operators | 03:16 | |
*** itsuugo has quit IRC | 03:23 | |
*** itsuugo has joined #openstack-operators | 03:25 | |
*** itsuugo has quit IRC | 03:30 | |
VW | looks like not. we'll try again next month | 03:30 |
VW | #endmeeting | 03:30 |
openstack | Meeting ended Fri Sep 16 03:30:23 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 03:30 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.html | 03:30 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.txt | 03:30 |
openstack | Log: http://eavesdrop.openstack.org/meetings/large_deployments_team/2016/large_deployments_team.2016-09-16-03.01.log.html | 03:30 |
*** VW_ has joined #openstack-operators | 03:31 | |
*** itsuugo has joined #openstack-operators | 03:31 | |
*** VW has quit IRC | 03:34 | |
*** vinsh has joined #openstack-operators | 03:36 | |
*** vinsh has quit IRC | 03:37 | |
*** vinsh has joined #openstack-operators | 03:37 | |
*** vinsh has quit IRC | 03:41 | |
*** VW_ has quit IRC | 03:45 | |
*** VW has joined #openstack-operators | 03:46 | |
*** itsuugo has quit IRC | 03:48 | |
*** itsuugo has joined #openstack-operators | 03:50 | |
*** VW has quit IRC | 03:50 | |
*** itsuugo has quit IRC | 03:54 | |
*** mriedem has quit IRC | 03:55 | |
*** itsuugo has joined #openstack-operators | 03:56 | |
*** itsuugo has quit IRC | 04:00 | |
*** itsuugo has joined #openstack-operators | 04:01 | |
*** ducttape_ has joined #openstack-operators | 04:05 | |
*** itsuugo has quit IRC | 04:08 | |
*** itsuugo has joined #openstack-operators | 04:11 | |
*** ducttape_ has quit IRC | 04:11 | |
*** vijaykc4 has quit IRC | 04:12 | |
sorrison | Hey VW, sorry, we had a going-away lunch for a workmate and couldn't make it | 04:15 |
*** sudipto has quit IRC | 04:17 | |
*** sudipto_ has quit IRC | 04:17 | |
*** itsuugo has quit IRC | 04:20 | |
*** itsuugo has joined #openstack-operators | 04:21 | |
*** markvoelker has quit IRC | 04:28 | |
*** itsuugo has quit IRC | 04:28 | |
*** itsuugo has joined #openstack-operators | 04:29 | |
*** rcernin has quit IRC | 04:37 | |
*** itsuugo has quit IRC | 04:37 | |
*** itsuugo has joined #openstack-operators | 04:39 | |
*** itsuugo has quit IRC | 04:43 | |
*** itsuugo has joined #openstack-operators | 04:45 | |
*** harlowja has quit IRC | 04:45 | |
*** itsuugo has quit IRC | 04:50 | |
*** sudipto_ has joined #openstack-operators | 04:50 | |
*** sudipto has joined #openstack-operators | 04:50 | |
*** itsuugo has joined #openstack-operators | 04:51 | |
*** itsuugo has quit IRC | 04:56 | |
*** itsuugo has joined #openstack-operators | 04:57 | |
*** itsuugo has quit IRC | 05:04 | |
*** itsuugo has joined #openstack-operators | 05:05 | |
*** itsuugo has quit IRC | 05:12 | |
*** bvandenh has quit IRC | 05:12 | |
*** itsuugo has joined #openstack-operators | 05:14 | |
*** fragatina has quit IRC | 05:15 | |
*** itsuugo has quit IRC | 05:18 | |
*** itsuugo has joined #openstack-operators | 05:20 | |
*** markvoelker has joined #openstack-operators | 05:28 | |
*** markvoelker has quit IRC | 05:33 | |
*** fragatina has joined #openstack-operators | 05:33 | |
*** fragatina has quit IRC | 05:35 | |
*** itsuugo has quit IRC | 05:36 | |
*** itsuugo has joined #openstack-operators | 05:36 | |
*** bvandenh_ has joined #openstack-operators | 05:38 | |
*** itsuugo has quit IRC | 05:41 | |
*** rcernin has joined #openstack-operators | 05:43 | |
*** itsuugo has joined #openstack-operators | 05:43 | |
*** itsuugo has quit IRC | 05:48 | |
*** itsuugo has joined #openstack-operators | 05:49 | |
*** rstarmer has joined #openstack-operators | 06:02 | |
*** itsuugo has quit IRC | 06:07 | |
*** itsuugo has joined #openstack-operators | 06:08 | |
*** rcernin has quit IRC | 06:14 | |
*** itsuugo has quit IRC | 06:15 | |
*** itsuugo has joined #openstack-operators | 06:16 | |
*** rcernin has joined #openstack-operators | 06:19 | |
*** pcaruana has joined #openstack-operators | 06:23 | |
*** fragatina has joined #openstack-operators | 06:29 | |
*** fragatina has quit IRC | 06:33 | |
*** itsuugo has quit IRC | 06:33 | |
*** itsuugo has joined #openstack-operators | 06:34 | |
*** vern has quit IRC | 06:40 | |
*** vern has joined #openstack-operators | 06:43 | |
*** itsuugo has quit IRC | 06:44 | |
*** itsuugo has joined #openstack-operators | 06:45 | |
*** VW has joined #openstack-operators | 06:48 | |
*** VW has quit IRC | 06:53 | |
*** priteau has joined #openstack-operators | 06:53 | |
*** priteau has quit IRC | 06:54 | |
*** itsuugo has quit IRC | 06:54 | |
*** itsuugo has joined #openstack-operators | 06:56 | |
*** david-lyle_ has joined #openstack-operators | 06:59 | |
*** david-lyle has quit IRC | 07:00 | |
*** itsuugo has quit IRC | 07:01 | |
*** itsuugo has joined #openstack-operators | 07:02 | |
*** sticker has quit IRC | 07:03 | |
*** matrohon has joined #openstack-operators | 07:10 | |
*** itsuugo has quit IRC | 07:15 | |
*** sudipto_ has quit IRC | 07:16 | |
*** sudipto has quit IRC | 07:16 | |
*** itsuugo has joined #openstack-operators | 07:17 | |
*** julian1 has quit IRC | 07:19 | |
*** julian1 has joined #openstack-operators | 07:20 | |
*** itsuugo has quit IRC | 07:22 | |
*** itsuugo has joined #openstack-operators | 07:23 | |
*** david-lyle has joined #openstack-operators | 07:28 | |
*** david-lyle_ has quit IRC | 07:29 | |
*** markvoelker has joined #openstack-operators | 07:29 | |
*** markvoelker has quit IRC | 07:34 | |
*** jkraj has joined #openstack-operators | 08:01 | |
*** openstackgerrit has quit IRC | 08:03 | |
*** openstackgerrit has joined #openstack-operators | 08:04 | |
*** itsuugo has quit IRC | 08:08 | |
*** itsuugo has joined #openstack-operators | 08:10 | |
*** fragatina has joined #openstack-operators | 08:15 | |
*** itsuugo has quit IRC | 08:15 | |
*** itsuugo has joined #openstack-operators | 08:16 | |
*** fragatina has quit IRC | 08:19 | |
*** itsuugo has quit IRC | 08:21 | |
*** itsuugo has joined #openstack-operators | 08:22 | |
*** dbecker has quit IRC | 08:24 | |
*** derekh has joined #openstack-operators | 08:26 | |
*** dbecker has joined #openstack-operators | 08:28 | |
*** itsuugo has quit IRC | 08:29 | |
*** itsuugo has joined #openstack-operators | 08:31 | |
*** racedo has joined #openstack-operators | 08:36 | |
*** saneax-_-|AFK is now known as saneax | 08:42 | |
*** electrofelix has joined #openstack-operators | 08:43 | |
*** itsuugo has quit IRC | 08:49 | |
*** itsuugo has joined #openstack-operators | 08:50 | |
*** VW has joined #openstack-operators | 08:50 | |
*** VW has quit IRC | 08:55 | |
*** itsuugo has quit IRC | 08:55 | |
*** itsuugo has joined #openstack-operators | 08:56 | |
*** itsuugo has quit IRC | 09:08 | |
*** itsuugo has joined #openstack-operators | 09:10 | |
*** itsuugo has quit IRC | 09:15 | |
*** itsuugo has joined #openstack-operators | 09:16 | |
*** simon-AS559 has joined #openstack-operators | 09:17 | |
*** sudipto_ has joined #openstack-operators | 09:25 | |
*** sudipto has joined #openstack-operators | 09:25 | |
*** itsuugo has quit IRC | 09:27 | |
*** hughsaunders is now known as hushsaunders | 09:28 | |
*** itsuugo has joined #openstack-operators | 09:29 | |
*** hushsaunders is now known as hush | 09:29 | |
*** hush is now known as hughsaunders | 09:30 | |
*** itsuugo has quit IRC | 09:34 | |
*** itsuugo has joined #openstack-operators | 09:36 | |
*** itsuugo has quit IRC | 09:41 | |
*** itsuugo has joined #openstack-operators | 09:42 | |
*** itsuugo has quit IRC | 09:47 | |
*** itsuugo has joined #openstack-operators | 09:48 | |
*** itsuugo has quit IRC | 09:57 | |
*** itsuugo has joined #openstack-operators | 09:58 | |
*** itsuugo has quit IRC | 10:07 | |
*** bvandenh_ has quit IRC | 10:08 | |
*** itsuugo has joined #openstack-operators | 10:09 | |
*** itsuugo has quit IRC | 10:14 | |
*** rstarmer has quit IRC | 10:15 | |
*** itsuugo has joined #openstack-operators | 10:15 | |
*** itsuugo has quit IRC | 10:20 | |
*** itsuugo has joined #openstack-operators | 10:21 | |
*** ducttape_ has joined #openstack-operators | 10:26 | |
*** ducttape_ has quit IRC | 10:30 | |
*** itsuugo has quit IRC | 10:33 | |
*** itsuugo has joined #openstack-operators | 10:36 | |
*** itsuugo has quit IRC | 10:41 | |
*** itsuugo has joined #openstack-operators | 10:42 | |
*** karad has joined #openstack-operators | 10:44 | |
*** itsuugo has quit IRC | 10:51 | |
*** itsuugo has joined #openstack-operators | 10:52 | |
*** VW has joined #openstack-operators | 10:52 | |
*** itsuugo has quit IRC | 10:57 | |
*** VW has quit IRC | 10:57 | |
*** itsuugo has joined #openstack-operators | 10:58 | |
*** itsuugo has quit IRC | 11:09 | |
*** itsuugo has joined #openstack-operators | 11:10 | |
*** sudswas__ has joined #openstack-operators | 11:12 | |
*** sudipto has quit IRC | 11:13 | |
*** sudipto has joined #openstack-operators | 11:13 | |
*** sudipto_ has quit IRC | 11:14 | |
*** itsuugo has quit IRC | 11:21 | |
*** itsuugo has joined #openstack-operators | 11:23 | |
*** ducttape_ has joined #openstack-operators | 11:27 | |
*** itsuugo has quit IRC | 11:30 | |
*** markvoelker has joined #openstack-operators | 11:31 | |
*** itsuugo has joined #openstack-operators | 11:31 | |
*** ducttape_ has quit IRC | 11:31 | |
*** markvoelker has quit IRC | 11:35 | |
*** paramite has joined #openstack-operators | 11:36 | |
*** itsuugo has quit IRC | 11:38 | |
*** itsuugo has joined #openstack-operators | 11:39 | |
*** catintheroof has quit IRC | 11:39 | |
*** itsuugo has quit IRC | 11:44 | |
*** itsuugo has joined #openstack-operators | 11:45 | |
*** jsheeren has joined #openstack-operators | 11:48 | |
*** VW has joined #openstack-operators | 12:07 | |
*** itsuugo has quit IRC | 12:07 | |
*** itsuugo has joined #openstack-operators | 12:09 | |
*** VW has quit IRC | 12:10 | |
*** VW has joined #openstack-operators | 12:11 | |
*** ducttape_ has joined #openstack-operators | 12:14 | |
*** VW has quit IRC | 12:16 | |
*** sudipto has quit IRC | 12:19 | |
*** sudswas__ has quit IRC | 12:19 | |
*** itsuugo has quit IRC | 12:20 | |
*** sudipto has joined #openstack-operators | 12:21 | |
*** itsuugo has joined #openstack-operators | 12:21 | |
*** sudswas__ has joined #openstack-operators | 12:24 | |
*** maticue has joined #openstack-operators | 12:24 | |
*** markvoelker has joined #openstack-operators | 12:26 | |
*** itsuugo has quit IRC | 12:26 | |
*** itsuugo has joined #openstack-operators | 12:27 | |
*** catintheroof has joined #openstack-operators | 12:27 | |
catintheroof | Hi, quick question: suppose I have lots of users in a single OU in LDAP, and I need to assign each user to a new domain. I don't need a domain-specific driver for that, right? I just need multi-domains enabled? And how do I filter so that every user maps to its own domain? Can I apply some filter in keystone to achieve that? | 12:29 |
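The question above goes unanswered in the channel. For reference, a minimal sketch of one possible route, keystone's domain-specific configuration files: applying a different LDAP filter per domain does require the domain-specific driver mechanism. The domain name, LDAP URL, and filter below are illustrative assumptions, not taken from the discussion.

```ini
# /etc/keystone/keystone.conf -- enable per-domain identity backends
[identity]
domain_specific_drivers_enabled = true
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.customer-a.conf -- one file per domain;
# a user_filter narrows the shared OU down to the users that belong here
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
user_filter = (memberOf=cn=customer-a,ou=Groups,dc=example,dc=com)
```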
*** itsuugo has quit IRC | 12:34 | |
*** itsuugo has joined #openstack-operators | 12:35 | |
*** mriedem has joined #openstack-operators | 12:38 | |
*** ducttape_ has quit IRC | 12:39 | |
*** itsuugo has quit IRC | 12:40 | |
*** itsuugo has joined #openstack-operators | 12:41 | |
*** uxdanielle has joined #openstack-operators | 12:43 | |
*** makowals has joined #openstack-operators | 12:59 | |
*** rstarmer has joined #openstack-operators | 13:00 | |
*** makowals has quit IRC | 13:01 | |
*** makowals_ has joined #openstack-operators | 13:02 | |
*** makowals_ has quit IRC | 13:03 | |
*** makowals has joined #openstack-operators | 13:04 | |
*** rstarmer has quit IRC | 13:05 | |
*** VW has joined #openstack-operators | 13:10 | |
*** dminer has joined #openstack-operators | 13:27 | |
*** VW has quit IRC | 13:27 | |
*** VW has joined #openstack-operators | 13:28 | |
*** alaski is now known as lascii | 13:28 | |
*** piet has joined #openstack-operators | 13:45 | |
*** vinsh has joined #openstack-operators | 13:52 | |
*** saneax is now known as saneax-_-|AFK | 13:55 | |
*** makowals has quit IRC | 13:56 | |
*** jsheeren has quit IRC | 14:01 | |
*** ducttape_ has joined #openstack-operators | 14:03 | |
*** _ducttape_ has joined #openstack-operators | 14:18 | |
*** ducttape_ has quit IRC | 14:21 | |
*** vinsh has quit IRC | 14:35 | |
*** itsuugo has quit IRC | 14:38 | |
*** itsuugo has joined #openstack-operators | 14:41 | |
*** ircuser-1 has quit IRC | 14:46 | |
*** vinsh has joined #openstack-operators | 14:47 | |
*** vinsh has quit IRC | 14:47 | |
*** vinsh has joined #openstack-operators | 14:48 | |
*** VW has quit IRC | 14:50 | |
*** albertom has quit IRC | 15:03 | |
*** albertom has joined #openstack-operators | 15:13 | |
*** jkraj has quit IRC | 15:13 | |
*** rcernin has quit IRC | 15:15 | |
*** jkraj has joined #openstack-operators | 15:19 | |
*** jkraj has quit IRC | 15:30 | |
*** itsuugo has quit IRC | 15:39 | |
*** itsuugo has joined #openstack-operators | 15:41 | |
*** electrofelix has quit IRC | 15:50 | |
*** mriedem is now known as mriedem_afk | 15:50 | |
*** derekh has quit IRC | 16:02 | |
*** matrohon has quit IRC | 16:02 | |
*** pilgrimstack has quit IRC | 16:04 | |
*** britthouser has joined #openstack-operators | 16:08 | |
*** VW has joined #openstack-operators | 16:11 | |
*** britthouser has quit IRC | 16:11 | |
*** Benj_ has joined #openstack-operators | 16:11 | |
*** britthouser has joined #openstack-operators | 16:12 | |
*** VW_ has joined #openstack-operators | 16:13 | |
*** Benj_ has quit IRC | 16:13 | |
*** Benj_ has joined #openstack-operators | 16:14 | |
*** hieulq_ has joined #openstack-operators | 16:15 | |
*** fragatina has joined #openstack-operators | 16:16 | |
*** VW has quit IRC | 16:16 | |
*** Apoorva has joined #openstack-operators | 16:16 | |
*** fragatina has quit IRC | 16:20 | |
*** VW_ has quit IRC | 16:22 | |
*** VW has joined #openstack-operators | 16:23 | |
*** rmcall has quit IRC | 16:27 | |
*** VW has quit IRC | 16:28 | |
*** rmcall has joined #openstack-operators | 16:28 | |
*** hieulq_ has quit IRC | 16:31 | |
*** hieulq_ has joined #openstack-operators | 16:32 | |
*** dminer has quit IRC | 16:33 | |
*** piet has quit IRC | 16:34 | |
*** piet has joined #openstack-operators | 16:35 | |
*** rmcall has quit IRC | 16:36 | |
*** _ducttape_ has quit IRC | 16:36 | |
*** ducttape_ has joined #openstack-operators | 16:37 | |
*** rmcall has joined #openstack-operators | 16:37 | |
*** ducttape_ has quit IRC | 16:38 | |
*** ducttape_ has joined #openstack-operators | 16:38 | |
openstackgerrit | Craig Sterrett proposed openstack/osops-tools-contrib: Edited Readme added testing information https://review.openstack.org/371705 | 16:40 |
*** rmcall has quit IRC | 16:41 | |
*** fragatina has joined #openstack-operators | 16:46 | |
*** fragatina has quit IRC | 16:49 | |
*** fragatina has joined #openstack-operators | 16:49 | |
*** hieulq_ has quit IRC | 16:54 | |
*** mwturvey has quit IRC | 16:56 | |
*** itsuugo has quit IRC | 17:00 | |
*** itsuugo has joined #openstack-operators | 17:00 | |
*** simon-AS5591 has joined #openstack-operators | 17:04 | |
*** simon-AS5591 has quit IRC | 17:04 | |
*** ircuser-1 has joined #openstack-operators | 17:05 | |
*** fragatina has quit IRC | 17:05 | |
*** fragatina has joined #openstack-operators | 17:06 | |
pabelanger | afternoon | 17:06 |
pabelanger | I'm trying to learn more about pre-caching glance images on compute nodes, interested in any documentation around the subject. | 17:07 |
*** simon-AS559 has quit IRC | 17:07 | |
pabelanger | With infracloud, we are only using the local HDD on each compute node to store the images from glance. | 17:08 |
pabelanger | Obviously, when we upload a new image to glance, we are seeing problems launching new instances for the first time because the compute nodes need to fetch the new image | 17:08 |
pabelanger | I'm curious how other operators deal with this issue. | 17:09 |
pabelanger | These are qcow2 images too | 17:09 |
jlk | Rackspace used something they built, scheduled images I think. | 17:10 |
jlk | a system to prime the system behind the scenes | 17:10 |
jlk | there were efforts to push that upstream, but I'm not sure where it stalled out | 17:10 |
*** ducttape_ has quit IRC | 17:11 | |
jlk | at IBM private cloud we don't bother with it, image fetch time is pretty fast, and instance launch time hasn't been a customer complaint | 17:11 |
pabelanger | Right, that is what I am finding in the googles. People have either implemented something locally or it is not an issue because of their setup | 17:14 |
*** rstarmer has joined #openstack-operators | 17:40 | |
mgagne | yea, looking for the same here. I think pabelanger is asking on my behalf ;) | 17:40 |
mgagne | I will be looking into implementing a swift image download for https://github.com/openstack/nova/tree/master/nova/image/download | 17:41 |
mgagne | and see where it goes | 17:41 |
*** jamesdenton has joined #openstack-operators | 17:43 | |
*** VW has joined #openstack-operators | 17:45 | |
*** ducttape_ has joined #openstack-operators | 17:45 | |
notmyname | mgagne: I'm not sure of the context of that, but if you need guidance on swift, let me know | 17:53 |
mgagne | notmyname: 1) upload a new image to Glance (with Swift backend) 2) Boot 50 instances spread on 50 computes. 3) Observe chaos where Nova and Glance struggle to download and cache the new image. | 17:54 |
mgagne | notmyname: hmm so you are the one that wanted to be my friend at the ops meetup? =) | 17:55 |
notmyname | mgagne: I want to be everyone's friend ;-) | 17:58 |
mgagne | =) | 17:58 |
notmyname | especially ops | 17:58 |
mgagne | hehe | 17:58 |
mgagne | so I'm looking to bypass Glance when downloading images stored in Swift and see where it goes in terms of performance | 17:59 |
notmyname | what's the current bottleneck? | 17:59 |
notmyname | network, cpu, drive IO? | 18:00 |
mgagne | we have yet to fully identify the problem (we have a firewall, glance, and swift in the data path), but so far we see Glance itself as an unnecessary element in the download chain | 18:00 |
notmyname | ok | 18:00 |
notmyname | so one image needs to be loaded by 50 compute instances. and, because it wouldn't be fun otherwise, all 50 need the image at the same time | 18:01 |
mgagne | yep | 18:01 |
mgagne | exact situation | 18:02 |
notmyname | how big is the image? | 18:02 |
mgagne | ask infra =), maybe 10GB | 18:02 |
notmyname | ok | 18:02 |
pabelanger | 8GB | 18:02 |
pabelanger | qcow2 | 18:02 |
notmyname | and the whole image is needed before any of it can be used, right? so eg you can't start using the first byte before the last byte is downloaded | 18:02 |
pabelanger | actually not sure how that would work | 18:03 |
pabelanger | or what other providers do | 18:03 |
pabelanger | but, I would imagine, we need to whole image | 18:03 |
notmyname | yeah | 18:03 |
mgagne | so we could scale Swift and Glance. But with Glance, the cache is not shared, so it would need to warm up on all nodes first. The Glance cache is not used to its full potential since every compute node wants the image right now; Glance has no time to warm up the cache. | 18:04 |
notmyname | ok, and how many drives are in your swift cluster? order of magnitude? more than 100? | 18:04 |
mgagne | we can cap the network bandwidth just fine when downloading a file =) | 18:04 |
mgagne | I think one of the network devices between Glance and Swift could be a bottleneck, and also Glance itself (need to verify that first) | 18:05 |
notmyname | I'm thinking of drive bandwidth, and it matters if you have 8 drives or 80 drives (or 800) | 18:05 |
mgagne | there are a lot of elements and systems in the data path. I don't think Swift itself is the bottleneck. | 18:06 |
notmyname | I hope not :-) | 18:07 |
mgagne | I'm sure it's not =) | 18:07 |
*** piet has quit IRC | 18:08 | |
notmyname | but with the idea I want to share, I want to make sure my suggestion matches what's possible. and it *really* matters if you have a few drives or "enough" drives | 18:08 |
mgagne | true, will consider this aspect when adding more proxies as drives might be the next bottleneck for Swift. | 18:08 |
*** sudipto has quit IRC | 18:08 | |
*** sudswas__ has quit IRC | 18:08 | |
notmyname | right. so do you have about 10 drives in the cluster or more than 100? | 18:09 |
mgagne | would have to check, I'm not the one managing Swift =) | 18:10 |
mgagne | more than 100 | 18:11 |
notmyname | ok, so assuming you have enough drives, here's what I'd do if I were writing a client to download 8+GB objects from swift and wanted to get 100% of the bytes to 50+ clients ASAP... | 18:11 |
notmyname | ok, 100+ is fine for this case | 18:12 |
notmyname | first, since there's a 5GB limit on an individual object in swift, you'll need to use large object manifests. I would definitely use static large objects (SLO) | 18:12 |
*** harlowja has joined #openstack-operators | 18:13 | |
notmyname | SLOs work by making an object whose contents are a JSON blob listing the *other* objects that make it up. think of it kinda like a table of contents | 18:13 |
*** paramite has quit IRC | 18:13 | |
*** ducttape_ has quit IRC | 18:13 | |
notmyname | so I'd split the original image into small pieces before uploading it. something in the 100-250MB range | 18:14 |
notmyname | so you'd split it locally using whatever you want (FWIW, the python-swiftclient SDK and CLI can do this for you) and upload those to /v1/myaccount/image_foo_segments/seg_00000 etc | 18:15 |
notmyname | so with an 8GB image and 100MB segments, you'd end up with ~80 segments | 18:15 |
mgagne | Glance manages this aspect; the user/customer doesn't manage it. I'm sure we can configure Glance to split as suggested | 18:15 |
notmyname | right | 18:15 |
mgagne | might already be the case by default | 18:16 |
notmyname | then the 81st object you'd put into swift is the SLO manifest: /v1/myaccount/myimages/awesome.image | 18:16 |
mgagne | will have to check and apply suggestions | 18:16 |
notmyname | yeah, they used to use DLOs and then did some work to switch to SLOs at some point | 18:16 |
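A minimal sketch of the upload side described above, using python-swiftclient's SwiftService. The container names and segment size are illustrative, and auth is assumed to come from the usual OS_* environment variables, which SwiftService picks up by default:

```python
from swiftclient.service import SwiftService, SwiftUploadObject

# Assumption: OS_AUTH_URL / OS_USERNAME / etc. are set in the environment.
options = {
    "segment_size": 100 * 1024 * 1024,      # split into ~100MB segments
    "use_slo": True,                        # write an SLO manifest, not a DLO
    "segment_container": "image_foo_segments",
}

with SwiftService() as swift:
    uploads = swift.upload(
        "myimages",
        [SwiftUploadObject("awesome.qcow2", object_name="awesome.image")],
        options=options,
    )
    for result in uploads:                  # one result per segment + manifest
        if not result["success"]:
            raise RuntimeError(result.get("error"))
```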
mgagne | I'm sure Swift isn't the bottleneck for now. As mentioned, we have network hardware and Glance in the data path that we will need to check first. Once those are fixed/removed, we will work on optimizing Swift, which might scale better than Glance =) | 18:16 |
notmyname | ok, so now you've got the image into swift. now let's load it really really quickly using all the hardware in the swift cluster | 18:17 |
notmyname | so the naive way is to simply have all 50 clients GET the SLO and stream the bytes | 18:17 |
notmyname | that will have a few problems | 18:17 |
notmyname | first is that you'll end up with a lot of drive contention as each client starts getting bytes off of a single (or small set of) drive | 18:18 |
notmyname | an HDD is bad at serving bytes quickly to 50+ clients at the same time. HDDs are slow | 18:18 |
mgagne | right, is there something that can be done in the proxy for that? | 18:18 |
notmyname | the second problem is that one stream won't likely max the network bandwidth for one client | 18:18 |
notmyname | so the trick to solve both problems is the same thing: concurrently download different parts | 18:19 |
mgagne | is this something that is controlled on the client side? or can swift proxy do some magic? | 18:20 |
notmyname | so I'd make each of the 50+ VM hosts download the SLO itself (ie get the manifest) and then parse it to find the segments. then each client should randomize the list and download 10 segments at a time. then after all 80+ segments are downloaded, reassemble locally (ie cat them together) | 18:20 |
notmyname | no, you don't want the swift proxy to do this, otherwise you'll be limited by one swift proxy. much much better to do this on the client itself | 18:20 |
mgagne | right so we are talking about implementation details in the nova image download driver (that doesn't exist atm) | 18:20 |
notmyname | right | 18:20 |
mgagne | awesome | 18:21 |
notmyname | but a lot of that logic about concurrent downloads is already written in python-swiftclient. | 18:21 |
mgagne | so it's just a matter of enabling the feature? | 18:22 |
notmyname | so if this is done, you'll have 50+ clients concurrently downloading parts of the whole in such a way that 80+ drives in the cluster are used at the same time. you should be able to max out your network and not max out HDD IO in this way | 18:22 |
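A rough sketch of the client-side scheme outlined above, assuming python-swiftclient's Connection API: fetch the SLO manifest, shuffle its segments, download them with a bounded thread pool, then reassemble in manifest order. The credentials and names are illustrative, segments are held in memory for brevity, and each worker opens its own connection since Connection is not thread-safe:

```python
import json
import random
from concurrent.futures import ThreadPoolExecutor

from swiftclient.client import Connection

# Illustrative credentials; a real client would reuse whatever auth nova has.
AUTH = dict(authurl="https://keystone.example.com/v3", user="demo",
            key="secret", auth_version="3",
            os_options={"project_name": "demo",
                        "user_domain_name": "Default",
                        "project_domain_name": "Default"})


def get_manifest(container, obj):
    """Fetch the SLO manifest itself rather than the assembled object."""
    conn = Connection(**AUTH)
    _, body = conn.get_object(container, obj,
                              query_string="multipart-manifest=get")
    # Entries look like {"name": "/image_foo_segments/seg_00000", "bytes": ...}
    return json.loads(body)


def fetch_segment(entry):
    """Download one segment on its own connection."""
    conn = Connection(**AUTH)
    seg_container, seg_name = entry["name"].lstrip("/").split("/", 1)
    _, data = conn.get_object(seg_container, seg_name)
    return entry["name"], data


segments = get_manifest("myimages", "awesome.image")
order = list(segments)
random.shuffle(order)                       # spread load across drives/proxies

results = {}
with ThreadPoolExecutor(max_workers=10) as pool:   # ~10 segments in flight
    for name, data in pool.map(fetch_segment, order):
        results[name] = data

# Reassemble locally in manifest order (the "cat them together" step).
with open("awesome.image", "wb") as out:
    for entry in segments:
        out.write(results[entry["name"]])
```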
mgagne | shuffle ? | 18:22 |
mgagne | but it looks like the option is enabled by default. no? | 18:23 |
*** timburke has joined #openstack-operators | 18:24 | |
notmyname | hi timburke | 18:24 |
mgagne | "download 10 segments" -> controlled by object_dd_threads ? | 18:24 |
notmyname | thingee: meet mgagne | 18:24 |
notmyname | timburke: | 18:24 |
timburke | hi mgagne! | 18:25 |
mgagne | hi! | 18:25 |
notmyname | timburke: ok, here's the summary: mgagne wants to download an SLO as quickly as possible to 50+ clients | 18:25 |
notmyname | timburke: what's the options in swiftclient (service) to do the concurrent SLO download stuff you wrote? | 18:25 |
mgagne | or with a bit more context: I want to fix the thundering herd problem when a new private image is uploaded by a customer and they boot 50 instances on as many compute nodes. =) | 18:26 |
timburke | notmyname: i didn't *write* it yet! that's what https://bugs.launchpad.net/python-swiftclient/+bug/1621562 is about | 18:26 |
openstack | Launchpad bug 1621562 in python-swiftclient "Download large objects concurrently" [Wishlist,Confirmed] | 18:26 |
*** VW has quit IRC | 18:26 | |
notmyname | lolo | 18:26 |
*** VW has joined #openstack-operators | 18:27 | |
notmyname | ok, yeah, that's what I was describing to mgagne as a way to do it | 18:27 |
*** chlong_ has quit IRC | 18:27 | |
notmyname | the option 1 there | 18:27 |
timburke | basically, you'll want each client to download the manifest (rather than the large object), shuffle the segments, and download them into a sparse file | 18:28 |
notmyname | mgagne: I knew timburke would have the answer ;-) | 18:28 |
timburke | which also seems like what clayg prefers | 18:28 |
notmyname | timburke: or local file that's then concat'd with the others at the end | 18:28 |
mgagne | right, makes a lot of sense | 18:28 |
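The sparse-file variant timburke mentions avoids holding the whole image in memory: compute each segment's byte offset from the manifest, preallocate the file, and have workers seek and write their pieces. A short sketch, assuming a fetch helper like the hypothetical one above (adapted to return just the segment bytes):

```python
def write_sparse(path, segments, fetch):
    """Lay segments out at their byte offsets; fetch(entry) returns bytes."""
    offsets, total = {}, 0
    for entry in segments:                  # manifest order defines the layout
        offsets[entry["name"]] = total
        total += entry["bytes"]
    with open(path, "wb") as out:
        out.truncate(total)                 # preallocate as a sparse file
    # Shown serially for brevity; concurrent workers would each open the
    # file in "r+b" mode, seek to their offset, and write their segment.
    with open(path, "r+b") as out:
        for entry in segments:
            out.seek(offsets[entry["name"]])
            out.write(fetch(entry))
```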
timburke | fwiw, we have code to fetch the manifest and figure out the list of segments already | 18:29 |
timburke | just a matter of typing :P | 18:29 |
notmyname | but there's still some mechanism in swiftclient to facilitate that, if a client wanted to do it now | 18:29 |
notmyname | the download threads/concurrency thing | 18:29 |
mgagne | yea, found it. so I'm not sure how it relates to the bug report | 18:29 |
timburke | how are we using it? cli or python? | 18:29 |
mgagne | or could it be that python only supports concurrent download of different objects and not large object chunks? http://docs.openstack.org/developer/python-swiftclient/swiftclient.html#swiftclient.multithreading.MultiThreadingManager | 18:31 |
*** VW has quit IRC | 18:31 | |
mgagne | and upload is supported with segment_threads and not download of chunks? | 18:32 |
mgagne | first time I'm looking at swiftclient so bear with me =) | 18:33 |
notmyname | yeah, the bug report is to do that automatically on downloads. It should be possible to do it today, it's just that the magic auto-ness isn't in swiftclient downloads. but one could use the primitives to do the same thing that's in the bug report | 18:33 |
timburke | weeeeelllll... we can find a way around it, if we're willing to do the reconstruction client-side. but yes, the dd-threads option is talking about objects, not segments within a large object | 18:33 |
mgagne | right. brb in 5m | 18:34 |
notmyname | timburke: so a client could build the "work list" by fetching the manifest, then use the download threads to get concurrency, then reconstruct locally, right? | 18:34 |
* notmyname needs to step out too. will be back later | 18:35 | |
timburke | how much can we assume about the upload? eg, that it was done by swiftclient? that it was a dlo, or an slo? or do we want to discover all of this as part of it? | 18:35 |
*** tdasilva has joined #openstack-operators | 18:35 | |
*** rstarmer has quit IRC | 18:36 | |
mgagne | timburke: the idea I had is to add the ability for nova-compute to download an image directly from Swift when possible, instead of through Glance, so the middleman is bypassed. So it would have to support whatever method Glance used to upload the image to Swift. | 18:38 |
mgagne | I have a lot on my plate for the next weeks. So I might just throw resources/hardware at the problem for now. If I find time, I might work on the proposed idea. | 18:39 |
timburke | makes sense. and doing it in python should make things easier | 18:40 |
timburke | fwiw, glance would be uploading slos now, but dlos previously | 18:41 |
mgagne | which version? | 18:41 |
mgagne | we are running kilo atm, planning on upgrading soon (that's one of the tasks on my plate) | 18:42 |
mgagne | but this future work would need the work described in the above bug report, right? | 18:42 |
timburke | actually, i stand corrected. glance uses dlos, and presumably has for a while. maybe i was thinking of shade or someone else? | 18:44 |
*** saneax-_-|AFK is now known as saneax | 18:46 | |
timburke | mgagne: depends on how exactly we want the integration to work. ideally, yes, we'd fix swiftclient to do the concurrent download of segments, then get nova to use the new feature | 18:47 |
mgagne | :D | 18:49 |
timburke | but if you need it in a hurry, you could get nova to head the object and determine the manifest prefix, get the appropriate container listing, pipe that back through swiftclient to download, and then reassemble yourself. with the caveat that hopefully that would all be replaced fairly quickly once the feature lands in swiftclient | 18:49 |
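In python-swiftclient terms, the interim DLO workaround described above might look roughly like this; head_object and get_container are standard Connection calls, the names are illustrative, and the reassembly order relies on DLO semantics (segments are concatenated in name order):

```python
def dlo_segments(conn, container, obj):
    """HEAD the DLO, read its X-Object-Manifest header, list the segments."""
    manifest = conn.head_object(container, obj)["x-object-manifest"]
    seg_container, prefix = manifest.split("/", 1)
    _, listing = conn.get_container(seg_container, prefix=prefix,
                                    full_listing=True)
    # DLO semantics concatenate the matching objects in name order, so
    # downloading the listing in order and concatenating rebuilds the image.
    return seg_container, [entry["name"] for entry in listing]
```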
timburke | having another person asking for it bumps this up *my* priority list a bit, so good to know :-) | 18:50 |
*** pilgrimstack has joined #openstack-operators | 18:52 | |
mgagne | I might not be able to invest time in the very short term but yes, it looks like a feature I would definitely use. | 18:56 |
timburke | mgagne: fwiw, with a bit of cli magic, current swiftclient can do the concurrent download for dlos (doesn't take care of reconstruction, though, and what i've got leaves a rather messy path) | 18:59 |
timburke | something like `swift download $(swift stat "$CONTAINER" "$DLO" | grep Manifest | sed -e 's/^[^:]*: //' -e 's!/! --prefix=!')` might be useful | 18:59 |
*** pilgrimstack has quit IRC | 19:02 | |
*** simon-AS559 has joined #openstack-operators | 19:05 | |
*** VW has joined #openstack-operators | 19:15 | |
*** david-lyle has quit IRC | 19:30 | |
*** david-lyle has joined #openstack-operators | 19:30 | |
*** david-lyle has quit IRC | 19:43 | |
*** Apoorva has quit IRC | 19:47 | |
*** itsuugo has quit IRC | 19:50 | |
*** itsuugo has joined #openstack-operators | 19:50 | |
*** VW has quit IRC | 20:18 | |
*** pontusf4 has joined #openstack-operators | 20:30 | |
*** pontusf has joined #openstack-operators | 20:32 | |
*** pontusf3 has quit IRC | 20:34 | |
*** Apoorva has joined #openstack-operators | 20:34 | |
*** pontusf4 has quit IRC | 20:35 | |
*** pontusf has quit IRC | 20:37 | |
*** pontusf has joined #openstack-operators | 20:39 | |
*** itsuugo has quit IRC | 20:40 | |
*** itsuugo has joined #openstack-operators | 20:41 | |
*** pontusf1 has joined #openstack-operators | 20:41 | |
*** pontusf has quit IRC | 20:43 | |
*** pontusf1 has quit IRC | 20:45 | |
*** lascii is now known as alaski | 20:53 | |
*** itsuugo has quit IRC | 21:02 | |
*** itsuugo has joined #openstack-operators | 21:04 | |
*** wasmum has quit IRC | 21:18 | |
*** VW has joined #openstack-operators | 21:19 | |
*** wasmum has joined #openstack-operators | 21:20 | |
*** simon-AS559 has quit IRC | 21:22 | |
*** VW has quit IRC | 21:24 | |
*** simon-AS559 has joined #openstack-operators | 21:25 | |
*** jamesdenton has quit IRC | 21:26 | |
*** fragatin_ has joined #openstack-operators | 21:33 | |
*** fragatina has quit IRC | 21:36 | |
*** saneax is now known as saneax-_-|AFK | 21:40 | |
*** itsuugo has quit IRC | 21:43 | |
*** itsuugo has joined #openstack-operators | 21:44 | |
*** fragatin_ has quit IRC | 21:48 | |
*** fragatina has joined #openstack-operators | 21:50 | |
*** fragatina has quit IRC | 22:10 | |
*** fragatina has joined #openstack-operators | 22:10 | |
*** itsuugo has quit IRC | 22:18 | |
*** itsuugo has joined #openstack-operators | 22:19 | |
*** simon-AS559 has quit IRC | 22:20 | |
*** VW has joined #openstack-operators | 22:21 | |
*** mriedem_afk is now known as mriedem | 22:22 | |
*** VW has quit IRC | 22:26 | |
*** makowals has joined #openstack-operators | 22:31 | |
*** catintheroof has quit IRC | 22:31 | |
*** markvoelker has quit IRC | 22:34 | |
*** itsuugo has quit IRC | 22:39 | |
*** itsuugo has joined #openstack-operators | 22:41 | |
*** dtrainor has quit IRC | 22:43 | |
*** david-lyle has joined #openstack-operators | 22:48 | |
*** itsuugo has quit IRC | 22:55 | |
*** itsuugo has joined #openstack-operators | 22:56 | |
*** erhudy has quit IRC | 23:02 | |
*** pontusf1 has joined #openstack-operators | 23:04 | |
*** itsuugo has quit IRC | 23:06 | |
*** itsuugo has joined #openstack-operators | 23:08 | |
*** Benj_ has quit IRC | 23:34 | |
*** rmcall has joined #openstack-operators | 23:46 |