openstackgerrit | WeAreFormalGroup proposed openstack/smaug: Implement cinder protection plugin https://review.openstack.org/286458 | 02:05 |
openstackgerrit | WeAreFormalGroup proposed openstack/smaug: Implement glance protection plugin https://review.openstack.org/295752 | 02:09 |
*** yinweiishere has joined #openstack-smaug | 02:41 | |
openstackgerrit | smile-luobin proposed openstack/smaug: Implement nova protection plugin https://review.openstack.org/295618 | 02:45 |
openstackgerrit | smile-luobin proposed openstack/smaug: Implement nova protection plugin https://review.openstack.org/295618 | 02:48 |
*** chenhuayi has joined #openstack-smaug | 03:07 | |
chenhuayi | devstack installation failed. | 03:09 |
chenhuayi | ++inc/python:pip_install:163 echo 'Installing test-requirements for /opt/stack/smaug-dashboard/test-requirements.txt' | 03:09 |
chenhuayi | Installing test-requirements for /opt/stack/smaug-dashboard/test-requirements.txt | 03:09 |
chenhuayi | ++inc/python:pip_install:165 sudo -H http_proxy= https_proxy= no_proxy= PIP_FIND_LINKS= /usr/local/bin/pip2.7 install -c /opt/stack/requirements/upper-constraints.txt -r /opt/stack/smaug-dashboard/test-requirements.txt | 03:09 |
chenhuayi | Ignoring dnspython3: markers u"python_version=='3.4'" don't match your environment | 03:09 |
chenhuayi | Could not satisfy constraints for 'horizon': installation from path or url cannot be constrained to a version | 03:09 |
chenhuayi | +inc/python:pip_install:1 exit_trap | 03:09 |
chenhuayi | Installing test-requirements for /opt/stack/smaug-dashboard/test-requirements.txt ??? | 03:10 |
chenhuayi | There was no problem yesterday. | 03:12 |
*** gampel1 has joined #openstack-smaug | 06:00 | |
*** saggi_class has joined #openstack-smaug | 06:12 | |
saggi_class | yinwei_computer, yinweiishere: ping | 06:12 |
yinwei_computer | hello | 06:36 |
gampel | hi yinwei | 06:42 |
yinwei_computer | hi gampel | 06:43 |
yinwei_computer | hi saggi_class | 06:43 |
*** yuval has joined #openstack-smaug | 06:43 | |
saggi_class | yinwei_computer: What do you think about the suggestion I sent in email yesterday? | 06:44 |
gampel | What we did not find yesterday in nova server info is the port or net id | 06:44 |
gampel | regarding the dangling ports? | 06:44 |
saggi_class | gampel: We will take it from neutron. It doesn't matter as long as we store it. | 06:44 |
yinwei_computer | yes, nova doesn't have this info | 06:44 |
gampel | ok so we store in the server the net id and port id | 06:44 |
yinwei_computer | saggi_class: I replied to your mail | 06:45 |
yinwei_computer | have you got it? | 06:45 |
yinwei_computer | I'm not sure if I catch your idea | 06:45 |
yinwei_computer | do you mean there will be a single network protectable plugin, which manages all l2/l3 subresources, just excluding port? | 06:46 |
yinwei_computer | Pls. take a look at my reply to make sure we are on the same page | 06:46 |
saggi_class | I'm not sure I'll have time. This is the only class I have where I'm in front of a computer. | 06:47 |
saggi_class | But the general idea is to have a single protectable for all network resources. | 06:48 |
gampel | I think that the VM should be dependent on the networking: internally it includes all the networking data and can build the dependencies | 06:48 |
yinwei_computer | including port? | 06:48 |
saggi_class | and have all VMs depend on it. | 06:48 |
gampel | yes | 06:48 |
saggi_class | We don't really back up port. We back up the VM connection info and network id | 06:48 |
saggi_class | The port is a side effect of this relationship | 06:49 |
gampel | we just need to store the network id in the Nova server metadata | 06:49 |
yinwei_computer | yes, I got this point | 06:49 |
yinwei_computer | what if the restore requirement is to keep fixed ip unchanged? | 06:49 |
yinwei_computer | or mac unchanged? | 06:49 |
saggi_class | The mac will be unchanged by default | 06:50 |
saggi_class | we will keep this information on the VM | 06:50 |
saggi_class | in the bank | 06:50 |
saggi_class | since it's a property of the VM virtual hardware | 06:50 |
saggi_class | Sorry but I gtg to class | 06:50 |
*** saggi_class has quit IRC | 06:50 | |
yinwei_computer | ok | 06:51 |
gampel | i think it could be one protectable for now and later we could split it into the aaS plugins, VPNaaS, FWaaS, etc | 06:51 |
yinwei_computer | gampel: how come the mac is unchanged by default? | 06:51 |
gampel | the mac is part of what we get from nova info for the server; if we can preserve the MAC we will do so | 06:52 |
gampel | if not possible we will have to generate a new one | 06:52 |
yinwei_computer | you mean specify the mac in the boot server's parameters | 06:52 |
yinwei_computer | so that's my question: whether the network protectable includes the port or not. | 06:53 |
gampel | I think that in the Neutron plugin we should try to figure out the port-to-server connection from the data stored by the Nova protection plugin | 06:53 |
yinwei_computer | if it doesn't include the port, then we create the port during server restoration, specifying the network id | 06:53 |
gampel | i think it does, if possible, and it should maybe be part of the options parameters | 06:54 |
yinwei_computer | if it includes the port, the port is exclusive to one server, not shared by many servers | 06:54 |
gampel | I am not sure I understand; I think that is always the case | 06:56 |
yinwei_computer | ok, so who will create port? | 06:57 |
yinwei_computer | network resource node or server resource node? | 06:57 |
yinwei_computer | I mean, in restoration | 06:57 |
yuval | yinwei_computer: the server resource node | 06:57 |
yuval | yinwei_computer: there is only 1 network object | 06:58 |
yinwei_computer | ok, so the network resource node doesn't care about port, right? | 06:58 |
yuval | yinwei_computer: which creates the l2, l3, etc.. objects | 06:58 |
gampel | when we create the vm we could specify the port as an optional parameter | 06:58 |
yuval | yinwei_computer: correct | 06:58 |
yinwei_computer | :) | 06:59 |
gampel | one sec | 06:59 |
yinwei_computer | so let's first confirm one thing: does the network resource node create the port or not? I think you guys haven't reached consensus. | 06:59 |
gampel | I think when we restore there are two options | 07:00 |
yinwei_computer | yes | 07:01 |
gampel | 1) Create the VM with the Net_Id parameter if we do not care about the MAC (nova will create the port) | 07:01 |
gampel | 2) Create the VM with the optional Port parameter after we have created the port in the neutron protection Plugin | 07:02 |
gampel | in option 1 we need to make sure we add this port to the relevant SG | 07:02 |
gampel | do you agree ? | 07:02 |
yinwei_computer | yes, totally agree | 07:03 |
gampel | I think that we can achieve these two flows using protect and restore schemes and let the user define the desired flow | 07:03 |
yinwei_computer | but I think these are options for the network protectable/protection plugin implementation | 07:03 |
yinwei_computer | I prefer option 2, where the network plugin takes care of the port, and the network plugin implementation has the freedom to meet different requirements | 07:05 |
yinwei_computer | what must stay unchanged, and what can change | 07:05 |
gampel | I do not have a problem doing this as a first phase, but i do not see a problem supporting option 1 later by using the parameters | 07:06 |
gampel | or I am missing something | 07:06 |
gampel | Server is dependent on Network | 07:07 |
gampel | and Project is dependent on network | 07:07 |
yinwei_computer | if we use option 1, a requirement change on the network side may require modification of the server plugin | 07:07 |
yinwei_computer | say, one day the requirement is that the fixed ip/mac stay unchanged; the user needs to modify both the server plugin and the network plugin | 07:08 |
yinwei_computer | for option 2, only the network plugin will be modified | 07:08 |
yinwei_computer | I mean, letting the network plugin take care of ports keeps network issues contained inside the network plugins | 07:09 |
yinwei_computer | btw, having the network plugin take care of ports would also solve the dangling port issue | 07:09 |
yinwei_computer | what do you think? | 07:11 |
gampel | I do not see a problem starting this way, we would have to check that we are not missing anything | 07:12 |
openstackgerrit | zengchen proposed openstack/smaug: Implement time trigger with Eventlet https://review.openstack.org/296880 | 07:13 |
gampel | we need to make sure we could have link metadata stored in the Server with the port id | 07:13 |
yinwei_computer | yes, that's the info to map restored new server to restored new port | 07:13 |
gampel | I think in the end we will have to support both flows but I have no problem starting this way | 07:14 |
yinwei_computer | as saggi said, we need check this info from neutron | 07:14 |
gampel | i understand | 07:14 |
yinwei_computer | actually, personally I'd like to draw a clear line between server and network, so I prefer the latter. Like volume/server attachment :) | 07:15 |
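The two restore flows gampel outlines above can be sketched as follows. This is a hypothetical illustration, not actual Smaug code: `nova`, `neutron`, the `meta` keys, and `restore_server` itself are stand-ins (the call shapes loosely mirror python-novaclient/python-neutronclient, where `nics` accepts `net-id` or `port-id` entries).

```python
def restore_server(nova, neutron, meta, keep_mac=False):
    """Restore a server using one of the two flows discussed.

    meta is the metadata saved in the bank at protect time, assumed
    (hypothetically) to contain 'net_id', 'name', 'image', 'flavor',
    and optionally 'mac'.
    """
    if not keep_mac:
        # Option 1: boot with the net id and let nova create the port.
        # (The new port must then be added to the relevant SG.)
        nic = {'net-id': meta['net_id']}
    else:
        # Option 2: pre-create the port in the neutron protection
        # plugin so the MAC (and, if saved, the fixed IP) survive the
        # restore, then boot the server with that port.
        port = neutron.create_port({'port': {
            'network_id': meta['net_id'],
            'mac_address': meta['mac'],
        }})['port']
        nic = {'port-id': port['id']}
    return nova.servers.create(name=meta['name'],
                               image=meta['image'],
                               flavor=meta['flavor'],
                               nics=[nic])
```

In option 2 the port stays owned by the network plugin, which matches yinwei's preference for a clear server/network boundary.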
gampel | I have a question regarding the glance protection plugin | 07:15 |
yinwei_computer | yes | 07:15 |
gampel | #link https://review.openstack.org/#/c/295752/ | 07:16 |
gampel | i added a comment there about chunks but it was ignored | 07:17 |
gampel | "I think that you will have to split this object into chunks. I think that you could reuse the implementation in the dragon project _imagecopy(...)" | 07:18 |
gampel | https://github.com/os-cloud-storage/openstack-workload-disaster-recovery/blob/e4d35645c53a4a41a9b1a158f6f0d49ab9eb8fd2/dragon/workload_policy/actions/plugins/instance_image_action.py | 07:18 |
yinwei_computer | yes, rong discussed this with us | 07:18 |
gampel | this is the link to the dragon PoC project from IBM; every object store has a max chunk size | 07:18 |
yinwei_computer | do you mean the feature like s3 multipart upload? | 07:18 |
yinwei_computer | which means a big object will be split into chunks and uploaded in parallel? | 07:19 |
gampel | I discussed this with saggi: we need to add the max chunk size to the context or to the bank object | 07:19 |
yinwei_computer | yes, I know that | 07:19 |
yinwei_computer | swift also supports a multipart-upload-like feature | 07:19 |
yinwei_computer | but we'd better have the swift-bank-plugin handle this | 07:20 |
yinwei_computer | and configure it with a default parameter, like objects exceeding 4M will be split into 4M chunks | 07:20 |
yinwei_computer | because this depends on the bank implementation, right? | 07:21 |
gampel | Ok, but it means that you will copy the "100G" image through memory to the bank plugin | 07:21 |
yinwei_computer | not all banks support multipart upload | 07:21 |
yinwei_computer | why | 07:21 |
yinwei_computer | as LuoBin checked with me, this is a stream operation | 07:22 |
gampel | you should use a multipart write to the bank plugin and not read the whole image at once from glance | 07:22 |
yinwei_computer | I'm not sure if I understand your question | 07:24 |
gampel | Ok i have to go into a meeting | 07:24 |
gampel | we need to make sure we do not copy all the data from glance into memory | 07:25 |
yinwei_computer | ok, I will double check the details with LuoBin | 07:25 |
yinwei_computer | sure | 07:25 |
yinwei_computer | what I suppose is that image_data is a file-like object instead of an in-memory copy of the data | 07:25 |
gampel | and when writing to the bank we should chunk by the max chunk size of that bank | 07:25 |
gampel | please look at how it was implemented in dragon for ideas | 07:26 |
yinwei_computer | and swift's multipart upload will read this file and chunk it into several parts, connecting them via metadata. | 07:26 |
yinwei_computer | sure | 07:26 |
gampel | sorry i have to go | 07:26 |
yinwei_computer | ok | 07:27 |
yinwei_computer | will respond to you later | 07:27 |
yuval | ping chenzeng | 07:31 |
chenzeng | yes | 07:33 |
chenzeng | yuval: how are you? do you see my new updates? | 07:35 |
openstackgerrit | zengchen proposed openstack/smaug: Implement time trigger with Eventlet https://review.openstack.org/296880 | 07:35 |
yuval | chenzeng: hey, how are you? | 07:38 |
chenzeng | yuval: fine, and you? the weather is very good. you're welcome to come to China | 07:39 |
yuval | chenzeng: looking forward to it :) | 07:39 |
chenzeng | yuval: yeah, i hope to see you in china. | 07:40 |
chenzeng | yuval: i just submitted the new updates. do you see them? | 07:40 |
chenzeng | yuval:i don't understand the 'copy the self._operation_ids on each loop'. | 07:41 |
yuval | chenzeng: lets get to this in a second, ok? I would like to first speak on _start_greenthread() | 07:44 |
chenzeng | ok | 07:44 |
yuval | chenzeng: I don't understand why we calculate a past time, and not the next time after now | 07:44 |
chenzeng | yuval:do you see my comments in the codes? | 07:44 |
yuval | chenzeng: yes | 07:45 |
chenzeng | yuval: ok, i will give you an example | 07:45 |
chenzeng | yuval: a user creates a trigger which only runs once. the trigger_property is like this: {'format':'crontab', 'pattern':'* * * * *', 'window':10, 'end_time':now}. in the __init__, if the user doesn't specify the start_time, it will be now. | 07:48 |
chenzeng | yuval: then the start_time < now. | 07:48 |
yuval | chenzeng: I understand | 07:49 |
chenzeng | yuval:good, am i right? | 07:50 |
yuval | chenzeng: so, how about: | 07:51 |
yuval | chenzeng: self._compute_next_run_time(now - window, ...) | 07:51 |
chenzeng | what's the question? | 07:53 |
yuval | chenzeng: or another case. a user creates a trigger, trigger_property: {'format': 'crontab', 'pattern': '* 1 * * *', 'window': 6000}. no operations are added, so the greenthread doesn't start. at 2 in the morning, a user adds an operation. immediately, the 1am job happens | 07:56 |
yuval | chenzeng: ? | 08:00 |
chenzeng | sorry, i don't understand ' at 2 in the morning, a user adds an operation. immediately, the job of 1 in the morning happens' | 08:01 |
chenzeng | you mean at 2:00 user add an operation? | 08:01 |
chenzeng | the job of 1 is what? | 08:01 |
yuval | the crontab is set to happen at 1am | 08:02 |
yuval | the operation is registered to the trigger at 2 am | 08:02 |
chenzeng | ok, i take a look. | 08:02 |
yuval | because the first_run_time is in the past (1am) | 08:02 |
yuval | and it is within the window | 08:02 |
yuval | the operation will happen | 08:02 |
yuval | that is what I'm concerned with | 08:03 |
chenzeng | yes, that is the logic. | 08:04 |
chenzeng | any problem? | 08:04 |
yuval | chenzeng: how about, instead of looping over the result of _compute_next_run_time | 08:08 |
yuval | chenzeng: give _compute_next_run_time the current time (now) minus the window | 08:08 |
yuval | chenzeng: as the start time | 08:08 |
chenzeng | yuval: ok, that's the simpler way. i will update. thanks. | 08:09 |
yuval | chenzeng: do you agree with it? is it ok? | 08:09 |
chenzeng | yuval:about your example, i don't know what is you concern with? | 08:11 |
yuval | chenzeng: that an operation supposed to happen in the future will be triggered | 08:12 |
chenzeng | yuval: about your simpler way to compute the first run time: i know the start_time will not influence the next time for crontab, but for rfc2445, i don't know if it will. | 08:13 |
*** gampel1 has quit IRC | 08:14 | |
chenzeng | yuval: our trigger is designed to run like that. if the triggered time is late but doesn't exceed the end time, we should still trigger the operation. | 08:18 |
chenzeng | otherwise what is the window for. | 08:18 |
yuval | chenzeng: I agree | 08:19 |
yuval | chenzeng: ok, so what about rfc2445? | 08:19 |
chenzeng | sorry, i have not researched the rfc2445. | 08:20 |
chenzeng | later i will go to study rfc2445. | 08:20 |
yuval | I asked because you referred to it | 08:21 |
chenzeng | rfc2445 is another time format | 08:21 |
yuval | chenzeng: do you think we can avoid the loop over next_run_time? | 08:21 |
chenzeng | you mean the loop at _start_greenthread? | 08:22 |
yuval | yes | 08:22 |
chenzeng | currently, i can use your algorithm, but we should consider whether it is always right. | 08:24 |
yuval | chenzeng: I'm in favor of not looping, but it is your choice. can we move to the copying of the operations_id set? | 08:25 |
chenzeng | for example, for crontab with the pattern "* */1 * * *": no matter whether the start_time is 15:30 or 15:40, the next time is 16:00 | 08:28 |
chenzeng | if rfc2445 and any other time format behave like crontab here, your algorithm is the best. | 08:29 |
chenzeng | we can define the same rule for all time formats, then we can use your algorithm | 08:31 |
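yuval's suggestion — seed `_compute_next_run_time` with `now - window` instead of looping until the result is in the future — can be sketched as below. The names are stand-ins: `compute_next` plays the role of the trigger's compute function (the real patch parses crontab patterns), and `every_15_minutes` is a hypothetical pattern used only for illustration.

```python
from datetime import datetime, timedelta

def next_run_time(compute_next, now, window):
    """Instead of looping over compute_next until the result is in
    the future, seed it with (now - window): a run time that is in
    the past but still inside the window is returned and triggered,
    while anything older is skipped in a single call.
    """
    return compute_next(now - window)

def every_15_minutes(t):
    """Stand-in for a '*/15 * * * *'-style pattern: the next quarter
    hour strictly after t (hypothetical, for illustration only)."""
    minutes = (t.minute // 15 + 1) * 15
    base = t.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=minutes)
```

With `now` = 08:20 and a 10-minute window, this yields the 08:15 run (late but inside the window, so it triggers); with a 2-minute window it yields 08:30, skipping the missed run. Whether a single seeded call is equivalent to the loop for every format (e.g. rfc2445 recurrence rules) is exactly the open question in the discussion above.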
chenzeng | can we move to the copying of the operations_id set? | 08:31 |
yuval | yes | 08:31 |
chenying | Hi yuval Is saggi here? | 08:31 |
yuval | chenzeng: no, can I help? | 08:32 |
yuval | chenying: no, can I help? | 08:32 |
chenzeng | yuval: as i replied to you, isn't the 'context switch' atomic? | 08:32 |
yuval | chenzeng: well, eventlet context switches happen on sleep, I/O and such | 08:33 |
chenzeng | yuval:yes | 08:33 |
yuval | chenzeng: consider during the loop over operation_ids | 08:33 |
yuval | chenzeng: when we send to the executor, we get a 'context switch' to an rpc call adding/removing operation_ids | 08:34 |
chenzeng | you mean when looping over the operation_ids, it can be switched out? | 08:35 |
yuval | chenzeng: as we know, during iteration over a container, if you change the number of elements, it raises an error | 08:35 |
yuval | chenzeng: we don't know, but it might. we call self._executor.execute(). if that function sleep or does I/O, it might get a context switch | 08:36 |
yuval | chenzeng: but that depends on the executor implementation, which we do not know in time_trigger | 08:36 |
chenying | Hi yuval I hope saggi can address the comments of operation log patch. | 08:36 |
yuval | chenying: I believe he will take a look later. I'll take a look also | 08:37 |
chenzeng | i understand you | 08:38 |
chenzeng | yuval: i understand you | 08:38 |
yuval | chenzeng: we could use a lock there, but I think copying can be better, performance-wise | 08:41 |
yuval | chenzeng: I have to go | 08:45 |
chenzeng | yuval:ok, i agree with you | 08:45 |
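The race yuval describes — an eventlet context switch inside `executor.execute()` letting an rpc call mutate `operation_ids` mid-iteration — and the copy-based fix can be shown in a few lines. `trigger_operations` and its arguments are hypothetical stand-ins, not the actual patch code:

```python
def trigger_operations(operation_ids, execute):
    """Iterate over a snapshot of the ids. If execute() yields
    control (eventlet switches on sleep/IO) and another greenthread
    adds or removes an id, iterating the live set would raise
    'Set changed size during iteration'; iterating a copy is immune,
    at the cost of possibly running an operation removed mid-loop.
    """
    for op_id in operation_ids.copy():
        execute(op_id)
```

A lock around the loop would also work, but as noted above the copy avoids blocking the add/remove rpc handlers while operations are being dispatched.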
openstackgerrit | yinwei proposed openstack/smaug: Create BankCheckpointCollection implementation https://review.openstack.org/280325 | 09:13 |
openstackgerrit | yinwei proposed openstack/smaug: Enable lease checking for checkpoint https://review.openstack.org/299228 | 09:13 |
openstackgerrit | zengchen proposed openstack/smaug: Implement time trigger with Eventlet https://review.openstack.org/296880 | 09:14 |
openstackgerrit | yinwei proposed openstack/smaug: Restore design spec (protection service level) https://review.openstack.org/296950 | 09:30 |
openstackgerrit | zengchen proposed openstack/smaug: Implement time trigger with Eventlet https://review.openstack.org/296880 | 09:32 |
*** huayi_ has quit IRC | 09:43 | |
chenzeng | yuval:sorry to delay your dinner time. | 09:49 |
yinwei_computer | ping saggi | 11:29 |
yinwei_computer | ping gampel | 11:29 |
*** chenhuayi has quit IRC | 11:46 | |
*** openstackgerrit has quit IRC | 11:47 | |
*** openstackgerrit has joined #openstack-smaug | 11:47 | |
*** gampel1 has joined #openstack-smaug | 12:21 | |
*** openstackgerrit has quit IRC | 12:33 | |
*** openstackgerrit has joined #openstack-smaug | 12:33 | |
*** openstackgerrit has quit IRC | 13:18 | |
*** openstackgerrit has joined #openstack-smaug | 13:18 | |
*** zhonghua has joined #openstack-smaug | 13:19 | |
*** zhonghua-lee has quit IRC | 13:21 | |
*** yuval has quit IRC | 15:32 | |
*** chenying has quit IRC | 16:17 | |
*** openstack has joined #openstack-smaug | 17:04 | |
*** openstackgerrit has quit IRC | 20:48 | |
*** openstackgerrit has joined #openstack-smaug | 20:48 | |
*** gampel1 has quit IRC | 22:04 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!