[svlug] Rsync across SSH & alternatives
Skip Evans
skip at bigskypenguin.com
Fri May 14 10:37:37 PDT 2010
Oops. I meant scp, not rcp. I use scp now to copy backup
files from one of my production machines to my own machine, so
that's what I'd use.
What I hope to do is have the R program that updates each
individual file kick off a script after each update that does
an scp to the second server.
The script will run the scp command only after the small file
has been fully updated, so the destination server won't get
partial files.
The PHP file will then read the most recent file each time a
graph is generated. If PHP hits a file while it is being
written, I believe it waits for the file to close, but I'll
have to research that.
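The post-update script I have in mind would look roughly like this
(a sketch only -- the filename, the sample data line, and $REMOTE
are made up):

```shell
#!/bin/sh
# Sketch of the script the R updater could kick off after each run.
# The filename, the sample CSV line, and $REMOTE are hypothetical.
set -e

CSV=graph-data.csv
TMP="$CSV.tmp.$$"

# Write the new data to a temp file in the same directory, then
# mv it into place: on one filesystem mv is a rename, so the scp
# below never picks up a half-written file on the source side.
printf '%s\n' "2010-05-14,42" > "$TMP"
mv "$TMP" "$CSV"

# Push the finished file to the graphing server using ssh keys.
# $REMOTE is a placeholder; when it is unset, the copy is skipped.
if [ -n "$REMOTE" ]; then
    scp -q "$CSV" "$REMOTE:/var/www/data/$CSV"
fi
```

With keys loaded in ssh-agent (or an unencrypted key), the scp runs
without a password prompt.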
Skip
Robert Hajime Lanning wrote:
> Skip Evans wrote:
>> Hey all,
>>
>> What about rcp? I'm wondering if that might work for this as well.
>>
>> The files are very small: just a couple of lines of a CSV file
>> for some graph data. And as each one is updated about once a
>> minute on the first server they'd like to then push the file
>> to the second server where the data is read for graphing
>> purposes by a PHP web app.
>>
>> When an rcp runs, does it do a regular login with the local
>> and remote keys? I use it right now to do a backup, but I'm
>> not up on exactly how it connects. The reason I wonder about
>> this is speed. I know when you ssh to a server you wait for
>> the login prompt and then password validation, but I'm
>> thinking rcp is faster because it uses the keys to
>> validate access?
>>
>> Would rcp be faster than rsync?
>>
>> Also, thanks for the suggestions of netcat, auto-sshfs, too.
>> I'm reading up on those as well.
>>
>> Skip
>>
>
> rcp has essentially no security: no keys, no encryption, and
> no authentication, which does make it faster.
>
> netcat would be about the same.
>
> I would not use either of them over the open Internet.
>
> If you are talking about one to two kilobytes per file, times
> three files, you can:
>
> 1) scp file1 file2 file3 $REMOTE:/dest/dir
> 2) tar -c file1 file2 file3 | ssh $REMOTE tar -C /dest/dir -x
> 3) rsync --whole-file ...
>
> All three options above do use ssh with keys.
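Option two's tar pipe can be dry-run locally by dropping the ssh
hop (the directory and file names below are made up for
illustration):

```shell
# Local dry run of option two's tar pipe, with the ssh hop removed.
# src/, dest/, and file1..file3 are made-up names for illustration.
mkdir -p src dest
printf 'a,1\n' > src/file1
printf 'b,2\n' > src/file2
printf 'c,3\n' > src/file3

# The real version would be:
#   tar -c file1 file2 file3 | ssh $REMOTE tar -C /dest/dir -x
( cd src && tar -c file1 file2 file3 ) | tar -C dest -x
```

The tar-pipe form does the whole batch over one ssh connection,
so there is only a single key handshake per transfer.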
>
> You can run your file-transfer script from cron every minute
> (* * * * * /path/to/script), or write the script to run in the
> background in a sleep loop if you need it even more often.
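The sleep-loop variant can be sketched like this (push() stands in
for the real transfer script; the 1-second interval and three
iterations are only so the sketch terminates when run by hand --
the real loop would be `while :; do ...; done`):

```shell
#!/bin/sh
# Sleep-loop variant for transfers more frequent than cron's
# one-minute floor. push() is a stand-in for the real scp/rsync
# script; the short interval and the iteration cap are only so
# this demonstration terminates.
push() {
    echo "push at $(date +%s)"
}

i=0
while [ "$i" -lt 3 ]; do
    push
    sleep 1
    i=$((i + 1))
done
```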
>
> Options one and two should take about three seconds to transfer
> all files, unless you have a really bad route.
>
> Though the next question is, how atomic are the updates to these
> files? Are you going to get partial files at the destination?
>
> If the files are written to a temporary filename (in the same directory)
> then mv'd into place, then you have an atomic update, no matter the size
> of the files.
>
> Option three writes the update to a temporary filename, then
> moves it into place when the transfer is done (unless you use
> the --inplace option). So the remote end will have an atomic
> update with option three.
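rsync accepts plain local paths, so option three can be exercised
without a remote end at all (file and directory names are made up;
assumes rsync is installed):

```shell
# Local rsync copy. By default rsync writes to a dot-temp file in
# the destination directory and renames it into place when the
# transfer completes, which is what makes the update atomic.
# src.csv and dest/ are made-up names for illustration.
mkdir -p dest
printf 'x,1\n' > src.csv
rsync --whole-file src.csv dest/
```

--whole-file skips rsync's delta algorithm, which is the sensible
choice for tiny files that change completely each minute.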
>
--
====================================
Skip Evans
PenguinSites.com, LLC
503 S Baldwin St, #1
Madison WI 53703
608.250.2720
http://penguinsites.com
------------------------------------
Those of you who believe in
telekinesis, raise my hand.
-- Kurt Vonnegut