Anyone who has worked out of a coffee shop and had to upload large files to a remote system has probably experienced the initial burst of upload speed followed by a disheartening throttling of bandwidth down to a crawl.
Since I write a lot of software that runs on the JVM, I produce some rather large JAR files that need to be copied to a remote Puppet host, and I run into this bandwidth throttling often - both on public wifi and on my cellular data plan. Fortunately there's an easy solution if you're working on a Unix platform - split the file and run multiple concurrent uploads, so each upload can take advantage of the initial bandwidth burst.
The script below isn't perfect, but it gets the job done. It limits the concurrent uploads to four, and a new batch of uploads starts only once the current batch has completed. I based it on an article about running multiple processes in bash.
#!/bin/bash
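# split a file into parts, upload the parts to a remote host in parallel,
# and reassemble them on the remote side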
FILE=$1
HOSTNAME=$2
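# remote username, used for the reassembly paths below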
USER=cfeduke
SPLIT_SIZE=3145728 # bytes, 3 MiB (1024*1024*3)
SCP_MAX_COUNT=4 # maximum number of concurrent scp uploads
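# copy the file to /tmp and work from there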
cp "$FILE" /tmp
cd /tmp
FILE=$(basename "$FILE")
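# split into 3 MiB parts named file-part-aa, file-part-ab, ...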
split -b "$SPLIT_SIZE" "$FILE" file-part-
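# upload the parts in batches, waiting for each batch to finish before starting the next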
scp_count=0
for part in file-part-*; do
scp -B "$part" $HOSTNAME:~ &
scp_count=$(($scp_count+1))
if [ "$scp_count" -gt $SCP_MAX_COUNT ]; then
wait
scp_count=0
fi
done
if [ "$scp_count" -gt 0 ]; then
wait
fi
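# reassemble the parts on the remote host and remove them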
ssh -o "BatchMode yes" -t "$HOSTNAME" "cat /home/$USER/file-part-* > /home/$USER/$FILE && rm /home/$USER/file-part-*"
rm file-part-*
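For example - with a made-up script name, JAR path, and hostname, and assuming md5sum is available on both ends - an upload and a quick integrity check might look like this:

chmod +x parallel-scp.sh
./parallel-scp.sh target/my-app.jar puppet.example.com

# compare checksums to confirm the remote file was reassembled correctly
md5sum target/my-app.jar
ssh puppet.example.com "md5sum my-app.jar"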