Download_parallel

Let’s say we have a webpage (ipaudio3.club) where you can stream all of Stephen King’s books. And let’s say you want to download them, not stream them.

So, you view the page source and find out which URL is used for the different parts of the book you are interested in.

It’s something like this:
https://ipaudio3.club/wp-content/STEPHEN/<title>01.mp3

and the files are numbered 01 to 14

# first problem
# seq 1 14
# this will give you 1 2 3 ... 14, but the files are numbered 01, 02, ... with a leading zero

# solution 
seq -f '%02g' 1 14
# will produce 01 02 03 ... 14
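
A quick way to convince yourself of the padding, trying an arbitrary sub-range:

seq -f '%02g' 9 11
# 09
# 10
# 11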

# second problem
# how do we send these numbers to curl?

# solution: use `xargs`

# <url> is the base URL you found in the page source
seq -f '%02g' 1 14 | xargs -I{} -n1 -P0 curl '<url>/{}.mp3' --output '{}.mp3'

# This will download the files 01-14 and save them as 01.mp3 02.mp3 and so on.
# It will be done in parallel.
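
If you first want to see what will actually be run, a dry run with echo in place of curl works (here <url> is still just a placeholder for your base URL):

seq -f '%02g' 1 14 | xargs -I{} echo curl '<url>/{}.mp3' --output '{}.mp3'
# prints the 14 curl commands, one per number, without downloading anything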

-I{} turns {} into a placeholder for the input (the numbers). Wherever the marker {} appears in the command, xargs substitutes the value.

-n1 sets the number of arguments passed to the command at a time to 1.

-P0 tells xargs to run as many processes in parallel as possible.
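
If opening 14 connections at once feels like too much, you can cap the parallelism with a fixed number instead; 4 below is an arbitrary choice:

seq -f '%02g' 1 14 | xargs -I{} -P4 curl '<url>/{}.mp3' --output '{}.mp3'
# at most 4 downloads run at the same time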

Note

On combining -n with -I, xargs prints a warning:
warning: options --max-args and --replace are mutually exclusive

So it seems you shouldn’t use -n1 when using -I{}: -I already consumes one input line per command, which makes -n1 redundant.
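
The download command from above can therefore simply drop -n1; this is just the earlier command with the redundant flag removed:

seq -f '%02g' 1 14 | xargs -I{} -P0 curl '<url>/{}.mp3' --output '{}.mp3'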

# -n2 passes two numbers per call; the trailing _ fills $0, and the quotes keep > from being treated as a redirect
seq 1 9 | xargs -n2 bash -c 'echo "$1 -> $2"' _
# -I% runs the command once per input line: -> 1, -> 2, ... -> 9
seq 1 9 | xargs -I% echo '-> %'