I've faced two of them today: a site containing the MDC pictures, and a site containing MP3s of the whole Qur'an recited by Mashari.
On the page containing all the links, I could have gone through them one by one, choosing 'save item as' on each. Instead, I did the following.
In Firefox, I right-clicked on the page and chose 'View Page Info'. In the 'Media' tab, I selected all the images and saved the link list to a file called list.txt.
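(As an aside, something like the following could build a similar list without leaving the terminal. This is only a sketch: it assumes the page embeds absolute image URLs, and http://example.com/pictures.html is just a placeholder.)

# Fetch the page quietly to stdout and pull out absolute image URLs:
wget -qO- 'http://example.com/pictures.html' \
    | grep -oE 'http[^"]+\.(jpg|jpeg|png|gif)' > list.txt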
Then I created a new folder, put the file in it, typed the following simple command and pressed Enter:
for File in $(cat list.txt); do wget "$File"; done

And I let it download all the links mentioned in the file.
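Incidentally, wget can read a list of URLs by itself with its -i option, so the loop isn't strictly necessary:

wget -i list.txt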
As for the MP3s, I went to the 'Links' tab and likewise saved all the links to a file, also called list.txt, in another folder. But this time there were two links for each Sura, one MP3 and one Zip. I did this to keep the zip links only:
grep -i zip list.txt > tmp.txt && mv tmp.txt list.txt

(Redirecting grep's output straight back into list.txt would truncate the file before grep gets to read it, so the result goes through a temporary file first.)
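For illustration, the list at this point looked something like this (these URLs are made up):

http://example.com/quran/001.mp3
http://example.com/quran/001.zip
http://example.com/quran/002.mp3
http://example.com/quran/002.zip

and the grep keeps only the .zip lines.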
Then I issued the same command to start downloading. After that I did:

for File in *.zip; do unzip "$File"; rm "$File"; done

to unzip the files and delete each .zip once it's extracted. (Globbing over *.zip is safer than looping over the output of ls, and it leaves any non-zip files alone.)
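unzip can also handle the wildcard itself if you quote it so the shell doesn't expand it, though then the archives need a separate rm *.zip afterwards:

unzip '*.zip'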
After that I did:
play *.mp3

to play them all :D (yeah, MP3s from the command line :D)
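play ships with SoX; if it's missing, or your build lacks MP3 support (that varies by distro), mpg123 is a common alternative:

mpg123 *.mp3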
The final download script for the MP3s was:

for File in $(cat list.txt)
do
    wget "$File"
    unzip "${File##*/}"
    rm "${File##*/}"
done

("${File##*/}" strips the URL down to its base name, which is what wget saves the download as, so unzip and rm find the local file.) Like that, once a file is downloaded it is unzipped and deleted automatically.
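A slightly more defensive variant of the same idea (just a sketch) reads the list line by line and only deletes an archive after it unzipped successfully:

while read -r Url
do
    wget "$Url" && unzip "${Url##*/}" && rm "${Url##*/}"
done < list.txt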
I am writing this to let people who use non-Unix systems know how simple and handy Linux can be.