
Questions tagged [wget]

wget - a command-line utility to download content non-interactively (it can be called from scripts, cron jobs, terminals without X Window System support, etc.)

6 votes
2 answers
1k views

Today I downloaded a large file using wget, but I accidentally deleted it before it had finished downloading. However, wget continued to download the file until it had finished, but there was no file ...
EmmaV
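On Linux, a deleted file that a process still holds open remains readable through that process's entry in /proc until the process exits. A minimal sketch of the recovery, assuming wget is still running; the descriptor number you copy from is whatever the ls output marks as "(deleted)", not necessarily any particular value:

```shell
# Find a running wget process (Linux-specific; /proc is required).
pid=$(pgrep -x wget | head -n 1)
if [ -n "$pid" ]; then
  # The deleted download target shows up here flagged "(deleted)".
  ls -l "/proc/$pid/fd"
  # cp "/proc/$pid/fd/4" recovered   # replace 4 with the fd shown above
else
  echo "no running wget process found"
fi
```

The copy must happen before wget exits; once the last open descriptor closes, the kernel frees the data for good.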
2 votes
2 answers
205 views

I want to spider plants.usda.gov before it disappears due to budget cuts. However, every wget combination I try ultimately produces a blank result. I also checked on archive.org, and there too the entries ...
slashdottir
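For preserving a whole site, wget's mirroring flags are the usual starting point. A sketch shown as a dry run (the options sit in a variable so they can be audited; drop the leading echo to actually crawl):

```shell
# --mirror = recursion + timestamping; --page-requisites pulls CSS/images;
# --convert-links rewrites links for offline browsing; --wait is politeness.
opts="--mirror --page-requisites --convert-links --adjust-extension --no-parent --wait=1 --random-wait"
echo wget $opts https://plants.usda.gov/
```

If the pages come back blank, the content may be rendered client-side by JavaScript, which wget cannot execute; that would explain archive.org showing the same emptiness.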
0 votes
1 answer
93 views

I'm trying to download a specific sitemap.xml (https://www.irna.ir/sitemap/all/sitemap.xml). The problem is that when you load it, a white page with a header appears for a few seconds ...
Amirali
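An interstitial "white page" before the XML often means the server is gating non-browser clients. A sketch, shown as a dry run, of presenting browser-like headers; if the check actually requires executing JavaScript, wget alone cannot pass it:

```shell
url="https://www.irna.ir/sitemap/all/sitemap.xml"
# A browser User-Agent plus an XML Accept header clears simple UA checks.
echo wget --user-agent="Mozilla/5.0" --header="Accept: application/xml" -O sitemap.xml "$url"
```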
0 votes
2 answers
155 views

I am trying to download 1000 HTML pages recursively from a specific site (https://isna.ir/) with wget (it is part of our course assignment), but it just downloads an index.html file. I tried a ...
Amirali
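When wget stops after index.html, the site's robots.txt forbidding recursion is a frequent culprit, since wget obeys it by default. A sketch, shown as a dry run, of the flags involved; override robots.txt responsibly and only where permitted:

```shell
# -r/-l control recursion and depth; -np stays below the start URL;
# -e robots=off disables the robots.txt check that often blocks recursion.
opts="-r -l 3 -np -e robots=off --adjust-extension --wait=1"
echo wget $opts https://isna.ir/
```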
1 vote
3 answers
1k views

I just installed Debian. I was trying to install ProtonVPN, but I can't pull the .deb file with wget. My system clock is up to date. I also tried adding different servers in the resolv.conf file, but ...
emmet
1 vote
1 answer
121 views

I've moved from Mint to Fedora, and a script I have is no longer functioning: wget -t 1 localhost:52199/MCWS/v1/Playback/PlayPause?Zone=-1&ZoneType=ID returns Failed to send 233 bytes (hostname='...
David Campbell's user avatar
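The unquoted URL is the likely break here: the shell treats the bare & as a background operator, so wget is launched with the URL cut off at Zone=-1 and ZoneType=ID runs as a separate (failing) command. Quoting the URL, shown as a dry run, keeps the query string intact:

```shell
# Single quotes stop the shell from interpreting & and ? in the URL.
url='localhost:52199/MCWS/v1/Playback/PlayPause?Zone=-1&ZoneType=ID'
echo wget -t 1 "$url"
```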
0 votes
1 answer
239 views

So, I want to be able to download a web page in a way similar to what https://archive.is does. Using wget -p -E -k usually produces a decent result - but this result is somewhat hard to handle. For ...
Mikhail Ramendik
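One way to make the `wget -p -E -k` output easier to handle is to confine it to a single directory and tar it into one movable artifact. A sketch shown as a dry run; the target URL is a placeholder:

```shell
# -p page requisites, -E .html extensions, -k rewrite links, -H span hosts
# for requisites served from CDNs; -P collects everything under ./snapshot.
page="https://example.com/article"
echo wget -p -E -k -H -P snapshot "$page"
echo tar czf snapshot.tar.gz snapshot
```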
0 votes
1 answer
61 views

I'm building a pipeline in Snakemake to analyse some data. One of the data files I'm using is provided as supplemental data as part of this publication. The paper is behind a paywall, but I've ...
Whitehot
2 votes
2 answers
220 views

I want to download all files named ${n}x${n} from a directory on a website with wget2 on zsh, where n is the same number value both times, with n from 1 to 6000. I've found that specifying all the ...
XDR
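Rather than expanding 6000 names on the command line, generating a URL list file and handing it to wget2 in one invocation lets it reuse connections. A sketch with a placeholder base URL; the final command is echoed as a dry run:

```shell
# Build one line per ${n}x${n} file, n from 1 to 6000.
base="https://example.org/dir"
seq 1 6000 | while read -r n; do
  printf '%s/%sx%s\n' "$base" "$n" "$n"
done > urls.txt
echo wget2 --input-file=urls.txt   # drop the echo to download
```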
0 votes
0 answers
75 views

URL = "http://downloads.sourceforge.net/sourceforge/gptfdisk" I can access this URL in my browser, but it gets redirected to a newer URL. Problem: when I give this command to get ...
Viraj Patel
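wget follows HTTP redirects on its own; the usual extra step for download hosts like SourceForge is naming the saved file from the final response rather than the original URL. A sketch shown as a dry run:

```shell
# --content-disposition takes the filename from the final server's
# Content-Disposition header instead of the redirecting URL.
url="http://downloads.sourceforge.net/sourceforge/gptfdisk"
echo wget --content-disposition "$url"
```

If the "redirect" lands on an HTML mirror-picker page instead of the file, the direct mirror URL from the browser's download is what wget needs.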
1 vote
1 answer
1k views

I'm trying to find a way to download the latest released .deb file from a GitHub releases page, for example: https://github.com/grafviktor/goto/releases. The solution would first try to curl the page ...
Ilgar
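GitHub's REST API exposes a releases/latest endpoint whose JSON lists each asset's browser_download_url, which avoids scraping the HTML page. A sketch using grep/cut instead of jq; the parsing is demonstrated on a sample payload (the version and asset name in it are made up) so it can be checked offline, with the live fetch left as a comment:

```shell
repo="grafviktor/goto"
api="https://api.github.com/repos/$repo/releases/latest"
# For real use: json=$(wget -qO- "$api")
json='{"assets":[{"browser_download_url":"https://github.com/grafviktor/goto/releases/download/v1.0/goto_1.0_amd64.deb"}]}'
# Pull the first .deb asset URL out of the JSON.
url=$(printf '%s' "$json" | grep -o '"browser_download_url": *"[^"]*\.deb"' | cut -d'"' -f4 | head -n 1)
echo "$url"
# wget "$url"
```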
2 votes
1 answer
114 views

I have a FOSS project whose web site is generated by asciidoc and some custom scripts as a horde (thousands) of static files locally in the source files' repo, copied into another workspace and ...
Jim Klimov
1 vote
1 answer
87 views

I am trying to download files from a URL that returns a save dialog box. I am using wget, but it is not working. I am using the following command: wget -ci LinkDownloadHw.csv. Below is a sample of the links used in ...
THIAGO CARLOS LOPES RIBEIRO
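A browser "save dialog" usually means the server sends a Content-Disposition header, which wget ignores unless told otherwise. A sketch shown as a dry run; the CSV written here holds a placeholder link standing in for the real list:

```shell
# Placeholder input file with one hypothetical download link per line.
printf '%s\n' "https://example.com/get?file=1" > LinkDownloadHw.csv
# --content-disposition makes wget honor the server-suggested filename.
echo wget -c --content-disposition -i LinkDownloadHw.csv
```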
1 vote
1 answer
126 views

Is there some sort of command, like echotonet "this" or cat file.txt > echonet, that would allow one to easily paste text-document data to a service like paste.debian.net? I often use ...
user3450548
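For quick "echo to the net" pastes, termbin.com accepts raw text over a plain TCP connection and replies with the paste's URL, so netcat alone covers it. The network command is echoed as a dry run here; for paste.debian.net specifically, the pastebinit package wraps that site's interface:

```shell
printf 'some text\n' > file.txt
# Pipe any text to termbin over TCP port 9999; it prints back a URL.
echo 'cat file.txt | nc termbin.com 9999'
# For paste.debian.net:
#   pastebinit -b https://paste.debian.net file.txt
```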
4 votes
1 answer
1k views

This question is a follow-up to How to download multiple files simultaneously with wget? Similar to that question, I need to download many files. The accepted answer there recommends the use of ...
sneumann
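A common pattern for parallel wget downloads is xargs with its -P option, which pools a fixed number of worker processes over a URL list. A sketch with placeholder URLs; the download pipeline itself is echoed as a dry run:

```shell
# Placeholder URL list, one per line.
printf '%s\n' "https://example.com/a" "https://example.com/b" > urls.txt
# -n 1 gives each wget one URL; -P 4 keeps up to four running at once.
echo 'xargs -n 1 -P 4 wget -q < urls.txt'   # drop the echo to download
```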
