Commands
URLs...
Link(s) to the OD(s) you would like to download content from.
*This is not needed if you are using -i, --input-file.
-h, --help
Prints help information.
-U, --update
Update to the latest version.
-V, --version
Prints version information.
-v, --verbose
Enables verbose output.
--test
Run a scrape test without downloading or recording. Deactivates Downloader & Recorder.
--scan
Scan ODs
Scan ODs displaying their content to the terminal. A shortcut to enabling
--verbose & --test. Deactivates Downloader & Recorder.
-d, --depth
Specify the maximum depth for recursive scraping.
How deep to look into a directory (and its subdirectories) to retrieve files. Can also be used to traverse subpages
(ODs with previous & next buttons). Default: 20. A depth of 1 is the current directory.
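To illustrate how depth works, here is a minimal sketch using a toy in-memory tree (hypothetical helper, not Zeiver's implementation):

```python
# Toy directory tree standing in for a remote OD (illustration only).
TREE = {
    "/": ["/docs/", "a.txt"],
    "/docs/": ["/docs/img/", "b.txt"],
    "/docs/img/": ["c.png"],
}

def scrape(path: str, depth: int, found: list) -> None:
    """Depth 1 collects only the current directory's files;
    each nested directory costs one level of depth."""
    if depth < 1:
        return
    for entry in TREE.get(path, []):
        if entry.endswith("/"):
            scrape(entry, depth - 1, found)
        else:
            found.append(entry)

files = []
scrape("/", 1, files)
print(files)  # ['a.txt'] — depth 1 stays in the current directory
```

With depth 2 the sketch would also pick up `b.txt`, but `c.png` would still be out of reach.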
-A, --accept
Files to accept for scraping, downloading, & recording. (Regex)
Using Regex, specify which files to accept for scraping, recording, & downloading. Only the files
that match the regex will be accepted for scraping, downloading, & recording.
This option takes precedence over --reject, -R.
Ex: zeiver -A "(mov|mp3|lunchbox_pic1\.jpg|(pic_of_me.gif))"
-R, --reject
Files to reject for scraping, downloading, & recording. (Regex)
Using Regex, specify which files to reject for scraping, downloading, & recording. Only the files
that match the regex will be rejected for scraping, downloading, & recording. --accept, -A
takes precedence over this option.
Ex: zeiver -R "(jpg|png|3gp|(pic_of_me.gif))"
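The accept/reject precedence described above can be sketched as follows (a hypothetical filter, not Zeiver's actual code):

```python
import re

def is_accepted(filename, accept=None, reject=None):
    """Mirror the -A/-R semantics: when --accept is set it takes
    precedence, and --reject is only consulted in its absence."""
    if accept is not None:
        return re.search(accept, filename) is not None
    if reject is not None:
        return re.search(reject, filename) is None
    return True  # no filters: everything is accepted

print(is_accepted("song.mp3", accept=r"(mov|mp3)"))       # True
print(is_accepted("photo.png", reject=r"(jpg|png|3gp)"))  # False
```

Note that Zeiver itself is written in Rust; this Python snippet only illustrates the filtering logic.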
--download-only
Use Downloader only
Use only the Downloader to download all resources from links provided by --input-file, -i or the
command line. This option takes precedence over all recorder and scraper options.
--record
Activates the Recorder
Enables the Recorder which saves the scraped links to a file. Disabled by default.
*Option cannot be used with --record-only.
--record-only
Save the links only
After scraping, save the links to the files instead of downloading them. *Disables the Downloader and enables the Recorder when this option is active.
--output-record
Name of record file
Changes the name of the record file. This file is where the recorder will store the links.
Default: URL_Records.txt
--no-stats
Prevents Recorder from creating _stat_ files.
The Recorder will no longer create _stat_ files when saving scraped links to a file. Default: false
Ex: stat_URL_Record.txt
--no-stats-list
Prevents Recorder from writing file names to stat files
Stat files include the names of all files in alphabetical order alongside the number of file extensions. This option prevents the Recorder from adding file names to stat files.
-i, --input-file
Read URLs from a local or external file
Read URLs from a file to be sent to the Scraper. *Each line represents a URL to an OD.
Ex: zeiver -i "./dir/urls.txt"
--input-record
Read URLs from a file containing file paths and create a stats file.
Read URLs from an input file which contains links to other files and create a stats file based on the results. This option is for those who have a file filled with random unorganized links to a bunch of other files and want to take advantage of Zeiver's Recorder module.
*Each line represents a URL to a file. Activates Recorder. Valid with --verbose,
--output, --output-record, --no-stats-list.
-o, --output
Save Directory.
The local directory path to save files. Files saved by the Recorder are also stored here.
Default: ./
Ex: zeiver -o "./downloads/images/dir"
-c, --cuts
Ignores a specified number of remote directories, preventing them from being created locally.
*Only available when downloading. Default: 0
Ex: URL: example.org/pub/xempcs/other/pics
Original Save Location: ./pub/xempcs/other/pics
zeiver --cuts 2 www.example.org/pub/xempcs/other/pics
New Save Location: ./other/pics
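The example above can be sketched as a small path helper (hypothetical, not Zeiver's code):

```python
from urllib.parse import urlparse

def local_save_path(url, cuts=0, output="."):
    """Drop the first `cuts` remote directories from the URL path,
    mirroring the remainder under the output directory."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    return "/".join([output] + parts[cuts:])

# With two cuts, 'pub' and 'xempcs' are dropped:
print(local_save_path("http://www.example.org/pub/xempcs/other/pics", cuts=2))
# → ./other/pics
```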
--no-dirs
Do not create a hierarchy of directories structured the same as the URL the file came from. All files will be saved to the current output directory instead. *Only available when downloading.
--print-headers
Prints all Response Headers to the terminal
Prints all available Response Headers received from each URL to the terminal. This option takes precedence over all other options!
--print-header
Prints a Response Header to terminal
Prints a specified Response Header to the terminal for each URL. This option takes precedence over all other options.
--print-pages
Prints the HTML Document to the terminal
Prints the HTML Document of each URL to the terminal for viewing. Allows you to see in the eyes of Zeiver. This option takes precedence over all other options.
--https-only
Use HTTPS only
Restricts Zeiver to sending all requests through HTTPS connections only.
-H, --headers
Sets the default headers to use for every request. *Must use the 'header$value' format. Can be used multiple times!
Ex: zeiver -H "accept$ text/html, application/xhtml+xml, image/webp" -H "content-length$128"
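The 'header$value' format splits at the first '$', as this sketch shows (an illustrative parser, not Zeiver's own):

```python
def parse_header(arg):
    """Split a 'header$value' argument at the first '$' and
    trim surrounding whitespace from both halves."""
    name, _, value = arg.partition("$")
    return name.strip(), value.strip()

print(parse_header("content-length$128"))
# → ('content-length', '128')
```

Because only the first '$' is significant, values themselves may safely contain commas and spaces, as in the accept-header example above.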
--auth
The Basic Authentication to use.
The basic authentication needed to use a closed directory. This is a shortcut for using the
Authorization: Basic header. Must use the username:password format.
Ex: zeiver --auth "demo:say it"
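Under the hood, HTTP Basic authentication base64-encodes the 'username:password' pair into the Authorization header, which can be sketched as (illustration only; Zeiver's HTTP client does this internally):

```python
import base64

def basic_auth_header(credentials):
    """Build the value for 'Authorization: Basic ...' from a
    'username:password' string per HTTP Basic authentication."""
    token = base64.b64encode(credentials.encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("demo:say it"))
# → Basic ZGVtbzpzYXkgaXQ=
```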
-u
The User Agent header to use. Default: Zeiver/VERSION
-t, --tries
The amount of times to retry a failed connection/request. Default: 20
-w, --wait
Wait between each HTTP request for scraping.
Wait a specified number of seconds before sending each scraping request.
--wait-download
Wait between each HTTP request for download.
Wait a specified number of seconds before sending each download request.
--retry-wait
The wait time between each failed request.
Whenever a request fails, Zeiver will wait the specified number of seconds before retrying.
Default: 10
--random-wait
Wait a random number of seconds between scraping requests.
The wait time before each scraping request will vary between 0.5 * --wait, -w (inclusive)
and 1.5 * --wait, -w (exclusive).
--random-download
Wait a random number of seconds between download requests.
The wait time before each download request will vary between 0.5 * --wait-download (inclusive)
and 1.5 * --wait-download (exclusive).
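The jitter applied by these options can be sketched as (a hypothetical helper, not Zeiver's source):

```python
import random

def jittered_wait(base_wait):
    """Pick a wait time in [0.5 * base_wait, 1.5 * base_wait),
    mimicking the --random-wait / --random-download behavior."""
    return random.uniform(0.5 * base_wait, 1.5 * base_wait)

# With --wait 4, each pause falls between 2.0 and 6.0 seconds:
print(2.0 <= jittered_wait(4) <= 6.0)
```

Randomizing the delay like this makes the request pattern look less mechanical to rate limiters than a fixed interval would.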
-T, --timeout
Adds a request timeout for a specified number of seconds. 0 means to never timeout.
Default: 40
-r, --redirects
Maximum redirects to follow. Default: 10
--proxy
The proxy to use.
Ex: zeiver --proxy "socks5://192.168.1.1:9000"
--proxy-auth
The basic authentication needed to use the proxy. *Must use the 'username:password' format.
--all-certs
Accepts all certificates (Beware!)
Accepts all certificates even invalid ones. Use this option at your own risk!