CloudScraper is a tool to spider and scrape targets in search of cloud resources. Plug in a URL and it will spider the site, searching the source of each spidered page for strings such as ‘s3.amazonaws.com’, ‘windows.net’ and ‘digitaloceanspaces’. AWS, Azure and DigitalOcean resources are currently supported.

This tool was inspired by a recent talk by Bryce Kunz. The talk, Blue Cloud of Death: Red Teaming Azure, walks through some of the lesser-known information disclosures beyond the ever-common S3 bucket.

Pre-Requisites:

Non-Standard Python Libraries:

  • requests
  • rfc3987
  • termcolor

Created with Python 3.6
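The non-standard libraries above can be installed with pip. A minimal setup fragment, assuming a Python 3 environment with pip3 on the path (the source gives no pinned versions, so none are assumed):

```shell
# Install the non-standard dependencies listed above
pip3 install requests rfc3987 termcolor
```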

Usage:

usage: CloudScraper.py [-h] [-v] [-p PROCESSES] [-d DEPTH] [-u URL] [-l TARGETLIST]

optional arguments:
  -h, --help     show this help message and exit
  -u URL         Target scope
  -d DEPTH       Max depth of links. Default: 5
  -l TARGETLIST  Location of a text file of line-delimited targets
  -v             Verbose output
  -p PROCESSES   Number of processes to execute in parallel. Default: 2

example: python3 CloudScraper.py -u https://rottentomatoes.com
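The core idea of the page-source search can be sketched as follows. This is an illustrative minimal example, not CloudScraper's actual implementation: the provider names and regex patterns below are assumptions based only on the marker strings named above.

```python
import re

# Marker strings CloudScraper searches for, per the description above;
# these regexes are illustrative approximations, not the tool's exact ones.
CLOUD_PATTERNS = {
    "AWS S3": re.compile(r"[\w.-]+\.s3\.amazonaws\.com"),
    "Azure": re.compile(r"[\w.-]+\.windows\.net"),
    "DigitalOcean": re.compile(r"[\w.-]+\.digitaloceanspaces\.com"),
}

def find_cloud_resources(page_source):
    """Return {provider: [hostnames]} for cloud-storage hosts found in HTML."""
    hits = {}
    for provider, pattern in CLOUD_PATTERNS.items():
        matches = pattern.findall(page_source)
        if matches:
            hits[provider] = sorted(set(matches))
    return hits

# Example: scan a static snippet instead of a live spidered page
html = '<img src="https://assets.example.s3.amazonaws.com/logo.png">'
print(find_cloud_resources(html))
# → {'AWS S3': ['assets.example.s3.amazonaws.com']}
```

In the real tool this check runs over the source of every page the spider fetches, up to the configured depth.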

