Termux ID: Information Gathering

Tilt, the Terminal IP Lookup Tool, is a simple open-source tool implemented in Python for passive IP/host reconnaissance. It is handy as a first reconnaissance pass and for host data retrieval.

Features
  • Host to IP conversion
  • IP to Host conversion
  • DNS to IPs
  • GeoIP Translation
  • Extensive information gathering through host name
    • Whois with:
      • Registrar info
      • Dates
      • Name Server
      • SiteStatus
      • Owner information
      • Additional data
    • Sub domains
      • Percentage of access
    • Extensive Name Server
    • SOA Records
    • DNS Records with extensive data
  • Reverse IP Lookup
    • Extensive reverse IP lookup, finding other hosts with different names served from the same machine
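The basic host-to-IP and IP-to-host conversions listed above are thin wrappers around the system resolver. A minimal sketch using Python's standard socket module (the function names here are illustrative, not Tilt's actual API):

```python
import socket

def host_to_ip(hostname):
    """Forward lookup: resolve a host name to its IPv4 address."""
    return socket.gethostbyname(hostname)

def ip_to_host(ip):
    """Reverse lookup: map an IP back to a host name via its PTR record."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return None  # no PTR record published for this IP

print(host_to_ip("localhost"))  # typically 127.0.0.1
```

GeoIP translation and whois data come from external services rather than the resolver, which is why this kind of lookup stays passive from the target's point of view.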

Download and install
You can download the latest version by cloning Tilt from the Git repository:
git clone https://github.com/AeonDave/tilt.git

Dependencies
Python 2.7.3, of course.
As of version 0.6, Tilt needs a library to parse HTML, so you have to install BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/). But don't worry! It's easy!
pip install beautifulsoup4
or
easy_install BeautifulSoup4
or you just simply download the library and then
cd BeautifulSoup
python setup.py install

Usage
python tilt.py [Target] [Options] [Output]

Target:
-t, --target target Target URL (e.g. "www.site.com")
Options:
-h, --help Show basic help message
-v, --version Show program's version number
-e, --extensive Perform extensive IP lookup
-r, --reverse Perform a reverse IP lookup
-g, --google Perform a search on Google
-u, --update Update program from repository
Output:
-o, --output file Print log on a file

Examples:
python tilt.py -t google.com -r
python tilt.py -t 8.8.8.8
python tilt.py -t google.com -e -r -o file.log
python tilt.py -u


Tilt - Terminal IP Lookup Tool


wig is a web application information gathering tool, which can identify numerous Content Management Systems and other administrative applications.
The application fingerprinting is based on checksums and string matching of known files for different versions of CMSes. This results in a score being calculated for each detected CMS and its versions. Each detected CMS is displayed along with the most probable version(s) of it. The score calculation is based on weights and the amount of "hits" for a given checksum.
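The checksum-matching idea can be sketched as follows. The fingerprint entries and weights below are invented for illustration and are not wig's actual database or data layout:

```python
import hashlib
from collections import defaultdict

# Hypothetical fingerprint database:
# static file path -> {md5 checksum: [(cms, version, weight), ...]}
FINGERPRINTS = {
    "/misc/drupal.js": {
        "5ace7d8f2a07a3dc3d858a90b4ee1e4b": [("Drupal", "7.38", 2)],
        "0e3ac38e1f4f9ad9aee316abc83e6b09": [("Drupal", "7.32", 2)],
    },
}

def score_response(path, body):
    """Hash a fetched file and credit every (cms, version) whose known
    checksum matches; scores accumulate across many such hits."""
    scores = defaultdict(int)
    digest = hashlib.md5(body).hexdigest()
    for md5, candidates in FINGERPRINTS.get(path, {}).items():
        if md5 == digest:
            for cms, version, weight in candidates:
                scores[(cms, version)] += weight
    return dict(scores)
```

After scanning many files, the versions with the highest accumulated score are reported as the most probable ones.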
wig also tries to guess the operating system on the server based on the 'server' and 'x-powered-by' headers. A database containing known header values for different operating systems is included in wig, which allows wig to guess Microsoft Windows versions and Linux distribution and version.
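The header-based OS guess reduces to a lookup table. A sketch with a few illustrative entries (the mappings are common defaults, but wig's real database is larger and structured differently):

```python
# Illustrative 'Server' header -> OS mapping; not wig's actual data.
HEADER_OS_DB = {
    "Microsoft-IIS/7.5": "Microsoft Windows Server 2008 R2",
    "Microsoft-IIS/8.5": "Microsoft Windows Server 2012 R2",
    "Apache/2.4.7 (Ubuntu)": "Ubuntu 14.04",
}

def guess_os(headers):
    """Return an OS guess based on the 'server' / 'x-powered-by'
    response headers, or None if the value is unknown."""
    for name in ("server", "x-powered-by"):
        value = headers.get(name)
        if value in HEADER_OS_DB:
            return HEADER_OS_DB[value]
    return None
```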

Requirements
wig is built with Python 3, and is therefore not compatible with Python 2.

Installation
wig can be run from the command line or installed with distutils.

Command line
$ python3 wig.py example.com

Usage in script
Install with
$ python3 setup.py install
and then wig can be imported from any location as such:
>>> from wig.wig import wig
>>> w = wig(url='example.com')
>>> w.run()
>>> results = w.get_results()

How it works
The default behavior of wig is to identify a CMS and exit after detecting its version. This is done to limit the amount of traffic sent to the target server. This behavior can be overridden with the '-a' flag, in which case wig tests all the known fingerprints. As some application setups do not keep files and resources at their default locations, wig can also fetch all the static resources it encounters during its scan; this is done with the '-c' option. The '-m' option tests all fingerprints against all fetched URLs, which is helpful if the default locations have been changed.
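The '-m' cross-check can be pictured as matching every known fingerprint string against every cached response body, rather than only against the file's default URL. A toy sketch (data structures invented for illustration):

```python
def match_all(fingerprints, fetched):
    """Try every (cms, version, marker-string) fingerprint against every
    fetched URL's body -- useful when files have been moved from their
    default locations, at the cost of more matching work."""
    hits = []
    for url, body in fetched.items():
        for cms, version, needle in fingerprints:
            if needle in body:
                hits.append((url, cms, version))
    return hits
```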

Help Screen
usage: wig.py [-h] [-l INPUT_FILE] [-q] [-n STOP_AFTER] [-a] [-m] [-u] [-d]
[-t THREADS] [--no_cache_load] [--no_cache_save] [-N]
[--verbosity] [--proxy PROXY] [-w OUTPUT_FILE]
[url]

WebApp Information Gatherer

positional arguments:
url The url to scan e.g. http://example.com

optional arguments:
-h, --help show this help message and exit
-l INPUT_FILE File with urls, one per line.
-q Set wig to not prompt for user input during run
-n STOP_AFTER Stop after this amount of CMSs have been detected. Default:
1
-a Do not stop after the first CMS is detected
-m Try harder to find a match without making more requests
-u User-agent to use in the requests
-d Disable the search for subdomains
-t THREADS Number of threads to use
--no_cache_load Do not load cached responses
--no_cache_save Do not save the cache for later use
-N Shortcut for --no_cache_load and --no_cache_save
--verbosity, -v Increase verbosity. Use multiple times for more info
--proxy PROXY Tunnel through a proxy (format: localhost:8080)
-w OUTPUT_FILE File to dump results into (JSON)

Example of run:
$ python3 wig.py example.com

wig - WebApp Information Gatherer


Redirected to http://www.example.com
Continue? [Y|n]:
Scanning http://www.example.com...
_____________________________________________________ SITE INFO _____________________________________________________
IP Title
256.256.256.256 PAGE_TITLE

______________________________________________________ VERSION ______________________________________________________
Name Versions Type
Drupal 7.38 CMS
nginx Platform
amazons3 Platform
Varnish Platform
IIS 7.5 Platform
ASP.NET 4.0.30319 Platform
jQuery 1.4.4 JavaScript
Microsoft Windows Server 2008 R2 OS

_____________________________________________________ SUBDOMAINS ____________________________________________________
Name Page Title IP
http://m.example.com:80 Mobile Page 256.256.256.257
https://m.example.com:443 Secure Mobil Page 256.256.256.258

____________________________________________________ INTERESTING ____________________________________________________
URL Note Type
/test/ Test directory Interesting
/login/ Login Page Interesting

_______________________________________________ PLATFORM OBSERVATIONS _______________________________________________
Platform URL Type
ASP.NET 2.0.50727 /old.aspx Observation
ASP.NET 4.0.30319 /login/ Observation
IIS 6.0 http://www.example.com/templates/file.css Observation
IIS 7.0 https://www.example.com/login/ Observation
IIS 7.5 http://www.example.com Observation

_______________________________________________________ TOOLS _______________________________________________________
Name Link Software
droopescan https://github.com/droope/droopescan Drupal
CMSmap https://github.com/Dionach/CMSmap Drupal

__________________________________________________ VULNERABILITIES __________________________________________________
Affected #Vulns Link
Drupal 7.38 5 http://cvedetails.com/version/185744

_____________________________________________________________________________________________________________________
Time: 11.3 sec Urls: 310 Fingerprints: 37580


wig - WebApp Information Gatherer


Vanquish is a Kali Linux based Enumeration Orchestrator built in Python. It leverages the open-source enumeration tools on Kali to perform multiple active information gathering phases. The results of each phase are fed into the next phase to identify vulnerabilities that could be leveraged for a remote shell.


Vanquish Features
So what is so special about Vanquish compared to other enumeration scripts?
  1. Multi-threaded – Runs multiple commands and scans multiple hosts simultaneously.
  2. Configurable – All commands are configured in a separate .ini file for ease of adjustment.
  3. Multiphase – Optimized to run the fastest enumeration commands first in order to get actionable results as quickly as possible.
  4. Intelligent – Feeds the findings from one phase into the next in order to uncover deeper vulnerabilities.
  5. Modular – New attack plans and command configurations can easily be built for fit-for-purpose enumeration orchestration.
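The combination of an .ini command list and a thread pool can be sketched like this. The section name, option keys, and `{host}` placeholder are invented for illustration and are not Vanquish's actual config format:

```python
import configparser
import subprocess
from concurrent.futures import ThreadPoolExecutor

def load_phase_commands(ini_text, phase):
    """Read the command templates for one phase from an .ini-style config."""
    config = configparser.ConfigParser()
    config.read_string(ini_text)
    return list(config[phase].values())

def run_phase(commands, hosts, pool_size=8):
    """Fill in each command template per host and run them concurrently,
    returning each command's stdout in submission order."""
    jobs = [cmd.format(host=h) for h in hosts for cmd in commands]
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        run = lambda c: subprocess.run(
            c, shell=True, capture_output=True, text=True).stdout
        return list(pool.map(run, jobs))
```

Feeding one phase's output files into the next phase's command templates is what makes the orchestration "intelligent" in the sense described above.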

Getting Started
Vanquish can be installed on Kali Linux using the following commands:
git clone https://github.com/frizb/Vanquish
cd Vanquish
python Vanquish2.py -install
vanquish --help


Once Vanquish is installed you can scan hosts, leveraging best-of-breed Kali Linux tools:
echo 192.168.126.133 >> test.txt
vanquish -hostFile test.txt -logging
echo review the results!
cd test
cd 192_168_126_133
ls -la

What Kali Tools does Vanquish leverage?
| NMap | Hydra | Nikto | Metasploit |
| Gobuster | Dirb | Exploitdb | Nbtscan |
| Ntpq | Enum4linux | Smbclient | Rpcclient |
| Onesixtyone | Sslscan | Sslyze | Snmpwalk |
| Ident-user-enum | Smtp-user-enum | Snmp-check | Cisco-torch |
| Dnsrecon | Dig | Whatweb | Wafw00f |
| Wpscan | Cewl | Curl | Mysql | Nmblookup | Searchsploit |
| Nbtscan-unixwiz | Xprobe2 | Blindelephant | Showmount |

Running Vanquish
  • CTRL + C
    CTRL + C to exit the current enumeration phase and skip to the next phase (helpful if a command is taking too long).
  • CTRL + Z
    CTRL + Z to exit Vanquish.
  • Resume Mode
    Vanquish will skip running a command again if it sees that its output files already exist.
  • Re-run an enumeration command
    If you want to re-execute a command, delete its output files (.txt, .xml, .nmap, etc.) and run Vanquish again.
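Resume mode boils down to an existence check on the expected output files before each command runs. A minimal sketch (the extension list mirrors the ones mentioned above; the function name is illustrative):

```python
import os

def already_done(output_base, extensions=(".txt", ".xml", ".nmap")):
    """Resume-mode check: report whether any output file for this command
    already exists, in which case the command can be skipped."""
    return any(os.path.exists(output_base + ext) for ext in extensions)
```

Deleting the output files is therefore exactly what forces a re-run.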

Commandline Arguments
Command Line Arguments
usage: vanquish [-h] [-install] [-outputFolder folder] [-configFile file]
[-attackPlanFile file] [-hostFile file] [-workspace workspace]
[-domain domain] [-dnsServer dnsServer] [-proxy proxy]
[-reportFile report] [-noResume] [-noColor]
[-threadPool threads] [-phase phase] [-noExploitSearch]
[-benchmarking] [-logging] [-verbose] [-debug]

Vanquish is a Kali Linux based Enumeration Orchestrator.

optional arguments:
-h, --help show this help message and exit
-install Install Vanquish and its requirements
-outputFolder folder output folder path (default: name of the host file)
-configFile file configuration ini file (default: config.ini)
-attackPlanFile file attack plan ini file (default: attackplan.ini)
-hostFile file list of hosts to attack (default: hosts.txt)
-workspace workspace Metasploit workspace to import data into (default: is
the host filename)
-domain domain Domain to be used in DNS enumeration (default:
megacorpone.com)
-dnsServer dnsServer DNS server option to use with Nmap DNS enumeration.
Reveals the host names of each server (default: )
-proxy proxy Proxy server option to use with scanning tools that
support proxies. Should be in the format of ip:port
(default: )
-reportFile report filename used for the report (default: report.txt)
-noResume do not resume a previous session
-noColor do not display color
-threadPool threads Thread Pool Size (default: 8)
-phase phase only execute a specific phase
-noExploitSearch disable searchsploit exploit searching
-benchmarking enable benchmark reporting on the execution time of
commands (exports to benchmark.csv)
-logging enable verbose and debug data logging to files
-verbose display verbose details during the scan
-debug display debug details during the scan

Custom Attack Plans
GoBuster Max
GoBuster Max is an attack plan that will run all the web application content detection dictionaries against your targets.
Vanquish -hostFile test.txt -attackPlanFile ./attackplans/gobuster-max.ini -logging



Vanquish - Kali Linux based Enumeration Orchestrator