
Google Dorking for hackers – Part 2

Hello, aspiring ethical hackers. This blogpost is Part 2 of Google dorking. In Part 1 of Google hacking, you learnt what Google hacking or Google dorking is, what Google operators are, and about the different Google operators. In this Part 2, you will learn how hackers use Google dorking to gather whatever information they want. A black hat hacker will definitely not use Google just to try out different operators. They will be looking for information that can make their hack successful. What could that information be?


This information can be credentials, login pages, backup files, database dumps, email lists, various types of files, open FTP and SSH servers, etc. How nice would it be for hackers if they got password and credential files using just a simple Google dork instead of resorting to password cracking? In the real world, we do get them. All we have to do is combine some of the Google dorks you learnt about in Part 1 of Google hacking.

allintext:password filetype:log

allintext:username filetype:log

In the above dorks, we are searching for log files with passwords or usernames in them. You will be surprised how many results you get.
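The two searches can also be combined into a single query. A sketch (the exact results depend on what Google has indexed):

```
intext:username intext:password filetype:log
```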


Well, what is the next thing you do once you successfully get credentials? You search for a page where you can use those credentials, i.e. login pages. We can search for login pages using the Google dork below.

intitle:login


You can even search specifically for WordPress login pages using the dork below.

intitle:"Index of" wp-admin


Sometimes, you can get not just single passwords but also files that are collections of credentials, i.e. files where passwords are collected and stored. For example, this Google dork.

intext:"Index of" intext:"password.zip"


You learnt about the functions of a database while learning about web server hacking. Sometimes users make a backup of their databases for safety purposes and unfortunately store that backup on the web server itself. What if we could find this database backup? It can be found using the Google dork shown below.

“index of” “database.sql.zip”


Or by using this query.

inurl:backup filetype:sql

Google Dorking 9

We can find other backups too.

intitle:"index of" "backup.zip"


We can also find the email lists of an organization using Google dorking. Many organizations compile and save lists of emails as Excel files. They can be found using the Google dork shown below.

filetype:xls inurl:"email.xls"


Once we have a list of emails, we can perform social engineering attacks on them. Websites don't just have credentials, emails and backups stored on them. They also host different types of files like PDFs, Word documents, images, etc. Sometimes these are not meant for visitors of the website, but they are on the web server nonetheless. Even these files can be found by Google dorking.

site:<domain> filetype:pdf


site:<domain> filetype:doc


These files can be examined for metadata that can reveal more information. What if you are trying to hack a particular piece of software and need its source code to check it for vulnerabilities? We have a dork for that too.

intitle:"index of" "sourcecode"


Sometimes a specific vulnerability is disclosed in a software product, and hackers work to find installations of it. For example, take a particular version of PowerCMS v2. Its installations can be found using the query below.

intitle:"Installation Wizard – PowerCMS v2"


You know how many websites still use FTP, and many of those servers are still exposed to the internet. They can be found using the Google dorks below.

intitle:"index of" inurl:ftp


site:sftp.*.*/ intext:"login" intitle:"server login"


inurl:/web-ftp.cgi


You can also find specific database managers, for example, phpMyAdmin.

“index of” inurl:phpmyadmin


Dirty COW vulnerability: Beginner's guide

Hello, aspiring ethical hackers. This blogpost is a beginner's guide to the Dirty COW vulnerability. Assigned CVE ID CVE-2016-5195, this vulnerability has affected the Linux kernel since version 2.6.22, released in 2007.

What is this Dirty COW vulnerability?

Dirty COW is a Linux privilege escalation vulnerability caused by a race condition in the way the Linux kernel handled copy-on-write (COW) functions; the name Dirty COW comes from Copy-On-Write. By exploiting this vulnerability, an unprivileged local user can gain write access to read-only memory mappings and subsequently elevate their privileges on the system.

Which kernels are vulnerable?

All Linux kernels from version 2.x up to 4.x before 4.8.3 are vulnerable to Dirty COW. Let's demonstrate this vulnerability on an Ubuntu 12 system. To exploit it, a hacker first needs to gain initial access to the target system.


Download the exploit from GitHub and extract its contents. It is a C program.


Compile this code using the GCC compiler that comes preinstalled on the Ubuntu system. This exploit creates a new user named 'firefart' with root privileges on the target system by writing to the /etc/passwd file. Usually, creating a user with root privileges is not possible for low-privileged users on Linux systems. But this is a privilege escalation vulnerability.
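Assuming the source was saved as dirty.c (the filename used in firefart's GitHub repository), the compilation step follows the instructions in the exploit's own header:

```shell
# -pthread: the race condition uses two threads; -lcrypt: the exploit hashes the new password
gcc -pthread dirty.c -o dirty -lcrypt
```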


Now, let's execute the exploit. It will prompt you to set a password for the new user "firefart" it is creating.
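Running the compiled binary might look like this sketch (prompts vary with the exploit build):

```shell
./dirty        # prompts for a password for the new 'firefart' user
```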


Log in as the newly created user to see if the exploit succeeded in exploiting the vulnerability and creating the new user "firefart".
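A quick check might look like the following sketch:

```shell
su firefart    # switch to the newly created user (use the password you set)
id             # the entry written to /etc/passwd gives firefart uid 0
```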


As you can see, a new user named “firefart” has been created on the target system with root privileges.


Complete guide to DNSrecon

Hello, aspiring ethical hackers. This is a complete guide to the DNSrecon tool. In our previous blogpost on DNS enumeration, you read what DNS is, what the various types of DNS records are, and what information about a network DNS enumeration can reveal to a pentester or a black hat hacker. DNSrecon is one such tool used for enumerating DNS.

DNSrecon was written by Carlos Perez. He initially wrote it in Ruby, way back in 2007, to learn that programming language and DNS. As time passed, he wanted to learn Python, so he ported DNSrecon to Python.

The features of DNSrecon tool are,

  1. Checks all NS Records for Zone Transfers.
  2. Enumerates general DNS Records for a given domain (MX, SOA, NS, A, AAAA, SPF and TXT).
  3. Performs common SRV Record enumeration.
  4. Top Level Domain (TLD) expansion.
  5. Checks for Wildcard resolution.
  6. Brute forces subdomains and host A and AAAA records given in a domain and a wordlist.
  7. Performs PTR record lookup for a given IP Range or CIDR.
  8. Checks a DNS server's cached records for A, AAAA and CNAME, given a list of host records in a text file to check.

Let’s see how to enumerate DNS with DNSrecon. DNSrecon is installed by default in Kali Linux. To use DNSrecon, all we have to do is use the command below.

dnsrecon -d <domain>


--name_server (-n)

By default, DNSrecon uses the SOA server of the target domain to enumerate DNS. If you want to use a different name server, you can specify it with this option.
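For example, to query Google's public resolver instead (8.8.8.8 and example.com are purely illustrative):

```shell
dnsrecon -d example.com -n 8.8.8.8
```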


-a

This option is used to do a zone transfer along with standard enumeration performed above.
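For example:

```shell
dnsrecon -d example.com -a
```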


As expected, it failed.


-y, -b, -k

Similarly, you can perform Yandex (-y), Bing (-b) and crt.sh (-k) enumeration along with the standard enumeration.


-w

This option is used to perform deep whois record analysis and reverse lookup of IP ranges found when doing standard enumeration.


-z

This option is used to perform a DNSSEC zone walk along with standard enumeration.


--dictionary (-D)

This option specifies a dictionary file of subdomains and hostnames to use for brute forcing.
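For example, with a hypothetical wordlist subdomains.txt:

```shell
dnsrecon -d example.com -D subdomains.txt -t brt
```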


--range (-r)

Specify an IP range (or CIDR block) to perform reverse lookup against.
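For example, a reverse lookup sweep of a /24 range (an illustrative private range, not one from the screenshots):

```shell
dnsrecon -t rvl -r 192.168.1.0/24
```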


--type (-t)

This option is used to perform only a specific type of enumeration. The various types of enumeration that can be performed using DNSrecon are,

  • std: SOA, NS, A, AAAA, MX and SRV records.
  • rvl: reverse lookup of a given IP range or CIDR.
  • brt: brute force domains and hosts using a given dictionary.
  • srv: SRV records.
  • axfr: zone transfer from the NS server.
  • bing: Bing search for hosts and subdomains.
  • yand: Yandex search for hosts and subdomains.
  • crt: crt.sh enumeration for subdomains and hosts.
  • snoop: cache snooping against the NS server.
  • tld: test against all TLDs registered with IANA.
  • zonewalk: perform a DNSSEC zone walk using NSEC records.
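Multiple types can be combined with commas. For example, to run only standard enumeration plus a dictionary brute force (subdomains.txt being a hypothetical wordlist):

```shell
dnsrecon -d example.com -t std,brt -D subdomains.txt
```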

Saving results

You can save the found records to a SQLite database (--db), or to XML (--xml), CSV (--csv) and JSON (--json) files.
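For example, to write the found records to a CSV file:

```shell
dnsrecon -d example.com -c results.csv
```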


--lifetime

This option sets how long the tool waits for the target server to respond to a query. The default is 3 seconds.


--threads

This option is useful to specify the number of threads to be used while performing reverse lookup, forward lookup, brute force and SRV record enumeration.


That’s all about DNSrecon.


Complete guide to DNSenum

Hello, aspiring ethical hackers. In the previous blogpost on DNS enumeration, you learnt what the DNS service is used for, the different types of records it has, and what information DNS enumeration can reveal to hackers or pentesters. In this blogpost, you will learn about a tool named DNSenum that can be used to enumerate DNS. DNSenum is a multithreaded Perl script that gathers information from target DNS servers.

The features of DNSenum are,

  1. Get the host’s address (A record).
  2. Get the nameservers (NS).
  3. Get the MX record (MX).
  4. Perform axfr queries on nameservers and get BIND VERSION.
  5. Get extra names and subdomains via google scraping (google query = “-www site:domain”).
  6. Brute force subdomains from a file; can also perform recursion on subdomains that have NS records.
  7. Calculate C class domain network ranges and perform whois queries on them.
  8. Perform reverse lookups on netranges (C class and/or whois netranges).

Let's see how to perform DNS enumeration with DNSenum. DNSenum is included by default in Kali Linux. If you want to enumerate a domain with DNSenum, all you have to do is supply a domain name as shown below.

dnsenum <domain>

When run in default mode, DNSenum first enumerates the host address, then the name servers, then the MX records, performs AXFR queries, gathers extra names and subdomains via Google scraping, brute forces subdomains from a file, calculates the class C IP network ranges from the results, performs whois queries on them, and finally performs reverse lookups on those IP addresses.


--dnsserver

In some cases, the results of enumeration can vary depending on the server that is queried. With DNSenum, we can direct queries at a different DNS server using this option.
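For example (8.8.8.8 and example.com used purely as illustrations):

```shell
dnsenum --dnsserver 8.8.8.8 example.com
```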


When you first use DNSenum on a domain, you will notice a considerable delay at some stages. The delay occurs while DNSenum is brute forcing subdomain names, and then again while it performs reverse lookups on the IP address range.

While brute forcing subdomain names, there is a delay because the default file used by DNSenum ("/usr/share/dnsenum/dns.txt") has over 1506 entries. Until the tool checks all the entries, there will definitely be a delay. Can we reduce this? Yes, by using another file instead of the default one. For example, we can create our own "dns.txt" file with entries of subdomains gathered from other types of enumeration.


--file (-f)

We can specify this custom file with the -f option.
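For example, with a hypothetical custom wordlist mydns.txt:

```shell
dnsenum -f mydns.txt example.com
```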


--subfile

We can also save the output of subdomain brute forcing to a file using the --subfile option.


--noreverse

Coming to reverse lookup: performing reverse lookups on 512 IP addresses (in this case) definitely takes time. But don't worry, we can skip the reverse lookup using the --noreverse option.
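For example:

```shell
dnsenum --noreverse example.com
```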


--private

This option enumerates and saves the private IP addresses of a domain in the file named <domain_name>_ips.txt.


--timeout (-t)

The default timeout for DNSenum's TCP and UDP queries is 10 seconds. The --timeout option allows us to change it.


--threads <value>

This option is used to specify the number of threads to perform different queries.


--verbose (-v)

You already know what this option does. It reveals more information. See the differences.


--scrap (-s)

Used to specify the maximum number of subdomains to be scraped from Google.


Here’s the result.


--pages (-p)

While scraping subdomains with DNSenum above, you should have noticed that it queries Google search pages for subdomains related to the domain. By default, it processes 20 pages. Using this option, that number can be changed. For example, let's set it to 10.


--recursion (-r)

This option can be used to perform recursion on subdomain gathering.


--whois (-w)

As you might have expected, this option is used to perform whois queries on class C network ranges. It can be time consuming, so use it wisely. Learn what whois footprinting is.


--delay (-d)

This option is used to specify the maximum delay between each whois query. The default delay is 3 seconds.


That’s all about DNSenum.


Beginner's guide to footprinting websites: Part 2

Hello, aspiring ethical hackers. In Part 1 of website footprinting, you learnt how to gather information about a website using methods like banner grabbing, directory scanning and spidering. In this Part 2, you will learn some more techniques for footprinting websites.


4. Website mirroring

Whether you are directory scanning or spidering, you are sending a lot of requests to the website (especially if the website is very large), which may raise suspicion on the target side or get you blocked. What if there were an effective workaround for this? Actually, there is. Instead of sending requests to the target website, we can download the entire website to our local device. This is known as website mirroring. For example, let's mirror a website using wget as shown below.
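A typical wget mirroring invocation looks something like this (example.com is a placeholder; the exact flags used in the screenshots may differ):

```shell
# --mirror: recursive download with timestamping; --convert-links: rewrite links for local browsing
# --page-requisites: also fetch CSS/images; --no-parent: stay within the given path
wget --mirror --convert-links --page-requisites --no-parent https://example.com/
```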


5. Footprinting websites using online services

A website is constantly updated. Information that was displayed on the website last year may not be there today. What if there were a way to go back in time and view past versions of a website to gather information? Actually, there is: the website archive.org. Archive.org takes snapshots of websites at different points in time, from the time a website first existed, and stores them. So you can go there and view how a website looked ten years ago or three years ago. For example, this is how our website looked way back in 2018.


Better still, you can constantly monitor the updates being made to websites using a tool known as Website Watcher.


Website watcher automatically checks webpages for any updates and changes.