Google Dorking for hackers – Part 2

Hello, aspiring ethical hackers. This blogpost is Part 2 of Google Dorking. In Part 1 of Google hacking, you learnt what Google hacking or Google Dorking is, what Google operators are, and about the different Google operators. In this Part 2, you will learn how hackers use Google Dorking to gather whatever information they want. A Black Hat hacker will definitely not use Google just to try out different operators; they will be looking for information that can make their hack successful. What could that information be?

This information can be credentials, login pages, backup files, database dumps, email lists, various types of files, open FTP servers, SSH servers, etc. How nice it would be for hackers if they could get password and credential files using just a simple Google dork instead of resorting to password cracking. In the real world, we do sometimes get them. All we have to do is combine some of the Google dorks you learnt about in Part 1 of Google hacking.

allintext:password filetype:log

allintext:username filetype:log

In the above dorks, we are searching for log files with usernames or passwords in them. You will be surprised how many results you get.

Well, what is the next thing you do once you successfully get credentials? You search for a page where you can use those credentials, i.e. login pages. We can search for login pages using the Google dork below.

intitle:login

You can even search specifically for WordPress login pages using the dork below.

intitle:"Index of" wp-admin

Sometimes you can get more than single passwords: files that are entire collections of credentials, where passwords are gathered and stored. For example, take this Google dork.

intext:"Index of" intext:"password.zip"

You learnt about the functions of a database while learning about web server hacking. Sometimes users make a backup of their databases for safety purposes and unfortunately store that backup on the web server itself. What if we could find this database backup? It can be found using the Google dork shown below.

"index of" "database.sql.zip"

Or by using this query.

inurl:backup filetype:sql

We can find other backups too.

intitle:"index of" "backup.zip"

We can also find the email lists of an organization using Google dorking. Many organizations compile and save lists of emails as Excel files. They can be found using the Google dork shown below.

filetype:xls inurl:"email.xls"

Once we have the list of emails, we can perform social engineering attacks on them. Websites don't just have credentials, emails and backups stored on them. They hold different types of files like PDFs, Word documents, images, etc. Sometimes these are not meant for visitors of the website, but they are on the web server nonetheless. Even these files can be found by Google dorking.

site:<> filetype:pdf

site:<> filetype:doc

These files can be mined for metadata that reveals more information. What if you are trying to hack a particular piece of software and need its source code to check whether it has any vulnerabilities? We have a dork for that too.

intitle:"index of" "sourcecode"

Sometimes a specific vulnerability is disclosed in a piece of software and hackers work to find exposed installations of it. For example, take a particular version of PowerCMS v2. Its installations can be found using the below query.

intitle:"Installation Wizard – PowerCMS v2"

You would be surprised how many websites still use FTP and how many of them are still exposed to the internet. They can be found using the Google dorks below.

intitle:"index of" inurl:ftp

site:sftp.*.*/ intext:"login" intitle:"server login"

inurl:/web-ftp.cgi

You can also find specific database managers, for example, phpMyAdmin.

"index of" inurl:phpmyadmin

Beginners guide to footprinting websites: Part 2

Hello, aspiring ethical hackers. In Part 1 of website footprinting, you learnt how to gather information about a website using methods like grabbing banners, directory scanning and spidering. In this Part 2, you will learn about some more techniques for footprinting websites.

4. Website mirroring

Whether you are directory scanning or spidering, you are sending a lot of requests to the website (especially if the website is very large), which may raise suspicion on the target's side or get you blocked. What if there was an effective workaround for this? Actually, there is. Instead of sending requests to the target website, we can download the entire website to our local device. This is known as website mirroring. For example, let's mirror a website using wget as shown below.
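
A typical mirroring command with wget looks like this (the URL is a placeholder for your target):

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://www.example.com/

Here, --mirror turns on recursive downloading with timestamping, --convert-links rewrites links so the local copy can be browsed offline, --page-requisites fetches images and stylesheets, and --no-parent stops wget from wandering above the starting directory.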

5. Footprinting websites using online services

A website is constantly updated. The information that was displayed on a website last year may not be there today. What if there was a way to go back in time to view past versions of a website for gathering information? Actually, there is: the website archive.org. Archive.org collects snapshots of a website at different points in time, from the time the website came into existence, and stores them. So, you can go there and view how a website looked ten years ago or three years ago. For example, this is how our website looked way back in 2018.
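
Incidentally, you don't have to browse archive.org manually; it also offers a simple availability API you can query from the command line to find the snapshot closest to a given date (the URL and timestamp below are placeholders):

curl "https://archive.org/wayback/available?url=example.com&timestamp=20180101"

It returns a small piece of JSON pointing to the closest archived snapshot of that page.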

Better still, you can constantly monitor the updates being made to a website using a tool known as Website Watcher.

Website Watcher automatically checks webpages for any updates and changes.

Website footprinting for beginners

Hello, aspiring Ethical Hackers. In our previous article, you learnt what Footprinting is, why it is important and the different types of Footprinting techniques. Website Footprinting is one such type.

What is Website Footprinting?

Website Footprinting is the process of analyzing the target's website to gather as much information as possible, information that may prove helpful in a penetration test or a hack, depending on which hat you wear.

What information does Website Footprinting reveal?

Website Footprinting reveals the following information.

  1. Webserver software and its version.
  2. Type of CMS being used and its version.
  3. Contact details.
  4. Sub directories of the website.
  5. Operating System of the target hosting the web server.
  6. Scripting languages used to code the website.
  7. Type of database being used by the target website.
  8. Misconfigured files.
  9. Parameters used.
  10. Misplaced files.

How is Website Footprinting performed?

There are multiple methods to perform Website Footprinting. They are:

  1. Banner Grabbing
  2. Web Directory scanning
  3. Web spidering
  4. Website Mirroring
  5. Website Header Analysis.

1. Banner Grabbing

A banner is a small piece of information that is displayed by services, programs or systems. This banner sometimes reveals the type of software being used, its version, other information related to the software, and sometimes even the operating system behind it. Banner Grabbing is the method used to gain information about the services running on a target system by grabbing this banner. Learn more about Banner Grabbing here.
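
For example, a quick manual banner grab of a web server can be done with netcat (the IP address below is a placeholder for a lab target such as Metasploitable 2):

nc 192.168.36.189 80
HEAD / HTTP/1.0

After typing the request and pressing Enter twice, the server replies with headers like "Server: Apache/2.2.8 (Ubuntu)", revealing the web server software and its version.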

2. Web Directory Scanning

Website directories are the folders present on a website. Sometimes these directories contain sensitive files, placed there either due to misconfiguration or by mistake. Not just that, there may also be hidden directories that cannot be discovered just by browsing the website.

For example, earlier this year, the Brazilian retail arm of the Swedish luxury vehicle manufacturer Volvo mistakenly exposed sensitive files on its website. The exposed files included authentication details for its databases (both MySQL and Redis), open ports, credentials and even the website's Laravel application key.

There are many tools to perform website directory scanning. Let's look at one tool that is installed by default in Kali Linux: dirb. Since I don't want to spend the rest of my life in prison, I will not test this tool on any live website but on the web services of Metasploitable 2.

The command to run the dirb tool is very simple. It is as shown below.
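
For example, against the web server of Metasploitable 2 (the IP address is a placeholder for your own lab setup):

dirb http://192.168.36.189/

By default, dirb uses its common.txt wordlist; a different wordlist can be passed as a second argument.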

Just give it a URL and it starts scanning.

After the scan is finished, we can analyze the URLs one by one. Very soon, I find an interesting one.

I first open the passwords directory and find a file named “accounts.txt” in it.

As I open it, I find some credentials. These appear to be users of the Mutillidae web app.

Then I open the phpMyAdmin page. phpMyAdmin is a database manager. Although I don't get access to the databases, I do get some server and OS information about the target.

The next interesting thing to check out is the robots.txt file. What is robots.txt? It is a file specifically used to ask search engines not to index certain files and paths. Any entry or path given in this robots.txt file is not indexed or crawled by a search engine spider. But we can still access it directly. Let's see what it contains.
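
For reference, a robots.txt file looks something like this (these entries are purely illustrative):

User-agent: *
Disallow: /passwords/
Disallow: /config.inc

A well-behaved crawler skips the disallowed paths, but nothing stops a human from requesting them directly in a browser.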

It has disallowed some six paths and files from indexing. Normally in these cases, any configuration file is a prized catch. So, let's check out the config.inc file.

Once again, some credentials. But these appear to belong to a database.

3. Web Spidering or Crawling

Website crawling or spidering is a technique used to follow the links of a website to understand its structure. This crawling sometimes reveals interesting links and pages that pen testers can focus on.

A crawler or spider works this way: when you give it a URL or webpage, it visits the URL and makes a list of all the hyperlinks present on that page. Then it visits each of those hyperlinks and repeats the process recursively. In this way, a website spider builds up the structure of the entire website, giving hackers a better picture of their target.

There are many website spidering tools. For this tutorial, we will use a crawler module of Metasploit.

I will use it to crawl Mutillidae on Metasploitable 2.

Set the target IP or URL and set the path.
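
In msfconsole, the steps look something like this (a sketch assuming Metasploit's auxiliary/scanner/http/crawler module; the IP address and path are placeholders for a lab setup):

use auxiliary/scanner/http/crawler
set RHOSTS 192.168.36.189
set RPORT 80
set URI /mutillidae/
run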

After all the options are set, execute the module. After loading some required libraries, it starts crawling the target website.

If the target website is very large, spidering can take a lot of time. That's all in this blogpost. In Part 2, you will learn about website mirroring and how to gather information about a target website using online services. Read Part 2 now.

Metadata for Pen testers

Hello, aspiring Ethical Hackers. In our previous blog post, you learnt what Footprinting is, why it is important and the different types of Footprinting techniques. In this blog post, you will learn about performing Footprinting using metadata.

What is Metadata?

Metadata is a set of data that provides information about other data. Simply put, it is data about data. Everyone knows data is very important; metadata is often ignored but is equally important. But how is metadata helpful to Ethical Hackers? Before going there, let us see how to extract metadata.

How to extract Metadata?

There are various tools and online resources that extract metadata from different files. For this article, let's use one tool that is built into Kali Linux: exiftool. Exiftool extracts metadata from a large number of file types.
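
Its basic usage is just exiftool followed by a file name (the file names below are placeholders):

exiftool report.docx
exiftool whitepaper.pdf
exiftool photo.jpg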

Let’s extract metadata of a docx file.

Now, let’s extract it from a PDF file.

Let’s see another PDF file.

Finally, let's use it on an image file.

How is it useful in pen testing?

As you may have noticed, we performed metadata extraction on three types of files: docx, PDF and image. That's because these are the most common types of files available online. Almost every organization uses these types of files on its website or elsewhere to convey information.

Extracting metadata from the docx file revealed the names of the creators of the file (Admin, Kalyan). This revelation can help in gaining access later (e.g. the username may be 'admin') or in performing a spear phishing attack aimed at that user. We can also see that the document was created using Microsoft Word. So, we can target these users with a malicious macro attack.

Observing the information extracted from the first PDF file, we can see that this PDF was also created using Microsoft Word. In this case, the version of the MS Word software is also very clear (2019), along with the creator's name.

The second PDF file was created using Microsoft PowerPoint. So, we can figure out that these users should be targeted with a PowerPoint-themed attack.

Images are another very common type of file found on a website or other company property. We can see that the image I downloaded from a website was either edited or created with Photoshop, and the metadata reveals the specific version used. So, we can search for any vulnerabilities in this particular software or use a lure themed around this software to target the organization.

That’s how Metadata can help Pen testers in gaining information about the target organization.

Network footprinting for beginners

Hello, aspiring Ethical Hackers. In our previous blogpost, the Footprinting Guide, you learnt about the different types of Footprinting that hackers and pen testers perform to gather information about their target. One of the important types of Footprinting is Network Footprinting.

What is Network Footprinting?

Network Footprinting is the gathering of information about the target's network, like the ranges of IP addresses used by the target organization, IP address blocks, etc. This Footprinting can be considered the last step before making initial contact with the target through network scanning. It also allows attackers to map the target network.

How to perform Network Footprinting?

Information like IP address ranges and their subnet masks can be found from the Regional Internet Registries (RIRs) and some other sites given below. A sample command-line lookup is shown after the list.

  1. whois.arin.net – ARIN whois search
  2. apnic.net/about-apnic/whois_search – APNIC whois search
  3. AFRINIC whois
  4. LACNIC whois
  5. RIPE whois search
  6. bgp.he.net
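
For example, a whois lookup against ARIN's server from the command line (the IP address is just a documentation-range placeholder):

whois -h whois.arin.net 203.0.113.5

The output includes fields like NetRange and CIDR, showing the IP address block and the organization it is allocated to.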

Apart from these, there is also a tool called SamSpade that can be used to perform this footprinting.

traceroute and tracert

Traceroute and tracert are computer network diagnostic commands that display the possible route (or path) packets take to reach their intended target on a network. These commands utilize the TTL (time to live) field in the IP packet header to discover the routers on the path to a target network or system.
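
For example (example.com stands in for the target):

traceroute example.com
tracert example.com

The first command is used on Linux, the second on Windows. Each line of output represents one hop (router) on the path, along with round-trip times to it.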

That's all about gathering information on the target's network.