Hacking a website? Don’t miss out on this important step!
Before hacking a website or a resource, it is a good practice to first accumulate all the information which can be gathered related to the target resource. Hackers/penetration testers gather all possible information related to the website they want to hack. Then they come up with a strategy to attack the resource with this information. A lot of this information is publicly available on the internet.
In this blog, I will go through the main tools that can be used to gather all this information. The main steps are:
1. WHOIS lookup to find out who owns the domain
2. Discovering the technologies used by the website
3. Reverse DNS lookup to find other websites hosted on the same server
4. Scanning for open ports and running services
5. Discovering subdomains
6. Discovering files and directories on the server
Let’s look at how to perform these steps:
WHOIS is a protocol used to find out who owns a resource on the internet, such as a domain name or an IP address. This information is easy to fetch: you can google "whois lookup" to find a list of websites providing this service. I have used https://whois.domaintools.com to fetch the WHOIS details in the screenshots below.
Anyone can get information such as the website owner's country, the date since when the website has been active, the IP address of the website, the registrar the domain was registered with, and a lot more.
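If you prefer the terminal, the standard whois command-line client (preinstalled on many Linux distributions, or available through your package manager) returns the same kind of records. A minimal example, using my own domain as the target:
$ whois gourav-dhar.com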
The next step is to find out which technologies the website runs on. For this, there’s a tool available at https://sitereport.netcraft.com/. This site gives information about a lot of things, most importantly the technologies used to host the website. If the website uses JavaScript, the hacker can craft JavaScript code that targets the client’s browser. If the website uses PHP, the hacker can write PHP code for server-side attacks. We also get information on the web trackers used by the website. The result for my website (https://gourav-dhar.com) looks like this.
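Netcraft aside, a quick way to spot some of this yourself is to look at the HTTP response headers, which often reveal the web server and backend technology. A rough sketch using curl (what you see depends entirely on how the site is configured; many sites hide these headers):
$ curl -sI https://gourav-dhar.com | grep -iE 'server|x-powered-by'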
If you are trying to hack a website and are not able to find any vulnerabilities, the next step is to look for other websites that exist on the same server. If two websites are hosted on the same server, they share the same IP address. If you manage to hack any of the other websites, you may be able to navigate through the server’s file system to reach the target website. To find these co-hosted websites, you need to do a reverse DNS lookup.
The role of a DNS server is to convert a domain name into an IP address. Whenever you browse a URL, your internet service provider (ISP) queries a DNS server for the IP address the request needs to be sent to, and returns that IP address.
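You will need the target’s IP address for the reverse lookup. If it did not already show up in the WHOIS or Netcraft reports above, one quick way to get it from the terminal (assuming the dig utility from the dnsutils/bind-utils package is installed) is:
$ dig +short gourav-dhar.com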
With a reverse DNS lookup, we can find out which websites are hosted on a given IP address/computer. To do this, navigate to https://viewdns.info/. Under the heading Reverse IP Lookup, enter the IP of the target website and click Go to get the list of websites hosted on the same computer.
You will get a list of all the websites hosted on that IP, including the target website. Now you can try to find the least secure website among them and navigate your way to the target.
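Note that ordinary DNS tools are not enough here: a plain reverse (PTR) lookup only returns the single reverse record the IP’s owner has configured, not every site hosted on that IP, which is why a dedicated reverse IP lookup service is used. For comparison, this is all a PTR lookup gives you (shown here against Google’s public DNS IP):
$ dig -x 8.8.8.8 +short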
The next step is to scan the target for open ports and the services running on them. There’s a tool called Zenmap (the official GUI for Nmap) that helps us do this. It can be installed with the following command:
$ sudo apt install zenmap
Enter the target IP, select the Intense scan profile, and run the scan to look for open ports and service version details that can be exploited.
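If you prefer to stay on the command line, Zenmap’s Intense scan profile is just a preset for nmap itself, so the rough equivalent is the command below (replace the placeholder with your target’s IP or hostname):
$ nmap -T4 -A -v <target-ip>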
A lot of websites have subdomains, and it is useful to discover them. You may find web pages that only the admins or management of the website use, or get access to beta versions and parts of the web application that are still under development. Applications under development are more likely to contain bugs, and hence there is a higher chance of finding weaknesses that can be exploited to gain access to the system.
There are a lot of tools that can be used to discover subdomains. One such tool is knockpy. Installing it is very simple. Run:
$ sudo apt install knockpy
To use this tool, run knockpy followed by the domain name. I will run it for google.com.
$ knockpy google.com
Sometimes configuration files are left accessible on the server, and these config files can give us a lot of information that can be used to hack into the website. To discover such files and directories, there’s a tool named dirb. To install it, run the following command:
$ sudo apt install dirb
This tool sends requests to the target website using words picked from a wordlist. It can only find files and directories whose names appear in the wordlist the user provides, or in the default wordlist that ships with dirb.
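A basic run looks like the sketch below. The wordlist argument is optional; if you leave it out, dirb falls back to its default wordlist (on Debian/Kali-based systems the bundled lists usually live under /usr/share/dirb/wordlists/, though the path may differ on your distribution). I am using my own website as the target here:
$ dirb https://gourav-dhar.com /usr/share/dirb/wordlists/common.txt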
Finally, run man dirb to explore all the options this command provides.
While the above covers the main approaches to information gathering, there are a lot of other open-source tools that speed up this process. Remember, Google is your friend!
And that’s a wrap! Hi, I am Gourav Dhar, a software developer, and I also write blogs on Backend Development and System Design. Subscribe to my newsletter “The Geeky Minds” and learn something new every week - https://thegeekyminds.com/subscribe