Hey Folks, as we know, we often add a sitemap to a website so that its content can be indexed all across the Internet; crawlers simply take the "sitemap.xml" file from the web application and give us the result. But some crawlers are instead used to find hidden files or directories in web applications. Almost all web application crawlers work the same way, and SourceWolf, the one we are going to talk about today, is no different.
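To picture what such a crawler does under the hood, here is a minimal Python sketch (purely illustrative, not SourceWolf's actual code) that fetches sitemap.xml from the lab target used in this walkthrough and lists the URLs it advertises; paths that are not listed there are exactly what brute forcing is for.

# Minimal sketch of a sitemap-driven crawl: fetch sitemap.xml and print
# the URLs it advertises. Illustrative only -- not SourceWolf's code.
import requests
import xml.etree.ElementTree as ET

target = "http://192.168.1.7"  # lab target used throughout this article
resp = requests.get(f"{target}/sitemap.xml", timeout=10)

if resp.ok:
    root = ET.fromstring(resp.content)
    # sitemap entries live in <loc> tags under the sitemap namespace
    for loc in root.iter("{http://www.sitemaps.org/schemas/sitemap/0.9}loc"):
        print(loc.text)
else:
    print("No sitemap -- hidden files have to be discovered by brute force instead")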
Let's take a look 🙂 !!
To configure this tool we have to download it from GitHub. After moving into its directory, execute the pip command to install the tool's requirements.
git clone https://github.com/micha3lb3n/SourceWolf.git
cd SourceWolf/
pip3 install -r requirements.txt
Ready 🙂 Now we can use this tool with the following command.
python3 sourcewolf.py
Now, first of all, we add only the URL parameter to our scan, and the response from this tool shows that the URL can be accessed.
python3 sourcewolf.py --url http://192.168.1.7/wp-admin
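Conceptually, this single-URL check boils down to requesting the path and reporting its status code; the sketch below (illustrative only, not the tool's internals) shows the idea against the same lab URL.

# Conceptual single-URL check: request the path and report whether it
# is reachable. Illustrative only; SourceWolf's internals may differ.
import requests

url = "http://192.168.1.7/wp-admin"
resp = requests.get(url, timeout=10, allow_redirects=False)
print(url, "->", resp.status_code)  # a 2xx/3xx code here means the URL can be accessed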
Through this technique alone we can obtain sensitive information, files and directories in a web application, such as robots.txt. The usage of this tool is quite specific: as you can see below, we add the FUZZ keyword after the URL, which is required for fuzzing against a web application.
python3 sourcewolf.py -b http://192.168.1.7/FUZZ
The results come back with every response code, because we have not added any extra parameters to the query.
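For clarity, FUZZ-style brute forcing amounts to substituting each wordlist entry into the URL template and recording whatever status code comes back. The sketch below only illustrates that idea (the wordlist entries are hypothetical); it is not SourceWolf's implementation.

# Rough sketch of FUZZ-style brute forcing: replace the FUZZ placeholder
# with each candidate word and record every response code, since no
# filtering has been applied.
import requests

template = "http://192.168.1.7/FUZZ"                  # URL with the FUZZ placeholder
words = ["admin", "backup", "robots.txt", "uploads"]  # hypothetical wordlist entries

for word in words:
    url = template.replace("FUZZ", word)
    try:
        code = requests.get(url, timeout=5).status_code
    except requests.RequestException:
        continue
    print(code, url)  # every status code is reported, 404s included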
Verbose mode is there to show additional details and tells us exactly what the tool is doing.
python3 sourcewolf.py -b http://192.168.1.7/FUZZ -v
Most of the time we use our own custom wordlist to find hidden files in a web application, and this tool supports that too, as you can see below.
python3 sourcewolf.py -b http://192.168.1.7/FUZZ -w wordlist.txt
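A wordlist is nothing more than one candidate path per line; a tiny hypothetical wordlist.txt could look like this.

admin
backup.zip
config.php
robots.txt
uploads/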
The output option is not only there to save results; it also gives us additional features after the crawling completes.
python3 sourcewolf.py -b http://192.168.1.7/FUZZ -o ok
Below you can see that it also pulls the juicy stuff out of the source code.
As you can see, it has successfully captured social media links and JavaScript variables.
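As a rough idea of how such items can be pulled out of page source, the sketch below greps the HTML/JS for social media links and simple var declarations; these regexes are examples of the concept, not the patterns SourceWolf actually uses.

# Illustrative extraction of "juicy" items from page source: social media
# links and JavaScript variable names. Example regexes only.
import re
import requests

source = requests.get("http://192.168.1.7", timeout=10).text

social = re.findall(r"https?://(?:www\.)?(?:twitter|facebook|linkedin|instagram)\.com/[\w./-]*", source)
js_vars = re.findall(r"\bvar\s+(\w+)\s*=", source)

print("Social media links:", social)
print("JavaScript variables:", js_vars)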
After going to the output directory, we can see the results, as shown below.