TrackerSift is a tool designed for security researchers to crawl websites and capture API calls, network requests, and dynamic JavaScript features, including stack traces. TrackerSift uses Selenium to automatically crawl and navigate through a website, capturing all relevant information with a purpose-built Chrome extension. The extension can identify and collect data from various sources, including APIs, database queries, server responses, and user interactions.
With TrackerSift, you can analyze the behavior of a website's JavaScript code, including function calls, event listeners, and error messages. The tool captures stack traces, which provide valuable insight into the code execution path and help identify potential issues and vulnerabilities. TrackerSift exports its findings in JSON format.
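The core mechanism described above (intercepting API calls and recording a stack trace at each call site) can be sketched in plain JavaScript. This is a minimal illustration of the technique, not TrackerSift's actual extension code; `instrument` and the record fields are assumed names.

```javascript
// Minimal sketch: wrap an API method so every call is recorded
// together with its arguments and a stack trace.
const captured = [];

function instrument(obj, method) {
  const original = obj[method];
  obj[method] = function (...args) {
    captured.push({
      api: method,
      args: args.map(String),
      stack: new Error().stack, // execution path at the call site
    });
    return original.apply(this, args); // preserve original behavior
  };
}

// Stand-in for a browser API (e.g., localStorage.setItem).
const fakeStorage = {
  setItem(key, value) { this[key] = value; },
};
instrument(fakeStorage, 'setItem');

fakeStorage.setItem('uid', '42');
console.log(captured[0].api); // -> setItem
```

In a real extension the same wrapping is injected into the page context, so calls made by third-party scripts are captured along with the stack trace identifying which script made them.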
TrackerSift, Abdul Haddi Amjad, Zubair Shafiq, Muhammad Ali Gulzar, IMC '21
The published version of the manuscript is available at: TrackerSift
We are working on Docker support and will release a Dockerfile soon. In the meantime, follow these steps:
- Clone this repository and move into the repo directory using the `cd` command.
- Open two terminals inside the repository directory.

**First Terminal**

- Run `cd server` and then `node server.js`. This starts the localhost server at port `3000`, which communicates with the Chrome extension to save the captured data inside the `server/output` directory.

**Second Terminal**

- Create a `conda` environment using `requirements.txt`: `conda create --name envname --file requirements.txt`
- To crawl a specific website, e.g., `livescore.com`, add any list of websites to `ten.csv`, which is specified in `sele.py` (line 20).
- After specifying the websites, activate the conda env and run `python sele.py`. This crawls the websites and stores the respective data in the `output/server/{website}/` directory, e.g., for `livescore.com` the data is in `output/server/livescore.com/`. See `SCHEMA.md` for a description of each file.
- In the same conda env, run `python label.py` to label the network requests for all crawled websites. This creates `label_request.json` inside each website directory.
- In the same conda env, run `python graph-plot/main.py` to create a graph for each website. This creates `label_request.json` inside each website directory.
- In the same conda env, run `python graph-plot/makeFeatures.py` to generate graph features for each website. This creates multiple files inside each website directory; see `SCHEMA.md` for a description of each file.
```
@inproceedings{Amjad22TrackerSift,
  title     = {TrackerSift},
  author    = {Abdul Haddi Amjad and Zubair Shafiq and Muhammad Ali Gulzar},
  booktitle = {ACM Internet Measurement Conference (IMC)},
  year      = {2021}
}
```
Please contact Hadi Amjad if you run into any problems running the code or if you have any questions.