January 3, 2023 · scanning · Jonathan Walker · 10 minutes
When new security research comes out, spinning up to scour your assets for vulnerabilities can be a long and tedious task: you spend time learning about the latest findings, how to exploit them, and what conditions are required to exploit them. How can you stay on top of it all when it is a constant battle that keeps repeating itself?
That is the exact problem projects like Nuclei are made for: helping researchers identify known issues through a powerful templating language so you do not miss out. It can also help you identify issues such as known subdomain takeovers, exposed panels, network services, misconfigurations, exposed files, and more.
It does an outstanding job when you have a limited number of assets and are performing scans individually, but the moment you have hundreds or thousands of assets, scaling vertically is no longer an option.
Performing scanning at scale is exactly what Nuclear Pond is meant to achieve. You can launch thousands of scans without having to worry about cost or waiting for extended periods of time, and you can customize how many scans run in parallel, all for far less than a cup of coffee. Once the scans are complete, you can upload the results to S3 for querying with Athena, or just view the output as if it were running on your own machine. These scans launch hundreds of instances of Nuclei all at the same time on the cheapest compute available on AWS: Lambda.
Scans can help you visualize your attack surface and the vulnerabilities within it.
Nuclei is a tool that allows you to create configuration files (templates) to validate a wide variety of security issues, and contributing to Nuclei and Nuclei Templates is a breeze. When running nuclei, you specify what security issues you are trying to identify and launch a scan against your hosts. Even with rate limits and concurrency settings configured, this can be slow on your local machine. While they have done an outstanding job creating a fast scanner, doing so at scale can be difficult. Let's take a deep dive into each component of Nuclei.
ProjectDiscovery maintains the repository nuclei-templates, which contains various templates for the nuclei scanner provided by them and the community. Contributions are welcome and straightforward to add based on previous examples, and you can reference my pull request to get a sense of just how easy it is.
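For a sense of the shape of a template, here is a minimal sketch. The field layout follows the general nuclei template schema, which can differ between versions, and the endpoint and matcher here are made up for illustration:

```yaml
# Minimal illustrative template: flag hosts that expose a robots.txt
# containing a Disallow directive (example only, not a real finding)
id: example-robots-txt

info:
  name: Example robots.txt detection
  author: you
  severity: info

requests:
  - method: GET
    path:
      - "{{BaseURL}}/robots.txt"
    matchers:
      - type: word
        words:
          - "Disallow:"
```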
This is to help you understand what scans you can perform with Nuclei and the options available to you.

- `-tags takeover` allows you to identify known takeovers
- `-t dns` executes all templates under the `dns` directory
- `-t dns/detect-dangling-cname.yaml` executes a single template
- `-etags cve` excludes CVE templates from the scan

Here is an example in which we want to enumerate some protocols running on scanme.nmap.org with the network detection templates. This will help us identify network services running on the specified host, currently covering over 40 different network-based services such as ssh, smtp, mysql, mongodb, and telnet.
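A run along these lines would do it (the network detection templates ship with nuclei-templates; the exact directory path may vary by release):

```
# Enumerate network services on scanme.nmap.org using the
# network detection templates
nuclei -u scanme.nmap.org -t network/detection/
```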
Takeovers can be a common occurrence when you manage thousands of zones within your infrastructure, and mistakes certainly happen: deprecating an asset may not happen in the correct order, or may not complete at all. This can leave dangling assets that an attacker can take over. The repository Can I take over XYZ is an excellent resource if you want to learn what the current landscape looks like.
Nuclei currently has over 70 different templates to detect if you are vulnerable to a takeover, and here is an example of how to check whether a domain is vulnerable.
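A minimal check might look like this (the domain is a placeholder; the takeover templates are selected by tag):

```
# Run only the takeover-tagged templates against a single host
nuclei -u https://sub.example.com -tags takeover
```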
Think of Nuclear Pond as simply a way to run Nuclei in the cloud. You can use it just as you would on your local machine, but run scans in parallel against however many hosts you want to specify. All you need to think about is the nuclei command line flags you wish to pass.
To install Nuclear Pond, you first need to deploy the backend terraform module. You can do this by running `terraform apply` or leveraging terragrunt. After your backend infrastructure has been created, it's time to install `nuclearpond`.
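The backend deploy boils down to something like this (the module directory name is illustrative; see the module's readme for the required variables):

```
# From your checkout of the Nuclear Pond terraform module
cd terraform/
terraform init
terraform apply
```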
The most important flags are as follows. These are flags for `nuclearpond run`, which must precede each of them.

- `-b` controls how many hosts go into each invocation (number of hosts ÷ batch size = number of nuclei lambda invocations)
- `-o` allows you to specify the output: `cmd` for the output of Nuclei, or `s3` for data lake analysis of findings
- `-c` allows you to specify how many threads invoke lambda (1 is the default, but >10 is recommended at scale)
- `-a $(echo -ne "-t dns" | base64)` allows you to pass `-t dns` through to Nuclei
- `-f` is your backend function name and `-r` is the region you have deployed the function to

Instead of passing `-f` and `-r` on every run, you can export them as environment variables:

```
export AWS_REGION=us-east-1
export AWS_LAMBDA_FUNCTION_NAME=test-nuclei-runner-function
```
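Putting those flags together, a complete invocation might look like this (the targets file is illustrative; the function name and region match the exports above):

```
nuclearpond run -l targets.txt -b 10 -c 20 \
  -a $(echo -ne "-t dns" | base64) \
  -o cmd \
  -f test-nuclei-runner-function -r us-east-1
```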
The backend configures a Lambda function that includes the Nuclei binary within a layer, located at `/opt/nuclei`. The function accepts an event with your targets, arguments, and output type. Nuclear Pond invokes this lambda function in parallel by splitting your targets up into batches.
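The batching arithmetic is easy to sketch locally: with 25 targets and a batch size of 10, you get three invocations. The file names below are illustrative:

```shell
# Generate 25 dummy targets, then split them into batches of 10,
# mimicking how targets map onto Lambda invocations
seq 1 25 | sed 's/^/host/;s/$/.example.com/' > targets.txt
split -l 10 targets.txt batch_
ls batch_* | wc -l   # 25 targets / batch size 10 -> 3 invocations
```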
A couple of notes on flags:

- Nuclei's `-rl` (rate limit) and `-c` (concurrency) flags should not be an issue, since each invocation only scans its own batch
- Avoid `-o file.txt`, as that file remains inside the Lambda environment and is lost when the invocation ends

While I strongly recommend including filters for your scan rather than running it against all templates, a full run can be completed within a couple of minutes with `-rl 1000 -c 50`, which can potentially bring down your target. So use caution and always make sure you have permission to do so. This tool is primarily built for a targeted approach.
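Since nuclei arguments are handed to the Lambda base64-encoded via `-a`, you can sanity-check what you are sending with a quick round trip:

```shell
# Encode the nuclei args exactly as nuclearpond's -a flag expects,
# then decode to confirm they survive the round trip intact
args=$(echo -ne "-rl 1000 -c 50 -t dns" | base64)
echo "$args" | base64 -d   # prints: -rl 1000 -c 50 -t dns
```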
This output type is recommended when leveraging Nuclear Pond: once the script invokes, all of the work is handed off to the cloud for you to analyze later. This output is known as `s3`, and you select it by specifying `-o s3`. You can also specify `-l targets.txt` and `-b 10` to invoke the lambda functions in batches of 10 targets per execution.
Now let's run Nuclear Pond as it was intended, at scale on a significant number of targets. Here I have around 500k targets, batched 2,000 targets per execution with `-b 2000`, and ran 200 individual threads locally with `-c 200` to invoke the lambda functions asynchronously.
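That run would look roughly like this (the targets file and template filter are illustrative):

```
nuclearpond run -l targets.txt -b 2000 -c 200 -o s3 \
  -a $(echo -ne "-t dns" | base64) \
  -f test-nuclei-runner-function -r us-east-1
```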
To explore your findings in Athena, all you need to do is perform a simple query; the database and the table should already be available to you. You may also have to configure query results if you have not done so already. Once you are comfortable querying Athena, it is best to move on to something that helps you visualize your results, such as Grafana.
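A first query can be as simple as this (the database and table names here are placeholders; use the ones the terraform module created for you):

```sql
-- Peek at a handful of findings (placeholder db/table names)
SELECT *
FROM nuclei_db.findings_db
LIMIT 10;
```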
To get into queries a little deeper, here is a quick example: in the select statement we drill down into the `info` column; the `"matched-at"` column must be in double quotes due to the `-` character; and we search only for high and critical findings generated by Nuclei.
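Under those constraints, the query might look like this (again, the db/table names are placeholders):

```sql
-- Drill into the info struct, quote "matched-at" because of the
-- hyphen, and keep only high/critical severity findings
SELECT info.name,
       info.severity,
       "matched-at"
FROM nuclei_db.findings_db
WHERE info.severity IN ('high', 'critical');
```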
The backend infrastructure all lives within a terraform module. I would strongly recommend reading the readme associated with it, as it has some important notes.
What are you waiting for? Get started today. Contributions are welcome, and I look forward to seeing issues created!