Introduction

This article is for anyone with an interest in the development, security, or governance of public internet-based operational infrastructure.

Methodology and code examples are available below.

Many CTOs and CISOs are currently hard at work safeguarding their organisation’s web assets against hacking attempts. One attack vector that arises from DNS rot is subdomain hijacking.

These factors have a bearing on the threat:

  • IPv4 addresses are still used for most internet connections. They are, however, a finite resource: only 3.7 billion publicly available IPv4 addresses exist, and sought-after addresses now command a premium price. Amazon Web Services (AWS) owns approximately 53 million IPv4 addresses.
  • Cloud hosting (and hybrid-cloud hosting) providers supply customers with static and floating IP addresses. While floating IPs have many benefits, they also carry inherent risks, especially when coupled with a lack of DNS governance.

We hypothesised that the limited availability of IP addresses, coupled with the general lack of DNS governance, would make it possible to hijack subdomains, or possibly even main domains, with relative ease.

We formulated a proof of concept to test this hypothesis and measure the extent of the potential issue. We set up our experiment on the AWS Cloud Hosting platform and the results exceeded our expectations—albeit in a very negative way. Our rudimentary setup, which requires only basic programming skills, hijacked two domains and one subdomain in a two-hour period at a cost of only $50 in AWS credits.

In this article, we describe the general landscape and factors that make subdomains vulnerable to hijacking, provide step-by-step instructions that you can follow to replicate our proof of concept, and provide advice on what companies can do to prevent or mitigate the issue.

What is Subdomain Hijacking?

In our context, the term subdomain hijacking means a bad actor taking control of the content delivery of a domain or subdomain. They gain control of what, where and how content is displayed on the subdomain.

This should not be confused with the risk of a bad actor changing the DNS records of a domain or subdomain.

Information

In the DNS hierarchy, a subdomain is a prefix of a parent domain. Subdomains are useful for many legitimate purposes, for example email services, content delivery networks and APIs. A company may use:

  • blog.example.com to host the web blog of the company.
  • api.prod.example.com and api.stage.example.com for nested API endpoints under their respective production and staging environments.
  • mail.example.com for the mail exchange servers.

What Makes Subdomains Attractive to Cybercriminals?

Bad actors are typically motivated by financial or political gain. If a customer has a bad experience associated with a company, the damage and loss of goodwill can be costly.

Subdomains of larger companies are especially attractive: with the right clone of the site design, secure.barclaysbank.co.uk would be a much more convincing subdomain than ijustmadeupabankingdomain34544534.bank or somethingveryfishy.com.

Even though great effort has gone into making consumers more aware of cyber threats, the focus has primarily been on checking for the lock displayed in the address bar, a check that a hijacked subdomain served over valid HTTPS still passes.

How Bad Actors Use Hijacked Subdomains

Bad actors on the internet today are not common thugs. They are technically sophisticated and well financed, some with resources estimated to exceed those of many of the SMEs trying to stop them. In a constant game of cat and mouse, new and innovative ways to exploit subdomains arise daily—as one door is barricaded, the next is exploited.

The big names in technology are not immune. ZDNet reported that Microsoft had been made aware of 21 vulnerable msn.com subdomains in 2017, and then of another 142 misconfigured microsoft.com subdomains in 2019.

Here are a few examples of “dirty-tricks” that are known to have succeeded on subdomains in the past:

  • Cloning an e-commerce card payment form. The information is used in many ways, for example identity theft, phishing, account theft, or to receive payment for non-existent goods that are never shipped.
  • Setting up a login screen to capture usernames, passwords and two-factor authentication details. This could facilitate a Selenium process running in the background that logs in to the authentic site and bypasses two-factor authentication.
  • Capturing usernames and passwords from a password vault (1Password, LastPass, Bitwarden).
  • Sending phishing emails to acquire customer login details.

In addition to these hit-and-run schemes, many bad actors are in it for the longer haul and could use a hijacked subdomain to:

  • Set up a spyware or torrent file-sharing system.
  • Manipulate search engine rankings. This is possible when the hreflang or canonical tags are set.
  • Attack downstream services and sites that use the source server as a proxy source.

The possibilities really are endless and expand daily.

Vulnerable Subdomains Result from DNS Misconfiguration

Subdomains present an unsafe attack vector when their DNS records become misconfigured or abandoned. Research shows that this is extremely common and widespread.

It can happen very easily. Consider this scenario:

  • Your company discontinues using an external service but neglects to remove the DNS record pointing to it, for example a CNAME to yourcompany.externalservice.com.
  • A bad actor signs up for the service and claims the subdomain.
  • The service does not verify the DNS setup.
  • The attacker now has control over the subdomain and is free to use it at will.

It really is as simple as that! When you factor in wildcard DNS entries, the problem is exacerbated.
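To illustrate how quickly such a dangling record can be spotted, here is a minimal sketch in Ruby using the standard Resolv library; the subdomain is a hypothetical example, and this is a first-pass check only (some services keep wildcard DNS and instead serve an “unclaimed” page):

```ruby
require 'resolv'

# Hypothetical example: a subdomain whose CNAME may still point at a
# discontinued external service.
subdomain = 'reports.yourcompany.com'

Resolv::DNS.open do |dns|
  begin
    # Where does the subdomain's CNAME point?
    target = dns.getresource(subdomain, Resolv::DNS::Resource::IN::CNAME).name.to_s
    # If the target no longer resolves, the record is dangling and may be claimable.
    Resolv.getaddress(target)
    puts "#{subdomain} -> #{target}: target still resolves"
  rescue Resolv::ResolvError => e
    puts "#{subdomain}: #{e.message} (no CNAME, or dangling target)"
  end
end
```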

Cloud Hosting Is More At Risk

Cloud or hybrid-cloud infrastructures are inherently more susceptible to subdomain hijacking. The main reason is that they offer cheap, API-driven scaling to initialise servers on demand, which smaller web hosting providers or dedicated-server hosts typically do not.

Some examples of major cloud hosting providers include:

  • Amazon Web Services (AWS)
  • Azure
  • Digital Ocean
  • Vultr

The prerequisites for carrying out such an attack are:

  • Basic scripting knowledge, in a language like Ruby, Python or PHP.
  • The ability to use the CLI or SDK that the hosting provider typically makes available.

Proof of Concept Tutorial: Step-by-step Guide to Creating a Honeypot

In this section, we take you through the process of setting up a honeypot subdomain to attract bad actors.

Information

Our honeypot is a sacrificial server with a domain set up for the sole purpose of attracting traffic. The honeypot records attempts to access any domains associated with the IP address.

We chose Amazon Web Services (AWS) as the platform for this experiment, primarily because it offers an easy-to-use API and has a large customer base.

The sections that follow provide details of each step in the process.

Information

To replicate this tutorial you need a basic understanding of the Ruby scripting language. You may need to install the AWS CLI.

Defining the Scope

To keep our proof of concept realistic and affordable, we set the following objectives and limitations of scope:

  • Scale: Initialise 1,000 AWS EC2 instances. We need enough IP addresses for a sample set; Amazon now owns millions of IP addresses.
  • Instance type: Use the Amazon T4g instance type. We use the smallest possible AWS server, currently billed at $0.0042/hour.
  • Time: 2-3 hours. Amazon has a 512 vCPU limit on standard AWS accounts, so the attack needs to be performed over a few hours to stay within the limit (a rough calculation is sketched after this list).
  • Method: Create a trustworthy honeypot. This allows us to capture HTTPS traffic. See Accepting HTTPS Connections below for more.
  • Data centre: Use the us-east-1 data centre. This is quite a popular AWS region.
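The time constraint follows from simple arithmetic; a minimal sketch, assuming 2 vCPUs per instance for the smallest T4g size (our own back-of-the-envelope figures, not an AWS quota guarantee):

```ruby
# Why the run spans a few hours under the 512 vCPU standard-account limit.
instances  = 1_000
vcpu_limit = 512
vcpus_each = 2    # assumption: the smallest T4g size has 2 vCPUs

concurrent = vcpu_limit / vcpus_each            # 256 instances at any one time
batches    = (instances.to_f / concurrent).ceil # 4 hourly batches of servers

puts "#{concurrent} concurrent instances => #{batches} one-hour batches"
```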

Setting Up an Amazon Machine Image (AMI)

Amazon requires an AMI (default server image) that provides the information necessary to launch instances.

We use an Amazon Linux AMI (ID: ami-0947d2ba12ee1ff75) with a relaxed security group whose ingress rules allow SSH, HTTP and HTTPS connections over TCP (a scripted version of this setup is sketched after the list):

  • Port: 22, IP: CIDR: ‘0.0.0.0/0’
  • Port: 80, IP: CIDR: ‘0.0.0.0/0’
  • Port: 443, IP: CIDR: ‘0.0.0.0/0’
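For reference, here is a minimal sketch of how this security group and a batch of instances can be created with the AWS SDK for Ruby (the aws-sdk-ec2 gem). The group name, key pair name, batch size and instance type are hypothetical assumptions; the ec2.rb script described later does this at scale.

```ruby
require 'aws-sdk-ec2'   # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Hypothetical group name; the ingress rules mirror the list above.
sg = ec2.create_security_group(
  group_name:  'honeypot-open',
  description: 'Allow SSH, HTTP and HTTPS from anywhere'
)

ec2.authorize_security_group_ingress(
  group_id: sg.group_id,
  ip_permissions: [22, 80, 443].map { |port|
    { ip_protocol: 'tcp', from_port: port, to_port: port,
      ip_ranges: [{ cidr_ip: '0.0.0.0/0' }] }
  }
)

# Launch a small batch of instances from the AMI above. The instance type is
# an assumption and must match the AMI's architecture.
ec2.run_instances(
  image_id:           'ami-0947d2ba12ee1ff75',
  instance_type:      't4g.nano',
  key_name:           'your-key-pair',   # hypothetical key pair name
  security_group_ids: [sg.group_id],
  min_count:          1,
  max_count:          10
)
```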

Setting Up the Web Server

Our goal is to set up a web server that can consume all incoming traffic on ports 80 and 443.

The default Nginx configuration file (/etc/nginx/nginx.conf) does this as is, with the directive server_name _;.

To set up the web server:

  1. Initialise the server and security group, and then connect to it:
ssh -i your.pem ec2-user@ip-address
  2. Update the server to pull the latest YUM packages:
sudo yum update -y
  3. Install Nginx from the Amazon Linux Extras repository:
sudo amazon-linux-extras install nginx1 -y
  4. Include the incoming Host HTTP header in the access log by using the sed command to replace the $request value with the individual request parts:
sudo sed -i "s/\$request/\$request_method \$scheme\:\/\/\$host\$request_uri \$server_protocol/g" /etc/nginx/nginx.conf
Information

The Nginx access.log does not include the incoming Host HTTP header by default.

Accepting HTTPS Connections

To provide maximum exposure, we need to capture HTTPS traffic on port 443 through a subdomain. However, without a valid certificate, every visitor connecting over HTTPS will see a warning that the SSL certificate is not valid for the requested domain.

We generated a subdomain that looks like it could contain sensitive information: client-reports.prod.merj.com.

Information

There are two ways to handle this: rely on clients that bypass SSL verification, or obtain a valid SSL certificate for the honeypot subdomain. Both approaches are described below, but we use the latter for the rest of the tutorial.

Bypassing SSL Verification

A vast number of penetration testing tools are available, and they will most likely have already scanned all existing domains that have used this IP address. These pen testing tools can ignore SSL warnings. In addition, HTTP clients and libraries, such as cURL, frequently include an option to switch off SSL peer verification. For example:

curl https://wrong.host.badssl.com/
curl: (60) SSL: no alternative certificate subject name matches target host name 'wrong.host.badssl.com'
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.

Use the -k argument in cURL to bypass the SSL verification, as follows:

curl -k https://wrong.host.badssl.com/
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="shortcut icon" href="/icons/favicon-red.ico"/>
  <link rel="apple-touch-icon" href="/icons/icon-red.png"/>
  <title>wrong.host.badssl.com</title>
  <link rel="stylesheet" href="/style.css">
  <style>body { background: red; }</style>
</head>
<body>
<div id="content">
  <h1 style="font-size: 12vw;">
    wrong.host.<br>badssl.com
  </h1>
</div>

</body>
</html>

Obtaining an SSL Certificate

In case the bad actor runs an automated penetration testing scan and then verifies the subdomain manually, we use the honeypot subdomain client-reporting.prod.merj.com so that they believe they may gain access to our data. In turn, we record the request in our access logs.

To obtain an SSL certificate, we:

  1. Install Certbot for Amazon Linux.
Information

Certbot is a wrapper for Let’s Encrypt that will provide the SSL certificate.

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
sudo yum-config-manager --enable epel*
sudo yum install -y certbot
  2. Add the following location to the Nginx configuration to respond to the ACME challenge:
sudo vi /etc/nginx/nginx.conf
```
include /etc/nginx/default.d/*.conf;
# Below this line add:
        location ^~ /.well-known/acme-challenge/ {
            default_type "text/plain";
            root /var/www/html;
        }

        location = /.well-known/acme-challenge/ {
            return 404;
        }

# exit vi
```
  3. Ensure that the directory for the root is available:
sudo mkdir -p /var/www/html/.well-known/acme-challenge
  4. Start Nginx now and enable it on every server start:
sudo chkconfig nginx on
sudo service nginx start
  5. Make the Certbot request to obtain the SSL certificate:
sudo certbot certonly --webroot -w /var/www/html -d {YourDomain} --agree-tos --email {YourEmailAddress} -n
  6. After successfully obtaining the SSL certificate, uncomment the server block for port 443 and include the Let’s Encrypt certificates:
vi /etc/nginx/nginx.conf 

Leave the following line commented:

    # ssl_ciphers PROFILE=SYSTEM;
  7. Restart Nginx:
sudo service nginx restart

Logging Entries to Persistent Storage

Individual servers will create access logs under /var/log/nginx/access.log. We need to aggregate these logs into persistent storage in order to analyze them. For this purpose, we use Logstash, which has the ability to push to an AWS S3 bucket.

To aggregate the access logs:

  1. Install Logstash:
    # Install Java
    sudo yum install java-1.8.0-openjdk -y
    echo "[logstash-6.x]
    name=Elastic repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md" > logstash.repo

    sudo mv logstash.repo /etc/yum.repos.d/
    sudo yum install logstash -y

    sudo chown -R logstash:logstash /tmp/logstash

    # Reduce the JAVA heap sizes down to 500mb

    sudo sed -i "s/-Xms1g/-Xms500m/g" /etc/logstash/jvm.options
    sudo sed -i "s/-Xmx1g/-Xmx500m/g" /etc/logstash/jvm.options

    # Set permissions of Logstash to be root. Don't do this for production systems!

    sudo sed -i "s/LS_USER\=logstash/LS_USER\=root/g" /etc/logstash/startup.options
    sudo sed -i "s/LS_GROUP\=logstash/LS_GROUP\=root/g" /etc/logstash/startup.options

    sudo /usr/share/logstash/bin/system-install
   

2. Add a filter mutation so that each log entry is written on its own line:

Information

Logstash does not do this by default.


cat <<EOT >> ~/nginx-logs.conf
    input {
        file {
    	    path => ["/var/log/nginx/access.log"]
    	    sincedb_path => "/dev/null"
        }
    }
    filter {
        mutate {
    	    update => {"message" => "%{message}
        "}
        }
    }
    output {
        stdout {}
        s3 {
    	    access_key_id => "{YourKeyHere}"
            secret_access_key => "{YourSecretAccessKey}"
            bucket => "{YourS3Bucket}"
            additional_settings => {
                "force_path_style" => true
            }
            time_file => 1
            codec => "plain"
        }
    }
    EOT
   
sudo mv ~/nginx-logs.conf /etc/logstash/conf.d/

3. Finally, start the service and ensure that it is started on each restart:

sudo service logstash start
sudo chkconfig logstash on

Running Subdomain-hijacking Scripts

Information

The hijacking scripts are available in our GitHub repository.

We use two subdomain-hijacking scripts:

  • ec2.rb: Creates the security permissions, SSH keys and initialises servers. The configuration settings are available at the top of the code.
  • reporter.rb: Checks the AWS S3 logs. It reports when domains are found and adds them to a “safe list” to ensure those instances are not terminated. It also terminates instances that have not successfully found a domain.

As AWS bills by the hour, we listen for traffic for one hour at a time. This keeps our costs to a minimum.

To do this, we use a central computer to run a Cron task every minute. This task initialises and terminates instances that have been running for close to an hour and that do not have any valid subdomains pointing to them.

* * * * * /bin/bash -c 'export PATH="$HOME/.rbenv/bin:$PATH"; eval "$(rbenv init -)"; cd /Users/work/Projects/subdomain-hijacking; ruby ./reporter.rb >>/Users/work/Projects/subdomain-hijacking/cron-reporter.log 2>&1 && ruby ./ec2.rb >>/Users/work/Projects/subdomain-hijacking/cron-ec2.log 2>&1'
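For illustration, here is a simplified sketch of the kind of recycling check reporter.rb performs, assuming the aws-sdk-ec2 gem and a hypothetical safe-list.txt file of instance IDs that already have domains pointing at them:

```ruby
require 'aws-sdk-ec2'   # gem install aws-sdk-ec2

ec2 = Aws::EC2::Client.new(region: 'us-east-1')

# Hypothetical safe list: instance IDs that already have domains pointing at them.
safe_list = File.exist?('safe-list.txt') ? File.readlines('safe-list.txt', chomp: true) : []

# Collect running instances that are close to the hourly billing boundary and
# have not attracted any valid domains.
doomed = []
ec2.describe_instances(
  filters: [{ name: 'instance-state-name', values: ['running'] }]
).reservations.each do |reservation|
  reservation.instances.each do |instance|
    next if safe_list.include?(instance.instance_id)
    age_minutes = (Time.now.utc - instance.launch_time) / 60.0
    doomed << instance.instance_id if age_minutes > 55
  end
end

ec2.terminate_instances(instance_ids: doomed) unless doomed.empty?
```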

Analyzing the Logs

To analyze the logs, we:

  1. Download the AWS S3 log files and combine them.
cat *.log > all.log

  2. Parse the host from the files and ignore all entries that connect via the server’s IP address:

ruby extract-domains.rb all.log > domains.txt
  3. To verify that the hijacked hosts are resolving to our test servers, perform a DNS lookup. Run the list through a bulk DNS lookup service such as InfoByIP, or script the lookup as sketched after this list.
  4. Check which resolved IP addresses match the list of IP addresses that were retained in the Amazon Web Services (AWS) account as potential servers. Any that match are verified hijacks.
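If you prefer to script the verification rather than use a bulk lookup service, here is a minimal sketch, assuming the domains.txt output from the previous step and a hypothetical our-ips.txt file listing the IP addresses retained in the AWS account:

```ruby
require 'resolv'

# Hypothetical inputs: hosts extracted from the logs, and the IP addresses
# retained in the AWS account as potential honeypot servers.
hosts   = File.readlines('domains.txt', chomp: true)
our_ips = File.readlines('our-ips.txt', chomp: true)

hosts.each do |host|
  begin
    ip = Resolv.getaddress(host)
    puts "#{host} -> #{ip} (verified hijack)" if our_ips.include?(ip)
  rescue Resolv::ResolvError
    # The host no longer resolves; nothing to verify.
  end
end
```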

Whitelisting Hosts

Our analysis of the logs revealed various penetration-testing applications attempting to attack our test servers by either:

  • Testing common URL paths.
  • Injecting HTTP Host headers.

We add the hosts pointing to our servers to our whitelist to ensure the scripts do not terminate these servers now that we have control of them and their IP addresses.

Potential Next Steps

  • This analysis does not cover scenarios where the test server is a proxy source. We would need to adapt the scripts to check all of the downstream IPs – taking considerably more time and resources.
  • Rebuild codebase to create an end-to-end automated process.

Setting Up a DNS Governance Policy

As we have established, subdomain hijacking does not require genius-level technical skills. On the contrary, it is probably simpler than you imagined.

To keep their web assets secure, companies should incorporate these policies into their cybersecurity action plan:

  • Include a DNS policy when deprecating public-facing servers.
  • Use only reserved IP addresses for critical infrastructure.
  • If third-party vendors are used to host content or as a proxy source, ensure the DNS records and proxy configurations reflect a replacement server when the contract ends.
  • Audit subdomains frequently in case any of the above are missed (a minimal audit sketch follows).
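As a starting point for such an audit, here is a minimal sketch, assuming a hypothetical subdomains.txt inventory and a known-ips.txt list of the IP addresses you actually control:

```ruby
require 'resolv'

# Hypothetical inputs: an inventory of your subdomains and the IP addresses
# you know you control.
subdomains = File.readlines('subdomains.txt', chomp: true)
known_ips  = File.readlines('known-ips.txt', chomp: true)

subdomains.each do |name|
  begin
    ip = Resolv.getaddress(name)
    puts "REVIEW: #{name} resolves to unrecognised IP #{ip}" unless known_ips.include?(ip)
  rescue Resolv::ResolvError
    puts "REVIEW: #{name} does not resolve (possible dangling record)"
  end
end
```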

Do you have more governance tips? Send me a tweet @ryansiddle.