5 misconceptions about pentest results

Especially for companies that operate larger infrastructures, a pentest can often provide more insights than is typically assumed. We show you how to interpret pentest results correctly and get the maximum benefit from them.

One of the main reasons this potential remains untapped is a skewed perspective on the results of a test. Typical assumptions are:

Misconception 1: A pentest finds all vulnerabilities that are present on the target

A first important realization is that penetration tests can never detect all vulnerabilities on a target system. There are two reasons for this: first, the test is limited in time, and second, in most tests not all configuration parameters of the system are known.

Conclusion: A pentest alone cannot make a target application secure. A pentest report without critical findings does not mean that the application contains no vulnerabilities at all.

Consequence: Use the full range of testing options for application audits: code reviews, peer reviews, secure software development training. The earlier vulnerabilities are discovered, the greater the benefit. Early code reviews focusing on weaknesses, code complexity and “bad smells” can uncover errors in the design, the data model or the developers’ understanding. A pentest usually only takes place at a release stage of the application where major changes are no longer possible.

If vulnerabilities are identified in a pentest, it should always be evaluated whether these errors may also be present in other application components. Particularly in the case of input validation vulnerabilities, it is often not possible to identify all vulnerable parameters in a test. It should also be analyzed whether the faulty design may have been used in other applications.

Misconception 2: A pentest makes a statement about how secure the system is against (future) attacks

A pentest is always just a snapshot of currently known vulnerabilities and of the target system in its configuration and version at the time of the test. Just because a current report shows “low” as the overall risk does not mean that no new vulnerability will be published in the future that compromises the entire system.

Pentests should therefore not be seen as a one-off measure, but rather as a method for regularly checking an application or IT system for known vulnerabilities.

Misconception 3: Risk assessment equals priority

We often see that pentest results are processed further without a more detailed discussion of the risk. The risk assessment of the pentesters is seen as “set in stone”. Here we would like to point out that a discussion of the identified vulnerabilities with your IT security team can lead to a meaningful weighting or prioritization of the results. Depending on the threat model you have developed (e.g. in a risk/impact analysis as part of ISO 27001 certification), there may be vulnerabilities whose elimination should be prioritized differently than the external assessment of the pentest report.

As Pentest Factory, we are happy to support this discussion (e.g. in a joint meeting) in order to create an overall picture of the target system and risk in its environment.

Another aspect is the risk assessment system itself. When using the standard CVSS system (without environmental metrics), the overall risk is calculated from a formula that leaves us as testers little room for context-dependent upgrading or downgrading of risks. For example, you can only choose between “High Attack Complexity” and “Low Attack Complexity” for the “Attack Complexity” metric. Accordingly, attacks of medium complexity cannot be mapped here. This is similar for the other metrics in the CVSS system. This means, for example, that we may have to report a finding we would rate as medium criticality as “high risk”, simply because the CVSS formula produces that score.

In general, it makes sense to discuss the individual results and assigned risks in the team.

Misconception 4: Fixing vulnerabilities solves the problem

The result of a pentest is a final report. This lists identified weaknesses and provides specific recommendations for remedying the findings.

At first glance, it appears that the main task after the test is completed is to eliminate these weaknesses.

However, as a pentest service provider, we often see that remedying vulnerabilities is the only activity resulting from a test. For this reason, it is all the more important to understand that the real value of the pentest lies in the identification of faulty processes. It is worth asking about every finding: “Why did this vulnerability occur? How can we correct the process behind it?”

This is the only way to ensure that, for example, a missing patch management process or inadequate asset management is corrected and that software deployments are not running with missing updates again a month later.

Since we very often see that a root cause analysis is omitted after the pentest has been completed, we would like to show a second example in which understanding the process that went wrong can bring significant added value in terms of security:

  • A pentest report states that a file with sensitive environment variables was saved in the web directory of the application server. The file, named “.env”, was already reported to the customer during the pentest, and the customer immediately removed it. If the customer stops their remedial measures at this step, they skip a complete root cause analysis and possibly overlook other existing vulnerabilities.
  • Let’s ask ourselves the question: Why did the .env file make it into the web directory? After analyzing the development repository (e.g. Git), we discover that a developer created the file two months before release and stored sensitive environment variables in it, including the AWS secret key and the passwords of the administrator account. The developer forgot to exclude the file from version control, which is achieved by adding it to the “.gitignore” list.
    • How can we rectify this error in the future?
      • Finding 1: Possible cause is a developer misunderstanding: “Developers do not understand the risk of hardcoded passwords and keys”.
        –> Awareness seminar with developers on the topic of secure software development

        –> Monthly session on “Secrets in the source code”

      • Finding 2: The fault was not noticed for two months and was only discovered in the pentest.
        –> Options for automatic detection of secrets: “Static source code analysis”, “Automated analysis of commits”, “Automated scans of the source code repository”

        –> Customization of the CI/CD pipeline to automatically stop sensitive commits

      • Finding 3: Poor management of sensitive keys
        –> Introduce a central secrets management tool – this also improves enforcement of password policies and password rotation
    • Have we made this mistake several times in the past?
      • Insight 1: Developers have not just programmed one application. We find that the same error has also been made in a neighboring application.

        –> Pentest result can be transferred to similar systems and processes

      • Insight 2: The version management tool contains a history of all changes ever made

        –> Analysis of the entire repository for sensitive commits (see the sketch after this list)
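
For the last point, a minimal sketch of how such a repository analysis could look on the command line; the keywords, paths and file names are placeholders, and dedicated tools such as gitleaks or truffleHog ship far more complete rule sets:

# Show every commit that ever added or removed a line containing "AWS_SECRET"
git log -p -S "AWS_SECRET" --all

# Quick keyword search over the full history (patterns are examples only)
git log -p --all | grep -nE "(AWS_SECRET|PASSWORD|BEGIN RSA PRIVATE KEY)"

# List every .env file that was ever committed, in any branch
git log --all --name-only --pretty=format: | sort -u | grep -E '(^|/)\.env$'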

Misconception 5: High risks in the final report = “The product is bad”

Just because a “critical” vulnerability is identified does not mean that the development or the product is “bad”. Products that provide a particularly large number of functions also have a particularly large attack surface. The best examples are well-known products such as browsers or operating systems, which release monthly security patches.

As experts in the field of cybersecurity for many years, we see that the biggest problems arise from a defensive mindset and an inadequate response to risks. Specifically, the following fatal decisions are made:

  • Wrong decision 1: “The less we disclose about the vulnerability, the less negative attention we generate.”

    –> Maximum transparency is the only correct response, especially after a vulnerability becomes known. What exactly is the weak point? Where does this occur? What is the worst-case scenario? Maximum transparency is the only way to determine the exact cause and ensure that all parties involved understand the risk sufficiently to initiate countermeasures.

    The vulnerability should never be seen as the fault of an individual or the company, but as an opportunity to react. The response to a vulnerability (not the vulnerability itself) determines to a large extent what damage can actually be done.

  • Wrong decision 2: Persons responsible for the weakness are sought.
    –> This leads to a fatal error culture in the company, where errors are no longer openly communicated and corrected.
    –> Errors are interpreted as failure. Learning effects and joint growth do not materialize.
  • Wrong decision 3: In order to make the identification of a critical vulnerability less likely in advance, a very limited scope is deliberately selected for the pentest. Here are a few examples:
    • Only one specific user front end is considered “in-scope”. Administrative components must not be tested.
    • A user environment is provided for the pentest that contains no or insufficient test data, which means that essential application functions cannot be tested.
    • “No data may be sent in the production environment.” The pentest therefore cannot effectively test input processing.
    • The use of intrusion prevention systems or web application firewalls is not disclosed. The pentest is hindered by these systems, and the result no longer adequately reflects the risk of the application itself.

    –> These or other restrictions lead to an incorrect risk image of the target system. Instead of recognizing vulnerabilities as early as possible, the complexity and risk potential of the application grows step by step. If a vulnerability is detected late, it becomes more time-consuming and therefore more costly to close it.

Conclusion

As a pentest service provider, it is important to us that our customers get the maximum benefit from a pentest. For this reason, we often hold team discussions to identify trends and make the best possible recommendations. This article is the result of these discussions over the last few years and aims to open up new perspectives on the pentest results.

Do you have any questions or need support with pentesting, secure software development or improving internal processes? Please use the contact form; we will be happy to assist you.

What to look for in pentests? Quality features explained.

In discussions with new customers, we often see that the market for penetration testing services appears opaque and that it is difficult to decide on a service provider. The focus is often on the price of a pentest, while other decision criteria are neglected.

With this article, we want to provide you with a basic guide to qualitatively evaluate penetration testing service providers and simplify your decision-making process.

Basic qualification of penetration testers

One issue that arises again and again in pentesting is experience. Attacking computer systems requires creativity, flexibility, and an understanding of a breadth of technologies and platforms. While several years of experience as a developer or security officer can make it easier to get started, they are no substitute for practical knowledge of how security mechanisms work and how they can be attacked.

For this reason, we recommend focusing on how many years a penetration tester has been performing tests and what practical qualifications they hold. Below we have listed some commonly encountered qualifications and what knowledge they effectively attest.

As can be seen from the descriptions, of the certifications listed here only the OSCP demonstrates actual practical knowledge in compromising computer systems. We recommend that you hire testers who hold a practical qualification similar to the OSCP. To verify successful qualification and validity, we recommend asking the service provider for proof (e.g., a digital link to Credly or Credential.net, or a scanned copy of a tester’s certificate).

Specialization of penetration testers

As described before, the OSCP certification provides a good reference point to verify essential competencies of a penetration tester. OSCP certification demonstrates knowledge of enumerating and testing individual hosts and services.

However, since modern applications have grown extremely in complexity, we recommend asking the service provider what specializations the individual testers have and having these specializations proven (e.g., certifications, customer references, CVE records). Especially for complex test objects, the tester should be familiar with the technologies and have specialized in the corresponding area. This is especially true in areas such as web applications, API interfaces, Active Directory, mobile application testing, SAP testing and many more.

Offer and scoping

When you request a quote, the quote should be tailored to the application or infrastructure you are testing. To do this, the service provider should find out what the scope of the test object is and, on this basis, make an estimate of how many days of testing are required.

If the service provider does not ask for details about your test object and sends you a quote “blindly”, it may be that too few days have been calculated, which means that the application can be tested less deeply or even that certain components are omitted. Alternatively, it can equally happen that too many days are estimated for the test object and you simply have to pay for them, even though the test could have been completed sooner.

Tip: If you approach several service providers at the same time (e.g. in a tender), you should describe the test object as precisely as possible (technologies used, typical application functions and processes, number of hosts). This information makes it easier to create an appropriate quote and reduces the likelihood of choosing the wrong test scope or methodology.

Final Report

After a penetration test has been performed, the final report is the key document that records the results of the pentest. Therefore, pay particular attention to the quality of the final report and obtain a sample report in advance of the engagement.

Each finding should include a clear description, with screenshots, of how to identify and exploit the vulnerability so that you or your developers can understand the issue and recreate it if necessary. Also, each finding should include an explanation of what risk has been assigned to the vulnerability and what this assignment is based on (e.g., using risk assessment methods such as CVSS or OWASP).

The report should clearly list all the framework parameters of the test and answer typical questions such as:

– When was the test performed (period)?
– What was tested (test scope)?
– What, if anything, was not tested (scope)?
– How was it tested (methodology)?
– Who performed the test (contact person)?
– What risk assessment method was used?
– Which tools, scripts and programs were used?

Ask the service provider for a sample report and compare reports to choose the ideal report format for you. Also make sure the report includes a management summary that summarizes the test results in non-technical, management-level language. This is especially important because the details of findings are often very technical or complex and can only be understood by technical personnel.

Vulnerability scan versus penetration test

Often the terms vulnerability scan and penetration test get mixed up. A vulnerability scan is an automated procedure by which a program independently or based on certain scan parameters tests the test object for vulnerabilities. No manual testing by a human is performed here.

Be careful when a vulnerability scan is advertised instead of a penetration test. Many vulnerabilities are contextual and can only be identified through manual testing. In addition, vulnerability scanners can return false positive results, which are not actual vulnerabilities.

To test efficiently, one or more automated scans can be part of a penetration test. However, you should ensure that the service provider has a focus on manual verification of results and manual testing of the test object. The automatically generated test results should not be included directly in the report, only after manual verification. Each finding should include a detailed account of how the vulnerability was verified.

Technical and legal basics

Before a penetration test can be legally performed, it is mandatory to obtain the hoster’s permission. If you do not host your application or infrastructure yourself, be sure to ask the hoster for permission to test it. Exceptions to this are some cloud hosters that explicitly allow penetration testing (e.g. Microsoft Azure, Amazon AWS, Google Cloud). Make sure that all approvals have been given before starting the test. The penetration test service provider should raise this issue on its own and be sure to clarify it with you before testing begins.

In order to clearly assign which attacks are carried out by your service provider and which attacks represent a real threat, the service provider should carry out the tests from a fixed IP address. To do this, ask your service provider whether such a static IP address exists and request it in advance of the tests. You can also search your log files for this IP address during the test and gain insight into what volume of requests the test generated. Be cautious of a service provider that does not use a unique IP address for its testing. In addition, always obtain the contact information of the person performing the technical tests. This way you can contact a technical contact person directly in case of problems or technical questions and get feedback immediately. Furthermore, this allows you to rule out that a subcontractor was commissioned to carry out the tests in a non-transparent or possibly unofficial manner.
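
As a minimal sketch of such a log review, assuming an Nginx access log in the default format and a placeholder tester IP address of 203.0.113.10:

# Total number of requests from the tester's static IP address
grep -c "203.0.113.10" /var/log/nginx/access.log

# Requests from that IP address broken down by day
grep "203.0.113.10" /var/log/nginx/access.log | awk -F'[' '{print substr($2, 1, 11)}' | sort | uniq -c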

Specific procedure

Web application testing

A penetration test of a web application should follow the publicly available OWASP testing methodology. The OWASP consortium provides procedures for testing all current vulnerability categories, and these should definitely be covered.

If you want to test an application that provides a user login and protected user areas, we recommend performing a “grey-box” test. Here, test accounts are provided to the service provider, allowing internal areas behind a login to be tested more efficiently and granularly. Pay attention to whether the service provider suggests this test methodology or, if necessary, actively ask for the test methodology.

If an API interface is to be tested, the service provider should request interface documentation or a collection of sample API requests (e.g. Swagger.json, Postman Collections). Without API documentation, testing APIs is not effective because endpoints and parameter names have to be guessed. This can result in important endpoints being overlooked and vulnerabilities not being detected.

IT infrastructure testing

An infrastructure test where multiple host systems are tested for their available services usually consists of several automated scans at the beginning of a test combined with manual test units and a subsequent verification of the scan results.

Active Directory Assessment

Active Directory environments are very dynamic and require specialized knowledge beyond a basic qualification such as OSCP. Make sure the tester of your AD environment has advanced training and certifications in Windows and Active Directory security. These include, for example, the hands-on Certified Red Team Professional (CRTP) or Certified Red Team Expert (CRTE) certifications, but also many other trainings in the area of Azure AD and Windows environments.

Attacked via SMS? Smishing examined


Introduction

Almost everyone is familiar with the issue of spam: you receive e-mails telling you about unbeatable discounts, millions in winnings for your wallet or a blocked bank account. Often these are already filtered by spam filters before delivery or are unmasked by the numerous spelling mistakes and a strange sender address.

However, during our daily work at Pentest Factory, we came across a much more effective method of tricking us into clicking on a malicious link: after a brief call to our mobile device, we received the following SMS:


Analysis

We open the link in a locked down virtual machine:


After a simple 301 redirect we reach the following page:


At first glance, however, there is nothing out of the ordinary to be found. Even after analyzing the source code of the page, no special peculiarities can be seen.

However, we remember that the SMS was sent to a mobile device. Maybe it is possible to trigger a different behavior with a mobile user agent. We change our user agent to an ordinary Android Firefox browser. And look! We are now redirected to another page:


If we analyze the code of the page, we can see that it consists of 95% JavaScript code:


In our analysis of the code, we note that a series of checks are run to enumerate the properties of the browser and the underlying device:


These checks are located in separate functions (A1 to A91). These are iterated in a for loop and all parameters are queried. Afterwards all parameters returned by the check functions are converted to a JSON string.

This JSON string is then AES CBC encrypted using the JavaScript library “CryptoJS”:


The individual function calls and their names are obfuscated to make it difficult for a reader to understand the code:

An encoding function like 0x4ee32b takes an array position and a key as parameters. The section of JavaScript code we have called “encoded JavaScript” is a large array containing encoded function names (a so-called lookup table). If the function 0x4ee32b(index, key) is called, the value is read in the array at the corresponding index and this is decoded by means of the key parameter. This results in a final function name. Example:


This way, CryptoJS[‘enc’][‘Utf8’][‘parse’] is called, which is just a different way of writing CryptoJS.enc.Utf8.parse(string).
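
To illustrate the principle (not the actual malware code), here is a strongly simplified sketch of such a lookup-table decoder as a shell script; the entries are merely Base64-encoded, whereas the real script uses a key-dependent decoding routine and obfuscated names:

#!/bin/bash
# Simplified illustration: an array of encoded strings (the "lookup table")
# and a function that returns the decoded entry for a given index.
LOOKUP=( "Q3J5cHRvSlM=" "ZW5j" "VXRmOA==" "cGFyc2U=" )

decode() {
  echo -n "${LOOKUP[$1]}" | base64 -d
}

# Reassembles the call target: CryptoJS[enc][Utf8][parse]
echo "$(decode 0)[$(decode 1)][$(decode 2)][$(decode 3)]"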

We stopped our debugger at the point where the array parameter is passed to the encryption routine:


You can see that, among other things, the script reads out which user agent we use, the CPU, operating system, device manufacturer and browser, as well as many other parameters describing which functions are allowed or possible on our device.

These values are then encrypted, and the page transmits the encrypted string to another page:


Once we arrive there, the server constructs a new redirect pointing to another host:


This next host receives an encrypted URL that is passed as a GET parameter, which is then redirected to in the final step.

In our case, we are redirected to the “TikTok” app in the Google Play Store.

An article from Google (https://blog.google/threat-analysis-group/protecting-android-users-from-0-day-attacks/) describes how similar behavior was used in 2021 to infect Android devices via a 0-day vulnerability. There, a link was sent to the victims via e-mail. After clicking on the link, an exploit was executed in the browser to gain control over the underlying device. Similar to our example, the page finally redirected to a legitimate website.

Since we did not have a vulnerable Android device available, we can only guess whether the site we analyzed also had a 0-day exploit or another attack planned for our device. However, we can assume that the detailed and obfuscated probing of all system parameters of our device is a preparatory step to check in advance whether our device is compatible with an attack.

Conclusion

Be vigilant against phishing attacks – not only emails but also SMS messages can reference malicious sites and prompt you to install malware (disguised as a useful app) on your device. As can also be seen in Google’s article, just one click (to find out what is behind the link) can be enough to launch an attack on the device and take it over completely.

In general, we recommend the following measures to protect against such attacks:

  • Keep all your devices up to date. Install security updates on a regular (automated) basis. This includes mobile devices. Use a mobile device management system to verify that all devices in your organization are compliant with patch levels and security policies.

  • Do not click on any links that cannot be trusted. This is especially true for messages from unknown senders. If in doubt, the message should first be forwarded to your security team for review.

Note: These recommendations are not an exhaustive list. If you are unsure whether you and your company are adequately protected against phishing, please contact us – we have many years of experience in protecting against phishing attacks and offer various services on a technical and personnel level. This includes:

  • Technical examination of your mail servers regarding the resilience and detection capability of phishing mails as well as malware, incl. a final report with hardening measures and insight into which attacks were successful.

  • Simulation of a real phishing attack to investigate how easily your employees become victims (anonymized evaluation also possible). Our attacks can be carried out by e-mail, telephone or physically (e.g. infected storage media).

  • Educate your employees in phishing seminars to improve awareness of attacks

  • Periodic repetition of simulated attacks to track the progress of your anti-phishing measures and to examine their effectiveness

Preparing for a penetration test

Penetration tests are a useful tool for improving the security of IT infrastructures as well as applications. They help uncover security vulnerabilities early on and give the company a chance to fix security issues before they are actually exploited. Penetration tests are therefore an effective means of improving a company’s security or having it evaluated. Whether driven by customer requirements, an upcoming certification, or your company’s intrinsic need for security, a penetration test by an external service provider is often the requirement.

In advance of a penetration test, however, it is often assumed that one’s own company is secure in principle. Accordingly, the penetration test should identify few to no vulnerabilities, and remediation of these findings should be feasible in a timely manner; the use of monetary and human resources is predictable. A pentest is therefore promptly carried out by an external party, which evaluates the security of one’s own company and concludes with a final report. That definitely sounds like a desirable process, and not just for you!

Problem definition

In reality, however, the results of a penetration test often differ from the expected result. Even in the few cases where applications or IT infrastructures are securely built and operated, the associated pentest reports are usually never empty. A variety of deployed standard configurations of web servers, firewalls, or other network components provide a multitude of findings to report in a final report. Coupled with a standardized risk assessment methodology such as the Common Vulnerability Scoring System (CVSS), these misconfigurations often end up as a medium risk finding in the final report.

The result of the penetration test is then perceived by companies as a surprise, as the final report does not confirm the company’s security with a green seal. This often brings up the need for a second test (a retest) after corrective actions have been implemented. This leads to potentially unplanned and additional costs, as the initial result of the penetration test is not something one wishes to pass on to third parties such as customers, insurance companies, etc.

Even if this very probable course is addressed in advance of the penetration test and the offer already lists a retest, customers are often taken by surprise by the result of a penetration test. The combination of faulty expectations and cognitive biases such as “survivorship bias” (only other companies get hacked, and you yourself have never had an incident) often makes the emotional experience of a penetration test a negative one.

However, this does not have to be the case. By preparing for a penetration test independently, many findings can be avoided in advance. This brings you, the customer, a little closer to your desired goal of a clean final report and allows us penetration testers to test in a more targeted manner. It can also lead to earlier completion of the pentest project and effectively save costs if the pentest provider, such as Pentest Factory, bills transparently against the quoted maximum effort.

Means of preparation

The preparations for a penetration test are, of course, based on the framework parameters of the underlying test. Since there is a variety of test types and test objects, we will focus on general preparation options. These can and should be incorporated into an internal corporate process that is executed regularly and deliberately, regardless of whether a penetration test is due or not.

In addition, we would like to clarify that preparation in advance of a penetration test is not always desirable. For example, if you have been raising the lack of resources in your IT department with management for years and finally get approval to conduct a penetration test, do not do anything yourself up front. Temporarily glossing over the results would be the wrong approach here; after all, you are hoping for a realistic result that represents your company’s current situation and defenses. Only a negative result can signal deficiencies in your company to management. You should also steer clear of preparatory measures when conducting phishing campaigns, interview-based audits, or audits of your external IT service provider. After all, a real attacker does not announce himself beforehand to sensitize his victims.

The following preparation options are available in principle:

  • Performing an active port scan on your IT infrastructure components
    • Identification of unneeded network services and ports
    • Identification of faulty firewall rules
    • Identification of obsolete software components and vulnerabilities incl. CVEs
    • Identification of typical misconfigurations (see the curl sketch after this list)
      • Disclosure of version information in HTTP headers and software banners
      • Use of content from a standard installation (IIS default website, Nginx or Apache “It works” page)
      • Failure to redirect unencrypted HTTP to the secure HTTPS protocol
      • and many more
  • Execution of an automated vulnerability scan
    • Independent identification of so-called “low-hanging fruit” findings of a penetration test
    • Identification of obsolete and insecure software components incl. CVEs
    • Receive recommended actions to address the identified vulnerabilities.
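
As a small, hedged example of checking two of the misconfigurations mentioned above with on-board tools (the hostname is a placeholder):

# Does the server disclose software versions in response headers?
curl -sI https://www.example.com | grep -iE '^(server|x-powered-by):'

# Does unencrypted HTTP redirect to HTTPS? Look for a 301/302 status and an https:// Location header
curl -sI http://www.example.com | grep -iE '^(HTTP/|location:)'
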
In this article, we focus on performing active port scans using the popular port scanning tool “Nmap”. The results are visually processed as an HTML result file to identify problems. In addition, we touch on the use of automated vulnerability scanners such as Greenbone OpenVAS or Nessus Community.

Carrying out port scans

For all non-technical readers of this blog entry, we would like to briefly explain what port scanning is and what advantages it brings. Network services, such as a web server providing a homepage, are operated on so-called network ports. A network port is a unique address that can be used to clearly assign connections to applications. A website like this blog therefore waits for incoming connections on the standardized ports 80 and 443. Network port 80 is usually used for an unencrypted connection (HTTP) and should automatically forward to the secure port 443. Behind port 443 is the secure and encrypted HTTPS protocol, which loads the page content from the server and makes it available to your browser for display.

From an attacker’s point of view, these network ports are very interesting, as there are usually services or applications behind them that can potentially be attacked. There are a total of 65535 ports each for connectionless (UDP) and connection-oriented (TCP) communication protocols. The assignment between a port and the service behind it is standardized, but the configuration can vary freely and does not have to follow the standard. From an attacker’s point of view, these ports must therefore be enumerated in order to know which services can be attacked. From the company’s point of view, these ports are important to ensure that only those services are offered that are intended to be reachable. All unnecessary services should be closed or their front door secured by a firewall.

Network ports can be identified using freely available network tools. One of the best-known tools for identifying open ports and recognizing network services is called “Nmap”. This program is free, open source and runs under both Windows and Linux. It provides a large number of call parameters, which cannot all be explained and discussed in detail here.

Nevertheless, in this blog post we would like to provide you with the basic information you need to carry out your own scans. To successfully start an Nmap port scan, you only need the IP address(es) of the IT systems to be scanned. Alternatively, you can also provide the DNS host name and Nmap will automatically resolve the IP address behind it.

You can use the following command to start a port scan after installing Nmap. All open TCP ports in the range 0-65535 are identified and returned. An explanation of the call parameters can be found here.

nmap -sS -Pn --open --min-hostgroup 256 --min-rate 5000 --max-retries 3 -oA nmap_fullrange_portscan -vvv -p- <IP-1> <IP-2> <IP-RANGE>

As a result, you will receive three result files and an output in your terminal window as follows:


The results of the first port scan only list the network ports identified as open. In addition, we are shown the network service that should be located behind each port by default (per the RFC standard). However, as already mentioned, operators do not have to adhere to these port assignments and can operate their services behind any port. For this reason, we need a second port scan to reveal the real network services behind the ports that have now been identified as open.

To do this, we execute the following command, specifying the previously identified ports and the same IT systems to be scanned. An explanation of the call parameters can be found here.

nmap -sS -sV --script=default,vuln -Pn --open --min-hostgroup 256 --min-rate 5000 --max-retries 3 --script-timeout 300 -d --stylesheet https://raw.githubusercontent.com/pentestfactory/nmap-bootstrap-xsl/main/nmap-bootstrap.xsl -oA nmap_advanced_portscan -vvv -p <PORT1>,<PORT2>,<PORT3> <IP-1> <IP-2> <IP-RANGE>

After completion, we again receive three result files and a new output in the terminal window as follows:


The resulting “nmap_advanced_portscan.xml” file can also be opened with a browser of your choice to visually display the results of the port scan as an HTML web page. HTML reports are not supported by Nmap by default, but an individual stylesheet such as “https://raw.githubusercontent.com/pentestfactory/nmap-bootstrap-xsl/main/nmap-bootstrap.xsl” can be defined when the scan is called up, which visualizes the results as an HTML report. Furthermore, the results can be filtered and there is an option for CSV, Excel and PDF downloads.
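
If you prefer to generate the HTML report as a file instead of opening the XML in the browser, the same stylesheet can be applied locally, for example with xsltproc (assuming the stylesheet has been downloaded next to the scan output):

# Download the stylesheet once and render the Nmap XML output as a standalone HTML report
wget https://raw.githubusercontent.com/pentestfactory/nmap-bootstrap-xsl/main/nmap-bootstrap.xsl
xsltproc -o nmap_report.html nmap-bootstrap.xsl nmap_advanced_portscan.xml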


The results of the port scan should now be checked by a technical contact, preferably from the IT department. Make sure that only those network services are offered by your IT systems that are really necessary to fulfill the business purpose. In addition, take a close look at disclosed software versions and check whether the versions used are up-to-date and have been hardened with all available security patches. Also check the validity of identified SSL certificates and refrain from using insecure signing algorithms such as MD5 or SHA1. For internal IT infrastructures, you will usually have identified a variety of network services because you have scanned from a privileged network position within the organization. Here, firewall restrictions are generally implemented somewhat less strictly than for publicly accessible IT systems or services within a demilitarized zone (DMZ).
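
A small sketch of how the certificate of a single service can be checked manually with OpenSSL (the hostname is a placeholder):

# Show the signature algorithm and validity period of the presented certificate
echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null \
  | openssl x509 -noout -text | grep -E 'Signature Algorithm|Not Before|Not After'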

Execution of automated vulnerability scans

Vulnerability scans are usually performed using automated software solutions that check IT systems for known vulnerabilities and misconfigurations. The resulting findings are problems which are publicly known and for which automated scripts have been written to detect them. Please note that automated vulnerability scanners are not able to identify all potentially existing vulnerabilities. However, they are a great help to detect standard problems quickly as well as automatically.

The regular execution of automated vulnerability scans should be integrated into your internal business processes. This is independent of whether and how often you perform penetration tests. However, it is generally recommended to also have penetration tests performed by an external service provider, as both automated and manual techniques are used to identify vulnerabilities. Only by combining both types of testing by an experienced penetration tester can the majority of IT vulnerabilities be detected and ultimately fixed by you. Accordingly, implement a vulnerability management process in your company and scan your IT assets regularly and independently.

Several products can be used to perform an automated vulnerability scan. For this blog post, we are focusing on free variants. This includes the following products:

  • OpenVAS by Greenbone
  • Nessus Community by Tenable

The products are usually self-explanatory after an installation. After specifying the scan type and the IT assets to be checked, an automated scan takes place and the results are clearly displayed in the vulnerability scanner web application. All findings are usually reported with a description, risk assessment, and recommendation for remediation. Moreover, you get the possibility to export the results as CSV, HTML, PDF, etc.

CVE Quick Search: Implementing our own vulnerability database

Not only for penetration testing is it interesting to know which vulnerabilities exist for a certain software product. From the perspective of an IT team, too, it can be useful to quickly obtain information about a deployed product version. So far, various databases have existed for these queries, e.g., https://nvd.nist.gov/vuln/search, https://cvedetails.com or https://snyk.io/vuln

However, over the last few years we have identified several issues with these databases:

  • Many databases only index vulnerabilities for certain product groups (e.g., Snyk: Web Technologies)
  • Many databases search for keywords in full-text descriptions. Searching for specific product versions is not precise.
  • Many databases are outdated or list incorrect information

Figure: Incorrect vulnerability results for Windows 10

Figure: Keyword search returns a different product than the one originally searched for

This is why we decided to implement our own solution. We considered the following key points:

  • Products and version numbers can be searched using unique identifiers. This allows a more precise search query.
  • The system performs a daily import of the latest vulnerability data from the National Institute of Standards and Technology (NIST). Vulnerabilities are thus kept up to date and have a verified CVE entry.
  • The system is based on Elastic Stack https://www.elastic.co/de/elastic-stack/ to query and visualize data in real time.

Technical Implementation: NIST NVD & Elastic Stack

Upon finding vulnerabilities in products, security researchers commonly register a CVE entry per vulnerability. These CVE entries are given a unique identifier, detailed vulnerability information, as well as a general description.

They can be registered at https://cve.mitre.org and are indexed in the National Vulnerability Database (NVD) in real time (https://cve.mitre.org/about/cve_and_nvd_relationship.html). NIST publishes these data sets, which contain all registered vulnerabilities, publicly and free of charge. We use this data stream as the basis for our own database.

The technical details of the data import and subsequent provisioning are illustrated as follows:

Figure: Overview of the technical components of the vulnerability database

1. Daily import of vulnerability data from the NIST NVD

The data sets are organized by year numbers and refreshed daily by NIST. Every night we download the latest files onto our file server.

2. Pre-Processing of vulnerability data

Afterwards, the files are pre-processed to make them compatible with the Elastic Stack parser. One step here is the expansion of all JSON files: the downloaded files contain JSON objects, but they are often nested, which makes it harder for the parser to identify single objects. We read the JSON and write all object separators onto separate lines. This way we can use a regex ( ‘^{‘ ) to precisely determine when a new object begins.


Furthermore, we strip the file of all unneeded metadata (e.g., author, version information, etc.), which leaves only the CVE entries in the file as sequential JSON objects.
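
Our pre-processing runs as a custom script; as a rough sketch, a comparable expansion can be achieved with jq, which writes every CVE entry of a downloaded NVD JSON 1.1 feed as its own pretty-printed object, so that each new object starts with “{” at the beginning of a line (the feed file name is an example):

# Expand all CVE entries of an NVD JSON 1.1 feed into sequential objects
jq '.CVE_Items[]' nvdcve-1.1-2021.json > cve_items_expanded.json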

3. Reading in the pre-processed vulnerability data using Logstash

After the pre-processing, our Logstash parser is able to read the individual lines of the files using the Multiline Codec (https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html). Every time a complete JSON object is read in, Logstash forwards this CVE object to our Elasticsearch instance.

The CVE Quick Search – Data formats and vulnerability queries

After all CVE entries have been read and stored in the Elasticsearch database, we need to understand which format these entries have and how we can search them for specific products and product vulnerabilities. Our final result is illustrated in the following screenshot: using unique identifiers, we can return exact vulnerability reports for the queried product version.

Figure: Preview of our vulnerability query frontend

1. Format of product versions

The general format of product versions is specified in the NIST specification. Section 5.3.3 gives a short overview (https://nvlpubs.nist.gov/nistpubs/Legacy/IR/nistir7695.pdf):


cpe:2.3:part:vendor:product_name:version:update:edition:sw_edition:target_sw:target_hw:language:other

  • part: either ‘a’ (application), ‘o’ (operating system) or ‘h’ (hardware)
  • vendor: unique identifier of the product vendor
  • product_name: a unique name identifier of the product
  • version: the version number of the product
  • update: the update or patch level of the product
  • edition: deprecated
  • sw_edition: version for identifying different market editions
  • target_sw: software environment the product is used with/in
  • target_hw: hardware environment the product is used with/in
  • language: supported language
  • other: other annotations

A colon is used as a separating character. Asterisk (*) is used as a wildcard symbol.

From the identifier in our screenshot, “cpe:2.3:o:juniper:junos:17.4r3:*:*:*:*:*:*:*”, we can determine that the operating system JunOS from the vendor Juniper in version 17.4r3 is affected by a vulnerability.

Looking at the JSON file, it becomes apparent that there are two formats that are used to store the version number of a vulnerability.

  • Format 1: Using the attributes “versionStartIncluding/versionStartExcluding” and “versionEndIncluding/versionEndExcluding” a range of vulnerable versions is defined.
  • Format 2: A single vulnerable software version is stored in “cpe23Uri”.

2. Querying the database

To query the database for specific products, an easy interface for finding the correct product identifiers is required. We decided to implement this component as a JavaScript autocomplete that displays products and associated CPE identifiers dynamically:

Figure: Autocomplete mechanism of the query frontend

After a product has been chosen, the vulnerabilities matching the specific product identifier can be queried.
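
Behind the frontend, such a lookup boils down to a query against the Elasticsearch REST API. A simplified sketch; the index name and the field path are assumptions and depend on how the Logstash pipeline maps the NVD data:

# Search for CVE entries that reference the exact CPE identifier of JunOS 17.4r3
curl -s -X GET "http://localhost:9200/cve/_search" -H 'Content-Type: application/json' -d '
{
  "query": {
    "match_phrase": {
      "configurations.nodes.cpe_match.cpe23Uri": "cpe:2.3:o:juniper:junos:17.4r3:*:*:*:*:*:*:*"
    }
  }
}'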

Outlook: Kibana – Visualising vulnerabilities and trends

A big advantage of storing vulnerability data in an Elasticsearch database is its direct integration with Kibana. Kibana autonomously queries Elasticsearch to generate visualizations from the data. Below we show a selection of visualizations of the vulnerability data:

Figure: Number of registered vulnerabilities per year

Figure: Fractions of the respective risk severity groups per year

We see great potential in using this data for real time statistics on our homepage to provide vulnerability trends which are updated on a daily basis.

Outlook – Threat Intelligence and automation

Another item on our CVE database roadmap is the implementation of a system that automatically notifies customers of new vulnerabilities, once they are released for a certain CPE identifier. Elasticsearch offers an extensive REST API that allows us to realize this task with the already implemented ELK stack.

Currently we are working on implementing live statistics for our homepage. As soon as this milestone is complete, we will continue with the topic of “Threat Intelligence”. As you can see, we not only focus on the field of penetration testing here at Pentest Factory GmbH, but also have great interest in researching cybersecurity topics and extending our understanding, as well as our service line.

Automated cyber attacks: no system remains untouched

Automated attacks

Regardless of the size of a company or enterprise, everyone has to expect to become a target of cyber attacks. Many attacks are not aimed at a specific target, but happen randomly and in an automated fashion. Upon deploying a new server for the provisioning of our own vulnerability database, we noticed that almost 800 requests were logged on the webserver within the first 20 hours of online time. In this article we want to dissect where these requests come from and illustrate that attackers target far more than well-known systems and companies these days. In addition, we give practical advice on how to protect your own systems against these attacks.

Legitimate requests to the vulnerability database (37%)

In a first step we want to filter out all requests from our log file that constitute valid queries to our vulnerability database (the majority of which were executed in test cases). We do this by filtering all known source IP addresses, as well as regular requests to known API endpoints. The vulnerability database provides the following API endpoints for the retrieval of vulnerability data:

  • /api/status
  • /api/import
  • /api/query_cve
  • /api/query_cpe
  • /api/index_management

After a first evaluation, we observed that 269 of 724 requests were legitimate requests to the vulnerability database:
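
The filtering itself can be reproduced with simple command-line tools; a rough sketch, assuming an Nginx access log and the API endpoints listed above:

# Count all requests that do NOT target one of the known API endpoints of the vulnerability database
grep -vE '/api/(status|import|query_cve|query_cpe|index_management)' /var/log/nginx/access.log | wc -l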

Figure 1: Sample of legitimate requests to the webserver

But which origin do the remaining 455 requests have?

Directory enumeration of administrative database backends (14%)

A single IP address was particularly persistent: with 101 requests, an attacker attempted to enumerate various backends for database administration:

Figure 2: Directory scanning to find database backends

Vulnerability scans from unknown sources (14%)

Furthermore, we identified 102 requests where our attempts to associate the source IPs with domains or specific organisations (e.g., using nslookup or the user agent) were unsuccessful. The 102 requests originate from 5 different IP addresses or subnets, which means around 20 requests were executed per scan.


Figure 3: Various vulnerability scans with unknown origin

Enumerated components were:

  • boaform Admin Interface (8 requests)
  • /api/jsonws/invoke: Liferay CMS Remote Code Execution and other exploits

Requests to / (11.5%)

Overall, we identified 83 requests that requested the index file of the webserver. This allows attackers to identify whether a webserver is online and to observe which service is initially returned.


Figure 4: Index-requests of various sources

We could identify various providers and tools that have checked our webserver for its availability:

Vulnerability scans from leakix.net (9%)

During our evaluation of the log file, we identified a further 65 requests originating from two IP addresses, using a user agent of “leakix.net”:


Figure 5: Vulnerability scan of leakix.net

The page itself explains that the service scans the entire Internet randomly for known vulnerabilities:


Figure 6: leakix.net – About

HAFNIUM Exchange Exploits (2.8%)

Furthermore, we identified 20 requests that attempted to detect or exploit parts of the HAFNIUM Exchange vulnerabilities (common IOCs can be found at https://i.blackhat.com/USA21/Wednesday-Handouts/us-21-ProxyLogon-Is-Just-The-Tip-Of-The-Iceberg-A-New-Attack-Surface-On-Microsoft-Exchange-Server.pdf):

  • autodiscover.xml: Attempt to obtain the administrator account ID of the Exchange server
  • \owa\auth\: Folder that shells are uploaded into post-compromise to establish a backdoor to the system


Figure 7: Attempted exploitation of HAFNIUM/Proxylogon Exchange vulnerabilities

NGINX .env Sensitive Information Disclosure of Server Variables (1.5%)

11 requests attempted to read a .env file in the root directory of the webserver. Should this file exist and be accessible, it is likely to contain sensitive environment variables (such as passwords).


Figure 8: Attempts to read a .env file

Remaining Requests (10.2%)

A further 58 requests were not part of larger scanning activities and probed individual vulnerabilities:

  • Server-Side Request Forgery attempts: 12 requests
  • CVE-2020-25078: D-Link IP Camera Admin Password Exploit: 9 requests
  • Hexcoded Exploits/Payloads: 5 requests
  • Spring Boot: Actuator Endpoint for reading (sensitive) server information: 3 requests
  • Netgear Router DGN1000/DGN2200: Remote Code Execution Exploit: 2 requests
  • Open Proxy CONNECT: 1 request
  • Various single exploits or vulnerability checks: 27 requests

Furthermore the following harmless data was queried:

  • favicon.ico – Bookmark graphic: 7 requests
  • robots.txt – file for search engine indexing: 9 requests

Conclusion

Using tools like zmap, attackers are able to scan the entire Internet in less than 5 minutes (see https://www.usenix.org/system/files/conference/woot14/woot14-adrian.pdf). The statistics above have shown that IT systems become an immediate target of automated attacks and vulnerability scans as soon as they are reachable on the Internet. The size of a company or its degree of familiarity is irrelevant, since attackers are able to scan the entire Internet for vulnerable hosts and oftentimes cover the entire IPv4 address range. Even applications hidden behind specific hostnames using common infrastructure components like reverse proxies or load balancers can be targeted. A secret or special hostname is not hidden, as is often assumed, and does not protect against unauthorized access. Already with the issuance of SSL certificates for your services and applications, hostnames are logged in so-called SSL transparency logs, which are publicly available. This likewise allows automated tools to conduct attacks, since hostnames can be queried using services like crt.sh. Further information on this topic can be found in our article “Subdomains under the hood: SSL Transparency Logs”.
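
Certificate transparency logs can be queried by anyone; as a small sketch (the domain is a placeholder), crt.sh returns all logged hostnames of a domain in JSON format:

# List all hostnames of example.com that appear in certificate transparency logs
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u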

The implementation of access controls and hardening measures thus has to be done before your services and applications are exposed to the Internet. As soon as an IT system is reachable on the Internet, you have to expect active attacks that may succeed in the worst case.

Recommendation

Expose only required network services publicly

When you publish IT systems on the public Internet, you should only expose services that are required for the business purpose. If you run a web application or a service based on the HTTP(S) protocol, this usually means only TCP port 443 is required.

Refrain from exposing the entire host (all available network services) on the Internet.
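
On a single Debian/Ubuntu host, such a minimal exposure can be sketched with ufw, for example (for larger setups, dedicated firewalls or cloud security groups are preferable; the management network range is a placeholder):

# Deny all incoming connections by default and only allow HTTPS
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 443/tcp
# Allow administrative SSH access only from a trusted management network
sudo ufw allow from 192.0.2.0/24 to any port 22 proto tcp
sudo ufw enable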

Network separation

Implement a demilitarized zone (DMZ) using firewalls to achieve an additional layer of network separation between the public Internet and your internal IT infrastructure. Place all infrastructure components that you want to expose on the Internet in the designated DMZ. Further information can be found in the IT-Grundschutz (IT baseline protection) publications of the BSI.

Patch-Management and Inventory Creation

Keep all your software components up to date and implement a patch management process. Create an inventory of all IT infrastructure components, listing all used software versions, virtual hostnames, SSL certificate expiration dates, configuration settings, etc.

Further information can be found under: http://www.windowsecurity.com/uplarticle/Patch_Management/ASG_Patch_Mgmt-Ch2-Best_Practices.pdf

Hardening measures

Harden all exposed network services and IT systems according to the best practices of the vendor or the hardening measures of the Center for Internet Security (CIS). Change all default passwords or simple login credentials that may still exist from the development period and configure your systems for productive use. This includes the deactivation of debug features or testing endpoints. Implement all recommended HTTP response headers and harden the configuration of your webservers. Ensure that sensitive cookies have the Secure, HttpOnly and SameSite flags set.
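
Whether the recommended response headers and cookie flags are actually set can be checked quickly, for example (the hostname is a placeholder):

# Inspect security-relevant response headers and cookie flags
curl -sI https://www.example.com | grep -iE '^(strict-transport-security|content-security-policy|x-content-type-options|x-frame-options|set-cookie):'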

Transport encryption

Offer your network services via an encrypted communication channel. This ensures the confidentiality and integrity of your data and allows clients to verify the authenticity of the server. Refrain from using outdated algorithms like RC4, DES, 3DES, MD2, MD4, MD5 or SHA1. Employ SSL certificates issued by a trustworthy certificate authority, e.g., Let’s Encrypt. Keep these certificates up to date and renew them in time. Use a single, unique SSL certificate per application (service) and set the correct domain name in the Common Name field of the certificate. Using SSL wildcard certificates is only necessary in rare cases and is not recommended.
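
As a sketch, for a web server running behind Nginx, a free Let’s Encrypt certificate can be issued and renewed automatically with certbot (the domain is a placeholder):

# Request and install a certificate for the given hostname via the Nginx plugin
sudo certbot --nginx -d www.example.com

# Verify that automatic renewal works
sudo certbot renew --dry-run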

Access controls and additional security solutions

Limit access to your network services if they do not need to be publicly available on the Internet. It may make sense to implement IP whitelisting, which limits connections to a trustworthy pool of static IPv4 addresses. Configure this behavior either in your firewall solution or directly in the deployed network service, if possible. Alternatively, you can also use SSL client certificates or Basic Authentication.

Implement additional security solutions for your network services, such as Intrusion Prevention Systems (IPS) or a Web Application Firewall (WAF), to gain advanced protection against potential attacks. As an IPS we can recommend the open-source solution Fail2ban. As a WAF, ModSecurity with the well-known OWASP Core Rule Set can be set up.

Fail2ban is an IPS written in Python, which identifies suspicious activity based on log entries and regex filters and allows automatic defense actions to be set up. It is, for instance, possible to recognize automated vulnerability scans, brute-force attacks or bot-based requests and block attackers using iptables. Fail2ban is open source and can be used freely.

  • Installation of Fail2ban
    • Fail2ban can usually be installed using the native package manager of your Linux distribution. The following command is usually sufficient:
sudo apt update && sudo apt install fail2ban
    • Afterwards, the Fail2ban service should have started automatically. Verify successful startup using the following command:
sudo systemctl status fail2ban
  • Configuration of Fail2ban
    • After the installation of Fail2ban, a new directory /etc/fail2ban/ is available, which holds all relevant configuration files. By default, two configuration files are provided: /etc/fail2ban/jail.conf and /etc/fail2ban/jail.d/defaults-debian.conf. They should, however, not be edited, since they may be overridden with the next package update.
    • Instead, you should create specific configuration files with the .local file extension. Configuration files with this extension override directives from the .conf files. The easiest configuration method for most users is copying the supplied jail.conf to jail.local and then editing the .local file for the desired changes. The .local file only needs to hold entries that shall override the default configuration.
  • Fail2ban for SSH
    • After the installation of Fail2ban, a default guard is active for the SSH service on TCP port 22. Should you use a different port for your SSH service, you have to adapt the configuration setting port in your jail.local file. Here you can also adapt important directives like findtime, bantime and maxretry, should you require a more specific configuration. Should you not require this protection, you can disable it by setting the directive enabled to false. Further information can be found under: https://wiki.ubuntuusers.de/fail2ban/
  • Fail2ban for web services
    • Furthermore, Fail2ban can be set up to protect against automated web attacks. You may, for instance, recognize attacks that try to enumerate web directories (Forceful Browsing) or known requests associated with vulnerability scans and block them.
    • The community provides dedicated configuration files, which can be used freely:
    • Store these exemplary filter configurations in the directory /etc/fail2ban/filter.d/ and configure a new jail in your jail.local file. In the following we provide an example.
  • Blocking search requests from bots
    • Automated bots and vulnerability scanners continuously crawl the entire Internet to identify vulnerable hosts and execute exploits. Oftentimes, known tools are used whose signature can be identified in the User-Agent HTTP header. Using this header, many simple bot attacks can be detected and blocked. Attackers may change this header, which leaves more advanced attacks undetected. The Fail2ban filters *badbots.conf are mainly based on the “User-Agent” header.
    • Alternatively, it is also possible to block all requests that follow a typical attack pattern. This includes automated requests which continuously attempt to identify files or directories on the web server. Since this type of attack requests a large number of file and directory names at random, the probability of many requests resulting in a 404 Not Found error message is relatively high. By analysing these error messages and the associated log files, Fail2ban is able to recognize attacks and ban attacker systems early on.
    • Example: Nginx web server:

1. Store the following file under /etc/fail2ban/filter.d/nginx-botsearch.conf

https://github.com/fail2ban/fail2ban/blob/master/config/filter.d/nginx-botsearch.conf

2. Add configuration settings to your /etc/fail2ban/jail.local:

[nginx-botsearch]
ignoreip = 127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
enabled = true
port = http,https
filter = nginx-botsearch
logpath = /var/log/nginx/access.log
bantime = 604800 # ban for 1 week
maxretry = 10 # ban after 10 error messages
findtime = 60 # reset maxretry after 1 minute

3. If necessary, include further trustworthy IP addresses of your company in the ignoreip field that shall not be blocked by Fail2ban. If necessary, adapt other directives according to your needs and verify the specified port number of the web server, as well as correct read permissions for the /var/log/nginx/access.log log file.

4. Restart the Fail2ban service

sudo systemctl restart fail2ban

Automated enumeration requests will now be banned if they generate more than ten 404 error messages within one minute. The IP address of the attacking system will be blocked for a week using iptables and unbanned again afterwards. If desired, you can also be informed about IP bans via e-mail using additional configuration settings. A push notification to your smartphone via a Telegram messenger bot is also possible with Fail2ban. Overall, Fail2ban is very flexible and allows arbitrary ban actions, such as custom shell scripts, in case a filter matches.

To view already banned IP addresses the following command can be used:

  • View available jails
sudo fail2ban-client status
  • View banned IP addresses in a jail (e.g., the nginx-botsearch jail configured above)
sudo fail2ban-client status nginx-botsearch

Fail2ban offers several ways to protect your services even better. Inform yourself about additional filters and start using them, if desired. Alternatively, you can also create your own filters using regular expressions and test them against log entries.
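For example, an own or downloaded filter can be tested against an existing log file with the bundled fail2ban-regex tool before the corresponding jail is enabled:

sudo fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-botsearch.conf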

Premade Fail2ban filter lists can be found here: https://github.com/fail2ban/fail2ban/tree/master/config/filter.d  

Vulnerabilities in NEX Forms < 7.8.8

To protect our infrastructure against attacks, internal penetration tests are an integral part of our strategy. We place an additional focus on systems that process sensitive client data. During a penetration test of our homepage before the initial go-live, we were able to identify two vulnerabilities in the popular WordPress plugin NEX Forms.

Both vulnerabilities were fixed in the subsequent release and can no longer be exploited in current software versions. More details can be found in this article.


Background

NEX Forms is a popular WordPress plugin for the creation of forms and the management of submitted form data. It has been sold more than 12,500 times and can be found on numerous WordPress sites. The plugin offers functionality to create form reports. These reports can then be exported into PDF or Excel formats. In this component we were able to identify two vulnerabilities.

CVE-2021-34675: NEX Forms Authentication Bypass for PDF Reports

The “Reporting” section of the NEX Forms backend allows users to aggregate form submissions and export them into PDF files. As soon as a selection is exported into PDF, the server stores the resulting file under the following path:

/wp-content/uploads/submission_report.pdf


Figure 1: Reporting section with Excel and PDF export functions

During our testing, we were able to identify that this exported file is not access protected. An attacker is thus able to download the file without authentication:


Figure 2: Proof-of-Concept: Unauthenticated access to the PDF report
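For illustration, retrieving the report requires no more than a single unauthenticated HTTP request; the host name in the following sketch is a hypothetical example:

curl -o submission_report.pdf https://victim.example/wp-content/uploads/submission_report.pdf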

CVE-2021-43676: NEX Forms Authentication Bypass for Excel Reports

Similar to the previously mentioned finding, another vulnerability exists for Excel exports. Here, the Excel file is not stored on the file system of the web server, but returned directly as a server response.

To abuse this vulnerability, a form report has to have been exported into the Excel format. The server then returns the latest Excel file whenever the GET parameter "export_csv" with a value of "true" is passed to the backend. This URL handler does not verify any authentication parameters, which allows an attacker to access the contents without prior authentication:


Figure 3: Proof-of-Concept: Unauthenticated access to the Excel report


Possible Impact

An attacker who abuses these authentication bypass vulnerabilities may cause the following damage:

  • Access to confidential files that have been submitted via any NEX Forms form.
  • Access to PII, such as name, e-mail, IP address or phone number

This could lead to a significant loss of the confidentiality of the data processed by the NEX Forms plugin.


Vulnerability Fix

Both vulnerabilities were fixed in the subsequent release of the vendor. More information can be found under: https://codecanyon.net/item/nexforms-the-ultimate-wordpress-form-builder/7103891.

We thank the Envato Security Team for patch coordination with the developers and the fast remediation of the identified vulnerabilities.

Why we crack >80% of your employees’ passwords

Summary

During our technical password audits, we were able to analyse more than 40,000 password hashes and crack more than three quarters of them. This is mostly due to short passwords, an outdated password policy in the company, as well as frequent password reuse. Furthermore, it happens that administrative accounts are not bound to the corporate password policy, which allows weak passwords to be set. Issues in the onboarding process of employees may also be abused by attackers to crack additional passwords. Oftentimes a single password of a privileged user is enough to allow for a full compromise of the corporate IT infrastructure.

Commission us with an Active Directory password audit to increase the resilience of your company against attackers in the internal network and to verify the effectiveness of your password policies. We gladly support you in identifying and remediating issues related to the handling of passwords and their respective processes.

Introduction

The Corona pandemic caused a sudden shift towards working from home. However, IT infrastructure components like VPNs and remote access were oftentimes not readily available during this shift. Many companies had to upgrade their existing solutions and retrofit their IT infrastructure.

Besides the newly acquired components, new accounts were also created for accessing company resources over the Internet. Where technically possible, companies implemented Single Sign-On (SSO) authentication, which requires a user to log in only once with their domain credentials before access is granted to various company resources.

According to Professor Christoph Meinel, the director of the Hasso-Plattner-Institute, the Corona pandemic has greatly increased the attack surface for cyber attacks and created complex challenges for IT departments. [1] Due to the increase in home office work, higher global Internet usage since the start of the pandemic and an extension of IT infrastructures, threat actors have gained new, attractive targets for hacking and phishing attacks. Looking at just the DE-CIX Internet exchange in Frankfurt, where the traffic of various ISPs is accumulated, a new high of 9.1 terabits per second was registered. This value equals a data volume of 1,800 downloaded HD movies per second, a new record compared to the prior peak of 8.3 terabits. [2]

Assume Breach

The effects of the pandemic have thus continuously increased the attack surface of companies and their employees regarding cyber attacks. Terms like “supply chain attacks” or “0-day vulnerabilities” are frequently brought up in the media, which shows that many enterprises are actively attacked and compromised. Oftentimes the compromise of a single employee or IT system is enough to obtain access to the internal networks of a company. Here, a multitude of attacks can happen, like phishing or the exploitation of specific vulnerabilities in public IT systems.

Microsoft operates according to the “Assume Breach” principle and expects that attackers have already gained access, rather than assuming complete security of systems can be achieved. But what happens once an attacker is able to access a corporate network? How is it possible that the compromise of a regular employee account causes the entire network to break down? Sensitive resources and server systems are regularly updated and are not openly accessible. Only a limited number of people have access to critical systems. Furthermore, a company-wide password policy ensures that attackers cannot guess the passwords of administrators or other employees. Where is the problem?

IT-Security and Passwords

The annual statistics of the Hasso-Plattner-Institute from 2020 [3] illustrate that the most popular passwords amongst Germans are “123456”, “123456789” or “passwort”, just like in the statistics from prior years. This does not constitute sufficient password complexity, and that is without even covering the reuse of passwords.

Most companies are aware of this issue and implement technical policies to prevent the use of weak passwords. Usually, group policies are applied for all employees via the Microsoft Active Directory service. Users are then forced to set passwords with a sufficient length as well as certain complexity requirements. Everyone knows phrases like “Your password has to contain at least 8 characters”. Does this imply that weak passwords are a thing of the past? Unfortunately not, since passwords like “Winter21!” are still very weak and guessable, even though they are compliant with the company-wide password policy.

Online Attacks vs. Offline Attacks

For online services like Outlook Web Access (OWA) or VPN portals, where a user logs on with their username and password, the likelihood of a successful attack is greatly reduced. An attacker would have to identify a valid username and subsequently guess the respective password. Furthermore, solutions like account lockouts after multiple invalid login attempts, rate limiting or two-factor authentication (2FA) are used. These components reduce the success rate of attackers considerably, since the number of guessing attempts is limited.

But even if such defensive mechanisms are not present, the attack is still executed online, by choosing a combination of username and password and sending it to the underlying web server. Only after the login request is processed by the server does the attacker receive the response with a successful login or an error message. This client-server communication limits the performance of an attack, since every guess requires a full request-response round trip. Even a simple 6-character password containing only lowercase letters and umlauts would require 729 million attacker requests to brute force all possible password combinations. Additionally, the attacker would already need to know the username of the victim or use further guessing to find it out. By using a company-wide password policy, including the above defensive mechanisms, the probability of a successful online brute-force attack is virtually zero.

However, for offline attacks, where an attacker has typically captured or obtained a password hash, brute-forcing can be executed with a significantly higher performance. But where do these password hashes come from and why are they more prone to guessing attempts?

Password Hashes

Let us go through the following scenario: as a great car enthusiast, Max M. is always looking for new offers on the car market. Thanks to digitalization, not only local car dealerships but also a great variety of cars on the Internet are available to him. Max gladly uses these online platforms to look out for rare deals. To use these services, he generally needs a user account to save his favorites and place bids. A registration via e-mail and the password “Muster1234” is quickly done. But how does a subsequent login with our new user work? As a layman, you would quickly come to the conclusion that the username and password are simply stored by the online service and compared upon logging in.

This is correct on an abstract level, but it omits a few technical details. After registration, the login credentials are stored in a database. The database, however, does not contain the clear-text password of a user, but a so-called password hash. The password hash is derived by a mathematical one-way function from the user’s password. Instead of our password “Muster1234”, the database now contains a string like “VEhWkDYumFA7orwT4SJSam62k+l+q1ZQCJjuL2oSgog=”, and the mathematical function ensures that this kind of computation is only possible in one direction. It is thus effectively not possible to reconstruct the clear-text password from the hash. This method ensures that the web hoster or page owner cannot access their customers’ passwords in clear text.

During login, the clear-text password from the login form is sent to the application server, which applies the same mathematical function to the entered password and subsequently compares it to the hash that is stored in the database. Should both values be equal, the correct password was entered and the user is logged in. If the values are unequal, an incorrect password was submitted and the login results in an error. There are further technical features implemented in modern applications, such as replicated databases or the use of “salted” hashing. These are, however, not relevant for our exemplary scenario.

An attacker that tries to compromise the user account faces the same difficulties as with any online attack. The provider of the car platform may allow only three failed logins before the user account is disabled for 5 minutes. An automated attack to guess the password is thus not feasible.

Should the attacker, however, gain access to the underlying database (e.g. using an SQL injection vulnerability), the situation is different. An attacker then has access to the password hash and is able to conduct offline attacks. The mathematical one-way function is publicly known and can be used to compute hashes. An attacker may thus proceed in the following order:

  1. Choose any input string, which represents a guessing attempt of the password.
  2. Input the chosen string into the mathematical one-way function and compute its hash.
  3. Compare the computed hash with the password hash extracted from the application’s database. Results:
    1. If they are equal, the clear-text password has been successfully guessed.
    2. If they are unequal, choose a new input string and try again.

This attack is significantly more performant than online attacks, as no network communication takes place and no server-side security mechanisms become active. However, it has to be noted that modern and secure hash functions are designed so that hash computation becomes expensive and guessing attacks infeasible. This is achieved by increasing the cost of a single hash computation by a factor n, which is negligible for the single hash computation and comparison performed during a legitimate login. For an attacker who needs a multitude of hash computations to break a hash, however, the total expense grows by the same factor n, so that successful guessing attempts require many years of processing time. By using modern hash functions like Argon2 or PBKDF2, offline attacks become similarly complex to online attacks and rather difficult to realize in a timely manner.

LM- and NT-Hashes

Our scenario can be translated to many other applications, like the logon to a Windows operating system. Similarly to the account of the online car dealership, Windows allows creating users that can log on to the operating system. Whether a login requires a password can be configured individually for every user account. The password is yet again stored as a hash and not in clear-text format. Microsoft uses two algorithms to compute the hash of a user password. The older of the two is called the LM hash and is based on the DES algorithm. For security reasons, this hash type was deactivated starting with Windows Vista and Windows Server 2008. As an alternative, the so-called NT hash was introduced, which is based on the MD4 algorithm.
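To illustrate how simple this construction is: an NT hash is merely the MD4 digest of the UTF-16LE encoded password, which can be reproduced with standard tools. A minimal sketch (on OpenSSL 3.x the legacy provider may have to be enabled for MD4, e.g. via -provider legacy -provider default):

printf '%s' 'Muster1234' | iconv -f UTF-8 -t UTF-16LE | openssl dgst -md4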

The password hashes are stored locally in the so-called SAM database on the hard drive of the operating system. Similarly to our previous scenario, a comparison between the entered password (after generating its hash) and the password hash stored in the SAM database is done. If both values are identical, a correct password was used and the user is logged on to the system.

In corporate environments and especially in Microsoft Active Directory networks, these hashes are not only stored locally in the SAM database, but also on a dedicated server, the domain controller, in the NTDS database file. This allows for uniform authentication against databases, file servers and further corporate resources using the Kerberos protocol. Furthermore, this reduces the complexity within the network, since IT assets and user accounts can be managed centrally via the Active Directory controllers. Using group policies, companies can also ensure that employees must set a logon password and that the password adheres to a strict password policy. Passwords may need to be renewed on a regular basis. On the basis of the account password it is also possible to implement Single Sign-On (SSO) for a variety of company resources, since the NT hash is stored centrally on the domain controllers. Besides the local SAM database on every machine, as well as the domain controllers of an on-premise Active Directory solution, it is also possible to synchronize NT hashes with a cloud-based domain controller (e.g., Azure). This extends the possibilities of SSO logins to cloud assets, like Office 365. The password hashes of a user are thus used on several occasions, which increases the likelihood that they may be compromised.

Access to NT hashes

Several techniques exist for an attacker to obtain access to NT hashes. For brevity, we only mention a selection of well-known methods in this article:

  1. Compromising a single workstation (e.g., using a phishing e-mail) and dumping the local SAM database of the Windows operating system (e.g., using the tool “Mimikatz”).
  2. Compromising a domain controller in an Active Directory environment (e.g., using the PrintNightmare vulnerability) and dumping the NTDS database (e.g., via Mimikatz).
  3. Compromising a privileged domain user account with DCSync permissions (e.g., a domain admin or enterprise admin). Extracting all NT hashes from the domain controller in an Active Directory domain.
  4. Compromising a privileged Azure user account with the permissions to execute an Azure AD Connect synchronization. Extracting all NT hashes from the domain controller of an Active Directory domain.
  5. and many more attacks…

Password cracking

After the NT hashes of a company have been compromised, they can either be used in internal “relaying” attacks or targeted in password cracking attempts to recover the clear-text password of an employee.

This is possible since NT hashes in Active Directory environments are based on an outdated algorithm called MD4. This hash function was published in 1990 by Ronald Rivest and was considered insecure relatively quickly. A main problem of the hash function is its missing collision resistance, which means that different input values can generate the same output hash. This undermines the main purpose of a cryptographic hash function.

Furthermore, MD4 is highly performant and does not slow down cracking attempts, as opposed to modern hash functions like Argon2. This allows attackers to execute effective offline attacks against NT hashes. A modern gaming computer with a recent graphics card is able to compute 50-80 billion hashes per second. Cracking short or weak passwords thus becomes trivial.

To illustrate the implications of this cracking speed, we want to analyse all possible combinations of 8-character passwords. To simplify this analysis, we assume that the password consists only of lowercase letters and digits. The German alphabet contains 26 base letters, as well as the three umlauts ä, ö, ü and the special letter ß. Digits add another 10 possible values, from 0 to 9. This results in 40 possible values for every position of our 8-character password, which equals about 6,550 billion possible combinations, as the following formula shows:

40^8 = 6,553,600,000,000 ≈ 6.55 × 10^12 possible combinations

A gaming computer that generates 50 billion hashes per second would thus only require 131 seconds to test all 6,550 billion possibilities in our 8-character password space. Such a password would therefore be cracked in a bit more than 2 minutes. Real threat actors employ dedicated password cracking rigs, which can compute roughly 500-700 billion hashes per second. These systems cost around 10,000 € to set up.

Furthermore, there is a variety of cracking methods that do not aim at brute-forcing the entire keyspace (all possible passwords). This allows cracking passwords with more than 12 characters, which would otherwise require several years for a regular keyspace brute-force.

Such techniques are:

  • Dictionary lists (e.g., a German dictionary)
  • Password lists (e.g., from public leaks or breaches)
  • Rule based lists, combination attacks, keywalks, etc.
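As a hedged sketch, such attacks can be executed with the open source tool hashcat, assuming the extracted NT hashes are stored in a file named ntlm-hashes.txt (mode 1000 corresponds to NT hashes; wordlist and rule paths are examples):

hashcat -m 1000 -a 0 ntlm-hashes.txt rockyou.txt -r rules/best64.rule
hashcat -m 1000 -a 3 -1 ?l?d ntlm-hashes.txt ?1?1?1?1?1?1?1?1

The first command runs a dictionary attack with the RockYou list and a common rule set, the second a pure mask attack over all 8-character combinations of lowercase letters and digits.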

Pentest Factory password audit

By commissioning us with an Active Directory password audit, you have us execute exactly these attack scenarios. In a first step, we extract all NT hashes of your employees from a domain controller without user associations. Our approach is coordinated with your works council and follows a proven process.

Afterwards we execute a real cracking attack on the extracted hashes to compute clear-text passwords. We employ several modern and realistic attack techniques and execute our attacks using a cracking rig with a hash power of 100 billion hashes per second.

After finalising the cracking phase, we create a final report that details the results of the audit. The report contains metrics regarding compromised employee passwords and lists extensive recommendations on how password security can be improved in your company in the long term. Oftentimes, we are able to reveal procedural problems, e.g., with the onboarding of new employees, or misconfigurations in group and password policies. Furthermore, we offer to conduct a password audit with user correlation. We have established a process that allows you as a customer to associate compromised passwords with the respective employee accounts (without our knowledge). This process is also coordinated with your works council and adheres to common data privacy regulations.

Statistics from our past password audits

Since the founding of Pentest Factory GmbH in 2019, we have conducted several password audits for our clients and helped improve their handling of passwords. Besides technical misconfigurations, like a password policy that was not applied uniformly, we were able to find procedural problems on several occasions. Especially the onboarding of new employees creates insecure processes that lead to weak passwords being chosen. It also occurs that administrative users can choose passwords independently of established policies. Since these users are highly privileged, weak passwords contribute to a significantly increased risk of a breach. Should an attacker be able to guess the password of an administrative user, this could result in a compromise of the entire company IT infrastructure.

To give you an insight into our work, we want to present statistics from our previously executed password audits.

Combining all performed audits, we were able to evaluate 32,493 unique password hashes. Including reused passwords, we count 40,288 password hashes. This means that 7,795 password hashes were shared across several user accounts at the same time. These are oftentimes passwords like “Winter2021” or passwords that were handed out during initial onboarding and never changed. The highest password reuse we could detect was a single password shared by around 450 accounts. This was an initialization password that had not been changed by the respective users.

Of the overall 32,493 unique password hashes, we were able to crack 26,927 hashes and compute their clear-text passwords. This amounts to a share of over 82%. In other words, we were able to break more than four out of five employee passwords during our password audits. An alarming statistic.

Figure: Cracked vs. not cracked password hashes

This is mainly because passwords with a length of less than 12 characters were used. The following figure highlights this insight.

Note: The figure does not include all cracked password lengths. Exceptions like password lengths over 20 characters or very short or even empty passwords were omitted.

Figure: Length distribution of cracked passwords

Furthermore, our statistics show the effects of an overly weak password policy, as well as issues with applying a password policy company-wide.

Note: The below figure does not contain the password masks of all cracked passwords but only a selection.

Figure: Most common masks of cracked passwords

A multitude of employee passwords were guessable because they were based on a known password mask. Over 12,000 cracked passwords consisted of an initial word followed by digits. This includes especially weak passwords like “Summer2019” and “password1”.

These passwords are usually already part of publicly available password lists. One of the best-known password lists is called “rockyou”. It contains more than 14 million unique passwords from a breach of the company RockYou in 2009. The company fell victim to a hacker attack and had stored all of its customer passwords in clear text in its database. The hackers were able to access this data and published the records afterwards.

On the basis of these leaks, it is possible to generate statistics about the structure of user passwords. These statistics, patterns and rules for password creation can subsequently be used to break a myriad of further password hashes. The use of a password manager, which creates cryptographically random and complex passwords, can prevent these rule-based attacks and makes it harder for patterns to occur.

Recommendations regarding password security

Our statistics have shown that a strict and modern password policy can drastically reduce the success rate of a cracking or guessing attack. Nevertheless, password security depends on multiple factors, which we illustrate below.

Password length

Distance yourself from outdated password policies that only enforce a password length of 8 characters. The costs for modern and powerful hardware are continuously decreasing, which allows even attackers with a low budget to execute password cracking attacks effectively. The continuous growth of cost-effective cloud services furthermore enables attackers to dynamically execute attacks on a fixed budget, without having to buy hardware or set up systems.

A password length of just 10 characters already increases the effort needed to crack a password significantly, even considering modern cracking systems. For companies that employ Microsoft Active Directory, we still recommend using a minimum password length of 12 characters.

Complexity

Ensure that passwords have sufficient complexity by implementing the following minimum requirements:

  • The password contains at least one lowercase letter
  • The password contains at least one uppercase letter
  • The password contains at least one digit
  • The password contains at least one special character

Regular password changes

Regular changes of passwords are not recommended by the BSI anymore, as long as the password is only accessible by authorized persons. [4]

Should a password have been compromised, meaning it is known to an unauthorized person, it must be changed immediately. Furthermore, it is recommended to regularly check public databases for new password leaks concerning your company. We gladly support you in this matter as part of our Cyber Security Check.

Password history

Ensure that users cannot choose passwords that they have previously used. Implement a password history that contains the last 24 used password hashes and prevents their reuse.

Employment of blacklists

Implement additional checks that prevent the use of known blacklisted words. This includes your own company name, seasons of the year, the names of clients, service owners or products, or single words like “password”. Ensure that these blacklisted words are not only forbidden organisationally, but are also blocked on a technical level.

Automatic account lockout

Configure an automatic account lockout for multiple invalid logins to actively prevent online attacks. A proven guideline is locking a user account for 5-10 minutes after 5 failed login attempts. Locked accounts should be unlocked automatically after the set timespan, so that regular usage can continue and help desk overload is prevented.
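On a standalone Windows system, the current lockout settings can be reviewed and adjusted, for example, with the built-in net accounts command (the values are examples; in Active Directory environments these settings are usually managed centrally via the Default Domain Policy instead):

net accounts
net accounts /lockoutthreshold:5 /lockoutduration:10 /lockoutwindow:10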

Security awareness

Raising the awareness of all employees, including management, is essential to increase the security posture company-wide. Regular awareness measures should become part of the company’s culture, so that the correct handling of sensitive access data is internalized.

A deliberate change in behavior is necessary in security-relevant situations, e.g.:

  • locking your machine, even if you leave your desk only briefly;
  • locking away confidential documents;
  • never disclosing your password to anyone;
  • using secure (strong) passwords;
  • not using the same password twice or for multiple accounts.

Despite a technically strict password policy, users might still choose weak passwords that can be guessed with ease. Only regular password audits and continuous employee awareness can prevent damage in the long term.

Use of two-factor authentication (2FA)

Configure additional security features such as two-factor authentication. This ensures that even if a password guessing attempt is successful, the attacker cannot gain access to the user account (and company resources) without the secondary token.

Regular password audits

Execute regular password audits to identify user accounts with weak or reused passwords and protect them from future attacks. Combined with a continuous re-evaluation of your company-wide password policy and further awareness seminars, this provides technical metrics that allow you to continuously measure and improve password security in your company.

Differentiated password policies

Introduce multiple password policies based on the protection level of the respective target group. Low-privileged user accounts can thus be required to choose a password with a minimum length of 12 characters including complexity requirements, while administrative user accounts have to follow a stricter policy with at least 14 characters.

Additional security features

We gladly advise you regarding additional security features in your Active Directory environment to improve password security. This includes:

  • Local Administrator Password Solution (LAPS)
  • Custom .DLL Password Filters
  • Logging and Monitoring of Active Directory Events

Commissioning

Should we have sparked your interest in a password audit, we are looking forward to hearing from you. We gladly support you in evaluating the password security of your company, as well as making long-term improvements.

You can also use our online configurator to commission an audit.

More information regarding our password audit can be found under: https://www.pentestfactory.de/passwort-audit

Sources

[1] https://hpi.de/news/jahrgaenge/2020/die-beliebtesten-deutschen-passwoerter-2020-platz-6-diesmal-ichliebedich.html
[2] https://www.kas.de/documents/252038/7995358/Die+Auswirkungen+von+COVID-19+auf+Cyberkriminalit%C3%A4t+und+staatliche+Cyberaktivit%C3%A4ten.pdf/8ecf7084-704b-6810-4374-5840a6954b9f?version=1.0&t=1591354253482
[3] https://hpi.de/news/jahrgaenge/2020/die-beliebtesten-deutschen-passwoerter-2020-platz-6-diesmal-ichliebedich.html#:~:text=Das%20Hasso%2DPlattner%2DInstitut%20(,sind%20und%202020%20geleakt%20wurden.
[4] https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Grundschutz/Kompendium/IT_Grundschutz_Kompendium_Edition2020.pdf – Section ORP.4.A8

Vulnerabilities in FTAPI 4.0 – 4.11

To protect our infrastructure against attacks, internal penetration tests are an integral part of our strategy. In doing so, we pay particular attention to the systems used to process customer data. During a penetration test of our file exchange platform FTAPI, we were able to identify the following vulnerabilities.

After their discovery, we reported the vulnerabilities to the vendor, which allowed them to be fixed in the next release. Further details can be found at the end of this article.


 

CVE-2021-25277: FTAPI Stored XSS (via File Upload)

The web application is vulnerable to stored cross-site scripting: the application’s file upload allows uploading files with unsafe names. When hovering over the file name field, an alternative text element is displayed (see the following screenshot) which shows the file name. This dynamically displayed element does not filter the file name for malicious characters, which results in an XSS vulnerability.


Figure 1: Vulnerable alternative text field of the file name box

Proof-of-Concept

When uploading a file with the following name, an alert box is executed exemplarily to visualize the vulnerability:


Figure 2: Proof-of-concept file name with alert() execution

For the upload to be successful, the file must not be empty. It can be created with the following Linux command:

echo "test" >> "<iframe onload=alert('Pentest_Factory_XSS')>"

The file name field is displayed not only during the upload, but also for the recipient when retrieving the file. Our code is therefore executed here as well, as soon as the mouse touches the green file field:


Figure 3: Proof-of-concept alert() is executed in the victim’s inbox


 

CVE-2021-25278: FTAPI Stored XSS (via Submit Box Template)

The web application is vulnerable to stored cross-site scripting: administrative users are able to modify the submit box template. This includes a function for uploading background images. The uploads are not filtered for malicious content, which allows an attacker to upload an SVG file with embedded XSS.


Figure 4: Vulnerable background image upload in the submit box layout editor

Proof-of-Concept

To exploit the vulnerability exemplarily, an .svg file with the following content can be uploaded as a background image:

<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" baseProfile="full" xmlns="http://www.w3.org/2000/svg">
   <polygon id="triangle" points="0,0 0,50 50,0" fill="#009900" stroke="#004400"/>
   <script type="text/javascript">
      alert('Pentest Factory XSS');
   </script>
</svg>
 

The uploaded file is stored in the directory /api/2/staticfile/ and leads to XSS as soon as it is accessed:


Figure 5: Stored XSS in the search function


 

Possible Impact

An attacker exploiting one of the cross-site scripting vulnerabilities could, among other things, carry out the following attacks:

  • Session hijacking with access to confidential files and identifiers
  • Modification of the presentation of the website
  • Injection of malicious content
  • Redirecting users to malicious pages
  • Malware infection

This could lead to a loss of confidentiality, integrity and availability of the data processed by FTAPI.


 

Vulnerability Fix

The vulnerabilities were fixed in the vendor’s next release. More information can be found under https://docs.ftapi.com/display/RN/4.11.0.

Many thanks to the FTAPI team for the quick and uncomplicated communication with us and the fast remediation of the identified vulnerabilities!

 

Subdomains under the hood: SSL Transparency Logs

Since the certification authority Let’s Encrypt was founded in 2014 and went live at the end of 2015, more than 182 million active certificates and 77 million active domains have been registered to date (as of 05/2021). [1]

To make the certification processes more transparent, all certificate registrations are logged publicly. Below, we take a look at how this information can be used from an attacker’s perspective to enumerate subdomains and what measures organizations can take to protect them.

Let’s Encrypt

Since the introduction of Let’s Encrypt, the handling of SSL certificates has been revolutionized. Anyone who owns a domain name these days is able to obtain a free SSL certificate through Let’s Encrypt. Using open source tools such as Certbot, requesting and configuring SSL certificates can take place intuitively, securely and, above all, automatically. Certificates are renewed automatically and associated web server services such as Apache or Nginx are restarted fully automatically afterwards. The age of expensive SSL certificates and complex, manual configuration settings is almost over.

Figure: Growth of Let’s Encrypt

Certificate Transparency (CT) Logs

Furthermore, Let’s Encrypt contributes to transparency. All issued Let’s Encrypt certificates are submitted to public “CT logs” and are additionally logged by Let’s Encrypt itself in a standalone logging system based on Google Trillian in the AWS cloud. [2]

The abbreviation CT stands for Certificate Transparency and is explained as follows:

“Certificate Transparency (CT) is a system for logging and monitoring the issuance of a TLS certificate. CT allows anyone to audit and monitor certificate issuances […].” [2]

Certificate Transparency was a response to the attacks on DigiNotar [3] and other Certificate Authorities in 2011. These attacks showed that the lack of transparency in the way CAs operated posed a significant risk. [4]

CT therefore makes it possible for anyone with access to the Internet to publicly view and verify issued certificates.

Problem definition

Requesting and setting up Let’s Encrypt SSL certificates thus proves to be extremely simple. This is also shown by the high number of certificates issued daily by Let’s Encrypt: more than 2 million certificates are issued per day, and their issuance is transparently logged (as of 05/2021) [5].

Figure: Let’s Encrypt certificates issued per day

Certificates are issued for all kinds of systems and projects, be it productive systems, test environments or temporary projects. Users and companies are able to get free certificates for their domains and subdomains. Wildcard certificates have also been available since 2018. Everything is transparently logged and publicly viewable.

Transparency is great, isn’t it?

Due to the fact that all issued certificates are transparently logged, this information can be viewed by anyone. It includes, for example, the common name of a certificate, which reveals the domain name or subdomain of a service. An attacker or pentester is thus able to identify hostnames and potentially sensitive systems via Certificate Transparency logs.

At first glance, this does not pose a problem, provided that the systems or services offered behind the domain names are intentionally publicly accessible, use up-to-date software versions and are protected from unauthorized access by requiring authentication, if possible.

However, our experience in penetration testing and security analysis shows that systems are often unintentionally exposed on the Internet. Either this is done by mistake or under the assumption that an attacker needs further information, such as the hostname, to gain access at all. Furthermore, many companies no longer have an overview of their existing and active IT assets due to organically grown structures. By disabling the indexing of website content (e.g. by Google crawlers), supposed additional protection is implemented, especially for test environments. The assumption is that an attacker has no knowledge of the system at all, and that this alone provides some sort of security. Developers and IT admins are also usually unaware that SSL certificate requests are logged and that this allows domain names to be enumerated publicly.

Readout of CT logs

A variety of methods now exist to access public CT log information. These methods are often used in so-called Open Source Intelligence (OSINT) operations to identify interesting information and attack vectors of a company. We at Pentest Factory also use these methods and web services to identify interesting systems of our customers during our passive reconnaissance.

A well-known web service is: https://crt.sh/

Figure: Sample excerpt from public CT logs of the domain pentestfactory.de

Furthermore, a variety of automated scripts exist on the Internet (e.g., GitHub) to extract the information automatically as well.
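As a sketch, crt.sh can also be queried directly via its JSON export, for example to list all logged certificate names of a domain (the jq filter assumes the current crt.sh output format):

curl -s 'https://crt.sh/?q=%25.pentestfactory.de&output=json' | jq -r '.[].name_value' | sort -u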

The myth of wildcard certificates

After realizing that CT logs can be enumerated and recognizing the resulting potential problem, companies often come up with a seemingly brilliant idea: instead of requesting individual SSL certificates for different subdomains and services, one general wildcard certificate is generated and deployed across all systems and services.

This means that the subdomains are no longer published in Transparency Logs, since the certificate’s common name is simply a wildcard entry such as *.domain.tld. External attackers are thus no longer able to enumerate the various subdomains and services of a company. Problem solved, right?

Partially. It is true that the hostnames or subdomains are no longer published in transparency logs. However, there are still many opportunities for an attacker to passively gain information about interesting services or subdomains of a company. The underlying problem, that systems may be unintentionally exposed to the Internet, use outdated software with publicly known vulnerabilities, or fail to implement access controls, still exists. The approach of using a wildcard certificate to provide more security by simply hiding information is called security through obscurity. In reality, reusing a single wildcard certificate across multiple services and servers reduces an organization’s security.

For example, an attacker can perform a DNS brute force attack using a large list of frequently used domain names. Public DNS servers such as Google (8.8.8.8) or Cloudflare (1.1.1.1) provide feedback on whether a domain can be successfully resolved or not. This again gives an attacker the opportunity to identify services and subdomains of interest.
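For example, such an attack can be carried out with the open source tool gobuster, as also shown in the figure below (the wordlist path is an example):

gobuster dns -d pentestfactory.de -w subdomains.txt -r 8.8.8.8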

Figure: Example DNS brute force attack on the domain pentestfactory.de to enumerate subdomains

The dangers of wildcard certificates

Reusing a wildcard certificate across multiple subdomains and different servers is strongly discouraged. The problem lies precisely in this reuse of the certificate.

Should an attacker succeed in compromising a web server that uses a wildcard certificate, the certificate must be considered fully compromised. This affects the confidentiality and integrity of traffic to every service of an organization that uses the same wildcard certificate. An attacker in possession of the wildcard certificate and its private key would be able to decrypt, read or even modify the traffic of all services reusing the certificate. However, the attacker must be in a man-in-the-middle (MitM) position between client and server to intercept the traffic. Accordingly, this attack is not trivial, but it is practically feasible for skilled attackers.

If unique SSL certificates are used for each domain or service instead of a single wildcard certificate, an attacker cannot compromise all services at once. Other domains and corporate services would not be affected at all, since an attacker would have to compromise those individual SSL certificates too. The company would therefore only have to revoke and reissue a single certificate, not a wildcard certificate reused extensively across multiple servers and services. Furthermore, using unique certificates limits the damage and makes its extent measurable in case of a successful attack: companies then know exactly which certificate for which domain or service has been compromised and where attacks may already have taken place. With wildcard certificates, a successful attack potentially compromises all domains and services, and the impact of the damage is opaque and difficult to assess.

More information about wildcard certificates:

Recommendation

Always be aware that attackers can gain a lot of information about your company, be it through public sources or active probing for valuable information. The security and resilience of a company stands and falls with the weakest link in its IT infrastructure. In general, refrain from security through obscurity practices and always keep your systems up to date (patch management).

Rather, make sure that all your publicly accessible systems are intentionally exposed to the Internet and implement access control if necessary. Development environments or test instances should always remain hidden from the public and only be made available to the company itself and its developers. This can be achieved by whitelisting your company-wide IP addresses on the firewall or by implementing a simple authentication wall (e.g. using basic authentication for web services). Use a complex password with a sufficiently large length (> 12 characters).
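A minimal sketch of such an authentication wall with nginx and Basic Authentication (file paths, realm and user name are examples; the htpasswd utility is part of the apache2-utils package on Debian/Ubuntu):

sudo htpasswd -c /etc/nginx/.htpasswd devuser

location / {
    auth_basic           "Restricted development instance";
    auth_basic_user_file /etc/nginx/.htpasswd;
}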

SSL certificates should define the exact (sub)domain name in the certificate’s Common Name field and should be issued by a trustworthy certification authority (CA). Continue to ensure that all certificates are valid and renewed early before expiration. Furthermore, it is recommended to use only strong algorithms for signing SSL certificates, such as SHA-256. The use of the outdated SHA-1 hashing algorithm should be avoided, as it is vulnerable to practical collision attacks [6].
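Whether a deployed certificate still meets these criteria can be checked remotely, for example with OpenSSL (the host name is a placeholder):

echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -text | grep 'Signature Algorithm'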

Professional support

Are you unsure what information about your organization is circulating on the Internet and which systems are publicly accessible? Order a passive reconnaissance via our pentest configurator and we will be happy to gather this information for you.

Are you interested in the resilience of your public IT infrastructure against external attackers? Do you want to identify the weakest link in your IT assets and have your SSL configuration technically verified? Then order a penetration test of your public IT infrastructure via our pentest configurator.

Sources

[1] https://letsencrypt.org/de/stats/#
[2] https://letsencrypt.org/de/docs/ct-logs/
[3] https://www.eff.org/de/deeplinks/2011/09/post-mortem-iranian-diginotar-attack
[4] https://certificate.transparency.dev/community/
[5] https://letsencrypt.org/de/stats/#
[6] https://portswigger.net/daily-swig/researchers-demonstrate-practical-break-of-sha-1-hash-function