Algorithm Rollover in OpenDNSSEC 1.3

October 29, 2015 by yuri

Erratum: Unfortunately it appears that this method does not work for OpenDNSSEC 1.4.x. It still works for 1.3.x; specifically, 1.3.18 has been tested (thanks Michał Kępień!).

The current version of OpenDNSSEC is unable to perform an algorithm rollover. Blindly changing the KSK and ZSK algorithm in the kasp.xml will result in a bogus zone. The only option it offers is to go unsigned: remove the DS record, wait, publish the unsigned zone, and then start signing with the new algorithm. This is undesirable.

There is, however, a way to roll to a new algorithm securely if you are clever about it and don’t mind some manual intervention. We have worked out a way for you to do this. The nice part is that it requires no manual interaction with the HSM, and that ods-signerd, the signer daemon, can keep running without any interruptions. This means no service downtime is needed; your zonefile can still be updated and re-signed during this process. For simplicity, a requirement is that your new keys must be of the same length as your old keys.

Also, please note that since we are working with key pairs, the key material should be compatible. This method allows us to switch between different RSA signature algorithms, for example from RSA/SHA-1 to RSA/SHA-256, but not from RSA/SHA-256 to ECDSA P-256/SHA-256.

1 – Stop Enforcer

First we must stop ods-enforcerd. We will not start it again until the whole process is finished. Make sure nothing autostarts it, which could badly disrupt the whole rollover process. Also please make sure you are not currently in a rollover of some sort: your signconf must mention exactly two keys, a KSK and a ZSK.
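
As a rough sketch (the zone name and signconf path are placeholders, and ods-control is assumed to manage the daemons on your installation), stopping the enforcer and checking that the signconf lists exactly two keys could look like this:

$ ods-control enforcer stop
$ grep -c '<Key>' /var/opendnssec/signconf/example.com.xml   # should print 2: one KSK, one ZSK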

2 – Duplicate ZSK

We will edit the signing configuration for your zone. To persuade the signer to double-sign everything we will introduce a new ZSK. But rather than generating a new key we will reuse the key material of the old key. This file generally lives under /var/opendnssec/signconf/.

Thus the following section…

    <Key>
        ...
        <KSK />
        <Publish />
    </Key>
    <Key>
        ...
        <ZSK />
        <Publish />
    </Key>

would become

    <Key>
        ...
        <KSK />
        <Publish />
    </Key>
    <Key>
        ...
        <ZSK />
        <Publish />
    </Key>
    <Key>
        <!-- new ZSK: new algorithm, same <Locator> as the old ZSK, no <Publish /> -->
        ...
        <ZSK />
    </Key>

Take care not to include the “<Publish />” element for the new key.

Now the signer must pick up this change:

$ ods-signer update

Note that at this point the signer is instructed to double-sign your entire zone. This might take a long time for large zones, and your zone will possibly almost double in size. This is the only way to do a secure algorithm rollover in DNSSEC. Also, all your DNSSEC responses increase in size.

One might be tempted to also publish the DNSKEY records of the KSK and ZSK at this time. However I advise against that. If this is done, it is possible for a resolver to retrieve the new DNSKEY RRset (containing the new algorithm) but to have RRsets in its cache with signatures created by the old DNSKEY RRset (i.e., without the new algorithm). [see RFC6781]

We have to make sure every signature is picked up by intermediate resolvers before introducing the new DNSKEY records. The longest TTL in your zone dictates how long you have to wait before starting step 3.
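
One way to find that longest TTL is to scan the signed zone file. This is only a sketch; it assumes the signed zone lives under /var/opendnssec/signed/ (the default), uses a placeholder zone name, and assumes every record carries an explicit numeric TTL in the second column:

$ awk '$2 ~ /^[0-9]+$/ && $2 > max { max = $2 } END { print max }' /var/opendnssec/signed/example.com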

3 – Duplicate KSK

For the KSK we do the same trick as for the ZSK: introduce a new key that uses the same key material as the old key. Your <Keys> section in the signconf will now look like this:

    <Key>
        ...
        <KSK />
        <Publish />
    </Key>
    <Key>
        ...
        <ZSK />
        <Publish />
    </Key>
    <Key>
        <!-- new KSK: new algorithm, same <Locator> as the old KSK -->
        ...
        <KSK />
        <Publish />
    </Key>
    <Key>
        <!-- new ZSK from step 2: now also with <Publish /> -->
        ...
        <ZSK />
        <Publish />
    </Key>

This time we do include the “<Publish />” element for the new keys, including the new ZSK from step 2. Again we notify the signer of the changed signconf for our zone. Since algorithm rollover is not officially supported, it might take some effort to convince the signer to pick up all the changes:

$ ods-signer update
$ ods-signer clear
$ ods-signer sign

At this point only the DNSKEY RRset should be changed. Once the resulting zone is transferred proceed to the next step.

We must be sure everyone was able to pick up the new DNSKEY RRset. Thus before proceeding to the next step wait at least the TTL of the DNSKEY RRset. You can find the TTL in the <Keys> section of kasp.xml. Don’t even consider rushing this step! Waiting here is of the utmost importance.
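
To double-check the TTL that caching resolvers actually see on the DNSKEY RRset, you can query one of your own name servers directly (the zone and server names here are placeholders):

$ dig @ns1.example.com example.com DNSKEY +noall +answer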

4 – Generate DS

Since our DNSKEY changed, our DS will also be different. We can’t ask ods-ksmutil for the DS this time since it isn’t in the database yet. We can of course find the DNSKEY in the signed zonefile; a grep or even a simple dig should give it to us. Then we need to calculate the DS.

$ dig <zone> DNSKEY | grep 257 > dnskeys
$ ldns-key2ds -n dnskeys

At this point you can switch the old DS record at the parent for the new one, either in one action or by adding the new one first and then removing the old one.

Then there is more waiting involved: the TTL of the DS record. Query your parent or look it up in the KASP under <Parent><DS><TTL>.
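
A quick way to see the DS record and its TTL as published by the parent (zone name is a placeholder):

$ dig example.com DS +noall +answer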

5 – Clean up

We will now remove the old KSK section from the signconf, notify the signer, and wait for the TTL of the DNSKEY RRset. We then remove the old ZSK entry from the signconf and notify the signer. (No need to wait here.)

$ ods-signer clear
$ ods-signer sign

6 – Mangle the Database

Getting the enforcer state in line with the latest signconf is easy: only the algorithm number differs. A quick database edit can fix it. For example:

$ sqlite3 /var/opendnssec/kasp.db
sqlite> UPDATE `keypairs` SET `algorithm`=8 WHERE
  `HSMkey_id`='d4823c2f7eedceab5e5e3fd2c16c5dc4' OR
  `HSMkey_id`='<locator-of-the-other-key>';

$ mysql -u ods-db-usr opendnssec -p
mysql> UPDATE `keypairs` SET `algorithm`=8 WHERE
  `HSMkey_id`='d4823c2f7eedceab5e5e3fd2c16c5dc4' OR
  `HSMkey_id`='<locator-of-the-other-key>';

The HSMkey_id can be found in the signconf as <Locator>.
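
For instance, listing the locators straight from the signconf (path and zone name are placeholders):

$ grep '<Locator>' /var/opendnssec/signconf/example.com.xml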

7 – Start Enforcer

If everything was executed according to plan, the enforcer daemon, once started, should produce the same signconf as the one we last fabricated by hand. I recommend verifying this before feeding the signconf to the signer.
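
A simple sketch of such a check (paths and zone name are placeholders): keep a copy of the hand-edited signconf, start the enforcer, and compare what it writes once it has produced its own version.

$ cp /var/opendnssec/signconf/example.com.xml /tmp/signconf.manual
$ ods-control enforcer start
$ diff /tmp/signconf.manual /var/opendnssec/signconf/example.com.xml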

NSD 4.1: zonefile-mode and fork fix

September 19, 2014 by wouter

Use zone files and not nsd.db

NSD 4.1 has been released and it contains a new feature where NSD does not use the nsd.db file, but uses the zonefiles directly. The feature can be turned on by configuring one line in nsd.conf; it can also be turned off by changing that line back. The server needs to be restarted for the change to take effect.

nsd.conf excerpt:

# this line disables nsd.db, and the text format zonefiles
# are used directly
database: ""

With this config statement NSD reads the zonefiles for zones upon startup.  This takes about the same time as reading the nsd.db file.  The memory usage without the nsd.db file is about 50%-60% lower. When zone transfers (for secondary zones) update the zone information, NSD writes the new contents back to the zonefile.

The zonefiles are written every hour, with a timer that can be configured with the zonefiles-write: 3600 configuration statement. This sets the time in seconds after which the zonefiles are written back to disk. NSD first writes the file to file~ and then renames that to the original filename, to protect against filesystem space problems. Reading and writing zonefiles is slightly slower than nsd.db, but the performance of NSD in queries per second is not impacted. NSD does not write the entire zonefile every time a change occurs, because that would be very slow, especially in the case of many incremental zone transfers; that is why the zonefiles-write timer only writes the entire file after the specified time has elapsed.
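
Combining this with the database setting shown above, a minimal sketch of the relevant nsd.conf lines (both belong in the server: section; 3600 seconds is simply the example value used in the text):

# use the text zonefiles directly, no nsd.db
database: ""
# write updated zones back to their zonefiles every 3600 seconds
zonefiles-write: 3600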

You can check zonefiles before loading them with the new nsd-checkzone tool that prints if the zonefile contains errors.  It uses the same parse code as NSD.
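
Checking a zone before loading it could look like this (zone name and file path are placeholders):

$ nsd-checkzone example.com /etc/nsd/zones/example.com.zone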

Linux fork problems fixed

NSD's mode of operation forks processes, specifically for every zone update that is processed. Because NSD 4 supports provisioning of many more zones than NSD 3 does, many more forks are performed when these zones update frequently. This caused problems on Linux systems, because Linux cannot handle the specific sequence of fork operations that NSD used.

The system leaked memory for the NSD process, until the system became unstable (after days). The workaround in NSD 4.1 forks in a different pattern that does not cause the Linux implementation to leak the vm chunk information in the Linux process memory tables. This information was not really leaked: it was cleaned up on process exit, so a stop and start of the daemon could also work around the problem, but it accumulated while the daemon was running.

The fork pattern that caused the failures on Linux was one where the deepest forked process forks new copies that replace all the older processes, in sequence. The new pattern takes efforts to have a higher-up (parent) process fork the new copies, at the expense of having the UNIX signals delivered to the wrong processes afterwards; NSD now uses pipes to communicate that information. For SIGCHLD it uses the property that pipes are closed by Linux when a process exits (and it was the only process that held that file descriptor).


OpenDNSSEC project transferred to NLnet Labs

September 15, 2014 by benno

NLnet Labs announces that it will take full responsibility for continuing the activities of both the OpenDNSSEC software project and the support activities of the Swedish OpenDNSSEC AB. OpenDNSSEC was created as an open source turn-key solution for DNSSEC, managing the security of domain names on the Internet. The project drives adoption of Domain Name System Security Extensions (DNSSEC) to further enhance Internet security.

After the OpenDNSSEC project was initiated in cooperation with UK Internet registry Nominet, the project and its development have been managed by the Swedish Internet Infrastructure Foundation (responsible for .SE) for more than four years. NLnet Labs contributed strongly from the early days onwards. Working closely together, both organizations agreed upon the transition of the project ownership to NLnet Labs.

NLnet Labs and its 100% subsidiary Open Netlabs BV will continue the development and support from August 2014 onwards. There is a strong need to move forward—as the project has picked up pace—and increase the global acceptance and implementations of OpenDNSSEC. Embedding the project, product, and support in a sustainable environment will help achieve its original objectives and provide the required added value of the OpenDNSSEC software products. In order to allocate sufficient development capacity, NLnet Labs recently opened vacancies for junior and senior software system engineers.

About NLnet Labs

The NLnet Labs Foundation (NLnet Labs for short) is a not-for-profit foundation founded in 1999 in the Netherlands. Its statutes define its objectives: to develop Open Source software and open standards for the benefit of the Internet. The foundation believes that the openness of the network, as enabled by technology and policy, furthers human wellbeing and prosperity. By contributing technology and expertise in the form of Open Source Software and Open Standards, we contribute to wellbeing and prosperity for all.

About Open Netlabs

Open Netlabs is a support and consultancy company, globally supporting organisations using NLnet Labs’ open source software and assisting customers in the implementation and operation of their DNS infrastructure. High-level support and SLAs, consultancy, and training are the core of the services portfolio of Open Netlabs. Open Netlabs BV is a wholly owned, taxable subsidiary of the NLnet Labs Foundation, serving the non-profit public benefit goals of its parent. The company is guided and managed according to the charter of the NLnet Labs Foundation.

A next step…

June 2, 2014 by olaf

On July 11 I will be leaving NLnet Labs to join the Internet Society as Chief Internet Technology Officer.

During the last decade and a half I have tried to push the needle towards a more secure, resilient, and dependable Internet. For the last eight and a half years I did this at NLnet Labs by leading a team that writes high-quality code, participates in the Internet standards process, and works with operators on implementations. The Lab has pushed the needle on DNSSEC deployment by building products that, I proudly believe, make a difference for the Open Internet.

Why does it make a difference?

Because the Internet’s technology matters.

Bottom-up innovation and deployment of technology, even if there is very little short-term economic incentive to take action, is at the very heart of the success of the Internet. The availability of Open Source software turns out to be an important driver for the successful deployment of new protocols. That is where NLnet Labs, and a myriad of other open (and closed) source developers, in groups or as individuals, make a difference.

As a corollary, when there is such little short-term economic incentive there needs to be buy-in for the vision of ‘what good looks like’. With such a vision all the independent players can work towards a common objective and we collectively take a bet on a future network value. That is where ISOC makes the difference. With its promotion of the open development, evolution, and use of the Internet (for the benefit of all people throughout the world), ISOC can share a vision and encourage technologies that help to increase trust, provide security, and make the net more stable, to gain a foothold.

For me, the transition from an organization that builds technology for the Open Internet to an organization that promotes the Open Internet is a natural path. I had evangineer as job title on my business card: a pun combining the realism of technical engineering with evangelizing the good of the Open Internet. At ISOC I plan to continue the practice of evangineering, by working ‘with good people and fostering broad collaboration to address the [Internet’s] issues, since we all know that the Internet’s Technology Matters’ (a quote from my predecessor Leslie Daigle).

NLnet Labs will be in good hands, with NLnet Labs veteran Benno Overeinder at the helm, and Han Brouwers (director of the wholly owned subsidiary Open Netlabs BV) at his side. NLnet Labs is a solid team working on innovative projects that will shape the future. Of course, NLnet Labs continues to be committed to its projects and products.

I will be a proud and supportive alumnus.


Hackathon at TNW-2014

May 1, 2014 by wouter
Wouter Wijngaards and Olaf Kolkman


At NLnet Labs we believe that DNSSEC allows for security innovations that will change the global security and privacy landscape. Innovations like DANE, a technology that allows people to use the global DNS to bootstrap an encrypted channel, are only the start of currently unimaginable technical innovation.

The deployment of DNSSEC is a typical collective action problem and we are trying to make a difference by providing the tools that help to reduce costs or bring value for those who want to provision, provide, and use secured DNS data.

The GETDNS API plays in that space. It is an attempt to provide applications a tool to get DNSSEC information that will aid the improvement of security and privacy.


The GETDNS API is an API description designed by application developers for accessing DNS asynchronously with DNSSEC and DANE functionality. The GETDNS API is implemented in a collaborative effort by Verisign and NLnet Labs in the getdns library.

GETDNS participants at TNW 2014

The TNW 2014 conference in Amsterdam, the Netherlands, hosted a Hack Battle this year. Participants made ‘hacks’ (apps or tools) using provided APIs and their own tools, and competed in this contest. The contest ran for 36 hours and, with 146 participants, produced a number of contest entries. Verisign Labs and NLnet Labs promoted the use of the getdns API library for DNSSEC, security, privacy and DANE implementation. This library, and thus the API, was available to the participants. In the contest the C API, the node.js API and the python API were available.

Four entries were made using the getdns API; those participants received GetDNS T-shirts. The other teams in the Hack Battle can be viewed here.

The presentations of the teams are on video, youtube link.



By Ruslan Mavlyutov, Arvind Narayanan and Bhavna Soman.

This entry created a plugin for Thunderbird, in Python, that checks the DNSSEC credentials of the DKIM record associated with an email. The user can see the status of the email.

This entry won the prize given by NLnet Labs (Raspberry Pi™ kits)!

hackerleague link


Bootstrapping Trust with DANE

By Sathya Gunasekaran and Iain Learmonth.


This entry adds DNSSEC-secured OTR-key lookups to the Python-based gajim XMPP client. This project allows people that use OTR in their jabber client to check if the fingerprint of a key matches the fingerprints published in the DNS. They built a Python library that uses getdnsapi to fetch OTR, OpenPGP and S/MIME fingerprints.

This team was interviewed by the Dutch Tweakers website, video link.

Github python dnskeys library link.

Github gajim branch.


DANE Doctor

By Hynek Schlawack and Richard Wall.

This entry is a website for debugging DANE. It shows diagnostics and highlights errors.

They also integrated the python bindings for getdns with the asynchronous python framework Twisted. They hope to be able to contribute this as a DANE enabled TLS client API to the Twisted framework.

Github link.


DNSSEC name and shame!

By Tom Cuddy and Joel Purra.

This entry wants to highlight which contest sponsors do the right thing to protect DNS data and shame the ones that do it wrong.

This team won the prize given by PayPal, because of the importance of protecting DNS data.

Github link.

website link.

The GETDNS API specification is edited by Paul Hoffman. Verisign Labs and NLnet Labs are cooperating on the implementation of the API using code and expertise from the Unbound and ldns projects. The getdnsapi implementation website, twitter.

Does Open Data Reveal National Critical Infrastructures?

February 21, 2014 by benno

This blog post is based on the report Open Data Analysis to Retrieve Sensitive Information Regarding National-Centric Critical Infrastructures by Renato Fontana.

Democratization of Public Data

The idea of Open Data comes from the concept that data should be freely available for anyone to use, reuse, and redistribute. An important motivation in making information available via the Open Data Initiative was the desire for openness and transparency of (local) government and private sectors. Besides openness and transparency, economic value can also be created by improving data quality through feedback on published data. Typically, most content available through Open Data repositories refers to government accountability, companies acceptance, financing statistics, national demographics, geographic information, health quality, crime rates, or infrastructure measurements.

The volume of data available in Open Data repositories supporting this democratization of information is growing exponentially as new datasets are made public. Meanwhile, organisations should be aware that data can contain classified information, i.e., information that should not be made publicly available. The explosive rate of publishing open data can push the information classification process to its limit, and possibly increase the likelihood of disclosure of sensitive information.

The disclosure of a single dataset may not represent a security risk, but when compiled with further information, it can truly reveal particular areas of a national critical infrastructure. Visualisation techniques can be applied to identify patterns and gain insights where a number of critical infrastructure sectors overlap.

This blog post shows that it is possible to identify these specific areas by only taking into account the public nature of information contained in Open Data repositories.

Method and Approach

In this study, we focus on Open Data repositories in the Netherlands. After identifying the main sources of Open Data (see details in the report), web crawlers and advanced search engines were used to retrieve all machine-readable formats of data, e.g., .csv, .xls, .json. A data sanitation phase is necessary to remove all blank and unstructured entries from the obtained files.

After the data sanitation, some initial considerations can be made by observing the raw data in the files. For example, finding a common or primary identifier among datasets is an optimal approach to cross-reference information. In a next step, the datasets can be visualised in a layered manner, allowing for the identification of patterns (correlations) in the data by human cognitive perception. In visualisation analysis, this sense-making loop is a continuous interaction between using data to create hypotheses and using visualisation to acquire insights.

As the research was scoped to the Netherlands and Amsterdam, the proof of concept took into account the government’s definition of “critical infrastructures”. Also, the research was limited to datasets referring to energy resources and ICT. A visualization layer was created based on each dataset that could refer to a critical infrastructure.

Visualisation of Data

From the different Open Data sets, a layered visualisation is generated and shown below. The figure provides sufficient insights to illustrate that most data centers in Amsterdam are geographically close to the main energy sources. It also suggests which power plants may behave as backup sources in case of service disruption. In the case of Hemweg power plant located in Westpoort, it is clear how critical this facility is by observing the output amount in megawatts being generated and the high-resource demanding infrastructures around it.

Four layer visualisation. The darker green areas are also the sectors where the highest number of data centers (blue dots) and power plants (red dots) are concentrated in Amsterdam.


A few datasets contained fields with entry values flagged as “afgeschermd” (shielded), suggesting an existing concern about not revealing sensitive information. The desire to obfuscate some areas can be seen as an institutional interest in enforcing security measures. This implies that such information is sensitive and that its disclosure can be considered a security threat.

Conclusions and Considerations

The results and insights in this research were not trivial to obtain. Even within a short time frame for analysis over a specific set of data, we were able to derive interesting conclusions regarding the national critical infrastructures. Conclusions of this nature may be something that governments and interested parties want to prevent from being easily obtained, for national security reasons.


The presented research confirms that it is possible to derive conclusions about critical infrastructure regions from public data. The approach involved the implementation of a feedback (sense-making) loop process and continuous visualization of data. This ongoing effort may create space to discuss to what extent this approach can be considered beneficial or dangerous. Such a discussion must be left to an open debate, which must also consider the matter of Open Data and national security.

To open or not to open data?

How “National” is the Dutch Critical IP Infrastructure?

September 24, 2013 by benno

This blog post is based on the report “Discovery and Mapping of the Dutch National Critical IP Infrastructure” by Fahimeh Alizadeh and Razvan Oprea.


After the publication of the Critical Infrastructure Protection report more than ten years ago, the leading questions that emerge today are how critical infrastructure companies are interconnected, how resilient these connections are, and to what extent they are dependent on foreign entities.

In 2002, the Netherlands started the Critical Infrastructure Protection (CIP) project with the objective “The development of an integrated set of measures to protect the infrastructure of government and industry”. In the CIP study, critical infrastructure includes the business enterprises and public bodies that provide the goods and services essential for the day-to-day lives of most people in the Netherlands. The critical infrastructure is divided into 12 critical sectors, with telecommunications and ICT as one of them.

In this blog article, we look into a specific aspect of the Dutch critical infrastructure, namely how the organisations that are part of the critical infrastructure depend on Internet services, and to what extent these Internet services are part of a Dutch national IP infrastructure. To this end, we map organisations that are part of the Dutch critical infrastructure to their presence on the Internet, and analyse how the organisations are interconnected via Dutch or foreign networks.

Previous Studies

The presence of organisations on the Internet is defined by the IP resources they use, and how their networks are connected with other networks. IP resources are the IP address blocks (IP prefixes) and autonomous system numbers (ASNs) that are used in a network. The interconnection between networks is governed by the BGP routing protocol, which operates with IP prefixes and ASNs in its routing/forwarding decision algorithm.

In 2012 in Germany, a joint project with two universities and the Federal Office for Information Security (BSI) classified the German “national Internet”.  Their methodology started with the list of IP prefixes allocated to organisations registered in Germany.  From this information they found the originating AS numbers and then their interconnections using BGP dumps.

We took a slightly different approach in our research—without access to privileged information (including, for instance, the IP blocks used internally by critical infrastructure organisations in the Netherlands), we were limited in scope. We however included in our analysis the foreign ASes that act as proxies for web and mail services provisioned by critical infrastructure Dutch companies.

Approach, Methods and Techniques

To discover and map the interconnections between the critical infrastructure organisations, we identify three phases in our analysis. First, we identify their Internet presence: their AS numbers or the AS numbers of the entities that act as their proxies (think ISPs). Once the list was created, we looked at how these ASes are interconnected and, finally, we described a method for visually mapping them.

Finding the Points-of-Presence

The discovery part involved a lot of manual work—first we needed to find all the AS numbers assigned to Dutch organisations and then filter out those which are not part of the critical infrastructure. The data source used in the first step is an authoritative list maintained by RIPE NCC, containing all the AS numbers allocated to organisations from their service region, which roughly comprises Europe, Russia and the Middle East. Singling out Dutch organisations (not trivial) resulted in a pretty comprehensive list of 727 organisations. The next step was to filter the critical organisations from the list. We created a classification based on the 12 sectors the Dutch government deemed critical in CIP project. After filtering, we ended up with 335 selected entries—this was our bottom-up discovery process: from IP resources to organisations.

At this point we observed that around 80% of the organisations in our AS list are active in the Internet, IT and Communications sector. This meant that the vast majority of the critical infrastructure organisations used a “proxy AS”, such as an ISP, to intermediate their Internet presence. This started the top-down discovery process. We selected a number of organisations from each critical sector (using the Dutch Chamber of Commerce, Google, Wikipedia, etc.) and after a careful analysis we ended up with around 150 entities.

Without any information on the way organisations physically connect to the Internet, we relied on public DNS data to extract the useful bits of information: the A (and AAAA) and MX records. Web and mail servers are important because there is inevitably a two-way information flow between the organisation and the entities hosting its web and mail servers (unlike the NS records, for instance). The combined list obtained from concatenating the results of the two approaches gave us a master list of critical infrastructure-related ASNs.
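
The DNS part of this discovery step can be illustrated with a small sketch (the domain is a placeholder; mapping the resulting addresses to origin ASNs was done with BGP data, for which Team Cymru's IP-to-ASN DNS service is one possible shortcut, not the method used in the report):

$ dig +short example.nl A
$ dig +short example.nl AAAA
$ dig +short example.nl MX
# map an address such as 192.0.2.10 to its origin ASN (octets reversed in the query name):
$ dig +short 10.2.0.192.origin.asn.cymru.com TXT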

Connecting the Dots

The next step was determining how these ASNs inter-relate. Two well-known Internet topology maps are from CAIDA and UCLA Internet Research Lab. These maps show all the links between AS pairs. For our analysis, we selected the UCLA IRL topology map as it was the most recent of the two.

In the initial mapping of the Dutch critical infrastructure, we selected all links for which both nodes are part of our combined list of ASNs (Dutch and foreign, discovered via the bottom-up plus the top-down approaches). Unfortunately, the resulting graph had many disconnected nodes, which is an interesting observation by itself as it shows that the Dutch critical infrastructure is dependent on non-Dutch intermediary or transit nodes. The goal being to build the minimum graph that connects all the critical infrastructure ASNs, the next step was, for each ASN in our list, to add its provider (UCLA offered this information as well) and re-run the selection process. In this way we ended up with a much better picture of the relations between the Dutch critical infrastructure ASes and their dependency on foreign intermediary or transit networks.

The final step was to visualise this information in a way that offers an overview of the relations per critical sector but allows one to dive into more details if needed. For this study we considered the Data-Driven Documents (D3.js) JavaScript library and the Sigma.js library. For our purposes, the Sigma.js library provided the visualisation methods that satisfied our needs: it gives a good perspective on each sector and allows one to zoom in seamlessly on an area for details. Placing the foreign (proxy) ASes on the opposite side from the Dutch ones gave an even more intuitive representation of the AS interconnections.


We provided network graphs for each critical sector and used them as input data source for further analysis (our report contains more details). To give an example, let us consider the Energy critical sector, which includes 3 sub-sectors: electricity, gas, and oil. Figure 1 shows the ASN network graph for this sector when only direct links between ASNs are taken into account. No providers are included and it is clear that the graph is too disconnected to draw any conclusions. The distribution of links over the two sides (Dutch ASNs on the right side and foreign ones on the left side) is almost equal: 44% for the foreign ASNs and 56% for the Dutch ASNs.


Figure 1: Energy critical sector without providers.

Figure 2 shows the graph after we add for each AS its direct provider—the graph is now more connected, with a different distribution of links: 69% for foreign ASNs and 31% for the Dutch ASNs.


Figure 2: Energy critical sector with providers.

Although it is expected that each node will have at least one link (the connection of the node to its provider), we can still find one isolated ASN in the Dutch part—it is, according to RIPEstat, ASN 61013 (Alliander N.V.). Although Alliander N.V. is one of the largest companies in maintenance, expansion and adaptation of the gas and electricity network in the Netherlands, no IP prefix ever originated from this AS; instead their web server and mail server are hosted by British Telecommunications plc (ASN 5400).


In this research we mapped the representative Dutch critical infrastructure organisations using two discovery methods (bottom-up and top-down). The discovered organisations were verified manually one-by-one so we have a high degree of confidence in the accuracy of the results. However, we only worked with public sources of information and thus we did not see physical, private and back-up links. A more comprehensive list of organisations can only be obtained with specialised information, or with privileged access to information, which would allow us to know what IP address space is actually being used inside every organisation.

We observed that many critical infrastructure organisations have reliable connections to the Internet (the native and proxy ASes are well interconnected), but rely a lot on foreign providers for their communication needs.

If we consider the hypothetical scenario of an emergency in which critical sector organisations can only communicate using Dutch links, then around half of them (those that use foreign proxy ASes) would be cut off from the network. In this context we find it would be useful to start a discussion regarding the security and privacy implications of having critical infrastructure organisations’ email and websites hosted with foreign entities, especially those from outside the European Union (EU), since they do not necessarily have the same laws regarding data privacy and confidentiality.


The study was performed as a System and Network Engineering (UvA) Master thesis research project by Fahimeh Alizadeh and Razvan Oprea under supervision from Benno Overeinder (NLnet Labs) and  Marco Davids (SIDN).

RRL SLIP and Response Spoofing

September 16, 2013 by wouter

The recent disclosure by ANSSI (CVE-2013-5661) notes problems with RRL SLIP and response spoofing. This document explains the trade-offs.

Note that the security advice is about trade-offs between the vulnerability to reflective DoS and the likelihood of individuals being cache-poisoned, and is as such a generic operational DNS trade-off. There are no specific vulnerabilities in the NSD implementation; rather, the vulnerability is caused by the network throttling dropping answers.

NSD has response rate limiting (RRL) implemented. This exists in NSD 3 and NSD 4, when configured with --enable-ratelimit. The rate limiting uses SLIP to send back truncated replies and drop other replies. The default slip rate is 2. The slip rate is randomized, and it is therefore difficult to predict exactly which response is going to be truncated and which response is going to be dropped.

When the zones served with NSD have DNSSEC signatures, it would be best to use the default slip rate of 2. Spoofing can be countered with DNSSEC validation of the signatures. And reflective DoS is countered with the RRL slip rate of 2. The slip rate of 2 causes reflective DoS attacks to lose half their bandwidth, and protects the target, while legitimate clients that are falsely identified as spoofing targets (false positives) experience delays in receiving answers.

When the zones that are loaded are not protected with DNSSEC, the choices are less optimal. The RRL slip rate of 2 solves reflection, but response spoofing, as the ANSSI report notes, is a problem. You can also choose an RRL slip rate of 1, which truncates every response; the possibility to spoof responses as reported by ANSSI is then removed. But with RRL slip 1 the server acts as a reflector for spoofed traffic, albeit a reflector that does not change the size of that traffic, so without amplification.
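
For reference, the slip behaviour is set in the server: section of nsd.conf and is only available when NSD was built with --enable-ratelimit; a minimal sketch, where the rrl-ratelimit value is just an illustrative example:

# responses per second per netblock before rate limiting starts
rrl-ratelimit: 200
# 1: truncate every rate-limited response; 2 (the default): truncate every other one, drop the rest
rrl-slip: 2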

NLnet Labs recommends DNSSEC for DNS data protection, including detection of spoofing. We realize that operators of authoritative name servers may not be able to influence the operators of recursive name servers to turn on validation. Turning on DNSSEC on your zones allows the recursive name server operators to make their choice while a slip value of 2 decreases the attractiveness of the global DNS system as a DoS amplification tool.

NSD4 TCP Performance

July 8, 2013 by wouter

For NSD 4 the TCP performance was optimised, with different socket handling compared to NSD 3. This article discusses a TCP performance test for NSD 4. In previous blog contributions, general (UDP) performance was measured and memory usage was analysed for NSD 4.

The TCP performance was measured by taking the average qps reported by the dnstcpbench tool from the PowerDNS source distribution. (Thanks for a great tool!) The timeout was set to 100 msec. On FreeBSD the system sends connection resets when a TCP connection cannot be established, and in this situation the tool over-reports the qps. To mitigate this, the qps was scaled back by multiplying by the fraction of successful TCP queries. The scaled-back qps is close to the median qps that is also reported by the tool. For Linux, such scaling was not performed, and the average and median are close together.
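
As a rough sketch of such a run (the server address is a placeholder, queries.txt is a hypothetical list of "name qtype" lines, and the flag names should be checked against your dnstcpbench version):

$ dnstcpbench -f queries.txt --timeout-msec 100 192.0.2.1 53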

(Charts: average successful TCP queries per second for each server implementation on Linux and FreeBSD.)


The highest TCP queries per second performance on Linux is about 14k qps, by Yadifa, followed by NSD 4. On FreeBSD performance is higher, about 16k qps, and NSD 4 is fastest, with Knot and then Yadifa following at about 14k qps. Notice how Bind performs at 12k qps on FreeBSD, and 8k qps on Linux. PowerDNS remains at about the same speed. NSD 4 has higher TCP qps than NSD 3, on Linux and on FreeBSD.

On FreeBSD, software that cannot handle the load produces connection errors. The number of connection errors goes down when more threads are used by NSD 3 and Knot. For other software the thread count does not really influence this connection error count, but it does increase the qps performance. For Yadifa the qps performance degrades substantially when more threads are used, and it has a large number of connection errors because of that. In general the connection errors are caused by a lack of performance, so that a TCP connection cannot be established. Linux apparently deals with this differently (it turns them into timeouts), which may cause some qps reporting differences between the OSes. In both cases the charts represent the average successful TCP qps.

The same pattern as for UDP can be seen with the number of threads: for NSD 4, the best Linux performance uses 2 CPUs, and performance increases further on FreeBSD, but the optimum here is 3 CPUs instead of the 4 CPUs for UDP. Other software similarly benefits from more CPU power. It turns out that the PowerDNS option to get extra distribution threads adds UDP workers and not TCP workers, which is why performance does not scale up in these charts for PowerDNS. Yadifa performance goes down on both Linux and FreeBSD when more threads are in use.

The zone served in these experiments is a synthetic root zone (as used in previous tests), with 1 million random queries for unsigned delegations. PowerDNS uses its zonefile backend. The same test systems as in the previous measurements are used; a PowerEdge 1950 with 4 cores at 2 GHz runs the DNS server.

NSD4 High Memory Usage

July 5, 2013 by wouter

NSD 4 is currently in beta and we are expecting a release candidate soon. This is the second of a series of blog posts in which we describe some findings that may help you to optimize your NSD4 installation. In the first article we talked about general performance; this article muses about memory usage. (This article is based on the forthcoming nsd-4.0.0b5.)

NSD4 Memory usage

The memory intensive architectural trade-off between pre-compiling answers and high speed serving of packets has been part of the NSD design since its first incarnation almost a decade ago.

With NSD 4 we continued the pre-compilation philosophy.  It even seems that, compared to NSD 3, NSD 4 uses more memory. Why? How?


Memory is being consumed to achieve speed improvements, but also for improvements such that administrators can update the zones served without needing the restart that so prominently featured in NSD 3; NSD 4 can update, add, and remove zones without a restart. NSD 4 can receive IXFR (incremental zone transfers) and apply them in a time that depends on the size of that transfer, independent of zone size. Besides, during an update the database as stored on disk (nsd.db) is updated. On incremental updates of NSEC3-signed zones the NSEC3-precompiled answers are all updated as well. All these features, which improve usability and speed, imply that disk usage and memory usage increased compared to NSD 3.


To compare the memory a Dell PowerEdge 1950 with 8 GB of RAM, a large HDD and 2 GHz Xeon CPU (the same machine used for the performance tests earlier) was used to load the .NL zone (the authoritative Dutch top-level-domain) from June 2013. This is a fairly large zone, its zonefile is about 1.5 GB. It has 5.3 million delegations. It is signed with DNSSEC, and uses NSEC3 (opt-out), and has about 28% signed delegations. This means, with the nsec3 domains for the signed delegations, it has 5.3 * 1.28 = 6.8 million domain names with associated resource records.

The figure below shows the memory use of the daemon. ‘Rss’ represents the resident memory used by the daemon after starting. ‘Rss other’ is measured by tracking the total system memory usage. ‘Compiler’ represents the memory used by a zone compiler (if the software has one) and is added on top. This assumes you run the zone compiler and the DNS server on the same machine. If swap space is used we add it separately. Finally, the virtual memory usage (‘vsz extra’) is also added onto the bar; that entry reflects the size of the memory-mapped I/O to the nsd.db for NSD 4. Note that the memory-mapped I/O does not need to reside in core memory.


We configured our measurement machine with 8 GB of RAM and we observe that the NL zone barely fits with NSD 4 (16 GB would be a better and more realistic configuration). Bind and Yadifa can easily serve the zone from 8 GB of core memory. The zone compiler of Knot runs into swap space because it becomes very big. NSD 4 causes swap space to be used for a different reason: it (barely) fits in the 8 GB (about 7 GB), but its heavy use of memory-mapped I/O causes the Linux kernel to make space in RAM by swapping other stuff to disk. The 8 GB of RAM is insufficient: you can start the daemon, but for operations it is too tight for common tasks, such as reloading the zone from the zonefile and processing a (large) AXFR. However, because of its new design, NSD 4 could actually work in this amount of RAM if it handled only relatively small IXFR updates.

The NSD 4 usage is the main daemon, plus a very small xfrd (xfrd now uses less than it did in NSD 3). The main daemon uses more memory for an increase in speed and also for better NSEC3 zone update processing. The virtual memory is the memory-mapped nsd.db file. The kernel uses its virtual memory cache mechanism to handle this I/O, and you can provision for less than the total nsd.db file (at the cost of update processing speed). Realistic provisioning for NSD 4 here is about 10% to 100% of the virtual space, somewhere between 9 GB and 17 GB. It would be wise to add more memory on top of this for large zone changes: because NSD keeps serving the old zone while it is busy setting up the new version of the zone, it uses about twice the memory for that zone, so add another 6-7 GB (the rss) for this (AXFR, zonefile change).

The NSD 3 usage is the base daemon, plus a xfrd (the other process), plus zonec. For continued operations another (same-sized) chunk should be added for nsdc update, which updates the zonefiles and cleans up the nsd.db. This causes NSD 3 to use more memory in its provision than is necessary to run NSD 4 with a low disk I/O provision. This is because NSD 4 does not have zonec and nsdc update; they have been folded into the main daemon and are performed during reload tasks (while the daemon keeps serving DNS), and this is what causes the disk structures to be much larger.

NSD 4 comes with a tool that gives estimates for the size of RAM and disk needed for a zone. For this zone it indicates that 6.9 GB is used for RAM and 11 GB is used for nsd.db. The tool estimates that about 8 GB to about 17 GB could be used to run the NL zone (with 10% - 100% of the nsd.db memory mapped). As an aside, you build the nsd-mem tool by running ‘make nsd-mem’ in the source repository.
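
A minimal sketch of building and running the estimation tool (the configuration file path is a placeholder):

$ make nsd-mem
$ ./nsd-mem -c /etc/nsd/nsd.conf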

NSEC3, memory and performance

NSD 4 uses precompiled NSEC3 answers. Without pre-compilation of NSEC3, providing answers that prove the non-existence of a query name (NXDOMAIN proof) involves a number of hash calculations that bog down the performance of the name server. Obviously this precompiled data takes memory, but it results in NSD 4 answering queries much faster, as it is not CPU-bound by the NSEC3 hashing. The precompilation means hashing all the names in the zone, something that takes 60-80 seconds on our measurement machine for the NL zone. To handle new zone updates quickly, NSD 4 keeps administration to incrementally update its precompiled NSEC3 data. This means IXFR updates to NSEC3 zones are handled by hashing the names affected by the update and not the entire zone. Note that NSD 4 does not allocate NSEC3 memory for NSEC (non-NSEC3) and unsigned zones, and this could make it use less memory than NSD 3 for non-NSEC3 zones.

If the NL zone were signed with NSEC, with the same key sizes, then the zonefile would become 2.7 GB for the 5.3 million delegations. The memory usage goes up because there is no opt-out, but goes down because there is no NSEC3 administration. The nsd-mem tool calculates 6.0 GB RAM and 10.6 GB disk usage and estimates 7.8 - 16.6 GB. This is nearly identical to the NSEC3 case, slightly less on the RAM and disk usage. NSD 3 uses 4.5 GB (rss) + 4.5 GB (other proc) + about 4 GB (zonec). And omitting the nsdc update usage this is already 13 GB for NSD 3.

Starting the server

In NSD 4 a restart of the daemon should only be necessary for system reasons (kernel updates). With its nsd-control tool you can change the other configuration on the fly without a restart. NSD 3 needed to run zonec and restart the daemon to serve a new zone; NSD 4 does not need to do so.

This shows the speed of starting the daemon:

For NSD 4 you can compile a new zone without a restart, and while serving the old zone. Its zone compiler also has to write the 11 GB nsd.db to disk, and this makes it slower than the NSD 3 zone compiler (it is the same parser). The Knot compiler is likely from before its recent Ragel updates that speed it up. The initial start for NSD 4 measures the time to read the NL zone from the 11 GB nsd.db; this would happen after a system restart, for example.

The stop time for NSD 3 and 4 is 0, below one second. For the other daemons this is curiously slow. But these numbers are very small compared to the system start numbers.

Thus, if you get a fresh zonefile and want to start, you can use the left bar for NSD 4, add up the two bars for NSD 3, and add up the two bars for Knot. For a system restart, the daemon start value gives the time needed to setup the daemon memory.

Summary: From NSD 3 to NSD 4

If you are running NSD 3 today and you do not experience any memory issues, such as extensive swapping, during the full serving-updating-zone-compiling cycle, you should not experience any problems migrating to NSD 4. This is mainly because a significant fraction of the memory use in NSD 4 is memory-mapped to disk and is not accessed when serving answers to DNS queries.

However, we do advise you to run the nsd-mem tool that ships with NSD 4 to test your actual requirements. That will give you an exact calculation of your core needs.


The software tested is NSD 4.0.0b5, NSD 3.2.15, Bind 9.9.2-P1, Knot 1.2.0, and Yadifa 1.0.2-2337. The OS is Linux 3.9, the file system is ext4 on hdd.
