Test-driving the CrypTech Alpha Board

May 15, 2017 by yuri

Last summer there was a CrypTech workshop in Berlin right before the IETF. I did not attend the workshop personally but a mysterious anodized red box appeared on my desk shortly after. It was the CrypTech Alpha Board, an open source hardware cryptographic engine.

At the workshop, OpenDNSSEC 1.4 was found to work somewhat with the Alpha board, which at that time was still heavily under development. Naturally I wanted to give it a go with the then just-released OpenDNSSEC 2.0 and see if it worked (it didn’t).

In theory, since CrypTech implements a PKCS11 interface, OpenDNSSEC should be able to use it as a drop-in replacement for SoftHSM or any other HSM. Although installation of the software went smoothly, it was not ready for prime time back then. One of the first things we noticed was that OpenDNSSEC slowed down heavily. This wasn’t caused by the slow signing process per se, but mainly because connecting to the device took very long. Another hurdle was the limited number of slots available for keys: only 10 keys could be stored, which would make any real setup impractical. The deal breaker for OpenDNSSEC, however, was that only one application, in only one thread, could access the HSM; all other threads would block until the current thread disconnected. OpenDNSSEC never closes an HSM connection, has multiple threads per daemon using the HSM, and has multiple daemons involved concurrently. This was not going to fly for OpenDNSSEC.
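To illustrate what “drop-in replacement” means in practice: OpenDNSSEC only needs to be pointed at a different PKCS11 module in its conf.xml. The module path, token label, and PIN below are placeholders, not the actual values from a CrypTech install — check your own installation for the real ones.

```xml
<RepositoryList>
  <!-- Placeholder paths/labels: adjust for your CrypTech installation -->
  <Repository name="cryptech">
    <Module>/usr/local/lib/libcryptech-pkcs11.so</Module>
    <TokenLabel>cryptech</TokenLabel>
    <PIN>XXXXXXXX</PIN>
  </Repository>
</RepositoryList>
```

Swapping back to SoftHSM is then just a matter of changing the Module path and token details.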

A winter passed by, and the red box on my desk slowly built up a light gray film of dust. With spring around the corner, while the trees at Science Park released their pollen, a new version of the software and firmware was released. With tears in my eyes (pollen) I set out to test once more whether CrypTech worked with, by now, OpenDNSSEC version 2.1.1 (it did!).

My three major concerns have been addressed. Logging in is now a considerably quicker operation than before (5 seconds versus 30). The 10-key limit has been raised considerably, to more than 4000 keys depending on the number of attributes stored alongside each key. And, most importantly, a daemon called ‘cryptech_muxd’ now ships with the software. It multiplexes connections to the HSM so that any number of threads can access the HSM concurrently.
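The idea behind such a multiplexer can be sketched in a few lines of Python. This is not cryptech_muxd itself (which talks to the HSM over a USB serial transport); it is just the underlying pattern: many client threads funnel requests through a queue to a single worker thread that owns the one non-shareable connection.

```python
import queue
import threading

class HSMMux:
    """Toy multiplexer: many threads share one serialized 'HSM' connection."""

    def __init__(self, hsm_call):
        self._requests = queue.Queue()
        self._hsm_call = hsm_call  # stands in for the single HSM connection
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Only this thread ever talks to the 'HSM', one request at a time.
        while True:
            request, reply = self._requests.get()
            reply.put(self._hsm_call(request))

    def call(self, request):
        # Any number of threads may call this concurrently.
        reply = queue.Queue(maxsize=1)
        self._requests.put((request, reply))
        return reply.get()

# Stand-in for the real device: pretend signing is just echoing.
mux = HSMMux(lambda req: ("signed", req))

results = []
threads = [threading.Thread(target=lambda i=i: results.append(mux.call(i)))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The real daemon does the same job over network sockets, so even separate daemons (as in an OpenDNSSEC setup) can share the device.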

Now it seems to ‘just work’ with OpenDNSSEC!

Further thoughts

  • At this point the performance isn’t stellar: the signer reports 2 to 3 signatures per second for my small test zone.
  • The device is rather chatty on the USB port. Roughly 200 packets a second back and forth between the muxer and the CrypTech when idle.
  • I do very much like the idea of a fully open HSM which can be reviewed by anyone with enough domain knowledge. However, documentation is not yet to a point where it would be easy for the community to contribute. We are eagerly awaiting more documentation!
  • While stepping through the code I noticed that randomness generation is much slower than I expected. I (perhaps falsely) assumed that having its own avalanche-diode-based random generator would provide it with plenty of randomness.

Does Open Data Reveal National Critical Infrastructures?

February 21, 2014 by benno

This blog post is based on the report Open Data Analysis to Retrieve Sensitive Information Regarding National-Centric Critical Infrastructures by Renato Fontana.

Democratization of Public Data

The idea of Open Data comes from the concept that data should be freely available for anyone to use, reuse, and redistribute. An important motivation in making information available via the Open Data Initiative was the desire for openness and transparency of (local) government and the private sector. Besides openness and transparency, economic value can also be created by improving data quality through feedback on published data. Typically, most content available through Open Data repositories concerns government accountability, company acceptance, financing statistics, national demographics, geographic information, health quality, crime rates, or infrastructure measurements.

The volume of data available in Open Data repositories supporting this democratization of information is growing exponentially as new datasets are made public. Meanwhile, organisations should be aware that data can contain classified information, i.e., information that should not be made publicly available. The explosive rate of publishing open data can push the information classification process to its limit, possibly increasing the likelihood of disclosure of sensitive information.

The disclosure of a single dataset may not represent a security risk, but when compiled with further information, it can truly reveal particular areas of a national critical infrastructure. Visualisation techniques can be applied to identify patterns and gain insight into where a number of critical infrastructure sectors overlap.

This blog post shows that it is possible to identify these specific areas by taking into account only the public information contained in Open Data repositories.

Method and Approach

In this study, we focus on Open Data repositories in the Netherlands. After identifying the main sources of Open Data (see details in the report), web crawlers and advanced search engines were used to retrieve all machine-readable formats of data, e.g., .csv, .xls, .json. A data sanitation phase was then necessary to remove all blank and unstructured entries from the obtained files.

After the data sanitation, some initial observations can be made from the raw data in the files. For example, finding a common or primary identifier among datasets is an optimal way to cross-reference information. In a next step, the datasets can be visualised in a layered manner, allowing patterns (correlations) in the data to be identified by human cognitive perception. In visualisation analysis, this sense-making loop is a continuous interaction between using data to create hypotheses and using visualisation to acquire insights.
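As a sketch of these two steps, with invented toy data rather than the actual Dutch datasets, dropping blank entries and cross-referencing two datasets on a shared identifier might look like this in pandas:

```python
import pandas as pd

# Two toy datasets sharing a common identifier (postcode) -- invented data.
power = pd.DataFrame({
    "postcode": ["1098XH", "1043AA", None],
    "output_mw": [650, 120, 80],
})
datacenters = pd.DataFrame({
    "postcode": ["1098XH", "1043AA", "1043AA"],
    "name": ["DC-A", "DC-B", "DC-C"],
})

# Sanitation: drop entries with a blank key field.
power = power.dropna(subset=["postcode"])

# Cross-reference the datasets on the shared identifier.
merged = power.merge(datacenters, on="postcode", how="inner")
```

Each row of the merged frame now links a power source to a nearby consumer, which is exactly the kind of correlation the layered visualisation makes visible.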

As the research was scoped to the Netherlands and Amsterdam, the proof of concept took into account the government’s definition of “critical infrastructures”. The research was also limited to datasets referring to energy resources and ICT. A visualisation layer was created for each dataset that could refer to a critical infrastructure.
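A minimal way to find where such layers overlap, assuming each dataset can be reduced to a set of map grid cells (the coordinates below are invented for illustration), is to count how many sectors cover each cell and rank the cells:

```python
from collections import Counter

# Each 'layer' is the set of (x, y) grid cells where that sector has assets.
# Coordinates are invented for illustration, not taken from real datasets.
layers = {
    "power":       {(2, 3), (2, 4), (5, 1)},
    "datacenters": {(2, 3), (2, 4), (7, 7)},
    "fiber":       {(2, 3), (5, 1)},
}

# Count, per cell, how many critical-infrastructure layers cover it.
overlap = Counter(cell for cells in layers.values() for cell in cells)

# Cells covered by every sector are the interesting high-overlap areas.
hotspots = [cell for cell, n in overlap.items() if n == len(layers)]
```

The hotspot cells correspond to the darker areas in a layered map visualisation, where multiple critical sectors are concentrated.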

Visualisation of Data

From the different Open Data sets, a layered visualisation was generated, shown below. The figure provides sufficient insight to illustrate that most data centers in Amsterdam are geographically close to the main energy sources. It also suggests which power plants may act as backup sources in case of service disruption. In the case of the Hemweg power plant located in Westpoort, it is clear how critical this facility is from the output in megawatts being generated and the resource-intensive infrastructures around it.

Four layer visualisation. The darker green areas are also the sectors where the highest number of data centers (blue dots) and power plants (red dots) are concentrated in Amsterdam.

A few datasets contained fields with entry values flagged as “afgeschermd” (Dutch for “shielded”), suggesting an existing concern about not revealing sensitive information. The desire to obfuscate some areas can be seen as an institutional interest in enforcing security measures. It implies that such information is sensitive and that its disclosure can be considered a security threat.

Conclusions and Considerations

The results and insights in this research are not trivial to obtain. Even within a short time frame, analysing a specific set of data, we were able to derive interesting conclusions regarding national critical infrastructures. Conclusions of this nature may be something that governments and interested parties want to prevent from being easily obtained, for national security reasons.


The presented research confirms that it is possible to derive conclusions about critical infrastructure regions from public data. The approach involved a feedback (sense-making) loop and continuous visualisation of the data. This ongoing effort may create space to discuss to what extent this approach can be considered beneficial or dangerous. Such a discussion must be left to an open debate, which must also consider the tension between Open Data and national security.

To open or not to open data?


© Stichting NLnet Labs

Science Park 400, 1098 XH Amsterdam, The Netherlands

labs@nlnetlabs.nl, subsidised by NLnet and SIDN.