OpenPGP: Duplicate keyids - short vs long

Lately there has been a lot of discussion regarding the use of short keyids, after a large number of duplicates/collisions were uploaded to the keyserver network, as seen in the chart below:

[Bar chart: duplicate short keyids uploaded to the keyservers]

The problem with most of these posts is that they are plainly wrong. But let's look at it from a few different viewpoints, one of which is the timing of the articles. For OpenPGP V4 keys, the short keyid is the lowest 32 bits of the fingerprint. The specific keys that were published, some 20,000 keys duplicating the strong set, were generated by evil32 back in 2014, so nothing is actually new here (except the keys now being published on the keyservers). Adding to that, the triviality of short keyid collisions has been known from the start, which is fine, since a short keyid is just a short, convenient identifier.
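To make the relationship concrete, here is a minimal Python sketch (the fingerprint is a made-up hex string, not a real key): both keyids are simply suffixes of the fingerprint, which is why collisions are cheap to generate.

```python
# A minimal sketch of how V4 keyids relate to the fingerprint.
# The fingerprint below is a hypothetical hex string, not a real key.
fingerprint = "0D69E11F12BDBA077B3726AB4E1F799AA4FF2279"  # 160 bits

long_keyid = fingerprint[-16:]   # lowest 64 bits
short_keyid = fingerprint[-8:]   # lowest 32 bits

print("0x" + long_keyid)    # 0x4E1F799AA4FF2279
print("0x" + short_keyid)   # 0xA4FF2279
```

Finding a collision only requires generating keys until the last 8 (or 16) hex digits match a target, which is exactly what evil32 automated.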

What was interesting about the evil32 keyring, however, was the demonstration of doing this on a large scale, by cloning the full strong set of the common Web of Trust (WoT). A strong set can be described as "the largest set of keys such that for any two keys in the set, there is a path from one to the other".
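That definition can be illustrated with a small sketch; the names and signature edges below are made up for illustration, treating signatures as a directed graph and membership in a strong set as reachability in both directions:

```python
# Toy signature graph: signer -> set of signees. Made-up names,
# not real WoT data.
signatures = {
    "Alice": {"Bob"},
    "Bob": {"Alice", "Carol"},
    "Carol": {"Alice"},
    "Dave": {"Alice"},   # Dave signed Alice, but nobody signed Dave
}

def reachable(graph, start):
    """All keys reachable from `start` by following signature edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

# Build the reversed graph so we can also follow paths backwards.
keys = set(signatures) | {k for v in signatures.values() for k in v}
reverse = {k: set() for k in keys}
for signer, signees in signatures.items():
    for signee in signees:
        reverse[signee].add(signer)

# A key is in the strong set with Alice only if a signature path
# exists in both directions.
strong_set = reachable(signatures, "Alice") & reachable(reverse, "Alice")
print(sorted(strong_set))   # Dave is excluded: no path leads to him
```

In graph terms the strong set is the largest strongly connected component of the signature graph; here Dave falls outside it because the path exists in only one direction.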

So what have we learned so far? When discussing keys, there is such a thing as a path between keys in a strong set (which requires the path to be complete in both directions). The path we're talking about is a signature path: Alice and Bob meet up at a conference, exchange key data and check IDs, and once they have signed each other's keys, a path exists between them.

So where do duplicate short keyids make things worse from a security perspective? Absolutely nowhere! The issue arises when people start using OpenPGP without understanding any concept of operational security or key management, and start using encryption and digital signatures "because it is cool" without actually verifying any of the recipients' keys. They go looking for a key on the keyservers, get two results, are confused, and make a large fuss about it.

In many ways we should be thankful that the duplicate keys are on the keyservers and are confusing people; maybe they will start doing some key verification now? Not likely, when even Computer Emergency Response Teams (CERTs) such as the Dutch one don't understand basic concepts of security and start trying to prove a negative by detecting duplicate keyids. In an intentional attack, every key found might very well belong to an attacker. You should verify positively with the person you want to communicate with, or through your network of trusted peers (those you assign a trust level to when calculating the WoT). Never, ever try to guess what is wrong by proving something is false.

The most common suggestion over the past few days seems to revolve around "use the long keyid". Where the short keyid is a 32-bit identifier, the long keyid increases the size to 64 bits, and in GnuPG this can be enabled with "keyid-format 0xlong" in gpg.conf. Sadly, the suggestion is based on the same misconception. For one thing, generating colliding 64-bit keyids is also possible, but the really scary part is that it still assumes users are not properly verifying the keys they use against the full fingerprint, normally along with the algorithm type and creation date, for which purpose I carry around the following slip of paper:
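For reference, enabling this in GnuPG is a matter of two display options in gpg.conf; they change only how keys are shown, not the keys themselves:

```shell
# ~/.gnupg/gpg.conf (display options only; the keys are unchanged)

# show 64-bit keyids, e.g. 0x4E1F799AA4FF2279
keyid-format 0xlong

# always print the full 160-bit fingerprint in key listings
with-fingerprint
```

The second option is the one that actually matters: it is the full fingerprint, not any keyid, that should be compared during verification.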

[Photo: slip of paper listing the full key fingerprint, algorithm type and creation date]

The moral? If you actually do your job and validate the keys of your correspondents, either directly or through trusted peers (including Certificate Authorities) that have signed the key, then whether you use the short keyid or the long keyid as a reference is mostly unimportant: the keys you are looking at are already verified, and the likelihood of a collision within that set is slim.

The one thing that is certain is that the existence of duplicate/colliding short keyids on the keyserver networks does not impact security if OpenPGP is used properly. If anything, it improves security if it makes people start using their brains.

Norwegian government proposes access to extended surveillance methods

The Norwegian Government today put forward Proposition 68 L (2015-2016), extending and introducing a wide range of methods for the police to cross the privacy boundary with increased surveillance, including what the Minister of Justice, the Progress Party (FrP)'s Anundsen, calls "surveillance closer to the soul".

The possibility to perform telecommunications interception in Norway dates back to 1915, but it was limited to cases involving national security until 1976. Starting in 1915 the surveillance was restricted to post and telegraph; telephone surveillance was added in December 1950. Now, in 2016, the government wants to extend the scope to:

  • "Data reading" is introduced as a term allowing the police to hack into computers, including installing keyloggers (physical or virtual)
  • The possibility to send silent SMSes to generate telephone traffic. The Norwegian police have already been widely criticized for illegally using IMSI catchers, in particular across Oslo, in violation of court orders and registration requirements. A silent SMS is a message that is not displayed by the phone, but the traffic it generates increases the amount of information that can be obtained by the police when the phone company is compelled to turn over data.
  • Taking control over email accounts without a court order, to ease access to information early in an investigation
  • Physically bugging (with microphones) private rooms as a preventive measure, without an actual crime having been committed.

"Closer to the soul", indeed; if you don't already see the resemblance to Minority Report (2002), you may want to make it your weekend movie pick. IMDB summarizes the Spielberg movie as "In a future where a special police unit is able to arrest murderers before they commit their crimes, an officer from that unit is himself accused of a future murder".

Anundsen argues that you don't get any more access to an individual's thoughts from monitoring what is typed on a computer and potentially never sent than you do by physically taking control of the person's diaries. Without going into how wrong that argument is to begin with, there is of course a difference in awareness between the police physically seizing a person's diaries and silently monitoring in the background while the person writes in the diary, unaware of the police presence.

This adds to a long line of police requests for increased access to information across the globe. Senators in the USA want a new bill to impose fines if operators don't willingly help attack their own products, and Obama keeps reducing security, this time by increasing the scope of use of collected data.

So what can you do to protect yourself in a society where everyone around you is increasingly becoming your enemy? Ars Technica had an interesting post recently titled "Most software already has a 'golden key' backdoor: the system update". If you can't trust the operating system and hardware providers, you're lost to begin with. Bill Gates expresses his view on access to personal information thus: "It is no different than [the question of] should anybody ever have been able to tell the phone company to get information, should anybody be able to get at bank records," Gates said. "There's no difference between information." He offered this analogy: "Let's say the bank had tied a ribbon round the disk drive and said, 'Don't make me cut this ribbon because you'll make me cut it many times.'"

So you need a software stack that you can trust, and likely want to audit the source code of, or, if using binary builds, at least a system that uses reproducible builds.

With a relatively trusted software stack in place, and monitoring of any update activity (while of course making sure that you do apply security updates immediately), the added complexity of encrypted and digitally signed email comes into question. Personally I quite prefer OpenPGP using the GnuPG implementation, and with the way the world continues to develop I'm tempted to refuse to answer emails from people that don't follow proper email etiquette and don't properly sign and encrypt their messages. Phone calls and SMS messages I prefer not to get or take to begin with (we haven't even discussed SS7 in this post). Naturally, private keys should only be stored on smart cards, and data expected to be sensitive should only be read on airgapped systems.

It is also curious that, with this act, Norway is following China's lead on privacy.

Some worries about mobile appliances and the Internet of Things

Recently a friend of mine set up a new audio system and decided to go for one of the popular Sonos alternatives. Helping him set it up brought out a few interesting questions, some of which I'll try to elaborate on in this post.

This won't be a comprehensive discussion of the developments of the Internet of Things (IoT); that would result in a book rather than a blog post, and several articles have been written about the subject already, including this PC World article that sums up a few elements quite succinctly:

Vint Cerf is known as a "father of the Internet," and like any good parent, he worries about his offspring -- most recently, the IoT.

"Sometimes I'm terrified by it," he said in a news briefing Monday at the Heidelberg Laureate Forum in Germany. "It's a combination of appliances and software, and I'm always nervous about software -- software has bugs."

And that brings a nice introduction to one of the elements of the issue. Software has bugs, and some (actually a lot) affect security. This requires, at the outset, three things:

  1. Software vendors need to be alerted to the vulnerabilities and fix them
  2. Users need to have a way to properly update their systems, in a way that provides integrity control and authentication (digital signatures)
  3. Users actually have to upgrade their systems.
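As a minimal sketch of the integrity-control half of point 2: real update systems verify digital signatures (e.g. with GnuPG) on top of this, but the core check is comparing a digest of the downloaded file against a trusted value obtained out of band. The function and file names here are hypothetical.

```python
import hashlib
import hmac

def sha256_of(path, chunk_size=65536):
    """SHA-256 digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def update_is_intact(path, trusted_digest):
    """Compare against a digest obtained over a trusted channel.
    compare_digest avoids leaking information via timing."""
    return hmac.compare_digest(sha256_of(path), trusted_digest)
```

Note that a digest alone only gives integrity; authentication additionally requires that the trusted digest (or the file itself) carries a verifiable signature from the vendor.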

As we have seen as recently as with the Stagefright vulnerability affecting mobile phones, there are failures on several of these levels even for a popular operating system such as Android. These failures stem from multiple sources, one of which is that cellphone vendors don't use the latest version of the OS available across all phones they sell.

There are reasons for this, mainly that the software shipped is often heavily modified with proprietary code to support the specific features of a specific phone, and the requirements of modern OSes might not be satisfiable on older phones due to resource constraints. That creates a situation where cellphone vendors, if they are doing their jobs right at least, need to support several branches and backport security fixes to each of them.

This backporting is actually very difficult to do, because it requires a large security department to identify whether software bugs are security related, determine which branches are affected, and modify the source code to fix the issues on branches that might have different logic. As such, the choice of cellphone vendor needs to include a consideration of their ability to track security upgrades across their phones, with properly defined support cycles for the various phones involved. This is why other software products have End of Life statements for when security fixes are no longer issued for a specific version branch.

Of course, these fixes don't matter if the users don't update their systems to receive them. For cellphones this is actually one of the better parts; you see much broader uptake of updates compared to, e.g., regular computers. But the part where the cellphone vendors fix things is sadly lacking, in particular due to backporting to old kernel versions.

Let's move away from the technical for a little bit and go back to the Sonos system mentioned initially. Ultimately consumers want things to be easy, and they want everything to communicate directly, e.g. using the cellphone to control the music playing in the living room. That is perfectly natural, but to accommodate it, the easy solution is to allow direct access between all network nodes. As described in my former blog post Your Weakest Security Link? Your Children, or is it?, this isn't necessarily a good idea; in fact, it is likely a very bad idea.

I started out mentioning Sonos as that is what prompted me to write this article, in frustration after trying to set up this system on a segregated, completely isolated network, where it kept requiring internet access for software updates even just to play music through the digital SPDIF cable. This was supposed to have been one of the easier setups possible: connected to a TV and a multimedia computer running Gentoo Linux for things like streaming Netflix and HBO. I would never allow a "smart" application to run unrestricted alongside other appliances, and I very much like to lock it down as much as possible, using it as, guess what, a sound system. However, the constant requests for updates before it can be used mean that you open up a channel for data leakage out of your home. For now that means opening up this access specifically in the firewall rules whenever an update is necessary to proceed, but of course, an attacker could make use of this and simply submit data along with the update request in a batch job rather than streaming it live.

Devices have bugs, and devices outside of your control are a particular worry. Let's face it: reading through applications' requests for access to information when trying to install a new one results in very few apps being permitted on my own cellphone; can you expect others to ensure proper security hygiene with their own devices? Even my own devices of this kind are considered non-secured and should not be permitted access to the regular network. This means setting up an isolated network where they only have access to services explicitly granted permission, but not to each other, so that they cannot spread malware or monitor use.

We solved this, in the setup this blog post started out with, by setting up a new WiFi network for appliances that does not have internet access. You might ask "why not?" and happily plug your "smart" TV into the regular network, but history has shown that is a bad idea:

Samsung's small print says that its Smart TV's voice recognition system will not only capture your private conversations, but also pass them onto third parties.

And it is not the only device with the capability of doing so. The only way around this is complete network segregation and a security boundary that doesn't allow traffic (neither upstream nor downstream) unless you explicitly want to grant it.
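As a sketch of what such a boundary can look like on a Linux router, assuming hypothetical interface names (appl0 for the appliance WiFi, lan0 for the trusted LAN, wan0 for the uplink):

```shell
# Default-deny forwarding for an appliance network (illustrative
# interface names, adjust to your own setup).
iptables -P FORWARD DROP

# Let the trusted LAN reach the appliances (e.g. phone -> speaker) ...
iptables -A FORWARD -i lan0 -o appl0 -j ACCEPT
# ... and allow only replies back, nothing appliance-initiated.
iptables -A FORWARD -i appl0 -o lan0 \
    -m state --state ESTABLISHED,RELATED -j ACCEPT

# No appliance-to-internet traffic unless explicitly granted.
iptables -A FORWARD -i appl0 -o wan0 -j DROP
```

Isolation between the appliances themselves is typically enforced at the access point (client isolation) rather than in the forwarding rules, since that traffic never crosses the router.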

Expecting users to properly configure their networks is however a pipe dream, and here comes an important issue. The less users care about their privacy, whether by allowing recording devices in the living room or cellphones that snap photos without you knowing it, the more you are at risk. There is absolutely no doubt that the choices of other users influence your own security. That further requires reduced privileges on any network your guests are permitted into, or more careful examination of the information you share (or not) with them given their lack of ability to safeguard it, reducing your trust in them.

I'm worried about the continued trend away from privacy and security; the focus is instead on rapid functionality development and increased interconnectedness and complexity of systems, without the attention to security and privacy that would ensure the architecture is sustainable.

I'm stopping this blog post for now, to ensure the rant doesn't become too long (or maybe that is already too late), but leaving it with a quote from this ZDNet article:

The Internet of Things is a safety issue, and therefore a business risk;
When you merge the physical and the digital, it's not just about InfoSec any more. People's lives could be at risk.