Thoughts on Mr Gates' Robot tax

In an article in Quartz, Bill Gates, co-founder of Microsoft, proposed that "The robot that takes your job should pay taxes". The issue also received a practical discussion outside of the newspapers, as the European Parliament rejected a similar proposal on February 16th, 2017.

The debate is, however, an interesting one and in many ways an extension of a former blog post of mine: Employment in a technological era (November 14th, 2015).

Although I don't personally agree with a lot of what is being presented, it is interesting to see if the various perspectives have merit, and as part of that I'd like to summarize some of my own thoughts on the matter after having spent a bit of time reading up on the various positions.

Starting off with the IFR position quoted in the Reuters article on the European Parliament rejecting such a tax proposal:

The IFR and others argue that automation and the use of robots create new jobs by increasing productivity, and point to a correlation between robot density and employment in advanced industrial nations, for example in the German car industry.

This seems as close to my own position as any of the views being put forward. Introducing a "robot tax" presents multiple difficulties, starting with defining what constitutes a robot for the purpose of the tax. Increased productivity by way of automation can happen in a lot of ways, and one of the most widespread today is likely, for better and for worse, the spreadsheet in desk-based workplaces. At what stage does increased productivity turn into a taxable offense for the company? Incidentally, this proposal is in many ways similar to the Investment Tax in Norway, a tax that hindered technological growth and has thankfully been removed, and it also mimics the position of Queen Elizabeth I mentioned in my previous post on the matter:

"Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars"

And here comes a large part of the difference of opinion. While the need for higher intelligence and education has been rising, the general population has not necessarily invested in increasing its skills to match. This causes imbalances in wage distribution, and if a sufficiently large share of the population becomes unnecessary as part of the workforce due to lack of skills, it can lead to riots and, ultimately, civil war.

The socialistic approach to the issue sidesteps the actual underlying problem: increasing the skills of the various individuals and incentivizing proper genetic development for a sustainable workforce. That is to say, intelligence is a function that takes two arguments, one genetic and one behavioral. The Lynn-Flynn effect masked the underlying decline of intelligence in society throughout the 20th century when not controlling for changes in the population's IQ distribution, while the 21st century has demonstrated an actual decline in average IQ in advanced economies where, in particular, the near-absence of malnourishment means that better nutrition over time no longer contributes to increased intelligence.

The approach of restricting opportunities is highly undemocratic, as Alexis de Tocqueville stated in his 1848 speech to the French Constituent Assembly:

"Democracy extends the sphere of individual freedom, socialism restricts it. Democracy attaches all possible value to each man; socialism makes each man a mere agent, a mere number. Democracy and socialism have nothing in common but one word: equality. But notice the difference: while democracy seeks equality in liberty, socialism seeks equality in restraint and servitude."

Or, paraphrased: socialism seeks equality of outcome, whereas democracy seeks equality of opportunity. Such a slowdown of development, whether in the manner of the Luddite movements between 1811 and 1816 or by introducing a universal income (rejected in Switzerland, ongoing on a trial basis in Finland), is nothing but socialism and hinders positive developments in society. The idea of a universal income would have been highly supported by Henry George, who in 1879 completed "Progress and Poverty: An Inquiry into the Cause of Industrial Depressions and of Increase of Want with Increase of Wealth", in which he states:

The present century has been marked by a prodigious increase in wealth-producing power. The utilization of steam and electricity, the introduction of improved processes and labor-saving machinery, the greater subdivision and grander scale of production, the wonderful facilitation of exchanges, have multiplied enormously the effectiveness of labor [...] Now, however, we are coming into collision with facts which there can be no mistaking. From all parts of the civilized world come complaints of industrial depression; of labor condemned to involuntary idleness; of capital massed and wasting; of pecuniary distress among business men; of want and suffering and anxiety among the working classes.

There are, however, scenarios where a "robot tax" can make sense. If the development of AI results in sentient beings that have rights and liabilities of their own, taxation similar to that of human beings would be a natural extension of said rights. Absent such rights, a lesson can be learned from Roman times: the owner of an enslaved person (then a slave, but the principle extends soundly to robots) was held responsible for any damage caused, since attributing some kind of legal personality to robots (or slaves) would relieve those who should control them of their responsibilities.

10 year anniversary for sks-keyservers.net

December 3rd 2016 marks 10 years since sks-keyservers.net was first announced on the sks-devel mailing list. The time really has passed by too quickly, driven by a community that is a pleasure to cooperate with.

Sadly there is still a long way to go before OpenPGP is used mainstream, but in this blog post I'll try to reminisce about a few things that have happened since Bjørn Buerger commented about *.keyserver.penguin.de being down, which led to the need for a DNS Round Robin alternative. Having a common DNS Round Robin to use is practical for a number of reasons, mainly: (i) it is easier to communicate to users; (ii) it distributes the load across multiple keyservers; (iii) non-synchronizing or non-responding keyservers can be removed without users needing to reconfigure their systems.
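To make point (ii) concrete, the round robin is visible directly in DNS: a single lookup of the pool name returns several keyserver addresses, and each client simply picks one of them. A minimal Python sketch of what a client sees, using the pool name from this post and the standard HKP port 11371:

    import socket

    def resolve_pool(hostname="pool.sks-keyservers.net", port=11371):
        # Each entry returned by getaddrinfo corresponds to one address in
        # the round robin; a keyserver removed from the pool simply stops
        # showing up here, with no client reconfiguration needed.
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})

    if __name__ == "__main__":
        for address in resolve_pool():
            print(address)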

Enigmail, the Thunderbird OpenPGP plugin, was quick to adopt the new service, changing its default preferences to point to hkp://pool.sks-keyservers.net already in December 2006, less than a week after the service was officially announced.

The GnuPG Project started its usage of the pools when keys.gnupg.net was changed to be a CNAME to the pool in May 2012. Since then the cooperation has evolved, in particular in the "Modern" 2.1 branch. Since 2.1.11 the public key for the Certificate Authority used for the HKPS pool is used by default if a user specifies hkps://hkps.pool.sks-keyservers.net, i.e. without needing to specify the hkp-cacert, and with the release of 2.1.16 it is now the default keyserver used if a user has no overriding configuration. Earlier versions simply produced a "no keyserver" error message in this scenario.

Some slides from my presentation at the first OpenPGP conference, in Cologne 2016, are available, describing the current state of operations. And if you want to learn a bit of Norwegian you can watch the recording of the 2014 presentation, or at least read the slides, which happen to be in English.

Although the number of public keyblocks has been growing, as demonstrated in Figure 1, the reach is still low at 4.5 million entries. How about we use the next 10 years to make sure it becomes mainstream?

Figure 1: Number of OpenPGP public keyblocks

OpenPGP: Duplicate keyids - short vs long

Lately there has been a lot of discussion regarding the use of short keyids, as a large number of duplicates/collisions were uploaded to the keyserver network, as seen in the chart below:

[Chart: duplicate keyids uploaded to the keyserver network]

The problem with most of these posts is that they are plain wrong. But let's look at the issue from a few different viewpoints, one of which is the timing of the articles. For OpenPGP V4 keys the short keyid is the lowest 32 bits of the fingerprint. The specific keys that were published, some 20,000 keys duplicating the keys in the strong set, were generated by evil32 back in 2014, so nothing here is actually new (except the keys being published on the keyservers). Adding to that, the triviality of short keyid collisions has been known from the start, which is OK, since the short keyid is just a short, convenient identifier.
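To make the derivation concrete, here is a small Python sketch; the fingerprint below is made up for illustration:

    def keyids(fingerprint: str):
        # An OpenPGP V4 fingerprint is 160 bits (40 hex digits); the long
        # keyid is its lowest 64 bits, the short keyid its lowest 32.
        fpr = fingerprint.replace(" ", "").upper()
        assert len(fpr) == 40, "V4 fingerprints are 40 hex digits"
        return "0x" + fpr[-16:], "0x" + fpr[-8:]

    long_id, short_id = keyids(
        "0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567")
    print(long_id)   # 0x89ABCDEF01234567
    print(short_id)  # 0x01234567

Since only the last 32 bits need to match, finding a second key with the same short keyid is a matter of brute force over key material, which is exactly what evil32 automated.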

What was interesting about the evil32 keyring, however, was that it demonstrated doing this on a large scale, by cloning the full strong set of the common Web of Trust (WoT). A strong set can be described as "the largest set of keys such that for any two keys in the set, there is a path from one to the other".

So what have we learned so far? When discussing keys there is such a thing as a path between keys in a strong set (which requires the path to be complete in both directions). The path we're talking about is a signature path: if Alice and Bob meet up at a conference, exchange key data, check each other's IDs and then sign each other's keys, a path exists between them.
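In graph terms the strong set is simply the largest strongly connected component of the signature graph, with an edge pointing from signer to signee. A small Python sketch over a made-up toy graph:

    from collections import defaultdict

    def reachable(adj, start):
        # Iterative depth-first search collecting everything reachable.
        seen, stack = {start}, [start]
        while stack:
            node = stack.pop()
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def largest_scc(edges):
        fwd, rev, nodes = defaultdict(list), defaultdict(list), set()
        for signer, signee in edges:
            fwd[signer].append(signee)
            rev[signee].append(signer)
            nodes.update((signer, signee))
        best = set()
        for node in nodes:
            # Keys in the same strong set are reachable in both directions.
            scc = reachable(fwd, node) & reachable(rev, node)
            if len(scc) > len(best):
                best = scc
        return best

    # Alice, Bob and Carol have cross-signed; Dave has signed Alice's key
    # but nobody has signed his, so no path leads back to him.
    signatures = [("Alice", "Bob"), ("Bob", "Alice"), ("Bob", "Carol"),
                  ("Carol", "Alice"), ("Dave", "Alice")]
    print(largest_scc(signatures))  # {'Alice', 'Bob', 'Carol'}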

So where do duplicate short keyids weaken security? Absolutely nowhere! The issue arises when people start using OpenPGP without understanding any concept of operational security or key management, and start using encryption and digital signatures "because it is cool" without actually verifying any of the recipients' keys. They go looking for a key on the keyservers, get two results, are confused, and make a large fuss about it.

In many ways we should be thankful that the duplicate keys are on the keyservers and have started confusing people; maybe they will start doing some key verification now? Not likely, when even Computer Emergency Response Teams (CERTs) such as the Dutch one don't understand basic concepts of security and start trying to prove a negative by detecting duplicate keyids. In an intentional attack, all the keys found may very well belong to an attacker. You should verify positively with the person you want to communicate with, or through your network of trusted peers (to whom you assign a trust level when calculating the WoT); never, ever try to guess what is wrong by proving something is false.
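As a toy sketch of how those trust assignments feed into key validity, here is the classic Web-of-Trust rule as GnuPG applies it by default (one fully trusted or three marginally trusted certifications make a key valid); the names in the example are made up:

    # GnuPG defaults: completes-needed 1, marginals-needed 3. The trust
    # levels are assigned locally by the user, never taken from the key.
    FULL_NEEDED, MARGINALS_NEEDED = 1, 3

    def key_is_valid(certifiers, trust):
        full = sum(1 for c in certifiers if trust.get(c) == "full")
        marginal = sum(1 for c in certifiers if trust.get(c) == "marginal")
        return full >= FULL_NEEDED or marginal >= MARGINALS_NEEDED

    trust = {"Alice": "full", "Bob": "marginal", "Carol": "marginal"}
    print(key_is_valid(["Bob", "Carol"], trust))  # False: only two marginals
    print(key_is_valid(["Alice"], trust))         # True: one full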

The most common suggestion over the past few days seems to revolve around "use the long keyid": where the short keyid is a 32-bit identifier, the long keyid increases the size to 64 bits, and for GnuPG this can be achieved using "keyid-format 0xlong" in gpg.conf. Sadly, the suggestion is based on the same misconception. For one thing, generating colliding 64-bit keyids is also possible, but the really scary part is that it still assumes users are not properly verifying the keys they use against the full fingerprint, normally along with the algorithm type and creation date, for which purpose I carry around the following slip of paper:

[Image: slip of paper listing the full key fingerprint, algorithm type and creation date]
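As a back-of-the-envelope sketch of the collision claim above (pure try counts; the cost of generating a candidate key for each try comes on top):

    from math import sqrt

    # Birthday bound: a random collision among n-bit identifiers is
    # expected after roughly sqrt(2^n) tries; duplicating one specific
    # keyid (as evil32 did per key) takes about 2^n tries.
    for bits, name in [(32, "short keyid"), (64, "long keyid"),
                       (160, "fingerprint")]:
        print(f"{name:11}: random collision ~{sqrt(2.0 ** bits):.1e} tries, "
              f"targeted duplicate ~{2.0 ** bits:.1e} tries")

At 2^32 tries a targeted short keyid duplicate is well within reach of commodity hardware; 2^64 raises the bar but not beyond a determined attacker; the full 160-bit fingerprint is the only identifier worth relying on.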

The moral? If you actually do your job and validate the keys of your correspondents, either directly or through trusted peers (including Certificate Authorities) that have signed the key, then whether you use the short keyid or the long keyid as a reference is mostly unimportant, as the set of keys you look at is already verified, and the likelihood of a collision within that set is slim.

The one thing that is very certain is that the existence of duplicate/colliding short keyids on the keyserver network does not impact security if OpenPGP is used properly (if anything, it improves it, if people start using their brains).