32C3

This year I participated in the Chaos Computer Club's annual congress for the first time, despite this being the 32nd such event, hence the name 32C3. This year's edition carries the motto "Gated Communities" and, like last year's, takes place in Hamburg after a number of years in Berlin. By this point I expect many have written the event off as a nerd gathering of hackers, which, well, in many ways it is, but that picture needs some modification. With more than 12,000 visitors it is a large event, lasting four days from the 27th to the 30th of December each year, and if you look deeper it is actually a family event for many, with its own sessions for teaching children technology and a childspace that includes games using position or sound tracking to control ping-pong games. Picture taking is of course prohibited throughout the conference unless explicit permission is obtained from all involved parties (as it should be in the rest of society).

Presentations this year were organized in four main tracks, starting at 11:30 and running as late as 2am. It is a somewhat interesting experience to attend a lecture on "A gentle introduction to post-quantum cryptography" by Dan Bernstein and Tanja Lange from 23:00 to 00:00 and find the lecture hall full. I wonder how many universities could say the same.

Don't worry though: if you miss a lecture, the video streaming is among the better you will encounter, separated into multiple offerings: (i) a live stream, (ii) a Re-Live, an unmodified recording of the stream that can be watched later, and (iii) a released video of the talk, properly mastered and in better quality. So if you want to watch the aforementioned talk on PQC, you can do so at any time.

As a disproportionate number of my acquaintances focus on the legal field rather than on technology itself, let's continue with a good talk by Max Schrems on suing Facebook over Safe Harbor and data protection, a case that went all the way to the European Court of Justice. Or maybe you want to learn more about the legal ambiguities surrounding Sealand, the processes involved in creating your own country, and the operational failures of data havens?

If you want to mix in the more technological part, how about a wrap-up of Crypto Wars part II and comparisons with the 1990s? For those who have not spent much time looking into the first one, a particularly bad idea was the Clipper chip for key escrow, and what is curious is that the same arguments are being used now as then. The FBI, NSA, and other government agencies want unfettered access to encrypted email and blame cryptography for their failures, even though those involved in the recent events in Paris and San Bernardino actually used unencrypted communication that the security services never picked up on. As such they, along with politicians, use Fear, Uncertainty, and Doubt (FUD) to make their case. It is typical of politicians to think that the problem is the rhetoric or the name rather than the underlying substance, and as a result we see discussions of a "secure golden key" or a "front door" instead of a "back door" to cryptography. The government attempts from the first crypto wars influence us even today, in particular through the export restrictions whose compatibility code survived until recently in various libraries, allowing for downgrade attacks. A good talk by J. Alex Halderman and Nadia Heninger on Logjam underlines why attempts at undermining encryption are a bad thing even decades later.
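
To make the Logjam lesson concrete, here is a minimal client-side sketch in Python: explicitly refusing export-grade and other weak cipher suites rather than trusting whatever a peer offers. The host is just an example, and modern library defaults already exclude these suites, which is precisely the cleanup the talk argues for.

    import socket
    import ssl

    # Build a default TLS context, then explicitly rule out export-grade,
    # low-strength, and unauthenticated cipher suites.
    ctx = ssl.create_default_context()
    ctx.set_ciphers("DEFAULT:!EXPORT:!LOW:!aNULL:!eNULL")

    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org") as ssock:
            print("negotiated cipher:", ssock.cipher())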

What people seem to forget is that encryption is required for the e-commerce we use every day. Who would ever connect to an internet banking application if their neighbour could be monitoring all account information and traffic? And the rights to privacy and free expression are even established under the Universal Declaration of Human Rights, Articles 12 and 19, the latter stating: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers".

The United Kingdom (UK) is coming out of this debate in a particularly bad way with Cameron's Snooper's Charter. In particular §189(4)(c), "Operators may be obliged to remove "electronic protection" if they provide ...", seems worrying. This is followed by Australia, where simply explaining an algorithm to someone can result in penalization. But none of these beats India, which requires a copy of the plain text to be retained for a minimum of 90 days when sending an encrypted message.

This level of tyranny from various oppressive regimes nicely sets the stage for the presentation of North Korea's Red Star operating system and the various ways in which the system, styled to mimic Apple's Mac OS, is used to spy on the population and keep it down. Of particular interest are the watermarking technology and the censoring application that forms part of the "anti-virus" (well, its red star icon could be a hint).

All in all, this is just a minimal representation of some of the interesting aspects of this conference. Not surprisingly, the most used operating systems among the visitors (at least those connected to the network) were GNU/Linux (24.1%) and Android (17.6%), and if you want to see the talk about Windows 10 acting as a botnet, that video is available as well.

Employment in a technological era

Lately I've been spending some time reading up on research into developments in the nature of employment given the increased computerization and automation in today's, and in particular tomorrow's, world. These developments bring immense increases in productivity and open up a new world of opportunities, but are employees keeping up and updating their skill sets to utilize them? My personal opinion is no, which is what prompted me to look into the research on the matter.

Frey and Osborne's paper "The future of employment: how susceptible are jobs to computerisation?" (2013) brings up some interesting aspects, including a decent historical context to this issue. It starts by referencing how John Maynard Keynes is frequently cited for his prediction of widespread technological unemployment "due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour" (Keynes, 1933). This was of course during a different technological advancement than the one we're experiencing now, but it shows that the discussion is not new. In fact, it is nicely illustrated by the example of William Lee, who invented the stocking frame knitting machine in 1589 hoping that it would relieve workers of hand-knitting. He met opposition from Queen Elizabeth I, who was more concerned with the employment impact and refused to grant him a patent, claiming that "Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars" (cited in Acemoglu and Robinson, 2012).

Has anything changed since the 16th century, or are we facing the same kind of social opposition to changing the status quo? How many, today, are willing to learn a programming language in order to interface with and utilize the tools of our time? As pointed out by Mokyr (1998): "Unless all individuals accept the "verdict" of the market outcome, the decision whether to adopt an innovation is likely to be resisted by losers through non-market mechanisms and political activism". A later manifestation of this fear of technological change among workers was the Luddite riots between 1811 and 1816, after Parliament revoked a 1551 law prohibiting the use of gig mills in the wool-finishing trade.

Today's challenges to labor markets are different in form, yet resemble the historical ones to a great extent. These days the ability to communicate with a computer is, in my humble opinion, as vital as learning human languages, yet there are only a few pushes towards learning programming languages alongside spoken ones. My hypothesis is that one reason is a lack of knowledge of these subjects in the adult population, and quite frankly of mathematics and logic in general, which naturally makes people uncomfortable with requiring children to learn them. Initiatives such as the UK's attempt to get kids coding, with changes to the national curriculum replacing ICT (Information and Communications Technology) with a new "computing" curriculum that includes coding lessons for children as young as five (September 2013), are therefore very welcome. But as referenced in an article in The Guardian: "it seems many parents will be surprised when their children come home from school talking about algorithms, debugging and Boolean logic" and "It's about giving the next generation a chance to shape their world, not just be consumers in it".

The ability to shape my own day is one of the reasons why I'm personally interested in the world of open source. If I'm experiencing an issue while running an application, or if I want to extend it with new functionality, I can actually do something about it when the source is available. Even more so, in a world that is increasingly complex and interconnected, basing communication on open standards enables participation by many parties across different operating systems and user interfaces.

At the same time, and increasingly so in the aftermath of Edward Snowden, I want to have the ability to see what happens with my data. Reading through the End User License Agreements (EULAs) of services being offered to consumers, I sometimes get truly scared. The latest explicit example was the music streaming service Spotify, which introduced new terms stating that in order to continue using the service I would have to confirm having gained permission from all my contacts to share their personal information. Safe to say, I terminated that subscription.

There is an increasing gap between the knowledge required to understand the ramifications of the services being developed and the value of private information, and people's ability to recognize what is happening in an ever-connected world. As pointed out in two earlier posts, "Your Weakest Security Link? Your Children, or is it?" and "Some worries about mobile appliances and the Internet of Things", this can actually be quite difficult, with the end result that individuals just drift along.

So what do you think? The next time you're feeling bored and inclined to put on a TV program or just lie back on the couch, why not pick up an online tutorial on SQL, the structured query language used to talk to most database systems, or a little bit of Python, C, or for that matter C# if you're in a Windows-centric world? Or, as a general plea: read a book once in a while.
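
If a gentle first step helps, here is a minimal taste of both suggestions at once, a sketch using Python's built-in sqlite3 module, which lets you run real SQL without installing a database server. The table and its contents are made up for illustration.

    import sqlite3

    # An in-memory database: nothing to install, nothing to clean up.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (title TEXT, pages INTEGER)")
    conn.executemany("INSERT INTO books VALUES (?, ?)",
                     [("A Long Novel", 612), ("A Short Story", 48)])

    # The actual SQL: ask for the long reads, sorted by length.
    for title, pages in conn.execute(
            "SELECT title, pages FROM books WHERE pages > 100 ORDER BY pages"):
        print(title, pages)

Ten minutes with an example like this teaches more about how databases think than an evening of television.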

 

Some worries about mobile appliances and the Internet of Things

Recently a friend of mine set up a new audio system and decided to go for one of the popular Sonos systems. Helping him set it up brought out a few interesting questions, some of which I'll try to elaborate on in this post.

This won't be a comprehensive discussion of the development of the Internet of Things (IoT); that would require a book rather than a blog post, and several articles have been written about the subject already, including this PC World article, which sums up a few elements quite succinctly:

Vint Cerf is known as a "father of the Internet," and like any good parent, he worries about his offspring -- most recently, the IoT.

"Sometimes I'm terrified by it," he said in a news briefing Monday at the Heidelberg Laureate Forum in Germany. "It's a combination of appliances and software, and I'm always nervous about software -- software has bugs."

And that brings a nice introduction to one of the elements of the issue. Software has bugs, and some (actually a lot) of them affect security. Addressing this requires, at the outset, three things:

  1. Software vendors need to be alerted to vulnerabilities and fix them.
  2. Users need to have a way to properly update their systems, one that provides integrity control and authentication, i.e. digital signatures (see the sketch below this list).
  3. Users actually have to upgrade their systems.
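
As a minimal sketch of what point 2 implies in practice, here is an example using Ed25519 signatures from the Python cryptography library. The public key below is a placeholder; on a real device the vendor's actual key would be baked into the firmware, so that only updates signed by the vendor are accepted.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Placeholder key for illustration; a real device ships with the
    # vendor's genuine 32-byte Ed25519 public key.
    VENDOR_PUBLIC_KEY = bytes(32)

    def update_is_authentic(update: bytes, signature: bytes) -> bool:
        public_key = Ed25519PublicKey.from_public_bytes(VENDOR_PUBLIC_KEY)
        try:
            # verify() raises if the update was tampered with or the
            # signature was not produced by the vendor's private key.
            public_key.verify(signature, update)
            return True
        except InvalidSignature:
            return False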

As we saw as recently as with the Stagefright vulnerability affecting mobile phones, there are failures on several of these levels even for a popular operating system such as Android. These failures stem from multiple sources, one of which is that cellphone vendors don't use the latest available version of the OS across all the phones they sell.

There are reasons for this, mainly that the software shipped is often heavily modified with proprietary code to support the specific features of a specific phone, and that modern OS versions might not run on older phones due to resource constraints. That creates a situation where cellphone vendors, if they are doing their jobs right at least, need to maintain several branches and backport security fixes to each of them.

This backporting is actually very difficult to do, because it requires a large security department to identify whether a given software bug is security related, determine which branches it affects, and modify the source code to fix the issue on branches whose logic may differ. As such, the choice of cellphone vendor needs to include a consideration of the vendor's ability to track security upgrades across its phones, and of properly defined support cycles for the phones involved. This is why other software products have End of Life statements marking when security fixes are no longer issued for a specific version branch.

Of course, these fixes don't matter if users don't update their systems to receive them. For cellphones this is actually one of the better parts; uptake of updates is much broader than for, e.g., regular computers. But the part where cellphone vendors fix things is sadly lacking, in particular when it comes to backporting to old kernel versions.

Let's move away from the technical for a little bit and go back to the Sonos system mentioned initially. Ultimately, consumers want things to be easy, and they want everything to communicate directly, e.g. using the cellphone to control the music playing in the living room. That is perfectly natural, but the easy way to accommodate it is to allow direct access between all network nodes. As described in my former blog post Your Weakest Security Link? Your Children, or is it?, this isn't necessarily a good idea; in fact, it is likely a very bad idea.

I started out mentioning Sonos as that is what prompted me to write this article, in frustration after trying to set up the system on a segregated, completely isolated network, only to find that it kept requiring internet access for software updates even to allow playing music through the digital S/PDIF cable. This was supposed to have been one of the easier setups possible: connected to a TV and a multimedia computer running Gentoo Linux for things like streaming Netflix and HBO. I would never allow a "smart" appliance to run unrestricted alongside other appliances, and I very much like to lock it down as much as possible, using it as, guess what, a sound system. However, the constant requests for updates before it can be used mean opening a channel for data leakage out of your home. For now that means opening up the firewall rules specifically whenever an update is necessary to proceed, but of course an attacker could make use of this and simply submit data along with the update request in a batch job rather than streaming live.

Devices have bugs, and devices outside of your control are a particular worry. Let's face it: reading through applications' requests for access to information when installing a new one results in very few apps being permitted on my own cellphone; can you expect others to maintain proper security hygiene with their devices? Even my own devices of this kind are considered non-secured and should not be permitted access to the regular network. This means setting up an isolated network where they only have access to services explicitly granted to them, but not to each other, so that a compromised device cannot spread malware or monitor use.

We solved this, in the setup this blog post started out with, by setting up a new WiFi network for appliances that does not have internet access. You might ask "Why not?" and happily plug your "Smart" TV into the regular network, but history has shown that to be a bad idea:

Samsung's small print says that its Smart TV's voice recognition system will not only capture your private conversations, but also pass them onto third parties.

And it is not the only device with the capability of doing so. The only way around this is complete network segregation and a security boundary that doesn't allow traffic (neither upstream nor downstream) unless you explicitly want to grant it.
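
As an illustration of what such a boundary might look like, here is a minimal sketch using Linux iptables on the router, assuming the appliance WiFi sits on wlan1 and the internet uplink on eth0 (both interface names, and the address, are assumptions for the example):

    # Default: nothing on the appliance network gets forwarded anywhere.
    iptables -A FORWARD -i wlan1 -j DROP
    # Temporarily let a single appliance out for a vendor update
    # (hypothetical address); inserted above the DROP, removed afterwards.
    iptables -I FORWARD -i wlan1 -s 192.168.50.10 -o eth0 -j ACCEPT

Traffic between the appliances themselves never reaches the router's FORWARD chain, so that part is best handled by the access point's client isolation feature.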

Expecting users to properly configure their networks is, however, a pipe dream, and here comes an important issue. The less users care about their privacy, whether by allowing recording devices into the living room or carrying cellphones that snap photos without their knowledge, the more you are at risk. There is absolutely no doubt that the choices of other users influence your own security. That further requires reduced privileges on any network your guests are permitted into, or more careful examination of the information you share (or not) with them given their limited ability to safeguard it, reducing your trust in them.

I'm worried about the continued trend of neglecting privacy and security in favor of rapid functionality development and ever more interconnected and complex systems, without the focus on security and privacy that would make the architecture sustainable.

Stopping this blog post for now, so as to ensure the rant doesn't become too long (or maybe that is already too late), but leaving it with a quote from this ZDNet article:

The Internet of Things is a safety issue, and therefore a business risk;
When you merge the physical and the digital, it's not just about InfoSec any more. People's lives could be at risk.