Bruce Schneier today blogged about the NSA’s TAO unit, referencing an article from Der Spiegel describing the techniques the unit uses to gain access to a target. Schneier’s post includes a variety of links to pages from the unit’s top secret “catalog of tools.” Of course we are only scratching the surface here, particularly as many of these tools and techniques hide behind bamboozling code names like “QUANTUM INSERT,” but it is nevertheless an interesting read.

UPDATE: Gizmodo has an excellent (if brief) summary of some of the techniques mentioned.

Season’s Greetings

I would like to take this opportunity to wish all of my readers a happy, safe and fulfilling holiday season. No matter your feelings on the religious significance of this time of year, I believe it is nonetheless an important time to catch up with family and close friends, particularly for those of us who live a long way from our home towns.

I’d also like to implore my readers to engage more readily. Comments are enabled on articles to provoke lively and, hopefully, informative discussion. It has been a slightly slow week in terms of news on the security front, and I don’t normally write “filler” just to make the blog look maintained, but rest assured that there are several interesting stories in the pipeline, in addition to some original research that is due to be published shortly.

Reader Queries On Blogsig – A Message Authentication System for Internet Forums

A reader e-mailed several days ago asking a few questions regarding my “blogsig” concept. To call the effort a project is perhaps premature, as only a small amount of proof-of-concept code has been produced and work has proceeded slowly due to other, far more important commitments.

I first mentioned this concept to Nick P. and others on Bruce Schneier’s blog several months ago and had quite a few lengthy discussions with forum regulars about the optimal way of achieving my desired goal.

It was effectively an open brainstorm, and no promise was made that a project would ultimately result from the very crude proofs of concept that were created to demonstrate the viability of the idea.

The executive summary is that blogsig is a message signing system optimized for online forums such as blogs or social media sites like Facebook. Through the use of elliptic curve crypto we are able to keep the key and signature lengths small.

The ultimate goal of the project is to produce a single-line signature that could either be appended to the message or embedded in a link. A browser plugin would provide both signing and verification functions.

I hope that by mid-2014 we will have a complete specification and some solid implementations out in the community. That said, blogsig will only be developed if there is sufficient community demand. The naive will retort with comments such as “just use PGP,” but nobody really wants a mass of lines choking up every forum post. Detaching the signature and hosting it elsewhere doesn’t solve the problem, it simply shifts the burden.

The threat model for blogs and social media is different from that of electronic mail. The internet needs a forum-optimized message authentication and assurance system that is easy for the common folk to use.

Rather than repeat myself I have included some quotes from my posts on Schneier’s forum, along with a link to the full page. As I cannot repost content without the original authors’ permission, you will have to view the original pages to see any replies directed toward me.

Much of the information in these quotes is now outdated. The current PoC uses DJB’s Ed25519 and encodes the signature using ASCII85.
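
For the curious, here is a minimal sketch of what such a one-line signature can look like. It assumes the PyNaCl library, and the “blogsig:” prefix and whitespace canonicalization are illustrative choices of mine rather than the PoC’s actual format.

    # A minimal one-line signature sketch, assuming PyNaCl is installed.
    # The "blogsig:" prefix and the whitespace canonicalization are illustrative
    # assumptions, not the PoC's actual wire format.
    import base64
    import nacl.exceptions
    import nacl.signing

    def make_blogsig(post_text, signing_key):
        # Canonicalize the post the way a forum might reformat it: collapse whitespace.
        canonical = " ".join(post_text.split()).encode("utf-8")
        sig = signing_key.sign(canonical).signature           # 64-byte Ed25519 signature
        return "blogsig:" + base64.a85encode(sig).decode()    # 64 bytes -> 80 ASCII85 chars

    def verify_blogsig(post_text, footer, verify_key):
        canonical = " ".join(post_text.split()).encode("utf-8")
        sig = base64.a85decode(footer[len("blogsig:"):])
        try:
            verify_key.verify(canonical, sig)
            return True
        except nacl.exceptions.BadSignatureError:
            return False

    key = nacl.signing.SigningKey.generate()
    footer = make_blogsig("Hello, forum!", key)
    print(footer, verify_blogsig("Hello, forum!", footer, key.verify_key))

Conveniently, a 64-byte Ed25519 signature encodes to exactly 80 ASCII85 characters, which is what makes the single-line goal realistic.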

“Nick: there have been comments that have appeared once or twice and I have looked back and think “did I say that? it looks like my writing style kinda but it is certainly not my personality?” – honestly I just play along and go with it because by the time I go back to the thread to respond I’ve long forgotten where and when I was when I wrote the original comments. Now my memory is like that of a deadbeat alcoholic. I can remember most things fine – probably more than fine. Song titles, lyrics, code, commands, citations, who did what when and where, etc. but I can’t for the life of me remember small actions that happened in a day of other bigger events should I need to a few days down the track. In my teenage years I had some brain injury in a prank gone wrong (don’t ask! seriously though I am okay now and am not writing this from a wheelchair or anything) but no doubt I feel that these little chunks of memory that go ‘walkies’ are somehow related. Or some jerk is posting randomly as me once a month (or maybe once every two months) and making me doubt my sanity :-). But getting to the point – message signing certainly has utility. Nothing stops impersonation on blogs. Only trust (and an eagle eyed moderator who can make tough calls with limited data – often both users may be on services like tor or otherwise won’t have a static IP so you won’t be able to just go “oh, *normal* IP = good guy).
I think perhaps signing with, say a 4096 PGP key is a bit excessive for blog use. Perhaps a 512 DSA key just for signing? It’s not earth shatteringly important after all.”
October 26 2013 15:01

“the key is that it should be tiny enough to be embedded in the blog post – either covertly (in an element like a ‘title’ for a link for example, or as an actual link) or overtly (the last line in the post). I am working on this concept just for fun… and it will give me something to fill up my next weekend. The concept is that we don’t need high security – we just need something that makes spoofing expensive and time consuming for the adversary. By putting the data on an outside link you are making the system centralized and forcing users to either use “our” server that we put up for testing (if we went this way) or run their own. that said we could probably use an API for something like pastebin to populate the full signature in a paste. Given blog posts are going to be checked within a few months of submission this might just work.

Nick: I might just hack up something quick and dirty using either shell or perl that parses a blog post and generates a ‘tiny’ signature line – and of course another script that can take a blog and verify the signature line.

I imagine people would just generate in-app keypairs for use with this, but with a bit of mucking around there is no reason why we can’t integrate this with their exisiting PGP keypair. Now we can’t directly import ours as a subkey as PGP requires >1024bit (gpg is okay with 512 in expert mode) and it is handled a lot differently. The easiest way to do this would be to create a subkey on a trusted key of theirs and use their blogsig public key as their comment field. By putting their PGP keyID in the metadata space of their blogsig instead of some other identifier (which would be limited to about 16 chars by necessity) they can advertise the link and the client software can validate it by pulling the key from the servers and verifying if a subkey exists, etc. Again love sounding ideas off everyone here. One day we’ll come up with something that is not just useful but indispensable. We’re not there yet, of course.”
October 26 2013 17:59

“I am considering using ECDSA (or the ec based digital SIG system Nick spoke of on Friday) for my blogsig project. This is due to the need to fit both a signature and metadata into an 80 character (a single standard length line) signature. Given I have stated that non-repudiation and absolute certainty are not part of the brief I think it is a reasonable enough choice. A blogsig is designed only to certify that there is a high (not absolute or legally provable) probability the signed post was composed by the keyholder and has not been modified (except for reformatting, a concession we must make with blogs).”

November 7 2013 05:12

“My current PoC code is a bit of script hackery that uses wget to dump the page you wish to validate, then trawls the page looking for blogsig footers. Each footer found gets pumped into a routine. This routine pulls the blogsig into its component parts – the metadata and the digital signature.

The first block is the key identifier, which is encoded in five characters of the blogsig (and thus is limited to 94 chars, that is 7bit minus the space and controlchars). The keyID is actually in hex but is encoded in 7bit printable to save space. The sixth character of the blogsig metadata stores whether to verify the entire block as teXt (strip all HTML), full HTML, or strip all but Links. It can also be instead set to K which means the public ECDSA key follows. I will likely be able to avoid the sixth char of metadata entirely if the client software is smart enough to try the three methods until a good sig is found (and of course notify as to what method has been used). An embedded key could easily be differentiated from a SIG, of course obviating the need for the sixth char entirely. Directly following the six char metadata is the length of the signed post in characters (with all whitespace ignored and the HTML stripping settings of the mode chosen enforced) immediately followed by a ! and then the signature proper (or the key in the case of mode K). Obviously I could move char six to this location and have a relatively predictable delimeter rather than wasting a character. The blogsig ends with a % sign, thus giving a blogsig a pretty unique layout to be found with a regex.

Anyway my PoC can take a chunk of blog and verify sigs against my test keyring without any hassles. The stripped HTML with the exception of links (mode L) is the default as link forgery is a possibility with plaintext strip mode. I have tested it with WordPress and the comment section of drupal without any problems. Of course this is just a proof of concept.

The next obvious piece of the puzzle is key servers. My idea of being able to push out your public key on a blog sucks as not everyone will see it and people might request it time and time again. A key server is the logical solution.

I am considering whether I can “dress up” a blogsig key so that the existing PGP key servers can do this job for us. It would be trivial to change my key identifier to PGP’s system. How it would be done remains to be seen. The servers might refuse abnormal looking keys. The easiest way would be to use the metadata of a PGP key (like the comments field) to publish our public key. It is short and shouldn’t pose an issue. People that wish to use both blogsig and pgp could generate a subkey with the required info and publish it to the key servers. This is just an idea I am toying with.

So that’s what I got up to on a boring Saturday evening. Obviously it is all just a test and my mind is made up about nothing. But I think the concept of a short low security key for signing blog posts is a good one. If I end up hiding the blogsig data in a link then it would annoy readers of the blogs less (although a one line blog SIG has to be less annoying than a multi line PGP sig). If you had the link point to the blogsig website, is http://sig.co/b?blogsig_blob_goes_here then those with the browser plugin installed would get instant verification but those without the plugin could simply click the link. If it is a public blog a CGI script on the server could fetch the page, parse it and verify the blogsig. How is that for graceful fallback behavior?

As we have discussed the aim of the blogsig is for a short digital signature to provide a low to medium level of assurance that the user is the one who owns the public key. The key size is small enough that an attacker with time, money and resources could potentially forge it, but in a way that’s the point – non-repudiation is not a good feature to have in a system like this. Really all it is doing is providing a mild level of assurance. For anything serious – use PGP.”
November 3 2013 09:22
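
To make the footer layout described in that post a little more concrete, here is a rough parsing sketch. The regular expression and field handling are simply my reading of the description above (and assume the payload contains no literal “%”); they are not the actual PoC code.

    # Rough parser for the blogsig footer layout described in the quoted post:
    # five printable key-ID characters, one mode character (X/H/L/K), the signed
    # length in decimal, '!', the payload, and a terminating '%'.
    import re

    BLOGSIG_RE = re.compile(r"(.{5})([XHLK])(\d+)!([^%]+)%")

    def parse_blogsig(footer):
        m = BLOGSIG_RE.search(footer)
        if m is None:
            return None
        key_id, mode, length, payload = m.groups()
        return {
            "key_id": key_id,              # 7-bit printable encoding of the hex key ID
            "mode": mode,                  # X = text only, H = full HTML, L = keep links, K = embedded key
            "signed_length": int(length),  # character count of the signed post, whitespace ignored
            "payload": payload,            # the signature proper, or the public key in mode K
        }

    print(parse_blogsig("aB3x$L1289!c0ffee...sigdata...%"))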

“The keyID is a throwback to few things I was trying – in my first implementation keyID was derived from a hash of the public key. Obviously collisions would be an issue that I have not yet considered. The second idea was to use the keyID (and lengthening the field) to give the PGP key that contains our blogsig key embedded in its metadata – as a way for the client to know where to go and fetch the public key. Not sure exactly where I am going with this yet.

Re edu I’d be happy enough to use it. I just chose ECDSA as there was reference code available and it was a “known quantity”

Re the URL idea – thanks. I thought it may at least remove the signature blob from public view and stop people going “huh? What’s that line of crap at the end of your blog posts?”.. I may have misunderstood you but are you stating that the server would, in effect act as a proxy, searching for signatures and verifying them if they are present?

Agreed re your comments on SSL. While we have different aims I have no doubt the solutions could be engineered to be similar or indeed solved in a single implementation. I think the key here is lots of discussion, simple proof of concepts etc before going to the spec/RFC stage. You can spot that some very popular internet protocols were thought up on paper without ever being implemented as a proof of concept – IPSEC comes to mind.
The benefit of doing some hacky proof of concept code – even if its just a bit of perl or even shell that, say takes your message and signs it – and another script that you can pump a HTML page into and it will find and verify any embedded sigs. Such code is never intended to actually be used but by doing that you encounter some of the problems that an actual implementation would face without going to all of the trouble at such an early stage.”
November 4 2013 05:58

Paper Highlights Dangers of Legacy Code, but Experience Shows Removing Legacy Code, Equipment and Services Is Difficult

A research paper published by the University of Pennsylvania in association with Stefan Frei of Secunia coins the term “honeymoon effect” to describe the delay between the release of a software product and the discovery (and subsequent patching) of its first vulnerability, and posits that the duration of that delay is a function of the team’s familiarity with the code. The most interesting claim in the paper is that code re-use, often encouraged to decrease development time, appears to be a major contributor to the number (and perhaps, by inference, the severity) of vulnerabilities found.

It was at this point in the paper that I decided an opposing view published on my blog might be of benefit, if only to get this annoyance off my chest. I also noticed that the paper made an appearance today on Schneier’s blog, but he provided little commentary beyond paraphrasing its key concepts, something I found particularly strange given the glaring issues that are endemic to a paper making such a potentially controversial claim.

Of course it is without doubt that legacy installations are responsible for many of the breakdowns in security policy we have seen of late. The Adobe breach – in which millions of passwords were leaked by an unidentified attacker – was said to have occurred on a legacy AAA system that was scheduled for decommissioning. Many an Internet provider has had an ancient MX used as a springboard to further infiltrate their network as a consequence of an old and forgotten installation of sendmail. A major police department in a European nation had millions of records – vehicle and driver license data in addition to dossiers on the victims and perpetrators of crime – accessed (and presumably duplicated) thanks to a legacy information storage system that was entirely proprietary and thus could not be exported by electronic means, necessitating time-consuming manual entry that was scheduled to be completed shortly before the breach took place. I stress this to my clients regularly – if you no longer have a use for a piece of software, a service, a router or any other internet-connected appliance, decommission it and eliminate the risk of an unauthorized incursion.

Problems may also exist where a replacement has superseded the legacy equipment or software but only a partial data migration was successful, requiring the two systems to be run in tandem. While running both may be the inexpensive way of mitigating such a problem, it is most certainly the wrong way to address the challenge.

Rather than pick apart my issues with this paper – which are numerous – I would like to relate a personal story about replacing legacy equipment that had been untouched for not months, not even years, but decades. I will no doubt write a short post countering the authors’ claim that code reuse is necessarily a bad thing. Indeed, if you are unfamiliar with a certain function you are almost certainly better off using an already established library. Phil Zimmermann himself said many years ago that any encryption solution you design independently will be fundamentally flawed. It is far better to take a library written by those with detailed knowledge of the specific field than to improvise something that may or may not be superior. One pertinent example would be OpenSSL. I personally don’t like it – its code is terse, poorly commented and generally a big mess. Despite this I would not even consider writing my own SSL library. Other excellent options exist, such as CyaSSL (tiny, perfect for embedded systems and with a much smaller attack surface as a consequence) or Mozilla’s NSS. There’s also GnuTLS and countless others. I am firmly of the opinion that you need not reinvent the wheel when there is no compelling reason to do so. That said, I think there is immeasurable value in auditing third-party code you intend to use in your own software. It is part of the due diligence that we, as developers, should be undertaking, and it improves the security of not just your own product but of every other project that happens to use that library. Nevertheless, feel free to disagree – or better yet, comment on this post.

I once assisted a library that wished to upgrade its software and was presented with a “black box” of which the library staff knew very little. A preliminary investigation revealed that the ancient unit ran SunOS 4 and interfaced with the sister library elsewhere in town over a leased line (and that’s being kind – it was literally a jumpered connection at the exchange providing a direct electrical path over a pair between the two sites). To make matters more complicated, the sister library had dissimilar hardware. All of the interaction between staff (at the borrowing desks) and library customers (at the enquiry workstations) was via VT100 dumb terminals.

The rear of the “black box” (the name the clueless customers gave the main SPARC server) had four RS232 connectors and an AUI connector that made its way to a 10base-T MAU, amongst the usual connections for a local monitor and keyboard. The second, smaller unit was in a slightly different style of case and had another four RS232 connectors and an external SCSI enclosure.

So you can imagine my enthusiasm when I discovered all this. We eventually obtained root access to both machines thanks to a decades-old sticky note and started poking around. The company that supplied the library management software was long out of business, and hardware for such machines would have been nearly impossible to find (as an aside, I will one day tell you the story of a university that to this day relies upon a VAX for its admissions and keeps a stockpile of old VAXen for parts; despite my continual recommendations, the closest I could bring the board to modernization was the installation of a terminal server, which allowed staff to ssh in using PuTTY and be connected to a free serial port). This effectively meant the library was sitting on a time bomb that could go off whenever the hardware chose to fail.

Looking through the configuration files we discovered that the second box was effectively used to add more RS232 ports to support more concurrent dumb terminals. I initially suspected that the SPARC boxes of the era had some kind of hardware limitation preventing a heap of serial ports from being added to a single host, but I doubt that hypothesis has much merit given that running large banks of serial equipment was relatively common in an enterprise environment.

The second box had a very simple configuration. A customized getty was enabled on the first three ports. This sent 25 LFs (who knows, maybe they forgot that VT100s had a clear-screen control sequence?), echoed the date in strictly 11 characters, e.g. “03 Jun 1983”, followed by 62 spaces and then the time, again always fixed at 7 characters, e.g. “10:23am”, suffixed with 11 LFs. It then sent 23 spaces, echoed “NORTHERN DISTRICT LIBRARY SERVICE” followed by 3 LFs, then 29 spaces followed by “Press RETURN to begin”, finally suffixed by 8 LFs so the whole thing looked centered on the screen, with the read prompt taking up the bottom row. Imagine that – and now imagine it at 9600bps on an amber-screen dumb terminal. Yep, some joker spent real time and effort getting it all to display just right. They should have gone the whole nine yards and put in some ASCII art.
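
For a laugh, here is a quick recreation of that banner in a few lines of Python; the column counts are lifted from my description above, so treat it as illustrative rather than a faithful copy of the original getty.

    # Recreation of the login banner described above. Column counts follow the
    # description in the post and may not match the original getty exactly.
    import sys
    import time

    def banner():
        out = sys.stdout
        out.write("\n" * 25)                               # "clear" the screen the hard way
        date = time.strftime("%d %b %Y")                   # always 11 characters, e.g. "03 Jun 1983"
        clock = time.strftime("%I:%M%p").lower()           # always 7 characters, e.g. "10:23am"
        out.write(date + " " * 62 + clock + "\n" * 11)
        out.write(" " * 23 + "NORTHERN DISTRICT LIBRARY SERVICE" + "\n" * 3)
        out.write(" " * 29 + "Press RETURN to begin" + "\n" * 8)
        out.flush()

    banner()
    input()    # the real getty then fired off an rlogin session to the main box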

So in theory the user presses ENTER. If they do, the little script opens an rlogin session to the other “black box”, and as they are already authenticated they do not get a second greeting. Normally the script would just sit there expecting input (ostensibly the enter key, but it could be anything followed by an LF) forever, but our intrepid coders thought of that with a cron job that ran every 30 minutes checking for idle gettys, flushing their ttys with a heap of line feeds and then sending a SIGHUP to make them reset and pump out the issue file again. I was told it was actually added by a tech in the early 90s after angry librarians complained that kids were typing expletives without hitting CR and thus leaving abuse on screen for others to find. This limited the damage.

The other “black box” was where the magic happened. We found the software – no source unfortunately, just a SunOS executable – and also found that, despite this being the main box, it was saving all of its data on the other box’s external SCSI array, which was exported via NFS. Fun. The data files that we found looked unintelligible and weren’t in any format we’d encountered before.

On the box with the SCSI HDD array we noticed that one of the serial ports was not running a getty and sought to figure out what was going on there. The answer was simple once we finally traced the wires. They led to what we now know as a PSTN emulator – it is installed at one end of one of these old-school jumpered lines and essentially emulates the central office, providing dial tone, line voltage for ringing the other end, and so on. Essentially it is an 80s version of the FXS ports we use nowadays with IP PBX hardware.

Anyway, the modem hooked up to the line simply picked up, got carrier, and the box on the other end did much the same thing, supplying its four dumb terminals with access by transparently launching rlogin sessions.

So we were pretty stumped about how the hell we were going to get the data out of this antiquated system when a colleague suggested that instead of fighting it we work with it. So we wrote a heap of shell scripts, got really comfortable with expect, and purchased a 10base-T PCI Ethernet card on eBay for all of $8; as a bonus they threw in a T adaptor and an 8′ run of coax with BNCs on each end that turned out to be just the right length for us.

So we eventually got the LAN working and were finally able to access the system via rlogin. Our first successful script was designed to log in, navigate through the text menus and then display the catalog. Unfortunately it wouldn’t show more than 1000 entries, so we ended up using two-character prefixes and had our script work through aa to zz, also tacking on 0 to 9 (which caught a few titles we would otherwise have missed). Each result window had the ISBN, description, replacement price, etc. By running each expect script individually and then through the liberal use of sed and egrep we were able to pull out all the data we needed and slowly populate our MySQL database (via the command-line mysql tool).
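
Roughly speaking, the prefix walk looked like the sketch below. The expect script name and the post-processing are stand-ins for illustration; the real pipeline was plain shell, sed and egrep feeding the mysql client.

    # Sketch of the two-character prefix walk described above. "dump_catalog.exp"
    # is a stand-in name for the expect script that logged in, searched one prefix
    # and paged through the (at most 1000) matching catalog entries.
    import itertools
    import string
    import subprocess

    prefixes = ["".join(p) for p in itertools.product(string.ascii_lowercase, repeat=2)]
    prefixes += list(string.digits)        # the handful of titles starting with 0-9

    for prefix in prefixes:
        result = subprocess.run(["expect", "dump_catalog.exp", prefix],
                                capture_output=True, text=True)
        # the real pipeline used sed/egrep here before loading rows into MySQL
        for line in result.stdout.splitlines():
            if line.strip():
                print(prefix, line)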

We then worked out scripts to scrape information from the borrower section. We were able to get not only names, barcode numbers and phone numbers but also a complete history of every book each borrower had ever taken out. This was invaluable, especially as we were able to use the information to compile a list of books that were currently out on loan. Nobody was spared – even banned and inactive users were exported so we could have the most complete data set.

We finally worked out the employee information and time-clock history, in addition to pretty much every note ever entered. Once we were satisfied our code was bug-free we gave it a test run.

It worked beautifully but it became painfully apparent that this was not going to be a fast procedure, certainly not fast enough to do over a holiday weekend. We estimated it would take over two months to export everything. By this time the information would be stale and useless.

We considered importing the book information first and the customer data last, but a variety of factors conspired against that concept, which I still believe would have been a reasonable option. It was then that I had my epiphany. Scripts are notoriously slow, but mine were unnecessarily slow for a variety of reasons, and it was this inefficiency that needed to be rectified. The software worked sequentially, going through each record in order and carefully updating the table as the data was collected. A partially terminated attempt could not be resumed, which also proved to be an issue.

I dramatically altered the architecture of my code. I added a journal to the database and created a supervisor process that invoked all of the scripts as required. I also carefully audited each script and removed any barriers to concurrent operation (for example, temporary files that had shared a fixed filename were modified to use the PID and the unixtime so that one process wouldn’t touch another’s temporary file; more importantly, for most purposes I eliminated temporary files entirely, putting raw information into the database and performing the clean-up within SQL, which is far more efficient).

The supervisor script queried the journal table to determine which tasks were outstanding. It might find, for example, that books beginning with aa-cd are completed, as are pa-sz, but that a previous attempt on ga-gz is listed as active with a last check-in four hours ago. It will therefore look up the PID for the ga-gz job and kill the script, remove the lock on the job and set a flag indicating a failure. It will then make a new entry for ga-gb in the journal, mark it as locked for processing and enter the current time in the start and last check-in fields (it takes errored tasks in tiny chunks to narrow down the source of the error), then execute the appropriate script with the arguments to extract ga-gb, background the task and record its PID in the journal. If system load is still low it will find another uncompleted task in the journal, check it out and execute the script.
As the journal and all output data were on a MySQL server we were able to bring in four Linux boxes to assist in the effort. We would have added more, but network bandwidth became the bottleneck (on the very slow coaxial Ethernet – we only had one old NIC, so we cheated and made a media converter out of an old PC, put our only card in it and bridged the two interfaces; this way we could connect our other PCs via standard UTP, albeit at a very slow speed), in addition to the software often having significant seek delays. You could hear the SCSI arrays churning, so I suspect decades of fragmentation from a database that had never been reindexed. The journal system worked very well and eliminated duplication.
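
For the curious, the supervisor boiled down to something like the sketch below. I have used sqlite3 here so it runs self-contained; the real journal lived in MySQL, and the table layout, the four-hour timeout and the worker script name are illustrative assumptions.

    # Compressed sketch of the journal/supervisor idea. The real system used a
    # MySQL journal and shell scrapers; table layout, timeout and the worker
    # command below are illustrative assumptions, not the original code.
    import os
    import signal
    import sqlite3
    import subprocess
    import time

    STALE_AFTER = 4 * 3600          # a job silent this long is presumed wedged

    db = sqlite3.connect("journal.db")
    db.execute("""CREATE TABLE IF NOT EXISTS journal (
                    chunk TEXT PRIMARY KEY,     -- e.g. 'ga-gb'
                    state TEXT,                 -- pending / active / done / failed
                    pid INTEGER,
                    started REAL,
                    checkin REAL)""")

    def reap_stale_jobs():
        cutoff = time.time() - STALE_AFTER
        for chunk, pid in db.execute(
                "SELECT chunk, pid FROM journal WHERE state='active' AND checkin < ?",
                (cutoff,)).fetchall():
            try:
                os.kill(pid, signal.SIGTERM)    # kill the wedged scraper
            except ProcessLookupError:
                pass
            db.execute("UPDATE journal SET state='failed' WHERE chunk=?", (chunk,))
        db.commit()

    def claim_and_run_next():
        row = db.execute(
            "SELECT chunk FROM journal WHERE state='pending' LIMIT 1").fetchone()
        if row is None:
            return
        chunk = row[0]
        # the worker is expected to update its own 'checkin' column as it runs
        proc = subprocess.Popen(["./scrape_chunk.sh", chunk])   # hypothetical worker
        db.execute("UPDATE journal SET state='active', pid=?, started=?, checkin=? "
                   "WHERE chunk=?", (proc.pid, time.time(), time.time(), chunk))
        db.commit()

    while True:
        reap_stale_jobs()
        if os.getloadavg()[0] < 4.0:            # only take on work while load is sane
            claim_and_run_next()
        time.sleep(60)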

We managed to complete our final pre-production test run in just four days, and were able to convince the library staff to “look but don’t touch” (i.e. use the old dumb terminals to look up books but record loans and returns on paper for later entry into the new system) when the big day (or should I say days) came.

We got the vendor of the new software to provide us with a template of exactly how he would like the data presented for import. He went for old-school CSV, which suited us just fine. We had to clean up a lot of the data, but eventually exported it as tab-delimited text and worked on it with sed, awk and a few cheeky scripts to sort out date formats, rogue whitespace and other minor things that can ruin your day. In the end we only had a couple of rejections, both the result of accented characters being mishandled by the old software and pumped out as random extended ASCII, which the vendor’s import tool didn’t like very much.
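
The clean-up amounted to little more than the kind of pass sketched below; the file names, the assumed date format and the output layout are invented for illustration, and the real job was done with sed and awk.

    # Illustrative clean-up pass: normalise whitespace and date fields before
    # handing the export to the vendor's importer. File names and the assumed
    # 'dd/mm/yy' date format are invented for the example.
    import csv
    import re
    from datetime import datetime

    def clean_field(field):
        field = re.sub(r"\s+", " ", field).strip()            # rogue whitespace
        if re.fullmatch(r"\d{1,2}/\d{1,2}/\d{2}", field):     # e.g. '3/6/83'
            field = datetime.strptime(field, "%d/%m/%y").strftime("%Y-%m-%d")
        return field

    with open("export.tsv", newline="") as src, open("clean.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src, delimiter="\t"):
            writer.writerow([clean_field(f) for f in row])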

Once we pulled out all the old hardware and put in the Windows Server machine it was (almost) smooth sailing from that point on. We enabled Terminal Services and installed some lovely thin clients that were only marginally larger than a can of soda. We actually got away without rewiring much of the library, as the RS232 had been delivered through the building over UTP – I imagine in a similar fashion to how Cisco console cables work. A standard patch cable came out of the wall and into an adaptor that turned the RJ45 into a DB9, which then seated into the serial port of the dumb terminal. I sincerely doubt that the cable would be of sufficient quality to carry 1000Base-T, but it was sufficient for our purposes and there was no noticeable lag at 100Base-TX speeds.

The final hurdle was the second library on the campus, about ¾ of a mile up the road. We installed a bunch of thin clients and a VPN terminator and linked them to the primary site over their existing DSL connection, but the experience was unacceptable, with significant lag. Of course the staff didn’t think anything was wrong, as they were used to the screen redraw time of a 9600bps dumb terminal. It was at this point that I realized the old jumpered line could be put to good use.

We removed the old equipment and installed a pair of AT-MT605s. These little beauties are great value for money (we got ours for about $350 each) and are simply connected at either end of your private line. The only configuration required is flicking a DIP switch to select one unit as the central office side and the other as the subscriber. We were able to achieve well over 30 Mbit/s (a little better than the 9600bps modem that had interconnected the sites before!). The connection was so good that the satellite campus library canceled its slow and unreliable DSL service and now shares the main library’s high-speed Internet connection over the link. We have used QoS to ensure that the thin clients receive priority bandwidth to preserve the user experience.

I hope you’ve enjoyed this rather long-winded account of my encounter with legacy hardware and how we were able to successfully migrate and modernize the library’s entire operation through a combination of good hardware, a keen understanding of the old system and a broad base of knowledge – not to mention the mandatory abundance of patience.

Schneier To Leave BT

Bruce Schneier, the world-famous cryptographer behind algorithms like Blowfish, has been asked to clean out his desk at BT, where he served as a consultant (proverbially speaking, as he evidently teleworked from Minnesota). For its part, the British telephony giant has claimed that the decision had nothing to do with his involvement in analyzing the Snowden documents with the team at The Guardian, but critics are skeptical. I wrote a brief response to the announcement on his blog and have included it below, both for the convenience of readers and to ensure that the link doesn’t break over time – and, more importantly, to safeguard against the comment being removed by the blog’s moderator.

“… hypothetically speaking I would think that if Bruce discovered the conduct of his employer conflicted with his own very well defined publically known ethical persona then I imagine he would hand in his resignation in a heartbeat. I know I would. You could counter argue that perhaps BT’s board was angered about having someone like Bruce – who thanks to the Guardian and his analysis of the Snowden material now has “controversial” emblazoned on his back – on their payroll when their own hands aren’t exactly clean (allegedly, anyway, if you believe the British press). I understand that there are almost certainly NDAs in place and that neither Bruce nor BT could likely comment in depth on this issue.

Suffice to say, I think it removes a big question mark that has been hanging over Schneier’s head regarding his potential conflict of interest. I can only speculate that Schneier has a diverse schedule and a comparably diverse income stream from things like book sales, signings, endorsements and speaking appearances so I suspect (and hope, as he is one of the “good guys”) that this will not cause him any financial distress.
If (and I expect this to neither be confirmed or vociferously denied) Schneier’s departure has even a shred of connection between his work on the Snowden material then I congratulate Bruce on standing up for what’s right and hope that he continues to be a bastion for free speech, limited government and internet security. On behalf of everyone who has read your work over the years – thank you.”

Schneier is a living treasure of the Internet security community. He has devoted a considerable portion of his life to researching and improving information security. Regardless of your opinions of his more controversial beliefs, or of his use of the proprietary and notoriously bug-ridden, security-porous Windows, Bruce has conducted himself with professionalism and has been an asset to our industry. If the situation occurred as some have alleged (emphasis on alleged), it would appear to be a tremendous mistake to let such a talented and valuable human resource slip away from your organization as a result of nothing more than the bruised egos of a few white collars in the boardroom.

Android 4.4.2 Breaks AppOps

I mentioned in a post just a few days ago my annoyance at Google’s burial of the AppOps feature in 4.4 and 4.4.1. With yesterday’s release of 4.4.2 to Nexus 4 users I have noted with dismay that AppOps has been removed in its entirety. That’s right – a security feature that enabled somewhat granular control over what functionality an app could use on your device has been deliberately removed by Google, with one of their product development boffins citing as the reason that it was being used by end users when it was merely a debugging tool.

Others have expressed outrage at this dumbing down of Android functionality, with the EFF commenting on the issue. ZDNet ran an article about the changes today.

I have said it before and I will repeat myself here just in case anyone missed it: Google doesn’t want end users to be able to granularly configure application privileges. Android makes its money through Google Play purchases, and app developers in particular are very fond of using advertising networks to monetize an ostensibly “free” app. Of course, the app isn’t truly free; the user is instead trading privacy, bandwidth and screen real estate to the advertising network. If users could easily toggle coarse location and internet access, the built-in ad-serving platform in, say, a flashlight app would be rendered inoperable, and that makes app developers angry. Never mind the fact that you own the phone.

A truly gutsy Google would go further than the AppOps we saw in Jelly Bean and provide a genuinely granular permissions management system, both at install time and post-installation within the “Apps” section of the Settings applet. On downloading an app from the Play Store a user could simply tap Install or, alternatively, tap Customize. The latter would present a boilerplate warning that this could cause unintended consequences (or perhaps the feature would only be enabled if the phone has developer mode turned on) and then allow the user to toggle each of the permissions listed. Simple. There is no technical reason why this cannot be done.

Until Google gets its act together and starts showing its users some respect – and this may never happen, given that, second only to Facebook, it is perhaps one of the most dangerous companies on the internet when you consider the quantity of data it holds on each user; telling us it won’t “be evil” doesn’t ease my suspicions one bit when it has been caught assisting the NSA, voluntarily or otherwise (I believe any company with morals would alert the world to such a disgrace even if it meant risking the imprisonment of the board – Lavabit’s and CryptoSeal’s handling of the issue was flawless) – I believe the only option privacy-conscious users have is to use a custom ROM that allows more granular control, or to install the Xposed framework, which will let you use the excellent XPrivacy.

I should note that privacy-conscious users should avoid carrying a cellular phone at all if possible. Even if you eliminate all of the risks endemic to Android (which isn’t that difficult – you can remove the Play Store and all Google services, sideload a few trusted apps and update them manually via adb strictly when necessary), you still have the baseband to worry about. The modern cellphone will betray your location via GPS if interrogated, thanks to the E911 mandate (yes, radiolocation would be possible even if the phone weren’t cooperating, but actively sending GPS coordinates is a hell of a lot more accurate; if the service could only be enabled when a 911/112 call had recently been initiated then perhaps it would be excusable, but unfortunately that is not the case), and can also be instructed to covertly auto-answer, providing a level 3+ adversary with a listening device on your person without their having to physically plant a thing. The application processor and the baseband typically communicate via a pseudo-serial port using the antiquated Hayes AT command set. It has become a weekend project of mine to learn more about possibly the least researched part of the cellphone stack.
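
If you want to poke at that layer yourself, the sketch below shows the general idea using pyserial. The device node is a placeholder – it varies wildly between handsets (a USB-exposed modem port, /dev/smd0, a rild socket and so on) – and not every baseband exposes a clean AT interface at all.

    # Hedged sketch of talking to a baseband's AT interface with pyserial.
    # "/dev/ttyUSB2" is a placeholder; the actual device node (and whether an AT
    # port is exposed at all) differs from handset to handset.
    import serial

    def at_command(port, cmd):
        port.reset_input_buffer()
        port.write((cmd + "\r").encode())
        return port.read_until(b"OK\r\n").decode(errors="replace")

    with serial.Serial("/dev/ttyUSB2", 115200, timeout=2) as modem:
        print(at_command(modem, "ATI"))         # firmware / model identification
        print(at_command(modem, "AT+CREG?"))    # network registration status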