Fabricated Digital Evidence…

…has become a problem of epic proportions.  Fabricated (phony) digital evidence is a product of digital surveillance and digital forensics.  Digital surveillance and forensics are both performed on people who use digital electronic devices, which is to say a significant portion of the modern world.

There have been many examples of digital surveillance and forensics in recent years.  Here are some gems from 2016:
1)  Yahoo admitted that its own security team knew in 2014 that malware (harmful software) had been installed on Yahoo’s systems.  The malware was designed to facilitate government intrusion into those systems.
2)  It was revealed that AT&T keeps records of ALL communications through its systems (phone calls, text messages, emails, Skype(tm) video chats, and more) and sells access to police departments and law enforcement.  AT&T has been keeping these records for decades, and its sales to law enforcement have cost taxpayers tens of millions of dollars over the years.
2a)  AT&T forces law enforcement to sign an NDA so that the source of the evidence is never revealed to U.S. Courts.  In other words, it’s public knowledge that law enforcement fabricates the source(s) of evidence provided by AT&T.
3)  Edward Snowden remains exiled in Russia.
4)  Russia meddled with U.S. Elections by hacking into a variety of U.S.-based systems.
5)  The FBI made headlines in early 2016 by claiming that it needed Apple to provide access to an iPhone that allegedly belonged to a domestic terrorist.  The FBI wanted access for forensic purposes.

There are many other examples in recent years.  In every case there is a U.S. Court with an honorable judge that somehow sees this information, one way or another.  Never mind that most U.S. judges barely know how to use the interwebs to access their email…some don’t use the interwebs at all…and most can barely set up a WiFi network at home.

The silliest part of all this digital evidence and surveillance is that it’s so easy to fabricate digital evidence.  I mean…it’s so easy…

It’s ridiculously easy to fabricate digital evidence on any given digital electronic device, because every single device is chock-full of security flaws.  In some cases the flaws are the very things providing or collecting evidence (via surveillance or forensics).

To make matters worse, most digital electronic systems are not designed to keep an audit trail.  In other words, the systems perform actions which are never recorded, internally or externally.  In fact, not all actions can be recorded, and even when they are, it’s practically infeasible to keep audit trails of everything due to storage requirements.  It’s also difficult to verify whether an audit trail is complete.  For example, there are no automated tests that cover real-world usage of a public-facing web server and verify that an audit trail captures all possible attack vectors.  In other words, it’s practically impossible to test whether an audit trail works.

It’s not just impossible to test the effectiveness of auditing (logging); it’s also impossible to perform some logging in the first place.  A modern computer not only contains multiple CPUs (processing brains), but dozens or hundreds of smaller MCUs (smaller brains) that have their own dedicated purposes.  These tiny parts send messages to each other that are never recorded, and in fact can’t be recorded.

In other words, much of the activity a computer performs just to move 8 bits (1 byte) of data across its subsystems cannot be audited.  Most digital surveillance and forensics exploit flaws in digital electronic devices, such as computers and smartphones, leveraging the inability of those systems to know their hand from their foot.  In other words, they tap into subsystems that were not designed to be accessible, and internally the system has no way of knowing that it’s compromised.  For example, in the end the FBI did not need Apple to provide access to that iPhone in 2016; it reportedly paid a third party to exploit a flaw instead.

What seems to be lost on everybody in America, from journalist to judge, is as follows:  any so-called evidence obtained from a digital device can be easily fabricated, and there is no way to determine the veracity of that evidence.

That’s right!  It turns out every single system used to collect that evidence can be compromised and there’s no way to know.

To make matters worse, it’s possible to hack into any given digital system, fabricate evidence, and then remove any trace of the hack.  Practically any digital electronic system or sub-system can be compromised, and there is almost always an attack vector that allows the attacker to go undetected.  Again…digital audit trails are almost always incomplete, and in some cases impossible.  Moreover, some of the latest publicly revealed attack vectors use electromagnetic emissions to attack systems remotely, with no physical connection to the device(s).

Therefore, since it is impossible for all practical purposes to prove that any given piece of digital evidence is not fabricated, I think it should all be suppressed in U.S. Courts and in the Court of Public Opinion.  I think it’s a complete pile of bullshit.  It’s so easy to fabricate evidence on somebody’s phone, somebody’s phone logs, somebody’s email…and it turns out nefarious actors and legitimate law enforcement have secret and potentially illegal access to all of it.

All digital evidence is fabricated bullshit until proven otherwise.  It will never be proven otherwise, because modern electronic systems are unavoidably chock-full of flaws, errors, and bugs!  They are designed under conditions that guarantee errors, and built with tools that are themselves full of errors.

It would take something like true quantum computing to change this state of affairs.  Not only that, it would take something like mass production of true quantum computers.  Furthermore, if the security claims behind quantum computing materialize, then what will agencies and actors like AT&T, the FBI, and other law enforcement do when there are no security flaws left to exploit?

NPM Sucks, BowerJS Too, Here’s Why

It’s been years since I’ve ranted on this blog, and my apologies to you; I know you’ve been missing it.  I have many blog postings from the last several years and I may migrate them back onto this site, I’m still deciding.  In the meantime I wanted to rant about some tech stuff.

“It’d be good if there were a canonical ‘Why npm sucks’ article, like the ‘fractal of bad design’ one for PHP.” – https://news.ycombinator.com/item?id=8300438

Therefore I just couldn’t resist.  Let’s start with the disclaimers.  I make no “canonical” or “fractal” claims here.  I come to NodeJS from the C, C++, Java, and PHP development worlds.  I’ve been using client-side JavaScript for almost 20 years and I understand it well.  The first time I ever worked with server-side JavaScript was a trivial maintenance job on a Netscape server circa 2003, so I’m excited to be using JavaScript on the server again today.  I’m not familiar with (all) JavaScript engine intrinsics, but I can be if called upon.  I’m agnostic about software engineering tools, so it’s not like I love JavaScript.

NPM and Bower suck for different reasons.  BowerJS is designed to (mis)use SCCS as a dependency-management repository.  NPM is prone to unwieldy folder structures.  The link above also mentions that nested dependencies are a problem, and they are a mundane but troublesome issue.  I’m not keeping score in this article, since that’s not my intent.  I just hope for better open-source systems.

Nested Dependencies

Since I admit there are at least two problems, let’s start with the latter: nested dependencies.  It’s indirectly related to the first, and not so subtle.  NPM uses a non-parsimonious approach to storing dependencies in local folders.  By convention we are talking about folders named “node_modules” nested inside the “node_modules” directory of a standard NodeJS/NPM project.  This issue doesn’t seem to affect Bower as much, except that Bower can be used to link to dependencies that in turn use NPM (for their dependencies).

I recently ranted on another site that adding two (2) dependencies to a NodeJS project’s package.json declaration resulted in at least 934 dependency folders in ./node_modules.  Now, it’s true that one of those dependencies is HapiJS, a complete web application framework, so it’s hardly a lightweight dependency.  Still, most software that end-users want or need uses some type of application framework like this.  In other words, most NodeJS projects can be expected to have similarly large numbers of dependency folders in ./node_modules.  I suppose we could somehow aggregate actual statistics from Github, but I haven’t done that here.

In a given NodeJS project that uses NPM there are many redundant nested dependencies.  For example, if HapiJS v15 uses Lodash, and the Sequelize ORM uses the same version of Lodash, then you’ll have two copies of Lodash nested under ./node_modules.  The only case where one copy of Lodash is downloaded is when Lodash is a parent dependency of the project – in other words, your project depends on Lodash directly, in addition to HapiJS and Sequelize.
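
To illustrate, the layout ends up looking roughly like this (a sketch, not real output; the point is the duplicated lodash folders):

node_modules/
├── hapi/
│   └── node_modules/
│       └── lodash/        <- copy #1
└── sequelize/
    └── node_modules/
        └── lodash/        <- copy #2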

In practice, there are usually version mismatches right down to [my-favorite-dependency] v1.2.3 versus [my-favorite-dependency] v1.2.2.  There are many redundancies and version mismatches among sub-dependencies in any given NodeJS ./node_modules folder.  However, they can be viewed and managed with a single NPM command: npm ls.
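
For example, asking NPM where every copy of Lodash lives (the output below is a sketch and the version numbers are hypothetical):

npm ls lodash

my-project@1.0.0
├─┬ hapi@15.0.1
│ └── lodash@4.13.1
└─┬ sequelize@3.24.0
  └── lodash@4.13.1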

Another problem with NPM’s folder structure is the design itself.  It’s not clear why sub-dependencies are stored in node_modules folders underneath their parent dependency.  In short, NPM’s default behavior takes a project-local view.  NPM creates a node_modules folder in the directory that declares a package.json.  Within node_modules, as I described above, most dependencies have their own dependencies.  In many cases (unless your project declares the same top-level dependency) the project ends up containing nested node_modules/**/node_modules folders, since sub-dependencies can nest their own dependencies.

The motivation for NPM’s non-parsimonious nesting of node_modules/**/node_modules is not explained on the NPM.org website.  It’s not clear why NPM doesn’t simply store all dependencies at the top level.  The @scope attribute for dependencies doesn’t explain it either, and the motivation for the folder structure resulting from @scope is equally unclear.  Why isn’t @scope given an implied (default) value, with everything organized flat under node_modules?
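
For reference, a scoped dependency simply gets its own subfolder (the org and package names here are hypothetical):

node_modules/
├── @myorg/
│   └── my-package/
└── lodash/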

NPM’s non-parsimonious view of HDD storage, while claiming the opposite, is more than annoying; it can become an obstacle to productivity.  In theory storage space isn’t a problem, because storage is inexpensive and HDDs, computers, and network connections are fast.  In the real world, having so many dependencies is a problem in two cases:  A)  whenever the network is slow (aka not at the office); B) whenever projects must be copied, transferred, cloned, re-initialized, and so forth.  The total size of some NodeJS projects plus dependencies starts to rival small VMs at 2GB.  In practice, I’ve had to wire computers together on a gigabit switch to transfer repositories of NodeJS code, because that was simply the only reasonable way to avoid potentially 24-48 hours of downtime.  The last time I had to do that when re-creating (even larger) repositories of Java projects was…never!

Nevertheless, this problem of large numbers of dependency folders is not worthy of a “sucks” label; a feature request might be better.  There’s a switch in NPM to install into a global repository, but from the NPM perspective that’s intended for installing system-wide applications along with man pages (documentation).  Using it doesn’t abate nested dependencies, though it may help with transferring multiple projects in distributed team environments.  NPM has no notion of “user-local” dependencies, although it does have a user-local cache; NPM only has project-local dependencies and global dependencies, with possibly redundant nested folders.  You can learn more about NPM folder structure at https://docs.npmjs.com/files/folders.
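
For reference, the global switch is just the -g flag (grunt-cli is only an example here):

npm install -g grunt-cli    # system-wide, lands in {prefix}/lib/node_modules
npm install hapi --save     # project-local, lands in ./node_modules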

Tools could be built to handle the issues around version redundancy and mismatch, and there are straightforward solutions.  The nested-dependency problem can be hammered down, and it has been by other dependency-management tools (Maven, for example, keeps a single shared local repository under ~/.m2).

SCCS as Dependency Repository

The problem with Bower (and fortunately not NPM), and the other reason Bower deserves the “suck” label in 2016, is that it uses a source-code control system (SCCS), primarily Git (and therefore Github.com by convention), as a dependency repository.  This is a bad design decision, and it may be non-recoverable, depending on factors potentially beyond the control of these tools.

Indeed, it took me a long time using Bower, trusting it along the way, to realize that it’s designed on a fundamentally flawed premise:  that SCCS repositories can double as dependency repositories.  They can in theory, but they can’t in practice.

Bower uses Git tags to identify versions of packages declared in their respective configuration files.  The Git tags are stored in the SCCS (Git) repo.  As a result, Bower doesn’t work in some situations where a “.git” folder is not found in the working directory tree.
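
To make that concrete, a bower.json declares a semver range (the version here is hypothetical), and Bower resolves it by looking up the package’s registered Git URL and matching the range against that repo’s tags:

{
  "name": "my-app",
  "dependencies": {
    "lodash": "~4.13.1"
  }
}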

A Misuse of SCCS

I think it’s pretty straightforward to explain why SCCS is not intended for dependency management.  If it were, then we should all stop maintaining files like package.json, bower.json, and metadata.rb and the version attributes contained within.  SCCS is intended for code management, version control, and shared/distributed development teams; it is not intended for repository management.  SCCS tags and branches are not a good way to identify dependencies because:

Metadata that uniquely identifies a codebase should reside directly in the codebase.

I’m pretty sure somebody else coined that phrase, or something just like it, not me.  Version information (in package.json, bower.json, composer.json, pom.xml, etc.) is codebase metadata.  Ideally we should be able to view and update version information right there in our IDE.  Otherwise we have to use another tool to perform this necessary function.  Humans write code, and most people agree that requiring fewer tools to get the same job done is preferable to requiring more.

When we use SCCS tags and branches on files that have unique version identifiers, we are creating metadata about metadata.

Again, version information is metadata, and it’s conventionally stored in a file like package.json.  When we tag that same repository with a Git tag like “4.3.2-npm”, we’ve created metadata about metadata.  On face value it seems like a bad idea to use metadata about metadata for anything as important as dependency management and version resolution.  Yet that’s what BowerJS does (and what NPM does whenever dependencies are pulled straight from Git URLs).  Again, I’m not going to get into the trenches of why metadata about metadata should not be used this way; I’ll leave that for another article, or for you.  Besides, I think there are better ways.
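
The redundancy is easy to see (the version number is hypothetical):

# the version already lives in package.json / bower.json…
"version": "4.3.2"
# …and then gets restated as a tag in the SCCS layer
git tag 4.3.2-npm
git push origin 4.3.2-npm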

What’s subtly interesting to me about using SCCS as a dependency repository is that, by convention and in reality, most of the packages NPM and BowerJS retrieve are hosted on Github.com.  Effectively, Github.com has become a public dependency repository.  I wonder if Github.com “knows” that, or cares?  I don’t think NPM and BowerJS are in violation of Github.com’s TOS, but I think they’re venturing into a fuzzy area.  This architecture places a de facto burden on Github.com and public SCCS systems like it.

Moreover, this architecture leads to distribution of code in a way reminiscent of Github.com’s “Github Pages” feature.  To explain:  Github Pages requires a repository to create a “gh-pages” branch containing content specific to Github Pages.  Since it’s a static website intended to showcase the repository, the “gh-pages” branch has its own development roadmap.  In practice something similar (though less pronounced) happens in the NPM ecosystem:  *-npm branches are sometimes (often?) different from the main branch.  Lodash, for example, has a not-exactly-the-same codebase across the tags used to distribute through NPM, through other NPM-like tools, and as pure JS distros.

How To Recover

BowerJS should adopt a centralized dependency repository/registry; using SCCS alone is unacceptable.  If it wants to keep using SCCS as its dependency repository, then it should at least use a gh-pages-like approach, with dedicated branches whose only job is distribution.

NPM, for its part, should view redundant nested dependencies and large numbers of small files as potentially unacceptable.  NPM already has a scalable approach to its registry/repository management; a fix here is more of a “nice-to-have”.

MavenJS

Really, I’d rather have a MavenJS tool than NPM and/or BowerJS and/or a new-fangled SCCS extension.  MavenJS would handle dependency management like Maven does for Java.  MavenJS would concern itself only with dependency management; it would replace Grunt for that role and would not need to carry over Maven’s idea of “build goals”.  If we designed, contributed to, or evangelized for NPM and/or BowerJS over the last 7+ years, then our work is cut out for us.  Fortunately we have a model to work from.  Keep in mind that others have tried to improve upon that model, so fortunately we can iterate!

Wireless 3D Printing with Bluetooth, Marlin, and RAMPS 1.4

The initial modifications to the RepRap 3D printer firmware called Marlin are ready for public testing. I’m using it now to control the printer wirelessly and run jobs from its SD card. The code is at https://bitbucket.org/terawattindustries/marlin-bt (master branch). Communication speed between the Arduino and the BT module is 115.2kbps. UPDATE: the default speed in the firmware is now 500K, so set your BT module accordingly.

Hardware

I’m using the Bluetooth module discussed in a previous post. The Bluetooth setup costs about $25. Terawatt Industries is manufacturing a PCB design so that this attaches directly onto RAMPS and will work with RepRap power sources, without the adapter board shown here.

  • I’m using the BT module I blogged about at https://tenacious33.wordpress.com/2012/07/21/5v-tolerant-bluetooth-module/. It’s reliable and inexpensive. It’s Class 2, but that seems perfect here.
  • Speed is 115.2kbps. 250k (Marlin’s max) should also work with the firmware, but this BT module is programmed for 115.2k – see the example command after this list for changing that. UPDATE: I’ve confirmed Marlin can handle the MEGA’s max USART speed of 1M bps. This module goes a little faster, so I’m experimenting with 921.6k bps and will know more soon.
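
For reference, linvor-style modules are reconfigured over the serial line with AT commands; something like the following sets 115.2k (from the datasheet for this module family – double-check yours):

AT+BAUD8    (the module replies OK115200)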

Software

The new features are:

  • Installs just like Marlin – use the Arduino IDE.
  • Enabled with a #define in Configuration.h.
  • Backwards compatible with USB connections, whether or not #define’d.
  • USB and Bluetooth connections live side-by-side. In testing we can have one machine connected to the printer via USB, then connect to the printer from another machine using Bluetooth and issue GCODE commands, like SD print. The machine that’s connected via USB echoes all of this on its console (Pronterface) and we can even disconnect and re-connect the BT module to eavesdrop without affecting the print.
  • USART2 is used for Bluetooth communication in the firmware, and USART0’s behavior is mostly intact. In fact, when BT support is enabled, everything written to USART0 gets multiplexed to USART2 as well – a sketch of the idea follows this list.
  • USB and Bluetooth commands shouldn’t collide. This isn’t well tested – it’s an unusual scenario – but the firmware should only read one command from each port and block the other until the end of command is reached. There are some interesting implications. Let’s move along…
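
To make the multiplexing concrete, here’s a minimal sketch of the idea, assuming an ATmega2560. The USART registers are the MEGA’s real ones, but the BTENABLED macro and the function wrapper are illustrative – check Configuration.h in the repo for the actual switch:

#include <avr/io.h>

/* Mirror every outgoing byte from USART0 (USB host link) to USART2
   (Bluetooth link). */
static void serial_write(uint8_t c)
{
    while (!(UCSR0A & (1 << UDRE0))) {}  /* wait for USART0 data register empty */
    UDR0 = c;                            /* send on USART0 (USB) */
#ifdef BTENABLED
    while (!(UCSR2A & (1 << UDRE2))) {}  /* wait for USART2 data register empty */
    UDR2 = c;                            /* mirror to USART2 (Bluetooth) */
#endif
}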

Instructions

These instructions apply to the pictured BT module. The module must be wired to be 5V tolerant, as described in my blog post.

RAMPS Setup

    1. Download the Bluetooth-enabled version of Marlin from the Terawatt Industries Bitbucket Repository.
    2. Install the firmware on your (RAMPS/Arduino MEGA) control board. The RepRap wiki has instructions for this if you need help.
    3. Connect the BT module’s 3v3 pin to a 3.3 volt power source. The Terawatt ATX PSU Adapter Board and the ATX PSU Dev Board from Ultimachine are awesome for this. They’re perfect for powering 3D printers too.
    4. Connect the BT module’s GND pin to a ground source.
    5. Connect the TX pin from the BT module to pin 17 on RAMPS 1.4.
    6. Connect the RX pin from the BT module — through the 5v tolerant circuit — to pin 16 on RAMPS. If you skip the 5v tolerant circuitry you’ll fry this BT module as stated in the datasheet.

Printing via Bluetooth

These instructions assume you’ve already installed the Bluetooth-capable Marlin firmware on your RAMPS 1.4 Control Board and attached a working Bluetooth module to it.

    1. Pair your computer to the BT module. How to do this depends on your OS: Windows, OS X, Linux, or otherwise. For this module the name contains the word “linvor”, and the Linvor BT module’s PIN is 1234.
    2. Open Pronterface.
    3. Set the connection speed to 115200.
    4. Choose the port listed with the name of your Bluetooth module.
    5. Click Connect.
    6. Tell Pronterface to print all the neat things stored on the printer’s SD card. You can print via Bluetooth – just make sure your computer stays in range for the duration of the print.

Addendum

If you’re using the “screen” app or a terminal to drive the Bluetooth connection, the following GCODE commands can be useful:

M119            ; report endstop status
M104 S190       ; set the hotend temperature to 190 C
M140 S60        ; set the bed temperature to 60 C
M20             ; list the files on the SD card
M23 myfile.gco  ; select a file on the SD card
M24             ; start (or resume) the SD print
M27             ; report SD print status

Too Much Glory for Arduino

Arduino is so simple and elegant to use; yes, I understand. However, the Arduino IDE shouldn’t get more glory than it deserves. For the record, beginners would be better served by a WordPad- or Dashcode-style editor plus a processor that compiles and uploads HEXes dropped onto it. Not only would users benefit and be more efficient, the dev team would be more efficient building that product. Instead, the Arduino dev team wastes effort adding features and fixing bugs in the Arduino IDE – all of which have already been raised and addressed in a myriad of other editors and IDEs. Users have to struggle with those bugs and missing features, yet so many praise the IDE and its low barrier to entry. The IDE kinda blows. Its multithreading model is broken as of v1.0, and the development team lets simple editor bugs hang around for years without a fix – like Ctrl-Z undo not moving the cursor. The worship of it should cease. Over and out…

5V Tolerant CSR Bluetooth Module

Here’s a 5V tolerant version of the Bluetooth module covered in the post about the generic 3.3V Bluetooth module. The module is known by a few names, such as “the eBay bluetooth module” and “the dealextreme bluetooth module”, but its official name is the Cambridge Silicon Radio EGBT-04 module. This module is very affordable and seems robust. This circuit follows the recommended application note in the datasheet, as follows:

  • The connection from the MCU’s TX line to the Bluetooth’s RX line uses a cheap 1N914 diode and a 10K pull-up instead of 3.3K (sketched after this list). The weaker pull-up has worked better with the Arduino configurations tested so far.
  • The connection from the Bluetooth’s TX line to the MCU’s RX line is straight-thru. Most MCUs will tolerate this. I haven’t memorized the module’s output current specification but it’s low and the datasheet is posted on this site.
  • The MCU is an Arduino MEGA.
  • The blue LED is connected to the Bluetooth module PIO for connection status. This is an active-high PIO.
  • The purple LED is wired to the module’s RX line so this fixture can monitor the signal coming from the MCU. It blinks off when transmission occurs. At 115.2kbps the blink is too fast to see with short byte sequences, but at 57.6k the blinks are noticeable. The firmware on this module has no PIOs for RX/TX (yet).
  • With Arduino IDE 1.0+ SoftwareSerial library the module can talk to Arduino at speeds up to 57.6k on a 16MHz Arduino (Pro Mini, Uno, etc). With the Arduino MEGA using a library for built-in USART the module can talk to Arduino at speeds up to 115.2k – perhaps higher but untested.
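
For reference, the diode/pull-up trick from the first bullet looks roughly like this (a sketch of the common one-diode level shifter, not a reproduction of the datasheet figure):

          +3.3V
            |
           10K
            |
BT RX ------+------|<------ MCU TX (5V)
                 1N914 (cathode toward the MCU)

When TX drives low, the diode conducts and pulls the module’s RX low; when TX drives high (5V), the diode blocks and the pull-up holds RX at 3.3V.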

Robustness:

The jury is still out. I have no quantitative way to measure connection strength in the field yet, but it seems acceptable. OS X has worked smoothly with this module so far. Android has been quirky, with issues both at the SDK level and when fatal APK crashes cause disconnection. Google’s Bluetooth Chat won’t connect, but Amarino and Amarino-Embed work. When an app crashes while connected, the module has to be reset. Whitebox testing on Android hasn’t turned up any other surprises so far. I’d have to say the other modules I’ve worked with have their own quirks with Android. Eventually I’ll be able to make more quantitative statements about this topic.

The module has performed well in my field tests involving real-world scenarios such as connection distance and robustness through walls. The firmware lacks features found in modules from Roving Networks and BlueGiga; in general, it doesn’t offer anywhere near as many features as the firmware on those modules. But I rarely use the full feature set on those modules, and there’s a well-featured firmware available and programmable for this one. Another thing: even the pricey modules provide what’s called “single-mode” Bluetooth profile support. That means if the firmware is written to support the SPP or HID profile, then that’s all it supports; to support a different profile it has to be re-programmed, just as with this module. All in all, I’d say this module measures up well. I haven’t found multi-mode support at an affordable price yet, but that’s likely to change.

XBot – All Grown Up

It’s autonomous now — no more tethering to your PC for power, except to recharge. This robot reacts to light with a dancing head and a micro light show. You control it by waving your hand around it. Since it runs on an Arduino Pro Mini it’s fully hackable. Source code and DIY instructions are on the way. The build is solderless: wire-wrap and hot-glue take care of everything in this low-powered bot.

Three states are shown in the above photos. On (orange/red); Stand-By (blue); Dancing (shown as purple, but is multicolored).

MS-DOS FAT16 Formatting on OS X

It took a second to figure out how to format an SDCard with an MS-DOS FAT16 partition, so I wanted to share with everybody. This applies to any type of external disk drive.

There are some links to be found…such as http://www.bergek.com/2008/10/27/partition-usb-disk-with-fat16-on-mac-os-x/ and another one that’s almost identical.

First of all, the second set of instructions results in a “/Volumes/SDCARD/ does not appear to be a whole disk” error in OS X Lion. The drives listed in /Volumes by OS X are the mounted logical volumes, and partitioning needs to occur on a physical disk, so I think using “/Volumes/”-anything would not work, though maybe it did in an older version of diskutil.
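
For reference, the physical disk identifier (like the /dev/disk2 used below) can be found with:

diskutil list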

Regardless, both sets of instructions are approximately the same. When entering this command:
diskutil partitiondisk /dev/disk2 1 MBRFormat "MS-DOS FAT16" "XYZ" 1024M
in OS X Lion I kept getting this error message:
XYZ does not appear to be a valid volume name for its file system

I’m not sure why that isn’t a valid partition name. From what I recall MS-DOS FAT16 partitions support an 11-byte ASCII name. Rather than digging into that I tried this command:
diskutil partitiondisk /dev/disk2 1 MBRFormat "MS-DOS FAT16" "" 1024M
and got the error:
The size (1024M) must not be greater than the size of the whole disk

My SDCard is 1G = 1024M, so I’m not sure why that doesn’t work either; my guess is that the partition map itself consumes some of the disk. Rather than dig into that hole I used this command:
diskutil partitiondisk /dev/disk2 1 MBRFormat "MS-DOS FAT16" "" 1G

And my SDCard is now formatted. Hope this helps!

UPDATE 8/23/2012: The above worked for a 1GB micro SD card that was used and had been partitioned for an Android G1 previously. On another, brand new SDCard, the steps work but OS X complains that it can’t read the card. Trying to format an SDCard > 1GB is likely the problem – if I find a workaround I’ll post.

ATMega32u4 + WS2801 Demo Jig

I posted some firmware for an ATMega32u4 + WS2801 combo that works for demonstration or testing. This code is based on Sparkfun’s, but it has been modified to run on an ATMega32u4 with LUFA firmware support and compiles with avr-gcc. The LUFA usage is just there for show, but with a little effort it can be extended.

Pictured above is the firmware running with a WS2801 strip. I’m finding the random generator produces some washed-out colors, but in general they move properly down the strip.

IMPORTANT! Trying to drive the whole strip from USB/5V on the 32u4 will kill the MCU with overcurrent draw. Don’t do this! The default number of pixels set in Main.c is 1. The MAX is somewhere around 8 – each pixel can draw roughly 60mA at full white, and a USB port only supplies 500mA.

Pinouts are described in firmware/main.c.
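
For anyone curious, driving a WS2801 chain boils down to bit-banging like this. This is a minimal sketch, not the jig’s firmware; the pins are illustrative, so see firmware/main.c for the real pinout:

#include <avr/io.h>

#define CLK_BIT  PB0   /* illustrative pin assignments */
#define DATA_BIT PB1

/* WS2801 shifts data MSB-first and samples it on the rising clock edge;
   holding the clock low for >500us latches the frame to the LEDs. */
static void ws2801_send_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & 0x80)
            PORTB |= (1 << DATA_BIT);
        else
            PORTB &= ~(1 << DATA_BIT);
        PORTB |= (1 << CLK_BIT);    /* rising edge: chip samples the data bit */
        PORTB &= ~(1 << CLK_BIT);
        b <<= 1;
    }
}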