Tuesday, July 8, 2014

file locking using a context manager (with statement) in python

I needed a quick locking mechanism to prevent my daemons from stepping on each other. To have a sane daemon startup (and prevent multiple daemon spawns), we need to ensure that we hold an exclusive lock before starting the program. Googling around didn't turn up any context managers that actually use the flock(2) syscall.

So here goes my attempt that seems to work:
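A minimal sketch (the lock-file path and the locker.py module layout are illustrative):

    import fcntl

    class FileLock(object):
        """Hold an exclusive flock(2) on a file for the duration of a with block."""

        def __init__(self, path):
            self.path = path
            self.fd = None

        def __enter__(self):
            self.fd = open(self.path, 'w')
            # Blocks until the lock is ours.  Use LOCK_EX | LOCK_NB and catch
            # IOError instead if the daemon should bail out immediately.
            fcntl.flock(self.fd, fcntl.LOCK_EX)
            return self.fd

        def __exit__(self, exc_type, exc_value, traceback):
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            self.fd.close()

Since the lock lives on the open file description, it also evaporates if the process dies, which is exactly what you want for daemon startup.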

Spinning off some Python processes that utilise this context manager shows the serialisation taking place:
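(A driver along these lines; the locker module is the sketch above and the lock path is made up.)

    # demo.py: spawn a few contenders for the same lock
    import multiprocessing
    import os
    import time

    from locker import FileLock

    def worker():
        with FileLock('/tmp/daemon.lock'):
            print('%s pid %d holds the lock' % (time.ctime(), os.getpid()))
            time.sleep(2)

    if __name__ == '__main__':
        procs = [multiprocessing.Process(target=worker) for _ in range(3)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

The timestamps should come out roughly two seconds apart: each worker blocks in flock() until the previous holder exits the with block.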
And lsof shows the locking for the processes spun off above:
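(The lock path matches the sketch above; lsof marks the holder of an exclusive lock with a W suffix in the FD column.)

    lsof /tmp/daemon.lock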

Monday, May 19, 2014

Redistilling PDFs that are not portable by design

I hate it when I am forced to deal with documents that are portable in title only (yes, I am looking at you, Adobe). Every so often, I get PDF documents from a major organisation that can be viewed by Adobe Acrobat only. On OSX, this bloated application consumes 369 Megabytes of precious SSD space (Preview consumes 29 Megabytes and is nicer).

Anyway, back to the story: these documents cannot be saved in any other format on my machine. In fact, the only way to read these documents without hackery is to print them out and scan them back in.

!Stupid!

So here goes a recipe for saving these files in a portable way.
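A sketch of the idea (file names are illustrative): print the document to PostScript from the one viewer that will open it, then redistill the PostScript back into a vanilla PDF with Ghostscript:

    # Step 1: in the viewer, print to a PostScript file
    #         (on OSX: Print -> PDF -> Save as PostScript...).
    # Step 2: redistill the PostScript into an ordinary, portable PDF:
    ps2pdf locked-document.ps portable-document.pdf

    # The same thing with Ghostscript invoked directly:
    gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
       -sOutputFile=portable-document.pdf locked-document.ps

The round trip through PostScript strips out whatever viewer-specific cleverness the original relied on; what comes out the other side is plain, boring, portable PDF.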

Saturday, May 17, 2014

Subnet calculation using pure mysql

You can easily aggregate your records by subnet in MySQL thanks to its bitwise operators, INET_ATON() (an address-to-number function), and some thinking...

Here you go:
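(The access_log table and ip column are made up for the example.) Mask each address down to its network with a bitwise AND, then group on the result:

    -- Aggregate hits by /24 subnet.
    -- INET_ATON() turns 'a.b.c.d' into a 32-bit integer, & 0xFFFFFF00
    -- keeps the top 24 bits (the /24 netmask), and INET_NTOA() turns
    -- the result back into a dotted quad.
    SELECT INET_NTOA(INET_ATON(ip) & 0xFFFFFF00) AS subnet,
           COUNT(*) AS hits
    FROM access_log
    GROUP BY subnet
    ORDER BY hits DESC;

For a different prefix length n, the mask is just the top n bits set: 0xFFFFFFFF << (32 - n).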

Thursday, May 15, 2014

tshark: display filters + reporting using csv


You can do pretty nifty things with tshark. The absolute life saver is tshark's ability to dump to a CSV/TSV file using a user-specified display filter.

As an example, I'd like to point out some packet retransmission issues to my provider in a nice (manager-friendly) spreadsheet. Here we go:

Manager-friendly output:

ip.src     tcp.srcport  ip.dst     tcp.dstport  tcp.flags.syn  tcp.flags.ack  tcp.flags.push  tcp.flags.reset  tcp.analysis.bytes_in_flight  tcp.len
a.b.c.d    8645         e.f.g.h7   9999         1              0              0               0                                              0
e.f.g.h7   9999         a.b.c.d    8645         1              1              0               0                                              0
a.b.c.d    8645         e.f.g.h7   9999         0              1              0               0                                              0
a.b.c.d    8645         e.f.g.h7   9999         0              1              1               0                168                           168
e.f.g.h7   9999         a.b.c.d    8645         0              1              0               0                                              0
e.f.g.h7   9999         a.b.c.d    8645         0              1              1               0                1154                          1154
a.b.c.d    8645         e.f.g.h7   9999         0              1              0               0                                              0
a.b.c.d    8645         e.f.g.h7   9999         0              1              0               0                1448                          1448
a.b.c.d    8645         e.f.g.h7   9999         0              1              1               0                1502                          54
e.f.g.h7   9999         a.b.c.d    8645         0              1              0               0                                              0

How do we get there?
1. Identify the fields that you want. A Wireshark display-filter cheat sheet is a good place to start. You can home in on the fields you want by firing up Wireshark, using the expression builder (the button right next to the filter input box), and selecting the protocol you are after.

2. Choose your TCP stream (a display filter such as tcp.stream == 0 narrows things down to a single conversation).

3. Assemble your command. The one used to display the output above is:
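(Reassembled from the field list above; the capture file name and stream number are illustrative. On tshark builds older than 1.10, use -R in place of -Y.)

    tshark -r capture.pcap -Y 'tcp.stream == 0' -T fields \
        -e ip.src -e tcp.srcport -e ip.dst -e tcp.dstport \
        -e tcp.flags.syn -e tcp.flags.ack -e tcp.flags.push -e tcp.flags.reset \
        -e tcp.analysis.bytes_in_flight -e tcp.len \
        -E header=y -E separator=/t

The tab-separated output pastes straight into a spreadsheet; adding a clause like tcp.analysis.retransmission to the display filter pulls out only the problem packets.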

Partitions in Postgres: Automatically creating partitions based on an attribute

A long time ago... I worked on importing roughly half a billion log records into Postgres. To keep query response times low, I used a partitioner that sharded records monthly, along the lines of the partitioning example documented in the Postgres docs.

Here it is:
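(A sketch of the shape of it; table and column names are invented for the example.)

    -- Parent table; one child per month inherits from it.
    CREATE TABLE logs (
        id      bigserial,
        ts      timestamptz NOT NULL,
        message text
    );

    CREATE OR REPLACE FUNCTION logs_insert_trigger() RETURNS trigger AS $$
    DECLARE
        partition text := 'logs_' || to_char(NEW.ts, 'YYYY_MM');
    BEGIN
        -- Create the monthly partition on first use.  A production version
        -- needs to handle the race between two concurrent creators.
        IF NOT EXISTS (SELECT 1 FROM pg_class WHERE relname = partition) THEN
            EXECUTE format(
                'CREATE TABLE %I (CHECK (ts >= %L AND ts < %L)) INHERITS (logs)',
                partition,
                date_trunc('month', NEW.ts),
                date_trunc('month', NEW.ts) + interval '1 month');
        END IF;
        -- Route the row to its partition and skip the parent.
        EXECUTE format('INSERT INTO %I SELECT ($1).*', partition) USING NEW;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER logs_partition
        BEFORE INSERT ON logs
        FOR EACH ROW EXECUTE PROCEDURE logs_insert_trigger();

With constraint_exclusion enabled, queries that filter on ts only touch the relevant monthly children.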

Sunday, March 16, 2014

Making sense of /proc/buddyinfo

/proc/buddyinfo gives you an idea of the free memory fragments on your Linux box. You get to view the free fragments for each available order, for the different zones of each NUMA node. A typical /proc/buddyinfo looks like this:
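(Illustrative numbers from a small 32-bit x86 box; each column is the count of free blocks of order 0, 1, 2, ... 10.)

    Node 0, zone      DMA      3      5      7      4      6      3      3      3      1      1      1
    Node 0, zone   Normal    204    235    140     55     26      4      1      0      0      0      0
    Node 0, zone  HighMem      2      7      1      4      2      1      1      0      0      0      0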


This box has a single NUMA node. Each NUMA node is an entry in the kernel linked list pgdat_list. Each node is further divided into zones. Here are some example zone types:
  • DMA Zone: The lower 16 MiB of RAM, used by legacy devices that cannot address anything beyond the first 16MiB of RAM.
  • DMA32 Zone (only on x86_64): Some devices can't address beyond the first 4GiB of RAM. On x86, this zone would be covered by the Normal zone.
  • Normal Zone: Anything above the DMA zone that doesn't require kernel tricks to be addressable. On x86, this is typically 16MiB to 896MiB. Many kernel operations require that the memory they use come from this zone.
  • Highmem Zone (x86 only): Anything above 896MiB.
Each zone is further divided by the buddy allocator into power-of-2 page-sized chunks (the power is known as the order). The buddy allocator attempts to satisfy an allocation request from a zone's free pool. Over time, this free pool fragments and higher-order allocations begin to fail. The buddyinfo proc file is generated on demand by walking all the free lists.

Say we have just rebooted the machine and have a free pool of 16MiB (the DMA zone). The most sensible thing to do is to split this memory into the largest contiguous blocks available. The largest order is defined at compile time as 11, which means that the largest slice the buddy allocator deals in is a 4MiB block (2^10 * page_size, with 4KiB pages). So the 16MiB DMA zone would initially be split into 4 free blocks.

Here's how we'd service an allocation request for 72KiB:
  1. Round up the allocation request to the next power of 2 (128KiB).
  2. Split a 4MiB chunk into two 2MiB chunks.
  3. Split one 2MiB chunk into two 1MiB chunks.
  4. Continue splitting until we get a 128KiB chunk that we can hand out.
Over time, allocation and free requests will split and merge this pool until we reach a point where a request must fail for the lack of a contiguous block of the required order.

Here's an example of an allocation failure from a Gentoo bug report.

In such cases, the buddyinfo proc file will allow you to view the current fragmentation state of your memory.
Here's a quick python script that will make this data more digestible.
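(A sketch; it assumes 4KiB pages and the /proc/buddyinfo layout shown above.)

    #!/usr/bin/env python
    # Pretty-print /proc/buddyinfo: free block counts per order, plus the
    # amount of memory those blocks cover.

    PAGE_SIZE = 4096  # assumption: 4KiB pages

    def human(nbytes):
        for unit in ('B', 'KiB', 'MiB', 'GiB'):
            if nbytes < 1024:
                return '%d%s' % (nbytes, unit)
            nbytes //= 1024
        return '%dTiB' % nbytes

    with open('/proc/buddyinfo') as f:
        for line in f:
            fields = line.split()
            # e.g. ['Node', '0,', 'zone', 'Normal', '204', '235', ...]
            node, zone = fields[1].rstrip(','), fields[3]
            counts = [int(c) for c in fields[4:]]
            print('Node %s, zone %s' % (node, zone))
            for order, count in enumerate(counts):
                block = PAGE_SIZE * 2 ** order
                print('  order %2d (%6s blocks): %5d free, %s total' %
                      (order, human(block), count, human(count * block)))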

Run against buddyinfo data like the sample pasted earlier on, it reports, for every zone, how many blocks of each order are free and how much memory they add up to.

Wednesday, January 29, 2014

Fifos and persistent readers

I recently worked on a daemon (call it slurper) that persistently read data from syslog via a FIFO (also known as a named pipe). After startup, slurper would work fine for a couple of hours and then stop processing input from the FIFO. The relevant code in slurper boiled down to this:
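(The FIFO path and the process() handler are stand-ins for the real thing.)

    def process(line):
        pass  # stand-in for the real work

    # Read the FIFO forever... or so I thought.
    with open('/var/run/syslog.fifo') as fifo:
        for line in fifo:
            process(line)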

Digging into this mystery revealed that the syslogd daemon was getting EAGAIN errors on the FIFO descriptor. According to man 7 pipe:

      O_NONBLOCK enabled, n <= PIPE_BUF
              If there is room to write n bytes to the pipe, then write(2) succeeds immediately, writing all n bytes; otherwise write(2) fails, with errno set to EAGAIN.

The syslogd daemon was opening the pipe in O_NONBLOCK mode and getting EAGAIN errors, which implied that the pipe was full (man 7 pipe states that the pipe buffer is 64KiB).
Additionally, a `cat` on the FIFO drained the pipe and allowed syslogd to write more content.

All these clues imply that the FIFO has no reader. But how can that be? A check with lsof shows that slurper has an open fd for the named pipe. Digging deeper, an attempt to `cat` slurper's open fd didn't return any data:

cat /proc/$(pgrep slurper)/fd/ # Be careful with this. It will steal data from your pipe/file/socket on a production system

So I decided to whip up a reader that emulates slurper's behaviour (the same read loop as the snippet above).

Strace this script to see which syscalls are being invoked:
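(The script name is illustrative.)

    strace -f -e trace=open,read,close python reader.py

Once the last writer closes its end, the pending read() comes back with 0 bytes, i.e. EOF.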

This reveals that a writer closing its fd causes readers to read an EOF (and, in the case of the block under the context manager, probably exit).
So we have two options:
1) Ugly and kludgy: wrap the context-manager read block in an infinite loop that reopens the file (first sketch below).
2) Super cool trick: open another dummy writer to the FIFO. The kernel sends an EOF only when the last writer closes its fd. Since our dummy writer never closes its fd, readers will never get an EOF when the real writer closes its end (second sketch below).
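Sketches of both options (the FIFO path is illustrative, process() as before):

    import fcntl
    import os

    FIFO = '/var/run/syslog.fifo'

    def process(line):
        pass  # stand-in for the real work

    # Option 1: reopen after every EOF.  open() blocks until a writer
    # shows up again, so this just loops once per writer restart.
    def read_forever_kludge():
        while True:
            with open(FIFO) as fifo:
                for line in fifo:
                    process(line)

    # Option 2: hold our own dummy writer fd.  The kernel only delivers
    # EOF when the *last* writer closes, and our write fd never closes,
    # so the real writer can come and go as it pleases.
    def read_forever():
        fd = os.open(FIFO, os.O_RDONLY | os.O_NONBLOCK)  # won't hang with no writer yet
        dummy = os.open(FIFO, os.O_WRONLY)  # can't block: we are already a reader
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags & ~os.O_NONBLOCK)  # back to blocking reads
        with os.fdopen(fd) as fifo:
            for line in fifo:
                process(line)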

The actual root cause: the syslog daemon was being restarted, and each restart caused it to close and reopen its fds.