
You can use the regular expression engine to parse binary files, especially those for which the struct module alone is inadequate.

import re
from struct import unpack, pack
    
def parse(buf):
    # Compile a regex that can parse a buffer with an arbitrary number of
    # records, each consisting of a short, a null-terminated string,
    # and two more shorts.  Incomplete records at the end of the file
    # will be ignored.  re.DOTALL ensures we treat newline bytes as data.
    r = re.compile(rb"(..)(.*?)\x00(..)(..)", re.DOTALL)

    # packed will be a list of tuples of byte strings:
    # (packed short, string data, packed short, packed short).
    # You can use finditer instead to save memory on a large file, but
    # it yields match objects rather than tuples.
    packed = r.findall(buf)

    # Create an unpacked list of tuples, mirroring the packed list.
    # Perl equivalent: @objlist = unpack("(S Z* S S)*", $buf);
    # Note that we do not need to unpack the string, because its
    # packed and unpacked representations are identical.
    objlist = [(short(x[0]), x[1], short(x[2]), short(x[3])) for x in packed]
        
    # Create a dictionary from the packed list.  The records hold object id,
    # description, and x and y coordinates, and we want to index by id.
    # We could also create it from the unpacked list, of course.
    objdict = {}
    for x in packed:
        obj_id = short(x[0])
        objdict[obj_id] = {"desc": x[1], "x": short(x[2]), "y": short(x[3])}

    return objlist, objdict

# Converts a 2-byte byte string to a little-endian short value.
# unpack returns a tuple, so we grab the first (and only) element.
def short(x):
    return unpack("<H", x)[0]

# Packs the arguments into a byte string that parse() can read,
# for testing.  desc must be a bytes object.
def packobj(obj_id, desc, x, y):
    return pack("<H", obj_id) + desc + b"\x00" + pack("<HH", x, y)


if __name__ == '__main__':

    # Pack test objects into a byte buffer.  Normally, you'd load buf
    # with file data, perhaps with buf = open(filename, "rb").read()
    buf = b""
    buf += packobj(768, b"golden helmet", 3, 4)
    buf += packobj(234, b"windmill", 20, 30)
    # Test inclusion of newline in string
    buf += packobj(35, b"pitcher\nand stone", 1, 2)
    # Also add a bit of garbage at the end,
    # which the parser should ignore.
    buf += b"garbage"

    # Parse buffer into list and dictionary of objects
    olist, odict = parse(buf)
    print(olist)
    print(odict)
    print(odict[35]["desc"])  # should retain the newline

The typical way to parse binary data in Python is to use the unpack function of the struct module. This works well for fixed-width fields but becomes more complicated when you need to parse variable-width fields. Perl's implementation of unpack accepts "*" as a field length, and even allows grouping with parentheses, which mitigates this problem. Python does not currently offer these features. Although you can dynamically generate a format string for unpack with a lot of slicing and calls to calcsize, the resulting code will likely be hard to read and error-prone.
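For comparison, here is a minimal sketch of the fixed-width case that struct handles on its own (the field values are invented for illustration):

```python
import struct

# Fixed-width record: three little-endian shorts.
buf = struct.pack("<HHH", 768, 3, 4)

# unpack handles this directly, because every field width is known
# up front.  There is no format code for "null-terminated string",
# so a record containing one cannot be described by a static format.
rec = struct.unpack("<HHH", buf)
# rec == (768, 3, 4)
```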

Fortunately, in some cases there is a simpler way to do it: use the regular expression engine to grab each field, and use struct.unpack on the results.

First, you construct a regular expression (RE) describing the entire record structure, grouping each field you'd like to extract with parentheses, and compile it.

To create the regular expression, you just have to remember that one character in the RE equals one byte in the record. So, the expression ".." would match any short (2 bytes). To match a variable-width field, the RE engine will have to be able to recognize where the field ends. In a null-terminated string, for example, the field ends with a zero byte. You'd therefore look for any number of characters followed by a null byte: "(.*?)\0". Note the use of the non-greedy qualifier "?" -- this way, we only match up to the first null, rather than the last null in the buffer.
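The greedy versus non-greedy distinction is easiest to see on a tiny buffer containing two null bytes (a sketch for illustration):

```python
import re

buf = b"ab\x00cd\x00"

# Non-greedy: stops at the first null byte.
m = re.match(rb"(.*?)\x00", buf, re.DOTALL)
# m.group(1) == b"ab"

# Greedy: runs all the way to the last null byte in the buffer.
m2 = re.match(rb"(.*)\x00", buf, re.DOTALL)
# m2.group(1) == b"ab\x00cd"
```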

When compiling, make sure to pass the re.DOTALL flag, or "." will refuse to match bytes that happen to equal ASCII '\n', since by default the engine treats them as newlines. Then, you use the findall method of the compiled expression object on your buffer. findall finds all non-overlapping matches, one match for each record. It returns a list of tuples, one for each match; each tuple will contain one element for each field you grouped in the RE.

You still need to unpack the fields in the tuples before using them, since they're still byte strings rather than usable values. Generally, you'll call unpack once for each field, with only one format character. (You can also group multiple consecutive fixed fields in one set of parentheses in the RE, and then unpack them in one call. But that may get confusing.)
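As a sketch of the grouping alternative, two adjacent shorts can share one group and one unpack call (the values here are made up for illustration):

```python
import re
from struct import unpack, pack

# Two consecutive little-endian shorts, captured as one 4-byte group.
buf = pack("<HH", 20, 30)
r = re.compile(rb"(....)", re.DOTALL)
m = r.match(buf)

# One unpack call recovers both fields from the single group.
x, y = unpack("<HH", m.group(1))
# (x, y) == (20, 30)
```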

The code above demonstrates how to unpack a binary file that has an indeterminate number of variable-width records, each consisting of a little-endian short, a null-terminated string, and two more shorts. It drops the resulting values into a list and also into a dictionary.

This technique is useful when your variable-width fields are terminated with a sentinel, such as the zero-terminated strings described above. If your field length is embedded in the data, and you can't use the "p" (Pascal string) modifier, you'll probably have to resort to slicing the buffer up manually.
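A minimal sketch of that manual-slicing fallback, assuming a made-up record format of a one-byte length prefix, that many bytes of string data, and a trailing little-endian short:

```python
from struct import unpack

def parse_length_prefixed(buf):
    # Parse records of: unsigned byte length, string data, short.
    # Incomplete records at the end of the buffer are ignored,
    # mirroring the regex-based parser's behavior.
    records = []
    pos = 0
    while pos < len(buf):
        n = buf[pos]                       # one-byte length prefix
        end = pos + 1 + n + 2
        if end > len(buf):                 # incomplete record at the tail
            break
        s = buf[pos + 1:pos + 1 + n]
        (val,) = unpack("<H", buf[pos + 1 + n:end])
        records.append((s, val))
        pos = end
    return records

# parse_length_prefixed(b"\x02hi\x05\x00") == [(b"hi", 5)]
```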

This technique is also applicable even if your fields are all fixed-width. The findall method will operate on the entire buffer at once with a single regular expression, which saves you from having to dynamically create a long format string encapsulating all your data, or alternatively iterating over slices of the buffer.
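For instance, a buffer of fixed-width records of two shorts each can be parsed with one short pattern rather than a dynamically built format string (values invented for illustration):

```python
import re
from struct import unpack, pack

# Three fixed-width records, two little-endian shorts apiece.
buf = pack("<HH", 1, 2) + pack("<HH", 3, 4) + pack("<HH", 5, 6)

# One findall walks the whole buffer, however many records it holds.
r = re.compile(rb"(..)(..)", re.DOTALL)
pairs = [(unpack("<H", a)[0], unpack("<H", b)[0]) for a, b in r.findall(buf)]
# pairs == [(1, 2), (3, 4), (5, 6)]
```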