Master pulled from openflow.org
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..44ebddd
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,36 @@
+OpenFlow Test Framework
+
+Copyright (c) 2010 The Board of Trustees of The Leland Stanford 
+Junior University
+
+Except where otherwise noted, this software is distributed under
+the OpenFlow Software License.  See 
+http://www.openflowswitch.org/wp/legal/ for current details.
+
+We are making the OpenFlow specification and associated documentation
+(Software) available for public use and benefit with the expectation
+that others will use, modify and enhance the Software and contribute
+those enhancements back to the community. However, since we would like
+to make the Software available for broadest use, with as few
+restrictions as possible permission is hereby granted, free of charge,
+to any person obtaining a copy of this Software to deal in the
+Software under the copyrights without restriction, including without
+limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions: 
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software. 
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
+
+The name and trademarks of copyright holder(s) may NOT be used in
+advertising or publicity pertaining to the Software or any derivatives
+without specific, written prior permission. 
diff --git a/README b/README
index a96286b..5d1b5c5 100644
--- a/README
+++ b/README
@@ -1,3 +1,343 @@
-This is a placeholder for the OFTest README, coming soon.
+OpenFlow Testing Framework
+July, 2010
 
-OFTest is a framework for testing OpenFlow switches.
+Copyright (c) 2010 The Board of Trustees of The Leland Stanford 
+Junior University
+
+Warning
++++++++
+
+    This is still experimental and it requires root privilege to
+    control the dataplane ports.  As a consequence, there may be
+    risks to the machine on which this is running.  Use caution.
+
+    Please see Helpful Notes below.
+
+License
++++++++
+
+    The software included with this distribution is subject to the
+    OpenFlow Switching License as given in the included file LICENSE.
+    Details are also available at:
+
+    http://www.openflow.org/wp/legal
+
+    Other software referenced in this distribution is subject to its
+    respective license.
+
+Getting OFTest
+++++++++++++++
+
+    You can check out OFTest with git with the following command:
+
+    git clone git://openflow.org/oftest
+
+Introduction
+++++++++++++
+
+    This test framework is meant to exercise a candidate OpenFlow
+    switch (the device/switch under test, DUT or SUT).  It acts as a
+    controller, providing the connection point to which the switch
+    connects, and it drives the dataplane ports connected to the
+    switch, sending and receiving packets on them.
+
+    There are two parts to running the test framework:
+
+    * Building the python libraries that support the OF protocol
+    * Running oft, the main entry point of the test framework
+
+    Normally, log output from oft is sent to the file oft.log, but it
+    can be redirected to the console by specifying --log-file="".
+ 
+Quick Start
++++++++++++
+
+    You need to have Python setup tools and Scapy installed on your
+    system.  See 'Pre-requisites' below.
+
+    Make sure your switch is running and trying to connect to a
+    controller on the machine where you're running oft (normally port
+    6633).  See below regarding run_switch.py for a script that starts 
+    up a software switch on the test host.
+
+    Currently, switches must be running version 1.0 of OpenFlow. 
+
+      # git clone yuba:/usr/local/git/openflow-projects/oftest
+      # cd oftest/tools/munger
+      # make install
+      # cd ../../tests
+         Make sure the switch you want to test is running --
+         see (4) below for the reference switch example.
+      # ./oft --list
+      # sudo ./oft
+      # sudo ./oft --verbose --log-file=""    
+      # sudo ./oft --test-spec=<mod> --platform=remote --host=...
+
+Longer Start
+++++++++++++
+
+    1.  Pre-requisites:
+        * An OF switch instance to test (see 4 below)
+        * Root privilege on host running oft
+        * Switch running OpenFlow 1.0 and attempting to connect 
+          to a controller on the machine running oft.
+        * Python 2.5 (platforms that use only eth interfaces can
+          run with Python 2.4)
+        * Python setup tools (e.g.: sudo apt-get install python-setuptools)
+        * oftest checked out (called <oftest> here)
+        * scapy installed:  http://www.secdev.org/projects/scapy/
+          'sudo apt-get install scapy' should work on Debian.
+        * tcpdump installed (optional, but scapy will complain if it's
+          not there)
+        * Doxygen and doxypy for document generation (optional)
+        * lint for source checking (optional)
+
+    2.  Build the OpenFlow Python message classes
+
+        Important:  The OF version used by the controller is based on 
+        the file in <oftest>/tools/pylibopenflow/include/openflow.h
+        This is currently the 1.0 release file.
+
+        cd <oftest>/tools/munger
+        make install
+
+        This places files in <oftest>/src/python/oftest/src and then
+        calls setuptools to install them on the local host.
+
+    3.  Edit configuration if necessary
+        Local platforms work with veth interface pairs and default to
+        four ports.  You can adjust this a bit with the command line
+        parameters port_count, base_of_port and base_if_index.
+ 
+        Starting from remote.py as a simple example, you can add your
+        own <platform>.py file and then have it imported with
+        --platform=<platform> on the command line.  This is meant to 
+        allow you to test remote switches attempting to connect to a
+        controller on a network accessible to the test host.
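    Starting from remote.py, a custom platform file is mostly a port
    map plus a hook that folds it into the test configuration.  The
    sketch below is hypothetical: the hook and variable names are
    assumptions modeled loosely on remote.py, so check your checkout's
    remote.py for the framework's actual names.

```python
# Hypothetical <platform>.py sketch.  Names below (remote_port_map,
# platform_config_update) are illustrative assumptions, not a
# guaranteed match for the framework's real platform API.

# Map OpenFlow port numbers to the OS interfaces wired to the switch.
remote_port_map = {
    1: "eth1",
    2: "eth2",
    3: "eth3",
    4: "eth4",
}

def platform_config_update(config):
    """Merge this platform's settings into the test config dict."""
    config["port_map"] = remote_port_map.copy()
```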
+
+    4.  Start the switch to test
+        The switch must be running and actively attempting to 
+        connect to a controller on the test host at the port number
+        used by oft (6633 by default, or specified as --port=<n> as
+        an argument to oft).
+
+        If you're new to the test environment and want to check its 
+        sanity, you can do the following.  This requires that
+        your host kernel supports virtual ethernet interfaces.  This
+        is best done in a window separate from where you will run oft.
+ 
+        4A. Check out openflow (preferably at the same level as oftest):
+            git clone git://openflowswitch.org/openflow.git
+        4B. cd openflow; ./boot.sh; ./configure; make
+        4C. cd ../oftest/tests
+        4D. Run the switch startup script:
+            sudo ./run_switch.py
+            Now you can run oft (see below).
+        4E. Use --help to see command line switches.  If you use a port
+            number other than the default, make sure you use the same
+            one for the switch as for oft.
+        4F. Use control-C to terminate the switch daemons.
+        4G. To clean up the virtual ethernet interfaces, use
+            sudo rmmod veth
+
+    5.  Run oft
+        See Warning above; requires sudo to control the dataplane
+        cd <oftest>/tests
+        sudo ./oft --help
+
+Helpful Note: Rebuilding
+++++++++++++++++++++++++
+
+    If you ever make a change to the code in src/python/oftest...
+    you must rebuild and reinstall the source code.  See Step (2)
+    in the Longer Start above.
+
+    If you see
+
+        WARNING:..:Could not import file ...
+
+    There is likely a Python error in the file.  Try invoking the
+    Python cli directly and importing the file to get more
+    information.
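    The import check described above can be scripted rather than done
    by hand in the interpreter.  A small sketch (the module name
    "basic" is just a stand-in for whichever test file failed to load):

```python
# Try importing a test module the way oft would, and surface the
# underlying exception instead of a bare "Could not import" warning.
import importlib

def try_import(mod_name):
    """Return the module, or the exception that blocked the import."""
    try:
        return importlib.import_module(mod_name)
    except Exception as exc:  # syntax errors surface here as exceptions
        return exc
```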
+
+Helpful Note: Recovering From Crash
++++++++++++++++++++++++++++++++++++
+
+    If the test script, oft, becomes unresponsive, you may find that
+    ^C does not break out of the script.  In this case you have two
+    options:
+
+    * Use ^Z to interrupt the script and return to the shell prompt.
+    * Start another terminal window to the same machine.
+
+    In either case, you then need to kill the process that is hung.
+    Use the following commands:
+
+        me@host> ps aux | grep oft
+        root         4  0.0      S<   Jul07   0:00 [ksoftirqd/0]
+        ...
+        root     14066  3.2      Tl   09:27   0:00 python ./oft ...
+        me       14074  0.0      R+   09:28   0:00 grep oft
+
+        me@host> sudo kill -9 14066
+
+    where 14066 is the process ID of the hung process.  (Replace it
+    with the PID for your process.)
+
+    This is still preliminary work and there are bugs in the framework
+    that need to be ironed out.  Please report any issues to
+    dtalayco@stanford.edu.
+
+
+OFT Command Line Options
+++++++++++++++++++++++++
+
+    Here is a summary of the oft command line options.  Use --help to see
+    the long and short command option names.
+
+    platform          : String identifying the target platform
+    controller_host   : Host on which test controller is running (for sockets)
+    controller_port   : Port on which test controller listens for switch cxn
+    port_count        : Number of ports in dataplane
+    base_of_port      : Base OpenFlow port number in dataplane
+    base_if_index     : Base OS network interface for dataplane
+    test_dir          : Directory to search for test files (default .)
+    test_spec         : Specification of test(s) to run
+    log_file          : Filename for test logging
+    list              : Boolean:  List all tests and exit
+    debug             : String giving debug level (info, warning, error...)
+    verbose           : Same as debug=verbose
+
+Overview
+++++++++
+
+    The directory structure is currently:
+
+     <oftest>
+         |-- doc
+         |-- src
+         |   `-- python
+         |       `-- oftest
+         |-- tests
+         |   `-- oft and files with test cases
+         `-- tools
+             |-- munger
+             `-- pylibopenflow
+
+    The tools directory is what processes the OpenFlow header
+    files to produce Python classes representing OpenFlow messages.
+    The results are placed in src/python/oftest and currently
+    include:
+
+        message.py:      The main API providing OF message classes
+        error.py:        Subclasses for error messages
+        action.py:       Subclasses for action specification
+        cstruct.py:      Direct representation of C structures in Python
+        class_maps.py:   Additional info about C structures
+
+    In addition, the following Python files are present in 
+    src/python/oftest:
+
+        controller.py:   The controller representation
+        dataplane.py:    The dataplane representation
+        action_list.py:  Action list class
+        netutils.py:     e.g., set promisc on sockets
+        ofutils.py:      Utilities related to OpenFlow messages
+        oft_assert.py:   Test framework level assertion
+
+    Tests are run from the tests directory.  The file oft is the
+    top level entry point for tests.  Try ./oft --help for more
+    information.
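    The "set promisc on sockets" job attributed to netutils.py boils
    down to a pair of interface-flag ioctls on Linux.  A rough,
    hypothetical sketch (the real netutils.py implementation may
    differ; the ioctl numbers are Linux-specific and the call needs
    root):

```python
# Sketch of a "set promiscuous mode" helper on Linux.  Illustrative
# only; netutils.py in the tree is the authoritative version.
import fcntl
import socket
import struct

SIOCGIFFLAGS = 0x8913   # get interface flags (Linux)
SIOCSIFFLAGS = 0x8914   # set interface flags (Linux)
IFF_PROMISC = 0x100     # receive all packets, not just those for us

def set_promisc(sock, ifname):
    """Turn on IFF_PROMISC for ifname via the socket's fd (needs root)."""
    # Read the current flags out of the kernel's ifreq structure...
    ifr = fcntl.ioctl(sock.fileno(), SIOCGIFFLAGS,
                      struct.pack("16sH", ifname.encode(), 0))
    flags = struct.unpack("16sH", ifr)[1] | IFF_PROMISC
    # ...then write them back with the promiscuous bit set.
    fcntl.ioctl(sock.fileno(), SIOCSIFFLAGS,
                struct.pack("16sH", ifname.encode(), flags))
```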
+
+Important Notes
++++++++++++++++
+
+    1.  If you edit any of the files in src/python/oftest or any of the
+    scripts in tools/munger/scripts, you MUST re-run make install.  This
+    is easy to forget.
+
+    2.  If you're running into issues with transactions, and it appears that
+    OpenFlow messages aren't quite right, start by looking at any length
+    fields in the packets.  With the local platform, you can use wireshark
+    on the loopback interface as well as the dataplane veth interfaces.
+
+Adding Your Own Test Cases
+++++++++++++++++++++++++++
+
+    Check the online tutorial:  
+        http://openflow.org/wk/index.php/OFTestTutorial
+
+    You can:
+
+        * Add cases to an existing file
+        * Add a new file
+
+    If you add cases to an existing file, each case should be its own
+    class.  It must inherit from unittest.TestCase or one of its 
+    derivatives and define runTest (that's how test cases are discovered).
+
+    If you add a new file, it must implement a top level function called
+    test_set_init which takes a configuration dictionary.  See basic.py
+    for an example.  The main point of this is to pass the port map 
+    object to the test cases.  But you can access any configuration
+    parameters this way.  Each test case in the new file must derive
+    from unittest.TestCase.
+
+    CONVENTIONS:
+
+    The first line of the doc string for a file and for a test class is 
+    displayed in the list command.  Please keep it clear and under 50
+    characters.
+
+
+Using CentOS/RHEL
++++++++++++++++++
+
+    CentOS/RHEL have two challenges:  they are very tied to Python 2.4
+    (and Scapy requires Python 2.5 for its latest version) and they
+    require a kernel upgrade to use veth pairs for local platform
+    testing.  
+
+    If you only need to control eth interfaces for a remote platform,
+    you can use CentOS/RHEL without major disruption.  The key is to 
+    download scapy-1.2 from the following link:
+
+    wget http://hg.secdev.org/scapy/raw-file/v1.2.0.2/scapy.py
+
+    See: http://www.dirk-loss.de/scapy-doc/installation.html#installing-scapy-v1-2
+    for more info.
+
+    Copy scapy.py to /usr/lib/python2.4/site-packages
+
+    If you hit an error related to importing scapy.all, change the
+    import to refer to scapy (not scapy.all); see parse.py for
+    examples.
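    One way to support both layouts at once is a guarded import; this
    is a sketch of the idea, not necessarily how parse.py does it:

```python
# Prefer the scapy.all package layout, fall back to the flat
# scapy-1.2 single-file module, and degrade gracefully if neither
# is installed.
try:
    from scapy.all import Ether, IP   # scapy >= 2.x package layout
except ImportError:
    try:
        from scapy import Ether, IP   # flat scapy-1.2 module
    except ImportError:
        Ether = IP = None             # scapy not installed at all
```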
+
+
+Other Info
+++++++++++
+
+    * Build doc with
+      + cd <oftest>/tools/munger
+      + make doc
+    Places the results in <oftest>/doc/html
+    If you have problems, check the install location of doxypy.py and
+    that it is set correctly in <oftest>/doc/Doxyfile
+
+    * Run lint on sources
+      + cd <oftest>/tools/munger
+      + make lint
+    Places results in <oftest>/lint/*.log
+    The file controller.log currently reports some errors
+
+
+To Do
++++++
+
+    * Need to have an overview of the components of the test, how they
+      connect and how they are managed by the test framework.
+    * See the Regression Test component on trac:
+      http://www.openflowswitch.org/bugs/openflow
+      http://www.openflowswitch.org/bugs/openflow/query?component=Regression+test+suite
+
+    * Make the framework work with OF versions other than 1.0?
+
diff --git a/doc/Doxyfile b/doc/Doxyfile
new file mode 100644
index 0000000..85cfc63
--- /dev/null
+++ b/doc/Doxyfile
@@ -0,0 +1,1417 @@
+# Doxyfile 1.5.6
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project
+#
+# All text after a hash (#) is considered a comment and will be ignored
+# The format is:
+#       TAG = value [value, ...]
+# For lists items can also be appended using:
+#       TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ")
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file 
+# that follow. The default is UTF-8 which is also the encoding used for all 
+# text before the first occurrence of this tag. Doxygen uses libiconv (or the 
+# iconv built into libc) for the transcoding. See 
+# http://www.gnu.org/software/libiconv for the list of possible encodings.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded 
+# by quotes) that should identify the project.
+
+PROJECT_NAME           = 
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number. 
+# This could be handy for archiving the generated documentation or 
+# if some version control system is used.
+
+PROJECT_NUMBER         = 
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) 
+# base path where the generated documentation will be put. 
+# If a relative path is entered, it will be relative to the location 
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = 
+
+# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create 
+# 4096 sub-directories (in 2 levels) under the output directory of each output 
+# format and will distribute the generated files over these directories. 
+# Enabling this option can be useful when feeding doxygen a huge amount of 
+# source files, where putting all generated files in the same directory would 
+# otherwise cause performance problems for the file system.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all 
+# documentation generated by doxygen is written. Doxygen will use this 
+# information to generate all constant output in the proper language. 
+# The default language is English, other supported languages are: 
+# Afrikaans, Arabic, Brazilian, Catalan, Chinese, Chinese-Traditional, 
+# Croatian, Czech, Danish, Dutch, Farsi, Finnish, French, German, Greek, 
+# Hungarian, Italian, Japanese, Japanese-en (Japanese with English messages), 
+# Korean, Korean-en, Lithuanian, Norwegian, Macedonian, Persian, Polish, 
+# Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, Swedish, 
+# and Ukrainian.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will 
+# include brief member descriptions after the members that are listed in 
+# the file and class documentation (similar to JavaDoc). 
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend 
+# the brief description of a member or function before the detailed description. 
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the 
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator 
+# that is used to form the text in various listings. Each string 
+# in this list, if found as the leading text of the brief description, will be 
+# stripped from the text and the result after processing the whole list, is 
+# used as the annotated text. Otherwise, the brief description is used as-is. 
+# If left blank, the following values are used ("$name" is automatically 
+# replaced with the name of the entity): "The $name class" "The $name widget" 
+# "The $name file" "is" "provides" "specifies" "contains" 
+# "represents" "a" "an" "the"
+
+ABBREVIATE_BRIEF       = 
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then 
+# Doxygen will generate a detailed section even if there is only a brief 
+# description.
+
+ALWAYS_DETAILED_SEC    = NO
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all 
+# inherited members of a class in the documentation of that class as if those 
+# members were ordinary class members. Constructors, destructors and assignment 
+# operators of the base classes will not be shown.
+
+INLINE_INHERITED_MEMB  = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full 
+# path before files name in the file list and in the header files. If set 
+# to NO the shortest path that makes the file name unique will be used.
+
+FULL_PATH_NAMES        = YES
+
+# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag 
+# can be used to strip a user-defined part of the path. Stripping is 
+# only done if one of the specified strings matches the left-hand part of 
+# the path. The tag can be used to show relative paths in the file list. 
+# If left blank the directory from which doxygen is run is used as the 
+# path to strip.
+
+STRIP_FROM_PATH        = 
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of 
+# the path mentioned in the documentation of a class, which tells 
+# the reader which header file to include in order to use a class. 
+# If left blank only the name of the header file containing the class 
+# definition is used. Otherwise one should specify the include paths that 
+# are normally passed to the compiler using the -I flag.
+
+STRIP_FROM_INC_PATH    = 
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter 
+# (but less readable) file names. This can be useful if your file system 
+# doesn't support long names like on DOS, Mac, or CD-ROM.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen 
+# will interpret the first line (until the first dot) of a JavaDoc-style 
+# comment as the brief description. If set to NO, the JavaDoc 
+# comments will behave just like regular Qt-style comments 
+# (thus requiring an explicit @brief command for a brief description.)
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then Doxygen will 
+# interpret the first line (until the first dot) of a Qt-style 
+# comment as the brief description. If set to NO, the comments 
+# will behave just like regular Qt-style comments (thus requiring 
+# an explicit \brief command for a brief description.)
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen 
+# treat a multi-line C++ special comment block (i.e. a block of //! or /// 
+# comments) as a brief description. This used to be the default behaviour. 
+# The new default is to treat a multi-line C++ comment block as a detailed 
+# description. Set this tag to YES if you prefer the old behaviour instead.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the DETAILS_AT_TOP tag is set to YES then Doxygen 
+# will output the detailed description near the top, like JavaDoc.
+# If set to NO, the detailed description appears after the member 
+# documentation.
+
+DETAILS_AT_TOP         = NO
+
+# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented 
+# member inherits the documentation from any documented member that it 
+# re-implements.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce 
+# a new page for each member. If set to NO, the documentation of a member will 
+# be part of the file/class/namespace that contains it.
+
+SEPARATE_MEMBER_PAGES  = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab. 
+# Doxygen uses this value to replace tabs by spaces in code fragments.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that acts 
+# as commands in the documentation. An alias has the form "name=value". 
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to 
+# put the command \sideeffect (or @sideeffect) in the documentation, which 
+# will result in a user-defined paragraph with heading "Side Effects:". 
+# You can put \n's in the value part of an alias to insert newlines.
+
+ALIASES                = 
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C 
+# sources only. Doxygen will then generate output that is more tailored for C. 
+# For instance, some of the names that are used will be different. The list 
+# of all members will be omitted, etc.
+
+OPTIMIZE_OUTPUT_FOR_C  = NO
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java 
+# sources only. Doxygen will then generate output that is more tailored for 
+# Java. For instance, namespaces will be presented as packages, qualified 
+# scopes will look different, etc.
+
+OPTIMIZE_OUTPUT_JAVA   = YES
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran 
+# sources only. Doxygen will then generate output that is more tailored for 
+# Fortran.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL 
+# sources. Doxygen will then generate output that is tailored for 
+# VHDL.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want 
+# to include (a tag file for) the STL sources as input, then you should 
+# set this tag to YES in order to let doxygen match functions declarations and 
+# definitions whose arguments contain STL classes (e.g. func(std::string); v.s. 
+# func(std::string) {}). This also make the inheritance and collaboration 
+# diagrams that involve STL classes more complete and accurate.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+
+CPP_CLI_SUPPORT        = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip sources only. 
+# Doxygen will parse them like normal C++ but will assume all classes use public 
+# instead of private inheritance when no explicit protection keyword is present.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate getter 
+# and setter methods for a property. Setting this option to YES (the default) 
+# will make doxygen to replace the get and set methods by a property in the 
+# documentation. This will only work if the methods are indeed getting or 
+# setting a simple type. If this is not the case, or you want to show the 
+# methods anyway, you should set this option to NO.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC 
+# tag is set to YES, then doxygen will reuse the documentation of the first 
+# member in the group (if any) for the other members of the group. By default 
+# all members of a group must be documented explicitly.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES (the default) to allow class member groups of 
+# the same type (for instance a group of public functions) to be put as a 
+# subgroup of that type (e.g. under the Public Functions section). Set it to 
+# NO to prevent subgrouping. Alternatively, this can be done per class using 
+# the \nosubgrouping command.
+
+SUBGROUPING            = YES
+
+# When TYPEDEF_HIDES_STRUCT is enabled, a typedef of a struct, union, or enum 
+# is documented as struct, union, or enum with the name of the typedef. So 
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct 
+# with name TypeT. When disabled the typedef will appear as a member of a file, 
+# namespace, or class. And the struct will be named TypeS. This can typically 
+# be useful for C code in case the coding convention dictates that all compound 
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in 
+# documentation are documented, even if no documentation was available. 
+# Private class members and static file members will be hidden unless 
+# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES all private members of a class 
+# will be included in the documentation.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_STATIC tag is set to YES all static members of a file 
+# will be included in the documentation.
+
+EXTRACT_STATIC         = NO
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs) 
+# defined locally in source files will be included in the documentation. 
+# If set to NO only classes defined in header files are included.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. When set to YES local 
+# methods, which are defined in the implementation section but not in 
+# the interface are included in the documentation. 
+# If set to NO (the default) only methods in the interface are included.
+
+EXTRACT_LOCAL_METHODS  = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be 
+# extracted and appear in the documentation as a namespace called 
+# 'anonymous_namespace{file}', where file will be replaced with the base 
+# name of the file that contains the anonymous namespace. By default 
+# anonymous namespace are hidden.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all 
+# undocumented members of documented classes, files or namespaces. 
+# If set to NO (the default) these members will be included in the 
+# various overviews, but no documentation section is generated. 
+# This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_MEMBERS     = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all 
+# undocumented classes that are normally visible in the class hierarchy. 
+# If set to NO (the default) these classes will be included in the various 
+# overviews. This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_CLASSES     = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all 
+# friend (class|struct|union) declarations. 
+# If set to NO (the default) these declarations will be included in the 
+# documentation.
+
+HIDE_FRIEND_COMPOUNDS  = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any 
+# documentation blocks found inside the body of a function. 
+# If set to NO (the default) these blocks will be appended to the 
+# function's detailed documentation block.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation 
+# that is typed after a \internal command is included. If the tag is set 
+# to NO (the default) then the documentation will be excluded. 
+# Set it to YES to include the internal documentation.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate 
+# file names in lower-case letters. If set to YES upper-case letters are also 
+# allowed. This is useful if you have classes or files whose names only differ 
+# in case and if your file system supports case sensitive file names. Windows 
+# and Mac users are advised to set this option to NO.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen 
+# will show members with their full class and namespace scopes in the 
+# documentation. If set to YES the scope will be hidden.
+
+HIDE_SCOPE_NAMES       = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen 
+# will put a list of the files that are included by a file in the documentation 
+# of that file.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the INLINE_INFO tag is set to YES (the default) then a tag [inline] 
+# is inserted in the documentation for inline members.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen 
+# will sort the (detailed) documentation of file and class members 
+# alphabetically by member name. If set to NO the members will appear in 
+# declaration order.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the 
+# brief documentation of file, namespace and class members alphabetically 
+# by member name. If set to NO (the default) the members will appear in 
+# declaration order.
+
+SORT_BRIEF_DOCS        = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the 
+# hierarchy of group names into alphabetical order. If set to NO (the default) 
+# the group names will appear in their defined order.
+
+SORT_GROUP_NAMES       = NO
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be 
+# sorted by fully-qualified names, including namespaces. If set to 
+# NO (the default), the class list will be sorted only by class name, 
+# not including the namespace part. 
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the 
+# alphabetical list.
+
+SORT_BY_SCOPE_NAME     = NO
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or 
+# disable (NO) the todo list. This list is created by putting \todo 
+# commands in the documentation.
+
+GENERATE_TODOLIST      = YES
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or 
+# disable (NO) the test list. This list is created by putting \test 
+# commands in the documentation.
+
+GENERATE_TESTLIST      = YES
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or 
+# disable (NO) the bug list. This list is created by putting \bug 
+# commands in the documentation.
+
+GENERATE_BUGLIST       = YES
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or 
+# disable (NO) the deprecated list. This list is created by putting 
+# \deprecated commands in the documentation.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional 
+# documentation sections, marked by \if sectionname ... \endif.
+
+ENABLED_SECTIONS       = 
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines 
+# the initial value of a variable or define consists of for it to appear in 
+# the documentation. If the initializer consists of more lines than specified 
+# here it will be hidden. Use a value of 0 to hide initializers completely. 
+# The appearance of the initializer of individual variables and defines in the 
+# documentation can be controlled using \showinitializer or \hideinitializer 
+# command in the documentation regardless of this setting.
+
+MAX_INITIALIZER_LINES  = 30
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated 
+# at the bottom of the documentation of classes and structs. If set to YES the 
+# list will mention the files that were used to generate the documentation.
+
+SHOW_USED_FILES        = YES
+
+# If the sources in your project are distributed over multiple directories 
+# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy 
+# in the documentation. The default is NO.
+
+SHOW_DIRECTORIES       = NO
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page.
+# This will remove the Files entry from the Quick Index and from the 
+# Folder Tree View (if specified). The default is YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the 
+# Namespaces page.  This will remove the Namespaces entry from the Quick Index
+# and from the Folder Tree View (if specified). The default is YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that 
+# doxygen should invoke to get the current version for each file (typically from 
+# the version control system). Doxygen will invoke the program by executing (via 
+# popen()) the command <command> <input-file>, where <command> is the value of 
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file 
+# provided by doxygen. Whatever the program writes to standard output 
+# is used as the file version. See the manual for examples.
+
+FILE_VERSION_FILTER    = 
+
+#---------------------------------------------------------------------------
+# configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated 
+# by doxygen. Possible values are YES and NO. If left blank NO is used.
+
+QUIET                  = NO
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are 
+# generated by doxygen. Possible values are YES and NO. If left blank 
+# NO is used.
+
+WARNINGS               = YES
+
+# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings 
+# for undocumented members. If EXTRACT_ALL is set to YES then this flag will 
+# automatically be disabled.
+
+WARN_IF_UNDOCUMENTED   = YES
+
+# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for 
+# potential errors in the documentation, such as not documenting some 
+# parameters in a documented function, or documenting parameters that 
+# don't exist or using markup commands wrongly.
+
+WARN_IF_DOC_ERROR      = YES
+
+# This WARN_NO_PARAMDOC option can be enabled to get warnings for 
+# functions that are documented, but have no documentation for their parameters 
+# or return value. If set to NO (the default) doxygen will only warn about 
+# wrong or incomplete parameter documentation, but not about the absence of 
+# documentation.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that 
+# doxygen can produce. The string should contain the $file, $line, and $text 
+# tags, which will be replaced by the file and line number from which the 
+# warning originated and the warning text. Optionally the format may contain 
+# $version, which will be replaced by the version of the file (if it could 
+# be obtained via FILE_VERSION_FILTER)
+
+WARN_FORMAT            = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning 
+# and error messages should be written. If left blank the output is written 
+# to stderr.
+
+WARN_LOGFILE           = 
+
+#---------------------------------------------------------------------------
+# configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag can be used to specify the files and/or directories that contain 
+# documented source files. You may enter file names like "myfile.cpp" or 
+# directories like "/usr/src/myproject". Separate the files or directories 
+# with spaces.
+
+INPUT                  = "../src/python/oftest"  "../tests"
+
+# This tag can be used to specify the character encoding of the source files 
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is 
+# also the default input encoding. Doxygen uses libiconv (or the iconv built 
+# into libc) for the transcoding. See http://www.gnu.org/software/libiconv for 
+# the list of possible encodings.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the 
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp 
+# and *.h) to filter out the source-files in the directories. If left 
+# blank the following patterns are tested: 
+# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx 
+# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.py *.f90
+
+FILE_PATTERNS          = "*.py"
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories 
+# should be searched for input files as well. Possible values are YES and NO. 
+# If left blank NO is used.
+
+RECURSIVE              = NO
+
+# The EXCLUDE tag can be used to specify files and/or directories that should 
+# be excluded from the INPUT source files. This way you can easily exclude a 
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+
+EXCLUDE                = 
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or 
+# directories that are symbolic links (a Unix filesystem feature) are excluded 
+# from the input.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the 
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude 
+# certain files from those directories. Note that the wildcards are matched 
+# against the file with absolute path, so to exclude all test directories 
+# for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       = 
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names 
+# (namespaces, classes, functions, etc.) that should be excluded from the 
+# output. The symbol name can be a fully qualified name, a word, or if the 
+# wildcard * is used, a substring. Examples: ANamespace, AClass, 
+# AClass::ANamespace, ANamespace::*Test
+
+EXCLUDE_SYMBOLS        = 
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or 
+# directories that contain example code fragments that are included (see 
+# the \include command).
+
+EXAMPLE_PATH           = 
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the 
+# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp 
+# and *.h) to filter out the source-files in the directories. If left 
+# blank all files are included.
+
+EXAMPLE_PATTERNS       = 
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be 
+# searched for input files to be used with the \include or \dontinclude 
+# commands irrespective of the value of the RECURSIVE tag. 
+# Possible values are YES and NO. If left blank NO is used.
+
+EXAMPLE_RECURSIVE      = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or 
+# directories that contain images that are included in the documentation (see 
+# the \image command).
+
+IMAGE_PATH             = 
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should 
+# invoke to filter for each input file. Doxygen will invoke the filter program 
+# by executing (via popen()) the command <filter> <input-file>, where <filter> 
+# is the value of the INPUT_FILTER tag, and <input-file> is the name of an 
+# input file. Doxygen will then use the output that the filter program writes 
+# to standard output.  If FILTER_PATTERNS is specified, this tag will be 
+# ignored.
+
+INPUT_FILTER           = "python /usr/bin/doxypy.py"
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern 
+# basis.  Doxygen will compare the file name with each pattern and apply the 
+# filter if there is a match.  The filters are a list of the form: 
+# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further 
+# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER 
+# is applied to all files.
+
+FILTER_PATTERNS        = 
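+# For illustration only (this configuration instead routes every file through
+# INPUT_FILTER above): a hypothetical per-pattern mapping that applied doxypy
+# only to Python sources would look like
+#   FILTER_PATTERNS = "*.py=python /usr/bin/doxypy.py"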
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using 
+# INPUT_FILTER) will be used to filter the input files when producing source 
+# files to browse (i.e. when SOURCE_BROWSER is set to YES).
+
+FILTER_SOURCE_FILES    = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will 
+# be generated. Documented entities will be cross-referenced with these sources. 
+# Note: To get rid of all source code in the generated output, make sure also 
+# VERBATIM_HEADERS is set to NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body 
+# of functions and classes directly in the documentation.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct 
+# doxygen to hide any special comment blocks from generated source code 
+# fragments. Normal C and C++ comments will always remain visible.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES 
+# then for each documented function all documented 
+# functions referencing it will be listed.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES 
+# then for each documented function all documented entities 
+# called/used by that function will be listed.
+
+REFERENCES_RELATION    = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES (the default)
+# and SOURCE_BROWSER tag is set to YES, then the hyperlinks from
+# functions in REFERENCES_RELATION and REFERENCED_BY_RELATION lists will
+# link to the source code.  Otherwise they will link to the documentation.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code 
+# will point to the HTML generated by the htags(1) tool instead of doxygen 
+# built-in source browser. The htags tool is part of GNU's global source 
+# tagging system (see http://www.gnu.org/software/global/global.html). You 
+# will need version 4.8.6 or higher.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen 
+# will generate a verbatim copy of the header file for each class for 
+# which an include is specified. Set to NO to disable this.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index 
+# of all compounds will be generated. Enable this if the project 
+# contains a lot of classes, structs, unions or interfaces.
+
+ALPHABETICAL_INDEX     = NO
+
+# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then 
+# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns 
+# in which this list will be split (can be a number in the range [1..20])
+
+COLS_IN_ALPHA_INDEX    = 5
+
+# In case all classes in a project start with a common prefix, all 
+# classes will be put under the same header in the alphabetical index. 
+# The IGNORE_PREFIX tag can be used to specify one or more prefixes that 
+# should be ignored while generating the index headers.
+
+IGNORE_PREFIX          = 
+
+#---------------------------------------------------------------------------
+# configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES (the default) Doxygen will 
+# generate HTML output.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. 
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
+# put in front of it. If left blank `html' will be used as the default path.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for 
+# each generated HTML page (for example: .htm,.php,.asp). If it is left blank 
+# doxygen will generate files with .html extension.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a personal HTML header for 
+# each generated HTML page. If it is left blank doxygen will generate a 
+# standard header.
+
+HTML_HEADER            = 
+
+# The HTML_FOOTER tag can be used to specify a personal HTML footer for 
+# each generated HTML page. If it is left blank doxygen will generate a 
+# standard footer.
+
+HTML_FOOTER            = 
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading 
+# style sheet that is used by each HTML page. It can be used to 
+# fine-tune the look of the HTML output. If the tag is left blank doxygen 
+# will generate a default style sheet. Note that doxygen will try to copy 
+# the style sheet file to the HTML output directory, so don't put your own 
+# stylesheet in the HTML output directory as well, or it will be erased!
+
+HTML_STYLESHEET        = 
+
+# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes, 
+# files or namespaces will be aligned in HTML using tables. If set to 
+# NO a bullet list will be used.
+
+HTML_ALIGN_MEMBERS     = YES
+
+# If the GENERATE_HTMLHELP tag is set to YES, additional index files 
+# will be generated that can be used as input for tools like the 
+# Microsoft HTML help workshop to generate a compiled HTML help file (.chm) 
+# of the generated HTML documentation.
+
+GENERATE_HTMLHELP      = NO
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files 
+# will be generated that can be used as input for Apple's Xcode 3 
+# integrated development environment, introduced with OSX 10.5 (Leopard). 
+# To create a documentation set, doxygen will generate a Makefile in the 
+# HTML output directory. Running make will produce the docset in that 
+# directory and running "make install" will install the docset in 
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find 
+# it at startup.
+
+GENERATE_DOCSET        = NO
+
+# When GENERATE_DOCSET tag is set to YES, this tag determines the name of the 
+# feed. A documentation feed provides an umbrella under which multiple 
+# documentation sets from a single provider (such as a company or product suite) 
+# can be grouped.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# When GENERATE_DOCSET tag is set to YES, this tag specifies a string that 
+# should uniquely identify the documentation set bundle. This should be a 
+# reverse domain-name style string, e.g. com.mycompany.MyDocSet. Doxygen 
+# will append .docset to the name.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML 
+# documentation will contain sections that can be hidden and shown after the 
+# page has loaded. For this to work a browser that supports 
+# JavaScript and DHTML is required (for instance Mozilla 1.0+, Firefox, 
+# Netscape 6.0+, Internet Explorer 5.0+, Konqueror, or Safari).
+
+HTML_DYNAMIC_SECTIONS  = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can 
+# be used to specify the file name of the resulting .chm file. You 
+# can add a path in front of the file if the result should not be 
+# written to the html output directory.
+
+CHM_FILE               = 
+
+# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can 
+# be used to specify the location (absolute path including file name) of 
+# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run 
+# the HTML help compiler on the generated index.hhp.
+
+HHC_LOCATION           = 
+
+# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag 
+# controls if a separate .chi index file is generated (YES) or that 
+# it should be included in the master .chm file (NO).
+
+GENERATE_CHI           = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_INDEX_ENCODING
+# is used to encode HtmlHelp index (hhk), content (hhc) and project file
+# content.
+
+CHM_INDEX_ENCODING     = 
+
+# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag 
+# controls whether a binary table of contents is generated (YES) or a 
+# normal table of contents (NO) in the .chm file.
+
+BINARY_TOC             = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members 
+# to the contents of the HTML help documentation and to the tree view.
+
+TOC_EXPAND             = NO
+
+# The DISABLE_INDEX tag can be used to turn on/off the condensed index at 
+# top of each HTML page. The value NO (the default) enables the index and 
+# the value YES disables it.
+
+DISABLE_INDEX          = NO
+
+# This tag can be used to set the number of enum values (range [1..20]) 
+# that doxygen will group on one line in the generated HTML documentation.
+
+ENUM_VALUES_PER_LINE   = 4
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information.
+# If the tag value is set to FRAME, a side panel will be generated
+# containing a tree-like index structure (just like the one that 
+# is generated for HTML Help). For this to work a browser that supports 
+# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+, 
+# Netscape 6.0+, Internet Explorer 5.0+, or Konqueror). Windows users are 
+# probably better off using the HTML help feature. Other possible values 
+# for this tag are: HIERARCHIES, which will generate the Groups, Directories,
+# and Class Hierarchy pages using a tree view instead of an ordered list;
+# ALL, which combines the behavior of FRAME and HIERARCHIES; and NONE, which
+# disables this behavior completely. For backwards compatibility with previous
+# releases of Doxygen, the values YES and NO are equivalent to FRAME and NONE
+# respectively.
+
+GENERATE_TREEVIEW      = NONE
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be 
+# used to set the initial width (in pixels) of the frame in which the tree 
+# is shown.
+
+TREEVIEW_WIDTH         = 250
+
+# Use this tag to change the font size of Latex formulas included 
+# as images in the HTML documentation. The default is 10. Note that 
+# when you change the font size after a successful doxygen run you need 
+# to manually remove any form_*.png images from the HTML output directory 
+# to force them to be regenerated.
+
+FORMULA_FONTSIZE       = 10
+
+#---------------------------------------------------------------------------
+# configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will 
+# generate Latex output.
+
+GENERATE_LATEX         = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. 
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
+# put in front of it. If left blank `latex' will be used as the default path.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be 
+# invoked. If left blank `latex' will be used as the default command name.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to 
+# generate index for LaTeX. If left blank `makeindex' will be used as the 
+# default command name.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact 
+# LaTeX documents. This may be useful for small projects and may help to 
+# save some trees in general.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used 
+# by the printer. Possible values are: a4, a4wide, letter, legal and 
+# executive. If left blank a4wide will be used.
+
+PAPER_TYPE             = a4wide
+
+# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX 
+# packages that should be included in the LaTeX output.
+
+EXTRA_PACKAGES         = 
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for 
+# the generated latex document. The header should contain everything until 
+# the first chapter. If it is left blank doxygen will generate a 
+# standard header. Notice: only use this tag if you know what you are doing!
+
+LATEX_HEADER           = 
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated 
+# is prepared for conversion to pdf (using ps2pdf). The pdf file will 
+# contain links (just like the HTML output) instead of page references 
+# This makes the output suitable for online browsing using a pdf viewer.
+
+PDF_HYPERLINKS         = YES
+
+# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of 
+# plain latex in the generated Makefile. Set this option to YES to get a 
+# higher quality PDF documentation.
+
+USE_PDFLATEX           = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode 
+# command to the generated LaTeX files. This will instruct LaTeX to keep 
+# running if errors occur, instead of asking the user for help. 
+# This option is also used when generating formulas in HTML.
+
+LATEX_BATCHMODE        = NO
+
+# If LATEX_HIDE_INDICES is set to YES then doxygen will not 
+# include the index chapters (such as File Index, Compound Index, etc.) 
+# in the output.
+
+LATEX_HIDE_INDICES     = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output 
+# The RTF output is optimized for Word 97 and may not look very pretty with 
+# other RTF readers or editors.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. 
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
+# put in front of it. If left blank `rtf' will be used as the default path.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES Doxygen generates more compact 
+# RTF documents. This may be useful for small projects and may help to 
+# save some trees in general.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated 
+# will contain hyperlink fields. The RTF file will 
+# contain links (just like the HTML output) instead of page references. 
+# This makes the output suitable for online browsing using WORD or other 
+# programs which support those fields. 
+# Note: wordpad (write) and others do not support links.
+
+RTF_HYPERLINKS         = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's 
+# config file, i.e. a series of assignments. You only have to provide 
+# replacements, missing definitions are set to their default value.
+
+RTF_STYLESHEET_FILE    = 
+
+# Set optional variables used in the generation of an rtf document. 
+# Syntax is similar to doxygen's config file.
+
+RTF_EXTENSIONS_FILE    = 
+
+#---------------------------------------------------------------------------
+# configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES (the default) Doxygen will 
+# generate man pages
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. 
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
+# put in front of it. If left blank `man' will be used as the default path.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to 
+# the generated man pages (default is the subroutine's section .3)
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and Doxygen generates man output, 
+# then it will generate one additional man file for each entity 
+# documented in the real man page(s). These additional files 
+# only source the real man page, but without them the man command 
+# would be unable to find the correct page. The default is NO.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES Doxygen will 
+# generate an XML file that captures the structure of 
+# the code including all documentation.
+
+GENERATE_XML           = NO
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. 
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be 
+# put in front of it. If left blank `xml' will be used as the default path.
+
+XML_OUTPUT             = xml
+
+# The XML_SCHEMA tag can be used to specify an XML schema, 
+# which can be used by a validating XML parser to check the 
+# syntax of the XML files.
+
+XML_SCHEMA             = 
+
+# The XML_DTD tag can be used to specify an XML DTD, 
+# which can be used by a validating XML parser to check the 
+# syntax of the XML files.
+
+XML_DTD                = 
+
+# If the XML_PROGRAMLISTING tag is set to YES Doxygen will 
+# dump the program listings (including syntax highlighting 
+# and cross-referencing information) to the XML output. Note that 
+# enabling this will significantly increase the size of the XML output.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will 
+# generate an AutoGen Definitions (see autogen.sf.net) file 
+# that captures the structure of the code including all 
+# documentation. Note that this feature is still experimental 
+# and incomplete at the moment.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES Doxygen will 
+# generate a Perl module file that captures the structure of 
+# the code including all documentation. Note that this 
+# feature is still experimental and incomplete at the 
+# moment.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES Doxygen will generate 
+# the necessary Makefile rules, Perl scripts and LaTeX code to be able 
+# to generate PDF and DVI output from the Perl module output.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be 
+# nicely formatted so it can be parsed by a human reader.  This is useful 
+# if you want to understand what is going on.  On the other hand, if this 
+# tag is set to NO the size of the Perl module output will be much smaller 
+# and Perl will parse it just the same.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file 
+# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. 
+# This is useful so different doxyrules.make files included by the same 
+# Makefile don't overwrite each other's variables.
+
+PERLMOD_MAKEVAR_PREFIX = 
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor   
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will 
+# evaluate all C-preprocessor directives found in the sources and include 
+# files.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro 
+# names in the source code. If set to NO (the default) only conditional 
+# compilation will be performed. Macro expansion can be done in a controlled 
+# way by setting EXPAND_ONLY_PREDEF to YES.
+
+MACRO_EXPANSION        = NO
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES 
+# then the macro expansion is limited to the macros specified with the 
+# PREDEFINED and EXPAND_AS_DEFINED tags.
+
+EXPAND_ONLY_PREDEF     = NO
+
+# If the SEARCH_INCLUDES tag is set to YES (the default) the include files 
+# in the INCLUDE_PATH (see below) will be searched if a #include is found.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that 
+# contain include files that are not input files but should be processed by 
+# the preprocessor.
+
+INCLUDE_PATH           = 
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard 
+# patterns (like *.h and *.hpp) to filter out the header-files in the 
+# directories. If left blank, the patterns specified with FILE_PATTERNS will 
+# be used.
+
+INCLUDE_FILE_PATTERNS  = 
+
+# The PREDEFINED tag can be used to specify one or more macro names that 
+# are defined before the preprocessor is started (similar to the -D option of 
+# gcc). The argument of the tag is a list of macros of the form: name 
+# or name=definition (no spaces). If the definition and the = are 
+# omitted =1 is assumed. To prevent a macro definition from being 
+# undefined via #undef or recursively expanded use the := operator 
+# instead of the = operator.
+
+PREDEFINED             = 
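As a hypothetical illustration (these values are not part of this project's configuration), the PREDEFINED list accepts bare names, name=definition pairs, and the := form described above:

```
# Hypothetical example only -- not part of this Doxyfile:
#   USE_FEATURE expands to 1 (the default when "=definition" is omitted),
#   DEBUG_LEVEL expands to 3, and EXPORT is fixed with := so it cannot be
#   undefined via #undef or recursively expanded.
PREDEFINED = USE_FEATURE \
             DEBUG_LEVEL=3 \
             EXPORT:=
```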
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then 
+# this tag can be used to specify a list of macro names that should be expanded. 
+# The macro definition that is found in the sources will be used. 
+# Use the PREDEFINED tag if you want to use a different macro definition.
+
+EXPAND_AS_DEFINED      = 
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then 
+# doxygen's preprocessor will remove all function-like macros that are alone 
+# on a line, have an all uppercase name, and do not end with a semicolon. Such 
+# function macros are typically used for boiler-plate code, and will confuse 
+# the parser if not removed.
+
+SKIP_FUNCTION_MACROS   = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to external references   
+#---------------------------------------------------------------------------
+
+# The TAGFILES option can be used to specify one or more tagfiles. 
+# Optionally an initial location of the external documentation 
+# can be added for each tagfile. The format of a tag file without 
+# this location is as follows: 
+#   TAGFILES = file1 file2 ... 
+# Adding location for the tag files is done as follows: 
+#   TAGFILES = file1=loc1 "file2 = loc2" ... 
+# where "loc1" and "loc2" can be relative or absolute paths or 
+# URLs. If a location is present for each tag, the installdox tool 
+# does not have to be run to correct the links.
+# Note that each tag file must have a unique name
+# (where the name does NOT include the path)
+# If a tag file is not located in the directory in which doxygen 
+# is run, you must also specify the path to the tagfile here.
+
+TAGFILES               = 
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create 
+# a tag file that is based on the input files it reads.
+
+GENERATE_TAGFILE       = 
+
+# If the ALLEXTERNALS tag is set to YES all external classes will be listed 
+# in the class index. If set to NO only the inherited external classes 
+# will be listed.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed 
+# in the modules index. If set to NO, only the current project's groups will 
+# be listed.
+
+EXTERNAL_GROUPS        = YES
+
+# The PERL_PATH should be the absolute path and name of the perl script 
+# interpreter (i.e. the result of `which perl').
+
+PERL_PATH              = /usr/bin/perl
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool   
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will 
+# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base
+# or super classes. Setting the tag to NO turns the diagrams off. Note that 
+# this option is superseded by the HAVE_DOT option below. This is only a 
+# fallback. It is recommended to install and use dot, since it yields more 
+# powerful graphs.
+
+CLASS_DIAGRAMS         = YES
+
+# You can define message sequence charts within doxygen comments using the \msc 
+# command. Doxygen will then run the mscgen tool (see 
+# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the 
+# documentation. The MSCGEN_PATH tag allows you to specify the directory where 
+# the mscgen tool resides. If left empty the tool is assumed to be found in the 
+# default search path.
+
+MSCGEN_PATH            = 
+
+# If set to YES, the inheritance and collaboration graphs will hide 
+# inheritance and usage relations if the target is undocumented 
+# or is not a class.
+
+HIDE_UNDOC_RELATIONS   = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is 
+# available from the path. This tool is part of Graphviz, a graph visualization 
+# toolkit from AT&T and Lucent Bell Labs. The other options in this section 
+# have no effect if this option is set to NO (the default)
+
+HAVE_DOT               = NO
+
+# By default doxygen will write a font called FreeSans.ttf to the output 
+# directory and reference it in all dot files that doxygen generates. This 
+# font does not include all possible unicode characters however, so when you need 
+# these (or just want a differently looking font) you can specify the font name 
+# using DOT_FONTNAME. You need to make sure dot is able to find the font,
+# which can be done by putting it in a standard location or by setting the 
+# DOTFONTPATH environment variable or by setting DOT_FONTPATH to the directory 
+# containing the font.
+
+DOT_FONTNAME           = FreeSans
+
+# By default doxygen will tell dot to use the output directory to look for the 
+# FreeSans.ttf font (which doxygen will put there itself). If you specify a 
+# different font using DOT_FONTNAME you can set the path where dot 
+# can find it using this tag.
+
+DOT_FONTPATH           = 
+
+# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen 
+# will generate a graph for each documented class showing the direct and 
+# indirect inheritance relations. Setting this tag to YES will force the
+# CLASS_DIAGRAMS tag to NO.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen 
+# will generate a graph for each documented class showing the direct and 
+# indirect implementation dependencies (inheritance, containment, and 
+# class references variables) of the class with other documented classes.
+
+COLLABORATION_GRAPH    = YES
+
+# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen 
+# will generate a graph for groups, showing the direct groups dependencies
+
+GROUP_GRAPHS           = YES
+
+# If the UML_LOOK tag is set to YES doxygen will generate inheritance and 
+# collaboration diagrams in a style similar to the OMG's Unified Modeling 
+# Language.
+
+UML_LOOK               = NO
+
+# If set to YES, the inheritance and collaboration graphs will show the 
+# relations between templates and their instances.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT 
+# tags are set to YES then doxygen will generate a graph for each documented 
+# file showing the direct and indirect include dependencies of the file with 
+# other documented files.
+
+INCLUDE_GRAPH          = YES
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and 
+# HAVE_DOT tags are set to YES then doxygen will generate a graph for each 
+# documented header file showing the documented files that directly or 
+# indirectly include this file.
+
+INCLUDED_BY_GRAPH      = YES
+
+# If the CALL_GRAPH and HAVE_DOT options are set to YES then 
+# doxygen will generate a call dependency graph for every global function 
+# or class method. Note that enabling this option will significantly increase 
+# the time of a run. So in most cases it will be better to enable call graphs 
+# for selected functions only using the \callgraph command.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH and HAVE_DOT tags are set to YES then 
+# doxygen will generate a caller dependency graph for every global function 
+# or class method. Note that enabling this option will significantly increase 
+# the time of a run. So in most cases it will be better to enable caller 
+# graphs for selected functions only using the \callergraph command.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen 
+# will show a graphical hierarchy of all classes instead of a textual one.
+
+GRAPHICAL_HIERARCHY    = YES
+
+# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES 
+# then doxygen will show the dependencies a directory has on other directories 
+# in a graphical way. The dependency relations are determined by the #include
+# relations between the files in the directories.
+
+DIRECTORY_GRAPH        = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images 
+# generated by dot. Possible values are png, jpg, or gif
+# If left blank png will be used.
+
+DOT_IMAGE_FORMAT       = png
+
+# The tag DOT_PATH can be used to specify the path where the dot tool can be 
+# found. If left blank, it is assumed the dot tool can be found in the path.
+
+DOT_PATH               = 
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that 
+# contain dot files that are included in the documentation (see the 
+# \dotfile command).
+
+DOTFILE_DIRS           = 
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of 
+# nodes that will be shown in the graph. If the number of nodes in a graph 
+# becomes larger than this value, doxygen will truncate the graph, which is 
+# visualized by representing a node as a red box. Note that if the
+# number of direct children of the root node in a graph is already larger than 
+# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note 
+# that the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the 
+# graphs generated by dot. A depth value of 3 means that only nodes reachable 
+# from the root by following a path via at most 3 edges will be shown. Nodes 
+# that lie further from the root node will be omitted. Note that setting this
+# option to 1 or 2 may greatly reduce the computation time needed for large 
+# code bases. Also note that the size of a graph can be further restricted by 
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent 
+# background. This is enabled by default, which results in a transparent 
+# background. Warning: Depending on the platform used, enabling this option 
+# may lead to badly anti-aliased labels on the edges of a graph (i.e. they 
+# become hard to read).
+
+DOT_TRANSPARENT        = YES
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This 
+# makes dot run faster, but since only newer versions of dot (>1.8.10) 
+# support this, this feature is disabled by default.
+
+DOT_MULTI_TARGETS      = NO
+
+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will 
+# generate a legend page explaining the meaning of the various boxes and 
+# arrows in the dot generated graphs.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will 
+# remove the intermediate dot files that are used to generate 
+# the various graphs.
+
+DOT_CLEANUP            = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to the search engine   
+#---------------------------------------------------------------------------
+
+# The SEARCHENGINE tag specifies whether or not a search engine should be 
+# used. If set to NO the values of all tags below this one will be ignored.
+
+SEARCHENGINE           = NO
diff --git a/src/python/oftest/__init__.py b/src/python/oftest/__init__.py
new file mode 100644
index 0000000..802dc75
--- /dev/null
+++ b/src/python/oftest/__init__.py
@@ -0,0 +1 @@
+'''Docstring to silence pylint; ignores --ignore option for __init__.py'''
diff --git a/src/python/oftest/action_list.py b/src/python/oftest/action_list.py
new file mode 100644
index 0000000..628e067
--- /dev/null
+++ b/src/python/oftest/action_list.py
@@ -0,0 +1,191 @@
+"""
+OpenFlow actions list class
+"""
+
+from action import *
+from cstruct import ofp_header
+import copy
+
+# # Map OFP action identifiers to the actual structures used on the wire
+# action_object_map = {
+#     OFPAT_OUTPUT                        : ofp_action_output,
+#     OFPAT_SET_VLAN_VID                  : ofp_action_vlan_vid,
+#     OFPAT_SET_VLAN_PCP                  : ofp_action_vlan_pcp,
+#     OFPAT_STRIP_VLAN                    : ofp_action_header,
+#     OFPAT_SET_DL_SRC                    : ofp_action_dl_addr,
+#     OFPAT_SET_DL_DST                    : ofp_action_dl_addr,
+#     OFPAT_SET_NW_SRC                    : ofp_action_nw_addr,
+#     OFPAT_SET_NW_DST                    : ofp_action_nw_addr,
+#     OFPAT_SET_NW_TOS                    : ofp_action_nw_tos,
+#     OFPAT_SET_TP_SRC                    : ofp_action_tp_port,
+#     OFPAT_SET_TP_DST                    : ofp_action_tp_port,
+#     OFPAT_ENQUEUE                       : ofp_action_enqueue
+# }
+
+action_object_map = {
+    OFPAT_OUTPUT                        : action_output,
+    OFPAT_SET_VLAN_VID                  : action_set_vlan_vid,
+    OFPAT_SET_VLAN_PCP                  : action_set_vlan_pcp,
+    OFPAT_STRIP_VLAN                    : action_strip_vlan,
+    OFPAT_SET_DL_SRC                    : action_set_dl_src,
+    OFPAT_SET_DL_DST                    : action_set_dl_dst,
+    OFPAT_SET_NW_SRC                    : action_set_nw_src,
+    OFPAT_SET_NW_DST                    : action_set_nw_dst,
+    OFPAT_SET_NW_TOS                    : action_set_nw_tos,
+    OFPAT_SET_TP_SRC                    : action_set_tp_src,
+    OFPAT_SET_TP_DST                    : action_set_tp_dst,
+    OFPAT_ENQUEUE                       : action_enqueue,
+    OFPAT_VENDOR                        : action_vendor
+}
+
+class action_list(object):
+    """
+    Maintain a list of actions
+
+    Data members:
+    @arg actions: An array of action objects such as action_output, etc.
+
+    Methods:
+    @arg pack: Pack the structure into a string
+    @arg unpack: Unpack a string to objects, with proper typing
+    @arg add: Add an action to the list; you can directly access
+    the actions member, but add will validate that the added object
+    is an action.
+
+    """
+
+    def __init__(self):
+        self.actions = []
+
+    def pack(self):
+        """
+        Pack a list of actions
+
+        Returns the packed string
+        """
+
+        packed = ""
+        for act in self.actions:
+            packed += act.pack()
+        return packed
+
+    def unpack(self, binary_string, bytes=None):
+        """
+        Unpack a list of actions
+        
+        Unpack actions from a binary string, creating an array
+        of objects of the appropriate type
+
+        @param binary_string The string to be unpacked
+
+        @param bytes The total length of the action list in bytes.
+        If None, the list is assumed to extend through the entire string.
+
+        @return The remainder of binary_string that was not parsed
+
+        """
+        if bytes is None:
+            bytes = len(binary_string)
+        bytes_done = 0
+        count = 0
+        cur_string = binary_string
+        while bytes_done < bytes:
+            hdr = ofp_action_header()
+            hdr.unpack(cur_string)
+            if hdr.len < OFP_ACTION_HEADER_BYTES:
+                print "ERROR: Action too short"
+                break
+            if hdr.type not in action_object_map:
+                print "WARNING: Skipping unknown action ", hdr.type, hdr.len
+            else:
+                self.actions.append(action_object_map[hdr.type]())
+                self.actions[count].unpack(cur_string)
+                count += 1
+            cur_string = cur_string[hdr.len:]
+            bytes_done += hdr.len
+        return cur_string
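The wire format walked by unpack() above is a type/length list: each action begins with a 16-bit type and a 16-bit length covering the entire action. A self-contained sketch of the same header-walk pattern using only the struct module (the `!HHHH` layout shown matches the OpenFlow 1.0 ofp_action_output fields type, len, port, max_len; other action types have different bodies):

```python
import struct

# Pack two OpenFlow 1.0 OFPAT_OUTPUT actions (type=0, len=8, port, max_len),
# network byte order, as action.pack() would produce on the wire.
def pack_output(port, max_len=0xffff):
    return struct.pack("!HHHH", 0, 8, port, max_len)

wire = pack_output(1) + pack_output(2)

# Walk the list the way action_list.unpack() does: read the 4-byte
# header (type, len), then skip len bytes to the next action.
def walk_actions(buf):
    actions = []
    offset = 0
    while offset + 4 <= len(buf):
        a_type, a_len = struct.unpack_from("!HH", buf, offset)
        if a_len < 4:          # corrupt length; bail out as unpack() does
            break
        actions.append((a_type, buf[offset:offset + a_len]))
        offset += a_len
    return actions

acts = walk_actions(wire)   # two (type, raw_bytes) tuples
```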
+
+    def add(self, action):
+        """
+        Add an action to an action list
+
+        @param action The action to add
+
+        @return True if successful, False if not an action object
+
+        """
+        if isinstance(action, action_class_list):
+            tmp = copy.deepcopy(action)
+            self.actions.append(tmp)
+            return True
+        return False
+
+    def remove_type(self, type):
+        """
+        Remove the first action on the list of the given type
+
+        @param type The type of action to search
+
+        @return The object removed, if any; otherwise None
+
+        """
+        for index in xrange(len(self.actions)):
+            if self.actions[index].type == type:
+                return self.actions.pop(index)
+        return None
+
+    def find_type(self, type):
+        """
+        Find the first action on the list of the given type
+
+        @param type The type of action to search
+
+        @return The object with the matching type if any; otherwise None
+
+        """
+        for index in xrange(len(self.actions)):
+            if self.actions[index].type == type:
+                return self.actions[index]
+        return None
+
+    def extend(self, other):
+        """
+        Add the actions in other to this list
+
+        @param other An object of type action_list whose
+        entries are to be merged into this list
+
+        @return True if successful.  If not successful, the list
+        may have been modified.
+
+        @todo Check if this is proper deep copy or not
+
+        """
+        for act in other.actions:
+            if not self.add(act):
+                return False
+        return True
+        
+    def __len__(self):
+        length = 0
+        for act in self.actions:
+            length += act.__len__()
+        return length
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        if self.actions != other.actions: return False
+        return True
+
+    def __ne__(self, other): return not self.__eq__(other)
+        
+    def show(self, prefix=''):
+        outstr = prefix + "Action List with " + str(len(self.actions)) + \
+            " actions\n"
+        count = 0
+        for obj in self.actions:
+            count += 1
+            outstr += prefix + "  Action " + str(count) + ": \n"
+            outstr += obj.show(prefix + '    ')
+        return outstr
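A note on the `__eq__` above: comparing `self.actions != other.actions` relies on Python comparing the two lists element-wise, which in turn invokes each action object's own `__eq__`. A minimal sketch with a hypothetical stand-in action class (not the real action objects):

```python
class FakeAction(object):
    """Stand-in for an action object; real actions compare field-by-field."""
    def __init__(self, type, port):
        self.type = type
        self.port = port
    def __eq__(self, other):
        return type(self) == type(other) and self.__dict__ == other.__dict__
    def __ne__(self, other):
        return not self.__eq__(other)

a = [FakeAction(0, 1), FakeAction(0, 2)]
b = [FakeAction(0, 1), FakeAction(0, 2)]
same = (a == b)        # element-wise comparison via FakeAction.__eq__
a[1].port = 3
differ = (a != b)      # one element changed, so the lists differ
```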
diff --git a/src/python/oftest/controller.py b/src/python/oftest/controller.py
new file mode 100644
index 0000000..4251913
--- /dev/null
+++ b/src/python/oftest/controller.py
@@ -0,0 +1,618 @@
+"""
+OpenFlow Test Framework
+
+Controller class
+
+Provide the interface to the control channel to the switch under test.  
+
+Class inherits from thread so as to run in background allowing
+asynchronous callbacks (if needed, not required).  Also supports
+polling.
+
+The controller thread maintains a queue.  Incoming messages that
+are not handled by a callback function are placed in this queue for 
+poll calls.  
+
+Callbacks and polling support specifying the message type
+
+@todo Support transaction semantics via xid
+@todo Support select and listen on an administrative socket (or
+use a timeout to support clean shutdown).
+
+Currently only one connection is accepted during the life of
+the controller.  There seems to be no clean way to interrupt an
+accept call.  Using select that also listens on an administrative
+socket and can shut down the socket might work.
+
+"""
+
+import os
+import socket
+import sys
+import time
+from threading import Thread
+from threading import Lock
+from threading import Condition
+from message import *
+from parse import *
+from ofutils import *
+# For some reason, it seems select must be imported last (or later);
+# otherwise we get an attribute error when calling select.select
+import select
+import logging
+
+##@todo Find a better home for these identifiers (controller)
+RCV_SIZE_DEFAULT = 32768
+LISTEN_QUEUE_SIZE = 1
+
+class Controller(Thread):
+    """
+    Class abstracting the control interface to the switch.  
+
+    For receiving messages, two mechanisms are implemented.  First,
+    query the interface with poll.  Second, register to have a
+    function called by message type.  The callback is passed the
+    message type as well as the raw packet (or message object)
+
+    One of the main purposes of this object is to translate between network 
+    and host byte order.  'Above' this object, things should be in host
+    byte order.
+
+    @todo Consider using SocketServer for listening socket
+    @todo Test transaction code
+
+    @var rcv_size The receive size to use for receive calls
+    @var max_pkts The max size of the receive queue
+    @var keep_alive If true, listen for echo requests and respond w/
+    echo replies
+    @var initial_hello If true, will send a hello message immediately
+    upon connecting to the switch
+    @var exit_on_reset If true, terminate controller on connection reset
+    @var host The host to use for connect
+    @var port The port to connect on 
+    @var packets_total Total number of packets received
+    @var packets_expired Number of packets popped from queue as queue full
+    @var packets_handled Number of packets handled by something
+    @var dbg_state Debug indication of state
+    """
+
+    def __init__(self, host='127.0.0.1', port=6633, max_pkts=1024):
+        Thread.__init__(self)
+        # Socket related
+        self.rcv_size = RCV_SIZE_DEFAULT
+        self.listen_socket = None
+        self.switch_socket = None
+        self.switch_addr = None
+        self.socs = []
+        self.connect_cv = Condition()
+        self.message_cv = Condition()
+
+        # Counters
+        self.socket_errors = 0
+        self.parse_errors = 0
+        self.packets_total = 0
+        self.packets_expired = 0
+        self.packets_handled = 0
+        self.poll_discards = 0
+
+        # State
+        self.packets = []
+        self.sync = Lock()
+        self.handlers = {}
+        self.keep_alive = False
+        self.active = True
+        self.initial_hello = True
+        self.exit_on_reset = True
+
+        # Settings
+        self.max_pkts = max_pkts
+        self.passive = True
+        self.host = host
+        self.port = port
+        self.dbg_state = "init"
+        self.logger = logging.getLogger("controller")
+
+        # Transaction and message type waiting variables 
+        #   xid_cv: Condition variable (semaphore) for packet waiters
+        #   xid: Transaction ID being waited on
+        #   xid_response: Transaction response message
+        #   expect_msg: Is a message being waited on 
+        #   expect_msg_cv: Semaphore for waiters
+        #   expect_msg_type: Type of message expected
+        #   expect_msg_response: Result passed through here
+
+        self.xid_cv = Condition()
+        self.xid = None
+        self.xid_response = None
+
+        self.expect_msg = False
+        self.expect_msg_cv = Condition()
+        self.expect_msg_type = None
+        self.expect_msg_response = None
+        self.buffered_input = ""
+
+    def _pkt_handle(self, pkt):
+        """
+        Check for all packet handling conditions
+
+        Parse and verify message 
+        Check if XID matches something waiting
+        Check if message is being expected for a poll operation
+        Check if keep alive is on and message is an echo request
+        Check if any registered handler wants the packet
+        Enqueue if none of those conditions is met
+
+        @param pkt The raw packet (string) which may contain multiple OF msgs
+        """
+
+        # snag any left over data from last read()
+        pkt = self.buffered_input + pkt
+        self.buffered_input = ""
+
+        # Process each of the OF msgs inside the pkt
+        offset = 0
+        while offset < len(pkt):
+            # Parse the header to get type
+            hdr = of_header_parse(pkt[offset:])
+            if not hdr:
+                self.logger.info("Could not parse header, pkt len %d" % len(pkt))
+                self.parse_errors += 1
+                return
+            if hdr.length == 0:
+                self.logger.info("Header length is zero")
+                self.parse_errors += 1
+                return
+
+            # Extract the raw message bytes; stop if the message is incomplete
+            if (offset + hdr.length) > len(pkt):
+                break
+            rawmsg = pkt[offset : offset + hdr.length]
+
+            self.logger.debug("Msg in: len %d. offset %d. type %s. hdr.len %d" %
+                (len(pkt), offset, ofp_type_map[hdr.type], hdr.length))
+            if hdr.version != OFP_VERSION:
+                self.logger.error("Version %d does not match OFTest version %d"
+                                  % (hdr.version, OFP_VERSION))
+                print "Version %d does not match OFTest version %d" % \
+                    (hdr.version, OFP_VERSION)
+                self.active = False
+                self.switch_socket = None
+                self.kill()
+
+            msg = of_message_parse(rawmsg)
+            if not msg:
+                self.parse_errors += 1
+                self.logger.warn("Could not parse message")
+                offset += hdr.length
+                continue
+
+            self.sync.acquire()
+
+            # Check if transaction is waiting
+            self.xid_cv.acquire()
+            if self.xid:
+                if hdr.xid == self.xid:
+                    self.logger.debug("Matched expected XID " + str(hdr.xid))
+                    self.xid_response = (msg, rawmsg)
+                    self.xid = None
+                    self.xid_cv.notify()
+                    self.xid_cv.release()
+                    self.sync.release()
+                    continue
+            self.xid_cv.release()
+
+            # PREVENT QUEUE ACCESS AT THIS POINT?
+            # Check if anyone waiting on this type of message
+            self.expect_msg_cv.acquire()
+            if self.expect_msg:
+                if not self.expect_msg_type or (self.expect_msg_type == hdr.type):
+                    self.logger.debug("Matched expected msg type "
+                                       + ofp_type_map[hdr.type])
+                    self.expect_msg_response = (msg, rawmsg)
+                    self.expect_msg = False
+                    self.expect_msg_cv.notify()
+                    self.expect_msg_cv.release()
+                    self.sync.release()
+                    continue
+            self.expect_msg_cv.release()
+
+            # Check if keep alive is set; if so, respond to echo requests
+            if self.keep_alive:
+                if hdr.type == OFPT_ECHO_REQUEST:
+                    self.sync.release()
+                    self.logger.debug("Responding to echo request")
+                    rep = echo_reply()
+                    rep.header.xid = hdr.xid
+                    # Ignoring additional data
+                    self.message_send(rep.pack(), zero_xid=True)
+                    offset += hdr.length
+                    continue
+
+            # Now check for message handlers; preference is given to
+            # handlers for a specific packet
+            handled = False
+            if hdr.type in self.handlers:
+                handled = self.handlers[hdr.type](self, msg, rawmsg)
+            if not handled and ("all" in self.handlers):
+                handled = self.handlers["all"](self, msg, rawmsg)
+
+            if not handled: # Not handled, enqueue
+                self.logger.debug("Enqueuing pkt type " + ofp_type_map[hdr.type])
+                if len(self.packets) >= self.max_pkts:
+                    self.packets.pop(0)
+                    self.packets_expired += 1
+                self.packets.append((msg, rawmsg))
+                self.packets_total += 1
+            else:
+                self.packets_handled += 1
+                self.logger.debug("Message handled by callback")
+
+            self.sync.release()
+            offset += hdr.length
+        # end of 'while offset < len(pkt)'
+        #   note that if offset == len(pkt), this
+        #   appends a harmless empty string
+        self.buffered_input += pkt[offset:]
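The reassembly logic in _pkt_handle can be isolated: prepend whatever was buffered from the previous read, consume only whole messages (the OpenFlow 1.0 header is 8 bytes: version, type, length, xid, i.e. struct format `"!BBHL"`), and carry any trailing partial message forward. A standalone sketch of that pattern (Python 3 syntax, stdlib only):

```python
import struct

OFP_HEADER_FMT = "!BBHL"   # version, type, length (of whole msg), xid
OFP_HEADER_LEN = 8

def feed(buffered, chunk):
    """Return (complete_messages, new_buffered) given a new chunk of bytes."""
    data = buffered + chunk
    msgs = []
    offset = 0
    while len(data) - offset >= OFP_HEADER_LEN:
        version, msg_type, length, xid = \
            struct.unpack_from(OFP_HEADER_FMT, data, offset)
        if length < OFP_HEADER_LEN:
            break                      # corrupt header; stop parsing
        if offset + length > len(data):
            break                      # partial message; wait for more data
        msgs.append(data[offset:offset + length])
        offset += length
    return msgs, data[offset:]         # leftover bytes become the new buffer

# An 8-byte echo request (OFPT_ECHO_REQUEST == 2) split across two reads:
hdr = struct.pack(OFP_HEADER_FMT, 1, 2, 8, 42)
msgs, buf = feed(b"", hdr[:5])     # first read: incomplete, all buffered
msgs2, buf = feed(buf, hdr[5:])    # second read completes the message
```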
+
+    def _socket_ready_handle(self, s):
+        """
+        Handle an input-ready socket
+        @param s The socket object that is ready
+        @retval True, reset the switch connection
+        """
+
+        if s == self.listen_socket:
+            if self.switch_socket:
+                self.logger.error("Multiple switch cxns not supported")
+                sys.exit(1)
+
+            (self.switch_socket, self.switch_addr) = \
+                self.listen_socket.accept()
+            self.logger.info("Got cxn to " + str(self.switch_addr))
+            # Notify anyone waiting
+            self.connect_cv.acquire()
+            self.connect_cv.notify()
+            self.connect_cv.release()
+            self.socs.append(self.switch_socket)
+            if self.initial_hello:
+                self.message_send(hello())
+        elif s == self.switch_socket:
+            try:
+                pkt = self.switch_socket.recv(self.rcv_size)
+            except:
+                self.logger.warning("Error on switch read")
+                return True
+
+            if not self.active:
+                return False
+
+            if len(pkt) == 0:
+                self.logger.info("zero-len pkt in")
+                return True
+
+            self._pkt_handle(pkt)
+        else:
+            self.logger.error("Unknown socket ready: " + str(s))
+            return True
+
+        return False
+
+    def run(self):
+        """
+        Activity function for class
+
+        Assumes connection to switch already exists.  Listens on
+        switch_socket for messages until an error (or zero len pkt)
+        occurs.
+
+        When there is a message on the socket, check for handlers; queue the
+        packet if no one handles the packet.
+
+        See note for controller describing the limitation of a single
+        connection for now.
+        """
+
+        self.dbg_state = "starting"
+
+        # Create listen socket
+        self.logger.info("Create/listen at " + self.host + ":" + 
+                 str(self.port))
+        self.listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.listen_socket.setsockopt(socket.SOL_SOCKET, 
+                                      socket.SO_REUSEADDR, 1)
+        self.listen_socket.bind((self.host, self.port))
+        self.dbg_state = "listening"
+        self.listen_socket.listen(LISTEN_QUEUE_SIZE)
+
+        self.logger.info("Waiting for switch connection")
+        self.socs = [self.listen_socket]
+        self.dbg_state = "running"
+        while self.active:
+            reset_switch_cxn = False
+            try:
+                sel_in, sel_out, sel_err = \
+                    select.select(self.socs, [], self.socs, 1)
+            except:
+                print sys.exc_info()
+                self.logger.error("Select error, exiting")
+                sys.exit(1)
+
+            if not self.active:
+                break
+
+            for s in sel_in:
+                reset_switch_cxn = self._socket_ready_handle(s)
+
+            for s in sel_err:
+                self.logger.error("Got socket error on: " + str(s))
+                if s == self.switch_socket:
+                    reset_switch_cxn = True
+                else:
+                    self.logger.error("Socket error; exiting")
+                    self.active = False
+                    break
+
+            if self.active and reset_switch_cxn:
+                if self.exit_on_reset:
+                    self.kill()
+                else:
+                    self.logger.warning("Closing switch cxn")
+                    try:
+                        self.switch_socket.close()
+                    except:
+                        pass
+                    self.switch_socket = None
+                    self.socs = self.socs[0:1]
+
+        # End of main loop
+        self.dbg_state = "closing"
+        self.logger.info("Exiting controller thread")
+        self.shutdown()
+
+    def connect(self, timeout=None):
+        """
+        Connect to the switch
+
+        @param timeout If None, block until connected.  If 0, return 
+        immediately.  Otherwise, block for up to timeout seconds
+        @return Boolean, True if connected
+        """
+
+        if timeout == 0:
+            return self.switch_socket is not None
+        if self.switch_socket is not None:
+            return True
+        self.connect_cv.acquire()
+        self.connect_cv.wait(timeout)
+        self.connect_cv.release()
+
+        return self.switch_socket is not None
+        
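The condition-variable wait used by connect() can be sketched in isolation. This is a minimal, illustrative model (the `Ctrl` class and `fake_switch_arrives` names are invented for the sketch, not part of the framework): one thread blocks on the Condition with a timeout while another thread records the "connection" and notifies.

```python
import threading
import time

class Ctrl(object):
    def __init__(self):
        self.switch_socket = None
        self.connect_cv = threading.Condition()

    def connect(self, timeout=None):
        # Same shape as the connect() above
        if timeout == 0:
            return self.switch_socket is not None
        if self.switch_socket is not None:
            return True
        self.connect_cv.acquire()
        self.connect_cv.wait(timeout)
        self.connect_cv.release()
        return self.switch_socket is not None

def fake_switch_arrives(ctrl):
    # Stand-in for the accept() path: set the socket, then notify waiters
    time.sleep(0.1)
    ctrl.connect_cv.acquire()
    ctrl.switch_socket = "dummy-socket"
    ctrl.connect_cv.notify()
    ctrl.connect_cv.release()

ctrl = Ctrl()
threading.Thread(target=fake_switch_arrives, args=(ctrl,)).start()
connected = ctrl.connect(timeout=2)
```

Note the waiter re-checks `switch_socket` after `wait()` returns, so a timeout and a successful connect are distinguished by state, not by the wait itself.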
+    def kill(self):
+        """
+        Force the controller thread to quit
+
+        Just sets the active state variable to false and expects
+        the select timeout to kick in
+        """
+        self.active = False
+
+    def shutdown(self):
+        """
+        Shutdown the controller closing all sockets
+
+        @todo Might want to synchronize shutdown with self.sync...
+        """
+        self.active = False
+        try:
+            self.switch_socket.shutdown(socket.SHUT_RDWR)
+        except:
+            self.logger.info("Ignoring switch soc shutdown error")
+        self.switch_socket = None
+
+        try:
+            self.listen_socket.shutdown(socket.SHUT_RDWR)
+        except:
+            self.logger.info("Ignoring listen soc shutdown error")
+        self.listen_socket = None
+        self.dbg_state = "down"
+
+    def register(self, msg_type, handler):
+        """
+        Register a callback to receive a specific message type.
+
+        Only one handler may be registered for a given message type.
+
+        WARNING:  A lock is held during the handler call back, so 
+        the handler should not make any blocking calls
+
+        @param msg_type The type of message to receive.  May be DEFAULT 
+        for all otherwise unhandled packets.  The special type, the
+        string "all", sends every packet to the handler.
+        @param handler The function to call when a message of the given 
+        type is received.
+        """
+        # Should check type is valid
+        if not handler and msg_type in self.handlers.keys():
+            del self.handlers[msg_type]
+            return
+        self.handlers[msg_type] = handler
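The register() semantics reduce to a dict keyed by message type where registering None removes the entry. A standalone sketch of that pattern (the numeric types and lambdas are illustrative):

```python
# Map from message type to its single handler
handlers = {}

def register(msg_type, handler):
    # Registering a falsy handler deregisters the type, as above
    if not handler and msg_type in handlers:
        del handlers[msg_type]
        return
    handlers[msg_type] = handler

register(0, lambda c, m, p: True)       # e.g. type 0 could be OFPT_HELLO
register("all", lambda c, m, p: False)  # catch-all handler
register(0, None)                       # deregister type 0
```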
+
+    def poll(self, exp_msg=None, timeout=None):
+        """
+        Wait for the next OF message received from the switch.
+
+        @param exp_msg If set, return only when this type of message 
+        is received (unless timeout occurs).
+        @param timeout If None or non-positive, do not block.  Otherwise,
+        wait up to timeout seconds for a matching message.
+
+        @retval A pair (msg, pkt) where msg is a message object and pkt
+        the string representing the packet as received from the socket.
+        This allows additional parsing by the receiver if necessary.
+
+        The data members in the message are in host endian order.
+        If an error occurs, (None, None) is returned
+
+        The current queue is searched for a message of the desired type
+        before sleeping on message in events.
+        """
+
+        msg = pkt = None
+
+        if exp_msg is not None:
+            self.logger.debug("Poll for " + ofp_type_map[exp_msg])
+        # First check the current queue
+        self.sync.acquire()
+        if len(self.packets) > 0:
+            if exp_msg is None:
+                (msg, pkt) = self.packets.pop(0)
+                self.sync.release()
+                return (msg, pkt)
+            else:
+                for i in range(len(self.packets)):
+                    msg = self.packets[i][0]
+                    if msg.header.type == exp_msg:
+                        (msg, pkt) = self.packets.pop(i)
+                        self.sync.release()
+                        return (msg, pkt)
+
+        # Okay, not currently in the queue
+        if timeout is None or timeout <= 0:
+            self.sync.release()
+            return (None, None)
+
+        msg = pkt = None
+        self.logger.debug("Entering timeout")
+        # Careful of race condition releasing sync before message cv
+        # Also, this style is ripe for a lockup.
+        self.expect_msg_cv.acquire()
+        self.sync.release()
+        self.expect_msg_response = None
+        self.expect_msg = True
+        self.expect_msg_type = exp_msg
+        self.expect_msg_cv.wait(timeout)
+        if self.expect_msg_response is not None:
+            (msg, pkt) = self.expect_msg_response
+        self.expect_msg_cv.release()
+
+        if msg is None:
+            self.logger.debug("Poll time out")
+        else:
+            self.logger.debug("Got msg " + str(msg))
+
+        return (msg, pkt)
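The queue scan at the top of poll() can be shown on its own: pop the oldest entry, or the oldest entry of a requested type, from a list of (type, packet) pairs. This sketch substitutes plain integers for `msg.header.type`:

```python
def queue_poll(packets, exp_msg=None):
    """packets is a list of (msg_type, pkt) pairs, oldest first."""
    if not packets:
        return (None, None)
    if exp_msg is None:
        return packets.pop(0)
    # Search for the oldest entry of the requested type
    for i in range(len(packets)):
        if packets[i][0] == exp_msg:
            return packets.pop(i)
    return (None, None)

# Illustrative type codes: say 10 is OFPT_PACKET_IN, 13 is OFPT_PACKET_OUT
q = [(10, "pkt-a"), (13, "pkt-b")]
```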
+
+    def transact(self, msg, timeout=None, zero_xid=False):
+        """
+        Run a message transaction with the switch
+
+        Send the message in msg and wait for a reply with a matching
+        transaction id.  Transactions have the highest priority in
+        received message handling.
+
+        @param msg The message object to send; must not be a string
+        @param timeout The timeout in seconds to wait for the reply
+        @param zero_xid Normally, if the XID is 0 an XID will be generated
+        for the message.  Set zero_xid to True to keep an existing 0 XID
+        @return The pair (resp, pkt); resp is None if unsuccessful
+
+        """
+
+        if not zero_xid and msg.header.xid == 0:
+            msg.header.xid = gen_xid()
+
+        self.xid_cv.acquire()
+        if self.xid:
+            self.xid_cv.release()
+            self.logger.error("Can only run one transaction at a time")
+            return None
+
+        self.xid = msg.header.xid
+        self.xid_response = None
+        self.message_send(msg.pack())
+        self.xid_cv.wait(timeout)
+        if self.xid_response:
+            (resp, pkt) = self.xid_response
+            self.xid_response = None
+        else:
+            (resp, pkt) = (None, None)
+        if resp is None:
+            self.logger.warning("No response for xid " + str(self.xid))
+        # Clear the xid so a timed-out transaction does not block the next one
+        self.xid = None
+        self.xid_cv.release()
+        return (resp, pkt)
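The transaction pairing that transact() relies on is simple: tag the request with a random nonzero XID and accept only a reply carrying the same XID. A reduced sketch (replies are modeled as dicts; `gen_xid` mirrors the helper in ofutils):

```python
import random

def gen_xid():
    # Nonzero so that xid == 0 can keep its "generate one for me" meaning
    return random.randrange(1, 0xffffffff)

def match_reply(request_xid, replies):
    """Return the first reply dict whose xid matches, else None."""
    for rep in replies:
        if rep["xid"] == request_xid:
            return rep
    return None

xid = gen_xid()
# A stale reply (xid 0 can never match a generated xid) and the real one
replies = [{"xid": 0, "body": "stale"}, {"xid": xid, "body": "mine"}]
```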
+
+    def message_send(self, msg, zero_xid=False):
+        """
+        Send the message to the switch
+
+        @param msg A string or OpenFlow message object to be forwarded to
+        the switch.
+        @param zero_xid If msg is an OpenFlow object (not a string) and if
+        the XID in the header is 0, then an XID will be generated
+        for the message.  Set zero_xid to True to override this behavior
+        (and keep an existing 0 xid)
+
+        @return -1 if error, 0 on success
+
+        """
+
+        if not self.switch_socket:
+            self.logger.info("message_send: no socket")
+            return -1
+        # A string argument is assumed to be already packed
+        if type(msg) != type(""):
+            try:
+                if msg.header.xid == 0 and not zero_xid:
+                    msg.header.xid = gen_xid()
+                outpkt = msg.pack()
+            except:
+                self.logger.error(
+                         "message_send: not an OF message or string?")
+                return -1
+        else:
+            outpkt = msg
+
+        self.logger.debug("Sending pkt of len " + str(len(outpkt)))
+        if self.switch_socket.sendall(outpkt) is None:
+            return 0
+
+        self.logger.error("Unknown error on sendall")
+        return -1
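message_send() treats a None return from sendall as success because `socket.sendall` returns None when all data is transmitted and raises on error. This can be demonstrated on a local socket pair (POSIX only; the 8-byte payload is just an OpenFlow-header-sized stand-in):

```python
import socket

a, b = socket.socketpair()
# sendall returns None on success (it raises socket.error on failure)
result = a.sendall(b"\x01\x00\x00\x08\x00\x00\x00\x01")
data = b.recv(16)
a.close()
b.close()
```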
+
+    def __str__(self):
+        string = "Controller:\n"
+        string += "  state           " + self.dbg_state + "\n"
+        string += "  switch_addr     " + str(self.switch_addr) + "\n"
+        string += "  pending pkts    " + str(len(self.packets)) + "\n"
+        string += "  total pkts      " + str(self.packets_total) + "\n"
+        string += "  expired pkts    " + str(self.packets_expired) + "\n"
+        string += "  handled pkts    " + str(self.packets_handled) + "\n"
+        string += "  poll discards   " + str(self.poll_discards) + "\n"
+        string += "  parse errors    " + str(self.parse_errors) + "\n"
+        string += "  sock errors     " + str(self.socket_errors) + "\n"
+        string += "  max pkts        " + str(self.max_pkts) + "\n"
+        string += "  host            " + str(self.host) + "\n"
+        string += "  port            " + str(self.port) + "\n"
+        string += "  keep_alive      " + str(self.keep_alive) + "\n"
+        return string
+
+    def show(self):
+        print str(self)
+
+def sample_handler(controller, msg, pkt):
+    """
+    Sample message handler
+
+    This is the prototype for functions registered with the controller
+    class for packet reception
+
+    @param controller The controller calling the handler
+    @param msg The parsed message object
+    @param pkt The raw packet that was received on the socket.  This is
+    in case the packet contains extra unparsed data.
+    @returns Boolean value indicating if the packet was handled.  If
+    not handled, the packet is placed in the queue for pollers to receive
+    """
+    pass
diff --git a/src/python/oftest/dataplane.py b/src/python/oftest/dataplane.py
new file mode 100644
index 0000000..40bda9c
--- /dev/null
+++ b/src/python/oftest/dataplane.py
@@ -0,0 +1,375 @@
+"""
+OpenFlow Test Framework
+
+DataPlane and DataPlanePort classes
+
+Provide the interface to control the set of ports being used
+to stimulate the switch under test.
+
+See the class DataPlanePort for more details.  This class wraps
+a set of those objects, allowing general calls and parsing
+configuration.
+
+@todo Add "filters" for matching packets.  Actions supported
+for filters should include a callback or a counter
+"""
+
+import sys
+import os
+import socket
+import time
+import netutils
+from threading import Thread
+from threading import Lock
+from threading import Condition
+import select
+import logging
+from oft_assert import oft_assert
+
+##@todo Find a better home for these identifiers (dataplane)
+RCV_SIZE_DEFAULT = 4096
+ETH_P_ALL = 0x03
+RCV_TIMEOUT = 10000
+
+class DataPlanePort(Thread):
+    """
+    Class defining a port monitoring object.
+
+    Control a dataplane port connected to the switch under test.
+    Creates a promiscuous socket on a physical interface.
+    Queues the packets received on that interface with time stamps.
+    Inherits from Thread class as meant to run in background.  Also
+    supports polling.
+    Use accessors to dequeue packets for proper synchronization.
+
+    Currently assumes a controlling 'parent' which maintains a
+    common Lock object and a total packet-pending count.  May want
+    to decouple that some day.
+    """
+
+    def __init__(self, interface_name, port_number, parent, max_pkts=1024):
+        """
+        Set up a port monitor object
+        @param interface_name The name of the physical interface like eth1
+        @param port_number The port number associated with this port
+        @param parent The controlling dataplane object; for pkt wait CV
+        @param max_pkts Maximum number of pkts to keep in queue
+        """
+        Thread.__init__(self)
+        self.interface_name = interface_name
+        self.max_pkts = max_pkts
+        self.packets_total = 0
+        self.packets = []
+        self.packets_discarded = 0
+        self.port_number = port_number
+        logname = "dp-" + interface_name
+        self.logger = logging.getLogger(logname)
+        try:
+            self.socket = self.interface_open(interface_name)
+        except:
+            self.logger.info("Could not open socket")
+            sys.exit(1)
+        self.logger.info("Opened port monitor socket")
+        self.parent = parent
+        self.pkt_sync = self.parent.pkt_sync
+
+    def interface_open(self, interface_name):
+        """
+        Open a socket in a promiscuous mode for a data connection.
+        @param interface_name port name as a string such as 'eth1'
+        @retval s socket
+        """
+        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
+                          socket.htons(ETH_P_ALL))
+        s.bind((interface_name, 0))
+        netutils.set_promisc(s, interface_name)
+        s.settimeout(RCV_TIMEOUT)
+        return s
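The `socket.htons(ETH_P_ALL)` call above matters because AF_PACKET sockets expect the protocol in network byte order; 0x03 (ETH_P_ALL) selects every protocol on the interface. Opening the raw socket itself needs root privileges, so only the byte-order conversion is shown here:

```python
import socket

ETH_P_ALL = 0x03
# On a little-endian host htons(0x03) yields 0x0300; on big-endian it is 0x03
proto = socket.htons(ETH_P_ALL)
```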
+
+    def run(self):
+        """
+        Activity function for class
+        """
+        self.running = True
+        self.socs = [self.socket]
+        error_warned = False # Have we warned about error?
+        while self.running:
+            try:
+                sel_in, sel_out, sel_err = \
+                    select.select(self.socs, [], [], 1)
+            except:
+                print sys.exc_info()
+                self.logger.error("Select error, exiting")
+                break
+
+            if not self.running:
+                break
+
+            if (sel_in is None) or (len(sel_in) == 0):
+                continue
+
+            try:
+                rcvmsg = self.socket.recv(RCV_SIZE_DEFAULT)
+            except socket.error:
+                if not error_warned:
+                    self.logger.info("Socket error on recv")
+                    error_warned = True
+                continue
+
+            if len(rcvmsg) == 0:
+                self.logger.info("Zero len pkt rcvd")
+                self.kill()
+                break
+
+            rcvtime = time.time()
+            self.logger.debug("Pkt len " + str(len(rcvmsg)) +
+                     " in at " + str(rcvtime))
+
+            # Enqueue packet
+            self.pkt_sync.acquire()
+            if len(self.packets) >= self.max_pkts:
+                # Queue full, throw away oldest
+                self.packets.pop(0)
+                self.packets_discarded += 1
+            else:
+                self.parent.packets_pending += 1
+            # Check if parent is waiting on this (or any) port
+            if self.parent.want_pkt:
+                if (self.parent.want_pkt_port is None or
+                        self.parent.want_pkt_port == self.port_number):
+                    self.parent.got_pkt_port = self.port_number
+                    self.parent.want_pkt = False
+                    self.parent.pkt_sync.notify()
+            self.packets.append((rcvmsg, rcvtime))
+            self.packets_total += 1
+            self.pkt_sync.release()
+
+        self.logger.info("Thread exit ")
+
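The enqueue step in run() keeps a bounded queue by dropping the oldest packet when full. In isolation (list contents are illustrative):

```python
def enqueue(queue, pkt, max_pkts):
    """Append pkt, discarding the oldest entry if the queue is full.
    Returns the number of packets discarded (0 or 1)."""
    discarded = 0
    if len(queue) >= max_pkts:
        queue.pop(0)     # queue full: throw away the oldest
        discarded = 1
    queue.append(pkt)
    return discarded

q = [1, 2, 3]
d = enqueue(q, 4, max_pkts=3)
```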
+    def kill(self):
+        """
+        Terminate the running thread
+        """
+        self.logger.debug("Port monitor kill")
+        self.running = False
+        try:
+            self.socket.close()
+        except:
+            self.logger.info("Ignoring dataplane soc shutdown error")
+
+    def dequeue(self, use_lock=True):
+        """
+        Get the oldest packet in the queue
+        @param use_lock If True, acquires the packet sync lock (which is
+        really the parent's lock)
+        @return The pair packet, packet time-stamp
+        """
+        if use_lock:
+            self.pkt_sync.acquire()
+        if len(self.packets) > 0:
+            pkt, pkt_time = self.packets.pop(0)
+            self.parent.packets_pending -= 1
+        else:
+            pkt = pkt_time = None
+        if use_lock:
+            self.pkt_sync.release()
+        return pkt, pkt_time
+
+    def timestamp_head(self):
+        """
+        Return the timestamp of the head of queue or None if empty
+        """
+        rv = None
+        try:
+            rv = self.packets[0][1]
+        except:
+            rv = None
+        return rv
+
+    def flush(self):
+        """
+        Clear the packet queue
+        """
+        self.pkt_sync.acquire()
+        self.packets_discarded += len(self.packets)
+        self.parent.packets_pending -= len(self.packets)
+        self.packets = []
+        self.packet_times = []
+        self.pkt_sync.release()
+
+
+    def send(self, packet):
+        """
+        Send a packet to the dataplane port
+        @param packet The packet data to send to the port
+        @retval The number of bytes sent
+        """
+        return self.socket.send(packet)
+
+
+    def register(self, handler):
+        """
+        Register a callback function to receive packets from this
+        port.  The callback will be passed the packet, the
+        interface name and the port number (if set) on which the
+        packet was received.
+
+        To be implemented
+        """
+        pass
+
+    def show(self, prefix=''):
+        print prefix + "Name:          " + self.interface_name
+        print prefix + "Pkts pending:  " + str(len(self.packets))
+        print prefix + "Pkts total:    " + str(self.packets_total)
+        print prefix + "socket:        " + str(self.socket)
+
+
+class DataPlane:
+    """
+    Class defining access primitives to the data plane
+    Controls a list of DataPlanePort objects
+    """
+    def __init__(self):
+        self.port_list = {}
+        # pkt_sync serves double duty as a regular top level lock and
+        # as a condition variable
+        self.pkt_sync = Condition()
+
+        # These are used to signal async pkt arrival for polling
+        self.want_pkt = False
+        self.want_pkt_port = None # What port required (or None)
+        self.got_pkt_port = None # On what port received?
+        self.packets_pending = 0 # Total pkts in all port queues
+        self.logger = logging.getLogger("dataplane")
+
+    def port_add(self, interface_name, port_number):
+        """
+        Add a port to the dataplane
+        TBD:  Max packets for queue?
+        @param interface_name The name of the physical interface like eth1
+        @param port_number The port number used to refer to the port
+        """
+
+        self.port_list[port_number] = DataPlanePort(interface_name,
+                                                    port_number, self)
+        self.port_list[port_number].start()
+
+    def send(self, port_number, packet):
+        """
+        Send a packet to the given port
+        @param port_number The port to send the data to
+        @param packet Raw packet data to send to port
+        """
+        self.logger.debug("Sending %d bytes to port %d" %
+                          (len(packet), port_number))
+        bytes = self.port_list[port_number].send(packet)
+        if bytes != len(packet):
+            self.logger.error("Unhandled send error, length mismatch %d != %d" %
+                     (bytes, len(packet)))
+        return bytes
+
+    def flood(self, packet):
+        """
+        Send a packet to all ports
+        @param packet Raw packet data to send to port
+        """
+        for port_number in self.port_list.keys():
+            bytes = self.port_list[port_number].send(packet)
+            if bytes != len(packet):
+                self.logger.error("Unhandled send error" +
+                         ", port %d, length mismatch %d != %d" %
+                         (port_number, bytes, len(packet)))
+
+    def _oldest_packet_find(self):
+        # Find port with oldest packet
+        min_time = 0
+        min_port = -1
+        for port_number in self.port_list.keys():
+            ptime = self.port_list[port_number].timestamp_head()
+            if ptime:
+                if (min_port == -1) or (ptime < min_time):
+                    min_time = ptime
+                    min_port = port_number
+        oft_assert(min_port != -1, "Could not find port when pkts pending")
+
+        return min_port
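The oldest-packet search above reduces to scanning the per-port head timestamps (None meaning an empty queue) and picking the smallest. A standalone sketch with a plain dict standing in for the port list:

```python
def oldest_port(head_times):
    """head_times maps port_number -> head timestamp or None.
    Returns the port with the oldest packet, or -1 if all queues are empty."""
    min_port, min_time = -1, 0
    for port, ptime in head_times.items():
        if ptime is not None and (min_port == -1 or ptime < min_time):
            min_port, min_time = port, ptime
    return min_port

times = {1: 10.5, 2: None, 3: 9.2}
```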
+
+    def poll(self, port_number=None, timeout=None):
+        """
+        Poll one or all dataplane ports for a packet
+
+        If port_number is given, get the oldest packet from that port.
+        Otherwise, find the port with the oldest packet and return
+        that packet.
+        @param port_number If set, get packet from this port
+        @param timeout If positive and no packet is available, block
+        until a packet is received or for this many seconds
+        @return The triple port_number, packet, pkt_time where packet
+        is received from port_number at time pkt_time.  If a timeout
+        occurs, return None, None, None
+        """
+
+        self.pkt_sync.acquire()
+
+        # Check if requested specific port and it has a packet
+        if port_number is not None and \
+                len(self.port_list[port_number].packets) != 0:
+            pkt, time = self.port_list[port_number].dequeue(use_lock=False)
+            self.pkt_sync.release()
+            oft_assert(pkt, "Poll: packet not found on port " +
+                       str(port_number))
+            return port_number, pkt, time
+
+        # Check if requested any port and some packet pending
+        if port_number is None and self.packets_pending != 0:
+            port = self._oldest_packet_find()
+            pkt, time = self.port_list[port].dequeue(use_lock=False)
+            self.pkt_sync.release()
+            oft_assert(pkt, "Poll: oldest packet not found")
+            return port, pkt, time
+
+        # No packet pending; blocking call requested?
+        if not timeout:
+            self.pkt_sync.release()
+            return None, None, None
+
+        # Desired packet isn't available and timeout is specified
+        # Already holding pkt_sync; wait on pkt_sync variable
+        self.want_pkt = True
+        self.want_pkt_port = port_number
+        self.got_pkt_port = None
+        self.pkt_sync.wait(timeout)
+        self.want_pkt = False
+        if self.got_pkt_port is not None:
+            pkt, time = \
+                self.port_list[self.got_pkt_port].dequeue(use_lock=False)
+            self.pkt_sync.release()
+            oft_assert(pkt, "Poll: pkt reported, but not found at " +
+                       str(self.got_pkt_port))
+            return self.got_pkt_port, pkt, time
+
+        self.pkt_sync.release()
+        self.logger.debug("Poll time out, no packet from " + str(port_number))
+
+        return None, None, None
+
+    def kill(self, join_threads=True):
+        """
+        Close all sockets for dataplane
+        @param join_threads If True call join on each thread
+        """
+        for port_number in self.port_list.keys():
+            self.port_list[port_number].kill()
+            if join_threads:
+                self.logger.debug("Joining " + str(port_number))
+                self.port_list[port_number].join()
+
+        self.logger.info("DataPlane shutdown")
+
+    def show(self, prefix=''):
+        print prefix + "Dataplane Controller"
+        print prefix + "Packets pending: " + str(self.packets_pending)
+        for pnum, port in self.port_list.items():
+            print prefix + "OpenFlow Port Number " + str(pnum)
+            port.show(prefix + '  ')
+
diff --git a/src/python/oftest/netutils.py b/src/python/oftest/netutils.py
new file mode 100644
index 0000000..613ac66
--- /dev/null
+++ b/src/python/oftest/netutils.py
@@ -0,0 +1,67 @@
+
+"""
+Network utilities for the OpenFlow test framework
+"""
+
+#############################################################################
+##                                                                         ##
+## Promiscuous mode enable/disable                                         ##
+##                                                                         ##
+## Based on code from Scapy by Philippe Biondi                             ##
+##                                                                         ##
+##                                                                         ##
+## This program is free software; you can redistribute it and/or modify it ##
+## under the terms of the GNU General Public License as published by the   ##
+## Free Software Foundation; either version 2, or (at your option) any     ##
+## later version.                                                          ##
+##                                                                         ##
+## This program is distributed in the hope that it will be useful, but     ##
+## WITHOUT ANY WARRANTY; without even the implied warranty of              ##
+## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU       ##
+## General Public License for more details.                                ##
+##                                                                         ##
+#############################################################################
+
+import socket
+from fcntl import ioctl
+import struct
+
+# From net/if_arp.h
+ARPHDR_ETHER = 1
+ARPHDR_LOOPBACK = 772
+
+# From bits/ioctls.h
+SIOCGIFHWADDR  = 0x8927          # Get hardware address
+SIOCGIFINDEX   = 0x8933          # name -> if_index mapping
+
+# From netpacket/packet.h
+PACKET_ADD_MEMBERSHIP  = 1
+PACKET_DROP_MEMBERSHIP = 2
+PACKET_MR_PROMISC      = 1
+
+# From bits/socket.h
+SOL_PACKET = 263
+
+def get_if(iff,cmd):
+  s=socket.socket()
+  ifreq = ioctl(s, cmd, struct.pack("16s16x",iff))
+  s.close()
+  return ifreq
+
+def get_if_hwaddr(iff):
+  addrfamily, mac = struct.unpack("16xh6s8x",get_if(iff,SIOCGIFHWADDR))
+  if addrfamily in [ARPHDR_ETHER,ARPHDR_LOOPBACK]:
+      return str2mac(mac)
+  else:
+      raise Exception("Unsupported address family (%i)"%addrfamily)
+
+def get_if_index(iff):
+  return int(struct.unpack("I",get_if(iff, SIOCGIFINDEX)[16:20])[0])
+
+def set_promisc(s,iff,val=1):
+  mreq = struct.pack("IHH8s", get_if_index(iff), PACKET_MR_PROMISC, 0, "")
+  if val:
+      cmd = PACKET_ADD_MEMBERSHIP
+  else:
+      cmd = PACKET_DROP_MEMBERSHIP
+  s.setsockopt(SOL_PACKET, cmd, mreq)
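The `"IHH8s"` format string in set_promisc() packs the kernel's `packet_mreq` structure: `int mr_ifindex; unsigned short mr_type; unsigned short mr_alen; unsigned char mr_address[8]`. Packing it standalone (interface index 2 here is just an example value):

```python
import struct

PACKET_MR_PROMISC = 1
# int ifindex, ushort mr_type, ushort mr_alen, 8-byte address (unused)
mreq = struct.pack("IHH8s", 2, PACKET_MR_PROMISC, 0, b"")
```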
+
diff --git a/src/python/oftest/oft_assert.py b/src/python/oftest/oft_assert.py
new file mode 100644
index 0000000..d773c84
--- /dev/null
+++ b/src/python/oftest/oft_assert.py
@@ -0,0 +1,28 @@
+"""
+OpenFlow Test Framework
+
+Framework assert definition
+"""
+
+import sys
+import logging
+
+def oft_assert(condition, string):
+    """
+    Test framework assertion check
+
+    @param condition The boolean condition to check
+    @param string String to print if error
+
+    If condition is not true, it is considered a test framework
+    failure and exit is called.
+
+    This assert is meant to represent a violation in the 
+    assumptions of how the test framework is supposed to work
+    (for example, an inconsistent packet queue state) rather than
+    a test failure.
+    """
+    if not condition:
+        logging.critical("Internal error: " + string)
+        sys.exit(1)
+
diff --git a/src/python/oftest/ofutils.py b/src/python/oftest/ofutils.py
new file mode 100644
index 0000000..5daba2f
--- /dev/null
+++ b/src/python/oftest/ofutils.py
@@ -0,0 +1,9 @@
+
+"""
+Utilities for the OpenFlow test framework
+"""
+
+import random
+
+def gen_xid():
+    return random.randrange(1,0xffffffff)
diff --git a/src/python/oftest/parse.py b/src/python/oftest/parse.py
new file mode 100644
index 0000000..11d6983
--- /dev/null
+++ b/src/python/oftest/parse.py
@@ -0,0 +1,334 @@
+"""
+OpenFlow message parsing functions
+"""
+
+import sys
+import logging
+from message import *
+from error import *
+from action import *
+from action_list import action_list
+from cstruct import *
+try:
+    import scapy.all as scapy
+except:
+    try:
+        import scapy as scapy
+    except:
+        sys.exit("Need to install scapy for packet parsing")
+
+"""
+parse.py
+Hand-written wrapper functions and classes for parsing OpenFlow
+messages into the message objects generated in the message module.
+"""
+
+parse_logger = logging.getLogger("parse")
+#parse_logger.setLevel(logging.DEBUG)
+
+# These message types are subclassed
+msg_type_subclassed = [
+    OFPT_STATS_REQUEST,
+    OFPT_STATS_REPLY,
+    OFPT_ERROR
+]
+
+# Maps from sub-types to classes
+stats_reply_to_class_map = {
+    OFPST_DESC                      : desc_stats_reply,
+    OFPST_AGGREGATE                 : aggregate_stats_reply,
+    OFPST_FLOW                      : flow_stats_reply,
+    OFPST_TABLE                     : table_stats_reply,
+    OFPST_PORT                      : port_stats_reply,
+    OFPST_QUEUE                     : queue_stats_reply
+}
+
+stats_request_to_class_map = {
+    OFPST_DESC                      : desc_stats_request,
+    OFPST_AGGREGATE                 : aggregate_stats_request,
+    OFPST_FLOW                      : flow_stats_request,
+    OFPST_TABLE                     : table_stats_request,
+    OFPST_PORT                      : port_stats_request,
+    OFPST_QUEUE                     : queue_stats_request
+}
+
+error_to_class_map = {
+    OFPET_HELLO_FAILED              : hello_failed_error_msg,
+    OFPET_BAD_REQUEST               : bad_request_error_msg,
+    OFPET_BAD_ACTION                : bad_action_error_msg,
+    OFPET_FLOW_MOD_FAILED           : flow_mod_failed_error_msg,
+    OFPET_PORT_MOD_FAILED           : port_mod_failed_error_msg,
+    OFPET_QUEUE_OP_FAILED           : queue_op_failed_error_msg
+}
+
+# Map from header type value to the underlying message class
+msg_type_to_class_map = {
+    OFPT_HELLO                      : hello,
+    OFPT_ERROR                      : error,
+    OFPT_ECHO_REQUEST               : echo_request,
+    OFPT_ECHO_REPLY                 : echo_reply,
+    OFPT_VENDOR                     : vendor,
+    OFPT_FEATURES_REQUEST           : features_request,
+    OFPT_FEATURES_REPLY             : features_reply,
+    OFPT_GET_CONFIG_REQUEST         : get_config_request,
+    OFPT_GET_CONFIG_REPLY           : get_config_reply,
+    OFPT_SET_CONFIG                 : set_config,
+    OFPT_PACKET_IN                  : packet_in,
+    OFPT_FLOW_REMOVED               : flow_removed,
+    OFPT_PORT_STATUS                : port_status,
+    OFPT_PACKET_OUT                 : packet_out,
+    OFPT_FLOW_MOD                   : flow_mod,
+    OFPT_PORT_MOD                   : port_mod,
+    OFPT_STATS_REQUEST              : stats_request,
+    OFPT_STATS_REPLY                : stats_reply,
+    OFPT_BARRIER_REQUEST            : barrier_request,
+    OFPT_BARRIER_REPLY              : barrier_reply,
+    OFPT_QUEUE_GET_CONFIG_REQUEST   : queue_get_config_request,
+    OFPT_QUEUE_GET_CONFIG_REPLY     : queue_get_config_reply
+}
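The maps above implement a type-code-to-class dispatch table: look up the numeric header type and instantiate the matching class. A minimal stand-in with two invented message classes (not the framework's real ones):

```python
class Hello(object):
    type = 0

class EchoRequest(object):
    type = 2

# Dispatch table: numeric type code -> message class
type_to_class = {0: Hello, 2: EchoRequest}

def message_for_type(msg_type):
    """Instantiate the class for msg_type, or None if unknown."""
    cls = type_to_class.get(msg_type)
    if cls is None:
        return None
    return cls()
```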
+
+def _of_message_to_object(binary_string):
+    """
+    Map a binary string to the corresponding class.
+
+    Appropriately resolves subclasses
+    """
+    hdr = ofp_header()
+    hdr.unpack(binary_string)
+    if hdr.type not in msg_type_subclassed:
+        try:
+            return msg_type_to_class_map[hdr.type]()
+        except KeyError:
+            parse_logger.error("Unknown message type: " + str(hdr.type))
+            return None
+    if hdr.type == OFPT_STATS_REQUEST:
+        sub_hdr = ofp_stats_request()
+        sub_hdr.unpack(binary_string[OFP_HEADER_BYTES:])
+        try:
+            obj = stats_request_to_class_map[sub_hdr.type]()
+        except KeyError:
+            obj = None
+        return obj
+    elif hdr.type == OFPT_STATS_REPLY:
+        sub_hdr = ofp_stats_reply()
+        sub_hdr.unpack(binary_string[OFP_HEADER_BYTES:])
+        try:
+            obj = stats_reply_to_class_map[sub_hdr.type]()
+        except KeyError:
+            obj = None
+        return obj
+    elif hdr.type == OFPT_ERROR:
+        sub_hdr = ofp_error_msg()
+        sub_hdr.unpack(binary_string[OFP_HEADER_BYTES:])
+        try:
+            return error_to_class_map[sub_hdr.type]()
+        except KeyError:
+            parse_logger.error("Unknown error sub-type: " + str(sub_hdr.type))
+            return None
+    else:
+        parse_logger.error("Cannot parse pkt to message")
+        return None
+
+def of_message_parse(binary_string, raw=False):
+    """
+    Parse an OpenFlow packet
+
+    Parses a raw OpenFlow packet into a Python class, with class
+    members fully populated.
+
+    @param binary_string The packet (string) to be parsed
+    @param raw If true, interpret the packet as an L2 packet.  Not
+    yet supported.
+    @return An object of some message class, or None on failure
+    Note that any data beyond the parsed message is not returned
+
+    """
+
+    if raw:
+        parse_logger.error("raw packet message parsing not supported")
+        return None
+
+    obj = _of_message_to_object(binary_string)
+    if obj:
+        obj.unpack(binary_string)
+    return obj
+
+
+def of_header_parse(binary_string, raw=False):
+    """
+    Parse only the header from an OpenFlow packet
+
+    Parses the header from a raw OpenFlow packet into an
+    ofp_header Python object.
+
+    @param binary_string The packet (string) to be parsed
+    @param raw If true, interpret the packet as an L2 packet.  Not
+    yet supported.
+    @return An ofp_header object
+
+    """
+
+    if raw:
+        parse_logger.error("raw packet message parsing not supported")
+        return None
+
+    hdr = ofp_header()
+    hdr.unpack(binary_string)
+
+    return hdr
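The header-only parse above depends on `ofp_header.unpack` reading the fixed 8-byte header that prefixes every OpenFlow message. As a standalone sketch with plain `struct` (no oftest dependencies; field layout version/type/length/xid per the OpenFlow 1.0 wire format):

```python
import struct

OFP_HEADER_FMT = "!BBHI"   # version, type, length, xid (network byte order)
OFP_HEADER_BYTES = struct.calcsize(OFP_HEADER_FMT)  # 8

def parse_header(binary_string):
    """Return (version, msg_type, length, xid) from a raw OpenFlow message."""
    if len(binary_string) < OFP_HEADER_BYTES:
        return None  # too short to hold a complete header
    return struct.unpack(OFP_HEADER_FMT, binary_string[:OFP_HEADER_BYTES])

# Example: a version-1 header with type 2 (echo request), length 8, xid 42
raw = struct.pack(OFP_HEADER_FMT, 1, 2, 8, 42)
version, msg_type, length, xid = parse_header(raw)
```

The `type` field recovered this way is what `_of_message_to_object` uses to select a message class.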
+
+map_wc_field_to_match_member = {
+    'OFPFW_DL_VLAN'                 : 'dl_vlan',
+    'OFPFW_DL_SRC'                  : 'dl_src',
+    'OFPFW_DL_DST'                  : 'dl_dst',
+    'OFPFW_DL_TYPE'                 : 'dl_type',
+    'OFPFW_NW_PROTO'                : 'nw_proto',
+    'OFPFW_TP_SRC'                  : 'tp_src',
+    'OFPFW_TP_DST'                  : 'tp_dst',
+    'OFPFW_NW_SRC_SHIFT'            : 'nw_src_shift',
+    'OFPFW_NW_SRC_BITS'             : 'nw_src_bits',
+    'OFPFW_NW_SRC_MASK'             : 'nw_src_mask',
+    'OFPFW_NW_SRC_ALL'              : 'nw_src_all',
+    'OFPFW_NW_DST_SHIFT'            : 'nw_dst_shift',
+    'OFPFW_NW_DST_BITS'             : 'nw_dst_bits',
+    'OFPFW_NW_DST_MASK'             : 'nw_dst_mask',
+    'OFPFW_NW_DST_ALL'              : 'nw_dst_all',
+    'OFPFW_DL_VLAN_PCP'             : 'dl_vlan_pcp',
+    'OFPFW_NW_TOS'                  : 'nw_tos'
+}
+
+
+def parse_mac(mac_str):
+    """
+    Parse a MAC address
+
+    Parse a ':'-separated string of hex digits into an array of
+    integer values.  '00:d0:05:5d:24:00' => [0, 208, 5, 93, 36, 0]
+    @param mac_str The string to convert
+    @return Array of 6 integer values
+    """
+    return map(lambda val: int(val, 16), mac_str.split(":"))
+
+def parse_ip(ip_str):
+    """
+    Parse an IP address
+
+    Parse a '.'-separated string of decimal digits into a
+    host-ordered integer.  '172.24.74.77' => 0xac184a4d
+    @param ip_str The string to convert
+    @return Integer value
+    """
+    array = map(lambda val: int(val), ip_str.split("."))
+    val = 0
+    for a in array:
+        val <<= 8
+        val += a
+    return val
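The two helpers above can be expressed without `eval` (which is risky on string fragments) by giving `int()` an explicit base; a minimal standalone equivalent:

```python
def parse_mac(mac_str):
    # '00:d0:05:5d:24:00' -> [0, 208, 5, 93, 36, 0]
    return [int(byte, 16) for byte in mac_str.split(":")]

def parse_ip(ip_str):
    # '172.24.74.77' -> 0xac184a4d as a host-ordered integer
    val = 0
    for octet in ip_str.split("."):
        val = (val << 8) | int(octet)
    return val
```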
+
+def packet_type_classify(ether):
+    # scapy raises IndexError when a layer is not present
+    try:
+        dot1q = ether[scapy.Dot1Q]
+    except IndexError:
+        dot1q = None
+
+    try:
+        ip = ether[scapy.IP]
+    except IndexError:
+        ip = None
+
+    try:
+        tcp = ether[scapy.TCP]
+    except IndexError:
+        tcp = None
+
+    try:
+        udp = ether[scapy.UDP]
+    except IndexError:
+        udp = None
+
+    try:
+        icmp = ether[scapy.ICMP]
+    except IndexError:
+        icmp = None
+
+    # @todo arp is not yet supported
+    arp = None
+    return (dot1q, ip, tcp, udp, icmp, arp)
+
+def packet_to_flow_match(packet, pkt_format="L2"):
+    """
+    Create a flow match that matches packet with the given wildcards
+
+    @param packet The packet to use as a flow template
+    @param pkt_format Currently only L2 is supported.  Will indicate the 
+    overall packet type for parsing
+    @return An ofp_match object if successful.  None if format is not
+    recognized.  The wildcards of the match will be cleared for the
+    values extracted from the packet.
+
+    @todo check min length of packet
+    @todo Check if packet is other than L2 format
+    @todo Implement ICMP and ARP fields
+    """
+
+    #@todo check min length of packet
+    if pkt_format.upper() != "L2":
+        parse_logger.error("Only L2 supported for packet_to_flow")
+        return None
+
+    if isinstance(packet, str):
+        ether = scapy.Ether(packet)
+    else:
+        ether = packet
+
+    # For now, assume ether IP packet and ignore wildcards
+    try:
+        (dot1q, ip, tcp, udp, icmp, arp) = packet_type_classify(ether)
+    except Exception:
+        parse_logger.error("packet_to_flow_match: Classify error")
+        return None
+
+    match = ofp_match()
+    match.wildcards = OFPFW_ALL
+    #@todo Check if packet is other than L2 format
+    match.dl_dst = parse_mac(ether.dst)
+    match.wildcards &= ~OFPFW_DL_DST
+    match.dl_src = parse_mac(ether.src)
+    match.wildcards &= ~OFPFW_DL_SRC
+    match.dl_type = ether.type
+    match.wildcards &= ~OFPFW_DL_TYPE
+
+    if dot1q:
+        match.dl_vlan = dot1q.vlan
+        match.dl_vlan_pcp = dot1q.prio
+        match.dl_type = dot1q.type
+    else:
+        match.dl_vlan = OFP_VLAN_NONE
+        match.dl_vlan_pcp = 0
+    match.wildcards &= ~OFPFW_DL_VLAN
+    match.wildcards &= ~OFPFW_DL_VLAN_PCP
+
+    if ip:
+        match.nw_src = parse_ip(ip.src)
+        match.wildcards &= ~OFPFW_NW_SRC_MASK
+        match.nw_dst = parse_ip(ip.dst)
+        match.wildcards &= ~OFPFW_NW_DST_MASK
+        match.nw_tos = ip.tos
+        match.wildcards &= ~OFPFW_NW_TOS
+
+    if tcp:
+        match.nw_proto = 6
+        match.wildcards &= ~OFPFW_NW_PROTO
+    elif udp:
+        tcp = udp  # reuse the TCP port handling below for UDP ports
+        match.nw_proto = 17
+        match.wildcards &= ~OFPFW_NW_PROTO
+
+    if tcp:
+        match.tp_src = tcp.sport
+        match.wildcards &= ~OFPFW_TP_SRC
+        match.tp_dst = tcp.dport
+        match.wildcards &= ~OFPFW_TP_DST
+
+    if icmp:
+        match.nw_proto = 1
+        match.tp_src = icmp.type
+        match.tp_dst = icmp.code
+
+    #@todo Implement ARP fields
+
+    return match
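`packet_to_flow_match` follows one pattern throughout: start with every field wildcarded, then clear one wildcard bit per field actually copied out of the packet. A minimal standalone sketch of that pattern (the bit values here are illustrative, not the real `OFPFW_*` constants):

```python
# Hypothetical wildcard bits, for illustration only
OFPFW_DL_SRC  = 1 << 0
OFPFW_DL_DST  = 1 << 1
OFPFW_DL_TYPE = 1 << 2
OFPFW_ALL     = (1 << 3) - 1

def build_match(fields):
    """fields: dict of field name -> extracted value.
    Returns (match_dict, wildcards); bits still set mean 'don't care'."""
    wildcards = OFPFW_ALL
    match = {}
    bit_for = {"dl_src": OFPFW_DL_SRC, "dl_dst": OFPFW_DL_DST,
               "dl_type": OFPFW_DL_TYPE}
    for name, value in fields.items():
        match[name] = value
        wildcards &= ~bit_for[name]   # this field is now an exact match
    return match, wildcards

match, wc = build_match({"dl_src": "00:01", "dl_type": 0x0800})
# dl_dst was never extracted, so its wildcard bit is still set in wc
```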
diff --git a/src/python/setup.py b/src/python/setup.py
new file mode 100644
index 0000000..b5a462d
--- /dev/null
+++ b/src/python/setup.py
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+'''Setuptools params'''
+
+from setuptools import setup, find_packages
+
+modname = distname = 'oftest'
+
+setup(
+    name='oftest',
+    version='0.0.1',
+    description='An OpenFlow Testing Framework',
+    author='Dan Talayco/Tatsuya Yabe',
+    author_email='dtalayco@stanford.edu',
+    packages=find_packages(),
+    long_description="""\
+OpenFlow test framework package.
+""",
+    classifiers=[
+        "License :: OSI Approved :: GNU General Public License (GPL)",
+        "Programming Language :: Python",
+        "Development Status :: 4 - Beta",
+        "Intended Audience :: Developers",
+        "Topic :: Internet",
+    ],
+    keywords='networking protocol Internet OpenFlow validation',
+    license='unspecified',
+    install_requires=[
+        'setuptools',
+        'doxypy',
+        'pylint'
+    ])
diff --git a/tests/basic.py b/tests/basic.py
new file mode 100644
index 0000000..a173a30
--- /dev/null
+++ b/tests/basic.py
@@ -0,0 +1,368 @@
+"""
+Basic protocol and dataplane test cases
+
+It is recommended that these definitions be kept in their own
+namespace as different groups of tests will likely define 
+similar identifiers.
+
+Current Assumptions:
+
+  The function test_set_init is called with a complete configuration
+dictionary prior to the invocation of any tests from this file.
+
+  The switch is actively attempting to contact the controller at the address
+indicated in oft_config
+
+"""
+
+import time
+import signal
+import sys
+import logging
+
+import unittest
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+
+from testutils import *
+
+#@var basic_port_map Local copy of the configuration map from OF port
+# numbers to OS interfaces
+basic_port_map = None
+#@var basic_logger Local logger object
+basic_logger = None
+#@var basic_config Local copy of global configuration data
+basic_config = None
+
+test_prio = {}
+
+def test_set_init(config):
+    """
+    Set up function for basic test classes
+
+    @param config The configuration dictionary; see oft
+    """
+
+    global basic_port_map
+    global basic_logger
+    global basic_config
+
+    basic_logger = logging.getLogger("basic")
+    basic_logger.info("Initializing test set")
+    basic_port_map = config["port_map"]
+    basic_config = config
+
+class SimpleProtocol(unittest.TestCase):
+    """
+    Root class for setting up the controller
+    """
+
+    def sig_handler(self, v1, v2):
+        basic_logger.critical("Received interrupt signal; exiting")
+        print "Received interrupt signal; exiting"
+        self.clean_shutdown = False
+        self.tearDown()
+        sys.exit(1)
+
+    def setUp(self):
+        self.logger = basic_logger
+        self.config = basic_config
+        signal.signal(signal.SIGINT, self.sig_handler)
+        basic_logger.info("** START TEST CASE " + str(self))
+        self.controller = controller.Controller(
+            host=basic_config["controller_host"],
+            port=basic_config["controller_port"])
+        # clean_shutdown should be set to False to force quit app
+        self.clean_shutdown = True
+        self.controller.start()
+        #@todo Add an option to wait for a pkt transaction to ensure version
+        # compatibility?
+        self.controller.connect(timeout=20)
+        if not self.controller.active:
+            print "Controller startup failed; exiting"
+            sys.exit(1)
+        basic_logger.info("Connected " + str(self.controller.switch_addr))
+
+    def tearDown(self):
+        basic_logger.info("** END TEST CASE " + str(self))
+        self.controller.shutdown()
+        #@todo Review if join should be done on clean_shutdown
+        if self.clean_shutdown:
+            self.controller.join()
+
+    def runTest(self):
+        # Just a simple sanity check as illustration
+        basic_logger.info("Running simple proto test")
+        self.assertTrue(self.controller.switch_socket is not None,
+                        str(self) + ': No connection to switch')
+
+    def assertTrue(self, cond, msg):
+        if not cond:
+            basic_logger.error("** FAILED ASSERTION: " + msg)
+        unittest.TestCase.assertTrue(self, cond, msg)
+
+test_prio["SimpleProtocol"] = 1
+
+class SimpleDataPlane(SimpleProtocol):
+    """
+    Root class that sets up the controller and dataplane
+    """
+    def setUp(self):
+        SimpleProtocol.setUp(self)
+        self.dataplane = dataplane.DataPlane()
+        for of_port, ifname in basic_port_map.items():
+            self.dataplane.port_add(ifname, of_port)
+
+    def tearDown(self):
+        basic_logger.info("Teardown for simple dataplane test")
+        SimpleProtocol.tearDown(self)
+        self.dataplane.kill(join_threads=self.clean_shutdown)
+        basic_logger.info("Teardown done")
+
+    def runTest(self):
+        self.assertTrue(self.controller.switch_socket is not None,
+                        str(self) + ': No connection to switch')
+        # self.dataplane.show()
+        # Would like an assert that checks the data plane
+
+class DataPlaneOnly(unittest.TestCase):
+    """
+    Root class that sets up only the dataplane
+    """
+
+    def sig_handler(self, v1, v2):
+        basic_logger.critical("Received interrupt signal; exiting")
+        print "Received interrupt signal; exiting"
+        self.clean_shutdown = False
+        self.tearDown()
+        sys.exit(1)
+
+    def setUp(self):
+        self.clean_shutdown = False
+        self.logger = basic_logger
+        self.config = basic_config
+        signal.signal(signal.SIGINT, self.sig_handler)
+        basic_logger.info("** START DataPlaneOnly CASE " + str(self))
+        self.dataplane = dataplane.DataPlane()
+        for of_port, ifname in basic_port_map.items():
+            self.dataplane.port_add(ifname, of_port)
+
+    def tearDown(self):
+        basic_logger.info("Teardown for simple dataplane test")
+        self.dataplane.kill(join_threads=self.clean_shutdown)
+        basic_logger.info("Teardown done")
+
+    def runTest(self):
+        basic_logger.info("DataPlaneOnly")
+        # self.dataplane.show()
+        # Would like an assert that checks the data plane
+
+class Echo(SimpleProtocol):
+    """
+    Test echo response with no data
+    """
+    def runTest(self):
+        request = message.echo_request()
+        response, pkt = self.controller.transact(request)
+        self.assertEqual(response.header.type, ofp.OFPT_ECHO_REPLY,
+                         'response is not echo_reply')
+        self.assertEqual(request.header.xid, response.header.xid,
+                         'response xid != request xid')
+        self.assertEqual(len(response.data), 0, 'response data non-empty')
+
+class EchoWithData(SimpleProtocol):
+    """
+    Test echo response with short string data
+    """
+    def runTest(self):
+        request = message.echo_request()
+        request.data = 'OpenFlow Will Rule The World'
+        response, pkt = self.controller.transact(request)
+        self.assertEqual(response.header.type, ofp.OFPT_ECHO_REPLY,
+                         'response is not echo_reply')
+        self.assertEqual(request.header.xid, response.header.xid,
+                         'response xid != request xid')
+        self.assertEqual(request.data, response.data,
+                         'response data does not match request')
+
+class PacketIn(SimpleDataPlane):
+    """
+    Test packet in function
+
+    Send a packet to each dataplane port and verify that a packet
+    in message is received from the controller for each
+    """
+    def runTest(self):
+        # Construct packet to send to dataplane
+        # Send packet to dataplane, once to each port
+        # Poll controller with expect message type packet in
+
+        rc = delete_all_flows(self.controller, basic_logger)
+        self.assertEqual(rc, 0, "Failed to delete all flows")
+
+        for of_port in basic_port_map.keys():
+            basic_logger.info("PKT IN test, port " + str(of_port))
+            pkt = simple_tcp_packet()
+            self.dataplane.send(of_port, str(pkt))
+            #@todo Check for unexpected messages?
+            (response, raw) = self.controller.poll(ofp.OFPT_PACKET_IN, 2)
+
+            self.assertTrue(response is not None, 
+                            'Packet in message not received on port ' + 
+                            str(of_port))
+            if str(pkt) != response.data:
+                basic_logger.debug("pkt  len " + str(len(str(pkt))) +
+                                   ": " + str(pkt))
+                basic_logger.debug("resp len " + 
+                                   str(len(str(response.data))) + 
+                                   ": " + str(response.data))
+
+            self.assertEqual(str(pkt), response.data,
+                             'Response packet does not match sent packet' +
+                             ' for port ' + str(of_port))
+
+class PacketOut(SimpleDataPlane):
+    """
+    Test packet out function
+
+    Send packet out message to controller for each dataplane port and
+    verify the packet appears on the appropriate dataplane port
+    """
+    def runTest(self):
+        # Construct packet to send to dataplane
+        # Send packet to dataplane
+        # Poll controller with expect message type packet in
+
+        rc = delete_all_flows(self.controller, basic_logger)
+        self.assertEqual(rc, 0, "Failed to delete all flows")
+
+        # These will get put into function
+        outpkt = simple_tcp_packet()
+        of_ports = basic_port_map.keys()
+        of_ports.sort()
+        for dp_port in of_ports:
+            msg = message.packet_out()
+            msg.data = str(outpkt)
+            act = action.action_output()
+            act.port = dp_port
+            self.assertTrue(msg.actions.add(act), 'Could not add action to msg')
+
+            basic_logger.info("PacketOut to: " + str(dp_port))
+            rv = self.controller.message_send(msg)
+            self.assertTrue(rv == 0, "Error sending out message")
+
+            (of_port, pkt, pkt_time) = self.dataplane.poll(timeout=1)
+
+            self.assertTrue(pkt is not None, 'Packet not received')
+            basic_logger.info("PacketOut: got pkt from " + str(of_port))
+            if of_port is not None:
+                self.assertEqual(of_port, dp_port, "Unexpected receive port")
+            self.assertEqual(str(outpkt), str(pkt),
+                             'Response packet does not match sent packet')
+
+class FlowStatsGet(SimpleProtocol):
+    """
+    Get stats 
+
+    Simply verify stats get transaction
+    """
+    def runTest(self):
+        basic_logger.info("Running StatsGet")
+        basic_logger.info("Inserting trial flow")
+        request = message.flow_mod()
+        request.match.wildcards = ofp.OFPFW_ALL
+        request.buffer_id = 0xffffffff
+        rv = self.controller.message_send(request)
+        self.assertTrue(rv != -1, "Failed to insert test flow")
+        
+        basic_logger.info("Sending flow request")
+        request = message.flow_stats_request()
+        request.out_port = ofp.OFPP_NONE
+        request.table_id = 0xff
+        request.match.wildcards = 0 # ofp.OFPFW_ALL
+        response, pkt = self.controller.transact(request, timeout=2)
+        self.assertTrue(response is not None, "Did not get response")
+        basic_logger.debug(response.show())
+
+class TableStatsGet(SimpleProtocol):
+    """
+    Get table stats 
+
+    Simply verify table stats get transaction
+    """
+    def runTest(self):
+        basic_logger.info("Running TableStatsGet")
+        basic_logger.info("Inserting trial flow")
+        request = message.flow_mod()
+        request.match.wildcards = ofp.OFPFW_ALL
+        request.buffer_id = 0xffffffff
+        rv = self.controller.message_send(request)
+        self.assertTrue(rv != -1, "Failed to insert test flow")
+        
+        basic_logger.info("Sending table stats request")
+        request = message.table_stats_request()
+        response, pkt = self.controller.transact(request, timeout=2)
+        self.assertTrue(response is not None, "Did not get response")
+        basic_logger.debug(response.show())
+
+class FlowMod(SimpleProtocol):
+    """
+    Insert a flow
+
+    Simple verification of a flow mod transaction
+    """
+
+    def runTest(self):
+        basic_logger.info("Running " + str(self))
+        request = message.flow_mod()
+        request.match.wildcards = ofp.OFPFW_ALL
+        request.buffer_id = 0xffffffff
+        rv = self.controller.message_send(request)
+        self.assertTrue(rv != -1, "Error installing flow mod")
+
+class PortConfigMod(SimpleProtocol):
+    """
+    Modify a bit in port config and verify changed
+
+    Get the switch configuration, modify the port configuration
+    and write it back; get the config again and verify changed.
+    Then set it back to the way it was.
+    """
+
+    def runTest(self):
+        basic_logger.info("Running " + str(self))
+        for of_port, ifname in basic_port_map.items(): # Grab first port
+            break
+
+        (hw_addr, config, advert) = \
+            port_config_get(self.controller, of_port, basic_logger)
+        self.assertTrue(config is not None, "Did not get port config")
+
+        basic_logger.debug("No flood bit port " + str(of_port) + " is now " + 
+                           str(config & ofp.OFPPC_NO_FLOOD))
+
+        rv = port_config_set(self.controller, of_port,
+                             config ^ ofp.OFPPC_NO_FLOOD, ofp.OFPPC_NO_FLOOD,
+                             basic_logger)
+        self.assertTrue(rv != -1, "Error sending port mod")
+
+        # Verify change took place with same feature request
+        (hw_addr, config2, advert) = \
+            port_config_get(self.controller, of_port, basic_logger)
+        basic_logger.debug("No flood bit port " + str(of_port) + " is now " + 
+                           str(config2 & ofp.OFPPC_NO_FLOOD))
+        self.assertTrue(config2 is not None, "Did not get port config2")
+        self.assertTrue(config2 & ofp.OFPPC_NO_FLOOD !=
+                        config & ofp.OFPPC_NO_FLOOD,
+                        "Bit change did not take")
+        # Set it back
+        rv = port_config_set(self.controller, of_port, config, 
+                             ofp.OFPPC_NO_FLOOD, basic_logger)
+        self.assertTrue(rv != -1, "Error sending port mod")
+
+if __name__ == "__main__":
+    print "Please run through oft script:  ./oft --test_spec=basic"
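`PortConfigMod` above flips a single bit by sending `config ^ ofp.OFPPC_NO_FLOOD` with a mask that selects only that bit, then restores the original value under the same mask. The bit arithmetic, sketched standalone (the bit position is an assumption here, used only for illustration):

```python
OFPPC_NO_FLOOD = 1 << 4   # assumed bit position, for illustration

def apply_port_mod(config, new_bits, mask):
    """Emulate a port_mod: only bits selected by mask are taken from new_bits."""
    return (config & ~mask) | (new_bits & mask)

original = 0b1001
# Toggle the NO_FLOOD bit without touching any other config bit
toggled = apply_port_mod(original, original ^ OFPPC_NO_FLOOD, OFPPC_NO_FLOOD)
# Set it back by writing the original bits under the same mask
restored = apply_port_mod(toggled, original, OFPPC_NO_FLOOD)
```

XOR with the bit toggles it regardless of its current state, which is why the test can verify "bit changed" without knowing its starting value.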
diff --git a/tests/caps.py b/tests/caps.py
new file mode 100644
index 0000000..d0bb2d0
--- /dev/null
+++ b/tests/caps.py
@@ -0,0 +1,163 @@
+"""
+Basic capabilities and capacities tests
+
+"""
+
+import logging
+
+import unittest
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+import oftest.parse as parse
+import basic
+
+from testutils import *
+
+#@var caps_port_map Local copy of the configuration map from OF port
+# numbers to OS interfaces
+caps_port_map = None
+#@var caps_logger Local logger object
+caps_logger = None
+#@var caps_config Local copy of global configuration data
+caps_config = None
+
+# For test priority
+test_prio = {}
+
+def test_set_init(config):
+    """
+    Set up function for caps test classes
+
+    @param config The configuration dictionary; see oft
+    """
+
+    global caps_port_map
+    global caps_logger
+    global caps_config
+
+    caps_logger = logging.getLogger("caps")
+    caps_logger.info("Initializing caps test set")
+    caps_port_map = config["port_map"]
+    caps_config = config
+
+
+def flow_caps_common(obj, is_exact=True):
+    """
+    The common function for the flow capacity tests
+
+    @param obj The calling object
+    @param is_exact If True, checking exact match; else wildcard
+    """
+
+    global caps_port_map
+    of_ports = caps_port_map.keys()
+    of_ports.sort()
+
+    rv = delete_all_flows(obj.controller, caps_logger)
+    obj.assertEqual(rv, 0, "Failed to delete all flows")
+
+    pkt = simple_tcp_packet()
+    match = parse.packet_to_flow_match(pkt)
+    obj.assertTrue(match is not None, "Could not generate flow match from pkt")
+    port = of_ports[0]  # use the first port
+    match.in_port = port
+    match.nw_src = 1
+    request = message.flow_mod()
+    count_check = 101  # fixme:  better way to determine this.
+    if is_exact:
+        match.wildcards = 0
+    else:
+        match.wildcards |= ofp.OFPFW_DL_SRC
+
+    request.match = match
+    request.buffer_id = 0xffffffff      # set to NONE
+    caps_logger.info(request.show())
+
+    tstats = message.table_stats_request()
+    try:  # Determine the table index to check (or "all")
+        table_idx = caps_config["caps_table_idx"]
+    except KeyError:
+        table_idx = -1  # Accumulate all table counts
+
+    # Make sure we can install at least one flow
+    caps_logger.info("Inserting initial flow")
+    rv = obj.controller.message_send(request)
+    obj.assertTrue(rv != -1, "Error installing flow mod")
+    do_barrier(obj.controller)
+    flow_count = 1
+
+    caps_logger.info("Table idx: " + str(table_idx))
+    caps_logger.info("Check every " + str(count_check) + " inserts")
+
+    while True:
+        request.match.nw_src += 1
+        rv = obj.controller.message_send(request)
+#        do_barrier(obj.controller)
+        flow_count += 1
+        if flow_count % count_check == 0:
+            response, pkt = obj.controller.transact(tstats, timeout=2)
+            obj.assertTrue(response is not None, "Get tab stats failed")
+            caps_logger.info(response.show())
+            if table_idx == -1:  # Accumulate for all tables
+                active_flows = 0
+                for stats in response.stats:
+                    active_flows += stats.active_count
+            else: # Table index to use specified in config
+                active_flows = response.stats[table_idx].active_count
+            if active_flows != flow_count:
+                break
+
+    caps_logger.error("RESULT: " + str(flow_count) + " flows inserted")
+    caps_logger.error("RESULT: " + str(active_flows) + " flows reported")
+
+
+class FillTableExact(basic.SimpleProtocol):
+    """
+    Fill the flow table with exact matches; can take a while
+
+    Fill table until no more flows can be added.  Report result.
+    Increment the source IP address.  Assume the flow table will
+    fill in less than 4 billion inserts
+
+    Checking the number of flows in the tables is expensive, so
+    it is only done periodically.  This is controlled by the
+    count_check variable.
+
+    A switch may have multiple tables.  The default behaviour
+    is to count all the flows in all the tables.  By setting 
+    the parameter "caps_table_idx" in the configuration array,
+    you can control which table to check.
+    """
+    def runTest(self):
+        caps_logger.info("Running " + str(self))
+        flow_caps_common(self)
+
+test_prio["FillTableExact"] = -1
+
+class FillTableWC(basic.SimpleProtocol):
+    """
+    Fill the flow table with wildcard matches
+
+    Fill table using wildcard entries until no more flows can be
+    added.  Report result.
+    Increment the source IP address.  Assume the flow table will
+    fill in less than 4 billion inserts
+
+    Checking the number of flows in the tables is expensive, so
+    it is only done periodically.  This is controlled by the
+    count_check variable.
+
+    A switch may have multiple tables.  The default behaviour
+    is to count all the flows in all the tables.  By setting 
+    the parameter "caps_table_idx" in the configuration array,
+    you can control which table to check.
+
+    """
+    def runTest(self):
+        caps_logger.info("Running " + str(self))
+        flow_caps_common(self, is_exact=False)
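The fill strategy in `flow_caps_common` inserts flows one at a time but only polls table stats every `count_check` inserts, stopping once the reported active count falls behind the insert count. That control flow, sketched against a fake fixed-capacity table instead of a real switch:

```python
def fill_table(capacity, count_check=101):
    """Insert until the fake table stops accepting flows.
    Returns (inserted, active): attempts made vs. flows actually held."""
    active = 0
    inserted = 0
    while True:
        inserted += 1
        if active < capacity:
            active += 1                  # the insert was accepted
        if inserted % count_check == 0:  # periodic stats check
            if active != inserted:
                break                    # table stopped keeping up: full
    return inserted, active

inserted, active = fill_table(capacity=250)
```

Note the reported fill point overshoots by up to `count_check - 1` inserts; the real test reports both numbers for exactly this reason.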
diff --git a/tests/flow_expire.py b/tests/flow_expire.py
new file mode 100644
index 0000000..cc1fee9
--- /dev/null
+++ b/tests/flow_expire.py
@@ -0,0 +1,106 @@
+"""
+Flow expire test case.
+Similar to the Flow expire test case in the Perl test harness.
+
+"""
+
+import logging
+
+import unittest
+import random
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+import oftest.parse as parse
+import basic
+
+from testutils import *
+from time import sleep
+
+#@var port_map Local copy of the configuration map from OF port
+# numbers to OS interfaces
+pa_port_map = None
+#@var pa_logger Local logger object
+pa_logger = None
+#@var pa_config Local copy of global configuration data
+pa_config = None
+
+def test_set_init(config):
+    """
+    Set up function for packet action test classes
+
+    @param config The configuration dictionary; see oft
+    """
+
+    global pa_port_map
+    global pa_logger
+    global pa_config
+
+    pa_logger = logging.getLogger("pkt_act")
+    pa_logger.info("Initializing test set")
+    pa_port_map = config["port_map"]
+    pa_config = config
+
+class FlowExpire(basic.SimpleDataPlane):
+    """
+    Verify flow expire messages are properly generated.
+
+    Generate a packet
+    Generate and install a matching flow with idle timeout = 1 sec
+    Verify the flow expiration message is received
+    """
+    def runTest(self):
+        global pa_port_map
+
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        rc = delete_all_flows(self.controller, pa_logger)
+        self.assertEqual(rc, 0, "Failed to delete all flows")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        act = action.action_output()
+
+        ingress_port = pa_config["base_of_port"]
+        egress_port  = (pa_config["base_of_port"] + 1) % len(of_ports)
+        pa_logger.info("Ingress " + str(ingress_port) + 
+                       " to egress " + str(egress_port))
+        
+        match.in_port = ingress_port
+        
+        request = message.flow_mod()
+        request.match = match
+        request.cookie = random.randint(0,9007199254740992)
+        request.buffer_id = 0xffffffff
+        request.idle_timeout = 1
+        request.flags |= ofp.OFPFF_SEND_FLOW_REM
+        act.port = egress_port
+        self.assertTrue(request.actions.add(act), "Could not add action")
+        
+        pa_logger.info("Inserting flow")
+        rv = self.controller.message_send(request)
+        self.assertTrue(rv != -1, "Error installing flow mod")
+        do_barrier(self.controller)
+
+        (response, raw) = self.controller.poll(ofp.OFPT_FLOW_REMOVED, 2)
+        
+        self.assertTrue(response is not None, 
+                        'Did not receive flow removed message')
+
+        self.assertEqual(request.cookie, response.cookie,
+                         'Cookies do not match')
+
+        self.assertEqual(ofp.OFPRR_IDLE_TIMEOUT, response.reason,
+                         'Flow table entry removal reason is not idle_timeout')
+
+        self.assertEqual(match, response.match,
+                         'Flow table entry does not match')
+        
diff --git a/tests/flow_stats.py b/tests/flow_stats.py
new file mode 100644
index 0000000..549382f
--- /dev/null
+++ b/tests/flow_stats.py
@@ -0,0 +1,123 @@
+"""
+Flow stats test case.
+Similar to the Flow stats test case in the Perl test harness.
+
+"""
+
+import logging
+
+import unittest
+import random
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+import oftest.parse as parse
+import basic
+
+from testutils import *
+from time import sleep
+
+#@var port_map Local copy of the configuration map from OF port
+# numbers to OS interfaces
+pa_port_map = None
+#@var pa_logger Local logger object
+pa_logger = None
+#@var pa_config Local copy of global configuration data
+pa_config = None
+
+def test_set_init(config):
+    """
+    Set up function for packet action test classes
+
+    @param config The configuration dictionary; see oft
+    """
+
+    global pa_port_map
+    global pa_logger
+    global pa_config
+
+    pa_logger = logging.getLogger("pkt_act")
+    pa_logger.info("Initializing test set")
+    pa_port_map = config["port_map"]
+    pa_config = config
+
+class FlowStats(basic.SimpleDataPlane):
+    """
+    Verify flow stats are properly retrieved.
+
+    Generate a packet
+    Generate and install a matching flow with idle timeout = 1 sec
+    Verify the flow expiration message is received
+    """
+    def runTest(self):
+        global pa_port_map
+
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        rc = delete_all_flows(self.controller, pa_logger)
+        self.assertEqual(rc, 0, "Failed to delete all flows")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        act = action.action_output()
+
+        ingress_port = of_ports[0]
+        egress_port = of_ports[1]
+        pa_logger.info("Ingress " + str(ingress_port) + 
+                       " to egress " + str(egress_port))
+        
+        match.in_port = ingress_port
+        
+        flow_mod_msg = message.flow_mod()
+        flow_mod_msg.match = match
+        flow_mod_msg.cookie = random.randint(0,9007199254740992)
+        flow_mod_msg.buffer_id = 0xffffffff
+        flow_mod_msg.idle_timeout = 1
+        act.port = egress_port
+        self.assertTrue(flow_mod_msg.actions.add(act), "Could not add action")
+        
+        stat_req = message.flow_stats_request()
+        stat_req.match = match
+        stat_req.table_id = 0xff
+        stat_req.out_port = ofp.OFPP_NONE
+
+        do_barrier(self.controller)
+        pa_logger.info("Sending stats request")
+        rv = self.controller.message_send(stat_req)
+        self.assertTrue(rv != -1, "Error sending flow stat req")
+        do_barrier(self.controller)
+
+        # Reply should be empty since no flow is installed yet; discard it
+        (response, raw) = self.controller.poll(ofp.OFPT_STATS_REPLY, 2)
+        
+        pa_logger.info("Inserting flow")
+        rv = self.controller.message_send(flow_mod_msg)
+        self.assertTrue(rv != -1, "Error installing flow mod")
+        do_barrier(self.controller)
+
+        pa_logger.info("Sending packet to dp port " + 
+                       str(ingress_port))
+        self.dataplane.send(ingress_port, str(pkt))
+        (rcv_port, rcv_pkt, pkt_time) = self.dataplane.poll(timeout=2)
+        self.assertTrue(rcv_pkt is not None, "Did not receive packet")
+        pa_logger.debug("Packet len " + str(len(rcv_pkt)) + " in on " +
+                        str(rcv_port))
+        self.assertEqual(rcv_port, egress_port, "Unexpected receive port")
+        self.assertEqual(str(pkt), str(rcv_pkt),
+                         'Response packet does not match send packet')
+            
+        pa_logger.info("Sending stats request")
+        rv = self.controller.message_send(stat_req)
+        self.assertTrue(rv != -1, "Error sending flow stat req")
+        do_barrier(self.controller)
+
+        (response, raw) = self.controller.poll(ofp.OFPT_STATS_REPLY, 2)
+        self.assertTrue(response is not None, "Did not receive stats reply")
+        self.assertEqual(len(response.stats), 1,
+                         "Did not receive flow stats reply")
diff --git a/tests/local.py b/tests/local.py
new file mode 100644
index 0000000..0a3bc04
--- /dev/null
+++ b/tests/local.py
@@ -0,0 +1,15 @@
+"""
+Platform configuration file
+platform == local
+
+Update this file to override defaults
+"""
+
+def platform_config_update(config):
+    """
+    Update configuration for the local platform
+
+    @param config The configuration dictionary to use/update
+
+    Update this routine if values other the defaults are required
+    """
diff --git a/tests/oft b/tests/oft
new file mode 100755
index 0000000..f7b72f0
--- /dev/null
+++ b/tests/oft
@@ -0,0 +1,508 @@
+#!/usr/bin/env python
+"""
+@package oft
+
+OpenFlow test framework top level script
+
+This script is the entry point for running OpenFlow tests
+using the OFT framework.
+
+The global configuration is passed around in a dictionary
+generally called config.  The keys have the following
+significance.
+
+<pre>
+    platform          : String identifying the target platform
+    controller_host   : Host on which test controller is running (for sockets)
+    controller_port   : Port on which test controller listens for switch cxn
+    port_count        : (Optional) Number of ports in dataplane
+    base_of_port      : (Optional) Base OpenFlow port number in dataplane
+    base_if_index     : (Optional) Base OS network interface for dataplane
+    test_dir          : (TBD) Directory to search for test files (default .)
+    test_spec         : (TBD) Specification of test(s) to run
+    log_file          : Filename for test logging
+    list              : Boolean:  List all tests and exit
+    debug             : String giving debug level (info, warning, error...)
+</pre>
+
+See config_defaults below for the default values.
+
+The following are stored in the config dictionary, but are not currently
+configurable through the command line.
+
+<pre>
+    dbg_level         : logging module value of debug level
+    port_map          : Map of dataplane OpenFlow port to OS interface names
+    test_mod_map      : Dictionary indexed by module names and whose value
+                        is the module reference
+    all_tests         : Dictionary indexed by module reference and whose
+                        value is a list of functions in that module
+</pre>
+
+Each test may be assigned a priority by setting test_prio["TestName"] in 
+the respective module.  For now, the only use of this is to avoid 
+automatic inclusion of tests into the default list.  This is done by
+setting the test_prio value less than 0.  Eventually we may add ordering
+of test execution by test priority.
+
+To add a test to the system, either: edit an existing test case file (like
+basic.py) to add a test class which inherits from unittest.TestCase (directly
+or indirectly); or add a new file which includes a function definition 
+test_set_init(config).  Preferably the file is in the same directory as existing
+tests, though you can specify the directory on the command line.  The file
+should not be called "all" as that's reserved for the test-spec.
+
+If you add a new file, the test_set_init function should record the port
+map object from the configuration along with whatever other configuration 
+information it may need.
+
+TBD:  To add configuration to the system, first add an entry to config_default
+below.  If you want this to be a command line parameter, edit config_setup
+to add the option and default value to the parser.  Then edit config_get
+to make sure the option value gets copied into the configuration 
+structure (which then gets passed to everyone else).
+
+By convention, oft attempts to import the contents of a file by the 
+name of $platform.py into the local namespace.  
+
+IMPORTANT: That file should define a function platform_config_update which
+takes a configuration dictionary as an argument and updates it for the
+current run.  In particular, it should set up config["port_map"] with
+the proper map from OF port numbers to OF interface names.
+
+You can add your own platform, say gp104, by adding a file gp104.py
+that defines the function platform_config_update and then use the
+parameter --platform=gp104 on the command line.
+
+If platform_config_update does not set config["port_map"], an attempt
+is made to generate a default map via the function default_port_map_setup.
+This will use "local" and "remote" for platform names if available
+and generate a sequential map based on the values of base_of_port and
+base_if_index in the configuration structure.
+
+The current model for test sets is basic.py.  The current convention is
+that the test set should implement a function test_set_init which takes
+an oft configuration dictionary and returns a unittest.TestSuite object.
+Future test sets should do the same thing.
+
+Default setup:
+
+The default setup runs locally using veth pairs.  To exercise this, 
+checkout and build an openflow userspace datapath.  Then start it on 
+the local host:
+<pre>
+  sudo ~/openflow/regress/bin/veth_setup.pl 
+  sudo ofdatapath -i veth0,veth2,veth4,veth6 punix:/tmp/ofd &
+  sudo ofprotocol unix:/tmp/ofd tcp:127.0.0.1 --fail=closed --max-backoff=1 &
+
+Next, run oft: 
+  sudo ./oft --debug=info
+</pre>
+
+Examine oft.log if things don't work.
+
+@todo Support per-component debug levels (esp controller vs dataplane)
+@todo Consider moving oft up a level
+
+Current test case setup:
+    Files in this directory and its subdirectories (or, later, a
+directory specified on the command line) that contain a function
+test_set_init are considered test files.
+    The function test_set_init examines the test_spec config variable
+and generates a suite of tests.
+    The command line option --test-spec selects which modules or test
+cases to run; the value "all" selects every test from every module.
+"""
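As an illustration of the platform convention described above, a minimal platform file needs only the one function. The file name `gp104.py` and the interface names below are hypothetical; a real platform would fill in its own mapping:

```python
# Hypothetical gp104.py platform file.  oft effectively runs
# "from gp104 import *" when invoked with --platform=gp104, then
# calls platform_config_update(config).  Interface names are made up.

def platform_config_update(config):
    """Map OF port numbers onto this platform's OS interfaces"""
    config["port_map"] = {
        1: "eth1",
        2: "eth2",
        3: "eth3",
        4: "eth4",
    }

config = {}
platform_config_update(config)
print(config["port_map"])
```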
+
+import sys
+from optparse import OptionParser
+from subprocess import Popen,PIPE
+import logging
+import unittest
+import time
+import os
+
+import testutils
+
+try:
+    import scapy.all as scapy
+except ImportError:
+    try:
+        import scapy as scapy
+    except ImportError:
+        sys.exit("Need to install scapy for packet parsing")
+
+##@var DEBUG_LEVELS
+# Map from strings to debugging levels
+DEBUG_LEVELS = {
+    'debug'              : logging.DEBUG,
+    'verbose'            : logging.DEBUG,
+    'info'               : logging.INFO,
+    'warning'            : logging.WARNING,
+    'warn'               : logging.WARNING,
+    'error'              : logging.ERROR,
+    'critical'           : logging.CRITICAL
+}
+
+_debug_default = "warning"
+_debug_level_default = DEBUG_LEVELS[_debug_default]
+
+##@var config_default
+# The default configuration dictionary for OFT
+config_default = {
+    "param"              : None,
+    "platform"           : "local",
+    "controller_host"    : "127.0.0.1",
+    "controller_port"    : 6633,
+    "port_count"         : 4,
+    "base_of_port"       : 1,
+    "base_if_index"      : 1,
+    "test_spec"          : "all",
+    "test_dir"           : ".",
+    "log_file"           : "oft.log",
+    "list"               : False,
+    "debug"              : _debug_default,
+    "dbg_level"          : _debug_level_default,
+    "port_map"           : {},
+    "test_params"        : "None"
+}
+
+# Default test priority
+TEST_PRIO_DEFAULT=100
+
+#@todo Set up a dict of config params so easier to manage:
+# <param> <cmdline flags> <default value> <help> <optional parser>
+
+# Map options to config structure
+def config_get(opts):
+    "Convert options class to OFT configuration dictionary"
+    cfg = config_default.copy()
+    for key in cfg.keys():
+        cfg[key] = getattr(opts, key)
+
+    # Special case checks
+    if opts.debug not in DEBUG_LEVELS.keys():
+        print "Warning:  Bad value specified for debug level; using default"
+        opts.debug = _debug_default
+    if opts.verbose:
+        cfg["debug"] = "verbose"
+    cfg["dbg_level"] = DEBUG_LEVELS[cfg["debug"]]
+
+    return cfg
+
+def config_setup(cfg_dflt):
+    """
+    Set up the configuration including parsing the arguments
+
+    @param cfg_dflt The default configuration dictionary
+    @return A pair (config, args) where config is an config
+    object and args is any additional arguments from the command line
+    """
+
+    parser = OptionParser(version="%prog 0.1")
+
+    #@todo parse port map as option?
+    # Set up default values
+    parser.set_defaults(**cfg_dflt)
+
+    #@todo Add options via dictionary
+    plat_help = """Set the platform type.  Valid values include:
+        local:  User space virtual ethernet pair setup
+        remote:  Remote embedded Broadcom based switch
+        Create a new_plat.py file and use --platform=new_plat on the command line
+        """
+    parser.add_option("-P", "--platform", help=plat_help)
+    parser.add_option("-H", "--host", dest="controller_host",
+                      help="The IP/name of the test controller host")
+    parser.add_option("-p", "--port", dest="controller_port",
+                      type="int", help="Port number of the test controller")
+    test_list_help = """Indicate tests to run.  Valid entries are "all" (the
+        default) or a comma separated list of:
+        module            Run all tests in the named module
+        testcase          Run tests in all modules with the name testcase
+        module.testcase   Run the specific test case
+        """
+    parser.add_option("--test-spec", "--test-list", help=test_list_help)
+    parser.add_option("--log-file", 
+                      help="Name of log file, empty string to log to console")
+    parser.add_option("--debug",
+                      help="Debug lvl: debug, info, warning, error, critical")
+    parser.add_option("--port-count", type="int",
+                      help="Number of ports to use (optional)")
+    parser.add_option("--base-of-port", type="int",
+                      help="Base OpenFlow port number (optional)")
+    parser.add_option("--base-if-index", type="int",
+                      help="Base interface index number (optional)")
+    parser.add_option("--list", action="store_true",
+                      help="List all tests and exit")
+    parser.add_option("--verbose", action="store_true",
+                      help="Short cut for --debug=verbose")
+    parser.add_option("--param", type="int",
+                      help="Parameter sent to test (for debugging)")
+    parser.add_option("-t", "--test-params",
+                      help="Set test parameters: key=val;... See --list")
+    # Might need this if other parsers want command line
+    # parser.allow_interspersed_args = False
+    (options, args) = parser.parse_args()
+
+    config = config_get(options)
+
+    return (config, args)
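The --test-spec grammar described in test_list_help is simple: a comma separated list whose entries are "module", "testcase", or "module.testcase". A hypothetical standalone helper (oft itself inlines this logic in its main script below) sketches the decomposition:

```python
# Sketch of --test-spec parsing: split on commas, then on dots.
# Each entry becomes a 1-tuple (module or testcase name) or a
# 2-tuple (module, testcase).
def split_test_spec(spec):
    return [tuple(entry.split(".")) for entry in spec.split(",")]

print(split_test_spec("basic,pktact.DirectPacket"))
# [('basic',), ('pktact', 'DirectPacket')]
```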
+
+def logging_setup(config):
+    """
+    Set up logging based on config
+    """
+    _format = "%(asctime)s  %(name)-10s: %(levelname)-8s: %(message)s"
+    _datefmt = "%H:%M:%S"
+    logging.basicConfig(filename=config["log_file"],
+                        level=config["dbg_level"],
+                        format=_format, datefmt=_datefmt)
+
+def default_port_map_setup(config):
+    """
+    Setup the OF port mapping based on config
+    @param config The OFT configuration structure
+    @return Port map dictionary
+    """
+    if (config["base_of_port"] is None) or not config["port_count"]:
+        return None
+    port_map = {}
+    if config["platform"] == "local":
+        # For local, use every other veth port
+        for idx in range(config["port_count"]):
+            port_map[config["base_of_port"] + idx] = "veth" + \
+                str(config["base_if_index"] + (2 * idx))
+    elif config["platform"] == "remote":
+        # For remote, use eth ports
+        for idx in range(config["port_count"]):
+            port_map[config["base_of_port"] + idx] = "eth" + \
+                str(config["base_if_index"] + idx)
+    else:
+        return None
+
+    logging.info("Built default port map")
+    return port_map
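With the shipped defaults (port_count=4, base_of_port=1, base_if_index=1 from config_default), the "local" branch above yields a map onto every other veth interface, since veth interfaces come in pairs and the datapath owns the even-numbered ends. A self-contained sketch of that computation:

```python
# Reproduce the "local" branch of default_port_map_setup using the
# defaults from config_default.
base_of_port = 1
base_if_index = 1
port_count = 4

port_map = {}
for idx in range(port_count):
    # Every other veth: the odd-numbered peer of each veth pair
    port_map[base_of_port + idx] = "veth" + str(base_if_index + 2 * idx)

print(port_map)  # {1: 'veth1', 2: 'veth3', 3: 'veth5', 4: 'veth7'}
```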
+
+def test_list_generate(config):
+    """Generate the list of all known tests indexed by module name
+
+    Conventions:  Test files must implement the function test_set_init
+
+    Test cases are classes that implement runTest
+
+    @param config The oft configuration dictionary
+    @returns An array of triples (mod-name, module, [tests]) where 
+    mod-name is the string (filename) of the module, module is the
+    value returned from __import__'ing the module and [tests] is an
+    array of strings giving the test cases from the module.  
+    """
+
+    # Find and import test files
+    p1 = Popen(["find", config["test_dir"], "-type","f"], stdout = PIPE)
+    p2 = Popen(["xargs", "grep", "-l", "-e", "^def test_set_init"], 
+                stdin=p1.stdout, stdout=PIPE)
+
+    all_tests = {}
+    mod_name_map = {}
+    # There's an extra empty entry at the end of the list 
+    filelist = p2.communicate()[0].split("\n")[:-1]
+    for file in filelist:
+        if file[-1:] == '~' or os.path.basename(file)[:1] == '#':
+            continue
+        if file.startswith("./"):
+            file = file[2:]
+        modfile = file[:-3]
+
+        try:
+            mod = __import__(modfile)
+        except:
+            logging.warning("Could not import file " + file)
+            continue
+        mod_name_map[modfile] = mod
+        added_fn = False
+        for fn in dir(mod):
+            if 'runTest' in dir(getattr(mod, fn)):
+                if not added_fn:
+                    all_tests[mod] = []
+                    added_fn = True
+                all_tests[mod].append(fn)
+    config["all_tests"] = all_tests
+    config["mod_name_map"] = mod_name_map
+
+def die(msg, exit_val=1):
+    print msg
+    logging.critical(msg)
+    sys.exit(exit_val)
+
+def add_test(suite, mod, name):
+    logging.info("Adding test " + mod.__name__ + "." + name)
+    suite.addTest(eval("mod." + name)())
+
+def _space_to(n, s):
+    """
+    Generate a string of spaces to pad s to width n
+    If len(s) >= n, return a single space
+    """
+    spaces = n - len(s)
+    if spaces > 0:
+        return " " * spaces
+    return " "
+
+def test_prio_get(mod, test):
+    """
+    Return the priority of a test
+    If set in the test_prio variable for the module, return
+    that value.  Otherwise return 100 (default)
+    """
+    if 'test_prio' in dir(mod):
+        if test in mod.test_prio.keys():
+            return mod.test_prio[test]
+    return TEST_PRIO_DEFAULT
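A test module opts out of the default "all" run by giving a test a negative entry in its module-level test_prio dict. A simplified, self-contained sketch of the lookup (the real test_prio_get above reads mod.test_prio via dir(); the test name "ExperimentalTest" is hypothetical):

```python
TEST_PRIO_DEFAULT = 100

# Module-level dict a test file would define to control priorities.
test_prio = {}
test_prio["ExperimentalTest"] = -1   # excluded from the default run

def prio_of(prio_map, test):
    # Stand-in for test_prio_get: consult the module's test_prio
    # dict, else fall back to the default of 100.
    return prio_map.get(test, TEST_PRIO_DEFAULT)

print(prio_of(test_prio, "ExperimentalTest"))  # -1
print(prio_of(test_prio, "DirectPacket"))      # 100
```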
+
+#
+# Main script
+#
+
+# Get configuration, set up logging, import platform from file
+(config, args) = config_setup(config_default)
+
+test_list_generate(config)
+
+# Check if test list is requested; display and exit if so
+if config["list"]:
+    did_print = False
+    print "\nTest List:"
+    for mod in config["all_tests"].keys():
+        if config["test_spec"] != "all" and \
+                config["test_spec"] != mod.__name__:
+            continue
+        did_print = True
+        desc = mod.__doc__.strip()
+        desc = desc.split('\n')[0]
+        start_str = "  Module " + mod.__name__ + ": "
+        print start_str + _space_to(22, start_str) + desc
+        for test in config["all_tests"][mod]:
+            try:
+                desc = getattr(mod, test).__doc__.strip()
+                desc = desc.split('\n')[0]
+            except:
+                desc = "No description"
+            if test_prio_get(mod, test) < 0:
+                start_str = "  * " + test + ":"
+            else:
+                start_str = "    " + test + ":"
+            if len(start_str) > 22:
+                desc = "\n" + _space_to(22, "") + desc
+            print start_str + _space_to(22, start_str) + desc
+        print
+    if not did_print:
+        print "No tests found for " + config["test_spec"]
+    else:
+        print "Tests preceded by * are not run by default"
+    print "Tests marked (TP1) after name take --test-params including:"
+    print "    'vid=N;strip_vlan=bool;add_vlan=bool'"
+    sys.exit(0)
+
+logging_setup(config)
+logging.info("++++++++ " + time.asctime() + " ++++++++")
+
+# Generate the test suite
+#@todo Decide if multiple suites are ever needed
+suite = unittest.TestSuite()
+
+#@todo Allow specification of priority to override prio check
+if config["test_spec"] == "all":
+    for mod in config["all_tests"].keys():
+        for test in config["all_tests"][mod]:
+            # For now, a way to avoid tests
+            if test_prio_get(mod, test) >= 0:
+                add_test(suite, mod, test)
+
+else:
+    for ts_entry in config["test_spec"].split(","):
+        parts = ts_entry.split(".")
+
+        if len(parts) == 1: # Either a module or test name
+            if ts_entry in config["mod_name_map"].keys():
+                mod = config["mod_name_map"][ts_entry]
+                for test in config["all_tests"][mod]:
+                    add_test(suite, mod, test)
+            else: # Search for matching tests
+                test_found = False
+                for mod in config["all_tests"].keys():
+                    if ts_entry in config["all_tests"][mod]:
+                        add_test(suite, mod, ts_entry)
+                        test_found = True
+                if not test_found:
+                    die("Could not find module or test: " + ts_entry)
+
+        elif len(parts) == 2: # module.test
+            if parts[0] not in config["mod_name_map"]:
+                die("Unknown module in test spec: " + ts_entry)
+            mod = config["mod_name_map"][parts[0]]
+            if parts[1] in config["all_tests"][mod]:
+                add_test(suite, mod, parts[1])
+            else:
+                die("No known test matches: " + ts_entry)
+
+        else:
+            die("Bad test spec: " + ts_entry)
+
+# Check if platform specified
+if config["platform"]:
+    _imp_string = "from " + config["platform"] + " import *"
+    logging.info("Importing platform: " + _imp_string)
+    try:
+        exec(_imp_string)
+    except:
+        logging.warn("Failed to import " + config["platform"] + " file")
+
+try:
+    platform_config_update(config)
+except:
+    logging.warn("Could not run platform host configuration")
+
+if not config["port_map"]:
+    # Try to set up default port mapping if not done by platform
+    config["port_map"] = default_port_map_setup(config)
+
+if not config["port_map"]:
+    die("Interface port map is not defined.  Exiting")
+
+logging.debug("Configuration: " + str(config))
+logging.info("OF port map: " + str(config["port_map"]))
+
+# Init the test sets
+for (modname,mod) in config["mod_name_map"].items():
+    try:
+        mod.test_set_init(config)
+    except:
+        logging.warning("Could not run test_set_init for " + modname)
+
+if config["dbg_level"] == logging.CRITICAL:
+    _verb = 0
+elif config["dbg_level"] >= logging.WARNING:
+    _verb = 1
+else:
+    _verb = 2
+
+if os.getuid() != 0:
+    print "ERROR: Super-user privileges required. Please re-run with " \
+          "sudo or as root."
+    exit(1)
+
+
+if __name__ == "__main__":
+    logging.info("*** TEST RUN START: " + time.asctime())
+    unittest.TextTestRunner(verbosity=_verb).run(suite)
+    if testutils.skipped_test_count > 0:
+        ts = " tests"
+        if testutils.skipped_test_count == 1: ts = " test"
+        logging.info("Skipped " + str(testutils.skipped_test_count) + ts)
+        print("Skipped " + str(testutils.skipped_test_count) + ts)
+    logging.info("*** TEST RUN END  : " + time.asctime())
+
diff --git a/tests/pktact.py b/tests/pktact.py
new file mode 100644
index 0000000..c2d4a68
--- /dev/null
+++ b/tests/pktact.py
@@ -0,0 +1,984 @@
+"""
+Test cases for testing actions taken on packets
+
+See basic.py for other info.
+
+It is recommended that these definitions be kept in their own
+namespace as different groups of tests will likely define 
+similar identifiers.
+
+  The function test_set_init is called with a complete configuration
+dictionary prior to the invocation of any tests from this file.
+
+  The switch is actively attempting to contact the controller at the address
+indicated in oft_config
+
+"""
+
+import copy
+import logging
+import unittest
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+import oftest.parse as parse
+import basic
+
+from testutils import *
+
+#@var port_map Local copy of the configuration map from OF port
+# numbers to OS interfaces
+pa_port_map = None
+#@var pa_logger Local logger object
+pa_logger = None
+#@var pa_config Local copy of global configuration data
+pa_config = None
+
+# For test priority
+#@var test_prio Set test priority for local tests
+test_prio = {}
+
+WILDCARD_VALUES = [ofp.OFPFW_IN_PORT,
+                   ofp.OFPFW_DL_VLAN,
+                   ofp.OFPFW_DL_SRC,
+                   ofp.OFPFW_DL_DST,
+                   ofp.OFPFW_DL_TYPE,
+                   ofp.OFPFW_NW_PROTO,
+                   ofp.OFPFW_TP_SRC,
+                   ofp.OFPFW_TP_DST,
+                   0x3F << ofp.OFPFW_NW_SRC_SHIFT,
+                   0x3F << ofp.OFPFW_NW_DST_SHIFT,
+                   ofp.OFPFW_DL_VLAN_PCP,
+                   ofp.OFPFW_NW_TOS]
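The shifted 0x3F entries above cover the 6-bit CIDR-style nw_src/nw_dst wildcard fields. A sketch of the bit handling, with the constant values copied from the OpenFlow 1.0 header (oftest exposes the same names on ofp):

```python
# OpenFlow 1.0 wildcard constants (subset), per openflow.h.
OFPFW_IN_PORT = 1 << 0
OFPFW_NW_SRC_SHIFT = 8
OFPFW_NW_SRC_MASK = 0x3F << OFPFW_NW_SRC_SHIFT  # 6-bit prefix-length field
OFPFW_ALL = (1 << 22) - 1

# The tests in this file start from a fully wildcarded match and
# clear one bit to force an exact match on the ingress port:
wildcards = OFPFW_ALL
wildcards &= ~OFPFW_IN_PORT

print(wildcards & OFPFW_IN_PORT)   # 0: in_port now matched exactly
```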
+
+MODIFY_ACTION_VALUES =  [ofp.OFPAT_SET_VLAN_VID,
+                         ofp.OFPAT_SET_VLAN_PCP,
+                         ofp.OFPAT_STRIP_VLAN,
+                         ofp.OFPAT_SET_DL_SRC,
+                         ofp.OFPAT_SET_DL_DST,
+                         ofp.OFPAT_SET_NW_SRC,
+                         ofp.OFPAT_SET_NW_DST,
+                         ofp.OFPAT_SET_NW_TOS,
+                         ofp.OFPAT_SET_TP_SRC,
+                         ofp.OFPAT_SET_TP_DST]
+
+# Cache supported features to avoid transaction overhead
+cached_supported_actions = None
+
+TEST_VID_DEFAULT = 2
+
+def test_set_init(config):
+    """
+    Set up function for packet action test classes
+
+    @param config The configuration dictionary; see oft
+    """
+
+    global pa_port_map
+    global pa_logger
+    global pa_config
+
+    pa_logger = logging.getLogger("pkt_act")
+    pa_logger.info("Initializing test set")
+    pa_port_map = config["port_map"]
+    pa_config = config
+
+class DirectPacket(basic.SimpleDataPlane):
+    """
+    Send packet to single egress port
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to direct the packet to an egress port
+    Send the packet to ingress dataplane port
+    Verify the packet is received at the egress port only
+    """
+    def runTest(self):
+        self.handleFlow()
+
+    def handleFlow(self, pkttype='TCP'):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        if pkttype == 'ICMP':
+            pkt = simple_icmp_packet()
+        else:
+            pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        self.assertTrue(match is not None,
+                        "Could not generate flow match from pkt")
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        act = action.action_output()
+
+        for idx in range(len(of_ports)):
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            ingress_port = of_ports[idx]
+            egress_port = of_ports[(idx + 1) % len(of_ports)]
+            pa_logger.info("Ingress " + str(ingress_port) + 
+                             " to egress " + str(egress_port))
+
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = egress_port
+            self.assertTrue(request.actions.add(act), "Could not add action")
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + 
+                           str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            (rcv_port, rcv_pkt, pkt_time) = self.dataplane.poll(timeout=1)
+            self.assertTrue(rcv_pkt is not None, "Did not receive packet")
+            pa_logger.debug("Packet len " + str(len(rcv_pkt)) + " in on " + 
+                         str(rcv_port))
+            self.assertEqual(rcv_port, egress_port, "Unexpected receive port")
+            self.assertEqual(str(pkt), str(rcv_pkt),
+                             'Response packet does not match send packet')
+
+class DirectPacketICMP(DirectPacket):
+    """
+    Send ICMP packet to single egress port
+
+    Generate an ICMP packet
+    Generate and install a matching flow
+    Add action to direct the packet to an egress port
+    Send the packet to ingress dataplane port
+    Verify the packet is received at the egress port only
+    Difference from DirectPacket test is that sent packet is ICMP
+    """
+    def runTest(self):
+        self.handleFlow(pkttype='ICMP')
+
+class DirectTwoPorts(basic.SimpleDataPlane):
+    """
+    Send packet to two egress ports
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to direct the packet to two egress ports
+    Send the packet to ingress dataplane port
+    Verify the packet is received at the two egress ports
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 2, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        self.assertTrue(match is not None,
+                        "Could not generate flow match from pkt")
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        act = action.action_output()
+
+        for idx in range(len(of_ports)):
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            ingress_port = of_ports[idx]
+            egress_port1 = of_ports[(idx + 1) % len(of_ports)]
+            egress_port2 = of_ports[(idx + 2) % len(of_ports)]
+            pa_logger.info("Ingress " + str(ingress_port) + 
+                           " to egress " + str(egress_port1) + " and " +
+                           str(egress_port2))
+
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = egress_port1
+            self.assertTrue(request.actions.add(act), "Could not add action1")
+            act.port = egress_port2
+            self.assertTrue(request.actions.add(act), "Could not add action2")
+            # pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + 
+                           str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            yes_ports = set([egress_port1, egress_port2])
+            no_ports = set(of_ports).difference(yes_ports)
+
+            receive_pkt_check(self.dataplane, pkt, yes_ports, no_ports,
+                              self, pa_logger)
+
+class DirectMCNonIngress(basic.SimpleDataPlane):
+    """
+    Multicast to all non-ingress ports
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to direct the packet to all non-ingress ports
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all non-ingress ports
+
+    Does not use the flood action
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 2, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        self.assertTrue(match is not None,
+                        "Could not generate flow match from pkt")
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + 
+                           " all non-ingress ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            for egress_port in of_ports:
+                if egress_port == ingress_port:
+                    continue
+                act.port = egress_port
+                self.assertTrue(request.actions.add(act), 
+                                "Could not add output to " + str(egress_port))
+            pa_logger.debug(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            yes_ports = set(of_ports).difference([ingress_port])
+            receive_pkt_check(self.dataplane, pkt, yes_ports, [ingress_port],
+                              self, pa_logger)
+
+
+class DirectMC(basic.SimpleDataPlane):
+    """
+    Multicast to all ports including ingress
+
+    Generate a packet
+    Generate and install a matching flow
+    Add actions to direct the packet to all ports, using OFPP_IN_PORT
+    for the ingress port
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all ports
+
+    Does not use the flood action
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 2, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + " to all ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            for egress_port in of_ports:
+                if egress_port == ingress_port:
+                    act.port = ofp.OFPP_IN_PORT
+                else:
+                    act.port = egress_port
+                self.assertTrue(request.actions.add(act), 
+                                "Could not add output to " + str(egress_port))
+            # pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            receive_pkt_check(self.dataplane, pkt, of_ports, [], self,
+                              pa_logger)
+
+class Flood(basic.SimpleDataPlane):
+    """
+    Flood to all ports except ingress
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to flood the packet
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all other ports
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + " to all ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = ofp.OFPP_FLOOD
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add flood port action")
+            pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            yes_ports = set(of_ports).difference([ingress_port])
+            receive_pkt_check(self.dataplane, pkt, yes_ports, [ingress_port],
+                              self, pa_logger)
+
+class FloodPlusIngress(basic.SimpleDataPlane):
+    """
+    Flood to all ports plus send to ingress port
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to flood the packet
+    Add action to send to ingress port
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all ports, including the ingress port
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + " to all ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = ofp.OFPP_FLOOD
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add flood port action")
+            act.port = ofp.OFPP_IN_PORT
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add ingress port for output")
+            pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            receive_pkt_check(self.dataplane, pkt, of_ports, [], self,
+                              pa_logger)
+
+class All(basic.SimpleDataPlane):
+    """
+    Send to OFPP_ALL port
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to forward to OFPP_ALL
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all other ports
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + " to all ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = ofp.OFPP_ALL
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add ALL port action")
+            pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            yes_ports = set(of_ports).difference([ingress_port])
+            receive_pkt_check(self.dataplane, pkt, yes_ports, [ingress_port],
+                              self, pa_logger)
+
+class AllPlusIngress(basic.SimpleDataPlane):
+    """
+    Send to OFPP_ALL port and ingress port
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to forward to OFPP_ALL
+    Add action to forward to ingress port
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all ports, including the ingress port
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for ingress_port in of_ports:
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            pa_logger.info("Ingress " + str(ingress_port) + " to all ports")
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = ofp.OFPP_ALL
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add ALL port action")
+            act.port = ofp.OFPP_IN_PORT
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add ingress port for output")
+            pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            receive_pkt_check(self.dataplane, pkt, of_ports, [], self,
+                              pa_logger)
+            
+class FloodMinusPort(basic.SimpleDataPlane):
+    """
+    Configure a port with OFPPC_NO_FLOOD and test the flood action
+
+    Generate a packet
+    Generate and install a matching flow
+    Add action to flood the packet
+    Set one port to no-flood
+    Send the packet to ingress dataplane port
+    Verify the packet is received at all other ports except
+    the ingress port and the no_flood port
+    """
+    def runTest(self):
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        self.assertTrue(len(of_ports) > 2, "Not enough ports for test")
+
+        pkt = simple_tcp_packet()
+        match = parse.packet_to_flow_match(pkt)
+        match.wildcards &= ~ofp.OFPFW_IN_PORT
+        self.assertTrue(match is not None, 
+                        "Could not generate flow match from pkt")
+        act = action.action_output()
+
+        for idx in range(len(of_ports)):
+            rv = delete_all_flows(self.controller, pa_logger)
+            self.assertEqual(rv, 0, "Failed to delete all flows")
+
+            ingress_port = of_ports[idx]
+            no_flood_idx = (idx + 1) % len(of_ports)
+            no_flood_port = of_ports[no_flood_idx]
+            rv = port_config_set(self.controller, no_flood_port,
+                                 ofp.OFPPC_NO_FLOOD, ofp.OFPPC_NO_FLOOD,
+                                 pa_logger)
+            self.assertEqual(rv, 0, "Failed to set port config")
+
+            match.in_port = ingress_port
+
+            request = message.flow_mod()
+            request.match = match
+            request.buffer_id = 0xffffffff
+            act.port = ofp.OFPP_FLOOD
+            self.assertTrue(request.actions.add(act), 
+                            "Could not add flood port action")
+            pa_logger.info(request.show())
+
+            pa_logger.info("Inserting flow")
+            rv = self.controller.message_send(request)
+            self.assertTrue(rv != -1, "Error installing flow mod")
+            do_barrier(self.controller)
+
+            pa_logger.info("Sending packet to dp port " + str(ingress_port))
+            pa_logger.info("No flood port is " + str(no_flood_port))
+            self.dataplane.send(ingress_port, str(pkt))
+            no_ports = set([ingress_port, no_flood_port])
+            yes_ports = set(of_ports).difference(no_ports)
+            receive_pkt_check(self.dataplane, pkt, yes_ports, no_ports, self,
+                              pa_logger)
+
+            # Turn no flood off again
+            rv = port_config_set(self.controller, no_flood_port,
+                                 0, ofp.OFPPC_NO_FLOOD, pa_logger)
+            self.assertEqual(rv, 0, "Failed to reset port config")
+
+            #@todo Should check no other packets received
+
+
+
+################################################################
+
+class BaseMatchCase(basic.SimpleDataPlane):
+    def setUp(self):
+        basic.SimpleDataPlane.setUp(self)
+        self.logger = pa_logger
+    def runTest(self):
+        self.logger.info("BaseMatchCase")
+
+class ExactMatch(BaseMatchCase):
+    """
+    Exercise exact matching for all port pairs
+
+    Generate a packet
+    Generate and install a matching flow without wildcard mask
+    Add action to forward to a port
+    Send the packet to the port
+    Verify the packet is received at all other ports (one port at a time)
+    """
+
+    def runTest(self):
+        flow_match_test(self, pa_port_map)
+
+class ExactMatchTagged(BaseMatchCase):
+    """
+    Exact match for all port pairs with tagged pkts
+    """
+
+    def runTest(self):
+        vid = test_param_get(self.config, 'vid', default=TEST_VID_DEFAULT)
+        flow_match_test(self, pa_port_map, dl_vlan=vid)
+
+class ExactMatchTaggedMany(BaseMatchCase):
+    """
+    ExactMatchTagged with many VLANS
+    """
+
+    def runTest(self):
+        for vid in range(2,100,10):
+            flow_match_test(self, pa_port_map, dl_vlan=vid, max_test=5)
+        for vid in range(100,4000,389):
+            flow_match_test(self, pa_port_map, dl_vlan=vid, max_test=5)
+        flow_match_test(self, pa_port_map, dl_vlan=4094, max_test=5)
+
+# Don't run by default
+test_prio["ExactMatchTaggedMany"] = -1
+
+
+class SingleWildcardMatch(BaseMatchCase):
+    """
+    Exercise wildcard matching for all ports
+
+    Generate a packet
+    Generate and install a matching flow with wildcard mask
+    Add action to forward to a port
+    Send the packet to the port
+    Verify the packet is received at all other ports (one port at a time)
+    Verify flow_expiration message is correct when command option is set
+    """
+    def runTest(self):
+        for wc in WILDCARD_VALUES:
+            flow_match_test(self, pa_port_map, wildcards=wc, max_test=10)
+
+class SingleWildcardMatchTagged(BaseMatchCase):
+    """
+    SingleWildcardMatch with tagged packets
+    """
+    def runTest(self):
+        vid = test_param_get(self.config, 'vid', default=TEST_VID_DEFAULT)
+        for wc in WILDCARD_VALUES:
+            flow_match_test(self, pa_port_map, wildcards=wc, dl_vlan=vid,
+                            max_test=10)
+
+class AllExceptOneWildcardMatch(BaseMatchCase):
+    """
+    Match exactly one field
+
+    Generate a packet
+    Generate and install a matching flow that wildcards all fields except one
+    Add action to forward to a port
+    Send the packet to the port
+    Verify the packet is received at all other ports (one port at a time)
+    Verify flow_expiration message is correct when command option is set
+    """
+    def runTest(self):
+        for wc in WILDCARD_VALUES:
+            all_exp_one_wildcard = ofp.OFPFW_ALL ^ wc
+            flow_match_test(self, pa_port_map, wildcards=all_exp_one_wildcard)
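
The `ofp.OFPFW_ALL ^ wc` expression turns "wildcard everything" into "match exactly this one field" by clearing a single wildcard bit. A minimal sketch of the bit arithmetic, using the OpenFlow 1.0 constant values (the real definitions live in `oftest.cstruct`):

```python
# OpenFlow 1.0 wildcard bits (values from the spec; normally ofp.OFPFW_*)
OFPFW_IN_PORT = 1 << 0
OFPFW_DL_VLAN = 1 << 1
OFPFW_ALL = (1 << 22) - 1

wc = OFPFW_DL_VLAN
all_exp_one_wildcard = OFPFW_ALL ^ wc

# The chosen bit is cleared, so that field is matched exactly...
assert all_exp_one_wildcard & OFPFW_DL_VLAN == 0
# ...while every other field remains wildcarded
assert all_exp_one_wildcard & OFPFW_IN_PORT != 0
assert all_exp_one_wildcard | wc == OFPFW_ALL
```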
+
+class AllExceptOneWildcardMatchTagged(BaseMatchCase):
+    """
+    Match one field with tagged packets
+    """
+    def runTest(self):
+        vid = test_param_get(self.config, 'vid', default=TEST_VID_DEFAULT)
+        for wc in WILDCARD_VALUES:
+            all_exp_one_wildcard = ofp.OFPFW_ALL ^ wc
+            flow_match_test(self, pa_port_map, wildcards=all_exp_one_wildcard,
+                            dl_vlan=vid)
+
+class AllWildcardMatch(BaseMatchCase):
+    """
+    Create Wildcard-all flow and exercise for all ports
+
+    Generate a packet
+    Generate and install a matching flow with wildcard-all
+    Add action to forward to a port
+    Send the packet to the port
+    Verify the packet is received at all other ports (one port at a time)
+    Verify flow_expiration message is correct when command option is set
+    """
+    def runTest(self):
+        flow_match_test(self, pa_port_map, wildcards=ofp.OFPFW_ALL)
+
+class AllWildcardMatchTagged(BaseMatchCase):
+    """
+    AllWildcardMatch with tagged packets
+    """
+    def runTest(self):
+        vid = test_param_get(self.config, 'vid', default=TEST_VID_DEFAULT)
+        flow_match_test(self, pa_port_map, wildcards=ofp.OFPFW_ALL, 
+                        dl_vlan=vid)
+
+    
+class AddVLANTag(BaseMatchCase):
+    """
+    Add a VLAN tag to an untagged packet
+    """
+    def runTest(self):
+        new_vid = 2
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_VLAN_VID):
+            skip_message_emit(self, "Add VLAN tag test")
+            return
+
+        pkt_len = 100
+        pkt_len_w_vid = 104
+        pkt = simple_tcp_packet(pktlen=pkt_len)
+        exp_pkt = simple_tcp_packet(pktlen=pkt_len_w_vid, dl_vlan_enable=True, 
+                                    dl_vlan=new_vid)
+        vid_act = action.action_set_vlan_vid()
+        vid_act.vlan_vid = new_vid
+
+        flow_match_test(self, pa_port_map, pkt=pkt, 
+                        exp_pkt=exp_pkt, action_list=[vid_act])
+
+class PacketOnly(basic.DataPlaneOnly):
+    """
+    Just send a packet through the switch
+    """
+    def runTest(self):
+        pkt = simple_tcp_packet()
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        ing_port = of_ports[0]
+        pa_logger.info("Sending packet to " + str(ing_port))
+        pa_logger.debug("Data: " + str(pkt).encode('hex'))
+        self.dataplane.send(ing_port, str(pkt))
+
+class PacketOnlyTagged(basic.DataPlaneOnly):
+    """
+    Just send a tagged packet through the switch
+    """
+    def runTest(self):
+        vid = test_param_get(self.config, 'vid', default=TEST_VID_DEFAULT)
+        pkt = simple_tcp_packet(dl_vlan_enable=True, dl_vlan=vid)
+        of_ports = pa_port_map.keys()
+        of_ports.sort()
+        ing_port = of_ports[0]
+        pa_logger.info("Sending packet to " + str(ing_port))
+        pa_logger.debug("Data: " + str(pkt).encode('hex'))
+        self.dataplane.send(ing_port, str(pkt))
+
+test_prio["PacketOnly"] = -1
+test_prio["PacketOnlyTagged"] = -1
+
+class ModifyVID(BaseMatchCase):
+    """
+    Modify the VLAN ID in the VLAN tag of a tagged packet
+    """
+    def runTest(self):
+        old_vid = 2
+        new_vid = 3
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_VLAN_VID):
+            skip_message_emit(self, "Modify VLAN tag test")
+            return
+
+        pkt = simple_tcp_packet(dl_vlan_enable=True, dl_vlan=old_vid)
+        exp_pkt = simple_tcp_packet(dl_vlan_enable=True, dl_vlan=new_vid)
+        vid_act = action.action_set_vlan_vid()
+        vid_act.vlan_vid = new_vid
+
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt,
+                        action_list=[vid_act])
+
+class StripVLANTag(BaseMatchCase):
+    """
+    Strip the VLAN tag from a tagged packet
+    """
+    def runTest(self):
+        old_vid = 2
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_STRIP_VLAN):
+            skip_message_emit(self, "Strip VLAN tag test")
+            return
+
+        pkt_len_w_vid = 104
+        pkt_len = 100
+        pkt = simple_tcp_packet(pktlen=pkt_len_w_vid, dl_vlan_enable=True, 
+                                dl_vlan=old_vid)
+        exp_pkt = simple_tcp_packet(pktlen=pkt_len)
+        vid_act = action.action_strip_vlan()
+
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt,
+                        action_list=[vid_act])
+
+def init_pkt_args():
+    """
+    Pass back a dictionary with default packet arguments
+    """
+    args = {}
+    args["dl_src"] = '00:23:45:67:89:AB'
+
+    if pa_config["test-params"]["vid"]:
+        args["dl_vlan_enable"] = True
+        args["dl_vlan"] = pa_config["test-params"]["vid"]
+
+    # Unpack into packet builders with the ** operator: simple_tcp_packet(**args)
+    return args
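
The comment above refers to keyword-argument unpacking: a dictionary built this way is meant to be expanded into a packet builder with `simple_tcp_packet(**args)`. A toy illustration of the mechanism (the `make_pkt` stand-in is hypothetical):

```python
def make_pkt(dl_src='00:00:00:00:00:01', dl_vlan_enable=False, dl_vlan=0):
    # Stand-in for simple_tcp_packet; just echoes its keyword arguments
    return (dl_src, dl_vlan_enable, dl_vlan)

args = {}
args["dl_src"] = '00:23:45:67:89:AB'
args["dl_vlan_enable"] = True
args["dl_vlan"] = 7

# ** expands the dictionary entries into keyword arguments
assert make_pkt(**args) == ('00:23:45:67:89:AB', True, 7)
# Keys absent from the dictionary fall back to the declared defaults
assert make_pkt() == ('00:00:00:00:00:01', False, 0)
```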
+
+class ModifyL2Src(BaseMatchCase):
+    """
+    Modify the source MAC address (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_DL_SRC):
+            skip_message_emit(self, "ModifyL2Src test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['dl_src'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyL2Dst(BaseMatchCase):
+    """
+    Modify the dest MAC address (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_DL_DST):
+            skip_message_emit(self, "ModifyL2dst test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['dl_dst'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyL3Src(BaseMatchCase):
+    """
+    Modify the source IP address of an IP packet (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_NW_SRC):
+            skip_message_emit(self, "ModifyL3Src test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['ip_src'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyL3Dst(BaseMatchCase):
+    """
+    Modify the dest IP address of an IP packet (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_NW_DST):
+            skip_message_emit(self, "ModifyL3Dst test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['ip_dst'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyL4Src(BaseMatchCase):
+    """
+    Modify the source TCP port of a TCP packet (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_TP_SRC):
+            skip_message_emit(self, "ModifyL4Src test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['tcp_sport'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyL4Dst(BaseMatchCase):
+    """
+    Modify the dest TCP port of a TCP packet (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_TP_DST):
+            skip_message_emit(self, "ModifyL4Dst test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['tcp_dport'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+class ModifyTOS(BaseMatchCase):
+    """
+    Modify the IP type of service of an IP packet (TP1)
+    """
+    def runTest(self):
+        sup_acts = supported_actions_get(self)
+        if not (sup_acts & 1 << ofp.OFPAT_SET_NW_TOS):
+            skip_message_emit(self, "ModifyTOS test")
+            return
+
+        (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['ip_tos'],
+                                                check_test_params=True)
+        flow_match_test(self, pa_port_map, pkt=pkt, exp_pkt=exp_pkt, 
+                        action_list=acts, max_test=2)
+
+#@todo Need to implement tagged versions of the above tests
+#
+#@todo Implement a test case that strips tag 2, adds tag 3
+# and modifies tag 4 to tag 5.  Then verify (in addition) that
+# tag 6 does not get modified.
+
+class MixedVLAN(BaseMatchCase):
+    """
+    Test mixture of VLAN tag actions
+
+    Strip tag 2 on port 1, send to port 2
+    Add tag 3 on port 1, send to port 2
+    Modify tag 4 to 5 on port 1, send to port 2
+    All other traffic from port 1, send to port 3
+    All traffic from port 2 sent to port 4
+    Use exact matches with different packets for all mods
+    Verify the following:  (port, vid)
+        (port 1, vid 2) => VLAN tag stripped, out port 2
+        (port 1, no tag) => tagged packet w/ vid 2 out port 2
+        (port 1, vid 4) => tagged packet w/ vid 5 out port 2
+        (port 1, vid 5) => tagged packet w/ vid 5 out port 2
+        (port 1, vid 6) => tagged packet w/ vid 6 out port 2
+        (port 2, no tag) => untagged packet out port 4
+        (port 2, vid 2-6) => unmodified packet out port 4
+
+    Variation:  Might try sending VID 5 to port 3 and check.
+    If only VID 5 distinguishes pkt, this will fail on some platforms
+    """   
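
The verification matrix above can be transcribed into a lookup table keyed by (ingress port, input VID), which a future implementation could iterate over; a sketch, with None standing for "untagged":

```python
# (in_port, in_vid) -> expected (out_port, out_vid); None means untagged.
# Transcribed directly from the verification list in the docstring.
expected = {
    (1, 2):    (2, None),
    (1, None): (2, 2),
    (1, 4):    (2, 5),
    (1, 5):    (2, 5),
    (1, 6):    (2, 6),
    (2, None): (4, None),
}
# VIDs 2-6 arriving on port 2 pass through unmodified to port 4
for vid in range(2, 7):
    expected[(2, vid)] = (4, vid)

assert expected[(1, 4)] == (2, 5)
assert expected[(2, 3)] == (4, 3)
```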
+
+test_prio["MixedVLAN"] = -1
+ 
+def supported_actions_get(parent, use_cache=True):
+    """
+    Get the bitmap of supported actions from the switch
+    If use_cache is False, a fresh features request is sent
+    and the cached value is refreshed
+    """
+    global cached_supported_actions
+    if cached_supported_actions is None or not use_cache:
+        request = message.features_request()
+        (reply, pkt) = parent.controller.transact(request, timeout=2)
+        parent.assertTrue(reply is not None, "Did not get response to ftr req")
+        cached_supported_actions = reply.actions
+        pa_logger.info("Supported actions: " + hex(cached_supported_actions))
+
+    return cached_supported_actions
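
Callers test this bitmap with expressions like `sup_acts & 1 << ofp.OFPAT_SET_VLAN_VID`. A small sketch of the encoding, using the OpenFlow 1.0 action type numbers:

```python
# OpenFlow 1.0 action type numbers (normally ofp.OFPAT_*)
OFPAT_OUTPUT = 0
OFPAT_SET_VLAN_VID = 1
OFPAT_SET_VLAN_PCP = 2
OFPAT_STRIP_VLAN = 3

# A hypothetical switch advertising output, set-VLAN-VID and strip-VLAN
sup_acts = (1 << OFPAT_OUTPUT) | (1 << OFPAT_SET_VLAN_VID) | \
           (1 << OFPAT_STRIP_VLAN)

assert sup_acts & (1 << OFPAT_SET_VLAN_VID)        # supported
assert not (sup_acts & (1 << OFPAT_SET_VLAN_PCP))  # not advertised
```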
+
+if __name__ == "__main__":
+    print "Please run through oft script:  ./oft --test_spec=basic"
diff --git a/tests/remote.py b/tests/remote.py
new file mode 100644
index 0000000..5931153
--- /dev/null
+++ b/tests/remote.py
@@ -0,0 +1,23 @@
+"""
+Platform configuration file
+platform == remote
+"""
+
+remote_port_map = {
+    23 : "eth2",
+    24 : "eth3",
+    25 : "eth4",
+    26 : "eth5"
+    }
+
+def platform_config_update(config):
+    """
+    Update configuration for the remote platform
+
+    @param config The configuration dictionary to use/update
+    This routine defines the port map used for this configuration
+    """
+
+    global remote_port_map
+    config["port_map"] = remote_port_map.copy()
+    config["caps_table_idx"] = 0
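
Tests consume `config["port_map"]` by sorting the OpenFlow port numbers and resolving the host interface for each; a quick sketch against the map above:

```python
remote_port_map = {23: "eth2", 24: "eth3", 25: "eth4", 26: "eth5"}

config = {}
config["port_map"] = remote_port_map.copy()

# Typical consumer pattern: sort the OF port numbers, then look up interfaces
of_ports = list(config["port_map"].keys())
of_ports.sort()

assert of_ports == [23, 24, 25, 26]
assert config["port_map"][26] == "eth5"
```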
diff --git a/tests/run_switch.py b/tests/run_switch.py
new file mode 100755
index 0000000..66e7837
--- /dev/null
+++ b/tests/run_switch.py
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+#
+# Create veth pairs and start up switch daemons
+#
+
+import os
+import time
+from subprocess import Popen,PIPE,call,check_call
+from optparse import OptionParser
+
+parser = OptionParser(version="%prog 0.1")
+parser.set_defaults(port_count=4)
+parser.set_defaults(of_dir="../../openflow")
+parser.set_defaults(port=6633)
+parser.add_option("-n", "--port_count", type="int",
+                  help="Number of veth pairs to create")
+parser.add_option("-o", "--of_dir", help="OpenFlow root directory for host")
+parser.add_option("-p", "--port", type="int",
+                  help="Port for OFP to listen on")
+parser.add_option("-N", "--no_wait", action="store_true",
+                  help="Do not wait 2 seconds to start daemons")
+(options, args) = parser.parse_args()
+
+call(["/sbin/modprobe", "veth"])
+for idx in range(0, options.port_count):
+    print "Creating veth pair " + str(idx)
+    call(["/sbin/ip", "link", "add", "type", "veth"])
+
+for idx in range(0, 2 * options.port_count):
+    cmd = ["/sbin/ifconfig", 
+           "veth" + str(idx), 
+           "192.168.1" + str(idx) + ".1", 
+           "netmask", 
+           "255.255.255.0"]
+    print "Cmd: " + str(cmd)
+    call(cmd)
+
+veths = "veth0"
+for idx in range(1, options.port_count):
+    veths += ",veth" + str(2 * idx)
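
Each veth pair contributes one switch-facing interface: the datapath is handed the even-numbered veths, while the odd-numbered peers stay on the host side as test ports. For the default port_count of 4, the loop above produces:

```python
port_count = 4  # default from the option parser
veths = "veth0"
for idx in range(1, port_count):
    veths += ",veth" + str(2 * idx)

# ofdatapath will be started with -i veth0,veth2,veth4,veth6
assert veths == "veth0,veth2,veth4,veth6"
```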
+
+ofd = options.of_dir + "/udatapath/ofdatapath"
+ofp = options.of_dir + "/secchan/ofprotocol"
+
+try:
+    check_call(["ls", ofd])
+except:
+    print "Could not find datapath daemon: " + ofd
+    raise SystemExit(1)
+
+try:
+    check_call(["ls", ofp])
+except:
+    print "Could not find protocol daemon: " + ofp
+    raise SystemExit(1)
+
+if not options.no_wait:
+    print "Starting ofprotocol in 2 seconds; ^C to quit"
+    time.sleep(2)
+else:
+    print "Starting ofprotocol; ^C to quit"
+
+ofd_op = Popen([ofd, "-i", veths, "punix:/tmp/ofd"])
+print "Started ofdatapath on IFs " + veths + " with pid " + str(ofd_op.pid)
+
+call([ofp, "unix:/tmp/ofd", "tcp:127.0.0.1:" + str(options.port),
+      "--fail=closed", "--max-backoff=1"])
+
+ofd_op.kill()
diff --git a/tests/testutils.py b/tests/testutils.py
new file mode 100644
index 0000000..084f74d
--- /dev/null
+++ b/tests/testutils.py
@@ -0,0 +1,686 @@
+
+import sys
+import copy
+
+try:
+    import scapy.all as scapy
+except:
+    try:
+        import scapy as scapy
+    except:
+        sys.exit("Need to install scapy for packet parsing")
+
+import oftest.controller as controller
+import oftest.cstruct as ofp
+import oftest.message as message
+import oftest.dataplane as dataplane
+import oftest.action as action
+import oftest.parse as parse
+import logging
+import types
+
+global skipped_test_count
+skipped_test_count = 0
+
+# Some useful defines
+IP_ETHERTYPE = 0x800
+TCP_PROTOCOL = 0x6
+UDP_PROTOCOL = 0x11
+
+def clear_switch(parent, port_list, logger):
+    """
+    Clear the switch configuration
+
+    @param parent Object implementing controller and assert equal
+    @param logger Logging object
+    """
+    for port in port_list:
+        clear_port_config(parent, port, logger)
+    delete_all_flows(parent.controller, logger)
+
+def delete_all_flows(ctrl, logger):
+    """
+    Delete all flows on the switch
+    @param ctrl The controller object for the test
+    @param logger Logging object
+    """
+
+    logger.info("Deleting all flows")
+    msg = message.flow_mod()
+    msg.match.wildcards = ofp.OFPFW_ALL
+    msg.out_port = ofp.OFPP_NONE
+    msg.command = ofp.OFPFC_DELETE
+    msg.buffer_id = 0xffffffff
+    return ctrl.message_send(msg)
+
+def clear_port_config(parent, port, logger):
+    """
+    Clear the port configuration (currently only no flood setting)
+
+    @param parent Object implementing controller and assert equal
+    @param logger Logging object
+    """
+    rv = port_config_set(parent.controller, port,
+                         0, ofp.OFPPC_NO_FLOOD, logger)
+    parent.assertEqual(rv, 0, "Failed to reset port config")
+
+def simple_tcp_packet(pktlen=100, 
+                      dl_dst='00:01:02:03:04:05',
+                      dl_src='00:06:07:08:09:0a',
+                      dl_vlan_enable=False,
+                      dl_vlan=0,
+                      dl_vlan_pcp=0,
+                      dl_vlan_cfi=0,
+                      ip_src='192.168.0.1',
+                      ip_dst='192.168.0.2',
+                      ip_tos=0,
+                      tcp_sport=1234,
+                      tcp_dport=80
+                      ):
+    """
+    Return a simple dataplane TCP packet
+
+    Supports a few parameters:
+    @param pktlen Length of packet in bytes w/o CRC
+    @param dl_dst Destination MAC
+    @param dl_src Source MAC
+    @param dl_vlan_enable True if the packet carries a VLAN tag, False otherwise
+    @param dl_vlan VLAN ID
+    @param dl_vlan_pcp VLAN priority
+    @param dl_vlan_cfi VLAN CFI bit
+    @param ip_src IP source
+    @param ip_dst IP destination
+    @param ip_tos IP ToS
+    @param tcp_sport TCP source port
+    @param tcp_dport TCP destination port
+
+    Generates a simple TCP request.  Users
+    shouldn't assume anything about this packet other than that
+    it is a valid ethernet/IP/TCP frame.
+    """
+    # Note Dot1Q.id is really CFI
+    if (dl_vlan_enable):
+        pkt = scapy.Ether(dst=dl_dst, src=dl_src)/ \
+            scapy.Dot1Q(prio=dl_vlan_pcp, id=dl_vlan_cfi, vlan=dl_vlan)/ \
+            scapy.IP(src=ip_src, dst=ip_dst, tos=ip_tos)/ \
+            scapy.TCP(sport=tcp_sport, dport=tcp_dport)
+    else:
+        pkt = scapy.Ether(dst=dl_dst, src=dl_src)/ \
+            scapy.IP(src=ip_src, dst=ip_dst, tos=ip_tos)/ \
+            scapy.TCP(sport=tcp_sport, dport=tcp_dport)
+
+    pkt = pkt/("D" * (pktlen - len(pkt)))
+
+    return pkt
+
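+# Illustrative usage of simple_tcp_packet (not part of this module;
+# 'dataplane' and 'of_port' are placeholders for objects available
+# inside a test case):
+#
+#   pkt = simple_tcp_packet(dl_vlan_enable=True, dl_vlan=5, dl_vlan_pcp=3)
+#   dataplane.send(of_port, str(pkt))
+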
+def simple_icmp_packet(pktlen=60, 
+                      dl_dst='00:01:02:03:04:05',
+                      dl_src='00:06:07:08:09:0a',
+                      dl_vlan_enable=False,
+                      dl_vlan=0,
+                      dl_vlan_pcp=0,
+                      ip_src='192.168.0.1',
+                      ip_dst='192.168.0.2',
+                      ip_tos=0,
+                      icmp_type=8,
+                      icmp_code=0
+                      ):
+    """
+    Return a simple ICMP packet
+
+    Supports a few parameters:
+    @param pktlen Length of packet in bytes w/o CRC
+    @param dl_dst Destination MAC
+    @param dl_src Source MAC
+    @param dl_vlan_enable True if the packet is with vlan, False otherwise
+    @param dl_vlan VLAN ID
+    @param dl_vlan_pcp VLAN priority
+    @param ip_src IP source
+    @param ip_dst IP destination
+    @param ip_tos IP ToS
+    @param icmp_type ICMP type
+    @param icmp_code ICMP code
+
+    Generates a simple ICMP ECHO REQUEST.  Users
+    shouldn't assume anything about this packet other than that
+    it is a valid ethernet/ICMP frame.
+    """
+    if (dl_vlan_enable):
+        pkt = scapy.Ether(dst=dl_dst, src=dl_src)/ \
+            scapy.Dot1Q(prio=dl_vlan_pcp, id=0, vlan=dl_vlan)/ \
+            scapy.IP(src=ip_src, dst=ip_dst, tos=ip_tos)/ \
+            scapy.ICMP(type=icmp_type, code=icmp_code)
+    else:
+        pkt = scapy.Ether(dst=dl_dst, src=dl_src)/ \
+            scapy.IP(src=ip_src, dst=ip_dst, tos=ip_tos)/ \
+            scapy.ICMP(type=icmp_type, code=icmp_code)
+
+    pkt = pkt/("0" * (pktlen - len(pkt)))
+
+    return pkt
+
+def do_barrier(ctrl):
+    b = message.barrier_request()
+    ctrl.transact(b)
+
+
+def port_config_get(controller, port_no, logger):
+    """
+    Get a port's configuration
+
+    Gets the switch feature configuration and grabs one port's
+    configuration
+
+    @returns (hwaddr, config, advert) The hwaddress, configuration and
+    advertised values
+    """
+    request = message.features_request()
+    reply, pkt = controller.transact(request, timeout=2)
+    if reply is None:
+        logger.warn("Get feature request failed")
+        return None, None, None
+    logger.debug(reply.show())
+    for idx in range(len(reply.ports)):
+        if reply.ports[idx].port_no == port_no:
+            return (reply.ports[idx].hw_addr, reply.ports[idx].config,
+                    reply.ports[idx].advertised)
+    
+    logger.warn("Did not find port number for port config")
+    return None, None, None
+
+def port_config_set(controller, port_no, config, mask, logger):
+    """
+    Set the port configuration according the given parameters
+
+    Gets the switch feature configuration and updates one port's
+    configuration value according to config and mask
+    """
+    logger.info("Setting port " + str(port_no) + " to config " + str(config))
+    request = message.features_request()
+    reply, pkt = controller.transact(request, timeout=2)
+    if reply is None:
+        return -1
+    logger.debug(reply.show())
+    for idx in range(len(reply.ports)):
+        if reply.ports[idx].port_no == port_no:
+            break
+    else:
+        # Loop completed without break: port_no not in the features reply
+        return -1
+    mod = message.port_mod()
+    mod.port_no = port_no
+    mod.hw_addr = reply.ports[idx].hw_addr
+    mod.config = config
+    mod.mask = mask
+    mod.advertise = reply.ports[idx].advertised
+    rv = controller.message_send(mod)
+    return rv
+
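+# Illustrative usage: set only the OFPPC_NO_FLOOD bit on a port; the mask
+# limits which config bits the switch may change:
+#
+#   rv = port_config_set(controller, of_port,
+#                        ofp.OFPPC_NO_FLOOD, ofp.OFPPC_NO_FLOOD, logger)
+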
+def receive_pkt_check(dataplane, pkt, yes_ports, no_ports, assert_if, logger):
+    """
+    Check for proper receive packets across all ports
+    @param dataplane The dataplane object
+    @param pkt Expected packet; may be None if yes_ports is empty
+    @param yes_ports Set or list of ports that should receive packet
+    @param no_ports Set or list of ports that should not receive packet
+    @param assert_if Object that implements assertXXX
+    """
+    for ofport in yes_ports:
+        logger.debug("Checking for pkt on port " + str(ofport))
+        (rcv_port, rcv_pkt, pkt_time) = dataplane.poll(
+            port_number=ofport, timeout=1)
+        assert_if.assertTrue(rcv_pkt is not None, 
+                             "Did not receive pkt on " + str(ofport))
+        assert_if.assertEqual(str(pkt), str(rcv_pkt),
+                              "Response packet does not match send packet " +
+                              "on port " + str(ofport))
+
+    for ofport in no_ports:
+        logger.debug("Negative check for pkt on port " + str(ofport))
+        (rcv_port, rcv_pkt, pkt_time) = dataplane.poll(
+            port_number=ofport, timeout=1)
+        assert_if.assertTrue(rcv_pkt is None, 
+                             "Unexpected pkt on port " + str(ofport))
+
+
+def receive_pkt_verify(parent, egr_port, exp_pkt):
+    """
+    Receive a packet and verify it matches an expected value
+
+    parent must implement dataplane, assertTrue and assertEqual
+    """
+    (rcv_port, rcv_pkt, pkt_time) = parent.dataplane.poll(port_number=egr_port,
+                                                          timeout=1)
+    if rcv_pkt is None:
+        parent.logger.error("ERROR: No packet received from " + str(egr_port))
+
+    parent.assertTrue(rcv_pkt is not None,
+                      "Did not receive packet port " + str(egr_port))
+    parent.logger.debug("Packet len " + str(len(rcv_pkt)) + " in on " + 
+                    str(rcv_port))
+
+    if str(exp_pkt) != str(rcv_pkt):
+        parent.logger.error("ERROR: Packet match failed.")
+        parent.logger.debug("Expected len " + str(len(exp_pkt)) + ": "
+                        + str(exp_pkt).encode('hex'))
+        parent.logger.debug("Received len " + str(len(rcv_pkt)) + ": "
+                        + str(rcv_pkt).encode('hex'))
+    parent.assertEqual(str(exp_pkt), str(rcv_pkt),
+                       "Packet match error on port " + str(egr_port))
+    
+def match_verify(parent, req_match, res_match):
+    """
+    Verify flow matches agree; if they disagree, report where
+
+    parent must implement assertEqual
+    Use str() to ensure content is compared and not pointers
+    """
+
+    parent.assertEqual(req_match.wildcards, res_match.wildcards,
+                       'Match failed: wildcards: ' + hex(req_match.wildcards) +
+                       " != " + hex(res_match.wildcards))
+    parent.assertEqual(req_match.in_port, res_match.in_port,
+                       'Match failed: in_port: ' + str(req_match.in_port) +
+                       " != " + str(res_match.in_port))
+    parent.assertEqual(str(req_match.dl_src), str(res_match.dl_src),
+                       'Match failed: dl_src: ' + str(req_match.dl_src) +
+                       " != " + str(res_match.dl_src))
+    parent.assertEqual(str(req_match.dl_dst), str(res_match.dl_dst),
+                       'Match failed: dl_dst: ' + str(req_match.dl_dst) +
+                       " != " + str(res_match.dl_dst))
+    parent.assertEqual(req_match.dl_vlan, res_match.dl_vlan,
+                       'Match failed: dl_vlan: ' + str(req_match.dl_vlan) +
+                       " != " + str(res_match.dl_vlan))
+    parent.assertEqual(req_match.dl_vlan_pcp, res_match.dl_vlan_pcp,
+                       'Match failed: dl_vlan_pcp: ' + 
+                       str(req_match.dl_vlan_pcp) + " != " + 
+                       str(res_match.dl_vlan_pcp))
+    parent.assertEqual(req_match.dl_type, res_match.dl_type,
+                       'Match failed: dl_type: ' + str(req_match.dl_type) +
+                       " != " + str(res_match.dl_type))
+
+    if (not(req_match.wildcards & ofp.OFPFW_DL_TYPE)
+        and (req_match.dl_type == IP_ETHERTYPE)):
+        parent.assertEqual(req_match.nw_tos, res_match.nw_tos,
+                           'Match failed: nw_tos: ' + str(req_match.nw_tos) +
+                           " != " + str(res_match.nw_tos))
+        parent.assertEqual(req_match.nw_proto, res_match.nw_proto,
+                           'Match failed: nw_proto: ' + str(req_match.nw_proto) +
+                           " != " + str(res_match.nw_proto))
+        parent.assertEqual(req_match.nw_src, res_match.nw_src,
+                           'Match failed: nw_src: ' + str(req_match.nw_src) +
+                           " != " + str(res_match.nw_src))
+        parent.assertEqual(req_match.nw_dst, res_match.nw_dst,
+                           'Match failed: nw_dst: ' + str(req_match.nw_dst) +
+                           " != " + str(res_match.nw_dst))
+
+        if (not(req_match.wildcards & ofp.OFPFW_NW_PROTO)
+            and ((req_match.nw_proto == TCP_PROTOCOL)
+                 or (req_match.nw_proto == UDP_PROTOCOL))):
+            parent.assertEqual(req_match.tp_src, res_match.tp_src,
+                               'Match failed: tp_src: ' + 
+                               str(req_match.tp_src) +
+                               " != " + str(res_match.tp_src))
+            parent.assertEqual(req_match.tp_dst, res_match.tp_dst,
+                               'Match failed: tp_dst: ' + 
+                               str(req_match.tp_dst) +
+                               " != " + str(res_match.tp_dst))
+
+def flow_removed_verify(parent, request=None, pkt_count=-1, byte_count=-1):
+    """
+    Receive a flow removed msg and verify it matches expected
+
+    @param parent Must implement controller, assertEqual
+    @param request If not None, verify the removed flow matches this request
+    @param pkt_count If >= 0, verify packet count
+    @param byte_count If >= 0, verify byte count
+    """
+    (response, raw) = parent.controller.poll(ofp.OFPT_FLOW_REMOVED, 2)
+    parent.assertTrue(response is not None, 'No flow removed message received')
+
+    if request is None:
+        return
+
+    parent.assertEqual(request.cookie, response.cookie,
+                       "Flow removed cookie error: " +
+                       hex(request.cookie) + " != " + hex(response.cookie))
+
+    req_match = request.match
+    res_match = response.match
+    match_verify(parent, req_match, res_match)
+
+    if (req_match.wildcards != 0):
+        parent.assertEqual(request.priority, response.priority,
+                           'Flow remove prio mismatch: ' + 
+                           str(request.priority) + " != " + 
+                           str(response.priority))
+        parent.assertEqual(response.reason, ofp.OFPRR_HARD_TIMEOUT,
+                           'Flow remove reason is not HARD TIMEOUT:' +
+                           str(response.reason))
+        if pkt_count >= 0:
+            parent.assertEqual(response.packet_count, pkt_count,
+                               'Flow removed failed, packet count: ' + 
+                               str(response.packet_count) + " != " +
+                               str(pkt_count))
+        if byte_count >= 0:
+            parent.assertEqual(response.byte_count, byte_count,
+                               'Flow removed failed, byte count: ' + 
+                               str(response.byte_count) + " != " + 
+                               str(byte_count))
+
+def flow_msg_create(parent, pkt, ing_port=None, action_list=None, wildcards=0,
+               egr_port=None, egr_queue=None, check_expire=False):
+    """
+    Create a flow message
+
+    Match on packet with given wildcards.  
+    See flow_match_test for other parameter descriptions
+    @param egr_queue if not None, make the output an enqueue action
+    """
+    match = parse.packet_to_flow_match(pkt)
+    parent.assertTrue(match is not None, "Flow match from pkt failed")
+    match.wildcards = wildcards
+    match.in_port = ing_port
+
+    request = message.flow_mod()
+    request.match = match
+    request.buffer_id = 0xffffffff
+    if check_expire:
+        request.flags |= ofp.OFPFF_SEND_FLOW_REM
+        request.hard_timeout = 1
+
+    if action_list is not None:
+        for act in action_list:
+            parent.logger.debug("Adding action " + act.show())
+            rv = request.actions.add(act)
+            parent.assertTrue(rv, "Could not add action " + act.show())
+
+    # Set up output/enqueue action if directed
+    if egr_queue is not None:
+        parent.assertTrue(egr_port is not None, "Egress port not set")
+        act = action.action_enqueue()
+        act.port = egr_port
+        act.queue_id = egr_queue
+        rv = request.actions.add(act)
+        parent.assertTrue(rv, "Could not add enqueue action " + 
+                          str(egr_port) + " Q: " + str(egr_queue))
+    elif egr_port is not None:
+        act = action.action_output()
+        act.port = egr_port
+        rv = request.actions.add(act)
+        parent.assertTrue(rv, "Could not add output action " + str(egr_port))
+
+    parent.logger.debug(request.show())
+
+    return request
+
+def flow_msg_install(parent, request, clear_table=True):
+    """
+    Install a flow mod message in the switch
+
+    @param parent Must implement controller, assertEqual, assertTrue
+    @param request The request, all set to go
+    @param clear_table If true, clear the flow table before installing
+    """
+    if clear_table:
+        parent.logger.debug("Clear flow table")
+        rc = delete_all_flows(parent.controller, parent.logger)
+        parent.assertEqual(rc, 0, "Failed to delete all flows")
+        do_barrier(parent.controller)
+
+    parent.logger.debug("Insert flow")
+    rv = parent.controller.message_send(request)
+    parent.assertTrue(rv != -1, "Error installing flow mod")
+    do_barrier(parent.controller)
+
+def flow_match_test_port_pair(parent, ing_port, egr_port, wildcards=0, 
+                              dl_vlan=-1, pkt=None, exp_pkt=None,
+                              action_list=None, check_expire=False):
+    """
+    Flow match test on single TCP packet
+
+    Run test with packet through switch from ing_port to egr_port
+    See flow_match_test for parameter descriptions
+    """
+
+    parent.logger.info("Pkt match test: " + str(ing_port) + " to " + str(egr_port))
+    parent.logger.debug("  WC: " + hex(wildcards) + " vlan: " + str(dl_vlan) +
+                    " expire: " + str(check_expire))
+    if pkt is None:
+        pkt = simple_tcp_packet(dl_vlan_enable=(dl_vlan >= 0), dl_vlan=dl_vlan)
+
+    request = flow_msg_create(parent, pkt, ing_port=ing_port, 
+                              wildcards=wildcards, egr_port=egr_port,
+                              action_list=action_list)
+
+    flow_msg_install(parent, request)
+
+    parent.logger.debug("Send packet: " + str(ing_port) + " to " + str(egr_port))
+    parent.dataplane.send(ing_port, str(pkt))
+
+    if exp_pkt is None:
+        exp_pkt = pkt
+    receive_pkt_verify(parent, egr_port, exp_pkt)
+
+    if check_expire:
+        #@todo Not all HW supports both pkt and byte counters
+        flow_removed_verify(parent, request, pkt_count=1, byte_count=len(pkt))
+
+def flow_match_test(parent, port_map, wildcards=0, dl_vlan=-1, pkt=None, 
+                    exp_pkt=None, action_list=None, check_expire=False, 
+                    max_test=0):
+    """
+    Run flow_match_test_port_pair on all port pairs
+
+    @param max_test If > 0 no more than this number of tests are executed.
+    @param parent Must implement controller, dataplane, assertTrue, assertEqual
+    and logger
+    @param pkt If not None, use this packet for ingress
+    @param wildcards For flow match entry
+    @param dl_vlan If not -1, and pkt is None, create a pkt w/ VLAN tag
+    @param exp_pkt If not None, use this as the expected output pkt; else use pkt
+    @param action_list Additional actions to add to flow mod
+    @param check_expire Check for flow expiration message
+    """
+    of_ports = port_map.keys()
+    of_ports.sort()
+    parent.assertTrue(len(of_ports) > 1, "Not enough ports for test")
+    test_count = 0
+
+    for ing_idx in range(len(of_ports)):
+        ingress_port = of_ports[ing_idx]
+        for egr_idx in range(len(of_ports)):
+            if egr_idx == ing_idx:
+                continue
+            egress_port = of_ports[egr_idx]
+            flow_match_test_port_pair(parent, ingress_port, egress_port, 
+                                      wildcards=wildcards, dl_vlan=dl_vlan, 
+                                      pkt=pkt, exp_pkt=exp_pkt,
+                                      action_list=action_list,
+                                      check_expire=check_expire)
+            test_count += 1
+            if (max_test > 0) and (test_count >= max_test):
+                parent.logger.info("Ran " + str(test_count) + " tests; exiting")
+                return
+
+def test_param_get(config, key, default=None):
+    """
+    Return value passed via test-params if present
+
+    @param config The configuration structure for OFTest
+    @param key The lookup key
+    @param default Default value to use if not found
+
+    If the pair 'key=val' appeared in the string passed to --test-params
+    on the command line, return val (as interpreted by exec).  Otherwise
+    return default value.
+    """
+    try:
+        exec config["test_params"]
+    except:
+        return default
+
+    s = "val = " + str(key)
+    try:
+        exec s
+        return val
+    except:
+        return default
+
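+# Illustrative usage: if oft was run with --test-params="vid=5;strip_vlan=True"
+# then test_param_get(config, 'vid') returns 5 and
+# test_param_get(config, 'strip_vlan', default=False) returns True.
+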
+def action_generate(parent, field_to_mod, mod_field_vals):
+    """
+    Create an action to modify the field indicated in field_to_mod
+
+    @param parent Must implement assertTrue
+    @param field_to_mod The field to modify as a string name
+    @param mod_field_vals Hash of values to use for modified values
+    """
+
+    act = None
+
+    if field_to_mod in ['pktlen']:
+        return None
+
+    if field_to_mod == 'dl_dst':
+        act = action.action_set_dl_dst()
+        act.dl_addr = parse.parse_mac(mod_field_vals['dl_dst'])
+    elif field_to_mod == 'dl_src':
+        act = action.action_set_dl_src()
+        act.dl_addr = parse.parse_mac(mod_field_vals['dl_src'])
+    elif field_to_mod == 'dl_vlan_enable':
+        if not mod_field_vals['dl_vlan_enable']: # Strip VLAN tag
+            act = action.action_strip_vlan()
+        # Add VLAN tag is handled by dl_vlan field
+        # Will return None in this case
+    elif field_to_mod == 'dl_vlan':
+        act = action.action_set_vlan_vid()
+        act.vlan_vid = mod_field_vals['dl_vlan']
+    elif field_to_mod == 'dl_vlan_pcp':
+        act = action.action_set_vlan_pcp()
+        act.vlan_pcp = mod_field_vals['dl_vlan_pcp']
+    elif field_to_mod == 'ip_src':
+        act = action.action_set_nw_src()
+        act.nw_addr = parse.parse_ip(mod_field_vals['ip_src'])
+    elif field_to_mod == 'ip_dst':
+        act = action.action_set_nw_dst()
+        act.nw_addr = parse.parse_ip(mod_field_vals['ip_dst'])
+    elif field_to_mod == 'ip_tos':
+        act = action.action_set_nw_tos()
+        act.nw_tos = mod_field_vals['ip_tos']
+    elif field_to_mod == 'tcp_sport':
+        act = action.action_set_tp_src()
+        act.tp_port = mod_field_vals['tcp_sport']
+    elif field_to_mod == 'tcp_dport':
+        act = action.action_set_tp_dst()
+        act.tp_port = mod_field_vals['tcp_dport']
+    else:
+        parent.assertTrue(0, "Unknown field to modify: " + str(field_to_mod))
+
+    return act
+
+def pkt_action_setup(parent, start_field_vals={}, mod_field_vals={}, 
+                     mod_fields=[], check_test_params=False):
+    """
+    Set up the ingress and expected packet and action list for a test
+
+    @param parent Must implement assertTrue, config hash and logger
+    @param start_field_vals Field values to use for ingress packet (optional)
+    @param mod_field_vals Field values to use for modified packet (optional)
+    @param mod_fields The list of fields to be modified by the switch in the test
+    @param check_test_params If True, check the parameters vid, add_vlan
+    and strip_vlan from the command line.
+
+    Returns a triple:  pkt-to-send, expected-pkt, action-list
+    """
+
+    new_actions = []
+
+    base_pkt_params = {}
+    base_pkt_params['pktlen'] = 100
+    base_pkt_params['dl_dst'] = '00:DE:F0:12:34:56'
+    base_pkt_params['dl_src'] = '00:23:45:67:89:AB'
+    base_pkt_params['dl_vlan_enable'] = False
+    base_pkt_params['dl_vlan'] = 2
+    base_pkt_params['dl_vlan_pcp'] = 0
+    base_pkt_params['ip_src'] = '192.168.0.1'
+    base_pkt_params['ip_dst'] = '192.168.0.2'
+    base_pkt_params['ip_tos'] = 0
+    base_pkt_params['tcp_sport'] = 1234
+    base_pkt_params['tcp_dport'] = 80
+    for keyname in start_field_vals.keys():
+        base_pkt_params[keyname] = start_field_vals[keyname]
+
+    mod_pkt_params = {}
+    mod_pkt_params['pktlen'] = 100
+    mod_pkt_params['dl_dst'] = '00:21:0F:ED:CB:A9'
+    mod_pkt_params['dl_src'] = '00:ED:CB:A9:87:65'
+    mod_pkt_params['dl_vlan_enable'] = False
+    mod_pkt_params['dl_vlan'] = 3
+    mod_pkt_params['dl_vlan_pcp'] = 7
+    mod_pkt_params['ip_src'] = '10.20.30.40'
+    mod_pkt_params['ip_dst'] = '50.60.70.80'
+    mod_pkt_params['ip_tos'] = 0xf0
+    mod_pkt_params['tcp_sport'] = 4321
+    mod_pkt_params['tcp_dport'] = 8765
+    for keyname in mod_field_vals.keys():
+        mod_pkt_params[keyname] = mod_field_vals[keyname]
+
+    # Check for test param modifications
+    strip = False
+    if check_test_params:
+        add_vlan = test_param_get(parent.config, 'add_vlan')
+        strip_vlan = test_param_get(parent.config, 'strip_vlan')
+        vid = test_param_get(parent.config, 'vid')
+
+        if add_vlan and strip_vlan:
+            parent.assertTrue(0, "Add and strip VLAN both specified")
+
+        if vid:
+            base_pkt_params['dl_vlan_enable'] = True
+            base_pkt_params['dl_vlan'] = vid
+            if 'dl_vlan' in mod_fields:
+                mod_pkt_params['dl_vlan'] = vid + 1
+
+        if add_vlan:
+            base_pkt_params['dl_vlan_enable'] = False
+            mod_pkt_params['dl_vlan_enable'] = True
+            mod_pkt_params['pktlen'] = base_pkt_params['pktlen'] + 4
+            mod_fields.append('pktlen')
+            mod_fields.append('dl_vlan_enable')
+            if 'dl_vlan' not in mod_fields:
+                mod_fields.append('dl_vlan')
+        elif strip_vlan:
+            base_pkt_params['dl_vlan_enable'] = True
+            mod_pkt_params['dl_vlan_enable'] = False
+            mod_pkt_params['pktlen'] = base_pkt_params['pktlen'] - 4
+            mod_fields.append('dl_vlan_enable')
+            mod_fields.append('pktlen')
+
+    # Build the ingress packet
+    ingress_pkt = simple_tcp_packet(**base_pkt_params)
+
+    # Build the expected packet, modifying the indicated fields
+    for item in mod_fields:
+        base_pkt_params[item] = mod_pkt_params[item]
+        act = action_generate(parent, item, mod_pkt_params)
+        if act:
+            new_actions.append(act)
+
+    expected_pkt = simple_tcp_packet(**base_pkt_params)
+
+    return (ingress_pkt, expected_pkt, new_actions)
+        
+
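+# Illustrative usage: build the ingress packet, expected packet and action
+# list for a test in which the switch rewrites the IP source address
+# ('self' is the test case; port numbers are placeholders):
+#
+#   (pkt, exp_pkt, acts) = pkt_action_setup(self, mod_fields=['ip_src'])
+#   request = flow_msg_create(self, pkt, ing_port=in_port,
+#                             action_list=acts, egr_port=out_port)
+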
+def skip_message_emit(parent, s):
+    """
+    Print out a 'skipped' message to stderr
+
+    @param s The string to print out to the log file
+    @param parent Must implement config and logger objects
+    """
+    global skipped_test_count
+
+    skipped_test_count += 1
+    parent.logger.info("Skipping: " + s)
+    if parent.config["dbg_level"] < logging.WARNING:
+        sys.stderr.write("(skipped) ")
+    else:
+        sys.stderr.write("(S)")
diff --git a/tools/munger/Makefile b/tools/munger/Makefile
new file mode 100644
index 0000000..31f3423
--- /dev/null
+++ b/tools/munger/Makefile
@@ -0,0 +1,100 @@
+#
+# Simple make file to generate OpenFlow python files
+#
+# Fixme:  Would like pylibopenflow to be able to run remotely
+# Currently, we have to cd to its dir and refer back to local
+
+TOP_DIR = ../..
+TOOLS_DIR = ..
+DOC_DIR = ${TOP_DIR}/doc
+TESTS_DIR = ${TOP_DIR}/tests
+
+PYLIBOF_DIR = ${TOOLS_DIR}/pylibopenflow
+
+TARGET_DIR = ${TOP_DIR}/src/python/oftest
+SETUP_DIR = ${TOP_DIR}/src/python/
+
+# Relative to pyopenflow-pythonize exec location
+OF_HEADER = include/openflow.h
+
+# Relative to here
+ABS_OF_HEADER = ${PYLIBOF_DIR}/${OF_HEADER}
+
+PYTHONIZE = bin/pyopenflow-pythonize.py
+CSTRUCT_GEN_CMD = (cd ${PYLIBOF_DIR} && ${PYTHONIZE} -i ${OF_HEADER} \
+	${TARGET_DIR}/cstruct.py)
+CSTRUCT_AUX_INFO = ${TARGET_DIR}/class_maps.py
+
+# Dependencies for cstruct.py
+CSTRUCT_DEP = ${ABS_OF_HEADER} $(wildcard ${PYLIBOF_DIR}/pylib/*.py)
+CSTRUCT_DEP += $(wildcard ${PYLIBOF_DIR}/pylib/of/*.py) 
+
+# Generated and other files
+GEN_FILES := $(addprefix ${TARGET_DIR}/,cstruct.py message.py error.py \
+	action.py)
+# class_maps is generated as a side effect of cstruct....
+OTHER_FILES :=  $(addprefix ${TARGET_DIR}/,action_list.py parse.py \
+	controller.py dataplane.py class_maps.py)
+LINT_SOURCE := ${GEN_FILES} ${OTHER_FILES}
+LINT_FILES := $(subst .py,.log,${LINT_SOURCE})
+LINT_FILES := $(subst ${TARGET_DIR}/,lint/,${LINT_FILES})
+
+all: ${GEN_FILES}
+
+install: all
+	(cd ${SETUP_DIR} && sudo python setup.py install)
+
+# The core OpenFlow libraries generated from openflow.h
+${TARGET_DIR}/cstruct.py: ${CSTRUCT_DEP}
+	${CSTRUCT_GEN_CMD} > ${CSTRUCT_AUX_INFO}
+
+# General rule: e.g., src/message.py comes from scripts/message_gen.py
+${TARGET_DIR}/%.py: scripts/%_gen.py ${TARGET_DIR}/cstruct.py
+	python $< > $@
+
+# The pylint files
+lint/%.log: ${TARGET_DIR}/%.py
+	mkdir -p lint
+	(cd ${TARGET_DIR} && pylint -e $(notdir $<)) > $@
+
+# Note that lint has issues with scapy syntax
+lint: ${LINT_FILES}
+
+${TESTS_DIR}/oft.py:
+	ln -s oft $@
+
+# For now, just local source doc is generated
+doc: ${GEN_FILES} ${OTHER_FILES} ${DOC_DIR}/Doxyfile ${TESTS_DIR}/oft.py
+	(cd ${DOC_DIR} && doxygen)
+
+clean:
+	rm -rf ${GEN_FILES} ${LINT_FILES} ${DOC_DIR}/html/*
+
+test: all
+	(cd tests && python msg_test.py) > tests/msg_test.log
+
+help:
+	@echo
+	@echo "Makefile for oftest source munger"
+	@echo "    Default builds python files and installs in ${TARGET_DIR}"
+	@echo "    make local:  Generate files and put in src/"
+	@echo
+	@echo "Targets:"
+	@echo "   all:     Puts generated .py in ${TARGET_DIR}"
+	@echo "   lint:    Puts error report in lint/*.log"
+	@echo "   doc:     Runs doxygen on generated files in ../../doc"
+	@echo "   install: Run setup tools for generated python source"
+	@echo "   clean:   Removes generated files"
+	@echo
+	@echo "Debug info:"
+	@echo
+	@echo "Files generated GEN_FILES:  ${GEN_FILES}"
+	@echo
+	@echo "Dependencies for cstruct.py CSTRUCT_DEP:  ${CSTRUCT_DEP}"
+	@echo
+	@echo "Already created files OTHER_FILES:  ${OTHER_FILES}"
+	@echo
+	@echo "LINT_FILES:  ${LINT_FILES}"
+
+
+.PHONY: all local install help doc lint clean test
diff --git a/tools/munger/scripts/action_gen.py b/tools/munger/scripts/action_gen.py
new file mode 100644
index 0000000..116cd7b
--- /dev/null
+++ b/tools/munger/scripts/action_gen.py
@@ -0,0 +1,107 @@
+#!/usr/bin/python
+#
+# This python script generates action subclasses
+#
+
+import re
+import sys
+sys.path.append("../../src/python/oftest")
+from cstruct import *
+from class_maps import class_to_members_map
+
+print """
+# Python OpenFlow action wrapper classes
+
+from cstruct import *
+
+"""
+
+################################################################
+#
+# Action subclasses
+#
+################################################################
+
+action_structs = [
+    'output',
+    'vlan_vid',
+    'vlan_pcp',
+    'dl_addr',
+    'nw_addr',
+    'tp_port',
+    'nw_tos',
+    'vendor_header']
+
+action_types = [
+    'output',
+    'set_vlan_vid',
+    'set_vlan_pcp',
+    'strip_vlan',
+    'set_dl_src',
+    'set_dl_dst',
+    'set_nw_src',
+    'set_nw_dst',
+    'set_nw_tos',
+    'set_tp_src',
+    'set_tp_dst',
+    'enqueue',
+    'vendor'
+]
+action_types.sort()
+
+action_class_map = {
+    'output' : 'ofp_action_output',
+    'set_vlan_vid' : 'ofp_action_vlan_vid',
+    'set_vlan_pcp' : 'ofp_action_vlan_pcp',
+    'strip_vlan' : 'ofp_action_header',
+    'set_dl_src' : 'ofp_action_dl_addr',
+    'set_dl_dst' : 'ofp_action_dl_addr',
+    'set_nw_src' : 'ofp_action_nw_addr',
+    'set_nw_dst' : 'ofp_action_nw_addr',
+    'set_nw_tos' : 'ofp_action_nw_tos',
+    'set_tp_src' : 'ofp_action_tp_port',
+    'set_tp_dst' : 'ofp_action_tp_port',
+    'enqueue' : 'ofp_action_enqueue',
+    'vendor' : 'ofp_action_vendor_header'
+}
+
+template = """
+class action_--TYPE--(--PARENT_TYPE--):
+    \"""
+    Wrapper class for --TYPE-- action object
+
+    --DOC_INFO--
+    \"""
+    def __init__(self):
+        --PARENT_TYPE--.__init__(self)
+        self.type = --ACTION_NAME--
+        self.len = self.__len__()
+    def show(self, prefix=''):
+        outstr = prefix + "action_--TYPE--\\n"
+        outstr += --PARENT_TYPE--.show(self, prefix)
+        return outstr
+"""
+
+if __name__ == '__main__':
+    for (t, parent) in action_class_map.items():
+        if not parent in class_to_members_map.keys():
+            doc_info = "Unknown parent action class: " + parent
+        else:
+            doc_info = "Data members inherited from " + parent + ":\n"
+            for var in class_to_members_map[parent]:
+                doc_info += "    @arg " + var + "\n"
+        action_name = "OFPAT_" + t.upper()
+        to_print = re.sub('--TYPE--', t, template)
+        to_print = re.sub('--PARENT_TYPE--', parent, to_print)
+        to_print = re.sub('--ACTION_NAME--', action_name, to_print)
+        to_print = re.sub('--DOC_INFO--', doc_info, to_print)
+        print to_print
+
+    # Generate a list of action classes
+    print "action_class_list = ("
+    prev = None
+    for (t, parent) in action_class_map.items():
+        if prev:
+            print "    action_" + prev + ","
+        prev = t
+    print "    action_" + prev + ")"
diff --git a/tools/munger/scripts/error_gen.py b/tools/munger/scripts/error_gen.py
new file mode 100644
index 0000000..83850f5
--- /dev/null
+++ b/tools/munger/scripts/error_gen.py
@@ -0,0 +1,90 @@
+#!/usr/bin/python
+#
+# This python script generates error subclasses
+#
+
+import re
+import sys
+sys.path.append("../../src/python/oftest")
+from cstruct import *
+from class_maps import class_to_members_map
+
+print """
+# Python OpenFlow error wrapper classes
+
+from cstruct import *
+
+"""
+
+################################################################
+#
+# Error message subclasses
+#
+################################################################
+
+# Template for error subclasses
+
+template = """
+class --TYPE--_error_msg(ofp_error_msg):
+    \"""
+    Wrapper class for --TYPE-- error message class
+
+    Data members inherited from ofp_error_msg:
+    @arg type
+    @arg code
+    @arg data: Binary string following message members
+    
+    \"""
+    def __init__(self):
+        ofp_error_msg.__init__(self)
+        self.header = ofp_header()
+        self.header.type = OFPT_ERROR
+        self.type = --ERROR_NAME--
+        self.data = ""
+
+    def pack(self, assertstruct=True):
+        self.header.length = self.__len__()
+        packed = self.header.pack()
+        packed += ofp_error_msg.pack(self)
+        packed += self.data
+        return packed
+
+    def unpack(self, binary_string):
+        binary_string = self.header.unpack(binary_string)
+        binary_string = ofp_error_msg.unpack(self, binary_string)
+        self.data = binary_string
+        return ""
+
+    def __len__(self):
+        return OFP_HEADER_BYTES + OFP_ERROR_MSG_BYTES + len(self.data)
+
+    def show(self, prefix=''):
+        outstr = prefix + "--TYPE--_error_msg\\n"
+        outstr += self.header.show(prefix + '  ')
+        outstr += ofp_error_msg.show(self, prefix + '  ')
+        outstr += prefix + "data is of length " + str(len(self.data)) + '\\n'
+        ##@todo Consider trying to parse the string
+        return outstr
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        return (self.header == other.header and
+                ofp_error_msg.__eq__(self, other) and
+                self.data == other.data)
+
+    def __ne__(self, other): return not self.__eq__(other)
+"""
+
+error_types = [
+    'hello_failed',
+    'bad_request',
+    'bad_action',
+    'flow_mod_failed',
+    'port_mod_failed',
+    'queue_op_failed']
+
+for t in error_types:
+    error_name = "OFPET_" + t.upper()
+    to_print = re.sub('--TYPE--', t, template)
+    to_print = re.sub('--ERROR_NAME--', error_name, to_print)
+    print to_print
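The script above expands a class template per error type with `re.sub` and prints the result. The same pattern can be sketched in a few self-contained lines (Python 3 here for illustration; the generator itself is Python 2, and the miniature template below is hypothetical, not the real one):

```python
import re

# Hypothetical miniature of the template used above: --TYPE-- placeholders
# are expanded once per error type to produce a class definition.
template = '''
class --TYPE--_error_msg:
    """Wrapper for the --TYPE-- error message."""
    error_name = "OFPET_--TYPE_UPPER--"
'''

def expand(t):
    # Substitute the type name into the template, as error_gen.py does
    out = re.sub('--TYPE--', t, template)
    out = re.sub('--TYPE_UPPER--', t.upper(), out)
    return out

# Generate source for two error types and compile it into a namespace
source = "".join(expand(t) for t in ['hello_failed', 'bad_request'])
namespace = {}
exec(source, namespace)
print(namespace['hello_failed_error_msg'].error_name)  # OFPET_HELLO_FAILED
```

The real script simply prints the expanded source to stdout, which the build redirects into a module; `exec` above just demonstrates that the generated text is valid Python.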
diff --git a/tools/munger/scripts/message_gen.py b/tools/munger/scripts/message_gen.py
new file mode 100644
index 0000000..1c627c1
--- /dev/null
+++ b/tools/munger/scripts/message_gen.py
@@ -0,0 +1,807 @@
+#!/usr/bin/python
+#
+# This Python script generates wrapper functions for OpenFlow messages
+#
+# See the doc string below for more info
+#
+
+# To do:
+#    Default type values for messages
+#    Generate all message objects
+#    Action list objects?
+#    Autogen lengths when possible
+#    Dictionaries for enum strings
+#    Resolve sub struct initializers (see ofp_flow_mod)
+
+
+"""
+Generate wrapper classes for OpenFlow messages
+
+(C) Copyright Stanford University
+Date February 2010
+Created by dtalayco
+
+Attempting to follow http://www.python.org/dev/peps/pep-0008/
+The main exception is that our class names do not use CamelCase
+so as to more closely match the original C code names.
+
+This file is meant to generate a file of_wrapper.py which imports
+the base classes generated from automatic processing of openflow.h
+and produces wrapper classes for each OpenFlow message type.
+
+This file will normally be included in of_message.py which provides
+additional hand-generated work.
+
+There are two types of structures/classes here: base components and
+message classes.
+
+Base components are the base data classes which are fixed
+length structures including:
+    ofp_header
+    Each ofp_action structure
+    ofp_phy_port
+    The array elements of all the stats reply messages
+The base components are to be imported from a file of_header.py.
+
+Message classes define a complete message on the wire.  These are
+composed of possibly variable-length lists of possibly variably
+typed objects from the base component list above.
+
+Each OpenFlow message has a header and zero or more fixed length
+members (the "core members" of the class) followed by zero or more
+variable length lists.
+
+The wrapper classes should live in their own name space, probably
+of_message.  Automatically generated base components and skeletons for
+the message classes are assumed to exist, and the wrapper classes
+will inherit from them.
+
+Every message class must implement pack and unpack functions to
+convert between the class and a string representing what goes on the
+wire.
+
+For unpacking, the low level (base-component) classes must implement
+their own unpack functions.  A single top level unpack function
+will do the parsing and call the lower layer unpack functions as
+appropriate.
+
+Every base and message class should implement a show function to
+(recursively) display the contents of the object.
+
+Certain OpenFlow message types are further subclassed.  These include
+stats_request, stats_reply and error.
+
+"""
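The pack/unpack contract described above can be sketched with a minimal fixed-length base component (a simplified stand-in, not the generated classes; the field layout below mirrors the real OpenFlow header but is shown for illustration, in Python 3):

```python
import struct

OFP_HEADER_FMT = "!BBHI"  # version, type, length, xid, network byte order
OFP_HEADER_BYTES = struct.calcsize(OFP_HEADER_FMT)  # 8

class header_sketch:
    """Illustrative fixed-length base component with pack/unpack/len."""
    def __init__(self, version=1, type=0, length=0, xid=0):
        self.version, self.type, self.length, self.xid = version, type, length, xid
    def pack(self):
        return struct.pack(OFP_HEADER_FMT, self.version, self.type,
                           self.length, self.xid)
    def unpack(self, binary_string):
        (self.version, self.type, self.length, self.xid) = struct.unpack(
            OFP_HEADER_FMT, binary_string[:OFP_HEADER_BYTES])
        # Per the convention above: return the unparsed remainder
        return binary_string[OFP_HEADER_BYTES:]
    def __len__(self):
        return OFP_HEADER_BYTES

h = header_sketch(type=2, xid=42)
h.length = len(h)
wire = h.pack() + b"extra"   # trailing bytes belong to the next layer
h2 = header_sketch()
rest = h2.unpack(wire)       # rest == b"extra"
```

A top-level message unpack calls this first, then hands `rest` to the body's unpack, which is exactly the layering the generated wrappers follow.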
+
+# Don't generate header object in messages
+# Map each message to a body that doesn't include the header
+# The body does not include variable-length info at the end
+
+import re
+import string
+import sys
+sys.path.append("../../src/python/oftest")
+from cstruct import *
+from class_maps import class_to_members_map
+
+message_top_matter = """
+# Python OpenFlow message wrapper classes
+
+from cstruct import *
+from action_list import action_list
+from error import *
+
+# Define templates for documentation
+class ofp_template_msg:
+    \"""
+    Sample base class for template_msg; normally auto generated
+    This class should live in the of_header name space and provides the
+    base class for this type of message.  It will be wrapped for the
+    high level API.
+
+    \"""
+    def __init__(self):
+        \"""
+        Constructor for base class
+
+        \"""
+        self.header = ofp_header()
+        # Additional base data members declared here
+
+    # Normally will define pack, unpack, __len__ functions
+
+class template_msg(ofp_template_msg):
+    \"""
+    Sample class wrapper for template_msg
+    This class should live in the of_message name space and provides the
+    high level API for an OpenFlow message object.  These objects must
+    implement the functions indicated in this template.
+
+    \"""
+    def __init__(self):
+        \"""
+        Constructor
+        Must set the header type value appropriately for the message
+
+        \"""
+
+        ##@var header
+        # OpenFlow message header: length, version, xid, type
+        ofp_template_msg.__init__(self)
+        self.header = ofp_header()
+        # For a real message, will be set to an integer
+        self.header.type = "TEMPLATE_MSG_VALUE"
+    def pack(self):
+        \"""
+        Pack object into string
+
+        @return The packed string which can go on the wire
+
+        \"""
+        pass
+    def unpack(self, binary_string):
+        \"""
+        Unpack object from a binary string
+
+        @param binary_string The wire protocol byte string holding the object
+        represented as an array of bytes.
+
+        @return Typically returns the remainder of binary_string that
+        was not parsed.  May give a warning if that string is non-empty
+
+        \"""
+        pass
+    def __len__(self):
+        \"""
+        Return the length of this object once packed into a string
+
+        @return An integer representing the number of bytes in the packed
+        string.
+
+        \"""
+        pass
+    def show(self, prefix=''):
+        \"""
+        Generate a string (with multiple lines) describing the contents
+        of the object in a readable manner
+
+        @param prefix Prepended to each line.
+
+        \"""
+        pass
+    def __eq__(self, other):
+        \"""
+        Return True if self and other hold the same data
+
+        @param other Other object in comparison
+
+        \"""
+        pass
+    def __ne__(self, other):
+        \"""
+        Return True if self and other do not hold the same data
+
+        @param other Other object in comparison
+
+        \"""
+        pass
+"""
+
+# Dictionary mapping wrapped classes to the auto-generated structure
+# underlying the class (body only, not header or var-length data)
+message_class_map = {
+    "hello"                         : "ofp_header",
+    "error"                         : "ofp_error_msg",
+    "echo_request"                  : "ofp_header",
+    "echo_reply"                    : "ofp_header",
+    "vendor"                        : "ofp_vendor_header",
+    "features_request"              : "ofp_header",
+    "features_reply"                : "ofp_switch_features",
+    "get_config_request"            : "ofp_header",
+    "get_config_reply"              : "ofp_switch_config",
+    "set_config"                    : "ofp_switch_config",
+    "packet_in"                     : "ofp_packet_in",
+    "flow_removed"                  : "ofp_flow_removed",
+    "port_status"                   : "ofp_port_status",
+    "packet_out"                    : "ofp_packet_out",
+    "flow_mod"                      : "ofp_flow_mod",
+    "port_mod"                      : "ofp_port_mod",
+    "stats_request"                 : "ofp_stats_request",
+    "stats_reply"                   : "ofp_stats_reply",
+    "barrier_request"               : "ofp_header",
+    "barrier_reply"                 : "ofp_header",
+    "queue_get_config_request"      : "ofp_queue_get_config_request",
+    "queue_get_config_reply"        : "ofp_queue_get_config_reply"
+}
+
+# These messages have a string member at the end of the data
+string_members = [
+    "hello",
+    "error",
+    "echo_request",
+    "echo_reply",
+    "vendor",
+    "packet_in",
+    "packet_out"
+]
+
+# These messages have a list (with the given name) in the data,
+# after the core members; the type is given for validation
+list_members = {
+    "features_reply"                : ('ports', None),
+    "packet_out"                    : ('actions', 'action_list'),
+    "flow_mod"                      : ('actions', 'action_list'),
+    "queue_get_config_reply"        : ('queues', None)
+}
+
+_ind = "    "
+
+def _p1(s): print _ind + s
+def _p2(s): print _ind * 2 + s
+def _p3(s): print _ind * 3 + s
+def _p4(s): print _ind * 4 + s
+
+# Okay, this gets kind of ugly:
+# There are three variables:  
+# has_core_members:  If parent class is not ofp_header, has inheritance
+# has_list: Whether class has trailing array or class
+# has_string: Whether class has trailing string
+
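How those three flags fall out of the tables above can be restated as a tiny helper (hypothetical, with abbreviated copies of the maps; Python 3):

```python
# Abbreviated copies of the tables defined earlier in this script
message_class_map = {"hello": "ofp_header", "flow_mod": "ofp_flow_mod"}
string_members = ["hello"]
list_members = {"flow_mod": ("actions", "action_list")}

def classify(msg):
    """Return (has_core_members, has_list, has_string) for a message."""
    parent = message_class_map[msg]
    return (parent != "ofp_header",   # inherits body members beyond the header
            msg in list_members,      # trailing list (e.g. an action_list)
            msg in string_members)    # trailing binary string

print(classify("hello"))     # trailing string only
print(classify("flow_mod"))  # core members plus an action list
```

Each combination of flags selects which pack/unpack/__len__ fragments the generator emits.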
+def gen_message_wrapper(msg):
+    """
+    Generate a wrapper for the given message based on above info
+    @param msg String identifying the message name for the class
+    """
+
+    msg_name = "OFPT_" + msg.upper()
+    parent = message_class_map[msg]
+
+    has_list = False    # Has trailing list
+    has_core_members = False
+    has_string = False  # Has trailing string
+    if parent != 'ofp_header':
+        has_core_members = True
+    if msg in list_members.keys():
+        (list_var, list_type) = list_members[msg]
+        has_list = True
+    if msg in string_members:
+        has_string = True
+
+    if has_core_members:
+        print "class " + msg + "(" + parent + "):"
+    else:
+        print "class " + msg + ":"
+    _p1('"""')
+    _p1("Wrapper class for " + msg)
+    print
+    _p1("OpenFlow message header: length, version, xid, type")
+    _p1("@arg length: The total length of the message")
+    _p1("@arg version: The OpenFlow version (" + str(OFP_VERSION) + ")")
+    _p1("@arg xid: The transaction ID")
+    _p1("@arg type: The message type (" + msg_name + "=" + 
+        str(eval(msg_name)) + ")")
+    print
+    if has_core_members and parent in class_to_members_map.keys():
+        _p1("Data members inherited from " + parent + ":")
+        for var in class_to_members_map[parent]:
+            _p1("@arg " + var)
+    if has_list:
+        if list_type == None:
+            _p1("@arg " + list_var + ": Variable length array of TBD")
+        else:
+            _p1("@arg " + list_var + ": Object of type " + list_type)
+    if has_string:
+        _p1("@arg data: Binary string following message members")
+    print
+    _p1('"""')
+
+    print
+    _p1("def __init__(self):")
+    if has_core_members:
+        _p2(parent + ".__init__(self)")
+    _p2("self.header = ofp_header()")
+    _p2("self.header.type = " + msg_name)
+    if has_list:
+        if list_type == None:
+            _p2('self.' + list_var + ' = []')
+        else:
+            _p2('self.' + list_var + ' = ' + list_type + '()')
+    if has_string:
+        _p2('self.data = ""')
+
+    print """
+
+    def pack(self):
+        \"""
+        Pack object into string
+
+        @return The packed string which can go on the wire
+
+        \"""
+        self.header.length = len(self)
+        packed = self.header.pack()
+"""
+
+    # Have to special case the action length calculation for pkt out
+    if msg == 'packet_out':
+        _p2('self.actions_len = len(self.actions)')
+    if has_core_members:
+        _p2("packed += " + parent + ".pack(self)")
+    if has_list:
+        if list_type == None:
+            _p2('for obj in self.' + list_var + ':')
+            _p3('packed += obj.pack()')
+        else:
+            _p2('packed += self.' + list_var + '.pack()')
+    if has_string:
+        _p2('packed += self.data')
+    _p2("return packed")
+
+    print """
+    def unpack(self, binary_string):
+        \"""
+        Unpack object from a binary string
+
+        @param binary_string The wire protocol byte string holding the object
+        represented as an array of bytes.
+        @return The remainder of binary_string that was not parsed.
+
+        \"""
+        binary_string = self.header.unpack(binary_string)
+"""
+    if has_core_members:
+        _p2("binary_string = " + parent + ".unpack(self, binary_string)")
+    if has_list:
+        if msg == "features_reply":  # Special case port parsing
+            # For now, cheat and assume the rest of the message is port list
+            _p2("while len(binary_string) >= OFP_PHY_PORT_BYTES:")
+            _p3("new_port = ofp_phy_port()")
+            _p3("binary_string = new_port.unpack(binary_string)")
+            _p3("self.ports.append(new_port)")
+        elif list_type == None:
+            _p2("for obj in self." + list_var + ":")
+            _p3("binary_string = obj.unpack(binary_string)")
+        elif msg == "packet_out":  # Special case this
+            _p2('binary_string = self.actions.unpack(' + 
+                'binary_string, bytes=self.actions_len)')
+        elif msg == "flow_mod":  # Special case this
+            _p2("ai_len = self.header.length - (OFP_FLOW_MOD_BYTES + " + 
+                "OFP_HEADER_BYTES)")
+            _p2("binary_string = self.actions.unpack(binary_string, " +
+                "bytes=ai_len)")
+        else:
+            _p2("binary_string = self." + list_var + ".unpack(binary_string)")
+    if has_string:
+        _p2("self.data = binary_string")
+        _p2("binary_string = ''")
+    else:
+        _p2("# Fixme: If no self.data, add check for data remaining")
+    _p2("return binary_string")
+
+    print """
+    def __len__(self):
+        \"""
+        Return the length of this object once packed into a string
+
+        @return An integer representing the number of bytes in the packed
+        string.
+
+        \"""
+        length = OFP_HEADER_BYTES
+"""
+    if has_core_members:
+        _p2("length += " + parent + ".__len__(self)")
+    if has_list:
+        if list_type == None:
+            _p2("for obj in self." + list_var + ":")
+            _p3("length += len(obj)")
+        else:
+            _p2("length += len(self." + list_var + ")")
+    if has_string:
+        _p2("length += len(self.data)")
+    _p2("return length")
+
+    print """
+    def show(self, prefix=''):
+        \"""
+        Generate a string (with multiple lines) describing the contents
+        of the object in a readable manner
+
+        @param prefix Prepended to each line.
+
+        \"""
+"""
+    _p2("outstr = prefix + '" + msg + " (" + msg_name + ")\\n'")
+    _p2("prefix += '  '")
+    _p2("outstr += prefix + 'ofp header\\n'")
+    _p2("outstr += self.header.show(prefix + '  ')")
+    if has_core_members:
+        _p2("outstr += " + parent + ".show(self, prefix)")
+    if has_list:
+        if list_type == None:
+            _p2('outstr += prefix + "Array ' + list_var + '\\n"')
+            _p2('for obj in self.' + list_var +':')
+            _p3("outstr += obj.show(prefix + '  ')")
+        else:
+            _p2('outstr += prefix + "List ' + list_var + '\\n"')
+            _p2('outstr += self.' + list_var + ".show(prefix + '  ')")
+    if has_string:
+        _p2("outstr += prefix + 'data is of length ' + str(len(self.data)) + '\\n'")
+        _p2("##@todo Fix this circular reference")
+        _p2("# if len(self.data) > 0:")
+        _p3("# obj = of_message_parse(self.data)")
+        _p3("# if obj != None:")
+        _p4("# outstr += obj.show(prefix)")
+        _p3("# else:")
+        _p4('# outstr += prefix + "Unable to parse data\\n"')
+    _p2('return outstr')
+
+    print """
+    def __eq__(self, other):
+        \"""
+        Return True if self and other hold the same data
+
+        @param other Other object in comparison
+
+        \"""
+        if type(self) != type(other): return False
+        if not self.header.__eq__(other.header): return False
+"""
+    if has_core_members:
+        _p2("if not " + parent + ".__eq__(self, other): return False")
+    if has_string:
+        _p2("if self.data != other.data: return False")
+    if has_list:
+        _p2("if self." + list_var + " != other." + list_var + ": return False")
+    _p2("return True")
+
+    print """
+    def __ne__(self, other):
+        \"""
+        Return True if self and other do not hold the same data
+
+        @param other Other object in comparison
+
+        \"""
+        return not self.__eq__(other)
+    """
+
+
+################################################################
+#
+# Stats request subclasses
+# description_request, flow, aggregate, table, port, vendor
+#
+################################################################
+
+# Table and desc stats requests are special, with empty bodies
+extra_ofp_stats_req_defs = """
+# Stats request bodies for desc and table stats are not defined in the
+# OpenFlow header; we define them here.  They are empty classes, really
+
+class ofp_desc_stats_request:
+    \"""
+    Forced definition of ofp_desc_stats_request (empty class)
+    \"""
+    def __init__(self):
+        pass
+    def pack(self, assertstruct=True):
+        return ""
+    def unpack(self, binary_string):
+        return binary_string
+    def __len__(self):
+        return 0
+    def show(self, prefix=''):
+        return prefix + "ofp_desc_stats_request (empty)\\n"
+    def __eq__(self, other):
+        return type(self) == type(other)
+    def __ne__(self, other):
+        return type(self) != type(other)
+
+OFP_DESC_STATS_REQUEST_BYTES = 0
+
+class ofp_table_stats_request:
+    \"""
+    Forced definition of ofp_table_stats_request (empty class)
+    \"""
+    def __init__(self):
+        pass
+    def pack(self, assertstruct=True):
+        return ""
+    def unpack(self, binary_string):
+        return binary_string
+    def __len__(self):
+        return 0
+    def show(self, prefix=''):
+        return prefix + "ofp_table_stats_request (empty)\\n"
+    def __eq__(self, other):
+        return type(self) == type(other)
+    def __ne__(self, other):
+        return type(self) != type(other)
+
+OFP_TABLE_STATS_REQUEST_BYTES = 0
+
+"""
+
+stats_request_template = """
+class --TYPE--_stats_request(ofp_stats_request, ofp_--TYPE--_stats_request):
+    \"""
+    Wrapper class for --TYPE-- stats request message
+    \"""
+    def __init__(self):
+        self.header = ofp_header()
+        ofp_stats_request.__init__(self)
+        ofp_--TYPE--_stats_request.__init__(self)
+        self.header.type = OFPT_STATS_REQUEST
+        self.type = --STATS_NAME--
+
+    def pack(self, assertstruct=True):
+        self.header.length = len(self)
+        packed = self.header.pack()
+        packed += ofp_stats_request.pack(self)
+        packed += ofp_--TYPE--_stats_request.pack(self)
+        return packed
+
+    def unpack(self, binary_string):
+        binary_string = self.header.unpack(binary_string)
+        binary_string = ofp_stats_request.unpack(self, binary_string)
+        binary_string = ofp_--TYPE--_stats_request.unpack(self, binary_string)
+        if len(binary_string) != 0:
+            print "ERROR unpacking --TYPE--: extra data"
+        return binary_string
+
+    def __len__(self):
+        return len(self.header) + OFP_STATS_REQUEST_BYTES + \\
+               OFP_--TYPE_UPPER--_STATS_REQUEST_BYTES
+
+    def show(self, prefix=''):
+        outstr = prefix + "--TYPE--_stats_request\\n"
+        outstr += prefix + "ofp header:\\n"
+        outstr += self.header.show(prefix + '  ')
+        outstr += ofp_stats_request.show(self)
+        outstr += ofp_--TYPE--_stats_request.show(self)
+        return outstr
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        return (self.header == other.header and
+                ofp_stats_request.__eq__(self, other) and
+                ofp_--TYPE--_stats_request.__eq__(self, other))
+
+    def __ne__(self, other): return not self.__eq__(other)
+"""
+
+################################################################
+#
+# Stats replies always have an array at the end.
+# For aggregate and desc, these arrays are always of length 1
+# This array is always called stats
+#
+################################################################
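The trailing-array convention used by the reply template can be sketched on its own: keep consuming fixed-size entries while enough bytes remain, and flag any leftover (the 4-byte entry format here is illustrative, not a real stats entry; Python 3):

```python
import struct

ENTRY_FMT = "!HH"  # hypothetical 4-byte stats entry: (port, count)
ENTRY_BYTES = struct.calcsize(ENTRY_FMT)

def unpack_stats_array(binary_string):
    """Consume fixed-size entries until too few bytes remain,
    as the generated stats_reply unpack loop does."""
    stats = []
    while len(binary_string) >= ENTRY_BYTES:
        stats.append(struct.unpack(ENTRY_FMT, binary_string[:ENTRY_BYTES]))
        binary_string = binary_string[ENTRY_BYTES:]
    if binary_string:
        print("ERROR unpacking stats string: extra bytes")
    return stats

wire = struct.pack(ENTRY_FMT, 1, 10) + struct.pack(ENTRY_FMT, 2, 20)
entries = unpack_stats_array(wire)  # [(1, 10), (2, 20)]
```

This is why aggregate and desc replies, whose arrays always have length 1, can share the same template: the loop simply runs once.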
+
+
+# Template for objects stats reply messages
+stats_reply_template = """
+class --TYPE--_stats_reply(ofp_stats_reply):
+    \"""
+    Wrapper class for --TYPE-- stats reply
+    \"""
+    def __init__(self):
+        self.header = ofp_header()
+        ofp_stats_reply.__init__(self)
+        self.header.type = OFPT_STATS_REPLY
+        self.type = --STATS_NAME--
+        # stats: Array of type --TYPE--_stats_entry
+        self.stats = []
+
+    def pack(self, assertstruct=True):
+        self.header.length = len(self)
+        packed = self.header.pack()
+        packed += ofp_stats_reply.pack(self)
+        for obj in self.stats:
+            packed += obj.pack()
+        return packed
+
+    def unpack(self, binary_string):
+        binary_string = self.header.unpack(binary_string)
+        binary_string = ofp_stats_reply.unpack(self, binary_string)
+        dummy = --TYPE--_stats_entry()
+        while len(binary_string) >= len(dummy):
+            obj = --TYPE--_stats_entry()
+            binary_string = obj.unpack(binary_string)
+            self.stats.append(obj)
+        if len(binary_string) != 0:
+            print "ERROR unpacking --TYPE-- stats string: extra bytes"
+        return binary_string
+
+    def __len__(self):
+        length = len(self.header) + OFP_STATS_REPLY_BYTES
+        for obj in self.stats:
+            length += len(obj)
+        return length
+
+    def show(self, prefix=''):
+        outstr = prefix + "--TYPE--_stats_reply\\n"
+        outstr += prefix + "ofp header:\\n"
+        outstr += self.header.show(prefix + '  ')
+        outstr += ofp_stats_reply.show(self)
+        outstr += prefix + "Stats array of length " + str(len(self.stats)) + '\\n'
+        for obj in self.stats:
+            outstr += obj.show()
+        return outstr
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        return (self.header == other.header and
+                ofp_stats_reply.__eq__(self, other) and
+                self.stats == other.stats)
+
+    def __ne__(self, other): return not self.__eq__(other)
+"""
+
+#
+# To address variations in stats reply bodies, the following
+# "_entry" classes are defined for each element in the reply
+#
+
+extra_stats_entry_defs = """
+# Stats entries define the content of one element in a stats
+# reply for the indicated type; define _entry for consistency
+
+aggregate_stats_entry = ofp_aggregate_stats_reply
+desc_stats_entry = ofp_desc_stats
+port_stats_entry = ofp_port_stats
+queue_stats_entry = ofp_queue_stats
+table_stats_entry = ofp_table_stats
+"""
+
+# Special case flow_stats to handle actions_list
+
+flow_stats_entry_def = """
+#
+# Flow stats entry contains an action list of variable length, so
+# it is done by hand
+#
+
+class flow_stats_entry(ofp_flow_stats):
+    \"""
+    Special case flow stats entry to handle action list object
+    \"""
+    def __init__(self):
+        ofp_flow_stats.__init__(self)
+        self.actions = action_list()
+
+    def pack(self, assertstruct=True):
+        self.length = len(self)
+        packed = ofp_flow_stats.pack(self, assertstruct)
+        packed += self.actions.pack()
+        if len(packed) != self.length:
+            print("ERROR: flow_stats_entry pack length not equal",
+                  self.length, len(packed))
+        return packed
+
+    def unpack(self, binary_string):
+        binary_string = ofp_flow_stats.unpack(self, binary_string)
+        ai_len = self.length - OFP_FLOW_STATS_BYTES
+        if ai_len < 0:
+            print("ERROR: flow_stats_entry unpack length too small",
+                  self.length)
+        binary_string = self.actions.unpack(binary_string, bytes=ai_len)
+        return binary_string
+
+    def __len__(self):
+        return OFP_FLOW_STATS_BYTES + len(self.actions)
+
+    def show(self, prefix=''):
+        outstr = prefix + "flow_stats_entry\\n"
+        outstr += ofp_flow_stats.show(self, prefix + '  ')
+        outstr += self.actions.show(prefix + '  ')
+        return outstr
+
+    def __eq__(self, other):
+        if type(self) != type(other): return False
+        return (ofp_flow_stats.__eq__(self, other) and 
+                self.actions == other.actions)
+
+    def __ne__(self, other): return not self.__eq__(other)
+"""
+
+stats_types = [
+    'aggregate',
+    'desc',
+    'flow',
+    'port',
+    'queue',
+    'table']
+
+if __name__ == '__main__':
+
+    print message_top_matter
+
+    print """
+################################################################
+#
+# OpenFlow Message Definitions
+#
+################################################################
+"""
+
+    msg_types = message_class_map.keys()
+    msg_types.sort()
+
+    for t in msg_types:
+        gen_message_wrapper(t)
+        print
+
+    print """
+################################################################
+#
+# Stats request and reply subclass definitions
+#
+################################################################
+"""
+
+    print extra_ofp_stats_req_defs
+    print extra_stats_entry_defs
+    print flow_stats_entry_def
+
+    # Generate stats request and reply subclasses
+    for t in stats_types:
+        stats_name = "OFPST_" + t.upper()
+        to_print = re.sub('--TYPE--', t, stats_request_template)
+        to_print = re.sub('--TYPE_UPPER--', t.upper(), to_print)
+        to_print = re.sub('--STATS_NAME--', stats_name, to_print)
+        print to_print
+        to_print = re.sub('--TYPE--', t, stats_reply_template)
+        to_print = re.sub('--STATS_NAME--', stats_name, to_print)
+        print to_print
+
+    # Lastly, generate a tuple containing all the message classes
+    print """
+message_type_list = (
+    aggregate_stats_reply,
+    aggregate_stats_request,
+    bad_action_error_msg,
+    bad_request_error_msg,
+    barrier_reply,
+    barrier_request,
+    desc_stats_reply,
+    desc_stats_request,
+    echo_reply,
+    echo_request,
+    features_reply,
+    features_request,
+    flow_mod,
+    flow_mod_failed_error_msg,
+    flow_removed,
+    flow_stats_reply,
+    flow_stats_request,
+    get_config_reply,
+    get_config_request,
+    hello,
+    hello_failed_error_msg,
+    packet_in,
+    packet_out,
+    port_mod,
+    port_mod_failed_error_msg,
+    port_stats_reply,
+    port_stats_request,
+    port_status,
+    queue_get_config_reply,
+    queue_get_config_request,
+    queue_op_failed_error_msg,
+    queue_stats_reply,
+    queue_stats_request,
+    set_config,
+    table_stats_reply,
+    table_stats_request,
+    vendor
+    )
+"""
+
+#
+# OFP match variants
+#  ICMP 0x801 (?) ==> icmp_type/code replace tp_src/dst
+#
+
+
diff --git a/tools/munger/tests/defs.py b/tools/munger/tests/defs.py
new file mode 100644
index 0000000..5cb779f
--- /dev/null
+++ b/tools/munger/tests/defs.py
@@ -0,0 +1,193 @@
+import sys
+sys.path.append('../../../src/python/oftest/protocol')
+from message import *
+from action import *
+from error import *
+from class_maps import *
+
+ofmsg_class_map_to_parents = {
+    action_enqueue                     : [ofp_action_enqueue],
+    action_output                      : [ofp_action_output],
+    action_set_dl_dst                  : [ofp_action_dl_addr],
+    action_set_dl_src                  : [ofp_action_dl_addr],
+    action_set_nw_dst                  : [ofp_action_nw_addr],
+    action_set_nw_src                  : [ofp_action_nw_addr],
+    action_set_nw_tos                  : [ofp_action_nw_tos],
+    action_set_tp_dst                  : [ofp_action_tp_port],
+    action_set_tp_src                  : [ofp_action_tp_port],
+    action_set_vlan_pcp                : [ofp_action_vlan_pcp],
+    action_set_vlan_vid                : [ofp_action_vlan_vid],
+    action_strip_vlan                  : [ofp_action_header],
+    action_vendor                      : [ofp_action_vendor_header],
+    aggregate_stats_entry              : [],
+    aggregate_stats_reply              : [ofp_stats_reply],
+    aggregate_stats_request            : [ofp_stats_request,
+                                          ofp_aggregate_stats_request],
+    bad_action_error_msg               : [ofp_error_msg],
+    bad_request_error_msg              : [ofp_error_msg],
+    barrier_reply                      : [],
+    barrier_request                    : [],
+    desc_stats_entry                   : [],
+    desc_stats_reply                   : [ofp_stats_reply],
+    desc_stats_request                 : [ofp_stats_request,
+                                          ofp_desc_stats_request],
+    echo_reply                         : [],
+    echo_request                       : [],
+    error                              : [ofp_error_msg],
+    features_reply                     : [ofp_switch_features],
+    features_request                   : [],
+    flow_mod                           : [ofp_flow_mod],
+    flow_mod_failed_error_msg          : [ofp_error_msg],
+    flow_removed                       : [ofp_flow_removed],
+    flow_stats_entry                   : [ofp_flow_stats],
+    flow_stats_reply                   : [ofp_stats_reply],
+    flow_stats_request                 : [ofp_stats_request,
+                                          ofp_flow_stats_request],
+    get_config_reply                   : [ofp_switch_config],
+    get_config_request                 : [],
+    hello                              : [],
+    hello_failed_error_msg             : [ofp_error_msg],
+    packet_in                          : [ofp_packet_in],
+    packet_out                         : [ofp_packet_out],
+    port_mod                           : [ofp_port_mod],
+    port_mod_failed_error_msg          : [ofp_error_msg],
+    port_stats_entry                   : [],
+    port_stats_reply                   : [ofp_stats_reply],
+    port_stats_request                 : [ofp_stats_request,
+                                          ofp_port_stats_request],
+    port_status                        : [ofp_port_status],
+    queue_get_config_reply             : [ofp_queue_get_config_reply],
+    queue_get_config_request           : [ofp_queue_get_config_request],
+    queue_op_failed_error_msg          : [ofp_error_msg],
+    queue_stats_entry                  : [],
+    queue_stats_reply                  : [ofp_stats_reply],
+    queue_stats_request                : [ofp_stats_request,
+                                          ofp_queue_stats_request],
+    set_config                         : [ofp_switch_config],
+    stats_reply                        : [ofp_stats_reply],
+    stats_request                      : [ofp_stats_request],
+    table_stats_entry                  : [],
+    table_stats_reply                  : [ofp_stats_reply],
+    table_stats_request                : [ofp_stats_request,
+                                          ofp_table_stats_request],
+    vendor                             : [ofp_vendor_header]
+}
+
+ofmsg_names = {
+    action_enqueue                     : 'action_enqueue',
+    action_output                      : 'action_output',
+    action_set_dl_dst                  : 'action_set_dl_dst',
+    action_set_dl_src                  : 'action_set_dl_src',
+    action_set_nw_dst                  : 'action_set_nw_dst',
+    action_set_nw_src                  : 'action_set_nw_src',
+    action_set_nw_tos                  : 'action_set_nw_tos',
+    action_set_tp_dst                  : 'action_set_tp_dst',
+    action_set_tp_src                  : 'action_set_tp_src',
+    action_set_vlan_pcp                : 'action_set_vlan_pcp',
+    action_set_vlan_vid                : 'action_set_vlan_vid',
+    action_strip_vlan                  : 'action_strip_vlan',
+    action_vendor                      : 'action_vendor',
+    aggregate_stats_entry              : 'aggregate_stats_entry',
+    aggregate_stats_reply              : 'aggregate_stats_reply',
+    aggregate_stats_request            : 'aggregate_stats_request',
+    bad_action_error_msg               : 'bad_action_error_msg',
+    bad_request_error_msg              : 'bad_request_error_msg',
+    barrier_reply                      : 'barrier_reply',
+    barrier_request                    : 'barrier_request',
+    desc_stats_entry                   : 'desc_stats_entry',
+    desc_stats_reply                   : 'desc_stats_reply',
+    desc_stats_request                 : 'desc_stats_request',
+    echo_reply                         : 'echo_reply',
+    echo_request                       : 'echo_request',
+    error                              : 'error',
+    features_reply                     : 'features_reply',
+    features_request                   : 'features_request',
+    flow_mod                           : 'flow_mod',
+    flow_mod_failed_error_msg          : 'flow_mod_failed_error_msg',
+    flow_removed                       : 'flow_removed',
+    flow_stats_entry                   : 'flow_stats_entry',
+    flow_stats_reply                   : 'flow_stats_reply',
+    flow_stats_request                 : 'flow_stats_request',
+    get_config_reply                   : 'get_config_reply',
+    get_config_request                 : 'get_config_request',
+    hello                              : 'hello',
+    hello_failed_error_msg             : 'hello_failed_error_msg',
+    ofp_desc_stats_request             : 'ofp_desc_stats_request',
+    ofp_table_stats_request            : 'ofp_table_stats_request',
+    packet_in                          : 'packet_in',
+    packet_out                         : 'packet_out',
+    port_mod                           : 'port_mod',
+    port_mod_failed_error_msg          : 'port_mod_failed_error_msg',
+    port_stats_entry                   : 'port_stats_entry',
+    port_stats_reply                   : 'port_stats_reply',
+    port_stats_request                 : 'port_stats_request',
+    port_status                        : 'port_status',
+    queue_get_config_reply             : 'queue_get_config_reply',
+    queue_get_config_request           : 'queue_get_config_request',
+    queue_op_failed_error_msg          : 'queue_op_failed_error_msg',
+    queue_stats_entry                  : 'queue_stats_entry',
+    queue_stats_reply                  : 'queue_stats_reply',
+    queue_stats_request                : 'queue_stats_request',
+    set_config                         : 'set_config',
+    stats_reply                        : 'stats_reply',
+    stats_request                      : 'stats_request',
+    table_stats_entry                  : 'table_stats_entry',
+    table_stats_reply                  : 'table_stats_reply',
+    table_stats_request                : 'table_stats_request',
+    vendor                             : 'vendor'
+}
+
+stats_entry_types = [
+    aggregate_stats_entry,
+    desc_stats_entry,
+    port_stats_entry,
+    queue_stats_entry,
+    table_stats_entry
+]
+
+##@var A list of all OpenFlow messages including subtyped messages
+of_messages = [
+    aggregate_stats_reply,
+    aggregate_stats_request,
+    bad_action_error_msg,
+    bad_request_error_msg,
+    barrier_reply,
+    barrier_request,
+    desc_stats_reply,
+    desc_stats_request,
+    echo_reply,
+    echo_request,
+    features_reply,
+    features_request,
+    flow_mod,
+    flow_mod_failed_error_msg,
+    flow_removed,
+    flow_stats_reply,
+    flow_stats_request,
+    get_config_reply,
+    get_config_request,
+    hello,
+    hello_failed_error_msg,
+    packet_in,
+    packet_out,
+    port_mod,
+    port_mod_failed_error_msg,
+    port_stats_reply,
+    port_stats_request,
+    port_status,
+    queue_get_config_reply,
+    queue_get_config_request,
+    queue_op_failed_error_msg,
+    queue_stats_reply,
+    queue_stats_request,
+    set_config,
+    table_stats_reply,
+    table_stats_request,
+    vendor
+]
+
+# header_fields = ['version', 'xid']
+# fixed_header_fields = ['type', 'length']
+
+all_objs = ofmsg_class_map_to_parents.keys()
+all_objs.sort()
diff --git a/tools/munger/tests/msg_test.py b/tools/munger/tests/msg_test.py
new file mode 100644
index 0000000..e816cc1
--- /dev/null
+++ b/tools/munger/tests/msg_test.py
@@ -0,0 +1,215 @@
+import sys
+sys.path.append('../../../src/python/oftest')
+
+from parse import of_message_parse
+from parse import of_header_parse
+
+from defs import *
+
+def error_out(string):
+    print >> sys.stderr, string
+    print string
+
+def obj_comp(orig, new, objname, errstr=None):
+    """
+    Compare two objects
+    """
+    dump = False        
+    if not errstr:
+        errstr = "(unknown)"
+    errstr += " " + objname
+    if not new:
+        error_out("ERROR: obj comp, new is None for " + errstr)
+        dump = True
+    elif type(orig) != type(new):
+        error_out("ERROR: type mismatch for " + errstr + " ")
+        dump = True
+    elif orig != new:
+        error_out("ERROR: " + errstr + " orig != new")
+        dump = True
+    if dump:
+        print "Dump of mismatch for " + errstr
+        print "type orig " + str(type(orig))
+        print "orig length ", len(orig)
+        orig.show("  ")
+        if new:
+            print "type new " + str(type(new))
+            print "new length ", len(new)
+            new.show("  ")
+        print
+
+
+# Generate a long action list
+
+def action_list_create(n=10):
+    """
+    Create an action list
+
+    @param n The number of actions to put in the list
+
+    Cycle through the list of all actions, adding each type
+    """
+
+    al = action_list()
+    for i in range(n):
+        idx = i % len(action_class_list)
+        cls = action_class_list[idx]()
+        al.add(cls)
+    return al
+
+# Test classes with action lists
+def class_action_test():
+    """
+    Test objects that use action lists
+    """
+
+    print "Testing action lists:  flow mod, packet out, flow stats"
+    for acount in [0, 1, 5, 16, 34]:
+        print "  " + str(acount) + " actions in list"
+        obj = flow_mod()
+        obj.actions = action_list_create(acount)
+        packed = obj.pack()
+        header = of_header_parse(packed)
+        obj_check = flow_mod()
+        if obj_check.unpack(packed) != "":
+            error_out("ERROR: flow mod action list test extra " +
+                      "string on unpack")
+        obj_comp(obj, obj_check, 'flow_mod', "unpack test " + str(acount))
+        obj_check = of_message_parse(packed)
+        obj_comp(obj, obj_check, 'flow_mod', "parse test " + str(acount))
+        # obj.show()
+
+        # packet out with and without data
+        obj = packet_out()
+        obj.actions = action_list_create(acount)
+        packed = obj.pack()
+        header = of_header_parse(packed)
+        obj_check = packet_out()
+        if obj_check.unpack(packed) != "":
+            error_out("ERROR: packet_out test extra " +
+                      "string on unpack")
+        obj_comp(obj, obj_check, 'packet_out', "unpack test " + str(acount))
+        obj_check = of_message_parse(packed)
+        obj_comp(obj, obj_check, 'packet_out', "parse test " + str(acount))
+        # obj.show()
+
+        obj = packet_out()
+        obj.actions = action_list_create(acount)
+        obj.data = "short test string for packet data"
+        packed = obj.pack()
+        header = of_header_parse(packed)
+        obj_check = packet_out()
+        if obj_check.unpack(packed) != "":
+            error_out("ERROR: packet_out test extra " +
+                      "string on unpack")
+        obj_comp(obj, obj_check, 'packet_out', "unpack test " + str(acount))
+        obj_check = of_message_parse(packed)
+        obj_comp(obj, obj_check, 'packet_out', "parse test " + str(acount))
+        # obj.show()
+
+        # flow stats entry (not a message)
+        obj = flow_stats_entry()
+        obj.actions = action_list_create(acount)
+        packed = obj.pack()
+        obj_check = flow_stats_entry()
+        if obj_check.unpack(packed) != "":
+            error_out("ERROR: flow stats entry test extra " +
+                      "string on unpack")
+        obj_comp(obj, obj_check, 'flow_stats_entry', "unpack test " + str(acount))
+        # obj.show()
+
+print "Generating all classes with no data init"
+print
+for cls in all_objs:
+    print "Creating class " + ofmsg_names[cls]
+    obj = cls()
+    print ofmsg_names[cls] + " length: " + str(len(obj))
+    obj.show("  ")
+    print
+
+print "End of class generation"
+print
+print
+
+print "Generating messages, packing, showing (to verify len)"
+print "and calling self unpack"
+print
+for cls in all_objs:
+    print "Pack/unpack test for class " + ofmsg_names[cls]
+    obj = cls()
+    packed = obj.pack()
+    obj_check = cls()
+    string = obj_check.unpack(packed)
+    if string != "":
+        print >> sys.stderr, "WARNING: " + ofmsg_names[cls] + \
+            ", unpack returned string " + string
+    obj_comp(obj, obj_check, ofmsg_names[cls], "pack/unpack")
+
+print "End of class pack check"
+print
+print
+
+
+print "Testing message parsing"
+print
+for cls in all_objs:
+    # Can only parse real messages
+    if not cls in of_messages:
+        print "Not testing " + ofmsg_names[cls]
+        continue
+    print "Parse test for class " + ofmsg_names[cls]
+    obj = cls()
+    packed = obj.pack()
+    header = of_header_parse(packed)
+    obj_check = of_message_parse(packed)
+    obj_comp(obj, obj_check, ofmsg_names[cls], "parse test")
+
+print "End of parse testing"
+print
+print
+
+class_action_test()
+print
+print
+
+#
+# TO DO
+#     Generate varying actions lists and attach to flow_mod,
+# packet out and flow_stats_entry objects.
+#     Generate varying lists of stats entries for replies in
+# flow_stats_reply, table_stats_reply, port_stats_reply and
+# queue_stats_reply
+#     Create and test packet-to-flow function
+
+
+f = flow_stats_reply()
+ent = flow_stats_entry()
+
+
+act = action_strip_vlan()
+alist = action_list()
+alist.add(act)
+
+act = action_set_tp_dst()
+act.tp_port = 17
+
+m = ofp_match()
+m.wildcards = OFPFW_IN_PORT + OFPFW_DL_VLAN + OFPFW_DL_SRC
+
+#
+# Need: Easy reference from action to data members
+m.in_port = 12
+m.dl_src = [1,2,3,4,5,6]
+m.dl_dst = [11,22,23,24,25,26]
+m.dl_vlan = 83
+m.dl_vlan_pcp = 1
+m.dl_type = 0x12
+m.nw_tos = 3
+m.nw_proto = 0x300
+m.nw_src = 0x232323
+m.nw_dst = 0x3232123
+m.tp_src = 32
+m.tp_dst = 2
+
+m.show()
+
diff --git a/tools/pylibopenflow/.gitignore b/tools/pylibopenflow/.gitignore
new file mode 100644
index 0000000..2f836aa
--- /dev/null
+++ b/tools/pylibopenflow/.gitignore
@@ -0,0 +1,2 @@
+*~
+*.pyc
diff --git a/tools/pylibopenflow/bin/cstruct2py-get-struct.py b/tools/pylibopenflow/bin/cstruct2py-get-struct.py
new file mode 100755
index 0000000..d78d7c8
--- /dev/null
+++ b/tools/pylibopenflow/bin/cstruct2py-get-struct.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python
+"""This script reads a struct from C/C++ header files and outputs queries about it
+
+Author ykk
+Date June 2009
+"""
+import sys
+import getopt
+import cheader
+import c2py
+
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> header_files... struct_name\n"+\
+          "Options:\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          "-c/--cstruct\n\tPrint C struct\n"+\
+          "-n/--names\n\tPrint names of struct\n"+\
+          "-s/--size\n\tPrint size of struct\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hcsn",
+                               ["help","cstruct","size","names"])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is at least 1 input file and struct name
+if (len(args) < 2):
+    usage()
+    sys.exit(2)
+    
+#Parse options
+##Print C struct
+printc = False
+##Print names
+printname = False
+##Print size
+printsize = False
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-s","--size")): 
+        printsize = True
+    elif (opt in ("-c","--cstruct")): 
+        printc = True
+    elif (opt in ("-n","--names")): 
+        printname = True
+    else:
+        print "Unhandled option :"+opt
+        sys.exit(1)
+
+headerfile = cheader.cheaderfile(args[:-1])
+cstruct = headerfile.structs[args[-1].strip()]
+cs2p = c2py.cstruct2py()
+pattern = cs2p.get_pattern(cstruct)
+
+#Print C struct
+if (printc):
+    print cstruct
+
+#Print pattern
+print "Python pattern = "+pattern
+
+#Print name
+if (printname):
+    print cstruct.get_names()
+
+#Print size
+if (printsize):
+    print "Size = "+str(cs2p.get_size(pattern))
diff --git a/tools/pylibopenflow/bin/cstruct2py-pythonize.py b/tools/pylibopenflow/bin/cstruct2py-pythonize.py
new file mode 100755
index 0000000..39508cf
--- /dev/null
+++ b/tools/pylibopenflow/bin/cstruct2py-pythonize.py
@@ -0,0 +1,48 @@
+#!/usr/bin/env python
+"""This script reads structs from C/C++ header files and generates Python classes for them
+
+Author ykk
+Date Jan 2010
+"""
+import sys
+import getopt
+import cpythonize
+import cheader
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> header_files... output_file\n"+\
+          "Options:\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          ""
+
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "h",
+                               ["help"])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+   
+#Parse options
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    else:
+        print "Unhandled option :"+opt
+        sys.exit(2)
+
+#Check there is at least 1 input file with 1 output file
+if (len(args) < 2):
+    usage()
+    sys.exit(2)
+
+ch = cheader.cheaderfile(args[:-1])
+py = cpythonize.pythonizer(ch)
+fileRef = open(args[len(args)-1], "w")
+for l in py.pycode():
+    fileRef.write(l+"\n")
+fileRef.close()
+
diff --git a/tools/pylibopenflow/bin/cstruct2py-query-cheader.py b/tools/pylibopenflow/bin/cstruct2py-query-cheader.py
new file mode 100755
index 0000000..ed82316
--- /dev/null
+++ b/tools/pylibopenflow/bin/cstruct2py-query-cheader.py
@@ -0,0 +1,138 @@
+#!/usr/bin/env python
+"""This script reads C/C++ header files and outputs queries about their contents
+
+Author ykk
+Date June 2009
+"""
+import sys
+import getopt
+import cheader
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> header_file_1 header_file_2 ...\n"+\
+          "Options:\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          "-E/--enums\n\tPrint all enumerations\n"+\
+          "-e/--enum\n\tPrint specified enumeration\n"+\
+          "-M/--macros\n\tPrint all macros\n"+\
+          "-m/--macro\n\tPrint value of macro\n"+\
+          "-S/--structs\n\tPrint all structs\n"+\
+          "-s/--struct\n\tPrint struct\n"+\
+          "-n/--name-only\n\tPrint names only\n"+\
+          "-P/--print-no-comment\n\tPrint with comment removed only\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hMm:Ee:Ss:nP",
+                               ["help","macros","macro=","enums","enum=",
+                                "structs","struct=",
+                                "name-only","print-no-comment"])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is at least one input file
+if (len(args) < 1):
+    usage()
+    sys.exit(2)
+
+#Parse options
+##Print names only
+nameOnly = False
+##Print all structs?
+allStructs = False
+##Query specific struct
+struct=""
+##Print all enums?
+allEnums = False
+##Query specific enum
+enum=""
+##Print all macros?
+allMacros = False
+##Query specific macro
+macro=""
+##Print without comment
+printNoComment=False
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-S","--structs")): 
+        allStructs = True
+    elif (opt in ("-s","--struct")): 
+        struct = arg
+    elif (opt in ("-M","--macros")): 
+        allMacros = True
+    elif (opt in ("-m","--macro")): 
+        macro=arg
+    elif (opt in ("-E","--enums")): 
+        allEnums = True
+    elif (opt in ("-e","--enum")): 
+        enum = arg
+    elif (opt in ("-n","--name-only")): 
+        nameOnly = True
+    elif (opt in ("-P","--print-no-comment")): 
+        printNoComment = True
+    else:
+        assert False, "Unhandled option: "+opt
+
+headerfile = cheader.cheaderfile(args)
+if (printNoComment):
+    for line in headerfile.content:
+        print line
+    sys.exit(0)
+    
+#Print all macros
+if (allMacros):
+    for (macroname, value) in headerfile.macros.items():
+        if (nameOnly):
+            print macroname
+        else:
+            print macroname+"\t=\t"+str(value)
+#Print specified macro
+if (macro != ""):
+    try:
+        print macro+"="+headerfile.macros[macro]
+    except KeyError:
+        print "Macro "+macro+" not found!"
+
+#Print all structs
+if (allStructs):
+    for (structname, value) in headerfile.structs.items():
+        if (nameOnly):
+            print structname
+        else:
+            print str(value)+"\n"
+
+#Print specified struct
+if (struct != ""):
+    try:
+        print str(headerfile.structs[struct])
+    except KeyError:
+        print "Struct "+struct+" not found!"
+
+#Print all enumerations
+if (allEnums):
+    for (enumname, values) in headerfile.enums.items():
+        print enumname
+        if (not nameOnly):
+            for enumval in values:
+                try:
+                    print "\t"+enumval+"="+\
+                          str(headerfile.enum_values[enumval])
+                except KeyError:
+                    print enumval+" not found in enum!"
+
+#Print specifed enum
+if (enum != ""):
+    try:
+        for enumval in headerfile.enums[enum]:
+            try:
+                print enumval+"="+str(headerfile.enum_values[enumval])
+            except KeyError:
+                print enumval+" not found in enum!"
+    except KeyError:
+        print "Enumeration "+enum+" not found!"
diff --git a/tools/pylibopenflow/bin/pyopenflow-get-struct.py b/tools/pylibopenflow/bin/pyopenflow-get-struct.py
new file mode 100755
index 0000000..d07d85e
--- /dev/null
+++ b/tools/pylibopenflow/bin/pyopenflow-get-struct.py
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+"""This script reads a struct from the OpenFlow header file and outputs queries about it
+
+(C) Copyright Stanford University
+Author ykk
+Date October 2009
+"""
+import sys
+import getopt
+import openflow
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> struct_name\n"+\
+          "Options:\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          "-c/--cstruct\n\tPrint C struct\n"+\
+          "-n/--names\n\tPrint names of struct\n"+\
+          "-s/--size\n\tPrint size of struct\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hcsn",
+                               ["help","cstruct","size","names"])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is only struct name
+if not (len(args) == 1):
+    usage()
+    sys.exit(2)
+    
+#Parse options
+##Print C struct
+printc = False
+##Print names
+printname = False
+##Print size
+printsize = False
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-s","--size")): 
+        printsize = True
+    elif (opt in ("-c","--cstruct")): 
+        printc = True
+    elif (opt in ("-n","--names")): 
+        printname = True
+    else:
+        assert False, "Unhandled option: "+opt
+
+pyopenflow = openflow.messages()
+cstruct = pyopenflow.structs[args[0].strip()]
+pattern = pyopenflow.get_pattern(cstruct)
+
+#Print C struct
+if (printc):
+    print cstruct
+
+#Print pattern
+print "Python pattern = "+str(pattern)
+
+#Print name
+if (printname):
+    print cstruct.get_names()
+
+#Print size
+if (printsize):
+    print "Size = "+str(pyopenflow.get_size(pattern))
+
diff --git a/tools/pylibopenflow/bin/pyopenflow-lavi-pythonize.py b/tools/pylibopenflow/bin/pyopenflow-lavi-pythonize.py
new file mode 100755
index 0000000..914a424
--- /dev/null
+++ b/tools/pylibopenflow/bin/pyopenflow-lavi-pythonize.py
@@ -0,0 +1,89 @@
+#!/usr/bin/env python
+"""This script generates class files for messenger and lavi in NOX;
+specifically, it creates a Python class for each data structure.
+
+(C) Copyright Stanford University
+Author ykk
+Date January 2010
+"""
+import sys
+import os.path
+import getopt
+import cheader
+import lavi.pythonize
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> nox_dir\n"+\
+          "Options:\n"+\
+          "-i/--input-dir\n\tSpecify input directory (nox src directory)\n"+\
+          "-t/--template\n\tSpecify (non-default) template file\n"+\
+          "-n/--no-lavi\n\tSpecify that LAVI's file will not be created\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "ht:n",
+                               ["help","template=","no-lavi"])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is only NOX directory given
+if not (len(args) == 1):
+    usage()
+    sys.exit(2)
+
+#Parse options
+##Output LAVI
+outputlavi=True
+##Template file
+templatefile="include/messenger.template.py"
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-t","--template")):
+        templatefile=arg
+    elif (opt in ("-n","--no-lavi")):
+        outputlavi=False
+    else:
+        print "Unhandled option:"+opt
+        sys.exit(2)
+
+#Check for header file in NOX directory
+if not (os.path.isfile(args[0]+"/src/nox/coreapps/messenger/message.hh")):
+    print "Messenger header file not found!"
+    sys.exit(2)
+if (outputlavi):
+    if not (os.path.isfile(args[0]+"/src/nox/netapps/lavi/lavi-message.hh")):
+        print "LAVI message header file not found!"
+        sys.exit(2)
+
+#Get headerfile and pythonizer
+msgheader = cheader.cheaderfile(args[0]+"/src/nox/coreapps/messenger/message.hh")
+mpynizer = lavi.pythonize.msgpythonizer(msgheader)
+if (outputlavi):
+    laviheader = cheader.cheaderfile([args[0]+"/src/nox/coreapps/messenger/message.hh",
+                                      args[0]+"/src/nox/netapps/lavi/lavi-message.hh"])
+    lpynizer = lavi.pythonize.lavipythonizer(laviheader)
+    
+#Generate Python code for messenger
+fileRef = open(args[0]+"/src/nox/coreapps/messenger/messenger.py", "w")
+for x in mpynizer.pycode(templatefile):
+    fileRef.write(x+"\n")
+fileRef.write("\n")
+fileRef.close()
+
+if (outputlavi):
+    fileRef = open(args[0]+"/src/nox/netapps/lavi/lavi.py", "w")
+    for x in lpynizer.pycode(templatefile):
+        fileRef.write(x.replace("def __init__(self,ipAddr,portNo=2603,debug=False):",
+                                "def __init__(self,ipAddr,portNo=2503,debug=False):").\
+                      replace("def __init__(self, ipAddr, portNo=1304,debug=False):",
+                              "def __init__(self, ipAddr, portNo=1305,debug=False):")+\
+                      "\n")
+    fileRef.write("\n")
+    fileRef.close()
diff --git a/tools/pylibopenflow/bin/pyopenflow-load-controller.py b/tools/pylibopenflow/bin/pyopenflow-load-controller.py
new file mode 100755
index 0000000..715a73a
--- /dev/null
+++ b/tools/pylibopenflow/bin/pyopenflow-load-controller.py
@@ -0,0 +1,131 @@
+#!/usr/bin/env python
+"""This script fakes as n OpenFlow switches and
+loads the controller with k packets per second.
+
+(C) Copyright Stanford University
+Author ykk
+Date January 2010
+"""
+import sys
+import getopt
+import struct
+import openflow
+import time
+import output
+import of.msg
+import of.simu
+import of.network
+import dpkt.ethernet
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> controller\n"+\
+          "Options:\n"+\
+          "-p/--port\n\tSpecify port number\n"+\
+          "-v/--verbose\n\tPrint message exchange\n"+\
+          "-r/--rate\n\tSpecify rate per switch to send packets (default=1)\n"+\
+          "-d/--duration\n\tSpecify duration of load test in seconds (default=5)\n"+\
+          "-s/--switch\n\tSpecify number of switches (default=1)\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hvp:s:d:r:",
+                               ["help","verbose","port=",
+                                "switch=","duration=","rate="])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is only controller
+if not (len(args) == 1):
+    usage()
+    sys.exit(2)
+    
+#Parse options
+##Duration
+duration = 5
+##Rate
+rate = 1.0
+##Switch number
+swno = 1
+##Port to connect to
+port = 6633
+##Set output mode
+output.set_mode("INFO")
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-v","--verbose")):
+        output.set_mode("DBG")
+    elif (opt in ("-p","--port")):
+        port=int(arg)
+    elif (opt in ("-s","--switch")):
+        swno=int(arg)
+    elif (opt in ("-d","--duration")):
+        duration=int(arg)
+    elif (opt in ("-r","--rate")):
+        rate=float(arg)
+    else:
+        print "Unhandled option :"+opt
+        sys.exit(2)
+
+#Form packet
+pkt = dpkt.ethernet.Ethernet()
+pkt.type = dpkt.ethernet.ETH_MIN
+pkt.dst = '\xFF\xFF\xFF\xFF\xFF\xFF'
+
+#Connect to controller
+ofmsg = openflow.messages()
+parser = of.msg.parser(ofmsg)
+ofnet = of.simu.network()
+for i in range(1,swno+1):
+    ofsw = of.simu.switch(ofmsg, args[0], port,
+                          dpid=i,
+                          parser=parser)
+    ofnet.add_switch(ofsw)
+    ofsw.send_hello()
+    
+output.info("Running "+str(swno)+" switches at "+str(rate)+\
+            " packets per second for "+str(duration)+" s")
+
+starttime = time.time()
+running = True
+interval = 1.0/(rate*swno)
+ntime=time.time()+(interval/10.0)
+swindex = 0
+pcount = 0
+rcount = 0
+while running:
+    ctime = time.time()
+    time.sleep(max(0,min(ntime-ctime,interval/10.0)))
+
+    if ((ctime-starttime) <= duration):
+        #Send packet if time's up
+        if (ctime >= ntime):
+            ntime += interval
+            pkt.src = struct.pack("Q",pcount)[:6]
+            ofnet.switches[swindex].send_packet(1,10,pkt.pack()+'A'*3)
+            pcount += 1
+            swindex += 1
+            if (swindex >= len(ofnet.switches)):
+                swindex=0
+
+        #Process any received message
+        (ofsw, msg) = ofnet.connections.msgreceive()
+        while (msg != None):
+            dic = ofmsg.peek_from_front("ofp_header", msg)
+            if (dic["type"][0] == ofmsg.get_value("OFPT_FLOW_MOD")):
+                output.dbg("Received flow mod")
+                rcount += 1
+            ofsw.receive_openflow(msg)
+            (ofsw, msg) = ofnet.connections.msgreceive()
+    else:
+        running = False
+    
+output.info("Sent "+str(pcount)+" packets at rate "+\
+            str(float(pcount)/float(duration))+" and received "+\
+            str(rcount)+" back")
diff --git a/tools/pylibopenflow/bin/pyopenflow-ping-controller.py b/tools/pylibopenflow/bin/pyopenflow-ping-controller.py
new file mode 100755
index 0000000..cae3a43
--- /dev/null
+++ b/tools/pylibopenflow/bin/pyopenflow-ping-controller.py
@@ -0,0 +1,78 @@
+#!/usr/bin/env python
+"""This script fakes as an OpenFlow switch to the controller
+
+(C) Copyright Stanford University
+Author ykk
+Date October 2009
+"""
+import sys
+import getopt
+import openflow
+import time
+import output
+import of.msg
+import of.simu
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> controller\n"+\
+          "Options:\n"+\
+          "-p/--port\n\tSpecify port number\n"+\
+          "-v/--verbose\n\tPrint message exchange\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hvp:",
+                               ["help","verbose","port="])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is only controller
+if not (len(args) == 1):
+    usage()
+    sys.exit(2)
+    
+#Parse options
+##Port to connect to
+port = 6633
+##Set output mode
+output.set_mode("INFO")
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-v","--verbose")):
+        output.set_mode("DBG")
+    elif (opt in ("-p","--port")):
+        port=int(arg)
+    else:
+        assert False, "Unhandled option: "+opt
+
+#Connect to controller
+ofmsg = openflow.messages()
+parser = of.msg.parser(ofmsg)
+ofsw = of.simu.switch(ofmsg, args[0], port,
+                      dpid=int("0xcafecafe",16),
+                      parser=parser)
+ofsw.send_hello()
+#Send echo and wait
+xid = 22092009
+running = True
+ofsw.send_echo(xid)
+starttime = time.time()
+while running:
+    msg = ofsw.connection.msgreceive(True, 0.00001)
+    pkttime = time.time()
+    dic = ofmsg.peek_from_front("ofp_header", msg)
+    if (dic["type"][0] == ofmsg.get_value("OFPT_ECHO_REPLY") and
+        dic["xid"][0] == xid):
+        #Check reply for echo request
+        output.info("Received echo reply after "+\
+                    str((pkttime-starttime)*1000)+" ms", "ping-controller")
+        running = False
+    else:
+        ofsw.receive_openflow(msg)
diff --git a/tools/pylibopenflow/bin/pyopenflow-pythonize.py b/tools/pylibopenflow/bin/pyopenflow-pythonize.py
new file mode 100755
index 0000000..6da4af9
--- /dev/null
+++ b/tools/pylibopenflow/bin/pyopenflow-pythonize.py
@@ -0,0 +1,67 @@
+#!/usr/bin/env python
+"""This script generates openflow-packets.py, which
+creates a Python class for each data structure in openflow.h.
+
+(C) Copyright Stanford University
+Author ykk
+Date December 2009
+"""
+import sys
+#@todo Fix this include path mechanism
+sys.path.append('./bin')
+sys.path.append('./pylib')
+import getopt
+import openflow
+import time
+import output
+import of.pythonize
+
+def usage():
+    """Display usage
+    """
+    print "Usage "+sys.argv[0]+" <options> output_file\n"+\
+          "Options:\n"+\
+          "-i/--input\n\tSpecify (non-default) OpenFlow header\n"+\
+          "-t/--template\n\tSpecify (non-default) template file\n"+\
+          "-h/--help\n\tPrint this usage guide\n"+\
+          ""
+          
+#Parse options and arguments
+try:
+    opts, args = getopt.getopt(sys.argv[1:], "hi:t:",
+                               ["help","input=","template="])
+except getopt.GetoptError:
+    usage()
+    sys.exit(2)
+
+#Check there is exactly one output file argument
+if len(args) != 1:
+    usage()
+    sys.exit(2)
+
+#Parse options
+##Input
+headerfile=None
+##Template file
+templatefile=None
+for opt,arg in opts:
+    if (opt in ("-h","--help")):
+        usage()
+        sys.exit(0)
+    elif (opt in ("-i","--input")):
+        headerfile=arg
+    elif (opt in ("-t","--template")):
+        templatefile=arg
+    else:
+        print "Unhandled option:"+opt
+        sys.exit(2)
+
+#Generate Python code
+ofmsg = openflow.messages(headerfile)
+pynizer = of.pythonize.pythonizer(ofmsg)
+
+fileRef = open(args[0], "w")
+for x in pynizer.pycode(templatefile):
+    fileRef.write(x+"\n")
+fileRef.write("\n")
+fileRef.close()
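One subtlety in the option parsing above: getopt long options that take a value must be declared with a trailing "=", otherwise getopt treats "--input foo" as a bare flag followed by a stray positional argument. A small self-contained sketch (option names mirror the script's; the argument values are made up):

```python
import getopt

# "input=" and "template=" declare long options that consume a value,
# matching the short options "i:" and "t:".
opts, args = getopt.getopt(
    ["-i", "openflow.h", "--template", "tmpl.py", "out.py"],
    "hi:t:",
    ["help", "input=", "template="])

assert ("-i", "openflow.h") in opts
assert ("--template", "tmpl.py") in opts
assert args == ["out.py"]      # positional arguments left over
```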
diff --git a/tools/pylibopenflow/include/Put C header files here... b/tools/pylibopenflow/include/Put C header files here...
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tools/pylibopenflow/include/Put C header files here...
diff --git a/tools/pylibopenflow/include/messenger.template.py b/tools/pylibopenflow/include/messenger.template.py
new file mode 100644
index 0000000..25e7c76
--- /dev/null
+++ b/tools/pylibopenflow/include/messenger.template.py
@@ -0,0 +1,115 @@
+import socket
+import select
+import struct
+
+## This module provides library to send and receive messages to NOX's messenger
+#
+# This is a rewrite of noxmsg.py from OpenRoads (OpenFlow Wireless)
+#
+# @author ykk (Stanford University)
+# @date January, 2010
+# @see messenger
+
+def stringarray(string):
+    """Return the binary values in string as a string of hex bytes.
+    """
+    arrstr = ""
+    if (len(string) != 0):
+        for i in range(0,len(string)):
+            arrstr += "%x " % struct.unpack("=B",string[i])[0]
+    return arrstr
+
+def printarray(string):
+    """Print array of binary values
+    """
+    print "Array of length "+str(len(string))
+    print stringarray(string)
+
+class channel:
+    """TCP channel to communicate to NOX with.
+    """
+    def __init__(self,ipAddr,portNo=2603,debug=False):
+        """Initialize with socket
+        """
+        ##Socket reference for channel
+        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.sock.connect((ipAddr,portNo))
+        self.debug = debug
+        ##Internal buffer for receiving
+        self.__buffer = ""
+        ##Internal reference to header
+        self.__header = messenger_msg()
+
+    def baresend(self, msg):
+        """Send bare message"""
+        self.sock.send(msg)
+
+    def send(self,msg):
+        """Send message
+        """
+        msgh = messenger_msg()
+        remaining = msgh.unpack(msg)
+        if (msgh.length != len(msg)):
+            msgh.length = len(msg)
+            msg = msgh.pack()+remaining
+        self.baresend(msg)
+        if (self.debug):
+            printarray(msg)
+        
+    def receive(self, recvLen=0,timeout=0):
+        """Receive command
+        If length == None, nonblocking receive (return None or message)
+        With nonblocking receive, timeout is used for select statement
+
+        If length is zero, return single message
+        """            
+        if (recvLen==0):
+            #Receive full message
+            msg=""
+            length=len(self.__header)
+            while (len(msg) < length):
+                msg+=self.sock.recv(1)
+                #Get length
+                if (len(msg) == length):
+                    self.__header.unpack(msg)
+                    length=self.__header.length
+            return msg
+        elif (recvLen==None):
+            #Non-blocking receive
+            ready_to_read = select.select([self.sock],[],[],timeout)[0]
+            if (ready_to_read):
+                self.__buffer += self.sock.recv(1)
+            if (len(self.__buffer) >= len(self.__header)):
+                self.__header.unpack(self.__buffer)
+                if (self.__header.length == len(self.__buffer)):
+                    msg = self.__buffer
+                    self.__buffer = ""
+                    return msg
+            return None
+        else:
+            #Fixed length blocking receive
+            return self.sock.recv(recvLen)
+
+    def __del__(self):
+        """Terminate connection
+        """
+        emsg = messenger_msg()
+        emsg.type = MSG_DISCONNECT
+        emsg.length = len(emsg)
+        self.send(emsg.pack())
+        self.sock.shutdown(1)
+        self.sock.close()
+
+class sslChannel(channel):
+    """SSL channel to communicate to NOX with.
+    """
+    def __init__(self, ipAddr, portNo=1304,debug=False):
+        """Initialize with SSL sock
+        """
+        channel.__init__(self, ipAddr, portNo, debug)
+        ##Reference to SSL socket for channel
+        self.sslsock = socket.ssl(self.sock)
+
+    def baresend(self, msg):
+        """Send bare message"""
+        self.sslsock.write(msg)
+
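channel.receive accumulates bytes until the length field in the leading messenger_msg header says a full message has arrived, then returns that message. The same length-prefixed framing, sketched with a hypothetical 2-byte big-endian length prefix in place of messenger_msg:

```python
import struct

def frame(payload):
    """Prefix payload with a 2-byte big-endian total-length field;
    the length counts the prefix itself, as messenger_msg does."""
    return struct.pack("!H", len(payload) + 2) + payload

def deframe(stream):
    """Split a byte stream into complete messages, returning
    (messages, leftover) -- the same accumulate-then-check logic
    used by channel.receive."""
    msgs, buf = [], stream
    while len(buf) >= 2:
        total = struct.unpack("!H", buf[:2])[0]
        if len(buf) < total:
            break               # partial message: wait for more bytes
        msgs.append(buf[2:total])
        buf = buf[total:]
    return msgs, buf

msgs, rest = deframe(frame(b"hello") + frame(b"world") + b"\x00")
```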
diff --git a/tools/pylibopenflow/include/openflow.h b/tools/pylibopenflow/include/openflow.h
new file mode 100644
index 0000000..c0b5090
--- /dev/null
+++ b/tools/pylibopenflow/include/openflow.h
@@ -0,0 +1,970 @@
+/* Copyright (c) 2008 The Board of Trustees of The Leland Stanford
+ * Junior University
+ *
+ * We are making the OpenFlow specification and associated documentation
+ * (Software) available for public use and benefit with the expectation
+ * that others will use, modify and enhance the Software and contribute
+ * those enhancements back to the community. However, since we would
+ * like to make the Software available for broadest use, with as few
+ * restrictions as possible permission is hereby granted, free of
+ * charge, to any person obtaining a copy of this Software to deal in
+ * the Software under the copyrights without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT.  IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * The name and trademarks of copyright holder(s) may NOT be used in
+ * advertising or publicity pertaining to the Software or any
+ * derivatives without specific, written prior permission.
+ */
+
+/* OpenFlow: protocol between controller and datapath. */
+
+#ifndef OPENFLOW_OPENFLOW_H
+#define OPENFLOW_OPENFLOW_H 1
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+#include <stdint.h>
+#endif
+
+#ifdef SWIG
+#define OFP_ASSERT(EXPR)        /* SWIG can't handle OFP_ASSERT. */
+#elif !defined(__cplusplus)
+/* Build-time assertion for use in a declaration context. */
+#define OFP_ASSERT(EXPR)                                                \
+        extern int (*build_assert(void))[ sizeof(struct {               \
+                    unsigned int build_assert_failed : (EXPR) ? 1 : -1; })]
+#else /* __cplusplus */
+#define OFP_ASSERT(_EXPR) typedef int build_assert_failed[(_EXPR) ? 1 : -1]
+#endif /* __cplusplus */
+
+#ifndef SWIG
+#define OFP_PACKED __attribute__((packed))
+#else
+#define OFP_PACKED              /* SWIG doesn't understand __attribute. */
+#endif
+
+/* Version number:
+ * Non-experimental versions released: 0x01
+ * Experimental versions released: 0x81 -- 0x99
+ */
+/* The most significant bit being set in the version field indicates an
+ * experimental OpenFlow version.
+ */
+#define OFP_VERSION   0x01
+
+#define OFP_MAX_TABLE_NAME_LEN 32
+#define OFP_MAX_PORT_NAME_LEN  16
+
+#define OFP_TCP_PORT  6633
+#define OFP_SSL_PORT  6633
+
+#define OFP_ETH_ALEN 6          /* Bytes in an Ethernet address. */
+
+/* Port numbering.  Physical ports are numbered starting from 1. */
+enum ofp_port {
+    /* Maximum number of physical switch ports. */
+    OFPP_MAX = 0xff00,
+
+    /* Fake output "ports". */
+    OFPP_IN_PORT    = 0xfff8,  /* Send the packet out the input port.  This
+                                  virtual port must be explicitly used
+                                  in order to send back out of the input
+                                  port. */
+    OFPP_TABLE      = 0xfff9,  /* Perform actions in flow table.
+                                  NB: This can only be the destination
+                                  port for packet-out messages. */
+    OFPP_NORMAL     = 0xfffa,  /* Process with normal L2/L3 switching. */
+    OFPP_FLOOD      = 0xfffb,  /* All physical ports except input port and
+                                  those disabled by STP. */
+    OFPP_ALL        = 0xfffc,  /* All physical ports except input port. */
+    OFPP_CONTROLLER = 0xfffd,  /* Send to controller. */
+    OFPP_LOCAL      = 0xfffe,  /* Local openflow "port". */
+    OFPP_NONE       = 0xffff   /* Not associated with a physical port. */
+};
+
+enum ofp_type {
+    /* Immutable messages. */
+    OFPT_HELLO,               /* Symmetric message */
+    OFPT_ERROR,               /* Symmetric message */
+    OFPT_ECHO_REQUEST,        /* Symmetric message */
+    OFPT_ECHO_REPLY,          /* Symmetric message */
+    OFPT_VENDOR,              /* Symmetric message */
+
+    /* Switch configuration messages. */
+    OFPT_FEATURES_REQUEST,    /* Controller/switch message */
+    OFPT_FEATURES_REPLY,      /* Controller/switch message */
+    OFPT_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_GET_CONFIG_REPLY,    /* Controller/switch message */
+    OFPT_SET_CONFIG,          /* Controller/switch message */
+
+    /* Asynchronous messages. */
+    OFPT_PACKET_IN,           /* Async message */
+    OFPT_FLOW_REMOVED,        /* Async message */
+    OFPT_PORT_STATUS,         /* Async message */
+
+    /* Controller command messages. */
+    OFPT_PACKET_OUT,          /* Controller/switch message */
+    OFPT_FLOW_MOD,            /* Controller/switch message */
+    OFPT_PORT_MOD,            /* Controller/switch message */
+
+    /* Statistics messages. */
+    OFPT_STATS_REQUEST,       /* Controller/switch message */
+    OFPT_STATS_REPLY,         /* Controller/switch message */
+
+    /* Barrier messages. */
+    OFPT_BARRIER_REQUEST,     /* Controller/switch message */
+    OFPT_BARRIER_REPLY,       /* Controller/switch message */
+
+    /* Queue Configuration messages. */
+    OFPT_QUEUE_GET_CONFIG_REQUEST,  /* Controller/switch message */
+    OFPT_QUEUE_GET_CONFIG_REPLY     /* Controller/switch message */
+
+};
+
+/* Header on all OpenFlow packets. */
+struct ofp_header {
+    uint8_t version;    /* OFP_VERSION. */
+    uint8_t type;       /* One of the OFPT_ constants. */
+    uint16_t length;    /* Length including this ofp_header. */
+    uint32_t xid;       /* Transaction id associated with this packet.
+                           Replies use the same id as was in the request
+                           to facilitate pairing. */
+};
+OFP_ASSERT(sizeof(struct ofp_header) == 8);
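Every OpenFlow message begins with this fixed 8-byte header, sent in network byte order. A minimal Python sketch of packing and unpacking it (the format string and helper names are assumptions derived from the struct above, not framework API):

```python
import struct

# ofp_header: version (u8), type (u8), length (u16), xid (u32),
# all big-endian on the wire.
OFP_HEADER_FMT = "!BBHI"
OFP_HEADER_LEN = struct.calcsize(OFP_HEADER_FMT)   # 8, matching OFP_ASSERT

def pack_header(version, msg_type, length, xid):
    return struct.pack(OFP_HEADER_FMT, version, msg_type, length, xid)

def unpack_header(data):
    return struct.unpack(OFP_HEADER_FMT, data[:OFP_HEADER_LEN])

# An OFPT_ECHO_REQUEST (type 3) with an empty body is just the header.
hdr = pack_header(0x01, 3, OFP_HEADER_LEN, 22092009)
```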
+
+/* OFPT_HELLO.  This message has an empty body, but implementations must
+ * ignore any data included in the body, to allow for future extensions. */
+struct ofp_hello {
+    struct ofp_header header;
+};
+
+#define OFP_DEFAULT_MISS_SEND_LEN   128
+
+enum ofp_config_flags {
+    /* Handling of IP fragments. */
+    OFPC_FRAG_NORMAL   = 0,  /* No special handling for fragments. */
+    OFPC_FRAG_DROP     = 1,  /* Drop fragments. */
+    OFPC_FRAG_REASM    = 2,  /* Reassemble (only if OFPC_IP_REASM set). */
+    OFPC_FRAG_MASK     = 3
+};
+
+/* Switch configuration. */
+struct ofp_switch_config {
+    struct ofp_header header;
+    uint16_t flags;             /* OFPC_* flags. */
+    uint16_t miss_send_len;     /* Max bytes of new flow that datapath should
+                                   send to the controller. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_config) == 12);
+
+/* Capabilities supported by the datapath. */
+enum ofp_capabilities {
+    OFPC_FLOW_STATS     = 1 << 0,  /* Flow statistics. */
+    OFPC_TABLE_STATS    = 1 << 1,  /* Table statistics. */
+    OFPC_PORT_STATS     = 1 << 2,  /* Port statistics. */
+    OFPC_STP            = 1 << 3,  /* 802.1d spanning tree. */
+    OFPC_RESERVED       = 1 << 4,  /* Reserved, must be zero. */
+    OFPC_IP_REASM       = 1 << 5,  /* Can reassemble IP fragments. */
+    OFPC_QUEUE_STATS    = 1 << 6,  /* Queue statistics. */
+    OFPC_ARP_MATCH_IP   = 1 << 7   /* Match IP addresses in ARP pkts. */
+};
+
+/* Flags to indicate behavior of the physical port.  These flags are
+ * used in ofp_phy_port to describe the current configuration.  They are
+ * used in the ofp_port_mod message to configure the port's behavior.
+ */
+enum ofp_port_config {
+    OFPPC_PORT_DOWN    = 1 << 0,  /* Port is administratively down. */
+
+    OFPPC_NO_STP       = 1 << 1,  /* Disable 802.1D spanning tree on port. */
+    OFPPC_NO_RECV      = 1 << 2,  /* Drop all packets except 802.1D spanning
+                                     tree packets. */
+    OFPPC_NO_RECV_STP  = 1 << 3,  /* Drop received 802.1D STP packets. */
+    OFPPC_NO_FLOOD     = 1 << 4,  /* Do not include this port when flooding. */
+    OFPPC_NO_FWD       = 1 << 5,  /* Drop packets forwarded to port. */
+    OFPPC_NO_PACKET_IN = 1 << 6   /* Do not send packet-in msgs for port. */
+};
+
+/* Current state of the physical port.  These are not configurable from
+ * the controller.
+ */
+enum ofp_port_state {
+    OFPPS_LINK_DOWN   = 1 << 0, /* No physical link present. */
+
+    /* The OFPPS_STP_* bits have no effect on switch operation.  The
+     * controller must adjust OFPPC_NO_RECV, OFPPC_NO_FWD, and
+     * OFPPC_NO_PACKET_IN appropriately to fully implement an 802.1D spanning
+     * tree. */
+    OFPPS_STP_LISTEN  = 0 << 8, /* Not learning or relaying frames. */
+    OFPPS_STP_LEARN   = 1 << 8, /* Learning but not relaying frames. */
+    OFPPS_STP_FORWARD = 2 << 8, /* Learning and relaying frames. */
+    OFPPS_STP_BLOCK   = 3 << 8, /* Not part of spanning tree. */
+    OFPPS_STP_MASK    = 3 << 8  /* Bit mask for OFPPS_STP_* values. */
+};
+
+/* Features of physical ports available in a datapath. */
+enum ofp_port_features {
+    OFPPF_10MB_HD    = 1 << 0,  /* 10 Mb half-duplex rate support. */
+    OFPPF_10MB_FD    = 1 << 1,  /* 10 Mb full-duplex rate support. */
+    OFPPF_100MB_HD   = 1 << 2,  /* 100 Mb half-duplex rate support. */
+    OFPPF_100MB_FD   = 1 << 3,  /* 100 Mb full-duplex rate support. */
+    OFPPF_1GB_HD     = 1 << 4,  /* 1 Gb half-duplex rate support. */
+    OFPPF_1GB_FD     = 1 << 5,  /* 1 Gb full-duplex rate support. */
+    OFPPF_10GB_FD    = 1 << 6,  /* 10 Gb full-duplex rate support. */
+    OFPPF_COPPER     = 1 << 7,  /* Copper medium. */
+    OFPPF_FIBER      = 1 << 8,  /* Fiber medium. */
+    OFPPF_AUTONEG    = 1 << 9,  /* Auto-negotiation. */
+    OFPPF_PAUSE      = 1 << 10, /* Pause. */
+    OFPPF_PAUSE_ASYM = 1 << 11  /* Asymmetric pause. */
+};
+
+/* Description of a physical port */
+struct ofp_phy_port {
+    uint16_t port_no;
+    uint8_t hw_addr[OFP_ETH_ALEN];
+    char name[OFP_MAX_PORT_NAME_LEN]; /* Null-terminated */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t state;         /* Bitmap of OFPPS_* flags. */
+
+    /* Bitmaps of OFPPF_* that describe features.  All bits zeroed if
+     * unsupported or unavailable. */
+    uint32_t curr;          /* Current features. */
+    uint32_t advertised;    /* Features being advertised by the port. */
+    uint32_t supported;     /* Features supported by the port. */
+    uint32_t peer;          /* Features advertised by peer. */
+};
+OFP_ASSERT(sizeof(struct ofp_phy_port) == 48);
+
+/* Switch features. */
+struct ofp_switch_features {
+    struct ofp_header header;
+    uint64_t datapath_id;   /* Datapath unique ID.  The lower 48-bits are for
+                               a MAC address, while the upper 16-bits are
+                               implementer-defined. */
+
+    uint32_t n_buffers;     /* Max packets buffered at once. */
+
+    uint8_t n_tables;       /* Number of tables supported by datapath. */
+    uint8_t pad[3];         /* Align to 64-bits. */
+
+    /* Features. */
+    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
+    uint32_t actions;       /* Bitmap of supported "ofp_action_type"s. */
+
+    /* Port info.*/
+    struct ofp_phy_port ports[0];  /* Port definitions.  The number of ports
+                                      is inferred from the length field in
+                                      the header. */
+};
+OFP_ASSERT(sizeof(struct ofp_switch_features) == 32);
+
+/* What changed about the physical port */
+enum ofp_port_reason {
+    OFPPR_ADD,              /* The port was added. */
+    OFPPR_DELETE,           /* The port was removed. */
+    OFPPR_MODIFY            /* Some attribute of the port has changed. */
+};
+
+/* A physical port has changed in the datapath */
+struct ofp_port_status {
+    struct ofp_header header;
+    uint8_t reason;          /* One of OFPPR_*. */
+    uint8_t pad[7];          /* Align to 64-bits. */
+    struct ofp_phy_port desc;
+};
+OFP_ASSERT(sizeof(struct ofp_port_status) == 64);
+
+/* Modify behavior of the physical port */
+struct ofp_port_mod {
+    struct ofp_header header;
+    uint16_t port_no;
+    uint8_t hw_addr[OFP_ETH_ALEN]; /* The hardware address is not
+                                      configurable.  This is used to
+                                      sanity-check the request, so it must
+                                      be the same as returned in an
+                                      ofp_phy_port struct. */
+
+    uint32_t config;        /* Bitmap of OFPPC_* flags. */
+    uint32_t mask;          /* Bitmap of OFPPC_* flags to be changed. */
+
+    uint32_t advertise;     /* Bitmap of "ofp_port_features"s.  Zero all
+                               bits to prevent any action taking place. */
+    uint8_t pad[4];         /* Pad to 64-bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_mod) == 32);
+
+/* Why is this packet being sent to the controller? */
+enum ofp_packet_in_reason {
+    OFPR_NO_MATCH,          /* No matching flow. */
+    OFPR_ACTION             /* Action explicitly output to controller. */
+};
+
+/* Packet received on port (datapath -> controller). */
+struct ofp_packet_in {
+    struct ofp_header header;
+    uint32_t buffer_id;     /* ID assigned by datapath. */
+    uint16_t total_len;     /* Full length of frame. */
+    uint16_t in_port;       /* Port on which frame was received. */
+    uint8_t reason;         /* Reason packet is being sent (one of OFPR_*) */
+    uint8_t pad;
+    uint8_t data[0];        /* Ethernet frame, halfway through 32-bit word,
+                               so the IP header is 32-bit aligned.  The
+                               amount of data is inferred from the length
+                               field in the header.  Because of padding,
+                               offsetof(struct ofp_packet_in, data) ==
+                               sizeof(struct ofp_packet_in) - 2. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_in) == 20);
+
+enum ofp_action_type {
+    OFPAT_OUTPUT,           /* Output to switch port. */
+    OFPAT_SET_VLAN_VID,     /* Set the 802.1q VLAN id. */
+    OFPAT_SET_VLAN_PCP,     /* Set the 802.1q priority. */
+    OFPAT_STRIP_VLAN,       /* Strip the 802.1q header. */
+    OFPAT_SET_DL_SRC,       /* Ethernet source address. */
+    OFPAT_SET_DL_DST,       /* Ethernet destination address. */
+    OFPAT_SET_NW_SRC,       /* IP source address. */
+    OFPAT_SET_NW_DST,       /* IP destination address. */
+    OFPAT_SET_NW_TOS,       /* IP ToS (DSCP field, 6 bits). */
+    OFPAT_SET_TP_SRC,       /* TCP/UDP source port. */
+    OFPAT_SET_TP_DST,       /* TCP/UDP destination port. */
+    OFPAT_ENQUEUE,          /* Output to queue.  */
+    OFPAT_VENDOR = 0xffff
+};
+
+/* Action structure for OFPAT_OUTPUT, which sends packets out 'port'.
+ * When the 'port' is the OFPP_CONTROLLER, 'max_len' indicates the max
+ * number of bytes to send.  A 'max_len' of zero means no bytes of the
+ * packet should be sent.*/
+struct ofp_action_output {
+    uint16_t type;                  /* OFPAT_OUTPUT. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t port;                  /* Output port. */
+    uint16_t max_len;               /* Max length to send to controller. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_output) == 8);
+
+/* The VLAN id is 12 bits, so we can use the entire 16 bits to indicate
+ * special conditions.  All ones is used to match that no VLAN id was
+ * set. */
+#define OFP_VLAN_NONE      0xffff
+
+/* Action structure for OFPAT_SET_VLAN_VID. */
+struct ofp_action_vlan_vid {
+    uint16_t type;                  /* OFPAT_SET_VLAN_VID. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t vlan_vid;              /* VLAN id. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_vid) == 8);
+
+/* Action structure for OFPAT_SET_VLAN_PCP. */
+struct ofp_action_vlan_pcp {
+    uint16_t type;                  /* OFPAT_SET_VLAN_PCP. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t vlan_pcp;               /* VLAN priority. */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_vlan_pcp) == 8);
+
+/* Action structure for OFPAT_SET_DL_SRC/DST. */
+struct ofp_action_dl_addr {
+    uint16_t type;                  /* OFPAT_SET_DL_SRC/DST. */
+    uint16_t len;                   /* Length is 16. */
+    uint8_t dl_addr[OFP_ETH_ALEN];  /* Ethernet address. */
+    uint8_t pad[6];
+};
+OFP_ASSERT(sizeof(struct ofp_action_dl_addr) == 16);
+
+/* Action structure for OFPAT_SET_NW_SRC/DST. */
+struct ofp_action_nw_addr {
+    uint16_t type;                  /* OFPAT_SET_NW_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint32_t nw_addr;               /* IP address. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_addr) == 8);
+
+/* Action structure for OFPAT_SET_TP_SRC/DST. */
+struct ofp_action_tp_port {
+    uint16_t type;                  /* OFPAT_SET_TP_SRC/DST. */
+    uint16_t len;                   /* Length is 8. */
+    uint16_t tp_port;               /* TCP/UDP port. */
+    uint8_t pad[2];
+};
+OFP_ASSERT(sizeof(struct ofp_action_tp_port) == 8);
+
+/* Action structure for OFPAT_SET_NW_TOS. */
+struct ofp_action_nw_tos {
+    uint16_t type;                  /* OFPAT_SET_NW_TOS. */
+    uint16_t len;                   /* Length is 8. */
+    uint8_t nw_tos;                 /* IP ToS (DSCP field, 6 bits). */
+    uint8_t pad[3];
+};
+OFP_ASSERT(sizeof(struct ofp_action_nw_tos) == 8);
+
+/* Action header for OFPAT_VENDOR. The rest of the body is vendor-defined. */
+struct ofp_action_vendor_header {
+    uint16_t type;                  /* OFPAT_VENDOR. */
+    uint16_t len;                   /* Length is a multiple of 8. */
+    uint32_t vendor;                /* Vendor ID, which takes the same form
+                                       as in "struct ofp_vendor_header". */
+};
+OFP_ASSERT(sizeof(struct ofp_action_vendor_header) == 8);
+
+/* Action header that is common to all actions.  The length includes the
+ * header and any padding used to make the action 64-bit aligned.
+ * NB: The length of an action *must* always be a multiple of eight. */
+struct ofp_action_header {
+    uint16_t type;                  /* One of OFPAT_*. */
+    uint16_t len;                   /* Length of action, including this
+                                       header.  This is the length of action,
+                                       including any padding to make it
+                                       64-bit aligned. */
+    uint8_t pad[4];
+};
+OFP_ASSERT(sizeof(struct ofp_action_header) == 8);
+
+/* Send packet (controller -> datapath). */
+struct ofp_packet_out {
+    struct ofp_header header;
+    uint32_t buffer_id;           /* ID assigned by datapath (-1 if none). */
+    uint16_t in_port;             /* Packet's input port (OFPP_NONE if none). */
+    uint16_t actions_len;         /* Size of action array in bytes. */
+    struct ofp_action_header actions[0]; /* Actions. */
+    /* uint8_t data[0]; */        /* Packet data.  The length is inferred
+                                     from the length field in the header.
+                                     (Only meaningful if buffer_id == -1.) */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_out) == 16);
+
+enum ofp_flow_mod_command {
+    OFPFC_ADD,              /* New flow. */
+    OFPFC_MODIFY,           /* Modify all matching flows. */
+    OFPFC_MODIFY_STRICT,    /* Modify entry strictly matching wildcards */
+    OFPFC_DELETE,           /* Delete all matching flows. */
+    OFPFC_DELETE_STRICT    /* Strictly match wildcards and priority. */
+};
+
+/* Flow wildcards. */
+enum ofp_flow_wildcards {
+    OFPFW_IN_PORT  = 1 << 0,  /* Switch input port. */
+    OFPFW_DL_VLAN  = 1 << 1,  /* VLAN id. */
+    OFPFW_DL_SRC   = 1 << 2,  /* Ethernet source address. */
+    OFPFW_DL_DST   = 1 << 3,  /* Ethernet destination address. */
+    OFPFW_DL_TYPE  = 1 << 4,  /* Ethernet frame type. */
+    OFPFW_NW_PROTO = 1 << 5,  /* IP protocol. */
+    OFPFW_TP_SRC   = 1 << 6,  /* TCP/UDP source port. */
+    OFPFW_TP_DST   = 1 << 7,  /* TCP/UDP destination port. */
+
+    /* IP source address wildcard bit count.  0 is exact match, 1 ignores the
+     * LSB, 2 ignores the 2 least-significant bits, ..., 32 and higher wildcard
+     * the entire field.  This is the *opposite* of the usual convention where
+     * e.g. /24 indicates that 8 bits (not 24 bits) are wildcarded. */
+    OFPFW_NW_SRC_SHIFT = 8,
+    OFPFW_NW_SRC_BITS = 6,
+    OFPFW_NW_SRC_MASK = ((1 << OFPFW_NW_SRC_BITS) - 1) << OFPFW_NW_SRC_SHIFT,
+    OFPFW_NW_SRC_ALL = 32 << OFPFW_NW_SRC_SHIFT,
+
+    /* IP destination address wildcard bit count.  Same format as source. */
+    OFPFW_NW_DST_SHIFT = 14,
+    OFPFW_NW_DST_BITS = 6,
+    OFPFW_NW_DST_MASK = ((1 << OFPFW_NW_DST_BITS) - 1) << OFPFW_NW_DST_SHIFT,
+    OFPFW_NW_DST_ALL = 32 << OFPFW_NW_DST_SHIFT,
+
+    OFPFW_DL_VLAN_PCP = 1 << 20,  /* VLAN priority. */
+    OFPFW_NW_TOS = 1 << 21,  /* IP ToS (DSCP field, 6 bits). */
+
+    /* Wildcard all fields. */
+    OFPFW_ALL = ((1 << 22) - 1)
+};
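Note that OFPFW_NW_SRC/DST store the number of low-order bits to *ignore*, the opposite of a CIDR prefix length. A small sketch of the conversion (constants copied from the enum above; the helper name is illustrative):

```python
OFPFW_NW_SRC_SHIFT = 8
OFPFW_NW_SRC_BITS = 6
OFPFW_NW_SRC_MASK = ((1 << OFPFW_NW_SRC_BITS) - 1) << OFPFW_NW_SRC_SHIFT

def nw_src_wildcard(prefix_len):
    """Encode an IPv4 source prefix length as the OFPFW_NW_SRC field:
    the stored value is the count of ignored low-order bits."""
    return ((32 - prefix_len) << OFPFW_NW_SRC_SHIFT) & OFPFW_NW_SRC_MASK

# A /24 match ignores 8 bits (not 24), per the comment in the enum.
assert nw_src_wildcard(24) == 8 << OFPFW_NW_SRC_SHIFT
assert nw_src_wildcard(32) == 0                       # exact match
assert nw_src_wildcard(0) == 32 << OFPFW_NW_SRC_SHIFT # whole field wild
```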
+
+/* The wildcards for ICMP type and code fields use the transport source
+ * and destination port fields, respectively. */
+#define OFPFW_ICMP_TYPE OFPFW_TP_SRC
+#define OFPFW_ICMP_CODE OFPFW_TP_DST
+
+/* Values below this cutoff are 802.3 packets and the two bytes
+ * following MAC addresses are used as a frame length.  Otherwise, the
+ * two bytes are used as the Ethernet type.
+ */
+#define OFP_DL_TYPE_ETH2_CUTOFF   0x0600
+
+/* Value of dl_type to indicate that the frame does not include an
+ * Ethernet type.
+ */
+#define OFP_DL_TYPE_NOT_ETH_TYPE  0x05ff
+
+/* The VLAN id is 12-bits, so we can use the entire 16 bits to indicate
+ * special conditions.  All ones indicates that no VLAN id was set.
+ */
+#define OFP_VLAN_NONE      0xffff
+
+/* Fields to match against flows */
+struct ofp_match {
+    uint32_t wildcards;        /* Wildcard fields. */
+    uint16_t in_port;          /* Input switch port. */
+    uint8_t dl_src[OFP_ETH_ALEN]; /* Ethernet source address. */
+    uint8_t dl_dst[OFP_ETH_ALEN]; /* Ethernet destination address. */
+    uint16_t dl_vlan;          /* Input VLAN id. */
+    uint8_t dl_vlan_pcp;       /* Input VLAN priority. */
+    uint8_t pad1[1];           /* Align to 64-bits */
+    uint16_t dl_type;          /* Ethernet frame type. */
+    uint8_t nw_tos;            /* IP ToS (actually DSCP field, 6 bits). */
+    uint8_t nw_proto;          /* IP protocol or lower 8 bits of
+                                * ARP opcode. */
+    uint8_t pad2[2];           /* Align to 64-bits */
+    uint32_t nw_src;           /* IP source address. */
+    uint32_t nw_dst;           /* IP destination address. */
+    uint16_t tp_src;           /* TCP/UDP source port. */
+    uint16_t tp_dst;           /* TCP/UDP destination port. */
+};
+OFP_ASSERT(sizeof(struct ofp_match) == 40);
+
+/* The match fields for ICMP type and code use the transport source and
+ * destination port fields, respectively. */
+#define icmp_type tp_src
+#define icmp_code tp_dst
+
+/* Value used in "idle_timeout" and "hard_timeout" to indicate that the entry
+ * is permanent. */
+#define OFP_FLOW_PERMANENT 0
+
+/* By default, choose a priority in the middle. */
+#define OFP_DEFAULT_PRIORITY 0x8000
+
+enum ofp_flow_mod_flags {
+    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send flow removed message when flow
+                                    * expires or is deleted. */
+    OFPFF_CHECK_OVERLAP = 1 << 1,  /* Check for overlapping entries first. */
+    OFPFF_EMERG         = 1 << 2   /* Mark this as an emergency flow entry. */
+};
+
+/* Flow setup and teardown (controller -> datapath). */
+struct ofp_flow_mod {
+    struct ofp_header header;
+    struct ofp_match match;      /* Fields to match */
+    uint64_t cookie;             /* Opaque controller-issued identifier. */
+
+    /* Flow actions. */
+    uint16_t command;             /* One of OFPFC_*. */
+    uint16_t idle_timeout;        /* Idle time before discarding (seconds). */
+    uint16_t hard_timeout;        /* Max time before discarding (seconds). */
+    uint16_t priority;            /* Priority level of flow entry. */
+    uint32_t buffer_id;           /* Buffered packet to apply to (or -1).
+                                     Not meaningful for OFPFC_DELETE*. */
+    uint16_t out_port;            /* For OFPFC_DELETE* commands, require
+                                     matching entries to include this as an
+                                     output port.  A value of OFPP_NONE
+                                     indicates no restriction. */
+    uint16_t flags;               /* One of OFPFF_*. */
+    struct ofp_action_header actions[0]; /* The action length is inferred
+                                            from the length field in the
+                                            header. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_mod) == 72);
+
+/* Why was this flow removed? */
+enum ofp_flow_removed_reason {
+    OFPRR_IDLE_TIMEOUT,         /* Flow idle time exceeded idle_timeout. */
+    OFPRR_HARD_TIMEOUT,         /* Time exceeded hard_timeout. */
+    OFPRR_DELETE                /* Evicted by a DELETE flow mod. */
+};
+
+/* Flow removed (datapath -> controller). */
+struct ofp_flow_removed {
+    struct ofp_header header;
+    struct ofp_match match;   /* Description of fields. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+
+    uint16_t priority;        /* Priority level of flow entry. */
+    uint8_t reason;           /* One of OFPRR_*. */
+    uint8_t pad[1];           /* Align to 32-bits. */
+
+    uint32_t duration_sec;    /* Time flow was alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow was alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t idle_timeout;    /* Idle timeout from original flow mod. */
+    uint8_t pad2[2];          /* Align to 64-bits. */
+    uint64_t packet_count;
+    uint64_t byte_count;
+};
+OFP_ASSERT(sizeof(struct ofp_flow_removed) == 88);
+
+/* Values for 'type' in ofp_error_message.  These values are immutable: they
+ * will not change in future versions of the protocol (although new values may
+ * be added). */
+enum ofp_error_type {
+    OFPET_HELLO_FAILED,         /* Hello protocol failed. */
+    OFPET_BAD_REQUEST,          /* Request was not understood. */
+    OFPET_BAD_ACTION,           /* Error in action description. */
+    OFPET_FLOW_MOD_FAILED,      /* Problem modifying flow entry. */
+    OFPET_PORT_MOD_FAILED,      /* Port mod request failed. */
+    OFPET_QUEUE_OP_FAILED       /* Queue operation failed. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_HELLO_FAILED.  'data' contains an
+ * ASCII text string that may give failure details. */
+enum ofp_hello_failed_code {
+    OFPHFC_INCOMPATIBLE,        /* No compatible version. */
+    OFPHFC_EPERM                /* Permissions error. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_REQUEST.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_request_code {
+    OFPBRC_BAD_VERSION,         /* ofp_header.version not supported. */
+    OFPBRC_BAD_TYPE,            /* ofp_header.type not supported. */
+    OFPBRC_BAD_STAT,            /* ofp_stats_request.type not supported. */
+    OFPBRC_BAD_VENDOR,          /* Vendor not supported (in ofp_vendor_header
+                                 * or ofp_stats_request or ofp_stats_reply). */
+    OFPBRC_BAD_SUBTYPE,         /* Vendor subtype not supported. */
+    OFPBRC_EPERM,               /* Permissions error. */
+    OFPBRC_BAD_LEN,             /* Wrong request length for type. */
+    OFPBRC_BUFFER_EMPTY,        /* Specified buffer has already been used. */
+    OFPBRC_BUFFER_UNKNOWN       /* Specified buffer does not exist. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_BAD_ACTION.  'data' contains at least
+ * the first 64 bytes of the failed request. */
+enum ofp_bad_action_code {
+    OFPBAC_BAD_TYPE,           /* Unknown action type. */
+    OFPBAC_BAD_LEN,            /* Length problem in actions. */
+    OFPBAC_BAD_VENDOR,         /* Unknown vendor id specified. */
+    OFPBAC_BAD_VENDOR_TYPE,    /* Unknown action type for vendor id. */
+    OFPBAC_BAD_OUT_PORT,       /* Problem validating output action. */
+    OFPBAC_BAD_ARGUMENT,       /* Bad action argument. */
+    OFPBAC_EPERM,              /* Permissions error. */
+    OFPBAC_TOO_MANY,           /* Can't handle this many actions. */
+    OFPBAC_BAD_QUEUE           /* Problem validating output queue. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_FLOW_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_flow_mod_failed_code {
+    OFPFMFC_ALL_TABLES_FULL,    /* Flow not added because of full tables. */
+    OFPFMFC_OVERLAP,            /* Attempted to add overlapping flow with
+                                 * CHECK_OVERLAP flag set. */
+    OFPFMFC_EPERM,              /* Permissions error. */
+    OFPFMFC_BAD_EMERG_TIMEOUT,  /* Flow not added because of non-zero idle/hard
+                                 * timeout. */
+    OFPFMFC_BAD_COMMAND,        /* Unknown command. */
+    OFPFMFC_UNSUPPORTED         /* Unsupported action list - cannot process in
+                                 * the order specified. */
+};
+
+/* ofp_error_msg 'code' values for OFPET_PORT_MOD_FAILED.  'data' contains
+ * at least the first 64 bytes of the failed request. */
+enum ofp_port_mod_failed_code {
+    OFPPMFC_BAD_PORT,            /* Specified port does not exist. */
+    OFPPMFC_BAD_HW_ADDR,         /* Specified hardware address is wrong. */
+};
+
+/* ofp_error msg 'code' values for OFPET_QUEUE_OP_FAILED. 'data' contains
+ * at least the first 64 bytes of the failed request */
+enum ofp_queue_op_failed_code {
+    OFPQOFC_BAD_PORT,           /* Invalid port (or port does not exist). */
+    OFPQOFC_BAD_QUEUE,          /* Queue does not exist. */
+    OFPQOFC_EPERM               /* Permissions error. */
+};
+
+/* OFPT_ERROR: Error message (datapath -> controller). */
+struct ofp_error_msg {
+    struct ofp_header header;
+
+    uint16_t type;
+    uint16_t code;
+    uint8_t data[0];          /* Variable-length data.  Interpreted based
+                                 on the type and code. */
+};
+OFP_ASSERT(sizeof(struct ofp_error_msg) == 12);
+
+enum ofp_stats_types {
+    /* Description of this OpenFlow switch.
+     * The request body is empty.
+     * The reply body is struct ofp_desc_stats. */
+    OFPST_DESC,
+
+    /* Individual flow statistics.
+     * The request body is struct ofp_flow_stats_request.
+     * The reply body is an array of struct ofp_flow_stats. */
+    OFPST_FLOW,
+
+    /* Aggregate flow statistics.
+     * The request body is struct ofp_aggregate_stats_request.
+     * The reply body is struct ofp_aggregate_stats_reply. */
+    OFPST_AGGREGATE,
+
+    /* Flow table statistics.
+     * The request body is empty.
+     * The reply body is an array of struct ofp_table_stats. */
+    OFPST_TABLE,
+
+    /* Physical port statistics.
+     * The request body is struct ofp_port_stats_request.
+     * The reply body is an array of struct ofp_port_stats. */
+    OFPST_PORT,
+
+    /* Queue statistics for a port.
+     * The request body is struct ofp_queue_stats_request.
+     * The reply body is an array of struct ofp_queue_stats. */
+    OFPST_QUEUE,
+
+    /* Vendor extension.
+     * The request and reply bodies begin with a 32-bit vendor ID, which takes
+     * the same form as in "struct ofp_vendor_header".  The request and reply
+     * bodies are otherwise vendor-defined. */
+    OFPST_VENDOR = 0xffff
+};
+
+struct ofp_stats_request {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REQ_* flags (none yet defined). */
+    uint8_t body[0];            /* Body of the request. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_request) == 12);
+
+enum ofp_stats_reply_flags {
+    OFPSF_REPLY_MORE  = 1 << 0  /* More replies to follow. */
+};
+
+struct ofp_stats_reply {
+    struct ofp_header header;
+    uint16_t type;              /* One of the OFPST_* constants. */
+    uint16_t flags;             /* OFPSF_REPLY_* flags. */
+    uint8_t body[0];            /* Body of the reply. */
+};
+OFP_ASSERT(sizeof(struct ofp_stats_reply) == 12);
+
+#define DESC_STR_LEN   256
+#define SERIAL_NUM_LEN 32
+/* Body of reply to OFPST_DESC request.  Each entry is a NULL-terminated
+ * ASCII string. */
+struct ofp_desc_stats {
+    char mfr_desc[DESC_STR_LEN];       /* Manufacturer description. */
+    char hw_desc[DESC_STR_LEN];        /* Hardware description. */
+    char sw_desc[DESC_STR_LEN];        /* Software description. */
+    char serial_num[SERIAL_NUM_LEN];   /* Serial number. */
+    char dp_desc[DESC_STR_LEN];        /* Human readable description of datapath. */
+};
+OFP_ASSERT(sizeof(struct ofp_desc_stats) == 1056);
+
+/* Body for ofp_stats_request of type OFPST_FLOW. */
+struct ofp_flow_stats_request {
+    struct ofp_match match;   /* Fields to match. */
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats),
+                                 0xff for all tables or 0xfe for emergency. */
+    uint8_t pad;              /* Align to 32 bits. */
+    uint16_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_NONE
+                                 indicates no restriction. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats_request) == 44);
+
+/* Body of reply to OFPST_FLOW request. */
+struct ofp_flow_stats {
+    uint16_t length;          /* Length of this entry. */
+    uint8_t table_id;         /* ID of table flow came from. */
+    uint8_t pad;
+    struct ofp_match match;   /* Description of fields. */
+    uint32_t duration_sec;    /* Time flow has been alive in seconds. */
+    uint32_t duration_nsec;   /* Time flow has been alive in nanoseconds beyond
+                                 duration_sec. */
+    uint16_t priority;        /* Priority of the entry. Only meaningful
+                                 when this is not an exact-match entry. */
+    uint16_t idle_timeout;    /* Number of seconds idle before expiration. */
+    uint16_t hard_timeout;    /* Number of seconds before expiration. */
+    uint8_t pad2[6];          /* Align to 64-bits. */
+    uint64_t cookie;          /* Opaque controller-issued identifier. */
+    uint64_t packet_count;    /* Number of packets in flow. */
+    uint64_t byte_count;      /* Number of bytes in flow. */
+    struct ofp_action_header actions[0]; /* Actions. */
+};
+OFP_ASSERT(sizeof(struct ofp_flow_stats) == 88);
+
+/* Body for ofp_stats_request of type OFPST_AGGREGATE. */
+struct ofp_aggregate_stats_request {
+    struct ofp_match match;   /* Fields to match. */
+    uint8_t table_id;         /* ID of table to read (from ofp_table_stats)
+                                 0xff for all tables or 0xfe for emergency. */
+    uint8_t pad;              /* Align to 32 bits. */
+    uint16_t out_port;        /* Require matching entries to include this
+                                 as an output port.  A value of OFPP_NONE
+                                 indicates no restriction. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_request) == 44);
+
+/* Body of reply to OFPST_AGGREGATE request. */
+struct ofp_aggregate_stats_reply {
+    uint64_t packet_count;    /* Number of packets in flows. */
+    uint64_t byte_count;      /* Number of bytes in flows. */
+    uint32_t flow_count;      /* Number of flows. */
+    uint8_t pad[4];           /* Align to 64 bits. */
+};
+OFP_ASSERT(sizeof(struct ofp_aggregate_stats_reply) == 24);
+
+/* Body of reply to OFPST_TABLE request. */
+struct ofp_table_stats {
+    uint8_t table_id;        /* Identifier of table.  Lower numbered tables
+                                are consulted first. */
+    uint8_t pad[3];          /* Align to 32-bits. */
+    char name[OFP_MAX_TABLE_NAME_LEN];
+    uint32_t wildcards;      /* Bitmap of OFPFW_* wildcards that are
+                                supported by the table. */
+    uint32_t max_entries;    /* Max number of entries supported. */
+    uint32_t active_count;   /* Number of active entries. */
+    uint64_t lookup_count;   /* Number of packets looked up in table. */
+    uint64_t matched_count;  /* Number of packets that hit table. */
+};
+OFP_ASSERT(sizeof(struct ofp_table_stats) == 64);
+
+/* Body for ofp_stats_request of type OFPST_PORT. */
+struct ofp_port_stats_request {
+    uint16_t port_no;        /* OFPST_PORT message must request statistics
+                              * either for a single port (specified in
+                              * port_no) or for all ports (if port_no ==
+                              * OFPP_NONE). */
+    uint8_t pad[6];
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats_request) == 8);
+
+/* Body of reply to OFPST_PORT request. If a counter is unsupported, set
+ * the field to all ones. */
+struct ofp_port_stats {
+    uint16_t port_no;
+    uint8_t pad[6];          /* Align to 64-bits. */
+    uint64_t rx_packets;     /* Number of received packets. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t rx_bytes;       /* Number of received bytes. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t rx_dropped;     /* Number of packets dropped by RX. */
+    uint64_t tx_dropped;     /* Number of packets dropped by TX. */
+    uint64_t rx_errors;      /* Number of receive errors.  This is a super-set
+                                of more specific receive errors and should be
+                                greater than or equal to the sum of all
+                                rx_*_err values. */
+    uint64_t tx_errors;      /* Number of transmit errors.  This is a super-set
+                                of more specific transmit errors and should be
+                                greater than or equal to the sum of all
+                                tx_*_err values (none currently defined.) */
+    uint64_t rx_frame_err;   /* Number of frame alignment errors. */
+    uint64_t rx_over_err;    /* Number of packets with RX overrun. */
+    uint64_t rx_crc_err;     /* Number of CRC errors. */
+    uint64_t collisions;     /* Number of collisions. */
+};
+OFP_ASSERT(sizeof(struct ofp_port_stats) == 104);
+
+/* Vendor extension. */
+struct ofp_vendor_header {
+    struct ofp_header header;   /* Type OFPT_VENDOR. */
+    uint32_t vendor;            /* Vendor ID:
+                                 * - MSB 0: low-order bytes are IEEE OUI.
+                                 * - MSB != 0: defined by OpenFlow
+                                 *   consortium. */
+    /* Vendor-defined arbitrary additional data. */
+};
+OFP_ASSERT(sizeof(struct ofp_vendor_header) == 12);
+
+/* All ones is used to indicate all queues in a port (for stats retrieval). */
+#define OFPQ_ALL      0xffffffff
+
+/* Min rate > 1000 means not configured. */
+#define OFPQ_MIN_RATE_UNCFG      0xffff
+
+enum ofp_queue_properties {
+    OFPQT_NONE = 0,       /* No property defined for queue (default). */
+    OFPQT_MIN_RATE,       /* Minimum datarate guaranteed. */
+                          /* Other types should be added here
+                           * (i.e. max rate, precedence, etc). */
+};
+
+/* Common description for a queue. */
+struct ofp_queue_prop_header {
+    uint16_t property;    /* One of OFPQT_. */
+    uint16_t len;         /* Length of property, including this header. */
+    uint8_t pad[4];       /* 64-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_header) == 8);
+
+/* Min-Rate queue property description. */
+struct ofp_queue_prop_min_rate {
+    struct ofp_queue_prop_header prop_header; /* prop: OFPQT_MIN, len: 16. */
+    uint16_t rate;        /* In 1/10 of a percent; >1000 -> disabled. */
+    uint8_t pad[6];       /* 64-bit alignment */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_prop_min_rate) == 16);
+
+/* Full description for a queue. */
+struct ofp_packet_queue {
+    uint32_t queue_id;     /* id for the specific queue. */
+    uint16_t len;          /* Length in bytes of this queue desc. */
+    uint8_t pad[2];        /* 64-bit alignment. */
+    struct ofp_queue_prop_header properties[0]; /* List of properties. */
+};
+OFP_ASSERT(sizeof(struct ofp_packet_queue) == 8);
+
+/* Query for port queue configuration. */
+struct ofp_queue_get_config_request {
+    struct ofp_header header;
+    uint16_t port;         /* Port to be queried. Should refer
+                              to a valid physical port (i.e. < OFPP_MAX) */
+    uint8_t pad[2];        /* 32-bit alignment. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_request) == 12);
+
+/* Queue configuration for a given port. */
+struct ofp_queue_get_config_reply {
+    struct ofp_header header;
+    uint16_t port;
+    uint8_t pad[6];
+    struct ofp_packet_queue queues[0]; /* List of configured queues. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_get_config_reply) == 16);
+
+/* OFPAT_ENQUEUE action struct: send packets to given queue on port. */
+struct ofp_action_enqueue {
+    uint16_t type;            /* OFPAT_ENQUEUE. */
+    uint16_t len;             /* Len is 16. */
+    uint16_t port;            /* Port that queue belongs. Should
+                                 refer to a valid physical port
+                                 (i.e. < OFPP_MAX) or OFPP_IN_PORT. */
+    uint8_t pad[6];           /* Pad for 64-bit alignment. */
+    uint32_t queue_id;        /* Where to enqueue the packets. */
+};
+OFP_ASSERT(sizeof(struct ofp_action_enqueue) == 16);
+
+struct ofp_queue_stats_request {
+    uint16_t port_no;        /* All ports if OFPP_ALL. */
+    uint8_t pad[2];          /* Align to 32-bits. */
+    uint32_t queue_id;       /* All queues if OFPQ_ALL. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats_request) == 8);
+
+struct ofp_queue_stats {
+    uint16_t port_no;
+    uint8_t pad[2];          /* Align to 32-bits. */
+    uint32_t queue_id;       /* Queue id. */
+    uint64_t tx_bytes;       /* Number of transmitted bytes. */
+    uint64_t tx_packets;     /* Number of transmitted packets. */
+    uint64_t tx_errors;      /* Number of packets dropped due to overrun. */
+};
+OFP_ASSERT(sizeof(struct ofp_queue_stats) == 32);
+
+#endif /* openflow/openflow.h */
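The fixed-size structs above map directly onto Python `struct` format strings, which is what the pylibopenflow tooling later in this patch automates. A minimal hand-written sketch for the 12-byte `ofp_stats_request` (assuming network byte order as the spec mandates; `OFPT_STATS_REQUEST = 16` and `OFPST_DESC = 0` are the OpenFlow 1.0 values):

```python
import struct

# ofp_header (8 bytes): version, type, length, xid -> "!BBHI".
# ofp_stats_request appends stats type and flags -> "HH", 12 bytes total.
OFP_HEADER_FMT = "!BBHI"
STATS_REQ_FMT = OFP_HEADER_FMT + "HH"

OFP_VERSION = 0x01
OFPT_STATS_REQUEST = 16
OFPST_DESC = 0

def build_desc_stats_request(xid):
    """Build an OFPST_DESC stats request (the request body is empty)."""
    length = struct.calcsize(STATS_REQ_FMT)  # 12, matching the OFP_ASSERT
    return struct.pack(STATS_REQ_FMT, OFP_VERSION, OFPT_STATS_REQUEST,
                       length, xid, OFPST_DESC, 0)

msg = build_desc_stats_request(42)
assert len(msg) == 12
```

The reply arrives as an `ofp_stats_reply` of type `OFPST_DESC` whose body is the 1056-byte `ofp_desc_stats` defined above.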
diff --git a/tools/pylibopenflow/include/pyopenflow.template.py b/tools/pylibopenflow/include/pyopenflow.template.py
new file mode 100644
index 0000000..29b59f4
--- /dev/null
+++ b/tools/pylibopenflow/include/pyopenflow.template.py
@@ -0,0 +1,21 @@
+import socket
+
+class ofsocket:
+	"""OpenFlow socket
+	"""
+	def __init__(self, socket):
+		"""Initialize with socket
+		"""
+		##Reference to socket
+		self.socket = socket
+
+	def send(self, msg):
+		"""Send message
+		"""
+		ofph = ofp_header()
+		remaining = ofph.unpack(msg)
+		if (ofph.length != len(msg)):
+			ofph.length = len(msg)
+			msg = ofph.pack()+remaining
+		self.socket.send(msg)
+
diff --git a/tools/pylibopenflow/pylib/c2py.py b/tools/pylibopenflow/pylib/c2py.py
new file mode 100644
index 0000000..b699c5e
--- /dev/null
+++ b/tools/pylibopenflow/pylib/c2py.py
@@ -0,0 +1,154 @@
+"""This module converts C types to Python struct pattern strings.
+
+Date June 2009
+Created by ykk
+"""
+import cheader
+import struct
+
+class cstruct2py:
+    """Class converts C struct to Python struct pattern string
+
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize
+        """
+        ##Mapping
+        self.structmap = {}
+        self.structmap["char"] = "c"
+        self.structmap["signed char"] = "b"
+        self.structmap["uint8_t"]=\
+            self.structmap["unsigned char"] = "B"
+        self.structmap["short"] = "h"
+        self.structmap["uint16_t"] =\
+            self.structmap["unsigned short"] = "H"
+        self.structmap["int"] = "i"
+        self.structmap["unsigned int"] = "I"
+        self.structmap["long"] = "l"
+        self.structmap["uint32_t"] =\
+            self.structmap["unsigned long"] = "L"
+        self.structmap["long long"] = "q"
+        self.structmap["uint64_t"] =\
+            self.structmap["unsigned long long"] = "Q"
+        self.structmap["float"] = "f"
+        self.structmap["double"] = "d"
+
+    def get_pattern(self,ctype):
+        """Get pattern string for ctype.
+        Return None if ctype is not expanded.
+        """
+        if (ctype.expanded):
+            if (isinstance(ctype, cheader.cprimitive)):
+                return self.structmap[ctype.typename]
+            elif (isinstance(ctype, cheader.cstruct)):
+                string=""
+                for member in ctype.members:
+                    string += self.get_pattern(member)
+                return string
+            elif (isinstance(ctype, cheader.carray)):
+                if (ctype.size == 0):
+                    return ""
+                else:
+                    string = self.get_pattern(ctype.object)
+                    return string * ctype.size
+        return None
+        
+    def get_size(self, ctype, prefix="!"):
+        """Return size of struct or pattern specified
+        """
+        if (isinstance(ctype, str)):
+            return struct.calcsize(ctype)
+        elif (isinstance(ctype, cheader.ctype)):
+            return struct.calcsize(prefix + self.get_pattern(ctype))
+        else:
+            return 0
+
+class structpacker:
+    """Pack/unpack packets with ctype.
+    
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, prefix=""):
+        """Initialize with prefix to struct
+        """
+        ##Reference to prefix
+        self.prefix = prefix
+        
+    def pack(self, ctype, *arg):
+        """Pack a packet according to the ctype or pattern provided.
+        Return the packed struct.
+        """
+        if (isinstance(ctype, str)):
+            return struct.pack(self.prefix+ctype, *arg)
+        elif (isinstance(ctype, cheader.ctype)):
+            return struct.pack(self.prefix+cstruct2py().get_pattern(ctype),
+                               *arg)
+        else:
+            return None
+
+    def unpack_from_front(self, ctype, binaryString, returnDictionary=True):
+        """Unpack from the front of the packet,
+        according to the ctype or pattern provided.
+
+        Return (dictionary of values indexed by arg name, 
+        remaining binary string) if ctype is cheader.ctype
+        and returnDictionary is True, 
+        else return (array of data unpacked, remaining binary string).
+        """
+        pattern = ""
+        if (isinstance(ctype, str)):
+            pattern = ctype
+        elif (isinstance(ctype, cheader.ctype)):
+            pattern = cstruct2py().get_pattern(ctype)
+        else:
+            return None
+        dsize = struct.calcsize(pattern)
+
+        if (dsize > len(binaryString)):
+            return None
+
+        return (self.peek_from_front(pattern, binaryString, returnDictionary),
+                binaryString[dsize:])
+
+    def peek_from_front(self, ctype, binaryString, returnDictionary=True):
+        """Unpack from the front of the packet,
+        according to the ctype or pattern provided.
+
+        Return dictionary of values indexed by arg name,
+        if ctype is cheader.ctype and returnDictionary is True, 
+        else return array of data unpacked.
+        """
+        pattern = self.prefix
+        if (isinstance(ctype, str)):
+            pattern += ctype
+        elif (isinstance(ctype, cheader.ctype)):
+            pattern += cstruct2py().get_pattern(ctype)
+        else:
+            return None
+        dsize = struct.calcsize(pattern)
+        if (dsize > len(binaryString)):
+            return None
+        data = struct.unpack(pattern, binaryString[0:dsize])
+        
+        #Return values
+        if (isinstance(ctype, str) or
+            (not returnDictionary)):
+            return data
+        else:
+            return self.data2dic(ctype, data)
+
+    def data2dic(self,ctype,data):
+        """Convert data to dictionary
+        """
+        valDic = {}
+        names = ctype.get_names()
+        for name in names:
+            valDic[name] = []
+        for d in data:
+            name = names.pop(0)
+            valDic[name].append(d)
+        return valDic
+
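The mapping table that `cstruct2py` builds in `__init__` is just `struct`'s own format characters; a standalone sanity check (not importing the module itself, since it also pulls in `cheader` and `config`) against one of the `OFP_ASSERT` sizes in the header above:

```python
import struct

# The same C-type -> struct-pattern table cstruct2py builds in __init__.
structmap = {
    "char": "c", "signed char": "b",
    "uint8_t": "B", "unsigned char": "B",
    "short": "h", "uint16_t": "H", "unsigned short": "H",
    "int": "i", "unsigned int": "I",
    "long": "l", "uint32_t": "L", "unsigned long": "L",
    "long long": "q", "uint64_t": "Q", "unsigned long long": "Q",
    "float": "f", "double": "d",
}

# Pattern for ofp_queue_stats: port_no, pad[2], queue_id, three uint64_t.
pattern = ("!" + structmap["uint16_t"] + structmap["uint8_t"] * 2
           + structmap["uint32_t"] + structmap["uint64_t"] * 3)

# Size matches OFP_ASSERT(sizeof(struct ofp_queue_stats) == 32).
assert struct.calcsize(pattern) == 32
```

The `"!"` prefix selects network byte order with standard sizes, which is why the Python sizes line up with the packed C structs on the wire.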
diff --git a/tools/pylibopenflow/pylib/cheader.py b/tools/pylibopenflow/pylib/cheader.py
new file mode 100644
index 0000000..a23e1eb
--- /dev/null
+++ b/tools/pylibopenflow/pylib/cheader.py
@@ -0,0 +1,434 @@
+"""This module parses and stores a C/C++ header file.
+
+Date June 2009
+Created by ykk
+"""
+import re
+from config import *
+
+class textfile:
+    """Class to handle text file.
+    
+    Date June 2009
+    Created by ykk
+    """
+    def __init__(self, filename):
+        """Initialize filename with no content.
+        """
+        ##Filename
+        if (isinstance(filename, str)):
+            self.filename = []
+            self.filename.append(filename)
+        else:
+            self.filename = filename
+        ##Content
+        self.content = []
+
+    def read(self):
+        """Read file
+        """
+        for filename in self.filename:
+            fileRef = open(filename, "r")
+            for line in fileRef:
+                self.content.append(line)
+            fileRef.close()        
+
+class ctype:
+    """Class to represent types in C
+    """
+    def __init__(self,typename, name=None, expanded=False):
+        """Initialize
+        """
+        ##Name
+        self.name = name
+        ##Type of primitive
+        self.typename = typename
+        ##Expanded
+        self.expanded = expanded
+
+    def expand(self, cheader):
+        """Expand type if applicable
+        """
+        raise NotImplementedError()
+
+    def get_names(self):
+        """Return name of variables
+        """
+        raise NotImplementedError()
+
+class cprimitive(ctype):
+    """Class to represent C primitive
+
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self,typename, name=None):
+        """Initialize and store primitive
+        """
+        ctype.__init__(self, typename, name, True)
+
+    def __str__(self):
+        """Return string representation
+        """
+        if (self.name == None):
+            return self.typename
+        else:
+            return self.typename+" "+str(self.name)
+
+    def expand(self, cheader):
+        """Expand type if applicable
+        """
+        pass
+    
+    def get_names(self):
+        """Return name of variables
+        """
+        namelist = []
+        namelist.append(self.name)
+        return namelist
+
+class cstruct(ctype):
+    """Class to represent C struct
+
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, typename, name=None):
+        """Initialize struct
+        """
+        ctype.__init__(self, typename, name)
+        ##List of members in struct
+        self.members = []
+    
+    def __str__(self):
+        """Return string representation
+        """
+        string = "struct "+self.typename
+        if (self.name != None):
+            string += " "+self.name
+        if (len(self.members) == 0):
+            return string
+        #Add members
+        string +=" {\n"
+        for member in self.members:
+            string += "\t"+str(member)
+            if (not isinstance(member, cstruct)):
+                string += ";"
+            string += "\n"
+        string +="};"
+        return string
+
+    def expand(self, cheader):
+        """Expand struct
+        """
+        self.expanded = True
+        #Expanded each member
+        for member in self.members:
+            if (isinstance(member, cstruct) and 
+                (not member.expanded)):
+                try:
+                    if (not cheader.structs[member.typename].expanded):
+                        cheader.structs[member.typename].expand(cheader)
+                    member.members=cheader.structs[member.typename].members[:]
+                    member.expanded = True
+                except KeyError:
+                    self.expanded=False
+            else:
+                member.expand(cheader)
+
+    def get_names(self):
+        """Return name of variables
+        """
+        namelist = []
+        for member in self.members:
+            if (isinstance(member, cstruct)):
+                tmplist = member.get_names()
+                for item in tmplist:
+                    namelist.append(member.name+"."+item)
+            else:
+                namelist.extend(member.get_names())
+        return namelist
+
+
+class carray(ctype):
+    """Class to represent C array
+
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, typename, name, isPrimitive, size):
+        """Initialize array of object.
+        """
+        ctype.__init__(self, typename, name,
+                       (isinstance(size, int) and isPrimitive))
+        ##Object reference
+        if (isPrimitive):
+            self.object = cprimitive(typename, name)
+        else:
+            self.object = cstruct(typename, name)
+        ##Size of array
+        self.size = size
+        
+    def __str__(self):
+        """Return string representation
+        """
+        return str(self.object)+"["+str(self.size)+"]"
+
+    def expand(self, cheader):
+        """Expand array
+        """
+        self.expanded = True
+        if (not self.object.expanded):
+            if (isinstance(self.object, cstruct)):
+                cheader.structs[self.object.typename].expand(cheader)
+                self.object.members=cheader.structs[self.object.typename].members[:]    
+            else:
+                self.object.expand(cheader)
+
+        if (not isinstance(self.size, int)):
+            val = cheader.get_value(self.size)
+            if (val == None):
+                self.expanded = False
+            else:
+                try:
+                    self.size = int(val)
+                except ValueError:
+                    self.size = val
+                    self.expanded = False
+
+    def get_names(self):
+        """Return name of variables
+        """
+        namelist = []
+        for i in range(0,self.size):
+            namelist.append(self.object.name)
+        return namelist
+
+class ctype_parser:
+    """Class to check c types
+
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize
+        """
+        self.CPrimitives = ["char","signed char","unsigned char",
+                            "short","unsigned short",
+                            "int","unsigned int",
+                            "long","unsigned long",
+                            "long long","unsigned long long",
+                            "float","double",
+                            "uint8_t","uint16_t","uint32_t","uint64_t"]
+
+    def is_primitive(self,type):
+        """Check type given is primitive.
+
+        Return true if valid, and false otherwise
+        """
+        if (type in self.CPrimitives):
+            return True
+        else:
+            return False
+
+    def is_array(self, string):
+        """Check if string declares an array
+        """
+        parts=string.strip().split()
+        if (len(parts) <= 1):
+            return False
+        else:
+            pattern = re.compile(r"\[.*?\]", re.MULTILINE)
+            values = pattern.findall(string)
+            if (len(values) == 1):
+                return True
+            else:
+                return False
+
+    def parse_array(self, string):
+        """Parse array from string.
+        Return occurrence and name.
+        """
+        pattern = re.compile(r"\[.*?\]", re.MULTILINE)
+        namepattern = re.compile(r".*?\[", re.MULTILINE)
+        values = pattern.findall(string)
+        if (len(values) != 1):
+            return (1,string)
+        else:
+            val = values[0][1:-1]
+            try:
+                sizeval = int(val)
+            except ValueError:
+                if (val==""):
+                    sizeval = 0
+                else:
+                    sizeval = val
+            return (sizeval,
+                    namepattern.findall(string)[0].strip()[0:-1])
+
+    def parse_type(self, string):
+        """Parse string and return cstruct or cprimitive.
+        Else return None
+        """
+        parts=string.strip().split()
+        if (len(parts) >= 2):
+            if (parts[0].strip() == "struct"):
+                typename = " ".join(parts[1:-1])
+            else:
+                typename = " ".join(parts[:-1])
+            (size, name) = self.parse_array(parts[-1])
+            if IGNORE_ZERO_ARRAYS and size == 0:
+                return None
+            #Create appropriate type
+            if (size != 1):
+                #Array
+                return carray(typename, name, 
+                              self.is_primitive(typename),size)
+            else:
+                #Not array
+                if IGNORE_OFP_HEADER and typename == "ofp_header":
+                    return None
+                if (self.is_primitive(typename)):
+                    return cprimitive(typename, name)
+                else:
+                    return cstruct(typename, name)
+        else:
+            return None
+
+class cheaderfile(textfile):
+    """Class to handle C header file.
+    
+    Date June 2009
+    Created by ykk
+    """
+    def __init__(self, filename):
+        """Initialize filename and read from file
+        """
+        textfile.__init__(self,filename)
+        self.read()
+        self.__remove_comments()
+        ##Dictionary of macros
+        self.macros = {}
+        self.__get_macros()
+        ##Dictionary of enumerations
+        self.enums = {}
+        self.enum_values = {}
+        self.__get_enum()
+        self.__get_enum_values()
+        ##Dictionary of structs
+        self.structs = {}
+        self.__get_struct()
+
+    def get_enum_name(self, enum, value):
+        """Return name of variable in enum.
+        Return None if no name in the enum has the given value.
+        """
+        for e in self.enums[enum]:
+            if (self.enum_values[e] == value):
+                return e
+
+    def eval_value(self, value):
+        """Evaluate value string
+        """
+        try:
+            return eval(value, self.enum_values)
+        except Exception:
+            return value.strip()
+
+    def get_value(self, name):
+        """Get value for variable name,
+        searching through enums and macros.
+        Return None if the name is not found.
+        """
+        try:
+            return self.enum_values[name]
+        except KeyError:
+            try:
+                return self.macros[name]
+            except KeyError:
+                return None
+
+    def __remove_comments(self):
+        """Remove all comments
+        """
+        fileStr = "".join(self.content)
+        #Remove line continuations (backslash-newline)
+        pattern = re.compile(r"\\.*?\n", re.MULTILINE)
+        fileStr = pattern.sub("",fileStr)
+        pattern = re.compile(r"/\*.*?\*/", re.MULTILINE|re.DOTALL)
+        fileStr = pattern.sub("",fileStr)
+        pattern = re.compile("//.*$", re.MULTILINE)
+        fileStr = pattern.sub("",fileStr)
+        self.content = fileStr.split('\n')
+
+    def __get_struct(self):
+        """Get all structs
+        """
+        typeparser = ctype_parser()
+        fileStr = "".join(self.content)
+        #Remove attribute
+        attrpattern = re.compile(r"} __attribute__ \(\((.+?)\)\);", re.MULTILINE)
+        attrmatches = attrpattern.findall(fileStr)
+        for amatch in attrmatches:
+            fileStr=fileStr.replace(" __attribute__ (("+amatch+"));",";")
+        #Find all structs
+        pattern = re.compile("struct[\w\s]*?{.*?};", re.MULTILINE)
+        matches = pattern.findall(fileStr)
+        #Process each struct
+        namepattern = re.compile("struct(.+?)[ {]", re.MULTILINE)
+        pattern = re.compile("{(.+?)};", re.MULTILINE)
+        for match in matches:
+            structname = namepattern.findall(match)[0].strip()
+            if (len(structname) != 0):
+                values = pattern.findall(match)[0].strip().split(";")
+                cstru = cstruct(structname)
+                for val in values:
+                    presult = typeparser.parse_type(val)
+                    if (presult != None):
+                        cstru.members.append(presult)
+                self.structs[structname] = cstru
+        #Expand all structs
+        for (structname, struct) in self.structs.items():
+            struct.expand(self)
+
+    def __get_enum(self):
+        """Get all enumerations
+        """
+        fileStr = "".join(self.content)
+        #Find all enumerations
+        pattern = re.compile("enum[\w\s]*?{.*?}", re.MULTILINE)
+        matches = pattern.findall(fileStr)
+        #Process each enumeration
+        namepattern = re.compile("enum(.+?){", re.MULTILINE)
+        pattern = re.compile("{(.+?)}", re.MULTILINE)
+        for match in matches:
+            enumname = namepattern.findall(match)[0].strip()
+            values = pattern.findall(match)[0].strip().split(",")
+            #Process each value in enumeration
+            enumList = []
+            value = 0
+            for val in values:
+                if not (val.strip() == ""):
+                    valList = val.strip().split("=")
+                    enumList.append(valList[0].strip())
+                    if (len(valList) == 1):
+                        self.enum_values[valList[0].strip()] = value
+                        value += 1
+                    else:
+                        #Explicit value; later implicit values continue from it
+                        ev = self.eval_value(valList[1].strip())
+                        self.enum_values[valList[0].strip()] = ev
+                        if isinstance(ev, int):
+                            value = ev + 1
+            self.enums[enumname] = enumList
+
+    def __get_enum_values(self):
+        """Patch unresolved enum values
+        """
+        for name,enumval in self.enum_values.items():
+            if isinstance(enumval,str):
+                self.enum_values[name] = self.eval_value(enumval)
+        
+    def __get_macros(self):
+        """Extract macros
+        """
+        for line in self.content:
+            if (line.startswith("#define ")):
+                lineList = line[8:].split()
+                if (len(lineList) >= 2):
+                    self.macros[lineList[0]] = self.eval_value("".join(lineList[1:]))
+                else:
+                    self.macros[lineList[0]] = ""
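
The comment-stripping and struct-matching regexes in `cheaderfile` above can be exercised in isolation. The following is an illustrative sketch using the same patterns on a made-up header fragment, not part of the patch itself:

```python
import re

# Hypothetical C header fragment (not from openflow.h)
header = """
/* block comment */
struct ofp_phy_port {
    uint16_t port_no;   // inline comment
    uint8_t hw_addr[6];
};
"""

# Strip /* ... */ and // ... comments, as __remove_comments does
text = re.sub(r"/\*.*?\*/", "", header, flags=re.DOTALL)
text = re.sub(r"//.*$", "", text, flags=re.MULTILINE)

# Join lines and find struct bodies, as __get_struct does
one_line = "".join(text.split("\n"))
structs = re.findall(r"struct[\w\s]*?{.*?};", one_line)
name = re.findall(r"struct(.+?)[ {]", structs[0])[0].strip()
print(name)  # ofp_phy_port
```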
diff --git a/tools/pylibopenflow/pylib/config.py b/tools/pylibopenflow/pylib/config.py
new file mode 100644
index 0000000..61c903d
--- /dev/null
+++ b/tools/pylibopenflow/pylib/config.py
@@ -0,0 +1,29 @@
+
+# of_message specific controls
+
+# Do not include any arrays marked [0]
+IGNORE_ZERO_ARRAYS = True
+
+# Do not include the ofp_header as a member in any structure
+# This allows messages to be consistently generated as:
+#   explicit header declaration
+#   core member declaration
+#   variable length data
+IGNORE_OFP_HEADER = True
+
+# Generate object equality functions
+GEN_OBJ_EQUALITY = True
+
+# Generate object show functions
+GEN_OBJ_SHOW = True
+
+# Generate lists of enum values
+GEN_ENUM_VALUES_LIST = False
+
+# Generate dictionary of enum strings to values
+GEN_ENUM_DICTIONARY = True
+
+# Auxiliary info: Stuff written to stdout for additional processing
+# Currently generates a (python) map from a class to a list of
+# the data members; used for documentation
+GEN_AUX_INFO = True
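
As a sketch of the message layout these flags aim for (explicit header, core members, then variable-length data), using Python's struct module; field names, formats, and values here are hypothetical:

```python
import struct

# Hypothetical OpenFlow-style header: version, type, length, xid
OFP_HEADER_FMT = "!BBHI"
# Hypothetical core members: two 16-bit fields
CORE_FMT = "!HH"

payload = b"\x01\x02\x03"   # variable-length tail (the [0] array data)
body = struct.pack(CORE_FMT, 10, 20) + payload
length = struct.calcsize(OFP_HEADER_FMT) + len(body)
packet = struct.pack(OFP_HEADER_FMT, 1, 0, length, 0xdeadbeef) + body
print(len(packet))  # 15
```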
diff --git a/tools/pylibopenflow/pylib/cpythonize.py b/tools/pylibopenflow/pylib/cpythonize.py
new file mode 100644
index 0000000..22b5214
--- /dev/null
+++ b/tools/pylibopenflow/pylib/cpythonize.py
@@ -0,0 +1,571 @@
+"""This module generates Python code for C structs.
+
+Date January 2010
+Created by ykk
+"""
+import sys
+import cheader
+import c2py
+import datetime
+import struct
+import re
+from config import *
+
+def _space_to(n, s):
+    """
+    Generate a string of spaces that pads string s to width n.
+    If the length of s is >= n, return a single space.
+    """
+    spaces = n - len(s)
+    if spaces > 0:
+        return " " * spaces
+    return " "
+
+class rules:
+    """Class that specifies rules for pythonization
+
+    Date January 2010
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize rules
+        """
+        ##Default values for members
+        self.default_values = {}
+        #Default values for struct
+        self.struct_default = {}
+        ##What is a tab
+        self.tab = "    "
+        ##Macros to exclude
+        self.excluded_macros = []
+        ##Enforce mapping
+        self.enforced_maps = {}
+
+    def get_enforced_map(self, structname):
+        """Get code to enforce mapping
+        """
+        code = []
+        try:
+            mapping = self.enforced_maps[structname]
+        except KeyError:
+            return None
+        for (x,xlist) in mapping:
+            code.append("if (not (self."+x+" in "+xlist+")):")
+            code.append(self.tab+"return (False, \""+x+" must have values from "+xlist+"\")")
+        return code
+        
+
+    def get_struct_default(self, structname, fieldname):
+        """Get code to set defaults for member struct
+        """
+        try:
+            return "."+fieldname+self.struct_default[(structname, fieldname)]
+        except KeyError:
+            return None
+        
+    def get_default_value(self, structname, fieldname):
+        """Get default value for struct's field
+        """
+        try:
+            return self.default_values[(structname, fieldname)]
+        except KeyError:
+            return 0
+
+    def include_macro(self, name):
+        """Check if macro should be included
+        """
+        return not (name in self.excluded_macros)
+
+class pythonizer:
+    """Class that pythonizes C structures
+
+    Date January 2010
+    Created by ykk
+    """
+    def __init__(self, cheaderfile, pyrules = None, tab="    "):
+        """Initialize
+        """
+        ##Rules
+        if (pyrules is None):
+            self.rules = rules()
+        else:
+            self.rules = pyrules
+        ##What is a tab (same as rules)
+        self.tab = str(tab)
+        self.rules.tab = self.tab
+        ##Reference to C header file
+        self.cheader = cheaderfile
+        ##Reference to cstruct2py
+        self.__c2py = c2py.cstruct2py()
+        ##Code for assertion
+        self.__assertcode = []
+
+    def pycode(self,preamble=None):
+        """Return pythonized code
+        """
+        code = []
+        code.append("import struct")
+        code.append("")
+        if (preamble != None):
+            fileRef = open(preamble,"r")
+            for l in fileRef:
+                code.append(l[:-1])
+            fileRef.close()
+        code.append("# Structure definitions")
+        for name,struct in self.cheader.structs.items():
+            code.extend(self.pycode_struct(struct))
+            code.append("")
+        code.append("# Enumerated type definitions")
+        for name,enum in self.cheader.enums.items():
+            code.extend(self.pycode_enum(name,enum))
+            if GEN_ENUM_DICTIONARY:
+                code.extend(self.pycode_enum_map(name,enum))
+            code.append("")
+        code.append("# Values from macro definitions")
+        for name,macro in self.cheader.macros.items():
+            code.extend(self.pycode_macro(name))
+        code.append("")
+        code.append("# Basic structure size definitions.")
+        if IGNORE_OFP_HEADER:
+            code.append("# Does not include ofp_header members.")
+        if IGNORE_ZERO_ARRAYS:
+            code.append("# Does not include variable length arrays.")
+        struct_keys = sorted(self.cheader.structs.keys())
+        for name in struct_keys:
+            struct = self.cheader.structs[name]
+            code.append(self.pycode_struct_size(name, struct))
+        if GEN_AUX_INFO:
+            self.gen_struct_map()
+
+        return code
+
+    def pycode_enum(self, name, enum):
+        """Return Python array for enum
+        """
+        code=[]
+        code.append(name+" = "+str(enum))
+        ev = []
+        for e in enum:
+            v = self.cheader.get_value(e)
+            ev.append(v)
+            code.append(e+"%s= "%_space_to(36,e)+str(v))
+        if GEN_ENUM_VALUES_LIST:
+            code.append(name+"_values = "+str(ev))
+        return code
+
+    def pycode_enum_map(self, name, enum):
+        """Return Python dictionary for enum
+        """
+        code = []
+        code.append(name+"_map = {")
+        first = 1
+        for e in enum:
+            v = self.cheader.get_value(e)
+            if first:
+                prev_e = e
+                prev_v = v
+                first = 0
+            else:
+                code.append(self.tab + "%s%s: '%s'," %
+                            (prev_v, _space_to(32, str(prev_v)), prev_e))
+                prev_e = e
+                prev_v = v
+        code.append(self.tab + "%s%s: '%s'" %
+                            (prev_v, _space_to(32, str(prev_v)), prev_e))
+        code.append("}")
+        return code
+
+    def pycode_macro(self,name):
+        """Return Python assignment for macro
+        """
+        code = []
+        if (self.rules.include_macro(name)):
+            code.append(name+" = "+str(self.cheader.get_value(name)))
+        return code
+
+    def pycode_struct_size(self, name, struct):
+        """Return one liner giving the structure size in bytes
+        """
+        pattern = '!' + self.__c2py.get_pattern(struct)
+        nbytes = self.__c2py.get_size(pattern)
+        code = name.upper() + "_BYTES = " + str(nbytes)
+        return code
+
+    def pycode_struct(self, struct_in):
+        """Return Python class code given a C struct.
+
+        Return None if struct_in is not a cheader.cstruct.
+        Otherwise return a list of strings coding the Python class.
+        """
+        if (not isinstance(struct_in, cheader.cstruct)):
+            return None
+
+        code=[]
+        self.__assertcode = []
+        code.extend(self.codeheader(struct_in))
+        code.extend(self.codeinit(struct_in))
+        code.append("")
+        code.extend(self.codeassert(struct_in))
+        code.append("")
+        code.extend(self.codepack(struct_in))
+        code.append("")
+        code.extend(self.codeunpack(struct_in))
+        code.append("")
+        code.extend(self.codelen(struct_in))
+        code.append("")
+        if GEN_OBJ_EQUALITY:
+            code.extend(self.codeeq(struct_in))
+            code.append("")
+        if GEN_OBJ_SHOW:
+            code.extend(self.codeshow(struct_in))
+            code.append("")
+        return code
+
+    def codeheader(self, struct_in):
+        """Return Python code for header
+        """
+        code=[]
+        code.append("class "+struct_in.typename+":")
+        code.append(self.tab+"\"\"\"Automatically generated Python class for "+struct_in.typename)
+        code.append("")
+        code.append(self.tab+"Date "+str(datetime.date.today()))
+        code.append(self.tab+"Created by "+self.__module__+"."+self.__class__.__name__)
+        if IGNORE_OFP_HEADER:
+            code.append(self.tab+"Core structure: Messages do not include ofp_header")
+        if IGNORE_ZERO_ARRAYS:
+            code.append(self.tab+"Does not include var-length arrays")
+        code.append(self.tab+"\"\"\"")
+        return code
+
+    def codeinit(self, struct_in):
+        """Return Python code for init function
+        """
+        code = []
+        code.append(self.tab+"def __init__(self):")
+        code.append(self.tab*2+"\"\"\"Initialize")
+        code.append(self.tab*2+"Declare members and default values")
+        code.append(self.tab*2+"\"\"\"")
+        code.extend(self.codemembers(struct_in,self.tab*2+"self"))
+        return code
+
+    def codemembers(self, struct_in, prepend=""):
+        """Return members of class
+        """
+        code = []
+        for member in struct_in.members:
+            if (isinstance(member, cheader.cstruct)):
+                code.append(prepend+"."+member.name+" = "+member.typename+"()")
+                struct_default = self.rules.get_struct_default(struct_in.typename, member.name)
+                if (struct_default != None):
+                    code.append(prepend+struct_default)
+                self.__structassert(member, (prepend+"."+member.name).strip())
+            elif (isinstance(member, cheader.carray)):
+                if (member.typename == "char"):
+                    initvalue = "\"\""
+                    self.__stringassert(member, (prepend+"."+member.name).strip())
+                else:
+                    if (isinstance(member.object, cheader.cprimitive)):
+                        initvalue="0"
+                    else:
+                        initvalue="None"
+                    initvalue=(initvalue+",")*member.size
+                    initvalue="["+initvalue[:-1]+"]"
+                    self.__arrayassert(member, (prepend+"."+member.name).strip())
+                code.append(prepend+"."+member.name+"= "+initvalue)
+            else:
+                code.append(prepend+"."+member.name+" = "+
+                            str(self.rules.get_default_value(struct_in.typename, member.name)))
+        return code
+
+    def gen_struct_map(self, file=None):
+        """Write Python map from class name to list of data members
+        """
+        if not file:
+            file = sys.stdout
+        print >> file
+        print >> file, "# Class to array member map"
+        print >> file, "class_to_members_map = {"
+        for name, struct in self.cheader.structs.items():
+            if not len(struct.members):
+                continue
+            s =  "    '" + name + "'"
+            print >> file, s + _space_to(36, s) + ": ["
+            prev = None
+            for member in struct.members:
+                if re.search('pad', member.name):
+                    continue
+                if prev:
+                    print >> file, _space_to(39, "") + "'" + prev + "',"
+                prev = member.name
+            print >> file, _space_to(39, "") + "'" + prev + "'"
+            print >> file, _space_to(38, "") + "],"
+        print >> file, "    '_ignore' : []"
+        print >> file, "}"
+
+    def __structassert(self, cstruct, cstructname):
+        """Add code to check C struct member type
+        """
+        self.__assertcode.append(self.tab*2+"if(not isinstance("+cstructname+", "+cstruct.typename+")):")
+        self.__assertcode.append(self.tab*3+"return (False, \""+cstructname+" is not class "+cstruct.typename+" as expected.\")")        
+
+    def __addassert(self, prefix):
+        code = []
+        code.append(prefix+"if(not self.__assert()[0]):")
+        code.append(prefix+self.tab+"return None")        
+        return code
+
+    def __stringassert(self, carray, carrayname):
+        """Add code to check C string (char array)
+        """
+        self.__assertcode.append(self.tab*2+"if(not isinstance("+carrayname+", str)):")
+        self.__assertcode.append(self.tab*3+"return (False, \""+carrayname+" is not string as expected.\")")        
+        self.__assertcode.append(self.tab*2+"if(len("+carrayname+") > "+str(carray.size)+"):")      
+        self.__assertcode.append(self.tab*3+"return (False, \""+carrayname+" is not of size "+str(carray.size)+" as expected.\")")
+
+    def __arrayassert(self, carray, carrayname):
+        """Add code to check C array
+        """
+        if (carray.size == 0):
+            return
+        self.__assertcode.append(self.tab*2+"if(not isinstance("+carrayname+", list)):")
+        self.__assertcode.append(self.tab*3+"return (False, \""+carrayname+" is not list as expected.\")")
+        self.__assertcode.append(self.tab*2+"if(len("+carrayname+") != "+str(carray.size)+"):")
+        self.__assertcode.append(self.tab*3+"return (False, \""+carrayname+" is not of size "+str(carray.size)+" as expected.\")") 
+
+    def codeassert(self, struct_in):
+        """Return code for sanity checking
+        """
+        code = []
+        code.append(self.tab+"def __assert(self):")
+        code.append(self.tab*2+"\"\"\"Sanity check")
+        code.append(self.tab*2+"\"\"\"")
+        enforce = self.rules.get_enforced_map(struct_in.typename)
+        if (enforce != None):
+            for line in enforce:
+                code.append(self.tab*2+line)
+        code.extend(self.__assertcode)
+        code.append(self.tab*2+"return (True, None)")
+        return code
+
+    def codepack(self, struct_in, prefix="!"):
+        """Return code that packs the struct
+        """
+        code = []
+        code.append(self.tab+"def pack(self, assertstruct=True):")
+        code.append(self.tab*2+"\"\"\"Pack message")
+        code.append(self.tab*2+"Packs empty arrays used as placeholders")
+        code.append(self.tab*2+"\"\"\"")
+        code.append(self.tab*2+"if(assertstruct):")
+        code.extend(self.__addassert(self.tab*3))
+        code.append(self.tab*2+"packed = \"\"")
+        primPattern = ""
+        primMemberNames = []
+        for member in struct_in.members:
+            if (isinstance(member, cheader.cprimitive)):
+                #Primitives
+                primPattern += self.__c2py.structmap[member.typename]
+                primMemberNames.append("self."+member.name)
+            else:
+                (primPattern, primMemberNames) = \
+                              self.__codepackprimitive(code, primPattern,
+                                                       primMemberNames, prefix)
+                if (isinstance(member, cheader.cstruct)):
+                    #Struct
+                    code.append(self.tab*2+"packed += self."+member.name+".pack()")
+                elif (isinstance(member, cheader.carray) and member.typename == "char"):
+                    #String
+                    code.append(self.tab*2+"packed += self."+member.name+".ljust("+\
+                                str(member.size)+",'\\0')")
+                elif (isinstance(member, cheader.carray) and \
+                      isinstance(member.object, cheader.cprimitive)):
+                    #Array of Primitives
+                    expandedarr = ""
+                    if (member.size != 0):
+                        for x in range(0, member.size):
+                            expandedarr += ", self."+member.name+"["+\
+                                           str(x).strip()+"]"
+                        code.append(self.tab*2+"packed += struct.pack(\""+prefix+\
+                                    self.__c2py.structmap[member.object.typename]*member.size+\
+                                    "\""+expandedarr+")")
+                    else:
+                        code.append(self.tab*2+"for i in self."+member.name+":")
+                        code.append(self.tab*3+"packed += struct.pack(\""+\
+                                    prefix+self.__c2py.get_pattern(member.object)+\
+                                    "\",i)")
+                elif (isinstance(member, cheader.carray) and \
+                      isinstance(member.object, cheader.cstruct)):
+                    #Array of struct
+                    if (member.size != 0):
+                        for x in range(0, member.size):
+                            code.append(self.tab*2+"packed += self."+member.name+"["+\
+                                        str(x).strip()+"].pack()")
+                    else:
+                        code.append(self.tab*2+"for i in self."+member.name+":")
+                        code.append(self.tab*3+"packed += i.pack(assertstruct)")
+        #Clear remaining fields
+        (primPattern, primMemberNames) = \
+                      self.__codepackprimitive(code, primPattern,
+                                               primMemberNames, prefix)
+        code.append(self.tab*2+"return packed")
+        return code
+
+    def __codepackprimitive(self, code, primPattern, primMemberNames, prefix):
+        """Append code that packs accumulated primitives
+        """
+        if (primPattern != ""):
+            #Clear prior primitives
+            code.append(self.tab*2+"packed += struct.pack(\""+\
+                        prefix+primPattern+"\", "+\
+                        str(primMemberNames).replace("'","")[1:-1]+")")
+        return ("",[])
+
+    def codelen(self, struct_in):
+        """Return code that computes the message length
+        """
+        pattern = "!" + self.__c2py.get_pattern(struct_in)
+        code = []
+        code.append(self.tab+"def __len__(self):")
+        code.append(self.tab*2+"\"\"\"Return length of message")
+        code.append(self.tab*2+"\"\"\"")
+        code.append(self.tab*2+"l = "+str(self.__c2py.get_size(pattern)))
+        for member in struct_in.members:
+            if (isinstance(member, cheader.carray) and member.size == 0):
+                if (isinstance(member.object, cheader.cstruct)):
+                    code.append(self.tab*2+"for i in self."+member.name+":")
+                    # Generated classes define __len__, so use len()
+                    code.append(self.tab*3+"l += len(i)")
+                else:
+                    pattern="!"+self.__c2py.get_pattern(member.object)
+                    size=self.__c2py.get_size(pattern)
+                    code.append(self.tab*2+"l += len(self."+member.name+")*"+str(size))
+        code.append(self.tab*2+"return l")
+        return code
+
+    def codeeq(self, struct_in):
+        """Return code implementing equality comparison
+        """
+        code = []
+        code.append(self.tab+"def __eq__(self, other):")
+        code.append(self.tab*2+"\"\"\"Return True if self and other have same values")
+        code.append(self.tab*2+"\"\"\"")
+        code.append(self.tab*2+"if type(self) != type(other): return False")
+        for member in struct_in.members:
+            code.append(self.tab*2 + "if self." + member.name + " !=  other." +
+                        member.name + ": return False")
+        code.append(self.tab*2+"return True")
+        code.append("")
+        code.append(self.tab+"def __ne__(self, other): return not self.__eq__(other)")
+        return code
+
+    def codeshow(self, struct_in):
+        """Return code to print basic members of structure
+        """
+        code = []
+        code.append(self.tab+"def show(self, prefix=''):")
+        code.append(self.tab*2+"\"\"\"" + "Generate string showing basic members of structure")
+        code.append(self.tab*2+"\"\"\"")
+        code.append(self.tab*2+"outstr = ''")
+        for member in struct_in.members:
+            if re.search('pad', member.name):
+                continue
+            elif (isinstance(member, cheader.cstruct)):
+                code.append(self.tab*2 + "outstr += prefix + '" + 
+                            member.name + ": \\n' ")
+                code.append(self.tab*2 + "outstr += self." + member.name + 
+                            ".show(prefix + '  ')")
+            elif (isinstance(member, cheader.carray) and
+                  not isinstance(member.object, cheader.cprimitive)):
+                code.append(self.tab*2 + "outstr += prefix + '" + member.name +
+                            ": \\n' ")
+                code.append(self.tab*2 + "for obj in self." + member.name + ":")
+                code.append(self.tab*3 + "outstr += obj.show(prefix + '  ')")
+            else:
+                code.append(self.tab*2 + "outstr += prefix + '" + member.name +
+                            ": ' + str(self." + member.name + ") + '\\n'")
+        code.append(self.tab*2+"return outstr")
+        return code
+
+    def codeunpack(self, struct_in, prefix="!"):
+        """Return code that unpacks the struct
+        """
+        pattern = self.__c2py.get_pattern(struct_in)
+        structlen = self.__c2py.get_size(prefix + pattern)
+        code = []
+        code.append(self.tab+"def unpack(self, binaryString):")
+        code.append(self.tab*2+"\"\"\"Unpack message")
+        code.append(self.tab*2+"Does not unpack empty arrays used as placeholders")
+        code.append(self.tab*2+"since they can contain heterogeneous types")
+        code.append(self.tab*2+"\"\"\"")
+        code.append(self.tab*2+"if (len(binaryString) < "+str(structlen)+"):")
+        code.append(self.tab*3+"return binaryString")
+        offset = 0
+        primPattern = ""
+        primMemberNames = []
+        for member in struct_in.members:
+            if (isinstance(member, cheader.cprimitive)):
+                #Primitives
+                primPattern += self.__c2py.structmap[member.typename]
+                primMemberNames.append("self."+member.name)
+            else:
+                (primPattern, primMemberNames, offset) = \
+                              self.__codeunpackprimitive(code, offset, primPattern,
+                                                         primMemberNames, prefix)
+                if (isinstance(member, cheader.cstruct)):
+                    #Struct
+                    code.append(self.tab*2+"self."+member.name+\
+                                ".unpack(binaryString["+str(offset)+":])")
+                    pattern = self.__c2py.get_pattern(member)
+                    offset += self.__c2py.get_size(prefix+pattern)
+                elif (isinstance(member, cheader.carray) and member.typename == "char"):
+                    #String
+                    code.append(self.tab*2+"self."+member.name+\
+                                " = binaryString["+str(offset)+":"+\
+                                str(offset+member.size)+"].replace(\"\\0\",\"\")")
+                    offset += member.size
+                elif (isinstance(member, cheader.carray) and \
+                      isinstance(member.object, cheader.cprimitive)):
+                    #Array of Primitives
+                    expandedarr = ""
+                    if (member.size != 0):
+                        arrpattern = self.__c2py.structmap[member.object.typename]*member.size
+                        for x in range(0, member.size):
+                            expandedarr += "self."+member.name+"["+\
+                                           str(x).strip()+"], "
+                        code.append(self.tab*2 + "fmt = '" + prefix+arrpattern + "'")
+                        code.append(self.tab*2 + "start = " + str(offset))
+                        code.append(self.tab*2 + "end = start + struct.calcsize(fmt)")
+                        code.append(self.tab*2 + "("+expandedarr[:-2] + 
+                                    ") = struct.unpack(fmt, binaryString[start:end])")
+                        offset += struct.calcsize(prefix + arrpattern)
+                elif (isinstance(member, cheader.carray) and \
+                      isinstance(member.object, cheader.cstruct)):
+                    #Array of struct
+                    astructlen = self.__c2py.get_size("!"+self.__c2py.get_pattern(member.object))
+                    for x in range(0, member.size):
+                        code.append(self.tab*2+"self."+member.name+"["+str(x)+"]"+\
+                                ".unpack(binaryString["+str(offset)+":])")
+                        offset += astructlen
+        #Clear remaining fields
+        (primPattern, primMemberNames, offset) = \
+                      self.__codeunpackprimitive(code, offset, primPattern,
+                                                 primMemberNames, prefix)
+        code.append(self.tab*2+"return binaryString["+str(structlen)+":]")
+        return code
+
+    def __codeunpackprimitive(self, code, offset, primPattern,
+                              primMemberNames, prefix):
+        """Append code that unpacks accumulated primitives
+        """
+        if (primPattern != ""):
+            #Clear prior primitives
+            code.append(self.tab*2 + "fmt = '" + prefix + primPattern + "'")
+            code.append(self.tab*2 + "start = " + str(offset))
+            code.append(self.tab*2 + "end = start + struct.calcsize(fmt)")
+            if len(primMemberNames) == 1:
+                code.append(self.tab*2 + "(" + str(primMemberNames[0]) + 
+                            ",) = struct.unpack(fmt, binaryString[start:end])")
+            else:
+                code.append(self.tab*2+"("+str(primMemberNames).replace("'","")[1:-1]+
+                            ") = struct.unpack(fmt, binaryString[start:end])")
+
+        return ("",[], offset+struct.calcsize(prefix+primPattern))
+
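
To make the emitted code concrete, here is a hand-written sketch of the kind of class codeinit/codepack/codeunpack generate. The class and field names are hypothetical, and this is not actual generator output:

```python
import struct

class sketch_msg(object):
    """Hand-written analogue of a generated message class"""
    def __init__(self):
        self.type = 0
        self.length = 0
        self.name = ""

    def pack(self):
        # Primitives packed together with a "!" (network order) prefix
        packed = struct.pack("!HH", self.type, self.length)
        # char arrays are NUL-padded to their declared size
        packed += self.name.encode().ljust(8, b"\x00")
        return packed

    def unpack(self, binaryString):
        fmt = "!HH"
        end = struct.calcsize(fmt)
        (self.type, self.length) = struct.unpack(fmt, binaryString[:end])
        self.name = binaryString[end:end+8].replace(b"\x00", b"").decode()
        # Return the remainder, as the generated unpack does
        return binaryString[end+8:]

m = sketch_msg()
m.type, m.length, m.name = 3, 12, "port0"
data = m.pack()
n = sketch_msg()
rest = n.unpack(data)
print(n.name)  # port0
```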
diff --git a/tools/pylibopenflow/pylib/lavi/__init__.py b/tools/pylibopenflow/pylib/lavi/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tools/pylibopenflow/pylib/lavi/__init__.py
diff --git a/tools/pylibopenflow/pylib/lavi/pythonize.py b/tools/pylibopenflow/pylib/lavi/pythonize.py
new file mode 100644
index 0000000..3c150aa
--- /dev/null
+++ b/tools/pylibopenflow/pylib/lavi/pythonize.py
@@ -0,0 +1,74 @@
+"""This module generates Python code for LAVI and messenger
+
+(C) Copyright Stanford University
+Date January 2010
+Created by ykk
+"""
+import cpythonize
+
+class msgrules(cpythonize.rules):
+    """Class that specifies rules for pythonization of messenger
+
+    (C) Copyright Stanford University
+    Date January 2010
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize rules
+        """
+        cpythonize.rules.__init__(self)
+        ##Default values for members
+        #Default values for struct
+        ##Macros to exclude
+        self.excluded_macros = ['MESSAGE_HH__']
+        ##Enforce mapping
+        self.enforced_maps['messenger_msg'] = [ ('type','msg_type') ]
+
+class lavirules(msgrules):
+    """Class that specifies rules for pythonization of LAVI messages
+
+    (C) Copyright Stanford University
+    Date January 2010
+    Created by ykk
+    """
+    def __init__(self, laviheader):
+        """Initialize rules
+        """
+        msgrules.__init__(self)
+        ##Default values for members
+        
+        #Default values for struct
+        self.struct_default[('lavi_poll_message',
+                             'header')] = ".type = "+str(laviheader.get_value('LAVIT_POLL'))
+        self.struct_default[('lavi_poll_stop_message',
+                             'header')] = ".type = "+str(laviheader.get_value('LAVIT_POLL_STOP'))
+        ##Macros to exclude
+        self.excluded_macros = ['LAVI_MSG_HH']
+        ##Enforce mapping
+        self.enforced_maps['lavi_header'] = [ ('type','lavi_type') ]
+
+class msgpythonizer(cpythonize.pythonizer):
+    """Class that pythonizes C messenger messages
+
+    (C) Copyright Stanford University
+    Date January 2010
+    Created by ykk
+    """
+    def __init__(self, msgheader):
+        """Initialize
+        """
+        rules =  msgrules()
+        cpythonize.pythonizer.__init__(self, msgheader, rules)
+        
+class lavipythonizer(cpythonize.pythonizer):
+    """Class that pythonizes C LAVI messages
+
+    (C) Copyright Stanford University
+    Date December 2009
+    Created by ykk
+    """
+    def __init__(self, msgheader):
+        """Initialize
+        """
+        rules =  lavirules(msgheader)
+        cpythonize.pythonizer.__init__(self, msgheader, rules)
diff --git a/tools/pylibopenflow/pylib/of/__init__.py b/tools/pylibopenflow/pylib/of/__init__.py
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/tools/pylibopenflow/pylib/of/__init__.py
diff --git a/tools/pylibopenflow/pylib/of/msg.py b/tools/pylibopenflow/pylib/of/msg.py
new file mode 100644
index 0000000..8617f56
--- /dev/null
+++ b/tools/pylibopenflow/pylib/of/msg.py
@@ -0,0 +1,117 @@
+"""This module parses OpenFlow packets.
+
+Unfortunately, this has to be updated manually for each OpenFlow version
+and packet type.  Ugly.
+
+(C) Copyright Stanford University
+Date October 2009
+Created by ykk
+"""
+class parser:
+    """Parser for OpenFlow packets
+    
+    (C) Copyright Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, messages):
+        """Initialize
+        """
+        ##Internal reference to OpenFlow messages
+        self.__messages = messages
+
+    def describe(self, packet):
+        """Parse OpenFlow packet and return string description
+        """
+        dic = self.__messages.peek_from_front("ofp_header", packet)
+        desc = self.header_describe(dic)
+        if (dic["type"][0] == self.__messages.get_value("OFPT_HELLO")):
+            pass
+        elif (dic["type"][0] == self.__messages.get_value("OFPT_SET_CONFIG")):
+            desc += "\n\t"+self.switch_config_describe(packet)
+        elif (dic["type"][0] == self.__messages.get_value("OFPT_FLOW_MOD")):
+            (fmdic, remaining) = self.__messages.unpack_from_front("ofp_flow_mod", packet)
+            desc += self.flow_mod_describe(fmdic, "\n\t")
+            desc += "\n\twith remaining "+str(len(remaining))+" bytes"
+        else:
+            desc += "\n\tUnparsed..."
+        return desc
+
+    def flow_mod_describe(self, packet, prefix=""):
+        """Parse flow mod and return description
+        """
+        dic = self.__assert_dic(packet, "ofp_flow_mod")
+        if (dic == None):
+            return ""
+        return prefix+\
+               "Flow_mod of command "+self.__messages.get_enum_name("ofp_flow_mod_command", dic["command"][0])+\
+               " idle/hard timeout:"+str(dic["idle_timeout"][0])+"/"+str(dic["hard_timeout"][0])+\
+               self.match_describe(dic, "match.", "\n\t")+\
+               prefix+\
+               "(priority:"+str(dic["priority"][0])+\
+               "/buffer id:"+str(dic["buffer_id"][0])+\
+               "/out port:"+str(dic["out_port"][0])+")"
+
+    def match_describe(self, dic, nameprefix="", prefix=""):
+        """Return description for ofp match
+        """
+        return prefix+"match wildcards:%x" % dic[nameprefix+"wildcards"][0]+\
+               " inport="+str(dic[nameprefix+"in_port"][0])+\
+               prefix+"     "+\
+               " ethertype="+str(dic[nameprefix+"dl_type"][0])+\
+               " vlan="+str(dic[nameprefix+"dl_vlan"][0])+\
+               " "+self.eth_describe(dic[nameprefix+"dl_src"])+"->"+\
+               self.eth_describe(dic[nameprefix+"dl_dst"])+\
+               prefix+"     "+\
+               " ipproto="+str(dic[nameprefix+"nw_proto"][0])+\
+               " "+self.ip_describe(dic[nameprefix+"nw_src"][0])+\
+               "->"+self.ip_describe(dic[nameprefix+"nw_dst"][0])+\
+               prefix+"     "+\
+               " transport "+str(dic[nameprefix+"tp_src"][0])+\
+               "->"+str(dic[nameprefix+"tp_dst"][0])
+               
+    def switch_config_describe(self, packet):
+        """Parse OpenFlow switch config and return description
+        """
+        dic = self.__assert_dic(packet, "ofp_switch_config")
+        if (dic == None):
+            return ""
+        return "with flag "+str(self.__messages.get_enum_name("ofp_config_flags", dic["flags"][0]))+\
+               " and miss send length "+str(dic["miss_send_len"][0])
+        
+    def header_describe(self, packet):
+        """Parse OpenFlow header and return string description
+        """
+        dic = self.__assert_dic(packet, "ofp_header")
+        if (dic == None):
+            return ""
+        return self.__messages.get_enum_name("ofp_type", dic["type"][0])+" packet "+\
+               "(length:"+str(dic["length"][0])+\
+               "/xid:"+str(dic["xid"][0])+")"
+
+    def ip_describe(self, value):
+        """Return dotted-quad string for 32-bit IP address value
+        """
+        octets = []
+        for i in range(0,4):
+            (value, cv) = divmod(value, 256)
+            octets.insert(0, str(cv))
+        return ".".join(octets)
+    
+    def eth_describe(self, etheraddr):
+        """Return string for ethernet address
+        """
+        desc = ""
+        for value in etheraddr:
+            desc += ":"+("%x" % value).zfill(2)
+        return desc[1:]
+
+    def __assert_dic(self, packet, typename):
+        """Assert and ensure dictionary is given
+        """
+        dic = None
+        if (isinstance(packet, str)):
+            dic = self.__messages.peek_from_front(typename, packet)
+        elif (isinstance(packet, dict)):
+            dic = packet
+        return dic
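The address-formatting logic used by `ip_describe` and `eth_describe` above can be exercised on its own; the sketch below re-implements it as free functions so it is not tied to the `messages` class (names here are local to the example):

```python
def ip_describe(value):
    """Render a 32-bit integer as a dotted-quad IP string,
    most-significant byte first."""
    octets = []
    for _ in range(4):
        value, byte = divmod(value, 256)
        octets.insert(0, str(byte))
    return ".".join(octets)

def eth_describe(etheraddr):
    """Render a sequence of 6 byte values as a colon-separated,
    zero-padded MAC address string."""
    return ":".join(("%x" % b).zfill(2) for b in etheraddr)
```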
diff --git a/tools/pylibopenflow/pylib/of/network.py b/tools/pylibopenflow/pylib/of/network.py
new file mode 100644
index 0000000..6765a12
--- /dev/null
+++ b/tools/pylibopenflow/pylib/of/network.py
@@ -0,0 +1,191 @@
+"""This module holds the network.
+
+Copyright(C) 2009, Stanford University
+Date October 2009
+Created by ykk
+"""
+import random
+import openflow
+
+class network:
+    """Class holding information about OpenFlow network
+    """
+    def __init__(self):
+        """Initialize
+        """
+        ##List of switches
+        self.switches = []
+        ##Dictionary of links
+        self.links = {}
+        ##Reference to connections
+        self.connections = openflow.connections()
+
+    def add_switch(self, sw):
+        """Add switch to network
+        """
+        self.switches.append(sw)
+        self.connections.add_connection(sw, sw.connection)
+
+    def add_link(self, link):
+        """Add link to network
+        """
+        self.links.setdefault((link.switch1, link.switch2), []).append(link)
+
+class link:
+    """Class to hold information about link
+
+    Copyright(C) 2009, Stanford University
+    Date November 2009
+    Created by ykk
+    """
+    def __init__(self, switch1, switch2):
+        """Initialize link between specified switches
+        """
+        ##Reference to first switch
+        self.switch1 = switch1
+        ##Reference to second switch
+        self.switch2 = switch2
+
+class switch:
+    """Class holding information about OpenFlow switch
+
+    Copyright(C) 2009, Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, miss_send_len=128,
+                 sock=None, dpid=None, n_buffers=100, n_tables=1,
+                 capability=None):
+        """Initialize switch
+        """
+        ##Socket to controller
+        self.sock = sock
+        ##Datapath id of switch
+        if (dpid != None):
+            self.datapath_id = dpid
+        else:
+            self.datapath_id = random.randrange(1, pow(2,64))
+        ##Number of buffers
+        self.n_buffers = n_buffers
+        ##Number of tables
+        self.n_tables = n_tables
+        ##Capabilities
+        if (isinstance(capability, switch_capabilities)):
+            self.capability = capability
+        else:
+            self.capability = switch_capabilities(miss_send_len)
+        ##Valid Actions
+        self.valid_actions = 0
+        ##List of port
+        self.port = []
+
+class switch_capabilities:
+    """Class to hold switch capabilities
+    """
+    def __init__(self, miss_send_len=128):
+        """Initialize
+
+        Copyright(C) 2009, Stanford University
+        Date October 2009
+        Created by ykk
+        """
+        ##Capabilities support by datapath
+        self.flow_stats = True
+        self.table_stats = True
+        self.port_stats = True
+        self.stp = True
+        self.multi_phy_tx = True
+        self.ip_resam = False
+        ##Switch config
+        self.send_exp = None
+        self.ip_frag = 0
+        self.miss_send_len = miss_send_len
+        ##Valid actions
+        self.act_output = True
+        self.act_set_vlan_vid = True
+        self.act_set_vlan_pcp = True
+        self.act_strip_vlan = True
+        self.act_set_dl_src = True
+        self.act_set_dl_dst = True
+        self.act_set_nw_src = True
+        self.act_set_nw_dst = True
+        self.act_set_tp_src = True
+        self.act_set_tp_dst = True
+        self.act_vendor = False
+
+    def get_capability(self, ofmsg):
+        """Return value for uint32_t capability field
+        """
+        value = 0
+        if (self.flow_stats):
+            value += ofmsg.get_value("OFPC_FLOW_STATS")
+        if (self.table_stats):
+            value += ofmsg.get_value("OFPC_TABLE_STATS")
+        if (self.port_stats):
+            value += ofmsg.get_value("OFPC_PORT_STATS")
+        if (self.stp):
+            value += ofmsg.get_value("OFPC_STP")
+        if (self.multi_phy_tx):
+            value += ofmsg.get_value("OFPC_MULTI_PHY_TX")
+        if (self.ip_resam):
+            value += ofmsg.get_value("OFPC_IP_REASM")
+        return value
+
+    def get_actions(self, ofmsg):
+        """Return value for uint32_t action bitmap field
+
+        Per ofp_switch_features, each supported action type
+        contributes bit (1 << OFPAT_*).
+        """
+        value = 0
+        if (self.act_output):
+            value += (1 << ofmsg.get_value("OFPAT_OUTPUT"))
+        if (self.act_set_vlan_vid):
+            value += (1 << ofmsg.get_value("OFPAT_SET_VLAN_VID"))
+        if (self.act_set_vlan_pcp):
+            value += (1 << ofmsg.get_value("OFPAT_SET_VLAN_PCP"))
+        if (self.act_strip_vlan):
+            value += (1 << ofmsg.get_value("OFPAT_STRIP_VLAN"))
+        if (self.act_set_dl_src):
+            value += (1 << ofmsg.get_value("OFPAT_SET_DL_SRC"))
+        if (self.act_set_dl_dst):
+            value += (1 << ofmsg.get_value("OFPAT_SET_DL_DST"))
+        if (self.act_set_nw_src):
+            value += (1 << ofmsg.get_value("OFPAT_SET_NW_SRC"))
+        if (self.act_set_nw_dst):
+            value += (1 << ofmsg.get_value("OFPAT_SET_NW_DST"))
+        if (self.act_set_tp_src):
+            value += (1 << ofmsg.get_value("OFPAT_SET_TP_SRC"))
+        if (self.act_set_tp_dst):
+            value += (1 << ofmsg.get_value("OFPAT_SET_TP_DST"))
+        return value
+
+class port:
+    """Class to hold information about port
+    
+    Copyright(C) 2009, Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, port_no, stp=(2 << 8), hw_addr=None, name=""):
+        """Initialize
+        """
+        ##Port properties
+        self.port_no = port_no
+        if (hw_addr != None):
+            self.hw_addr = hw_addr
+        else:
+            self.hw_addr = random.randrange(1, pow(2,48))
+        self.name = name
+        ##Port config
+        self.port_down = False
+        self.no_stp = False
+        self.no_recv = False
+        self.no_recv_stp = False
+        self.no_flood = False
+        self.no_fwd = False
+        self.no_packet_in = False
+        #Port state
+        self.link_down = False
+        self.stp = stp
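`switch_capabilities.get_capability` above builds the uint32 capability field by summing the OFPC_* flag values fetched from the parsed header file. A self-contained sketch of that bitmap construction follows; the real values come from openflow.h at runtime, so the OFPC_* constants below are assumptions (they mirror the OpenFlow 1.0 enum but should not be treated as authoritative):

```python
# Hypothetical OFPC_* values; in pylibopenflow these are read from
# openflow.h via the messages class, not hard-coded.
OFPC = {
    "OFPC_FLOW_STATS": 1 << 0,
    "OFPC_TABLE_STATS": 1 << 1,
    "OFPC_PORT_STATS": 1 << 2,
    "OFPC_STP": 1 << 3,
    "OFPC_IP_REASM": 1 << 5,
}

def capability_bits(flags):
    """OR together the OFPC_* value of each enabled capability.

    flags maps capability names to booleans, analogous to the
    boolean attributes on switch_capabilities."""
    value = 0
    for name, enabled in flags.items():
        if enabled:
            value |= OFPC[name]
    return value
```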
diff --git a/tools/pylibopenflow/pylib/of/pythonize.py b/tools/pylibopenflow/pylib/of/pythonize.py
new file mode 100644
index 0000000..5a28818
--- /dev/null
+++ b/tools/pylibopenflow/pylib/of/pythonize.py
@@ -0,0 +1,58 @@
+"""This module generates Python code for OpenFlow structs.
+
+(C) Copyright Stanford University
+Date December 2009
+Created by ykk
+"""
+import cpythonize
+from config import *
+
+class rules(cpythonize.rules):
+    """Class that specifies rules for pythonization of OpenFlow messages
+
+    (C) Copyright Stanford University
+    Date December 2009
+    Created by ykk
+    """
+    def __init__(self, ofmsg):
+        """Initialize rules
+        """
+        cpythonize.rules.__init__(self)
+        ##Reference to ofmsg
+        self.__ofmsg = ofmsg
+        ##Default values for members
+        self.default_values[('ofp_header','version')] = self.__ofmsg.get_value('OFP_VERSION')
+        self.default_values[('ofp_switch_config',\
+                             'miss_send_len')] = self.__ofmsg.get_value('OFP_DEFAULT_MISS_SEND_LEN')
+        for x in ['ofp_flow_mod','ofp_flow_expired','ofp_flow_stats']:
+            self.default_values[(x,'priority')] = self.__ofmsg.get_value('OFP_DEFAULT_PRIORITY')
+        #Default values for struct
+        self.default_values[('ofp_packet_out','buffer_id')] = 0xffffffff
+        self.struct_default[('ofp_flow_mod',
+                             'header')] = ".type = OFPT_FLOW_MOD"
+#                             'header')] = ".type = "+str(self.__ofmsg.get_value('OFPT_FLOW_MOD'))
+        ##Macros to exclude
+        self.excluded_macros = ['OFP_ASSERT(EXPR)','OFP_ASSERT(_EXPR)','OFP_ASSERT',
+                                'icmp_type','icmp_code','OFP_PACKED',
+                                'OPENFLOW_OPENFLOW_H']
+        ##Enforce mapping
+        if GEN_ENUM_VALUES_LIST:
+            self.enforced_maps['ofp_header'] = [ ('type','ofp_type_values') ]
+        elif GEN_ENUM_DICTIONARY:
+            self.enforced_maps['ofp_header'] = \
+                [ ('type','ofp_type_map.keys()') ]
+        
+class pythonizer(cpythonize.pythonizer):
+    """Class that pythonizes C structures of OpenFlow messages
+
+    (C) Copyright Stanford University
+    Date December 2009
+    Created by ykk
+    """
+    def __init__(self, ofmsg):
+        """Initialize
+        """
+        ofrules =  rules(ofmsg)
+        cpythonize.pythonizer.__init__(self, ofmsg, ofrules)
+        ##Reference to OpenFlow message class
+        self.__ofmsg = ofmsg
diff --git a/tools/pylibopenflow/pylib/of/simu.py b/tools/pylibopenflow/pylib/of/simu.py
new file mode 100644
index 0000000..508b076
--- /dev/null
+++ b/tools/pylibopenflow/pylib/of/simu.py
@@ -0,0 +1,144 @@
+"""This module simulates the network.
+
+Copyright(C) 2009, Stanford University
+Date November 2009
+Created by ykk
+"""
+import openflow
+import output
+import of.msg
+import of.network
+
+class network(of.network.network):
+    """Class to simulate OpenFlow network
+
+    Copyright(C) 2009, Stanford University
+    Date November 2009
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize network
+        """
+        of.network.network.__init__(self)
+        ##Name to use for output
+        self.name = self.__class__.__name__+str(id(self))
+
+class link(of.network.link):
+    """Class to simulate link
+
+    Copyright(C) 2009, Stanford University
+    Date November 2009
+    Created by ykk
+    """
+    def __init__(self, switch1, switch2, isUp=True):
+        """Initialize link
+        """
+        of.network.link.__init__(self, switch1, switch2)
+        ##Name to use for output
+        self.name = self.__class__.__name__+str(id(self))
+        ##Indicate if link is up
+        self.isUp = isUp
+
+class switch(of.network.switch):
+    """Class to simulate OpenFlow switch
+
+    Copyright(C) 2009, Stanford University
+    Date November 2009
+    Created by ykk
+    """
+    def __init__(self,  messages, controller, port, miss_send_len=128,
+                 dpid=None, n_buffers=100, n_tables=1,
+                 capability=None, parser=None, connection=None):
+        """Initialize switch
+        """
+        of.network.switch.__init__(self,  miss_send_len,
+                                   None, dpid, n_buffers, n_tables,
+                                   capability)
+        ##Name to use for output
+        self.name = self.__class__.__name__+str(id(self))
+        ##Reference to OpenFlow messages
+        self.__messages = messages
+        ##Reference to connection
+        self.connection = openflow.tcpsocket(messages, controller, port)
+        self.sock = self.connection.sock
+        ##Reference to Parser
+        self.parser = None
+        if (parser == None):
+            self.parser = of.msg.parser(messages)
+        else:
+            self.parser = parser
+
+    def receive_openflow(self, packet):
+        """Switch receive OpenFlow packet, and respond accordingly
+        """
+        dic = self.__messages.peek_from_front("ofp_header", packet)
+        if (dic["type"][0] == self.__messages.get_value("OFPT_HELLO")):
+            output.dbg("Receive hello", self.name)
+        elif (dic["type"][0] == self.__messages.get_value("OFPT_ECHO_REQUEST")):
+            self.reply_echo(dic["xid"][0])
+        elif (dic["type"][0] == self.__messages.get_value("OFPT_FEATURES_REQUEST")):
+            self.reply_features(dic["xid"][0])
+        elif (dic["type"][0] == self.__messages.get_value("OFPT_FLOW_MOD")):
+            self.handle_flow_mod(packet)
+        else:
+            output.dbg("Unprocessed message "+self.parser.header_describe(dic),
+                       self.name)
+
+    def send_hello(self):
+        """Send hello
+        """
+        self.connection.structsend("ofp_hello",
+                                   0, self.__messages.get_value("OFPT_HELLO"),
+                                   0, 0)
+        output.dbg("Send hello",self.name)
+
+    def send_packet(self, inport, bufferid=None, packet="", xid=0, reason=None):
+        """Send packet-in message
+
+        Defaults: reason is OFPR_NO_MATCH, bufferid is 0xFFFFFFFF,
+        and the packet is empty
+        """
+        if (reason == None):
+            reason = self.__messages.get_value("OFPR_NO_MATCH")
+        if (bufferid == None):
+            bufferid = 0xFFFFFFFF
+        pktin = self.__messages.pack("ofp_packet_in",
+                                     0, self.__messages.get_value("OFPT_PACKET_IN"),
+                                     0, xid, #header
+                                     bufferid, len(packet),
+                                     inport, reason, 0)
+        self.connection.structsend_raw(pktin+packet)
+        output.dbg("Send packet ",self.name)
+
+    def send_echo(self, xid=0):
+        """Send echo
+        """
+        self.connection.structsend_xid("ofp_header",
+                                       0, self.__messages.get_value("OFPT_ECHO_REQUEST"),
+                                       0, xid)
+        output.dbg("Send echo", self.name)
+
+    def reply_echo(self, xid):
+        """Reply to echo request
+        """
+        self.connection.structsend_xid("ofp_header",
+                                       0, self.__messages.get_value("OFPT_ECHO_REPLY"),
+                                       0, xid)                                 
+        output.dbg("Reply echo of xid:"+str(xid),self.name)
+
+    def reply_features(self, xid):
+        """Reply to feature request
+        """
+        self.connection.structsend_xid("ofp_switch_features",
+                                       0, self.__messages.get_value("OFPT_FEATURES_REPLY"),
+                                       0, xid,
+                                       self.datapath_id, self.n_buffers,
+                                       self.n_tables,0,0,0,
+                                       self.capability.get_capability(self.__messages),
+                                       self.capability.get_actions(self.__messages))
+        output.dbg("Replied features request of xid "+str(xid), self.name)
+        
+    def handle_flow_mod(self, packet):
+        """Handle flow mod: just print it here
+        """
+        output.dbg(self.parser.flow_mod_describe(packet), self.name)
+        
diff --git a/tools/pylibopenflow/pylib/openflow.py b/tools/pylibopenflow/pylib/openflow.py
new file mode 100644
index 0000000..25945b9
--- /dev/null
+++ b/tools/pylibopenflow/pylib/openflow.py
@@ -0,0 +1,336 @@
+"""This module exports OpenFlow protocol to Python.
+
+(C) Copyright Stanford University
+Date October 2009
+Created by ykk
+"""
+import c2py
+import cheader
+import os
+import socket
+import select
+import struct
+import sys
+import time
+
+class messages(cheader.cheaderfile,c2py.cstruct2py,c2py.structpacker):
+    """Class to handle OpenFlow messages
+
+    (C) Copyright Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, openflow_headerfile=None):
+        """Initialize with OpenFlow header file
+
+        If filename is not provided, check the environment
+        variable PYLIB_OPENFLOW_HEADER and search for openflow.h
+        """
+        if (openflow_headerfile != None):
+            cheader.cheaderfile.__init__(self, openflow_headerfile)
+        else:
+            #Check environment variable
+            path = os.getenv("PYLIB_OPENFLOW_HEADER")
+            if not path:
+                print "PYLIB_OPENFLOW_HEADER is not set in environment"
+                sys.exit(2)
+            cheader.cheaderfile.__init__(self, path+"/openflow.h")
+        #Initialize cstruct2py
+        c2py.cstruct2py.__init__(self)
+        #Initalize packet
+        c2py.structpacker.__init__(self, "!")
+        ##Cached patterns
+        self.patterns={}
+        for (cstructname, cstruct) in self.structs.items():
+            self.patterns[cstructname] = self.get_pattern(cstruct)
+
+    def get_size(self, ctype):
+        """Get size for ctype or name of type.
+        Return None if ctype is not expanded or
+        type with name is not found.
+        """
+        pattern = self.get_pattern(ctype)
+        if (pattern != None):
+            return c2py.cstruct2py.get_size(self,pattern)
+    
+    def get_pattern(self,ctype):
+        """Get pattern string for ctype or name of type.
+        Return None if ctype is not expanded or
+        type with name is not found.
+        """
+        if (isinstance(ctype, str)):
+            #Is name
+            return self.patterns[ctype]
+        else:
+            return c2py.cstruct2py.get_pattern(self, ctype)
+        
+    def pack(self, ctype, *arg):
+        """Pack packet according to ctype or the name of type provided.
+        Return the packed struct.
+        """
+        if (isinstance(ctype, str)):
+            return struct.pack(self.prefix+self.patterns[ctype], *arg)
+        else:
+            return c2py.structpacker.pack(self, ctype, *arg)
+
+    def peek_from_front(self, ctype, binaryString, returnDictionary=True):
+        """Unpack packet using front of the packet,
+        according to ctype or the name of ctype provided.
+
+        Return dictionary of values indexed by arg name, 
+        if ctype is known struct/type and returnDictionary is True,
+        else return array of data unpacked.
+        """
+        if (isinstance(ctype,str)):
+            data = c2py.structpacker.peek_from_front(self,
+                                                     self.patterns[ctype],
+                                                     binaryString,
+                                                     returnDictionary)
+            return self.data2dic(self.structs[ctype], data)
+        else:
+            return c2py.structpacker.peek_from_front(self,
+                                                     ctype,
+                                                     binaryString,
+                                                     returnDictionary)
+        
+    def unpack_from_front(self, ctype, binaryString, returnDictionary=True):
+        """Unpack packet using front of packet,
+        according to ctype or the name of ctype provided.
+
+        Return (dictionary of values indexed by arg name, 
+        remaining binary string) if ctype is known struct/type
+        and returnDictionary is True,
+        else return (array of data unpacked, remaining binary string).
+        """
+        if (isinstance(ctype,str)):
+            (data, remaining) = c2py.structpacker.unpack_from_front(self,
+                                                                    self.patterns[ctype],
+                                                                    binaryString,
+                                                                    returnDictionary)
+            return (self.data2dic(self.structs[ctype], data), remaining)
+        else:
+            return c2py.structpacker.unpack_from_front(self,
+                                                       ctype,
+                                                       binaryString,
+                                                       returnDictionary)
+
+class connection:
+    """Class to hold a connection.
+
+    (C) Copyright Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, messages, sock=None):
+        """Initialize
+        """
+        ##Reference to socket
+        self.sock = sock
+        ##Internal reference to OpenFlow messages
+        self._messages = messages
+        ##Buffer
+        self.buffer = ""
+        ##Header length for OpenFlow
+        self.__header_length = self._messages.get_size("ofp_header")
+
+    def send(self, msg):
+        """Send bare message (given as binary string)
+        """
+        raise NotImplementedError()
+
+    def structsend(self, ctype, *arg):
+        """Build and send message.
+        """
+        self.send(self._messages.pack(ctype, *arg))
+
+    def receive(self, maxlength=1024):
+        """Receive raw in non-blocking way.
+
+        Return buffer
+        """
+        if (select.select([self.sock],[],[],0)[0]):
+            self.buffer += self.sock.recv(maxlength)
+        return self.buffer
+
+    def buffer_has_msg(self):
+        """Check if buffer has a complete message
+        """
+        #Check at least ofp_header is received
+        if (len(self.buffer) < self.__header_length):
+            return False
+        values = self._messages.peek_from_front("ofp_header", self.buffer)
+        return (len(self.buffer) >= values["length"][0])
+
+    def get_msg(self):
+        """Get message from current buffer
+        """
+        if (self.buffer_has_msg()):
+            values = self._messages.peek_from_front("ofp_header", self.buffer)
+            msg = self.buffer[:values["length"][0]]
+            self.buffer = self.buffer[values["length"][0]:]
+            return msg
+        else:
+            return None
+
+    def msgreceive(self, blocking=False, pollInterval=0.001):
+        """Receive OpenFlow message.
+
+        If non-blocking, can return None.
+        """
+        self.receive()
+        if (self.buffer_has_msg()):
+            return self.get_msg()
+        if (blocking):
+            while (not self.buffer_has_msg()):
+                time.sleep(pollInterval)
+                self.receive()
+        return self.get_msg()
+
+class safeconnection(connection):
+    """OpenFlow connection with safety checks
+    
+    (C) Copyright Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, messages, sock=None, version=None,
+                 xidstart = 0, autoxid=True):
+        """Initialize with OpenFlow version.
+        """
+        connection.__init__(self, messages, sock)
+        ##OpenFlow version
+        if (version != None):
+            self.version = version
+        else:
+            self.version = messages.get_value("OFP_VERSION")
+        ##xid Counter
+        self.nextxid = xidstart
+        ##Automatic xid
+        self.autoxid = autoxid
+        ##Number of packets for which to skip automatic xid
+        self.skipautoxid = 0
+
+    def skip_auto_xid(self, n):
+        """Skip automatic xid for the next n packets
+        """
+        self.skipautoxid = n
+
+    def structsend_xid(self, ctype, *arg):
+        """Build and send message, populating header automatically.
+        Type and xid of the message are not populated.
+        """
+        self.skipautoxid+=1
+        self.structsend(ctype, *arg)
+
+    def structsend(self, ctype, *arg):
+        """Build and send message, populating header automatically.
+        Type of message is not populated
+        """
+        msg = self._messages.pack(ctype, *arg)
+        self.structsend_raw(msg)
+        
+    def structsend_raw(self, msg):
+        """Check ofp_header and ensure correctness before sending.
+        """
+        (dic, remaining) = self._messages.unpack_from_front("ofp_header", msg)
+        #Amend header
+        if (self.version != None):
+            dic["version"][0] = self.version
+        if (self.autoxid and (self.skipautoxid == 0)):
+            dic["xid"][0] = self.nextxid
+            self.nextxid+=1
+        if (self.skipautoxid != 0):
+            self.skipautoxid-=1
+        dic["length"][0] = len(remaining)+8
+        #Send message
+        self.send(self._messages.pack("ofp_header",
+                                      dic["version"][0],
+                                      dic["type"][0],
+                                      dic["length"][0],
+                                      dic["xid"][0])+\
+                  remaining)
+
+class tcpsocket(safeconnection):
+    """Class to hold connection
+
+    (C) Copyright Stanford University
+    Date October 2009
+    Created by ykk
+    """
+    def __init__(self, messages, host, port):
+        """Initialize TCP socket to host and port
+        """
+        safeconnection.__init__(self, messages)
+        ##Reference to socket
+        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+        self.sock.connect((host, port))
+        self.sock.setblocking(False)
+        self.sock.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, 0)
+
+    def __del__(self):
+        """Terminate connection
+        """
+        self.sock.shutdown(socket.SHUT_WR)
+        self.sock.close()
+
+    def send(self, msg):
+        """Send raw message (binary string)
+        """
+        self.sock.sendall(msg)
+
+class connections:
+    """Class to hold multiple connections
+    
+    (C) Copyright Stanford University
+    Date November 2009
+    Created by ykk
+    """
+    def __init__(self):
+        """Initialize
+        """
+        ##List of sockets
+        self.__sockets = []
+        ##Dictionary of sockets to connection
+        self.__connections = {}
+        
+    def add_connection(self, reference, connect):
+        """Add connection with opaque reference object
+        """
+        if (not isinstance(connect,connection)): 
+            raise RuntimeError("Connection must be openflow.connection!")
+        self.__sockets.append(connect.sock)
+        self.__connections[connect.sock] = (reference, connect)
+
+    def receive(self, maxlength=1024):
+        """Receive raw in non-blocking way
+        """
+        read_ready = select.select(self.__sockets,[],[],0)[0]
+        for sock in read_ready:
+            self.__connections[sock][1].receive(maxlength)
+        
+    def has_msg(self):
+        """Check if any of the connections has a message
+
+        Return (reference,connection) with message
+        """
+        for sock, refconnect in self.__connections.items():
+            if (refconnect[1].buffer_has_msg()):
+                return refconnect
+        return None
+
+    def msgreceive(self, blocking=False, pollInterval=0.001):
+        """Receive OpenFlow message.
+
+        If non-blocking and no message is ready, return (None, None).
+        """
+        self.receive()
+        c = self.has_msg()
+        if (blocking):
+            while (c == None):
+                time.sleep(pollInterval)
+                self.receive()
+                c = self.has_msg()
+        elif (c == None):
+            return (None, None)
+        return (c[0],c[1].get_msg())
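The `connections.receive` pattern above (a list of sockets, a per-socket buffer, and a zero-timeout `select()` to find readable sockets without blocking) can be sketched in isolation. This is a hedged illustration, not the framework's code: `socketpair()` stands in for real switch connections so the sketch is self-contained.

```python
import select
import socket

# Two fake "connections"; only one will have data pending.
a_local, a_remote = socket.socketpair()
b_local, b_remote = socket.socketpair()
sockets = [a_local, b_local]
buffers = {a_local: b"", b_local: b""}  # per-socket receive buffer

a_remote.sendall(b"hello")  # data pending on connection "a" only

# Timeout 0 makes select() a non-blocking poll, as in connections.receive().
read_ready = select.select(sockets, [], [], 0)[0]
for sock in read_ready:
    buffers[sock] += sock.recv(1024)

for s in (a_local, a_remote, b_local, b_remote):
    s.close()
```

Only the socket with pending data ends up with bytes in its buffer; the other buffer stays empty, which is what lets `has_msg` later scan buffers instead of blocking on reads.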
diff --git a/tools/pylibopenflow/pylib/output.py b/tools/pylibopenflow/pylib/output.py
new file mode 100644
index 0000000..64df4f5
--- /dev/null
+++ b/tools/pylibopenflow/pylib/output.py
@@ -0,0 +1,85 @@
+"""This module implements output printing.
+
+Output is divided into 4 levels and
+can be configured for different verbosity.
+
+Copyright(C) 2009, Stanford University
+Date August 2009
+Created by ykk
+"""
+
+##Various output modes
+MODE = {}
+MODE["ERR"] = 0
+MODE["WARN"] = 1
+MODE["INFO"] = 2
+MODE["DBG"] = 3
+
+#Global mode
+global output_mode
+output_mode = None
+
+def set_mode(msg_mode, who=None):
+    """Set the message mode for who
+    If who is None, set global mode
+    """
+    global output_mode
+    if (output_mode == None):
+        output_mode = {}
+        output_mode["global"] = MODE["WARN"]
+        output_mode["DBG"] = []
+        output_mode["INFO"] = []
+        output_mode["WARN"] = []
+
+    #Set global mode
+    if (who == None):
+        output_mode["global"] = MODE[msg_mode]
+        return
+    
+    #Individual mode
+    if (msg_mode == "ERR"):
+        return
+    for mode in ["WARN","INFO","DBG"]:
+        if (not (who in output_mode[mode])):
+            output_mode[mode].append(who)
+        if (msg_mode == mode):
+            return
+    
+def output(msg_mode, msg, who=None):
+    """Print message
+    """
+    global output_mode
+    if (output_mode == None):
+        raise RuntimeError("Output mode is not set")
+
+    #Indicate who string
+    if (who == None):
+        whostr = ""
+    else:
+        whostr = who+":"
+
+    #Print output if the global verbosity allows it, or if who
+    #is explicitly enabled for this message mode
+    if ((MODE[msg_mode] <= output_mode["global"]) or
+        (who in output_mode[msg_mode])):
+        print msg_mode.ljust(4, ' ')+"|"+whostr+msg
+        
+def err(msg, who=None):
+    """Print error messages
+    """
+    output("ERR", msg, who)
+
+def warn(msg, who=None):
+    """Print warning messages
+    """
+    output("WARN", msg, who)
+
+def info(msg, who=None):
+    """Print informational messages
+    """
+    output("INFO", msg, who)
+
+def dbg(msg, who=None):
+    """Print debug messages
+    """
+    output("DBG", msg, who)
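The filtering rule in `output` boils down to: print a message when its level is at or below the global verbosity, or when its sender was explicitly opted into that level via `set_mode`. A standalone Python 3 sketch of that rule (the module above is Python 2) might look like this; `should_print` and the `"parser"` module name are illustrative, not part of the library:

```python
# Levels mirror output.py: lower number = more severe.
MODE = {"ERR": 0, "WARN": 1, "INFO": 2, "DBG": 3}
output_mode = {"global": MODE["WARN"], "WARN": [], "INFO": [], "DBG": []}

def should_print(msg_mode, who=None):
    """Return True when a message at msg_mode from who would be shown."""
    if MODE[msg_mode] <= output_mode["global"]:
        return True  # severe enough for the global threshold (ERR always is)
    return msg_mode != "ERR" and who in output_mode[msg_mode]

output_mode["DBG"].append("parser")  # opt "parser" into debug output

print(should_print("ERR"))            # True: errors always pass
print(should_print("DBG"))            # False: global level is WARN
print(should_print("DBG", "parser"))  # True: per-module override
```

This mirrors why `set_mode("DBG", "parser")` enables debug output for one module while the rest of the program stays at the default WARN verbosity.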