Twisted as Your WSGI Container

October 29, 2016

Introduction

WSGI is a great standard. It has been amazingly successful. To show how successful it is, consider life before WSGI. In the beginning, there was CGI. CGI was just a standard for how a web server runs a process: which environment variables to pass, and so forth. To write a web-based application, people wrote programs that conformed to CGI. At that time, Apache’s only competition was commercial web servers, and CGI allowed you to write applications that ran on both. However, starting a process for each request was slow and wasteful.

For Python applications, people wrote mod_python for Apache. It allowed people to write Python programs that ran inside the Apache process and used Apache’s API directly to access the HTTP request details. Since Apache was the only server that mattered, that was fine. However, as more servers arrived, a standard was needed. mod_wsgi was originally a way to run the same Django application on many servers. As a side effect, it also gave the second wave of Python web application frameworks (Paste, Flask and more) something to run on. To make life easier, Python included wsgiref, a module implementing a single-threaded, single-process, blocking web server that speaks the WSGI protocol.

Development

Some web frameworks come with their own development web servers that will run their WSGI apps. Some use wsgiref. Almost always, those options are carefully documented as “just for development use, do not use in production.” Wouldn’t it be nice to use the same WSGI container in both development and production, eliminating one potential source of hard-to-reproduce bugs?

For ease of use, it should probably be written in Python. Luckily, “twist web --wsgi” is just such a server. To showcase how easy it is to use, twist-wsgi shows commands that run Django, Flask, Pyramid and Bottle apps as easily as each framework’s built-in development server does.
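
To make that concrete, here is a minimal sketch of a WSGI application; the module name hello and the callable name application are made up purely for illustration.

# hello.py -- a hypothetical module, used only for illustration
def application(environ, start_response):
    # The simplest possible WSGI callable: ignore the request details
    # and always answer 200 with a plain-text greeting.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a WSGI application\n']

Assuming this lives in an importable hello.py, something along the lines of “twist web --wsgi hello.application” should serve it, in development and in production alike.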

Production

In production, using the Twisted WSGI container comes with several advantages. Production-grade SSL support using pyOpenSSL and cryptography allows eliminating “SSL terminators”, removing one moving piece from the equation. With third-party extensions like txsni and txacme, it allows modern support for “easy SSL”. The built-in HTTP/2 support, available since Twisted 16.3, handles parallel requests from modern browsers better.

The Twisted web server also has a built-in static file server, allowing the elimination of a “front-end” web server: the same process serves static files itself and passes dynamic requests on to the application.
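
As a hedged sketch of what that single process could look like programmatically (not via the twist command line), assuming the hypothetical hello.application callable from the sketch above and a ./static directory that exists only for this example:

from twisted.internet import reactor
from twisted.web import server, static, wsgi
from twisted.web.resource import Resource

from hello import application  # the hypothetical WSGI callable sketched earlier

root = Resource()
# Serve files from ./static under /static, and hand everything under /app
# to the WSGI application, all from the same Twisted process.
root.putChild(b"static", static.File("./static"))
root.putChild(b"app", wsgi.WSGIResource(reactor, reactor.getThreadPool(), application))

reactor.listenTCP(8080, server.Site(root))
reactor.run()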

Twisted is also not limited to web serving. As a full-stack network framework, it supports scheduling repeated tasks, running processes and speaking other protocols (for example, a side channel for online control). Last but not least, all of this integrates in one language: Python. As an example of an integrated solution, the Frankensteinian monster plugin showcases a web application combining four frameworks, a static file server and a scheduled task that updates a file.

While the goal is not to encourage using four web frameworks and a couple of side services in order to greet the user and tell them what time it is, it is nice that if the need strikes this can all be integrated into one process in one language, without the need to remember how to spell “every 4 seconds” in cron or how to quote a string in the nginx configuration file.


Post-Object-Oriented Design

September 15, 2016

In the beginning came the so-called “procedural” style. Data was data, and behavior, implemented as procedures, was a separate thing. Object-oriented design is the idea of bundling data and behavior into a single thing, usually called a “class”. In return for having to tie the two together, the thought went, we would get polymorphism.

Polymorphism is pretty neat. We send different objects the same message, for example, “turn yourself into a string”, and they respond appropriately — each according to their uniquely defined behavior.

But what if we could separate the data and behavior, and still get polymorphism? This is the idea behind post-object-oriented design.

In Python, we achieve this with two external packages. One is the “attr” package. This package gives us a useful way to define bundles of data that still exhibit the minimal behavior we do want: initialization, string representation, hashing and more.

The other is the “singledispatch” package (available as functools.singledispatch in Python 3.4+).

import attr
import singledispatch

In order to be specific, we imagine a simple protocol. The low-level details of the protocol do not concern us, but we assume some lower-level parsing allows us to communicate in dictionaries back and forth (perhaps serialized/deserialized using JSON).

Our protocol is one to send changes to a map. The only two messages are “set”, to set a key to a given value, and “delete”, to delete a key.

messages = (
    {
        'type': 'set',
        'key': 'language',
        'value': 'python'
    },
    {
        'type': 'delete',
        'key': 'human'
    }
)

We want to represent those as attr-based classes.

@attr.s
class Set(object):
    key = attr.ib()
    value = attr.ib()

@attr.s
class Delete(object):
    key = attr.ib()

print(Set(key='language', value='python'))
print(Delete(key='human'))
Set(key='language', value='python')
Delete(key='human')

When incoming dictionaries arrive, we want to convert them to the logical classes. In this example, the code could hardly be simpler, mostly because the protocol is simple.

def from_dict(dct):
    tp = dct.pop('type')
    name_to_klass = dict(set=Set, delete=Delete)
    try:
        klass = name_to_klass[tp]
    except KeyError:
        raise ValueError('unknown type', tp)
    return klass(**dct)

Note how we take advantage of the fact that attr-based classes accept correctly-named keyword arguments.

from_dict(dict(type='set', key='name', value='myname')), from_dict(dict(type='delete', key='data'))
(Set(key='name', value='myname'), Delete(key='data'))

But this was easy! There was no need for polymorphism: we always get one type in (dictionaries), and we consult a mapping to decide which type to produce.

However, for serialization, we do need polymorphism. Enter our second tool — the singledispatch package. The default function is equivalent to a method defined on “object”: the ultimate super-class. Since we do not want to serialize generic objects, our default implementation errors out.

@singledispatch.singledispatch
def to_dict(obj):
    raise TypeError("cannot serialize", obj)

Now, we implement the actual serializers. The names of the functions are not important. To emphasize they should not be used directly, we make them “private” by prepending an underscore.

@to_dict.register(Set)
def _to_dict_set(st):
    return dict(type='set', key=st.key, value=st.value)

@to_dict.register(Delete)
def _to_dict_delete(dlt):
    return dict(type='delete', key=dlt.key)

Indeed, we do not call them directly.

print(to_dict(Set(key='k', value='v')))
print(to_dict(Delete(key='kk')))
{'type': 'set', 'value': 'v', 'key': 'k'}
{'type': 'delete', 'key': 'kk'}

However, arbitrary objects cannot be serialized.

try:
    to_dict(object())
except TypeError as e:
    print(e)
('cannot serialize', <object object at 0x7fbdb254ac60>)

Now that the structure of adding such an “external method” has been shown, another example can be given: “act on”, which applies the requested changes to an in-memory map.

@singledispatch.singledispatch
def act_on(command, d):
    raise TypeError("Cannot act on", command)

@act_on.register(Set)
def act_on_set(st, d):
    d[st.key] = st.value

@act_on.register(Delete)
def act_on_delete(dlt, d):
    del d[dlt.key]

d = {}
act_on(Set(key='name', value='woohoo'), d)
print("After setting")
print(d)
act_on(Delete(key='name'), d)
print("After deleting")
print(d)
After setting
{'name': 'woohoo'}
After deleting
{}

In this case, we kept the functionality “near” the code. However, note that the functionality could be implemented in a different module: these functions, even though they are polymorphic, follow Python namespace rules. This is useful: several different modules could implement “act_on”: for example, an in-memory map (as we defined above), a module using Redis or a module using a SQL database.
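
As a hedged sketch of that flexibility, a hypothetical redis_backend module (the module names and the use of the redis client here are assumptions for illustration) could define its own act_on, under the same name but in its own namespace:

# redis_backend.py -- a hypothetical module, separate from the one above
import singledispatch

from protocol import Set, Delete  # assumed module holding the classes defined above

@singledispatch.singledispatch
def act_on(command, client):
    raise TypeError("Cannot act on", command)

@act_on.register(Set)
def _act_on_set(st, client):
    # Apply the change to a Redis connection instead of an in-memory map.
    client.set(st.key, st.value)

@act_on.register(Delete)
def _act_on_delete(dlt, client):
    client.delete(dlt.key)

Code that dispatches commands can then choose which module’s act_on to call, while the Set and Delete classes stay exactly the same.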

Actual methods are not completely obsolete. It would still be best to make methods do anything that would require private attribute access. In simple cases, as above, there is no difference between the public interface and the public implementation.


Time Series Data

August 25, 2016

When operating computers, we often deal with so-called “time series”. Whether it is database latency, page fault rate or total memory used, these are all numbers that are usually sampled at frequent intervals.

However, computer engineers are not the only ones exposed to such data. It is worthwhile to know which other disciplines work with such data, and what they do with it. The “earth sciences” (geology, climate, etc.) have a lot of numbers, and often need to analyze trends and make predictions. Sometimes these predictions have, literally, billions of dollars’ worth of decisions hinging on them. It is worthwhile to read some of the textbooks for students of those disciplines to see how they approach those series.

Another discipline that needs to visually inspect time series data is medicine. EKG data is often vital to analyzing patients’ health, especially when compared to their historical records. For that, the data needs to be saved. A lot of EKG research has been done on how to compress numerical data while keeping it “visually the same”. While the research on that is not as rigorous, or as settled, as the trend analysis in geology, it is still useful to look into. Indeed, even the basics are already better than so-called “roll-ups”, which preserve none of the visual distinction of the data, flattening peaks and filling valleys while keeping a “standard deviation” score that is not as helpful as is usually hoped.


Extension API: An exercise in a negative case study

August 20, 2016

I was idly contemplating implementing a new Jupyter kernel. Luckily, they try to provide facilities to make it easier. Unfortunately, they made a number of suboptimal choices in their API. Fortunately, those mistakes are both common and easily avoidable.

Subclassing as API

They suggest subclassing IPython.kernel.zmq.kernelbase.Kernel. Errr…not “suggest”: it is a “required step”. The reason is probably that this class already implements 21 methods. When you subclass, make sure not to use any of these names, or things will break randomly. If you do not want to subclass, good luck figuring out what assumptions the system makes about these 21 methods, because there is no interface or even prose documentation.

The return statement in their example is particularly illuminating:

        return {'status': 'ok',
                # The base class increments the execution count
                'execution_count': self.execution_count,
                'payload': [],
                'user_expressions': {},
               }

Note the comment “base class increments the execution count”. This is a classic code smell: it seems like something every single overrider would need, which means it really belongs in the helper class, not in every kernel.

None

The signature for the example do_execute is:

    def do_execute(self, code, silent, store_history=True, 
                   user_expressions=None,
                   allow_stdin=False):

Of course, this means that user_expressions will sometimes be a dictionary and sometimes None. It is likely that the code will be written to anticipate one or the other, and will fail in interesting ways if None is actually sent.

Optional Overrides

As described in this section, there are also ways to make the kernel better with optional overrides. The convention used, which is nowhere explained, is that do_ methods are the ones you should override to make a better kernel. Nowhere is it explained why there is no default history implementation, or where to get one, or why a simple, naive implementation would be wrong.

Dictionaries

All overrides return dictionaries, which get serialized directly into the underlying communication platform. This is a poor abstraction, especially when the documentation consists of direct links to the underlying protocol. When wrapping a protocol, it is much nicer to use an Interface as the documentation of what is assumed, and to define an attr.s-based class that lets implementers return something which is automatically of the correct type and will fail in nice ways if a parameter is forgotten.
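
As a hedged sketch of that idea (the class and field names here are made up, and are not part of the real Jupyter API), the reply shown earlier could be wrapped in an attr.s-based class:

import attr

@attr.s
class ExecuteReply(object):
    # Hypothetical wrapper for the execute reply; the fields mirror the
    # keys in the example dictionary above.
    status = attr.ib()
    execution_count = attr.ib()
    payload = attr.ib(default=attr.Factory(list))
    user_expressions = attr.ib(default=attr.Factory(dict))

    def serialize(self):
        # Lower the reply into the dictionary the underlying protocol expects.
        return attr.asdict(self)

Forgetting a required field then fails loudly at construction time, instead of producing a silently malformed dictionary.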

Summary

If you are providing an API, here are a few positive lessons based on the issues above:

  • You should expect interfaces, not subclasses. Use composition, not subclassing. If you want to provide a default implementation in composition, just check for a return of NotImplemented, and use the default.
  • Do the work of abstracting your customers from the need to use dictionaries and unwrap automatically. Use attr.s to avoid customer boilerplate.
  • Send all arguments. Isolate your customers from the need to come up with sane defaults.
  • As much as possible, try to have your interfaces be side-effect free. Instead of asking the customer to directly make a change, allow the customer to make the “needed change” be part of the return type. This will let the customers test their class much more easily.

__name__ == __main__ considered harmful

June 7, 2016

Every single Python tutorial shows the pattern of

# define functions, classes,
# etc.

if __name__ == '__main__':
    main()

This is not a good pattern. If your code is not going to be in a Python module, there is no reason not to unconditionally call ‘main()’ at the bottom. So this code will only be used in modules — where it leads to unpredictable effects. If this module is imported as ‘foo’, then the identity of ‘foo.something’ and ‘__main__.something’ will be different, even though they share code.

This leads to hilarious effects like @cache decorators not doing what they are supposed to, parallel registry lists and all kinds of other issues. Hilarious unless you spend a couple of hours debugging why ‘isinstance()’ is giving incorrect results.
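
A minimal sketch of the effect, assuming a file named demo.py that exists only for this illustration:

# demo.py -- a hypothetical file name, used only to illustrate the problem
class Thing(object):
    pass

if __name__ == '__main__':
    # Running "python demo.py" makes this file the '__main__' module.
    # Importing it here loads the same file a second time, as 'demo'.
    import demo
    t = Thing()                        # an instance of __main__.Thing
    print(isinstance(t, demo.Thing))   # False: demo.Thing is a different class object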

If you want to write a main module, make sure it cannot be imported. In this case, reversed stupidity is intelligence — just reverse the idiom:

# at the top
if __name__ != '__main__':
    raise ImportError("this module cannot be imported")

This, of course, will mean that this module cannot be unit tested: therefore, any non-trivial code should go in a different module that this one imports. Because of this, it is easy to gravitate towards a package. In that case, put the code above in a module called ‘__main__.py’. This will lead to the following layout for a simple package:

PACKAGE_NAME/
             __init__.py
                 # Empty
             __main__.py
                 if __name__ != '__main__':
                     raise ImportError("this module cannot be imported")
                 from PACKAGE_NAME import api
                 api.main()
             api.py
                 # Actual code
             test_api.py
                 import unittest
                 # Testing code

And then, when executing:

$ python -m PACKAGE_NAME arg1 arg2 arg3

This will work in any environment where the package is on sys.path: in particular, in any virtualenv where it was pip-installed. Unless a short command line is important, it allows skipping the creation of a console script in setup.py completely, and letting “python -m” be the official CLI. Since pex supports setting a module as an entry point, if this tool needs to be deployed in other environments, it is easy to package it into a tool that will execute the script:

$ pex . --entry-point PACKAGE_NAME --output-file toolname

Stop Feeling Guilty about Writing Classes

May 31, 2016

I’ve seen a few talks about “stop writing classes”. I think they have a point, but it is a little over-stated. All debates are bravery debates, so it is hard to say which problem is harder — but as a recovering class-writing-guiltoholic, let me admit this: I took this too far. I was avoiding classes when I shouldn’t have.

Classes are best kept small

It is true that classes are best kept small. Any “method” which is not really designed to be overridden is often best implemented as a function that accepts a “duck-type” (or a more formal interface).

This, of course, sometimes leads to…

If a class has only one public method other than __init__, it wants to be a function

Especially given functools.partial, there is no need to decide ahead of time which arguments are “static” and which are “dynamic”.
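
A minimal sketch of that point (the names here are made up for illustration):

import functools

# A one-public-method class that only exists to remember 'greeting'...
class Greeter(object):
    def __init__(self, greeting):
        self._greeting = greeting

    def greet(self, name):
        return "{} {}".format(self._greeting, name)

# ...is equivalent to a plain function plus functools.partial:
def greet(greeting, name):
    return "{} {}".format(greeting, name)

hello = functools.partial(greet, "Hello")
# hello("world") == Greeter("Hello").greet("world") == "Hello world"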

Classes are useful as data packets

This is the usual counter-point to the first two anti-class sentiments: a class which is nothing more than a bunch of attributes (a good example is the TCP envelope: source IP/target IP/source port/target port) is useful. Sure, those attributes could be passed around as dictionaries, but that does not make things better. Just use attrs, and it is often useful to write two more methods:

  • Some variant of “serialize”, an instance method that returns some lower-level format (dictionary, string, etc.)
  • Some variant of “deserialize”, a class method that takes the lower-level format above and returns a corresponding instance.

It is perfectly ok to write this class rather than shipping dictionaries around. If nothing else, error messages will be a lot nicer. Please do not feel guilty.
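
As a hedged sketch of such a class (the name TCPEnvelope and the exact fields are just for illustration), with the two methods suggested above:

import attr

@attr.s
class TCPEnvelope(object):
    source_ip = attr.ib()
    source_port = attr.ib()
    target_ip = attr.ib()
    target_port = attr.ib()

    def serialize(self):
        # Lower the instance to a plain dictionary.
        return attr.asdict(self)

    @classmethod
    def deserialize(cls, dct):
        # Lift a plain dictionary back into an instance.
        return cls(**dct)

A TypeError about a missing field in deserialize is a much nicer error message than a KeyError deep inside code that was handed a bare dictionary.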


Forking Skip-level Dependencies

May 6, 2016

I have recently found I explain this concept over and over to people, so I want to have a reference.

Most modern languages come with a “dependency manager” of sorts that helps manage the third-party libraries a given project uses. Rust has Cargo, Node.js has npm, Python has pip and so on. All of these do some things well and some things poorly. But one thing that can be done (well or poorly) is supporting the forking of skip-level dependencies.

In order to explain what I mean, here is an example: our project is PlanetLocator, a program to tell the user which direction they should face to see a planet. It depends on a library called Astronomy. Astronomy depends on Physics. Physics depends on Math.

  • PlanetLocator
    • Astronomy
      • Physics
        • Math

PlanetLocator is a SaaS, running on our servers. One day, we find Math has a critical bug, leading to a remote execution vulnerability. This is pretty bad, because it can be triggered via our application by simply asking PlanetLocator for the location of Mars at a specific date in the future. Luckily, the bug is simple — in Math’s definition of Pi, we need to add a couple of significant digits.

How easy is it to fix?

Well, assume PlanetLocator is written in Go, and not using any package manager. A typical import statement in PlanetLocator is

import "github.com/astronomy/astronomy"

A typical import statement in Astronomy is

import "github.com/physics/physics"

...and so on.

We fork Math over to “github.com/planetlocator/math” and fix the vulnerability. Now we have to fork Physics to use the forked Math, fork Astronomy to use the forked Physics and, finally, change all of our imports to use the forked Astronomy, even though Physics, Astronomy and PlanetLocator had no bugs!

Now assume, instead, we had used Python. In our requirements.txt file, we could put

git+https://github.com/planetlocator/math#egg=math

and voila! Even though Physics’ setup.py says install_requires=['math'], it will get our forked Math.

When starting to use a new language/dependency manager, the first question to ask is: will it support me forking skip-level dependencies? Because every upstream maintainer is, effectively, an absent maintainer if rapid response is at stake (for any reason: I chose security above, but it might be beating the competition to a deadline, or fulfilling contractual obligations).


Use virtualenv

April 24, 2016

In a conversation recently with a friend, we agreed that “sometimes the instructions tell you to do ‘sudo pip install’…which is good, because then you know to ignore them”.

There is never a need for “sudo pip install”, and doing it is an anti-pattern. Instead, all installation of packages should go into a virtualenv. The only exception is, of course, virtualenv itself (and arguably pip and wheel). I got enough questions about this that I wanted to write up an explanation of the how, the why, and why the counter-arguments are wrong.

What is virtualenv?

The documentation says:

virtualenv is a tool to create isolated Python environments.

The basic problem being addressed is one of dependencies and versions, and indirectly permissions. Imagine you have an application that needs version 1 of LibFoo, but another application requires version 2. How can you use both these applications? If you install everything into /usr/lib/python2.7/site-packages (or whatever your platform’s standard location is), it’s easy to end up in a situation where you unintentionally upgrade an application that shouldn’t be upgraded.

Or more generally, what if you want to install an application and leave it be? If an application works, any change in its libraries or the versions of those libraries can break the application.

The tl;dr is:

  • virtualenv allows not needing administrator privileges
  • virtualenv allows installing different versions of the same library
  • virtualenv allows installing an application and never accidentally updating a dependency

The first problem is the one the “sudo” comment addresses — but the real issues stem from the second and third: not using a virtual environment leads to the potential of conflicts and dependency hell.

How to use virtualenv?

Creating a virtual environment is easy:

$ virtualenv dirname

will create the directory, if it does not exist, and then create a virtual environment in it. It is possible to use it either activated or unactivated. Activating a virtual environment is done by

$ . dirname/bin/activate
(dirname)$

This will put python, as well as any script installed using setuptools’ “console_scripts” option in the virtual environment, on the command-execution path. The most important of those is pip, and so using pip will install into the virtual environment.

It is also possible to use a virtual environment without activating it, by directly calling dirname/bin/python or any other console script. Again, pip is an example of those, and is used for installing into the virtual environment.

Installing tools for “general use”

I have seen, a couple of times, the argument that when installing tools for general use it makes sense to install them system-wide. I do not think this is a reasonable exception, for two reasons:

  • It still forces you to use root to install/upgrade those tools
  • It still runs into the dependency/conflict hell problems

There are a few good alternatives for this:

  • Create a (handful of) virtual environments, and add them to users’ path.
  • Use “pex” to install Python tools in a way that isolates them even further from system dependencies.

Exploratory programming

People often use Python for exploratory programming. That’s great! Note that since pip 7, pip builds and caches wheels by default. This means that creating virtual environments is even cheaper: tearing down an environment and building a new one will not require recompilation. Because of that, it is easy to treat virtual environments as disposable except for configuration: activate a virtual environment and explore; whenever you need to move things into production, ‘pip freeze’ will allow easy recreation of the environment.


Weak references, caches and immutable objects

March 26, 2016

Consider the following situation:

  • We have a lot of immutable objects. For our purposes, an “immutable” object is one where “hash(…)” is defined.
  • We have a (pure) function that is fairly expensive to compute.
  • The objects get created and destroyed regularly.

We often would like to cache the function. As an example, consider a function to serialize an object: if the same object is serialized several times, we would like to avoid recomputing the serialization.

One naive solution would be to implement a cache:

cache = {}
def serialize(obj):
    if obj not in cache:
        cache[obj] = _really_serialize(obj)
    return cache[obj]

The problem with that is that the cache would keep references to our objects long after they should have died. We can try to use an LRU cache (for example, repoze.lru) so that only a certain number of objects would extend their lifetimes in that way, but the size of the LRU is a trade-off between space overhead and time overhead.

An alternative is to use weak references. Weak references are references that do not keep objects from being collected. There are several ways to use weak references, but one is ideally suited here:

import weakref
cache = weakref.WeakKeyDictionary()
def serialize(obj):
    if obj not in cache:
        cache[obj] = _really_serialize(obj)
    return cache[obj]

Note that this is the same code as before — except that the cache is a weak key dictionary. A weak key dictionary keeps weak references to the keys, but strong references to the value. When a key is garbage collected, the entry in the dictionary disappears.

>>> import weakref
>>> a=weakref.WeakKeyDictionary()
>>> fs = frozenset([1,2,3])
>>> a[fs] = "three objects"
>>> print(a[fs])
three objects
>>> len(a)
1
>>> fs = None
>>> len(a)
0

Learn Public Speaking: The Heart-Attack Way

February 11, 2016

I know there are all kinds of organizations, and classes, that teach public speaking. But if you want to learn how to do public speaking the way I did, it’s pretty easy. Get a bunch of friends together, who are all interested in improving their skills. Meet once a week.

The rules are this: one person gets to give a talk with no more props than a marker and a whiteboard. If it’s boring, or they hesitate, another person can butt in, say “boring” and give a talk about something else. Talks are to be prepared somewhat in advance, and to be about five minutes long.

This is scary. You stand there, at the board, never knowing when someone will decide you’re boring. You try to entertain the audience. You try to keep their interest. You lose your point for 10 seconds…and you’re out. It’s the fight club of public speaking. After a few weeks of this, you’ll give flowing talks, and nothing will ever scare you again about public speaking — you’ll laugh as you realize that nobody is going to kick you out.

Yeah, Israel was a good training ground for speaking.