Shortest unused Twitter #hashtag

What is the shortest Twitter #hashtag that has never been used? To make the answer unique, let’s look for the lexicographically first such hashtag composed of ASCII letters only. Let’s skip dessert and use Python and the Twitter Search API to find out:

import urllib
import json
import itertools
import string
import time

k, max_k = 1, 10
while k < max_k:
    # Try all tags of length k in lexicographic order.
    for tag in itertools.product(string.ascii_lowercase, repeat=k):
        tag = ''.join(tag)
        print "Searching for #%s" % tag
        search_url = 'https://search.twitter.com/search.json?q=%%23%s' % tag
        while True:
            search_result = json.loads(urllib.urlopen(search_url).read())
            if 'results' in search_result:
                break
            # A response without 'results' indicates rate limiting.
            print "Wait a few seconds because of Twitter Search API limit"
            time.sleep(5)
        search_result = search_result['results']
        if not search_result:
            print "Unused hashtag: #%s" % tag
            k = max_k  # also terminate the outer loop
            break
    k += 1

After a few minutes, the result: #agy. What could we use that one for?

Memory-efficient Django queries

When doing heavy computations that access your database through the Django Object-Relational Mapper, you might notice Python consuming lots of memory. This will probably not happen while serving web requests in production: needing lots of data to serve a single request already indicates that something is wrong and that at least some preprocessing should be done. That is probably why Django isn’t tailored towards such needs and instead favors speed (both at execution and coding time) at the cost of memory. But when you’re in that preprocessing phase, or you’re just using the Django ORM for some scientific computation, you don’t want to consume more memory than absolutely necessary.

The first issue to watch out for is debug mode. From the docs:

It is also important to remember that when running with DEBUG turned on, Django will remember every SQL query it executes. This is useful when you are debugging, but on a production server, it will rapidly consume memory.

“Remembering” means that Django stores all SQL queries and their execution times in a list django.db.connection.queries of the form [{'sql': 'SELECT ...', 'time': '0.01'}, ...]. This is useful when you want to profile your queries (see the docs), but it is needless overhead when you’re just doing heavy database computations. So either set DEBUG to False or clear this list regularly.
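If you need to keep DEBUG turned on, the second option could look like this minimal sketch; reset_queries comes from django.db, while the Entry model and process_entry are hypothetical placeholders for your data and per-row work:

from django.db import reset_queries

for i, entry in enumerate(Entry.objects.all()):
    process_entry(entry)  # hypothetical placeholder for the actual work
    if i % 1000 == 0:
        reset_queries()  # empties django.db.connection.queries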

Moreover, it is important to understand how Django holds data in memory. Although model objects are constructed “lazily” on the fly, the rows of a queryset are cached in memory by default, so that multiple iterations over the result can reuse them. This can be avoided by using queryset.iterator(). So while

entries = Entry.objects.all()
for entry in entries:
    print entry
for entry in entries:
    print entry

will fetch all entries from the database once and keep them cached in memory,

for entry in entries.iterator():
    print entry
for entry in entries.iterator():
    print entry

will execute the query twice but save memory.

However, even using iterator() can still lead to a heavy memory footprint, not directly on Django’s side, but in the database interface (e.g. the Python MySQLdb module): it receives and stores the complete result set from the database server before handing any of it over to Django. So the only way to avoid this is to issue queries that don’t produce too much data at once. This snippet does exactly that:

def queryset_iterator(queryset, chunksize=1000):
    pk = 0  # assumes numerical primary keys starting above 0
    last_pk = queryset.order_by('-pk')[0].pk  # fails on an empty queryset
    queryset = queryset.order_by('pk')
    while pk < last_pk:
        for row in queryset.filter(pk__gt=pk)[:chunksize]:
            pk = row.pk
            yield row

It basically fetches slices (default size 1000) of the queryset, ordered by primary key. As a consequence, any ordering previously applied to the queryset is lost. The reason for this behavior is, of course, that ordering (and especially slicing) by the indexed primary key is fast. You can use the function like this:

for entry in queryset_iterator(Entry.objects):
    print entry

Apart from losing the ability to order the queryset, this approach does not handle concurrent modification of the data well: if rows are inserted or deleted while queryset_iterator is running, rows might be reported several times or not at all. That should not be a big problem when preprocessing data, though.

I modified queryset_iterator slightly to save the initial primary-key query, to allow for non-numerical primary keys, and to support reverse ordering by primary key:

def queryset_iterator(queryset, chunksize=1000, reverse=False):
    ordering = '-' if reverse else ''
    queryset = queryset.order_by(ordering + 'pk')
    last_pk = None
    new_items = True
    while new_items:
        new_items = False
        chunk = queryset
        if last_pk is not None:
            # Continue right after the last primary key we have seen.
            func = 'lt' if reverse else 'gt'
            chunk = chunk.filter(**{'pk__' + func: last_pk})
        chunk = chunk[:chunksize]
        row = None
        for row in chunk:
            yield row
        if row is not None:
            # Remember where to continue; an empty chunk ends the loop.
            last_pk = row.pk
            new_items = True
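The function is used as before; as an illustrative example, this walks the Entry table from the highest primary key downwards in chunks of 500 (both parameter choices are arbitrary):

for entry in queryset_iterator(Entry.objects.all(), chunksize=500, reverse=True):
    print entry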

To sum things up:

  • Set DEBUG = False when doing lots of database operations.
  • Use queryset_iterator when you deal with millions of rows and don’t care about ordering.
  • Still enjoy the convenience of Django!