Friday, November 14, 2014

rdbms-subsetter

I've never had a tool I really liked that would extract a chunk of a large production database for testing purposes while respecting the database's foreign keys. This past week I finally got to write one: rdbms-subsetter.

rdbms-subsetter postgresql://user:passwd@host/source_db postgresql://user:passwd@host/excerpted_db 0.001

Getting it to respect referential integrity "upward" - guaranteeing every needed parent record would be included for each child row - took less than a day. Trying to get it to also guarantee referential integrity "downward" - including all child records for each parent record - was a quixotic idea that had me tilting at windmills for days. It's important, because parent records without child records are often useless or illogical. Yet trying to pull them all in led to an endlessly propagating process - percolation, in chemical engineering terms - that threatened to make every test database a complete (but extremely slow) clone of production. After all, if every row in parent table P1 demands rows in child tables C1, C2, and C3, and those child rows demand new rows in parent tables P2 and P3, which demand more rows in C1, C2, and C3, which demand more rows in their parent tables... I felt like I was trying to cut a little sweater out of a big sweater without snipping any yarns.

So I can't guarantee child records - instead, the final process prioritizes creating records that will fill out the empty child slots in existing parent records. But there will almost inevitably be some child slots left open when the program is done.
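For the curious, the tractable "upward" half boils down to a recursive walk over foreign keys, along these lines (a rough sketch, not the actual rdbms-subsetter code; fetch_parent and insert_row are hypothetical stand-ins for queries against the source and target databases, and the Table objects come from SQLAlchemy reflection):

def copy_with_parents(table, row, copied, fetch_parent, insert_row):
    """Insert `row` into the target, pulling in its parent rows first."""
    key = (table.name, tuple(row[col.name] for col in table.primary_key))
    if key in copied:
        return                                 # already copied; recursion stops here
    copied.add(key)
    for fk in table.foreign_keys:              # reflection hands us the FKs
        referring_value = row[fk.parent.name]  # fk.parent is the referring column
        if referring_value is None:
            continue                           # nullable FK, no parent needed
        parent_table = fk.column.table         # fk.column is the referenced column
        parent_row = fetch_parent(parent_table, fk.column.name, referring_value)
        copy_with_parents(parent_table, parent_row, copied,
                          fetch_parent, insert_row)
    insert_row(table, row)                     # all parents present; safe to insert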

I've been using it against one multi-GB, highly interconnected production data warehouse, so it's had some testing, but your bug reports are welcome.

Like virtually everything else I do, this project depends utterly on SQLAlchemy.

I developed this for use at 18F, and my choice of a workplace where everything defaults to open was wonderfully validated when I asked about the procedure for releasing my 18F work to PyPI. The procedure is - and I quote -

Just go for it.

Tuesday, August 26, 2014

%sql: To Pandas and Back

A Pandas DataFrame has a nice to_sql(table_name, sqlalchemy_engine) method that saves itself to a database.
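In ordinary use, that looks something like this (the connection string and data are just for illustration):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('postgresql://@localhost/mydb')  # any SQLAlchemy URL
df = pd.DataFrame({'city': ['Dayton', 'Moraine'],
                   'county': ['Montgomery County'] * 2})
df.to_sql('cities', engine)  # creates the table and inserts the rows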

The only trouble is that coming up with the SQLAlchemy Engine object is a little bit of a pain, and if you're using the IPython %sql magic, your %sql session already has an SQLAlchemy engine anyway. So I created a bogus PERSIST pseudo-SQL command that simply calls to_sql with the open database connection:

%sql PERSIST mydataframe
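
There's not much magic to it. A minimal sketch of the kind of handler involved (hypothetical names; ipython-sql's actual internals differ):

def run(statement, user_namespace, engine):
    """Dispatch one %sql statement, intercepting the PERSIST pseudo-command."""
    words = statement.strip().split()
    if words and words[0].lower() == 'persist':
        frame_name = words[1]
        frame = user_namespace[frame_name]  # find the DataFrame by variable name
        frame.to_sql(frame_name, engine)    # table name = variable name
        return 'Persisted %s' % frame_name
    # ... otherwise, execute `statement` as ordinary SQL ...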

The result is that your data can make a very convenient round-trip from your database, to Pandas and whatever transformations you want to apply there, and back to your database:


In [1]: %load_ext sql

In [2]: %sql postgresql://@localhost/
Out[2]: u'Connected: @'

In [3]: ohio = %sql select * from cities_of_ohio;
246 rows affected.

In [4]: df = ohio.DataFrame()

In [5]: montgomery = df[df['county']=='Montgomery County']

In [6]: %sql PERSIST montgomery
Out[6]: u'Persisted montgomery'

In [7]: %sql SELECT * FROM montgomery
11 rows affected.
Out[7]: 
[(27L, u'Brookville', u'5,884', u'Montgomery County'),
 (54L, u'Dayton', u'141,527', u'Montgomery County'),
 (66L, u'Englewood', u'13,465', u'Montgomery County'),
 (81L, u'Germantown', u'6,215', u'Montgomery County'),
 (130L, u'Miamisburg', u'20,181', u'Montgomery County'),
 (136L, u'Moraine', u'6,307', u'Montgomery County'),
 (157L, u'Oakwood', u'9,202', u'Montgomery County'),
 (180L, u'Riverside', u'25,201', u'Montgomery County'),
 (210L, u'Trotwood', u'24,431', u'Montgomery County'),
 (220L, u'Vandalia', u'15,246', u'Montgomery County'),
 (230L, u'West Carrollton', u'13,143', u'Montgomery County')]

Monday, July 28, 2014

auto-generate SQLAlchemy models

PyOhio gave my lightning talk on ddlgenerator a warm reception, and Brandon Lorenz got me thinking, and PyOhio sprints filled me with py-drenaline, and now ddlgenerator can inspect your data and spit out SQLAlchemy model definitions for you:

$ cat merovingians.yaml 
-
  name: Clovis I
  reign:
    from: 486
    to: 511
-
  name: Childebert I
  reign:
    from: 511
    to: 558
$ ddlgenerator --inserts sqlalchemy merovingians.yaml 

from sqlalchemy import create_engine, Column, Integer, MetaData, Table, Unicode
engine = create_engine(r'sqlite:///:memory:')
metadata = MetaData(bind=engine)

merovingians = Table('merovingians', metadata, 
  Column('name', Unicode(length=12), nullable=False), 
  Column('reign_from', Integer(), nullable=False), 
  Column('reign_to', Integer(), nullable=False), 
  schema=None)

metadata.create_all()
conn = engine.connect()
inserter = merovingians.insert()
conn.execute(inserter, **{'name': 'Clovis I', 'reign_from': 486, 'reign_to': 511})
conn.execute(inserter, **{'name': 'Childebert I', 'reign_from': 511, 'reign_to': 558})
conn.connection.commit()

Brandon's working on a pull request to provide similar functionality for Django models!

Tuesday, July 01, 2014

18F

Yesterday was my first day at 18F!

What is 18F? We're a small, little-known government organization that works outside the usual channels to accomplish special projects. It involves black outfits and a lot of martial arts.

Kidding! Sort of. 18F is a new agency within the GSA that does citizen-focused work for other parts of the U.S. Government, taking on small, quick projects to make information more accessible. We're using all the tricks: small teams, agile development, rapid iteration, open-source software, test-first, continuous integration. We do our work in the open.

Sure, this is old hat to you, faithful blog readers. But bringing it into government IT work is what makes it exciting. We're hoping that the techniques we use will ripple out beyond the immediate projects we work on, popularizing them throughout government IT and improving efficiency and responsiveness across the board. This is a chance to put all the techniques I've learned from you to work for all of us. Who wouldn't love to get paid to work for the common good?

Obviously, this is still my personal blog, so nothing I say about 18F counts as official information. Just take it as my usual enthusiastic babbling.

Friday, May 23, 2014

ddlgenerator

I've had it on github for a while, but I finally released ddlgenerator to PyPI.

I've been frustrated for years that there was no good open-source way to set up RDBMS tables from flat data files. Sure, you could import the data - after setting up the DDL by hand. ddlgenerator handles that; in fact, you can go from zero, setting up and populating a table in a single line. Nothing up my sleeve:

$ psql -c "SELECT * FROM knights"
ERROR:  relation "knights" does not exist
LINE 1: SELECT * FROM knights
                      ^
$ ddlgenerator --inserts postgresql knights.yaml | psql
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
$ psql -c "SELECT * FROM knights"
    name    |         dob         |   kg    | brave 
------------+---------------------+---------+-------
 Lancelot   | 0471-01-09 00:00:00 | 82.0000 | t
 Gawain     |                     | 69.2000 | t
 Robin      | 0471-01-09 00:00:00 |         | f
 Reepacheep |                     |  0.0691 | t
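
For reference, knights.yaml itself isn't shown above; a file along these lines, reconstructed from the query output (so treat the exact values as illustrative), would produce that table:

- name: Lancelot
  dob: 471-01-09
  kg: 82
  brave: true
- name: Gawain
  kg: 69.2
  brave: true
- name: Robin
  dob: 471-01-09
  brave: false
- name: Reepacheep
  kg: 0.0691
  brave: true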

This is a fairly complex tool, so I'm sure you'll be using the bug tracker. But I hope you'll enjoy it nonetheless!

Wednesday, May 21, 2014

data_dispenser

I went down a refactoring rabbit hole on ddl-generator and ended up pulling out the portion that pulls in data from various file formats. Perhaps it will be useful to others.

>>> from data_dispenser.sources import Source
>>> for row in Source('animals.csv'):
...     print(row)
... 
OrderedDict([('name', 'Alfred'), ('species', 'wart hog'), ('kg', '22'), ('notes', 'loves turnips')])
OrderedDict([('name', 'Gertrude'), ('species', 'polar bear'), ('kg', '312.7'), ('notes', 'deep thinker')])
OrderedDict([('name', 'Emily'), ('species', 'salamander'), ('kg', '0.3'), ('notes', '')])

Basically, I wanted a consistent way to consume rows of data - no matter where those rows come from. Right now, JSON, CSV, YAML, etc. all require separate libraries, each with its own API. This abstracts all that out, for reading purposes; now each data source is just a Source.
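
The dispatching itself is nothing exotic. A toy version of the pattern (just the shape of the idea, not data_dispenser's actual code):

import csv
import json
from collections import OrderedDict

def rows_from(path):
    """Yield each row of a data file as an OrderedDict, whatever the format."""
    if path.endswith('.csv'):
        with open(path) as infile:
            for row in csv.DictReader(infile):
                yield OrderedDict(row)
    elif path.endswith('.json'):
        with open(path) as infile:
            for row in json.load(infile):   # assumes a list of objects
                yield OrderedDict(row)
    else:
        raise NotImplementedError('no reader for %s' % path)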

I'd love bug reports, and sample files to test against. And feel free to contribute patches! For example, it wouldn't be hard to add MS Excel as a data source.

https://pypi.python.org/pypi/data_dispenser

Tuesday, May 06, 2014

G+ Public Hangout Fail

tl;dr: Do not use public Google+ Hangouts under any circumstances, because people suck.

Before the PyCon 2014 CFP came due, PyLadies hosted several G+ hangouts for talk proposal brainstorming. Potential speakers could talk over and flesh out their ideas with each other, producing better talk proposals. More importantly, it was a nice psychological stepping stone on the way to filling out that big, scary CFP form all alone. I thought they went great.

I wanted to emulate them for Postgres Open and PyOhio, which both have CFPs open now. The PyLadies hangouts had used EventBrite to preregister attendees; unfortunately, I didn't stop to consider why that mattered. Instead, I just scheduled hangouts, made them public, and sent out invitations with the hangout URLs, encouraging people to forward the invites onward. Why make participating any harder than it has to be?

The more worldly of you are already shaking your heads at my naiveté. It turns out that the world's exhibitionists have figured out how to automatically detect and join public hangouts. For several seconds I tried kicking out and banning them as they joined, but new ones kept arriving, faster than one per second. Then I hung up - which unfortunately did not terminate the hangout. It took me frantic minutes to find how to delete a hangout in progress. I dearly hope that no actual tech community members made it to the hangout during that time.

I had intended to create a place where new speakers, and women especially, would feel safe increasing their community participation. The absoluteness of my failure infuriates me.

Hey, Google: public G+ hangouts have been completely broken, not by technical failure, but by the degraded human condition. You need to remove them immediately. The option can only cause harm, as people accidentally expose themselves and others to sexual harassment.

In the future, a "public" hangout URL should actually take you to a page where you request entrance from the organizer by text message (which should get the same spam filtration that an email would). But fix that later. Take the public hangouts away now.

Everybody else, if you had heard about the hangouts and were planning to participate, THANK YOU - but I've cancelled the rest of them. You should present anyway, though! I'd love to be contacted directly to talk over your ideas for proposals.

Wednesday, January 29, 2014

TRUCEConf

Please consider participating in TRUCEConf (March 18-19 in Cincinnati)!

The goal is to help the tech community heal, through learning from others outside our industry and having an open dialogue on how we can be better humans to each other in the world of tech.

You may remember fierce controversy around TRUCEConf when virtually nothing was known about it but its name; without solid information, it was easy to read bad connotations into the name. I would have been uneasy myself if I hadn't known the founder, Elizabeth Naramore.

But now there's plenty of information, including the schedule, that should replace those concerns with enthusiasm. I think the format - a day of mind-opening speakers from all over, followed by an unconference day - should be very productive!

I'm really looking forward to it and hope that many of you can come. If you can't come in person, consider supporting the conference with a donation - they're going without corporate sponsors, so your individual support means a ton. Thanks!