Charm Offensive

All good things come to an end, and so it was with NICAR 2014.

My brief stay in Charm City came to an end on Sunday afternoon after a final day of back-to-back panels on using and presenting various forms of Census data.

My only major gripe is that NICAR’s record attendance for this year’s conference (1,000+ registrants!) meant that a lot of the hands-on sessions filled up quickly. Rooms with 20-30 computer stations routinely had another half-dozen or more people sitting on the floor or standing in the back.

A series of Python workshops on Saturday seemed to draw the most demand. I arrived half an hour early to the room only to find that a line of 40 or so people had already queued up to get into 20-30 seats. Insanity.

Regardless, staff and presenters did their best to accommodate everyone, and many of the materials for the hands-on sessions have been posted by instructors on their individual GitHub accounts. This means that attendees (or anyone with the right link) can review the tutorials at their own pace.

Also, as much as I tend to hate crowds, this was probably the nicest crowd I’ve ever met: people who genuinely just wanted to help and meet other people. Plus, it was such a hub of excitement, passion, and ideas that it stands apart from every other journalism conference I’ve attended.

While this conference was only four days long, I’m definitely walking away with a list of potential story ideas, skills, and tools to check out. The dilemma from here will be the same one I’ve had for a while: finding the time and initiative to keep building my skills outside the daily grind of the newsroom.

NICAR covers an extensive amount of ground, which can be overwhelming for newcomers like me. IRE has provided a handy guide to most of the tipsheets and presentations delivered during the conference here.

But here’s just a super-short list of some links to tools and resources I’m personally planning on using. Since sessions were simultaneous, the following represents only a fraction of what was offered this past week. Much more can be found in the materials I linked to above.

I hope to try some of the more hands-on programming tutorials next weekend on my own.

ArcGIS is now offering free accounts for professional journalists. The mapping software, which allows you to create maps with a variety of base layers and also add map packs from other users in the community, usually requires a subscription. Staff said they have specifically rewritten their terms of use for media members, so I’d recommend checking that out first.

For those interested in higher-powered maps that go beyond Google’s Fusion Tables (which suffers from a problem known as “drop-outs”), consider paid options like CartoDB, or the open-source (free!) Leaflet. Leaflet is a JavaScript library with a wide variety of plug-ins, one of which is likely to match the kind of map you’re hoping to roll out. Be forewarned: it will involve a fair amount of code tinkering, though.

Census data is also made user-friendly with Census Reporter, a Knight-funded tool that lets you quickly analyze and visualize data. It can also produce files that merge datasets together, which you can drop directly into your mapping software of choice.
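As a rough illustration of the kind of join such tools automate, here’s a minimal Python sketch that merges two CSV extracts on a shared `geoid` column. The sample figures and column names are made up for illustration; this is not Census Reporter’s actual output format.

```python
import csv
import io

# Hypothetical sample extracts, standing in for two Census tables
# that share a common geographic identifier.
population_csv = """geoid,name,population
24510,Baltimore city,622000
24005,Baltimore County,817000
"""

income_csv = """geoid,median_income
24510,41000
24005,67000
"""

def merge_on_geoid(left_csv, right_csv):
    """Join two CSV extracts on their shared 'geoid' column."""
    right = {row["geoid"]: row for row in csv.DictReader(io.StringIO(right_csv))}
    merged = []
    for row in csv.DictReader(io.StringIO(left_csv)):
        row.update(right.get(row["geoid"], {}))  # attach matching columns
        merged.append(row)
    return merged

rows = merge_on_geoid(population_csv, income_csv)
for row in rows:
    print(row["name"], row["population"], row["median_income"])
```

Once merged like this, a single file with both measures can go straight into your mapping software.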

The SQLite plug-in for the Firefox browser will handle basic to moderate datasets, and I know journos who use it to cover the majority of their day-to-day data journalism projects.

For more rigorous datasets, download one of several open-source (free) MySQL GUIs (graphical user interfaces). These are just user-friendly front ends that help you run your SQL queries and make the import process smoother. The instructors I had recommended SQLyog as an open-source option and Navicat as a paid one.

For those running Macs, SQLite is already built into your computer, so you can run SQL queries directly from the command line (via the Terminal application). Windows users can download the library as well. I’m still looking for a user-friendly guide on how to do this.
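Python, which also ships with Macs, includes the built-in sqlite3 module, so you can run the same kinds of queries from a script instead of the sqlite3 prompt. A minimal sketch, using a hypothetical table of city salaries:

```python
import sqlite3

# An in-memory database for demonstration; pass a file path
# (e.g. "salaries.db") to sqlite3.connect() to persist your data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical salaries table, the kind of dataset you might import.
cur.execute("CREATE TABLE salaries (name TEXT, agency TEXT, pay REAL)")
cur.executemany(
    "INSERT INTO salaries VALUES (?, ?, ?)",
    [
        ("Smith", "Police", 65000.0),
        ("Jones", "Police", 72000.0),
        ("Lee", "Fire", 68000.0),
    ],
)

# The same GROUP BY query you could type at the sqlite3 prompt in Terminal.
cur.execute("SELECT agency, AVG(pay) FROM salaries GROUP BY agency ORDER BY agency")
results = cur.fetchall()
print(results)  # [('Fire', 68000.0), ('Police', 68500.0)]
conn.close()
```

The SQL itself is identical either way; the script just saves you from retyping queries as a story develops.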

DownThemAll is a plug-in for Firefox that can “scrape” a set of webpages for you. It can be used simply to grab info from a list of URLs.
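The link-harvesting idea behind such plug-ins can also be sketched in a few lines of Python using the standard-library HTML parser. The page below is made up for illustration:

```python
from html.parser import HTMLParser

class LinkHarvester(HTMLParser):
    """Collect every href on a page, like a download manager's link list."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical records page with downloadable files.
page = """
<html><body>
<a href="reports/2013.pdf">2013 report</a>
<a href="reports/2014.pdf">2014 report</a>
<a href="about.html">About</a>
</body></html>
"""

harvester = LinkHarvester()
harvester.feed(page)
pdfs = [link for link in harvester.links if link.endswith(".pdf")]
print(pdfs)  # ['reports/2013.pdf', 'reports/2014.pdf']
```

From there you could hand the filtered list to a downloader, which is essentially what the plug-in does for you in the browser.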

Rubular is a Ruby-based regular expression editor. Regular expressions are insanely useful for cleaning data or parsing out fields. Dirty data is depressing data. As one of my friends put it, regular expressions tend to be the “un-sexy” tool of data journalism that he uses almost all the time.

Once you’ve built and tested a regular expression, text editors like Notepad++ (for Windows) or TextWrangler (for Mac) have handy regex-enabled find-and-replace modes for applying it to your data.
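The same find-and-replace step can be scripted with Python’s re module. A small sketch, using a made-up list of messy agency names:

```python
import re

# Hypothetical dirty field: inconsistent whitespace and trailing codes.
dirty = [
    "Police  Dept. (A01)",
    "Fire   Dept.(B02)",
    "Public Works Dept. (C03)",
]

def clean(value):
    """Strip a trailing parenthesized code and collapse runs of whitespace."""
    value = re.sub(r"\s*\([A-Z]\d+\)\s*$", "", value)  # drop "(A01)" etc.
    value = re.sub(r"\s+", " ", value)                 # collapse spaces
    return value.strip()

cleaned = [clean(v) for v in dirty]
print(cleaned)  # ['Police Dept.', 'Fire Dept.', 'Public Works Dept.']
```

Scripting the cleanup also means you can rerun it unchanged the next time the agency sends you a fresh, equally messy file.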