UChicago TechTeam


Project: Illinois Jobslink, Fall 2016

by Evan Gorstein

Searching for a job online can sometimes feel like looking for a needle in a haystack. With countless search options and innumerable job opportunities, the Internet makes it difficult to even know where to begin. Luckily, the online employment search may have just gotten a little easier, at least for residents of Illinois with a basic knowledge of how web APIs work.

This past fall, UChicago TechTeam worked on a project to make the vast amounts of employment data published by the Illinois Department of Economic Development easier to navigate and access. The Department’s job board, located at IllinoisJobLink.com, requires you to sign up for an account before you can even begin using its more advanced search features. The sign-up process is itself quite cumbersome, and even with an account, the search options are not as helpful as you’d hope. Worse yet, because the repository of jobs is continuously updated, you are forced to check the board repeatedly to see when a job matching your preferences and qualifications gets posted.

Cue TechTeam. Led by second-year Subhodh Kotekal, the project team developed a web scraping tool that extracts the relevant employment information from each job listing and stores it in a SQLite database. If you’re like me and don’t know what SQLite is, what’s important is that it is a lightweight database engine that can be queried with SQL. Thus, anyone with a basic knowledge of SQL can use this database to fine-tune their job searches and directly access countless employment opportunities without signing up for an account on IllinoisJobLink.
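To give a flavor of what that looks like, here is a minimal sketch of querying such a database with Python’s built-in `sqlite3` module. The table name, columns, and sample rows are illustrative assumptions, not the project’s actual schema; in practice you would open the `.db` file produced by the scraper.

```python
import sqlite3

# Hypothetical schema for illustration; the real one lives in the project repo.
conn = sqlite3.connect(":memory:")  # in practice: sqlite3.connect("jobs.db")
conn.execute("""
    CREATE TABLE jobs (
        title       TEXT,
        company     TEXT,
        city        TEXT,
        posted_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?, ?)",
    [("Data Analyst", "Acme Corp", "Chicago", "2016-10-01"),
     ("Line Cook", "Diner LLC", "Springfield", "2016-10-02")],
)

# A fine-tuned search: every Chicago listing, newest first.
rows = conn.execute(
    "SELECT title, company FROM jobs WHERE city = ? ORDER BY posted_date DESC",
    ("Chicago",),
).fetchall()
print(rows)
```

A single `SELECT` with a `WHERE` clause replaces clicking through the board’s search forms, and parameterized queries (the `?` placeholders) keep the search terms safely separated from the SQL itself.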

The second stage of the project involved building an API to make this data even more accessible: the API lets users retrieve data with simple HTTP requests, filtering results with query strings. The next step would be to store everything in a Google Fusion Table, a system that would allow for advanced visualization of the data and increase navigability. Unfortunately, the sheer size of the data set made this impossible, at least for now. You can learn more about the project and view the Python code for the data scraper and API at the project’s GitHub repository, linked here.
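As a sketch of what “filtering with query strings” means on the client side, the snippet below builds such a request URL with Python’s standard library. The endpoint path and parameter names are assumptions for illustration; the project’s repository documents the actual API.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; consult the project's GitHub repo for the real one.
BASE_URL = "http://localhost:5000/jobs"

# Filters are expressed as key=value pairs in the query string,
# rather than as a SQL WHERE clause.
params = {"city": "Chicago", "title": "analyst"}
url = BASE_URL + "?" + urlencode(params)
print(url)

# A client would then fetch the URL and decode the JSON response, e.g.:
#   from urllib.request import urlopen
#   import json
#   with urlopen(url) as resp:
#       jobs = json.load(resp)
```

The appeal of this design is that a job seeker needs no SQL at all: any tool that can make an HTTP request, from a browser to `curl`, can filter the listings.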

TechTeam is constantly looking for ways to make public data more accessible to the public! This is a task that should keep us busy for quarters to come.