Twitter Background Check

As soon as social media became a pervasive force in the average person’s day-to-day life, employers started using online behavior to learn more about prospective and current employees.

The idea of the social media background check has been around for nearly as long as Facebook has. People are living more of their lives online than ever, which means that their online presences can offer a glimpse into who they are, what they value, and how they might behave at work. This argument is the one that employers use to justify browsing Facebook and Twitter accounts as part of the pre-employment screening process.

Social media background checks can sometimes identify problematic behavior from a potential hire. For instance, someone who has posted dozens of racist or sexist jokes on Facebook in the past week is probably not someone you want on your team. Neither is someone with a history of badmouthing their former bosses on Twitter. But these examples can generate more problems for employers than peace of mind.

One of the risks of using social media as a background check tool is that an employer must sort through a lot of information that is not relevant to the job to find red flags. Social media can reveal a lot of information about a person that employers are not supposed to know or ask about, including sexual orientation, political affiliation, gender identity, and race or ethnicity. This information, which is easy to obtain, could lead to discrimination in the workplace and beyond.

The presence of this potentially bias-creating material on social media accounts has left employers with two options: skip social media background checks or hire outside firms to sift through candidate profiles and deliver reports logging only relevant information. The problem is that some of the companies offering these “background check” services remain incapable of performing the nuanced evaluations that determine whether the info that they find is relevant to employment.

One example is Fama, a California-based company that claims to perform (among other things) social media background checks. The company has recently come under fire because of a viral tweet in which a job seeker shared pages of the 300-page report that Fama compiled about him for a prospective employer.

The report, the Twitter user says, is a PDF document that includes “every tweet I’ve ever liked” with the F-word in it. The user shared several photos of the report for all of Twitter to see.

Fama seems to use an algorithm that scans a user’s entire Twitter history in search of tweets, retweets, or likes that involve “Flag Reasons” such as language, alcohol, sexism, or bigotry. Each “flag” has a “Flag Type” (“Good” or “Bad”) and appears on the background check report.

The problem is that Fama’s algorithm seems to search for words that might be deemed explicit or problematic with no regard for context—Twitter is a platform famous, in part, for posts filled with jokes and sarcasm.

With tools lagging in practicality and employers struggling to keep bias at bay, social media background checks are more problematic and less useful than employers think that they are. While thorough background checks are a must for any employer, social media checks often go too far, too fast.

Sources:

https://reclaimthenet.org/twitter-background-check/

https://www.thenation.com/article/archive/big-data-actually-reinforcing-social-inequalities/