Section 230: The Legal Shield Perpetuating Algorithmic Discrimination in Big Tech
Section 230 of the Communications Decency Act shields “providers of interactive computer services” from liability arising from content generated by third parties. In an era marked by the prolific use of social media platforms built on user-generated content, the immunity conferred by Section 230 gives platforms like Meta and Twitter an implicit license to field discriminatorily targeted ads.
This discriminatory practice, referred to as ‘digital redlining,’ enables social media platforms to capitalize algorithmically on personal data by targeting advertisements to consumers based on protected characteristics such as race and gender. For example, Facebook offers advertisers a map tool to curate ad delivery: an advertiser can exclude users living within a particular geographic area from viewing an ad simply by drawing a red line around the non-target region. Because residential segregation remains largely unremedied in the United States, an ad-targeting feature that filters by zip code can easily serve as a proxy for race and thereby perpetuate existing racial disparities in housing.
Facebook’s advertising platform also enables more overt forms of digital redlining, allowing advertisers to exclude certain demographics by selecting from a drop-down menu of hundreds of thousands of attributes and interests, such as “Hispanic Culture” and “women in the workforce.” To test the scope of Facebook’s ad-delivery system, one user successfully purchased, within minutes, an advertisement aimed only at white house hunters, directly contravening the Fair Housing Act (“FHA”). The ease with which third-party advertisers can disseminate discriminatorily targeted ads on social media represents a form of algorithmic oppression that Section 230 ultimately immunizes.
While the enactment of Section 230 in 1996 was precipitated by an urgency to insulate minors from online pornography, it also served to promote free expression and empower tech companies with the discretionary authority to self-regulate their content. However, the laissez-faire approach to online regulation that Congress adopted in 1996 is unsustainable in the age of social media. With the influx of user-generated content and advancements in personal data collection in Big Tech, social media platforms have effectively been handed a blank check under Section 230 to profit from discriminatory advertising at the expense of historically marginalized groups.
The immunity granted by Section 230 carves the virtual space into its own legal vacuum, effectively shielding Big Tech companies from established civil-rights and consumer-protection laws whenever the illegal content is generated by third parties. After civil rights groups filed lawsuits alleging that Facebook hosted discriminatory advertisements in violation of the Fair Housing Act, Meta reached a settlement with the U.S. Department of Justice in June 2022, agreeing to build a new algorithm specific to housing ads. Pursuant to the settlement, Meta developed the Variance Reduction System, an algorithm designed to bridge the gap between the eligible and actual audiences for housing advertisements and, operatively, eliminate ad stratification based on characteristics protected under the Fair Housing Act, such as sex and race. Under the agreement, Meta must also cease delivering housing ads to users who “look like” other users and withhold these targeting options from advertisers, essentially insulating from third parties the data that would otherwise reveal a user’s connection to an FHA-protected class.
Correcting one narrow algorithm, however, cannot displace the bias that continues to pervade the platform, particularly in age- and gender-targeted employment ads that conceal opportunities from protected groups. Ultimately, Big Tech companies are the least-cost avoiders of digital redlining because they design the algorithms, so it follows that Section 230 should be interpreted more narrowly: courts should dispose of absolute immunity and impose accountability on social media conglomerates when their algorithms enable third parties to transmit discriminatory ads.
1. 47 U.S.C. § 230.
2. Julia Angwin, It’s Time to Tear Up Big Tech’s Get-Out-of-Jail-Free Card, N.Y. Times (Feb. 20, 2023), https://www.nytimes.com/2023/02/20/opinion/facebook-section-230-supreme-court.html.
3. Linda Morris & Olga Akselrod, Holding Facebook Accountable for Digital Redlining, ACLU (Jan. 2022), https://www.aclu.org/news/privacy-technology/holding-facebook-accountable-for-digital-redlining.
4. HUD v. Facebook (U.S. Dep’t of Hous. & Urb. Dev., Mar. 28, 2019).
5. Morris & Akselrod, supra note 3.
6. Angwin, supra note 2.
7. Id.
8. Id.
9. Olivier Sylvain, Platform Realism, Informational Inequality, and Section 230 Reform, Yale L.J. Forum 475, 476 (Nov. 2021).
10. Id. at 478.
11. Id. at 477.
12. Id. at 501.
13. Angwin, supra note 2.
14. Press Release No. 23-18, U.S. Dep’t of Just., Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements (Jan. 9, 2023), https://www.justice.gov/opa/pr/justice-department-and-meta-platforms-inc-reach-key-agreement-they-implement-groundbreaking.
15. Id.
16. Angwin, supra note 2.
17. Id.