References
[1] J. Khadijah Abdurahman. 2021. Calculating the Souls of Black Folk: Predictive Analytics in the New York City Administration for Children’s Services. In Colum. J. Race & L. Forum, Vol. 11. HeinOnline, 75.
[2] J. Khadijah Abdurahman. 2022. Birthing Predictions of Premature Death. (2022). https://logicmag.io/home/birthing-predictions-of-premature-death/
[3] Social Security Administration. 2023. Supplemental Security Income (SSI) in Pennsylvania. https://www.ssa.gov/pubs/EN-05-11150.pdf
[4] Patricia Auspos. 2017. Using Integrated Data Systems to Improve Case Management and Develop Predictive Modeling Tools. Case Study 4. Annie E. Casey Foundation (2017).
[5] Cora Bartelink, TA Van Yperen, IJ Ten Berge, Leontien De Kwaadsteniet, and CLM Witteman. 2014. Agreement on child maltreatment decisions: A nonrandomized study on the effects of structured decision-making. In Child & Youth Care Forum, Vol. 43. Springer, 639–654.
[6] Cora Bartelink, Tom A Van Yperen, and Ingrid J Ten Berge. 2015. Deciding on child maltreatment: A literature review on methods that improve decision-making. Child Abuse & Neglect 49 (2015), 142–153.
[7] Elinor Benami, Reid Whitaker, Vincent La, Hongjin Lin, Brandon R Anderson, and Daniel E Ho. 2021. The distributive effects of risk prediction in environmental compliance: Algorithmic design, environmental justice, and public policy. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 90–105.
[8] Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics 6 (2018), 587–604.
[9] Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura, and Peter Eckersley. 2020. Explainable machine learning in deployment. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 648–657.
[10] Emily Black, Manish Raghavan, and Solon Barocas. 2022. Model Multiplicity: Opportunities, Concerns, and Solutions. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 850–863. https://doi.org/10.1145/3531146.3533149
[11] Sebastian Bordt, Michèle Finck, Eric Raidl, and Ulrike von Luxburg. 2022. Post-hoc explanations fail to achieve their purpose in adversarial contexts. arXiv preprint arXiv:2201.10295 (2022).
[12] Anna Brown, Alexandra Chouldechova, Emily Putnam-Hornstein, Andrew Tobin, and Rhema Vaithianathan. 2019. Toward algorithmic accountability in public services: A qualitative study of affected community perspectives on algorithmic decision-making in child welfare services. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
[13] Carrie J Cai, Samantha Winter, David Steiner, Lauren Wilcox, and Michael Terry. 2019. “Hello AI”: uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–24.
[14] Girish Chandrashekar and Ferat Sahin. 2014. A survey on feature selection methods. Computers & Electrical Engineering 40, 1 (2014), 16–28.
[15] Hao-Fei Cheng, Logan Stapleton, Anna Kawakami, Venkatesh Sivaraman, Yanghuidi Cheng, Diana Qing, Adam Perer, Kenneth Holstein, Zhiwei Steven Wu, and Haiyi Zhu. 2022. How child welfare workers reduce racial disparities in algorithmic decisions. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–22.
[16] Alexandra Chouldechova, Diana Benavides-Prado, Oleksandr Fialko, and Rhema Vaithianathan. 2018. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Conference on Fairness, Accountability and Transparency. PMLR, 134–148.
[17] Sasha Costanza-Chock. 2020. Design justice: Community-led practices to build the worlds we need. The MIT Press.
[18] Allegheny County. [n. d.]. The DHS Data Warehouse. ([n. d.]). https://www.alleghenycounty.us/human-services/news-events/accomplishments/dhs-data-warehouse.aspx
[19] Ying Cui, Fu Chen, Ali Shiri, and Yaqin Fan. 2019. Predictive analytic models of student success in higher education: A review of methodology. Information and Learning Sciences (2019).
[20] Tim Dare and Eileen Gambrill. 2017. Ethical analysis: Predictive risk models at call screening for Allegheny County. Unpublished report (2017). https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/Ethical-Analysis-16-ACDHS-26_PredictiveRisk_Package_050119_FINAL-2.pdf
[21] Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova. 2020. A case for humans-in-the-loop: Decisions in the presence of erroneous algorithmic scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
[22] Jessica M Eaglin. 2017. Constructing recidivism risk. Emory LJ 67 (2017), 59.
[23] Virginia Eubanks. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
[24] Mary Flanagan, Daniel C. Howe, and Helen Nissenbaum. 2008. Embodying Values in Technology: Theory and Practice. Cambridge University Press, 322–353. https://doi.org/10.1017/CBO9780511498725.017
[25] Batya Friedman. 1996. Value-sensitive design. Interactions 3, 6 (1996), 16–23.
[26] Philip Gillingham. 2019. Can predictive algorithms assist decision-making in social work with children and families? Child Abuse Review 28, 2 (2019), 114–126.
[27] Stephanie K Glaberson. 2019. Coding over the cracks: predictive analytics and child protection. Fordham Urb. LJ 46 (2019), 307.
[28] James P Gleeson. 1987. Implementing structured decision-making procedures at child welfare intake. Child Welfare (1987), 101–112.
[29] Lauryn P Gouldin. 2016. Disentangling flight risk from dangerousness. BYU L. Rev. (2016), 837.
[30] Crystal Grant. 2022. ACLU White Paper: AI in Healthcare May Worsen Medical Racism. https://www.aclu.org/legal-document/aclu-white-paper-ai-health-care-may-worsen-medical-racism
[31] Ben Green. 2021. Data science as political action: Grounding data science in a politics of justice. Journal of Social Computing 2, 3 (2021), 249–265.
[32] Ben Green. 2022. Escaping the impossibility of fairness: From formal to substantive algorithmic fairness. Philosophy & Technology 35, 4 (2022), 1–32.
[33] Sam Harper, Nicholas B King, Stephen C Meersman, Marsha E Reichman, Nancy Breen, and John Lynch. 2010. Implicit value judgments in the measurement of health inequalities. The Milbank Quarterly 88, 1 (2010), 4–29.
[34] Sally Ho and Garance Burke. 2022. An algorithm that screens for child neglect in Allegheny County raises concerns. (2022). https://apnews.com/article/child-welfare-algorithm-investigation-9497ee937e0053ad4144a86c68241ef1
[35] Benefits Tech Action Hub. 2022. Colorado Medicaid, SNAP, CHIP, and TANF Wrongful Denials. https://www.btah.org/case-study/colorado-medicaid-snap-chip-and-tanf-wrongful-denials.html
[36] Hornby Zeller Associates Inc. 2018. Allegheny County Predictive Risk Modeling Tool Implementation: Process Evaluation. (2018). https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/Process-Evaluation-from-16-ACDHS-26_PredictiveRisk_Package_050119_FINAL-4.pdf
[37] Abigail Z Jacobs. 2021. Measurement as governance in and for responsible AI. arXiv preprint arXiv:2109.05658 (2021).
[38] Abigail Z Jacobs and Hanna Wallach. 2021. Measurement and fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 375–385.
[39] Will Johnson. 2004. Effectiveness of California’s child welfare structured decision making (SDM) model: a prospective study of the validity of the California Family Risk Assessment. Madison (Wisconsin, USA): Children’s Research Center (2004).
[40] Nathan Kallus and Angela Zhou. 2019. The fairness of risk scores beyond classification: Bipartite ranking and the xAUC metric. Advances in Neural Information Processing Systems 32 (2019).
[41] Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. 2022. A Survey of Algorithmic Recourse: Contrastive Explanations and Consequential Recommendations. ACM Comput. Surv. (2022). https://doi.org/10.1145/3527848
[42] Anna Kawakami, Venkatesh Sivaraman, Hao-Fei Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, and Kenneth Holstein. 2022. Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support. In CHI Conference on Human Factors in Computing Systems. 1–18.
[43] Anna Kawakami, Venkatesh Sivaraman, Logan Stapleton, Hao-Fei Cheng, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, and Kenneth Holstein. 2022. “Why Do I Care What’s Similar?” Probing Challenges in AI-Assisted Child Welfare Decision-Making through Worker-AI Interface Design Concepts. In Designing Interactive Systems Conference. 454–470.
[44] Emily Keddell. 2015. The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy 35, 1 (2015), 69–88.
[45] Emily Keddell. 2019. Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences 8, 10 (2019), 281.
[46] Kenji Kira and Larry A. Rendell. 1992. A Practical Approach to Feature Selection. In Machine Learning Proceedings 1992. San Francisco (CA), 249–256. https://doi.org/10.1016/B978-1-55860-247-2.50037-1
[47] Vipin Kumar and Sonajharia Minz. 2014. Feature Selection: A Literature Review. Smart Comput. Rev. 4 (2014), 211–229.
[48] Himabindu Lakkaraju. 2021. Towards Reliable and Practicable Algorithmic Recourse. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (Virtual Event, Queensland, Australia) (CIKM ’21). Association for Computing Machinery, New York, NY, USA, 4. https://doi.org/10.1145/3459637.3482497
[49] Algorithmic Justice League. [n. d.]. The Algorithmic Justice League’s 101 Overview. ([n. d.]). https://www.ajl.org/learn-more
[50] Karen Levy, Kyla E Chasalow, and Sarah Riley. 2021. Algorithms and decision-making in the public sector. Annual Review of Law and Social Science 17 (2021), 309–334.
[51] Gabriel Lima, Nina Grgić-Hlača, Jin Keun Jeong, and Meeyoung Cha. 2022. The Conflict Between Explainable and Accountable Decision-Making Algorithms. arXiv preprint arXiv:2205.05306 (2022).
[52] Jorge M. Lobo, Alberto Jiménez-Valverde, and Raimundo Real. 2008. AUC: a misleading measure of the performance of predictive distribution models. Global Ecology and Biogeography 17, 2 (2008), 145–151. https://doi.org/10.1111/j.1466-8238.2007.00358.x
[53] Wayne A Logan and Andrew Guthrie Ferguson. 2016. Policing criminal justice data. Minn. L. Rev. 101 (2016), 541.
[54] Noëmi Manders-Huits. 2011. What values in design? The challenge of incorporating moral values into design. Science and engineering ethics 17, 2 (2011), 271–287.
[55] Sandra G Mayson. 2017. Dangerous defendants. Yale LJ 127 (2017), 490.
[56] Mikaela Meyer, Aaron Horowitz, Erica Marshall, and Kristian Lum. 2022. Flipping the Script on Criminal Justice Risk Assessment: An actuarial model for assessing the risk the federal sentencing system poses to defendants. In 2022 ACM Conference on Fairness, Accountability, and Transparency. 366–378.
[57] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 220–229.
[58] Shira Mitchell, Eric Potash, Solon Barocas, Alexander D’Amour, and Kristian Lum. 2021. Algorithmic fairness: Choices, assumptions, and definitions. Annual Review of Statistics and Its Application 8 (2021), 141–163.
[59] Shakir Mohamed, Marie-Therese Png, and William Isaac. 2020. Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology 33, 4 (2020), 659–684.
[60] Michael Muller, Ingrid Lange, Dakuo Wang, David Piorkowski, Jason Tsay, Q Vera Liao, Casey Dugan, and Thomas Erickson. 2019. How data science workers work with data: Discovery, capture, curation, design, creation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
[61] Michael Muller and Angelika Strohmayer. 2022. Forgetting Practices in the Data Sciences. In CHI Conference on Human Factors in Computing Systems. 1–19.
[62] Deirdre K Mulligan and Kenneth A Bamberger. 2018. Saving governance-by-design. California Law Review 106, 3 (2018), 697–784.
[63] Deirdre K Mulligan and Kenneth A Bamberger. 2019. Procurement as policy: Administrative process for machine learning. Berkeley Tech. LJ 34 (2019), 773.
[64] Debbie Nathan. 2020. The Long, Dark History of Family Separations: How politicians used the drug war and the welfare state to break up black and Native American families. (2020). https://reason.com/2020/10/13/the-long-dark-history-of-family-separations/
[65] Hina Naveed. 2022. “If I Wasn’t Poor, I Wouldn’t Be Unfit”: The Family Separation Crisis in the US Child Welfare System. (2022). https://www.aclu.org/report/if-i-wasnt-poor-i-wouldnt-be-unfit-family-separation-crisis-us-child-welfare-system
[66] Children’s Data Network. 2022. Los Angeles County Risk Stratification Model: Methodology Implementation Report. (2022). https://dcfs.lacounty.gov/wp-content/uploads/2022/08/Risk-Stratification-Methodology-Report_8.29.22.pdf
[67] Allegheny County Department of Human Services. 2017. Ethical Analysis: Predictive Risk Models at Call Screening for Allegheny County Response by the Allegheny County Department of Human Services. Allegheny County Analytics (2017).
[68] Allegheny County Department of Human Services. 2019. Developing Predictive Risk Models to Support Child Maltreatment Hotline Screening Decisions: Predictive Risk Package. https://www.alleghenycountyanalytics.us/wp-content/uploads/2019/05/16-ACDHS-26_PredictiveRisk_Package_050119_FINAL-2.pdf
[69] Mark A Paige and Audrey Amrein-Beardsley. 2020. “Houston, We Have a Lawsuit”: A Cautionary Tale for the Implementation of Value-Added Models for High-Stakes Employment Decisions. Educational Researcher 49, 5 (2020), 350–359.
[70] Amandalynne Paullada, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. 2021. Data and its (dis)contents: A survey of dataset development and use in machine learning research. Patterns 2, 11 (2021), 100336.
[71] Martin Pawelczyk, Teresa Datta, Johannes van-den Heuvel, Gjergji Kasneci, and Himabindu Lakkaraju. 2022. Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse. https://doi.org/10.48550/ARXIV.2203.06768
[72] Tawana Petty, Mariella Saba, Tamika Lewis, Seeta Pena Gangadharan, and Virginia Eubanks. 2018. Reclaiming our data. (2018). https://www.odbproject.org/wp-content/uploads/2016/12/ODB.InterimReport.FINAL_.7.16.2018.pdf
[73] Eleanor Pratt and Heather Hahn. 2021. What Happens When People Face Unfair Treatment or Judgment When Applying for Public Assistance or Social Services? Washington, DC: Urban Institute (2021).
[74] Kaivalya Rawal, Ece Kamar, and Himabindu Lakkaraju. 2020. Can I Still Trust You?: Understanding the Impact of Distribution Shifts on Algorithmic Recourses. CoRR abs/2012.11788 (2020). arXiv:2012.11788 https://arxiv.org/abs/2012.11788
[75] Rashida Richardson, Jason M Schultz, and Kate Crawford. 2019. Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. NYUL Rev. Online 94 (2019), 15.
[76] Katherine Rittenhouse, Emily Putnam-Hornstein, and Rhema Vaithianathan. 2022. Algorithms, Humans, and Racial Disparities in Child Protective Services: Evidence from the Allegheny Family Screening Tool. (2022).
[77] Dorothy Roberts. 2009. Shattered Bonds: The Color of Child Welfare. Hachette UK.
[78] Dorothy Roberts. 2022. Torn Apart: How the Child Welfare System Destroys Black Families–and How Abolition Can Build a Safer World. Basic Books.
[79] Samantha Robertson, Tonya Nguyen, and Niloufar Salehi. 2021. Modeling Assumptions Clash with the Real World: Transparency, Equity, and Community Challenges for Student Assignment Algorithms. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 589, 14 pages. https://doi.org/10.1145/3411764.3445748
[80] David G. Robinson. 2022. The Kidney Transplant Algorithm’s Surprising Lessons for Ethical A.I. (2022). https://slate.com/technology/2022/08/kidney-allocation-algorithm-ai-ethics.html
[81] David G Robinson. 2022. Voices in the Code: A Story about People, Their Values, and the Algorithm They Made. Russell Sage Foundation.
[82] Anjana Samant, Aaron Horowitz, Kath Xu, and Sophie Beiers. 2022. Family Surveillance by Algorithm: The Rapidly Spreading Tools Few Have Heard Of. (2022). https://www.aclu.org/fact-sheet/family-surveillance-algorithm
[83] Devansh Saxena, Karla Badillo-Urquiola, Pamela J Wisniewski, and Shion Guha. 2020. A human-centered review of algorithms used within the US child welfare system. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[84] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 59–68.
[85] Noah Simon, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2011. Regularization Paths for Cox’s Proportional Hazards Model via Coordinate Descent. Journal of Statistical Software 39, 5 (2011), 1–13. https://www.jstatsoft.org/v39/i05/
[86] Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Ken Holstein, Zhiwei Steven Wu, and Haiyi Zhu. 2022. Imagining New Futures beyond Predictive Systems in Child Welfare: A Qualitative Study with Impacted Stakeholders. In 2022 ACM Conference on Fairness, Accountability, and Transparency (Seoul, Republic of Korea) (FAccT ’22). Association for Computing Machinery, New York, NY, USA, 1162–1177. https://doi.org/10.1145/3531146.3533177
[87] Harini Suresh and John Guttag. 2021. Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle. (2021).
[88] upEND Movement. [n. d.]. Family Policing System Definition. ([n. d.]). https://upendmovement.org/family-policing-definition/
[89] Rhema Vaithianathan, Haley Dinh, Allon Kalisher, Chamari Kithulgoda, Emily Kulick, Megh Mayur, Athena Ning, Diana Benavides-Prado, and Emily Putnam-Hornstein. 2019. Implementing a Child Welfare Decision Aide in Douglas County: Methodology Report. https://csda.aut.ac.nz/__data/assets/pdf_file/0009/347715/Douglas-County-Methodology_Final_3_02_2020.pdf
[90] Rhema Vaithianathan, Emily Kulick, Emily Putnam-Hornstein, and Diana Benavides-Prado. 2019. Allegheny family screening tool: Methodology, version 2. Center for Social Data Analytics (2019), 1–22.
[91] Rhema Vaithianathan, Emily Putnam-Hornstein, Nan Jiang, Parma Nand, and Tim Maloney. 2017. Developing predictive models to support child maltreatment hotline screening decisions: Allegheny County methodology and implementation. Center for Social Data Analytics (2017).
[92] Suresh Venkatasubramanian and Mark Alfano. 2020. The Philosophical Basis of Algorithmic Recourse. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 284–293. https://doi.org/10.1145/3351095.3372876
[93] Emma Williams. 2020. ‘Family Regulation,’ Not ‘Child Welfare’: Abolition Starts with Changing our Language. (2020). https://imprintnews.org/opinion/family-regulation-not-child-welfare-abolition-starts-changing-language
[94] Qian Yang, Aaron Steinfeld, and John Zimmerman. 2019. Unremarkable AI: Fitting intelligent decision support into critical, clinical decision-making processes. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–11.
[95] Allegheny County. [n. d.]. Office of Behavioral Health. ([n. d.]). https://www.alleghenycounty.us/Human-Services/About/Offices/Behavioral-Health.aspx