2026 Georgia Society for Healthcare Human Resource ...
Artificial Intelligence - Fundamentals and Legal Update AI Presentation Slides (003)
PDF Summary
This March 11, 2026 presentation explains how modern AI, especially generative AI and large language models (e.g., ChatGPT, Claude, Llama), affects healthcare human resources, and why HR professionals may be well positioned to manage AI risk. It contrasts "old" AI (decision trees) with machine-learning systems that are powerful but difficult to interpret, often unable to clearly explain their decisions, and prone to errors and "hallucinations." Because models are only as good as their training data, they can reproduce and amplify bias, create proxy discrimination, and raise serious trust and confidentiality concerns (including potential leakage of company secrets). The slides highlight the "black box" problem: if an organization cannot explain how an AI tool reached a decision, it may be unable to defend that decision in litigation.

The program surveys common HR uses of AI, including scheduling, application intake and screening, recruiting chatbots, interview analytics, internal-investigation support, and notetaking/transcription. It emphasizes legal and compliance risks in hiring (bias embedded in training data; proxy variables such as zip code or school; disability-related issues), noting that employers can remain liable even when using vendor tools. The presentation cites Mobley v. Workday as an example of potential employer and vendor liability for discriminatory AI-driven hiring decisions, and points to EEOC guidance on AI and the ADA.

It also flags constraints in investigations (e.g., do not use AI to assess truthfulness under the EPPA; ensure meaningful human oversight in FMLA administration) and wage-and-hour pitfalls arising from AI scheduling and timekeeping. On the clinical side, it summarizes CMS/ONC clinical decision support expectations and the new ACA Section 1557 regulations covering "patient care decision support tools," which explicitly include AI.

Finally, it outlines emerging state and local laws (NYC, CA, CO, IL, TX, UT, etc.), pending Georgia proposals, and FTC guidance on biometrics, concluding that HR should help lead AI governance through policies, training, disclosures, and cross-functional oversight.
Keywords
generative AI in healthcare HR
large language models (ChatGPT, Claude, Llama)
AI risk management and governance
black-box explainability in AI decisions
algorithmic bias and proxy discrimination
AI hiring and recruiting compliance (EEOC, ADA)
Mobley v. Workday AI discrimination liability
AI in HR investigations and oversight (EPPA, FMLA)
AI scheduling, timekeeping, and wage-and-hour risks
state and federal AI regulations (NYC, CA, CO, IL, TX, UT, FTC, CMS, ONC, ACA 1557)