What Will You Learn
This program examines the ethical and operational challenges created by unauthorized generative AI use in legal practice. Participants will learn how AI tools function, how shadow AI emerges in legal workflows, and how professional responsibility rules apply to emerging technologies. The program also addresses governance strategies, risk-scoring models for AI tools, and how AI-related records may affect discovery and evidentiary disputes. Practical guidance will help attorneys supervise technology use while protecting client confidentiality and legal work product.
What Will You Gain
Attorneys will gain practical frameworks for identifying and managing shadow AI risks within their organizations. Participants will leave with actionable guidance for building defensible AI policies, evaluating vendor tools, and aligning governance controls with professional responsibility obligations. The program also provides incident response strategies and discovery readiness practices designed to reduce litigation exposure. These insights help lawyers integrate AI responsibly while protecting client interests and institutional credibility.
Key topics to be discussed:
This course is co-sponsored with myLawCLE.
Date / Time: April 29, 2026
Closed-captioning available
Brett Holubeck, Senior Attorney | Kane Russell Coleman Logan PC
Brett Holubeck is a Senior Attorney at Kane Russell Coleman Logan PC whose practice focuses on labor and employment law, advising and defending employers in a broad range of workplace matters. His work includes counseling clients on discrimination claims, wage and hour issues, labor relations, and noncompetition matters while helping organizations manage employment risk and regulatory compliance. He represents employers through all stages of employment disputes, including counseling, litigation, arbitration, and agency proceedings, and regularly advises on workplace policies, internal investigations, and preventative compliance strategies designed to minimize liability.
Melissa J. Sachs, Partner | Constangy, Brooks, Smith & Prophete, LLP
Melissa J. Sachs is an attorney with Constangy, Brooks, Smith & Prophete, LLP whose practice focuses on employment law and litigation. She advises employers on workplace compliance, policy development, and risk management while representing clients in employment disputes and administrative proceedings. Her work includes counseling organizations on emerging workplace issues, employment policies, and regulatory compliance. Through both litigation and advisory work, she helps employers navigate complex legal risks and evolving workplace regulations.
Paul McVoy, SVP and Shareholder | Repario
Paul H. McVoy is Senior Vice President and Shareholder at Repario, a client-focused technology company that provides concierge-style litigation support services. With more than three decades of experience in discovery, he has worked on matters ranging from smaller disputes to complex, large-scale litigation. McVoy has been at the forefront of the evolution of electronic discovery and was an early advocate for technology-assisted review (TAR). He is actively involved in shaping best practices in the field through his work with The Sedona Conference and other industry initiatives focused on advancing eDiscovery standards and technology use in litigation.
I. Shadow AI in the Law Firm: Ethics, Competence, and Guardrails for Generative AI | 1:00pm – 2:00pm
Attorneys, paralegals, and staff are adopting AI at a rapid pace, and those who fail to incorporate AI into their practice will eventually be left behind. Unfortunately, many law firms have no AI policies for attorneys or staff, forbid any use of AI, or are too lenient about which tools employees may use. The news is, and will continue to be, filled with cautionary tales of attorneys, staff, judges, and pro se litigants using AI in ways that resulted in sanctions or embarrassment. In this session, attendees will review strategies for addressing this emerging issue and the basic steps every attorney should take. Topics include the duty of competence, attorneys' general responsibilities to clients regarding AI, how generative AI tools work, policies firms should consider implementing to reduce risk, and the difference between shadow (unauthorized) AI use and approved tools.
As generative AI tools rapidly enter everyday legal work, many firms face growing risks from "shadow AI": unauthorized or unsupervised technology use by attorneys and staff. This session examines how professional responsibility rules, including the duties of competence and confidentiality, apply to AI-assisted legal practice. Attendees will learn how generative AI works, how to recognize shadow AI within legal workflows, and how to implement practical policies and guardrails that reduce ethical and operational risk.
Break | 2:00pm – 2:10pm
II. Shadow AI Governance in Legal Workflows: Risk Mapping, Proportional Controls, and Incident Readiness | 2:10pm – 3:10pm
This session provides a practical, defensible framework for governing shadow AI in legal workflows, focusing on where untracked AI use appears, how information moves and persists, and which exposure points create the greatest confidentiality and evidentiary risk. Attendees will apply a “reasonable security” lens and a DoCRA-style proportionality model to score AI use cases and vendor tools, align controls to risk, and document decisions through concise tool-level risk records. The session concludes with a litigation-aware response approach, including a playbook for the first 24–72 hours and discovery readiness steps that reduce downstream exposure.
This session provides a practical governance framework for identifying and managing shadow AI across legal workflows, focusing on how untracked AI use can expose confidential information and create evidentiary risks. Participants will learn how to map data exposure pathways, evaluate AI tools using proportional risk-scoring models, and implement baseline controls aligned with a “reasonable security” approach. The session also addresses vendor diligence and incident response strategies, including how to prepare for discovery issues and manage the critical first hours following an AI-related event.
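The proportional risk-scoring approach described above can be illustrated with a minimal sketch. This is a hypothetical example only: the scales, the acceptance threshold, and the `AIToolAssessment` class are illustrative assumptions, not part of the DoCRA standard or the session materials.

```python
# Hypothetical sketch of a proportional (DoCRA-style) risk score for AI tools.
# All scales, weights, and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    name: str
    likelihood: int      # 1 (rare) to 5 (near certain): chance of data exposure
    impact: int          # 1 (negligible) to 5 (severe): harm to clients or firm
    control_burden: int  # 1 (trivial) to 5 (prohibitive): cost of safeguards

    @property
    def risk(self) -> int:
        # Classic likelihood-times-impact scoring, yielding 1..25.
        return self.likelihood * self.impact

    def is_acceptable(self, threshold: int = 9) -> bool:
        # Risk at or below the firm's acceptance threshold needs no new controls.
        return self.risk <= threshold

    def control_is_proportional(self) -> bool:
        # Proportionality test: a safeguard is reasonable when its burden
        # does not exceed the risk it mitigates.
        return self.control_burden <= self.risk

# Example: an unvetted public chatbot with client data pasted in.
tool = AIToolAssessment("public chatbot, client data pasted in",
                        likelihood=4, impact=5, control_burden=2)
print(tool.risk)                       # 20
print(tool.is_acceptable())            # False: above threshold, needs controls
print(tool.control_is_proportional())  # True: blocking/DLP burden is justified
```

A record like this, kept per tool, is one way to document the "concise tool-level risk records" the session describes; real assessments would of course use the firm's own criteria.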