Artificial intelligence (AI) once seemed like a futuristic, mystical tool wielded by technological wizards. Today, it’s at everyone’s doorstep, demanding attention from every industry, including the law. The legal community is both watching and responding, in real time, as the evolving technology brings equal parts promise and disruption to the profession. On one hand, AI offers powerful tools to streamline workflows and extend lawyers’ capacity. On the other, it raises urgent concerns about privacy, bias, accountability, and professional ethics—all while industry innovation is outpacing regulatory development.

Welcome or not, change is coming: change that sparks both thrilling curiosity and cautious concern. As these forces converge, New York Law School (NYLS) is proactively leading its community through this transformative moment in legal practice, preparing tomorrow’s lawyers to embrace the future thoughtfully and responsibly.

NYLS aims to foster a community-wide culture of responsible exploration, and to play an active role in shaping the evolving norms of AI use in legal practice. Last year, the Law School launched a dedicated AI Task Force led by Dean Kim Hawkins, Professor Heidi K. Brown, and Professor Michael Pastor to examine how generative AI (GenAI) is reshaping both legal education and practice. Professor Pastor was also recently appointed the inaugural Dean for Technology Law Programs, a role that includes collaborating with Dean and President Anthony W. Crowell and Senior Associate Dean Matt Gewolb to integrate AI-focused strategies into the NYLS Strategic Plan. Professor Brown, Associate Dean for Upper Level Writing, continues to adapt and develop courses exploring the strategic, ethical, and technical dimensions of GenAI in legal writing. Dean Hawkins, who oversees the Clinical and Experiential Learning program and the first-year Legal Practice program, is conducting broad research focused on strengthening NYLS’s national leadership in the AI space.

UNDERSTANDING GENERATIVE AI

AI broadly refers to technologies and systems capable of performing complex tasks that typically require human reasoning and perception. Although AI has existed for decades, its recent stratospheric launch into mainstream use—known as the “AI boom”—can be attributed to the development of GenAI. GenAI relies on large language models (LLMs) trained on vast amounts of existing data to create original text, visual media, audio, and more in response to user prompts. Accessible, user-friendly tools built on LLMs—such as OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot—have made GenAI available to anyone with an internet connection.

The appeal of AI lies in its immense capabilities. GenAI has performed core legal tasks—such as document extraction, summarization, and transcript analysis—at or above the level of human lawyers, and significantly faster; in 2023, one model passed the bar exam with a score in the 90th percentile. Findings like these support predictions that AI could significantly alleviate administrative burdens in legal practice. GenAI is projected to automate as much as 44 percent of legal work in the United States by 2033, a figure that could translate into a combined 266 million hours of increased productivity.

THE NYLS APPROACH

Alongside its promise, AI has introduced a host of questions. Most, if not all, legal professionals and well-established legal systems will inevitably need to face and adapt to the implications of AI. But what about those just entering the profession—those stepping into the legal world at the very moment it’s being reshaped?

The Law School has begun integrating AI into its curriculum to ensure that students graduate with a meaningful understanding of how the technology applies to legal practice. Legal Practice faculty are currently working to incorporate AI literacy into the first-year curriculum for the coming year. In upper-level coursework, AI is being discussed in substantive classes and programs tied to the Innovation Center for Law and Technology and the James Tricarico Jr. Institute for the Business of Law and In-House Counsel. Recent course additions include Generative AI for Business Lawyers, taught by Adjunct Professor Lawrence Montle ’13, and Drafting Contracts: Technology Transactions, a skills-based drafting course taught by Dean Pastor. Several NYLS faculty members, including Adjunct Professors Steven E. Pegalis ’65, Lydia Payne-Johnson ’96, and Chinnu Joseph ’14, have been collaborating with NYLS leadership to expand AI-focused content within their courses. Externally, Joseph Solomon Distinguished Professor Rebecca Roiphe serves on the New York State Bar Association’s Task Force on Artificial Intelligence, which examines AI’s legal, social, and ethical impact.

“The need for lawyers to guide the development and growth of the industry from legal and regulatory perspectives, and to promote the ethical use of technology itself, has never been greater,” says Dean Crowell. “GenAI and emerging technologies are reshaping methods of legal research and writing, and increasing the possibilities to improve access to justice. NYLS is committed to ensuring that our students are not only prepared to work in this new era, but ready to lead.”

Michael Pastor teaching class

Dean Michael Pastor teaching The In-House Counsel Experience: A Skills and Simulation Seminar.

Heidi Brown teaching class

Professor Heidi K. Brown teaching Legal Writing and GenAI.

GOVERNANCE IN THE CYBER AGE

This past spring, the Tricarico Institute hosted the 4th Annual Tribeca Cybersecurity Summit, a daylong event exploring issues in cybersecurity. The keynote conversation, “Cybersecurity and Artificial Intelligence,” featured Amanda Miller ’11, Managing Counsel and Global Privacy Officer at the Institute of Electrical and Electronics Engineers (IEEE), and Kelly Moan, Chief Information Security Officer of the NYC Cyber Command. Moderated by Dean Pastor, the panel addressed themes of radical accountability, use-case expansion, and governance.

Touching on AI’s role in cybersecurity, the panelists emphasized the importance of maintaining strong foundational protocols. “My cybersecurity team at IEEE has been incorporating GenAI impact tests and incident response plans,” Miller explained. “Those protocols still need to be clearly defined and in place. We may now need to think beyond cyber parameters, but those structured processes help guide how we’re thinking about AI.”

Panelists also discussed how GenAI is complicating the cybersecurity landscape, with attackers using the technology to increase the scale and complexity of their threats. While it can serve as a weapon in the hands of bad actors, the New York City government is exploring how AI can be used as a shield to boost defense and innovation.

Moan highlighted the Cyber Command’s AI Action Plan, the first comprehensive framework for responsible AI use in municipal government. The plan outlines seven key initiatives focused on governance, workforce training, and agency-level support in ethical and effective AI implementation.

New York Law School 4th Annual Tribeca Cybersecurity Summit

Dean Michael Pastor, Kelly Moan, and Amanda Miller ’11 at the 4th Annual Tribeca Cybersecurity Summit.

New York Law School 4th Annual Tribeca Cybersecurity Summit

Dean Michael Pastor and Adjunct Professor Lawrence Montle ’13 speaking with students at the 4th Annual Tribeca Cybersecurity Summit.

HEALTH LAW AND CIVIL LIABILITY

In another Spring 2025 event, NYLS partnered with Sheppard Mullin Richter & Hampton LLP to host a panel on AI’s current and future role in health systems—from compliance and regulation to clinical decision-making. Panelists included Dean Pastor; Professor Pegalis; Dr. David L. Reich, President of Mount Sinai Hospital and Chief Clinical Officer of the Mount Sinai Health System; and Elad Walach, CEO of Aidoc. The discussion was moderated by Adjunct Professor Adam Herbst, a Partner at Sheppard Mullin and former Deputy Commissioner of the New York State Department of Health.

Dr. Reich and Walach shared real-world examples of AI applications in clinical settings, highlighting measurable improvements in patient care and outcomes. Yet they, along with their fellow panelists, expressed concern about the risks of biased health data and unequal access to healthcare. “There’s a vital role for policymakers to play, working in conjunction with hospitals, doctors, and the legal community,” Dean Pastor said. “Government actors should be asking, ‘What datasets are being used? Are they being used lawfully? And are they being used with an eye on data privacy?’”

As with all NYLS-led discussions on AI, ethical use and responsibility were central themes of the panel. Those themes were also echoed in “The Role of the Civil Liability Tort System in an Age Where AI Impacts Healthcare,” a February 2025 Nassau Lawyer piece co-authored by Professors Pegalis and Herbst, Valentina Battista ’24, Rania Elsanhoury ’24, Lara Hakim ’24, Kyle Hunt ’24, and Michael San Roman ’24. The article strongly supports maintaining the civil liability tort system as a key safeguard for safety and accountability in healthcare, arguing that as AI becomes more embedded in medical practice, tort law must continue to ensure that AI tools are deployed with diligence, transparency, and care.

Shahrokh Falati and Steven E. Pegalis

Professors Shahrokh Falati ’08 and Steven E. Pegalis ’65.

AI in Healthcare Panel

(From left to right) Professor Steven E. Pegalis ’65, Elad Walach, Dean Michael Pastor, Dr. David L. Reich, and Adjunct Professor Adam Herbst during the “AI in Healthcare” virtual panel.

A LEGACY OF EMBRACING TRANSFORMATION

Long before the AI boom, NYLS was preparing its community to understand and address the legal implications of technological innovation. In 1977, the Law School established a Communications Media Center in response to the developing information economy. The Center hosted a 1985 colloquium exploring deregulation in the broadcast and telecommunications industries. In the 1990s, it developed courses that touched on emerging legal issues surrounding the World Wide Web. At the turn of the 21st century, NYLS faculty co-authored “Lawyering Skills & New Technology,” a joint exploration of the opportunities and challenges that information technology presented across different areas of law. From early on, the Law School has foreseen, understood, and embraced the relationship between technology and legal education and practice.

That legacy continues in the work of NYLS alumni—many of whom are now leading the way in technology law, particularly within the AI space. Jisha Dymond ’03, Chief Compliance and Ethics Officer at OneTrust, advocates for in-house counsel and compliance professionals to adopt practical AI implementation strategies to drive innovation in their organizations. Charles Post ’10, Executive Vice President at Cimplifi, has spent more than a decade tackling complex contract data challenges through emerging technologies like AI. Jeff Tang ’12, Chief Intellectual Property Counsel at Circle, previously served as Vice President of Intellectual Property at BlackRock, where he advised the BlackRock Artificial Intelligence Lab on data privacy issues. Christina Segro ’12 serves as General Counsel at Paige, a software company that uses AI to enhance cancer detection and treatment through FDA-approved digital pathology tools. And Matthew H. Chung ’12, a Shareholder at Polsinelli PC, has helped grow patent portfolios for startups, universities, and Fortune 100 companies in AI and machine learning. The achievements of these alumni and many others demonstrate NYLS’s enduring commitment to fostering legal leaders of tomorrow.

CONCLUSION

“In the face of enormous change in the use of computers for research, writing, and learning, it is doubly important to recognize the human element in lawyering.” The late Professor Lawrence M. Grosberg wrote those words in the year 2000, as the legal world met the cyber age. Today, NYLS and the broader legal community stand at a similar cusp, peering at the face of transformation with caution and curiosity. And if it was doubly important to recognize the human element of lawyering then, it is triply so now.

In this new era, NYLS is leading with intention. With an emphasis on ethical reasoning, technological literacy, and human judgment, the Law School is preparing lawyers not just to adapt, but to lead. However advanced the tools of tomorrow become, one truth remains: the future of law will always be defined by the ethics, insight, and empathy of those who practice it.

STUDENT
PERSPECTIVES

John Marks ’26

John Marks

“For law students, GenAI can be your personal Socrates—its greatest value lies in posing questions that challenge your understanding and application of the law. When used intentionally, GenAI doesn’t shortcut the learning process; it expands and reinforces it. Ultimately, students benefit most by using GenAI to ask better questions—not to get easy answers.”

Evan Schuval ’25

Evan Schuval

“Generative AI is transforming how we draft, research, and even practice law, but it’s not about replacing lawyers; it’s about augmenting us. Over the next decade, AI will likely take on more routine tasks, allowing lawyers to focus on strategic thinking and client advocacy, especially in fields like entertainment law, where AI-generated content raises fresh legal questions and challenges. Professor Brown’s course was a meaningful step forward in preparing me to engage with emerging technologies, helping me approach this evolving landscape with both innovative thinking and an ethically grounded sense of responsibility. I’m excited to enter the field at a time when technology can amplify our impact, but I’m also mindful that human judgment, empathy, and ethics must remain at the forefront of legal practice.”

Anne Marie Mulligan ’26

Anne Marie Mulligan

“NYLS courses like Legal Writing and GenAI have prepared me to adapt to AI’s impact on legal practice. Lawyers must stay informed about technological tools and use them appropriately. While LLMs can reduce lawyers’ workloads and increase legal access for underserved populations, practitioners must understand their limitations. Everyone should be concerned about environmental costs and worker displacement. Legal professionals and business leaders should carefully evaluate whether replacing functions with LLM tools is appropriate, particularly given that current pricing models are unsustainable: companies risk dependency when providers like OpenAI inevitably raise prices to profitable levels.”

ALUMNI
PERSPECTIVES

Lydia Payne-Johnson ’96

Lydia Payne-Johnson

“In the absence of a federal privacy law, the U.S. has a sectoral approach. Before AI was brought into the mix, information privacy law operated in murky waters. AI will likely further erode privacy guardrails, making it more difficult for individuals to have transparency around how their data is being collected, dissected, shared, and used. My advice to students is to remember that at the core of this particular field of law is personal data. Data that, coupled with increasingly sophisticated technology, can be compared to a 1,000-piece puzzle that U.S. law has not yet fully addressed.”

Professor Payne-Johnson is the Director of Data Governance, Compliance, and Identity Management at George Washington University. She teaches Information Privacy Law at NYLS.

Lawrence Montle ’13

Lawrence Montle

“The legal profession and the business entities it serves are increasingly adopting artificial intelligence tools in multiple areas, requiring attorneys to have at least basic AI skills to be competitive. Community involvement is a key part of the GenAI for Business Lawyers course. Alumni will come in and share their experiences with students to make the course more robust and representative of the work new lawyers will be expected to perform.”

Professor Montle is the Chief Information Security and Privacy Officer at the New York State Insurance Fund. He currently teaches Cybersecurity Compliance and Generative AI for Business Lawyers.

Kyle Hunt ’24

Kyle Hunt

“Both my research for Professor Montle and my time in Professor Jeanne Somma’s LegalTech class shone spotlights on the Braess paradox to which automation can give rise. The current approach to AI, fueled as it is by technical wonder and never-before-dreamed convenience, sometimes elides the difference between scriveners and attorneys: that at the core of our profession is not prolific writing, but reasoning and sound judgment.”

As a student, Hunt served as an AI Research Intern for Professor Montle and was President of the Privacy Law Association. He is currently an AI Model Analyst and a Contracts Agency Attorney Intern with the New York City Office of Technology and Innovation.