
Matter of Samuel – Artificial Intelligence Hallucinates and an Incapacitated Person Makes a Will

January 29, 2024

Artificial Intelligence (“AI”) made legal and mainstream news in 2023. In a highly publicized and widely discussed case, Mata v. Avianca, Inc., the United States District Court for the Southern District of New York sanctioned attorneys for citing non-existent, fake cases generated by OpenAI’s ChatGPT. Despite Mata’s stark warning to the bar, AI-generated fake caselaw continues to appear in litigation nationwide.

In Matter of Samuel, the Kings County Surrogate’s Court confronted a lawyer’s careless use of AI in a contested probate proceeding. The objectant’s counsel submitted “fake caselaw resulting from Artificial Intelligence hallucinations” in reply papers on a summary judgment motion. Five of the six cases cited in the objectant’s reply papers were either erroneous or non-existent. The court held that counsel violated the rule against “frivolous” litigation conduct under 22 NYCRR 130-1.1 by making material misstatements to the Court concerning caselaw.

Surrogate Graham was careful to point out that AI is not, in and of itself, the problem. While the court was “dubious” about attorneys using AI to prepare legal documents, it focused squarely on counsel’s failure to examine and scrutinize the ostensible authorities that AI cited in support of the objectant’s arguments. The court found that counsel had sufficient time to review and analyze the AI-generated reply papers and to run a simple cite check on reliable legal search engines, which would have revealed AI’s reliance on non-existent, fake caselaw. Counsel’s conduct, not AI, was the real problem.

The court quoted Mata’s warning of the dangers of “fake” AI-generated caselaw in holding that sanctions were warranted:

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system.

The court in Matter of Samuel moved beyond AI’s capacity to generate a reliable legal argument based on authentic caselaw to the decedent’s capacity to create a will. It considered whether an “incapacitated person” under the care of a Mental Hygiene Law (“MHL”) Article 81 Guardian could make a will. The objectant, perhaps bolstered by fake caselaw, argued that the decedent lacked testamentary capacity as a matter of law at the time she executed her will because she had been adjudicated “incapacitated” under MHL Article 81 and a Guardian had been appointed for her. The court explained, however, that “capacity” as that term is understood in the context of an MHL Article 81 proceeding is different from the “testamentary capacity” needed to execute a will. An “incapacitated” person under the Mental Hygiene Law may still be capable of making a will.

A finding of “incapacity” pursuant to MHL § 81.02[b] requires “clear and convincing evidence” that a person is likely to suffer harm because she is (1) “unable to provide for personal needs and/or property management,” and (2) “cannot adequately understand and appreciate the nature and consequences of such inability.” The “primary consideration” in a guardianship proceeding is a person’s “functional level,” as reflected in the person’s (1) “management of the activities of daily living,” (2) “understanding and appreciation of the nature and consequences of any inability to manage the activities of daily living,” (3) “preferences . . . [about] managing the activities of daily living,” and (4) “the nature and extent of the person’s property and financial affairs and his or her ability to manage them” (MHL §§ 81.02[c][1], [2], [3], [4]).

By contrast, testamentary capacity requires a showing that the testator understood the nature and extent of her property, knew the natural objects of her bounty, and understood the contents of her will. One can suffer from physical weakness and infirmity, a disease of the mind, and a failing memory, and still possess testamentary capacity at the time of the execution of the will. Indeed, as the court confirmed, even an incapacitated person under the care of an MHL Article 81 Guardian can create a will. A testator need not be “as sharp as a tack” or “as clear as a bell” to make a will.

Under the facts presented in Matter of Samuel, a jury will decide whether the decedent had testamentary capacity.

Turning back to AI, below are two questions that I posed to ChatGPT, and the answers it provided:

  • What are the dangers of using AI to make legal arguments?

Using AI for legal arguments poses risks such as potential bias in the training data, lack of transparency in decision-making, and ethical concerns surrounding accountability and fairness. Additionally, reliance on AI may overlook nuances and context crucial in legal interpretation. It’s essential to consider these factors and use AI as a tool rather than a substitute for human legal expertise.

  • Can a person adjudicated incapacitated under New York’s Mental Hygiene Law create a will?   

In New York, if someone has been adjudicated incapacitated under the Mental Hygiene Law, they may be deemed incapable of creating a valid will. The ability to create a will often depends on mental capacity. It’s advisable to consult with a legal professional to get accurate and specific advice based on the individual circumstances and relevant laws.

AI’s responses to the questions posed are accurate, even if imprecise, and they appropriately caution the bar against taking the kinds of shortcuts that led counsel in Matter of Samuel and Mata down the wrong path.