Does Your AI Lawyer Have a Fool for a Client? A Double-Edged Sword: AI, Legal Advice, and the Risk of Going It Alone
AI is powerful and useful. But no AI chatbot can give you privileged legal advice, anything you type may become evidence, and courts are already sanctioning people for submitting unverified AI work.
“He who represents himself has a fool for a client.” —Abraham Lincoln
Before you trust AI with your disability, life, or AD&D claim, read this guide—or call Dorian Law.
Using Artificial Intelligence (AI) for legal research and drafting can seem efficient, but for non-lawyers, it carries significant, often hidden, risks. The core issue is that AI interactions lack legal privilege, meaning anything you input or generate can become discoverable evidence against you in a lawsuit. This article explores these dangers, especially for disability, life, and Accidental Death & Dismemberment (AD&D) insurance claims under both ERISA and state law. While AI offers tools, it cannot replace human legal judgment or the protection of an attorney. Unsupervised AI use by non-lawyers can lead to serious legal and financial problems, making caution and professional legal guidance essential.
The point of this article is to inform you of the risks. The benefits of AI, and there are plenty, may well outweigh those risks—and many of the risks can be fully mitigated.
Introduction: Why AI Isn’t Your Lawyer
AI tools can speed up legal research and drafting, making complex tasks seem easier for non-lawyers dealing with legal issues or personal claims. For instance, AI can help brainstorm, summarize documents, or prepare for discussions. Even established legal platforms like Westlaw and LexisNexis have used AI for years to improve search results.
However, this convenience comes with major legal and ethical dangers, especially when non-lawyers use AI without legal supervision. The perceived privacy of AI tools is misleading because they don’t offer the same protection as an attorney-client relationship.
The Core Problem: No Privilege, Everything’s Discoverable
A critical risk for non-lawyers using AI for legal matters is the complete absence of attorney-client privilege. As Creative Counsel Law once tweeted, “There’s no legal privilege when you use ChatGPT.” This means your questions, information, or documents shared with AI, and the AI’s responses, are not confidential. Instead, they are considered “discoverable evidence” in any future legal dispute.
For attorney-client privilege to exist, communication must be confidential, between an attorney and client, and for the purpose of getting legal advice. When you put sensitive information into public AI tools, you’re essentially sharing it with a third party (the AI provider), which usually waives any privilege. Even if no human at the AI company sees your data, the AI system might “learn” from it and store it on third-party servers, which can be enough to lose privilege. This means your seemingly private legal inquiries can become public and discoverable, regardless of your intent to keep them confidential.
Key Risks for Non-Lawyers Using AI in Legal Matters
1. AI Makes Stuff Up: Fake Laws, Wrong Cases, Bad Advice
AI models are trained on existing data and don’t update in real-time, leading them to sometimes generate incorrect, outdated, or entirely made-up information, known as “hallucinations”. These can include fake laws, wrong case citations, or misleading legal conclusions. Relying on AI’s confident but unverified answers can lead to significant legal mistakes.
Courts have penalized lawyers who submit unverified AI outputs, especially those with fabricated legal citations, leading to public reprimands, disqualification, and disciplinary actions. A particularly tricky hallucination is the “wrong case, right law” phenomenon, where AI invents fake case names for otherwise correct legal statements, making errors hard to spot.
Crucially, non-lawyers, including those representing themselves (pro se litigants), are held to the same legal standards. They also face serious consequences, like case dismissal, for submitting court documents with unverified or fabricated AI content. Courts emphasize that everyone must verify all legal citations and facts before submitting them.
2. What You Type Could Be Used Against You
Lawyers have a strict duty to protect client information. However, when a non-lawyer inputs sensitive data—like medical records, financial details, or police reports—into public or free AI tools, they risk exposing this confidential information. These platforms might store, keep, or use your data for training their models, potentially making it accessible to others. This act of sharing with a third party typically waives attorney-client privilege.
Even if you use a private AI system within an organization, if a non-lawyer inputs privileged information, others using the same model could potentially access it, leading to a breach. Whether privilege is maintained often depends on the AI provider’s terms of service regarding data logging, storage, and access. Court orders requiring AI providers to keep chat logs, even if users try to delete them, further show that these interactions can become permanent, discoverable records.
Key factors influencing confidentiality and privilege waiver with AI:
Public/Free AI: Data is often retained for model training, leading to a high likelihood of waiver.
Paid/Enterprise AI: Offers more user control over data retention and often contractually prohibits use for training, reducing risk.
Legal-Specific AI: Designed to maintain privilege with secure retention and contractual prohibitions on training use, offering the lowest risk.
3. Using AI May Cross Into the Unauthorized Practice of Law
The unauthorized practice of law (UPL) occurs when someone provides legal advice or services without a license. While the exact definition varies, it generally includes advising on legal rights, drafting legal documents that affect rights, or representing others in legal proceedings.
For non-lawyers, using AI to generate specific legal advice or draft legal documents for others can be UPL, leading to severe penalties like injunctions, contempt of court, or even criminal charges. Even if a non-lawyer is supervised by an attorney, improper AI use that results in UPL can lead to ethical violations for the supervising lawyer. AI’s ability to generate seemingly authoritative legal advice can blur the line between “information” and “advice,” making it easy for non-lawyers to inadvertently engage in UPL.
4. AI’s Limitations: No Nuance, Context, or Judgment
AI tools lack the human qualities essential for legal practice. They cannot understand strategic implications, jurisdictional differences, or the unique factual and business contexts needed for sound legal advice and drafting. AI cannot perform conflict checks, protect attorney-client privilege, or fulfill the ethical duties of a lawyer.
Many AI models are “black boxes,” meaning their decision processes are opaque, making it hard for anyone, especially a non-lawyer, to assess their reliability, reasoning, or biases. This lack of transparency means errors are hard to trace and correct, potentially undermining the integrity of legal processes.
5. Inherent Biases and Unfair Outcomes
AI systems learn from vast datasets. If these datasets contain historical human biases or discriminatory practices, the AI can perpetuate and even amplify those biases. This is a major concern in legal contexts, as biased AI can lead to unfair or discriminatory results, particularly in insurance claims or criminal justice.
For example, AI used in insurance might base decisions on historical data that is skewed against certain groups or claim types, leading to “disparate impact discrimination”. This can expose individuals to legal challenges under anti-discrimination laws.
6. AI Can Hurt You—But Can’t Be Held Responsible
Unlike human legal professionals, AI tools have no legal liability for their errors or “hallucinations”. If a non-lawyer relies solely on AI for legal guidance and suffers a negative outcome, there’s no AI entity to hold responsible or sue for damages.
Furthermore, traditional legal malpractice insurance policies may not cover AI-related errors, potentially leaving individuals or firms without coverage. This “silent AI” problem means AI-driven risks might not be explicitly covered or excluded by existing insurance, leading to significant financial exposure.
AI Interactions as Discoverable Evidence
1. AI Chat Logs: A New Source of Evidence
AI chat logs, including both your prompts and the AI’s responses, are increasingly becoming targets for discovery in civil lawsuits. Courts view these interactions as potential evidence, similar to internet search history.
A federal judge recently ordered OpenAI to preserve all ChatGPT chat data in a copyright infringement case, regardless of OpenAI’s privacy policies or commitments to delete data. This suggests a trend toward long-term legal accountability for AI interactions, even if users try to delete them. What seems like a private conversation with an AI chatbot can become a permanent, discoverable record, potentially revealing sensitive information that could be used against you.
Examples of AI-generated content leading to sanctions or adverse outcomes:
Mata v. Avianca (S.D.N.Y.): An attorney used ChatGPT for legal research, resulting in fabricated citations. The attorney faced sanctions, public reprimand, and disqualification.
Alabama Dept. of Corrections case: Three attorneys were sanctioned, publicly reprimanded, and disqualified for submitting erroneous motions with fabricated citations generated by ChatGPT.
Al-Hamim v. Star Hearthstone LLC (Colorado Appellate Court): A pro se litigant’s appeal brief prepared with generative AI contained fabricated citations. The court affirmed the dismissal of his claims and warned of future sanctions.
Bankruptcy case (Federal Bankruptcy Court): A pro se debtor admitted using AI to generate legal arguments and case citations, which were found to be misrepresented or fictitious. Because the case was dismissed on independent grounds, no sanctions were imposed, but the court issued a clear warning.
2. Waived Privilege: An Open Book for Opponents
As discussed, using a public AI tool for legal research or drafting likely waives attorney-client privilege because the communication is shared with a third party (the AI provider). This means any “legal advice” sought, “legal questions” asked, or “legal drafting” assisted by AI can be freely discovered by an opposing party. Creative Counsel Law’s tweeted warning—“No attorney-client privilege. No confidentiality. No ethical duty. No one to protect you.”—directly links using general AI for legal matters to an immediate loss of confidentiality.
For non-lawyers, this can be very damaging. Your AI interactions could reveal your internal thought processes, strategic considerations, perceived weaknesses in your case, or even admissions that would otherwise be protected if you had consulted a licensed attorney. This unprotected data could then be used by the opposing party to weaken your claim or defense.
3. Court Sanctions for Unverified AI Use
Courts are increasingly strict about the integrity of legal filings assisted by AI, imposing severe sanctions for submitting unverified AI-generated content. These can include public reprimands, disqualification, fines, and referrals to disciplinary bodies.
It’s crucial for non-lawyers to understand that pro se litigants are not exempt from these standards. Courts consistently hold self-represented individuals to the same diligence and truthfulness as licensed attorneys. Failing to verify AI outputs, especially fabricated citations or misleading information, can lead to serious consequences, including the dismissal of your claims. Beyond formal sanctions, unverified AI use can harm your reputation, waste resources, and undermine the integrity of the judicial system.
AI in Insurance Claims: ERISA vs. State Law
1. Insurer Use of AI: Efficiency vs. “Wrongful AI”
Insurance companies are rapidly adopting AI to improve efficiency, streamline underwriting, and detect fraud. AI can analyze vast amounts of data, identify patterns, and automate tasks, speeding up the claims process.
Insurers use AI for:
Risk Assessment and Underwriting: Analyzing data to create risk profiles and determine premiums.
Claims Processing: Automating data entry, summarizing reports, and prioritizing cases.
Fraud Detection: Identifying anomalies and flagging suspicious activities, including manipulated photos and “deepfakes”. The UK, for example, saw a 300% increase in claims with manipulated documents from 2021-2023 due to deepfake technology.
However, AI’s focus on “leakage management” (cost reduction) can lead to unfair claim denials or delays. This creates a conflict of interest where algorithms might prioritize saving money over fair payouts.
Concerns about insurer AI include:
Bias and Lack of Context: AI systems trained on historical data can perpetuate biases, leading to discriminatory outcomes or undervalued claims.
Lack of Transparency: Insurer AI algorithms are often “black boxes,” making their decision-making processes opaque and hard for claimants to challenge.
Dehumanization: Replacing human interaction with AI responses can erode trust and increase frustration for policyholders.
Unregulated Environment: Unlike medical algorithms, insurance AI tools are largely unregulated, lacking independent testing for safety or fairness.
2. Discoverability in Insurance Disputes: A Tale of Two Laws
The discoverability of AI-generated content in insurance claims depends heavily on whether the claim falls under ERISA (federal law) or state law.
ERISA Claims: Limited Discovery
ERISA (Employee Retirement Income Security Act) governs most employer-sponsored disability, life, and AD&D insurance plans. ERISA litigation typically limits evidence to the “administrative record,” which includes all documents and information submitted to the plan administrator during the initial claim and appeal. This means new evidence usually cannot be introduced in court.
If a claimant uses AI to help with their claim or appeal (e.g., drafting arguments, summarizing medical records), those AI interactions and outputs, if submitted to the insurer, become part of this discoverable administrative record. The administrative appeal is often the claimant’s “last chance to enter new evidence”. Exceptions to this limited discovery are narrow, usually applying only in “de novo review” cases or to show insurer bias.
State Law Claims (e.g., Bad Faith): Broader Access
State law insurance claims, especially those alleging “bad faith” against an insurer, generally allow for much broader discovery. In bad faith litigation, the insurer’s entire claims file—including internal communications, claims diaries, and even underwriting manuals—is often discoverable.
This broader scope means claimants can potentially obtain information about how an insurer’s AI tools were used in processing their claim. If you allege that an insurer’s AI-driven processes led to a biased or unfair denial, you might be able to discover internal AI policies, AI-generated reports, or even information about the AI’s training data to prove bad faith. This allows claimants to probe the “black box” of AI decision-making.
Key differences in discovery scope:
ERISA: Governed by federal law, discovery is limited to the administrative record. No jury trials.
State Law: Governed by state law, may allow broader discovery, including the insurer’s internal AI processes in bad faith cases. May allow jury trials.
3. Your AI Use: A Potential Source of Adverse Evidence
For a non-lawyer pursuing an insurance claim, any AI interactions related to that claim can become discoverable evidence. This includes prompts used to research medical conditions, draft appeal letters, or seek general “legal advice”.
If these AI interactions reveal inconsistencies with your claim, admissions against interest, or reliance on fabricated information (e.g., AI hallucinations), the insurer can use them to undermine your credibility or the validity of your claim. For example, if you ask an AI tool about a medical condition in a way that contradicts your medical records, or if you draft an appeal letter with AI that includes unverified facts, these interactions could be used as adverse evidence. AI inputs and outputs can “tend to prove or disprove an alleged state of mind when an author’s or custodian’s subjective understanding or knowledge is at issue”.
The “Silent AI” problem, where AI-driven risks are not explicitly covered by existing insurance policies, can also extend to claimants. If a non-lawyer relies on AI for their claim and makes an error or misrepresentation, their own liability insurance (if applicable) might not cover AI-related errors, leaving them exposed to financial losses or legal repercussions.
Mitigating Risks: How Non-Lawyers Can Use AI Safely
1. Do Your Due Diligence: Understand AI Tools
Before using any AI tool for legal research or drafting, non-lawyers must thoroughly review the AI provider’s terms of service, privacy policies, and data retention practices. Ask: How is data stored? Is it used for training? Can it be permanently deleted? Is the platform designed for general consumers or legal professionals?
Prioritize enterprise-grade or legal-specific AI tools that offer strong privacy and data control.
Avoid free or public AI tools for any sensitive or confidential legal information.
2. Safeguard Confidentiality: Protect Sensitive Information
The safest approach is to never input any confidential, sensitive, or privileged information into general-purpose or public AI tools. This includes personal identifiable information (PII), medical records, financial data, or specific details about a legal claim.
If AI assistance is necessary, ensure data is thoroughly anonymized by removing all PII. Implement clear guidelines if AI tools are used in privileged contexts and prohibit employees from using personal AI accounts for work-related legal matters.
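For readers comfortable with a bit of scripting, one practical way to apply this advice is to run text through a simple redaction pass before pasting it into any general-purpose chatbot. The sketch below is a minimal illustration in Python, not a tool mentioned anywhere in this article: the patterns, placeholder labels, and sample text are my own hypothetical examples, and a script like this catches only obvious identifiers (Social Security numbers, phone numbers, emails, dates, claim numbers), not names, addresses, employers, or medical details.

```python
import re

# Illustrative patterns only -- a starting point, not a complete PII scrubber.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                # Social Security numbers
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),    # US-style phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),        # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),         # dates like 3/14/2021
    (re.compile(r"\b(?:policy|claim)\s+(?:no\.?|number|#)\s*[A-Za-z0-9-]+", re.I), "[CLAIM NUMBER]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder labels before sharing text with an AI tool."""
    for pattern, label in REDACTIONS:
        text = pattern.sub(label, text)
    return text

if __name__ == "__main__":
    sample = ("My SSN is 123-45-6789, my doctor can be reached at 818-555-0100, "
              "and the insurer referenced claim number CL-998877 in its denial letter.")
    # Prints the sample with the SSN, phone number, and claim number replaced by placeholders.
    print(redact(sample))
```

Even with a pass like this, reread the redacted text yourself before sharing it; automated patterns miss context, and the safest course is still to keep sensitive claim details out of public AI tools altogether.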
3. Avoid UPL: Know AI’s Boundaries
Non-lawyers should see AI as a sophisticated information retrieval and brainstorming tool, not a substitute for legal expertise or a source of definitive legal advice. AI cannot provide tailored legal advice, exercise professional judgment, or represent individuals in legal proceedings.
Do not use AI to provide legal advice to others, draft legal documents for third parties, or make substantive legal decisions without attorney review.
Always consult a licensed attorney for specific legal guidance, interpretation of complex laws, or when preparing formal legal documents. AI should support, not replace, human legal oversight.
4. The “Human in the Loop” is Essential: Verify Everything
Maintaining a “human in the loop” is the most critical mitigation strategy when using AI for legal tasks. Non-lawyers must critically review all AI-generated content for accuracy, relevance, and potential biases. Every citation, factual statement, and legal conclusion from AI must be independently verified against original, authoritative sources. Do not simply trust the AI; it is a tool to augment human intellect, not replace it.
This vigilance is especially important due to the risk of “hallucinations” and because pro se litigants are held to the same standards as attorneys regarding the truthfulness of court filings.
5. When to Engage Qualified Legal Counsel
For any complex legal matter, especially those involving potential litigation, sensitive personal information, significant financial stakes (like disability, life, or AD&D insurance claims), or when AI’s limitations become apparent, engaging a licensed and experienced attorney is paramount.
An attorney can provide privileged legal advice, ensure compliance with all applicable legal and ethical rules, navigate discovery complexities, and advocate effectively on your behalf. They can also help assess the risks and benefits of using specific AI tools in your situation and guide you on appropriate safeguards.
Conclusion: Proceed with Caution and (possibly) Professional Guidance
AI offers exciting possibilities for legal tasks, but for non-lawyers, it comes with significant legal and ethical risks. The lack of attorney-client privilege means your AI interactions can become discoverable evidence, potentially used against you. AI’s tendency to “hallucinate” and its inherent biases demand meticulous human oversight and verification. Courts are increasingly sanctioning parties, including self-represented individuals, for submitting unverified AI content.
In insurance claims, AI is used by insurers for efficiency, but this can lead to “wrongful AI” and unfair denials. The rules for discovering AI-related information differ significantly between ERISA and state law claims. Your own AI use can inadvertently create adverse evidence.
Ultimately, while AI can be a powerful tool, its unsupervised use by non-lawyers in legal contexts can lead to severe legal and financial repercussions. Navigating this evolving landscape requires extreme prudence, diligent verification of all AI-generated content, and, most importantly, the indispensable guidance of qualified legal counsel to protect your rights and interests.
FAQ: Using AI for Disability and Life Insurance Claims—What You Need to Know
1. Can I use AI tools like ChatGPT to help with my disability or life insurance claim?
A: Yes—but you shouldn’t rely on it alone.
AI can help organize information or brainstorm ideas, but it can’t replace a well-thought-out, well-researched presentation—the kind of work lawyers are trained to provide. Anything you input may become discoverable, and AI doesn’t understand legal nuance, appeal deadlines, or the strategic framing of insurance denials. Use it as a tool, not a solution.
2. Are my conversations with AI tools private when dealing with an insurance claim?
A: Yes and No—AI tools do not offer confidentiality or legal privilege.
When you use tools like ChatGPT or Google Gemini, your data may be stored and used for training. You may be able to stop this in the privacy settings. But the reality is that LLMs and AI generally don’t use your private information in a way that connects it back to you. It’s up to you whether you’re comfortable with this. What these platforms don’t provide is attorney-client privilege, which means insurers or opposing counsel could subpoena your AI prompts and responses. That’s scary.
3. Is it safe to use AI to write a disability or life insurance appeal letter?
A: No—not unless you verify every detail.
AI tools may “hallucinate” fake case law or facts. They often misunderstand policy language. They become biased by what they perceive to be your goal. And they do all of this in a profoundly convincing way—the responses are often so polished that they’re easy to believe. Do not take them at face value. Verify. Every. Detail. Submitting an AI-drafted appeal without review can damage your case and might permanently lock in errors, especially under ERISA rules that limit future evidence.
4. Can an insurance company use my AI-generated content against me?
A: Yes—AI prompts and outputs can become evidence.
If you use AI to draft statements, summarize records, or analyze your case, that content can be discoverable. If it contradicts your claim or includes inaccurate information, the insurer may use it to deny benefits or undermine your credibility.
5. Does using AI for insurance claim advice count as unauthorized practice of law?
A: It can—especially if you’re helping someone else or giving legal advice.
Only licensed attorneys can legally interpret policies, advise on legal rights, or prepare legal documents for others. Using AI to do any of those things without supervision may violate state laws against unauthorized practice of law.
6. Can I use AI tools for legal research about disability or life insurance law?
A: Not safely.
Free AI tools often generate outdated, incorrect, or imaginary legal information. Subscription AI tools (like those offered by Westlaw or Lexis) are far less prone to “hallucinations,” but they often struggle with relevance and nuance. Even if the output sounds accurate, it may be based on the wrong jurisdiction—or entirely made up. Without legal training, it’s difficult (but not impossible) to verify what’s real and what’s not. If you know what you’re doing, AI can be a useful legal research tool. If you don’t know what you’re doing, I recommend you not treat your own claim as the time to learn “on the job.”
7. What are the risks of using AI in an ERISA long-term disability or life insurance appeal?
A: Your AI mistakes can become permanent.
ERISA generally limits court review to the claim file and administrative appeal. If you use AI to help with your ERISA appeal and submit flawed or misleading content, you may not be allowed to fix it later. Those errors can cost you your benefits.
8. How is AI treated differently in bad faith insurance lawsuits under state law?
A: State law claims allow broader discovery of AI-generated content.
In “bad faith” lawsuits involving life or disability insurance, your AI use can be fully investigated. Courts may compel you to produce chat logs or drafts from AI tools, especially if they shaped your complaint or legal arguments. Perhaps worse: erasing them might be viewed as destroying evidence (spoliation, in legal speak) and result in an adverse inference instruction being given to the jury.
9. What could happen if I represent myself using AI in an insurance dispute?
A: You risk sanctions, dismissal, or losing your claim entirely.
Courts have sanctioned people for filing fake AI-generated case law—even if they didn’t realize it was fake. Self-represented individuals are held to the same legal standards as attorneys, and AI won’t shield you from consequences. On top of that, AI doesn’t owe you a duty of competence. Unlike a lawyer, AI is not responsible for its mistakes.
10. When should I avoid using AI and hire a real lawyer for an insurance claim?
A: Any time your benefits, money, or rights are on the line is a good time to talk to an attorney.
If your disability, life, or accidental death and dismemberment (AD&D) claim has been denied—or you’re appealing or preparing for litigation—you may need a lawyer. AI can’t offer strategy, protection, or privilege. Dorian Law can.
About the Author
Brent Dorian Brehm is the founder of Dorian Law, a firm dedicated to representing individuals denied disability, life, and AD&D insurance benefits. Based in Calabasas, Brent has nearly 20 years of experience handling complex ERISA and state law insurance claims throughout California and across the U.S. He is known for his strategic litigation style, client-centered advocacy, and unwavering commitment to holding insurance companies accountable—whether in state court, federal court, or at the appeal stage.