Does Your AI Lawyer Have a Fool for a Client? A Double-Edged Sword: AI, Legal Advice, and the Risk of Going It Alone

This street art–inspired image features a robotic lawyer in a suit with the tagline “Does Your AI Lawyer Have a Fool for a Client?”

AI is powerful and useful. But no AI chatbot can give you privileged legal advice, anything you type may become evidence, and courts are already sanctioning people for submitting unverified AI work.

“He who represents himself has a fool for a client.” —Abraham Lincoln


    (Updated August 8, 2025 for the GPT-5 era.)

    Before you trust AI with your disability, life, or AD&D claim, read this guide—or call Dorian Law.

    Using Artificial Intelligence (AI) for legal research and drafting can seem efficient, but for non-lawyers, it carries significant, often hidden, risks. The core issue is that AI interactions lack legal privilege, meaning anything you input or generate can become discoverable evidence against you in a lawsuit. This report explores these dangers, especially for disability, life, and Accidental Death & Dismemberment (AD&D) insurance claims under both ERISA and State Law. While AI offers tools, it cannot replace human legal judgment or the protection of an attorney. Unsupervised AI use by non-lawyers can lead to serious legal and financial problems, making caution and professional legal guidance essential.

    The point of this article is to inform you of risks. The benefits of AI—and there are plenty—may outweigh those risks or can be reduced with careful safeguards.

    Introduction: Why AI Isn’t Your Lawyer

    AI tools can speed up legal research and drafting, making complex tasks seem easier for non-lawyers dealing with legal issues or personal claims. For instance, AI can help brainstorm, summarize documents, or prepare for discussions. Even established legal platforms like Westlaw and LexisNexis have used AI for years to improve search results.

    However, this convenience comes with major legal and ethical dangers, especially when non-lawyers use AI without legal supervision. And smarter models don’t fix the basics: no privilege, broad discoverability, and real sanctions if you file unverified AI content. The perceived privacy of AI tools is misleading because they don’t offer the same protection as an attorney-client relationship.

    The Core Problem: No Privilege, Everything’s Discoverable

    A critical risk for non-lawyers using AI for legal matters is the complete absence of attorney-client privilege. As Creative Counsel Law once tweeted, “There’s no legal privilege when you use ChatGPT.” This means your questions, information, or documents shared with AI, and the AI’s responses, are not confidential. Instead, they are considered “discoverable evidence” in any future legal dispute.

    For attorney-client privilege to exist, communication must be confidential, between an attorney and client, and for the purpose of getting legal advice. When you put sensitive information into public AI tools, you’re essentially sharing it with a third party (the AI provider), which usually waives any privilege. Even if no human at the AI company sees your data, the AI system might “learn” from it and store it on third-party servers, which can be enough to lose privilege. Privacy toggles can reduce training/retention; they don’t create privilege. This means your seemingly private legal inquiries can become public and discoverable, regardless of your intent to keep them confidential.

    Key Risks for Non-Lawyers Using AI in Legal Matters

    1. AI Makes Stuff Up: Fake Laws, Wrong Cases, Bad Advice
    AI models are trained on existing data and can generate incorrect, outdated, or entirely made-up information, known as “hallucinations.” These can include fake laws, wrong case citations, or misleading legal conclusions. Relying on AI’s confident but unverified answers can lead to significant legal mistakes.

    Courts have penalized lawyers who submit unverified AI outputs, especially those with fabricated legal citations, leading to public reprimands and other sanctions. A particularly tricky hallucination is the “wrong case, right law” phenomenon, where AI invents fake case names for otherwise correct legal statements, making errors hard to spot.

    Crucially, non-lawyers, including those representing themselves (pro se litigants), are held to the same legal standards. They also face serious consequences, like case dismissal, for submitting court documents with unverified or fabricated AI content. Newer models still require human verification—every time.

    2. What You Type Could Be Used Against You
    Lawyers have a strict duty to protect client information. However, when a non-lawyer inputs sensitive data—like medical records, financial details, or police reports—into public or free AI tools, they risk exposing this confidential information. These platforms might retain your data or use it to train their models, potentially making it accessible to others. This act of sharing with a third party typically waives attorney-client privilege.

    Even if you use a private AI system within an organization, if a non-lawyer inputs privileged information, others using the same model could potentially access it, leading to a breach. Whether privilege is maintained often depends on the AI provider’s terms of service regarding data logging, storage, and access. Court orders can also reach server-side logs—even if you “deleted” your copy.

    Key factors influencing confidentiality and privilege waiver with AI:

    • Public/Free AI: Data may be retained for training—high waiver risk.

    • Paid/Enterprise AI: More control over retention and training—lower risk, still not privileged.

    • Legal-Specific AI: Designed for secure retention and no training—lowest risk, still not a lawyer.

    3. Using AI May Cross Into the Unauthorized Practice of Law
    The unauthorized practice of law (UPL) occurs when someone provides legal advice or services without a license. While the exact definition varies, it generally includes advising on legal rights, drafting legal documents that affect rights, or representing others in legal proceedings.

    For non-lawyers, using AI to generate specific legal advice or draft legal documents for others can be UPL, leading to severe penalties. Even if a non-lawyer is supervised by an attorney, improper AI use that results in UPL can lead to ethical violations for the supervising lawyer. AI’s ability to generate seemingly authoritative legal advice can blur the line between “information” and “advice,” making it easy for non-lawyers to inadvertently engage in UPL.

    4. AI’s Limitations: No Nuance, Context, or Judgment
    AI tools lack the human qualities essential for legal practice. They cannot understand strategic implications, jurisdictional differences, or the unique factual and business contexts needed for sound legal advice and drafting. AI cannot perform conflict checks, protect attorney-client privilege, or fulfill the ethical duties of a lawyer.

    5. Inherent Biases and Unfair Outcomes
    AI systems learn from vast datasets. If these datasets contain historical human biases or discriminatory practices, the AI can perpetuate and even amplify those biases. This is a major concern in legal contexts, as biased AI can lead to unfair or discriminatory results, particularly in insurance claims or criminal justice.

    6. AI Can Hurt You—But Can’t Be Held Responsible
    Unlike human legal professionals, AI tools have no legal liability for their errors or “hallucinations.” If a non-lawyer relies solely on AI for legal guidance and suffers a negative outcome, there’s no AI entity to hold responsible or sue for damages.

    Furthermore, traditional legal malpractice insurance policies may not cover AI-related errors, potentially leaving individuals or firms without coverage. This “silent AI” problem means AI-driven risks might not be explicitly covered or excluded by existing insurance, leading to significant financial exposure.

    AI Interactions as Discoverable Evidence

    1. AI Chat Logs: A New Source of Evidence
    AI chat logs, including both your prompts and the AI’s responses, are increasingly becoming targets for discovery in civil lawsuits. Courts view these interactions as potential evidence, similar to internet search history.

    A federal judge recently ordered OpenAI to preserve all ChatGPT chat data in a copyright case, regardless of privacy policies or deletion settings. This suggests a trend toward long-term accountability for AI interactions. What seems like a private conversation with an AI chatbot can become a permanent, discoverable record, potentially revealing sensitive information that could be used against you.

    Examples of AI-generated content leading to sanctions or adverse outcomes:

    • Mata v. Avianca (S.D.N.Y.): fabricated citations; sanctions.

    • Colorado appellate warning (pro se): fabricated citations risk sanctions.

    • Other courts: public reprimands and monetary penalties for fake or unverified AI cites.

    2. Waived Privilege: An Open Book for Opponents
    As discussed, using a public AI tool for legal research or drafting likely waives attorney-client privilege because the communication is shared with a third party. This means any “legal advice” sought, “legal questions” asked, or “legal drafting” assisted by AI can be freely discovered by an opposing party.

    3. Court Sanctions for Unverified AI Use
    Courts are increasingly strict about the integrity of legal filings assisted by AI, imposing sanctions for submitting unverified AI-generated content. Pro se litigants are held to the same standards as attorneys.

    Spoliation: Don’t Delete Your AI Trail

    Spoliation means destroying or altering evidence when litigation is pending or reasonably anticipated. In the AI context, that includes chat logs, prompts, uploads, and outputs. Delete them after you “should” have preserved them and you risk sanctions—up to adverse-inference instructions (the jury can assume the missing material would have hurt you).

    Quick preservation checklist (a short preservation-manifest sketch follows the list):

    • Issue a litigation hold (to yourself and anyone helping you).

    • Export/download relevant prompts, outputs, and attachments; screenshot if exporting isn’t available.

    • Disable auto-delete/ephemeral settings in chat tools and cloud storage.

    • Keep originals + metadata (files you pasted into AI and the exact outputs you used).

    • Date-stamp, don’t overwrite; avoid rename chains that break traceability.

    • Tell your lawyer which tools and accounts you used so preservation can be tailored.
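    For the “keep originals + metadata” and “date-stamp, don’t overwrite” items above, one simple approach is to build a read-only manifest of every exported file. The short Python sketch below is only an illustration (the folder and file names are hypothetical placeholders, not part of any particular chat tool): it records a SHA-256 hash, file size, and timestamps for each export so you can later show the files were not altered. Your lawyer may direct a different or more formal preservation method.

```python
# Minimal sketch: build a preservation manifest for exported AI chats and uploads.
# "ai_chat_exports" and "preservation_manifest.csv" are hypothetical names --
# point them at wherever you saved your exported prompts, outputs, and attachments.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

EXPORT_DIR = Path("ai_chat_exports")          # folder of exported chats/uploads
MANIFEST = Path("preservation_manifest.csv")  # manifest kept alongside the originals

def sha256_of(path: Path) -> str:
    """Hash file contents so later copies can be checked against the original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

with MANIFEST.open("w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "sha256", "size_bytes", "modified_utc", "recorded_utc"])
    for path in sorted(EXPORT_DIR.rglob("*")):
        if path.is_file():
            stat = path.stat()
            writer.writerow([
                str(path),
                sha256_of(path),
                stat.st_size,
                datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
                datetime.now(tz=timezone.utc).isoformat(),  # when this entry was recorded
            ])
```

    Keep the manifest with the untouched originals; if a file’s hash later changes, that is a sign the copy is no longer the one you preserved.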

    AI in Insurance Claims: ERISA vs. State Law

    1. Insurer Use of AI: Efficiency vs. “Wrongful AI”
    Insurance companies are rapidly adopting AI to improve efficiency, streamline underwriting, and detect fraud. AI can analyze vast amounts of data, identify patterns, and automate tasks, speeding up the claims process. But algorithms tuned to reduce “leakage” (industry shorthand for claim dollars the insurer believes it should not have paid) can tilt against fair payouts.

    2. Discoverability in Insurance Disputes: A Tale of Two Laws

    ERISA Claims: Limited Discovery
    ERISA litigation typically limits evidence to the administrative record—everything submitted to the plan during claim/appeal. If you submit AI-drafted arguments or summaries, those become part of the record. Errors can get locked in. Exceptions to limited discovery are narrow.

    State Law Claims (e.g., Bad Faith): Broader Access
    Bad-faith suits generally allow broader discovery, including the insurer’s internal processes and, in some cases, how AI tools influenced the claim. Your own AI use (prompts/drafts) can also be in play. Deleting them invites spoliation fights.

    Key differences:

    • ERISA: record-bound; no jury; tight discovery.

    • State law: wider discovery; potential jury; more room to probe AI decisions.

    3. Your AI Use: A Potential Source of Adverse Evidence
    Any AI interactions related to your claim—research prompts, draft letters, summaries—can be requested in discovery. If they contradict your claim or include inaccuracies, the insurer can use them to undermine credibility or deny benefits.

    Mitigating Risks: How Non-Lawyers Can Use AI Safely

    1. Do Your Due Diligence: Understand AI Tools
    Review terms of service and privacy policies. Prefer enterprise/legal-grade tools with strict data-handling. Avoid public/free tools for sensitive facts.

    2. Safeguard Confidentiality: Protect Sensitive Information
    Don’t paste PII, medical records, or full policy text into general-purpose AI. If you must, strip identifiers and keep it high-level.
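    If you do decide to paste text into a general-purpose tool, a small scrubbing pass can remove the most obvious identifiers first. The Python sketch below is only an illustration with a few example patterns chosen for this post (SSNs, phone numbers, emails, dates); it will not catch names, claim numbers, or most medical details, so read the result before sending it anywhere.

```python
# Minimal sketch: replace obvious identifiers with placeholders before pasting text
# into a general-purpose AI tool. The patterns are examples only and will miss names,
# claim numbers, and most medical details -- always review the output by hand.
import re

PATTERNS = {
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Swap each matched identifier for its placeholder tag."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

sample = "Reach me at 818-555-0123 or jane.doe@example.com; SSN 123-45-6789, DOB 4/15/1980."
print(redact(sample))
# -> "Reach me at [PHONE] or [EMAIL]; SSN [SSN], DOB [DATE]."
```

    Placeholders like [SSN] keep the text readable for the AI while removing the raw identifier, which is the point of the “strip identifiers and keep it high-level” advice above.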

    3. Avoid UPL: Know AI’s Boundaries
    Use AI for brainstorming and organization—not for final legal advice or drafting for others.

    4. The “Human in the Loop” is Essential: Verify Everything
    Every citation, fact, and legal conclusion must be independently verified in primary sources and the correct jurisdiction.

    5. When to Engage Qualified Legal Counsel
    If your benefits or rights are on the line—or an ERISA appeal is looming—get counsel early. We’ll protect privilege, direct preservation, and keep the record clean.

    Conclusion: Proceed with Caution and (Possibly) Professional Guidance

    AI offers exciting possibilities for legal tasks, but for non-lawyers, it comes with significant legal and ethical risks. There’s no attorney-client privilege in your chats, your AI trail is discoverable, and deleting it when litigation is foreseeable can be spoliation. In ERISA cases, AI errors can get cemented into the record. Use AI as a tool—not as your lawyer. If your benefits or rights are at stake, get experienced counsel. Dorian Law can help.

    FAQ: Using AI for Disability and Life Insurance Claims—What You Need to Know

    1. Can I use AI tools like ChatGPT to help with my disability or life insurance claim?

    A: Yes—but you shouldn’t rely on it alone.
    AI can help organize information or brainstorm ideas, but it can’t replace a well-thought-out, well-researched presentation—something lawyers are trained to provide. Anything you input may become discoverable, and AI doesn’t understand legal nuance, appeal deadlines, or the strategic framing of insurance denials. Use it as a tool, not a solution.

    2. Are my conversations with AI tools private when dealing with an insurance claim?

    A: Yes and no—AI tools do not offer confidentiality or legal privilege.
    When you use tools like ChatGPT or Google Gemini, your data may be stored and used for training. You may be able to turn that off in the privacy settings, and in practice providers generally don’t use your information in a way that ties it back to you. But that is a company policy, not a legal protection, so it’s up to you whether you’re comfortable with it. What these platforms don’t provide is attorney-client privilege, which means insurers or opposing counsel could subpoena your AI prompts and responses. That’s scary.

    3. Is it safe to use AI to write a disability or life insurance appeal letter?

    A: No—not unless you verify every detail.
    AI tools may “hallucinate” fake case law or facts. They often misunderstand policy language. They skew toward what they perceive to be your goal. And they do all of this in a profoundly convincing way: the responses often sound so polished that it’s hard to believe they could be wrong. Don’t take them at face value. Verify. Every. Detail. Submitting an AI-drafted appeal without review can damage your case and might permanently lock in errors, especially under ERISA rules that limit future evidence.

    4. Can an insurance company use my AI-generated content against me?

    A: Yes—AI prompts and outputs can become evidence.
    If you use AI to draft statements, summarize records, or analyze your case, that content can be discoverable. If it contradicts your claim or includes inaccurate information, the insurer may use it to deny benefits or undermine your credibility.

    5. Does using AI for insurance claim advice count as unauthorized practice of law?

    A: It can—especially if you’re helping someone else or giving legal advice.
    Only licensed attorneys can legally interpret policies, advise on legal rights, or prepare legal documents for others. Using AI to do any of those things without supervision may violate state laws against unauthorized practice of law.

    6. Can I use AI tools for legal research about disability or life insurance law?

    A: Not safely.
    Free AI tools often generate outdated, incorrect, or imaginary legal information. Subscription legal AI tools (like those offered by Westlaw or Lexis) hallucinate far less often, but they can still struggle with relevance and nuance. Even if the advice sounds accurate, it may be based on the wrong jurisdiction—or entirely made up. Without legal training, it’s difficult (but not impossible) to verify what’s real and what’s not. If you know what you’re doing, AI can be a useful legal research tool. If you don’t, this is not the time to learn “on the job.”

    7. What are the risks of using AI in an ERISA long-term disability or life insurance appeal?

    A: Your AI mistakes can become permanent.
    ERISA generally limits court review to the claim file and administrative appeal. If you use AI to help with your ERISA appeal and submit flawed or misleading content, you may not be allowed to fix it later. Those errors can cost you your benefits.

    8. How is AI treated differently in bad faith insurance lawsuits under state law?

    A: State law claims allow broader discovery of AI-generated content.
    In “bad faith” lawsuits involving life or disability insurance, your AI use can be fully investigated. Courts may compel you to produce chat logs or drafts from AI tools, especially if they shaped your complaint or legal arguments. It may be even worse if you erase them: deletion can be viewed as destroying evidence (spoliation, in legal speak) and result in an adverse-inference instruction to the jury.

    9. What could happen if I represent myself using AI in an insurance dispute?

    A: You risk sanctions, dismissal, or losing your claim entirely.
    Courts have sanctioned people for filing fake AI-generated case law—even if they didn’t realize it was fake. Self-represented individuals are held to the same legal standards as attorneys, and AI won’t shield you from consequences. On top of that, AI doesn’t owe you a duty of competence. Unlike a lawyer, AI is not responsible for its mistakes.

    10. When should I avoid using AI and hire a real lawyer for an insurance claim?

    A: Any time your benefits, money, or rights are on the line is a good time to talk to an attorney.
    If your disability, life, or accidental death and dismemberment (AD&D) claim has been denied—or you’re appealing or preparing for litigation—you may need a lawyer. AI can’t offer strategy, protection, or privilege. Dorian Law can.

    About the Author

    Brent Dorian Brehm is the founder of Dorian Law, a firm dedicated to representing individuals denied disability, life, and AD&D insurance benefits. Based in Calabasas, Brent has nearly 20 years of experience handling complex ERISA and state law insurance claims throughout California and across the U.S. He is known for his strategic litigation style, client-centered advocacy, and unwavering commitment to holding insurance companies accountable—whether in state court, federal court, or at the appeal stage.
