
AI Legal Advice Risks: What Clients and Law Firms Need to Know
August 12, 2025
Author: David Lipman, Esq., Chief Legal Officer | General Counsel, Kanner & Pintaluga
Artificial intelligence (AI) is reshaping the legal landscape faster than nearly any previous technology. From contract drafting to claims evaluation, AI tools are changing how lawyers and clients approach case preparation, research, and communication. But along with its potential for speed and efficiency comes a new wave of risk—especially when AI is used without the proper guardrails.
Recent reports show that clients and law firms are turning to legal AI in different ways. Some clients, eager to move faster or save money, are using public AI platforms to draft contracts, claim summaries, or even demand letters before ever contacting an attorney. Others assume that when their lawyers use AI, the result is automatically faster and cheaper. In reality, both assumptions can be problematic, creating AI legal advice risks that jeopardize outcomes.
What Legal AI Can—and Cannot—Do Reliably
Are AI-generated legal documents valid? It depends on the document, the jurisdiction, and whether the content accurately reflects current law and your facts. A notary stamp on a flawed template won’t fix legal defects. Always have an attorney review critical documents before you sign or file them.
Used properly, AI can be a powerful assistant. However, it can miss the legal nuances of unusual or complex scenarios. The safest, most reliable applications are usually organizational and administrative—not strategic or determinative.
What AI Generally Does Well
Summarizing and organizing: Turning long medical records, deposition transcripts, or email threads into digestible summaries that help a human reviewer triage issues faster.
Draft support: Generating first-draft talking points, issue lists, or document outlines that a lawyer then edits and localizes to the facts and jurisdiction.
Pattern spotting: Highlighting recurring terms in contracts, flagging missing dates or signatures, or grouping similar documents for faster review (a simple sketch follows this list).
Task automation: Routing documents, labeling files, or extracting metadata so attorneys can focus on analysis and strategy.
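To make "pattern spotting" concrete, here is a minimal, hypothetical Python sketch of the kind of mechanical triage a review tool might perform. The folder name, file format, and patterns are all invented for illustration; real review platforms are far more sophisticated, and every document still goes to a human reviewer.

```python
import re
from pathlib import Path

# Illustrative only: flag contract files that appear to lack a date or a
# signature block, so a human reviewer can look at them first.
DATE_PATTERN = re.compile(
    r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"                                 # 03/15/2025
    r"|\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b"   # March 15, 2025
)
SIGNATURE_HINTS = ("signature", "signed by", "/s/")

def triage(folder: str) -> list[str]:
    """Return names of files that appear to be missing a date or signature."""
    flagged = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        has_date = bool(DATE_PATTERN.search(text))
        has_signature = any(hint in text.lower() for hint in SIGNATURE_HINTS)
        if not (has_date and has_signature):
            flagged.append(path.name)
    return flagged

if __name__ == "__main__":
    for name in triage("contracts"):  # "contracts" is a made-up folder name
        print(f"Needs human review: {name}")
```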
Where AI Is Risky Without Attorney Oversight
Legal analysis and conclusions: Determining liability, interpreting statutes, or advising on strategy involves judgment, jurisdictional nuance, and ethics—areas where human accountability is essential.
Deadline and venue decisions: Calculating limitation periods or choosing where to file can be outcome-determinative; even small AI miscalculations can end a case before it starts (see the date-math sketch after this list).
Jurisdiction-specific drafting: “One-size-fits-all” language from a model can contradict local rules, court preferences, or state statutes.
Privilege and confidentiality management: Deciding what can be shared, with whom, and when is a legal judgment—not a software feature.
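To see why deadline math deserves a lawyer's eyes, here is a minimal sketch assuming a hypothetical two-year limitations period. The code is illustrative only; real limitation periods, tolling rules, and notice prerequisites vary by claim and jurisdiction.

```python
from datetime import date

# Hypothetical two-year limitations period computed naively from an
# incident date. The arithmetic is trivial; the legal judgment is not.
# This ignores tolling, the discovery rule, minority or incapacity,
# weekend/holiday rollover, and pre-suit notice requirements: exactly
# the nuances a generic AI template can get wrong.
def naive_limitations_deadline(incident: date, years: int = 2) -> date:
    try:
        return incident.replace(year=incident.year + years)
    except ValueError:
        # A Feb 29 anniversary in a non-leap year: even the calendar math
        # has edge cases a copy-pasted formula can silently mishandle.
        return incident.replace(year=incident.year + years, day=28)

print(naive_limitations_deadline(date(2024, 2, 29)))  # 2026-02-28, but is that legally correct?
```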
A common question we hear is, “Can I use ChatGPT for legal advice?” Generative AI and law do intersect productively, but AI cannot take the place of a licensed professional. It is not bound by the duties of competence, loyalty, and confidentiality. Treat its outputs as drafts or hypotheses—not as legal advice.
The Risk of Client-Generated AI Documents
When clients paste confidential information, strategy notes, or case details into public AI tools, they may unintentionally waive privilege or confidentiality. Many consumer-grade AI systems store prompts, and some even use them to train future responses. That means sensitive details could be exposed or discoverable later. Before uploading anything to a public AI large language model (LLM), consult your attorney, who can explain the risks of sharing privileged or confidential information.
Even if no privacy issue arises, AI-drafted demand letters, notices, or complaints often carry subtle errors that have outsized consequences: the wrong defendant name, a missed exhibit, an incorrect date, or a statement that conflicts with medical records. Opposing insurers and defense counsel scrutinize these documents closely; mistakes can weaken negotiation leverage or credibility.
Hallucinations, Outdated Law, and Citation Errors
AI can “hallucinate”—presenting false facts or made-up citations with confidence. Add in template misfits, and you have a recipe for trouble. Here are common failure modes that can jeopardize claims and settlements:
- Hallucinated case law: The model cites non-existent opinions or misquotes holdings. If filed, this can draw sanctions or at least damage credibility.
- Outdated statutes or rules: AI trained on older material may miss recent amendments, appellate decisions, or emergency orders that change deadlines or standards.
- Wrong venue, service, or deadlines: Filing in the wrong court, miscalculating statutes of limitation, or missing service rules can terminate claims outright.
- Template carryover errors: Copy-paste artifacts (wrong party names, jurisdictions, or amounts) are easy to miss and can be used to challenge authenticity or intent.
Privacy, Confidentiality, and Ethics
Legal matters are governed by strict duties of confidentiality and privilege. Uploading facts to public AI tools can route data through multiple vendors, countries, and retention layers you don’t control.
Sharing strategy, attorney communications, or client identifiers with third-party systems may be treated as disclosure to a third party, potentially affecting privilege depending on the circumstances and jurisdiction. Law firms have an ethical duty to supervise non-lawyer assistants and vendors. That includes understanding data flows, storage, retention, and security—and informing clients appropriately.
Clients should expect their lawyers to use enterprise-grade, closed-model tools with contractual assurances that prompts and outputs are not used to train public models and are stored with strict access controls.
Where Your Data Goes When You Paste Case Details
When you paste information into a public chatbot, your text is typically transmitted to the provider’s servers for processing. Depending on the provider and your settings:
- Prompts may be logged and retained for debugging or product improvement.
- Human reviewers may sample inputs/outputs for quality control.
- Data may be stored in regions outside your jurisdiction.
- Deletion options may be limited or delayed, and backups can persist.
- Integrations (browser extensions, plug-ins) can add additional data processors you didn’t intend to use.
By contrast, secure legal AI environments use encryption at rest and in transit, strong access controls, data residency guarantees where possible, and contractual prohibitions on training with client data. Ask your lawyer what platform they use and why.
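If you do experiment with a public tool before speaking with counsel, at minimum strip obvious identifiers first. Below is a minimal, hypothetical Python sketch of client-side redaction. The patterns are invented for illustration, and crude pattern matching will still miss names, case numbers, and context that can identify you, which is one more reason to involve an attorney early.

```python
import re

# Hypothetical sketch: scrub obvious identifiers before any text leaves
# your machine. This reduces risk rather than eliminating it.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Call 555-867-5309 or email jane.doe@example.com about SSN 123-45-6789."
print(redact(sample))
```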
AI and the Economics of Legal Work
AI has created friction between clients and firms in the traditional billable-hour model. Some firms are using AI to do more in less time, then billing as though it took hours of manual work. Others are keeping efficiency gains to themselves, while clients wonder where the savings went.
Our firm operates differently. We don’t bill by the hour. We work on a contingency fee basis, meaning we’re paid only if we recover money for our clients. Our interests are fully aligned with yours: if AI helps us move faster or build a stronger case, you benefit directly. The technology becomes a tool for justice and efficiency—not a hidden cost. And because outcomes—not hours—govern our compensation, we have every incentive to use AI where it is safe and to invest lawyer time where it truly matters.
Responsible, Transparent Use of AI by Law Firms
Responsible use starts with clear policies and ends with human accountability. We use AI thoughtfully and transparently. Whenever we apply AI-assisted tools in your case—whether for research, document review, or analysis—we disclose that use in advance and obtain your informed consent.
We use only secure, closed-source, enterprise-grade platforms that protect confidentiality and do not share or train on client data. Lawyers and law firms should tell clients at engagement that they use AI in their practice, and best practice is to rely only on secure, closed-source LLMs.
Your lawyer should never delegate legal judgment to AI; independent legal judgment is what defines the profession. AI cannot substitute for independent thought and analysis. We treat AI like a paralegal, not as a replacement for the work product of a human attorney.
What Your Lawyer Should Disclose About AI Use
When contacting a lawyer about a personal injury, property damage, or any other legal issue, you should expect transparency about whether they use AI and, if so, how. Here’s what you should expect your lawyer to disclose:
- Which AI tools they use and why those tools were selected.
- Whether the tools are closed, enterprise-grade systems with no training on your data.
- What categories of tasks AI will support (e.g., summarization) versus what will always be done by attorneys (e.g., legal analysis, strategy, negotiations).
- How citations and facts are verified before anything is filed or shared.
- How your data is stored, who can access it, and how long it’s retained.
- How the firm ensures compliance with ethics rules and supervises technology vendors.
When to Talk to a Lawyer Instead of AI
If you’re weighing a lawyer versus AI for your legal questions, here are some things to consider. In general, AI is good at organizing information and drafting questions. You’ll need a lawyer to interpret the law, weigh risk, and make strategic decisions. And you should always talk to a lawyer about when to litigate and when to settle.
Think of AI as a calculator; the lawyer knows which equation to solve and why. And some situations demand licensed legal advice, not a chatbot draft, such as:
- Court filings and motions: Statutes of limitation, notice requirements, and service deadlines are unforgiving. Procedural missteps can derail a strong claim.
- Insurer or opposing party communications: Statements to adjusters, releases, or recorded interviews can affect liability and damages.
- Complex injuries or medical disputes: Causation, future care costs, and liens require strategy and expert coordination.
- Multi-party or disputed fault cases: Allocation of responsibility, contribution, and indemnity are fact- and venue-dependent.
- Settlement documents and releases: Language can extinguish claims you didn’t intend to waive.
How We Review AI-Drafted Materials Safely
Whether a draft originated from our secure tools or a client brings a chatbot-generated document, we follow a strict review protocol:
- Conflict and privilege screening: We remove extraneous personal data and confirm no privileged strategy is embedded before further processing.
- Source-of-truth check: We verify every fact against the underlying records—medical files, police reports, photographs, contracts, or correspondence.
- Jurisdictional tailoring: We adapt language to the correct venue, governing law, and local court preferences, and we confirm the right parties, case numbers, and captions.
- Citation validation: Every statute, regulation, or case citation is checked directly in authoritative databases. No unverified citations survive review (see the sketch after this list).
- Risk and strategy alignment: We evaluate whether the draft advances the client’s goals, strengthens negotiation posture, and preserves optionality.
- Accessibility and tone: We edit for clarity, client voice, and readability; we remove jargon that could be misinterpreted by adjusters, judges, or juries.
- Final attorney sign-off: A responsible lawyer approves the final version, assuming accountability for its accuracy and suitability.
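As a concrete illustration of the citation-validation step above, here is a minimal, hypothetical Python sketch that pulls reporter-style citations out of a draft so a person can check each one in an authoritative database such as Westlaw or Lexis. The draft text and pattern are invented for illustration; nothing in the code verifies that a cited case actually exists, and that judgment always stays with the attorney.

```python
import re

# Illustrative pattern covering a few common reporter formats, e.g.
# "123 F.3d 456", "567 U.S. 89", "90 So. 3d 12". Real citation formats
# are far more varied; this only builds a checklist for manual review.
CITATION = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|So\. ?(?:2d|3d))\s+\d{1,4}\b"
)

# Made-up draft text for demonstration.
draft = "See Smith v. Jones, 123 F.3d 456 (11th Cir. 1997); Doe v. Roe, 90 So. 3d 12."

for cite in CITATION.findall(draft):
    print(f"Verify in an authoritative database before filing: {cite}")
```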
AI doesn’t replace legal judgment, strategic thinking, or the human empathy that drives what we do. But when used responsibly, it can help us deliver even greater results. Our goal is to model that balance, leveraging these technological innovations while maintaining our clients’ trust and the integrity that defines the practice of law. At our firm, technology will never take the place of ethics or accountability. It’s simply another way we work to achieve the best outcomes for the people we represent.

