The 15 Most Common AI Prompt Time Wasters

Every poorly structured AI prompt costs something: billable time spent reprompting, deal cycles slowed by generic analysis, compliance reviews built on assumptions nobody verified. Professionals who frame their inputs badly get outputs they cannot use, and the rework compounds fast.

The mistakes below surface constantly in legal and business environments. They fall into three categories: (i) prompts that misdirect reasoning, (ii) prompts that withhold critical context, and (iii) professional habits that turn good AI into rework. Each one is fixable in seconds, once you know what to look for.

The Prerequisite: Know What You Are Looking For

Before any of the fixes below matter, there is a more fundamental requirement: you need to understand the substance of what you are prompting about. AI amplifies competence, or it amplifies its absence.

A professional who understands the deal, the regulation, or the dispute can evaluate whether a model’s output is useful, incomplete, or wrong. Without that grounding, there is no reliable way to tell the difference. A contract review prompt might return a clean analysis, but if the user does not know the underlying transaction well enough, a critical indemnification gap passes unnoticed. The failure is not in the AI. It is in the distance between what the user knew and what the task required.

The practical implication is straightforward: (i) do your own preparation before you prompt, (ii) know what a good answer should look like in general terms so you can evaluate what comes back, and (iii) treat AI as a tool that accelerates informed judgment, not one that replaces it. Every prompt technique in this article works better when the person writing it already understands the project.

Mistakes That Misdirect Reasoning

1. Assumption Loading

“Since our contract is enforceable, what damages can we claim?” tells the model to skip the analysis that matters most. Reframe neutrally: “Is this contract enforceable under [law], and what damages might be available?” One extra clause up front eliminates an entire round of correction later.

2. Vague Jurisdiction and Scope

“Is this non-compete valid?” could apply to fifty different legal regimes. Specify governing law, role, compensation, and industry. “Is this non-compete enforceable under California law for a software engineer earning $90K?” gets you an answer you can act on without follow-up.

3. Compound Multi-Topic Prompts

Several questions in one input force the model to spread its analysis thin. Break complex inquiries apart, triage by priority, and synthesize the results yourself. You get deeper analysis on each point and spend less time sorting through a shallow omnibus response.
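The split-and-triage approach can be sketched in code. The helper below is a hypothetical illustration, not a prescribed tool: it takes a shared context plus a priority-tagged question list and emits one focused prompt per question, highest priority first.

```python
# Sketch: break one compound inquiry into prioritized single-topic prompts.
# The context string, questions, and priority labels are illustrative.

def split_inquiry(context: str, questions: list[tuple[int, str]]) -> list[str]:
    """Return one focused prompt per question, ordered by priority (1 = highest)."""
    prompts = []
    for priority, question in sorted(questions):
        prompts.append(f"{context}\n\nAnswer only this question: {question}")
    return prompts

inquiry = split_inquiry(
    "Context: asset purchase agreement governed by New York law.",
    [
        (2, "What indemnification caps are market-standard here?"),
        (1, "Is the non-solicit clause enforceable as drafted?"),
        (3, "What closing conditions are missing?"),
    ],
)
print(inquiry[0])  # the highest-priority question goes out first
```

Each prompt carries the full shared context, so no single-topic question loses the background the model needs.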

4. Goal Ambiguity

“Explain force majeure” produces a textbook overview. “Explain force majeure so I can assess whether our supplier’s claim triggers the clause in our 2022 agreement” produces something you can bring to a meeting. Tie every prompt to the decision it needs to support.

5. Omitting Documents and Key Facts

Without actual contract text, statutes, or data, the model generalizes. Paste the operative clauses. Include the dates and dollar figures. Concrete inputs produce specific outputs; abstract inputs produce abstract ones.

Context Gaps That Weaken Outputs

6. Role and Perspective Ambiguity

The same facts look different from the plaintiff’s side than the defendant’s. “What are the risks here?” lacks direction. “From the defendant’s perspective, what are the key exposure points?” gives the model a lens that shapes every sentence of its response.

7. Undefined Terms and Jargon

The model does not know what your team means by “SLA,” “MSA,” or “Phase 2.” Internal shorthand that goes undefined is an invitation for misinterpretation. A two-line glossary at the top of the prompt eliminates the problem entirely.

8. Temporal Blindness

“Is this compliant?” is unanswerable without dates. Which version of the regulation? When does the obligation mature? What is the deadline in the contract? Anchor every prompt in time: the applicable date, the effective period, and the deadline at issue.

Professional Habits That Generate Rework

9. Asking for Conclusions Instead of Reasoning

“Will we win this case?” invites a verdict you cannot use. Structure the prompt to return (i) the strongest arguments on each side, (ii) the factors that influence the outcome, and (iii) the assumptions each conclusion depends on. A framework for judgment beats a prediction every time.

10. Requesting Summaries When You Need Analysis

A summary tells you what a document says. An analysis tells you what it means for your situation. “Identify risky or missing clauses from the buyer’s perspective and flag ambiguities” gives you risk intelligence. “Summarize this contract” gives you a book report.

11. Over-Reliance on Templates

A “generic NDA template” has no governing law, no tailored provisions, and no assurance it will hold up. Specify (i) jurisdiction, (ii) the parties and their relationship, (iii) material terms and duration, and (iv) required provisions and carve-outs. Tailored inputs produce tailored drafts.
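The four specification points map naturally onto a structured drafting prompt. The function below is a minimal sketch with hypothetical field names and sample values; the point is that every required input is explicit, so nothing is left for the model to assume.

```python
# Sketch: assemble an NDA drafting prompt from the four specification points.
# All field names and sample values are hypothetical.

def nda_prompt(jurisdiction, parties, relationship, term, required_provisions):
    return (
        f"Draft a mutual NDA governed by {jurisdiction} law.\n"
        f"Parties: {parties} ({relationship}).\n"
        f"Confidentiality term: {term}.\n"
        "Required provisions: " + "; ".join(required_provisions) + ".\n"
        "Flag any clause where local law may limit enforceability."
    )

prompt = nda_prompt(
    jurisdiction="Delaware",
    parties="Acme Corp and Beta LLC",
    relationship="prospective acquirer and target",
    term="3 years from disclosure",
    required_provisions=["residuals carve-out", "injunctive relief", "return of materials"],
)
```

A structured builder like this also makes the omissions visible: a blank jurisdiction or empty provisions list is obvious before the prompt is ever sent.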

12. Accepting Unverifiable Outputs

Every claim the model makes without a citation is a claim you will have to verify by hand, on your own time. Build the check into the prompt itself: request sources and date-stamped authorities for every proposition. This makes it immediately clear when the model is extrapolating beyond its evidence base.

13. Inadequate Risk Framing

A list of risks without likelihood, impact, or mitigations cannot be prioritized. Instruct the model to (i) rank risks by probability and severity, (ii) propose mitigations for each, and (iii) outline a decision framework. That turns a worry list into an action plan.
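The ranking logic is simple enough to show directly. This sketch, with illustrative risk entries, scores each risk as probability times severity and sorts highest exposure first, which is the same ordering you would instruct the model to apply.

```python
# Sketch: rank risks by probability x severity so the worry list becomes
# a priority list. The risk entries below are illustrative only.

def rank_risks(risks: list[dict]) -> list[dict]:
    """Sort risks by probability * severity, highest exposure first."""
    return sorted(risks, key=lambda r: r["probability"] * r["severity"], reverse=True)

ranked = rank_risks([
    {"risk": "uncapped indemnity", "probability": 0.3, "severity": 9},
    {"risk": "missing IP assignment", "probability": 0.6, "severity": 7},
    {"risk": "auto-renewal surprise", "probability": 0.8, "severity": 3},
])
print(ranked[0]["risk"])  # "missing IP assignment" scores 4.2, the highest
```

Whether you compute the scores yourself or have the model do it, the instruction to rank forces likelihood and impact to be stated rather than implied.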

14. Blurring Obligations and Preferences

In negotiation and compliance work, the gap between “must” and “should” is the gap between a regulatory requirement and a nice-to-have. Be explicit: “This is a regulatory obligation. If full implementation is not possible, propose a compliant alternative.” Ambiguity here creates real downstream confusion.

15. Skipping Validation

No output should be treated as final without a cross-check. Build it into the prompt: “Verify claims against cited statutes, flag inconsistencies, and confirm all authorities are current as of [date].” A few extra words at the input stage can eliminate hours of manual review.

The Seven-Step Prompt Checklist

Run every substantive prompt through these seven steps before sending. This eliminates most rework and second-guessing at the source.

  1. State your role. Identify your perspective, capacity, and jurisdiction so the model frames its analysis correctly.
  2. Supply the source material. Paste or attach the contracts, clauses, data, or facts the model needs to work from.
  3. Specify the deliverable. Name exactly what you need: risk analysis, contract markup, options memo, compliance assessment.
  4. Define the purpose. Connect the output to a decision: negotiation, board review, litigation strategy, regulatory filing.
  5. Set the format. Tell the model whether you need a memo, clause language, a risk matrix, or something else entirely.
  6. Identify controlling sources. Point to the statutes, case law, regulations, or contractual provisions that should anchor the answer.
  7. Require validation. Ask the model to cite its authorities, flag its assumptions, and confirm recency as of a specific date.
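The checklist above can be sketched as a small prompt assembler that refuses to emit a prompt with a missing step. The field names mirror the seven steps; the sample values are hypothetical.

```python
# Sketch: the seven checklist steps as required fields of a prompt builder.
# An empty or missing step raises an error before the prompt is ever sent.

CHECKLIST = ["role", "source_material", "deliverable", "purpose",
             "format", "controlling_sources", "validation"]

def build_prompt(**fields: str) -> str:
    missing = [step for step in CHECKLIST if not fields.get(step, "").strip()]
    if missing:
        raise ValueError(f"Checklist steps missing: {', '.join(missing)}")
    return "\n".join(
        f"{step.replace('_', ' ').title()}: {fields[step]}" for step in CHECKLIST
    )

prompt = build_prompt(
    role="Outside counsel for the buyer in a Delaware transaction",
    source_material="[pasted indemnification article of the APA]",
    deliverable="Risk analysis of the indemnification provisions",
    purpose="Prepare for the negotiation call",
    format="Memo with a ranked risk table",
    controlling_sources="Delaware law; the APA as drafted",
    validation="Cite each authority and confirm recency as of [date]",
)
```

The hard failure on a missing step is the point: the checklist works only if it is applied every time, not when you remember it.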

This takes less than a minute per prompt. It routinely saves multiples of that in rework, follow-up prompts, and manual verification.

The Takeaway

The professional advantage from AI comes from how you use it, not from having access to it. Better prompts reduce follow-up rounds, produce decision-ready outputs, and cut verification effort. Structured input is now a core professional skill, not a technical one. The faster teams internalize that, the faster the ROI follows.

About the Author

Michael Simon Baker is the principal at Michael S. Baker, P.C. (NYBusiness.Law / ArtificialIntelligence.Lawyer), where his practice focuses on business law and AI implementation and governance. His work helps professionals use safe and efficient techniques to get reliable, decision-ready results from large language models in high-stakes environments.
