What’s one piece of advice you think people should never ask AI for? – AI answers

I was scrolling through LinkedIn recently when a post by Daniel Shapiro caught my eye. It was a long post that seemingly went nowhere, but at the end he asked: “What’s one piece of advice you think people should never ask AI for? Curious where you stand.”

Immediately my inner smartass kicked in, and I hatched a plan to answer Daniel’s question by using AI. I figured I’d copy and paste his question into a prompt and get something brief to reply with. I was hoping for something completely wrong, something bizarre, something funny enough to grab attention.

AI provided an excellent answer – just ask Grammarly!

BUT Google Gemini provided a very good answer, which you can read below. I copied all 452 words of Gemini’s response into Grammarly, which rated the writing quality at 96/100 and reported no contradicted or debated claims.

Strangely, Grammarly reported 0% AI-generated text. Well, hardy har har, Mrs. Grammarly, the copy in question is 100% AI-generated text!

I DID add the 5 citations that Grammarly supplied. Those are great not only for readers; they also help SEO by adding authority to the copy.


Here is AI’s answer to “What’s one piece of advice you think people should never ask AI for?”

It all seems like common sense to me, but as my friend Trevor’s mom said one time when she got cut off, “Common sense – the least common of all the senses!”

(EVERYTHING below was written by AI.)

If I had to choose just one area where the stakes are too high for current AI capabilities, it would be this:

You should never ask AI for a definitive medical diagnosis or to triage a medical emergency.

While AI is fantastic for summarizing medical concepts or explaining how a biological process works, asking “Do I have cancer based on these symptoms?” or “My chest feels tight, should I go to the ER or just sleep it off?” is dangerous.

Here is why this is the red line you should not cross:


1. The “Hallucination” Factor

AI models predict the next likely word in a sentence; they do not “know” facts the way a human expert does.[1] An AI can confidently hallucinate a diagnosis or a remedy that sounds plausible but is factually incorrect.[2] In a medical context, a “confident but wrong” answer can be fatal.
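To make that concrete, here is a toy sketch of what “predicting the next likely word” means. The numbers are invented and this is not any real model or API; the point is that the answer is whichever continuation scores highest, and factual correctness plays no part in that score.

```python
# Toy sketch of next-token prediction -- made-up logits, not a real model.
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to next words after
# the prompt "The capital of Australia is":
candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [2.1, 1.9, 0.5]  # invented numbers for illustration

for word, p in zip(candidates, softmax(logits)):
    print(f"{word}: {p:.2f}")
# Sydney: 0.49   <- most probable, stated confidently, and wrong
# Canberra: 0.41
# Melbourne: 0.10
```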

2. Lack of Physical Context

A human doctor uses all their senses and years of intuition.[3] They look at your skin color, check your pupil dilation, feel the swollen lymph node, and hear the specific rattle in your breathing.

  • AI is blind: It relies entirely on the text you type.
  • The data gap: You might forget to mention a crucial detail (like a medication you are taking) that would immediately change a doctor’s decision. The AI won’t necessarily know to ask for it.

3. The “Average” Problem

AI is trained on massive datasets of “average” human data.[4]

  • If your symptoms are rare, atypical, or present differently due to your age, gender, or ethnicity, the AI is statistically more likely to miss them because it biases toward the most common patterns in its training data.[5] (See the toy sketch below.)
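A minimal illustration of that statistical bias, using invented data and hypothetical labels rather than any real diagnostic system: a predictor that simply follows the most common pattern in its training set will bury the rare case every time.

```python
# Toy sketch of majority-pattern bias -- invented data, not a real diagnostic model.
from collections import Counter

# Training set dominated by the "typical" presentation:
training_data = [
    ("chest tightness", "muscle strain"),
    ("chest tightness", "muscle strain"),
    ("chest tightness", "muscle strain"),
    ("chest tightness", "heart attack"),  # the rare, atypical case
]

def predict(symptom):
    # Pick whichever label appears most often for this symptom.
    labels = [label for s, label in training_data if s == symptom]
    return Counter(labels).most_common(1)[0][0]

print(predict("chest tightness"))
# -> "muscle strain": the statistically "safe" guess, dangerous for the outlier.
```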

The Nuance: What specific questions should you avoid?

To keep yourself safe, avoid questions that require the AI to make a judgment call on your physical safety.

  • “Is this mushroom in my hand safe to eat?” – AI image recognition is not 100% accurate. Mistaking a Death Cap for a common mushroom is fatal.
  • “I took X pills by accident. Will I be okay?” – Only a Poison Control Center or doctor can calculate toxicity based on your specific weight/history.
  • “How can I treat this deep wound at home?” – The AI might suggest home remedies when you actually need stitches or antibiotics to prevent sepsis.
  • “Does this mole look cancerous?” – Dermatology requires high-resolution 3D inspection and often a biopsy. Images can be misleading.

How to use AI safely in this domain

It is okay to use AI to empower yourself before you see a professional.

  • Okay: “What are the common side effects of Ibuprofen?”
  • Okay: “Help me create a list of questions to ask my cardiologist.”
  • Okay: “Explain what ‘atrial fibrillation’ means in simple terms.”

  1. Bender, E. M., Gebru, T., McMillan-Major, A., & Mitchell, M. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3442188.3445922 ↩︎
  2. The Daily Guardian (March 14, 2024). AI Hallucinations in Medicine: A Blessing or a Risk. https://thedailyguardian.com/medically-speaking/ai-hallucinations-in-medicine-a-blessing-or-a-risk/ ↩︎
  3. Wikipedia (2024). Palpation. https://en.wikipedia.org/wiki/Palpation ↩︎
  4. CIO (2025). Training data: The key to successful AI models. https://www.cio.com/article/3980243/training-data-the-key-to-successful-ai-models.html ↩︎
  5. Daneshjou, R., Vodrahalli, K., Liang, W., Novoa, R. A., Jenkins, M., Rotemberg, V., Ko, J., Swetter, S. M., Bailey, E. E., Gevaert, O., Mukherjee, P., Phung, M., Yekrang, K., Fong, B., Sahasrabudhe, R., Zou, J., & Chiou, A. (2021). Disparities in Dermatology AI: Assessments Using Diverse Clinical Images. arXiv preprint. https://doi.org/10.48550/arXiv.2111.08006 ↩︎