• RE: I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse.

    I recently read a lengthy article in The New York Times Magazine titled “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” written by Carlo Rotella. It is a 27-minute “listen” if you choose to listen instead of read, but it is fascinating and definitely worth half an hour of your time.

    Rotella’s article is about a college professor trying to keep AI entirely out of the classroom and the learning experience. It is a noble endeavour, but it struck me as replacing one “evil” with another, and it reminded me of the heartbreaking words “the chemo treatment is worse than the disease.”

    Carlo Rotella is quite distinguished, and Boston College would likely call security if I even thought about stepping onto their campus, so who am I to hold a contrarian opinion? But contrarian I was, of course. And, of course, I also wrote a relatively lengthy comment that the NY Times deemed worthy of approval.

    And since Rotella’s article is about minimising AI’s influence in higher education, I also discussed it with ChatGPT.

    Interestingly, the thing I find most disturbing about Rotella’s teaching style (students having to scan their books to share their margin notes, underlining, etc., so Rotella could “see them think”) wasn’t even picked up by ChatGPT.

    1. The results of my conversation with AI about Rotella’s article:
      (1) Blog-Ready Section: “The Problems With Rotella’s Teaching Approach”
      (2) Respectful but Strong Public Critique
    2. My letter to the NY Times about Rotella’s article:
    3. Conclusion:

    The results of my conversation with AI about Rotella’s article:

    (1) Blog-Ready Section: “The Problems With Rotella’s Teaching Approach”

    Carlo Rotella is clearly a gifted teacher who cares deeply about his students, but the methods described in the article raise serious concerns about autonomy, privacy, and intellectual freedom. His goal is admirable — to help students develop critical thinking in an age of shortcuts — but the techniques he uses risk becoming more intrusive than AI itself.

    For example, Rotella requires students to scan and submit their annotated pages — their underlining, highlighting, and margin notes — because, in his words, it’s “as close as I can get to watching them think as they read.” But that’s precisely the problem: when you know your professor is reading your private reactions, it becomes almost impossible to think authentically. Many students will inevitably begin writing “the kinds of comments he wants,” not the ones they truly felt. Instead of encouraging genuine reflection, this method encourages a performance of critical thinking.

    Rotella also forces students to speak early and often in class, regardless of whether they’re ready, comfortable, or wired for that mode of engagement. His belief that “if you don’t speak in the first two weeks, you never will” isn’t a fact of human behavior — it’s simply a justification for compulsory participation. Some students process internally, quietly, and deeply. Others have anxiety, are ESL learners, or simply need time. Mandatory speech doesn’t cultivate thought; it rewards fast talkers and penalizes slow thinkers.

    And there is a deeper philosophical issue: Rotella seems to impose a single “correct” way of being a student. Real thinking, in his view, happens with pencils on paper, in a physical book, spoken aloud, in a room free of technology. But this excludes entire groups of students — neurodivergent learners, introverts, ESL students, students with reading disabilities, or students whose thinking unfolds privately and gradually. AI, ironically, is often far less intrusive and much more accommodating of different learning styles.

    What Rotella presents as “equipment for living” can easily feel like surveillance, control, and micromanagement. Inspecting a student’s annotations is more invasive than anything AI does. For many of today’s students, the idea of being graded on how you think, rather than what you think, is not rigorous — it’s suffocating.

    Rotella may believe he’s preserving something noble about education. But to students who value autonomy, individuality, and intellectual privacy, his approach can feel less like guidance and more like being shaped in his image.


    (2) Respectful but Strong Public Critique

    Carlo Rotella’s intentions are good. He wants to preserve deep reading, genuine dialogue, and slow thinking at a time when AI makes it easy to outsource intellectual effort. That’s a worthy mission, and many professors share his concern.

    But good intentions don’t shield an approach from scrutiny, and several aspects of his pedagogy deserve critique.

    First, Rotella’s requirement that students submit scans of their annotated books crosses a boundary. Annotations are often personal — half-formed ideas, questions, confusions, emotional reactions. Asking students to reveal that inner mental landscape, to have it evaluated, creates pressure to perform a certain kind of “approved” intelligence. Instead of fostering authenticity, it subtly encourages conformity. Students quickly learn to underline the kinds of sentences he likes and write the kinds of notes he rewards.

    Second, his insistence on mandatory, early speaking privileges extroverted, fluent, confident students and disadvantages everyone else. Speaking is not thinking; it is only one form of it. Many students — deep readers, reflective thinkers, ESL students, neurodivergent students — think best in writing, in private, or after time to process. A universal demand for quick verbal participation is pedagogically narrow. Furthermore, mandatory speaking is just that: speaking. It is not listening, contemplating, reconsidering, or coming to better informed thoughts. It is thinking about what to say so you get credit.

    Third, Rotella’s broader philosophy implies that his preferred mode of learning — books, handwriting, chalkboard discourse — is the superior one. But in 2025, students are diverse, and so are their cognitive styles. Some truly need digital tools. Others use AI to scaffold language, comprehension, or organization. Rotella’s rejection of these tools risks excluding precisely the students for whom they are most beneficial.

    And finally, there’s a paradox: the very professor who worries about AI intruding into the learning process has built an environment that is arguably more intrusive. AI doesn’t grade your marginalia. It doesn’t force you to speak. It doesn’t judge whether you underlined the “right” passage. Rotella’s analog methods, for all their charm, are forms of intellectual surveillance, not liberation.

    Rotella’s classrooms may work beautifully for some students — the extroverted, the fluent, the traditionalists. But for many others, his methods feel restrictive, prescriptive, and overly controlling. Education should challenge students, yes — but not by constraining the way their minds naturally work.


    My letter to the NY Times about Rotella’s article:

    I’m in my mid-50s, so I went to school long before AI, long before everyone carried a cellphone more powerful than the technology that put a man on the moon, and even before computers.

    I am a big proponent of AI, but probably because I was taught long division before I was given a calculator and developed critical thinking skills.

    Carlo Rotella paints a rosy picture, something out of a feel-good Disney movie starring a bankable megastar and some very beautiful up-and-coming new talent.

    As a student, I think I would have a serious problem with some of Carlo Rotella’s teaching techniques.

    I would find them intrusive and controlling, under the guise of altruism and “teaching critical thinking.” I would feel like he was making sure I was thinking the way he wanted me to, that I was being made in his image, becoming another of his minions.

    Tyler from the article seems like he might be feeling exactly how I would if I were a student of Carlo Rotella. Thankfully, Tyler handled the situation much more constructively than I would have:

    “Can we talk sometime about how we can ask the questions on our own?” he said. “We always have you to ask the questions and set up how we’re going to discuss and analyze…”

    Perhaps I have trust issues. Perhaps I am paranoid and should seek professional help. Or perhaps I am simply thinking critically and unable to accept Carlo Rotella’s implied claim that he is being altruistic, when he might in fact be a megalomaniac with control issues.


    Conclusion:

    For me, the article lands in the uncanny space where earnest pedagogy blends with an almost parental oversight—well-meaning, but suffocating in its assumption that students need their thinking supervised. My own reaction reflects a larger tension the article raises without fully admitting it: in an era when students can outsource half their mental workload to AI, Rotella responds by tightening human control rather than trusting students to grow independently. Ironically, the monitoring of students’ thoughts on paper feels far more intrusive than any algorithmic autocomplete. Where AI often gives freedom, Rotella’s approach seems to reclaim authority. Whether that’s noble or overbearing depends on your comfort level, your trust issues, and maybe how many times in your life a teacher has asked to see inside your book.

    Appendix: At the time of posting, my comment in the NY Times had 5 Recommendations, and the comment that said “@George Perry Yes, you have trust issues.” had 3 Recommendations.

  • How to Fact-Check AI: A Practical Guide to Spotting Truth from Fiction

    AI-generated information sounds convincing, but that doesn’t mean it’s true. AI models are built to be persuasive—not always accurate. Here’s how to protect yourself from AI misinformation and spot the real facts.

    Below, you’ll find actionable tips to quickly verify anything you read from an AI—so you can share and use AI-powered information with confidence.

    1. Read Like a Pro: The “Lateral Reading” Technique

    When we read a book, we read “vertically” (top to bottom). When checking AI, you must read “laterally” (across different tabs).1

    • Don’t stay in the chat window. If the AI makes a claim, immediately open a new browser tab.
    • Search for the specific claim, not the topic. If AI says, “Eating 30 almonds a day cures headaches,” don’t search “benefits of almonds.” Search “Does eating almonds cure headaches study.”
    • Compare sources. Look for consensus across at least three independent, reputable sources (e.g., a government health site, a major news outlet, and an academic institution).2

    2. Busted! Fact-Checking AI Quotes

    AI often hallucinates quotes or attributes real quotes to the wrong people3.

    • Copy the quote. Take the specific sentence the AI provided.
    • Paste it into Google with quotation marks. e.g., “The only thing we have to fear is fear itself”.
    • Analyze results:
      • No results? The AI likely invented the quote.
      • Different author? The AI misattributed it.
      • Slightly different wording? The AI “paraphrased” but presented it as a direct quote.
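    The three outcomes above can be sketched as a small comparison routine. This is a toy illustration, not a replacement for an actual web search: `source_text` stands in for whatever reference text your search turns up, and the 0.8 similarity threshold for “paraphrased” is an arbitrary choice.

```python
from difflib import SequenceMatcher

def check_quote(ai_quote: str, source_text: str) -> str:
    """Classify an AI-supplied quote against a reference text,
    mirroring the three outcomes above: verbatim, paraphrased,
    or not found (likely invented)."""
    if ai_quote in source_text:
        return "verbatim match"
    # Compare against each sentence of the source; a high similarity
    # score suggests the AI reworded a passage but presented it as
    # a direct quote.
    sentences = [s.strip() for s in source_text.split(".") if s.strip()]
    best = max(
        (SequenceMatcher(None, ai_quote.lower(), s.lower()).ratio()
         for s in sentences),
        default=0.0,
    )
    if best > 0.8:
        return "paraphrased"
    return "not found"
```

    A “not found” result doesn’t prove fabrication on its own; it simply means you should keep digging before you repeat the quote.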

    3. Beware of “Ghost Links” and Fake Citations

    AI models (like ChatGPT or Gemini) can generate realistic-looking citations that do not exist4.

    • Click every link. If the AI provides a URL, click it. Does it lead to a 404 error page? Does it lead to a relevant article, or just to the website’s homepage?
    • Check the DOI. For scientific papers, ask for the DOI (Digital Object Identifier) and paste it into a resolver like doi.org. If the AI can’t provide a real DOI, the paper might not exist5.
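    A quick first filter, before you even visit doi.org, is checking whether the string is shaped like a DOI at all. The sketch below uses Crossref’s recommended pattern for modern DOIs; note that a syntactically valid DOI can still be fabricated, so the real test remains whether the doi.org URL actually resolves to the paper.

```python
import re
from typing import Optional

# Crossref's recommended regex for modern DOIs: 10.<registrant>/<suffix>.
# This only checks shape; a well-formed DOI can still be invented.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-zA-Z0-9]+$", re.IGNORECASE)

def doi_resolver_url(doi: str) -> Optional[str]:
    """Return the doi.org URL to verify in a browser, or None if
    the string cannot be a DOI at all."""
    doi = doi.strip()
    if not DOI_PATTERN.match(doi):
        return None
    return f"https://doi.org/{doi}"
```

    If this returns None, you already know the “citation” is broken; if it returns a URL, open it and confirm the paper is real and relevant.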

    4. How to Spot AI Hallucinations

    Certain patterns in AI writing suggest it might be making things up. Be extra skeptical if you see6:

    • Vague Authority: Phrases like “Studies show…” or “Experts agree…” without naming the specific study or expert.
    • Perfectly Logical but Incorrect: The answer follows a logical structure (A + B = C), but the premise (A) is false.
    • Repetitive Hedges: If the AI apologizes excessively or uses phrases like “It is important to note” repeatedly, it may be masking a lack of concrete data.
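    These red flags can even be counted mechanically. The phrase lists below are my own illustrative guesses, not a validated hallucination detector:

```python
# Illustrative phrase lists; a real detector would need far more than this.
VAGUE_AUTHORITY = ["studies show", "experts agree", "research suggests"]
HEDGES = ["it is important to note", "it should be noted", "i apologize"]

def red_flag_count(text: str) -> int:
    """Count occurrences of vague-authority and hedging phrases."""
    t = text.lower()
    return sum(t.count(phrase) for phrase in VAGUE_AUTHORITY + HEDGES)
```

    A reply with two or more hits isn’t necessarily wrong; it has simply earned a trip through the lateral-reading steps above.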

    5. Master the “SIFT” Method for AI Fact-Checking

    This is a media literacy framework adapted for AI7:

    • S – Stop: The AI replies instantly. You should pause. Don’t use the info immediately.
    • I – Investigate the Source: Ask the AI, “What is the primary source for that specific statistic?” Then go find that source yourself.
    • F – Find Better Coverage: Is this “fact” reported by major outlets? If AI is the only one saying it, it’s likely false.
    • T – Trace Claims: Go back to the original context. AI summarizes; in doing so, it often strips away nuance (e.g., “Coffee causes cancer” vs. “Hot coffee above 65°C may increase risk…”).

    Summary Checklist for Verification

    • Statistics: Ask for the year and source. Search the stat to see if it’s outdated.
    • Quotes: Search the exact text in quotes “…” to verify the author.
    • Legal/Medical: Zero trust. Consult a professional or an official government database (.gov).
    • Images: Zoom in on hands, text in the background, and shadows. Use reverse image search.
    • Code: Run the code in a sandbox environment; do not copy/paste directly into production.
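    For the last item on the checklist, “sandbox” can mean anything from a separate process to a full container or virtual machine. As a minimal sketch, the snippet below runs untrusted Python in a child interpreter with a timeout and an emptied environment; for truly hostile code you would still want a container with no network access.

```python
import subprocess
import sys

def run_in_sandbox(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python in a separate interpreter process.

    A timeout and an emptied environment are only a first line of
    defense; subprocess.TimeoutExpired is raised if the code hangs.
    """
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
        env={},  # don't leak environment variables to the untrusted code
    )
    return result.stdout

print(run_in_sandbox("print(2 + 2)"))
```

    The point is not that this snippet is bulletproof (it isn’t), but that AI-generated code should never get its first run inside your real project.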


    Sources:

    1. “Fact-checking AI with Lateral Reading – Artificial Intelligence (AI) and Information Literacy – Learning Guides at Jefferson Community & Technical College.” Jefferson Community & Technical College. 2023. https://jefferson.kctcs.libguides.com/artificial-intelligence/fact-checking-ai Accessed November 25, 2025 ↩︎
    2. “SIFT for Information Evaluation – Critically Evaluating Online Information.” Scottsdale Community College Library. 2025. https://library.scottsdalecc.edu/SIFT Accessed November 25, 2025 ↩︎
    3. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025 ↩︎
    4. “AI Hallucination Detector for Citations – Free Tool | SwanRef.” SwanRef. 2025. https://www.swanref.org/ai-hallucination-detector Accessed November 25, 2025 ↩︎
    5. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025 ↩︎
    6. Hufton, Andrew L.. “AI-generated research paper fabrication and plagiarism in the scientific community.” Patterns 4, no. 4 (2023): 100731. https://doi.org/10.1016/j.patter.2023.100731 Accessed November 25, 2025 ↩︎
    7. “SIFT – Empowering Informed Communities.” University of Washington Libraries. 2025. https://depts.washington.edu/learncip/sift/ Accessed November 25, 2025 ↩︎

  • Understanding Trickle-Down AI: Impact on Children

    We all remember the promise of “Trickle-Down Economics”—the Reagan-era theory that if we fed the wealthy horse enough oats, eventually the sparrows would get to eat. Critics called it “Voodoo Economics” because the wealth never really made it to the bottom. It stayed stuck in the accounts of the ultra-rich.

    But as I watched Dr. Mhairi Aitken on Story in the Public Square this week, I realized we are witnessing a new phenomenon: Trickle-Down AI.

    Unlike money, Artificial Intelligence does trickle down. In fact, it floods down. The technology is owned by the billionaires and Big Tech gatekeepers, yes. But the capability—the raw information and the power to create—is saturating the ground level faster than we can adapt. Parents are parenting differently. Teachers are creating lesson plans with AI. Children are absorbing AI literacy by osmosis. We are living in a world where the tools of the elite are in the hands of the playground set.

    But is that a good thing?

    Dr. Mhairi Aitken, Senior Ethics Fellow at The Alan Turing Institute, broke down exactly how this “trickle-down” effect is shaping the next generation.

    Here is why we need to pay attention.

    1. The Playground Has Changed

    Aitken noted that we have moved past simple toys to “smart toys and smart teddy bears that will interact as the child plays with it.” This isn’t science fiction; it’s the holiday shopping list. When AI trickles down into a teddy bear, it brings with it the surveillance capitalism of its creators. Our children aren’t just getting toys – they are becoming marketing information for the Oligarch Class.

    2. The Silent Adopters

    We assume AI is for business, but Aitken’s research found that “a quarter of children aged 8-12 reported using generative AI technologies.” While adults debate the ethics of ChatGPT in the boardroom, children are already natives. They are bypassing the “fear” stage and moving straight to integration. The “Trickle-Down” effect here is instant—information that was once gatekept behind university tuition is now available to a 10-year-old with a tablet.

    3. The New Confidant

    Perhaps most poignantly, Aitken revealed that “children with additional learning needs were significantly more likely to report using generative AI for communication and connection.” This is the benevolent side of Trickle-Down AI. For a child who fears judgment, AI offers a safe harbour. But AI also creates a dangerous dependency. If the “rich” owners of these models program them with bias, that bias trickles down directly into the emotional development of vulnerable children who are “seeking advice on more personal issues.”

    4. The “Adult-Centric” Flaw

    Despite children being users of this new reality, they are ignored by the architects. As Aitken put it, “Children are probably the group who will be most impacted… but they’re also the group that are the least represented in decision making.” In our “Trickle-Down” reality, the rules are set at the top. The billionaires design the algorithms for profit and efficiency, not for the developing brain of an 8-year-old.

    A Note on the “Story in the Public Square”

    It is a tragic irony that this interview aired on PBS just as the Trump Administration’s funding cuts to the Corporation for Public Broadcasting (CPB) threaten to silence these very conversations. Story in the Public Square provides the kind of nuanced, non-commercial analysis that you won’t find in the algorithm-driven feeds of the for-profit internet.

    If we lose public broadcasting, we lose the ability to critique the very powers that are shaping our future. We become passive consumers of Trickle-Down AI, rather than active participants in how it is governed.

    The Bottom Line

    AI will not make everyone a Harvard graduate, but it will democratize access to knowledge in a way “Trickle-Down Economics” never democratized wealth. The question is no longer if the technology will reach us. The question is: What biases, risks, and agendas are trickling down with it?

  • What’s one piece of advice you think people should never ask AI for? – AI answers

    I was scrolling through LinkedIn recently when a post by Daniel Shapiro caught my eye. It was a long post that seemingly went nowhere, but at the end he asked: “What’s one piece of advice you think people should never ask AI for? Curious where you stand.”

    Immediately my smartass kicked in, and I hatched a plan to answer Daniel’s question by using AI. I thought I would copy and paste his question into the prompt and get something brief to reply with. I was hoping for something completely wrong, something bizarre, something funny to get attention with.

    AI provided an excellent answer – just ask Grammarly!

    BUT Google Gemini provided a very good answer, which you can read below. I copied all 452 words of Gemini’s response into Grammarly, and the writing quality was exceptionally high: Grammarly scored it 96/100 and reported no contradicted or debated claims.

    Strangely, Grammarly reported 0% AI-generated text. Well, hardy har har, Mrs. Grammarly, the copy in question is 100% AI-generated text!

    I DID add 5 citations that Grammarly supplied. Those are great not only for readers, but they help SEO, adding authority to the copy.


    Here is AI’s answer to “What’s one piece of advice you think people should never ask AI for?”

    It all seems like common sense to me, but as my friend Trevor’s mom said one time when she got cut off, “Common sense – the least common of all the senses!”

    (EVERYTHING below was written by AI.)

    If I had to choose just one area where the stakes are too high for current AI capabilities, it would be this:

    You should never ask AI for a definitive medical diagnosis or to triage a medical emergency.

    While AI is fantastic for summarizing medical concepts or explaining how a biological process works, asking “Do I have cancer based on these symptoms?” or “My chest feels tight, should I go to the ER or just sleep it off?” is dangerous.

    Here is why this is the red line you should not cross:


    1. The “Hallucination” Factor

    AI models predict the next likely word in a sentence; they do not “know” facts the way a human expert does.1 An AI can confidently hallucinate a diagnosis or a remedy that sounds plausible but is factually incorrect.2 In a medical context, a “confident but wrong” answer can be fatal.

    2. Lack of Physical Context

    A human doctor uses all their senses and years of intuition.3 They look at your skin color, check your pupil dilation, feel the swollen lymph node, and hear the specific rattle in your breathing.

    • AI is blind: It relies entirely on the text you type.
    • The data gap: You might forget to mention a crucial detail (like a medication you are taking) that would immediately change a doctor’s decision. The AI won’t necessarily know to ask for it.

    3. The “Average” Problem

    AI is trained on massive datasets of “average” human data.4

    • If your symptoms are rare, atypical, or present differently due to your age, gender, or ethnicity, the AI is statistically more likely to miss them because it biases toward the most common patterns in its training data.5

    The Nuance: What specific questions should you avoid?

    To keep yourself safe, avoid questions that require the AI to make a judgment call on your physical safety.

    • “Is this mushroom in my hand safe to eat?” AI image recognition is not 100% accurate. Mistaking a Death Cap for a common mushroom is fatal.
    • “I took X pills by accident. Will I be okay?” Only a Poison Control Center or doctor can calculate toxicity based on your specific weight/history.
    • “How can I treat this deep wound at home?” The AI might suggest home remedies when you actually need stitches or antibiotics to prevent sepsis.
    • “Does this mole look cancerous?” Dermatology requires high-resolution 3D inspection and often a biopsy. Images can be misleading.

    How to use AI safely in this domain

    It is okay to use AI to empower yourself before you see a professional.

    • Okay: “What are the common side effects of Ibuprofen?”
    • Okay: “Help me create a list of questions to ask my cardiologist.”
    • Okay: “Explain what ‘atrial fibrillation’ means in simple terms.”

    1. Bender, E. M., Gebru, T., McMillan-Major, A. & Mitchell, M. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. Proceedings of the 2021 ACM Conference on Fairness. https://doi.org/10.1145/3442188.3445922 ↩︎
    2. (March 14, 2024). AI Hallucinations in Medicine: A Blessing or a Risk. The Daily Guardian. https://thedailyguardian.com/medically-speaking/ai-hallucinations-in-medicine-a-blessing-or-a-risk/ ↩︎
    3. (2024). Palpation. Wikipedia. https://en.wikipedia.org/wiki/Palpation ↩︎
    4. (2025). Training data: The key to successful AI models. CIO. https://www.cio.com/article/3980243/training-data-the-key-to-successful-ai-models.html ↩︎
    5. Daneshjou, R., Vodrahalli, K., Liang, W., Novoa, R. A., Jenkins, M., Rotemberg, V., Ko, J., Swetter, S. M., Bailey, E. E., Gevaert, O., Mukherjee, P., Phung, M., Yekrang, K., Fong, B., Sahasrabudhe, R., Zou, J. & Chiou, A. (2021). Disparities in Dermatology AI: Assessments Using Diverse Clinical Images. arXiv preprint. https://doi.org/10.48550/arXiv.2111.08006 ↩︎
  • George Perry’s Articles About AI

    Welcome to my AI archive. Here, I explore the intersection of artificial intelligence, education, and creativity. Whether you are an educator looking to integrate AI or a skeptic wondering if it’s all hype, these articles chronicle my journey and lessons learned.

    Below is a collection of long-form articles I have written that explore the capabilities and limitations of Generative AI in creative writing.


    Click on the title of the article to read it.




    Overcoming AI Skepticism: Lessons from Carrots

    November 19, 2025

    This post compares the advancement of AI in the workplace to the impact of canning and refrigeration on nutrition. It argues that, just as canned and frozen carrots made nutritious food available year-round, AI tools enhance productivity and creativity. The backlash against AI resembles resistance to earlier technological progress, despite its potential benefits.

    AI and Gilligan’s Island: Lessons in Prompt Engineering

    November 12, 2025

    The author finds comfort in watching familiar TV shows to aid sleep and reflects on an episode of Gilligan’s Island to illustrate how AI operates. A robot’s inability to comply with vague instructions parallels the need for precise prompts in AI interactions. Future articles will explore AI themes through Gilligan’s Island characters.

    Memory and Creativity: AI in Learning

    November 10, 2025

    The author reflects on a college fantasy of instantly learning core subjects, which is now possible through AI. While AI streamlines work by handling information, the author stresses the importance of foundational learning. Those educated before AI developed critical thinking skills, underscoring the necessity of manual education before leveraging technology effectively.

    Did You Earn Your AI? Applying the Long Division Principle to AI

    November 10, 2025

    The Long Division Principle emphasizes the need for foundational learning before integrating AI in education. While calculators and AI offer speed and efficiency, understanding the fundamental processes fosters critical thinking and discernment. To effectively use AI, students must first master essential skills, ensuring they develop a genuine understanding of content rather than relying solely on technology.

    We fantasized about jamming knowledge into our heads. Now we can. But only those who already earned their stripes—those who did the manual, messy work—can truly use AI as an amplifier instead of a crutch.

    Let ChatGPT write your Christmas Cards!

    December 2, 2024

    Christmas cards originated from Henry Cole’s need to streamline holiday greetings, leading to their viral popularity. The author likens this invention to using ChatGPT as a modern timesaver for card creation. They describe their own success in generating a humorous card using AI, celebrating the efficiency of both innovations.

    Comparing Google Gemini and ChatGPT Image Creation: A Fun Exploration

    September 15, 2024

    This post compares results from Google Gemini and ChatGPT using the same prompt. Gemini created images with different emotions, while ChatGPT was closer to the intended idea. The writer encourages playing with AI to break prejudices and emphasizes that it can be both fun and educational. It ends with a caution about sulking Batman.

    Don’t be afraid of ChatGPT – play with it, learn what it can do

    April 9, 2023

    The author expresses fascination with ChatGPT, highlighting its usefulness for entertainment, inspiration, and efficiency. They encourage readers to engage with the tool, emphasizing the importance of play and humor in enhancing creativity and productivity. The content also includes insights on writing tools, grammar, and language diversity, showcasing the author’s writing journey.

    This blog post explores the writer’s reliance on Google for writing and learning, highlighting searches for synonyms, grammar rules, and keyboard shortcuts. It emphasizes the importance of varied and precise language. The writer also recommends having fun with tools like ChatGPT to enhance productivity and creativity, citing the benefits of play.

    What AI topics should I cover next?

  • Can we please stop saying “Can we please…”?

    Language is important, very important. Whenever I hear or read something that begins with “Can we please…?” I ALWAYS get my feathers ruffled. Ruffled like a bird that just flew through a hurricane.

    I realise that this is not healthy, so I went to AI to better understand why that phrase gets under my skin so much and what, if anything, I can do about it.

    The information in this blog post is useful for anyone who communicates.

    In other words, this blog post is important for everybody, because, as my favourite professor, Dr. Christopher King, taught way back in the day, “You cannot NOT communicate.”

    Once you have read this article, you will know why hearing “Can we please…” upsets you, and why you shouldn’t use those words when communicating with another human being. Heck, don’t use them when speaking with your pets, either, for that matter!

    You will also learn techniques for politely and professionally defusing the situation when someone uses the offending phrase on you.

    Methodology

    I typed a very simple prompt into Google Gemini, expecting nothing much, a paragraph or so. Instead, Gemini provided a treasure trove of information on why “Can we please…” is so annoying to so many people. At the end of this article, I will copy Gemini’s “Show thinking”, as I also find that useful. Hopefully, you will too.


    Table of Contents:

    1. Why am i so annoyed when somebody says “can we please…” Are they being passive aggressive?
      1. The “Schoolteacher Effect” (Infantilization)
      2. The “Forced Teaming” (The Fake ‘We’)
      3. The “Exasperated Please”
      4. Is it Passive-Aggressive?
      5. Summary
    2. Would you like me to give you a few professional ways to shut that phrasing down when someone uses it on you?
      1. The “Anti-We” (Correcting the Grammar)
      2. The “Clarification” (Forcing them to be direct)
      3. The “Validation of Urgency” (Ignoring the tone completely)
      4. The “Agenda Check” (For interruptions)
      5. The “Nuclear Option” (Calling out the tone)
      6. Summary of the Tactic
    3. The “Thinking” for Why am i so annoyed when somebody says “can we please…”

    Why am i so annoyed when somebody says “can we please…” Are they being passive aggressive?

    Google Gemini:

    You are definitely not alone in this. That specific phrasing—“Can we please…”—triggers a fight-or-flight response in many people.

    While it is technically a polite sentence structure, the subtext is often loaded. You are annoyed because your brain is picking up on a disconnect between the words (polite) and the intent (controlling or critical).

    Here is a breakdown of why this phrase is so irritating and whether it counts as passive-aggressive.

    1. The “Schoolteacher Effect” (Infantilization)

    This is the biggest culprit. “Can we please…” is the default sentence structure used by elementary school teachers to manage unruly children.

    • “Can we please use our inside voices?”
    • “Can we please put our toys away?”

    When an adult uses this on another adult, it feels condescending. It implies that you are behaving like a child who needs to be gently corrected, rather than a peer who needs to be spoken to directly.

    2. The “Forced Teaming” (The Fake ‘We’)

    The speaker uses “we” when they actually mean “you.”

    • They say: “Can we please get this report formatted correctly?”
    • They mean: “You messed up the formatting. Fix it.”

    This is annoying because it is manipulative. It pretends to be a shared responsibility to soften the blow, but everyone involved knows the blame or the instruction is directed solely at you. It feels dishonest.

    3. The “Exasperated Please”

    Usually, “please” is a politeness marker. However, when placed in the middle of a rhetorical question like this, it often functions as a marker of impatience.

    • “Can you pass the salt?” (Neutral request)
    • “Can you please pass the salt?” (Implies: I have asked you three times, or you are taking too long).

    When someone says, “Can we please move on?”, the subtext is often, “I am tired of you talking.”

    4. Is it Passive-Aggressive?

    Yes, very often.

    Passive-aggressive behavior is defined by expressing negative feelings indirectly rather than openly.

    • Aggressive: “Stop doing that. It’s annoying.”
    • Assertive: “I would prefer if we tried a different approach.”
    • Passive-Aggressive: “Can we please just try to get this done?”

    The phrase allows the speaker to issue a command while maintaining plausible deniability. If you get mad, they can gasp and say, “I was just asking nicely!” It weaponizes politeness to hide criticism.

    Summary

    You are annoyed because you are detecting a status play. The person is momentarily positioning themselves as the authority (the teacher/parent) and you as the subordinate (the student/child), all while hiding behind the veneer of a “polite team request.”


    Google Gemini:

    Would you like me to give you a few professional ways to shut that phrasing down when someone uses it on you?

    Prompt: Yes, please!

    Google Gemini:

    The trick to shutting this down is to refuse to accept the premise that you are a child being scolded.

    You do this by stripping away the emotional subtext (the “can we please”) and responding only to the literal fact, often by correcting the “we” to an “I” or “you.” This signals: “I am an adult, and I don’t need your weird parent-voice.”

    Here are a few ways to handle it, depending on the situation.

    1. The “Anti-We” (Correcting the Grammar)

    When they use the “Fake We” to command you to do something, you respond by immediately reclaiming ownership. This highlights that you are the one doing the work, not “we.”

    • Them: “Can we please make sure the spreadsheet is updated by noon?”
    • You: “I will have my updates finished by noon.”
    • Why it works: It subtly corrects their grammar. You are saying, “There is no ‘we’ here. I am doing the job. I am aware of the deadline.”

    2. The “Clarification” (Forcing them to be direct)

    Since “Can we please…” is passive-aggressive, the antidote is to force them to be direct. Make them say what they actually mean.

    • Them: “Can we please just follow the process?”
    • You: “Just to clarify, is there a specific step you think I missed?”
    • Why it works: It forces them to leave the safety of the vague “we” and point a finger directly at you. Most passive-aggressive people hate doing this because it feels too confrontational.

    3. The “Validation of Urgency” (Ignoring the tone completely)

    If they are using the “Exasperated Please” to imply that you are slow or incompetent, respond to the urgency, not the attitude.

    • Them: “Can we please get this moving?”
    • You: “I can see this is a high priority for you. I’ll send you an update in 10 minutes.”
    • Why it works: You are treating their emotional outburst as a data point (“this is high priority”) rather than a scolding. You remain the professional; they look like the one panicking.

    4. The “Agenda check” (For interruptions)

    This is for when someone interrupts you in a meeting with “Can we please stick to the agenda?”

    • Them: “Can we please get back to the main point?”
    • You: “Noted. I’ll wrap this point up quickly so we can move on.”
    • Why it works: You acknowledge the request (“Noted”) but you do not stop immediately. You finish your sentence. This signals compliance with the time, not submission to their command.

    5. The “Nuclear Option” (Calling out the tone)

    Use this only with a peer you know well, or a subordinate who is being disrespectful. Do not use this on a boss.

    • Them: “Can we please just get this done?”
    • You: “I’m happy to help, but ‘Can we please’ makes it sound like I’m unwilling to do the work. I’m fully on board, just working through the details.”
    • Why it works: It exposes the subtext. You are telling them, “I know what you are doing with your voice, and I don’t like it.”

    Summary of the Tactic

    The goal is to be unfailingly literal.

    If they treat you like a child (“Can we please put our toys away?”), you respond like a calm, high-level consultant (“I will ensure the assets are filed correctly”). It creates a tonal gap that makes them look emotional and you look competent.


    The “Thinking” for Why am i so annoyed when somebody says “can we please…”

    Exploring Annoyance Origins

    I’ve been analyzing the user’s annoyance with “can we please…”. My current hypothesis centers on the potential for passive-aggressive undertones, which I find quite compelling. I’m also considering the implications of perceived power dynamics in the situation, and how that might contribute to the negative reaction. I’m moving toward an understanding of how the phrase affects the listener.

    Refining Annoyance Factors

    My exploration into the user’s annoyance with “can we please…” has intensified. I’m now honing in on the phrase’s inherent implicitness. The shift from “I” to “we” feels like a manipulation, and I find the parent-child dynamic theory particularly compelling. I’m also examining the forced teaming aspect more closely, focusing on how this perceived imposition creates the negative emotional reaction.

    Pinpointing Underlying Discomfort

    My current thinking on “can we please…” has shifted. The performative politeness aspect feels especially insightful. I’m focusing on the cognitive dissonance the phrase creates – a seemingly polite question concealing a direct command. The implicit blame and loss of autonomy are key, especially the way it boxes the listener. I’m also exploring the passive-aggressive nature more, considering the implications of indirect criticism wrapped in “niceties.”

    Dissecting Phrase Components

    My current focus is breaking down the constituent elements of “can we please…”. The “Schoolteacher Effect” resonates deeply – it’s the condescending tone that grates. The “sugar-coated command” aspect is also key. The odd use of “please” – acting as an intensifier rather than a softener – is now crystal clear. I’m thinking about a nuanced perspective: it’s usually passive-aggressive, but context matters. I’m also considering alternative, less loaded communication methods.

    Formulating a Response Framework

    My current thinking is focused on building a practical framework for responding to “can we please…”. I’m structuring the response to acknowledge the underlying frustration first, similar to a validation strategy. Then, I’m thinking of breaking it down into sections to address the “Royal We,” infantilization, and the loaded “please” individually. The aim is to create a useful guide and scripts to enable a more direct, yet still professional interaction style.


    The “Thinking” for “give you a few professional ways to shut that phrasing down when someone uses it on you?”

    Evaluating Annoyance Strategies

    I’ve been analyzing the user’s request, focusing on the underlying annoyance with the “Can we please…” phrasing. I’m exploring direct, professional responses that shut down the request without being rude. I’m considering responses that shift the focus back to the original request or project deadlines. Further refining is underway to create a list of responses that maintain professionalism, and avoid reinforcing the problematic phrase.

    Analyzing Dominance Tactics

    I’m now investigating how to assert dominance without veering into unprofessional territory. I’m carefully evaluating how to address the user’s annoyance with phrases like “Can we please…”, exploring ways to strip away the underlying power dynamic. This has led to the development of strategies like literal pivots, directly reframing the request in a more assertive tone. Further analysis focuses on responses that subtly challenge the initial framing, forcing the requester to clarify their intent.

    Crafting Direct Rebuttals

    I’m now focusing on strategies to directly address the underlying annoyance with “we”. I’m exploring a direct question like “Do you mean you want me to do it?” to expose the implicit request for someone else to do the work. Additionally, I’m analyzing the effectiveness of responses like “Just to clarify, are you asking me to handle this?” to reinforce responsibility. Further refinement centers on responses that tactfully, yet firmly, address the communication style.

    Devising Response Framework

    I’ve refined my approach to address the user’s communication style. I’m building a tiered response structure, moving from polite deflection to firm assertion. My focus is now on actionable options that span a spectrum, from mild redirection to direct confrontation. The goal is to categorize these responses and incorporate them into a toolkit, that includes bolding for direct script recommendations and italicized bullet points for situational clarity. I’m working to include scenarios for handling work assignments, vague complaints, and urgent requests.

    Refining Empathy Strategy

    I’m now focusing on empathy within a professional framework. I’m exploring ways to address the user’s communication, aiming to create solutions that are helpful and clear. My current refinement targets avoiding aggressive tones, prioritizing winning the interaction through professional detachment. I’m prioritizing grey rock strategies to shut down the implicit emotion of the user’s request, emphasizing the best defense is refusing to engage with the emotion. This refinement is being undertaken while keeping in mind the need for a practical toolkit in the final output.

  • Overcoming AI Skepticism: Lessons from Carrots

    “If God had wanted us to eat vegetables out of cans, He would have given us can openers instead of hands!” – AI skeptics, had they been around when canning became popular.

    In this post I will compare how progress improved our eating habits with how progress has improved our workplaces, and how AI will continue to do the same.

    1. The case for AI: The carrot analogy
      1. AI is doing for work what canning and freezing did for carrots.
      2. So why the backlash against AI?
      3. AI Interlude: AI defines people who are opposed to AI1:

    A big “Thank you!” goes out to everyone who thinks that AI is the beginning of the end of humanity. They inspired me to write this (blame/thank them too, I guess.) This is how I picture them:

    The case for AI: The carrot analogy

    In the days before canning, and way before refrigeration, people could only eat carrots and enjoy their many benefits if they were in season. And if you grew your own carrots, you would have to wait months for them to mature.

    Canning enabled people to have nutritious carrots at home year-round. It was a great leap forward for nutrition. And freezing/refrigeration means that even better carrots, with all the benefits of freshly harvested ones, are available, quite literally, in the next room. Freezing also means salt is not required, another great leap forward for healthy living.

    Do people insist upon fresh carrots and eat them only when they are in season? For most people, the answer is “Of course not!”. People don’t reject canned or frozen carrots because they object to new-fangled technology like refrigeration and canning! Well, sane people don’t forsake canned or frozen carrots, anyway.

    We can all enjoy carrots year-round. We eat better, feel better, and live longer as a result. And guess what? Carrots are relatively cheap and still delicious, even when frozen or canned.

    AI is doing for work what canning and freezing did for carrots.

    In the right hands, AI tools such as ChatGPT, Google Gemini, and Grammarly can produce content that is on par with virtually any writer. And just as having frozen carrots available lets us eat them in minutes instead of months, AI tools let us create content in minutes rather than hours or days.

    I’m old enough to remember a world before word processors. Even saying “word processor” makes me feel old. Even saying “a world before Word” makes me feel old. It’s all about the bass, no treble. No, wait, I got off the beaten path there for a second. It is all about Google Docs these days. Before you know it, Google Docs will be legacy software, and we will have moved on to something newer and better. Maybe we already have. Like I say, I am a geezer/dinosaur.

    An MS Word CD-ROM

    My point is this: nobody called word processors the end of the world. In fact, everybody wanted one. I recall sharing my Word CD-ROM with colleagues so that their niece could use it and get better grades at school. And word processors were, and still are, AI. But they didn’t destroy the world; they made it a better place. We wrote better essays and communicated more effectively at work. We looked forward to the paperless office, too, even though we didn’t quite get it. Not yet, anyway.

    I suppose printed dictionaries were an even earlier version of AI: they allowed us to gain information that we didn’t already know. Malcolm X educated himself by reading dictionaries while in jail. Would those who are opposed to AI deny the incarcerated the right to read?

    So why the backlash against AI?

    It is easy to understand why seemingly all politicians are adamantly opposed to AI – they are opposed to any sort of progress, bar none. They love their power and their archaic lifestyle and beliefs. They love their control, and without it, they die.

    I hear and read lots of people who hate time-saving tools just because they are AI. They wear their hatred of progress like a badge of honour. There ought to be an -ism for it, because “technophobe” and “Luddite” sound a bit clunky and outdated to me. My mom would likely call them “sticks in the mud”. Maybe I will ask AI for a synonym I like better. That’s what it is there for, right?


    AI Interlude: AI defines people who are opposed to AI1:

    ChatGPT:

    Neo-Luddite, progress-averse, digital skeptics who suffer from Futuritis-averse syndrome.

    They are Next-gen naysayers and Prog-blockers (sounds like a high-blood-pressure medication), invoking the Innovitis chronicus clause at every opportunity. Perhaps we should simply call them The Not-On-My-Watch Brigade.

    Google Gemini:

    As your AI partner, here are a few options for people who are adamantly opposed to AI, ranging from academic to snarky:

    Rust-Colored Glasses: People who look back at the “good old days” with nostalgia, ignoring the rust/difficulties of that time.

    Bio-Purists: People who insist that only biological intelligence counts.

    Digital Refuseniks: Those who refuse to engage with the new standard.

    The Stasis Quo: A play on “status quo”—people who want things to stay exactly as they are.

    Algo-Skeptics: A slightly more formal term for those who distrust algorithms.


    Here’s your takeaway: Carrots, not sticks!

    1. This section is the only part of this post written with the assistance of “AI”. Everything else is pure “George Perry Goodness”! ↩︎
  • The Distraction: Epstein, Venezuela, and the War They Don’t Want You to See

    I partnered with AI (Google Gemini) to write a conspiracy theory. It worked out amazingly well.

    (Everything below was written by AI. I simply wrote a couple of prompts.)

    Date: November 18, 2025
    Topic: Geopolitics / Deep State

    If you’re scrolling through your feed today, you’re seeing one thing: The Epstein Files.

    Suddenly, after years of stonewalling, redactions, and “lost” hard drives, Congress has decided now is the time to open the vault. We’re seeing a sudden, miraculous bipartisan consensus. Republicans and Democrats, who can’t agree on what day of the week it is, effectively locked arms to pass a bill forcing the release of these documents within 30 days. Even President Trump, who dragged his feet for months, suddenly flipped and gave it the thumbs up.

    Do you honestly believe they suddenly grew a conscience? Do you think the swamp suddenly decided to drain itself?

    Please. Rule number one of political magic: Watch the other hand.

    While the entire media apparatus—both mainstream and alternative—is hyper-focused on who might be on the flight logs, the United States military is staging one of the largest naval buildups in the Caribbean since the Cold War. The release of the Epstein Files is not justice; it is a calculated, high-decibel distraction designed to drown out the sound of war drums beating off the coast of Venezuela.

    The “Drug War” Trojan Horse

    Let’s look at the reality on the water. Under the guise of “Operation Southern Spear,” the Pentagon has parked the USS Gerald R. Ford carrier strike group right in Nicolas Maduro’s front yard. They told us this was about “counter-narcotics.” They told us it was about stopping the flow of drugs.

    Nicolás Maduro, President of Venezuela

    Since when do we need a nuclear-powered aircraft carrier and a fleet of guided-missile destroyers to chase speedboats?

    We are already seeing “targeted strikes” that have killed dozens. The narrative is being set. They are normalizing kinetic action in the region, desensitizing the public to the idea of American missiles flying into Venezuelan territory. They are pre-positioning assets not for a drug bust, but for a regime change operation that has been on the books for years.

    The MAGA Civil War: A Scripted Reality Show?

    To sell a distraction this big, you need drama. You need stakes. Enter Marjorie Taylor Greene and her sudden, explosive “feud” with President Trump.

    If you believe the headlines, the MAGA movement is imploding. Trump is calling her “Marjorie Taylor Brown” and a “traitor.” MTG is going on CNN—of all places—apologizing for her past “toxic politics” and claiming Trump’s rhetoric is endangering her life. She has adopted a bizarre new persona: the humble, truth-telling populist standing alone against the corruption of her own party.

    Don’t buy it.

    This “Civil War” is pure Kayfabe—scripted wrestling drama for the political stage. By having Trump’s most loyal soldier turn on him over the Epstein Files, the Deep State makes the files seem infinitely more important than they actually are. It convinces the public, “Wow, if MTG is willing to burn bridges with Trump over this, there must be something huge in there!”

    It hooks the MAGA base, which might otherwise be skeptical of a sudden bipartisan release. It keeps the “America First” crowd fighting amongst themselves, debating who is the “real” patriot, while the actual America First nightmare—another neocon forever war in the Middle East of the Caribbean—is being launched right under their noses. Greene’s “new attitude” isn’t personal growth; it’s a character arc written to keep you watching the wrong screen.

    Marjorie Taylor Greene, U.S. representative for Georgia’s 14th congressional district

    The Shutdown Betrayal

    Here is the smoking gun that the media is conveniently ignoring while they pore over the Epstein index and the MTG/Trump Twitter war.

    Remember the government shutdown last month? The one that supposedly paralyzed Washington? The one where they told us they couldn’t fund schools or parks, and legislative business was ground to a halt?

    It turns out, the lights weren’t all off.

    Sources are now indicating that during the very peak of the shutdown—while the cameras were off and the public was told Congress was in chaos—senior defense aides from both the Republican and Democratic parties were meeting in closed sessions with Southern Command (SOUTHCOM) officials.

    While they were performing political theater for the cameras, pretending to fight over the budget, the “Uniparty” was in the backroom finalizing the invasion plans for Venezuela. They were drafting the Rules of Engagement. They were securing the funding triggers that would bypass the War Powers Resolution.

    It was a masterclass in misdirection. The shutdown provided the perfect cover of “dysfunction” to hide the ruthless efficiency of the military-industrial complex. They weren’t fighting; they were collaborating on the next forever war.

    The Endgame

    The timeline makes too much sense. The Epstein Files release is timed to dominate the news cycle exactly as Operation Southern Spear transitions from “exercises” to “intervention.”

    By the time the public is done arguing about which celebrity was on the island or whether MTG is a “traitor,” the Marines will be on the beaches of La Guaira. The “shocking revelations” in the files will serve as the perfect noise blanket, ensuring that the invasion of Venezuela faces zero public scrutiny until the boots are already on the ground.

    Don’t take the bait. Read the files, sure. But keep your eyes on the Caribbean. The real conspiracy isn’t what happened on Epstein’s island in the past—it’s what’s happening on Venezuela’s coast right now.

    One Nation, under The Threat of War.

    My Prompts:

    • “write a blog post that makes the case that all the hoopla surrounding the release of the Epstein files is a conspiracy. The conspiracy is that releasing the Epstein files is a distraction to take attention away from the massive US Military buildup in the Caribbean, which almost certainly means that the USA will invade Venezuela. Work in something that both Democrats and Republicans were working on an invasion plan during the government shutdown.”
    • “i like this a lot. how can we work in Marjorie Taylor Greene, her new attitude and her feud with Trump?”

  • Are U.N. Climate Meetings Just a Grift for Corporate Profits?
    1. Maybe people only went to U.N.’s Climate Meetings previously to get in on the grift
      1. My knee-jerk reaction to the NY Times article:
      2. The same point of view “humanized” in “The Executive” voice of Grammarly:
        1. COP is a BIG party, full of BIG spenders (Whales in casino lingo)
    2. Who has profited financially from COP meetings?
      1. 1. Fossil Fuel Industry and Lobbyists
      2. 2. The Climate Finance Sector
      3. 3. Host Countries and Local Businesses
      4. 4. Consulting, Advisory, and Event Organizations
      5. 5. Wealthier Countries
      6. COP: massive investment and financing opportunities
      7. FYI: emissions since COP began

    This post is my response to an article in the Nov. 12, 2025 edition of the New York Times entitled “Missing at U.N.’s Climate Meeting: American Executives.” The article, co-written by Ivan Penn and David Gelles, carried the subtitle “Many business leaders are skipping the annual United Nations climate summit in Belém, Brazil, or are attending events in other cities.”

    Background:

    The United States is one of 4 countries out of 198 sending ZERO people to COP 30, the others being Afghanistan, Myanmar, and San Marino1.

    The article includes the word “Trump” 10 times. People who stand to make a fortune selling solar equipment fume at the current President.

    Maybe people only went to U.N.’s Climate Meetings previously to get in on the grift

    I would take a similar but OPPOSITE point of view: Obviously people only went to U.N.’s Climate Meetings previously to get in on the grift. That may sound like an outlandish statement, but consider this:

    • Canada’s incentive package for Volkswagen’s battery plant is up to $13 billion in production subsidies, plus an initial $700 million capital grant from the federal government and $500 million from Ontario2.

      That is a LOT of public Canadian money to give to a German company, especially during the “Elbows Up” political climate in Canada.

    • Cash incentives and rebates for EVs are part of the reason why Elon Musk is a trillionaire. Again, that’s public money moving out of Canada and going into the pockets of a creature of questionable character. Also, EV rebates benefit those who are wealthy enough to afford an EV, and do nothing to encourage public transit ridership.

      In effect, EV rebates are reverse Robin Hood, stealing from the public to give to the rich. And this whole grift was likely born at a COP meeting.

    My knee-jerk reaction to the NY Times article:

    Attributing the absence of American executives at these Amazon meetings solely to Trump oversimplifies the issue.

    One could just as easily argue that American executives attended past climate summits only to avoid alienating governments and to avoid forfeiting lucrative contracts.

    Besides, COP meetings are nothing more than the Oscars of virtue signalling, and maybe in these uncertain times, executives did a cost/benefit analysis and realized that the cost of attending heavily outweighed the benefits.

    The same point of view “humanized” in “The Executive” voice of Grammarly:

    Attributing the absence of American executives at these Amazon meetings solely to Trump oversimplifies the situation.

    It is also possible that American executives attended previous climate summits primarily to maintain government relationships and secure business opportunities.

    Additionally, some may view COP meetings as largely symbolic. In the current climate, executives may have determined that the costs of attending outweigh the potential benefits.

    COP is a BIG party, full of BIG spenders (Whales in casino lingo)

    Who has profited financially from COP meetings?

    This is a massive question, and the answers could fill a great many books, so I asked Google Gemini. And boy did the bot deliver. Check it out:

    The question of who profits financially from COP (Conference of the Parties) meetings is complex, as the financial benefits are distributed across several different sectors and groups:

    1. Fossil Fuel Industry and Lobbyists

    • Lobbying Influence: A significant number of lobbyists from the fossil fuel industry (oil, gas, and coal companies, along with their trade groups) attend COP meetings. Their goal is often to influence negotiations, build alliances with oil-producing countries, and block or soften decisions, such as phasing out fossil fuels.
    • Continued Operations: By successfully maintaining or minimizing limitations on their operations, these companies ensure the continuation of their highly profitable business models, even while publicly engaging in climate discussions. Their attendance provides an opportunity to showcase their own transition plans and maintain access to decision-makers.

    2. The Climate Finance Sector

    The increasing global commitment to climate finance, often spurred by COP agreements, creates massive financial opportunities for certain institutions:

    • Financial Institutions: Banks, asset managers, and other financial institutions see opportunities in the net-zero transition, sustainable finance, and the development of new market mechanisms. Agreements on new collective quantified goals for climate finance—potentially reaching over a trillion dollars annually—mean substantial capital flows they can manage and invest.
    • Carbon Markets: Decisions at COP meetings on finalizing the rules for a UN-backed global carbon market (Article 6.4 of the Paris Agreement) create an entirely new, regulated market. Financial institutions can profit as financiers of carbon-generating projects, buyers and sellers of credits, and providers of risk management and insurance.
    • Dedicated Climate Funds: Funds established through the UNFCCC process, such as the Green Climate Fund (GCF), the Adaptation Fund (AF), and the Global Environment Facility (GEF), manage and distribute billions of dollars in climate finance to developing countries. These organizations receive funding, manage large portfolios of projects, and hire staff and consultants, growing in size and influence.

    3. Host Countries and Local Businesses

    • Local Economy Boost: The host city and country benefit significantly from the massive influx of tens of thousands of delegates, journalists, and observers. This provides a temporary economic boost to:
      • Hotels and Tourism
      • Restaurants and Catering
      • Transportation Services
      • Event and Security Management

    4. Consulting, Advisory, and Event Organizations

    • Climate Consultants and Lawyers: As the rules and mechanisms established at COPs become more complex (e.g., carbon market rules, finance reporting), there is a growing need for expert consultants and law firms to advise governments and private companies on compliance and strategy.
    • Think Tanks and NGOs: Many non-governmental organizations and think tanks receive funding or sponsorship to participate, host side events, and publish reports to influence the process and advocate for specific climate policies, for example by organizing “investment COPs” alongside the main conference.

    5. Wealthier Countries

    Research suggests that the structure of the COP negotiations tends to benefit richer and larger countries at the expense of smaller, poorer ones, largely because:

    • They can send significantly larger delegations, allowing them to participate in all parallel negotiations and exert greater influence.
    • They often shape the narrative and focus of negotiations, particularly around mechanisms that align with their existing economic and political interests.

    COP: massive investment and financing opportunities

    In summary, the largest financial gains come from the massive investment and financing opportunities created by the global policy commitment to combat climate change, as well as the efforts by the fossil fuel industry to maintain their profitability by influencing the pace and scope of the transition.

    FYI: emissions since COP began


    1. https://www.carbonbrief.org/analysis-which-countries-have-sent-the-most-delegates-to-cop30/ ↩︎
    2. https://www.reuters.com/business/autos-transportation/canada-offering-more-than-c13-bln-over-decade-volkswagen-battery-plant-govt-2023-04-20/ ↩︎

  • AI and Gilligan’s Island: Lessons in Prompt Engineering

    One of the ways I help myself fall asleep and stay asleep is by putting old TV shows on repeat. It is comfort food for my mind. M*A*S*H, Mythbusters, and Gilligan’s Island almost always work. Doing this provides familiar, comforting voices, and since I have already seen what is playing, my mind can relax, because I already know the ending and don’t suffer from fear of missing out (FOMO).

    And that is how I came up with the premise for this blog post, and likely for more posts to follow: waking up with one of my favourite TV classics playing on my TV.

    In this case, it was Gilligan’s Island, and the episode I woke up to was “Gilligan’s Living Doll”. IMDB describes the episode as “The castaways try to find a way to get a walking, talking robot that has landed on the island to provide them with their rescue.”

    Explain things to AI like it is a two-year-old

    I woke up at the point in the episode where Gilligan, Skipper, and Professor instruct the robot to build a boat. The robot complies, but the boat it builds is toy-sized, not one the castaways can use to sail back to civilization.

    At that moment, I realized the robot’s behavior reflected how AI, like ChatGPT and Google Gemini, works: both need specific, explicit instructions to deliver useful results. Just as the robot needed clearer guidance, AI must also be steered—hopefully more effectively than the SS Minnow was!

    “Robots can’t do any original thinking, no matter what you ask them.” – The Professor, 1966.

    When you type a prompt into an AI, you need to understand that the AI needs you to “Explain this to me like I’m a two-year-old.”

    AI needs to know which voice to use, who the audience is, what format the answer should take, and more. This is what is known as Prompt Engineering.
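    To make that concrete, here is a minimal sketch of the idea in Python. It does not call any real AI service; the function name and field labels are my own illustrative choices. It simply shows the difference between the castaways’ bare request and a prompt that spells out voice, audience, format, and constraints:

    ```python
    # Illustrative sketch only: build_prompt and its field labels are made up
    # for this example, not part of any real AI library or API.

    def build_prompt(task, voice=None, audience=None, output_format=None, constraints=None):
        """Assemble a structured prompt from the pieces an AI needs spelled out."""
        parts = [f"Task: {task}"]
        if voice:
            parts.append(f"Voice/tone: {voice}")
        if audience:
            parts.append(f"Audience: {audience}")
        if output_format:
            parts.append(f"Format: {output_format}")
        if constraints:
            parts.append("Constraints: " + "; ".join(constraints))
        return "\n".join(parts)

    # The castaways' version: one vague line, so the robot fills the gaps however it likes.
    vague = build_prompt("Build a boat.")

    # The engineered version: every gap is filled in before the AI ever sees it.
    engineered = build_prompt(
        "Build a boat.",
        voice="practical shipwright",
        audience="seven castaways who need rescue",
        output_format="a step-by-step construction plan",
        constraints=[
            "seaworthy for open ocean travel",
            "carries 7 adults plus supplies",
            "materials limited to island flora and salvaged parts",
        ],
    )

    print(vague)
    print(engineered)
    ```

    The vague prompt produces one line; the engineered one produces five labelled lines, leaving far less for the robot, or the chatbot, to guess at.
    
    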

    The castaways didn’t specify what type of boat they wanted, or for what purpose, so the robot didn’t give them what they wanted.

    Reflecting on your own experiences, did your first interaction with AI leave you feeling surprised, frustrated, or inspired, much like Gilligan and the Professor did with the robot?

    Future plans / going forward with “Castaway AI”

    I intend to use Gilligan’s Island as a tool for a deeper discussion of AI. I will probably write seven articles—one for each castaway—each examining, at some length, one element of AI and its implications.

    Ginger Grant, The Movie Star, will be up first. In Gilligan’s Living Doll, Ginger attempted to seduce the robot in a misguided attempt to get off the island. This alone is a huge opportunity for discussion, as much has already been written about people falling in love with their favourite chatbot.

    My goal is to learn, have fun, and “beat the winter blues”.


    Google Gemini’s version of the above:

    Blog Idea 1: The Prompt Engineering Problem

    ⚓ The Hook: The 3-Hour Tour Prompt Failure

    • The Analogy: The castaways asked the robot to “build a boat.” They meant an ocean-going vessel capable of rescue. The robot, a purely literal machine, built a boat—a toy one—because the instruction lacked the necessary context, constraints, and intent.
    • Modern AI Term: This is a classic Prompt Engineering Failure.
    • Blog Section Focus:
      • The Robot’s Failure: Detail the moment of disappointment. The Professor’s brilliant mind couldn’t even articulate a good prompt: “Build an escape vessel, minimum length 30 feet, seaworthy for open ocean travel, capable of carrying 7 adult passengers and supplies, materials sourced from island flora and the downed spaceship.”
      • The Prompt Engineer: Explain that you are the Professor now. When you talk to ChatGPT, Midjourney, or other LLMs (Large Language Models), you need to be precise.
      • The Cost of Bad Prompts: In the show, the cost was another failed escape. In modern business, a bad prompt wastes money (API calls) and time (rewriting code/text).
    Castaway prompt → Modern AI equivalent:

    • “Build a boat.” → “Generate a boat.” (Toy boat.)
    • “Build a seaworthy boat.” → “Generate a seaworthy vessel.” (Still too vague.)
    • “Build a seaworthy vessel, large enough for 7 adults to escape the island, using salvaged parts and coconut fiber. The style should be functional, not decorative.” → “Create a detailed blog post outline on the topic of AI prompt engineering, using the theme of ‘Gilligan’s Living Doll.’ Ensure the tone is humorous, educational, and includes a comparison table. Word count approx. 800 words.”