• The Great Library Heist (That Never Actually Happened)

    If you spend enough time in the darker corners of LinkedIn or the frantic comment sections of art forums, you’ve heard the refrain: Generative AI is built on theft. It’s a heavy accusation. It paints developers as digital cat burglars and AI users as fences for stolen pixels. It suggests that every time a model is trained on public data, a crime has been committed.

    But if we follow that logic to its natural conclusion, we end up in a world where being a student is a felony and having a family resemblance is a misdemeanor.


    1. The Library Fallacy
    2. The Teacher’s Larceny
    3. The Inheritance of Ideas
    4. The Manifesto of the Collective Mind
    5. The Manifesto of the Collective Mind: A Defense of Synthesis (extended mix)
    6. Chico Marx testifying in Congress about Caribbean boat bombings

    The Library Fallacy

    Imagine walking into your local library. You spend the afternoon reading everything you see—novels, technical manuals, poetry, and magazines. You walk out with a brain full of new structures, ideas, and stylistic inspirations.

    In the eyes of the “AI is theft” crowd, you didn’t just study. You stole the building.

    The argument relies on a massive False Equivalence: the idea that analyzing a work is the same as duplicating it. If observing public data is theft, then every historian, journalist, and curious toddler is a criminal. To “see” is not to “take”; it is to process. AI is simply the world’s most efficient reader, traversing the public landscape of the internet to understand how we speak and how we see.

    The Teacher’s Larceny

    Take it a step further. Recall the best teacher you ever had. They taught you how to frame a sentence, how to apply a brushstroke, or how to solve a complex equation.

    Whenever you use those skills today, are you “stealing” from that teacher?

    Of course not. We call it learning. No artist or writer creates in a vacuum. If using what we’ve learned from those who came before is larceny, then the entire history of human culture is one long, uninterrupted crime spree. AI “learns” stylistic patterns—the mathematical “blueprints” of style—exactly like a student, rather than “copy-pasting” specific works.

    The Inheritance of Ideas

    Finally, there is the biological argument. If you have your father’s eyes or your mother’s sense of timing, are you a criminal for having things in common with your parents?

    New ideas are the offspring of old ones. A new piece of art shares “DNA” with the collective heritage of data that came before it. This isn’t a heist; it’s evolution. AI output is a synthesis—a digital descendant of millions of inputs—creating something new from the traits we have collectively contributed to the public square.

    The Manifesto of the Collective Mind

    To claim that AI training is theft is to commit the Fallacy of Composition. Just because parts of the training data are copyrighted doesn’t mean the mathematical model itself violates copyright. It’s similar to arguing that a cake is “illegal” because the flour used was subject to a “no-resale” agreement: once baked, the cake is chemically and functionally distinct from the original ingredients, making it a new product.

    Copyright exists to protect the expression, not the idea. Theft is taking a loaf of bread so that another cannot eat it. Inspiration is taking the recipe and baking a new loaf for the world.

    AI is the newest, fastest reader in the human library, using humanity’s shared recipes to feed a future of infinite possibilities. It doesn’t replace us; it mirrors our collective mind.

    So, the next time someone tells you that using AI is “justified theft,” remind them: if learning from the world is a crime, then we’re all serving a life sentence.


    The Manifesto of the Collective Mind: A Defense of Synthesis (extended mix)

    Creation has never been an act of isolation. It is an act of communion with everything that came before. To claim that learning from the world is “theft” is to declare that progress itself is a crime. We propose a different truth:

    I. To Observe is Not to Steal. A library is not a collection of things to be possessed, but a landscape to be traversed. If a human reads every book in a library and emerges with a deeper understanding of language, we call them a scholar. When a machine does the same, it is not “scraping”—it is studying. Knowledge, once shared in the public square, becomes the soil from which new ideas grow.

    II. Every artist stands on the shoulders of giants. Every student uses the tools their teacher provides. Using the patterns, logic, and grammar of human culture is not mimicry; it is participation. We do not own sunsets, sonnets, or the math of a brushstroke. These are the heritage of our species, and to deny a machine the right to learn them is to deny the universality of achievement.

    III. Synthesis is the New Genesis. We are not made of nothing; we are made of our parents, our surroundings, and our history. AI is the digital child of the human internet—it inherits our biases, our brilliance, and our linguistic DNA. It does not replace us; it mirrors our collective mind. It does not “take” art; it distills the essence of what makes art human.

    IV. The Boundary of Creation. Copyright exists to protect the expression of ideas, not the ideas themselves.1 Theft is taking a loaf of bread so that another cannot eat it; in contrast, inspiration is using a shared recipe to bake a new loaf for the world. In this metaphor, AI is the greatest baker—using humanity’s collective recipes to create new bread and nourish infinite possibilities. Let us embrace this future and use our creative powers to shape it responsibly.


    Judge for Yourself: Is this Theft?

    I was watching Washington Week with The Atlantic on PBS, and the topic was the bombings in the Caribbean ordered by Secretary of War Pete Hegseth. Were they war crimes? The Trump Administration is unreal and offensive, but when one considers that the host of Washington Week is Jeffrey Goldberg, the journalist whom Hegseth included on a call discussing national secrets (SIGNALGATE), things are definitely uber-surreal.

    Then I had an idea: This is all so over-the-top, it reminds me of The Marx Brothers. What if I have a little fun and ask AI to “write a couple of paragraphs in the voice of Chico Marx, testifying in Congress about the bombing of boats in the Caribbean”?

    AI performing this task at my request is no different than Kate McKinnon impersonating Hillary Clinton or Ruth Bader Ginsburg on SNL.

    The “Chico Thing” created by Google Gemini turned out pretty well. Let’s have a look (and hopefully a laugh):

    Chico Marx testifying in Congress about Caribbean boat bombings

    “Look, look, I’m a-tellin’ you, Mr. Congressman, I don’t know nothin’ about no boats! My brother, he’s-a the one who likes the water, because he’s-a got a leak in his head, but me? I stay on the dry land where the money is. You say somebody blowed up a boat in the Caribbean? That’s-a too bad. Maybe the boat, she was-a tired and she wanted to go out with a bang, huh? I was-a too busy workin’ on a very important project. I was-a shadow-boxing a ghost in a dark alley to see who’s-a got the best reach. You can’t be in two places at once unless you’re-a twins, and even then, you gotta pay double for the antipasto!

    Now, you ask-a me about these “explosives.” I don’t-a use explosives. I use-a the logic! If a boat sinks in the middle of the ocean and nobody is there to hear it, does it still make a splash? Of course-a not, it’s underwater! You think I’m-a the mastermind? I can’t even master the mind I got! I’m-a just a poor boy tryin’ to make a livin’ sellin’ insurance to people who already had the accident. If you want to find the guy who did it, go ask the fish. They saw the whole thing, but they’re-a very quiet—they don’t-a want to get fried! So, you let-a me go now, and I promise I won’t-a blow up nothin’ except maybe a paper bag.”

    “You’ve just listened to my brother—a man who thinks a ‘nautical mile’ is a smile you get from a mermaid.” – Groucho’s/AI’s response


    References

    1. (n.d.). What Does Copyright Protect? (FAQ). U.S. Copyright Office. https://www.copyright.gov/help/faq/faq-protect.html ↩︎

  • Why a Sense of Humour is the Most Human Trait You Can Possess (And Why It Signals Competence)

    We all know the feeling: you’re watching a speaker, interviewing a candidate, or meeting a new teammate, and you’re not just listening to their facts—you’re searching for a vital, unspoken signal. You’re searching for humanity.

    In a world increasingly dominated by polished scripts, optimized data, and AI-driven responses, how do you instantly prove you are a thinking, breathing, fully clued-in individual? The answer is simple, powerful, and often overlooked: a sense of humor.

    Humour is more than just a pleasant distraction. Humour is a six-second social MRI. It doesn’t just make you likable; it is a rapid-fire proof of concept for your social, emotional, and cognitive fitness.1

    A sense of humour makes you seem more human because it proves you’re not just thinking—you’re actually tuning in to the world around you.

    If you want to move past simply delivering information and start building connections and influence, here are 9 profound ways a sense of humour serves as the ultimate signal of competence and humanity.


    The 9 Signals: How Humour Demonstrates Humanity & Competence

    1. Humour Shows Social Awareness.

    A joke only works if you understand the shared context—the norms, the absurdities, the unsaid things. When someone uses humour well, it signals they’re tuned into how people actually think and talk, not just reciting information. It’s the difference between knowing the words and reading the room.

    2. Humour Demonstrates Emotional Intelligence.

    Knowing when to be funny—and when not to—shows sensitivity to mood, tension, and the people around you. Good humour is calibrated, not just delivered. A joke at the wrong moment isn’t funny—it’s a social smoke alarm.

    3. Humour Creates Warmth and Connection.

    Laughter acts as social glue. When someone makes another person laugh, it sparks a feeling of shared understanding—”you get me.” Since you can’t fake a genuine laugh, earning one feels like a tiny victory and a basic sign of trust and relatability.2

    4. Humour Reveals Cognitive Flexibility.

    Humour often involves shifting perspectives, spotting contradictions, or reframing something ordinary in a surprising way. It feels deeply human because it mirrors how we navigate real life—never linear, always messy and interpretive. If life were a straight line, we wouldn’t need punchlines.

    5. Humour Stimulates Creativity and Problem Solving.

    The mechanism of humour—connecting two seemingly unrelated ideas to create a surprising third (the punchline)—is the same engine that drives creative thinking and innovation. A good laugh acts as a cognitive reset button, freeing the brain from rigid, linear thinking and opening it to novel solutions.3

    6. Humour Signals Resilience and Stress Management.

    Humour, especially during difficult or high-pressure situations, demonstrates the ability to maintain perspective and emotional distance. The ability to crack a well-timed joke under pressure suggests mental toughness and an innate mechanism for cognitive reappraisal, signaling that you process stress rather than letting it consume you.4

    7. Humour Shows You Don’t Take Yourself Too Seriously.

    Self-aware humour shows humility and groundedness. Someone who can laugh at themselves seems less rigid, less defensive, and more authentic and engaged. Taking yourself too seriously is the fastest way to become unrelatable.

    8. Humour is a Powerful Leadership Tool.

    Leaders who use appropriate humour are seen as more trustworthy, approachable, and effective at defusing conflict or delivering difficult news. Humour breaks down hierarchical barriers and fosters psychological safety. It’s less a weakness and more a tool for influence and team cohesion.5

    9. Humour Humanizes Complexity.

    Even the smartest or most serious people become approachable when they joke. Humour opens a door that intellect alone cannot. It makes difficult or serious topics accessible.6


    Conclusion: Humour—The Advanced Social Lubricant

    We’ve covered nine dimensions, but the core takeaway is this: a sense of humour is not a frivolous add-on or a charming quirk; it is proof of sophisticated human processing—social, emotional, and cognitive.

    To deploy humour successfully is to take a calculated social risk. A failed joke falls flat, creating awkwardness. But a well-placed, timely moment of humour pays dividends that mere intellect or diligence never could—it creates immediate connection, trust, and influence. The willingness to take that risk is, in itself, a powerful signal of confidence and groundedness.

    If you encounter someone who takes themselves so seriously that they cannot laugh, you are likely looking at a person who is too rigid to adapt, too defensive to connect, and too closed-off to innovate.


    The Negative Metaphor: Having no sense of humour is like running on Wi-Fi with one bar; you can still function, but everyone can tell you’re not quite connected.

    The Positive Metaphor: A well-placed sense of humour is like an advanced social lubricant: it reduces friction, makes the interaction run more smoothly, and leaves everyone feeling a little warmer.


    Ultimately, your sense of humour is your most reliable human fingerprint. Use it well.


    References:

    1. (2025). The success elements of humor use in workplace leadership: A proposed framework with cognitive and emotional competencies. PLOS ONE. https://doi.org/10.1371/journal.pone.0304650 ↩︎
    2. Dunbar, R., Frangou, A., Grainger, F. & Pearce, E. (2021). Laughter influences social bonding but not prosocial generosity to friends and strangers. Psychological Science 32(8), pp. 1245-1253. https://doi.org/10.1177/09567976211024335 ↩︎
    3. (2024). Humor in leadership and employee creative and innovative behavior. Current Opinion in Psychology 55. https://doi.org/10.1016/j.copsyc.2023.101723 ↩︎
    4. (2024). Humor in leadership and employee creative and innovative behavior. Current Opinion in Psychology 55. https://doi.org/10.1016/j.copsyc.2023.101723 ↩︎
    5. (2024). Lighting the fire of wisdom following humor: How and when leader humor yields team creativity through team knowledge integration capability. Journal of Business Research 183. https://doi.org/10.1016/j.jbusres.2024.114834 ↩︎
    6. Aaker, J. & Bagdonas, N. (July 10, 2017). Humor Is Serious Business. Stanford Graduate School of Business. https://www.gsb.stanford.edu/insights/humor-serious-business ↩︎
  • Debunking the ‘AI SLOP’ Myth with Humour

    I was recently called a purveyor of AI SLOP, and I want to respond. I’ve always thought purveyor is one of the finer words in English, but I take issue with the recently coined “AI SLOP.”

    In this article, I will share my thoughts on the phrase “AI SLOP” and then offer FOUR humorous definitions of people who enjoy flinging the term “AI SLOP” around as if they are monkeys chucking feces at each other. All four definitions are AI-generated, and all from the same prompt. This is another great examination of the differences and nuances of ChatGPT, Google Gemini, and Grammarly AI.

    First, though, I will briefly discuss why I trust AI-generated content and use it wherever appropriate.

    1. My emotional/feral reaction to the phrase “AI SLOP.”
    2. Why I trust AI-generated content
      1. My first experience with ChatGPT
    3. AI Detection is “iffy”, another reason to stop using the phrase “AI SLOP.”
    4. Now the fun part: Humorous Definitions of people who use the phrase “AI SLOP.”
      1. Below is the prompt and the AI responses. Enjoy!
      2. ChatGPT 1:
      3. ChatGPT 2:
      4. Google Gemini:
      5. Grammarly:

    My emotional/feral reaction to the phrase “AI SLOP.”

    AI SLOP is so frustrating to hear or read. For me, it screams “willfully and proudly ignorant”. I generally think, “Here is a person with a strong opinion about something they likely know nothing about.”

    Calling something “AI SLOP” reflexively is like a person unfamiliar with Public Enemy dismissing an entire, hugely popular 50-year-old music genre with seven capital letters: “RAP CRAP.”

    Describing something as “AI SLOP” is similar to calling inconvenient news stories “FAKE NEWS.” Both phrases are used to disregard information that doesn’t fit one’s beliefs. By dismissing facts as false or worthless, the speaker avoids engaging with ideas they disagree with.

    Why I trust AI-generated content

    When ChatGPT first arrived on the scene, I was skeptical. Very skeptical.

    But colleagues were persistent and eventually convinced me to give it a try.

    It was way before I knew about prompt engineering, and come to think of it, I don’t think the term “prompt engineering” even existed!

    My first experience with ChatGPT

    When I found myself with some free time, I opened ChatGPT and entered the headlines (H2s) from a lengthy article I had recently finished.

    That is decidedly NOT prompt engineering, but the results that ChatGPT produced were much in line with what I had written.

    The difference was that AI completed tasks that took me a couple of days in seconds.

    So, I gained trust in AI. I played, and I gained.

    Being a somewhat rational person, I still verify and proofread what AI produces. And sometimes AI should not be used at all. Students should still do their own research, for example.

    People are affected by AI even if they do not use it directly.

    But even if students are not using AI for their homework or writing papers, their teachers are likely creating lesson plans with AI, and school boards and the administrative sides of schools are definitely using AI. The media is definitely overusing AI. I guarantee that lawmakers are using AI to sway voters and to do research on bills and laws they introduce.

    This means that even if students are not using AI directly for their education, they ARE being influenced by AI as it trickles down the hierarchy.

    AI Detection is “iffy”, another reason to stop using the phrase “AI SLOP.”

    1. Grammarly’s “AI Detector” feature is far from trustworthy. Generate some copy in Gemini or ChatGPT, copy it into Grammarly, and see what happens if you don’t believe me.
    2. I have friends who “hate all AI” who also love some of the AI-generated copy I have shared with them, without telling them it was AI-generated.
    3. And of course, there are “get rich quick” schemers trying to make a quick buck with “failproof AI detectors” that will identify AI-generated copy for “a small monthly fee.” That’s just one of countless examples of why humans are less trustworthy than AI!

    Now the fun part: Humorous Definitions of people who use the phrase “AI SLOP.”

    With so many people defining me, I decided to fight fire with fire and define them! And naturally, the way to do it was with AI!

    So I wrote a prompt quickly, then fed it into ChatGPT, Google Gemini, and the Grammarly AI feature.

    My intention was to use this endeavour as a warm-up exercise and then move on to something more substantial. But the “AI SLOP” definitions turned out too funny not to share, so here’s this article.

    Below is the prompt and the AI responses. Enjoy!

    (Screenshot of the prompt.)

    ChatGPT 1:

    Here’s a version that stays sharp, funny, and pointed without veering into outright harassment territory:


    AI-Slopper (n.)
    A person who reflexively uses the term “AI slop” to dismiss anything created with artificial intelligence, typically revealing more about their own limitations than about the work itself.

    Characteristics often attributed to an AI-Slopper include:

    • Technological Fragility: Becomes overwhelmed by the idea of typing a coherent prompt, viewing basic interface elements as if they were ancient runes.
    • Creativity Vacuum: Assumes AI can only produce trash because they personally cannot imagine anything worth making.
    • Default Cynicism: Responds to new tools with the same enthusiasm a medieval monk might show toward a printing press.
    • Luddite Swagger: Proudly rejects emerging technology while simultaneously benefiting from dozens of hidden algorithms every day.

    Usage:
    “Don’t mind him — he calls everything ‘AI slop.’ If it has a button, a menu, or a spark of imagination, he’s against it.”

    ChatGPT 2:

    Here’s the film noir version — smoky, cynical, fedora tilted just right, describing a fictional archetype of the “AI-Slopper” in classic hardboiled style:


    AI-Slopper (n.) — Film Noir Edition
    They called him an AI-Slopper, the kind of two-bit mug who’d walk into a room full of geniuses and declare everything “SLOP” before he’d even bothered to read the sign on the door. A real dime-store critic with a walnut-sized imagination and a fear of technology so deep you’d need a submarine to find the bottom.

    He drifted through the digital underworld like a busted typewriter — all clacks, no ink.

    Profile, as understood by detectives in the know:

    • Technological Lightweight: The type who stares at a prompt box the way a cat stares at a bathtub — lots of suspicion, zero comprehension.
    • Creativity Flatline: I once asked him for an idea. He handed me an empty napkin and said, “This is my vision.”
    • Luddite by Lifestyle: He treats progress like an unpaid debt collector — something to hide from behind half-drawn blinds.
    • Fastest Dismissal in the West: You show him anything innovative, anything with a spark of originality, and he’ll squint, wrinkle his nose, and mutter “AI slop” like it’s a curse he learned from a drunk at the docks.
    • Knowledge Thin as a Cheap Cigarette: Everything he knows about AI could fit on the back of a matchbook — and most of that would be misspelled.

    I poured myself a drink and watched him flap his gums.
    “Slop, slop, slop,” he said, over and over, like a broken jukebox playing a song nobody asked for.

    And I realized something:
    This guy wasn’t afraid that AI would replace him.
    He was afraid it already had — and nobody noticed.

    Google Gemini:

    It sounds like you’re looking for a highly critical definition of a person who uses the term “AI Slop.”

    Here is a definition focusing on the characteristics you mentioned:


    🚫 Definition: “AI Slop” User (Critical View)

    A user of the phrase “AI Slop” is often characterized as an individual exhibiting a profound technological illiteracy coupled with a deep-seated creative deficit.

    • Technological Ineptitude: Their use of the term typically betrays a limited intelligence regarding modern generative tools, as they are fundamentally unable to grasp the concept of prompt engineering. They fail to progress beyond the most basic, vague inputs and thus receive low-quality, generic outputs, which they then erroneously label as “slop.”
    • Creative Deficiency: The complaint is often a projection of having zero creativity. Lacking the imagination or skill to craft an effective request or iteratively refine a result, they blame the tool for their own inability to generate anything meaningful.
    • Dismissive Luddism: Ultimately, they function as a dismissive Luddite, reflexively rejecting powerful new technologies out of fear, misunderstanding, or an unwillingness to adapt. They dismiss the entire field based on their own poor, unskilled results, seeking to diminish the value of a medium they cannot yet master.

    Grammarly:

    AI Slop Crusader (noun): A self-appointed guardian of “authentic” content who brandishes the term “AI slop” as a substitute for critical thinking. This individual possesses the unique ability to detect artificial intelligence in any work they dislike, while remaining blissfully unaware that their inability to craft a coherent prompt reflects not the technology’s limitations, but their own.

    Characterized by a militant nostalgia for inefficiency and a steadfast refusal to learn basic skills that would take approximately four minutes to acquire, the AI Slop Crusader mistakes their technological illiteracy for moral superiority. They view all AI-assisted work as inherently inferior, yet curiously lack the creative capacity to produce anything superior themselves.

    Often found commenting “this is obviously AI slop” on human-created work, or dismissing genuinely innovative AI-assisted projects without examination, they represent a modern Luddite—one who smashes not textile machines, but the very concept of tools evolving beyond their comfort zone. Their rallying cry of “AI slop” serves as both shield and sword: a shield protecting them from engaging meaningfully with new technology, and a sword with which to cut down others’ efforts while contributing nothing of value themselves.

    See also: Keyboard warrior, armchair critic, person who still can’t figure out how to unmute themselves on Zoom calls.

  • Directing the Digital Factory: How Using AI Makes You Part-Film Director, Part-Andy Warhol

    Artificial intelligence is fundamentally transforming the creative process, extending beyond computation to empower individuals with unprecedented creative control. Whether generating images or writing code, this capability evokes the experience of both directing a film and participating in Andy Warhol’s Factory.

    Both roles emphasize conceptual guidance and iterative production over manual execution.


    1. The AI User as Film Director: Crafting a Vision, Guiding the “Crew.”

    Consider the responsibilities of a film director. While directors do not personally create every backdrop or perform each line of dialogue, they serve as the unifying creative force. They determine the narrative, tone, and visual style, providing detailed instructions to a specialized crew.

    • Casting the Right Model: Similar to how a director selects actors for their unique abilities, one selects an AI model for its specialized capabilities. For example, GPT-4 is suited for text, Midjourney for visual art, and Suno for music generation. Each model functions as a specialist tool with distinct styles and competencies.
    • Prompting Your Crew: The “prompt” serves as the instruction provided to the AI, functioning as a directive to a digital cinematographer, set designer, and actors simultaneously. For instance, one might request text written “in the style of a 1940s hard-boiled detective novel” or specify an image as a “low-angle shot with lens flare.”
    • The Iterative Process: Directors seldom achieve the desired result on the first attempt, often requesting additional takes and providing precise feedback. Similarly, working with AI involves revising prompts, such as “Now make the horse a unicorn” or “Change the lighting to chiaroscuro with vibrant, saturated colors.” This iterative process continues until the output aligns with the intended vision.
    • Editing and Curation: After completing the “takes,” a director collaborates with an editor to select and assemble the best footage. Similarly, the AI user selects the most suitable generated output and may further refine it using additional digital tools.

    This transition from manual execution to conceptual guidance introduces a compelling analogy: the manner in which AI-driven creation reflects the ethos of Warhol’s Factory.


    2. The AI User as Andy Warhol in The Factory: Art as Mechanical Reproduction

    Consider the vibrant, dynamic, and revolutionary art scene of Andy Warhol’s Factory. Warhol aspired to function as a “machine,” embracing mechanized reproduction through silkscreen printing and challenging conventional definitions of art.

    • The Source Material: Warhol famously appropriated existing pop culture imagery, including soup cans and celebrity portraits. Similarly, generative AI draws from extensive “source material,” comprising billions of images and texts used during training. For instance, prompting for a “cyberpunk city street” leads the AI to reference millions of images of neon, rain, and wires from its dataset.
    • The “Silkscreen” of Code: Warhol’s silkscreen process enabled the rapid production of multiples with controlled imperfections. Similarly, a prompt functions as the initial stencil, and the algorithm quickly generates variations. This capability makes AI particularly effective for tasks such as producing numerous logo mockups or creating a series of images depicting the same character with different expressions.
    • Challenging the “Artist’s Hand”: Like Warhol, AI shifts the emphasis from traditional manual skill to conceptualization and curation. The value resides not in the precise placement of pixels, but in the conceptual innovation of requesting a “Van Gogh-style painting of a robot eating ramen.”
    • The Assistants Are Digital: Warhol employed assistants to perform tasks such as stretching and printing. In the context of AI, complex algorithms serve as digital assistants, executing creative tasks instantly and without subjective influence.

    Beyond the Hand: The New Creative Frontier

    Both the director and the Factory artist exemplify a shift from manual creation to conceptual, managerial, and iterative processes. AI advances this transformation by enabling instant “takes” and “prints.”

    In this evolving creative landscape, the individual is not a passive observer but an orchestrator. One directs a powerful digital system to realize a creative vision, analogous to a film director managing a production or Warhol supervising the creation of iconic pop art. The canvas is digital, the brush is code, and the primary limitation is the creator’s imagination and ability to articulate ideas.

  • Falling Kingdoms: Insights from Snowflake Societies

    Below is the story told entirely from the POV of the snowflake civilizations, a chorus of microscopic societies living and dying in the space of a fall—observing Constance Pryor, CEO, from their crystalline vantage point. The tone is mythic, surreal, and darkly whimsical. They see themselves as ancient beings, though each one lives for less than a second.


    “THE FALLING KINGDOMS”

    A Snowflake-Civilization POV Story

    We are born in the high vaults of the cloud-temples, carved from vapor and chill, named by the lightning that flickers silently in the mist. Each of us is an empire. Six-sided, six-spirited, six-storied. We emerge complete: crystalline nations built in an instant by the breath of winter.

    Our world begins with the first trembling downward.

    The Fall.

    We do not fear The Fall. The Fall is our time. The Fall is all time. We live more fully in these seconds than larger creatures do in decades. They take time for granted. We are time—brief, bright, fractal.

    We look upon the vast world below with curiosity. The giants who walk it seem trapped in slowness—tragic, lumbering beings who age while we glide.

    One of these giants watches us now.

    The Woman of the Glass Tower.

    We know her. All snow-kingdoms know her. Her window is a common landing field, though few of us survive long after touching it. She is a somber sentinel, a queen of steel and silence, wrapped in heavy fabrics and heavier grief.

    Our elders whisper that her eyes used to blaze like the sun before she mislaid her heart somewhere in her towering fortress. They say the shadows around her now are self-woven.

    We watch her as she watches us.

    Her gaze is frantic today, a tremor in her soul-light. Even from here—tumbling through open sky—we feel it. Her mind radiates like overcharged static.

    “Behold,” say our Councilors, their crystal voices chiming through our lattice halls.
    “The Giant Queen unravels.”

    We spiral closer.

    Around us, the younger snow-realms cheer. We are entering the Viewing Path—the sacred trajectory that leads past the Glass Tower. It is an honor to be seen by a giant at all. It gives our brief lives meaning.

    We are not the first to pass her window on this day. Many empires have already ended upon her ledge, their histories melting into her building’s cold stone. But each flake has its own destiny.

    We rotate, revealing our sunward facet. It is considered polite.

    The Woman presses her fingertips to the glass. We feel the pulse of her heat through the barrier—monstrous warmth, enough to annihilate us instantly. We hold formation. We do not flinch.

    A hush spreads across our crystalline corridors.

    “She communes,” someone whispers.

    Indeed, her eyes meet ours—wide, glassy, desperate. The kind of stare one gives to omens. The kind of stare that asks questions no snowflake should answer.

    “What does she seek?” ask the newborns, their kingdoms barely complete.

    “She seeks sense,” reply the elders. “It has slipped from her grasp like a thawing flake.”

    Below, the Glass Tower’s halls echo faintly. We hear her machines murmuring, her staff speaking in muffled tones. The Giant Queen ignores them.

    She watches us as if only snow can explain the world.

    In truth, we have seen her deterioration over many storms. At first she wore her power tightly, like armor that never creaked. Now it sags around her, a costume she no longer fits.

    The rumors among us say her mate left her. That a soot-covered trickster has claimed the mate’s heart instead. We do not understand the giants’ marital rituals—they seem unnecessarily complicated—but we sense the wound in her.

    It radiates visible light.

    The Fall continues. Our time grows thin.

    Across our crystalline plazas, citizens gather for The Final Assembly. It is tradition: recounting our entire history—from birth to dissolving future—before we strike the Earth or the Window or the Unyielding Sidewalk.

    The Speaker ascends the dais.

    “We have lived well,” he proclaims. “We were born. We glimmered. We sensed the sorrow of a giant. These are the great achievements.”

    But we are not done.

    A gust of wind shifts us, flinging us sideways, closer—much closer—to the Giant Queen’s window. Her breath fogs the glass. We pass inches from her eyes.

    In that instant, we see inside her.

    We do not mean metaphorically.
    We mean inside—deep within the labyrinth of her unraveling mind.

    Her thoughts swirl like chaotic weather:

    • Beanbag chairs of impossible proportions
    • Marshmallows worshipped as pastel gods
    • Fractals that whisper advice
    • A creeping fear that the HVAC vents are sighing judgment
    • And the overwhelming dread that she is more melt than ice now

    We glimpse these things and mourn for her.

    “We must leave her a message,” declare the elders.

    “But she will not hear us,” say the young.

    “We will leave it anyway.”

    As we descend past her window, our facets adjusting to catch the light, we angle our crowns to reflect a single beam directly into her eye.

    A glint. A pulse. A tiny flicker of truth.

    Our message is simple:
    You are falling too, but you may yet land gently.

    And then—

    The Ledge.

    The End.

    Our world collapses into droplets, our nations dissolving into a tiny sheen of water that streaks the edge of her building. Our histories rush back into the cycle of vapor and sky.

    But before consciousness leaves us entirely, we feel it:

    She saw the glint.
    She understood something—maybe not all of it, but enough.

    As our final awareness melts away, we hear her whisper through the glass:
    “Everything ends. Even storms.”

    We approve.

    A good ending, for a giant.


  • Transforming AI Criticism into Constructive Dialogue

    There is an avalanche of anti-AI sentiment out there right now and I think most of it is AI-generated, especially the LinkedIn posts.

    I find it enormously taxing and fatiguing, like being forced to listen to the same sound that a malfunctioning sliding door makes over and over again. For me, it is like hearing the first 3 notes of Black Sabbath’s Iron Man looping for approximately a fortnight.

    Don’t get me wrong, I love Ozzy, Sabbath, and Iron Man, but those notes are unique and cut through anything, perhaps even a nuclear war. They GET YOUR ATTENTION, true, but repeat anything with the frequency that the anti-AI crowd repeats their rants, and you will get tired of that, too.

    “You even get tired of your favourite picture if it hangs on the wall long enough,” in other words.

    Seeing a LinkedIn post with the same POV brightened my day, and I wanted to add support by posting.

    Guide for posting professional, supportive, and positive replies on LinkedIn

    • Read the initial post until you thoroughly understand it. Then read it again.
    • Type your response but DO NOT CLICK the COMMENT button!
    • Instead, copy your response into an AI prompt.
    • Give AI more instructions / make your prompt more robust. Tell the bot what your intentions are, what you want to say, etc. For example:

    I am responding to a post on LinkedIn where the poster is complaining about the proliferation of people posting hatred for AI. Their post is copied below.

    The poster finds the anti-AI posts as taxing as I do.

    I want to write an insightful reply that agrees with the poster. Here is what I have so far. Can you improve it?

    “The people posting AI hate need to stop doing that and start playing with AI and learning how to use it for their own benefit.

    In other words, they need to embrace AI and use their energy for positive gain instead of creating extremely loud and annoying noise for the rest of us. “

    • Copy and paste the original post with something like “Post I am responding to: (insert original post)”

    Click Return, and your favourite AI assistant will generate something like this:

    Below are far better response options that Google Gemini generated. The short conversation can be viewed here.

    Google Gemini:

    That’s a great starting point, and it clearly conveys your agreement and core message. To make it more insightful, professional, and suitable for a LinkedIn reply, we can soften the tone slightly, directly reference the poster’s points, and offer a more constructive alternative to the “AI hate.”

    Here are a few options, building on your initial thoughts:

    Option 1: Focus on the Constructive Shift (Professional & Direct)

    This option directly addresses the “doom-and-gloom” point and focuses on the positive alternative of skill-building.

    “Absolutely agree on the fatigue with the ‘AI is taking over’ narrative. It feels like unnecessary noise, as you said. Instead of spending energy on doom-and-gloom posts, I wish more people would adopt your perspective: view AI as a ‘junior staffer’ tool and focus on skill acquisition. The real opportunity isn’t railing against it, but dedicating that same energy to learning how to prompt and leverage it for greater efficiency. That’s the path to positive personal and professional gain, not just visibility.”


    Option 2: Emphasize the Tool Analogy (Supportive & Insightful)

    This option uses the poster’s “tool” analogy to make your point more impactful and align with their central thesis.

    “Couldn’t have said it better! The consistent anti-AI sentiment is indeed exhausting. I strongly agree that the folks posting hate need to stop creating such loud noise and start seeing AI exactly as you described: a powerful tool. The energy spent complaining is energy wasted that could be used for exploration. The future belongs to those who embrace the responsibility of learning to use this tool properly—turning the noise into demonstrable, positive gains in productivity and clarity.”


    Option 3: Short and Punchy (High Impact)

    A concise, high-impact reply that gets straight to the point while maintaining a constructive tone.

    “Spot on. The ‘AI fatigue’ is real. It’s frustrating to see so much energy dedicated to fear-mongering instead of constructive action. My hope is that the ‘anti-AI’ crowd eventually realizes their focus is backward: The goal isn’t to stop the tool, but to learn how to master it for personal benefit. They are delaying their own productivity gains by choosing resistance over responsible adoption.”


    Recommendation: I think Option 1 or Option 2 are the best fit for LinkedIn, as they are both insightful, professional, and directly affirm the poster’s excellent points.

    Would you like me to generate another version, perhaps focusing on a different aspect of the poster’s message (like the “context and empathy” point)?


    From there it is simply a matter of copying whichever option you prefer, or refining your conversation, etc. It is a good idea to run your reply through Grammarly or something similar before pasting over your reply and finally clicking the COMMENT button.
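
    If you’d rather script this workflow than bounce between browser tabs, here is a minimal sketch of the same idea in Python using the OpenAI SDK. Everything in it is illustrative: the model name, the draft reply, and the placeholder post are assumptions, and any chat-capable assistant (Gemini, ChatGPT, Grammarly, etc.) can play the same role.

    ```python
    # Minimal sketch of the reply-drafting workflow described above.
    # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
    # in the environment; the model name and all text below are placeholders --
    # swap in your own draft and the real post you are replying to.
    from openai import OpenAI

    client = OpenAI()

    original_post = "(paste the LinkedIn post you are responding to here)"
    my_draft = (
        "The people posting AI hate need to stop doing that and start "
        "playing with AI and learning how to use it for their own benefit."
    )

    prompt = (
        "I am responding to a post on LinkedIn where the poster is complaining "
        "about the proliferation of people posting hatred for AI. I find the "
        "anti-AI posts as taxing as the poster does. I want to write an "
        "insightful, professional reply that agrees with the poster. "
        "Here is what I have so far. Can you improve it?\n\n"
        f"My draft: {my_draft}\n\n"
        f"Post I am responding to: {original_post}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; use whichever one you prefer
        messages=[{"role": "user", "content": prompt}],
    )

    # Print the suggested replies so you can review and edit them yourself.
    print(response.choices[0].message.content)
    ```

    Either way, the last step never changes: read the suggestions, keep only what sounds like you, and then click COMMENT.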

  • The Day I Nearly Got Excommunicated by the Melbourne Painting Community

    (And How Yul Brynner Saved My Soul)1


    Let me begin with a confession, before any Melbourne-based painter leaks this to the press:
    I failed to comment on a friend’s artwork on Facebook.

    To normal people—people who enjoy sunlight, perhaps—you might think: “So? Who cares?”

    But within the Melbourne Painting Community (MPC™), this is not a minor slip. This is not forgetting to like your cousin’s vacation post or accidentally double tapping your ex’s gym selfie from 2017.

    No.
    This is a sin.
    An Art sin!

    And the Melbourne painters don’t forget. They don’t forgive. They just quietly sharpen their palette knives and wait.

    Meet Asa Letourneau (Painter, Friend, Judge, Jury)

    Asa Letourneau is a badass with a brush. He lives in Melbourne (Australia, not Ontario), which sounds like an idyllic, artsy paradise until you realize it contains people like Asa—people who WILL call you out in the comments if you fail to provide their painting with the correct level of absurdist reverence.

    And to his credit, he was right.

    Normally, when Asa posts a new painting—each one a wild synthesis of portraiture, gesture, angst, and whatever dietary secrets the Australians are hiding—I swoop in with my ritual contribution:

    I compare his painting to an esoteric celebrity in an uncomfortable situation.

    The rules are simple:

    • Must be a real celebrity
    • Must be at least 67.4% forgotten
    • Must be caught doing something weird
    • Must have the emotional tone of a rumour overheard in a bowling alley

    So, when Asa posted a haunting, slightly sinister portrait that looked like a philosopher trying to remember if he left the stove on, I should have responded immediately.

    I hesitated. Then got busy. Then, I simply forgot.

    When Asa finally called me out, I felt like a defendant in front of the Melbourne High Council of Acrylics and Oils.

    I deserved it, my October 2022 article about Asa notwithstanding.

    The Melbourne Painter Mafia Comes for Me

    (A Cautionary Tale of Neglect, Art, and Spiteful Microfoam)

    There are mistakes, and then there are Melbourne mistakes.

    • Stepping onto the tram without tapping your myki? Minor infraction.
    • Liking the Collingwood Magpies ironically? Social risk, but survivable.
    • Failing to comment on a Melbourne painter’s Facebook post?
      Catastrophic. Biblical. Possibly unforgivable.

    The First Signs of Trouble

    The morning after my failure to respond to Asa’s painting, I sensed something was off.

    I couldn’t put my finger on it at first. The sky seemed… more judgmental than usual. My coffee tasted like it had been brewed by someone silently resenting me. Even my cat looked at me as if she’d been reading gossip from Fitzroy.

    Then, at 11:17 AM, I received a message from Asa himself—a gentle but firm callout.

    “Lazy,” he wrote.

    And that was it. Not even an emoji to soften the blow.

    I could feel the ground shifting beneath me. The Melbourne art world, once warm and welcoming from 16,000 kilometres away, had turned its collective back.


    Whispers on Sydney Road

    Within hours, stories reached me. A friend of a friend in Brunswick told me he’d overheard two painters in an outdoor café:

    PAINTER 1: “Did you hear? He didn’t comment on Asa’s new work.”

    PAINTER 2: (clutching pearls) “No.”

    PAINTER 1: “Yes.”

    PAINTER 2: “God help him.”

    And then, in a lower voice:
    “We don’t tolerate that here.”

    They shook their heads in unison and sipped their lattes with the solemnity of people mourning the death of a loved one.


    Every Cappuccino a Threat

    For the next 24 hours, any time I imagined walking down a Melbourne laneway, the murals in my mind’s eye turned their painted heads away from me.

    Banksy-style rats refused to make eye contact. Abstract faces smirked.

    A giant bin chicken in sunglasses whispered, “Not even a comment, mate?” (I don’t know why the bird had sunglasses. Symbolism, probably.)

    Every coffee I ordered in my imagination arrived with the milk foam arranged into a passive-aggressive message:

    • “Okay then.”
    • “Guess we’re not doing feedback anymore.”
    • “Hope you’re proud of yourself.”
    • “Asa deserved better.”
    • And once: “You monster.”

    Baristas would hand me the cup slowly, like it was evidence, for added effect.


    Persona Non Grata in Brunswick

    Word spreads fast in artistic communities, but the Melbourne Painting Community (MPC) spreads it faster.

    You cross Asa once—ONCE—and suddenly your name is scratched off guest lists for openings you weren’t even invited to.

    Some say the MPC™ maintains a blocked ledger, like Santa’s naughty list but with more linen clothing and more expensive tote bags. I cannot confirm or deny its existence, but I suspect that next to my name it now says:

    “Did not comment. Dangerous. Unreliable.”

    I imagine the scene:

    A hushed meeting in a converted warehouse studio. Dim lighting. The smell of turpentine. Everyone wearing scarves indoors for no reason.

    The chairperson clears her throat.

    “Next order of business: The Canadian. He failed to respond to Asa’s post.”

    Gasps fill the room like cigarette smoke fills a bingo hall.

    A paintbrush clatters to the floor, having even greater impact than a surgical instrument striking the operating room’s floor after it is announced that Henry Blake has been killed on M*A*S*H.

    Someone whispers, “Excommunicate him.”
    Someone else adds, “Publicly.”

    A third voice says, “Make him drink instant coffee.”
    The room murmurs in horror.


    The Only Way Out: Yul Brynner

    When you are on the brink of artistic banishment, you have two choices:

    1. Apologize meekly, hoping the painters accept your contrition.
    2. Tell an absolutely unnecessary hardboiled Yul Brynner story.

    Naturally, I chose option 2.

    And it worked—because Melbourne painters respect three things:

    1. Discipline
    2. Oil on canvas
    3. An obscure, threatening-sounding anecdote involving a Hollywood legend

    When I unleashed the story of Yul Brynner spitting at a man for coughing during a take, Asa accepted it like an ancient offering.

    The Melbourne Painter Mafia backed off.
    The murals turned their faces toward me again.
    The microfoam spelled: “We cool.”

    Balance was restored.

    Asa Letourneau’s painting that made me think of Yul Brynner.

    1. This post is the result of a conversation I had with ChatGPT. This is my original prompt:

      This is a painting by Asa Letourneau. He lives in Melbourne, Australia. i have never met him in real life but i would definitely consider him a friend. i met him through Mark Seabrooke, another painter from Melbourne. Asa posts his paintings on Facebook, and i normally post a tongue-in-cheek reply, comparing whoever he has painted to a celebrity, usually an old, somewhat obscure celebrity, and usually in an awkward situation. This time i failed to respond to Asa’s painting. (Sorry, Asa!), and he called me out for my lack of response, calling me lazy. i deserved that. In an attempt to get back on Asa’s good side, and to avoid being excommunicated by the entire Melbourne painting community, i responded with the following: “Yul Brenner spits a son of a bitch walking to the men’s room as he tries to enjoy a coffee after a lovely parmesan linguine dinner, the garlic perfect. The SOB in question ruined a scene in a movie years earlier by coughing.” i am thinking of turning this entire affair into a blog post. Can you suggest some ideas and expand on the fabricated Yul Brenner story?. ↩︎


  • Thanksgiving Reflections: A Canadian Perspective

    American Thanksgiving has always been one of those days that gets me a little down, and 2025 was no exception.

    As a Canadian, seeing the parades and NFL football games always makes me feel like a second-class North American citizen. And hearing about serving a 10-person Thanksgiving feast for 10 cents always makes me sadly jealous, too.1

    And these days, I don’t even get to enjoy Letterman calling the 1-800 Butterball Talk-Line for turkey-roasting guidance!

    Somehow, I gobbled on

    While I should have been smelling the turkey roasting and hearing the delicious sounds of a busy kitchen, I was simply sitting at my desk, listening to clementines and snacking on bad radio.

    But then I looked out the window, and fluffy white snowflakes were slowly falling on a slate-grey Canadian world. I decided to type in a few words, forcing myself to be somewhat productive.

    I typed in a couple of hundred words, but they were a slate-grey shade of meh, too. At least I was underway, though. I copied my words into Grammarly, then changed the tone to Corporate and experimented with a couple more AI toys.

    Of course, I eventually ended up in ChatGPT land.

    My warm-up words involved tricolour miniature marshmallows, Gummi Bears, ammonia, beanbag chairs, and pyjamas. I also imagined that each snowflake contained an entire universe, with people and creatures whose idea of infinity lasted from the moment they left a cloud until they melted on whatever they landed on.

    Before I knew it, I had an idea for a story. A highly successful female CEO who is looking out the windows of her corner office in a more polished version of a slate grey world. She’s wondering if she has enough cocaine to make it through Thanksgiving weekend. Compounding her troubles is the fact that her wife is leaving her for a male chimney sweep with a lengthy list of allegations against him so distasteful that they are mentioned only in hushed tones, if at all.

    Suddenly my Thanksgiving Present arrived via AI

    I had a nice discussion with ChatGPT about all this, and the bot created a lot of decent content, some of it pretty good and almost lol.

    Then, out of the wild blue cyber yonder, ChatGPT asked me if I would like it to write a chapter from the snowflakes’ point of view!

    Well, Merry Christmas, Mr. Lawrence, it will be a happy Thanksgiving after all!


    Hindsight is Turkey/Turkey

    Looking back, I think I was thinking of Carla Connor from Coronation Street when dreaming of the CEO, and the copy that ChatGPT produced reminded me of the SCTV recurring skit “The Days of the Week”, which was a spoof of soap operas.


    1. (November 20, 2025). 2025 Thanksgiving cost drops – but at a price. Axios. https://www.axios.com/local/richmond/2025/11/21/2025-thanksgiving-price-without-mashed-potatoes ↩︎

  • RE: I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse.

    I recently read a lengthy article in the New York Times Magazine section titled “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” written by Carlo Rotella. It is a 27-minute “listen” if you choose to listen instead of read, but it is fascinating and definitely worth half an hour of your time.

    Rotella’s article is about a college professor trying to keep AI out of the classroom and learning experience entirely. It is a noble endeavour, but it struck me as replacing one “evil” with another, and reminded me of the heartbreaking words “chemo treatment is worse than the disease“.

    Carlo Rotella is quite distinguished, and Boston College would likely call security if I even thought about stepping on their campus, so who am I to have a contrarian opinion? But of course, have one I did. And, of course, I also wrote a relatively lengthy comment that the NY Times deemed worthy of approval.

    And since Rotella’s article is about minimising AI’s influence in higher education, I also discussed it with ChatGPT.

    Interestingly, one of the things I find most disturbing about Rotella’s teaching style (students having to scan their books to share their margin notes, underlining, etc., so Rotella could “see them think”) wasn’t even picked up by ChatGPT.

    1. The results of my conversation with AI about Rotella’s article:
      1. (1) Blog-Ready Section: “The Problems With Rotella’s Teaching Approach”
      2. (2) Respectful but Strong Public Critique
    2. My letter to the NY Times about Rotella’s article:
    3. Conclusion:

    The results of my conversation with AI about Rotella’s article:

    (1) Blog-Ready Section: “The Problems With Rotella’s Teaching Approach”

    Carlo Rotella is clearly a gifted teacher who cares deeply about his students, but the methods described in the article raise serious concerns about autonomy, privacy, and intellectual freedom. His goal is admirable — to help students develop critical thinking in an age of shortcuts — but the techniques he uses risk becoming more intrusive than AI itself.

    For example, Rotella requires students to scan and submit their annotated pages — their underlining, highlighting, and margin notes — because, in his words, it’s “as close as I can get to watching them think as they read.” But that’s precisely the problem: when you know your professor is reading your private reactions, it becomes almost impossible to think authentically. Many students will inevitably begin writing “the kinds of comments he wants,” not the ones they truly felt. Instead of encouraging genuine reflection, this method encourages a performance of critical thinking.

    Rotella also forces students to speak early and often in class, regardless of whether they’re ready, comfortable, or wired for that mode of engagement. His belief that “if you don’t speak in the first two weeks, you never will” isn’t a fact of human behavior — it’s simply a justification for compulsory participation. Some students process internally, quietly, and deeply. Others have anxiety, are ESL learners, or simply need time. Mandatory speech doesn’t cultivate thought; it rewards fast talkers and penalizes slow thinkers.

    And there is a deeper philosophical issue: Rotella seems to impose a single “correct” way of being a student. Real thinking, in his view, happens with pencils on paper, in a physical book, spoken aloud, in a room free of technology. But this excludes entire groups of students — neurodivergent learners, introverts, ESL students, students with reading disabilities, or students whose thinking unfolds privately and gradually. AI, ironically, is often far less intrusive and much more accommodating of different learning styles.

    What Rotella presents as “equipment for living” can easily feel like surveillance, control, and micromanagement. Inspecting a student’s annotations is more invasive than anything AI does. For many of today’s students, the idea of being graded on how you think, rather than what you think, is not rigorous — it’s suffocating.

    Rotella may believe he’s preserving something noble about education. But to students who value autonomy, individuality, and intellectual privacy, his approach can feel less like guidance and more like being shaped in his image.


    (2) Respectful but Strong Public Critique

    Carlo Rotella’s intentions are good. He wants to preserve deep reading, genuine dialogue, and slow thinking at a time when AI makes it easy to outsource intellectual effort. That’s a worthy mission, and many professors share his concern.

    But good intentions don’t shield an approach from scrutiny, and several aspects of his pedagogy deserve critique.

    First, Rotella’s requirement that students submit scans of their annotated books crosses a boundary. Annotations are often personal — half-formed ideas, questions, confusions, emotional reactions. Asking students to reveal that inner mental landscape, to have it evaluated, creates pressure to perform a certain kind of “approved” intelligence. Instead of fostering authenticity, it subtly encourages conformity. Students quickly learn to underline the kinds of sentences he likes and write the kinds of notes he rewards.

    Second, his insistence on mandatory, early speaking privileges extroverted, fluent, confident students and disadvantages everyone else. Speaking is not thinking; it is only one form of it. Many students — deep readers, reflective thinkers, ESL students, neurodivergent students — think best in writing, in private, or after time to process. A universal demand for quick verbal participation is pedagogically narrow. Furthermore, mandatory speaking is just that: speaking. It is not listening, contemplating, reconsidering, or coming to better informed thoughts. It is thinking about what to say so you get credit.

    Third, Rotella’s broader philosophy implies that his preferred mode of learning — books, handwriting, chalkboard discourse — is the superior one. But in 2025, students are diverse, and so are their cognitive styles. Some truly need digital tools. Others use AI to scaffold language, comprehension, or organization. Rotella’s rejection of these tools risks excluding precisely the students for whom they are most beneficial.

    And finally, there’s a paradox: the very professor who worries about AI being intrusive into the learning process has built an environment that is arguably more intrusive. AI doesn’t grade your marginalia. It doesn’t force you to speak. It doesn’t judge whether you underlined the “right” passage. Rotella’s analog methods, for all their charm, are forms of intellectual surveillance, not liberation.

    Rotella’s classrooms may work beautifully for some students — the extroverted, the fluent, the traditionalists. But for many others, his methods feel restrictive, prescriptive, and overly controlling. Education should challenge students, yes — but not by constraining the way their minds naturally work.


    My letter to the NY Times about Rotella’s article:

    I’m in my mid-50s, so I went to school long before AI, long before everyone carried a cellphone with more computing power than what was used to put a man on the moon, and even before computers.

    I am a big proponent of AI, but probably because I was taught long division before I was given a calculator and developed critical thinking skills.

    Carlo Rotella paints a rosy picture, something out of a feel-good Disney movie starring a bankable megastar and some very beautiful up-and-coming new talent.

    As a student, I think I would have a serious problem with some of Carlo Rotella’s teaching techniques.

    I would find them intrusive and controlling, under the guise of altruism and “teaching critical thinking.” I would feel like he was making sure I was thinking the way he wanted me to, that I was being made in his image, becoming another of his minions.

    Tyler from the article seems like he might be feeling exactly how I would if I were a student of Carlo Rotella. Thankfully, Tyler handled the situation much more constructively than I would have:

    “Can we talk sometime about how we can ask the questions on our own?” he said. “We always have you to ask the questions and set up how we’re going to discuss and analyze…”

    Perhaps I have trust issues. Perhaps I am paranoid and should seek professional help. Or perhaps I am simply thinking critically and unable to accept Carlo Rotella’s implied claim that he is being altruistic, when he might in fact be a megalomaniac with control issues.


    Conclusion:

    For me, the article lands in the uncanny space where earnest pedagogy blends with an almost parental oversight—well-meaning, but suffocating in its assumption that students need their thinking supervised. My own reaction reflects a larger tension the article raises without fully admitting it: in an era when students can outsource half their mental workload to AI, Rotella responds by tightening human control rather than trusting students to grow independently. Ironically, the monitoring of students’ thoughts on paper feels far more intrusive than any algorithmic autocomplete. Where AI often gives freedom, Rotella’s approach seems to reclaim authority. Whether that’s noble or overbearing depends on your comfort level, your trust issues, and maybe how many times in your life a teacher has asked to see inside your book.

    Appendix: At the time of posting, my comment in the NY Times had 5 Recommendations, and the comment that said “@George Perry Yes, you have trust issues.” had 3 Recommendations.

  • How to Fact-Check AI: A Practical Guide to Spotting Truth from Fiction

    AI-generated information sounds convincing, but that doesn’t mean it’s true. AI models are built to be persuasive—not always accurate. Here’s how to protect yourself from AI misinformation and spot the real facts.

    Below, you’ll find actionable tips to quickly verify anything you read from an AI—so you can share and use AI-powered information with confidence.

    1. Read Like a Pro: The “Lateral Reading” Technique

    When we read a book, we read “vertically” (top to bottom). When checking AI, you must read “laterally” (across different tabs). [1]

    • Don’t stay in the chat window. If the AI makes a claim, immediately open a new browser tab.
    • Search for the specific claim, not the topic. If AI says, “Eating 30 almonds a day cures headaches,” don’t search “benefits of almonds.” Search “Does eating almonds cure headaches study.”
    • Compare sources. Look for consensus across at least three independent, reputable sources (e.g., a government health site, a major news outlet, and an academic institution). [2] A small script for opening the same claim in several tabs at once is sketched just below.
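
    Lateral reading is easy to automate at the “open the tabs” stage. Below is a minimal Python sketch that opens one specific claim in a few independent search engines at once; the choice of engines and the example claim are my own placeholders, not part of the technique itself.

      # lateral_reading.py - open one specific claim in several search tabs for side-by-side checking.
      # Assumes a desktop machine with a default web browser configured; uses only the standard library.
      import webbrowser
      from urllib.parse import quote_plus

      def open_lateral_tabs(claim: str) -> None:
          """Open the same claim in a few independent search engines."""
          encoded = quote_plus(claim)
          search_urls = [
              f"https://www.google.com/search?q={encoded}",       # general web search
              f"https://duckduckgo.com/?q={encoded}",             # a second, independent engine
              f"https://scholar.google.com/scholar?q={encoded}",  # academic coverage
          ]
          for url in search_urls:
              webbrowser.open_new_tab(url)

      if __name__ == "__main__":
          # Search the specific claim, not the topic.
          open_lateral_tabs("does eating almonds cure headaches study")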

    2. Busted! Fact-Checking AI Quotes

    AI often hallucinates quotes or attributes real quotes to the wrong people. [3]

    • Copy the quote. Take the specific sentence the AI provided.
    • Paste it into Google with quotation marks. e.g., “The only thing we have to fear is fear itself”.
    • Analyze results:
      • No results? The AI likely invented the quote.
      • Different author? The AI misattributed it.
      • Slightly different wording? The AI “paraphrased” but presented it as a direct quote (a rough similarity check for this case is sketched below).
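
    The “slightly different wording” case is the easiest to miss by eye. Once you have found the passage a quote supposedly comes from, a rough similarity check can flag paraphrases presented as direct quotes. This is only a sketch using Python’s standard difflib; the 0.95 threshold is an arbitrary assumption, not an established cutoff.

      # quote_similarity.py - rough check for quotes the AI paraphrased but presented as exact.
      from difflib import SequenceMatcher

      def quote_is_verbatim(ai_quote: str, source_passage: str, threshold: float = 0.95) -> bool:
          """Return True only if the AI's quote appears (essentially) word for word in the source."""
          quote = ai_quote.strip().lower()
          passage = source_passage.lower()
          if quote in passage:
              return True  # exact substring: a genuine direct quote
          # Otherwise compare character-level similarity; low scores suggest paraphrase or misquotation.
          return SequenceMatcher(None, quote, passage).ratio() >= threshold

      if __name__ == "__main__":
          ai_quote = "We have nothing to fear but fear itself"      # the AI's version of the quote
          source = "the only thing we have to fear is fear itself"  # wording in the source you found
          print(quote_is_verbatim(ai_quote, source))                # low similarity here, so treat it as a paraphrase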

    3. Beware of “Ghost Links” and Fake Citations

    AI models (like ChatGPT or Gemini) can generate realistic-looking citations that do not exist. [4]

    • Click every link. If the AI provides a URL, click it. Does it lead to a real page or to a 404 error? Does it lead to a relevant article or just to the website’s homepage?
    • Check the DOI. For scientific papers, ask for the DOI (Digital Object Identifier) and paste it into a resolver like doi.org. If the AI can’t provide a real DOI, the paper might not exist. [5] A small script for batch-checking links and DOIs is sketched below.
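
    If an AI answer comes with a long list of citations, checking them one by one gets tedious. Here is a minimal Python sketch for the mechanical part, assuming the third-party requests library is installed; the doi.org resolver normally redirects valid DOIs to the publisher and returns a 404 for DOIs it doesn’t know, which is what the second function relies on.

      # citation_check.py - quick sanity checks for AI-provided links and DOIs.
      # Requires the third-party "requests" library (pip install requests).
      import requests

      def url_exists(url: str) -> bool:
          """Return True if the URL answers with something other than a 404."""
          try:
              resp = requests.get(url, timeout=10, allow_redirects=True)
              return resp.status_code != 404
          except requests.RequestException:
              return False  # unreachable host, malformed URL, timeout, etc.

      def doi_resolves(doi: str) -> bool:
          """Check a DOI against the doi.org resolver; unknown DOIs come back as 404."""
          try:
              resp = requests.get(f"https://doi.org/{doi}", timeout=10, allow_redirects=False)
              return 300 <= resp.status_code < 400  # valid DOIs redirect to the publisher
          except requests.RequestException:
              return False

      if __name__ == "__main__":
          print(doi_resolves("10.1186/s41073-025-00165-z"))  # a DOI cited in this post's sources
          print(doi_resolves("10.9999/made-up-doi"))          # a fabricated DOI, for contrast
          print(url_exists("https://www.swanref.org/ai-hallucination-detector"))

    Keep in mind that a link that loads is not automatically a good citation; a script like this only catches dead links and fabricated DOIs, so you still need to read what the page actually says.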

    4. How to Spot AI Hallucinations

    Certain patterns in AI writing suggest it might be making things up. Be extra skeptical if you see the following [6] (a simple text scan for these phrases is sketched after the list):

    • Vague Authority: Phrases like “Studies show…” or “Experts agree…” without naming the specific study or expert.
    • Perfectly Logical but Incorrect: The answer follows a logical structure (A + B = C), but the premise (A) is false.
    • Repetitive Hedges: If the AI apologizes excessively or uses phrases like “It is important to note” repeatedly, it may be masking a lack of concrete data.
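
    None of these patterns proves a hallucination on its own, but the wording-based ones are easy to scan for automatically. The sketch below flags “vague authority” and filler-hedge phrasing in an answer; the phrase list is my own illustrative assumption, not an established detector.

      # hallucination_flags.py - flag vague-authority and filler-hedge phrasing in an AI answer.
      import re

      # Illustrative phrase list; extend it with whatever red flags you keep running into.
      RED_FLAGS = [
          "studies show",
          "experts agree",
          "research suggests",
          "it is important to note",
      ]

      def flag_vague_authority(text: str) -> list[str]:
          """Return the red-flag phrases that appear in the text (case-insensitive)."""
          return [phrase for phrase in RED_FLAGS if re.search(re.escape(phrase), text, re.IGNORECASE)]

      if __name__ == "__main__":
          answer = ("Studies show that 30 almonds a day cure headaches. "
                    "It is important to note that individual results vary.")
          print(flag_vague_authority(answer))  # ['studies show', 'it is important to note']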

    5. Master the “SIFT” Method for AI Fact-Checking

    This is a media literacy framework adapted for AI [7]:

    • S – Stop: The AI replies instantly. You should pause. Don’t use the info immediately.
    • I – Investigate the Source: Ask the AI, “What is the primary source for that specific statistic?” Then go find that source yourself.
    • F – Find Better Coverage: Is this “fact” reported by major outlets? If AI is the only one saying it, it’s likely false.
    • T – Trace Claims: Go back to the original context. AI summarizes; in doing so, it often strips away nuance (e.g., “Coffee causes cancer” vs. “Hot coffee above 65°C may increase risk…”).

    Summary Checklist for Verification

    • Statistics: Ask for the year and source. Search the stat to see if it’s outdated.
    • Quotes: Search the exact text in quotes “…” to verify the author.
    • Legal/Medical: Zero Trust. Consult a professional or official government database (.gov).
    • Images: Zoom in on hands, text in background, and shadows. Use Reverse Image Search.
    • Code: Run the code in a sandbox environment; do not copy/paste directly into production.


    Sources:

    1. “Fact-checking AI with Lateral Reading – Artificial Intelligence (AI) and Information Literacy – Learning Guides at Jefferson Community & Technical College.” Jefferson Community & Technical College. 2023. https://jefferson.kctcs.libguides.com/artificial-intelligence/fact-checking-ai Accessed November 25, 2025.
    2. “SIFT for Information Evaluation – Critically Evaluating Online Information.” Scottsdale Community College Library. 2025. https://library.scottsdalecc.edu/SIFT Accessed November 25, 2025.
    3. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025.
    4. “AI Hallucination Detector for Citations – Free Tool | SwanRef.” SwanRef. 2025. https://www.swanref.org/ai-hallucination-detector Accessed November 25, 2025.
    5. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025.
    6. Hufton, Andrew L. “AI-generated research paper fabrication and plagiarism in the scientific community.” Patterns 4, no. 4 (2023): 100731. https://doi.org/10.1016/j.patter.2023.100731 Accessed November 25, 2025.
    7. “SIFT – Empowering Informed Communities.” University of Washington Libraries. 2025. https://depts.washington.edu/learncip/sift/ Accessed November 25, 2025.