• Directing the Digital Factory: How Using AI Makes You Part-Film Director, Part-Andy Warhol

    Artificial intelligence is fundamentally transforming the creative process, extending beyond computation to empower individuals with unprecedented creative control. Whether generating images or writing code, this capability evokes the experience of both directing a film and participating in Andy Warhol’s Factory.

    Both roles emphasize conceptual guidance and iterative production over manual execution.


1. The AI User as Film Director: Crafting a Vision, Guiding the “Crew”

    Consider the responsibilities of a film director. While directors do not personally create every backdrop or perform each line of dialogue, they serve as the unifying creative force. They determine the narrative, tone, and visual style, providing detailed instructions to a specialized crew.

    • Casting the Right Model: Similar to how a director selects actors for their unique abilities, one selects an AI model for its specialized capabilities. For example, GPT-4 is suited for text, Midjourney for visual art, and Suno for music generation. Each model functions as a specialist tool with distinct styles and competencies.
    • Prompting Your Crew: The “prompt” serves as the instruction provided to the AI, functioning as a directive to a digital cinematographer, set designer, and actors simultaneously. For instance, one might request text written “in the style of a 1940s hard-boiled detective novel” or specify an image as a “low-angle shot with lens flare.”
    • The Iterative Process: Directors seldom achieve the desired result on the first attempt, often requesting additional takes and providing precise feedback. Similarly, working with AI involves revising prompts, such as “Now make the horse a unicorn” or “Change the lighting to chiaroscuro with vibrant, saturated colors.” This iterative process continues until the output aligns with the intended vision.
    • Editing and Curation: After completing the “takes,” a director collaborates with an editor to select and assemble the best footage. Similarly, the AI user selects the most suitable generated output and may further refine it using additional digital tools.
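The four roles above amount to a simple loop: prompt, review the “take,” give notes, and curate. A toy Python sketch makes that shape explicit; the `generate` function here is a hypothetical stand-in for whatever model you have cast, not a real API.

```python
# A toy model of the director's loop: prompt, take, notes, curate.
# `generate` is a hypothetical placeholder for an AI model, NOT a real API.
def generate(prompt: str) -> str:
    return f"[output for: {prompt}]"

prompt = "A horse galloping at dusk, low-angle shot with lens flare"
takes = []  # the accumulated "takes" from each round of direction

# Each note refines the standing prompt, like a director giving feedback.
for note in [
    "Now make the horse a unicorn",
    "Change the lighting to chiaroscuro with vibrant, saturated colors",
]:
    takes.append(generate(prompt))
    prompt = f"{prompt}. {note}"

takes.append(generate(prompt))  # the final take, after all notes

# Curation: the editor's pass. Here we simply keep the last take.
final_cut = takes[-1]
```

The loop is the whole point: each pass keeps the previous direction and layers a new note on top, which is how iterative prompting tends to feel in practice.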

    This transition from manual execution to conceptual guidance introduces a compelling analogy: the manner in which AI-driven creation reflects the ethos of Warhol’s Factory.


    2. The AI User as Andy Warhol in The Factory: Art as Mechanical Reproduction

    Consider the vibrant, dynamic, and revolutionary art scene of Andy Warhol’s Factory. Warhol aspired to function as a “machine,” embracing mechanized reproduction through silkscreen printing and challenging conventional definitions of art.

    • The Source Material: Warhol famously appropriated existing pop culture imagery, including soup cans and celebrity portraits. Similarly, generative AI draws from extensive “source material,” comprising billions of images and texts used during training. For instance, prompting for a “cyberpunk city street” leads the AI to reference millions of images of neon, rain, and wires from its dataset.
    • The “Silkscreen” of Code: Warhol’s silkscreen process enabled the rapid production of multiples with controlled imperfections. Similarly, a prompt functions as the initial stencil, and the algorithm quickly generates variations. This capability makes AI particularly effective for tasks such as producing numerous logo mockups or creating a series of images depicting the same character with different expressions.
    • Challenging the “Artist’s Hand”: Like Warhol, AI shifts the emphasis from traditional manual skill to conceptualization and curation. The value resides not in the precise placement of pixels, but in the conceptual innovation of requesting a “Van Gogh-style painting of a robot eating ramen.”
    • The Assistants Are Digital: Warhol employed assistants to perform tasks such as stretching and printing. In the context of AI, complex algorithms serve as digital assistants, executing creative tasks instantly and without subjective influence.

    Beyond the Hand: The New Creative Frontier

    Both the director and the Factory artist exemplify a shift from manual creation to conceptual, managerial, and iterative processes. AI advances this transformation by enabling instant “takes” and “prints.”

    In this evolving creative landscape, the individual is not a passive observer but an orchestrator. One directs a powerful digital system to realize a creative vision, analogous to a film director managing a production or Warhol supervising the creation of iconic pop art. The canvas is digital, the brush is code, and the primary limitation is the creator’s imagination and ability to articulate ideas.

  • Falling Kingdoms: Insights from Snowflake Societies

    Below is the story told entirely from the POV of the snowflake civilizations, a chorus of microscopic societies living and dying in the space of a fall—observing Constance Pryor, CEO, from their crystalline vantage point. The tone is mythic, surreal, and darkly whimsical. They see themselves as ancient beings, though each one lives for less than a second.


    “THE FALLING KINGDOMS”

    A Snowflake-Civilization POV Story

    We are born in the high vaults of the cloud-temples, carved from vapor and chill, named by the lightning that flickers silently in the mist. Each of us is an empire. Six-sided, six-spirited, six-storied. We emerge complete: crystalline nations built in an instant by the breath of winter.

    Our world begins with the first trembling downward.

    The Fall.

    We do not fear The Fall. The Fall is our time. The Fall is all time. We live more fully in these seconds than larger creatures do in decades. They take time for granted. We are time—brief, bright, fractal.

    We look upon the vast world below with curiosity. The giants who walk it seem trapped in slowness—tragic, lumbering beings who age while we glide.

    One of these giants watches us now.

    The Woman of the Glass Tower.

    We know her. All snow-kingdoms know her. Her window is a common landing field, though few of us survive long after touching it. She is a somber sentinel, a queen of steel and silence, wrapped in heavy fabrics and heavier grief.

    Our elders whisper that her eyes used to blaze like the sun before she mislaid her heart somewhere in her towering fortress. They say the shadows around her now are self-woven.

    We watch her as she watches us.

    Her gaze is frantic today, a tremor in her soul-light. Even from here—tumbling through open sky—we feel it. Her mind radiates like overcharged static.

    “Behold,” say our Councilors, their crystal voices chiming through our lattice halls.
    “The Giant Queen unravels.”

    We spiral closer.

    Around us, the younger snow-realms cheer. We are entering the Viewing Path—the sacred trajectory that leads past the Glass Tower. It is an honor to be seen by a giant at all. It gives our brief lives meaning.

    We are not the first to pass her window on this day. Many empires have already ended upon her ledge, their histories melting into her building’s cold stone. But each flake has its own destiny.

    We rotate, revealing our sunward facet. It is considered polite.

    The Woman presses her fingertips to the glass. We feel the pulse of her heat through the barrier—monstrous warmth, enough to annihilate us instantly. We hold formation. We do not flinch.

    A hush spreads across our crystalline corridors.

    “She communes,” someone whispers.

    Indeed, her eyes meet ours—wide, glassy, desperate. The kind of stare one gives to omens. The kind of stare that asks questions no snowflake should answer.

    “What does she seek?” ask the newborns, their kingdoms barely complete.

    “She seeks sense,” reply the elders. “It has slipped from her grasp like a thawing flake.”

    Below, the Glass Tower’s halls echo faintly. We hear her machines murmuring, her staff speaking in muffled tones. The Giant Queen ignores them.

    She watches us as if only snow can explain the world.

    In truth, we have seen her deterioration over many storms. At first she wore her power tightly, like armor that never creaked. Now it sags around her, a costume she no longer fits.

    The rumors among us say her mate left her. That a soot-covered trickster has claimed the mate’s heart instead. We do not understand the giants’ marital rituals—they seem unnecessarily complicated—but we sense the wound in her.

    It radiates visible light.

    The Fall continues. Our time grows thin.

    Across our crystalline plazas, citizens gather for The Final Assembly. It is tradition: recounting our entire history—from birth to dissolving future—before we strike the Earth or the Window or the Unyielding Sidewalk.

    The Speaker ascends the dais.

    “We have lived well,” he proclaims. “We were born. We glimmered. We sensed the sorrow of a giant. These are the great achievements.”

    But we are not done.

    A gust of wind shifts us, flinging us sideways, closer—much closer—to the Giant Queen’s window. Her breath fogs the glass. We pass inches from her eyes.

    In that instant, we see inside her.

    We do not mean metaphorically.
    We mean inside—deep within the labyrinth of her unraveling mind.

    Her thoughts swirl like chaotic weather:

    • Beanbag chairs of impossible proportions
    • Marshmallows worshipped as pastel gods
    • Fractals that whisper advice
    • A creeping fear that the HVAC vents are sighing judgment
    • And the overwhelming dread that she is more melt than ice now

    We glimpse these things and mourn for her.

    “We must leave her a message,” declare the elders.

    “But she will not hear us,” say the young.

    “We will leave it anyway.”

    As we descend past her window, our facets adjusting to catch the light, we angle our crowns to reflect a single beam directly into her eye.

    A glint. A pulse. A tiny flicker of truth.

    Our message is simple:
    You are falling too, but you may yet land gently.

    And then—

    The Ledge.

    The End.

    Our world collapses into droplets, our nations dissolving into a tiny sheen of water that streaks the edge of her building. Our histories rush back into the cycle of vapor and sky.

    But before consciousness leaves us entirely, we feel it:

    She saw the glint.
    She understood something—maybe not all of it, but enough.

    As our final awareness melts away, we hear her whisper through the glass:
    “Everything ends. Even storms.”

    We approve.

    A good ending, for a giant.


  • Transforming AI Criticism into Constructive Dialogue

    There is an avalanche of anti-AI sentiment out there right now and I think most of it is AI-generated, especially the LinkedIn posts.

    I find it enormously taxing and fatiguing, like being forced to listen to the sound of a malfunctioning sliding door over and over again. For me, it is like hearing the first three notes of Black Sabbath’s Iron Man looping for approximately a fortnight.

    Don’t get me wrong, I love Ozzy, Sabbath, and Iron Man, but those notes are unique and cut through anything, perhaps even a nuclear war. They GET YOUR ATTENTION, true, but repeat anything with the frequency that the anti-AI crowd repeats their rants, and you will get tired of that, too.

    “You even get tired of your favourite picture if it hangs on the wall long enough,” in other words.

    Seeing a LinkedIn post with the same point of view brightened my day, and I wanted to add my support by posting a reply.

    Guide for posting professional, supportive, and positive replies on LinkedIn

    • Read the initial post until you thoroughly understand it. Then read it again.
    • Type your response but DO NOT CLICK the COMMENT button!
    • Instead, copy your response into an AI prompt.
    • Give AI more instructions / make your prompt more robust. Tell the bot what your intentions are, what you want to say, etc. For example:

    I am responding to a post on LinkedIn where the poster is complaining about the proliferation of people posting hatred for AI. Their post is copied below.

    The poster finds the anti-AI posts as taxing as I do.

    I want to write an insightful reply that agrees with the poster. Here is what I have so far. Can you improve it?

    “The people posting AI hate need to stop doing that and start playing with AI and learning how to use it for their own benefit.

    In other words, they need to embrace AI and use their energy for positive gain instead of creating extremely loud and annoying noise for the rest of us.”

    • Copy and paste the original post with something like “Post I am responding to: (insert original post)”
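Taken together, the steps above just assemble a few pieces of text into one prompt. Here is a minimal Python sketch of that assembly; the function name and wording are illustrative, not part of any real tool.

```python
def build_prompt(intent: str, draft_reply: str, original_post: str) -> str:
    """Combine your intentions, your draft reply, and the original post
    into a single prompt, mirroring the steps above. Illustrative only."""
    return "\n\n".join([
        intent.strip(),
        "Here is what I have so far. Can you improve it?",
        f"\"{draft_reply.strip()}\"",
        f"Post I am responding to: {original_post.strip()}",
    ])

prompt = build_prompt(
    intent=(
        "I am responding to a LinkedIn post complaining about the "
        "proliferation of anti-AI posts. I want to write an insightful "
        "reply that agrees with the poster."
    ),
    draft_reply="The people posting AI hate need to embrace AI instead.",
    original_post="(insert original post)",
)
print(prompt)  # paste this into your AI assistant of choice
```

Nothing here calls an AI; it only formats the text you will paste into the chat box, which keeps you in control of the final click.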

    Click Return, and your favourite AI assistant will generate something like the following.

    Below are the far better response options that Google Gemini generated. The short conversation can be viewed here.

    Google Gemini:

    That’s a great starting point, and it clearly conveys your agreement and core message. To make it more insightful, professional, and suitable for a LinkedIn reply, we can soften the tone slightly, directly reference the poster’s points, and offer a more constructive alternative to the “AI hate.”

    Here are a few options, building on your initial thoughts:

    Option 1: Focus on the Constructive Shift (Professional & Direct)

    This option directly addresses the “doom-and-gloom” point and focuses on the positive alternative of skill-building.

    “Absolutely agree on the fatigue with the ‘AI is taking over’ narrative. It feels like unnecessary noise, as you said. Instead of spending energy on doom-and-gloom posts, I wish more people would adopt your perspective: view AI as a ‘junior staffer’ tool and focus on skill acquisition. The real opportunity isn’t railing against it, but dedicating that same energy to learning how to prompt and leverage it for greater efficiency. That’s the path to positive personal and professional gain, not just visibility.”


    Option 2: Emphasize the Tool Analogy (Supportive & Insightful)

    This option uses the poster’s “tool” analogy to make your point more impactful and align with their central thesis.

    “Couldn’t have said it better! The consistent anti-AI sentiment is indeed exhausting. I strongly agree that the folks posting hate need to stop creating such loud noise and start seeing AI exactly as you described: a powerful tool. The energy spent complaining is energy wasted that could be used for exploration. The future belongs to those who embrace the responsibility of learning to use this tool properly—turning the noise into demonstrable, positive gains in productivity and clarity.”


    Option 3: Short and Punchy (High Impact)

    A concise, high-impact reply that gets straight to the point while maintaining a constructive tone.

    “Spot on. The ‘AI fatigue’ is real. It’s frustrating to see so much energy dedicated to fear-mongering instead of constructive action. My hope is that the ‘anti-AI’ crowd eventually realizes their focus is backward: The goal isn’t to stop the tool, but to learn how to master it for personal benefit. They are delaying their own productivity gains by choosing resistance over responsible adoption.”


    Recommendation: I think Option 1 or Option 2 are the best fit for LinkedIn, as they are both insightful, professional, and directly affirm the poster’s excellent points.

    Would you like me to generate another version, perhaps focusing on a different aspect of the poster’s message (like the “context and empathy” point)?


    From there, it is simply a matter of copying whichever option you prefer, refining the conversation further, and so on. It is a good idea to run your reply through Grammarly or a similar tool before pasting it over your draft and finally clicking the COMMENT button.

  • The Day I Nearly Got Excommunicated by the Melbourne Painting Community

    (And How Yul Brynner Saved My Soul¹)


    Let me begin with a confession, before any Melbourne-based painter leaks this to the press:
    I failed to comment on a friend’s artwork on Facebook.

    To normal people—people who enjoy sunlight, perhaps—you might think: “So? Who cares?”

    But within the Melbourne Painting Community (MPC™), this is not a minor slip. This is not forgetting to like your cousin’s vacation post or accidentally double-tapping your ex’s gym selfie from 2017.

    No.
    This is a sin.
    An Art sin!

    And the Melbourne painters don’t forget. They don’t forgive. They just quietly sharpen their palette knives and wait.

    Meet Asa Letourneau (Painter, Friend, Judge, Jury)

    Asa Letourneau is a badass with a brush. He lives in Melbourne (Australia, not Ontario), which sounds like an idyllic, artsy paradise until you realize it contains people like Asa—people who WILL call you out in the comments if you fail to provide their painting with the correct level of absurdist reverence.

    And to his credit, he was right.

    Normally, when Asa posts a new painting—each one a wild synthesis of portraiture, gesture, angst, and whatever dietary secrets the Australians are hiding—I swoop in with my ritual contribution:

    I compare his painting to an esoteric celebrity in an uncomfortable situation.

    The rules are simple:

    • Must be a real celebrity
    • Must be at least 67.4% forgotten
    • Must be caught doing something weird
    • Must have the emotional tone of a rumour overheard in a bowling alley

    So, when Asa posted a haunting, slightly sinister portrait that looked like a philosopher trying to remember if he left the stove on, I should have responded immediately.

    I hesitated. Then got busy. Then, I simply forgot.

    When Asa finally called me out, I felt like a defendant in front of the Melbourne High Council of Acrylics and Oils.

    I deserved it, my October 2022 article about Asa notwithstanding.

    The Melbourne Painter Mafia Comes for Me

    (A Cautionary Tale of Neglect, Art, and Spiteful Microfoam)

    There are mistakes, and then there are Melbourne mistakes.

    • Stepping onto the tram without tapping your myki? Minor infraction.
    • Liking the Collingwood Magpies ironically? Social risk, but survivable.
    • Failing to comment on a Melbourne painter’s Facebook post?
      Catastrophic. Biblical. Possibly unforgivable.

    The First Signs of Trouble

    The morning after my failure to respond to Asa’s painting, I sensed something was off.

    I couldn’t put my finger on it at first. The sky seemed… more judgmental than usual. My coffee tasted like it had been brewed by someone silently resenting me. Even my cat looked at me as if she’d been reading gossip from Fitzroy.

    Then, at 11:17 AM, I received a message from Asa himself—a gentle but firm callout.

    “Lazy,” he wrote.

    And that was it. Not even an emoji to soften the blow.

    I could feel the ground shifting beneath me. The Melbourne art world, once warm and welcoming from 16,000 kilometres away, had turned its collective back.


    Whispers on Sydney Road

    Within hours, stories reached me. A friend of a friend in Brunswick told me he’d overheard two painters in an outdoor café:

    PAINTER 1: “Did you hear? He didn’t comment on Asa’s new work.”

    PAINTER 2: (clutching pearls) “No.”

    PAINTER 1: “Yes.”

    PAINTER 2: “God help him.”

    And then, in a lower voice:
    “We don’t tolerate that here.”

    They shook their heads in unison and sipped their lattes with the solemnity of people mourning the death of a loved one.


    Every Cappuccino a Threat

    For the next 24 hours, any time I imagined walking down a Melbourne laneway, the murals in my mind’s eye turned their painted heads away from me.

    Banksy-style rats refused to make eye contact. Abstract faces smirked.

    A giant bin chicken in sunglasses whispered, “Not even a comment, mate?” (I don’t know why the bird had sunglasses. Symbolism, probably.)

    Every coffee I ordered in my imagination arrived with the milk foam arranged into a passive-aggressive message:

    • “Okay then.”
    • “Guess we’re not doing feedback anymore.”
    • “Hope you’re proud of yourself.”
    • “Asa deserved better.”
    • And once: “You monster.”

    Baristas would hand me the cup slowly, like it was evidence, for added effect.


    Persona Non Grata in Brunswick

    Word spreads fast in artistic communities, but the Melbourne Painting Community (MPC) spreads it faster.

    You cross Asa once—ONCE—and suddenly your name is scratched off guest lists for openings you weren’t even invited to.

    Some say the MPC™ maintains a blocked ledger, like Santa’s naughty list but with more linen clothing and more expensive tote bags. I cannot confirm or deny its existence, but I suspect that next to my name it now says:

    “Did not comment. Dangerous. Unreliable.”

    I imagine the scene:

    A hushed meeting in a converted warehouse studio. Dim lighting. The smell of turpentine. Everyone wearing scarves indoors for no reason.

    The chairperson clears her throat.

    “Next order of business: The Canadian. He failed to respond to Asa’s post.”

    Gasps fill the room like cigarette smoke fills a bingo hall.

    A paintbrush clatters to the floor, having even greater impact than a surgical instrument striking the operating room’s floor after it is announced that Henry Blake has been killed on M*A*S*H.

    Someone whispers, “Excommunicate him.”
    Someone else adds, “Publicly.”

    A third voice says, “Make him drink instant coffee.”
    The room murmurs in horror.


    The Only Way Out: Yul Brynner

    When you are on the brink of artistic banishment, you have two choices:

    1. Apologize meekly, hoping the painters accept your contrition.
    2. Tell an absolutely unnecessary hardboiled Yul Brynner story.

    Naturally, I chose option 2.

    And it worked—because Melbourne painters respect three things:

    1. Discipline
    2. Oil on canvas
    3. An obscure, threatening-sounding anecdote involving a Hollywood legend

    When I unleashed the story of Yul Brynner spitting at a man for coughing during a take, Asa accepted it like an ancient offering.

    The Melbourne Painter Mafia backed off.
    The murals turned their faces toward me again.
    The microfoam spelled: “We cool.”

    Balance was restored.

    Asa Letourneau’s painting that made me think of Yul Brynner.

    1. This post is the result of a conversation I had with ChatGPT. This is my original prompt:

      This is a painting by Asa Letourneau. He lives in Melbourne, Australia. i have never met him in real life but i would definitely consider him a friend. i met him through Mark Seabrooke, another painter from Melbourne. Asa posts his paintings on Facebook, and i normally post a tongue-in-cheek reply, comparing whoever he has painted to a celebrity, usually an old, somewhat obscure celebrity, and usually in an awkward situation. This time i failed to respond to Asa’s painting. (Sorry, Asa!), and he called me out for my lack of response, calling me lazy. i deserved that. In an attempt to get back on Asa’s good side, and to avoid being excommunicated by the entire Melbourne painting community, i responded with the following: “Yul Brenner spits a son of a bitch walking to the men’s room as he tries to enjoy a coffee after a lovely parmesan linguine dinner, the garlic perfect. The SOB in question ruined a scene in a movie years earlier by coughing.” i am thinking of turning this entire affair into a blog post. Can you suggest some ideas and expand on the fabricated Yul Brenner story?. ↩︎


  • Thanksgiving Reflections: A Canadian Perspective

    American Thanksgiving has always been one of those days that gets me a little down, and 2025 was no exception.

    As a Canadian, seeing the parades and NFL football games always makes me feel like a second-class North American citizen. And hearing about serving a 10-person Thanksgiving feast for 10 cents always makes me sadly jealous, too¹.

    And these days, I don’t even get to enjoy Letterman calling the 1-800 Butterball Talk-Line for turkey-roasting guidance!

    Somehow, I gobbled on

    While I should have been smelling the turkey roasting and hearing the delicious sounds of a busy kitchen, I was simply sitting at my desk, listening to clementines and snacking on bad radio.

    But then I looked out the window, and fluffy white snowflakes were slowly falling on a slate-grey Canadian world. I decided to type in a few words, forcing myself to be somewhat productive.

    I typed in a couple of hundred words, but they were a slate-grey shade of meh, too. At least I was underway, though. I copied my words into Grammarly, then changed the tone to Corporate and experimented with a couple more AI toys.

    Of course, I eventually ended up in ChatGPT land.

    My warm-up words involved tricolour miniature marshmallows, Gummi Bears, ammonia, beanbag chairs, and pyjamas. I also imagined that each snowflake contained an entire universe, with people and creatures whose idea of infinity lasted from the moment they left a cloud until they melted on whatever they landed on.

    Before I knew it, I had an idea for a story. A highly successful female CEO who is looking out the windows of her corner office in a more polished version of a slate grey world. She’s wondering if she has enough cocaine to make it through Thanksgiving weekend. Compounding her troubles is the fact that her wife is leaving her for a male chimney sweep with a lengthy list of allegations against him so distasteful that they are mentioned only in hushed tones, if at all.

    Suddenly my Thanksgiving Present arrived via AI

    I had a nice discussion with ChatGPT about all this, and the bot created a lot of decent content, some of it pretty good and almost lol.

    Then, out of the wild blue cyber yonder, ChatGPT asked me if I would like it to write a chapter from the snowflakes’ point of view!

    Well, Merry Christmas, Mr. Lawrence, it will be a happy Thanksgiving after all!


    Hindsight is Turkey/Turkey

    Looking back, I think I was thinking of Carla Connor from Coronation Street when dreaming of the CEO, and the copy that ChatGPT produced reminded me of the SCTV recurring skit “The Days of the Week”, which was a spoof of soap operas.


    1. (November 20, 2025). 2025 Thanksgiving cost drops – but at a price. Axios. https://www.axios.com/local/richmond/2025/11/21/2025-thanksgiving-price-without-mashed-potatoes ↩︎

  • RE: I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse.

    I recently read a lengthy article in the New York Times Magazine section titled “I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse,” written by Carlo Rotella. It is a 27-minute “listen” if you choose to listen instead of read, but it is fascinating and definitely worth half an hour of your time.

    Rotella’s article is about a college professor trying to keep AI out of the classroom and the learning experience entirely. It is a noble endeavour, but it struck me as replacing one “evil” with another, and it reminded me of the heartbreaking words “chemo treatment is worse than the disease.”

    Carlo Rotella is quite distinguished, and Boston College would likely call security if I even thought about stepping on their campus, so who am I to have a contrarian opinion? But of course, contrarian I was. And, of course, I also wrote a relatively lengthy comment that the NY Times deemed worthy of approval.

    And since Rotella’s article is about minimising AI’s influence in higher education, I also discussed it with ChatGPT.

    Interestingly, one of the things I find most disturbing about Rotella’s teaching style (students having to scan their books to share their margin notes, underlining, etc., so Rotella could “see them think”) wasn’t even picked up by ChatGPT.


    The results of my conversation with AI about Rotella’s article:

    (1) Blog-Ready Section: “The Problems With Rotella’s Teaching Approach”

    Carlo Rotella is clearly a gifted teacher who cares deeply about his students, but the methods described in the article raise serious concerns about autonomy, privacy, and intellectual freedom. His goal is admirable — to help students develop critical thinking in an age of shortcuts — but the techniques he uses risk becoming more intrusive than AI itself.

    For example, Rotella requires students to scan and submit their annotated pages — their underlining, highlighting, and margin notes — because, in his words, it’s “as close as I can get to watching them think as they read.” But that’s precisely the problem: when you know your professor is reading your private reactions, it becomes almost impossible to think authentically. Many students will inevitably begin writing “the kinds of comments he wants,” not the ones they truly felt. Instead of encouraging genuine reflection, this method encourages a performance of critical thinking.

    Rotella also forces students to speak early and often in class, regardless of whether they’re ready, comfortable, or wired for that mode of engagement. His belief that “if you don’t speak in the first two weeks, you never will” isn’t a fact of human behavior — it’s simply a justification for compulsory participation. Some students process internally, quietly, and deeply. Others have anxiety, are ESL learners, or simply need time. Mandatory speech doesn’t cultivate thought; it rewards fast talkers and penalizes slow thinkers.

    And there is a deeper philosophical issue: Rotella seems to impose a single “correct” way of being a student. Real thinking, in his view, happens with pencils on paper, in a physical book, spoken aloud, in a room free of technology. But this excludes entire groups of students — neurodivergent learners, introverts, ESL students, students with reading disabilities, or students whose thinking unfolds privately and gradually. AI, ironically, is often far less intrusive and much more accommodating of different learning styles.

    What Rotella presents as “equipment for living” can easily feel like surveillance, control, and micromanagement. Inspecting a student’s annotations is more invasive than anything AI does. For many of today’s students, the idea of being graded on how you think, rather than what you think, is not rigorous — it’s suffocating.

    Rotella may believe he’s preserving something noble about education. But to students who value autonomy, individuality, and intellectual privacy, his approach can feel less like guidance and more like being shaped in his image.


    (2) Respectful but Strong Public Critique

    Carlo Rotella’s intentions are good. He wants to preserve deep reading, genuine dialogue, and slow thinking at a time when AI makes it easy to outsource intellectual effort. That’s a worthy mission, and many professors share his concern.

    But good intentions don’t shield an approach from scrutiny, and several aspects of his pedagogy deserve critique.

    First, Rotella’s requirement that students submit scans of their annotated books crosses a boundary. Annotations are often personal — half-formed ideas, questions, confusions, emotional reactions. Asking students to reveal that inner mental landscape, to have it evaluated, creates pressure to perform a certain kind of “approved” intelligence. Instead of fostering authenticity, it subtly encourages conformity. Students quickly learn to underline the kinds of sentences he likes and write the kinds of notes he rewards.

    Second, his insistence on mandatory, early speaking privileges extroverted, fluent, confident students and disadvantages everyone else. Speaking is not thinking; it is only one form of it. Many students — deep readers, reflective thinkers, ESL students, neurodivergent students — think best in writing, in private, or after time to process. A universal demand for quick verbal participation is pedagogically narrow. Furthermore, mandatory speaking is just that: speaking. It is not listening, contemplating, reconsidering, or coming to better informed thoughts. It is thinking about what to say so you get credit.

    Third, Rotella’s broader philosophy implies that his preferred mode of learning — books, handwriting, chalkboard discourse — is the superior one. But in 2025, students are diverse, and so are their cognitive styles. Some truly need digital tools. Others use AI to scaffold language, comprehension, or organization. Rotella’s rejection of these tools risks excluding precisely the students for whom they are most beneficial.

    And finally, there’s a paradox: the very professor who worries about AI being intrusive into the learning process has built an environment that is arguably more intrusive. AI doesn’t grade your marginalia. It doesn’t force you to speak. It doesn’t judge whether you underlined the “right” passage. Rotella’s analog methods, for all their charm, are forms of intellectual surveillance, not liberation.

    Rotella’s classrooms may work beautifully for some students — the extroverted, the fluent, the traditionalists. But for many others, his methods feel restrictive, prescriptive, and overly controlling. Education should challenge students, yes — but not by constraining the way their minds naturally work.


    My letter to the NY Times about Rotella’s article:

    I’m in my mid-50s, so I went to school long before AI, before everyone carried a cellphone more powerful than the computers that put a man on the moon, and even before personal computers.

    I am a big proponent of AI, but probably because I was taught long division before I was given a calculator and developed critical thinking skills.

    Carlo Rotella paints a rosy picture, something out of a feel-good Disney movie starring a bankable megastar and some very beautiful up-and-coming new talent.

    As a student, I think I would have a serious problem with some of Carlo Rotella’s teaching techniques.

    I would find them intrusive and controlling, under the guise of altruism and “teaching critical thinking.” I would feel like he was making sure I was thinking the way he wanted me to, that I was being made in his image, becoming another of his minions.

    Tyler from the article seems like he might be feeling exactly how I would if I were a student of Carlo Rotella. Thankfully, Tyler handled the situation much more constructively than I would have:

    “Can we talk sometime about how we can ask the questions on our own?” he said. “We always have you to ask the questions and set up how we’re going to discuss and analyze…”

    Perhaps I have trust issues. Perhaps I am paranoid and should seek professional help. Or perhaps I am simply thinking critically and unable to accept Carlo Rotella’s implied claim that he is being altruistic, when he might in fact be a megalomaniac with control issues.


    Conclusion:

    For me, the article lands in the uncanny space where earnest pedagogy blends with an almost parental oversight—well-meaning, but suffocating in its assumption that students need their thinking supervised. My own reaction reflects a larger tension the article raises without fully admitting it: in an era when students can outsource half their mental workload to AI, Rotella responds by tightening human control rather than trusting students to grow independently. Ironically, the monitoring of students’ thoughts on paper feels far more intrusive than any algorithmic autocomplete. Where AI often gives freedom, Rotella’s approach seems to reclaim authority. Whether that’s noble or overbearing depends on your comfort level, your trust issues, and maybe how many times in your life a teacher has asked to see inside your book.

    Appendix: At the time of posting, my comment in the NY Times had 5 Recommendations, and the comment that said “@George Perry Yes, you have trust issues.” had 3 Recommendations.

  • How to Fact-Check AI: A Practical Guide to Spotting Truth from Fiction

    AI-generated information sounds convincing, but that doesn’t mean it’s true. AI models are built to be persuasive—not always accurate. Here’s how to protect yourself from AI misinformation and spot the real facts.

    Below, you’ll find actionable tips to quickly verify anything you read from an AI—so you can share and use AI-powered information with confidence.

    1. Read Like a Pro: The “Lateral Reading” Technique

    When we read a book, we read “vertically” (top to bottom). When checking AI, you must read “laterally” (across different tabs).1

    • Don’t stay in the chat window. If the AI makes a claim, immediately open a new browser tab.
    • Search for the specific claim, not the topic. If AI says, “Eating 30 almonds a day cures headaches,” don’t search “benefits of almonds.” Search “Does eating almonds cure headaches study.”
    • Compare sources. Look for consensus across at least three independent, reputable sources (e.g., a government health site, a major news outlet, and an academic institution)2.

    2. Busted! Fact-Checking AI Quotes

    AI often hallucinates quotes or attributes real quotes to the wrong people3.

    • Copy the quote. Take the specific sentence the AI provided.
    • Paste it into Google with quotation marks. e.g., “The only thing we have to fear is fear itself”.
    • Analyze results:
      • No results? The AI likely invented the quote.
      • Different author? The AI misattributed it.
      • Slightly different wording? The AI “paraphrased” but presented it as a direct quote.
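    The quote check above can be scripted. Here is a minimal Python sketch (the function name is my own illustration, not any official API): wrapping the text in double quotes and URL-encoding it produces an exact-phrase search link.

```python
from urllib.parse import quote_plus

def exact_phrase_search_url(quote_text: str) -> str:
    """Build a search URL that looks for the quote verbatim.
    Double quotes around the text ask the engine for an exact match,
    so an invented quote should come back with few or no results."""
    return "https://www.google.com/search?q=" + quote_plus(f'"{quote_text}"')

url = exact_phrase_search_url("The only thing we have to fear is fear itself")
print(url)
# → https://www.google.com/search?q=%22The+only+thing+we+have+to+fear+is+fear+itself%22
```

    Open the printed URL in a browser and apply the three checks above: no results, a different author, or altered wording.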

    3. Phantom Sources: Fact-Checking Citations

    AI models (like ChatGPT or Gemini) can generate realistic-looking citations that do not exist.4

    • Click every link. If the AI provides a URL, click it. Does it lead to a relevant article, to a 404 error page, or just to the website’s homepage?
    • Check the DOI. For scientific papers, ask for the DOI (Digital Object Identifier) and paste it into a resolver like doi.org. If the AI can’t provide a real DOI, the paper might not exist.5
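    That DOI check can be partly automated. Here is a minimal Python sketch using only the standard library (the helper names are my own): the first function is a cheap offline format check, and the second asks the official doi.org resolver, which answers a valid DOI with a redirect and an invented one with HTTP 404.

```python
import re
import urllib.error
import urllib.request

# Real DOIs start with "10.", a 4-9 digit registrant code, then a slash.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_is_well_formed(doi: str) -> bool:
    """Cheap offline sanity check on the DOI's shape (no network needed)."""
    return bool(DOI_PATTERN.match(doi.strip()))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Ask the official resolver whether the DOI actually exists.
    doi.org redirects a valid DOI to the publisher; a fabricated
    one returns HTTP 404. Requires network access."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        urllib.request.urlopen(req, timeout=timeout)
        return True
    except urllib.error.HTTPError as e:
        return e.code != 404  # 404 means the DOI does not exist
    except urllib.error.URLError:
        return False  # offline or DNS failure: can't tell

# Offline checks on the format alone:
print(doi_is_well_formed("10.1145/3442188.3445922"))  # True
print(doi_is_well_formed("ten.1234/fake"))            # False
```

    The format check alone won’t catch a plausible-looking fake, so when it matters, let the resolver have the final word.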

    4. How to Spot AI Hallucinations

    Certain patterns in AI writing suggest it might be making things up. Be extra skeptical if you see6:

    • Vague Authority: Phrases like “Studies show…” or “Experts agree…” without naming the specific study or expert.
    • Perfectly Logical but Incorrect: The answer follows a logical structure (A + B = C), but the premise (A) is false.
    • Repetitive Hedges: If the AI apologizes excessively or uses phrases like “It is important to note” repeatedly, it may be masking a lack of concrete data.

    5. Master the “SIFT” Method for AI Fact-Checking

    This is a media literacy framework adapted for AI7:

    • S – Stop: The AI replies instantly. You should pause. Don’t use the info immediately.
    • I – Investigate the Source: Ask the AI, “What is the primary source for that specific statistic?” Then go find that source yourself.
    • F – Find Better Coverage: Is this “fact” reported by major outlets? If AI is the only one saying it, it’s likely false.
    • T – Trace Claims: Go back to the original context. AI summarizes; in doing so, it often strips away nuance (e.g., “Coffee causes cancer” vs. “Hot coffee above 65°C may increase risk…”).

    Summary Checklist for Verification

    • Statistics: Ask for the year and source. Search the stat to see if it’s outdated.
    • Quotes: Search the exact text in quotes “…” to verify the author.
    • Legal/Medical: Zero Trust. Consult a professional or official government database (.gov).
    • Images: Zoom in on hands, text in background, and shadows. Use Reverse Image Search.
    • Code: Run the code in a sandbox environment; do not copy/paste directly into production.
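    The last item deserves a sketch. Running AI-generated code in a separate interpreter process with a timeout is a lightweight first step; note that the helper below is my own illustration and is not a real sandbox (the snippet can still touch your files and network, so use a container or throwaway VM for anything suspicious).

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted_snippet(code: str, timeout: float = 5.0):
    """Run a Python snippet in a fresh interpreter process with a timeout,
    instead of pasting it into your live session. Returns (exit_code,
    stdout, stderr). NOTE: this is isolation-lite, not a true sandbox."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.returncode, result.stdout, result.stderr
    finally:
        os.unlink(path)  # always clean up the temp file

rc, out, err = run_untrusted_snippet("print(2 + 2)")
print(rc, out.strip())  # → 0 4
```

    The timeout guards against accidental infinite loops, and the separate process keeps the snippet out of your live Python session’s variables and imports.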


    Sources:

    1. “Fact-checking AI with Lateral Reading – Artificial Intelligence (AI) and Information Literacy – Learning Guides at Jefferson Community & Technical College.” Jefferson Community & Technical College. 2023. https://jefferson.kctcs.libguides.com/artificial-intelligence/fact-checking-ai Accessed November 25, 2025 ↩︎
    2. “SIFT for Information Evaluation – Critically Evaluating Online Information.” Scottsdale Community College Library. 2025. https://library.scottsdalecc.edu/SIFT Accessed November 25, 2025 ↩︎
    3. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025 ↩︎
    4. “AI Hallucination Detector for Citations – Free Tool | SwanRef.” SwanRef. 2025. https://www.swanref.org/ai-hallucination-detector Accessed November 25, 2025 ↩︎
    5. Spinellis, Diomidis. “False authorship: an explorative case study around an AI-generated article published under my name.” Research Integrity and Peer Review 10 (2025). https://doi.org/10.1186/s41073-025-00165-z Accessed November 25, 2025 ↩︎
    6. Hufton, Andrew L.. “AI-generated research paper fabrication and plagiarism in the scientific community.” Patterns 4, no. 4 (2023): 100731. https://doi.org/10.1016/j.patter.2023.100731 Accessed November 25, 2025 ↩︎
    7. “SIFT – Empowering Informed Communities.” University of Washington Libraries. 2025. https://depts.washington.edu/learncip/sift/ Accessed November 25, 2025 ↩︎

  • Understanding Trickle-Down AI: Impact on Children

    We all remember the promise of “Trickle-Down Economics”—the Reagan-era theory that if we fed the wealthy horse enough oats, eventually the sparrows would get to eat. Critics called it “Voodoo Economics” because the wealth never really made it to the bottom. It stayed stuck in the accounts of the ultra-rich.

    But as I watched Dr. Mhairi Aitken on Story in the Public Square this week, I realized we are witnessing a new phenomenon: Trickle-Down AI.

    Unlike money, Artificial Intelligence does trickle down. In fact, it floods down. The technology is owned by the billionaires and Big Tech gatekeepers, yes. But the capability—the raw information and the power to create—is saturating the ground level faster than we can adapt. Parents are parenting differently. Teachers are creating lesson plans with AI. Children are absorbing AI literacy by osmosis. We are living in a world where the tools of the elite are in the hands of the playground set.

    But is that a good thing?

    Dr. Mhairi Aitken, Senior Ethics Fellow at The Alan Turing Institute, broke down exactly how this “trickle-down” effect is shaping the next generation.

    Here is why we need to pay attention.

    1. The Playground Has Changed

    Aitken noted that we have moved past simple toys to “smart toys and smart teddy bears that will interact as the child plays with it.” This isn’t science fiction; it’s the holiday shopping list. When AI trickles down into a teddy bear, it brings with it the surveillance capitalism of its creators. Our children aren’t just getting toys – they are becoming marketing information for the Oligarch Class.

    2. The Silent Adopters

    We assume AI is for business, but Aitken’s research found that “a quarter of children aged 8-12 reported using generative AI technologies.” While adults debate the ethics of ChatGPT in the boardroom, children are already natives. They are bypassing the “fear” stage and moving straight to integration. The “Trickle-Down” effect here is instant—information that was once gatekept behind university tuitions is now available to a 10-year-old with a tablet.

    3. The New Confidant

    Perhaps most poignantly, Aitken revealed that “children with additional learning needs were significantly more likely to report using generative AI for communication and connection.” This is the benevolent side of Trickle-Down AI. For a child who fears judgment, AI offers a safe harbour. But AI also creates a dangerous dependency. If the “rich” owners of these models program them with bias, that bias trickles down directly into the emotional development of vulnerable children who are “seeking advice on more personal issues.”

    4. The “Adult-Centric” Flaw

    Despite children being users of this new reality, they are ignored by the architects. As Aitken put it, “Children are probably the group who will be most impacted… but they’re also the group that are the least represented in decision making.” In our “Trickle-Down” reality, the rules are set at the top. The billionaires design the algorithms for profit and efficiency, not for the developing brain of an 8-year-old.

    A Note on the “Story in the Public Square”

    It is tragic irony that this interview aired on PBS just as the Trump Administration’s funding cuts to the Corporation for Public Broadcasting (CPB) threaten to silence these very conversations. Story in the Public Square provides the kind of nuanced, non-commercial analysis that you won’t find on the algorithm-driven feeds of the for-profit internet.

    If we lose public broadcasting, we lose the ability to critique the very powers that are shaping our future. We become passive consumers of Trickle-Down AI, rather than active participants in how it is governed.

    The Bottom Line

    AI will not make everyone a Harvard graduate, but it will democratize access to knowledge in a way “Trickle-Down Economics” never democratized wealth. The question is no longer if the technology will reach us. The question is: What biases, risks, and agendas are trickling down with it?

  • What’s one piece of advice you think people should never ask AI for? – AI answers

    I was scrolling through LinkedIn recently when a post by Daniel Shapiro caught my eye. It was a long post that seemingly went nowhere, but at the end he asked: “What’s one piece of advice you think people should never ask AI for? Curious where you stand.”

    Immediately my smartass kicked in, and I hatched a plan to answer Daniel’s question by using AI. I thought I would copy and paste his question into the prompt and get something brief to reply with. I was hoping for something completely wrong, something bizarre, something funny to get attention with.

    AI provided an excellent answer – just ask Grammarly!

    BUT Google Gemini provided a very good answer, which you can read below. I copied all 452 words of Gemini’s response into Grammarly, and the writing quality seemed exceptionally high: Grammarly reported a score of 96/100 and found no contradicted or debated claims.

    Strangely, Grammarly reported 0% AI-generated text. Well, hardy har har, Mrs. Grammarly, the copy in question is 100% AI-generated text!

    I DID add 5 citations that Grammarly supplied. Those are great not only for readers, but they help SEO, adding authority to the copy.


    Here is AI’s answer to “What’s one piece of advice you think people should never ask AI for?”

    It all seems like common sense to me, but as my friend Trevor’s mom said one time when she got cut off: “Common sense – the least common of all the senses!”

    (EVERYTHING below was written by AI.)

    If I had to choose just one area where the stakes are too high for current AI capabilities, it would be this:

    You should never ask AI for a definitive medical diagnosis or to triage a medical emergency.

    While AI is fantastic for summarizing medical concepts or explaining how a biological process works, asking “Do I have cancer based on these symptoms?” or “My chest feels tight, should I go to the ER or just sleep it off?” is dangerous.

    Here is why this is the red line you should not cross:


    1. The “Hallucination” Factor

    AI models predict the next likely word in a sentence; they do not “know” facts the way a human expert does.1 An AI can confidently hallucinate a diagnosis or a remedy that sounds plausible but is factually incorrect.2 In a medical context, a “confident but wrong” answer can be fatal.

    2. Lack of Physical Context

    A human doctor uses all their senses and years of intuition.3 They look at your skin color, check your pupil dilation, feel the swollen lymph node, and hear the specific rattle in your breathing.

    • AI is blind: It relies entirely on the text you type.
    • The data gap: You might forget to mention a crucial detail (like a medication you are taking) that would immediately change a doctor’s decision. The AI won’t necessarily know to ask for it.

    3. The “Average” Problem

    AI is trained on massive datasets of “average” human data.4

    • If your symptoms are rare, atypical, or present differently due to your age, gender, or ethnicity, the AI is statistically more likely to miss them because it biases toward the most common patterns in its training data.5

    The Nuance: What specific questions should you avoid?

    To keep yourself safe, avoid questions that require the AI to make a judgment call on your physical safety.

    • “Is this mushroom in my hand safe to eat?” AI image recognition is not 100% accurate. Mistaking a Death Cap for a common mushroom is fatal.
    • “I took X pills by accident. Will I be okay?” Only a Poison Control Center or doctor can calculate toxicity based on your specific weight/history.
    • “How can I treat this deep wound at home?” The AI might suggest home remedies when you actually need stitches or antibiotics to prevent sepsis.
    • “Does this mole look cancerous?” Dermatology requires high-resolution 3D inspection and often a biopsy. Images can be misleading.

    How to use AI safely in this domain

    It is okay to use AI to empower yourself before you see a professional.

    • Okay: “What are the common side effects of Ibuprofen?”
    • Okay: “Help me create a list of questions to ask my cardiologist.”
    • Okay: “Explain what ‘atrial fibrillation’ means in simple terms.”

    1. Bender, E. M., Gebru, T., McMillan-Major, A. & Mitchell, M. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. Proceedings of the 2021 ACM Conference on Fairness. https://doi.org/10.1145/3442188.3445922 ↩︎
    2. (March 14, 2024). AI Hallucinations in Medicine: A Blessing or a Risk. The Daily Guardian. https://thedailyguardian.com/medically-speaking/ai-hallucinations-in-medicine-a-blessing-or-a-risk/ ↩︎
    3. (2024). Palpation. Wikipedia. https://en.wikipedia.org/wiki/Palpation ↩︎
    4. (2025). Training data: The key to successful AI models. CIO. https://www.cio.com/article/3980243/training-data-the-key-to-successful-ai-models.html ↩︎
    5. Daneshjou, R., Vodrahalli, K., Liang, W., Novoa, R. A., Jenkins, M., Rotemberg, V., Ko, J., Swetter, S. M., Bailey, E. E., Gevaert, O., Mukherjee, P., Phung, M., Yekrang, K., Fong, B., Sahasrabudhe, R., Zou, J. & Chiou, A. (2021). Disparities in Dermatology AI: Assessments Using Diverse Clinical Images. arXiv preprint. https://doi.org/10.48550/arXiv.2111.08006 ↩︎
  • George Perry’s Articles About AI

    Welcome to my AI archive. Here, I explore the intersection of artificial intelligence, education, and creativity. Whether you are an educator looking to integrate AI or a skeptic wondering if it’s all hype, these articles chronicle my journey and lessons learned.

    Below is a collection of long-form articles I have written that explore the capabilities and limitations of Generative AI in creative writing.


    Click on the title of the article to read it.


    AI & Education:

    Practical Guides:

    AI Philosophy/Opinion:


    Overcoming AI Skepticism: Lessons from Carrots

    November 19, 2025

    This post compares the advancement of AI in the workplace to the impact of canning and refrigeration on nutrition. It argues that, just as canned and frozen carrots made nutritious food available year-round, AI tools enhance productivity and creativity. The backlash against AI resembles past resistance to technological progress, despite its potential benefits.

    AI and Gilligan’s Island: Lessons in Prompt Engineering

    November 12, 2025

    The author finds comfort in watching familiar TV shows to aid sleep and reflects on an episode of Gilligan’s Island to illustrate how AI operates. A robot’s inability to comply with vague instructions parallels the need for precise prompts in AI interactions. Future articles will explore AI themes through Gilligan’s Island characters.

    Memory and Creativity: AI in Learning

    November 10, 2025

    The author reflects on a college fantasy of instantly learning core subjects, which is now possible through AI. While AI streamlines work by handling information, the author stresses the importance of foundational learning. Those educated before AI developed critical thinking skills, underscoring the necessity of manual education before leveraging technology effectively.

    Did You Earn Your AI? Applying the Long Division Principle to AI

    November 10, 2025

    The Long Division Principle emphasizes the need for foundational learning before integrating AI in education. While calculators and AI offer speed and efficiency, understanding the fundamental processes fosters critical thinking and discernment. To effectively use AI, students must first master essential skills, ensuring they develop a genuine understanding of content rather than relying solely on technology.

    We fantasized about jamming knowledge into our heads. Now we can. But only those who already earned their stripes—those who did the manual, messy work—can truly use AI as an amplifier instead of a crutch.

    Let ChatGPT write your Christmas Cards!

    December 2, 2024

    Christmas cards originated from Henry Cole’s need to streamline holiday greetings, leading to their viral popularity. The author likens this invention to using ChatGPT as a modern timesaver for card creation. They describe their own success in generating a humorous card using AI, celebrating the efficiency of both innovations.

    Comparing Google Gemini and ChatGPT Image Creation: A Fun Exploration

    September 15, 2024

    This post compares results from Google Gemini and ChatGPT using the same prompt. Gemini created images with different emotions, while ChatGPT was closer to the intended idea. The writer encourages playing with AI to break prejudices and emphasizes that it can be both fun and educational. It ends with a caution about sulking Batman.

    Don’t be afraid of ChatGPT – play with it, learn what it can do

    April 9, 2023

    The author expresses fascination with ChatGPT, highlighting its usefulness for entertainment, inspiration, and efficiency. They encourage readers to engage with the tool, emphasizing the importance of play and humor in enhancing creativity and productivity. The content also includes insights on writing tools, grammar, and language diversity, showcasing the author’s writing journey.

    This blog post explores the writer’s reliance on Google for writing and learning, highlighting searches for synonyms, grammar rules, and keyboard shortcuts. It emphasizes the importance of varied and precise language. The writer also recommends having fun with tools like ChatGPT to enhance productivity and creativity, citing the benefits of play.

    What AI topics should I cover next?