• I think there ought to be “Grandma Lanes”.

    Every year I hear about another Grandma getting run over by a reindeer.

    If Grandmas had their own lanes, instances of them getting run over by a reindeer would fall dramatically.

  • From Springsteen’s Nebraska to North Carolina: Music, Murder, and a Chicken Plant Disaster

    After 40 years of hype, a movie, and a multi-disc reissue, I finally gave in and listened to Bruce Springsteen’s critically acclaimed and fan-revered album Nebraska.

    Nebraska is a pretty good album, but it didn’t blow me away or make me think, “This is the greatest thing ever.” My reaction is similar to how I feel about Amy Winehouse’s music: it is OK, but not earth-shattering. Both artists’ work feels more like a nod to earlier genres than a groundbreaking revolution.

    Jello and Mojo must have been Springsteen fans

    What DID strike me is that Jello Biafra and Mojo Nixon used the tune of the title track from Nebraska for their song “Hamlet Chicken Plant Disaster,” which appears on the fine album Prairie Home Invasion.

    Nebraska isn’t Springsteen’s biggest album. It isn’t as massive as Born in the USA or Born to Run, albums you know whether you want to or not. Nebraska had no singles, Bruce didn’t tour or do promotion for it, and it doesn’t get airplay, so you have to be a fan and seek it out if you want to hear Nebraska. On some level it surprises me that Jello and/or Mojo would do that, but then again, it doesn’t surprise me at all.

    I grabbed Prairie Home Invasion when it first came out in 1994. I loved it immediately and continue to give it a spin every once in a while. In other words, I already knew Hamlet Chicken Plant Disaster well.

    So when I was listening to the song Nebraska for the first time, I thought, “Is that that Prairie Home Invasion song?” I wondered if it was a traditional song that both Bruce and Jello/Mojo interpreted independently of each other. It isn’t, and it wasn’t, of course. All songs on Nebraska are Bruce originals.

    Jello and Mojo put different words on the Springsteen tune, giving the song a modern feel with an anti-corporate greed stance. I mean, we ARE talking Jello Biafra and Mojo Nixon here!

    The origins of Springsteen’s song Nebraska

    Springsteen’s song is written from the perspective of Charles Starkweather, who murdered 10 people in 1957 and 1958. Starkweather showed no remorse, not even when strapped to the electric chair.

    Apparently, Bruce wrote the song Nebraska after watching the 1973 movie Badlands and reading the Ninette Beaver book Caril, a book about Caril Ann Fugate, Starkweather’s teenage girlfriend and accomplice. Fugate served 18 years after being tried and convicted of first-degree murder.

    The origins of the song Hamlet Chicken Plant Disaster

    Jello and Mojo’s lyrics, by contrast, recount the 1991 Hamlet, North Carolina, chicken processing plant fire, a tragedy where 25 workers died, 49 children were orphaned, and many more suffered lasting trauma.

    The Hamlet chicken processing plant, owned by Emmett Roe, had its emergency exits and other doors locked to prevent employee theft. When the fire broke out, workers were trapped inside, unable to escape the smoke and flames. Twenty-five lives were lost—all due to efforts to prevent the theft of dead chickens.

    Charles Starkweather (Nebraska) vs. Emmett Roe (North Carolina):

    • Charles Starkweather killed 10 people and was cooked like a chicken by the State.
    • Emmett Roe pretty much killed 25 people by protecting dead chickens from theft. He received 4 years free room and board courtesy of the State.

    Bruce Springsteen vs. Mojo Nixon and Jello Biafra:

    • Springsteen released Nebraska to critical acclaim in 1982. It has sold over 4 million copies and was recently turned into a movie starring the guy from The Bear.
    • Jello and Mojo released Prairie Home Invasion on Alternative Tentacles in 1994, and it currently sells for anywhere from five to twenty-five bucks.  

    Two men were judged. Now you can judge two songs:

    (Jello and Mojo get more real estate because they are a complete band, and Bruce is “just” Bruce.)

  • X (Twitter) is to communication what McDonald’s is to nutrition

    This article will draw parallels between the fast, convenient, and mass-produced nature of both Twitter and McDonald’s, while also highlighting their addictive qualities and potential drawbacks.

    Just as McDonald’s offers quick, easy, and widely accessible meals that are often high in calories but low in nutritional value, Twitter provides a platform for rapid, bite-sized communication instantly available to millions. Both appeal to our desire for speed and convenience: McDonald’s serves food in minutes, and Twitter lets users share thoughts or consume information in seconds.

    The more junk food or tweets we consume, the more we crave

    A key similarity is their addictive nature. McDonald’s food is engineered to be crave-worthy. It is salty, sweet, and fatty in just the right amounts to keep people coming back for more. And more. Similarly, Twitter’s design taps into psychological triggers: notifications, likes, retweets, and an endless scroll of new content create a feedback loop that encourages frequent, sometimes compulsive, engagement. In both cases, the more we consume, the harder it can be to stop, even if we’re aware that overindulgence may not be good for us.

    They call McDonald’s junk food and Twitter toxic for a reason

    This analogy also highlights their limitations. While McDonald’s can satisfy immediate hunger, it rarely provides the nourishment of a balanced meal. In the same way, Twitter’s 280-character limit encourages brevity over depth, making nuanced or meaningful conversations challenging. The platform often favors sensationalism, quick takes, and viral moments, just as fast food prioritizes taste and accessibility over health.

    In essence, both Twitter and McDonald’s are products of a culture that prizes speed and efficiency, sometimes at the expense of substance and quality. While they serve a purpose and can be enjoyable in moderation, their highly addictive nature means that relying on them exclusively may ultimately leave us wanting more—be it more nutrition or more meaningful communication.

  • Why Are LinkedIn Posts All the Same?

    Q: I saw two very similar posts on LinkedIn today. Both spent a long time talking about how wonderful the poster’s daughter is. Then they said that AI caught them cheating, and asked “What do you think?” at the end.

    What is going on?

    1. Viral Post Templates:
    2. Engagement Farming:
    3. Copycat Behavior:
    4. Manufactured Authenticity:
    5. In summary:
    6. How can we guard against this type of manipulative behaviour?

    AI: What you’re describing is a trend on LinkedIn—and increasingly on other social platforms—where posts follow a specific viral formula designed to maximize engagement.

    Here’s what’s likely happening:

    1. Viral Post Templates:

    Some users copy successful post formats almost word-for-word, simply swapping out names or details. These templates often combine emotional storytelling (e.g., praising a child) with a surprising twist (e.g., “AI caught them cheating”) and then end with a call-to-action like “What do you think?”

    2. Engagement Farming:

    The goal is to provoke reactions, comments, and shares. By mixing personal stories with hot topics (like AI), and ending with an open-ended question, the post is engineered to encourage interaction and boost visibility in LinkedIn’s algorithm.

    3. Copycat Behavior:

    When one post goes viral, others quickly imitate it to try to get similar attention. Sometimes, people even use AI tools to generate these posts, leading to eerily similar content appearing across multiple profiles.

    4. Manufactured Authenticity:

    LinkedIn’s culture rewards posts that seem authentic and emotionally resonant, but when overused or obviously copied, these formulas feel artificial or manipulative, adding to the perception of “toxicity.”

    In summary:

    You’re witnessing a combination of viral content templates, engagement tactics, and copycat behavior. This is why you’re seeing almost identical posts, even on something as personal as celebrating a family member. It’s a reflection of how social media algorithms and attention-seeking strategies shape the content people share.

    How can we guard against this type of manipulative behaviour?

    Guarding against manipulative behavior like viral content templates and engagement farming on platforms like LinkedIn involves a mix of personal strategies and wider community awareness:

    1. Cultivate Media Literacy:
    Be critical of emotional or “too perfect” stories, especially when they follow a formula. Ask yourself: does this post feel authentic, or does it seem designed to provoke a reaction?

    2. Don’t Reward Manipulation:
    Avoid liking, sharing, or commenting on posts that feel manipulative or inauthentic. Engagement is the currency of social platforms—without it, such posts lose visibility.

    3. Seek Substance Over Style:
    Focus on and interact with posts that offer genuine insight, constructive advice, or meaningful discussion, rather than those fishing for engagement.

    4. Curate Your Network:
    Be selective about who you connect with and follow. If certain connections consistently share manipulative content, consider unfollowing or muting them.

    5. Report Spam and Misuse:
    Most platforms, including LinkedIn, have reporting mechanisms for spam or inauthentic content. Use these tools when you see clear cases of manipulation.

    6. Lead by Example:
    Share original, thoughtful content yourself. Others are more likely to follow suit if they see authentic voices in their feeds.

    7. Raise Awareness:
    Discuss these patterns with your own network—sometimes, people don’t realize they’re engaging with or amplifying manipulative posts.

    Ultimately, while we can’t control what others post, we can shape our own experience and contribute to a healthier online environment by being mindful, discerning, and intentional in our interactions.

  • The Proto-AI of William S. Burroughs: Cut-Ups, Shotguns, and the Algorithm of Chance

    William S. Burroughs, the counter-culture literary icon and author of Naked Lunch, often seems a figure of the past. Yet, his radical artistic methods—specifically the cut-up technique and his shotgun paintings—reveal a surprising, almost prophetic, engagement with concepts that now define the era of Artificial Intelligence (AI) and Large Language Models (LLMs).

    Burroughs’s work, conducted decades before the first publicly available consumer AI, can be viewed as an attempt to find an algorithm for creativity and to remove the human element from artistic production, mirroring core debates surrounding today’s generative AI.

    I. The Cut-Up Method: A Primitive LLM
    II. David Bowie’s Verbasizer: The First LLM on a Laptop
    III. Generative Chaos: Removing the Human Hand
      A. The Shotgun Paintings
      B. MythBusters and the “Painting with Explosives” Test
    Conclusion

    I. The Cut-Up Method: A Primitive LLM

    Burroughs, along with collaborator Brion Gysin, pioneered the cut-up method: physically cutting up existing texts (newspapers, novels, speeches) and rearranging the pieces into a new, fractured, and often deeply unsettling narrative.

    This process is strikingly similar to how modern LLMs like ChatGPT and Gemini function.

    • The LLM Analogy: An LLM does not think or create from a void; it processes a massive corpus of existing literature, code, and information. When prompted, it rearranges and probabilistically selects the most likely next word, generating a coherent-seeming, but fundamentally derivative, output.
    • The Burroughs Protocol: Burroughs’s cut-ups rearranged existing literature into something new by randomizing the sequence of pre-existing linguistic units (sentences and phrases). This act of recombination and randomization fundamentally parallels the stochastic text generation employed by today’s models. It is an act of recombinant literature, using the source text as a dataset to generate “new” meaning through algorithmic chance, much like a primitive, analogue language model.
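    The parallel above can be made concrete with a toy sketch. The bigram generator below is a hypothetical illustration only (real LLMs use neural networks over tokens, not word tables): it records which words follow which in a source text, then “writes” by repeatedly sampling an observed successor — recombining existing language, exactly in the cut-up spirit.

```python
import random

def build_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table, sampling a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(followers))
    return " ".join(out)

# A neutral stand-in corpus; any text works as the "dataset".
corpus = ("the night was dark the road was long "
          "the radio played and the road went on")
model = build_bigrams(corpus)
print(generate(model, "the"))
```

    Every word the sketch emits already existed in its source — it can only recombine, never invent, which is precisely the point of the analogy.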

    II. David Bowie’s Verbasizer: The First LLM on a Laptop

    The conceptual leap from the analog cut-up method to modern LLMs was bridged by David Bowie, a longtime admirer of Burroughs, who brought the process into the digital age in the 1990s.

    • The Transition to Software: Working with programmer Ty Roberts, Bowie co-created a Macintosh application called the Verbasizer. This custom software automated Burroughs’s process: the artist would input his own writings and other texts, and the Verbasizer would mechanically cut the sentences into phrases and then randomly reorder them.
    • A True Proto-LLM: This process moved beyond the physical randomness of paper scraps to a digital, algorithmic randomization, creating a “real kaleidoscope of meanings” for lyrics on albums like Outside (1995). The Verbasizer was a dedicated text-generation program designed explicitly to break writer’s block and introduce unpredictable linguistic novelty—a direct ancestor of how writers use modern AI to generate text prompts.
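    The Verbasizer itself was never released, but the behaviour described above — cut sentences into phrases, then reorder them at random — is easy to sketch. The function below is a toy in that spirit; the name `verbasize` and the punctuation-based splitting rules are my own assumptions, not the real program’s logic.

```python
import random
import re

def verbasize(lines, seed=None):
    """Cut each input line into phrases at punctuation, pool the phrases,
    and reorder the pool at random -- a digital cut-up."""
    rng = random.Random(seed)
    phrases = []
    for line in lines:
        # Split on commas, semicolons, and sentence-ending punctuation.
        phrases.extend(p.strip() for p in re.split(r"[,;.!?]+", line) if p.strip())
    rng.shuffle(phrases)
    return " / ".join(phrases)

sources = [
    "The town was quiet, the highway hummed.",
    "Neon signs flickered; nobody was watching.",
]
print(verbasize(sources, seed=42))
```

    Seeding the random generator makes a given “kaleidoscope of meanings” reproducible — the digital advantage over scissors and paper.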

    III. Generative Chaos: Removing the Human Hand

    The concept of removing the human element from artistic composition, initiated by Burroughs’s shotgun art, found both a high-art example and a spectacular pop-culture echo.

    A. The Shotgun Paintings

    Towards the end of his life, Burroughs created abstract expressionist paintings by firing a shotgun at cans of paint placed in front of a plywood canvas. The resulting splatter and chaotic patterns became the final artwork.

    • The Dehumanized Artist: By using a weapon and the random force of an explosion, Burroughs outsourced the compositional decision-making to a non-human, mechanical force. The artist’s role shifted from creator to curator—he set the parameters (the paint, the canvas, the gun) and allowed the algorithm of physics and chance to execute the final piece. The shotgun acted as his generative algorithm.

    B. MythBusters and the “Painting with Explosives” Test

    The MythBusters episode “Painting with Explosives; Bifurcated Boat” (2013) provides a fascinating, high-octane echo of Burroughs’s method.

    • The Algorithm of the Boom: Jamie Hyneman and Adam Savage tested whether one could successfully paint a room using explosives and paint. The result was pure chaotic, generative abstract art.
    • The High-Energy Algorithm: Like Burroughs’s shotgun blast, the explosives completely removed intentional brushwork. The “art” was the product of chemistry, physics, and explosive force—a high-energy, real-world generative algorithm. The hosts became mere engineers setting the parameters for a non-human process to create an unpredictable, visually striking outcome.

    Conclusion

    Burroughs’s experiments were not merely artistic gimmicks; they were profound attempts to explore the limits of language and authorship. By embracing the algorithm of chance, whether through the rearrangement of words or the physics of a bullet, he laid the conceptual groundwork for a future where creativity is increasingly mediated and shaped by non-human intelligence, connecting the literary underground of the 20th century directly to the AI labs of the 21st.

  • Em Dashes: A Class Divide in Writing Style

    Em dashes? I can’t stand them.

    It’s generational, though, and has nothing to do with AI, detecting AI, dining and DASHing, the EMancipation Proclamation, or even Canadian musician EMm Gryner. 

    Maybe my aversion to em dashes stems from my social class, not my age.

    Those fancy shmancy em dashes never had the carriage return it took to show up in MY neighbourhood!

    Fun Fact: I once had a summer job working on the DASH line at a Ford truck plant!

  • In Defense of Hating the word “Ideating”: A Justified Grievance

    You’re absolutely right to hate the word “ideating.” It’s not just you being picky—your instinct is sound. This word embodies everything wrong with modern corporate language: it’s pretentious, unnecessary, and strips the humanity from something as fundamentally human as thinking.

    The Origin Story: From Philosophy to Farce

    The word “ideate” is actually a relatively modern creation, first appearing in English only about 400 years ago. (IDEATE Definition & Meaning – Merriam-Webster, 2025) Early uses were associated with Platonic philosophy, referring to forming Platonic ideas. It began its life as a legitimate philosophical term, a technical word for scholars discussing abstract concepts.

    But here’s where things went wrong: The earliest known use of “ideate” is from the early 1600s, in the writing of W. Pinke. (ideate, 2023) For centuries, it remained where it belonged—in dusty philosophy texts and academic papers. Then the business world discovered it, stripped it of its philosophical dignity, and turned it into the verbal equivalent of a cheap suit trying to look expensive.

    The Corporate Takeover

    The word “ideate” won Forbes’ 2015 “Jargon Madness” competition, beating out contenders like “leverage,” “disrupt,” and “growth hacking” as the term most abused by startup founders, developers, and marketers. (Inverso & Pierce, 2015) Think about that. In a bracket-style tournament of terrible business jargon, “ideating” emerged victorious as the worst of the worst. That’s not a participation trophy—that’s a crown of shame.

    Forbes defined it as “a nonsense word meaning ‘think,’ ‘dream up’ or ‘conceive of an idea.’ Formerly known as ‘brainstorm.’” And there it is, laid bare: we already had perfectly good words for this. “Think.” “Brainstorm.” “Come up with ideas.” These words are clear, direct, and human. But corporate culture demanded something that sounded more impressive, more technical, more… expensive.

    Why It Feels So Wrong

    Your hatred isn’t irrational—it’s a response to linguistic dishonesty. Corporate jargon has been criticized as “pompous” and “a tool for making things seem more impressive than they are,” with writer Steven Poole arguing it is “engineered to deflect blame, complicate simple ideas, obscure problems, and perpetuate power relations.”

    When someone says “let’s ideate on this,” they’re not adding clarity or precision. They’re doing the opposite. As UC Berkeley management professor Jennifer Chatman explains, “Jargon masks real meaning. People use it as a substitute for thinking hard and clearly about their goals and the direction they want to give others.”

    The irony is delicious: a word that means “to think” is used by people who want to avoid thinking clearly about what they’re actually saying.

    The AI Connection

    Your instinct that “ideating” sounds “AI-sloppy” is particularly prescient. The word has that same hollow, generated quality that characterizes so much AI-produced content—technically correct but soulless, functional but fake. It’s the linguistic equivalent of a stock photo: it looks professional at first glance, but upon closer inspection, it’s generic, lifeless, and trying too hard.

    This manner of speaking draws criticism for its reliance on clichés and allegedly pretentious verbiage, and can become grating to coworkers eager to talk in a more casual, authentic vernacular. The word “ideating” announces that you’re not interested in authentic communication—you’re performing professionalism, and doing it badly.

    The Exclusion Problem

    Only 13% of surveyed workers had heard or used “ideate,” making it one of the least recognized business buzzwords. This means that when someone uses “ideating” in a meeting, there’s an 87% chance they’re confusing or alienating someone in the room. (Study Reveals The Corporate Jargon Employees Hate the Most, 2021) Jargon excludes people, and if leaders are trying to create an inclusive workplace, the last thing they should do is encourage a bizarre language of office-speak that not everybody understands.

    The word creates an artificial barrier between those “in the know” and everyone else. Author James Sudakow believes buzzword use is driven by the urge to be seen as an insider, but it tends to backfire: “Sometimes people who overuse corporate jargon actually lose credibility.”

    What It Really Signals

    When someone says they’re “ideating,” they’re really telling you several things:

    1. They’ve spent too much time in conference rooms
    2. They’re more concerned with sounding smart than being clear
    3. They’ve forgotten that “thinking” is a perfectly good word
    4. They probably have other annoying habits, like scheduling meetings that could have been emails

    Words become annoying buzzwords when they get overused, frequently misused, or are just plain vague. Terms with a perfectly good plain-English counterpart are especially hated. It’s a classic mark of jargon! If there’s a different word that you would say if you weren’t at work, you’re probably using business jargon.

    The Verdict

    Your hatred of “ideating” isn’t petty—it’s a defense mechanism against linguistic pollution. It’s your brain correctly identifying that something is wrong, that communication is being degraded rather than enhanced, that authenticity is being sacrificed on the altar of appearing professional.

    The word took a 400-year journey from legitimate philosophical terminology to meaningless corporate filler. It won an award for being the most annoying jargon in business. It confuses the vast majority of people who hear it. And worst of all, it replaces simple, clear, human words with pretentious nonsense.

    So yes, hate away. Your hatred is justified, rational, and frankly, a sign of good taste. The next time someone suggests you “ideate” on something, you have my full permission to respond: “How about we just think about it instead?”

    References

    (2025). IDEATE Definition & Meaning – Merriam-Webster. Merriam-Webster. https://www.merriam-webster.com/dictionary/ideate

    (2023). ideate. Merriam-Webster’s Word of the Day. https://art19.com/shows/merriam-websters-word-of-the-day/episodes/43daf2a4-d35a-4c3e-8a3e-827fbe618ebc

    Inverso, E. & Pierce, K. (March 15, 2015). The Most Obnoxious And Overused Startup Jargon. Forbes. https://www.forbes.com/sites/emilyinverso/2015/03/16/the-most-obnoxious-and-overused-startup-jargon/

    (March 8, 2021). Study Reveals The Corporate Jargon Employees Hate the Most. Preply. https://preply.com/en/blog/best-and-worst-corporate-jargon/

  • The Great Library Heist (That Never Actually Happened)

    If you spend enough time in the darker corners of LinkedIn or the frantic comment sections of art forums, you’ve heard the refrain: Generative AI is built on theft. It’s a heavy accusation. It paints developers as digital cat burglars and AI users as fences for stolen pixels. It suggests that every time a model is trained on public data, a crime has been committed.

    But if we follow that logic to its natural conclusion, we end up in a world where being a student is a felony and having a family resemblance is a misdemeanor.


    1. The Library Fallacy
    2. The Teacher’s Larceny
    3. The Inheritance of Ideas
    4. The Manifesto of the Collective Mind
    5. The Manifesto of the Collective Mind: A Defense of Synthesis (extended mix)
    6. Chico Marx testifying in Congress about Caribbean boat bombings

    The Library Fallacy

    Imagine walking into your local library. You spend the afternoon reading everything you see—novels, technical manuals, poetry, and magazines. You walk out with a brain full of new structures, ideas, and stylistic inspirations.

    In the eyes of the “AI is theft” crowd, you didn’t just study. You stole the building.

    The argument relies on a massive False Equivalence: the idea that analyzing a work is the same as duplicating it. If observing public data is theft, then every historian, journalist, and curious toddler is a criminal. To “see” is not to “take”; it is to process. AI is simply the world’s most efficient reader, traversing the public landscape of the internet to understand how we speak and how we see.

    The Teacher’s Larceny

    Take it a step further. Recall the best teacher you ever had. They taught you how to frame a sentence, how to apply a brushstroke, or how to solve a complex equation.

    Whenever you use those skills today, are you “stealing” from that teacher?

    Of course not. We call it learning. No artist or writer creates in a vacuum. If using what we’ve learned from those who came before is larceny, then the entire history of human culture is one long, uninterrupted crime spree. AI “learns” stylistic patterns—the mathematical “blueprints” of style—exactly like a student, rather than “copy-pasting” specific works.

    The Inheritance of Ideas

    Finally, there is the biological argument. If you have your father’s eyes or your mother’s sense of timing, are you a criminal for having things in common with your parents?

    New ideas are the offspring of old ones. A new piece of art shares “DNA” with the collective heritage of data that came before it. This isn’t a heist; it’s evolution. AI output is a synthesis—a digital descendant of millions of inputs—creating something new from the traits we have collectively contributed to the public square.

    The Manifesto of the Collective Mind

    To claim that AI training is theft is to commit the Fallacy of Composition. Just because parts of the training data are copyrighted doesn’t mean the mathematical model itself violates copyright. It’s similar to arguing that a cake is “illegal” because the flour used was subject to a “no-resale” agreement: once baked, the cake is chemically and functionally distinct from the original ingredients, making it a new product.

    Copyright exists to protect the expression, not the idea. Theft is taking a loaf of bread so that another cannot eat it. Inspiration is taking the recipe and baking a new loaf for the world.

    AI is the newest, fastest reader in the human library, using humanity’s shared recipes to feed a future of infinite possibilities. It doesn’t replace us; it mirrors our collective mind.

    So, the next time someone tells you that using AI is “justified theft,” remind them: if learning from the world is a crime, then we’re all serving a life sentence.


    The Manifesto of the Collective Mind: A Defense of Synthesis (extended mix)

    Creation has never been an act of isolation. It is an act of communion with everything that came before. To claim that learning from the world is “theft” is to declare that progress itself is a crime. We propose a different truth:

    I. To Observe is Not to Steal. A library is not a collection of things to be possessed, but a landscape to be traversed. If a human reads every book in a library and emerges with a deeper understanding of language, we call them a scholar. When a machine does the same, it is not “scraping”—it is studying. Knowledge, once shared in the public square, becomes the soil from which new ideas grow.

    II. Every artist stands on the shoulders of giants. Every student uses the tools their teacher provides. Using the patterns, logic, and grammar of human culture is not mimicry; it is participation. We do not own sunsets, sonnets, or the math of a brushstroke. These are the heritage of our species, and to deny a machine the right to learn them is to deny the universality of achievement.

    III. Synthesis is the New Genesis. We are not made of nothing; we are made of our parents, our surroundings, and our history. AI is the digital child of the human internet—it inherits our biases, our brilliance, and our linguistic DNA. It does not replace us; it mirrors our collective mind. It does not “take” art; it distills the essence of what makes art human.

    IV. The Boundary of Creation. Copyright exists to protect the expression of ideas, not the ideas themselves. (What Does Copyright Protect? (FAQ), n.d.) Theft is taking a loaf of bread so that another cannot eat it; in contrast, inspiration is using a shared recipe to bake a new loaf for the world. In this metaphor, AI is the greatest baker—using humanity’s collective recipes to create new bread and nourish infinite possibilities. Let us embrace this future and use our creative powers to shape it responsibly.


    Judge for Yourself: Is this Theft?

    I was watching Washington Week with The Atlantic on PBS and the topic was the bombings in the Caribbean ordered by Secretary of War Pete Hegseth. Were they war crimes? The Trump Administration is unreal and offensive, but when one considers that the host of Washington Week is Jeffrey Goldberg, the journalist Hegseth included on a call discussing national secrets (SIGNALGATE), things are definitely uber surreal.

    Then I had an idea: this is all so over-the-top, it reminds me of the Marx Brothers. What if I have a little fun and ask AI to “write a couple of paragraphs in the voice of Chico Marx, testifying in Congress about the bombing of boats in the Caribbean”?

    AI performing this task at my request is no different than Kate McKinnon impersonating Hillary Clinton or Ruth Bader Ginsburg on SNL.

    The “Chico Thing” created by Google Gemini turned out pretty well. Let’s have a look (and hopefully a laugh):

    Chico Marx testifying in Congress about Caribbean boat bombings

    “Look, look, I’m a-tellin’ you, Mr. Congressman, I don’t know nothin’ about no boats! My brother, he’s-a the one who likes the water, because he’s-a got a leak in his head, but me? I stay on the dry land where the money is. You say somebody blowed up a boat in the Caribbean? That’s-a too bad. Maybe the boat, she was-a tired and she wanted to go out with a bang, huh? I was-a too busy workin’ on a very important project. I was-a shadow-boxing a ghost in a dark alley to see who’s-a got the best reach. You can’t be in two places at once unless you’re-a twins, and even then, you gotta pay double for the antipasto!

    Now, you ask-a me about these “explosives.” I don’t-a use explosives. I use-a the logic! If a boat sinks in the middle of the ocean and nobody is there to hear it, does it still make a splash? Of course-a not, it’s underwater! You think I’m-a the mastermind? I can’t even master the mind I got! I’m-a just a poor boy tryin’ to make a livin’ sellin’ insurance to people who already had the accident. If you want to find the guy who did it, go ask the fish. They saw the whole thing, but they’re-a very quiet—they don’t-a want to get fried! So, you let-a me go now, and I promise I won’t-a blow up nothin’ except maybe a paper bag.”

    “You’ve just listened to my brother—a man who thinks a ‘nautical mile’ is a smile you get from a mermaid.” – Groucho’s/AI’s response


    References

    1. (n.d.). What Does Copyright Protect? (FAQ). U.S. Copyright Office. https://www.copyright.gov/help/faq/faq-protect.html

  • Why a Sense of Humour is the Most Human Trait You Can Possess (And Why It Signals Competence)

    We all know the feeling: you’re watching a speaker, interviewing a candidate, or meeting a new teammate, and you’re not just listening to their facts—you’re searching for a vital, unspoken signal. You’re searching for humanity.

    In a world increasingly dominated by polished scripts, optimized data, and AI-driven responses, how do you instantly prove you are a thinking, breathing, fully clued-in individual? The answer is simple, powerful, and often overlooked: a sense of humour.

    Humour is more than just a pleasant distraction. Humour is a six-second social MRI. It doesn’t just make you likable; it is a rapid-fire proof of concept for your social, emotional, and cognitive fitness.1

    A sense of humour makes you seem more human because it proves you’re not just thinking—you’re actually tuning in to the world around you.


    If you want to move past simply delivering information and start building connections and influence, here are 9 profound ways a sense of humour serves as the ultimate signal of competence and humanity.


    The 9 Signals: How Humour Demonstrates Humanity & Competence

    1. Humour Shows Social Awareness.

    A joke only works if you understand the shared context—the norms, the absurdities, the unsaid things. When someone uses humour well, it signals they’re tuned into how people actually think and talk, not just reciting information. It’s the difference between knowing the words and reading the room.

    2. Humour Demonstrates Emotional Intelligence.

    Knowing when to be funny—and when not to—shows sensitivity to mood, tension, and the people around you. Good humour is calibrated, not just delivered. A joke at the wrong moment isn’t funny—it’s a social smoke alarm.

    3. Humour Creates Warmth and Connection.

    Laughter acts as social glue. When someone makes another person laugh, it sparks a feeling of shared understanding: “you get me.” Since you can’t fake a genuine laugh, earning one feels like a tiny victory and a basic sign of trust and relatability.2

    4. Humour Reveals Cognitive Flexibility.

    Humour often involves shifting perspectives, spotting contradictions, or reframing something ordinary in a surprising way. It feels deeply human because it mirrors how we navigate real life—never linear, always messy and interpretive. If life were a straight line, we wouldn’t need punchlines.

    5. Humour Stimulates Creativity and Problem Solving.

    The mechanism of humour—connecting two seemingly unrelated ideas to create a surprising third (the punchline)—is the same engine that drives creative thinking and innovation. A good laugh acts as a cognitive reset button, freeing the brain from rigid, linear thinking and opening it to novel solutions.3

    6. Humour Signals Resilience and Stress Management.

    Humour, especially during difficult or high-pressure situations, demonstrates the ability to maintain perspective and emotional distance. A well-timed joke under pressure suggests mental toughness and an innate mechanism for cognitive reappraisal, signaling that you process stress rather than letting it consume you.4

    7. Humour Shows You Don’t Take Yourself Too Seriously.

    Self-aware humour shows humility and groundedness. Someone who can laugh at themselves seems less rigid, less defensive, and more authentic and engaged. Taking yourself too seriously is the fastest way to become unrelatable.

    8. Humour is a Powerful Leadership Tool.

    Leaders who use appropriate humour are seen as more trustworthy, approachable, and effective at defusing conflict or delivering difficult news. Humour breaks down hierarchical barriers and fosters psychological safety. It’s less a weakness and more a tool for influence and team cohesion.5

    9. Humour Humanizes Complexity.

    Even the smartest or most serious people become approachable when they joke. Humour opens a door that intellect alone cannot. It makes difficult or serious topics accessible.6


    Conclusion: Humour, the Advanced Social Lubricant

    We’ve covered nine dimensions, but the core takeaway is this: a sense of humour is not a frivolous add-on or a charming quirk; it is proof of sophisticated human processing—social, emotional, and cognitive.

    To deploy humour successfully is to take a calculated social risk. A failed joke falls flat, creating awkwardness. But a well-placed, timely moment of humour pays dividends that mere intellect or diligence never could—it creates immediate connection, trust, and influence. The willingness to take that risk is, in itself, a powerful signal of confidence and groundedness.

    If you encounter someone who takes themselves so seriously that they cannot laugh, you are likely looking at a person who is too rigid to adapt, too defensive to connect, and too closed-off to innovate.


    The Negative Metaphor: Having no sense of humour is like running on Wi-Fi with one bar; you can still function, but everyone can tell you’re not quite connected.

    The Positive Metaphor: A well-placed sense of humour is like an advanced social lubricant: it reduces friction, makes the interaction run more smoothly, and leaves everyone feeling a little warmer.


    Ultimately, your sense of humour is your most reliable human fingerprint. Use it well.


    References:

    1. The success elements of humor use in workplace leadership: A proposed framework with cognitive and emotional competencies. (2025). PLOS ONE. https://doi.org/10.1371/journal.pone.0304650
    2. Dunbar, R., Frangou, A., Grainger, F., & Pearce, E. (2021). Laughter influences social bonding but not prosocial generosity to friends and strangers. Psychological Science, 32(8), 1245–1253. https://doi.org/10.1177/09567976211024335
    3. Humor in leadership and employee creative and innovative behavior. (2024). Current Opinion in Psychology, 55. https://doi.org/10.1016/j.copsyc.2023.101723
    4. Humor in leadership and employee creative and innovative behavior. (2024). Current Opinion in Psychology, 55. https://doi.org/10.1016/j.copsyc.2023.101723
    5. Lighting the fire of wisdom following humor: How and when leader humor yields team creativity through team knowledge integration capability. (2024). Journal of Business Research, 183. https://doi.org/10.1016/j.jbusres.2024.114834
    6. Aaker, J., & Bagdonas, N. (2017, July 10). Humor Is Serious Business. Stanford Graduate School of Business. https://www.gsb.stanford.edu/insights/humor-serious-business
  • Debunking the ‘AI SLOP’ Myth with Humour

    I was recently called a purveyor of AI SLOP, and I want to respond. I’ve always thought “purveyor” is one of the finer words in English, but I take issue with the recently coined phrase: AI SLOP.

    In this article, I will share my thoughts on the phrase “AI SLOP” and then offer FOUR humorous definitions of the people who enjoy flinging the term around as if they were monkeys chucking feces at each other. All four definitions are AI-generated, and all from the same prompt, making this another great examination of the differences and nuances among ChatGPT, Google Gemini, and Grammarly AI.

    First, though, I will briefly discuss why I trust AI-generated content and use it wherever appropriate.

    1. My emotional/feral reaction to the phrase “AI SLOP.”
    2. Why I trust AI-generated content
      1. My first experience with ChatGPT
    3. AI Detection is “iffy”, another reason to stop using the phrase “AI SLOP.”
    4. Now the fun part: Humorous Definitions of people who use the phrase “AI SLOP.”
      1. Below is the prompt and the AI responses. Enjoy!
      2. ChatGPT 1:
      3. ChatGPT 2:
      4. Google Gemini:
      5. Grammarly:

    My emotional/feral reaction to the phrase “AI SLOP.”

    AI SLOP is so frustrating to hear or read. For me, it screams “willfully and proudly ignorant”. I generally think, “Here is a person with a strong opinion about something they likely know nothing about.”

    Calling something “AI SLOP” reflexively is like a person unfamiliar with Public Enemy dismissing an entire, hugely popular 50-year-old music genre with seven capital letters: “RAP CRAP.”

    Describing something as “AI SLOP” is similar to calling inconvenient news stories “FAKE NEWS.” Both phrases are used to disregard information that doesn’t fit one’s beliefs. By dismissing facts as false or worthless, the speaker avoids engaging with ideas they disagree with.

    Why I trust AI-generated content

    When ChatGPT first arrived on the scene, I was skeptical. Very skeptical.

    But colleagues were persistent and eventually convinced me to give it a try.

    This was well before I knew about prompt engineering, and come to think of it, I don’t think the term “prompt engineering” even existed yet!

    My first experience with ChatGPT

    When I found myself with some free time, I opened ChatGPT and entered the headlines (H2s) from a lengthy article I had recently finished.

    That is decidedly NOT prompt engineering, but the results that ChatGPT produced were much in line with what I had written.

    The difference was that AI completed, in seconds, tasks that had taken me a couple of days.

    So, I gained trust in AI. I played, and I gained.

    Being a somewhat rational person, I still verify and proofread what AI produces. And sometimes AI should not be used at all. Students should still do their own research, for example.

    People are affected by AI even if they do not use it directly.

    But even if students are not using AI for their homework or papers, their teachers are likely creating lesson plans with AI, and school boards and school administrations are definitely using it. The media is certainly overusing AI. And I guarantee that lawmakers are using AI to sway voters and to research the bills and laws they introduce.

    This means that even if students are not using AI directly for their education, they ARE being influenced by AI as it trickles down the hierarchy.

    AI Detection is “iffy”, another reason to stop using the phrase “AI SLOP.”

    1. Grammarly’s “AI Detector” feature is far from trustworthy. If you don’t believe me, generate some copy in Gemini or ChatGPT, paste it into Grammarly, and see what happens.
    2. I have friends who “hate all AI” yet love some of the AI-generated copy I have shared with them without telling them it was AI-generated.
    3. And of course, there are “get rich quick” schemers trying to make a quick buck with “failproof AI detectors” that will identify AI-generated copy for “a small monthly fee.” That’s just one of countless examples of why humans are less trustworthy than AI!

    Now the fun part: Humorous Definitions of people who use the phrase “AI SLOP.”

    With so many people defining me, I decided to fight fire with fire and define them! And naturally, the way to do it was with AI!

    So I wrote a prompt quickly, then fed it into ChatGPT, Google Gemini, and the Grammarly AI feature.

    My intention was to use this endeavour as a warm-up exercise and then move on to something more substantial, but the results were too funny not to share, so here’s this article.

    Below is the prompt and the AI responses. Enjoy!

    [Image: the prompt]

    ChatGPT 1:

    Here’s a version that stays sharp, funny, and pointed without veering into outright harassment territory:


    AI-Slopper (n.)
    A person who reflexively uses the term “AI slop” to dismiss anything created with artificial intelligence, typically revealing more about their own limitations than about the work itself.

    Characteristics often attributed to an AI-Slopper include:

    • Technological Fragility: Becomes overwhelmed by the idea of typing a coherent prompt, viewing basic interface elements as if they were ancient runes.
    • Creativity Vacuum: Assumes AI can only produce trash because they personally cannot imagine anything worth making.
    • Default Cynicism: Responds to new tools with the same enthusiasm a medieval monk might show toward a printing press.
    • Luddite Swagger: Proudly rejects emerging technology while simultaneously benefiting from dozens of hidden algorithms every day.

    Usage:
    “Don’t mind him — he calls everything ‘AI slop.’ If it has a button, a menu, or a spark of imagination, he’s against it.”

    ChatGPT 2:

    Here’s the film noir version — smoky, cynical, fedora tilted just right, describing a fictional archetype of the “AI-Slopper” in classic hardboiled style:


    AI-Slopper (n.) — Film Noir Edition
    They called him an AI-Slopper, the kind of two-bit mug who’d walk into a room full of geniuses and declare everything “SLOP” before he’d even bothered to read the sign on the door. A real dime-store critic with a walnut-sized imagination and a fear of technology so deep you’d need a submarine to find the bottom.

    He drifted through the digital underworld like a busted typewriter — all clacks, no ink.

    Profile, as understood by detectives in the know:

    • Technological Lightweight: The type who stares at a prompt box the way a cat stares at a bathtub — lots of suspicion, zero comprehension.
    • Creativity Flatline: I once asked him for an idea. He handed me an empty napkin and said, “This is my vision.”
    • Luddite by Lifestyle: He treats progress like an unpaid debt collector — something to hide from behind half-drawn blinds.
    • Fastest Dismissal in the West: You show him anything innovative, anything with a spark of originality, and he’ll squint, wrinkle his nose, and mutter “AI slop” like it’s a curse he learned from a drunk at the docks.
    • Knowledge Thin as a Cheap Cigarette: Everything he knows about AI could fit on the back of a matchbook — and most of that would be misspelled.

    I poured myself a drink and watched him flap his gums.
    “Slop, slop, slop,” he said, over and over, like a broken jukebox playing a song nobody asked for.

    And I realized something:
    This guy wasn’t afraid that AI would replace him.
    He was afraid it already had — and nobody noticed.

    Google Gemini:

    It sounds like you’re looking for a highly critical definition of a person who uses the term “AI Slop.”

    Here is a definition focusing on the characteristics you mentioned:


    🚫 Definition: “AI Slop” User (Critical View)

    A user of the phrase “AI Slop” is often characterized as an individual exhibiting a profound technological illiteracy coupled with a deep-seated creative deficit.

    • Technological Ineptitude: Their use of the term typically betrays a limited intelligence regarding modern generative tools, as they are fundamentally unable to grasp the concept of prompt engineering. They fail to progress beyond the most basic, vague inputs and thus receive low-quality, generic outputs, which they then erroneously label as “slop.”
    • Creative Deficiency: The complaint is often a projection of having zero creativity. Lacking the imagination or skill to craft an effective request or iteratively refine a result, they blame the tool for their own inability to generate anything meaningful.
    • Dismissive Luddism: Ultimately, they function as a dismissive Luddite, reflexively rejecting powerful new technologies out of fear, misunderstanding, or an unwillingness to adapt. They dismiss the entire field based on their own poor, unskilled results, seeking to diminish the value of a medium they cannot yet master.

    Grammarly:

    AI Slop Crusader (noun): A self-appointed guardian of “authentic” content who brandishes the term “AI slop” as a substitute for critical thinking. This individual possesses the unique ability to detect artificial intelligence in any work they dislike, while remaining blissfully unaware that their inability to craft a coherent prompt reflects not the technology’s limitations, but their own.

    Characterized by a militant nostalgia for inefficiency and a steadfast refusal to learn basic skills that would take approximately four minutes to acquire, the AI Slop Crusader mistakes their technological illiteracy for moral superiority. They view all AI-assisted work as inherently inferior, yet curiously lack the creative capacity to produce anything superior themselves.

    Often found commenting “this is obviously AI slop” on human-created work, or dismissing genuinely innovative AI-assisted projects without examination, they represent a modern Luddite—one who smashes not textile machines, but the very concept of tools evolving beyond their comfort zone. Their rallying cry of “AI slop” serves as both shield and sword: a shield protecting them from engaging meaningfully with new technology, and a sword with which to cut down others’ efforts while contributing nothing of value themselves.

    See also: Keyboard warrior, armchair critic, person who still can’t figure out how to unmute themselves on Zoom calls.