The AI Communication Problem

[AI Generated: an abstract image that represents the concept of artificial intelligence]

There is a problem with AI.

I don't mean any of the technical, ethical, or political problems; I'm specifically talking about the problem of accurately informing the public about how AI works. I'm sure I'm not the first to notice this, but I've been thinking about it a lot recently. For a variety of reasons, correctly communicating how AI works and what people should expect from it is extremely difficult. And when this communication is poor, people get the wrong idea about what they should use AI for and how they should regulate it.

First, understanding how AI works is just plain hard. A lot of math, computer science, and technical lingo wraps the inner workings of AI and makes it difficult to understand. Even for someone actively learning in the field, it's hard to keep up with everything happening. How do you explain to the "average" person how a neural network works, or how diffusion-based generative AI works*? A common misconception is that LLMs are "copying" text from their dataset, or that their answers must just be cobbled-together versions of what they were trained on. It doesn't help that overtrained models sometimes do spit out training data word for word. But technically speaking, the AI has no memory and no database of text to pull from. Every single token is generated one at a time from a probability distribution over possible next tokens; the model is literally incapable of looking up and copying text from somewhere else. This is why it's so frustrating when someone dismisses or criticizes LLMs for just copying what they learned - yes, they can spit out text word for word, but only because, for that specific sequence of words, the most likely next word the model learned happens to match the training text. Of course, I'm not defending the effects of this - AIs spitting out text verbatim is highly problematic. In fact, throughout this whole post, I'm not defending AI at all, just pointing out different gaps in communication between AI developers and the general public.
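To make that concrete, here's a toy sketch of autoregressive generation in Python. The "model" here is a made-up table of probabilities rather than a real neural network, but the shape of the process is the same: each token is sampled from a distribution conditioned on the tokens so far, and there is no stored document to retrieve from.

```python
import random

# Toy "language model": returns a probability for each candidate next token,
# based only on the context so far. A real LLM computes this distribution with
# a neural network over tens of thousands of possible tokens; there is no
# lookup into a database of saved text.
def next_token_probs(context):
    if context and context[-1] == "the":
        return {"cat": 0.5, "dog": 0.3, "<end>": 0.2}
    return {"the": 0.6, "sat": 0.3, "<end>": 0.1}

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        choices, weights = zip(*probs.items())
        token = random.choices(choices, weights=weights)[0]  # sample one token
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(["the"]))  # prints a short, randomly sampled sequence
```

If the training data overwhelmingly said that "the" is followed by "cat", the sampled output will reproduce "the cat" verbatim - which is exactly how memorized passages can fall out of a model that never stores any text at all.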

Second, AI is too magical. You ask something of it, and it magically produces the "right" answer. We often use the metaphor that Large Language Models are just souped-up keyboard word predictors. That's spot on, but the problem is that they feel nothing like a word predictor. You don't talk to your keyboard or ask it questions and expect reasonable answers. The mathematics behind them is essentially the same, but since we have chosen (for many obvious reasons) to train LLMs for chat-like responses, it feels like we are talking to a person, or at least some kind of "intelligence". In reality, there is no "intelligence"; the computer predicts the next token given some input tokens. Here lies another thing I hear often: people asking AI for factual information. Even when those people know that AI can generate inaccurate answers, they view it as just that, a possibility of incorrectness, not the reality that the output has no grounding in truth at all. It just so happens that, thanks to its training, it is mostly right most of the time. This weekend, I went to a TEDx talk, and one of the highly educated panelists, when asked a question about AI, said that they often use LLMs to gain background knowledge on a new topic in neurology to avoid doing hours of research before a lecture. It caused me so much internal pain to hear those words. Sure, for a "common knowledge" topic, asking LLMs for facts might be fine - but for neurology? Everything an LLM outputs should be treated as unverified until backed up by a proper source.

Third, AI is too varied. Everything from Tesla's "self-driving" software to ChatGPT to DALLE is AI, but each one works in fundamentally different ways, to the point that they are almost incomparable. The only similarity is that their "intelligence" is "artificial". Yet AI gets used as a catch-all buzzword for this, and that, and everything. So when someone or some company talks about the new AI they're working on, one listener might think of the vapid hallucinations of LLMs like ChatGPT, another of the ethical whirlwind around image generators like DALLE and Midjourney, and another of the terrible political ramifications of deepfake technology. It's just too varied. The term is so broad that it's practically meaningless now, but people don't realize it. People (often on Reddit) see a headline with "AI" in it and think of completely different kinds of AI, without realizing how they might or might not be related.

Fourth, AI is advancing way too quickly. Again, I'm not saying that this in itself is a bad thing or that we should limit AI research, but it poses a challenge for AI communication. AI is developing at such a pace that what is true one day might be completely false the next. One day, an AI might struggle to depict hands, and that gets spread as a tip for spotting AI-generated media. The next, that issue might be essentially solved, and now the old advice is not just incorrect but actively harmful, because it leads people to believe the wrong things. This breakneck pace also doesn't give the general public time to let the capabilities and limitations of AI sink in, and it doesn't get any better with policymakers and politicians.

Fifth, AI is scary. And for good reason too - as mentioned above, it is developing faster than any technology before it. But fear makes humans act irrationally. Many people hate AI and think it should all be shut down for one reason or another. Whether it's theft from artists, threatened job security, or the implications of a "post-truth" era, people are afraid, and reasonably so. But people need to realize that, to a certain degree, "the cat is out of the bag". Whether OpenAI gets shut down tomorrow or not, the math and the science are all there - it's just waiting for someone to gather the right dataset, implement the right technique, and have enough resources to train it. So when techniques like Nightshade or Glaze are spread like gospel on Twitter (X), it frustrates me. Not because I think artists should have their life's work ruined, but because those techniques will not save them in the long run. A simple 3px Gaussian blur, some minute random noise, then a denoiser or sharpener will ultimately render the sub-pixel data poisoning moot**. There is no stopping the AI train. I'm not a doomer though - I believe that given enough collective will, we can implement change and create effective policies. But we need to realize that AI and all of its scary effects are something we must face, not avoid.
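To be concrete about the kind of transformation I mean, here is a rough sketch using Pillow and NumPy. The radii, noise level, and file names are arbitrary placeholders, and (as per the disclaimer) I haven't verified that this actually neutralizes Glaze or Nightshade in practice - it just illustrates how cheap a blur-noise-sharpen pass is compared to the effort of poisoning an image in the first place.

```python
import numpy as np
from PIL import Image, ImageFilter

def scrub(in_path, out_path):
    img = Image.open(in_path).convert("RGB")

    # 3px Gaussian blur: smears sub-pixel perturbations into their neighbors.
    img = img.filter(ImageFilter.GaussianBlur(radius=3))

    # A small amount of random noise on top.
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, 2.0, arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # Sharpen to recover some of the detail lost to the blur.
    img = img.filter(ImageFilter.UnsharpMask(radius=3, percent=150, threshold=2))
    img.save(out_path)

scrub("glazed_artwork.png", "scrubbed_artwork.png")  # placeholder file names
```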

So, what can we do? I wish I could end this post on a happy note or propose some silver bullet, but I can't. I have no good answer. We can, of course, come up with better explanations of AI, educate people on its real possibilities and limitations, and try to come to a consensus on how we want to integrate AI into our future. But that is much easier said than done. I just hope that humanity's ace, our ability to adapt, will save us yet again***.

Appendix:

Disclaimer: I'm just some student with some background and experience in AI. I'm no experienced ethicist or even AI expert. And if you read the post carefully, I make multiple unsourced claims. These claims are not necessarily true, although I genuinely believe they are more right than wrong. But of course, I seek to learn. I didn't bother verifying any facts because this is a vent post, not an academic paper.

*: So much lingo. So much jargon. Trying to follow a guide or tutorial and seeing a line like "About 3 or 5 epochs should be enough, just make sure your total iterations don't get too high. Watch the loss line and make sure it doesn't make this shape, or it's probably overtraining. Try turning the temperature down a little, decrease the learning rate, and use the adaptive model. Load this GGUF 4-bit quantized model for LoRA training later on." Like, almost every other word is a specialized term. It has taken me so long to figure out what each of those words means and what it does.
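For what it's worth, here's a toy PyTorch sketch showing where a few of those words actually live (epochs, learning rate, loss, overtraining). The data and model are made up and purely illustrative; the fancier terms like GGUF quantization and LoRA don't fit in a toy example like this.

```python
import torch
from torch import nn

# Tiny synthetic dataset: 64 samples of 4 random numbers; the target is their sum.
x = torch.randn(64, 4)
y = x.sum(dim=1, keepdim=True)

model = nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr = "learning rate"
loss_fn = nn.MSELoss()

for epoch in range(3):              # "3 epochs": three full passes over the data
    prediction = model(x)
    loss = loss_fn(prediction, y)   # the "loss" curve you're told to watch;
    optimizer.zero_grad()           # if it keeps falling on training data but
    loss.backward()                 # rises on held-out data, that's overtraining
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```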

**: I have not looked up any sources for this, for reasons mentioned in the disclaimer. But people on Twitter looooooove to tell artists to Glaze their work to prevent it from being stolen by AI!!!! As if the majority of the data has not already been collected and trained on. And surely sub-pixel-level manipulations for data poisoning cannot hold up to multi-pixel transformations like a quick blur and sharpening. At worst, you could take a phone picture of the monitor and the sub-pixel manipulations would be gone. Like, there is no way this is ever fully effective.

There is also another interesting theory floating around the internet: that AI and computer-generated media have already flooded the internet, and that any future datasets scraped from it will be loaded with low-quality AI works that harm future training efforts. In some ways, this feels true. Literally every Twitter comment section is just bots talking to each other and advertising everything under the sun. I have not seen a single large Twitter account with an intelligible comment section; I always open them expecting to see people discussing the tweet, but it's just random nonsense. Even on Pinterest, a simple search like "fantasy" or "portrait" will yield a surprising number of AI-generated results. I even added a bunch of AI images to a moodboard I was putting together for a project without noticing. I only found out later, when I went looking for details to get inspired by and realized they made no sense because the image was AI-generated. How long until the internet is an unusable pool of AI outputs? Hopefully never.

***: A genuine question I have is: what happens in a post-scarcity world? To a certain degree, we are already there. I believe humanity currently grows enough calories to feed everyone on the planet, yet there are still starving people. Thanks, human nature and the structure of our mostly capitalist society. But what happens when most people's labor is not needed? If we just lay everyone off, people will have no money to spend to keep the economy running. Will we finally implement some sort of Universal Basic Income? Will we simply let AI run the mechanisms that fulfill our basic needs while everyone focuses on their passions and creative pursuits? Or will the working class stay trapped in a meaningless cycle of earning and spending money just for the sake of keeping the ultimately meaningless dollars flowing and the economy going? Money, at the end of the day, is just a tool we made up to help us organize society. When does this tool turn into a prison? Oops, got too philosophical there; that would fit right in on r/im14andthisisdeep.

Thanks for scrolling to the bottom of this post.