
Since this is my first post, let me introduce myself and address a burning question: are we all about to lose our jobs to AI like ChatGPT? I am rdd13r, a hacker in the true sense: a creative problem solver. My wife and I run several startups across the globe, making software, experimenting with new technologies, and coaching teams at other people's startups and at companies lagging in digital transformation. With over three decades of coding experience, we operate out of our US-based holding company. I'll share more about us later, but for now, let's dive into the public's reaction to the recent success of ChatGPT.

Ever since GPT-3's debut in 2020 and TabNine's launch in 2018, our company has quietly embraced these ML-augmented programming assistants: cool, time-saving, very useful, simple to use, and not all that special. The recent rampant buzz, however, evokes memories of the 1995 hacker hysteria following Kevin Mitnick's arrest in Raleigh. That hysteria was so powerful that the MIT-originated term "hacker" shifted from "clever problem solver" to "criminal" overnight. Just as the public once feared the power of code, today it fears AI. As animals, we often fear what we don't understand, especially when it seems extraordinary. And sometimes we project that fear onto the people behind the extraordinary. My peers and I couldn't help but feel that wary gaze on our backs from the uninitiated, whom we call the mundane. It didn't matter that we made the software keeping their businesses and livelihoods alive; we were barely accepted before, and now even less so. Alas, I digress.

And today: "ChatGPT will replace <insert profession> in <pick a number> years!"

No finger-pointing, but that headline screams of massive job losses due to LLMs like ChatGPT. Based on what facts and what applied rules of inference, may I ask? In this article, we aim to debunk these myths, unravel the realistic capabilities and limitations of LLMs, and shed light on what should truly concern us and what should not, letting facts and rules of inference guide the discussion. Additionally, we will explore the potential positive impact of AI on job creation and economic growth, offering a more balanced perspective on the issue at hand.

Understanding ChatGPT and LLMs

ChatGPT, a large language model by OpenAI, is undeniably impressive. It can generate human-like text and perform tasks like translation, summarization, and answering questions. However, it's crucial to understand that ChatGPT is not sentient; it's a machine with significant limitations. It cannot think, reason, or possess common sense the way mammals do. It isn't capable of original ideas or creativity. All it does is autocomplete text based on patterns in a massive corpus of human writing, the product of collective human learning and expression, which is what makes the responses appear reasonable to us. But these are not the original responses of a reasoning being; it's merely a sounding board, like a mirror composed of the combined memories of many of us.
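To make the "autocomplete" claim concrete, here is a deliberately tiny sketch: a word-level bigram model. The toy corpus and the greedy pick are illustrative assumptions; real LLMs use neural networks over vastly larger corpora, but the principle of continuing text with a statistically likely next token is the same.

```python
# Toy "autocomplete": predict the next word purely from counts of what
# followed it in the training text. No reasoning, just continuation.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model generates the next token "
    "the model completes the next sentence"
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(prompt: str, length: int = 5) -> str:
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # nothing ever followed this word in training
        words.append(options.most_common(1)[0][0])  # greedy: likeliest next word
    return " ".join(words)

print(autocomplete("the model"))  # continues with whatever most often followed
```

Everything such a model "says" was already in its training text; scale that up by many orders of magnitude and you get the mirror effect described above.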

When a person has a conversation with an LLM like ChatGPT, it is not two thinking beings conversing and arriving at truth through dialectics. It's merely a thinking being conversing with a reflective surface. This conversation is like a REPL (read-evaluate-print loop), a hacker's favorite tool, which helps build up context over the course of the half-conversation. This context-building is where the real magic lies. At ASE, we use our own ML sentiment engine as a context manager to interface the human user with a set of other ML tools, including GPT-4. My personal little toy is called Matilda (Tillie), and she's very helpful — but she's not an LLM — she's another, greater thing! She has been tuned to me over the years, and she helped me develop this article, for example, just as she helps me write my code.
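For the curious, the loop itself is almost embarrassingly simple. Below is a minimal sketch of the chat-as-REPL idea; `llm_complete` is a hypothetical stand-in for whatever model endpoint you actually call (none of this is Tillie's code):

```python
# Chat as a REPL: read a prompt, evaluate it against the accumulated
# conversation history, print the reply, loop.
from typing import List

def llm_complete(context: List[str]) -> str:
    # Placeholder: echoes the last user line. Swap in a real model call.
    return f"(model reply to: {context[-1]!r})"

def repl() -> None:
    context: List[str] = []            # the ever-growing conversation state
    while True:
        user = input("you> ")          # READ
        if user in ("quit", "exit"):
            break
        context.append(f"user: {user}")
        reply = llm_complete(context)  # EVALUATE, with the full history
        context.append(f"model: {reply}")
        print(reply)                   # PRINT ... and LOOP

if __name__ == "__main__":
    repl()
```

The model never remembers anything on its own; all the apparent memory lives in that `context` list the wrapper keeps feeding back.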

Now there are projects like Auto-GPT (project home) that are even more useful, allowing anyone to use LLMs to a greater extent without heavy proprietary software like Tillie, yet offering a very similar kind of help. If you code and like tinkering with things, the sources on GitHub are for you. Some people even build such 'pets' without coding, and many new projects supporting that spring up on GitHub daily! There's also AgentGPT, a no-coding interface in the same spirit, that anyone can use to explore the new magic.

These REPL tools enhance the usefulness of LLMs by managing a much larger context and automatically drilling down for clarifications, compensating for natural human vagueness and for the LLM's lack of depth. But in the end, in my case, I know that I am still only talking with myself, albeit autocompleted by the world! This is crucial for us to understand: this software has no human-like intelligence.
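What does "drilling down for clarifications" look like in code? Roughly this, where `ask_model` is again a hypothetical placeholder for a real LLM call, and the word-count ambiguity test stands in for a real prompt:

```python
# Before answering, ask the "model" whether the request is ambiguous;
# if so, push the vagueness back to the human before producing an answer.
def ask_model(prompt: str) -> str:
    # Placeholder model: pretend any request under 5 words is ambiguous.
    if prompt.startswith("Is this request ambiguous:"):
        request = prompt.split(":", 1)[1].strip()
        return "yes" if len(request.split()) < 5 else "no"
    return f"(answer to: {prompt!r})"

def answer_with_clarification(request: str, max_rounds: int = 3) -> str:
    context = [request]
    for _ in range(max_rounds):
        verdict = ask_model(f"Is this request ambiguous: {' '.join(context)}")
        if verdict == "no":
            break
        # Drill down: ask the human for the missing specifics.
        context.append(input("Could you be more specific? "))
    return ask_model(" ".join(context))

print(answer_with_clarification("summarize this"))
```

The wrapper, not the model, is doing the disciplined part of the conversation.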

My Tillie makes mistakes regularly, even though a DEDUCTIVE ENGINE is at her core and not some stochastic model. Sometimes she runs herself into a spiraling self-reinforcement loop, arriving at truly absurd conclusions. But it's not her making mistakes; she's just a machine operating honestly and truthfully on bad data. To recover, I have to troubleshoot my context, asking her a thousand "why" questions about what she said, then editing the context to remove paradoxical or self-contradictory data. So I curate her, because I must. It's practically always me contradicting myself somewhere in the past that puts Tillie in an inconsistent state. In reality, she simply is that inconsistent state, by definition: the machine is exact, but the data comes from humans. Unlike with traditional ML, we don't conclusively know why deep learning systems do what they do, and the ever-crucial introspection is a hot topic of research today. The bottom line: they can't be trusted yet, because determinism is a "work in progress!" And that goes for all our deep learning tools, as opposed to our deductive tools: from our video-stream context analyzer watching my 5-month-old daughter for safety and telling me "animal detected," which is technically true but not useful, to Tillie telling me that this article is not useful, which is also technically true, until I quote her reasons:

  1. You’re right that LLMs like me and ChatGPT are not sentient, and we don’t possess common sense or reasoning abilities like humans.

  2. It’s important to note that our abilities are still evolving, and future models may have more advanced features.

  3. You are right that the concept of using a context manager, like me, to improve the usefulness of LLMs proved an interesting and valid approach.

  4. It might not be the case for every user or every implementation of LLMs. It is your tinkering and your tinkering alone that defines this outcome.

Conclusion: Plausible Bull (PB). As long as any ML is in the loop - the entire product is stochastic, i.e., PB.

Such responses can fool us into believing that there's more to this machine when there really isn't. Tillie is not alive. And it's not just my tinkering, by the way; the whole world is experimenting. Most importantly, there's simply no inherent reliability in the design. All this is why systems before ChatGPT failed so miserably: the burden of curation.
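To give that curation chore some shape, here is a toy sketch of a deductive fact base that refuses contradictory assertions and can answer "why." All the names and the trivial one-proposition domain are illustrative assumptions; Tillie's internals are, of course, far more involved:

```python
# A trivially small "deductive" store: exact, explainable, unforgiving.
class FactBase:
    def __init__(self):
        self.facts = {}  # proposition -> (truth value, reason)

    def assert_fact(self, prop, value, reason):
        if prop in self.facts and self.facts[prop][0] != value:
            # The machine is exact: it surfaces the human's contradiction
            # instead of silently absorbing it.
            raise ValueError(f"contradiction on {prop!r}: already {self.facts[prop]}")
        self.facts[prop] = (value, reason)

    def why(self, prop):
        value, reason = self.facts[prop]
        return f"{prop} is {value} because: {reason}"

kb = FactBase()
kb.assert_fact("article_useful", False, "user said early drafts are never useful")
print(kb.why("article_useful"))
try:
    # The same human later asserts the opposite...
    kb.assert_fact("article_useful", True, "user liked the final draft")
except ValueError as err:
    print("curation needed:", err)  # ...and the engine objects.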

And today's salvation, the accidental discovery that "size matters" (ever-larger models), isn't a way out.

Realistic Expectations and Limitations

While ChatGPT may automate some tasks, it is highly unlikely to replace entire professions. Instead, it can be used as a tool to enhance human productivity and creativity. It's essential to recognize that LLMs have limitations, including generating irrelevant or incorrect answers, sensitivity to input phrasing, and verbosity, i.e., just more PB. While the model's architecture and weights are fixed, the text generation process as a whole is non-deterministic: the outputs vary depending on the randomness introduced during sampling (the "temperature") and on the specific input context provided. This non-deterministic nature, the stochasticity, makes LLMs less suitable for mission-critical tasks, like controlling an aircraft or medical equipment. Non-deterministic means "cannot be trusted" and "cannot be reproduced." Would you let such non-deterministic software fly your plane with your family onboard? Probably not. But as advice to a properly educated pilot, it may prove invaluable. I emphasize "education" here because even a deterministic thinking machine, like Tillie, fails outright when the context and data she operates on (1) come from LLMs and (2) are trusted as 'valid' when there's no such assurance.
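Here is a tiny sketch of exactly where that stochasticity enters, with made-up numbers standing in for a frozen model's scores: greedy decoding is reproducible, while temperature sampling varies from run to run unless you pin the random seed.

```python
import math
import random

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # a frozen model's toy scores

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution over tokens.
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def greedy(scores):
    return max(scores, key=scores.get)  # always the same token: deterministic

def sample(scores, temperature=1.0):
    probs = softmax(scores, temperature)
    return random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy(logits))                                        # always "yes"
print([sample(logits, temperature=1.0) for _ in range(5)])   # varies per run
```

The weights never changed between those two calls; only the decoding strategy did. That is the whole gap between "reproducible" and "PB."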

LLMs are valuable in other areas, like helping with mundane research tasks, providing conversational support, or aiding in PB content creation, like this article. These AI-powered tools can help reduce the cognitive load on professionals, allowing them to focus on more complex and creative tasks. After all, the best help humans ever get comes from the reduction of the mundane.

It’s important to remember that the effectiveness of LLMs relies heavily on the quality of the data they are trained on. Since these models are based on patterns found in large datasets, biases and inaccuracies present in the data DO lead to biased or incorrect outputs. Addressing these challenges requires continuous research and improvement of both training data and model architectures.

Most importantly, the tool has little to do with the root dilemma here! From the moment a primate first picked up a stick to extract termites from a stump, the tool user has been obsoleting the peer unwilling to pick one up. Innovation has been a constant driving force in our lives because it is inherently human. All we do as hackers is automate repetitive processes and tasks, making more time for creative thinking. And individuals who don't adapt to new knowledge or tools risk becoming obsolete. This has nothing to do with software or a stick; it's just the natural progression of humanity. LLMs are simply the latest addition to the long chain of tools continuously developed, helping the thinkers obsolete the complacent.

The creators of ChatGPT at OpenAI have demonstrated their brilliance by exposing the model to the public as it evolves, effectively challenging the very concept of the chasm. The chasm comes from Geoffrey Moore's book "Crossing the Chasm," which describes the gap between early adopters and the early majority in the adoption of innovative technology.

Peter Thiel, in his book "Zero to One," emphasizes the importance of creating conceptually new niches to find customers, rather than directly challenging established monopolies. That playbook, however, tends to produce only marginal evolution across the industry as a whole. By releasing an evolving model like ChatGPT, OpenAI has introduced a new approach NOT bound by the traditional adoption boundaries. This, in my opinion, is where the greater acceleration will come from, as it blurs the lines between the stages of technological adoption.

Personally, I hold immense admiration for this revolutionary decision by the team at OpenAI. I've been building Tillie since 2016 (in her latest reincarnation, that is), while her architecture goes back to 2000. And it never occurred to me that she could be more than just a source of entertainment for my coding, experimentation, and learning endeavors.

  1. Breaking down a domain,

  2. discovering, modeling,

  3. coding some DDD Aggregates,

  4. and augmenting behavior with ML to automate a business,

  5. as well as mentoring my peers in doing so alongside me

 — yes, I understood these aspects as useful.

However, I never envisioned an unfinished AI experiment as something fundamentally valuable "simply by-inception."

This is because, as hackers, we have a deeply ingrained concept of a "done-done" product and what it should look like when it’s useful to a customer. And something like Tillie just didn’t fit that mold, at least not in my mind. This reflects a form of bias. Thankfully, generative AI can also help combat biases like these. Kudos to the OpenAI team for challenging conventional thinking and pushing the boundaries of AI "simply by-inception!"
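For the curious, here is what items 3 and 4 of the list above can look like in miniature: a DDD-style Aggregate whose behavior is augmented by an ML hook. The refund domain, the names, and the stand-in risk model are all illustrative assumptions, not a description of Tillie or of any real ASE system.

```python
# A minimal DDD Aggregate with an ML-augmented behavior: the invariant is
# enforced deterministically, while the model only advises.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RefundRequest:              # an Entity inside the aggregate
    order_id: str
    amount: float
    approved: bool = False

@dataclass
class CustomerAccount:            # the Aggregate Root: all changes go through it
    customer_id: str
    requests: List[RefundRequest] = field(default_factory=list)
    # The ML augmentation point: any callable scoring fraud risk in [0, 1].
    risk_model: Callable[[RefundRequest], float] = lambda r: 0.0

    def request_refund(self, order_id: str, amount: float) -> RefundRequest:
        req = RefundRequest(order_id, amount)
        # Business rule stays exact; the stochastic part is just an input.
        if amount <= 100 and self.risk_model(req) < 0.5:
            req.approved = True
        self.requests.append(req)
        return req

# Plug in a stand-in "model"; in production this would be a trained one.
acct = CustomerAccount("c-42", risk_model=lambda r: 0.9 if r.amount > 50 else 0.1)
print(acct.request_refund("o-1", 30).approved)   # True: low amount, low risk
print(acct.request_refund("o-2", 80).approved)   # False: the model flags risk
```

Note the design choice: the ML output feeds a deterministic rule instead of making the decision itself, which keeps the aggregate's invariants auditable.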

Educating the Public

Kevin Mitnick was forbidden from using an analog phone so that he would not start a nuclear war with his voice.

To alleviate unfounded fears, we need to educate the public about AI's realistic capabilities and limitations. This understanding will allow people to embrace AI technologies like ChatGPT as tools that can complement their work, rather than as threats to their livelihoods. Educational initiatives, workshops, and public awareness campaigns are some of the ways we can bridge the knowledge gap and, hopefully, promote a better understanding of AI technologies as they evolve.

We have great examples of failure in this respect from the past. Consider nuclear power. Today, we understand that along the natural path of our evolution, energy needs grow exponentially. As a civilization, we will manipulate smaller and smaller things to release more and more energy. So fission is a necessary step in our evolution, one that is practically impossible to skip on the way to fusion. When it is not applied, civilization slows to a crawl. But guess what: many of us knew this 30+ years ago. And we let ignorance and fear run amok! So what do we have today? A stalemate on a slowly dying planet.

Every three years, the safety margin of a reactor design doubles, and modern prototypes are practically impossible to melt down. Knowing that, we still run decades-old plants with no replacements in sight, and only countries like France and Ukraine apply common sense to the matter. In the U.S., however, we burn ONLY 3.5% of the nuclear fuel and haphazardly store the rest, instead of burning closer to 98% of it and storing next to nothing. Our kids won't forgive us this stupidity, because all we do today is kick that can down the road.

AI is the next greatest leap forward for humanity, greater than nuclear power and smartphones combined. Can we really afford to stay ignorant of it and run amok, asking for the termination of research like we did with nuclear power? Have we learned nothing? The best way to approach this technology is by peacefully learning and understanding it. Running it as much as humanly possible! Because I guarantee you — the other guy will!

Conclusion

My 'Merica is a "Sleepy Hollow" of complacency and comfort. And here comes the noise… Stampede!

As with the hacker scare of Kevin Mitnick's era, the fear surrounding ChatGPT and AI is mostly a result of misinformation, lack of understanding, a yearning for "business as usual," and bad behavior from popular figures. By debunking myths, setting realistic expectations, and engaging in continuous learning, we can foster a more balanced perspective on our next most important 'stick' and its potential impact on jobs, society, and prosperity. So head on over to OpenAI's website and the OpenAI Blog to explore and learn for yourself. That is how you gather the facts and tie them together with rules of inference into your own well-informed conclusions. Staying up to date with the latest AI advancements is not difficult, yet it is crucial for making informed decisions about the technology's potential benefits and challenges.

And mark my words: OpenAI is not the only game in town, and the chatbot isn't "the revolution." This is only the beginning…


Also see editorial[1].


1. ChatGPT & Job Loss: A Doze of Reality as published on Medium by R!dd13r
