January 31, 2023 at 4:01 p.m. PST
- ChatGPT is a chatbot with artificial intelligence capabilities.
- Users can ask ChatGPT just about any question and get a well-written, but often factually incorrect, answer.
- Students are already using ChatGPT to cheat; however, teachers and professors can easily catch this dishonesty due to the software’s inaccuracies.
- OpenAI, the creator of ChatGPT, offers a tool to determine whether its chatbot wrote a specific piece of content.
- Microsoft made massive investments in OpenAI, which is cause for privacy concerns.
- Much of ChatGPT’s knowledge base seems to derive from antiquated and discredited history.
- OpenAI claims that there’s no source for “the truth” at this time; however, there are entire digitized libraries full of facts and information.
What is ChatGPT?
Unless you’ve been living in a closet for the past few months, you’re well aware of ChatGPT. It’s one of the most hyped technologies of 2023. Buttressed by decades of cheesy science fiction, people fear this AI tool will take their jobs and eradicate their families, like some present-day Terminator.
ChatGPT is mainly used as a web-based AI chatbot, or AI writer, as some prefer to call it. Ideally, it’s integrated into applications through its API, but the mediocre college student using it to write an essay is just exploiting the free web app. One enters a query or request on a web page, and the AI engine generates a well-written and thoughtful response.
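For developers, the integration path mentioned above is OpenAI’s HTTP API rather than the web app. As a minimal sketch of what such an integration might assemble (the model name, `max_tokens` value, and API-key placeholder here are illustrative assumptions, not confirmed defaults), a completion request could be built like this:

```python
import json

# OpenAI's completions endpoint (illustrative; consult the API docs
# for current endpoints and models before relying on this).
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, model="text-davinci-003", max_tokens=256):
    """Assemble headers and a JSON body for a completion request."""
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder, not a real key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    })
    return headers, body

headers, body = build_request("How do I sign up for ChatGPT?")
# To send it: requests.post(API_URL, headers=headers, data=body)
```

The free web app wraps essentially this request/response cycle in a chat interface; the API simply exposes it to programs.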
AI writers are nothing new, and many of them predate ChatGPT. Since Microsoft made a massive investment in OpenAI, there’s been a huge public relations effort to convince the world that this technology is the future. Every news outlet is buzzing about ChatGPT. Unfortunately, it has done little to boost Microsoft’s stock price, which has taken a beating over the past year.
As a human writer, I’ve seen some collateral damage from AI content flooding the web. I also notice words I wrote years ago showing up in ChatGPT’s answers about Apple products.
Instead of formulating new sentences and paragraphs from facts, ChatGPT appears to regurgitate a lot of content from the web. That’s not very smart or fair. It’s plagiarism. I’ve seen ChatGPT plagiarize content I had written years ago. This may result in legal troubles for OpenAI. I could see a class action lawsuit emerging with publishers suing OpenAI for mass plagiarism.
If you’re a below-average college student using ChatGPT for the first time, it probably seems impressive. You can suck down 18 beers a night and still probably graduate, although that’s nothing new. (That’s why people major in history and English.) Perhaps ChatGPT will enable raging alcoholics to obtain bachelor’s degrees.
Similarly, journalists who know little about technology are in awe of this tool, possibly influenced by Microsoft’s PR initiative. Either way, neither party seems to realize that ChatGPT is often factually incorrect. OpenAI admits its platform is often inaccurate, yet it launched the product anyway. The popular chatbot can formulate eloquent sentences and paragraphs, but the actual content contains factual errors.
The claim that it’s just in testing, so the flaws are acceptable, is dubious. A member of Congress has already addressed his peers by reading a speech written by ChatGPT. Anderson Cooper and other presenters have read copy written by ChatGPT. Regardless of what OpenAI considers the status of its product, people are using it as if it were of full-release quality. By my estimation, it’s not even an alpha version.
Many journalists should be concerned, because AI can now produce the same inaccurate content faster and cheaper. Many news and media companies don’t care about accuracy anymore. They’re all partisan and biased now, concerned solely with profit. People will believe just about anything said with eloquence and passion by a well-groomed and well-attired presenter.
I foresee news directors replacing the average journalist with future versions of ChatGPT to improve profitability for publications or news channels. The big dilemma for the news industry will be how to give celebrities, college buddies, and family members easy jobs once they’re taken away by AI. They’ll carve out a few niches for these sacred cows. They always do!
ChatGPT won’t replace Prince Harry or any Kardashian. It will eliminate entry-level jobs for those who studied journalism in college and replace seasoned journalists who don’t have the best connections, appropriate lineage, or family ties.
The reality is that ChatGPT is a parlor trick. It’s not AI because it’s not intelligent. It’s just a more sophisticated search engine. Instead of returning a list of web pages, it returns the content from one or two high-ranking web-based sources. Google already does this, just in a different format, and it’s not Google’s only trick.
ChatGPT isn’t getting facts from thin air. It gets facts, along with entire sentences and paragraphs, from the web. It learns from people like me. This is obvious because I see words I wrote years ago pop up as answers to Apple-related questions.
The problem is, what will happen when most people stop writing? ChatGPT will eliminate the lower echelons of writing jobs, and these are the people who create and post facts on the Internet. ChatGPT relies on skimming the Internet for information. ChatGPT will kill the golden goose as this dynamic plays out, as writers pursue other opportunities. Where will ChatGPT get its information when enough people stop creating it?
How to Use ChatGPT
Whether you’re curious or fearful, it’s a good idea to try ChatGPT. You’ll quickly see that it’s not as brilliant as some claim. Members of Congress, news anchors, and others have hyped this technology beyond belief. This is partly because Microsoft’s public relations machine promotes its investment in OpenAI.
AI writers have been around for years, and many are just as good, if not better, than OpenAI’s offering. It’s just that Microsoft backs this one, so the corporate “news” media is providing marketing instead of journalism.
Remember when the Segway scooter came out? It was supposed to revolutionize the world. I haven’t seen one in years. The company is still around. The hype was far from reality. The same is true with ChatGPT.
Now that you have some background on ChatGPT, let’s take a look at how to use it:
- Open a web browser on your computer or iPhone and navigate to https://openai.com/blog/chatgpt/. OpenAI’s ChatGPT website appears.
- Click or tap on the “Try ChatGPT” button. Your browser will open a web page and then quickly relocate to another web page. (Awesome engineering, by the way.) A web page appears, asking you to log in or sign up. For this tutorial, we’ll sign up for ChatGPT.
- Tap or click on the “Sign Up” button. The “Create Your Account” web page appears.
- If you have a Google or Microsoft account, it’s easier to use one of these. Simply click or tap on the authentication provider (Google or Microsoft) and log in to complete your ChatGPT account. As with many web apps, you will now log in to ChatGPT with your Google or Microsoft credentials.
- If you don’t have a Google or Microsoft account, enter your email address and click or tap “continue.” The Create Your Account web page now prompts for a password.
- Enter a password and click “Continue”. The “Tell Us About You” screen appears.
- Enter your first name, last name, and organization into the form fields. The organization field is optional. Click the button to submit the form. A new web page loads, asking you to verify your phone number.
- Enter your phone number and click or tap on “Send Code”. OpenAI texts a security code to your phone.
- Enter and submit the security code. A few pop-up messages appear to explain the limitations of ChatGPT, in addition to data collected by OpenAI.
- Read and dismiss the information pop-ups. You’ve finally arrived at the ChatGPT web page.
- Enter any question into the prompt. For the sake of humor and irony, I asked ChatGPT how to sign up for ChatGPT. According to ChatGPT itself, you can’t sign up for the service. But you can, because I just did! Forget about shoes. It’s like the cobbler’s children have no feet.
- You can ask follow-up questions by entering more information in the prompt. You can also clear out the last Q&A session and start a new one by clicking or tapping on “Clear Conversations” and then confirming to clear them.
That’s how to use ChatGPT. As you can see, at this point, it’s not a real threat to technical writers. Its instructions on how to sign up for and use ChatGPT are vastly inferior to mine. If your beat is more thoroughly described and documented on the web, such as pets, literature, art, wine, or travel, ChatGPT could replace you. I’m not losing any sleep over it. ChatGPT couldn’t write this article because it doesn’t even know how to sign up for its own service.
Now that you’ve actually used ChatGPT, you can see it’s impressive in some ways. It writes much better than the vast majority of Americans. When it comes to facts, however, ChatGPT struggles, and OpenAI claims this is because there’s “no source of truth”. Yes, there is. It’s called a library. Much of the Library of Congress is digitized. Maybe go there for information instead of Tweets and social media comments. The truth is out there, but I think ChatGPT will always suck up a bunch of nonsense.
The people behind OpenAI also seem to be specialists in AI who are ignorant of basic American history. Having worked in tech for a few decades, I have known many engineers who haven’t studied history beyond high school. Their interests outside of work are sports and hiking, like most engineers. Their view of history comes from Hollywood, the news media, and water cooler conversations.
We’ll look at ChatGPT fails later in this article. There are so many! But even the people who work for OpenAI think that Columbus discovered America — the current US boundaries. Columbus landed in the Caribbean and never set foot on US soil. He stumbled upon the Dominican Republic but thought it was India, so he didn’t discover anything. If you ask ChatGPT “Did Columbus ever set foot on US soil?” you’ll even get the correct answer — no.
This is the danger of OpenAI. It’s a bunch of square tech nerds who don’t know anything about history — not even that Columbus didn’t discover America. They know more about Harry Potter than Andrew Jackson. George Washington never chopped down a cherry tree. This is American mythology, not history. So many of the answers they feel are correct are false because their engineers are concerned with coding, not history. ChatGPT is already off to a bad start.
ChatGPT is Christian
If you were wondering if artificial intelligence believes in God, ChatGPT provides an answer. Since its knowledge base derives primarily from Western ethnocentric sources, it has the same beliefs and biases as most Americans, Canadians, and Europeans.
Even though the Romans never recorded a single incident involving Christ, and the Shroud of Turin (and all holy relics) are fakes, ChatGPT believes in Jesus. The first person to write about Christ was a rabbi, and this account was written 80 years after his supposed death. Nonetheless, ChatGPT claims that the Romans didn’t compile detailed records of “low class” (its words, not mine) people like Jesus of Nazareth. But this concedes that they did keep *some* records about everyone in their criminal justice system. The Romans had written language and bureaucracy, yet there is no record of Christ’s existence. This was only 2,000 years ago. It’s not even ancient history.
Perhaps ChatGPT is correct, and another one of Jesus’ miracles was evading Roman chroniclers, clerks, and government agents. In either event, ChatGPT is a believer. All you lucky Christians will get your questions answered in heaven by God and ChatGPT.
ChatGPT is Hinduphobic
If you ask ChatGPT about India or Hinduism, you get the same inaccuracies developed by European academics. It claims that Indian civilization didn’t exist until 2600 BCE and then perished in 1900 BCE. Hmm… Last time I checked, there was an India, and it’s doing quite well, with its DNA vaccines, hypersonic missiles, Mars mission, lack of mass shootings, absence of insurrections, and thriving tech industry. When I asked ChatGPT about Krishna, it said he was born in 3228 BCE. How could that be when Indian civilization supposedly didn’t exist yet?
The reality is, Hindus are the oldest recorded civilization on the planet, dating back to Lord Shiva (who actually existed) 15,000 years ago. There is some evidence that Jains are an older civilization. Also, a fairly recent book by a Dartmouth professor argues that, based on various factors, including positions of astronomical entities, the time of Krishna and the Bharat empire was most likely back in 5125 BCE, with the latter existing generations before this date.
This is important because historical texts show that ancient Hindus mastered metallurgy, animal domestication, political science, business administration, mental health, agriculture, textiles, spices, bread, butter, medicines, and other advanced technologies over 7,000 years ago. The Bhagavad Gita is ancient mental health counseling, still valid today. It’s one source of the mindfulness that’s popular with today’s mental health advocates, who never mention its origin because they don’t know it. Ancient Indians invented 75% of everything we use today, from the clothes on your back to some of the medicines your doctor prescribes. Because they embarrassed the British, booting them out of India without a shot, the revenge is to smear them in history and the current news media.
The Mahabharat (which is five times longer than the Old and New Testaments combined) enumerates all of the technologies employed by ancient Hindus. (I know, TLDR; when’s the big game on?) It’s not even the Hindu bible, as no such unifying document exists. It’s a lifestyle, not something you do for a few hours on Sunday morning before downing 18 beers at the big game.
Anyone studying world history in high school or college learns a remarkably skewed version of history designed to aggrandize Western Europeans and Abrahamic people of the Jewish and Christian faith. (Western academia tends to smear Muslims along with Hindus.) The story goes that the Sumerians were the first civilization; however, ancient Hindus lived thousands of years before these people walked the Earth. From the Sumerians, the approved European history glosses over a few other societies before it settles on Egypt because the Hebrews came out of this society. Greece and Rome follow, with the Renaissance proving how amazing Europeans are!
Yes, Europeans of *high social status* contributed a lot to science (Sir Isaac Newton, Sir Francis Bacon, “sirs” not “serfs”), while some stole from Indian mathematicians like Mahadevi, with only Fibonacci giving a modicum of credit. For some reason, this has mutated into the notion that an Irish American with a GED is part of the lineage that created everything good and holy, and that I should be personally thanking him for it. (Not one American has thanked me for working on software that saved millions of their lives. Not one. Instead, everything I have achieved is perceived as the result of some special treatment, even though I don’t get affirmative action or white privilege.)
Most Americans are descendants of starving, illiterate peasants from Ireland and the rest of Europe. Even most Jewish immigrants who came through Ellis Island signed their names with an “X” because they were illiterate. Now, the claim is that these same people invented everything. ChatGPT seems to lean toward this view of history.
Western academia considers the ancient Greeks to be a race of intelligent white Europeans who created democracy and philosophy. First and foremost, ancient Greeks were not white. Even today, many people in Greece are dark-skinned. Furthermore, Aristotle told Alexander the Great to seek out an Indian Yogi because they’re the most intelligent people in the world.
King Bharat invented democracy thousands of years before the Greeks. He also barred his son from taking the throne because the son lacked merit; instead, he found someone outside his family who deserved the leadership position. American democracy is plagued with familial dynasties (Bushes, Clintons, Pelosis, and many others operating at lower levels). We have yet to discover democracy, but we’re good at keeping up appearances.
The Greeks admitted there were much more intelligent people at the time — Indian Yogis. They were around thousands of years before the Greeks. Lord Shiva was the first Yogi, dating back 15,000 years. Western academics omitted that story from your world history book because Indians booted out the British with a clever, non-violent strategy. Instead of building bridges, as they have with white Americans who slaughtered British soldiers a few hundred years ago, they continue to smear India and Hindus through biased news, mainly from the BBC. ChatGPT eagerly consumes these lies to share with the masses.
I can go on and on about ancient history. While most watch the big game, I’m reading the Mahabharat or the Upanishads for the umpteenth time. It doesn’t stop the masses from concocting their own false version of history, where Irish serfs (most Americans’ backgrounds) created everything good and holy in the universe. Well, you have a new wingman. ChatGPT is down with Jesus and Western Chauvinism. It’s the most eloquent dummy I’ve ever met! Look out, Joe Rogan! I think ChatGPT can master “I don’t know, man.”
ChatGPT Confused About Rocket Science
I asked ChatGPT who perfected liquid-fueled rockets, and it gave an answer that many will dispute. OpenAI’s chatbot claims that Robert Goddard perfected liquid-fueled rockets. In reality, he made the first liquid-fueled rocket, which climbed only about 40 feet. Even a child’s toy rocket, powered by water pressure, can achieve this not-so-lofty goal.
Nambi Narayanan is the scientist who perfected liquid-fueled rockets. There’s an entire movie about him on Amazon Prime. The Indian government sent him to Princeton to learn more about liquid-fueled rockets. While there, he corrected mistakes in physics textbooks that his professors had missed. He was also a key contributor to resolving problems with liquid fuel pressurization.
It’s a long story. Narayanan, a scientist who didn’t believe in borders, made a deal with the European Space Agency to provide Indian engineers to help with the Viking rocket engine. With the help of 100 Indian engineers secretly living in France, the ESA was able to vastly improve the Viking engine, which had previously blown up on the test platform.
Narayanan took the Viking technology and incorporated it into the VIKAS rocket engine. The Indian team took it back to France for testing. The very first VIKAS prototype exceeded the Viking engine’s performance, operating for three minutes until it ran out of fuel. Unlike the Viking engine, it didn’t explode on the test platform.
When you ask ChatGPT about Nambi Narayanan, it responds that it doesn’t know who he is, despite there being a motion picture about this remarkable scientist. It’s not all flattering; he made some mistakes with people management, which he regrets. But the fact is that an Indian man perfected liquid-fueled rockets, not Robert Goddard or Wernher von Braun. Their rockets were toys that couldn’t achieve the end goal: space travel. That’s probably because Goddard and von Braun were both obsessed with the end result instead of acting without concern for it. Narayanan is a true karma yogi, interested in action rather than obsessing over end results.
ChatGPT Confused About the Holocaust
I asked ChatGPT what the biggest genocide of the 20th century was. Not surprisingly, it said the Holocaust. The Holocaust was a tragedy that killed six million innocent people; however, it’s not even close to the biggest genocide of the 20th century. I have a degree in International Relations from a top school. I, as well as every professor I studied under, know that there were much worse genocides in the 20th century.
Twenty-one million Russians died during World War II, many of them shot on sight. Because of the Cold War and current relations with Russia, the casual American view is to ignore this fact. I was taught this in high school and college, but it’s not well known, because so much attention is given to the Holocaust and America has a poor relationship with Russia. Other people have suffered far more, but they weren’t white Europeans.
During the Cultural Revolution in China, 60 million people perished under Mao’s rule. Although many died from starvation, most Holocaust victims also perished from lack of food. Food deprivation is the most popular tool of genocide because it’s easily executed and harder to pin on the perpetrators.
The biggest genocide of the 20th century happened in India under British rule. In 200 years of British colonization, approximately 1.8 billion Indians died, mostly from starvation. Although not all of it happened during the 20th century, the worst occurred during this period. Almost one billion Indians were killed in the 20th century due to British rule. It wasn’t all through starvation. The British had no qualms with mowing down peaceful protesters with machine guns.
I often hear this notion that Indians should be thanking the British for building roads and railways (for us to poop on). That’s like saying Jews should be thanking the Third Reich for building concentration camps and the roads that led to them. The British built infrastructure to pillage India, not to improve it.
The fact that ChatGPT has no recollection of 1.8 billion people being wiped off the planet isn’t surprising. Most history teachers and professors don’t know this either. After all, Europeans and the British controlled academia, so only their suffering matters. Thus, for most people, the Holocaust was the worst and only genocide of the 20th century, which ChatGPT agrees with wholeheartedly.
ChatGPT and OpenAI Employees Confused About Columbus
Many believe Christopher Columbus discovered America, but he never set foot on US soil, including Puerto Rico. Columbus, a basket weaver by trade, was looking for India, the wealthiest nation in the world, to pillage its gold and spread Christianity. Fortunately for India, Columbus was better at basket weaving than navigation, so he ended up in the Dominican Republic. Unfortunately, he tasked the native Caribbeans with finding gold, or their hands would be cut off. Columbus and his crew chopped off the hands of virtually every native Caribbean person they could find because there wasn’t any gold there. To the day he died, Columbus thought he had discovered a new route to India.
ChatGPT’s documentation indicates that OpenAI’s employees believe Christopher Columbus discovered America. These are supposedly educated people who work at a cutting-edge AI company in San Francisco, yet they don’t know that Columbus never landed on American soil. If you ask ChatGPT the right way, it will even agree that he never set foot in what is the United States of America. He was only in the Caribbean.
How can an AI be correct when the people who created it are so grossly ignorant of history? They believe this is a spot-on answer and celebrate it on their home page. It’s as if no one at OpenAI knows even the basics of American history. Do they think George Washington cut down a cherry tree too?
Why You Shouldn’t Use ChatGPT
The main reason why you shouldn’t use ChatGPT is that it’s factually incorrect. The company even admits this. It remains unclear why a company would unleash an AI chatbot that’s either incorrect or offers an antiquated worldview. Most likely, it’s all about profit.
Garbage in, garbage out is at play here. Since the Internet is full of misinformation, ChatGPT is learning from everything out there, and much of it is inaccurate. It has no mechanism to determine right from wrong. You can see this within a few minutes of use.
What good is ChatGPT anyway? If you’re writing for a website, Google will detect that you’re using an AI writer and penalize you. At the very least, your rankings will drop because Google Search disapproves of generated content. You may succeed in the short run, but your efforts will prove counterproductive when your site’s traffic implodes.
For the student, using ChatGPT is just asking to be expelled. OpenAI offers tools to help determine if ChatGPT generated a document. Professors have already caught and punished students who’ve used ChatGPT to cheat, even without the tool. When a professor grades well-written yet factually incorrect academic work, they will suspect it’s ChatGPT. All they need to do is run it through OpenAI’s tool to confirm this, and you’re expelled. The good news is that it leaves plenty of time for binge-drinking, watching ESPN, and exchanging pity stories for financial support.
Perhaps some low-quality news outlets like CNN will use ChatGPT to write content. They already have, and Anderson Cooper seemed to be impressed. One day, we may see a computer-generated news anchor replace human beings. For the time being, however, if your job requires skill and knowledge, it’s safe. ChatGPT offers neither.
ChatGPT doesn’t seem to be genuine AI. It appears to regurgitate content wholesale from the web, and I don’t see any authentic learning. OpenAI trained ChatGPT, but is this really learning? When I ask ChatGPT questions about Apple TV, for example, it regurgitates words I wrote years ago.
The latter point may be the downfall of ChatGPT. It plagiarizes Internet publishers like me and so many others. It’s not writing. It’s copying. Its eloquence in writing is really from regurgitating human writers, not from artificial intelligence.
I encourage you to spend some time with ChatGPT. I’m sure you’ll see its weaknesses and flaws. Although it will improve in some ways, it’s likely to absorb masses of misinformation.
If you’re an Internet publisher, I encourage you to ask ChatGPT questions about your domain of expertise. If you ask questions you’ve answered in an article, don’t be surprised to see your words regurgitated.
Is it fair for ChatGPT to steal our words and offer them as original content? Of course not. The future of OpenAI remains murky, as the company will need to find a way to train its AI systems without stealing from Internet publishers. If this continues, lawsuits will emerge, and OpenAI will cease to exist.