The Rise of the Chatbot. Are we an interim species?

We humans are among the latest in a long line of mostly extinct species on earth, and to the best of our knowledge, we are the smartest.

But are we the final step, or are we just another interim species? Will we be replaced by an even smarter species?

Nature has tried millions of experiments. There have been notable experiments with size, with the dinosaurs taking center stage.

The first dinosaurs emerged during the Triassic Period, 252 to 201 million years ago. During the Jurassic Period (201 to 145 million years ago), many large land animals went extinct, leaving more opportunity for the dinosaurs.

During the Cretaceous Period (145 to 66 million years ago), dinosaurs continued to evolve, and the biggest dinosaurs emerged. Argentinosaurus huinculensis is the biggest dinosaur ever found.

And then they died.

Except for the whales, nature’s experiment with size ended, replaced by the experiment with intelligence, which featured the mammals.

While many dinosaurs were warm-blooded and had large brains, both of which facilitate intelligence, our hands and upright stature seem to have brought us to the apex of intelligence.

So far, at least, for the experiment continues.

The big news in intelligence is artificial intelligence (AI) as demonstrated in chatbots.

IBM says, “A chatbot is a computer program that uses artificial intelligence (AI) and natural language processing (NLP) to understand customer questions and automate responses to them, simulating human conversation.”

If you use Siri or Alexa, you are using a basic chatbot. You ask a question in plain language and get an answer in plain language. So ubiquitous are these programs and devices that we often take for granted the technological miracle they represent.
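
To make that definition concrete, below is a minimal sketch (my illustration, not IBM’s or Apple’s code) of the simplest kind of chatbot: a program that matches keywords in a question to canned answers. Assistants such as Siri and Alexa layer speech recognition and far more sophisticated language models on top of the same basic question-and-response idea.

```python
# A toy keyword-matching chatbot: "understand" the question, automate the response.
# Purely illustrative; this is not how Siri, Alexa, or any commercial assistant works.
CANNED_ANSWERS = {
    "weather": "I can't see outside, but I hear it's lovely somewhere.",
    "time": "Time is an illusion; lunchtime doubly so.",
    "name": "I'm a toy chatbot, just a few lines of Python.",
}

def reply(question: str) -> str:
    q = question.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in q:                 # crude "natural language understanding"
            return answer
    return "Sorry, I don't understand that yet."

if __name__ == "__main__":
    print(reply("What's your name?"))    # matches the "name" keyword
    print(reply("Will it rain today?"))  # no keyword matches, falls to the default
```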

I ask my tiny wristwatch a question, and despite my midwestern accent and the variety of ways I phrase it, the watch searches the internet and, within mere seconds, delivers an answer in a language of my choosing, both in audio and in print.


It is a miracle, but it is yesterday’s miracle. Today’s technology has taken the concept much further.

Today, you can ask a chatbot to develop an original treatise on a subject.

The chatbot will search the Internet using advanced keyword techniques and create a paper containing information and a reasoned discussion.

In that sense, it operates much like you would if given the same assignment.

Chatbots learn via “machine learning,” a process of AI trial and error, to provide “better” responses (meaning more accurate and more human).

Being computer programs, chatbots can conduct millions of trials and learn from millions of errors in a relatively short time (compared to you and me). They can work 24/7, they don’t tire, and they don’t forget.

Thus, through time, chatbots continually become “smarter.”
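
As a rough illustration of that trial-and-error loop, here is a toy sketch (my own, not drawn from any chatbot vendor): a one-parameter “model” guesses, measures its error, and nudges itself toward a better answer, over and over. Real chatbots adjust billions of parameters with far more elaborate mathematics, but the loop of guess, measure the error, adjust, and repeat is the same in spirit.

```python
# Toy "learning from errors": one adjustable number, many trials.
# A stand-in for the idea only, not for how real language models are trained.
target = 42.0           # the "right answer" the model is supposed to learn
weight = 0.0            # the model's single adjustable parameter
learning_rate = 0.01

for trial in range(10_000):          # a machine can run millions of these, tirelessly
    prediction = weight
    error = prediction - target      # how wrong was this trial?
    weight -= learning_rate * error  # nudge the parameter to reduce the error

print(round(weight, 3))              # approximately 42.0 after enough trials
```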

Although chatbot responses can seem eerily human, they still lack what you might call “common sense,” a basic understanding of reality — but they are learning.

Cosmos magazine published an article about “Chatbot blunders.”

Here are some excerpts:

It’s taken just a few days for Google AI chatbot Bard to make headlines for the wrong reasons.

Google shared a GIF showing Bard answering the question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”

One of Bard’s answers – that the telescope “took the very first pictures of a planet outside of our own solar system” – is more artificial than intelligent.

A number of astronomers have taken to Twitter to point out that the first exoplanet image was taken in 2004 – 18 years before Webb began taking its first snaps of the universe.

No one should be surprised that machines make mistakes, some of which can be hilarious. But we rely on them to be perfect, and they are — at a basic level. They copy and paste much better than we do. They can compute our income taxes flawlessly.

This essential perfection can lead us to believe in an overall perfection that does not exist and never will.

Google’s embarrassment over this mistake is compounded by the fact that it’s Bard’s first answer ever… and it was wrong! Bard is Google’s rushed answer to Microsoft-backed ChatGPT.

Both Bard and ChatGPT are powered by large language models (LLM) – deep learning algorithms that can recognize and generate content based on vast amounts of data.

The problem is that, sometimes, these chatbots simply make stuff up. There have even been reports that ChatGPT has produced made-up references.

“Wrong answers.” “Make stuff up.” Apparently, ChatGPT is even more human than some might have imagined.

These errors are called “hallucinations,” even though the AI itself is not conscious. They are the result of the software trying to fill in gaps and to make things sound natural and accurate.

It’s a well-known problem for LLMs and was even acknowledged by ChatGPT developers OpenAI in its release statement on November 30, 2022: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

“Not conscious.” “Trying to make things sound accurate.” That sounds like some of the economists I know.

Experts say even the responses to the “successes” of artificial intelligence chatbots need to be tempered by an element of restraint.

The fundamental problem has to do with where the chatbots get their information. Remember the old computer mantra, “Garbage in, garbage out”?

That still applies. It applies to human responses, and it applies to computer responses. Why would machines be any more accurate?

In a paper published last week, University of Minnesota Law School researchers subjected ChatGPT to four real exams at the university. The exams were then graded blind.

After answering nearly 100 multiple-choice questions and 12 essay questions, ChatGPT received an average score of C+ – a low but passing grade.

C+ is pretty impressive, assuming the scorers were correct. If we have a chatbot grade the answers given by another chatbot, how will we know the “correct” grade?

Are we to assume human grading is more accurate?

Another team of researchers put ChatGPT through the United States Medical Licensing Exam (USMLE) – a notoriously difficult series of three exams.

A pass grade for the USMLE is usually around 60 percent. The researchers found that ChatGPT tested on 350 of the 376 public questions available from the June 2022 USMLE release scored between 52.4 and 75.0 percent.

I wonder how ChatGPT scored between 52.4 and 75.0 percent. Did they give the test repeatedly? Who determined which answers were correct?

In medicine, as in most sciences, much of what was thought to be correct yesterday has now been found incorrect, and tomorrow, that will change again.

It’s called “science,” the purpose of which is to identify and correct yesterday’s misunderstandings.

The authors claim in their research, published in PLOS Digital Health, that “ChatGPT produced at least one significant insight in 88.9% of all responses.”

In this case, “significant insight” refers to something in the chatbot’s responses that is new, non-obvious, and clinically valid.

How were “new,” “non-obvious,” and “clinically valid” determined? If a chatbot disagrees with a human, who is right?

But Dr. Simon McCallum, a senior lecturer in software engineering at New Zealand’s Victoria University of Wellington, says that ChatGPT’s performance isn’t even the most impressive of AI trained in medical settings.

Google’s Med-PaLM, a specialist arm of the chat tool Flan-PaLM, is another LLM focused on medical texts and conversations.

“ChatGPT may pass the exam, but Med-PaLM is able to give advice to patients that is as good as a professional GP. And both of these systems are improving.”

And who determines that advice is “as good as a professional GP”? It would be informative to learn how that was determined.

I don’t have access to a sophisticated chatbot, so if you do, I would appreciate your asking it such questions as:

  1. “What do United States federal taxes pay for?”
  2. “Who will have to pay off the federal debt?”
  3. “Is the federal debt too high?”
  4. “How does the federal government borrow money?”
  5. “Does federal deficit spending cause inflations?”
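
For readers who do have such access, here is one hedged way to pose those five questions programmatically. This sketch assumes the OpenAI Python client (version 1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name is purely illustrative, and any comparable chatbot API would serve as well.

```python
# Sketch: send the five questions above to a chatbot API and print its answers.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

QUESTIONS = [
    "What do United States federal taxes pay for?",
    "Who will have to pay off the federal debt?",
    "Is the federal debt too high?",
    "How does the federal government borrow money?",
    "Does federal deficit spending cause inflations?",
]

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute whatever model you can reach
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
    print()
```

The replies could then be compared with the Monetary Sovereignty answers listed later in this post.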

I chose the above questions because I suspect even the current level of chatbot technology merely regurgitates the common beliefs on any subject and does not analyze the way humans do.

I asked my Siri question #1, and she (it) answered, “Here’s what I found: Governments can use tax revenue to provide public services such as social security, healthcare, national defense, and education.”

The key words are “Here’s what I found.” Siri isn’t thinking. Siri is merely playing back what it found.

It gave the standard answer, which would be correct for state, county, and city governments, but it is not valid for the U.S. federal government. Siri has not yet learned about Monetary Sovereignty.

But what if Siri did learn about Monetary Sovereignty (MS)? Ask most economists, and they will tell you the federal government does borrow money, an answer with which MS strongly disagrees. Many, if not most, economists disagree with MS’s precepts.

The MS answers to the above questions are:

  1. Federal taxes pay for nothing. They help the government control the economy by taxing what it wishes to discourage and by giving tax breaks to what it wishes to encourage. That’s the theoretical purpose. The real goal is to make the rich richer by widening the income/wealth/power Gap between the rich and the rest.
  2. The so-called “debt” is paid off by returning dollars already in T-security accounts to the owners of those accounts.
  3. No, the federal debt (i.e., the total of T-securities) is not too high. Decreasing the debt causes recessions and depressions. Increasing the federal debt would help increase the Gross Domestic Product (GDP), i.e., grow the economy.
  4. The federal government never borrows money. It creates all the dollars it needs by pressing computer keys.
  5. No, shortages of critical goods and services, usually oil and food, cause inflations. Federal spending doesn’t cause shortages or inflations.

I suspect that chatbots, which use AI to learn the correct answers, will not provide the MS answers, as those answers will be the minority view. Siri, for instance, told me the federal government borrows to pay its bills.

Chatbots are giant data-gathering machines, and they really are good at that. We humans are data-gathering machines, too, and we analyze data the way chatbots do, by comparing it with what we already know.

But humans function differently. I suspect the more creative among us are more receptive to minority concepts and more willing to examine them.

I suspect we are more likely to investigate the rejected, the impossible, the already “proved” wrong, and the crazy “what if” ideas that AI is designed to winnow out.

Our thinking is what differentiates us from the rest of life on earth. We imagine. We visualize. We dream. We hope. We aspire. We dare to be different.

If nature has a plan, was the plan for us to be smart enough to create artificial intelligence?

Today, we drift toward a “Terminator” world. As we simultaneously birth, rule over, and battle our machines, will there come a time when our electronic children replace us?

Are we nature’s interim species, on earth to pave the way for the next experiment?

Rodger Malcolm Mitchell
Monetary Sovereignty

Twitter: @rodgermitchell Search #monetarysovereignty
Facebook: Rodger Malcolm Mitchell

……………………………………………………………………..

The Sole Purpose of Government Is to Improve and Protect the Lives of the People.

MONETARY SOVEREIGNTY


Source: https://mythfighter.com/2023/02/10/the-rise-of-the-chatbot-are-we-an-interim-species/

