Will AI Make Us Even Dumber?

Last week, I sent you an interview I did with ChatGPT.

And it was a hit with readers.

This was an awesome conversation to read, Chris! What a great chance for us readers to finally get a glimpse of ChatGPT after hearing so much about it. Can’t wait to read more articles with you and ChatGPT’s thoughts on the economy, tech, or crypto…

– Carlos S.

ChatGPT is the artificial intelligence (“AI”) chatbot everyone is talking about.

It was trained on a huge swath of text from across the internet. And it can answer almost any question you have, in an eerily human-like way, based on what it’s read.

And it’s just one of the advances in AI that’s making the news.

AIs are performing surgeries, flying F-16 fighter jets, and discovering lifesaving drugs.

It’s hard to think of an area of our lives AI won’t radically transform.

Here’s Jeff Brown, our resident tech expert at Legacy Research…

AI has become some of the most cutting-edge tech of our modern age. AIs can perceive their environments and act to maximize their chances of achieving certain goals. They can learn, in other words.

We use AI in healthcare, cybersecurity, e-commerce, advertising, agriculture, education, finance, and more. There’s almost no area of society or industry AIs will leave untouched.

Jeff is worth listening to on this. He nailed the rise of generative AIs like ChatGPT in these pages nearly four years ago, long before the technology blew up in the mainstream press. As he wrote in April 2019…

Imagine a world where we can use a chat window on our phone, laptop, or desktop to talk to a seemingly real person at the other end of the line – but with a near-limitless ability to help answer questions and assist with daily tasks.

I, for one, can’t wait for this. We’re so close…

All the tasks that consume so much of our time – making an appointment, disputing a health insurance claim, finding a receipt, replying to an email, ordering groceries, looking up a schedule – will all be at our fingertips with the help of a personalized digital assistant: an AI.

But not everyone is excited about the rise of AI. Some of Jeff’s readers are downright skeptical…

Reader comment: Hello Mr. Jeff Brown, I read your articles about artificial intelligence. I think you are too optimistic sometimes. I think you should also point out the bad sides of AI.

– Jonaem M.

Jeff’s response: Hi, Jonaem. I’m an optimist. Time and again, technology has proven to be the greatest tool we have to solve the world’s biggest problems. In that sense, I think of myself as a rational optimist.

But one of the greatest challenges the world faces over the next decade will be coping with the profound changes brought about by advances in technology and biotechnology.

I often share a chart of linear versus exponential growth. It’s a great illustration of what I’m talking about.

Humans are pretty good at predicting – and adapting to – linear change. Small, incremental changes give us time to adapt slowly and naturally. But that’s not how technology advances.

It advances at an exponential rate. Better tech builds better tech. So we get this compounding growth curve (blue line on chart) that suddenly heads skyward.
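
To make the difference concrete, here’s a quick sketch in Python. The numbers are purely illustrative – they aren’t taken from the chart – but they show how a same-sized step each year compares with a doubling each year:

# Illustrative only: linear growth adds a fixed amount each step,
# while compounding (exponential) growth multiplies by a fixed rate.
linear = 1.0
compounding = 1.0
for year in range(1, 11):
    linear += 1.0        # same-sized step every year
    compounding *= 2.0   # doubles every year: better tech builds better tech
    print(f"Year {year:2d}: linear = {linear:4.0f}, compounding = {compounding:6.0f}")

After 10 years, the linear line has only reached 11, while the compounding curve has already hit 1,024 – the hockey stick that suddenly heads skyward.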

That’s what’s happening now in just about every area of technology. This disconnect between our linear thinking and the exponential growth trend will unlock tremendous wealth. It will also make our lives easier and more productive.

But here’s the thing… The degree of change will be difficult to adjust to. It will all feel like it’s happening too fast. For example, Elon Musk’s Neuralink could augment humans with advanced AI capabilities.

I share your concerns. AI is now more widely available than at any time in history. Some of the most advanced algorithms are open source, free for anyone to use.

The same is true in biotechnology. It’s now remarkably inexpensive to genetically edit viruses, and the risk of bioterrorism has never been higher. So, as exciting as this technology is, we must be very careful with it.

Another concern about AI is its role in the dumbing down of society.

Many of us are already losing the ability to read and comprehend long-form content like a book or a newsletter. That’s at least part of the reason why we’ve become so reactive. We can no longer slow down our thinking, absorb and question new information, and view it in a calm, rational way.

Too many of us simply react to something we see without asking ourselves where the information came from. Is it true? Is it biased? What research or data set was the conclusion based on?

ChatGPT and the many other AI systems that will follow it this year are the “easy button.” Like ordering food from Uber Eats, it’s just a click of a button… and done.

For that reason alone, we’ll see mass adoption of this technology. And that will make many of us lazy. And intellectual laziness is not a recipe for a peaceful and decent society.

As I mentioned above, ChatGPT and other generative AIs learn by reading vast amounts of online data. And one of Jeff’s readers wants to know if this risks creating biased AIs…

Reader question: On what kind of data has the AI been trained? Biased news articles, pundit opinions, and troll comments, which are heavily laced with logical fallacies?

How much of that training data involves a detailed analysis of the procedures and statistics in a proper scientific study? Or a literary critique that’s not simply click-bait?

BTW, is [Jeff’s e-letter] The Bleeding Edge ever written by an AI?

– Joe B.

Jeff’s response: Thanks, Joe. An AI has never written an issue of The Bleeding Edge. But I have thought about training an AI on every issue. It could mimic my writing style.

AIs can have deep biases. And yes, those biases depend on the data set the AI is trained on. Training an AI such as ChatGPT on a body of biased information will produce biased results.

Worse, these AIs ascribe confidence to answers based on how often they see the same thing on the internet.

So, we’re going to have to keep a critical eye on this and other AI implementations. We can’t automatically trust their output.

Finally for today, concerned parents want to know what advanced AI means for their son’s dream of being a pilot…

Reader question: Hi Jeff, We really enjoy reading your daily newsletter and we’d like to get your thoughts on something. One of our sons wants to be a pilot but is worried that eventually AI will completely take over that occupation and his career might come to an end long before he is ready to retire. What say you?

– K&J

Jeff’s response: Thanks for writing in, K&J. This question is dear to my heart.

I got my B.S. in Aeronautical and Astronautical Engineering from Purdue University. And my first job was in the aerospace industry. I worked as a contractor at Boeing in Everett, Washington, on the 777 program.

Your son is smart to be asking this question.

The answer, I’m afraid, is yes. AIs will replace human pilots. But within what timeframe?

My best guess is that for the next decade or so, there will still be humans in the cockpit. They won’t be doing a whole lot, though, unless they’re bush pilots.

Airline pilots already do little hands-on-the-yoke flying. On a typical commercial flight, the pilot flies in a traditional sense for a few minutes on landing and take-off. The rest of the time, the plane is on autopilot.

As commercial airplanes expand the use of autonomous technology, pilots will largely be in the cockpit as backups.

If your son has a desire to fly, I’d encourage it. There should be 10 to 20 years of work ahead for him.

That’s all we have for today’s mailbag.

Do you have questions, fears, or concerns about AI? Are you interested in learning how to profit from this megatrend?

Write us at feedback@legacyresearch.com. We read every email you send in. And we love hearing from you and your fellow readers.

Have a great weekend.

Regards,

Chris Lowe
Editor, The Daily Cut