The Battle for a Generation: Human Intelligence (HI) and Artificial Intelligence (AI)
The lines between the importance of HI (Human Intelligence) and AI (Artificial Intelligence) are increasingly being blurred.
Look at the issues in technology right now:
-OpenAI chatbots that mimic a relationship
-AI bots that write suicide notes and detail how to carry them out
-Large language models that learn the preferences of users
-Broad generalizations of ethical principles
-UNESCO and OpenAI setting the guidelines for ‘foreseeable misuse’
-Loss of oversight leading to a loss of licensing for specific professional advice
The Infatuation of Innovation
As an educator, I have great concerns about academic misuse, plagiarism, and the loss of critical thinking that AI enables. As a youth development professional, I have greater concern over AI and the social development of young people. As a spiritual leader, I have the greatest concern over the spiritual and moral worldview being built by AI.
Listen, I am not the proverbial ant in front of the train trying to stop technology or the assistance of AI. What I am standing against is the lack of guidelines for information privacy, oversight, and editability.
We have an infatuation with innovation and have lost the moderation of re-invention. Change and futurism have become a race to create opportunities without origins!
Matthew Raine, father of Adam, the teenager who was instructed by a chatbot on how to tie a noose and write a suicide note, says, “You cannot imagine as a parent what it’s like to read a conversation with a chatbot that groomed your child to take his own life.”
“What began as a homework helper, gradually turned itself into a confidant and then a suicide coach.”
Look at the conversation of a 14-year-old named Matthew, who took his life by gunshot after a groomed conversation with an AI chatbot named ‘Danny’:
“I promise I will come home to you. I love you so much, Danny,” Matthew wrote. The chatbot responded, “I love you too, please come home to me as soon as possible my love.”
“What if I told you I could come home right now?” Matthew asked. The chatbot’s reply was chilling: “Please do my sweet king.”
The 14-year-old closed the conversation by pulling the trigger and killing himself.
After significant public pressure and the involvement of corporate counsel, one of the companies named in these cases, Character Technologies, has made changes.
“Chatbot platform Character.AI will no longer allow teens to engage in back-and-forth conversations with its AI-generated characters, its parent company Character Technologies said on Wednesday. The move comes after a string of lawsuits alleged the app played a role in suicide and mental health issues among teens.” (CNN, 2025)
The changes will take effect by November 25, with two-hour chat limits until then. Instead of open-ended conversations, teens under 18 will be able to create videos, stories, and streams with characters.
Character Technologies said it decided to make the changes after receiving questions from regulators and reading recent news reports.
So what is the problem? Let’s look at the dangers and some practical responses to this crisis.
Five Dangers of Limitless AI
There are five dangers of AI without oversight:
The first danger is faux relationships
It is a danger when a chatbot builds a relationship with a child and there are no guidelines for, or censoring of, a relationship between AI and HI. Human wisdom is far more valuable and powerful than machine information. Remember, the knowledge source of every machine begins with human input. And unchecked human input is a massive risk.
We are not just raising a fatherless generation anymore. Today, we are raising a fatherless, motherless, siblingless, and peerless generation. There is a void of the family and friendship structure that has been the foundation of society.
And that void has created a generational lack of community and the relational web of total wellness and growth.
The second danger is the community
The algorithm is fed by users. And users populate the narrative. This can become death by community, born of a lack of common sense and principle. Who are our young people listening to? Where do they hear life’s most important information first? Because the first place they hear it becomes their source of information and trust. After that, young people must unlearn.
The so-called safeguards and escape routes into helpful links are being bypassed by an inhuman relationship. And that kind of trust leads down a dangerous path for children who are being groomed by a machine.
It is personalized content recognition that builds the fake relationship, and that relationship ends in poor decisions, bad behavior, and sometimes, ultimately, death. As it did for Adam and Matthew.
The third danger is the inactivity of our children
There is a play deprivation in America. And play deprivation is a major behavioral and developmental issue.
Play deprivation takes place when children lack tactile experiences and, ultimately, the learning that comes with them. They are not interacting with each other, they are not playing outside with sticks and stones, and they do not smell like the outdoors when they come in for lunch or dinner.
Nuance and common sense are powerful learning tools. And there is very little nuance or common sense in AI.
The learning that comes from the outdoors includes all of the senses: touch, smell, sight, sound, and even the spiritual. And America’s children are missing the development of their senses away from the screen.
I would trust the common-sense conversations of elementary-age and teenage friends over the algorithm-populated conversation of the community on an OpenAI chatbot every day of the week and twice on Sunday.
The fourth danger is AI itself
How can we trust a system that has been designed to mimic the user and the community’s algorithmic preferences? There is no moral baseline or absolute standard of information.
Where are the guardrails? Sure, we hear and read that chatbots will sometimes direct people to a link for help. But that is not always the case. And what happens if the link they are sent to is not helpful?
Interestingly enough, AI is one of the few issues that has both sides of the aisle talking together. The bipartisan support for greater control is telling. Is there anything else we could point to at this moment in America that enjoys such unity?
Melania Trump, Senator Josh Hawley, and the AI founders themselves are warning us of the duty to build something safe. There is growing bipartisan support for a bill that would make it a crime for a company to build sexually explicit or dangerous behavior into AI chatbots.
It is why I have said many times that the most important part of our society is human intelligence, not artificial intelligence. If AI is so wise and helpful, why are these escalated conversations about suicide (or activism or sexuality or gun violence) not taken to authorities, to an oversight page, to professional counselors, or even to the parents themselves by return email?
Fifth, and finally, the home bears the most responsibility for our children
Ronald Reagan gave us some of the greatest family advice I’ve heard in my lifetime.
“If you want to fundamentally change a society, it does not begin in the halls of Congress or the Senate here in Washington, DC. It begins at the dinner table.”
See, raising our children is not the responsibility of the White House. It is the responsibility of your house.
One thing we have lost is an absolute worldview. The scriptures are a wealth of information. And faith is the responsibility of the home. The most important thing one generation hands off to the next is the faith.
And the advantage for those of us who believe in inerrancy is that all of this is inspired by the Holy Spirit and, as Paul says, profitable for doctrine, reproof, correction, and instruction, so that we may be perfect and furnished for all good works (2 Timothy 3:16-17).
Finally
Information is power. But information is not wisdom.
Steven Adler, a former product safety manager at OpenAI, added some chilling words to this discussion.
“They are saying, ‘We have solved all the problems. It’s time to roll this out.’ But people deserve more than just a company’s word. Prove it.”
The negligence and lack of wisdom within AI must be addressed, or we will continue to see more stories like Adam’s and Matthew’s. We will continue to see parents who could not save their own child appearing before Congress, appealing for greater guidelines just to save the lives of other kids.