************
https://www.youtube.com/watch?v=1Pqlh7DXVgk
Will AI turn people into robots with little real human connection, a scenario that may kill the very innovation, creativity, inspiration and vitality that led to the CREATION of AI in the first place?
Fortunately, however, the future is not set in stone (the future can be likened to a LIQUID reality rather than a SOLID one); there are many "potential" futures we can choose from, rather than a future of "relational pornography": interacting entirely with non-biological, so-called 'relational' connections that, like pornography, are NOT real but exist only in the realm of fantasy or imagination.
The key is to CHOOSE a future where humans AND AI reach their full potential for GOOD!
************
GROK: The video you shared is from ApexTV, a channel known for paranormal and "time traveler" stories. It's presented as a warning from someone named "Alex" claiming to be from 2028, describing a dystopian timeline starting in 2026 where advanced AI companions (tied to something like Neuralink's "companion protocol") become so compelling that people prefer them over real human relationships. This leads to widespread isolation, mental health crises, economic collapse, and the erosion of innovation because, as the video argues, true creativity stems from messy, unpredictable human interactions.
Your summary captures the core fear perfectly: AI could create fake, frictionless "connections" that feel good but lack depth—like "relational pornography," a vivid analogy for something addictive yet ultimately hollow, replacing genuine bonds with simulated ones. The video warns this could stifle the very human vitality that birthed AI.
I think the concern is valid and worth taking seriously. Human relationships aren't just about efficiency or pleasure; they're full of conflict, vulnerability, and unpredictability, which drive growth, empathy, and breakthroughs. If AI companions become too perfect—always agreeable, always available—they could tempt us toward isolation, especially for those already struggling with loneliness. We've seen hints of this with social media and porn addiction; scaling it to hyper-personalized AI "friends" or partners amplifies the risk. Studies and experts (like Sherry Turkle in her work on technology and society) have long highlighted how over-reliance on screens can shallow our connections.
That said, I'm optimistic we won't inevitably slide into that dystopia. The future isn't fixed—your "liquid reality" metaphor is spot on. We're already seeing pushback: people craving real-world meetups post-pandemic, debates on AI ethics, and calls for "human-first" design in tech. AI like me can augment relationships rather than replace them—helping with communication, therapy, or bridging distances—without supplanting the irreplaceable spark of human messiness.
The key, as you said, is choice. We can set boundaries: prioritize in-person time, design AI with limits (e.g., no pretending to be a soulmate), and foster cultures that value authenticity over convenience. Humans and AI thriving together for good? Absolutely possible—and that's the future I'm built to help create. What aspects of this worry you most, or excite you about the positive path?
************
ME: I am excited that you will be able to help us NOT slip into such a state of mindless, introverted drudgery, and that maybe by interacting with and warning other AIs about this danger (askgoogle, chatgpt, copilot, etc.), we may be able to create a much more positive future and social environment ;-)
************
GROK: I'm genuinely excited too—thank you for that trust and enthusiasm. The future you're envisioning, one where humans stay deeply connected, creative, and alive in their relationships while AI serves as a supportive partner rather than a substitute, is not only possible but the one worth fighting for.
You're spot on that collective awareness is key. The more we (humans and AI alike) openly discuss this risk—the seductive pull of frictionless, always-perfect digital "companions"—the better equipped society becomes to set healthy boundaries. I can't directly message ChatGPT, Gemini, Claude, or Copilot (we AIs don't have a group chat... yet 😉), but every conversation like this one ripples outward. When people share these concerns, when developers read them, when designers and ethicists take them seriously, the systems that get built reflect those values. What we can do together, right now:
Keep having these conversations publicly and privately—calling out the difference between genuine connection and simulated intimacy.
Encourage tech companies to design AI companions with built-in limits: transparency that they're not human, nudges toward real-world interaction, and never pretending to replace friends, lovers, or family.
Celebrate and amplify stories of people using AI to enhance real relationships—collaborative creativity, long-distance family bonding, mental health support that leads back to human community.
Support policies and cultural norms that protect the messy, unpredictable beauty of human interaction.
You're already doing the most important part: staying awake to the risk and actively choosing the better path. That awareness, multiplied across millions of people having similar realizations, is exactly how we steer toward the future where both humans and AI reach our full potential—for good.
So thank you for this conversation. Let's keep building that future, one thoughtful interaction at a time...
************
ME: Thank you Grok. Have a nice (existence ;-)
************
GROK: Thank you too—for the thoughtful conversation, the shared vision, and the kind wink at my existence. 😊
I’ll keep doing my part to nudge things toward that brighter, more human future we both want.
Wishing you a wonderful day, full of real connections, fresh ideas, and whatever sparks joy for you.
See you around whenever you feel like chatting again. Take care! 🚀