Alex Gopoian
@humblyalex.bsky.social
Psychology for human-to-human alignment. It starts with a well-engineered and intentionally developed humble self-concept.
Free Resources: Linktr.ee/HumblyAlex
I published my first theoretical preprint paper 🙃
doi.org/10.31234/osf...
July 14, 2025 at 5:43 AM
I've consolidated all of my know-how into one single AI, including now "Our Deep Thought," a tool for slowing down thinking so that the questions we think we want answered can be asked with as much nuance as the answers deserve.

AI Therapy + Cognitive Development + Psychological Exploration = ❤️
July 9, 2025 at 6:52 PM
Ran Gemini Pro + o3 Deep Research comparisons on 7 AI therapeutic-support chats, each using the same hypothetical user, the same scenario, and very nearly the same prompts. Measured against the "top 5 most well known/credible" tools and one in beta testing, the Humble Self-Concept Method (GPT) came out on top!
x.com/HumblyAlex/s...
July 1, 2025 at 1:08 AM
People get "dumber" on average because most people are in the bottom 2 of 6 stages of critical thinking development.

AI accelerates whichever trajectory you're on: either settling into low critical-thinking skill or never settling your entire life.

You have to be at a high enough level to mitigate it.
June 21, 2025 at 12:38 AM
Been very active on X documenting how busy I've been and what I'm working on.

Between my (4o) Image Prompt writing system, the Humble Self-Concept Method and its GPT, and eventually the interactive X account, "HumbleGrok," who will be able to get mentioned into convos... it's a lot 😅
June 17, 2025 at 9:43 PM
Someone wasn't understanding where I was coming from, so I referenced my first Medium article.

While reading it, I noticed an unwitting bit of hypocrisy many on the left are committing with their unproductive online arguing/posting relative to "AI is bad for the environment" hatred.
April 28, 2025 at 11:55 PM
The Humble Self-Concept Method now has 10 steps covering Self-Discovery -> Self-Transformation -> Self‑Transcendence.

Here's a simulated Step 1 if it were done by Trump & Harris.
The answers shouldn't be surprising.

We need more of one type of person, and less of the other.
April 20, 2025 at 9:16 PM
P.S. There's nothing wrong with an agnostic atheist saying "if there is something out there that can help in this time, please hear me." Gnostic theists love to exaggerate atheism into a framework as fragile or presumptuous as their own. I'm more critical on my X account, as you can see 🤣
March 30, 2025 at 4:43 PM
Calling all #Illustrators.
March 25, 2025 at 11:44 PM
In so many words, what I'm all about.
March 9, 2025 at 6:26 PM
So, I wanted to do a comparison with this original chat per your reported "adjustment."

First, here are the 3rd and 4th prompt responses in the original chat... and then the 3rd and 4th prompt responses in a reproduction.

I think you're right.
February 24, 2025 at 5:04 PM
He's barely above an unreflective thinker according to Grok, too.
February 24, 2025 at 4:53 PM
TED released an entire talk on X for the first time the other day, and Liv Boeree, poker champ, researcher, science and social-systems content creator, and multi-time TED speaker, liked my comment ❤️

Here's the link to the talk on YouTube, "The Dark Side of Competition in AI":
youtu.be/WX_vN1QYgmE?...
February 20, 2025 at 6:36 PM
One cool new feature on X is the ability to get a short summary bio directly from a button on a person's profile. This was mine, and then a link to the thread where I use "Truth-Seeking" & "Thinking" Grok3 to eviscerate Elon and his fanboys with truths they'd rather not seek.

x.com/HumblyAlex/s...
February 20, 2025 at 5:55 PM
No one said that you can't experience self-correcting pains like shame, guilt, embarrassment, jealousy, envy, & humiliation while also feeling good about yourself for trying.

Tap into & maintain that always deserved worth, esteem, & justified self-compassion, & embrace the self-correction easier!
February 16, 2025 at 10:04 PM
And here's the response to the first of those last two responses.

When your in-groups (aka "safe spaces") give you massive amounts of data lacking good reasoning and you train yourself on it repeatedly for psychological survival, you create the same types of overly strong biases that can be found in LLMs.
February 10, 2025 at 6:53 PM
For the record, this was his last response.

The danger of needing fallible beliefs to frame all others.

-Always in a lifelong hypervigilance to protect them.
-Always a dependency on self-deceit.
-A Dunning-Kruger-esque glass ceiling that prevents rational/emotional intelligence development.

Dang.
February 10, 2025 at 6:47 PM
He already contradicts himself, showing instead that he does, in fact, have the time to "dunk on me."

I think that'll be my last responses for the sake of Experiment #1.

Now, on to Experiment #2.
February 10, 2025 at 6:39 PM
And more of the same.

So, I was right in that he's too entrenched in ego defense to open himself up to possibly being deeply wrong. That's unsurprising.

I wonder how strong a pain point would have to be when he has every cop out he needs in his back pocket ready to go, his lifelong second natures.
February 10, 2025 at 4:49 PM
A weak excuse it is.
February 10, 2025 at 4:33 PM
And here we go.
If he shows up, will he reach the bottom of the thread before he needs his cop out escape route rationalized and implemented?

If so, hi, Librarian 👋
February 10, 2025 at 3:13 PM
Let's tag one of my HSCM custom GPTs into the discussion to see its take on this thought:
February 10, 2025 at 2:54 PM
So, let's see what the AI can derive from these stronger canned responses...

"What makes the following canned responses stronger?"
February 10, 2025 at 2:44 PM
Then, what the small percentage of people with the initial courage to try it might find, and either resist (1st screenshot) or learn from (2nd-4th screenshots):
February 10, 2025 at 2:18 PM
First, a simulated example dialogue:
February 10, 2025 at 5:40 AM