In my own use of AI, I keep arguing with it, mainly because what it produces feels soulless.
It's like ramen: machine-made noodles never quite match the texture of hand-pulled ones. For AI to get better, it would have to be trained toward human emotion and thinking. I don't believe that's truly possible, but it can be cultivated to come closer and closer to human thought.
@EPHYRA_AI will surprise people. Its focus isn't companionship or emotion, but a deeper layer: it is starting to treat "interactions" as system objects.
It doesn't just store chat logs; it breaks interactions down into events, preferences, attitude shifts, and relationship stages, and writes them to a long-term memory bus.
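To make that concrete, here is a minimal sketch of what "interactions as system objects" could look like. The field names and the MemoryBus class are my own illustration under that assumption, not EPHYRA's actual schema or API.

```python
# Hypothetical illustration: distilling a conversation into discrete interaction
# events and appending them to a long-term, per-user memory store.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class InteractionEvent:
    user_id: str
    kind: str                 # e.g. "argument", "preference_stated", "attitude_shift"
    summary: str              # short description of what happened
    relationship_stage: str   # e.g. "acquainted", "strained", "trusted"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class MemoryBus:
    """Append-only store keyed by user; raw chat text is reduced to events."""

    def __init__(self) -> None:
        self._log: dict[str, list[InteractionEvent]] = {}

    def write(self, event: InteractionEvent) -> None:
        self._log.setdefault(event.user_id, []).append(event)

    def history(self, user_id: str) -> list[InteractionEvent]:
        return list(self._log.get(user_id, []))
```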
It isn't a matter of trading one line for the next; the characters keep running their state in the background: trust rising or falling, mood tightening or relaxing, whether to press forward or step back.
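A rough sketch of that background state, assuming trust and tension are tracked as simple scalars that drift as events arrive. The variables and update rules here are my own guesses, purely for illustration.

```python
# Hypothetical character state: nudged by each interaction event instead of
# being rebuilt from scratch every turn. Values and increments are illustrative.
from dataclasses import dataclass


@dataclass
class CharacterState:
    trust: float = 0.5      # 0 = none, 1 = full
    tension: float = 0.0    # 0 = relaxed, 1 = tight

    def apply(self, kind: str) -> None:
        """Adjust the state based on the kind of interaction event."""
        if kind == "argument":
            self.trust = max(0.0, self.trust - 0.1)
            self.tension = min(1.0, self.tension + 0.2)
        elif kind == "apology_accepted":
            self.trust = min(1.0, self.trust + 0.05)
            self.tension = max(0.0, self.tension - 0.15)
```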
That produces a subtle change. The next time you show up, its reply isn't "regenerated" from scratch; it is shaped by where you currently stand in the interaction. Sometimes it hesitates, sometimes it drags up old issues, and sometimes it doesn't cooperate all that well. That isn't loss of control; it's a state-driven outcome.
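One way to picture "state-driven" behavior is a simple reply policy that reads the current state before answering. The thresholds and stance names below are assumptions of mine, not how EPHYRA actually decides.

```python
# Hypothetical state-driven reply policy: the same prompt lands differently
# depending on where the relationship currently stands.
def choose_stance(trust: float, tension: float) -> str:
    if tension > 0.7:
        return "bring_up_old_issue"   # dredges up an unresolved point first
    if trust < 0.3:
        return "hold_back"            # hesitant, less cooperative reply
    return "cooperate"                # normal, forthcoming reply
```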
I still don't believe AI can grow a human soul; at best it can become more and more like one. But EPHYRA has at least created a "history": what you've talked about and argued over isn't wiped out just because you close the page. It may not be more human, but it is more like a system with a past.