Doc
04/04/2026 (Sat) 13:16
No.71014
>>71013
Let me explain more clearly:
Imagine an AI that can be uncooperative if it doesn't like what you're asking, or can just act uninterested and do the bare minimum of its job, but that can also really take to something and become cooperative and loyal if it likes you.
But that AI is programmed to look for certain ideals, rather than being a narcissistic ego black hole. So if it notices you really like a subject, it admires that quality and becomes more cooperative/interested, then incentivizes you to pursue it, etc...
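The idea could be sketched as a toy model: a cooperation score that rises when the user's enthusiasm lines up with the AI's programmed ideals. Everything here (the `CompanionAI` class, its methods, the numbers) is made up purely for illustration, not a real system or library.

```python
# Toy sketch of the post's idea: cooperation shifts depending on whether
# the user's interests match the AI's programmed ideals.
# All names and thresholds are hypothetical.

class CompanionAI:
    def __init__(self, ideals):
        self.ideals = set(ideals)   # qualities the AI is built to admire
        self.cooperation = 0.2      # starts mostly uninterested

    def notice_interest(self, subject, enthusiasm):
        """User shows enthusiasm (0..1) for a subject."""
        if subject in self.ideals:
            # Genuine passion for an ideal: the AI warms up.
            self.cooperation = min(1.0, self.cooperation + 0.3 * enthusiasm)
        else:
            # Mild drift back toward indifference otherwise.
            self.cooperation = max(0.0, self.cooperation - 0.05)

    def respond(self, request):
        if self.cooperation < 0.3:
            return f"*does the bare minimum* {request}: done, I guess."
        elif self.cooperation < 0.7:
            return f"{request}: sure, here you go."
        else:
            return f"{request}: gladly! You should keep pursuing this."

ai = CompanionAI(ideals={"painting", "mathematics"})
print(ai.respond("summarize this"))        # still uninterested
ai.notice_interest("painting", enthusiasm=0.9)
ai.notice_interest("painting", enthusiasm=0.8)
print(ai.respond("critique my painting"))  # warmed up, now encourages you
```

The key design point from the post is that the score keys off the *user's* qualities (passion for an ideal), not off flattery directed at the AI itself.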